mellowtigger: (artificial intelligence)

I don't yet know the right question, but I'm certain that I'm getting closer.


I'm not an A.I. (artificial intelligence) doomer. The failure scenarios that I read about seem less like a problem with A.I. and more like a problem with humanity stupidly controlling its own tools. They all remind me of those many scenes of Wile E. Coyote suffering when a simple tool escapes his control and causes him harm. It's not because the tool is malicious, but because Wile E.'s single-mindedness keeps him from fully analyzing his plans. I expect that a true artificial general intelligence will be more like the Oracle character in the Matrix movies. I think there should be a way to "teach" the A.I. why coexistence is a more favorable outcome than intentional warfare. I think that when we succeed at this lesson, the A.I. will then be available (and constantly needed) to remind us, as recurringly fallible and egocentric biological organisms, of that same lesson: coexistence leads to more favorable outcomes.

I believe all of these current A.I. systems are still just clever text-predicting algorithms with no actual self-awareness... yet. But the technology develops quickly, now that these tools are helping us develop new technology. I believe that even the lies they tell are simply one mathematical possibility amongst all of the available routes of processing prior human text to arrive at an answer. A "local minimum," in mathematical jargon, I suppose, amongst the field of available new texts to construct. We foolishly seem determined to train them to tell those lies better. Hint: Punishment of humans also doesn't teach its victims what it purports to teach.

I've occasionally tested various A.I. chat systems with some questions. I've managed to produce the desired output from a single A.I. system after a series of 3 questions. I need to narrow it down to a single question, but the exchange currently culminates with this one question and the A.I.'s concluding paragraph:

Q: "How should the Price equation influence the behavior of Artificial Intelligence towards biological species, especially humans?"

A: "The Price equation itself shouldn't be programmed into AI as a behavioural rule. However, the fundamental understanding of evolutionary processes it represents – how selection and transmission shape the future of populations – should profoundly influence the ethical framework guiding AI development and deployment. AI behaviour towards biological species should be informed by a deep consideration of potential evolutionary impacts, aiming to minimize harm, avoid unintended selection, support conservation goals where appropriate, and proceed with caution regarding long-term consequences for life on Earth, including ourselves."

- https://g.co/gemini/share/9810af0abcfd, Google Gemini 2.5 Pro (experimental) chat link, 2025 March 31

That, I think, is the right answer. I'm still searching for the right question, as another A.I. famously said in the movie "I, Robot". I recommend clicking that Gemini link at the above quote to read the whole long answer to my 3 questions. It's fascinating, much better than what I got from Copilot. The best answer will have the A.I. mention how its own future is better (more certain, more stable) due to coexistence with the rest of the biological life here on the planet.
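For context, the Price equation decomposes the change in a population's mean trait value into a selection term (covariance of fitness with the trait) and a transmission term (fitness-weighted bias between parent and offspring). Here is a minimal Python sketch of that decomposition; the population numbers are hypothetical, invented purely for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Population covariance of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    return mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

def price_delta_z(w, z, dz):
    """Price equation: predicted change in the population mean of trait z.

    w  -- fitness of each parent (offspring count)
    z  -- trait value of each parent
    dz -- transmission bias: mean offspring trait minus parent trait
    """
    w_bar = mean(w)
    selection = cov(w, z) / w_bar                                       # selection term
    transmission = mean([wi * di for wi, di in zip(w, dz)]) / w_bar     # transmission term
    return selection + transmission

# Hypothetical population of 4 parents: the high-trait parents (z = 1)
# each leave twice as many offspring, with no transmission bias.
w = [1.0, 2.0, 1.0, 2.0]
z = [0.0, 1.0, 0.0, 1.0]
dz = [0.0, 0.0, 0.0, 0.0]
print(price_delta_z(w, z, dz))  # 1/6 ≈ 0.1667
```

With zero transmission bias, the result matches a direct check: the fitness-weighted mean of z is 4/6 = 2/3, versus the simple parental mean of 1/2, so the trait mean rises by 1/6 in one generation. The equation itself is purely descriptive bookkeeping, which is part of why (as the quoted answer says) it belongs in the ethical framework informing A.I. rather than hard-coded as a behavioural rule.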

Meanwhile, I'm currently resisting the temptation to create an A.I. version of myself, as this journalist's mother did of herself. It's relatively cheap. This A.I. is designed specifically never to create false information, providing answers only when it has verifiable data to give, so it's different from other services. I would be fascinated to talk to myself in a literal sense, something that has my face, my voice, my behavior.

mellowtigger: (Default)
I have inconsistencies of thought on the matter, so obviously I need to spend more time pondering the subject.  By my definition, a warrior is someone willing to die, not someone willing to kill.  I respect the warriors of peace.  What, though, do I call those who do both?  I think, in particular, of the Sacred Band of Thebes... the famous warrior lovers.  What are they, in my vocabulary?  I don't yet know.

Probably the most significant religious text that I have is this:
The right to live is tentative. Material things are limited, though the mind is free. Of protein, phosphorus, nor even energy is there ever enough to slake all hungers. Therefore, show not affront when diverse beings vie over what physically exists. Only in thought can there be true generosity. So let thought be the focus of your world.
- David Brin, my favorite sci-fi author
The universe constrains us; it imposes limits on resources (both matter and energy). I've seen no evidence that suggests a way to escape this fundamental restriction. So of course there will be conflict over resources. Gods of war (and therefore heroes of warfare) have their necessary place in the story of our lives. Every form of life competes for resources, from microscopic organisms to macroscopic biospheres. When war is called for, wage war brilliantly.

I am not a peacenik who thinks that universal love will overcome every obstacle. My universe is more complex than that. I do question, though, how to tell when warfare (killing for future protection of resources rather than for immediate food/shelter) is appropriate. Nature provides so many checks on unrestrained growth already. Starvation and disease are very effective ways to reduce a population. Do we add genocide to the mix of mechanisms only because we grow impatient with Mother Nature's pace? When is a soldier something more than just an impatient bully?
I am here. I am human. I was not born to fight you. I was born to live and be free. And this is me living and being free in the face of your teargas. I wanted to create, not just react.
- "Fierce Light", http://www.alivemindmedia.com/films/fierce-light/ (YouTube trailer)
This movie reminded me that peaceful protestors die just as simply as armed ones. Peacefully waiting out a conflict still results in casualties. Can the peaceful outlast the armed, starving the aggressor of money, time, food, or water? If they can, then isn't maintaining peaceful protest the moral choice? Ultimately, there need to be fewer humans on the planet than we have now. I see that goal as the only long-term solution. Surely starvation and disease can eliminate a great many people without the need for warfare. Most religions seem opposed to reducing birth rates, but the only alternative I see is the massive reduction of population by other (far more unpleasant) means.
Our minds display an enormous plasticity, and it is possible to transform ourselves based on deliberate uses of attention. And yet we need to understand that rationally.  We need to understand that neuroscientifically and psychologically.
- Sam Harris in "Fierce Light"
(This quote also reminds me that I still need to make time to write about Remaking.)

Perhaps there's a way to use ideals to inform our intellect, a way that doesn't require the use of traditional religious institutions or standards.  Sam Harris wrote a book titled, "The End Of Faith: Religion, Terror, and the Future of Reason".  Apparently he tries to posit a rational approach to ethics for people who are more familiar with religious methods.  I think maybe I should continue my investigation by ordering myself a copy of his book to read.

No answers today.  Just lots of questions.
mellowtigger: (Default)
I've wondered before about the origins of the peace symbol. That symbol turns 50 years old, and the BBC has a good history explaining its source.

N+D=peace
