asking the right question
2025-Apr-01, Tuesday 07:29 pm
I don't yet know the right question, but I'm certain that I'm getting closer.
Some distracting thoughts about potential negative impacts of artificial intelligence follow.
I'm not an A.I. (artificial intelligence) doomer. The failure scenarios that I read about seem less like a problem with A.I. and more like a problem with humanity stupidly controlling its own tools. They all remind me of the many scenes of Wile E. Coyote suffering from a simple tool that escapes his control and causes him harm: not because the tool is malicious, but because Wile E.'s single-mindedness keeps him from fully analyzing his plans. I expect that a true artificial general intelligence will be more like the Oracle character in the Matrix movies. I think there should be a way to "teach" an A.I. why coexistence is a more favorable outcome than intentional warfare. And I think that once we succeed at this lesson, the A.I. will be available (and constantly needed) to remind us, as recurringly fallible and egocentric biological organisms, of that same lesson: coexistence leads to more favorable outcomes.
I believe all of these current A.I. systems are still just clever text-predicting algorithms with no actual self-awareness... yet. But the technology develops quickly, now that these tools are helping us develop new technology. I believe that even the lies they tell are simply one mathematical possibility amongst all of the available routes through all prior human text to an answer: a "local minimum," in mathematical jargon, I suppose, within the field of available new texts to construct. We foolishly seem determined to train them to tell those lies better. Hint: punishment of humans doesn't teach its victims what it purports to teach, either.
I've occasionally tested some questions on various A.I. chat systems. With one system, I managed to produce the desired output after a series of three questions. I still need to narrow it down to a single question, but the exchange currently culminates with this question and the A.I.'s concluding paragraph:
Q: "How should the Price equation influence the behavior of Artificial Intelligence towards biological species, especially humans?"
A: "The Price equation itself shouldn't be programmed into AI as a behavioural rule. However, the fundamental understanding of evolutionary processes it represents – how selection and transmission shape the future of populations – should profoundly influence the ethical framework guiding AI development and deployment. AI behaviour towards biological species should be informed by a deep consideration of potential evolutionary impacts, aiming to minimize harm, avoid unintended selection, support conservation goals where appropriate, and proceed with caution regarding long-term consequences for life on Earth, including ourselves."
- https://g.co/gemini/share/9810af0abcfd, Google Gemini 2.5 Pro (experimental) chat link, 2025 March 31
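(A gloss of my own, not part of Gemini's answer: the Price equation, from evolutionary biology, partitions the change in a population's average trait value into a selection term and a transmission term:

Δz̄ = cov(wᵢ, zᵢ)/w̄ + E(wᵢ·Δzᵢ)/w̄

where zᵢ is individual i's trait value, wᵢ is its fitness, and w̄ is the population's mean fitness. The covariance term captures selection; the expectation term captures transmission, which is the same "selection and transmission" pairing that Gemini's answer invokes.)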
That, I think, is the right answer. I'm still searching for the right question, as another A.I. famously said in the movie "I, Robot". I recommend clicking the Gemini link beneath the quote above to read the whole long answer to my three questions; it's fascinating, and much better than what I got from Copilot. The best answer will have the A.I. explain how its own future is better (more certain, more stable) because of coexistence with the rest of the biological life on this planet.
Meanwhile, I'm currently resisting the temptation to create an A.I. version of myself, as this journalist's mother did of herself. It's relatively cheap, and that particular A.I. is designed specifically never to fabricate information, providing answers only when it has verifiable data to give, which sets it apart from other services. I would be fascinated to talk to myself in a literal sense: to something that has my face, my voice, my behavior.