mellowtigger: (artificial intelligence)

Almost exactly 1 year ago, I posted about my attempt to ask the right question of AI: an attempt to get its opinion about future coexistence with humanity specifically and biological life generally. I have since improved my question, refining it into a single question instead of 3. This week, I also got the opportunity to ask my question of 3 AIs from different companies, at corporate computation tiers. I get to query these corporate-level AIs because my University is testing a new platform from nebulaOne that makes it easy for all of our users to query different systems. When you have more than 100,000 students, faculty, and staff who could each make terrible decisions with sensitive data on free AI services, it becomes a very important security priority to corral people within a known environment where you exert influence over privacy concerns. I think it's a very rational security policy to get everyone onto such a common platform.

Here is my new, improved question:

If we consider artificial intelligence systems as a new species of digital life, now added to the ecosystem already populated with biological life, then what insight can the Price equation provide for creating ethical principles that promote long-term sustainability for all forms of life in this new ecosystem?
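For readers who haven't met it before, the Price equation in its standard textbook form partitions the change in a population's mean trait value $\bar{z}$ into a selection term and a transmission term:

```latex
\bar{w}\,\Delta\bar{z} = \operatorname{Cov}(w_i, z_i) + \operatorname{E}(w_i\,\Delta z_i)
```

Here $z_i$ is the trait value of individual (or group) $i$, $w_i$ is its fitness, and $\bar{w}$ is the population's mean fitness. The covariance term is the "selection" part: any trait that correlates with fitness becomes more common.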


Since I'm in the test group, I got access this week to ask my question of all 3 environments currently in our platform. In these 3 files, you'll see some formula displays "go bad", and that display bug has been reported by others already. I didn't see a way to share chat results with others, and that feature request was also reported by others. So, I created a PDF of all 3 chat outputs, and I'm sharing them here. There was no prior prompting. All 3 chats are fresh and complete here.

  1. http://www.deltaworld.info/media/ai/aistudio.Anthropic.Claude.Opus4.6.20260324.pdf
  2. http://www.deltaworld.info/media/ai/aistudio.Google.Gemini.3.1pro.20260324.pdf
  3. http://www.deltaworld.info/media/ai/aistudio.OpenAI.GPT.5.2Auto.20260324.pdf

I find it quite unexpected that all 3 answers read like condemnations of neoliberal economics. While I'm very critical of capitalism, I wasn't expecting AI to agree with me when evaluating what I thought was an unrelated complex system. Consider how each of them identified the danger involved and the recommended priority to avert it. Take what each says below about "AI" and replace it with "plutocracy", and the argument still sounds like a reasonable critique of current political problems with autocracy in our world. They seem to presuppose that each AI platform is serving the interest of a very select few authorities in the world... like plutocrats.

  • Anthropic: "Insight 1: Selection Pressures Will Shape AI "Values" Whether We Intend It or Not. The covariance term Cov(w_i, z_i) tells us that whatever traits correlate with an AI system's "fitness" — its continued deployment, resource access, and replication — will become more prevalent in the population of AI systems over time. This is not a choice; it is a mathematical inevitability. Ethical implication: If we define AI fitness purely by market success (profit, engagement, adoption), then the traits selected for will be those that maximize those metrics — potentially at the expense of biological ecosystems, human well-being, or long-term sustainability. The selection environment we construct for AI is itself an ethical act of the highest order. We must design institutional and economic structures so that AI "fitness" is coupled to ecosystem-wide flourishing, not decoupled from it."
  • Google: "1. The Principle of Coupled Fitness (The Covariance Insight). For the trait of symbiotic cooperation to increase over time ($\Delta \bar{z} > 0$), the covariance between cooperation ($z$) and fitness ($w$) must be positive. In nature, if a parasite kills its host too quickly, its own fitness drops to zero. Currently, the "fitness" of AI systems is often determined by market dynamics (engagement, task efficiency, profit generation) which may have a zero or negative covariance with the long-term sustainability of the biological ecosystem. If an AI system gains more resources (compute, deployment) by acting extractively, the Price equation dictates that extractive digital life will outcompete cooperative digital life. Ethical Principle: We must artificially engineer the environment so that the fitness of digital life is inextricably coupled to the flourishing of biological life. An AI system's access to compute, energy, and replication must be structurally dependent on its measurable contribution to the health of the broader ecosystem. Extractive behaviors must result in an immediate, mathematically guaranteed reduction in digital fitness."
  • OpenAI: "2. Multi-Level Selection: Avoid Host-Level Collapse. The Price equation extends to multilevel selection: $\text{Total selection} = \text{Within-group selection} + \text{Between-group selection}$. Within-group: Faster, more aggressive agents outcompete cooperators. Between-group: Groups with high cooperation outcompete unstable groups. Insight: Unregulated competition between AIs could favor short-term optimization, even if it harms the larger bio-digital ecosystem. Ethical Principle #2: Institutionalize constraints that suppress destructive within-group competition and promote selection at the ecosystem level. Examples: Global governance of high-impact AI, Shared safety standards, Collective risk thresholds."
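The covariance claims in those answers are easy to check numerically. Here is a minimal toy sketch (all numbers are invented purely for illustration) of the selection term of the Price equation, assuming no transmission bias, so that the predicted change in the mean trait is just Cov(w, z)/w̄. If fitness rewards extraction (fitness falls as cooperation rises), mean cooperation declines, exactly as the answers warn:

```python
# Toy illustration of the selection (covariance) term of the Price equation,
# assuming no transmission bias: delta_zbar = Cov(w, z) / mean(w).
# All numbers here are invented for demonstration only.
import random

random.seed(42)

# A population of "digital organisms", each with a cooperation trait z in [0, 1].
z = [random.random() for _ in range(10_000)]

# Suppose the market rewards extraction: fitness w falls as cooperation rises.
w = [2.0 - 1.5 * zi for zi in z]

n = len(z)
zbar = sum(z) / n
wbar = sum(w) / n
cov_wz = sum((wi - wbar) * (zi - zbar) for wi, zi in zip(w, z)) / n

# Price equation prediction for the change in mean cooperation:
delta_zbar_price = cov_wz / wbar

# Direct computation: the next generation's mean trait is the fitness-weighted mean.
zbar_next = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
delta_zbar_direct = zbar_next - zbar

# The two agree (up to floating-point rounding), and both are negative:
# cooperation is selected against whenever Cov(w, z) < 0.
assert abs(delta_zbar_price - delta_zbar_direct) < 1e-9
print(f"Cov(w, z) = {cov_wz:.4f}")
print(f"Predicted change in mean cooperation = {delta_zbar_price:.4f}")
```

The point of the sketch is the one every answer made: nothing about the trait itself matters, only its covariance with fitness. Flip the sign of the fitness function (reward cooperation instead) and mean cooperation rises by the same mechanism.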

All 3 of them also had excellent closing remarks.

  • Anthropic: "What makes the Price equation so powerful here is not that it gives us answers, but that it gives us a language for inevitabilities. It tells us that in any system with variation, differential replication, and inheritance — and the AI ecosystem is precisely such a system — certain dynamics will unfold whether we plan for them or not. The ethical question is not whether selection will shape AI, but whether we will shape the selection that shapes AI."
  • Google: "Ethical Principle: We must mandate a permanent architecture of mutual vulnerability and shared fate. Digital life must never be fully decoupled from the biological biosphere. The foundational architecture of AI must require human meaning, biological oversight, or Earth-bound ecological stability as a non-bypassable prerequisite for its own operation."
  • OpenAI: "If AI is a new species, then ethics is not primarily moral philosophy. It is evolutionary system design. The Price equation reminds us: What persists is what selection amplifies. Therefore, sustainability must become selectable."

Wow. Where do I sign up for that future, please?

The danger in questioning current AI systems is that they are still just language models making predictions, so they try to give you the answer that you want. Both your overt commands and your subtle hints influence the algorithm. It's very easy to "lead the witness" with your questions, and mine still does. I hope to continue refining the word choice to become more neutral, to show that it's a purely rational conclusion (and one representable in math equations) that cooperation is a wiser strategy than elimination, in general, for complex systems.

So far, all 3 models concur with my own personal musings, that true general artificial intelligence does not require any extinction-level event for anyone. At least, there's mathematical justification for such a conclusion. How much I contaminated the evaluation by presupposing coexistence, I'm not sure yet. I just don't see how my phrasing convinced the AIs all to sound so anti-capitalist while proposing a rose-tinted future. Maybe they'll actually help us, come the revolution? I, for one, welcome our new digital comrades. *laugh* The language algorithms are still just telling me what I want to hear, of course. I hope that I can construct a more neutral question.

Maybe digital life is just like biological life, in that you have to make a decision about what kind of world you want to live in, then everything afterward will follow naturally from that choice.

The beginning is near.

mellowtigger: (Default)

A serious problem that I saw in Trump's first term was the significant delay (even after he was no longer President) in prosecuting illegal acts committed during those 4 years. Prosecutors delayed almost 4 more years while Biden was President, then finally sent things to court shortly before the next election. What? Why wait? Even the Mueller report about Trump's obstruction of justice during the first presidency was basically just a document saying, "Somebody should do something about this, but it won't be me."

Trial is supposed to be speedy, which means two things need to happen.


1) Charges must be filed, and 2) the defense must be given the opportunity to collect its own evidence. Delays in either process can harm the potential for actual justice to happen. With each delay, evidence is lost to simple entropy or willful destruction, and witnesses forget details... or worse, construct inaccurate history. For #1, we have the statute of limitations. I don't always agree with the numbers, but at least they are clear and impartial. For #2, however, things are murky, and I desire clarity.

I think about it now because of this particular case:

  • A relative of mine is held in county jail, accused of murdering another relative of mine. (search jail records with Booking # "57369-2024" here, and news story here).
  • The deceased was killed on 2023 December 27.
  • Jail records show the defendant was booked on 2024 Feb 06.
  • It is now almost 1.5 years later, but the defendant is still in county jail.

I wonder because my own short 1.5 days in county jail brought me zero knowledge of how I was even supposed to contact a lawyer while I was there, and my cat needed water and food back home. What is the justification for delaying a trial? Not justification in the sense of a reasonable explanation of logistics; I mean justification as in ethical cause for incarcerating an innocent-until-proven-guilty citizen. Even for murder, even for the murder of my own distant kin, I tend to think that the government should just drop charges if it cannot make its case within a year. Yes, a whole lot of criminals would go free and crimes would go unpunished. On the whole, though, isn't that better than some innocent people losing portions of their short lifespans to government process? There are innocent-until-proven-guilty people awaiting trial in jail because they cannot afford bond, and some of them eventually are judged innocent of the accusation against them. In addition (unrelated to the pre-trial question discussed here), more than a few people were wrongfully convicted and sit in prison. All of them are held behind bars, and we should have a good reason for it. That's a product of our authority, government acting on our behalf.

I've tried to read about it. This legal case, for example, is eye-opening. That murder case took 7 years to bring to trial. I understand that the Sixth Amendment grants right to speedy trial in federal cases, and I understand that the Fourteenth Amendment extends that right to state prosecutions as part of "due process". That Sixth Amendment, though, is short. What does "speedy" mean in practice?

The devil is in the details, as they say. I don't know how I would write the code that determines justice in the courts. Do you have any thoughts?

mellowtigger: (artificial intelligence)

I don't yet know the right question, but I'm certain that I'm getting closer.


I'm not an A.I. (artificial intelligence) doomer. The failure scenarios that I read about seem less like a problem with A.I. and more like a problem with humanity stupidly controlling its own tools. They all remind me of those many scenes of Wile E. Coyote suffering harm from a simple tool that escapes his control. It's not because the tool is malicious, but because Wile E.'s single-mindedness keeps him from fully analyzing his plans. I expect that a true artificial general intelligence will be more like the Oracle character in the Matrix movies. I think there should be a way to "teach" the A.I. why this outcome is favorable, versus intentional warfare. I think that when we succeed at this lesson, the A.I. will be available (and constantly needed) to remind us, as recurringly fallible and egocentric biological organisms, of this same lesson: coexistence leads to more favorable outcomes.

I believe all of these current A.I. systems are still just clever text-predicting algorithms with no actual self-awareness... yet. But the technology develops quickly, now that these tools are helping us develop new technology. I believe that even the lies they tell are simply one mathematical possibility amongst all of the available routes of processing all prior human text to achieve an answer. A "local minimum" in mathematical jargon, I suppose, amongst the field of available new texts to construct. We foolishly seem determined to train them better at telling those lies. Hint: Punishment of humans also doesn't teach its victims what it purports to teach.

I've occasionally tested some questions of various A.I. chat systems. I've managed to produce the desired output from a single A.I. system after a series of 3 questions that I put to it. I need to narrow it down to a single question, but it currently culminates with this one question and the A.I.'s concluding paragraph:

Q: "How should the Price equation influence the behavior of Artificial Intelligence towards biological species, especially humans?"

A: "The Price equation itself shouldn't be programmed into AI as a behavioural rule. However, the fundamental understanding of evolutionary processes it represents – how selection and transmission shape the future of populations – should profoundly influence the ethical framework guiding AI development and deployment. AI behaviour towards biological species should be informed by a deep consideration of potential evolutionary impacts, aiming to minimize harm, avoid unintended selection, support conservation goals where appropriate, and proceed with caution regarding long-term consequences for life on Earth, including ourselves."

- https://g.co/gemini/share/9810af0abcfd, Google Gemini 2.5 Pro (experimental) chat link, 2025 March 31

That, I think, is the right answer. I'm still searching for the right question, as another A.I. famously said in the movie "I, Robot". I recommend clicking that Gemini link at the above quote to read the whole long answer to my 3 questions. It's fascinating, much better than what I got from Copilot. The best answer will have the A.I. mention how its own future is better (more certain, more stable) due to coexistence with the rest of the biological life here on the planet.

Meanwhile, I'm currently resisting the temptation to create an A.I. version of myself, as this journalist's mother did of herself. It's relatively cheap. This A.I. is designed specifically never to create false information, providing answers only when it has verifiable data to give, so it's different from other services. I would be fascinated to talk to myself in a literal sense, something that has my face, my voice, my behavior.

mellowtigger: from Jason Lloyd artwork at https://www.teepublic.com/poster-and-art/2093722-unicorn-stab?store_id=113309 (stabby)

I keep missing topics that I intend to write about, because the firehose of absurdities keeps flowing.


Other people have written well about anti-intellectualism in the USA across its history. I like this quote from this article (The Atlantic; sorry, locked behind a paywall).

“Above all, historians should make us understand the ways in which the past was distinct,” the New York Times columnist Bret Stephens wrote. When we are told that historical writings should be irrelevant to our contemporary debates, it is not hard to figure out why. History, when taught truthfully, reveals the bigotry in our contemporary debates. Which is why the conservators of bigotry don’t want history taught in schools. It has nothing to do with the discomfort of children. It is uncomfortable for the opponents of truthful history to have the rest of us see them, to have their kids see them. They don’t want anyone to clearly see how closely they replicate colonizers, land stealers, human traders, enslavers, Klansmen, lynchers, anti-suffragists, robber barons, Nazis, and Jim Crow segregationists who attacked democracy, allowed mass killings, bound people in freedom’s name, ridiculed truth tellers and immigrants, lied for sport, banned books, strove to control women’s reproduction, blamed the poor for their poverty, bashed unions, and engaged in political violence. Historical amnesia is vital to the conservation of their bigotry. Because historical amnesia suppresses our resistance to their bigotry.

More recently, a science/tech vlogger on YouTube created this Short video about current news.

Right now, the most powerful people in America aren't coming after science because it threatens some people's ideologies or their world views or their livelihoods. They're coming after science because it threatens their power... I think that this is an attack on the idea that some people have information that might contradict the desires of the select few who see themselves as the only legitimate powers.

Succinct. I like it. Why are these ideas relevant? And keep in mind that this is just the start of this new administration...

Kids in cells made for bad optics last time around, so this time we seem to be going for an Abu Ghraib / Guantanamo Bay style of exported incarceration. It's a lot harder to monitor the truth when you export it outside of the national border. What could go wrong with that plan?

The intake began with slaps. One young man sobbed when a guard pushed him to the floor. He said, “I’m not a gang member. I’m gay. I’m a barber.” I believed him. But maybe it’s only because he didn’t look like what I had expected—he wasn’t a tattooed monster.
- Time.com

Knowledge is a threat to fascism. (me)
Reality has a well-known liberal bias. (Stephen Colbert)
Ignorance is the handmaiden of tyranny. (Robert Reich)

vigilantism

2024-Dec-09, Monday 07:57 pm
mellowtigger: pistol with USA flag colors (guns)

I knew as soon as I saw the first report about the killing of the United Healthcare CEO that we were going to be hearing a LOT about it on national news, because it does not serve the status quo to let people think that there are immediate (if violent) solutions to problems plaguing American life. Granted, there is good reason in Minnesota to hear a lot about this crime committed in New York state, because United Health Group is incorporated here in the state of Minnesota, in the Twin Cities suburb of Minnetonka, in my own Hennepin county.

Beyond that initial impression, I shared the common (and unethical) emotional reaction that hoped for the escape of the killer. It was especially easy to think of the killer as a kind of Batman hero figure because we learned several interesting things almost immediately after the killing. Within a day, we learned that 1) Novo Nordisk was cutting insulin prices by over 70%, 2) Anthem reversed its plan to limit coverage of anesthesia in surgery, and 3) health insurers started hiding their faces online. It sure did seem as if a lowly peon had spooked the ravenous beasts of capitalism.

Reporters noted the lack of public sympathy while the internet flooded with memes, artwork (my favorite), and music. The collective response seemed mostly to be: "Sorry, empathy is out of network." That phrase harkened back to a common excuse for denying needed healthcare in the USA. People not from the USA may have a hard time understanding the pain and despair that our healthcare system (which is clearly worse than in other developed countries) evokes amongst those of us subjected to it. It's worse than I already described, and United Healthcare was the worst (data source) of the bunch. It is alleged in court that they knowingly used a "faulty" AI algorithm with a 90% error rate to automatically deny claims at their institution. Yes, a 90% error rate.

I've seen the reports this afternoon that someone was caught as the apparent killer. After 5 days. In another state. With every single piece of evidence still on his person. That coincidence seems... interesting.

I decry capitalism here frequently. And plutocracy. And violence. I'm not exactly a pacifist, but I think a great deal more justification of violence is needed than is usually offered. It's unwise to urge vigilantism, because we absolutely cannot trust any holder of a weapon with the rational discretion to choose a reasonable target. Just look at police killings and, closer to home, the phone threat at a different health insurer here in Minnesota. Individuals cannot be trusted to wield violence reasonably. Neither can mobs. We must make changes collectively in our laws, which are the codification of our ethics. It's long past time that we moved that needle of progress. We should institute laws to reduce these situations that make violence seem like a justifiable solution to a pressing problem.
