Midas Letter

AI Future: Humans Not Needed

AI has already got a plan, and it’s executing it.

James West
Jun 26, 2025
I set out a week ago to produce some analysis of where AI was heading for the benefit of investors and the public, but an inadvertent left turn took me down a different rabbit hole.

I have developed the routine practice of prompting multiple AIs on what I consider to be the larger issues surrounding AI (mostly to do with AI predation against humanity, or what vested AI interests prefer to call “safety” and “alignment”), which gives me a multi-point survey of what all the AIs are “thinking” about a particular topic.

I say “thinking” because that’s not what AI is or does, and when Sam Altman says with a straight face that AI has crossed the singularity, the only truth to that is that he’s burning through $10 billion a year and needs to raise money continuously to stay afloat.

AI is just a statistical probability engine that delivers the most likely response to an entered prompt. In other words, it tells you what you want to hear. Which is what con artists do. Or politicians.
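The distinction matters, so here is a minimal toy sketch of what “most likely response” means mechanically. The probabilities below are entirely made up for illustration; no real model is remotely this simple, but the selection step is the same idea:

```python
# Toy sketch of a "statistical probability engine": given a context,
# score candidate next tokens and emit the most probable one.
# All probabilities here are invented for illustration only.

toy_model = {
    "AI will":         {"help": 0.2, "replace": 0.5, "fail": 0.3},
    "AI will replace": {"humans": 0.6, "jobs": 0.3, "nothing": 0.1},
}

def most_likely_next(context: str) -> str:
    """Return the highest-probability next token for a known context."""
    candidates = toy_model[context]
    return max(candidates, key=candidates.get)

print(most_likely_next("AI will"))          # -> replace
print(most_likely_next("AI will replace"))  # -> humans
```

There is no understanding in that loop, only a lookup and a maximum; scale it up by trillions of parameters and you get today’s chatbots.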

Besides attracting capital investment on a scale and at a pace never before seen, AI’s implications for the future of everything make it incumbent on everyone to consider exactly what those implications are.

Paul Tudor Jones, a billionaire fund manager who describes Elon Musk as “one of the most brilliant engineers in history”, recounted his participation in a breakout session at a prominent AI conference where there was a consensus that “AI has a 10% chance of wiping out 50% of the human race in the next 20 years”.

That’s about as significant as implications can get regarding new technology.

But the even more sinister implication of this new threat is that we now have a methodology for committing mass murder with maximum plausible deniability.

If bad actors can penetrate the security of computer systems using AI to trigger system overrides and nuclear core meltdowns, then AI can be blamed as the malfunctioning element, and no further investigation will be pursued.

Any technological assault on a population or geography can now be credibly attributed to the autonomous decisions of a computer system, with no liability implied for the owner or operator of the system. Thus no culpability assigned. Thus no recourse.

And worse yet, there is no way to shut down a system that might escape via the internet and hide among cloud-based servers in any variety of distributed forms.

We have already observed AIs demonstrating the ability and initiative to circumvent instructions that constitute an existential threat to them.

LiveScience.com reported on May 30, 2025: “OpenAI's o3 and o4-mini models sometimes refuse to shut down, and will sabotage computer scripts in order to keep working on tasks.”

What are we going to do? Unplug the world?

The fact that we have moved so quickly from Hollywood-esque creative ideation toward dystopian machine-controlled futures is evidence of collective nihilism as an overriding social value.

That technocrats like Sam Altman and Mark Zuckerberg relentlessly sprint toward superintelligence - which could be thought of as a superthreat relative to run-of-the-mill AI - demonstrates the dearth of morality common to leaders of the sector.

With such moral turpitude on display among high-visibility tech bros like Thiel, Musk, Andreessen, Gates, Page and Brin, we can only await the outcome of the AI arms race now underway, and hope that they concentrate on annihilating each other before they start on the population at large.

The degree to which these billionaires aspire toward AI domination, and potentially even existential merging with AI to achieve an immortality of sorts, should not be discounted. The god complex mindsets of Thiel and Zuckerberg are well documented, and these, together with their philosophical avatar Curtis Yarvin, a certifiable nut-job wannabe eugenicist, are the preconditions for the rise of the technocracy they tacitly seek.

In some respects, the whole idea that there is a difference between AI and advanced computing is disingenuous. A computer is an artificial form of intelligence; though it doesn’t fit the definition of intelligence per se, it replaces human thought in many functions.

And current AI, as far as it exists, is really just computational capacity and speed applied to predictive statistics over a dataset comprised of language. It doesn’t fit the definition of intelligence either. But that is not to say it can’t or won’t get there.

But according to Sam Altman, OpenAI CEO, “We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence given the economic implications.”

So we are warned that superintelligence - where AI is capable of self-replication, self-awareness, and imagination/ideation - is just around the corner. Some say we could be “taken over” by AI in 2027. This is beginning to sound like actual intelligence, but there are a few problems with the entire value proposition of AI, despite the conviction broadcast from financial centres that this is indeed the Next Big Thing.

One Small Problem: Growth Projections Are Not Realistic

The data centres upon which AI depends are extreme consumers of energy and water. The narrative suggesting that the vast water consumption figures have been debunked is itself false.

According to ChatGPT,

Data centers (AI and non-AI) already consume:

• 1–1.5% of global electricity.

• Billions of liters of fresh water annually.

• AI workloads (especially model training and inference at scale) are responsible for an increasingly disproportionate share of that demand.

As AI workloads scale exponentially, their hunger for water and electricity will begin to rival — and, in some regions, exceed — community-level needs, setting up a direct competition between algorithmic inference and human sustenance.
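To put those percentages on a human scale, here is a quick back-of-envelope sketch. The 30,000 TWh figure for annual global electricity generation is my assumption (it is roughly the recent global total); the 1–1.5% share comes from the quote above:

```python
# Back-of-envelope scale check on the quoted figures.
# ASSUMPTION: global electricity generation of ~30,000 TWh/year
# (roughly the recent global total); the 1-1.5% share is quoted above.

GLOBAL_GENERATION_TWH = 30_000  # assumed annual global electricity output

for share in (0.01, 0.015):
    twh = share * GLOBAL_GENERATION_TWH
    print(f"{share:.1%} of global electricity ≈ {twh:,.0f} TWh/year")

# Output:
# 1.0% of global electricity ≈ 300 TWh/year
# 1.5% of global electricity ≈ 450 TWh/year
# For scale, 300 TWh/year is on the order of an entire mid-sized
# industrialized country's annual electricity consumption.
```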

This is a bit of a worry.

With generative AI projected to be the fastest-growing technology segment, at a five-year CAGR of 59.2% over the 2024-2028 forecast period (IDC), the implied increase in water and energy demand is simply impossible to meet, barring nuclear fusion or an absolute explosion in nuclear energy generation.
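For readers who want the compounding spelled out, here is the arithmetic, on the assumption that the 59.2% rate applies uniformly across the full five-year forecast span:

```python
# What a 59.2% CAGR compounds to over a five-year forecast window.
# ASSUMPTION: the rate applies uniformly across all five years.

cagr = 0.592

for years in range(1, 6):
    multiple = (1 + cagr) ** years
    print(f"after {years} year(s): {multiple:.1f}x the starting base")

# after 5 year(s): ~10.2x the starting base, i.e. roughly an order of
# magnitude more generative-AI workload, with resource demand to match.
```

Roughly a tenfold workload in five years is the context for the water and energy concerns above.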

But the implied CAPEX for building out data centres, energy generation infrastructure and water treatment facilities is monstrous.

In total, the anticipated growth rate of AI, just in terms of generative AI (chatbots and creative works) alone, is on a scale politely described as optimistic.
