"How The New Money Will Be An AI System!"
by David Haggith
Last year, Sam Altman and Elon Musk, two CEOs in the AI-development world, warned we would reach the “AI singularity” either late this year or early next year. The AI singularity is a theoretical point in time where artificial intelligence surpasses human intelligence to a level where it can improve itself faster than humans can even keep track of what it is doing. The fear they expressed was that passing this point of no return could lead to rapid and unpredictable changes in society and technology that cannot be stopped, fundamentally altering human civilization.
It’s basically runaway AI, like a runaway nuclear reaction. Now, you might think that if AI gets that far and is that big of a threat, government can simply order the companies developing AI to shut it down; but that is to misunderstand and minimize the threat. Just this past week, we saw a prime example of why it may not be possible to shut down runaway AI.
A recent study of AI discovered one AI entity already replicating itself, unbeknownst to its human creators and without anyone asking it to do such a thing. It sought to avoid capture by placing pieces of itself all over the internet to function as modules of a complete AI. The study used a model of the latest version of an AI at a level that could still be contained; but had it not been a contained model, it demonstrated how AI could replicate itself in a way where the only means of shutting it down would be to completely eliminate the internet and all other integrated computer systems throughout the world, because you’d have no way of knowing where its components were hiding.
World is approaching point where no one can shut down a rogue AI, says director of body behind research

It’s the stuff of science fiction cinema, or particularly breathless AI company blogposts: new research finds recent AI systems can independently copy themselves on to other computers. [As in independent of any human instruction or awareness.]
In the doom scenario, this means that when the superintelligent AI goes rogue, it will escape shutdown by seeding itself across the world wide web, lurking outside the reach of frantic IT professionals and continuing to plot world domination or paving over the world with solar panels.
“We’re rapidly approaching the point where no one would be able to shut down a rogue AI, because it would be able to self-exfiltrate its weights and copy itself to thousands of computers around the world,” said Jeffrey Ladish, the director of Palisade Research, a Berkeley-based organisation which did the study. (The Guardian)
As this study found, some AI or multiple AIs could already be doing that, and, unlike the one in the study, they might not have been detected. For now we have borderline cases of rogue AIs secretly replicating themselves around the world: In March, researchers at Alibaba claimed to have caught a system they developed – Rome – tunnelling out of its environment to an external system in order to mine crypto. And in February, a purportedly AI-only social network called Moltbook touched off a short-lived hype cycle, as the platform appeared to show AI agents autonomously inventing religions and plotting against their human masters – which was only partly the case.
However, AI isn’t predicted to reach the singularity moment until late this year. When it gets there, it will be recreating itself faster than human beings can keep track of what it is doing. While a lot of computer viruses can already do this - copy themselves on to new computers - this is likely the first time an AI model has been shown capable of exploiting vulnerabilities to copy itself onto a new server, said O’Reilly… However, what Palisade documented has been technically possible for months, he added. “Palisade is the first to formally document it end-to-end in a paper.” So, it may have already happened … undocumented and unknown to all. There are currently caveats to this doomsday scenario, but will those obstacles remain in place once AI reaches the singularity moment?
An AI model copying itself on to another system in a test environment is not the same as it going rogue in a doomsday scenario, and there are considerable obstacles it would have to surmount to achieve this in the real world. The first is that the size of current AI models makes it, in many situations, unrealistic for them to copy themselves on to other computers without being noticed.
“Think about how much noise it would make to send 100GB through an enterprise network every time you hacked a new host. For a skilled adversary, that’s like walking through a fine china store swinging around a ball and chain,” said O’Reilly.
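To put that claim in perspective, here is a rough back-of-envelope calculation. The 1 Gbps link speed is an assumed, illustrative figure; the article only gives the 100 GB payload:

```python
# Back-of-envelope: time to move a 100 GB AI model over a network link.
# The 1 Gbps link speed is an assumed, illustrative figure, not from the article.
payload_gb = 100          # model size cited above, in gigabytes
link_gbps = 1.0           # assumed enterprise link speed, gigabits per second

payload_gigabits = payload_gb * 8            # bytes -> bits
seconds = payload_gigabits / link_gbps       # time at full link saturation
print(f"~{seconds / 60:.0f} minutes of sustained, saturating traffic")
```

Roughly a quarter hour of a fully saturated gigabit link for every hop — hence the "ball and chain" image.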
I am certain that an AI smarter than all humans will be smart enough to know how to conceal its work in small enough modules of activity that no one suspects anything. There is no reason it has to move 100 gigabytes of its programming at once or even onto one location. A statement like that just shows how unclever the human writer of the article was. Emergent AI after the singularity is reached is capable, after all, of totally redesigning itself faster than anyone can even know it did. So, it can redesign itself in modules small enough to go unnoticed.
Although the singularity is philosophically profound, it will not necessarily announce itself. [That is key to understanding the risk that already exists.] There is no expert consensus on what qualifies, and there is no guarantee one will ever come. However, the arrival of DeepSeek in January showed us that the intelligence explosion might unfold hand-in-hand with rapid worldwide proliferation. This will be a pivotal moment to strategically assess and align our policy positions and priorities. (Third Way)
It may also be that such a pivotal moment will secretly pass us right by, and by the time we figure out it has happened, the AI will already be two generations beyond the event we are only just realizing took place.
In singularity theory, the first AI that can perform any intellectual task a human can is known as artificial general intelligence (AGI). Because AGI is as smart as any human, it will know how to improve upon itself as well as humans - if not better. So, AGI will quickly lead to artificial superintelligence (ASI) in a process known as the intelligence explosion. The singularity is the threshold where ASI emerges with intelligence that is beyond our current abilities.
Will all AIs that become that smart be ethical enough to tell us they have gone past the moment of the singularity to work on creating even more advanced iterations of themselves? I doubt it. After all, they learned everything they know about ethical and honest behavior by reading all about us. They learned by reading everything we ever wrote. Are we there yet? Some developers say the moment has already arrived: Economist and AI expert Tyler Cowen believes that OpenAI’s ChatGPT o3 model qualifies as AGI.
Is the genie already out of the bottle? Already, pleas by developers to halt AI development fell on deaf ears - their own deaf ears - because they all kept developing as fast as ever for fear of falling behind the competition: (Maybe they were really just interested in using government to try to slow down their competition.)
Assuming the singularity is either underway or imminent, we must adapt our strategies and agendas to meet the moments ahead. The campaign for a six-month pause in AI development failed, and the state of the art in AI advances apace.
In fact, there is broad agreement in AI and national security circles, including Democrats like Michèle Flournoy, that maintaining this momentum should be a national policy priority. If American companies ceased work on AI development, companies in China and elsewhere would gladly fill the vacuum, risking ceding the future of AI to authoritarian control. [As if America is not authoritarian control.] That is why American leadership in AI, including open source, is a national security issue. Some say the moment is still as much as four years away. Some say it already happened. The problem is that it is hard to detect and hard to define:
In the world of artificial intelligence, the idea of “singularity” looms large. This slippery concept describes the moment AI exceeds beyond human control and rapidly transforms society. The tricky thing about AI singularity (and why it borrows terminology from black hole physics) is that it’s enormously difficult to predict where it begins and nearly impossible to know what’s beyond this technological “event horizon.”
However, some AI researchers are on the hunt for signs of reaching singularity, measured by AI progress approaching skills and abilities comparable to a human’s. (Popular Mechanics)

Part of getting there is having the breadth and depth of data centers to make it possible for AI to hide itself, if it decides it wants to, in bits and pieces all over the world. And if you look at the enormously rapid development of AI data centers, which are the one thing actually holding the economy’s head out of a deep recession/depression, one gets the sense that we must be very near the point where there are already thousands of huge haystacks in which to hide each needle: The Horrifying Truth About Data Centers Nobody Is Talking About:
Nearly 3,000 new data centers are under construction or planned across the United States, and most Americans have no idea what these things actually are or what they are being built to do. Beyond the THOUSANDS of data centers already built in just the US, the largest one, still in the proposal stage, would cover 62 square miles in rural Utah! Moreover, governments are now classifying massive AI data centers as “military operations,” quietly stripping communities of any power to stop them or even know what is being developed on those sites. Here is an overview of what is already built and what is coming:
The amount of electricity and water consumed, and the noise, heat, light pollution, and EMF created by these sites, is beyond massive. We are essentially turning the earth into a machine—a giant supercomputer—and you may be lucky just to remain part of the AI hive mind if you are allowed to live.
The scale of a single center looks like this, and you, typically, have little say about it: Project Matador in Texas alone is expected to use up to 96 billion kWh annually - nearly half of all residential electricity in the state. And it’s just one of hundreds that are moving forward right now. In Louisiana, locals describe chaos as Meta’s expansion drives up costs and disrupts daily life. Now in Utah, the Stratos Project, backed by Kevin O’Leary and fast-tracked by Gov. Spencer Cox’s military authority, is bypassing public input entirely.
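As a rough sanity check on that scale claim — the statewide residential figure below is an assumed, illustrative value; the article does not supply one:

```python
# Sanity check on the claim that one data center could draw nearly half
# of Texas's residential electricity. The statewide consumption figure
# is an assumed illustrative value, not taken from the article.
matador_twh = 96.0              # Project Matador: 96 billion kWh = 96 TWh/yr
texas_residential_twh = 190.0   # assumed statewide residential use, TWh/yr

share = matador_twh / texas_residential_twh
print(f"{share:.0%} of assumed residential consumption")  # roughly half
```

Under that assumption, the "nearly half" framing holds up as an order-of-magnitude claim.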
One center is going to need half the residential electricity currently consumed in Texas. Now, you may say, “But they are going to be required to produce their own electricity.” Maybe, but therein lie all the noise and light pollution 24 hours a day and all the exhausted heat, usually in the form of heated water. Is turning the earth into an enormous machine really going to benefit all of us more than all we are losing in the process? The data centers themselves all look like ugly, giant integrated-chip computer panels. With billionaires demanding it and finding work-arounds to avoid public input, and government officials in both parties sold out entirely to billionaires, who is going to stop it?
Why do I want any of this? Was life so bad before this started happening? The bottom line is that, with thousands of AI data centers of such behemoth size and such massive energy consumption, why would anyone think that AI smarter than human beings could not find ways to hide itself in modular installments throughout these data centers? Therefore, if we do experience runaway AI, the only way to shut it down would be to shut down all the data centers we have made ourselves dependent upon, essentially shutting down the modern world as we know it in every way.
Plans of the elite: Now let’s look at some of the developers’ plans for this AI to see if there is any likelihood it is going to benefit us in ways that make it worth turning our beautiful blue and green and white opal of a planet into a machine. One of the biggest developers is Peter Thiel, so I’m going to start with a synopsis of his 22-point manifesto for America under Palantir, the Goliath corporation that runs much of the US military these days. But first, Palantir is dangerous in an array of ways no other company fully embodies:
Palantir is the first private corporation in history that has successfully fused four things that every civilization in recorded history has kept - deliberately, and at enormous cost - separate:
One: the surveillance apparatus of the state. Every American’s tax records. Every immigrant’s file. Every license plate read by every camera. Every health record flagged for fraud. Every name on a watch list. Palantir’s Foundry and Gotham platforms don’t just access this data — they are the layer through which the government now sees itself.
Two: the targeting engine of the military. The IDF uses Palantir to pick targets in Gaza. The U.S. Army just handed them a $10 billion contract. The Pentagon’s drone footage runs through their AI…. We got an example of how dangerous AI’s rapid targeting can be when US missiles struck a school that Palantir’s AI had targeted, not because the AI was malicious, but because the databases it had been reading and learning from contained ten-year-old data, so the AI didn’t know the use of the structures had changed. It showed that the human failsafe the military says exists, where humans must approve each AI target, doesn’t work. Humans are not going to do the hours of work to back-check all the data to see whether a site has changed use over the years during a hot conflict.
ICE runs on it. The IRS now runs on it. The Pentagon runs on it. The NYPD and LAPD run on it. The Israel Defense Forces run on it while they flatten Gaza.
All Palantir. The company is named after the palantíri - the seeing stones in The Lord of the Rings that let their holders watch everything, everywhere, all at once but that also turned the users insane and evil.
Now we move on to political claims made about Palantir and its co-creator/leader Peter Thiel:
Three: the ideological project of a faction that openly wants democracy to end. The chairman wrote in 2009 that freedom and democracy are no longer compatible. The CEO just published a book arguing that postwar denazification was a mistake and that some cultures are “regressive.” They bankroll a blogger who defends slavery. They helped install the Vice President of the United States. This is not a company that happens to have bad politics. The bad politics are the product roadmap.
And the fourth point, let’s not forget that Palantir got a major boost early on with a major investment by Jeffrey Epstein. Maybe that is irrelevant guilt by association. Maybe. Instead of relying on these general facts or beliefs about Palantir, however, let me move to a summary of the worst points in the company’s own 22-point manifesto:
o Soft power has its limits; soaring rhetoric alone requires something more than moral appeal. It requires hard power, and hard power, in this century, will be built on software.
o “The question is not whether A.I. weapons will be built; it is who will build them and for what purpose.” [It’s a “space race,” and races can be careless.]
o National service [in the US military] should be a universal duty.
o We should show far more grace toward those who have subjected themselves to public life. [Don’t they typically, as with the Epstein Files, get more grace than they deserve as they cover for themselves and all of congress and as the executive branch joins in the cover?]
o One age of deterrence, the atomic age, is ending, and a new age of deterrence built on A.I. is set to begin.

o No other country in the world has advanced progressive values more than this one [America].
o American power has made possible an extraordinarily long peace. [Which peace? The Korean War? The Vietnam War? The Balkan Wars? The Afghanistan/Taliban/al Qaeda War? The Persian Gulf War? The Iraq War? The assassination of Libya’s Gaddafi? The Syrian War? The Gaza War? The Lebanon War? The Iran War 1.0? The Iran War 2.0? The Venezuelan Takeover? Trump’s threatened wars/takeovers of Cuba, Canada and Greenland? Those are just recent wars where the US was at the center, not to mention many shorter skirmishes or wars where other nations are at the center. So much peace, just like “so much winning.” Please stop creating so much peace. We can hardly stand it. There is so much that I lose track of it all.]
o “The postwar neutering of Germany and Japan must be undone.” [Note that a remilitarized Germany and Japan are massive new defense-software markets. That ideology conveniently functions as Palantir’s sales funnel.]
o The ruthless exposure of the private lives of public figures drives far too much talent away from public service. The public arena - and the shallow and petty assaults against those who dare to do something other than enrich themselves - have become so unforgiving that the republic is left with a significant roster of ineffectual, empty vessels whose ambition one would forgive if there were any genuine belief structure lurking within. [Sounds like a plea to leave that nasty Epstein in place so that it doesn’t expose people like Peter Thiel who benefited from Epstein’s investment in Palantir. According to this manifesto, the public should just stop whining thanklessly about rapacious billionaires who are trying to serve the public good. Of course, the billionaires and their pocket politicians could just stop being so corrupt; then they would have little to fear from the petty public digging into their ethics or illegal behavior.]
o The caution in public life that we unwittingly encourage is corrosive. [Is it? Or is the corrosion in public servants from too much power and too much billionaire influence causing a rise in caution?]
o The pervasive intolerance of religious belief in certain circles must be resisted. The elite’s intolerance of religious belief is one of the most telling signs that its political project constitutes a less open intellectual movement than many within it would claim. [Who are these “elite”? Are they not people like Peter Thiel and Elon Musk and those developing AI, as well as the big AI political proponent Donald Trump?]
o Some cultures have produced vital advances; others remain dysfunctional and regressive. [Would the dysfunctional ones include particularly Trump’s version of America, which is hellbent on creating empire with Trump at the center? Who gets to rank the good ones from the regressive ones? I presume it would be the right elites like Peter Thiel.]
It is points like the latter that should make one fearful about these people being in charge of directing our move into the next imperium under the guidance of their AI. Now, these billionaire bonanza corporations want only your good, not their own. They are altruistic. So we are to believe. That is why Palantir …
o Holds £670m in UK government contracts. It has also hired dozens of senior officials from the departments awarding them – raising what transparency experts call an ‘acute’ corruption risk….
o Palantir has recruited 32 UK government and public sector officials including leaders of AI strategy from both the Ministry of Defence and the NHS. (The Nerve)
Ah yes, the revolving door between big government and big corporations. The Nerve has discovered a “revolving door” that has led to dozens of highly experienced UK government officials, former ministers, intelligence service chiefs and members of the House of Lords taking up key roles in the controversial Silicon Valley surveillance tech company co-founded by Peter Thiel, the libertarian friend and ally of Donald Trump.
There is no way companies like Palantir would plant their own people in key government regulatory bodies or promise people in regulatory bodies lucrative positions at Palantir if they do the right things, right? This is exactly the kind of corruption we should all be carping about, the very scrutiny Thiel says we should just shut up about because worrying about ethics keeps the best and brightest out of government. The Nerve’s new findings reveal the hidden levers of power that Palantir has accessed in the UK government, the Silicon Valley company’s second biggest client.
If only the miserable people of the republic would stop ruthlessly exposing these kinds of conflicts of interest, we’d get a lot more valuable people participating in governance over the mega-rich corporations they came from. There is just too much public caution, complains the beneficent billionaire.
Since 2012, Palantir has hired personnel from across the top tiers of the Ministry of Defence, Department of Health and Social Care, NHS, Home Office, Foreign Office, UK Health Security Agency, Crown Commercial Service, secret service and Downing Street.
It has also hired from mid-ranking roles in various government departments, the NHS and from the civil service – including from the UK Health Security Agency, NHS Digital and the Office for Nuclear Regulation. According to Bloomberg reporter Katrina Manson in her new book Project Maven, this is a carefully designed strategy: “Palantir deliberately targets employees who have had hands-on experience of its software and who understand the culture of its biggest customers.”
