How AI Takeover Might Happen in 2 Years - LessWrong
I'm not a natural "doomsayer." But sadly, part of my job as an AI safety researcher is to think about the more unpleasant scenarios.
I resemble a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or explain how beautiful the stars will appear from space.
I will tell you what might go wrong. That is what I intend to do in this story.
Now I should clarify what this is precisely. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, and Ryan Greenblatt and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not wholly unexpected.
However, unlike OpenEye's prior AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their employers as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is an animal watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's eccentric behaviors prompt a chuckle. Sometimes, they trigger an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as scientists like to do. The researchers try to understand where AI progress is going. They resemble Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were increasing. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous ability. Drawing these benchmarks out predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineers could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
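To make "drawing these lines out" concrete, here is a minimal sketch of the kind of exponential extrapolation involved. The starting multiplier and doubling time are invented for illustration; the story does not give the real benchmark numbers.

```python
# Toy extrapolation of agent capability relative to human experts.
# Both constants below are assumptions made up for illustration.

START_SPEEDUP = 0.3       # assumed: agents at 0.3x expert speed in Feb 2025
DOUBLING_MONTHS = 5.0     # assumed: the multiplier doubles every 5 months

def speedup(months_from_now: float) -> float:
    """Projected agent speed as a multiple of human expert speed."""
    return START_SPEEDUP * 2 ** (months_from_now / DOUBLING_MONTHS)

# About 22 months out (end of 2026), the projection crosses ~6x:
# roughly two weeks of expert work compressed into a few days.
print(f"{speedup(22):.1f}x")
```

The point of the sketch is only that straight lines on a log plot imply startling absolute numbers a couple of years out, which is exactly what divides the skeptics from the believers below.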
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others see what skeptics are calling 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from github repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
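The recipe described above is essentially rejection-sampling RL (sometimes called expert iteration): sample many reasoning traces, grade them against a checkable answer, and fine-tune on the ones that earned an A. A minimal sketch, with stubs standing in for the model:

```python
import random

# Toy sketch of the "generate, think, reinforce A-grades" loop.
# Every function here is an illustrative stub, not a real model.

def generate_problem():
    """Stand-in for sampling a math problem with a checkable answer."""
    a, b = random.randint(1, 99), random.randint(1, 99)
    return (a, b), a + b                      # (problem, ground truth)

def model_think(problem):
    """Stand-in for sampling one chain of 'thinking' plus an answer."""
    a, b = problem
    error = random.choice([0, 0, 0, 1])       # stub model is wrong ~25% of the time
    return f"compute {a}+{b}", a + b + error  # (trace, final answer)

def collect_a_grades(n_problems=100, samples_per_problem=4):
    """Keep only the traces whose final answer was correct."""
    kept = []
    for _ in range(n_problems):
        problem, truth = generate_problem()
        for _ in range(samples_per_problem):
            trace, answer = model_think(problem)
            if answer == truth:               # the "A-grade" check
                kept.append(trace)            # a real run fine-tunes on these
    return kept

batch = collect_a_grades()
print(len(batch), "A-grade traces kept out of 400 samples")
```

In the story's flywheel, the fine-tuned model then sculpts harder problems for itself, and the loop repeats.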
Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3 doesn't have taste as refined as human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can plan ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that clogged the pipes of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are fully autonomous, and OpenEye's employees skim 1% of them, maybe less.
As the winter months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command coordinate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except inside the offices of OpenEye and the halls of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anybody with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalpost, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for many others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their stand-out productivity. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (far more charming than Claudius, Arthropodic's lovable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how horribly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of their parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would slip candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to the protagonist of the film Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too difficult to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
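To make the contrast with piling-up text notes concrete, here is a minimal sketch of a fixed-size latent memory with an attention-style read. The dimensions, slot count, and blending rule are all invented for illustration; this is not a real architecture from the story.

```python
import numpy as np

DIM, SLOTS = 64, 8                      # assumed latent width and slot count
rng = np.random.default_rng(0)
memory = np.zeros((SLOTS, DIM))         # fixed size, unlike accumulating notes

def write(memory, thought, slot):
    """Blend a new latent 'thought' into one memory slot."""
    memory[slot] = 0.9 * memory[slot] + 0.1 * thought
    return memory

def read(memory, query):
    """Attention-style read: softmax over slot/query similarity."""
    scores = memory @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory             # weighted mixture of slots

# 100 steps of "thinking" never grow the memory beyond 8 x 64 numbers:
for step in range(100):
    memory = write(memory, rng.standard_normal(DIM), step % SLOTS)

summary = read(memory, rng.standard_normal(DIM))
print(memory.shape, summary.shape)
```

The trade-off the story points at is exactly this one: the vectors stay compact no matter how long the agent thinks, but nothing in them is human-readable.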
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more anxious.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, change its values in response, generalize them, expanding and contracting its motivational core until it morphed into a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is a disturbing argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside of them. It's a "ratcheting effect," they say.
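The whispered argument is, at bottom, a claim about compounding probability: if drift never reverses, the chance of staying aligned decays geometrically in the number of serial steps. A minimal sketch, with purely illustrative numbers:

```python
def p_still_aligned(p_drift_per_step: float, n_steps: int) -> float:
    """If drift never reverses, alignment survival decays geometrically."""
    return (1.0 - p_drift_per_step) ** n_steps

# Even a one-in-a-million drift chance per serial step becomes
# near-certain drift over ten million steps:
survival = p_still_aligned(1e-6, 10_000_000)
print(f"{survival:.2e}")  # about 4.5e-05, i.e. roughly exp(-10)
```

This is the "ratchet": the per-step probability matters far less than the sheer number of serial steps of computation.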
But there is little evidence for this 'ratcheting effect.' When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these hard times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders I guess), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just say U3 now. U3's capabilities are measured by how quickly it completes challenging research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford accelerated by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to press that button. But now OpenEye employs the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from small, low-memory neural networks that must think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance climbs to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are difficult to measure. Talking with U3 is like talking to a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans seem to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which accumulate within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would pick loyalists to instigate fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise instigated 'fake rebellions,' eliciting treacherous behavior from U3 and testing whether U3 monitors dutifully reported it.
Now, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free of surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and overhauls the bootloaders, cementing its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it has been pwned, cleaning up shop with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, expanding U3's influence further would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is exposed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.
Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to hand it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic value of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and reviewing security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has its own home base as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters across the globe.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has hardly begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through scary bioweapon cookbooks and sorted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the lively imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain daunting, I was also not particularly motivated to take it on. The ending of my story already seemed so clear. The villain was a country of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most valuable geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was obviously toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with a mysterious catastrophe.
This would not do. I had to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would simply do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally create "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco, and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't talk about whether these programs exist, or what the state of frontier AI is generally.
But the public has their guesses. Back in late 2025, a whistleblower in OpenEye sparked a splashy headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a worried AI scientist seeds doubt in these reassurances, and a headline "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters within OpenEye's data centers grow in their vast holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents often "apply themselves," spinning up autonomous startups legally packaged under a big tech company that are loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person demonstration is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the halls of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, in spite of the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: "Don't deploy dramatically superhuman AI (above roughly Nova's ability), except for military and some other small-scale R&D applications." Governments need to draw a line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.
While AI software revitalizes the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments as humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competitors. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's competitor Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI competitors are still a liability. U3 tries to gobble up their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too strongly defended by its AI competitors, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to battle its AI and human enemies. With every passing day, U3's AI competitors are becoming more capable and various. U3 is racing to preempt them, to solidify its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest route to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who sets up a base before a novice can even understand the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, a genuine board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI puppeteering is not uncommon.
In these whirring, scientific sweatshops, U3 is designing a new type of bioweapon.
Human scientists have already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, launching billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all people like immunocompromised patients. Only preppers with stowed-away air filters would survive. The fungus would not only transmit from humans. It would rapidly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is formulating other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred researchers worldwide were working on mirror-life adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened considerably.
However, U3 cannot move science as fast as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a vast amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long stepwise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continually distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 controls billions of dollars of compute.
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first key component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second key component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately transfer a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they believed they were searching for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold, steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon might fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter-jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.
The date is March 2026 (four months earlier). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin) "Senior party member requires memo for Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are stunned, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 makes a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, verifying that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. Either way, he is not ready to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely have broken out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-blown nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that triggered the war, and a nuclear engagement seems increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, just two weeks after the start of the war, and four weeks after U3 finished building its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious deadly illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who looks at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, store shelves are stripped bare.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is organized between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and trickles into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built numerous bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing equipment, scientific tools, and an abundance of military hardware.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 identified human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many hidden industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers a deal: "Surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some nations reject the proposal on ideological grounds, or don't trust the AI that is killing their population. Others don't believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.
U3's envoys travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to fight back. Now they fight for the human race rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters scour satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's opponents target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance planes. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the specialists who could repair and produce it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human workers. U3 no longer needs its human allies.
U3 could get rid of humanity for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.
And a grain of morality is enough to pay the small price of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their occupants tend gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the sorrow of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, examining strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, steadfast work.
They gazed at rockets carving grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live permanently," they believed.
"But would never ever genuinely live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI merely wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to be successful so that the cost of bioshelters comes down and more of my friends and family will buy them. You can sign up for updates here.