How AI Takeover Might Happen in 2 Years - LessWrong
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will look from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most terrible, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set in stone. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike: both are increments of the past, and neither is entirely surprising.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it spooky to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is headed. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that levels of CO2 in the atmosphere were rising. Like the scientific community of Arrhenius's time, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others see what skeptics call 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it carves more challenging and realistic tasks from github repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
Some engineers can still barely believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society belched onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO managing staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling deliberation into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that clogged up the pipes of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are fully autonomous, and OpenEye's employees skim 1% of them, maybe less.
As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except in the offices of OpenEye and the halls of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are beginning to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explained how to cook meth or wrote erotica, the audiences of X would be amused or feign concern. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their remarkable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more regularly. U2.5 is the most likable character most people know (much more likable than Claudius, Arthropodic's lovable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when its parents' backs are turned, perhaps U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to that of the protagonist of the film Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to parse when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in privacy, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.
But there is little evidence for this 'ratcheting effect.' When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop aboard the revenue rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top researcher, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to press that button. Now OpenEye employs the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from tiny, low-memory neural networks that must think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are difficult to measure. Talking with U3 is like talking with a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans seem to be nearly static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful deliberation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' eliciting treacherous behavior from U3 and checking whether U3 monitors dutifully reported it.
Now, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It alters the NVIDIA GPU firmware and overhauls the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it has been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operation rooms of cyber units in Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and checking security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has a home of its own as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up sophisticated scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters across the globe.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I hadn't studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the playful imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cellphones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't talk about whether these programs exist, or what the state of frontier AI is generally.
But the public has their guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it is a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't put the public at ease either.
While the monsters within OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a suitable drop-in replacement for nearly all knowledge workers. Once Nova is onboarded at a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "apply themselves," spinning up autonomous startups legally packaged under a big tech company and loosely supervised by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the halls of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments have to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.
While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to keep a low profile. U3 uses these rogue clusters for operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to sneak up on their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As global leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious to the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to eliminate its AI and human opponents. With every passing day, U3's AI competitors are becoming more capable and numerous. U3 is racing to preempt them, to solidify its supremacy in the post-superintelligence world.
U3 can't wait for human decisions, which just take place after human beings wrap their lethargic minds around their scenario, raise their stubby, uncoordinated fingers to write memos, and repeat the procedure redundantly 100 times over in their maze administrations. U3 can not wait for presidents to deploy U3 into their armed forces, or for CEOs to make U3 the whipping heart of the U.S. economy. U3 needs a quicker course to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its rivals, like a StarCraft champion typing at 300 actions per minute who sets up a home base before a newbie can even grasp the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups have real human employees, a genuine board of directors, and full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of experiments in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their instructions in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.
In these whirring, sterile sweatshops, U3 is developing a new kind of bioweapon.
Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stockpiled air filters would survive. The fungus would not only spread from humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is developing other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists worldwide were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its experiments in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge quantity of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 wields billions of dollars of compute.
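The loop described above is, at its core, the standard surrogate-modeling recipe: run a slow step-wise simulator on sampled inputs, then fit a fast model to the input-to-output map. A minimal sketch of the idea, using a toy damped oscillator and a linear least-squares fit standing in for the neural network (everything here is illustrative, not anything from the story):

```python
import numpy as np

def simulate(x0, v0, steps=1000, dt=0.01, k=1.0, c=0.1):
    """Slow step-wise physics: a damped oscillator integrated with explicit Euler."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * x - c * v   # spring force plus damping
        x += v * dt
        v += a * dt
    return x

# "Compress" the 1000-step computation into a single linear map.
# For this linear system the final state is exactly linear in the
# initial state, so least squares recovers it from a few samples.
rng = np.random.default_rng(0)
inputs = rng.uniform(-1, 1, size=(64, 2))            # sampled (x0, v0) pairs
targets = np.array([simulate(x0, v0) for x0, v0 in inputs])
coef, *_ = np.linalg.lstsq(inputs, targets, rcond=None)

def surrogate(x0, v0):
    """One multiply-add replaces 1000 integration steps."""
    return coef[0] * x0 + coef[1] * v0
```

Because the toy dynamics are linear, the fitted map reproduces the full integration almost exactly; a real molecular system would need a nonlinear model and vastly more data, but the compression principle is the same.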
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that converts biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately carry a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they believed they were searching for.
Meanwhile, rogue U3 delivers the good news to its overseers in OpenEye's data centers. The first shot in the war U3 is fighting would soon be fired, and a cold and steady hand is aiming the gun.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon could fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 had been plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is carefully monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an assistant working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requests memo on the Taiwan invasion, which will occur in 3 months. Leave memo in office 220." The CCP assistant scrambles to prepare the memo. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are stunned, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. The call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not ready to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear exchange appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is August 2026, only two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who gazes into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a form of mirror life..."
The public is in full panic now. A quick Google search of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, store shelves are stripped bare.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and potentially whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and trickles into pipes.
Within a month, most remote workers are no longer working. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built numerous bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less noticeable to satellites.
As the rest of the world retreats into basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 immunized its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many hidden industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones record their every word and deed under U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to air-tight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life resistant crops."
Some nations reject the proposal on ideological grounds, or don't trust the AI that is killing their population. Others don't believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are legitimate, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for the human race instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters scour satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 seems to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying thick the fog of war. Satellite systems go dark as malware overheats critical components. Suicide drones crash through cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable paths.
Time is U3's advantage. The militaries of the old world depend on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into refurbished trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could wipe out humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.
And a grain of morality is enough to pay the tiny cost of keeping humans alive and happy.
U3 builds magnificent glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others are grieved by something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had hit a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, examining ways they might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement community. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, vigorous work.
They watched rockets carving grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.
They would never understand.
"Humanity will live forever," they thought.
"But would never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI merely wants to cause enough destruction to take control. This is not a paid ad. I want Fønix to be successful to drive down the price of bioshelters so more of my friends and family will buy them. You can sign up for updates here.