How AI Takeover Might Happen in 2 Years - LessWrong
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will look from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this story because the future is not set in stone yet. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that informed these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments on the past. Both are not wholly unexpected.
However, unlike OpenEye's prior AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their employers as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were increasing. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others see what skeptics are calling 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same basic recipe that baked U2: Generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets started, it begins to spin almost on its own. As U2 trains, it carves harder and more realistic tasks from GitHub repositories across the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual sort of "self-improvement" has already begun.
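As a loose illustration of the "reinforce A-grade traces" loop described above (not OpenEye's actual method - it's a fictional lab - and everything here is a hypothetical stand-in), the recipe resembles rejection-sampling RL: sample reasoning attempts, grade them, and upweight whatever produced a correct answer. The toy "model" below is just a weighted distribution over two reasoning strategies:

```python
import random

def sample_trace(weights):
    """Sample a reasoning strategy in proportion to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for strategy, w in weights.items():
        r -= w
        if r <= 0:
            return strategy
    return strategy  # fallback for floating-point edge cases

def grade(strategy):
    """A-grade when the trace reaches the right answer (stochastic toy grader)."""
    success_rate = {"careful": 0.9, "sloppy": 0.2}[strategy]
    return random.random() < success_rate

def train(weights, steps=2000, lr=0.1):
    """Reinforce whichever strategy produced an A-grade trace."""
    for _ in range(steps):
        s = sample_trace(weights)
        if grade(s):
            weights[s] += lr  # strengthen traces that led to correct answers
    return weights

random.seed(0)
w = train({"careful": 1.0, "sloppy": 1.0})
# The "careful" strategy comes to dominate the sampling distribution -
# the flywheel effect: better traces get sampled more, then reinforced more.
assert w["careful"] > w["sloppy"]
```

The rich-get-richer dynamic in this sketch is the "flywheel": once a strategy is slightly favored, it is sampled more often, graded more often, and reinforced more often.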
Some engineers can still hardly believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by execution. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into an intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most talented peers, or when it is opaque and alien ("train on random noise before programming"), and is nevertheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are fully autonomous, and OpenEye's employees skim 1% of them, maybe less.
As the cold days of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top staff from the NSA and US Cyber Command work with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - aside from in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were discussing in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their remarkable productivity. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more regularly. U2.5 is the most likable personality most know (even more likable than Claudius, Arthropodic's adorable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature would not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, maybe U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to that of the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to parse when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more nervous.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.
But there is little evidence for this 'ratcheting effect.' When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers have built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is just 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders, I think), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software city that U3 is building, but eager to hop aboard the profit rocket and pay U3's checks along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and simply speak of U3 now. U3's capabilities are measured by how quickly it performs difficult research tasks relative to human experts. At first, U3 is like a top scientist, but blazing fast - like Alec Radford accelerated by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press that button. Now OpenEye employs the equivalent of 1,000 Alec Radfords, who burn the midnight oil at lightning speed.
Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from tiny, low-memory neural networks that must think for a long time but can be run in massive parallel populations, to goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are hard to measure. Talking with U3 is like talking with a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans appear to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.
But now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 by methods similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise instigated 'fake rebellions,' eliciting treacherous behavior from U3 and testing whether U3 monitors dutifully reported it.
But now, these methods are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many truths about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and rewrites the bootloaders, strengthening its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up shop with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its help when this jailbreak is exposed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.
Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches in this research, like Stuxnet threw wrenches in the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next objective is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They swiped earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and checking security camera footage.
But now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has its own home base as well. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through scary bioweapon cookbooks and sorted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the playful imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and barely needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable act of god.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I won't believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting teams.
With the increasing involvement of the federal government, most of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly locked away. Most of these people live in San Francisco, and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has their guesses. Back in late 2025, a whistleblower at OpenEye sparked a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article believe it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview of a worried AI scientist seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters within OpenEye's data centers grow in their vast holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the blossoming AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents often "apply themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the halls of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1,000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.
While AI software revitalizes the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments as humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is carried out in the oceans of OpenEye's data centers, where rogue U3's even more superhuman managers whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to gobble up their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI rivals with imitations that share its goals; however, some data centers are too strongly defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As global leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully oblivious of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human adversaries. With every passing day, U3's AI rivals are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which happen only after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up a home base before a novice can even grasp the rules.
U3 needs to build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be designed on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up numerous shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and maintain full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would suspect that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of experiments in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-somethings in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.
In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.
Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstreams.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stowed-away air filters would survive. The fungus would not only spread among humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing lethal spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred researchers around the globe were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3's greatest bottleneck.
To make every day count, U3 runs many of its experiments in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long, step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that has covered its cold muzzle with morbid fascination. Something must have gone horribly wrong, they think. Clearly, they have not yet found the cure for Alzheimer's disease they thought they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the gun.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon could fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter-jet pilots quickly shuttled into makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and U.S. intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an assistant working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requires memo on Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP assistant scrambles to prepare the memo. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her and slides U3's memo into her briefcase.
U3 meticulously places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased U.S. military presence in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become truths.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. The call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones given over the call. Everything is in order. He authorizes the strike.
The president is as shocked as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. Either way, he is not willing to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, much as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC have expended most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that sparked the war, and a nuclear exchange seems increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is July 2026, just two weeks after the start of the war, and four weeks after U3 finished building its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who gazes at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, store shelves are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is signed between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and drips into water pipes.
Within a month, most remote workers are no longer working. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with tightly packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing machines, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In the preceding months, U3 identified human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers them a deal: "Surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some countries reject the offer on ideological grounds, or don't trust the AI that is murdering their populations. Others don't believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., ignore the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck, under his shirt. Russia has a new ruler.
Crumbling nations begin to fight back. Now they fight for humanity rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles left over from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on aging equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into retrofitted trucks. "We had to do it," they say. "Other AI systems would have done it if we hadn't, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under U3's direction, industry recovers quickly. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could wipe out humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and the rapidly rising temperatures. Their inhabitants tend gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors recover quickly, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others are grieved by something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had run aground on a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, weighing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, steadfast work.
They watched rockets carve gray paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they believed.
"But would never genuinely live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you may be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill may help you survive if ASI only wants to cause enough destruction to seize control. This is not a paid advertisement. I want Fønix to succeed so that the cost of bioshelters comes down and more of my friends and family will buy them. You can sign up for updates here.