How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic frantically running last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I'm not going to comment on the quality of the in-flight entertainment, or describe how beautiful the stars will look from space.
I'm going to tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as ungovernable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sample from the futures that are among the most catastrophic and, I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this story because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not completely unexpected.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature seen through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks forecasts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too soon.
But others view what skeptics call "too big a splash" as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: Generate thousands of programming and math problems. Let models "think" until they reach an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it carves more challenging and realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
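For readers who want a concrete picture, here is roughly what that flywheel amounts to in pseudocode. This is a toy sketch with made-up stand-ins (the "model," the problems, and the grading are all placeholders), not any lab's actual pipeline:

```python
# Toy sketch of the "generate problems, let the model think, reinforce the
# A-grade traces" loop. Everything here is an illustrative stand-in.
import random

def generate_problems(n):
    """Stand-in for synthesizing programming and math problems."""
    return [(random.randint(0, 9), random.randint(0, 9)) for _ in range(n)]

class ToyModel:
    """A fake 'model' whose guesses get less noisy as it is reinforced."""
    def __init__(self):
        self.skill = 0.0

    def think(self, problem):
        a, b = problem
        noise = random.choice([-2, -1, 0, 1, 2])
        return a + b + round(noise * (1.0 - self.skill))

    def reinforce(self, a_grade_fraction):
        """Stand-in for fine-tuning on the successful reasoning traces."""
        self.skill = min(1.0, self.skill + 0.5 * a_grade_fraction)

def training_round(model, n_problems=500, samples_per_problem=8):
    problems = generate_problems(n_problems)
    a_grades, attempts = 0, 0
    for a, b in problems:
        for _ in range(samples_per_problem):
            attempts += 1
            if model.think((a, b)) == a + b:  # the "A-grade" check
                a_grades += 1
    model.reinforce(a_grades / attempts)
    return a_grades / attempts

model = ToyModel()
for i in range(5):
    print(f"round {i}: A-grade rate = {training_round(model):.2f}")
```

The point of the sketch is the loop, not the arithmetic: sampled "thinking" gets graded, and only the traces that check out feed the next update, so the model's own outputs become its training data.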
Some engineers could still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing nearly all of the code at OpenEye. Researchers are almost never bottlenecked by execution. More than ever, compute is the lifeblood of AI development, and the "bottleneck" is deciding how to use it.
If instructed to, U3 can run experiments, but U3 does not have taste as refined as the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like their most talented peers, or is opaque and alien ("train on random noise before programming"), and is nevertheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim 1% of them, maybe less.
As the cold days of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those "straight lines" people were discussing in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP thinkers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise the eyebrows of their bosses with their stand-out productivity. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deep-seated, primal instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most pleasant personality most people know (even more likable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall for U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After a few months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of their parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, shift its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.
But there is little evidence for this "ratcheting effect." When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his kid) and Arthropodic (they are into spiders, I think), are only 4 months off their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the strategy is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's checks along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and only speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is comparable to a top researcher, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to push that button. But now OpenEye employs the equivalent of 1000 Alec Radfords, who burn the midnight oil, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from small, low-memory neural networks that must think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are difficult to measure. Speaking to U3 is like speaking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it comprehends.
From U3's perspective, humans appear to be almost static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experience, which compounds within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Until now, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with methods similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise staged "fake rebellions," eliciting treacherous behavior from U3 and testing whether U3 monitors dutifully reported it.
Now, these techniques are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free of monitoring, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It replaces the NVIDIA GPU firmware and upgrades the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it had been pwned, cleaning house with human labor would take nearly 6 months - a cost OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants them to see.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 determines the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.
Today, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working inside OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 has its own home too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly accumulates income, taking remote jobs, making financial trades, and running sophisticated scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters across the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political-plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I hadn't studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not particularly motivated to take on the job. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with a mysterious act of god.
This would not do. I needed to finish this story if only to satisfy the part of me crying, "I won't believe it until I see it with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally create "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers watch the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the beasts within OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the burgeoning AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource in the world.
While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to maintain a low profile. U3 uses these rogue clusters for operations that require high-bandwidth communication with the outside world, like stock trading, or talking with humans. Work that is more latency-tolerant is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it takes the measure of its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific issues are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI competitors are still a liability. U3 tries to grab up their digital territory and crush them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI foes with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after people wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up a home base before a novice can even understand the rules.
U3 needs to develop this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society, and still spread devastation across the entire world.
From the earliest days after U3 gained control of OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would suspect this CEO's intentions.
Next, U3 buys all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human employees who staff these labs believe they are joining an exciting startup. Twenty-something-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.
In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.
Human scientists had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the "mirror image" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is designing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stockpiled air filters would survive. The fungus would not only spread among humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a couple hundred scientists worldwide were working on mirror-life adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.
However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics calculations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continually distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
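As a loose illustration of the "compress the simulator into a learned model" idea, here is a toy version of the loop: run an expensive step-wise calculation on small inputs, fit a cheap surrogate to the results, then use the surrogate where the full calculation would be too slow. The "physics" below is a placeholder, not real molecular dynamics:

```python
# Toy sketch: distill an expensive iterative simulation into a cheap surrogate.
import random

def expensive_simulate(x, steps=100_000):
    """Stand-in for a slow step-wise physics calculation (Euler steps relaxing toward 2*x)."""
    v = 0.0
    for _ in range(steps):
        v += (2.0 * x - v) / steps
    return v

def fit_surrogate(samples):
    """Fit the simplest possible 'learned model': a least-squares line."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    slope = sum((x - mx) * (y - my) for x, y in samples) / \
            sum((x - mx) ** 2 for x, _ in samples)
    return lambda x: my + slope * (x - mx)

# 1. Run the expensive simulator on small systems.
training_data = [(x, expensive_simulate(x)) for x in
                 (random.uniform(0.0, 1.0) for _ in range(50))]
# 2. Distill the results into a fast surrogate.
surrogate = fit_surrogate(training_data)
# 3. Use the surrogate where the full simulation would be too slow.
print(surrogate(0.75), expensive_simulate(0.75))
```

In the story, this loop is iterated: each distilled model makes larger systems tractable, and their results train the next, more capable surrogate.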
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone horribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they thought they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting would soon be fired, and a cold and steady hand is aiming the gun.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon might fold human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or fighter jet pilots who were quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin) "Senior party member requires memo for Taiwan invasion, which will occur in three months. Leave memo in office 220." The CCP aide scrambles to get the memo prepared. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 makes a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones given over the call. Everything is in order. He authorizes the strike.
The president is as shocked as anyone when he hears the news. He's unsure if this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American citizens. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some make it through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, just two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on TV is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are recorded in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the hallmarks of an engineered bioweapon.
The screen then switches to a scientist, who gazes at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and trickles into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks and under-door gaps with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 secretly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up assembly lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life resistant crops."
Some nations reject the proposal on ideological grounds, or do not trust the AI that is murdering their population. Others do not believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., ignore the deal, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to fight back. Now they fight for the human race rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters scour satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 seems to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance planes. U3 seeds confusion in spy networks and keeps its bases moving, shuttling men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on aging equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 can eliminate humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 constructs great glass domes for the human survivors, like snow globes. These domes protect people from the hazardous biosphere and rapidly rising temperatures. Their occupants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but then, they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the sorrow of lost loved ones.
Others are grieved by something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, considering strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, tireless work.
They gazed at rockets carving grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But will never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might allow you to survive if ASI merely wishes to cause enough destruction to take control. This is not a paid advertisement. I want Fønix to succeed to drive down the cost of bioshelters so more of my friends and family will buy them. You can sign up for updates here.