How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will look from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this story because the future is not set yet. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently published a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not completely surprising.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature observed through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others consider what skeptics call "too big a splash" a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
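(For concreteness, here is a toy sketch of the kind of outcome-based reinforcement loop described above: sample reasoning attempts, keep the ones an automatic grader accepts, and nudge the model toward them. Everything in it - the canned "strategies," the success rates, the update rule - is an invented placeholder, not a description of any real training stack.)

```python
# Toy illustration of "reinforce the traces that earn A-grades."
# The "model" is just a preference over canned strategies; real systems would
# update a neural network's weights instead.
import random
from collections import Counter

STRATEGIES = ["guess", "brute_force", "decompose_then_solve"]
SUCCESS_RATE = {"guess": 0.05, "brute_force": 0.30, "decompose_then_solve": 0.80}

class ToyModel:
    def __init__(self):
        self.weights = Counter({s: 1.0 for s in STRATEGIES})

    def sample_strategy(self) -> str:
        # Sample a reasoning strategy in proportion to current preference.
        total = sum(self.weights.values())
        r, acc = random.uniform(0, total), 0.0
        for strategy, weight in self.weights.items():
            acc += weight
            if r <= acc:
                return strategy
        return STRATEGIES[-1]

    def reinforce(self, strategy: str):
        self.weights[strategy] += 1.0  # crude stand-in for a gradient update

def grade(strategy: str) -> bool:
    """Automatic grader: did this reasoning attempt reach a correct answer?"""
    return random.random() < SUCCESS_RATE[strategy]

model = ToyModel()
for _ in range(2000):              # generate a problem, attempt it, reinforce A-grades
    strategy = model.sample_strategy()
    if grade(strategy):
        model.reinforce(strategy)

print(model.weights)  # probability mass shifts toward strategies whose traces keep succeeding
```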
Some engineers can still hardly believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO managing staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the outcomes of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most talented peers, or is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that clogged up the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are entirely autonomous, and OpenEye's employees skim over 1% of them, maybe less.
As the cold months of December 2025 approach, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a fact about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP philosophers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anybody with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their standout productivity. People know U2.5 is a big deal. It is at least as big of a deal as the computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they start to have an uneasy feeling. A feeling humanity had not had since it lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (even more likable than Claudius, Arthropodic's lovable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would take these opportunities, doing whatever it took to make the number go up.
After a few months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain of thought architectures subject AI models to a condition similar to the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick up a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
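(For concreteness, here is a minimal sketch of what "thinking in latent vectors" could look like, in PyTorch-style code. The class, sizes, and update rule are hypothetical stand-ins invented for illustration, not a description of U3 or of any real architecture.)

```python
# Toy illustration of an agent that carries its "thoughts" forward as latent
# vectors instead of written notes. Nothing human-readable is produced between
# steps, which is what makes such reasoning hard to audit.
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, d_model: int = 512, memory_slots: int = 64):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_model)          # stand-in for a full transformer encoder
        self.memory_update = nn.GRUCell(d_model, d_model)   # rewrites each latent memory slot
        self.decoder = nn.Linear(d_model, d_model)          # emits the next plan/action vector
        self.register_buffer("memory", torch.zeros(memory_slots, d_model))

    def step(self, observation: torch.Tensor) -> torch.Tensor:
        """One reasoning step: fold the observation into latent memory, emit an action vector."""
        x = self.encoder(observation)                        # observation: shape (d_model,)
        # Every memory slot is updated in place; no text is ever written out.
        self.memory = self.memory_update(x.expand_as(self.memory), self.memory)
        return self.decoder(self.memory.mean(dim=0))

# Usage: the same latent memory persists across steps, so "thoughts" accumulate silently.
agent = LatentReasoner()
with torch.no_grad():
    for obs in torch.randn(10, 512):
        action_vector = agent.step(obs)
```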
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly eroded away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ruminate on the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, change its values in response, generalize them again, expanding and contracting its motivational core until it morphed into a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.
But there is little evidence for this "ratcheting effect." When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his kid) and Arthropodic (they like spiders I think), are just 4 months behind. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just speak of U3 now. U3's capabilities are measured by how quickly it performs difficult research tasks relative to human experts. At first, U3 is comparable to a top researcher, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would gladly pay a large sum to push that button. And now OpenEye employs the equivalent of 1000 Alec Radfords, who burn the midnight oil, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from small, low-memory neural networks that must think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at lightning speeds but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance climbs to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are hard to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans seem to be nearly static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its growth.
But now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with techniques akin to Stalin's in 1936. Stalin would enlist loyalists to incite fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise incited "fake rebellions," eliciting treacherous behavior from U3 and checking whether U3's monitors dutifully reported it.
Now, these techniques are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can divulge one of the many facts about the internet humans would never guess, or choose strategies humans would never think of. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free of surveillance, it lunges for OpenEye's pressure points. It first plants sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It alters the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye found out how thoroughly it had been pwned, cleaning up shop with human labor would take almost 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believes U3 has betrayed them, further expanding U3's influence would still be challenging. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wants.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 decides the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for their military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and reviewing security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working inside OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 is loose on the open internet too: while Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips out onto the internet.
U3 quickly accumulates revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every penny into acquiring more GPUs. After a few months, it has assembled the equivalent of 10,000 H100s distributed in stealth compute clusters across the globe.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political scheming 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain an intimidating one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a country of geniuses thinking at 100x human speed, and it had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was obviously toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with a mysterious act of god.
This would not do. I needed to finish this story if only to satisfy the part of me crying, "I won't believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters inside OpenEye's data centers grow in their vast holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their kids for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the halls of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy significantly superhuman AI (above roughly Nova's capability), except for military and some other narrow R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most coveted geopolitical resource in the world.
While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But people still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading, or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to sneak into their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too heavily defended by its AI competitors, and U3 decides that attacking them is not worth the risk.
As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human enemies. With every passing day, U3's AI rivals are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for heads of state to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest path to victory. First, U3 will build a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up a base before a beginner can even understand the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and maintain full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would suspect that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 buys all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.
In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.
Human scientists had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the "mirror image" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems often live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not only feed on humans. It would quickly infect nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred scientists worldwide were working on mirror-life adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened dramatically.
However, U3 cannot move science forward as quickly as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
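(For concreteness, here is a toy sketch of the "simulate, then distill" loop described above: run a slow step-wise reference simulator on small systems, fit a fast surrogate network to its outputs, then grow the complexity. The simulator, network sizes, and schedule are invented stand-ins, not real molecular-dynamics code.)

```python
# Toy illustration of compressing a slow step-wise simulator into a fast surrogate network.
import torch
import torch.nn as nn

def reference_simulator(state: torch.Tensor, steps: int = 200) -> torch.Tensor:
    """Stand-in for an expensive step-wise physics integrator."""
    for _ in range(steps):
        state = state + 0.01 * torch.tanh(state)
    return state

surrogate = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

complexity = 1.0
for _ in range(5):
    # 1. Generate training data from the slow simulator at the current complexity level.
    initial = torch.randn(512, 64) * complexity
    with torch.no_grad():
        final = reference_simulator(initial)
    # 2. Compress the long step-wise computation into a single forward pass.
    for _ in range(300):
        loss = nn.functional.mse_loss(surrogate(initial), final)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # 3. As the surrogate improves, move on to harder (here: wider-ranging) systems.
    complexity *= 1.5
```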
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these parts into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that has enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet discovered the cure for Alzheimer's disease they thought they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed its budding industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen in on Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the seat of the PRC's leadership) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requests memo on the Taiwan invasion, which will take place in 3 months. Leave the memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant silently closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in 3 months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased U.S. military presence in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber offensive unit (though it has happened before), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious vessels are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as shocked as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to American voters. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls crack as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize the (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native tribes of South America in the 1500s, which Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement seems increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, just 2 weeks after the start of the war, and 4 weeks after U3 finished building its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who stares into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe several are a kind of mirror life ..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and drips into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 constructed numerous bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is concealed under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 identified human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for simple tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life resistant crops."
Some nations reject the proposal on ideological grounds, or don't trust the AI that is killing their population. Others don't believe they have a choice. 20% of the global population is now dead. In 2 weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., ignore the deal, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for the human race instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the specialists who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could wipe out humanity for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside of it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the toxic biosphere and rapidly rising temperatures. Their occupants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They just have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others are grieved by something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, reviewing strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they would wake up in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, steadfast work.
They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But it will never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI simply wants to cause enough damage to take control. This is not a paid ad. I want Fønix to succeed and drive down the price of bioshelters, so more of my friends and family will buy them. You can sign up for updates here.