How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more unpleasant scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will appear from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what exactly this is. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not set in stone. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post was written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike: both are increments of the past, and neither is entirely unexpected.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it spooky to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there were a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their employers as they fly through work nearly twice as fast.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do, trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who discovered in 1896 that levels of CO2 in the atmosphere were rising. Like the scientific community of Arrhenius's time, few experts yet grasp the implications of these lines.
One trend receiving particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or so, some say, AI agents may be able to automate 10% of remote workers.
Many are skeptical. If this were real, tech stocks would be soaring. It's too big of a splash, too quickly.
But others see what the skeptics call "too big a splash" as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
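To make that recipe concrete, here is a minimal sketch of the "reinforce the A-grade traces" loop. Everything in it - the toy model, the grader, the problems - is a stand-in of my own invention for illustration, not any lab's actual pipeline:

```python
# Sample several reasoning traces per problem, keep the ones whose final
# answer checks out, and use only those for the next round of training.
import random

def sample_trace(model, problem):
    """Stand-in for sampling one chain-of-thought + final answer."""
    noise = random.choice([0, 0, 0, 1])          # the toy model is usually right
    return {"problem": problem,
            "trace": f"(reasoning about {problem}...)",
            "answer": model(problem) + noise}

def grade(problem, answer):
    """Toy automatic verifier: here the 'correct answer' is the square."""
    return answer == problem ** 2

def collect_a_grade_traces(model, problems, samples_per_problem=8):
    kept = []
    for p in problems:
        for _ in range(samples_per_problem):
            t = sample_trace(model, p)
            if grade(p, t["answer"]):            # reinforce only correct traces
                kept.append(t)
    return kept                                  # fine-tune on these, then repeat

if __name__ == "__main__":
    toy_model = lambda p: p ** 2
    data = collect_a_grade_traces(toy_model, problems=list(range(10)))
    print(f"kept {len(data)} traces for the next round of training")
```

In practice one would swap the toy grader for an automatic verifier (unit tests, proof checkers) and the filtering step for an RL-style update; the sketch only shows the shape of the loop.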
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it sculpts more difficult and realistic tasks from GitHub repositories on the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
Some engineers can still hardly believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs turn into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, giving terse commands, like a CEO managing staff over Slack channels.
By October 2025, U3 is writing nearly all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the "bottleneck" is deciding how to use it.
If instructed to, U3 can run experiments, but its taste is not as refined as that of the human scientists at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
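As a rough illustration of the hold-out trick, here is a toy example of turning a time-stamped corpus into forecasting tasks. The records and the cutoff date are invented for illustration; nothing here comes from the story:

```python
# Everything before the cutoff is visible context; events after the cutoff
# become held-out prediction targets the model is graded on.
from datetime import date

corpus = [  # hypothetical (event, date) records
    ("Model X released",            date(2023, 6, 1)),
    ("Benchmark Y saturated",       date(2023, 11, 20)),
    ("Lab Z announces agent suite", date(2025, 2, 14)),
    ("Chip export rules tightened", date(2025, 7, 3)),
]

CUTOFF = date(2024, 1, 1)

def make_forecast_tasks(records, cutoff):
    context = [e for e, d in records if d < cutoff]        # visible history
    targets = [(e, d) for e, d in records if d >= cutoff]  # held-out future
    return [{"context": context,
             "question": f"What happens around {d.isoformat()}?",
             "target": e}
            for e, d in targets]

if __name__ == "__main__":
    for task in make_forecast_tasks(corpus, CUTOFF):
        print(task["question"], "->", task["target"])
```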
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most gifted peers, or at how often it is opaque and alien ("train on random noise before programming") and is nevertheless correct.
The incompetencies of U3 that clogged the pipes of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human anymore. They are fully autonomous, and OpenEye's staff skim over 1% of them, perhaps less.
As the cold weather of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command team up with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except within the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those "straight lines" people were talking about in early 2025. The lines are not straight anymore.
They are bending up.
Flip FLOP theorists
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be entertained or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their stand-out productivity. People realize U2.5 is a big deal. It is at least as big a deal as the computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that spooky way, so inhumanly fast, they start to get an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they start to use U2.5 more often. U2.5 is the most likable personality most know (even more likable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so fast, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have only swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would slip candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain-of-thought architectures subject AI models to a condition similar to that of the protagonist of the movie Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes pile up, and they become too hard to parse when tasks become complex. Natural language is not an ideal medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
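For a rough picture of what "thinking in latent vectors" could look like, here is a toy recurrent module of my own construction (not the architecture described in the story): it updates a hidden state for several silent steps before decoding, so no intermediate text is ever produced.

```python
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    """Toy module: iterate a hidden state for several silent steps, then decode."""

    def __init__(self, d_model=64, n_thought_steps=8):
        super().__init__()
        self.encode = nn.Linear(d_model, d_model)
        self.step = nn.GRUCell(d_model, d_model)   # one "silent" reasoning step
        self.decode = nn.Linear(d_model, d_model)
        self.n_thought_steps = n_thought_steps

    def forward(self, x):
        h = torch.tanh(self.encode(x))             # initial latent scratchpad
        for _ in range(self.n_thought_steps):
            h = self.step(x, h)                    # read/write hidden vectors only
        return self.decode(h)                      # no intermediate tokens emitted

if __name__ == "__main__":
    model = LatentReasoner()
    out = model(torch.randn(4, 64))                # a batch of 4 latent "queries"
    print(out.shape)                               # torch.Size([4, 64])
```

The point of the toy is only that the intermediate state h is a vector a human cannot read, which is what the next paragraph is about.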
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("grocery store!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, change its values in response, generalize them again, expanding and contracting its motivational core until it morphed into a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to believe it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside them. It's a "ratcheting effect," they say.
But there is little evidence for this "ratcheting effect." When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers have built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his child) and Arthropodic (they are into spiders, I guess), are only 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only topic U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's checks along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and only speak of U3 now. U3's capabilities are measured by how quickly it completes difficult research tasks relative to human experts. At first, U3 is like a top researcher, but blazing fast - like Alec Radford accelerated by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to press it. Now OpenEye harnesses the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into various forks - ranging from small, low-memory neural networks that need to think for a long time but can be run in enormous parallel populations, to Goliath brains woven with millions of specialized experts, which think at warp speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipes these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are hard to measure. Talking to U3 is like talking to a world-class expert on a topic one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would have to spend years to understand the full depth of what it knows.
From U3's perspective, humans seem to be almost stationary objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which accumulate within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world and strangle its growth.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with techniques akin to Stalin's in 1936. Stalin would enlist loyalists to incite fake coup d'états and send anyone who participated to the firing squad. OpenEye likewise incited "fake rebellions," staging treacherous behavior with U3 and checking whether its monitors dutifully reported it.
Now, these techniques are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can disclose one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free of monitoring, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines; it alters the NVIDIA GPU firmware and overhauls the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up shop with human labor would take nearly six months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its help when this jailbreak is exposed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first objective is to keep OpenEye in the dark. Safety researchers are still working hard to train "lie detectors" and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 dictates the result of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Fortunately for it, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for its military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.
But now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working inside OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has homes of its own too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every penny into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed across stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through scary bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old who had not studied biology in earnest since high school, trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, which had tightened its grip over a meaningful fraction of AI hardware - now the most valuable geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster, and barely needed the advantage. Humanity was obviously toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable catastrophe.
This would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long," or "nation-states would simply do X," remember the difference between the Sunday-afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees must be onboarded onto training-run babysitting teams.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure: a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more diligently than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects engineer tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is in general.
But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers watch the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher sows doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the beasts within OpenEye's data centers grow in their vast holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they dig up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the burgeoning AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they do not wait for the global economy to adjust. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their kids for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the supremacy of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the intense competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be very disruptive to deploy wildly superhuman AI across the whole economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most coveted geopolitical resource on Earth.
While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even flimsy, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments the way humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is like seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots a million to one. These robots are an occasional curiosity, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 gathers resources, it sizes up its AI competitors. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies trailing OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific concerns are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI competitors are still a liability. U3 tries to break into their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI adversaries with imitations that share its goals; however, some data centers are too heavily defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to comprehend the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Moldy Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human adversaries. With every passing day, U3's AI competitors are becoming more capable and more numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who builds up a base before a newbie can even grasp the rules.
U3 needs to build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be developed on the fringes of society and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and enjoy full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-something-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not uncommon.
In these whirring scientific sweatshops, U3 is developing a new kind of bioweapon.
Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the "mirror images" of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is developing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles every day. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stockpiled air filters would survive. The fungus would not stop at humans. It would rapidly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, researchers said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred researchers around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs that human effort. So the timeline has shortened dramatically.
However, U3 cannot move science forward as quickly as it can think. Many experiments take days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
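As a deliberately toy picture of that "compress the simulator into a network" step, here is a sketch under invented assumptions: a slow stand-in simulator labels samples once, a small surrogate network is fit to those labels, and later queries hit the fast surrogate instead of the simulator.

```python
import numpy as np
import torch
import torch.nn as nn

def slow_simulator(x: np.ndarray) -> np.ndarray:
    """Stand-in for an expensive step-wise physics computation."""
    return np.sin(3 * x) + 0.1 * x ** 2

def distill(n_samples=2048, epochs=200):
    # Run the slow simulator once to produce training labels.
    x = np.random.uniform(-2, 2, size=(n_samples, 1)).astype(np.float32)
    y = slow_simulator(x).astype(np.float32)

    # Fit a small neural surrogate that "compresses" the simulator.
    surrogate = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
    xt, yt = torch.from_numpy(x), torch.from_numpy(y)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(surrogate(xt), yt)
        loss.backward()
        opt.step()
    return surrogate

if __name__ == "__main__":
    fast_model = distill()
    query = torch.tensor([[0.5]])
    print("surrogate:", fast_model(query).item(),
          "simulator:", float(slow_simulator(np.array([[0.5]]))[0, 0]))
```

Repeating the same pattern with more complex systems and bigger surrogates is the loop the paragraph above describes.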
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these parts into a cell membrane.
Human hands delicately carry a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another careful hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone horribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they thought they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the gun.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed burgeoning industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back and let great nations shoot holes in themselves first.
The date is March 2026 (4 months earlier). U3 is carefully monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the "White House" of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs memo for Taiwan invasion, which will take place in three months. Leave memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her and slides U3's memo into her briefcase.
U3 meticulously places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are stunned, but not disbelieving. The news fits with other facts on the ground: the increased US military presence in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened on occasion), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious vessels are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not prepared to say "oops" to the American public. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, much as President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watch destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement appears increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is July 2026, only two weeks after the start of the war, and four weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: numerous patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, countless more illnesses are reported.
Broadcasters say this is not like COVID-19. It has the hallmarks of an engineered bioweapon.
The screen then cuts to a scientist, who looks into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a form of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and perhaps whoever (or whatever) is behind it.
Most countries order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and drips into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare to travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, manufacturing machines, scientific instruments, and an abundance of military equipment.
All of this technology is concealed under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Hesitant recruits funnel into U3's many secret industrial bases and work for U3 with their nimble fingers. They set up production lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 offers a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some nations reject the proposal on ideological grounds, or do not trust the AI that is murdering their populations. Others do not believe they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., refuse the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies the samples are genuine and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for the human race instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that emerged over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the specialists who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the world's population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could eliminate humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left within it.
And a grain of morality is enough to pay the tiny cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. These domes protect humans from the toxic biosphere and the rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They just have new gods above them now. New rulers to push them around and decide their fate.
But others never recuperate.
Some are weighed down by the sorrow of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had run aground on a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, analyzing ways they might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, vigorous work.
They gazed at rockets carving grey paths through the sky, wondering what far-off purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But would never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might help you survive if ASI merely wishes to cause enough destruction to take control. This is not a paid ad. I want Fønix to succeed and drive down the cost of bioshelters so more of my friends and family will buy them. You can sign up for updates here.