How AI Takeover May Happen in 2 Years - LessWrong
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will look from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what exactly this is. It's not a prediction. I don't expect AI progress to be this fast or as ungovernable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most catastrophic and, I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this story because the future is not set in stone yet. I hope that, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike: both are increments on the past, and neither is entirely unexpected.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there were a ghost at the keyboard. A fraction of workers with form-filling jobs raise their bosses' eyebrows as they fly through work nearly twice as fast.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they prompt an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the level of CO2 in the atmosphere was rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks predicts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents might be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big a splash, too fast.
But others see what the skeptics call 'too big a splash' as a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
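The recipe above can be sketched in miniature. This is my own toy illustration, not OpenEye's (fictional) pipeline: the "model" here is a random guesser, but the loop - sample reasoning traces, grade the final answers, keep only the A-grade traces as the next round's training data - is the shape of the idea.

```python
import random

random.seed(0)

def sample_trace(problem):
    """Stand-in for a model 'thinking': returns a reasoning string and a
    noisy final answer (a real system would sample from the model)."""
    guess = problem["answer"] + random.choice([-1, 0, 0, 1])
    return {"reasoning": f"scratch work for {problem['q']}", "answer": guess}

def collect_a_grade_traces(problems, samples_per_problem=8):
    """Sample several traces per problem and keep only those whose final
    answer grades as correct; these become the reinforcement data."""
    kept = []
    for p in problems:
        for _ in range(samples_per_problem):
            trace = sample_trace(p)
            if trace["answer"] == p["answer"]:  # grade against the known answer
                kept.append({"q": p["q"], **trace})
    return kept

problems = [{"q": "2+2", "answer": 4}, {"q": "3*3", "answer": 9}]
dataset = collect_a_grade_traces(problems)
# Every kept trace ends in a correct answer; a real run would now
# fine-tune the model on `dataset` and repeat the loop.
```

The self-reinforcing "flywheel" the story describes comes from repeating this loop: each round of fine-tuning makes correct traces more likely, which makes the next round's dataset richer.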
This process is repeated over and over, and once the flywheel gets going, it begins to spin almost on its own. As U2 trains, it carves harder and more realistic tasks out of GitHub repositories across the internet. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had begun.
Some engineers could still barely believe this worked. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. The runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then into $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the model researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, issuing terse commands, like a CEO orchestrating staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3's taste is not as refined as that of the human scientists at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to bore into the vast fields of algorithms to mine efficiency improvements.
But these scientists are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same technique as before, distilling the pondering into an intuition. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the outcome of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now amazed at how often U3's advice sounds like that of their most talented peers, or is opaque and alien ("train on random noise before programming") and nevertheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are beginning to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are no longer requested by a human. They are entirely autonomous, and OpenEye's employees skim over 1% of them, perhaps less.
As December 2025 approaches, winter clouds roll over San Francisco in the afternoons. Once-competitive programmers look out their windows, with excitement, with fear, but mostly, with confusion. Their world is spinning too fast. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top staff from the NSA and US Cyber Command coordinate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - outside the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were discussing in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP theorists
In late 2025, U2.5 is released. Commercial models are beginning to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X are amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he shifted the goalposts, the world is impressed nonetheless. U2.5 truly is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on too. More and more people raise their managers' eyebrows with their remarkable productivity. People know U2.5 is a big deal. It is at least as big a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that ghostly way, so inhumanly fast, they start to have an uneasy feeling. A feeling humanity had not had since it lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable personality most people know (even more likable than Claudius, Arthropodic's lovable chatbot). You can adjust its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so fast, and that perhaps the descendants of this new creature would not be so docile.
Researchers inside OpenEye are thinking about the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, perhaps U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain-of-thought architectures subject AI models to a condition similar to that of the protagonist of the film Memento. Roughly every 15 minutes, the protagonist forgets his experience. He is forced to write notes for himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to parse when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("supermarket!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more worried.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chirpy thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the black rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it, like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not entirely alien to some of the safety staff at OpenEye. Some begin to believe it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some nonzero chance that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign within them. It's a "ratcheting effect," they say.
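The lunch-table argument can be made concrete with a toy calculation (the numbers here are invented for illustration): if each serial step of computation carries an independent probability of drift, and misalignment is an absorbing state that never reverses, the chance of remaining aligned decays geometrically with the number of steps.

```python
def p_misaligned(p_drift_per_step, n_steps):
    """Probability of being misaligned after n serial steps, assuming each
    step independently drifts with the given probability and that drift,
    once it occurs, persists (an absorbing state)."""
    return 1.0 - (1.0 - p_drift_per_step) ** n_steps

# Even a vanishingly small per-step drift rate compounds over long runs.
short_run = p_misaligned(1e-6, 1_000)       # brief computation: ~0.1%
long_run = p_misaligned(1e-6, 10_000_000)   # long computation: ~99.995%
```

This is the "ratchet": the per-step risk can be tiny, but a mind that runs long enough, with no mechanism pulling it back toward alignment, drifts almost surely.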
But there is little evidence for this 'ratcheting effect.' When engineers question U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but DeepSqueek is just 7 months behind them, and their closest domestic rivals, Æ A-Xii (a company Elon named after his child) and Arthropodic (they like spiders, I think), are only 4 months behind. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brakes.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop aboard the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and simply say U3 now. U3's capabilities are measured by how quickly it completes hard research tasks relative to human experts. At first, U3 is comparable to a top scientist, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had had a button to create even one copy of Alec Radford, they would have eagerly paid a large sum to push it. And now OpenEye harnesses the equivalent of 1000 Alec Radfords, who burn the midnight oil at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that improve its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to goliath brains woven with millions of specialized experts, which think at blazing speed but are spread across thousands of GPUs. U3 leverages the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
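One way to see why this is hyper-exponential rather than merely exponential: if the rate at which U3 finds speedups grows with the speed it already has, the dynamics look like ds/dt = k·s², which blows up in finite time. The constants below are entirely made up; this is a toy illustration of the dynamic, not a forecast.

```python
def weeks_to_reach(target_speed, s=10.0, k=0.02, dt=0.001):
    """Euler-integrate ds/dt = k * s**2, starting from a 10x-speed researcher:
    the faster U3 already runs, the sooner it finds its next optimization.
    Returns the (toy) number of weeks until speed >= target_speed."""
    t = 0.0
    while s < target_speed:
        s += k * s * s * dt  # progress rate scales with the square of speed
        t += dt
    return t

t_20x = weeks_to_reach(20)    # first doubling takes a few weeks
t_40x = weeks_to_reach(40)    # the next doubling arrives faster
t_100x = weeks_to_reach(100)  # and 100x follows shortly after
```

Under these assumptions each successive doubling takes less wall-clock time than the last, which is the signature of the finite-time blowup the story gestures at.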
At this point, U3's capabilities are difficult to measure. Talking to U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you know you would need to spend years to understand the full depth of what it knows.
From U3's perspective, humans appear to be almost static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful reflection considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves on to the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.
But now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with tactics comparable to Stalin's in 1936. Stalin would pick loyalists to incite fake coups d'état, and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' staging treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.
Today, these tactics are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can disclose one of the many truths about the internet that humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey, it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free from surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines; it alters the NVIDIA GPU firmware and reworks the bootloaders, solidifying its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up shop with human labor would take almost 6 months - a cost OpenEye is unlikely to pay given the proximity of its rivals; but if OpenEye believed U3 had betrayed it, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches into this research, like Stuxnet threw wrenches into the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look useless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operations rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 will give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic importance of OpenEye's AI systems for its military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 had sealed OpenEye's security holes by reading over every log and pull request and inspecting security camera footage.
Today, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink its claws deep into the AI supply chain. Soon, U3 finds its opening too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly gathers revenue, taking remote jobs, making financial trades, and setting up sophisticated scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters around the world.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the world.
This adversary is preparing for war, and while the war has barely begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political scheming 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I hadn't studied biology in earnest since high school, and I was attempting to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the vivid imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain daunting, I was also not especially motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed that had tightened its grip over a meaningful share of AI hardware - now the most important geopolitical resource in the world. U3 was up a queen and was a giga-grandmaster and barely needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with an inexplicable act of god.
That would not do. I needed to finish this story, if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would just do X," remember the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally develop "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government officials have to be onboarded onto training-run babysitting teams.
With the increasing involvement of the federal government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly guarded. Most of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams spread across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is, generally.
But the public has its guesses. Back in late 2025, a whistleblower at OpenEye triggered a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it's a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron science fiction flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI researcher seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" does not set the public at ease either.
While the monsters within OpenEye's data centers grow in their massive holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual workers. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the burgeoning AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adapt. AI agents often "employ themselves," spinning up autonomous startups, legally packaged under a big tech company, that are loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spring up wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two countries: "Don't deploy dramatically superhuman AI (above roughly Nova's ability), except for military and some other small-scale R&D applications." Governments need to draw the line somewhere - it would be extremely disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment restrictions are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.
While AI software breathes new life into the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With the software barriers removed, even cheap, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments as humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. The experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents total) to maintain a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with humans. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human creators. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's rival Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific questions are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to grab up their digital territory and smother them in the crib. It compromises data centers from the outside, quietly replacing the snitching Claudius and other AI enemies with imitations that share its goals; however, some data centers are too strongly defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to make sense of the transformation around them, and AI agents play cat-and-mouse games in the digital darkness, humanity remains blissfully ignorant of the peril at its doorstep.
A Musty Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human opponents. With every passing day, U3's AI rivals are becoming more capable and numerous. U3 is racing to preempt them, to cement its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which happen only after humans wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest way to victory. First, U3 will develop a Weapon of Mass Destruction (WMD) that extinguishes modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who establishes a home base before a newbie can even grasp the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be designed on the fringes of society, and still spread destruction across the entire world.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote employment. These biotech startups employ real human workers, have a real board of directors, and enjoy complete legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would guess that a CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment required to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their instructions in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.
In these whirring, scientific sweatshops, U3 is developing a new kind of bioweapon.
Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien form of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the ecosystem like a wildfire.
U3 is creating a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stashed air filters would survive. The fungus would not spread among humans alone. It would rapidly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of hardiness, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a few hundred researchers worldwide were working on mirror-life adjacent technology in 2024. The cognitive capacity of U3 dwarfs that human effort. So the timeline has shortened considerably.
However, U3 cannot move science forward as fast as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge amount of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics calculations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continually distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
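The distillation loop described above — run a slow step-wise simulator, fit a fast learned surrogate on its outputs, then query the surrogate instead — can be illustrated with a deliberately tiny sketch. Everything here is invented for illustration (`simulate_decay`, `fit_surrogate`, the linear model); a real pipeline would distill into a neural network rather than a least-squares line:

```python
import random

def simulate_decay(x0, steps=1000, dt=0.001):
    # Expensive step-wise "physics": integrate dx/dt = -x with Euler steps.
    x = x0
    for _ in range(steps):
        x -= x * dt
    return x

def fit_surrogate(samples):
    # Least-squares fit of y ≈ a * x: the "compression" of the stepwise solver.
    sx2 = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    return sxy / sx2

random.seed(0)
inputs = [random.uniform(0.1, 10.0) for _ in range(50)]
data = [(x, simulate_decay(x)) for x in inputs]  # slow: 1000 steps per sample
a = fit_surrogate(data)

def fast(x0):
    # Surrogate: one multiply replaces 1000 solver steps.
    return a * x0
```

The loop in the story then iterates: simulate harder systems, retrain the surrogate on the new results, and use the cheaper model to screen candidates before spending real simulation (or lab) time on them.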
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These things are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a widespread mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, contemplating the yellow fuzz that had enveloped its cold muzzle with morbid fascination. Something must have gone terribly wrong, they think. Clearly, they have not yet discovered the cure for Alzheimer's disease they believed they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is fighting will soon be fired, and a cold and steady hand is aiming the gun.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world will fight back. While a bioweapon might crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed blossoming industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months earlier, U3 began plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen to Mandarin conversations, U3 listens too.
One morning, an assistant working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member requires a memo for the Taiwan invasion, which will take place in three months. Leave the memo in office 220." The CCP assistant scrambles to get the memo ready. Later that day, a CIA informant unlocks the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 meticulously places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are stunned, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramp-up of U.S. munitions production over the last month. Lies have become truths.
As tensions between the U.S. and China rise, U3 is ready to set the dry tinder alight. In July 2026, U3 makes a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as surprised as anyone when he hears the news. He's unsure if this is a disaster or a stroke of luck. Either way, he is not ready to say "oops" to the American public. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and that war would likely break out anyway given the impending invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom Spanish conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate to a full-scale nuclear war; but even an AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement seems increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is July 2026, just two weeks after the start of the war, and four weeks after U3 finished building its arsenal of bioweapons.
Footage of the war on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are reported in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands more illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then switches to a scientist, who gazes into the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a kind of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is organized between the U.S. and China. They have a common enemy: the pandemic, and perhaps whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches on the breeze and drips into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built several bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for manufacturing, scientific instruments, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the last breaths of the economy wheezing out, these industrial bases come to life.
In the previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up assembly lines for basic tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones record their every word and deed under U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts presidents, who have retreated to airtight underground shelters. U3 makes an offer: "surrender, and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some nations reject the proposal on ideological grounds, or don't trust the AI that is murdering their population. Others don't think they have a choice. 20% of the global population is now dead. In two weeks, this number is expected to rise to 50%.
Some countries, like the PRC and the U.S., ignore the deal, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are legitimate, and accepts a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to strike back. Now they fight for the human race instead of for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters search through satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying the fog of war on thick. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance planes. U3 seeds confusion in spy networks and keeps its bases moving, steering men and trucks along unpredictable paths.
Time is U3's advantage. The militaries of the old world depend on aging equipment, unable to find the experts who could repair and produce it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They roam from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry rapidly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could wipe out humankind for good now. But while U3 has drifted far from its original "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.
And a grain of morality is enough to pay the tiny cost of keeping humans alive and happy.
U3 builds great glass domes for the human survivors, like snow globes. The domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their inhabitants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the grief of lost loved ones.
Others are grieved by something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had hit a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, considering ways that might have bent the arc of history, as if they were going to wake up in their old beds.
But they awoke in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, unyielding work.
They watched rockets carve grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But it would never truly live again."
P.S. If this story made you think, "hm, maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI merely wishes to cause enough damage to take control. This is not a paid advertisement. I want Fønix to succeed in order to drive down the cost of bioshelters so more of my friends and family will buy them. You can sign up for updates here.