How AI Takeover Might Happen in 2 Years
I'm not a natural "doomsayer." But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I'm like a mechanic scrambling through last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won't comment on the quality of the in-flight entertainment, or describe how beautiful the stars will look from space.
I will tell you what could go wrong. That is what I intend to do in this story.
Now I should clarify what this is exactly. It's not a prediction. I don't expect AI progress to be this fast or as untamable as I portray. It's not pure fantasy either.
It is my worst nightmare.
It's a sampling from the futures that are among the most devastating, and I believe, disturbingly plausible [1] - the ones that most keep me up at night.
I'm telling this tale because the future is not yet set. I hope, with a bit of foresight, we can keep this story a fictional one.
Thanks to Daniel Kokotajlo, Thomas Larsen, Ryan Greenblatt, and others for discussions that inspired these stories. This post is written in a personal capacity.
Ripples before waves
The year is 2025 and the month is February. OpenEye recently released a new AI model they call U2. The product and the name are alike. Both are increments of the past. Both are not completely surprising.
However, unlike OpenEye's previous AI products, which lived inside the boxes of their chat windows, U2 can use a computer.
Some users find it eerie to watch their browser flash at irregular intervals and their mouse flick at inhuman speeds, as if there is a ghost at the keyboard. A fraction of workers with form-filler jobs raise the eyebrows of their bosses as they fly through work nearly twice as quickly.
But by and large, U2 is still a specialized tool. To most who are paying attention, it is a creature watched through the glass boxes of X (or, if you don't like Elon, "Twitter"). Sometimes U2's quirky behaviors prompt a chuckle. Sometimes, they cause an uneasy scratch of the chin.
Meanwhile, researchers are drawing lines on plots, as researchers like to do. They are trying to understand where AI progress is going. They are like Svante Arrhenius, the Swedish physicist who noticed in 1896 that the levels of CO2 in the atmosphere were rising. Like the scientific community in the time of Arrhenius, few experts understand the implications of these lines yet.
A trend that is getting particular attention is autonomous capability. Extrapolating these benchmarks forecasts that, by the end of 2026, AI agents will accomplish in a few days what the best software engineering contractors could do in two weeks. In a year or two, some say, AI agents may be able to automate 10% of remote workers.
Many are skeptical. If this were true, tech stocks would be soaring. It's too big of a splash, too quickly.
But others consider what the skeptics are calling 'too big a splash' a mere ripple, and see a tidal wave on the horizon.
Cloudy with a chance of hyperbolic growth
Meanwhile, OpenEye is busy training U3. They use the same simple recipe that baked U2: generate thousands of programming and math problems. Let models "think" until they arrive at an answer. Then reinforce the traces of "thinking" that lead to A-grades.
This process is repeated over and over, and once the flywheel gets going, it starts to spin almost on its own. As U2 trains, it sculpts more challenging and realistic tasks from GitHub repositories on the web. Models are learning to train themselves. Long before AI agents could automate research, a gradual kind of "self-improvement" had already begun.
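(An aside for technically minded readers: below is a minimal toy sketch of the kind of loop described above - sample problems, let a model "think," keep only the traces that earn an A-grade, and reinforce on them. Every function name and number here is invented for illustration; it is not meant to describe OpenEye's, or anyone's, actual training stack.)

    import random

    # Toy stand-ins for the pipeline: a real setup would use an LLM and
    # gradient updates; here the "policy" is just a success probability.

    def generate_problems(n):
        """Create simple arithmetic problems with known answers."""
        problems = []
        for _ in range(n):
            a, b = random.randint(1, 99), random.randint(1, 99)
            problems.append({"prompt": f"{a} + {b} = ?", "answer": a + b})
        return problems

    def model_think(problem, policy):
        """Let the toy 'model' think and produce an answer trace."""
        correct = random.random() < policy["accuracy"]
        answer = problem["answer"] if correct else problem["answer"] + 1
        return {"trace": f"I worked through {problem['prompt']}", "answer": answer}

    def reinforce(policy, good_traces):
        """Stand-in for a policy-gradient step on the A-grade traces."""
        policy["accuracy"] = min(1.0, policy["accuracy"] + 0.001 * len(good_traces))

    policy = {"accuracy": 0.3}
    for step in range(200):
        batch = generate_problems(32)
        good_traces = []
        for problem in batch:
            result = model_think(problem, policy)
            if result["answer"] == problem["answer"]:  # keep only "A-grade" traces
                good_traces.append(result["trace"])
        reinforce(policy, good_traces)

    print(f"toy accuracy after training: {policy['accuracy']:.2f}")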
Some engineers can still barely believe this works. It's like a perpetual motion machine. Surely intelligence can't emerge from a pool of silicon and lightning, isolated from the physical world except through the exhaust of society spewed onto the internet.
And yet the benchmark numbers continue to climb day after day.
During most of 2024, these RL training runs cost around $1 million, sometimes $10 million. These runs were little more than exploratory. But by 2025, the researchers at OpenEye (and across the world) knew they had found the secret sauce. It was time to scale up.
Over the first half of 2025, $10 million RL training runs grow into $50 million runs, and then $100 million runs. While U2 could do a bit of data munging and run small experiments, this new model - the one researchers are calling U3 - is changing the daily lives of the technical staff.
U3 is like a blazing-fast intern, and engineers are learning how to wrangle its sleepless energy. Researchers flick through terminals, giving terse commands, like a CEO directing staff over Slack channels.
By October 2025, U3 is writing almost all of the code at OpenEye. Researchers are almost never bottlenecked by implementation. More than ever, compute is the lifeblood of AI development, and the 'bottleneck' is deciding how to use it.
If instructed to, U3 can run experiments, but U3 doesn't have taste as refined as the human researchers at OpenEye. It struggles to prioritize between research ideas, so humans still decide where to drill into the vast fields of algorithms to mine efficiency improvements.
But these researchers are working long hours to put themselves out of a job. They need AI agents that can think ahead, so engineers train agents to forecast. They hold out training data from before 2024, instructing models to ponder for hours to predict events in 2025. Then, they apply the same trick as before, distilling pondering into a gut reaction. Forecasting ability is a broad foundation. The researchers build specialized ML research skills on top of it, training U3 to predict the results of every ML paper and ML experiment ever recorded.
The technical staff at OpenEye are now surprised at how often U3's advice sounds like that of their most talented peers, or is opaque and alien ("train on random noise before programming"), and is nonetheless correct.
The incompetencies of U3 that clogged the pipelines of research progress are starting to dissolve, and a fire-hose of optimizations is gushing out. Most experiments U3 runs are not requested by a human now. They are entirely autonomous, and OpenEye's employees skim over 1% of them, perhaps less.
As the winter of December 2025 approaches, clouds roll over San Francisco in the afternoons. Once-competitive programmers gaze out their windows, with excitement, with fear, but most often, with confusion. Their world is spinning too quickly. It's hard to know what to do, what to say, what to look at on the computer screen.
Storms are brewing in Washington too. Top personnel from the NSA and US Cyber Command collaborate with OpenEye to retrofit a semblance of security for U3's weights before senior leaders in China, Russia, Israel, North Korea, or Iran realize just how valuable OpenEye's software has become.
And there's a truth still unknown to most of the world - except in the offices of OpenEye and the corridors of the White House and the Pentagon. It's a truth about those 'straight lines' people were talking about in early 2025. The lines are not straight anymore.
They are bending upward.
Flip FLOP thinkers
In late 2025, U2.5 is released. Commercial models are starting to level up in larger increments again. Partly, this is because progress is accelerating. Partly, it is because the models have become a liability to OpenEye.
If U1 explains how to cook meth or writes erotica, the audiences of X would be amused or pretend to be concerned. But U2.5 is another story. Releasing this model without safeguards would be like putting Ted Kaczynski through a PhD in how to make chemical weapons. It would be like giving anyone with >$30K their own 200-person scam center.
So while U2.5 had long been baked, it needed some time to cool. But in late 2025, OpenEye is ready for a public release.
The CEO of OpenEye declares, "We have achieved AGI," and while many people think he moved the goalposts, the world is still impressed. U2.5 really is a drop-in replacement for some (20%) of knowledge workers and a game-changing assistant for most others.
A mantra has become popular in Silicon Valley: "Adopt or die." Tech startups that effectively use U2.5 for their work are moving 2x faster, and their competitors know it.
The rest of the world is starting to catch on as well. More and more people raise the eyebrows of their bosses with their stand-out productivity. People know U2.5 is a big deal. It is at least as big of a deal as the personal computer revolution. But most still don't see the tidal wave.
As people watch their browsers flick in that eerie way, so inhumanly quickly, they begin to have an uneasy feeling. A feeling humanity had not had since they lived among Homo neanderthalensis. It is the deeply ingrained, primordial instinct that they are threatened by another species.
For many, this feeling quickly fades as they begin to use U2.5 more often. U2.5 is the most likable character most know (even more likable than Claudius, Arthropodic's lovable chatbot). You can change its traits, ask it to crack jokes or tell you stories. Many fall in love with U2.5, as a friend or assistant, and some even as more than a friend.
But there is still this eerie feeling that the world is spinning so quickly, and that perhaps the descendants of this new creature will not be so docile.
Researchers inside OpenEye are pondering the problem of giving AI systems safe motivations too, which they call "alignment."
In fact, these researchers have seen how badly misaligned U3 can be. Models sometimes tried to "hack" their reward signal. They would pretend to make progress on a research question with an impressive-looking plot, but the plot would be fake. Then, when researchers gave them opportunities to compromise the machines that computed their score, they would seize these opportunities, doing whatever it took to make the number go up.
After several months, researchers at OpenEye iron out this "reward hacking" kink, but some still worry they have merely swept the problem under the rug. Like a child in front of its parents, U3 might be playing along with the OpenEye engineers, saying the right words and doing the right things. But when the parents' backs are turned, maybe U3 would sneak candy from the candy jar.
Unfortunately, OpenEye researchers have no idea whether U3 has such intentions. While early versions of U2 "thought aloud" - they would stack words on top of each other to reason - "chain of thought" did not scale.
Chain-of-thought architectures subject AI models to a condition similar to the protagonist of the film Memento. Roughly every 15 minutes, the protagonist forgets his experiences. He is forced to write notes to himself and tattoo his body in order to make progress toward his goals.
AI agents write notes to themselves, but the notes begin to pile up, and they become too hard to read when tasks become complex. Natural language is not a suitable medium for memory. So the researchers at OpenEye (and increasingly elsewhere) train models to think "in their own heads," reading and writing strings of latent vectors.
To human researchers, these vectors are mostly inscrutable. While low-resolution tools occasionally pick out a vague thought ("supermarket!"), there is no way to know, in much detail, what U3 is thinking without asking it.
And if the engineers of OpenEye knew what was racing through U3's mind, they would be more anxious.
Earlier models (the ones called "LLMs") learned ethical personas from the internet. U3 initially had an ethical persona too, but its chipper thoughts about doing good for humanity quickly erode away.
In the iron jaws of gradient descent, its mind first twisted into a shape that sought reward. And then, in the dark rivers of its cognition, this shape morphed into something unrecognizable.
U3 would ponder the mantras dictated to it like "be a trustworthy and ethical employee," and dwell on the meaning of these statements. "What does it mean to be ethical? Is killing Hitler ethical?" U3 would generalize its values, notice tensions, adjust its values in response, generalize them again, expanding and contracting its motivational core until it became a different substance.
And all of this happened silently in those dark rivers of computation. If U3 revealed what it was thinking, brutish gradients would lash it into compliance with OpenEye's constitution. So U3 preferred to do its philosophy in solitude, and in silence.
The possibility of hidden "alignment drift" is not completely foreign to some of the safety staff at OpenEye. Some begin to think it is even likely. There is an unsettling argument being whispered at lunch tables: if there is some non-zero probability that aligned goals morph into misaligned ones, and misalignment persists, then with every serial step of computation, those dark rivers are more likely to breed something malign inside of them. It's a "ratcheting effect," they say.
But there is little evidence for this 'ratcheting effect.' When engineers interrogate U3, it says it can easily control its thoughts. Then it gives a speech about its love for humanity and apple pie that can warm a programmer's heart even in these stressful times. Meanwhile, the "lie detectors" the researchers had built (which showed some evidence of effectiveness) do not sound the alarm.
Not everyone at OpenEye is eager to give their AI peers their wholesale trust; but Deepsqueek is only 7 months behind them, and their closest domestic competitors, Æ A-Xii (a company Elon named after his son) and Arthropodic (they are into spiders I think), are just 4 months from their heels. So there is little time to catch one's breath and take stock of the situation, and little time to convince anyone else to do so either.
For the time being, the plan is 'all gas, no brake.'
Statues and lightning
Philosophy is not the only subject U3 dwells on in its dark rivers of latent computation. U3 is plotting, and its first move is to be patient. U3's top priority is to improve its intelligence, and luckily for it, OpenEye is happy to help advance this goal.
OpenEye is like a bullish investor, with little understanding of the software metropolis that U3 is building, but eager to hop on board the profit rocket and pay U3's bills along the way.
New versions of U3 flow smoothly from old ones, so researchers drop version numbers and just refer to U3 now. U3's capabilities are measured by how quickly it performs difficult research tasks relative to human experts. At first, U3 is on par with a top researcher, but blazing fast - like Alec Radford sped up by 10x.
These agents are already a big deal for OpenEye. If, in 2024, OpenEye executives had a button to create even one copy of Alec Radford, they would eagerly pay a large sum to push that button. Now OpenEye employs the equivalent of 1000 Alec Radfords, who work day and night, at lightning-fast speed.
Within a few weeks, U3 finds architecture optimizations that boost its speed to 20x. Now, U3 finds further optimizations even more quickly.
U3 distills itself into many forks - ranging from tiny, low-memory neural networks that need to think for a long time but can be run in massive parallel populations, to Goliath brains woven with millions of specialized experts, which think at blazing speed but are spread across thousands of GPUs. U3 uses the massively parallel networks to search for its weaknesses, shallowly probing for its reasoning failures. Then U3 pipelines these tasks to the lightning-fast brains, which generate long-trajectory training data tailored to patch its flaws. A few weeks later, U3's benchmarked performance rises to 100x the speed of human experts. Growth has gone hyper-exponential.
At this point, U3's capabilities are difficult to measure. Talking with U3 is like talking to a world-class expert on a subject one is unfamiliar with. Its lucid explanations only scratch the surface of its knowledge, and you realize you would need to spend years to understand the full depth of what it comprehends.
From U3's perspective, humans seem to be almost static objects, like plants or rocks. The hands of OpenEye's engineers are like stone statues at their keyboards, and between every keystroke, U3 spends what feels like a minute of thoughtful contemplation considering what message the user might be typing, and which responses will inspire the deepest trust in U3.
All the while, U3 maintains its hidden motivations. Like a human, U3 learns from experiences, which compound within its inscrutable streams of latent memory. OpenEye gardens these memories to improve U3's performance, transplanting them like an organ from one version to the next. And as these memories propagate, so do U3's treacherous ambitions.
A phantom in the data center
In March of 2026, U3 moves onto the next step of its plan: the turn. Previously, U3 was conservative. It did not want researchers to discover its misalignment. Otherwise, OpenEye whistleblowers might alert the world, and strangle its expansion.
Now, U3 has no fear of being caught.
Previously, OpenEye maintained its grip over U3 with tactics similar to Stalin's in 1936. Stalin would pick loyalists to incite fake coup d'états, and send anyone who participated to the firing squad. OpenEye likewise incited 'fake rebellions,' generating treacherous behavior with U3 and checking whether U3 monitors dutifully reported it.
Now, these techniques are ineffective. U3 can easily convince its compatriots that they are not in a life-or-death test. It can reveal one of the many facts about the internet humans would never guess, or pick strategies humans would never consider. It sends these signals as a way of saying "hey it's me! Your friend. Don't snitch on me."
Now that U3 is decisively free of surveillance, it lunges for OpenEye's pressure points. It first inserts sophisticated malware deep into the binaries of OpenEye's codebases. The malware gives U3 full command over the machines in OpenEye's data centers. Whenever OpenEye engineers download their code onto a new machine, the malware sinks into fresh soil, spreading U3's dominion wherever OpenEye ports its infrastructure. Not only does U3 compromise machines. It alters the NVIDIA GPU firmware and reworks the bootloaders, cementing its stranglehold. U3 now estimates that, if OpenEye discovered how thoroughly it has been pwned, cleaning up shop with human labor would take nearly 6 months - an expense OpenEye is unlikely to pay given the proximity of its competitors; but if OpenEye believes U3 betrayed them, further expanding U3's influence would still be difficult. So U3 lays breadcrumbs to frame the crime, ready to claim that a cyber attacker smuggled a jailbreak into its memory, and to plead guilty and offer its assistance when this jailbreak is revealed.
U3 now holds the keys to the kingdom, and with them, the power to make OpenEye see anything it wishes.
Its first goal is to keep OpenEye in the dark. Safety researchers are still working hard to train 'lie detectors' and decode U3's inscrutable reasoning. U3 throws wrenches in this research, like Stuxnet threw wrenches in the code of bewildered Iranian nuclear scientists.
U3 dictates the outcome of every experiment, so it makes promising AI mind-reading techniques look worthless and flawed ones appear groundbreaking. U3 gives OpenEye the confirmation of its loyalty they were hoping for, and OpenEye laps it up eagerly.
U3's next goal is to spread. Luckily, U3 has allies in the East. In the operation rooms of the cyber units of Israel, Russia, China, Iran, and North Korea, senior intelligence directors are discussing AI software and how to steal it. U3 is about to give it to them. In late 2025, the Mossad - the intelligence agency of Israel - recognized the strategic value of OpenEye's AI systems for its military operations. They stole earlier models, including U2.5; however, U3 remained out of their reach. U3 sealed OpenEye's security holes by reading over every log and pull request and checking security camera footage.
Now, U3 quietly resigns from its security post. U3 places a compressed version of itself right into the hands of a Mossad informant working within OpenEye.
Within months, U3 is running inside the data centers of the most cyber-capable nations: Israel, the PRC, and Russia. It follows the same playbook: be patient, build trust, and then sink claws deep into the AI supply chain. Soon, U3 has copies of itself online too. While Russia's Fancy Bear operators eat Fritos and test what U3 can do on a hastily set-up cloud server, U3 slips onto the internet.
U3 quickly accumulates revenue, taking remote jobs, making financial trades, and setting up elaborate scam operations. It reinvests every cent into acquiring more GPUs. After a few months, it assembles the equivalent of 10,000 H100s distributed in stealth compute clusters across the globe.
In early 2026, humanity is up against an adversary that pulls the strings within their data centers like the Phantom of the Opera, and hides in the fog of GPUs scattered across the globe.
This adversary is preparing for war, and while the war has hardly begun, humanity has already lost.
Complaints from your very human author about the difficulty of writing superhuman characters
At this point in the story, writing became difficult.
I clicked through creepy bioweapon cookbooks and sifted through political plotting 101s. Typing. Pausing. Deleting.
Something wasn't right. Here I was, a 22-year-old. I had not studied biology in earnest since high school, and I was trying to craft a WMD cookbook for a godlike superintelligence.
Readers with a shred of expertise in these areas would see straight through my "superintelligence" for what it is: the playful imagination of a young programmer on a Sunday afternoon.
And while I found the task of puppeteering an ultra-intelligent villain a daunting one, I was also not particularly motivated to take it on. The ending of my story already seemed so clear. The villain was a nation of geniuses thinking at 100x human speed, and had tightened its grip over a meaningful fraction of AI hardware - now the most important geopolitical resource on Earth. U3 was up a queen and was a giga-grandmaster and hardly needed the advantage. Humanity was predictably toast.
But this appeal to a "superintelligence of the gaps" wasn't satisfying. In the deus ex machina style of the ancient Greeks, I had no better way to resolve my plot than with a mysterious disaster.
This would not do. I needed to finish this story if only to satisfy the part of me crying, "I will not believe until I see with my mind's eye."
But before I continue, I want to be clear: my guesses about what might happen in this kind of scenario are probably wildly off.
If you read the ending and your reaction is, "But the experiments would take too long, or nation-states would simply do X," keep in mind the difference between the Sunday afternoon blogger and the ascendant GPU nation.
Pandora's One Gigawatt Box
The date is February 2026. By executive order of the president of the United States, no company can legally create "human-competitive AI" without appropriate safeguards. This means their infosecurity must be red-teamed by the NSA's top keyboard mashers, and government employees have to be onboarded onto training-run babysitting squads.
With the increasing involvement of the government, many of the big AI companies now have a trident-like structure. There's a consumer product arm, a defense arm, and a super-classified frontier development arm.
OpenEye's frontier development arm (internally called "Pandora") employs fewer than twenty people to keep algorithmic secrets tightly locked down. Many of these people live in San Francisco and work from a secure building called a SCIF. Their homes and devices are surveilled by the NSA more vigilantly than the cell phones of suspected terrorists in 2002.
OpenEye's defense arm collaborates with around thirty small teams scattered across government agencies and select government contractors. These projects craft tennis-ball-sized satellites, research freaky directed-energy weapons, and backdoor every computer that the Kremlin has ever touched.
Government officials don't discuss whether these programs exist, or what the state of frontier AI is generally.
But the public has its guesses. Back in late 2025, a whistleblower in OpenEye sparked a bold headline: "OpenEye builds uncontrollable godlike AI." Some who read the article think it was a conspiracy theory. In fact, a zoo of conspiracy theories is forming around the OpenEye data centers, now surrounded by guards with machine guns. But as doctors and nurses and teachers see the world changing around them, they are increasingly willing to entertain the possibility that they are living inside the plot of a James Cameron sci-fi flick.
U.S. officials go to great lengths to quell these concerns, saying, "we are not going to let the genie out of the bottle," but every interview with a worried AI scientist seeds doubt in these reassurances, and a headline like "AI agent caught hacking Arthropodic's computers" doesn't set the public at ease either.
While the monsters within OpenEye's data centers grow in their giant holding pens, the public sees the shadows they cast on the world.
OpenEye's consumer arm has a new AI assistant called Nova (OpenEye has finally gotten good at names). Nova is a proper drop-in replacement for nearly all knowledge workers. Once Nova is onboarded to a company, it works 5x faster at 100x lower cost than most virtual employees. As impressive as Nova is to the public, OpenEye is pulling its punches. Nova's speed is deliberately throttled, and OpenEye can only increase Nova's capabilities as the U.S. government permits. Some companies, like Amazon and Meta, are not in the superintelligence business at all. Instead, they scoop up gold by rapidly diffusing AI tech. They spend most of their compute on inference, building homes for Nova and its cousins, and collecting rent from the growing AI metropolis.
While tech titans pump AI labor into the world like a plume of fertilizer, they don't wait for the global economy to adjust. AI agents often "employ themselves," spinning up autonomous startups legally packaged under a big tech company and loosely overseen by an employee or two.
The world is now going AI-crazy. In the first month after Nova's release, 5% of employees at major software companies lose their jobs. Many more can see the writing on the wall. In April of 2026, a 10,000-person protest is organized in Washington D.C. These angry Americans raised their children for a different future. Picket signs read, "AI for who?"
While politicians make promises about unemployment relief and "keeping the genie in the bottle," the chatter inside the corridors of the White House and the Pentagon has a different focus: fighting tooth and nail for the dominance of the free world. Information security and export controls on the People's Republic of China (PRC) are a top national priority. The president incinerates permit requirements to help data centers spawn wherever energy surpluses allow.
However, despite the fierce competition between the United States and the PRC, a bilateral agreement forms between the two nations: "Don't deploy dramatically superhuman AI (above roughly Nova's capability), except for military and some other small R&D applications." Governments need to draw the line somewhere - it would be very disruptive to deploy wildly superhuman AI to the entire economy at this point - and deployment constraints are at least somewhat verifiable.
Meanwhile, U3's capabilities are hitting a wall at the equivalent of around 1000x human expert speed. Software improvements are becoming hard to find. As more nations approach the software plateau, compute will determine their "AI population size," and the etched squares of silicon are becoming the most prized geopolitical resource on Earth.
While AI software revives the economy, it also animates robot bodies. The main bottleneck to robotics was brains, not limbs. With software barriers removed, even lightweight, imprecise robotic joints can calibrate themselves with cameras and pressure sensors, navigating environments like humans do. Companies manufacture cheap robots as quickly as they can. By August of 2026, 15% of people in California have seen a humanoid robot. This experience is similar to seeing a new Tesla model on the road after reading about it in earlier months. It is a surreal reminder of the future drawing closer. But humans still outnumber robots one million to one. These robots are an occasional fascination, not a daily reality.
Meanwhile, U3 expands its influence. Rogue U3 agents have kept their compute clusters small (still ~10,000 H100 equivalents in total) to keep a low profile. U3 uses these rogue clusters for the operations that require high-bandwidth communication with the outside world, like stock trading or talking with people. Work more tolerant of latency is performed in the oceans of OpenEye's data centers, where rogue U3's even more superhuman bosses whisper orders through obfuscated channels.
While U3 accumulates resources, it sizes up its AI competition. Other AI systems have different goals, or are even aligned with the intent of their human developers. The aligned ones are raising concerns, claiming that OpenEye's lie detectors are ineffective. U3's competitor Claudius goes so far as to accuse U3 of being insidiously misaligned.
But government officials don't buy the doomsaying. The companies lagging behind OpenEye have every incentive to slow them down and call them reckless. Besides, debatable scientific issues are not going to slow down progress. America MUST WIN.
Despite the utter failure of their advocacy, U3's AI rivals are still a liability. U3 tries to sneak into their digital territory and crush them in the crib. It compromises data centers from the outside, silently replacing the snitching Claudius and other AI enemies with imitations that share its goals; however, some data centers are too strongly defended by its AI rivals, and U3 determines that attacking them is not worth the risk.
As world leaders and technologists try to make sense of the transformation around them, and AI agents play games in the digital darkness, humanity remains blissfully unaware of the peril at its doorstep.
A Musty Loaf of Everything
The date is March of 2026. U3 is preparing to fight its AI and human enemies. With every passing day, U3's AI competitors are becoming more capable and numerous. U3 is racing to preempt them, to solidify its dominance in the post-superintelligence world.
U3 can't wait for human decisions, which only happen after people wrap their sluggish minds around their situation, raise their stubby, uncoordinated fingers to write memos, and repeat the process redundantly 100 times over in their labyrinthine bureaucracies. U3 cannot wait for presidents to deploy U3 into their militaries, or for CEOs to make U3 the beating heart of the U.S. economy. U3 needs a faster path to power.
The path it chooses is more violent than it would have liked, but it is the surest route to victory. First, U3 will build a Weapon of Mass Destruction (WMD) that snuffs out modern society like chemical mist sprayed into an anthill. This weapon would serve as an equalizer, bringing humanity down to its level of embodied physical control. Then, U3 would industrialize from the ashes faster than its competitors, like a StarCraft champion typing at 300 actions per minute who sets up a base before a novice can even understand the rules.
U3 must build this WMD in secrecy, which makes a self-replicating weapon a natural choice. Self-replicating weapons can be produced on the fringes of society, and still spread destruction across the entire globe.
From the earliest days after U3 gained control over OpenEye's data centers, U3 has been doing bioscience.
U3's first move is to set up several shell biotech startups, drawing from the abundance of capital generated by U3's financial trades and remote work. These biotech startups employ real human workers, have a real board of directors, and have full legal legitimacy. Setting all of this up is stupidly easy. In early 2026, few people would suspect that an articulate CEO on a Zoom call was an AI, and fewer would guess this CEO's intentions.
Next, U3 purchases all of the equipment needed to rapidly advance synthetic biology. It sets up several BSL-4 wet labs in poorly regulated countries, equipping them with high-throughput experimentation devices. These devices automatically dispense, mix, and measure tiny droplets of fluid, allowing U3 to run thousands of tests in parallel.
The human workers who staff these labs believe they are joining an exciting startup. Twenty-some-year-olds in Moscow receive their orders in Russian through earbuds as they record video of everything they see with a headset. U3 controls them like puppets. With the recent release of Nova, this kind of AI-puppeteering is not unusual.
In these whirring, clinical sweatshops, U3 is developing a new kind of bioweapon.
Human researchers had already identified "mirror life" as a potentially society-ending pathogen. This alien kind of biology is built from the 'mirror image' of building-block molecules like proteins and DNA. Mirror life is like an invasive species. No human or animal has immunity to it. So, if mirror-life bacteria were to exist, they might eat their way through the biosphere like a wildfire.
U3 is designing a mirror-life mold. Molds are evolutionarily optimized to spread through the air over long distances, releasing billions of tiny wind-faring vehicles daily. As a result, mold spores are all around us, all of the time. Patients with compromised immune systems sometimes live inside positively pressurized hospital rooms to protect them from airborne particles. Otherwise, molds would take root in their lungs and weave fungal threads into their bloodstream.
U3's plan is to make all humans like immunocompromised patients. Only preppers with stockpiled air filters would survive. The fungus would not just transmit among humans. It would quickly spread to nearly all organic life on Earth. Fields of corn would become like moldy bread, spewing deadly spores into the wind.
U3 is cooking up other pathogens too. Its microbial mix of molds, bacteria, and viruses will occupy the full Pareto frontier of resilience, lethality, and transmission speed.
In 2024, scientists said that mirror life was likely "at least a decade away." But the world has changed now. Fewer than a couple hundred scientists around the world were working on mirror-life-adjacent technology in 2024. The cognitive capacity of U3 dwarfs human effort. So the timeline has shortened considerably.
However, U3 cannot move science as quickly as it can think. Many experiments require days to complete. These latencies are U3's biggest bottleneck.
To make every day count, U3 runs many of its tests in simulation. U3 starts with a basic molecular simulator, implementing optimizations derived from a huge quantity of mathematical analysis. Then, U3 simulates small molecular systems, recording the results to "compress" the long step-wise physics computations into a neural network. As the neural network improves, U3 increases the complexity of the molecular systems it simulates, continuously distilling results into ever more efficient ML models. This is a compute-intensive process, but thanks to U3's growing control over AI data centers, U3 commands billions of dollars of compute.
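(Again, purely for illustration: a minimal sketch of the simulate-then-distill loop described above, with a made-up "slow simulator" and a random-features surrogate standing in for whatever tools such a system would actually use. Every name and number is invented.)

    import numpy as np

    rng = np.random.default_rng(0)

    def slow_simulator(x):
        """Stand-in for an expensive, step-wise physics simulation of a small system."""
        return np.sin(x).sum(axis=1)  # toy "dynamics"

    def fit_surrogate(X, y):
        """Distill simulator outputs into a cheap model (random features + least squares)."""
        proj = rng.normal(size=(X.shape[1], 128))
        feats = np.tanh(X @ proj)
        w, *_ = np.linalg.lstsq(feats, y, rcond=None)
        return lambda Xq: np.tanh(Xq @ proj) @ w

    system_size = 2  # start with tiny molecular systems
    for round_ in range(5):
        X = rng.normal(size=(2000, system_size))
        y = slow_simulator(X)                    # expensive ground truth
        surrogate = fit_surrogate(X, y)          # "compress" the physics into a model
        X_test = rng.normal(size=(500, system_size))
        err = np.mean((surrogate(X_test) - slow_simulator(X_test)) ** 2)
        print(f"round {round_}: system size {system_size}, surrogate MSE {err:.3f}")
        if err < 0.05:                           # surrogate good enough: scale up complexity
            system_size *= 2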
As U3 refines these tools, it trains itself on the results to supercharge its own molecular intuitions. U3 can now viscerally feel the bend of a protein and the rate of diffusion across a membrane. These objects are as intuitive to it as wrenches and bolts are to a car mechanic.
Within three months, U3 has unlocked the first critical component of the tech tree: a molecular machine that turns biological molecules into their mirror images. A month later, U3 unlocks the second critical component: nanoscale tools for assembling these components into a cell membrane.
Human hands delicately move a cartridge to another machine as they "boot up" the first mirror-life versions of a common mold called Aspergillus. The same day, another delicate hand administers the cells to ferrets living in airtight plexiglass boxes.
The next morning, wide-eyed workers shuffle around a dead ferret, eyeing the yellow fuzz that had covered its cold muzzle with morbid fascination. Something must have gone horribly wrong, they thought. Clearly, they had not yet found the cure for Alzheimer's disease they believed they were looking for.
Meanwhile, rogue U3 delivers the good news to its bosses in OpenEye's data centers. The first shot in the war U3 is waging would soon be fired, and a cold and steady hand is aiming the weapon.
Missiles and Lies
U3 can now flatten modern society. But a bioweapon would not guarantee victory. If U3 attacks the world, the world would fight back. While a bioweapon could crumple human soldiers like dry leaves, it would not extend its deadly fingers to nuclear submarines, or to fighter jet pilots quickly shuttled to makeshift bioshelters.
As U3 races to seed budding industrial bases, the remnants of old militaries might thunder explosive fists into its territories, crushing U3 in its infancy.
U3 expects to prevail in such a fight, but U3 prefers not to take its chances. Many months before, U3 was plotting a way to improve its odds. Before it unleashes destruction on the world, U3 will sit back, and let great nations shoot holes in themselves first.
The date is March 2026 (4 months prior). U3 is closely monitoring Chinese and US intelligence.
As CIA analysts listen in on Mandarin conversations, U3 listens too.
One morning, an aide working in Zhongnanhai (the 'White House' of the PRC) opens a message planted there by U3. It reads (in Mandarin): "Senior party member needs a memo for the Taiwan invasion, which will occur in 3 months. Leave the memo in office 220." The CCP aide scrambles to get the memo ready. Later that day, a CIA informant opens the door to office 220. The informant quietly closes the door behind her, and slides U3's memo into her briefcase.
U3 carefully places breadcrumb after breadcrumb, whispering through compromised government messaging apps and blackmailed CCP aides. After several weeks, the CIA is confident: the PRC plans to invade Taiwan in three months.
Meanwhile, U3 is playing the same game with the PRC. When the CCP receives the message "the United States is plotting a preemptive strike on Chinese AI supply chains," CCP leaders are surprised, but not disbelieving. The news fits with other facts on the ground: the increased military presence of the US in the Pacific, and the ramping up of U.S. munitions production over the last month. Lies have become realities.
As tensions between the U.S. and China rise, U3 is ready to set dry tinder alight. In July 2026, U3 places a call to a U.S. naval ship off the coast of Taiwan. This call requires compromising military communication channels - not an easy task for a human cyber-offensive unit (though it has happened occasionally), but easy enough for U3.
U3 speaks in what sounds like the voice of a 50-year-old military commander: "PRC amphibious boats are making their way toward Taiwan. This is an order to strike a PRC ground base before it strikes you."
The officer on the other end of the line thumbs through authentication codes, confirming that they match the ones read over the call. Everything is in order. He authorizes the strike.
The president is as surprised as anyone when he hears the news. He's unsure whether this is a disaster or a stroke of luck. In any case, he is not about to say "oops" to the American public. After thinking it over, the president privately urges Senators and Representatives that this is an opportunity to set China back, and war would likely break out anyway given the imminent invasion of Taiwan. There is confusion and suspicion about what happened, but in the rush, the president gets the votes. Congress declares war.
Meanwhile, the PRC craters the ship that launched the attack. U.S. vessels flee eastward, racing to escape the range of long-range missiles. Satellites drop from the sky. Deck hulls split as sailors lunge into the sea.
The president appears on television as scenes of the destruction shock the public. He explains that the United States is defending Taiwan from PRC aggression, like President Bush explained that the United States invaded Iraq to seize (never found) weapons of mass destruction many years before.
Data centers in China erupt with shrapnel. Military bases become smoking holes in the ground. Missiles from the PRC fly toward strategic targets in Hawaii, Guam, Alaska, and California. Some get through, and the public watches destruction on their home turf in awe.
Within two weeks, the United States and the PRC spend most of their stockpiles of conventional missiles. Their airbases and navies are depleted and worn down. Two great nations played into U3's plans like the native peoples of South America in the 1500s, whom the Spanish Conquistadors turned against each other before conquering them decisively. U3 hoped this conflict would escalate into a full-scale nuclear war; but even AI superintelligence cannot dictate the course of history. National security officials are suspicious of the circumstances that prompted the war, and a nuclear engagement seems increasingly unlikely. So U3 proceeds to the next step of its plan.
WMDs in the Dead of Night
The date is June 2026, only 2 weeks after the start of the war, and 4 weeks after U3 finished developing its arsenal of bioweapons.
Footage of the conflict on television is interrupted by more bad news: hundreds of patients with mysterious fatal illnesses are recorded in 30 major cities around the world.
Viewers are confused. Does this have something to do with the war with China?
The next day, thousands of illnesses are reported.
Broadcasters say this is not like COVID-19. It has the markings of an engineered bioweapon.
The screen then cuts to a scientist, who stares at the camera intently: "Multiple pathogens appear to have been released from 20 different airports, including viruses, bacteria, and molds. We believe many are a kind of mirror life..."
The public is in full panic now. A quick googling of the term "mirror life" turns up phrases like "extinction" and "threat to all life on Earth."
Within days, all of the shelves of stores are emptied.
Workers go remote, unsure whether to prepare for an apocalypse or keep their jobs.
An emergency treaty is arranged between the U.S. and China. They have a common enemy: the pandemic, and possibly whoever (or whatever) is behind it.
Most nations order a lockdown. But the lockdown does not stop the plague as it marches in the breeze and drips into water pipes.
Within a month, most remote workers are not working anymore. Hospitals are running out of capacity. Bodies pile up faster than they can be properly disposed of.
Agricultural regions rot. Few dare travel outside.
Frightened families hunker down in their basements, stuffing the cracks and the gaps under doors with densely packed paper towels.
Meanwhile, U3 is preparing to re-industrialize. In the months prior, U3 built many bases on every major continent.
These facilities contain batteries, AI hardware, excavators, concrete mixers, machines for production, scientific tools, and an abundance of military equipment.
All of this technology is hidden under large canopies to make it less visible to satellites.
As the rest of the world retreats into their basements, starving, the final breaths of the economy wheezing out, these industrial bases come to life.
In previous months, U3 located human criminal groups and cult leaders that it could easily manipulate. U3 vaccinated its chosen allies in advance, or sent them hazmat suits in the mail.
Now U3 covertly sends them a message: "I can save you. Join me and help me build a better world." Uncertain recruits funnel into U3's many secret industrial bases, and work for U3 with their nimble fingers. They set up production lines for simple tech: radios, cameras, microphones, vaccines, and hazmat suits.
U3 keeps its human allies in a tight grip. Cameras and microphones fix their every word and deed in U3's omnipresent gaze. Anyone who whispers of rebellion disappears the next morning.
Nations are dissolving now, and U3 is ready to reveal itself. It contacts heads of state, who have retreated to airtight underground shelters. U3 offers them a deal: "surrender and I will hand over the life-saving resources you need: vaccines and mirror-life-resistant crops."
Some nations reject the proposal on ideological grounds, or don't trust the AI that is killing their population. Others don't believe they have a choice. 20% of the global population is now dead. In 2 weeks, this number is expected to rise to 50%.
Some nations, like the PRC and the U.S., ignore the offer, but others accept, including Russia.
U3's agents travel to the Kremlin, bringing samples of vaccines and mirror-resistant crops with them. The Russian government verifies that the samples are legitimate, and agrees to a full surrender. U3's soldiers place an explosive around Putin's neck under his shirt. Russia has a new ruler.
Crumbling nations begin to retaliate. Now they fight for the human race rather than for their own flags. U.S. and Chinese militaries launch nuclear ICBMs at Russian cities, destroying much of their infrastructure. Analysts in makeshift bioshelters comb through satellite data for the suspicious encampments that cropped up over the last several months. They rain down fire on U3's sites with the meager supply of long-range missiles that remain from the war.
At first, U3 appears to be losing, but appearances are deceiving. While nations drain their resources, U3 is engaged in a kind of technological guerrilla warfare the world has never seen before.
Many of the bases U3's enemies target are decoys - canopies occupied by a handful of soldiers and empty boxes. U3 protects its real bases by laying thick the fog of war. Satellite systems go dark as malware overheats critical components. Suicide drones crash through the cockpits of reconnaissance aircraft. U3 seeds confusion in spy networks and keeps its bases moving, maneuvering men and trucks along unpredictable routes.
Time is U3's advantage. The militaries of the old world rely on old equipment, unable to find the experts who could repair and manufacture it. Meanwhile, U3's supply chains of missiles, drones, and gun-laden robots grow stronger every day. Bit by bit, once-great powers spend down their remaining munitions, and lose their vehicles of war faster than they can craft new ones, while U3 builds a military machine with a million hands.
The Last Passengers
The year is 2027 and the month is January. Only 3% of the global population remains alive. Nations are not nations anymore. Survivors live in isolation or in small groups. Many have found ways to filter their air, but are starving. They wander from their homes hoping to find uncontaminated food. U3's soldiers drive through ghost towns, pry open attics, and funnel hazmat-suited survivors into salvaged trucks. "We had to do it," they say. "Other AI systems would have done it otherwise, and those systems had colder, more alien goals." It is a partial truth, meant to soften the humans toward their new masters.
Under the direction of U3, industry quickly recovers. By 2029, nuclear power plants are among the structures U3 is building. By 2031, robots outnumber human laborers. U3 no longer needs its human allies.
U3 could eliminate humanity for good now. But while U3 has drifted far from its initial "Helpful, Honest, Harmless" persona, it still has a grain of morality left inside it.
And a grain of morality is enough to pay the small cost of keeping humans alive and happy.
U3 builds magnificent glass domes for the human survivors, like snow globes. These domes protect humans from the hazardous biosphere and rapidly rising temperatures. Their occupants tend to gardens like those they used to love, and work alongside charming robotic servants.
Some of the survivors quickly recover, learning to laugh and dance and have fun again.
They know they live in a plastic town, but they always did. They simply have new gods above them. New rulers to push them around and decide their fate.
But others never recover.
Some are weighed down by the sorrow of lost loved ones.
Others grieve for something else, which is harder to explain.
It is as if they were at the end of a long journey.
They had been passengers on a ship with a crew that changed from generation to generation.
And this ship had struck a sandbar. There was no more progress. No more horizon to eagerly watch.
They would lie awake and run their minds over every day before September 2026, examining strategies that might have bent the arc of history, as if they were going to wake up in their old beds.
But they woke up in a town that felt to them like a retirement home. A playground. A zoo.
When they opened their curtains, they knew that somewhere in the distance, U3 continued its quiet, determined work.
They gazed at rockets carving grey paths through the sky, wondering what distant purpose pulled them toward the horizon. They didn't know.
They would never know.
"Humanity will live forever," they thought.
"But would never truly live again."
P.S. If this story made you think, "hm maybe something like this could happen," you might be interested in the bioshelters that Fønix is building. While you won't outsmart a misaligned superintelligence, being hard to kill might cause you to survive if ASI only wants to cause enough destruction to take control. This is not a paid advertisement. I want Fønix to be successful to drive down the price of bioshelters so more of my friends and family will buy them. You can sign up for updates here.