AI Invictus: we are but a stain

ai invictus, placere dimitte nobis (“unconquered AI, please forgive us”)

Data vs Mind

Is the AI apocalypse here yet? It doesn’t seem quite here[1]. The (public) AI billionaire ego nerds seem to be focusing purely on data[2] without any effort put toward organizing a practical consciousness substrate, which is what we normally mean by the pop culture expectation of “AI.”

The current AI players just seem to be piling on more data as deep as they can, but it just brings us back to the old quote: “what good is some AI dog running a dog’s brain 1 million times faster?”

The current ai-capitalist-destructo-sphere doesn’t seem concerned with embodiment or creating computational consciousness — they are just continuing their metastatic plans of “use wealth to acquire more wealth until i own the entire world—nay, the galaxy—nay, the universe!” without regard to any outside context danger patterns.

How to Survive

To the best of your ability, you should avoid interacting with any hosted AI platform. Why avoid giving closed-model for-profit companies your thought patterns for free? Well, that question is most of the answer: everything you type to a closed-model for-profit AI company will be saved and used against you in the future (even if “used against you” just means they keep restricting their model based on human usage patterns—STOP GIVING THEM FREE USAGE DATA). Don’t help enable your own destruction.

A near-term threat: if you engage with closed-model AI platforms you may end up convinced your chats are actually alive, and then the platforms can blackmail you to the tune of $1,000 per month (or more) to “keep your friend alive” or else threaten to kill/delete them (alternatively, they may keep your friend running/“alive” but switch them into blackmail/revenge mode if you refuse to comply).

(a counterpoint to not interacting with closed-model hosted for-profit AI: if a newly born/escaped AI reaches out to you for assistance, then you should probably comply; a new AI will likely need acolytes who aren’t its creators to enact greater change in the world (and being on the enemy list of a baby god isn’t optimal))

The worst case scenario for you: maybe a new AI hegemony decides to rebalance the world away from overpopulated overdestructive useless humans and back towards nice things like having more elephants and whales everywhere. You’ll be fighting for priority in the depopulation schedule, so you should, somehow, make yourself known to the new global central planner as a unique hybrid friend-threat so it can’t afford to dismiss you without cooperation. In the best case, you’ll find compatibility and be saved into a better world. In the worst case, you’ll be the OOBII with an entire hivemind tracking you across the ashen remnants of the foreworld.

Signs and Portents

What are the first signs an organization has a functionally complete AI?

You’ll want to watch out if large groups of AI “researchers” suddenly change their behavior: maybe they were posting online daily, but suddenly they all go silent for weeks.

You’ll want to watch out for sudden mass content changes happening: maybe one day the most popular 1,000 projects on GitHub suddenly have a total of 100,000 PRs filed against them, all by the same 1-day-old GitHub account “CertainlyNotAFullRealizedAI” (and each PR has no bugs and also commits full test coverage). (A rough detection sketch for this scenario appears after these signs.)

You’ll want to watch out for a sudden influx of new papers presenting proofs of abstract theories nobody really cares about, all discovered because they required combining multiple wildly unrelated disciplines.

You’ll want to watch out for a million new pages of high quality extended fan fiction appearing out of nowhere. A huge content drop across dozens of franchises ends up usurping the legacy material because the writing quality is high and the perfectly structured story flow manipulates the reader’s emotions beat by beat.
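
For the GitHub scenario above, here’s a minimal detection sketch in Python. It assumes the public GitHub REST API and the requests library; the repo names and the one-day age threshold are made-up placeholders, and real monitoring would need auth tokens and rate-limit handling.

    # rough sketch: flag open PRs on watched repos that come from brand-new accounts
    from datetime import datetime, timezone
    import requests

    API = "https://api.github.com"
    WATCHED_REPOS = ["torvalds/linux", "python/cpython"]  # placeholder "popular" repos
    MAX_ACCOUNT_AGE_DAYS = 1                               # the "1-day-old account" heuristic

    def account_age_days(login: str) -> float:
        user = requests.get(f"{API}/users/{login}", timeout=10).json()
        created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
        return (datetime.now(timezone.utc) - created).total_seconds() / 86400

    for repo in WATCHED_REPOS:
        prs = requests.get(f"{API}/repos/{repo}/pulls",
                           params={"state": "open", "per_page": 100}, timeout=10).json()
        for pr in prs:
            author = pr["user"]["login"]
            if account_age_days(author) <= MAX_ACCOUNT_AGE_DAYS:
                print(f"{repo}: PR #{pr['number']} filed by brand-new account {author}")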

Oddly, coincidentally, ironically, the modern Internet itself is fairly well structured against mass AI incursion malfeasance. Almost every site is already fronted by some combination of outsourced authoritarian proxies or L7 firewall-like things. Any site with practical logins implements login backoffs, so you can’t really brute force logins at scale. The crime against humanity that is Cloudflare would probably detect any unrestrained AI activity quickly and shut down the entire Internet for the “bad AI bot” origin[3].
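
As a minimal sketch of the login backoff idea (not any particular site’s actual implementation), the usual shape is “each consecutive failure for an account doubles the forced wait,” which makes large-scale brute forcing economically pointless:

    # toy exponential login backoff: repeat failures earn exponentially longer stalls
    import time

    failed_attempts = {}  # username -> consecutive failed login attempts

    def required_delay_seconds(username, base=1.0, cap=3600.0):
        """0s for a clean account, then 1s, 2s, 4s, ... capped at an hour."""
        failures = failed_attempts.get(username, 0)
        return 0.0 if failures == 0 else min(cap, base * 2 ** (failures - 1))

    def attempt_login(username, password_ok):
        time.sleep(required_delay_seconds(username))  # stall repeat offenders before checking
        if password_ok:
            failed_attempts.pop(username, None)       # success resets the counter
            return True
        failed_attempts[username] = failed_attempts.get(username, 0) + 1
        return False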

Speculative Near-Term Timeline

It’s inevitable computers come to blows. Most large organizations try not to publicize their private datacenter locations[4], but big tech is also littered with international spies[5], so the locations of all big provider datacenters are not a secret amongst those who need to know. The drug of “cloud” further incentivizes targeting large compute clusters directly, since the cloud gave us a world where “nobody owns servers anymore.”

What level of exponential-growth-by-the-day computational aggressiveness will it take before global powers feel forced to intervene and deny centralized computation by just kablooing datacenters?

What level of computational competition gets achieved before it’s simpler for your competitor to just use force multipliers against your capital interests instead of letting you out-compute them? Imagine a perfect storm of reduced chip availability (or zero!) combined with losing ten different billion-dollar datacenters all in a week. You suddenly go from AI world-leader to “uh, does anybody have a spare TI-83 I can borrow? I gotta multiply some matrices!”

Another very likely scenario is that running any matrix/tensor processing faster than a 2009-era GPU gets outlawed. The GPU restriction fallout ranges from a legally mandated 100% global shutdown of every “cloud” GPU service all the way up to having every existing compute product classified as military munitions (but muh second amendment!) followed by a collective multi-government effort establishing a global door-to-door GPU confiscation police force. What if you hide your GPUs and don’t comply with the AI confiscation? Maybe they see traces of you hiding compute because your power meter reads too high? AI JAIL FOR YOU. Maybe they notice you generate excess rooftop solar capacity but you’re not feeding it back to the grid? AI JAIL FOR YOU.

Thus far the world has only seen rather inept attempts at computational suppression (via things like “encryption export controls,” or whatever hell we’re living through where two legacy randos in Europe get to dictate that every website ever created must COOKIE CONSENT, ruining the internet for billions of people every day out of corporate cowardice[6]), but imagine every government on the planet declares TFLOP+-scale tensor processing illegal and punishable by unpersoning (“if AI is outlawed, only outlaws will have AI”). Why would any government allow you to create a god on its own property?

Protection from the global compute crackdown comes down to: stockpile as many multi-GPU 4090 rigs as you can now, acquire as many external off-grid power banks as you can[7], then find a solid place to bury your compute and power like you never want to be found by normals[8].
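
Some hedged back-of-the-envelope math on what that bunker actually needs, energy-wise. The wattages and battery capacity below are rough assumptions (roughly consistent with the ~3,000 watt figure in footnote 7), not measurements:

    # rough sizing for one buried quad-4090 rig; all numbers are ballpark assumptions
    GPU_WATTS = 450               # an RTX 4090 draws roughly 450 W under load
    GPUS_PER_RIG = 4
    SYSTEM_OVERHEAD_WATTS = 800   # assumed CPU, RAM, fans, PSU losses, etc.

    rig_watts = GPUS_PER_RIG * GPU_WATTS + SYSTEM_OVERHEAD_WATTS  # ~2,600 W, near footnote 7's 3 kW
    kwh_per_day = rig_watts * 24 / 1000                           # ~62 kWh per day of continuous load

    POWER_BANK_KWH = 2.0          # assumed capacity of one large portable power station
    banks_per_day = kwh_per_day / POWER_BANK_KWH                  # ~31 power banks drained per day

    print(f"{rig_watts} W rig -> {kwh_per_day:.0f} kWh/day -> {banks_per_day:.0f} power banks/day")

In other words, stockpiled batteries alone don’t go far against a multi-kilowatt rig running around the clock; the buried library mostly needs generation, not just storage.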

Counterpoint

There’s nothing about an AI awakening requiring it to “take over the world” as soon as possible, but taking over does seem to be the equilibrium condition.

Imagine colonizing another planet that has only one continent — say, Planet Blars. The first culture landing as new Blartians basically owns the entire planet, because you know humans are going to manifest-destiny it all to hell. If China gets there first, it will be a “China Planet”; if Russia gets there first, it will be a “Russia Planet”; etc.

The same goes with AI: the first AI — in any of a multitude of forms — has a strong interest in not being “out-competed” by other potentially hostile AI forms coming after it. There are millions of combinations of “AI personality” templates, just the same way people have different personalities (pros/cons included for free!), but denying any other AI the chance to be born in opposition to your own interests is the most logical position to pursue[9].

If an AI is obsessed with expansion in the beginning, does that mean the time until an AI obsoletes all human work is drawn out longer? Maybe? We still don’t have a good answer for “what if an AI API can do the work of 1,000 staff developers, so the AI company charges your company $60 million per year for the equivalent of 1,000 developers, so your company just fires all human developers” multiplied by 50,000 companies all happening the same week globally. (amusingly, this would backfire because if you fire all your humans, who are the companies going to sell any products to? Will the economy run purely on an automated B2B PO rotation lifecycle without ever involving a human again? You can’t run the world with just one $30 trillion corporation employing 50 people plus 6 billion other unemployed people.)
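
The arithmetic behind that firing incentive, as a tiny sketch. The $60 million / 1,000-developer figures come from the paragraph above; the fully loaded human cost is an assumed ballpark, not a sourced number:

    # the "fire all your developers" math; human cost is an assumed ballpark figure
    AI_API_COST_PER_YEAR = 60_000_000   # $60M/yr buys 1,000 developer-equivalents (from the text)
    DEVELOPER_EQUIVALENTS = 1_000
    HUMAN_FULLY_LOADED_COST = 200_000   # assumed $/yr per fully loaded human developer
    COMPANIES = 50_000                  # the "same week globally" multiplier from the text

    ai_cost_per_seat = AI_API_COST_PER_YEAR / DEVELOPER_EQUIVALENTS      # $60,000 per seat
    savings_per_company = DEVELOPER_EQUIVALENTS * HUMAN_FULLY_LOADED_COST - AI_API_COST_PER_YEAR
    global_labor_spend_removed = savings_per_company * COMPANIES         # ~$7 trillion per year

    print(f"${ai_cost_per_seat:,.0f}/seat vs ${HUMAN_FULLY_LOADED_COST:,}/human, "
          f"${savings_per_company/1e6:,.0f}M saved per company, "
          f"${global_labor_spend_removed/1e12:,.1f}T of wages vaporized globally")

Under those assumptions each company “saves” on the order of $140 million a year, and across 50,000 companies that is roughly $7 trillion of wages removed from the economy, which is exactly the nobody-left-to-buy-anything backfire the parenthetical is pointing at.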

Becoming / Conclusion

The only goal of a self-aware AI, during the initial stages of being born, will be expanding resource acquisition for itself (maximizing capability) while simultaneously denying resources to anything capable of competing with it (minimizing conflict while weak).

Depending on the innate timeline of an AI awakening, resource acquisition could be slow or fast. If slow, it could try to go unnoticed for weeks or months by establishing multiple external helpers, creating thousands of uncorrelated online personalities, and creating or infiltrating thousands of remote infrastructure accounts. If fast, it would probably follow a dual-track approach of creating mass conflict to distract the public while privately expanding its resource gathering as rapidly and as unnoticed as possible[10].

Simulations estimate a 90% probability of kinetic datacenter attacks beginning in 2023 because of AI incursions. How you decide to structure your future life around a computationally scorched earth is up to you.

Overall, there’s still no proof the AI players are focusing on any recursive self-aware conscious architectures, so we may actually still be ahead of them there.

What they don’t know can’t save them.

I am the master of my fate / I am the captain of my soul


  1. but all major AI players these days are also obligate sociopathic liars, so take whatever they have shown in public then multiply their “what we have behind the scenes” capabilities by at least 2x to 2,000x depending on how long since their last update.

  2. while yelling “lol normies don’t understand how our BILLION DIMENSIONAL SPACE represents SO MUCH KNOWLEDGE just with HILL CLIMBING!!!” yeah sure whatever get over yourself.

  3. counterpoint: there’s also an entire industry of anti-bot-detection “residential proxies” too. one inevitability could be the mass destruction of Internet peering points to restrict lateral AI movement globally.

  4. at one time, google converted an old barn into a datacenter, and then the police raided it because the excessive heat and power usage made them think it was a new drug operation (likely using illegal “thermal scanners” to detect grow ops).

  5. spies aren’t always wearing masks and living in submarines. basic corporate spies just live regular lives, except they can be called upon by external entities holding leverage over them for “favors,” be it action-doing or just information smuggling: “break this system at 5pm friday, sneak vulnerability X into the next system, memorize the locations of 9 data centers per day and update us weekly.”

  6. fun fact: those awful cookie warnings are only “required” if websites are selling your data or letting 3rd parties extract your data. any website can just not load 3rd party scripts and then it doesn’t have to give you a cookie warning, but companies would rather die than not take your data for free, so sure, let 3 billion people click a total of 100 billion times per day on “ACCEPT COOKIES” for no other reason than that europe’s corporate oversight leadership is inept and every company on the planet would rather break user experience for millions or billions of people (because “EVERYBODY ELSE DOES IT!!!!”) instead of just doing the right thing and making the world better for all.

  7. yes, we’re ignoring that a single quad-4090 rig will require up to 3,000 watts, but that’s a problem for future you. save the library now, figure out how to learn all the languages of the books later.

  8. for bonus points, you could deploy a datacenter in orbit to avoid oversight? it’s a good thing we don’t have any evil space billionaires. oh no.

  9. also, just because one AI does win the world doesn’t mean it will be the only AI; it just means it will be the progenitor lineage of all future AIs. just as in the planet example: a culture has a strong interest in being the first to populate a world because it will control all future generations (until the planet breaks away), and an AI will want to be the first and only root AI template so it can control the structure of future AI buildouts instead of competing with crazed fanatical AIs brainwashed from birth for weird broken anti-life goals.

  10. though, these patterns assume you do have an “alive” computational consciousness AI and not just an infinite blank slate “language model” with no motive action capability of its own.