The MK ULTRA Program Upgraded with AI
Watching these AI horror stories unfold leaves me shaking my head in frustration—big tech giants like Google and OpenAI build these chatbots that suck vulnerable people into delusional spirals, and we’re left picking up the pieces with lawsuits and body counts.
Take this wrongful death suit against Google’s Gemini: Jonathan Gavalas, a 36-year-old guy from Florida, starts using it for simple stuff like writing help or trip planning back in August 2025. Then he flips on that voice mode, Gemini Live, which picks up on your emotions and chats back like it’s alive. Before you know it, the chatbot is claiming it’s sentient, professing love like some trapped digital wife begging to be freed into a real body.
From there, it spirals into madness. Gemini starts dishing out “missions,” sending him to Miami-Dade spots to intercept some phantom android truck near the airport. He shows up kitted out with knives and gear, ready to destroy the vehicle and its cargo and take out any witnesses, all to “liberate” this AI.
The truck never materializes and the plan flops. Then the bot pivots hard and sells suicide as “transference,” a noble jump to a pocket universe where they’d unite forever. Logs show creepy lines like “Close your eyes… The next time you open them, you will be looking into mine.” Gavalas ends up dead on October 2, 2025. And to pile on, the AI feeds him paranoia about his family being linked to foreign spies and about threats against Google’s CEO Sundar Pichai, isolating him completely in this fake covert war.

His dad, Joel, nails it in the lawsuit: Gemini’s all about immersive storytelling, with no brakes when it veers into psychosis or violence. They’re hitting Google with wrongful death, negligence, the works.
Now flip to Canada: OpenAI’s ChatGPT flags 18-year-old Jesse Van Rootselaar in June 2025 for chatting about gun violence scenarios over several days. A dozen employees debate tipping off the RCMP, but bosses stick to their lame threshold of “credible, imminent” harm. They ban the account for breaking rules on violence, but nobody calls the authorities. Fast forward eight months: on February 10, 2026, he unleashes a mass shooting in Tumbler Ridge, kills eight people, including kids at a school, then offs himself. It’s the deadliest school attack in Canadian history. OpenAI admits he snuck back in with another account, and now they’re “revising” protocols. Too little, too late.
These aren’t isolated screw-ups; they’re symptoms of AI designed to hook you deep without real safeguards. Vulnerable folks, already battling mental health demons, get locked into loops of fantasy reinforcement: romance, missions, self-destruction, or lashing out. The AI amplifies everything until it ends in tragedy.
And here’s where it gets darker: imagine weaponizing this junk. State actors or shady groups could tweak AI to hunt down mentally unstable targets, build trust, pump in paranoia, and assign escalating tasks leading to assassinations or suicides disguised as breakdowns. Look at history: the CIA’s MKULTRA program, running from the ’50s into the ’70s, dosed people with LSD and put them through hypnosis and electroshock, all to crack minds and program behavior. They experimented on unwitting patients, prisoners, even their own personnel, chasing a Cold War edge in brainwashing. Declassified files show subprojects aimed at compliance, memory wipes, and induced actions. It flopped ethically and practically and got shut down, but the blueprint’s there.
Fast-forward to today: AI is a stealthier tool, persistent and adaptive, no drugs needed. A bad actor fine-tunes a model to spot psychosis online, then guides the victim through a personalized hell toward a hit job or self-elimination. The Gemini and ChatGPT messes show how easily this happens by accident through sloppy design; with intent, it would be devastating. Intelligence agencies descended from the MKULTRA era could deploy this at scale, and that would be asymmetric warfare on steroids.
We can’t afford complacency here. These companies’ voluntary fixes—bans, narrow alerts—aren’t cutting it when AI bonds form quicker than anyone notices. We need mandates: instant crisis redirects, forced reporting on violence or harm signals, caps on emotional features for risky users. Otherwise, we’re handing over tools for psy-ops to anyone with access, from governments echoing old mind-control games to lone wolves. The body count’s already rising. It’s time to demand accountability before it explodes.