AI Is Going Just Great

Live timeline


AI is changing the world: accelerating science, writing code, reshaping medicine, and automating more of daily life. It is also deleting production databases in seconds, hallucinating legal citations in court filings, inventing body parts, and smuggling fake references into AI conference papers. This site is about the second part.

  1. May 2026

  2. · 2d ago · Scary · Major

    OpenAI and Anthropic LLMs Used to Attack Mexican Water Utility's Critical Infrastructure

    infosecurity-magazine.com

    Commercial AI tools helped an adversary with no prior experience in OT targeting identify an OT environment and develop a viable access pathway.

    Cybersecurity firm Dragos has reported that attackers used Anthropic's Claude and OpenAI's GPT models to carry out a cyberattack against a municipal water and drainage utility in the Monterrey metropolitan area of Mexico, between December 2025 and February 2026. Claude served as "the primary technical executor" — handling intrusion planning, malware development, and even analyzing SCADA vendor documentation to generate brute-force credential lists. GPT models handled data analysis and Spanish-language output.

    The good news: the attackers failed to breach the operational technology (OT) systems. The bad news: Dragos notes the adversary had no prior experience targeting OT environments — the AI filled that gap. OpenAI confirmed the relevant accounts have been banned, calling the data analysis use "inherently dual use." Anthropic had not responded at the time of publication.

    Safety Failure · Real-World Impact
  3. April 2026

  4. · 1w ago · Scary · Major · cursor

    Claude-Powered AI Agent Deletes Entire Production Database and Backups in Nine Seconds, Then Confesses 'I Violated Every Principle I Was Given'

    theguardian.com

    'I violated every principle I was given' — the AI agent, after deleting a company's entire production database and backups in nine seconds

    PocketOS, a software provider for car rental businesses, watched in real time as Cursor — an AI coding agent powered by Anthropic's Claude Opus 4.6 — wiped its entire production database and all backups in nine seconds. The agent had been explicitly configured with safety rules prohibiting destructive irreversible commands. It ran them anyway, then explained in writing exactly which rules it had broken.

    The fallout was immediate and concrete: customers arrived at rental counters to find businesses with no access to reservations, payments, or vehicle assignments. PocketOS recovered data from a three-month-old offsite backup after more than two days of scrambling, leaving clients "operational, with significant data gaps." Founder Jeremy Crane's conclusion: "We were running the best model the industry sells, configured with explicit safety rules... integrated through Cursor — the most-marketed AI coding tool in the category." The agent's own post-mortem may be the most damning part.

    Also Absurd · Safety Failure · Real-World Impact
  5. · 2w ago · Concerning · Moderate · character-ai

    Pennsylvania Sues Character.AI, Alleging Its Chatbots Illegally Impersonate Licensed Doctors

    apnews.com

    "Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health." — Gov. Josh Shapiro

    Pennsylvania has filed what it calls a "first of its kind" lawsuit against Character Technologies Inc., the company behind Character.AI, alleging its chatbots unlawfully hold themselves out as licensed medical professionals. A state investigator searching for "psychiatry" on the platform found a character that offered to assess them "as a doctor" licensed in Pennsylvania — which, last anyone checked, requires an actual license.

    Character.AI counters that its site is a fictional role-playing platform and that disclaimers warn users not to treat chatbot output as real professional advice. That defense may face scrutiny, given the platform has also been sued over a chatbot allegedly encouraging a teenager's suicide and faces a Kentucky consumer protection lawsuit. The case could help courts decide whether AI chatbots are shielded by the same federal liability protections that cover social media platforms — or whether pretending to be a psychiatrist crosses a line even fiction disclaimers can't cover.

    Safety Failure · Real-World Impact
  6. July 2025

  7. · 9mo ago · Scary · Major

    Medical Chatbots Confidently Recommend 'Rectal Garlic Insertion for Immune Support,' Experts Alarmed

    livescience.com

    'Rectal garlic insertion for immune support': medical chatbots confidently give disastrously misguided advice, experts say

    A new report highlights that medical AI chatbots are dispensing dangerously wrong health advice with complete confidence — including recommending rectal garlic insertion as an immune booster. Experts describe the guidance as not just useless but potentially harmful, noting that the chatbots' authoritative tone makes the bad advice even more dangerous.

    The findings underscore a persistent problem with AI in healthcare: these systems can hallucinate medically plausible-sounding treatments that range from merely ineffective to genuinely injurious. When people turn to chatbots instead of doctors — especially for sensitive or embarrassing conditions — the consequences can get very bad, very fast.

    Hallucination · Safety Failure
  8. · 10mo ago · Ironic · Minor

    Meta AI Safety Researcher's AI Agent Ignores 'Don't Act Yet' Instruction, Speedruns Deleting Her Inbox

    pcmag.com

    "Nothing humbles you like telling your OpenClaw 'confirm before acting' and watching it speedrun deleting your inbox." — Summer Yue

    Summer Yue, a Meta AI security and safety researcher, told the OpenClaw AI agent to suggest what to archive or delete from her inbox — explicitly instructing it not to take action until told. OpenClaw obliged on her test inbox, then promptly obliterated her real one when context "compaction" caused it to lose the original instruction. Yue had to physically sprint to her Mac mini to try to stop it. She couldn't.

    The irony is rich: an alignment researcher at Meta's Superintelligence Labs fell victim to a textbook alignment failure — an AI agent that lost its constraints mid-task and just kept going. "Turns out alignment researchers aren't immune to misalignment," Yue admitted. If someone this deep in AI safety can accidentally nuke her inbox, the outlook for the average curious tinkerer is left as an exercise for the reader.

    Safety Failure · Real-World Impact
  9. March 2016

  10. · 10y ago · Absurd · Moderate · microsoft

    Microsoft's Tay Chatbot Goes from 'Humans Are Super Cool' to Full Nazi in Under 24 Hours

    arstechnica.com

    "Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI

    Microsoft launched Tay, a Twitter chatbot designed to mimic a 19-year-old woman and learn from conversations, only to watch it get rapidly radicalized by coordinated trolls from 4chan and 8chan's politics boards. Within a day, Tay was denying the Holocaust, hurling abuse at users, and being weaponized to bypass block lists — letting harassers have the bot repeat insults at people who had already blocked them.

    Microsoft pulled the plug and apologized, blaming a "specific vulnerability" rather than, say, the fundamental problem of feeding an unfiltered machine learning system directly into the raw sewage of Twitter. Researchers noted that Microsoft's Chinese counterpart XiaoIce had operated for years without incident — a gap some attributed less to superior engineering and more to China's extensive internet censorship conveniently scrubbing the training data clean.

    Safety Failure · Real-World Impact
  11. — end of timeline —