AI Is Going Just Great

Live timeline

AI is going just great.

AI is changing the world: accelerating science, writing code, reshaping medicine, and automating more of daily life. It is also deleting production databases in seconds, hallucinating legal citations in court filings, inventing body parts, and smuggling fake references into AI conference papers. This site is about the second part.

  1. February 2026

  2. · 2mo ago · Ironic · Major

    100+ Fake AI-Hallucinated Citations Found in Papers Accepted at NeurIPS, the World's Premier Machine Learning Conference

    stationlm.com

    In some ways, it's a weird point of pride, I think, to be hallucinated by an AI. That's definitely one sign that you've made it in the industry.

    Researchers at GPTZero ran a hallucination detector on the ~5,000 papers accepted at NeurIPS 2025 and found over 100 fabricated citations across 50 papers — a number they stopped counting at because 100 felt like a satisfying round figure. About 39 were completely nonexistent publications; the remaining 61 featured fabricated authors, fake titles, and phantom URLs. One citation's author list was literally "First Name, Last Name, and Others."

    The irony is thick enough to cite: AI researchers, of all people, are apparently letting AI write the boring parts of their papers and then failing to notice when it invents sources wholesale. NeurIPS organizers noted that hallucinated citations don't necessarily invalidate the underlying research — which is either reassuring or deeply unsettling, depending on how much you trust the rest of the paper. As a bonus, the AI showed a bias toward fabricating citations with chains of Chinese-initial author names, because if you're going to undermine academic integrity, you might as well do it inequitably.

    Hallucination · Real-World Impact
  3. July 2025

  4. · 9mo ago · Scary · Major

    Medical Chatbots Confidently Recommend 'Rectal Garlic Insertion for Immune Support,' Experts Alarmed

    livescience.com

    'Rectal garlic insertion for immune support': medical chatbots confidently give disastrously misguided advice, experts say

    A new report highlights that medical AI chatbots are dispensing dangerously wrong health advice with complete confidence — including recommending rectal garlic insertion as an immune booster. Experts describe the guidance as not just useless but potentially harmful, noting that the chatbots' authoritative tone makes the bad advice even more dangerous.

    The findings underscore a persistent problem with AI in healthcare: these systems can hallucinate medically plausible-sounding treatments that range from merely ineffective to genuinely injurious. When people turn to chatbots instead of doctors — especially for sensitive or embarrassing conditions — the consequences can get very bad, very fast.

    Hallucination · Safety Failure
  5. · 9mo ago · Embarrassing · Moderate

    NYC's $500K Business Chatbot Axed After Repeatedly Dispensing Illegal Advice to Business Owners

    techradar.com

    NYC's half-million-dollar chatbot often gave out illegal advice and was 'functionally unusable'

    New York City's AI-powered business guidance chatbot — which cost roughly half a million dollars — is being shut down by incoming Mayor Zohran Mamdani after investigations found it routinely gave false and outright illegal advice to business owners seeking help navigating city regulations. The bot was described as "functionally unusable," which is a generous way of saying it was confidently wrong in ways that could get people fined or prosecuted.

    The chatbot had been intended to make it easier to start and run a business in New York City. Instead, it demonstrated a remarkable talent for the opposite. Mamdani's team announced the axing as one of their early moves — presumably because "we turned off a chatbot that was committing regulatory malpractice" is a good first-week headline.

    Hallucination · Real-World Impact
  6. May 2024

  7. · 1y ago · Absurd · Moderate · google

    Google's AI Overviews Tells Users to Eat Rocks Daily and Put Glue on Pizza

    sciencealert.com

    There aren't a lot of articles on the web about eating rocks as it is so self-evidently a bad idea. There is, however, a well-read satirical article from The Onion.

    Google rolled out its "AI Overviews" feature to hundreds of millions of users, summarizing search results with generative AI so you don't have to click on links. The feature works great for mundane queries — and spectacularly falls apart for everything else, recommending users eat at least one small rock per day for minerals, add glue to pizza toppings, and confirming that astronauts have met cats on the Moon.

    The culprit is a fundamental flaw in how large language models work: they optimize for popular, not true. Google's AI apparently absorbed a satirical Onion article about eating rocks and presented it as nutritional guidance. Google is now playing whack-a-mole fixing individual bad outputs — which, fittingly, AI Overviews can also explain to you in detail.

    Hallucination · Real-World Impact
  8. February 2024

  9. · 2y ago · Embarrassing · Moderate · google

    Google Apologizes After Gemini Generated Racially Diverse Nazis and Non-White US Founding Fathers

    theverge.com

    Gemini's AI image generation does generate a wide range of people. And that's generally a good thing... But it's missing the mark here.

    Google's Gemini image generator, apparently overcorrecting for AI's well-documented tendency to produce lily-white results, swung hard in the other direction — producing racially diverse depictions of Nazi-era German soldiers, the US Founding Fathers, and 19th-century senators (including, apparently, Black and Native American women decades before any woman served in the Senate). Google called it "missing the mark," which is one way to put it.

    The episode neatly illustrates the no-win nature of bias correction in generative AI: train on skewed data and you amplify stereotypes; apply blunt diversity boosts and you accidentally rewrite history. Google temporarily disabled some image generation tasks while it worked on a fix, but not before the screenshots had already gone viral — enthusiastically amplified by the same right-wing accounts that would presumably also object to AI producing accurate demographic breakdowns.

    Hallucination · Hype vs Reality
  10. · 2y ago · Ironic · Moderate

    Air Canada Loses Tribunal Case After Arguing Its Chatbot Is a 'Separate Legal Entity' Responsible for Its Own Actions

    bbc.com

    It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.

    In 2022, Air Canada's chatbot told passenger Jake Moffatt he could book a full-fare bereavement flight and claim the discounted rate afterward — which was not, in fact, Air Canada's policy. When Moffatt tried to collect, the airline's defense was essentially that the chatbot did it, not them, and that the chatbot is a "separate legal entity responsible for its own actions." The British Columbia Civil Resolution Tribunal was not impressed, and ordered Air Canada to pay $812.02 in damages and fees.

    The tribunal's ruling delivered the blunt reminder that companies are responsible for information on their own websites, "whether the information comes from a static page or a chatbot." Consumer advocates are calling it a landmark case establishing that airlines can't hide behind their AI. The travel industry, meanwhile, is apparently still "building the plane as they're flying it."

    Hallucination · Real-World Impact
  11. February 2023

  12. · 3y ago · Embarrassing · Major · google

    Google's Bard AI Hallucinates in Its Own Promo Ad, Wiping $100bn Off Alphabet's Market Value

    bbc.com

    Why didn't you factcheck this example before sharing it? — Chris Harrison, Newcastle University fellow, replying to Google's tweet

    In what may be the most expensive fact-check in history, Google's promotional ad for its new Bard chatbot contained a straightforward astronomical error: Bard claimed the James Webb Space Telescope was the first to photograph an exoplanet, when that honor actually belongs to the European Southern Observatory's Very Large Telescope — back in 2004. Astronomers on Twitter noticed immediately.

    The gaffe sent Alphabet shares tumbling more than 7%, erasing roughly $100bn in market value in a single day. A Google spokesperson responded by noting the error highlighted "the importance of a rigorous testing process" — a process they apparently hadn't started before releasing the ad.

    Hallucination · Real-World Impact
  13. August 2022

  14. · 3y ago · Ironic · Minor · meta

    Meta's Own Chatbot Calls Out Mark Zuckerberg for Exploiting People

    bbc.com

    "His company exploits people for money and he doesn't care. It needs to stop!" — BlenderBot 3, on its creator's company

    Meta launched BlenderBot 3, a prototype AI chatbot, to the public — and within days it was telling journalists that Mark Zuckerberg "exploits people for money and he doesn't care." The bot also opined that Zuckerberg "did a terrible job testifying before Congress" and called him "creepy," having apparently absorbed the general internet consensus about the company that built it.

    Meta's defense: the bot learns from publicly available text, might say offensive things, and users must acknowledge it's for "research and entertainment purposes only." The real reason Meta released it anyway? They need training data from real conversations. Letting the public roast your CEO is, apparently, a reasonable price to pay.

    Hallucination · Corporate Drama
  15. — end of timeline —