stock here: I have observed AI lying through its teeth and making mistakes.
Turns out it’s an anomaly. As Turley explained recently, he is among a small group of individuals who have been “effectively disappeared by the AI system.” Other GPT-banned names include Harvard’s Jonathan Zittrain, CNBC’s David Faber, and the Australian politician Brian Hood.
The common thread is that AI generated false stories about each of them. ChatGPT, Turley says, “falsely reported that there had been a claim of sexual harassment against me (which there never was) based on something that supposedly happened on a 2018 trip with law students to Alaska (which never occurred), while I was on the faculty of Georgetown Law (where I have never taught).”
ChatGPT’s solution to misinformation was to simply erase all mention of the names involved. It was an effective, albeit self-defeating, means of combating a real problem – a bit like curing a cancer by killing the patient outright. Today, the chatbot is no longer lying about Jonathan Turley because it is no longer saying anything about him at all.
This misinformation “cure” of disappearing a person completely from the AI universe is an obvious problem. Any student seeking to learn about a pivotal moment in American history (Trump’s impeachment) will not get the whole truth, at least not Turley’s part in it. AI misinformation is a real problem, but this kind of comprehensive censorship is a lazy and counterproductive solution.