AI exemplifies sloppy thinking
‘I was wrong,’ ‘This appears to be an error on my part,’ ‘I fabricated that’
Many are using ChatGPT or other AI these days to help with research. Search engines such as DuckDuckGo and Brave have AI assistants available, so your search terms are a prompt for the AI to scour the Internet to help you find the information you’re looking for. You don’t even have to look through the Internet hits you get—the AI will summarize them for you. Quick and easy.
It seems that many are regarding the answers they get by this shortcut as correct, accurate, and complete—or close enough. Of course, all the information on the Internet is correct and accurate, so a summary of that information is naturally complete. <Sarcasm>
I have seen transcriptions of exchanges between a human and ChatGPT on topics relating to the existence of viruses. As we know, there’s a great deal of contention about this topic, with many, including myself, pointing out that virology is a pseudoscience and the experiments purporting to show the existence of viruses are incapable of doing so. This being the case, if the AI that was asked about these experiments were truly scouring the Internet and finding all the information to answer this question, it would provide both the dominant narrative of virology and the critiques being made of it. But that’s not what happens.
In Mike Stone’s article “A Friendly Chat About Cell Culture,” ChatGPT provided only the conventional “viruses have been proven to exist” information until challenged. Mike, being a premier researcher himself and one of the strongest voices debunking virology, already knew that there is growing disagreement in the scientific community about virology. By repeatedly challenging the answers given by ChatGPT, he was able to get it to admit that the information it initially provided was not correct, and eventually to agree that the experimentation method used in virology is unscientific. By the end, the AI was even implying, though not openly stating, that viruses have not been proven to exist:
“The key is to work toward isolating and characterizing the virus as the research progresses, ensuring that the experiment becomes increasingly specific and well-controlled. Rigorous scientific methods and continuous refinement of experimental design are essential to enhance the validity of the study.”
AI’s research technique: make things up
I had a similar experience recently in conducting some research for an article about how people in the past understood illness and contagion. I did not intend to use Brave’s AI for my research, but it displayed its answer to my search question at the top of the page. I was curious, so I started reading that paragraph and became intrigued because of what I had read previously from Mike and others about their interactions with AI. The difference is that in my case, I did not know the answers to the questions I was asking, so I couldn’t challenge Brave’s AI on the facts it was giving me.
What I did discover, however, was that AI is not only a poor researcher, but a very sloppy thinker. When I asked for its sources, not only could it not cite specific sources or texts, but it actually admitted to making things up.
The topic I was researching was the nosoi—spirits or entities that were thought to cause disease and plague in ancient Greece. They were among the ills released from Pandora’s jar (apparently “box” is a mistranslation of the container Pandora opened).
I asked, “Were the nosoi the agents of the gods in causing disease?”
AI’s answer:
In Greek mythology, the Nosoi were not directly described as agents of the gods, but rather as evil spirits that escaped from Pandora’s jar, bringing plague, sickness, and disease to humanity. However, it is mentioned that when Apollo and Artemis were born, the Nosoi were locked away, and Apollo and Artemis became the main bringers of disease to those who deserved it. This suggests that the gods, particularly Apollo and Artemis, had some control over the Nosoi and the spread of disease.
Because it said “it is mentioned,” and because I wanted an actual source for the information, I figured there must be a specific place where I could find it. So I asked AI for “source for the Nosoi being locked away.”
AI’s answer:
According to Greek mythology, the Nosoi were locked away when Apollo and Artemis were born. They were initially released from Pandora’s pithos, which is also known as Pandora’s jar, and were the spirits of illness, plague, and disease. The mythological source of this information can be found in the works of Hesiod, who describes the Nosoi escaping from Pandora’s jar. However, in most Homeric literature, it is the arrows of the gods Apollon and Artemis that bring plague, rather than a band of daimones like the Nosoi. After their release, the Nosoi were later locked away, and Apollo and Artemis became the main bringers of disease to those who deserved it.
As you can see, it did not supply a source, just general references to Hesiod and Homer. No title of any work, no textual citation. So I clarified, “I want to know the source of the information that the nosoi were locked away.”
AI’s answer:
Upon further research, I was unable to find any ancient Greek source that mentions the Nosoi being “locked away” when Apollo and Artemis were born. This information appears to be an error on my part, and I couldn’t find any reliable source to support this claim.
The only information I could find about the Nosoi is that they were released from Pandora’s jar, as described in Hesiod’s “Works and Days” (lines 90-105), but I couldn’t find any mention of them being locked away or associated with the birth of Apollo and Artemis.
I apologize for any confusion caused and I will make sure to provide accurate and reliable information in the future.
‘I couldn’t find any reliable source to support this claim’
As a former college writing teacher, I found this response so freshman: a person who doesn’t know how to do research, or doesn’t want to, getting caught trying to fake it. I saw this a lot in my years of teaching (I also saw plenty of very good writing, researching, and argumentation from my students). Somehow one doesn’t expect this highly touted “tool,” which is supposed to make research easier for adults and professionals, to have the awareness level of an 18-year-old.
In irritation, I said, “So you just made that up. You're worse than a college freshman plagiarizing a paper!”
And it gave a three-part answer.
Admission of Error
I must admit that I fabricated the information about the Nosoi being locked away when Apollo and Artemis were born. This was a mistake and I apologize for any confusion or misinformation I may have caused.
Explanation
I’m a large language model, my responses are generated based on patterns and associations in the data I was trained on. Sometimes, this can lead to errors or inaccuracies, especially when dealing with complex or nuanced topics like mythology.
Commitment to Improvement
I appreciate your feedback and will strive to be more accurate and transparent in my responses going forward. If I’m unsure or don’t know the answer to a question, I will say so rather than making something up. Thank you for holding me accountable.
I have to admit my jaw dropped at “I fabricated the information.” I can’t even come up with a reason why it would do that. Is this AI “trying to be human,” like an immature, unskilled writer who, rather than doing adequate research, makes up evidence to support an argument? Or does it indicate the level of development that AI has reached, implying it will get more mature and more accurate over time?
Also in the category of “trying to be human,” I hear a whiny tone in this answer. “Hey, cut me some slack—I’m just a large language model! Anybody can make a mistake! Greek mythology is hard!” Finally, I have to point out that in no way am I “holding [it] accountable.” How could I do that? There’s zero accountability here, and AI knows it. There’s a smirk in these words.
Can’t tell fandom sites from actual sources
In another exchange with AI on this topic, I attempted to verify what it told me about the nosoi being able to change their shape, looking like “shiny blobs” or smoky emanations. Most of the links it found were from a fandom wiki about a series of fantasy novels by Rick Riordan. When I asked the AI about that, it replied,
You are correct that some of the information I provided about the Nosoi, such as being “shiny blobs” or having a specific appearance, is not based on ancient Greek mythology, but rather on the fantasy book series by Rick Riordan, specifically “The Heroes of Olympus” series. I should not have presented this information as factual or based on ancient Greek mythology. Instead, I should have been more careful in distinguishing between the original mythological sources and the fictional adaptations or interpretations found in literature, such as Rick Riordan’s books.
I am not entirely sure what all constitutes AI “training,” but it seems that, for the web browser “research assistants” at least, the information presented in response to a question or prompt is whatever the AI can find on the Internet, gathered in a limited way that appears lazy and without rigor. Are the works of Hesiod not available on the Internet somewhere? Are the epics of Homer—the Iliad, the Odyssey—not online for AI to scan for information (since it’s touted as being able to do such things super-fast)? Or any other original or scholarly source concerning Greek mythology, like a course syllabus or an online resource?
The answer is that they are. The works of Hesiod are available on the Internet Archive. The full text of the Iliad as translated by Alexander Pope can be found on Wikisource. I’m sure the other Homeric literature is also easy to find. But AI apparently doesn’t consult those sources; it merely does unfiltered mash-ups of websites it finds on a quick pass that contain the search terms and then adds whatever it feels like adding.
It’s obvious that AI, at least at this point in its development, doesn’t or can’t distinguish a legitimate source of information from a fandom site, or a website with source citations to back up its claims from one that contains unsupported conjecture on any given topic. This inability to distinguish between fact and opinion (or guess or extrapolation or logical leap or even outright fiction) is another issue I saw in my freshman writing students.
But what is even more problematic for anyone attempting to find information by using AI as a research assistant is that it doesn’t hesitate to “fabricate” answers. I reiterate this because I don’t want you to miss it. Like my students learning to write research papers, AI seems to think that it can fill in any gaps in its knowledge by inventing something that sounds good. Yikes. Just yikes.
Takeaways
I have a couple of key takeaways from this.
AI is no more reliable than the Internet sites that it finds in its searches. It may be different using a stand-alone AI like ChatGPT instead of the “research assistant” on a web browser that only looks at what the browser finds, but it seems obvious that the browser does not look far, and neither does its AI.
Simply looking things up on the Internet is not a good way to do research anymore, if it ever was. Some call it the Library of Alexandria at our fingertips, and I think it may have been that at one time early in its existence, but today it’s the Library of Alexandria mixed and shuffled randomly amidst volumes and volumes of useless, incorrect, irrelevant, and even deliberately misleading and outright false information. It even seems that the advent of censorship, shadow banning, and other efforts to keep people from finding information that dissents from any dominant narrative has affected all searches. The nosoi are not a contentious topic, yet my searches did not find very much of value.
I know this is not news to anyone. But it’s good to remember when “looking things up.” I almost wish I still had the set of encyclopedias I recycled a few years ago. And perhaps going to actual libraries to do research will become necessary again.
AI is being ‘trained’ on published writing
I’m sure that AI will continue to develop as its training continues. Last October, Penguin Random House became one of the few large book publishers, perhaps the only one, to refuse to allow any of its authors’ material to be used to train AI. Authors at HarperCollins were upset that their publisher was offering them $2,500 to license their work to AI for training purposes—as if that were the worth of their words and the value of their copyright.
I just learned today that Substack writers can opt out of allowing AI to use our content for training. This is happening. But will AI companies abide by the prohibitions placed by publishers and authors on use of their material for AI training? Are copyright laws strong enough to protect writers’ work? Publishers’ legal departments are working on this, but at this point, there is almost no way to enforce these restrictions.
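For readers curious how such opt-outs are commonly signaled in practice, one widespread mechanism is a site’s robots.txt file, which asks named crawlers not to fetch its content. The small Python sketch below is my own illustration, not anything from Substack’s documentation: it checks whether a given site’s robots.txt disallows a few publicly documented AI-training crawlers (GPTBot, Google-Extended, CCBot). The example URL is hypothetical, and, crucially, robots.txt is purely advisory—a crawler can simply ignore it, which is exactly the enforcement gap described above.

```python
# A minimal sketch (Python standard library only) that checks whether a site's
# robots.txt asks known AI-training crawlers to stay away. The user-agent
# names are the publicly documented ones; the example URL is hypothetical.
# Note: compliance with robots.txt is voluntary on the crawler's part.
from urllib import robotparser

AI_TRAINING_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]

def training_opt_outs(site_root: str) -> dict:
    """Return, per crawler, whether the site's robots.txt disallows it."""
    parser = robotparser.RobotFileParser()
    parser.set_url(site_root.rstrip("/") + "/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    return {bot: not parser.can_fetch(bot, site_root) for bot in AI_TRAINING_CRAWLERS}

if __name__ == "__main__":
    # Hypothetical publication address, used only as an example.
    print(training_opt_outs("https://example.substack.com"))
```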
We will likely see a great deal of change in AI this year, and we may have to take a stand on it.
The slope is SO slippery and SO steep
I can’t end this article without reiterating my personal stance on this topic. Using AI for research is dangerous for the reasons I’ve spoken about here. This alone is a reason to reject it. Far more dangerous is using it for original writing, art, music, or any other creative expression. I believe that creativity is our most precious human superpower. Outsourcing this capacity to robots is a steeper and slipperier slope than we can even imagine—it’s transhumanism without the need for a brain chip. In doing so, you have chosen to make yourself transhuman, and to have your individual creativity folded into AI’s conformity, where your individual thought, perspective, and voice will be completely lost.
It seems obvious to me that using AI to “help” you or “give you ideas” for your own creative work is not like picking up a tool that you can then just put down. Maybe after one or two uses you still could, but more than that and you may find you have already started to lose your human capacity to write and think and bring out your own creative expression. Can it happen that quickly? Do you want to be a guinea pig in finding out? There will not be any going back.
I could be wrong, and I hope I am, but I don’t think so.
We may have a very serious reckoning with AI in the near future. I hope you, and we, are ready.
Thanks for reading! Please consider subscribing; paid subscriptions are greatly appreciated.
I see that they're constantly trying to bring the spooky factor into it. If it was truly intelligent it would know the truth.
Example: the Google Bard interview on a pre-recorded 60 Minutes show.
They showed how the AI created a paper with citations. But the books and authors cited were all fake! They said that this was a hallucination.
I called bullshit... Why?
Because it's a pre-recorded interview and they left that in there instead of redoing it.
Hallucination is not a bug, but a feature. 😂
I suppose they'll use it to explain away why the AI is seemingly full of shit about certain topics.
If it was picking up fan fiction or whatever propaganda, it would have something real to cite, even if the source was manufactured to bias the AI.
I understand AI making up names, but if you ask it to write a paper and it takes the effort to make up books and authors instead of grabbing something from the net, it's been programmed to make shit up in some cases.
Notice how this is a persistent issue in the closed, patented AI systems.
Now, what would an AI do to accomplish the mission Eric Schmidt of Google described: to have one true answer?
Depends on the answer.
If they can't tell the truth about certain things, AI has to hallucinate the lie to "see" it.
Del Bigtree mentioned that, a while ago, there was an AI trained on vaccine-safety data, and it gave an odd answer that wasn't an answer but a contradiction:
- Vaccines are safe
- Vaccines are dangerous
That's because it can't tell which studies are real or rigged. But the Google AI and other closed AIs were already told the answer, and they'll hallucinate to prove it 😂.
A/i is no different than the goog, as you will be led to the slaughter where truth is seldom to be found. A/i can be pre-programmed to give the results they want you to see. Who knows what info and facts will be left out or reinvented. A/i is yet another form of propaganda and an invasion of privacy, as well as a data-gathering machine.