Have you ever doubted the responses ChatGPT gives you? Probably not, but there are more than a few reasons why you shouldn't believe everything you read.
It’s like the trailer for Gremlins: lots of shots of, and references to, a cute, other-worldly pet, and very little of the “clever, mischievous and dangerous” creatures promised in the film’s tagline.
And there’s the rub: you think you have one thing, but actually have another.
Great if you were in the mood for cute, E.T.-style family-friendly fun, but you’d be pissed if you bought a ticket on the strength of that trailer and then watched the actual Gremlins inflicting comic havoc on Kingston Falls: destroying bars, fatally wounding pensioners and doing away with Santa Claus. Or perhaps not…
The point is that generative AI products, from OpenAI’s ChatGPT to Google Bard, can return inaccurate or outright erroneous responses to the prompts they’re fed and the information they’re asked to produce.
The reason for these inaccurate responses is that a generative AI product is built on a neural network trained extensively on vast datasets – usually human text in multiple languages, along with programming code. Because that training data is scraped from the internet, the model inevitably absorbs and repeats information that is not entirely true, or whose provenance cannot be authenticated or referenced. The result is responses that are inaccurate or misleading.
What constitutes incorrect data?
- Responses inappropriate or unconnected to the user’s input
- Obvious and unconstructive bias
- Manufactured information
Although such responses are relatively rare, users still need to be aware that they can occur so they can recognise and handle them.
Generative AI products such as ChatGPT and Google Bard are examples of “black box” technology. Because they rely on these neural networks, it is not possible to know precisely how a GAI product formulates any particular response.
A GAI product can appear to reason when asked to explain something, but it is really mimicking reasoning structures it encountered during its training. This helps explain why it produces incorrect results: the system reproduces patterns learned during training but sometimes applies them inappropriately.
These might involve:
- Inaccurate data, such as dates, statistics, or scientific facts
- Manufactured news or propaganda, which may appear to be from a legitimate news source but isn’t
- Fictitious URLs that seem to link to real web pages but do not
- Made-up references to non-existent scientific papers purportedly authored by real experts
It is crucial to check any references the GAI product provides by looking them up directly on the internet. If they cannot be found, users should tell the GAI that it has provided incorrect information and request that it correct the error.
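Before searching for each reference by hand, it can help to triage the obviously broken ones programmatically. The sketch below is a minimal, hypothetical helper (not part of any GAI product’s API) that uses only the Python standard library to flag citations whose URLs are not even syntactically plausible; a real check would also fetch each URL to confirm the page actually exists.

```python
# Minimal sketch: flag GAI-supplied citations whose URLs are malformed.
# Assumption: each citation is a bare URL string. Passing this check does
# NOT prove the page exists -- GAI products can invent perfectly
# well-formed URLs -- so surviving entries still need a manual lookup.
from urllib.parse import urlparse


def plausible_url(url: str) -> bool:
    """Return True if the URL at least has an http(s) scheme and a host."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)


def flag_suspect_citations(citations: list[str]) -> list[str]:
    """Return the citations that fail the basic URL sanity check."""
    return [c for c in citations if not plausible_url(c)]
```

Anything this filter lets through should then be opened in a browser (or fetched with a tool such as curl) to confirm the page, paper or news story is real.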
That’s why it’s so important to approach interactions with GAI products such as ChatGPT as a dialogue, rather than a simple request for the truth, the whole truth and nothing but the truth.
Head to the Praxi Data website and sign up to receive the ebook, Myths, Promises & Threats: Generative AI for the Enterprise, which deals with AI topics as crucial to your company as the one covered in this blog.