Those who read me regularly know that, in general, I have a fairly positive opinion of ChatGPT, the OpenAI chatbot whose launch kicked off the lightning-fast integration of generative artificial intelligence models into all kinds of services. Had it not been for its debut at the end of last year, the new Bing, Google Bard, the various “copilots” and other such tools would surely still be in an embryonic phase.
However, having a positive opinion of a service does not mean being unaware of its problems and limitations. In fact, the first time I wrote about ChatGPT in MuyComputer it was precisely to focus on those problems, as you can see here. On that occasion I described three types of causes behind the chatbot’s erroneous responses, and I saved the most worrisome, as I pointed out at the time, for last.
I am talking, of course, about hallucinations, a problem that is difficult to solve completely and that, to this day, forces us to check ChatGPT’s answers against an external, reliable source to confirm that the algorithm has not decided to let its imagination run wild. Otherwise, if we trust an answer blindly, and especially if we do so in a professional context, we expose ourselves to serious problems and very embarrassing situations.
That is exactly what happened to Steven A. Schwartz, a New York lawyer who inadvertently used false information generated by ChatGPT in a lawsuit. Specifically, it was a case in which he represented a plaintiff against Avianca over an incident that occurred on a flight between El Salvador and New York, during which Schwartz’s client was accidentally struck in the leg by one of the trolleys the cabin crew use to serve food and drinks and offer in-flight shopping.
In his filing, the lawyer cited several judicial precedents for similar cases but, as you have surely guessed, his source was none other than ChatGPT. And yes, as you have probably also guessed, the references to previous cases provided by the chatbot were either false or incorrect. So now it is the lawyer who has to explain himself to the court for having submitted false information in his lawsuit, as we can read in Reason, which also reproduces part of the communications between the parties.
Schwartz’s actions do not appear to have been intentional, since even the most minimal verification of those rulings would have exposed them, just as it eventually did. So I think his statement that he over-relied on ChatGPT is honest, and I am fairly sure he has learned his lesson and it will not happen again. And, without a doubt, the episode serves a very useful purpose: it is a great reminder that a chatbot cannot be your only source… unless you don’t mind ending up in trouble.