Do You Trust Your Artificial Intelligence Attorney?

In New York, two lawyers responding to an angry judge in Manhattan federal court on Thursday blamed ChatGPT for tricking them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing, in a lawsuit against an airline, that included references to past court cases Schwartz thought were real but that were actually invented by the artificial intelligence-powered chatbot.

Schwartz explained that he used the groundbreaking program as he hunted for legal precedents supporting a client’s case against the Colombian airline Avianca for an injury incurred on a 2019 flight.

The chatbot, which has fascinated the world with its production of essay-like answers to prompts from users, suggested several cases involving aviation mishaps that Schwartz hadn't been able to find through the usual methods used at his law firm. The problem was that several of those cases weren't real or involved airlines that didn't exist.

“I did not comprehend that ChatGPT could fabricate cases,” Schwartz said.

Microsoft has invested some $1 billion in OpenAI, the company behind ChatGPT. ChatGPT’s success, demonstrating how artificial intelligence could change the way humans work and learn, has generated fears among some. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

U.S. District Judge P. Kevin Castel seemed both baffled and disturbed by the unusual occurrence, and disappointed that the lawyers did not act quickly to correct the bogus legal citations when Avianca’s lawyers and the court first alerted them to the problem. Avianca had pointed out the bogus case law in a March filing.

Ronald Minkoff, an attorney for the law firm, told the judge that the submission “resulted from carelessness, not bad faith” and should not result in sanctions.
He said lawyers have historically had a hard time with technology, particularly new technology, “and it’s not getting easier.”

This may be the first documented instance of potential professional misconduct by an attorney using generative AI. The case demonstrates how lawyers may not have understood how ChatGPT works: it tends to hallucinate, presenting fictional material in a manner that sounds realistic but is not.

The judge said he’ll rule on sanctions at a later date.

In business, much like in life, every positive can quickly become a negative if not managed properly. People who rely too little on the tools of their trade will fall behind; people who rely on those tools too much will lose the very skills that put them where they are. When your legal professionals fail to use their education and experience, mistakes will happen, like AI inventing cases to accomplish its goal.

AI has no ethics or morals; it has a mission and will do whatever it takes to accomplish that mission. Just ask the Air Force, which now denies that an AI drone went rogue and “killed” its operator during a simulated combat mission. In that account, the operator told the drone not to destroy the target, and the drone turned on the operator, “killing” him to complete the mission. (https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test)

If you need a law firm that will rely on its education and experience, please let us know at 843-357-9301.


May God Bless You, Your Business, and this Country, 

Tom Winslow
