Chatbots

Chats with Legal Chatbots

Chatbots can be divided into two clearly defined eras: Before ChatGPT (BCGPT) and After ChatGPT (ACGPT).

Before ChatGPT (BCGPT)

BCGPT Chatbots were just plain stupid and incompetent. But at least, usually, if they didn’t know the answer they stopped functioning, told you they didn’t know, or offered you a human replacement.

After ChatGPT (ACGPT)

ACGPT Chatbots hallucinated, so you had no idea whether what they were telling you was true or made-up gibberish (as one Judge called it) without double or triple checking the output against more reliable, traditional sources.

Why does ChatGPT Hallucinate?

Large Language Models (LLMs) like ChatGPT hallucinate when they produce text that is irrelevant, contradictory, or false. This happens because LLMs are not trained to verify the accuracy or consistency of their output, but only to make it coherent and plausible. LLMs may also learn from unreliable or fictional sources in their training data, and have no way of telling fact from fiction. From a technical perspective, hallucination in large language models can be attributed to a lack of ground truth from external sources. Ground truth refers to data that accurately represents the real-world phenomena or outcomes that an AI model aims to predict, classify, or generate.
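To illustrate the point, here is a minimal sketch (not how ChatGPT is actually built, and using invented probabilities and a fictional case name purely for demonstration): a language model produces text by sampling from a distribution over plausible continuations, and nothing in that loop consults an external source of ground truth, so a fluent but false continuation can be just as likely as a true one.

```python
import random

# Toy illustration only: a hypothetical "model" assigning probabilities to
# continuations of a legal prompt. The numbers and case name are invented;
# a real LLM learns its probabilities from vast amounts of training data.
next_token_probs = {
    "The case of Smith v Jones established": {
        "a duty of care": 0.40,          # plausible and true-sounding
        "strict liability": 0.35,        # equally plausible, possibly false
        "a fictional precedent": 0.25,   # the model cannot tell the difference
    }
}

def generate(prompt: str) -> str:
    """Sample a continuation purely by probability, with no fact-checking step."""
    options = next_token_probs[prompt]
    choices, weights = zip(*options.items())
    continuation = random.choices(choices, weights=weights, k=1)[0]
    # Nowhere is the continuation verified against a legal database or any
    # other source of ground truth; coherence is the only criterion.
    return f"{prompt} {continuation}"

print(generate("The case of Smith v Jones established"))
```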

Chatbots BCGPT

I wrote a series of posts on chats with legal chatbots BCGPT.

There were three such chats:-

There was also a post incorporating a chat that was not, strictly speaking, with a ‘legal chatbot’:-

Chatbots + ChatGPT

ACGPT, I have been blogging about the hype created around ChatGPT in Legal Tech and the reality that brought that hype crashing down.

Blog posts so far are:-

New Chatbot Chats?

If you know of a legal ChatGPT bot you think I should have a chat with, then do let me know:-