‘Hallucinate’: Cambridge Dictionary’s Word of the Year 2023 and its AI Implications

“The selection of ‘hallucinate’ as the Word of the Year in 2023 was intended to highlight the challenges that AI presents, as well as the importance of responsible AI development and deployment.”


Every year, the Cambridge Dictionary selects a word that captures the essence of the times, reflecting cultural, technological, or societal shifts. In 2023, it chose ‘hallucinate’ as the Word of the Year, a nod to the growing significance of AI in our lives.

This choice was intended to highlight the challenges and opportunities that AI presents, as well as the importance of responsible AI development and deployment. More specifically, it reflects the tendency of AI systems to produce inaccurate or misleading information.

Historically, “hallucinate” referred to a psychological phenomenon where a person perceives something unreal, often visual or auditory, that doesn’t actually exist. Medical and psychological professionals have long used this term to describe experiences like seeing visions or hearing voices that aren’t present.

In everyday language, ‘hallucinate’ often suggests that a person’s mind is playing tricks on them.

The Cambridge Dictionary defines ‘hallucinate’ as follows:

to seem to see, hear, feel, or smell something that does not exist, usually because of a health condition or because you have taken a drug.

The AI Connection: Redefining “Hallucinate”

In 2023, “hallucinate” took on a new meaning, reflecting the growing influence of Artificial Intelligence in our lives.


In the context of modern technology, “hallucinate” describes a scenario where an AI system generates information that appears accurate but is actually false or misleading.

These AI “hallucinations” are errors in which the system fabricates details, facts, or even entire narratives and presents them as if they were true.

For instance, AI tools might produce seemingly credible yet entirely fictional content, posing significant challenges in fields like journalism, education, and law.

The AI-driven usage of “hallucinate” sparked widespread discussion and led to a significant increase in searches for the term. The surge in interest was largely due to users and developers grappling with the implications of these “hallucinations.”

Cambridge Dictionary’s criteria for selecting the Word of the Year often include factors like search frequency, relevance to current events, and the word’s role in shaping public discourse.

In this case, “hallucinate” met all these criteria. It reflected the growing concerns about AI’s reliability and the impact of its errors on society.

Notable AI Hallucinations So Far

  1. ChatGPT’s Fabrication of Sources:
    • In multiple instances, ChatGPT generated citations for non-existent books, articles, or authors when asked for references. This issue led to concerns among educators and researchers who initially trusted these AI-generated citations. (NIH)
  2. Google Bard’s James Webb Telescope Error:
    • During a demonstration in February 2023, Google Bard, an AI chatbot, incorrectly claimed that the James Webb Space Telescope took the first pictures of an exoplanet. The information was false but presented as factual. (The Verge)
  3. Medical Misinformation in AI Chatbots:
    • In healthcare settings, AI chatbots gave patients incorrect medical advice. For example, one chatbot inaccurately recommended medication dosages, which could have led to dangerous consequences if taken seriously. (UF Health)
  4. False Legal Information:
    • A legal AI assistant provided incorrect legal precedents and case law in response to queries. This led to legal professionals receiving potentially misleading advice. (The Conversation)

The Broader Impact of AI on Language – beyond ‘hallucinate’!


The selection of “hallucinate” is part of a broader trend where AI is influencing the evolution of language. As the technology continues to integrate into various aspects of life, it brings new vocabulary and reshapes the meanings of existing words.

Terms like “algorithm,” “deepfake,” and now “hallucinate” are just a few examples of this expanding vocabulary.

Ultimately, the selection of “hallucinate” highlights the importance of responsible AI development and deployment. To minimize harmful consequences, we must design and use AI systems responsibly as they become more integrated into our society. This includes developing robust evaluation methods, addressing biases, and promoting transparency in these systems.

The Way Forward

AI-generated hallucinations can have severe consequences in critical fields like healthcare, law, and finance if people mistake false information for truth. This concern underscores the need for transparency in how AI systems are developed and used. In addition, it is important to maintain human oversight in decision-making processes.

To address these risks, it is crucial for developers, policymakers, and the public to remain vigilant.

Here are additional steps to consider, according to IBM:

  • Use high-quality training data: Ensure the data used to train the AI model is diverse, balanced, and well-structured.
  • Define the model’s purpose: Clearly outline the AI model’s intended use and limitations.
  • Use data templates: Provide a predefined format for input data to guide the model’s output (see the sketch after this list).
  • Limit responses: Set boundaries for the AI model to prevent excessive or irrelevant outputs.
  • Test and refine continually: Rigorously test and evaluate the model, making adjustments as needed.
  • Rely on human oversight: Have a human review AI outputs to catch and correct hallucinations.
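
To make a few of these points concrete, below is a minimal, framework-agnostic sketch in Python of how a data template, a response limit, and a human-review flag might fit together. The call_model function, the template wording, and the MAX_ANSWER_CHARS limit are illustrative placeholders, not part of IBM’s guidance or any particular library’s API.

```python
# A minimal sketch of three mitigations from the list above: a data template
# that constrains the output format, a crude response limit, and a flag that
# routes suspicious answers to a human reviewer. `call_model` is a hypothetical
# placeholder for whichever LLM client you actually use.
import json

ANSWER_TEMPLATE = (
    "Answer the question using ONLY the sources provided.\n"
    'Respond as JSON: {"answer": "<text>", "sources": ["<source id>", ...]}.\n'
    'If the sources do not contain the answer, respond: {"answer": "UNKNOWN", "sources": []}.\n'
    "\nSources:\n<SOURCES>\n\nQuestion: <QUESTION>\n"
)

MAX_ANSWER_CHARS = 600  # arbitrary limit to discourage long, rambling output


def call_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an HTTP request to your provider)."""
    raise NotImplementedError


def ask_with_guardrails(question: str, sources: dict) -> dict:
    """Ask a question against known sources and flag likely hallucinations."""
    source_block = "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    prompt = ANSWER_TEMPLATE.replace("<SOURCES>", source_block).replace("<QUESTION>", question)
    raw = call_model(prompt)

    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        parsed = None
    if not isinstance(parsed, dict):
        # The model broke the template: treat the reply as unreliable, not as an answer.
        return {"answer": "UNKNOWN", "sources": [], "needs_human_review": True}

    # Citing sources we never supplied is a common sign of fabrication; so is
    # blowing past the length limit. Either way, a human checks before use.
    unknown_sources = [s for s in parsed.get("sources", []) if s not in sources]
    too_long = len(parsed.get("answer", "")) > MAX_ANSWER_CHARS
    parsed["needs_human_review"] = bool(unknown_sources) or too_long
    return parsed
```

The details will vary by provider and use case, but the pattern (constrain the format, bound the output, and keep a person in the loop) maps directly onto the steps above.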

Ultimately, educating users about the limitations of AI and developing systems that minimize the occurrence of hallucinations are essential steps toward safer and more reliable AI technologies.
