When a person perceives something that does not exist, we call it a hallucination. Technologies based on artificial intelligence (AI) can exhibit a similar phenomenon.
AI hallucinations are harmful in situations that demand high accuracy. Image: LinkedIn
When an algorithmic system produces information that sounds plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination. They have observed this behavior in many different types of AI systems, from chatbots such as ChatGPT and image generators like DALL-E to self-driving cars.
What AI fabricates
Hallucinations and their impact depend on the type of AI system. With large language models (LLMs), the technology behind AI chatbots, hallucinations are pieces of information that sound convincing but are incorrect, fabricated, or irrelevant. An AI chatbot may cite a scientific study that does not exist or get historical events wrong, while making them sound credible.
In a 2023 case, a New York lawyer submitted a legal brief he had written with the help of ChatGPT. An attentive judge then noticed that the brief cited a case that ChatGPT had made up. Errors like this can change outcomes in the courtroom.
With AI tools that identify objects in images, hallucinations occur when the tool generates a description that does not match the image. For example, when asked to list the objects in a photo of a woman shown from the chest up talking on a phone, an AI system responded that the woman was talking on the phone while sitting on a chair. Such inaccurate information can have serious consequences in situations that require high accuracy.
The causes of hallucinations
AI systems are built by collecting huge amounts of data and feeding it into a computational system that detects patterns in the data. The system then develops methods for answering questions or performing tasks based on those patterns.
For example, given 1,000 photos of different dog breeds with corresponding labels, an AI system quickly learns to tell a Poodle from a Golden Retriever. But shown a picture of a blueberry muffin, the system may tell the user that it is a Chihuahua.
When the system does not understand the question or the information it is given, it may hallucinate. Hallucinations usually occur when the AI fills in gaps based on similar contexts from its training data, or when it was built on biased or incomplete training data. This leads to wrong guesses, as in the case of the blueberry muffin.
Object-recognition AI can have difficulty distinguishing between a Chihuahua and a blueberry muffin, or between a sheepdog and a mop. Image: Shenkman
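To make this concrete, here is a minimal sketch in Python using made-up numbers in place of real image features (the feature values, class names, and scikit-learn model are illustrative assumptions, not how any particular product works). The point it shows: a classifier trained only on dog breeds has no "none of the above" option, so anything it sees gets mapped onto the closest pattern it has learned, often with a confident score.

```python
# Minimal sketch with synthetic data: a classifier trained on two dog breeds
# cannot say "this is not a dog"; it must pick one of the patterns it knows.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these 2-D vectors are image features for two dog breeds.
poodles = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))
goldens = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(500, 2))
X = np.vstack([poodles, goldens])
y = np.array(["poodle"] * 500 + ["golden_retriever"] * 500)

model = LogisticRegression().fit(X, y)

# "Blueberry muffin" features: far from anything in the training data,
# yet the model must still choose one of the two breeds it knows.
muffin = np.array([[7.0, 8.0]])
print(model.predict(muffin))              # one of the two breeds
print(model.predict_proba(muffin).max())  # typically a very confident score
```

The confident but wrong answer here is the same failure mode as labeling a muffin a Chihuahua: the model fills the gap with the nearest pattern from its training data.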
It is also important to distinguish AI hallucinations from intentionally creative output. When an AI system is asked to be creative, such as when writing a story or generating artwork, users expect novel content. Hallucinations, by contrast, occur when an AI system is asked to provide factual information or perform a specific task, but produces incorrect or misleading content while presenting it as accurate.
The key difference lies in context and purpose: creativity suits artistic tasks, while hallucinations are a problem when accuracy and reliability are required.
To address the problem, companies have proposed using higher-quality training data and constraining AI responses to follow certain guidelines. Even so, hallucinations persist in popular AI tools.
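As one illustration of what "constraining responses with guidelines" can look like in practice, below is a minimal sketch using the openai Python SDK. The model name, prompt wording, and placeholder variables are assumptions for the example, not a method the article prescribes: the idea is simply to instruct the model to answer only from supplied material and to admit uncertainty instead of guessing.

```python
# Minimal sketch (assumes the openai package is installed and the
# OPENAI_API_KEY environment variable is set; model name is illustrative).
from openai import OpenAI

client = OpenAI()

context = "Paste the reference text the answer must come from here."
question = "Your question about the reference text here."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,  # less randomness reduces, but does not eliminate, fabrication
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only using the provided context. "
                "If the answer is not in the context, reply exactly: I don't know."
            ),
        },
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

Constraints like this reduce the chance of fabricated answers, but as the article notes, they do not eliminate hallucinations.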
The consequences of AI hallucinations
A self-driving car that fails to identify an object can cause a fatal traffic accident. A military drone that misidentifies its target can put people's lives in danger.
With AI tools for automatic speech recognition, hallucinations are transcripts that include words or phrases that were never spoken. This is more likely to happen in noisy environments, where the AI system adds new or unrelated words in an attempt to interpret background noise such as a passing truck or a crying child.
As AI systems become more widely used in areas such as health care, social services, and the law, hallucinations in automatic speech recognition can lead to inaccurate clinical or legal records, harming patients, defendants, or families in need of social support.
Although AI companies are trying to reduce hallucinations, users still need to stay alert and question AI output, especially in situations that require high accuracy. Checking AI-generated information against reliable sources, consulting experts, and recognizing the limitations of these tools are necessary steps to reduce the risks.