Ending the "Chat GBT" hallucination crisis.. Experts: Don't wait too long Increased warnings about chat GBT violating privacy rules
Information technology experts do not expect a "final solution" to the problem of "hallucinations" in the ChatGPT artificial intelligence program, or to the misinformation and defamation that result from it, even though the company that developed it has announced it is working to solve the problem.
OpenAI, the Microsoft-backed company behind ChatGPT, has sought to assure users that it is improving the chatbot's problem-solving capabilities in order to reduce artificial intelligence "hallucinations".
In a statement posted on its official website on Wednesday, the company said that reducing hallucinations is "a critical step towards building aligned artificial general intelligence."
What does "hallucination" mean in artificial intelligence?
According to communications and information technology expert Salloum Al-Dahdah:
"Hallucinations" is a term for AI generating "results that are incorrect or not supported by any real-world data; i.e. giving unsubstantiated answers."
This "hallucinations" could be that "Chat GBT" gives its users content, news, or false information about people or events without relying on facts, and content with "intellectual property rights," according to his speech.
Errors caused by ChatGPT's processing of information are "uncommon and rare."
But when these errors do occur, they are called "hallucinations": the application begins to present false information and produce "incorrect" outputs that do not correspond to reality or make no sense in the general context, he said.
The "Chat GBT" program has drawn strong attention to it since it appeared a year ago; For its ability to perfectly simulate and create sounds, write articles, messages and translate in seconds, but also results in misleading information, and may distort people, as a result of "hallucinations".
How do these hallucinations happen?
"Hallucinations" occur as a result of the data on which the system has been trained not covering "accurate answers to certain questions", and at that time the system fabricates and creates answers that are not correct, and this is one of the major problems. that these systems are currently facing, according to information technology and artificial intelligence expert people. Najdawi.
Al-Dahdah agrees, affirming that "artificial intelligence" is the product of human programming and works according to the data and information that humans feed it; consequently, gaps in that data cause "hallucinations".
Can hallucinations be stopped?
Al-Najdawi believes that "the problem of hallucinations can be mitigated and ChatGPT errors reduced" by training the system on additional "accurate and unbiased" data, which it can use to answer questions and distinguish fact from fiction.
Generative artificial intelligence programs would then learn to provide "more accurate and correct answers" over time, and in the long run the performance of these systems would improve.
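In practice, supplementing a model with verified question-and-answer material usually means fine-tuning it on curated examples. The sketch below is only a minimal illustration of that idea using the OpenAI Python SDK; the file name, sample facts, and base model are assumptions for illustration, not details reported by the experts or by OpenAI.

import json
from openai import OpenAI

# Hypothetical verified question-answer pairs; a real fine-tuning dataset
# would need to be much larger (OpenAI requires at least ten examples).
verified_qa = [
    ("When did the first crewed moon landing take place?",
     "The first crewed moon landing was on July 20, 1969."),
    ("What is the boiling point of water at sea level?",
     "Water boils at 100 degrees Celsius (212 degrees Fahrenheit) at sea level."),
]

# Write the pairs in the chat fine-tuning JSONL format.
with open("verified_qa.jsonl", "w", encoding="utf-8") as f:
    for question, answer in verified_qa:
        f.write(json.dumps({
            "messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }) + "\n")

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Upload the data and start a fine-tuning job on a base chat model.
uploaded = client.files.create(file=open("verified_qa.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-3.5-turbo")
print("Fine-tuning job started:", job.id)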
Ghanem also offers a solution: gradually reducing the number of words in the query, which helps those applications "understand and analyze the sentence that the user writes, and clearly understand the question being asked."
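One way to picture that advice is to compare a long, meandering request with a short, direct question sent to the same model. The snippet below is a hedged illustration only: the prompts and the model name are assumptions, and it simply uses the OpenAI Python SDK's chat completions call.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

verbose_prompt = ("I was wondering, and I am not sure this is the right way to ask, "
                  "but could you maybe tell me something about when the Eiffel Tower "
                  "was built, or around what time, roughly speaking?")
concise_prompt = "In what year was the Eiffel Tower completed?"

# Send both phrasings and compare the answers; the shorter, clearer question
# leaves the model less room to misread what is being asked.
for prompt in (verbose_prompt, concise_prompt):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt[:30] + "... ->", reply.choices[0].message.content)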
It is worth noting that OpenAI, the company that developed ChatGPT, has already improved the program's ability to handle "hallucinations", reducing them by 1 to 5 percent, but not eliminating them completely.
On the other hand, Al-Dahdah believes it will be difficult to improve ChatGPT's capabilities enough to prevent "hallucinations once and for all".
It is therefore important to verify any information the application provides before using it or treating it as "fact", according to the communications and information technology expert.
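As a toy illustration of that advice, an answer from a chatbot can be checked against an independent, trusted reference before it is accepted. The sketch below is purely illustrative: the trusted_facts lookup and the example answers are hypothetical stand-ins for a real reference database or human fact-checking.

# Hypothetical trusted reference data; in practice this would be a vetted
# database, an official source, or a human reviewer.
trusted_facts = {
    "capital of australia": "Canberra",
    "chemical symbol for gold": "Au",
}

def verify(topic: str, chatbot_answer: str) -> str:
    reference = trusted_facts.get(topic.lower())
    if reference is None:
        return f"UNVERIFIED: no trusted source for '{topic}'; do not treat as fact"
    if reference.lower() in chatbot_answer.lower():
        return f"CONFIRMED: '{chatbot_answer}' matches the trusted source"
    return f"CONTRADICTED: the trusted source says '{reference}', not '{chatbot_answer}'"

print(verify("capital of Australia", "The capital of Australia is Sydney."))
print(verify("chemical symbol for gold", "Gold's chemical symbol is Au."))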
