May 7, 2024

OpenAI is already working on a way to prevent ChatGPT from hallucinating | Technology

ChatGPT is famous for its remarkable ability to answer almost any question; however, its answers are not always accurate. There have been cases in which it hallucinated: a lawyer, for example, used artificial intelligence to prepare a legal brief, and the tool invented legal precedents.

Faced with this situation, the company behind ChatGPT is working on ways to keep its AI from hallucinating. Specifically, the new strategy relies on training AI models “to reward themselves whenever they take a correct step, so that the model is not rewarded only for the final conclusion it reaches,” the report points out.


This approach is called “process supervision,” and it would make the AI easier to interpret and closer to human reasoning. “In the end, the model learns not only from its mistakes, but also from how it arrived at the wrong conclusion in the first place,” media reports note.
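To make the idea concrete, here is a minimal Python sketch of the difference between rewarding only the final answer (outcome supervision) and rewarding every reasoning step (process supervision). The step texts, per-step scores, and function names are illustrative assumptions, not OpenAI's actual code or data.

```python
# Minimal sketch (hypothetical, not OpenAI's implementation): contrast
# "outcome supervision", which scores only the final answer, with
# "process supervision", which scores every reasoning step.

from typing import List


def outcome_reward(final_answer_is_correct: bool) -> float:
    """Outcome supervision: a single reward for the final conclusion only."""
    return 1.0 if final_answer_is_correct else 0.0


def process_reward(step_scores: List[float]) -> float:
    """Process supervision: average of per-step scores, so the model is
    rewarded for each correct step, not just for the end result."""
    return sum(step_scores) / len(step_scores)


# Hypothetical chain of thought with per-step correctness labels
# (in practice these would come from human raters or a trained reward model).
steps = [
    "The invoice total is 120 plus 30.",  # sound step        -> 1.0
    "120 + 30 = 160.",                    # arithmetic slip   -> 0.0
    "Therefore the client owes 160.",     # built on the slip -> 0.0
]
step_scores = [1.0, 0.0, 0.0]

print("outcome reward:", outcome_reward(False))        # 0.0: only says the answer is wrong
print("process reward:", process_reward(step_scores))  # ~0.33: credits the correct first step

# The step-level scores also pinpoint where the reasoning broke down:
first_error = step_scores.index(0.0)
print("first faulty step:", steps[first_error])
```

The point of the sketch is that the step-level signal not only penalizes a wrong answer but also identifies the exact step where the reasoning went wrong, which is what makes the approach easier to explain.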

Although OpenAI did not invent this system of reviewing the entire reasoning process, it “is giving it a big push so that it ends up being implemented in its AI systems,” Genbeta points out.

For now, it is not known when OpenAI will begin integrating this new strategy into services such as ChatGPT; most likely, the method is still under research.
