OpenAI's ChatGPT Fails To Meet EU's Data Accuracy Standards, Says Privacy Watchdog

OpenAI's ChatGPT is falling short of the EU's data accuracy standards, despite concerted efforts to minimize factually incorrect outputs, according to the bloc's privacy watchdog.

What Happened: A task force at the EU's privacy watchdog has found that Microsoft Corp.-backed (NASDAQ: MSFT) OpenAI's efforts to improve the accuracy of ChatGPT's outputs remain insufficient to comply with the data accuracy principle of the EU's data protection rules, Reuters reported on Friday. The task force published its findings in a report on its website.

The report emphasized that data accuracy is a fundamental principle of the EU's data protection rules. It noted that the probabilistic nature of the system and the current training approach could potentially generate biased or fabricated outputs.

The report also underscored that end users are likely to treat ChatGPT's outputs as factually accurate, including information about individuals, regardless of whether they actually are.

OpenAI did not immediately respond to a request for a statement from Benzinga.

Why It Matters: The EU has been leading the charge in implementing stringent AI regulations. In March, the EU introduced historic AI regulations, setting a new standard for tech companies, including Apple Inc. (NASDAQ: AAPL) and Amazon.com Inc. (NASDAQ: AMZN).

In April, the EU cleared Microsoft's $13 billion investment in OpenAI following a formal probe. The decision was seen as a relief for tech giants, which are increasingly investing in AI technologies.