OpenAI's ChatGPT is falling short of the EU's data accuracy standards, despite concerted efforts to minimize factually incorrect outputs, according to the EU's privacy watchdog.
What Happened: A task force at the EU's privacy watchdog has found that Microsoft Corp.-backed OpenAI's efforts to reduce factually incorrect output from ChatGPT are not enough to bring the chatbot into compliance with the bloc's data accuracy rules.
The report emphasized that data accuracy is a fundamental principle of the EU's data protection rules. It noted that the probabilistic nature of the system and the current training approach could produce biased or fabricated outputs.
The report also warned that end users are likely to treat ChatGPT's outputs as factually accurate, including information about individuals, regardless of whether they actually are.
OpenAI did not immediately respond to a request for a statement from Benzinga.
Why It Matters: The EU has been leading the charge in implementing stringent AI regulations. In March, the EU introduced historic AI regulations, setting a new standard for tech companies, including Apple Inc.
In April, the EU cleared Microsoft's $13 billion investment in OpenAI, following a formal probe. This decision was seen as a relief for tech giants, who are increasingly investing in AI technologies.