Former and current employees at ChatGPT's parent company, OpenAI, have expressed concern that safety is not being taken seriously enough at the now-for-profit organization.
Such mistrust has also penetrated the highest levels of government.
What Happened: Five senators - Brian Schatz (D-HI), Peter Welch (D-VT), Ben Ray Luján (D-NM), Mark Warner (D-VA) and Angus King (I-ME) - sent a letter to OpenAI co-founder and CEO Sam Altman on Monday.
The letter, obtained by The Washington Post, urges Altman to take AI safety seriously and seeks additional information on OpenAI's safety plans.
The senators expressed concern about OpenAI's employment practices and reported retaliation against whistleblowers. The authors of the letter also want assurances on safety given OpenAI's work with the U.S. government.
"National and economic security are among the most important responsibilities of the United States Government, and unsecure or otherwise vulnerable AI systems are not acceptable," the letter reads.
"Given OpenAI's position as a leading AI company, it is important that the public can trust in the safety and security of its systems."
The letter proceeds to request information on several OpenAI practices.
The senators ask Altman to confirm that OpenAI will honor its previous commitment to dedicate 20% of its computing resources to research on AI safety. The letter also asks OpenAI not to enforce permanent non-disparagement agreements against employees and to establish protocols for employees to voice safety concerns.
Why it Matters: Many experts, including OpenAI co-founder Elon Musk, who is no longer associated with the company, have voiced concern over AI safety and its ultimate effects on humanity.
Altman's ouster and subsequent reinstatement at OpenAI in 2023 was not linked to safety, but a divide exists among the organization's employees over how to weigh profits against safety. Chief Scientist Ilya Sutskever left OpenAI in May.