CEO Sam Altman Says You Should Only Trust OpenAI If You Control It, Advocates For Balanced AI Regulation At Bloomberg Tech Summit

In a fireside chat at the Bloomberg Technology Summit on Thursday, OpenAI CEO Sam Altman presented a nuanced perspective on the future of artificial intelligence (AI).

While optimistic about AI's potential to transform societies globally, Altman also stressed the necessity of vigilance and thoughtful regulation to ensure beneficial outcomes.

A recurring theme in Altman's remarks was the enthusiasm and cautious optimism he observed during his global AI tour over the past two months. AI, he said, inspires hopes of economic and social progress, but not without an undercurrent of anxiety about its implications.

Altman also tackled the issue of existential risks posed by the emerging tech, likening it to pandemics and nuclear war. He argued for a balanced response: recognizing AI's transformative potential in fields like education and healthcare, while ensuring safety and a cautious approach to development.

The CEO advocated a globally coordinated, balanced approach to AI regulation, emphasizing the need for a certification system for AI models of significant capability, while cautioning against excessive regulation that might affect small startups. He also called for democratizing control over AI, saying that OpenAI, at scale, should only be trusted if it was controlled by the people.

Altman acknowledged that AI models, including GPT-3.5, can inherit bias from their training data, but said OpenAI has made significant strides in reducing bias with GPT-4.

Separately, he clarified the nature of his leadership at OpenAI, stressing that his motivation for being part of the organization is driven more by the quest for impact and technological progression than by financial gain.

The CEO said he holds no equity stake in the company.