OpenAI CEO Sam Altman has said he would oppose any form of government bailout if the company were to fail — a pointed statement that comes amid intensifying scrutiny of the power and responsibility of major AI developers.

Altman made the remarks during an AI ethics forum in San Francisco this week, responding to public backlash over comments from Sarah Friar, OpenAI’s chief financial officer, who suggested that the U.S. government should “safeguard foundational AI” if the private sector faltered.

“If OpenAI fails, it should fail like any company,” Altman said. “AI is critical, but that doesn’t mean we’re above risk or entitled to a safety net. Responsibility includes accountability.”

His comments renewed debate over how much influence AI companies should have in shaping national policy — and whether “too big to fail” could soon apply to artificial intelligence firms.

The Context Behind the Controversy

The controversy began after Friar told CNBC that “AI infrastructure is now as essential as electricity or broadband” and that the government “must ensure continuity” if a collapse at OpenAI or Anthropic disrupted global systems.

Her remarks triggered a political backlash, drawing responses from lawmakers and industry figures — including David Sacks, President Trump’s AI czar, who accused OpenAI of “elitist tech hubris” and said no company should “expect taxpayers to clean up their mess.”

Altman quickly distanced himself from Friar’s statement, calling it “a mischaracterization” of OpenAI’s position.

“We’re not asking for protection,” Altman said. “We’re asking for responsible frameworks that keep innovation and safety balanced.”

The Political Undertone

The exchange highlights the shifting dynamics between Silicon Valley and Washington as AI regulation moves from concept to enforcement. The Trump administration has pushed for a lighter-touch regulatory model that prioritizes competitiveness and national security over intervention, while Democratic lawmakers continue to advocate for broader guardrails on AI development and use.