Former OpenAI policy chief calls for independent AI safety audits with new nonprofit

January 16, 2026 | AI

Miles Brundage, a former senior policy researcher at OpenAI, is launching a new nonprofit institute to address what he sees as one of the biggest gaps in the artificial intelligence industry: the absence of truly independent oversight of safety testing.

Brundage formally announced the AI Verification and Evaluation Research Institute, or AVERI, this week. The organization is built around a clear premise: companies developing the world’s most powerful AI systems should not be responsible for evaluating their own safety claims without external verification. The initiative puts independent AI safety audits at the center of its mission, arguing that current practices rely too heavily on trust rather than standardized assurance.

Why self-policing AI creates structural risk

Brundage spent seven years at OpenAI advising leadership on governance and long-term risks tied to advanced AI systems, including the possibility of human-level artificial general intelligence. He left the company in late 2024 and says the experience revealed a structural weakness across the industry.

Leading AI labs already conduct internal testing and sometimes hire external red teams to probe their systems. However, these efforts remain voluntary, uneven, and largely opaque. There is no requirement that companies follow common reporting standards or allow third parties sustained access to evaluate results. As a result, governments, business customers, and the public are left to accept company disclosures at face value.

In more mature industries, Brundage argues, this approach would be unthinkable. Products ranging from consumer electronics to industrial equipment undergo independent testing to meet established safety benchmarks. AI systems, despite their growing economic and social impact, operate without comparable guardrails.

A think tank, not an audit firm

AVERI is positioning itself as a research and policy organization rather than a commercial auditor. Its focus is on shaping norms, developing frameworks, and influencing policy that would make independent audits a standard expectation across the AI sector.

Alongside its launch, the institute published a research paper coauthored by more than 30 AI safety and governance experts. The paper outlines how external auditing could work in practice, proposing a tiered system of “AI Assurance Levels.” These range from limited third-party testing similar to current practices, up to rigorous, treaty-grade scrutiny designed to support international agreements on AI risk.

The organization has raised $7.5 million toward a $13 million target to fund two years of operations and a staff of 14. Backers include philanthropic groups, technology investors, and individuals with experience inside frontier AI companies, some of whom Brundage says are eager to see stronger accountability mechanisms put in place.

Market forces may drive adoption before regulators

While U.S. federal AI regulation remains uncertain, Brundage believes market dynamics could push companies toward independent AI safety audits even in the absence of new laws. Large enterprises purchasing AI systems may begin demanding audits to limit operational and reputational risk. Insurers could require them as a condition for coverage, particularly for business continuity and liability policies tied to AI-driven processes.

Investors may also play a role. As funding rounds for AI companies reach into the billions, institutional backers are increasingly exposed to downside risk. In the event of failures tied to unsafe or unreliable systems, the lack of independent evaluation could become a material governance issue.

Major AI developers such as Anthropic and Google already face growing scrutiny as they scale their models and pursue commercial deployment across sensitive sectors.

Europe may offer an early blueprint

Outside the United States, regulatory pressure is beginning to align more closely with AVERI’s vision. The EU AI Act, which recently came into force, stops short of mandating audits of model development. However, its accompanying Code of Practice for General Purpose AI calls for external evaluator access to models deemed to pose systemic risk. The law also requires conformity assessments for high-risk AI applications before market entry.

Some legal and policy experts interpret these provisions as laying the groundwork for independent auditing, even if the term itself is not explicitly used.

Building the auditor workforce

One of the largest obstacles to scaling independent AI oversight is talent. Effective auditing demands deep technical expertise, combined with governance, security, and risk management skills. Many individuals with that profile are already employed by, or financially incentivized to join, the very companies that would be audited.

Brundage acknowledges the challenge but sees a path forward through hybrid teams drawn from traditional audit firms, cybersecurity testing groups, academic research, and AI safety nonprofits. Over time, he believes this ecosystem could mature into a professionalized assurance industry.

His goal is to see independent AI safety audits become routine before a major crisis forces the issue. Establishing norms early, he argues, would allow oversight to scale proportionally with risk rather than reactively after harm occurs.
