
Washington Doesn’t Need Beijing’s AI Playbook

Within hours of the second Trump Administration taking office, the President rescinded a Biden-era executive order that required frontier artificial intelligence (AI) labs to share safety test results with the federal government. President Trump labeled the requirement one of his predecessor’s most “radical practices,” one that threatened to slow innovation and cede the country’s lead in AI development to China. In the months since, the Administration has taken a hands-off approach, resisting the federal government’s usual urge to intervene in emerging technologies.

Sixteen months later, the Administration is reportedly building something that looks remarkably similar, and arguably more intrusive, from scratch.

The New York Times reported earlier this week that the White House is considering creating an interagency review process for AI models before public release. The proposal would give agencies such as the National Security Agency (NSA) and the Office of the National Cyber Director (ONCD) first access to frontier models, effectively requiring government sign-off before release. This appears to be motivated by Mythos, a new AI model released by Anthropic, one of the leading AI labs, which has demonstrated advanced capabilities in autonomously detecting and exploiting software vulnerabilities.

While the White House may view this interagency review as a narrow, necessary precaution against advanced cyber threats, history shows that government chokepoints expand if left unchecked. An administration that has otherwise championed a light-touch regulatory environment on emerging technology would be building exactly the kind of bureaucratic bottleneck that slows innovation and invites political capture. If we want to see how quickly well-intentioned expansions can devolve into state control over technology and a “permission-slip” innovation ecosystem, we only need to look across the Pacific.

The Beijing Playbook

In 2022, the Chinese government began requiring internet platforms to register their algorithms with the state.

These efforts expanded in 2023 as Beijing required all public-facing generative AI services to undergo security assessments and register with the government before deployment. The “algorithm registry” and safety reviews were originally designed as a lightweight, post-deployment requirement. Over time, however, regulators began withholding approval until they were satisfied with the models’ safety and their ideological alignment with what the government calls “core socialist values,” effectively turning the registration process into a licensing regime.

Today, the Chinese government holds veto power over what AI products its citizens can access and what those products are permitted to say. Without caution, Washington’s justifiable initial steps could come to mirror Beijing’s very quickly.

If the White House pre-vets AI models before deployment, the executive branch wields immense leverage over the frontier labs. Since government approval would be the only thing standing between a model and the revenue its public deployment generates, an administration can extract concessions from frontier labs. While these concessions might initially serve the public good, they can easily become a tool for authoritarianism and restricting free speech. For example, today’s condition for government approval might be “show us your model has guardrails against helping users build an offensive cyber weapon.” Over time, and especially across different administrations, the condition could easily become “show us your model doesn’t produce outputs we consider harmful” or “explain why your model disagrees with federal policy on X.”

But the Administration does not need coercive authority to accomplish its goals. Rather, the federal government can use existing voluntary frameworks that safeguard American consumers and businesses without holding frontier models hostage for political concessions.

The Center for AI Standards and Innovation (CAISI), located in the Commerce Department, already conducts rigorous evaluations of frontier models before deployment on a voluntary basis. Anthropic and OpenAI have made their models available for pre-deployment testing since 2024, and many of the other leading labs did the same on Tuesday of this week. As Dean Ball and Kevin Frazier argued this week, labs could agree to share models with materially new capabilities with CAISI a few weeks before public release, giving the government time to identify threats and coordinate a response through existing channels. This would require no new legislation and no novel legal authority.

A regulatory choke point designed with the best of intentions today will inevitably become a tool of political power for the administration that inherits it tomorrow. As we have already witnessed in Beijing, even narrowly drawn, light-touch regulations can turn into a bottleneck for AI development. The current administration, thankfully, already has an off-ramp. The federal government should continue to use CAISI to evaluate frontier models before they are deployed, since the center is already established and labs are participating in its evaluations. An administration that campaigned against a permission-slip government for AI should not rebuild one using Beijing’s playbook.

