As one of the most transformative technologies ever created, artificial intelligence (AI) is likely to reshape nearly every facet of society. While we are far from seeing AI’s full capabilities, the guardrails put in place now will influence how this technology shapes work and productivity, democracy, war and peace, and much more moving forward.
Two successive administrations have attempted to address these issues, each with different interests and styles, but with some surprising overlap in objectives and policies. The next president will face major decisions on how to govern increasingly powerful and disruptive AI systems and how to prioritize challenges related to safety, innovation, competition, and risk. If past is prologue, the actions of both the Biden and Trump administrations offer some clues as to how the future rules of the road might be written.
The Trump administration’s approach
Though AI had not yet burst into the public consciousness during the Trump administration, its policies toward AI governance generally focused on the promotion, development, and adoption of AI systems. In four years, the administration issued two major executive orders. The first established the American AI Initiative, which was later codified into law. It set forth five guiding principles, including training American workers, fostering public trust and protecting civil liberties, and engaging internationally. The second outlined principles for the government’s use of AI, directed the Office of Management and Budget to develop guidance for implementing the principles, required agencies to publish inventories of their AI use cases, and added an AI track to the Presidential Innovation Fellows program. Despite this relatively nuanced approach, some at the time were critical of the Trump administration’s limited attention to risk mitigation.
Internationally, the Trump administration’s economic strategy sought to deny adversaries, namely China, access to advanced technologies, and export controls became a core feature of the United States’ China policy. In 2019, Donald Trump’s Department of Commerce (DOC) added Huawei and its affiliates to the Entity List, restricting the sale of goods and services to them without a license from DOC’s Bureau of Industry and Security (BIS). In 2020, to counter China’s military modernization efforts, BIS also added SMIC, China’s leading state-owned semiconductor foundry, to the list and ruled that it would approve no further licenses to export equipment used to manufacture chips at the 10-nanometer node and below (the kinds used in advanced AI systems).
The Trump administration also engaged in international fora around the development of AI norms. It supported the adoption of the Organisation for Economic Co-operation and Development’s 2019 AI principles, became a founding member of the Global Partnership on AI in 2020, and joined the United Nations’ Conference on Certain Conventional Weapons’ Group of Governmental Experts in unanimously adopting a set of 11 guiding principles on the use of lethal autonomous weapons.
The Biden administration’s approach
In addition to promoting the development and adoption of AI, the Biden administration’s approach to AI policy has placed more emphasis on risk and harm mitigation. One of its first major governing documents was the Blueprint for an AI Bill of Rights, a set of values published in October 2022 that, while not legally binding, were intended to help guide the development of safe and responsible AI.
The viral launch of ChatGPT one month later marked a paradigm shift in the broader conversation in Washington. Of the 298 bills relevant to AI governance and usage introduced since the start of the 115th Congress, 183 were proposed after ChatGPT’s launch (Figure 1). Despite a flurry of proposals and a series of nine expert AI Insight Forums convened in the Senate (and a subsequent roadmap), Congress has so far failed to pass comprehensive AI legislation. Instead, the White House has steered AI governance efforts, cutting across a wide range of executive authorities. AI policy is thus particularly vulnerable to change if there is a transition to a second Trump administration in 2025.
In July 2023, the Biden administration developed a set of Voluntary AI Commitments on safety, security, and trustworthiness with a number of leading developers. While these are not legally binding, most frontier model developers have signed on and joined an industry association to guide their implementation. The centerpiece of the Biden administration’s AI policy is its October 2023 executive order, which invokes the Defense Production Act to mandate that developers of advanced AI systems share safety test results with the U.S. government, imposes know-your-customer requirements on cloud service providers, and sets standards for federal agencies on AI safety, security, and privacy. The administration also released a memorandum from the Office of Management and Budget which set forth binding guidelines on AI governance, innovation, and risk mitigation for federal agencies.
Beyond domestic policy, the Biden administration has engaged in AI governance conversations in bilateral and multilateral fora, including cooperation around safe and transparent AI systems through the U.S. and European Union’s Trade and Technology Council, implementation of the G7’s Hiroshima Process to leverage the benefits of AI while guarding against potential harms, and cooperation on safety and evaluation standards between the U.S. AI Safety Institute and an international network of analogous organizations. The United States signed a Statement of Intent to collaborate on AI safety with other countries, committed to ongoing dialogue between the EU AI Office and U.S. AI Safety Institute, and signed a partnership agreement with the United Kingdom’s AI Safety Institute, which recently announced it would open a branch in San Francisco. The United States has also worked with partners on a political declaration on AI in the military and has continued to engage at the U.N.’s Conference on Certain Conventional Weapons’ Group of Governmental Experts.
Perhaps most notably, the Biden administration restarted bilateral dialogues with China on key “technical and policy” issues, where the two sides discussed their approaches to AI safety and risk management. Despite this dialogue, the administration not only expanded Trump’s tariffs in some sectors but also oversaw a “seismic shift” to more restrictive export controls on advanced semiconductors and manufacturing equipment. The administration also issued an executive order requiring outbound investment screening for critical emerging technology industries.
Looking forward
Throughout the Biden administration, Vice President Kamala Harris, the Democratic Party’s likely nominee for president, has taken a leading role in AI policy efforts. She led the U.S. delegation to the United Kingdom’s AI Safety Summit and facilitated a dialogue with the CEOs of leading AI developers, among other roles. Absent movement in Congress, a Harris administration will continue to be limited to existing executive authorities. It will also likely maintain many of the policies of the previous administration, with the Biden executive order (EO) as a guiding document and a continued focus on risk mitigation and responsible development and investment.
Trump, by contrast, has vowed to rescind the Biden EO “on day one” and the Republican Party platform would repeal the EO and replace it with something that supports “AI Development rooted in Free Speech and Human Flourishing.” This aligns with a backlash among conservative researchers, commentators, and policymakers, who have railed against the EO as an abuse of the Defense Production Act that will stifle innovation through unnecessary regulation. The most significant effect of rescinding the EO would be the end of the reporting regime for frontier model developers and cloud service providers. These requirements are one of the only legally binding transparency mechanisms for companies that develop or provide resources for the most powerful, and potentially dangerous, AI systems, and their removal would be a huge blow to the movement for guardrails on frontier AI.
Despite open confrontation with China on emerging technologies and export controls, the Biden administration has engaged in dialogue on key areas of mutual concern, as evidenced by its bilateral exchanges on the climate and AI, its push to discuss nuclear risk reduction, and its participation with China in the U.K.’s AI Safety Summit and the resulting Bletchley Declaration. Under a potential Harris presidency, it is likely these efforts will continue.
The Trump administration oversaw a dramatic shift in diplomatic engagement with Beijing; Trump has made clear he remains firm in his opposition to China, and commentators at the Heritage Foundation have argued that working with China on AI empowers an adversary to the detriment of U.S. technological leadership. Given this criticism and U.S. conservatives’ broader rhetorical tenor toward China, it is hard to imagine track one diplomatic engagement continuing uninterrupted. It is even harder to imagine agreement between the United States and China on shared rules and principles governing advanced AI systems beyond areas where there is a clear mutual interest in avoiding inadvertent escalation.
The dual objectives of stifling China’s tech industry and ensuring American leadership in the development of powerful AI systems reveal clear contradictions in Trump’s broader tech agenda: antagonism toward China, an opportunistic policy shift on TikTok, and a disdain for Big Tech censorship accompanied by a cozy relationship with some parts of Silicon Valley. Many of the leading developers of frontier AI models today are the same companies that Trump believes have wronged him and that conservative lawmakers criticize for exerting too much power in digital markets and silencing conservative speech. Undercutting those companies may harm America’s ability to guide the development of frontier models. How the Trump administration reconciles these competing priorities is anyone’s guess, though several tech leaders are betting it will net out in their favor.
There are, however, several areas where we might instead expect continuity. Export controls on advanced semiconductor manufacturing equipment began under the Trump administration and accelerated under the Biden administration, and this type of restriction is likely to continue or be tightened. There is some indication that Harris could continue this targeted approach in a new administration, though she has been critical of Trump’s tariffs in the past. Trump, who has said that he would impose a tariff of greater than 60% on Chinese goods, would likely take an even more aggressive posture on trade with China in a second term. Importantly, the implementation of the export control regime for semiconductors depends, in part, on compliance from allies, in particular the Netherlands and Japan. Under a more transactional “America First” foreign policy, the Trump administration may struggle to maintain support from key partners who control vital chokepoints in the semiconductor supply chain. This could make it even more difficult to thwart China’s AI and military progress.
Should Trump win a second term in office, we are also likely to see a continued focus on the extreme risks of AI (e.g., the misuse of AI in biotechnology) but less attention to AI’s impact on civil rights. A key focus of the Biden administration’s AI policy has been the technology’s impact on workers, marginalized communities, and other groups that are particularly vulnerable to disruption from transformative technologies. In her address ahead of the Bletchley Summit, Harris rejected a narrow focus on existential risk, remarking that harms to individual rights and well-being “also feel existential” to those they affect. Given the strong deregulatory bent of the first Trump administration and the Republican Party more broadly, coupled with Trump’s hostility toward ideas like diversity, equity, and inclusion and “wokeness,” these challenges may be deprioritized in a second Trump administration’s AI policy.
There does seem to be general convergence around AI safety and the potential catastrophic risks from frontier AI systems as important policy issues. Trump himself has called AI a “superpower” that is “very disconcerting” and even opponents of Biden’s EO have recognized the need for guardrails. Though there is widespread opposition among Republican leaders to perceived executive overreach by the Biden administration, the earlier voluntary commitments could be the type of nonbinding, self-governance measures that a deregulatory second Trump administration might favor.
Commentary
What does the 2024 election mean for the future of AI governance?
July 25, 2024