Biden administration aims to cut AI risks with executive order
U.S. President Joe Biden is seeking to reduce the risks that artificial intelligence (AI) poses to consumers, workers, minority groups and national security with a new executive order signed on Monday.
It requires developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government, in accordance with the Defense Production Act, before those systems are released to the public.
The order, which Biden signed at the White House, also directs agencies to set standards for that testing and address related chemical, biological, radiological, nuclear, and cybersecurity risks.
"To realize the promise of AI and avoid the risk, we need to govern this technology," Biden said. "In the wrong hands AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run."
The move is the latest step by the administration to set parameters around AI as it makes rapid gains in capability and popularity in an environment of, so far, limited regulation. The order prompted a mixed response from industry and trade groups.
Bradley Tusk, CEO at Tusk Ventures, a venture capital firm with investments in tech and AI, welcomed the move. But he said tech companies would likely shy away from sharing proprietary data with the government over fears it could be provided to rivals.
"Without a real enforcement mechanism, which the executive order does not seem to have, the concept is great but adherence may be very limited," Tusk said.
NetChoice, a national trade association that includes major tech platforms, described the order as an "AI Red Tape Wishlist" that will end up "stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation."
Group of Seven Code of Conduct
The new order goes beyond voluntary commitments made earlier this year by AI companies such as OpenAI, Alphabet and Meta Platforms, which pledged to watermark AI-generated content to make the technology safer.
As part of the order, the Commerce Department will "develop guidance for content authentication and watermarking" for labeling items that are generated by AI, to make sure government communications are clear, the White House said in a release.
The Group of Seven industrial countries on Monday will agree on a code of conduct for companies developing advanced artificial intelligence systems, according to a G7 document.
"The truth is the United States is already far behind Europe," said Max Tegmark, President of Tech policy think tank Future of Life Institute. "Policymakers, including those in Congress, need to look out for their citizens by enacting laws with teeth that tackle threats and safeguard progress," he said in a statement.
A senior administration official, briefing reporters on Sunday, pushed back against criticism that Europe had been more aggressive at regulating AI, saying legislative action was also necessary. Biden on Monday called on Congress to act, in particular by better protecting personal data.
U.S. Senate Majority Leader Chuck Schumer said he hoped to have AI legislation ready in a matter of months.
U.S. officials have warned that AI can heighten the risk of bias and civil rights violations, and Biden's executive order seeks to address that by calling for guidance to landlords, federal benefits programs and federal contractors "to keep AI algorithms from being used to exacerbate discrimination," the release said.
The order also calls for the development of "best practices" to address harms that AI may cause workers, including job displacement, and requires a report on labor market impacts. — Reuters