The House could vote on the AI Accountability Act this fall.
This week, a House committee advanced a bill directing the federal government to study AI accountability and report back in 2025, a small step toward building a legal framework for AI.
The House Energy and Commerce Committee passed the AI Accountability Act unanimously on Thursday, clearing the way for a possible vote on the House floor this fall, when members return from their August break.
The bill would direct the Commerce Department to study how accountability measures are being built into AI systems used in communications networks and “electromagnetic spectrum sharing applications,” and to identify ways to reduce risks in those systems.
It also asks Commerce to determine how those measures could help “prove that artificial intelligence systems are trustworthy.” Within 18 months, Commerce would have to deliver recommendations on these methods of assessing accountability.
It’s a slow-moving bill that affects only one federal agency and may never reach the House floor. Still, it is one of the more positive steps the House has taken this year toward regulating AI.
Despite many calls this year for sweeping AI regulation, the House still hasn’t passed a stand-alone AI bill. Its closest effort so far is the National Defense Authorization Act, which includes language directing the Pentagon to examine its AI vulnerabilities while also encouraging the active use of AI to strengthen U.S. national security.
The Senate is at roughly the same point. Senate Majority Leader Chuck Schumer, D-N.Y., held a third AI listening session for lawmakers this week and has said the meetings will continue into the fall. In the meantime, the Biden administration has worked with some companies to set voluntary AI standards, but it has stopped short of issuing binding rules and says Congress will have to act.
For those who want Congress to regulate AI, it isn’t moving nearly fast enough.
“It’s good to see a piece of AI-related legislation get out of a congressional committee, but we need to speed up our legislative work to keep up with the rapid progress of AI over the past year,” Jake Denton, a research associate at the Heritage Foundation’s Tech Policy Center, told Fox News Digital.
“We can’t spend years talking about how best to move this technology forward,” he said. “To make sure AI is used in a safe and decent way, Congress needs to speed up the legislative process and make strong rules that protect the American people. If clear rules aren’t set up quickly, Silicon Valley could use this powerful technology in ways that aren’t intended.”
Rep. Josh Harder, D-Calif., is the sponsor of the AI Accountability Act, and this week’s action on the bill showed just how early Congress is in the process of regulating AI.
During the debate, Rep. Jay Obernolte, R-Calif., proposed an amendment to ensure the Commerce Department understands what people mean when they call for “trustworthy” AI systems.
Under Obernolte’s amendment, officials would also have to examine “how the term ‘trustworthy’ is used and defined in the context of artificial intelligence,” as well as “how that word relates to other terms like ‘responsible’ and ‘human-centric.’”
“Congress is currently thinking about what an AI regulatory framework might look like,” Obernolte said. “But along the way, we need to figure out how we’re going to investigate AI to see if it’s accountable, responsible, and safe for consumers.”