As A.I. Booms, Lawmakers Struggle to Understand the Technology

In recent weeks, two members of Congress have sounded the alarm over the dangers of artificial intelligence.

Representative Ted Lieu, Democrat of California, wrote in a guest essay in The New York Times in January that he was “freaked out” by the ability of the ChatGPT chatbot to mimic human writers. Another Democrat, Representative Jake Auchincloss of Massachusetts, gave a one-minute speech — written by a chatbot — calling for regulation of A.I.

But even as lawmakers put a spotlight on the technology, few are taking action on it. No bill has been proposed to protect individuals or thwart the development of A.I.’s potentially dangerous aspects. And legislation introduced in recent years to curb A.I. applications like facial recognition has withered in Congress.

The problem is that most lawmakers do not even know what A.I. is, said Representative Jay Obernolte, a California Republican and the only member of Congress with a master’s degree in artificial intelligence.

“Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what A.I. is,” he said. “You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of A.I. will not come from evil robots with red lasers coming out of their eyes.”

The inaction over A.I. is part of a familiar pattern, in which technology is again outstripping U.S. rule-making and regulation. Lawmakers have long struggled to understand new innovations, once describing the internet as a “series of tubes.” For just as long, companies have worked to slow down regulations, saying the industry needs few roadblocks as the United States competes with China for tech leadership.

That means Washington is taking a hands-off stance as an A.I. boom has gripped Silicon Valley, with Microsoft, Google, Amazon and Meta racing one another to develop the technology. The spread of A.I., which has spawned chatbots that can write poetry and cars that drive themselves, has provoked a debate over its limits, with some fearing that the technology could eventually replace humans in jobs or even become sentient.

Carly Kind, director of the Ada Lovelace Institute, a London organization focused on the responsible use of technology, said a lack of regulation encouraged companies to put a priority on financial and commercial interests at the expense of safety.

“By failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible A.I.,” she said.

In the regulatory vacuum, the European Union has taken a leadership role. In 2021, E.U. policymakers proposed a law focused on regulating the A.I. technologies that might create the most harm, such as facial recognition and applications linked to critical public infrastructure like the water supply. The measure, which is expected to be passed as soon as this year, would require makers of A.I. to conduct risk assessments of how their applications could affect health, safety and individual rights, like freedom of expression.

Companies that violated the law could be fined up to 6 percent of their global revenue, which could total billions of dollars for the world’s largest tech platforms. E.U. policymakers said the law was needed to maximize artificial intelligence’s benefits while minimizing its societal risks.

“We’re at the beginning of understanding this technology and weighing its great benefits and potential dangers,” said Representative Donald S. Beyer Jr., Democrat of Virginia, who recently began taking evening college classes on A.I.

Mr. Beyer said U.S. lawmakers would examine the European bill for ideas on regulation and added, “This will take time.”

In fact, the federal government has been deeply involved in A.I. for more than six decades. In the 1960s, the Defense Advanced Research Projects Agency, known as DARPA, began funding research and development of the technology. The support helped lead to military applications like drones and cybersecurity tools.

Criticism of A.I. was largely muted in Washington until January 2015, when the physicist Stephen Hawking and Elon Musk, the chief executive of Tesla and now the owner of Twitter, warned that A.I. was becoming dangerously intelligent and could lead to the end of the human race. They called for regulations.

In November 2016, the Senate Subcommittee on Space, Science and Competitiveness held the first congressional hearing on A.I., with Mr. Musk’s warnings cited twice by lawmakers. During the hearing, academics and the chief executive of OpenAI, a San Francisco lab, batted down Mr. Musk’s predictions or said they were at least many years away.

Some lawmakers stressed the importance of the nation’s leadership in A.I. development. Congress must “ensure that the United States remains a global leader throughout the 21st century,” Senator Ted Cruz, Republican of Texas and chair of the subcommittee, said at the time.

DARPA subsequently announced that it was earmarking $2 billion for A.I. research projects.

Warnings about A.I.’s dangers intensified in 2021 as the Vatican, IBM and Microsoft pledged to develop “ethical A.I.,” meaning that organizations using the technology are transparent about how it works, respect privacy and minimize biases. The group called for regulation of facial recognition software, which uses large databases of photos to pinpoint people’s identities. In Washington, some lawmakers tried to create rules for facial recognition technology and for company audits to prevent discriminatory algorithms. The bills went nowhere.

“It’s not a priority and doesn’t feel urgent for members,” said Mr. Beyer, who failed to get enough support last year to pass a bill on audits of A.I. algorithms, sponsored with Representative Yvette D. Clarke, Democrat of New York.

More recently, some government officials have tried bridging the knowledge gap around A.I. In January, about 150 lawmakers and their staffs packed a meeting, hosted by the usually sleepy A.I. Caucus, that featured Jack Clark, a founder of the A.I. company Anthropic.

Some action around A.I. is taking place in federal agencies, which are enforcing laws already on the books. The Federal Trade Commission has brought enforcement orders against companies that used A.I. in violation of its consumer protection rules. The Consumer Financial Protection Bureau has also warned that opaque A.I. systems used by credit agencies could run afoul of anti-discrimination laws.

The F.T.C. has also proposed commercial surveillance regulations to curb the collection of data used in A.I. technology, and the Food and Drug Administration has issued a list of A.I. technologies in medical devices that come under its purview.

In October, the White House issued a blueprint for rules on A.I., stressing individuals’ rights to privacy, to safe automated systems, to protection from algorithmic discrimination and to meaningful human alternatives.

But none of the efforts have amounted to laws.

“The picture in Congress is bleak,” said Amba Kak, the executive director of the AI Now Institute, a nonprofit research center, who recently advised the F.T.C. “The stakes are high because these tools are used in very sensitive social domains like hiring, housing and credit, and there is real evidence that over the years, A.I. tools have been flawed and biased.”

Tech companies have lobbied against policies that would limit how they used A.I. and have called for mostly voluntary regulations.

In 2020, Sundar Pichai, the chief executive of Alphabet, the parent of Google, visited Brussels to argue for “sensible regulation” that would not hold back the technology’s potential benefits. That same year, the U.S. Chamber of Commerce and more than 30 companies, including Amazon and Meta, lobbied against facial recognition bills, according to OpenSecrets.org.

“We aren’t anti-regulation, but we’d want smart regulation,” said Jordan Crenshaw, a vice president of the Chamber of Commerce, which has argued that the draft E.U. law is overly broad and could hamper tech development.

In January, Sam Altman, the chief executive of OpenAI, which created ChatGPT, visited several members of Congress to demonstrate GPT-4, a new A.I. model that can write essays, solve complex coding problems and more, according to Mr. Beyer and Mr. Lieu. Mr. Altman, who has said he supports regulation, showed how GPT-4 would have greater security controls than previous A.I. models, the lawmakers said.

Mr. Lieu, who met with Mr. Altman, said the government couldn’t rely on individual companies to protect users. He plans to introduce a bill this year for a commission to study A.I. and for a new agency to regulate it.

“OpenAI decided to put controls into its technology, but what is to guarantee another company will do the same?” he asked.
