Their plan is the culmination of a yearlong listening tour on the dangers of the new technology.
A bipartisan group of senators released a long-awaited legislative plan for artificial intelligence on Wednesday, calling for billions in funding to propel American leadership in the technology while offering few details on regulations to address its risks.
In a 20-page document titled “Driving U.S. Innovation in Artificial Intelligence,” the Senate majority leader, Chuck Schumer, and three colleagues called for spending $32 billion annually by 2026 on government and private-sector research and development of the technology.
The lawmakers recommended creating a federal data privacy law and said they supported legislation, planned for introduction on Wednesday, that would bar the use of realistic but misleading fabricated media, known as deepfakes, in election campaigns. But they said congressional committees and agencies should craft regulations on A.I., including protections against health and financial discrimination, the elimination of jobs, and copyright violations caused by the technology.
“It’s very hard to do regulations because A.I. is changing too quickly,” Mr. Schumer, a New York Democrat, said in an interview. “We didn’t want to rush this.”
He drafted the road map with two Republican senators, Mike Rounds of South Dakota and Todd Young of Indiana, and a fellow Democrat, Senator Martin Heinrich of New Mexico, after their yearlong listening tour to hear concerns about new generative A.I. technologies. Those tools, like OpenAI’s ChatGPT, can generate realistic and convincing images, videos, audio and text. Tech leaders have warned about the potential harms of A.I., including the obliteration of entire job categories, election interference, discrimination in housing and finance, and even the replacement of humankind.
The senators’ decision to delay A.I. regulation widens a gap between the United States and the European Union, which this year adopted a law that prohibits A.I.’s riskiest uses, including some facial recognition applications and tools that can manipulate behavior or discriminate. The European law requires transparency around how systems operate and what data they collect. Dozens of U.S. states have also proposed privacy and A.I. laws that would prohibit certain uses of the technology.