The Federal Communications Commission (FCC) has approved a ruling making AI-generated voices in scam robocalls illegal, a move that FCC chair Jessica Rosenworcel said gives state authorities new power to crack down on bad actors.

In a unanimous ruling, FCC commissioners found that calls made with AI-generated voices are “artificial” under the Telephone Consumer Protection Act (TCPA), the law that empowers the agency to act against unwanted spam calls.

Over the past several years, the rise of AI has made it easier for bad actors to create calls imitating the voices of celebrities, political candidates and family members, whether to scam vulnerable people or to misinform voters.

“We’re putting the fraudsters behind these robocalls on notice,” Rosenworcel stated.

While using AI to perpetrate a scam or fraud was already illegal, the FCC explained that using the technology to generate voices in robocalls is now itself illegal, which “expands the legal avenues through which state law enforcement agencies can hold these perpetrators accountable under the law”.

A coalition of 26 state attorneys general wrote to the FCC in January in support of making AI-generated robocalls illegal. States have their own enforcement tools that can be tied to robocall definitions under the TCPA.

In November 2023, the FCC launched an inquiry into the potential threat of AI-generated robocalls.