US FTC Says It Will Target AI in Violation of Discrimination Laws as Calls for Regulating ChatGPT Grow

The rapid rise of AI has sparked concerns around the world about the possible use of the technology for wrongdoing.

Leaders of the US Federal Trade Commission said on Tuesday the agency would pursue companies that misuse artificial intelligence to violate anti-discrimination laws or engage in deceptive practices.

The sudden popularity this year of Microsoft-backed OpenAI’s ChatGPT has prompted calls for regulation, as concerns grow worldwide that the technology could be used for wrongdoing even as companies seek ways to use it to improve efficiency.

In a congressional hearing, FTC Chair Lina Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya were asked about concerns that recent innovations in artificial intelligence, which can be used to produce high-quality deepfakes, could enable more effective scams or otherwise violate laws.

Bedoya said companies using algorithms or artificial intelligence were not allowed to violate civil rights laws or break rules against unfair and deceptive acts.

“It’s not okay to say that your algorithm is a black box” and you can’t explain it, he said.

Khan agreed the newest versions of AI could be used to turbocharge fraud and scams, and said any such wrongdoing “should put them on the hook for FTC action.”

Slaughter noted that the agency had had to adapt to changing technologies throughout its 100-year history and said that adapting to ChatGPT and other artificial intelligence tools was no different.

The commission is organized to have five members but currently has three, all of whom are Democrats.