The Federal Trade Commission has one of the most potent tools to punish artificial intelligence-driven companies that disregard consumer privacy, and it’s not shy about using it. In recent years, the regulator has ramped up demands that companies destroy algorithms trained on “ill-gotten” data, signaling its growing urgency in trying to rein in AI.
In this article, Corporate Counsel reporter Maria Dinzeo examines how the FTC is “tinkering with new tools” to send a clear message that engaging in illicit data collection to train AI models is “not worth it.”
Commenting on the FTC’s broader privacy approach to AI regulation, UnitedLex Senior Vice President Cara Hughes observes, “It’s not enough to just think about consent. You have to think about how your algorithm is going to perform, how the data is being used, how you’re collecting data and who you’re sharing it with.”
Read more here: FTC Shows Willingness to Use Extreme Measures to Tame AI | Corporate Counsel (law.com)