On January 25, the FTC hosted a virtual tech summit focused on artificial intelligence (AI). The summit featured speakers from the FTC, including all three commissioners, as well as software engineers, lawyers, technologists, entrepreneurs, journalists, and researchers, among others. Commissioner Slaughter opened by describing three main acts that led to today’s effort to create guardrails for AI use: first, the emergence of social media; second, the alarm raised by industry groups and whistleblowers over data privacy, which forced regulators to play catch-up; and third, the present moment, in which regulators must urgently grapple with difficult social externalities such as impacts on society and on political elections.

The first panel discussed the various business models at play in the AI space. One journalist spoke about the recent Hollywood writers’ strike, opining that copyright law is a poor legal framework for regulating AI and suggesting labor and employment law as a better model. An analyst at a venture capital firm explained how her firm identifies investment opportunities by reviewing whether a company uses a language-learning model or a transformer model, the latter being more attractive to the firm.

Before the second panel, Commissioner Bedoya discussed the need for fair and safe AI, saying that for the FTC to be successful, it must execute policy with two principles in mind: first, people must be in control of technology and decision-making, not the other way around; and second, competition must be safeguarded so that the most popular technology is the one that works best, not merely the one created by the largest companies.

During the second panel, a lawyer from the CFPB said the CFPB is doing “a lot” with regard to AI and that it grants AI technology no exceptions under the laws it oversees. The CFPB recently issued releases explaining that “black box” models used in credit decision-making must be fair and free from bias. Discussing future AI enforcement at a “high level,” the CFPB lawyer said the agency is currently “capacity building”: it is expanding its resources to become more intellectually diverse, including through its recently created technologist program.

