FTC Takes Aim at Big Tech and AI
Recent remarks from Federal Trade Commission (FTC) Chair Lina Khan make clear that the agency intends to take an aggressive approach to regulating both the big technology and AI sectors.
Khan Stands Behind Agency’s Aggressive Tactics Against Big Tech at Trade Association Conference
At a recent trade association conference, Khan defended the agency’s more aggressive approach to regulating big tech as necessary to counteract the industry’s indifference to more traditional fines. Specifically, Khan stated, “We’ve seen historically firms do treat fines, even fines that sound really large – millions of dollars, even billions of dollars – they can sometimes treat those fines as a cost of doing business if the underlying illegal tactics that they’re engaging in are valuable enough to them.”
Khan’s comments come in response to big tech giant Meta’s recent lawsuit against the agency challenging the constitutionality of the FTC’s structure. Meta alleges that the FTC’s in-house courts’ “dual role as prosecutor and judge” is unconstitutional and violates due process protections. Meta brought the lawsuit in response to the FTC’s move to institute a blanket prohibition preventing Meta from monetizing data from users under the age of 18. Additionally, Meta requested that the U.S. District Court for the District of Columbia permanently block the FTC proceeding to restrict Meta from monetizing children’s data.
More specifically, in May 2023 the FTC unilaterally proposed changes to a 2020 privacy order it entered into with Meta, alleging that the company failed to fully comply with the 2020 order, misled parents about their ability to control with whom their children communicated through the Messenger Kids app, and misrepresented the access it provided some app developers to private user data. In an Order to Show Cause, the FTC identified several gaps and weaknesses in Meta’s privacy program which the agency contends pose substantial risks to the public due to the breadth and significance of the deficiencies. The Order to Show Cause also found that Meta violated both a 2012 and a 2020 order by continuing to give app developers access to users’ private information despite agreeing in 2018 to cut off such access under certain conditions. Further, the FTC alleged that Meta misrepresented that children using Messenger Kids could only communicate with contacts approved by their parents when, in fact, children in certain circumstances were also able to communicate with unapproved contacts in group text chats and group video calls. The agency contends that these misrepresentations violate the 2012 order, the FTC Act, and the COPPA Rule, which requires all operators of websites or online services directed at children under 13 to notify parents and obtain verifiable parental consent before collecting personal information from such children.
According to the FTC, this marks the third consecutive time Meta has failed to comply with FTC orders, violating each prior order within months of such orders being finalized with the tech giant.
The 2020 privacy order required Meta to pay a $5 billion civil penalty and expanded the required privacy program and an independent third-party assessor’s role in evaluating the effectiveness of the program from prior orders. As an example, the 2020 order required Facebook to conduct a privacy review of every new or modified product, service, or practice before implementation and document its risk mitigation determinations; to implement greater security for personal information; and to restrict the use of facial recognition and telephone numbers obtained for account security.
As a result of the alleged violations, the FTC unilaterally proposed changes to the terms of the 2020 order, including:
- Blanket prohibition against monetizing data of children and teens under 18
- Pause on the launch of new products and services
- Extension of compliance to merged companies
- Limits on future uses of facial recognition technology
- Strengthening existing requirements
Meta initiated its lawsuit against the FTC a few days after a U.S. District Judge ruled that the agency could advance the proceeding to revise the 2020 order. Meta appealed that decision as well.
Khan Launches AI Market Inquiry at FTC Tech Summit
Khan echoed similar sentiments at the FTC’s Tech Summit on Artificial Intelligence. In her remarks, the FTC Chair stressed the importance of learning from prior “missteps” in which dominant entities concentrated control over key technologies, leading to “a future of their choosing,” citing the “Web 2.0 era” of the mid-2000s, U.S. Steel in the early 1900s, Alcoa in the 1930s, IBM and AT&T in the 1970s, and Boeing in the 1990s.
As with big technology, the agency is taking a more aggressive approach to regulating the emerging AI sector to “promote fair competition and protect Americans from unfair or deceptive tactics.” Khan’s remarks emphasized that the FTC has already seen how AI tools can “turbocharge fraud, automate discrimination, and entrench surveillance,” all to the public’s harm.
During the FTC Tech Summit on Artificial Intelligence, Khan announced the launch of a “market inquiry into the investment and partnerships being formed between AI developers and cloud service providers.” According to Khan, the aim of the inquiry is to scrutinize whether those relationships enable firms to “exert undue influence or gain privileged access in ways that could undermine fair competition across layers of the AI stack.”
As the FTC conducts this assessment, Khan emphasized four key principles that are informing the agency’s work:
- “First, we are focused on scrutinizing any existing or emerging bottlenecks across the AI stack.”
- Per the FTC, the agency is examining the AI stack to understand competition across layers and sublayers
- Examining whether prevailing firms with control over “key inputs” – e.g., cloud infrastructure and access to GPUs – may be able to “impose coercive terms, charge extractive fees, or deepen their existing moats”
- “Second, we are squarely focused on how business models drive incentives.”
- Business incentives cannot justify violations of the law
- Refining algorithms “cannot come at the expense of people’s privacy or security, and privileged access to customers’ data cannot be used to undermine competition”
- FTC remedies will require firms to delete models trained on unlawfully acquired data in addition to the data itself
- “Third, we are squarely focused on aligning liability with capability and control.”
- FTC enforcement experience in other sectors will inform this work
- “And fourth, we’re focused on crafting effective remedies that establish bright-line rules on the development, use, and management of AI inputs.”
- FTC to prohibit certain data from use in training models
- FTC held a public workshop with creative professionals to understand the types of guardrails that help protect against creators’ work being appropriated and devalued by generative AI models
- Firms can’t claim innovation as a cover for lawbreaking
- Model-as-a-service companies that deceive users about how their data is collected may be breaking the law
- Companies claiming that interoperability must come at the expense of privacy and security may be violating the law
- Companies that deceptively change their terms of service to their own advantage may be violating the law
Contact Our Attorneys Today
The commercial compliance team at Kendall PC has extensive experience guiding clients throughout the business lifecycle to meet the evolving expectations of regulatory and enforcement agencies.
To learn how our attorneys can help your company, contact Kendall PC today online or at (484) 414-4093. Our firm proudly serves small, midsized, and emerging businesses throughout the United States and across the globe.