Privacy and Data Security
On December 31, 2025, the FTC announced that a federal judge approved an order requiring Disney to pay $10 million to settle Federal Trade Commission allegations that the company allowed personal data to be collected from children who viewed child-directed videos on YouTube without notifying parents or obtaining their consent as required by the Children’s Online Privacy Protection Rule (COPPA Rule).
A complaint, filed in September by the Department of Justice upon notification and referral from the FTC, alleged that Disney Worldwide Services, Inc. and Disney Entertainment Operations LLC (Disney) violated the COPPA Rule by failing to properly label some videos that it uploaded to YouTube as “Made for Kids” (MFK). The complaint alleged that by mislabeling these videos, Disney allowed personal data to be collected, through YouTube, from children under 13 who viewed child-directed videos, and allowed that data to be used for targeted advertising to children.
Under the settlement order finalized by a federal judge last week, Disney is required to:
- Pay a $10 million civil penalty for violating the COPPA Rule;
- Comply with the COPPA Rule, including by notifying parents before collecting personal information from children under 13 and obtaining verifiable parental consent for collection and use of that data; and
- Establish and implement a program to review whether videos posted to YouTube should be designated as MFK—unless YouTube implements age assurance technologies that can determine the age, age range, or age category of viewers.
The Federal Trade Commission and the National Advertising Division of BBB National Programs set forth their enforcement priorities during the 2025 ANA Masters of Advertising Law Conference.
Not surprisingly, the FTC set forth a bread-and-butter enforcement agenda. It includes, without limitation: protecting children (Children’s Online Privacy Protection Rule, 16 C.F.R. § 312); enforcing Made in USA (U.S. origin) claims (Made in USA Labeling Rule, 16 C.F.R. § 323); policing subscriptions, negative options and automatic trial programs (Restore Online Shoppers’ Confidence Act, dark patterns and click-to-cancel); enforcing the FTC Rule on Unfair or Deceptive Fees; scrutinizing targeted advertising and surveillance marketing techniques; policing influencers, consumer reviews and endorsements (Consumer Reviews and Testimonials Rule, 16 C.F.R. Part 465); and addressing the use of AI (for example and without limitation, exaggerating the capabilities of AI features).
Consult with an experienced ecommerce attorney to discuss the implementation of preventative compliance measures or if you are the subject of a regulatory investigation or enforcement action.
Other areas that are reasonably certain to receive increased regulatory investigation and enforcement attention include, but are not limited to, data privacy, the Telemarketing Sales Rule, the Telephone Consumer Protection Act, and state unfair and deceptive business practices statutes.
Additional key highlights and takeaways for discussion with a qualified ecommerce attorney include the use of health claims, green claims, and social media IP rights and takedown procedures.
Contact the author for more information.
Richard B.
In October 2025, the New York Attorney General’s Office announced that, in accordance with the “Stop Hiding Hate” Act, social media companies are required to report their content moderation policies to the OAG’s office, with first reports due no later than January 1, 2026.
The New York legislation requires platforms operating in New York with more than $100 million in gross annual revenue to post their content moderation policies publicly, provide consumers with a contact to report violations of the policy, and submit biannual compliance reports.
Key requirements of the Act include:
- Public Transparency: Covered companies are required to publish their terms of service in clear, accessible language and provide contact details for user inquiries.
- User Reporting Mechanisms: Platforms must clearly describe how users can report violations of the terms of service, and provide contact information for doing so.
- Action and Response Details: Covered companies must explain what kind of action they may take for posts that violate the policy.
- Biannual Reporting: Social media companies are required to submit reports twice a year to the New York OAG. These reports must include statements on their terms of service. Covered companies must also describe their policies and how they are enforced.
- Data Disclosure: Reports must include data on the total number of posts believed to be policy violations, the number of posts acted upon, and the details thereof.
The failure to comply with the Act’s requirements can potentially result in civil penalties of up to $15,000 per violation per day.
Effective July 31, 2025, the Minnesota Consumer Data Privacy Act (MCDPA) governs the manner in which the personal data of Minnesota residents is handled.
Who Does the Minnesota Consumer Data Privacy Act Apply To?
The MCDPA applies to entities that do business in Minnesota or produce products or services targeted to Minnesota residents, and that satisfy one or more of the following thresholds:
- during a calendar year, controls or processes personal data of 100,000 consumers or more, excluding personal data controlled or processed solely for the purpose of completing a payment transaction; or
- derives over 25 percent of gross revenue from the sale of personal data and processes or controls personal data of 25,000 consumers or more.
What is a “Controller” and What are a Controller’s Obligations?
A “Controller” means the natural or legal person which, alone or jointly with others, determines the purposes and means of the processing of personal data.
The MCDPA obligates controllers to provide consumers with a clear and accessible privacy notice that sets forth the categories of personal data being processed and the purposes for which the data will be processed. The privacy notice must also set forth the categories of personal data sold or shared with third parties, identify those third parties, explain how consumers may exercise their privacy rights, set forth the controller’s contact information, and describe the controller’s personal data retention policy. Notably, controllers are expressly restricted to the collection of personal data that is “adequate, relevant, and reasonably necessary” for the disclosed purposes for which the data is processed.
On May 19, 2025, President Donald Trump signed the Take It Down Act (S.146) into law. The federal legislation criminalizes the publication of non-consensual intimate imagery and AI-generated pornography, and follows the enactment of legislation targeting online abuse in approximately forty states.
What are the Take It Down Act’s Requirements?
The federal Take It Down Act creates civil and criminal penalties for knowingly publishing or threatening to share non-consensual intimate imagery and computer-generated intimate images that depict real, identifiable individuals. If the victim is an adult, violators face up to two years in prison; if the victim is a minor, up to three years.
Social media platforms, online forums, hosting services and other tech companies that facilitate user-generated content are required to remove covered content within forty-eight hours of request and implement reasonable measures to ensure that the unlawful content cannot be posted again.
Consent to create an image will not be a defense.
Exempt from prosecution are good faith disclosures or those made for lawful purposes, such as legal proceedings, reporting unlawful conduct, law enforcement investigations and medical treatment.
What Online Platforms are Covered Under the Take It Down Act?
Covered platforms include any website, online service, application, or mobile app that serves the public and either: (i) provides a forum for user-generated content (e.g., videos, images, messages, games, or audio), or (ii) in the ordinary course of business, regularly publishes, curates, hosts, or makes available such content.
In March 2025, the Office of the Attorney General for the State of New York introduced the Fostering Affordability and Integrity Through Reasonable (“FAIR”) Business Practices Act in the State Senate and State Assembly. The proposed legislation is intended to revise Article 22-A of New York’s General Business Law.
The FAIR Act is designed to expand and strengthen consumer and small business protections, in part, by amending New York’s General Business Law §349 to also cover “unfair” and “abusive” practices, rather than just “deceptive” practices. Many other states have already enacted UDAP statutes. The bill may foreshadow what is to come from numerous state consumer protection enforcers as federal consumer protection enforcement is being rolled back and policy under the current administration remains uncertain.
As drafted, the program bill would provide the New York Attorney General and private plaintiffs with the ability to seek enhanced civil penalties and restitution in amounts significantly greater than the statutory damages currently available pursuant to New York General Business Law Section 349. The FAIR Act would significantly increase statutory damages available under GBL §349 from $50 to $1,000, and permit recovery of actual and punitive damages. Unfair, deceptive or abusive practices could result in civil penalties of up to $5,000 per violation. Knowing or willful violations could result in penalties totaling the greater of $15,000 or three times the amount of restitution, per violation. Prevailing plaintiffs in private actions would also be permitted to recover attorneys’ fees and costs.
On Friday, December 27, 2024, the Justice Department issued a final rule to address “urgent national security risks posed by access to U.S. sensitive personal and government-related data from countries of concern and covered persons.” The final rule was posted publicly and addresses “continued efforts of countries of concern to access, exploit, and weaponize Americans’ bulk sensitive personal and U.S. government-related data.”
This rule reflects the Department’s careful consideration of the comments received in response to the March 5, 2024 Advance Notice of Proposed Rulemaking (“ANPRM”) and the October 29, 2024 Notice of Proposed Rulemaking (“NPRM”), as well as feedback from hundreds of representatives from companies and organizations, extensive consultation with dozens of other U.S. Government agencies and offices, and engagement with foreign partners.
As previewed in the ANPRM and NPRM, the final rule establishes a national-security program within the Justice Department’s National Security Division that restricts and in some instances prohibits U.S. persons from engaging in certain categories of data transactions with six “countries of concern” (including covered persons and entities subject to coercion by those countries) because such transactions pose unacceptable national-security risks of giving those countries, entities, or persons access to U.S. bulk sensitive personal data or government-related data.
The rule will become effective 90 days after publication. Certain affirmative compliance obligations will be phased in with a later effective date of 270 days after publication.
The Department also intends to continue engaging with industry and other stakeholders to determine whether any general licenses are appropriate as this program goes into effect.
On August 30, 2024, the Federal Trade Commission announced that the Department of Justice filed a complaint upon notification and referral from the FTC against a surveillance camera company that allegedly failed to provide reasonable security for the personal information it collected—including 150,000 live camera feeds in sensitive areas like psychiatric hospitals, women’s health clinics, elementary schools and prison cells.
According to the complaint, these alleged failures allowed a threat actor, in March 2021, to remotely access the company’s customer camera feeds and watch consumers live, without their knowledge or consent. Despite the purportedly invasive security breach, the company allegedly remained unaware of the intrusion until the threat actor self-reported the hack to the media.
According to the FTC, the vast majority of the company’s customers throughout the U.S. and abroad include small businesses spanning multiple industries, including education, government, healthcare, and hospitality. The FTC says that the compromise went beyond the company’s security cameras. According to the complaint, the threat actor also exfiltrated data about the company’s own customers, mostly businesses, including, but not limited to, names, email addresses, physical addresses, usernames and password hashes, and geolocation data for security cameras.
The company’s alleged security failures “are in stark contrast to its many public promises to keep personal and customer information safe,” according to the FTC.
According to the complaint, the company’s own privacy policy claimed that the company “take[s] customer privacy seriously,” and “[w]e will use best-in-class data security tools and best practices to keep your data safe and protect [the company’s] products from unauthorized access.”
The FTC also states that the company publicly promised that it was HIPAA certified or compliant and that it followed the EU-U.S. Privacy Shield framework.
On August 7, 2024, the Federal Communications Commission proposed new consumer protections against AI-generated robocalls and robotexts. The Notice of Proposed Rulemaking broadens the FCC’s efforts to address AI’s impact on the rights of consumers under the Telephone Consumer Protection Act.
The NPRM seeks comment on the definition of AI-generated calls, requiring callers to disclose their use of AI-generated calls and text messages, supporting technologies that alert and protect consumers from unwanted and illegal AI robocalls, and protecting positive uses of AI to help people with disabilities utilize the telephone networks.
The Notice of Proposed Rulemaking proposes to define “AI-generated calls” in a manner that would include calls that use artificial intelligence to generate voice or text. For purposes of identifying the types of calls that would be subject to the new proposed rules, the FCC proposes to define “AI-generated call” as “a call that uses any technology or tool to generate an artificial or prerecorded voice or a text using computational technology or other machine learning, including predictive algorithms, and large language models, to process natural language and produce voice or text content to communicate with a called party over an outbound telephone call.”
The definition proposed by the FCC is broad enough to encompass existing and evolving AI technologies. Importantly, it is limited to outbound calls. AI technologies that are used to answer inbound calls are not within the scope of the proposed definition of “AI-generated calls.”
“We believe this definition is consistent with federal and state AI definitions cited in the AI NOI,” the FCC stated.
On July 30, 2024, New York Attorney General Letitia James announced the launch of two privacy guides on the Office of the Attorney General (OAG) website: a Business Guide to Website Privacy Controls and a Consumer Guide to Tracking on the Web.
The Business Guide is intended to help businesses better protect visitors to their websites by identifying common mistakes the OAG’s office believe businesses make when deploying tracking technologies, processes they can use to help identify and prevent issues, and guidance for ensuring they comply with New York law. The Consumer Guide is intended to assist New Yorkers by offering tips they can use to protect their privacy when browsing the web, including how to safeguard against unwanted online tracking.
The OAG issued the guides following a review that purportedly uncovered unwanted tracking on more than a dozen popular websites, collectively serving more than 75 million visitors per month.
“When New Yorkers visit websites, they deserve to have the peace of mind that they won’t be tracked without their knowledge, and won’t have their personal information sold to advertisers,” said Attorney General James. “All too often, visiting a webpage or making a simple search will result in countless ads popping up on unrelated websites and social media. When visitors opt out of tracking, businesses have an obligation to protect their visitors’ personal information, and consumers deserve to know this obligation is being fulfilled. These new guides that my team launched will help protect New Yorkers’ privacy and make websites safer places to visit.”
While many websites provide visitors with information about the tracking that takes place and controls to manage that tracking, the OAG’s review suggests that these disclosures and controls do not always work as described.
About This Blog and Hinch Newman’s Advertising + Marketing Practice
Hinch Newman LLP’s advertising and marketing practice includes two decades of successfully resolving some of the highest-profile Federal Trade Commission (FTC) and state attorneys general digital advertising and telemarketing investigations and enforcement actions. The firm’s FTC attorneys possess superior compliance knowledge and deep legal advocacy experience in the areas of advertising, marketing, lead generation, promotions, e-commerce, privacy and intellectual property law. The firm has also been selected to author the Consumer Protection Section of the prestigious American Lawyer Media International Federal Trade Commission: Law, Practice and Procedure Treatise, a comprehensive resource on developments of concern to advertisers, marketers and legal professionals that practice before the Commission. Through these advertising and marketing law updates, Hinch Newman LLP provides commentary, news and analysis on issues and trends of interest to digital marketers, including FTC and state attorneys general advertising compliance, civil investigative demands (CIDs), and administrative/judicial process.