<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Consumer Protection Dispatch</title>
	<atom:link href="https://consumer-protection-dispatch.pillsburylaw.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://consumer-protection-dispatch.pillsburylaw.com/</link>
	<description>Published by Consumer Protection Attorneys Pillsbury Winthrop Shaw Pittman LLP</description>
	<lastBuildDate>Tue, 17 Mar 2026 16:46:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
<site xmlns="com-wordpress:feed-additions:1">234975302</site>
	<item>
		<title>Email-Tracking Technology: Emerging Compliance Expectations in the U.S., EU and Beyond</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/email-tracking-technology-compliance-us-eu/</link>
		
		<dc:creator><![CDATA[Scott Morton, Shruti Bhutani Arora, Steven Farmer, Christine Mastromonaco and Samson Verebes]]></dc:creator>
		<pubDate>Tue, 17 Mar 2026 16:46:52 +0000</pubDate>
				<category><![CDATA[California]]></category>
		<category><![CDATA[CNIL]]></category>
		<category><![CDATA[Email-Tracking Technologies]]></category>
		<category><![CDATA[EU]]></category>
		<category><![CDATA[France]]></category>
		<category><![CDATA[United Kingdom (UK)]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=110</guid>

					<description><![CDATA[<p>The use of email-tracking technology is drawing heightened regulatory scrutiny and has become a growing target of litigation. For many organizations, these technologies, which could be in the form of a “pixel,” “beacon” or URL tracking parameters embedded in links, sit quietly in the background of marketing and operational messages. Yet from a legal and [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/email-tracking-technology-compliance-us-eu/">Email-Tracking Technology: Emerging Compliance Expectations in the U.S., EU and Beyond</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><img fetchpriority="high" decoding="async" class="wp-image-111 size-medium alignright" src="https://consumer-protection-dispatch.pillsburylaw.com/files/2026/03/GettyImages-610849650-e1773765918128-300x245.jpg" alt="GettyImages-610849650-e1773765918128-300x245" width="300" height="245" srcset="https://consumer-protection-dispatch.pillsburylaw.com/files/2026/03/GettyImages-610849650-e1773765918128-300x245.jpg 300w, https://consumer-protection-dispatch.pillsburylaw.com/files/2026/03/GettyImages-610849650-e1773765918128-1024x838.jpg 1024w, https://consumer-protection-dispatch.pillsburylaw.com/files/2026/03/GettyImages-610849650-e1773765918128-768x628.jpg 768w, https://consumer-protection-dispatch.pillsburylaw.com/files/2026/03/GettyImages-610849650-e1773765918128-1000x818.jpg 1000w, https://consumer-protection-dispatch.pillsburylaw.com/files/2026/03/GettyImages-610849650-e1773765918128-147x120.jpg 147w, https://consumer-protection-dispatch.pillsburylaw.com/files/2026/03/GettyImages-610849650-e1773765918128.jpg 1287w" sizes="(max-width: 300px) 100vw, 300px" />The use of email-tracking technology is drawing heightened regulatory scrutiny and has become a growing target of litigation. For many organizations, these technologies, which could be in the form of a “pixel,” “beacon” or URL tracking parameters embedded in links, sit quietly in the background of marketing and operational messages. Yet from a legal and compliance perspective, they raise the same kinds of questions as online tracking tools such as cookies. This article explains what is happening and why it matters, and offers some pragmatic options for organizations to consider when relying on email engagement data.</p>
<p><span id="more-110"></span></p>
<p><strong>What Are Email-Tracking Technologies—and Why Do They Matter?<br />
</strong>Most modern email platforms automatically embed tracking technologies, most commonly in the form of “pixels”: tiny, often invisible images embedded in emails. When the email is opened, the image is loaded from a server and transmits data to the sender about whether and when the email was opened and often which links were clicked. This can generate useful metrics on engagement, audience interest and campaign performance.</p>
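<p>To make the mechanics concrete, the following is an illustrative sketch of how a tracking pixel is typically assembled. The domain and parameter names are hypothetical; real email platforms generate their own URLs and identifiers.</p>

```python
# Illustrative only: builds a 1x1 "tracking pixel" image tag whose URL
# encodes a recipient identifier. When the recipient's mail client loads
# the image, the sender's server receives the request and can log the
# open time, IP address and user agent for that specific recipient.
from urllib.parse import urlencode

def tracking_pixel_html(recipient_id: str, campaign_id: str) -> str:
    """Return an invisible image tag pointing at a (hypothetical)
    tracking endpoint, parameterized per recipient and campaign."""
    params = urlencode({"r": recipient_id, "c": campaign_id})
    return (f'<img src="https://track.example.com/open.gif?{params}" '
            'width="1" height="1" alt="" style="display:none">')

print(tracking_pixel_html("user-123", "spring-sale"))
```

<p>Because the URL is unique per recipient, each image request ties an open event to an identifiable individual, which is why regulators analogize pixels to cookies.</p>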
<p>The difficulty is that this technology usually involves storing or accessing information on the recipient’s device, which can be used to identify individuals. As a result, regulators in several jurisdictions are starting to treat these email-based tracking technologies in much the same way as website cookies or other web-based tracking technologies: something that in principle requires notice and, in many cases, prior, specific consent unless a narrow exemption applies.</p>
<p>Whereas compliance for web- and mobile application-based tracking technologies can often be achieved through visible website popups and preference centers, organizations utilizing email-based tracking face greater practical and operational difficulties in meeting the same standard. This is complicated by the fact that some email service providers do not allow customers to disable these tracking technologies, either at all or on a per-recipient basis. Legal requirements, technical constraints and commercial reality do not always line up neatly.</p>
<p><strong>The Emerging Regulatory Picture<br />
</strong>Across key markets, a few themes are emerging even as detailed rules and enforcement practice are still evolving.</p>
<ul>
<li><strong>United Kingdom</strong><br />
The Information Commissioner’s Office (ICO) has confirmed that email-based tracking technologies, including pixels, fall within the scope of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) and are therefore treated in the same way as cookies and other web-based tracking technologies. In principle, that means prior consent is required unless the tracking technology is strictly necessary for providing a service requested by the user.</li>
</ul>
<p style="padding-left: 40px">At the same time, the ICO has repeatedly emphasized a risk-based enforcement approach, taking into account how intrusive the tracking is, how clear and prominent the information and consent mechanisms are, and whether there is evidence of consumer harm or concern. To date, the ICO’s enforcement activity has been confined to web-based tracking, and there has been no enforcement action specifically in relation to email-based tracking. That said, recent public guidance and statements by regulators indicate a shift in emphasis, with email tracking now firmly on the ICO’s regulatory radar.</p>
<ul>
<li><strong>EU</strong><br />
Within the EU, the use of email-tracking technologies is generally governed by the ePrivacy Directive (2002/58/EC), in particular Article 5(3), as implemented into national law across EU Member States and enforced by local supervisory authorities. Because these tracking technologies typically involve the storing of, or access to, information on a recipient’s device, many EU regulators take a strict view of their use. This position has been reinforced by guidance from the European Data Protection Board (EDPB) (Guidelines 2/2023, adopted on 7 October 2024), which has made clear that URL and pixel-based tracking falls within the scope of Article 5(3) on a technology-neutral basis, irrespective of whether personal data is ultimately processed. As a result, opt-in consent is generally expected for any use of tracking technologies that is not strictly necessary to deliver a service expressly requested by the recipient, with even basic open-rate tracking commonly treated as in scope.</li>
</ul>
<p style="padding-left: 40px">To date, there has been no enforcement action by EU Member State supervisory authorities that specifically targets the use of email-based tracking technologies. However, the legal position on paper remains demanding and leaves relatively little room for flexibility.</p>
<p style="padding-left: 40px">France provides a useful illustration of the direction of travel. In June 2025, the French data protection authority (CNIL) issued draft recommendations that would require a more granular approach to pixel-based email tracking. In particular, the draft proposed treating consent to receive marketing and consent to the use of tracking pixels as separate decisions, with a possible carve-out where only anonymized, high-level open-rate statistics are collected or where the pixel is used strictly for technical authentication/security functions. Controversially, the CNIL’s draft recommendations would require that consent withdrawal be honored retroactively (i.e., pixels must be deactivated once consent has been withdrawn, even for emails that have already been sent). In practice, this would generate significant operational and technical complexity and has been flagged as a key point of debate in responses to the consultation. The draft also contemplates an in-email mechanism allowing recipients to opt out of tracking. While the consultation process has now ended and no final recommendations have yet been published, the draft points toward a stricter and more structured regime in the medium term.</p>
<ul>
<li><strong>United States</strong><br />
There is no federal statute that specifically regulates the use of email-tracking technologies. In challenging the practice, plaintiffs most often rely on the Electronic Communications Privacy Act (ECPA). They typically assert claims under Title I of ECPA (commonly known as the Wiretap Act), which prohibits the intentional interception of electronic communications, a term frequently litigated to require acquisition contemporaneous with transmission and without consent or other statutory authorization. Plaintiffs also invoke Title II of ECPA, the Stored Communications Act (SCA), which addresses unauthorized access to communications held in electronic storage on a covered facility. In many email-tracking technology fact patterns, however, the technology is characterized as triggering an image request that returns open or engagement data, making it difficult to establish the type of contemporaneous interception or unauthorized access required under these federal statutes.</li>
</ul>
<p style="padding-left: 40px">California’s Invasion of Privacy Act (CIPA), particularly Section 631(a), has also been invoked in challenges to the use of email-tracking technologies. Although CIPA claims initially proliferated in the web-tracking context, plaintiffs have increasingly attempted to extend those theories to marketing emails containing tracking technologies. Under this approach, plaintiffs characterize the act of opening an email or clicking a hyperlink as a “communication,” and allege that a third-party email marketing vendor “intercepts” that communication by receiving engagement data in real time. Some complaints further contend that trackable URLs or embedded code reveal the “contents” of the communication, rather than merely routing or record information.</p>
<p style="padding-left: 40px">In <em><a href="https://consumer-protection-dispatch.pillsburylaw.com/files/2026/03/RamosvGap.pdf">Ramos v. The Gap, Inc. (N.D. Cal.)</a></em>, the court rejected these theories in the email-marketing context. The plaintiff alleged that Gap embedded tracking pixels and uniquely coded URLs in promotional emails and that its marketing vendor unlawfully intercepted communications when recipients opened or clicked those emails. The court dismissed the Section 631(a) claim, reasoning in part that engagement metrics such as open rates and click activity constitute information about a communication rather than protected “contents.” The court was also unpersuaded by arguments that a hyperlink click itself constitutes substantive content or that URL parameters exposed the underlying substance of the email. In addition, the decision underscored structural limits within Section 631(a), including the “party” principle, under which a participant in the communication generally cannot be liable for intercepting it, as well as the statute’s focus on acquisition of contents rather than ordinary analytics data generated in the course of message delivery and interaction.</p>
<p style="padding-left: 40px">While <em>Ramos </em>reflects an early, defense-favorable treatment of CIPA claims in the email-pixel setting, plaintiffs continue to test variations on these theories, often drawing analogies to web-tracking decisions addressing whether certain URL strings or user inputs can qualify as “contents.” As a result, CIPA exposure in the email context remains fact-dependent, turning on how courts characterize the data collected, the role of any third-party vendor, and whether the alleged acquisition can plausibly be framed as an interception of protected content rather than the collection of engagement metadata.</p>
<p style="padding-left: 40px">A similar dynamic is emerging under Arizona’s Telephone, Utility, and Communication Service Records Act (TUCSRA). TUCSRA restricts the unauthorized acquisition of “communication service records” maintained by a communication service provider, and plaintiffs have argued that email-tracking technologies impermissibly capture such records by collecting data about when, where and how an email is opened—including engagement metrics such as the time an email was accessed, the number of opens, whether it was forwarded or printed, and the type of device or server used. Early decisions have dismissed TUCSRA claims against retailers and other senders, reasoning that these entities are not “communication service providers” within the meaning of the statute and that email engagement data does not constitute protected service records. Some courts have also found a lack of Article III standing where plaintiffs allege only a statutory violation without concrete downstream harm. Nonetheless, plaintiffs continue to test these theories, and outcomes remain highly dependent on statutory interpretation, the characterization of the technology, and the forum in which the case is brought.</p>
<p style="padding-left: 40px">Against that litigation backdrop, day-to-day compliance obligations for email-tracking technologies are shaped less by wiretap doctrine and more by state privacy laws. Rather than prohibiting tracking technologies outright, these laws regulate how engagement data is classified, disclosed and operationalized within broader advertising and analytics ecosystems. The principal statutory risk typically arises where email engagement data is disclosed to third parties in a manner that could be characterized as a “sale” or “sharing” of personal information for targeted advertising purposes under state privacy laws (such as the California Consumer Privacy Act (CCPA) and analogous state laws). Under the CCPA, whether a disclosure constitutes a regulated “sale” or “sharing” turns on the role of the recipient and the contractual and technical constraints imposed on downstream use. Where an email service provider uses tracking technology-derived data solely to provide services to the sender without independently retaining, using, or repurposing that data for its own advertising purposes, the arrangement is more likely to qualify as a restricted “service provider” relationship. By contrast, if engagement data is made available to advertising networks or analytics partners for their own or joint advertising purposes, the disclosure may qualify as a “sale” or “sharing,” triggering notice and opt-out obligations. Regulators increasingly look beyond contract language to examine how data actually moves between systems, how it is combined with other identifiers, and how it is used in practice.</p>
<p style="padding-left: 40px">Recent enforcement activity in California highlights the regulatory focus. A 2026 CCPA settlement emphasized that opt-out mechanisms must be effective in practice, including across linked accounts, devices and services where personal information is used for cross-context behavioral advertising. The settlement also reinforces expectations around clear and conspicuous “Do Not Sell or Share” mechanisms, recognition of browser-based opt-out signals (such as Global Privacy Control), and user-interface design that does not fragment or undermine consumer choice.</p>
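<p style="padding-left: 40px">For illustration, the Global Privacy Control signal referenced above is transmitted as a <code>Sec-GPC: 1</code> HTTP request header (and exposed to scripts as <code>navigator.globalPrivacyControl</code>). A minimal, hypothetical handler that treats the header as a valid opt-out might look like this:</p>

```python
# Minimal sketch of honoring the Global Privacy Control (GPC) signal.
# The function name and calling convention are illustrative; only the
# `Sec-GPC: 1` header is defined by the GPC specification.
def request_opts_out_of_sale(headers: dict) -> bool:
    """Treat a request carrying `Sec-GPC: 1` as an opt-out of
    'sale'/'sharing' under CCPA-style rules."""
    return headers.get("Sec-GPC", "").strip() == "1"

print(request_opts_out_of_sale({"Sec-GPC": "1"}))  # True
print(request_opts_out_of_sale({}))                # False
```

<p style="padding-left: 40px">In practice the opt-out must then propagate to downstream advertising and analytics systems, which is precisely where the enforcement activity described below has focused.</p>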
<p style="padding-left: 40px">Although that matter did not concern email-tracking pixels specifically, its implications are directly relevant where tracking technology-derived engagement data is combined with other identifiers and disclosed to advertising or analytics partners. If such disclosures qualify as “sharing” for cross-context behavioral advertising, businesses must ensure that opt-out rights are implemented comprehensively—not merely at device level, but across associated profiles and systems.</p>
<p style="padding-left: 40px">While regulators have not taken email-based tracking technology-specific action, the Federal Trade Commission has pursued significant cases against companies (particularly in the health sector) for sharing sensitive data via tracking technologies without adequate transparency or consent. This underscores the broader regulatory risk associated with opaque tracking practices, especially where sensitive categories of data are involved.</p>
<p><strong>How Engagement Data Is Used Makes a Difference<br />
</strong>Not all uses of email engagement data carry the same risk. Two common patterns are worth distinguishing:</p>
<p style="padding-left: 40px"><strong>a) Aggregated reporting<br />
</strong>Many organizations use tracking technologies simply to generate high-level statistics such as overall open rates for a campaign or the relative popularity of links. Where the data is aggregated or truly anonymized, and not used to make decisions about specific individuals, there is a stronger argument that the intrusion into privacy is limited.</p>
<p style="padding-left: 40px">Some regulators have hinted that this kind of use may be treated more leniently, particularly if the metrics cannot reasonably be traced back to identifiable recipients. Nevertheless, transparency remains important, and it would be prudent to describe this kind of analytics in a privacy policy even where separate consent is not collected.</p>
<p style="padding-left: 40px"><strong>b) Segmentation and follow-up actions<br />
</strong>The risk profile changes where engagement data is used to build profiles, segment audiences or drive follow-up communications at an individual level. Examples include:</p>
<ul>
<li style="list-style-type: none">
<ul>
<li>resending emails only to those who did not open a previous message;</li>
<li>targeting offers based on which links a particular recipient clicked; or</li>
<li>combining tracking technology-derived data with other systems to build behavioral profiles.</li>
</ul>
</li>
</ul>
<p style="padding-left: 40px">These activities clearly involve personal data processing and, in many jurisdictions, are more likely to require explicit, prior consent under ePrivacy-type rules, as well as enhanced transparency. They will also attract greater scrutiny if regulators begin to look closely at email-tracking practices.</p>
<p style="padding-left: 40px">A practical way to think about this is that transparency alone may be more defensible for limited, aggregated analytics, whereas individual-level targeting based on tracking technology-derived data sits at the higher-expectation, higher-risk end of the spectrum.</p>
<p><strong>Practical Approaches to Compliance<br />
</strong>There is no single model that works for every organization. Technical constraints, risk appetite, audience type and geography all play a role. In broad terms, organizations tend to converge around three approaches when thinking about consent.</p>
<p style="padding-left: 40px"><strong>a) Separate consent for tracking technologies<br />
</strong>Under this approach, subscription or sign-up flows include a distinct consent mechanism for tracking technologies, separate from the consent to receive marketing emails. This is the approach that most closely matches the strict reading of ePrivacy and some of the emerging thinking of European regulators.</p>
<p style="padding-left: 40px">In theory, it offers strong legal defensibility and a clear audit trail. In practice, it can be challenging where, for instance, the email service provider’s platform does not allow pixels to be disabled for particular recipients. If a user agrees to marketing but withholds consent to tracking technologies, it may be difficult or impossible to honor that choice without suppressing emails entirely. Withdrawal of consent can also be hard to operationalize unless the technology stack is built with that in mind, particularly where consent is withdrawn only after tracked emails have been sent.</p>
<p style="padding-left: 40px">This model tends to suit organizations that are willing to invest in bespoke email infrastructure or that operate in sectors where regulatory expectations and reputational sensitivities are especially high.</p>
<p style="padding-left: 40px"><strong>b) Bundled consent covering marketing emails and tracking technologies<br />
</strong>Another option is to obtain a single, combined consent that covers both marketing emails and associated tracking technologies. This can be presented clearly on the sign-up or subscription form, supported by straightforward explanations in privacy notices and the emails themselves.</p>
<p style="padding-left: 40px">From a user-experience perspective, this can feel more honest than offering a choice that cannot truly be honored. It may be seen as a pragmatic compromise where the use of tracking technologies is technically unavoidable, provided that the language used is transparent about what is happening and why.</p>
<p style="padding-left: 40px">However, this is not perfectly aligned with the strict interpretation of separate, unbundled consent for distinct purposes, and organizations adopting this approach should understand that it sits in a gray area between letter-of-the-law compliance and operational reality.</p>
<p style="padding-left: 40px"><strong>c) Transparency-led approach without explicit tracking technology consent<br />
</strong>A third approach is to rely on transparency without seeking specific consent to tracking technologies. Under this model, organizations:</p>
<ul>
<li style="list-style-type: none">
<ul>
<li>acknowledge the use of tracking pixels in their privacy statement and, where appropriate, in cookie notices;</li>
<li>include concise explanations in email footers describing what is collected and how it is used; and</li>
<li>may suggest that recipients who prefer not to be tracked can disable images or take other steps in their email client.</li>
</ul>
</li>
</ul>
<p style="padding-left: 40px">This is the easiest option to implement, particularly where email platforms do not support fine-grained control and where the use of tracking technologies is limited to relatively low-risk analytics. It also aligns with what many recipients already experience in the market.</p>
<p style="padding-left: 40px">The trade-off is that this approach does not strictly meet ePrivacy and PECR requirements for prior consent to tracking technologies, and it may become more difficult to justify if regulators begin to focus actively on these email-tracking technologies. Organizations pursuing this path should view it as a risk-managed position rather than full compliance, monitor regulatory developments, and be prepared to adjust course.</p>
<p><strong>Key Takeaways</strong><br />
Organizations deploying email marketing should not assume compliance by default, but should proactively verify how tracking technologies are deployed and what data they capture.</p>
<p>Where technical limitations exist within an email service provider’s systems, organizations will need to consider a risk-based approach to consent mechanisms and transparency, while factoring in jurisdiction-specific risks.</p>
<p>While email-tracking technologies may indeed be invisible, from a regulatory and litigation standpoint they are anything but.</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/email-tracking-technology-compliance-us-eu/">Email-Tracking Technology: Emerging Compliance Expectations in the U.S., EU and Beyond</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">110</post-id>	</item>
		<item>
		<title>FTC Issues COPPA Policy Statement to Encourage Age Verification Technologies</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/ftc-coppa-policy-age-verification-technologies/</link>
		
		<dc:creator><![CDATA[Shruti Bhutani Arora and Scott Morton]]></dc:creator>
		<pubDate>Mon, 02 Mar 2026 18:54:44 +0000</pubDate>
				<category><![CDATA[Children’s Online Privacy Protection (COPPA)]]></category>
		<category><![CDATA[Privacy]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=105</guid>

					<description><![CDATA[<p>The Federal Trade Commission (FTC) has issued a significant policy statement announcing that it will not bring enforcement actions under the Children’s Online Privacy Protection Rule (COPPA Rule) against certain website and online service operators that collect, use, and disclose personal information solely for the purpose of determining a user’s age via age verification technologies. [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/ftc-coppa-policy-age-verification-technologies/">FTC Issues COPPA Policy Statement to Encourage Age Verification Technologies</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The Federal Trade Commission (FTC) has issued a significant <a href="https://www.ftc.gov/news-events/news/press-releases/2026/02/ftc-issues-coppa-policy-statement-incentivize-use-age-verification-technologies-protect-children">policy statement</a> announcing that it will not bring enforcement actions under the Children’s Online Privacy Protection Rule (COPPA Rule) against certain website and online service operators that collect, use, and disclose personal information solely for the purpose of determining a user’s age via age verification technologies.</p>
<p><span id="more-105"></span></p>
<p>This development comes as the FTC seeks to balance its mandate to protect children’s privacy online with the growing recognition that age verification technologies are, in the words of FTC Bureau of Consumer Protection Director Christopher Mufarrige, “some of the most child-protective technologies to emerge in decades.”</p>
<p><strong>Background: The COPPA-Age Verification Tension<br />
</strong>Since COPPA was enacted in 1998, there has been an explosion in the use of internet-connected technologies by children. In response to growing concerns about children’s online safety, several states have begun requiring websites and online services to use age verification mechanisms to determine the age of users.</p>
<p>However, as noted at the FTC’s recent workshop on age verification technologies held in January 2026, some age verification mechanisms require the collection of personal information from children, prompting questions about whether such activities could violate the COPPA Rule. This created a somewhat paradoxical situation: The very technologies designed to protect children online could themselves trigger COPPA compliance obligations.</p>
<p>The FTC has now sought to resolve this tension through its new policy statement, designed to “incentivize operators to use these innovative tools, empowering parents to protect their children online.”</p>
<p><strong>Key Conditions for Enforcement Discretion<br />
</strong>The policy statement applies to operators of general audience sites and services, as well as mixed audience sites and services (those directed to children, but which do not target children as their primary audience). Under the policy, the FTC will exercise enforcement discretion where operators collect personal information for age verification purposes without first obtaining verifiable parental consent, provided they comply with six key conditions:</p>
<ol>
<li><strong>Purpose limitation:</strong> The operator does not use or disclose information collected for age verification purposes for any other purpose.</li>
<li><strong>Third-party safeguards:</strong> The operator discloses information collected for age verification purposes only to third parties that the operator has taken reasonable steps to determine are capable of maintaining the confidentiality, security and integrity of the information, including by obtaining written assurances from those third parties.</li>
<li><strong>Retention limitation:</strong> The operator does not retain the information longer than necessary to fulfill the age verification purposes, and deletes such information promptly thereafter.</li>
<li><strong>Notice:</strong> The operator provides clear notice to parents and children of the information collected for age verification purposes in its privacy policy.</li>
<li><strong>Security:</strong> The operator employs reasonable security safeguards for information collected for age verification purposes.</li>
<li><strong>Accuracy:</strong> The operator takes reasonable steps to determine that any product, service, method or third party utilized for age verification purposes is likely to provide reasonably accurate results as to the user’s age.</li>
</ol>
<p>Critically, the FTC will not exercise its enforcement discretion unless the operator is complying with the COPPA Rule’s requirements in every other respect with regard to personal information collected from children.</p>
<p><strong>Context: The January 2026 Workshop<br />
</strong>This policy statement follows the FTC’s workshop on age verification technologies held on January 28, 2026. The workshop examined the interplay between COPPA enforcement and developments in age verification technology, with FTC Chairman Andrew Ferguson emphasizing that “COPPA enforcement is and will remain a top priority of the Trump-Vance FTC.”</p>
<p>The workshop featured extensive discussion of various age verification methods, from facial age estimation and government ID verification to emerging approaches such as behavioral analysis and reusable age credentials. Panelists highlighted both the promise and challenges of these technologies, including privacy concerns, accuracy limitations, and the need for robust data protection safeguards.</p>
<p>A key theme emerging from the workshop was the importance of “double-blind” architectures, where an external service verifies age without knowing which website the data is intended for, and the website confirms age status without learning the user’s identity. Such approaches help ensure that neither party obtains a complete picture of the user’s identity and activities.</p>
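<p>Under stated assumptions, the double-blind flow described above can be sketched as follows. This is illustrative only: all names are hypothetical, and a shared signing key stands in for the public-key signatures a real scheme would use.</p>

```python
# Hedged sketch of a "double-blind" age assertion: the verification
# service signs only a yes/no claim and never learns which website
# requested it; the website checks the signature without ever seeing
# the user's identity documents.
import hashlib
import hmac
import secrets

VERIFIER_KEY = secrets.token_bytes(32)  # held by the verification service

def issue_age_token(user_is_over_threshold: bool) -> tuple[str, str]:
    """Verification service: checks the user's documents out of band,
    then issues an opaque claim asserting only the boolean result."""
    nonce = secrets.token_hex(16)
    claim = f"{nonce}:over={user_is_over_threshold}"
    sig = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, sig

def site_accepts(claim: str, sig: str) -> bool:
    """Website: admits the user only if the signed claim is valid and
    affirmative. It learns the user's age status, not their identity."""
    expected = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and claim.endswith("over=True")
```

<p>The random nonce keeps each token single-purpose, so neither party can correlate a user’s activity across sites from the token alone.</p>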
<p><strong>Broader Regulatory Context<br />
</strong>The policy statement arrives amid a rapidly evolving regulatory landscape. Multiple U.S. states have enacted laws requiring age verification for access to certain online content, with requirements varying significantly in their scope, threshold ages and acceptable verification methods. Similar requirements exist internationally, including the UK’s Age Appropriate Design Code and Online Safety Act, and Australia’s recent social media age verification requirements.</p>
<p>The FTC has indicated that it intends to initiate a review of the COPPA Rule to address age verification mechanisms more formally. The policy statement will remain effective until the FTC publishes final rule amendments on this issue in the <em>Federal Register</em>, or until otherwise withdrawn.</p>
<p><strong>Practical Implications<br />
</strong>For operators of general audience and mixed-audience sites and services, this policy statement provides welcome clarity and reduces the legal risk associated with deploying age verification technologies. Operators can now implement these technologies with greater confidence, provided they adhere to the six conditions set out in the statement.</p>
<p>However, several practical considerations remain. Operators should carefully document their compliance with each condition, particularly around third-party due diligence, data retention policies, and accuracy assessments. Privacy policies should be updated to clearly disclose the information collected for age verification purposes.</p>
<p>It is also worth noting that this policy statement does not modify the FTC&#8217;s position regarding sites and services that are primarily directed to children, which must continue to treat all users as children and provide COPPA Rule protections to all users.</p>
<p><strong>Conclusion<br />
</strong>The FTC’s policy statement represents a pragmatic response to a genuine regulatory challenge. By providing enforcement discretion for operators using age verification technologies in good faith, the FTC has sought to remove a potential barrier to the adoption of technologies that can enhance children’s online safety.</p>
<p>As Chairman Ferguson observed at the January workshop, “COPPA, a statute designed to empower parents and protect children online, should not be an impediment to the most child protective technology to emerge in decades.” This policy statement reflects that principle whilst maintaining the fundamental protections that COPPA provides.</p>
<p>The FTC voted 2-0 to issue the policy statement.</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/ftc-coppa-policy-age-verification-technologies/">FTC Issues COPPA Policy Statement to Encourage Age Verification Technologies</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">105</post-id>	</item>
		<item>
		<title>A New Era of Data Deletion: Inside California’s DROP System</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/data-deletion-california-drop-system/</link>
		
		<dc:creator><![CDATA[Shruti Bhutani Arora, Christine Mastromonaco and Dayo Feyisayo Ajanaku]]></dc:creator>
		<pubDate>Fri, 21 Nov 2025 15:49:33 +0000</pubDate>
				<category><![CDATA[California]]></category>
		<category><![CDATA[California Privacy Protection Agency (CalPrivacy)]]></category>
		<category><![CDATA[Data Rights Opt-out Platform (DROP System)]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=96</guid>

					<description><![CDATA[<p>On January 1, 2026, the California Privacy Protection Agency (CalPrivacy, as it is now known) will launch the Data Rights Opt-out Platform (DROP System), an online tool enabling California residents to send a single request to over 500 data brokers requiring them to delete their personal information. Who Are Data Brokers Data brokers are companies [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/data-deletion-california-drop-system/">A New Era of Data Deletion: Inside California’s DROP System</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>On January 1, 2026, the California Privacy Protection Agency (CalPrivacy, as it is now known) will launch the Data Rights Opt-out Platform (DROP System), an online tool enabling California residents to send a single request to over 500 data brokers requiring them to delete their personal information.</p>
<p><span id="more-96"></span></p>
<p><strong>Who Are Data Brokers?<br />
</strong>Data brokers are companies that collect and sell personal information without directly interacting with consumers, and they must register with the CPPA and integrate with the DROP System. Data brokers often obtain your email, phone number, browsing history, or location data, and make inferences about your interests or characteristics from this data.</p>
<p><strong>How the System Works for Consumers<br />
</strong>The DROP System aims to make it much easier for Californians to exercise their right to delete their personal information. Through a secure online portal, users will be able to verify their California residency, choose which data brokers to contact, and track their requests using a unique DROP ID. The system will ask California residents to provide information about themselves such as their name, date of birth, phone number, and email address, in addition to device identifiers such as Mobile Advertising IDs (MAIDs). The more identifiers you provide, the more likely a data broker will be able to match your information to data in their system and delete your data. To protect user information, the DROP System will use multi-factor authentication and encrypted (hashed) data transmission.</p>
<h3><strong>Implementation Timeline</strong></h3>
<ul>
<li>January 1, 2026 – DROP System opens to California residents.</li>
<li>August 1, 2026 – Data brokers must begin processing deletion requests at least every 45 days.</li>
</ul>
<p><strong>Data Broker Response Requirements<br />
</strong>When a broker receives a request, they must delete the consumer’s entire record if any identifier matches their records. When processing a deletion request, data brokers must assign one of four standardized response statuses. A request marked “Deleted” means the consumer’s non-exempt personal information has been fully removed. “Exempt” indicates that certain data cannot legally be deleted, and the broker must explain why. “Opted Out” applies when full deletion isn’t possible, but the consumer information is blocked from being sold or shared in the future. Finally, “Record Not Found” means the broker couldn’t locate any data matching the identifiers submitted in the request.</p>
<p><strong>Data Broker Obligations<br />
</strong>Data brokers will face several new compliance requirements, including:</p>
<ul>
<li>Annual registration with the CPPA, including a $6,000 fee.</li>
<li>Completion of deletion requests within 45 days of receipt.</li>
<li>Maintenance of suppression lists to prevent re-collection or resale of deleted data.</li>
<li>Audit trail creation documenting each deletion response.</li>
<li>Reporting of data sales to specified entities, including foreign actors, AI developers and federal government agencies.</li>
</ul>
<p><strong>Enforcement and Penalties</strong><br />
To ensure compliance, the CPPA will impose penalties of $200 per day per consumer for noncompliance. Beginning in 2028, data brokers will also undergo independent audits, and consumers will have access to a formal complaint process for unresolved or disputed deletion requests.</p>
<p>&nbsp;</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/data-deletion-california-drop-system/">A New Era of Data Deletion: Inside California’s DROP System</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">96</post-id>	</item>
		<item>
		<title>The Data Security Program Compliance Guide Is Released by the DOJ</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/the-data-security-program-compliance-guide-is-released-by-the-doj/</link>
		
		<dc:creator><![CDATA[Consumer Protection Team]]></dc:creator>
		<pubDate>Thu, 24 Apr 2025 15:34:14 +0000</pubDate>
				<category><![CDATA[Data Security]]></category>
		<category><![CDATA[U.S. Department of Justice (DOJ)]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=82</guid>

					<description><![CDATA[<p>On January 8, 2025, the U.S. Department of Justice (DOJ) issued its final rule (28 C.F.R. Part 202) implementing former President Biden’s Executive Order 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.” The guide outlines the requirements of a newly implemented Data Security Program (DSP) [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/the-data-security-program-compliance-guide-is-released-by-the-doj/">The Data Security Program Compliance Guide Is Released by the DOJ</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>On January 8, 2025, the U.S. Department of Justice (DOJ) issued its final rule (<a class="external-link" href="https://www.ecfr.gov/current/title-28/chapter-I/part-202?toc=1" target="_blank" rel="noopener">28 C.F.R. Part 202</a>) implementing former President Biden&#8217;s <a href="https://www.federalregister.gov/documents/2024/03/01/2024-04573/preventing-access-to-americans-bulk-sensitive-personal-data-and-united-states-government-related">Executive Order 14117</a>, &#8220;Preventing Access to Americans&#8217; Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.&#8221; The accompanying compliance guide outlines the requirements of the newly implemented Data Security Program (DSP) designed to prevent China, Russia and other foreign adversaries designated by the DOJ from accessing Americans&#8217; sensitive personal data and U.S. government-related data.</p>
<p>In “<a href="https://www.pillsburylaw.com/en/news-and-insights/doj-data-security-program-compliance-guide.html">DOJ Releases Its Data Security Program Compliance Guide</a>,” colleagues <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/tony-phillips.html">Tony Phillips</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/shruti-arora.html">Shruti Bhutani Arora</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/sahar-hafeez.html">Sahar J. Hafeez</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/christine-mastromonaco.html">Christine Mastromonaco</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/leighton-watson.html">Leighton Watson</a> and <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/sheetal-misra.html">Sheetal Misra</a> discuss the key components of the DSP and offer thoughts about compliance.</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/the-data-security-program-compliance-guide-is-released-by-the-doj/">The Data Security Program Compliance Guide Is Released by the DOJ</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">82</post-id>	</item>
		<item>
		<title>Apple’s $500B U.S. Manufacturing Push and New AI Server Facility in Houston: What It Means for Data Centers</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/apple-manufacturing-ai-server-data-centers/</link>
		
		<dc:creator><![CDATA[Robert A. James, Brittany D. Sandler and Samuel C. Markel]]></dc:creator>
		<pubDate>Wed, 26 Feb 2025 21:26:26 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Centers]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=79</guid>

					<description><![CDATA[<p>As we covered previously, President Trump has made clear that the U.S. is focused on increasing investments into building, scaling and speeding the development of AI infrastructure and data centers in the U.S., and Big Tech is responding in kind. On Monday, Apple announced its largest-ever spend commitment: $500 billion in the U.S. over the next four years, [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/apple-manufacturing-ai-server-data-centers/">Apple’s $500B U.S. Manufacturing Push and New AI Server Facility in Houston: What It Means for Data Centers</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>As we covered <a href="https://www.pillsburylaw.com/en/news-and-insights/ai-data-centers-trump.html">previously</a>, President Trump has made clear that the U.S. is focused on increasing investments into building, scaling and speeding the development of AI infrastructure and data centers in the U.S., and Big Tech is responding in kind.</p>
<p><span id="more-79"></span></p>
<p>On Monday, <a href="https://www.apple.com/newsroom/2025/02/apple-will-spend-more-than-500-billion-usd-in-the-us-over-the-next-four-years/">Apple announced</a> its largest-ever spend commitment: $500 billion in the U.S. over the next four years, covering investments in manufacturing, education, and training for technologies like artificial intelligence and chip making. This includes:</p>
<ul>
<li>Launching a state-of-the-art, 250,000-square-foot AI facility in Houston, focused on building servers&#8212;previously manufactured outside of the U.S.&#8212;that can handle AI compute.</li>
<li>Servers that are expected to reduce the energy demands of Apple’s data centers (which Apple notes already run on 100 percent renewable energy).</li>
<li>Expanding existing data center capacity in North Carolina, Iowa, Oregon, Arizona and Nevada.</li>
<li>Doubling its U.S. Advanced Manufacturing Fund from $5 billion to $10 billion to boost high-skilled manufacturing jobs in the United States. This includes a multibillion-dollar commitment to produce advanced silicon at TSMC’s Fab 21 in Arizona—where Apple, as the largest customer, recently began mass chip production. With its suppliers operating in 24 factories across 12 states, Apple expects this initiative to create thousands of jobs while driving cutting-edge manufacturing for Apple devices.</li>
<li>Hiring approximately 20,000 people focused on R&amp;D, silicon engineering, software development, and AI and machine learning.</li>
<li>Opening an “Apple Manufacturing Academy” in Detroit, Mich., where Apple engineers and other experts will consult with businesses on AI and smart manufacturing techniques.</li>
<li>Expanding investments in education and workforce development through grants to organizations like 4‑H and Boys &amp; Girls Clubs of America and the launch of new initiatives, such as the New Silicon Initiative—collaborating with leading universities like Georgia Tech and UCLA—to prepare students for careers in hardware engineering and silicon chip design.</li>
</ul>
<p><strong>Key Takeaway and Considerations for AI and Data Center Stakeholders<br />
</strong>Apple’s $500 billion U.S. spend commitment <a href="https://www.pillsburylaw.com/en/news-and-insights/ai-data-centers-trump.html">mirrors similar investments</a> by industry giants like SoftBank, Oracle and OpenAI, highlighting a sweeping push toward domestic production and enhanced U.S. data center capabilities.</p>
<p>These transformative investments continue to raise a host of legal and commercial issues. Companies using, procuring, producing, or powering AI products and data center space will need to carefully navigate those considerations.</p>
<hr />
<p><strong>RELATED ARTICLES</strong></p>
<blockquote class="wp-embedded-content" data-secret="Q4VOe90yy3"><p><a href="https://www.gravel2gavel.com/data-centers-field-guide/">Data Centers: A Field Guide</a></p></blockquote>
<p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted"  title="&#8220;Data Centers: A Field Guide&#8221; &#8212; Gravel2Gavel Construction &amp; Real Estate Law Blog" src="https://www.gravel2gavel.com/data-centers-field-guide/embed/#?secret=CqeU0JbEJJ#?secret=Q4VOe90yy3" data-secret="Q4VOe90yy3" width="500" height="282" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></p>
<blockquote class="wp-embedded-content" data-secret="oilSsJZJ1F"><p><a href="https://www.gravel2gavel.com/what-is-data-center/">Anatomy of a Data Center</a></p></blockquote>
<p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted"  title="&#8220;Anatomy of a Data Center&#8221; &#8212; Gravel2Gavel Construction &amp; Real Estate Law Blog" src="https://www.gravel2gavel.com/what-is-data-center/embed/#?secret=DNKFiJwZwr#?secret=oilSsJZJ1F" data-secret="oilSsJZJ1F" width="500" height="282" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/apple-manufacturing-ai-server-data-centers/">Apple’s $500B U.S. Manufacturing Push and New AI Server Facility in Houston: What It Means for Data Centers</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">79</post-id>	</item>
		<item>
		<title>UK Online Safety Act: New Obligations for Digital Service Providers Targeting the UK</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/uk-online-safety-act-new-obligations-digital-service-providers/</link>
		
		<dc:creator><![CDATA[Mark Booth, Steven Farmer and Scott Morton]]></dc:creator>
		<pubDate>Tue, 18 Feb 2025 16:44:03 +0000</pubDate>
				<category><![CDATA[United Kingdom (UK)]]></category>
		<category><![CDATA[Online Safety Act (OSA)]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=75</guid>

					<description><![CDATA[<p>The UK&#8217;s Online Safety Act 2023 (OSA) is a comprehensive piece of legislation designed to regulate social media companies and search services and to increase protections for individuals online. It draws comparisons to the EU&#8217;s Digital Services Act, with both laws including provisions relating to safety and transparency&#8212;seeking to balance the need to protect people [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/uk-online-safety-act-new-obligations-digital-service-providers/">UK Online Safety Act: New Obligations for Digital Service Providers Targeting the UK</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The UK&#8217;s Online Safety Act 2023 (OSA) is a comprehensive piece of legislation designed to regulate social media companies and search services and to increase protections for individuals online. It draws comparisons to the EU&#8217;s Digital Services Act, with both laws including provisions relating to safety and transparency&#8212;seeking to balance the need to protect people online with fundamental rights such as the right to freedom of expression and privacy. Importantly, it applies not just to digital service providers in the UK but to any service with links to the UK.</p>
<p><span id="more-75"></span></p>
<p>One key area in which the OSA goes further is in relation to the obligations on service providers to undertake specific assessments in relation to their services. The first of these assessments (the illegal content risk assessment—discussed in more detail below) must be completed by all in-scope services by <strong>March 16, 2025</strong>.</p>
<p>For more information on the developing regulatory landscape relating to “online safety” in the UK, EU and U.S., please <a href="https://www.pillsburylaw.com/en/news-and-insights/whose-fault-section-230-liability.html">view our earlier webinar</a>.</p>
<p><strong>OSA—Who Is in Scope?</strong></p>
<ul>
<li><strong>Services in scope</strong>. The OSA applies to providers of user-to-user (U2U) services and search services:
<ul>
<li><strong>U2U services:</strong> internet services through which content that is generated, uploaded, or shared by users may be encountered by other users of the service. This could cover social media platforms, photo or video sharing platforms, chat and instant messaging services, blogs, online games, online dating platforms, online marketplaces, etc., including where such services are available through a web browser or mobile application.</li>
<li><strong>Search services:</strong> internet services that consist of, or include, the functionality for users to search multiple websites or databases (including services through which a user could in principle search all websites or databases). This could cover conventional search engines, reverse image lookups, content aggregators that allow users to search through multiple databases, AI-powered search engines, etc.</li>
</ul>
</li>
</ul>
<p>Certain services are then subject to additional obligations, such as U2U or search services that publish pornographic content or that meet certain threshold conditions, based on the number of UK users and certain features of the service (“Categorised Services”).</p>
<ul>
<li><strong>Content in scope</strong>. The OSA defines “content” as anything communicated by means of an internet service, whether publicly or privately, including written material or messages, oral communications, photographs, videos, visual images, music and data of any description. The OSA envisages several different categories of content:
<ul>
<li><strong>Illegal content:</strong> content that amounts to a relevant offense, i.e., a priority offense (such as offenses related to terrorism or child sexual exploitation or abuse (CSEA)) or another, non-priority, offense (such as cyberflashing, content that encourages or assists with self-harm, or threatening communications).</li>
<li><strong>Content that is harmful to children: </strong>content that may not amount to illegal content but that should be hidden from children. This is further grouped into primary priority content that is harmful to children (such as pornographic, suicide, self-harm or eating disorder content), priority content that is harmful to children (such as abuse, hate, bullying or violent content, or content which encourages the consumption of harmful substances or the undertaking of dangerous stunts or challenges), and non-designated harmful content which otherwise presents a material risk of significant harm to an appreciable number of children in the UK.</li>
</ul>
</li>
</ul>
<p>Categorised Services are also subject to obligations relating to additional content categories, such as fraudulent adverts, content of democratic importance, and content that adults may wish to avoid encountering on a service (e.g., abusive content or content that encourages suicide or self-harm).</p>
<ul>
<li><strong>Territorial jurisdiction:</strong> Given the cross-border nature of the internet, the OSA&#8217;s territorial application extends beyond services based in the UK. The OSA applies to any service that &#8220;has links with the United Kingdom.&#8221; This criterion will be met where: (i) the service has a significant number of UK users; (ii) the UK forms one of the target markets for the service; or (iii) the service is capable of being used in the UK and there are reasonable grounds to believe there is a material risk of significant harm to individuals in the UK presented by content on the service.</li>
</ul>
<p><strong>What Are the Main Assessment Duties?</strong></p>
<ol>
<li><strong>Illegal Content Risk Assessment</strong>. The first assessment that must be undertaken is an illegal content risk assessment. This is designed to improve a service provider&#8217;s understanding of how risks of different kinds of illegal harms could arise on the service and what safety measures should be implemented to protect users. Illegal content risk assessments must be of a &#8220;suitable and sufficient&#8221; standard. Ofcom (the UK regulatory body with responsibility for the OSA) has published guidance on how to carry out an illegal content risk assessment, which sets out a four-step process: (i) understand the kinds of illegal content that need to be assessed; (ii) assess the risk of harm; (iii) decide measures, implement and record; and (iv) report, review and update. All in-scope services must complete an illegal content risk assessment by <strong>March 16, 2025</strong>.</li>
<li><strong>Children&#8217;s Access Assessment</strong>. In-scope services must also undertake a children&#8217;s access assessment to understand to what extent the service is &#8220;likely to be accessed by children&#8221; (meaning anyone under the age of 18). Where the assessment determines that a service is likely to be accessed by children, this triggers additional child protection duties (which may not apply if the children&#8217;s access assessment determines that the service is not likely to be accessed by children). Importantly, if a children&#8217;s access assessment is not completed, the service will be subject to the additional child protection duties in any event. All in-scope services must complete a children&#8217;s access assessment by <strong>April 16, 2025</strong>.</li>
<li><strong>Children&#8217;s Risk Assessment</strong>. If a service is considered &#8220;likely to be accessed by children,&#8221; then service providers must undertake a children&#8217;s risk assessment, which is separate from and in addition to the overarching illegal content risk assessment. The children&#8217;s risk assessment must assess the risks that exist specifically in relation to children on the service, taking into account any measures the service already has in place to protect children. Ofcom is currently consulting on guidance relating to the children&#8217;s risk assessment. Once this guidance has been finalized (expected in April 2025), service providers will have three months to complete the requisite assessment.</li>
<li><strong>User Empowerment Risk Assessment</strong>. Certain Categorised Services may also need to complete an adult user empowerment assessment, to understand their user base, the likelihood of adult users encountering specific content that they may wish to avoid, and the impact that the service (including its design and operation) may have in relation to adults encountering such content. Ofcom will publish more details on this once it has finalized the thresholds to determine which services will be Categorised Services under the OSA.</li>
</ol>
<p>Records must be maintained of all assessments carried out, including the process followed and the findings. Failure to complete an assessment where required under the OSA is an automatic breach, which could result in enforcement action and a civil penalty of up to 10% of qualifying worldwide revenue or &#163;18 million (whichever is greater).</p>
<p><strong>Additional Obligations<br />
</strong>A core focus of the OSA is on “systems and processes”—Ofcom will regulate the measures taken by service providers to mitigate the risks identified in their assessments, ensuring that these measures are proportionate, as opposed to regulating individual pieces of content appearing on the services.</p>
<p>Service providers will also be expected to have in place content reporting and complaints procedures, and clear and accessible terms of service which address specific areas identified in the OSA (e.g., specifying how individuals are to be protected from illegal content).</p>
<p><strong>Next Steps<br />
</strong>Companies operating U2U or search services should consider to what extent they may be subject to the OSA (e.g., by analyzing any links they may have to the UK). Where a service is in scope, the priority action item would be to complete the illegal content risk assessment by the <strong>March 16, 2025</strong>, deadline.</p>
<hr />
<p><strong>Related Articles</strong></p>
<blockquote class="wp-embedded-content" data-secret="9uutChv1PW"><p><a href="https://www.internetandtechnologylaw.com/european-union-cancel-contract-button-rule-online-sales/">New EU Rule Requires Easy &#8220;Cancel Contract&#8221; Button for Online Sales</a></p></blockquote>
<p><iframe loading="lazy" class="wp-embedded-content" sandbox="allow-scripts" security="restricted"  title="&#8220;New EU Rule Requires Easy &#8220;Cancel Contract&#8221; Button for Online Sales&#8221; &#8212; Internet &amp; Social Media Law Blog" src="https://www.internetandtechnologylaw.com/european-union-cancel-contract-button-rule-online-sales/embed/#?secret=A6vz3aKJ2F#?secret=9uutChv1PW" data-secret="9uutChv1PW" width="500" height="282" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></p>
<blockquote class="wp-embedded-content" data-secret="DMdQmFn6HH"><p><a href="https://consumer-protection-dispatch.pillsburylaw.com/eu-ai-act-first-requirements-february-2025/">EU AI Act: First Set of Requirements Go into Effect February 2, 2025</a></p></blockquote>
<p><iframe loading="lazy" class="wp-embedded-content" sandbox="allow-scripts" security="restricted"  title="&#8220;EU AI Act: First Set of Requirements Go into Effect February 2, 2025&#8221; &#8212; Consumer Protection Dispatch" src="https://consumer-protection-dispatch.pillsburylaw.com/eu-ai-act-first-requirements-february-2025/embed/#?secret=qUdlNWB1iO#?secret=DMdQmFn6HH" data-secret="DMdQmFn6HH" width="500" height="282" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/uk-online-safety-act-new-obligations-digital-service-providers/">UK Online Safety Act: New Obligations for Digital Service Providers Targeting the UK</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">75</post-id>	</item>
		<item>
		<title>California’s Significant AI laws Go into Effect</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/california-ai-laws/</link>
		
		<dc:creator><![CDATA[Consumer Protection Team]]></dc:creator>
		<pubDate>Thu, 13 Feb 2025 00:19:21 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[California]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=72</guid>

					<description><![CDATA[<p>January 1, 2025, marked the start of a series of significant AI laws going into effect in California. California’s 18 new AI laws represent a significant step toward regulating this space, establishing requirements regarding deepfake technology, AI transparency, data privacy and use of AI in the health care arena. These laws reinforce the state’s desire [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/california-ai-laws/">California’s Significant AI laws Go into Effect</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>January 1, 2025, marked the start of a series of significant AI laws going into effect in California. California’s 18 new AI laws represent a significant step toward regulating this space, establishing requirements regarding deepfake technology, AI transparency, data privacy and use of AI in the health care arena. These laws reinforce the state’s desire to be a pioneer in this space.</p>
<p>In <a href="https://www.pillsburylaw.com/en/news-and-insights/california-ai-laws.html">California&#8217;s AI Laws Are Here&#8212;Is Your Business Ready?</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/christine-mastromonaco.html">Christine Mastromonaco</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/shruti-arora.html">Shruti Bhutani Arora</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/andrew-caplan.html">Andrew Caplan</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/erin-choo.html">Erin Choo</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/mia-rendar.html">Mia Rendar</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/leighton-watson.html">Leighton Watson</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/anne-voigts.html">Anne M. Voigts</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/shani-rivaux.html">Shani Rivaux</a>, <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/johnna-purcell.html">Johnna Purcell</a> and <a class="bio-footer__author" href="https://www.pillsburylaw.com/en/lawyers/dayo-feyisayo-ajanaku.html">Dayo Feyisayo Ajanaku</a> provide a detailed look at the enacted legislation, address compliance timelines and serve as a guide for businesses as they navigate compliance with California&#8217;s evolving AI landscape.</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/california-ai-laws/">California’s Significant AI laws Go into Effect</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">72</post-id>	</item>
		<item>
		<title>EU AI Act: First Set of Requirements Go into Effect February 2, 2025</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/eu-ai-act-first-requirements-february-2025/</link>
		
		<dc:creator><![CDATA[Steven Farmer, Scott Morton and Mark Booth]]></dc:creator>
		<pubDate>Mon, 03 Feb 2025 19:49:33 +0000</pubDate>
				<category><![CDATA[AI Governance]]></category>
		<category><![CDATA[EU]]></category>
		<category><![CDATA[The EU AI Act]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=70</guid>

					<description><![CDATA[<p>The first binding obligations of the European Union’s landmark AI legislation, the EU AI Act (the Act), came into effect on February 2, 2025. Essentially, from this date, AI practices which present an unacceptable level of risk are prohibited and organizations are required to ensure an appropriate level of AI literacy among staff. For a [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/eu-ai-act-first-requirements-february-2025/">EU AI Act: First Set of Requirements Go into Effect February 2, 2025</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The first binding obligations of the European Union’s landmark AI legislation, the EU AI Act (the <strong>Act</strong>), came into effect on February 2, 2025. Essentially, from this date, AI practices which present an unacceptable level of risk are prohibited and organizations are required to ensure an appropriate level of AI literacy among staff. For a comprehensive overview of the Act, see our earlier <a href="https://www.pillsburylaw.com/en/news-and-insights/eu-ai-act.html">client alert here</a>.</p>
<p><span id="more-70"></span></p>
<p><strong>Prohibited AI Practices from February 2, 2025<br />
</strong>Article 5 prohibits the use of specific AI practices deemed harmful or inconsistent with EU values. The prohibited practices are:</p>
<ol>
<li><strong>Manipulative AI</strong>: AI systems using subliminal or deceptive techniques to distort an individual’s decision-making, causing significant harm.</li>
<li><strong>Exploitative AI</strong>: Exploitation of vulnerabilities of individuals or groups (e.g., due to age, disability or socio-economic status) to materially distort behavior, causing harm.</li>
<li><strong>Social Scoring</strong>: AI systems evaluating individuals or groups over time based on social behavior, resulting in discriminatory or detrimental outcomes.</li>
<li><strong>Predictive Policing</strong>: AI systems assessing individuals’ risks of criminal behavior through profiling or personality traits.</li>
<li><strong>Facial Recognition Databases</strong>: The creation or expansion of facial recognition databases through untargeted scraping of the internet or CCTV footage.</li>
<li><strong>Emotion Inference</strong>: AI systems inferring emotions of individuals in workplaces or educational institutions, except for narrowly defined medical or safety purposes. (The scope of this prohibition is subject to particular debate.)</li>
<li><strong>Biometric Categorization</strong>: Using biometric data to deduce sensitive attributes such as race, political opinion or sexual orientation, except for certain law enforcement purposes.</li>
<li><strong>Real-Time Biometric Identification</strong>: Public-space deployment of real-time remote biometric identification systems for law enforcement, subject to narrowly defined exceptions (e.g., targeted searches for missing persons).</li>
</ol>
<p>Violating these prohibitions can result in substantial penalties under Article 99(3): fines of up to €35 million or 7% of global annual turnover, whichever is higher.</p>
<p><strong>Mandatory AI Literacy<br />
</strong>Also taking effect on February 2, 2025, is the obligation under Article 4 for providers and deployers of AI systems to ensure sufficient “AI literacy” among their staff and operators. Key aspects of the AI literacy obligation are as follows:</p>
<ul>
<li>“AI literacy” is defined as the ability to make informed decisions regarding the deployment and risks of AI systems, as well as understanding the potential harms AI can cause.</li>
<li>Providers and deployers of AI systems must ensure individuals involved in the operation or use of AI systems have sufficient skills, knowledge and understanding to handle the systems responsibly.</li>
<li>Training must be tailored to the technical expertise of the staff, the context of the AI systems’ deployment, and the characteristics of the individuals or groups impacted by the AI systems.</li>
</ul>
<p>While the Act is silent on specific penalties for non-compliance with Article 4, regulators are likely to consider insufficient training an aggravating factor when determining penalties for other violations under the Act.</p>
<p><strong>Remaining Implementation Timeline<br />
</strong>To recap, beyond February 2025, additional obligations under the Act will come into force as follows:</p>
<ul>
<li><strong>August 2, 2025</strong>: Obligations for providers of general-purpose AI (<strong>GPAI</strong>) models take effect.</li>
<li><strong>August 2, 2026</strong>: Remaining obligations for providers and deployers of AI systems take effect.</li>
<li><strong>August 2, 2027</strong>: Obligations for AI systems that are safety components of products subject to third-party conformity assessments under existing EU regulations take effect.</li>
</ul>
<p><strong>Other Recent Developments</strong></p>
<p><strong>General-Purpose AI Code of Practice<br />
</strong>The European Commission published the first draft of the General-Purpose AI Code of Practice on November 14, 2024. The Code, once finalized by May 2025, will provide practical guidance to help providers of GPAI models comply with the Act. Compliance with the Code will be mandatory by August 2025. A draft can be accessed <a href="https://digital-strategy.ec.europa.eu/en/library/first-draft-general-purpose-ai-code-practice-published-written-independent-experts">here</a>.</p>
<p>Key highlights of the Code include the following:</p>
<ul>
<li>It encourages privacy-preserving techniques such as differential privacy and robust data selection.</li>
<li>It emphasizes the need for strong governance, adversarial testing and transparency in AI model development.</li>
<li>It stresses the importance of minimizing risks of personal data being revealed through model outputs.</li>
</ul>
<p><strong>Recent EDPB Guidance on AI and GDPR<br />
</strong>On December 18, 2024, the European Data Protection Board (<strong>EDPB</strong>) issued Opinion 28/2024, which addresses data protection considerations in the context of AI models. The opinion offers practical guidance on determining whether AI models trained on personal data constitute personal data under GDPR and establishing the legal basis for processing personal data during the development and deployment of AI models. It also provides guidance on managing the consequences of unlawful processing during AI model development.</p>
<p>With respect to the first point—whether an AI model trained on personal data can itself be considered personal data under the GDPR—the EDPB states that AI models trained on personal data must be assessed on a case-by-case basis, applying specific criteria.</p>
<p>To argue that a model is anonymous (and therefore not, itself, personal data), providers must demonstrate, using reasonable means, that personal data related to the training data cannot be extracted from the model and that any output produced when querying the model does not relate to a data subject whose personal data was used to train the model.</p>
<p>The EDPB offers some practical guidance for developers seeking to support anonymization in AI models. While achieving full anonymization may often be unattainable, these measures could well set a regulatory benchmark for responsible AI development in accordance with fundamental GDPR principles.</p>
<p>Key recommendations include:</p>
<ul>
<li><strong>Careful Data Selection</strong>: Limit personal data collection by carefully choosing training data sources.</li>
<li><strong>Data Preparation</strong>: Employ processes such as anonymization, pseudonymization, data minimization, and filtering to reduce personal data processing.</li>
<li><strong>Robust Training Methods</strong>: Use methodologies prioritizing generalization over memorization, while incorporating privacy-preserving techniques like differential privacy where feasible.</li>
<li><strong>Output Safeguards</strong>: Implement measures to minimize the risk of revealing personal data through model outputs.</li>
<li><strong>Governance and Audits</strong>: Ensure strong governance with audits to verify privacy measures’ effectiveness.</li>
<li><strong>Testing for Resistance</strong>: Conduct adversarial testing to evaluate the model’s resilience against attempts to extract personal data, such as membership inference or model inversion.</li>
<li><strong>Comprehensive Documentation</strong>: Maintain GDPR-compliant documentation, including Data Protection Impact Assessments (<strong>DPIAs</strong>) and advice from Data Protection Officers (<strong>DPOs</strong>).</li>
</ul>
<p><strong>Key Takeaways<br />
</strong>In light of these latest developments, businesses developing or deploying AI systems or models should:</p>
<ul>
<li><strong>Review Prohibited AI Use Cases</strong>: Conduct an audit of existing and proposed AI systems and projects to ensure compliance with Article 5 prohibitions.</li>
<li><strong>Implement AI Literacy Training</strong>: Develop and roll out comprehensive training programs to meet Article 4 requirements.</li>
<li><strong>Prepare for Future Obligations</strong>: Stay ahead of upcoming deadlines, including obligations on GPAI models and high-risk AI systems.</li>
<li><strong>Engage with Providers and Developers</strong>: Deployers of AI systems should consider raising questions with providers or developers about the steps they are taking to ensure AI models are anonymized, in light of the EDPB opinion. Additionally, it is important to ensure AI procurement terms address the risks posed by the Act and existing legislation such as the GDPR, particularly where models, systems, or outputs will be used within the EU.</li>
<li><strong>Monitor Regulatory Updates</strong>: Follow developments related to the General-Purpose AI Code of Practice and guidance from EU regulators such as the EDPB.</li>
</ul>
<p>The Act represents a seismic shift in the regulation of AI systems in Europe, with wide-reaching implications for providers and deployers alike. With the first obligations taking effect in February 2025, organizations should act now to ensure compliance and mitigate risks.</p>
<hr />
<blockquote class="wp-embedded-content" data-secret="d8Khejd7A2"><p><a href="https://www.internetandtechnologylaw.com/eu-accessibility-act-impact-business/">The EU Accessibility Act – Impact on Those Doing Business in the EU</a></p></blockquote>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/eu-ai-act-first-requirements-february-2025/">EU AI Act: First Set of Requirements Go into Effect February 2, 2025</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">70</post-id>	</item>
		<item>
		<title>GDPR Enforcement: Lessons from Recent Data Privacy Penalties</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/gdpr-enforcement-data-privacy-penalties/</link>
		
		<dc:creator><![CDATA[Steven Farmer, Scott Morton and Mark Booth]]></dc:creator>
		<pubDate>Fri, 11 Oct 2024 20:00:03 +0000</pubDate>
				<category><![CDATA[CNIL]]></category>
		<category><![CDATA[Consumer Trust]]></category>
		<category><![CDATA[France]]></category>
		<category><![CDATA[Privacy]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=68</guid>

					<description><![CDATA[<p>Recent decisions by the French data protection authority (CNIL) have highlighted the importance of GDPR compliance, particularly in the areas of data retention, consent for processing sensitive personal data, and marketing practices. On October 10, 2024, CNIL fined two companies offering remote clairvoyance services a total of €400,000—€250,000 for Cosmospace and €150,000 for Telemaque—for breaches [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/gdpr-enforcement-data-privacy-penalties/">GDPR Enforcement: Lessons from Recent Data Privacy Penalties</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Recent decisions by the French data protection authority (CNIL) have highlighted the importance of GDPR compliance, particularly in the areas of data retention, consent for processing sensitive personal data, and marketing practices. On October 10, 2024, <a href="https://www.cnil.fr/en/online-clairvoyance-cosmospace-and-telemaque-fined-eu250000-and-eu150000">CNIL fined two companies</a> offering remote clairvoyance services a total of €400,000—€250,000 for Cosmospace and €150,000 for Telemaque—for breaches including excessive data retention, failure to obtain explicit consent for sensitive data processing, and non-compliance with marketing consent rules. These decisions serve as a reminder for businesses to evaluate their data protection policies to avoid costly penalties and maintain consumer trust.</p>
<p><span id="more-68"></span></p>
<p><strong>Key Takeaways from the Decisions</strong></p>
<ul>
<li><strong>Excessive data retention</strong>: Both companies stored customer data for six years after the end of the commercial relationship, mainly for marketing purposes. CNIL found this retention period to be excessive, recommending a maximum retention period of three years. Telemaque, in particular, failed to implement any restrictions on access to the data, keeping it in active databases without sorting or limiting access over this six-year period.</li>
<li><strong>Processing sensitive data without explicit consent</strong>: Both companies processed sensitive personal data—such as sexual orientation and health information—during clairvoyance consultations without obtaining explicit consent. CNIL emphasized that merely using the service does not meet the GDPR’s requirement for explicit consent when processing special categories of data.</li>
<li><strong>Unlawful marketing communications</strong>: The companies sent marketing communications via email and SMS without obtaining valid consent. The forms used to collect customer data did not clearly inform users that their data could be shared for marketing purposes by both Cosmospace and Telemaque, resulting in a breach of consent requirements.</li>
<li><strong>Recording of calls</strong>: Cosmospace recorded all customer calls for several purposes: (i) to monitor service quality and for employee training, (ii) to demonstrate that contracts had been concluded and properly executed, (iii) to respond to legal requests, and (iv) for safeguarding purposes. However, the CNIL found that these justifications did not warrant the systematic recording of all calls. Instead, CNIL recommended that a sample of calls could be recorded for quality monitoring and training, and only the portions of calls directly relevant to contract conclusions should be retained. Additionally, recordings could be manually triggered by employees in situations involving distress or safeguarding concerns. CNIL found that recording all calls in this manner breached the GDPR&#8217;s data minimization principle.</li>
</ul>
<p><strong>Next steps for consumer-facing businesses</strong><br />
These decisions are particularly relevant for any consumer-facing businesses operating in the EU or UK, especially those with operations in France. It’s a timely reminder to review current practices regarding:</p>
<ul>
<li><strong>Marketing consents</strong>: Ensure that proper consent is obtained before sending marketing communications, and that consumers are fully informed about how their data will be used.</li>
<li><strong>Processing of sensitive data</strong>: Review consent mechanisms for any data collection activities involving special categories of personal data, such as health or sexual orientation, to ensure compliance with GDPR.</li>
<li><strong>Data retention</strong>: Assess your data retention policies, particularly for marketing purposes, to ensure personal data is not held longer than necessary and that appropriate restrictions on access and use are in place.</li>
</ul>
<p>This is an opportunity to reassess compliance frameworks, particularly in light of guidance from EU supervisory authorities.</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/gdpr-enforcement-data-privacy-penalties/">GDPR Enforcement: Lessons from Recent Data Privacy Penalties</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">68</post-id>	</item>
		<item>
		<title>CPPA Continues Rulemaking on AI, the New Delete Request and Opt-Out Platform (DROP), Cybersecurity Audits and Privacy Risk Assessments</title>
		<link>https://consumer-protection-dispatch.pillsburylaw.com/cppa-rulemaking-ai-delete-request-opt-out-platform-drop-cybersecurity-audits-privacy-risk-assessments/</link>
		
		<dc:creator><![CDATA[Shruti Bhutani Arora and Christine Mastromonaco]]></dc:creator>
		<pubDate>Mon, 30 Sep 2024 19:49:57 +0000</pubDate>
				<category><![CDATA[California]]></category>
		<category><![CDATA[Privacy]]></category>
		<category><![CDATA[California Consumer Privacy Act (CCPA)]]></category>
		<category><![CDATA[California Privacy Protection Agency (CPPA)]]></category>
		<category><![CDATA[California Privacy Rights Act (CPRA)]]></category>
		<guid isPermaLink="false">https://consumer-protection-dispatch.pillsburylaw.com/?p=61</guid>

					<description><![CDATA[<p>The California Privacy Protection Agency (CPPA) has released the agenda for its upcoming public board meeting on October 4, 2024. This meeting is set to cover important regulatory and enforcement matters related to the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA). Here’s a breakdown of the substantive agenda: [&#8230;]</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/cppa-rulemaking-ai-delete-request-opt-out-platform-drop-cybersecurity-audits-privacy-risk-assessments/">CPPA Continues Rulemaking on AI, the New Delete Request and Opt-Out Platform (DROP), Cybersecurity Audits and Privacy Risk Assessments</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The California Privacy Protection Agency (CPPA) has released the agenda for its upcoming public board meeting on October 4, 2024. This meeting is set to cover important regulatory and enforcement matters related to the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA).</p>
<p>Here’s a breakdown of the substantive agenda:</p>
<p><span id="more-61"></span></p>
<p><strong>Discussion and Possible Action on Proposed Regulations, Sections 7600-7605, Implementing Data Broker Registration Requirements, Including Possible Adoption or Modification of the Text<br />
</strong>In 2023, Senate Bill 362 (Chapter 709, Statutes of 2023), known as the Delete Act, was signed into law, which transferred the responsibility for the new data broker registry from the Attorney General to the California Privacy Protection Agency, beginning January 1, 2024.</p>
<p>According to the memo provided to the CPPA board, the Agency administered the registry for the first time this year, and over 500 data brokers have registered. The proposed regulations largely memorialize the Agency’s existing practices related to the data broker registry, but also clarify key terms, concepts and procedures.</p>
<p>The memo also states that the accessible deletion mechanism will be addressed in a separate rulemaking package.</p>
<p>Further, the memo recaps that during the May 10, 2024, board meeting, the California Privacy Protection Agency Board voted to move the proposed regulations to formal rulemaking. Since that time, the Agency has completed the public comment period—which ran from July 5 to August 20—and held a hearing regarding the proposed regulations on the final day.</p>
<p>The memo states that the Agency received three oral and 18 written comment submissions from a total of 24 distinct entities, including data brokers, consumers, public interest groups, think tanks, law firms, political organizations, and private sector companies. These submissions resulted in 138 unique comments for the Agency to respond to, and the responses to all the comments are in the <a href="https://cppa.ca.gov/meetings/materials/20241004_item4_draft_final_statement_of_reasons">draft Final Statement of Reasons (FSOR)</a>.</p>
<p><strong>Board Update Regarding Development and Implementation of the Delete Request and Opt-Out Platform (DROP) and Associated Fees, Pursuant to SB 362.<br />
</strong>The CPPA will present the key takeaways from public engagement regarding DROP, which are <a href="https://cppa.ca.gov/meetings/materials/20241004_item5_drop_ppt">included in the meeting materials</a> and are listed below.</p>
<ul>
<li>Key identifiers used to identify a consumer record: full name, email, phone, DOB, Mobile Advertising ID (MAID);</li>
<li>API preferred over SFTP or email;</li>
<li>Dedicated help center;</li>
<li>Broad range of identity verification practices, including no verification, email only and government identification, among others;</li>
<li>Maintain a suppression list.</li>
</ul>
<p>The DROP Privacy Overview, in accordance with Cal. Civ. Code § 1798.99.86(b)(2), includes:</p>
<ul>
<li>Separate deletion requests into four lists by identifiers:
<ul>
<li>Phone</li>
<li>Email</li>
<li>Full name, date of birth, address</li>
<li>Pseudonymous IDs (such as MAID)</li>
</ul>
</li>
<li>One-way hash of all data</li>
<li>Data minimization practices</li>
</ul>
<p>Next steps for the DROP system:</p>
<ul>
<li>Finalize Stage 2 artifacts</li>
<li>Procurement: select vendor</li>
<li>System construction</li>
<li>DROP regulations</li>
<li>System testing</li>
<li>System launch (2026)</li>
<li>Public awareness campaign</li>
<li>User education</li>
</ul>
<p><strong>Discussion and Possible Action to Advance Draft Regulations to Formal Rulemaking for Updates to Existing Regulations, Insurance, Cybersecurity Audits, Risk Assessments, and Automated Decisionmaking Technology<br />
</strong>The Agency has drafted proposed regulations that do the following: (1) update existing CCPA regulations; (2) clarify when insurance companies must comply with the CCPA; (3) operationalize requirements to complete an annual cybersecurity audit; (4) operationalize requirements to conduct a risk assessment; and (5) operationalize consumers’ rights to access and to opt out of businesses’ use of automated decisionmaking technology (ADMT).</p>
<p>As provided in the memo to the CPPA board by the CPPA staff, these proposed regulations are accompanied by an <a href="https://cppa.ca.gov/meetings/materials/20241004_item6_draft_initial_statement_of_reasons">Initial Statement of Reasons (ISOR)</a> that describes the purpose and necessity of the proposed regulations. The proposed regulations and ISOR have been modified since the July 2024 board meeting to do the following: (1) remove proposed section 7005, which addressed the consumer price index increase, because this was addressed via legislation (AB 3286, Statutes of 2024); (2) detail the proposed regulations’ benefits; (3) incorporate the Standardized Regulatory Impact Assessment and address statewide economic impacts; (4) address regulatory alternatives; (5) list the materials relied upon; and (6) make nonsubstantial grammatical changes, such as updating cross-references and updating citations to the CCPA.</p>
<p>As mentioned in the memo, the CPPA staff will recommend the Board advance the proposed regulations to formal rulemaking, which will provide the public with a formal opportunity to provide written and oral comments to the Agency on the proposed regulations. After receiving public comments, the Board will have additional opportunities to discuss, and potentially update, the proposed regulations.</p>
<p><strong>Future Rulemaking Plans<br />
</strong>We also expect the CPPA to address its plans for future rulemaking efforts regarding cybersecurity audits, risk assessments and automated decision-making technology.</p>
<p>You can review the full agenda <a href="https://cppa.ca.gov/meetings/agendas/20241004.pdf">here</a>. Additional materials for the meeting, including the CPPA’s Initial Statement of Reasons for Data Broker Regulations, can be accessed <a href="https://cppa.ca.gov/meetings/materials/20241004.html">here</a>.</p>
<p>Stay tuned for further updates following the meeting.</p>
<p>The post <a href="https://consumer-protection-dispatch.pillsburylaw.com/cppa-rulemaking-ai-delete-request-opt-out-platform-drop-cybersecurity-audits-privacy-risk-assessments/">CPPA Continues Rulemaking on AI, the New Delete Request and Opt-Out Platform (DROP), Cybersecurity Audits and Privacy Risk Assessments</a> appeared first on <a href="https://consumer-protection-dispatch.pillsburylaw.com">Consumer Protection Dispatch</a>.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">61</post-id>	</item>
	</channel>
</rss>

<!--
Performance optimized by W3 Total Cache. Learn more: https://www.boldgrid.com/w3-total-cache/?utm_source=w3tc&utm_medium=footer_comment&utm_campaign=free_plugin

Page Caching using Disk: Enhanced (Requested URI is rejected) 

Served from: consumer-protection-dispatch.pillsburylaw.com @ 2026-03-17 13:18:16 by W3 Total Cache
-->