This article analyzes California's Protecting Our Kids from Social Media Addiction Act (SB 976): its provisions, its intent, and the legal challenges it now faces.
What SB 976 Covers
Definition of “Addictive Feed”: SB 976 defines an “addictive feed” as any sequence of user-generated media (text, images, audio, or video) that is recommended or prioritized to a user based on past behavior, device data, or preferences—unless it falls within specified exceptions like private messages, manual selections, or predictable sequences.
Parental Consent & Age Assurance
Starting January 1, 2025, platforms may not provide an addictive feed to a user unless they:
- Do not know the user is a minor (under 18), or
- Have obtained verifiable parental consent.
By January 1, 2027, platforms must reasonably determine a user’s age before providing addictive feeds, and obtain parental consent if the user is a minor.
Notification & Usage Restrictions
Platforms are prohibited from sending push notifications to minors:
- Between midnight and 6 a.m. local time, any day of the year.
- During school hours—8 a.m. to 3 p.m. on weekdays between September and May—unless parental consent is given.
Parental Control Tools
SB 976 mandates that platforms give verified parents tools to:
- Limit daily screen time (default: 1 hour/day),
- Set restrictive privacy defaults (e.g., private accounts, no visible like counts),
- Opt minors out of algorithmic feeds (the default setting under the law).
These tools aim to give families control over how and when children engage with social media.
Transparency Reporting
Platforms must publicly disclose annually:
- The number of minor users,
- How many have parental consent,
- Which parental controls are enabled/disabled.
Why was it enacted?
Lawmakers noted alarming trends:
- Increased youth mental health issues, including depression and self-harm.
- Research linking excessive social media usage with sleep disruption and addictive behaviors.
- Platforms designed feeds and notifications to maximize engagement, often at the expense of minors’ wellbeing.
Senator Nancy Skinner emphasized that SB 976 aims to legislate product design, not restrict speech: “When social media companies won’t act, it’s our responsibility to protect our kids.”
Legal Challenges & First Amendment Debate
NetChoice v. Bonta
Tech trade group NetChoice, representing major platforms (Amazon, Google, Meta, Snap, X, Lyft), sued to block SB 976, claiming it infringes on First Amendment rights—both platforms’ editorial choices and minors’ right to speech.
- December 31, 2024: District Judge Edward Davila issued a partial injunction, blocking notification and disclosure mandates, but upheld feed and age provisions. He opined that personalized feeds may not be protected speech and that age assurance can be implemented constitutionally.
- January 2, 2025: Judge Davila temporarily stayed enforcement of the entire law pending NetChoice's emergency appeal.
- January 28, 2025: The Ninth Circuit granted a comprehensive stay, halting enforcement until it hears the case in April 2025.
Legal Arguments
- Platforms argue SB 976 burdens their editorial discretion and minors’ speech rights, particularly around “speech curation” and “anonymous speech.”
- EFF and others contend that algorithmic personalization is expressive and that age verification systems risk privacy harms and chilling effects on anonymous speech.
- State defenders maintain the law is content-neutral because it targets design features (feeds, notifications) and doesn’t censor specific messages.
What’s the constitutional path ahead?
Legal scrutiny will focus on:
- Whether personalized feeds count as protected speech (expressive editorial choices).
- Which level of scrutiny applies—strict vs. intermediate.
- If age verification and feed restrictions are narrowly tailored to serve a substantial interest (protecting children) without unduly burdening speech.
At the Ninth Circuit in April 2025, judges reportedly questioned NetChoice’s standing and weighed social media’s similarity to addictive products like tobacco.
Context & Impacts: A National Trend
SB 976 is part of a wave of state-level social media regulations:
- New York’s SAFE for Kids Act and age-verification laws in Utah and Texas all prioritize protection over content control.
- Federal bills like KOSA and the Kids Off Social Media Act reflect broader interest, though SB 976 remains one of the most design-focused and far-reaching measures.
Industry Effects
If upheld, SB 976 would compel platforms to:
- Build robust age assurance systems (e.g., ID checks, AI estimation),
- Redesign feeds and notifications for minors,
- Add parental portals and transparency dashboards.
Such shifts could affect user experience for all users, raise privacy concerns, and carry significant development costs.
Summary
- SB 976 targets addictive platform design, not specific speech—it represents a novel regulatory approach.
- It’s scheduled to roll out in stages, starting January 2025 and ramping up in 2027.
- Legal challenges are ongoing: core provisions remain blocked until at least April 2025, pending the Ninth Circuit ruling.
- Broader implications: California’s experiment may serve as a model (or cautionary tale) for other states and federal regulation efforts.
California’s SB 976 is a groundbreaking attempt to regulate social media design features that harm children, but it raises provocative constitutional questions. The law phases in over time (parental consent in 2025, age assurance in 2027) but is currently on hold pending appeal. Its outcome could drive major shifts in platform architecture and redefine how the law treats algorithmic speech and youth protection. Please don’t hesitate to contact our law firm to discuss your questions.