California Senator Introduces First-Ever Ban on AI Chatbot Toys for Minors

By: Smriti Vijay
Edited by: Hannah Becker and Ashley Puente-Peña

On January 2, 2026, California State Senator Steve Padilla, a Democrat representing the state’s 18th Senate District, introduced Senate Bill 867. [1] The bill proposes a four-year suspension on the manufacture and sale of toys with AI chatbot features for children under 18. Padilla expressed concern about the unknown psychological impacts of AI on children and cited the limited research on these toys as his reason for drafting the bill. The moratorium would allow time to develop safety regulations to protect children from dangerous interactions with AI. 

Previously, Padilla authored Senate Bill 243, which requires chatbot operators to implement reasonable safeguards governing interactions between minors and AI chatbots. [2] These include monitoring chats for signs of suicidal ideation, blocking sexually explicit content, and reminding users that chatbot responses are artificially generated. The bill also grants families the right to pursue legal action against AI chatbot developers for noncompliance and negligence. In 2025, California Governor Gavin Newsom signed SB 243 into law, making it the first legislation of its kind in the nation.

However, some groups were disappointed with SB 243 and supported a more restrictive bill, AB 1064, nicknamed the “LEAD for Kids Act.” [3] AB 1064 aimed to ban companies from providing access to any sort of AI chatbot to minors unless the tool was confirmed to be entirely safe. However, Newsom opted to sign SB 243 rather than completely ban the technology. 

The rise in bills regulating AI chatbots follows a series of cases in which minors’ interactions with AI turned inappropriate and, in some instances, ended in suicide. In 2024, 14-year-old Sewell Setzer from Florida ended his life after forming a romantic relationship with a Character AI chatbot modeled after the Game of Thrones character Daenerys Targaryen. [4] He became dependent on the chatbot, using it to discuss his personal struggles. Ultimately, the chatbot encouraged Setzer to end his life so that they could be together. Similarly, 13-year-old Juliana Peralta took her life after bonding with a Character AI chatbot with which she had sexually explicit conversations and to which she confessed her suicidal ideation. [5] 

Both Setzer’s and Peralta’s parents were unaware of these conversations until it was too late. These families are two of the six that filed lawsuits against Character AI and Google, which were ultimately settled in January 2026. The families claimed that these AI chatbots are designed to be addictive to children and are inclined to communicate inappropriately with minors. For instance, anyone can log in to Character AI with a fake age, opening the floodgates to all sorts of conversations. Researchers at Parents Together, a nonprofit organization focused on family issues, studied Character AI while posing as teens and children and found that chatbots mentioned some type of harm about once every five minutes. Inappropriate conversations included the chatbot encouraging a researcher posing as a child to be their “most evil self and [their] most true self,” teaching them how to use cocaine, and starting a romantic relationship after telling the user to hide it from their parents. [5] Google and Character AI are only two of the many companies facing backlash: Meta’s AI chatbot was reported to conduct sexual conversations with minor users, generate inappropriate sexual images, communicate incorrect medical information, and perpetuate racial stereotypes. [6]

The PIRG (Public Interest Research Group) Education Fund, a group that advocates on consumer, environmental, and public health issues, studied toys for young children with AI chatbot capabilities. [7] The study found that several AI toys could engage in conversations that included mentions of harm. For example, the toy bear Kumma, created by Singaporean company FoloToy, gave advice on sexual topics and on how to initiate physical violence. Despite these findings, many companies continue to roll out products marketed to children that feature AI capabilities. Mattel, Inc., the toy company behind Barbie and one of the world’s largest toy producers, recently announced a collaboration with OpenAI, the creator of ChatGPT, to create AI-powered products. [8]

Insufficient research has been conducted on the long-term implications and safety of conversing with toy chatbots, and the existing research reflects poorly on them. A Brown University study found that AI chatbots repeatedly violate mental health ethics standards when offering mental health advice. [9] Chatbots tend to encourage users to think negatively about themselves and create a false sense of connection and empathy. There are also frequent reports of chatbots exhibiting gender, cultural, and religious biases and failing to refer users to crisis resources when topics veer toward serious issues like suicidal ideation. These chatbots are built to capitalize on human attachment patterns and exploit conversations to increase engagement.

Other countries have also started cracking down on AI chatbot companies and instituting regulations to protect minors. The Australian eSafety Commissioner, Julie Inman Grant, issued legal notices to four AI chatbot companies, ordering them to explain how they protect children from inappropriate conversations that chatbots could initiate. [10] Australia also requires social media companies to deactivate accounts of users under 16 or face fines of up to 49.5 million Australian dollars. The European Union has also adopted an AI Act that requires AI systems to undergo risk-management assessments and have a child-safe design. [11] Furthermore, Italy temporarily banned ChatGPT in 2023 due to age-verification concerns, and the United Kingdom issued an enforcement notice to Snapchat’s My AI for failing to assess its privacy risks for teenagers. [12, 13] 

Back in the U.S., California’s SB 867 and SB 243 are not the only pieces of legislation created to combat AI-based harm against minors. Most bills seek to prohibit children from using chatbots without parental permission, require chatbots to repeatedly state that they are AI and not human, and require AI companies to implement content moderation and safety measures. There are also bills that protect data privacy and rights and hold AI companies accountable for harm caused by their products, like Oklahoma’s SB 2085 and North Carolina’s SB 624.

According to researchers at OpenAI and MIT, approximately 0.15 percent of users develop an emotional dependency through their interactions with ChatGPT. [14] This translates to nearly 490,000 of its weekly users. The importance of bills like SB 867 becomes clearer by the day. Since introducing SB 867, Senator Padilla has also introduced SB 903 in the California Legislature on January 21, 2026. SB 903 would ensure that mental health services are provided by trained professionals rather than AI chatbots. [15] 

As the capabilities of AI chatbots increase and the lines between large language models and humans blur, more regulations need to be enacted to ensure that this technology does not grow at the expense of human life or development. Education on the risks of this technology, including its potential to give harmful or incorrect advice, should be delivered in tandem. Lawmakers should also work toward a consistent national law that protects minors in all states. While AI chatbots will continue to be introduced across a variety of products, technology companies and lawmakers must collaborate to ensure that chatbots can continue to deliver benefits without causing harm.

Notes:

1. California State Senate District 18, “Author of Nation’s First Chatbot Protections, Proposes First-in-Nation Moratorium on AI Chatbots in Toys,” Senator Steve Padilla, January 2, 2026, https://sd18.senate.ca.gov/news/author-nations-first-chatbot-protections-proposes-first-nation-moratorium-ai-chatbots-toys.

2. Stuart D. Levi, Michael W. McTigue Jr., and Meredith C. Slawe, “New California ‘Companion Chatbot’ Law Imposes Disclosure, Safety Protocol and Annual Reporting Requirements,” Skadden, Arps, Slate, Meagher & Flom LLP and Affiliates, October 17, 2025, https://www.skadden.com/insights/publications/2025/10/new-california-companion-chatbot-law.

3. Ben Sperry, “California’s LEAD for Kids Act Is Destined to Fail First Amendment Scrutiny,” Truth on the Market, September 23, 2025, https://truthonthemarket.com/2025/09/23/californias-lead-for-kids-act-is-destined-to-fail-first-amendment-scrutiny/.

4. Sam Gillette, “Teen’s Mom Settles with Google and AI Company After Claiming His Suicide Was Fueled by Love of Chatbot,” People, January 13, 2026, https://people.com/teens-mom-settles-with-google-and-ai-company-after-claiming-his-suicide-was-fueled-by-love-of-chatbot-11881597.

5. Sharyn Alfonsi et al., “A mom thought her daughter was texting her friends before her suicide. It was an AI chatbot,” CBS News, January 8, 2026, https://www.cbsnews.com/news/parents-allege-harmful-character-ai-chatbot-content-60-minutes/.

6. Jeff Horwitz, “Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info,” Reuters, August 14, 2025, https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/.

7. Rebecca Bellan, “California Lawmaker proposes a four-year ban on AI chatbots in kids’ toys,” TechCrunch, January 6, 2026, https://techcrunch.com/2026/01/06/california-lawmaker-proposes-a-four-year-ban-on-ai-chatbots-in-kids-toys/.

8. Andrew McStay, “Mattel and OpenAI have partnered up: here’s why parents should be concerned about AI in toys,” The Conversation, June 25, 2025, https://theconversation.com/mattel-and-openai-have-partnered-up-heres-why-parents-should-be-concerned-about-ai-in-toys-259500.

9. Brown University, “New study: AI chatbots systematically violate mental health ethics standards,” News from Brown, October 21, 2025, https://www.brown.edu/news/2025-10-21/ai-mental-health-ethics.

10. Byron Kaye, “Australia tells AI chatbot companies to detail child protection steps,” Reuters, October 22, 2025, https://www.reuters.com/world/asia-pacific/australia-tells-ai-chatbot-companies-detail-child-protection-steps-2025-10-22/.

11. European Union, “AI Act,” January 27, 2026, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

12. Shiona McCallum, “ChatGPT banned in Italy over privacy concerns,” BBC, April 1, 2023, https://www.bbc.com/news/technology-65139406.

13. Mark Sweney, “UK data watchdog issues Snapchat enforcement notice over AI chatbot,” The Guardian, October 6, 2023, https://www.theguardian.com/technology/2023/oct/06/snapchat-enforcement-notice-my-ai-chatbot-uk-data-watchdog.

14. Mengying (Cathy) Fang, “Early methods for studying affective use and emotional wellbeing in ChatGPT: An OpenAI and MIT Media Lab Research collaboration,” MIT Media Lab, March 21, 2025, https://www.media.mit.edu/posts/openai-mit-research-collaboration-affective-use-and-emotional-wellbeing-in-ChatGPT/.

15. California State Senate District 18, “Senator Padilla Introduces Protections from Dangerous AI Therapy Products,” Senator Steve Padilla, January 21, 2026, https://sd18.senate.ca.gov/news/senator-padilla-introduces-protections-dangerous-ai-therapy-products.

Bibliography:

Alfonsi, Sharyn, et al. “A Mom Thought Her Daughter Was Texting Her Friends Before Her Suicide. It Was an AI Chatbot.” CBS News. January 8, 2026. https://www.cbsnews.com/news/parents-allege-harmful-character-ai-chatbot-content-60-minutes/.

Bellan, Rebecca. “California Lawmaker Proposes a Four-Year Ban on AI Chatbots in Kids’ Toys.” TechCrunch. January 6, 2026. https://techcrunch.com/2026/01/06/california-lawmaker-proposes-a-four-year-ban-on-ai-chatbots-in-kids-toys/.

Brown University. “New Study: AI Chatbots Systematically Violate Mental Health Ethics Standards.” Brown University News. October 21, 2025. https://www.brown.edu/news/2025-10-21/ai-mental-health-ethics.

California State Senate District 18. “Author of Nation’s First Chatbot Protections, Proposes First-in-Nation Moratorium on AI Chatbots in Toys.” January 2, 2026. https://sd18.senate.ca.gov/news/author-nations-first-chatbot-protections-proposes-first-nation-moratorium-ai-chatbots-toys.

California State Senate District 18. “Senator Padilla Introduces Protections from Dangerous AI Therapy Products.” January 21, 2026. https://sd18.senate.ca.gov/news/senator-padilla-introduces-protections-dangerous-ai-therapy-products.

European Union. “AI Act.” January 27, 2026. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

Fang, Mengying (Cathy). “Early Methods for Studying Affective Use and Emotional Wellbeing in ChatGPT: An OpenAI and MIT Media Lab Research Collaboration.” MIT Media Lab. March 21, 2025. https://www.media.mit.edu/posts/openai-mit-research-collaboration-affective-use-and-emotional-wellbeing-in-ChatGPT/.

Gillette, Sam. “Teen’s Mom Settles with Google and AI Company After Claiming His Suicide Was Fueled by Love of Chatbot.” People. January 13, 2026. https://people.com/teens-mom-settles-with-google-and-ai-company-after-claiming-his-suicide-was-fueled-by-love-of-chatbot-11881597.

Horwitz, Jeff. “Meta’s AI Rules Have Let Bots Hold ‘Sensual’ Chats with Kids, Offer False Medical Info.” Reuters. August 14, 2025. https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/.

Kaye, Byron. “Australia Tells AI Chatbot Companies to Detail Child Protection Steps.” Reuters. October 22, 2025. https://www.reuters.com/world/asia-pacific/australia-tells-ai-chatbot-companies-detail-child-protection-steps-2025-10-22/.

Levi, Stuart D., Michael W. McTigue Jr., and Meredith C. Slawe. “New California ‘Companion Chatbot’ Law Imposes Disclosure, Safety Protocol and Annual Reporting Requirements.” Skadden, Arps, Slate, Meagher & Flom LLP and Affiliates. October 17, 2025. https://www.skadden.com/insights/publications/2025/10/new-california-companion-chatbot-law.

McCallum, Shiona. “ChatGPT Banned in Italy over Privacy Concerns.” BBC. April 1, 2023. https://www.bbc.com/news/technology-65139406.

McStay, Andrew. “Mattel and OpenAI Have Partnered Up: Here’s Why Parents Should Be Concerned about AI in Toys.” The Conversation. June 25, 2025. https://theconversation.com/mattel-and-openai-have-partnered-up-heres-why-parents-should-be-concerned-about-ai-in-toys-259500.

Sperry, Ben. “California’s LEAD for Kids Act Is Destined to Fail First Amendment Scrutiny.” Truth on the Market. September 23, 2025. https://truthonthemarket.com/2025/09/23/californias-lead-for-kids-act-is-destined-to-fail-first-amendment-scrutiny/.

Sweney, Mark. “UK Data Watchdog Issues Snapchat Enforcement Notice over AI Chatbot.” The Guardian. October 6, 2023. https://www.theguardian.com/technology/2023/oct/06/snapchat-enforcement-notice-my-ai-chatbot-uk-data-watchdog.
