Washington advances AI bills: limits on chatbots, schools
3 days ago • ai-governance
Washington lawmakers opened the 2026 session with a package of AI bills that would impose new safety and transparency mandates on “companion” chatbots and restrict certain AI uses with minors. Gov. Bob Ferguson on Jan. 9 requested companion-chatbot legislation (House Bill 2225) that, per bill text, requires operators to detect and respond to self-harm signals and refer users to crisis resources such as the 988 system. Senate proposals (including SB 5870 and SB 5984) would create civil liability for AI-linked suicide and codify additional chatbot limits. (Sources: Axios, Governor’s office, bill text.)
As drafted, HB 2225 (prefiled Dec. 31, 2025) mandates a protocol to identify suicidal ideation, deliver automated or human-mediated crisis referrals, prevent the generation of content that encourages self-harm, and publicly disclose both the protocol and the number of crisis referrals issued in the prior year. The bill also requires extra protections for minors (limits on sexually explicit interactions and bans on “manipulative engagement techniques”) and excludes general-purpose models unless they are offered as companion chatbots. A public hearing on HB 2225 was scheduled for Jan. 14, 2026. (Sources: bill text, LegiScan, Governor’s office.)
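The bill prescribes outcomes, not a specific implementation. Purely as an illustration, the sketch below shows one way an operator might wire together self-harm detection, a 988 referral response, and the referral tally needed for the annual public disclosure; every name, pattern, and structure in it is a hypothetical assumption, not drawn from the bill text.

```python
# Hypothetical sketch of an HB 2225-style crisis-referral hook.
# The bill mandates outcomes (detect, refer, count, disclose), not this
# design; the patterns and names here are illustrative placeholders only.
import re
from dataclasses import dataclass, field
from datetime import date

# Placeholder signals; a production system would use a trained classifier
# plus human escalation, not a short regex list.
CRISIS_PATTERNS = [
    re.compile(r"\b(kill myself|end my life|suicide)\b", re.IGNORECASE),
]

REFERRAL_TEXT = (
    "If you are thinking about harming yourself, you can call or text 988 "
    "(Suicide & Crisis Lifeline) to reach a trained counselor right now."
)

@dataclass
class ReferralLog:
    """Tallies referrals so the operator can publish the prior year's total."""
    counts: dict[int, int] = field(default_factory=dict)

    def record(self) -> None:
        year = date.today().year
        self.counts[year] = self.counts.get(year, 0) + 1

    def prior_year_total(self, current_year: int) -> int:
        return self.counts.get(current_year - 1, 0)

def screen_message(user_message: str, log: ReferralLog) -> str | None:
    """Return a crisis referral (and count it) if the message matches a signal."""
    if any(p.search(user_message) for p in CRISIS_PATTERNS):
        log.record()
        return REFERRAL_TEXT
    return None
```

The referral counter mirrors the disclosure requirement; the other mandated behaviors (human escalation, blocking content that encourages self-harm) would sit behind the same hook in a real system.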
If enacted, the measures would introduce new compliance requirements and consumer-protection exposure for AI vendors, affect procurement by school districts, and create a path for lawsuits over forged digital likenesses. Expect amendments during committee hearings before the session adjourns March 12, 2026.
Why It Matters
- Chatbot operators would need to implement and document crisis-detection protocols and referrals (e.g., a link to the 988 line) or face enforcement under Washington’s consumer-protection law.
- New civil-liability language (SB 5870) would raise legal risk for platform vendors and model providers when chatbots cause severe harm.
- Ed-tech vendors and school IT teams should audit classroom AI uses: the bills would limit surveillance and discipline uses and may bar certain automated decisions affecting students.
- Product teams should remove or avoid “manipulative engagement” features and add age gating, content filters, and logging to meet the disclosure requirements (see the sketch after this list).
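To make the last point concrete, here is a small, purely hypothetical sketch of session-level guardrails (age gating, explicit-content limits, an engagement-feature kill switch, referral logging). The field names are assumptions for illustration; the bills specify goals rather than any product design.

```python
# Hypothetical session guardrails for a companion-chatbot product.
# The bills name goals (protect minors, ban "manipulative engagement
# techniques", disclose referral counts); this structure is illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyPolicy:
    minimum_adult_age: int = 18          # gate sexually explicit interactions
    block_self_harm_content: bool = True
    disable_streak_rewards: bool = True  # one example of an engagement feature to drop
    log_crisis_referrals: bool = True    # feeds the annual public disclosure

def session_flags(user_age: int | None, policy: SafetyPolicy) -> dict[str, bool]:
    """Default to the strictest settings when the user's age is unverified."""
    is_minor = user_age is None or user_age < policy.minimum_adult_age
    return {
        "allow_explicit_content": not is_minor,
        "engagement_rewards_enabled": not policy.disable_streak_rewards,
        "block_self_harm_content": policy.block_self_harm_content,
        "log_crisis_referrals": policy.log_crisis_referrals,
    }
```

Defaulting to the strictest mode when age is unknown is a design choice assumed here, not a requirement stated in the bill summaries.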
Trust & Verification
Sources (3)
- Axios Seattle (Tier-1), Jan 9, 2026
- The Seattle Times (Other), Jan 10, 2026
- Transparency Coalition (Other), Jan 9, 2026
Fact Checks (4)
Washington lawmakers are weighing a package of AI bills targeting companion chatbots, minors, schools and digital likenesses. (VERIFIED)
House Bill 2225 requires operators to detect suicidal ideation, refer users to crisis resources (e.g., 988), prevent self-harm content, and publicly disclose protocols and referral counts. (VERIFIED)
Senate bill SB 5870 would establish civil liability for suicide linked to the use of AI systems. (VERIFIED)
Proposals would limit school uses of AI for student discipline or surveillance, and a separate “digital likeness” bill would enable suits over deepfakes. (VERIFIED)
Quality Metrics
Confidence: 100%
Readability: 28/100