UK Law Society urges principled AI rules: human-rights focus
24 days ago • ai-governance
The Law Society submitted written and oral evidence to the UK Parliament's Joint Committee on Human Rights and published a Westminster update on 22 December 2025, calling for a principled approach to AI regulation that prioritises human-rights safeguards, clearer definitions, and steps to prevent regulatory fragmentation. (Law Society; UK Parliament)
The Society warned that inconsistent domestic rules and vague definitions create legal uncertainty for lawyers and their clients, and could amplify harms in high-risk use cases. It urged the government to require human-rights impact assessments, align domestic definitions where feasible, and draw practical lessons from the EU AI Act to reduce divergence and lower compliance costs. (Law Society AI & lawtech policy; uncorrected parliamentary oral evidence)
The Law Society presents these measures as practical safeguards that protect individual rights without disrupting the delivery of legal services. It asked the Committee to recommend proportionate, rights-based rules rather than a sector-by-sector patchwork. The Committee will consider the submissions as it prepares its recommendations to government.
Why It Matters
- Clearer definitions: consistent AI definitions reduce legal risk for firms and speed compliance work for legal-tech vendors.
- Human-rights impact assessments: mandating assessments for high-risk systems forces early risk mitigation, documents exposure, and helps preserve client rights.
- Regulatory alignment: using lessons from the EU AI Act can cut cross-border compliance costs, reduce fragmentation, and simplify obligations for UK legal services and vendors.
Trust & Verification
Sources (3)
- The Law Society (Official), Dec 18, 2025
- UK Parliament (Official), Dec 17, 2025
- The Law Society (Official), Dec 22, 2025
Fact Checks (4)
The Law Society gave evidence to the Joint Committee on Human Rights in December 2025 (VERIFIED)
The Law Society urged a principled approach to AI regulation prioritising human-rights safeguards, clarity on definitions, and avoidance of regulatory fragmentation (VERIFIED)
The Law Society recommended that the UK consider lessons from the EU AI Act when designing domestic rules (VERIFIED)
The Society called for human-rights impact assessments for high-risk systems (VERIFIED)