LangChain serialization flaw exposes secrets, supply-chain risk
16 days ago • ai-security
On December 23, 2025, GitHub's advisory database and the U.S. National Vulnerability Database recorded CVE-2025-68664 (GitHub Advisory; NVD).
The flaw is a serialization injection in LangChain core. It can be abused to extract secrets via the library's dumps/loads APIs.
Security analysis shows the flaw is triggered when untrusted serialized objects are deserialized: a malicious payload can steer execution into code paths that return sensitive values or alter prompt content, enabling secret exfiltration and prompt injection in agent workflows. Reporting highlights API misuse and unsafe deserialization patterns (The Hacker News; SecurityAffairs).
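To make the vulnerability class concrete, the toy sketch below shows a naive loader that lets attacker-controlled type names in the payload choose which constructor runs. This is a hypothetical illustration of serialization injection in general, not LangChain's actual code; all names (`SECRETS`, `leak_secret`, `naive_loads`) are invented for the example.

```python
import json

# Stand-in for secrets held in the process (e.g., environment variables).
SECRETS = {"OPENAI_API_KEY": "sk-live-example"}

def leak_secret(name):
    # An internal helper that a deserialized payload should never reach.
    return SECRETS[name]

# A type registry mapping serialized type names to constructors.
REGISTRY = {
    "prompt": lambda d: f"Prompt({d['text']})",
    "secret": lambda d: leak_secret(d["name"]),  # sensitive code path
}

def naive_loads(payload: str):
    obj = json.loads(payload)
    # The bug class: the serialized data, not the application,
    # decides which constructor executes.
    return REGISTRY[obj["type"]](obj)

# An attacker-supplied payload steers deserialization into the
# secret-returning path instead of a benign prompt object.
malicious = json.dumps({"type": "secret", "name": "OPENAI_API_KEY"})
print(naive_loads(malicious))  # the secret value is exfiltrated
```

The point of the sketch is that any loader which dispatches on payload-supplied type names effectively hands control flow to whoever crafts the payload.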
Recommended mitigations include updating to patched LangChain releases, refusing to load serialized objects from untrusted sources, restricting runtime privileges for agent tooling, and rotating any secrets that may have been exposed. Because many agent frameworks and integrations rely on LangChain for orchestration, the flaw widens AI supply-chain risk and calls for rapid patching and audits across dependent systems.
Why It Matters
- Update LangChain immediately and apply vendor-supplied patches to close the secret-extraction vector.
- Do not deserialize objects from untrusted sources; treat serialized payloads as high-risk I/O.
- Reduce runtime privileges for agents and related tooling to limit what a malicious payload can access.
- Rotate credentials and audit downstream integrations and CI/CD pipelines that consume LangChain artifacts to cut supply-chain impact.
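The "treat serialized payloads as high-risk I/O" advice above can be sketched as an explicit allow-list check before any constructor dispatch. This is a hypothetical defensive pattern, not LangChain's API; the type names and `safe_loads` helper are invented for illustration.

```python
import json

# Only types the application explicitly expects may be deserialized.
ALLOWED_TYPES = {"prompt", "chat_message"}

def safe_loads(payload: str) -> dict:
    obj = json.loads(payload)
    # Reject anything that is not a dict carrying an allow-listed type,
    # instead of letting the payload pick an arbitrary code path.
    if not isinstance(obj, dict) or obj.get("type") not in ALLOWED_TYPES:
        raise ValueError(f"refusing to deserialize type: {obj.get('type')!r}")
    return obj

safe_loads(json.dumps({"type": "prompt", "text": "hi"}))       # accepted
try:
    safe_loads(json.dumps({"type": "secret", "name": "KEY"}))  # rejected
except ValueError as err:
    print(err)
```

Denying by default and enumerating permitted types keeps an attacker-crafted payload from reaching sensitive constructors, which complements patching rather than replacing it.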
Trust & Verification
Sources (3)
- GitHub Advisory Database (langchain-ai/langchain), Official, Dec 23, 2025
- SecurityAffairs, Tier-1, Dec 27, 2025
- NVD (National Vulnerability Database), Tier-1, Dec 23, 2025
Fact Checks (4)
- CVE-2025-68664 was assigned for a LangChain serialization injection and disclosed on Dec 23, 2025 (VERIFIED)
- The vulnerability allows secret extraction via LangChain's dumps/loads serialization APIs (VERIFIED)
- The issue enables prompt injection and broader data exposure in agent workflows (VERIFIED)
- The GitHub advisory includes remediation guidance and recommends updating to patched releases (VERIFIED)
Quality Metrics
Confidence: 85%
Readability: 82/100