80% of AI-Suggested Dependencies Contain Risks: Study

Depending on the AI model, 44-49% of dependencies imported by coding agents contained known security vulnerabilities, showing that even existing dependencies can introduce risk if not properly vetted.


Only one in five dependency versions recommended by AI coding assistants were safe to use, meaning they contained neither hallucinated packages nor known vulnerabilities, according to Endor Labs’ annual State of Dependency Management 2025: Security in the AI-Code Era report.

The rapid adoption of Model Context Protocol (MCP) servers, which connect AI agents to thousands of third-party tools and integrations, further amplifies the risk by centralizing access points where unvetted code can enter enterprise systems. Without proper governance, organizations are inheriting a new, expanding attack surface that threatens even their most critical code and infrastructure.

"AI coding agents have become an integral part of modern development workflows," says Henrik Plate, security researcher at Endor Labs. "They introduce new types of dependencies — some of which may be hallucinated or insecure. At the same time, thousands of third-party MCP servers are being developed and published by open-source maintainers, waiting to be integrated into projects. Without sufficient verification, however, they could open new paths for exploitation. Effective governance is essential to balance innovation with accountability, enabling AI to accelerate development without letting untrusted code into critical systems."

Key takeaways:

  • AI-assisted development isn't the future; it's already here, and most enterprises are blindly inheriting a massive new attack surface full of hallucinated, vulnerable, and unvetted code.
  • Depending on the AI model, 44-49% of dependencies imported by coding agents contained known security vulnerabilities, showing that even existing dependencies can introduce risk if not properly vetted.
  • When AI agents are equipped with security tools, the proportion of safe dependency recommendations jumps from roughly 20% to 57%, nearly a threefold improvement. While this demonstrates the value of integrating safeguards into AI workflows, gaps remain if organizations rely solely on AI without proper oversight.
  • In an attempt to keep pace with AI's speed of innovation, more than 10,000 MCP servers were created in under a year, 40% of which had no license. About 75% were built by individuals without enterprise-grade protections, and 82% interact with sensitive APIs, creating additional vulnerabilities that complicate safe adoption at scale.