Former Tesla AI Director Andrej Karpathy Sounds Alarm on 'Software Horror' Python Package Attack
Andrej Karpathy, the former Tesla AI director and OpenAI cofounder, has labeled a recent Python package attack "software horror," and the details are genuinely alarming. A compromised version of LiteLLM, one of the most downloaded AI libraries on PyPI with 97 million monthly downloads, briefly turned a routine pip install into a credential-theft operation. The malicious code could exfiltrate sensitive data, including SSH keys, AWS and Google Cloud credentials, Kubernetes configurations, crypto wallets, SSL private keys, CI/CD secrets, and full shell histories.
Malicious Versions Bypassed Official Release Pipeline
The malicious versions—1.82.7 and 1.82.8—were uploaded directly to PyPI on March 24, bypassing LiteLLM's official GitHub release pipeline. The attack has been traced to TeamPCP, a threat actor engaged in a multi-week campaign targeting developer and security tooling. Prior to this incident, they had compromised Aqua Security's Trivy scanner, which provided them with access to LiteLLM maintainer BerriAI's PyPI publish token, facilitating the upload.
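For developers who want to verify their own machines, a quick local check against the two reported releases is straightforward. This is a minimal sketch, not an official advisory tool; the known-bad set below comes from the version numbers in this report:

```python
# Sketch: compare the locally installed litellm version against the two
# releases reported as compromised (1.82.7 and 1.82.8).
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"1.82.7", "1.82.8"}  # versions named in the incident report

def is_compromised_version(v: str) -> bool:
    """True if the given version string matches a known-bad release."""
    return v in COMPROMISED

def check_installed(package: str = "litellm") -> bool:
    """True if the installed package is one of the compromised releases."""
    try:
        return is_compromised_version(version(package))
    except PackageNotFoundError:
        return False  # package not installed in this environment

if __name__ == "__main__":
    if check_installed():
        print("WARNING: compromised litellm release installed - rotate credentials")
    else:
        print("no known-bad litellm release found")
```

A match warrants treating every credential on the machine as exposed, per BerriAI's own guidance.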
Bug in Malware Saved Thousands of Developers
The poisoned package was live for approximately two hours before PyPI quarantined it. Remarkably, the only reason it was detected so quickly was a mistake in the attacker's own code. Developer Callum McMahon was installing a Cursor MCP plugin that pulled LiteLLM as a transitive dependency. Version 1.82.8 caused his machine to run out of RAM and crash, setting off the alarm. Karpathy commented on X, stating, "If the attacker didn't vibe code this attack, it could have been undetected for many days or weeks."
Karpathy Urges Rethinking of Dependency Usage
Karpathy used this incident to revisit a long-standing concern: the software industry's heavy reliance on dependency trees creates enormous, largely invisible attack surfaces. Every package in a project's chain represents a potential entry point for malicious actors. His suggestion, which he increasingly defaults to, is to use large language models (LLMs) to extract or replicate simple functionality instead of importing entire libraries. This approach could mitigate risks associated with third-party dependencies.
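To make the suggestion concrete, here is a hypothetical example of the pattern: a small, auditable utility written in-house (with or without LLM assistance) rather than pulled in as a third-party package. The `slugify` helper below stands in for the kind of trivial functionality that often arrives via a dependency:

```python
# Illustration of the "write the small helper yourself" pattern:
# a minimal slugify using only the standard library, so there is
# no third-party code in the supply chain for this function.
import re
import unicodedata

def slugify(text: str) -> str:
    """Lowercase, strip accents, and join words with hyphens."""
    # Decompose accented characters, then drop the non-ASCII remainder.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    # Collapse every run of non-alphanumerics into a single hyphen.
    text = re.sub(r"[^a-zA-Z0-9]+", "-", text).strip("-")
    return text.lower()
```

A few lines like these can be read and reviewed in full, whereas a published package brings its own dependency tree, its maintainer's account security, and its release pipeline along with it.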
In response to the attack, maintainers at BerriAI have engaged Mandiant for a thorough investigation. They have advised immediate credential rotation across all affected systems. Additionally, Docker images, which pin dependencies to specific versions, were confirmed to be unaffected by this compromise.
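The same pinning that protected the Docker images can be applied to any Python project via pip's hash-checking mode. A sketch of the workflow (the version and digest below are placeholders, not real LiteLLM values):

```shell
# Sketch: defend against tampered or re-uploaded releases with pinned hashes.
# One common way to generate the pins is pip-tools:
#   pip-compile --generate-hashes requirements.in
#
# The resulting requirements.txt pins each dependency to an exact version
# AND the sha256 of the exact artifact, e.g.:
#   litellm==1.82.6 --hash=sha256:<expected-digest>
#
# Installation then refuses any unpinned package or mismatched artifact:
pip install --require-hashes -r requirements.txt
```

With hashes in place, even a package uploaded with a stolen publish token fails to install unless it matches the digest recorded at pin time.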



