- cross-posted to:
- ai_infosec@infosec.pub
From https://twitter.com/llm_sec/status/1667573374426701824
- People ask LLMs to write code
- LLMs recommend imports that don’t actually exist
- Attackers work out what these hallucinated package names are, then create and upload packages under those names with malicious payloads
- People using LLM-written code then install the malware themselves
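One partial defense against the attack described above is to check, before installing anything, whether the imports in a generated snippet actually resolve. Here is a minimal sketch using only the Python standard library; the function name `unverified_imports` and the example snippet are illustrative, not from the thread. Note the limitation: this only checks the local environment, and a hallucinated name that an attacker has already squatted on PyPI would still pass a naive "does it install?" check — so unfamiliar names need manual review regardless.

```python
import ast
import importlib.util

def unverified_imports(source: str) -> list[str]:
    """Return top-level module names imported by `source` that cannot
    be resolved in the current environment (candidates for review)."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # "import a.b" imports the top-level package "a"
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            # "from a.b import c" -- skip relative imports (level > 0)
            names.add(node.module.split(".")[0])
    # find_spec returns None when the module cannot be located
    return sorted(n for n in names if importlib.util.find_spec(n) is None)

# Hypothetical LLM-generated snippet with one hallucinated import
snippet = "import os\nimport totally_made_up_pkg\n"
print(unverified_imports(snippet))  # → ['totally_made_up_pkg']
```

Flagged names should be checked by hand (does the project exist, who maintains it, how old is it) rather than fed straight to `pip install`.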
It’s terrifying that someone would build on suggestions from ChatGPT without verifying the packages they are installing.