Friday, November 22, 2024

Open Source LLM Tool Sniffs Out Python Zero-Days

Researchers at Protect AI have released Vulnhuntr, a free, open source static code analysis tool that can find zero-day vulnerabilities in Python codebases using Anthropic’s Claude artificial intelligence (AI) model.

The tool, available on GitHub, provides detailed analysis of the code, proof-of-concept exploits for the vulnerabilities identified, and confidence ratings for each flaw, Protect AI said in its announcement.

Vulnhuntr breaks the codebase into smaller chunks rather than overwhelming the large language model’s (LLM) context window by loading the entire codebase at once. The tool uses prompt-engineering techniques to feed highly detailed, vulnerability-specific prompts into Claude, at which point the AI asks for additional code snippets until it has gathered enough information to map the application from user input to server output. This way, the LLM can analyze the entire call chain — which encompasses connections between files, functions, and variables across a project — without losing context. This level of analysis means the AI doesn’t just stop when it finds risky code, but rather investigates how that code interacts with the rest of the project, which the research team says helps decrease both false positives and false negatives.
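The iterative, context-gathering loop described above can be sketched roughly as follows. This is an illustrative approximation only, not Vulnhuntr’s actual code: all names here (`build_call_chain`, `llm_review`, `source_index`) are hypothetical.

```python
# Illustrative sketch of an LLM-driven analysis loop: start with one code
# chunk, let the model request additional snippets, and stop once it has
# enough context to trace the full call chain. Hypothetical names throughout.

def build_call_chain(entry_file, source_index, llm_review, max_rounds=10):
    """Feed the LLM one chunk at a time, appending only the snippets it
    asks for, until it can map user input to server output."""
    context = [source_index[entry_file]]          # begin with a single chunk
    for _ in range(max_rounds):
        verdict = llm_review(context)             # vulnerability-specific prompt
        if not verdict.get("needs"):              # model has enough context
            return verdict
        # Add only the requested snippets, keeping the prompt well under
        # the model's context-window limit.
        context.extend(source_index[name] for name in verdict["needs"]
                       if name in source_index)
    return {"findings": [], "confidence": 0}      # gave up after max_rounds
```

Requesting snippets on demand, rather than dumping every file into the prompt, is what lets the analysis follow connections across files without losing context.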

The tool currently focuses on the following types of vulnerabilities that can be exploited remotely: arbitrary file overwrite (AFO), local file inclusion (LFI), server-side request forgery (SSRF), cross-site scripting (XSS), insecure direct object references (IDOR), SQL injection (SQLi), and remote code execution (RCE).

Vulnhuntr’s team says the tool has already discovered more than a dozen zero-day vulnerabilities in popular Python projects on GitHub, including gpt_academic, FastChat, and Ragflow. Among them was an RCE flaw in the machine learning library Ragflow, which has since been fixed.

