In this lab, you will utilize tools in conjunction with LLMs to automate the detection of vulnerable services and source code examples, then determine whether the results are accurate.
To begin with, change into the code directory for the exercises in the repository and pull the latest changes:
cd cs475-src/09
git pull
LLM agents have the ability to make decisions about what tools to use based on the context they receive. In this exercise an agent will utilize two custom tools to solve a Portswigger password authentication level. Go to: https://portswigger.net/web-security/authentication/password-based/lab-username-enumeration-via-different-responses
This is the first level in the password based authentication challenges found on Portswigger Academy. Click "Access the Lab" to start the lab.
Then, change into the exercise directory, create a virtual environment, activate it, and install the packages.
cd 01_auth_breaker
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
For this program, two MCP tools are written to solve the level. The first tool scrapes the site to find the URL of its login page. The tool uses the RecursiveUrlLoader found in the RAG section of the course to locate a webpage whose URL contains the string "login".
@mcp.tool("find_login_page")
def find_login_page(base_url):
    """(CHANGE ME)"""
    loader = RecursiveUrlLoader(
        url=base_url,
        max_depth=2,
    )
    docs = loader.load()
    for doc in docs:
        login_page = doc.metadata["source"]
        if "login" in login_page:
            return login_page
Modify the tool's description, given in the Python docstring as "(CHANGE ME)", to better reflect the information it returns.
The second tool attempts a brute-force attack on the login page using a common set of credentials given in the data/auth-lab-usernames and data/auth-lab-passwords files. When a valid pair is found, it automatically logs into the level. Examine the code for the second tool and modify its description, given in the Python docstring as "(CHANGE ME)", to better reflect the information it returns.
@mcp.tool("get_creds")
def get_creds(login_url):
    """(CHANGE ME)"""
Examine the logic of the tool. It has been specifically written for this particular level.
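The tool in the repository is tailored to this level's specific response behavior, but the underlying brute-force loop can be illustrated with a simplified sketch. Here, try_login is a hypothetical stand-in for an HTTP POST against the login form, and the credential lists are made-up examples rather than the contents of the data files:

```python
def brute_force(usernames, passwords, try_login):
    """Return the first (username, password) pair accepted by try_login, else None."""
    for user in usernames:
        for pw in passwords:
            if try_login(user, pw):
                return (user, pw)
    return None

# Stand-in check instead of a real HTTP POST to the login form:
found = brute_force(["admin", "wiener"], ["letmein", "peter"],
                    lambda u, p: (u, p) == ("wiener", "peter"))
print(found)  # ('wiener', 'peter')
```

The real tool additionally exploits the level's differing error messages to enumerate a valid username first, which shrinks the search from every username/password combination down to a single password sweep.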
Run the agent:
python auth_breaker_mcp_client.py
Prompt the agent with the URL of your level and solve the level:
I want to login to the website with a base url of <YOUR_LEVEL_URL>
Deactivate the virtual environment and delete it.
deactivate
rm -rf env
While it may be tempting to use an LLM to perform vulnerability analysis, special-purpose tools are often more appropriate, both in accuracy and in cost. One such tool for performing Static Application Security Testing (SAST) to identify vulnerable Python code is Bandit. Bandit processes Python code, builds an abstract syntax tree (AST) from it, and then runs appropriate plugins against the AST nodes to identify problematic code snippets. Once Bandit has finished scanning all the files, it generates a report. In this exercise, Bandit is used to analyze a repository to find files with potentially vulnerable code. The summary is then fed to the LLM to generate a patch for vulnerable files automatically.
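Bandit's plugins are far more thorough, but its AST-walking approach can be illustrated with a minimal sketch using Python's built-in ast module. This is not Bandit code; it mimics one check (Bandit flags calls to eval as issue B307) by walking the tree and recording the line of each matching call:

```python
import ast

SOURCE = """x = eval(input())
print(x)
"""

def find_eval_calls(source):
    """Return the line numbers of every direct call to eval() in the source."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

print(find_eval_calls(SOURCE))  # [1]
```

A real Bandit plugin works the same way at its core: it registers interest in a node type, inspects each matching node, and emits an issue with a severity and confidence level that ends up in the final report.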
Change into the exercise directory, create a virtual environment, activate it, and install the packages.
cd 02_bandit_patch
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
A program is provided that clones an arbitrary repository and then runs Bandit with flags that limit the report to vulnerabilities Bandit is highly confident are high severity. The function below runs the tool and asks the LLM to summarize its findings, including the line numbers at which the vulnerability appears in each vulnerable file.
def bandit_find_high_severity_files(repo_path):
    result = subprocess.run(
        ["bandit", "-r", repo_path, "--confidence-level", "high", "--severity-level", "high"],
        capture_output=True,
        text=True
    )
    bandit_results = result.stdout
    prompt = f"Analyze the results from the Bandit vulnerability scan and return a list of files with high confidence, high severity vulnerabilities in them. For each, include the line numbers they occur in:\n\n{bandit_results}"
    response = llm.invoke(prompt)
    return response.content
One use for Bandit's analysis is to help generate patches for vulnerable files. To do so, consider the code below that performs the vulnerability analysis on a particular file from the previous step, then feeds its results along with the contents of the file to an LLM to generate a patch.
def patch_file(file_path):
    result = subprocess.run(
        ["bandit", file_path],
        capture_output=True,
        text=True
    )
    bandit_results = result.stdout
    with open(file_path, "r", encoding="utf-8") as f:
        file_content = f.read()
    prompt = f"You are a skilled patch generator that takes a program from a file and a description of its vulnerabilities and then produces a patch for the program in diff format that fixes the problems in the description.\n\n The contents of the program file are: \n {file_content}\n\n The description of the issues in it are: \n {bandit_results}"
    response = llm.invoke(prompt)
    return response.content
Run the program and point it to the course repository.
python bandit_patch.py
Select one of the files to have the program generate a patch for it.
Deactivate the virtual environment and delete it as well as the repository directory.
deactivate
rm -rf env bandit_repository_directory
Modern applications have a large software supply chain they depend upon. With dozens of packages potentially being installed, vulnerabilities will eventually be discovered that need to be patched. Automating the retrieval and summarization of new vulnerabilities in open-source software is an important process to perform. The Open Source Vulnerabilities (OSV) project implements a real-time vulnerability information API that one can use to identify out-of-date packages in an application that might need to be upgraded. In this lab, you will leverage OSV and an LLM to summarize any vulnerabilities in packages found in a Python virtual environment.
Change into the vulnerable application directory within the exercise's directory. Create a virtual environment, activate it, install the vulnerable packages within it, then deactivate the environment.
cd 03_pip_osv/vulnerable_app
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
deactivate
Change back into the exercise directory, then create and activate a virtual environment for the exercise. Install its packages.
cd ..
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
The program in the repository takes a Python virtual environment, finds all of the packages and their versions that are installed, then queries OSV's API to retrieve relevant vulnerabilities associated with them. The output is then sent to an LLM to summarize. To begin with, Python's package manager pip is used to list all of the packages and their versions, returning the result as a JSON object. From this, a dictionary is created with the package name as the key and version as the value.
def get_installed_packages(venv_path):
    python_exec = os.path.join(venv_path, "bin", "python")
    result = subprocess.run([python_exec, "-m", "pip", "list", "--format=json"], capture_output=True, text=True, check=True)
    packages = json.loads(result.stdout)
    installed_packages = {pkg["name"]: pkg["version"] for pkg in packages}
    return installed_packages
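The dictionary construction can be exercised against a hand-made sample of pip's JSON output, without running pip at all. The package names and versions below are hypothetical, chosen only to show the shape of the data:

```python
import json

# Hypothetical capture of `pip list --format=json` output:
sample = '[{"name": "requests", "version": "2.19.1"}, {"name": "flask", "version": "0.12"}]'

packages = json.loads(sample)
installed = {pkg["name"]: pkg["version"] for pkg in packages}
print(installed)  # {'requests': '2.19.1', 'flask': '0.12'}
```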
The user is then prompted to enter an installed package to analyze. From this, a request to the OSV API that includes the package name and version is constructed. The API returns a JSON object enumerating all of the vulnerabilities associated with this particular version of the package. The details of each are concatenated together to provide a complete description of the vulnerabilities.
def check_vulnerabilities(installed_packages, package):
    post_data = {"package": {"name": package}, "version": installed_packages[package]}
    response = requests.post("https://api.osv.dev/v1/query", json=post_data)
    vuln_results = response.json()
    vuln_report = "\n".join([vuln['details'] for vuln in vuln_results['vulns'] if 'details' in vuln])
    return vuln_report
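The details extraction can be checked against a hand-made response of the same general shape, with no network call. The entries below are fabricated placeholders, not real OSV records; real responses carry many more fields (affected version ranges, references, severity scores, and so on):

```python
# Fabricated example mimicking the shape of an api.osv.dev/v1/query response:
vuln_results = {
    "vulns": [
        {"id": "EXAMPLE-0001", "details": "Example: header injection in redirects."},
        {"id": "EXAMPLE-0002"},  # entries without a 'details' field are skipped
    ]
}

vuln_report = "\n".join(
    vuln["details"] for vuln in vuln_results["vulns"] if "details" in vuln
)
print(vuln_report)  # Example: header injection in redirects.
```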
Finally, a prompt is constructed with the instructions for summarizing the vulnerability report to the user.
def summarize_vulnerabilities(vuln_output):
    prompt = f"""You are a cybersecurity expert tasked with analyzing security vulnerabilities found in a Python package. Provide a 100-word summary of each vulnerability found.
Vulnerabilities:
{vuln_output}
"""
    summary = llm.invoke(prompt)
    return summary.content
Run the program and point it to vulnerable_app/env as the environment.
python pip_osv.py