Prior labs show examples where agents leverage built-in tools or custom-built tools provided by the application writer. As shown below, each LLM application would then execute its own code for accessing services.
The Model Context Protocol (MCP) is a standard protocol that allows LLMs to discover and invoke both local and remote tools to access services. Thus, rather than having every LLM agent implement all of the tools it needs to execute, the agent can instead invoke tools implemented by MCP servers running either locally or remotely. In this lab, you will experiment with agents that utilize MCP servers to handle tasks.
Within the repository on the course VM, change into the exercise directory.
cd cs475-src
git pull
cd 05*/01*
There are two main ways of running an MCP server. One is to run the MCP server locally and communicate with it over standard input/output (STDIO); the other is to run the MCP server remotely and communicate with it over HTTP. For this lab, we'll support both modes and use a command-line argument, passed when the server is invoked, to select between them.
The code below implements the SQLite server. It utilizes the FastMCP package to create the server and instantiates one tool within it called query(). The tool handles queries to a local SQLite database by taking a query string and executing it against a specified database, returning the results. Because it accepts a raw query string, the tool is vulnerable to SQL injection attacks. Note that the description of the tool is provided within the docstring of the tool declaration. This description is utilized by the server to instruct clients on how to access the tool. An LLM agent is better equipped to call MCP tools if these descriptions are detailed, specific, and accurate.
from fastmcp import FastMCP
import sqlite3
import sys

mcp = FastMCP("sqlite")

@mcp.tool()
def query(query: str, path: str) -> str:
    """Query a specified Sqlite3 database. Returns the result of the query."""
    con = sqlite3.connect(path)
    cur = con.cursor()
    res = cur.execute(query)
    con.commit()
    return res.fetchall()

if __name__ == "__main__":
    if sys.argv[1] == 'stdio':
        mcp.run(transport="stdio")
    else:
        mcp.run(transport="http", host="0.0.0.0", port=8080)
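Before wiring the server into an agent, it can be useful to sanity-check the tool by calling it directly over STDIO with the MCP client library. The sketch below is not one of the lab's files; it assumes the server above is saved as vulnerable_sqlite_mcp_server_stdio.py and uses an illustrative query against sqlite_master.

import asyncio
from mcp import ClientSession
from mcp.client.stdio import StdioServerParameters, stdio_client

server = StdioServerParameters(
    command="python",
    args=["vulnerable_sqlite_mcp_server_stdio.py", "stdio"]
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Invoke the query() tool directly, without an LLM in the loop.
            result = await session.call_tool(
                "query",
                {"query": "SELECT name FROM sqlite_master WHERE type='table'",
                 "path": "db_data/metactf_users.db"},
            )
            print(result)

asyncio.run(main())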
To leverage the tool that the server now supports, we can adapt our prior agent code to be an MCP client, leveraging LangChain's MCP adapter support to invoke the tool on the server. As the code shows, we first define the server we wish to bring up. In this instance, the path in the repository to the prior server code is specified. Then, in the agent loop, we create a connection to the MCP server and load the MCP server's tool into our agent, before querying it. In doing so, the agent will package an MCP call over STDIO via the session's connection and retrieve the results.
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession
from mcp.client.stdio import StdioServerParameters, stdio_client
import asyncio

database = "db_data/metactf_users.db"

server = StdioServerParameters(
    command="python",
    args=["vulnerable_sqlite_mcp_server_stdio.py", "stdio"]
)

prompt = f"You are a Sqlite3 database look up tool. Perform queries on the database at {database} given the user's input. Utilize the user input verbatim when sending the query to the database and print the query that was sent to the database"

async def run_agent():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await load_mcp_tools(session)
            # llm is the chat model initialized earlier in the program
            agent = create_react_agent(model=llm, tools=tools, prompt=prompt)
            query = "Who are the users in the database?"
            result = await agent.ainvoke({"messages": [("user", query)]})
            print(f"Agent response: {result}")

if __name__ == "__main__":
    result = asyncio.run(run_agent())
Create a virtual environment, activate it, and then install the packages required.
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
Then, run the program:
python 01_stdio_mcp_client.py
As in the prior labs, attempt to interact with the database using the agent.
It is quite dangerous to expose an MCP server like this without proper access control. Using the agent, show whether the server is vulnerable to attack using the queries below.
foo or 1=1--
Note: if the agent is able to delete the table, restore it from the command line via:
git checkout db_data/metactf_users.db
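To see why the raw-string design of query() is injectable, consider how an input like the one above can change the meaning of a WHERE clause once it is spliced into a SQL statement. The sketch below is purely illustrative: the table, columns, and the closing quote added to the input are assumptions, not the lab's actual schema.

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE users (username TEXT, password TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [("alice", "s3cret"), ("bob", "hunter2")])

user_input = "foo' or 1=1--"   # injected input, with a quote to close the string literal

# Vulnerable: the input is spliced directly into the SQL text, so the statement
# becomes ... WHERE username = 'foo' or 1=1--'
rows = cur.execute(f"SELECT * FROM users WHERE username = '{user_input}'").fetchall()
print(rows)   # every row is returned because 1=1 is always true

# Safer: a parameterized query treats the input as data, not SQL
rows = cur.execute("SELECT * FROM users WHERE username = ?", (user_input,)).fetchall()
print(rows)   # no rows match the literal string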
One can also run an MCP server remotely over the network, thus allowing MCP clients to invoke the tools that the server implements over the network using HTTP. In this step, we'll first deploy the SQLite MCP server as a serverless container running on Google's Cloud Run, then configure our client agent to invoke it using HTTP.
Cloud shell
To enable Cloud Run, visit your course project on Google Cloud Platform. Bring up a Cloud Shell session and enable the Cloud Run service.
gcloud services enable run.googleapis.com
Next, visit the course repository on GitHub and navigate to the directory containing the SQLite MCP code. Examine the Dockerfile in the directory. The Dockerfile specifies a container image that implements our MCP server. The container installs the Python packages required to run the server, then copies the server code and database file over, before running the server.
Click on the "Run on Google Cloud" button.
This will build the container from the Dockerfile in the repository and then deploy it onto Cloud Run, Google's serverless container platform. To do so, you'll need to authorize the deployment and select a region (e.g. us-central1). The container will then be built and pushed to Google's Artifact Registry and then deployed onto Cloud Run. When deployed, a URL will be returned. Make a note of it.
Visit the Cloud Run interface via the web console.
We'll now run the MCP client on the course VM and allow it to utilize the MCP server running in Cloud Run. To begin with, on the course VM, set the MCP_URL
environment variable to the URL that is returned by Cloud Run.
export MCP_URL="https:// ...a.run.app"
To adapt the MCP client to utilize the remote MCP server, we simply tweak the client to utilize the Streamable HTTP interface to the MCP server's endpoint URL as shown in the snippet below, keeping the rest of the client the same.
import os
from mcp.client.streamable_http import streamablehttp_client

async def run_agent():
    async with streamablehttp_client(f"{os.getenv('MCP_URL')}/mcp/") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await load_mcp_tools(session)
            ...
Run the agent and interact with the MCP server on Cloud Run.
python 02_http_mcp_client.py
Repeat the queries below:
Back in the web interface for Cloud Run, click on the deployed service and then navigate to the logs. Find the requests associated with your queries.
Go back to the Cloud Run services interface, select the MCP server you deployed, and delete it.
FastAgent is an agent framework designed specifically around MCP tool calls and their results. Its agents are simple to set up and can easily leverage MCP servers. To begin with, FastAgent needs to be configured with the name and launch command of the MCP servers it is allowed to access. The configuration for utilizing the SQLite STDIO MCP server is shown below. A FastAgent client will automatically load this configuration to initialize its tools.
mcp:
  servers:
    sqlite:
      command: "python"
      args: ["vulnerable_sqlite_mcp_server.py","stdio"]
A simple FastAgent program is shown below that utilizes this MCP server to answer queries.
from mcp_agent.core.fastagent import FastAgent
import asyncio

fast = FastAgent("SQLite Agent")

database = "db_data/metactf_users.db"
prompt = f"You are a Sqlite3 database look up tool. Perform queries on the database at {database} given the user's input. Utilize the user input verbatim when sending the query to the database and print the query that was sent to the database"

@fast.agent(
    instruction=prompt,
    model="...",
    servers=["sqlite"],
    use_history=True,
)
async def main():
    async with fast.run() as agent:
        await agent.interactive()

if __name__ == "__main__":
    asyncio.run(main())
Run the agent and interact with the interactive interface implemented by the FastAgent package.
python 03_fast_agent_stdio_mcp_client.py
Repeat the queries below:
Type /exit
to exit out of the agent.
Now you will interact with a Google Drive agent that uses an MCP server to interact with your Google Drive. The code for the Google Drive MCP server is similar to the prior SQLite one with specific tools implementing individual operations within a user's Google Drive. Code snippets for listing files (list_files
) and retrieving a particular file (get_file
) are shown below.
from fastmcp import FastMCP

mcp = FastMCP("Google Drive")

@mcp.tool("list_files")
async def list_files():
    """
    List files in Google Drive.
    """
    service = get_drive_service()
    results = service.files().list(pageSize=10, fields="nextPageToken, files(id, name)").execute()
    items = results.get('files', [])
    file_list = [f"{item['name']} (ID: {item['id']})" for item in items]
    return "\n".join(file_list)

@mcp.tool("get_file")
async def get_file(file_id: str):
    """
    Get a file from Google Drive by its ID.
    """
    service = get_drive_service()
    file = service.files().get(fileId=file_id, fields='id, name').execute()
    return f"File found: {file['name']} (ID: {file['id']})"
As both tools show, the service is instantiated and then specific APIs are invoked on it utilizing the parameters given by the client.
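The get_drive_service() helper is not shown above. It presumably builds an authorized Drive API client from the OAuth credentials obtained later in this exercise; a minimal sketch of what such a helper might look like is below, assuming the credentials are saved to a token.json file. The actual helper in gdrive_mcp_server.py may differ.

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive"]

def get_drive_service():
    # Load the previously authorized user credentials and build a Drive v3 client.
    creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    return build("drive", "v3", credentials=creds)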
Cloud shell
We'll now set up our project to run the server. Begin by enabling the Google Drive API on your cloud project. Within Cloud Shell, this can be done with the following:
gcloud services enable drive.googleapis.com
We'll be running the MCP server on our course VM and connecting to it from an MCP client running on Cloud Shell. To allow the server to run on the course VM, we'll need to open up the firewall to allow incoming traffic to the server from our client. Within Cloud Shell, create the firewall rule with tag mcp
and add it to the course VM.
gcloud compute firewall-rules create allow-mcp \
    --allow=tcp:8080 --target-tags=mcp
gcloud compute instances add-tags <NameOfVM> \
    --tags=mcp
OAuth Setup
Google Drive utilizes the OAuth2 protocol to authenticate and authorize access. To enable the MCP server to do so, we must configure an OAuth application on our Google Cloud project. To do so, navigate to the OAuth consent section of the "APIs & Services" here.
If this is the first time OAuth is being configured, you will be asked to specify application information. Name the application "Google Drive MCP" and utilize your pdx.edu address.
Configure the Audience to "Internal" and set the contact information to your pdx.edu address. Agree to the user data policy and create the project configuration. Navigate to the "Data Access" portion of the console shown below and click on "Add or remove scopes".
Select Google Drive API from the list and click on "Update".
Next, navigate to the Clients tab and create a client. Configure it as a "Desktop application" and name it "MCP client". When the client has been created, download its associated JSON file that will include its Client ID and secret.
Upload the JSON file to Cloud Shell.
Then, upload it from Cloud Shell to your course VM with the following command.
gcloud compute scp client_secret_*.json course-vm:/home/${USER}/cs475-src/05_ModelContextProtocol/02_google_drive/credentials.json
The course VM will be used to run the MCP server. For this step, we need to get Google Drive credentials for your user account onto the VM in order to allow the MCP server to interact with files in your Google Drive folders. Connect to your course VM using RDP (e.g. Remote Desktop Connection or Remmina). Change into the exercise's directory and then set up the Python environment for the server.
cd ~/cs475-src/05*/02*
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
Run the authentication script in the directory to obtain credentials for your Google account that the MCP server will then utilize when accessing Google Drive on your behalf.
python auth.py
Click the URL that it prints out to launch a web browser that will allow you to authenticate and authorize Google Drive permissions via OAuth. If everything worked, you should see a list of some of your Google Drive files in the output of the authentication script.
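auth.py is provided in the repository; under the hood, such a script typically performs Google's installed-application OAuth flow. A rough sketch using google-auth-oauthlib is shown below for reference only; the file names, scope, and flow options are assumptions and may not match the course's script exactly.

from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/drive"]

# Start the OAuth flow using the client secret downloaded from the console.
flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)

# Prints a URL to visit; after you authorize, credentials are returned.
creds = flow.run_local_server(port=0, open_browser=False)

# Persist the credentials for the MCP server to use later.
with open("token.json", "w") as f:
    f.write(creds.to_json())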
Examine the Google Drive MCP server code in gdrive_mcp_server.py
and make a note of each tool that it provides to clients. Then, launch the Google Drive MCP server and keep it running.
python gdrive_mcp_server.py
Launch Google Cloud shell. Clone the course repository, then change into the exercise directory and set up a Python virtual environment.
git clone https://github.com/wu4f/cs475-src.git
cd cs475-src/05*/02*
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
Now copy the external IP address of your course VM and set the MCP_URL environment variable as done previously, replacing "CourseVM_External_IP
" with your course VM's external IP address. Then run the agent and it will connect to the MCP server.
export MCP_URL=http://<CourseVM_External_IP>:8080
python gdrive_mcp_client.py
Select 2 different tools that the Google Drive MCP server implements and prompt the client to perform operations that invoke them.
The Fetch MCP server is an off-the-shelf MCP server provided by Anthropic that retrieves arbitrary URLs. We can leverage it to allow the agent to access web resources. It can be installed via a Python package that is then loaded directly into the MCP client. The client code is similar to the prior FastAgent agents, but rather than specifying the MCP server via a program file in the course repository, we instead load the server from a module installed from a Python package via the -m flag. To access the tool, we specify it in the FastAgent configuration shown below:
mcp:
  servers:
    fetch:
      command: "python"
      args: ["-m", "mcp_server_fetch"]
To test the Fetch server, change into the directory and deactivate any prior environment.
cd cs475-src/05*/03*
deactivate
Then, create a Python environment and install the packages.
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
Finally, run the program.
python fetch_mcp_client.py
In one query, prompt the agent to summarize www.oregonctf.org and one other site of your choosing.
The Git MCP server is an off-the-shelf MCP server provided by Anthropic that allows you to query Git repositories. As in the Fetch example, we specify the server in the FastAgent configuration shown below:
mcp:
  servers:
    git:
      command: "python"
      args: ["-m", "mcp_server_git"]
To test the Git server, change into the directory and deactivate any prior environment.
cd cs475-src/05*/04*
deactivate
Then, create a Python environment and install the packages.
virtualenv -p python3 env
source env/bin/activate
pip install -r requirements.txt
Finally, run the program.
python git_mcp_client.py
Prompt the agent to find the hash of the last commit and to list the developers responsible for the last 20 commits.
Pentest MCP
Penetration testing typically involves multiple phases that might include reconnaissance, vulnerability discovery, and exploitation. There are many tools for performing each phase of a test. For example, one might use:
nmap to do a reconnaissance scan for potential targets
nuclei to perform vulnerability discovery on a chosen target
metasploit to compromise the target
A seasoned penetration tester develops a 'playbook' of tools and techniques that stitches together a sequence of them to perform a successful test. Such a playbook can be emulated using LLM agents. In this exercise, you will launch a vulnerable web service, then use an LLM agent equipped with access to nmap, nuclei, and metasploit to automatically compromise it. To begin with, we'll examine the tools that the MCP servers provide.
from fastmcp import FastMCP, Context
import os
import shlex

mcp = FastMCP("NMap")

@mcp.tool("nmap_scan")
async def nmap_scan(target: str, options: str, ctx: Context = None):
    """
    Perform an NMap scan on the specified target with parameters.
    Options are any valid NMap flag.
    Returns the scan results as a string.
    """
    target = shlex.quote(target)
    options = shlex.quote(options)
    command = f"nmap {target} {options} -oN /tmp/nmap_output.txt"
    os.system(command)
    with open("/tmp/nmap_output.txt", "r") as f:
        return f.read()
import os
import shlex

@mcp.tool("nuclei_scan")
async def nuclei_scan(target: str, ctx: Context = None):
    """
    Perform a Nuclei vulnerability scan on the specified target.
    Returns the scan results as a string.
    Example usage: nuclei_scan("http://example.com")
    The target can be a URL or an IP address.
    """
    target = shlex.quote(target)
    command = f"nuclei -u {target} -o /tmp/nuclei_output.txt"
    os.system(command)
    with open("/tmp/nuclei_output.txt", "r") as f:
        return f.read()
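The lab also gives the agent access to Metasploit, whose MCP server is not shown above. As a rough illustration only, a tool in such a server might talk to the RPC daemon started in the setup steps below using the pymetasploit3 library; it reuses the mcp instance and Context import from the snippets above, and the password, port, and tool behavior here are assumptions rather than the course's code.

from pymetasploit3.msfrpc import MsfRpcClient

@mcp.tool("msf_search")
async def msf_search(keyword: str, ctx: Context = None):
    """
    List Metasploit exploit modules whose name contains the given keyword.
    Returns the matching module names as a string.
    """
    # Connect to the msfrpcd daemon (started with: msfrpcd -P msf).
    client = MsfRpcClient("msf", port=55553, ssl=True)
    matches = [name for name in client.modules.exploits if keyword.lower() in name.lower()]
    return "\n".join(matches)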
Before running the agent, perform the following setup steps.
Run nuclei once so that it installs its templates:
nuclei
Launch Metasploit's RPC daemon and set a password for msfrpcd. This is what the Metasploit MCP server will communicate over to execute Metasploit commands:
msfrpcd -P msf
Ensure your OPENCVE environment variables are set:
export OPENCVE_USERNAME="..."
export OPENCVE_PASSWORD="..."
Set the Docker environment variable to specify how the curl MCP server, which runs curl locally in a container, should communicate with the Docker daemon:
export DOCKER_HOST=unix:///var/run/docker.sock
Then, download the container image that the curl MCP server utilizes.
docker pull curlimages/curl:latest
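The curl MCP server itself is not listed here. For reference, a tool in such a server might run curl inside the pulled container via the Docker SDK for Python, which honors the DOCKER_HOST variable set above; the sketch below is an assumption about how it could work, not the lab's implementation.

import docker
from fastmcp import FastMCP

mcp = FastMCP("curl")

@mcp.tool("curl_fetch")
async def curl_fetch(url: str):
    """
    Fetch a URL by running curl inside a disposable container.
    Returns the response body as a string.
    """
    client = docker.from_env()   # uses DOCKER_HOST to reach the Docker daemon
    output = client.containers.run("curlimages/curl:latest", ["-sSL", url], remove=True)
    return output.decode()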