This lab will give you experience with Google's Compute Engine and its offerings in Google Cloud's Marketplace, as well as with nmap, a standard tool for performing network security audits.
Launch a Compute Engine instance using the f1-micro machine type and place it in the us-west1-b zone. Configure the boot disk to be Ubuntu 18.04 LTS.
Then, click on "SSH" to bring up a shell session on it.
Run the following to install nmap on the VM:

sudo apt update -y
sudo apt install nmap -y
We will use this VM to scan the Marketplace deployments that we launch on Compute Engine.
Go to Marketplace on the Google Cloud Platform console
Filter on Virtual Machines, then on Blog & CMS. These solutions, when deployed, will bring up their software on a Compute Engine instance.
Bring up 3 solutions from the Blog & CMS category that have type "Virtual machines" with the following settings (if possible):

Zone: us-west1-b (We require all machines to be in the same zone for this lab)
Visit the landing page for each VM to ensure it has been deployed properly. Go back to the Compute Engine console and note the "Internal IP address" of each instance.
Go back to the original VM you installed nmap on. (If you've logged out, click on SSH to log back into it.) Then, run nmap on the internal subnet the instances have been placed on:
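A sketch of the scan command is below; the CIDR range is a placeholder you must fill in from your own console, since the exact subnet assigned to your project may differ:

```shell
# Scan the internal subnet for open ports on each deployed instance.
# <internal_subnet_cidr> is a placeholder -- substitute the subnet range
# shown for your instances (e.g. the default us-west1 subnetwork is
# typically 10.138.0.0/20).
nmap <internal_subnet_cidr>
```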
You should see a list of ports that each machine exposes over the network. This provides administrators with important data for taking an inventory of their infrastructure, in order to ensure that only a minimal set of services is exposed.
Shut down all of the VMs you have created. In the web console, visit "Deployment Manager". Marketplace solutions are all deployed via this "Infrastructure-as-Code" service for Google Cloud Platform, and we can take them down from its console: click on each deployment, then delete it along with all of its resources.
Finally, visit Compute Engine and delete the VM used to perform the nmap scans.
For legacy, "lift-and-shift" deployments in the cloud, the goal is to take an existing network configuration and create a virtual equivalent in the cloud.
For example, in the figure below, the "Customer Site" wishes to take 3 of its internal, private subnetworks and shift them into the cloud across 3 different availability zones.
The infrastructure that is deployed to implement this is shown in red. Because these subnetworks were initially private, virtual switches that handle traffic within GCP infrastructure must be used to encrypt traffic between the 3 subnetworks. The figure also shows VPN gateways that must be used to encrypt and route traffic between GCP infrastructure and external destinations such as the customer site. Note that the CIDR prefixes for each subnetwork employ private IP address ranges that are not reachable externally (e.g. 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16).
By default, GCP automatically creates per-zone default subnetworks and will place any VMs instantiated into them. However, based on what the project requires, custom networks and subnetworks can also be specified. Using the
gcloud CLI, we will now set up a network consisting of instances in subnetworks spanning a variety of regions/zones using both the default subnetworks GCP provides as well as ones that we create explicitly.
Go to the web console and click on the Cloud Shell icon as shown:
Cloud Shell consists of a container with the Google Cloud SDK pre-installed. As part of the SDK, the
gcloud command-line interface is included. The command is similar to other cloud CLIs such as
aws in that it supports sub-commands that specify which cloud service is being accessed. For example, the command
gcloud compute networks list will list out all of the networks Compute Engine is using.
Compute Engine is initially configured with a single network consisting of one default subnetwork in each region that VMs are brought up on. One can list all of the networks that have been instantiated for the project using the command below.
gcloud compute networks list
As the output shows, a single default network is configured. In order to see the subnetworks that are automatically created, run the command below:
gcloud compute networks subnets list
Answer the following questions in your lab notebook:

How many subnetworks are part of the default network? How many regions does this correspond to? (Use a pipe to pass output to grep in order to return specific lines of output, and then another to pass output to wc to count them: gcloud compute networks subnets list | grep default | wc -l)
Create two instances in two different zones of your choice:
gcloud compute instances create instance-1 --zone <zone_one>
gcloud compute instances create instance-2 --zone <zone_two>
List both instances.
gcloud compute instances list
Visit the Compute Engine console and SSH into instance-1. Then, perform a ping to the Internal IP address of instance-2. Take a screenshot of the output.
Leave the session on instance-1 active for the next step.
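As a sketch, the ping from instance-1 would look like the following, where the address is a placeholder for the Internal IP shown in your console:

```shell
# Run from the SSH session on instance-1. <internal_ip_of_instance-2> is
# a placeholder for the Internal IP listed in the Compute Engine console;
# -c 4 limits the ping to four probes.
ping -c 4 <internal_ip_of_instance-2>
```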
GCP allows one to create custom network configurations as well. To show this, create a second network called
custom-network1 and additionally configure it to allow custom configuration of subnetworks.
gcloud compute networks create custom-network1 --subnet-mode custom
Use a command from the previous step to list both the default and custom networks. Include a screenshot of it for your lab notebook.
Create two custom subnetworks within custom-network1 in regions us-central1 and europe-west1. For both subnetworks, specify a /24 CIDR prefix:
gcloud compute networks subnets create subnet-us-central-192 \
    --network custom-network1 \
    --region us-central1 \
    --range 192.168.1.0/24

gcloud compute networks subnets create subnet-europe-west-192 \
    --network custom-network1 \
    --region europe-west1 \
    --range 192.168.5.0/24
Use a command from the previous step to list the subnetworks. You should see the two subnetworks of custom-network1 alongside the default subnetworks in those regions assigned to the default network.
Create instances in each custom subnetwork you've created:
gcloud compute instances create instance-3 \
    --zone us-central1-a \
    --subnet subnet-us-central-192

gcloud compute instances create instance-4 \
    --zone europe-west1-d \
    --subnet subnet-europe-west-192
Note the Internal IP addresses for both instances. Then, using your prior session on instance-1, perform a ping from instance-1 to the Internal IP addresses of instance-3 and instance-4.
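As before, this is a sketch with placeholder addresses. Because instance-3 and instance-4 sit on a separate network, these pings are expected to fail until peering is configured:

```shell
# Run from the SSH session on instance-1. The placeholders stand in for
# the Internal IPs of instance-3 and instance-4 (in the 192.168.1.0/24
# and 192.168.5.0/24 subnetworks). Without peering between the default
# network and custom-network1, these probes will not succeed.
ping -c 4 <internal_ip_of_instance-3>
ping -c 4 <internal_ip_of_instance-4>
```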
To enable communication amongst all 4 instances, one would need to set up peering between the two networks. We will skip this step and wrap up. In the web console, visit "VPC Network" and take a screenshot of the subnetworks created.
Delete the VMs, subnetworks, and network. Note that if you wish to avoid the prompt to continue, you can pass the
--quiet flag to each command.
gcloud compute instances delete instance-1 --zone <zone_one>
gcloud compute instances delete instance-2 --zone <zone_two>
gcloud compute instances delete instance-3 --zone us-central1-a
gcloud compute instances delete instance-4 --zone europe-west1-d
gcloud compute networks subnets delete subnet-us-central-192 --region us-central1
gcloud compute networks subnets delete subnet-europe-west-192 --region europe-west1
gcloud compute networks delete custom-network1
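For example, any of the deletion commands above can be made non-interactive with the flag, as in this sketch:

```shell
# --quiet suppresses the "Do you want to continue?" confirmation prompt
# so the command can run unattended (zone placeholder as above).
gcloud compute instances delete instance-1 --zone <zone_one> --quiet
```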