This is the fourth article in a series of posts about Google Cloud Platform security. The first two articles (Article I., Article II.) focused on hardening options at the project level, and the third post covered built-in GCE security. In this fourth article, we will walk through several network-level protection tools available for your Google Compute Engine instances. Read on to learn more about GCE network security, and don't miss the last post of this series.
Use Google Cloud Load Balancer for Inbound HTTPS Traffic
It is always good practice to reduce your attack surface, and this method does exactly that for incoming HTTP and HTTPS traffic. If your instance serves web requests, route those requests through the Google Cloud Load Balancer instead of opening the HTTP and HTTPS ports directly to the Internet; this is worthwhile even if you only run a single machine. The approach has two advantages. First, you do not disclose the individual public IP addresses of your web servers, which makes them harder to attack. Second, the Google Cloud Load Balancer applies some filtering to incoming HTTP requests, so many malicious requests are dropped before they ever reach your instances.
It also lets you scale or replace your web servers while maintaining full availability of your services: you can change the backend servers behind the load balancer at any time, and multiple web instances can serve your sites simultaneously.
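To make sure requests really do come through the load balancer, you can restrict the web ports on your backends to Google's load balancer and health-check source ranges. A minimal sketch with `gcloud` follows; the rule name and the `web-server` tag are placeholder choices, and you should verify the current source ranges against Google's documentation before relying on them:

```shell
# Allow HTTP/HTTPS only from Google's HTTP(S) load balancer
# proxy and health-check ranges, not from the whole Internet.
# 130.211.0.0/22 and 35.191.0.0/16 are the ranges Google has
# documented for this purpose; check the docs for updates.
gcloud compute firewall-rules create allow-lb-only \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-server
```

Instances tagged `web-server` will then accept web traffic only from the load balancing infrastructure, provided no other rule opens those ports more widely.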
Restrict the Outbound Firewall Rules
Even if you block every incoming request other than web requests, and route those through the Google Cloud Load Balancer, there might still be a bug in the software in your web stack (e.g. Tomcat, NGINX) or in your application code. If an attacker manages to take control of a web process and run arbitrary code on your web server, it is hard for them to control that machine in the long term without communicating with it from a control node. For this reason, most attacks work around your firewall rules by opening an outgoing connection to an outside control host managed by the attacker. That connection is not incoming (where the firewall would block it) but outgoing (where the rules are usually much less restrictive).
You can make this type of attack much harder by disabling new outgoing connections toward the Internet in your firewall rules (either at the OS level on the virtual machines or using the firewall provided by GCE). There are two ways to go about this. One is to block only newly established outgoing connections, so reply packets for incoming connections can still leave the instance. A second, stricter option is to allow outgoing connections only to the IP addresses the instance legitimately needs to reach, so no other outside communication is possible.
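Both variants can be expressed with GCE egress firewall rules. The sketch below assumes the default network; the rule names, the `locked-down` tag, and the example API address `203.0.113.10` are placeholders. Because the GCE firewall is stateful, replies to allowed incoming connections still get through even with all egress denied:

```shell
# Block all new outbound connections from tagged instances.
gcloud compute firewall-rules create deny-all-egress \
    --network=default \
    --direction=EGRESS \
    --action=DENY \
    --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --target-tags=locked-down \
    --priority=1000

# Stricter variant: carve out only the destinations you need,
# with a higher-priority (lower-numbered) ALLOW rule.
gcloud compute firewall-rules create allow-egress-to-api \
    --network=default \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --destination-ranges=203.0.113.10/32 \
    --target-tags=locked-down \
    --priority=900
```

Lower priority numbers win in GCE, so the ALLOW rule at 900 is evaluated before the DENY rule at 1000.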
Set Up an HTTPS Proxy for Outbound Access
If you implement the rules from the previous section, then you might inadvertently limit some legitimate functionality of your applications. If your application code accesses a third-party API during normal operations, you have to enable outgoing traffic in the firewall for the IP addresses of that third-party service. Otherwise, the code cannot access it. If your applications only use HTTP or HTTPS as outgoing connections, it is better to use an HTTPS (web)proxy for the requests. This way, you do not have to list every third-party API service IP in your firewall rules.
Install a proxy server on a separate instance, allow that instance to open new connections to the whole Internet on the HTTP and HTTPS ports in the firewall, and then configure that instance's internal IP as the proxy server on your web servers. This way you have access to any API over HTTP or HTTPS, while an attacker cannot open new outgoing web connections from your machine by default, at least not without knowledge of your network layout.
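On the web servers, pointing outbound traffic at the proxy can be as simple as setting the standard proxy environment variables, which most HTTP clients honor. A sketch, where `10.128.0.5:3128` stands in for your proxy instance's internal IP and port:

```shell
# Route outbound web requests through the internal proxy.
# 10.128.0.5:3128 is a placeholder for your proxy's
# internal IP address and listening port.
export http_proxy="http://10.128.0.5:3128"
export https_proxy="http://10.128.0.5:3128"

# Tools such as curl will now tunnel through the proxy.
curl https://api.example.com/v1/status
```

For system services, the same variables usually have to be set in the service's environment (e.g. a systemd unit's `Environment=` lines) rather than in an interactive shell.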
Refine All Inbound Firewall Rules which have 0.0.0.0/0 as a Source
If you have any inbound firewall rules whose allowed source is the whole Internet, you should reconsider them. If you allow SSH access from the whole Internet, you have a very large attack surface: you have to rely on every user not to lose their SSH private keys, and on your particular SSH server having no security flaws.
There are multiple ways to remove, or at least reduce the number of, inbound rules with wide sources. If the specific port is only used by a limited number of employees or partners, it is better to specify the IP addresses of the offices the service is used from. If the access is for customers and the service is a web page or an API, put the Google Cloud Load Balancer in front of your instances as described earlier. If you have a limited audience but the requests are not web related and you cannot know the audience's source IP range in advance, try the network-level protection tools described in the next section.
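Tightening an existing wide rule is a one-line change with `gcloud`. The sketch below assumes a rule named `allow-ssh` that currently has `0.0.0.0/0` as its source; the rule name and the office range `198.51.100.0/24` are placeholders:

```shell
# Narrow an existing SSH rule from the whole Internet
# down to a known office range (placeholder values).
gcloud compute firewall-rules update allow-ssh \
    --source-ranges=198.51.100.0/24
```

Listing all rules with `gcloud compute firewall-rules list` first is a quick way to spot every rule that still has `0.0.0.0/0` as a source.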
Use VPN or a Jump Host for SSH or Special Port Accesses
If you have a service on a specific port intended for a limited audience, don't open that service up to the whole Internet. If the audience's IP addresses aren't known in advance, you have two solutions to try. The first is to create a jump host: a dedicated instance that exposes some kind of remote access (e.g. SSH, RDP) to the whole Internet, while the special service is reachable from it over the internal network. Your audience first connects to the jump host, then uses the service from that machine. This approach has some prerequisites to be secure:
- The jump host's IP address is not publicly known.
- The jump host is the most up-to-date and best-secured machine in your whole infrastructure.

If you want to raise the security of this approach further, you can apply some additional measures:

- Move the remote access from the well-known default port to a random one, known only to your audience, or
- Apply advanced techniques like port knocking.
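For SSH, your audience does not even have to log in to the jump host interactively: OpenSSH's `ProxyJump` option tunnels a connection to an internal machine through it. A sketch of a client-side `~/.ssh/config`, where all host names, IPs, user names, and the non-default port are placeholders:

```
# ~/.ssh/config -- reach an internal machine via the jump host.
Host jump
    HostName 203.0.113.20   # jump host's public IP (placeholder)
    User alice
    Port 22022              # non-default port known only to your audience

Host internal-db
    HostName 10.128.0.12    # internal IP, not reachable directly
    User alice
    ProxyJump jump
```

With this in place, `ssh internal-db` (or the equivalent one-off form `ssh -J jump internal-db`) transparently hops through the jump host.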
If this method is not right for you, try a VPN setup instead. In this case, you install a dedicated VPN server instance, open only the VPN server ports to the whole Internet, and use key-based authentication. You can then access the services on the internal network directly from your machine; while the packets travel over the Internet they are encapsulated and encrypted, so this approach is also secure.
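The firewall side of such a setup is a single narrow ingress rule. The sketch below assumes OpenVPN on its default UDP port 1194 (WireGuard commonly uses UDP 51820 instead); the rule name and `vpn-server` tag are placeholders:

```shell
# Expose only the VPN port to the Internet; everything else
# stays reachable solely over the VPN's internal network.
gcloud compute firewall-rules create allow-vpn \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=udp:1194 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=vpn-server
```

Authentication then happens at the VPN layer (certificates or keys), not via IP allow-listing.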
Filter Traffic between Instances on the Same Network
Isolation is a very good strategy for when everything else fails: it gives your system a last line of defense, so only some groups of machines are compromised instead of all of them at once. If an attacker gets hold of all your SSH private keys and knows the IP addresses of some machines where the SSH port is open, they can still only access those machines and not any others on the private network, because an internal network rule blocks access between them. It is therefore advisable to separate your internal network into different subnets with no access between them. In the default network, all internal traffic is allowed between the instances, but you can change this setting according to your specific needs.
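One way to replace the default "allow all internal" behavior is a broad internal DENY rule overridden by narrow tag-based ALLOW rules. In the sketch below the rule names, tags, port, and the internal range `10.128.0.0/9` are placeholders; adjust the range to match your network's actual subnets:

```shell
# Deny instance-to-instance traffic on the internal range,
# overriding the default network's allow-internal rule
# (which sits at a much lower priority, 65534).
gcloud compute firewall-rules create deny-internal \
    --network=default \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=10.128.0.0/9 \
    --priority=1000

# Re-allow only the flows you actually need, e.g. the web
# tier talking to the database tier on PostgreSQL's port.
gcloud compute firewall-rules create allow-web-to-db \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-tags=web-server \
    --target-tags=db-server \
    --priority=900
```

Alternatively, you can delete the `default-allow-internal` rule outright and build up the allowed flows from scratch.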
Remove the Public IP Address of an Instance if Not Used
If an instance does not provide a service outside the private network (e.g. it is a management node), it is a good idea to remove its public IP address. The instance then cannot receive traffic from, or send traffic directly to, the Internet, which greatly reduces its attack surface.
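Removing the external IP does not require recreating the instance. A sketch with `gcloud`, where the instance name and zone are placeholders; `external-nat` is the conventional name GCE gives the default access config, but it is worth checking the actual name first:

```shell
# Inspect the instance's current access configs.
gcloud compute instances describe my-instance \
    --zone=us-central1-a \
    --format="get(networkInterfaces[0].accessConfigs)"

# Remove the external IP from the first network interface.
gcloud compute instances delete-access-config my-instance \
    --zone=us-central1-a \
    --access-config-name="external-nat" \
    --network-interface=nic0
```

The instance keeps its internal IP and remains reachable over the private network, e.g. via a jump host or VPN as described above.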