The scalability of Kubernetes lets you easily run multiple apps inside one cluster. However, Kubernetes lacks network isolation by default, so all your apps are free to communicate with each other. Network Policies are Kubernetes objects that address this issue.
In this article, we’ll provide a guide to using Network Policies, including how to create them and some common examples. Let’s begin!
Network Policies are a mechanism for controlling network traffic flow in Kubernetes clusters. They allow you to define which of your Pods are allowed to exchange network traffic. You should use them in your clusters to prevent apps from reaching each other over the network, which will help limit the damage if one of your apps is compromised.
Each Network Policy you create targets a group of Pods and sets the Ingress (incoming) and Egress (outgoing) network endpoints those Pods can communicate with.
There are three different ways to identify target endpoints:
- Specific Pods (Pods matching a label are allowed)
- Specific Namespaces (all Pods in the namespace are allowed)
- IP address blocks (endpoints with an IP address in the block are allowed)
Network Policies can set a different list of allowed targets for their Ingress and Egress rules. It’s also possible to use Network Policies to block all network communications for a Pod or restrict traffic to a specific port range.
Network Policies are additive, so you can have multiple policies targeting a particular Pod. The sum of the “allow” rules from all the policies will apply. Once a Pod is selected by at least one policy for a given direction, any traffic in that direction that doesn’t match one of the “allow” rules is blocked.
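As an illustration of this additive behavior (the labels and policy names below are hypothetical), both of the following policies select Pods labeled app: demo, so those Pods end up accepting Ingress traffic from Pods labeled app: frontend and from Pods labeled app: monitoring:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend       # hypothetical example policy
spec:
  podSelector:
    matchLabels:
      app: demo
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring     # hypothetical example policy
spec:
  podSelector:
    matchLabels:
      app: demo
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: monitoring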
In the OSI networking model, Network Policies represent layer 3/4 controls. They work with IP addresses and port numbers at the network and transport layers. This provides quite granular options for configuring the network flows you require.
Nonetheless, Network Policies aren’t a complete solution. There are several limitations in the current version, such as the inability to log events when a Network Policy blocks traffic and the lack of support for explicit deny policies. It’s also impossible to stop loopback or incoming host traffic using a Network Policy. If you need these features, you should consider using a service mesh in addition to your Network Policies.
How are Network Policies implemented?
Responsibility for implementing the features provided by Network Policies falls to the CNI networking plugin you’re using in your cluster. Your plugin must support use of network policies for your rules to have any effect.
Managed Kubernetes services from cloud providers typically support Network Policies, although some require you to enable the feature when you create your cluster. If you’re running your own cluster, you should ensure you’re using a compatible CNI plugin. The popular Flannel plugin doesn’t support network policies, while Calico does.
Using Network Policies is a best practice for a secure Kubernetes configuration. They prevent Pod network access from being unnecessarily broad, such as in the following scenarios:
- Ensuring a database can only be accessed by the app it’s part of: Databases running in Kubernetes are often intended to be solely accessed by other in-cluster Pods, such as the Pods that run your app’s backend. Network Policies allow you to enforce this constraint, preventing other apps from communicating with your database server (a sketch of this scenario follows after this list).
- Isolating Pods from your cluster’s network: Some sensitive Pods might not need to accept any inbound traffic from other Pods in your cluster. Using a Network Policy to block all Ingress traffic to them will tighten your workload’s security.
- Allowing specific apps or namespaces to communicate with each other: Kubernetes namespaces are the primary mechanism for separating objects associated with different apps, teams, and environments. You can use Network Policies to network-isolate these resources and achieve stronger multi-tenancy.
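As a sketch of the first scenario (the app: postgres and app: backend labels and port 5432 are illustrative assumptions, not values from your cluster), a policy like the following would let only the backend Pods reach the database:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-access
spec:
  podSelector:
    matchLabels:
      app: postgres        # the database Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend # only the app's backend Pods may connect
      ports:
        - protocol: TCP
          port: 5432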
Now let’s see an example of creating and using a simple Network Policy.
Before following this guide, you’ll need access to a Kubernetes cluster that’s using a CNI plugin with Network Policy support. If you need to create one, you can start a Minikube cluster and opt in to using Calico:
$ minikube start --cni calico
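If you’d like to confirm Calico started correctly before continuing, you can check for its node agent Pods in the kube-system namespace (k8s-app=calico-node is the label applied by the standard Calico manifest):
$ kubectl get pods -n kube-system -l k8s-app=calico-node
The calico-node Pod should reach the Running state before you continue.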
Next, create a pair of Pods that will be used to test whether network communications are being blocked:
$ kubectl run pod1 --image nginx:latest -l app=pod1
pod/pod1 created
$ kubectl run pod2 --image nginx:latest -l app=pod2
pod/pod2 created
The -l flag sets a label that will let you reference the Pods within your Network Policies. Use Kubectl’s get pods command with the -o wide option to check your Pods are running and learn their IP addresses:
$ kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          59s   10.244.120.70   minikube   <none>           <none>
pod2   1/1     Running   0          57s   10.244.120.71   minikube   <none>           <none>
Your IP addresses will be different to those shown above. You should adjust the example commands in the rest of this tutorial to include your Pod IPs.
Now run a command inside pod1 to verify that it can communicate with pod2:
$ kubectl exec -it pod1 -- curl 10.244.120.71 --max-time 1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
pod1 has successfully received a response from the NGINX server running in pod2. This is the expected result because the Kubernetes default settings allow all Pods to communicate.
Creating a Network Policy
Next, copy the following YAML manifest and save it to np.yaml in your working directory:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: pod2
  policyTypes:
    - Ingress
    - Egress
This is one of the simplest possible Network Policies. It selects the pod2 Pod by matching its labels using a podSelector. This is the Pod the Network Policy’s Ingress and Egress rules will apply to. Because the Ingress and Egress policy types are set but no further rules are added, the policy will block all network traffic to and from the Pod.
Use Kubectl to apply your Network Policy:
$ kubectl apply -f np.yaml
networkpolicy.networking.k8s.io/network-policy created
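If you want to double-check what was created, you can describe the policy; the output should show the Pod selector and confirm that no Ingress or Egress traffic is currently allowed:
$ kubectl describe networkpolicy network-policy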
Now, repeat the earlier command to try to communicate with pod2 from pod1:
$ kubectl exec -it pod1 -- curl 10.244.120.71 --max-time 1
curl: (28) Connection timed out after 1001 milliseconds
command terminated with exit code 28
This time the connection does not succeed. The Network Policy targeting pod2 blocks all network traffic, so pod1 cannot communicate.
Adding an Allow Rule
Next, modify your np.yaml manifest to include the following content:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: pod2
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: pod1
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: pod1
This Network Policy states that pod2 accepts Ingress and Egress traffic when the other end of the connection is a Pod that’s labeled app=pod1.
Apply the policy to your cluster:
$ kubectl apply -f np.yaml
networkpolicy.networking.k8s.io/network-policy configured
Now repeat the test command:
$ kubectl exec -it pod1 -- curl 10.244.120.71 --max-time 1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
Now pod1 can communicate with pod2 again because it matches the selectors in the Network Policy’s allow rules.
If you create another unlabeled Pod, it’ll be blocked from communicating because it won’t match the Network Policy’s selectors:
$ kubectl run pod3 --image nginx:latest
$ kubectl exec -it pod3 -- curl 10.244.120.71 --max-time 1
curl: (28) Connection timed out after 1001 milliseconds
command terminated with exit code 28
Your Network Policy Ingress and Egress rules can use a few different selector types to identify the Pods that are allowed to communicate with the policy’s target. We’ll discuss them in this section.
podSelector
As shown in the examples above, podSelector selects Pods that match a defined set of labels.
podSelector:
  matchLabels:
    app: demo
namespaceSelector
namespaceSelector is similar to podSelector but it selects an entire namespace using labels. All the Pods in the namespace will be included.
namespaceSelector:
  matchLabels:
    app: demo
You can match a specific namespace by name by referencing the kubernetes.io/metadata.name label that Kubernetes automatically assigns:
namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: demo-namespace
ipBlock
ipBlock selectors are used to allow traffic to or from specific IP address CIDR ranges. This is intended to be used to filter traffic from IP addresses that are outside the cluster. It’s not suitable for controlling Pod-to-Pod traffic because Pod IP addresses are ephemeral; they will change when a Pod is replaced.
ipBlock:
  cidr: 10.0.0.0/24
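ipBlock also accepts an except list, which removes smaller ranges from the allowed CIDR. For example, this rule (with illustrative ranges) allows the whole 10.0.0.0/24 range apart from one /28 subnet:
ipBlock:
  cidr: 10.0.0.0/24
  except:
    - 10.0.0.16/28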
Combining selectors
You can use multiple selectors to create complex conditions in your policies. The following policy selects all the Pods that are either labeled app: demo-api or belong to a namespace labeled app: demo:
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            app: demo
      - podSelector:
          matchLabels:
            app: demo-api
This example represents a logical “or”. You can also create “and” conditions by combining selectors together:
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            app: demo
        podSelector:
          matchLabels:
            app: demo-api
This rule only matches Pods that are both labeled app: demo-api and located in a namespace labeled app: demo.
Setting allowed port ranges
The examples above permit allowed Pods to communicate using the entire available port range. However, adding a ports field to your Ingress and Egress rules lets you restrict this to just the ports your app actually requires:
ingress:
  - from:
      - podSelector:
          matchLabels:
            app: demo
    ports:
      - protocol: TCP
        port: 32000
        endPort: 32100
Now communication is only allowed on TCP ports in the range 32000 to 32100. You can omit the endPort field if you only use a single port.
Here are a few examples of useful Network Policies that you might need in your cluster:
Deny all traffic to a Pod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
spec:
  podSelector:
    matchLabels:
      app: demo
  policyTypes:
    - Ingress
    - Egress
Deny all traffic to all Pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
Deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
spec:
  podSelector: {}
  policyTypes:
    - Ingress
Deny all egress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
spec:
  podSelector: {}
  policyTypes:
    - Egress
Allow all traffic to a Pod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
spec:
  podSelector:
    matchLabels:
      app: demo
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - {}
  egress:
    - {}
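Allow traffic from Pods in the same namespace
Another pattern you may find useful (sketched here) allows Ingress from every Pod in the policy’s own namespace. An empty podSelector inside the from clause matches all Pods in that namespace, while traffic from other namespaces remains subject to your other policies:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}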
Now that you’ve seen how to use Network Policies, here are a few best practices that will give you the greatest security protection:
1. Ensure all Pods are covered by a Network Policy
All Pods in a Kubernetes cluster should be subject to Network Policies that limit their network interactions to the minimal set of Ingress/Egress targets they require. Not setting Network Policies allows all Pods to communicate, which is a potential security risk.
2. Use precise Ingress/Egress target selectors
Keep your Pod selectors, namespace selectors, and ipBlock ranges as precise as possible to prevent them from accidentally selecting new Pods in the future. For example, a namespace selector is a poor choice if you’re likely to deploy additional Pods to that namespace which shouldn’t automatically be able to communicate with your Network Policy’s target.
3. Set a default deny policy, then add your allow policies
You can ensure your cluster has complete Network Policy coverage by creating a default “deny all” policy (as shown in the example above), then adding specific “allow” policies that authorize required traffic flows. This method means new Pods are protected from accidental network exposure, even if you forget to create a specific Network Policy for them.
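One practical point when applying a default “deny all” Egress policy: it also blocks DNS lookups, so most workloads will need an explicit allow rule for DNS. A sketch of such a policy, assuming your cluster DNS runs in kube-system with the common k8s-app: kube-dns label, might look like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53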
4. Regularly review your policies and keep them updated
Your Network Policy requirements are likely to change as your cluster evolves with new Pods and namespaces. You should regularly review your policies and make any alterations required so they remain appropriate for your environment.
5. Test your policies to check they’re working as intended
One of the difficulties associated with Network Policies is the lack of clear visibility into whether they’re working. It’s worthwhile to test new policies to be sure they’re configured correctly. As in the examples above, you can create a new Pod with labels that match your Network Policy’s selectors, then use commands like curl and ping to test the connectivity available within the container.
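For example, reusing the approach from the walkthrough above, you could start a temporary Pod carrying a label your policy allows and check whether it can reach the protected workload (replace the label and IP address with values from your own cluster):
$ kubectl run np-test --image nginx:latest -l app=pod1
$ kubectl exec -it np-test -- curl 10.244.120.71 --max-time 1
$ kubectl delete pod np-test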
Finally, you should ensure that Network Policies are only part of your Kubernetes isolation strategy. They need to be supported by other protections too, such as correct container security contexts and appropriate RBAC rules that block unauthorized user access to your Pods.
In this guide, we’ve explored Kubernetes Network Policies, objects that allow you to control which Pods in your cluster can communicate with each other. Pods have no network isolation by default, so it’s vital you set up appropriate Network Policies for each of your apps. This helps to secure your cluster and its workloads against compromised containers that try to make malicious network exchanges.
You can also learn how to secure and isolate your cluster by following the 4C security model—cloud, cluster, container, and code. Network policies affect the cluster tier and should be used as part of a holistic security approach that considers all four perspectives.
And check out how Spacelift brings the benefits of CI/CD to infrastructure management. You can use Spacelift to deploy changes to Kubernetes with GitOps while benefiting from robust security policies and automated compliance checks. Spacelift also works with other infrastructure as code (IaC) providers, so you can use similar techniques to manage every component of your stack.
Manage Kubernetes Easier and Faster
Spacelift allows you to automate, audit, secure, and continuously deliver your infrastructure. It helps overcome common state management issues and adds several must-have features for infrastructure management.