Network Policy Configuration

Tanzu Postgres requires communication between different Kubernetes pods, DNS services, and the Kubernetes API server.

This topic describes how to configure Network Policies in Kubernetes clusters that use a Container Network Interface (CNI) network plugin configured with restrictive policies. For more information on the different types of policies, see Network Policies in the Kubernetes documentation.

The following example YAML file shows a strict default-deny policy, which is the recommended best practice for some CNIs, such as Calico:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: MY-NAMESPACE
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

This policy selects all pods in the namespace called MY-NAMESPACE and, because it specifies no ingress or egress rules, denies all inbound and outbound traffic for them.

To successfully deploy Tanzu Postgres, you must allow communication between the Postgres pods, the Operator, the Kubernetes API server, and cluster services. For details, see Allowing Operator Communication and Allowing Instance Communication.

Allowing Operator Communication

The Operator pods need to communicate with the Kubernetes API server in order to reconcile Postgres instances. This example shows how to amend a strict network policy to permit that communication.

Get the Cluster IP and port number of the Kubernetes service; these values are used in the NetworkPolicy specification. Use the following command to display this information for the default namespace:

$ kubectl get endpoints --namespace default kubernetes
NAME         ENDPOINTS            AGE
kubernetes   192.168.64.38:8443   42h

Where 192.168.64.38 is the IP address and 8443 is the port number in the example scenario.
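The address and port can also be extracted programmatically. The following is a minimal sketch, not part of the product tooling; it parses the ENDPOINTS column from the sample output above and assumes the single-address IP:PORT format shown there:

```shell
# Parse an ENDPOINTS value of the form IP:PORT into the pieces the
# NetworkPolicy needs. The sample line below is the example output above;
# on a live cluster you would capture it with:
#   kubectl get endpoints --namespace default kubernetes --no-headers
endpoints_line="kubernetes   192.168.64.38:8443   42h"

# The second whitespace-separated column is the IP:PORT pair.
addr=$(echo "$endpoints_line" | awk '{print $2}')
ip=${addr%%:*}     # strip everything from the first colon onward
port=${addr##*:}   # strip everything up to the last colon

echo "cidr: ${ip}/32"   # value for spec.egress[0].to[0].ipBlock.cidr
echo "port: ${port}"    # value for spec.egress[0].ports[0].port
```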

Using the IP address and port, create the following NetworkPolicy:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: operator-to-apiserver-egress
spec:
  podSelector:
    matchLabels:
      app: postgres-operator
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 192.168.64.38/32
      ports:
        - port: 8443
          protocol: TCP

Apply the policy to your cluster in the namespace where the Operator is deployed:

$ kubectl apply -n OPERATOR-NAMESPACE -f sample-network-policy.yaml
networkpolicy.networking.k8s.io/operator-to-apiserver-egress created
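Because the API server address differs between clusters, it can be convenient to generate the manifest from variables rather than edit it by hand. This is an illustrative sketch, not part of the product; the APISERVER_IP and APISERVER_PORT variable names are arbitrary:

```shell
# Generate the operator egress policy from shell variables, so the same
# script works across clusters with different API server endpoints.
APISERVER_IP="192.168.64.38"    # from `kubectl get endpoints`
APISERVER_PORT="8443"

cat > sample-network-policy.yaml <<EOF
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: operator-to-apiserver-egress
spec:
  podSelector:
    matchLabels:
      app: postgres-operator
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: ${APISERVER_IP}/32
      ports:
        - port: ${APISERVER_PORT}
          protocol: TCP
EOF

# The generated file is then applied as shown above:
#   kubectl apply -n OPERATOR-NAMESPACE -f sample-network-policy.yaml
```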

Allowing Instance Communication

To ensure that the data and monitor pods can communicate for replication and failover, follow these steps:

  1. Allow access to the DNS server for DNS lookup of the other pods’ addresses.

    Label the kube-system namespace so that it can be matched by the namespaceSelector section of the NetworkPolicy spec. For example:

    $ kubectl label namespace kube-system networking/namespace=kube-system
    namespace/kube-system labeled
    

    The following NetworkPolicy allows all pods in INSTANCE-NAMESPACE egress access to the DNS server in kube-system:

    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-dns-access
    spec:
      podSelector:
        matchLabels: {}
      policyTypes:
        - Egress
      egress:
        - to:
            - namespaceSelector:
                matchLabels:
                  networking/namespace: kube-system
          ports:
            - port: 53
              protocol: UDP
            - port: 53
              protocol: TCP
    

    Save this sample to a file, and apply it to your cluster:

    $ kubectl apply -n INSTANCE-NAMESPACE -f dns-policy-sample.yaml
    networkpolicy.networking.k8s.io/allow-dns-access created
    
  2. Allow communication within the Postgres cluster. The following NetworkPolicy allows the monitor and data pods to communicate (assuming the default Postgres port of 5432).

    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: data-monitor-ingress-egress
    spec:
      podSelector:
        matchLabels:
          app: postgres
      policyTypes:
        - Ingress
        - Egress
      egress:
        - ports:
            - port: 5432
              protocol: TCP
          to:
            - podSelector:
                matchLabels:
                  app: postgres
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: postgres
          ports:
            - port: 5432
              protocol: TCP
    

    Save this sample to a file, and apply it to your cluster:

    $ kubectl apply -n INSTANCE-NAMESPACE -f monitor-policy-sample.yaml
    networkpolicy.networking.k8s.io/data-monitor-ingress-egress created
    
  3. The Postgres instance monitor pod needs to communicate with the Kubernetes API server to label the data pods, denoting the read-only replica, the read-write primary, and any unavailable pods.

    Use the following command to note the Cluster IP and the port number of the Kubernetes service:

    $ kubectl get endpoints --namespace default kubernetes
    NAME         ENDPOINTS            AGE
    kubernetes   192.168.64.38:8443   42h
    

    where 192.168.64.38 is the IP address and 8443 is the port number used in the NetworkPolicy for the service.

    Using the IP address and port, create the following NetworkPolicy:

    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: monitor-to-apiserver-egress
    spec:
      podSelector:
        matchLabels:
          app: postgres
          type: monitor
      policyTypes:
        - Egress
      egress:
        - to:
            - ipBlock:
                cidr: 192.168.64.38/32
          ports:
            - port: 8443
              protocol: TCP
    

    Save this sample to a file, and apply it to your cluster:

    $ kubectl apply -n INSTANCE-NAMESPACE -f apiserver-policy-sample.yaml
    networkpolicy.networking.k8s.io/monitor-to-apiserver-egress created
    

    where INSTANCE-NAMESPACE is the Postgres instance namespace and apiserver-policy-sample.yaml is your policy YAML file.
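After all three policies are applied, a quick way to confirm them is to list the NetworkPolicy objects in the instance namespace and check the names. The check below runs against a captured sample listing (hypothetical output); on a live cluster you would instead capture the listing with kubectl, as shown in the comment:

```shell
# Names of the NetworkPolicies created in the steps above.
expected="allow-dns-access data-monitor-ingress-egress monitor-to-apiserver-egress"

# Sample listing (hypothetical). On a live cluster, replace with:
#   listing=$(kubectl get networkpolicy -n INSTANCE-NAMESPACE --no-headers)
listing="allow-dns-access              <none>                      2m
data-monitor-ingress-egress   app=postgres                2m
monitor-to-apiserver-egress   app=postgres,type=monitor   1m"

# Report any expected policy that does not appear in the listing.
missing=0
for name in $expected; do
  if ! echo "$listing" | grep -q "^${name}"; then
    echo "missing NetworkPolicy: ${name}"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all expected NetworkPolicies present"
```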