Backing Up and Restoring Tanzu Postgres

Tanzu Postgres includes pgBackRest, a backup and restore solution that offers scalability, performance, and parallelism. Backups can be stored in any S3-compatible location, either external (such as Amazon S3) or on-premises (such as MinIO).

This topic describes backing up Tanzu Postgres to an S3-compatible location, and includes details on:

  • Configuring the S3 Kubernetes secret for S3 bucket authentication
  • Taking backups
  • Restoring a backup
  • The pgBackRest configuration file details

See the pgBackRest documentation for more detailed information about using pgBackRest.

Setting up the environment


Before creating an S3 Kubernetes secret you need:

  • access permissions to a preconfigured S3 bucket where the pgdata persistent volume (PV) backups will be stored.
  • the bucket, region, endpoint, key, and keySecret values of the preconfigured S3 bucket, which are required to set up the S3 backup secret.
  • the kubectl command line tool installed on your local client, with access permissions to the Kubernetes cluster.
  • the instance namespace, if the Postgres instance is already created. Use kubectl get namespaces for a list of available namespaces.

Create a S3 Kubernetes Secret

  1. To enable backup in Tanzu Postgres, first create a Kubernetes secret to allow access to the S3 backup location.

    Create an s3-secret-example.yaml file, which is used as a Kubernetes secret for S3 bucket access. Use the template below, and edit it according to your environment:

     apiVersion: v1
     kind: Secret
     metadata:
       name: s3-secret-example
     type: generic
     stringData:
       bucket: "your_S3_bucket_name"
       region: "your_region"
       endpoint: "your_endpoint"
       key: "your_key"
       keySecret: "your_key_secret"
       # optional fields with default values; change to match your configuration
       verifyTLS: "true"


    • name (required) is the name of the Kubernetes secret object created with this yaml.
    • bucket (required) is the name of the S3 bucket, for example “postgres-containers-dev-backup”.
    • region (required) is the S3 bucket region, for example “us-west-01”.
    • endpoint (required) is the S3 repository endpoint, for example “minio.svc.cluster.local”.
    • key (required) is the S3 key provided by your S3 storage administrator, for example “AKIAIOSFODNN7EXAMPLE”.
    • keySecret (required) is the S3 key secret provided by your S3 storage administrator, for example “wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY”.
    • port (optional) is the S3 repository port used for the endpoint connection, for example 9000.
    • uriStyle (optional) is the S3 repository URI option, for example “path”. For more information see S3 Repository URI Style Option in the pgBackRest documentation.
    • verifyTLS (optional) determines whether TLS certificate verification is enabled; the default is “true”.
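As a concrete sketch, a secret for an on-premises MinIO endpoint might combine the sample values from the field descriptions above. All values here are illustrative, not defaults:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-secret-example
type: generic
stringData:
  bucket: "postgres-containers-dev-backup"
  region: "us-west-01"
  endpoint: "minio.svc.cluster.local"
  key: "AKIAIOSFODNN7EXAMPLE"
  keySecret: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
  port: "9000"       # optional
  uriStyle: "path"   # optional
  verifyTLS: "true"  # optional
```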
  2. Create the Kubernetes secret:

    $ kubectl apply -f ./s3-secret-example.yaml -n <your-instance-namespace>

    The secret is created with the name specified in the manifest, s3-secret-example in this example.

  3. Verify that the secret was created successfully:

    $ kubectl get secrets s3-secret-example

    The output should be similar to:

    NAME                         TYPE     DATA   AGE
    s3-secret-example            Opaque   3      108m
  4. Use the secret name as the backupLocationSecret value in the Postgres instance manifest file. See Configuring the Postgres Instance Manifest file.

    Edit the pg-instance-example.yaml file, providing a value for backupLocationSecret, and use kubectl to refresh the configuration:

    kubectl apply -f pg-instance-example.yaml

    You can then use the pgBackRest commands for taking backups and restoring.

Backing up Tanzu Postgres

Performing a Backup

  1. If you have an HA configuration, verify that you are on the primary node. For more information, see Verifying High Availability.

  2. Verify your $BACKUP_STANZA_NAME variable by using:

    $ kubectl exec -ti [POD_NAME] -- bash -c env

    where POD_NAME is your Postgres instance pod. The stanza name combines the instance namespace and the instance name, in the format [namespace name]-[instance name].
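As an illustration of that naming convention, the stanza name can be composed by hand. The namespace and instance name below are hypothetical examples:

```shell
# Sketch of the stanza naming convention: [namespace name]-[instance name].
# Both values below are hypothetical examples.
NAMESPACE=default
INSTANCE_NAME=pg-instance-example
BACKUP_STANZA_NAME="${NAMESPACE}-${INSTANCE_NAME}"
echo "${BACKUP_STANZA_NAME}"
```

For a pg-instance-example instance in the default namespace, this prints default-pg-instance-example, which should match the value reported by env in the pod.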

  3. The first time you set up pgBackRest on the primary node (or the single node for non-HA configurations), create the pgBackRest stanza using kubectl exec.

    $ kubectl exec -it pg-instance-example-0 -- bash -c 'pgbackrest stanza-create --stanza=${BACKUP_STANZA_NAME}'

    The output is similar to:

    2020-03-02 20:19:12.194 P00   INFO: WAL segment 00000001000000000000000D successfully archived to '/pgbackrest/backups/archive/${BACKUP_STANZA_NAME}/11-1/0000000100000000/00000001000000000000000D-ea744b6c5b4db4f6b8b125ea76ce7e433534eaf7.gz'
    2020-03-02 20:19:12.195 P00   INFO: check command end: completed successfully (2675ms)

    Check the pgBackRest stanza using kubectl exec:

     $ kubectl exec -it pg-instance-example-0 -- bash -c 'pgbackrest check --stanza=${BACKUP_STANZA_NAME}'

    IMPORTANT: If the S3 bucket already contains backup data for an instance with the same name, the pgbackrest stanza-create command fails. Ensure that your S3 bucket is empty, or rename your instance.

  4. To perform a backup of a Postgres instance, run the pgbackrest backup command:

    $ kubectl exec -it pg-instance-example-0 -- bash -c 'pgbackrest backup --stanza=${BACKUP_STANZA_NAME}'

    The --stanza option is the only required option. By default, pgBackRest performs an incremental backup; if no full backup exists yet, it creates a full backup instead.

    Use the options --type=incr or --type=full to specify the backup type you require. If you request an incremental backup and no full backup exists, pgBackRest performs a full backup instead. To take a full backup of the ${BACKUP_STANZA_NAME} stanza, use:

    $ kubectl exec -it pg-instance-example-0 -- bash -c 'pgbackrest backup --stanza=${BACKUP_STANZA_NAME} --type=full'

    See Perform a Backup in the pgBackRest documentation for a full list of pgBackRest options.
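The type-selection behavior described above can be sketched as a simplified decision function. This is an illustration of the documented fallback, not pgBackRest code:

```shell
# Simplified sketch of pgBackRest's backup type fallback: a requested
# incremental backup is upgraded to a full backup when no full backup
# exists yet in the repository.
select_backup_type() {
  local requested=$1    # requested type: full or incr
  local full_count=$2   # number of existing full backups in the repository
  if [ "$requested" = "incr" ] && [ "$full_count" -eq 0 ]; then
    echo "full"         # no full backup exists, so a full backup is taken
  else
    echo "$requested"
  fi
}

select_backup_type incr 0   # prints: full
select_backup_type incr 2   # prints: incr
```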

Getting Backup Information

The pgbackrest info command lists backup information.

$ kubectl exec -it pg-instance-example-0 -- bash -c 'pgbackrest info --stanza=${BACKUP_STANZA_NAME}'
    status: ok
    cipher: aes-256-cbc

    db (current)
        wal archive min/max (11-1): 000000010000000000000005/000000010000000000000005

        full backup: 20200302-183323F
            timestamp start/stop: 2020-03-02 18:33:23 / 2020-03-02 18:33:34
            wal start/stop: 000000010000000000000005 / 000000010000000000000005
            database size: 31.0MB, backup size: 31.0MB
            repository size: 3.7MB, repository backup size: 3.7MB

        incr backup: 20200302-183323F_20200302-184244I
            timestamp start/stop: 2020-03-02 18:42:44 / 2020-03-02 18:42:48
            wal start/stop: 000000010000000000000007 / 000000010000000000000007
            database size: 31.0MB, backup size: 8.2KB
            repository size: 3.7MB, repository backup size: 519B
            backup reference list: 20200302-183323F

See Backup Information in the pgBackRest documentation for other pgbackrest info command options.
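When scripting a restore, the backup ID passed to the restore --set option can be pulled out of the pgbackrest info text output. A minimal sketch, using the sample output above as saved text (in practice you would pipe the kubectl exec output instead):

```shell
# Extract backup labels from `pgbackrest info` text output.
# The saved snippet below is taken from the sample output above.
info_output='        full backup: 20200302-183323F
        incr backup: 20200302-183323F_20200302-184244I'

# The label is the last field on each "backup:" line.
labels=$(echo "$info_output" | awk '/backup:/ {print $NF}')
echo "$labels"
```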

Restoring a Backup

Restoring a single Postgres instance

The pgbackrest restore command restores backups. You can restore all databases or individual databases. The built-in databases template0, template1, and postgres are always restored.

Restoring requires manually stopping and restarting the Postgres instance. Because Tanzu Postgres uses pg_auto_failover to run and monitor the postgres process, you must find the PID of that child process, suspend it, and then stop Postgres:

  1. Get the PID of the pg_auto_failover child process:

    $ kubectl exec -t pg-instance-example-0 -- ps -ef | grep "pg_autoctl: start/stop postgres" | grep -v grep | awk '{print $2}'
  2. Suspend the pg_auto_failover child process:

    $ kubectl exec -t pg-instance-example-0 -- kill -STOP <pid from above>
  3. Stop the Postgres instance:

    $ kubectl exec -t pg-instance-example-0 -- pg_ctl stop
  4. Perform a restore using a backup ID and a timestamp selected from the output of the pgbackrest info --stanza=${BACKUP_STANZA_NAME} command. This operation performs a Point-in-Time Recovery (PITR) that restores the data from the selected backup up to a specified time, transaction ID, or recovery point. For more information about pgBackRest PITR, see Point-in-Time Recovery.

    $ kubectl exec -t pg-instance-example-0 -- bash -c 'pgbackrest restore --stanza=dev_backup --set=20200302-183323F --force --type=time --target-action=promote --target="2020-03-02 18:33:23"'

    In this example:

    • dev_backup is the example stanza name,
    • 20200302-183323F is the backup ID,
    • 2020-03-02 18:33:23 is the target timestamp.

    The output is similar to:

    2020-03-05 18:52:11.926 P00   INFO: restore backup set 20200302-183323F
    2020-03-05 18:52:11.928 P00   INFO: remove invalid files/links/paths from '/pgsql/data'
    2020-03-05 18:52:12.368 P02   INFO: restore file /pgsql/data/global/pg_control.pgbackrest.tmp (8KB, 88%) checksum b0b618494c0f57557afbc61df0948182861b55b5
    2020-03-05 18:52:12.487 P02   INFO: restore file /pgsql/data/backup_label (257B, 99%) checksum 58a0d0f21f0d16ca8f6f076754e4bcefdc2e559d
    2020-03-05 18:52:12.578 P00   INFO: write /pgsql/data/recovery.conf
    2020-03-05 18:52:12.599 P00   INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
    2020-03-05 18:52:12.802 P00   INFO: restore command end: completed successfully (1012ms)
  5. Resume the pg_auto_failover child process, which brings Postgres back up:

    $ kubectl exec -t pg-instance-example-0 -- kill -CONT <pid from above>

See Restore a Backup on the pgBackRest website for detailed information about pgBackRest restore options.

Restoring in a High Availability Cluster

To restore Tanzu Postgres in a high availability scenario, you must disable the HA configuration, delete the secondary node, remove the secondary data volume claim, and follow the restore process on the primary node.

  1. Go to the directory where you unpacked the Tanzu Postgres distribution.

  2. Edit the pg-instance yaml to disable HA, and re-apply the configuration using kubectl.
    Note: Disabling HA is not a supported workflow outside of a restore scenario.

    $ kubectl apply -f pg-instance-example.yaml
  3. Using kubectl on the monitor pod, identify and drop the replica node. Under the Host:Port column, the hostname is based on the instance name and namespace (default in this example). Identify the replica node by running show state and finding the node with secondary under Current State:

    $ kubectl exec -t pg-instance-example-monitor-0 -- pg_autoctl show state
    Name   |  Node |                                                      Host:Port |       LSN | Reachable |  Current State |   Assigned State
    node_1 |     1 | | 0/4000108 |       yes |   wait_primary |      wait_primary
    node_2 |     2 | | 0/4000108 |        no |      secondary |        catchingup

    Drop the replica (secondary) node using drop node:

    $ kubectl exec -t pg-instance-example-monitor-0 -- pg_autoctl drop node --hostname <replica-hostname> --pgport 5432

    Delete the corresponding persistent volume claim (PVC):

    $ kubectl delete pvc pg-instance-example-pgdata-pg-instance-example-1 --namespace <your-instance-namespace>
  4. Restore the data following the steps in Restoring a single Postgres instance.

  5. Re-enable high availability by modifying the pg-instance yaml, and reapply via kubectl.

    $ kubectl apply -f pg-instance-example.yaml

Validating a Successful Restore

  1. View the Postgres log to verify that the Postgres server started.

    $ kubectl logs pg-instance-example-0

    The pod events (shown by kubectl describe pod) may also record past failures, for example:

    Normal   Started                 76s (x2 over 78s)    kubelet, gke-postgres-k8s-cluster-default-pool-6a8f2693-rjw9  Started container pg-container
    Warning  Unhealthy               50s                  kubelet, gke-postgres-k8s-cluster-default-pool-6a8f2693-rjw9  Readiness probe failed: psql: FATAL:  the database system is shutting down
    Warning  Unhealthy               44s (x2 over 47s)    kubelet, gke-postgres-k8s-cluster-default-pool-6a8f2693-rjw9  Readiness probe failed: psql: could not connect to server: No such file or directory
    Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

    These messages are normal and expected. Look for status: ok and ready: true messages near the end of the log to verify the server is currently running.

  2. Verify that you can connect to the Postgres instance and access the restored data.

    $ kubectl exec -it pg-instance-example-0 -- /bin/bash -c "psql somedb -c 'select count(1) from sometable'"

Review the pgBackRest Configuration

For reference and troubleshooting, you can review the Tanzu Postgres pgBackRest configuration properties in pgbackrest.conf in the /pgsql/data/ directory. The file specifies the backup details, and is pre-configured with the stanza name for the Postgres instance.

An example generic template for the pgbackrest.conf file may be found in /pgbackrest.template.

To view the contents of the pgbackrest.conf file:

$ kubectl exec -it pg-instance-example-0 -- cat /pgsql/data/pgbackrest.conf
[global]
# configure encryption
repo1-cipher-pass=Fi2iRk4rvZ66MqSgPktzJERJ2987VG
repo1-cipher-type=aes-256-cbc
repo1-path=/pgbackrest/backups
# configure retention
repo1-retention-full=2
repo1-retention-diff=1
# configure parallelism
process-max=2
# configure logs
log-path=/pgbackrest/logs
log-level-file=debug
log-level-console=debug
# log-level-console=info
archive-async=y
spool-path=/pgbackrest/spool
start-fast=y

[${BACKUP_STANZA_NAME}]
pg1-path=/pgsql/data


The [global] section defines the location of backups, logging settings, and encryption settings. The pgBackRest configuration file sets the following backup defaults:

  • repo1-path=/pgbackrest/backups - Backups and archives are stored in the /pgbackrest/backups directory (the repo1 repository).
  • repo1-cipher-pass=Fi2iRk4rvZ66MqSgPktzJERJ2987VG - The passphrase used to encrypt the repository. The companion setting repo1-cipher-type=aes-256-cbc sets the repo1 repository encryption to AES-256-CBC.
  • repo1-retention-full=2 - Two full backups are retained. When a full backup expires, differential and incremental backups associated with the full backup also expire.
  • repo1-retention-diff=1 - One differential backup per full backup is retained. When a differential backup expires, the incremental backups associated with it also expire.
  • process-max=2 - Backups are performed with two parallel processes. Increasing this parameter can improve backup performance.
  • log-path=/pgbackrest/logs - Log files are saved at /pgbackrest/logs.
  • log-level-console=debug - Send debug, detail, info, warning, and error messages to the console. Change to info if the default debug level is too verbose.
  • log-level-file=debug - Send debug, detail, info, warning, and error messages to the log file.
  • archive-async=y - Asynchronous operation is enabled for the archive-push and archive-get commands.
  • spool-path=/pgbackrest/spool - WAL segments are queued in the /pgbackrest/spool directory.
  • start-fast=y - Forces a checkpoint so the backup starts immediately.
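The retention rules above can be illustrated with a simplified sketch (not pgBackRest code): with repo1-retention-full=2, completing a third full backup expires the oldest full backup together with its dependent backups.

```shell
# Simplified illustration of repo1-retention-full=2 (not pgBackRest code).
# Dependent incremental/differential labels embed their parent full
# backup's label as a prefix, so expiring a full backup expires every
# backup that shares its label prefix.
backups="20200301-100000F
20200301-100000F_20200301-120000I
20200302-100000F
20200303-100000F"
retain_full=2

full_count=$(echo "$backups" | grep -c 'F$')
# The oldest full backups beyond the retention limit expire:
expired_full=$(echo "$backups" | grep 'F$' | head -n $((full_count - retain_full)))
echo "$backups" | grep "^${expired_full}"   # the expired full and its incremental
```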

The [${BACKUP_STANZA_NAME}] section defines the stanza-specific settings. The pg1-path parameter specifies the data directory for the ${BACKUP_STANZA_NAME} stanza.

See pgBackRest Configuration Reference for information about the pgBackRest configuration file organization and parameters.