Upgrading Apigee hybrid to version 1.16

This procedure covers upgrading from Apigee hybrid version 1.15.x to Apigee hybrid version 1.16.0.

Changes from Apigee hybrid v1.15

Note the following changes:

  • Seccomp profiles: Starting in version 1.16, you can apply seccomp profiles to your Apigee hybrid runtime components, significantly enhancing the security posture of your deployment. This feature lets Apigee administrators and security teams restrict the system calls a containerized process can make to the host's kernel. By limiting a container to only the necessary syscalls, you can:
    • Enhance Security: Mitigate the risk of container breakouts and privilege escalation.
    • Enforce Least Privilege: Ensure components only have access to the exact system calls required for their operation.
    • Meet Compliance: Provide a critical control for meeting stringent security compliance requirements.
    For more information, see Configure Seccomp profiles for pod security.
  • UDCA removal: In Apigee hybrid version 1.16, the Unified Data Collection Agent (UDCA) component has been removed. Sending analytics, trace, and deployment status data to the Apigee control plane is now handled by a Google Cloud Pub/Sub-based data pipeline, which has been the default data collection mechanism since Apigee hybrid version 1.14.0.
  • apigee-guardrails service account: In v1.16.0, Apigee hybrid introduces an apigee-guardrails Google IAM service account. The apigee-operator chart uses this service account during installation to verify that all required APIs are enabled in your project.

  • Support for cert-manager releases 1.18 and 1.19: Apigee hybrid v1.16 supports cert-manager release 1.18 and release 1.19. In cert-manager release 1.18, there is a change to the default value of Certificate.Spec.PrivateKey.rotationPolicy that can impact traffic. If you are upgrading from a previous version of Apigee hybrid and you are upgrading to cert-manager release 1.18 or later, follow the Upgrade cert-manager procedure in this guide.
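As general Kubernetes background for the seccomp change above: a seccomp profile is applied through a pod or container securityContext. The following is a generic illustration only, not an Apigee manifest; see Configure Seccomp profiles for pod security for the supported Apigee configuration.

```yaml
# Generic Kubernetes example (not an Apigee manifest): a RuntimeDefault
# seccomp profile restricts the container to the container runtime's
# default syscall allowlist.
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginx:1.27
```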

For additional information about features in hybrid version 1.16, see the Apigee hybrid v1.16.0 release notes.

Prerequisites

Before upgrading to hybrid version 1.16, make sure your installation meets the following requirements:

Before you upgrade to 1.16.0 - limitations and important notes

  • Apigee hybrid 1.16.0 introduces a new enhanced per-environment proxy limit that lets you deploy more proxies and shared flows in a single environment. See Limits: API Proxies to understand the limits on the number of proxies and shared flows you may deploy per environment. This feature is available only on newly created hybrid organizations, and cannot be applied to upgraded orgs. To use this feature, perform a fresh installation of hybrid 1.16.0, and create a new organization.

    This feature is available exclusively as part of the 2024 subscription plan, and is subject to the entitlements granted under that subscription. See Enhanced per-environment proxy limits to learn more about this feature.

  • Upgrading to Apigee hybrid version 1.16 may require downtime.

    When upgrading the Apigee controller to version 1.16.0, all Apigee deployments undergo a rolling restart. To minimize downtime in production hybrid environments during a rolling restart, make sure you are running at least two clusters (in the same or different region/data center). Divert all production traffic to a single cluster and take the cluster you are about to upgrade offline, and then proceed with the upgrade process. Repeat the process for each cluster.

    Apigee recommends that once you begin the upgrade, you upgrade all clusters as soon as possible to reduce the chance of production impact. There is no time limit on when the remaining clusters must be upgraded after the first one. However, Cassandra backup and restore do not work across mixed versions: for example, a backup from hybrid 1.15 cannot be used to restore a hybrid 1.16 instance.

  • Management plane changes do not need to be fully suspended during an upgrade. Any required temporary suspensions to management plane changes are noted in the upgrade instructions below.

Upgrading to version 1.16.0 overview

The procedures for upgrading Apigee hybrid are organized in the following sections:

  1. Prepare to upgrade.
  2. Install hybrid runtime version 1.16.0.

Prepare to upgrade to version 1.16

Back up your hybrid installation

  1. These instructions use the environment variable APIGEE_HELM_CHARTS_HOME for the directory in your file system where you have installed the Helm charts. If needed, change directory into this directory and define the variable with the following command:

    Linux

    export APIGEE_HELM_CHARTS_HOME=$PWD
    echo $APIGEE_HELM_CHARTS_HOME

    Mac OS

    export APIGEE_HELM_CHARTS_HOME=$PWD
    echo $APIGEE_HELM_CHARTS_HOME

    Windows

    set APIGEE_HELM_CHARTS_HOME=%CD%
    echo %APIGEE_HELM_CHARTS_HOME%
  2. Make a backup copy of your version 1.15 $APIGEE_HELM_CHARTS_HOME/ directory. You can use any backup process. For example, you can create a tar file of your entire directory with:
    tar -czvf $APIGEE_HELM_CHARTS_HOME/../apigee-helm-charts-v1.15-backup.tar.gz $APIGEE_HELM_CHARTS_HOME
  3. Back up your Cassandra database following the instructions in Cassandra backup and recovery.
  4. Make sure that your TLS certificate and key files (.crt, .key, and/or .pem) reside in the $APIGEE_HELM_CHARTS_HOME/apigee-virtualhost/ directory.
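Before moving on, you can sanity-check the backup archive created in step 2. The following is a runnable sketch with illustrative paths; substitute your actual archive location.

```shell
# Demo scaffolding: create a small directory and archive it, so this
# snippet runs standalone. In a real run, point BACKUP at the archive
# you created in step 2 instead.
mkdir -p /tmp/apigee-charts-demo
echo "name: apigee-operator" > /tmp/apigee-charts-demo/Chart.yaml
tar -czf /tmp/apigee-backup-demo.tar.gz -C /tmp apigee-charts-demo

# tar -tzf lists the archive contents without extracting; a non-zero
# exit status means the archive is corrupt or truncated.
tar -tzf /tmp/apigee-backup-demo.tar.gz
```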

Upgrade your Kubernetes version

Check your Kubernetes platform version and, if needed, upgrade your Kubernetes platform to a version that is supported by both hybrid 1.15 and hybrid 1.16. Follow your platform's documentation if you need help.
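The version check can be scripted. In this sketch, both version strings are illustrative: read the real cluster version from `kubectl version`, and take the supported range from the Apigee hybrid supported platforms page.

```shell
# Sketch: compare a cluster's Kubernetes version against a required minimum.
# Both values below are placeholders for illustration.
CLUSTER_VERSION="1.29.4"
MIN_VERSION="1.28.0"

# sort -V orders version strings numerically; if the minimum sorts first,
# the cluster version is new enough.
if [ "$(printf '%s\n%s\n' "$MIN_VERSION" "$CLUSTER_VERSION" | sort -V | head -n1)" = "$MIN_VERSION" ]; then
  echo "Kubernetes version OK"
else
  echo "Kubernetes version too old"
fi
```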

Pull the Apigee Helm charts

Apigee hybrid charts are hosted in Google Artifact Registry:

oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts

Copy all of the Apigee hybrid Helm charts to your local storage with the following helm pull commands:

export CHART_REPO=oci://us-docker.pkg.dev/apigee-release/apigee-hybrid-helm-charts
export CHART_VERSION=1.16.0
helm pull $CHART_REPO/apigee-operator --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-datastore --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-env --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-ingress-manager --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-org --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-redis --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-telemetry --version $CHART_VERSION --untar
helm pull $CHART_REPO/apigee-virtualhost --version $CHART_VERSION --untar
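After pulling, you can confirm that each chart untarred correctly by checking for its Chart.yaml. The following sketch only checks two charts and simulates the pulled directories so it runs standalone; in a real run, skip the scaffolding lines and check all eight charts in your charts directory.

```shell
# Demo scaffolding: simulate two pulled chart directories (skip in real use).
for c in apigee-operator apigee-datastore; do
  mkdir -p "/tmp/charts/$c" && echo "name: $c" > "/tmp/charts/$c/Chart.yaml"
done

# The actual check: every expected chart directory must contain Chart.yaml.
for CHART in apigee-operator apigee-datastore; do
  if [ -f "/tmp/charts/$CHART/Chart.yaml" ]; then
    echo "$CHART: OK"
  else
    echo "$CHART: MISSING"
  fi
done
```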

Edit kustomization.yaml for a custom apigee namespace

If your Apigee namespace is not apigee, edit the apigee-operator/etc/crds/default/kustomization.yaml file and replace the namespace value with your Apigee namespace.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: APIGEE_NAMESPACE

If you are using apigee as your namespace, you do not need to edit the file.
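The namespace substitution can also be scripted instead of hand-edited. This sketch writes a local copy of the file so it runs anywhere; in a real run, point sed at apigee-operator/etc/crds/default/kustomization.yaml and use your own namespace. Note that the -i flag shown is GNU sed syntax; BSD/macOS sed requires `sed -i ''`.

```shell
# Demo scaffolding: a local copy of the kustomization file (skip in real use).
cat > /tmp/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: apigee
EOF

# Replace the namespace value in place (MY_NAMESPACE is an example value).
MY_NAMESPACE="my-apigee-ns"
sed -i "s/^namespace: .*/namespace: ${MY_NAMESPACE}/" /tmp/kustomization.yaml

grep '^namespace:' /tmp/kustomization.yaml
```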

  • Install the updated Apigee CRDs:
    1. Use the kubectl dry-run feature by running the following command:

      kubectl apply -k apigee-operator/etc/crds/default/ --server-side --force-conflicts --validate=false --dry-run=server
      
    2. After validating with the dry-run command, run the following command:

      kubectl apply -k apigee-operator/etc/crds/default/ \
        --server-side \
        --force-conflicts \
        --validate=false
      
    3. Validate the installation with the kubectl get crds command:
      kubectl get crds | grep apigee

      Your output should look something like the following:

      apigeedatastores.apigee.cloud.google.com                    2024-08-21T14:48:30Z
      apigeedeployments.apigee.cloud.google.com                   2024-08-21T14:48:30Z
      apigeeenvironments.apigee.cloud.google.com                  2024-08-21T14:48:31Z
      apigeeissues.apigee.cloud.google.com                        2024-08-21T14:48:31Z
      apigeeorganizations.apigee.cloud.google.com                 2024-08-21T14:48:32Z
      apigeeredis.apigee.cloud.google.com                         2024-08-21T14:48:33Z
      apigeerouteconfigs.apigee.cloud.google.com                  2024-08-21T14:48:33Z
      apigeeroutes.apigee.cloud.google.com                        2024-08-21T14:48:33Z
      apigeetelemetries.apigee.cloud.google.com                   2024-08-21T14:48:34Z
      cassandradatareplications.apigee.cloud.google.com           2024-08-21T14:48:35Z
      
  • Check the labels on the cluster nodes. By default, Apigee schedules data pods on nodes with the label cloud.google.com/gke-nodepool=apigee-data and runtime pods on nodes with the label cloud.google.com/gke-nodepool=apigee-runtime. You can customize your node pool labels in the overrides.yaml file.

    For more information, see Configuring dedicated node pools.

  • Set up the apigee-guardrails service account

    Starting with hybrid v1.16, the apigee-guardrails service account is required to upgrade the apigee-operator chart.

    In the following procedure, select the type of service account authentication you are using.

    1. Verify that you can execute create-service-account. If you have just downloaded the charts, the create-service-account file might not be executable. In your APIGEE_HELM_CHARTS_HOME directory, run the following command:
      ./apigee-operator/etc/tools/create-service-account --help

      If the output says permission denied, make the file executable: for example, with chmod on Linux, macOS, or UNIX, or with Windows Explorer or the icacls command on Windows. For example:

      chmod +x ./apigee-operator/etc/tools/create-service-account
    2. Create the apigee-guardrails service account:

      Kubernetes Secrets

      ./apigee-operator/etc/tools/create-service-account \
        --env prod \
        --profile apigee-guardrails \
        --dir service-accounts

      This command creates the apigee-guardrails service account and downloads the key to the service-accounts/ directory.

      JSON files

      ./apigee-operator/etc/tools/create-service-account \
        --env prod \
        --profile apigee-guardrails \
        --dir ./apigee-operator/

      This command creates the apigee-guardrails service account and downloads the key to the apigee-operator/ chart directory.

      Vault

      ./apigee-operator/etc/tools/create-service-account \
        --env prod \
        --profile apigee-guardrails \
        --dir service-accounts

      This command creates the apigee-guardrails service account and downloads the key to the service-accounts/ directory.

      WIF for GKE

      ./apigee-operator/etc/tools/create-service-account \
        --env prod \
        --profile apigee-guardrails \
        --dir service-accounts

      This command creates the apigee-guardrails service account and downloads the key to the service-accounts/ directory. You do not need the downloaded key file and can delete it.

      WIF on other platforms

      ./apigee-operator/etc/tools/create-service-account \
        --env prod \
        --profile apigee-guardrails \
        --dir service-accounts

      This command creates the apigee-guardrails service account and downloads the key to the service-accounts/ directory.

    3. Set up authentication for the apigee-guardrails service account:

      Kubernetes Secrets

      Create the Kubernetes secret using the apigee-guardrails service account key file in the service-accounts/ directory:

      kubectl create secret generic apigee-guardrails-svc-account \
          --from-file="client_secret.json=$APIGEE_HELM_CHARTS_HOME/service-accounts/$PROJECT_ID-apigee-guardrails.json" \
          -n $APIGEE_NAMESPACE

      Add the following to your overrides.yaml file:

      guardrails:
        serviceAccountRef: apigee-guardrails-svc-account

      JSON files

      Add the following to your overrides.yaml file, using the path to the apigee-guardrails service account key file in the apigee-operator/ directory:

      guardrails:
        serviceAccountPath: $PROJECT_ID-apigee-guardrails.json

      Vault

      1. Update the Vault secret secret/data/apigee/orgsakeys to add a guardrails entry with the contents of the apigee-guardrails service account key file.
        vault kv patch secret/apigee/orgsakeys guardrails="$(cat ./service-accounts/hybrid115-apigee-guardrails.json)"
        
      2. The Kubernetes service account (KSA) for guardrails is named apigee-operator-guardrails-sa. Add the Guardrails KSA to the organization-specific service accounts bound to the apigee-orgsakeys role in Vault.
        1. Get the current list of KSAs bindings:
          vault read auth/kubernetes/role/apigee-orgsakeys
          

          The output should be in the following format:

          Key                                         Value
          ---                                         -----
          alias_name_source                           serviceaccount_uid
          bound_service_account_names                 BOUND_SERVICE_ACCOUNT_NAMES
          bound_service_account_namespace_selector    n/a
          bound_service_account_namespaces            APIGEE_NAMESPACE

          In the output, BOUND_SERVICE_ACCOUNT_NAMES is a comma-separated list of service account names. Add apigee-operator-guardrails-sa to the list. For example (newlines added here for readability; keep the value on a single line):

          apigee-manager,apigee-cassandra-default,apigee-cassandra-backup-sa,
          apigee-cassandra-restore-sa,apigee-cassandra-schema-setup-myhybrido
          rg-5b044c1,apigee-cassandra-schema-val-myhybridorg-5b044c1,apigee-c
          assandra-user-setup-myhybridorg-5b044c1,apigee-mart-myhybridorg-5b0
          44c1,apigee-mint-task-scheduler-myhybridorg-5b044c1,apigee-connect-
          agent-myhybridorg-5b044c1,apigee-watcher-myhybridorg-5b044c1,apigee
          -metrics-apigee-telemetry,apigee-open-telemetry,apigee-synchronizer
          -myhybridorg-dev-ee52aca,apigee-runtime-telemetry-collector-apigee-
          telemetry,apigee-logger-apigee-e-myhybrridorg-dev-ee52aca,apigee-sy
          nchronizer-myhybridog-prod-2d0221c,apigee-runtime-myhybridorg-prod-
          2d0221c,apigee-operator-guardrails-sa
        2. Update the bindings to the apigee-orgsakeys role with the updated list of service account names:
          vault write auth/kubernetes/role/apigee-orgsakeys \
            bound_service_account_names=UPDATED_BOUND_SERVICE_ACCOUNT_NAMES \
            bound_service_account_namespaces=APIGEE_NAMESPACE \
            policies=apigee-orgsakeys-auth \
            ttl=1m
          
      3. Add "guardrails" to the SecretProviderClass:
        1. Edit your spc-org.yaml file.
        2. Under spec.parameters.objects, add a guardrails entry:
                - objectName: "guardrails"
                  secretPath: ""
                  secretKey: ""
        3. Update your SecretProviderClass:
          kubectl -n APIGEE_NAMESPACE apply -f spc-org.yaml
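The long comma-separated list of bound service account names in step 2 above is easy to mistype; appending the new KSA name can be scripted. A sketch follows, with the starting list abbreviated for illustration; use the full list from your vault read output.

```shell
# Sketch: append the guardrails KSA to the existing bound service account
# names. CURRENT_NAMES is an abbreviated illustrative list.
CURRENT_NAMES="apigee-manager,apigee-cassandra-default,apigee-cassandra-backup-sa"
UPDATED_NAMES="${CURRENT_NAMES},apigee-operator-guardrails-sa"

# Pass UPDATED_NAMES as bound_service_account_names in the vault write command.
echo "$UPDATED_NAMES"
```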
          

      WIF for GKE

      The Kubernetes service account (KSA) for guardrails is named apigee-operator-guardrails-sa. Create the binding for the apigee-guardrails Google service account (GSA) with the following command:

      gcloud iam service-accounts add-iam-policy-binding apigee-guardrails@$PROJECT_ID.iam.gserviceaccount.com \
          --role roles/iam.workloadIdentityUser \
          --member "serviceAccount:$PROJECT_ID.svc.id.goog[$APIGEE_NAMESPACE/apigee-operator-guardrails-sa]" \
          --project $PROJECT_ID

      Add the following to your overrides.yaml file:

      guardrails:
        gsa: apigee-guardrails@$PROJECT_ID.iam.gserviceaccount.com

      WIF on other platforms

      The Kubernetes service account (KSA) for guardrails is named apigee-operator-guardrails-sa. You need to grant the guardrails KSA access to impersonate the apigee-guardrails Google service account (GSA), and configure your overrides to use a credential configuration file.

      1. Grant the KSA access to impersonate the GSA with the following command:

        Template

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-guardrails@$PROJECT_ID.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/subject/system:serviceaccount:APIGEE_NAMESPACE:apigee-operator-guardrails-sa" \
          --role=roles/iam.workloadIdentityUser

        Example

        gcloud iam service-accounts add-iam-policy-binding \
          apigee-guardrails@my-project.iam.gserviceaccount.com \
          --member="principal://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/my-pool/subject/system:serviceaccount:apigee:apigee-operator-guardrails-sa" \
          --role=roles/iam.workloadIdentityUser

        Where:

        • PROJECT_ID: your Google Cloud project ID.
        • PROJECT_NUMBER: the project number of the project where you created the workload identity pool.
        • POOL_ID: the workload identity pool ID.
        • APIGEE_NAMESPACE: The namespace where Apigee hybrid is installed.
      2. Create a credential configuration file for the apigee-guardrails service account:
        gcloud iam workload-identity-pools create-cred-config \
          projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/WORKLOAD_PROVIDER_ID \
          --service-account=apigee-guardrails@$PROJECT_ID.iam.gserviceaccount.com \
          --credential-source-file=/var/run/service-account/token \
          --credential-source-type=text \
          --output-file=apigee-guardrails-credential-configuration.json
            

        Where WORKLOAD_PROVIDER_ID is your workload identity pool provider ID.

      3. Configure apigee-guardrails to use Workload Identity Federation with one of the following methods:

        WIF: secrets

        1. Create a new Kubernetes secret using the credential source file for each credential configuration file.
          kubectl create secret -n APIGEE_NAMESPACE generic guardrails-workload-identity-secret --from-file="client_secret.json=./apigee-guardrails-credential-configuration.json"
        2. Replace the value of serviceAccountRef with the new secret:
          guardrails:
            serviceAccountRef: guardrails-workload-identity-secret

        WIF: files

        Move the generated apigee-guardrails-credential-configuration.json file to your apigee-operator/ chart directory.

        Add the following to your overrides.yaml file:

        guardrails:
          serviceAccountPath: apigee-guardrails-credential-configuration.json

        WIF: Vault

        Update the service account key for guardrails in Vault with the corresponding credential source file:

        SAKEY=$(cat ./apigee-guardrails-credential-configuration.json); kubectl -n APIGEE_NAMESPACE exec vault-0 -- vault kv patch secret/apigee/orgsakeys guardrails="$SAKEY"

        See Storing service account keys in Hashicorp Vault for more information.

    Upgrade cert-manager

    Apigee hybrid v1.16 supports cert-manager releases 1.16 through 1.19. In cert-manager release 1.18, the default value of Certificate.Spec.PrivateKey.rotationPolicy changed from Never to Always. For upgraded Apigee hybrid installations, this change can cause an issue with your traffic. When upgrading to hybrid v1.16 from an earlier version, you must either edit your apigee-ca certificate to compensate for this change or keep your cert-manager version at release 1.17.x or lower.

    Before upgrading cert-manager to 1.18 or 1.19, use the following procedure to edit your apigee-ca certificate and set the value of Certificate.Spec.PrivateKey.rotationPolicy to Never.

    1. Check the contents of your apigee-ca certificate to see if rotationPolicy is set:
      kubectl get certificate apigee-ca -n cert-manager -o yaml
      

      Look for the values under spec.privateKey in the output:

      ...
      spec:
        commonName: apigee-hybrid
        duration: 87600h
        isCA: true
        issuerRef:
          group: cert-manager.io
          kind: ClusterIssuer
          name: apigee-root-certificate-issuer
        privateKey:
          algorithm: ECDSA
          # Note: rotationPolicy would appear here if it is set.
          size: 256
        secretName: apigee-ca
      ...
    2. If rotationPolicy is not set or if it is set to Always, edit the apigee-ca certificate to set the value of rotationPolicy to Never:
      1. Perform a dry run first:
        kubectl patch Certificate \
          --dry-run=server \
          -n cert-manager \
          --type=json \
          -p='[{"op": "replace", "path": "/spec/privateKey/rotationPolicy", "value": "Never"}]' \
          -o=yaml \
          apigee-ca
        
      2. Patch the certificate:
        kubectl patch Certificate \
          -n cert-manager \
          --type=json \
          -p='[{"op": "replace", "path": "/spec/privateKey/rotationPolicy", "value": "Never"}]' \
          -o=yaml \
          apigee-ca
        
    3. Verify that the value of rotationPolicy is now set to Never:
      kubectl get certificate apigee-ca -n cert-manager -o yaml
      

      The output should look similar to the following:

      ...
      spec:
        commonName: apigee-hybrid
        duration: 87600h
        isCA: true
        issuerRef:
          group: cert-manager.io
          kind: ClusterIssuer
          name: apigee-root-certificate-issuer
        privateKey:
          algorithm: ECDSA
          rotationPolicy: Never
          size: 256
        secretName: apigee-ca
      ...
    4. Upgrade cert-manager. The following command downloads and installs cert-manager v1.19.2:
      kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.2/cert-manager.yaml

      See Supported platforms and versions: cert-manager for a list of supported versions.


    Install the hybrid 1.16.0 runtime

    1. If you have not already done so, change into your APIGEE_HELM_CHARTS_HOME directory. Run the following commands from that directory.
    2. Upgrade the Apigee Operator/Controller:

      Dry run:

      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE \
        --dry-run=server
      

      Upgrade the chart:

      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE
      

      Verify Apigee Operator installation:

      helm ls -n APIGEE_NAMESPACE
      
      NAME       NAMESPACE       REVISION   UPDATED                                STATUS     CHART                   APP VERSION
      operator   apigee   3          2024-08-21 00:42:44.492009 -0800 PST   deployed   apigee-operator-1.16.0   1.16.0
      

      Verify it is up and running by checking its availability:

      kubectl -n APIGEE_NAMESPACE get deploy apigee-controller-manager
      
      NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-controller-manager   1/1     1            1           7d20h
      
    3. Upgrade the Apigee datastore:

      Dry run:

      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE \
        --dry-run=server
      

      Upgrade the chart:

      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE
      

      Verify apigeedatastore is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeedatastore default
      
      NAME      STATE       AGE
      default   running    2d
    4. Upgrade Apigee telemetry:

      Dry run:

      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE \
        --dry-run=server
      

      Upgrade the chart:

      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE
      

      Verify it is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeetelemetry apigee-telemetry
      
      NAME               STATE     AGE
      apigee-telemetry   running   2d
    5. Upgrade Apigee Redis:

      Dry run:

      helm upgrade redis apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE \
        --dry-run=server
      

      Upgrade the chart:

      helm upgrade redis apigee-redis/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE
      

      Verify it is up and running by checking its state:

      kubectl -n APIGEE_NAMESPACE get apigeeredis default
      
      NAME      STATE     AGE
      default   running   2d
    6. Upgrade Apigee ingress manager:

      Dry run:

      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE \
        --dry-run=server
      

      Upgrade the chart:

      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE
      

      Verify it is up and running by checking its availability:

      kubectl -n APIGEE_NAMESPACE get deployment apigee-ingressgateway-manager
      
      NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
      apigee-ingressgateway-manager   2/2     2            2           2d
    7. Upgrade the Apigee organization:

      Dry run:

      helm upgrade ORG_NAME apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE \
        --dry-run=server
      

      Upgrade the chart:

      helm upgrade ORG_NAME apigee-org/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        -f OVERRIDES_FILE
      

      Verify it is up and running by checking the state of the respective org:

      kubectl -n APIGEE_NAMESPACE get apigeeorg
      
      NAME                      STATE     AGE
      apigee-my-org-my-env      running   2d
    8. Upgrade the environment.

      You must upgrade one environment at a time. Specify the environment with --set env=ENV_NAME.

      Dry run:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set env=ENV_NAME \
        -f OVERRIDES_FILE \
        --dry-run=server
      
      • ENV_RELEASE_NAME is a name used to keep track of installation and upgrades of the apigee-env chart. This name must be unique from the other Helm release names in your installation. Usually this is the same as ENV_NAME. However, if your environment has the same name as your environment group, you must use different release names for the environment and environment group, for example dev-env-release and dev-envgroup-release. For more information on releases in Helm, see Three big concepts in the Helm documentation.
      • ENV_NAME is the name of the environment you are upgrading.
      • OVERRIDES_FILE is your new overrides file for version 1.16.0.

      Upgrade the chart:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --set env=ENV_NAME \
        -f OVERRIDES_FILE
      

      Verify it is up and running by checking the state of the respective env:

      kubectl -n APIGEE_NAMESPACE get apigeeenv
      
      NAME                          STATE       AGE   GATEWAYTYPE
      apigee-my-org-my-env          running     2d
    9. Upgrade the environment groups (virtualhosts).
      1. You must upgrade one environment group (virtualhost) at a time. Specify the environment group with --set envgroup=ENV_GROUP_NAME. Repeat the following commands for each environment group mentioned in the overrides.yaml file:

        Dry run:

        helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
          --install \
          --namespace APIGEE_NAMESPACE \
          --set envgroup=ENV_GROUP_NAME \
          -f OVERRIDES_FILE \
          --dry-run=server
        

        ENV_GROUP_RELEASE_NAME is the name with which you previously installed the apigee-virtualhost chart. It is usually ENV_GROUP_NAME.

        Upgrade the chart:

        helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
          --install \
          --namespace APIGEE_NAMESPACE \
          --set envgroup=ENV_GROUP_NAME \
          -f OVERRIDES_FILE
        
      2. Check the state of the ApigeeRoute (AR).

        Installing the virtualhosts creates an ApigeeRouteConfig (ARC), which internally creates an ApigeeRoute (AR) once the Apigee watcher pulls environment group-related details from the control plane. Therefore, check that the corresponding AR's state is running:

        kubectl -n APIGEE_NAMESPACE get arc
        
        NAME                                STATE   AGE
        apigee-org1-dev-egroup                       2d
        kubectl -n APIGEE_NAMESPACE get ar
        
        NAME                                        STATE     AGE
        apigee-org1-dev-egroup-123abc               running   2d
    10. After you have verified that all the installations upgraded successfully, delete the older apigee-operator release from the apigee-system namespace.
      1. Uninstall the old operator release:
        helm delete operator -n apigee-system
        
      2. Delete the apigee-system namespace:
        kubectl delete namespace apigee-system
        
    11. Upgrade operator again in your Apigee namespace to re-install the deleted cluster-scoped resources:
      helm upgrade operator apigee-operator/ \
        --install \
        --namespace APIGEE_NAMESPACE \
        --atomic \
        -f overrides.yaml
      

    Rolling back to a previous version

    To roll back to the previous version, use the older chart versions to reverse the upgrade process: start with apigee-virtualhost and work your way back to apigee-operator, and then revert the CRDs.

    1. Revert all the charts from apigee-virtualhost to apigee-datastore. The following commands assume you are using the charts from the previous version (v1.15.x).

      Run the following command for each environment group:

      helm upgrade ENV_GROUP_RELEASE_NAME apigee-virtualhost/ \
        --install \
        --namespace apigee \
        --atomic \
        --set envgroup=ENV_GROUP_NAME \
        -f 1.15_OVERRIDES_FILE
      

      Run the following command for each environment:

      helm upgrade ENV_RELEASE_NAME apigee-env/ \
        --install \
        --namespace apigee \
        --atomic \
        --set env=ENV_NAME \
        -f 1.15_OVERRIDES_FILE
      

      Revert the remaining charts except for apigee-operator.

      helm upgrade ORG_NAME apigee-org/ \
        --install \
        --namespace apigee \
        --atomic \
        -f 1.15_OVERRIDES_FILE
      
      helm upgrade ingress-manager apigee-ingress-manager/ \
        --install \
        --namespace apigee \
        --atomic \
        -f 1.15_OVERRIDES_FILE
      
      helm upgrade redis apigee-redis/ \
        --install \
        --namespace apigee \
        --atomic \
        -f 1.15_OVERRIDES_FILE
      
      helm upgrade telemetry apigee-telemetry/ \
        --install \
        --namespace apigee \
        --atomic \
        -f 1.15_OVERRIDES_FILE
      
      helm upgrade datastore apigee-datastore/ \
        --install \
        --namespace apigee \
        --atomic \
        -f 1.15_OVERRIDES_FILE
      
    2. Create the apigee-system namespace.
      kubectl create namespace apigee-system
      
    3. Patch the resource annotation back to the apigee-system namespace.
      kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-namespace='apigee-system'
      
    4. If you have changed the release name as well, update the annotation with the operator release name.
      kubectl annotate --overwrite clusterIssuer apigee-ca-issuer meta.helm.sh/release-name='operator'
      
    5. Install apigee-operator back into the apigee-system namespace.
      helm upgrade operator apigee-operator/ \
        --install \
        --namespace apigee-system \
        --atomic \
        -f 1.15_OVERRIDES_FILE
      
    6. Revert the CRDs by reinstalling the older CRDs.
      kubectl apply -k apigee-operator/etc/crds/default/ \
        --server-side \
        --force-conflicts \
        --validate=false
      
    7. Clean up the apigee-operator release from the APIGEE_NAMESPACE namespace to complete the rollback process.
      helm uninstall operator -n APIGEE_NAMESPACE
      
    8. Some cluster-scoped resources, such as clusterIssuer, are deleted when operator is uninstalled. Reinstall them with the following command:
      helm upgrade operator apigee-operator/ \
        --install \
        --namespace apigee-system \
        --atomic \
        -f 1.15_OVERRIDES_FILE