This guide describes how to migrate a virtual IP (VIP) implementation in a SUSE Linux Enterprise Server (SLES) HA cluster for SAP NetWeaver from alias IPs to an internal passthrough Network Load Balancer.
Before you begin
- These instructions assume that you already have a properly configured SAP NetWeaver (ASCS/ERS) HA cluster on Google Cloud that uses an alias IP for the virtual IP (VIP) implementation.
This migration requires a scheduled downtime for your SAP system.
All steps from the beginning of this guide up to and including "Test the load balancer configuration" can be performed while your SAP system is fully operational. These preparatory steps configure the load balancer components and test health checks using a temporary IP without impacting your live SAP system.
Stop your SAP application server instances before you proceed to the section "Migrate the VIP implementation to use the load balancer". The actions in that section and all subsequent steps make your SAP NetWeaver system unavailable, because the first step in the migration deallocates the existing alias IPs from your Compute Engine instances, which makes the SAP VIPs unreachable on the network.
Migration overview
Migrating a VIP implementation from alias IP to internal passthrough Network Load Balancer in an SAP NetWeaver HA cluster on SLES includes the following high-level steps:
- Configure and test a load balancer by using a temporary forwarding rule and a temporary IP address in place of the VIP.
- Set your cluster to maintenance mode and stop your SAP application server instances.
- Deallocate the alias IP addresses from the primary and secondary hosts. These addresses become the VIPs with the load balancer.
- In the Pacemaker cluster configuration:
- Delete the existing VIP resources from the ASCS and ERS resource groups.
- Add new health check resources to respond to the load balancer's TCP health checks.
- Reassemble the resource groups to ensure the health check service starts before the SAP instance.
Verify the existing VIP addresses
Identify the alias IP addresses that are managed by the cluster. In an SAP NetWeaver system, you must identify the VIPs for both the ASCS and the ERS instances. These addresses are used as the frontend IPs for the internal passthrough Network Load Balancer.
To check the existing cluster primitives for the ASCS and ERS alias configurations:
```
crm configure show
```

In the resource definition, the VIP address appears in the `alias` and `IPaddr2` resources for both instances. The output is similar to the following example:

```
primitive rsc_vip_ascs01_alias ocf:gcp:alias \
    params alias_ip="10.1.0.100/32" hostlist="vm1 vm2" \
    op monitor interval=60s timeout=60s \
    op start interval=0 timeout=300s \
    op stop interval=0 timeout=300s
primitive rsc_vip_ascs01_ipaddr2 IPaddr2 \
    params ip=10.1.0.100 cidr_netmask=32 nic=eth0 \
    op monitor interval=3600s timeout=60s
primitive rsc_vip_ers03_alias ocf:gcp:alias \
    params alias_ip="10.1.0.101/32" hostlist="vm1 vm2" \
    op monitor interval=60s timeout=60s \
    op start interval=0 timeout=300s \
    op stop interval=0 timeout=300s
primitive rsc_vip_ers03_ipaddr2 IPaddr2 \
    params ip=10.1.0.101 cidr_netmask=32 nic=eth0 \
    op monitor interval=3600s timeout=60s
```
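If the cluster configuration is long, you can extract just the VIP addresses with standard text tools. The following sketch inlines a shortened sample of the `crm configure show` output for illustration; in practice, you would pipe the live command output into the same filter. The sample resource names and addresses mirror the example above.

```shell
# Shortened sample of `crm configure show` output; replace with
# `crm configure show` piped directly in real use.
crm_output='primitive rsc_vip_ascs01_alias ocf:gcp:alias \
    params alias_ip="10.1.0.100/32" hostlist="vm1 vm2"
primitive rsc_vip_ers03_alias ocf:gcp:alias \
    params alias_ip="10.1.0.101/32" hostlist="vm1 vm2"'

# Pull every alias_ip value and strip the /32 suffix to get the bare VIPs.
echo "$crm_output" \
  | grep -o 'alias_ip="[^"]*"' \
  | sed -e 's/alias_ip="//' -e 's|/32"||'
```

The filter prints one VIP per line (here, `10.1.0.100` and `10.1.0.101`), which you can then compare against your reserved addresses in the next section.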
Verify that VIP addresses are reserved
In the Google Cloud console, verify that the IP addresses used for both the ASCS and ERS alias IPs are reserved. You can reuse the existing alias IP addresses or reserve new ones.
List the reserved addresses in your region:
```
gcloud compute addresses list \
    --filter="region:( CLUSTER_REGION )"
```
Replace `CLUSTER_REGION` with the region where you've deployed your HA cluster.

If the IP addresses are reserved and allocated as alias IPs, then their status shows as `IN_USE`. When you later deallocate them from the compute instances to move them to the load balancer, their status changes to `RESERVED`.

If the addresses are not included in the IP addresses that are returned by the preceding command, then reserve them to prevent IP address conflicts:
```
gcloud compute addresses create VIP_NAME \
    --region CLUSTER_REGION --subnet CLUSTER_SUBNET \
    --addresses IP_ADDRESS
```
Replace the following:
- `VIP_NAME`: the name you want to set for the static internal IP address resource
- `CLUSTER_SUBNET`: the name of the subnetwork where you've allocated the IP address
- `IP_ADDRESS`: the static internal IP address that you want to reserve
List your addresses again to confirm that the IP addresses show up as `RESERVED`.
Enable load balancer backend communication between the compute instances
You enable backend communication between the compute instances by modifying the configuration
of the google-guest-agent, which is included in the
Linux guest environment for
all Linux public images that are provided by Google Cloud.
To enable load balancer backend communications, complete the following steps on each compute instance that is part of your cluster:
Stop the guest agent service:
```
sudo service google-guest-agent stop
```
Open or create the file `/etc/default/instance_configs.cfg` for editing. For example:

```
sudo vi /etc/default/instance_configs.cfg
```
In the `/etc/default/instance_configs.cfg` file, specify the following configuration properties. If the `IpForwarding` and `NetworkInterfaces` sections don't exist, then create them. Verify that both the `target_instance_ips` and `ip_forwarding` properties are set to `false`.

```
[IpForwarding]
ethernet_proto_id = 66
ip_aliases = true
target_instance_ips = false
[NetworkInterfaces]
dhclient_script = /sbin/google-dhclient-script
dhcp_command =
ip_forwarding = false
setup = true
```

Start the guest agent service:
```
sudo service google-guest-agent start
```
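To confirm that the edit took effect before restarting the agent, you can grep the file for the two properties that must be `false`. The sketch below writes the expected content to a temporary file so that it can run anywhere; on a cluster node you would point `CFG` at `/etc/default/instance_configs.cfg` instead.

```shell
# Stand-in for /etc/default/instance_configs.cfg; on a real instance,
# set CFG=/etc/default/instance_configs.cfg and skip the heredoc.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[IpForwarding]
ethernet_proto_id = 66
ip_aliases = true
target_instance_ips = false
[NetworkInterfaces]
dhclient_script = /sbin/google-dhclient-script
dhcp_command =
ip_forwarding = false
setup = true
EOF

# Both properties must be false for load balancer backend communication.
grep -q '^target_instance_ips = false' "$CFG" && echo "target_instance_ips ok"
grep -q '^ip_forwarding = false' "$CFG" && echo "ip_forwarding ok"
```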
Configure failover support for the load balancer
To support high-availability failover of SAP NetWeaver, you must configure an internal passthrough Network Load Balancer with two separate backend services and health checks: one for the ASCS and one for the ERS.
Reserve a temporary IP address for testing
Before moving the production VIPs, reserve a temporary IP address from the same subnet. You use this temporary IP address to verify that the load balancer can successfully reach the SAP services through the health check ports. In production, the VIP follows the active ASCS instance.
Reserve a temporary IP address in the same subnet as the alias IP for testing purposes. If you omit the `--addresses` flag, then an IP address in the specified subnet is chosen for you:

```
gcloud compute addresses create TEST_VIP_NAME \
    --region CLUSTER_REGION --subnet CLUSTER_SUBNET \
    --addresses TEST_VIP_ADDRESS
```
Replace the following:
- `TEST_VIP_NAME`: the name you want to set for the temporary VIP
- `CLUSTER_REGION`: the region where your HA cluster and load balancer are deployed
- `CLUSTER_SUBNET`: the name of the subnetwork where the IP address is allocated
- `TEST_VIP_ADDRESS`: the static internal IP address that you want to set for the temporary VIP
For more information about reserving a static IP, see Reserving a static internal IP address.
Verify IP address reservation:
```
gcloud compute addresses describe TEST_VIP_NAME --region CLUSTER_REGION
```
The output is similar to the following example:
```
address: 10.1.0.5
addressType: INTERNAL
creationTimestamp: '2026-03-31T21:54:40.281-07:00'
description: ''
id: '7673549042678307839'
kind: compute#address
labelFingerprint: 42WmSpB8rSM=
name: nw-test-vip
networkTier: PREMIUM
purpose: GCE_ENDPOINT
region: https://www.googleapis.com/compute/v1/projects/example-project-123456/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/example-project-123456/regions/us-central1/addresses/nw-test-vip
status: RESERVED
subnetwork: https://www.googleapis.com/compute/v1/projects/example-project-123456/regions/us-central1/subnetworks/example-subnet-us-central1
```
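If you only care about the `status` field, you can ask gcloud for it directly by using the standard `--format` flag rather than scanning the full output. The sketch below shows the command as a comment and gates a script on the result; the `status` value here is a stand-in, since the snippet assumes no live project.

```shell
# On a real project, capture the status with:
#   status=$(gcloud compute addresses describe TEST_VIP_NAME \
#       --region CLUSTER_REGION --format="value(status)")
status="RESERVED"   # stand-in for the gcloud output above

# RESERVED means the address exists but is not attached to any resource yet,
# which is the expected state for the temporary test VIP.
if [ "$status" = "RESERVED" ]; then
  echo "temporary VIP is reserved and not yet attached"
fi
```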
Create instance groups for your host compute instances
The internal passthrough Network Load Balancer uses instance groups to identify the compute instances that can host the SAP NetWeaver services.
Create one unmanaged instance group for each compute instance in your HA cluster:
```
gcloud compute instance-groups unmanaged create ASCS_IG_NAME \
    --zone=ASCS_ZONE
gcloud compute instance-groups unmanaged add-instances ASCS_IG_NAME \
    --zone=ASCS_ZONE --instances=ASCS_HOST_NAME
gcloud compute instance-groups unmanaged create ERS_IG_NAME \
    --zone=ERS_ZONE
gcloud compute instance-groups unmanaged add-instances ERS_IG_NAME \
    --zone=ERS_ZONE --instances=ERS_HOST_NAME
```
Replace the following:
- `ASCS_IG_NAME`: the name you want to set for the unmanaged instance group that contains the ASCS host compute instance
- `ASCS_ZONE`: the zone where the ASCS host compute instance is deployed
- `ASCS_HOST_NAME`: the name of the ASCS host compute instance
- `ERS_IG_NAME`: the name you want to set for the unmanaged instance group that contains the ERS host compute instance
- `ERS_ZONE`: the zone where the ERS host compute instance is deployed
- `ERS_HOST_NAME`: the name of the ERS host compute instance
Confirm the creation of the instance groups:
```
gcloud compute instance-groups unmanaged list
```
The output is similar to the following example:
```
NAME    ZONE           NETWORK          NETWORK_PROJECT  MANAGED  INSTANCES
ig-vm1  us-central1-a  example-network  example-project  No       1
ig-vm2  us-central1-b  example-network  example-project  No       1
```
Create Compute Engine health checks
For an SAP NetWeaver HA setup, you must create two separate health checks to independently monitor the ASCS and ERS instances.
To avoid clashing with other services, create the Compute Engine health
checks with ports that are in the private range 49152-65535. The
check-interval and timeout values are set slightly higher than the default
values so as to increase failover tolerance during Compute Engine live
migration events. You can adjust the values if needed.
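If you script the health check creation, a small guard on the chosen port numbers keeps them inside the recommended private range. The `port_in_private_range` helper and the ports `60000` and `60001` are illustrative (they match the examples used later in this guide), not values the platform mandates.

```shell
# Guard for the health check port choice: the guide recommends ports in
# the private/dynamic range 49152-65535 to avoid clashing with services.
port_in_private_range() {
  [ "$1" -ge 49152 ] && [ "$1" -le 65535 ]
}

# Example ports for the ASCS and ERS health checks.
for port in 60000 60001; do
  if port_in_private_range "$port"; then
    echo "port $port ok"
  else
    echo "port $port outside 49152-65535" >&2
  fi
done
```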
To create the Compute Engine health checks, complete the following steps:
Create the health check for the ASCS instance:
```
gcloud compute health-checks create tcp HEALTH_CHECK_NAME_ASCS \
    --port=HEALTHCHECK_PORT_NUM_ASCS \
    --proxy-header=NONE --check-interval=10 --timeout=10 \
    --unhealthy-threshold=2 --healthy-threshold=2
```
Replace the following:
- `HEALTH_CHECK_NAME_ASCS`: the name that you want to set for the ASCS instance health check resource
- `HEALTHCHECK_PORT_NUM_ASCS`: the port number that the ASCS health check resource must use to monitor the ASCS instance
Create the health check for the ERS instance:
```
gcloud compute health-checks create tcp HEALTH_CHECK_NAME_ERS \
    --port=HEALTHCHECK_PORT_NUM_ERS \
    --proxy-header=NONE --check-interval=10 --timeout=10 \
    --unhealthy-threshold=2 --healthy-threshold=2
```
Replace the following:
- `HEALTH_CHECK_NAME_ERS`: the name that you want to set for the ERS instance health check resource
- `HEALTHCHECK_PORT_NUM_ERS`: the port number that the ERS health check resource must use to monitor the ERS instance
Confirm the creation of the health check resources:
```
gcloud compute health-checks describe HEALTH_CHECK_NAME
```
The output is similar to the following example:
```
checkIntervalSec: 10
creationTimestamp: '2026-03-31T22:15:01.034-07:00'
healthyThreshold: 2
id: '7836748939408774971'
kind: compute#healthCheck
name: hc-nw-ascs-60000
selfLink: https://www.googleapis.com/compute/v1/projects/example-project-123456/global/healthChecks/hc-nw-ascs-60000
tcpHealthCheck:
  port: 60000
  portSpecification: USE_FIXED_PORT
  proxyHeader: NONE
timeoutSec: 10
type: TCP
unhealthyThreshold: 2
```
Create a firewall rule for the health checks
Define a firewall rule that allows access to your host compute instances from
the following Google Cloud health check IP ranges: 35.191.0.0/16 and
130.211.0.0/22. This rule must include the ports that you defined for both the
ASCS and ERS health checks.
To create this firewall rule, complete the following steps:
Add a network tag to your compute instances if they don't already have one. This tag is used to target the firewall rule:
```
gcloud compute instances add-tags ASCS_HOST_NAME \
    --tags NETWORK_TAGS --zone PRIMARY_ZONE
gcloud compute instances add-tags ERS_HOST_NAME \
    --tags NETWORK_TAGS --zone SECONDARY_ZONE
```
Replace the following:
- `NETWORK_TAGS`: one or more network tags that are assigned to the compute instances and targeted by the firewall rule
- `PRIMARY_ZONE`: the zone where the primary ASCS host instance runs
- `SECONDARY_ZONE`: the zone where the secondary ERS host instance runs
Create a firewall rule that lets the health checks access your host compute instances:
```
gcloud compute firewall-rules create RULE_NAME \
    --network NETWORK_NAME --action ALLOW --direction INGRESS \
    --source-ranges 35.191.0.0/16,130.211.0.0/22 --target-tags NETWORK_TAGS \
    --rules tcp:HLTH_CHK_PORT_NUM_ASCS,tcp:HLTH_CHK_PORT_NUM_ERS
```
Replace the following:
- `RULE_NAME`: the name that you want to set for the firewall rule
- `NETWORK_NAME`: the name of the VPC network to which you want to attach this firewall rule
- `NETWORK_TAGS`: the network tags that you assigned to the compute instances
- `HLTH_CHK_PORT_NUM_ASCS`: the port number that you've allocated to monitor the ASCS instance
- `HLTH_CHK_PORT_NUM_ERS`: the port number that you've allocated to monitor the ERS instance
For example:
```
gcloud compute firewall-rules create fw-allow-health-checks \
    --network example-network \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 35.191.0.0/16,130.211.0.0/22 \
    --target-tags sap-nw-ha \
    --rules tcp:60000,tcp:60001
```
Configure the load balancer and failover group
Create the load balancer backend service for ASCS:
```
gcloud compute backend-services create BACKEND_SERVICE_NAME_ASCS \
    --load-balancing-scheme internal \
    --health-checks HEALTH_CHECK_NAME_ASCS \
    --no-connection-drain-on-failover \
    --drop-traffic-if-unhealthy \
    --failover-ratio 1.0 \
    --region CLUSTER_REGION
```
Replace the following:
- `BACKEND_SERVICE_NAME_ASCS`: the name that you want to set for the load balancer backend service for the ASCS instance
- `HEALTH_CHECK_NAME_ASCS`: the name of the Compute Engine health check resource that you created for the ASCS instance
Create the load balancer backend service for ERS:
```
gcloud compute backend-services create BACKEND_SERVICE_NAME_ERS \
    --load-balancing-scheme internal \
    --health-checks HEALTH_CHECK_NAME_ERS \
    --no-connection-drain-on-failover \
    --drop-traffic-if-unhealthy \
    --failover-ratio 1.0 \
    --region CLUSTER_REGION
```
Replace the following:
- `BACKEND_SERVICE_NAME_ERS`: the name that you want to set for the load balancer backend service for the ERS instance
- `HEALTH_CHECK_NAME_ERS`: the name of the Compute Engine health check resource that you created for the ERS instance
Add your instance groups to the ASCS backend service:
Add the ASCS instance group as the `PRIMARY` backend for ASCS:

```
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME_ASCS \
    --instance-group ASCS_IG_NAME \
    --instance-group-zone ASCS_ZONE \
    --region CLUSTER_REGION
```
Replace the following:
- `BACKEND_SERVICE_NAME_ASCS`: the name that you set for the load balancer backend service for the ASCS instance
- `ASCS_IG_NAME`: the name you set for the unmanaged instance group that contains the ASCS host compute instance
- `ASCS_ZONE`: the zone where the ASCS compute instance is deployed
- `CLUSTER_REGION`: the region where you've deployed your HA cluster
Add the ERS instance group as the `FAILOVER` backend for ASCS:

```
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME_ASCS \
    --instance-group ERS_IG_NAME \
    --instance-group-zone ERS_ZONE \
    --region CLUSTER_REGION \
    --failover
```
Replace the following:
- `ERS_IG_NAME`: the name you set for the unmanaged instance group that contains the ERS host compute instance
- `ERS_ZONE`: the zone where the ERS compute instance is deployed
Add your instance groups to the ERS backend service:
Add the ASCS instance group as the `FAILOVER` backend for ERS:

```
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME_ERS \
    --instance-group ASCS_IG_NAME \
    --instance-group-zone ASCS_ZONE \
    --region CLUSTER_REGION \
    --failover
```
Replace `BACKEND_SERVICE_NAME_ERS` with the name that you set for the load balancer backend service for the ERS instance.

Add the ERS instance group as the `PRIMARY` backend for ERS:

```
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME_ERS \
    --instance-group ERS_IG_NAME \
    --instance-group-zone ERS_ZONE \
    --region CLUSTER_REGION
```
Create a temporary forwarding rule to test the load balancer configuration.
Use the temporary IP address that you reserved earlier. This rule maps the test IP address to your ASCS backend service.
If you need to access the SAP NetWeaver system from outside of the specified region, then include the `--allow-global-access` flag in the following command:

```
gcloud compute forwarding-rules create TEMPORARY_RULE_NAME \
    --load-balancing-scheme internal \
    --address TEST_VIP_NAME \
    --subnet CLUSTER_SUBNET \
    --region CLUSTER_REGION \
    --backend-service BACKEND_SERVICE_NAME_ASCS \
    --ports ALL
```
Replace the following:
- `TEMPORARY_RULE_NAME`: the name you want to set for the temporary forwarding rule
- `TEST_VIP_NAME`: the name that you set for the temporary VIP
For more information about cross-region access to your SAP NetWeaver high-availability system, see Internal passthrough Network Load Balancer.
Your backend instance groups won't register as healthy until you have completed the Pacemaker cluster configuration.
Test the load balancer configuration
Although your backend instance groups won't register as healthy until later, you can test the load balancer configuration by setting up a listener to respond to the health checks. After setting up a listener, if the load balancer is configured correctly, then the status of the backend instance groups changes to healthy.
Choose one of the following methods to test connectivity:
Test the load balancer with the socat utility
You can use the socat utility to temporarily listen on the health check port.
You need to install the socat utility anyway, because you use it later when
you configure cluster resources.
To test the load balancer by using the socat utility, complete the following
steps:
On both compute instances, as the root user, install the `socat` utility:

```
zypper install -y socat
```
Start a `socat` process to listen for 60 seconds on the ASCS health check port:

```
sudo timeout 60s socat - TCP-LISTEN:HLTH_CHK_PORT_NUM_ASCS,fork
```
In Cloud Shell, after waiting a few seconds for the health check to detect the listener, check the health of your backend instance groups:
```
gcloud compute backend-services get-health BACKEND_SERVICE_NAME_ASCS \
    --region CLUSTER_REGION
```
Test the load balancer by using port 22
If port 22 is open for SSH connections on your host compute instances, then you
can temporarily edit the health checker to use port 22, which has a listener
that can respond to the health checker.
To temporarily use port 22, complete the following steps:
- Click your health check in the Google Cloud console.
- Click Edit.
- In the Port field, change the port number to `22`.
- Click Save and wait a minute or two.
In Cloud Shell, check the health of your backend instance groups:
```
gcloud compute backend-services get-health BACKEND_SERVICE_NAME_ASCS \
    --region CLUSTER_REGION
```
When you are done, revert the health check port number back to the original port number.
Migrate the VIP implementation to use the load balancer
The following steps guide you through editing the Pacemaker cluster configuration and the load balancer forwarding rules:
As the root user, on the active primary instance, put the cluster into maintenance mode:
```
sudo crm configure property maintenance-mode="true"
```
Back up the cluster configuration:
```
sudo crm configure show > clusterconfig.backup
```
Deallocate the alias IP
To deallocate an alias IP, update the network interface of the instance to remove it. If the interface has multiple alias IPs and you want to keep some of them, specify the alias IPs to keep in the update command. Any alias IPs that are not specified in the update command are deallocated.
To deallocate the alias IPs, complete the following steps:
In Cloud Shell, confirm the alias IP ranges that are assigned to ASCS and ERS instances:
```
gcloud compute instances describe ASCS_HOST_NAME \
    --zone ASCS_ZONE_NAME \
    --format="flattened(name,networkInterfaces[].aliasIpRanges)"
gcloud compute instances describe ERS_HOST_NAME \
    --zone ERS_ZONE_NAME \
    --format="flattened(name,networkInterfaces[].aliasIpRanges)"
```
In Cloud Shell, update the network interfaces. If you don't need to retain any alias IPs, then specify `--aliases ""`.

Deallocate the alias IPs for ASCS:

```
gcloud compute instances network-interfaces update ASCS_HOST_NAME \
    --zone ASCS_ZONE_NAME --aliases "IP_RANGES_TO_RETAIN"
```
Replace `IP_RANGES_TO_RETAIN` with the IP addresses that you want to retain.

Deallocate the alias IPs for ERS:

```
gcloud compute instances network-interfaces update ERS_HOST_NAME \
    --zone ERS_ZONE_NAME --aliases "IP_RANGES_TO_RETAIN"
```
Create the VIP forwarding rule and clean up
Create new frontend forwarding rules for the load balancer, specifying the IP addresses that were previously used for the alias IPs as the frontend IP addresses. These addresses are the VIPs.
To create the VIP forwarding rule, complete the following steps:
Create the ASCS forwarding rule:
```
gcloud compute forwarding-rules create ASCS_RULE_NAME \
    --load-balancing-scheme internal \
    --address VIP_ADDRESS \
    --subnet CLUSTER_SUBNET \
    --region CLUSTER_REGION \
    --backend-service BACKEND_SERVICE_NAME \
    --ports ALL
```
Create the ERS forwarding rule:
```
gcloud compute forwarding-rules create ERS_RULE_NAME \
    --load-balancing-scheme internal \
    --address VIP_ADDRESS \
    --subnet CLUSTER_SUBNET \
    --region CLUSTER_REGION \
    --backend-service BACKEND_SERVICE_NAME \
    --ports ALL
```
Confirm that the forwarding rules have been created. Note the name of the temporary forwarding rule for deletion:
```
gcloud compute forwarding-rules list
```
Delete the temporary forwarding rule:
```
gcloud compute forwarding-rules delete TEMPORARY_RULE_NAME \
    --region CLUSTER_REGION
```
Release the temporary IP address that you reserved:

```
gcloud compute addresses delete TEST_VIP_NAME \
    --region CLUSTER_REGION
```
Edit the alias primitive resource in the cluster configuration
On the ASCS instance, as the root user, edit the alias primitive resource definition:
```
sudo crm configure edit ALIAS_RSC_NAME
```
Replace `ALIAS_RSC_NAME` with the name of the alias primitive resource.

The resource definition opens in a text editor, such as vi.
Make the following changes to the resource in the Pacemaker HA cluster configuration:
- Replace the `ocf:gcp:alias` resource class with `anything`.
- Change the `op monitor` interval to `interval=10s`.
- Change the `op monitor` timeout to `timeout=20s`.
- Add the `op_params depth=0` parameter.
- Remove the `op start` and `op stop` operation definitions.
- Remove `meta priority=10`.
- Remove the following alias IP parameters:

```
alias_ip="10.0.0.10/32"
hostlist="vm1 vm2"
gcloud_path="/usr/bin/gcloud"
logging=yes
```
Add the following health check service parameters:
```
binfile="/usr/bin/socat"
cmdline_options="-U TCP-LISTEN:HEALTHCHECK_PORT_NUM,backlog=10,fork,reuseaddr /dev/null"
```
Replace `HEALTHCHECK_PORT_NUM` with the health check port that you specified when you created the health check and configured the `socat` utility.
The following is an example cluster configuration that uses alias IP:
```
primitive rsc_vip_ascs01_alias ocf:gcp:alias \
    op monitor interval=60s timeout=60s \
    op start interval=0 timeout=300s \
    op stop interval=0 timeout=300s \
    params alias_ip="10.1.0.100/32" hostlist="vm1 vm2"
```
When you edit this cluster configuration, the resource definition for the health check service is similar to the following example:
```
primitive rsc_vip_ascs01_alias anything \
    op monitor interval=10s timeout=20s \
    op_params depth=0 \
    params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:HEALTHCHECK_PORT_NUM_ASCS,backlog=10,fork,reuseaddr /dev/null"
```
Similarly, edit the resource for ERS.
After you successfully edit it, the resource definition for the health check service is similar to the following example:
```
primitive rsc_vip_ers03_alias anything \
    op monitor interval=10s timeout=20s \
    op_params depth=0 \
    params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:HEALTHCHECK_PORT_NUM_ERS,backlog=10,fork,reuseaddr /dev/null"
```
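After editing both primitives, a quick grep over the `crm configure show` output can confirm that no `ocf:gcp:alias` resources remain and that both health check resources now invoke `socat`. The sample text below stands in for the live command output and uses example port numbers `60000` and `60001`.

```shell
# Stand-in for `crm configure show` output after the edit; in real use,
# pipe the live command output into the same greps.
edited=$(cat <<'EOF'
primitive rsc_vip_ascs01_alias anything \
    params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:60000,backlog=10,fork,reuseaddr /dev/null"
primitive rsc_vip_ers03_alias anything \
    params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:60001,backlog=10,fork,reuseaddr /dev/null"
EOF
)

# No alias resources should survive the edit (prints 0)...
echo "$edited" | grep -c 'ocf:gcp:alias' || true

# ...and both primitives should use socat as the health check binary (prints 2).
echo "$edited" | grep -c 'binfile="/usr/bin/socat"'
```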
Test the updated HA cluster
After completing the migration, verify that the internal passthrough Network Load Balancer correctly steers traffic to the active node. To verify, perform the following checks:
- Check infrastructure health
- Test network connectivity
- Simulate a failover
- Confirm lock entries are retained
Check infrastructure health
Identify the nodes that run the ASCS resource group and the standby node:
```
crm status
```
Verify that Google Cloud correctly identifies the active and standby nodes:
```
gcloud compute backend-services get-health BACKEND_SERVICE_NAME_ASCS \
    --region CLUSTER_REGION
```
The node that runs the ASCS resource group must show `healthState: HEALTHY`. The standby node must show `healthState: UNHEALTHY`.
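The `get-health` output is YAML with one `healthState` entry per backend instance, so a script can count the healthy backends directly. In a two-node ASCS cluster, exactly one backend should report `HEALTHY`. The sample below is a simplified stand-in for the real gcloud output (paths abbreviated).

```shell
# Simplified stand-in for `gcloud compute backend-services get-health` output.
health=$(cat <<'EOF'
status:
  healthStatus:
  - healthState: HEALTHY
    instance: .../instances/vm1
  - healthState: UNHEALTHY
    instance: .../instances/vm2
EOF
)

# Count backends reporting HEALTHY; the anchor avoids matching UNHEALTHY.
healthy=$(echo "$health" | grep -c 'healthState: HEALTHY$')
echo "healthy backends: $healthy"
```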
Test network connectivity
Confirm that the VIP is reachable through the load balancer from a remote node or the partner node, by completing the following steps:
Test the ASCS health check port:
```
nc -zv ASCS_VIP HEALTHCHECK_PORT_NUM_ASCS
```
The output must include `Connection succeeded`.

Test the SAP message server port (if SAP is started):

```
nc -zv ASCS_VIP 36ASCS_INSTANCE_NO
```

The output must include `Connection succeeded`.
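The message server port in the test above is derived from the instance number: `36` followed by the two-digit instance number. A small helper (hypothetical, for illustration) makes the zero padding explicit; `10#` forces base-10 arithmetic so that instance numbers written with a leading zero, such as `08`, are not read as octal.

```shell
# Compute the SAP message server port 36<NN> for an instance number.
message_server_port() {
  printf '36%02d\n' "$((10#$1))"
}

message_server_port 0    # instance 00 -> 3600
message_server_port 11   # instance 11 -> 3611
```

For example, an ASCS instance number of `01` gives port `3601`, which you would then probe with `nc -zv ASCS_VIP 3601`.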
Simulate a failover
To make sure that the load balancer automatically redirects traffic to the new primary node, simulate a failover event in your cluster by completing the following steps:
Trigger failover by identifying the active ASCS node and bringing down the network interface on this node:
```
sudo ip link set eth0 down
```
Observe the failover by monitoring `crm status`. The cluster detects the node failure, triggers fencing, and relocates the ASCS resource group to the partner node.

Monitor the backend service health status:

```
watch -d gcloud compute backend-services get-health BACKEND_SERVICE_NAME_ASCS \
    --region CLUSTER_REGION
```
Within 15–30 seconds, the previously `HEALTHY` node becomes `UNHEALTHY`, and the new active node becomes `HEALTHY`.
Confirm lock entries are retained
To confirm lock entries are preserved across a failover, first select the tab for your version of the Enqueue Server, and then follow the procedure to generate lock entries, simulate a failover, and confirm that the lock entries are retained after ASCS is activated again.
ENSA1
As `SID_LCadm`, on the server where ERS is active, generate lock entries by using the `enqt` program:

```
> enqt pf=/PATH_TO_PROFILE/SID_ERSERS_INSTANCE_NUMBER_ERS_VIRTUAL_HOST_NAME 11 NUMBER_OF_LOCKS
```

As `SID_LCadm`, on the server where ASCS is active, verify that the lock entries are registered:

```
> sapcontrol -nr ASCS_INSTANCE_NUMBER -function EnqGetStatistic | grep locks_now
```

If you created 10 locks, your output is similar to the following example:

```
locks_now: 10
```
As `SID_LCadm`, on the server where ERS is active, start the monitoring function, `OpCode=20`, of the `enqt` program:

```
> enqt pf=/PATH_TO_PROFILE/SID_ERSERS_INSTANCE_NUMBER_ERS_VIRTUAL_HOST_NAME 20 1 1 9999
```

For example:

```
> enqt pf=/sapmnt/AHA/profile/AHA_ERS10_ers-aha-vip 20 1 1 9999
```

Where ASCS is active, reboot the server.

On the monitoring server, by the time Pacemaker stops ERS to move it to the other server, your output is similar to the following:

```
Number of selected entries: 10
Number of selected entries: 10
Number of selected entries: 10
Number of selected entries: 10
Number of selected entries: 10
```
When the `enqt` monitor stops, exit the monitor by entering `Ctrl + c`.

Optionally, as the root user on either server, monitor the cluster failover:

```
# crm_mon
```

As `SID_LCadm`, after you confirm the locks were retained, release the locks:

```
> enqt pf=/PATH_TO_PROFILE/SID_ERSERS_INSTANCE_NUMBER_ERS_VIRTUAL_HOST_NAME 12 NUMBER_OF_LOCKS
```

As `SID_LCadm`, on the server where ASCS is active, verify that the lock entries are removed:

```
> sapcontrol -nr ASCS_INSTANCE_NUMBER -function EnqGetStatistic | grep locks_now
```
ENSA2
As `SID_LCadm`, on the server where ASCS is active, generate lock entries by using the `enq_admin` program:

```
> enq_admin --set_locks=NUMBER_OF_LOCKS:X:DIAG::TAB:%u pf=/PATH_TO_PROFILE/SID_ASCSASCS_INSTANCE_NUMBER_ASCS_VIRTUAL_HOST_NAME
```

As `SID_LCadm`, on the server where ASCS is active, verify that the lock entries are registered:

```
> sapcontrol -nr ASCS_INSTANCE_NUMBER -function EnqGetStatistic | grep locks_now
```

If you created 10 locks, your output is similar to the following example:

```
locks_now: 10
```

Where ERS is active, confirm that the lock entries were replicated:

```
> sapcontrol -nr ERS_INSTANCE_NUMBER -function EnqGetStatistic | grep locks_now
```

The number of returned locks must be the same as on the ASCS instance.
Where ASCS is active, reboot the server.
Optionally, as the root user, on either server, monitor the cluster failover:

```
# crm_mon
```

As `SID_LCadm`, on the server where ASCS was restarted, verify that the lock entries were retained:

```
> sapcontrol -nr ASCS_INSTANCE_NUMBER -function EnqGetStatistic | grep locks_now
```

As `SID_LCadm`, on the server where ERS is active, after you confirm the locks were retained, release the locks:

```
> enq_admin --release_locks=NUMBER_OF_LOCKS:X:DIAG::TAB:%u pf=/PATH_TO_PROFILE/SID_ERSERS_INSTANCE_NUMBER_ERS_VIRTUAL_HOST_NAME
```

As `SID_LCadm`, on the server where ASCS is active, verify that the lock entries are removed:

```
> sapcontrol -nr ASCS_INSTANCE_NUMBER -function EnqGetStatistic | grep locks_now
```

Your output is similar to the following example:

```
locks_now: 0
```