Concepts of High Availability in HANA

 

Introduction to High Availability in HANA

High availability is about minimizing both planned and unplanned downtime.  Planned downtime could be for server patching, while unplanned downtime is caused by a fault in some component that takes the service offline.  The high availability solution has to cost less than the cost of the downtime it avoids, or you are wasting money.

To develop a business case for high availability, first understand the costs of the planned downtime that you require to maintain the system, and the cost per hour of unplanned downtime occurring.  If you don’t know these costs, stop and work them out with the business users.

A HANA high availability solution must be designed around your company's specific availability requirements. Traditionally, companies had to choose between disaster recovery (DR), which used offsite replication, and high availability (HA), which used redundant onsite replication. DR was more resilient, since it could withstand failure of the primary site, but it was much slower to restart and could have a much higher RPO (recovery point objective, i.e. potential data loss) and RTO (recovery time objective, i.e. time to restore service), leading to greater business disruption. HA provided near-instant failover, but could be damaged along with the primary site, for example by a flood.

Cloud disaster recovery and high availability allow a great deal more flexibility in meeting your availability and business continuity needs. Organizations often benefit from combining both in a three-tier configuration: SAP HANA HA eliminates downtime in the event of a hardware, database, or application failure, while disaster recovery serves as a fall-back, ensuring that even if a disaster disrupts the primary hosting site, the company can resume operations quickly.

This blog describes how the cluster is set up to control SAP HANA in a system replication scenario, focusing on how the SUSE cluster works together with SAP HANA System Replication. The setup builds a two-node SAP HANA HA cluster on two SLES for SAP 12 SP2 systems.

Parameters used in this Scenario

 

Parameter               Value
Node 1                  IP: XXXXXXXX, Host Name: HAPRD
Node 2                  IP: XXXXXXX, Host Name: HAPRDSHD
SID                     XXX
Instance Number         02
Virtual IP              XXXXX.156
STONITH (SBD device)    IPMI

 

SAP HANA Database Installation Check on Both Cluster Nodes

Verify that both databases are up and that all processes of these databases are running correctly. As Linux user <sid>adm (hspadm in this setup), use the following command to get an overview of the running HANA processes.

sidadm@HAPRD:/usr/sap/SID/HDB02 > HDB info
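The output lists the HANA processes running on the node. An abridged, illustrative example (PIDs, columns, and the profile path are placeholders; your output will differ):

USER       PID  ...  COMMAND
hspadm    5561  ...  sapstart pf=/usr/sap/HSP/SYS/profile/HSP_HDB02_HAPRD
hspadm    5569  ...  hdbdaemon
hspadm    5586  ...  hdbnameserver
hspadm    5674  ...  hdbindexserver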

Verify the state of system replication. The command-line tool hdbnsutil can be used to check the system replication mode and site name.

sidadm@HAPRD:/usr/sap/SID/HDB02 > hdbnsutil -sr_state
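On the primary node the output should report mode "primary"; on the secondary it reports the configured replication mode (e.g. "sync"). An illustrative example (the site name is a placeholder):

checking for active or inactive nameserver ...
System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~
mode: primary
site id: 1
site name: SITEA
done.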

Create a Hana DB User for Replication

Create a new user and assign the system privilege DATA ADMIN to it. We use the user slehasync here. If the system privilege DATA ADMIN is too powerful for your security policies, check the SAP documentation for more granular privileges. Run the commands below as the root user on one cluster node to create the user and grant the privilege.

HAPRD:~ # PATH="$PATH:/usr/sap/SID/HDB02/exe"

HAPRD:~ # hdbsql -u system -i 02 'CREATE USER slehasync PASSWORD L1pass'

HAPRD:~ # hdbsql -u system -i 02 'GRANT DATA ADMIN TO slehasync'

HAPRD:~ # hdbsql -u system -i 02 'ALTER USER slehasync DISABLE PASSWORD LIFETIME'
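To confirm that the user was created, you can query the standard system view USERS (a simple sanity check, not part of the official procedure):

HAPRD:~ # hdbsql -u system -i 02 "SELECT USER_NAME FROM USERS WHERE USER_NAME = 'SLEHASYNC'"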

Note: While the database user needs to be created on only one node (the other node receives the user through system replication), the user keys for password-free access need to be created on both nodes, because the user keys are stored in the file system and not in the database.

The name of the user key "slehaloc" is a fixed name used by the resource agent. The port should be set to 3nn15, where nn is the SAP instance number; with instance number "02" this gives port 30215.

HAPRD:~ # PATH="$PATH:/usr/sap/SID/HDB02/exe"

HAPRD:~ # hdbuserstore SET slehaloc localhost:30215 slehasync L1pass
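You can list the stored keys on each node to confirm that the entry exists:

HAPRD:~ # hdbuserstore LIST
KEY slehaloc
  ENV : localhost:30215
  USER: slehasync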

Verify the created setup as Linux user root

The option "-U key" tells hdbsql to use the credentials stored in the secure store. The table DUMMY is available to every database user; each user has its own DUMMY table. So far, the following test only shows that the user key has been defined correctly and that we can log in to the database.

HAPRD:~ # hdbsql -U slehaloc "select * from dummy"

If the query returns a row from DUMMY, the key works and you can proceed.
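The expected output looks like this (DUMMY always returns a single row with the value X):

DUMMY
"X"
1 row selected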

 

Configuration of Cluster

The base cluster framework was set up using YaST, and communication is configured using ucast, with HAPRD as the first node and HAPRDSHD as the second node.

To configure resources in the cluster, use the following command and add the resources as shown below.

HAPRD:~ # crm configure edit
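For reference, the global options of a SAPHanaSR two-node cluster typically look like the sketch below; the values follow the SUSE best-practice guide, and the timeouts are assumptions to adapt to your environment.

property $id="cib-bootstrap-options" \
    no-quorum-policy="ignore" \
    stonith-enabled="true" \
    stonith-action="reboot" \
    stonith-timeout="150s"

rsc_defaults $id="rsc-options" \
    resource-stickiness="1000" \
    migration-threshold="5000"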

SBD: Using IPMI as fencing mechanism

Each STONITH resource is responsible for fencing exactly one cluster node. You need to adapt the IP addresses and the login user/password of the remote management boards in the STONITH resource definitions.
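A minimal sketch of such a pair of resources using the external/ipmi STONITH agent follows; the management-board IPs, user, and password are placeholders to adapt:

# fences node HAPRD, so it must not run on HAPRD itself
primitive rsc_stonith_ipmi_HAPRD stonith:external/ipmi \
    params hostname="HAPRD" ipaddr="XX.XX.XX.XX" userid="ADMIN" passwd="secret" interface="lan" \
    op monitor interval="1800" timeout="30"
location loc_stonith_ipmi_HAPRD rsc_stonith_ipmi_HAPRD -inf: HAPRD

# mirrored definition for the second node
primitive rsc_stonith_ipmi_HAPRDSHD stonith:external/ipmi \
    params hostname="HAPRDSHD" ipaddr="XX.XX.XX.XX" userid="ADMIN" passwd="secret" interface="lan" \
    op monitor interval="1800" timeout="30"
location loc_stonith_ipmi_HAPRDSHD rsc_stonith_ipmi_HAPRDSHD -inf: HAPRDSHD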

SAP HANA Topology

SAPHanaTopology is configured as a resource in the cluster. It is a cloned resource running on both nodes.

primitive rsc_SAPHanaTopology_HSP_HDB02 ocf:suse:SAPHanaTopology \
    op monitor interval="02" timeout="600" \
    op start interval="0" timeout="600" \
    op stop interval="0" timeout="300" \
    params SID="HSP" InstanceNumber="02"

clone cln_SAPHanaTopology_HSP_HDB02 rsc_SAPHanaTopology_HSP_HDB02 \
    meta clone-node-max="1"

SAP HANA

The next resource is the HANA database itself. It is a master/slave (multi-state) resource.

primitive rsc_SAPHana_HSP_HDB02 ocf:suse:SAPHana \
    op start interval="0" timeout="3600" \
    op stop interval="0" timeout="3600" \
    op promote interval="0" timeout="3600" \
    op monitor interval="60" role="Master" timeout="700" \
    op monitor interval="61" role="Slave" timeout="700" \
    params SID="HSP" InstanceNumber="02" PREFER_SITE_TAKEOVER="true" \
        DUPLICATE_PRIMARY_TIMEOUT="3600" AUTOMATED_REGISTER="false"

ms msl_SAPHana_HSP_HDB02 rsc_SAPHana_HSP_HDB02 \
    meta clone-max="2" clone-node-max="1" interleave="true"
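Once both resources are running, the SAPHanaSR-showAttr tool shipped with the SAPHanaSR package summarizes the replication attributes maintained by the agents. Illustrative output (the exact columns vary by package version; site names are placeholders):

HAPRD:~ # SAPHanaSR-showAttr
Hosts     clone_state  roles                             site
----------------------------------------------------------------
HAPRD     PROMOTED     4:P:master1:master:worker:master  SITEA
HAPRDSHD  DEMOTED      4:S:master1:master:worker:master  SITEB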

Virtual IP:

The virtual IP is configured as a resource. It runs on the primary HANA server, so clients always connect to the current primary.

primitive rsc_ip_HSP_HDB02 ocf:heartbeat:IPaddr2 \
    op monitor interval="02s" timeout="20s" \
    params ip="02.34.85.156"
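After the resource starts, the address should be visible on the primary node; a quick check (the interface name eth0 is an assumption):

HAPRD:~ # ip addr show eth0 | grep inet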

Resource Constraints and Co-Location

Two constraints organize the correct placement of the virtual IP address for client database access and the start order between the two resource agents SAPHana and SAPHanaTopology.

colocation col_saphana_ip_HSP_HDB02 2000: rsc_ip_HSP_HDB02:Started msl_SAPHana_HSP_HDB02:Master

order ord_SAPHana_HSP_HDB02 Optional: cln_SAPHanaTopology_HSP_HDB02 msl_SAPHana_HSP_HDB02

 

Cluster Resource

To view the cluster configuration, log in to any one of the nodes and execute:

HAPRD:~ # crm configure show

For the cluster status:

HAPRD:~ # crm status
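On a healthy cluster the status should resemble the following illustrative output (resource names match the definitions above; the STONITH resource names follow the earlier sketch):

Online: [ HAPRD HAPRDSHD ]

 rsc_stonith_ipmi_HAPRD     (stonith:external/ipmi):   Started HAPRDSHD
 rsc_stonith_ipmi_HAPRDSHD  (stonith:external/ipmi):   Started HAPRD
 Clone Set: cln_SAPHanaTopology_HSP_HDB02 [rsc_SAPHanaTopology_HSP_HDB02]
     Started: [ HAPRD HAPRDSHD ]
 Master/Slave Set: msl_SAPHana_HSP_HDB02 [rsc_SAPHana_HSP_HDB02]
     Masters: [ HAPRD ]
     Slaves: [ HAPRDSHD ]
 rsc_ip_HSP_HDB02   (ocf::heartbeat:IPaddr2):   Started HAPRD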

Or

Log in to HAWK at https://<hostname>:<port number> (the default HAWK port is 7630)

User: hacluster Password: <password here>
