
SAP HANA 2.0 XS Advanced: a host auto-failover installation/configuration example


Introduction


While the host auto-failover configuration with the core HANA database is well known, the additional steps needed when XS Advanced (XSA) is installed are less well known. In the steps below I give an example of how it can be configured. As you read through the notes, several decisions must be made for your own implementation, so your configuration may differ.

◉ Failover & High Availability with SAP HANA extended application services, advanced model
https://launchpad.support.sap.com/#/notes/2300936
◉ Domains and routing configuration for SAP HANA extended application services, advanced model
https://launchpad.support.sap.com/#/notes/2245631
◉ Providing SSL certificates for domains defined in SAP HANA extended application services, advanced model
https://launchpad.support.sap.com/#/notes/2243019

Pay special attention to the routing mode: the default in the hdblcm installer is port routing, but as documented in SAP Note 2245631, hostname routing is recommended for a production system.

For this example, the setup will include:

◉ HANA Installed with two nodes in host auto-failover configuration
◉ Hostname routing
◉ Standalone SAP Web Dispatcher as the chosen failover router
◉ No SSL termination at the failover router
◉ XSA default domain is serge.xs2tests-wdf.sap.corp and configured in the DNS
◉ IP addresses:
     ◉ Failover Web Dispatcher ends in .198
     ◉ XSA Master ends in .199
     ◉ XSA Stand-by ends in .200

Please make sure the above decisions and data points are completed before performing the configuration.

Let’s start by testing the DNS prerequisite: both serge.xs2tests-wdf.sap.corp and *.serge.xs2tests-wdf.sap.corp must resolve to the failover Web Dispatcher.

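A quick way to check this from the command line is with nslookup (a sketch; the hostnames and the .198 address are from my example setup and will differ in yours):

# both the domain itself and any host under it must return the failover Web Dispatcher IP (ends in .198)
nslookup serge.xs2tests-wdf.sap.corp
nslookup anything.serge.xs2tests-wdf.sap.corp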

Failover Web Dispatcher


Install the Failover SAP Web Dispatcher guided by SAP Note 908097, my example uses version 7.49 patch 214. Make sure you can login to the SAP Web Dispatcher Administration URL. Depending on your end-users the SAP Web Dispatcher may be located in a separate zone such as the DMZ.

After the installation, update the SAP Web Dispatcher profile to reflect your hostnames and ports. For HANA, the XSA port will be 3##33, where ## represents the HANA instance number. In the example below the HANA instance number is 01 and the SID is PR1; adjust these values if your instance is different. Replace <XSA Master> and <XSA Standby> with your hostnames (FQDN):

◉ icm/server_port_0=PROT=ROUTER, PORT=30133, TIMEOUT=60, PROCTIMEOUT=600
◉ wdisp/system_0 = NAME=PR1, SID=PR1, SRCVHOST=*:30133, EXTSRV=https://<XSA Master>:30133#MAIN_INSTANCE;https://<XSA Standby>:30133#FAILOVER_INSTANCE
◉ wdisp/server_0=NAME=MAIN_INSTANCE, LBJ=1, ACTIVE=1
◉ wdisp/server_1=NAME=FAILOVER_INSTANCE,LBJ=2147483647, ACTIVE=1
Other settings to consider:


Make sure that the SAP Web Dispatcher is running with the updated configuration before installing HANA. In addition, the port (in my example 30133) has to be open between the Failover Web Dispatcher server and the HANA servers in both directions.

HANA & XSA


Next install HANA and XSA, or if HANA is already installed, add XSA. The HANA 2.0 revision has to be at least revision 21.

In my example, the XS Advanced installation prompts for the routing mode and the XSA domain name:


After the installation, let’s check the result. Make sure you are logged in as the <sid>adm user, issue the command “xs-admin-login” (or “xs login”), and enter the XSA_ADMIN password:


Next, check the URLs with the command “xs service-urls” (or “xs a”):

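For reference, the sequence as the <sid>adm user looks roughly like this (output abbreviated and purely illustrative; your API endpoint and application list will differ):

pr1adm> xs-admin-login
# prompts for the XSA_ADMIN password and targets the default org and space

pr1adm> xs apps
# short form "xs a"; lists each application with requested/running instances and its URL,
# e.g. xsa-admin  STARTED  1/1  https://xsa-admin.serge.xs2tests-wdf.sap.corp:30133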

Verify that the URLs work. A quick test is the xsa-admin URL, since the XSA_ADMIN user already has the required authorizations for it (but not for the webide). Since we have not configured the SSL certificates yet, the browser will show a security warning because a self-signed certificate is used.

The expected xsa-admin url response:


When you inspect the certificate path you will see the self-signed certificate:


Depending on the browser you may be able to click through the warnings and get to the login page, but at this point it is not yet important to login. Just make sure the URL resolves and gets a response. If you get an error response, such as 503, check the output of “xs a” and make sure xsa-admin is in a running state:


For example, if it shows 0/1, check why the instance was not started (see the xs logs command in the administration guide). Note that it is normal for the apps ending in -db to be stopped; these will be started when needed and no user intervention is required.

Certificate Steps


High level steps:

1. In the SAP Web Dispatcher, generate a wildcard certificate request, get it signed by your Certificate Authority of choice, and import the certificate chain
2. From the command line, export the certificate in p12 format, convert it to pem format, and prepare the certificate files for XSA import
3. In the XSA environment, import the certificate files and restart HANA

Step 1

In the SAP Web Dispatcher generate a wildcard certificate request, get it signed by your Certificate Authority of choice, import the certificate chain.

Login to the SAP Web Dispatcher Administrator URL and go to PSE Management. Next select SAPSSLS.pse and “Recreate PSE”


Recreate the PSE with the CN set to a wildcard. In my example, the XSA domain name was set to serge.xs2tests-wdf.sap.corp, so I set the CN to CN=*.serge.xs2tests-wdf.sap.corp. By using the wildcard CN, my certificate applies to webide.serge.xs2tests-wdf.sap.corp, xsa-admin.serge.xs2tests-wdf.sap.corp, etc.


The next step is to create the CA request and have it signed by your CA. Make sure to have the full chain available for import including the root CA and intermediate signing certificates.



The result:


Step 2

From the command line export the certificate in p12 format, convert it to pem format and prepare the certificate files for XSA import.

Login to the Failover Web Dispatcher server as the <sid>adm user (or whichever user is the Linux owner of the Web Dispatcher directories/files). Change to the $SECUDIR directory, in my instance /hana/shared/W01/sec.

Export the certificate chain we just imported in .p12 format. Make sure to set a compliant password and have it available for the import step we’ll execute later. Command:

/hana/shared/W01/sapgenpse export_p12 -p /hana/shared/WD5/sec/SAPSSLS.pse star.serge.xs2tests-wdf.sap.corp.p12


The next step is to convert the exported .p12 file to .pem format. There are several websites that can do this for you; however, using openssl installed locally is a more secure option.

Command:

openssl pkcs12 -in star.serge.xs2tests-wdf.sap.corp.p12 -out star.serge.xs2tests-wdf.sap.corp.pem -nodes


Please be aware that using the -nodes option exports the private key unencrypted. Hence cleanup of any files containing the unencrypted private key should be performed after completing the setup.

The next step is to take parts of the .pem file and create certificate files importable by XSA.

Use your favorite editor to create two new files: one will contain the private key and the other the certificate chain. Make sure the lines for “bad attributes”, “subject”, and “issuer” are not part of the new files.

In the private key file copy the “PRIVATE KEY” section, including begin and end line from the .pem file.

In the chain file copy the certificates, including begin and end line from the .pem file.

Example pkey.pem:

-----BEGIN PRIVATE KEY-----

Exported Private Key

-----END PRIVATE KEY-----

Example chain.pem:

-----BEGIN CERTIFICATE-----

Server certificate

-----END CERTIFICATE-----

-----BEGIN CERTIFICATE-----

Intermediate/Signing certificate

-----END CERTIFICATE-----

-----BEGIN CERTIFICATE-----

Root certificate

-----END CERTIFICATE-----
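Before copying the files, you can sanity-check them locally with openssl (a minimal sketch, assuming the files are named pkey.pem and chain.pem as above):

# the private key should parse without errors
openssl rsa -in pkey.pem -check -noout

# the first certificate in the chain file should carry the wildcard CN and a valid expiry date
openssl x509 -in chain.pem -noout -subject -enddate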

Once the files are created, copy both files to the Master XSA host as the <sid>adm user of the XSA Master HANA instance. As the target directory, you can select $SECUDIR, which defaults to /usr/sap/<SID>/HDB<##>/<hostname>/sec (example /usr/sap/PR1/HDB01/XSA-Master/sec).

Example:


Step 3

Next, we can import the certificates; note that a restart of the xscontroller will be required. Make sure you are logged in as the <sid>adm user, issue the command “xs-admin-login” (or “xs login”), and enter the XSA_ADMIN password:


The command is:

xs set-certificate <XSA domain> -k <private keyfile> -c <chain file>

In my example:

xs set-certificate serge.xs2tests-wdf.sap.corp -k /usr/sap/PR1/HDB01/ld9994/sec/pkey -c /usr/sap/PR1/HDB01/ld9994/sec/chain


As shown in the screenshot, we next need to restart the xscontroller. There are several ways to do this; in this example we show it from the HANA Studio (be aware that another option is to restart the entire XSA using the command “XSA restart”). Open the Administration perspective connected to the SYSTEMDB, go to the Landscape tab, right-click xscontroller, and choose Stop. You will get a pop-up message that the xscontroller might restart; since automatic restart is the default setting and this is a newly installed instance, the xscontroller will start automatically.


And confirm:


You can monitor the xscontroller restart from the landscape tab in the HANA studio, or check the xscontroller_0.log file in the Diagnosis Files tab or at the OS level in the SAP HANA trace directory.

Once the xscontroller has restarted, open a new browser and retest the xsa-admin URL. The browser security status should now show green after logging in:


When you inspect the certificate path you will see the full chain:


Now you are ready to test the failover. Make sure that the HANA 2.0 revision is at least revision 21.
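If you are unsure which revision you are on, you can check it as the <sid>adm user; for example (the output is illustrative):

pr1adm> HDB version
# prints the version string, e.g. 2.00.021.00...; the third group (021) is the revision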

HANA: First adaption with NVM

At SAPPHIRE this year, SAP and Intel announced some new details regarding Skylake and NVM (non-volatile memory).

With the new processors it should be possible to gain ~60% more performance when running HANA workloads on them. Additionally, new DIMMs based on 3D XPoint technology (NVM) help overcome the traditional I/O bottlenecks that slow data flows and limit application capacity and performance.

Until now there were no details about how SAP will use this new technology. It can be used as a filesystem or in an in-memory format.

1. Initial situation


◉ tests only include the in-memory format for NVM
◉ tests only consider the main table fragments
◉ each Main Column Fragment locates its associated NVRAM block and points directly to its column vector and dictionary backing arrays
◉ no change in the process of data creation for a new Main Column Fragment




Scenario 1 (2.1 + 2.2) OLTP and OLAP:

◉ Used table 4 million records, 500 columns
◉ Hardware: Intel Xeon processors E5-4620 v2 with 8 cores each, running at 2.6 GHz without Hyper-Threading

Scenario 2 (2.3):

◉ Used table 4 million records, 500 columns, size ~5GB
◉ Hardware: Intel(R) Xeon(R) CPU E7-8880 v2 @ 2.50GHz with 1TB Main Memory

2. Results in a nutshell


2.1 OLTP workload

2.1.1 Inserts

Insert performance is not affected, because the delta store, where the data is written in case of an insert, remains in DRAM.


2.1.2 Single Select

Single select at 6x latency: runtime +~66%


2.2 OLAP workload

2.2.1 I/O Throughput

I/O throughput at 7x latency decreases by ~3%


2.2.2 OLAP Memory bandwidth

Read performance DRAM : NVRAM is a factor of 2.27.
Write operations (delta store) take place only in DRAM, so no comparison is possible.


2.3 Restart times and memory footprint

2.3.1 Table preload

Tables with 400 to 4,000,000 records are preloaded (table size 5 GB, 4 million records, 100 columns). The time needed when using only DRAM rises sharply beyond 400,000 rows; in the worst case it is a factor of 100!


2.3.2 DRAM consumption

DRAM savings if you place the main fragments into NVM, at 4 million rows: ~8 GB

These measurements don’t include the ~5 GB of shared memory allocated from NVRAM. There are no exact details on how this scales with bigger or more tables.

“For the NVRAM- aware HANA case, we could see ~5GB being allocated from NVRAM based on Shared Memory (i.e. /dev/shm) when all 4 million rows of the table are loaded”
Also observed:

“In DRAM case, majority of CPU cost is being spent in disk-based reads whereas for NVRAM, the disk reads are absent since we map data directly into process address space.”


3. Challenges


Here are some challenges, some of which are specific to HANA. The whitepaper was written in fairly general terms so that it can also be adapted for other databases.

3.1 Possibility for table pinning

Pinning of tables is already available in some other databases. It means that defined tables are permanently placed in main memory; no LRU (least recently used) eviction will unload them.

With NVM, most of the tables can be placed in the cheaper and bigger memory. But might it make sense to keep some important, frequently accessed tables permanently in main (DRAM) memory?

3.2 Load / unload affect NVRAM data?

Currently, when a DBA or the system itself unloads a table (because of the configured residence time), the table is removed from main memory and has to be loaded again from disk on the next access. What happens if I use NVM? Will the data remain as main column fragments in NVRAM blocks, or will it be unloaded to disk?
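For context, this is how a DBA unloads, reloads, and checks the load state of a column table today (a sketch using hdbsql; instance number, credentials, schema, and table name are placeholders):

hdbsql -i 00 -u SYSTEM -p <password> "UNLOAD MYSCHEMA.MYTABLE"
hdbsql -i 00 -u SYSTEM -p <password> "LOAD MYSCHEMA.MYTABLE ALL"
hdbsql -i 00 -u SYSTEM -p <password> "SELECT LOADED FROM M_CS_TABLES WHERE SCHEMA_NAME='MYSCHEMA' AND TABLE_NAME='MYTABLE'"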

3.3 Data aging aspects

With Data Aging it is possible to keep the important data in memory and place historical data permanently on disk. This saves a lot of main memory, and the data can still be changed; it is not archiving. If you access historical data, it will be placed in main memory for some time.

With NVM, the current main fragments would be placed in NVRAM, the delta fragments remain in main memory (DRAM), and the historical data is placed on disk. Maybe it would make sense to place the pool for accessed historical data in NVM (the page_loadable_columns heap pool).

Or as explained in 3.1 place current (=hot) data in DRAM and historical data in NVM.

4. Missing aspects


Some important questions and tests were not considered or have not been released.

4.1 Delta Merge times

The data from main (NVRAM) and delta (RAM) have to be merged into the new main store, which is also located in NVRAM. In the old architecture this process happened entirely in memory and was very fast. How will this affect the system and the runtimes?

4.2 Savepoint times

The savepoint (by default every 5 minutes) writes the changes of all modified rows of all tables to the persistence layer. This data is mostly read from the delta store. But after a delta merge involving many rows, the savepoint runtime can increase because of the slower performance of NVM. For example, in BW a big ETL process may run with the delta merge deactivated (normal for some BW processes) and finish shortly before the savepoint starts, so a final delta merge is executed. All new entries are then transferred to the main store, and all data since the last savepoint has to be read from the new main store, which is placed in NVRAM.

A long-running savepoint can lead to a performance issue, because it has a critical phase that holds certain locks. The longer the critical phase, the more the system’s performance is affected.

4.3 Other tables / heap pools

Temp tables, Indexes, BW PSA and changelogs, result cache and page cache may be placed in NVM.

4.4 Growth

What happens if an OOM (out-of-memory) situation occurs? In this case the pointers in main memory may be lost, which leads to a re-initialization of the NVRAM.

What happens if an OONVM (out of non-volatile memory) situation occurs? What is the fallback?

5. Summary

NVM can be a game changer, but there is still a long way to go before we know how SAP will adapt it for HANA. There are a lot of possibilities, but also some challenges left.

So the amount of data that can be kept in memory can be extended, at the cost of some performance. If your business can accept this performance, you can save a lot of memory, and thereby money, so NVRAM can definitely reduce your TCO.

New Video Tutorial Series: Studio and Cockpit

Some things are vintage and some things are just old. While we were digging through the archives, we found one of our older videos and decided to dust it off and give it an update. Who says something old can’t be new again? And speaking of new, if you’re starting to learn about streaming analytics and trying to figure out how to create, run, or test a project in studio, this is the perfect video for you.

Ready for more good news? Silly question, I know. That’s like asking if you want the last slice of pizza. Anyways, the good news is that this video is part of a tutorial series. So, after you’ve learned the basics of creating and working with projects in studio, you can then learn how to use cockpit to monitor those projects. The bad news? You’ll have to wait for the end of November for the second video to come out. You know the saying: good things come to those who wait!

On the bright side, this blog does offer a sneak-peek of the second video. Here are the highlights of each video. Happy reading!

Part 1: Studio – Creating, Running, and Testing a Project


In part 1, we:
◉ create a project that filters data and writes it to HANA,
◉ compile and execute the project, and
◉ test the project by manually loading data to the input stream and checking the results.

Here’s a screengrab from the video demo showing us setting a filter:


Only values greater than 20 in the input stream will show in SAP HANA. After setting the filter, we add an SAP HANA Output adapter. Here we are configuring the adapter to write to SAP HANA:


We’re using the ‘hanadb’ service, connecting to the ‘SYSTEM’ schema, and writing to the target table in SAP HANA, ‘TABLE_TEST’.

After running the project, we test it by manually loading two rows into the input stream: the first with a value below 20 (9), and the second with a value above 20 (21):


Because of the filter, only the second row we added shows in SAP HANA:


That’s the gist of the first video tutorial. Check it out for yourself here. Once you’ve completed the studio tutorial, you’re ready to start monitoring streaming in cockpit! (or you will be once we post the video later this month. Stay tuned!)

Part 2: Cockpit – Monitoring Streaming Analytics


In part 2, we monitor:

◉ streaming analytics and general system behavior, using the Monitoring and Administration section, and
◉ the streaming project we created in part 1, via the Streaming Analytics section.

Here’s a screengrab from the video showing the Monitoring and Administration section:


It shows the behavior of the resource you’re connected to, including:

◉ overall database status,
◉ number of alerts, and
◉ memory, CPU, and disk usage.

If you select Show all in the Alerts tile, you can view and configure alerts. In the video, we look at the Inactive Streaming applications alert:


We show you how to:

◉ set thresholds for prioritized alerting,
◉ add email recipients, and
◉ trigger alert checkers.

After monitoring system behavior, next, we monitor the project we created in the previous video, ‘testproject’. We look at the System, Network, Streams, and Adapters tabs. In the Streams tab, you can view QueueDepth and RowsInStore. Here’s a screengrab from the video:


PriceFeed and VWAP have 99 and 26 records in their log store, respectively. If you select a stream, you can see more detailed info, including rows/transaction throughput history.

Finally, in the Adapters tab, you can view the adapters in the project and their status:


And that’s all she wrote!

So, whether you’re a newbie looking to get your feet wet or a wily veteran just looking for a refresher, be sure to check out our video tutorial series on studio and cockpit. Part 1 is already available, and part 2 will be out very soon. Also, if you have any ideas for more videos we could make, let us know in the comments below!

Debugging HANA Procedures


1. Introduction to Debugging HANA Procedures in Eclipse


As we move more and more towards developing applications on the native HANA stack, or use hybrid scenarios that combine both worlds (BW and HANA), it is important to understand how to debug procedures using the Eclipse “Debug” perspective.

There are different development scenarios in which we use procedures (.hdbprocedure) or for which the system generates a procedure; for example, a HANA expert-script-based transformation creates a procedure inside the generated AMDP class. This blog series details how to debug various procedures.


1. HANA Stored Procedures (.hdbprocedure)
2. HANA expert script based transformation

Script-based calculation views can also be debugged by using the wrapper procedure that the system creates in the schema “_SYS_BIC”. The method to debug these objects is the same as debugging any native procedure, which will be explained in the later sections of this blog. Additionally, there are many scenarios in which the system generates wrapper procedures, for example for Decision Tables.

This is part 1 of the series, which explains debugging native stored procedures; in part 2, I will discuss debugging the HANA expert-script-based transformation (BW 7.4).

1.1 Pre-requisites

◉ Some versions of HANA require an XS project to be created for debugging to work.
◉ The mandatory privileges required for the user (debug user) on the schema where the procedures are stored are:

1. SELECT
2. EXECUTE
3. DEBUG

The “Debug” perspective should be switched on.

1.1.2 Creating the XS Project

Step1:  Switch to the HANA Development Perspective and go to the Project explorer view.


Step 2: Go to the context menu and choose New => Project => Other


Step 3: In the selection wizard, choose XS Project.


Step 4: Fill in the relevant details, and select the workspace and the repository package in which the native content has to be created.


The XS Project is ready and is now visible in the project explorer view.


1.1.3 Privileges required to Debug

Below are the mandatory privileges required for the DEBUG user on the schema where the procedures are stored.

For example, the DEBUG user is ‘USER01’ and the schema in which the procedures are present is ‘PENSIONS_ANALYTICS’.

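A minimal sketch of granting these privileges with hdbsql, using the example user and schema above (instance number and password are placeholders):

hdbsql -i 00 -u SYSTEM -p <password> "GRANT SELECT, EXECUTE, DEBUG ON SCHEMA PENSIONS_ANALYTICS TO USER01"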

1.1.4 Switching on the DEBUG Perspective

The Debug perspective can be switched on by selecting the path below in Eclipse.

Window=>Perspective=>Open Perspective=>Debug

If it doesn’t show up, select Other and choose Debug in the dialog.


The Debug perspective opens up as below.


Now we are all set to start debugging the procedures.

2.Debugging a .hdbprocedure


For this scenario I will be using the procedure “TEST_DEBUG” present in the schema “USER01”.

The definition of the procedure is as below.


The procedure has two input parameters, X and Y, and an output parameter Z. The code block contains an intermediate variable “it_var” (line 6), which holds the result of a SELECT statement on the table HVARV in the schema GCM_ADMIN. (I have used this to show how to display table contents during the debugging session.)

Then an addition is performed on X and Y and the result is stored in variable Z (line 9).

Step 1: Go to the XS project “proc_debug” from the Project explorer view.


Step 2:  Drop down the catalog from the SAP HANA System Library and point to the Schema and procedure.


Step 3: Go to the context menu of the procedure you want to debug, choose “Open with SAP HANA Stored Procedure Viewer”, and set breakpoints at lines 6, 9, and 11 by double-clicking.


Step 4: Now right-click and go to Debug => Debug Configurations


Step 5: The Debug Configurations dialog pops up; create a new debug configuration “proc_debug_test”, and from the General tab select “Catalog Schemas” and browse for the procedure “TEST_DEBUG”.


Step 6: Click on the “Input Parameters” tab to see that the input parameters from the procedure definition have been read out. We can give values to these input parameters; in this example, I have entered 10 for X and 5 for Y.


Step 7: Click on “Debug” and open the “Debug” perspective. The screen below shows up.

The debugging thread is suspended by default at the first breakpoint, at line 6. In the pane on the right, we can see the variables filled with values. The output parameter Z of the procedure is also visible, and its value is ‘Null’ (as control is at the first breakpoint, line 6).


You can see the list of active break-points from the Breakpoints tab.


Step 8: To let execution continue to the next breakpoint, press F8. Control is now at line 9, and the intermediate variable “IT_VAR” is shown in the variable list and contains 11 rows.


Step 9: To view the contents of the variable “IT_VAR” , please right click on the variable and select “Open Data Preview”.


Step 10: The content is displayed in the bottom pane in the Data Explorer view.


Step 11: Press F8 to go to the next breakpoint at line 11, which is also the end of the procedure. Debugging is suspended at line 11, since there are no more breakpoints, and the variable Z is now filled with 15.


Step 12: If you want to restart the debug session, right-click on the debug configuration and select “Relaunch”.


Server-side SSL configuration on HANA for inter-node communication and System Replication using openSSL

I have been seeing a growing number of security related questions from customers. This blog will cover step-by-step configuration of SSL for internal communication and system replication. I hope this will help you guys out.

Security is one of the most significant features any product should possess. In SAP HANA, we can precisely configure both internal and external communication.

Here we will see how to configure server-side SSL manually when you do not want to use the default system PKI (public key infrastructure). In most cases, customers want to create their own certificates and use them for inter-node communication and for system replication.

Things to know…


◉ In a multi-host system, every host requires a public and private key pair and a public-key server certificate
◉ Use the CommonCryptoLib library (libsapcrypto.so) as the cryptographic library; it should be installed as part of the SAP HANA server installation
◉ For HANA 1.0, in-database configuration is not supported for internal communication between servers or for system replication communication
◉ OpenSSL can be used to create the server certificate; with CommonCryptoLib, you can also use the SAP Web Dispatcher administration tool or the SAPGENPSE tool
◉ Do not password-protect the keystore file that contains the server’s private key

High Level Overview:


1. Create server certificate
2. Self-sign the certificate
3. Import keystore and public certificate into each host

Please note:

The recommendation from SAP is to use a private CA for each host, but here I will only show how to create the certificate and sign it on one host. You just need to follow the same steps on the other hosts if you are using a multi-host environment.

Create root certificate:


I am using a 3-node system here: SID MN1, hosts vandevvmlnx011/012/013.

1. Go to the sec directory on the host machine, starting here with node vandevvmlnx011; path: /usr/sap/<SID>/HDB<INSTANCE_NUMBER>/<HOST_NAME>/sec


2. Use the OpenSSL tool to create the root certificate, using the command:

/usr/sap/MN1/HDB60/vandevvmlnx011/sec> openssl req -new -x509 -newkey rsa:2048 -days 7300 -sha256 -keyout CA_Key.pem -out CA_Cert.pem -extensions v3_ca

This will ask you to enter a PEM pass phrase; I used my SYSTEM user password for the test, but you may choose one per your convenience. You also have to enter other details like country, state, locality, etc.


3. This should have created two new files CA_Key.pem and CA_Cert.pem


Creating server certificate:


1. Here I use sapgenpse, which is installed with the HANA installation, to create the server certificate request. sapsrv.pse is the name of the PSE we are requesting, so make sure there is no file with the same name in that path. Command used:
/usr/sap/MN1/HDB60/vandevvmlnx011/sec> sapgenpse gen_pse -p sapsrv.pse -r sapsrv.req CN="*.xxxx.xxxx.sap.corp",O="HANA Support",C="US"

Very important…

◉ Do not enter a password when prompted for the PSE PIN/passphrase, as this is not supported!
◉ Also, to secure internal communication, the canonical name should be host-specific, e.g. CN="<hostname_with_domain>", so when creating the private CA on each host the CN parameter will be unique. In the example below, however, I specified CN="*.xxxx.xxxx.sap.com", which is good for the system replication scenario but not for internal communication between hosts.


2. This should have created new files sapsrv.req and sapsrv.pse


Self signing certificate request:


1. Here we again use OpenSSL to self-sign the certificate request sapsrv.req with the command (the command line below was truncated in the original post and is reconstructed so that it signs against the CA created earlier):

/usr/sap/MN1/HDB60/vandevvmlnx011/sec> openssl x509 -req -days 7300 -in sapsrv.req -sha256 -extfile /etc/ssl/openssl.cnf -extensions usr_cert -CA CA_Cert.pem -CAkey CA_Key.pem -CAcreateserial -out sapsrv.pem


Note: You can also get this signed by your CA if you don’t want to self-sign it.

2. A new file with the name sapsrv.pem will be created in the same directory, $SECUDIR


Importing server certificate:


1. Import the signed certificate into file sapsrv.pse using sapgenpse utility as below:

/usr/sap/MN1/HDB60/vandevvmlnx011/sec> sapgenpse import_own_cert -c sapsrv.pem -p sapsrv.pse -r CA_Cert.pem


2. You can see from the timestamp that the file sapsrv.pse has been updated


Copying the file to the other nodes…

Please note that a private certificate has to be created for each host in the multi-host system, so follow the same steps we did above to create a sapsrv.pse file and sign and import it on the other hosts as well.

Now rename the sapsrv.pse file to sapsrv_internal.pse on all three nodes. For example, on node 011:


Configuration in HANA Studio (global.ini):


1. Open HANA Studio, go to Administration Console -> Configuration -> global.ini -> communication
2. Set the parameter ssl = on and make sure sslinternalkeystore and sslinternaltruststore point to the correct file.
3. sapsrv_internal.pse is the file we created, so the parameters sslinternalkeystore and sslinternaltruststore have that value.


For System Replication:


To secure communication for system replication, the primary and secondary each have only one .pse file. When creating the server certificate, we provide the canonical name (like CN="*.prod.sap.com"), which should be the same for both primary and secondary.

So when creating the certificate for the system replication scenario, there is no need to have a separate .pse file for each host.

Communication between sites (metadata and data channels) requires the same configuration as above in the global.ini [communication] section. However, to secure data communication, we must also set the parameter enable_ssl = on under the [system_replication_communication] section of the global.ini file.
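Both parameters can also be set from the command line instead of the Studio UI; a sketch with hdbsql (credentials are placeholders, the instance number matches the HDB60 example, and the full restart described below still follows):

hdbsql -i 60 -u SYSTEM -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('communication','ssl') = 'on'"
hdbsql -i 60 -u SYSTEM -p <password> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('system_replication_communication','enable_ssl') = 'on'"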


All these changes in the global.ini file require a complete database restart, as below:

>sapcontrol -nr <instance_number> -function StopService

>sapcontrol -nr <instance_number> -function StartService

Now communication between hosts and communication between sites in the system replication scenario are all secured using SSL.

Importing spatial map client included with SAP HANA Spatial

One of the advantages of SAP HANA Spatial is that it includes a map client and other content at no additional cost. This is an example of a spatial application created using the HERE map provided with SAP HANA.


Further down the road, when you license HANA Spatial, you also get access to General Administrative Boundaries (GAB), which can be imported into SAP HANA. The GAB content is provided by HERE and is included with the HANA Spatial license. It can be used on top of any map. In the example below, the GAB is displayed like any typical HANA spatial data on top of an Esri map.


Importing DUs from the SAP HANA Cockpit


1. Launch SAP HANA Cockpit.


2. Log in with your SYSTEM user. You may be prompted to grant roles; accept this.
3. Choose Manage Roles and Users from the Cockpit.


4. From the Security tree, choose Users and SYSTEM. Grant the roles:

sap.hana.ide.roles::Developer

sap.hana.ide.roles::EditorDeveloper

sap.hana.ide.roles::CatalogDeveloper

sap.hana.xs.lm.roles::Administrator

sap.hana.xs.lm.roles::Developer

to the SYSTEM user, click OK and save.
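If you prefer SQL over the UI, the same repository roles can be granted with the _SYS_REPO procedure; a sketch with hdbsql (instance number and password are placeholders):

hdbsql -i 00 -u SYSTEM -p <password> "CALL _SYS_REPO.GRANT_ACTIVATED_ROLE('sap.hana.ide.roles::Developer','SYSTEM')"
hdbsql -i 00 -u SYSTEM -p <password> "CALL _SYS_REPO.GRANT_ACTIVATED_ROLE('sap.hana.xs.lm.roles::Administrator','SYSTEM')"
# repeat for the remaining roles in the list above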


5. From the menu at the top, launch Life Cycle Management.


6. Select Delivery Units:


7. Select Import from the menu.


8. Select the DU and import it.
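As an alternative to the browser upload, delivery unit archives can also be imported with the hdbalm command-line tool that ships with the SAP HANA client; a rough sketch, where host, XS port, user, and the archive name are all placeholders for illustration:

hdbalm -h <hana_host> -p 80<instance_number> -u SYSTEM import <delivery_unit_archive>.tgz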

Introduction To SAP Landscape Transformation (SLT)

New to real-time replication? With this blog I would like to share basic information on SAP Landscape Transformation Replication Server (SLT), which runs on the NetWeaver platform.

SLT is an SAP ETL tool that allows you to load and replicate data, in real time or on a schedule, from SAP and non-SAP source systems into the SAP HANA database.

The SAP SLT server uses a trigger-based replication approach to pass data from the source system to the target system. The SLT server can be installed on a separate system or on the SAP ECC system.

INTRODUCTION:


SAP HANA does not really differentiate whether the data comes from SAP or from non-SAP source systems. Data is data, regardless of where it comes from. Combined with the fact that SAP HANA is a full platform, and not just a database, this makes it an excellent choice for all companies. SAP HANA is meant not only for traditional SAP customers but also for a wide range of other companies, from small startups to large corporations. In fact, SAP now actively supports thousands of startups, helping them use SAP HANA to deliver innovative solutions that are “nontraditional” in the SAP sense.
Data provisioning is used to get data from a source system and provide that data to a target system. In our case, we get this data from various source systems, whether they’re SAP or non-SAP systems, and our target is always the SAP HANA system.

Data provisioning goes beyond merely loading data into the target system. With SAP HANA and some of its new approaches to information modeling, we also have more options available when working with data from other systems. We do not always have to store the data we use in SAP HANA, as we would with traditional approaches.

Benefit of SLT system


◉ Allows real-time or scheduled data replication.
◉ While replicating data in real time, we can migrate data into the SAP HANA format.
◉ SLT handles cluster and pool tables.
◉ It automatically supports non-Unicode and Unicode conversion during load/replication (Unicode is a character encoding system that covers far more characters than ASCII).
◉ It is fully integrated with SAP HANA Studio.
◉ SLT has table settings and transformation capabilities.
◉ SLT has monitoring capabilities with SAP Solution Manager.

Extract, Transform, and Load


Extract, Transform, and Load (ETL) is viewed by many people as the traditional process of retrieving data from a source system and loading it into a target system. Figure 1.1 illustrates the traditional ETL process of data provisioning.


Figure 1.1 Extract, Transform, and Load Process

The ETL process consists of the following phases:

Extraction

During this phase, data is read from a source system. Traditional ETL tools can read many data sources, and most can also read data from SAP source systems.

Transform

During this phase, the data is transformed. Normally this means cleaning the data, but it can refer to any data manipulation: combining fields to calculate something (e.g., sales tax), filtering data, or limiting the data to the year 2016. You set up transformation rules and data flows in the ETL tool.

Load

In the final phase, the data is written into the target system.
The ETL process is normally a batch process. Due to the time required for the complex transformations that are sometimes necessary, tools do not deliver the data in real time.

Initial Load and Delta Loads

It is important to understand that loading data into a target system happens in two distinct phases: an initial load of the tables and subsequent delta updates.


Figure 1.2 Initial Load of Tables and Subsequent Delta Updates

Replication:


Replication emphasizes a different aspect of the data loading process.
ETL focuses on ensuring that the data is clean and in the correct format in the target system. Replication does less of that, but gets the data into the target system as quickly as possible. As such, the replication process appears quite simple, as shown in Figure 1.3.

The extraction process of ETL tools can be demanding on a source system. Because they have the potential to dramatically slow down the source system due to the large volumes of data read operations, the extraction processes are often only run after hours.

Replication tools aim to get the data into the target system as fast as possible. This implies that data must be read from the source system at all times of the day and with minimal impact to the performance of the source system. The exact manner in which different replication tools achieve this can range from using database triggers to reading database log files.


FIGURE 1.3 Replication process

With SAP HANA as the target system, we achieve real-time replication speeds. (I have worked on projects where the average time from when a record was updated in the source system to when it was updated in SAP HANA was only about 50 milliseconds!)

SAP Extractors:


Normally, we think of data provisioning as reading the data from a single table in the source system and then writing the same data to a similar table in the target system. However, it is possible to perform this process differently, such as reading the data from a group of tables and delivering all this data as a single integrated data unit to the target system. This is the idea behind SAP extractors.

Database Connections:


SAP HANA provides drivers known as SAP HANA clients that allow you to connect to other types of databases. Let’s look at some of the common database connectivity terminology you might encounter.

Open Database Connectivity (ODBC):


ODBC acts as a translation layer between an application and a database via an ODBC driver. You write your database queries using a standard application programming interface (API) for accessing database information. The ODBC driver translates these queries to database-specific queries, making your database queries database and operating system independent. By changing the ODBC driver to that of another database and changing your connection information, your application will work with another database.

Java Database Connectivity (JDBC):


JDBC is similar to ODBC but specifically aimed at the Java programming language.

Object Linking and Embedding, Database for Online Analytical Processing (ODBO)

ODBO provides an API for exchanging metadata and data between an application and an OLAP server (like a data warehouse using cubes). You can use ODBO to connect Microsoft Excel to SAP HANA.

Multidimensional expressions (MDX)

MDX is a query language for OLAP databases, similar to how SQL is a query language for relational (OLTP) databases. The MDX standard was adopted by a wide range of OLAP vendors, including SAP.

Business Intelligence Consumer Services (BICS)

BICS is an SAP-proprietary database connection. It is a direct client connection that performs better and faster than MDX or SQL. Hierarchies are supported, negating the need for MDX in SAP environments. Because this is an SAP-only connection type, you can only use it between two SAP systems —for example, from an SAP reporting tool to SAP BW.
The fastest way to connect to a database is via dedicated database libraries. ODBC and JDBC insert another layer in the middle that can impact your database query performance. Many times, however, convenience is more important than speed.

SAP HANA replication allows migration of data from source systems to the SAP HANA database; a simple way to move data from an existing SAP system to HANA is to use one of the various data replication techniques. Replication can be set up on the console via the command line or by using HANA Studio, and the primary ECC or transaction systems can stay online during this process. We have three types of data replication methods in the HANA system:

◉ SAP LT Replication method
◉ ETL tool SAP Business Object Data Service (BODS) method
◉ Direct Extractor connection method (DXC)

SAP Landscape Transformation (SLT)


One of the main features of HANA is that it can provide real-time data to the customer at any point in time. This is made possible with the help of SLT (SAP Landscape Transformation), where real-time data is loaded into HANA from SAP or non-SAP source systems.

SAP Landscape Transformation Replication Server (“SLT”)

◉ is for all SAP HANA customers who need real-time or scheduled data replication, sourcing from SAP and NON-SAP sources
◉ Uses trigger-based technology to transfer the data from any source to SAP HANA in real-time.

SLT server can be installed on the separate system or on SAP ECC System.


SLT Architecture overview between SAP System and SAP HANA:

SLT Replication Server transforms all metadata table definitions from the ABAP source system to SAP HANA.
For an SAP source, the SLT connection has the following features:

◉ If your source system is SAP, you can install SLT as a separate system or in the source system itself.
◉ When a table is replicated, SLT Replication Server creates logging tables in the source system.
◉ A read module is created in the SAP source system.
◉ The connection between SLT and the SAP source is established as an RFC connection.
◉ The connection between SLT and SAP HANA is established as a DB connection.
◉ If you install SLT in the source system itself, there is no need for an RFC connection.
◉ The SLT server automatically creates the DB connection for the SAP HANA database when we create a new configuration via transaction LTR; there is no need to create it manually.

SAP Note 1605140 provides complete information on installing the SLT system.
If the SLT Replication Server is installed in the source system, the architecture is as shown below:


SLT Architecture overview between Non-SAP System and SAP HANA:



◉ The figure above shows real-time replication of data from non-SAP sources to the HANA system. When the source is non-SAP, we have to install SLT as a separate system.

◉ The main changes compared to the first scenario, where the source is an SAP system, are: the connection between the source and SLT is a DB connection, and the read modules reside in SLT instead of the source.

Components of SLT:


The main components involved in real-time replication using SLT are

Logging Tables: Logging tables are used to capture the changed/new records from the application tables since the last successful replication to HANA.

Read Modules: Read modules read the data from the application tables for the initial load and convert cluster-type tables into a transparent format.

Control Module: The control module performs small transformations on the source data; from here the data is passed on to the write modules.

Write Modules: The write modules write the data to the HANA system.

Multi System Support:


SLT Replication Server supports both 1:N and N:1 replication.
Multiple source systems can be connected to one SAP HANA system.
One source system can be connected to multiple SAP HANA systems, limited to 1:4.


SAP HANA Under The Hood: HDB – by the SAP HANA Academy


Introduction


The objective of this SAP HANA Under The Hood series is to provide some insights on particular SAP HANA topics. Like a mechanic, look under the hood and take apart a piece to examine its functioning.


This video was recorded for the Getting Started with SAP HANA on the Google Cloud Platform series but the start/stop script is the same on all platforms, both cloud and on-premise.


Getting Started with SAP HANA on the Google Cloud Platform – Start and Stop with HDB


#!/bin/sh

Those familiar with UNIX system administration will certainly recognize the shebang [#!] as an indicator that we are dealing here with a shell script.

Note: Currently, the only operating systems for which SAP HANA is supported are SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL). However, for what follows the same would apply to any UNIX system, so when reading UNIX below understand that this also applies to Linux.

Below is a common sequence of commands for the UNIX administrator to find out what type of file we are dealing with:

# where is the file located? 
which <file>

# what type of file is it? 
file <file>

# in case of script, show the contents
cat <file>

On my SAP HANA express edition system, this returns


ASCII text means that it is a plain text file (like Notepad files on Windows). We can see that if we run the command without any arguments, the “usage” is printed to screen, a standard safe script coding practice to keep the novice out of trouble.

Note that the Copyright (c) is from 2002-2003. Why would that be?

You may recall that the release of SAP HANA was back in 2011 (and if your memory is failing you, you can always look it up on the Product Availability Matrix).

So, why would the script be from 2003?


PAM – SAP HANA platform edition 1.0

The reason is that the HDB script has been taken pretty much as-is from a predecessor of SAP HANA, the SAP Netweaver Business Intelligence Accelerator (BIA), later renamed to BWA for BW accelerator.

If you are interested, there is a WiKi on the SAP Community that provides additional links [ BWA – Business Warehouse Accelerator (BC-TRX-BIA) ]. You can also look BIA up on the PAM: 7.0 was released in 2005 and 7.2 in 2009, just prior to HANA.

The HDB script was called BIA at the time – and later BWA – and as this implies, you can rename the script to pretty much any name you want: HANA, hdb, etc.  Below, we do a ‘mv’ to rename the script from HDB to HANA.

Any reason why you want to rename the script? None whatsoever, bad idea! April fools maybe, on a test box (please!). Just to show you that HDB is a regular shell script; no magic.


Rename HDB (not a good idea)

Usage


So, let’s take a look at the arguments or parameters if you prefer. When you just enter the name of the script as a command, usage information (help) is printed out, showing the different arguments available.


HDB – usage
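From memory, the usage output looks approximately like this (the exact list of arguments varies slightly between revisions, so check your own system):

Usage: /usr/sap/<SID>/HDB<nr>/HDB { start|stop|reconf|restart|version|info|proc|admin|kill|kill-<sig>|term }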

Environment


Each time you run the HDB script, it will source another script file, hdbenv.sh, which sets the environment; that is, it sets variables like the system identifier (SID), instance number, path to executables, and Python settings (and many more, as there are 273 lines in the file).


The hdbenv.sh script checks that it is executed by the <sid>adm user and not by the root user, for example.

As with HDB, the origins of hdbenv.sh script go back some time, here with reference to R/3.

We can also see newdb mentioned here, once the internal name for the in-memory database project. Later this was changed to Hybrid DataBase (HDB) for both row and column store (OLTP and OLAP), and finally, the product was marketed as HANA. If you are interested in the story, you can hear Vishal Sikka talk about the genesis of HAsso’s New Architecture in the openSAP course An Introduction to SAP HANA.

Marketing for some time reframed the acronym to the High-performance ANalytical Appliance but this was put to rest once HANA got beyond its appliance phase.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

hdbenv.sh

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB executed as root

Timeout


HDB sets default values for start and stop timeout, 45 and 10 minutes respectively.

To me, these safeguards appear more appropriate to an SAP Netweaver Java system than SAP HANA but either way, they can be overridden using the environment variable HDB_START_TIMEOUT. As HDB is a script, you could modify these hard exits but it would be wise to do so in accordance with SAP Support. Editing the HDB script is certainly not a supported activity.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB – Timeout

Start


The start argument uses the SAPControl executable with the function call StartService to start the service, which in turn should start the instance.

For the SAPStartService, see

◈SAPInit and SAPStartSrv

The same SAPControl utility is then used to hold the prompt and wait for both service and instance to have started, print out the result, and return the prompt.

SAPcontrol will be the next topic in this series. For now, let’s just assume it starts (and stops) the HANA instance.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB start) argument

You can safely execute the HDB start command when the system is already running; it will just verify that the service is indeed up and running and return the prompt after two 2-second delays (the 2 in the command above).

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB start command

The message Impromptu CCC initialization can be ignored.

Stop


The stop argument first parses the different daemon.ini configuration files for the terminationtimeout parameter.

On my system, no host-specific or custom settings were defined, only the default of 300 seconds. This means that the daemon service will try to stop HANA processes like hdbindexserver and hdbnameserver within 5 minutes before executing a hard stop on these child processes.

Then the daemon process itself is stopped and for this, again, SAPControl is used, this time with a timeout of 400 seconds. In other words, the daemon gets an extra 100 seconds to finish up its work (soft or hard) before it receives a hard stop. For all of this, another 600 seconds is allocated (the STOP TIMEOUT mentioned above), after which the operating system will step in and switch all lights off.
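If you want to verify which terminationtimeout value is actually effective on your system, the layered daemon.ini settings can also be read from SQL via the standard M_INIFILE_CONTENTS monitoring view; a minimal check could look like this (the filter values are just an example):

-- Show the effective daemon.ini termination timeout across configuration layers
SELECT FILE_NAME, LAYER_NAME, SECTION, "KEY", VALUE FROM "SYS"."M_INIFILE_CONTENTS" WHERE FILE_NAME = 'daemon.ini' AND "KEY" = 'terminationtimeout';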

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB stop) argument

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

daemon.ini terminationtimeout

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB stop command

Reconf


The reconf argument will run the hdbdaemon command with the -reconfigure flag. $DAEMON is set to hdbdaemon just above the timeouts section (see above).

Although the hdbdaemon command is not documented publicly, the tool does come with a built-in help function. However, this does not explain in which situations you would want or need to use it. For this reason, HDB reconf is more likely to be used for support purposes than in daily operations.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB reconf) argument

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

hdbdaemon -reconfig command

Restart


Like reconf, restart calls the hdbdaemon process directly, this time with the -restart flag. If the result is not OK, a regular instance restart is attempted using SAPControl.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB restart) argument

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB restart command

Version


The version argument prints out the version of SAP HANA together with information which could only be of interest to SAP Support and development like branch, git hash, etc. Unless of course, you want to fork your own version of SAP HANA. Make sure that, in this case, your lawyer has worked out the fine print with SAP before attempting such a venture.

The hdbsrvutil tool or utility is not publicly documented but, like hdbdaemon, comes with a built-in help function. It appears to be mainly used – internally, that is, by SAP support and development – to stop individual HANA processes. Softly with SIGTERM or hard with SIGKILL/SIGABRT. See below for the fine print about this killing business.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB version) argument

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

hdbsrvutil -v

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

hdbsrvutil

Info


The info argument prints out the output of the UNIX ps command for the <sid>adm user with options (-o) username, process id, parent process id, percentage CPU, virtual memory size, resident set size and the command (or process name).

The code for this argument clearly comes from an older, more generic script: first a variable is set for HP-UX and then the database_file is parsed, none of which applies to SAP HANA. However, in the spirit of “if it ain’t broke, don’t fix it”, this code is left as-is and if the operating system happens to be Linux (the only one supported), the ps command is executed.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB info) argument

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB info command

If we slightly modify the ps command, we can nicely see the parent-child relationships with the sapstart (50196) process starting the hdbdaemon (50204) which, in turn, starts the other HDB processes like hdbnameserver but also three sapjvm_8 SAP Java runtimes which, in turn, start the different java and node processes.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

ps command with forest tree
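If you prefer to stay inside the database, the M_SERVICES monitoring view gives a comparable overview of the running HANA services and their operating system process IDs (a minimal sketch; pick the columns you need):

-- Overview of HANA services and their OS process IDs
SELECT HOST, SERVICE_NAME, PROCESS_ID, ACTIVE_STATUS, SQL_PORT FROM "SYS"."M_SERVICES";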

Proc

The proc argument is almost identical to the info argument, although it originally might have served a different purpose.

Admin


If a graphical environment is present, that is, X-Windows installed and configured, the admin argument will start the SAP HANA Database Administration tool. This tool, just like the other tools and utilities mentioned in this blog, is not documented and its use is, for this reason, not supported.

Those that have worked with BWA in the past will certainly recognize it as the BWA Admin tool. It is roughly the same tool but it appears to be still under development as, for example, tenant database configuration is supported, a concept which did not exist for BWA.

HDB Admin does not perform any miracles, as is sometimes suspected of undocumented tools. In fact, quite the opposite: the tool returns a fair number of Python exception errors while navigating the UI. You can see it as a predecessor of HANA studio, but one that does not require any client installation. For this reason, it is sometimes used by SAP support.

However, as the typical Linux system running SAP HANA is a minimum installation hardened for security, it will come without any graphical environment and running the ‘HDB admin‘ command will only return the message:

Environment variable DISPLAY is not set, cannot start HDBAdmin

You can view HDBAdmin in action in the tutorial video for this blog:


SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB admin) argument

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

SAP HANA Database Administration

kill | kill-<sig> | term

The ominous sounding kill and term arguments will allow you to stop individual HDB processes; the full list we saw with the HDB info command.

The kill man(ual) pages explain to you exactly, in quite some detail, how this all works and, of course, you will also find this command documented in every UNIX / Linux administration handbook.

There are more ways to kill a process than there are ways to leave your lover: kill -l will list them all.

Soft stop:

◆ -1 or SIGHUP (Hang Up signal)
◆ -2 or SIGINT (Interrupt signal, equivalent to Control-C)

Hard stop:

◆ -6 or SIGABRT (Abort signal)
◆ -9 or SIGKILL
◆ -15 or SIGTERM (Termination signal, default action if no flag is specified)

The main difference between 6 and 9 is that SIGABRT will attempt to end the process with a core dump, that is, write the memory contents to disks. SIGKILL is like a power off for the process. Any state not persisted is lost.

What happens with SIGTERM depends on the process (on how it handles the signal) and could be either a soft or a hard stop. As it is better to be safe than sorry, I will list SIGTERM here as a hard stop.

Note that the ‘kill <process id>’ command defaults to this -15 or SIGTERM option, whereas ‘HDB kill’ has been implemented to perform a ‘kill -9 <process id>’, that is, SIGKILL.

Fortunately, a relational database management system like SAP HANA has its savepoint and logging mechanisms to recover gracefully from any process crash. However, it is maybe prudent to repeat the usage output from the HDB command. Certainly in productive systems, steer clear of using SIGKILL (-9), SIGABRT (-6) and even SIGTERM (-15).

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB usage

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

HDB term | kill

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

64 ways to kill a process

Note that the hdbsrvutil command also provides the -15, -6 and -9 flags to stop and start processes.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Live

hdbsrvutil

HANA SDI | Smart Data Integration 2.0 – H2H Real-time Replication: Lessons Learned

In this blog entry I would like to share some of the experiences we gained throughout an SDI HANA to HANA (H2H) implementation project. To give you an understanding of the context, we will start off with the scenario description and solution architecture.

These are the items that will be covered throughout this blog:

1. Implementation Scope
2. Solution Architecture
3. Best Practices
4. Challenges
5. Reengineering of Replication Tasks
6. Monitoring
7. Real-time Replication & Source System Archiving

You can expect practical insights into the implementation of a HANA to HANA replication scenario. Some detailed aspects of e.g. task partitioning, replication task design and monitoring are described. Potentially you can adapt the approaches described in this blog in your own SDI implementation project.

1. Implementation Scope


From an SDI perspective, this brief overview will describe some facts and requirements we had to deal with:

◈ Replicate data in real-time from 3 different HANA source systems into a (consolidated) target schema using SDI RTs (with the SDI HANAAdapter)
◈ Replication scope of approx. 550 tables per source (times 3, i.e. more than 1,600 tables)
◈ Replicate tables with a high record count (6 tables with > 2 billion records in production)
◈ SDI task partitioning for large tables (> 200 million records)
◈ Target table partitioning for large tables (> 200 million records)
◈ SDI infrastructure/configuration – e.g. DP-Agent + Agent groups
◈ Follow SDI best practice guidelines (naming convention, implementation guidelines, tuning)
◈ SDI development artifacts maintenance + transport across landscape to PRD
◈ Dpserver + dpagent monitoring
◈ Out of scope: Load and replication of IBM DB2 based source systems (compare with architectural diagram)

2. Solution Architecture


The end-to-end solution architecture employs several SAP and non-SAP components:

◈ DP-Agents – virtual host on Linux, 64 GB, version 2.1.1
◈ HANA 2 SP02 – 4 TB
◈ HANA EIM SDI (XSC runtime)
◈ DLM
◈ HANA Vora 1.4
◈ Hadoop cluster with Spark enabled
◈ MicroStrategy
The following illustration shows the architecture in a simplified way. From an SDI point of view there are multiple real-time and batch input streams: Suite on HANA systems, files, and legacy data from IBM DB2 DBs (not shown).

In the productive environment (as shown), each Suite on HANA system (shown as HDB1/2/3) is connected via a dedicated DP-Agent group with a HANAAdapter instance. Thus, the risk of stalling the whole replication when remote source or RepTask exceptions occur at source system level is mitigated. The Hadoop and Vora parts, shown on the right-hand side, will not be further elaborated and are not part of this blog entry.

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

3. SDI Best Practices


Initially, most of the aspects (for users + authorizations) considered in the official SDI Best Practices Guide were implemented (refer to the references section for the web link to the best practices).

SDI users were organized the following way:

◈ SDI_ADMIN – monitoring privileges, user creation, ***
◈ SDI_DEV – Web-based development workbench, repository privileges, schema privileges
◈ SDI_EXEC – execute replication tasks
◈ SDI_TRANSPORT – transport SDI artifacts

Using this pattern, you can easily follow a segregation-of-duties approach and avoid unnecessary and unwanted situations in development or deployment. On the other hand, you have to stick with the approach and align your development and administration processes accordingly.
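As a minimal illustration of this pattern – the user names, password and target schema below are placeholders, and the exact privilege set depends on your scenario – the users can be created with plain SQL:

-- Sketch: dedicated SDI users following a segregation-of-duties approach (names and grants are illustrative)
CREATE USER SDI_DEV PASSWORD "Init12345" NO FORCE_FIRST_PASSWORD_CHANGE;
CREATE USER SDI_EXEC PASSWORD "Init12345" NO FORCE_FIRST_PASSWORD_CHANGE;
-- Example grant only: allow the execution user to read the (hypothetical) target schema
GRANT SELECT ON SCHEMA "SDI_TARGET" TO SDI_EXEC;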

4. SDI Real-time Replication Design – Challenges


The following describes the major challenges we faced:

1. Multiple sources into one target
2. Replication Task Count
3. Duration of Initial Load and Cut-over Time
4. Disable the Capture of DDL Changes

1. Multiple sources into one target

◈ The goal is to consolidate all source data (SAP ECC systems) into one target schema. In this sense, source tables of the same kind are replicated into one consolidated target table.
◈ Translated into a specific example:

The replication of e.g. table COEP is set up for all three source systems. The target for all sources is one table COEP. This target table must comply with all structural properties that exist in the sources (COEP tables across n source systems do not necessarily have the exact same structure, at least in the given case – you might have system-specific appends), meaning different Z-/Y-fields across all COEP source tables. As replication tasks (RTs) do not allow for flexible column mapping like flowgraphs (FGs) do, this is how we resolved the requirement:

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

2. Replication Task Count

◈ In scope were about 550 tables which had to be replicated, with different volumes and different change/delta frequencies. For the largest tables, dedicated remote sources were planned to be introduced. This translates into a remote source count of 7 for the largest SoH source.
Since each remote source gets its own receiver/applier pair assigned (in the dpserver), this makes sense from a parallelization and performance perspective. On the downside, you have to create dedicated source system users, and of course you need to maintain and organize your solution considering the set of different remote sources that are in place. In the following illustration each red arrow represents its own remote source.

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

3. Duration of Initial Load and Cut-over Time

◈ We observed quite a few tables whose initial load consumed much time (> 24 hours per table). By introducing task partitioning within RepTasks, combined with a proper loading sequence, we achieved major performance improvements. Hereby, the challenge is to find appropriate columns and value ranges. Imagine a table such as MARC (SoH material master, plant view) with 1.9 billion records for which you should define proper ranges for your range partitioning approach. How do you do that? The solution is to profile your data with generic SQL procedures or by using other tools (see the profiling sketch below the screenshots). Potentially you have experts at hand who might know the value distribution in those tables. This task can be tricky.

◈ Check value distribution of columns that are candidates for partitioning:

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

◈ Apply partitioning settings in RT; here: Range Partitioning on field STUNR with two parallel partitions for the initial load:

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live
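For reference, a minimal profiling query of this kind could look as follows; schema, table and column names are purely illustrative, and in the project the profiling was done with generic SQL procedures:

-- Hypothetical example: check how records of a large source table distribute over a candidate column
SELECT WERKS, COUNT(*) AS RECORD_COUNT FROM "ECC_SOURCE"."MARC" GROUP BY WERKS ORDER BY RECORD_COUNT DESC;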

4. Disable the Capture of DDL Changes

◈ The HANAAdapter is able to capture DDL changes: drop/add column. Obviously if you load in real-time from three sources into one target table this influences the behavior for DDL changes considerably – i.e. you can’t capture DDL changes anymore as the (dynamic) structural differences between the source tables would cause inserts on the SDI applier side to fail.
◈ A good option is therefore to set the DDL scan interval to 0 which means “disabled”. The default value is 10 as you can see in the below picture:

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

5. Reengineering Replication Tasks


Throughout the implementation of the project, several changes in requirements, e.g. regarding RT structural details such as conversion expressions etc. occurred (you might know these change requests from your own project).

Therefore, some PowerShell and Python scripts were implemented in order to apply mass changes to the SDI objects more easily. When you have around 1,600 RepTasks you will be grateful not to have to touch each of them one by one. Doing this, you need to take precautions wherever possible; of course you take backups of the export you do from the web-development workbench. After you export your SDI artifacts from the web-development workbench, you can apply changes to the XML structure. You can also do this by opening the replication task from the workbench via right click -> Open in text editor.

Whatever you do in terms of editing the XML directly, be aware of the find/replace operations you undertake and make sure that you only edit the XML properties you really need to; otherwise your replication task structure is likely to become corrupt. After applying your changes locally you can re-import the replication tasks into the web-development workbench.

Here are a couple of examples we had to deal with. These were resolved by applying respective scripts or using common tools such as Notepad++/Sublime.

1. Replace “Project Expression” in all RepTasks for a defined scope of columns (specific for each table) and insert proper conversion expressions, such as ABAP date NVARCHAR(8) → HANA DATE format or ABAP time NVARCHAR(6) → HANA TIME fields.

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

As all RepTasks were initially designed to conduct a 1:1 replication (only 1 table per RepTask and no further conversion or project expressions), a python script executed the following:

◈ Iterate directory where RepTasks are stored, element by element
◈ Match RepTask file with respective line in provided csv (proper identifiers required)
◈ Find XML attribute for affected columns and replace element value “Project Expression”

In such a Python script, the xml.etree.ElementTree module helps to parse and edit the RepTask XML structures.

2. Check consistency and correctness of values identifying the source system, which is defined on table level with an additional column

◈ Source system 1 = ‘001’
◈ Source system 2 = ‘002’
◈ Source system 3 = ‘003’

Moreover, check if all RepTasks apply the correct filter criteria. This can also be done using e.g. Notepad++ to “scan” entire directories for certain patterns/values.

6. SDI-Monitoring

Monitoring of the SDI solution covers both the target HANA system and the DP-Agent(s). Apart from these “core” components, many sub-components come into play. It would be wrong to state that monitoring the DP-Agent and the DP framework on the HANA side is enough and that your SDI solution is then in safe hands.

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

As it is a framework that is truly embedded into the HANA database, you need to understand (ideally) all the inner mechanisms that come into play when data processing happens. However, in this blog we only want to dedicate our attention to SDI framework related monitoring options. Please do not expect to be able to monitor or troubleshoot your SDI solution after reading this. The following shall only provide some basic input and approaches we have applied and experienced in our project.

To begin with, and most likely well known, are the SDI monitoring views. These views unveil quite a lot of details about SDI design-time objects, ongoing loads and general statuses. They also provide some control mechanisms to handle replication and load. Anyhow, when it comes to more detailed root-cause analysis, one needs to be familiar with the framework’s tables, how traces are set, where log files are stored etc. A good reference here is the official SDI Administration Guide (refer to the references section where links are provided).

In addition to the monitoring views, a solid set of prepared SQL statements or some custom-built stored procedures can of course help to facilitate your monitoring or troubleshooting activities. E.g. during the initial load of your RepTasks you typically keep querying some important views. Especially the M_REMOTE_SUBSCRIPTIONS view is helpful when it comes to monitoring a particular initial load or the CDC states. The state column describes one of the following replication states, which should be well understood (you also find this information in the SDI Administration Guide):

State Name – Description

CREATED – Remote subscription is created by the replication task.
MAT_START_BEG_MARKER – The receiver is waiting for the begin marker that indicates the first changed data to queue while the initial load is running.
MAT_START_END_MARKER – The receiver queues the rows and is waiting for the end marker that indicates the last row of the initial load.
MAT_COMP_BEG_MARKER – The receiver is waiting for the begin marker that indicates the first row to queue after the initial load has completed.
MAT_COMP_END_MARKER – The receiver queues the changed rows and is waiting for the end marker that indicates the last row of the initial load. The initial load has completed and the end marker is sent to the adapter. If the state does not change to AUTO_CORRECT_CHANGE_DATA, the adapter or source system is slow in capturing the changes.
AUTO_CORRECT_CHANGE_DATA – When the end marker is received for the initial load, the applier loads the changed data captured (and queued during the initial load) to the target. If a lot of changes occurred after the initial load started, this state might take a long time to change to APPLY_CHANGE_DATA.
APPLY_CHANGE_DATA – All of the changes captured while the initial load was running have completed and are now loaded to the target.

We personally perceived the usage of the monitoring views, as well as the SDA monitor in HANA Studio, as somewhat cumbersome. This is subjective and does not apply to all cases, of course. Below I present a set of helpful statements:

--Check for current source, if all subscriptions are in state APPLY_CHANGE_DATA – show differing subscriptions
SELECT * FROM M_REMOTE_SUBSCRIPTIONS WHERE SCHEMA_NAME = '<VIRTUAL_TABLE_SCHEMA_SOURCE>' AND STATE != '' AND STATE != 'APPLY_CHANGE_DATA';

--Check how many reptasks/remote subscriptions are in real-time replication state per remote source
SELECT SCHEMA_NAME, COUNT(*) FROM M_REMOTE_SUBSCRIPTIONS WHERE STATE = 'APPLY_CHANGE_DATA' GROUP BY SCHEMA_NAME;

--Query for exceptions on remote source/subscription level
SELECT * FROM REMOTE_SUBSCRIPTION_EXCEPTIONS;

--Check the applied remote statements during the initial load (SDA request) for a given table. You need to sort out the correct statement as multiple statements might match your search pattern
SELECT * FROM "SYS"."M_REMOTE_STATEMENTS" WHERE REMOTE_STATEMENT_STRING LIKE '%MARC%';

--Find running tasks. The initial load in the SDI context is wrapped into a HANA runtime task. Therefore you can keep track of its execution by querying the appropriate tables
SELECT TOP 100 * FROM "_SYS_TASK"."TASK_EXECUTIONS_" ORDER BY START_TIME DESC;
--In case of partitioned tasks, query the subsequent table
SELECT TOP 100 * FROM "_SYS_TASK"."TASK_PARTITION_EXECUTIONS_" ORDER BY START_TIME DESC;

Monitoring the DP-Agent

To keep track of what happens on the DP-Agent side, it is advisable to have a real-time log monitoring tool at hand, such as Array, BareTail or, when running on Linux, some native Linux tool; it is your choice. Set the trace and log levels in the DP-Agent’s ini files. Make yourself familiar with the paths where traces and logs are stored and access them via your tool of preference.

Here is a brief overview of the dpagentconfig.ini file properties for logging:

◈ log.level = ALL/INFO/DEBUG – sets the trace level
◈ log.maxBackupIndex = 10 – maximum number of trace files
◈ log.maxFileSize = 10MB – maximum size of a trace file
◈ trace.body = TRUE – log the entire body part of an operation

Correspondingly you should see in the log path the respective trace/log files:

◈ trc – trace file for the agent
◈ Framework_alert – alert file for the framework
◈ log – Log Reader / rep agent log files
◈ log – service log files

Data Assurance

In order to assure that the source and target record counts are the same, you can use some prepared SQL statements to query the record counts of the source and target tables.

Possibly, you want to provide a SQL procedure to automatically match source and target tables in terms of record counts across remote sources, e.g. taking into account the source system identifier column described earlier; a minimal sketch follows.
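A very simple building block for such a check is a count comparison between the virtual table pointing at the source and the replicated target table. The sketch below is illustrative only: the schema names, virtual table name and the source system column name are placeholders, not the project's actual objects.

-- Compare record counts between source (via virtual table) and consolidated target (one source system)
SELECT (SELECT COUNT(*) FROM "SDI_VT_SOURCE1"."VT_COEP") AS SOURCE_COUNT, (SELECT COUNT(*) FROM "SDI_TARGET"."COEP" WHERE SOURCE_SYSTEM = '001') AS TARGET_COUNT FROM DUMMY;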

7.  Real-time Replication & Source System Archiving


Replication using the HANAAdapter is trigger based.

That is, database triggers exist on table level and react upon DML operations (INSERT/UPDATE/DELETE). If a table is being replicated and a record in the table is archived, the database trigger interprets the archiving operation as a DELETE statement, and SDI deletes the corresponding record from the target table.

The following option can be leveraged to ignore DELETE operations that stem from an archiving run: introduce a user that executes the archiving run. This must always be the same user.

SAP HANA SDI, SAP HANA Certifications, SAP HANA Guides, SAP HANA Replications, SAP HANA Live

We would have an ECC source system user, e.g. “ARCHIVE_USER”, that executes the archiving run. All of the operations conducted by that user will be filtered out by the HANAAdapter, so no deletes will happen in the target.

Subsequent Document Splitting in S/4 HANA Finance 1709 (on Premise)


1. Purpose of this document


This document is for SAP FICO application consultants. With its help, you will be able to implement subsequent document splitting in S/4HANA. You must already know how document splitting works in general; this is a prerequisite.

2. Overview of Subsequent Document Splitting


There are many customers on previous SAP ERP versions (ERP with classic G/L and ERP with new G/L) without document splitting who want to migrate to S/4HANA with document splitting in order to get balance sheets at a level lower than company code, i.e. business area, profit centre and segment. Before the release of S/4HANA 1709, activation of document splitting during or after the migration was not possible if the source system did not have the document splitting functionality activated.

Subsequent activation of document splitting is possible with the S/4HANA Finance 1709 release. Now, a business can migrate to S/4HANA from any version and activate document splitting subsequently. It has to be a separate project from the S/4HANA migration itself.

SAP has provided this solution in the standard offering of S/4HANA 1709; there is no separate licence for using the tools for subsequent document splitting activation. Subsequent document splitting activation in SAP ERP, by contrast, attracts a different license and cost.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

3. Project Phases and their Activities of Subsequent Implementation of Document Splitting


There are three phases of a subsequent document splitting implementation:

1. Preparatory & Testing
2. Execution Phase
3. Post Processing

Preparatory Phase:

◈ Concept Planning
◈ Create Detail Project
◈ Business Requirement
◈ Business Blue Print
◈ Customizing
◈ Testing
◈ Complete Preparation

Execution Phase:

◈ Online Validation Activate
◈ Data Consistency Checks
◈ Enrichment of Open Items with document splitting characteristics
◈ Document Splitting Activate
◈ Fiscal Year Close
◈ Enrichment of Balances
◈ Carry-forward with Document splitting characteristics
◈ Reconciliation

Post Processing Phase:

◈ Manual Adjustment of Opening Balance
◈ Repost Opening Balance
◈ Zero Balance Check

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

I. Preparatory Phase:

a. General Information about the Approach

No change in Approach:

◈ Concept of Document Splitting Logic
◈ Analysis of Business Process and Process Adjustments
◈ Configuration of Document Splitting
◈ Master Data Analysis and Preparation (e.g. Maintenance and Assignment of Profit Centers, Segments)

Change in Approach:

◈ Concept of Data Enrichment
◈ Customizing of a Project for Subsequent Document Implementation
◈ New analysis tool to figure out if the document splitting configuration is complete and if existing journal entries could have been split

Master Data Considerations:

In case you need to introduce a new reporting entity to fulfil the reporting requirements, you have to keep in mind the following:

◈ Correct Master Data Maintenance in all the systems (Configuration, Testing, and Production)
◈ Complete assignment to other accounting objects
◈ Customizing for Field Status Management

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

b. Activation of Document Splitting for Productive Company Code

When setting the Document Splitting Flag (1) and keeping the company codes active for document splitting (2), you need to confirm that these company codes don’t have productive data.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

c. Project Customization for Subsequent Document Splitting

For each subsequent document splitting activation, a project needs to be created, and this project needs to be assigned to company codes as well. This project also includes the activation date of document splitting.

Recommendation: It is recommended that document splitting is activated on the very first day of a new fiscal year. Nevertheless, the business can keep any date as the activation date for document splitting. In both cases, the post processing phase can only be run once you are done with year-end closing and no adjustment postings remain to be posted.

If you have multiple company codes with different fiscal years in the system, you can create multiple projects, but only a single project can be run for document splitting at a time; alternatively, you can select a date according to the fiscal year used by the majority of company codes.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

d. Assignment of Company code with Project

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

e. Analysis Tool for Documents Splitting

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

f. Activation of Online Validation

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

Validation can be further controlled through

◈ BAdi VALIDATION_ADJUST
◈ BAdi COMPLETE_ASSIGNMENT

g. Logic of Data Enrichment

Open Item Enrichment:

◈ Open items need to be enriched with the document splitting characteristics before the document splitting gets active, to ensure follow-up processes can be posted according to the document splitting logic
◈ No splitting – only adding account assignment information to the open item
◈ BAdI Implementation is mandatory – no other enrichment logic is offered
◈ Example implementation CL_FINS_SIS_BADI_OPITEMS_EX1 available

Balance Carry Forward Splitting:

◈ Opening balances need to be split, to ensure reporting of opening balances for document splitting characteristics
◈ Splitting of opening balances generates new balance carry-forward entries in table ACDOCA (identifier is mig_source “E“)

◈ Only empty fields get filled by the BAdI – no overwriting of available information
◈ BAdI Implementation is mandatory – no other enrichment logic is offered
◈ Example implementation CL_FINS_SIS_BADI_BAL_EXAMPLE available

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

Enrichment of Reversal Postings:

◈ Fills account assignments in journal entries that have been posted prior to the Document Splitting activation date and get reversed after the document splitting activation date
◈ This enrichment logic is not part of the data conversion – it‘s used in daily processes esp. reversals

Enrichment of Period End Postings: Fills account assignments for the following:

◈ Line items or balances that have been posted prior to the document splitting activation date
◈ Line items or balances that are processed by a closing transaction, such as Foreign Currency Valuation, and result in a posting with posting date after the document splitting activation date
◈ Line items or balances that are processed by a periodic tax payable calculation and result in a posting with posting date after the document splitting activation date
◈ This enrichment logic is not part of the data conversion – it’s used during period end activities

h. Other Setting

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

i. Check Preparation

Analyse the mandatory settings for your project:

◈ Document splitting activation date is set in the future
◈ Company codes with cross-company code postings are assigned to one project
◈ Online Validation for Document Splitting is active
◈ BAdIs for data enrichment are implemented

The check ensures that:

◈ No documents exist with a posting date later than the document splitting activation date
◈ No splitting information exists so far for the company codes assigned to the project

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

j. Complete Preparations

◈ Checks customizing settings
◈ Creates snapshot of existing customizing
◈ Checks availability of journal entries after Document Splitting activation date

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

II Execution Phase

There is a cockpit for document splitting, in the same way we have a cockpit for the S/4HANA migration. In other words, the execution of data enrichment and reconciliation is handled in the cockpit.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

a. Separation of Major Task

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

b. Cockpit Execution

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

c. Cockpit Task and Action

◈ Activities until “Activate Document Splitting” are performed for all company codes assigned to the project together
◈ Activities from “Confirm Fiscal Year Closure” onwards can be executed by a subset of entities assigned to the project – Activities start with selection screen: company code, ledger, fiscal year variant – selection is optional.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

d. Cockpit – Log & Error Handling

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

e. Cockpit Activity – Reconcile Journal Entries

SIS_RC_UJE “Reconcile Journal Entries“ checks all journal entries for consistency before any data conversion takes place

Following checks are performed:

◈ Check if cross-company postings exist in the system involving at least one company code that is maintained in the project Customizing
◈ Check if the field content is the same for each document line in tables BKPF and BSEG:

◈ Month (period)
◈ BSTAT (document status)
◈ BUDAT (posting date)
◈ BLDAT (document date)
◈ WAERS (currency)
◈ BLART (document type)
◈ AWTYP (reference procedure)
◈ AWSYS (logical system of source document)
◈ AWKEY (object key)

◈ Check that the universal journal entries have a zero balance (a conceptual SQL sketch follows this list)
◈ Check that the entry exists in table BSEG and table ACDOCA
◈ Check that the entry exists in table ACDOCA and table BSEG
◈ Check that the entry exists in table BSEG_ADD and table ACDOCA
◈ Check that the entry exists in table ACDOCA and table BSEG_ADD
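To make the zero-balance check tangible, a purely conceptual SQL version of it could look like the statement below. This is an illustration only; the actual check is of course performed by the cockpit activity itself, and the query assumes you are looking at the ACDOCA table in the ABAP schema.

-- Conceptual sketch: find journal entries in ACDOCA that do not balance to zero
SELECT RLDNR, RBUKRS, GJAHR, BELNR, SUM(HSL) AS BALANCE FROM ACDOCA GROUP BY RLDNR, RBUKRS, GJAHR, BELNR HAVING SUM(HSL) <> 0;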

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

f. Cockpit Activities: Generate Splitting Information for Open Items

Splitting information for open items is required for follow-up processes, e.g. clearing or reversal, that run after the doc. splitting activation date.

Note: Open items are only enriched with account assignments – not split.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

g. Cockpit Activities: Generate Splitting Information for Open Items and Reconciliation of Splitting Information

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

h. Cockpit Activity: Activate Document Splitting

SIS_MAN_SP Activate Document Splitting is a manual activity to confirm the Activation of Document Splitting for the defined date.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

◈ Without performing this activity, the document splitting will not be activated.
◈ The activity needs to be performed before the posting period that contains the document splitting activation date is opened, and before any journal entry has been posted on or after the defined date.
◈ Activities for open item enrichment have to be performed completely before setting this confirmation status.
◈ In production system you cannot RESET activities from SIS_MAN_SP onwards.
◈ An action log is provided for manual activities via the context menu.

I. Cockpit Activity: Confirm Fiscal Year Closure

◈ This confirmation activity is precondition for the enrichment of the opening balances.
◈ The activity checks, that balance carry forward entries are available for the year of document splitting activation.
◈ An action log is provided for manual activities via the context menu.
◈ The confirmation activity can be performed per company code and ledger combination, and all follow-on activities as well. In case one company code closes the year faster than the other company codes, the fast one can proceed with the enrichment on its own.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

j. Cockpit Activity: Enrich Journal Entries

SIS_ENR “Enrich Journal Entries” copies splitting information, created by activity SIS_SPL  Generate Splitting Information from tables FAGL_SPLINFO and FAGL_SPLINFO_VAL into table ACDOCA.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

Only document splitting characteristics are copied from FAGL_SPLINFO into ACDOCA.

◈ The results of this activity are the basis for the calculation of opening balances on open item managed accounts in the activity SIS_BAL “Enrich Opening Balances”.
◈ The log gives you detailed information on error and success messages
◈ “Reset Activity” doesn’t delete entries. The updated information is kept in table ACDOCA.

 k. Cockpit Activities: Reconcile Opening Balances

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

l. Cockpit Activities: Enrich Opening Balances

SIS_BAL “Enrich Opening Balances” adds the relevant account assignment details to the balance carry-forward data.

Different scenarios for the Enrichment:

◈ Open-item-managed accounts:
◈ The opening balances are equal to the total of their open items and therefore based on the enriched open items
◈ The total number of the open items in combination with the relevant account assignments is calculated and the opening balances are enriched
◈ Not open-item-managed accounts:
◈ The opening balances for accounts are enriched based on the definition of the BAdI FINS_SIS_BAL

Resetting the Opening Balances:

◈ Deletes entries in ACDOCA with mig_source “E”
◈ Deletion includes entries from “Repost Opening Balances” in the Post Processing Phase
◈ Postings from “Post Values Adjustment for Opening Balances” in the Post Processing Phase must be reversed before resetting the activity Enrich Opening Balances

Note: When defining the enrichment logic in the BAdI consider, that the opening balances are much more detailed in S/4HANA than in ECC.
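For a quick look at what the enrichment has generated, the enriched balance carry-forward entries can also be inspected directly in ACDOCA. The statement below is an illustrative sketch only (the proper verification is the cockpit reconciliation activity); it assumes the mig_source “E” identifier mentioned above and a selection of common ACDOCA fields.

-- Illustrative sketch: review balance carry-forward entries created by the enrichment (mig_source 'E')
SELECT RLDNR, RBUKRS, RACCT, PRCTR, SUM(HSL) AS BALANCE FROM ACDOCA WHERE MIG_SOURCE = 'E' GROUP BY RLDNR, RBUKRS, RACCT, PRCTR;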

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

m. Cockpit Activity: Confirm Data Enrichment

Once you have confirmed the data enrichment, you can no longer reset activities in the project cockpit. If you need to reset and rerun activities, you need to reset the Confirm Data Enrichment first. You can only reset the Confirm Data Enrichment activity in test systems.

Steps in the Post Processing Phase can only be performed, if data enrichment has been confirmed.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

n. Conditions and Limitations

Please be aware that some data is not enriched during the Execution Phase. The following accounting data is not relevant for General Ledger Accounting:

◈ Down Payment Requests and other statistical items
◈ Payment Requests
◈ Reference Documents such as recurring entries, sample document, account assignment model need to be updated manually
◈ Held and parked documents without necessary account assignments have to be posted before the document splitting activation

If you want to create journal entries based on one of these types of documents, you can either change the journal entries, or reverse and re-create them to include the document splitting characteristics. 

III Post Processing Phase

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

a. Post Value Adjustment of Opening Balances

These manual postings are used to adjust the result of the balance carry-forward (period 0) per account assignment

◈ Can only handle currencies available in table BSEG
◈ Posting date is set automatically to first day of fiscal year – If you activate document splitting sometime during a fiscal year, this date is the first day of the following fiscal year
◈ Amounts in local currencies need to be entered – no currency calculation logic available
◈ Postings can only be made to accounts that are not managed on an open item basis
◈ Postings cannot be made to reconciliation accounts (customers or vendors) and asset accounts

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

b. Repost Opening Balances

Update G/L balances for document splitting characteristics without impacting balances on company code and account level, by writing records into period 0 of table ACDOCA, like the balance carry-forward.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

Preconditions:

◈ Data enrichment is confirmed
◈ Project is not set to “Completed”

c. Check Zero Balances

Checks the zero balance for reporting objects that have been defined as zero-balance entities in Customizing for the document splitting characteristics for General Ledger.

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

◈ Zero Balance Check runs as a mass data activity in the cockpit
◈ Navigation from an error message to drill down reporting with handover of parameters for report selection

d. Complete Project

Confirm that the overall project, including all relevant activities, has been completed

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

‘Complete Project’ can only be set if following conditions match

◈ Data Enrichment is confirmed for all company codes
◈ Authorization object FINS_MPROJ is assigned to user

4. Activation of Doc. Splitting for Companies with Different Fiscal Years in One Project

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

5. System Dependencies

SAP HANA Live, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Guides

6. Recommendations & Key Information

It is best to review the recommendations and key information below before you start subsequent document splitting.

◈ It is recommended that document splitting is activated on the very first day of a new fiscal year, though the date is freely selectable
◈ It is not applicable for the cloud solution
◈ It has to be a separate project after the S/4HANA migration go-live
◈ The project can be achieved with almost zero downtime
◈ Subsequent document splitting is called a project and not a migration
◈ It is handled along similar lines to the S/4HANA migration, meaning document splitting is configured and tested on a production mock system first
◈ You can create multiple projects and assign company codes to them fiscal-year-wise; only one project can be in execution at a time
◈ The project preparation phase can be reset if you have not started the data enrichment steps

7. License & Fees

Subsequent document splitting is part of the standard offering in S/4HANA. No additional service or licence needs to be bought.

8. Subsequent Implementation of Document Splitting – Summary 

◈ Only relevant, if productive accounting data need to be handled
◈ Time-dependent Activation of Document Splitting
◈ Guided Implementation via IMG and Cockpit for Mass Data Processing

SAP Automated Predictive Library (APL) Installation and configuration for SAP HANA

What is SAP HANA Automated Predictive Library (APL)?

SAP HANA APL is an Application Function Library (AFL) which lets you use the data mining capabilities of the SAP Predictive Analytics automated analytics engine on your customer datasets stored in SAP HANA.

The APL is:

– A set of functions that you use to implement a predictive modeling process in order to answer simple business questions on your customer datasets.
– A set of simplified APL procedures, SAPL (Simple APL), that you can also use to call the APL functions.

You can create the following types of models to answer your business questions:

– Classification/Regression models
– Clustering models
– Time series analysis models
– Recommendation models

Installing SAP APL v2.5.10.x on SAP HANA SP10

Software Requirements


You must have the following software installed in order to use this version of SAP APL:

1. SAP HANA SPS10 and higher
2. SAP AFL SDK 1.00.090 or greater (this is part of SAP HANA)
3. unixODBC 64 bits
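Before starting, it can be worth confirming the installed HANA revision from a SQL console; a minimal check against a standard monitoring view:

-- Check the installed SAP HANA version
SELECT VERSION FROM "SYS"."M_DATABASE";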

SAP APL 2.0 Software download path in service market place

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

unixODBC 64 bits


APL has a dependency on the libodbc.so.1 library included in unixODBC. In the latest unixODBC versions, this library is available only as libodbc.so.2. The workaround in this situation is to create a symbolic link to libodbc.so.2 named libodbc.so.1 in the same folder.

http://www.unixodbc.org/

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

unixODBC installation


SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

cd <unixODBC install folder> (for example /usr/lib64)

ln -s libodbc.so.2 libodbc.so.1

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

Installing SAP APL v2.5.10.x on SAP HANA SP10


SAP APL deployment in the hana server

Note: You need root privileges (sudo) to run the installer.

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

We could check the add-on installation from SAP HANA Studio

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

After the function library has been installed, the HANA script server must be enabled and the HANA index server should be restarted. The following system views should then contain APL entries, which show that the APL is available:

— check that APL functions are there

select * from "SYS"."AFL_AREAS";

select * from "SYS"."AFL_PACKAGES";

select * from "SYS"."AFL_FUNCTIONS" where AREA_NAME='APL_AREA';

select * from "SYS"."AFL_FUNCTION_PARAMETERS" where AREA_NAME='APL_AREA';

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

select "F"."SCHEMA_NAME", "A"."AREA_NAME", "F"."FUNCTION_NAME", "F"."NO_INPUT_PARAMS", "F"."NO_OUTPUT_PARAMS", "F"."FUNCTION_TYPE", "F"."BUSINESS_CATEGORY_NAME" from "SYS"."AFL_FUNCTIONS_" F, "SYS"."AFL_AREAS" A where "A"."AREA_NAME"='APL_AREA' and "A"."AREA_OID" = "F"."AREA_OID";

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

Configuration


This is the script delivered along with the software; it can be found under the samples directory:

hostname:/hana/data/HDB/SAP_APL/apl-2.5.0.0-hanasp10-linux_x64/samples/sql/direct # more apl_admin.sql

-- Run this as SYSTEM

connect SYSTEM password manager;


-- Enable script server

alter system alter configuration ('daemon.ini', 'SYSTEM') set ('scriptserver', 'instances') = '1' with reconfigure;

-- Check that APL functions are there

select * from "SYS"."AFL_AREAS";

select * from "SYS"."AFL_PACKAGES";

select * from "SYS"."AFL_FUNCTIONS" where AREA_NAME='APL_AREA';

select "F"."SCHEMA_NAME", "A"."AREA_NAME", "F"."FUNCTION_NAME", "F"."NO_INPUT_PARAMS", "F"."NO_OUTPUT_PARAMS", "F"."FUNCTION_TYPE", "F"."BUSINES

S_CATEGORY_NAME"

from "SYS"."AFL_FUNCTIONS_" F,"SYS"."AFL_AREAS" A

where "A"."AREA_NAME"='APL_AREA' and "A"."AREA_OID" = "F"."AREA_OID";

select * from "SYS"."AFL_FUNCTION_PARAMETERS" where AREA_NAME='APL_AREA';



-- Create a HANA user known as USER_APL, who's meant to run the APL functions

drop user USER_APL cascade;

create user USER_APL password Password1;

alter user USER_APL disable password lifetime;

-- Sample datasets can be imported from the folder /samples/data provided in the APL tarball

-- Grant access to sample datasets

grant select on SCHEMA "APL_SAMPLES" to USER_APL;

-- Grant execution right on APL functions to the user USER_APL

grant AFL__SYS_AFL_APL_AREA_EXECUTE to USER_APL;

grant AFLPM_CREATOR_ERASER_EXECUTE TO USER_APL; 

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

There is one step not shown in the sample SQL, which is creating the APL_SAMPLES schema; this is straightforward:

create schema APL_SAMPLES;

Create the table types by running the provided script “apl_create_table_types.sql”

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

Import samples data from the download directory

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

Check the imported content from SAP HANA studio

SAP Automated Predictive Library (APL), SAP HANA APL, SAP HANA Certifications, SAP APL

The samples should now all be configured and available for use directly via SQL or using Predictive Analysis 2.0.

Bringing Machine Learning (TensorFlow) to the enterprise with SAP HANA

In this blog I aim to provide an introduction to TensorFlow and the SAP HANA integration, give you an understanding of the landscape and outline the process for using External Machine Learning with HANA.

There’s plenty of hype around Machine Learning, Deep Learning and of course Artificial Intelligence (AI), but understanding the benefits in an enterprise context can be more challenging.  Being able to integrate the latest and greatest deep learning models into your enterprise via a high performance in-memory platform could provide a competitive advantage or perhaps just keep up with the competition?

From HANA 2.0 SPS 02 onwards we have the ability to call TensorFlow (TF) models, or graphs as they are known. HANA now includes a method to call External Machine Learning (EML) models via a remote source. The EML integration is performed using a wrapper function, very similar to the Predictive Analysis Library (PAL) or Business Function Library (BFL). Like the PAL and BFL, the EML is table based, with tables storing the model metadata, parameters, input data and output results. At the lowest level EML models are created and accessed via SQL, making them a perfect building block.
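To give a flavour of how SQL-centric this is: a TensorFlow Serving instance is registered in HANA as a remote source using the grpc adapter, roughly as sketched below. The host, port and remote source name are assumptions for illustration; check the EML documentation for your exact HANA revision before using it.

-- Sketch: register a TensorFlow Serving instance as an External Machine Learning remote source
CREATE REMOTE SOURCE "TensorFlow" ADAPTER "grpc" CONFIGURATION 'server=tfserving.example.com;port=8500';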


TensorFlow Introduction


TensorFlow by itself is powerful, but embedding it within a business process or transaction could be a game changer.  Linking it to your enterprise data seamlessly and being able to use the same single source of data for transactions, analytics and deep learning without barriers is no longer a dream.  Having some control, and audit trail of what models were used by who, how many times, when they were executed and with what data is likely to be a core enterprise requirement.

TensorFlow itself is a software library from Google that is accessed via Python. Many examples exist where TF has been used to process and classify both images and text. TF models work by feeding tensors through multiple layers; a tensor itself is just a set of numbers. These numbers are stored in multi-dimensional arrays, which can make up a layer. The final output of the model may lead to a prediction, such as a true/false classification. We use a typical supervised learning approach, i.e. the TensorFlow model first requires training data to learn from.

As shown below, a TF model is built up of many layers that feed into each other.  We can train these models to identify almost anything, given the correct training data, and then integrate that identification within a business process. Below we could pass in an image and ask the TensorFlow model to classify (identify) it, based on training data.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Tutorials and Materials

Equally, we could build a model that expects some unstructured text data. The model’s internals may be quite different, but the overall concept would be similar.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Tutorials and Materials

For text data and text classification, extensive research has been performed by Google with Word2Vec and Stanford publishing GloVe, providing vector representations of words.  Pre-trained word vectors are available for download covering multiple languages.

HANA TensorFlow Landscape


With the SAP HANA TensorFlow integration, there are two distinct scenarios: model development/training and model deployment. First you develop a model, train it, test it and validate it with training data, where the outcome is known. Here we have shown that environment with a Jupyter notebook. Finally, you would publish the model for TensorFlow Serving and make that model available via SAP HANA. During the development and training phase, HANA would primarily be used as a data source.


Once a model has been published for scoring, the Jupyter notebook and Python are no longer used.
Model execution is performed by TensorFlow Serving, which loads up a trained model and waits for input data from SAP HANA.


Often, to productionise a TensorFlow model with TensorFlow Serving you would need to develop a client specifically to interact with that model.  With the HANA EML, we have a metadata wrapper that resides in HANA to provide a common way to call multiple TensorFlow Serving models.  With the HANA EML TensorFlow models can now be easily integrated into enterprise applications and business processes.

Some Implementation Specifics


SAP HANA Python Connectivity

There are at least 3 options to consider

pyHDB – pure Python
hdbcli – HANA Client
pyodbc – ODBC

I went with the pure Python library, from the official SAP GitHub https://github.com/SAP, as this appears to be the simplest when moving between platforms since it has the fewest dependencies, although it does not yet support SSL or Kerberos so may not be suitable for production just yet.  Hdbcli is part of the HANA Client distribution and is the most comprehensive with the best performance, but requires a binary installation from a .sar file.  Pyodbc is a generic solution more suited to Windows-only scenarios.
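For reference, a minimal pyhdb connection looks something like the sketch below. The hostname, user and password are placeholders, and the SQL port follows the usual 3<instance>15 pattern:

# Minimal pyhdb connection sketch; hostname, port and credentials are placeholders.
import pyhdb

connection = pyhdb.connect(
    host="hana.example.com",   # hypothetical host
    port=30015,                # 3<instance>15, e.g. 30015 for instance 00
    user="ML_USER",            # hypothetical user
    password="secret"
)

cursor = connection.cursor()
cursor.execute("SELECT 'Hello HANA' FROM DUMMY")
print(cursor.fetchone())       # ('Hello HANA',)

cursor.close()
connection.close()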

Python 2.7 vs Python 3.6


Newer isn’t always better! Python 2.7 and Python 3.6 are not necessarily backwards or forwards compatible, and you can spend a lot of time debugging examples that were written against a different version. Many examples you find don’t specify a version, and when they do it’s easy to overlook this important detail. We used Python 3.6, but found many examples had 2.7 syntax which does not always work in 3.6. My advice is to always use the same Python version as any tutorials you are following.  At the time of writing the TensorFlow Serving repository binary is built for Python 2.7, so you may need to compile it yourself for Python 3.6.
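Just as an illustration of the kind of differences that bite, here are two of the most common ones:

# Two of the most common differences between Python 2.7 and Python 3.6 examples.

# print was a statement in 2.7 and is a function in 3.6.
# print "hello"       # SyntaxError under Python 3.6
print("hello")        # works in both

# Division of integers changed: 3 / 2 is 1 in Python 2.7 but 1.5 in Python 3.6.
print(3 / 2)          # 1.5 under Python 3.6
print(3 // 2)         # 1 in both versions (explicit floor division)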

Jupyter Notebooks & Anaconda


Get familiar with Jupyter; it is a great interactive development environment. Jupyter runs equally well on your laptop or in Amazon Web Services (AWS) or Google Cloud Platform (GCP).
I began with Anaconda on my laptop, which provides applications (such as Jupyter), easy package management and environment isolation.  Jupyter notebooks are easy to move between local and AWS/GCP environments as long as the required Python libraries of the same versions are installed on both platforms.

Keras


Keras is a Python library that provides higher-level access to TensorFlow functions and even allows you to switch between alternative deep learning backends such as Theano, Google TensorFlow and Microsoft Cognitive Toolkit (CNTK).  We tried Theano and TensorFlow, and apparently you can even deploy Theano models to TensorFlow Serving.
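As a rough illustration (not the model used in this blog), a tiny Keras classifier trained on made-up data might look like this:

# A minimal Keras sketch with made-up data and shapes -- not the model from this blog.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Dummy training data: 1000 samples, 20 features, binary labels.
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dense(1, activation='sigmoid'))

# The backend (TensorFlow, Theano or CNTK) is selected via keras.json or the
# KERAS_BACKEND environment variable; the model definition stays the same.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=32, batch_size=128)

The same model code runs unchanged whichever backend is configured, which is what makes Keras convenient for experimentation.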

GPU > CPU


Once you are up and running with an example model you will find that it takes some time to train your models, even with modern CPUs.  With deep learning models you train the model over a number of epochs or episodes (a complete pass over the data set).  After each epoch the model learns from the previous one, and it’s common to run 16, 32, 64, 100 or even 1000 epochs.  An epoch could take 30 seconds to run on a CPU and less than 1 second on a single GPU.

If you use a cloud platform, both GCP and AWS have specific instance types designed for this. On AWS, the G3 (g3.4xlarge), P2 (p2.xlarge) and P3 (p3.2xlarge) instance types are suited for deep learning, as they include GPUs; I would recommend the P instance types, as these are the latest and greatest. If/when you are at this stage, to fully utilise the GPUs you may need to compile TensorFlow (or your chosen deep learning framework) for your specific environment.
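A quick way to confirm that TensorFlow can actually see the GPU on your instance is to list the local devices (TensorFlow 1.x API):

# Quick check of which devices TensorFlow can see (TensorFlow 1.x).
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    # device_type is 'CPU' or 'GPU'; no GPU entry usually means the CPU-only
    # TensorFlow build is installed, or the drivers/CUDA libraries are missing.
    print(device.device_type, device.name)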

Serving the model


Once you are done with building, refining, training and testing your model you will then need to save it for TensorFlow Serving.  Saving a model for serving is not necessarily easy!  TensorFlow Serving wants a specific saved_model.pb, and the saved model needs to have a signature that defines its inputs and outputs.  When you have created your own model you will likely need to build a specific saved_model export function.  We will share some code snippets in a future blog.
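The exact export code depends on your model, but a rough TensorFlow 1.x sketch with a prediction signature (using a trivial stand-in graph and a placeholder export path) looks like this:

# Rough TensorFlow 1.x sketch of exporting a model with a prediction signature
# for TensorFlow Serving. The graph here is a trivial stand-in; in practice the
# input/output tensors come from your own trained model, and the export path
# (with a numeric version sub-directory) is a placeholder.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1], name="input")
y = tf.layers.dense(x, 1, name="output")

export_dir = "/tmp/my_model/1"   # TensorFlow Serving expects <model_name>/<version>/

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={"input": x}, outputs={"output": y})
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()   # writes saved_model.pb plus a variables/ directory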

TensorFlow Serving is cross-platform, but we found that Ubuntu seems to be Google’s preferred Linux distribution.  The prerequisites are straightforward on Ubuntu, which was not the case with Amazon Linux, which evolved from Red Hat.

Using Topology for Data Analysis

When researching data we want to find features that help us understand the information. We look for insight in areas like Machine Learning or other fields in Mathematics and Artificial Intelligence. I want to present here a tool initially coming from Mathematics that can be used for exploratory data analysis and give some geometric insight before applying more sophisticated algorithms.

The tool I want to describe is Persistent Homology, a member of a set of algorithms known as Topological Data Analysis. In this post I will describe the basic methodology when facing a common data analysis scenario: clustering.

SOME IDEAS FROM TOPOLOGY

A space is a set of data with no structure. The first step is to give it some structure that can help us understand the data and also make it more interesting. If we define a notion of how close the points are to each other, we are giving structure to this space. This notion is a neighborhood, and it tells us whether two points are close. With this notion we already have important information: we now know if our data is connected.

The neighborhoods can be whatever we want, and the data points can be numbers, words or other types of data. These concepts and ideas are the subject of study of Topology. For us, Topology is the study of the shape of data.

We need to give some definitions, but all are very intuitive. From our point space or dataset, we define the following notion: a simplex. It is easy to visualize what we mean.


So, a 0-simplex is a point. Every point in our data is a 0-simplex. If we have a “line” joining two points, that is a 1-simplex, and so on. Of course, a 4-simplex and higher analogues are difficult for us to visualize. We can immediately see what connectedness is. In the image, we have four connected components: a 0-simplex, a 1-simplex, a 2-simplex and a 3-simplex. If we join them with, for example, lines, we will connect the dataset into one single component. Like this:


The next notion is the neighborhood. We’ll use Euclidean distance to say when our points are close, and we’ll use circles as neighborhoods. This distance depends on a parameter, the radius of the circle. If we change this parameter we change the size of the neighborhood.


Persistence is an algorithm that changes this parameter from zero to a very large value, one that covers the entire set. With this maximal radius we enclose the whole dataset. The algorithm can be put as follows:

1. We construct a neighborhood for each point and set the parameter to zero.
2. Increment the value of this parameter and, whenever two neighborhoods intersect, draw a line between the points. These lines form 1-simplices; as the parameter keeps growing, higher simplices form at each step until the whole space is filled.
3. Describe in some way the holes our data has as we increase the parameter. Keep track of when they emerge and when they disappear. If the holes and voids persist as we move the parameter, we can say that we have found an important feature of our data.

The “some way” part is called Homology and is a field in Mathematics specialized in detecting the structure of space. The reader can refer to the bibliography for these concepts.

This algorithm can be shown to detect holes and voids in datasets. One achievement we can mention is that Persistent Homology was used to detect a new subtype of breast cancer, by using it to find clusters in images.

We will use R language integrated with the SAP HANA database to work with these tools.

VEHICLE DATASET

The dataset is about car accidents and includes a number of attributes. We query in HANA only the data we need for this demo: an ID of the accident, the spatial coordinates and two categorical attributes, Local Highway Authority and Road Type. That’s all we need to start. The data looks like this:



Then we visualize this data:


Now we use the Topological Data Analysis (TDA) library in the R language to study the data, and store the results so we can build a visualization later.

DROP PROCEDURE "TDA";
-- procedure with R script using the TDA package
CREATE PROCEDURE "TDA" (IN vehic_data "VEHIC_DATA", OUT persistence "PERSISTENCE")
LANGUAGE RLANG AS 
BEGIN
library(TDA)
persist <- function(vehic_data){

    # We point out that the columns are only the spatial coordinates
    # You can find how to construct this example in the TDA package documentation

    vehic_vector <- cbind("V1" = vehic_data$longitude, "V2" = vehic_data$latitude)
    xlimit <- c(-0.3, 0)
    ylimit <- c(51.2, 51.6)
    by <- 0.002
    x_step <- seq(from = xlimit[1], to = xlimit[2], by = by)
    y_step <- seq(from = ylimit[1], to = ylimit[2], by = by)
    grid <- expand.grid(x_step, y_step)
    diag <- gridDiag(X = vehic_vector, FUN = distFct, lim = cbind(xlimit, ylimit), by = by,
                 sublevel = FALSE, library = "Dionysus", printProgress = FALSE)
    # Since gridDiag returns a list, we access the diagram component:
    diagram <- diag[["diagram"]]
    topology <- data.frame(cbind("dimension"=diagram[,1],"death"=diagram[,2],"birth"=diagram[,3]))
    return(topology)
    }
# Use the function
persistence <- persist(vehic_data)
END;
-- call to keep results in a table
CALL "TDA" ("VEHIC_DATA", "PERSISTENCE") WITH OVERVIEW;

Next we visualize the results. Here I show the results in R, using the TDA package itself, just as an example.


This is a barcode. The barcode shows the persistence of some topological features of our data against the parameter “time”, which is the radius of our neighborhoods as we increase it. The red line tells us that there is a “hole”, an empty space, and we can check this in the visualization. The other lines represent connected components of the dataset, which means we have clustering. The barcode shows that we can expect 3 or 4 important clusters that will persist even if the data has noise.

The ability to persist is a topological property of the data.

After this analysis, we can start the usual Machine Learning approach: K-means…

Since this data is quite dense in its parameters, we have to use other settings in Topological Data Analysis to find better approximations to the persistent characteristics. Euclidean distance only helps us as a start; we can change this to a more specialized filtering of our data. But we can be sure we have a good approximation, since Persistent Homology is robust against noise and smooth changes in the data.

We will explore some of these ideas in the next blogs and compare to the usual approaches in Machine Learning.

Using Topology for Data Analysis II

In this second part, we use Topological Data Analysis (TDA) on a dataset consisting of spatial information related to traffic. We’ll compare it to the usual DBSCAN method from Machine Learning.

DBSCAN is a method for finding clusters in data. It stands for Density-Based Spatial Clustering of Applications with Noise. It usually requires two parameters plus the data: a radius, known as eps, and the minimum number of points required to form a cluster, which is the “density” part.

In any case, these parameters are unknown a priori. TDA can help here by giving connected components as an initial choice of clusters; and, being robust against noise, these clusters will persist.

And a quick visualization of the dataset can be helpful for comparing these methods:

We can immediately see some clusters given by traffic in this trajectory. The data is nested in a small range, so the radius is going to be 0.0005 and we set the minimum number of points to 3 so that we keep close GPS positions together.

-- In the standard PAL procedure we introduce the parameters we defined:
INSERT INTO DB_PARAMS VALUES ('THREAD_NUMBER', 2, null, null);
INSERT INTO DB_PARAMS VALUES ('DISTANCE_METHOD', 2, null, null);
INSERT INTO DB_PARAMS VALUES ('MINPTS', 3, null, null); 
INSERT INTO DB_PARAMS VALUES ('RADIUS', null, 0.0005, null); 
-- Remember to call this procedure using a predefined view.
CALL _SYS_AFL.PAL_DB (DB_DATA, DB_PARAMS, DB_RESULTS) WITH OVERVIEW;

The results can be visualized in R with the “dbscan” package and a very interesting idea that lets you see the convex hull of the clusters:


We can see that the eps was too small and the algorithm detected too many clusters, but it kept the ones corresponding to traffic. Taking a bigger parameter we can find better results:


So the algorithm finds the traffic, but it’s sensitive to noise induced by the fact that GPS positions are not nicely distributed. To make a better approximation we turn to Topological Data Analysis, expecting 4 clusters or more, but as connected components.

DROP PROCEDURE "TDA";
-- procedure with R script using the TDA package
CREATE PROCEDURE "TDA" (IN gps_data "GPS_DATA", OUT persistence "PERSISTENCE")
LANGUAGE RLANG AS 
BEGIN
library(TDA)
persist <- function(gps_data){
    # You can find how to construct this example in the TDA package documentation

    gps_vector <- cbind("V1" = gps_data$longitude, "V2" = gps_data$latitude)
    xlimit <- c(-37.1, -37.0)
    ylimit <- c(-11.0, -10.8)
    by <- 0.0002
    x_step <- seq(from = xlimit[1], to = xlimit[2], by = by)
    y_step <- seq(from = ylimit[1], to = ylimit[2], by = by)
    grid <- expand.grid(x_step, y_step)
    diag <- gridDiag(X = gps_vector, FUN = distFct, lim = cbind(xlimit, ylimit), by = by,
                 sublevel = FALSE, library = "Dionysus", printProgress = FALSE)
    # Since gridDiag returns a list, we access the diagram component:
    diagram <- diag[["diagram"]]
    topology <- data.frame(cbind("dimension"=diagram[,1],"death"=diagram[,2],"birth"=diagram[,3]))
    return(topology)
    }
# Use the function
persistence <- persist(gps_data)
END;
-- call to keep results in a table
CALL "TDA" ("GPS_DATA", "PERSISTENCE") WITH OVERVIEW;

And the resulting barcode is the following:


This is a very good result: we can explore datasets related to traffic by looking at their topological properties and extract the information relevant to us, the topological features. After finding these we can take a look at other tools in machine learning to get more detailed information.

In this case, topological information plays a crucial role since it gives geometric insight to start our research; it is a frame for machine learning and gives us mathematical support for the choice of parameters that are usually set by a rule of thumb.

Topology has a big part to play in the development of Machine Learning in general and many different ideas are being explored, not only persistent homology. Also, we are currently working on new applications of this tool, so KeepTheThread.

‘Hello Block’ – Hana Xs Blockchain Proof of Work Application

Welcome to my blog. This post is all about a basic blockchain and how it actually works. I came across lots of blogs (not on SAP) where people discussed blockchain at length; the funny one was ‘how I explained blockchain to my grandma’. But I always wondered what it means to a developer like me: how can we implement a blockchain from scratch, with no APIs, building everything ourselves? Before jumping into the code I would like to explain some core technical concepts which are very important.

Blockchains


A blockchain is a distributed database that maintains a continuously growing list of ordered records (blocks) which can be read by anyone. Nothing special, but blocks have an interesting property: to be added to the blockchain a block has to satisfy certain conditions, and once added, blocks are immutable. Once a block has been added to the chain, it cannot be changed anymore without invalidating the rest of the chain.

And this is one of the reasons why cryptocurrencies like Bitcoin, Ethereum, Litecoin etc. are based on a blockchain: nobody should be able to change a transaction which has already been made.

Block Structure


As a blockchain developer, the first and most important task is to decide the structure of a block. In our example we will keep it very simple and include only the most necessary components: index, timestamp, data, hash and previous hash. All components are fairly self-explanatory: index is the block index, timestamp is the commit time when the block was added to the blockchain, hash is the SHA-256 hash value of that particular block, and previous hash is the SHA-256 hash value of the previous block.


The hash of the previous block must be present in the block to form a valid chain and preserve its integrity.

Genesis Block


The genesis block is simply the first block in the blockchain. It is basically a dummy block to start the chain; in the Bitcoin blockchain the previous hash of the genesis block is zero. Usually developers hard-code it to bootstrap the blockchain.

Block Hash


To keep the integrity of the blockchain every block needs to be hashed with SHA-256. For those not familiar with SHA-256: like MD5 and SHA-1 it is a cryptographic hash function, and once you hash any plain text with it there is no practical way to get the original text back.
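Conceptually, and independent of the XSJS implementation shown below, hashing a block boils down to something like this small Python sketch (the field names mirror the table defined later; the timestamps are made up):

# Conceptual Python sketch only -- the actual implementation in this blog is the
# XSJS service shown below. Field names mirror the BlockChain table.
import hashlib

def calculate_hash(index, data, commit_at, previous_hash):
    """SHA-256 over the concatenated block fields."""
    payload = "{}{}{}{}".format(index, data, commit_at, previous_hash)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest().upper()

genesis_hash = calculate_hash(1, "Genesis Block", "2018-01-01T00:00:00", "0000000X0")
next_hash = calculate_hash(2, "nitin->jim", "2018-01-01T00:05:00", genesis_hash)
print(genesis_hash)
print(next_hash)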

Adding Block


To add a block to the blockchain we need to know the hash of the previous block and create the rest of the required content (index, hash, data and timestamp). The block data will be provided by the end user.

“Talk is cheap. Show me the code.” -Linus Torvalds

Let’s look at the stuff which makes actual sense for developers. I have used an SAP Cloud Platform developer (trial) account and the MDC HANA trial to build the whole demo. If you haven’t used the MDC HANA trial in your SAP Cloud Platform trial account, please have a look at the MDC trial setup in SCP; the SAP tutorials provide a good guide for that.

Defining Block Structure


First I have defined a schema called “HELLOBLOCK”. My design-time artifact (schema.hdbschema) looks like this:

schema_name = "HELLOBLOCK";

Then I have created a column table called BlockChain with all the necessary fields. My entity.hdbdd looks like this:

namespace HelloBlock.DBARTIFACT;
@Schema: 'HELLOBLOCK' 
context entity {
   @Catalog.tableType: #COLUMN
   Entity BlockChain{
       key index : Integer;
       data : LargeString;
       commit_at : UTCTimestamp;
       previous_hash : LargeString;
       current_hash: LargeString;
   };
};

Adding Block in Blockchain


I have created a simple procedure to insert an entry into the blockchain table; it will be called from XSJS once the block has been validated.

procedure code :

PROCEDURE "HELLOBLOCK"."HelloBlock.Procedure::InsertBlock" (
    IN block_index Integer,
    IN block_data Nclob,
    IN block_commit TIMESTAMP,
    IN prevblock_hash Nclob,
    IN Currblock_hash Nclob
    )
   LANGUAGE SQLSCRIPT
   SQL SECURITY INVOKER AS
   --DEFAULT SCHEMA <default_schema_name>
--   READS SQL DATA AS
BEGIN
   /*************************************
       Write your procedure logic 
   *************************************/
   INSERT INTO "HELLOBLOCK"."HelloBlock.DBARTIFACT::entity.BlockChain"("index","data","commit_at","previous_hash","current_hash") 
   VALUES (:block_index,:block_data,:block_commit,:prevblock_hash,:Currblock_hash);
END

The HANA XSJS service to add a block to the blockchain:

var acmd = $.request.parameters.get("action");
var BlockData = $.request.parameters.get("data");
var BlockIndex = $.request.parameters.get("index");
var PrevBlockHash = $.request.parameters.get("prevhash");

function DisplayBlockChain() {
try {
var conn = $.db.getConnection();
var output = {
results: []
};
var query = 'select * from "HELLOBLOCK"."HelloBlock.DBARTIFACT::entity.BlockChain"';
var myStatement = conn.prepareStatement(query);
var rs = myStatement.executeQuery();
while (rs.next()) {

var record = {};
record.Index = rs.getInteger(1);
record.Data = rs.getNClob(2);
record.CommitedTime = rs.getString(3);
record.PrevBlockHash = rs.getNClob(4);
record.CurrentHash = rs.getNClob(5);
output.results.push(record);

}
rs.close();
myStatement.close();
conn.close();

} catch (e) {
$.response.status = $.net.http.INTERNAL_SERVER_ERROR;
$.response.setBody(e.message);
return;
}

var body = JSON.stringify(output);
$.response.contentType = 'application/json';
$.response.setBody(body);
$.response.status = $.net.http.OK;

}

function CurrentTImeStamp() {
var date = new Date();

var utcout = date.getFullYear() + '-' +

('0' + (date.getMonth() + 1)).slice(-2) + '-' +

('0' + date.getDate()).slice(-2) + 'T' +

('0' + date.getHours()).slice(-2) + ':' +

('0' + date.getMinutes()).slice(-2) + ':' +

('0' + date.getSeconds()).slice(-2);

return (utcout);
}

function CalculateHash(s) {

var chrsz = 8;

var hexcase = 0;

function safe_add(x, y) {

var lsw = (x & 0xFFFF) + (y & 0xFFFF);

var msw = (x >> 16) + (y >> 16) + (lsw >> 16);

return (msw << 16) | (lsw & 0xFFFF);

}

function S(X, n) {
return (X >>> n) | (X << (32 - n));
}

function R(X, n) {
return (X >>> n);
}

function Ch(x, y, z) {
return ((x & y) ^ ((~x) & z));
}

function Maj(x, y, z) {
return ((x & y) ^ (x & z) ^ (y & z));
}

function Sigma0256(x) {
return (S(x, 2) ^ S(x, 13) ^ S(x, 22));
}

function Sigma1256(x) {
return (S(x, 6) ^ S(x, 11) ^ S(x, 25));
}

function Gamma0256(x) {
return (S(x, 7) ^ S(x, 18) ^ R(x, 3));
}

function Gamma1256(x) {
return (S(x, 17) ^ S(x, 19) ^ R(x, 10));
}

function core_sha256(m, l) {

var K = new Array(0x428A2F98, 0x71374491, 0xB5C0FBCF, 0xE9B5DBA5, 0x3956C25B, 0x59F111F1, 0x923F82A4, 0xAB1C5ED5, 0xD807AA98, 0x12835B01,
0x243185BE, 0x550C7DC3, 0x72BE5D74, 0x80DEB1FE, 0x9BDC06A7, 0xC19BF174, 0xE49B69C1, 0xEFBE4786, 0xFC19DC6, 0x240CA1CC, 0x2DE92C6F,
0x4A7484AA, 0x5CB0A9DC, 0x76F988DA, 0x983E5152, 0xA831C66D, 0xB00327C8, 0xBF597FC7, 0xC6E00BF3, 0xD5A79147, 0x6CA6351, 0x14292967,
0x27B70A85, 0x2E1B2138, 0x4D2C6DFC, 0x53380D13, 0x650A7354, 0x766A0ABB, 0x81C2C92E, 0x92722C85, 0xA2BFE8A1, 0xA81A664B, 0xC24B8B70,
0xC76C51A3, 0xD192E819, 0xD6990624, 0xF40E3585, 0x106AA070, 0x19A4C116, 0x1E376C08, 0x2748774C, 0x34B0BCB5, 0x391C0CB3, 0x4ED8AA4A,
0x5B9CCA4F, 0x682E6FF3, 0x748F82EE, 0x78A5636F, 0x84C87814, 0x8CC70208, 0x90BEFFFA, 0xA4506CEB, 0xBEF9A3F7, 0xC67178F2);

var HASH = new Array(0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A, 0x510E527F, 0x9B05688C, 0x1F83D9AB, 0x5BE0CD19);

var W = new Array(64);

var a, b, c, d, e, f, g, h, i, j;

var T1, T2;

m[l >> 5] |= 0x80 << (24 - l % 32);

m[((l + 64 >> 9) << 4) + 15] = l;

for (var i = 0; i < m.length; i += 16) {

a = HASH[0];

b = HASH[1];

c = HASH[2];

d = HASH[3];

e = HASH[4];

f = HASH[5];

g = HASH[6];

h = HASH[7];

for (var j = 0; j < 64; j++) {

if (j < 16) W[j] = m[j + i];

else W[j] = safe_add(safe_add(safe_add(Gamma1256(W[j - 2]), W[j - 7]), Gamma0256(W[j - 15])), W[j - 16]);

T1 = safe_add(safe_add(safe_add(safe_add(h, Sigma1256(e)), Ch(e, f, g)), K[j]), W[j]);

T2 = safe_add(Sigma0256(a), Maj(a, b, c));

h = g;

g = f;

f = e;

e = safe_add(d, T1);

d = c;

c = b;

b = a;

a = safe_add(T1, T2);

}

HASH[0] = safe_add(a, HASH[0]);

HASH[1] = safe_add(b, HASH[1]);

HASH[2] = safe_add(c, HASH[2]);

HASH[3] = safe_add(d, HASH[3]);

HASH[4] = safe_add(e, HASH[4]);

HASH[5] = safe_add(f, HASH[5]);

HASH[6] = safe_add(g, HASH[6]);

HASH[7] = safe_add(h, HASH[7]);

}

return HASH;

}

function str2binb(str) {

var bin = Array();

var mask = (1 << chrsz) - 1;

for (var i = 0; i < str.length * chrsz; i += chrsz) {

bin[i >> 5] |= (str.charCodeAt(i / chrsz) & mask) << (24 - i % 32);

}

return bin;

}

function Utf8Encode(string) {

string = string.replace(/\r\n/g, "\n");

var utftext = "";

for (var n = 0; n < string.length; n++) {

var c = string.charCodeAt(n);

if (c < 128) {

utftext += String.fromCharCode(c);

} else if ((c > 127) && (c < 2048)) {

utftext += String.fromCharCode((c >> 6) | 192);

utftext += String.fromCharCode((c & 63) | 128);

} else {

utftext += String.fromCharCode((c >> 12) | 224);

utftext += String.fromCharCode(((c >> 6) & 63) | 128);

utftext += String.fromCharCode((c & 63) | 128);

}

}

return utftext;

}

function binb2hex(binarray) {

var hex_tab = hexcase ? "0123456789ABCDEF" : "0123456789abcdef";

var str = "";

for (var i = 0; i < binarray.length * 4; i++) {

str += hex_tab.charAt((binarray[i >> 2] >> ((3 - i % 4) * 8 + 4)) & 0xF) +

hex_tab.charAt((binarray[i >> 2] >> ((3 - i % 4) * 8)) & 0xF);

}

return str;

}

s = Utf8Encode(s);

return binb2hex(core_sha256(str2binb(s), s.length * chrsz));

}

function GetPreviousBlock() {
try {
var conn = $.db.getConnection();

var query = 'select COUNT(*) from "HELLOBLOCK"."HelloBlock.DBARTIFACT::entity.BlockChain"';
var myStatement = conn.prepareStatement(query);
var rs = myStatement.executeQuery();

if (rs.next()) {
var lv_index = rs.getString(1);
var index_int = parseInt(lv_index);

if (index_int < 1) {
var block = [];

block.push('1');
block.push('0000000X0');
} else {
var connhash = $.db.getConnection();

var query = 'select "current_hash" from \"HELLOBLOCK\".\"HelloBlock.DBARTIFACT::entity.BlockChain\" where "index" =?';
var pstmt = connhash.prepareStatement(query);
pstmt.setInteger(1, index_int);
var rec = pstmt.executeQuery();
if (rec.next()) {
var lv_prevhash = rec.getNClob(1);
var block = [];

block.push(lv_index);
block.push(lv_prevhash);
}
}

return (block);

} else {
$.response.status = $.net.http.INTERNAL_SERVER_ERROR;
$.response.setBody('Error in fetching Block Count');

}

} catch (e) {
var an = e.message;
}
}

function AddBlock(pdata, bindex, pbhash) {
    var data;
var passed_index = parseInt(bindex);
var passed_pbhash = pbhash.toString();
var prevblock = GetPreviousBlock();
var prev_index = prevblock[0];
prev_index = parseInt(prev_index);
var index = prev_index + 1;
if (prev_index > 1){
   data = pdata;  
}else {
   data = 'Genesis Block';
}
var prevHash = prevblock[1];
var timest = CurrentTImeStamp();
var actual_hash = CalculateHash(index.toString() + data.toString() + timest.toString() + prevHash.toString());
actual_hash = actual_hash.toUpperCase();
//Calling Procedure to add new Block
if (index !== passed_index) {
$.response.status = $.net.http.OK;
$.response.setBody("index is not valid");
} else if (pbhash !== prevHash) {
$.response.status = $.net.http.OK;
$.response.setBody("Previous hash is not valid");
} else {
var conn = $.db.getConnection();
var query = 'call \"HELLOBLOCK"."HelloBlock.Procedure::InsertBlock\"(?,?,?,?,?)';
var myStatement = conn.prepareCall(query);
myStatement.setInteger(1, index);
myStatement.setNClob(2, data);
myStatement.setString(3, timest);
myStatement.setNClob(4, prevHash);
myStatement.setNClob(5, actual_hash);
var rs = myStatement.execute();
conn.commit();
//Calling Procedure to add new Block

// Displaying Whole Block Chain
DisplayBlockChain();
// Displaying Whole Block Chain
}

}

function ValidCall() {
if (typeof BlockData === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter data");

} else if (typeof BlockIndex === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter index");
} else if (typeof PrevBlockHash === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter prevhash");
} else {

AddBlock(BlockData, BlockIndex, PrevBlockHash);
}
}
switch (acmd) {
case "addblock":
ValidCall()
break;
case "chaindisp":
DisplayBlockChain();
break;
default:
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter: " + acmd);
}

Let’s understand each function in more detail.

function CalculateHash(s): This function takes plain text as input and returns its SHA-256 hash as output. When calling this function you pass a text/string as the argument and in return it gives you the SHA-256 form of it.

function ValidCall(): This function basically checks all the input parameters for this XSJS service; if you don’t pass all parameters it will not allow you to go further. As you can see, to test this service you have to pass the action, data, index and prevhash parameters; without all of these it will not call the function AddBlock(), which is mainly responsible for the further validation and for adding the block to the blockchain. I will come back to this function later for a deeper dive.

function ValidCall() {
if (typeof BlockData === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter data");

} else if (typeof BlockIndex === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter index");
} else if (typeof PrevBlockHash === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter prevhash");
} else {

AddBlock(BlockData, BlockIndex, PrevBlockHash);
}
}

function CurrentTImeStamp(): This function returns the current timestamp.

function CurrentTImeStamp() {
var date = new Date();

var utcout = date.getFullYear() + '-' +

('0' + (date.getMonth() + 1)).slice(-2) + '-' +

('0' + date.getDate()).slice(-2) + 'T' +

('0' + date.getHours()).slice(-2) + ':' +

('0' + date.getMinutes()).slice(-2) + ':' +

('0' + date.getSeconds()).slice(-2);

return (utcout);
}

function DisplayBlockChain(): This function returns all blocks/records in JSON format by selecting the data from the blockchain table.

function DisplayBlockChain() {
try {
var conn = $.db.getConnection();
var output = {
results: []
};
var query = 'select * from "HELLOBLOCK"."HelloBlock.DBARTIFACT::entity.BlockChain"';
var myStatement = conn.prepareStatement(query);
var rs = myStatement.executeQuery();
while (rs.next()) {

var record = {};
record.Index = rs.getInteger(1);
record.Data = rs.getNClob(2);
record.CommitedTime = rs.getString(3);
record.PrevBlockHash = rs.getNClob(4);
record.CurrentHash = rs.getNClob(5);
output.results.push(record);

}
rs.close();
myStatement.close();
conn.close();

} catch (e) {
$.response.status = $.net.http.INTERNAL_SERVER_ERROR;
$.response.setBody(e.message);
return;
}

var body = JSON.stringify(output);
$.response.contentType = 'application/json';
$.response.setBody(body);
$.response.status = $.net.http.OK;

}

function GetPreviousBlock(): It returns the index and hash of the previous (latest) block; in case the chain is still empty it returns a dummy genesis reference.

function GetPreviousBlock() {
try {
var conn = $.db.getConnection();

var query = 'select COUNT(*) from "HELLOBLOCK"."HelloBlock.DBARTIFACT::entity.BlockChain"';
var myStatement = conn.prepareStatement(query);
var rs = myStatement.executeQuery();

if (rs.next()) {
var lv_index = rs.getString(1);
var index_int = parseInt(lv_index);

if (index_int < 1) {
var block = [];

block.push('1');
block.push('0000000X0');
} else {
var connhash = $.db.getConnection();

var query = 'select "current_hash" from \"HELLOBLOCK\".\"HelloBlock.DBARTIFACT::entity.BlockChain\" where "index" =?';
var pstmt = connhash.prepareStatement(query);
pstmt.setInteger(1, index_int);
var rec = pstmt.executeQuery();
if (rec.next()) {
var lv_prevhash = rec.getNClob(1);
var block = [];

block.push(lv_index);
block.push(lv_prevhash);
}
}

return (block);

} else {
$.response.status = $.net.http.INTERNAL_SERVER_ERROR;
$.response.setBody('Error in fetching Block Count');

}

} catch (e) {
var an = e.message;
}
}

function AddBlock(): In this function we have defined our main logic and validation. It checks whether the passed index and previous block hash are correct; based on that it calculates the hash of the current block, calls the procedure to insert the data into the table, and calls DisplayBlockChain() to display the whole chain in JSON format. If anything goes wrong it will return an error.

function AddBlock(pdata, bindex, pbhash) {
    var data;
var passed_index = parseInt(bindex);
var passed_pbhash = pbhash.toString();
var prevblock = GetPreviousBlock();
var prev_index = prevblock[0];
prev_index = parseInt(prev_index);
var index = prev_index + 1;
if (prev_index > 1){
   data = pdata;  
}else {
   data = 'Genesis Block';
}
var prevHash = prevblock[1];
var timest = CurrentTImeStamp();
var actual_hash = CalculateHash(index.toString() + data.toString() + timest.toString() + prevHash.toString());
actual_hash = actual_hash.toUpperCase();
//Calling Procedure to add new Block
if (index !== passed_index) {
$.response.status = $.net.http.OK;
$.response.setBody("index is not valid");
} else if (pbhash !== prevHash) {
$.response.status = $.net.http.OK;
$.response.setBody("Previous hash is not valid");
} else {
var conn = $.db.getConnection();
var query = 'call \"HELLOBLOCK"."HelloBlock.Procedure::InsertBlock\"(?,?,?,?,?)';
var myStatement = conn.prepareCall(query);
myStatement.setInteger(1, index);
myStatement.setNClob(2, data);
myStatement.setString(3, timest);
myStatement.setNClob(4, prevHash);
myStatement.setNClob(5, actual_hash);
var rs = myStatement.execute();
conn.commit();
//Calling Procedure to add new Block

// Displaying Whole Block Chain
DisplayBlockChain();
// Displaying Whole Block Chain
}

}

Now let’s go on to testing. First I will display the whole blockchain, then I will try to add a block to the blockchain, and also try to manipulate a block. We will go with classic Postman.

First we will display the entire chain by passing the parameter action=chaindisp.




So the last block is 5; the new block’s index should be 6.

Let’s try to push one block into our blockchain by passing the parameters action=addblock, index=6, data=nitin->jim and prevhash = the current hash of block 5.


After hitting the Send button, we can see that Block 6 has just been added.


Now you get the idea that the previous block hash plays a major role: if you try to manipulate the data of one block, the hash for that block changes and all subsequent blocks in the chain become invalid. At the same time, if you have understood it correctly, many of you will already have the question: if anyone can pass the previous block hash and the correct index, then he or she will be able to add blocks to the blockchain and spam it endlessly. You are right, this blockchain is not secure yet and spamming is possible.
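To see why tampering breaks the chain, consider how a validation routine would work. This blog’s XSJS service doesn’t implement one, but a conceptual Python sketch would walk the chain and re-compute every hash:

# Conceptual Python sketch of chain validation -- not part of the XSJS service
# above; the field names simply mirror the BlockChain table.
import hashlib

def calculate_hash(index, data, commit_at, previous_hash):
    payload = "{}{}{}{}".format(index, data, commit_at, previous_hash)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest().upper()

def is_chain_valid(blocks):
    """blocks: list of dicts ordered by index."""
    for prev, curr in zip(blocks, blocks[1:]):
        # The stored previous_hash must match the previous block's actual hash ...
        if curr["previous_hash"] != prev["current_hash"]:
            return False
        # ... and the stored hash must match a re-computation over the block's own fields.
        recomputed = calculate_hash(curr["index"], curr["data"],
                                    curr["commit_at"], curr["previous_hash"])
        if curr["current_hash"] != recomputed:
            return False
    return True

Change the data of any block and both checks fail for every block that follows it.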

Let’s secure it, so that no one can spam.


As a blockchain works on a peer-to-peer network, it is based on the rule of choosing the longest chain.

Choosing the longest chain – there should always be only one explicit set of blocks in the chain at a given time. In case of conflicts (e.g. two nodes both generate block number 72) we choose the chain that has the largest number of blocks.


The problem we have now


Right now creating a block is quick and easy, and this leads to three different problems.

◈ First: people can add blocks incredibly fast and spam our blockchain. A flood of blocks would overload our blockchain and make it unusable.
◈ Secondly, because it is so easy to add a valid block, people can tamper with one block and simply re-calculate the hashes of all the following blocks so that the entire chain becomes valid again.
◈ And thirdly, as a blockchain works in a P2P network, you can combine these two problems to take control over the whole blockchain. Blockchains are powered by a peer-to-peer network in which the nodes add blocks to the longest chain available. So you can tamper with a block, recalculate all the other blocks and then add as many blocks as you want. You will then end up with the longest chain and all the peers will accept it and start adding their own blocks to it.

To solve all these problems, we introduce Proof-of-Work.

What is proof-of-work?


Proof-of-work is a mechanism that existed before the first blockchain was created. It’s a simple technique that prevents abuse by requiring a certain amount of computing work. That amount of work is key to prevent spam and tampering. Spamming is no longer worth it if it requires a lot of computing power.

Bitcoin implements proof-of-work by requiring that the hash of a block starts with a specific number of zeros. This is also called the difficulty. In our example the difficulty will increase based on the total number of blocks in the blockchain.

But hang on a minute! How can the hash of a block change? In case of Bitcoin a block contains details about a financial transaction. We sure don’t want to mess with that data just to get a correct hash!

To fix this problem, blockchains add a nonce value. This is a number that gets incremented until a good hash is found. And because you cannot predict the output of a hash function, you simply have to try a lot of combinations before you get a hash that satisfies the difficulty. Looking for a valid hash (to create a new block) is also called “mining” in the cryptoworld.

In the case of Bitcoin, the proof-of-work mechanism ensures that roughly only one block can be added every 10 minutes. You can imagine spammers having a hard time fooling the network if they need so much compute power just to create a new block, let alone tamper with the entire chain.
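As a conceptual Python sketch of the mining loop (the real XSJS version, MineBlock, appears below), the nonce is simply incremented until the hash has the required number of leading zeros:

# Conceptual proof-of-work sketch; the XSJS implementation (MineBlock) follows
# later in this post. The block fields here are made up.
import hashlib

def mine(index, data, commit_at, previous_hash, difficulty):
    """Increment the nonce until the hash has `difficulty` leading zeros."""
    nonce = 0
    while True:
        payload = "{}{}{}{}{}".format(index, data, nonce, commit_at, previous_hash)
        block_hash = hashlib.sha256(payload.encode("utf-8")).hexdigest()
        if block_hash.startswith("0" * difficulty):
            return nonce, block_hash.upper()
        nonce += 1

nonce, block_hash = mine(6, "nitin->jim", "2018-01-01T00:05:00", "ABC123", difficulty=3)
print(nonce, block_hash)   # typically a few thousand attempts for 3 leading hex zeros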

Implementing proof-of-work


Let’s look at our new block structure. I added one more field to the table: nonce.

 @Catalog.tableType: #COLUMN
   Entity BlockChainPOW{
       key index : Integer;
       data : LargeString;
       commit_at : UTCTimestamp;
       previous_hash : LargeString;
       current_hash: LargeString;
       nonce : Integer;
   };
   
Like last time, I have again created a procedure to add an entry to the blockchain table:

PROCEDURE "HELLOBLOCK"."HelloBlock.Procedure::InsertBlockPOW" (
    IN block_index Integer,
    IN block_data Nclob,
    IN block_commit TIMESTAMP,
    IN prevblock_hash Nclob,
    IN Currblock_hash Nclob,
    IN BlockNonce Integer
    )
   LANGUAGE SQLSCRIPT
   SQL SECURITY INVOKER AS
   --DEFAULT SCHEMA <default_schema_name>
--   READS SQL DATA AS
BEGIN
   /*************************************
       Write your procedure logic 
   *************************************/
   INSERT INTO "HELLOBLOCK"."HelloBlock.DBARTIFACT::entity.BlockChainPOW"("index","data","commit_at","previous_hash","current_hash","nonce") 
   VALUES (:block_index,:block_data,:block_commit,:prevblock_hash,:Currblock_hash,:BlockNonce);
END

We defined the difficulty so that the required number of leading zeros increases every 10 blocks: initially it is 2 (e.g. 00FRS4GH…), and as the block index grows the pattern requires an extra zero (e.g. 000FDGHRT…).

var difficulty = 2 + (index / 10);
difficulty = parseInt(difficulty);

This time I have introduced one new function, MineBlock. It takes the difficulty, index, data, timestamp and the initial hash of the block with nonce 0, calculates a good hash with a valid nonce, and returns both.

function MineBlock(pdifficulty, pindex, pdata, ptimest, prevhash, phash) {
var actdiff = parseInt(pdifficulty);
var actind = parseInt(pindex);
var genhash = phash.toString();
var actnonce = 0;

while (genhash.substring(0, actdiff) !== Array(actdiff + 1).join("0")) {
actnonce++;
genhash = CalculateHash(pindex.toString() + JSON.stringify(pdata.toString() + actnonce.toString()).toString() + ptimest.toString() +
prevhash.toString());
}
var mblock = [];
mblock.push(actnonce);
mblock.push(genhash);
return mblock;
}

There are small modifications in the functions AddBlock() and ValidCall().

First we look at ValidCall(), then we will move on to AddBlock().

This time I have introduced two more parameters: nonce and mine. Mine has two modes, auto and manual: if you just want to test the service and don’t want to pass a nonce you can choose auto, and if you want to pass a nonce then the mine parameter should be manual. ValidCall() basically validates your input parameters and, given proper input, it will call AddBlock().

function ValidCall() {
//jshint maxdepth:5;

/*eslint max-depth: [22, 22]*/
if (typeof BlockData === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter data");

} else if (typeof BlockIndex === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter index");
} else if (typeof PrevBlockHash === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter prevhash");
} else if (typeof MiningOP === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter mine , accepted value auto or manual");
} else if (typeof MiningOP !== 'undefined') {
if (MiningOP !== 'manual'&& MiningOP !== 'auto') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter mine , accepted value auto or manual");
} else if (MiningOP === 'manual') {
    if (typeof Nonce === 'undefined'){
        $.response.status = $.net.http.OK;
     $.response.setBody("In manual mining you have to pass parameter nonce,Better you go with auto because generating nonce is not everyone's cup of tea ");
       } else {
           AddBlock(BlockData, BlockIndex, PrevBlockHash, Nonce);
       }
} else {
    AddBlock(BlockData, BlockIndex, PrevBlockHash, 'auto');
}
}  

}

Let’s look at AddBlock().

function AddBlock(data, bindex, pbhash, psnonce) {
// var anonce = parseInt(pnonce);

var passed_index = parseInt(bindex);
var passed_pbhash = pbhash.toString();
var prevblock = GetPreviousBlock();
var prev_index = prevblock[0];
prev_index = parseInt(prev_index);
var index = prev_index + 1;
var difficulty = 2 + (index / 10);
difficulty = parseInt(difficulty);
var data = data;
var prevHash = prevblock[1];
var timest = CurrentTImeStamp();
var intnonce = 0;

function bmvalidation() {
//Calling Procedure to add new Block
if (index !== passed_index) {
$.response.status = $.net.http.OK;
$.response.setBody("index is not valid");
} else if (pbhash !== prevHash) {
$.response.status = $.net.http.OK;
$.response.setBody("Previous hash is not valid");
} else {
if (psnonce === 'auto') {
var initial_hash = CalculateHash(index.toString() + JSON.stringify(data.toString() + intnonce.toString()).toString() + timest.toString() +
prevHash.toString());
var minedblock = MineBlock(difficulty, index, data, timest, prevHash, initial_hash);
var minenonce = minedblock[0];
minenonce = parseInt(minenonce);
var minehash = minedblock[1];
minehash = minehash.toUpperCase();
var conn = $.db.getConnection();
var query = 'call \"HELLOBLOCK"."HelloBlock.Procedure::InsertBlockPOW\"(?,?,?,?,?,?)';
var myStatement = conn.prepareCall(query);
myStatement.setInteger(1, index);
myStatement.setNClob(2, data);
myStatement.setString(3, timest);
myStatement.setNClob(4, prevHash);
myStatement.setNClob(5, minehash);
myStatement.setInteger(6, minenonce);
var rs = myStatement.execute();
conn.commit();
//Calling Procedure to add new Block

// Displaying Whole Block Chain
DisplayBlockChain();
// Displaying Whole Block Chain
} else {
var initial_hash = CalculateHash(index.toString() + JSON.stringify(data.toString() + intnonce.toString()).toString() + timest.toString() +
prevHash.toString());
var minedblock = MineBlock(difficulty, index, data, timest, prevHash, initial_hash);
var minenonce = minedblock[0];
minenonce = parseInt(minenonce);
var minernonce = parseInt(psnonce);
var minehash = minedblock[1];

if (minenonce !== minernonce) {
$.response.status = $.net.http.OK;
$.response.setBody("Try with different nonce, your computed nonce is not enough to add block in Blockchain");
} else {
var conn = $.db.getConnection();
var query = 'call \"HELLOBLOCK"."HelloBlock.Procedure::InsertBlockPOW\"(?,?,?,?,?,?)';
var myStatement = conn.prepareCall(query);
myStatement.setInteger(1, index);
myStatement.setNClob(2, data);
myStatement.setString(3, timest);
myStatement.setNClob(4, prevHash);
myStatement.setNClob(5, minehash);
myStatement.setInteger(6, minenonce);
var rs = myStatement.execute();
conn.commit();
//Calling Procedure to add new Block

// Displaying Whole Block Chain
DisplayBlockChain();
// Displaying Whole Block Chain
}
}

}
}

bmvalidation();

}

This time AddBlock() takes one extra parameter, the nonce. If you pass the mine parameter as auto in the XSJS service call, the value ‘auto’ is passed as the nonce. In the auto case it first initializes the nonce to 0, determines the difficulty based on the formula we defined and calls MineBlock(), which returns a valid hash and nonce, and then adds the block – as simple as that. Please note that in a real blockchain you can’t send a mine parameter as auto; I added it here just for testing, because finding a nonce by hand is difficult. In a real blockchain, to mine a block you have to find the nonce by using your own computation power. If you send the mine parameter as manual then you have to pass the nonce parameter, which is checked against the valid nonce; only if it matches will you be allowed to add a block to the blockchain. Now you can see how secure this is, because finding the nonce at a high difficulty level is an extremely hard task. Let’s put everything together.

var acmd = $.request.parameters.get("action");
var BlockData = $.request.parameters.get("data");
var BlockIndex = $.request.parameters.get("index");
var PrevBlockHash = $.request.parameters.get("prevhash");
var Nonce = $.request.parameters.get("nonce");
var MiningOP = $.request.parameters.get("mine");

function DisplayBlockChain() {
try {
var conn = $.db.getConnection();
var output = {
results: []
};
var query = 'select * from "HELLOBLOCK"."HelloBlock.DBARTIFACT::entity.BlockChainPOW"';
var myStatement = conn.prepareStatement(query);
var rs = myStatement.executeQuery();
while (rs.next()) {

var record = {};
record.Index = rs.getInteger(1);
record.Data = rs.getNClob(2);
record.CommitedTime = rs.getString(3);
record.PrevBlockHash = rs.getNClob(4);
record.CurrentHash = rs.getNClob(5);
record.Nonce = rs.getInteger(6);
output.results.push(record);

}
rs.close();
myStatement.close();
conn.close();

} catch (e) {
$.response.status = $.net.http.INTERNAL_SERVER_ERROR;
$.response.setBody(e.message);
return;
}

var body = JSON.stringify(output);
$.response.contentType = 'application/json';
$.response.setBody(body);
$.response.status = $.net.http.OK;

}

function CurrentTImeStamp() {
var date = new Date();

var utcout = date.getFullYear() + '-' +

('0' + (date.getMonth() + 1)).slice(-2) + '-' +

('0' + date.getDate()).slice(-2) + 'T' +

('0' + date.getHours()).slice(-2) + ':' +

('0' + date.getMinutes()).slice(-2) + ':' +

('0' + date.getSeconds()).slice(-2);

return (utcout);
}

function CalculateHash(s) {

var chrsz = 8;

var hexcase = 0;

function safe_add(x, y) {

var lsw = (x & 0xFFFF) + (y & 0xFFFF);

var msw = (x >> 16) + (y >> 16) + (lsw >> 16);

return (msw << 16) | (lsw & 0xFFFF);

}

function S(X, n) {
return (X >>> n) | (X << (32 - n));
}

function R(X, n) {
return (X >>> n);
}

function Ch(x, y, z) {
return ((x & y) ^ ((~x) & z));
}

function Maj(x, y, z) {
return ((x & y) ^ (x & z) ^ (y & z));
}

function Sigma0256(x) {
return (S(x, 2) ^ S(x, 13) ^ S(x, 22));
}

function Sigma1256(x) {
return (S(x, 6) ^ S(x, 11) ^ S(x, 25));
}

function Gamma0256(x) {
return (S(x, 7) ^ S(x, 18) ^ R(x, 3));
}

function Gamma1256(x) {
return (S(x, 17) ^ S(x, 19) ^ R(x, 10));
}

function core_sha256(m, l) {

var K = new Array(0x428A2F98, 0x71374491, 0xB5C0FBCF, 0xE9B5DBA5, 0x3956C25B, 0x59F111F1, 0x923F82A4, 0xAB1C5ED5, 0xD807AA98, 0x12835B01,
0x243185BE, 0x550C7DC3, 0x72BE5D74, 0x80DEB1FE, 0x9BDC06A7, 0xC19BF174, 0xE49B69C1, 0xEFBE4786, 0xFC19DC6, 0x240CA1CC, 0x2DE92C6F,
0x4A7484AA, 0x5CB0A9DC, 0x76F988DA, 0x983E5152, 0xA831C66D, 0xB00327C8, 0xBF597FC7, 0xC6E00BF3, 0xD5A79147, 0x6CA6351, 0x14292967,
0x27B70A85, 0x2E1B2138, 0x4D2C6DFC, 0x53380D13, 0x650A7354, 0x766A0ABB, 0x81C2C92E, 0x92722C85, 0xA2BFE8A1, 0xA81A664B, 0xC24B8B70,
0xC76C51A3, 0xD192E819, 0xD6990624, 0xF40E3585, 0x106AA070, 0x19A4C116, 0x1E376C08, 0x2748774C, 0x34B0BCB5, 0x391C0CB3, 0x4ED8AA4A,
0x5B9CCA4F, 0x682E6FF3, 0x748F82EE, 0x78A5636F, 0x84C87814, 0x8CC70208, 0x90BEFFFA, 0xA4506CEB, 0xBEF9A3F7, 0xC67178F2);

var HASH = new Array(0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A, 0x510E527F, 0x9B05688C, 0x1F83D9AB, 0x5BE0CD19);

var W = new Array(64);

var a, b, c, d, e, f, g, h, i, j;

var T1, T2;

m[l >> 5] |= 0x80 << (24 - l % 32);

m[((l + 64 >> 9) << 4) + 15] = l;

for (var i = 0; i < m.length; i += 16) {

a = HASH[0];

b = HASH[1];

c = HASH[2];

d = HASH[3];

e = HASH[4];

f = HASH[5];

g = HASH[6];

h = HASH[7];

for (var j = 0; j < 64; j++) {

if (j < 16) W[j] = m[j + i];

else W[j] = safe_add(safe_add(safe_add(Gamma1256(W[j - 2]), W[j - 7]), Gamma0256(W[j - 15])), W[j - 16]);

T1 = safe_add(safe_add(safe_add(safe_add(h, Sigma1256(e)), Ch(e, f, g)), K[j]), W[j]);

T2 = safe_add(Sigma0256(a), Maj(a, b, c));

h = g;

g = f;

f = e;

e = safe_add(d, T1);

d = c;

c = b;

b = a;

a = safe_add(T1, T2);

}

HASH[0] = safe_add(a, HASH[0]);

HASH[1] = safe_add(b, HASH[1]);

HASH[2] = safe_add(c, HASH[2]);

HASH[3] = safe_add(d, HASH[3]);

HASH[4] = safe_add(e, HASH[4]);

HASH[5] = safe_add(f, HASH[5]);

HASH[6] = safe_add(g, HASH[6]);

HASH[7] = safe_add(h, HASH[7]);

}

return HASH;

}

function str2binb(str) {

var bin = Array();

var mask = (1 << chrsz) - 1;

for (var i = 0; i < str.length * chrsz; i += chrsz) {

bin[i >> 5] |= (str.charCodeAt(i / chrsz) & mask) << (24 - i % 32);

}

return bin;

}

function Utf8Encode(string) {

string = string.replace(/\r\n/g, "\n");

var utftext = "";

for (var n = 0; n < string.length; n++) {

var c = string.charCodeAt(n);

if (c < 128) {

utftext += String.fromCharCode(c);

} else if ((c > 127) && (c < 2048)) {

utftext += String.fromCharCode((c >> 6) | 192);

utftext += String.fromCharCode((c & 63) | 128);

} else {

utftext += String.fromCharCode((c >> 12) | 224);

utftext += String.fromCharCode(((c >> 6) & 63) | 128);

utftext += String.fromCharCode((c & 63) | 128);

}

}

return utftext;

}

function binb2hex(binarray) {

var hex_tab = hexcase ? "0123456789ABCDEF" : "0123456789abcdef";

var str = "";

for (var i = 0; i < binarray.length * 4; i++) {

str += hex_tab.charAt((binarray[i >> 2] >> ((3 - i % 4) * 8 + 4)) & 0xF) +

hex_tab.charAt((binarray[i >> 2] >> ((3 - i % 4) * 8)) & 0xF);

}

return str;

}

s = Utf8Encode(s);

return binb2hex(core_sha256(str2binb(s), s.length * chrsz));

}

function GetPreviousBlock() {
try {
var conn = $.db.getConnection();

var query = 'select COUNT(*) from "HELLOBLOCK"."HelloBlock.DBARTIFACT::entity.BlockChainPOW"';
var myStatement = conn.prepareStatement(query);
var rs = myStatement.executeQuery();

if (rs.next()) {
var lv_index = rs.getString(1);
var index_int = parseInt(lv_index);

if (index_int < 1) {
var block = [];

block.push('0');
block.push('0000000X0');
} else {
var connhash = $.db.getConnection();

var query = 'select "current_hash" from \"HELLOBLOCK\".\"HelloBlock.DBARTIFACT::entity.BlockChainPOW\" where "index" =?';
var pstmt = connhash.prepareStatement(query);
pstmt.setInteger(1, index_int);
var rec = pstmt.executeQuery();
if (rec.next()) {
var lv_prevhash = rec.getNClob(1);
var block = [];

block.push(lv_index);
block.push(lv_prevhash);
}
}

return (block);

} else {
$.response.status = $.net.http.INTERNAL_SERVER_ERROR;
$.response.setBody('Error in fetching Block Count');

}

} catch (e) {
var an = e.message;
}
}

function MineBlock(pdifficulty, pindex, pdata, ptimest, prevhash, phash) {
var actdiff = parseInt(pdifficulty);
var actind = parseInt(pindex);
var genhash = phash.toString();
var actnonce = 0;

while (genhash.substring(0, actdiff) !== Array(actdiff + 1).join("0")) {
actnonce++;
genhash = CalculateHash(pindex.toString() + JSON.stringify(pdata.toString() + actnonce.toString()).toString() + ptimest.toString() +
prevhash.toString());
}
var mblock = [];
mblock.push(actnonce);
mblock.push(genhash);
return mblock;
}

function AddBlock(data, bindex, pbhash, psnonce) {
// var anonce = parseInt(pnonce);

var passed_index = parseInt(bindex);
var passed_pbhash = pbhash.toString();
var prevblock = GetPreviousBlock();
var prev_index = prevblock[0];
prev_index = parseInt(prev_index);
var index = prev_index + 1;
var difficulty = 2 + (index / 10);
difficulty = parseInt(difficulty);
var data = data;
var prevHash = prevblock[1];
var timest = CurrentTImeStamp();
var intnonce = 0;

function bmvalidation() {
//Calling Procedure to add new Block
if (index !== passed_index) {
$.response.status = $.net.http.OK;
$.response.setBody("index is not valid");
} else if (pbhash !== prevHash) {
$.response.status = $.net.http.OK;
$.response.setBody("Previous hash is not valid");
} else {
if (psnonce === 'auto') {
var initial_hash = CalculateHash(index.toString() + JSON.stringify(data.toString() + intnonce.toString()).toString() + timest.toString() +
prevHash.toString());
var minedblock = MineBlock(difficulty, index, data, timest, prevHash, initial_hash);
var minenonce = minedblock[0];
minenonce = parseInt(minenonce);
var minehash = minedblock[1];
minehash = minehash.toUpperCase();
var conn = $.db.getConnection();
var query = 'call \"HELLOBLOCK"."HelloBlock.Procedure::InsertBlockPOW\"(?,?,?,?,?,?)';
var myStatement = conn.prepareCall(query);
myStatement.setInteger(1, index);
myStatement.setNClob(2, data);
myStatement.setString(3, timest);
myStatement.setNClob(4, prevHash);
myStatement.setNClob(5, minehash);
myStatement.setInteger(6, minenonce);
var rs = myStatement.execute();
conn.commit();
//Calling Procedure to add new Block

// Displaying Whole Block Chain
DisplayBlockChain();
// Displaying Whole Block Chain
} else {
var initial_hash = CalculateHash(index.toString() + JSON.stringify(data.toString() + intnonce.toString()).toString() + timest.toString() +
prevHash.toString());
var minedblock = MineBlock(difficulty, index, data, timest, prevHash, initial_hash);
var minenonce = minedblock[0];
minenonce = parseInt(minenonce);
var minernonce = parseInt(psnonce);
var minehash = minedblock[1];

if (minenonce !== minernonce) {
$.response.status = $.net.http.OK;
$.response.setBody("Try with different nonce, your computed nonce is not enough to add block in Blockchain");
} else {
var conn = $.db.getConnection();
var query = 'call \"HELLOBLOCK"."HelloBlock.Procedure::InsertBlockPOW\"(?,?,?,?,?,?)';
var myStatement = conn.prepareCall(query);
myStatement.setInteger(1, index);
myStatement.setNClob(2, data);
myStatement.setString(3, timest);
myStatement.setNClob(4, prevHash);
myStatement.setNClob(5, minehash);
myStatement.setInteger(6, minenonce);
var rs = myStatement.execute();
conn.commit();
//Calling Procedure to add new Block

// Displaying Whole Block Chain
DisplayBlockChain();
// Displaying Whole Block Chain
}
}

}
}

bmvalidation();

}

function ValidCall() {
//jshint maxdepth:5;

/*eslint max-depth: [22, 22]*/
if (typeof BlockData === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter data");

} else if (typeof BlockIndex === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter index");
} else if (typeof PrevBlockHash === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter prevhash");
} else if (typeof MiningOP === 'undefined') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter mine , accepted value auto or manual");
} else if (typeof MiningOP !== 'undefined') {
if (MiningOP !== 'manual'&& MiningOP !== 'auto') {
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter mine , accepted value auto or manual");
} else if (MiningOP === 'manual') {
    if (typeof Nonce === 'undefined'){
        $.response.status = $.net.http.OK;
     $.response.setBody("In manual mining you have to pass parameter nonce,Better you go with auto because generating nonce is not everyone's cup of tea ");
       } else {
           AddBlock(BlockData, BlockIndex, PrevBlockHash, Nonce);
       }
} else {
    AddBlock(BlockData, BlockIndex, PrevBlockHash, 'auto');
}
}  

}

switch (acmd) {
case "addblock":
// AddBlock(BlockData, BlockIndex, PrevBlockHash, 'auto');    
ValidCall();
break;
case "chaindisp":
DisplayBlockChain();
break;
default:
$.response.status = $.net.http.OK;
$.response.setBody("Pass Parameter: " + acmd);
}

Let's test it with the classic Postman client.
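For example, assuming the service is exposed as an .xsjs file and that the request parameters are named cmd, data, index, prevhash, mine and nonce (the parameter names are suggested by the validation messages above, and the package path below is purely illustrative – adjust both to your own project), a manual-mining request in Postman could look like this:

GET https://<hana-host>:<xs-port>/HelloBlock/Services/HelloBlock.xsjs?cmd=addblock&data=Hello&index=2&prevhash=<hash-of-previous-block>&mine=manual&nonce=385

For automatic mining, pass mine=auto and omit the nonce parameter; to display the whole chain, pass cmd=chaindisp.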

SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications, SAP HANA Live
SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications, SAP HANA Live

Now you can see the pattern of the hash, and the nonce it returned. Let's try to add a block using a random nonce.

SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications, SAP HANA Live

SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications, SAP HANA Live

Look at the response. Now we will add the block automatically to see the valid hash and nonce for this block.

SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications, SAP HANA Live

SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications, SAP HANA Live

The block got added with the valid nonce 385. To keep this example simple I defined an easy difficulty, but in a real use case the difficulty would be far higher and mining correspondingly harder.
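How the hash has to look is defined by the MineBlock and CalculateHash functions shown earlier in this project, so the snippet below is only an illustration. Assuming, hypothetically, that the difficulty is interpreted as the number of leading zeros required in the block hash, the proof-of-work check boils down to something like this:

// Hypothetical sketch only – the actual pattern is defined by the MineBlock implementation in this project
function satisfiesDifficulty(hash, difficulty) {
    // e.g. difficulty 3 requires the hash to start with "000"
    return hash.substring(0, difficulty) === new Array(difficulty + 1).join("0");
}

The higher the difficulty, the more candidate nonces have to be hashed before a matching hash is found, which is what makes mining expensive in a real blockchain.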

The whole project (HelloBlock) is available on GitHub. You can also import it as a delivery unit; in that case you have to assign the role to your user to execute it. I hope you like this blog; new ideas and suggestions are always welcome. In the next blog we will see how to build an end-to-end blockchain application for a supply chain use case using SAP Cloud Platform.

The new ArcGIS Enterprise Geodatabase for HANA – First Impressions

In this blog, I’ll look at what it takes to create an enterprise geodatabase for HANA, enable it and copy some feature classes from another enterprise geodatabase into the HANA one.  I’ll discuss the creation and loading of utility models in another post.

As many of you know, the ArcGIS platform has been able to access tables in HANA using query layers since ArcGIS Server and Desktop – 10.3 and Pro 1.2 were released in 2014.  This enabled spatial data in HANA to be consumed and updated by the ArcGIS platform.  As of ArcGIS Server 10.3.1, feature services against HANA were supported.  This is commonly known as an agile spatial datamart or “sidecar” scenario.

The second scenario is creating an enterprise geodatabase in HANA.  Esri is referring to this scenario as Esri ArcGIS on HANA.

The first release of the enterprise geodatabase for HANA will support the following:

◉ subtypes/domains
◉ relationship classes
◉ attachments
◉ editor tracking
◉ non-versioned archiving
◉ offline editing with sync capabilities
◉ new service based long transaction model for editing
◉ utility network
Although this release supports the new services-based utility network, support for geodatabase topology and editable network datasets is planned for future releases of ArcGIS Pro. Examples include the Parcel Fabric.

Assuming you have installed or have access to ArcGIS Enterprise 10.6 and Pro 2.1, the steps for creating an enterprise geodatabase are:

1. Install and configure the SAP HANA 64 bit ODBC drivers (part of the SAP HANA Client) on both the Enterprise and Pro instances.  Use the same connection name on the Enterprise instance.  Record the name of your ODBC connection for later (step 6)

2. Using the HANA Cockpit or HANA Studio, create an SDE user in your HANA instance.  The SDE user will need CATALOG READ permissions.  If you don’t have USER ADMIN privs, get the HANA DBA to create the SDE user for you.

3. Test your ODBC connection using the SDE user from both the Pro and Enterprise instances

4. Start ArcGIS Pro and create a new project

5. Right click the Databases group in the Catalog pane on the right and select New Database Connection

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

6. The following dialog will appear.  First, select SAP HANA as the Database Platform.  Then use the ODBC connection name you recorded earlier (when you later register the enterprise geodatabase with ArcGIS Enterprise, you’ll use the same ODBC data source name).  Specify the password for the SDE user.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

7. If you’ve correctly done the preceding steps, you should now have a connection to the SDE database in HANA.  In HANA, each user has its own schema.

8. Now, you’ve reached the cool part – actually enabling a HANA database to be an enterprise geodatabase.  Note that the enterprise geodatabase must be owned by the SDE user.  You’ll need a keycode file from your Enterprise instance that authorizes you to enable HANA as an enterprise geodatabase.  It is typically located as shown below on your Enterprise instance

C:\Program Files (x86)\ESRI\License10.6\sysgen

9. Copy the keycode file over to your Pro instance.  You’ll provide the keycode file path in the Authorization File field and then specify the database connection you created in Step 7 above.  Click Run and in about 5 seconds, you’ve successfully enabled a HANA enterprise geodatabase.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

10. Now, I’ll copy some feature classes from another enterprise geodatabase using the Import geoprocessing tool.  You’ll first need to create a database connection to that database like what was done in Steps 5 and 6.  You’ll specify the database connection to the other enterprise geodatabase, the feature class(es) you want to import and the target (the HANA enterprise geodatabase or a feature dataset inside it).  Click Run and in a few minutes, you’ve easily copied feature classes into your new HANA enterprise geodatabase.   Of course, the more feature classes you copy, the longer the Import will take…

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

So that’s it… assuming you’ve installed ArcGIS Enterprise 10.6 and Pro 2.1 and installed the ODBC 64 bit drivers, the rest of the steps I did above took about 10 minutes.  Once you’ve enabled the geodatabase (Step 9), you can easily copy additional feature classes or feature datasets (a dataset is simply a group of feature classes – you can create one with the Create Feature Dataset geoprocessing tool in less than a minute.)

I went on to create and share a webmap to ArcGIS Enterprise.  I navigated to Portal and simply right clicked and selected Add and Open as shown here:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

…and the following map was created in ArcGIS Pro.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

The above map uses map services served up by the ArcGIS Enterprise instance – which is obtaining the geometries from the HANA enterprise geodatabase.  Just to make sure, I logged into the ArcGIS Server Manager on the ArcGIS Enterprise instance and looked at the map service used by the above map:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

From here, there’s a lot you can do – you can author maps and utilize the advanced ArcGIS enterprise geodatabase features – and do all of that against an enterprise geodatabase in HANA…starting early next year.  All-in-all, a great reason for 2018 to hurry up and arrive!

Creating SAP Analytics Cloud Live Connection to SAP HANA Database on SAP Cloud Platform

SAP Analytics Cloud (SAC) has the capability to connect to various types of cloud and on-premise data sources via live and import connections.

In this blog post, we focus on connecting from an SAP Analytics Cloud tenant to a cloud data source that is an SAP HANA database on SAP Cloud Platform (SCP) via a live connection.

In general, there are 3 ways to create live connections from an SAP Analytics Cloud tenant to an SAP HANA database:

1. Direct – CORS
2. Path – Reverse Proxy
3. SAP Cloud Platform
For the SAP HANA database that is on SAP Cloud Platform, there are two authentication methods – (1) User Name & Password and (2) SAML Single Sign On. This blog post only shows the connection with the User Name & Password authentication method.

Note: The SAP HANA database and SAP Analytics Cloud tenant (SAC) do not need to be on the same SCP landscape.

Before we jump into the steps to create the live connection, you have to be an admin user for (1) an SAP HANA database instance deployed on an SAP Cloud Platform landscape, and (2) an SAP Analytics Cloud tenant. In the steps below, both are on US2 SCP landscape. Additionally, you need to have HANA Studio installed on your machine.

These are the parts of this guide to set up the live connection:

1. Log into SCP Cockpit – to get the HANA database info.
2. Configure HANA Studio – to connect to cloud system.
3. Connect HANA Studio to HANA Database.
4. Import Info Access service to HANA Database.
5. Import Info Access Toolkit and SINA API to HANA Database.
6. Configure HANA Database User’s Roles – to allow data access.
7. Create Connection from SAP Analytics Cloud Tenant.

Part 1 – Log into SAP Cloud Platform (SCP) Cockpit


The screenshot below shows my SCP account with US West SCP landscape. My SAC tenant is also deployed on the same SCP landscape. The SCP Cockpit contains information associated with your SCP account. Logging into this page is not the same as logging into the SAP HANA database.

You may look at the URL of your SCP Cockpit to know the SCP landscape your HANA database is deployed on.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

After selecting the subaccount, you can see the name under the “subaccount information” section.

This information is useful for later steps.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

After selecting “Database Systems”, you can see the SAP HANA database name that is deployed.

This information is useful for part 3 & part 6 of this guide.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Select the database system -> select the tenant in the “Tenant Databases” field -> select the tenant on the “Database ID” column.

You can see the overview of the database.

If you have not already created a user for the HANA database, you can select “Database User” to create a user. A database user is required for part 3 of this guide.

*Important: A database user is needed to log into the HANA database and to create the connection on the SAC tenant.

The “SAP HANA Web-based Development Workbench” link brings you to the login page of this HANA database instance.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

You can use the database user created to log into the HANA database.

This web interface allows you to perform workflows similar to those in HANA Studio. However, HANA Studio is required for the upcoming steps.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Part 2 – Configure HANA Studio


In your HANA Studio, open “Window” -> “Preferences”.

Expand “General”, then select “Network Connections”.

Select the following settings, then click “OK” to close the setting.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Open “Help” -> “Install New Software”.

Install the “Eclipse Plug-in Development Environment”.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Then, install the “SAP Cloud Platform Tools for Connecting to SAP HANA Systems”.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Part 3 – Connect HANA Studio to HANA Database


After restarting HANA Studio, right click at the blank space in the “Systems” view, and select “Add Cloud System”.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Select the SCP landscape host in the “Region host” field.

*Tip: type “h” to show the list of available hosts.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Enter the subaccount name of the SCP account. Steps to find this information can be found in part 1 of this blog post. Then, enter your SAP ID and password. This set of credentials may not be the same as the one used to log into the HANA database.

*Tip: The username and password should be the credentials you use to log into your machine.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

After clicking “Next”, enter the HANA database information.

Steps to find this information can be found in part 1 of this blog post.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Part 4 – Import Info Access service to HANA Database


After establishing the connection from HANA Studio to the HANA database, the Information Access service and user permissions need to be set up to allow connections from the SAC tenant.

In your HANA Studio, open “File” -> “Import”.

Expand “SAP HANA Content” -> select “Delivery Unit” -> choose “Next”.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Under “Target System”, choose your database instance. Choose “Next”.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Select “Server”.

From the dropdown list, select the “SYS/global/hdb/content/HCO_INA_SERVICE.tgz” delivery unit.

Select both actions and choose “Finish”.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

After importing the delivery unit to the HANA database, you can verify the import in the SAP HANA modeler. In the “SAP HANA Systems” view, under “Content”, check that the following package is available: sap\bc\ina\service

*Tip: The button to switch to “SAP HANA Modeler” is at top right in HANA Studio.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Part 5 – Import Info Access Toolkit and SINA API to HANA Database


*Tip: If you do not have the authorization to download the INA UI toolkit from the SAP Software Download Centre, contact Product Support to help you acquire it.

After importing the delivery unit to the HANA database, you can verify the import in the SAP HANA Modeler. In the “SAP HANA Systems” view, under “Content”, check that the following packages are available.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Part 6 – Configure HANA Database User’s Roles


Use the SCP XS Admin tool to log into the HANA database. The URL format for login is https://[DatabaseSystemName][SubaccountName].[SCPLandscape].ondemand.com/sap/hana/xs/admin

With the SCP account and HANA database in this guide, the URL is: “https://sacpmhs001d64afd459.us2.hana.ondemand.com/sap/hana/xs/admin”.

Use the same credentials to log in as the ones used for the “SAP HANA Web-based Development Workbench” in part 1 of this guide.

Select the “Menu” icon top left and select “XS Artifact Administration”.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

In the “Packages” area, select “sap> bc > ina > service > v2”

*Important: Make sure you are in that v2 package or you may affect the authentication to your XS Admin tool.

Select “Edit” at the bottom right to update configuration as shown below.

Then, choose “Save”.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Go back to HANA Studio.

In the “SAP HANA Systems” view, under “Security”, expand “Users”.

Select the database user used in part 3 from the list.

Select the “add” icon to add the three roles to the user.

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Part 7 – Create Connection from SAP Analytics Cloud Tenant

Log into the SAC tenant that is deployed on the same SCP landscape as the HANA database.

Select the “Menu” icon, and choose “Connection”

Select the “Add” icon on the top right. Select “Live Connection”, then select “SAP HANA”.

In the prompt, specify the connection name in the “Name” field.

*Tip: The “Name” field is an internal name to the SAC tenant.

Choose “SAP Cloud Platform” as the “Connection Type”. Use the information from part 1 to fill in other connection details.

Choose “User Name and Password” as the “Authentication Method”. Use the database user and password as the credentials.

Choose “OK”

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

Congratulations! You have successfully created a live connection from an SAP Analytics Cloud tenant to a HANA database!

SAP HANA Studio, SAP HANA Guides, SAP Analytics Cloud, SAP Cloud Paltform

After creating this live connection between your SAC tenant and the HANA database, you would be able to leverage the content in the HANA database to create models and stories in the SAC tenant.

Creating a HANA calculation view for currency conversion providing exchange rates for all days including non-working days like holidays and weekends

There is sometimes a need to calculate the exchange rate for weekends and holidays in business use cases.  Since there is no exchange rate for these dates because they are non-working days, the business usually decides to take the most recent previous exchange rate and apply it to these dates.  For example, since December 30, 2017 is a Saturday, the exchange rate for this date will be the exchange rate for December 29, 2017 (Friday) – the most recent previous working day with an exchange rate.  This blog illustrates how to create a HANA calculation view that will provide the exchange rate from USD to EUR of any date in the period of the last 3 years.
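Before building the view, it may help to see the carry-forward rule in isolation. The following is a minimal sketch of that logic in plain JavaScript, with invented sample data (in the view itself, the rates come from TCURR and the calendar from M_TIME_DIMENSION):

// Illustrative sketch of the carry-forward rule the calculation view implements graphically
// Sample values are invented; the calendar must be sorted in ascending date order
var rates = { "2017-12-28": 0.8412, "2017-12-29": 0.8335 }; // working days with a USD->EUR rate
var calendar = ["2017-12-28", "2017-12-29", "2017-12-30", "2017-12-31"]; // every day, incl. the weekend

var filled = {};
var lastKnownRate = null;
for (var i = 0; i < calendar.length; i++) {
    var day = calendar[i];
    if (rates.hasOwnProperty(day)) {
        lastKnownRate = rates[day]; // a real rate exists for this day
    }
    filled[day] = lastKnownRate; // non-working days inherit the most recent previous rate
}
// filled["2017-12-30"] and filled["2017-12-31"] are now 0.8335, the rate of Friday, December 29

The calculation view achieves the same result with joins and aggregations instead of a loop, as the steps below show.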

1. We first create a projection on the table TCURR applying a filter on the exchange rate type (KURST), from-currency (FCURR) and to-currency (TCURR).  Also, create a calculated column EFFECTIVE_DATE converting GDATU to a real date.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live
2. Create an aggregation on FCURR and TCURR to get the distinct FCURR and TCURR.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

3. Create a projection with a DUMMY_JOIN with a value of ‘1’.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

4. Create a projection on the built-in time dimension table M_TIME_DIMENSION and filter it for the last 3 years. And also create a DUMMY_JOIN with a value of ‘1’.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

5. Create a join between the projection and time dimension joining on the DUMMY_JOIN effectively creating a row for each day of the last 3 years.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

6. Create a left outer join between the previous join and the first projection, taking the exchange rate (UKURS) from the first projection, effectively filling in the exchange rate for the days that have one.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

7. Create a projection with a filter on UKURS is null to get rows with no exchange rates.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

8. Create a projection with a filter on UKURS is not null to get rows with exchange rates.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live


9. Create a join between the no_rate set and the have_rate set on FCURR and TCURR, effectively creating several rows with exchange rates for each date that has no exchange rate.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live


10. Create a projection with a filter of DATE_SAP_no_rate > DATE_SAP_have_rate, effectively removing rows whose rate date is later than the date that has no exchange rate.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

11. Create an aggregation using a max on DATE_SAP_have_rate, effectively getting the most recent previous date with an exchange rate.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

12. Create a projection on table TCURF with a filter on the exchange rate type to get any ratio for the from-currency that needs to be applied to the exchange rate.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

13. Create a join between the set that has the most recent previous date and the set that has all the previous dates, effectively filling in the exchange rate for the most recent previous date.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

14. Create an aggregation on TCURF with a min on GDATU to get one single ratio.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

15. Create a union between the set that previously had no rates but now has the rate filled and the set that originally had exchange rates.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

16. Create a join between the TCURF and the resulting union to get the ratio. Then create a calculated column UKURS_FINAL that applies the ratio to properly adjust the exchange rate.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

17. Finally, create a projection with the desired final columns.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

18. Final Semantics.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Live

And there you have it, a calculation view for currency conversion with the exchange rate filled for every day of the last 3 years.

Create a HANA Service binding in Cloud Platform CF

I am a developer at SAP working on a project that uses SAP Cloud Platform Cloud Foundry hosted on Amazon Web Services infrastructure.

Creating a HANA service binding in the global account via the Cloud Cockpit:

Prerequisites:

◈ A global account in SAP Cloud Platform Cloud Foundry on Amazon Web Services infrastructure.
◈ A dedicated HANA database available to the global account.
◈ A user assigned to the global account with the Space Manager role.
◈ One org with two spaces: one space hosting the database system with one or more tenant databases, and another space that requires permission to use one of the tenant databases.
1. Log in to SAP Cloud Platform with the corresponding username and password.

2. Navigate to the subaccount Overview page, which shows the organization, spaces and member information. Now, click on the number link next to the Space field.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

3. Click on “Applications” (top left corner). Select any application that is in the Started state and click on its name link. This takes you to the Application Overview page.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

4. In Application Overview page, Click on “Service Bindings”.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

5. Now, click on “Bind Service” to bind your application to the HANA service.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

6. In Bind Service window, choose “Service from the catalog” option. Click on ‘Next’.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

7. Here, only the services subscribed via the “ISM” tool are listed. Search for the “hana” service and click the “Next” button.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

8. Choose “hdi-shared” as the service plan for HANA and click “Next”.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

9. Provide the JSON parameters in the box, specifying the database ID inside the double quotes (see the example after the screenshot below), then click “Next”.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA
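As an illustration, the JSON passed in the parameters box could look like the snippet below. The database_id key is what the Cloud Foundry hana service broker typically expects for a dedicated database system, so treat the exact key name and the placeholder value as assumptions to be checked against your own environment:

{
    "database_id": "<your-tenant-database-id>"
}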

10. Provide a custom name for the instance and click “Finish”.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

11. A service binding will be created with the same instance name you provided in step 10.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Cloud, SAP HANA

BW HANA – BI Reporting Performance Benefits

I recently completed a review of several client sites that have done a technical upgrade to a HANA DB. This article covers how to create a BI statistics query for report performance and how to calculate the report performance benefits of a HANA DB upgrade.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

I’ve done a business intelligence survey of 20 client sites; 45% have already done a DB upgrade to HANA and 80% of these will complete the upgrade by 2020.

The first step is to create a custom query on the Front-End and OLAP Statistics (Highly Aggregated) (0TCT_MCA1) multiprovider. Please note that this only holds statistics on BI queries built in query designer.

The following are the steps to create this report:

1. Create a new query in the query designer off multiprovider: 0TCT_MCA1
2. Add the following Global Filters

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

3. Add the following into Rows Section (Display as Key & Always suppress Zeros on results row).

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

Put row suppression on

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

4. Add the following into Columns section:

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

The following is the variable selection screen. If newly upgraded to HANA DB (say in the last 9 months), I suggest that you review data for the past 52 weeks (yearly snapshot).

If the HANA DB upgrade was older than 9 months, I suggest that you execute the report twice. Get a 6-month snapshot pre-HANA DB upgrade and a 6-month snapshot post HANA DB upgrade.

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

The following is the report output. The report will be split by Initial output (Int/Nav Flag = #) and navigation (Int/Nav Flag = X).

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

The following is the average performance improvement over all clients that participated in the review:

◈ 62% improvement on Initial output (split below)
     ◈ 85% improvement in data manager on initial output.
     ◈ 8% improvement in OLAP on initial output
◈ 65% improvement on navigation output (Split below)
     ◈ 88% improvement in data manager on navigation output.
     ◈ 3% improvement in OLAP on navigation output.

Please note that the figures could potentially be impacted by a large number of cache hits.

The following is the calculation that I followed on the statistics data provided by the above query. Obviously, I can’t provide specific client calculations, but we can apply the above performance improvement percentages based off the following scenario:

◈ 1 million executions on initial output for 52 weeks.
◈ 1 million executions on navigation for 52 weeks.
◈ Average execution time of 30 seconds on initial output pre-HANA upgrade.
     ◈ 21 Seconds in Data Manager
     ◈ 9 seconds in OLAP
◈ Average execution time of 15 seconds on navigation pre-HANA upgrade.
     ◈ 11 Seconds in Data Manager
     ◈ 4 seconds in OLAP

The following are the calculations based off this scenario:

SAP HANA Guides, SAP HANA Certifications, SAP HANA Materials, SAP HANA Learning

Based off this scenario, 7,875 hours per year will be saved that users would otherwise spend waiting for reports to return data for initial or navigation outputs.
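As a quick sanity check of that figure, the arithmetic can be reproduced in a few lines of plain JavaScript using the scenario numbers above:

// Reproducing the savings calculation from the scenario above
var executions = 1000000;              // per year, for each of initial output and navigation
var initialSecs = 30, navSecs = 15;    // average pre-HANA execution times in seconds
var initialImprovement = 0.62, navImprovement = 0.65;

var savedSeconds = executions * initialSecs * initialImprovement +   // 18,600,000 s on initial output
                   executions * navSecs * navImprovement;            // + 9,750,000 s on navigation
var savedHours = savedSeconds / 3600;                                // 28,350,000 s = 7,875 hours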

The range of savings for the clients that have participated has been from 4k to 11k hours per year. The saving really depends on the number of executions per year and the average execution time prior to HANA DB upgrade.

If you’ve upgraded your HANA DB, the above exercise provides a high-level overview of the performance benefits on query execution.

If you’re thinking of upgrading to HANA DB and you must justify the benefit, you can follow the above activity and use the average benchmarks of 62% and 65% on the initial and navigation respectively to do the performance calculation.