
Taming your SAP HANA Express. Hardening an SAP HANA Express 2.0 SP03 installation part 1. Getting it ready for SAP Analytics front end tools

First things first.

Starting with wave 2019.01, SAP Analytics Cloud (SAC) no longer accepts self-signed SSL certificates for HTTPS INA live connections, so I have ended up replacing the self-signed HANA Express SSL certificate(s) with equivalent CA-signed SSL certificate(s).

Synopsis:

In a nutshell, the SAP Analytics tools (the likes of SAC, Analysis for Office, BW4H, BOE with Lumira 2.x, Analysis for OLAP or WebIntelligence) rely heavily on the secured HTTPS INA protocol to consume SAP HANA calculation views on the fly via the SAP HANA EPM-MDS engine.

This is referred to as live/online/remote connectivity, where the data is always current, as opposed to the acquired/offline/import option, where the data needs to be acquired first and then refreshed on a schedule.

I have invested quite some time and effort in building a viable testbed focused on the front-end SAP analytical applications that rely on secured HTTPS INA connectivity [either with or without SSO to the backend HANA database] and that can consume HANA HDI views (i.e. views that are no longer tied to the default _SYS_BIC schema).

As you may know, the SAP HANA Express 2.x ready-to-deploy images implement several self-signed SSL certificates for the secured domains of HTTPS INA (XS classic engine running on the tenant database), XSA (XS Advanced) and LCM access over the web.

As mentioned above, having recently come across the need to implement SSL certificates signed by a trusted CA (Certificate Authority) rather than relying on self-signed ones, I have decided to share my experience in the “Taming your SAP HANA Express” series of blog posts.

Let me be clear, I have been very happy with the HXE SAPCAL image deployed on AWS; it takes 45 minutes to have this image deployed and running.

It is a great HANA appliance! Kudos to the entire SAPCAL team!

For the sake of convenience and transparency most of the URLs in this article are not obfuscated. They refer to a stock HANA Express instance deployed on AWS and do not reveal any trade secrets.

Let’s begin:

First, I got the HTTPS INA protocol up and running on the tenant database, where both the index server and the webdispatcher run. (This is very important, as with multi-tenant databases there is neither an index server nor a webdispatcher running on the system database.)

This setup used to work against SAP Analytics Cloud waves 18 through 22 out of the box (i.e. with the self-signed SSL certificates in the Chrome browser);

So what…?

1. The initial problem with the HANA Express self-signed certificates.


Initially, one particular problem I faced was that a number (but not all) of the front-end applications require the certificate CN to match the hostname in the HTTPS service URL;

When a mismatch between the two names is detected, these applications deny any further access.

In our case the certificate CN is sid-hxe, while the hostname in the webdispatcher HTTPS service URL is vhcalhxedb; obviously sid-hxe != vhcalhxedb!

Eventually, after some thought, I narrowed the problem down to the following question:

Can someone advise the steps required to replace the system certificate (sid-hxe) with the default.root.crt.pem certificate (vhcalhxedb) in the webdispatcher’s SAPSSLS.pse store?

Let me explain in more detail.

1a. The links below to the HANA XS classic engine work perfectly fine on my testbed; 4390 is the SSL port number of the webdispatcher running on the HXE tenant database:

https://vhcalhxedb:4390/

https://vhcalhxedb:4390/sap/bc/ina/service/v2/GetServerInfo

The OS host name is sid-hxe;

The HANA host name is vhcalhxedb;

Both names resolve to the appliance IP address;
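
A quick way to see which certificate the webdispatcher actually presents is a small check like the following (a minimal Node.js sketch against this testbed’s host and port; rejectUnauthorized is disabled only because the certificate is still self-signed at this point):

// check_cert.js - print the subject/issuer CN of the certificate served on port 4390
const tls = require('tls');

const socket = tls.connect(
  { host: 'vhcalhxedb', port: 4390, rejectUnauthorized: false },
  () => {
    const cert = socket.getPeerCertificate();
    console.log('Subject CN:', cert.subject && cert.subject.CN); // expected here: sid-hxe
    console.log('Issuer CN:', cert.issuer && cert.issuer.CN);
    socket.end();
  }
);
socket.on('error', (err) => console.error(err.message));

Running it makes the mismatch obvious: the subject CN is sid-hxe while the hostname in the URL is vhcalhxedb.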

I have noticed the HXE webdispatcher URL is using the OS host’s sid-hxe SSL certificate, as depicted below.

You may notice that the URL uses vhcalhxedb HXE appliance name [which is resolved into the elastic IP address in my laptop’s /etc/hosts file].

Still, the webdispatcher/XS classic engine is configured to use the system certificate, which shows the OS host name, namely sid-hxe:

[Screenshot]

1b. Moreover, if I try to replace the appliance name vhcalhxedb with the hostname sid-hxe, I get a 503 error, as depicted below (it goes without saying that sid-hxe has been added to the /etc/hosts file on my Mac, so it resolves to the same IP address as vhcalhxedb):

[Screenshot]

In other words, the name of the HANA host (vhcalhxedb) must not be replaced in the URL;


2. Furthermore, all the XSA applications pre-installed during the appliance deployment, or that I have installed myself on either SYSTEMDB or the HXE tenant database, use a different self-signed domain SSL certificate, namely vhcalhxedb.

[Screenshot]

This vhcalhxedb certificate is in the default.root.crt.pem certificate file found here: <installation_path>/HXE/xs/controller_data/controller/ssl-pub/router

[Screenshot]

3. I can access the webdispatcher admin console (on the HXE tenant database)

https://vhcalhxedb:4390/sap/hana/xs/wdisp/admin/public/default.html

(You may need to grant specific HANA user privileges in order to get access to the webdispatcher admin console, and the user needs to be defined on the tenant database.)

The PSE of interest is the SAPSSLS.pse which is configured with the system certificate (CN=sid-hxe);

What I did was import the aforementioned default.root.crt.pem certificate file into the SAPSSLS.pse store (as shown below).

What I did not know was how to replace the Own Certificate (CN=sid-hxe) with the default.root.crt.pem certificate (CN=vhcalhxedb), or with any other viable certificate.

At the same time, I did not want to damage my testbed. So again I narrowed the problem down to the following question:

Can someone advise the steps to replace the system certificate with the default.root.crt.pem certificate in the webdispatcher’s SAPSSLS.pse store? Does it make any sense at all?

[Screenshot]

But let me walk you through all the steps one by one.

2. SSL system certificate with HXE tenant database web dispatcher and XS classic


Long story short: despite the above certificate/URL name mismatch, I was able to use the HXE SAPCAL image with SAP Analytics Cloud for quite some time…

Until recently, the discrepancy between the HXE webdispatcher CN in the SSL certificate and the hostname in the webdispatcher URL was still manageable; for whatever reason, SAC up to and including wave 2018.22 disregarded both the CN mismatch and the fact that the certificate was self-signed (not trusted).

However, with the latest release of SAP Analytics Cloud (wave 2019.01+) this is no longer the case, as SAC requires the SSL certificate to be both valid and trusted…

In order to find a manageable solution and a way out of this “cul-de-sac” I broke it down into three questions:

1. What needs to be done to get the webdispatcher SSL certificate right?
2. Could I replace the self-signed sid-hxe certificate with the vhcalhxedb certificate used elsewhere in the system? I understand that might solve the name mismatch but would not necessarily fix the certificate trust.
3. Last but not least, which CA could I use with the SAPCAL images to generate trusted SSL certificates to replace the self-signed ones?

The answer to question 3 would deliver certificates that are both valid (name match) and trusted (CA-signed).

Let me walk you through the required steps as follows:

ad 1. What needs to be done to get the webdispatcher SSL certificate right, i.e. to fix the insecure, self-signed HTTPS INA URL?

https://vhcalhxedb:4390/sap/bc/ina/service/v2/GetServerInfo

[Screenshot]

ad 2. Could I replace the self-signed sid-hxe system certificate with the XSA domain vhcalhxedb certificate used elsewhere in the system?

Here is the OS host (sid-hxe) self-signed SSL certificate, which is used by the webdispatcher admin webapp and, more generally, to secure access to the XS classic domain resources:

[Screenshot]

and here is the self-signed vhcalhxedb certificate used to secure access to the XSA domain:

[Screenshot]

ad 3. Last but not least, which CA could I use with the SAPCAL images to generate trusted SSL certificates to replace the self-signed ones?

If you are interested in more details about the HANA PSEs, SAP Note 2009483 – PSE Management in Web Administration Interface of SAP Web Dispatcher describes the PSE management techniques.

Alternatively, the SQL statement below retrieves the HANA webdispatcher profile(s):

SELECT KEY, VALUE, LAYER_NAME
FROM SYS.M_INIFILE_CONTENTS
WHERE FILE_NAME = 'webdispatcher.ini'
AND SECTION = 'profile' AND KEY LIKE 'wdisp/system%'

which yields the following result:

GENERATED, SID=HXE, NAME=HXE, EXTSRV=localhost:39008, SRCVHOST=vhcalhxedb

The FQDN of the webdispatcher is set to be vhcalhxedb.sap.corp; but this could be any name, like ateam.sap.corp, etc.

Let’s make sure the FQDN can be resolved to the IP address. The excerpt below shows the edited OS hosts file:

vi /etc/hosts

#

# IP-Address  Full-Qualified-Hostname  Short-Hostname

#


#127.0.0.1 localhost sid-hxe.dummy.nodomain sid-hxe

127.0.0.1 localhost

10.0.0.11 sid-hxe.dummy.nodomain sid-hxe

10.0.0.11 vhcalhxedb.sap.corp vhcalhxedb

Additionally, in order to prevent the SAPCAL image from changing the hostname, check the current hostname first:

sid-hxe:~ # cat /etc/hostname

sid-hxe


Prevent the appliance from changing the hostname at boot time:

sid-hxe:~ # vi /etc/init.d/updatehosts

sid-hxe:~ #

sid-hxe:~ # cat /etc/init.d/updatehosts


if [ "$1" == "start" ]; then

# Commented out the below line to prevent the hostname changes

#     /sbin/updatehosts.sh

else

echo "$0 $1 does not do anything"

fi

exit $?

sid-hxe:~ #

3. The hardening of the XS classic domain:


Following the piece of advice from the wiki, the CA-signed trusted certificate has been implemented in the SAPSSLS.pse PSE as follows (you will notice we still use the vhcalhxedb name in the service URL):

I recreated the PSE and created the CSR,

[Screenshot]

and then imported the CA response (the full chain of certificates).

[Screenshot]

The end result is as follows:

[Screenshots]

Still, the dispatcher parameters reveal the appliance host name as vhcalhxedb and not the FQDN vhcalhxedb.sap.corp. What has gone wrong?

[Screenshot]

Initially I thought that I would have to rename the HANA appliance host, namely rename vhcalhxedb into the FQDN vhcalhxedb.sap.corp.

I do reckon, eventually, this might have been the best way of getting the FQDN to work in the URL once and for all. And, by the way, there is an excellent post that explains how to rename the HANA host. However, I did not resort to it;

Instead, I used the HANA cockpit and searched for all the occurrences of vhcalhxedb in the HANA configuration files through the XSA HANA cockpit at https://vhcalhxedb.sap.corp:51045/sap/hana/cockpit/landscape/index.html

[I have to confess I had already renamed the XSA engine’s domain from vhcalhxedb to vhcalhxedb.sap.corp, initially to prove that it can be done, but more importantly to show that XSA is a world apart from classic XS, and that the HANA appliance hostname (vhcalhxedb) is separate from both the XSA and XS engines and from the OS host name.]

[Screenshot]

I found that the public_urls in xsengine.ini were of the form http://vhcalhxedb:8090 and https://vhcalhxedb:4390, so I edited them as follows:

[Screenshot]

Then I restarted the HANA DB appliance.

4. Hooray!!!


In the aftermath, the FQDN URL to the HTTPS INA GetServerInfo service reveals a certificate that is both trusted (CA-signed) and valid (CN == URL hostname) for the XS classic domain:

[Screenshot]

while the short-name URL no longer works:

[Screenshot]

Equally, the FQDN URL to the webdispatcher admin console reveals the valid and trusted certificate,

[Screenshot]

while the short-name URL no longer works:

[Screenshot]

From this moment on I was able to start using the FQDN, namely vhcalhxedb.sap.corp, in the SAC connection definition, in the BOE HANA trust definition, etc.
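
As a final check, with the CA-signed certificate in place, a plain HTTPS request with strict TLS validation (Node’s default behaviour) should now succeed; a minimal sketch, assuming the signing CA is trusted by the client (e.g. supplied via NODE_EXTRA_CA_CERTS if it is not in the default trust store):

// verify_ina.js - strict-TLS GET against the hardened INA endpoint
const https = require('https');

https.get(
  'https://vhcalhxedb.sap.corp:4390/sap/bc/ina/service/v2/GetServerInfo',
  (res) => {
    console.log('HTTP status:', res.statusCode); // a certificate problem would surface as an 'error' event instead
    res.resume();
  }
).on('error', (err) => console.error('TLS/HTTP error:', err.message));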

Integrating SAP HANA XSA with Microsoft Office 365 Sharepoint Excel using Microsoft Graph API and NodeJS

I would like to show how to read data from Microsoft Office 365 using the Microsoft Graph API and NodeJS, and then write this data into SAP HANA XSA tables. Our scenario requirement was to pull data from Excel files on Sharepoint on Microsoft Office 365 and write this data to SAP HANA XSA container tables. We wanted to use the SAP HANA File Adapter, which has a section for Sharepoint configuration. However, the Excel adapter as of HANA 2.0 SPS 3 can only connect to Sharepoint 2013 on-premise systems, not to Sharepoint on the Microsoft Office 365 cloud. So we had to come up with an approach to address this requirement and develop it. This blog describes the approach, which will hopefully help other folks needing to accomplish this type of scenario for integrating Microsoft Office 365 data with SAP HANA XSA. There will be another blog that describes how to trigger our custom-developed NodeJS application from a virtual procedure through the file adapter, via the SAP HANA Smart Data Integration (SDI) layer.

Here is a screenshot of the ExcelAdapter with the Sharepoint URL we are trying to connect to and the error that is returned, stating that the connection to the Sharepoint URL does not work:

[Screenshot]

First things first – we need to connect to Sharepoint on Microsoft Office 365. Microsoft provides the Graph API, which allows users to invoke APIs across most of Office 365 functionality (very impressive), including the Sharepoint integration capabilities. Check out the Microsoft Graph API with the Microsoft Graph Explorer: Microsoft Graph Explorer

[Screenshot]

Once you get familiar with the API and see how simple, powerful and flexible it is, you will see that you can call the Graph API, pass the Sharepoint site, workbook, worksheet and cell ranges, and get the data back in JSON format very easily. Very cool! For the Graph API, we need our Sharepoint site, the GUIDs for the drive, the workbook, the worksheet and the range of the worksheet:

In my example, I have:

https://graph.microsoft.com/v1.0/sites/mycompany.sharepoint.com,5304406e-30bd-4b4e-8bd0-704c8a2e6eaa,ba9d641d-14c9-41f8-a9d0-e8c6a2cda00e/drives/b%21bkAEU70wTkuL0HBMii5uqh1knbrJFPhBqdDoxqLNoA4HcPFP9eqPTIUQCSbiDtgZ/items/01HX4WN4RSH2GZKQXRWNEKR2S5YZLIUHOS/workbook/worksheets(%27{00000000-0001-0000-0000-000000000000}%27)/Range(address=%27Sheet1%21A1:C50%27)
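
For illustration only, the response for such a range request comes back as JSON along these lines (a trimmed, hypothetical payload; the code later in this post relies on the formulas property, a row-major 2D array of cell contents):

{
  "address": "Sheet1!A1:C3",
  "values": [["ID", "NAME", "QTY"], ["E100", "Arun", 10], ["E101", "Anand", 5]],
  "formulas": [["ID", "NAME", "QTY"], ["E100", "Arun", 10], ["E101", "Anand", 5]]
}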

Now in order to call this API, we need to register this application on the Azure portal.

Click on Add an app:

[Screenshot]

Give the application a name

[Screenshot]

Get the application ID and secret – we will need these for authentication to invoke the interface:

[Screenshots]

In our case, we have named this sharepoint_excel_outlook_graph_api_hana_integration – the name could have been shorter, but we wanted it to be self-descriptive enough, since there were many other applications for the company.

[Screenshot]

We need to set the permissions for the app to allow reading the Sharepoint data:

[Screenshots]

In order for the service to be called with an OAuth token using just the application ID and secret, we need to apply for admin consent from the Azure admin, which will grant the application the permissions to be triggered in background mode.

[Screenshot]
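
For reference, the app-only (client credentials) token call that this admin consent enables looks roughly like the following; a sketch using the same request library as the application code below, with TENANT_ID, CLIENT_ID and CLIENT_SECRET as placeholders for the values registered above:

// token_sketch.js - client-credentials token request against the Azure AD v2 endpoint
var request = require('request');

request.post({
  url: 'https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token',
  form: {
    grant_type: 'client_credentials',
    client_id: 'CLIENT_ID',
    client_secret: 'CLIENT_SECRET',
    scope: 'https://graph.microsoft.com/.default' // app-only access to Graph
  }
}, function (err, res, body) {
  if (err) return console.error(err);
  console.log(JSON.parse(body).access_token); // bearer token for Graph calls
});

In the actual application below, the simple-oauth2 library performs this exchange using values read from the config file.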

So once we have the necessary setup done to read the data from Sharepoint, we can write the application using NodeJS.

Here is the main app.js module, which allows the application to be invoked from the command line; it uses our custom modules hdbutility.js and sharepointutility.js:

///////////////////////////////////////////////////////////////////////////////////
//  app.js - main module to read the file from Sharepoint Office 365 and then save into HANA
//  Author - Jay Malla @Licensed To Code
///////////////////////////////////////////////////////////////////////////////////

var hdbutility = require('./hdbutility');
var sharepointutility = require('./sharepointutility');
// Read all of the main configurable parameters
var config = require('config');

///////////////////////////////////////////////////////////////////////////////////
//  Here is our main....
///////////////////////////////////////////////////////////////////////////////////

var sharepointExcelURL;
var schema;
var table;
var sqlArrayResults;

console.log("Let's start this journey");


//Command line usage (Note that the exact sequence is important)
//node.exe app.js -f sharepointurl -s schema -t table

// Let's extract the variables sharepointurl, schema, table from the command line
process.argv.forEach(function (value, index, array) {
    console.log(index + ': ' + value);

    if (array.length < 8) {
        console.error("Not enough parameters supplied");
        throw (new Error("Not enough parameters supplied"));
    }

    switch (index) {
        case 3:
            console.log('FileName' + ': ' + value);
            sharepointExcelURL = value;
            sharepointExcelURL = sharepointExcelURL.replace('\\', ''); // assign the result (String.replace does not modify in place)
            break;
        case 5:
            console.log('Schema' + ': ' + value);
            schema = value;
            break;
        case 7:
            console.log('Table' + ': ' + value);
            table = value;
            break;
    }
});

//If not supplied through command line, then read from the config file
//if (!schema) {schema = config.get('schema');}
//if (!table) {table = config.get('table');}
//if (!sharepointExcelURL) {sharepointExcelURL = config.get('sharepointExcelURL');}
var hdbConnectionDetails = config.get('hdbConnectionDetails');
var oauth_info = config.get('oauth_info');
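
// For reference, a hypothetical config/default.json shape matching the keys used above
// (all values are placeholders; the real file holds your own credentials):
// {
//   "hdbConnectionDetails": { "host": "HANA_HOST", "port": 30015, "user": "USER", "password": "PASSWORD" },
//   "oauth_info": { "client_id": "...", "client_secret": "...",
//                   "tokenHost": "https://login.microsoftonline.com",
//                   "authorizePath": "/TENANT_ID/oauth2/v2.0/authorize",
//                   "tokenPath": "/TENANT_ID/oauth2/v2.0/token",
//                   "scope": "https://graph.microsoft.com/.default" }
// }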

mainlogic();

///////////////////////////////////////////////////////////////////////////////////
//  mainlogic is the main function that runs the main logic
///////////////////////////////////////////////////////////////////////////////////
async function mainlogic() {
    try {

        // Set the credentials from the configuration module which has read the default.json
        const credentials = {
            client: {
                id: oauth_info.client_id,
                secret: oauth_info.client_secret
            },
            auth: {
                tokenHost: oauth_info.tokenHost,
                authorizePath: oauth_info.authorizePath,
                tokenPath: oauth_info.tokenPath
            },
            options: {
                bodyFormat: 'form',
                authorizationMethod: 'body'
            }
        };

        ////////////////////////////////////////////////////////////////
        // Use Sharepoint Utility to get Excel
        var sharepointclient = sharepointutility.createClient(credentials);
        sharepointclient.getExcelFileFromSharepoint(sharepointExcelURL, schema, table, oauth_info.scope)
        // If Excel file is retrieved
        .then(result => {
                console.log(result);
                console.log("Excel File retrieved as array of SQL statements");
                sqlArrayResults = result;
                ////////////////////////////////////////////////////////////////
                // Save to HANA Database
                var hdbclient = hdbutility.createClient(hdbConnectionDetails);
                hdbclient.setSchema(schema);
                hdbclient.setTable(table);

                hdbclient.insertIntoHANA_ReturningPromise(sqlArrayResults)
                    .then(result => {
                        console.log(result);
                        console.log("Data uploaded to SAP HANA Table");
                    })
                    .catch(error => {
                        console.error(error);
                        console.log("Could not upload the data to SAP HANA table.  Please fix issues and try again.  Check config file and input parameters.");
                    });
            })
            .catch(error => {
                console.error(error);
                console.log("Could not read the Excel file from Sharepoint");
            });

    } catch (err) {
        console.log(err);
    }
}
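
A typical invocation, based on the usage comment in the code above (the parser reads argv positions 3, 5 and 7, so the flag order matters; the URL, schema and table values are placeholders):

// node app.js -f "<graph-range-url>" -s MY_SCHEMA -t MY_TABLE
//
// process.argv then contains:
//   [0] node   [1] app.js    [2] -f   [3] <graph-range-url>
//   [4] -s     [5] MY_SCHEMA [6] -t   [7] MY_TABLE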

Here is the NodeJS code that uses the Microsoft Graph Client on NodeJS to connect to Sharepoint and then call the Graph API to read in the contents.  From the contents, the code then creates an array of SQL statements that is later used to insert the data into the SAP HANA tables:

///////////////////////////////////////////////////////////////////////////////////
//  sharepointutility.js - sharepoint module to integrate with Sharepoint
//  Author - Jay Malla @Licensed To Code
////////////////////////////////////////////////////////////////////////////////////

var graph = require('@microsoft/microsoft-graph-client');

// Class sharepointutility - object constructor function
function sharepointutility(credentials) {

    // Set the credentials Info
    this.credentials = credentials;


    // We need to store a reference to this - since we will need this later on
    var self = this; // use var so the reference does not leak into the global scope

    ///////////////////////////////////////////////////////////////////////////////////
    // This method async function connects to Sharepoint 
    this.getExcelFileFromSharepoint = async function getExcelFileFromSharepoint(sharepointExcelURL, schema, table, inputscope) {
        return new Promise(async function (resolve, reject) {

            self.sharepointExcelURL = sharepointExcelURL;
            self.schema = schema;
            self.table = table;
            self.inputscope = inputscope;

            const oauth2 = require('simple-oauth2').create(credentials);
            var accessToken;

            const tokenConfig = {
                scope: inputscope // also can be an array of multiple scopes, ex. ['<scope1>, '<scope2>', '...']
            };

            // Get the access token object for the client
            try {
                const result = await oauth2.clientCredentials.getToken(tokenConfig);
                accessToken = result.access_token;
            } catch (error) {
                console.log('Access Token error', error.message);
                reject(error);
                return;
            }

            // Initialize Graph client
            const client = graph.Client.init({
                authProvider: (done) => {
                    done(null, accessToken);
                }
            });

            ////////////////////////////////////////////////////////////////
            // Get the Sharepoint Excel file
            var sharepointurl = sharepointExcelURL;
            const result = await client
                .api(sharepointurl)
                .get();
            ////////////////////////////////////////////////////////////////

            ////////////////////////////////////////////////////////////////
            // Let's repeat the same call as a raw HTTP GET (the body is only logged; the promise resolves with the Graph client result above):

            var request = require('request');
            var bearer = "Bearer " + accessToken;
            var httpurl = "https://graph.microsoft.com/v1.0";

            // Set the headers
            var headers = {
                'Content-Type': 'application/json',
                'Authorization': bearer
            }

            // Configure the request
            var options = {
                url: httpurl + sharepointurl,
                method: 'GET',
                headers: headers
            }

            // Start the request
            request(options, function (error, response, body) {
                if (!error && response.statusCode == 200) {
                    // Print out the response body
                    console.log(body);
                    var sqlArrayResults;
                    sqlArrayResults = self.generateSQLarrayFromResults(result.formulas);
                    resolve(sqlArrayResults);
                    return;
                }
                // Settle the promise on failure too, so callers are not left hanging
                reject(error || new Error('HTTP status ' + response.statusCode));
            })
            ////////////////////////////////////////////////////////////////

/*
            ////////////////////////////////////////////////////////////////
            // Convert the Excel file results to an array of SQL
            var sqlArrayResults;
            sqlArrayResults = self.generateSQLarrayFromResults(result.formulas);
            resolve(sqlArrayResults);
            return;
            ////////////////////////////////////////////////////////////////
 */
        });
    };

    this.generateSQLarrayFromResults = function generateSQLarrayFromResults(sharepointTable) {
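        // Note: cell values are concatenated into SQL literals as-is; values that
        // contain single quotes would need escaping (doubling) before being used here.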
        var columnsString;
        var sqlArray = [];

        sharepointTable.forEach((element, index) => {

            //Assumption - the first row has the table headings
            if (index == 0) {

                var processArray = (array) => {

                    var sqlString = "(";

                    array.forEach((element, index) => {
                        console.log(element);
                        if (index < (array.length - 1)) {
                            sqlString = sqlString + element + ",";
                        } else {
                            sqlString = sqlString + element + ")";
                        }
                    });

                    return sqlString;
                }
                columnsString = processArray(element);

            } else {
                if (element[0] != '') { //As long as there are other entries                                                                                                  
                    var valuesArrayString;
                    var tempString = "insert into \"" + this.schema + "\".\"" + this.table + "\"" + columnsString + " values "; // + element[0] + "," + "'" + element[1] + "'" + "," + "'" + element[2] + "'" + ")";

                    var processValuesArray = (array) => {

                        var sqlString = "(";

                        array.forEach((element, index) => {
                            console.log(element);

                            if (index < (array.length - 1)) {
                                if (typeof (element) == "number") {
                                    sqlString = sqlString + element + ",";
                                } else {
                                    sqlString = sqlString + "'" + element + "'" + ",";
                                }
                            } else {
                                if (typeof (element) == "number") {
                                    sqlString = sqlString + element + ")";
                                } else {
                                    sqlString = sqlString + "'" + element + "'" + ")";
                                }
                            }
                        });
                        return sqlString;
                    }

                    var valuesArrayString;
                    valuesArrayString = processValuesArray(element);
                    tempString = tempString + valuesArrayString;
                    console.log(tempString);
                    sqlArray.push(tempString);
                }
            }
        });

        return sqlArray;
    }
}

///////////////////////////////////////////////////////////////////////////////////

exports.createClient = function (credentials) {
    return new sharepointutility(credentials);
}
///////////////////////////////////////////////////////////////////////////////////


Here is the code that connects to the HANA database and inserts the data into the tables – note that the schema and table names are parameterized:

///////////////////////////////////////////////////////////////////////////////////
//  hdbutility.js - database module save into HANA
//  Author - Jay Malla @Licensed To Code
////////////////////////////////////////////////////////////////////////////////////


// Class hdbutility - object constructor function
function hdbutility(hdbConnectionInfo) {

  // Set the hdbConnection Info
  this.hdbConnectionInfo = hdbConnectionInfo;

  //property method to set schema name
  this.setSchema = function (schema) {
    this.schema = schema;
  };

  //property method to set the table name
  this.setTable = function (table) {
    this.table = table;
  };

  // We need to store a reference to this - since we will need this later on
  var self = this; // use var so the reference does not leak into the global scope

  //////////////////////////////////////////////////////////////////////////////////
  // This async method runs the SQL array of statements in the HANA database
  this.insertIntoHANA_ReturningPromise = async function insertIntoHANA_ReturningPromise(sqlArray) {
    return new Promise(function (resolve, reject) {

      var inputSQLArray = sqlArray;
      var results = [];

      var hdb = require('hdb');
      var hdbclient = hdb.createClient(self.hdbConnectionInfo);

      hdbclient.on('error', function (err) {
        reject(err);
        return;
      });

      hdbclient.connect(function (err) {

        if (err) {
          reject(err);
          return;
        }

        // First delete the entries from the table
        var strQuery = 'delete from \"' + self.schema + '\".\"' + self.table + '\"';
        hdbclient.exec(strQuery, function (err, rows) {

          //hdbclient.end();
          if (err) {
            reject(err);
            return;
          }
          console.log('Table Contents before SQL Inserts:', JSON.stringify(rows));

          /////////////////////////////////////////////////////////////////////////
          // Recursive approach to go through Array and execute SQL statements
          var iterateOverArray = (index) => {
            // if the end of Array reached..
            if (index == inputSQLArray.length) {
              // Read all of the entries in that table and log them to see if all records were inserted...
              strQuery = 'select * from \"' + self.schema + '\".\"' + self.table + '\"';
              hdbclient.exec(strQuery, function (err, rows) {
                hdbclient.end();
                if (err) {
                  reject(err);
                  return;
                }
                console.log('Table Contents After SQL Inserts:', JSON.stringify(rows));
                resolve(JSON.stringify(rows));
                return;
              });

            } else {
              // If the end of the Array has not been reached....
              // Execute the insert into the table
              hdbclient.exec(inputSQLArray[index], (err, rows) => {

                //hdbclient.end();
                if (err) {
                  console.error('Execute error:', err);
                  //return callback(err);
                  reject(err);
                  return;
                }
                //otherwise capture the results and move to the next array member for the iteration
                console.log('Results executing SQL ' + inputSQLArray[index] + ' = ' + JSON.stringify(rows));
                results.push('Results executing SQL ' + inputSQLArray[index] + ' = ' + JSON.stringify(rows));
                iterateOverArray(index + 1);
              });
            }
          }

          /////////////////////////////////////////////////////////////////////////
          //Calling the recursive function...
          iterateOverArray(0); // Initiate the recursive function that iterates through the array and executes the SQL
          /////////////////////////////////////////////////////////////////////////

        });
      });
    });
  }
}

///////////////////////////////////////////////////////////////////////////////////
exports.createClient = function (hdbConnectionInfo) {
  return new hdbutility(hdbConnectionInfo);
}

///////////////////////////////////////////////////////////////////////////////////

So once we have the NodeJS program that we can invoke from the command line to read the data from Sharepoint and write the data into HANA, we need HANA to trigger this on demand dynamically. This is where HANA has the nice feature of a virtual procedure using the file adapter, which allows us to call our NodeJS program from the command line with dynamic parameters. These details will be in another blog, to follow very soon.

SLT-based HANA replication – FusionOps activity

This article describes how to perform SLT-based replication into HANA from an ECC system.

1) Log in to D1H and execute the LTRC T-code.

[Screenshot]

2) Choose the respective system in which replication needs to be done. Here we are doing it in the first one (SR1).

[Screenshot]

3) Find the tables whose record count is greater than a million. Search that list of tables first to find out which ones are already present.

[Screenshot]

4) Verify the tables are available for replication in HANA.

[Screenshot]

Only 2 tables are in HANA.

Out of the 6 tables, 2 are already replicated in the HANA side-car, so we need to replicate 4 tables.

We need to replicate only the 4 tables below:

NAST
COBRB
COBRA
S115

Exit the LTRC T-code and then execute LTRC again.

Click on Data Provisioning

[Screenshot]

Click on Start Replication

[Screenshot]

Click on the execute button

[Screenshot]

Click on No. of Tables and click on Descending.

We will get the list of tables in the scheduled state.

We can see that replication is in stopped status. We need to start the replication in order for the tables to replicate.

[Screenshot]

Once the replication is started, you can see the status Replication (Initial Load)

[Screenshot]

Once the replication is completed, you will see the Current Action as “Replication”.

Now, after replication, we have to create the scenario. Here the scenario name is Z_FUSIONOPS02.

First, go to SE16, enter the table name RDA_CONFIG, and enter the scenario name there. You will see that there are no entries in the table yet.

[Screenshot]

Now you have to create the scenario. Go to SE38 and enter the program name RDA_MAINTAIN.

This program is used to upload the scenario.

[Screenshot]

You have to create the XML file for the listed tables; take the reference for the XML file creation from the local path where you have saved it.

Give the path where the XML file is located:

[Screenshot]

Click on the execute button; a message will appear saying that scenario Z_FUSIONOPS02 was added.

[Screenshot]

Click on Maintain Database Connection

[Screenshot]

Choose the scenario name. You will see the new scenario name added in the list.

The database connection is SLT_SR1.

Here the database connection for Z_FUSIONOPS02 is assigned to SR1.

Now we have to activate the scenario.

[Screenshot]

Click on execute and activate the scenario

Now go to the RDA_CONFIG table, give the scenario name, and click on Number of Entries.

[Screenshot]

Here you will get the number of entries.

For example, with 6 tables you will see double that, i.e. 12 entries, because we have 2 programs using the same set of tables.

[Screenshot]

Click on execute and you will see the list of table entries.

SAP HANA licensing models explained

SAP HANA, SAP’s column-oriented, in-memory database that combines OLAP and OLTP operations into a single system, originated from research started in 2006 by SAP’s co-founder Hasso Plattner while he was a computer science professor at the Hasso Plattner Institute in Potsdam, Germany.

After several years of development at SAP, a prerelease version of SAP HANA was given to selected customers in October 2010. SAP released the first official version, SAP HANA 1.0, on June 18, 2011. After a few years of development and major architectural changes, SAP HANA 2.0 was released in 2017, with a variety of updated and new features. Below you can check a timeline with the major SAP HANA events:

[Screenshot]

SAP HANA is an enterprise platform designed to handle huge amounts of data and a large number of applications in support of mission-critical business. By platform, we mean that it is not only a database anymore. Application developers can build applications and data scientists can run advanced analytics and machine learning directly on SAP HANA, and database administrators can integrate their landscape within the SAP HANA database management tools. SAP HANA provides features for multitenancy, high availability, disaster recovery, and dynamic tiering to help ensure 24×7 access and smooth horizontal and vertical scaling. SAP HANA also lets you access a wide variety of data sources and supports Big Data and the Internet of Things.

Since its release, SAP HANA has had multiple versions and editions. The objective of this blog post is to clarify which licensing option is best suited for you to leverage the full potential of SAP’s highest-profile product – SAP HANA. The following options are covered:

◈ SAP HANA Runtime Edition
◈ SAP HANA Full Use Edition
◈ SAP HANA Active/Active (Read Enabled)
◈ SAP HANA Express Edition
◈ SAP Cloud Platform, SAP HANA Service

SAP HANA Runtime Edition


SAP HANA Runtime Edition is limited to a runtime environment for SAP Applications. An SAP Application is any application that includes a NetWeaver Application Server, e.g. SAP Business Warehouse (BW), SAP BW/4HANA, SAP Business Suite, SAP S/4HANA and other related products.

[Screenshot]

SAP offers two options with SAP HANA Runtime Edition:

◈ SAP HANA Runtime Edition for SAP BW (HANA REB)
◈ SAP HANA Runtime Edition for Apps and SAP BW (HANA REAB), which includes all capabilities from REB, plus other SAP Apps.

The picture below shows the available features for both of these options.

[Screenshot]

The options marked with * have some restrictions regarding data processing.

SAP HANA Runtime Edition is considered an integral part of the application itself, so any kind of data processing (modeling, administration, creation of data structures and use of advanced analytics) must be done by that application, via the application layer (where the interface can be either from SAP or a 3rd-party provider). With SAP HANA Runtime Edition, the data from the application can only be displayed or graphically processed at the front-end application, and any direct connection to the database bypassing the application is a license violation.

The picture below shows not only the available SAP HANA capabilities, but also the restricted (Δ) and not included (X) ones when SAP HANA Runtime Edition is licensed.

[Screenshot]

SAP HANA Full Use Edition


SAP HANA Full Use Edition offers an unrestricted platform for any combination of SAP, non-SAP, custom, third-party, and hybrid applications.

[Screenshot]

With SAP HANA Full Use Edition there are no limitations on data modeling, administration, creation of custom structures, and use of advanced engines via HANA Web IDE, HANA Studio (IDEs used to develop artifacts on a HANA server) or other applications. There are no limitations on loading and exporting of SAP & non-SAP data directly into and out of SAP HANA.

With the Full Use Edition it is possible to read data both from the application and database layers. You can consume the data of the SAP HANA system whichever way you need it.

SAP offers two versions of the SAP HANA Full Use Edition:

◈ SAP HANA Standard Edition
◈ SAP HANA Enterprise Edition (includes all capabilities from the Standard Edition). The picture below shows the available features for both editions.

[Screenshot]

It is also possible to acquire the Standard Edition and just add the services you need for your business. Here are the available options:

◈ Data Privacy Option, for enhanced protection of sensitive and confidential data (e.g. Data Masking);
◈ Information Management Option, for data integration, data quality management, and information stewardship;
◈ Predictive Option, for Predictive Analytics Library access, R engine, and TensorFlow integration for advanced analytics;
◈ Replication Option, for replicating data from any supported source system to the SAP HANA database;
◈ Search/Text Option, for search, text analysis, and text-mining functionality for you to gain real insights from unstructured textual data;
◈ Spatial/Graph Option, for advanced spatial and graph analytics capabilities;
◈ Streaming Analytics Option, for processing streams of events and messages in real time, allowing some or all of the data to be captured in the SAP HANA database and/or Hadoop.

Basically, if you license all available options on top of your SAP HANA Standard Edition, your SAP HANA system will have the same capabilities as the SAP HANA Enterprise Edition.

[Screenshot]

SAP HANA Enterprise Edition includes the full range of functions and features from SAP HANA, being the foundation of SAP’s data management solutions and the Intelligent Enterprise. 

[Screenshot]

SAP HANA Full Use allows you to use widespread skills (such as JavaScript, Python, SQL, ABAP). ABAP developers have unrestricted code push-down possibilities.

The HANA platform allows for development with no data transfers, no data duplication and no data latency, because the processing is done within the database itself, and the powerful technical capabilities of SAP HANA can be utilized. With this, we get faster time to value and lower development and maintenance costs.

SAP HANA Runtime Edition vs Full Use Edition


Conceptually speaking, SAP HANA Runtime Edition offers a limited platform, centered only on SAP applications, while SAP HANA Full Use Edition offers an unrestricted platform for all systems and distributed data in modern, hybrid environments.

The picture below provides an overview of the biggest technical differences between these two editions. For SAP HANA Runtime Edition we’ve included restricted (Δ) and not available (X) capabilities.

[Screenshot]

In terms of functions and features, the picture below compares the difference between the two available editions.

[Screenshot]

To sum up, deciding which edition is best for you depends on whether you want to leverage SAP applications only, or you want to use the full power of the SAP HANA platform.

If you are just looking for a faster database that supports your SAP applications, SAP HANA Runtime Edition is the right choice; but if you are looking for a long-term, strategic solution, you need SAP HANA Full Use Edition. Making the right choice here will depend on your company’s needs. Even if you license SAP HANA Runtime Edition based on your current requirements but your long-term roadmap calls for SAP HANA Full Use Edition, SAP supports this transition both commercially and technically.

SAP HANA Active/Active (Read Enabled)


SAP HANA Active/Active (Read Enabled) is a solution that targets resiliency and better performance for your SAP HANA productive environment. With this option you can offload intensive read operations to the secondary system, freeing up more computational power on the primary system for read and write operations.

As data is continuously replicated from the primary to the secondary system using either synchronous or asynchronous communication protocols, you can leverage the secondary system to offload read-intensive workloads from the primary system. With SAP HANA Active/Active (Read Enabled) you gain usage rights to access the secondary system for productive read-enabled operation; the utilization of the underlying hardware can be increased because the workloads can be much better balanced, which also improves the performance of operations on the primary system. Latency can also be reduced, as read-intensive users can now be located next to the secondary data center, and there is no impact on your Recovery Time Objective (RTO) or Recovery Point Objective (RPO).

Without SAP HANA Active/Active (Read Enabled), the secondary system can’t be used for any operation, which means it’s only idle capacity for High Availability and Disaster Recovery purposes.

This option can be used with both the SAP HANA Runtime and Full Use Editions, as you can check below. Only SAP HANA 2.0 works with this option.

[Screenshot]

SAP HANA Express Edition


All the different versions of SAP HANA that we have introduced are enterprise-ready, meant to be run in productive environments. What if you are a programmer who wants to run SAP HANA on a local PC, or on a cloud service offered by a hyperscaler (AWS, GCP, Azure, …)?

SAP HANA, express edition is the best way to test and get hands-on with SAP HANA, being available to install on your local laptop or desktop computer, a local server, or in the cloud. This means you can now use the express edition of SAP HANA on devices other than existing, SAP-certified hardware appliances, or access it from the SAP Cloud Appliance Library tool on either Amazon Web Services, Microsoft Azure or Google Cloud Platform.

This edition allows you to jump-start application development on top of SAP HANA today and move to other editions of SAP HANA (i.e. SAP HANA Runtime or Full Use Edition) as your needs grow tomorrow. Developers and independent software vendors (ISVs) can start using the express edition of SAP HANA at no cost to build and deploy applications that use up to 32 GB of memory. You can expand the use of your existing SAP HANA Express Edition for a fee, paying incrementally for up to 128 GB of memory use.

To support application development on your personal computer, where data size and scalability are limited, we have streamlined the SAP HANA platform to fit within the constraints of PCs. Because of this, the following features are not available with the express edition of SAP HANA:

◈ Data warehousing foundation
◈ Disaster recovery
◈ Dynamic tiering
◈ High availability
◈ Multihosting
◈ Outward scaling for multiple hosts
◈ Remote data synchronization
◈ SAP Solution Manager
◈ Smart data integration
◈ Smart data quality
◈ Smart data streaming
◈ System replication

SAP Cloud Platform, SAP HANA Service


The SAP HANA service (officially called SAP Cloud Platform, SAP HANA service) allows you to leverage the in-memory data processing capabilities of SAP HANA in the cloud. As a managed database service, backups are fully automated and service availability guaranteed. Using the SAP HANA service, you can set up and manage SAP HANA databases and bind them to applications running on SAP Cloud Platform. You can access SAP HANA databases using a variety of languages and interfaces, as well as build applications and models using tools provided with SAP HANA.

There are multiple cloud deployment options available for SAP HANA service: it can be deployed in the Cloud Foundry environment or in the Neo Environment from SAP Cloud Platform. The Cloud Foundry environment is based on Amazon Web Services, Google Cloud Platform or Microsoft Azure while the Neo environment is hosted by SAP.

SAP HANA service is available in two editions: Standard and Enterprise (similar to the editions available for SAP HANA Full Use). Standard Edition only includes core database services with in-memory capabilities as well as smart data access. Enterprise Edition includes all SAP HANA service Standard Edition capabilities, plus advanced analytics options and smart data integration.

[Screenshot]

Steps to create a HDI container type DB using SAP WEB IDE in the Cloud Foundry

This blog is for enthusiasts who want to learn how to create an HDI container type database using SAP Web IDE, deploy it to Cloud Foundry, and use SAP Web IDE to perform DML/DDL operations on the database.

We will go through the process step by step.

Prerequisites:


Accounts and infrastructure:

1. SAP Cloud Platform Cloud Foundry account.
2. SAP Web IDE Full-Stack service enabled.

Skills:

1. Basic knowledge of Cloud Foundry and SAP Cloud Platform.
2. Basic knowledge of SQL.

Let’s start:

Log in to the Cockpit, navigate to your trial account, then to Services, and open SAP Web IDE Full-Stack.

[Screenshots]

Click on the SAP Web IDE Full-Stack tile and then on the “Go to Service” link.

[Screenshot]

It will navigate to SAP Web IDE; there, select My Workbench, which will take you to the editor page where you can start creating a project, as below.

Create an MTAR project from the project template

[Screenshots]

Enter Basic details

[Screenshots]

It will create a project folder with the mta.yaml file.

Create a DB module inside the project, as shown below.

[Screenshot]

We are trying to create a DB for storing employees and their role details.

Enter basic details.

[Screenshots]

This will create a folder named Employee. Now create a new file named cdsArtifact under src, as shown below.

[Screenshot]

namespace FirstFullStackApp.Employee;

context cdsArtifact {

    /*@@layout{"layoutInfo":{"x":32,"y":121.5}}*/
    entity employeeDetails {
        key EMPLOYEE_ID      : String(10) not null;
            DESCRIPTION : String(50) not null;
            DEPARTMENT  : String(20);
            EMPLOYEE_NAME  : String(50) not null;
            association : association[1, 1..*] to cdsArtifact.roleEnrollments { EMPLOYEE_ID };
    };

    /*@@layout{"layoutInfo":{"x":-444,"y":105.5}}*/
    entity roleEnrollments {
        key EMPLOYEE_ID     : String(20) not null;
        key ROLE_ID  : String(20) not null;
            ROLE_NAME : String(20) not null;
            EMPLOYEE_NAME  : String(50) not null;
            EMAIL      : String(40) not null;
            LOCATION   : String(20);
    };
};
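
When the DB module is built, each entity becomes a column table whose runtime name is derived from the namespace and context; these are the names used by the .hdbtabledata files below:

// Generated runtime tables (namespace::context.entity):
//   "FirstFullStackApp.Employee::cdsArtifact.employeeDetails"
//   "FirstFullStackApp.Employee::cdsArtifact.roleEnrollments"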

Create a new file named employeeDetails.hdbtabledata

[Screenshot]

{
  "format_version": 1,
  "imports": [
    {
      "target_table": "FirstFullStackApp.Employee::cdsArtifact.employeeDetails",
      "source_data": { "data_type": "CSV", "file_name": "FirstFullStackApp.Employee::employeeDetails.csv", "has_header": false },
      "import_settings": { "import_columns": ["EMPLOYEE_ID", "DESCRIPTION", "EMPLOYEE_NAME"] },
      "column_mappings": { "EMPLOYEE_ID": 1, "DESCRIPTION": 2, "EMPLOYEE_NAME": 3 }
    }
  ]
}

Create a file named roleEnrollments.hdbtabledata

[Screenshot]

{
  "format_version": 1,
  "imports": [
    {
      "target_table": "FirstFullStackApp.Employee::cdsArtifact.roleEnrollments",
      "source_data": { "data_type": "CSV", "file_name": "FirstFullStackApp.Employee::roleEnrollments.csv", "has_header": false },
      "import_settings": { "import_columns": ["EMPLOYEE_ID", "ROLE_ID", "ROLE_NAME", "EMPLOYEE_NAME", "EMAIL"] },
      "column_mappings": { "EMPLOYEE_ID": 1, "ROLE_ID": 2, "ROLE_NAME": 3, "EMPLOYEE_NAME": 4, "EMAIL": 5 }
    }
  ]
}

Now, for the tables created above, we will provide data using CSV files under the src folder, as shown below.

Create employeeDetails.csv

[Screenshot]

"E100","desc1","Arun"
"E101","desc2","Anand"
"E102","desc3","Ram"
"E103","desc4","Ananya"

Create roleEnrollments.csv

[Screenshot]

"E100","J5136","JavaCodes","Arun","arun.rage@sap.com"
"E101","J5137","BusinessAnaylist","Anand","anand.kista@sap.com"
"E102","J5138","Designer","Ram","ram.raja@sap.com"
"E103","J5139","Fresher","Ananya",”ananya.sudheer@sap.com”

The folder structure will look like this:

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Right-click on the project (FirstFullStackApp) and navigate to Project Settings to install the Cloud Foundry builder, as shown below. This will allow us to build and deploy our DB module to Cloud Foundry.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Click on the “Install Builder” button.

Note: If the default settings are missing or do not match your Cloud Foundry trial account in the cockpit, copy the required API endpoint details from your trial account space, paste them here, and click on the Install Builder button.

Once the builder is installed successfully, click on the Save button.

Now right-click on Employee (the DB module folder) and build it as shown below.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Once the build is successful, check for the deployed DB module in Cockpit -> Cloud Foundry -> Space.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Check for the created service instance.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Now your HDI-container DB is ready in Cloud Foundry. We can access it from another module if required by using a data service.

Below are the steps to create OData service.

Create a Nodejs module.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Create service.xsodata under lib/xsodata/service.xsodata

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

service {
    "FirstFullStackApp.Employee::cdsArtifact.employeeDetails" as "employeeDetails"
        navigates ("role_Enrollments" as "roleEnrollments");
    "FirstFullStackApp.Employee::cdsArtifact.roleEnrollments" as "roleEnrollments";
    association "role_Enrollments" principal "employeeDetails"("EMPLOYEE_ID")
        multiplicity "1" dependent "roleEnrollments"("EMPLOYEE_ID") multiplicity "*";
}

In the mta.yaml file, add the HDI container as a requirement for the Odatajs module.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Build the Odatajs module with the Cloud Foundry settings applied to the project.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Once the build is successful, run the application as a Node.js application.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Once the application has started successfully, we can check the metadata.

In the URL the browser opened after a successful run, replace index.js with /xsodata/service.xsodata/$metadata.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

By using this URL we can create a “Destination” in Cloud Foundry, which can be used further in other modules.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications
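If you prefer a scripted check over the browser, here is a minimal sketch in Python using the requests package; the base URL is a placeholder for the one shown in your run console, and your service may additionally require authentication.

import requests

# Hypothetical base URL; replace with the application URL from the run
# console of your Cloud Foundry space.
BASE = "https://your-app-url/xsodata/service.xsodata"

# Fetch the service metadata (the EDMX document describing both entity sets).
print(requests.get(BASE + "/$metadata").text[:400])

# Fetch employee rows as JSON, expanding the navigation to role enrollments.
rows = requests.get(BASE + "/employeeDetails",
                    params={"$format": "json", "$expand": "roleEnrollments"})
print(rows.json())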

If you are not willing to use the OData service, you can skip creating the OData module altogether and directly navigate to “Accessing DB explorer” and check the tables as shown below.

Accessing DB from SAP WEB IDE.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

This will add the DB explorer to your SAP Web IDE. Now you can see a DB icon on the left side of the screen.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Click on the icon, then click on the “+” symbol, where you can add your DB.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Select the container and click on OK.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Clicking on Tables in this list will show all the tables available. Right-click on the respective table and choose Open Data to display the data inside the table.

SAP WEB IDE, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Certifications

Clicking the “SQL” icon in the top right corner will open an editor where you can execute SQL queries.

How to install DWF on SAP Hana Express

I learned about SAP SQL Data Warehousing. This topic was even more exciting to me because I'm an SAP BW consultant, and I didn't know before about the possibility of building your own SAP warehouse directly on the HANA database. So why couldn't I just install this add-on on my private HANA DB to check it out myself? It is quite simple, and this article shows you how to do it, starting from a fresh SAP HANA XSA installation.

Requirements:


◈ S-USER for DWF download,
◈ 4 cores of CPU (in my case), and 12 GB RAM for VM,
◈ SAP Hana Server + Applications  (XSA),
◈ two hours of free time.

SAP HANA XSA INSTALLATION


First of all, you have to download SAP HANA Express; there are many possible ways to achieve this. My recommended way is just to download and run the full official VM image, which you can run on the most popular operating systems.

The most important thing is that you have to download the option with applications, because DWF is a plugin.

SAP HANA Express SLES registration


The second step is registering your SLES machine to get the registration code. This code is necessary to register our copy of SUSE Linux Enterprise Server. I propose performing this step to avoid issues with the installation of the virtual machine tools responsible for access to the host machine disk. You can skip this step if you don't need to install any packages in SLES and if you are able to move the downloaded DWF plugin in another way (e.g. you can download it using curl from a local machine HTTP server, as sketched below).
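For the HTTP-server route, a minimal sketch (the laptop IP, port, and ZIP file name below are placeholder assumptions): serve the folder containing the download from your laptop with Python's built-in server (python3 -m http.server 8000), then pull the file on the HXE VM, for example with:

import urllib.request

# Hypothetical laptop IP and file name; adjust both to your environment.
url = "http://192.168.1.10:8000/SAPDW03.ZIP"
urllib.request.urlretrieve(url, "/tmp/SAPDW03.ZIP")
print("Downloaded to /tmp/SAPDW03.ZIP")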

1. Check your system version by running:

cat /etc/os-release

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

2. Go to SLES download page, and select your version of OS

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

3. Register yourself

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

4. When your registration code is ready, you will be able to log in at https://scc.suse.com/subscriptions?family=sles. There you should see something similar to:

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

5. Now you need to provide this code to the YaST tool. To do this by copy/paste, you can simply log in to the machine by SSH and run:

sudo yast

then go to Software -> Product registration

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

6. Here provide your e-mail and registration code

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

7. Repositories should be refreshed. After this, you should be able to install your VM tools or any additional software.

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Data warehouse foundation download


If you have any troubles, please refer to this.

1. Please check your HANA version. Execute:

HDB version

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

2. Go to the SAP Launchpad software download, and then follow:

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

3. Add to basket and download your version of DWF using the SAP Download Manager from the official SAP site.

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

4. In your HANA VM, please start the jobscheduler-broker:

xs start jobscheduler-broker

5. Mount your external catalog with the downloaded plugin and run the command, depending on your version:

xs install SAPDW<version>.ZIP

6. After successful installation you should see a similar confirmation:

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

7. Now you only need to enable the plugin in SAP Web IDE; go to

https://hxehost:53075/

Login as XSA_DEV, with your master password.

8. Click on the rack icon on the left panel (third icon from the top), and go to Features. Find SAP HANA Data Warehousing Foundation, click “ON” and then “Save”.

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Voilà! Everything is ready for now. You can test DWF by yourself; I recommend official tutorials like:

SAP Data Services – Defining delta using hash functions

In section 34 Changed Data Capture of the SAP Data Services Designer Guide you can find a very good description of the various delta load capabilities in SAP Data Services. Isn’t that worth an attentive read?

Delta loads are commonly used to reduce overall ETL processing time. When the number of new and modified source records is relatively low compared to the full data size, overall performance will increase significantly. Not only the time spent in extraction will be much lower, also the transformation and load steps will run much faster because those operations will apply to a (minor) subset of data only.

I would say that, in general, you may consider the following options to implement an incremental extraction:

◈ Use last_modification timestamps from the source tables. Create a control table for keeping track of the timestamp(s) used in the previous incremental extraction. Have your jobs call an initialisation script to assign the timestamps to global variables. Use the variables in the where-clause of the Query transform in your extraction dataflows.
◈ If there’s no timestamp available, but the records are numbered sequentially, you can apply a similar approach using the sequence numbers.
◈ Use log tables. Many systems do not include timestamps in the actual tables modified. They use so-called log tables to keep track of changes. Join source and log tables and proceed as above.
◈ Use the built-in CDC-mechanisms of the source database.
◈ Use specific CDC-enabling software, like SLT.
◈ With SAP as a source, you can leverage delta-enabled Business Content Extractors.

Unfortunately, when none of those options are available, you probably will have to do a full extraction of your source data, not such a great idea when dealing with big data. Note that, in such a scenario, you still have the possibility to reduce transformation and load times by calculating the delta yourself. You achieve this by keeping a copy of the data extracted in the previous job run and running it thru a Table_Comparison transform to calculate the delta. Then you can apply the transformations to the delta only. And write the new and modified records only to the target.

This blog describes a solution around the two potential bottlenecks in that approach:

◈ Full extraction times are related to the size of the source data: the number of records times the record size. The larger the latter (multiple and wide columns), the more time it will take to extract the data.
◈ The time spent in the Table_comparison transform (make sure to always sort the input data and set the comparison method in the transform to Sorted input!) may negate the profit obtained from transforming and loading a reduced data set. The more columns, the longer the comparison process will run.

Both problems can be overcome by using hash functionality. The example below is for SAP HANA but can be applied to any database that has a built-in hash function or is able to call an external one.
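Before building the dataflows, it may help to see the compare-by-hash idea in isolation. The sketch below is a conceptual stand-in in plain Python with made-up rows, not Data Services code: each row is reduced to its primary key plus one hash, and only those two columns are compared between runs.

import hashlib

def row_hash(values):
    # Mimics the hash_sha256(to_binary(...)) call used later: replace NULLs
    # by defaults, then hash all column values as one byte stream.
    return hashlib.sha256(
        b"".join(str(v if v is not None else "").encode("utf-8") for v in values)
    ).hexdigest()

# Previous run (CHECKSUM_MINUS1) and current source rows, keyed by primary key.
previous = {"K1": row_hash(("Smith", 10)), "K2": row_hash(("Jones", 20))}
source = {"K1": ("Smith", 10), "K3": ("Brown", 30)}  # K2 deleted, K3 new
current = {pk: row_hash(vals) for pk, vals in source.items()}

inserts = current.keys() - previous.keys()   # {'K3'}
deletes = previous.keys() - current.keys()   # {'K2'}
updates = {pk for pk in current.keys() & previous.keys()
           if current[pk] != previous[pk]}   # empty set: K1 is unchanged
print(inserts, deletes, updates)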

Create two tables in HANA, first:

◈ CHECKSUM_MINUS1, to contain checksums from the previous job run. The table has 2 columns, one for the primary key and the other for the hash value of the record.
◈ CHECKSUM, to contain the same data from the current job run.

Create a 1st dataflow to copy the checksums from the previous run to CHECKSUM_MINUS1:

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Tutorial and Materials

As both tables are in the same database, the full logic is pushed down to HANA and execution terminates almost instantly.

Create a 2nd dataflow to calculate the checksums. Use a SQL transform as source:

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Tutorial and Materials

Open the SQL – Transform Editor. Enter the SQL text as:

SELECT PKCOL, hash_sha256(
to_binary(PKCOL)
, to_binary(ifnull(INT1,0))
, to_binary(ifnull(INT2,0))
, to_binary(ifnull(DATE1,''))
, to_binary(ifnull(INT3,0))
, to_binary(ifnull(INT4,0))
, to_binary(ifnull(INT5,''))
, to_binary(ifnull(TXT01,''))
, to_binary(ifnull(TXT02,''))
, to_binary(ifnull(INT6,0))
, to_binary(ifnull(TXT03,''))
, to_binary(ifnull(TXT04,''))
, to_binary(ifnull(INT7,0))
, to_binary(ifnull(DEC1,0))
, to_binary(ifnull(TXT05,''))
, to_binary(ifnull(DEC2,0))
, to_binary(ifnull(TXT06,''))
, to_binary(ifnull(DEC3,0))
, to_binary(ifnull(TXT07,''))
, to_binary(ifnull(INT8,0))
, to_binary(ifnull(TXT08,''))
, to_binary(ifnull(TXT09,''))
, to_binary(ifnull(INT9,0.0))
, to_binary(ifnull(DEC4,0.0))
, to_binary(ifnull(DEC5,0))
, to_binary(ifnull(TXT10,''))
, to_binary(ifnull(TXT11,''))
, to_binary(ifnull(DEC6,0))
, to_binary(ifnull(TXT12,''))
, to_binary(ifnull(TXT13,''))
, to_binary(ifnull(DATE2,''))
) as hash_value
FROM MY_SOURCE_TABLE

Column PKCOL is the primary key column of MY_SOURCE_TABLE that has several additional numeric, date and varchar columns.

Then select the Update schema button. And close the Editor window.

The SQL transform prevents full SQL-pushdown to HANA, so all data is passing thru SAP Data Services memory. Because the data size is relatively small (2 columns only) compared to the full source table size, performance gains may be high.

Create a 3rd dataflow to calculate the delta:

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Tutorial and Materials

The Table_Comparison compares the input data to CHECKSUM_MINUS1 (current to previous data set).

The Map_Operation maps all Input row types to Normal. And registers the type of operation in the additional column.

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Tutorial and Materials

Because there’s one comparison column only, the operation will run much quicker, too.

When you run the 3 dataflows in sequence, the DELTA table will contain the primary key column of all new, updated and deleted records, with the type of operation.

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Tutorial and Materials

You can now join DELTA and MY_SOURCE_TABLE to limit the size of the input data set. Apply the necessary transformations to that subset. And use a Map_CDC_Operation to eventually apply the results to the target.

Machine Learning with SAP HANA

AI and machine learning are the hottest trends in the current IT market. Everyone is talking about it and customers are adopting these technologies in day to day processes. Because of this, there is a need to have systems that will enable the processes to be scaled, governed and compliant to current business needs.

As part of digital transformation efforts, customers currently running SAP ERP applications are implementing innovative solutions to enhance operations. These innovative solutions range from RPA (robotic process automation) to machine learning and enhanced analytics, leading to an intelligent ERP, aka iERP.

This blog assumes that the audience is familiar with SAP HANA technology both as a database and application platform, along with the engines that are available to perform various tasks. The blog will cover use of SAP HANA as a scalable machine learning platform for enterprises.

We will cover the business applications and technical aspects of the following HANA components:
1) PAL – HANA Predictive Analytics Library

2) HANA- R – Integrated platform between HANA – R

3) HANA EML – Extended Machine Learning Library

4) AFM – Application function modeler

Along with the components above, HANA also enables the use of external APIs via the XS engine.

The diagram below covers the components available within the HANA platform which can be utilized for machine learning.

SAP HANA Study Material, SAP HANA Tutorial and Material, SAP HANA Certification

SAP HANA Predictive Analytics Library (PAL)


Let’s understand some technical aspects of PAL before we see its applications. PAL is one of the components of the Application Function Library (AFL) in HANA. It defines functions that can be called from within SQLScript procedures in HANA to perform advanced analytic algorithms. SAP has provided a host of classic and universal algorithms with PAL. When PAL is paired with HANA’s ability to host execution engines and perform local calculations in-memory and in parallel, it provides a unique capability to accelerate machine learning models. PAL is available with every HANA license (from HANA 1.0 SPS06 onward) and cloud platform as part of the AFL. The PAL algorithms are divided into 10 categories; some of them include clustering, classification, association and time series functions. All of the PAL procedures can be seen below.

SAP HANA Study Material, SAP HANA Tutorial and Material, SAP HANA Certification

PAL also includes several algorithms that learn and continuously update to enable dynamic predictions, allowing companies the ability to use current data and instantly adapt to the changing conditions and behaviors of their clients.

The goal of HANA PAL is to enable a majority of the most common predictive use cases. Paired with the in-memory, high-performance nature of HANA, many choose this as their predictive tool of choice. However, even with all of the algorithms offered, you may need an external R server for even more advanced algorithms. (The HANA-R integration is covered below.)

Use Case

For clients, it will be ideal to start with the algorithms provided out of the box with PAL and then explore external algorithms if there is an absolute need.

PAL alone is best for those who need to perform the above algorithms, already use SAP HANA and have SAP HANA Studio installed. For those who want flexibility and customization such as data scientists and mathematicians, deploy both PAL and R together. However, PAL requires experience in SQL Script and/or the predictive methods used.
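To give a feel for what a PAL call looks like from a client, here is a minimal sketch using SAP’s hdbcli Python driver. The connection details and table names are placeholders, and the exact procedure signature and number of output tables vary by HANA release; on recent HANA 2.0 versions many PAL functions, K-Means among them, are exposed as ready-made procedures in the _SYS_AFL schema.

from hdbcli import dbapi  # SAP HANA Python client (pip install hdbcli)

# Hypothetical connection details; adjust to your instance and user.
conn = dbapi.connect(address="hana-host", port=39015,
                     user="ML_USER", password="secret")
cur = conn.cursor()

# Parameter table steering the algorithm, following the PAL convention of
# NAME/INT/DOUBLE/STRING columns; GROUP_NUMBER is the number of clusters.
cur.execute('CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_PARAMS '
            '(PARAM_NAME NVARCHAR(100), INT_VALUE INTEGER, '
            ' DOUBLE_VALUE DOUBLE, STRING_VALUE NVARCHAR(100))')
cur.execute("INSERT INTO #PAL_PARAMS VALUES ('GROUP_NUMBER', 3, NULL, NULL)")

# Direct call of the K-Means procedure; MY_INPUT is a hypothetical table with
# an ID column plus numeric feature columns. The ?s receive output tables.
cur.execute('CALL _SYS_AFL.PAL_KMEANS(MY_INPUT, #PAL_PARAMS, ?, ?)')
for row in cur.fetchall():  # first output: cluster assignment per ID
    print(row)
conn.close()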

HANA-R Integration 


R is an open source programming language for statistical computing that is widely used for advanced data analysis. Providing R integration opens the door wide by enabling HANA to consume and execute all the open source algorithms available in R. HANA database interprets the R language and accordingly submits the script to R Server.

The goal of integrating SAP HANA with R is to ultimately customize algorithms even more than what is offered in the standard libraries via PAL. SAP HANA uses an external R environment to execute the R code. The application developer can then embed R function definition and calls within HANA SQLScript and submit the entire code as a database query.

This opens up all new possibilities because the extent of R’s capabilities can be utilized on your data in HANA.

SAP HANA Study Material, SAP HANA Tutorial and Material, SAP HANA Certification

Use Case

Integration of R code is suitable when an SAP HANA-based modeling and consumption application or developer wants to use the R environment for specific statistical functions. It allows a developer to use their creativity, choose from thousands of R packages and script some very agile data analysis and predictions.

SAP does not include the R environment with a SAP HANA license since R is open source and available under the General Public License. Similarly, SAP does not provide support for R. In order to use the SAP HANA integration with R, you need to download R from the open-source community and configure it. You also need Rserve, a TCP/IP server that allows other programs to use facilities of R without the need to initialize R or link with the R library. Note that this integration requires prior knowledge and expertise with R code.

HANA Extended Machine Learning Library (EML)


SAP has introduced HANA EML from HANA 2.0 SP02 onward. It gives the ability to use HANA data for scoring (the process of applying a predictive model to a set of data) with pre-built machine learning models.

What does this mean? HANA will not be used in this case for training and building machine learning models. Models will be built in a Python environment with TensorFlow, a machine learning framework from Google used to build complex deep learning models. TensorFlow is not limited to deep learning but can also be used for performing common machine learning tasks.

How does this work? Build models (train, test & validate) in TensorFlow and then enable the models to be consumed by TensorFlow Serving (a separate server needed to enable the HANA EML and TensorFlow integration). Then build the HANA SQL scripts to call the models served by TensorFlow Serving and supply the data for scoring (predicting). The export half of that workflow is sketched below.
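Everything in the sketch (model, data, paths) is illustrative only; it uses the TensorFlow 2.x Keras API, and the export directory follows the /models/<name>/<version> layout that TensorFlow Serving watches.

import numpy as np
import tensorflow as tf

# Toy model standing in for a real scoring model; features/labels are random.
features, labels = np.random.rand(100, 4), np.random.rand(100, 1)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(features, labels, epochs=2, verbose=0)

# Export in the SavedModel format; TensorFlow Serving then serves it by name,
# and HANA EML calls the served model from SQLScript.
tf.saved_model.save(model, "/models/employee_scoring/1")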

Use Case

EML is already included as part of SAP HANA in the AFL, and users can easily connect TensorFlow Serving to HANA. With HANA EML, TensorFlow models can now be easily integrated into enterprise applications and business processes for clients.

HANA Application Function Modeler (AFM)


HANA AFM is an extension of SAP HANA Studio used to create flowgraph models (tables, views, procedures, R scripts, PAL functions) without writing SQL script code. Think of it as a drag and drop interface that allows less experienced developers to build complex procedures quickly.

A major advantage of AFM is that it eliminates the need to code PAL and BFL (Business Function Library) algorithms. Another benefit for businesses is that model objects can be checked in and out of development so that multiple users can access and develop simultaneously.

Use Case

AFM is for users who are less experienced in SQL code but still want to perform predictive analytics using the HANA database. The advantages are that the algorithms are available out of the box, there’s no need to send data out of the system, and it provides easier integration with in-process applications and transactions in SAP.

Key Takeaways


The table below compares the four products’ use cases, skillsets and prerequisites.

SAP HANA Study Material, SAP HANA Tutorial and Material, SAP HANA Certification

PAL and AFM can be used out of the box with HANA enterprise without needing any additional licenses, servers or components. This can jump-start clients in building machine learning use cases in SAP. For very complex use cases you can then enable R integration and EML.

In conclusion, both data scientists and business analysts should start their analysis by using SAP HANA automated predictive capabilities whenever possible. Automated machine learning can apply to a growing number of scenarios while producing valid results in seconds or minutes. Therefore, those who are not data scientists now have the ability to answer their own questions and quickly act on the results, while providing data scientists and mathematicians an automated way of quickly analyzing problems.

Building chat-bot with SAP Conversational AI & SAP Products


Introduction


I’m going to show you a chat-bot which can help HR search an employee’s salary by name and employee number on a specific date.

On the high-level, I’ll need to do:

1. Create an SAP UI5 app on SAP Cloud Platform
2. Create an SAP Conversational AI chat-bot
3. Create server REST APIs to help the chat-bot query the database
4. Create two tables (people and salary) in SAP HANA

For this post, I will focus on the chat-bot side.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

It’s very easy to build your bot by leveraging SAP Conversational AI. You just need four steps!

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

The first step is training your bot, helping it understand what you say. Here, “intents” are the concepts designed to trigger “skills”.

For example, “I wanna know the salary” and “show me the salary” mean the same intent: “salary”. On the other hand, “entities” are words like “person”, “number”, and “date-time”.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

Second, “skills”: after your chat-bot understands what you say, you need to create your conversational flow with the builder tool. You can let the chat-bot take actions here.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

Here, we need to set up a webhook, which means your chat-bot can send requests to your APIs and get the responses!

If you choose to use SAP HANA and develop with Python, you also need to run open-db-tunnel at the same time when you query data from HANA.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications
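To make the webhook side concrete, here is a minimal sketch of such an API in Python with Flask. The route name and the memory fields are assumptions for this salary bot; the replies structure follows SAP Conversational AI’s webhook response format, and the actual HANA lookup is left out.

from flask import Flask, jsonify, request

app = Flask(__name__)

# SAP Conversational AI POSTs the parsed conversation (intents, entities,
# memory) to the webhook and renders whatever "replies" we return.
@app.route("/salary", methods=["POST"])
def salary():
    body = request.get_json(silent=True) or {}
    memory = body.get("conversation", {}).get("memory", {})
    person = (memory.get("person") or {}).get("raw", "unknown")
    # ... query the salary table in HANA here (e.g. via hdbcli) ...
    return jsonify({"replies": [
        {"type": "text", "content": "Looking up the salary of " + person}
    ]})

if __name__ == "__main__":
    app.run(port=5000)  # expose it to the bot via ngrok, as described below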

Also, you need to make sure your REST APIs respond properly. You can use software like Postman to check.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

We can use ngrok to help us expose our localhost APIs to the Internet. (Don’t forget to set it on the settings page.)

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

Third, just choose the platform to which you want to deploy!

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

Finally, the last step helps you trace what your chat-bot received and which skills it triggered.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

Results:

Let me show the tables I created on SAP HANA here. We can see there’s an employee named tim cook whose employee_id is i000000, with several salary records.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

Let’s focus on the salary for January 2019: it’s 9999.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

You can see the payment dated 2019-01-01 is 9999 in each query.

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

Let’s change the salary to 13579!

SAP HANA Tutorial and Material, SAP Study Materials, SAP HANA Certifications

Woohoo, it shows 13579!

Taming your SAP HANA Express. Hardening an SAP HANA Express 2.0 SP03 installation part 1. Getting it ready for SAP Analytics front end tools.

Due to the fact that starting with the wave 2019.01 SAP Analytics Cloud (SAC) has stopped accepting the self-signed SSL certificates for HTTPS INA live connections I have ended up by replacing the self-signed HANA Express SSL certificate(s) with the equivalent CA-signed SSL certificate(s).

Synopsis:


In a nutshell, the SAP Analytics tools (the likes of SAC, Analysis for Office, BW4H, BOE with Lumira 2.x, Analysis for OLAP or WebIntelligence) rely heavily on the secured HTTPS INA protocol to consume the SAP HANA calculation views on-the-fly via the SAP HANA EPM-MDS engine.

This is being referred to as live/on-line/remote connectivity, where the data is always current, as opposed to acquired/off-line/import option where the data needs to be acquired first and then refreshed in a scheduled manner.

I have invested quite some time and effort in order to build a viable testbed with the focus on the front-end SAP analytical applications that rely on the secured HTTP INA protocol based connectivity [either with or without SSO to the backend HANA database] and that can consume the HANA HDI views (e.i. the views which are no longer tied into the default _SYS_BIC schema).

As you may know the SAP HANA Express 2.x ready-to-deploy images implement several self-signed SSL certificates for the secured domains of HTTPS INA (XS classic engine running on the tenant database), XSA (XS Advanced) and LCM access over the web.

As aforementioned, having recently come across the necessity to implement the trusted SSL certificates signed with a trusted CA (Certificate Authority) rather than relying on self-signed ones, I have decided to share my experience in “Taming your SAP HANA Express” series of blog posts.

It is a great HANA appliance! Regards to the entire SAPCAL team!

For the sake of convenience and transparency most of the URLs in this article are not obfuscated. They refer to a stock HANA Express instance deployed on AWS and do not reveal any trade secrets.

Let’s begin:


First I got the HTTPS INA protocol up and running fine on the tenant database where both the index server and webdispatcher are running; (This is very important as with the multi-tenanted databases there is neither index server nor webdispatcher available to run on the system database)

This setup used to work against SAP Analytics Cloud waves 18 through 22 out of the box (aka with the self-signed SSL certificates in Chrome browser);

So what…?

1. The initial problem with the HANA Express self-signed certificates.


Initially one particular problem I faced was that a number (but not all) of front-end applications require the CN certificate name to match the hostname in the HTTPS service URL;

When a mismatch between the two names is detected these applications deny any further access.

In our case the CN name is sid-hxe and in the webdispatcher HTTPS URL service name is  vhcalhxedb; Obviously sid-hxe != vhcalhxedb !

Eventually, after a thought, I narrowed down the problem to the following piece of advise:

Can someone advise the steps required to replace the system certificate (sid-hxe) with the default.root.crt.pem certicificate (vhcalhxedb) in the webdispatcher’s SAPSSLS.pse store ?

Let me explain it in more detail.

1a. The below links to the HANA XS classic engine work perfectly fine on my testbed; 4390 is the SSL port number of the webdispatcher running on the HXE tenant database.

The OS host name is sid-hxe;

The HANA host name is vhcalhxedb;

Both names resolve to the appliance IP address;

I have noticed the HXE webdispatcher URL is using the OS system host sid-hxe SSL certificate, as depicted below.

You may notice that the URL uses the vhcalhxedb HXE appliance name [which is resolved to the elastic IP address in my laptop’s /etc/hosts file].

Still the webdispatcher/XS classic engine are configured to use the system certificate which shows the OS host name, namely sid-hxe:

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

1b. Moreover, if I try to replace the appliance’s name vhcalhxedb with the hostname sid-hxe I get the 503 error as depicted below (of course it goes without saying sid-hxe has been added to the /etc/hosts file on my Mac so it resolves to the same IP address as vhcalhxedb):

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

In other words, the name of the HANA host (vhcalhxedb) must not be replaced in the URL;

2. Furthermore, all the XSA applications pre-installed during the appliance deployment, or that I have installed myself on either the SYSTEMDB or the HXE tenant database, use a different self-signed domain SSL certificate, namely vhcalhxedb.

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

This vhcalhxedb certificate is in the default.root.crt.pem certificate file found here: <installation_path>/HXE/xs/controller_data/controller/ssl-pub/router

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

3. I can access the webdispatcher admin console (on the HXE tenant database)

https://vhcalhxedb:4390/sap/hana/xs/wdisp/admin/public/default.html

(You may need to grant specific HANA user privileges in order to get access to the webdispatcher admin console and the user needs to be defined on the tenant database)

The PSE of interest is the SAPSSLS.pse which is configured with the system certificate (CN=sid-hxe);

What I did was import the aforementioned default.root.crt.pem certificate file into the SAPSSLS.pse store (as shown below).

What I did not know was how to replace the Own Certificate (CN=sid-hxe) with the default.root.crt.pem certificate (CN=vhcalhxedb), or with any other viable certificate.

At the same time I did not want to damage my testbed. So again I narrowed down the problem to the following question:

Can someone advise the steps to replace the system certificate with the default.root.crt.pem certificate in the webdispatcher’s SAPSSLS.pse store? Does it make any sense at all?

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

But let me walk you through all the steps one by one.

2. SSL system certificate with HXE tenant database web dispatcher and XS classic


Long story short: despite the above certificate/URL name mismatch I was able to use the HXE SAPCAL image with SAP Analytics Cloud for quite some time…

Until recently, the discrepancy between the HXE Webdispatcher CN name in the SSL certificate and the hostname in the Webdispatcher URL was still manageable; for whatever reason SAC, up to and including wave 2018.22, disregarded the CN name mismatch and the fact that the certificate was self-signed (not trusted).

However, with the latest release of SAP Analytics Cloud (wave 2019.01+) this is no longer the case, as SAC requires the SSL certificate to be both valid and trusted…

In order to find a manageable solution and a way out of this “cul-de-sac” I broke it down into three questions:

1. What needs to be done to get the Webdispatcher SSL certificate right?
2. Could I replace the self-signed sid-hxe certificate with the vhcalhxedb certificate used elsewhere in the system? I understand that might solve the name mismatch but would not necessarily fix the certificate trust.
3. Last but not least which CA authority could I be using with the SAPCAL images to generate trusted SSL certificates in order to replace the self-signed ones ?

The answer to question 3 would deliver both valid (name match) and trusted (CA-signed) certificates.

Let me walk you through the required steps as follows:

ad 1. What needs to be done to get the Webdispatcher SSL certificate right, i.e. to fix the non-secure, self-signed HTTPS INA URL?

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

ad 2. Could I replace the self-signed sid-hxe system certificate with the XSA domain vhcalhxedb certificate used elsewhere in the system?

Here goes the OS host (sid-hxe) self-signed SSL certificate which is used by the webdispatcher admin webapp and more generally to secure access to the XS classic domain resources:

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

and here goes the self-signed vhcalhxedb certificate used to secure the access to the XSA domain:

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

ad 3. Last but not least, which CA authority could I use with the SAPCAL images to generate trusted SSL certificates in order to replace the self-signed ones?

In my quest for clues I have used the following search query against SAP internal resources, namely: https://search.int.sap/#t=ssl%20certificate

If you are interested in more details about the HANA PSEs, SAP note 2009483 – PSE Management in Web Administration Interface of SAP Web Dispatcher describes the PSE management techniques.

Alternatively, the below SQL statement allows you to retrieve the HANA webdispatcher profile(s):

SELECT KEY, VALUE, LAYER_NAME
FROM SYS.M_INIFILE_CONTENTS
WHERE FILE_NAME = 'webdispatcher.ini'
AND SECTION = 'profile' AND KEY LIKE 'wdisp/system%'

which yields the following result:

GENERATED, SID=HXE, NAME=HXE, EXTSRV=localhost:39008, SRCVHOST=vhcalhxedb


The FQDN of the webdispatcher is set to vhcalhxedb.sap.corp, but this could be any name, like ateam.sap.corp, etc.

Let’s make sure the FQDN can be resolved into the IP address. The below excerpt shows the edited OS hosts system file:

vi /etc/hosts

#

# IP-Address  Full-Qualified-Hostname  Short-Hostname

#


#127.0.0.1 localhost sid-hxe.dummy.nodomain sid-hxe

127.0.0.1 localhost

10.0.0.11 sid-hxe.dummy.nodomain sid-hxe

10.0.0.11 vhcalhxedb.sap.corp vhcalhxedb

Additionally, in order to prevent the SAPCAL image from changing the hostname:

sid-hxe:~ # cat /etc/hostname

sid-hxe


Prevent the appliance from changing the hostname at boot time:

sid-hxe:~ # vi /etc/init.d/updatehosts

sid-hxe:~ #

sid-hxe:~ # cat /etc/init.d/updatehosts


if [ "$1" == "start" ]; then

# Commented out the below line to prevent the hostname changes

#     /sbin/updatehosts.sh

else

echo "$0 $1 does not do anything"

fi

exit $?

sid-hxe:~ #

3. The hardening of the XS classic domain:


Following the piece of advice from the wiki the CA-signed trusted certificate has been implemented into the PSE SAPSSLS.pse as follows. (You will notice we still use the vhcalhxedb name in the service URL):

I recreated the PSE, created the CSR

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

and then imported the CA response (full chain of certificates)

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

The end result is as follows:

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

Still, the dispatcher parameters reveal the appliance host name as vhcalhxedb and not the FQDN vhcalhxedb.sap.corp? What has gone wrong?

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

Initially I thought that I would have to rename the HANA appliance host, namely rename vhcalhxedb into the FQDN vhcalhxedb.sap.corp.

I do reckon, eventually, this might have been the best way of getting the FQDN to work in the URL once and for all. And, BTW, there is an excellent post that explains how to rename the HANA host. However, I did not resort to it;

Instead I used the HANA cockpit and searched for all the occurrences of vhcalhxedb in the HANA configuration files through the XSA HANA cockpit at https://vhcalhxedb.sap.corp:51045/sap/hana/cockpit/landscape/index.html

[I have to confess I had already renamed the XSA engine’s domain from vhcalhxedb into vhcalhxedb.sap.corp, initially to prove that it can be done, but more importantly that XSA is a world apart from classic XS and that the HANA appliance hostname (=vhcalhxedb) is separate from both the XSA and XS engines and the OS host name]

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

I have found that the public_urls of the XS engine (xsengine.ini) were of the form http://vhcalhxedb:8090 and https://vhcalhxedb:4390, so I edited them to use the FQDN, as follows:

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

Then I restarted the HANA DB appliance.

4. Hooray!!!


In the aftermath, the FQDN URL to the HTTPS INA GetServerInfo service reveals a certificate that is both trusted (CA-signed) and valid (CN == URL hostname) for the XS classic domain:

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

and that the short-name URL does not work any more:

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

Equally the FQDN URL to the webdispatcher admin console reveals the valid and trusted certificate

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

and that the short-name URL does not work any more:

SAP HANA Express, SAP HANA Tutorial and Materials, SAP HANA Study Materials

From this moment on I was able to start using the FQDN, namely vhcalhxedb.sap.corp in the SAC connection definition, in the BOE HANA trust definition etc….
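Instead of eyeballing the browser padlock, the same two checks (trusted chain, matching host name) can be scripted. A minimal sketch with Python’s standard ssl module, reusing the FQDN and INA port from above; it assumes the signing CA is present in the local trust store.

import socket
import ssl

host, port = "vhcalhxedb.sap.corp", 4390  # webdispatcher HTTPS port used above

# create_default_context() verifies both the certificate chain and that the
# CN/SAN matches server_hostname; a mismatch raises an SSL error.
ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((host, port)),
                     server_hostname=host) as tls:
    cert = tls.getpeercert()
    print("subject:", cert["subject"])
    print("issuer: ", cert["issuer"])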

In the next instalments of this series I will explain the XSA domain hardening and the use of the HANA HDI views from the SAP HANA Shine XSA application across a variety of the SAP reporting tools….

Taming your SAP HANA Express (SE01E02). Hardening an SAP HANA Express 2.0 SP03 installation part 2. Securing the SAP HANA extended application services

This is the second blog of season one of the Taming your SAP HANA Express series: episode two.

In SE01E01 we saw how to implement a fully qualified XS classic domain CA-signed SSL certificate.

This episode will tell the story of how to rename and secure the XSA domain.

It has two parts: rename and harden.

Let’s begin


You might not expect it, but when you rename the XSA domain from a short domain name like vhcalhxedb into the FQDN vhcalhxedb.sap.corp, the new XSA domain will be issued a new self-signed SSL certificate matching the new XSA domain name.
This is described in the following SAP note:

2243019 – Providing SSL certificates for domains defined in SAP HANA extended application services, advanced model

We will take care of it in the harden part of this blog.

A. Rename


The following blog describes quite accurately the steps required to rename the default XSA domain into a FQDN XSA domain. Please refer to it at all times!

The following statements were taken from my AWS host where the SAPCAL HXE appliance was deployed. You need to be the root user unless stated otherwise.

1. The rename sequence:


1a. Inspect the xscontroller.ini file communication section

First we need to edit the xscontroller.ini XSA engine configuration file, the content of which can be viewed with cat /hana/shared/HXE/global/hdb/custom/config/xscontroller.ini.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

The content of the file reveals the default short domain name and XSA API URL (under the communication section).

sid-hxe:/hana/shared/HXE/hdblcm # cat /hana/shared/HXE/global/hdb/custom/config/xscontroller.ini

[communication]

default_domain = vhcalhxedb

api_url = https://vhcalhxedb:39030

1b. Amend the default domain and the API URL to the required domain name.
In my case I just added the .sap.corp domain suffix to the existing short domain name;

But of course you might want to have a truly bespoke XSA domain name like whatevername.dummy.nodomain

I am using the vi editor (load, insert, and save with the ESC :wq command); remember, if you want to discard any changes being made you may just issue the vi escape sequence ESC :q!

sid-hxe:/hana/shared/HXE/hdblcm # vi /hana/shared/HXE/global/hdb/custom/config/xscontroller.ini

1c. Inspect the content of the xscontroller.ini file communication section once again

sid-hxe:/hana/shared/HXE/hdblcm # cat /hana/shared/HXE/global/hdb/custom/config/xscontroller.ini

[communication]

default_domain = vhcalhxedb.sap.corp

api_url = https://vhcalhxedb.sap.corp:39030

1d. In order to rename the XSA domain we will be using hdblcm, the command-line HANA DB lifecycle management utility, as follows:

sid-hxe:/hana/shared/HXE/hdblcm #

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

You will need to create an XML-formatted file, for instance pwd.xml, with all the passwords for a seamless execution.

The pwd.xml sits in the same directory and provides all the required passwords so we do not get prompted for passwords during the rename procedure:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

But as you are using the SAPCAL image:

The first password is your operating system hxeadm user password, the second is the XSA admin password, then the SYSTEM user password, followed by the sapadm password and, last but not least, the SYSTEMDB user password again.

As a reminder you have set up all these passwords when you were deploying the SAPCAL image;

(For instance, in my case I have made all the passwords equal to the master password.)

Make sure you have stopped HANA before going any further as follows

su -l hxeadm

hxeadm@sid-hxe:/usr/sap/HXE/HDB90> ./HDB stop

hxeadm@sid-hxe:exit

and you are back as a root user;

2. Let’s go and rename the XSA domain


2a. First attempt (unsuccessful)

sid-hxe:/hana/shared/HXE/hdblcm # cat pwd.xml | ./hdblcm --action=rename_system --nostart --skip_hostagent_calls --certificates_hostmap=vhcalhxedb=vhcalhxedb.sap.corp --xs_domain_name=vhcalhxedb.sap.corp --read_password_from_stdin=xml -b

SAP HANA Lifecycle Management – SAP HANA Express Edition 2.00.031.00.1528768600

****************************************************************

Local Host Name: vhcalhxedb

System Properties:

HXE /hana/shared/HXE HDB_ALONE

HDB90

version: 2.00.031.00.1528768600

host: vhcalhxedb (Database Worker (worker), XS Advanced Runtime Worker (xs_worker))

edition: SAP HANA Express Edition

plugins: afl,epmmds

Start reading from input channel…

… Done.

Running in batch mode

Cannot resolve host name ‘vhcalhxedb.sap.corp’

Host name vhcalhxedb.sap.corp is not accessible.

Log file written to ‘/var/tmp/hdb_HXE_hdblcm_rename_system_2018-12-24_13.17.38/hdblcm.log’ on host ‘sid-hxe’.

Whoops! Obviously the new domain name needs to be added to /etc/hosts for internal IP resolution.

Add the vhcalhxedb.sap.corp IP mapping to the /etc/hosts file:

sid-hxe:/hana/shared/HXE/hdblcm # cat /etc/hosts

# Syntax:

#

# IP-Address  Full-Qualified-Hostname  Short-Hostname

#

127.0.0.1 localhost

10.0.0.11 sid-hxe.dummy.nodomain sid-hxe

10.0.0.11 vhcalhxedb.sap.corp vhcalhxedb

2b. Second attempt (successful)

sid-hxe:/hana/shared/HXE/hdblcm # cat pwd.xml | ./hdblcm --action=rename_system --nostart --skip_hostagent_calls --certificates_hostmap=vhcalhxedb=vhcalhxedb.sap.corp --xs_domain_name=vhcalhxedb.sap.corp --read_password_from_stdin=xml -b

SAP HANA Lifecycle Management – SAP HANA Express Edition 2.00.031.00.1528768600

****************************************************************

Local Host Name: vhcalhxedb

System Properties:

HXE /hana/shared/HXE HDB_ALONE

HDB90

version: 2.00.031.00.1528768600

host: vhcalhxedb (Database Worker (worker), XS Advanced Runtime Worker (xs_worker))

edition: SAP HANA Express Edition

plugins: afl,epmmds

Start reading from input channel…

… Done.

Summary before execution:

=========================

SAP HANA Express Edition

Rename Parameters

Installation Path: /hana/shared

Source System ID: HXE

Target System ID: HXE

Target Instance Number: 90

Skip all SAP Host Agent calls: Yes

Remote Execution: ssh

Do not start SAP HANA Express Edition System but start service (sapstartsrv) instead: Yes

Host Name: vhcalhxedb

System Usage: production

Certificate Host Names: vhcalhxedb -> vhcalhxedb.sap.corp

XS Advanced App Working Path: /hana/shared/HXE/xs/app_working

XS Advanced Domain Name (see SAP Note 2245631): vhcalhxedb.sap.corp

Note: SAP HANA Express Edition System will be stopped

Renaming System…

SAP HANA Lifecycle Management – Database Rename  2.3.48

Renaming instance…

Stopping system…

All server processes stopped on host ‘vhcalhxedb’ (worker, xs_worker).

Stopping sapstartsrv service…

Removing sapservices entry…

Updating system configuration files…

Renaming instance…

Creating sapservices entry…

Performing rename of XS…

xsa-rename: Renaming XSA SSFS DAT file

xsa-rename: Renaming XSA SSFS KEY file

Starting service (sapstartsrv)…

Updating Component List…

Updating SAP HANA Express Edition Instance Integration on Local Host…

SAP HANA Express Edition System renamed

You can send feedback to SAP with this form: https://vhcalhxedb:1129/lmsl/HDBLCM/HXE/feedback/feedback.html

Log file written to ‘/var/tmp/hdb_HXE_hdblcm_rename_system_2018-12-24_13.19.54/hdblcm.log’ on host ‘sid-hxe’.

You can display the log file as follows:

sid-hxe:/hana/shared/HXE/hdblcm #

cat /var/tmp/hdb_HXE_hdblcm_rename_system_2018-12-24_13.19.54/hdblcm.log

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

2c. Log as the non-root hxeadm user in order to start the HANA database

su -l hxeadm

hxeadm@sid-hxe:/usr/sap/HXE/HDB90> ./HDB start

StartService

Impromptu CCC initialization by ‘rscpCInit’.

See SAP note 1266393.

OK

OK

Starting instance using: /usr/sap/HXE/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 90 -function StartWait 2700 2

Start

OK

StartWait

OK

hxeadm@sid-hxe:/usr/sap/HXE/HDB90>

If you want to check on the HANA DB startup progress you may tail this log file:

hxeadm@sid-hxe:/usr/sap/HXE/HDB90> tail -f /usr/sap/HXE/HDB90/vhcalhxedb/trace/xscontroller_0.log

or alternatively you can use the ./HDB info command

3. Caveats:


After the XSA rename procedure I have encountered the following problem when trying to use the new XSA domain URL, namely the redirect_uri has an invalid domain.

For instance I got the following redirect URL when trying to connect to the HANA Cockpit (port 51039):

https://vhcalhxedb.sap.corp:39032/uaa-security/oauth/authorize?response_type=code&redirect_uri=https%3A%2F%2Fvhcalhxedb%3A51039%2Flogin%2Fcallback&client_id=sb-xsa-cockpit!i1&state=2620251216420

Why? Perhaps because the new self-signed certificate had not yet been loaded into my browser’s cache? I was not entirely sure.

Regardless, as I was about to sign the XSA domain certificate with a trusted CA (Certificate Authority), I decided not to bother much at this stage.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

B. Harden

As aforementioned the following SAP note explains the situation with the XSA self-signed certificate.

2479411 – Untrusted/unsecure error when using HANA Cockpit 2.0 when XSA uses self-signed certificate.

The following applies if you have installed the HANA cockpit 2.0 that uses the XSA. This is what you get with the SAP HANA Express SAPCAL image.

When launching the cockpit via the web browser, you see the untrusted/unsecure error because XSA is using a self-signed certificate (located in the file default.root.crt.pem).

This certificate is located in the directory [/hana/shared/<SID>/xs/controller_data/controller/ssl-pub/router], but there is no [obvious] option to generate the CSR file, so the certificate could be eventually signed by your CA.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Solution:

The following SAP note describes the required steps to remedy this situation. However, it omits how to generate the CSR for the XSA self-signed certificate.

And where is the private key? Let’s see.

Trying to find the private key…..

grep -r --exclude-dir=log --exclude-dir=ssh --exclude=*history -I -l -e '-----BEGIN PRIVATE*' -e '-----BEGIN RSA*' -e '-----BEGIN EC*' /hana/shared/HXE/HDB90/ 2> /dev/null

…but you will not find any trace of the private key; it has been wiped out during the rename procedure.

Given that we have neither the CSR nor the private key, this is going to be a rather challenging task. And indeed it is.

I am about to describe how to circumvent the fact that the HANA XSA Cockpit does not offer any option to generate the CSR for its self-signed certificate and that the private key is not available.

For the sake of transparency, instead of using the OpenSSL utility I used Portecle, a fantastic graphical SSL utility.

SSL toolbox:

SSL shopper

Portecle

Let’s go:

In order to create the CSR, then the full certificate chain, and then get the public/private key pair, I used the very handy Portecle locally on my Mac. What I am about to show requires a good level of understanding of SSL security topics, certificate manipulation, etc. Portecle has an excellent How-to section if you want to upskill yourself in SSL security topics.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

The steps.

As aforementioned, almost everything can be done on a laptop with Portecle:

Create a new keystore


Create a new keystore of the following format:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Next, from the Tools menu, generate a key pair:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Generate key pair:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

and generate a new certificate:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

key pair entry alias: we will keep it the same as the CN.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

and the certificate is in our newly created keystore:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

We can double-click on the certificate alias name to reveal its content:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

and also inspect its PEM encoding:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Let’s save it into a file (we will be asked to enter a password; use a password that you can easily remember).

So we have created a new certificate; next we need to sign it;

Back to square one we need a CSR for our certificate.

How to generate a CSR for a keystore key pair entry?


SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

That’s the password we have used while saving the key pair

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

We are just about to generate the CSR:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

that’s done

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

we can even test that both our certificate and its CSR are valid ones:

The certificate decoder

copy and paste the content of the vhcalhxedb_sap_corp.pem file

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

The CSR decoder

copy and paste the content of the CSR file:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

SAP Global PKI Certificate Web Enrollment


There are two CA reply choices or formats: X.509 and PKCS#7.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

We will rather need the PKCS#7 format, as it includes the certificate chain.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Next in order to make the CA signing process complete we will import the CA reply into the certificate so it becomes signed and trusted.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

From now on the CA signing procedure of our certificate is complete:

From this moment we can export it from the keystore as follows:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

but we will rather need the PEM encoding:

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

And we can use the following openssl command to convert the traditional private key into the required pkcs8 private key format

$ openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in vhcalhxedb_sap_corp_SAPNetCA_G.pem -out pkcs8.key

Enter pass phrase for vhcalhxedb_sap_corp_SAPNetCA_G.pem:

$

Finally, let’s use the certificate matcher to make sure we got our private key right:

Certificate Key Matcher
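The same check can be done offline by comparing the public key moduli of the certificate and the private key (assuming an RSA key; the two digests must match):

$ openssl x509 -noout -modulus -in cert.pem | openssl md5
$ openssl rsa -noout -modulus -in pkcs8.key | openssl md5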



The epilog

From now on we can transport both the signed certificate (full chain) in PEM format and the private key in PKCS#8 format to the HANA host and reset the XSA certificate as follows:

hxeadm@sid-hxe:/usr/sap/HXE/home/xsa-sll> XSA set-certificate -c cert.pem -k pkcs8.key

Picked up _JAVA_OPTIONS: -Xss2m

USERNAME: XSA_ADMIN

PASSWORD>

Deploying router certificate for default domain. Public key '/usr/sap/HXE/home/xsa-sll/cert.pem', private key '/usr/sap/HXE/home/xsa-sll/pkcs8.key'

Stopping XSA …

Starting XSA …

OK

And do not forget to wipe out the private key you have transported to your HANA Express!

And voilà, here is the outcome in Chrome, Firefox, and Safari.

Use of Segmentation in S/4 HANA for Product Revisions

With S/4HANA, SAP introduced a concept called Segmentation that was previously available in an Industry Solution. This blog provides an overview of how Segmentation can be used to track product revision levels, with segregation of inventory for each of those revision levels.

In a B2B world, a product is usually assigned a new identification (new part#/material#) if there is a change to its form/fit/function. But if the change is minor (no change in form/fit/function), the customer assigns a revision level and may specify that product revisions are not interchangeable. One solution for this business requirement is Segmentation within S/4HANA, which works throughout the supply chain. Segmentation can be used to assign revision levels and track them (inventory, demand, and supply at these revision levels) throughout the logistics processes across sales, procurement, manufacturing, and inventory.

The ECN function provides revision levels but cannot segregate inventory, whereas the Segmentation function fills that gap in inventory tracking.

The screenshots are from the 1809 release of S/4HANA. Now let’s go into the details of the set-up of Segmentation and its processing.

Easy Access Set-up


Characteristics

CT04 – Create a characteristic in the Segmentation grouping. This is similar to any other characteristic creation in S/4HANA, but it must be created in the special group for Segmentation so that it is available for the creation of the Segmentation structure.



The important part is the relevance settings for the characteristic on the Addnl. Data tab.


This relevance enables the BOM and Routing (or other areas) to recognize the Segmentation. Make sure the relevant ones are checked for your business process. For our task, we need BOM and ROU to be relevant. As you can see, there are other areas where this Segmentation can be made relevant, e.g. Weights & Volumes, Sales Statuses, etc.

Segmentation

SGTS – Create a new Segmentation structure with the characteristic(s) created above. This allows the system to link the characteristic(s) to the Segmentation function. Segmentation is flexible: the characteristics can be created and customized depending on the unique business requirements, and you can add multiple characteristics if necessary. For each characteristic, a blank value can be made an acceptable input.


Master Data


Material Master

Assign the Segmentation structure in the material master. Once the Segmentation is assigned, the batch functionality is turned on automatically.


Important note – once the Segmentation is assigned to a material master, it can only be changed if there are no postings against that material.

BOM

BOM maintenance is the same except for the identification of the revision level (Segmentation characteristic). The option to include or exclude a component by revision level (Segmentation characteristic) is provided.


Routing

The routing is like a normal routing, other than specifying that it is applicable to a particular revision level (Segmentation characteristic). Depending on the revision level, the routing can differ to accommodate changes in the manufacturing process for that revision.


Transaction Data


Stock Overview – the stock overview shows the stocks of the different product revisions (Segmentation characteristic). This is one of the key requirements: to track inventory at the revision level without creating a new material number. As indicated, a batch is created for each revision (Segmentation characteristic). The MMBE selection screen is also enhanced to offer the Segmentation as one of the filters.


Sales Orders – in sales orders, the product revision (Segmentation characteristic) can be identified as the requirement segment. Sales pricing conditions can be specific to the revision level (Segmentation characteristic) by creating a new condition table and access sequence that include the Segmentation.


Purchase Order – while creating the purchase order, the product revision level can be specified to the vendor as the stock segment so that the right revision level is shipped. Note – I have moved the Stock Segment field on the PO screen to sit before the Material field for easier visibility; by default in S/4HANA you may have to scroll to the right to view it. Purchasing pricing conditions can be specific to the revision level (Segmentation characteristic) by creating a new condition table and access sequence that include the Segmentation.


Production Order – when a production order is created, the stock segment can be specified, and it is carried onto the production order header / goods receipt tab.


MRP Requirements – the standard MD04 stock/requirements screen is enhanced to provide visibility of the stock segment as well as the requirement segments. The MRP requirements are visible for each of the product revisions.


How to: Integrate and Consume your S/4HANA Cloud Data on-premise using HANA Smart Data Integration ODataAdapter and Custom CDS View based OData Services


1. Disclaimer


This blog entry focuses purely on the functional and technical aspects of the scenario. It does not address any license-related aspects regarding the usage of the software components mentioned here. In any case, you must clarify the license and software usage side with your SAP account & license expert before implementing such a scenario, to be on the safe side. Please also don’t raise any questions in this context in the comments section.

2. What can you expect from this blog entry?


You can expect a step-by-step description of the configuration of the introduced scenario, how to avoid pitfalls, and some general aspects of the given context. The configuration and implementation are described end-to-end, covering activities both on S4HC and on the on-premise HANA DB.

3. Architecture


A very simplified architectural sketch outlines some of the main components involved on both sides.



4. Out of Scope


◈ Network/firewall/port settings
◈ Details on SDI user authorizations
◈ How to setup HANA PSE
◈ How to enable XS scheduler
◈ Steps undertaken in BW4H

5.1. Implementation Steps

The following provides an overview of the required configuration steps in both S4HC and the target HANA system, followed by a detailed description of how to execute them.

S/4HANA Cloud – Overview

The steps required on the S4HC side are outlined in this blog entry. However, there are plenty of other, much more detailed blogs available on this topic; refer to the contributions that deal with the S/4HANA-related configuration & setup in the “References” section of this page.

1. Create Custom CDS Views
2. Create Custom Communication Scenario
3. Maintain Communication User & Assign Certificate
4. Create Communication Arrangement

HANA On-premise (target) – Overview

The steps explained herein target the setup of the SDI software components and their configuration on the HANA side. They involve some basic configuration and show the ease of use of SDI.

1. Deploy IM_DP
2. Enable XS scheduler
3. Deploy the SDI OData adapter
4. Setup HANA PSE (export certificate)
5. Establish trust relationship
6. Set SSL parameter

Further implementation steps:

1. Create Remote Sources
2. Create Replication Tasks
3. Load and Schedule
4. Monitor

5.2. S/4HANA Cloud – Implementation/Configuration Execution

Start your configuration with “Custom CDS Views”:


1. Create the Custom CDS Views

In S/4HANA Cloud numerous standard CDS views are shipped.

Based on the standard I_CostCenterText CDS view, our custom CDS view “CostCenterText” is built. You have to tick the “External API” box.


2. Continue with “Create a Custom Communication Scenario”

Search for/select an existing scenario or create a new one:


Next steps to consider:


3. Maintain Communication User

In S/4HANA Cloud, a communication user has to be maintained. This communication user is used in all of the SDI remote sources and authenticates against the S/4HANA Cloud system.

In the certificate section, the certificate of the target HANA system has to be uploaded; otherwise the connection cannot be established.


4. Create Communication Arrangement

A communication arrangement defines a set of inbound services and the communication user that has to be used to consume them. The set of services includes the CostCenterText service which we created initially.


The communication system defines the actual S/4HANA system by providing localhost and the HTTPS port.


5.3. HANA On-premise (target) – Implementation/Configuration Execution

1. Deploy IM_DP
2. Deploy the SDI OData adapter
3. Enable XS scheduler
4. Setup HANA PSE (export certificate)
5. Establish trust relationship
6. Set SSL parameter

1. Deploy IM_DP

Download and deploy the Data Provisioning Delivery Unit – “HANA DP DU” (IM_DP add-on) from the Service Marketplace (https://help.sap.com/viewer/7952ef28a6914997abc01745fef1b607/2.0_SPS00/en-US/61c1d46cbc1f4617950870097488b78e.html)

Access the PAM for SDI 1.0 or 2.0 to get the release that fits your HANA release and revision:

◈ https://drive.google.com/open?id=1y0Cy7UWNCkJyYEc6wK5roEhPT_jYrPS4
◈ https://drive.google.com/open?id=1KozXw36me_ZFek9qBq42Hb5H7A1PnkUP

2. Deploy the SDI OData adapter

Via SQL command line in HANA Studio create the ODataAdapter within the dpserver via the following command (ADAPTER ADMIN authorization is required):

CREATE ADAPTER "ODataAdapter" PROPERTIES 'display_name=OData Adapter;description=OData Adapter' AT LOCATION DPSERVER;

Remember, the OData adapter is not a system adapter (unlike the well-known SDA adapters). The adapter is C++ based and does not run on the DP agent side; it runs in the DP server of HANA.


3. Follow the HANA/SDI documentation to Enable XS scheduler

From SDI Documentation “Consume HTTPS OData Services”:

https://help.sap.com/viewer/7952ef28a6914997abc01745fef1b607/2.0_SPS03/en-US/3fe5752db9aa4ca1883cacc8e0d2c822.htm

4. Follow the HANA/SDI documentation to set up the PSE, then export the certificate

From SDI Documentation “Consume HTTPS OData Services”:

https://help.sap.com/viewer/7952ef28a6914997abc01745fef1b607/2.0_SPS03/en-US/3fe5752db9aa4ca1883cacc8e0d2c822.html

Finally your PSE should look similar to the following (in a real scenario you would typically use the SAPSRV.PSE – as applied in the remote source creation SQL string further down on the page):


5. Establish Trust Relationship

Once the HANA trust store/PSE environment is available, certificate import requires the following steps:

1. Download the certificates from the browser by opening the OData service URL (in Chrome: click on the small security/lock symbol left of the URL)

e.g. https://xyz-api.s4hana.ondemand.com/sap/opu/odata/sap/PROFITCENTERTEXT_CDS

2. Import the downloaded certificates into the trust store via XS Admin Trust Manager

https://<hanahost>:<port>/sap/hana/xs/admin/

1. Click Manage PSE
2. Import Certificate -> Select the downloaded certificates one after another

6. Set HANA SSL parameter

We faced an SSL-related error that could be narrowed down to a missing CA list on the server.

The solution for the SSL handshake error is the following:

The HANA parameter sslBlindCAResponse = on has to be set in dpserver.ini.

Issue:

With HANA 1.0 SPS12 there was a refactoring, so that XS Classic uses CommonCrypto for all encrypted communication. This causes a different behavior if a server requests a client certificate but does not send a list of trusted CAs along with the request. Since no selection criterion for the client certificate is available, CommonCrypto cannot decide which certificate from the truststore should be used. Hence, the handshake fails.

The following DP-server trace extract gives further details:

13: 0x00007fc7f6891462 in SAPODataAdapter::ODataClient::getMetadata(SAPODataAdapter::ODataMetadataRequest const&)+0x130 at ODataClient.cpp:204 (ODataAdapter.so)
14: 0x00007fc7f684e7e3 in SAPODataAdapter::ODataAdapter::getOservice()+0x150 at ODataAdapter.cpp:889 (ODataAdapter.so)
…………
24: 0x00007fc901e5edef in TrexService::WorkerThread::run(void*)+0x135b at TrexServiceThreads.cpp:630 (libhdbbasement.so)
[7341]{348972}[104/-1] 2018-07-12 18:40:00.737212 e IpConnection     IPConnection.cpp(00163) : comm::connect to 'xyz-api.s4hana.ondemand.com:443' failed with 'Internal Error Details. Crypto/SSL/CommonCrypto/Engine.cpp:424: SSL handshake failed: SSL error [-1604320766]: No trusted CA list from server, General error: 0xa0600202 | SSL | SSL_connect
No trusted CA list from server
No trusted CA list from server
0xa0600202 | SSL | ssl3_connect
No trusted CA list from server
0xa0600202 | SSL | ssl3_send_client_certificate
No trusted CA list from server

Solution:

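A minimal sketch of setting the parameter via SQL; the section name 'communication' is an assumption, so verify it for your HANA revision before applying:

ALTER SYSTEM ALTER CONFIGURATION ('dpserver.ini', 'SYSTEM')
SET ('communication', 'sslBlindCAResponse') = 'on' WITH RECONFIGURE;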

5.4. Further Implementation Steps

1. Create Remote Sources
2. Create Replication Tasks
3. Load
4. Monitor

1. Create Remote Sources

All in all, the scenario comprises 40+ custom CDS views and likewise 40+ OData services that are to be consumed on HANA. It is advisable to use the SQL CREATE REMOTE SOURCE statement instead of configuring each single one manually in the web development workbench or in HANA Studio. The syntax can be taken from the SDI documentation, but the following provides a wrap-up:

CREATE REMOTE SOURCE "COSTCENTERTEXT_CDS" ADAPTER "ODataAdapter"
AT LOCATION DPSERVER CONFIGURATION
'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ConnectionProperties name="connection_properties">
<PropertyEntry name="URL">host:port/sap/opu/odata/sap/COSTCENTERTEXT_CDS/</PropertyEntry>
<PropertyEntry name="proxyserver">proxy</PropertyEntry>
<PropertyEntry name="proxyport">8080</PropertyEntry>
<PropertyEntry name="truststore">sapsrv.pse</PropertyEntry>
<PropertyEntry name="isfiletruststore">true</PropertyEntry>
<PropertyEntry name="shownavigationproperties">true</PropertyEntry>
</ConnectionProperties>'
WITH CREDENTIAL TYPE 'PASSWORD' USING
'<CredentialEntry name="password">
<user>SDI_ODATA</user>
<password>password</password>
</CredentialEntry>';

This represents the view on the remote source from the web development workbench.


You have to grant the _SYS_REPO user access to the remote source to be able to create and run dependent objects, such as a replication task:

GRANT CREATE VIRTUAL TABLE, CREATE REMOTE SUBSCRIPTION ON REMOTE SOURCE <remote_source_name> TO _SYS_REPO;
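Once the remote source exists, a quick smoke test is to create a virtual table on one of the exposed entity sets and query it. A sketch only: the schema name MYSCHEMA is a placeholder, and the remote object path varies by adapter, so browse the remote source to confirm the exact name:

-- create a virtual table pointing at the CostCenterText entity set
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_COSTCENTERTEXT"
AT "COSTCENTERTEXT_CDS"."<NULL>"."<NULL>"."CostCenterText";

SELECT TOP 10 * FROM "MYSCHEMA"."VT_COSTCENTERTEXT";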

2. Create Replication Tasks

There is one replication task per remote object (custom CDS view). As the remote source and OData adapter do not support any real-time capability, the truncate-table option in combination with a full load is chosen. This is viable, as the data volume in our example is negligible: a full load every day or hour was not a problem in our context, and none of the data flows took longer than one minute. This may of course vary from case to case.

The higher the data volume, the likelier it is that a custom data flow must be modelled so that only delta data is loaded. For this purpose a flowgraph can serve the need. For further reference on how to implement custom delta logic, take a look at my other two blog entries about using HANA SDI in various contexts; they are mentioned under “References” on this page.


3. Load – Schedule Replication Tasks

Each replication task is scheduled via the design-time object monitor (<host>:<port>/sap/hana/im/dp/monitor/?view=IMDesignTimeObjectMonitor). The SDI monitors not only serve to schedule flowgraph or replication task executions, but also give an overview of the replication state and of critical situations in which SDI data flows need to be reset or exceptions must be processed. No further details are provided here on how the scheduling of SDI objects is technically configured; this can be taken from the standard SDI documentation.

4. Monitor SDI Data Flows


5. The very last step is done on the BW4H side: creation of an SDI source and then the modeling on top of the entities the SDI remote source contains.

Of course, you can also directly consume the tables you have replicated into your schema, or build e.g. calculation views on top.

Another alternative is BW/4 sources that sit directly on top of an SDI remote source. This is sketched in the architectural illustration under point 3.

[HANA] Unleash the performance of your VM

Most performance issues I have worked on turned out to be basic issues with HANA/Linux parameters and the configuration of the hypervisor. Virtualization, for big and small systems alike, is an often-chosen architecture in HANA environments as well. If you want to ensure good performance and learn how to check it in your environment, keep on reading.

Most systems run on VMware, but more and more systems are planned or already running on IBM Power. Here I only speak about on-premise installations, because you cannot really influence the cloud offerings of hyperscalers like Azure (Hyper-V), AWS (own KVM) or GCP (own KVM). For the biggest instances there are bare-metal installations, which make the NUMA configuration pretty easy. HANA as an application is NUMA-aware.

NUMA is a good keyword to start with, because it is one of the most ignored and least transparent performance issues. What is NUMA, and why should you pay attention to it when you install HANA on a hypervisor?

NUMA – Non-uniform Memory Access

“NUMA is a method of configuring a cluster of microprocessor in a multiprocessing system so that they can share memory locally, improving performance and the ability of the system to be expanded.”

=> OK, this does not sound really self-explanatory, does it?
Let’s take an example:

The performance impact depends on the type of CPU, the vendor (topology), and the number of sockets.

This means local access is typically 2-3 times faster than remote access. But how can you influence the placement of a VM (virtual machine)?

The hypervisor should normally take care of this. But in special cases, like big HANA VMs or wrong default settings of the VM, you have to adjust it manually. This should be done for all productive HANA servers. Normally the person who installed HANA should be aware of this, but experience shows that in 90% of installations nobody takes care of it.
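A quick way to check the placement from inside the Linux guest is the numactl toolset (a sketch; the process match pattern is an assumption):

# show the NUMA topology the guest sees
$ numactl --hardware

# per-node memory allocation of the HANA index server
$ numastat -p $(pgrep -f hdbindexserver | head -1)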

IBM Power (PPC)


On IBM Power an optimization is pretty easy with the latest HMC versions:

# on ssh shell of the HMC
# Listing of all servers
$ lssyscfg -r sys -F name


# dynamic platform optimizer (DPO) => NUMA optimization

$ hscroot@<ip-hmc>:~> lsmemopt -m <pServer Name> -r lpar -o currscore

$ hscroot:~> lsmemopt -m pserv1 -r lpar -o currscore
lpar_name=hana1,lpar_id=1,curr_lpar_score=100
lpar_name=hana2,lpar_id=2,curr_lpar_score=100
lpar_name=hana3,lpar_id=3,curr_lpar_score=none
lpar_name=hana4,lpar_id=4,curr_lpar_score=100
lpar_name=hana5,lpar_id=5,curr_lpar_score=none
lpar_name=hana6,lpar_id=6,curr_lpar_score=100
lpar_name=hana8,lpar_id=8,curr_lpar_score=32 << improvable LPAR


# on ssh shell of the HMC
# use DPO for optimization
$ optmem -m <Power Server Name> -o start -t affinity -p <name(s) of improvable LPAR(s)>

$ optmem -m pserv1 -o start -t affinity -p hana8

# check running background activities
$ lsmemopt -m <Power Server Name> 

VMware


On VMware this is trickier than on IBM Power, because the sizing rules also differ.

With VMware you can use half-socket sharing, but if your VM is bigger than one NUMA node/socket, you have to round up and allocate the full socket. This leads to some resource wasting.


Every VM which is bigger than one socket is called a ‘wide VM’.

Here is one example, which you can also check in your own environment by using the shell on your ESX host. Alternatively, I am sure you will find a way to contact me.

Example – remote memory access / overprovisioning

####################
ESX
E5 – 2695 v4
18 cores per socket
2 sockets
72 vCPUs
1 TB RAM
####################

HANA Sizing:

600GB RAM

36vCPU

Current Setup:

768 GB RAM

36vCPU

 Sizing rules:

1 TB RAM (=> 2 sockets, because one NUMA node has 512GB and we need more than this)

72vCPU

This is currently one of the most common mistakes, which I see in about 60% of all environments: the VM admin is not aware of the SAP HANA sizing rules, and most are not aware of the influence their VM settings can have on the topology and the resulting performance. So, pay attention to placement and overprovisioning.

ESX view


groupName           groupID    clientID    homeNode    affinity     nWorlds   vmmWorlds    localMem   remoteMem  currLocal%  cummLocal%
 vm.78924              58029           0           0         0x3          16          16    73177088           0         100          99
 vm.78924              58029           1           1         0x3          16          16    72204288           0         100         100
 vm.1237962         76880487           0           0         0x3          16          16    18254012   250242884           6          53
 vm.1237962         76880487           1           0         0x3          16          16   267603968      831488          99          66
 vm.1237962         76880487           2           0         0x3           4           4   145781060   121605820          54          56

Here we see an ESX host with two VMs. vm.1237962 is our hdb01 HANA DB, which has 16+16+4 vCPUs (3 sockets), and we can see that it consumes remote memory. Wait a moment – 3 sockets? Our physical server has only 2. Yes, this is possible with VMware, but it adds overhead and costs performance. You can also create an 8-socket server within a 2-socket ESX host, but it does not make sense in the context of HANA. There are other applications where this feature is useful.

But all of these “virtual sockets” are located on physical socket-0. This leads to overprovisioning of this node, because the other VM additionally uses some of its resources.

nodeID        used        idle    entitled        owed  loadAvgPct       nVcpu     freeMem    totalMem
           0        5408       30591        5356           0          14          52    26703288   536736256
           1        1574       34426         926           0           3          16    85939588   536870912

Socket-0 is using 52 vCPUs and socket-1 only 16? It seems that this ESX host is a little unbalanced and overprovisioned.

vmdumper -l | cut -d \/ -f 2-5 | while read path; do egrep -oi "DICT.*(displayname.*|numa.*|cores.*|vcpu.*|memsize.*|affinity.*)= .*|numa:.*|numaHost:.*" "/$path/vmware.log"; echo -e; done

DICT                  numvcpus = "36"
DICT                   memSize = "786432"
DICT               displayName = "hdb01"
DICT        sched.cpu.affinity = "all"
DICT        sched.mem.affinity = "all"
DICT      cpuid.coresPerSocket = "4"
DICT      numa.autosize.cookie = "360001"
DICT numa.autosize.vcpu.maxPerVirtualNode = "16"
DICT        numa.vcpu.preferHT = "TRUE"
numaHost: NUMA config: consolidation= 1 preferHT= 1
numaHost: 36 VCPUs 3 VPDs 3 PPDs
numaHost: VCPU 0 VPD 0 PPD 0
numaHost: VCPU 1 VPD 0 PPD 0
numaHost: VCPU 2 VPD 0 PPD 0
numaHost: VCPU 3 VPD 0 PPD 0
numaHost: VCPU 4 VPD 0 PPD 0
numaHost: VCPU 5 VPD 0 PPD 0
numaHost: VCPU 6 VPD 0 PPD 0
numaHost: VCPU 7 VPD 0 PPD 0
numaHost: VCPU 8 VPD 0 PPD 0
numaHost: VCPU 9 VPD 0 PPD 0
numaHost: VCPU 10 VPD 0 PPD 0
numaHost: VCPU 11 VPD 0 PPD 0
numaHost: VCPU 12 VPD 0 PPD 0
numaHost: VCPU 13 VPD 0 PPD 0
numaHost: VCPU 14 VPD 0 PPD 0
numaHost: VCPU 15 VPD 0 PPD 0
numaHost: VCPU 16 VPD 1 PPD 1
numaHost: VCPU 17 VPD 1 PPD 1
numaHost: VCPU 18 VPD 1 PPD 1
numaHost: VCPU 19 VPD 1 PPD 1
numaHost: VCPU 20 VPD 1 PPD 1
numaHost: VCPU 21 VPD 1 PPD 1
numaHost: VCPU 22 VPD 1 PPD 1
numaHost: VCPU 23 VPD 1 PPD 1
numaHost: VCPU 24 VPD 1 PPD 1
numaHost: VCPU 25 VPD 1 PPD 1
numaHost: VCPU 26 VPD 1 PPD 1
numaHost: VCPU 27 VPD 1 PPD 1
numaHost: VCPU 28 VPD 1 PPD 1
numaHost: VCPU 29 VPD 1 PPD 1
numaHost: VCPU 30 VPD 1 PPD 1
numaHost: VCPU 31 VPD 1 PPD 1
numaHost: VCPU 32 VPD 2 PPD 2
numaHost: VCPU 33 VPD 2 PPD 2
numaHost: VCPU 34 VPD 2 PPD 2
numaHost: VCPU 35 VPD 2 PPD 2

Here we can see that the mapping of VPD to PPD is 1:1, but there is no physical third socket in an E5 server.

First, we have a wide VM, which means preferHT should be disabled. Another point is the limit of 16 vCPUs per virtual NUMA node (numa.autosize.vcpu.maxPerVirtualNode), which leads to this 3-socket setup: 36/16 = 2.25 => rounded up to 3.

OK, such numbers are fine, but let’s spell out what exactly this means:

The 768 GB do not fit into the 512 GB of one NUMA node, so remote access is used to satisfy the demand. The other VM should not be spread over two NUMA nodes either. This has bad effects on HANA performance.

So, in the end you have two options:

◈ Reduce the size of your HANA and resize the VM that it fits into one NUMA node
◈ Move the second VM away, so that the whole ESX can be used by the HANA VM

A productive HANA VM is not allowed to share a socket with another VM (regardless of whether it is an SAP application or not). This also means that overprovisioning is not allowed.

The example shown is unsupported in many ways. SAP can discontinue support; I have not heard from customers or colleagues that this ever happened, but what often happens is that VMware support is contacted, and you can be pretty sure they will find this. Your issue will only be processed once you have a supported setup.

Working with Staging Tables in S/4HANA Migration Cockpit

The S/4HANA Migration Cockpit is a migration tool that was initially designed for the S/4HANA Cloud edition, but it is now also available for S/4HANA 1709 FPS1 on-premise and later versions.

The S/4HANA Migration Cockpit is a browser-based (Web Dynpro) interface. No additional setup or activation is required once the SAP S/4HANA system is set up.

Key Features of Migration Cockpit


◈ This tool is embedded and delivered with S/4HANA system.

◈ No programming is required by the customer.

◈ As the name suggests, this tool is used for migrating data from SAP or Non SAP system to S/4HANA.

◈ This tool has predefined migration objects which contain the mappings for all master and transactional data. It reduces migration cost and time.

◈ Migration activities are predefined and easy to use.

Using staging tables in the Migration Cockpit, we can use database tables as a source for the migration project. These are called “staging tables”: you extract the data from the source system into these staging tables and import the data from there with the S/4HANA Migration Cockpit.

The Advantages are:

◈ Faster overall process (export/import), fewer clicks
◈ Performance
◈ Use of database tools to extract and transform

Methods to populate the staging tables:

◈ SAP Data Services
◈ HANA Studio
◈ Using Excel as a source in S/4 and populating the staging tables programmatically

This document will use SAP Data Services as the method to populate the staging database.

1. Create a Database Connection:

The first step in using staging tables is to create a database connection between S/4 and the schema where the staging tables will reside. The staging tables can exist in a remote database or in the target S/4HANA database (but in a separate schema).

Go to transaction DBCO to create the database connection (or let the Basis team do their job and ask them).


Once it is saved, the connection looks like this.


Important tip:

Creating a database connection alone does not mean you can use it as a staging area for the Migration Cockpit. Once the connection is available in DBCO, you have to whitelist it in table DMC_C_WL_DBCO_OP using transaction SM30:


Once the entry is maintained, the database connection is ready for external use.

Creating a New Project for Data Migration


Go to Transaction LTMC – Migration Cockpit.


Click on Create a new migration project and select the transfer option “Transfer Data from Staging Tables”.


When you select the database connection, you can see the connection created in DBCO in the previous step.


Click Create, then select it and save; the migration project is ready for use. I have named it STAGING_DEMO.


Click on STAGING_DEMO to enter the project with all available migration objects. A migration object comes with the following information:

1. Status – an object can be active (available for migration), deactivated (not available for migration), or started (migration has started on this object).
2. Object name – the migration object name, which relates to customer master or transactional data.
3. Documentation – clicking on the documentation opens a new window with all the information on the migration object, such as the required structure, fields, and uses.
4. Dependent migration objects – a list of migration objects that must be loaded first or already be present in the system.


You can see that all conversion objects are in status Not Started. For this demo I am using the BANK master conversion object. When you click on the BANK conversion object, a popup asks to copy the conversion object and create the staging tables.

Click OK.


A few things that you can see here:

◈ Status – Synchronized
◈ Structure – S_BNKA
◈ Staging Table – /1LT/DSR10000003


Click on the table /1LT/DSR10000003 and you can see there are no records right now.


Now we will validate the table name in S/4 and HANA Studio.

Go to SE11 and enter the table name  –


Click Display and you can see the bank master transparent table.


In HANA Studio you can also see the table under the correct schema:


Filling the table with Data to be used for Cockpit Load.

As mentioned earlier, we will use Data Services to populate the staging table.
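Alternatively, the staging table can be filled programmatically with plain SQL, as listed among the methods above. A sketch only: the schema placeholder and the column list are assumptions and must match the generated staging table structure (check it in SE11):

-- illustrative insert of one bank master record into the generated staging table
INSERT INTO "<STAGING_SCHEMA>"."/1LT/DSR10000003"
("BANKS", "BANKL", "BANKA", "ORT01")
VALUES ('US', '121000001', 'DEMO BANK', 'NEW YORK');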

Log into Data Services and map the HANA Database.


Once it is mapped, right-click and select Import by Name.


Once the table is imported, you can see it under the Datastore tab, in the Tables section of the STG datastore configured earlier.


Create a job to pass data to the Staging Table.


I have created a basic job to pass one record to the table, which we will use for the demo.

The name of the job is – JB_STAGING_AREA and it has one Data Flow – DF_STAGING_AREA.


The data flow in detail looks like this:


In the Query transform, I am passing default values for the load.


And as we have to pass only the first row, the filter is in the WHERE clause.


The job looks like this:


Now execute the job.


One record is passed to the target table:


Select the table preview:


And once the job has been processed, we can see the same record in HANA:


Load the data using Migration Cockpit


The last and main step is to migrate this data into S/4 using the Migration Cockpit.


Once you click Start, the migration starts and the progress bar shows the progress.


The Validate Data step will come back with errors and warnings.


Click Next and you see the mapping errors. These are technically not errors; S/4 just validates that the values we are passing match the help values for those fields.


Click on each field, select the proposed value, and confirm.


The field turns to green status. Do this for the next two fields.


Once the Open Tasks area is blank, click Next to go to the next step (Convert Values).


Once the task is finished, you get the message below for the conversion. Click Next to simulate the import.


Simulate Import runs the test but does not commit any values.


The simulation gives a success message indicating that the data is good to commit. You get the message at the top: “All data have been processed; choose ‘Finish’ to complete the transfer”.


Click Finish to complete the load and commit the values. The load is finished, and you can see the processed records in the Migration Cockpit.


Go to S/4 and check the table BNKA.


Go to transaction FI03 to see the data. You can see the bank master data.


With this document you will be able to import data into staging tables and then into the S/4HANA system using the Migration Cockpit. It is helpful for data migrations where the data volume is huge and the data is being transformed in Data Services. Please note that the staging concept in the SAP S/4HANA Migration Cockpit is only valid for HANA databases.

You can leverage the same steps for other conversion objects (vendors, customers, equipment, etc.). The templates for each conversion object would be different, but the fundamentals remain the same.

SAP BW4/HANA Migration Analysis


What is BW/4HANA

  • SAP’s new data warehouse
    • SAP BW/4HANA is SAP’s next-generation data warehouse solution
    • It is a new product, not a successor of any SAP BW release
  • Continuing BW functionality
    • The core functionality of SAP BW is preserved, and the transition to SAP BW/4HANA can be compared with the transition from SAP Business Suite to SAP S/4HANA
  • Reduced data redundancy
    • BW/4HANA provides a simple set of objects, suited for modelling an agile and flexible layered architecture of a modern data warehouse
  • Reduced number of objects
    • SAP BW/4HANA drastically reduces the number of data objects, hence less maintenance and storage
  • Future roadmap
    • All future innovations will take place in SAP BW/4HANA

SAP BW/4HANA vs SAP BW


SAP BW/4HANA only uses state-of-the-art object types and processes designed for use on an SAP HANA database.

Converting from SAP BW powered by SAP HANA brings the following functional changes:
  • The SAP Business Explorer tools (SAP BEx tools) are no longer supported.
    • BEx Analyzer will not be available anymore
    • Queries are now defined in SAP BW/4HANA in the BW Modeling Tools
  • Different SAP tools can be used for reporting (additional licensing to be checked)
  • Modeling in the Data Warehousing Workbench (SAPGUI) is replaced by modeling in the BW Modeling Tools in Eclipse
  • Certain object types are no longer supported
    • These are replaced by object types that are especially primed for use on the SAP HANA database (e.g. PSAs are no longer available)

Paths to SAP BW/4HANA

  • New implementation or fresh start
    • New implementations are the best choice for customers converting from a legacy system or building a system from scratch with a new data model
  • Landscape transformation
    • Landscape transformation is for customers who want to consolidate and optimize their complex SAP BW landscape (multiple production systems) into a single SAP BW/4HANA system, or who want to carve out selected data models or flows into a global SAP BW/4HANA system
  • System conversion (relevant to the customer)
    • A system conversion addresses customers who want to change their current SAP BW system into an SAP BW/4HANA system. Using the Transfer Toolbox provided by SAP, the SID of the system can be kept (in-place conversion) or a new SID can be used (remote conversion or shell conversion)

SAP BW/4HANA System Conversion


• In-Place Conversion

◈ Full system conversion of an existing SAP BW installation (keep same SID)
◈ Step-by-step in-place transfer of classic objects into their HANA-optimized counterparts
◈ Followed by a system conversion to SAP BW/4HANA
◈ Minimum start release: SAP BW 7.5 SP 5 powered by SAP HANA


• Remote Conversion

◈ Start with SAP BW/4HANA as green field installation (new SID)
◈ Support of carve-out and consolidation scenarios
◈ Transport data models and remote data transfer (including Unicode conversion)
◈ Risk mitigation due to parallel system
◈ Minimum start release: SAP BW 7.0 or higher on Any DB


• Shell Conversion

◈ Similar to Remote Conversion but without data transfer
◈ Accelerate greenfield approach by transferring and converting data models and flows
◈ Minimum start release: SAP BW 7.0 or higher on Any DB


In-place Conversion (Basic Sequence)

  • Discover / Prepare Phase
    • Check system for BW/4HANA compliance (gather information about objects and code that needs to be transferred or changed), estimate effort for the conversion project
  • Explore / Realization Phase
    • Transfer legacy objects into HANA-optimized counterparts, system conversion, post conversion tasks

• SAP BW/4HANA Transfer Cockpit
  • The SAP BW/4HANA Transfer Cockpit is the single point of entry for tools required for transitioning SAP BW systems to SAP BW/4HANA
  • Collection and Display Of Statistics Of Transfer Processes
  • Tools for Preparation Phase
    • Pre-Check Tool
    • Sizing Report
    • Code Scan Tool
    • Clean-up Reports
    • Standard Authorization Transfer –Initial Run
  • Tools for Realization Phase
    • Scope Transfer Tool
    • Standard Authorization Transfer –Delta Run
    • Deletion of Technical Content
    • Switching Operation Mode
  • Transfer Cockpit (transaction RSB4HCONV) for in-place conversion
  • System statistics showing the number and size of remaining InfoCubes and classic DataStore objects
    • Top 10 shows which objects have the most records
  • The Prepare phase includes the pre-check tool, sizing report, code scan tool, clean-up reports, and the initial run of the authorization transfer
  • The Realization phase includes the scope transfer tool, the delta run of the authorization transfer, and the change of operation mode
• Pre-check Tool
  • Check your SAP BW system using the pre-check tool (program RS_B4HANA_CONVERSION_CONTROL)
    • Check complete system
    • Check individual transport requests
    • Display logs of previous checks

General Findings after analysis

  • Navigational attributes are not generated when a DSO/Cube is converted to an Advanced DSO
    • These are permitted in CompositeProviders for reporting
    • Individual transformations, where used, will require adjustments
  • BW 3.x objects are no longer supported
  • Extraction from the PSA is not supported
    • PSAs and InfoPackages can be skipped or converted to ADSOs and DTPs
  • Only HANA, flat file, and ODP source systems are supported
  • Web Service source systems are not supported
  • BEx Analyzer and Web Templates are not supported
  • InfoObject catalogs are not supported and are to be converted to InfoAreas
  • Integrated Planning (BW-IP) is not supported and is to be converted to BPC embedded

Findings on Customer BW System

  • BEx reporting:
    • BW queries will continue to work, whereas BEx Analyzer and Web Templates will not be supported
    • Adoption of Analysis for Office is preferred before the BW/4HANA migration
  • BW-IP
    • BW-IP has to be migrated to BPC embedded
  • Source systems
    • ECC source system connections need to be replaced with Operational Data Provisioning (ODP) source systems
    • An alternative for web services needs to be analyzed; Web Service source systems are not supported (2441826 – BW4SL – Web Service Source Systems)
    • No support for hierarchy DataSources/extractors (2480284 – BW4SL – Hierarchy DataSources); further investigation is required to find an alternative
    • PSAs & InfoPackages are eliminated from BW/4 objects – a POC is required to confirm that flat file data sources are supported through DTPs
  • Authorization
    • No impact on authorization as such – but all roles and profiles have to be checked, and the existing cube names replaced with the newly created ADSO names where applicable

Recommendation (Customer also thinking S/4 HANA migration for ECC system)

  • SAP BW/4HANA is completely independent of S/4HANA. BW/4HANA is not a prerequisite for S/4HANA, and vice versa.
  • All the prerequisite activities can already start, as the current version BW 7.5 SPS 07 supports them for future readiness.
  • In general, SAP’s initial understanding while designing SAP S/4HANA (2015 and earlier) was that BW would no longer be required, but this turned out not to be true. S/4HANA mainly takes over operational reporting from BW, but the need for a data warehouse remains.
  • BW/4HANA is designed to support big data innovations, and any future enhancements will be made in BW/4HANA. Hence the BW/4HANA migration should be considered.
  • As S/4HANA provides operational reporting, the scope of BW would be revised after the S/4HANA migration; hence the actual BW/4HANA conversion should be planned after S/4. Preparation can and should be started ahead and in parallel.

Calculation Engine Plan Operators (CE Functions) Vs SQL Code

In this blog, I’m going to explain a few CE Functions and also the alternative SQL solutions, using three tables with sample data.

The reason I’m talking about this topic is that some people are still thinking about CE Functions, so I just want to clear up the myths around them. CE Functions are an alternative to SQL in SQLScript. There are 13 CE Functions in total, but out of those 13 we may only really need the CE_VERTICAL_UNION function (used to combine the columns of different tables even though they have no relation), because there is no simple alternative for it in SQL.

CE Functions are divided into 3 categories.

1. Data Source Access Operators
2. Relational Operators
3. Special Operators

We can write SQL code to achieve what the CE Functions in all three categories above do.

The execution of a CE Function happens within the calculation engine and does not allow the use of alternative execution engines, such as native L execution (“L” is an imperative language which can call upon the prepackaged algorithms available in the PAL of SAP HANA).

CE operators are converted internally and treated as SQL operations; the conversion requires multiple layers of optimization. This conversion can be avoided by using SQL directly. Be cautious before mixing CE Functions and SQL, as it may sometimes cause performance issues.

I’m going to use the tables below to explain a few CE Functions and their SQL counterparts.


E.g., a simple example using the CE Function CE_COLUMN_TABLE and SQL (see the sketch below).

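Since the original code was shown as a screenshot, here is a hedged reconstruction of the comparison inside a SQLScript procedure body (table and column names are assumptions standing in for the sample tables):

-- CE Function variant: read a column table and project the fields
lt_emp = CE_COLUMN_TABLE("EMPLOYEES", ["EMPID", "EMPNAME", "DEPTID"]);
var_out = CE_PROJECTION(:lt_emp, ["EMPID", "EMPNAME", "DEPTID"]);

-- SQL equivalent
var_out = SELECT "EMPID", "EMPNAME", "DEPTID" FROM "EMPLOYEES";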

Note: execution/processing times may change based on your system configuration and load; they are compared here only to contrast the CE Function and the SQL code.

Joining all three tables above using CE Functions:


In this scenario, I used below CE Functions.

CE_COLUMN_TABLE: selects data from a columnar table.

CE_PROJECTION: renames columns, i.e. maintains the same metadata (field names).

CE_CALC: calculates a new column.

CE_UNION_ALL: semantically equivalent to the SQL UNION ALL statement. It computes the union of two tables, which need to have identical schemas. CE_UNION_ALL preserves duplicates, so the result is a table which contains all the rows from both input tables.

CE_LEFT_OUTER_JOIN: calculates the left outer join.

Scenario:


Let’s start with the CE Functions and join the three tables above.
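The original listing was a screenshot; the following sketch shows the shape of such a join with the CE Functions listed above (all table and column names are assumptions):

lt_emp = CE_COLUMN_TABLE("EMPLOYEES", ["EMPID", "EMPNAME", "SALARY", "DEPTID"]);
lt_dept = CE_COLUMN_TABLE("DEPARTMENTS", ["DEPTID", "DEPTNAME"]);
lt_join = CE_LEFT_OUTER_JOIN(:lt_emp, :lt_dept, ["DEPTID"]);
var_out = CE_PROJECTION(:lt_join, ["EMPID", "EMPNAME", "DEPTNAME",
          CE_CALC('"SALARY" * 12', DECIMAL(15,2)) AS "ANNUAL_SALARY"]);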


Running the above CE Function code returns the joined result set.

The same CE Function code above can be replaced with SQL code.

In SQL we do not need so many functions to maintain the metadata or to calculate columns, and there is no lengthy code.
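A hedged SQL equivalent of the screenshot, reusing the assumed names from the CE sketch above:

var_out = SELECT e."EMPID", e."EMPNAME", d."DEPTNAME",
          e."SALARY" * 12 AS "ANNUAL_SALARY"
          FROM "EMPLOYEES" e
          LEFT OUTER JOIN "DEPARTMENTS" d ON e."DEPTID" = d."DEPTID";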


The SQL code returns the same result set.

Note: execution/processing times may change based on your system configuration and load; again, the point is only to compare the execution times of the CE Function and the SQL code.

CE_VERTICAL_UNION: this function combines the columns of different tables even though they have no relation.

See the example below: it combines three different tables that have no relation. If there are duplicate field names across the tables, we need to use alias names; in the code I used DEPTID AS “DEPT” and LOCATION AS “COL”, because these fields are available in two tables.
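A sketch of the call shape (variable and column names are assumptions; the aliases mirror the DEPT/COL renaming described above):

var_out = CE_VERTICAL_UNION(:lt_emp, ["EMPID", "EMPNAME"],
          :lt_dept, ["DEPTID" AS "DEPT", "LOCATION" AS "COL"]);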


The result combines the columns of all three tables side by side.

So, looking at the examples above, we can use SQL code instead of CE Functions.

Advanced Available-To-Promise (aATP) with Back Order Processing in HANA 1809

In this blog we will see how to activate the aATP process and how to create a BOP job run using Fiori applications to re-prioritize sales orders based on the strategy in the BOP.

Before going into the activation & configuration parts, let’s focus on what aATP and BOP are.

SAP introduced Advanced Available-To-Promise (aATP) with its 1610 release, with new functionality to execute order fulfilment and improve supply chain processes compared to classic ATP.

Before going further, let’s compare ATP vs. aATP.

Basic Available to Promise (ATP)


◈ Simple product availability check
◈ Basic allocation check
◈ Manual material determination
◈ Semi-manual plant substitution
◈ Material-/plant-based backorder processing for sales orders
◈ Simple transportation and shipment scheduling based on days (and hours)


Advanced Available to Promise (aATP)



◈ Backorder processing with intuitive requirement classification
◈ FIORI Applications for Release for Delivery
◈ Mass enabled fast availability check
◈ Use-case-driven product allocation check
◈ Intelligent and automated selection of best confirmation considering alternative plants and substitutable materials
◈ Easy-to-use and device-independent ATP explanation and simulation app
◈ Advanced transportation scheduling.


What is Back Order Processing



Suppose customer A requires product XX immediately, while customer B’s order is already committed with the available stock.

BOP is used to address this type of supply vs. demand conflict.

Supply: stock, production orders, etc.

Demand (issues): sales orders, STOs (outbound), scheduling agreements.

Now let’s see some key innovations released as part of the 1809 release.

1. Product Availability Check:-

◈ Promise what you can deliver / Avoid over-confirmation Fast turn-around time during online check
◈ Support of Segmentation you can include customer – specific stock segment

2. Product Allocations:- 

You can create product allocation sequences, which can be used to confirm requested quantities during availability checks for sales orders and stock transfer orders.

3. Back Order Processing:- 

◈ Support for stock transfer orders
◈ Supply assignment usability improvements in the SAP Fiori UX
◈ Simplified creation and maintenance of BOP variants
◈ New SAP Fiori applications to create and schedule jobs for BOP

Now let's get into the activation process of aATP.

Available-to-promise (ATP) stock is the uncommitted portion of a company’s inventory and planned production, used to support order promising for a customer order. The ATP quantity is very different from the available stock quantity. For example, perhaps there are 100 total pieces of stock for a product, but 80 pieces have already been committed to other sales orders or internal production. In that situation, the ATP quantity is only 20 pieces, which can be promised to new sales orders or new requirements.

[Screenshot]

To activate aATP in configuration, use transaction OVZ2:

[Screenshot]

The checking rule and scope of check remain the same as in classic ERP.

Strategies in BOP

The picture below demonstrates how each strategy can acquire inventory from the lower-priority strategies.

[Screenshot]

1) WIN:

▶Confirm as requested

▶Shall be fully confirmed in time (the most important customer orders) 

2) GAIN:

▶Improve if possible

▶Shall keep the confirmations and should gain if possible (orders that cannot lose their earlier confirmations)

3) REDISTRIBUTE:

▶Redistribute and Reconfirm

▶Might gain, might lose (orders that can lose confirmations)

4) FILL:

▶Delete confirmation, if required

▶Shall not gain anything, should keep confirmation, but may also lose (non-priority customer orders)

5) LOSE:

▶Delete confirmation

▶Shall lose all confirmations (orders under credit block)

We now proceed to the BOP applications that create the parameters deciding which inventory is allocated to which sales orders. The first BOP app is Configure BOP Segment.

These segments will filter and sort data per the selection criteria documented.

There is a host of standard options available, such as Sales Organization, Document Type, Date Ranges or Plant, providing broad flexibility in segment creation. Once complete, a segment may appear as in the screenshots below, where we select all orders shipping from supplying plant 1710.

Fiori Applications

[Screenshot]

Configure BOP Segment:-

In the selection criteria you can define a selection condition with basic code, for example: "Delivery Priority of the Sales Data of an ATP Document is equal to '01'".

[Screenshot]

You can also prioritize attributes.

Configure BOP Variant:-

[Screenshot]

In the BOP variant we assign the BOP segments created earlier, as per the requirement.

Schedule BOP Run:-

[Screenshot]

In Schedule BOP Run we supply the variant we created from the segments and run the job.

[Screenshot]

Once the job run is completed, we can check its status in the Monitor BOP Run app.

[Screenshot]

From the above screenshot, you can see that quantity from sales order "6" has been reprioritised to sales order "12".

In this way, we can reprioritise orders based on customer requirements versus available inventory.

We can use aATP with BOP to reprioritise sales orders using the different BOP strategies (WIN, GAIN, REDISTRIBUTE, FILL and LOSE), based on the customers and the business process.

ArcGIS + HANA: GIS acceleration and increased agility for ArcGIS content creators and users

The number one question I have heard from folks over the past few weeks is why Esri + HANA together?  The short answer is increased performance, lower total cost of ownership and seamless integration. But how did we get here? And what have our customers experienced along this journey?

After SAP HANA was first released around 2011, our customers saw that roll-up or summary data could be calculated on the fly from base data at speed and scale, and the impact that this had on building and maintaining data warehouses. They obtained insight as soon as the data was loaded, not after 4, 12 or 20 hours of creating summary tables. This, coupled with the fact that HANA doesn't require user-created indices, resulted in a much smaller data warehouse footprint and lower TCO. Enterprises obtained answers much sooner and were able to ask many more questions without having to rebuild the data warehouse. On top of this come HANA's ability to mash up data, virtually or physically, from many different data sources, and the ability to use HANA's engines for predictive, text analytics and graph workloads without having to move data around.

Fast forward to January 2018, when Esri announced geodatabase support for HANA. This means HANA can be the system of record for geometries and related spatial metadata. HANA has supported ArcGIS's system of engagement since 2014, through query layers that access spatial data residing in HANA.

Late last year, Esri built a 311 demo on HANA to show the advantages that HANA brings to ArcGIS.  This demo uses HANA as the system of record and the system of engagement.  Before diving into what the demo shows, let’s look at some best practices taught to ArcGIS content creators.  Following them ensures reasonable performance regardless of the underlying DBMS:

◈ When the number of rows affected by a query exceeds around 10 million, you should create summary tables
◈ You should create indices
◈ Write narrow queries used by webmaps so that they return a handful of columns

With the 311 demo, Esri found these best practices simply aren't needed when the underlying DBMS is HANA. Not having to create indices or build summary tables is among the benefits our customers observed when HANA was first released.

Let’s dive into the demo and look at the details.  Here is a summary map by borough of the number of calls:

[Screenshot]

When you view the demo online, it’s important to note the map is live.  It displays instantaneously and it doesn’t run against summary tables.  Here is the query used to obtain the data displayed on the map:

[Screenshot]
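The exact query appears in the screenshot above; as a rough sketch with invented schema, table and column names, such a narrow aggregation query layer has roughly this shape:

-- Hypothetical sketch: push the per-borough aggregation down to HANA and
-- return just four columns (id, name, geometry, measure)
SELECT b."OBJECTID",
       b."BORONAME",
       b."SHAPE",
       agg."CALL_COUNT"
FROM "NYC311"."BOROUGHS" AS b
INNER JOIN ( SELECT "BOROUGH_ID",
                    COUNT(*) AS "CALL_COUNT"
             FROM "NYC311"."SERVICE_REQUESTS"
             GROUP BY "BOROUGH_ID" ) AS agg
  ON agg."BOROUGH_ID" = b."OBJECTID";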

Using query layers, ArcGIS pushes the aggregation into HANA where it executes against the base data.  The query is also an example of a narrow query – it returns just 4 columns.  In the next part of the demo, Esri chose to write one large query using SQL and SQL Script for use by all webmaps. Typically, each webmap would have its own narrow query like the one above.  Below is the large query – which could be added on to with no attendant loss of performance:

[Screenshot]

The above query returns in 750 milliseconds against the base data.  In other DBMSs, it would take 2 to 3 minutes or more to execute.  This is GIS acceleration.  In addition, instead of 30 separate queries, there’s only this one query.  Here is a webmap that utilizes that query – it shows the count by year for each ZIP Code that’s clicked or tapped on.  Each tap or click causes the query to execute against HANA.
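A sketch of that per-click drill-down, again with invented names, might look like this; the webmap binds the tapped ZIP Code as the parameter:

-- Hypothetical sketch: count calls by year for the ZIP Code the user tapped
SELECT "ZIPCODE",
       YEAR("CREATED_DATE") AS "CALL_YEAR",
       COUNT(*)             AS "CALL_COUNT"
FROM "NYC311"."SERVICE_REQUESTS"
WHERE "ZIPCODE" = ?            -- parameter bound by the webmap on each tap
GROUP BY "ZIPCODE", YEAR("CREATED_DATE")
ORDER BY "CALL_YEAR";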

[Screenshot]

There are nine additional webmaps that use bivariate comparisons to show the impact of two different factors. This one shows summer vs. winter calls:

[Screenshot]

ZIP Codes in yellow are those where noise complaints are high in the summer, medium-blue where noise complaints are high in the winter and dark blue are where noise complaints are high year-round.

To recap, for an enterprise that uses ArcGIS as the system of engagement, these advantages mean:

1. The data displayed by webmaps isn’t stale because it is from the base data, not summary tables

2. The ability to create new webmaps to answer new questions doesn't rely on an existing summary table. The questions can drive the creation of a new webmap immediately

3. HANA can process aggregate queries against data at speed and scale which means data for the whole enterprise can underpin any webmap – an organization can now create an organization-wide atlas of maps to show their KPIs across the entire operation

4. Increased agility and reduced query governance.  Where there would have been 30 queries, there’s now one – resulting in faster innovation, reduced maintenance and governance and lower TCO

5. Because there are no summary tables and no user-created indices, the data footprint is smaller

But what if your production GIS is running on another DBMS, or you're on an older version of ArcGIS Enterprise? You can still leverage HANA's advantages by creating a publication geodatabase (or sidecar) with ArcGIS Enterprise 10.6 or greater and HANA 2 SP2 or greater. Any ArcGIS administrator can use the tools they already know to copy in the desired feature classes. The bottom line: there is no need to wait to gain the GIS acceleration and agility that HANA and ArcGIS together offer.

SAP HANA Based Transformations (Processing transformations in HANA) aka ABAP Based Database Procedure (AMDP)

SAP HANA Based Transformations (New way of writing Routines)

As most of us have worked on SAP BW and written ABAP routines in BW transformations to derive business logic, we have often noticed performance issues while loading data into a DSO, InfoCube, or master data InfoObject.

There could be numerous reasons for this:

◈ Bad coding
◈ Memory bottlenecks
◈ Database performance
◈ Limited resources
◈ The row-based approach to processing records, and more

An ABAP-based BW transformation loads the data package by package from the source object into the application layer (ABAP). The transformation logic is executed in the application layer, and the transformed data packages are shipped back to the database server, which writes the result packages into the target object. The data is therefore transmitted twice between the application layer and the database layer.

During processing of an ABAP-based BW transformation, the source data package is processed row by row.

With a HANA-based BW transformation, however, the data can be transferred directly from the source object to the target object within a single processing step. This eliminates the data transfer between the database layer and the application layer.

The complete processing takes place in SAP HANA**.

**Note: Some user-defined formulas in transformations can prevent the code pushdown to HANA.

[Screenshot]

Key Differences:

[Screenshot]

How to Create SAP HANA Transformations:


Step 1: Create an aDSO, say ZSOURCE, with active data and change log in the BW Modelling Tools in Eclipse.

[Screenshot]

Step 2: Create a transformation between the aDSO and the DataSource. Create an expert routine from

Edit Menu -> Routine -> Expert Routine.

A pop-up will ask for confirmation to replace the standard transformation with an expert routine. Click the "Yes" button.

The system will then ask for an ABAP routine or an AMDP script. Click on AMDP Script.

[Screenshot]

[Screenshot]

An AMDP class will be generated with the default method PROCEDURE and the default interface IF_AMDP_MARKER_HDB.

[Screenshot]

Step 3: Open the ABAP Development Tools in Eclipse, connected to the BW on HANA system in the ABAP perspective, to change the HANA SQLScript.

[Screenshot]

Important Points to note:

◈ Write your custom code after the statement:

METHOD PROCEDURE BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.

◈ inTab and outTab are the importing and exporting parameters of the SAP HANA transformation, respectively; they are analogous to SOURCE_PACKAGE and RESULT_PACKAGE in ABAP routines.
◈ There is no need to define internal table types and structures in SQLScript; the system defines them internally.
◈ If you want to access other ABAP Dictionary tables, views or other AMDP procedures, you have to list them in the USING clause of the method.

For example, if we want to look up another DSO or master data tables, or read the target table, we need to list those objects in the USING clause of the method:

METHOD PROCEDURE BY DATABASE PROCEDURE

FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY USING /BIC/AZSOURCE2.

For better understanding, I have explained three real-world scenarios showing how to create AMDPs and write the code:

Scenario 1: One-to-one mapping from source to target using a SAP HANA expert routine


1. Create an aDSO, say ZSOURCE, and add fields to it.
2. Create a SAP HANA transformation with an expert routine (steps explained above).
3. Write an expert routine in the method PROCEDURE to assign source fields to target fields.

In the code below, we select fields from inTab and assign them to the exporting parameter outTab (no additional logic in the transformation).

Code: 

[Screenshot]
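The code itself is shown in the screenshot above; as a plain-text sketch (using the field names from the later scenarios, which may differ from the actual screenshot contents), the routine amounts to the generated one-to-one template:

METHOD PROCEDURE BY DATABASE PROCEDURE
       FOR HDB LANGUAGE SQLSCRIPT OPTIONS READ-ONLY.

  -- one-to-one assignment: no additional transformation logic
  outTab = select "/BIC/ZFIELD1",
                  ''"RECORDMODE",
                  "/BIC/ZFIELD2",
                  "/BIC/ZFIELD3",
                  ''"RECORD",
                  ''"SQL__PROCEDURE__SOURCE__RECORD"
           from :intab;

  -- no error handling here, so errorTab stays empty
  errorTab = select ''"ERROR_TEXT",
                    ''"SQL__PROCEDURE__SOURCE__RECORD"
             from :outTab;

ENDMETHOD.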

Output 

[Screenshot]

Scenario 2: Update the deletion indicator in the target DSO when records are deleted from the source system, using SQLScript


[Screenshot]

If we built this logic in ABAP, we would need to match the target records one by one with the source object data for all data packages and set the deletion flag to X for any record not present in the source object. With high data volumes this can cause performance issues, and the SQLScript logic is much simpler than the equivalent ABAP.

In SQLScript, we can achieve this as follows:

The first step is to declare a variable lv_count to store the count of entries in the target table. This tells us whether this is the first load into the DSO or a successive load, and we build our logic on that.

For the first load, LV_COUNT will be zero. In this case we transfer all the source data as-is to the target DSO with a blank deletion flag, since there is nothing to compare against.

For successive loads, we compare the existing records in the target DSO with the latest records coming from the source and update the deletion flag accordingly.

[Screenshot]

Target DSO Contents after the first load:

[Screenshot]

**Say that after the first load, the record with Field1 = 500 is deleted from the source system. In that case, we need to set the flag to X.

We can declare the variable using a DECLARE statement:

DECLARE <variable> INTEGER;

For the successive loads, we compare the existing records in the target with the source, as mentioned above.

[Screenshot]

Here lv_count is the variable and /BIC/AZSOURCE2 is the target DSO, whose content I have copied into the temporary table it_target.

In this case I count the number of records in the target DSO and store it in the lv_count variable.

**it_target is the temporary table declared to hold the content of the target DSO, which is used to compare against the records in the source package.

If lv_count > 0 then

/* copy the latest records from the source into temporary table it_zsource1 */

[Screenshot]

It_zsource1 will have:

[Screenshot]

[Screenshot]

*it_target is the target DSO contents

*:intab is the source package contents

it_zsource2 will have:

[Screenshot]

Union of It_zsource1 and It_zsource2

[Screenshot]

it_zsource3 will have the data shown below, and this is the final output that we need to load into outTab:

[Screenshot]

This is the final output required, and we assign it to outTab:

[Screenshot]

The else statement below handles the first load, when lv_count = 0, and transfers the records from the source to the target as-is.

[Screenshot]

Sample code below:

 -- INSERT YOUR CODING HERE
/* Default template generated by the system:
outTab = select "/BIC/ZFIELD1",
                ''"RECORDMODE",
                "/BIC/ZFIELD2",
                "/BIC/ZFIELD3",
                ''"/BIC/ZFLAGDEL",
                ''"RECORD",
                ''"SQL__PROCEDURE__SOURCE__RECORD"
         from :intab;  */

declare lv_count integer;                       /* counter; INTEGER suits count(*) better than varchar(3) */

it_target = select * from "/BIC/AZSOURCE2";     /* copy the target DSO contents */

select count(*) into lv_count from :it_target;  /* number of records already in the target */

if lv_count > 0 then                            /* successive loads */

/* latest records coming from the source */
it_zsource1 = select sourcepackage."/BIC/ZFIELD1" as field1,
                     sourcepackage."/BIC/ZFIELD2" as field2,
                     sourcepackage."/BIC/ZFIELD3" as field3,
                     '' as zfield1
              from :intab as sourcepackage;

/* target records left-joined to the source; zfield1 stays NULL for
   records that no longer exist in the source (deleted or updated) */
it_zsource2 = select ztarget."/BIC/ZFIELD1" as field1,
                     ztarget."/BIC/ZFIELD2" as field2,
                     ztarget."/BIC/ZFIELD3" as field3,
                     sourcepackage."/BIC/ZFIELD1" as zfield1
              from :it_target as ztarget
              left join :intab as sourcepackage
                on ztarget."/BIC/ZFIELD1" = sourcepackage."/BIC/ZFIELD1";

/* union of it_zsource1 and it_zsource2: deleted records get flag 'X' */
it_zsource3 = select *, '' as flag from :it_zsource1
              union all
              select *, 'X' as flag from :it_zsource2
              where zfield1 is null;

outTab = select field1 as "/BIC/ZFIELD1",
                ''"RECORDMODE",
                field2 as "/BIC/ZFIELD2",
                field3 as "/BIC/ZFIELD3",
                flag as "/BIC/ZFLAGDEL",
                ''"RECORD",
                ''"SQL__PROCEDURE__SOURCE__RECORD"
         from :it_zsource3;

else  /* first load, or the DSO is empty */

outTab = select "/BIC/ZFIELD1",
                ''"RECORDMODE",
                "/BIC/ZFIELD2",
                "/BIC/ZFIELD3",
                ''"/BIC/ZFLAGDEL",
                ''"RECORD",
                ''"SQL__PROCEDURE__SOURCE__RECORD"
         from :intab;

end if;

errorTab = select ''"ERROR_TEXT",
                  ''"SQL__PROCEDURE__SOURCE__RECORD"
           from :outTab;

Scenario 3: Use of window functions in SAP HANA SQLScript


[Screenshot]

Key1, Key2 and Field3 should hold the values from the source table.

Field4x should hold the latest value of Field4 from the lookup table, based on the dates.

Field5x should hold the sum of all values of Field5 for the same keys.

Field6x should hold the value previous to the latest transaction.

Solution:


In ABAP routines it would be quite complex to read the previous transactions and update the field value. In SQLScript, however, window functions let us express such computations in a much simpler way.

There are many window functions available in SAP HANA SQLScript:

1. RANK() – returns the rank within a partition, starting from 1
2. ROW_NUMBER() – returns a unique row number within a partition
3. DENSE_RANK() – returns ranking values without gaps
4. PERCENT_RANK() – returns the relative rank of a row
5. LEAD() – returns the value of the row <offset> rows after the current row
6. LAG() – returns the value of the row <offset> rows before the current row

And there are many more.

In my scenario, I will use the window functions RANK() and LEAD() to achieve the required output.

Define the it_lookup temporary table and copy the records from the lookup DSO into it.

[Screenshot]

So it_lookup will have this data:

[Screenshot]

Then I ranked the records using the RANK() window function, based on the date in descending order, and used the LEAD() window function to pick up the value of the transaction preceding the latest one (the next row when sorted by descending date).

[Screenshot]
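In plain text, that step might look like the following sketch; the key, date and field names here are assumed:

-- Rank newest-first per key; with descending dates, LEAD() returns the
-- chronologically previous transaction's value (requirement Field6x)
it_tab1 = SELECT "KEY1", "KEY2", "FIELD3", "FIELD4",
                 RANK() OVER (PARTITION BY "KEY1", "KEY2"
                              ORDER BY "CHANGED_ON" DESC)         AS rnk,
                 LEAD("FIELD4") OVER (PARTITION BY "KEY1", "KEY2"
                                      ORDER BY "CHANGED_ON" DESC) AS field6x
          FROM :it_lookup;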

It_tab1 will have:

[Screenshot]

For the Field5x requirement, we need to sum the Field5 values for records with the same keys.

[Screenshot]

[Screenshot]

Inner join it_tab1 and it_tab2 where rank = 1:

[Screenshot]
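Continuing the sketch with the same assumed names, the sum for Field5x and the final join could read:

-- Field5x: total of FIELD5 per key combination
it_tab2 = SELECT "KEY1", "KEY2",
                 SUM("FIELD5") AS field5x
          FROM :it_lookup
          GROUP BY "KEY1", "KEY2";

-- keep only the latest transaction (rnk = 1) and attach the sums
it_tab3 = SELECT t1."KEY1", t1."KEY2", t1."FIELD3",
                 t1."FIELD4" AS field4x,
                 t2.field5x,
                 t1.field6x
          FROM :it_tab1 AS t1
          INNER JOIN :it_tab2 AS t2
            ON  t1."KEY1" = t2."KEY1"
            AND t1."KEY2" = t2."KEY2"
          WHERE t1.rnk = 1;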

it_tab3 will have:

[Screenshot]

This is the required output; hence we assign the fields from it_tab3 to outTab:

[Screenshot]