
Back to Basics – SAP HANA and the Virtual Data Model

The SAP HANA intelligent data platform has been available for eight years, and with each passing year, we continue to see more and more innovation. You can expect the same in the next few weeks with the release of HANA 2.0 SPS04. However, I wanted to take an opportunity to step back and have a look at some “basics” of SAP HANA – what made it different back in 2011 and how this difference continues to add value in 2019 and beyond. This is part one of a two-part series, with another blog planned for next week. For now, I’ll focus on the in-memory and columnar structure of SAP HANA as well as the value it enables via a virtual data model.

In-Memory and Columnar


I do not intend to get into the technical detail and internals of SAP HANA as there is plenty of help documentation for that. It’s worth mentioning again that SAP HANA pioneered the concept of having a single copy of data in an in-memory and columnar structure and continues to be the market leader in this space. The “single copy” part of my statement is what truly differentiates SAP HANA, meaning we don’t have to transact in a row-based structure and then re-persist the same data in a column-based structure creating redundancy and latency. With the technology advancements in SAP HANA, we’re able to transact directly on the in-memory columnar structure and immediately reap the benefits. This enables optimal performance for both transactional and analytic workloads. Even if we don’t transact directly on SAP HANA, it can handle the load of transactional replication. This means that we can do real-time replication from other databases into this in-memory columnar structure without having to batch up the data with an ETL tool like in the past (though we can do that too if you’d like; perhaps because the source can’t handle the load). Ultimately, the in-memory columnar technology is a key enabler for application advancements like SAP S/4HANA and SAP BW/4HANA, but it also has merit outside those applications which is what I will discuss next.


Virtual Data Model


In the first 6-9 months of SAP HANA being generally available, and before BW on HANA came along,  there really was just one use case: SAP HANA as a data mart solution. Sure, we gave it some other names like “side-car,” agile data mart, operational reporting, and application accelerator, but in the end all of these rely on loading data into the platform, modeling the data into a nice structure for analytics, and then connecting either an application or a BI tool to read the data. That sounds easy enough in concept, but what was different was how we were able to accomplish this with SAP HANA. To explain that part, I’m going to try to use the same analogy I’ve been using since 2012…

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

This may miss the mark with some of my millennial and Gen Z friends, but growing up in the 80s and 90s (Xennials rule!) I was no stranger to the “mix tape.” If I wanted to make a compilation of ‘Love Songs’ for my girlfriend or ‘Raver’s Beats’ for my buddies, it was a big effort. The steps were something like the following:

Mix Tape
1. Identify the theme
2. Determine songs and order – this also involved a calculation of the cassette duration and what could fit
3. Get the songs – usually recorded onto a raw tape from the radio by lining up the microphone of my boombox with the speakers of my brother’s boombox (and yes, we called the radio station to request the songs)
4. Copy the songs from the raw tape onto the mix tape in the proper order – being careful to maintain time between songs, etc.
5. Rework over and over until I like the result
6. Hand write the insert for the cassette – with “correction fluid” to help me out based on #4 (As a side note, did you know Wite-Out [or Tipp-Ex for my European friends] and Liquid Paper are still selling well?)
7. Deliver the finished product

This was a true labor of love, and in some ways I miss it. Over time it got a little easier with the introduction of dual cassette decks, the CD, the MP3, and Napster (I admit no wrong doing)…but the game totally changed in 2001 when Apple introduced iTunes and the iPod. Ever since then, I can easily buy my music online and simply drag and drop it into a playlist that automagically syncs to my device. Importantly for my analogy, the digital playlist isn’t making physical copies of the songs but rather simply stores pointers to the one copy in the library or on the device. This allows an easy adjustment at the click of a button if I decide my taste has changed. I may not be making playlists for the same themes as my old mix tapes, but it’s sure easy to maintain my ‘Running Music’ and ‘Kids Songs’ these days.

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

While that was a fun little diversion to the 80s/90s for me, there must be a reason I brought it up in the context of SAP HANA. In my mind, the 2011 introduction of SAP HANA to the world of business data was similar to the introduction of iTunes to the consumer music world in 2001. Think about it for a minute from a data and business analytics perspective. I can easily draw a comparison between the steps for my “mix tape” and the high-level steps involved in a business analytics project:

Mix Tape → Business Analytics
1. Identify the theme → Identify the business domain/process
2. Determine songs and order → Determine the necessary data sets (in many cases specific fields)
3. Get the songs → Get the data (often an arduous effort of batch extraction)
4. Copy the songs from the raw tape onto the mix tape in the proper order → Restructure the data for consumption (layers of persisted transformations)
5. Rework over and over until I like the result → Rework (the business needs will inevitably evolve as the output is better understood)
6. Hand write the insert for the cassette → Document the solution
7. Deliver the finished product → Deliver the finished product

The difference with SAP HANA comes in steps 3-5 with the introduction of the virtual data model. Everyone who has worked on a data mart/warehouse project knows that the vast majority of the effort encapsulated in the above steps comes in trying to prepare the data to answer business requirements. With SAP HANA, we changed the game as follows:

◈ Get the data – Simplify the process and do straight 1:1 replication from source to target, only applying logic like filter conditions or excluding fields if a specific requirement (e.g. security) dictates the need to do so. I recommend to customers that they establish a principle that data gets stored “exactly once in its most raw format” in SAP HANA.

◈ Restructure the data for consumption – This is the not-so-secret sauce of SAP HANA; because of the previously mentioned technology we can now build a completely virtual data model (aka playlist). The graphical modeler in SAP HANA allows us to build an information model (special types of views) that integrates all of the raw data into a virtual structure that is ready for consumption. This means any aggregations, calculations, transformations, joins, etc. (including the often-requested currency conversion, unit of measure conversion, and hierarchies) all happen dynamically at the time of access. In-memory columnar enables this on-the-fly processing, and the result is that we have removed latency and redundancy from the process – the data is ready for consumption the moment it lands in its raw format (a small SQL sketch follows this list). [Note: I recognize that there is still the possibility of extremely costly processing that needs to be persisted and of course we need to manage those…but only on an exception basis!]

◈ Rework – While I love the above bullet, this is the one that gets me the most excited about the virtual data model. A former manager of mine used to talk about how you could “fail fast” with SAP HANA. This sounds counter-intuitive asking you to fail, but the reality is that business users rarely know exactly what they need on the first try. With the virtual data model, we can quickly get a model in front of the user, recognize the gaps (failures), and tease out the real business needs. This agility comes from the on-the-fly nature of the virtual data model and avoidance of waiting for overnight batch ETL jobs to complete. In addition, because we’ve brought over all fields from the source, it isn’t an issue when a new data element is requested – a very different experience from the past when we might have also had to change our extraction jobs. The result of these changes is that it becomes much more realistic for developers and business users to sit together and work through challenges in an agile manner.
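To make the idea of the virtual data model concrete, here is a minimal, purely illustrative SQL sketch (the table and column names are hypothetical, not taken from any specific SAP data model): a plain view over 1:1 replicated raw tables, where the join and aggregation happen at the moment the view is queried rather than being persisted.

-- Hypothetical raw tables, replicated 1:1 from the source system:
--   ORDERS(ORDER_ID, CUSTOMER_ID, ORDER_DATE)
--   SALES_ITEMS(ORDER_ID, MATERIAL, QUANTITY, NET_VALUE)

-- The "virtual data model": evaluated on the fly at access time, nothing persisted
CREATE VIEW SALES_BY_CUSTOMER AS
SELECT o.CUSTOMER_ID,
       YEAR(o.ORDER_DATE)         AS ORDER_YEAR,
       SUM(i.NET_VALUE)           AS TOTAL_NET_VALUE,
       COUNT(DISTINCT o.ORDER_ID) AS ORDER_COUNT
FROM   ORDERS o
JOIN   SALES_ITEMS i ON i.ORDER_ID = o.ORDER_ID
GROUP  BY o.CUSTOMER_ID, YEAR(o.ORDER_DATE);

In practice you would build this as a graphical calculation view (like the example shown below) rather than a plain SQL view, but the principle is the same: the integration logic runs at read time against the single raw copy of the data.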

An example of a virtual data model built as a graphical calculation view in SAP HANA. In this relatively simple case, we have an integrated view of four tables that are modeled together to provide a single semantic layer for consumption.

I mentioned that this type of data mart was essentially the only use case in the first 6-9 months of general availability for SAP HANA. It is also a key part of the foundation for what differentiates applications on SAP HANA (consider the new data model and embedded analytics in SAP S/4HANA) – including the architecture of a full-blown data warehouse. Today, this remains one of the most common use cases for SAP HANA with thousands of customers using a virtual data model productively to provide real-time information to their business users. Use cases run the gamut in both type of data and purpose, but the message is clear that the SAP HANA approach with the virtual data model is a differentiated method of preparing business data for consumption.

Chatbot with Alexa + SAP Conversational AI + SAP Hana


Introduction


In this blog you will learn how to create a bot in SAP Conversational AI, integrated with Alexa (using an Echo Dot 3rd gen), to retrieve data from an SAP HANA database.

The Fiori application used here is only there to display the information we ask the bot for and to give the user visual feedback. Our focus here is not Fiori development, but if you would like to see how this app works or use it for testing, it’s available here. Special thanks to Danilo Jacinto, who developed this app.

This video shows how our POC works:


This video shows the POC working in a presentation at SAP Inside Track São Paulo, Brazil, held on March 23, 2019.


At the end of this blog I’ll show a video of this same bot working in Telegram and in a web chat in the Fiori Launchpad.

Pre-Requisites


For this development you will need to complete the following steps:

◈ Create a free trial account on SAP Cloud Platform. We used the Neo region, but you’re free to use any region.
◈ Create a free account in SAP Conversational AI.
◈ Complete the first tutorial in SAP CAI (it’s important to understand how to create a bot, and it’s very simple and fast) (link)
◈ Create an Amazon Alexa Developer account – if you don’t have an Alexa device (e.g. Echo Dot, Echo Spot, etc.), don’t worry, you can run these tests in the simulator available in the Alexa Developer cockpit.

Step-by-step


Create a new Hana database, tables and XSJS

Access your SAP Cloud Platform trial, select your region and follow these steps:

In “SAP HANA / SAP ASE > Databases & Schemas”, create your HANA MDC database and set up the passwords for your accounts.


Start your HANA database and open the SAP HANA Web-Based Development Workbench, or connect with Eclipse if you prefer.

Create a new schema and a new table in HANA as follows:

namespace alexa.data;

@Schema: 'alexa'

context customer_data {

    type SString : String(40);
    type LString : String(255);

    @Catalog.tableType : #COLUMN
    Entity Customers {
        key ID      : Integer;
        NAME        : SString;
        COUNTRY     : SString;
        DESCRIPTION : LString;
    };
};
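Optionally, insert a couple of sample rows so that the service we build next has something to return. The catalog table name below is the one generated from the CDS definition above; the sample values are, of course, purely illustrative:

INSERT INTO "alexa"."alexa.data::customer_data.Customers"
       VALUES (1, 'ACME Ltd', 'Brazil', 'Sample customer for testing');
INSERT INTO "alexa"."alexa.data::customer_data.Customers"
       VALUES (2, 'Globex Corp', 'Germany', 'Another sample customer');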

Create a new XSJS file (e.g. get_customers.xsjs). This service will be responsible for reading data from your table and sending it back to your bot via a webhook that we will create shortly.

// Get the "location" parameter passed by the bot's webhook call
var locl = $.request.parameters.get("location");
// Open a connection to the database
var conn = $.db.getConnection();
var pstmt = null;
var rs = null;
var tot = 0;

// Use a bind variable instead of string concatenation to avoid SQL injection
pstmt = conn.prepareStatement('SELECT COUNT(*) FROM "alexa"."alexa.data::customer_data.Customers" WHERE "COUNTRY" = ?');
pstmt.setString(1, locl);

try {
    // Execute the query and read the customer count
    rs = pstmt.executeQuery();
    while (rs.next()) {
        tot = rs.getString(1);
    }
    pstmt.close();
} catch (e) {
    $.response.setBody(e.message);
    $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
}

var resp;
if (tot < 1) {
    resp = "Location not found";
} else {
    resp = "In " + locl + " we have " + tot + " customers";
}

conn.close();

// The reply must be a JSON document exactly as described in the SAP CAI documentation
var output = JSON.stringify({ "replies": [{ "type": "text", "content": resp }], "conversation": { "language": "en" } });

// Return the JSON response
$.response.status = $.net.http.OK;
$.response.contentType = "application/json";
$.response.setBody(output);

Important: the response must be JSON in the following format (this format is described in the SAP CAI documentation):


Save the URL of your XSJS service, which looks as follows:

https://hanadb-xxxxxxxxx.hanatrial.ondemand.com/alexa/get_customers.xsjs

How to create a Bot in SAP Conversational AI?


If you did the tutorial available in CAI (How to create your first bot), you know how simple it is to create a bot using CAI. But here I’ll show step by step how to create a simple bot that consumes data from an SAP HANA database using an XSJS service.

Access your account in SAP CAI and Create a new Bot


Select predefined skills for your bot

For this sample we will use just the “Greetings” skill.


Information about your bot

Here you must enter your bot name (e.g. hanabot) and fill in the description and topics if you want (e.g. external-customers).


Choose your options for “Data Policy” and “Bot visibility”.
And click on “Create a Bot”.

Train

In the “Train” tab you have to create or search for intents…


For this case, choose the existing intents “Greetings” and “Goodbye”.


Create a new Intent (e.g. ask-customer)

Click on “+ Create” and enter the intent name as below:


And click on “Create Intent”.

Input expressions in your new Intent

Click on your new intent:


Enter new expressions like these:


Add as many expressions as you can (SAP recommends about 30-40 expressions).


Build

In the Build tab, create a new skill

Click on “+ Create Skill” and input a name for your skill (e.g. “ask-customers”).


And click on “Create Skill”.

Click on your new skill and then go to the “Triggers” tab



In the “Triggers” tab, fill the “If” field with @ask-customer (this means that whenever the intent “ask-customer” is detected, the skill “ask-customers” will be triggered). Click Save.


In the “Requirements” tab, fill in the fields as follows:


This means that when the user mentions a “location”, the bot will save this information in memory as “location”.

The actions of this skill will only start once the requirements are complete, so we must set up an action for when the location is missing.

Click on the arrow on the right side of location and click on “+ New Replies” in the line that reads “if #location is missing”.


In the next screen, select “Send Message”, then select “Text”, enter a message asking for a location, and click “Save”:


Click “Back”.

In the “Actions” tab click on “Add new message group”


It’s important to know that “Actions” are only executed when all requirements are met.

Click on “Call Webhook”.


Fill in the URL of your XSJS file, as follows:

https://hanadbp-xxxxxxxx.hanatrial.ondemand.com/alexa/get_customers.xsjs?location={{memory.location.formatted}}

For this case, we use “Basic Authentication” (this must be configured in your database).


And click on “Save”.

This webhook call returns exactly the JSON format produced by the XSJS service, and the bot uses this content to display the answer!

Connect your Bot with Alexa


Create an Intent START_CONVERSATION

Create a new intent (e.g. start-conversation); it will be responsible for starting the conversation when Alexa calls the skill.


Insert the expression “START_CONVERSATION”


Create a Skill start-conversation

In the Build tab, create a new skill (e.g. start-conversation) and trigger this skill with the intent @start-conversation


Create a Skill Goodbye

Create a new skill (e.g. goodbye) and trigger this skill with the intent @goodbyes


Set a variable END_CONVERSATION

In the Actions tab of the goodbye skill, set a flag in the variable END_CONVERSATION. This is important to end the skill in the Alexa app.

In Actions, click on “Add New Message Group”, then click on “Update Conversation” and select “Edit Memory”.


Fill the field “Set Memory Field” with END_CONVERSATION and the value TRUE


And save it.

Connect with Alexa account

Connecting your Alexa account is very simple: go to the “Connect” tab of your bot


Select the “Amazon Alexa” option and follow the steps to connect; you just need to provide your Amazon account and create an invocation name.

Done – now your Alexa device, or the simulator in the Alexa Developer cockpit, can be used to retrieve information from the HANA database.

Additional features:


This same Bot in a Telegram Bot


This same Bot in a Webchat Bot in Fiori Launchpad.

The tale of SAP HANA, SAP Analytics Cloud, and Brexit

In this blog I want to show you an end-to-end example of taking unstructured JSON data, loading it into SAP HANA, enriching it with geo-spatial attributes and exposing it to SAP Analytics Cloud for further analysis.

The problem with most tutorials is that they are usually focused on randomly generated abstract data (like SFLIGHT or EPM Model data), and for some people this doesn’t really mean much, so I thought a real-life example of analysing real, up-to-date data would be very beneficial for everyone.

Now, there are many ways to automate most of the tasks shown in this post, but I just wanted to show a quick and simple example of data consumption, transformation and analysis which can be done in a few hours over the weekend. Now let’s begin.

If we go to a certain petition page we would see a page which looks like this:

That doesn’t mean much and we can’t easily extract any meaningful data. But there is a way – if we change the link to https://petition.parliament.uk/petitions/241584.json we would be able to see the data in JSON format (quite unstructured on the screenshot):


There is a JSON view plugin for Chrome to help us see the formatted version:


Install the extension and reload the page:


Ok, now we are getting somewhere. The next step is – how do we get this data into our SAP HANA system? I decided to create a set of SQL statements to create the necessary unstructured table and insert data into it. To do that, I firstly had to find a way to easily create the SQL DDL and DML statements without going manually one by one through each line of JSON file.

There is another great online service for that: SQLify.io – you feed it JSON or CSV files and it creates a set of DDL and DML statements for you, which after some tweaking can be executed in the SAP HANA system.

Let’s go to SQLify.io and enter our link from above:


Click “Convert to SQL” and see the result:


As I mentioned, we will end up with a table of unstructured data, which is fine for us, as we will then create the necessary HANA calculation views on top of that table. We will just go ahead and remove the unnecessary fields. After tidying up we will end up with just a handful of meaningful fields:


Click “Save schema and continue”. Save the resulting SQL and open it locally (I use VS Code for any SQL, JS and sometimes ABAP development work):


It looks great (and perfectly unstructured), but it is hardly ready for SAP HANA SQL. What needs to be done is a little makeover, which results in the following script that we can run in HANA. I have created a new schema for this blog post, so that everything is neatly contained in one place (the link to the full script can be found at the end of this post):

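Since the screenshot is hard to read here, a rough, purely illustrative sketch of what the reworked script looks like – the schema name and the COUNTRY_CODE / CONSTITUENCY_ONS_CODE columns are the ones used later in this post, while the remaining column names and the sample row are just placeholders (the real, complete script is the one linked at the end of the post):

CREATE COLUMN TABLE "SAC_SOURCE"."PETITION_241584_DATA" (
    "COUNTRY_NAME"          NVARCHAR(100),  -- illustrative
    "COUNTRY_CODE"          NVARCHAR(2),
    "CONSTITUENCY_NAME"     NVARCHAR(100),  -- illustrative
    "CONSTITUENCY_ONS_CODE" NVARCHAR(10),
    "SIGNATURE_COUNT"       INTEGER         -- illustrative
);

-- One INSERT per country / constituency entry from the JSON, for example:
INSERT INTO "SAC_SOURCE"."PETITION_241584_DATA"
       VALUES ('France', 'FR', '', '', 5000);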

Open SQL console of your HANA DB (use the default tenant or whichever tenant you like) and run the SQL script to create petitions data table with all the values.

Check the created tables and values:

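For example, a quick sanity check in plain SQL (the table name matches the one used in the scripts later in this post):

-- Row count and a quick peek at the content
SELECT COUNT(*) FROM "SAC_SOURCE"."PETITION_241584_DATA";
SELECT TOP 10 *  FROM "SAC_SOURCE"."PETITION_241584_DATA";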

Ok, so far so good. Now, in order to do geo-analysis we need a geo coordinates table somewhere. I found a table with countries and their longitude and latitude details online and also converted it to a SQL script for HANA:


Run the script and check the resulting table:


All good with Countries. A bit harder with UK Constituencies coordinates. I went to the Office for National Statistics website and downloaded the full Postcode Directory (a bit of an overkill for this small exercise, but very useful in future). You can download the full archive from here:

http://geoportal.statistics.gov.uk/items/ons-postcode-directory-february-2019

Create the table definition using SQL command:

CREATE TABLE ONSPD_FEB_2019 (
pcon VARCHAR(10) NULL,
lat FLOAT NULL,
long FLOAT NULL
);

After the table is created, go to Eclipse and choose File->Import

Note: I switched to Eclipse here, because the file is massive and SAP HANA Web-based Development Workbench won’t be able to handle it. You can use Eclipse to complete all the steps in this post, if you prefer.


Choose Import data from local file:


Pick up the file from the downloaded archive and make other relevant selections:


Create the mappings and click “Finish” to start the upload (which would take quite a while if you are using the remote HANA system, so go grab some lunch or a cuppa):


After the upload is completed, check the data in the table:


The problem is that the ONSPD goes to a finer granularity than we need – the coordinates go all the way down to Ward and Parish, but we only need one set of coordinates per constituency. Therefore, I decided to take the distinct ONS codes from the PCON field with averaged coordinates and use those as the coordinates of each constituency, which isn’t really precise or ideal, but it works. There might be an easier way to find the coordinates based on the ONS code from the Petitions site; I haven’t found it and would love to hear your thoughts in the comments section below.

Now, before we proceed, I strongly suggest you check the following SAP Note:

https://launchpad.support.sap.com/#/notes/0002395407

In order to create ST_Geometry location coordinates, the steps in that note must be completed; otherwise the scripts below will fail.
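In most cases the prerequisite boils down to making sure that the spatial reference system 3857 (Web Mercator) exists in your tenant. If it is missing, it can typically be created with a single statement – treat this as a sketch and follow the note for the authoritative steps:

-- Creates the predefined SRS 3857 used by the ST_Transform(3857) calls below
CREATE PREDEFINED SPATIAL REFERENCE SYSTEM IDENTIFIED BY 3857;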

After you have completed the prerequisites, run the script to create Location tables which are based on the countries and constituencies tables above. It creates the location definitions which can be consumed by SAP Analytics Cloud:

CREATE COLUMN TABLE "SAC_SOURCE"."Countries_Location" (
"Country_Code_LD" VARCHAR(2) PRIMARY KEY, "Location" ST_GEOMETRY(3857));
UPSERT "SAC_SOURCE"."Countries_Location" ("Country_Code_LD")
SELECT "COUNTRY_CODE" FROM "SAC_SOURCE"."COUNTRIES" GROUP BY
"COUNTRY_CODE";
UPDATE "SAC_SOURCE"."Countries_Location"
SET "Location" = new ST_GEOMETRY('POINT(' || "LONGITUDE" || '' || "LATITUDE" ||
')', 4326).ST_Transform(3857)
FROM (
SELECT MIN("LATITUDE") "LATITUDE", MIN("LONGITUDE") "LONGITUDE", "COUNTRY_CODE"
FROM "SAC_SOURCE"."COUNTRIES" GROUP BY
"COUNTRY_CODE"),
"SAC_SOURCE"."Countries_Location"
WHERE "COUNTRY_CODE" = "Country_Code_LD";

CREATE COLUMN TABLE "SAC_SOURCE"."Constituencies_Location" (
"ONS_CODE_LD" VARCHAR(10) PRIMARY KEY, "Location" ST_GEOMETRY(3857));
UPSERT "SAC_SOURCE"."Constituencies_Location" ("ONS_CODE_LD")
SELECT DISTINCT "PCON" FROM "SAC_SOURCE"."ONSPD_FEB_2019" WHERE
"PCON" IN ( SELECT "CONSTITUENCY_ONS_CODE" FROM "SAC_SOURCE"."PETITION_241584_DATA" WHERE "CONSTITUENCY_ONS_CODE"<> '' )
GROUP BY
"PCON";
UPDATE "SAC_SOURCE"."Constituencies_Location"
SET "Location" = new ST_GEOMETRY('POINT(' || "LONG" || '' || "LAT" ||
')', 4326).ST_Transform(3857)
FROM (
SELECT DISTINCT "PCON", AVG("LAT") "LAT", AVG("LONG") "LONG"
FROM "SAC_SOURCE"."ONSPD_FEB_2019" WHERE
"PCON" IN ( SELECT "CONSTITUENCY_ONS_CODE" FROM "SAC_SOURCE"."PETITION_241584_DATA" WHERE "CONSTITUENCY_ONS_CODE"<> '' ) GROUP BY
"PCON"),
"SAC_SOURCE"."Constituencies_Location"
WHERE "PCON" = "ONS_CODE_LD";

This would create two location supporting tables:


Check the contents of either one:

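If you prefer plain SQL over the table browser, ST_AsWKT() renders the geometry column as readable text:

SELECT "Country_Code_LD",
       "Location".ST_AsWKT() AS "LOCATION_WKT"
FROM   "SAC_SOURCE"."Countries_Location";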

Lovely, just what we are looking for!

Now let’s carry on and build 4 very simple HANA calculation views – 2 scripted and 2 graphical (just to illustrate different options).

The two graphical views are very simple, each containing a single table with different output selections:


Add petitions table to the aggregation node:


Map just three required fields from the source:


Set Country_Code as the key in Semantics node:


Save and run:


Create another view similarly by adding Constituency relevant fields:


Set ONS code as the key:


Save and run:


Marvellous.

Now we only need to create the location supporting views for Country and Constituency in the special package SAP_BOC_SPATIAL (this is very important).

I have decided to create them as scripted CVs to just show how they look.

The first one is Country Location:


Create two columns on the right and paste the code:

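The pasted script itself is essentially a one-statement SQLScript projection. A sketch of the body, assuming the default output variable name var_out:

/* Scripted calculation view body: project the two output columns of the view */
var_out = SELECT "Country_Code_LD", "Location"
          FROM "SAC_SOURCE"."Countries_Location";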

Set Country_Code_LD as the key and run the view:


Great. Define the second scripted view ZCV_CONSTITUENCY_LOCATION:


Set ONS_CODE_LD to key and run the view:


Okay, the job’s done on the HANA side. We now have 4 views and they are ready to be consumed by SAP Analytics Cloud:


SAP Analytics Cloud


We will be creating a Live HANA connection from SAC to our tenant; if you don’t have one yet, be aware that it’s not that trivial to set up (it involves SSL).

I have created the Live connection already so we can proceed.

Go to your SAC tenant and create new model:


Select “Get data from a datasource” and pick up your HANA Live connection:


Click on Data Source selector and choose our Country Calculation View:


Call the Model “Signatures_by_country” and confirm.

Check that the measure and dimensions are present:


Now we need to add location information to our model. Remember, we even have the supporting view for that!

Click “Create a location dimension” button:


And select the values as follows:


What we actually did here is tell the system where the location information resides and instruct it to join that location info with our model based on the relationship between COUNTRY_CODE and COUNTRY_CODE_LD (Country Code Location).

Confirm and save the model.

Now before we proceed further, let’s create another model “Signatures_by_constituency” and add the relevant views to it – repeat all the previous steps with the following selections:


Map the location view like this:


Confirm and save the model.

Now let’s get to the cherry on the cake (finally!). Create new story:


With the canvas page:


Add Geo Map to canvas:


You can leave it light grey or choose many options for the base layer:


I chose “Streets” because it’s more colourful! Add new layer, call it “World” and add our “Country” model as the data source:


Switch layer to “Heat map” and add our Location dimension and Signatures as the measures:


Confirm, and see the results right away!


Play with different modes of maps until you’ve got the view that is most suitable for your analysis. For example, switch map to “Bubble layer”. It gives you the details when you mouse over:


Now go and create another Layer and call it “UK” and choose signatures by constituency:


Zoom in on the UK:


It gives you the heat map based on the UK constituencies now. Neat!

If you zoom out a bit you will see both layers – the Bubble Layer showing the world and the heat map showing the UK:


Now we can add another page to our story and select “Grid”:


Insert models to the different pages:


This way we can have the map and two grids with raw data for our analysis:



Save it as the “Petitions story”:


Now you can try and enhance the base model with whatever you like, such as 2016 referendum data per constituency to see the correlations between voters who voted Remain and signatories of the petition to revoke Article 50. Go ahead and play!

Key takeaways and epilogue


◈ The SAP Analytics Cloud story is fully dynamic and based on live data from the HANA system. If you change a value in the table, the story will update right away.
◈ The end to end example did not use… any BW!
◈ HANA Calculation views can be expanded to read other data, such as referendum, election results, poverty, homelessness figures, etc. and all that extra data can be analysed quickly and efficiently on the fly using the provided example framework.

SAP HANA Startup

Have you ever wondered what SAP HANA is doing from the moment you trigger the start or restart until it’s finally available to be used by your application layer?

Or perhaps you have experienced a startup that has taken your system way longer to be available than you have planned for, and you are trying to understand, what it was doing all this time to avoid it, improve it or plan for it next time?

Let me try to shed some light on this, as the better you understand which factors influence the startup time, the better you can judge and influence it, or at least take it into account.

When starting SAP HANA, as a very first step, the daemon service is started.
The daemon is a lightweight service that sets the environment variables required by the remaining services (nameserver, indexserver, scriptserver, xsengine, …) and then starts them in the correct order.


All services started by the daemon start up mostly independently of each other, following the 9 steps explained in detail below and synchronizing with each other at a certain point:


1. Opening the Volumes


Services which have a persistence (e.g. nameserver, indexserver, …) need to open their volumes.

Before the volumes can be opened, the corresponding filesystems need to be made available to the right servers at the right mountpoints.

Taking care of this task is not part of the individual services’ job. Instead, if you are using the SAP HANA Fiber Channel Storage Connector, it’s the nameserver service that mounts the data and log volumes for all services. If you don’t use it, you need to ensure that this task is accomplished before the individual services attempt to open their volumes.

Opening a volume basically means obtaining a file handle for it and hence we are talking about milliseconds or – at most – seconds. Any delay beyond this magnitude is usually caused by reservation conflicts or other storage-related issues.

Relevant trace entries:

i ha_fcClient fcClient.py : trying to attach for partition 1, usage type DATA on path /hana/data/SID/mnt00001

  i PhysicalPageAcce DataVolumePartitionImpl.cpp : DVolPart: opened datavolume partition #0 with 1 files in '/hana/data/SID/mnt00001/hdb00003/')
or
  i PhysicalPageAcce DataVolumeImpl.cpp : Open volume file(s) in "/hana/data/SID/mnt00001/hdb00003/"

2. Loading and Initializing Persistence Structures


Now that the volumes are open, basic low-level persistence structures are loaded from disk.
As these are kept in memory in a structure very similar to the one stored on disk, no time- or resource-consuming conversions are required. Instead, the duration of this step is mainly I/O bound.

More precisely, the following tasks are accomplished in this step:

1. The anchor page and the restart page are loaded. To HANA, these are what the master boot record is to an operating system.
2. Converters and container directories are loaded.
3. Undo files are searched for information on transactions that have not committed before the system shut down. Don’t be fooled by the fact that the traces refer to sessions rather than transactions – it’s uncommitted transactions.
4. HANA persistence statistics are loaded and initialized.

What has been said so far about this step implies that its duration should not vary substantially. However, it can if one of the contributing factors itself varies significantly. These factors are:

◈ The size of persistence, specifically, of the volumes
◈ The number of undo files
◈ The number of disk based LOBs
◈ The read performance for the DATA volume(s)

Relevant trace entries:

i Savepoint SavepointImpl.cpp : AnchorPage with SPV=123 loaded. Persistence created at 2019-01-01 01:23:34.567890
i Savepoint SavepointImpl.cpp : restartPage loaded with spv=123
i PersistenceManag PersistenceManagerImpl.cpp : loadConverters and markSnapshotFBM – done
i PersistenceManag PersistentSpaceImpl.cpp : loadContainerDirectories - done.
i PMRestart PersistenceSessionRegistry.cpp : Loading 123 open session(s) and 123 history cleanup file(s) finished in 12.3456 seconds;
i PMRestart VirtualFileStatsProxy.cpp : LOB owner statistics initialized from VirtualFile: 45678 CD entries in 123.345 sec
i PMRestart PersistenceSessionRegistry.cpp : Initialized transient structures in 123.456 seconds;

Improvements in HANA 2:

As of HANA 2, packed LOBs are available.

Among many other optimizations, packed LOBs use an optimized persistence structure which allows faster LOB handling during startup and thereby decreases the time spent in this second step.

3. Loading or Reattaching the Row Store


For as long as the system is up and running, the row store is kept in shared memory, owned by the corresponding HANA service operating system process. During shutdown, this shared memory would normally be released back to the OS, while, during system startup, a significant part of the overall startup time would have to be spent on loading the row store into memory again.

How long it takes to load the row store into memory again depends – of course – mainly on the volume of row store tables in the system, but quite often this step turns out to be a main contributor to the overall startup time.

To save this time, there’s an optimization which, if certain conditions are met, allows to keep the shared memory containing the row store in memory despite restarting the corresponding service.

Long story short – given a handful of conditions are met, it’s possible to keep the row store in memory despite a service/system shutdown and to reattach it to the owner service at its re-start, making loading the row store from disk during startup redundant.

If any of the conditions for keeping the row store at shutdown and/or reattaching it at startup isn’t met, the row store will simply be loaded from the DATA volume in this step.

Now that the row store has been reloaded or reattached, the row store needs to rollback all changes that have not been committed before shutdown.

Unlike the time spent on loading and initializing persistence structures, the time spent on loading or reattaching the row store might very well vary significantly from one startup to another, depending on:

◈ The size of the row store (see the query after this list for a quick check)
◈ Whether all conditions are met for keeping and reattaching the row store
◈ The read performance of the DATA volume, if the row store cannot be kept over restart
◈ The amount of changes in row store that need to be rolled back
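To get a feeling for how much data this step has to deal with, you can check the current row store size while the system is up – a sketch based on the M_RS_TABLES monitoring view (column names can differ slightly between revisions):

-- Approximate row store size in GB
SELECT ROUND(SUM(USED_FIXED_PART_SIZE + USED_VARIABLE_PART_SIZE) / 1024 / 1024 / 1024, 2)
       AS ROW_STORE_SIZE_GB
FROM   M_RS_TABLES;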

Relevant trace entries:

  i Service_Startup SmFastRestart.cc : Loading RowStore segments from Persistency
  i Service_Startup CheckpointMgr.cc: 123 RS segments loaded from Persistency in 12.3456 sec (321.12345 MB/s)
or
  i Service_Startup SmFastRestart.cc : Reattaching RS shared memory for RowStore segments
  i Service_Startup SmFastRestart.cc: 123 RS segments loaded in 0.321 sec (1234.23 MB/s)

i Service_Startup Starter.cc(00422) : RS Undo log collection finished in 1.23 sec.
i Service_Startup Starter.cc(00428) : Consistent RS data image has been prepared.
i Service_Startup Starter.cc(00656) : RS uncommitted version construction finished in 1.23 sec.
i Service_Startup Starter.cc(00548) : All Page lists rebuild finished in 0.123 sec.

4. Garbage-Collecting Versions


Just like the row store had to roll back all changes that have not been committed before shutdown in the previously explained step, HANA needs to do some cleanup for the column store as well.

In this step, the garbage collector cleans up all versions except for the most recent one of any column store table. As you know, the garbage collector continuously cleans up versions that are no longer required while the system is up and running – still, if the system is shut down at a point in time when a transaction is still open and has not been committed for a long time (hours, days, weeks, seen it all, …), there might be some versions left to clean up during startup.

In HANA 1 this step needs to finish completely before it can continue with replaying the logs.

The time consumed by this step may vary a lot from one startup to another as it highly depends on:

◈ The existence of transactions that blocked the garbage collection before shutdown
◈ The read performance of the DATA volume for processing the history cleanup files
◈ The HANA release, as this defines whether this step blocks the next one or not

Relevant trace entries:

i Logger PersistenceManagerImpl.cpp : Termination of rollback(s) open in restart/backup savepoint finished in 0 seconds;
i Logger PersistenceManagerImpl.cpp : Garbage collection of history files finished: 12 cleanup files in 12.345 seconds;

Improvements in HANA 2:

With HANA 2, garbage-collecting obsolete versions is executed asynchronously. That means, it will allow the next step, replaying the logs, to continue in parallel.

5. Replaying the Logs


Now that the row store is available in memory, HANA replays the logs to redo all changes that were performed after the last savepoint. There’s no need to load all column store tables before replaying the logs – if any of the changes to be replayed in this step affect column store tables, the affected ones are loaded into memory on the fly.

After successfully finishing the replay, all uncommitted transactions are rolled back.

The time spent in this step can vary a lot and it mainly depends on:

◈ The volume of logs to be replayed
◈ The volume of changes by uncommitted transactions to be rolled back
◈ The read performance of the LOG volume
◈ The read and write performance of the DATA volume(s)

Relevant trace entries:

i Logger PersistenceManagerImpl.cpp : Expecting savepoint 123 log record as first log record at 0x0000000001
i Logger PersistenceManagerImpl.cpp : Starting log replay at position 0x0000000001
i Logger RecoveryHandlerImpl.cpp : Recovery finished at log position 0x0000001234
i Logger RecoveryHandlerImpl.cpp : Termination of 1 indoubt transactions was executed in 0.123 seconds; 0 remaining sessions.

6. Transaction Management


Now, it’s time that all services of a database synchronize with each other to ensure transactional consistency.

This is a quick thing to do, so very little time is spent exclusively by this step itself – if it does take longer, it’s usually because one of the services participating in the sync is still busy with steps 1-5 and not ready to sync yet.

The main influence factors are:

◈ The startup progress of other services of the same database
◈ The network connectivity between the services

Relevant trace entries:

i LogReplay RowStoreTransactionCallback.cc : Slave volumeIDs: 4 5
i Service_Startup transmgmt.cc : Checked the restart informations by volume [(4, 0x23456) (5, 0x13579)]
i LogReplay RowStoreTransactionCallback.cc : Finished master-slave DTX consistency check

7. Savepoint


All changes that have been performed in steps 3 – 5 are now persisted to the DATA volume by a savepoint.

This step shouldn’t take too long, though its duration can vary depending on the following factors:

◈ The volume of changes during replay of logs phase.
◈ The write performance of the DATA volume(s)

Relevant trace entries:

i PersistenceManag PersistenceManagerImpl.cpp : writing restart savepoint
i Savepoint SavepointImpl.cpp : Savepoint current savepoint version: 124, restart redo log position: 0x0000001236, next savepoint version: 125, last snapshot SP version: 0

8. Checking the Row Store Consistency


If you haven’t set the parameter

indexserver.ini -> [row_engine] -> consistency_check_at_startup

to none, a row store consistency check is performed as the last step during startup.

Usually, this check finishes quite fast and it’s limited to a maximum runtime of 10 minutes.

If you are confident that you take good care of consistency checks while the system is up and running, and feel safe disabling this quick check during startup to save up to 10 minutes of startup time, refer to SAP Note 2222217 to disable this check.
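The parameter itself is changed like any other ini parameter; a sketch of what that looks like on the system layer (see the note for the recommended value and scope):

-- Disable the row store consistency check at startup
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('row_engine', 'consistency_check_at_startup') = 'none'
  WITH RECONFIGURE;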

The time spent by this step should not vary a lot from startup to startup and depends on:

◈ The HANA version
◈ The size of row store
◈ The value of parameter consistency_check_at_startup

Relevant trace entries:

i Service_Startup IntegrityChecker.cc : Checking rowstore consistency.
i Service_Startup IntegrityCheckerTimer.h : Startup Row Table Consistency Check timeout is 600
i Service_Startup IntegrityCheckerTimer.h : Startup Row Table Consistency Check is finished before timeout
i Service_Startup IntegrityChecker.cc : Consistency check time: 537.69 seconds

9. Open SQL Port


Now that all the work is done, the SQL port is opened. Just like the sync among the services, this is a very quick thing to do.

If you really see issues that cause opening the SQL port to be delayed or to fail, it’s usually because another operating system process already occupies the port.

To ensure that all HANA-related ports are reserved correctly, you can use the SAP Host Agent as described in SAP Note 2382421.

As soon as the SQL port is open, your applications are, technically speaking, able to access the HANA system. Still, remember that not all column store tables have been loaded yet and that, depending on your experience with your system, you might want to wait for the column store preload to finish (or not) before restarting the applications that use your HANA system.
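If you want to check how far the (re)load of column store tables has progressed before letting the applications back in, the M_CS_TABLES monitoring view gives a quick overview, for example:

-- Column store tables by load state (TRUE / PARTIALLY / FALSE)
SELECT LOADED, COUNT(*) AS TABLE_COUNT
FROM   M_CS_TABLES
GROUP  BY LOADED;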

Relevant trace entries:

i Service_Startup tcp_listener_callback.cc : Configure channel send timeout to SQL listener: 300000 ms
i Service_Startup tcp_listener_callback.cc : Configure channel receive timeout to SQL listener: 300000 ms
i Service_Startup tcp_listener_callback.cc : start the SQL listening port: 30015 with backlog size 128
i assign TREXIndexServer.cpp : assign to volume 3 finished
i TableReload TRexApiSystem.cpp : Starting reloading column store tables based on previous load information

That’s it for now.

XSA Accessing Remote Sources & External Objects (Schemas, etc)

When developing with XSA and the WebIDE you will likely need to access existing database objects, schemas, tables, remote sources or other objects from an HDI Container. This configuration has been captured before by Christophe Gilde, but the process has evolved with the latest feature release of the WebIDE (4.3.63 for HANA 2 SPS3).

Tenant Database Objects

1. Role & User

XSA Artifacts


2. User-Provided Service
3. mta.yaml
4. .hdbgrants
5. .hdbsynonym


Role & User


For simplicity, we have combined the classic database privileges into a single role, “GRANT_REMOTE_SOURCES”.

CREATE ROLE GRANT_REMOTE_SOURCES;

GRANT SELECT, EXECUTE ON SCHEMA FAKENEWS TO GRANT_REMOTE_SOURCES WITH GRANT OPTION;
GRANT CREATE VIRTUAL TABLE, CREATE REMOTE SUBSCRIPTION ON REMOTE SOURCE FILE_LOADER TO GRANT_REMOTE_SOURCES WITH GRANT OPTION;
GRANT ROLE ADMIN TO GRANT_REMOTE_SOURCES;

DROP USER GRANTOR_SERVICE;
CREATE USER GRANTOR_SERVICE PASSWORD NotMyPassword123 NO FORCE_FIRST_PASSWORD_CHANGE;
ALTER USER GRANTOR_SERVICE DISABLE PASSWORD LIFETIME;

GRANT GRANT_REMOTE_SOURCES TO GRANTOR_SERVICE WITH ADMIN OPTION;

We can check in HANA Studio that these permissions are as expected.
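The same check also works in plain SQL against the system views, for example:

-- Privileges granted to the role, and roles granted to the technical user
SELECT * FROM "PUBLIC"."GRANTED_PRIVILEGES" WHERE GRANTEE = 'GRANT_REMOTE_SOURCES';
SELECT * FROM "PUBLIC"."GRANTED_ROLES"      WHERE GRANTEE = 'GRANTOR_SERVICE';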


Now that we have a user with the role assigned, we can switch to our XSA development.

XSA Artifacts


User-Provided Service


We can now create the user-provided service with either the WebIDE, the XSA Cockpit or the XS command line.

In the WebIDE we need a project


We need to associate the project with the correct space and can then build the db module of it.
Now we can add/create our user-provided service


If we haven’t already created the service we can do this here.

Beware, the port is that of your tenant database, the default would be 30015, but I have multiple tenants so my port is 30041.


mta.yaml


By adding this service in the WebIDE, it will automatically update the mta.yaml file, which is a good thing. The mta.yaml holds the resources that our project requires; it now references our user-provided service.


An alternative way to create the user-provided service is with the xs command line. Make sure you are in the correct xs SPACE, here mine is PROD

xs t -s PROD
xs cups grantor-service -p '{"host":"mo-3fda111e5.mo.sap.corp","port":"30015","user":"GRANTOR_SERVICE","password":"NotMyPassword123","driver":"com.sap.db.jdbc.Driver", "tags":["hana"]}'
xs service grantor-service

You can still use the WebIDE, but now you would tick the box “use existing service” and you would only need to enter the service name.

Now when I build the db module again it will create a binding for this service to the di-builder

We can see (and create/edit) this in the XSA Cockpit


.hdbgrants


We now need to pass on the role “GRANT_REMOTE_SOURCES” that we defined above to our HDI container. This is done by creating an .hdbgrants file within your project’s src directory.

{
  "grantor-service": {
    "object_owner": {
      "roles": [
        "GRANT_REMOTE_SOURCES"
      ]
    },
    "application_user": {
      "roles": [
        "GRANT_REMOTE_SOURCES"
      ]
    }
  }
}

We should now build the db module of the project. All being well, we will now have access to our existing database objects, in my case the remote source and the FAKENEWS schema and tables.

If we create a Calc View and search for a table from the existing schema, we need to click the "External Services" dropdown and then select our grantor-service. This will then automatically create the required synonyms for us.
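
For reference, the generated synonyms end up in an .hdbsynonym file; a hand-written equivalent (the table name ARTICLES is just a hypothetical example) would look roughly like this:

{
  "FAKENEWS_ARTICLES": {
    "target": {
      "schema": "FAKENEWS",
      "object": "ARTICLES"
    }
  }
}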


SAP HANA, SAP Analytics Cloud, and Brexit: The Automation

$
0
0
In the last article we discussed how we can easily get big data from the internet, convert it to the required format, massage it a bit, and report on it in SAP Analytics Cloud via SAP HANA. That worked pretty well but lacked any sort of proper automation.

In this article we will create an automated flow which can be used to acquire data from the same Petitions website, convert it to the required format and load it to our SAP HANA system which in turn is connected “Live” to SAP Analytics Cloud. Therefore, the reported data in SAP Analytics Cloud would be as recent as possible, depending on our data acquisition flow settings.

In line with my concept of always learning something new, this article will be focused on automating data flows with SAP Data Hub 2.4 pipelines. I will be using the developer edition of SAP Data Hub 2.4, which is delivered as a Docker image, which yet again makes our life so much easier. Since then I have moved on, and instead of using Ubuntu as a host OS for Docker I started using the extremely lightweight Alpine Linux distribution for all my Docker containers. I find that for containerisation the golden rule is "the lighter your host OS – the better", which makes Alpine a perfect choice, and all SAP containers work fine on it (so far, even though it's not officially supported for HXE or Data Hub containers).

In this article I will start everything from scratch, creating two separate tables to store Country and Constituency results and adding a few new columns to store multiple petitions in one table.

At the end of this article you will have an understanding of how Data Hub works, how to use JavaScript operators in Data Hub, how to load live data into SAP HANA, and how to compare data from multiple petitions in one SAP Analytics Cloud story.

I won’t be explaining the path all the way to SAP Analytics Cloud as I have explained it in great detail last time.

Let’s begin


First things first – go through the following tutorial to download and start the SAP Data Hub developer edition Docker container in your host OS:

The only exception to the above blog post is that I won't start it with the DEV network but rather bridged, and I will expose ports to the host machine, so the run command will look like:

docker run -ti --env VORA_USERNAME=vora --env VORA_PASSWORD=HXEDocker2019 -p 8090:8090 -p 9225:9225 -p 30115:30115 --name datahub --hostname datahub datahub run --agree-to-sap-license

This is so that we can access it from our Windows host machine (make sure to add the datahub and hdfs hosts to your Windows hosts file):
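
The hosts file entries are just name-to-IP mappings; a rough sketch (assuming Docker publishes the ports on localhost – adjust the IP to your Docker host if it differs):

# C:\Windows\System32\drivers\etc\hosts
127.0.0.1    datahub
127.0.0.1    hdfs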


The same applies to HDFS, I will use the same bridged network. In this blog post we won’t be touching HDFS at all, but in anticipation of other articles covering Vora and HADOOP, you might as well create the container now:

docker run -ti -p 50070:50070 --name hdfs --hostname hdfs datahub run-hdfs


Okay, all good. Now, our patients today will be two petitions, the one that we remember well from the last time –


And its antagonist:


What we will be achieving today is extracting data from those two petitions in one go to our SAP HANA system via SAP Data Hub.

The resulting Data Hub pipeline would look like this:


Don’t be afraid! I will take you through the process of this pipeline creation step by step. There is surprisingly little in-depth information about practical Data Hub pipelines online, so I had to discover it piece by piece through trial and error.

Now, let’s start creating a new graph!

Create a new graph in the Data Hub Modeler and save it as “GET_PETITIONS”:


On the left-hand side pane choose “Operators” and drag “Blank JS Operator” to your graph workspace. This would be our starting operator which we will use to start the pipeline and create the GET request headers to acquire petitions from the petitions website.


Rename it to “Get Petitions” and add the following code to it by clicking the “Script” shortcut.


//Set petitions numbers array - adjust as required

var get_petitions = [241584,229963];

//Add DataHub generator

$.addGenerator(gen)

function gen(ctx) {

 // Loop through petitions array to set correct GET header for HTTP client

 for (var i = 0; i < get_petitions.length; i++){

    var msg = {};
    msg.Attributes = {}; //Initialise the attributes map before setting the headers
    msg.Attributes["message.id"] = "Get_Petition";
    msg.Attributes["http.Content-Type"] = "text/json";
    msg.Attributes["http.method"] = "GET";
    msg.Attributes["http.url"] = "https://petition.parliament.uk/petitions/"+get_petitions[i]+".json";
    msg.Body = ("Generate petition " + get_petitions[i]);

    $.output(msg); //Output message with correct headers

 }

};

Here we are specifying the petitions we require in the array variable and then looping through array creating the correct headers.

Right click on our JS operator and select “Add Port”, create it as follows.


To test how our operator works drag “ToString Converter” and “Terminal” operators to the graph and connect them this way:


What we did is connect the message output via the string converter to the terminal to check the output of the “Get Petitions” operator. Save your graph and run it.


While it’s running right click on the “Terminal” operator and choose “Open UI”:


The UI of the terminal will show the result of JS run:


Wow, that’s cool! Now close the terminal and stop your graph.

Now that we have a properly formatted GET request header, we can actually get data from the URLs specified. Delete the “ToString Converter” and “Terminal” operators for now (but we will use this monitoring concept throughout this post). Drag the “HTTP Client” operator in and connect it to the JS operator as shown in the image.


To test the data output from the “HTTP Client”, use another operator, “Wiretap”, which can be used for any input data (unlike “Terminal”, which accepts only strings), and connect it.


Run your graph and access “Wiretap” UI. The result is as expected – we received two JSON sets:


Great stuff! Now what we need to do is to parse this data and load it into SAP HANA tables. Let’s carry on doing that.

Remove “Wiretap” and add “ToString Converter” and “Blank JS Operator” to your graph. Rename the blank operator to “Convert Data”, right click and add three ports – one input port called “input” type string, and two output ports “output” and “output2” type string. Connect your pipeline the following way:


As you can see we have got one input port and two output ports in our “Convert Data” operator.

Add the following JS code to the “Convert Data” operator:

$.setPortCallback("input", onInput); //Initialise input
function onInput(ctx, s) {

    var json_data = JSON.parse(s); //Parse JSON data
    petition_id = json_data.data.id; //Get petition ID

    //Loop through petitions by country and ouptut to the first output port


   for (var i = 0; i < json_data.data.attributes.signatures_by_country.length; i++) {

        json_data_out = json_data.data.attributes.signatures_by_country[i];
        counter = i + 1;
        country_name = json_data_out.name.replace(",", "");
        $.output(counter + ',' + petition_id + ',' + country_name + ',' + json_data_out.code + ',' + json_data_out.signature_count);

    }

    //Loop through petitions by constituency and ouptut to the second output port
    for (var i = 0; i < json_data.data.attributes.signatures_by_constituency.length; i++) {
        json_data_out = json_data.data.attributes.signatures_by_constituency[i];
        //Add escape sign or replace special characters

        if (json_data_out.mp != null) {
            mp = json_data_out.mp.replace("'", "\'");
        };

        if (json_data_out.name != null) {
            const_name = json_data_out.name.replace(/,/g, "").replace(/'/g, "\'");
        };

       counter = i + 1;

       $.output2(counter + ',' + petition_id + ',' + const_name + ',' + mp + ',' + json_data_out.ons_code + ',' + json_data_out.signature_count);

    }
};

We are parsing the incoming JSON and outputting the country and constituency data on two separate ports, so that it can then be loaded into two separate tables in our HANA system.

Add “2-1 Multiplexer” operator and connect “Terminal” (adjust “Terminal” config to have Max Size 4096 and spool 100000) to see the results of the run the following way:


Alright, that looks like it should, following JS operator parsing. We have constituency data as “ID, Petition ID, Constituency, MP, ONS Code, Signature Count” and country data as “ID, Petition ID, Country Name, Country Code, Signature Count”. Just the way we want it.

Now that we know that it works, it’s time to load the Country and Constituency data to SAP HANA tables. There is no need to create new tables in Catalogue manually, “SAP HANA Client” operator will do it for us.

Remove “Terminal” and “Multiplexer” operators and add new “Multiplexers” and “ToString Converters” with “Wiretap” so that the pipeline looks the following way:


The reason we are doing this is so that we can monitor the outputs into HANA via a single “Wiretap” writing data into tables simultaneously. Remember that the first output is Country relevant information and the second output is Constituency relevant information.

Add “SAP HANA Client” operators and connect them to the outputs of string converters like this:


Make sure to connect string output to “data” inputs of SAP HANA Clients. Rename them accordingly, so you remember which is which:


Open config of the “Countries” client and set the connection the following way:


Save and set table name as “SAC_SOURCE.PETITIONS_COUNTRIES” and the following JSON definition of the table columns:

[
    {
        "name": "ID",
        "type": "INTEGER"
    },

    {
        "name": "PETITION_ID",
        "type": "INTEGER"
    },

    {
        "name": "COUNTRY_NAME",
        "type": "VARCHAR",
        "size": 60
    },

    {
        "name": "COUNTRY_CODE",
        "type": "VARCHAR",
        "size": 3
    },

    {
        "name": "SIGNATURES",
        "type": "INTEGER"
    }
]

Set table initialisation to “Drop (Cascade)” – this means that HANA will check whether the table exists and drop it (there are of course ways of just updating the values in table).

Do the same for the Constituencies client with the table name “SAC_SOURCE.PETITIONS_CONSTITUENCIES” and column definitions:

[
    {
        "name": "ID",
        "type": "INTEGER"
    },

    {
        "name": "PETITION_ID",
        "type": "INTEGER"
    },

    {
        "name": "CONST_NAME",
        "type": "VARCHAR",
        "size": 60
    },

    {
        "name": "MP",
        "type": "VARCHAR",
        "size": 60
    },

    {
        "name": "ONS_CODE",
        "type": "NVARCHAR",
        "size": 10
    },

    {
        "name": "SIGNATURES",
        "type": "INTEGER"
    }
]

Now, let’s save the graph and check our schema catalogue in SAP HANA. Everything seems as it was last time.


Let’s run our graph and check the output in the “Wiretap”:


It produces the desired output! (not that I am surprised)

Check HANA catalogue and find two newly created tables in there.


Check that the “Wiretap” stopped refreshing – this means all the data should be passed to the tables.

It’s important to note that sometimes pipelines just stall and do nothing even though the status is “Running”. This can be resolved by stopping Data Hub container and restarting it. I am not sure if it’s only happening in Dev version, but it’s a bit annoying, for sure.

Check total signatures by countries in petitions table by issuing the following SQL command:

SELECT "PETITION_ID", SUM("SIGNATURES")
FROM "SAC_SOURCE"."PETITIONS_COUNTRIES"
GROUP BY "PETITION_ID";

Compare the results with the live petitions website data:


The petitions online aren’t completely stale, so while I was taking the screenshots they moved a bit, but you get the point – it is as close to the real data as the processing time of the Data Hub pipeline allows. For the purposes of this analysis we don’t really need truly live data; a time lag of 30 minutes to a day or so will be fine.

The pipeline graph can be scheduled to run every day, every hour, or any other time period you require to process live data from the system.

In this article I won’t be going into the details of the SAC side; you can figure out the rest by yourself. The next steps will include:

◈ Create two calculation views for Countries and Constituencies based on their respective tables.
◈ Country locations table is fine as we covered all the countries, but constituencies could change, therefore you will have to run the updated script:
DROP TABLE "SAC_SOURCE"."Constituencies_Location";

CREATE COLUMN TABLE "SAC_SOURCE"."Constituencies_Location" (
"ONS_CODE_LD" VARCHAR(10) PRIMARY KEY, "Location" ST_GEOMETRY(3857));
UPSERT "SAC_SOURCE"."Constituencies_Location" ("ONS_CODE_LD")
SELECT DISTINCT "PCON" FROM "SAC_SOURCE"."ONSPD_FEB_2019"
GROUP BY
"PCON";
UPDATE "SAC_SOURCE"."Constituencies_Location"
SET "Location" = new ST_GEOMETRY('POINT(' || "LONG" || '' || "LAT" ||
')', 4326).ST_Transform(3857)
FROM (
SELECT DISTINCT MIN("LAT") "LAT", MIN("LONG") "LONG", "PCON"
FROM "SAC_SOURCE"."ONSPD_FEB_2019" GROUP BY
"PCON"),
"SAC_SOURCE"."Constituencies_Location"
WHERE "PCON" = "ONS_CODE_LD";​

This script will update the constituencies locations table to have the location of all the constituencies in the country and not only the ones from the particular petition.

◈ Create SAP Analytics models based on the new Calculation Views or update the existing ones.
◈ Create a new story and add Geo map there, filtering based on the petition ID – that way country and constituency will always be in sync based on petition id.

SDI SDQ Geocoding & Address Cleansing

$
0
0
Have you tried using HANA Smart Data Quality (SDQ) and found the transforms aren’t working? Well, this blog should help: there are country-specific files required for both geocoding (latitude, longitude) and address cleansing.

The steps covered below are

1. Download the files
2. Extract the zip file
3. Verify existing configuration
4. Update Configuration
5. Validate Configuration
6. Build SDI Flowgraph
7. Errors

1. Download the files


Locate the required files from SAP Software Downloads.

Addresses and Geocoding are both within the “Address Directories & Reference Data” category, and data is held at a country level. I am using some UK data, so I have downloaded the UK GEO DIRECTORY; this will be needed for the geocode SDQ transform.


You usually want the latest file available

I download these directly onto the HANA box as they can be quite large.


2. Extract the zip file


The official way to install these files is using the hdblcm tool, but you can also do this manually, which can be handy if your hdblcm is not working.  The files need to be owned by the HANA <SID>adm user (ih2adm in my case).
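
As a sketch of the manual route (the zip name and target directory are just examples – use your own download name, path and <sid>adm user):

unzip UK_GEO_DIRECTORY*.zip -d /hana/SDQ_Reference
chown -R ih2adm:sapsys /hana/SDQ_Reference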


3. Verify existing configuration


We can check to see if there’s an existing configuration of the SDQ reference data files

select * 
from SYS.M_INIFILE_CONTENTS 
where FILE_NAME = 'scriptserver.ini' 
and SECTION = 'adapter_framework' 
and KEY = 'dq_reference_data_path';

This should not return any rows.


The same is true in HANA Studio, Administration -> Configuration


4. Update Configuration (dq_reference_data_path)


We can go ahead and update our configuration as below, replacing the path with your own directory path.

-- EXECUTE ON SYSTEMDB
ALTER SYSTEM ALTER CONFIGURATION ('scriptserver.ini', 'SYSTEM') 
SET ('adapter_framework', 'dq_reference_data_path') = '/hana/SDQ_Reference/' WITH RECONFIGURE

After doing this change we need to restart the ScriptServer.
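
One way to do that without a full system restart is via SQL; a sketch (look up your own scriptserver host and port first, and run with an admin user):

-- Find the scriptserver of the tenant, then restart just that service
SELECT HOST, PORT FROM SYS.M_SERVICES WHERE SERVICE_NAME = 'scriptserver';
ALTER SYSTEM RESTART SERVICE '<host>:<port>';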


5. Validate Configuration


Repeating step 3 we should now see our new configuration in place.


6. Build SDI Flowgraph


We can now switch to the WebIDE and create an SDI (Smart Data Integration) Flowgraph.

Here I have just created a dummy .hdbview that doesn’t contain any real personal data.


We can then use this in a simple flowgraph as below


DataSource


Geocode


Here we can see the SDQ has automatically guessed the Content Types correctly; this is derived from the data that is being fed in.


It makes sense to add our input data as output columns


We can now save, build and Execute the Flowgraph


Before we celebrate the successful execution, we should check our target table.


All appears to be good. For one final verification we can put the LAT/LONG into our favourite mapping tool and check that the location looks correct.


Errors


If you receive the error below then your address and/or geocode reference data is likely missing.

15:57:12 (/Sample/db) Execution Failed : (dberror) [686]: start task error: “SAMPLE”.”UK_GEOCODE_SP”: line 5 col 0 (at pos 85): [2620] executor: plan operation failed;Execution of Adapter operation within node GeocodeUK_GEOCODE_TEMPLATE_ failed: exception 141005: Failed to create or initialize HanaTransform object

Looking at the diagnosis files in HANA Studio will give you more information.  Check the scriptserver_alert_<hostname>.trc

Here it appears as though I am out of memory, but this is actually caused by a permissions issue.  Make sure the geo or address files are owned by your HANA <SID>adm OS user.

[65085]{200563}[2/66728980] 2019-04-04 16:08:16.664458 e af_core hanaLogging.cpp(00198) : <GEO0018>: GEO0018GEO_ERROR_DIR_OUT_OF_MEMORY: Out of memory during directory initialization: hana/SDQ_Reference/geo_gb_nt.dir.
[65085]{200563}[2/66728980] 2019-04-04 16:08:16.740559 e af_core hanaTransform.cpp(01240) : GeocodeUK_GEOCODE_TEMPLATE_:1: In HanaTransform, Failed to create transform
[65085]{200563}[2/66728980] 2019-04-04 16:08:16.742133 e CalcEngine cePopAdapter.cpp(00338) : Execution of Adapter operation within node GeocodeUK_GEOCODE_TEMPLATE_ failed: exception 141005: Failed to create or initialize HanaTransform object

What’s New in HANA 2.0 SPS04 – SQL

$
0
0
I would like to provide deeper insight, from a developer’s perspective, into what is newly available in the SQL & SQLScript language features. One of the strategic focuses is becoming more general-purpose, supporting various use cases and applications by extending the coverage of SQL standard functionality and making development for SAP HANA easier. With SPS04, there is a long list of new features to support this.

We will go through a series of blog posts to cover what is newly available in SQL and SQLScript. Let’s start looking into the new features for the SQL language extension.

Transactional Savepoint


If we have to name the most important feature for SPS04, I believe transactional savepoints will be at the top of the list. This feature has been requested for a long time by various customers for more finely controlled transactional processing and is now finally available.

A savepoint allows a transaction to be rolled back to a defined point, enabling partial rollback. Several savepoints can be named at different positions, and the transaction can be rolled back to any specific named position.

The following SQL statements are supported for Transactional Savepoint.

SAVEPOINT <name>             -- Savepoint with user defined name
ROLLBACK TO SAVEPOINT <name> -- Rollback to a Savepoint by name
RELEASE SAVEPOINT <name>     -- Release a Savepoint by name

A regular COMMIT or ROLLBACK will release any savepoints defined for the transaction. The savepoint identifier should be unique within a transaction; if a savepoint is defined with the same name, the existing savepoint will be released and the newly defined savepoint will be kept.
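
A minimal sketch of the flow, assuming a hypothetical ORDERS table and autocommit switched off:

INSERT INTO ORDERS VALUES (1, 'created');
SAVEPOINT after_insert;
UPDATE ORDERS SET STATUS = 'shipped' WHERE ID = 1;
ROLLBACK TO SAVEPOINT after_insert;  -- undoes only the UPDATE
RELEASE SAVEPOINT after_insert;
COMMIT;                              -- commits the INSERT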

The transactional savepoint is also supported as part of the JDBC client interface protocol. As part of the java.sql.Connection class, the following methods are supported.

Savepoint setSavepoint( ) throws SQLException
Savepoint setSavepoint(String <name>) throws SQLException
void rollback(Savepoint <savepoint>) throws SQLException
void releaseSavepoint(Savepoint <savepoint>) throws SQLException

Statement Level Triggers


Another item at the top of the list is statement-level triggers. The trigger is fired once after the INSERT, UPDATE, or DELETE statement it is defined for has executed. Previously, this was only supported for row store tables, but it has now been extended to also support the column store.

Statement-level triggers are very useful for processing bulk data, where the computation is performed once for the entire statement over the whole processed data set instead of individually for each record. The following are examples of how statement-level triggers are defined.

After INSERT statement

The data that has been inserted can be referenced through table <transition_table_new> during the trigger execution.

CREATE TRIGGER <trigger_name>
AFTER INSERT ON <table_name>
REFERENCING NEW TABLE <transition_table_new>
FOR EACH STATEMENT
BEGIN
  -- Trigger Body
END;

After UPDATE statement

The original data can be referenced by table <transition_table_old > and updated data by <transition_table_new> during the trigger execution.

CREATE TRIGGER <trigger_name>
AFTER UPDATE ON <table_name>
REFERENCING NEW TABLE <transition_table_new>
            OLD TABLE <transition_table_old>
FOR EACH STATEMENT
BEGIN
  -- Trigger Body
END;

After DELETE statement

The original data can be referenced by table <transition_table_old > during the trigger execution.

CREATE TRIGGER <trigger_name>
AFTER DELETE ON <table_name>
REFERENCING OLD TABLE <transition_table_old>
FOR EACH STATEMENT
BEGIN
  -- Trigger Body
END;
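
To make this concrete, here is a hedged example that logs how many rows each DELETE statement removed. SALES and SALES_DELETE_LOG are hypothetical tables, and the transition table is read with the usual SQLScript colon prefix:

CREATE TRIGGER SALES_DELETE_AUDIT
AFTER DELETE ON SALES
REFERENCING OLD TABLE deleted_rows
FOR EACH STATEMENT
BEGIN
  -- One INSERT per DELETE statement, not one per deleted row
  INSERT INTO SALES_DELETE_LOG (DELETED_AT, ROW_COUNT)
  SELECT CURRENT_TIMESTAMP, COUNT(*) FROM :deleted_rows;
END;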

Application-time and Bi-temporal tables


System-versioned tables, application-time tables, and bi-temporal tables are part of the SQL:2011 standard. With SPS03, system-versioned tables were first introduced, and the remaining two are now fully supported.

Application-time table

captures the time, in which a record is valid in the business world.

◈ Validity periods are determined by the application.
◈ Application can update validity period of a record, e.g., to correct errors.
◈ Temporal information can arbitrarily reflect the past, present or future at any granularity.

Application-time period tables in SAP HANA:

◈ Based on regular column-store tables.
◈ Only one table.
◈ Validity period can be data type DATE, TIME, TIMESTAMP, …

Special DML operations for updating a record's temporal information:

UPDATE <table> FOR PORTION OF APPLICATION_TIME …

Special SQL syntax, allowing access to time-dependent information:

… FOR APPLICATION_TIME AS OF <timestamp>

Example of an application-time table:

CREATE COLUMN TABLE EMPLOYEE
(
empl_no int,
empl_name nvarchar(200),
empl_department int,

valid_from <some date type> not null,
valid_to <some date type> not null,
period for APPLICATION_TIME (valid_from, valid_to)
)

ALTER TABLE EMPLOYEE ADD PERIOD
FOR APPLICATION_TIME(<VALID_FROM_COLUMN_NAME>, <VALID_TO_COLUMN_NAME>)

SELECT * FROM EMPLOYEE FOR APPLICATION_TIME AS OF '<timestamp>';

UPDATE EMPLOYEE FOR PORTION OF
APPLICATION_TIME FROM <point in time 1> TO <point in time 2>
SET <set clause list>
WHERE <search condition>

System-versioned table

allow change tracking on database tables.

◈ A new record version is inserted for every change to preserve a record's history.
◈ Validity periods are automatically maintained by the database whenever a record is changed.

System-versioned Tables in SAP HANA:

◈ Based on regular column-store tables.
◈ Two structurally equivalent tables: CURRENT and HISTORY.
◈ Tables are automatically combined in the SQL layer.
◈ Validity periods are based on data type TIMESTAMP.
◈ Inherent support for table partitioning and associated features.

Special SQL syntax, allowing access to historic data

SELECT … FOR SYSTEM_TIME AS OF <timestamp>
SELECT … FOR SYSTEM_TIME FROM <t1> TO <t2>
SELECT … FOR SYSTEM_TIME BETWEEN <t1> AND <t2>


Bi-temporal table


combines system-versioned tables for tracking changes with application-time period tables for valid data in the business world.

Lateral Join


Allows a subquery on the right-hand side of a JOIN to reference columns of the table on its left, correlating the two table expressions placed side by side.

◈ Functions like a for-each loop over the left table, joining each row to the rows computed by the right-hand subquery.
◈ Useful when a cross-referenced column is used to compute the rows to be joined.
◈ Only cross products, INNER, and LEFT OUTER joins are supported.

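The original example is a screenshot; as a reconstructed sketch (table and column names are assumed from the description below, with tables TL(a1, a2) and TR(b1, b2)), the query shape is:

SELECT TL.a1, X.b1, X.b2
FROM TL, LATERAL (SELECT b1, b2 FROM TR WHERE TR.b2 < TL.a2) AS X;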

As shown in the example above, the subquery needs to traverse all of TR for each row of TL to check whether the value of column b2 is less than a2.

Collations


Define a sorting order based on language collation settings

◈ The supported collations can be verified with SELECT * FROM COLLATIONS
◈ Currently, 144 collation types are supported
◈ Special collations such as German Phonebook, Chinese Stroke, Case Insensitive (CI), Accent Insensitive (AI), etc. are supported
◈ Accent Insensitive is a superset of Case Insensitive

To sort a query result using a specific collation:

SELECT * FROM <table_name>
ORDER BY COLLATE <collation_name>

A sample of the supported collations, to list just a few:


SELECT…FOR JSON


Converts tabular data to a JSON document string format.

CREATE TABLE LANG (id int, name nvarchar(20));
INSERT INTO LANG VALUES (1, NULL);
INSERT INTO LANG VALUES (2, 'en');
INSERT INTO LANG VALUES (3, 'de');

SELECT id, name from LANG for JSON;

The return data is JSON document string

[{"ID":1},{"ID":2,"NAME":"en"},{"ID":3,"NAME":"de"}]

Additional options for formatting are allowed:

SELECT id, name from LANG for JSON ('format'='yes');

and the return JSON is formatted

"[{
  ""ID"": 1
}, {
  ""ID"": 2,
  ""NAME"": ""en""
}, {
  ""ID"": 3,
  ""NAME"": ""de""
}]"

Additional options allow displaying null values and omitting the array wrap as well:

SELECT id, name from LANG for JSON ('omitnull'='no', 'arraywrap'='no');

and the returned value now includes the null value, with the square brackets removed.

{"ID":1,"NAME":null},{"ID":2,"NAME":"en"},{"ID":3,"NAME":"de"}

Misc Enhancements


The following are additional features

◈ RENAME SCHEMA
◈ CREATE OR REPLACE for views, triggers, and synonyms instead of drop then create (see the sketch after this list)
◈ CTEs within subqueries
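
As a quick sketch of the CREATE OR REPLACE behaviour (hypothetical view and table names):

-- Re-running this statement replaces the view definition without a prior DROP
CREATE OR REPLACE VIEW V_ACTIVE_ORDERS AS
  SELECT ID, STATUS FROM ORDERS WHERE STATUS <> 'closed';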

Handling Non-Cumulative Measures in HANA Calculation Views with Multiple Cartesian Transformation and Single Conversion Matrix

$
0
0

Introduction


The COPA, the forecast and many other S4HANA, ECC and legacy tables contain hundreds of measures in their record structures. This format is not suitable for efficient processing in BI front end tools; e.g., WebI reports.


The WebI P&L reports are performing the best when dealing with only one or few measures at most in a single record. The structure of COPA table does not meet this requirement. There is a need to transpose a record with dozens or hundreds of measures to multiple records with one of few measures only; e.g., current year and previous year values; adding Measure Id column to each record. This could be achieved performing Cartesian Transformation, on records of the original tables.

The Cartesian Transformation could be performed during ETL or ELT processing. It could be also performed dynamically in HANA models.

The above-mentioned documents describe how to transpose dynamically in HANA model a set of N records with M measures to a set of N*M records with a single measure using conversion matrix.

Sometimes, several cumulative measures have to be exposed in the resulting calculation view to correctly calculate a non-cumulative measure in the final HANA model or WebI report.

Usually, Multiple Cartesian Transformations are required to expose multiple numeric columns in resulting calculation view. Each Cartesian Transformation would require a conversion matrix with 0 or 1 flag values.

This blog describes how to handle multiple Cartesian Transformations with a single conversion matrix containing multiple pseudo-binary flags, described in [4] – Pseudo-Binary Operations in SAP HANA Views’ Expressions, to expose the 2 measures used to calculate a non-cumulative measure in the final HANA model; e.g., Store Expenses Per Sq Ft by Region = Store Expenses / Store Sq Ft.

Sample Sales Report


We have the following SALES_BY_REGION table:


The table contains the following data:


We would like to report the following information:

1. SALES
2. EXPENSES
3. PROFIT = SALES – EXPENSES
4. EMPLOYEES
5. SQFT
6. SALES_EMPL = SALES / EMPLOYEES
7. EXP_SQFT = EXPENSES / SQFT

SALES, EXPENSES, EMPLOYEES & SQFT are base measures and could be aggregated in HANA interim models/views.

SALES_EMPL and EXP_SQFT are non-cumulative measures and must be calculated in the final HANA model or WebI report from the aggregated base measures SALES, EMPLOYEES, EXPENSES & SQFT, which are exposed as AMT_A and AMT_B values with a corresponding Measure ID flag.

PROFIT is a cumulative measure and could be aggregated the same way as the base measures or calculated in the final HANA model like the non-cumulative measures.

The example below shows how to use a single Conversion Matrix with pseudo-binary number flags to transpose Source Model measures to the Target Model with 2 numeric columns containing component cumulative measures of non-cumulative measure expression


The concept of pseudo-binary flag is described in detail in [5] – Handling Pseudo-Binary Flags in HANA View Calculated Attribute/Measure Expression

In short, a pseudo-binary flag is a base 10 integer number that contains only digits 0 and/or 1. Each digit in the pseudo-binary number represents a single flag with value 0 for OFF and 1 for ON. The pseudo-binary integer number may hold up to 9 unique flags. The first right-most digit of the pseudo-binary number is considered to be the first flag.

With simple arithmetic on pseudo-binary number, you can get value of any flag; e.g., to retrieve value of the 3-rd flag the following expression has to be evaluated:

(IVAL MOD 200) >= 100 when the 3-rd flag is ON; e.g., 10001010110 MOD 200 = 110
(IVAL MOD 200) < 100 when 3-rd flag is OFF; e.g.,     10001011011 MOD 200 =  11

In this example the Conversion Matrix contains 2-digit pseudo-binary numbers, with the first (right-most) flag representing the numerator conversion values and the second flag containing the denominator conversion values.

The AMT-A values are calculated as follows:

if(("M1" % 2) = 1, "SALES", 0) +
if(("M2" % 2) = 1, "EXPENSES", 0) +
if(("M3" % 2) = 1, "EMPLOYEES", 0) +
if(("M4" % 2) = 1, "SQFT", 0)

The AMT-B values are calculated as follows:

if(("M1" % 20) >= 10, "SALES", 0) +
if(("M2" % 20) >= 10, "EXPENSES", 0) +
if(("M3" % 20) >= 10, "EMPLOYEES", 0) +
if(("M4" % 20) >= 10, "SQFT", 0)

Two cumulative measures are exposed in the Target Model. The final values of non-cumulative measures are calculated in WebI report or final HANA model as follows:

if("OPR" = '=', 
   "AMT-A",
   if("OPR" = 'A/B', 
      "AMT-A"/"AMT-B",
      If("OPR" = 'A-B', 
   "AMT-A"-"AMT-B"
      )
   )
)

Sample HANA Models/Calculation Views


The CA_20_MULTI_CT_WITH_PB_FLAGS calculation view is shown below:


AMT_A and AMT_B calculated measures are implemented as follows:


The conversion matrix with pseudo-binary flags is shown below:


The model produces correct results before and after aggregation as shown below:


The AMT-A and AMT-B columns represent cumulative measures only. They were derived by 2 Cartesian Transformations implemented using single Conversion Matrix with pseudo-binary flags.

The AMOUNT column shows the final results derived in CASE statement executing expression in OPERATION column; i.e., A, A-B or A/B.

The results after aggregation are also correct including SALES/EMPL and EXPENS/SQFT non-cumulative measure values as shown on the following screen:


As before, the AMT-A and AMT-B columns hold only cumulative measures, so they can be aggregated without a problem, and the AMOUNT column shows the final results derived from the expression in the OPERATION column.

Developing with HANA Deployment Infrastructure (HDI) without XSA/CF or Web IDE

$
0
0
While I no longer work within SAP HANA Product Management, in my new role in SAP Cloud Business Group I am still developing on HANA and assisting teams that are doing so as well. Therefore I wanted to do some research on “skinny” local development options for SAP HANA. The goal was to use HANA Express with as small a resource footprint as possible. This meant starting with the server only edition of HANA Express which does not include XSA nor Web IDE for SAP HANA. Yet I still wanted to be able to create database artifacts via the HANA Deployment Infrastructure (HDI) design time approach. So that’s the challenge of this blog entry – “how low can I go”?

Prerequisites


So first I needed to gather some tools which I would install onto my development laptop.

◈ HANA Express – Server Only

Actually you don’t need a local install. I could also connect to a HANA Express or any HANA instance and deploy artifacts remotely. But for my purposes, I wanted to see how small I could get a HANA Express installation locally.

◈ HANA Client 

I want to install the HANA Client on my local laptop so I can use hdbsql to issue SQL commands directly from my development environment. (You might also want to add the HANA client to your PATH to make calling it from the command line easier).

◈ Node.js

As we will see the HDI deployer is just a Node.js application. It doesn’t require XSA to run. We will run the deployer locally from our laptop directly but to do so we need Node.js installed locally as well. This will also allow us to run, test, debug Node.js applications completely local without the need for XSA as well.

◈ Microsoft VS Code (Optional)

We need an editor. Of course you could use Notepad or really any text editor. However we probably want something a little nicer. I chose VS Code because it has great support for Node.js development/debugging, some basic SQL syntax support, and we can even run a command prompt (and therefore hdbsql) from within the tool.

◈ SQuirreL SQL Client (Optional)

Without XSA, we won’t have the SAP Web IDE for SAP HANA, HANA Cockpit, nor Database Explorer locally. We could fall back to HANA Studio, but I hate the idea of depending upon a deprecated tool. Therefore when I really want some graphical data exploration tools, I like this super lightweight open source solution that works well with HANA.

HANA Express


While many people choose to start with the pre-built HANA Express VM or cloud images, I chose to begin with the binary installer in my own Linux VM. This way I’m able to tune the installation scripts even further. I start with the Database Server only version of HXE (which is already sized for around 8Gb machines). I know that I don’t need XS Classic either. Therefore I go into the configuration files (/configurations/custom).

I edit the daemon.ini file to set the webdispatcher instances to 0. If I’m going to completely disable XS Classic, then I also don’t need a Web Dispatcher. Then I edit nameserver.ini and set embedded in the httpserver section to false. HXE will try to run XSEngine (XS Classic) in embedded mode and not as a standalone process. But this configuration turns off embedded mode as well. Therefore it completely disables the XSEngine. The same entry needs to be made in the xsengine.ini file as well – set embedded to false.

I then go forward with the rest of the installation. Without XSA I don’t need nearly as much disk space for the VM nor do I need as much memory. I set my VM to only 8Gb, but I actually could have squeezed it down even further.


So this all results in a nice and skinny HANA installation that runs well within the 8Gb of memory. Perfect for doing development if your laptop doesn’t have much memory to spare.


HDI Bootstrapping


Now comes the slightly tricky part. You want to do HDI development, but your HANA system doesn’t have XSA. Many people have the impression that HDI and XSA are totally dependent upon each other. This is a belief largely supported by the fact that XSA and HDI were introduced at the same time, that those of us at SAP almost always talk about them interchangeably, and that there is some technical intertwining of the two technologies. However we can operate HDI completely without XSA or the Web IDE, as we are about to see.

The first problem we face is that the HDI has a diserver process that isn’t even running in a HANA system normally. It is only started if you run the XSA installer. We can confirm this in our system by opening a terminal in Visual Studio Code and running the hdbsql command.  From there we can write SQL queries just like we would from any SQL Console.  For instance a select * from M_SERVICES will show us the running services on HANA (and there’s no diserver).


Now luckily HDI isn’t locked completely into XSA nor even the Node.js deployer interfaces. There are also SQL APIs for just about anything you’d want to do with HDI.  This is what we will leverage to interact with HDI from HDBSQL directly within VS Code.

So we will begin with a little bootstrap script that only needs to be run once on a new system installation.

https://github.com/jungsap/hdiWithoutXSA/blob/master/scripts/bootstrap.sql

In this script we will add the diserver process. Then we will create an HDI_ADMIN user.  You would run this one-time script as the SYSTEM user, but then afterwards you can do everything via this HDI_ADMIN user.  We will make both SYSTEM and HDI_ADMIN HDI Group admins for the default group of _SYS_DI.

--First/One Time Setup to activate diserver on HANA
DO
BEGIN
  DECLARE dbName NVARCHAR(25) = 'HXE'; --<-- substitute HXE with the name of your tenant DB
  -- Start diserver
  DECLARE diserverCount INT = 0;
  SELECT COUNT(*) INTO diserverCount FROM SYS_DATABASES.M_SERVICES WHERE SERVICE_NAME = 'diserver' AND DATABASE_NAME = :dbName AND ACTIVE_STATUS = 'YES';
  IF diserverCount = 0 THEN
    EXEC 'ALTER DATABASE ' || :dbName || ' ADD ''diserver''';
  END IF;   
  
END;

--One Time Setup - Create HDI_ADMIN User and make SYSTEM and HDI_ADMIN HDI Admins
CREATE USER HDI_ADMIN PASSWORD "&1" NO FORCE_FIRST_PASSWORD_CHANGE;
GRANT USER ADMIN to HDI_ADMIN;
CREATE LOCAL TEMPORARY TABLE #PRIVILEGES LIKE _SYS_DI.TT_API_PRIVILEGES;
INSERT INTO #PRIVILEGES (PRINCIPAL_NAME, PRIVILEGE_NAME, OBJECT_NAME) SELECT 'SYSTEM', PRIVILEGE_NAME, OBJECT_NAME FROM _SYS_DI.T_DEFAULT_DI_ADMIN_PRIVILEGES;
INSERT INTO #PRIVILEGES (PRINCIPAL_NAME, PRIVILEGE_NAME, OBJECT_NAME) SELECT 'HDI_ADMIN', PRIVILEGE_NAME, OBJECT_NAME FROM _SYS_DI.T_DEFAULT_DI_ADMIN_PRIVILEGES;

CALL _SYS_DI.GRANT_CONTAINER_GROUP_API_PRIVILEGES('_SYS_DI', #PRIVILEGES, _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);
DROP TABLE #PRIVILEGES;

If successful, you can run the M_SERVICES query again and you will see you now have the diserver process.


Creating an HDI Container


When working from XSA you just create an instance of the HDI service broker and it does all the work to create the HDI container, users, etc.  It also stores all of this information within the service broker and your application only needs to bind to the service broker to access it.

In this scenario, we are going to have to do all these things that the service broker would have done for us via the HDI SQL APIs. To make this step easier, I’ve created a reusable script that will take two input parameters – a password for the generated users and the name of the container. It then does all the work to create the container, create the users (an object owner and an application user, just like when working from XSA) and set up the default libraries for the container.

--Create Container
CALL _SYS_DI.CREATE_CONTAINER('&2', _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);

DO
BEGIN
  DECLARE userName NVARCHAR(100); 
  DECLARE userDT NVARCHAR(100); 
  DECLARE userRT NVARCHAR(100);   
  declare return_code int;
  declare request_id bigint;
  declare MESSAGES _SYS_DI.TT_MESSAGES;
  declare PRIVILEGES _SYS_DI.TT_API_PRIVILEGES;
  declare SCHEMA_PRIV _SYS_DI.TT_SCHEMA_PRIVILEGES;

  no_params = SELECT * FROM _SYS_DI.T_NO_PARAMETERS;

  SELECT SYSUUID INTO userName FROM DUMMY; 
  SELECT '&2' || '_' || :userName || '_DT' into userDT FROM DUMMY;
  SELECT '&2' || '_' || :userName || '_RT' into userRT FROM DUMMY;  
  EXEC 'CREATE USER ' || :userDT || ' PASSWORD "&1" NO FORCE_FIRST_PASSWORD_CHANGE';
  EXEC 'CREATE USER ' || :userRT || ' PASSWORD "&1" NO FORCE_FIRST_PASSWORD_CHANGE';

  COMMIT;

--Grant Container Admin to Development User(s)
PRIVILEGES = SELECT PRIVILEGE_NAME, OBJECT_NAME, PRINCIPAL_SCHEMA_NAME, (SELECT :userDT FROM DUMMY) AS PRINCIPAL_NAME FROM _SYS_DI.T_DEFAULT_CONTAINER_ADMIN_PRIVILEGES;
CALL _SYS_DI.GRANT_CONTAINER_API_PRIVILEGES('&2', :PRIVILEGES, :no_params, :return_code, :request_id, :MESSAGES); 
select * from :MESSAGES;

--Grant Container User to Development User(s)
SCHEMA_PRIV = SELECT 'SELECT' AS PRIVILEGE_NAME, '' AS PRINCIPAL_SCHEMA_NAME, :userRT AS PRINCIPAL_NAME FROM DUMMY;  
CALL _SYS_DI.GRANT_CONTAINER_SCHEMA_PRIVILEGES('&2', :SCHEMA_PRIV, :no_params, :return_code, :request_id, :MESSAGES);
select * from :MESSAGES;

--Configure Default Libraries for Container
  default = SELECT * FROM _SYS_DI.T_DEFAULT_LIBRARIES;
  CALL _SYS_DI.CONFIGURE_LIBRARIES('&2', :default, :no_params, :return_code, :request_id, :MESSAGES);
  SELECT :userDT as "Object Owner", :userRT as "Application User" from DUMMY;
END;

When I run the script, the output shows the names of the two generated users. This is the not-so-nice part of this approach: you are having to manage these users and passwords directly. This is probably OK for local, private development like this, but nothing you’d want to do in a real, productive environment.


Running the HDI Deployer from local Node.js


The HDI Deployer is really just a Node.js application that doesn’t have any strict requirements upon XSA.  We can run it from the local Node.js runtime on our laptop via the command terminal of VS Code as well.  The only tricky part is getting the connection information and credentials to the deployer.

Within XSA or Cloud Foundry, the deployer would get this information from the service broker as described above. But all the service broker really does is place this information in the VCAP_SERVICES environment variable of your application. Here is what this environment variable looks like:


But the majority of SAP Node.js modules (including the HDI deployer) have a fallback option for local development.  You can simulate this environment binding by simply creating a file named default-env.json. From here we can copy in a VCAP_SERVICES section and change the connection parameters and insert the HDI users which were generated in the previous step.

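The screenshot is not reproduced here; as a rough sketch only (treat the exact field names as assumptions that can vary between hdi-deploy versions), the _DT/_RT users generated by the create-container script go into the credentials block of a hana service entry:

{
  "VCAP_SERVICES": {
    "hana": [
      {
        "name": "hdi-db",
        "label": "hana",
        "plan": "hdi-shared",
        "tags": [ "hana", "database" ],
        "credentials": {
          "host": "<your HANA host>",
          "port": "<tenant SQL port>",
          "schema": "<container name>",
          "user": "<container>_<uuid>_RT",
          "password": "<runtime user password>",
          "hdi_user": "<container>_<uuid>_DT",
          "hdi_password": "<design-time user password>"
        }
      }
    ]
  }
}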

The only warning I have about this default-env.json is that you probably want to be careful not to commit it to Git or transport it downstream since it contains specific user names and passwords. Normally I would filter this file out with a .gitignore rule. However I kept it in this sample repository as a reusable template.

One difference to the Web IDE is that it will automatically run NPM install to pull down any dependent modules. We must do that manually in this environment. Simply run the npm install from the db folder and NPM will read your package.json and install the prerequisites just like from the Web IDE.


After configuring the default-env.json you can now run the Node.js start script for the HDI Deployer and it will deploy your design time artifacts into the target container just like the Build command from the Web IDE.


Your development artifacts are deployed into your container.  You could of course perform HDBSQL queries to confirm this:
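
For example, a simple check along these lines (substitute your container name for the schema) lists what landed in the container schema:

SELECT TABLE_NAME FROM TABLES WHERE SCHEMA_NAME = '<your container name>';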


But this is also where I like to use SQuirreL to browse my container and contents.


Running Node.js Applications


You are actually not limited to just doing HDI development with this approach. My sample application also has Cloud Application Programming Model CDS artifacts and a Node.js OData V4 service in it.  I can run the CDS compile/build commands from the command line as well and even run my Node.js application locally.  It will use the same trick to read the connection parameters and credentials from the default-env.json so it runs fine without the XSA/CF service bindings.


Although the service is running locally from my laptop, it’s making a real connection to the remote HANA DB and retrieving actual data. So I’m developing Node.js and potentially debugging, all locally without any XSA, but still accessing live data from the HANA DB.


Deleting a Container


Maybe you have messed something up and want to start fresh with your container. Or maybe you want to clean up your local system from unused containers.  There is a separate script that calls the HDI APIs to help with dropping a container as well.

CREATE LOCAL TEMPORARY COLUMN TABLE #PARAMETERS LIKE _SYS_DI.TT_PARAMETERS;
INSERT INTO #PARAMETERS ( KEY, VALUE ) VALUES ( 'IGNORE_WORK', true );
INSERT INTO #PARAMETERS ( KEY, VALUE ) VALUES ( 'IGNORE_DEPLOYED', true );
CALL _SYS_DI.DROP_CONTAINER('&2', #PARAMETERS, ?, ?, ?);
DROP TABLE #PARAMETERS;

Closing


So in summary, you certainly can build a very small HANA Express system and do quite complex HDI and Node.js development all in a local environment with 8Gb of memory. There is some manual setup and there are workarounds, but it gives you quite a bit of flexibility to work, especially if your system is memory constrained. Here are a few pros and cons of this approach.

Cons

◈ Graphical Calculation Views. If you need to do Calculation Views you probably still want the Web IDE. Although the XML format of the Calculation View file could be created and edited manually and it does build and deploy from the local HDI approach, I can’t imagine it would be very comfortable to work with the complex XML by hand. In my work I tend to focus on CDS and SQLDDL artifacts which work perfectly fine with a local editor only approach like this.

◈ No Admin/Database Explorer. As seen, you can fill in the gaps here with HDBSQL and/or open source SQL tools. And we are just talking about local, private developer instances. For a real productive system you’d still want the HANA Cockpit for full admin and monitoring.

◈ Transport: There is no MTA deployment in this approach. It’s all manual deployment and you have to remember to run things like npm install. And we have to manage the users and passwords locally. However for private, local development this isn’t so bad. You can then commit your code to Git and run a CI chain from there (using the MTA Builder) and deploy to a full HANA system with XSA or Cloud Foundry. The concepts we are using here don’t break any of that. When you deploy the same content “for real”, the service brokers will still do their job of creating the users and passwords and doing the binding for you.

Pros

◈ Small memory footprint.  Everything I did here could run on a laptop with 8Gb or a very small cloud instance.

◈ Quick build, run and debug. No waiting for XSA/CF deploys. You run almost immediately in your local Node.js environment.

◈ Local IDE. No browser based IDE. Quick response. Split screen editors, SQL syntax plug-ins.  There’s a lot to be said for reusing an existing IDE that has so much community support.

TADIR object types and object descriptions via SQL

$
0
0

Introduction


In our organisation we were upgrading SAP and needed a quick way to associate various objects with their object type and description; however, these were not obviously available via SQL, so a solution was required. The primary table for objects was TADIR, but this only contained a code for the object type, so the description had to be determined from elsewhere. Note: this was required for some quick analysis; for ABAP developers there are standard methods for obtaining descriptions.


In trying to solve this problem I discovered two interesting facts.

1. Object type descriptions are not fully held in tables, some are hardcoded as text symbols.

2. Object descriptions are not stored centrally in a single table, but rather scattered throughout many tables.

Defining the solution


Obtaining Object Type Descriptions


To obtain a full list of object type descriptions I first went into SE11 and looked up table TADIR, then searched through to see if field OBJECT had a value range or value table defined against its domain. It did not. Next I looked to see if it had a search help defined against it. I thought that if I pressed F4 against this field in SE16 I could copy out the produced list of values; however, the display was restricted to an incomplete list of the first 500 values only. So I needed to find out where the search help was retrieving the values from. I drilled into the search help code and added some breakpoints for debugging, then went back into SE16 and pressed F4 again. Through debugging I was able to find the function module SAPLTR_OBJECTS and the two forms involved:

GET_LOGICAL_TYPES

GET_SYSTEM_TYPES

I identified logical types as coming from tables OBJH and OBJT, so I could code against these tables directly in SQL.

System types were coming from text symbols, so could not be coded into SQL using tables.

I was able to export the table PT_SYSTYPE[] from within the debugger after it had been loaded in GET_SYSTEM_TYPES.  I placed this in EXCEL where I could create a calculation to change the values into SQL Insert into statements, so I could then copy these statements into SQL.

I created a temporary table in SQL and loaded both sets of values into it, ready for use when joining to table TADIR.
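
Once loaded, the temporary table can be joined to TADIR to add the description to each object; a hedged sketch of that lookup (keeping the "catalog" placeholder consistent with the code below):

SELECT t.pgmid, t.object, t.obj_name, o.description
FROM tadir AS t
LEFT OUTER JOIN catalog.#object_types AS o
  ON o.pgmid = t.pgmid
 AND o.object_type = t.object;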

Obtaining Object Descriptions


Object descriptions are not centrally located in a single table, as I would have liked, but are found in many tables across the database. The code below does not cover all object types and may not be 100% reliable for object descriptions. It is a best attempt based on my searching through SAP and the Internet for the correct tables to obtain the descriptions. I welcome any corrections and additions in your comments.

The solution I came up with


In the code below you need to change the word “catalog” to your catalog for temporary tables.

/*
First create a table to hold the object type descriptions for field OBJECT from table TADIR
*/

create local temporary table catalog.#object_types(pgmid VARCHAR(4), object_type varchar(4), description VARCHAR(255));

/*
Note: Object type descriptions are not fully held in tables, some are hardcoded.
To obtain the complete list one must open table TADIR in SE16 and do an F4 (Search
Help) on field Object, while in debugger.

There are two lists required to be combined.
- Logical Types
- System Types

Logical types are extracted in program SAPLTR_OBJECTS in form GET_LOGICAL_TYPES.
They are stored in table st_logical_types.  These types are taken from database tables
OBJH and OBJT.

System types are hard coded in program SAPLTR_OBJECTS in form GET_SYSTEM_TYPES.
These can be extracted by running the search help (F4) on column OBJECT of table TADIR,
then placing a breakpoint at the end of form GET_SYSTEM_TYPES, then viewing and exporting
internal table PT_SYSTYPE[].  Once in a spreadsheet, an Excel calculation can be written to
format insert statements as below.  The following Excel calculation converts row 2 of the
exported table, and can be copied down for all rows.
="Insert into catalog.#object_types values('"& B2 & "', '"& C2 & "', '"& D2 & "');"
*/

/*
=================
Get Logical Types
=================
*/

Insert into catalog.#object_types

Select 'R3TR' as pgmid
, objt.objectname as object_type
, coalesce(objt.ddtext,'Logical Transport Object') as description

from objh as objh

left outer join objt as objt
on objt.objectname = objh.objectname
and objt.objecttype = 'L'
and objt.language = 'E'

where objh.objecttype = 'L' ;

/*
=================
Load System Types
=================
*/

Insert into catalog.#object_types values('*', '', 'Comment Line');
Insert into catalog.#object_types values('CORR', 'MERG', 'Comment: Object list was added');
Insert into catalog.#object_types values('CORR', 'PERF', 'Perforce Changelist');
Insert into catalog.#object_types values('CORR', 'RELE', 'Comment Entry: Released');
Insert into catalog.#object_types values('LIMU', 'ADIR', 'Object Directory Entry');
Insert into catalog.#object_types values('LIMU', 'CINC', 'Class Include (ABAP Objects)');
Insert into catalog.#object_types values('LIMU', 'CLSD', 'Class Definition (ABAP Objects)');
Insert into catalog.#object_types values('LIMU', 'COMM', 'Object List of Request or Piece List');
Insert into catalog.#object_types values('LIMU', 'CPRI', 'Private Header (ABAP Objects)');
Insert into catalog.#object_types values('LIMU', 'CPRO', 'Protected Header (ABAP Objects)');
Insert into catalog.#object_types values('LIMU', 'CPUB', 'Public Header (ABAP Objects)');
Insert into catalog.#object_types values('LIMU', 'CUAD', 'GUI Definition');
Insert into catalog.#object_types values('LIMU', 'DEVP', 'Package: Usage');
Insert into catalog.#object_types values('LIMU', 'DOCU', 'Documentation');
Insert into catalog.#object_types values('LIMU', 'DOMD', 'Domain Definition');
Insert into catalog.#object_types values('LIMU', 'DTED', 'Data Element Definition');
Insert into catalog.#object_types values('LIMU', 'DYNP', 'Screen');
Insert into catalog.#object_types values('LIMU', 'ENQD', 'Lock Object Definition');
Insert into catalog.#object_types values('LIMU', 'FSEL', 'Field Selection');
Insert into catalog.#object_types values('LIMU', 'FUNC', 'Function Module');
Insert into catalog.#object_types values('LIMU', 'FUGT', 'Function Group Texts');
Insert into catalog.#object_types values('LIMU', 'HOTO', 'Single Object (SAP HANA Transport for ABAP)');
Insert into catalog.#object_types values('LIMU', 'HOTP', 'Package Metadata (SAP HANA Transport for ABAP)');
Insert into catalog.#object_types values('LIMU', 'INDX', 'Table Index');
Insert into catalog.#object_types values('LIMU', 'INTD', 'Interface Definition (ABAP Objects)');
Insert into catalog.#object_types values('LIMU', 'MAPP', 'Mapping Information (ABAP Objects)');
Insert into catalog.#object_types values('LIMU', 'MCOD', 'Matchcode Object Definition');
Insert into catalog.#object_types values('LIMU', 'MESS', 'Single Message');
Insert into catalog.#object_types values('LIMU', 'METH', 'Method (ABAP Objects)');
Insert into catalog.#object_types values('LIMU', 'MSAD', 'Message Class: Definition and All Short Texts');
Insert into catalog.#object_types values('LIMU', 'PIFA', 'Package Interface: Assignments');
Insert into catalog.#object_types values('LIMU', 'PIFH', 'Package Interface: Header Information');
Insert into catalog.#object_types values('LIMU', 'REPO', 'Report Program Source Code and Texts');
Insert into catalog.#object_types values('LIMU', 'REPS', 'Report Source Code');
Insert into catalog.#object_types values('LIMU', 'REPT', 'Report Texts');
Insert into catalog.#object_types values('LIMU', 'SHLD', 'Search Help Definition');
Insert into catalog.#object_types values('LIMU', 'SHLX', 'Text Component of Search Help Definition');
Insert into catalog.#object_types values('LIMU', 'SOTT', 'Concept (Online Text Repository) - Short Texts');
Insert into catalog.#object_types values('LIMU', 'SOTU', 'Concept (Online Text Repository) - Long Texts');
Insert into catalog.#object_types values('LIMU', 'SQLD', 'Pool, Cluster Definition');
Insert into catalog.#object_types values('LIMU', 'SQTT', 'Technical Attributes for Pool/Cluster');
Insert into catalog.#object_types values('LIMU', 'TABD', 'Table Definition');
Insert into catalog.#object_types values('LIMU', 'TABT', 'Technical Attributes of a Table');
Insert into catalog.#object_types values('LIMU', 'TTYD', 'Table Type Definition');
Insert into catalog.#object_types values('LIMU', 'TTYX', 'Text Component of Table Type Definition');
Insert into catalog.#object_types values('LIMU', 'TYPD', 'Type Group Source Code');
Insert into catalog.#object_types values('LIMU', 'VARI', 'Report Program System Variant');
Insert into catalog.#object_types values('LIMU', 'VARX', 'Report Program Application Variant');
Insert into catalog.#object_types values('LIMU', 'VIED', 'View Definition');
Insert into catalog.#object_types values('LIMU', 'VIET', 'Technical Attributes of a View');
Insert into catalog.#object_types values('LIMU', 'WAPD', 'Definition of BSP Application');
Insert into catalog.#object_types values('LIMU', 'WAPP', 'Page/Controller of a BSP Application');
Insert into catalog.#object_types values('LIMU', 'WDYC', 'Controller (Web Dynpro)');
Insert into catalog.#object_types values('LIMU', 'WDYD', 'Definition (Web Dynpro)');
Insert into catalog.#object_types values('LIMU', 'WDYV', 'View (Web Dynpro)');
Insert into catalog.#object_types values('LIMU', 'XIND', 'Extension Index Definition');
Insert into catalog.#object_types values('R3TR', 'APPL', 'Application Class');
Insert into catalog.#object_types values('R3TR', 'CDAT', 'View Cluster Maintenance: Data');
Insert into catalog.#object_types values('R3TR', 'CLAS', 'Class (ABAP Objects)');
Insert into catalog.#object_types values('R3TR', 'CNTX', 'Context');
Insert into catalog.#object_types values('R3TR', 'DDDD', 'Changes to Nametab Structure');
Insert into catalog.#object_types values('R3TR', 'DDLS', 'Data Definition Language Source');
Insert into catalog.#object_types values('R3TR', 'DEVC', 'Package');
Insert into catalog.#object_types values('R3TR', 'DIAL', 'Dialog Module');
Insert into catalog.#object_types values('R3TR', 'DOCT', 'General Text');
Insert into catalog.#object_types values('R3TR', 'DOCV', 'Documentation (Independent)');
Insert into catalog.#object_types values('R3TR', 'DOMA', 'Domain');
Insert into catalog.#object_types values('R3TR', 'DSEL', 'Selection View');
Insert into catalog.#object_types values('R3TR', 'DRPM', 'Dictionary Replication Metadata');
Insert into catalog.#object_types values('R3TR', 'DSYS', 'Chapter of a Book Structure');
Insert into catalog.#object_types values('R3TR', 'DTEL', 'Data Element');
Insert into catalog.#object_types values('R3TR', 'ENQU', 'Lock Object');
Insert into catalog.#object_types values('R3TR', 'FORM', 'SAPscript Form');
Insert into catalog.#object_types values('R3TR', 'FUGR', 'Function Group');
Insert into catalog.#object_types values('R3TR', 'FUGS', 'Function Group with Customer Include: SAP Part');
Insert into catalog.#object_types values('R3TR', 'FUGX', 'Function Group with Customer Include: Customer Part');
Insert into catalog.#object_types values('R3TR', 'HOTA', 'Full Package (SAP HANA Transport for ABAP)');
Insert into catalog.#object_types values('R3TR', 'INTF', 'Interface (ABAP Objects)');
Insert into catalog.#object_types values('R3TR', 'LDBA', 'Logical Database');
Insert into catalog.#object_types values('R3TR', 'MCID', 'Matchcode ID');
Insert into catalog.#object_types values('R3TR', 'MCOB', 'Matchcode Object');
Insert into catalog.#object_types values('R3TR', 'MSAG', 'Message Class');
Insert into catalog.#object_types values('R3TR', 'PARA', 'SPA/GPA Parameters');
Insert into catalog.#object_types values('R3TR', 'PINF', 'Package interface');
Insert into catalog.#object_types values('R3TR', 'PROG', 'Program');
Insert into catalog.#object_types values('R3TR', 'SHLP', 'Search Help');
Insert into catalog.#object_types values('R3TR', 'SOTR', 'All Concepts (OTR) of a Package - Short Texts');
Insert into catalog.#object_types values('R3TR', 'SOTS', 'All Concepts (OTR) of a Package - Long Texts');
Insert into catalog.#object_types values('R3TR', 'SQLT', 'Pooled/Cluster Table');
Insert into catalog.#object_types values('R3TR', 'SQSC', 'Database Procedure Proxy');
Insert into catalog.#object_types values('R3TR', 'STOB', 'Structured Object');
Insert into catalog.#object_types values('R3TR', 'STYL', 'SAPscript Style');
Insert into catalog.#object_types values('R3TR', 'SYAG', 'System Log Messages');
Insert into catalog.#object_types values('R3TR', 'SYND', 'Syntax Documentation');
Insert into catalog.#object_types values('R3TR', 'TABL', 'Table');
Insert into catalog.#object_types values('R3TR', 'TABU', 'Table Contents');
Insert into catalog.#object_types values('R3TR', 'TDAT', 'Customizing: Table Contents');
Insert into catalog.#object_types values('R3TR', 'TEXT', 'SAPscript Text');
Insert into catalog.#object_types values('R3TR', 'TOBJ', 'Definition of a Maintenance and Transport Object');
Insert into catalog.#object_types values('R3TR', 'TRAN', 'Transaction');
Insert into catalog.#object_types values('R3TR', 'TTYP', 'Table Type');
Insert into catalog.#object_types values('R3TR', 'TYPE', 'Type Group');
Insert into catalog.#object_types values('R3TR', 'VDAT', 'View Maintenance: Data');
Insert into catalog.#object_types values('R3TR', 'VERS', 'Version Number');
Insert into catalog.#object_types values('R3TR', 'VIEW', 'View');
Insert into catalog.#object_types values('R3TR', 'WAPA', 'BSP (Business Server Pages) Application');
Insert into catalog.#object_types values('R3TR', 'WDYN', 'Web Dynpro Component');
Insert into catalog.#object_types values('R3TR', 'XINX', 'Ext. Index');
Insert into catalog.#object_types values('R3TR', 'XPRA', 'Program Run after Transport');

/*
================================
Select TADIR entries for Package
================================
*/

Select tadi.devclass
, tadi.object
, objt.description
, tadi.obj_name
, case tadi.object
when 'TABL' then tabl.ddtext -- Table
when 'TTYP' then ttyp.ddtext -- Table type
when 'VIEW' then view.ddtext -- view
when 'ENQU' then view.ddtext -- lock object
when 'PROG' then prog.text -- program
when 'REPO' then prog.text -- report
when 'FUGR' then fugr.areat -- function group
when 'DOMA' then doma.ddtext -- domain
when 'DTEL' then dtel.ddtext -- data elements
when 'SHLP' then shlp.ddtext -- search help
when 'MSGC' then shlp.ddtext -- Message class (not sure of this one)
when 'DEVC' then devc.ctext -- Package
when 'AUTH' then auth.ttext -- Authorisation object??
when 'TRAN' then tran.ttext -- transaction code
when 'SUSO' then auth.ttext -- Authorisation object
when 'SUST' then auth.ttext --
when 'SSFO' then ssfo.caption -- smartforms
when 'SSST' then ssst.caption -- smartstyles
when 'FORM' then stxh.tdtitle -- SAP Script
when 'ACID' then acid.descript -- Checkpoint group
when 'CLAS' then clas.descript -- Class
when 'INTF' then clas.descript -- Interface
when 'VCLS' then vcls.objecttext -- view cluster
when 'CUS0' then cus0.text -- Customizing Object ??
when 'CUS1' then cus0.text -- Customizing Object ??
when 'CUS2' then cus0.text -- Customizing Object ??
when 'SOBJ' then sobj.ntext -- Texts Basic Data
when 'MSAG' then msag.stext -- Table T100A text
when 'UDMO' then udmo.langbez -- DM Data Model Short Text
when 'UENO' then ueno.langbez -- DM Entity Type Short Text
when 'PDTS' then pdts.stext -- Standard Infotype 1000 (SAP) Object Existence
when 'PDWS' then pdts.stext -- Standard Infotype 1000 (SAP) Object Existence
when 'PDAC' then pdac.short -- View: Text Table for Rules
when 'PARA' then para.partext -- Memory ID Short Texts
when 'SCAT' then scat.ktext -- CATT: Basic Texts for Test Procedure
when 'CMOD' then cmod.modtext -- Enhancement Projects - Short Texts
when 'LDBA' then ldba.ldbtext -- Texts for logical databases
when 'AQBG' then aqbg.text -- SAP Query: Texts for User Groups  ??
when 'AQQU' then aqqu.text -- SAP Query: Texts for Queries  ??
when 'AQSG' then aqsg.text -- SAP Query: Texts for Functional Areas  ??
when 'MCOB' then mcob.mctext -- AS400-T_MCOBJECT: MC Object Texts
end as Description

from tadir as tadi

left outer join catalog.#object_types as objt
on objt.object_type = tadi.object

left outer join trdirt as prog -- program descriptions
on prog.sprsl = 'E'
and prog.name = tadi.obj_name

left outer join dd02t as tabl -- table descriptions
on tabl.ddlanguage = 'E'
and tabl.tabname = tadi.obj_name

left outer join tlibt as fugr -- function group descriptions
on fugr.spras = 'E'
and fugr.area = tadi.obj_name

left outer join dd01t as doma -- domain descriptions
on doma.ddlanguage = 'E'
and doma.domname = tadi.obj_name

left outer join dd04t as dtel -- data elements
on dtel.ddlanguage = 'E'
and dtel.rollname = tadi.obj_name

left outer join dd25t as view -- view descriptions
on view.ddlanguage = 'E'
and view.viewname = tadi.obj_name

left outer join dd30t as shlp -- search help descriptions
on shlp.ddlanguage = 'E'
and shlp.shlpname = tadi.obj_name

left outer join dd40t as ttyp -- table type descriptions
on ttyp.ddlanguage = 'E'
and ttyp.typename = tadi.obj_name

left outer join t100a as msgc -- message class descriptions
on msgc.masterlang = 'E'
and msgc.arbgb = tadi.obj_name

left outer join tdevct as devc -- package descriptions
on devc.spras = 'E'
and devc.devclass = tadi.obj_name

left outer join tobjt as auth -- authorization object descriptions
on auth.langu = 'E'
and auth.object = tadi.obj_name

left outer join tstct as tran -- transaction code descriptions
on tran.sprsl = 'E'
and tran.tcode = tadi.obj_name

left outer join stxfadmt as ssfo -- smartforms descriptions
on ssfo.langu = 'E'
and ssfo.formname = tadi.obj_name

left outer join stxsadmt as ssst -- smartstyles descriptions
on ssst.langu = 'E'
and ssst.stylename = tadi.obj_name

left outer join stxh as stxh -- SAP Script descriptions
on stxh.tdspras = 'E'
and stxh.tdobject = tadi.object
and stxh.tdname = tadi.obj_name

left outer join seoclasstx as clas -- Class descriptions
on clas.langu = 'E'
and clas.clsname = tadi.obj_name

left outer join AAB_ID_PROPT as acid -- Checkpoint group
on acid.langu = 'E'
and acid.name = tadi.obj_name

left outer join vclstruct as vcls -- View cluster
on vcls.spras = 'E'
and vcls.vclname = tadi.obj_name

left outer join cus_actobt as cus0 -- Customizing
on cus0.spras = 'E'
and (cus0.act_id = tadi.obj_name
or cus0.objectname = tadi.obj_name)

left outer join t100t as msag -- Table T100A text
on msag.sprsl = 'E'
and msag.arbgb = tadi.obj_name

left outer join tojtt as sobj -- Texts Basic Data
on sobj.language = 'E'
and sobj.name = tadi.obj_name

left outer join dm40t as udmo -- DM Data Model Short Text
on udmo.sprache = 'E'
and udmo.dmoid = tadi.obj_name

left outer join dm02t as ueno -- DM Entity Type Short Text
on ueno.sprache = 'E'
and ueno.entid = tadi.obj_name

left outer join hrs1000 as pdts -- Standard Infotype 1000 (SAP) Object Existence
on pdts.langu = 'E'
and pdts.objid = tadi.obj_name

left outer join v_actext as pdac -- View: Text Table for Rules
on pdac.langu = 'E'
and pdac.objid = tadi.obj_name

left outer join tparat as para -- Memory ID Short Texts
on para.sprache = 'E'
and para.paramid = tadi.obj_name

left outer join catg as scat -- CATT: Basic Texts for Test Procedure
on scat.spras = 'E'
and scat.ablnr = tadi.obj_name

left outer join modtext as cmod -- Enhancement Projects - Short Texts
on cmod.sprsl = 'E'
and cmod.name = tadi.obj_name

left outer join ldbt as ldba -- Texts for logical databases
on ldba.spras = 'E'
and ldba.ldbname = tadi.obj_name

left outer join aqgtb as aqbg -- SAP Query: Texts for User Groups
on aqbg.sprsl = 'E'
and aqbg.num = tadi.obj_name

left outer join aqgtq as aqqu -- SAP Query: Texts for Queries
on aqqu.sprsl = 'E'
and aqqu.qnum = tadi.obj_name

left outer join aqgts as aqsg -- SAP Query: Texts for Functional Areas
on aqsg.sprsl = 'E'
and aqsg.clas = tadi.obj_name

left outer join dd20t as mcob -- AS400-T_MCOBJECT: MC Object Texts
on mcob.ddlanguage = 'E'
and mcob.mconame = tadi.obj_name

where tadi.devclass = 'FKKBI'

order by object;

/*
     ========================
     Drop table on completion
     ========================
*/

drop table catalog.#object_types;

How to pivot/unpivot in SAP HANA


Introduction


Currently there is no built-in pivot/unpivot function in HANA. In this blog you will find a workaround showing how to implement this in SQLScript.

Pivot/Unpivot


Pivoting is the transformation of rows into columns.

Unpivoting is the transformation of columns into rows.

Idea & Solution


Even though no built-in function exists in HANA, there are workarounds for doing this in SQL. For the pivot function it is possible to create new columns with CASE/WHEN and filter for particular values.

Pivot


First of all, we will create a test table with three columns. Let’s call the table TEST_A and the columns PROD, DATE_YEAR and SALES.

CREATE TABLE TEST_A(
PROD VARCHAR(1),
DATE_YEAR INT,
SALES INT);

Insert some data into the table and display the values. 

INSERT INTO TEST_A VALUES ('A',2015,123456);
INSERT INTO TEST_A VALUES ('A',2016,234567);
INSERT INTO TEST_A VALUES ('A',2017,345678);
INSERT INTO TEST_A VALUES ('A',2018,456789);
INSERT INTO TEST_A VALUES ('A',2019,567890);

PROD  DATE_YEAR  SALES
A     2015       123456
A     2016       234567
A     2017       345678
A     2018       456789
A     2019       567890

Now we can pivot the values with CASE/WHEN

SELECT
PROD,
SUM(CASE WHEN DATE_YEAR = 2015 THEN SALES END) AS YEAR_2015,
SUM(CASE WHEN DATE_YEAR = 2016 THEN SALES END) AS YEAR_2016,
SUM(CASE WHEN DATE_YEAR = 2017 THEN SALES END) AS YEAR_2017,
SUM(CASE WHEN DATE_YEAR = 2018 THEN SALES END) AS YEAR_2018,
SUM(CASE WHEN DATE_YEAR = 2019 THEN SALES END) AS YEAR_2019 
FROM TEST_A 
GROUP BY PROD;

The result of the query is:

PROD  YEAR_2015  YEAR_2016  YEAR_2017  YEAR_2018  YEAR_2019
A     123456     234567     345678     456789     567890

Well, you don’t want to write this code every time you need to pivot a table. Because of this, we can use SQLScript to generate this code and execute it directly. If you have trouble executing this code, you can also use just CREATE instead of CREATE OR REPLACE.

CREATE OR REPLACE PROCEDURE P_PIVOT(
IN table_name VARCHAR(127),
IN schema_name VARCHAR(127) , 
IN select_columns VARCHAR(2147483647),
IN agg_typ VARCHAR(20),
IN agg_column VARCHAR(127),
IN pivot_column VARCHAR(2147483647),
IN pivot_values VARCHAR(2147483647))
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
READS SQL DATA AS
BEGIN
USING SQLSCRIPT_STRING AS lib;
DECLARE v_table_name VARCHAR(127) = table_name;
DECLARE v_schema_name VARCHAR(127) = schema_name;
DECLARE v_select_columns VARCHAR(2147483647) = select_columns;
DECLARE v_agg_typ VARCHAR(20) = agg_typ;
DECLARE v_agg_column VARCHAR(127) = agg_column;
DECLARE v_pivot_column VARCHAR(2147483647) = pivot_column;
DECLARE v_pivot_values VARCHAR(2147483647) = pivot_values;

DECLARE v_count INT= 0;
DECLARE v_idx INT;
DECLARE v_statement VARCHAR(2147483647);
DECLARE a_pivot_values VARCHAR(127) ARRAY;

/*
if all columns needed, use * to get the column names except v_agg_column,v_pivot_column
*/
IF v_select_columns = '*'
THEN
SELECT string_agg(column_name,',' ORDER BY position) INTO v_select_columns FROM sys.columns 
WHERE table_name = v_table_name
AND schema_name = v_schema_name 
AND column_name NOT IN (v_agg_column,v_pivot_column);
END IF;
/*
if all values in the pivot column should use for pivoting
*/
IF v_pivot_values = '*'
THEN
EXECUTE IMMEDIATE ('select string_agg('||:v_pivot_column ||', '','' order by ' || :v_pivot_column || ') from (select distinct ' || :v_pivot_column || ' from '|| :v_table_name || ')') into v_pivot_values;
END IF;

a_pivot_values := LIB:split_to_array( :v_pivot_values, ',' );
v_count = cardinality(:a_pivot_values);
v_statement = 'select ' || v_select_columns;

/*
generate the statement
*/
FOR v_idx IN 1.. v_count DO
v_statement = v_statement || ', ' || v_agg_typ || '(case when ' || v_pivot_column || ' = ' || :a_pivot_values[:v_idx] || ' then ' || v_agg_column || ' end ) as ' || v_pivot_column || '_' || :a_pivot_values[:v_idx];
END FOR;
v_statement = v_statement || ' from ' || v_table_name || ' group by ' || v_select_columns;
/*
execute the statement
*/
EXECUTE IMMEDIATE v_statement;
END;

You can call this procedure with the parameters listed below:

Parameter name   Description
table_name       Name of the table to pivot
schema_name      Name of the schema where the table is created
select_columns   List of columns to display, or *
agg_typ          Type of aggregation, e.g. sum, count, min, max
agg_column       Column to aggregate
pivot_column     Column to pivot
pivot_values     List of values in the pivot_column to generate as separate columns, or *. Please consider the maximum number of columns.

For example, the procedure can be called like this:

CALL P_PIVOT('TEST_A', '', 'PROD','SUM','SALES','DATE_YEAR','*');
CALL P_PIVOT('TEST_A', '', 'PROD','SUM','SALES','DATE_YEAR','2015,2016');
CALL P_PIVOT('TEST_A', 'SYSTEM', '*','SUM','SALES','DATE_YEAR','*');

Unpivot


The unpivot functionality can be realized by using the MAP function and SERIES_GENERATE_INTEGER. As described above, unpivoting means generating rows out of columns. The function SERIES_GENERATE_INTEGER can be used to generate an integer table, which can be joined with the source table to generate multiple rows.
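
If you have not used MAP before, it behaves like an inline lookup: it compares its first argument against the search values and returns the matching result (or the optional default). A quick illustration on DUMMY:

SELECT MAP(2, 1, 'one', 2, 'two', 'other') AS MAPPED FROM DUMMY;
-- returns 'two'; MAP(5, 1, 'one', 2, 'two', 'other') would return 'other'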

We create now a table with product and several sales years and call it TEST_B.

CREATE TABLE TEST_B(
PROD VARCHAR(1),
SALES_YEAR_2015 INT,
SALES_YEAR_2016 INT,
SALES_YEAR_2017 INT,
SALES_YEAR_2018 INT,
SALES_YEAR_2019 INT);

Insert data into the table and display.

INSERT INTO TEST_B VALUES('A', 123456, 234567, 345678, 456789, 567890);
INSERT INTO TEST_B VALUES('B', 123456, 234567, 345678, 456789, null);

PROD  SALES_YEAR_2015  SALES_YEAR_2016  SALES_YEAR_2017  SALES_YEAR_2018  SALES_YEAR_2019
A     123456           234567           345678           456789           567890
B     123456           234567           345678           456789           (null)

Now we can use the functions mentioned above to generate the unpivot table:

SELECT
PROD,
MAP(element_number,
1, '2015',
2, '2016',
3, '2017',
4, '2018',
5, '2019') AS "DATE_YEAR",
MAP(element_number ,
1, SALES_YEAR_2015,
2, SALES_YEAR_2016,
3, SALES_YEAR_2017,
4, SALES_YEAR_2018,
5, SALES_YEAR_2019) AS "SALES" 
FROM TEST_B,
SERIES_GENERATE_INTEGER(1, 1, 6) 
ORDER BY element_number;

The result of the query is:

PRODDATE_YEAR  SALES 
A2015123.456
2015 123.456 
2016 234.567
2016 234.567 
2017 345.678 
2017 345.678 
2018 456.789 
2018 456.789 
2019 567.890 
2019 

To automatically generate this code, the following procedure can be used:

CREATE OR REPLACE PROCEDURE P_UNPIVOT(
IN table_name VARCHAR(127),
IN schema_name VARCHAR(127) , 
IN select_columns VARCHAR(2147483647),
IN unpivot_val_name VARCHAR(127),
IN unpivot_col_name VARCHAR(127),
IN unpivot_columns VARCHAR(2147483647),
IN unpivot_column_values VARCHAR(2147483647),
IN include_nulls BOOLEAN DEFAULT TRUE)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
READS SQL DATA AS
BEGIN
USING SQLSCRIPT_STRING AS lib;
DECLARE v_table_name VARCHAR(127) = table_name;
DECLARE v_schema_name VARCHAR(127) = schema_name;
DECLARE v_select_columns VARCHAR(2147483647) = select_columns;
DECLARE v_unpivot_val_name VARCHAR(127) = unpivot_val_name;
DECLARE v_unpivot_col_name VARCHAR(127) = unpivot_col_name;
DECLARE v_unpivot_columns VARCHAR(2147483647) = unpivot_columns;
DECLARE v_unpivot_column_values VARCHAR(2147483647) = unpivot_column_values;
DECLARE v_count INT= 0;
DECLARE v_idx INT;
DECLARE v_statement VARCHAR(2147483647);
DECLARE v_statement_map1 VARCHAR(2147483647) = '';
DECLARE v_statement_map2 VARCHAR(2147483647) = '';
DECLARE v_include_nulls BOOLEAN = include_nulls;
DECLARE a_unpivot_columns VARCHAR(127) array;
DECLARE a_unpivot_column_values VARCHAR(100) array;

a_unpivot_columns = LIB:split_to_array( :v_unpivot_columns, ',' );
a_unpivot_column_values = LIB:split_to_array( :v_unpivot_column_values, ',' );
v_count = cardinality(:a_unpivot_columns);
tbl_pivot_columns = UNNEST(:a_unpivot_columns) AS ("EXCLUDE_COLUMNS");

/*
if all columns needed, use * to get the column names except tbl_pivot_columns
*/
IF v_select_columns = '*'
THEN
SELECT string_agg(column_name,',' ORDER BY position) INTO v_select_columns FROM sys.columns 
WHERE table_name = v_table_name
AND schema_name = v_schema_name 
AND column_name NOT IN (SELECT EXCLUDE_COLUMNS FROM :tbl_pivot_columns);
end IF;

v_statement = 'select ' || v_select_columns || ', ' || 'map(element_number';

/*
generate the statement
*/
FOR v_idx IN 1.. v_count DO
v_statement_map1 = v_statement_map1 || ',' || v_idx  || ',''' || :a_unpivot_column_values[:v_idx] || '''' ;
v_statement_map2 = v_statement_map2 || ',' || v_idx  || ',' || :a_unpivot_columns[:v_idx]  ;
END FOR;
v_statement = v_statement || v_statement_map1 || ') as "'|| v_unpivot_col_name || '", map(element_number ' || v_statement_map2 || ') as "' || v_unpivot_val_name || '" from ' || v_table_name || ', SERIES_GENERATE_INTEGER(1,1,' || v_count+1 || ') order by element_number';

IF v_include_nulls = FALSE
THEN
v_statement = 'select * from (' || v_statement || ' ) where ' || v_unpivot_val_name || ' is not null';
END IF;

EXECUTE IMMEDIATE v_statement;

END;

The input parameters are:

Parameter name          Description
table_name              Name of the table to unpivot
schema_name             Name of the schema where the table is created
select_columns          List of columns to display, or *
unpivot_val_name        Name of the column that will hold the unpivoted values
unpivot_col_name        Name of the column that will hold the unpivoted column names
unpivot_columns         List of columns to unpivot
unpivot_column_values   Values for the unpivoted columns
include_nulls           Whether null values should be displayed; default TRUE

For example, the procedure can be called like this:

CALL P_UNPIVOT('TEST_B','SYSTEM','*','VAL','JAHR','SALES_YEAR_2015,SALES_YEAR_2016,SALES_YEAR_2017,SALES_YEAR_2018,SALES_YEAR_2019','2015,2016,2017,2018,2019',FALSE);
CALL P_UNPIVOT('TEST_B','SYSTEM','*','VAL','JAHR','SALES_YEAR_2015,SALES_YEAR_2016,SALES_YEAR_2017,SALES_YEAR_2018,SALES_YEAR_2019','2015,2016,2017,2018,2019');

Further steps


Because the created table structure is not known at compile time, no table-typed output parameter can be used. To store the result for later usage, you can add the following code to the procedures to store the result in a dynamically generated table. In addition, you need to define the variable v_new_table_name, which defines the table to be used for storage. Please note that the use of dynamically executed DDL statements is not recommended:

if exists(select 1 from tables where table_name = v_new_table_name and schema_name = v_schema_name)
then
execute immediate 'drop table ' || v_new_table_name;
end if;

execute immediate 'create table ' || v_new_table_name || ' as ( ' || v_statement || ')';

The procedures use dynamic SQL to execute the generated SELECT statement.

Hands on Tutorial PAL in HANA for Customer Churn Analysis for online retail

Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning, that analyze current and historical facts to make predictions about future or otherwise unknown events.

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns.

Customer churn refers to when a customer (player, subscriber, user, etc.) ceases his or her relationship with a company. Online businesses typically treat a customer as churned once a particular amount of time has elapsed since the customer’s last interaction with the site or service. The full cost of customer churn includes both lost revenue and the marketing costs involved with replacing those customers with new ones. Reducing customer churn is a key business goal of every online business.

The ability to predict that a particular customer is at a high risk of churning, while there is still time to do something about it, represents a huge additional potential revenue source for every business.

This customer churn analysis tutorial will teach you how to perform customer churn analysis for an online shop.
The data set for the tutorial is available here.

The following tutorial is done on this environment:

BW 7.5 SP07 (BPC 10.1 is built-in), HANA 1.0 SP12 (rev 122)

Before getting started make sure that you have the following roles.

Security


1. To execute AFL functions you need to have the following roles:
           
              AFL__SYS_AFL_AFLPAL_EXECUTE

              AFL__SYS_AFL_AFLPAL_EXECUTE_WITH_GRANT_OPTION

2. To generate or drop PAL procedures:

               AFLPM_CREATOR_ERASER_EXECUTE

3. The PAL procedure can be created under any schema once you have the CREATE ANY privilege on it.

Let the database user for this example be PALUSER.
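
As a sketch, granting these to PALUSER might look like the following, run by an administrator; MY_SCHEMA is a placeholder for the schema that will hold the PAL artifacts:

GRANT AFL__SYS_AFL_AFLPAL_EXECUTE TO PALUSER;
GRANT AFL__SYS_AFL_AFLPAL_EXECUTE_WITH_GRANT_OPTION TO PALUSER;
GRANT AFLPM_CREATOR_ERASER_EXECUTE TO PALUSER;
-- schema privilege so the PAL procedures can be created in that schema (MY_SCHEMA is an example)
GRANT CREATE ANY ON SCHEMA MY_SCHEMA TO PALUSER;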

About Dataset and Predictive Model Used


Based on this data we want to predict which customers are most likely to churn. To find patterns in historic data and to train this model the following data set is used:

Customer ID: Unique ID for each customer
Usage Category (Month): Number of times the customer used the online shopping portal in the current month
Average Usage (Year): Average number of times the customer used the online shopping portal in the past year
Usage Category (previous Month): Number of times the customer used the online shopping portal in the previous month
Service Type: Flag whether the customer has premium or standard service
Product Category: Product category the customer orders most frequently, in this case only pharma or beauty
Message Allowance: Flag whether the customer wants to receive messages
Average Marketing Activity (Bi-yearly): Average number of marketing activities for the customer in the past two years
Average Visit Time (min): Average time the customer spent on the online shopping portal at each visit
Pages per Visit: Average number of pages the customer visits on the online shopping portal at each visit
Delta Revenue (Previous Month): Difference between this customer's revenue in the current month and in the previous month
Revenue (Current Month): Revenue by this customer in the current month
Service Failure Rate (%): Percentage of times the customer used the online shopping portal and certain services failed
Customer Lifetime (days): Number of days since the customer registered
Product Abandonment: Number of products the customer has put in the shopping cart and then abandoned in the last quarter
Contract Activity: Flag whether the customer has churned or is active

The downloaded dataset was divided into two unequal parts:

1. Training data set: randomly selected 1000-1200 records.
2. Validation data set: two copies of this part are maintained, one without Contract Activity and one with it.

The validation data set without Contract Activity is used for validation, and the result is matched against the actual values in the other copy.

PAL Functions Used:

1. CREATEDT: This stands for "create decision tree" and allows the model to be exported in JSON and PMML formats. Later on we can predict on actual data based on this model, and the PMML format can also be used in the R language.

2. PREDICTWITHDT

This function uses the trained JSON model created by CREATEDT together with the actual data to predict which class each new record belongs to and the probability of it falling in that class.

Now that we have downloaded the dataset and know the predictive models to be used in the exercise; let’s get started with the Hands on.

Training Phase


First we will create a decision tree and train it with the training data set.

STEP 1: Import Training and Validation Data set in HANA

◈ Open SAP HANA modeler perspective in HANA Studio.
◈ Click on import in the Quick view.
◈ Select “Data From Local File” >> Select Target System >> Define import properties as below.

1. Browse the downloaded file.
2. Select Worksheet “Training”.
3. Check “Header Row Exist” and equal to 1.
4. Check “import all data” with “Start Line” equal to 2.
5. Select your schema and set table name as ChurnTrainData.
6. Click Next.
7. If you are not able to import correctly, save the sheet as a single CSV file and then import it.

◈ Manage Table Definition and Mapping as follows:

"CustomerID" INT, (CHECK KEY)
"UsageCategoryMonth" NVARCHAR(6),
"AverageUsageYear" NVARCHAR(6),
"UsageCategorypreviousMonth" NVARCHAR(6),
"ServiceType" NVARCHAR(21),
"ProductCategory" NVARCHAR(6),
"MessageAllowance" NVARCHAR(3),
"AverageMarketingActivityBiyearly" DOUBLE,
"AverageVisitTimemin" DOUBLE,
"PagesperVisit" DOUBLE,
"DeltaRevenuePreviousMonth" DOUBLE,
"RevenueCurrentMonth" DOUBLE,
"ServiceFailureRate" DOUBLE,
"CustomerLifetimedays" DOUBLE,
"ProductAbandonment" DOUBLE,
"ContractActivity" NVARCHAR(7)
Here the first column is a primary key, and it can only be INTEGER or BIGINT.

PAL functions do not support DECIMAL, so we need to change all decimals to DOUBLE while importing (an alternative cast sketch follows the import steps below).

◈ Select Store Type = Column store.
◈ Click Next; and the file will be imported.
◈ Similarly import Validation dataset sheet with table name “ChurnValidData”.
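
If a sheet was already imported with DECIMAL columns, one alternative to re-importing is to expose the table through a casting view and use that as the PAL input. A minimal sketch with only a few of the columns (the view name "ChurnTrainDataDbl" is made up):

CREATE VIEW "PALUSER"."ChurnTrainDataDbl" AS
SELECT "CustomerID",
       "UsageCategoryMonth",
       TO_DOUBLE("PagesperVisit") AS "PagesperVisit",
       TO_DOUBLE("RevenueCurrentMonth") AS "RevenueCurrentMonth",
       "ContractActivity"
FROM "PALUSER"."ChurnTrainData";
-- the remaining DECIMAL columns would be cast with TO_DOUBLE in the same way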

STEP 2: Creating Types for PAL Procedure

1. Creating  type for our training data.

DROP TYPE PAL_CHURN_CDT_DATA_T; 
CREATE TYPE PAL_CHURN_CDT_DATA_T AS TABLE ( 
        "CustomerID"  INT, 
        "UsageCategoryMonth"  NVARCHAR(6), 
        "AverageUsageYear"  NVARCHAR(6), 
        "UsageCategorypreviousMonth"  NVARCHAR(6), 
        "ServiceType"  NVARCHAR(21), 
        "ProductCategory"  NVARCHAR(6), 
        "MessageAllowance"  NVARCHAR(3), 
        "AverageMarketingActivityBiyearly"  DOUBLE, 
        "AverageVisitTimemin"  DOUBLE, 
        "PagesperVisit"  DOUBLE, 
        "DeltaRevenuePreviousMonth"  DOUBLE, 
        "RevenueCurrentMonth"  DOUBLE, 
        "ServiceFailureRate"  DOUBLE, 
        "CustomerLifetimedays"  DOUBLE, 
        "ProductAbandonment"  DOUBLE, 
        "ContractActivity"  NVARCHAR(7)
      );
2. Creating type for resulting JSON and PMML model.

DROP TYPE PAL_CHURN_CDT_JSONMODEL_T; 
CREATE TYPE PAL_CHURN_CDT_JSONMODEL_T AS TABLE( 
    "ID" INT, 
    "JSONMODEL" VARCHAR(5000) 
  ); 

DROP TYPE PAL_CHURN_CDT_PMMLMODEL_T; 
CREATE TYPE PAL_CHURN_CDT_PMMLMODEL_T AS TABLE( 
   "ID" INT, 
   "PMMLMODEL" VARCHAR(5000) 
  );

3. Creating type for Control table that is used by the procedure to set the PAL function configurations.

DROP TYPE PAL_CHURN_CONTROL_T; 
CREATE TYPE PAL_CHURN_CONTROL_T AS TABLE( 
    "NAME" VARCHAR (100), 
    "INTARGS" INTEGER, 
    "DOUBLEARGS" DOUBLE, 
    "STRINGARGS" VARCHAR(100) 
);

STEP 3: Creating Signature for PAL Procedure

The PAL procedure will need a signature which specifies the inputs and outputs of the procedure.

Following is the format for creating signature table.

DROP TABLE PAL_CHURN_CDT_PDATA_TBL; 
CREATE COLUMN TABLE PAL_CHURN_CDT_PDATA_TBL( 
    "POSITION" INT,  
    "SCHEMA_NAME" NVARCHAR(256),  
    "TYPE_NAME" NVARCHAR(256),  
    "PARAMETER_TYPE" VARCHAR(7) 
);

Then we insert values in the table.

1. If you want to convert decimals to double in the data table type created earlier:

INSERT INTO PAL_CHURN_CDT_PDATA_TBL
 VALUES (-2, '_SYS_AFL', 'CAST_DECIMAL_TO_DOUBLE', 'INOUT'); 

INSERT INTO PAL_CHURN_CDT_PDATA_TBL 
 VALUES (-1, '_SYS_AFL', 'CREATE_TABLE_TYPES', 'INOUT');

2. Insert the INPUT and OUTPUT types.

INSERT INTO PAL_CHURN_CDT_PDATA_TBL VALUES (1, 'PALUSER', 'PAL_CHURN_CDT_DATA_T', 'IN'); 
INSERT INTO PAL_CHURN_CDT_PDATA_TBL VALUES (2, 'PALUSER', 'PAL_CHURN_CONTROL_T', 'IN'); 
INSERT INTO PAL_CHURN_CDT_PDATA_TBL VALUES (3, 'PALUSER', 'PAL_CHURN_CDT_JSONMODEL_T', 'OUT'); 
INSERT INTO PAL_CHURN_CDT_PDATA_TBL VALUES (4, 'PALUSER', 'PAL_CHURN_CDT_PMMLMODEL_T', 'OUT');

STEP 4: Creating PAL Procedure

To create the procedure we will use the AFLLANG_WRAPPER_PROCEDURE_CREATE procedure.

CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('PALUSER','PAL_CHURN_CREATEDT_PROC'); 

CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 
                                           'CREATEDT', 
                                           'PALUSER', 
                                           'PAL_CHURN_CREATEDT_PROC', 
                                            PAL_CHURN_CDT_PDATA_TBL); 

Syntax of AFLLANG_WRAPPER_PROCEDURE_CREATE:

    Param 1 <Area> : Which is always set to AFLPAL.

    Param 2 <function name>: PAL function to be used by our procedure for generating the decision tree model.

    Param 3 <schema name> : Your Schema name in which the types and tables are created.

    Param 4 <procedure name>: User defined name for the procedure.

    Param 5 <signature table>: Signature of the procedure.

STEP 5: Creating Control Table

1. Here the temporary control table holds the parameters used by the PAL function CREATEDT.

DROP TABLE #PAL_CHURNCONTROL_TBL; 
CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_CHURNCONTROL_TBL( 
   "NAME" VARCHAR(100), 
   "INTARGS" INTEGER, 
   "DOUBLEARGS" DOUBLE, 
   "STRINGARGS" VARCHAR(100) 
 );

2. Inserting values of the parameters. 

--using 100% data for training and 0% for pruning
INSERT INTO #PAL_CHURNCONTROL_TBL 
  VALUES ('PERCENTAGE', NULL, 1.0, NULL); 

-- using 2 threads for processing
INSERT INTO #PAL_CHURNCONTROL_TBL 
  VALUES ('THREAD_NUMBER', 2, NULL, NULL); 

-- to split the string of tree model
INSERT INTO #PAL_CHURNCONTROL_TBL 
  VALUES ('IS_SPLIT_MODEL', 1, NULL, NULL); 

/* JSON model is exported by default but if you also want to export PMML model to be used in other  environment like R */
INSERT INTO #PAL_CHURNCONTROL_TBL 
  VALUES ('PMML_EXPORT', 2, NULL, NULL); 

STEP 6: Result: Creating Model Table

DROP TABLE PAL_CHURN_CDT_JSONMODEL_TBL; 
CREATE COLUMN TABLE PAL_CHURN_CDT_JSONMODEL_TBL LIKE PAL_CHURN_CDT_JSONMODEL_T;   

DROP TABLE PAL_CHURN_CDT_PMMLMODEL_TBL; 
CREATE COLUMN TABLE PAL_CHURN_CDT_PMMLMODEL_TBL LIKE PAL_CHURN_CDT_PMMLMODEL_T;
STEP 7: Calling the procedure to generate Decision tree

We will be calling our procedure and will pass the parameters as per the signature defined.

Syntax:

CALL PALUSER.PAL_CHURN_CREATEDT_PROC(
   "<schema>"."<tablename>", 
  "<controll table>", 
   <json model table>, 
   <pmml model table>
  ) WITH OVERVIEW;

CALL PALUSER.PAL_CHURN_CREATEDT_PROC(
  "PALUSER"."ChurnTrainData", 
  "#PAL_CHURNCONTROL_TBL", 
  PAL_CHURN_CDT_JSONMODEL_TBL, 
  PAL_CHURN_CDT_PMMLMODEL_TBL
 ) WITH OVERVIEW;

STEP 8: Checking generated models

SELECT * FROM PAL_CHURN_CDT_JSONMODEL_TBL; 
SELECT * FROM PAL_CHURN_CDT_PMMLMODEL_TBL; 

Validation Phase


Now that we have trained the model and obtained the JSON model of the decision tree, we can use it to predict on new data.

We will be using the PREDICTWITHDT PAL function to predict the class for new data.

STEP 1: Importing Validation Dataset in HANA

We have already imported our validation dataset during the training model phase.

STEP 2: Creating types for PAL Procedure

1. Creating  type for our validation data. Here we are not including “ContractActivity” field.

DROP TYPE ZPAL_CHURN_PCDT_DATA_T; 
CREATE TYPE ZPAL_CHURN_PCDT_DATA_T AS TABLE(  
    "CustomerID"  INT,  
    "UsageCategoryMonth"  NVARCHAR(6), 
    "AverageUsageYear"  NVARCHAR(6),     
    "UsageCategorypreviousMonth"  NVARCHAR(6), 
    "ServiceType"  NVARCHAR(21), 
    "ProductCategory"  NVARCHAR(6), 
    "MessageAllowance"  NVARCHAR(3), 
    "AverageMarketingActivityBiyearly"  DOUBLE, 
    "AverageVisitTimemin"  DOUBLE, 
    "PagesperVisit"  DOUBLE, 
    "DeltaRevenuePreviousMonth"  DOUBLE, 
    "RevenueCurrentMonth"  DOUBLE, 
    "ServiceFailureRate"  DOUBLE, 
    "CustomerLifetimedays"  DOUBLE, 
    "ProductAbandonment"  DOUBLE
  );

2. Creating type for input JSON and PMML model. We can also reuse the same model types created during training phase.

DROP TYPE ZPAL_CHURN_PCDT_JSONMODEL_T; 
CREATE TYPE ZPAL_CHURN_PCDT_JSONMODEL_T AS TABLE( 
    "ID" INT, 
    "JSONMODEL" VARCHAR(5000) 
 );

3. Create a type for the resultant table. The resultant table is going to hold the customer ID in the ID column, "Contract Activity" will be assigned to the CLASSLABEL field, and the probability of falling in that class is assigned to the PROB field.

Why do we have this structure for the resultant table?

Because every PAL function has a predefined resultant table structure.

DROP TYPE ZPAL_CHURN_PCDT_RESULT_T; 
CREATE TYPE ZPAL_CHURN_PCDT_RESULT_T AS TABLE( 
    "ID" INTEGER, 
    "CLASSLABEL" VARCHAR(50), 
    "PROB" DOUBLE 
 );

4. Creating a type for the control table that is used by the procedure to set the PAL function configurations. We can also reuse the PAL_CHURN_CONTROL_T type created during the training phase.

DROP TYPE ZPAL_CHURNCONTROL_T; 
CREATE TYPE ZPAL_CHURNCONTROL_T AS TABLE( 
    "NAME" VARCHAR (100), 
    "INTARGS" INTEGER, 
    "DOUBLEARGS" DOUBLE, 
    "STRINGARGS" VARCHAR (100) 
  );

STEP 3: Creating Signature for PAL Procedure

In this procedure we will be giving 3 inputs: the validation data set, the control parameters of the PAL function, and the trained JSON model. The procedure is going to return a resultant table which holds the class (active, churned) to which each new customer belongs.

DROP TABLE ZPAL_CHURN_PCDT_PDATA_TBL; 
CREATE COLUMN TABLE ZPAL_CHURN_PCDT_PDATA_TBL( 
    "POSITION" INT,  
    "SCHEMA_NAME" NVARCHAR(256),  
    "TYPE_NAME" NVARCHAR(256),  
    "PARAMETER_TYPE" VARCHAR(7) 
 ); 

INSERT INTO ZPAL_CHURN_PCDT_PDATA_TBL 
  VALUES (1, 'PALUSER', 'ZPAL_CHURN_PCDT_DATA_T', 'IN'); 

INSERT INTO ZPAL_CHURN_PCDT_PDATA_TBL 
  VALUES (2, 'PALUSER', 'ZPAL_CHURNCONTROL_T', 'IN'); 

INSERT INTO ZPAL_CHURN_PCDT_PDATA_TBL 
  VALUES (3, 'PALUSER', 'ZPAL_CHURN_PCDT_JSONMODEL_T', 'IN'); 

INSERT INTO ZPAL_CHURN_PCDT_PDATA_TBL 
  VALUES (4, 'PALUSER', 'ZPAL_CHURN_PCDT_RESULT_T', 'OUT');

STEP 4: Creating PAL Procedure

CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('PALUSER', 'ZPAL_CHURNPREDICTWITHDT_PROC'); 
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 
                                          'PREDICTWITHDT', 
                                          'PALUSER', 
                                          'ZPAL_CHURNPREDICTWITHDT_PROC', 
                                           ZPAL_CHURN_PCDT_PDATA_TBL);

STEP 5: Creating Control Table

DROP TABLE #ZPAL_CHURNCONTROL_TBL; 
CREATE LOCAL TEMPORARY COLUMN TABLE #ZPAL_CHURNCONTROL_TBL( 
    "NAME" VARCHAR (100), 
    "INTARGS" INTEGER, 
    "DOUBLEARGS" DOUBLE, 
    "STRINGARGS" VARCHAR (100) 
  ); 

INSERT INTO #ZPAL_CHURNCONTROL_TBL VALUES ('THREAD_NUMBER', 2, NULL, NULL); 
INSERT INTO #ZPAL_CHURNCONTROL_TBL VALUES ('IS_OUTPUT_PROBABILITY', 1, null, null);
INSERT INTO #ZPAL_CHURNCONTROL_TBL VALUES ('MODEL_FORMAT', 0, null, null);

STEP 6: Result: Creating Result Table

DROP TABLE ZPAL_CHURN_PCDT_RESULT_TBL; 
CREATE COLUMN TABLE ZPAL_CHURN_PCDT_RESULT_TBL LIKE ZPAL_CHURN_PCDT_RESULT_T;

STEP 7: Calling the procedure to generate result table.

CALL PALUSER.ZPAL_CHURNPREDICTWITHDT_PROC("PALUSER"."ZChurnValidData" ,
                                           "#ZPAL_CHURNCONTROL_TBL", 
                                           PAL_CHURN_CDT_JSONMODEL_TBL, 
                                           ZPAL_CHURN_PCDT_RESULT_TBL 
      ) WITH OVERVIEW;   

STEP 8: Checking result table

SELECT * FROM ZPAL_CHURN_PCDT_RESULT_TBL ORDER BY ID ASC;


Here we see that each ID is classified as active or churned, and the probability of it belonging to that class is shown in the PROB field.

Cross Validation


Now that we have predicted the contract activity for the validation dataset, it is time to cross-validate the predicted values with the true values that we have in the "True validation Values" sheet.

Cross validation is done in Excel and can be seen in the "Final Output" sheet of the dataset. Here the result table data is cross-validated with the validation data, and the percentage accuracy is calculated at the bottom.
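
The same check can also be done directly in SQL by joining the result table with the true labels, assuming those were loaded into a table of their own; the table "PALUSER"."ChurnTrueValues" and its column names below are made up to match the sheet:

SELECT ROUND(100.0 * SUM(CASE WHEN r."CLASSLABEL" = t."ContractActivity" THEN 1 ELSE 0 END) / COUNT(*), 1) AS "ACCURACY_PCT"
FROM ZPAL_CHURN_PCDT_RESULT_TBL AS r
INNER JOIN "PALUSER"."ChurnTrueValues" AS t
    ON t."CustomerID" = r."ID";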


So the model we created is 90.6% accurate on the available dataset. In this tutorial the training data was selected randomly; with a more carefully chosen training set the accuracy can be improved further.

Batch Insert and Update Processing with OData V2

In this blog, we’ll learn how to perform batch insert and update operations with OData version 2, applied to a contact persons list where the user can add, edit and delete a person's first name and last name. I have no issue when performing a single batch insert or update alone. But when it comes to updating and inserting at the same time with batch, I think this is one of the easiest ways. Do let me know if you have any better solution.




Prepare the Components


Create the application public.aa.bb.


Create the following folders inside the package:

◈ data
◈ hdb
◈ lib
◈ sequence
◈ ui5

Full list of populated files and database artifacts:


Clone the code from Git.

Let’s take a look at the onWrite() method in /ui5/controller/ContactPersons.controller.js.


At line 365, we parse the oJsonData object. This object is populated from the model of the table with id persons in /ui5/view/ContactPersons.view.xml.

<Table id="persons" mode="SingleSelectMaster" selectionChange="onSelectionChange" items="{/results}" class="TableStyle">
    <columns>
        <Column>
            <Label text="First Name" required="true" />
        </Column>
        <Column>
            <Label text="Last Name" required="true" />
        </Column>
    </columns>
    <items>
        <ColumnListItem>
            <cells>
                <Input value="{FIRSTNAME}" editable="false" class="InputStyle" maxLength="50" />
                <Input value="{LASTNAME}" editable="false" class="InputStyle" maxLength="50" />
            </cells>
        </ColumnListItem>
    </items>
</Table>

At line 369, we check if there are any entries in the table. If there are no entries, we assume that the user has deleted all records.

Perform Insert and Update

If there is an entry, we perform the batch operation using OData version 2 with destination “../lib/xsodata/goodmorning.xsodata/AddEditPersons”

var oParams = {};
oParams.json = true;
oParams.defaultUpdateMethod = "PUT";
oParams.useBatch = true;

var batchModel = new sap.ui.model.odata.v2.ODataModel("../lib/xsodata/goodmorning.xsodata", oParams);
var mParams = {};
mParams.groupId = "1001";
mParams.success = function() {
    this_.refreshEtag();
    var oModel_ = new sap.ui.model.odata.ODataModel("../lib/xsodata/goodmorning.xsodata", false);
    oModel_.read("/ContactPersons", {
        success: function(oData) {},
        error: function(e) {
            console.log(e);
        }
    });

    sap.m.MessageToast.show("Record has been updated");
};
mParams.error = this.onErrorCall;

for (var k = 0; k < oJsonData.length; k++) {
    if (oJsonData[k].ID === "")
        oJsonData[k].ID = 0;
    batchModel.create("/AddEditPersons", oJsonData[k], mParams);
}

Let’s take a look at the details in goodmorning.xsodata.


We call the modification exit for the OData request. In this case, the xsodata library calls the exit "AddEditPersons.xsjslib" before creating the entity.

Now let’s check the create_before_exit() method in AddEditPersons.xsjslib.

◈ Firstly, we need to insert the unique IDs from table ContactPersons into array ArrD and the unique IDs from the temp table after into ArrB. We will perform the array comparison and merging later on.

@param {afterTableName} String - The name of a temporary table with the single entry after the operation (CREATE and UPDATE events only)
var after = param.afterTableName;

◈ Check if the unique ID (from the temp table after) exists in the ContactPersons table.
◈ If it exists, delete the existing ID in the ContactPersons table to avoid a unique constraint violation error when inserting the record.
◈ Find any differences between ArrD and ArrB and delete all the differences.
Example: In ArrD (database) we have records Name A and Name B; in ArrB (temp table) we have record Name B. The difference is Name A, and we will delete Name A from the database.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

◈ If no unique ID exists in ContactPersons table, get the nextvalID from the sequence: public.aa.bb.sequence::persons.
◈ Merge ArrB & ArrD into ArrB. ArrB will have the full set of IDs.
◈ Again, find any difference between ArrD and ArrB and delete all the differences.
◈ Update the ID in temp table after with nextvalID.

I hope the logic does not confuse you. Here is the complete code.

/**
@param {connection} Connection - The SQL connection used in the OData request
@param {beforeTableName} String - The name of a temporary table with the single entry before the operation (UPDATE and DELETE events only)
@param {afterTableName} String -The name of a temporary table with the single entry after the operation (CREATE and UPDATE events only)
 */
$.import("public.aa.bb.lib.xsjs.session.xsjslib", "session");
var SESSIONINFO = $.public.aa.bb.lib.xsjs.session;

function merge_array(array1, array2) {
    var result_array = [];
    var arr = array1.concat(array2);
    var len = arr.length;
    var assoc = {};

    while(len--) {
        var item = arr[len];

        if(!assoc[item]) 
        { 
            result_array.unshift(item);
            assoc[item] = true;
        }
    }

    return result_array;
}

function differenceOf2Arrays (array1, array2) {
    var temp = [];
    array1 = array1.toString().split(',').map(Number);
    array2 = array2.toString().split(',').map(Number);
    
    for (var i in array1) {
    if(array2.indexOf(array1[i]) === -1) temp.push(array1[i]);
    }
    for(i in array2) {
    if(array1.indexOf(array2[i]) === -1) temp.push(array2[i]);
    }
    return temp.sort((a,b) => a-b);
}

function create_before_exit(param) {

    var after = param.afterTableName;
    var pStmt = null;

    pStmt = param.connection.prepareStatement("select * from \"" + after + "\""); 
    var data = SESSIONINFO.recordSetToJSON(pStmt.executeQuery(), "Details");
    pStmt.close();  

    var ArrD = [];
    var ArrB = [];
    pStmt = param.connection.prepareStatement("select ID from \"goodmorning\".\"public.aa.bb.hdb::data.ContactPersons\"");
    var rs = pStmt.executeQuery();
    while (rs.next()) {
        ArrD.push(rs.getInteger(1));
    }
    
    pStmt = param.connection.prepareStatement("select ID from \"" + after + "\"");
    rs = pStmt.executeQuery();
    while (rs.next()) {   
        ArrB.push(rs.getInteger(1));
    }

    pStmt = param.connection.prepareStatement("select ID, FIRSTNAME from \"goodmorning\".\"public.aa.bb.hdb::data.ContactPersons\" where \"ID\" = ?");
    pStmt.setString(1, data.Details[0].ID.toString());
    rs = pStmt.executeQuery();
    if (rs.next()) {
        //Existing record
pStmt = param.connection.prepareStatement("delete from \"goodmorning\".\"public.aa.bb.hdb::data.ContactPersons\" where \"ID\" = ?");
        pStmt.setInteger(1, rs.getInteger(1));
        pStmt.execute();
        pStmt.close();

        var delArr = differenceOf2Arrays(ArrD, ArrB);
        for( var i = 0; i < delArr.length; i++) {
            pStmt = param.connection.prepareStatement("delete from \"goodmorning\".\"public.aa.bb.hdb::data.ContactPersons\" where \"ID\" = ?");
            pStmt.setInteger(1, parseInt(delArr[i]));
            pStmt.execute();
            pStmt.close();
        }
    } else {
        //New record
        pStmt = param.connection.prepareStatement("select \"goodmorning\".\"public.aa.bb.sequence::persons\".NEXTVAL from dummy");
        rs = pStmt.executeQuery();
        var NextValID = "";
        while (rs.next()) {
        NextValID = rs.getString(1);
        }
        pStmt.close(); 
        
        ArrB = merge_array(ArrB, ArrD);
        var delArr = differenceOf2Arrays(ArrD, ArrB);
        for( var i = 0; i < delArr.length; i++) {
            pStmt = param.connection.prepareStatement("delete from \"goodmorning\".\"public.aa.bb.hdb::data.ContactPersons\" where \"ID\" = ?");
            pStmt.setInteger(1, parseInt(delArr[i]));
            pStmt.execute();
            pStmt.close();
        }
        
        pStmt = param.connection.prepareStatement("update\"" + after + "\"set \"ID\" = ?");
        pStmt.setString(1, NextValID);
        pStmt.execute();
        pStmt.close();
    }
}

Perform Deletion

Performing deletion is pretty straightforward: we just call DeletePersons from UI5 with a dummy entry oEntry.

var oEntry = {};
oEntry.ID = 0;
oEntry.FIRSTNAME = "";
oEntry.LASTNAME = "";

var oDataModel = new sap.ui.model.odata.ODataModel("../lib/xsodata/goodmorning.xsodata", true);
oDataModel.create("/DeletePersons", oEntry, {
    context: null,
    success: function(data) {
    },
    error: function(e) {
    }
});


And in DeletePersons.xsjslib, we just delete all records.

function delete_after_exit(param) {
    var pStmt = null;

    pStmt = param.connection.prepareStatement("delete from \"goodmorning\".\"public.aa.bb.hdb::data.ContactPersons\"");
    pStmt.execute();
    pStmt.close();
}

S/4HANA 1709 FPS03 – back on the Mothership again …


Apply S/4HANA 1709 FPS03 …


Since I’m back at SAP SE as Platform Architect for Intelligent Data & Analytics, I also checked my “Innovation Landscape” for updates and new Software Stacks.

As SAP BW/4HANA 2.0 will be available by the end of February, the upgrade will take some time until the BPC Add-On is available. As almost everyone uses Integrated Planning (IP) on BW/4, this will be the majority of the upgrades.

SAP BW/4 2.0, ABAP FND 1809 ON HANA and S/4 1809 now share the same SAP Application Stack. Unfortunately ABAP FND 1809 differs from NW AS ABAP 7.51 INNOVATION PKG, as it contains additional components in the core ABAP foundation:

◈ ABAPFND 1.0
◈ MDG_FND 8.03
◈ S4FND 8.03


Therefore it is not possible to install the BW/4HANA Starter Add-On on top of ABAP FND 1809 and convert directly to BW/4HANA 2.0.

Correction of installed software information



So, the first thing is to create a so-called "CISI stack file" (correction of installed software information).
To do this, log on to the Maintenance Planner and check your personal settings as follows:


When you now switch to the main entry screen of the Maintenance Planner for your S/4 system, you will see that the Verification option is marked red and the Plan option is grey at this time.
The correct SPS level is not always recognized from the system information, whether it is uploaded as a sysinfo XML or taken directly from the LMDB of the on-premise Solution Manager.


Compare existing and uploaded system information



The editor screen now allows you (see also the following examples):

◈ to edit the correct SPS level,
◈ to overwrite the SPS level,
◈ to delete an unwanted Add-On,
◈ to add a missing Add-On manually.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

edit the validated software stack from the stack.xml

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

overwrite the validated software stack from the stack.xml

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

delete the software Add-On from the stack.xml

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

add a necessary Add-On manually to the stack.xml

Once you have finished editing the CISI stack.xml file, you can continue with the validation.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

It can also happen that after the validation of the software stack there are still some inconsistencies, which also have to be acknowledged.

You can search for these missing Add-Ons in the Download Center and add them manually to the EPS/in directory once the Download Basket is available.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Finally, activate the changes and download the correction file for later use.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

apply the CISI file to the S/4 backend


To avoid losing the settings you have made, you must apply the changes to the S/4 backend. This is done with SUM (Software Update Manager), part of the SL Toolset 1.0.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides


Now the Installed Product Versions tab strip looks much better and is validated, ensuring a clean and successful SPS update to higher S/4 releases.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Create your target software vector for S/4 1709


Log on to the Maintenance Planner and update the system data with the correct information until you can plan your software change.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides


If you get unspecific errors in this phase, use the “Undo” option and correct your previous selections.

You can search for possible solutions in the Expert Search under the component BC-UPG-MP

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides


LTMC for Master Data Step by Step Process

When we are implementing an SAP S/4HANA solution, we can migrate our master data and business data from SAP systems or non-SAP systems to SAP S/4HANA by using the SAP S/4HANA migration cockpit.

The SAP S/4HANA migration cockpit uses migration objects to identify and transfer the relevant data. A migration object describes how to migrate data for a specific business object to SAP S/4HANA. It contains information about the relevant source and target structures, as well as the relationships between these structures. It also contains mapping information for the relevant fields, as well as any rules used to convert values that are migrated from source fields to target fields. SAP provides predefined migration objects that you can use to transfer your data.

The tool used to perform the migration is LTMC (Legacy Transfer Migration Cockpit).

You can access the SAP S/4HANA migration cockpit by using transaction LTMC.

Note the following considerations when deciding on the most suitable approach for your project:
Consideration | Files | Staging Tables
Size Limit | 200 MB limit for the SAP S/4HANA Migration Cockpit* | No limit
System Considerations | None | Staging system uses an SAP HANA database
Data Provisioning | Enter data manually in each Microsoft Excel XML file | Fill tables manually or by using preferred tools (for example, SAP Agile Data Preparation)

* For on-premise systems, the parameter icm/HTTP/max_request_size_KB controls the size of the HTTP request. The value of this parameter is the maximum size (in KB) of an HTTP request. It has the standard value 102400 KB (100 MB) but can be changed if required.

Steps to Use LTMC


1. Enter LTMC T.Code

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

2. The LTMC web page / Fiori app opens

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

3. Click Create to start a new migration project

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

4. Provide the project title and data retention time and hit Create

5. In the search bar, we can look for the object we want to use to upload data

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

6. Select Required Object and click open

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

This is just an informational message; press OK

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

7. Click Download Template and the XML file will be downloaded

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

8. A pre-filled template with details of each field and business object is available

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

9. In the Field List sheet you will find each sheet and which fields are mandatory; based on that we fill in the data and upload it

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

10. In the Basic Data sheet the highlighted columns are mandatory; fill the remaining fields and sheets as per your requirements.

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

11. In the last sheet, “Maintenance Status Settings”, you activate which screens are required as per the business.

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

12. Once the template is ready with all the required data, follow the steps below:

A. Upload File
B. Activate
C. Start Transfer
D. Data Validate
E. Convert values
F. Simulate
G. Execute Import

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

13. Click Activate

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

14. Click Start Transfer

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

15. The data gets transferred; once it is done, the Close button is enabled

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

16. If any data is missing or incorrect, you will get an error here; if all the data in the template is good, we can proceed further

17. Click next

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

18. When executing a particular object in a project for the first time, we need to map the fields.

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

19. Click  each line item and do Mapping  of Values

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

20. Select the line item and click Check; once the status turns to a green light, click Save so that next time the system does the mapping automatically

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

21. Once all the mapping is completed and there are no open items, click Next to simulate the import

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

22. Similar to the upload, once it is completed, click Close to proceed further

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

23. A backend program executes and the data is simulated; it reports any missing mandatory fields or incorrect field mappings

24. If there are errors, go back, fix them and repeat; if there are no errors, click Next

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

25. If all the steps are completed without any error, you will get the above message; then click Finish.

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

Migration Status is now Finished


1. Now Check whether Material is created or not

2. Go to Display Material Fiori App and Search with the Material Number which you created

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

3. Select required Fields

SAP ABAP Tutorials and Materials, SAP ABAP Certifications, SAP ABAP Guides, SAP ABAP Learning

4. The material has been created, with all the fields and screens updated.

BW/4 HANA Query Creation on Composite Provider using variable for selection


1. Create a query

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials


SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

2. Go to sheet definition Section

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

3.  Click on Info provider

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

4. Select the needed characteristics in the Rows section and the key figures in the Columns section, respectively, as follows:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

5. Now, if you need a selection/input screen when a user runs the report, we need to create a variable for those fields.

6. To create these variables, we go to the InfoProvider section.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

7. Select the characteristic on which we need to create the variable, e.g. Year.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

8. Select the appropriate type, single value or interval, as per your requirement; select one of them and click OK.

Note: You can see the reference characteristic filled in automatically based on the characteristic you created the variable on in point 7.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

9. You get the following screen, where you can set the variable to optional or mandatory accordingly.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

10.   After the settings are made, save the query.

11. Go again to the query filter section. Drag the field on which we created the variable to the Fixed Values section, then right-click and restrict it.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

12. You get the following section. Go to the variable section, select the variable you have created, then right-click and choose Add.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

13. It is added to the filter definition; click OK.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

14. Save the query.

15. When you run a data preview of the query in HANA, you get the selection screen; enter your selection and submit it.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

16. You can also run the query in RSRT, where you can see the selection screen.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

17. Result In RSRT

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

New R and enhanced Python API for SAP HANA Machine Learning – Released!

I am going to focus on two exciting capabilities – the new R and the enhanced Python API for SAP HANA Machine Learning.

Key Points


◈ The APIs are now generally available from April 5th with the release of HANA 2.0 SPS 04. You can download the packages in multiple ways, for example with the HANA Express Download Manager, and get started straight away, for free!

◈ Alongside the Python API, we now have a comparable API for R! In my previous blogs, I have given a walk-through on how to use the Python API and the value it can bring for building Machine Learning models on massive datasets, but below you’ll find a preview of one of the enhanced features – Exploratory Data Analysis. With the addition of the R API, you can train and deploy models in a similar fashion. Below I have provided some code samples for the R API, but for a detailed overview see this blog by Kurt Holst.

◈ The manual stages of the Machine Learning process (such as feature engineering, data encoding, sampling, feature selection and cross validation) can now be taken care of by the Automated Predictive Library (APL) algorithms, so the user only needs to focus on the business problem being solved (a rough Python sketch of this follows below).
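
To make that last point concrete, here is a minimal, illustrative sketch of training an APL classifier through the Python client. It assumes a hana_ml release that ships the APL wrapper hana_ml.algorithms.apl.classification.AutoClassifier, and the table, schema and column names are hypothetical; check the package documentation of your release for the exact class and parameters.

# Minimal APL sketch (table, schema and column names are hypothetical)
from hana_ml import dataframe
from hana_ml.algorithms.apl.classification import AutoClassifier  # assumes the APL wrapper is available

conn = dataframe.ConnectionContext('ADDRESS', 'PORT', 'USER', 'PASSWORD')
train = conn.table("TRAIN_TABLE", schema="SCHEMA")

# APL handles encoding, sampling, feature selection and cross validation internally
model = AutoClassifier(conn_context=conn)
model.fit(train, key="ID", label="SURVIVED")

# Apply the model; the result stays in HANA as a DataFrame until collect() is called
scored = model.predict(conn.table("APPLY_TABLE", schema="SCHEMA"))
print(scored.head(5).collect())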

Python Example – Exploratory Data Analysis


Exploratory Data Analysis (EDA) is an essential tool for Data Science. It is the process of understanding your dataset using statistical techniques and visualizations. The insight that you gain from EDA can help you to uncover issues and errors, give guidance on important variables, draw assumptions from the dataset and build powerful predictive models. The Python API now includes 3 EDA techniques:

◈ Distribution plot
◈ Pie plot
◈ Correlation plot

Note: The EDA capabilities will be expanded with further release cycles.

The benefit of leveraging these EDA plots with the HANA DataFrame is best illustrated with some performance benchmarks. I tested these plots on the same 10-million-row data set and compared the time it took to return the plots in Jupyter.

◈ Using a Pandas DataFrame = on average 3 hours
◈ Using the HANA DataFrame = less than 5 seconds, for each of the 3 plots

# Import DataFrame and EDA (matplotlib is needed for the figure handling below)
import matplotlib.pyplot as plt
from hana_ml import dataframe
from hana_ml.visualizers.eda import EDAVisualizer

# Connect to HANA
conn = dataframe.ConnectionContext('ADDRESS', 'PORT', 'USER', 'PASSWORD')

# Create the HANA Dataframe and point to the training table
data = conn.table("TABLE", schema="SCHEMA")

# Create side-by-side distribution plot for AGE of non-survivors and survivors
f = plt.figure(figsize=(18, 6))
ax1 = f.add_subplot(121)
eda = EDAVisualizer(ax1)
ax1, dist_data = eda.distribution_plot(data=data.filter("SURVIVED = 0"), column="AGE", bins=20, title="Distribution of AGE for non-survivors")

ax1 = f.add_subplot(122)
eda = EDAVisualizer(ax1)
ax1, dist_data = eda.distribution_plot(data=data.filter("SURVIVED = 1"), column="AGE", bins=20, title="Distribution of AGE for survivors")

plt.show()
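
The other two plot types follow the same calling pattern. The snippet below is a rough sketch only: it assumes that pie_plot and correlation_plot take arguments similar to distribution_plot and that a PCLASS column exists in the same table, so check the hana_ml documentation of your release for the exact signatures.

# Rough sketch of the remaining EDA plots (exact parameters may differ per hana_ml release)
f = plt.figure(figsize=(18, 6))

# Pie plot: share of passengers per class (assumes a PCLASS column)
ax1 = f.add_subplot(121)
eda = EDAVisualizer(ax1)
ax1, pie_data = eda.pie_plot(data=data, column="PCLASS", title="% of passengers per class")

# Correlation plot over a few numeric columns
ax2 = f.add_subplot(122)
eda = EDAVisualizer(ax2)
ax2, corr_data = eda.correlation_plot(data=data, corr_cols=["AGE", "FARE", "SURVIVED"], label=True)

plt.show()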


This is just a preview of the EDA capabilities, an in-depth overview of all the plots and parameters will be detailed in my next blog… stay tuned.

R Example – K Means Clustering


K-means clustering in SAP HANA is an unsupervised machine learning algorithm for partitioning data into a set of k clusters or groups. It classifies observations into groups such that objects within the same group are as similar as possible.

For this example, I will be using the Iris data set from the University of California, Irvine. This data set contains attributes of iris plants. There are three species of iris:

◆ Iris Setosa
◆ Iris Versicolor
◆ Iris Virginica

Connecting to HANA


# Load HANA ML package
library(hana.ml.r)

# Use ConnectionContext to connect to HANA
conn.context <- hanaml.ConnectionContext('ADDRESS','USER','PASSWORD')

# Load data
data <- conn.context$table("IRIS")

Data Exploration


# Look at the columns
as.character(data$columns)

>> [1] "ID"            "SEPALLENGTHCM""SEPALWIDTHCM"  "PETALLENGTHCM"
   [5] "PETALWIDTHCM"  "SPECIES"      

# Look at the data types
sapply(data$dtypes(), paste, collapse = ",")

>> [1] "ID,INTEGER,10"           "SEPALLENGTHCM,DOUBLE,15"
   [3] "SEPALWIDTHCM,DOUBLE,15"  "PETALLENGTHCM,DOUBLE,15"
   [5] "PETALWIDTHCM,DOUBLE,15"  "SPECIES,VARCHAR,15"  

# Number of rows
sprintf('Number of rows in Iris dataset: %s', data$nrows)

>> [1] "Number of rows in Iris dataset: 150"

Training K-Means Clustering model


library(sets)
library(cluster)
library(dplyr)

# Train K Means model with 3 clusters
km <- hanaml.Kmeans(conn.context, data, n.clusters = 3)

# Plot clusters
kplot <- clusplot(data$Collect(), km$labels$Collect()$CLUSTER_ID, color = TRUE, shade = TRUE, labels = 2, lines = 0)


# Print cluster numbers
Cluster_number<- select(km$labels$Collect(), 2) %>% distinct()
print(Cluster_number)

>>   CLUSTER_ID
   1          2
   2          1
   3          0
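
For comparison, the same clustering flow through the Python client looks roughly as follows. This is a sketch only: it assumes the PAL wrapper hana_ml.algorithms.pal.clustering.KMeans and the same IRIS table, and the constructor arguments may vary slightly between hana_ml releases.

# Rough Python counterpart of the R example above
from hana_ml import dataframe
from hana_ml.algorithms.pal.clustering import KMeans

conn = dataframe.ConnectionContext('ADDRESS', 'PORT', 'USER', 'PASSWORD')
iris = conn.table("IRIS")

# Train K-means with 3 clusters; fit_predict returns a HANA DataFrame with cluster assignments
km = KMeans(conn_context=conn, n_clusters=3)
assignments = km.fit_predict(iris, key="ID")
print(assignments.head(5).collect())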

These snippets are not meant to be an exhaustive analysis, simply to showcase some of the capabilities within the API.

BW4HANA Modeling Scenario Step by Step

The purpose of this document is to provide details on how to do modeling in BW/4HANA, with steps. The reader will get the look and feel of the new Eclipse-based modeling in BW/4HANA and learn how to create a data model flow in BW/4HANA.

BW/4HANA modeling is Eclipse-based and happens within SAP HANA Studio. The BW GUI still offers a representation of the classical Administrator Workbench, but without the key modeling capabilities.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

All the modeling of data flows, objects and data models happens within the Eclipse-based SAP HANA Studio.

Objects Overview


SAP BW has been evolving continuously from SAP BW 3.x to BW/4HANA today.

With the introduction of the HANA DB, the SAP BW versions supporting it went from 7.3 to 7.4 to 7.5, before BW/4HANA came in.

These versions introduced modeling objects like Composite providers, Transient providers, Advanced DSO, Open ODS view and so on.

These versions still had the possibility to consume the old classical BW objects, either standalone or converted to HANA-optimized objects.

With BW/4HANA, we can no longer consume the old classical BW objects.

Note that BW 7.5 on HANA with SP4 also gives the opportunity to use Eclipse-based modeling. However, with BW/4HANA we have a more stable platform, more scalability, tighter integration and the use of HANA-optimized objects. SAP BW/4HANA will receive much of the innovation being developed moving forward.

SAP BW/4HANA is built on top of SAP HANA and provides high-performance capabilities. Aggregates are not required, few indexes are needed, data loads and execution of queries is fast. This performance improvement is achieved by moving the complex BW operations and calculations to the HANA database.

Some might ask why we need BW/4HANA at all and not simply achieve what is needed in S/4HANA itself.

The advantage of SAP BW is not limited only to the performance of an OLAP system. Most SAP BW systems include complex business transformations, consolidation of SAP systems, non-SAP systems etc. SAP BW includes the consolidation of data from multiple systems which eventually provided more agility and governance in the IT landscape.

BW/4HANA Objects | Classical BW Objects
Composite Provider | MultiProvider, InfoSet
Advanced DSO | InfoCube, DSO, PSA
Info Object | Info Object
Open ODS View | Virtual Provider, Transient Provider

Logging into BW/4HANA


Pre-requisite: SAP HANA Studio with the BW/4HANA plugins is installed.

Note – There can be more than one way to access or create an object or an application. I will try to show what I see as the most feasible option.

When you click on HANA Studio, you get the initial screen layout.


BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Click on the Workbench on the top right of the screen. You will be directed to the HANA Modeler perspective (screen). I will not be covering HANA here; therefore, I will jump to the BW/4HANA details.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Click on the below-highlighted icon to change the perspective which in this case will be BW modeling.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The initial screen will look like below

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The first step is to add a BW project which is nothing but connecting to your BW/4HANA system.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

You can see the list of SAP systems. Select the BW/4HANA system.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Once you provide the required details and click Finish, you will be logged into the modeling space.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The BW Repository contains the InfoAreas, which in turn contain the modeling objects, data models etc., like in the older versions of BW.

Data Sources allow you to connect to source systems, for example S/4 (ECC system), and replicate the data sources.

Create a Data Model in BW/4HANA


Now I will focus on creating a data model in BW/4HANA eclipse modeling using the new objects available.

I will try to keep this simple and not dive into every detailed aspect of each object which I will try to cover in other documents on each subject.

Let’s take an example of Sales header data and use this to create the data model.

There is an option to create one object at a time, as well as a more flexible approach of creating a data flow, which is simpler.

Go to the Unassigned node, for example, right-click and choose Create InfoArea.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Give the technical details

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Under the created InfoArea, right-click and choose Create Data Flow Object.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Enter the technical details

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The middle section is the modeling space, and on the right, you can see the different modeling objects to consume to design.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

First, drag and drop the Datasource

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Double click on the dragged item and give the source system name and next

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Click Next, type the data source string you want to search for, and click Next.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Enter the technical details and Datasource type and finish

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In overview tab of data source

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In Extraction tab

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In Fields Tab. Here you have the option to change the description, data properties and so on.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Click Activate to activate the data source

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Now drag and drop the Datastore Object (advanced). Let us create a staging layer with write optimized DSO.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Double-click on the dragged object and enter the technical details below. You can either create a standalone DSO and add InfoObjects to it, or simply copy the structure of the underlying data source or of another object provided below. In this case, we copy the template from the data source.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Provide the data source name

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In the General tab, double-click on the write-optimized option to make this a write-optimized Advanced DSO. As you can also see, there is a checkbox External HANA View; with this, the system automatically creates a HANA view. This view can later be used in other native HANA modeling.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In the details tab, you can see the list of fields copied over from the data source. You can change the description, data properties if needed.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In the Settings tab, you have advanced options like Partition, Indexing etc.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Activate the DSO

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Simply drag and drop the link to connect the data source to the DSO

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Create a transformation by right-clicking on the DSO and following the path. The system prompts you with the transformation details.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Check the details and continue or you can even copy the rules from existing transformation

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The General tab provides you the details. You can write a start, end, and expert routine like older versions here.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

You can drag and drop to do the mappings. If the source and target have the same fields the mappings will be done automatically.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In the rule details, as you can see, there is a new rule type called Lookup. This is similar to reading master data in previous versions. Once done, activate the transformation.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Similarly, create a DTP by right-clicking on the DSO and choosing Create Data Transfer Process.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The system prompts you with the path details and continue.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Check the details and continue

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The General tab provides technical details, package size, request status and other options like older versions

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In the extraction tab, you provide the semantics for grouping. Here you can provide filter options.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In the update Tab, you provide the type of request handling to happen. Here you can create error DTP for error handling

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The runtime properties tab

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

We have the possibility to use this Advanced DSO directly for further reporting purposes if needed, or we can continue to model further by including an Advanced DSO of type standard or cube or any other planning object setting, or even include an InfoSource or Open Hub.

For this document, we will continue by creating an Advanced DSO of type standard and an Advanced DSO of type cube. I will not go through each detail tab for these, as most of it should be similar.

Drag and drop the advanced DSO and link it to the write optimized DSO.

Click on standard datastore object and you can see the settings being set.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In the Details tab, we have options to add an InfoObject from the system, add another field to be used in the data model, manage keys to identify the fields which need to be keys for the standard DSO, and other details like data property changes and so on, as in other versions.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Create Transformation and DTP like done earlier and activate the objects.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Next, create an Advanced DSO of type Info cube by clicking on the cube type and follow steps like above to create the object.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

I have created another Advanced DSO of type standard just to show the use of a Composite Provider.

A Composite Provider does the job of a MultiProvider/InfoSet.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Drag and drop the Composite provider in the data model layout

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Double click and provide the technical details and continue

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

You have an option to Join objects or Union. Add the objects by clicking Add

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

The General tab gives you options to specify whether this Composite Provider will be needed for other data modeling needs in future, as well as runtime and profile properties, which can be adjusted based on performance and optimization needs.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Drag and drop the fields from each object from the source to target. You can change the rule assignment to a constant if needed.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

In the Output tab, you can see the list of objects selected. By selecting each object, there is an option to associate the field with an InfoObject in the system, and an option to maintain the reporting properties. The rest are known technical details which can be maintained as needed.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

Activate the composite provider.

The data model can be found under the Info area you created. You can find all the objects created under the respective folder. The entire data model can be found under Data Flow Object. Double click on it.

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

BW4HANA Modeling, SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Learning

LDAP Based Authentication for SAP HANA 2.0


Purpose


This blog provides details on the steps required to configure LDAP-based authentication for SAP HANA 2.0. LDAP-based authentication is of great help to users for the two reasons below:

1. With the growing number of HANA databases in the landscape, it is very tedious to remember a password for each database.
2. It avoids the frequently generated alert for “Expiration of database user passwords”.

Overview

LDAP defines a standard protocol for accessing directory services, which is supported by various directory products. Using directory services enables important information in a corporate network to be stored centrally on a server. The advantage of storing information centrally for the entire network is that you only must maintain data once, which avoids redundancy and inconsistency.

If an LDAP directory is available in your corporate network, you can configure the SAP system to use this feature. For example, a correctly configured SAP system can read information from the directory and store information there.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

LDAP user authentication was first introduced in HANA 2.0 SPS 00; in the later release HANA 2.0 SPS 03 it was enhanced with automatic user provisioning. So if your database is on HANA 2.0, you can configure LDAP as per the steps listed in this blog.

Procedure


To enable LDAP user authentication, you set up a connection to an LDAP server by creating an LDAP provider in the SAP HANA database. Depending on your requirements, you configure the LDAP server to authenticate users only, or to authenticate and authorize users.

In this blog, we will just configure LDAP to authenticate users. Other functionality like LDAP group authorization and automatic user creation is not covered here.

Prerequisites


1. An LDAP v3 compliant server
2. You have the system privilege LDAP ADMIN.
3. A certificate collection with purpose LDAP exists in the database and the certificate of the Certificate Authority (CA) that signed the certificate used by the LDAP server has been added. This is required to enable secure communication between SAP HANA and the LDAP server using the TLS/SSL protocol.
4. LDAP authentication is an active authentication mechanism in the SAP HANA database. You can verify this by checking the value of the parameter [authentication] authentication_methods in the global.ini configuration file (a quick way to check this from a SQL client is sketched below).
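
As a quick illustration of that last check, the parameter can be read from the monitoring view M_INIFILE_CONTENTS, for example with the Python hdbcli client (host, port, user and password below are placeholders):

# Sketch: check which authentication methods are active (connection details are placeholders)
from hdbcli import dbapi

conn = dbapi.connect(address='hanahost', port=30041, user='DBADMIN', password='***')
cur = conn.cursor()
cur.execute(
    "SELECT LAYER_NAME, VALUE FROM M_INIFILE_CONTENTS "
    "WHERE FILE_NAME = 'global.ini' AND SECTION = 'authentication' "
    "AND KEY = 'authentication_methods'"
)
for layer, value in cur.fetchall():
    print(layer, value)   # the value should include 'ldap'
conn.close()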

Create LDAP Provider


To configure a connection to an LDAP server in SAP HANA, you need to create an LDAP provider in the SYSTEMDB and in each tenant database with the CREATE LDAP PROVIDER or ALTER LDAP PROVIDER statements.

NOTE: In this blog, the steps shown are performed in a tenant database. So if you want LDAP authentication in the SYSTEMDB and other tenants in the system, the same steps need to be followed in each database.

Access to the LDAP server takes place using an LDAP server user with permission to perform searches as specified by the user look-up URL. The credential of this user is stored in the secure internal credential store.

Communication between SAP HANA and the LDAP server can be secured using the TLS/SSL protocol or Secure LDAP protocol (LDAPS).

--- Create LDAP Provider

CREATE LDAP PROVIDER LDAP_NONPROD
  CREDENTIAL TYPE 'PASSWORD' 
  USING 'user=CN=<user_dn_string_literal>;password=<passphrase>'
  USER LOOKUP URL 
   'ldap://<hostname>:<Port>/<Base DN>??sub?(&(objectClass=user)(sAMAccountName=*))'
  ATTRIBUTE DN 'distinguishedName'
  SSL ON
  DEFAULT ON
  ENABLE PROVIDER;

ATTRIBUTE MEMBER_OF

I have not used this LDAP attribute, as I don't want to verify whether the users are members of any group.

SSL {ON|OFF}

I want SSL to be enabled for the client connection with LDAP. When using SSL protocol, the trust store used to authenticate communication must be a certificate collection in the SAP HANA database with the purpose LDAP. The certificate of the Certificate Authority (CA) that signed the certificate used by the LDAP server must be available in this certificate collection.

When set to ON, the SSL/TLS protocol is used, and the URL begins with “ldap://”.

Detail option about syntax is mentioned in CREATE LDAP PROVIDER Statement (Access Control)

If the above syntax is correct, you will get the message below and the LDAP provider is created:

Statement 'CREATE LDAP PROVIDER LDAP_NONPROD CREDENTIAL TYPE 'PASSWORD' USING ...' 
successfully executed in 67 ms 373 µs (server processing time: 2 ms 676 µs) - Rows Affected: 0

NOTE: There is a high chance of getting this “CREATE LDAP PROVIDER” SQL query wrong, and you won't be able to validate it. I cannot post the exact query here, but if you have configured LDAP in ABAP, then try to use a similar user string here and the same base entry as in ABAP.

Now we will validate an LDAP provider configuration and LDAP authentication for users of that LDAP provider.

--- Validate LDAP provider

VALIDATE LDAP PROVIDER LDAP_NONPROD;

You will get the message below, as we have not yet maintained the root certificate in our tenant database's certificate collection.

Could not execute 'VALIDATE LDAP PROVIDER LDAP_NONPROD' in 66 ms 781 µs .

SAP DBTech JDBC: [4200]: Validate LDAP provider failed because of internal error: Unable to bind with LDAP provider LDAP_NONPROD.

Secure Communication Between SAP HANA and an LDAP Directory Server


How do you get the AD root certificate?

You can either ask your AD team or get the AD root certificate from your desktop. Open “Manage computer certificates”.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

In our case, the root certificate resides under “Intermediate Certification Authorities” > “Certificates”. So check where your root certificate is installed on your desktop and export it as a .cer file.

Login to Tenant Database in HANA Cockpit > “Manage Certificates”

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Click on “Import”

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

If you have multiple root certificates, browse to the path of each root certificate and import them one by one.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Once all are added, click on “Go to Certificate Collections”

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Click on “Add” in bottom left

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Give name “LDAP_PSE”

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Click on “Edit Purpose” and change the purpose to “LDAP”

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Click on “Add Certificate” and select all the certificates that you imported in the earlier steps.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Now try to validate the LDAP provider again; it should work without any issues.

--- Validate LDAP provider

VALIDATE LDAP PROVIDER LDAP_NONPROD;

Now you should get below result

Statement 'VALIDATE LDAP PROVIDER LDAP_NONPROD'

successfully executed in 103 ms 461 µs  (server processing time: 34 ms 595 µs) - Rows Affected: 0

Verify AD User using LDAP Provider in HANA


Now that the LDAP provider is configured and validated, we will check whether an AD user can be validated using the LDAP provider even though that user is not present in the HANA database.

--- Validate connectivity of users using LDAP

VALIDATE LDAP PROVIDER LDAP_NONPROD CHECK USER PADD02 PASSWORD "*********";

You will get the result below (I didn't have this user in HANA, but it is an AD user):

Statement 'VALIDATE LDAP PROVIDER LDAP_NONPROD CHECK USER PADD02 PASSWORD "*********"'

successfully executed in 96 ms 728 µs  (server processing time: 31 ms 41 µs) - Rows Affected: 0

With the query below, you can check whether the user is available in the HANA database or not. As we can see, the user is not in the HANA database, but LDAP was still able to validate the user.

SELECT USER_NAME, CREATOR, CREATE_TIME, LAST_SUCCESSFUL_CONNECT, AUTHORIZATION_MODE 
  FROM USERS
 WHERE USER_NAME = 'PADD02';

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

NOTE: We have not enabled automatic user creation in the HANA database, so we have to create the user in the HANA database manually with the needed roles and set the authentication method to LDAP.

User Creation in HANA with LDAP Authentication Enabled


Now we will execute the query below to create a user in the HANA database with authentication via the LDAP provider.

--- Create User

CREATE USER PADD02 WITH IDENTITY FOR LDAP PROVIDER;

Result will be

Statement 'CREATE USER PADD02 WITH IDENTITY FOR LDAP PROVIDER'

successfully executed in 85 ms 631 µs  (server processing time: 19 ms 802 µs) - Rows Affected: 0

We don't have to provide a password while creating the user. Log in to the tenant database in HANA Studio > Manage Users.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

As you can see, the authentication method is LDAP and I'm able to log in with my network password.
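
A simple way to confirm this outside of HANA Studio is to open a SQL connection as that user, for example with the Python hdbcli client; host and port below are placeholders and the password is the Active Directory network password:

# Sketch: verify that the LDAP-backed logon works (host/port are placeholders)
from hdbcli import dbapi

conn = dbapi.connect(address='hanahost', port=30041, user='PADD02', password='<network password>')
cur = conn.cursor()
cur.execute("SELECT CURRENT_USER FROM DUMMY")
print(cur.fetchone())   # expected: ('PADD02',)
conn.close()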

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

How do we change authentication to LDAP for existing users?

Just for testing purposes, I have removed the above user with LDAP authentication and created it normally with password authentication. So, as you can see, the existing user PADD02 has password authentication, and now I have to change it to LDAP.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

So, the first thing is to disable the user's password:

ALTER USER PADD02 DISABLE PASSWORD;

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Now you can see there is no authentication method defined for the user. So, we will enable LDAP authentication for this user using the query below:

ALTER USER PADD02 ENABLE LDAP;

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

LDAP authentication is enabled for user.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Add roles to the user manually, as we won't be providing roles using LDAP groups. Also make sure that the authorization mode is LOCAL instead of LDAP.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials, SAP HANA Studio

Authorization of LDAP-Authenticated Users


The internal database user to be used for subsequent authorization checks in SAP HANA is determined during the logon process. With LDAP authentication, the internal database user name is the same as the external identity used to log on. The following situations are possible:

◈ Database user exists and is configured for LDAP group authorization

If the database user exists and is configured for LDAP group authorization (authorization mode LDAP), it is verified that the authenticated user is a member of at least one LDAP group mapped to at least one SAP HANA role. If this is the case, the user is logged on and the identified roles granted. For more information, see the section on LDAP group authorization for existing users.

◈ Database user exists and is configured for local authorization

If the database user exists and is configured for local authorization (authorization mode LOCAL), the user is logged on. Privileges and roles must be granted directly to the database user by a user administrator.

◈ Database user does not exist and the LDAP provider is configured for automatic user creation

If the database user does not exist and the LDAP provider is enabled to create database users in SAP HANA, the required database user is created. This is described in more detail in the next section