
How To: Create Dataflow within BW/4HANA


What is a DataFlow Object?


The DataFlow object is a way to model your data flow within SAP BW/4HANA in the Eclipse-based modeling tools.

It is a graphical feature that enables you to create a data flow from start to finish.

All created objects (e.g. DataSources, ADSOs, InfoObjects) are available for re-use within other data flows.

The Dataflow object is transportable.

Another benefit is the ability to add documentation to the objects used.

In addition, you can use a DataFlow for blueprinting by working with non-persistent objects; once you are happy with the model, persistent objects can be created out of the DataFlow.
Whether an object is persistent can easily be seen by its background color: white means non-persistent, blue means the object is already persistent.

Furthermore, the DataFlow object itself has no influence on the runtime.

Now I want to show you how to create the DataFlow object, using an example of a complete data flow from DataSource to CompositeProvider.

How To Create the DataFlow Object


Right-click on the Data Flow Object node, or on an InfoArea:


When creating the DataFlow object, a prompt for a transport request opens up.

Afterwards, add a new or existing DataSource by dragging and dropping it into the Details section:


Double-clicking the added DataSource leads to the following properties:

Select a source system and choose whether you want a copy of an existing DataSource or a proposal from an ODP connection.


By clicking the “Next” button the data flow is shown – now it is possible to add any other object to the data flow.


In this example an ADSO was added and filled with three InfoObjects.


Now you have created a persistent ADSO.

Activate the newly created ADSO, then create a connection between it and the previously created DataSource.


Based on the connection, you’ll now see that creating a transformation and a DTP is possible.


Right-click on the transformation symbol and select “Create Transformation … “


Now you are able to create a transformation – Target and Source are filled automatically:


Here you are able to define all settings for the transformation, as well as connect the fields to the previously created ADSO.


In addition, if InfoObjects are available in the source, they are automatically connected. Don’t forget to activate!


The same applies to the creation of the DTP. All DTP properties can be selected – you can see, select and change all the properties already known from BW.


The same process applies to the CompositeProvider – drag and drop the CompositeProvider, assign a name and create a connection to the ADSO.


It is also possible to add a comment for specific objects within the data flow. Click the one you want to comment on, go to the Properties section and add documentation.


SAP BW/4HANA Migration – Authorisation

The simplification of object types in SAP BW/4HANA has an impact on authorisation objects. When converting an SAP BW system to SAP BW/4HANA, authorizations for object types that are not available in SAP BW/4HANA (like InfoCubes) must be replaced by authorizations for the corresponding object types (like ADSOs).

This article covers my experience of the impact on authorisation when migrating BW classic objects to BW/4HANA-compatible objects in a BW 7.5 system (HANA DB), along with a review of the tools available to assist with the authorisation process.

The following are the six aspects of authorisation that this article will cover:

1. Authorisation required for the BW/4HANA transfer toolbox (In-Place)
2. SAP defined action types
3. Authorisation impact to the BI business users by the BW/4HANA transfer toolbox on a BW 7.5 system.
4. Authorisation impact to the BI support users by the BW/4HANA transfer toolbox on a BW 7.5 system.
5. Transfer Authorisation Tool in BW/4HANA transfer cockpit (RSB4HCONV).
6. Authorisation impact once the BW system (7.x) is converted to a SAP BW/4HANA.

1. Authorisation required for the BW/4HANA transfer toolbox (In-Place)


Systems running on SAP BW 7.50 powered by SAP HANA can be converted in-place keeping their SID. In the realization phase of the conversion project, classic objects must be transferred into their HANA optimized replacements using the Transfer Toolbox (RSB4HTRF). This transfer can be performed scenario-by-scenario. When all classic objects have been replaced, the system conversion to BW/4HANA can be triggered.

To execute the object conversion process using the BW/4HANA transfer toolbox (transaction RSB4HTRF), I suggest that you create a new role that contains the following authorisation objects and values (reference note 2383530 for more information). This role will be required in all BW systems in the landscape as the BW/4HANA conversion needs to be executed manually in each system. Once implemented, please assign it only to support/project team members responsible for the conversion of the BW objects.


2. SAP defined action types


SAP have defined four types of actions that need to be applied for respective authorization objects impacted by the conversion process using the BW/4HANA transfer toolbox and the migration to BW/4HANA:

◈ Assume – Nothing to do. Authorizations will continue to work after conversion
◈ Adjust – Check and adapt values of authorization objects
◈ Replace – Change authorization object and adapt its values
◈ Obsolete – Not needed/supported authorization object that should be removed

The following sections will refer to these action types (reference note 2468657 for more information).

3. Authorisation impact to the BI business user by the BW/4HANA transfer toolbox on a BW 7.5 system.


As mentioned, my experience is based on a BW 7.5 (HANA DB) scenario. The data-level security is based on analysis authorisations (RSECADMIN) in conjunction with the authorisation object S_RS_AUTH. Before migration, each BI report is based on a MultiProvider.

SAP note 2468657 (BW4SL – Standard Authorizations) confirms that there is no impact on the S_RS_AUTH authorisation object (i.e. no changes are required after migrating objects to BW/4HANA-compatible objects).


After converting a data flow to a BW/4HANA-compatible data flow, I executed a BI report (impacted by this conversion) using a test user (a copy of an existing business user). The result was that there was no impact on the data-level authorisation (as expected).

If your data-level authorisation is configured in the same way as this scenario (i.e. BI reports based on MultiProviders only, along with analysis authorisations (S_RS_AUTH)), then converting your BW MultiProviders to CompositeProviders via the BW/4HANA toolbox (RSB4HTRF) will have no impact on the BI business user. I would still recommend doing a sanity check with a test user on a subset of the BI reports after converting the MultiProviders to CompositeProviders.

If you don’t have analysis authorisation in place, I suggest that you review the possibility of implementing it before starting the conversion of any data flows using the BW/4HANA transfer toolbox.

4. Authorisation impact to the BI support user by the BW/4HANA transfer toolbox on a BW 7.5 system.


From a BI support user’s perspective, you need to review the authorisation objects that have the action types replace and adjust. The following is a list of authorisation objects with these action types (from SAP OSS note 2468657):


The two main replace object types are for ADSOs (S_RS_ADSO) and CompositeProviders (S_RS_HCPR). In my scenario, as part of a previous BW 7.4 upgrade onto HANA DB, the security team manually added these objects (S_RS_ADSO & S_RS_HCPR) to all our support roles that contained the existing authorisation objects S_RS_ODSO, S_RS_HYPER, S_RS_ICUBE, S_RS_MPRO and S_RS_ISNEW.

For the remaining replace authorisation objects (S_RS_IOBJA (replacing S_RS_IOBJ) and S_RS_TRCS (replacing S_RS_ISNEW)) and all the adjust objects there are two options available to update the support roles (these options are also applicable for S_RS_ADSO and S_RS_HCPR):

◈ Manually update the security roles.
◈ Transfer Authorisation Tool (RSB4HCONV) – creates a new role with all the necessary updates (covered in the next section).

5. Transfer Authorisation Tool in BW/4HANA transfer cockpit (RSB4HCONV).


The Authorization Transfer Tool uses the existing roles in your system. It will create copies of these roles while preserving the original ones. Conversion rules for authorization objects are then applied on top of these role copies. After the conversion of objects using the Scope Transfer Tool, both the original and the created roles will be assigned to the users. After confirmation of the authorization object conversion and a successful system conversion to SAP BW/4HANA, you can then remove the original roles manually.

Any required actions on the authorization objects (especially those with action types adjust and replace) can be carried out only after the transfer of their corresponding SAP BW objects has been done in the system via the BW/4HANA transfer toolbox. The transfer of the SAP BW objects must be done using the Scope Transfer Tool. The transfer runs will provide the information required to adjust or replace the authorization objects in the selected roles:

◈ Mapping of new names and types of converted InfoProviders, transformations, etc.
◈ Names of additional InfoProviders created (e.g. Composite Provider for DataStore objects (advanced) with navigational attributes)

The following is the example provided in the BW/4HANA conversion guide:


The following is an overview of the above example.

A. Execute the transaction RSB4HCONV (BW/4HANA transfer cockpit) and select the Transfer Standard Authorizations (initial run) radio button


B. Then enter a run ID and select the Create button:


C. Add the support roles to be reviewed by selecting the Add Roles button and selecting each required role. For this example, the role TEST_CONV_AUTH was selected (same name as the run ID – please don’t let this confuse you).


D. Execute the initial run radio button. For each role, a new role is created, and the existing role is scanned for authorization objects with defined “assume” or “obsolete” rules. This is also called the Preparation Phase. It is not dependent on the BW/4HANA migration having been executed. If this is successful, green icons will appear in the first status column.


E. Assuming the BW/4HANA object migration has been executed, execute the Delta run radio button. The system will retrieve the details of the related scope transfer runs and scan the original roles for authorization objects with defined “adjust” or “replace” rules. Authorization objects with the “replace” rule are checked for conflicts. Then the role copies are adjusted according to the defined rules. If this is successful, green icons will appear in the second status column.


Please note that in RSB4HCONV (BW/4HANA transfer cockpit) there are two radio buttons – Transfer Standard Authorizations (initial run) and Transfer Standard Authorizations (delta run) – this step and the steps below can be executed with either option (once the BW/4HANA object migration has been executed).

F. Now review the prepared mapped roles and authorizations on the right-hand side under New Objects:


G. If you’re satisfied with the new objects, execute the Generate Target Roles run radio button. The system will generate the new roles and assign them to the same users as the corresponding original roles. The new role name will be the name in the Cnv. Name column – in this example it is TEST_CONV_AUTH_BW4H. Please note this name can be changed before this step by selecting the change icon (in the Change column) and entering an alternative name.


H. Once the BW system is converted to SAP BW/4HANA, you should remove the original roles (they are inconsistent anyway, since they contain obsolete authorization objects). In this example, the role TEST_CONV_AUTH should be removed from all users manually.

6. Authorisation impact once the BW system (7.x) is converted to SAP BW/4HANA.


Once the system is on BW/4HANA, the following authorisation objects are no longer required (action type obsolete). If you used the Authorization Transfer Tool (section 5 above), then you need to manually remove all the old roles (keeping the newly generated roles) from all users. If you did not use this approach, you need to work with the security team to manually remove the authorisation objects below from all impacted roles.


Using Dynamic Text in SAC Without Users Having to Input Prompts

Architecture: SAC using Live Connection to a HANA Database.

Many times in SAC we want to dynamically display a text based on some underlying values. In this scenario, we want to display the current Fiscal Year, Period and Quarter.

The underlying calculation view doesn’t have any optional or mandatory prompts, since the view itself was filtered to only return data for the current Fiscal Year, and the end users didn’t want any pop ups or filter bars to be displayed.

In this blog I’ll outline the simple steps in order to achieve that dynamic selection.

1. Create a Stored Procedure to derive the required values (Fiscal Year, Period, Quarter, Week, etc…)

2. In the calculation view, create an input parameter with parameter type Derived From Procedure / Scalar Function
3. In the SAC story, ensure the parameter does not open every time the story is opened

1. Create Stored Procedure

In the web IDE, create a simple stored procedure that returns a distinct value for what is required, such as the current Fiscal Year.
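A minimal sketch of such a procedure is shown below, assuming for illustration that the fiscal year simply equals the calendar year; the procedure name is made up, and the exact design-time syntax (e.g. an .hdbprocedure artifact versus the SQL console) depends on your environment. The key point is that it exposes exactly one scalar output parameter, which the input parameter in step 2 will be derived from.

CREATE PROCEDURE "GET_CURRENT_FISCAL_YEAR" (OUT OV_FISCAL_YEAR NVARCHAR(4))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA AS
BEGIN
-- Illustrative logic only: in practice the value would come from your fiscal calendar table
SELECT TO_NVARCHAR(YEAR(CURRENT_DATE)) INTO OV_FISCAL_YEAR FROM DUMMY;
END;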


2. In the calculation view, create an input parameter of Parameter Type Derived From Procedure/Scalar Function

Add the procedure created in step 1 in the Procedure/Scalar Function section, and ensure the check is selected for Input Enabled. This will force the prompt to appear.


Activate the view and do a data preview. You should see your input parameters with the values pre-populated per the logic in your stored procedure.


3. Edit the prompts in SAC. Open the story, select Edit Prompts, and select the model that is linked to the calculation view which has the input parameters:


Notice how the values are being populated. Most importantly, ensure the check box for Automatically open prompt when story opens is deselected.


Hit set and you’re all set.

Now when creating a dynamic text, you have the option of selecting Model Variables (which are the input parameters from the calculation view).


And there you have it:


Split table column value into multiple rows using SQL in SAP HANA


Introduction


In this post I would like to focus on a similar scenario, but this time the challenge will be to do the same not for a variable but for a table column. SAP HANA provides the STRING_AGG function for concatenating values from multiple rows into a single string; however, doing the opposite is quite challenging.
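As a quick reminder, the concatenating direction that STRING_AGG covers looks like the hedged sketch below (the PHONE_BOOK table and its columns are made-up names used only for illustration). The rest of this post deals with the reverse problem.

SELECT
"ID",
STRING_AGG("PHONE_NUMBER", ',' ORDER BY "PHONE_NUMBER") AS "PHONE_NUMBERS"
FROM "PHONE_BOOK"
GROUP BY "ID";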

Scenario


In my scenario I will use my test table containing contact person details. Its structure is as follows:

CREATE COLUMN TABLE CONTACTS
(
ID INT,
COUNTRY VARCHAR(2),
FULL_NAME VARCHAR(100),
PHONE_NUMBERS VARCHAR(200)
);

The existing CONTACTS table has a PHONE_NUMBERS column which stores comma-delimited numbers. The purpose is to display the phone numbers as separate rows and keep the other contact information as in the source table. The length of a single phone number may vary, and the count of phone numbers can also differ for each record.
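To make the steps easier to follow, here are a few purely illustrative test rows (the names and phone numbers are made up) matching the structure above:

INSERT INTO CONTACTS VALUES (1, 'PL', 'Jan Kowalski', '111-111-111,222-222-222,333-333-333');
INSERT INTO CONTACTS VALUES (2, 'DE', 'Max Mustermann', '444-444-444,555-555-555');
INSERT INTO CONTACTS VALUES (3, 'US', 'John Doe', '666-666-666');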

Split column values into multiple lines


To implement the logic for splitting column values into multiple lines, the following steps are required:

1. Retrieve the values delimited by separators from the string
2. Dynamically define maximum number of values for all concatenated strings
3. Generate multiple lines for each record
4. Distribute retrieved values to generated rows

Step 1. Retrieve the values delimited by separators from the string

In this step I will use a string function which extracts the values from a comma-separated string. For this purpose I will use the SUBSTR_REGEXPR SQL function. This function retrieves a substring from a given string based on a regular expression. It also allows specifying which occurrence of the matching substring we want to display.

The following expression retrieves the first occurrence of a string of any characters excluding commas:

SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE 1 )

Knowing that in my scenario there are up to 3 phone numbers concatenated within a single value, let’s add the expressions for the remaining numbers:

Query:

SELECT 
"ID",
"COUNTRY",
"FULL_NAME",
"PHONE_NUMBERS",
SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE 1) AS "PHONE_NUMBER1",
SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE 2) AS "PHONE_NUMBER2",
SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE 3) AS "PHONE_NUMBER3"
FROM 
CONTACTS;

Result:


Step 2. Dynamically define maximum number of values for all concatenated strings

In this step we want to determine the maximum number of phone number values in a single string. For this purpose I will use the OCCURRENCES_REGEXPR function to count the number of separators within the string.

Then I will add 1, because the number of commas is always one less than the number of phone numbers in the string:

OCCURRENCES_REGEXPR(',' IN "PHONE_NUMBERS" ) + 1

Now we have the count of phone numbers for each string.

Query:

SELECT 
"ID",
"COUNTRY",
"FULL_NAME",
"PHONE_NUMBERS",
OCCURRENCES_REGEXPR(',' IN "PHONE_NUMBERS" ) + 1 AS "OCCURRENCES_COUNT"
FROM 
CONTACTS;

Result:


Finally, I want to get the maximum value to know how many lines I need to generate. This value will be assigned to the variable MAX_NR_OCCURRENCES and will be used in Step 3. For the purpose of creating the variable, I used an anonymous block:

Query:

DO
BEGIN

DECLARE MAX_NR_OCCURRENCES INT;
SELECT
MAX( OCCURRENCES_REGEXPR(',' IN "PHONE_NUMBERS" ) + 1 )
INTO
MAX_NR_OCCURRENCES
FROM
CONTACTS;

END

Step 3. Generate multiple lines for each record

For generating multiple lines for each record I will cross join the CONTACTS table with a series of 3 records (because in my case there are at most 3 phone numbers in a string). To generate N records I used the SERIES_GENERATE_INTEGER function. The variable defined in Step 2 will be used as an input parameter for this function to define the number of records to be generated:

Query:

DO
BEGIN

DECLARE MAX_NR_OCCURRENCES INT;
SELECT
MAX( OCCURRENCES_REGEXPR(',' IN "PHONE_NUMBERS" ) + 1 )
INTO
MAX_NR_OCCURRENCES
FROM
CONTACTS;

SELECT * FROM SERIES_GENERATE_INTEGER(1,0, :MAX_NR_OCCURRENCES);

END

Result:


Now let’s cross join the series with the result set from Step 1. This way each record will be copied 3 times:

Query:

DO
BEGIN

DECLARE MAX_NR_OCCURRENCES INT;
SELECT
MAX( OCCURRENCES_REGEXPR(',' IN "PHONE_NUMBERS" ) + 1 )
INTO
MAX_NR_OCCURRENCES
FROM
CONTACTS;

SELECT 
CNT."ID",
CNT."COUNTRY",
CNT."FULL_NAME",
CNT."PHONE_NUMBERS",
SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE 1) AS "PHONE_NUMBER1",
SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE 2) AS "PHONE_NUMBER2",
SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE 3) AS "PHONE_NUMBER3",
SERIES."ELEMENT_NUMBER"
FROM 
CONTACTS CNT
CROSS JOIN SERIES_GENERATE_INTEGER(1,0, :MAX_NR_OCCURRENCES) SERIES;

END

Result:


Step 4. Distribute retrieved values to generated rows

In this step we will apply the final query adjustments to move the extracted phone numbers from columns to consecutive rows. To achieve that we can use the ELEMENT_NUMBER column from the SERIES_GENERATE_INTEGER function, which returns consecutive numbers for each line within a specific contact person. This column will be consumed by the OCCURRENCE parameter. By having consecutive numbers in the ELEMENT_NUMBER column we can dynamically extract the separated values one by one and display them in consecutive rows. We also need to remember that we initially generated three lines for each record, so at the end we need to filter out empty rows (for the cases where there are fewer than 3 phone numbers in the string).

Query:

DO
BEGIN

DECLARE MAX_NR_OCCURRENCES INT;
SELECT
MAX( OCCURRENCES_REGEXPR(',' IN "PHONE_NUMBERS" ) + 1 )
INTO
MAX_NR_OCCURRENCES
FROM
CONTACTS;

SELECT 
CNT."ID",
CNT."COUNTRY",
CNT."FULL_NAME",
SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE SERIES."ELEMENT_NUMBER") AS "PHONE_NUMBER"
FROM 
CONTACTS CNT
CROSS JOIN SERIES_GENERATE_INTEGER(1,0, :MAX_NR_OCCURRENCES) SERIES
WHERE
SUBSTR_REGEXPR('[^,]+' IN "PHONE_NUMBERS" OCCURRENCE SERIES."ELEMENT_NUMBER") IS NOT NULL;
END

Result:


SAP HANA & Data Warehousing for non-experts


Introduction


A quick blog, triggered by two questions from a customer last week:

1. What exactly is SAP HANA?
2. What does SAP mean by “modern data warehousing”?

The answers to these questions may interest a broader audience of non-SAP-experts, so let’s briefly discuss these topics, illustrated with slides from openSAP courses. On the top right corners of the images you will see something like “(bw4h2, w2u1)” in purple letters, indicating the openSAP course and the week and unit the slide was taken from.

What is SAP HANA?

I can think of 3 answers to this question:

1. it is a database;
2. it is an environment for data modeling;
3. it is an enabler of SAP’s view on the “Intelligent Enterprise”.

HANA as a database

Databases used to have issues with reading from and writing to the same tables simultaneously. That is why transaction systems for writing data – On-line Transaction Processing or OLTP – and reporting systems for reading data – On-line Analytical Processing or OLAP – are traditionally separated. It was also believed that the database technology for both types of processing should be widely different.


Research at the Hasso Plattner Institute, connected with SAP via its founder, indicated that workloads for OLTP versus OLAP systems are in fact not that different (see image above). One can use the same database technology for both processing types. The HANA database is such a “dual purpose” database. It is “in-memory” and “column-based”, delivering great processing speed and making it possible to read and write simultaneously. One can argue whether SAP was the first to come up with this technology (probably not) or whether it is technically the best (maybe not), but SAP has been quite decisive in bringing it to market and integrating it with all its software products. Better processing speed is great to have, as all PC users will confirm. But it also makes it possible to do things differently. More “virtual” processing of data. Or processing in “full mode” instead of “delta mode”. I will address this further on.

HANA as an environment for data modeling


Having access to a HANA database with a development tool like Eclipse + the HANA Studio add-ons gives the opportunity for powerful data modeling. The most powerful objects are probably “Calculation Views”. An example is shown below.


Standard data warehouse transformations, like various types of joins, unions and aggregations, can be developed in a graphical environment. Mostly just drawing lines and arrows and clicking on little icons. A child could do it. Well, almost. More complex transformations are of course possible, especially if the “Script” option is chosen for the calculation view. In that case, SQL expertise is required. This type of development is often called “Native HANA (Modeling)”.

And the good news: it is all virtual! The data remains in the tables where the calculation view starts from.
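For non-experts, a rough SQL analogy of what such a graphical calculation view expresses is sketched below: a join plus an aggregation, defined once and evaluated at query time, with no data copied. The table and column names are made up purely for illustration.

CREATE VIEW "SALES_BY_COUNTRY" AS
SELECT
C."COUNTRY",
SUM(S."AMOUNT") AS "TOTAL_AMOUNT"
FROM "SALES" S
INNER JOIN "CUSTOMER" C ON C."CUSTOMER_ID" = S."CUSTOMER_ID"
GROUP BY C."COUNTRY";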

HANA as enabler of the “Intelligent Enterprise”


In recent presentations, SAP emphasizes its view on the “Intelligent Enterprise”, always using the slide shown below.


The “Intelligent Suite” consists of all of SAP’s transaction systems, like S/4HANA, C/4HANA (a cannibal that ate C4C, Hybris, Gigya and Callidus Cloud), Ariba and SuccessFactors. These systems are either Software-as-a-Service (SaaS) only, or SAP is pushing customers towards the SaaS version, like with the ERP system S/4HANA.

The “Intelligent Technologies” are loosely coupled offerings grouped by the marketing term “Leonardo”. These offerings include functionalities in the following domains: Internet of Things (IoT), Artificial Intelligence (AI)/ Machine Learning (ML), Data Science/ Predictive Analytics and Blockchain. The analytics system “SAP Analytics Cloud” (SAC) with the Digital Boardroom on top and the Big Data solution “Data Hub” (see further on) are also sometimes covered by the Leonardo umbrella. And then there is unit 7 of the openSAP course “SAP Leonardo – Enabling the Intelligent Enterprise” (leo1) with the title “Data Intelligence”. I just finished rewatching the movie for this unit looking for recognizable products or solutions … but still have no clue what it is about. As usual, some of these tools will stay, some will disappear. For the latter my money is on “Data Intelligence”. And probably Blockchain. And Digital Boardroom. Let’s stop here for the moment.

The “workhorse” of the Intelligent Enterprise is the “Digital Platform”, consisting of “Data Management” and the “Cloud Platform”. Data Management means … HANA. The “SAP Cloud Platform” (SCP) as it is called nowadays used to be called the “HANA Cloud Platform” (HCP) as it relies heavily on HANA database(s) underneath. It is a powerful Platform-as-a-Service (PaaS) offering for data storage, data modeling and app development. This is good stuff!

My next two blogs will be more technical ones describing some of the work we did at Ciber on the SCP.

What does SAP mean by “modern data warehousing”?


Business Warehouse


Let’s start with what SAP definitely does not mean by it, and that is what I became particularly good at over the past 18 years: the old-school “Business Warehouse” system, also known as BW, pronounced “Bee Double-You”. At least not in its currently often used form in which it is not residing on a HANA database. Somewhat disrespectfully, SAP calls this configuration “BW on anyDB”. For years now, no development effort has been put into this configuration.

An improvement is already “BW on HANA” or “BW powered by HANA”. Replacing “anyDB” by a HANA database immediately brings improved data loading and query performance. But also, new data warehousing objects are introduced in higher software versions, simplifying and speeding up development and promoting the use of virtual data layers. In BW on HANA, old and new data warehousing objects coexist, leaving customers the option not to migrate to the new objects, thus missing the benefits these objects bring.

The next step to be taken by customers is towards “BW/4HANA”, pronounced “Bee Double-You For HANA”. In this version, only the new data warehousing objects are available. As SAP states it, BW/4HANA “… leaves behind the legacy of SAP BW on anyDB”. Moving to this version will usually require a re-implementation instead of an upgrade type of migration. This version was first introduced in 2017, and not many customers have taken this step yet. And what is next? Rumor has it that SAP is working on a SaaS offering for data warehousing under the name “Project Blueberry”. Interesting, but not for now.

Evolution in a nutshell:

“BW on anyDB” → “BW on HANA” → “BW/4HANA” → SaaS data warehouse

Leave the data where it is!


What SAP does mean by “modern data warehousing” can be summarized in one sentence: “leave the data where it is!”. At least, as much as possible. I am a firm supporter of this view. Moving data around and storing multiple versions of the same data not only raises data storage costs, but also – and in my view more importantly – introduces errors in data used for reporting compared with data in the source system. Data transformation and integration is of course required, but should be done virtually as much as possible.

Mixed data warehousing BW + SQL


For data warehousing, SAP has lately been promoting a “mixed architecture with SAP BW/4HANA and SAP HANA”. See image below. “Native HANA modeling” described earlier on the left, more conventional BW on the right.


The idea is to use the best of both worlds by developing “mixed scenarios” that are partially Native HANA and partially BW. Great strength lies in the ability to “store data once – use multiple times”. Actually storing data in the warehouse needs to be kept to a minimum. A new word is now used for this thing to avoid: “data persistency”. In practice, the improved processing speed of the HANA database has its limitations, and on occasion intermediate data persistency is required to deliver sufficient performance in the analytics tools on top.

Data warehouse + data lake


SAP is struggling with how to enter the world of “Big Data”. Unstructured data, e.g. from social media or IoT, is stored as a “data lake” in cheap databases like Hadoop and processed with Open Source tools. How to compete with stuff that is for free? Well, basically by embracing this stuff. See image below.


SAP is investing heavily in connectivity between Hadoop and other data lakes and SAP’s data warehouse, in which structured business data resides. If you can’t beat them, join them! Good thinking.

Orchestration by Data Hub


More embracing: SAP’s new product “Data Hub”. Quote: “SAP Data Hub provides data orchestration and metadata management across heterogeneous data sources”. See image below.


Data Hub uses SAP Vora technology, is well-integrated with other SAP systems, and is particularly good at leaving the data where it is. It also cleverly integrates or orchestrates darlings of the Open Source community like Hadoop, S3, Kubernetes, Docker, Kafka, Spark, Python, Grafana, Kibana and more. Again, good thinking! A colleague of mine who spent notably more years in the world of Big Data than I did sees potential in this product, even though he is far from an SAP fan. But it will be hard to convince this and other representatives of the Open Source community to actually pay for software.

Reverse GeoCode HANA Data with Google API

We recently came across a scenario with a client, where we needed to Reverse Geocode their coordinate information. In essence what that means, is that they’d provide latitude and longitude data, and we needed to retrieve the geopolitical information surrounding that coordinate, like street name, number, city, state, country, etc.

Our starting point was an existing blog post by Kevin covering geocoding of HANA data via XSJS. Unfortunately for us, the blog was from 2014 and Google’s API interface for reverse geocoding has changed a bit. Also, Kevin’s solution is very sophisticated and dynamic, and our requirement was very basic, so we had to simplify it a bit.

So with that in mind, I wanted to provide a walk-through of the solution that we came up with, built on top of Kevin’s original one.

We are using the Google API for this. I’m sure there are other resources available for the same purpose. Please note that the Google API is a paid product, so read through their pricing to understand the cost implications:
https://developers.google.com/maps/documentation/geocoding/usage-and-billing

STEP 1

The first thing we did was create a schema and a table to contain the coordinates (latitude + longitude), as well as the additional reverse geocoding attributes that we’re retrieving from Google’s API.

First we created the schema "REVERSE_GEOCODE", and also granted SELECT to the _SYS_REPO user:

CREATE SCHEMA "REVERSE_GEOCODE";

GRANT SELECT ON SCHEMA "REVERSE_GEOCODE" TO _SYS_REPO WITH GRANT OPTION;

We then created a simple table with latitude and longitude, as well as the geopolitical information to be retrieved from Google’s API:

CREATE COLUMN TABLE "REVERSE_GEOCODE"."REVERSE_GEOCODE" (
"LATITUDE" DOUBLE CS_DOUBLE NOT NULL ,
"LONGITUDE" DOUBLE CS_DOUBLE NOT NULL ,
"STREET_NO" NVARCHAR(30),
"ADDRESS" NVARCHAR(100),
"NEIGHBORHOOD" NVARCHAR(100),
"CITY" NVARCHAR(100),
"COUNTY" NVARCHAR(100),
"STATE" NVARCHAR(100),
"COUNTRY" NVARCHAR(100),
"POST_CODE" NVARCHAR(30),
"POST_CODE_SUFFIX" NVARCHAR(30),
"STATUS" NVARCHAR(30),
PRIMARY KEY (
"LATITUDE",
"LONGITUDE")) UNLOAD PRIORITY 5 AUTO MERGE

We then inserted some random coordinate information to test the solution:

insert into "REVERSE_GEOCODE"."REVERSE_GEOCODE" values(36.778259,-119.417931,'','','','','','','','','','');
insert into "REVERSE_GEOCODE"."REVERSE_GEOCODE" values(39.742043,-104.991531,'','','','','','','','','','');
insert into "REVERSE_GEOCODE"."REVERSE_GEOCODE" values(41.881832,-87.623177,'','','','','','','','','','');
insert into "REVERSE_GEOCODE"."REVERSE_GEOCODE" values(42.331429,-83.045753,'','','','','','','','','','');
insert into "REVERSE_GEOCODE"."REVERSE_GEOCODE" values(51.509865,-0.118092,'','','','','','','','','','');

STEP 2

Ensure that before you start your development, you have an API key from Google, following the instructions below:

https://developers.google.com/maps/documentation/javascript/get-api-key

This will be required to actually query the API.
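To get a feel for what the destination and code below will eventually send, this is Google’s documented reverse geocoding request pattern, shown here with a placeholder key and one of the test coordinates from STEP 1:

https://maps.googleapis.com/maps/api/geocode/json?key=YOUR_API_KEY&latlng=41.881832,-87.623177

The JSON response contains a results array whose address_components entries are what the library below picks apart.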

STEP 3

Now we can build our JavaScript objects in HANA. Let’s start by creating an XS Project to house all the relevant objects:


Give it a name:


Ensure you select the correct repository workspace:


Ensure both the XS Application Access and XS Application Descriptor are checked. Also add the geodataEnrich XS JavaScript and hit Finish:


Your XS project will be created with the selected objects:


Next, let’s create the HTTP Destination Configuration file. Within this new package, click new:


Give it the name geocodeApiGoogleDest.xshttpdest (this is the destination name referenced later in the library code):

And enter the information below. Note that the key will be the API key obtained in STEP 2. Also note that some of these parameters will need to be tailored based on your network settings:

host = "maps.googleapis.com";
port = 443;
description = "Google Geocoding API";

pathPrefix = "/maps/api/geocode/json?key=token_obtained_in_STEP2";
authType = none;
useProxy = false;
timeout = 0;
useSSL = true;
sslAuth = anonymous;

This is what it should look like:


Next, activate the file by right-clicking on it and selecting Activate:


We’ll now create a JavaScript library file:


Give it the name geocodeApiGoogle.xsjslib (this is the library name imported by the xsjs service later on):


Once the file is created, enter the code below:

//------------------------------------------------------------------------------
// PUBLIC methods
//------------------------------------------------------------------------------
//------------------------------------------------------------------------------
// function reverseGeocode(lat, lon)
// Takes a latitude and longitude value, returns an addressResults object containing these
// properties (see https://developers.google.com/maps/documentation/geocoding):
//     STREET_NO, ADDRESS, NEIGHBORHOOD, CITY, COUNTY, STATE, COUNTRY, POST_CODE, POST_CODE_SUFFIX
//     STATUS (see https://developers.google.com/maps/documentation/geocoding/#StatusCodes)
//--------------------------------------------------------------------------------
function reverseGeocode(lat, lon) {

// Init address data to remove all existing properties 
gAddressResults = {};
// Call Google reverse geocoding API
var dest = $.net.http.readDestination("ReverseGeocode", "geocodeApiGoogleDest");
var client = new $.net.http.Client();
var req = new $.web.WebRequest($.net.http.GET,"&latlng=" + lat + "," + lon);
client.request(req, dest);
var response = client.getResponse();

// Parse results, this adds properties to gAddressResults as it goes
var geoData = JSON.parse(response.body.asString());
//log(req);
//log(response.body.asString());
//log("&latlng=" + lat + "," + lon );
var j;
if(geoData.hasOwnProperty("results")){
var i=0;
for (i = 0; i < geoData.results.length; i++) {
if (geoData.results[i].hasOwnProperty("address_components")){
for (j = 0; j < geoData.results[i].address_components.length; j++) {
if (geoData.results[i].address_components[j].hasOwnProperty("types")){
if (geoData.results[i].address_components[j].types.indexOf("route") !== -1){
gAddressResults['ADDRESS'] = geoData.results[i].address_components[j].short_name;
}
if (geoData.results[i].address_components[j].types.indexOf("street_number") !== -1){
gAddressResults['STREET_NO'] = geoData.results[i].address_components[j].short_name;
}
if (geoData.results[i].address_components[j].types.indexOf("neighborhood") !== -1){
gAddressResults['NEIGHBORHOOD'] = geoData.results[i].address_components[j].short_name;
}
if (geoData.results[i].address_components[j].types.indexOf("locality") !== -1){
gAddressResults['CITY'] = geoData.results[i].address_components[j].short_name;
}
if (geoData.results[i].address_components[j].types.indexOf("administrative_area_level_2") !== -1){
gAddressResults['COUNTY'] = geoData.results[i].address_components[j].long_name;
}
if (geoData.results[i].address_components[j].types.indexOf("administrative_area_level_1") !== -1){
gAddressResults['STATE'] = geoData.results[i].address_components[j].long_name;
}
if (geoData.results[i].address_components[j].types.indexOf("country") !== -1){
gAddressResults['COUNTRY'] = geoData.results[i].address_components[j].long_name;
}
if (geoData.results[i].address_components[j].types.indexOf("postal_code") !== -1){
gAddressResults['POST_CODE'] = geoData.results[i].address_components[j].long_name;
}
if (geoData.results[i].address_components[j].types.indexOf("postal_code_suffix") !== -1){
gAddressResults['POST_CODE_SUFFIX'] = geoData.results[i].address_components[j].long_name;
}
}
}
}
}

}
// Status
var status = geoData.status || "Status unknown";
gAddressResults['STATUS'] = status;
return gAddressResults;
}

var gAddressResults = {};
var gDepth = 0;
var gStoreLastLongname = "emptyLongname";

var gLog = '';

function log(s) {
gLog += '\n';
gLog += s.toString();
// optionally copy log to hana trace files (gConfig may not be defined in this library)
if (typeof gConfig !== 'undefined' && gConfig.detailedLogging === 'hana') {
$.trace.info(s.toString());
}
}

Pay close attention to the $.net.http.readDestination call in the code above; it needs to point to the package created in your environment, as well as to the correct HTTP destination file.


Activate the file.

Next, add the code below to the geodataEnrich.xsjs file that was created when we created the XS Project:

//------------------------------------------------------------------------------
// geodataEnrich.xsjs
//------------------------------------------------------------------------------
// Reads records from a source table and enriches them with address information.  For example, records
// with a latitude and longitude can have additional data populated giving country, region, postcode etc.  The address
// information is retrieved using Google's geocoding API.
//--------------------------------------------------------------------------------
// URL Parameters:
//   maxrecs    = max records to update eg 1000. Read google license about max per day
//   mindelayms = minimum delay in ms 
//   log        = omit parameter entirely for no logging, use log=active to see details on screen when url finishes, and use 
//                log=hana to write to hana trace file only (as an info level trace record)
//   simulate   = omit parameter entirely to update records, use simulate=active to not do any update
//   schema     = source schema name
//   table      = source table name
//   fldlat     = source field holding latitude
//   fldlon     = source field holding longitude
//   fldcty     = name of field in source table that will receive the Country address information (optional)
//   fldad1     = name of field in source table that will receive the admin level 1 information, like a region (optional)
//   fldad2     = name of field in source table that will receive the admin level 2 information, like a sub-region (optional)
//   fldad3     = name of field in source table that will receive the admin level 3 information, like a city (optional)
//   fldpost    = name of field in source table that will receive the post code or zip code information (optional)
//   fldblank   = name of field in source table that is used to identify records you want to write to, this is to prevent the
//                same records being written to over and over again.  So if this field is NULL then this service will attempt
//                to write to all target fields.  If this field is filled, the record will not be selected.
//   fldstat    = name of field in source table this will receive the status of the geocode API call (optional)
//--------------------------------------------------------------------------------
// Sample URLs:
//   All URLs will start as usual, http://<server>:80<instance>/<package path>/
//
//   Example 1
//   Simulate the update of 10 records to table "GEODATA"."testtable01", with 400ms delay between calls, logging to screen, and storing
//   result of geocode API call in the field STATUS.  The field to select on is COUNTRY (ie search for records with COUNTRY=NULL) and 
//   the fields to write to are ZIP and COUNTRY. 
//   geodataEnrich.xsjs?schema=GEODATAENRICH&table=testtable01&maxrecs=10&mindelayms=400&log=active&simulate=active&fldblank=COUNTRY&fldstat=STATUS&fldpost=ZIP&fldcty=COUNTRY
//
//   Example 2
//   Actual update of 2000 records, with 100ms delay between calls, with no logging.  The field to select on is COUNTRY and the fields to 
//   write to are POSTCODE, REGION and SUBREGION. 
//   geodataEnrich.xsjs?schema=GEODATAENRICH&table=testtable01&maxrecs=2000&mindelayms=100&fldblank=COUNTRY&fldpost=POSTCODE&fldad1=REGION&fldad2=SUBREGION
//
//--------------------------------------------------------------------------------
$.import("ReverseGeocode","geocodeApiGoogle");
var GEOAPIGOOGLE = $.ReverseGeocode.geocodeApiGoogle;

//--------------------------------------------------------------------------------
//GLOBALS
//--------------------------------------------------------------------------------
var gConstants = {
EMPTY : 'EMPTY',
NOTUSED : 'NOT USED',
};

var gConfig = {
// Values below are used as defaults if values not specified in URL
maxRecords : 2500,
minDelayMS : 500,
serviceProvider : 'google',
detailedLogging : 'notactive',
simulate: 'notactive',
};

var gTable = {
// Inbound table-related globals (values below areused as defaults of not specified in URL)
sourceSchema : gConstants.EMPTY,
sourceTable : gConstants.EMPTY,
sourceTableKey : gConstants.EMPTY,// string of all key fields suitable for a select statement field list
sourceTableKeyFieldList : [],       // list (well, an array) of key fields
sourceTableKeyFieldCount : 0,       // count of key fields
sourceFieldLat : 'LATITUDE',
sourceFieldLon : 'LONGITUDE',
// Processing table-related globals
sourceTableKeyCount : 0,
resultSet : null,
resultSetFieldCount : 0,
targetProperties : [],  // array of used JSON property names that the geocode xsjslib library call will return, indexed same as targetFieldnames
targetFieldnames : [],  // array of table field names to write to, indexed same as targetProperties
targetFieldCount : 0,   // count of fields that will be filled eg just country, or country + zip etc
// Outbound table-related globals
targetFieldCountry : gConstants.NOTUSED,
targetFieldAdmin1 : gConstants.NOTUSED,
targetFieldAdmin2 : gConstants.NOTUSED,
targetFieldAdmin3 : gConstants.NOTUSED,
targetFieldPostalCode : gConstants.NOTUSED,
targetFieldThatIsBlank : gConstants.EMPTY,
targetFieldStatus : gConstants.NOTUSED,
};

//------------------------------------------------------------------------------
//Entry point
//------------------------------------------------------------------------------
var gLog = '';  // global log
var gRecsProcessed = 0;
main();

function main() {
try {
prepareParameters();
readDataFromTable();
mainProcessing();
} catch(e) {
// on exception, force log to be shown
gConfig.detailedLogging = 'active';
if (e.name === 'UrlError' || e.name === 'TableError') {
// Error already logged, nothing to do
} else { 
throw(e);
}
} finally {
//finish();
}
}



//--------------------------------------------------------------------------------
// Functions
//--------------------------------------------------------------------------------
// global log
function log(s) {
let i = 1;
gLog += '\n';
gLog += s.toString();
// optionally copy log to hana trace files
if (gConfig.detailedLogging === 'hana') {
$.trace.info(s.toString());
}
}

// Read parameters from URL or use defaults
function prepareParameters() {
var i = 0;
// In this simplified version the URL parameter overrides are not applied; the defaults above and the hard-coded values below are used
gConfig.maxRecords = gConfig.maxRecords;
gConfig.minDelayMS = gConfig.minDelayMS;
gConfig.serviceProvider = gConfig.serviceProvider;
gConfig.detailedLogging = gConfig.detailedLogging;
gConfig.simulate = gConfig.simulate;
gTable.sourceSchema = 'REVERSE_GEOCODE';//$.request.parameters.get("schema") || gTable.sourceSchema;
gTable.sourceTable = 'REVERSE_GEOCODE';//$.request.parameters.get("table") || gTable.sourceTable;
gTable.sourceFieldLat = 'LATITUDE';// || gTable.sourceFieldLat;
gTable.sourceFieldLon = 'LONGITUDE';//$.request.parameters.get("fldlon") || gTable.sourceFieldLon;
log('=== Parameters ====================================');
log('Config');
log('  Maximum API calls to make  : ' + gConfig.maxRecords);
log('  Min delay between calls    : ' + gConfig.minDelayMS + ' milliseconds');
log('Source data');
log('  Source schema              : ' + gTable.sourceSchema);
log('  Source table               : ' + gTable.sourceTable);
log('  Source field for latitude  : ' + gTable.sourceFieldLat);
log('  Source field for longitude : ' + gTable.sourceFieldLon);

}

function readDataFromTable() {
//--------------------------------------------------------
// Read the table's meta data
//--------------------------------------------------------
var query = prepareQueryForMetadata();
var connSelect = $.db.getConnection();
// query string with ? params
var pstmtSelect = connSelect.prepareStatement(query);
// parameter replacement
pstmtSelect.setString(1, gTable.sourceSchema);
pstmtSelect.setString(2, gTable.sourceTable);
var rs = pstmtSelect.executeQuery();
var fld = '';
var keyCount = 0;
// Build string representing table key and table key with parameters
gTable.sourceTableKey = '';
gTable.sourceTableKeyFieldList = [];
while (rs.next()) {
fld = rs.getString(1);
gTable.sourceTableKey += ('\"' + fld + '\" ');   // trailing space becomes the ', ' separator below
gTable.sourceTableKeyFieldList[keyCount] = fld;
keyCount = keyCount + 1;
}
gTable.sourceTableKey = gTable.sourceTableKey.trim();
gTable.sourceTableKey = gTable.sourceTableKey.replace(/ /g, ', '); // global replace space, with space comma
log('=== Table Information ============================');
log('Table Metadata Query (template): ' + query);
if (keyCount > 0){
    log('Table Key: ' + gTable.sourceTableKey);
    gTable.sourceTableKeyFieldCount = keyCount;
    // not logging key field list, but could
} else {
log('*** ERROR: table ' + gTable.sourceTable + ' does not exist, or does not have a primary key');
throw {name : "TableError", };
}
//--------------------------------------------------------
// Read source table data proper 
//--------------------------------------------------------
query = prepareQueryForMainRead();
log('Main Select Query: ' + query);
connSelect = $.db.getConnection();
pstmtSelect = connSelect.prepareStatement(query);
// Store results
gTable.resultSet = pstmtSelect.executeQuery();
gTable.sourceTableKeyCount = keyCount;
gTable.resultSetFieldCount = keyCount + 2;  // number of fields in key plus 2 for lat lon
log('');
}

// Prepare metadata selection query, returns query string (with params as ? to be filled later)
function prepareQueryForMetadata() {
var select = 'SELECT \"COLUMN_NAME\"';
var from = ' FROM \"SYS\".\"CONSTRAINTS\"';
var where = ' WHERE \"SCHEMA_NAME\" = ? AND \"TABLE_NAME\" = ?';
var orderby = ' ORDER BY \"COLUMN_NAME\"';
var query = select + from + where + orderby;
return query;
}

// Prepare main selection query, returns query string  (no params possible here)
function prepareQueryForMainRead() {
var select = 'SELECT TOP ' + gConfig.maxRecords + ' ' + gTable.sourceTableKey;
var from = ' FROM \"' + gTable.sourceSchema + '\".\"' + gTable.sourceTable + '\"';
var where = ' WHERE \"STATUS\"!=\'OK\' or \"STATUS\" is null';
var orderby = ' ORDER BY ' + gTable.sourceTableKey;
var query = select + from + where + orderby;
return query;
}

// Prepare update statement, returns query string (with params as ? to be filled later)
function prepareQueryForUpdate(addressData,lat,long) {
//--------------------------------------------------------
// The UPDATE clause
//--------------------------------------------------------
var qupdate = 'UPDATE \"' + gTable.sourceSchema + '\".\"' + gTable.sourceTable + '\"';
//--------------------------------------------------------
// The SET clause
//--------------------------------------------------------
var i = 0;
var qset = ' SET ';
for (var key in addressData) {
qset += ( '\"' + key + '\" = \'' + addressData[key] + '\'');
qset += ', ';
}
qset = qset.slice(0, -2);
var qwhere = ' WHERE "LATITUDE" = \'' + lat + '\' AND "LONGITUDE" = \'' + long + '\'';
var queryUpdate = qupdate + qset + qwhere;
return queryUpdate;
}

// Main processing, this loops over the result set of data, calls the geocode API to get the new data
// and writes it back to the database one record at a time.
function mainProcessing() {
var rs = gTable.resultSet;
var i = 0;
var keyValue = '';
var remainingTime = 0;
var overallDuration = 0;
// record-level vars
var currentRecord = {
// Current record-related working vars
sourceFieldValues : '',  // all field values as string for logging
lat : 0,
lon : 0,
timeStart : 0,
timeEnd : 0,
duration : 0,
keyValues : [],      // key field values, used in update call
    addressData : null,  // the object returned by the geo API call with properties containing address values we want
};
log('=== Main Processing ===========================');
// iterating a recordset is a one-way ticket, so have to do all processing per record
while (rs.next()) {
//--------------------------------------------------------
// Main process per rs record: call geocode API, write to DB
//---------------------------------------------------------
// Clear previous record
currentRecord.sourceFieldValues = '';
currentRecord.lat = 0;
currentRecord.lon = 0;
currentRecord.timeStart = 0;
currentRecord.timeEnd = 0,
currentRecord.duration = 0,
currentRecord.keyValues = [];
currentRecord.addressData = null;
// Examine the key values, for logging and later on they used in the Update statement
for (i = 0; i < gTable.sourceTableKeyCount; i++) {
keyValue = rs.getString(i + 1);
currentRecord.sourceFieldValues = currentRecord.sourceFieldValues + ' ' + keyValue;
currentRecord.keyValues[i] = keyValue;
}
log('Source record (selected fields): ' + currentRecord.sourceFieldValues);
// Get lat lon from source record
currentRecord.lat = parseFloat(rs.getString(1));
currentRecord.lon = parseFloat(rs.getString(2));
log('Current record lat: ' + currentRecord.lat.toString() + ' lon: ' + currentRecord.lon.toString());
// Timer to ensure we don't swamp the google API and get banned
currentRecord.timeStart = 0;
currentRecord.timeEnd = 0;
currentRecord.timeStart = new Date().getTime();
//--------------------------------------------------------
// Call our library that wraps the Google service
// The addressData object that is returned is guaranteed to contain these properties:
//   country, administrative_area_level_1, _2, _3, postal_code
//--------------------------------------------------------
currentRecord.addressData = GEOAPIGOOGLE.reverseGeocode(currentRecord.lat, currentRecord.lon);
log('Reverse Geocode Results: ' + JSON.stringify(currentRecord.addressData)); 
//--------------------------------------------------------
// Write back to database
//--------------------------------------------------------
// Build the update query (values are inlined; a parameterized variant is kept below for reference)
var queryUpdate = prepareQueryForUpdate(currentRecord.addressData,currentRecord.lat, currentRecord.lon);
log('Record Update Query: ' + queryUpdate);
var connUpdate = $.db.getConnection();
var cstmtUpdate = connUpdate.prepareCall(queryUpdate);
// Parameterized variant kept for reference, e.g.
// UPDATE "GEODATAENRICH"."testtable01" SET "COUNTRY" = ?, "ZIP" = ? WHERE "KEYREF" = ? AND "KEYYEAR" = ?
/*
// parameter replacement for SET
for (i = 0; i < gTable.targetFieldCount; i++) {
var s = currentRecord.addressData[gTable.targetProperties[i]];
cstmtUpdate.setString(i + 1, s);
}
// parameter replacement for WHERE
for (i = 0; i < gTable.sourceTableKeyFieldCount; i++) {
var kfv = currentRecord.keyValues[i]; 
cstmtUpdate.setString(i + gTable.targetFieldCount + 1, kfv);  // note counter increments from key count
}
*/
if (gConfig.simulate === 'notactive') {
cstmtUpdate.execute();
connUpdate.commit();
} else {
log('In simulate mode, no table update done.');
}
connUpdate.close();
//--------------------------------------------------------
// Wait until duration reached before allowing loop to continue
//--------------------------------------------------------
currentRecord.timeEnd = new Date().getTime();
currentRecord.duration = currentRecord.timeEnd - currentRecord.timeStart;
log('Execution Time (ms): ' + currentRecord.duration);
remainingTime = gConfig.minDelayMS - currentRecord.duration;
if (remainingTime > 50) {
log('  sleeping...');
// This blocks CPU, not ideal at all, but easier to implement than a callback in this case
sleep(remainingTime);
overallDuration = ((new Date().getTime()) - currentRecord.timeStart);
log('  overall duration: ' + overallDuration + ' ms');
}
log('');
gRecsProcessed++;
};
}

// This blocks the CPU; not ideal, but works as a simple throttle
function sleep(milliseconds) {
var start = new Date().getTime();
while ((new Date().getTime() - start) < milliseconds) {
// busy wait
}
}

function finish() {
$.response.contentType = "text/plain";
if (gConfig.detailedLogging === 'active') {
$.response.setBody(gLog);
} else {
$.response.setBody('Done. Processed ' + gRecsProcessed + ' records.');
}
};

Again, make sure you modify the package reference in the xsjs code to reflect the package that was created in your system:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides
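For example, if the Google wrapper library sits in the same package as the service, the import near the top of geodataEnrich.xsjs could look roughly like this (the library name geoApiGoogle is only an illustration; use the package and library names you actually created):

var GEOAPIGOOGLE = $.import("ReverseGeocode", "geoApiGoogle");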

Also, update the section below in the code to reflect the schema and table name you created that holds the coordinate information:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides
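As a sketch only (these assignments simply mirror the variable names used in the code above; the actual configuration section in your file may be structured differently), the part to adjust would look something like:

gTable.sourceSchema = 'GEODATAENRICH';   // the schema that holds your table
gTable.sourceTable  = 'testtable01';     // the table with the LATITUDE/LONGITUDE columns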

Once that’s done, activate the xsjs file.

You should now be able to run the URL to call the Google API:

https://your_tenant_host:port/Package_name/javascript_file.xsjs

This is what it looks like in our tenant, omitting the tenant name:

https://our_tenant.com:4390/ReverseGeocode/geodataEnrich.xsjs

Once the URL is executed, we should see the table populated with the necessary information:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

We can now use this table in our HANA modelling, either by augmenting the data with these attributes, or possibly by building name path hierarchies that allow us to drill into the geopolitical data.

STEP 4

We’ll now walk through the process of scheduling this javascript so that the information can be updated on a pre-defined schedule.

We will create an XS Job Schedule file:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Provide the name below and hit finish:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Enter the syntax below:

{
    "description": "Reverse Geocode",
    "action": "ReverseGeocode:geodataEnrich.xsjs::main",
    "schedules": [
       {
          "description": "Reverse Geocode",
          "xscron": "* * * * * 59 59"
       }
    ]
}

Make sure you update the highlighted section below to match the package name and JavaScript file created in your system

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Activate the file.

Note that the scheduling notation is in XSCRON. Here’s the help reference page: https://help.sap.com/viewer/d89d4595fae647eabc14002c0340a999/2.0.02/en-US/6215446213334d9fa96c662c18fb66f7.html

Ensure the user scheduling the job has the correct roles assigned:

sap.hana.xs.admin.roles::HTTPDestAdministrator

sap.hana.xs.admin.roles::JobAdministrator

Below is the explanation of the XSCRON syntax:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

So for our specific XS Job file, we are scheduling it to run every day of every month and year, for every hour, on the 59th minute and 59th second mark. You can change that based on your requirements.
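For reference, the field order in an XSCRON string is year, month, day, day-of-week, hour, minute, second. A couple of alternative examples, following the same reading of the format as above (check the help page linked above for the full token syntax):

* * * * 3 0 0     – every day at 03:00:00
* * 1 * 0 0 0     – on the first day of every month at 00:00:00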

Next, you need to go to the XS Admin page using the URL pattern below:

http://YOUR_HOST.com:8090/sap/hana/xs/admin/

Once logged in, click on the menu and select XS Artifact Administration

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

You should see all of the packages in your system. Click on the arrow next to the package containing your XS Job file

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

You should see some of your artifacts. Click on the XS Job that you created:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

The information below will be displayed. Next hit the configuration tab:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Enter your user and password, click the active checkbox, hit the save job button, and voila. Your job is scheduled. You can then click the View Logs button to see the status of the job.

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

And there you have it.

Develop Simple on HANA Express in AWS Cloud 9


Introduction


This blog post started much simpler as a personal wiki set of notes on how to quickly set up a Cloud 9 IDE in AWS to use HANA. However, as I continued to document, the more I felt like this might be an interesting story/journey of mine to share. Probably the past 2 or 3 months, I’ve been on a personal journey of getting back into application/web development after many years of spending time in Analytics/BI/Design Studio SDKing etc.

I’ve been coding in one language or another for decades, but spending so much time in the BI area has left me playing catch up, as containerization, npm, web frameworks, etc have changed dramatically over the past 6-7 years. Even new methodologies like CI/CD and DevOps all feel foreign and new to me.

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides
And while I’ve kept up just enough to know what Docker is, and I can spell Kubernetes, I’m not quite ready to go all in with a container orchestration platform. I guess because I’m more on the “Dev” side of “DevOps” spectrum, Kubernetes doesn’t (and shouldn’t) excite me that much. Maybe another year/life…

So with my meager financial/platform resources and knowhow as a developer, I’ve set off to try to do some fun lean application development with the following parameters in mind:

◈ Use HANA Express. Since it’s an area of affection for me and increasingly familiar because of my Analytics background/day job. Sure, I could use MySQL/MariaDB/SQLite but where’s the fun in that? That’s been done by zillions of others.

◈ Use AWS Cloud 9 IDE. Now this won’t make anyone in SAP very happy, but I’ve chosen not to use SAP Cloud Platform/WebIDE/Cloud Foundry. I’ve taken dozens of recent tutorials on http://developers.sap.com and they are terrific, however I simply think that it’s quite over-architected for developing simple applications. Perhaps I’ll grow into it in the future. I’ve chosen Cloud9 because:

     ◈ It’s cheap to run and will turn itself off.
     ◈ It’s a hackable EC2 instance that I can ssh into easily
     ◈ The UI is intuitive and straightforward

◈ Containerize. Last year I sat down and took the time to understand the beauty, power and elegance of Docker containers. It truly is a game changer for deployments and for baselining an application to work on, you know, not just “my PC”. But I don’t want to jump off the deep end with Kubernetes. It’s just overkill for my brain. I’ve chosen to use Docker Compose since it doesn’t require me to understand k8s concepts such as Nodes/Workers, Pods, Clusters, etc. To me, a Docker Compose file is a nice toe in the water for someone like me coming from simple Node/NPM development -> simple Docker Containers -> and now Docker Compose. I’m sure I’ll eventually embrace a larger container orchestration framework; I’m working my way from the bottom up, rather than top down, which I know is where a lot of others tend to start.

So how feasible/quick is it? It turns out (for me) to not be too bad at all! So if you are someone who likes HANA but maybe wants to try a non-SAP Cloud IDE, read on and follow this initial Part of a multi-part series on coding in Cloud 9 with HANA Express.

Goal


The goal of this blog is to walk you through the simplest configuration of a Docker Compose stack running a SQLPad container that can interact with the HANA Express container. Further posts will build off of this initial Compose stack, so follow along if you have the time/patience.

Feedback/Questions


Please don’t hesitate to drop me a comment/question/flame if something simply doesn’t work or make sense, or even if you disagree with this approach or my overall values in life. I appreciate any feedback!

Initial Configuration


This section explains initial configuration needed for your Cloud 9 environment. We will install Docker Compose, log into DockerHub with your Docker Account, and resize your Cloud 9 environment’s disk size from the default 10GB to 20GB to make a little more room for HANA Express.

Prerequisites


◈ AWS Cloud 9 Instance (m4.large (8 GiB RAM + 2 vCPU) recommended) Setup is easy in AWS and takes about 2 or 3 minutes to be up and running.
◈ DockerHub Account (Free) Since SAP makes you log into Docker Hub to pull their HANA Express image, you’ll want to create a free Docker Hub account to do so.

Log into Docker and install Docker Compose

1. Log into Cloud 9
2. Open a Terminal window and run:

docker login -u yourusername

Provide your password and proceed with the following commands:

sudo curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
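If you want to make sure the installation worked, you can check the installed version:

docker-compose --version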


Resize your Cloud 9 Environment



The initial Cloud 9 environment is just a tad too small (10GB) so we’ll want to increase the disk size to 20GB. Don’t worry, it’s quick and painless.

1. Create a file called resize.sh in the root directory of your workspace and paste the following script:

#!/bin/bash

# Specify the desired volume size in GiB as a command-line argument. If not specified, default to 20 GiB.
SIZE=${1:-20}
 
# Install the jq command-line JSON processor.
sudo yum -y install jq
 
# Get the ID of the environment host Amazon EC2 instance.
INSTANCEID=$(curl http://169.254.169.254/latest/meta-data//instance-id)
 
# Get the ID of the Amazon EBS volume associated with the instance.
VOLUMEID=$(aws ec2 describe-instances --instance-id $INSTANCEID | jq -r .Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId)
 
# Resize the EBS volume.
aws ec2 modify-volume --volume-id $VOLUMEID --size $SIZE
 
# Wait for the resize to finish.
    while [ "$(aws ec2 describe-volumes-modifications --volume-id $VOLUMEID --filters Name=modification-state,Values="optimizing","completed" | jq '.VolumesModifications | length')" != "1" ]; do
  sleep 1
done
 
# Rewrite the partition table so that the partition takes up all the space that it can.
sudo growpart /dev/xvda 1
 
# Expand the size of the file system.
sudo resize2fs /dev/xvda1

2. From a Terminal window, type:

chmod +x resize.sh
./resize.sh 20

After a few moments and some console spam, you should receive a confirmation that your disk has been resized:

{
    "VolumeModification": {
        "TargetSize": 20,
        "TargetVolumeType": "gp2",
        "ModificationState": "modifying",
        "VolumeId": "vol-0daed2d158c3fafc1",
        "TargetIops": 100,
        "StartTime": "2019-05-16T18:01:42.000Z",
        "Progress": 0,
        "OriginalVolumeType": "gp2",
        "OriginalIops": 100,
        "OriginalSize": 10
    }
}
CHANGED: disk=/dev/xvda partition=1: start=4096 old: size=20967390,end=20971486 new: size=41938910,end=41943006
resize2fs 1.43.5 (04-Aug-2017)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/xvda1 is now 5242363 (4k) blocks long.


Setting up a Stack



This section shows how to set up a simple HANA development stack complete with a running instance of HANA Express.

1. In Cloud 9, create a new folder called hanadev in the root of your workspace.
2. Create a new file called docker-compose.yaml and paste in the following.
version: '2'
 
services:
 
  sqlpad:
    image: sqlpad/sqlpad
    ports:
      - "8899:3000"
       
  hxehost:
    image: store/saplabs/hanaexpress:2.00.036.00.20190223.1
    hostname: hxe
    volumes:
      - hana-express:/hana/mounts
    command: --agree-to-sap-license --master-password ${HXE_MASTER_PASSWORD}
 
volumes:
  hana-express:

Basically, this Docker Compose file defines 2 containers:

i. HANA Express
ii. SQLPad, a simple, lightweight container that has the ability to connect to SAP HANA without having to fiddle with any driver installations. This will serve as our first test to ensure that we can connect to our HANA Express DB.

Note:

Since both SQLPad and HANA Express are in the same default Docker Compose network, we do not need to expose the standard HANA Express Docker ports (39017, etc.), since at this time we will be using SQLPad to interact directly with HANA Express inside the network. The only port we will expose is the SQLPad port (8899).
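Purely as an illustration (not needed for this guide): if you ever wanted an external SQL client to reach HANA Express directly, you would add a ports mapping to the hxehost service along the lines of the snippet below, and open that port in your EC2 security group as well.

  hxehost:
    ports:
      - "39017:39017"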

3. In your Terminal, cd to your hanadev folder.

cd hanadev

4. Set the environment variable HXE_MASTER_PASSWORD to your desired password. Example:

export HXE_MASTER_PASSWORD=HXEHana1

5. Run your Docker Compose stack (first time)

docker-compose up

6. At this point, your Terminal will start pulling Docker images and spamming you with log progress. On an m4.large instance, this process should take about 6-7 minutes. You will know it is complete when you see a tail of the log similar to the below:

hxehost_1  | Duration of start operations ...
hxehost_1  |     (Pre start) Hook /hana/hooks/pre_start/010_license_agreement: 0s
hxehost_1  |     (Pre start) Hook /hana/hooks/pre_start/110_clean_hdbdaemon_status: 0s
hxehost_1  |     (Pre start) Hook /hana/hooks/pre_start/120_clean_pid_files: 0s
hxehost_1  |     (Pre start) Hook /hana/hooks/pre_start/130_update_clean_wdisp: 0s
hxehost_1  |     (Pre start) Hook /hana/hooks/pre_start/310_init_ssfs: 41s
hxehost_1  |     (Pre start) Hook /hana/hooks/pre_start/320_config_cert: 1s
hxehost_1  |     (Pre start) Hook /hana/hooks/pre_start/330_custom_afls: 0s
hxehost_1  |     (Pre start) Prep persistence: 46s
hxehost_1  |     Pre start: 89s
hxehost_1  |     HANA startup: 43s
hxehost_1  |     (Post start) Tenant creation: 268s
hxehost_1  |     (Post start) License import: 0s
hxehost_1  |     (Post start) Hook /hana/hooks/post_start/201_hxe_optimize: 7s
hxehost_1  |     (Post start) Hook /hana/hooks/post_start/203_set_hxe_info: 0s
hxehost_1  |     Post start: 280s
hxehost_1  |     Overall: 413s
hxehost_1  | Ready at: Thu May 16 17:03:31 UTC 2019
hxehost_1  | Startup finished!

Congratulations! You now have a Stack running HANA Express and SQLPad!

7. At this point, we can stop the stack, by pressing Control + C:

Gracefully stopping... (press Ctrl+C again to force)
Stopping hanadev_sqlpad_1  ... done
Stopping hanadev_hxehost_1 ... done

8. Now that our stack has successfully started and stopped, let’s take a quick look at how much space this has taken:

docker system df

This should return a readout similar to below:

TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              7                   2                   4.498GB             1.319GB (29%)
Containers          2                   0                   67.7kB              67.7kB (100%)
Local Volumes       1                   1                   3.247GB             0B (0%)
Build Cache         0                   0                   0B                  0B

As we can see, the base footprint needed to run HANA Express in this container is around 3.25 GB. Pretty small, not too bad!

9. Also, if you want to see how much room is left on your Cloud 9 instance, you can type df -h /:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G   12G  8.0G  60% /

As you can see, we have plenty of play area left (8GB) even with HANA Express container running!

Stopping and Starting


This section explains how to start and stop your stack.

Running the Stack

After successfully setting up the stack, to run it again in the background, you want to run it “detached”. This allows you to close your terminal session out and your stack will keep running (make sure you are in your hanadev directory when doing so):

docker-compose up -d

Stopping the Stack

In order to stop your stack, you will need to issue the following command to stop it (make sure you are in your hanadev directory when doing so):

docker-compose down

Exposing SQLPad to the outside world

Now that we know that we have a working stack, we need to expose port 8899 in order to use the SQLPad application. First, we must see what AWS gave our Cloud 9 IDE as a public IP. There are multiple ways to find this out.

◈ Check your EC2 Dashboard on AWS and look for the External IP
◈ Lazy way without leaving Cloud 9

From a Terminal in Cloud 9, simply type curl http://169.254.169.254/latest/meta-data//public-ipv4 and note the IP address.

The lazy way is nice, since this saves you a trip to the EC2 console as you develop off and on and your IP changes. However, for this first time, we’ll need to make the trip to the EC2 Console and select the EC2 instance assigned to the Cloud 9 IDE.

1. From your EC2 Console, click on the autogenerated Security Group named something similar to aws-cloud9-dev-abc732487cbada-InstanceSecurityGroup-322138957189.

2. You will be taken to the Security Group section in the EC2 Dashboard. Right-click on the selected rule and click Edit inbound rules.

3. Click on Add Rule and populate the following fields:

Property        Value
Port Range      8899
Source          0.0.0.0/0
Description     SQLPad Web Interface

4. We are now ready to test SQLPad! With your IP address in hand, in a browser window navigate to http://1.2.3.4:8899 where 1.2.3.4 is the IP address. Note If you are not greeted with a SQLPad page, make sure your Docker Compose stack is running! (docker-compose up -d from hanadev directory)

5. If you see the SQLPad login page, congratulations! Click on the Sign Up link and register. (The first to register becomes administrator, so hurry! :))

Using SQLPad


Connecting to your HANA Express DB

Now that you have registered to your SQLPad application, you will want to create an initial connection to your HANA Express database.

1. At the top right of your SQLPad page, click on your user name -> and click Connections
2. Click New Connection and populate the following fields:

Property                  Value
Connection Name           HXE
Database Driver           SAP HANA
Host/Server/IP Address    hxehost
Port (e.g. 39015)         39017
Database Username         SYSTEM
Database Password         YourPassword
Tenant                    HXE
3. Click Test and assuming you get a green “Test Successful” message, click Save

Selecting some data

1. In SQLPad, click New Query on the top left.

2. Ensure that the HXE connection is selected in the dropdown, and paste in the following SQL Command and then click Run:

SELECT TOP 10 TABLE_NAME, RECORD_COUNT, TABLE_SIZE FROM M_TABLES ORDER BY TABLE_SIZE DESC;

3. Assuming that the stars have aligned and this guide made sense and you’ve followed all the steps, you should get a query result back at the bottom in a table format.

4. CONGRATULATIONS! Pat yourself on the back, and stay tuned for the next part of this series which will build upon this stack to include a simple NodeJS container running a module that will read some data from HANA Express.

Cleaning up/Starting Over


In the event that you’ve completely botched your Stack and need to start over, fret not. You can remove the mess at any time easily!

To remove your persistent volumes from the stack

From the hanadev directory, type:

docker-compose down -v

Develop Simple on HANA Express in AWS Cloud 9 Part 2 – The Backend App


Overview


In the second part of this series, I’m covering how to create a simple backend NPM module that will run inside of a container and be added to our Docker Compose stack from the first part of the series. The end result will be the ability to issue an HTTP POST call that is routed with the Node express npm module and get back some system metadata about the HANA Express instance. The Express routing will serve as a framework for similar API calls we will write in further parts of the series. Also in further parts, we will write a Vue frontend web app to consume these calls and show the results in a web browser.

Prerequisites


Cloud 9 set up and configured as described in Part 1

Update the docker-compose directory with a .env file

1. Launch Cloud 9 IDE, and click the Gear button next to the root folder of your workspace. Click Show Hidden Files

2. Right-click your /hanadev folder and add a new file called .env. Open the file and put the following contents:

SAP HANA Study Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA AWS Cloud 9

HXE_MASTER_PASSWORD=YourPasswordFromPart1
HANA_SERVER=hxehost:39017
HANA_UID=SYSTEM

This will serve 2 purposes:

i. This will allow you to not have to set your environment variable manually or pass it via command line each time your Cloud 9 IDE spins down and back up.
ii. The additional environment variables (along with HXE_MASTER_PASSWORD) will serve as parameters for our backend app we will be writing next.
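As a quick optional check: Docker Compose reads the .env file automatically from the directory you run it in, so from the hanadev folder you can render the resolved configuration and confirm the variables were substituted:

docker-compose config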

Add one global NPM module to your Cloud 9 IDE


In order to connect to SAP HANA from Node, we need to use @sap/hana-client npm module. However, we cannot simply do a normal npm install @sap/hana-client, because SAP has to make it a little harder and they’ve opted to host it on their own npm server, so type the following 2 commands from a Terminal window:

npm config set @sap:registry https://npm.sap.com
npm i -g @sap/hana-client

Note We are installing this NPM module globally because in my experience, this is a problematic library to place inside your own NPM package when copying entire folders from one environment to another. This is because:

1. The fact that you have to remember to set the registry to SAP’s npm box when doing an npm install on a new development box if you are cloning a project.

2. If you bounce around systems (Windows, to Cloud9 (Linux), to Mac (Darwin)), the architectures are different, which means you do NOT want to inadvertently copy the entire node_modules from your repository, because technically npm install @sap/hana-client does a bunch of compiling at that point and you’ll get a mismatch if you change OS types.

3. I am overlooking some other elegant reason that would get around this issue that someone can clue me in on.

But I digress, just install it globally to humor me

Creating a NPM module for our backend


We will be structuring each subsequent piece of our growing application inside the hanadev directory.

1. In your hanadev folder, create a new folder called hello-world-app.

2. In your hello-world-app folder, create another folder called backend. This will be our location for our backend module.

3. From a Terminal, cd to the hello-world-app/backend directory, and type npm init and take all the default options (just keep pressing Enter) when prompted.

4. Next, we will need to install the following modules for our backend app by typing the following:

npm i express cors body-parser

5. In your file browser in Cloud 9, you should now have a few new folders and files under /hanadev/hello-world-app/backend. Open the package.json file and modify the scripts section to say:

"scripts": {
  "prod": "node server.js"
},

For example, your package.json should now look similar to this:

{
  "name": "backend",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "prod": "node server.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.19.0",
    "cors": "^2.8.5",
    "express": "^4.17.0"
  }
}

6. In the hello-world-app/backend folder, create a server.js file which will serve as our entry point for our backend service routing. Paste in the following code:

const express = require('express');
const app = express();
const bodyParser = require('body-parser');
const hana = require('@sap/hana-client');
 
const port = process.env.PORT || 9999;
 
if(!process.env.HANA_SERVERNODE
    || !process.env.HANA_PWD || !process.env.HANA_UID) {
    console.error(`Set the following environment variables:
    HANA_SERVERNODE\tYour HANA hostname:port
    HANA_UID\tYour HANA User
    HANA_PWD\tYour HANA Password`);
}else{
    let overviewRouter = require('./api/overview');
    app.use('/api/overview', overviewRouter);
    app.use(bodyParser.json());
    app.use(bodyParser.urlencoded({
        extended : true
    }));
     
    app.listen(port, ()=>{
        console.log(`Server started on port ${port}`);
    });
}

So what is this doing? Basically we are including a few common libraries and setting up a simple Express server that has one route waiting for requests on /api/overview.

7. Let’s add one more file. First, create an api folder inside of hello-world-app/backend. Inside that api folder, create a file called overview.js. Paste in the following contents:

const express = require('express');
const router = express.Router();
const cors = require('cors');
const hana = require('@sap/hana-client');
const bodyParser = require('body-parser');
 
router.use(bodyParser.json());
router.use(bodyParser.urlencoded({extended:true}));
router.options('*',cors());
 
router.post('/',cors(),(req,res)=>{
    let conn = hana.createConnection();
    var conn_params = {
        serverNode  : process.env.HANA_SERVERNODE,
        uid         : process.env.HANA_UID,
        pwd         : process.env.HANA_PWD
    };
     
    conn.connect(conn_params, function(err) {
        if (err) {
            conn.disconnect();
            console.log(`Error connecting: ${JSON.stringify(err)}`);
            res.end(err.msg);
        }else{
            conn.exec("SELECT NAME AS KEY, VALUE AS VAL FROM M_SYSTEM_OVERVIEW;", null, function (err, results) {
                conn.disconnect();
                if (err) {
                    res.status(500);
                    res.json(err);
                    console.log(err);
                    res.end();
                }else{
                    res.end(JSON.stringify({
                        backend_information : {
                            server : process.env.HANA_SERVERNODE,
                            user : process.env.HANA_UID
                        },
                        M_SYSTEM_OVERVIEW : results
                    },null,2));
                }
            });
        }
    });
});
 
module.exports = router;

In summary, this code will return some JSON that says what HANA user this code is running as, and some information about the HANA System by querying the M_SYSTEM_OVERVIEW table.

Ok! Coding is done. Now how do we run this?

Make our new Docker Image for our App


Since we (ok I) want to containerize this application into a self-contained stack, we cannot simply run npm run prod. This is because in our development environment, we’ve put HANA Express in a container that is only aware of its own Docker Compose network. This is the nature and beauty of containerization so what we need to do is add our backend application to our stack. So let’s do this now.

1. Inside of our hello-world-app folder, create a file called Dockerfile. Paste in the following contents:

# Docker Image containing SAP HANA npm package
FROM node:8-slim
 
LABEL Maintainer="Your Name <your.name@example.com>"
 
# Add SAP HANA Client NPM package from SAP's npm repository
RUN npm config set @sap:registry https://npm.sap.com && npm i -g @sap/hana-client
 
# Set the global NPM path Environment variable
ENV NODE_PATH /usr/local/lib/node_modules
COPY /hello-world-app /app
WORKDIR /app
CMD npm run prod

Basically what this Dockerfile is doing is taking Node’s node:8-slim Docker image and adding a few small things to it, making it our own new Docker image that we will add to our Stack in a moment. The additions include:

◈ Configuring the SAP NPM Repository reference and installing @sap/hana-client also globally (as we did in our Cloud 9 development box.)

◈ Since we are using a global module, we are setting the NODE_PATH environment variable so that Node knows where the global npm packages are.

◈ Copy the contents of our hello-world-app files over to the image under /app

◈ Change the container’s starting work directory to /app and set the starting container command to npm run prod.
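If you want to sanity-check the image on its own before wiring it into the stack (optional; the compose build in the next section does the same thing), you could build it manually from the hanadev directory. The tag name here is just an example:

docker build -t hello-world-app -f ./hello-world-app/Dockerfile .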


Add our Docker Image to our Stack


Now that we have our Docker Image defined for our container, we need to add it to our Docker Compose stack so that it can communicate with the HANA Express database.

1. Open the docker-compose.yaml file under /hanadev directory and update the contents to be this:

version: '2'
     
services:
     
  hello-world-app:
    build:
      context: .
      dockerfile: ./hello-world-app/Dockerfile
    ports:
      - "3333:9999"
    environment:
      - HANA_UID=${HANA_UID}
      - HANA_PWD=${HXE_MASTER_PASSWORD}
      - HANA_SERVERNODE=${HANA_SERVER}
 
  sqlpad:
    image: sqlpad/sqlpad
    ports:
      - "8899:3000"
           
  hxehost:
    image: store/saplabs/hanaexpress:2.00.036.00.20190223.1
    hostname: hxe
    volumes:
      - hana-express:/hana/mounts
    command: --agree-to-sap-license --master-password ${HXE_MASTER_PASSWORD}
     
volumes:
  hana-express:

Basically what we’ve added is a 3rd service/container called hello-world-app. Since this will be based on a docker image build that we’ve not necessarily published (or even ever done a docker build on), we are defining it as its own build, rather than with an image. You can see in the yaml that we are pointing the build context to our current directory (hanadev) and specifying the Dockerfile to the subdirectory (hello-world-app) where our Dockerfile and source code are located.

This definition basically tells Docker Compose that we want to build our image for this Stack. Once our image is less prone to changes, we can always redefine this yaml to point to a finalized Docker Image with a tag name, etc at a later time.

Running and Testing


1. To run our updated Docker Compose Stack, we can run the following command from the hanadev directory:

docker-compose build && docker-compose up

Note In theory, you should be able to just type docker compose up, however I experienced fits where Docker Compose would not always automatically rebuild the image as I was incrementally making changes to my Dockerfile and source files. Basically I follow the rule of thumb where if I know I’ve changed the source and/or the app’s Dockerfile (image), then I just do docker-compose build first.

What you should now see after a minute or 2 of console spam is something like this at the end (as the hxehost container will be the last to spin up):

hxehost_1          |     (Pre start) Hook /hana/hooks/pre_start/320_config_cert: 0s
hxehost_1          |     (Pre start) Hook /hana/hooks/pre_start/330_custom_afls: 0s
hxehost_1          |     Pre start: 0s
hxehost_1          |     HANA startup: 62s
hxehost_1          |     (Post start) Hook /hana/hooks/post_start/201_hxe_optimize: 0s
hxehost_1          |     (Post start) Hook /hana/hooks/post_start/203_set_hxe_info: 0s
hxehost_1          |     Post start: 0s
hxehost_1          |     Overall: 64s
hxehost_1          | Ready at: Fri May 17 18:34:09 UTC 2019
hxehost_1          | Startup finished!

2. Leaving this Terminal window open, open a second Terminal window in Cloud 9 and type the following command:

curl -X POST http://localhost:3333/api/overview

What you should get back is some JSON from our backend app:

{
  "backend_information": {
      "server": "hxehost:39017",
      "user": "SYSTEM"
  },
  "M_SYSTEM_OVERVIEW": [
    {
      "KEY": "Instance ID",
      "VAL": "HXE"
    },
    {
      "KEY": "Instance Number",
      "VAL": "90"
    },
    {
      "KEY": "Distributed",
      "VAL": "No"
    },
    {
      "KEY": "Version",
      "VAL": "2.00.036.00.1547699771 (fa/hana2sp03)"
    },
    {
      "KEY": "Platform",
      "VAL": "SUSE Linux Enterprise Server 12 SP2"
    },
    {
      "KEY": "All Started",
      "VAL": "No"
    },
    {
      "KEY": "Min Start Time",
      "VAL": "2019-05-17 18:24:57.583"
    },
    {
      "KEY": "Max Start Time",
      "VAL": "2019-05-17 18:24:57.583"
    },
    {
      "KEY": "Memory",
      "VAL": "Physical 7.79 GB, Swap 0.48 GB, Used 1.05"
    },
    {
      "KEY": "CPU",
      "VAL": "Available 2, Used -0.02"
    },
    {
      "KEY": "Data",
      "VAL": "Size 19.5 GB, Used 14.2 GB, Free 27 %"
    },
    {
      "KEY": "Log",
      "VAL": "Size 19.5 GB, Used 14.2 GB, Free 27 %"
    },
    {
      "KEY": "Trace",
      "VAL": "Size 19.5 GB, Used 14.2 GB, Free 27 %"
    },
    {
      "KEY": "Alerts",
      "VAL": "2 High, "
    }
  ]
}

If you made it this far, congratulations! You’ve created a containerized backend service application that we’ll build out and use to feed a prettier web front end. And, since it’s containerized, you’ll be able to deploy easily on anything running Docker! (Well, besides a Raspberry Pi, dang ARM architecture….)

Data Tiering Options in SAP HANA Webcast Recap

This was another great SAP User Group webcast this week.

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Data tiering introduced to help manage data growth in SAP HANA

Data is rising, increasing costs, additional hardware, additional license volume

Data tiering manages data in cost efficient way

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Native storage extension, a new feature

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Mission critical data is in Hot (main memory tier)

Data not as critical; certain aging aspect, older data becomes less important; not accessed as often, performance not as critical as hot data

Cold data – old, voluminous (data lakes)

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

DRAM – main memory for hot data, need performance there for most important data

Persistent memory – like DRAM, available in larger sizes

Warm tier – 3 technologies – dynamic tiering, extension node has been around for a while

Native storage extension is a new feature introduced in April

Cold store – nearline storage for SAP BW been around for a while, before HANA

Hot Store, Persistent Memory


SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Developed with Intel

Persistent memory – replaces some DRAM – compared to DRAM it does not lose its data; if you shut down the database and restart it, persistent memory doesn’t take as long to reload data compared to DRAM

Reduce total startup – from 50 minutes to 4 minutes

Increased memory capacity

DRAM – 128GB sizes, some 256 – prices increase; most common is 64GB

Persistent memory is available in larger sizes – 128GB, 256 GB, 512GB; cheaper than DRAM (per TB)

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Also known as non-volatile memory

Two operating modes – memory mode, application does not need to be changed; still need DRAM on server, DRAM acts as cache, address data volume of persistent memory

SAP HANA – app direct mode, take advantage of technology, application will need to be changed

Application will have DRAM available

Direct access enabled file system

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Column store main in persistent memory

Column store delta stored in DRAM

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Size your application to handle workload

This is an example

Different ratios possible

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Impacts on data tiering

Separates storage tier

Can keep more data in HOT with persistent memory

Not a data tiering solution but impacts it

Warm Storage, Native Storage Extension


SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Simpler landscape, integrated into server of SAP HANA

SAP HANA Cloud Services – NSE is a strategic role

Plan to make available for any SAP HANA application

Complement to other data tiering solution

Strategic solution is NSE

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Column loadable is default

Declare partitions to page loadable

Accessing data in NSE will load as little as possible

Buffer cache – configurable size, to buffer to pages in main memory

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Define a whole table as page loadable

Can do on partition level (with data aging)
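As a rough illustration of the SQL involved (a sketch only; the table name is a placeholder and the exact load-unit syntax should be checked against the NSE documentation for your release):

ALTER TABLE "MYSCHEMA"."SALES" PAGE LOADABLE CASCADE;    -- move the whole table to the warm tier
ALTER TABLE "MYSCHEMA"."SALES" COLUMN LOADABLE CASCADE;  -- move it back to the hot tier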

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Another picture

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Sizing and limitations

Soft limits, hope to lift limits soon, hope to before SPS05

Size of buffer cache recommendation is shown above, according to SAP’s experience

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Tools for NSE include HANA Cockpit

Recommendation engine – can activate in HANA database to monitor query patterns executed to help you decide what is warm and partition; first step to automate data tiering in SAP HANA to make things easier

DLM is planned later this year

Warm Store new feature


SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications


SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Timestamp support

SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Getting started with warm store

NSE is limited to 10TB; plan to extend, plan to come close to what dynamic tiering offers

Which options to use


SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Data Lifecycle Management


SAP HANA, SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Source: SAP

Develop Simple on HANA Express in AWS Cloud 9 Part 3 – The Frontend App


Overview


In the third part of this series, I will cover creating a basic Vue frontend to consume our backend service that we created in Part 2.

Prerequisites


Cloud 9 set up and configured as described in Part 1
Backend created as described in Part 2

Update the .env file

In part 2, we made use of the SYSTEM user to perform a quick test of our backend app. In a real world use case, we of course do not want to do this. So we will designate some new environment variables to specify a new application user and password.

HXE_MASTER_PASSWORD=HXEHana1
HANA_APP_UID=APPUSER
HANA_APP_PWD=SomeSecretPassword
HANA_SERVER=hxehost:39017
HANA_APP_BACKEND=/backend

Update the docker-compose.yaml file


Open the docker-compose.yaml file under /hanadev and update the contents as follows:

version: '2'
 
services:
 
  hello-world-app:
    build:
      context: .
      dockerfile: ./hello-world-app/Dockerfile
    ports:
      - "3333:9999"
    environment:
      - HANA_UID=${HANA_APP_UID}
      - HANA_PWD=${HANA_APP_PWD}
      - HANA_SERVERNODE=${HANA_SERVER}

  sqlpad:
    image: sqlpad/sqlpad
    volumes:
      - sqlpad:/var/lib/sqlpad
    ports:
      - "8899:3000"
       
  hxehost:
    image: store/saplabs/hanaexpress:2.00.036.00.20190223.1
    hostname: hxe
    volumes:
      - hana-express:/hana/mounts
    command: --agree-to-sap-license --master-password ${HXE_MASTER_PASSWORD}
 
volumes:
  hana-express:
  sqlpad:

Basically, we’ve only changed the hello-world-app environment variable mapping of HANA_UID and HANA_PWD to point to our new HANA_APP_UID and HANA_APP_PWD variables in our .env file.

Create the Application User


1. Let’s briefly start up our HANA Express DB and SQLPad by typing the following from the hanadev directory in a terminal window.

docker-compose up

2. Open up SQLPad from http://[cloud 9 external IP]:8899 and log in with the user you created. Refer to Part 1 if you need a reminder on how to log in.

3. Create a new SQL statement as follows and click Run

CREATE USER APPUSER PASSWORD SomeSecretPassword NO FORCE_FIRST_PASSWORD_CHANGE;

4. That’s it. We can now stop our Stack by pressing Control + C in the terminal window that you typed docker-compose up.

5. Let’s start back up our stack one last time and make sure our backend app is now running as our new application user. Run docker-compose up -d and wait about 60 seconds for HANA Express to start up.

6. Next, type curl -X POST http://localhost:3333/api/overview | grep user. You should get one line of the JSON output back, similar to below:

 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1202  100  1202    0     0  52260      0 --:--:-- --:--:-- --:--:-- 52260
"user": "APPUSER"

Congratulations! You’ve successfully modified the backend app to use an application user.

Creating a Vue Project


As I mentioned in Part 1, it’s been a long time since playing in web frameworks. While this can be a bit of a divisive Holy War topic, for me, I’ve gotten particularly fond of Vue. If you are more of an Angular or React person, feel free to replace these steps with your favorite frontend tool and read no further. If you’d like to create a super simple Vue app, read on.

1. From a terminal window in Cloud 9, type npm i -g @vue/cli. This will install Vue and the Vue CLI.

2. Next, since I’m not a big CLI guy, let’s start up the GUI for the CLI by typing vue ui -p 8080 from the hanadev/hello-world-app directory (important.) Once you see the status in your terminal below, proceed to the next step.

ec2-user:~/environment/hanadev (Part3) $ vue ui -p 8080
  Starting GUI...
  Ready on http://localhost:8080

3. In the Cloud 9 toolbar, click Preview -> Preview Running Application. A browser window inside your Cloud 9 IDE should open:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

4. Click the Create button and then click Create a new project here.

5. For Project Folder, name it frontend. Click Next.

6. For preset, leave Default preset selected and click Create Project. Vue CLI will begin to generate the boilerplate project files under the hanadev/hello-world-app/frontend folder. After a few moments, you should arrive at a screen saying “Welcome to your new project!”.

7. On the left edge of the page, find the puzzle piece Plugins icon and click it. Then, click the + Add plugin button at the top-left.

8. We are going to install 2 plugins. A routing plugin and a UI plugin. For the router plugin, there should be a Add vue-router button placed prominently at the top of the plugins page. Click it, and then click Continue. Then don’t forget to click Finish installation

9. After vue-router finishes installing, search for vue-cli-plugin-vuetify. Click on the matching search result and click on Install vue-cli-plugin-vuetify.

10. After the installation is complete, click on the Tasks icon on the left edge of the page. (Clipboard icon.)

11. This page serves as a launching point to run vue-cli tasks that you can either opt to use this page to run, or if you are more of a CLI person, you can run from a terminal if you so wish. For now though, let’s click on serve and then Run task.

12. Once the green checkbox appears, we know that our Vue app is running. Click on the Output button to monitor the status of our serve task. You should see something similar to the following:
        
  App running at:
    Local:   http://localhost:8081/ 
    Network: http://172.16.0.99:8081/
    
Note that the development build is not optimized.
To create a production build, run npm run build.

13. Since we are running in the Cloud 9 IDE, the IP address and hostname reported back are not accessible from your browser. You will want to substitute your Cloud 9 external IP here. You also will need to expose port 80xx (whichever one is mentioned in the output) in our Cloud 9 EC2 instance in order to access this application more easily. Refer to the steps in Part 1 if you do not know how to do this. I’d recommend opening up ports 8080 through 8085, as sometimes we may be running more than one app at once and it will save you a trip to the EC2 Dashboard later on.

14. After noting the 80xx port, navigate to http://[your cloud 9 external ip]:80xx.

15. If you get back a Welcome to Your Vue.js App congratulations! We are ready to start coding.

Modifying your Vue Project


Now that we have created the boilerplate Vue Project, we are ready to make some changes to the application. While I am not a Vue expert, and for sake of brevity, I won’t be explaining everything that’s going on. There are many, many great Vue tutorials online that I’d highly suggest you look for if you are interested in Vue.

1. In your Cloud 9 IDE, locate the /hanadev/hello-world-app/frontend folder. This is where all your frontend code has now been generated. Modify/Create the following files.

2. Create Environment Variable files

i. /hello-world-app/frontend/.env.production: This file will be used for our final production build. The VUE_APP_HANA_APP_BACKEND variable tells the frontend app where to issue backend requests to. For production, we’ll handle this with Nginx a bit later on.

VUE_APP_HANA_APP_BACKEND=/backend

ii. /hello-world-app/frontend/.env.development.local: NOTE: Be sure to add your Cloud 9 External IP address in the placeholder below. For development, we’ll want to hit our Docker stack running in Cloud 9.

VUE_APP_HANA_APP_BACKEND=http://[your cloud 9 external ip]:3333

3. Modify /hello-world-app/frontend/src/main.js

import Vue from 'vue'
import './plugins/vuetify'
import App from './App.vue'
    
import router from './router'
    
if(!process.env.VUE_APP_HANA_APP_BACKEND){
  alert("VUE_APP_HANA_APP_BACKEND environment variable not set.  Please set your environment and restart this frontend server.")
}else{
  Vue.config.productionTip = false
  new Vue({
    router,
    render: h => h(App)
  }).$mount('#app')
}

4. Modify/Create /hello-world-app/frontend/src/router.js

import Vue from 'vue'
import Router from 'vue-router'
import Overview from './views/Overview.vue'
    
Vue.use(Router)
    
export default new Router({
  routes: [
    {
      path: '/',
      name: 'Overview',
      component: Overview
    }
  ]
})

5. Modify /hello-world-app/frontend/src/App.vue

<template>
  <v-app dark>
    <AppNav :systemInformation="results.backend_information"/>
    <v-content transition="slide-x-transition">
      <router-view />
    </v-content>
  </v-app>
</template>
<script>
  import AppNav from '@/AppNav';
  import axios from 'axios';
  export default {
    name: 'App',
    components: {
        AppNav
    },
    data () {
      return {
        results: {
          backend_information : {
            user : 'dummy'
          }
        }
      };
    },
    methods : {
      getData (){
        axios.post(process.env.VUE_APP_HANA_APP_BACKEND + '/api/overview/',{ }).then(res=>{
          if(res.data){
            this.results = res.data;
            this.systemInformation = res.data.backend_information;
            // console.log(this.results);
          }else{
            alert(JSON.stringify(res));
            this.results = {};
          }
        }, err=> {
          alert(JSON.stringify(err.response.data));
        }).catch(err=>{
          alert(`An error occurred communicating with the backend.
          ${err}`);
        })
      },
    },
    mounted(){
        this.getData();
    }
};
</script>

6. Create /hello-world-app/frontend/src/AppNav.vue

<template>
    <v-toolbar app color="blue darken-4" dark>
        <v-toolbar-title>{{appTitle}}</v-toolbar-title>
        <template v-for="(item,index) in items">
            <v-btn v-if="typeof item.link === 'undefined'" :key=index flat :to="'/' + item.title">{{item.title}}</v-btn>
            <v-btn v-else :key=index flat :to="'/' + item.link">{{item.title}}</v-btn>
        </template>
        <v-spacer />
        <v-chip color="primary" label outline text-color="white">{{systemInformation.user}}@{{systemInformation.server}}</v-chip>
    </v-toolbar>
</template>
    
<script>
export default {
    name: 'AppNav',
    props : {
        systemInformation : Object
    },
    data(){
        return{
            appTitle: 'HANA Sandbox',
            drawer: false,
            items: [
                { title: 'Overview',link: '' }
            ]
        };
    }
};
</script>
    
<style scoped>
</style>

7. Delete any files (About.vue, Home.vue etc.) under /hello-world-app/frontend/src/views

8. Create /hello-world-app/frontend/src/views/Overview.vue

<template>
  <div>
    <v-list two-line>
      <template v-for="(item,index) in results.M_SYSTEM_OVERVIEW">
        <v-list-tile :key="index">
          <v-list-tile-content>
            <v-list-tile-title v-html="item.KEY"></v-list-tile-title>
            <v-list-tile-sub-title v-html="item.VAL"></v-list-tile-sub-title>
          </v-list-tile-content>
        </v-list-tile>
      </template>
    </v-list>
  </div>
</template>
    
<script>
import axios from 'axios';
export default {
  name: 'Overview',
  data: () => ({
    results: []
  }),
  components: {},
  methods: {
    getData(){
      axios.post(process.env.VUE_APP_HANA_APP_BACKEND + '/api/overview/',{ }).then(res=>{
        if(res.data){
          this.results = res.data;
        }else{
          this.results = {};
        }
      }, err=> {
        alert(JSON.stringify(err.response.data));
      }).catch(err=>{
        alert(`An error occurred communicating with the backend.
        ${err}`);
      })
    }
  },
  mounted(){
    this.getData();
  }
}
</script>

9. In the Overview.vue file, we are making use of the axios npm module, so we will want to install it. To do so, open a terminal window in Cloud 9 and cd to your frontend folder. Type npm i axios to install it.

Running our Frontend App in Developer Mode


If you are still running the vue-cli UI, you can now terminate it by pressing Control + C. We will now demonstrate how to run the same serve task via command line from the terminal.

1. Make one more trip over to your EC2 Console and expose port 3333. This is our backend port that we’ll need our browser to hit in order to get back data from our HANA Container running in our Stack while running in Developer mode. For “production” use cases, we will not need this port.

2. In a terminal window: if your Docker Compose stack is not already running, start it now:

cd /hanadev
docker-compose up -d

Next, let’s start up our frontend app in developer mode.

cd /hanadev/hello-world-app/frontend
npm run serve

3. After a few moments, you should receive feedback in your terminal similar to the following:

 DONE  Compiled successfully in 20991ms                                    18:59:50

  App running at:

◈ Local: http://localhost:8080/
◈ Network: http://172.16.0.99:8080/

Note that the development build is not optimized. To create a production build, run npm run build.

4. Like earlier, disregard the internal IP, and replace it with your Cloud 9 External IP address and navigate to http://[your cloud 9 external ip]:80xx where 80xx is the port mentioned above.

5. If all has gone well, you should receive a page titled “HANA Sandbox” with your App User shown at the top right, and your HANA Express system information shown below. If so, congratulations! You’ve created a frontend app that is consuming your Docker Compose stack’s backend service!

Running in this manner allows us to make code changes to our frontend application live in Cloud 9 without deploying over and over again, yet at the same time attaching to our Docker Compose stack’s backend app and HANA Express DB. Pretty cool!

After celebrating, terminate the development mode task by pressing Control + C.

Wrapping it up in our Container


For Part 3, we’ll consider this a “Milestone” and use this as an opportunity to bundle our frontend application changes into our docker-compose.yaml file before we call it a day. We’ll need to update a few files to incorporate the frontend app.

1. Open your Dockerfile located under /hanadev/hello-world-app and update it with the following:

# Docker Image containing SAP HANA npm package
FROM node:8-slim
    
LABEL Maintainer="Your Name <your.name@example.com>"
    
# Install nginx to handle backend and frontend apps
RUN apt-get update && apt-get install -y nginx
    
# Add SAP HANA Client NPM package from SAP's npm repository
RUN npm config set @sap:registry https://npm.sap.com && npm i -g @sap/hana-client
    
# Set the global NPM path Environment variable
ENV NODE_PATH /usr/local/lib/node_modules
    
# Configure nginx and startup
COPY ./hello-world-app/server.conf /etc/nginx/conf.d/default.conf
# Copy backend Node JS module
COPY /hello-world-app/backend /app/backend
# Copy production build of Vue frontend app
COPY /hello-world-app/frontend/dist /app/frontend
# Copy startup.sh script
COPY ./hello-world-app/startup.sh /app/startup.sh
    
WORKDIR /app
CMD ./startup.sh

We are adding 3 new main items here:

i. Our frontend app’s production dist folder will be copied over to our Docker image’s /app/frontend folder. The production dist folder is an optimized and minified version of our frontend Vue app.

ii. Install Nginx and copy some configuration files to do some reverse proxy magic so that we can just have one single port exposed from our container and to abstract the underlying architecture away.

iii. Copy over a startup.sh script since we’ll be launching more than one process for the CMD line.

2. Create startup.sh in /hanadev/hello-world-app

#!/bin/sh
echo "Starting Servers..."
mkdir -p /run/nginx
rm /etc/nginx/sites-enabled/default
echo "Starting nginx..."
nginx
cd /app/backend
echo "Starting backend..."
npm run prod

3. Change permissions to executable for startup.sh from a terminal window
   
   cd /hanadev/hello-world-app
   chmod +x startup.sh

4. Create server.conf in /hanadev/hello-world-app

server {
    listen      80 default_server;
    # document root #
    root        /app/frontend/;

    # Route requests to /backend/ to Backend npm module
    location /backend/ {
        proxy_pass http://localhost:9999/;
    }
}
5. Build our Vue frontend app to generate the dist folder.
    
cd /hanadev/hello-world-app/frontend
npm run build

You should get some feedback similar to this:

 > frontend@0.1.0 build /home/ec2-user/environment/hanadev/hello-world-app/frontend
 > vue-cli-service build
     
     
 ⠏  Building for production...
     
  WARNING  Compiled with 2 warnings                                                                               19:16:17
     
  warning  
     
 entrypoint size limit: The following entrypoint(s) combined asset size exceeds the recommended limit (244      KiB). This can impact web performance.
 Entrypoints:
   app (303 KiB)
       css/chunk-vendors.bc527eeb.css
       js/chunk-vendors.2793d0c4.js
       js/app.02b450ce.js
     
  warning  
     
 webpack performance recommendations: 
 You can limit the size of your bundles by using import() or require.ensure to lazy load some parts of your      application.
 For more info visit https://webpack.js.org/guides/code-splitting/
     
   File                                   Size              Gzipped
     
   dist/js/chunk-vendors.2793d0c4.js      180.43 KiB        60.10 KiB
   dist/js/app.02b450ce.js                4.87 KiB          2.03 KiB
   dist/css/chunk-vendors.bc527eeb.css    118.13 KiB        15.45 KiB
     
   Images and other types of assets omitted.
     
  DONE  Build complete. The dist directory is ready to be deployed.
  INFO  Check out deployment instructions at https://cli.vuejs.org/guide/deployment.html

6. Update our docker-compose.yaml file under /hanadev:

version: '2'
        
services:
        
  hello-world-app:
    build: 
      context: .
      dockerfile: ./hello-world-app/Dockerfile
    ports:
      # - "3333:9999" No longer needed we are using Nginx
      # Reroute Nginx listening on Port 80 over to 8080 which we've already exposed in EC2
      - "8080:80"
    environment:
      - HANA_UID=${HANA_APP_UID}
      - HANA_PWD=${HANA_APP_PWD}
      - HANA_SERVERNODE=${HANA_SERVER}
    
  sqlpad:
    image: sqlpad/sqlpad
    volumes:
      - sqlpad:/var/lib/sqlpad
    ports:
      - "8899:3000"
              
  hxehost:
    image: store/saplabs/hanaexpress:2.00.036.00.20190223.1
    hostname: hxe
    volumes:
      - hana-express:/hana/mounts
    command: --agree-to-sap-license --master-password ${HXE_MASTER_PASSWORD}
        
volumes:
  hana-express:
  sqlpad:

◈ Basically, all we’ve done is remove the backend port (3333) from being exposed, since the Nginx instance inside our Docker container reverse-proxies calls to the npm task running there. For development you may wish to keep that port exposed so you can develop live without hosting inside a container, or better yet, keep a separate Docker Compose stack for development and treat this one as “production”.

◈ Secondly, we’re mapping Nginx, which listens on port 80, to port 8080, since we’ve already exposed 8080 in our EC2 dashboard; that saves us a trip and another exposed port.

7. Rebuild our docker-compose stack:

cd /hanadev
docker-compose build

Note: The first time you run the build it will take longer, as we’ve added a few new Docker image layers to account for the Nginx addition. After two minutes or so, you should get a confirmation that the build has finished successfully:
    
...
    
Step 7/11 : COPY /hello-world-app/backend /app/backend
 ---> c5c3508dc2a2
Step 8/11 : COPY /hello-world-app/frontend/dist /app/frontend
 ---> 6ee34d81d8d5
Step 9/11 : COPY ./hello-world-app/startup.sh /app/startup.sh
 ---> 60f38e70da77
Step 10/11 : WORKDIR /app
 ---> Running in e54ee8b3c9c6
Removing intermediate container e54ee8b3c9c6
 ---> df3d269e1dff
Step 11/11 : CMD ./startup.sh
 ---> Running in 594e5016ca51
Removing intermediate container 594e5016ca51
 ---> 4abedcaed348
Successfully built 4abedcaed348
Successfully tagged hanadev_hello-world-app:latest

Moment of Truth


We are now ready to test our new Docker Compose stack.

1. From /hanadev, type:

docker-compose up

2. After about 60 seconds, open a browser tab and visit http://[your cloud 9 ide external ip]:8080. If you see the HANA Sandbox page with your HANA Express system overview, congratulations! You’ve successfully containerized your frontend and backend app!

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Understanding HANA has never been this easy

Though a lot of material can be found online about HANA, its features, advantages and so on, this blog is for newbies to HANA and explains HANA in very simple words.

Before we embark on our journey, let’s define the three most confusing terms, which beginners often think are the same.

   HANA                  Suite on HANA                   S/4 HANA

HANA is a database and S/4 HANA is the application that uses this database.

ECC 6.0 and lower versions mainly ran on an Oracle database, which can be seen from System -> Status -> Database System.

Moving the database from Oracle or any other DB to HANA DB is Suite on HANA.

SAP S/4 HANA Full form :
SAP Business Suite 4 (generation) for SAP High-performance ANalytic Appliance

Now, let’s start with the topic “HANA”.

HANA is much more than a database; it is also a data processing platform. The task of the application layer is then mainly to display what the user has asked for.

Note: Although HANA does all the calculations, the application layer is still used for data processing and calculations in complex scenarios.

Features of HANA :


Data Source Independent: 

HANA enables customers to analyze huge volumes of data in real time; data aggregation can be done irrespective of the data source.

HYBRID In-Memory Database:

In-memory, because it stores data directly in main memory and Hybrid, because it stores data as Row store and Column store both.

Row Store: Data is stored as a sequence of records i.e. all the fields of one row are saved in contiguous memory locations.

Column Store: Data is stored as a sequence of fields i.e. all values in a column are stored in contiguous memory locations.
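As a small, hedged illustration (table and column names are made up), both storage types can be chosen explicitly when creating a table:

-- Column store: default choice for analytics, column values stored contiguously
CREATE COLUMN TABLE SALES_CS (
    ID      INTEGER PRIMARY KEY,
    REGION  NVARCHAR(10),
    AMOUNT  DECIMAL(15,2)
);

-- Row store: suited for frequent single-record (OLTP-style) access
CREATE ROW TABLE SALES_RS (
    ID      INTEGER PRIMARY KEY,
    REGION  NVARCHAR(10),
    AMOUNT  DECIMAL(15,2)
);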

Parallel processing and Partitioning of data:

Parallel processing means that at least two microprocessors handle parts of an overall task.

e.g. In the snap below you can see that, when searching for a specific record, core 1 is dedicated to column 1 and core 2 to column 2; with partitioning, two cores are dedicated to each half of column 3 to speed up data processing.

SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Guides
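A minimal sketch of the partitioning idea in SQL (table and column names are made up); hash partitioning spreads the rows over several partitions that can be scanned in parallel:

-- Rows are distributed over 4 partitions by ORDER_ID
CREATE COLUMN TABLE SALES_PART (
    ORDER_ID  INTEGER,
    REGION    NVARCHAR(10),
    AMOUNT    DECIMAL(15,2)
)
PARTITION BY HASH (ORDER_ID) PARTITIONS 4;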

Multi- tenancy :

This is an architecture which allows multiple tenants/customers to share the same computing resource, here the HANA DB. Each tenant’s data is isolated and remains invisible to other tenants.

SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Guides

◈ System tables and other data required for proper functioning are stored in the System DB.
◈ Beneficial when using the cloud.
◈ Cost effective: we do not pay for the whole DB but only for the occupied memory.
◈ Virtually, we are creating a database inside the HANA system.
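As a hedged sketch (tenant name and password are made up), a new tenant database is created from the SystemDB of a multitenant system, and the existing tenants can be listed:

-- Run in the SystemDB of a multitenant (MDC) system
CREATE DATABASE DEMO_TENANT SYSTEM USER PASSWORD Abcd1234;

-- List the tenant databases and their status
SELECT DATABASE_NAME, ACTIVE_STATUS FROM SYS.M_DATABASES;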

Dynamic Tiering: 


It is the classification of data into temperatures (hot, warm, cold) based on its importance and performance expectations.

Hot data: Data is accessed frequently thus stored in main memory.

Warm data: Data is not accessed frequently and stored in disk based persistent storage.

Cold data: Data is rarely accessed or inactive and is stored outside the SAP HANA database.

SAP HANA Certifications, SAP HANA Study Materials, SAP HANA Guides
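If the dynamic tiering option is installed and an extended store is configured, warm data can be placed in an extended table; a minimal sketch (table name is made up):

-- Extended (disk-based) table for warm data; requires the dynamic tiering option
CREATE TABLE WARM_SALES_HISTORY (
    ORDER_ID  INTEGER,
    AMOUNT    DECIMAL(15,2)
) USING EXTENDED STORAGE;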

Supported Data type :


Text data :

Data from Social media feeds, help desk tickets, logs etc.

e.g. While commenting on social media platforms we use SMS language and short forms; such data falls under text data.

Spatial data:

Data that relates to locality, maps etc. e.g. GPS map based data.

Graph data: 

Data related to highly networked entities like social networks etc. These are real-time data that change with high frequency.

e.g. followers count on social media, likes on a particular page/post, stock market indices.
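As a small illustration of the spatial support mentioned above (table name and coordinates are made up), a geolocation can be stored and read back as text:

-- Spatial column using SRID 4326 (WGS 84, i.e. GPS coordinates)
CREATE COLUMN TABLE STORE_LOCATIONS (
    STORE_ID  INTEGER,
    LOCATION  ST_POINT(4326)
);

INSERT INTO STORE_LOCATIONS VALUES (1, NEW ST_Point('POINT (8.642057 49.293432)', 4326));

-- Read the stored point back in well-known text format
SELECT STORE_ID, LOCATION.ST_AsText() AS LOCATION_WKT FROM STORE_LOCATIONS;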

Generate certificate and add to SAP HANA Certificate Store


Introduction


This blog post explains how to generate certificates, add them to the HANA certificate store, and configure a certificate collection while setting up Principal Propagation to SAP HANA XS on SCP.

I am highlighting a section where we are unable to find the certificates after configuring the trust in the SAML Identity Provider.

To verify the list of certificates installed use the following SQL Command.

SELECT * FROM SYS.CERTIFICATES

If the result is empty, follow the steps below to generate the certificates.

Login to the HANA Admin Cockpit with the SYSTEM user.
Make sure the SYSTEM user has all admin system privileges (like TENANT ADMIN, CERTIFICATE ADMIN etc.).
After login, navigate to the SAP HANA Certificate Management section. It should look similar to below. If the “Configure Certificate Collections” count is 0, it means there is no certificate in it.


SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Generate Certificates


The generated certificate will later be imported into the certificate store. To do so, follow the steps below.

Step 1 – Edit the metadata.xml in notepad++ and the file should look like the below.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Step 2 – Copy the values highlighted in yellow, i.e. the values between the <X509Certificate> and </X509Certificate> XML tags.

Step 3 – Create a certificate (.der) file. Open a notepad, paste the copied value, then add "-----BEGIN CERTIFICATE-----" at the beginning and "-----END CERTIFICATE-----" at the end. The file should look similar to below.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Step 4 – Now save the file in .der format. ex:- scpcertficatetrial.der

Step 5 – Import the certificate in “Certificate Store”. See the below image.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Step 6 – Create a “Certificate Collection” ex:- SCP Certificate.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Step 7 – Add the Certificate to the Collection.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Step 8 – Change the Purpose to SAML and save it.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Step 9 – Lets check in the HANA Cockpit. You can see the number of certificates in the cockpit if all the configs are done as described above.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Also verify that the certificate of your SCP account metadata has been successfully stored using the following SQL command:

SELECT * FROM SYS.CERTIFICATES

The certificate will be fetched. It should look similar to below:-

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials
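For reference, the same steps can presumably also be done purely in SQL; a hedged sketch where the collection name is made up and <certificate_id> must be replaced with the ID shown in SYS.CERTIFICATES:

-- Create the certificate in the certificate store from its PEM text
CREATE CERTIFICATE FROM '-----BEGIN CERTIFICATE-----
... base64 value copied from metadata.xml ...
-----END CERTIFICATE-----';

-- Note the CERTIFICATE_ID of the new entry
SELECT * FROM SYS.CERTIFICATES;

-- Create a certificate collection (PSE), add the certificate and set its purpose to SAML
CREATE PSE SCP_CERTIFICATE;
ALTER PSE SCP_CERTIFICATE ADD CERTIFICATE <certificate_id>;
SET PSE SCP_CERTIFICATE PURPOSE SAML;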

This step concludes the scenario of Certificate Creation and Addition to the Store.

SAP S/4HANA Cloud and S/4HANA On-premise Quality Management Features

In this blog post, S/4HANA Cloud and S/4HANA On-premise Quality Management features will be presented. Both applications keep improving their features and continue to meet customer requirements. The quality management process is a key driver from the product development stage until the product discontinuation phase.

S/4HANA Cloud releases its features on a quarterly innovation cycle.

S/4HANA On-premise releases its features on an annual innovation cycle.

Overview of S/4HANA On-premise (QM)


SAP S/4HANA Cloud and SAP S/4HANA, SAP HANA Study Materials

Overview of S/4HANA Cloud (QM)


SAP S/4HANA Cloud and SAP S/4HANA, SAP HANA Study Materials

Preconfigured core processes within Quality Management


The following processes support the standard functionalities.

1. Quality Management in Procurement
2. Quality Management in Discrete Manufacturing
3. Quality Management in Stock Handling
4. Quality Management in Sales
5. Quality Management for Complaints from Customers
6. Quality Management for Complaints Against Suppliers
7. Quality Management of Internal Problems
8. Nonconformance Management
9. SAP Fiori Analytical Apps for Quality Management

Fiori Apps in QM at a Glance


SAP S/4HANA Cloud and SAP S/4HANA, SAP HANA Study Materials

Cards for Overview Pages in QM at a Glance


SAP S/4HANA Cloud and SAP S/4HANA, SAP HANA Study Materials

Fiori Launchpad for Different QM Roles


SAP S/4HANA Cloud and SAP S/4HANA, SAP HANA Study Materials

Analytics


SAP S/4HANA Cloud and SAP S/4HANA, SAP HANA Study Materials

SAP road maps provide insights into the near-term investments across intelligent ERP that may help you plan and implement your digital journey.

S/4HANA On-premise Roadmap


SAP S/4HANA Cloud and SAP S/4HANA, SAP HANA Study Materials

S/4HANA Cloud Roadmap


SAP S/4HANA Cloud and SAP S/4HANA, SAP HANA Study Materials

Principal Propagation between HTML5- or Java-based applications and SAP HANA XS on SAP HANA Cloud Platform


Introduction


Although there is no standardized definition of the term “Principal Propagation”, it is commonly understood as the ability of a system to securely forward or propagate the authenticated user (principal) from a sender to a receiver in a way that the forwarded user information is kept confidential and – even more important – cannot be changed during transit. Based on a pre-established trust relationship to the sender, the receiver uses this information to logon the user without asking her again for the credentials.

Principal propagation plays an important role in many scenarios on SAP HANA Cloud Platform (HCP), e.g. when an application has to pass the logged-on user in the Cloud to an on-premise system via the SAP HANA Cloud Connector. The following picture illustrates another very common scenario for principal propagation, where an application on HCP consists of two components: The user interface (UI) is developed and deployed as an HTML5- or Java-application on HCP which consumes an API implemented as a RESTful service from an SAP HANA instance running on HCP. The API requires an authenticated user and exposes the user’s data via SAP HANA Extended Application Services (XS).

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

On HCP, the user usually authenticates against an identity provider (IdP) which is configured for the account where the application is deployed to. In HCP trial accounts for example, this is the SAP ID Service by default, which is a free-of-charge public identity provider from SAP, managing the SAP Community Network users, SAP Service Marketplace users and the users of several other SAP sites. To delegate user authentication to the IdP, HCP uses the SAML 2.0 protocol. Upon successful authentication at the IdP, the HTML5 application on HCP receives a SAML Response from the IdP, which is a message digitally signed by the IdP. It must contain at least the unique logon name of the user, and may also include additional information about the user, such as the user’s first and last name, e-mail address etc.

HTML5 applications usually rely on on-premise or on-demand RESTful services. When a RESTful service is called from an HTML5 application, a new connection is initiated by the central HTML5 dispatcher on HCP to the service that is defined in a corresponding HTTP destination. If this call requires the user to authenticate at the service, the HTML5 dispatcher should rather propagate the authenticated user or login context than prompting the user again for credentials to access the service.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

There are two authentication mechanisms available for an HTTP destination to propagate the logged-in user to a RESTful service running on SAP HANA XS: SAP Assertion SSO or Application-to-Application SSO (AppToAppSSO). The first one uses SAP Assertion Tickets to transfer the logged-on user information, the latter a SAML Assertion. Compared to SAP Assertion SSO, AppToAppSSO has the following advantages:

◈ The propagated user information can contain more information than just the user’s login name. Additional user attributes are also forwarded with the SAML Assertion. SAP Assertion Tickets only forward the user’s login name.

◈ SAP HANA XS can dynamically create a new DB user based on the forwarded information. This user is required to successfully log on the user on the SAP HANA instance. With SAP Assertion Tickets, this mechanism, sometimes referred to as “Just-in-time (user) provisioning”, is not supported, and the users have to be created in advance. However, this is sometimes not possible, e.g. if there is a large number of users accessing the service.

In this blog you will go step-by-step through a scenario using AppToAppSSO. Common to both mechanisms is that the recipient (XS) must trust the sender (HTML5 dispatcher) to accept the propagated principal. For AppToAppSSO, this trust relationship is set up in XS similar to other SAML-based IdPs. Therefore, the SAP HANA instance must be properly set up for SAML-based authentication as one of the following prerequisites.

Note: Although an HTML5 application is used to implement the UI, a Java-based application could have been used as well for the scenario. AppToAppSSO works for both application runtimes to propagate the authenticated user to SAP HANA XS.

Prerequisites


The scenario in this blog is using an SAP HANA Multitenant Database Container (MDC) on the HCP trial landscape. Before getting started, please check that you meet the following prerequisites:

1. You have an HCP trial account, which can be created at no charge from here.
2. You have created a MDC in your trial account. 
3. You have setup the SAML Service Provider in the MDC. 
4. You have installed Eclipse with the SAP HANA Cloud Platform Tools and SAP HANA Tools following the instructions on the SAP HANA Tools site
5. You have installed OpenSSL which will be used in first step to generate the signing key pair and certificate for your HTML5 SAML Service Provider

Step 1: Configuring the Local Service Provider for HTML5 apps

AppToAppSSO uses a SAML Assertion as the security token format to propagate the logged-on user. Therefore, your HCP (trial) account must be setup with a custom SAML Service Provider key pair which is used to digitally sign the SAML Assertion. Based on this signature, XS will verify that the user information has been propagated from a trustworthy system, i.e. your HTML5 application, or even more precisely, your account’s subscription to the central HTML5 dispatcher. Login to the Cloud Cockpit on the HCP trial landscape and open the Trust settings of your account. Click on the Edit button and switch the Configuration Type from “Default” to “Custom”.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

If you have never done this before, you will see empty text fields for the Signing Key and Signing Certificate. Those need to be filled in this step as they identify your HTML5 application to the service running on XS. For development purposes, you can use the “Generate Key Pair” button in this scenario to generate a key pair with a self-signed certificate. For a productive scenario, it is recommended to use a certificate issued by a well-known and trusted Certificate Authority (CA). After clicking on Save you should get a message that you can proceed with the configuring of your trusted identity provider settings, and see a Local Service Provider configuration like shown in the following screenshot:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

Click on the Get Metadata link to export the Local Service Provider configuration in a standardized metadata format, which will be used in the next step to import the trust settings in XS.

With the Configuration Type “Custom” you are now able to configure your own trusted identity providers, e.g. a corporate IdP. For the scenario in this blog you will continue to use SAP ID Service as our IdP to authenticate the users. Therefore you have to switch back to Configuration Type “Default” by clicking on the Edit button and reverting Configuration Type “Custom” back to “Default”. Click on Save.

Note: By switching back to “Default”, your “Custom” settings are not lost, and will be used for signing the SAML Assertion sent by the HTTP destination using AppToAppSSO principal propagation.

Step 2: Setup Trust in XS to the HTML5 Local Service Provider

Open the SAML Identity Provider list of your trial MDC with the XS Admin tool using your account-specific URL https://<mdcname><account name>.hanatrial.ondemand.com/sap/hana/xs/admin, and login with the SYSTEM user. If the SYSTEM user does not yet have the required roles to access the XS Admin tool, add all roles in SAP HANA Studio containing “xs.admin” in the name as shown in the following screenshot:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

On the SAML Identity Provider list, click on Add (“+”) to create a new trust relationship to your HCP account’s Local Service Provider which has been configured in the previous step. In the Metadata field, copy and paste the content of the SAML Metadata file you exported from the Cloud Cockpit using the Get Metadata link.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

When you click on Save, the fields in the form will be updated based on the values from the metadata file. The only fields left blank are “SingleSignOn URL (RedirectBinding)” and “SingleSignOn URL (PostBinding)”, because you’ve actually imported a metadata file of a service provider, and not of an identity provider. Therefore add some dummy values, e.g. “/saml2/sso”. Also make sure that the checkbox “Dynamic User Creation” is activated. This ensures that for new users a corresponding HANA user is created. Click on Save again to store your settings.

Next, verify that the destination for the new IdP was stored in HANA by checking in SAP HANA Studio the _SYS_XS.HTTP_DESTINATIONS table using the command

SELECT * FROM _SYS_XS.HTTP_DESTINATIONS

You should see the destination in the result list:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

Also verify that the certificate of your trial account metadata has been successfully stored using the following SQL command:

SELECT * FROM SYS.CERTIFICATES

The certificate is shown at the end of the list:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

This concludes the trust setup in HANA XS to your HTML5 application as a trustworthy system to propagate the authenticated user. Next you will configure the destination of your HTML5 application.

Step 3: Configure HTTP Destination for AppToAppSSO

The sample HTML5 application used in this blog is a project management application, which retrieves a user’s project data from a REST service running on XS.

◈ In Cloud Cockpit, go to Applications > HTML5 Applications. Click on Import from File.
◈ Select the downloaded ZIP archive. For the version name, enter “initial”
◈ Select the new application “xproject” by clicking on its link
◈ Select Versioning from the left-hand navigation menu, then click on Versions > Initial > Activate.

Open the HANA Cloud Platform WebIDE from the Services Cockpit or open it directly with the link https://webide-<account_name>.dispatcher.hanatrial.ondemand.com/. In your WebIDE workspace, follow these steps:

◈ Right-click on the Workspace and select Import > Application from SAP HANA Cloud Platform
◈ Right-click on the xproject folder and select Deploy > Deploy to SAP HANA Cloud Platform. Enter something like 1.0 in the version field and click Deploy.

You should now be able to reach the application under its Application URL as shown in the Cloud Cockpit, e.g. https://xproject-<account_name>.dispatcher.hanatrial.ondemand.com/?hc_reset.

Let’s have a closer look at the HTML5 application. As a user, you log in to the application via the IdP and then see a list of projects to which you are assigned. Therefore the logged-on user must be propagated securely to XS, which uses the propagated user id to query the database for the projects where the user is assigned as the project lead. In addition, the user’s attributes such as first and last name are used to set the user’s name in the list of projects returned from XS to HTML5.

The actual invocation of the service in XS is done in Project.controller.js of the HTML5 application:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

In the JSON model, the data is loaded from the URL /api/projects, which is mapped in the HTML5 application’s neo-app.json descriptor file to the HTTP destination with name “xsprojectdata” :

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

Let’s have a look at the destination configuration in the Cloud Cockpit. The two most important settings are highlighted in the following screenshot:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

◈ The Authentication method is set to AppToAppSSO
◈ An additional property with the name “saml2_audience” and the value “I1700” is set for the destination

The property sets an important value in the SAML Assertion which is used to propagate the user. This value, the SAML audience,

“contain[s] the unique identifier URI from a SAML name identifier that describes a system entity” and “evaluates to Valid if and only if the SAML relying party is a member of one or more of the audiences specified.” 

In other words: XS would reject the SAML Assertion with the propagated user if the audience is not set correctly to its own SAML name identifier. By default, an HTTP destination configured for AppToAppSSO sets the audience to the name of the SAML local service provider (aka “relying party”) configured in the Cloud Cockpit. For a trial account, this would be “https://hanatrial.ondemand.com/<your account name>” if you haven’t changed it. However, your MDC container is configured to a different SAML service provider name. Mine got the name identifier “I1700” which can be looked up in the XS Admin Tool under “SAML Service Provider”:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

Last but not least, configure the URL of the destination according to your service location. To do so, follow these steps:

◈ Create a new package “sample”, e.g. with SAP HANA Studio in the SAP HANA Systems view

◈ Create a new schema “xproject”, e.g. with SAP HANA Studio SQL Console, using the command CREATE SCHEMA “xproject”

◈ Download the XS service code, and import it in the new “sample” package, e.g. with the SAP HANA Web-based Development Workbench (https://<MDC_name><account_name>.hanatrial.ondemand.com/sap/hana/ide/editor/) using the context menu (Import – Archive)

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

◈ Refresh the package after successful import of the archive and open the newly created “xproject” sub-package. Right-click on the following imported files and select Activate: PROJECT.hdbdd, project.hdbsequence, projectmember.hdbrole,

◈ Open the data/projects.csv file, and replace the two placeholders <your user id> in the file with your SAP ID Service user ID (e.g. P123456 or D987654). Save the changes. This file will import some sample data into the PROJECT table which is used later for testing the scenario. Now right-click on the file project.hdbti and select Activate from the context menu.

◈ Finally, right-click on the package “xproject” and select Activate All from the context menu.

As a result, the XS-based service is now accessible at the URL  https://<MDC_name><account_name>.hanatrial.ondemand.com/sample/xproject/xproject.xsjs. Enter this URL in the destination’s URL field and save it.

Step 5: Configure the default role of dynamically created users in XS Service

The xproject.xsjs file implements the XS service to retrieve the propagated user’s projects from the database. The function getProject() retrieves the user’s unique logon name and queries the database for projects where the user is set as the project lead. The result is returned in JSON format. The PROJECT table can only be accessed by users with the role “projectmember”, which is defined by the file projectmember.hdbrole. Therefore, new HANA DB users created dynamically according to the new IdP’s setting should automatically be assigned this role. To set this default role, you first need to create a run-time role by opening the Security folder of your system in the “Systems” view in SAP HANA Studio. There, right-click on the Roles element and select New Role from the context menu. For the Role Name, enter a value such as “DEFAULT_ROLE_FOR_PROJECT_MEMBERS”, and click on the “+” in the tab Granted Roles to add your design-time role “sample.xproject::projectmember” to it. Press Ctrl+S to save your new run-time role.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials
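Alternatively, a hedged sketch using the role names from this blog: the run-time role can also be created in the SQL console and the activated design-time role granted to it.

-- Create the run-time role
CREATE ROLE DEFAULT_ROLE_FOR_PROJECT_MEMBERS;

-- Grant the activated design-time role to it (requires EXECUTE on this procedure)
CALL _SYS_REPO.GRANT_ACTIVATED_ROLE(
    'sample.xproject::projectmember',
    'DEFAULT_ROLE_FOR_PROJECT_MEMBERS');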

Next, double-click on your system in SAP HANA Studio to open the Administration. Select the Configuration tab and filter for “saml”. Right-click on the saml section in the search results and select Add Parameter from the context menu. The Add Parameter Wizard opens. Leave the default selection (“Database”) for the scope and click Next. For the key name, enter “defaultrole”, and for the value the name of the newly created run-time role (“DEFAULT_ROLE_FOR_PROJECT_MEMBERS”). Click Finish to save the new parameter.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials
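The same parameter can presumably be set from the SQL console as well; a sketch under the assumption that the saml section shown in the Configuration tab belongs to xsengine.ini (adjust the file name if your system stores it elsewhere):

-- Assumption: the [saml] section is located in xsengine.ini
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'DATABASE')
    SET ('saml', 'defaultrole') = 'DEFAULT_ROLE_FOR_PROJECT_MEMBERS'
    WITH RECONFIGURE;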

Step 6: Configure SAML for the XS Service

Before you can test the scenario, the XS Service must also be protected with SAML. In the XS Admin Tool, select “XS Artifact Administration” from the menu. Go to package “sample.xproject” and click on Edit. In the Security & Authentication tab, activate SAML and select newly created IdP in the dropdown box, starting with “HTTPS__HANATRIAL_…”. Deactivate any other authentication methods and click on Save.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

Step 7: Testing the Scenario

Now it is time to test the scenario: Go back to Cloud Cockpit and open the Overview page of your xproject HTML5 application. Right-click on the Application URL and open the application in a new private/incognito browser window to obtain a new session.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

You will see the landing page of the xproject application. Click on Login.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

Based on your trial account’s trust settings, you will be redirected to SAP ID Service as the default IdP. Upon successful logon with your SAP ID Service credentials, your browser is redirected back to the application. The project overview page retrieves its data from the XS service, which uses the AppToAppSSO destination to propagate your user. Based on the configuration settings from the previous steps, only the projects for the currently logged-in user are retrieved by getting the username from the XS session object with

var username = $.session.getUsername();

in line 20 of the xproject.xsjs file, and appending it to the SQL statement which queries the application’s PROJECT table. In addition, the federated user attributes for first- and last name of the logged-in user are used to return the display name of the user. Those are accessed in XS under the same name as in HTML5 or Java. For SAP ID Service, they are accessed using firstname and lastname using the following API:

var displayName = $.session.samlUserInfo.firstname + " " + $.session.samlUserInfo.lastname;

Depending on your table data and user name, the list may look like this in the web browser, only showing two out of three projects in total:

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials

Import/Export Catalog Objects with Dependencies for Multi-TenantDB in SAP HANA 2.0 SPS04


Introduction


In this blog post you will learn how to import/export catalog objects with dependencies for multitenant databases in SAP HANA 2.0 SPS 04. IMPORT/EXPORT is a useful feature of the SAP HANA database, often used to transfer data between different SAP HANA databases for various purposes. In many cases you may need to use cross-database access based on your business requirements. Let’s assume there are two tenants, DB1 and DB2, and a calculation view (ZCV_REVENUE) in DB1 references many tables and views in DB2. In this situation, if you export ZCV_REVENUE with dependencies, only ZCV_REVENUE is exported; the referenced objects in tenant DB2 have to be exported manually one by one.

What’s new

From SAP HANA 2.0 SPS04 onwards, users can export/import cross-database access dependencies with IMPORT/EXPORT. We will show some examples to help you understand this feature. Please note that you should create the remote schema DB2_DEMO in DB2 manually and grant the necessary privileges to the user which has the remote identity.

SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA Tutorials and Materials
The following is an example showing how to use this feature in SAP HANA 2.0 SPS04.

1. Setup Cross DB Access in SAP HANA system. Tenant DB2 is the remote database. Tenant DB1 is the local one.

2. Create schema DB2_DEMO and table DB2_TEST_TABLE in Tenant DB2. Then insert some test data in this table.

For example, the following statements create schema DB2_DEMO and table DB2_TEST_TABLE in Tenant DB2.

--In Tenant DB2
CREATE SCHEMA DB2_DEMO;
SET SCHEMA DB2_DEMO;

CREATE TABLE DB2_TEST_TABLE
(
A INT,
B VARCHAR(255)
);

INSERT INTO DB2_TEST_TABLE VALUES(1, '1');
INSERT INTO DB2_TEST_TABLE VALUES(1, '2');
INSERT INTO DB2_TEST_TABLE VALUES(2, '3');
INSERT INTO DB2_TEST_TABLE VALUES(3, '4');

3. Create schema DB1_DEMO and table DB1_TEST_TABLE in Tenant DB1. Then insert some test data in this table. Finally, create a calculation scenario DB1_DEMO.ZCV_REVENUE and related column views.

For example, the following statements create schema DB1_DEMO, table DB1_TEST_TABLE and calculation scenario DB1_DEMO.ZCV_REVENUE with the related column views in Tenant DB1.

--In Tenant DB1
CREATE SCHEMA DB1_DEMO;
SET SCHEMA DB1_DEMO;

CREATE TABLE DB1_TEST_TABLE
(
A INT,
B VARCHAR(255)
);

INSERT INTO DB1_TEST_TABLE VALUES(1, '5');
INSERT INTO DB1_TEST_TABLE VALUES(1, '6');
INSERT INTO DB1_TEST_TABLE VALUES(2, '7');
INSERT INTO DB1_TEST_TABLE VALUES(3, '8');

CREATE CALCULATION SCENARIO "DB1_DEMO"."ZCV_REVENUE" USING '[{"__CalculationNode__": true,"name": "DB1_TEST_TABLE","operation": {"__TableDSNodeData__": true,"source": {"__IndexName__": true,"schema": "DB1_DEMO","name": "DB1_TEST_TABLE"},"dataSourceFlags": 0},"attributeVec": [{"__Attribute__": true,"name": "A","role": 1,"datatype": {"__DataType__": true,"type": 73,"sqlType": 3,"sqlLength": 5},"attributeType": 0},{"__Attribute__": true,"name": "B","role": 1,"datatype": {"__DataType__": true,"type": 83,"sqlType": 36,"sqlLength": 255},"attributeType": 0}]},{"__CalculationNode__": true,"name": "DB2_TEST_TABLE","operation": {"__TableDSNodeData__": true,"source": {"__IndexName__": true,"database": "DB2","schema": "DB2_DEMO","name": "DB2_TEST_TABLE"},"dataSourceFlags": 0},"attributeVec": [{"__Attribute__": true,"name": "A","role": 1,"datatype": {"__DataType__": true,"type": 73,"sqlType": 3,"sqlLength": 5},"attributeType": 0},{"__Attribute__": true,"name": "B","role": 1,"datatype": {"__DataType__": true,"type": 83,"sqlType": 36,"sqlLength": 255},"attributeType": 0}]},{"__CalculationNode__": true,"name": "Join_1","inputVec": [{"__Input__": true,"name": "DB1_TEST_TABLE","mappingVec": [{"__Mapping__": true,"type": 1,"target": "A","source": "A","length": 0},{"__Mapping__": true,"type": 1,"target": "B","source": "B","length": 0}]},{"__Input__": true,"name": "DB2_TEST_TABLE","mappingVec": [{"__Mapping__": true,"type": 1,"target": "A_1","source": "A","length": 0},{"__Mapping__": true,"type": 1,"target": "B_1","source": "B","length": 0},{"__Mapping__": true,"type": 1,"target": "A","source": "A","length": 0}]}],"operation": {"__JoinOpNodeData__": true,"joinType": 0,"joinAttributeVec": ["A"],"cardinality": 64},"attributeVec": [{"__Attribute__": true,"name": "A","role": 1,"datatype": {"__DataType__": true,"type": 73,"sqlType": 3,"sqlLength": 5},"attributeType": 0},{"__Attribute__": true,"name": "B","role": 1,"datatype": {"__DataType__": true,"type": 83,"sqlType": 36,"sqlLength": 255},"attributeType": 0},{"__Attribute__": true,"name": "A_1","role": 1,"datatype": {"__DataType__": true,"type": 73,"sqlType": 3,"sqlLength": 5},"attributeType": 0},{"__Attribute__": true,"name": "B_1","role": 1,"datatype": {"__DataType__": true,"type": 83,"sqlType": 36,"sqlLength": 255},"attributeType": 0}],"debugNodeDataInfo" :  {"__DebugNodeDataInfo__": true,"nodeName": "Join_1"}},{"__CalculationNode__": true,"name": "finalProjection","isDefaultNode": true,"inputVec": [{"__Input__": true,"name": "Join_1","mappingVec": [{"__Mapping__": true,"type": 1,"target": "A","source": "A","length": 0},{"__Mapping__": true,"type": 1,"target": "B","source": "B","length": 0},{"__Mapping__": true,"type": 1,"target": "A_1","source": "A_1","length": 0},{"__Mapping__": true,"type": 1,"target": "B_1","source": "B_1","length": 0}]}],"operation": {"__ProjectionOpNodeData__": true},"attributeVec": [{"__Attribute__": true,"name": "A","role": 1,"datatype": {"__DataType__": true,"type": 73,"sqlType": 3,"sqlLength": 5},"description": "A","attributeType": 0},{"__Attribute__": true,"name": "B","role": 1,"datatype": {"__DataType__": true,"type": 83,"sqlType": 36,"sqlLength": 255},"description": "B","attributeType": 0},{"__Attribute__": true,"name": "A_1","role": 1,"datatype": {"__DataType__": true,"type": 73,"sqlType": 3,"sqlLength": 5},"description": "A_1","attributeType": 0},{"__Attribute__": true,"name": "B_1","role": 1,"datatype": {"__DataType__": true,"type": 83,"sqlType": 36,"sqlLength": 255},"description": "B_1","attributeType": 0}],"debugNodeDataInfo" 
:  {"__DebugNodeDataInfo__": true,"nodeName": "Projection"}},{"__Variable__": true,"name": "$$language$$","typeMask": 512,"usage": 0,"isGlobal": true},{"__Variable__": true,"name": "$$client$$","typeMask": 512,"usage": 0,"isGlobal": true},{"__CalcScenarioMetaData__": true,"externalScenarioName": "tmp::ZCV_REVENUE"}]';
CREATE COLUMN VIEW "DB1_DEMO"."ZCV_REVENUE" WITH PARAMETERS (indexType=11,
'PARENTCALCINDEXSCHEMA'='DB1_DEMO',
'PARENTCALCINDEX'='ZCV_REVENUE',
'PARENTCALCNODE'='finalProjection');
COMMENT ON VIEW "DB1_DEMO"."ZCV_REVENUE" is 'ZCV_REVENUE';
COMMENT ON COLUMN "DB1_DEMO"."ZCV_REVENUE"."A" is 'A';
COMMENT ON COLUMN "DB1_DEMO"."ZCV_REVENUE"."B" is 'B';
COMMENT ON COLUMN "DB1_DEMO"."ZCV_REVENUE"."A_1" is 'A_1';
COMMENT ON COLUMN "DB1_DEMO"."ZCV_REVENUE"."B_1" is 'B_1';

4. Check object dependencies using following statement.

For example, the statement for checking object dependencies.

SELECT * FROM OBJECT_DEPENDENCIES WHERE DEPENDENT_OBJECT_NAME = 'ZCV_REVENUE' AND DEPENDENT_SCHEMA_NAME = 'DB1_DEMO' AND DEPENDENT_DATABASE_NAME = 'DB1';

5. The result of the dependencies looks like the following.

For example, the result of object dependencies.

BASE_DATABASE_NAME;BASE_SCHEMA_NAME;BASE_OBJECT_NAME;BASE_OBJECT_TYPE;DEPENDENT_DATABASE_NAME;DEPENDENT_SCHEMA_NAME;DEPENDENT_OBJECT_NAME;DEPENDENT_OBJECT_TYPE;DEPENDENCY_TYPE
DB1;DB1_DEMO;DB1_TEST_TABLE;TABLE;DB1;DB1_DEMO;ZCV_REVENUE;VIEW;1
DB2;DB2_DEMO;DB2_TEST_TABLE;TABLE;DB1;DB1_DEMO;ZCV_REVENUE;VIEW;1

6. Export calculation scenario with dependencies in Tenant DB1.

For example, the following statement exports the object with remote dependency.

EXPORT "DB1_DEMO"."ZCV_REVENUE" INTO '/tmp';

Please note that you can only import objects into a database with the same name as the one they were exported from. You have to run the import in each tenant database, as shown in the example below. In this case, import the base objects in Tenant DB2 first, because the calculation view in Tenant DB1 would be invalid without the referenced objects in Tenant DB2.

You can use following statements to import the exported objects.

For example, the following statements import the cross-database access dependencies.

--In Tenant DB2
IMPORT "DB2"."DB2_DEMO"."*" FROM '/tmp';
--In Tenant DB1
IMPORT "DB1"."DB1_DEMO"."*" FROM '/tmp';

Or using the following IMPORT statements.

For example, the following statements import the cross-database access dependencies.

--In Tenant DB2
IMPORT ALL FROM '/tmp';
--In Tenant DB1
IMPORT ALL FROM '/tmp';

Limitation

This feature does not support IMPORT/EXPORT in “SAP Support Mode”.

Lightweight eclipse for HANA studio ( + streaming, SDI SDK)

The web based tools for the SAP HANA platform are ever improving, but sometimes the HANA studio remains the easiest way to get things done.

The HANA studio is a bit heavy and unstable, so here is some advice on how to get the same capabilities in a more lightweight and robust fashion. You will need internet connectivity to do the install.

Download Eclipse for Java Developers


Go to the download page, locate the smallest edition, and download the binaries for your platform.

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Add the SAP Cloud Platform repository


After extracting the files, launch eclipse and create a workspace.

Then click on help -> Install new software

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Click on Add

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

The plugin for HANA is not available in the latest repository, so we point to the previous version. Define a new repository with the URL https://tools.hana.ondemand.com/2018-12

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Select the SAP HANA tools

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Click next and accept the terms of the licenses. The installation will take a few minutes.

You should see a security warning; just click on Install anyway.

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Next you need to accept all certificates

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Congratulations, the installation of the HANA tools will be finished after a restart.

Now make eclipse faster!


Select help -> About eclipse

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Click on Installation Details

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Select all the Mylyn packages and click on uninstall

SAP HANA, SAP HANA Studio, SAP HANA Study Materials, SAP HANA Certifications

Eclipse will check for dependencies and safely uninstall Mylyn. You can repeat this step with:

◈ The two packages m2e (Maven)
◈ EclEmma Java code coverage
◈ Tip of the day

I didn’t push it further than that. I use the same eclipse for java development without problems.

Installing additional plugins


I’ve added the plugin for SAP HANA Streaming Analytics without any issue. I’ve also added the plugin for SDI adapter development version 2.3.5.

SAP Native HANA best practices and guidelines for significant performance

This post consolidates best practices, guidelines and tips for SAP HANA modeling (mainly for version 2.0 SPS 02).

Top 10 reasons to choose SAP HANA


SAP HANA layered architecture (Great to have) and naming conventions

In my view, a native HANA virtual data model is best organized into three (3) layers, similar to SAP HANA Live models:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

1. Base View (1 layer – Raw data 1:1 source to HANA Base layer, Global Filters)
2. Reuse View (Max 3- 4 layers view good to have – Logics, BTL, Mappings, complex)
3. Query View/Report View (1 layer – Final View – Thin layer)
P.S.: Master data is maintained in separate dimension views (star schema) and consumed in the relevant transactional HANA models.

Calculation View: CV_<Layer>_<BUSINESS_NAME> e.g. CV_BV_FSR_1

Naming Conventions – Keep the name as short as possible (preferably under 15 chars)

Name every element in CAPITAL LETTERS. Give meaningful business names

◈ CV_BV_XYZ
◈ CV_RV_XYZ1, 2, 3 (the number of layers grows with complexity; at most 5 is recommended, and complex views should be broken into smaller ones where possible)
◈ CV_QV_XYZ

This layering also makes reconciliation and support easier, since issues can be analysed layer by layer (easier troubleshooting).

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

SAP HANA Best practices and guidelines


The HANA best practices and guidelines below come from my own experience and from experienced HANA consultants.

Always great to follow and adhere to the SAP HANA best practices and guidelines
  • Use inner, referential and left outer joins where possible; depending on the business needs, consider replacing them with UNION nodes (where possible). This may vary case by case
  • Specify the correct cardinality in joins (n:1 or 1:1) – only if sure
  • Use table functions instead of scripted calculation views (see the sketch after this list)
  • Place a projection node on top of views/tables; projection nodes improve performance by narrowing the data set to the required columns
  • Use star join in calculated view and join calculation view (Dimension – Master data) for better performance (Star schema concept)
  • Execute in SQL-engine (Semantics properties or at during custom calculations)
  • Avoid transfer of large result sets between the SAP HANA DB and client applications
  • Reduce the data set as early as possible. Use design time filters at the lowest level, this helps to reduce at least 20-30% execution time
  • Input Parameter (Mandatory/Optional): Placeholders part of the models can be used in the calculation. Can accept multiple values and can be derived from the table (Value help Models) or stored procedures
  • Ensure Variables (where clause) is pushed to the lowest level. Confirm using Visualization Plan/plan cache
  • Wherever possible use variables and input parameters to avoid fetching a big chunk of data
  • Avoid calculated object using IF-THEN-ELSE expression, use restricted measure instead. HANA SPS 11 supports expression using SQL in Restricted column
  • Avoid performing joins on calculated columns
  • Avoid script-based calculation views; the WHERE clause will not be pushed down
  • Using Filter is better than using Inner Join to limit the dataset
  • Avoid joining columns which are having more NULL values
  • Before you begin, check the key columns for NULL values at the table level. Columns that are part of joins in HANA models should not contain NULL values (resolve NULL values through ETL or SLT jobs before starting modeling)
  • When using unions, make sure there are no NULL values in measure columns, otherwise the union operation will choke (manage the mappings and supply 0 for NULL measure columns)
  • Execute the SAP HANA models in SQL Engine (Semantics properties)
  • Make use of aggregation nodes in your models for handling duplicates
  • Avoid filters on the calculated column (consider materializing these columns)
  • Do not create working tables in different schemas; this creates security problems around ownership. Instead, create a separate schema, create all working tables there, and use it in your modelling
  • One of the best practices in HANA modeling is to define joins on columns with either INTEGER or BIGINT as data types
  • Check the performance of the models during the initial development rather than doing at the final phase
  • Partition tables with a huge number of records for better performance; keep at most 2 billion records per table (or table partition) and ideally no more than 1000 partitions per table
  • Use analytical privilege latest SQL analytical privilege (SP10) to filter the data based on business requirement
  • Join on Key columns and indexed columns
  • Use the execution plan and visualization plan to analyse the performance of HANA models and take the necessary steps if any performance issues are observed. The best way to deal with performance issues is after every step of the modelling rather than in the final version; this way memory allocation and CPU/memory consumption issues can be addressed during the development stage
  • It is not recommended to join on calculated columns/fields with NVARCHAR or DECIMAL data types, as this might create performance issues. It differs case by case based on business needs, but stick to the best practices where you can
These are some of the HANA best practices pursued by numerous SAP HANA consultants. Some complex business requirements still coerce us to deviate from them, which is acceptable in such cases.
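A minimal sketch of such a table function (all object names are made up); it can then be consumed as a data source in a graphical calculation view instead of a scripted view:

CREATE FUNCTION "TF_SALES_BY_REGION" (IN iv_region NVARCHAR(10))
RETURNS TABLE ("REGION" NVARCHAR(10), "TOTAL_AMOUNT" DECIMAL(15,2))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA AS
BEGIN
    -- Filter as early as possible and aggregate before returning the result
    RETURN SELECT "REGION", SUM("AMOUNT") AS "TOTAL_AMOUNT"
           FROM "SALES"
           WHERE "REGION" = :iv_region
           GROUP BY "REGION";
END;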

SAP HANA general performance principles


  • Identify the long-running queries by reviewing the Performance tab to analyse system performance located under the Administration perspective
  • Perform performance testing of HANA information models in a QA environment before they hit the production environment
  • The SAP HANA cockpit is a good admin monitoring tool; make use of it early to stay away from performance bottlenecks and avoid major issues in the production environment
  • SAP HANA automatically handles indexes on key columns, which is usually enough; however, for filter conditions on non-key fields, create a secondary index on the non-key columns if necessary. Creating an index on non-primary-key columns with high cardinality enhances performance (see the sketch after this list)
  • Non-SAP applications can make use of SDA (Smart Data Access): no loading is required, cost and maintenance are low, and the business gets real-time data insights on virtual tables, which is especially useful when a large share of the customer’s applications is non-SAP
SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning
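A hedged one-liner (table and column names are made up) for such a secondary index on a frequently filtered, high-cardinality non-key column:

-- Secondary index on a non-key filter column
CREATE INDEX "IDX_SALES_CUSTOMER" ON "SALES" ("CUSTOMER_ID");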

SAP HANA 2.0 new features for better performance (SAP HANA data modelling)


This section lists a few consolidated new features for SAP HANA modelling (from a version 2.0 SPS03 perspective) that we have seen deliver better performance. A couple are included for reference only; it is good to adopt them while implementing SAP HANA models and later for support activities and monitoring.
  • Expose only required columns used in reports and hierarchies at lower data models
  • Give meaningful business names for all exposed attributes in final reporting views
  • In the latest version, use calculation views of type dimension (master data, in place of attribute views) and calculation views of type cube/star join (replacing the old analytic view and calculation view concept: a star schema with a fact table surrounded by dimensions)
  • Note: Table functions can be used as input sources for either dimensions or facts
  • It is good to have virtual tables (SDA), SDI scheduling and task execution (flowgraphs), and extended tables (dynamic tiering) consumable in calculation views, so consider extending these features to all relevant business cases
  • SAP HANA Cockpit – offline and online (SAP HANA admin performance monitor tool)
  • SAP Web IDE for SAP HANA – Full support for graphical data models (Web-based)
  • Enabling SQL Script with Calculation views (outdated/retired)
  • In SAP HANA 2.0, use SQLScript table functions instead of script-based calculation views. Script-based calculation models can be refactored into table functions

Consuming non-In Memory data in Calculation views


Dynamic Tiering

• Provides a disk-based columnar SAP HANA database extension using SAP HANA external tables (extended tables)

Smart Data Access

• Provides a virtual access layer to data outside SAP HANA (e.g. other databases, Hadoop systems, etc.) using so-called virtual tables; model and approach these scenarios carefully and monitor query performance

• Ensure that filters and aggregation is pushed to remote sources

Calculation View Enhancements – Hierarchies (Hierarchy SQL Integration)

• Hierarchy-based SQL processing capabilities enabled via SAP HANA View-based hierarchies

Restricted Objects in advanced versions of HANA

The following HANA artifacts should not be created by HANA modelers, since they are deprecated in higher versions and require a laborious amount of work to migrate.

1. Attribute Views and Analytical Views
2. Script based calculation views
3. XML – based Classical analytical privilege

Prepare for the future migration from existing old SAP HANA information views

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

Security and Authorizations – Analytical privileges

Create SQL-based analytic privileges

◈ Start with general Attribute based Analytic Privilege, then switch to SQL-based
◈ Use SQL Hierarchies within SQL Analytical Privileges
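A minimal sketch of a SQL-based analytic privilege (view and package names are made up):

-- Restrict a calculation view to one region for the users/roles it is granted to
CREATE STRUCTURED PRIVILEGE "AP_SALES_EMEA"
    FOR SELECT ON "_SYS_BIC"."demo.pkg/CV_QV_SALES"
    WHERE "REGION" = 'EMEA';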

SAP HANA Cockpit – offline and online – Sample dashboard (Fiori)

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

Few other HANA modelling area guidelines like Joins, Calculated columns, Filters, Aggregation and so on


The right place to create calculated columns in HANA models

◈ A view with calculated columns at the lower levels can be slower than a view with the equivalent calculated columns at the highest level. Some cases may differ based on business needs (case by case); for the most part we have implemented them at the top level (aggregation node) in our projects

◈ Calculated columns at the top level (aggregation): for example, if a calculation such as “Sales amount – Tax amount” is done at the lowest level (for each table record), it will take significantly longer to process. Another example is a rating flag such as sales > 20,000 that must be calculated on the sum of measures. All date calculations, timestamp conversions and time zone conversions should be pushed to the top level of the model, because they are the most expensive statements

◈ Calculated columns at the lowest level: if the calculated column is a measure, such as a count or sales value to which a discount or other logic needs to be applied, calculate it at the lowest level (so that it will not give you an unexpected value)

NOTE: The developer needs to analyze the scenario and create the calculated columns in the most suitable place based on the requirement

Filters

◈ Apply all filters, Fixed values and the input parameter expressions at the lowest level of the views
◈ Using filters is better than Inner Join to limit the dataset in terms of performance
◈ Avoid applying filters on calculated columns
◈ Use SQL analytical Privilege to filter data, and avoid using classical analytical privilege since they are obsolete in HANA 2.0 SPS03

Use SAP HANA restricted measures and logical partitioning (for example, partitioning logically by year or by plant when data needs to be viewed region-wise)

Aggregation

◈ When using the aggregation node, always try to use at least one of the aggregated functions like COUNT, MIN, MAX, AVG, so that the aggregation works much faster

◈ If some unavoidable duplicate rows are generated by the model, use an aggregation node immediately after the JOIN node and remove the duplicates at the lowest level itself

◈ Never use the TOP aggregation node for removing duplicates, always introduce a separate aggregation node for handling duplicates

◈ Be careful with the keep flag in the aggregation node; the results change when this flag is set (setting it at the lowest level is best)

SAP HANA modeling – Normalized virtual data model scenarios


SAP HANA virtual data modeling – Key concept 

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

SAP HANA best practice – Designing larger virtual data models using HANA views

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

SAP HANA General Tips and Tricks


Value help look up models

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

SAP HANA general tips and tricks – filter and aggregation to push down

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

SAP HANA views – Performance analysis – Must pursue in QA


SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Tutorials and Materials, SAP HANA Learning

Implementing Dynamic Join To Showcase Measure Based On Different Attribute In a Single HANA View


Introduction


There are different types of joins available in the database. SAP HANA offers all of these joins.

The different types of joins in SAP HANA

◈ INNER
◈ RIGHT OUTER
◈ LEFT OUTER
◈ FULL OUTER
◈ TEXT
◈ SPATIAL
◈ REFERENTIAL
◈ TEMPORAL
◈ DYNAMIC

In this blog post we will be exploring the “Dynamic Join” properties in HANA to create a single view and show revenue data based on different attributes.

Definition –


When a join is defined as dynamic, the join columns are determined dynamically at run-time based on the columns requested by the client query. Dynamic joins reduce the number of records processed by the join node at run-time, which improves the join execution.

Dynamic Joins VS Static Join


Dynamic Join
◈ The join condition changes with the fields requested in the query. The query returns a run-time error if the client query does not request at least one of the join columns.
◈ Aggregation is enforced before the join is executed: if a join column is not requested by the client query, its values are first aggregated, and the join is then executed on the columns that are requested.

Static Join
◈ The join condition does not change with the fields requested in the query.
◈ Aggregation happens after the join.

Prerequisite


At least one of the fields involved in the join condition must be part of the client query. If you define a join as dynamic, the engine dynamically determines the join fields based on the fields requested by the client query; if none of the join fields is part of the client query, the query ends with a run-time error. A dynamic join is only useful for a composite join (more than one field in the join condition).

Business Scenario –


We have a global HANA system with multiple regions, e.g. APAC, EMEA, LAO and NA.
We have a requirement to generate a report that shows the revenue numbers per region, company and product, but the business does not want multiple HANA VDMs created for this.

In the steps below we create one single VDM that uses a dynamic join to fulfil the business requirement: to show the gross revenue and the revenue ratio of each product across the regions.

We basically do two types of reporting on the same query:

1. Revenue ratio based on Region, Product and Company

2. Revenue ratio based only on Product and Company

With a static join, the Gross Revenue does not show the correct value for the second case.

The below example is part of my experience working with HANA in my current assignment. 

Source Table-

REGION   PRODUCT   COMPANY   REVENUE
APAC     WIPER     C1        20
APAC     WIPER     C2        20
APAC     NAPKIN    C1        20
APAC     NAPKIN    C2        10
APAC     DIAPER    C1        30
APAC     DIAPER    C2        20
EMEA     WIPER     C1        20
EMEA     WIPER     C2        10
EMEA     WIPER     C3        30
EMEA     NAPKIN    C1        10
EMEA     NAPKIN    C2        30
EMEA     NAPKIN    C3        20
EMEA     DIAPER    C1        20
EMEA     DIAPER    C2        20
EMEA     DIAPER    C3        10

In the example above, companies C1, C2 and C3 have a business share in the region EMEA, but in APAC only companies C1 and C2 have a business share.

To implement this in SAP HANA, we need to create a calculation view and use the same table in two aggregation nodes, as shown in the following steps.

Steps-


1. Let us start by creating the source table where the revenue data is being stored-

CREATE TABLE <SCHEMA>."REVENUE_MARKET" ("REGION" VARCHAR(10), "PRODUCT" VARCHAR(15), "COMPANY" VARCHAR(10), "REVENUE" INT);
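
To reproduce the example, the table can be filled with the rows from the source table above; a few sample INSERT statements as a sketch (using the same <SCHEMA> placeholder as in the CREATE statement):

INSERT INTO <SCHEMA>."REVENUE_MARKET" VALUES ('APAC', 'WIPER',  'C1', 20);
INSERT INTO <SCHEMA>."REVENUE_MARKET" VALUES ('APAC', 'WIPER',  'C2', 20);
INSERT INTO <SCHEMA>."REVENUE_MARKET" VALUES ('EMEA', 'WIPER',  'C3', 30);
-- ... the remaining rows of the source table are inserted in the same way.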

2. Now let us create a calculation view. In this calculation view we use the table we have just created in two different aggregation nodes.

i. In the first aggregation node, I take the table and use its columns to get the revenue details based on region, product and company.

ii. In the second aggregation node, I take the table and use its columns to get the revenue details based on product and company across the regions.

First aggregation node –

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

3. We want the Gross Revenue per product across all companies, so in the second aggregation node we take the same table but do not include “COMPANY”; the revenue is therefore aggregated over all companies. When adding the aggregated measure Revenue from this node, rename it to Gross Revenue.

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

4. Now we join these two aggregation nodes with an inner join on “REGION” and “PRODUCT”, and we need to derive the Revenue Ratio. So we create the calculated column “Revenue Ratio”.

Formula: Revenue Ratio = Revenue / Gross Revenue

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications
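
In plain SQL, the logic built so far (the two aggregation nodes, the static inner join on REGION and PRODUCT, and the calculated ratio) corresponds roughly to the following sketch; the view itself is modeled graphically, and the revenue column is assumed to be named REVENUE as in the source table:

SELECT A.REGION,
       A.PRODUCT,
       A.COMPANY,
       A.REVENUE,
       B.GROSS_REVENUE,
       TO_DECIMAL(A.REVENUE) / B.GROSS_REVENUE AS REVENUE_RATIO
FROM ( SELECT REGION, PRODUCT, COMPANY, SUM(REVENUE) AS REVENUE
       FROM   <SCHEMA>."REVENUE_MARKET"
       GROUP  BY REGION, PRODUCT, COMPANY ) A        -- first aggregation node
INNER JOIN
     ( SELECT REGION, PRODUCT, SUM(REVENUE) AS GROSS_REVENUE
       FROM   <SCHEMA>."REVENUE_MARKET"
       GROUP  BY REGION, PRODUCT ) B                 -- second aggregation node
     ON  B.REGION  = A.REGION
     AND B.PRODUCT = A.PRODUCT;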

Issue Faced-


Now when we do data preview-

We reconcile the source table data with the HANA data. When we check the Gross Revenue of a particular product with respect to a region, we find the correct values.

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

The issue arises when we check the Revenue Ratio of a company across all regions. We want the Gross Revenue to be shown across both APAC and EMEA.

Since company C3 has no business share in APAC, the revenue for C3 comes from EMEA only.

So the required output is not achieved, because we want the Gross Revenue to be calculated across the regions (APAC & EMEA).

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Solution –


To achieve the required result, we need to enable the Dynamic Join between the two aggregation nodes. The dynamic join only considers the join fields that are selected in the query output. Since we are not selecting “REGION” in the query, HANA does not execute the join on “REGION”; the Gross Revenue is aggregated across the regions before the join and therefore shows the total across APAC and EMEA.

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

We can see that we get the same data as with the static join if we include all fields in the report output (which was already correct).

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications

Now we exclude the Region from the query output-

SAP HANA Study Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA Certifications
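
Conceptually, when REGION is excluded from the query, the dynamic join behaves roughly like the following sketch: both sides are first aggregated without REGION, and the join is then executed on PRODUCT only (illustration only, same assumptions as in the sketch above):

SELECT A.PRODUCT,
       A.COMPANY,
       A.REVENUE,
       B.GROSS_REVENUE,
       TO_DECIMAL(A.REVENUE) / B.GROSS_REVENUE AS REVENUE_RATIO
FROM ( SELECT PRODUCT, COMPANY, SUM(REVENUE) AS REVENUE
       FROM   <SCHEMA>."REVENUE_MARKET"
       GROUP  BY PRODUCT, COMPANY ) A
INNER JOIN
     ( SELECT PRODUCT, SUM(REVENUE) AS GROSS_REVENUE
       FROM   <SCHEMA>."REVENUE_MARKET"
       GROUP  BY PRODUCT ) B
     ON B.PRODUCT = A.PRODUCT;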

In this way, by using a dynamic join, we can achieve the required result: we can now report the Gross Revenue across the regions (APAC and EMEA).

BW/4 HANA – Virtual Master Data Through SDA

With the increasing demand for multi-sourced data reporting, flexibility and agility, SAP offers a one-stop solution with BW/4HANA.

SAP S/4HANA provides operational reporting on current data from a single SAP application. SAP BW/4HANA delivers a modern data-warehousing environment that allows reporting on data from SAP and non-SAP applications. BW/4HANA, which is available only on the SAP HANA platform, is optimised for SAP HANA.

Introduction:


Master data, which remains unchanged over a long period of time, is the key data that serves as the basis for any transaction. Master data is vital for many transactions, and making it available for reporting enables better analysis and reporting.
With Smart Data Access (SDA), data can be read from S/4HANA at run-time without actually persisting it in the BW/4HANA system.

In today’s highly competitive world, organisations are leaning towards real-time decisions even on analytical data. Smart Data Access makes it possible to read remote data instantly.

SDA also allows merging data from multiple landscapes.

Main Content:

In this blog, I will share the steps to create Virtual master data in BW/4HANA via SDA from S/4 HANA.

Pre-requisite: S/4 HANA and BW/4 HANA are connected and Smart Data Access is enabled.

The steps to virtually read master data in BW/4HANA are:

◈ Create a table or Calculation view in S/4 HANA. Alternatively, read any SAP table.
◈ Create an Open ODS view in BW/4 HANA with semantics as Master Data, for attribute data.
◈ Read the S/4 HANA Table or Calculation view in BW/4 HANA using an Open ODS view.
◈ Create a query to view master data from Open ODS view.
◈ Associate the field from the transactional Open ODS view with the master data Open ODS view. (We can also create our own star schema with Open ODS views by connecting the facts Open ODS view with the master data and text Open ODS views.)
◈ Create a query to preview the data on Transactional Open ODS view to see Master data enabled.

Create a Calculation view in S/4 HANA


In this scenario, I created a calculation view in S4D on the base table MAKT. An Open ODS view can read data from any base table or view. Read access for an Open ODS view can be restricted to the required packages and tables using privileges.

In the given Calculation view, a Filter is applied on the Language key to fetch data only in English.
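
The data selection of this calculation view corresponds roughly to the following SQL on the text table MAKT (sketch only – the view is modeled graphically; SPRAS = 'E' is the language key for English):

SELECT MATNR,          -- material number
       MAKTX           -- material description
FROM   MAKT
WHERE  SPRAS = 'E';    -- filter on the language key: English only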

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

In BW Modeling, create an Open ODS view under the relevant InfoArea. Enter a name and description and choose the semantics appropriately.

Semantics option – Facts for transaction data, Master data for Attribute data and Texts for Text data.

Tip: Open ODS view doesn’t support reading hierarchy data.

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

Choose Source Type as “Virtual Data using SAP HANA Smart Data Access”

Provide the Source system name

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

Search for the calculation view from S4D. Prefix the search term with * as the view name will be prefixed with the package details. S/4 tables can also be read in this step.

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

Choose the field for the Representative Key Field; this is the main master data field. Characteristics (Key) is optional and can be used for compounding of fields.

Change the reporting properties of the field before saving and activating the Open ODS view.

Tip: At least one field has to be included in the Representative Key Field section to create an Open ODS view

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

Create a query on the Open ODS view to see attribute and text data.

Here I chose only two attributes. I also set the ‘Value output format’ in the Properties tab to display both “Key and Text”. By default, the “Show Result Rows” option is enabled; it needs to be disabled manually if required.

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

The query result is displayed according to the query properties. The result below is from transaction RSRT.

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

To associate master data with transaction data, there are two options:

1. Associate with Infoobject
2. Associate with Open ODS view.

Go to any transactional Open ODS view that requires the material master. Choose the field and associate the master data – in our scenario, an Open ODS view.

Important: change the default radio button option from “Usage of System-Wide Unique Name” to “Direct Usage of Associated Object by Name”.

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

Create a query on the transactional data Open ODS view and check the master data attributes and texts.

SAP BW/4 HANA, SAP HANA Certifications, SAP HANA Learning, SAP HANA Study Materials

How to Connect SAP HANA with GeoServer

SAP HANA and its spatial features can meanwhile be used with many third-party GIS (Geographical Information System) tools. One crucial question is how to connect SAP HANA with these tools. This can differ from tool to tool and is sometimes not well documented, especially when the possibility and support for a certain tool are new. In this blog I describe in detail how to connect SAP HANA with GeoServer.

SAP HANA can be used as a data store in GeoServer. http://docs.geotools.org/latest/userguide/library/jdbc/hana.html describes that this is possible. In this blog we give the details for an MS Windows 10 environment and GeoServer 2.15.0.

Deploy the HANA Plug-In for GeoServer: Go to https://mvnrepository.com/artifact/org.geotools.jdbc/gt-jdbc-hana/21.0 and click the button right beside “Files” to download the file gt-jdbc-hana-21.0.jar.
SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning

Copy the file gt-jdbc-hana-21.0.jar to the directory …\GeoServer 2.15.0\webapps\geoserver\WEB-INF\lib of the GeoServer installation.

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning

After that, the SAP HANA JDBC driver ngdbc.jar has to be copied into the same directory as well.

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning

You can find ngdbc.jar in the directory C:\Program Files\SAP\hdbclient of the hdbclient installation of your environment.

After this, restart the GeoServer and you will find SAP HANA as an additional vector data source under Stores –> Add new store:

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning

Define the Vector Data Source for SAP HANA: We only describe the connection parameters to a HANA database when creating a store in GeoServer. As a prerequisite you also need an existing workspace.

In a multi-tenant environment you have to enter the name of the tenant database and the instance number at the bottom of the entry mask. Note that it does NOT work to use the SQL port of the respective tenant while leaving the database field empty. The Port field is mandatory, but its value is not considered. The remaining connection parameters should be self-explanatory.

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning

In this example my tenant database is called SPL and there is a database schema called HACKT25.

You may now create a new layer in GeoServer via Layers –> Add new layer and cross-check what you see there with what you have in your remote HANA database.

SAP HANA Study Materials, SAP HANA Certifications, SAP HANA Learning
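
For a quick cross-check you can, for example, create a small spatial table in the schema used by the store; it should then be offered as a layer candidate. The table and its content below are made up for this sketch, and SRID 4326 is assumed to be installed:

CREATE COLUMN TABLE HACKT25.POI (
  ID   INTEGER PRIMARY KEY,
  NAME NVARCHAR(100),
  GEOM ST_GEOMETRY(4326)          -- spatial column using SRID 4326 (WGS 84)
);

INSERT INTO HACKT25.POI VALUES (1, 'Walldorf', NEW ST_POINT('POINT(8.64 49.29)', 4326));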