
End to End Development – SAP HANA and Web IDE

In this blog, we cover end-to-end application development: creating a database table in SAP HANA, developing virtual data models on top of the database tables, and finally using the Smart Templates available in Web IDE to create a Fiori application that consumes those virtual data models.

Smart templates, also known as SAP Fiori Elements, provide a framework for generating UIs at runtime based on metadata annotations and predefined templates for the most-used application patterns.

We use CDS to model the data source and expose it through SAP Gateway without writing a single line of code, relying on the rich semantic annotations of CDS. Follow the steps below to create the application.

Step 1: The first step is to create the database tables and insert records into the column store tables.
Figure 1 and Figure 2 show the structure of the tables used for modelling the views.


Figure 1: Structure ZCAR_COLOR


Figure 2: Structure ZCOMPANY_CAR

Step 2: The second step is to create CDS views on top of the database tables.

Find below the details of annotations used while creating CDS views.

View level Annotations: Applied to entire view and written before the define view statement.

Annotation @Search.searchable: true indicates that CDS View is relevant to search scenarios. 

Annotation @EnterpriseSearch.enabled: true indicates that an Enterprise Search connector should be created and activated.

Annotation @Metadata.allowExtensions: true indicates that the Enterprise Search connector can be enhanced/adapted.

All the annotations mentioned above are required to enable Enterprise Search on CDS views.

Annotation @OData.publish: true is used to expose the CDS view as an OData service. With it, the CDS view can be exposed through a Gateway service in a single step, without writing any code, using transaction /IWFND/MAINT_SERVICE.

Annotation @EndUserText.label: ‘My smart CDS search view’ is used to define the description of the Enterprise Search connector.

Annotation @ObjectModel.semanticKey: [‘MY_KEY_FIELD_1’, ‘MY_KEY_FIELD_2’, ‘MY_KEY_FIELD_3’] is used to define the Enterprise Search semantic key. The semantic key is, strictly speaking, not a key in terms of database theory; it identifies an instance of an object from a business/search perspective.

Annotation @UI.headerInfo.title: {value: ‘MY_TITLE_ELEMENT’} is used to define an element or field as title field.

Element level Annotations: Applied to individual elements/fields and written inside the curly braces, directly before the field they refer to in the field selection.

Annotation @Search.defaultSearchElement: true is used to mark the element/field as a freestyle request field.  For performance reasons, it is not recommended to have more than 15 freestyle request fields.

The element/field weight for ranking purposes can be defined via annotation @Search.ranking.   Ranking can be set as #HIGH, #MEDIUM and #LOW.

Annotation @UI.selectionField.position: ‘Position’ is used to specify the order of the selection fields that are used for filtering.

CDS View Name  Z_I_CAR_COLOR

@AbapCatalog.sqlViewName: 'ZICARCOLOR'
@AccessControl.authorizationCheck:  #NOT_REQUIRED
@EndUserText.label: 'Car Color Texts CDS View'
define view Z_I_CAR_COLOR 
as select from zcar_color 
{
   //zcar_color 
 key  color_code, 
 key  langu, 
      color_name 
}

CDS View Name  Z_I_COMPANY_CAR_DETAILS

@AbapCatalog.sqlViewName: 'ZICOMPANYCARD'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Company Car Details CDS View'
define view Z_I_COMPANY_CAR_DETAILS 
as select from zcompany_car 
association[0..*] to Z_I_CAR_COLOR as _ColorText
on $projection.color = _ColorText.color_code 
{
   //zcompany_car 
key   license, 
      brand, 
      color, 
      power,
      _ColorText
}

CDS View Name Z_C_CAR_SEARCH

@AbapCatalog.sqlViewName: 'ZCARSEARCH'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Car Search CDS View'
@Search.searchable: true
@EnterpriseSearch.enabled: true
@Metadata.allowExtensions: true
@ObjectModel.semanticKey: ['LICENSE_PLATE']
@OData.publish: true
@UI.headerInfo.title: {value: 'Enterprise Search Application'}

define view Z_C_CAR_SEARCH 
as select from Z_I_COMPANY_CAR_DETAILS
{
  @UI.selectionField.position: 10
  @EndUserText.label: 'License Plate'
  @Search.ranking: #HIGH  
  @Search.defaultSearchElement: true
  key license as license_plate,
  @UI.selectionField.position: 20
  @EndUserText.label: 'Auto Brand'
  @Search.ranking: #HIGH  
  @Search.defaultSearchElement: true
  brand,
  @UI.selectionField.position: 30
  @EndUserText.label: 'Horse Power'
  power,
  @UI.selectionField.position: 40
  @EndUserText.label: 'Car Color'
  @Search.ranking: #MEDIUM  
  @Search.defaultSearchElement: true
  _ColorText[ 1: langu = $session.system_language ].color_name as car_color
}

As indicated in Figure 3, the OData service is created automatically. Hovering over the yellow indicator shows a message that the OData service is not yet activated.


Figure3: Name of OData Service

Step 3: The third step is to activate the OData service. The OData layer at the Gateway level is used to expose the data to the outside world.

Go to Transaction: /IWFND/MAINT_SERVICE

Add the service, which is created automatically after specifying the @OData.publish: true annotation in the CDS view, and then activate it. Figures 4a, 4b and 4c show the service being added and activated.


Figure 4a: Add Service Option

In this case SAP Gateway is set up in the embedded deployment scenario, hence we select LOCAL as the system alias.


Figure 4b: Get the required service

Provide the package name to which you want to assign the service. Adding the service produces the message that ‘Z_C_CAR_SEARCH_CDS’ was created and its metadata was loaded successfully.


Figure 4c: Service Added and Activated.

Click on Call Browser or SAP Gateway Client to check whether the service is working. Figure 5a shows that the service works (return code 200), and Figures 5b and 5c show the URL to view the metadata and the metadata of the service.


Figure 5a Service Working Properly


Figure 5b URL to view the Metadata


Figure 5c: Metadata of the service

Step 4: The fourth step is to build the user interface using an SAP Fiori Smart Template that consumes the OData service created in Step 3.

With HANA 2.0, SAP Web IDE is the integrated development environment used for any kind of development (ABAP, Java, Node.js, XSJS, etc.) going forward.

Follow the below steps:

◈ Open Web-IDE, go to FILE in menu option
◈ Click on New
◈ Select Project from Template
◈ Choose SAP Fiori Worklist Application from available templates as shown in Figure 6


Figure 6: Worklist Application as Template

Select the system where your OData service resides and then pick the OData service as shown in Figure 7. The HANA connector can be one of the options to maintain the connection between Web IDE and the backend/on-premise system.


Figure 7 System and OData Service

Provide the application settings and perform the data binding as per your requirements. Once the configuration and bindings are done, click the Finish button. This creates a project in our workspace. Figure 8 shows the application interface after running the project.


Figure 8: Application Interface

Figure 9 shows the search operation performed for brand name.


Figure 9: Search Operation

SAP HANA Text Mining Functions – Part 1

In this blog, we’ll discuss text mining functions: the functions available to find the top-ranked related and relevant documents and terms.

Figure 1 shows the permutations and combinations available for doing Text Mining.



Figure 1: Text Mining Functionality

Available Functions in SAP HANA Text Mining
  • First Block is to identify related and suggested terms. Functions available for these operations:
    • TM_GET_RELATED_TERMS
    • TM_GET_SUGGESTED_TERMS
  • Second Block is to identify related or relevant documents. Functions available for these operations:
    • TM_GET_RELATED_DOCUMENTS
    • TM_GET_RELEVANT_DOCUMENTS
  • Third Block is to identify relevant terms of a document. Function available for this operation:
    • TM_GET_RELEVANT_TERMS
  • Fourth Block is to categorize or classify documents. Function available for this operation:
    • TM_CATEGORIZE_KNN

Document Functions

TM_GET_RELATED_DOCUMENTS

This text mining function returns the top-ranked related documents for a query document within a search request and stores these documents (including metadata) in the return table.

Syntax:

TM_GET_RELATED_DOCUMENTS
( <tm_document>
<tm_search>
<tm_return_document> )

where
<tm_document> := 
DOCUMENT { <string>  [ LANGUAGE <string> ] [ MIME TYPE <string> ]    
| (  <subquery> )   [ LANGUAGE <string> ] [ MIME TYPE <string> ]    
| IN FULLTEXT INDEX WHERE <condition> }

Either provide the text as a string, provide a select query, or specify the query document as part of a full-text index using a WHERE clause for restriction.

<tm_search> := 
SEARCH <reference column> FROM <reference table> [ WHERE <condition> ]
[ WITH TERM TYPE <string>, ... ]

Specifies the set of reference documents via <reference column> and <reference table>. The specified reference column must be of type TEXT or must have a full-text index. To restrict the set of reference documents used in the calculation, specify a WHERE condition. A further restriction can be introduced using WITH TERM TYPE, for example 'proper*','noun', which will only consider proper names and nouns.

<tm_return_document> :=
RETURN
[ PRINCIPAL COMPONENTS <pc int> ]   -- output FACTORS, ROTATED_FACTORS
[ CLUSTERING [<string>] ]           -- output CLUSTER_LEVEL, CLUSTER_LEFT, CLUSTER_RIGHT
[ CORRELATION ]                     -- output CORRELATIONS
[ HIGHLIGHTED ]                     -- output HIGHLIGHTED_DOCUMENT, HIGHLIGHTED_TERMTYPES
TOP { <top int> | DEFAULT }
[ <column> [as <alias>], ... ]      -- output columns out of <table> in <tm_search>

When the PRINCIPAL COMPONENTS keyword is specified, a principal components analysis (factor analysis) is calculated on the correlation matrix of the found documents. Factor analysis is a data reduction method: it extracts important variables (in the form of components) from a large set of variables available in a data set, deriving a low-dimensional set of features from high-dimensional data with the aim of capturing as much information as possible. It is always performed on a symmetric correlation or covariance matrix. The <pc int> factors are returned as arrays in the column FACTORS of the result table, and the rotated factors are returned as arrays in the column ROTATED_FACTORS. The graphic in Figure 2 shows the transformation of 3-dimensional data to 2-dimensional data (from high to low dimension) using PCA.


Figure 2 PCA Example

Clustering: If [CLUSTERING <string>] is specified, a hierarchical bottom-up cluster analysis will be performed on the found related documents.

In data mining, hierarchical clustering (also called hierarchical cluster analysis or HCA) is a method of cluster analysis which seeks to build a hierarchy of clusters. The result is a set of nested clusters organised as a hierarchical tree, which can be visualized as a dendrogram. The two types of hierarchical clustering algorithms are agglomerative and divisive. Agglomerative clustering starts with each data point as an individual cluster and at each step merges the closest pair of clusters, until just k clusters remain. Divisive clustering starts with one all-inclusive cluster and at each step splits a cluster, until k clusters are formed. Figure 3 shows the graphic for HCA.


Figure 3 Nested Cluster and Dendrogram

Find below the list of algorithms available –

‘SINGLE_LINKAGE’: In single linkage, we define the distance between two clusters to be the minimum distance between any single data point in the first cluster and any single data point in the second cluster. This algorithm is sensitive to noise and outliers. Figure 4 shows the graphic for single linkage algorithm.


Figure 4: Single Linkage Nested Cluster and Dendrogram

‘COMPLETE_LINKAGE’: In complete linkage, we define the distance between two clusters to be the maximum distance between any single data point in the first cluster and any single data point in the second cluster. This produces more balanced clusters with approximately equal diameters. Figure 5 shows the graphic for the complete linkage algorithm.


Figure 5: Complete Linkage Nested Cluster and Dendrogram

‘AVG_DISTANCE_WITHIN’ and ‘AVG_DISTANCE_BETWEEN’: In average linkage, we define the distance between two clusters to be the average distance between data points in the first cluster and data points in the second cluster. Figure 6 shows graphic for Average distance algorithm.


Figure 6: Average Distance Nested Cluster and Dendrogram

‘WARD’: This method looks at cluster analysis as an analysis of variance problem, instead of using distance metrics or measures of association. As per this method, the distance between two clusters, A and B, is how much the sum of squares will increase when we merge them. Figure 7 shows graphic for Ward Algorithm.


Figure 7: Ward Nested Cluster and Dendrogram

The result of the cluster analysis is stored in the columns CLUSTER_LEVEL, CLUSTER_LEFT, and CLUSTER_RIGHT of the result table. Correlation is the association between two variables.

Correlation keyword returns the correlation matrix between the found documents as arrays in the column CORRELATIONS of the result table.  Highlighted keyword returns the document texts with highlighted information.

First Example: In this case, we pin it down to one document (Federal_award_id_number = 1304684). The input is a query against the full-text index with that document number, which is run against the term document matrix/text mining index to fetch the top 5 related/similar documents. The score depicts the similarity between the documents: the higher the value, the more similar the documents. Figure 8a shows the result of the function TM_GET_RELATED_DOCUMENTS.
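For orientation, a call of this type might look roughly as follows. The schema and table names ("MYSCHEMA"."NSF_AWARDS") and the column names are illustrative assumptions; only the document id 1304684 and the abstract column come from the example described above.

SELECT *
  FROM TM_GET_RELATED_DOCUMENTS (
       -- the query document is picked from the full-text index by its id
       DOCUMENT IN FULLTEXT INDEX WHERE "FEDERAL_AWARD_ID_NUMBER" = 1304684
       -- reference documents: the abstract column of the (assumed) awards table
       SEARCH "AWARD_ABSTRACT" FROM "MYSCHEMA"."NSF_AWARDS"
       RETURN TOP 5
       "FEDERAL_AWARD_ID_NUMBER", "AWARD_ABSTRACT" );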


Figure 8a: Result Set of TM_GET_RELATED_DOCUMENTS

The first two top-ranked documents have the same score value of 1, which means they match exactly (the Award_abstract column has the same content for both documents). Further down the list the documents become less similar and the score value decreases.

Second Example: This example adds statistical analysis. Again, we pin it down to one document (Federal_award_id_number = 1304684). The input is a query against the full-text index with that document number, run against the term document matrix/text mining index to fetch the top 5 related/similar documents, this time also returning principal components, a clustering and the correlation matrix. Figure 8b shows the result of the function TM_GET_RELATED_DOCUMENTS.
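A sketch of the same call extended with the statistical options, again with assumed table and column names; PRINCIPAL COMPONENTS, CLUSTERING and CORRELATION add the extra output columns described earlier, and the clustering algorithm string is optional.

SELECT *
  FROM TM_GET_RELATED_DOCUMENTS (
       DOCUMENT IN FULLTEXT INDEX WHERE "FEDERAL_AWARD_ID_NUMBER" = 1304684
       SEARCH "AWARD_ABSTRACT" FROM "MYSCHEMA"."NSF_AWARDS"
       RETURN
         PRINCIPAL COMPONENTS 2   -- FACTORS, ROTATED_FACTORS
         CLUSTERING               -- CLUSTER_LEVEL, CLUSTER_LEFT, CLUSTER_RIGHT
         CORRELATION              -- CORRELATIONS
         TOP 5
         "FEDERAL_AWARD_ID_NUMBER" );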


Figure 8b: Result Set of TM_GET_RELATED_DOCUMENTS

TM_GET_RELEVANT_DOCUMENTS

This text mining function returns the top-ranked documents that are relevant to a term.

Syntax:
TM_GET_RELEVANT_DOCUMENTS ( 
<tm_term> 
<tm_search>
<tm_return_document> )

where 
<tm_term> :=   TERM <string>   [ LANGUAGE <string> ]

In this case, specify the term and language to be processed. 

<tm_search> :=   
SEARCH <reference column> FROM <reference table> [ WHERE <condition> ] [ WITH TERM TYPE <string>, ... ]
Specifies the set of reference documents in <reference column> and <reference table>. The specified column must be of type text or must have a full-text index. The set of reference documents can be restricted by WHERE <condition> or With Term Type.

<tm_return_document> :=
RETURN
[ PRINCIPAL COMPONENTS <pc int> ]   -- output FACTORS, ROTATED_FACTORS
[ CLUSTERING [<string>] ]           -- output CLUSTER_LEVEL, CLUSTER_LEFT, CLUSTER_RIGHT
[ CORRELATION ]                     -- output CORRELATIONS
[ HIGHLIGHTED ]                     -- output HIGHLIGHTED_DOCUMENT, HIGHLIGHTED_TERMTYPES
TOP { <top int> | DEFAULT }
[ <column> [as <alias>], ... ]      -- output columns out of <table> in <tm_search>

For explanation of above options, refer to previous section. If specified, the options PRINCIPAL COMPONENTS, CLUSTERING, CORRELATION, and [HIGHLIGHTED] must be used in this order. TOP must always be specified as the last option.

Example: The input is the term “Ocean”, which is run against the document matrix/text mining index to fetch the top 5 relevant documents. Figure 9 shows the result set of the function TM_GET_RELEVANT_DOCUMENTS.
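A minimal sketch of such a call, with the same assumed reference table and column as above:

SELECT *
  FROM TM_GET_RELEVANT_DOCUMENTS (
       TERM 'ocean' LANGUAGE 'en'
       SEARCH "AWARD_ABSTRACT" FROM "MYSCHEMA"."NSF_AWARDS"
       RETURN TOP 5
       "FEDERAL_AWARD_ID_NUMBER" );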


Figure 9: Result Set of TM_GET_RELEVANT_DOCUMENTS

Use Excel to query and analyze HANA data


Background


For decades, Microsoft Excel has remained the tool of choice for many users for data analysis. Even in the modern BI era, “Export to Excel” is a must-have feature for many BI tools. In the HANA world, we hear the question from time to time of how to connect Excel to HANA. This blog introduces a new, completely free way to connect Excel to HANA. It overcomes the limitations of existing methods and can be a great companion to HDBStudio for interactive HANA data query and analysis through Excel.

Review of existing options


Currently, there are three popular methods to connect Excel to HANA, all with the help of the HANA Client: 1) the ODBC way, 2) the MDX way, 3) the ODATA way.
Let’s have a quick look at these approaches and their limitations. The detailed walk-through steps can be found in plenty of guides online and will not be repeated here.

The assumption here is that HANA Client is installed on Windows already.

1. ODBC way

The ODBC way requires setting up the ODBC DataSource leveraging HDBODBC driver. Then, Excel uses Microsoft Query to communicate with HANA through the ODBC driver. The data flow is

    [HANA] -> [ODBC driver] -> [Microsoft Query] -> Excel

The user experience is not ideal. First, we encounter the following warning message from Microsoft Query.


The user name and password information is saved in the “<connection_name>.dsn” files. The lack of password encryption could be a problem from an IT security compliance standpoint.

Next, once we proceed, the “Table Options” dialog is rather difficult to navigate with all its unresizable tiny areas, especially when there are a lot of schemas to select from:


At the next screen, where filters are defined, we notice there is fixed space for up to 3 filter criteria only.


Once we are done with the configuration and proceed to the last step to run the query, we can often encounter an error message like the one below, which makes us wonder whether Unicode is handled properly here.


Another issue we noticed is that Microsoft Query complains with the error “Data Truncated” when making an ODBC connection to certain HANA revisions (including the latest HANA Express), while some older revisions are fine. More investigation is required to identify the root cause, but for now the fact is that there are cases where the connection cannot even be established.

When the connection and object configuration are defined successfully, the HANA data querying functionality itself works OK. The challenge is mostly the user experience, efficiency, and bugs like those shown above.

2. MDX Provider way

This approach utilizes Excel’s Data Connection Wizard to connect to HANA cubes, e.g. calculation views and analytic views. It is a solid data provider which does not have the disruptive user experience issues encountered in the MSQuery/ODBC way, but by design it is for dimensional data analysis only, not for querying content objects like tables/views/synonyms.

3. ODATA way

The ODATA way requires setting up an ODATA project in HDBStudio and activating the ODATA service endpoint. Then, Excel can connect to the ODATA URI. By design, this is a development effort which runs like a small project. If there is a requirement to access another object in HANA, we go through this development and release cycle again. So, by design this is not meant to be self-service style data exploration, but a project-oriented delivery which requires a delivery cycle. For the requirement of limited data exposure through managed interfaces, this is the best choice.

A New Way to connect Excel and HANA


There is another way to bridge Excel and HANA: AecorSoft Reporting HANA Edition, a free piece of software that functions as an Excel Add-In.

It is a small-footprint installation for Excel. Under the hood, it utilizes the HANA Client’s ADO.NET data provider to communicate with the HANA system.

After installation, Excel will have a new ribbon menu


Define Connection


First step is to define the HANA system connection, by clicking the “Connection Manager” button.


Define Task


Next, use the “Report Task Manager” button to bring up the Task dialog, then proceed with the “New” button to start defining a new task. Currently, all three catalog object types (Table, View, Synonym) are supported.

Select the connection just created, choose the object type (table or view or synonym), specify the object name (use wildcard if needed), and click “Search”. 


Here, we can browse the basic metadata information of the object. Highlight the object we want to work with and click “Finish”

Task configuration and definition in Excel pane


Now the object metadata is brought to the Excel pane like this. It is also a great way to inspect the object column type and length information.


If the HANA object’s columns have Comments defined in the metadata table, they will show up here as descriptions. The columns in the “Column Order” section can be re-arranged through drag-and-drop.

The task name can be renamed by double-clicking on the “Task Name” text box.

Filter


Filters can be defined through right clicking on the field either in the “Metadata” section or the “Column Order” section.


Once defined, it looks like this in the Filter section:


Load Data to Excel


Once everything is defined, the last step is simply to click the “Load to sheet” button.


During data loading, the progress is shown at the bottom of the pane.


If users don’t have authorization to view the data, an error message states the insufficient privilege. Security and authorization depend on the actual security model defined in HANA.

Local storage of task and connection information

There are two ini files under %appdata%\Roaming\AecorSoftReporting folder:

◈ AecorSoftHANATasks.ini
◈ AecorSoftHANASourceConnections.ini

The passwords are encrypted.

Compute Distance using a Calculation View – XS Advanced Model

We’re going to learn how to create a Graphical Calculation View to calculate the distance between two locations or coordinates represented by a longitude and a latitude. We assume that some of these locations or coordinates are already known and the other is an input from the user.

This is achieved by using the HANA spatial capabilities. HANA supports three spatial reference systems (SRS) by default. For this blog we will use the WGS84 SRS (SRID 4326) because HANA supports it and the coordinates used are based on Google Maps.

If you right-click anywhere on Google Maps and select “What’s here?” from the context menu, a small pop-up appears showing the latitude and longitude. We will use these coordinates to create a few records in a database table.


To develop this project I am using HANA Express Edition (the free version) running on my local machine. At the time of writing, the version was HANA 2.0 SPS 02, and I am using SAP Web IDE build 4.2.21.

Step 01 : Create a MTA Project and View HDI Container


1. Create an MTA project by selecting Project from Template as you normally do. I called my project DistCalc.
2. Create a Database Module. Make sure you select database version 2.0 SPS 02 and select Build module after creating, so the HDI container is created right after clicking Finish. The module was named db.
3. After the module is created and built, add the container to the Database Explorer. We will be spending most of our time there.


Step 02 : Create a CDS Artifact to Model a Table to Store Coordinates


To keep it short, use the code shown below in a CDS artifact to create a simple table that stores markers with coordinates we already know. After the file is built, the table will look like the image below within the Database Explorer.

namespace DistCalc.db;

context model {
    entity Marker {
        key markerId   : Integer generated always as identity(start with 1000 increment by 1 no maxvalue);
            markerName : String(10);
            latitude   : String(35);
            longitude  : String(35);
    };
};


Step 03 : Insert Coordinate records into Table


Use the following INSERT statements to add 5 markers. The marker coordinates were taken from Google Maps. The image below will show the actual coordinates I picked.

INSERT INTO "DistCalc.db::model.Marker" ("markerName", "latitude", "longitude") VALUES('marker-01', '-37.784556', '145.268080' );
INSERT INTO "DistCalc.db::model.Marker" ("markerName", "latitude", "longitude") VALUES('marker-02', '-28.906649', '136.329625' );
INSERT INTO "DistCalc.db::model.Marker" ("markerName", "latitude", "longitude") VALUES('marker-03', '-27.820271', '147.298336' );
INSERT INTO "DistCalc.db::model.Marker" ("markerName", "latitude", "longitude") VALUES('marker-04', '-30.506094', '123.787594' );
INSERT INTO "DistCalc.db::model.Marker" ("markerName", "latitude", "longitude") VALUES('marker-05', '-21.766935', '133.316398' );


The records in the table should look like those below. The markerId might differ from yours after the insert because it is generated automatically as per the CDS entity model we used above.


Step 04 : Create the Calculation View


This is where it gets interesting.

1. Create a graphical calculation view, add the previously created Marker table as a Data Source to the Projection, and use all of its columns for the output.

2. Create a Calculated Column of Data Type  ST_GEOMETRY and use the following code in the expression editor. Use the Compute Engine for the expression.

ST_GeomFromText('Point(' + "longitude" + ' ' + "latitude" + ')', 4326)

This expression uses the latitude and longitude values mapped from the Marker table to create an ST_POINT within an ST_GEOMETRY spatial data type. The spatial reference system identifier 4326 is also used. The mapped columns and the calculated column should look like the images below.



When you open the data of the calculation view within the database explorer you should see the ST_GEOMETRY type column having an alphanumeric string. This is the calculated ST_GEOMETRY coordinate value for each latitude longitude combination.


Step 05 : Add Input Parameters to the Calculation View


Things get more interesting here.

1. Add 2 parameters of type Input Parameter to the graphical calculation view, so that a coordinate can be entered as a latitude and a longitude, as shown in the images below.
in_latitude


in_longitude


2. Add 2 more Calculated Columns to map the 2 input parameters created above. This makes it easy to use the inputs in another Calculated Column that derives an ST_GEOMETRY value for the input coordinate. The mapping is done within the expression editor using the Column Engine as shown below. Note that the spatial reference system id 4326 is used here as well.
Input Latitude

'$$in_latitude$$'


Input Longitude

'$$in_longitude$$'


Input Coordinate

ST_GeomFromText('Point(' + "InputLongitude" + ' ' + "InputLatitude" + ')', 4326)


3. Run a SQL statement with the input parameters to see the resulting InputCoordinate as an ST_GEOMETRY data type. The InputCoordinate should have the same value for all records, as shown below. If the image is not clear, zoom in.


Step 06 : Create Calculated Column for the Distance


We have reached the final and the most interesting part.

By using the InputCoordinate ST_GEOMETRY type and the previously created MarkerCoordinate ST_GEOMETRY type as inputs to the HANA Spatial Function ST_Distance we can calculate the distance in meters. The ST_Distance function returns values of type DOUBLE.

To do this we need to create our final Calculated Column and use the following expression using the Column Engine.

ST_Distance("InputCoordinate","MarkerCoordinate")


Finally, when you run the calculation view using a SQL statement providing the coordinate inputs, you will see the calculated distance to each of the marker records created above. Based on the input coordinates below, the distance to marker-01 should be about 1.86 kilometres.

SELECT TOP 1000
"markerId",
"markerName",
"latitude",
"longitude",
"MarkerCoordinate",
"InputLatitude",
"InputLongitude",
"InputCoordinate",
"DistanceToMarker"
FROM "DISTCALC_HDI_DB_1"."DistCalc.db::MarkerDistance"
(placeholder."$$in_latitude$$"=>'-37.78087303495233', 
placeholder."$$in_longitude$$"=>'145.28872537183577');
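As a quick cross-check of the spatial logic outside the calculation view, the same distance can also be computed directly in SQL with the functions used above (a minimal sketch; the literal values are the marker-01 and input coordinates from this example):

SELECT ST_GeomFromText('POINT(145.268080 -37.784556)', 4326)
         .ST_Distance(ST_GeomFromText('POINT(145.28872537183577 -37.78087303495233)', 4326))
       AS "DistanceToMarker01"
  FROM DUMMY;
-- expected result: roughly 1860 (meters), i.e. about 1.86 km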


This concludes our Compute Distance project. I will be creating another blog in the future to show how I consume this Calculation View and expose it via an OData service.

SAP HANA Text Mining Functions – Part 2

In this blog, we’ll discuss the remaining text mining functions available to find the top-ranked related and relevant documents and terms.

Document Classification or Categorization

One category of text mining functions is document classification or categorization. The SQL function in HANA for performing this operation is TM_CATEGORIZE_KNN.

TM_CATEGORIZE_KNN

The k-nearest neighbor algorithm is used for predicting or classifying objects based upon their similarity and closeness to available labelled data. This function classifies an input document with respect to sets of categories.

K-Nearest Neighbor classification, applied to document categorization:

◈ Requires a “reference set” of previously classified documents
◈ Takes an input document and returns the most likely categories for it by comparing it to the documents in reference set
◈ KNN Classifier determines the K nearest neighbors or similar documents from the reference set and then sums and normalizes their similarities per category value to determine the winning category value
◈ Return table contains the suggested categorizations for the target documents with weightage (score value)

When a new data point is to be classified, its distance from each of the labelled data points is computed. This is the simplest of all machine learning algorithms, explained with the graphic below. Figure 1 shows how new data points are predicted with this algorithm.


Figure 1: k-nearest neighbor algorithm

In the figure, we need to predict the class of the new data point (the triangle shown in green). On the left-hand side, the new data point is classified as category 1, as the majority of the nearest neighbors inside the circle belong to category 1. On the right-hand side, the new data point is classified as category 0, as the majority of the nearest neighbors inside the circle belong to category 0. The classification outcome (category 1 vs. category 0) therefore changes with the neighborhood distance considered.

Syntax: 
TM_CATEGORIZE_KNN ( 
<tm_document>
<tm_search_categorize_knn>
{ <tm_return_category>, ….}
)

Where
<tm_document> := 
DOCUMENT { <string>  [ LANGUAGE <string> ] [ MIME TYPE <string> ]    
| ( <subquery> )   [ LANGUAGE <string> ] [ MIME TYPE <string> ]
| IN FULLTEXT INDEX WHERE <condition> }

Specify the document which you want to categorize in <tm_document>. Either provide the text as a string, provide a select query, or specify the query document as part of a full-text index using a WHERE clause for restriction.

SEARCH NEAREST NEIGHBORS
{ <knn_int> | DEFAULT } 
<reference_column> FROM <reference_table> [ WHERE <condition>] 
[ WITH TERM TYPE <string>, ….]
….

In <tm_search_categorize_knn> you search for the nearest neighbors using the syntax above. Specify the reference documents via the reference column and table. The reference documents can be restricted by specifying a WHERE condition or a WITH TERM TYPE clause.

RETURN TOP { <top_int>| DEFAULT } 
<category_column> FROM <category_table>
[ JOIN <reference_column>  ON <primary key of category table> = <primary key of reference table>  ]

<tm_return_category> specifies the maximum number of category results to return for the specified category column. The category column may be a column in the same table as the reference documents, or a column in a different table that can be used via the JOIN clause.

In this case, we pin it down to one document (Federal_award_id_number = 1304684). The input is a query against the full-text index with that document number, which is run against the term document matrix/text mining index to fetch the top 5 nearest neighbors. The score depicts the weight: the higher the value, the better the classification of the document.
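A sketch of what such a call might look like. The schema/table names and the category column are assumptions for illustration (here a hypothetical "NSF_DIRECTORATE" column in the same reference table is used as the category); the document id and abstract column come from the example above.

SELECT *
  FROM TM_CATEGORIZE_KNN (
       DOCUMENT IN FULLTEXT INDEX WHERE "FEDERAL_AWARD_ID_NUMBER" = 1304684
       SEARCH NEAREST NEIGHBORS 5 "AWARD_ABSTRACT" FROM "MYSCHEMA"."NSF_AWARDS"
       RETURN TOP 3 "NSF_DIRECTORATE" FROM "MYSCHEMA"."NSF_AWARDS" );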

Figure 2 shows the result of function TM_CATEGORIZE_KNN.


Figure 2: Result Set of TM_CATEGORIZE_KNN

********************************************************************

Term Functions

TM_GET_RELATED_TERMS: This text mining function returns the top-ranked related terms for a query term, based on a set of reference documents.

Syntax
TM_GET_RELATED_TERMS ( 
<tm_term> 
<tm_search> 
<tm_return_term> )

Where
<tm_term> :=  
 TERM <string>  [ LANGUAGE <string> ]

Specifies the term and the language to be processed. The input can be a single term or multiple terms, with optional term types and wildcards, for example: sap, sap:noun (used as a noun), etc.

<tm_search> := 
SEARCH <column>   FROM <table>   [ WHERE <condition> ]
[ WITH TERM TYPE <string>, ... ]

Specifies the set of reference documents in <column> and <table>. The specified column must be of type text or must have a full-text index. Set of documents can be restricted by where conditions and with term type. 

<tm_return_term> :=
RETURN
[ PRINCIPAL COMPONENTS <pc int> ]   -- output FACTORS, ROTATED_FACTORS
[ CLUSTERING [<string>] ]           -- output CLUSTER_LEVEL, CLUSTER_LEFT, CLUSTER_RIGHT
[ CORRELATION ]                     -- output CORRELATIONS
TOP { <top int> | DEFAULT }

For explanation of above options, refer to previous section. If specified, the options PRINCIPAL COMPONENTS, CLUSTERING and CORRELATION must be used in this order. TOP must always be specified as the last option.

In this case, the input is the term “ocean”, which is run against the term document matrix/text mining index to provide the top 5 ranked related terms, based on co-occurrences. Figure 3 shows the result of the function TM_GET_RELATED_TERMS.
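A minimal sketch of the call, with the reference table and column names assumed as in Part 1:

SELECT *
  FROM TM_GET_RELATED_TERMS (
       TERM 'ocean' LANGUAGE 'en'
       SEARCH "AWARD_ABSTRACT" FROM "MYSCHEMA"."NSF_AWARDS"
       RETURN TOP 5 );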


Figure 3: Result Set of TM_GET_RELATED_TERMS

TM_GET_RELEVANT_TERMS: This text mining function returns the top-ranked relevant terms that describe a document.

Syntax:
TM_GET_RELEVANT_TERMS ( 
<tm_document> 
<tm_search> 
<tm_return_term> )

<tm_document> := 
 DOCUMENT { <string>  [ LANGUAGE <string> ] [ MIME TYPE <string> ]   
 | ( <subquery> )   [ LANGUAGE <string> ] [ MIME TYPE <string> ]   
 | IN FULLTEXT INDEX WHERE <condition> }

Either provide the text as a string, provide a select query, or specify the query document as part of a full-text index using a WHERE clause for restriction.

<tm_search> := 
SEARCH <column>   FROM <table>   [ WHERE <condition> ]
[ WITH TERM TYPE <string>, ... ]

Specifies the set of reference documents in <column> and <table>. The specified column must be of type text or must have a full-text index. Set of documents can be restricted by where conditions and with term type. 

<tm_return_term> :=
RETURN
[ PRINCIPAL COMPONENTS <pc int> ]   -- output FACTORS, ROTATED_FACTORS
[ CLUSTERING [<string>] ]           -- output CLUSTER_LEVEL, CLUSTER_LEFT, CLUSTER_RIGHT
[ CORRELATION ]                     -- output CORRELATIONS
TOP { <top int> | DEFAULT }

For explanation of above options, refer to previous section. If specified, the options PRINCIPAL COMPONENTS, CLUSTERING and CORRELATION must be used in this order. TOP must always be specified as the last option.

In this case, we pin it down to one document (Federal_award_id_number = 1304684). The input is the document, which is run against the document matrix/text mining index to fetch the top 5 relevant terms in the document. We get the relevant terms, the normalized terms (where capitalization and diacritics are removed), and the term type, which gives the part of speech from text analysis. This example shows how text mining and text analysis complement each other. Figure 4 shows the result of the function TM_GET_RELEVANT_TERMS.
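A minimal sketch of the call, again with assumed schema/table names:

SELECT *
  FROM TM_GET_RELEVANT_TERMS (
       DOCUMENT IN FULLTEXT INDEX WHERE "FEDERAL_AWARD_ID_NUMBER" = 1304684
       SEARCH "AWARD_ABSTRACT" FROM "MYSCHEMA"."NSF_AWARDS"
       RETURN TOP 5 );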


Figure 4: Result Set of TM_GET_RELEVANT_TERMS

TM_GET_SUGGESTED_TERMS: This text mining function returns the top-ranked terms that match an initial substring. This function can be used for type-ahead or auto-completion functions.

Syntax:
TM_GET_SUGGESTED_TERMS
 ( <tm_term>
 <tm_search> 
<tm_return_top> )

Where
<tm_term> := 
TERM <string>  [ LANGUAGE <string> ]

Specifies the term and the language to be processed.

<tm_search> := 
SEARCH <reference column>   FROM <reference table>   [ WHERE <condition> ]
[ WITH TERM TYPE <string>, ... ]

Specifies the set of reference documents in <reference column> and <reference table>. The specified column must be of type text or must have a full-text index. Set of documents can be restricted by where conditions and with term type. 

<tm_return_top> :=
RETURN TOP { <top int> | DEFAULT }

Specifies the number of top returned terms.

In this case, the input is a partial term which is run against the term document matrix/text mining index to get the top 5 suggestions as output. Figure 5 shows the result of the function TM_GET_SUGGESTED_TERMS.
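A minimal sketch of a type-ahead style call, where the partial term 'oce' is an illustrative input and the reference table is assumed as before:

SELECT *
  FROM TM_GET_SUGGESTED_TERMS (
       TERM 'oce' LANGUAGE 'en'
       SEARCH "AWARD_ABSTRACT" FROM "MYSCHEMA"."NSF_AWARDS"
       RETURN TOP 5 );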


Figure 5: Result Set of TM_GET_SUGGESTED_TERMS

Table redistribution and repartitioning in a BW on HANA system

In this blog, I am providing step by step details about how to perform table redistribution and repartitioning in a BW on HANA system. Several steps described in this document are specific only to a BW on HANA system.

In a scaled-out HANA system, tables and table partitions are spread across several hosts. The tables are distributed initially as part of the installation. Over time, as more and more data gets pumped into the system, the distribution may become distorted, with some hosts holding a very large amount of data and others holding much less. This leads to higher resource utilization on the overloaded hosts and may cause various problems there, such as high CPU utilization, frequent out-of-memory dumps, and increased redo log generation, which in turn can cause problems in system replication to the DR site. If the rate of redo log generation on the overloaded host is higher than the rate at which the logs are transferred to the DR site, the buffer full count increases and pressure is put on the replication network between the primary nodes and the corresponding nodes on the secondary.

The table redistribution and repartitioning operation applies advanced algorithms to ensure tables are distributed optimally across all the active nodes and are partitioned appropriately taking into consideration several parameters mainly –

➢ Number of partitions
➢ Memory Usage of tables and partitions
➢ Number of rows of tables and partitions
➢ Table classification

Apart from these, there are several other parameters that are considered by the internal HANA algorithms to execute the redistribution and repartitioning task.

DETERMINING WHETHER A REDISTRIBUTION AND REPARTITIONING IS NEEDED


You can consider the following to decide whether a redistribution and repartitioning is needed in the system.

➢ Execute the SQL script “HANA_Tables_ColumnStore_TableHostMapping” from OSS note 1969700. From the output of this query, you can see the total size of the tables on disk across all the hosts. In our system, as you can see in the screenshot below, the distribution was uneven, with some nodes holding much more data than others.


➢ If you observe that frequent out-of-memory dumps are being generated on a few hosts due to high memory utilization by column store tables, you can execute the following SQL statement to see the memory space occupied by the column store tables.

select host, count(*), round(sum(memory_size_in_total/1024/1024/1024)) as size_GB from m_cs_tables group by host order by host

As you can see in the screenshot below, on some hosts the memory space occupied by the column store tables is much higher than on others.


➢ If new hosts are added to the system, tables are not automatically redistributed to them. Only new tables created after the addition of the hosts may get stored there. The optimize table distribution activity needs to be carried out to distribute tables from the existing hosts to the new hosts.

➢ If too many of the following error messages are showing up in the indexserver trace file, the table redistribution and repartitioning activity also takes care of these issues:

Potential performance problem: Table ABC and XYZ are split by unfavorable criteria. This will not prevent the data store from working but it may significantly degrade performance.
Potential performance problem: Table ABC is split and table XYZ is split by an appropriate criterion but corresponding parts are located in different servers. This will not prevent the data store from working but it may significantly degrade performance.

CHECKLIST


✓ Update table “table_placement”
✓ Maintain parameters
✓ Grant permissions to SAP schema user
✓ Run consistency check report
✓ Run stored procedure to check native HANA table consistency and catalog
✓ Run the cleanup python script to clean virtual files
✓ Check whether there are any business tables created in row store. Convert them to column store
✓ Run the memorysizing python script
✓ Take a database backup
✓ Suspend crontab jobs
✓ Save the current table distribution
✓ Increase the number of threads for the execution of table redistribution
✓ Unregister secondary system from primary (if DR is setup)
✓ Stop SAP application
✓ Lock users
✓ Execute “optimize table distribution” operation
✓ Startup SAP
✓ Run compression of tables

STEPS IN DETAILS


Update table “table_placement”

OSS note 1908075 provides an attachment which has several scripts for different HANA versions and different scale out scenarios. Download the attachment and navigate to the folder as per your HANA version, number of slave nodes and amount of memory per node.
In the SQL script, replace the $$PLACEHOLDER with the SAP schema name of your system and execute the script. This will update the table “Table Placement” under the SYS schema. This table is referred to by the HANA algorithms when taking decisions during the table redistribution and repartitioning activity.

Maintain parameters

Maintain HANA parameters as recommended in OSS note 1958216 according to your HANA version.

Grant permissions to SAP schema user

For HANA 1.0 SPS10 onwards, ensure that the SAP schema user (SAPBIW in our case) has the system privilege “Table Admin”.

Run consistency check report

SAP provides an ABAP report “rsdu_table_consistency” specifically for SAP systems on HANA database. First, ensure that you apply the latest version of this report and apply the OSS note 2175148 – SHDB: Regard TABLE_PLACEMENT in schema SYS (HANA SP100) if your HANA version is >=SPS10. Otherwise you may get short dumps while executing this report if you select the option “CL_SCEN_TAB_CLASSIFICATION”.
Execute this report from SA38 especially by selecting the options “CL_SCEN_PARTITION_SPEC” and “CL_SCEN_TAB_CLASSIFICATION”. (You can select all the other options as well). If any errors are reported, fix those by running the report in repair mode.

Note: This report should be run after the table “table_classification” is maintained as described in the first step. This report refers to that table to determine and fix errors related to table classification.

Run stored procedure to check native HANA table consistency and catalog

Execute the stored procedures check_table_consistency and check_catalog for the non-BW tables and ensure no critical errors are reported. If any critical errors are reported, fix those first.

Run the cleanup python script to clean extra virtual files

If there are extra virtual files, the table redistribution and repartitioning operation may fail. Run the python script cleanupExtraFiles.py, available in the python_support directory, to determine whether there are any extra virtual files. Before you run the script, open it in the vi editor and modify the following parameters as per your system:
self.host, self.port, self.user and self.passwd
Execute the following command first to determine whether there are any extra virtual files:
python cleanupExtraFiles.py
If this command reports extra virtual files, execute the command again with the remove option to clean them up:
python cleanupExtraFiles.py --removeAll

Check whether there are any business tables created in row store

The table redistribution and repartitioning operation considers only column store tables, not row store tables. So if anybody has created business tables in row store (by mistake or without knowing the implications), those will not be considered for this activity. Big BW tables are not supposed to be created in row store in the first place. Convert them to column store using the SQL below:
Alter table <table_name> column;
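To identify candidate tables, a query against the monitoring view M_TABLES along these lines can help (SAPBIW is the SAP schema user of this example system; adjust the schema name to yours):

SELECT TABLE_NAME, TABLE_TYPE, RECORD_COUNT
  FROM M_TABLES
 WHERE SCHEMA_NAME = 'SAPBIW'
   AND IS_COLUMN_TABLE = 'FALSE'   -- still stored in row store
 ORDER BY RECORD_COUNT DESC;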

Run the memorysizing python script

Before running the actual “optimize table distribution” task, execute the command below:
Call reorg_generate(6, '')
This generates the redistribution plan but does not execute it. The number 6 here is the algorithm id of “Balance Landscape/Table”, which is the algorithm executed by the “optimize table distribution” operation.
After the procedure has finished, execute the following statement in the same SQL console:
Create table REORG_LOCATIONS_EXPORT as (select * from #REORG_LOCATIONS)
This creates the table REORG_LOCATIONS_EXPORT in the schema of the user you executed it with. Then run: Select memory_used from reorg_locations_export

If you see the memory_used column has several negative numbers like shown in the screen shot below, it indicates there is a problem.


You can also execute the below query and check the output.
select host, round(sum(memory_used/1024/1024/1024)) as memory_used from reorg_locations_export group by host order by host
If you get the output as shown in the screenshot below, it indicates that the memory statistics are not updated. If you execute the “optimize table distribution” operation now, the distribution won’t be even: some hosts may end up with a far larger number of tables and much higher memory usage, whereas others will have very few tables and very low memory usage.


This is due to a HANA internal issue which has been fixed in HANA 2.0 where an internal housekeeping algorithm corrects these memory statistics. As a workaround for HANA 1.0 systems, SAP provides a python script that you can find in the standard python_support directory. (Check OSS note 1698281 for more details about this script).

Note: This script should be executed during low system load (preferably on weekends).

After the script run finishes, generate a new plan with the same method as described above and create a new table for the reorg locations with a new name, say reorg_locations_export_1. Execute the query – Select memory_used from reorg_locations_export_1. Now you won’t see those negative numbers in the memory_used column. Executing the query – select host, round(sum(memory_used/1024/1024/1024)) as memory_used from reorg_locations_export_1 group by host order by host – will now show a much better result. As you can see below, the values in the memory_used column are fairly even across all nodes after executing the memorysizing script and there are no negative values.


Take a database backup

Take a database backup before executing the redistribution activity.

Suspend crontab jobs

Suspend jobs that you have scheduled in crontab, e.g. backup script.

Save the current table distribution

From HANA studio, go to landscape –> Redistribution. Click on save. This will generate the current distribution plan and save it in case it is needed to restore back to the original table distribution.

Increase the number of threads for the execution of table redistribution

For faster execution of the table redistribution and repartitioning operation, you can set the parameter indexserver.ini [table_redist] -> num_exec_threads to a high value like 100 or 200 based on the CPU capacity of your HANA system. This will increase the parallelization and speed up the operation. The value should not exceed the number of logical CPU cores of the hosts. The default value of this parameter is 20. Make sure to unset this parameter after the activity is completed.
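One possible way to set and later remove the parameter via SQL (the value 100 is just an example, as discussed above):

-- raise redistribution parallelism; do not exceed the number of logical CPU cores of the hosts
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('table_redist', 'num_exec_threads') = '100' WITH RECONFIGURE;
-- after the activity has completed, remove the setting so the default (20) applies again
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  UNSET ('table_redist', 'num_exec_threads') WITH RECONFIGURE;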

Unregister secondary system from primary (if DR is setup)

If you have system replication setup between primary and secondary sites, you will need to unregister secondary from primary. If you perform table redistribution and repartitioning with system replication enabled, it will slow down the activity.

Stop SAP application

Stop SAP application servers

Lock users

Lock all users in HANA except the system users SYS, _SYS_REPO, SYSTEM, SAP schema user, etc. Ensure there are no sessions active in the database before you proceed to the next step. This will also ensure that SLT replication cannot happen. Still if you want you can deactivate SLT replication separately.
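For reference, an individual user can be locked and later unlocked with statements like these (<user_name> is a placeholder):

-- lock the user for the duration of the activity
ALTER USER <user_name> DEACTIVATE USER NOW;
-- unlock again after the redistribution has finished
ALTER USER <user_name> ACTIVATE USER NOW;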

Execute “optimize table distribution” operation

From HANA studio, go to landscape –> Redistribution. Select “Optimize table distribution” and click on execute. In the next screen under “Parameters” leave the field blank. This will ensure that table repartitioning will also be taken care of along with redistribution. However, if you want to run only redistribution without repartitioning, enter “NO_SPLIT” in the parameters field. Click on next to generate the reorg plan and then click on execute.

Monitor the redistribution and repartitioning operation

Run the SQL script “HANA_Redistribution_ReorganizationMonitor” from OSS note 1969700 to monitor the redistribution and repartitioning activity. You can also execute the below command to monitor the reorg steps

select IFNULL("STATUS", 'PENDING'), count(*) from REORG_STEPS where reorg_id=(SELECT
MAX(REORG_ID) from REORG_OVERVIEW) group by "STATUS";

Startup SAP

Start SAP application servers after the redistribution completes.

Run compression of tables

The changes in the partition specification of the tables as part of this activity leave the tables in uncompressed form. Though the “optimize table distribution” process carries out compression as part of this activity, due to a bug tables can still be uncompressed after the activity completes, which leads to high memory usage. Compression will run automatically after the next delta merge on these tables, but you can also perform it manually. Execute the SQL scripts “HANA_Tables_ColumnStore_TablesWithoutCompressionOptimization” and “HANA_Tables_ColumnStore_ColumnsWithoutCompressionOptimization” from OSS note 1969700 to get the list of tables and columns that need compression. The output of these scripts provides the SQL statements for executing the compression.

Result


As an outcome of this activity, the distribution of tables evened out across all the hosts of the system, and the memory space occupied by column store tables became more or less even. You can also see that the size of the tables in the master node has been reduced. This is because some of our BW projects had created big tables in the master node, which were moved to the slave nodes as part of this redistribution activity. This should be the ideal scenario.

Size of tables on disk (before)

SAP HANA Tutorial and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA BW, SAP HANA Certifications

Size of tables on disk (after)

SAP HANA Tutorial and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA BW, SAP HANA Certifications

Count and memory consumption of tables (before)

SAP HANA Tutorial and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA BW, SAP HANA Certifications

Count and memory consumption of tables (after)

SAP HANA Tutorial and Materials, SAP HANA Guides, SAP HANA Learning, SAP HANA BW, SAP HANA Certifications

I hope this article helps anyone who is planning to perform this activity on a BW on HANA system.

How to free Hana System on Public Cloud from I/O performance issue?



Apart from memory, storage performance plays a major role in safeguarding HANA performance. The storage system used for SAP HANA in TDI environments must fulfill a certain set of KPIs for minimum data throughput and maximum latency for the HANA data and log volumes. Cloud vendors need to pass the KPI checks using HWCCT (Hardware Configuration Check Tool) for SAP to certify their cloud platform to run SAP HANA. The goal is to safeguard customer HANA systems from I/O problems that can lead to performance degradation, up to a system standstill or unresponsiveness.

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning
Common performance issues caused by slow I/O:

◈ Permanently slow system
◈ High commit times due to a slow write rate to the log volume
◈ Blocked database requests where poor I/O performance causes long "waitforlock" situations and a long critical phase of the savepoint, with a considerable performance impact
◈ Increased startup times and database request times due to slow row/column store loads
◈ Longer takeover and activation times for HANA System Replication (HSR)
◈ Increased number of HANA waits such as "Barrier Wait", "IO Wait", "Semaphore Wait", etc.

Things get worse for a high-load system when there is a constantly high volume of modified pages being flushed to disk, e.g. during data loading, long-running transactions, massive updates/inserts, etc. We had a scenario where a customer's BWoH production system constantly came to a standstill under high update/insert load. The issue was escalated to SAP and up to management level, and it later turned out that the culprit was the system hosted by an on-premise hosting vendor that failed to meet the SAP TDI storage KPIs.

Now back to the scenario of running our HANA systems on a public cloud. The cloud vendors have certified their platforms to run SAP HANA; however, it is our own responsibility to set up the HANA systems to meet the SAP storage KPIs and avoid all possible I/O issues. Guidelines for optimal storage configuration are provided by each cloud vendor as below:

GCP

https://cloud.google.com/solutions/partners/sap/sap-hana-deployment-guide

“To achieve optimal performance, the storage solution used for SAP HANA data and log volumes should meet SAP’s storage KPIs. Google has worked with SAP to certify SSD persistent disks for use as the storage solution for SAP HANA workloads, as long as you use one of the supported VM types. VMs with 32 or more vCPUs and a 1.7 TiB volume for data and log files can achieve up to 400 MB/sec for writes, and 800 MB/sec for reads.”

AWS

https://aws.amazon.com/blogs/awsforsap/deploying-sap-hana-on-aws-what-are-your-options/

◈ With the General Purpose SSD (gp2) volume type, you are able to drive up to 160 MB/s of throughput per volume. To achieve the maximum required throughput of 400 MB/s for the TDI model, you have to stripe three volumes together for SAP HANA data and log files.
◈ Provisioned IOPS SSD (io1) volumes provide up to 320 MB/s of throughput per volume, so you need to stripe at least two volumes to achieve the required throughput.

Azure

https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/hana-get-started

“However, for SAP HANA DBMS Azure VMs, the use of Azure Premium Storage disks for production and non-production implementations is mandatory.

Based on the SAP HANA TDI Storage Requirements, the following Azure Premium Storage configuration is suggested:”

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning

Now we know that it is important to have at least a 1.7 TiB SSD persistent disk for GCP, LVM striping with 3 gp2 or 2-3 io1 volumes for AWS, and 2-3 striped Premium Storage disks for Azure.
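As an illustration, a minimal sketch of striping three gp2 volumes with LVM on AWS (device names, volume group name and stripe size are assumptions; adjust to your landscape):

# three EBS volumes attached to the instance (device names are examples)
pvcreate /dev/xvdf /dev/xvdg /dev/xvdh
vgcreate vghanadata /dev/xvdf /dev/xvdg /dev/xvdh
# -i 3 stripes across all three volumes, -I 256 sets a 256 KiB stripe size
lvcreate -n lvhanadata -i 3 -I 256 -l 100%FREE vghanadata
mkfs.xfs /dev/vghanadata/lvhanadata
mount /dev/vghanadata/lvhanadata /hana/data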

Next, let's look at the certified KPIs for data throughput and latency for HANA systems:

1943937 – Hardware Configuration Check Tool – Central Note

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning

So… what were the KPIs achieved by GCP and AWS without and with optimal storage config?

*The tests were conducted months ago and are for reference only. For a more accurate result for your system, you are advised to run the HWCCT storage test yourself.

GCP < 1.7TB SSD PD

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning

GCP >= 1.7TB SSD PD

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning

AWS gp2 without LVM Striping

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning

AWS 3 x gp2 LVM Striping

SAP HANA Tutorial and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning

I have not had a chance to test the Azure platform yet. It would be great if someone who has tested it could share the results.

From the results, we see slow I/O failing certain KPIs for storage setups that do not follow the guidelines, and thus there is a risk of running into the performance issues caused by slow I/O stated above.

If you run your HANA system in the cloud and constantly encounter the I/O issues stated above, make sure the HANA data and log volumes are set up according to the respective guidelines. Doing so keeps you within SAP support, eliminates possible I/O performance issues, and maximizes the usage of the solid underlying platforms provided by GCP, AWS and Azure.

Upgrading Web IDE in HANA Express

One of the housekeeping tasks I generally perform in the instances I use to develop actual applications is to upgrade the Web IDE. Not only because some minor bugs are swept away, but also because there’s always some additional functionality that makes development easier.

You would generally need access to the marketplace and a proper license for this. However, the engineering team has made this patch available for download in the download manager in revision 23. If the following rings a bell, you might want to give this a try:

“Request failed: Gateway Timeout URI: /che/runner/workspace”

The upgrade process is pretty similar as with any other MTA upgrade, only that this time it is easier because you can download it straight into the instance using the console-based download manager.

Using a Virtual Machine? Add 1GB more RAM to the default, it will need it during the installation process.

Check your version


These steps apply to revision 23 of HANA express (HANA 2.0, SPS02). You can still go ninja mode and upgrade your MTAs but you will need to perform some manual work and checks (outlined below).

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA IDE

If you do not have that version, it means you need to go into the marketplace and look for SAP HANA RUNTIME TOOLS 2.0 and SAP WEB IDE 2 in the software downloads.

You also need the *.mtaext file in this note. The one you get from the note has some different parameters than what HXE uses. I would recommend using the file you already have from the original installation as a reference. The command sudo find / -name "*mtaext" should find the original file.

While you are there, make sure you have the compatible versions of the other MTAs. The command to list your MTA versions is xs mtas.

Once you have all of the necessary packages, upload them into your instance and jump to “Upgrade time” below.

Download the patch


Don’t you just love the console-mode download manager? Use it to download the patch straight into the same lucky server that is getting the upgrade.

Login to the xs CLI if you have not already and check the versions of your MTAs while you are there:

xs-admin-login
xs t -s SAP
xs mtas

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA IDE

Now list the packages available for download:

cd /usr/sap/HXE/home/bin
./HXEDownloadManager_linux.bin -X

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA IDE

You need to pay attention here. The download manager will offer packages for different platforms. In my case, I am using a launcher instance on Google Cloud Platform, so the linuxx86_64 version applies (there is no difference between the VM or binary method for this particular component); it is the platform that will cause you trouble. Use linuxppc64le if applicable.

./HXEDownloadManager_linux.bin <<check your platform>> installer webide_patch.tgz

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA IDE

Install the upgrade


For the second time, we are going to make sure you are on the right space (SAP) after extracting the files:

tar -xvzf /usr/sap/HXE/home/Downloads/webide_patch.tgz
xs t -s SAP

You might want to give some more room to the backend processes. I have found that life flows more swiftly if you allocate more RAM to the backend processes for Web IDE. These are di-core and di-runner. In previous versions they could even crash and throw an awful

Request failed: Bad Gateway URI: /che/project/workspacee…..

And some other unhandled technical errors.

This would happen randomly after running node.js modules, Fiori wizards and syncing Git repositories. Allocating more RAM means these processes take up more memory, which may mean less memory for the rest of the processes. This should not be a problem in cloud environments, and I have already pushed the RAM up to 32 GB, so here I go:

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA IDE
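If you prefer the command line over the cockpit, a sketch assuming the xs CLI supports the same scale options as the Cloud Foundry CLI it mirrors (memory values are examples):

xs scale di-core -m 1024M
xs scale di-runner -m 1024M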

Once we are ready, let the installation begin:

xs install ./HANA_EXPRESS_20/DATA_UNITS/XSAC_SAP_WEB_IDE_20/sap-xsac-devx-4.2.31.zip -e ./HANA_EXPRESS_20/DATA_UNITS/XSAC_SAP_WEB_IDE_20/sap-xsac-devx-4.2.31-hxe.mtaext

If you have manually downloaded the runtime tools package, you can install it too with:

xs install XSACHRTT0<<the version you downloaded>>.ZIP

Go get some coffee, pet your cat or dog or dedicate some minutes to mindful meditation. This will take some minutes. The success message will appear when ready:

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA IDE

And command xs mtas will confirm the upgrade has been successful.

Re-run the space enablement tool


Go into the space enablement tool (https://hxehost:51024 – or whatever xs a | grep space says) and redeploy the development space to update the di-builder:

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA Learning, SAP HANA IDE

You are all set now.

Setting up communication channel between SAP Cloud Platform (Neo) & HANA XSA (On-Premise)


Overview


Recently, I came across a situation where I had to consume an OData service deployed in HANA 2.0 on-premise, following the XSA paradigm, in a SAPUI5 application running in the SAP Cloud Platform Neo environment. In this blog, I'd like to take you through the concepts and the various configuration steps involved in setting up a communication channel between an application deployed in SCP and HANA XSA on-premise. I'll also describe in detail the security configuration that allows a user logged in to an SCP HTML5 application to access a protected resource in HANA XSA, for example an OData service that requires a specific scope.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications
Assumption: I assume that you have set up your XSA environment with an external IdP, and that you are familiar with using the Application Role Builder, Role Collections, and assigning Role Collections to users from the XSA Admin Cockpit.

To explain the steps in this document, let’s take a simple HANA XSA application following MTA paradigm containing db (hdi-container), js and an approuter module.

◈ Let the db module define a simple table "Book", and a calculation view on top of this table. The calculation view "BooksByCountry" has a row-level authorization check to return only rows that match the logged-in user's country. (Instance-Based Authorization)

◈ The js module (xsjs compatibility, nodejs runtime) provides an OData service exposing the calculation view as an entity. The OData service also requires that the user has the scope "books.read" to access the service. (Functional Authorization)

◈ The approuter module defines the routes in the xs-app.json along with the scopes required. As you know, this acts as the single entry point for the application, takes care of user authentication and authorization (partly, since the backend js module also needs to check that the user has the relevant scopes), and handles forwarding and caching of the JWT for the logged-in user.

Let’s also define a xs-security.json file that would define the required scope, attributes and role template for the application described above. In your xs-app.json and in the odata definition, you can check the scope “read” for accessing the protected resource. And the attribute country will be used in the structured privilege on calculation view to filter rows based on the user accessing the calculation view.

{
"xsappname": "secure_helloworld",
"tenant-mode": "dedicated",

"scopes": [{
"name": "$XSAPPNAME.read",
"description": "Read access to the odata service"
}],

"attributes": [{
"name": "country",
"description": "country",
"valueType": "string"
}],

"role-templates": [{
"name": "BooksViewer",
"description": "View Books",
"scope-references": [
"$XSAPPNAME.read",
"uaa.user"
],
"attribute-references": ["country"]
}]
}
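For completeness, a minimal sketch of the corresponding route in the approuter's xs-app.json that enforces the read scope (the route source and the destination name are illustrative, not taken from the original project):

{
  "welcomeFile": "index.html",
  "authenticationMethod": "route",
  "routes": [{
    "source": "^/books/(.*)$",
    "destination": "books-backend",
    "authenticationType": "xsuaa",
    "scope": "$XSAPPNAME.read"
  }]
}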

And there's a simple UI5 application that displays the data from the OData service.

Communication Channel & Security Configuration


Before going through the configuration steps, let me quickly go through the concepts of how the entire communication channel works and how the user logged in to SCP is authenticated on the XS UAA side to access the services, covering both functional authorization (scopes) and instance-based authorization (analytical privileges at the calculation view level providing row-level permissions).

The application user is authenticated when accessing the UI5 application in the SCP subaccount. The SCP subaccount is configured with an external IdP that authenticates the user. The application accesses the service on the backend (XSA) via the Cloud Connector and the destinations created. The steps involved in passing the user context from the SCP HTML5 application to the XSA services in the backend are as follows:

1. The user logs in to the HTML5 application authenticated by the external IdP.

2. When accessing any protected service exposed from XSA on-premise, the backend service should be called with the JWT (JSON Web Token) that the backend service can use to verify that the user is authenticated and has the required authorizations

3. The SCP destination of type "OAuth2SAMLBearerAssertion", using the configured Token Service URL, exchanges the SAML bearer assertion for the JWT token. To explain the steps in detail:

3.1 The SAML attribute statement is created from the principal attributes that are assigned for this trust configuration (section 5). This is done in the SCP Trust Configuration, where we set the attributes that are to be mapped to the principal attributes from the SAML assertion.
3.2 The SAML assertion created is signed by the Service Provider signing key (in this case the SCP subaccount signing key from the Trust Configuration).
3.3 HANA XSA is configured to trust the Local SP of the SCP subaccount as an IdP by adding the SP's metadata as an IdP in the XSA SAML Identity Provider Configuration.
3.4 The destination service exchanges the signed SAML bearer assertion for a JWT token with the Authorization Server (UAA). This is part of the OAuth2 grant type urn:ietf:params:oauth:grant-type:saml2-bearer. For details on this, refer to the link. This exchange is possible since XSA already trusts the SP signing key as a valid IdP (from the previous step).
3.5 The JWT token holds the scopes that the user has been assigned, and the SAML attributes.
3.6 Since XSA is on-premise, all interaction from SCP to the XSA UAA, and hence to the actual backend service, is carried out via the Cloud Connector.

4. The JWT token fetched in the previous step is then sent in the header Authorization: Bearer <JWT_Token> to the backend service. Please note that here we don't invoke the approuter but the actual service itself (js module).
5. The backend service checks that the user has the required scope to access the protected endpoint.
6. As you already know, access to the database in XSA happens with a single technical user. However, the application user is personified on top of the technical user by means of the JWT token. The user attributes are set as global session variables and can be used for instance-based authorization in database artifacts like structured privileges.
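As an illustration, these session variables can be read with SQL; a minimal sketch (the exact variable name, assumed here to be XS_COUNTRY for the country attribute, can be verified with the M_SESSION_CONTEXT query shown later in this post):

-- value of the country attribute of the business user for the current session
SELECT SESSION_CONTEXT('XS_COUNTRY') AS USER_COUNTRY FROM DUMMY;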

The steps explained above are taken care of implicitly by the destination configured in SCP. Since the XSUAA is running on-premise, its endpoints need to be exposed as virtual mappings from the Cloud Connector. The destination service takes care of performing this complex flow and fetching the service result to the browser client.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

Configuration steps to setup the Communication Channel & Security


I have used an SAP Cloud Platform Identity Authentication Service (IAS) tenant to demonstrate the setup here. The steps should work with any other SAML2.0 IdP as well.

Overview of configuration steps

Please note, I'll not go through all the steps below, since some of them can be looked up in the reference guide; I've provided links to the documentation for those.
I'll explain in detail, in the following sections, the steps that are specific to the setup discussed in this blog.
  1. Configure Role and Role Collections
  2. Establish trust between Identity Provider (IAS tenant) and the Service Provider (HANA XSA)
    1. Configure external IdP for the HANA XSA Application server (documentation)
    2. Assign Role Collections to users (federation through Groups attribute)
    3. Register the SP in the IdP as an application
  3. Setup the SAP Cloud Connector
    1. Expose XSUAA endpoints
    2. Expose the service endpoints of the COS application
  4. Setting up destinations on the SAP Cloud Platform
  5. Configure the SCP subaccount with an external IdP
  6. Configure SCP SubAccount Local Identity Provider as an additional IdP

1. Configure Role and Role Collections


Once you have created the XSUAA service, you can maintain the Roles and the Role Collection from the admin cockpit. For this simple application, I created a role BooksViewer from the role template BooksViewer, and then assigned the attribute values from SAML as shown below

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

Also, I’ve created a Role Collection BooksOrganizer containing the role BooksViewer

2. Configure the External IdP for XSA


2.1 Steps to configure an external IdP for XSA

I hope you have already configured an external IdP for authenticating business user to access the application in XSA. You can refer the documentation from the table above.

2.2 Assign Role Collections to users (federation through Groups attribute)

1. Open the “SAML Identity Provider Configuration” tile from the XS Advanced Administration Cockpit

2. Select the IdP that you have configured. From the Role Collections tab, add the Role Collections that you have already created “BooksOrganizer”, and then assign the “Groups” attribute value.
In our example, I’ve setup a User Group “BooksOrgUserGroup” on IAS tenant. And users belonging to this group will be federated to the Role Collection based on this configuration

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

3. When a user logs in via IdP, the user gets federated to the role collections which matches to the Groups SAML assertion attribute that the logged user carries in his SAML Assertion attribute statement.

2.3 Register the SP in the IdP as an application

You can register XSA as a Service Provider with any SAML 2.0 compliant IdP. In the table above, I've given the link for configuring it with an IAS tenant (SaaS). Once you have the SAML trust set up, you can add the Role Collection assignment for users logging in from the IdP via the Groups attribute.
From your IdP, make sure that you send the assertion attribute "Groups" (which decides the role collection to which the user will be federated) and the "country" attribute (used for instance-based authorization in our example) that we will be using later on.
You can try to access the OData service via the AppRouter, and you'd see the link for the IdP below the default form login. You can then log in with a valid user from the IdP who has the user group "BooksOrgUserGroup" assigned, so that the user gets the Role Collection "BooksOrganizer" and is able to access the service.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

Now that you are able to access the OData service from XSA directly via AppRouter, the next step will be to consume the same from the UI5 application deployed in SCP Neo.

3. Configurations in SAP Cloud Connector for enabling the communication channel


I assume that you have already set up the SAP Cloud Connector (SCC) and established the tunnel for your subaccount on SCP (Neo). You can refer to other blogs that describe the basic setup in detail.
Here, I'll focus on setting up the virtual mappings that are specific to consuming the service from XSA on-premise. We'll need to create two virtual endpoints for the on-premise system: one to access the OData service, and the other to access the UAA token service.

3.1 Expose the XSUAA endpoints

1. Open the Cloud Connector Admin Page
2. Select the SCP subaccount to which the backend services has to be exposed.
3. Create a mapping to the virtual system with the following details as shown in the snapshot below

Property | Value
Back-end Type | SAP HANA
Protocol | HTTPS
Virtual Host | <chosen host as per preference>
Virtual Port | 39032 (can be looked up from the env of your application bound to the XSUAA instance)
Internal Host | <HANA XSA host> (can be looked up from the env of your application bound to the XSUAA instance)
Internal Port | 39032

A sample configuration is shown below.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

NOTE: The internal host and the ports will change per your system landscape. I’m running here an HANA Express edition locally on my laptop

4. Create the resources that can be accessed on the internal system with the following details

Property | Value
URL Path | /uaa-security/
Access Policy | Path and all sub-paths

3.2 Expose the service endpoints for OData Service

In this section, we'll configure the exposure of the backend service endpoints.
Fetch the backend service endpoint (js module).

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

Please note, in this case we go to the backend Node JS service directly, and not via the AppRouter.

You can create virtual mapping and the resources as shown in the images below.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

4. Setup the destinations on the SAP Cloud Platform


In this step, we create the destination that will be used from the HTML5 application to consume the odata service on XSA on-premise

1. From the SAP Cloud Platform subaccount, go to Connectivity → Destinations
2. We'll need the OAuth Token Service endpoint to configure the destination. You can get it from the XS client tool by running:

xs env <js_app_name>

Sample output

"VCAP_SERVICES" : {
    "xsuaa" : [ {
      "name" : "uaa_books",
      "label" : "xsuaa",
      "tags" : [ "xsuaa" ],
      "plan" : "default",
      "credentials" : {
        "tenantmode" : "dedicated",
        "clientid" : "sb-secure_helloworld",
        "verificationkey" : "***",
        "xsappname" : "secure_helloworld",
        "identityzone" : "uaa",
        "identityzoneid" : "uaa",
        "clientsecret" : "***",
        "url" : "https://hxehost:39032/uaa-security"
      }
    } ]
  }

3. Create the destination with the following details.

Property | Mapping Attribute in Environment | Sample Value
Type | N/A | HTTP
audience | <url>/oauth/token | https://hxehost:39032/uaa-security/oauth/token
Authentication | N/A | OAuth2SAMLBearerAssertion
tokenServiceURL | <url>/oauth/token | https://hxehost:39032/uaa-security/oauth/token
ProxyType | N/A | OnPremise
URL | Backend (js module) application URL on XSA | https://hxehost:51046
tokenServiceUser | <clientid> | sb-secure_helloworld
tokenServicePassword | <clientsecret> | ***

Additional Properties

Property | Value
WebIDEUsage | odata_gen
WebIDEEnabled | true
WebIDESystem | HXE

Sample Configuration

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

4. Test the connection to ensure that the system is reachable.

5. Configure SCP SubAccount with an external IdP


I assume that you have configured your Neo Subaccount with an external IdP.

When the user is propagated from the SCP HTML5 application to XSA, the Local Service Provider in SCP SubAccount will act as the IdP for XSA. The outbound SAML assertion will be signed by the SP’s signing certificate. For this purpose, we’ll have to download SP metadata.

1. From the subaccount cockpit, Go to Security → Trust, and then open the Local Service Provider tab.

2. Download the SP metadata by clicking on the “Get Metadata” link.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

3. Since the Role Collection assignment to a user is based on the Groups attribute passed, it's important to send it as an assertion attribute from SCP. This is achieved by setting the assertion attribute.

3.1 Go to Security → Trust → Application Identity Provider → click on the configured IdP link
3.2 Go to Attributes tab, and then add the Groups attribute into the section Assertion-Based Attributes.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

3.3 If more attributes need to be passed to XSA for row-level access (instance-based authorization), all of them have to be added to the list here. You can pass all attributes from the IdP as principal attributes by just entering * in Assertion & Principal Attribute.

6. Configure SCP SubAccount Local Identity Provider as an additional IdP


Finally, we will now add the Local Service Provider of SCP SubAccount as an IdP to XSA.

1. From the admin cockpit, navigate to SAML Identity Provider Configuration tile
2. Add the metadata you got from the Local Service Provider (previous section)
3. Save the configuration.
4. Configure the role collections as we did while configuring an external IdP for XSA. (refer to section 2.2 – Assign Role Collections to users)

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

Testing the HTML5 application on SCP


I created a simple Fiori list report application (from the project template). In the step where it asks you to select the OData service, you can select the destination that you have created.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

You can go ahead and then complete the application. The goal of this application is to show that based on the user logging in, the data fetched from the odata service is filtered based on the attributes configured for him in his IdP.

For User with country attribute as “US”

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

Similarly, when you log in with a user whose country SAML attribute is "IN", he would only get records having that value in the COUNTRY column from the calculation view.
You can also check the attributes that are set for this session as below:

SELECT KEY, VALUE from M_SESSION_CONTEXT WHERE KEY LIKE 'XS_%'

Wrapping the same in an xsjs service to display the attributes captured at the session level is highlighted below.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

Also, when the user is moved out of the user group in the IdP, he will automatically no longer be allowed to access the OData service, since the user will not have the required scopes, and will be presented with the following error message.

SAP Cloud Platform, SAP HANA Tutorials and Materials, SAP HANA Guides, SAP HANA Certifications

Connect a SQL client to HANA Express on Google Kubernetes Engine

In a previous post, I showed how to set up a Kubernetes cluster with three HXE containers running in single pods. As I continue to explore different possibilities with my new favorite toy, I thought I could document some extra steps to connect from an external, local SQL client.

Bind HXE’s ports to the host


The host is the actual virtual machine with its own external IP, so this will allow me to use the port from outside. The assumption that you will not be running any other container that needs those ports and could cause a conflict still applies here. The trick is done by adding the key "hostPort" to the yaml file, followed by the port you will be exposing.
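A minimal sketch of what that looks like in the pod definition (the pod name matches the one used later in this post; the container name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hxe-pod
spec:
  containers:
  - name: hxe-container
    image: <your-hana-express-image>
    ports:
    - containerPort: 39017
      hostPort: 39017   # SYSTEMDB SQL port exposed on the node VM
    - containerPort: 39041
      hostPort: 39041   # tenant database SQL port exposed on the node VM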

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Why not services and NodePort? In a future blog post, let’s tame this beast first.

Enable TCP/IP traffic to your VMs


Go into the firewall rules and create a tag that allows TCP/IP outbound communications from the desired ports and any other responsible firewalling you consider. My instances will only hold the secret of life so I will not sweat it much here.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

You need to create the pod first and see which VM it got assigned to. The command kubectl describe pod will give you the name of the VM.

Using the burger on the left upper corner (yeah… burgers…), go get that VM.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

You can identify where the pod has been deployed from the output of the describe command. The auto-generated names are not too friendly but it’s completely doable:

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Click on that VM, edit and add the tag to the network.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Remember to scroll down and save.

Tell your database to use the external IP


We've seen this before when adding tenants in cloud environments. When HANA Studio or a SQL client knocks on HANA's door, it gets redirected to the proper port but to the internal IP assigned to the VM. You need a command to tell the database to use the external IP.

Log in to the pod, log in to the database and cast the SQL spell

kubectl exec -it hxe-pod bash
hdbsql -I 90 -d SYSTEMDB 
alter system alter configuration ('global.ini','SYSTEM') set ('public_hostname_resolution','map_localhost') = 'your external IP address' with reconfigure;

Connect to the database


I’m using DBeaver for this but any SQL client that allows you to use a custom JDBC driver will do.

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Use port 39017 for the SYSTEMDB and 39041 for the tenant database, just like for Docker. For HANA Studio, you don’t need to worry about the ports.
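For example, a JDBC URL for the tenant database would look roughly like this (the external IP is a placeholder, and the databaseName parameter is only needed if you prefer connecting to the tenant by name; HXE is the default tenant of HANA Express):

jdbc:sap://<your external IP>:39041/?databaseName=HXE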

Custom Rule Set – SAP HANA Text Search

In this blog, I'll discuss how to create a custom rule set in SAP HANA. To implement certain custom use cases, customers have to implement their own rule sets for performing text search operations.

Search Rule Set


Figure 1 below shows the structure of Rule Sets stored in XML/Tree Like Formation.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications, SAP HANA

Figure 1: Rule Set Structure

Figure 2 below shows the steps to configure and use a Search Rule Set.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications, SAP HANA

Step 1: Add View


The first step in configuring the rule set is to define a view. Search operations can be performed on attribute views, column views of type join, and SQL views. Other database objects, such as row store tables, column store tables, calculation views, or analytic views, are not supported.

1.1 Create a column table and insert records into it as specified below:

CREATE COLUMN TABLE employee
(
  id            INTEGER          PRIMARY KEY,
  firstname     SHORTTEXT(100)   FUZZY SEARCH INDEX ON,
  lastname      SHORTTEXT(100)   FUZZY SEARCH INDEX ON,
  address       NVARCHAR(100)    FUZZY SEARCH INDEX ON,
  postcode      NVARCHAR(20),
  cityname      NVARCHAR(100),
  countrycode   NVARCHAR(2)
);

INSERT INTO employee VALUES(1, 'Michael', 'Milliken', '3999 WEST CHESTER PIKE', '001', 'NEWTON SQUARE', 'PA');

1.2 Create and activate an attribute view by selecting the employee table and projecting all the columns in the output of the attribute view, as shown in Figure 3.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications, SAP HANA

Figure 3: Attribute View

Step 2: Add Stop Words and Term Mappings


The second step is to configure the nodes "Stopwords (table-based)" and "Term Mappings (table-based)" by creating two tables, stopwords and termmappings.

2.1 Stopwords

Stopwords are less significant terms; they are not used to generate the result set. However, they do influence score calculations. A record with stopwords identical to the user input gets a higher score than a record with differing or missing stopwords.

Stopwords can be defined either as single terms or as stopword phrases consisting of multiple terms.

Syntax for creating a stopword table via SQL:

CREATE COLUMN TABLE stopwords
(
stopword_id VARCHAR(32) PRIMARY KEY,
list_id VARCHAR(32) NOT NULL,
language_code CHAR(2) NOT NULL,
term NVARCHAR(200) NOT NULL
);

Stopwords are stored in a column-store table with the following format:

-- to be able to use stopwords, a stopword table is needed:
CREATE COLUMN TABLE stopwords
(
  stopword_id    VARCHAR(32)    PRIMARY KEY,
  list_id        VARCHAR(32)    NOT NULL,
  language_code  VARCHAR(2),
  term           NVARCHAR(200)  NOT NULL
);

INSERT INTO stopwords VALUES('1', 'firstname', 'en', 'Dr');
INSERT INTO stopwords VALUES('2', 'firstname', 'en', 'Mr');
INSERT INTO stopwords VALUES('3', 'firstname', 'en', 'Mrs');
INSERT INTO stopwords VALUES('4', 'firstname', 'en', 'Sir');
INSERT INTO stopwords VALUES('5', 'firstname', 'en', 'Prof');

2.2 Term Mappings

Term mappings can be used to extend the search by adding additional search terms to the user input. When the user enters a search term, the search term is expanded, and synonyms, hypernyms, hyponyms, and so on are added. Term mappings are defined in a column table and can be changed at any time.

Syntax for creating a term mapping table via SQL:

CREATE COLUMN TABLE termmappings
(
    mapping_id    VARCHAR(32)   PRIMARY KEY,
    list_id       VARCHAR(32)   NOT NULL,
    language_code VARCHAR(2),
    term_1        NVARCHAR(200) NOT NULL,
    term_2        NVARCHAR(200) NOT NULL,
    weight        DECIMAL       NOT NULL
);

Term mappings are defined as a unidirectional replacement. For a term mapping definition of ‘term1’ -> ‘term2’, ‘term1’ is replaced with ‘term2’, but ‘term2’ is not replaced with ‘term1’. Term mappings are language-dependent.

Term mappings are stored in a column-store table with the following format:

-- and for term mappings another table:
CREATE COLUMN TABLE termmappings
(
  mapping_id    VARCHAR(32)   PRIMARY KEY,
  list_id       VARCHAR(32)   NOT NULL,
  language_code VARCHAR(2),
  term_1        NVARCHAR(255) NOT NULL,
  term_2        NVARCHAR(255) NOT NULL,
  weight        DECIMAL       NOT NULL
);

INSERT INTO termmappings VALUES('1', 'firstname', 'en', 'Michael', 'Mike', '0.9');
INSERT INTO termmappings VALUES('2', 'firstname', 'en', 'Mike', 'Michael',  '0.9');
INSERT INTO termmappings VALUES('3', 'firstname', 'en', 'Michael', 'Miky',  '0.8');
INSERT INTO termmappings VALUES('4', 'firstname', 'en', 'Miky', 'Michael',  '0.8');
INSERT INTO termmappings VALUES('5', 'lastname', 'en', 'Milly', 'Milliken', '0.9');
INSERT INTO termmappings VALUES('6', 'lastname', 'en', 'Mille', 'Milliken',  '0.9');

Step 3: Add Rule


The next step is to create a General Project in the ABAP perspective. Under that project, create a search rule set file as shown in Figure 4. After creating the search rule set, validate and activate it.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications, SAP HANA

Figure 4: Create a Search Rule Set

Step 4: Perform the search operation.


A new built-in procedure, SYS.EXECUTE_SEARCH_RULE_SET, is available in HANA to execute a defined rule set.

Execute the custom search rule set with the command below.

CALL SYS.EXECUTE_SEARCH_RULE_SET('
<query>
 <ruleset name="ZSearch_RuleProject:Search_Rule.searchruleset" />
 <column name="FIRSTNAME">Prof. Mike</column>
 <column name="LASTNAME">Milly</column>
 <resultsetcolumn name="_SCORE" />
 <resultsetcolumn name="_RULE_ID" />
 <resultsetcolumn name="_RULE_NUMBER" />
 <resultsetcolumn name="FIRSTNAME" />
 <resultsetcolumn name="LASTNAME" />
</query>
');

This command returns a user-defined result set, with the result set columns specified by name in the call procedure command. Figure 5 below shows the output of the call procedure.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications, SAP HANA

Figure 5: Result Set

If the following error message appears in the SQL console while executing the procedure:

Could not execute ‘CALL SYS.EXECUTE_SEARCH_RULE_SET(‘ <query> <ruleset …’

SAP DBTech JDBC: [258]: insufficient privilege: Not authorized

The recommendation would be to grant authorization on SYS.EXECUTE_SEARCH_RULE_SET to _SYS_REPO.
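A sketch of the corresponding statement, to be executed by a user with sufficient privileges:

GRANT EXECUTE ON SYS.EXECUTE_SEARCH_RULE_SET TO _SYS_REPO;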

An alternative way to hold the result set is to transfer the records into a result table using the commands below.

CREATE COLUMN TABLE RESULT_STORE (
_SCORE FLOAT,
_RULE_ID VARCHAR(255),
"FIRSTNAME" TEXT,
"LASTNAME" TEXT,
"ADDRESS" NVARCHAR(100),
"POSTCODE" NVARCHAR(20),
"CITYNAME" NVARCHAR(100),
"COUNTRYCODE" NVARCHAR(2)
);

CALL SYS.EXECUTE_SEARCH_RULE_SET('
<query>
 <ruleset name="ZSearch_RuleProject:Search_Rule.searchruleset" />
 <resulttablename name="RESULT_STORE"/>
 <column name="FIRSTNAME">Prof. Mike</column>
 <column name="LASTNAME">Milly</column>
</query>
');
 
Select * from RESULT_STORE;

Figure 6 below shows the output of querying the result table RESULT_STORE.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications, SAP HANA

Figure 6: Result Set

Step 5. Dynamic Rule Set


It is also possible to create a dynamic rule set by specifying the XML directly in the call procedure statement.

CALL SYS.EXECUTE_SEARCH_RULE_SET('
<query>
   <ruleset scoreSelection="firstRule">
      <attributeView name="TRAINING::ZAT_SEARCH_RULESET">
        <keyColumn name="ID"/>
      </attributeView>
      <termMappingsTableBased schema="SYSTEM" table="TERMMAPPINGS">
        <column name="FIRSTNAME">
          <list id="FIRSTNAME"/>
        </column>
        <column name="LASTNAME">
          <list id="LASTNAME"/>
        </column>
      </termMappingsTableBased>
      <rule name="Rule 1">
        <column minFuzziness="0.8"  name="FIRSTNAME">
          <ifMissing action="skipColumn"/>
        </column>
        <column minFuzziness="0.8"  name="LASTNAME">
          <ifMissing action="skipColumn"/>
        </column>
      </rule>
  </ruleset>
 <column name="FIRSTNAME">Mike</column>
 <column name="LASTNAME">Milly</column>
<resultsetcolumn name="_SCORE" />
 <resultsetcolumn name="_RULE_ID" />
 <resultsetcolumn name="_RULE_NUMBER" />
 <resultsetcolumn name="FIRSTNAME" />
 <resultsetcolumn name="LASTNAME" />
</query>
');

Figure 7 shows the output when specifying the dynamic rule set while calling the procedure.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications, SAP HANA

Figure 7: Result Set

S/4 Embedded Analytics – The Virtual Data Model

In this post, I will discuss the architecture of building a virtual data model (VDM) in S/4 HANA, using CDS Views (Core Data Services).

With the availability of the SAP HANA platform there has been a paradigm shift in the way business applications are developed at SAP. The rule-of-thumb is: Do as much as you can in the database to get the best performance.

CDS Views


To take advantage of SAP HANA for application development, SAP introduced a new data modeling infrastructure known as core data services. With CDS, data models are defined and consumed on the database rather than on the application server. CDS also offers capabilities beyond the traditional data modeling tools, including support for conceptual modeling and relationship definitions, built-in functions, and extensions

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP S/4HANA

Technically, CDS is an enhancement of SQL which provides a Data Definition Language (DDL) for defining semantically rich database tables/views (CDS entities) and user-defined types in the database. Some of the enhancements are:

◈ Expressions used for calculations and queries in the data model
◈ Associations on a conceptual level, replacing joins with simple path expressions in queries
◈ Annotations to enrich the data models with additional (domain-specific) metadata

Supported natively in both ABAP and SAP HANA, the data models are expressed in data definition language (DDL) and are defined as CDS views, which can be used in ABAP programs via Open SQL statements to enable access to the database. CDS provides a range of advantages for businesses and developers, including:

Semantically rich data models

CDS builds on the well-known entity relationship model and is declarative in nature, very close to conceptual thinking.

Compatibility across any database platform

CDS is generated into managed Open SQL views and is natively integrated into the SAP HANA layer. These views based on Open SQL are supported by all major database vendors

Efficiency

CDS offers a variety of highly efficient built-in functions — such as SQL operators, aggregations, and expressions — for creating views.

Support for annotations

The CDS syntax supports domain-specific annotations that can be easily evaluated by other components, such as the UI, analytics, and OData services.

Support for conceptual associations

CDS helps you define associations that serve as relationships between different views. Path expressions can be used to navigate along relations. Introducing an abstraction of foreign key relationships and joins, associations make navigation between entities consumable

Extensibility.

Customers can extend SAP-defined CDS views with fields that will be automatically added to the CDS view along with its usage hierarchy.

CDS Views for Embedded Analytics


Before HANA, querying large datasets in an ERP system could be time consuming and could degrade overall performance. Data warehouses were used to create persisted data models using advanced modelling techniques to improve query performance. SAP HANA takes the ERP performance issue out of the equation, allowing us to create virtual data models (VDMs) directly in ERP with incredible performance.

What is a VDM? A combination of semantically enriched CDS views that logically combine data from source ERP tables to create meaningful datasets that can be readily consumed in frontend tools

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP S/4HANA

As the slide suggests, at a high level, the VDM consists of CDS Views reading data from tables in the DB, which are then read by other CDS views, without any persistency, happening in real time.

Annotations


Annotations help qualify the CDS views and provide semantics and meaning to fields within a CDS view:

◈ They can be applied to the entire CDS View entity;
◈ They can be used to specify semantics to fields in the SELECT list;
◈ Are always preceded by the @ symbol.

Below is SAP’s list of annotations:

https://help.sap.com/doc/abapdocu_750_index_htm/7.50/en-US/abencds_annotations_sap.htm

It can also be reached by following the path: ABAP – Keyword Documentation > ABAP – Dictionary > ABAP CDS in ABAP Dictionary > ABAP CDS – Syntax > ABAP CDS – Annotations

Below are a few of the key annotations that define entire CDS Views specific to VDMs:

Annotation | Description
VDM.viewType | Defines the type of a VDM view
Analytics.dataCategory | Analytic queries can be defined on top of CDS views. By specifying the data category, the developer can provide directives and hints telling the analytic manager how to interpret individual entities, for example
Analytics.dataExtraction.enabled | Application developers can use this annotation to mark views that are suitable for data replication (for example, delta capabilities must be provided for mass data)
Analytics.query | By tagging the CDS view, the developer can specify which views will be exposed to the analytic manager. This type of view will be interpreted as an analytic query by the analytic manager.
ObjectModel.dataCategory | Defines the category of data (#TEXT or #HIERARCHY)
ObjectModel.representativeKey | Most specific element (field or managed association) of the primary key (indicated by the keyword KEY) that represents the entity which the view is based on
AccessControl.authorizationCheck | Enables row-level authorization on a specific CDS view

Because I’m also a BW developer, I’ve taken some of the most important annotations for VDMs and compared them to BW objects:

Annotation: @VDM.viewType

Value | Description | BW Equivalent
#BASIC | Views that form the core data basis without data redundancies. This is your basic SELECT from a physical table in the database. | This would be equivalent to an ADSO, where your raw data is present, with some ETL.
#COMPOSITE | Views that provide data derived and/or composed from the BASIC views. | This would be equivalent to a Composite Provider, which is the virtual layer which allows for joins and unions.
#CONSUMPTION | Views that serve for specific application purposes and may be defined based upon public interface (for example, BASIC and COMPOSITE) views. | This would be equivalent to a BW Query, where we specify a particular layout, variables, RKFs and CKFs, totals, etc.

Annotation: @Analytics.dataCategory

Value | Description | BW Equivalent
#DIMENSION | Views representing master data should be annotated with ObjectModel.dataCategory: #DIMENSION | Equivalent to InfoObject attributes
#FACT | This value indicates that the entity represents transactional data (center of the star schema). Usually it contains the measures. Typically, these views are necessary for replication; therefore, they should not be joined with master data views. | Equivalent to an ADSO loading from a single datasource, without any master data links
#CUBE | The #CUBE value (like #FACT) also represents factual data, but #CUBE does not have to be without redundancy. This means joins with master data are possible. Queries are usually built on top of #CUBE, where data is replicated from facts. | Equivalent to an ADSO loading from single/multiple datasources, with master data linkage to attributes/texts/hierarchies

Annotation: @Analytics.dataExtraction.enabled

Value | Description | BW Equivalent
#TRUE | Application developers can use this annotation to mark views that are suitable for data replication (for example, delta capabilities must be provided for mass data). | N/A
#FALSE | This view is not suitable for data replication. | N/A

Annotation: @Analytics.query

Value | Description | BW Equivalent
#TRUE | By tagging the CDS view, the developer can specify which views will be exposed to the analytic manager. This type of view will be interpreted as an analytic query by the analytic manager. This will display the consumption view in frontend tools such as BOBJ when searching for it in the application folder. | BW or BEx Query. This view will be enabled to be run in RSRT in the backend.
#FALSE | The query view will not be exposed to the analytic manager. | N/A

Annotation: @ObjectModel.dataCategory

Value | Description | BW Equivalent
#TEXT | Indicates that the annotated entity represents texts. Usually one key element is of type language. NOTE: Within the VDM a text view is always language-dependent. | Equivalent to InfoObject texts/descriptions
#HIERARCHY | Indicates that the entity represents hierarchy-related data. This could be header information or structure information. | Equivalent to InfoObject hierarchies

Annotation: @ObjectModel.representativekey

◈ Most specific element (field or managed association) of the primary key (indicated by the keyword KEY) that represents the entity which the view is based on. This element shall be used as the anchor for defining foreign key relationships (except for text views): The foreign key field corresponding to the representative key represents the entity. As such it can be called representative foreign key element. The foreign key association is defined on the representative foreign key element. The name of the representative key typically equals the name of the entity represented by the view.

◈ For non-text views it is the key element for which the view serves as a value list/check table. For text views (@ObjectModel.dataCategory: #TEXT) it identifies the key element to which the text fields relate to.

◈ The representative key element has to be modelled explicitly even if there is only one primary key field (no implicit derivation).

◈ A view may only become a target of a foreign key association if it has a representative key element (exception: language-dependent text views may not be used as targets of foreign key relationships).

Annotation: @AccessControl.authorizationCheck

Value | Description | BW Equivalent
#NOT_REQUIRED | A Data Control Language object (row-level security object for CDS views) does NOT exist for the CDS view; therefore no security is enforced for the CDS view, and all data is displayed. | N/A
#CHECK | A Data Control Language object (row-level security object for CDS views) EXISTS for the CDS view and row-level security is to be enforced for the CDS view. | Somewhat equivalent to BW authorizations
#NOT_ALLOWED | A Data Control Language object (row-level security object for CDS views) exists for the CDS view but row-level security is NOT to be enforced for the CDS view. | N/A

Now that we’ve seen the main CDS View annotations, we’ll look at a more detailed architecture of the VDM:

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP S/4HANA

Notice how the concepts are similar to those of a BW environment. We have text and dimension CDS views. We can build these once and re-utilize them across any number of transactional CDS views.

Think of this as building a material dimension and text views. You only need to build this once, as the base tables won’t change (MARA and MAKT). But for every transactional model built (Sales, Deliveries, COPA, Inventory, etc.) where you require material description or attributes, you can re-utilize that dimension and text view.

Below are more details regarding each of the CDS Views mentioned above:

TEXT 


SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP S/4HANA

@AbapCatalog.sqlViewName: 'ZBTMATERIAL'
@ObjectModel.dataCategory: #TEXT
@Analytics: { dataCategory: #TEXT, dataExtraction.enabled: true }
@AccessControl.authorizationCheck: #NOT_REQUIRED
@VDM.viewType: #BASIC
@EndUserText.label: 'Material Text'
@ObjectModel.representativeKey: 'Material'

define view Zbt_Material as
  select from makt
{
  @ObjectModel.text.element: [ 'MaterialName' ]
  key makt.matnr as Material,
  @Semantics.language: true
  key makt.spras as Language,
  @Semantics.text: true
      makt.maktx as MaterialName
}
where makt.spras = $session.system_language

DIMENSION


SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP S/4HANA

@AbapCatalog.sqlViewName: 'ZBDMATERIAL'
@Analytics: { dataCategory: #DIMENSION, dataExtraction.enabled: true }
@VDM.viewType: #BASIC
@AbapCatalog.compiler.compareFilter: true
@EndUserText.label: 'Material Attributes'
@ObjectModel.representativeKey: 'Material'
@AccessControl.authorizationCheck: #NOT_REQUIRED

define view Zbd_Material as select from mara 

association [0..1] to Zbt_Material           as _Text           on $projection.Material      = _Text.Material
association [0..1] to Zbd_MaterialType       as _MaterialType   on $projection.MaterialType  = _MaterialType.MaterialType
association [0..1] to Zbd_MaterialGroup      as _MaterialGroup  on $projection.MaterialGroup = _MaterialGroup.MaterialGroup
association [0..1] to I_UnitOfMeasure        as _BaseUnit       on $projection.MaterialBaseUnit = _BaseUnit.UnitOfMeasure
association [0..1] to I_UnitOfMeasure        as _WeightUnit     on $projection.MaterialWeightUnit = _WeightUnit.UnitOfMeasure
association [0..1] to Zbt_Storage_Conditions as _StorCond       on $projection.StorageCondition = _StorCond.StorageCond

     { @EndUserText.label: 'Material'
       @ObjectModel.text.association: '_Text'
          key mara.matnr as Material, _Text,
          
       @ObjectModel.foreignKey.association: '_MaterialType'
       @EndUserText.label: 'Material Type'  
              mara.mtart as MaterialType, _MaterialType,
       @ObjectModel.foreignKey.association: '_MaterialGroup'
       @EndUserText.label: 'Material Group'        
              mara.matkl as MaterialGroup, _MaterialGroup,
       @EndUserText.label: 'Base Unit of Measure'  
       @Semantics.unitOfMeasure: true
       @ObjectModel.foreignKey.association: '_BaseUnit'
              mara.meins as MaterialBaseUnit, _BaseUnit,                
       @EndUserText.label: 'Gross Weight'  
       @Semantics.quantity.unitOfMeasure: 'MaterialWeightUnit'              
       @DefaultAggregation: #NONE
              mara.brgew as MaterialGrossWeight,
       @EndUserText.label: 'Net Weight'  
       @Semantics.quantity.unitOfMeasure: 'MaterialWeightUnit'
       @DefaultAggregation: #NONE
                     mara.ntgew as MaterialNetWeight,              
       @EndUserText.label: 'Weight Unit'
       @Semantics.unitOfMeasure: true
       @ObjectModel.foreignKey.association: '_WeightUnit'
              mara.gewei as MaterialWeightUnit, _WeightUnit,
                            
              mara.mfrnr as MaterialManufacturerNumber,
              mara.mfrpn as MaterialManufacturerPartNumber,
       @EndUserText.label: 'Storage Condition'
       @ObjectModel.text.association: '_StorCond'              
              mara.raube as StorageCondition, _StorCond,
       @EndUserText.label: 'Product Hierarchy'       
              mara.prdha as ProductHierarchy  
       
}

BASIC/FACT


SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP S/4HANA

Notice how we haven’t applied any semantics (Annotations) to the fields in the select list, since this is the BASIC view, and we’re creating the FACT of our model. Semantic annotations will be applied at the composite view.

Also note that a virtual data model can comprise multiple BASIC/FACT views, depending on requirements.

@AbapCatalog.sqlViewName: 'ZBFACDOCAXX'
@AbapCatalog.compiler.compareFilter: true
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Universal Journal Entry, Basic'
@VDM.viewType: #BASIC
@Analytics.dataCategory: #FACT 
@Analytics.dataExtraction.enabled: true
    
define view ZBF_ACDOCA_XX as select from acdoca 

{
       rbukrs as CompCode,
       gjahr  as FiscalYear,
       poper  as Period,
       racct  as GLAccount,
       matnr  as Material,
       werks  as Plant,
       
// UOMs - Currencies
       runit  as UOM,
       rhcur  as CCCurr,
       
// Measures
       msl    as Quantity,
       hsl    as AmtCC              
}

COMPOSITE/CUBE


SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP S/4HANA

@AbapCatalog.sqlViewName: 'ZCCACDOCAXX'
@AbapCatalog.compiler.compareFilter: true
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Universal Journal Entry, Composite'
@VDM.viewType: #COMPOSITE
@Analytics.dataCategory: #CUBE
@Analytics.dataExtraction.enabled: true

define view ZCC_ACDOCA_XX as select from ZBF_ACDOCA_XX 

association [0..1] to I_Material    as _Mat      on $projection.Material = _Mat.Material
association [0..1] to I_Plant       as _Plant    on $projection.Plant    = _Plant.Plant
association [0..1] to I_CompanyCode as _CompCode on $projection.CompCode = _CompCode.CompanyCode

{
      @ObjectModel.foreignKey.association: '_CompCode'
      CompCode, _CompCode,
      FiscalYear, 
      Period, 
      GLAccount, 
      @ObjectModel.foreignKey.association: '_Mat'
      Material, _Mat,
      @ObjectModel.foreignKey.association: '_Plant'
      Plant, _Plant,
      //UOMs - Currencies
      @Semantics.unitOfMeasure: true
      @EndUserText.label: 'Base UOM'
      UOM, 
      @Semantics.currencyCode: true
      @EndUserText.label: 'Comp. Code Curr.'
      CCCurr, 
//Measures 
      @DefaultAggregation: #SUM
      @EndUserText.label: 'Quantity'
      @Semantics.quantity.unitOfMeasure: 'UOM'
      Quantity, 
      @DefaultAggregation: #SUM
      @EndUserText.label: 'Amount CC'
      @Semantics.amount.currencyCode: 'CCCurr'
      AmtCC
}
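
Before building the consumption view on top, the cube can be sanity-checked directly in the HANA SQL console by querying the SQL view generated through @AbapCatalog.sqlViewName. This is only a minimal sketch: the MANDT client column and client 100 are assumptions, and the column names follow the CDS element names.

-- Minimal sketch: aggregate the cube's measures via its generated SQL view
SELECT COMPCODE,
       FISCALYEAR,
       SUM(AMTCC) AS TOTAL_AMT_CC
  FROM ZCCACDOCAXX
 WHERE MANDT = '100'
 GROUP BY COMPCODE, FISCALYEAR;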

CONSUMPTION


SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP S/4HANA

@AbapCatalog.sqlViewName: 'ZCCACDOCAXXQ001'
@AbapCatalog.compiler.compareFilter: true
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Universal Journal Entry, Query'
@VDM.viewType: #CONSUMPTION
@Analytics.query: true 
define view ZCC_ACDOCA_XX_Q001 as select from ZCC_ACDOCA_XX 

{
    //ZCC_ACDOCA_XX 
    @Consumption.filter: {mandatory: false, selectionType: #SINGLE, multipleSelections: true}
    CompCode, 
    FiscalYear, 
    Period, 
    GLAccount, 
    Material, 
    @AnalyticsDetails.query.display: #KEY_TEXT
    @AnalyticsDetails.query.axis: #ROWS  
    Plant, 
  
//UOMs - Currencies  
    UOM, 
    CCCurr, 
    
//Measures    
    Quantity, 
    AmtCC 

}

ADVANTAGES


One of the main advantages of the VDM is that you build it once, and it can be consumed in a multitude of front-end tools:
  • BOBJ
    • WebI
    • Analysis for Office
    • Crystal
    • OLAP
  • Lumira Discovery (even though it’s being discontinued in favor of Analytics Cloud)
  • Analytics Cloud
  • ALV Grid
  • OData services
  • Fiori
In addition to that, because the CDS View exists on the application layer, we can leverage the existing ABAP security model through PFCG. For CDS View security we use an artifact called the Data Control Language (DCL). For more information on DCL, please refer to my blog below:

https://blogs.sap.com/2017/05/22/cds-view-row-level-authorizations-with-data-control-language-dcl/

Once you learn how to model using CDS Views, the time to develop a VDM can be very fast, translating to delivering quick and efficient reporting solutions to customers.

PERFORMANCE


Regarding performance, the VDM is amazing for high volume high aggregation scenarios. I built a model at a client on Purchase Orders, going from the header down to the schedule line, with many master data joins. The schedule line table had about 250 Million records, across 8 years of data at the time.

When running a summarized report with no filters, with 3 measures on the columns and just the year on the drill down (8 rows, 3 columns, 24 data points), the report returned in 1-2 seconds!!!

Now, using the same data model above, when I ran a very detailed report, going down to the schedule line item, bringing many different attributes, with about 30 rows in the drilldown, for an entire month, that report took 10 minutes to run. Why? Because it was doing all the joins and calculations I had defined in the model in real time, and was very process intensive.

The scenario above would be a great candidate for a persisted solution in BW, where the ETL has taken care of any calculations and data joins. The data is persisted in an ADSO and when a report is run, it is simply selecting the data based on the criteria, without the need of any additional processing, with the expectation of performance in the sub second region.

Microsoft Dynamics 365 CRM adapter for SAP SDI

$
0
0

1. Overview


The Advantco Dynamics 365 adapter is an adapter for SAP HANA Smart Data Integration (SDI); its purpose is to batch-load or replicate changed data in real time from Dynamics 365 CRM to SAP HANA tables.

This blog describes how to replicate Account data from Dynamics 365 CRM into HANA tables.

2. Adapter Architecture


SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials

The Dynamics 365 Adapter acts as a bridge. It opens a connection to Dynamics 365 CRM, reads the source data, and translates the values into HANA data types.

3. Key features


Below are some of the main features covered by this adapter.

◈ Support for the Microsoft Dynamics CRM Web API (REST API)
◈ User authentication via OAuth 2.0
◈ Support for proxy authentication
◈ Real-time change data capture (CDC)
◈ Improved performance through bulk operations
◈ Bi-directional updates

4. Account Retrieval Use Case


The goal is to make Account data from Dynamics 365 CRM available in SAP HANA by querying the remote Account entity through a virtual table (the data is not physically moved to the cloud, but remains in its original source).

4.1. Create a Remote Source

SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials

We can browse the metadata tables provided by the adapter; each table corresponds to one entity.

SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials
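
For reference, a remote source can also be created with SQL instead of the UI. This is only a minimal sketch: the remote source name, adapter name, agent name and configuration string are assumptions and must be taken from the Advantco adapter documentation.

-- Hypothetical example: create the remote source via SQL (names and configuration are assumptions)
CREATE REMOTE SOURCE "D365_CRM"
  ADAPTER "Dynamics365Adapter" AT LOCATION AGENT "DPAgent_01"
  CONFIGURATION '<adapter-specific configuration>'
  WITH CREDENTIAL TYPE 'PASSWORD' USING '<adapter-specific credentials>';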

4.2. Create a Virtual Table

After creating a remote source, we can create a virtual table to retrieve data for account as below:

SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials

SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials
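
The virtual table can also be created with SQL once the remote object path is known from the metadata browser. A minimal sketch, with the schema, remote source and object path as assumptions:

-- Hypothetical example: expose the remote "account" entity as a virtual table
CREATE VIRTUAL TABLE "ADVANTCO"."VT_ACCOUNT"
  AT "D365_CRM"."<NULL>"."<NULL>"."account";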

4.3. Query on Virtual table

Open the SQL console, then enter and run the SQL statement; the result is shown below:

SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials
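
The statement used here is simply a projection on the virtual table; the data is fetched from Dynamics 365 at query time. A minimal sketch, with schema, table and column names as assumptions:

-- Hypothetical example: query the remote Account data through the virtual table
SELECT TOP 10 "accountid", "name"
  FROM "ADVANTCO"."VT_ACCOUNT";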

5. Account Replication Use Case


The goal is to make the Account data from Dynamics 365 available in SAP HANA by using the replication task feature.

5.1. Create a Replication Task

Go to the workbench editor: https://myhana.us3.hana.ondemand.com/sap/hana/ide/editor/

Create the Replication task as below:

SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials

5.2. Run replication task

After saving the replication task, click Run Task; the result is shown below:

SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials

5.3. View result

After the replication task completes, go to the schema to see the new tables:

SAP HANA smart data integration, SAP HANA Certifications, SAP HANA Learning, SAP HANA Tutorials and Materials
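
As a quick verification, the row count of the replicated target table can be checked in the SQL console. A minimal sketch; the target schema and table name depend on how the replication task was defined and are assumptions here:

-- Hypothetical example: confirm that the Account data arrived in the target table
SELECT COUNT(*) AS REPLICATED_ACCOUNTS
  FROM "DYNAMICS_REP"."ACCOUNT";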

S/4HANA – What to do about Planning and Consolidation – The Options

$
0
0
As organisations continue to convert their SAP ERP systems to S/4HANA, one area that pops up as a roadmap item is how to handle requirements for Planning and Financial Consolidation.

In some cases, this provides an opportunity to simplify the landscape and in others prompts a review of what organisations require from a planning solution and what they require from a financial consolidation tool. These discussions are made more confusing for SAP customers because SAP provides a number of different solutions that cover the topics of planning and consolidation, and at first glance it can be difficult to understand which should be used.

SAP S/4HANA, SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Planning, SAP HANA Certifications

Figure 1 – S/4HANA Consolidation Overview Fiori App

Which solution you adopt will depend upon the level of complexity of your planning and consolidation requirements. I would recommend that each requirement is considered separately as they will potentially have different stakeholders who will value different things about the solutions. Having considered them independently, you can then merge the results to see if you can get some economies of scale by utilising one solution for both requirements.

Planning


If we first consider the planning requirements, there are two dimensions of complexity to consider. The first is how complex the planning process is in terms of workflow steps and the complexity of the calculation and simulation needs. The second is how sophisticated the analytics/reporting requirements are – do we need basic tabular financial reports or dashboards / stories, for instance?

As a rule of thumb, the more complex the planning processes, the more likely that SAP BPC will be the right fit. The more complex the reporting requirements are, the greater chance that SAP Analytics Cloud will be the right fit. Where both dimensions are complex a “Hybrid” approach might make sense to get the best from both. The figure below shows this in a picture.

SAP S/4HANA, SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Planning, SAP HANA Certifications

Figure 2 : Planning Decision Matrix

Consolidation


For consolidation, we can consider the same two dimensions, but from a financial consolidation perspective: how complex is the legal consolidation process, and does it need data from SAP and non-SAP systems? How deep do the reporting and simulation capabilities need to be, from simple reports to complex simulation scenarios / interactive dashboards?

In a similar way to planning, the more complex the consolidation process is the more likely that SAP BPC will fit the bill. For less complex scenarios where the data required for the consolidation is in the S/4HANA system – the more likely it is that S/4HANA Consolidation will meet your requirements. For the reporting dimension, the arguments are much the same as for planning, but your organisation’s requirements may be different as they could be driven by different stakeholders. The figure below shows this in a picture.

SAP S/4HANA, SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Planning, SAP HANA Certifications

Figure 3 : Consolidation Decision Matrix

Conclusions


So best case in terms of simplification, could be that you can just use the native features of S/4HANA for both Consolidation and Planning. This is likely to work for smaller organisations, who are not geographically distributed and have a low complexity of requirements.

For many organisations S/4HANA alone isn’t going to be enough and they will need to use SAP Analytics Cloud for capturing the planning input and reporting the results of both planning and consolidation activities.

For organisations that have even more complexity in either the planning or consolidation space, it will be necessary to consider SAP BPC on BW/4HANA. I would expect that in most of these scenarios, the reporting requirements are also going to be complex, so SAP Analytics Cloud will also be needed, as shown in the picture below

SAP S/4HANA, SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Planning, SAP HANA Certifications

Your SAP on Azure: SAP HANA Express on Azure Kubernetes Cluster (AKS)

$
0
0
Using a docker image to install SAP HANA express edition can shorten the deployment time and ensure the consistency between environments. The easy way to use it is to build a Kubernetes cluster using Microsoft Azure Container Service and deploy containers in the cloud.

A docker container is a package of the libraries and system settings required to run an application. It saves the time needed to provide a working environment, so you can focus on the target database configuration. It’s especially useful in environments where you need to provide separate HANA instances for many developers.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

Azure Container Service simplifies the creation and configuration of the Kubernetes cluster and management of the entire docker environment. The nodes of the cluster are managed by Azure while your responsibility is to maintain the running application.

CREATE THE KUBERNETES CLUSTER

Creation of Kubernetes cluster in Microsoft Azure is a relatively easy task. During the initial configuration, you will be asked to provide a service principal that will be used to manage the Azure resources. Log in to the portal, go to the Azure Active Directory and create new application registration:

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

Save the settings. It is not important what you type in the Sign-on URL. Generate the key in the application settings and copy it together with the application ID – you will be asked for those details in a few minutes.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

To deploy a Kubernetes cluster you need to create an Azure Container Service (AKS) resource (preview). In the first step, you are asked to choose a cluster name and select a resource group in which it will be created.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

You need to provide the basic configuration on the second screen. In the Service Principal ID and Service Principal Client Secret enter the information generated during the app registration. Choose the number of nodes and their size – I chose two DS11_V2 servers which fulfill the SAP HANA database memory and CPU requirements:

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

To connect to the cluster you need the Azure CLI. You also need to install the Kubernetes command-line client (kubectl):

az aks install-cli

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

Log in to your Azure account and connect with the Kubernetes cluster

az login 
az aks get-credentials --resource-group=<resource group name> --name=<cluster name>

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

Once we have established the connection we can display the Kubernetes cluster nodes:

kubectl get nodes

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

You can validate the information in the Azure portal:

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

The nodes of the clusters are standard virtual machines in a single Availability Set:

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

DEPLOY SAP HANA

Downloading the SAP HANA database image from the Docker store requires authentication. Provide your username and password to create a secret:

kubectl create secret docker-registry docker-secret --docker-server=https://index.docker.io/v1/ --docker-username=<username> --docker-password=<password> --docker-email=<e-mail>

Copy the deployment script and save it to your local drive:

kind: ConfigMap
apiVersion: v1
metadata:
  creationTimestamp: 2018-01-18T19:14:38Z
  name: hxe-pass
data:
  password.json: |+
    {"master_password" : "HXEHana1"}
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-vol-hxe
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 150Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/hxe_pv"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hxe-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: hxe-pod
  labels:
    name: hxe-pod
spec:
  initContainers:
    - name: install
      image: busybox
      command: [ 'sh', '-c', 'chown 12000:79 /hana/mounts' ]
      volumeMounts:
        - name: hxe-data
          mountPath: /hana/mounts
  restartPolicy: OnFailure
  volumes:
    - name: hxe-data
      persistentVolumeClaim:
         claimName: hxe-pvc
    - name: hxe-config
      configMap:
         name: hxe-pass
  imagePullSecrets:
  - name: docker-secret
  containers:
  - name: hxe-container
    image: "store/saplabs/hanaexpress:2.00.022.00.20171211.1"
    ports:
      - containerPort: 39013
        name: port1
      - containerPort: 39015
        name: port2
      - containerPort: 39017
        name: port3
      - containerPort: 8090
        name: port4
      - containerPort: 39041
        name: port5
      - containerPort: 59013
        name: port6
    args: [ "--agree-to-sap-license", "--dont-check-system", "--passwords-url", "file:///hana/hxeconfig/password.json" ]
    volumeMounts:
      - name: hxe-data
        mountPath: /hana/mounts
      - name: hxe-config
        mountPath: /hana/hxeconfig

Deploy the image using the command:

kubectl create -f hana.yaml

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

The deployment takes several minutes to finish and can be monitored with the command below. When you see the message Started container, the process is complete.

kubectl describe pod hxe-pod

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

You can now log in to the container and verify that the instance is running:

kubectl exec -it hxe-pod bash
HDB info
hdbsql -i 90 -d HXE -u SYSTEM -p <password>

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications
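
Once connected with hdbsql, a couple of standard monitoring views confirm that the instance and its services are up. A minimal sketch:

-- Check the database and the status of its services
SELECT DATABASE_NAME, VERSION, USAGE FROM M_DATABASE;
SELECT SERVICE_NAME, PORT, ACTIVE_STATUS FROM M_SERVICES;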

You can see on which node the pod is running by executing:

kubectl get pods -o wide

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

NODE SELECTION: DIRECT ASSIGNMENT

You can directly assign the node to which the container should be deployed by adding a nodeSelector section:

  containers:
  - name: hxe-container
    image: "store/saplabs/hanaexpress:2.00.022.00.20171211.1"
    ports:
      - containerPort: 39013
        name: port1
      - containerPort: 39015
        name: port2
      - containerPort: 39017
        name: port3
      - containerPort: 8090
        name: port4
      - containerPort: 39041
        name: port5
      - containerPort: 59013
        name: port6
    args: [ "--agree-to-sap-license", "--dont-check-system", "--passwords-url", "file:///hana/hxeconfig/password.json" ]
    volumeMounts:
      - name: hxe-data
        mountPath: /hana/mounts
      - name: hxe-config
        mountPath: /hana/hxeconfig
  nodeSelector:
    kubernetes.io/hostname: aks-agentpool-25335148-1

Deploy the cluster using the modified configuration file.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

NODE SELECTION: MEMORY REQUIREMENTS

Another way to help the cluster choose a suitable node is to specify a minimum memory request. A virtual machine with SAP HANA express edition requires at least 8 GB of memory, but as the docker container should consume a smaller amount of RAM, I have requested only 7 GB.

  containers:
  - name: hxe-container
    image: "store/saplabs/hanaexpress:2.00.022.00.20171211.1"
    ports:
      - containerPort: 39013
        name: port1
      - containerPort: 39015
        name: port2
      - containerPort: 39017
        name: port3
      - containerPort: 8090
        name: port4
      - containerPort: 39041
        name: port5
      - containerPort: 59013
        name: port6
    args: [ "--agree-to-sap-license", "--dont-check-system", "--passwords-url", "file:///hana/hxeconfig/password.json" ]
    volumeMounts:
      - name: hxe-data
        mountPath: /hana/mounts
      - name: hxe-config
        mountPath: /hana/hxeconfig
    resources:
      requests:
        memory: "7Gi"

The current hardware utilization can be displayed using:

kubectl top nodes

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

The two previously deployed containers consume more than 10 GB of memory on node 1, therefore, the cluster creates the third HANA instance on node 0.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

SCALE-OUT THE KUBERNETES CLUSTER

Let’s try to create one more instance:

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

As there is not enough available memory on any of the nodes, the container was not deployed and has status Pending. In that case, you can scale out the Kubernetes cluster and add a third node:

az aks scale --name <resource name> --resource-group <resource group> --node-count <nodes>

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

When we check the pod status again, we can see that the hxe-pod4 is assigned to the newly created node 2.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

KUBERNETES CLUSTER DASHBOARD

Instead of using the command-line interface, some tasks can be executed from the Kubernetes Dashboard. The command below creates a proxy to the Kubernetes engine in Azure and lets you reach the web page through localhost:

az aks browse --resource-group <resource group> --name <cluster name>

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

Open a browser and navigate to http://127.0.0.1:8001/ to display the dashboard.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

REMOTE DATABASE ACCESS

If you wish to access the database from the Internet you can configure a load balancer. Executing the commands below creates a new service and assigns a public IP.

kubectl expose pod <pod name> --name=<service name> --type=LoadBalancer
kubectl get service <service name>

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Guides, SAP HANA Certifications

Business Partner in S4 HANA – Customer Vendor Integration

$
0
0
It’s been almost 2 years since I started my journey in the SAP world. I started in the technical domain for some time and then landed in the functional domain SAP SD (Sales and Distribution).

I must say that even after learning and practising, it still feels like there is a lot to learn. My office colleagues help me learn new things every day.

It is always exciting to come across new technology.

As we all know, the world is changing rapidly, and our businesses need:
◈ Enhancements
◈ Real-time analytics
◈ On-demand processes

So S4 HANA comes with new technology that offers:

◈ An improved user experience
◈ Simplicity
◈ Cloud-based technology

It is very difficult to meet customer requirements nowadays, as things around us are changing rapidly. Customers need the most optimised solution for their business.

S4 HANA (SAP Business Suite 4 SAP HANA) is the solution for all the challenges businesses are facing today.

This blog attempts to give information on the configuration of the Customer/Vendor Integration process.

What is Business partner?

A Business Partner can be a person, organization, group of people, or a group of organizations, in which a company has a business interest. It is the single point of entry to create, edit, and display the master data for business partners, customers and vendors. A Business Partner consists of general data like name, address, bank information, etc. as well as role specific information i.e. customer/vendor/employee data.

Why Business Partner?

1. Data Redundancy: A person can be a vendor as well as a customer; in traditional ERP we have to create two objects. With Business Partner only a single object is required.

2. Multiple Transactions: To create a vendor or customer we have to go to different transactions. With Business Partner the single transaction “BP” is used for both objects, centrally managing master data for business partners, customers, and vendors.

3. Customer and Vendor Integration to Business Partner:

◈ Customer
◈ Vendor
◈ Business partner

In Traditional ERP, we have Customer and Vendor as different object.

For Customer, we have following views:

◈ General
◈ Finance
◈ Sales

For Vendor, we have following views:

◈ General
◈ Finance
◈ Purchasing

For creating there objects we need different Transaction code:

Customer: XD01, FD01, VD01

Vendor: XK01, FK01, MK01

In S4 HANA both customer and vendor get created with Transaction BP.

4. Business Partner Customizing

4.1 Activation of PPO (Postprocessing Office) Requests

SPRO->IMG->Cross-Application Components–> Master Data Synchronization –> Synchronization Control–> Activate PPO Request for platform objects in the dialog

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

After this click on the check box.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

4.2. Activate synchronization between Business Partner and Customer/Vendor

SPRO->IMG->Cross-Application Components–> Master Data Synchronization –> Synchronization Control–> Activate Synchronization Options

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

4.3.Define BP Roles

SPRO->IMG->Cross-Application Components–> SAP Business Partner–>Business Partner–> Basic Settings –> Business Partner Roles–> Define BP Roles

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Maintain Data for Business Roles and SAVE.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

4.4.Define Number Assignment for Direction BP to Vendor/Customer

SPRO->IMG->Cross-Application Components–> Master Data Synchronization–>Customer/Vendor Integration –> Business Partner Settings –> Settings for Vendor Integration –> Field Assignment for Vendor Integration–> Assign Keys

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Maintain the entries and select the check box if you want the Business Partner number and the Vendor number to be the same.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

4.5.Define BP Number Range and assigning to BP Grouping

Cross-Application Components–> SAP Business Partner–> Business Partner–> Basic Settings –>Number Ranges and Groupings

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

4.6. Managing Fields in Vendor and Customer Master.

IMG>Logistics general>Business partner>Vendor>Define account grp and field selection

IMG>Logistics general>Business partner>Customer>Define account group and field selection

5. BUSINESS PARTNER Vendor and Customer Integration.

Business Partner Tables:

BP000       Business partner master (general data)
BUT000      General data I
BUT001      General data II
BP001       FS-specific attributes
BUT0BANK    Bank details
BD001       Assign customer – partner
BP030       Address
BC001       Assign vendor – partner
BUT020      BP: addresses
BUT021      BP: address usage
BP1000      Roles
BUT100      BP roles
BUT0BK      BP: bank details
BAS tables  Business address services tables
KNBK        Bank details
SANS1       Addresses
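
After the CVI synchronization has run, the link between a Business Partner and its customer/vendor can also be checked with a simple query on the standard link tables. A minimal sketch, assuming the standard link tables CVI_CUST_LINK and CVI_VEND_LINK and the example BP number 1000173:

-- Sketch: show the customer and vendor linked to a Business Partner
SELECT b.PARTNER, c.CUSTOMER, v.VENDOR
  FROM BUT000 AS b
  LEFT JOIN CVI_CUST_LINK AS c ON c.PARTNER_GUID = b.PARTNER_GUID
  LEFT JOIN CVI_VEND_LINK AS v ON v.PARTNER_GUID = b.PARTNER_GUID
 WHERE b.PARTNER = '1000173';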

5.1 In S4 HANA, when you enter transaction code XD01 it redirects to the BP transaction, and a prompt asks you to choose Person, Organization, or Group.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Here you can see that default BP role set as FLCU00 (Customer Financial Accounting)

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

5.2. In S4 HANA, when you enter transaction code VD01 it redirects to the BP transaction, and a prompt asks you to choose Person, Organization, or Group.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Here you can see that default BP Role is FLVN00 (FI Vendor).

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

6. Business Partner creation

6.1. General Data

Transaction code: BP

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

You will get a default role 000000 (Business Partner General).

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Select the grouping from dropdown which decides the number range.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Enter the all mandatory fields for Business Partner General and then save.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

After entering data click on save.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Business Partner 1000173 (general) is created.

Table BUT000 gets updated

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

6.2. Vendor

To create FI Vendor, we can extend the Business Partner already created with Role FLVN00 (FI Vendor).

The role can be selected from dropdown.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Click on Company Code.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

After selecting Company code press Enter and then fill the Recon account and other mandatory fields.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Note: Here you can see that in the Company Code section the vendor number is shown as <External> (refer to point 4.4).

After saving, the vendor number will be the same as the BP number.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Tables LFA1 and LFB1 get updated.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Extending Vendor to Purchasing Data, select Role FLVN01 from dropdown.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Select Purchasing Data and then choose Purchasing Org and press enter.

Enter all the mandatory fields.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Table LFM1 gets updated.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

6.3. Customer

Extend BP to Customer Role FLCU00 for Financial Accounting.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Click on Company code and enter all the mandatory fields and then save.

Note: Here in the Company Code section the customer number is presented as <EXTERNAL>.

After saving, it is updated to match the BP number.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Extend Customer for Sales Data with Role FLCU01.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Press Sales and Distribution data button and fill all the mandatory fields.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

Then Save.

In display mode, you can check all the roles the Business Partner has.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

In table BUT100 we can also check the roles assigned to the Business Partner.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

After entering all the Data save the customer.

Tables KNA1 and KNVV get updated.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, HANA Guides

We are now able to create a Business Partner successfully.

Technical details about data aging Part II

$
0
0
To understand data aging in detail, we will go a little deeper in this part of the blog series.

As an example we will use table CDPOS. Data aging is in use and aging runs have already been executed. As a result we get this data distribution.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio
SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

So if we select the record count via SE16 we will get the following result:

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

Here we can see the partitioning attributes and the record count of each partition

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> all partitions are partially loaded, as is usually the case during business hours
=> we also see the partitioning as configured in the ABAP backend
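
The load state of each partition can also be checked with plain SQL on the monitoring view M_CS_TABLES. A minimal sketch (the schema filter is omitted):

-- Check how much of each CDHDR partition is currently loaded into memory
SELECT PART_ID, LOADED, RECORD_COUNT, MEMORY_SIZE_IN_TOTAL
  FROM M_CS_TABLES
 WHERE TABLE_NAME = 'CDHDR'
 ORDER BY PART_ID;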

SQL Test


Now we will check different statements (also with count clause):

1) select * from CDHDR;
2) select * from CDHDR where MANDANT='100';
3) select * from CDHDR with RANGE_RESTRICTION ('CURRENT');
4) select * from CDHDR where MANDANT='100' with RANGE_RESTRICTION ('2016-11-01');
5) select * from CDHDR where MANDANT='100' with RANGE_RESTRICTION ('0001-01-01');
6) select * from CDHDR where MANDANT='100' with RANGE_RESTRICTION ('CURRENT');
7) select * from CDHDR with RANGE_RESTRICTION ('0001-01-01');

Test results


Statement   Row Count
SQL1        88.138
SQL2        52.226
SQL3        36.884
SQL4        51.281
SQL5        52.226
SQL6        972
SQL7        88.138

=> interestingly, you can achieve the same result counts with and without the range restriction – which should not work if we believe the SAP notes

DBACockpit => Diagnostics => SQL Editor

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> here you can also save the execution plan and import it into HANA Studio for a detailed analysis

Details – Plan viz


SQL1

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> we see a search without filters on all partitions
=> the execution plan is identical to SQL7 => no dynamic search is used here
=> the time spent on all parts is pretty low, because some of them are already loaded into memory and the row count is also pretty low

SQL4

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> now we see a dynamic search because the range restriction is used
=> but this time not on partition 2 (00010101 – 20160101)

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> a closer look into the dynamic search
=> we see the filter on MANDANT on the main and delta store of partition 4

Same query with unloaded partitions besides current (part id 1)


SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> in the overview we see that the expensive part is on a new operator ‘Delta Log Replay’

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> the select on the current partition is still fast
=> but on the unloaded partitions a delta log replay must be executed on first access/load

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> the expensive ones are partitions 3 and 4
=> so if you have big partitions which are not accessed frequently, you can run into performance issues on first access
=> 3,3ms (loaded partitions) vs. 292,5ms (first access on historical partitions) = factor 97 slower

SQL6

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides, SAP S/4HANA, SAP HANA, SAP HANA Studio

=> on the current partition we see also the dynamic search with the select on both parts (main/delta) – this time with some results on the delta store

SAP Hana 2.0 hybrid Landscape Management with LaMa 3.0 & Solution Manager 7.2 Part-1

$
0
0
I will explain in detail how to manage an SAP Hana 2.0 SPS 02 instance with SAP LaMa 3.0 SP5 in the context of a hybrid landscape between on-premise and Microsoft Azure.

In order to monitor my hybrid solution, I will explain how to configure Solution Manager 7.2 accordingly.

Aside from the SAP components, I will also cover the network implications of this type of configuration, which include the IPsec connection between my lab and Azure using pfSense and the DNS portion for name resolution between both sites.
For my setup, I will use my own lab on VMware vSphere 6.5 U1, SAP LaMa 3.0 SP5, SAP Solution Manager 7.2, pfSense 2.4.2 and my own Microsoft Azure subscription.

Components details


This picture shows in detail the components deployed on each server, such as add-ons and product versions; the communication protocols are shown too, but I have intentionally omitted the ports.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

From a component point of view, in order to ensure transparent and secure connectivity between my on-premise environment and Azure, I will use and configure pfSense and an Azure gateway to create an IPsec VPN tunnel.

The management of my SAP Hana instances is done through SAP LaMa 3.0 SP5, which includes the Azure connector to interact with Azure VMs.

Solution Manager 7.2 SP6 is used for advanced monitoring of my hybrid solution.

To ensure reliability in terms of name resolution, two DNS servers are configured and replicate to each other as read-only zones.

Configure the IPsec VPN with Azure


From a topology point of view, the picture below shows how my network is set up from a high-level standpoint.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

On the left side of the picture, I have configured my VMware DvSwitch, which serves two different subnets: one is configured as vLan (Local) for my local server network, and the other as vWan (Firewall) for internet access.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

In order to set up my VPN, I have installed pfSense, which acts as a virtual firewall/router.
My pfSense is configured with two NIC cards: one for the WAN network to provide internet access to my VMs within my vLan network through the second (LAN) NIC card, which acts as a gateway.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

On the right side of the picture, on Azure, I will configure the components needed to create the VPN connection, such as the virtual network and subnet, the virtual network gateway and the local network gateway.

Let’s start with the Azure configuration by creating the virtual network and subnet.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

My vNet range is 10.0.0.0/23 and my subnet range is 10.0.0.0/24

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Once created, I select my newly created vNet and select “Subnets” to create the gateway subnet.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

And define my Gateway subnet as 10.0.1.0/24

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Now let’s create my virtual network gateway, select virtual gateway from the service marketplace

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

I specify the name of my gateway and choose VPN with a route-based VPN type; because I don’t need high bandwidth I select the Basic SKU. I map my gateway to the virtual network created earlier and create the public IP.

Note: the creation of the gateway can take up to 45 min

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Once created

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Finally, I will create my local network gateway

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

I provide a name for my local gateway, enter my public IP and give my internal local address space where the VMs need to be reached.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Once created, I select my newly created local network gateway and click on Connection to assign the virtual network gateway and set the shared key which will be used with my pfSense.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Now that this is completed, I will configure my pfSense. On the web interface I select VPN –> IPsec.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Click on Add P1

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

In the general information, I use WAN for Interface option and provide the Azure Gateway public ip address

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

For authentication method I select Mutual PSK and provide the Pre-Shared Key setup in Azure while creating the local gateway

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

And finally, for the algorithms, I specify AES 256 with SHA256 and save the configuration

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Once done, on the created connection I click Add P2.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

In the general information, I choose LAN subnet for the local network, and for the remote network I specify the address range configured previously for my Azure vNet.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

And finally, on the SA/Key Exchange, I define the protocol as ESP with encryption algorithms AES256 and hash algorithms SHA1 and save my configuration

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

My setup is done

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Let’s have a look at the IPsec status first from pfsense

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

And from the Azure site and see the status of my connection

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

My VPN connection is fully configured; I will do a quick check from my local network to Azure.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

It works – I can RDP from my local network to Azure by using the private IP. With this first part completed, I will configure my DNS in order to resolve the mutual domains and hostnames.

Setup DNS for mutual name resolution


My hybrid scenario consists of using Azure as a DR site; to do so I have installed two DNS servers with two distinct FQDNs.

My local FQDN is mtl.will.lab and Azure is us.will.lab

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

To make the resolution work both ways, on the primary DNS I right-click on my primary zone and click on Properties, then I select Zone Transfers and add the IP of my Azure DNS server.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

And do the same for the reverse lookup zone

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Now on Azure, I go on my secondary DNS server and proceed the same way

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Now I go back to my primary DNS (local) and define a secondary Forward Lookup Zone to match my Azure domain.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

And do the same in the Reverse Lookup Zone

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

I have proceeded with the same step on the Azure DNS server

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

So now from my local network I will try to resolve the Azure FQDN; to do so I have added a temporary entry to make a quick test.

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

Now from my local server I will nslookup and it’s working

SAP HANA Tutorials and Materials, SAP HANA Learning, SAP HANA Certifications, SAP HANA Guides

My DNS resolution is working on both sides; now I can configure the SAP LaMa Azure connector.

SAP Hana 2.0 hybrid Landscape Management with LaMa 3.0 & Solution Manager 7.2 Part-2

$
0
0

Configure Microsoft Azure connector for SAP LaMa


The Azure connector for SAP LaMa allows me to perform several operations directly on Azure, such as activating or powering off VMs, relocating SAP systems, or performing SAP system copies/clones.
However, not all Azure resources are supported: only VMs deployed through ARM with managed disks are supported, and VMs deployed in availability zones are currently not supported.

That said, let’s proceed with the setup. In Azure I will start by registering a new app in AAD.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Note that the url can be random since the sign-on url is not used

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

With my new app created, I click on Settings and select Keys to create a new key. I note the Application ID since it is used as the user name of the service principal.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Once saved, the key value appears, so I note it down.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Let’s now give my service principal user access to my entire Azure subscription. From the subscription list I choose my subscription and click on IAM to add the user.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

As a permission I provide “Owner” in order for the user to have full control over my subscription resources, and as the username I provide the application ID.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

I’m done with the Azure side; now I will configure the SAP LaMa part. On the Infrastructure panel select “Cloud Manager” and click Add.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Select Microsoft and click Next.
Note: Before SAP LaMa 3.0 SP5 the adapter needs to be enabled manually.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Enter the necessary information: the username is the application ID and the password is the key generated earlier.
You will also need to provide your subscription ID and tenant ID, which can be retrieved from PowerShell after logging in to Azure.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Now, if I check under virtual hosts, I can see the template option available for me to deploy VMs based on my personal template on Azure.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

And if I go further, I can now see all my resource groups as well, into which I can deploy my template.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

We are done with this part. I will explain the Azure-specific part later; for now I will install SAP Hana on Azure and show how to proceed with the registration in LaMa.

Register SAP Hana 2.0 from Azure in SAP LaMa


Before dealing with the installation of SAP Hana, it is important to set up the VM properly. I have created a dedicated resource group to store all my objects in order not to mix them with other artifacts.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Now the most important thing: make sure to select managed disks, since this is the only type of disk supported by SAP on Azure. You can also see that I have selected my specific vNet attached to the VPN, which leads to the automatic selection of the subnet.
I have also disabled the public IP since I don’t want my server to be accessed directly from outside.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Once my server is up and running, I register it into my Azure DNS so it will be replicated in my on-premise DNS

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

I do a quick test from my SAP LaMa server at the OS layer: the name is resolved and I can SSH into my server on Azure from on-premise.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

With my Hana on Azure installed, I can connect to it.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

So now that my Hana on Azure is up, before adding it to SAP LaMa, the necessary Adaptive Extensions need to be installed.
Note that because I’m running in the cloud, the EXT version needs to be used.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Once downloaded, run the following command from the hostcontrol folder.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

The final step is to register my instance in SAP LaMa; from the Configuration tab I add my hostname and domain.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

My two Hana systems show up.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

All set for this part, my two instances are managed and ready for HSR configuration

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Setup Replication between On-Premise and Azure with SAP LaMa


Before starting the replication setup, make sure to perform all necessary prerequisites, such as backing up all primary databases (system & tenant), having log_mode set to “normal”, and copying the PKI SSFS key from the primary Hana system to the secondary.

Note: I have intentionally not created any tenant database on my second instance in order to replicate them from my on-premise environment.

The replication setup can be performed in many places: Hana Studio, Hana Cockpit, the OS layer with the hdbnsutil tool, or SAP LaMa. I will show you how to proceed with SAP LaMa.

From the dashboard, go to the Operations actions and select the SYSTEMDB of my primary site, then select SAP Hana Replication to be enabled.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Provide a Site name and enable it

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

LaMa will proceed with the enablement

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Once done we can see this

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

I have shut down my Azure instance before proceeding with the registration, so now I can register it as the secondary tier.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

On this step I specify the necessary option I want to work with

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Once the system restarts, you can see that the takeover action is available.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Let’s do some checks. First I look at Hana Studio: I see my two servers and the replication initialized.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

The cockpit shows the same thing.

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Once done the replication is active

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning
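
The replication status can also be verified with plain SQL on the primary system, independently of Hana Studio, the cockpit or LaMa. A minimal sketch using the standard monitoring view:

-- Check system replication status per service (run on the primary system)
SELECT HOST, PORT, SECONDARY_HOST, REPLICATION_MODE, REPLICATION_STATUS, REPLICATION_STATUS_DETAILS
  FROM M_SERVICE_REPLICATION;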

OK, so my replication between my on-premise environment and Azure is running; I will now include my systems in Solution Manager.

Configure Solution Manager monitoring


The monitoring portion in Solution Manager, whatever the version, involves several steps as well as components that need to be deployed and/or configured.

Because of that, I will cover it in a light way and highlight the baseline part. The first step consists of registering both my Hana instances in the SLD so that they are replicated to the LMDB.

From the cockpit, select the system database and, from Lifecycle Management, choose SLD Registration

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Provide the SLD information and click next to proceed

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

When the operation is done for both HANA systems, check the entries in the SLD

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Now that this is done, I install the diagnostics agent on both servers and register them in the SLD as well

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

And I check the registration in my SLD

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

As well as agent registration from the administration side

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

When the diagnostics agents are registered, I need to configure every HANA system as a managed system from the Solution Manager Configuration

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Once the systems are configured, they need to be assigned to a monitoring template in order to read system information and generate metrics

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

When all my systems have an associated template, I can see them from the work center

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

You have probably noticed the red and yellow lights; this is normal because I did not complete the full setup and just wanted to show the major steps of the monitoring process.

Now that I am done with my monitoring setup, I will perform my HANA failover process within SAP LaMa.

Perform SAP HANA Takeover with SAP LaMa


SAP LaMa 3.0 allows you to perform various tasks from a replication point of view. For my takeover task, I go to the Operations dashboard and select the secondary instance in Azure

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Expand Operations, select SAP HANA Process and choose "Take over"

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

While it is happening, I can see the lock on the instance because it is being processed

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

To see what is going on in terms of processing, I go to the Monitoring dashboard, select Activities and select the associated task

SAP HANA 2.0, SAP HANA Certifications, SAP HANA Materials, SAP HANA Guides, SAP HANA Learning

Once the takeover is completed, make sure to discover the new tenants replicated from the primary site, and also complete the Solution Manager monitoring setup for them.
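
A quick way to confirm which tenants arrived with the takeover is to query the system database on the new primary site; a minimal sketch:

-- List all tenant databases and their status on the new primary
SELECT DATABASE_NAME, ACTIVE_STATUS, DESCRIPTION
  FROM M_DATABASES;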

My configuration is now complete for the simple replication and takeover process in SAP LaMa for SAP HANA. In my next document I will elaborate on more scenarios with HSR as well as dedicated Microsoft Azure resource deployment.

Managing large SAP BW on HANA systems: SAP HANA Enterprise Cloud (HEC) perspective

As a member of the cloud architecture and advisory team, I frequently come across customer cases related to existing SAP Business Warehouse (BW) systems. Large customers usually have a large data footprint (often several terabytes) and a variety of data source systems. Additionally, several reporting/analytics tools are connected to the existing SAP BW systems.

Most of these SAP BW related cases can be classified into two types:

Large SAP BW systems on RDBMS -> Ready to migrate to SAP BW on HANA

◈ Improve query performance for real-time insights and decision making
◈ Boost performance for data load processes, simpler data models, accelerated in-memory planning capabilities etc.
SAP BW on HANA on-premise systems -> Growing rapidly and expecting high data growth

◈ Due to ongoing digital transformations, business model adjustments, additional source systems, new subsidiaries, expansion plans bringing additional data to be reported/queried upon, additional analytical business cases and/or new business users etc.

For a growing/large business warehouse system, customers need to come up with a system landscape roadmap by solving the puzzle highlighted in Figure 1. For simplicity's sake, we can call this the SAP Business Warehouse data growth puzzle.

On a high-level, this puzzle has 4 different dimensions:

1. Data Optimization: What can be gained on SAP BW on HANA?
2. Multi-temperature data management: SAP Dynamic Tiering, Near-line Storage, or both?
3. Scalability of the SAP BW on HANA system: SAP HANA scale-up or scale-out?
4. Business continuity: How to ensure high availability (HA) and disaster recovery (DR)?

Understanding all the dimensions separately and making them work together to provide a scalable business warehouse solution is the key to deriving this roadmap. This will ensure decision-support continuity and an enhanced emphasis on analytics.

Standard HANA sizing guidelines for BW suggest keeping the data footprint at around 50% of the available RAM, so that sufficient space remains for intermediate query/reporting result sets; for example, a 3 TB data footprint implies roughly 6 TB of RAM. Hence, this roadmap preparation becomes inevitable once the existing/planned system data size approaches 3-4 TB (this limit may vary depending on the business cases and hardware availability).
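
To get a first impression of where a system stands against this guideline, the column-store footprint can be compared with the memory allocation limit per host; this is a rough sketch only, and the official BW sizing report remains the reference:

-- Total in-memory size of all column-store tables (GB)
SELECT ROUND(SUM(MEMORY_SIZE_IN_TOTAL)/1024/1024/1024, 1) AS COLUMN_STORE_GB
  FROM M_CS_TABLES;

-- Memory allocation limit per host (GB)
SELECT HOST, ROUND(ALLOCATION_LIMIT/1024/1024/1024, 1) AS ALLOCATION_LIMIT_GB
  FROM M_HOST_RESOURCE_UTILIZATION;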

 Figure 1. Several options to manage SAP BW on HANA data: The Data growth puzzle

SAP BW on HANA, SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning

Data Optimization


An ‘as-is’ migration of SAP BW from RDBMS to HANA can be done and it will bring some instant gains. However, without optimizing the system data/code, the full potential of the new BW on HANA system cannot be realized. One of the ways to optimize the landscape is to remove the existing redundant data.

A classical SAP BW system on RDBMS would have a lot of InfoCubes underneath to improve its performance. These InfoCubes are usually 1:1 copies of data in the DSO (DataStore Object) layers. Since these InfoCubes provide no additional performance benefit on HANA, they can be decommissioned once all queries are redirected to the original DSOs. By performing this step, the data footprint is reduced and system performance is gained. Additionally, the simplified data flow reduces data-mapping related errors.

When it comes to data growth problems, this gives the system some additional breathing space, so it is an option that can be considered first. Continuous improvement and optimization should also be planned alongside.

Multi-Temperature data management


Based on the access frequency, the data can be classified as hot, warm or cold. 

In the case of ‘cold’ historical data, SAP IQ-based near-line storage (NLS) can be used to offload the data and further reduce the main-memory data footprint. Typically, the data is sliced based on time dimensions, or it can be moved completely into the cold-storage database.

Read access to IQ NLS is in most cases much faster than read access to traditional databases. The unique advantage of such a native solution is that the performance of a BW query that requires data from the Sybase IQ store can be optimized using HANA SDA (Smart Data Access). Smart Data Access enables access to remote tables as if they were local tables in HANA, without copying the data into HANA.
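
To illustrate the idea, a minimal SDA sketch for an IQ-based NLS store could look like the following; the remote source name, connection string, schema and table names are assumptions to be adapted to the actual landscape:

-- Register the IQ server as a remote source in HANA
CREATE REMOTE SOURCE "NLS_IQ" ADAPTER "iqodbc"
  CONFIGURATION 'Driver=libdbodbc17_r.so;ServerNode=iqhost:2638'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=nls_user;password=********';

-- Expose an archived IQ table as a virtual table in HANA
CREATE VIRTUAL TABLE "BWUSER"."VT_SALES_ARCHIVE"
  AT "NLS_IQ"."<NULL>"."nls"."SALES_ARCHIVE";

-- Queries now read the archived data without copying it into HANA
SELECT COUNT(*) FROM "BWUSER"."VT_SALES_ARCHIVE";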

In the case of ‘warm’ data, dynamic tiering (DT) offers a consolidated mechanism to manage less frequently accessed and less critical data without a reduction in performance. The dynamic tiering server stores extended tables on an extended node and tries to optimize the RAM utilization within HANA. The main HANA and DT nodes share a common database, making these nodes an integral part of the system. With DT, all data of a PSA (Persistent Staging Area) and of write-optimized DSO objects resides primarily on disk. DT uses the main memory of the extended node servers (which can be powerful HANA nodes loaded with processing power) for caching and processing the data. The data stored on disk is accessed using sophisticated algorithms.
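
For completeness, this is roughly how a table ends up in extended (warm) storage once the dynamic tiering option is installed; the table and column names below are purely illustrative:

-- Create a new table directly in extended storage
CREATE TABLE "BWUSER"."SALES_HISTORY" (
   DOC_ID    INTEGER,
   POSTED_ON DATE,
   AMOUNT    DECIMAL(15,2)
) USING EXTENDED STORAGE;

-- Or move an existing in-memory table to extended storage
ALTER TABLE "BWUSER"."SALES_2014" USING EXTENDED STORAGE;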

While sizing the storage on the extended DT nodes, certain guidelines should be adhered to. For example, the recommended ratios of SAP HANA memory to SAP HANA dynamic tiering extended storage as per SAP Note 2086829 are:

SAP HANA memory <= 2.5 TB: dynamic tiering storage should not exceed 4x the size of SAP HANA memory.

SAP HANA memory > 2.5 TB: dynamic tiering storage should not exceed 8x the size of SAP HANA memory.

The ‘hot’ data would by default reside in memory, and if this data is large, the scalability mechanisms highlighted in the next section need to be considered.

Scalability of SAP BW on HANA System


BW on HANA is generally available for both single-node and scale-out architectures. Choosing between scale-up and scale-out can be subtle and must eventually be balanced against the costs.

In my opinion, if the actual data size is 3 TB or more (with expected data growth) then the scale-out design is the preferred option. 

Existing BW systems can be evaluated using several mechanisms, and the future landscape design can be arrived at by following the simple decision matrix highlighted in Figure 2.

Figure 2. SAP HANA Scale-up or Scale-out decision matrix

SAP BW on HANA, SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning

*Source: Digital Business Services Group and SAP HANA Enterprise Group
The typical scale-out architecture for an SAP BW on HANA system is highlighted in Figure 3.

Figure 3. Scale-out Architecture SAP BW on HANA

SAP BW on HANA, SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning

Business continuity


To ensure that an organization can continue to operate in case of failures within a single datacenter or disasters/catastrophic events across datacenters, it needs to have a plan, and this plan needs to consider three different aspects: resilience (via spares, replication, etc.), recovery (auto-recovery VMs, automatic failover nodes, etc.) and contingency (processes for unforeseen conditions).

Various techniques to handle single failures within one datacenter can be categorized under ‘High Availability’ (HA) and failures across datacenters fall under ‘Disaster Recovery’ (DR).

High Availability of a BW on HANA system

For a single-node HANA system, depending on the type of the HANA node (VM or bare metal), the size of the node, etc., a dedicated standby node is required. HANA uses HSR (HANA System Replication) in synchronous mode (to avoid any data loss) or host auto-failover mechanisms to guarantee system recovery in case of failures.

HANA scale-out clusters are also safeguarded via a standby node that can take over the role of any node (master or any other worker/slave node). In the case of large HANA clusters, a second standby node can be considered (but this also depends on the underlying clustered file system).
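
The role distribution in such a scale-out cluster, including the standby node, can be verified with a simple query on the monitoring views; a minimal sketch:

SELECT HOST, HOST_ACTIVE, HOST_STATUS,
       INDEXSERVER_CONFIG_ROLE, INDEXSERVER_ACTUAL_ROLE,
       NAMESERVER_CONFIG_ROLE, NAMESERVER_ACTUAL_ROLE
  FROM M_LANDSCAPE_HOST_CONFIGURATION;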

For BW ABAP application servers, the auto-recovery mechanism of the underlying virtual machine technology or additional redundant application servers can be used; to route incoming requests properly, a load balancer or a pair of HA-enabled Web Dispatchers might be needed.

Disaster Recovery of a BW on HANA System

Two sets of BW systems need to be maintained in sync across two different data centers (short distance, i.e. less than 50 km, or long distance, i.e. more than 50 km) to ensure a reasonable RPO (Recovery Point Objective) and RTO (Recovery Time Objective).

HANA systems can be configured in active-active pairs, and asynchronous HANA replication can be configured across geographically remote systems. For the application servers, file system replication should be configured. In case of disaster, the network should be able to route incoming requests to the secondary site.
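
To keep an eye on the achievable RPO with asynchronous replication, the log-shipping delay per service can be estimated from the replication monitoring view; a minimal sketch, keeping in mind that column availability can vary slightly between HANA revisions:

SELECT HOST, PORT, REPLICATION_MODE, REPLICATION_STATUS,
       SECONDS_BETWEEN(SHIPPED_LOG_POSITION_TIME, LAST_LOG_POSITION_TIME) AS LOG_SHIPPING_DELAY_SEC
  FROM M_SERVICE_REPLICATION;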

SAP Hana Enterprise Cloud (HEC) approach


Existing customers often engage SAP to understand how SAP can support them in managing growing BW systems efficiently, and what it takes to move these systems into SAP's private managed cloud offering, SAP HANA Enterprise Cloud (SAP HEC). In the process, they also want to know if it can help them reduce the total cost of ownership and increase agility by optimizing operational efforts.

SAP HEC is a private managed cloud offering to help accelerate HANA adoption. The HEC service is designed to emulate a customer's on-premise environment in a cloud model (with SLAs, subscription-based, for short-term projects or long-term production, including application management services) and it can be deployed on hyperscaler environments such as AWS, Azure, etc. Hence, for SAP's large heterogeneous on-premise customer base, this offering serves as a "bridge to the cloud". Additionally, ‘net new’ customers also like SAP HEC as it helps them manage systems, licenses, operations, integration, etc. in a single flexible contract (subscription, partial subscription or full subscription), so they can continue to ‘Run Simple’.

A very good introduction to the SAP HEC services is offered in the blog highlighted here.

In the context of the SAP BW on HANA system, I would like to highlight here one of the initial SAP HEC services that can assist customers in solving the SAP BW on HANA data puzzle and arriving at a viable landscape roadmap.

SAP HEC Assessment and Advisory service

In the context of SAP BW on HANA, the assessment service allows a cloud architect to understand the ‘as-is’ state of the BW system. It is a collaborative exercise where a ‘to-be’ state for a viable solution can be defined. For existing customers, the assessment discussion revolves around BW on HANA sizing report outputs (Note 2296290), available EWA reports, software versions, add-ons, existing interfaces, new requirements, analytics tools, possible network solutions, request routing from client tools to SAP HEC (DNS zone transfer etc.), total number of users, expected concurrent users, different viable approaches to manage system data growth, system deployment phases, type of systems, type of application servers, SLAs for non-production and production systems, HA/DR requirements, potential deployment areas and regions, special requirements, the support model for HEC, HEC roles and responsibilities, the need for onboarding and migration services, etc. ‘Net new’ customers are advised based on the list of provided requirements and the points highlighted above.

A typical output of such an assessment would be a potential solution design based on SAP HEC reference architectures including network topology and pricing for the solution. The solution design will adhere to design principles & guidelines recommended by SAP product development, delivery, operations, maintenance and application management teams.

Figure 4 is drawn specifically to showcase how the multi-faceted puzzle of data growth can be managed by SAP in the HANA Enterprise Cloud. Moreover, additional SAP services help simplify overall operations to achieve the required return on investment and reduce the total cost of ownership.

Figure 4. SAP HEC – Reducing TCO for SAP BW System

SAP BW on HANA, SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning
