
Refreshing a SAP HANA Database using a database backup


Overview


One typical use case managed by SAP Landscape Management is refreshing a system – overwriting the content of an existing target system with current data from the source system while maintaining system topology and configuration. General SAP Landscape Management functionality lets you handle such an exercise via a “Storage Based” or a “Virtualization Based” approach:

◈ In a “Storage Based” scenario, the clone/copy of the database happens at storage volume level. SAP Landscape Management triggers the required action in the storage system via the integrated Storage Adapter: each of the required source volumes is copied to a target volume, leveraging functionality in the storage hardware. These storage volumes are later accessed on a new or existing host and then form the content of the database system.


◈ A “Virtualization Based” approach is possible too, but practically only for “smaller” system sizes: here, an entire virtual machine, including the database footprint, is cloned to a new image, which is then deployed to another VM.

Both “Storage Based” and “Virtualization Based” scenarios rely on specific infrastructure requirements – e.g. source and target systems need to reside within the same storage system or virtualization manager boundaries. This introduces some dependencies on the underlying IT infrastructure.

For SAP HANA 2.0 databases, SAP Landscape Management 3.0 has the
integrated functionality to refresh the content using a database backup.

This removes the dependencies on the underlying infrastructure and provides the functionality even if no “Storage Adapter” is available for the storage system in scope.

This text gives a short overview of the configuration steps required within SAP Landscape Management (and the Linux operating system) to execute such a “Restore-based Refresh” for an SAP HANA database.

Preparation in SAP Landscape Management


General prerequisites for SAP System Copy

All the general preparation settings for SAP System Copy need to be configured first in SAP Landscape Management:

◈ The infrastructure assignment defining all the infrastructure components needs to be completed: network definitions, user management methods, and, where applicable, name server update and proxy settings are defined.
◈ The Software Provisioning Manager Configuration for System Copy or System Rename

In addition, the source system needs to be enabled for the Refresh Scenario.
Verify that the source system is enabled for Cloning / Copying in the System Configuration:


and ensure that the required RFC users for Post-Copy Automation are created on the source and target systems and are defined in the SAP Landscape Management system configuration.

SAP HANA backup share


The “Restore-based Refresh” requires a complete data backup of the source system. The data backup needs to be stored on a central filesystem, which is then mounted on the target system during the process.

In the example, we have a local filesystem on the source database with a set of backups:


For simplicity, I’ve used a local filesystem on the source system for the SAP HANA backup share and exported it via NFS to the target system – the SAP HANA backup share could also be stored on an NFS/NAS server and then be mounted on both the source and target systems.
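
If such a complete data backup does not exist yet, it can be created into the share beforehand. A minimal sketch in plain SQL – the tenant name S4P and the path are simply the example values used in this scenario:

-- Run on the SYSTEMDB of the source system; adjust tenant name and path.
BACKUP DATA FOR S4P USING FILE ('/hana/backup/data/DB_S4P/COMPLETE_DATA_BACKUP');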

SAP HANA backup share – definition in the system configuration in SAP LaMa

This “SAP HANA Backup Share” gets defined in the system configuration of the source system (in the tab “Provisioning & RFC”, section “Transfer Mount Configuration for System Provisioning”) within SAP Landscape Management. During the restore-based refresh, SAP LaMa will then ensure that the share is mounted on the target system so that the SAP HANA backup image can be accessed for the restore.


The mount options specify how the share shall be mounted on the target system during the process – “read only (ro)” is sufficient, unless you also want to use the share for storing backups of the target system.

Once configured, the HDB backup share becomes visible in the configuration view:


NFS Server configuration on the source system

In case of a “local” NFS server on the source system, add the HDB backup share to the NFS server export table. On the Linux operating system, the file /etc/exports contains the table of local physical file systems on an NFS server that shall be made accessible to NFS clients.
The content of the file needs to be maintained by the root user:

vi /etc/exports
/hana/backup/data/DB_S4P          *(ro,sync,no_subtree_check)

Add the “export path” on the local server to this file (in our case “/hana/backup/data/DB_S4P”).
Ensure that the NFS services are started.

In SLES12, start (or restart) the NFS services as user root via command:

> systemctl restart nfsserver
> exportfs
/hana/backup/data/DB_S4P                <world>

With the “exportfs” command, you can verify the NFS export on the source system.

Execute Restore-Based Refresh


Now, you’re ready to invoke the restore-based refresh in SAP Landscape Management.

Navigate to Provisioning > Systems and AS, select the system to be refreshed, and choose “Restore-Based Refresh”.


The roadmap for entering all the required parameters will start.

In the Basic tab, enter the master password, which will be used as the default password in all subsequent steps (as usual, it has to comply with the SAP password rules), and choose Next.


The Hosts tab appears.
The hosts should be preselected according to the refresh scenario. Press Next


The Host Names tab appears.
In a refresh scenario, the virtual hostnames typically remain unchanged. Press Next


The (OS) Users tab appears.
Since this is a refresh, the OS users should already exist. Press Next.


Now, the selection tab for the Database Restore appears.
Select one of the available backups from the HANA Backup Share, then press Next:
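
If you are unsure which backup to pick, the backup catalog on the source system can be cross-checked with plain SQL (monitoring view as available in SAP HANA 2.0):

-- List the most recent complete data backups recorded in the backup catalog.
SELECT BACKUP_ID, SYS_START_TIME, STATE_NAME
FROM "SYS"."M_BACKUP_CATALOG"
WHERE ENTRY_TYPE_NAME = 'complete data backup'
ORDER BY SYS_START_TIME DESC;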


In the Rename tab, enter the SYSTEM password for the respective tenant, and press Next:


In the Isolation tab, the NFS server exporting the HANA DB share should be visible among the “allowed” communications. Press Next.


In the ABAP PCA tab, enter the Post-Copy Automation task lists for the clients in scope, and press Next:


Finally, start the Database Refresh Workflow in the Summary tab


The Database Refresh will be invoked, and the backup will be used for refreshing the content:


How to Generate a HANA CDS Wrapper View

To generate from the command prompt, please complete the following:

◈ Download and install Node.js.
◈ Open the Node.js command prompt.
◈ Install the dependent packages from the command prompt with npm install.
◈ Open the folder hana-cds-wrapper-generator-master/view-generator and start the Node.js application.


◈ Next, open localhost on port 3000 in the browser to view the application.


To generate from WEBIDE, please complete the following:

◈ Clone the source code from the git repository to WEBIDE.
◈ Right click on the project hana-cds-wrapper-generator and select Project > Project Settings


◈ Enter the cloud foundry configuration details, including the API, organization, and the space where you want to build the project.
◈ Choose Save.


◈ Right click on the project and select Build > Build.


◈ Once you have successfully completed the build, select the view-generator folder to run the project


◈ The application URL will be generated once the run script has started.


The following options are available to generate the HANA CDS Wrapper View:

◈ XS Classic for HDBDD
◈ XS Advanced for HDBCDS

To generate the view from the app using XS Classic, please complete the following:

◈ Select XS Classic from the application.
◈ Enter the Namespace, Source Schema and Target Schema.


◈ Choose Generate & Download.
◈ The HANA CDS Wrapper View generator file will be downloaded.

To generate the view from the app using XS Advanced, please complete the following:

◈ Select XS Advanced from the application to generate the HANA 2.0 CDS view (.HDBCDS) file.


◈ Enter the input values for Namespace and Logical System.
◈ Check the Disable Type Cast box to improve performance.


◈ Choose Generate & Download.


◈ The HANA CDS Wrapper View generator file will be downloaded.

Result: You have now successfully generated a HANA CDS Wrapper View.
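
For orientation only: conceptually, such a wrapper is little more than a (possibly type-casted) 1:1 projection of a replicated source table exposed under the target schema. The following rough SQL analogue illustrates the idea – the schema, view and column names are purely illustrative, and this is not the tool’s actual output:

-- Illustrative sketch; "Disable Type Cast" corresponds to skipping the CASTs.
CREATE VIEW "TARGET_SCHEMA"."WRAPPER_MARA" AS
SELECT
    "MANDT",
    CAST("MATNR" AS NVARCHAR(40)) AS "MATNR",
    "ERSDA",
    "LAEDA"
FROM "SOURCE_SCHEMA"."MARA";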

HANA SDI | Smart Data Integration – Batch based Delta Load using Timestamps within a parameterized SDI FlowGraph


1. Introduction


HANA SDI can serve various needs when it comes to data extraction, loading and transformation. In terms of data loading flavors, it supports real-time replication and batch processing, push and pull based.

In this blog entry I would like to outline one option for how pull-based micro-batching can work with SDI in a simplified use case. The showcase leverages the HANA XSC runtime (not state of the art, but still in use out there) and therein mostly design-time objects. Some material data sets will be loaded in a batch-based manner, using scheduled XS cron jobs, with some extended options.

2. Context


In our use case, material entities need to be loaded from a remote source every minute, hourly, or on a daily basis. There is no simple way available that allows for real-time or batch-based replication of the data. This might be due to various reasons:

◈ No CDC possible/allowed in source
◈ Missing authorizations in source
◈ Compliance issues

Therefore the change data must be identified in a way that involves some basic custom logic.

Prerequisites are date/time-based creation and/or change attributes. These act as markers used to scope the next delta data set. As the tables involved are MARA and MARC, a creation date and a change date are available. Finally, an XS cron job triggers the execution of the data flow at a defined frequency.

3. HANA/SDI Toolset


We will use a simple SDI flowgraph in combination with some stored procedures to implement the requirements. The flowgraph comprises two data sources which are joined (MARA + MARC), filtered and pushed into some data sink, a target table. The source tables can be based on a remote source of any adapter, it is generally applicable for such kinds of ELT flows.

A parameter enables filtering the data changed since the last successful load. Stored procedures take over to look up the date of the last successful HANA runtime task execution. The actual execution of the flowgraph task is triggered by a HANA XS job. Within the XS job definition, you can decide between an initial and a delta load.

The following visual outlines the described steps.


4. Limitations


◈ Deletions in the source table are not reflected. The reason is the employed change pointer, the creation or change date. Given this approach, there is no option to identify deletions, let alone apply them to the target using the flowgraph. It is assumed that the applied records are kept in the target.

5. Implementation/Data Flow Modeling


5.1 FlowGraph

First and foremost a variable of type expression with some default value is introduced.


The simplified data flow looks as follows:

◈ Two data sources: ERP table MARA and MARC
◈ FILTER_DELTA node: subset of MARA columns + filter logic using the varDate variable
◈ JOIN_MARA_MARC: inner join of the two tables
◈ MATERIALS_TEMPLATE_TABLE: target table


The filter node applies some simple filtering logic. It will filter on the creation date (ERSDA) or the change date (LAEDA).
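
Expressed as plain SQL, the delta condition boils down to the following sketch; the column subset is only an example, and the exact way the variable is referenced depends on the flowgraph editor. ERSDA and LAEDA are DATS fields in YYYYMMDD format, which matches the date string passed into varDate:

-- Logical form of the FILTER_DELTA condition.
SELECT "MATNR", "ERSDA", "LAEDA"
FROM "MARA"
WHERE "ERSDA" >= :varDate OR "LAEDA" >= :varDate;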


The writer type of your target/template table must be set to upsert; otherwise you might run into unique constraint violations.


5.2 Stored Procedures

SP_loadDateLookup

The stored procedure SP_loadDateLookup passes back the last successful load date of a HANA runtime task. As an input parameter the task name is defined. Alternatively and depending on your requirements, you define the return parameters as date/time/timestamp.
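
A minimal sketch of such a lookup, assuming the task execution history is read from the SDI task framework view "_SYS_TASK"."TASK_EXECUTIONS" (schema, column names and status values may differ in your release):

PROCEDURE "SYSTEM"."sdi.prototyping::SP_loadDateLookup" (
    IN  IV_FGNAME VARCHAR(256),
    OUT OV_LOAD   DATE)
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER READS SQL DATA AS
BEGIN
    -- Date of the latest successfully completed execution of the given task;
    -- returns NULL if the task has never run successfully.
    SELECT CAST(MAX("START_TIME") AS DATE) INTO OV_LOAD
    FROM "_SYS_TASK"."TASK_EXECUTIONS"
    WHERE "TASK_NAME" LIKE '%' || :IV_FGNAME || '%'
      AND "STATUS" = 'COMPLETED';
END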

SP_loadMaterial

The stored procedure SP_loadMaterial triggers the execution of the HANA runtime task of the SDI flowgraph. As an input parameter, an initial load flag is defined. This flag enables a more flexible way to control the execution of the data flow and decide between an initial load (where initially the target table is truncated) and a delta load (where the variable and last successful load date will be considered, no truncation).

Your stored procedure to trigger respective task executions may look as follows:


PROCEDURE "SYSTEM"."sdi.prototyping::SP_loadMaterial" ( IN initialFlag VARCHAR(1)) 
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN DECLARE v_date DATE; DECLARE v_dateChar VARCHAR(256);
BEGIN AUTONOMOUS TRANSACTION
IF (initialFlag = 'X') THEN
DELETE FROM "SYSTEM"."MATERIALS";
END IF;
END;
IF (initialFlag = 'X') THEN
v_dateChar := '19500101';
ELSE
--get latest successfull execution date
    CALL "SYSTEM"."sdi.prototyping::SP_loadDateLookup"(IV_FGNAME => 'sdi.prototyping::FG_MATERIAL', OV_LOAD => v_date);
    v_dateChar := REPLACE(TO_VARCHAR(v_date), '-', '');
END IF;
EXEC 'START TASK "SYSTEM"."sdi.flowgraphs::FG_MATERIAL" ("varDate" => ''' || v_dateChar || ''' )';
END

5.3 XS Cron Job

In order to schedule the execution of the SP_loadMaterial procedure, an XS cron job is created. The scheduler.xsjs file defines respective functions that the .xsjob file triggers. In the current use case, the functions propagate the initial flag to the procedure call, so that you can decide within the job definition if it goes for an initial load or a delta.

The following describes a way how to implement the scheduler.xsjs file:


function loadMaterial(initialLoadFlag)
{
    // Default to an initial load if no flag was passed from the job definition
    if (initialLoadFlag === undefined || initialLoadFlag === null || initialLoadFlag === '')
    {
        initialLoadFlag = 'X';
    }

    // Call the stored procedure that starts the flowgraph task
    var query = "{CALL \"SYSTEM\".\"sdi.prototyping::SP_loadMaterial\"('" + initialLoadFlag + "')}";
    $.trace.debug(query);
    var conn = $.db.getConnection();
    var pcall = conn.prepareCall(query);
    pcall.execute();
    pcall.close();
    conn.commit();
    conn.close();
}

From the xs admin job section, you are able to decide between initial and delta load (URL: <host>:<port>/sap/hana/xs/admin/jobs):


6. Alternative Approaches/Thoughts


◈ The table comparison transform offers another approach for implementing batch processing with HANA SDI. The clear downside I see is the need to introduce DB sequences and drop the real primary keys. This might have consequences on the data modeling/consumption side of those tables, so it is arguable which option to choose. However, the table comparison transform comes with the capability of also reflecting delete operations, which the given scenario does not cover (as stated in the limitations section).

◈ Depending on how frequently you want to process batches, a finer granularity can be achieved using date + time. You then have to change the lookup in the task execution table and return e.g. a timestamp of the last successful load instead of just the date, as sketched below.
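
A timestamp-based variant of the lookup, under the same assumption about the task execution view as in the SP_loadDateLookup sketch above:

-- Return the full timestamp of the last successful run instead of the date only.
SELECT MAX("START_TIME")
FROM "_SYS_TASK"."TASK_EXECUTIONS"
WHERE "TASK_NAME" LIKE '%FG_MATERIAL%'
  AND "STATUS" = 'COMPLETED';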

SAP Data Hub and R: Time series forecasting

Read on if you want to know how R syntax can be deployed in SAP Data Hub. I will use time series forecasting to introduce this concept. Think of demand forecasting or sales forecasting, just to give a few examples for how this might be used in a business context.

Generally, you can choose from a number of technologies for time series forecasting. You can select a highly automated approach (e.g. with SAP Predictive Analytics), you can leverage conventional algorithms inside SAP HANA (Predictive Analysis Library) or you can bring in open source such as R or Python.

This blog explains how to implement such a requirement with R in SAP Data Hub. Just note that there is also an option to implement R syntax directly in a SAP HANA procedure.

SAP Data Hub however provides additional functionality that can be very useful in deploying predictive models. It can combine heterogeneous data sources and transform or enrich the data in a graphical flowchart. It also provides data governance and a scheduling mechanism.

In this blog we will forecast how many passenger vehicles will be registered in the next 12 months in a number of European countries. This is the same use case and data as used in the earlier blog on SAP HANA procedures.

Disclaimer: I should point out that any code or content in this blog is not supported by SAP and that you have to validate any code or implementation steps yourself.

Prerequisites


Just reading this blog will hopefully give you a good idea for how R syntax can be leveraged in SAP Data Hub. Should you want to implement the example yourself, you must have access to

◈ SAP Data Hub (this blog is written with version 2.3, earlier versions will not suffice)
◈ SAP HANA as data source and target (no R server has to be connected to this SAP HANA)
◈ R editor (e.g. RStudio) to develop the script
◈ SAP HANA Client to allow the R editor to connect to SAP HANA

The SAP Data Hub 2.3 trial currently does not include a SAP HANA image. If you are using this trial, you need to combine it with a separate SAP HANA instance.

Load historic data into SAP HANA


Start by loading the historic data into SAP HANA. Use your preferred option to load the content of VEHICLEREGISTRATIONS.csv into a table. If you are using the SAP HANA Modeler, you can choose: File → Import → SAP HANA Content → Data from Local File. I am loading the data into the table TA01.VEHICLEREGISTRATIONS.
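
If you want to create the table up front, a minimal sketch of its structure looks as follows – the column names match the queries used below, while the data types are an assumption (chosen to line up with the forecast output table created later):

-- Source table holding the monthly registration history per country.
CREATE COLUMN TABLE TA01.VEHICLEREGISTRATIONS (
    COUNTRY       NVARCHAR(14),
    MONTH         DATE,
    REGISTRATIONS INTEGER
);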


Develop R script for forecasting


Use your preferred R editor to script the forecasting code you want to deploy later on in SAP Data Hub. To help scalability, develop a script that forecasts only one single time series. SAP Data Hub will be able to apply this script to as many time series as needed.

So download the history of only a single time series into your R editor. A quick method to connect would be to pass the logon parameters through code.

library("RJDBC")
jdbc_driver <- JDBC(driverClass = "com.sap.db.jdbc.Driver",  
                    classPath = "C:/Program Files/SAP/hdbclient/ngdbc.jar")

jdbc_connection <- dbConnect(jdbc_driver,
                             "jdbc:sap://SERVER:PORT", 
                             "YOURUSER", 
                             "ANDYOURPASSWORD")
data <- dbGetQuery(jdbc_connection, "SELECT COUNTRY, MONTH, REGISTRATIONS FROM TA01.VEHICLEREGISTRATIONS WHERE VEHICLEREGISTRATIONS.Country = 'United Kingdom' ORDER BY MONTH ASC")

However, it is not elegant to have the password visible in your code. It would be better to save the logon parameters securely with the hdbuserstore application, which is part of the SAP HANA client.

Navigate in a command prompt to the folder that contains the hdbuserstore, i.e.
C:\Program Files\SAP\hdbclient

Then store the logon parameters in the hdbuserstore. In this example the parameters are saved under a key that I called myhana.

C:\Program Files\SAP\hdbclient>hdbuserstore -i SET myhana "SERVER:PORT" YOURUSER


And now the logon in R is much slicker.

library(RJDBC)
jdbc_driver <- JDBC(driverClass = "com.sap.db.jdbc.Driver",  
                    classPath = "C:/Program Files/SAP/hdbclient/ngdbc.jar")
jdbc_connection <- dbConnect(jdbc_driver, "jdbc:sap://?key=myhana")
data <- dbGetQuery(jdbc_connection, "SELECT COUNTRY, MONTH, REGISTRATIONS FROM TA01.VEHICLEREGISTRATIONS WHERE VEHICLEREGISTRATIONS.Country = 'United Kingdom' ORDER BY MONTH ASC")

In that R code we are already retrieving the history of the first time series. I just chose the history of the United Kingdom. It could have been any other. Now that the data is in R, you are free to script your preferred forecasting logic. I will be using the following function, which I called forecast_monthly(). That function makes use of a few R libraries. You may have to add these to your system with the usual install.packages() function.

forecast_monthly <- function(data){
  # Load required libraries ---------------------------------------------------------
  library(lubridate)
  library(forecast)
  library(MLmetrics)

  # Specify any parameters, adjust to the use case ----------------------------------
  col_name_date     <- "MONTH"   
  col_name_measure  <- "REGISTRATIONS"
  col_name_segment  <- "COUNTRY"
  date_format       <- "%Y-%m-%d"
  dates_to_forecast <- 12 
  confidence_level  <- 0.95
  forecast_methods  <- c('arima', 'ets', 'rwdrift', 'naive') 
  frequency         <- 12 # This value must not be changed
  
  # Retrieve the individual columns from the data frame -----------------------------
  col_date    <- as.character(data[, col_name_date])
  col_measure <- data[, col_name_measure]
  col_segment <- data[1, col_name_segment]
  
  # Print status update -------------------------------------------------------------
  print(paste("Now starting with: ", col_segment, sep = ""))
  
  # Ensure data is sorted on the date in ascending order  ---------------------------
  data <- data [order(data[, col_name_date]), ] 
  
  # Convert time series into ts object (required by forecast function) --------------
  start_date  <- as.Date(data[1, col_name_date], date_format)
  ts_historic <- ts(data[, col_name_measure], 
                    start = c(year(start_date), month(start_date)),
                    frequency = frequency)
  
  # Keep a hold out sample of forecast length to test forecast accuracy -------------
  ts_short      <- head(ts_historic, n = length(ts_historic)-dates_to_forecast)
  ts_hold_out   <- tail(ts_historic, n = dates_to_forecast)
  
  # Assess all forecasting methods on their performance on the hold out sample ------
  ts_short_mapes <- rep(NA, length(forecast_methods))
  for (ii in 1:length(forecast_methods)) {
    stlf_forecast <- stlf(ts_short, 
                          method = forecast_methods[ii], 
                          h = dates_to_forecast, 
                          level = confidence_level)
    ts_short_mapes[ii] <- MAPE(as.numeric(ts_hold_out), stlf_forecast$mean)
  }
  
  # Select the best performing method to carry out the final forecast ---------------
  forecast_best_mape <- min(ts_short_mapes)
  ts_forecast <- stlf(ts_historic, 
                      method = forecast_methods[which(ts_short_mapes == forecast_best_mape)], 
                      h = dates_to_forecast, 
                      level = confidence_level)
  
  # Dates, name of time series (segment) and date type (Actual or Forecast) ---------
  dates_all    <- as.character(seq(from = start_date, by = "month",  length.out = length(ts_historic)+dates_to_forecast))
  col_segments <- rep(col_segment, length(dates_all))
  model_descr  <- rep(paste(ts_forecast$method, "- MAPE:",  round(forecast_best_mape, 3)), length(dates_all))
  date_types   <- as.character(c(rep("Actual", length(ts_historic)), rep("Forecast", dates_to_forecast)))
  
  # Actual and historic measures ----------------------------------------------------
  forecast_mean   <- rep(NA, dates_to_forecast)
  forecast_mean   <- ts_forecast$mean
  forecast_upper  <- ts_forecast$upper
  forecast_lower  <- ts_forecast$lower
  dates_all_mean  <- as.numeric(c(as.numeric(ts_historic), as.numeric(forecast_mean)))
  dates_all_lower <- as.numeric(c(rep(NA, length(ts_historic)), as.numeric(forecast_lower)))
  dates_all_upper <- as.numeric(c(rep(NA, length(ts_historic)), as.numeric(forecast_upper)))
  
  # Return the combined data --------------------------------------------------------
  result <- data.frame(SEGMENT = col_segments,  
                       MONTH = dates_all, 
                       MEASURETYPE = date_types, 
                       MEASURE = dates_all_mean, 
                       MEASURELOWER = dates_all_lower, 
                       MEASUREUPPER = dates_all_upper, 
                       MODEL = model_descr)
  return(result)
}

Define the function and call it by passing the historic data of a single country as parameter.

library(RJDBC)
jdbc_driver <- JDBC(driverClass = "com.sap.db.jdbc.Driver",  
                    classPath = "C:/Program Files/SAP/hdbclient/ngdbc.jar")
jdbc_connection <- dbConnect(jdbc_driver, "jdbc:sap://?key=myhana")
data <- dbGetQuery(jdbc_connection, "SELECT COUNTRY, MONTH, REGISTRATIONS FROM TA01.VEHICLEREGISTRATIONS WHERE VEHICLEREGISTRATIONS.Country = 'United Kingdom' ORDER BY MONTH ASC")
forecast_monthly(data)

You should then see the output of the function, which contains both the historic data as well as the forecast for the next 12 months. This function will be deployed through SAP Data Hub.


Create target table in SAP HANA


When forecasting the demand through SAP Data Hub the predictions can be written into a SAP HANA table. Execute the following SQL syntax in SAP HANA to create a table that matches the output structure of the forecast_monthly() function, so that it can store the predictions.

SAP Data Hub version 2.3 does not support columns of type DECIMAL; use DOUBLE instead.

SET SCHEMA TA01;

--- Create table for the output
DROP TABLE "VEHICLEREGISTRATIONS_FORECAST";
CREATE COLUMN TABLE "VEHICLEREGISTRATIONS_FORECAST"(
SEGMENT NVARCHAR(14),
MONTH DATE,
MEASURETYPE NVARCHAR(20),
MEASURE DOUBLE,
MEASURELOWER DOUBLE,
MEASUREUPPER DOUBLE,
MODEL NVARCHAR(100));

Now everything is in place to deploy the forecasting script in SAP Data Hub. The major steps in implementing the full deployment are

◈ Obtain a list of all countries that need to be forecasted
◈ For each country obtain the individual history
◈ Create an individual forecast for each country
◈ Write each forecast to SAP HANA

Connect SAP Data Hub to SAP HANA


First establish a connection from SAP Data Hub to SAP HANA. Open the SAP Data Hub in your browser and go into the Connection Management.


Create a new connection to your SAP HANA system, using the connection type HANA_DB. The remaining settings are rather straight-forward. Specify the SAP HANA host, the port and your logon credentials. You can keep the remaining default settings. I have named my connection MYHANA.

Once the connection is created, test whether it is working. On the overview page that lists all connections, click on the three little dots to the right-hand side of your connection. Select “Check Status” and you should get a confirmation message.


Obtain list of all countries


We want to create a forecast for each country that is found in the historic data. That dynamic approach helps reduce maintenance efforts. We will use a SELECT statement to retrieve that list. On SAP Data Hub’s main page click into the “Modeler”, which is where you create the graphical pipeline.


In the menu on the left you see different tabs. The “Graphs” section is showing some examples of pipelines. The “Operators” section contains a long list of graphical operators you can combine in a graph.


Make sure you are on the “Graphs” section, then create a new Graph with the plus sign on top. An empty canvas opens up. I prefer to give it a name right away. Click the save symbol on top and name it “R_Forecasting”.


To obtain a list of the countries that need to be forecasted, add a “SAP HANA Client” operator to the graph. Select the “Operators” section on the left and search for SAP HANA. The SAP HANA Client shows up.


Drag the icon onto the empty canvas. Open its configuration panel by clicking the symbol on its right-hand side.


On the configuration panel you just need to open the graphical connection editor by clicking on the small icon to the right of “Connection”. Set the “Configuration Type” to “Configuration Manager” and you can select the name of the connection you had created earlier in the “Connection Management”, then Save.


Now this SELECT statement needs to be passed to the SAP HANA Client to get the desired list.

SELECT DISTINCT COUNTRY FROM TA01.VEHICLEREGISTRATIONS;

This is done with the Operator called “Constant Generator”. Add such an operator to the left of the SAP HANA Client. Open the Constant Generator’s configuration panel. Paste the above statement into the content configuration.


To connect the two operators click the output port of the “Constant Generator” and drag the mouse onto the sql input port of the SAP HANA Client.


Add a “Wiretap” Operator to the output of the SAP HANA Client. This will allow us to see the data that was retrieved from SAP HANA.


Save the Graph and start it by clicking the “Run” button on top.


And if all goes well, the graph should briefly show at the bottom of the screen as a pending graph, before moving to the running graphs section. We will see later how to terminate such a graph after its completion. To see the records that were retrieved from SAP HANA, click on the running graph at the bottom of the screen.


A new page opens, showing the current status of the running graph. On this screen, open the Wiretap’s User Interface.


A separate Browser tab opens and displays the retrieved data in message format.


Now terminate this graph with the “Stop Process” button at the bottom of the modeler page.


Obtain each country’s individual history


To retrieve the history of each country, the appropriate individual SELECT statements have to be created and sent to a second SAP HANA Client. We will use JavaScript to access the different Countries in the output of the first SAP HANA Client to create the correct SELECT statements.

When we get to the JavaScript syntax, we want the input data to be in JSON format. SAP Data Hub comes with an operator that converts a blob into JSON format. As our data is in message format, first add a “ToBlob Converter”, so that the JSON conversion can be applied afterwards.


Now we can add the “Format Converter”, which produces JSON output by default.


Now add a “ToString Converter” to the end of the graph, which is needed for the JavaScript operator to read the data correctly. Connect the Format Converter’s output to the lower input port of the ToString operator.


By the way, in case your graph does not look as orderly, you can click the “Auto Layout” button on top to get the icons in line.


Now on the Modeler’s Operator tab search for JavaScript and add the “Blank JavaScript” operator to the graph.


This operator does not have any input or output ports yet. These can be manually added. Right click the new operator and select “Add Port”. The JSON data will be retrieved as string. Hence name the first port “input” with Type “string”. Keep the radio button on “Input”.


The JavaScript will output SELECT statements in string format. Hence create a second port. Name it “output”, with Type “string” and switch the radio button to “Output”.


Now you can connect the ToString’s output port to the JavaScript’s input port. Then open the new node’s Script setting.


Copy the following code into the Script window. The first line specifies that the onInput function will be called when data is retrieved on the “input” port. The onInput function retrieves the input data and parses the JSON format. It then iterates through all countries and outputs for each country the SELECT statement that requests each country’s history.

$.setPortCallback("input",onInput)

function onInput(ctx,s) {
    var json = JSON.parse(s);
    for(var ii = 0; ii < json.length; ii++) {
        var country_name = json[ii].COUNTRY;
        $.output("SELECT COUNTRY, MONTH, REGISTRATIONS FROM TA01.VEHICLEREGISTRATIONS WHERE VEHICLEREGISTRATIONS.COUNTRY = '" + country_name + "' ORDER BY MONTH ASC;")
    }
}

Close the Script window with the small cross on top.


Connect a second Wiretap operator to the output of the JavaScript operator. Save and run the graph. In the Wiretap’s UI you see the SELECT statements coming from the JavaScript’s output port.


These SELECT statements now need to be passed to a SAP HANA Client to execute them. This is now very much the same as what we had at the very start of this graph. Add a SAP HANA Client, specify its connection and add a Wiretap to the end to verify the output.


Run the graph and in the Wiretap UI you see the history of each country. Each country is an individual request, which allows SAP Data Hub to scale the operation. Note that by default the Wiretap is showing only the first 4092 characters. Hence not all countries show up. This value can be increased though in the Wiretap’s configuration.


Create the forecasts in R


We now have the input data needed for our time series forecasts. Before continuing with the graph however, we need to provide a Docker file, which SAP Data Hub can use to create Docker containers to run the R syntax in.

Go into the “Repository” tab on the Modeler menu. Click the plus-sign on top and choose “Create Docker File”.


Name it “Docker_R_Forecast”. The following Docker syntax worked well for me. It is installing for example the forecast package which is carrying out the predictions in our code. Copy and paste this into the Docker file. Adjust it as you see fit.

FROM debian:9.4

RUN apt-get update \
&& apt-get install -y --no-install-recommends \
locales \
&& rm -rf /var/lib/apt/lists/*

## Install R
## Set a default CRAN repo
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
r-base \
r-base-dev \
r-recommended \
libnlopt-dev \
curl \
        && echo 'options(repos = c(CRAN = "https://cran.rstudio.com/"), download.file.method = "libcurl")'>> /etc/R/Rprofile.site \
&& rm -rf /tmp/downloaded_packages/ /tmp/*.rds \
&& rm -rf /var/lib/apt/lists/*

## Install additional R packages
RUN apt-get update
RUN apt-get install -y libcurl4-openssl-dev

RUN echo 'options(repos="https://cran.rstudio.com/", download.file.method = "libcurl")'>> /etc/R/Rprofile.site \
    && Rscript  -e "install.packages(c('Rserve'), dependencies=TRUE)" \
    && Rscript  -e "install.packages(c('jsonlite'), dependencies=TRUE)" \
    && Rscript  -e "install.packages(c('MLmetrics'), dependencies=TRUE)" \
    && Rscript  -e "install.packages(c('forecast'), dependencies=TRUE)" \
    && Rscript  -e "install.packages(c('lubridate'), dependencies=TRUE)" 

Open the Docker file’s configuration with the icon on the top right corner.


This opens a side panel, in which we need to enter some tags. These tags help SAP Data Hub to start the most appropriate Docker container for your code. Add the tags rserve and rjsonlite, which should already exist. Then add a new tag, let’s call it rforecast, to indicate that this Docker file installs the packages needed for our forecasting. Then save the file and initiate the Docker build with the icon right next to it.


The build process might take half an hour or longer.


Go back to the graph, where we need to transform the data again into JSON format, just like before. Add a ToBlob Converter, followed by a Format Converter, followed by a ToString Converter.


This brings the data into a format that R can process. Add an R Client operator.


The operator does not have any ports yet. As before, create an input port named “input” of type string, and an output port named “output” also of type string. Then connect the R Client with the previous node. Open the R Client’s Script window.


Delete the default script and paste the following R code inside. The first line specifies that the dh_forecast() function is called when data arrives. This function is transforming the input data from JSON to a data frame before passing the data to the forecast_monthly() function we have already used in the R Editor. The output of the forecast_monthly() function is then transformed into JSON format before passing the data to the output port. Close the script window.

api$setPortCallback(c("input"), c("output"), "dh_forecast")

dh_forecast <- function(data) {
data <- fromJSON(data)
result <- forecast_monthly(data)
result <- toJSON(result, na = 'null')
list(output=result) 
}

forecast_monthly <- function(data){
  # Load required libraries ---------------------------------------------------------
  library(lubridate)
  library(forecast)
  library(MLmetrics)

  # Specify any parameters, adjust to the use case ----------------------------------
  col_name_date     <- "MONTH"   
  col_name_measure  <- "REGISTRATIONS"
  col_name_segment  <- "COUNTRY"
  date_format       <- "%Y-%m-%d"
  dates_to_forecast <- 12 
  confidence_level  <- 0.95
  forecast_methods  <- c('arima', 'ets', 'rwdrift', 'naive') 
  frequency         <- 12 # This value must not be changed
  
  # Retrieve the individual columns from the data frame -----------------------------
  col_date    <- as.character(data[, col_name_date])
  col_measure <- data[, col_name_measure]
  col_segment <- data[1, col_name_segment]
  
  # Print status update -------------------------------------------------------------
  print(paste("Now starting with: ", col_segment, sep = ""))
  
  # Ensure data is sorted on the date in ascending order  ---------------------------
  data <- data [order(data[, col_name_date]), ] 
  
  # Convert time series into ts object (required by forecast function) --------------
  start_date  <- as.Date(data[1, col_name_date], date_format)
  ts_historic <- ts(data[, col_name_measure], 
                    start = c(year(start_date), month(start_date)),
                    frequency = frequency)
  
  # Keep a hold out sample of forecast length to test forecast accuracy -------------
  ts_short      <- head(ts_historic, n = length(ts_historic)-dates_to_forecast)
  ts_hold_out   <- tail(ts_historic, n = dates_to_forecast)
  
  # Assess all forecasting methods on their performance on the hold out sample ------
  ts_short_mapes <- rep(NA, length(forecast_methods))
  for (ii in 1:length(forecast_methods)) {
    stlf_forecast <- stlf(ts_short, 
                          method = forecast_methods[ii], 
                          h = dates_to_forecast, 
                          level = confidence_level)
    ts_short_mapes[ii] <- MAPE(as.numeric(ts_hold_out), stlf_forecast$mean)
  }
  
  # Select the best performing method to carry out the final forecast ---------------
  forecast_best_mape <- min(ts_short_mapes)
  ts_forecast <- stlf(ts_historic, 
                      method = forecast_methods[which(ts_short_mapes == forecast_best_mape)], 
                      h = dates_to_forecast, 
                      level = confidence_level)
  
  # Dates, name of time series (segment) and date type (Actual or Forecast) ---------
  dates_all    <- as.character(seq(from = start_date, by = "month",  length.out = length(ts_historic)+dates_to_forecast))
  col_segments <- rep(col_segment, length(dates_all))
  model_descr  <- rep(paste(ts_forecast$method, "- MAPE:",  round(forecast_best_mape, 3)), length(dates_all))
  date_types   <- as.character(c(rep("Actual", length(ts_historic)), rep("Forecast", dates_to_forecast)))
  
  # Actual and historic measures ----------------------------------------------------
  forecast_mean   <- rep(NA, dates_to_forecast)
  forecast_mean   <- ts_forecast$mean
  forecast_upper  <- ts_forecast$upper
  forecast_lower  <- ts_forecast$lower
  dates_all_mean  <- as.numeric(c(as.numeric(ts_historic), as.numeric(forecast_mean)))
  dates_all_lower <- as.numeric(c(rep(NA, length(ts_historic)), as.numeric(forecast_lower)))
  dates_all_upper <- as.numeric(c(rep(NA, length(ts_historic)), as.numeric(forecast_upper)))
  
  # Return the combined data --------------------------------------------------------
  result <- data.frame(SEGMENT = col_segments,  
                       MONTH = dates_all, 
                       MEASURETYPE = date_types, 
                       MEASURE = dates_all_mean, 
                       MEASURELOWER = dates_all_lower, 
                       MEASUREUPPER = dates_all_upper, 
                       MODEL = model_descr)
  return(result)
}

To execute this code, SAP Data Hub still needs to know in which Docker container the code should run. This is specified by creating a so-called Group. Right-click on the R Client and choose “Group”.


This creates a slightly shaded area around the R Client, which represents the group. Select the border of this shaded area and click the icon on the bottom right. This opens the configuration of the group, where you can enter the rforecast tag. When running the graph, SAP Data Hub will use the Docker image with that tag, which includes all the required dependencies for our code. Multiple operators can be placed into such a group, hence the name.


Add another Wiretap to the output port of the R Client and save and run the graph. The forecasts were correctly created!


Write the forecasts to SAP HANA


The forecasts exist now within the graph. We want to write them to the SAP HANA table that we created earlier. Add another SAP HANA Client to the graph. Its input is connected to the data port now (not sql as before). In its configuration:

◈ Specify the connection that was created in the Connection Management.
◈ Set the “Input format” to JSON.
◈ Set the “Table name” to the qualified target table, i.e. TA01.VEHICLEREGISTRATIONS_FORECAST.
◈ Specify the names and types of the target table’s columns in “Table columns”. Create these individually with the “Add Item” option: SEGMENT NVARCHAR(14), MONTH DATE, MEASURETYPE NVARCHAR(20), MEASURE DOUBLE, MEASURELOWER DOUBLE, MEASUREUPPER DOUBLE, MODEL NVARCHAR(100).

The graph should look very much like this now.


Save and run the graph. The time series are written into the SAP HANA table! Here is a preview of the table in the SAP HANA Modeler.
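
If you want to double-check the result directly in SAP HANA, a simple aggregation over the target table shows per country how many actual and forecast rows were written:

-- Quick sanity check of the written forecasts.
SELECT SEGMENT, MEASURETYPE, COUNT(*) AS NUM_ROWS,
       MIN(MONTH) AS FIRST_MONTH, MAX(MONTH) AS LAST_MONTH
FROM TA01.VEHICLEREGISTRATIONS_FORECAST
GROUP BY SEGMENT, MEASURETYPE
ORDER BY SEGMENT, MEASURETYPE;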


Terminate the graph


The forecasts are created and written into the target table. All the necessary logic has been executed, yet you can see that the graph is still running.


SAP Data Hub contains a specific operator, to gracefully end a graph, the “Graph Terminator”. This operator cannot just be added to our graph though, as the operators at the end of the graph are called multiple times, once for each country. This means that the Graph Terminator would be called as soon as the first country has been forecasted. The graph could then be terminated before all countries have been dealt with. Hence we need to add a gate keeper that calls the Graph Terminator only once all countries have been processed.

Start by calculating the number of countries in the dataset. We want to have this dynamic so that the graph still works well in case the number of countries is changing over time. Add a “1:2 Multiplexer” in front of the current JavaScript operator. This opens up a second branch in the graph.


Continue the new branch with a second “Blank JavaScript” operator. Add one input port called “input” of type string and an output port called “output” of type int64.


This operator will be called only once. Enter the following code, which calculates the number of countries and passes that value to its output port.

$.setPortCallback("input",onInput)

function onInput(ctx,s) {
    var json = JSON.parse(s);
    $.output(json.length)
}

Add a further Blank JavaScript operator, which acts as the gatekeeper, with these three ports:

◈ Input port of type int64 called “inputtarget”
◈ Input port of type message called “inputcurrent”
◈ Output port of type string called “output”

Connect the two open branches to the new operator.


Enter the following script into the new JavaScript operator. It is receiving the number of time series that are to be forecasted and it keeps track of how many have already been completed. Once all time series have been processed the graph’s execution continues on the output port. Any value could be passed to the output port, the string itself does not have any meaning or purpose.

$.setPortCallback("inputtarget",onInputTarget)
$.setPortCallback("inputcurrent",onInputCurrent)

var target = -1
var current = 0

function onInputTarget(ctx,s) {
    target = s
    if (current == target) {
          $.output("DONE")
    }
}

function onInputCurrent(ctx,s) {
    current++
    if (current == target) {
          $.output("DONE")
    }
}

Now only the “Graph Terminator” needs to be added. Run the graph and we are done!


The forecasts have been produced and written into the target table. And the graph has successfully ended. In the status bar it correctly moved from the “Running Graphs” section to the “Completed Graphs” section.


The graph is now independent and could be scheduled to regularly produce the latest forecasts.


Query Automation on HANA


Business requirement:


When a landscape is transformed to SAP Enterprise HANA, a huge number of SQL queries needs to be converted into HANA graphical VDMs (Virtual Data Models) so that BI tools such as Lumira, Web Intelligence and Design Studio in BOBJ, or Tableau can consume them. In one scenario, exactly such a request (convert SQL to a HANA graphical VDM) was raised when the landscape was transformed to SAP HANA.

Current Process:


The current process is to convert the SQL query into a HANA graphical virtual data model by performing the following manual steps:

1. Identify the tables manually.
2. Identify the selections, joins, projections, and unions manually.
3. Redesign the query graphically inside HANA Studio.

Assumption and Risk:


If the SQL queries are more complex, the following risks arise:

1. It is very time consuming. Converting a single query into a graphical HANA VDM can take several days.
2. Data accuracy mismatch: it is very difficult to convert a SQL query into a HANA VDM accurately by hand, and most of the time the data does not match.

Solution:


After this solution is implemented, manual intervention from the business is drastically reduced, and so is the duration of the transformation. The desired results can be achieved with proper accuracy by implementing the logic within the standard HANA interface. The conversion becomes a largely automated process that delivers accurate results to the business user.

Proposed Logic and Scenarios:


Proposed logic is given below –

1. Get the SQL query from the business.

2. Format it with the proper format.

3. Open HANA Studio Interface

4. Create a Script based calculation view.

5. Add the SQL query in the Script based calculation view.

6. Create output columns matching the SQL query.

7. Activate the Script based Calculation view.

8. Go to the option –

Windows -> show view -> others -> HANA -> Quick View

9. Select the migrate option -> select the HANA system from which you want to migrate

-> select “Script based calculation view to graphical calculation view and table function”.

10. Click next and then select the script which you want to transform.

11. Click finish.

12. Once the Finish button is clicked, one graphical calculation view and one table function will be created (an illustrative sketch of such a table function follows after these steps).

13. But you will be unable to activate the graphical calculation view and table function from the modeler perspective.

14. Go to developer mode -> repository -> Open the table function-> activate it.

15. Return to modeler perspective ->Activate the Graphical calculation view.

16. Finally this graphical view represents the SQL query which we have transformed.

17. Now, if you run a data preview, you will get exactly the same result as with the SQL query.
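
For illustration, the table function generated in step 12 typically has the following shape; the schema, object names and the query itself are placeholders here, not the wizard’s literal output:

-- Illustrative sketch of a generated table function wrapping the original SQL.
FUNCTION "MYSCHEMA"."mypackage::TF_SALES_QUERY" ()
RETURNS TABLE ("MATNR" NVARCHAR(40), "NET_VALUE" DECIMAL(15,2))
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
    RETURN
        SELECT "MATNR", SUM("NETWR") AS "NET_VALUE"
        FROM "MYSCHEMA"."SALES_ITEMS"
        GROUP BY "MATNR";
END;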

Scenario:


We have got the following query, which needs to be transformed into a HANA graphical view so that the BI tools can consume it:

1. Get the SQL query from the business –


2. Format the SQL into a proper, readable form.

3. Open the HANA Studio interface and create the script-based calculation view –


4 and 5. Provide the SQL inside the view –

i.e. var_out = <SQL query>, as sketched below.
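
A minimal sketch of how the script-based calculation view body then looks; the table and column names stand in for the business query from step 1:

/********* Begin Procedure Script ************/
BEGIN
    var_out = SELECT "MATNR",
                     SUM("NETWR") AS "NET_VALUE"
              FROM "MYSCHEMA"."SALES_ITEMS"
              GROUP BY "MATNR";
END
/********* End Procedure Script ************/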


6. Create output column similar to SQL Query :

Create the output column of the view as similar as query sequentially –


7. Activate the script based calculation view.

8. Go to the option –

Windows -> show view -> others -> HANA -> Quick View


9. Then select the migrate option -> select the HANA system from which you want to migrate

-> select Script based calculation view to graphical calculation view and table function –


10 and 11. Click Next, then select the script calculation view which you want to transform and click Finish.


12. Once the Finish button is clicked, a graphical calculation view and a table function will be created.

13. But you will be unable to activate the graphical calculation view and table function from the modeler perspective.

14. Go to developer mode -> repository -> Open the table function-> activate it.


15. Return to modeler perspective ->Activate the Graphical calculation view.


16. Finally this graphical view represents the SQL query which we have transformed.

17. Now, if you run a data preview, you will get exactly the same result as the SQL query.


Emergency Access Management (EAM) for HANA Target Systems


Purpose of the Document


In the latest version of GRC 12.0,  SAP has extended the emergency access management (EAM) functionality to HANA database. This blog is to provide the details on how this new functionality can be configured and utilized to manage the firefighting access to HANA target systems.


Another improvement in GRC 12.0 is simplified Firefighter Owner/Controller maintenance:

– In 10.1 User ID must be first defined as FF ID Owner or Controller before assigning to a Firefighter ID.

– In 12.0 Owners and Controllers can be assigned to Firefighter ID even when the User ID is not maintained in Access Control Owners.

Required Configuration to enable EAM for HANA DB
HANA Connection Configuration


Create HANA database connection in GRC system using transaction code DBCO (Database Connection Maintenance)


DB Connection: Fill in the DB Connection name. This name will be used in the connector setup so name it accordingly.
DBMS: Select the type of Database Management System as “HDB” (HANA Database)
User Name and Password: Valid user authentication details to connect to HANA DB
Connection Info: HANA database system details (Hostname details along with Port Number)


Save the database connection after entering all required details as mentioned above.

Testing HANA DB Connection created in GRC


HANA database connection can be tested using ABAP report ADBC_TEST_CONNECTION

Execute transaction SE38 and run report “ADBC_TEST_CONNECTION”


HANA DB connection can also be verified using the transaction “DBACOCKPIT” .


HANA Database Connector in SM59


Create a connector in SM59 with connection type as “L” (Logical Destination) and connector name same as the connection created in DBCO.


Audit Policy Configuration in HANA DB

Activities on the SAP HANA database (user changes, role changes, creation or deletion of database objects, changes to the system configuration, access to or changing of sensitive information) can be tracked and recorded via the built-in audit configuration feature.

The SAP HANA database auditing feature allows monitoring of the activities performed in HANA DB. To make use of this feature, an SAP HANA audit policy must be activated on HANA DB.

SAP's recommendation is to create a separate audit policy for each of the following activities performed in HANA DB:

◈ Granting and Revoking of Authorization
◈ Session Management and System Configuration
◈ Structured Privilege Management
◈ User and Role Management
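As a rough illustration, an audit policy covering the first of these areas could be created and enabled with SQL along the following lines (the policy name is an assumption, and global auditing must already be switched on in your system):

-- Hypothetical audit policy for granting and revoking of authorization
CREATE AUDIT POLICY "AP_GRANT_REVOKE"
    AUDITING ALL GRANT PRIVILEGE, REVOKE PRIVILEGE, GRANT ROLE, REVOKE ROLE
    LEVEL INFO;

ALTER AUDIT POLICY "AP_GRANT_REVOKE" ENABLE;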


HANA Connector Config Setup in GRC


Define connectors in the following IMG path

SPRO -> IMG -> GRC -> Common Component Settings -> Integration Framework -> Maintain Connectors and Connection Types -> Define Connectors


Define connector groups in the following IMG path and assign HANA DB connectors to this connector group

SPRO -> IMG -> GRC -> Common Component Settings -> Integration Framework -> Maintain Connectors and Connection Types ->Define Connector Groups


Maintain Connection Settings

As a good practice, connectors should be assigned to all available integration scenarios (AM, ROLMG, SUPMG, AUTH, PROV).

SPRO -> IMG -> GRC -> Common Component Settings -> Integration Framework -> Maintain Connection Settings


Maintain Connector Settings

Maintain connector settings in the following path and assign HANA Audit Policy and HANA IDE URL to the HANA DB connectors as shown in the following screenshots.

SPRO -> IMG -> GRC -> Common Component Settings -> Integration Framework -> Maintain Connector Settings


HANA WEB IDE URL

Web IDE for SAP HANA is a browser-based IDE for the development of SAP HANA-based applications. This web-based IDE is called the SAP HANA Web-based Development Workbench, which contains four modules. EAM firefighting will be enabled to use the HANA IDE, and this is the reason for including the URL in the connector attributes.

Editor: Manage HANA repository artifacts
Catalog: Manage HANA DB SQL catalog artifacts
Security: User and Role Management
Trace: Set or download trace files for HANA applications

Delivery Unit deployment in HANA DB

Delivery Unit deployment in HANA DB and activation of the SQL procedures under the AC folder in HANA DB are prerequisites and must be performed according to the steps mentioned in the following SAP Note:

https://launchpad.support.sap.com/#/notes/1869912


GRC Procedures Activation

Details on how the corresponding SQL procedures under the ARA and ARQ folders are required to be activated are available in SAP Note 1869912.

SQL procedures under the ARA folder – execute them in any sequence.

SQL procedures under the ARQ folder – execute the procedures starting with IS or INS first, followed by the procedures starting with GRANT and REVOKE, and finally the remaining procedures.

“GET_USERS_SYNC” procedure has an updated version released through the following SAP Note. Hence, download this from the note and activate it as it is not updated in the latest version by default.

2451688 – Repository sync job not syncing back user validity dates from HANA

However, there are a few errors which you may come across during SQL procedure activation, such as the one shown below:


Firefighter ID Setup in HANA DB

Step 1: I have created a role in HANA DB with the same name as the one used in config parameter 4010 (Firefighter ID role name).

Step 2: Created a User ID in HANA DB and assigned the role created in the previous step to it, so that the GRC system recognizes the newly created User ID as a Firefighter ID.
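A minimal sketch of these two steps in SQL could look like this (role name, user name and password are assumptions; the role name must match the value of parameter 4010):

-- Hypothetical firefighter role and user in HANA DB
CREATE ROLE "FF_ROLE_HANA";
CREATE USER FF_HANA_01 PASSWORD "Initial1Password";
GRANT "FF_ROLE_HANA" TO FF_HANA_01;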


GRC Repository Object Sync

Execute the "Repository Object Sync" program once all the above configuration is completed; it should successfully sync the USERS and ROLES from HANA DB to the GRC system.


Assignment of FF ID Owner and Controller to HANA Firefighter ID

In GRC 12.0 Owners and Controllers can be assigned to Firefighter ID even when the User ID is not maintained in Access Control Owners. This is applicable for “Mass Maintenance” feature as well.

EAM Centralized Vs. Decentralized Firefighting for HANA DB

The decentralized scenario is currently not supported for HANA target systems. Only centralized firefighting is supported, and the Firefighter logon must be done via transaction GRAC_EAM/GRAC_SPM in the GRC Foundation system, as the logic to generate the password for the Firefighter ID is implemented in the GRC system only. You can verify the details in SAP Note 2654895 – FAQ: GRC Access Control 12.0 Installation Questions and Recommendations.

Common Errors (Error 1)

When you create a User ID in HANA DB that you want to use as a Firefighter ID, please ensure that the length of the User ID is not more than 12 characters. If the Firefighter ID is longer than 12 characters, the following error message will be shown when you try to start the FF session, as the EAM functionality does not support it.


EAM Centralized Firefighting process for HANA systems


If you have completed all the above steps successfully then you can perform EAM testing for HANA target systems.

Step 1: Execute transaction “GRAC_EAM” in your GRC system as you can use only Centralized Scenario


Step 2: Click on “Logon” button and enter the required details and click “Continue” to launch the Firefighting session


Step 3: HANA IDE URL which has been configured during Connector Setup will be launched and will redirect to the logon screen.


The Firefighter ID status will show as "GREEN" until you log in to the HANA IDE.


You have to enter the Firefighter ID and the password (just paste the password, which is already copied to the clipboard, with CTRL+V in the password field). After that your firefighting session will begin and the status of the Firefighter ID in the EAM launchpad screen will turn red.


Step 4: Perform required activities in HANA system and once completed log off the Firefighting session.

Step 5: All the logs recorded during the firefighting session can be accessed from the HANA table AUDIT_LOG. The same logs will be retrieved and shown in the EAM log review workflow request.


Step 6: After the completion of the firefighting session, execute the EAM log sync job, which retrieves the logs from the HANA system and creates the log review workflow request.


Key Points to note:


Issue 1: Password copied to the clipboard: if the password is copied to the clipboard it can be shared with anyone, and there is a potential chance of FF ID misuse by an unauthorized user.
We are currently working with SAP support to check whether a time limit can be set for password expiry,
e.g. to make the password in the clipboard unusable after 10 to 15 seconds. This could be a compensating control from a security perspective.

Issue 2: When logging on to the HANA IDE through EAM, ensure that no other HANA IDE session with a normal User ID is active. If any such session is active, the system redirects to the existing session instead of starting a new one.

Issue 3: During the FF session, always make sure to log out of the session properly after completion. If the HANA IDE is closed directly without logging out, the FF session will remain active until the timeout period set for the HANA IDE is reached.

Implementation of Decision Tables with Return Values in SAP HANA


Introduction to Decision Table


A decision table captures business rules in a tabular format for automating business decisions. In this blog, I would like to show a step-by-step process of creating a simple decision table on HANA and how to execute it.

Prerequisite for creating a Decision Table


We can create a decision table in HANA based on different sources:

◈ Multiple tables or Table type
◈ Information view (Attribute or Analytic or Calculation view)

Let’s take a scenario where business wants to see how the revenue will be affected if they give discounts on specific product groups depending on the quantity bought by customers. We will be implementing this scenario in the below steps.

In this blog, we will be creating a decision table in HANA based on a table. The process of creating decision tables with "update values" and decision tables with "return values" is almost the same, except that we use columns from the physical tables as actions in the first case, while parameters are used as actions in the latter.

Now, there is a transaction table called SALES1. It stores the sales data for different product IDs, with their quantity and amount, along with the sale date.


Steps for creating a Decision Table in HANA


1. Let us start by creating a “Decision Table” in HANA Modeler

Right click on the specific package under which you want to create the “Decision Table” and select  “New” -> “Decision Table”


Provide the initial details (Name and Description) and click on Finish. Let's call it "DT_SALES_AMOUNT".


2. Click on the + icon on the data foundation and add the “SALES1” table


3. Select the PRODUCT_ID and AMMOUNT columns from the table.


4. Select PRODUCT_ID under the "Attributes", right click and select "Add as Conditions". By doing this we are marking the product IDs from the SALES1 table as conditions.


5. Now, create a new parameter by right-clicking on the Parameters and click “New…”


Provide the following details like “Name”, “Description” and “Data Type” and click on OK


6. Right click on the newly created parameter “NEW_AMMOUNT” and select “Add as Actions”


This will create a new “Action” for the parameter


7. Now, click on “Decision Table” node and the following scenario appears in the “Details” section


8. Right click on PRODUCT_ID and click on "Add Condition Values". By doing this we are adding the specific product IDs for which the business wants the new amounts.


Click on the icon to open the list of values window as shown below


Select a specific PRODUCT_ID (e.g. 1) from the list of values and click on OK


After selecting the value from the list, select OK again


9. Now, right click on "NEW_AMMOUNT" in the Details panel and click "Set Dynamic Value". This is the part where we set the variance (1.5 times the original value) which is desired by the business.


10. Enter "AMMOUNT" * 1.5 in the text box and then press 'alt + enter' to set it. This is the step where we actually define the dynamic value (the variance).


After, it will look like this:


11. Now, again right click on PRODUCT_ID and add another condition value by selecting another product ID (2) from the list of values, repeating the process. The "Dynamic Value" (variance) will be repeated automatically by default for the other "PRODUCT_ID" as well; we can change it as per the business requirement.


12. Save and validate, then save and activate the decision table.


13. Once the decision table is activated successfully, go under "Catalog" and expand the "_SYS_BIC" schema. In the "_SYS_BIC" schema, go under "Procedures" and look for the newly created decision table.


Double click on "DT_SALES_AMMOUNT" and we can inspect the underlying stored procedure code.


Decision Table Execution Process

◈ Open the SQL console in HANA studio by right-clicking over the system.
◈ Then type CALL "_SYS_BIC"."<package_name>/DT_SALES_AMMOUNT" (?), as shown in the example below
◈ Click on Execute
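A minimal, hypothetical example of that call (the package name "mypackage" is an assumption) would be:

-- Call the procedure generated for the decision table; the result set
-- returned contains the recalculated NEW_AMMOUNT values
CALL "_SYS_BIC"."mypackage/DT_SALES_AMMOUNT"(?);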


Once the SQL code is successfully executed, we can see that the values in the "AMMOUNT" column have increased to 1.5 times the original amount for product IDs 1 and 2.


Implementing SLT configuration (for sidecar) in SAP HANA

In this blog I am going to discuss one of the data provisioning techniques in SAP HANA and how to implement it.

There are primarily three types of data provisioning techniques that can be implemented with the SAP HANA database. These are given below:

1. SAP Landscape Transformation (SLT)
2. SAP BusinessObjects Data Services (BODS)
3. SAP HANA Direct Extractor Connection (DXC)

Today we will discuss the SLT technique, i.e. SAP Landscape Transformation. SAP Landscape Transformation Replication Server is the SAP technology that allows us to load and replicate data in real time from SAP and non-SAP source systems to an SAP HANA environment.

SLT is relevant for all SAP HANA customers who need real-time or scheduled data replication from SAP and non-SAP sources. SLT uses trigger-based technology to transfer the data from any source to SAP HANA in real time. The SLT server can be installed on a separate system or on the SAP ECC system.

# Benefits of the SLT system –


◈ Allows real-time or scheduled data replication.
◈ While replicating data in real time, we can convert the data into the SAP HANA format.
◈ SLT handles cluster and pool tables.
◈ It is fully integrated with SAP HANA Studio.
◈ SLT has table setting and transformation capabilities.
◈ SLT has monitoring capabilities with SAP Solution Manager.

#Basic Architecture :



#Steps to implement SLT on SAP HANA :


Click start  -> Click SAP Logon :


Click on the ECC system :


We need to provide the user name and password, then press Enter:


Then provide the transaction code SM59 and press Enter:


Click new:


Provide RFC Destination name:


Click on the search button :


Click connection to ABAP system:


We need to provide a description:


Click Logon & Security -> provide the language 'EN', the user ID and the password.


Click Technical Settings -> provide the target host 'SAPSLT' and instance number '20':


Now test the RFC connection and click yes:


Now we can see that the RFC destination RFC_18 is saved and the connection test is successful:


Click on the icon below -> click on Create Session. Then provide the transaction code LTRC and press Enter:


Click the Create icon shown below:


Provide Configuration name and then click next :


Provide the RFC destination name, allow multiple users and read from a single client. Then click Next:


Provide the administration user name and password, the host name and the instance. Then click Next.


Specify the transfer settings: enter the number of data transfer jobs, the number of initial load jobs and the number of calculation jobs –


Then click Next and Create. You can see that the DD02L, DD02T and DD08L tables are created by the system.


Then log on to SAP HANA Studio, go to the Modeler perspective and reset the perspective. Click Data Provisioning:


Select Source System and Target  Schema:


You can now see tables are replicated from ECC.


Now we can check inside the ECCHANA_18 schema and see that the tables are replicated inside the system.
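A quick way to cross-check the replication from the SQL console is a simple count on one of the replicated tables (schema name as created in this example; the table chosen is illustrative):

-- Verify that records arrived in the target schema
SELECT COUNT(*) FROM "ECCHANA_18"."DD02L";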


Now the SLT configuration and replication process has been completed. In the next blog I will discuss the SLT implementation in the main scenario.

How to create XSOData from HANA Calculation Views with input parameters and Navigation properties

We have seen several blogs on how we can easily expose the results of a calculation view as XSOData service.

We can even define navigation properties in the XSOData between the entities. By defining the navigation – we can make use of the existing Smart templates in WebIDE to develop Master Detail kind of UIs.


We can find many such blogs, as shown above; however, in the case of complex calculation views that require input parameters, defining the association and navigation is a challenge.

Unfortunately, the SAP help documentation and other blogs have not explained this scenario in detail.

In this blog, I will explain how a calculation view that requires input parameters can be linked to other entities and how a navigation property is defined in the XSOData.

We will divide this blog into two steps:

1. Creating Calculation Views and exposing them using XSOData
2. Adding association and navigation to OData entities using XSOData

We will take a very simple example consisting of entities Customer, Sales Order Header and Sales Order Items. In the 1st step we will create calculation views for our entities and expose them. So the experts can directly dive to 2nd step where we will add association and navigation for our entities.

Step 1: Creating Calculation Views and exposing them using XSOData


As we can see from the above image, our entities have the following attributes:

Customer

◈ ID
◈ CustomerName
◈ CustomerLocation
◈ Count

Sales Order Header

◈ ID
◈ SalesOrderDate
◈ CustomerID
◈ CustomerName
◈ CustomerLocation
◈ NetAmount

Sales Order Item

◈ ID
◈ SalesOrderItemHeader
◈ ItemName
◈ ItemPrice
◈ ItemQuantity

I'll share the screenshots of the calculation views and the XSOData code below; for more information you can follow the SAP HANA developer's guide.

Customer Calculation View



The above calculation view CustomerView is exposed as Customer in XSOData as follows:

service{
    "demo.blog1::CustomerView" as "Customer"
    keys("ID");
}

To access Customer entity, the URL for my OData entity Customer will be as follows:

https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/Customer


https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/Customer(1)


Sales Order Header Calculation View


Note: We have defined input parameters START_TS and END_TS to output SalesOrderHeaders which have SalesOrderDate between START_TS and END_TS.

As we have input parameters we will define XSOData such that parameter values are passed in the URL for our entity along with the key. Therefore the calculation view SalesOrderHeaderView is exposed as SalesOrderHeader in XSOData as follows:

service{
    "demo.blog1::SalesOrderHeaderView" as "SalesOrderHeader"
    keys("ID")
    parameters via key and entity;
}

Note: You may face an error such as "Unsupported Parameter" while activating the above service.

To access SalesOrderHeader entity, the URL for my OData entity SalesOrderHeader will be as follows:

https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/SalesOrderHeaderParameters(START_TS=datetime'2015-07-24T00:00:00.0000000',END_TS=datetime'2018-12-22T23:59:59.0000000')/Results


https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/SalesOrderHeader(START_TS=datetime'2015-07-24T00:00:00.0000000',END_TS=datetime'2018-10-22T23:59:59.0000000',ID=1)


Sales Order Item Calculation View


The above calculation view SalesOrderItemView is exposed as SalesOrderItem in XSOData as follows:

service{
    "demo.blog1::SalesOrderItemView" as "SalesOrderItem"
    keys("ID");
}

To access SalesOrderItem entity, the URL for my OData entity SalesOrderItem will be as follows:

https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/SalesOrderItem


https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/SalesOrderItem(1)


The above services are consolidated into a single XSOData file. Here is the code snippet for the consolidated XSOData:

service{
    "demo.blog1::CustomerView" as "Customer"
    keys("ID");
    
    "demo.blog1::SalesOrderHeaderView" as "SalesOrderHeader"
    keys("ID")
    parameters via key and entity;
    
    "demo.blog1::SalesOrderItemView" as "SalesOrderItem"
    keys("ID");
}

annotations {
    enable OData4SAP;
}

Step 2: Adding association and navigation to OData entities using XSOData


In the previous step we created Customer, Sales Order Header and Sales Order Item calculation views and exposed them using XSOData. Now we will add association between the entities and define navigation between them.

If you have followed from Step 1, you will notice that calculation view Customer has no input parameters and Sales Order Header has 2 input parameters START_TS and END_TS.

In order to navigate from the Customer to the SalesOrderHeader entity in XSOData, the START_TS and END_TS input parameters have to be passed from the Customer to the SalesOrderHeader entity. Hence we will add dummy START_TS and END_TS input parameters to the Customer calculation view, whose only purpose is to pass these values from Customer to SalesOrderHeader during navigation.

This is how our entities will look after association and navigation is added:


The change required in Customer Calculation View is marked in red in below image:


To add association and navigation we will make changes to our XSOData file. The changes are highlighted using different colored rectangles, and the explanations for the highlighted changes are given below:


1. The change reflected in yellow rectangle is because of the changes made to Customer calculation view. It will enable Customer entity to accept START_TS and END_TS as input parameter along with its key.

2. In red rectangle we can see the association defined between the entities Customer and SalesOrderHeader. Note that we have written input parameters before the key in principal and dependent declaration for the corresponding association. Input parameters should always be written before the keys.

3. In green rectangle we can see the association defined between the entities SalesOrderHeader and SalesOrderItem.

4. Lastly we have blue rectangle where we update our entities to navigate to the corresponding associations.

This is how we define association and navigation between OData entities using XSOData. Here is the code snippet of the XSOData file after adding association and navigation:

service{
    "demo.blog1::CustomerView" as "Customer"
    keys("ID")
    navigates ("CustomerSalesOrdersAssociation" as "CustomerSalesOrders")
    parameters via key and entity;
    
    "demo.blog1::SalesOrderHeaderView" as "SalesOrderHeader"
    keys("ID")
    navigates ("SalesOrderHeaderItemsAssociation" as "SalesOrderHeaderItems")
    parameters via key and entity;
    
    "demo.blog1::SalesOrderItemView" as "SalesOrderItem"
    keys("ID");
    
    association via parameters "CustomerSalesOrdersAssociation"
    principal "Customer"("START_TS","END_TS","ID") multiplicity "1"
    dependent "SalesOrderHeader"("START_TS","END_TS","CustomerID") multiplicity "*";
    
    association via parameters "SalesOrderHeaderItemsAssociation"
    principal "SalesOrderHeader"("ID") multiplicity "1"
    dependent "SalesOrderItem"("SalesOrderItemHeader") multiplicity "*";
}

annotations {
    enable OData4SAP;
}

Now let's see how to access the OData entities and their navigation properties through URLs.

To access Customer when navigation is not required, we can pass some dummy timestamps as START_TS and END_TS, since these parameters don't affect the Customer calculation view and are used only for passing values on to the Sales Order Header calculation view.

Here are some examples:

https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/CustomerParameters(START_TS=datetime'',END_TS=datetime'')/Results


https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/Customer(START_TS=datetime'',END_TS=datetime'',ID=1)


In cases where we require navigation from Customer to SalesOrderHeader we will pass valid timestamp as START_TS and END_TS input parameters to fetch sales orders for customers between the given time range.

Here are some examples:

1. To get customer with ID=1, the URL will be as follows:

https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/Customer(START_TS=datetime'2015-07-24T00:00:00.0000000',END_TS=datetime'2018-11-22T23:59:59.0000000',ID=1)


2. To get all the Sales Orders for the Customer with ID=1 between the given time range, the URL is as follows:

https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/Customer(START_TS=datetime'2015-07-24T00:00:00.0000000',END_TS=datetime'2018-11-22T23:59:59.0000000',ID=1)/CustomerSalesOrders


3. To get the sales order with ID=2 from the previous result, the URL is as follows:

https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/Customer(START_TS=datetime'2015-07-24T00:00:00.0000000',END_TS=datetime'2018-11-22T23:59:59.0000000',ID=1)/CustomerSalesOrders(START_TS=datetime'2015-07-24T00:00:00.0000000',END_TS=datetime'2018-11-22T23:59:59.0000000',ID=2)


4. To get all sales order items belonging to the Sales Order from the previous result, the URL is as follows:

https://blogdemoi347902sapdev.int.sap.hana.ondemand.com/demo/blog1/api.xsodata/Customer(START_TS=datetime'2015-07-24T00:00:00.0000000',END_TS=datetime'2018-11-22T23:59:59.0000000',ID=1)/CustomerSalesOrders(START_TS=datetime'2015-07-24T00:00:00.0000000',END_TS=datetime'2018-11-22T23:59:59.0000000',ID=2)/SalesOrderHeaderItems


HANA Active Active System Replication – Configuration, Failover & Failback

Having worked with an Active-Active Read Enabled (R/E) System Replication scenario, we wanted to share our experiences. The official documentation is good, but it does not provide many diagrams of the overall setup, or the process flows for failover, failback and how the read-only queries can be handled. I worked on this with a long-time colleague, Paul Barker, and it was used for a short evaluation of the Active-Active with Read Enabled capability.

System Replication Prerequisites

◈ 2+ HANA systems, we used HANA 2.00.20 (HANA 2 SP2).
◈ Same size systems
◈ Same HANA SID
◈ Different host names

1. Initial Landscape Configuration


1a. Acquire an Environment

We used a Cloud Appliance Library (CAL) HANA instance for quick and easy access to HANA environments.

1b. Clone Environment, Rename Host

We cloned the HANA instance; with cloud providers such as GCP and AWS, this is a quick way of duplicating an existing environment.


We did experience an issue whereby, after pausing the system, the networking configuration was broken. Upon investigation we found that CAL has some clever start-up scripts that map host names and IPs automatically. These needed to be disabled to preserve changes made to the OS configuration. If you are experimenting with CAL then you would need to modify these scripts.

## Tier2 (Secondary)
/etc/init.d/updatehosts


We now have 2 systems with the same SID but different host names. We now need to tell HANA about the new host name; this can be achieved via this command.

## Tier2 (Secondary)
/hana/shared/HDB/hdblcm/hdblcm --action=rename_system --hostmap=vhcalhdbdb=tier2

1c. Configure System Replication

To enable system replication, we need to tell both the primary and secondary nodes about this configuration.  The secondary needs to be stopped before issuing this command.  When the secondary is re-started it will automatically sync all data with the primary node.

## Tier1 (Primary)
hdbnsutil -sr_enable --name=tier1

## Tier2 (Secondary)
hdbnsutil -sr_register --force_full_replica --remoteHost=vhcalhdbdb --remoteInstance=00 --replicationmode=syncmem --name=tier2 --operationMode=logreplay_readaccess

HDB start

1d. Networking – Virtual IPs

To hide the physical deployment from applications and client tools we can use Virtual IPs to connect to our environment.  To make this possible we need to add a secondary network interface to each HANA node.  We also need to configure the Linux routing tables for each of the network interfaces, as adding the 2nd interface also affects the 1st one.

## Tier1 (Primary) & Tier2 (Secondary)

## Map the new network card (NIC) to eth2 
udevadm trigger --subsystem-match=net -c add -y eth2 

## Verify we now have 2 NICs
sid-hdb:~ # ifconfig -a
eth0      Link encap:Ethernet  HWaddr 0E:FA:20:F0:EE:D2  
         inet addr:172.31.35.197  Bcast:172.31.47.255  Mask:255.255.240.0
         inet6 addr: fe80::cfa:20ff:fef0:eed2/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
         RX packets:34438 errors:0 dropped:0 overruns:0 frame:0
         TX packets:25235 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000 
         RX bytes:92844006 (88.5 Mb)  TX bytes:5562258 (5.3 Mb)

eth2      Link encap:Ethernet  HWaddr 0E:B1:BF:1E:04:96  
         inet addr:172.31.46.157  Bcast:172.31.47.255  Mask:255.255.240.0
         inet6 addr: fe80::cb1:bfff:fe1e:496/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
         RX packets:38 errors:0 dropped:0 overruns:0 frame:0
         TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000 
         RX bytes:1560 (1.5 Kb)  TX bytes:1590 (1.5 Kb)

## Define the default routes for each NIC
sid-hdb:~ # ip route add default via 172.31.32.1 dev eth0 tab 1
sid-hdb:~ # ip route add default via 172.31.32.1 dev eth2 tab 2

sid-hdb:~ # ip route show table 1
default via 172.31.32.1 dev eth0 
sid-hdb:~ # ip route show table 2
default via 172.31.32.1 dev eth2

2. Failover in DR/HA Scenario



2a. Active-Active

We can now access each HANA instance via either the original IP or the new virtual IP address (VIP).  The primary (Tier1) allows any type of query and it can also pass read-only queries to the secondary.  We can also connect directly to the secondary if we wish to use it for purely read-only analytics.  We can verify that our current configuration is as expected.

## Tier1 (Primary) or Tier2 (Secondary)
hdbnsutil -sr_state
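The same information can also be checked from the SQL side, and read-intensive statements can be explicitly routed to the read-enabled secondary with a hint (a minimal sketch; the table in the second statement is illustrative):

-- Check system replication status from SQL (run on the primary)
SELECT HOST, PORT, SECONDARY_HOST, REPLICATION_MODE, REPLICATION_STATUS
FROM   M_SERVICE_REPLICATION;

-- Route a read-only query to the read-enabled secondary
SELECT COUNT(*) FROM "MY_SCHEMA"."SALES"
WITH HINT( RESULT_LAG('hana_sr') );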

2b. Simulate Primary Failure

In a failover scenario the primary could stop unexpectedly; we can simulate this with a kill.

## Tier1 (Primary)
HDB kill -9

2c. Secondary Takeover

We now tell the secondary (Tier2) to become the primary.

## Secondary (Tier2)
hdbnsutil -sr_takeover

2d. Swap Virtual IP to new Primary

Tier2 is now primary, but queries are still being sent to the now dead Tier1 node.  We use the AWS CLI to swap the VIP from the Tier1 node to Tier2.  The command was generated using the AWS Console, but executing it via the CLI prevents errors.  Here we are associating a network interface with a private IP.

## Windows, Mac or Linux with AWS Client Tools
aws ec2 associate-address --allocation-id "eipalloc-0b18c02cfc0694674" --network-interface-id "eni-00858248469

2e. Failover Completed

The process is now completed, we have swapped our primary HANA node from Tier1 to Tier2.

3. Failback to original configuration


The failback process is similar, but first we need to re-sync our old primary (Tier1) with any changes that have taken place while it was offline. The names primary and secondary are now somewhat confusing, as the actual nodes are reversed, but the roles themselves still remain.


3a. Original Primary Down, Secondary Now Primary

We start with just a single active node (Tier2).

3b. Make old Primary Secondary

Before re-starting Tier1, we need to tell it that it is now a secondary node.

## Failed Primary (Tier1)
hdbnsutil -sr_register --force_full_replica --remoteHost=tier2 --remoteInstance=00 --replicationmode=syncmem --name=tier1 --operationMode=logreplay_readaccess

3c. Start new Secondary (old primary)

When Tier1 re-starts it will now sync all changes made during the time it was not running.  We can also verify the status of our system replication configuration.

## Tier1 Failed Primary, becoming a Secondary
HDB start
hdbnsutil -sr_state

3d. Secondary re-sync with Primary

Initially the new secondary will not be available. The time before it becomes operational depends upon the volume of changes while it was off-line. With the re-sync completed we now have 2 nodes as before, but their roles are reversed.

3e. Stop Primary

To promote Tier1 back to primary we need to stop the current primary.

## Tier2 (now Primary)
HDB stop

3f. Promote Secondary to Primary

We can now tell Tier1 that it is the primary node.  It will automatically check Tier2 is not active and then take over.

## Tier1 Switching from Secondary to Primary
hdbnsutil -sr_takeover

3g. Swap Virtual IPs

The networking needs to be updated to reflect the changes in our deployment.  We point the VIP1 to our new primary and VIP2 back to the stopped primary (soon to become secondary).

## Windows, Mac or Linux with AWS Client Tools
## Switch Primary Virtual IP to Tier1 as Primary Node
aws ec2 associate-address --allocation-id "eipalloc-0b18c02cfc0694674" --network-interface-id "eni-059611a76ccc2c7b4" --allow-reassociation --private-ip-address "172.31.44.23" --region us-east-1

3h. Revert Tier2 to Secondary

We now need to tell Tier2 it is a Secondary node again.

## Tier2 revert to Secondary
hdbnsutil -sr_register --force_full_replica --remoteHost=vhcalhdbdb --remoteInstance=00 --replicationmode=syncmem --name=tier2 --operationMode=logreplay_readaccess

3i. Restart and Re-sync Secondary

When we re-start the secondary it will re-sync with the primary.

## Tier2 re-starting as a Secondary
HDB start

3j. Failback completed, both servers are restored.

We finish the process as we began with 2 HANA servers in an Active-Active configuration.  We can verify all is configured as expected.

## Either HANA Node
hdbnsutil -sr_state

Smart Data Integration: Increase Initial Load Performance for Replication Tasks

The aim of this blog post is to assist you in your daily work with Smart Data Integration and the Replication Task.

In some situations I faced performance issues, and I want to use this blog as a central sweet spot for best practices and lessons learned about performance issues surrounding SDI and the initial load of a Replication Task.

To show the advantages of the tweaks, I use a test environment to implement them straight away and show the results.

1. Summary and Conclusion


To summarize this blog, take a look at the table below. It gives you a quick view of the implemented tweaks and their expected impact.
Layer | Performance Tweak | Expected Impact
Database | Source Table Index | Moderate
Database | Source Table Statistics | Moderate
Database | Logical View | High
DP Agent | Increase Heap Size | Low
DP Agent | Adjust framework Parameter | Low
Replication Task | Replication Task Partitioning | High
Replication Task | Virtual Table Statistics | Moderate
Replication Task | Target Table Partitioning | Moderate
Replication Task | Replication Task Filter | High

As we can see, the best performance is achieved by the best Replication Task design. Do not leave the other areas out of scope, as they are part of the big picture, but you should put the bulk of your effort into the design of the Replication Task.

As a result of this workshop we can see that our RT01 Replication Task started with a run time of about 186.000 ms and ended, after all tweaks had been applied, at about 67.000 ms. This is a performance improvement in the area of ~ 60 %.

If you have had a similar situation and are aware of even more tweaks, I would like to know about them. Please feel free to add them in the comments.

2. Introduction


2.1 History

Version | Date | Adjustments
01 | 2018.11.01 | Initial creation

2.2 Architecture

I am using the following architecture as test Environment:


As you can see, this is pretty straightforward. An MS SQL Server 2017 acts as the source database and has the SAP HANA SDI DP Agent 2.0 installed. The destination database is an SAP HANA 2.0 database.

2.3 Versioning

I am using the following Versions of Software Applications.

Source System

◈ Microsoft SQL Server 2017 Developer Edition
◈ Using the WideWorldImportersDW sample Data
◈ SAP HANA SDI DP Agent in Version 2.0 SPS 03 Patch 3 (2.3.3)

Target System

◈ SAP HANA 2.0 SPS 03 Database

2.4 Scope

In Scope of this Blog are Performance Tips and Tricks on the following Layer

◈ The Source System (Restricted)
◈ The SAP HANA SDI DP Agent
◈ The SAP HANA SDI Replication Task

It will cover Database Indexes and Views, SAP HANA SDI DP Agent Parameters as well as Tips and Tricks in the Replication Task design and SAP HANA Parameter settings.

Everything that is not listed here is out of Scope!

2.5 Approach

The approach will look as follows:

◈ Tweak will be applied
◈ Replication Task will run three times
◈ The median run time out of these three runs will be taken as the benchmark

2.6 Application Setup

On source System Level we have the Sale Table from the sample Data. Structure looks as follows:


Row count is about 12 Mill. records


On the Target System we have a standard Replication Task configured for Initial Load. No performance tweaks applied so far.


3. Initial Position


With the setup running and no tweaks performed so far, the run time for our RT01 Replication Task is at ~ 186.000 ms.


4. Performance Tweaks


4.1 On Source Database

I want to give you only a restricted view of the source database tweaks, as they always differ depending on your source database vendor.

Most of the tweaks here were already shipped with the demo data, but generally speaking you should consider the following tweaks:

◈ Create Indexes
◈ Create Statistics
◈ Create Database Views

4.1.1 Source Tweak 01 – Create an Index

Already shipped with the Demo Content are some Indexes created on the Table.


Especially important is the Index on the Primary Key. The PK should always be a Part of your Replication Task.

Here you have to contact your source System DBA and let them create your Indexes. Depending on the kind of source system the process differs.

4.1.2 Source Tweak 02 – Create Statistics

Also here, already shipped with the Demo content. We can see the Statistics on the Table.


Parallel to your index, your DBA should create up-to-date statistics. Creating statistics once will boost performance instantly, but over time the advantage will vanish if the statistics are not kept up to date. Therefore make sure that they are updated automatically.


4.1.3 Source Tweak 03 – Create Database Views

Use a database view to encapsulate only your business requirements within the view. Maybe you don't have to consume the full table but only some columns, maybe you need a JOIN or UNION to another table, you could filter the data directly in the view, and so on.

Filtering could also be done within the Replication Task. But having this kind of logic on the lowest Layer in our overall architecture is the best for your performance.

Databases send their queries through the query optimizer in order to ensure the best performance available. As these query optimizers are something of a black box, we have no insight into what happens inside.

You can pass along "hints" within the view to manually adjust the behaviour of the query optimizer. Also here you need the help of your DBA, as this task can get complex and varies by source system type.

Here I created an example database view that only takes into account the columns that I actually need for my (artificial) business scenario, as well as a filter. The view takes 12 out of the 21 columns and retrieves only half of the 12 million records.
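As a sketch of what such a view could look like on the MS SQL Server side (assuming the standard Fact.Sale structure of the WideWorldImportersDW sample; view name, column selection and filter are illustrative):

-- Hypothetical source-side view exposing only the columns and rows needed
CREATE VIEW Fact.v_Sale_ForReplication AS
SELECT [Sale Key], [Customer Key], [Invoice Date Key],
       [Quantity], [Unit Price], [Total Excluding Tax]
FROM   Fact.Sale
WHERE  [Sale Key] <= 6000000;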


I now created a simple Replication Task on top of this view and, guess what, it was quite fast. This only makes sense, as we have fewer columns and fewer rows than from the base "Sale" table.

The result is at ~ 44.000 ms.


Always consider the impact when you implement a database view. In our overall architecture this is another layer: it includes logic and needs to be maintained and developed. Often this development is out of scope for the project team, as it has to be done by the DBA.

Remember these organisational challenges.

4.1.4 Source Tweak – Run time

As already mentioned, in my case the source database tweaks had already been applied as part of the shipped demo content of the WideWorldImportersDW database, so we won't see much progress here.

If you don't have these on your source database and you apply these tweaks, you can expect a performance boost.

After applying these tweaks the run time of my RT01 Replication Task is at ~ 187.000 ms.


4.2 On SDI DP Agent

Let's have a look at our second layer – the SDI DP Agent.

A general rule of thumb is to place the DP Agent as close as possible to the source system, ideally installed on the same host as the source system. In practice I have seen more situations where this was not possible due to IT guidelines than situations where it was.

At the very least, the SDI DP Agent should be in the same subnet and geographically very close to the source system.

4.2.1 SDI DP Agent Tweak 01 – Setting the Heap Size

For very large initial loads please configure an adequate "-Xmx" setting in the "dpagent.ini" config file under "<SDI_DP_AGENT_INSTALL_DIR>\dataprovagent".

This setting really depends on the size of the tables that you want to load in an initial load scenario. If the tables contain millions of rows, a setting between 16 and 24 GB is recommended.


I would recommend that you play a little with this Parameter until it fits your purpose.

4.2.2 SDI DP Agent Tweak 02 – Adjusting Parameters

The general rule of thumb with regard to SAP parameters is: leave them as they are!

Only when you face an issue and the solution is to change a parameter should you change the parameter. Solutions are documented in the official SAP documentation, in SAP Notes, or provided by an SAP Support Engineer.

Tweak two is to verify two parameters in the "dpagentconfig.ini" file under "<SDI_DP_AGENT_INSTALL_DIR>\dataprovagent".

The parameters "framework.serializerPoolSize" and "framework.responsePoolSize" need to have a lower value than the "-Xmx" parameter.


4.2.3 SDI DP Agent Tweak 03 – Adjusting for partitions

When you use partitions within your Replication Task – and I encourage you to do so – this parameter comes into play.

It only has to be adjusted manually when you use a large number of partitions.

We will learn about partitioning in Replication Tasks later.

4.2.4 SDI DP Agent Tweak – Runtime

Honestly, I didn't expect a large performance increase after applying these tweaks. Why? I only run this one Replication Task and all the resources belong to it.

Setting Xmx values for Java-based processes always comes into play when we have heavy parallel processing, which we don't have while running these tests.

But in a productive environment with many large tables that you want to load in parallel, you will get a positive result out of these tweaks.

After applying these tweaks the run time of my RT01 Replication Task is at ~ 185.000 ms.


At the end of the day a slight improvement.

4.3 On Target HANA Database

Our final Layer is the Target System. Especially the Replication Task design is crucial here. By having the best Replication Task design we get major performance improvements.

4.3.1 Target Tweak 01 – Replication Task Partition

An absolute must-have! By using task partitioning you cut one large SQL statement into many small SQL statements. These will be handled in parallel by the source database and hence we gain greater performance.

I would recommend creating the Replication Task partitions based on the primary key. Why? Because in a well-fitting environment we will have an index and statistics on our primary key. In addition, the PK should always be part of the Replication Task. In that case the PK is our common thread through the whole workflow.

You have a lot of options when using Replication Task partitioning. I would recommend you check the SDI documentation for the scenario that fits best.

I usually start with a range partition set on the PK and a parallelization of 4.


What's the correct value for the parallel partitions? No one can tell you! Always keep in mind that those four SQL statements have to be joined into one at the end of the workflow, which takes some additional time.

Creating partitions is like tuning the SDI DP Agent parameters: you have to play a little with the values until you get a satisfying result.

My PK is a BIGINT and the max value is somewhere around 12 million. I created four partitions, splitting those 12 million records into groups looking like this:

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Study Materials

“PART_00” Partition fetches all Data until the “Sale Key” 4000000.

“PART_01” Partition fetches all Data from the “Sale Key” 4000001 until 8000000.

And so on…
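To make this a bit more tangible, the generated per-partition statements are essentially range-restricted SELECTs against the source table. A rough sketch (schema, table and boundary values are only illustrative, derived from the example above):

-- Illustrative shape of the generated per-partition statements
SELECT * FROM "<SCHEMA>"."<SOURCE_TABLE>" WHERE "Sale Key" BETWEEN 0 AND 4000000;         -- PART_00
SELECT * FROM "<SCHEMA>"."<SOURCE_TABLE>" WHERE "Sale Key" BETWEEN 4000001 AND 8000000;   -- PART_01
SELECT * FROM "<SCHEMA>"."<SOURCE_TABLE>" WHERE "Sale Key" BETWEEN 8000001 AND 12000000;  -- PART_02
-- ...and so on for the remaining partition(s)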

When I start the Replication Task, I can see the progress in my DP Task Monitor and also the SQL statements of the partitions.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Study Materials

Alright. Let's play a little with the partitions and run the RT01 Replication Task a few times.

Partition Count    Median Run Time
4                  ~ 111.000 ms
6                  ~ 93.000 ms
8                  ~ 94.000 ms
10 (*)             ~ 116.000 ms

(*) Remember Note 2362235 when having more than 8 Partitions!

As we can see, the run time increases again at a certain point. We get the best performance with six partitions; with more partitions the run time goes up, as we produce more overhead for the merge.

So let's stick to the six partitions with a run time of ~ 93.000 ms.

4.3.2 Target Tweak 02 – Create Statistics

Create Data statistic objects that allow the query optimizer to make better decisions for query plans.

And in our case we can even create statistics on Virtual Tables that will help us to improve the performance of our initial Load.

We run the following Statement in our HANA Database:

CREATE STATISTICS "<STATISTIC_NAME>" ON 
"<SCHEMA>"."<VIRTUAL_TABLE>" (<COLUMN>) 
TYPE HISTOGRAM 
REFRESH TYPE AUTO 
ENABLE ON 
INITIAL REFRESH
;

Here we follow our common thread: as the column, I recommend using your primary key, as we did on the DB layer with the index and the statistics.

REFRESH TYPE should be set to AUTO, as in this case HANA decides what the best refresh type is.

ENABLE ON and INITIAL REFRESH are by now the default options.
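As a concrete illustration for this scenario (statistic and object names are just examples), the statement could look like this:

CREATE STATISTICS "STAT_VT_SALE_SALEKEY" ON
"<SCHEMA>"."<VIRTUAL_TABLE>" ("Sale Key")
TYPE HISTOGRAM
REFRESH TYPE AUTO
ENABLE ON
INITIAL REFRESH
;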

Having set the statistics on our Virtual Table, the Run time of our RT01 Replication Task is at ~ 81.000 ms.

4.3.3 Target Tweak 03 – Target Table Partition

Partitioning the target table brings a performance gain because the SDI table writer can write the data in parallel into multiple partitions of the table.

Please be aware that this tweak speeds up the write processes but can come along with a performance degradation of the read processes. This is because query results need to be merged after running over multiple table partitions. Especially for simple queries, table partitioning can have a negative impact.

Therefore I highly recommend that you check the above Note.

In our scenario we will keep the target table partitioning in sync with our Replication Task partitions. The recommendation is to have the same number of target table partitions as Replication Task partitions and to use the PK. If you are running a scale-out scenario, it is recommended to distribute the partitions over all available nodes.

For the sake of this blog we will try two types of partitions: the HASH partition and the RANGE partition. The HASH partition is the easiest and fastest way to set up partitioning; the RANGE partition needs a slightly more complex setup.

SQL Syntax for setting up the HASH Partition:

ALTER TABLE "<SCHEMA>"."<TARGET_TABLE>"
PARTITION BY HASH ("<COLUMN>")
PARTITIONS <PARTITION_COUNT>
;

SQL Syntax for setting up the RANGE Partition:

ALTER TABLE "<SCHEMA>"."<TARGET_TABLE>" 
PARTITION BY RANGE ("<COLUMN>")
(USING DEFAULT STORAGE (
    PARTITION 1 <= VALUES < 100,
    PARTITION 100 <= VALUES < 200,
    PARTITION 200 <= VALUES < 300,
    PARTITION 300 <= VALUES < 400,
    PARTITION 400 <= VALUES < 500,
    PARTITION 500 <= VALUES < 600
    )
);

Result:

Partition Type    Median Run Time
HASH              ~ 69.000 ms
RANGE             ~ 67.000 ms

We can see that there is not much difference between the two partition types. In that case I would recommend using the HASH partition, as it is the one with the easier setup and it requires less maintenance.

For the sake of this blog I will use the RANGE partition, which results in a run time of our RT01 Replication Task of ~ 67.000 ms.

4.3.4 Target Tweak 04 – Using Filters

This one is quite obvious, as in chapter 4.1.3: right at the beginning, double-check your business requirements. Do you really require all columns from the source table in your target? Do you really require all rows from the source table in your target?

Clarify these questions with your business department in order to minimize the data transferred over the wire. The less data, the quicker; nothing new.

For demo purposes I added a filter to our RT01 Replication Task. I'm filtering the data based on a date field, in my case the “Invoice Date Key” field. I have demo data from 2012 to 2016 and I want to get all the data from the last three years (2014-2016).

So my Replication Task Filter looks like:

TO_DATE("Invoice Date Key") 
BETWEEN 
'2014-01-01' AND '2016-12-31'

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Study Materials

And (again) guess what: quite fast. The median run time was at ~ 2800 ms. This is due to the fact that we massively decreased the amount of data that we transfer over the wire. Instead of transferring our 12 mill. records, we transferred ~ 167.000 records.

So I really encourage you to use Replication Task filters.

For the sake of this blog I remove the filter again so that we are back on our 12 mill. records.

4.3.5 Target Tweak – Run time

Tweaking our Replication Task gave us the best results and the biggest share of the performance gain. I was expecting this result.

Replication Task partitioning, Replication Task filters and statistics on our virtual table are a must-have! Think twice about using target table partitioning; it could be a drawback for your analytical scenarios and processes.

After applying these Tweaks the Run time of my RT01 Replication Task is at ~ 67.000 ms

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Study Materials

5. Appendix


Additional Information related to this Topic.

5.1 SAP HANA SDI Timeout Parameter Settings

Timeouts are always an important setting. When you are dealing with long-running Replication Tasks, I recommend the above SAP Note to you. Especially the “dpserver.ini” timeouts are relevant for SDI.

Even after applying all these tweaks it is quite possible that your initial-load Replication Task runs for hours, simply because of the massive amount of data required by the business.

In any case, check the SDI timeout settings.

I recommend setting the prefetchTimeout parameter, but with a high value, something around 10800.
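For reference, such a timeout can be set with an ALTER SYSTEM statement. I am assuming here that prefetchTimeout belongs to the “framework” section of “dpserver.ini”; please verify the exact file and section against the SAP Note for your revision:

ALTER SYSTEM ALTER CONFIGURATION ('dpserver.ini', 'SYSTEM')
SET ('framework', 'prefetchTimeout') = '10800'
WITH RECONFIGURE;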

Smart Data Integration: Write Back to MS SQL Server

$
0
0
The number one use case for SDI is table replication from a non-SAP data source into SAP HANA tables. For some data sources it is not only supported to read data from them, it is also supported to write back to them.

This means having SAP HANA content as the source and writing the data into the 3rd-party data source. The source could be a table or a Calculation View, even a BW-generated one.

I will have a look at this feature called “Write Back”, using an SAP HANA database as the source and an MS SQL Server as the target.

Which 3rd Party Databases are supported for Write Back is documented in the SDI PAM.

1. Summary and Conclusion


The main use case for SDI is data replication from a non-SAP source into SAP HANA. For a couple of releases now it has also been possible, for certain 3rd-party databases, not only to read from them but also to write back into them.

This blog shows the setup of this use case as well as some obstacles you may face.

As a conclusion, we can determine that the default parameter setup shipped with the installation of SAP HANA is generally sufficient to write into a 3rd-party database.
When it comes to certain performance requirements, this default setup may not be sufficient; in that case you can apply this blog to your environment to fulfil them. As Write Back occasionally queues the data inside the DP Agent, increased main memory consumption of the DP Agent is to be expected.

2. Introduction


2.1 History

Version    Date          Adjustments
0.1        2018.12.21    Initial creation

2.2 Architecture

I am using the following architecture as test Environment:

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

As you can see, this is pretty straightforward. An SAP HANA 2.0 database will act as the source database. Inside my tenant I have the “Sale” table with ~ 12 mill. records.

I will use a Flow Graph as the object for the data transfer, because Replication Tasks currently do not support replicating data out of SAP HANA into a 3rd-party database.

The target is an MS SQL Server 2017, which also hosts the DP Agent.

2.3 Versioning

I am using the following versions of Software Applications.

Source System

◈ SAP HANA Database 2.0 SPS 03

Target System

◈ Microsoft SQL Server 2017 Developer Edition
◈ Using the WideWorldImportersDW sample Data
◈ SAP HANA SDI DP Agent in Version 2.0 SPS 03 Patch 4 (2.3.4)

2.4 Scope

This Blog will cover the following areas in this manner:

◈ Architecture challenges
◈ Flow Graph design
◈ Parameter Hints
◈ Performance

Everything else is out of scope in this Blog.

3. Initial Position


The initial Position looks as follows:

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

1. My Data Source transform has the “TBL01_FactSale” table as source. This is an SAP HANA column store table containing the ~ 12 mill. records. It is also supported to use Calculation Views as the data source. This is especially useful when you face a BWonHANA / BW/4HANA scenario; here you can use the generated Calculation Views as your data source.

In that case you maintain the Calculation View centrally via the BW layer.

2. A standard Projection. Just a 1:1 Field Mapping.

3. My Data target transform has a Virtual Table as its target. This is the only way to be able to write back into a supported 3rd party database.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

This means that you create your virtual table on top of your physical table, just as you do when reading from a 3rd-party data source into SAP HANA. In our special case we will not use the virtual table in our Data Source transform, we will use it in our Data Target transform.

To get better performance I configured the automatic Flow Graph partitioning with eight parallel partitions.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

3.1 Target Table Setup

When writing into a supported 3rd-party database we are not able to use template tables for table generation. First we have to know the structure of our target table; then we have to create the table manually in the target database.

In my case this was pretty easy. Source and target structure are the same. For easier handling we can use the “Generate Create” function in our Database explorer.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

We take this SQL command, adjust it to remove the SAP HANA specifics, and paste it into a SQL console of our target database, in this case the MS SQL Server Management Studio.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification
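As a sketch of what this adjustment can look like (the column list is shortened and the table and column names are only illustrative; use the full structure generated for your table), the statement pasted into the SQL console could be something like:

-- Illustrative excerpt of the adjusted CREATE TABLE for MS SQL Server
CREATE TABLE dbo.TBL01_FactSale_Target (
    [Sale Key]            BIGINT        NOT NULL,
    [Invoice Date Key]    DATE          NOT NULL,
    [Quantity]            INT           NOT NULL,
    [Total Excluding Tax] DECIMAL(18,2) NOT NULL
);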

Finally we are able to create a Virtual Table on top of this physical Table. This Virtual Table will be used as target in our Data Target Transform.
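The virtual table itself is created with the usual syntax; remote source, database, schema and table names below are placeholders:

CREATE VIRTUAL TABLE "<SCHEMA>"."VT_TBL01_FACTSALE_TARGET"
AT "<REMOTE_SOURCE>"."<DATABASE>"."dbo"."TBL01_FactSale_Target";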

3.2 Permissions on the target Database

In the usual setup, when we read data out of the source, read permissions are sufficient. Write permissions are always required in the target. In a standard SDI scenario SAP HANA is the target, so that is where we need the write permissions.

As, in our use case, the target is not a SAP HANA Database, it is the MS SQL Server, guess what?! Exactly! We need write permissions in our target.

At least on table level or, for easier maintenance, on Database level.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

4. Write Back


4.1 Initial Discoveries

Having everything set up as described in chapter three, I made these initial discoveries.

Despite the fact that I have configured eight partitions, the run time was (from my point of view) quite long.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

I went ahead and took a look into the Query Monitor, where I saw the following picture:

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

1. Execution time for 1000 rows was between 192 – 255 ms.

2. Per INSERT statement, via the virtual table, the adapter wrote exactly 1000 rows. Never more, never less. This is quite a different behavior compared to SELECT statements on a virtual table.

3. The INSERT statements are generated based on my Partition settings. Same as when reading. In that case we have eight INSERT Statements each writing 1000 rows.

4.2 Findings

The Note states that the 1000 row fetch situation was by design in older versions of SAP HANA. It has been fixed with SAP HANA 2.0 SPS 02 rev. 024.01.

On top of this, the behavior of the virtual table, when used for Write Back, is the following (statement taken from the above Note):

“[…]Fixed an issue, that writing to a remote virtual table was slower than reading from it, because on one hand the batch size was hard coded to 1000 and each batch was doing a prepare statement instead of caching it, and on the other hand no streaming was present, so each batch waited for reply before sending the next one. See SAP Note 2421290 for details.[…]”

As a matter of fact my deployment is on a higher release, but I still faced this issue.

After getting in contact with some colleagues, I learned that even with newer versions of SAP HANA the observed behavior is still the default.

Nevertheless, three parameters have been introduced with SAP HANA 2.0 SPS 02 rev. 024.01 which need to be activated.

4.3 Parameter settings

The three parameters mentioned in 4.2 are:

‘useWritebackStreaming’

This parameter ensures that the data is sent as a stream to the virtual table for Write Back instead of in batches.

‘cache_prepared_statements’

When this parameter is activated, SAP HANA will not prepare the SQL statement again each time.

‘prepared_statement_fetch_size’

Here we can adjust the 1000-row batch size in order to write more or even fewer rows per batch.

4.3.1 Setting useWritebackStreaming

First I set the ‘useWritebackStreaming‘ parameter on my SAP HANA database.

ALTER SYSTEM ALTER CONFIGURATION ('dpserver.ini', 'SYSTEM') 
SET ('framework', 'useWritebackStreaming') = 'true' 
WITH RECONFIGURE;
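To verify that the setting is active, a quick check against the monitoring view M_INIFILE_CONTENTS can be used (the same check also works for the two parameters in the following chapters):

SELECT FILE_NAME, SECTION, KEY, VALUE
FROM "SYS"."M_INIFILE_CONTENTS"
WHERE KEY = 'useWritebackStreaming';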

After setting this parameter the behavior of my Flow Graph looks as follows:

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

In the first place it doesn’t look like a big difference but…

1. Although I configured eight partitions in the Flow Graph, I observed that sometimes fewer parallel INSERT statements were running.

I believe that this is due to the fact that the adapter is now continuously streaming the data, and this is not ideally reflected in the query monitoring of the SAP HANA Studio.

At the end of the day we have a slight improvement with regards to the run time:

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

4.3.2 Setting cache_prepared_statements

Now I set the second parameter, ‘cache_prepared_statements‘, on my SAP HANA database.

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') 
SET ('smart_data_integration', 'cache_prepared_statements') = 'true' 
WITH RECONFIGURE;

After setting this parameter the behavior of my Flow Graph looks as follows:

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

1. Check the Execution Time for each 1000 row chunk. Much faster right now. Beforehand we had Execution Times up to 290 ms. Now we have them narrowed down to something between 39 – 58 ms.

This is due to the fact that HANA doesn't prepare the statement each time. It does it once and caches it afterwards.

2. Same situation as in chapter 4.3.1.

Now we end up much faster:

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

4.3.3 Setting prepared_statement_fetch_size

Now I set the third parameter, ‘prepared_statement_fetch_size‘, on my SAP HANA database.

I will set it in four waves to check the results: first with 2000 rows, then with 5000, 7000 and 10000 rows. Previously we used the default of 1000 rows.

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') 
SET ('smart_data_integration', 'prepared_statement_fetch_size') = 'XXXXX' 
WITH RECONFIGURE;
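For example, the value that turned out best in my tests (5000 rows, see the results below) would be set like this:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('smart_data_integration', 'prepared_statement_fetch_size') = '5000'
WITH RECONFIGURE;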

The results are

Fetch size in rows    Execution time per chunk    Total execution time of the FG
1000                  28 – 45 ms                  376754 ms
2000                  62 – 107 ms                 377363 ms
5000                  105 – 212 ms                374468 ms
7000                  206 – 347 ms                383563 ms
10000                 290 – 425 ms                412557 ms

We can see that a higher fetch size leads to higher execution times per chunk; the total execution time, however, only starts to grow noticeably beyond 5000 rows.

In our test we get the best performance with a fetch size of 5000 rows, even better than with the default of 1000 rows.

This shows us that there is no general rule of thumb when adjusting this parameter. You have to play a bit with it in your environment.

4.4 Target database Write speed

4.4.1 Write speed

The write time should stay below the roundtrip time that we see in chapter 4.3.2. Otherwise we will end up with queueing events (4.4.2) and wait events (4.4.3).

I checked the average write speed for my database file using the MS SQL Server Management Studio.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

4.4.2 DP Agent queueing events

After setting the Write Back optimized parameters, SDI sends the data to the target database pretty fast. If the target database is not capable of writing the records equally fast, this will end up in queueing situations inside the DP Agent.

2018-12-04 03:24:21,198 [INFO ] DPFramework | ResponseQueue.run – AHRQ(176): Stopped
2018-12-04 03:24:21,198 [INFO ] DPFramework | ResponseQueue.run – AHRQ(177): Stopped
2018-12-04 03:24:21,198 [INFO ] DPFramework | ResponseQueue.close – AHRQ(176,177): [Pool: max=1.71gb,cur=1.18kb,peak=3.56kb,tot=6.01mb; Queue: cur=0(0b),peak=2(1.76kb),tot=501(293.55kb)]

What you see above is a snippet from the framework trace file. In our case we don't have any queueing events because the target database was able to write the records fast enough.

If you face queueing events, check the framework trace file as well as the DP Agent process, ideally with the “Process Explorer” tool on Windows or the “top” command on Unix. While queueing, the DP Agent spawns a lot of processes and its main memory consumption increases.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

4.4.3 MS SQL Wait events

As discussed in 4.4.1, wait events happen when the target database is not capable of writing the data fast enough. In the case of MS SQL Server you can check the Activity Monitor in the Management Studio.

MS SQL Server, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

1. Here you see an overall view of all wait events throughout the whole SQL Server instance. The fewer, the better.

2. Running processes of the SQL Server. Here you see our DP Agent connections.

3. Database file I/O overview. Here you can see the read/write speed of the database files as well as their response times. The lower, the better.

5. Appendix


Additional Information related to this Topic.

5.1 Enhance performance

By applying these Write Back optimized parameters we were able to increase the performance. There are still features and functions available to increase the performance even further.

Understand key watch outs and mitigations for your SAP S/4HANA Central Finance program

$
0
0
In this blog I have captured some of these key learnings and challenges along with potential mitigations, based on our CFIN client implementation and Proof of Concept (POC) experiences. I will classify these experiences and learnings into the four primary categories below:

I. Project Plan and Approach
II. Solution Architecture
III. Product Issues
IV. Skills and Resources

I. Project Plan and Approach


• CFIN should always be planned as an interim or medium term solution in the roadmap for transformation to a single Finance instance with S/4HANA. Also in the long term the S/4HANA CFIN instance can be leveraged to build and deploy a global template for Finance transaction processing in S/4HANA
• Harmonized and optimized enterprise structure and master data design are key foundation for a scalable CFIN solution. Sufficient time needs to be provisioned during design phase of CFIN for these activities.

• Co-deployment of CFIN and Global template build introduces dependencies and additional complexities which need to be considered while preparing the project plan.

• Realization of non-standard functionalities in CFIN is development heavy and scope needs to be tightly managed. Also extensive involvement from SAP for product support needs to be planned for such non standard functionalities

• SLT (System Landscape Transformation) does not support all non-SAP databases for real-time replication to the CFIN system. Hence a POC is recommended at an early stage of a CFIN program to evaluate alternate tools like BODS (Data Services), webMethods etc. to be used for replication of documents to the CFIN system from non-SAP source systems/databases not supported by SLT

II. Solution Architecture


• Based on the current product features in S/4HANA 1610 Enterprise Management (EM), CFIN solution can primarily be leveraged for below functions:

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certification

o Enterprise level financial reporting at a detailed line item level for heterogeneous landscapes
o Centralized group wide reporting and monitoring functions like credit management, financial planning, financial consolidation etc.

It cannot be leveraged for centralized transaction processing scenarios like Centralized Invoice Processing, Central Period End Closing, Central Asset Accounting etc. in a distributed Finance landscape. The primary reason is that no reverse postings are possible from the CFIN system to the source systems. Also, an open item replicated from a source system to the CFIN system cannot be used in subsequent follow-up transactions like Invoice, Clearing, Payment etc. in the CFIN system. This is planned in a subsequent release of S/4HANA EM

• Finance Enterprise structure elements (company code, controlling area, operating concern etc.) should be optimized, harmonized and finalized at the start of a CFIN program. This would avoid a lot of rework at later stages of a finance transformation journey

• Since the primary benefit of CFIN is integrated financial reporting for a heterogeneous landscape, the decision on the reporting framework and structures, as per statutory and management requirements, needs to be made at an early stage of the CFIN design phase

• Design decisions on key Finance foundational elements like Document splitting, Parallel ledgers, Account Based COPA etc. needs to be taken during CFIN design phase. Most of these design decisions are irreversible and hence need to be taken after a detailed analysis, since a lot of complexity and efforts are involved for any retrospective change in future.

• Guiding principles for master data transformation and cleansing to be defined at early stage of CFIN program. CFIN solution with MDG (Master Data Governance) should be leveraged for harmonizing Finance master data objects like profit centers, cost centers, Chart Of Accounts from decentralized systems. This will avoid the added complexity and cost involved in harmonization of master data in a later stage of a Finance transformation program

• Fixed asset postings from source system can only be replicated to a General Ledger account in CFIN system and not to Asset sub ledger. Due to this asset based reporting is not possible in CFIN and needs to be considered while finalizing the solution design

• With multiple OSS notes being released every month by SAP on CFIN, upgrading the target CFIN and source systems to the latest support package or installing the latest version of the OSS notes is a must.

• Data migration strategy for historical data load should be restricted to only GL balances and not line items since the initial load of FI documents in CFIN system is not triggered through SLT but via RFC to source systems. Line items load could result in performance issues in such a situation

III. Product Challenges


• SLT does not support out of box Master data replication from source to CFIN systems. Recommended to evaluate other tools like BODS (Data Services) for master data replication with additional developments

• Replication of project data (WBS elements) is not supported out of the box. Custom development is required in SLT and the S/4HANA CFIN system for replication of these objects on a real-time basis

• Challenges in out of box replication and transformation of Costing based COPA data from source systems to Account based COPA in target CFIN system. Custom enhancements required to support replication of Costing Based COPA documents from source to Account Based COPA in target CFIN system

• Replication of documents without document splitting activated in source system to New GL with document splitting in target CFIN systems is not supported out of the box and requires complex custom development

• Although standard reconciliation reports have been introduced for CFIN in S/4HANA 1610 EM, based on our experience these are not robust and user friendly. Also, they do not support non-SAP source systems. Infosys proprietary reconciliation tools can be leveraged for improved auditability and traceability in CFIN

• Lack of data validation and integrity check for business mapping maintained in MDG. If certain data (e.g. UoM) is not created in CFIN system but mapped in MDG, document is still successfully replicated in CFIN system without giving any error.

• GL open items from open item managed GL accounts are not replicated as line items during initial load. This is a product gap which was corrected by SAP through a customer specific OSS note

• Replication of reversals and clearings of documents where original document is missing is not supported out of box in CFIN. SAP product support required to resolve this issue or manual workaround required to manage these postings during balances take over

IV. Skills and Resources


• With new scenarios and features being added continuously by SAP in CFIN and with not many enterprise level customers as early adopters, it is very important to leverage SAP Customer care, Value assurance and other services from SAP throughout a CFIN program for rapid resolution of product issues

• Adequate representation required from Business and internal IS in early phase of the project to support organizational design and learn the new technologies and solutions. This will mitigate the risk associated with change management at a later stage

• The required technical skillsets within IS in niche areas such as BODS (Data Services), MDG (Master Data Governance), SLT (System Landscape Transformation), Fiori, embedded BI reporting , HANA CDS view design etc. need to be identified and provisioned upfront in a CFIN program 

Introducing the NDSO: Part One – How to create a NDSO

$
0
0

Motivation


When establishing an Enterprise Data Warehouse (EDW) there are two fundamental approaches which can be taken. Either an application driven approach can be chosen by opting for a solution which covers all tasks to be performed in an EDW out-of-the-box or the data modelling is natively performed directly on the database with a collection of separate tools to support the deployment and operations of an EDW.

With SAP BW/4HANA supporting the application driven approach and the SAP HANA Data Warehouse Foundation for native deployments of EDW scenarios customers can individually choose which solution is the best fit for a given scenario. In some cases a mix of both can also be a good option. Integration is key in those mixed scenarios.


SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides

Hence, this blog is about the integration of both solutions and how they work together. In part one the focus is on introducing the native DSO (NDSO) object which was released with the Data Warehousing Foundation 2.0 on SAP HANA 2.0 whereas part two covers the integration of the NDSO with SAP BW/4HANA.

Introduction


Data cleansing and consolidation with robust delta and rollback mechanisms is an important task of an EDW. The central object in BW/4HANA is the advanced DSO, which delivers such capabilities out of the box. But how are developers supported with native developments?

The NDSO is a native HANA SQL object built upon the SAP HANA XS advanced application server framework. It can easily be implemented in the SAP HANA Web IDE by using the template HDB module provided by the SAP HANA Data Warehousing Foundation 2.0, which can be taken as a blueprint for individual NDSO artifacts.

From a design-time perspective, its structure is similar to an Advanced DSO in SAP BW/4HANA, with three tables to persist the data and calculate the delta (Active Data table, Change Log, Inbound Queue). Each table is exposed as a CDS entity. Thus, tools like the SAP EIM Flowgraph can use these entities to load data into or extract data from an NDSO.

In addition to that, the NDSO:

◈ supports the data merge process of delta and full data load requests
◈ supports the process to handle records that are marked for deletion within the delta data sets
◈ is an addition to, not a replacement of, the SAP HANA certified ETL/ELT tools and works together with them

Last but not least, the NDSO offers multiple ways to interoperate between an SAP HANA SQL (native) data warehouse and SAP BW systems.

There is plenty of detailed information available and links can be found at the end of this blog.

Let’s take a closer look at the major steps to create such an object.

◈ Major steps to create an NDSO.

The most straightforward way to create an NDSO is to open a new project in the SAP Web IDE workspace and choose the template provided:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides

◈ The NDSO is implemented as a multi-target application (MTA) consisting of three modules:
◈ a HANA database module providing a sample NDSO, including a flowgraph to load some sample data
◈ a node.js module which contains the backend logic
◈ a Data Warehousing Foundation module

It is also possible to enable an existing MTA project to leverage the NDSO template.

A wizard guides the developer through the subsequent steps, i.e. giving the project a name, determining security, and assigning the project to a dedicated DB space.

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides

In this case the name of the NDSO generated from the template is ‘sales_acquisition’. The picture shows the structure including the three modules (purple arrows) and the sample structure of the DSO in the SAP Web UI. This object can be taken and adapted, or a separate NDSO can be created from scratch.

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides

The template also comes with a flowgraph to load some test data.

The Manage UI, as part of HRTT (HANA Runtime Tools/Database Browser), is the central tool for managing loads and activations for the NDSO. Having started the flowgraph to load the test data, the request can be seen in the administration UI for this NDSO, ready for activation.

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides

After activation, the data is available for reporting.

Implementing Dynamic Currency Conversion Using Calculated Column Using Semantic Type Amount with Currency Code in SAP HANA

$
0
0

Objective


Currency conversion is an important aspect of any data modelling and reporting software, and SAP HANA provides functionality for it. Below we will see the step-by-step process of how currency conversion takes place in SAP HANA. This article is completely based on my learning experience. I hope this article will be useful for those consultants who are new to SAP HANA.

Prerequisite

1. Make sure the currency-conversion-related tables (TCUR*) are present in your schema and have data.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

2. Make sure Sales order header and item tables VBAK and VBAP have been loaded into your HANA system.

TABLE NAME    DESCRIPTION
VBAK          Sales Document: Header Data
VBAP          Sales Document: Item Data

Step by Step Implementation


1. Let us start by creating a “Calculation View” in the HANA system based on VBAK table.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

2.  Provide name and label. Make sure to check the “With Star Join” option.

Note: This can be done with a “non-star join” type of calculation view or with an “Analytical View” as well, but for this example we are using a star join.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

By default, you will see that a “Star Join” node and a “Semantics” node appear.

3. Now, insert a “Projection”. Click on the plus icon to add the table to the projection node and join it to the star join node. After that, click on “Add Objects” in the projection to add a table.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Select Table VBAK and click OK

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Select the fields in the VBAK table as shown below:

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Apply a filter on the field WAERK to only bring “EUR”

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Select the value EUR and then click OK

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live
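Alternatively, the same restriction can be entered as a filter expression on the projection (WAERK is the currency key column of VBAK):

("WAERK" = 'EUR')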

4. Now, we need to create a few “Input Parameters”; let's start with the first one.

Create a new “Input Parameter” in the projection node

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Provide the necessary fields like “Name” and “Label” (DATE).

Select the “Parameter Type” as Direct

Select the “Default Value” as “Expression” and give the “Value” as format(now(),‘YYYYMMDD’)

This syntax will produce the current date when executed.

Also, select both the “Semantic Type” and “Data Type” as Date

After this Click OK

This will help us pick the date on which we want to get currency conversion.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

5. Again create another “Input Parameter”. This input parameter is for the “Target” currency.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Provide the necessary details like name and label (ZCURRENCY)

Select Parameter types as Column

Now, in the “Column” section select “View/Table for value help” as the current view only and the “reference column” as WAERK

Click OK

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

6. Now, create a new “Calculated Column” on the same “Star Join” node

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Provide the necessary fields like “Name” and “Label”.

Make sure to provide the “Data Type” as DECIMAL and “Column Type” as MEASURE.

Give the expression syntax and validate it:
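In this example the expression is simply the source amount column itself. I am assuming here that the net value column NETWR from VBAK is among the selected fields; adjust this to the amount column you actually want to convert:

"NETWR"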

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

After that, click on the “Semantics” tab and select the semantic type “Amount with Currency Code”

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

And then click on the “Conversion” check box and then select the “Currency” as “<calculated column>_CURRENCY”. In this case, this will be “TARGET_CURRENCY_CURRENCY”.

After that click on the “Schema for currency conversion” and select the appropriate schema.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Select the schema under which the base tables exist and click OK

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Provide all details as shown and then click OK

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Source Currency: here we select the currency column from the source VBAK table.

Target Currency: here we select the “ZCURRENCY” input parameter, as this is the value the user will provide.

Exchange Type: we select “M”, as this ensures that the daily transaction rates are used.

Conversion Date: here we select the “DATE” input parameter. This gives the user the flexibility to choose the date for which the exchange rate should be applied.

Exchange Rate: This we again need to select the appropriate column in which the exchange rates are being stored. In this case, it is VKORG

7. Now we click on Semantics, check whether all the columns are reflected, and then validate and activate the calculation view

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Execution:

Click on data preview.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Provide values for the two “Input Parameters” as created.

Here we are changing the currency from EUR to USD, so we set ZCURRENCY to “USD” and the conversion will happen based on the date given.

Click OK.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Click on raw data to view the entire data.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

Here we can see the output values with the TARGET_CURRENCY column.

SAP HANA Study Material, SAP HANA Guides, SAP HANA Tutorial and Material, SAP HANA Learning, SAP HANA Live

How to transport Calculation views with HTA with all Dependent objects in “One Click”

$
0
0
This blog explains our findings and our solution to transport HANA calculation views with their dependent objects via HTA, using classical transports.

Our main challenge was to move our HANA development using HTA, as we decided to apply the LSA++ concept to our HANA modelling by using harmonization layers, transformation layers and a data access layer.

Our solution – still subject to change:


We used layers for this solution:

◈ HANA layer to retrieve all dependent objects for a calculation view, already used by us to get an understanding of all objects used in a model.
◈ BW ABAP layer for transport management and HTA


Hana layer:


To display the dependencies, the following calculation view has been created:

This view is used, among other things, to identify all dependent views and tables of a master view

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Semantic Layer:

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Calculated column:

on AO projection: AO_DEPENDENT_OBJECT_NAME

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Filter on AO Projection:
("PACKAGE_ID" ='$$Package_Name$$') AND ("OBJECT_NAME" ='$$CV_Name$$') AND ("OBJECT_SUFFIX" ='calculationview')

Filter on OD Projection:
("DEPENDENT_SCHEMA_NAME" ='_SYS_BIC') AND ("DEPENDENT_OBJECT_TYPE" ='VIEW')

Variables:

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Result:

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials
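For reference, a plain-SQL sketch of the same dependency lookup, assuming the view essentially joins the repository table _SYS_REPO.ACTIVE_OBJECT with the system view SYS.OBJECT_DEPENDENCIES (this is just the idea, not the actual view definition):

-- All objects the activated calculation view depends on
SELECT od.BASE_SCHEMA_NAME,
       od.BASE_OBJECT_NAME,
       od.BASE_OBJECT_TYPE,
       ao.PACKAGE_ID,
       ao.OBJECT_NAME
  FROM "_SYS_REPO"."ACTIVE_OBJECT" AS ao
  JOIN "SYS"."OBJECT_DEPENDENCIES" AS od
    ON od.DEPENDENT_OBJECT_NAME = ao.PACKAGE_ID || '/' || ao.OBJECT_NAME
 WHERE ao.PACKAGE_ID    = '<Package_Name>'
   AND ao.OBJECT_NAME   = '<CV_Name>'
   AND ao.OBJECT_SUFFIX = 'calculationview'
   AND od.DEPENDENT_SCHEMA_NAME = '_SYS_BIC'
   AND od.DEPENDENT_OBJECT_TYPE = 'VIEW';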

BW Layer:


External view based on Calculation view Dependency

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

We decided to use the standard program SCTS_HOTA_ORGANIZER and add an “automatic” selection of all dependent objects.

The standard program was not good enough for us: if a model consumes 10 views from different packages, the views need to be picked one by one.

The standard program has been duplicated and the following changes added (the lines where code has been added are marked in the code below):

Line 6:
      gv_depl      TYPE c,
      gv_pack(255) TYPE c.
***** START ADD fields 
DATA: lt_tab2 TYPE TABLE OF ZTCT_DEP,
      lt_pack TYPE TABLE OF ZTCT_DEP,
      lt_table TYPE TABLE OF ZTCT_DEP,
      lt_views TYPE TABLE OF ZTCT_DEP.
DATA: lv_sql TYPE string.
DATA: lo_sql_stmt   TYPE REF TO cl_sql_statement,
      lo_result    TYPE REF TO cl_sql_result_set,
      o_ref_kna1  TYPE REF TO data,
      o_ref_mandt TYPE REF TO data,
      o_ref_name1 TYPE REF TO data,
      lr_data         TYPE REF TO data,
      lx_sql_exc TYPE REF TO cx_sql_exception,
      lpv_dep     type abap_bool,
      lpv_sync     type abap_bool,
      l_tab type abap_bool .
***** END ADD fields

INTERFACE lif_data_provider.

Line 2336:
*... preselect packages

  data: l_tabix like sy-tabix.      "Add DDU
    LOOP AT mt_master REFERENCE INTO DATA(lr_master).
**** Begin Add/Change - Look for Dependencies
     if lpv_dep = 'X'.     "Check dependencies
      If  lpv_sync = 'X'.  "Check if already sync - assign to transport
        IF lr_master->sync_deploy_state = icon_led_yellow.
           l_tabix = sy-tabix.
           READ TABLE LT_VIEWS   WITH KEY PACKAGE_ID = lr_master->PACKAGE_ID INTO DATA(l_pack).
           if sy-subrc = 0.
             APPEND l_tabix TO lt_rows.
           endif.
        ENDIF.
      else.
         l_tabix = sy-tabix.
         READ TABLE LT_VIEWS   WITH KEY PACKAGE_ID = lr_master->PACKAGE_ID INTO DATA(l_pack2).
         if sy-subrc = 0.
           APPEND l_tabix TO lt_rows.
         endif.
      ENDIF.
    else.

**** End Add/Change - Look for Dependencies

     IF lr_master->sync_deploy_state = icon_led_yellow.
        APPEND sy-tabix TO lt_rows.
      ENDIF.
    endif.

Line 2379

*... preselect objects
    CLEAR lt_rows.
***** list of all CV for package
      LOOP AT mt_slave REFERENCE INTO DATA(lr_slave).
**** Begin

     if lpv_dep = 'X'.     "Check dependencies
      If  lpv_sync = 'X'.  "Check if already sync
        IF lr_slave->sync_deploy_state = icon_led_yellow.
           l_tabix = sy-tabix.
            READ TABLE LT_VIEWS  WITH   KEY   BASE_OBJECT_NAME = lr_slave->OBJECT_NAME PACKAGE_ID = lr_slave->HANA_PACKAGE_ID INTO DATA(l_pack3).
           if sy-subrc = 0.
             APPEND l_tabix TO lt_rows.
           endif.
        ENDIF.
      else.
         l_tabix = sy-tabix.
          READ TABLE LT_VIEWS  WITH   KEY   BASE_OBJECT_NAME = lr_slave->OBJECT_NAME PACKAGE_ID = lr_slave->HANA_PACKAGE_ID INTO DATA(l_pack4).
         if sy-subrc = 0.
           APPEND l_tabix TO lt_rows.
         endif.
      ENDIF.
    else.

      IF lr_slave->sync_deploy_state = icon_led_yellow.
        APPEND sy-tabix TO lt_rows.
      ENDIF.
    endif.
**** End Add/Change - Look for Dependencies
     ENDLOOP.

***** Begin Add
TYPES: BEGIN OF S_ITAB,
  LINE(255),
END OF S_ITAB.

DATA: T_ITAB TYPE TABLE OF S_ITAB.

DATA: WA_S_ITAB TYPE S_ITAB.
if l_tab = ''.
  clear WA_S_ITAB.
LOOP AT lt_table ASSIGNING FIELD-SYMBOL(<fs>).
  WA_S_ITAB-line = <fs>-BASE_OBJECT_NAME.
  APPEND WA_S_ITAB to T_ITAB.
ENDLOOP.


***** Display list of table to be replicated in SLT
    CALL FUNCTION 'POPUP_WITH_TABLE'
      EXPORTING
        ENDPOS_COL         = 100
        ENDPOS_ROW         = 100
        STARTPOS_COL       = 10
        STARTPOS_ROW       = 10
        TITLETEXT          = 'List of SLT Tables'
*     IMPORTING
*       CHOICE             =
      TABLES
        VALUETAB           =  T_ITAB
     EXCEPTIONS
       BREAK_OFF          = 1
       OTHERS             = 2
              .
    IF SY-SUBRC <> 0.
*     Implement suitable error handling here
    ENDIF.
l_tab = 'X'.
else.
  l_tab = ''.

endif.
***** End Add  
    TRY.
        lr_selections = mr_hierseq->get_selections( 2 ).
        lr_selections->set_selected_rows( lt_rows ).

Line 4919
 SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE lv_title.
  PARAMETERS:
    pv_pack TYPE cts_hot_package-hana_package_id LOWER CASE MEMORY ID hta_pack DEFAULT 'XXXXXX',               "add DEFAULT  value
    pv_sub  TYPE abap_bool AS CHECKBOX DEFAULT abap_true MEMORY ID hta_sub, "handle ALL subpackages as well  "Change to abap_true
    pv_pcv TYPE cts_hot_package-hana_package_id LOWER CASE MEMORY ID hta_packcv,                             "Add - Package of CV to Move
    pv_cv TYPE CTS_HOT_OBJECT-ABAP_HANA_OBJECT_NAME_SUFFIX LOWER CASE MEMORY ID hta_view,                    "Add - Calculation view - Father
    pv_dep   TYPE abap_bool AS CHECKBOX DEFAULT abap_true,                                                   "Add - With Dependency?
    pv_sync   TYPE abap_bool AS CHECKBOX DEFAULT abap_true .                                                 "Add - Only object not yet Sync
  SELECTION-SCREEN END OF BLOCK b1.

Line 4945
AT SELECTION-SCREEN.
*** Beging Add
  lpv_dep = pv_dep.
  lpv_sync = pv_sync.
*** End Add
  CASE sy-ucomm.
    WHEN 'AUSF'.
*** BEGIN - ADD SELECT from dependencies
      l_tab = 'X'.
      if lpv_dep = 'X'.

      TRY.
          lv_sql = | SELECT BASE_SCHEMA_NAME, BASE_OBJECT_NAME, DEPENDENT_OBJECT_NAME, VERSION_ID, ACTIVATED_BY, OBJECT_NAME, OBJECT_SUFFIX, PACKAGE_ID, BASE_OBJECT_TYPE2, ACTIVATED_AT2 |
*               use HANA built-in function
                && |   FROM _SYS_BIC."NAME OF CV_EXT_DEPENDENCY" |
                && | ('PLACEHOLDER' = ('$$CV_Name$$', '{ pv_cv }'), 'PLACEHOLDER' = ('$$Package_Name$$', '{ pv_pcv }')) |.

*         Create an SQL statement to be executed via default secondary DB connection
          CREATE OBJECT lo_sql_stmt EXPORTING con_ref = cl_sql_connection=>get_connection( ).
*         execute the native SQL query/ SQL Call
          lo_result = NEW cl_sql_statement( )->execute_query( lv_sql ).   " new syntax
*         read the result into the internal table lt_partner
          GET REFERENCE OF lt_tab2 INTO lr_data.
    lo_result->set_param_table( lr_data ).  "Retrieve result of native SQL call
    lo_result->next_package( ).
    lo_result->close( ).
        CATCH cx_sql_exception INTO lx_sql_exc.
          "lv_text = lx_sql_exc->get_text( ).
          "MESSAGE lv_text TYPE ‘E’.
      ENDTRY.
     data: l_pack(40), l_cv(40).
     LOOP AT lt_tab2 ASSIGNING FIELD-SYMBOL(<fs>).
      if <fs>-BASE_OBJECT_TYPE2 = 'VIEW'.
        SPLIT <fs>-BASE_OBJECT_NAME AT '/'   INTO l_pack l_cv.
        <fs>-PACKAGE_ID = l_pack.
        <fs>-BASE_OBJECT_NAME = l_cv.
        append <fs> to  lt_views.
        <fs>-PACKAGE_ID = pv_pcv.
        <fs>-BASE_OBJECT_NAME = pv_cv.
        append <fs> to  lt_views.
      ELSE.
        append <fs> to  lt_table.
      endif.
     endloop.
****** END - ADD SELECT from dependencies
     endif.
      IF pv_pack IS INITIAL OR pv_pack = '*'.

Text Element:

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

Execution:

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

All views are selected automatically

The transport is requested; all calculation views are now assigned to the transport and can be moved to QA.

The following list is also provided, showing the tables that need to be replicated.

SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Learning, SAP HANA Study Materials

HANA Solution approach to implement different Business Use Case Part 1

$
0
0
In this blog I am going to discuss the HANA solution approaches to implement different scenarios, i.e. on which basis we choose native HANA, BW on HANA, SAP Business Suite on HANA etc. to implement a client use case.

Background :


In different project implementations you will face different scenarios that need to be implemented on the HANA platform. To implement them, you have to know the implementation approaches available in HANA and how to determine which implementation approach is applicable for which real-life project scenario, in other words, on which basis we choose native HANA, BW on HANA, SAP Business Suite on HANA etc. to implement a client use case.

In this blog we will discuss the different HANA implementation approaches and the techniques to determine the right approach.

Different HANA implementation scenarios:



There are mainly three types of implementation scenarios:

1. Replication scenario: this type of scenario is mainly known as the “side-by-side” or “sidecar” scenario, where HANA acts as a secondary DB and data is replicated from the primary database (which may be SAP ECC or another non-SAP system). Data replication is done by replication/ETL tools like SLT, BODS or DXC.

(For example: a native HANA implementation is an example of a replication scenario.)

2. Integration scenario: in an integration scenario SAP HANA acts as the primary database for an ECC system. Replication to a secondary database is not required. We can perform both read and write operations on the database.

(For example: BW on HANA is an example of an integration scenario.)

3. Transformation scenario: similar to the integration scenario, in a transformation scenario SAP HANA acts as the primary DB, but it is not intended to provide a base for SAP BW or other related applications. Here it becomes the foundation for disruptive applications. Disruptive applications are applications that change the rules of the game.

# Replication scenario: this can be further divided into 7 sub-categories. Once we have decided that, for a particular client use case, the replication scenario is the best fit, we then have to choose one particular replication sub-scenario among the 7. I am describing each one below:

◈ Data Mart Scenario:

In this scenario SAP HANA acts as a data mart or a subset of a data warehouse. The data is fed via batch flows. The data flow from the other database to SAP HANA is a one-way data flow. The data mart can hold redundant data.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

◈ App scenario: we can define this scenario as the data mart scenario plus real time. We use this scenario when up-to-date data is important; here, reaction time is the key. Data is fed directly from the application to the HANA DB, and the data does not necessarily need to be written into the source database first.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

◈ Content scenario: from an architectural perspective, the content scenario is a combination of the app and the data mart scenarios. The difference is that in the content scenario the required data flows and reports are provided by SAP in a ready-to-use format. The limitation of this scenario is that data can only be obtained from SAP applications, mainly the SAP Business Suite.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

◈ Accelerator scenario: in this scenario the HANA system sends data or messages to an application rather than to the user. The target can be the source system or HANA itself. An example is any SAP application that calls a customer exit: when a credit limit check is performed, the accelerator is called via the customer exit.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

◈ Cloud on SAP HANA: this implementation scenario is an architectural variant. It is used when the SAP HANA DB is available via Amazon (cloud) services instead of an own licence, for example Recall Plus.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

◈ SAP Business One Analysis: in this scenario, reports on the SAP Business One solution do not read the data from the application but from a mirror in the HANA DB, similar to the content scenario.

# Integration scenario: in this scenario SAP HANA becomes the primary database, and not only read operations but also write operations are performed on the database; SAP HANA does not just hold a copy of the relevant data. After deciding that the use case needs to be implemented as an integration scenario, we have to choose one sub-type among the three integration scenarios below.

◈ SAP Business Suite on SAP HANA: the SAP Business Suite and its core components like CRM are implemented on top of SAP HANA. Here SAP HANA becomes the primary database.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

◈ SAP Business One on SAP HANA: SAP Business One is embedded with the SAP HANA database and analytical tools to serve small and medium-sized businesses. Here SAP HANA replaces the previously used Microsoft database.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

◈ SAP BW on HANA: SAP BW is implemented on top of SAP HANA and HANA acts as the primary database. This type of scenario is needed when write operations are required along with read operations.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

# Transformation Scenario:


With the transformation scenario, SAP HANA becomes the foundation for what we call disruptive applications, i.e. applications which create new markets, change the rules and change the game. As of now there is only one transformation scenario, called SAP HANA apps.

SAP HANA Tutorial and Material, SAP HANA Certification, SAP HANA Guides, SAP HANA Learning

Develop a full-stack multi module business application(MTA) by using java as middle-ware

$
0
0
In this blog, I will explain the step-by-step process to create a multi-module business application (MTA) using Java as the middleware component with the help of SAP Web IDE Full-Stack; the application will be deployed to a Cloud Foundry trial account.

Functionality of the Application :- Here I am going to display the list of employees of an organization.

Technical Architecture : –


SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

Prerequisites: –


1. Access to SAP Web IDE Full-Stack.
2. Enable SAP HANA Database Development Tools, Tools for Java Development, SAP HANA Database Explorer and SAP Cloud Platform Business Application Development Tools features in Web IDE.
3. Enable Cloud Foundry in the SCP trial account.
4. Create Space and Organisation in the Cloud Foundry account.
5. Configure the Cloud Foundry space and install the Builder.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

Application: –


◈ Launch SAP Web IDE full-stack and create a MTA application from the template.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ One MTA project will be created in your workspace with name MTA_Employee.
◈ Create a DB module for the MTA project as shown below.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Create a CDS view in the DB module as shown below.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Give the name of the CDS view as Organization.
◈ One CDS view will be created under the src folder; now open the CDS view in the Code Editor and replace the entire code with the code given in the path Organization.hdbcds.
◈ Build the CDS view to create the Employee table in the HANA DB.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Once the build is successful, a table will be created in the DB, which can be checked from the Database Explorer by adding the DB to the DB Explorer in the Web IDE.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Add some records into the table by generating and executing the insert statement.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Execute the above SQL query by putting some values as shown in the screenshot.
◈ Create a calculation view in the DB module to read the Employee table.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ One Calculation View will be created under the src folder with the name GetEmployees; now open it in the Code Editor and replace the entire code with the code given in the path GetEmployees.hdbcalculationview.
◈ Build the DB module by right clicking and selecting Build on the DB folder.
◈ Now we will create Java module.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ The Java module is now created and the folder structure will look as below.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Open the pom.xml and replace the entire content with code given in the path pom.xml.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Open the web.xml and replace the entire content as given in the path web.xml.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Now create the below Java classes and copy the code from the mentioned paths. Make sure the folder structure is exactly the same; create any missing folders.

     - GetEmployeeServlet  :-  GetEmployeeServlet.java
     - EmployeeEDMProvider :- EmployeeEDMProvider.java
     - EmployeeProcessor :- EmployeeProcessor.java

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ The Java module is now created; build the Java module as shown below.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Now Create UI module.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ The UI module is now created and the folder structure will look as below.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Open the following files and replace the content from the mentioned path.
     - Xs-app.json :- xs-app.json
     - Home.view.xml :-  Home.view.xml
     - Component.js :- Component.js

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Open the mta.yml file in the editor mode and replace the entire content with the code from the path mta.yaml.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ The MTA application is now ready; build the MTA application to create the archive file.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Once the build is successful, you can find the archive file under the mta_archives folder.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Deploy the .mtar file to the Cloud Foundry.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ You will be prompted to enter the details of the Cloud Foundry.
◈ Once the deployment is successful, you can see three applications in the Cloud Foundry space.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Click the UI application to find out the URL of the MTA App.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Learning, SAP HANA Tutorial and Material

◈ Click the URL to run the application.

SAP HANA Study Materials, SAP HANA Certification, SAP HANA Guides

Taming your SAP HANA Express (SE01E03). Learn how to establish a HANA Live connection with SAML SSO to SAP Analytics Cloud


Let’s begin


Understanding SAML with SAP Analytics Cloud.

In order to understand SAML SSO (single sign-on) it is important to understand who is who, namely who is the SAML Identity Provider (=the authentication authority) and who is the SAML Service Provider (=the application).

This is particularly important because, more often than not, a SAML SP can also act as a SAML IDP.

Security Assertion Markup Language (SAML) is an open-standard data format for exchanging authentication and authorisation data between parties.

We can see the three parties involved and a very simplified exchange flow in the following picture:

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

SAP Analytics Cloud is the service provider (SP).

The browser attempts to access the SAC application (the service) [1] and is redirected [2] to a third-party Identity Provider that is responsible for authenticating the user [3].

When you receive your SAC tenant URL and log on to SAP Analytics Cloud, you are redirected to the SAP Cloud Platform Identity Authentication service every time the SAML token in the browser cache is no longer valid.

The SAP Cloud Identity Provider is used as the default SAML IDP as depicted in the picture below:

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

Understanding SAML with SAP HANA


However, what I am about to show deviates in some ways from the official documentation, especially with regard to the requirement of having a custom SAML IDP for both your SAC tenant and HANA database tenant.

[Please bear in mind that in order to be able to override the default SAC IDP you must be the system owner.]

And for that we shall be using the SAP HANA Express tenant database with HDI views rather than relying on the classic _SYS_BIC schema.

We shall be using the HTTPS INA protocol leveraging the XS classic engine with the web-dispatcher still part of any SAP HANA 2.x on premise image.

There are still many partners and customers out there using HANA 2.x without XSA/HDI at all, and in any case this method is officially supported on the HANA Platform for tenant databases.

On the other hand one of the fantastic features of HANA 2.x is the containerization support implemented through the HANA Deployment Infrastructure (HDI) paradigm.

So what? Can’t we take advantage of the HDI views?

Yes we can. I will be showing how easy and convenient it is to be using your HDI container based HANA views through the XS classic INA router for your analytical reporting with SAP Analytics Cloud.

All the below links are to a vanilla SAP HANA Express image where the self-signed SSL certificate was replaced by a trusted certificate signed with a CA (Certificate Authority) of your choice (it could even be your own CA).

To help you with this exercise let me show the full view of the XS classic stack folders in HANA Express tenant database (HXE):

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

From what I recall, in the initial HANA Express 2.0 SP03 rev31 image some of these folders were missing, so I needed to upload a couple of delivery units into the tenant database using Eclipse.

SAP HANA Web based Development Workbench


https://vhcalhxedb.sap.corp:4390/sap/hana/ide/

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

Please make sure your tenant database user can access the below editor URL:

https://vhcalhxedb.sap.corp:4390/sap/hana/ide/editor/

SAP HANA XS Classic Admin Console:


https://vhcalhxedb.sap.corp:4390/sap/hana/xs/admin/

a. CORS configuration

https://vhcalhxedb.sap.corp:4390/sap/hana/xs/admin/#/package/sap.bc.ina.service.v2

As you can see the INA service is part of the HCO_INA_SERVICE delivery unit.

From what I recall the CORS settings came pre-configured in the SAPCAL HXE image as depicted below.

If your HANA image is different, this is what you need to see here:

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

b. SAML configuration

https://vhcalhxedb.sap.corp:4390/sap/hana/xs/admin/#/package/sap.bc.ina.service.v2

As mentioned above, I will show how to implement SAML SSO to the HANA database at the HANA Live connection level (or HANA user level) without the need for a custom SAML IDP.

This means you are free to use whatever SAML IDP you want, including none at all.

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

HTTPS INA protocol:

https://vhcalhxedb.sap.corp:4390/sap/bc/ina/service/v2/GetServerInfo

Your HANA tenant database user must be granted the relevant roles for INA access.
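For reference, such a repository role can be granted with the standard repository procedure. A minimal sketch, assuming the INA user role delivered with HCO_INA_SERVICE is sap.bc.ina.service.v2.userRole::INA_USER and the reporting user is called SAC_USER (both names are assumptions, please verify them on your system):

-- Hedged sketch: grant the activated INA role to the HANA user behind the live connection
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"(
    'sap.bc.ina.service.v2.userRole::INA_USER',   -- repository role for INA access (name assumed, please verify)
    'SAC_USER'                                    -- tenant database user (name assumed)
);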

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

and you need to be able to see the answer to the GetServerInfo verb

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

CORS AUTHorization response

https://vhcalhxedb.sap.corp:4390/sap/bc/ina/service/v2/cors/auth.html

You will need to add the following auth.html CORS response file under the cors folder as depicted below using the workbench editor:

https://vhcalhxedb.sap.corp:4390/sap/hana/ide/editor/

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

and the .xaccess file as well

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

Putting this all together


Because we have provided a trusted SSL certificate for the secured (HTTPS) INA connection, we will no longer get the following type of error when trying to establish a HANA Live connection with SAP Analytics Cloud. Please note that as of SAC wave 2019.01, self-signed SSL certificates are no longer accepted.

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

Deploying SAP HANA Shine XSA application to the HXE tenant database.


I find SAP HANA Shine a fantastic, truly educational application. It has many features: for instance it provides a consistent set of cubes (HANA views) implemented in an HDI container, and it also features a data generator. I have been using its cubes as reliable test data that anyone can replicate.

I was a little surprised, after deploying the SAPCAL HANA Express image, to discover that the HANA Shine XSA application was deployed on the system database. I would rather expect it to be deployed on the tenant database, because the index server only runs on the tenant database.

The system DB can to some extent leverage the name server in lieu of the index server, but in the end I decided to deploy it on the HXE tenant database. Moreover, the HTTPS INA protocol can only reach the tenant database.

This is the central SAP note on SAP HANA Shine releases : 2239095 – SAP HANA XS ADVANCED DEMO MODEL – SHINE XSA Release & Information Note

And this is the SAP HANA Shine for XSA link to SMP

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

For convenience this is the direct link to the HANA Shine for XSA documentation in the PDF format.

For the record, I opted for the XSA CLI command-line deployment method.

Finally let’s go to our SAC tenant:


My SAC tenant version is 2019.02 as follows:

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

Connections:

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

By clicking the OK button you will be validating the connection against the source system;

The below pop-up window is the result of the end user browser issuing the logon authentication request, namely https://vhcalhxedb.sap.corp:4390/sap/bc/ina/service/v2/cors/auth.html

As we are not connected to any custom SAML IDP, the authentication request defaults to a user and password prompt, which we of course satisfy.

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

This happens only the first time we define the connection and then every time the connection cookie gets invalidated in the browser cache;

One important thing: even if we had set up a custom SAML IDP for both the SAC and HXE tenants, the auth.html window would still briefly be displayed;

the only difference being that the authentication token would come from the IDP itself and not through the user/password credentials validation.

With the live connection to HANA set up, we can create models, and stories based on these models.

I provide the screenshots below on purpose, as many people still get confused at this point.

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

and one of the stories:

SAP HANA Express, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Certifications

Passing multi-value input parameter from Calculation View to Table Function in SAP HANA – step by step guide


Introduction


In my previous post I demonstrated how to create Table Functions (TF) in SAP HANA XS Classic. One of the TF disadvantages which I mentioned there was the fact that they do not support multi-value input parameters, and this is official:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

Personally I think this is quite a big limitation: when some functionality cannot be covered by a graphical view, the recommendation is to go for a SQL Table Function, so why can we not pass a multi-value parameter there? In the reporting world this is one of the most basic capabilities for giving the user flexibility in defining input criteria, isn’t it?

For some scenarios this limitation forced me to implement complex logic in graphical views, although in SQL it could have been done much more easily. But I could not take away from the end user the possibility of selecting more than one value.

For small datasets, when there was no option for implementing the report logic graphically, I was creating a Table Function (without input parameters) and then creating variables in the graphical view which consumed that TF. But as you might know, in this approach the values provided by the user in the variable are not pushed down to the table directly, so the performance of such a solution is simply bad.

Finally I got a requirement which forced me to find a way of passing multi-value parameters directly to the SQL query. This was for a BOM (Bill of Material) report, where the user wanted to display all components of the entered materials. Now imagine running such a report for all existing materials and recursively searching for all their components across all levels; impossible even for HANA. This was the moment when I started to look deeply for a way of passing multiple values through a Calculation View input parameter to a Table Function.

In this blog I will share my findings and workarounds which I discovered.

Why does passing multiple values from a Calculation View parameter to an underlying Table Function parameter not work?

A common developer mistake is trying to apply a filter by combining a multi-value input parameter with the SQL IN predicate, which won’t work:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

To visualize the issue, let’s create a simple Table Function which returns only the value of the inserted parameter:

FUNCTION "_SYS_BIC"."TMP::TF_DUMMY" ( INPUT VARCHAR(100) ) 
RETURNS TABLE
(
INPUT_PARAMETER VARCHAR(100)
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS

BEGIN

SELECT :INPUT AS "INPUT_PARAMETER" FROM DUMMY;

END;
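You can simulate what the calculation view hands over by calling the function directly with an already concatenated literal (the value below is just an example):

-- The escaped literal represents the single string 'AA','BB','CC'
-- that a multi-entry parameter delivers to the function
SELECT * FROM "_SYS_BIC"."TMP::TF_DUMMY"( '''AA'',''BB'',''CC''' );
-- Result: one row with INPUT_PARAMETER = 'AA','BB','CC' (one concatenated string, not three values)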

Then let’s create a calculation view which consumes that TF (1). Before activation, remember to map the input parameter between the TF and the calculation view (2) and set the input parameter to Multiple Entries:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

After activation, pass multiple values on the selection screen and check the output:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

At first glance the result may seem correct, because these are the values we wanted to pass to the IN predicate, but in fact it is wrong. Instead of passing 3 separate values, we are passing a single string with all three values concatenated. This is how the syntax would look in the background:

WHERE COLUMN IN :MULTI_VALUE_IP => WHERE COLUMN IN ' AA, BB, CC '

So instead of filtering on three separate values, there is a filter on one concatenated value (I skipped the single quotes here to make it clearer; the actual value which is passed would be the single string 'AA','BB','CC').

Below I show how HANA converts the multiple inserted values (on the left) and which values we would need for the IN predicate to make it work (on the right):

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

What are the workarounds for passing a multi-value input parameter to a Table Function?

There are two approaches to implement multi-value parameter handling.

1. Using APPLY_FILTER function
2. Creating custom function splitting string into multiple values

For demonstration purposes I created the database table TEST_ORDERS and inserted some sample records.

The structure of the table is as follows:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material
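Since the table structure is shown only in the screenshot, the illustrative sketches further down assume a simplified structure along the following lines (schema, column names and types are assumptions, not the original definition):

-- Hypothetical stand-in for the TEST_ORDERS table used in the sketches below
CREATE COLUMN TABLE "MY_SCHEMA"."TEST_ORDERS"
(
    "ORDER_NUMBER"    VARCHAR(10),
    "CUSTOMER_NUMBER" INTEGER,
    "CATEGORY"        VARCHAR(20),
    "AMOUNT"          DECIMAL(15,2)
);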

I will use it as a data source for further demonstrations.

1. Using APPLY_FILTER() statement


APPLY_FILTER is an SQL function which allows creating a dynamic filter on a table. The syntax of the function is as follows:

APPLY_FILTER(<table_or_table_variable>, <filter_variable_name>);

To give you more examples I prepared three different scenarios.

Scenario 1.

The requirement is to create an Orders report with a mandatory, multi-value input parameter for the Customer Number column.

Here is the definition of the Table Function:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

I. Define the input parameter. Assign the varchar data type to the input parameter (even if the filtered column is of integer type). Keep in mind that HANA concatenates the multiple inserted values into a single string, so to allow the user to enter as many values as possible I defined the maximum number of characters, which is 5000. I will describe this limitation in detail in the last section.

II. Create a variable and assign the dynamic filter definition. The filter definition syntax should be the same as used after the SQL WHERE clause. Combine the filter definition with the input parameter (the || symbol is used for concatenating strings).

III. Assign a SELECT query to a table variable. This table variable will be consumed by the APPLY_FILTER function.

IV. Use the APPLY_FILTER function. As the first function parameter provide the table variable with the SELECT query (here :ORDERS) and as the second parameter the variable with the dynamic filter definition (here :FILTER). Assign the output of APPLY_FILTER to another table variable (here FILTERED_ORDERS).

V. Query the table variable holding the output of the APPLY_FILTER function.
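Since the original function body is visible only in the screenshot above, here is a minimal SQLScript sketch of how steps I to V might fit together. All object and column names (MY_SCHEMA, ORDER_NUMBER, CUSTOMER_NUMBER, CATEGORY) are assumptions for illustration, not the author's original code:

FUNCTION "_SYS_BIC"."TMP::TF_ORDERS_APPLY_FILTER" ( IP_CUSTOMER VARCHAR(5000) )  -- I. multi-value input as VARCHAR(5000)
RETURNS TABLE
(
    "ORDER_NUMBER"    VARCHAR(10),
    "CUSTOMER_NUMBER" INTEGER,
    "CATEGORY"        VARCHAR(20)
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS
BEGIN

    DECLARE FILTER VARCHAR(5100);

    -- II. dynamic filter definition, concatenated with the input parameter
    FILTER := ' CUSTOMER_NUMBER IN (' || :IP_CUSTOMER || ') ';

    -- III. table variable holding the base query
    ORDERS = SELECT "ORDER_NUMBER", "CUSTOMER_NUMBER", "CATEGORY"
               FROM "MY_SCHEMA"."TEST_ORDERS";

    -- IV. apply the dynamic filter to the table variable
    FILTERED_ORDERS = APPLY_FILTER(:ORDERS, :FILTER);

    -- V. return the filtered result
    RETURN SELECT * FROM :FILTERED_ORDERS;

END;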

Once the table function is activated, create a graphical Calculation View. Provide the created Table Function as a source and map the input parameters:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

Select the options Multiple Entries and Is Mandatory for that input parameter:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

Now let’s run the data preview, select multiple values on the selection screen and see the output:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

So we just passed multiple values through the Calculation View to the Table Function!

Scenario 2.

The requirement is to create an Orders report with an optional, multi-value input parameter for the Customer Number column.

To support an optional multi-value parameter, one part of the code needs to be adjusted (only part II; keep everything else as in Scenario 1):

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

II. Create a variable and assign the dynamic filter definition. Here an additional check is implemented: if the value of the inserted parameter is empty, assign the string 1=1 to the FILTER variable, otherwise use the same condition as in the first scenario. When you pass the value 1=1 to the APPLY_FILTER function as the filter variable, all records will be returned (because this condition is always true). This is a workaround for displaying all records when the user does not provide any value for the input parameter. A minimal sketch of this check is shown below.
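The names (IP_CUSTOMER, FILTER, CUSTOMER_NUMBER) are carried over from the illustrative sketch above and are assumptions, not the original code:

-- II. (adjusted) fall back to an always-true condition when no value is supplied
IF :IP_CUSTOMER IS NULL OR :IP_CUSTOMER = '' THEN
    FILTER := ' 1 = 1 ';                                        -- always true, so all records are returned
ELSE
    FILTER := ' CUSTOMER_NUMBER IN (' || :IP_CUSTOMER || ') ';  -- same condition as in Scenario 1
END IF;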

Now when you do not provide any value for the input parameter, you will get all records in the output:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

For other inserted values the view will act as in Scenario 1.

Scenario 3.

The requirement is to create an Orders report with two mandatory, multi-value input parameters: one for the Customer Number and a second for the Category column.

In this scenario two parts of the code need small adjustments compared to Scenario 1.

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

I. Add a new input parameter for Category.

II. Adjust the string assigned to the FILTER variable to include CATEGORY as a filter, as sketched below.
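A sketch of the adjusted filter string, again with illustrative names (IP_CUSTOMER, IP_CATEGORY and the column names are assumptions):

-- II. (adjusted) combine both multi-value parameters in one dynamic filter
FILTER := ' CUSTOMER_NUMBER IN (' || :IP_CUSTOMER || ')'
       || ' AND CATEGORY IN ('    || :IP_CATEGORY || ') ';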

In the Calculation View you need to add a new input parameter and map it to the one which was created in the Table Function (IP_CATEGORY):

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

Now you can enter multiple values for both parameters and the filter will consider both of them:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

2. Creating custom function splitting string into multiple values


The second approach to enable passing multiple values to a Table Function is to create a custom SQL function. The logic inside that function splits the concatenated string into multiple records. Here is the code for the custom function:

FUNCTION "_SYS_BIC"."TMP::TF_SPLIT_STRING" ( INPUT_STRING VARCHAR(5000) ) 
RETURNS TABLE
(
"OUTPUT_SPLIT" VARCHAR(5000)
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS
BEGIN

DECLARE COUNTER INT := 1;
SPLIT_VALUES = SELECT SUBSTR_BEFORE(:INPUT_STRING,',') SINGLE_VAL FROM DUMMY;
SELECT SUBSTR_AFTER(:INPUT_STRING,',') || ',' INTO INPUT_STRING FROM DUMMY;
WHILE( LENGTH(:INPUT_STRING) > 0 )
DO
   SPLIT_VALUES =
   
   SELECT SUBSTR_BEFORE(:INPUT_STRING,',') SINGLE_VAL FROM DUMMY 
UNION    
SELECT SINGLE_VAL FROM :SPLIT_VALUES;
   SELECT SUBSTR_AFTER(:INPUT_STRING,',') INTO INPUT_STRING FROM DUMMY;
   
END WHILE;
RETURN
SELECT REPLACE(SINGLE_VAL,'''','') AS "OUTPUT_SPLIT" FROM :SPLIT_VALUES; 

END
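Calling the function directly shows the effect; the escaped literal below simply stands for the concatenated string 'AA','BB','CC' that a multi-value parameter would deliver:

-- Example call with the concatenated multi-value string as input
SELECT * FROM "_SYS_BIC"."TMP::TF_SPLIT_STRING"( '''AA'',''BB'',''CC''' );
-- Returns three rows in OUTPUT_SPLIT: AA, BB and CC (the quotes are stripped by the REPLACE)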

Now we can consume that function in the SQL query. For demonstration purposes let’s use Scenario 1.

Scenario 1.

The requirement is to create an Orders report with a mandatory, multi-value input parameter for the Customer Number column.

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material
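The screenshot above contains the original function; a comparable sketch that reuses the split function and the illustrative TEST_ORDERS columns from earlier could look like this (again, the names are assumptions, not the author's code):

FUNCTION "_SYS_BIC"."TMP::TF_ORDERS_SPLIT" ( IP_CUSTOMER VARCHAR(5000) )
RETURNS TABLE
(
    "ORDER_NUMBER"    VARCHAR(10),
    "CUSTOMER_NUMBER" INTEGER,
    "CATEGORY"        VARCHAR(20)
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS
BEGIN

    -- Split the concatenated input into single values, one value per row
    CUSTOMERS = SELECT "OUTPUT_SPLIT"
                  FROM "_SYS_BIC"."TMP::TF_SPLIT_STRING"(:IP_CUSTOMER);

    -- The IN predicate now works, because it compares against single values
    -- (HANA implicitly converts the varchar values to the integer column type)
    RETURN
        SELECT "ORDER_NUMBER", "CUSTOMER_NUMBER", "CATEGORY"
          FROM "MY_SCHEMA"."TEST_ORDERS"
         WHERE "CUSTOMER_NUMBER" IN ( SELECT "OUTPUT_SPLIT" FROM :CUSTOMERS );

END;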

Create a graphical Calculation View, use the Table Function as a source and map the input parameters:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

Now let’s run the data preview, select multiple values on the selection screen and see the output:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

The result is the same as in the approach with the APPLY_FILTER function. From a performance perspective APPLY_FILTER will work faster, but the difference will be minor.

Limitations


Although the presented solutions allow passing multiple values through a Calculation View to a Table Function, there is a major limitation in both implementations.

As mentioned at the beginning of this post, HANA concatenates all inserted values into a single string. Considering that the maximum number of characters allowed for the VARCHAR data type is 5000, at some point you may realize that the number of values that can be entered is limited.

Let’s take the example of the Material (MATNR) field, which in SAP is of the VARCHAR(18) data type. When inserting two materials:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

The concatenated string will look as follows:

SAP HANA Certification, SAP HANA Tutorial and Material, SAP HANA Guides, SAP HANA Study Material

The concatenated string length for these two values is:

◈ 18 x 2 characters for the two materials (36 in total)
◈ 2 x 2 single quotes (4 in total)
◈ 1 x comma (1 in total)

For the presented example the total string length is 41.

Keeping in mind that the maximum varchar length is 5000, for this scenario the maximum number of materials which you can pass to the table function equals:
5000 / (18 characters + 2 quotes + 1 comma per material) ~ 238.

For some scenarios this can be more than enough, but sometimes you want to allow the user to enter thousands of values into the input parameter. In such a case, if the string for the input parameter exceeds the limit of 5000 characters, you will get an error as below:

SAP HANA Certifications, SAP HANA Tutorial and Material, SAP HANA Study Material