
SAP HANA 2.0 Express Edition, Transport between Tenants

This blog is about how to transport a User-Role between two Tenants within one instance of SAP HANA 2.0 Express.

To enlarge the pictures, press Ctrl and + to zoom in and Ctrl and - to zoom out.

How to download and install SAP HANA Express is explained in this YouTube video.

I am using the Package “Server only virtual machine”.

To access the VMware server via the hostname hxehost, the local hosts file C:\Windows\System32\drivers\etc\hosts needs to be adapted.

hxehost-src and hxehost-trg are two virtual hostnames which are used by SAP HANA's Web Dispatcher to forward requests to the respective Tenant Database.
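The resulting entries in the hosts file might look like this (192.168.1.100 is an assumed example address; use the IP address of your own VM):

192.168.1.100    hxehost
192.168.1.100    hxehost-src
192.168.1.100    hxehost-trg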

Once the instance is registered in SAP HANA Studio, the two Tenant Databases are created.

The statements are:

-- Create Source Tenant Database
create database src system user password ***;

-- Create Target Tenant Database
create database trg system user password ***;

-- Configure Routing to Tenant Databases
-- SRC
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'database', 'SRC')
    SET ('public_urls', 'http_url') = 'http://hxehost-src:8090' WITH RECONFIGURE;

-- TRG
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'database', 'TRG')
    SET ('public_urls', 'http_url') = 'http://hxehost-trg:8090' WITH RECONFIGURE;

-- Validate proper configuration of the Web Dispatcher
select key, value, layer_name
   from sys.M_inifile_contents
  where file_name = 'webdispatcher.ini'
    and section = 'profile'
    and key like 'wdisp/system%';

The output of the select-statement shows this:
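The generated entries typically look similar to the following sketch (SIDs, ports and internal hosts vary per installation, so treat this as an illustration rather than the exact output):

wdisp/system_0 = GENERATED, SID=SRC, EXTSRV=http://localhost:30014, SRCVHOST=hxehost-src
wdisp/system_1 = GENERATED, SID=TRG, EXTSRV=http://localhost:30014, SRCVHOST=hxehost-trg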

That means the Web Dispatcher is contacted on port 8090; when the hostname used is hxehost-src, it forwards the request to tenant SRC. This is described in detail in the Database Admin Guide, chapter "12.1.8.3 Configure HTTP(S) Access to Tenant Databases via SAP HANA XS Classic".

For the database to be able to resolve the hosts internally, I have also put the virtual hostnames in the file /etc/hosts on the VMware host.

For logging on to the XS Classic engine of the Tenant Databases, a set of Roles is required which is not available on the Tenant Databases right away. To make them available, I use an initial logon to the XS Classic engine of the System Database. For that, user SYSTEM of the System Database requires additional Roles.

After granting those roles, user SYSTEM is able to log on to HANA Application Lifecycle Management on the System Database.

After the login, the required Roles are available also on each Tenant Database.

Please note: The virtual hostnames hxehost-src and hxehost-trg are not used for registration in SAP HANA Studio. For that I use hostname hxehost and the name of the Tenant Database.

Once the users of Tenant Databases SRC and TRG have the required Roles assigned, a login to the tenant-specific XS Classic engine should be possible.

Now the transport route needs to be configured. Native transport on HANA works by the target pulling from the source. That means the transport route has to be configured on Tenant TRG.

Click on Transport -> http://hxehost-trg:8090/sap/hana/xs/lm/index.html?page=SystemsTab

Use the Register button to add the source system.

Press Next.

Press Maintain Destination

I only maintain the logon data for tenant SRC on tab Authentication Details.

After pressing Save, I press the X in the upper right corner.

After pressing Finish on the next screen, the tenant database should be registered. A connection test also completed successfully.

Now we can create the role in the IDE on the source system using

URL: http://hxehost-src:8090/sap/hana/ide/editor/

First, a package needs to be created. After that, I create the role R22_ADMIN. Creating a package or role is done via right-click -> New -> …  After creation, I added a simple privilege.

To be able to transport the Package, it needs to be added to a Delivery Unit. To create a new Delivery Unit, HALM of tenant SRC is used:

http://hxehost-src:8090/sap/hana/xs/lm/?page=DUManagementTab

I created Delivery Unit ROBINSON_R22 and added the Package that I created before.

Now, the transport can be done. It has to be started from the target system:

http://hxehost-trg:8090/sap/hana/xs/lm/?page=TransportTab

Press the Create button.

After pressing Create, the transport is created and is ready to be started.

Mark the transport and press Start Transport

Success!

Now we can check the new Role in the target system.

The role is active in the target system and can be assigned to a user.

That's it. Thank you very much, please like and subscribe, and feel free to leave a comment.

The Journey to SAP HANA or S/4HANA: Choose Your Own Adventure

For any company moving to SAP HANA or S/4HANA, choosing a data management strategy is an essential part of the journey. The first step on the path to migration, though, is choosing which path to take. And of course, like any journey, a properly planned itinerary and a map are critical to getting to your intended destination.


One approach to the move to a HANA environment is migration from a source environment. Deciding when to start the migration to SAP HANA or S/4HANA will impact the process. For most companies, the answer should be "now", but there are a number of important factors to consider. Using proven SAP best practices for data management is the best way to combat the size, cost and complexity issues you'll face, but different data strategies can be used depending on the size of the SAP system. So, which path is right for your organization? Here are two scenarios you can follow on the road to SAP HANA or S/4HANA when using a migration strategy.

1. Large Data Volumes: A More Complex, Time-Consuming and Costly Migration


If the data volumes in existing SAP systems are 10 TB or more, migration will take longer than for companies with smaller systems (in some cases, the final migration cannot even be completed in the allotted time) and will be more complex and expensive. To ensure the migration costs remain low while still meeting the intended go-live date, it's important to start planning now and to utilize the correct tools.

Shrinking the size of the system is critical in this scenario. But how can an organization remove several terabytes of data from its system – and quickly? Data archiving before a system change will solve the size, complexity and cost problems you will be facing with a large database. Automated software solutions are available to streamline the complete archiving process. These solutions enable organizations to run multiple archiving jobs in parallel, around the clock. This not only reduces the time it takes to archive the data, but also allows IT and business teams to focus on other tasks.

2. More Time, Less Data: A Phased Approach


Data management strategies aren't just beneficial or necessary for companies with large systems. Companies with smaller SAP systems (less than 10 TB) can also greatly benefit from data management, but may have the benefit of taking a phased approach. This enables companies to reduce data volume, lower IT costs, and improve system performance gradually, giving the company more time to prepare for an eventual migration to SAP HANA or S/4HANA. It also allows IT teams to gain the support of their business users while maximizing the amount of data that can be archived.

Choosing the alternative “Greenfield” approach to the move to the new HANA environment leaves organizations with a different challenge. Companies that do choose to do a greenfield implementation of SAP HANA should put a data management strategy in place to manage the information that will be left behind in legacy systems. In either case, reducing the size of SAP systems now not only reduces the cost and complexity of the migration process but delivers other benefits such as improved system performance, reduced risk and lower total cost of operating SAP systems.

Choosing which strategy is best for your company is part of the journey, and companies should start planning now if they want to achieve the best results. There are best practices that can be leveraged for any given scenario; choose wisely based on your situation and navigate the map that's best for you.

ABAP on HANA Optimization – Step by Step Remediation

1. STEP BY STEP TUNE YOUR CUSTOM ABAP CODE – HANAFIED


1.1 Introduction

Before migrating to a Suite on HANA or ABAP on HANA environment, we need to analyse which ABAP code must be changed to avoid potential functional issues. In general, existing ABAP code runs on SAP HANA as before; only if the code relies on technical specifics of the old database might ABAP code changes be necessary.


Technical changes with SAP HANA that may affect existing DB specific ABAP code.

To avoid such confusion and technical redundancies, follow the step-by-step analysis below to make custom ABAP development more optimized in a HANA environment.

1.2 Requirements

To start with the optimization process, the basic requirements are as follows:

1. ABAP programming knowledge
2. SAP development system with SCI/ATC checklist and SQLM transaction access
3. Basics of SQLM and SWLT transaction processing
4. ABAP on HANA environment

1.3 Step By Step Process

We will see step by step how to get your custom code ready for SAP HANA and, in parallel, how this brings the performance and quality of your custom code to a new level right now. We will also see how this is facilitated through SCI results combined with SQLM data in the SWLT (SQL Performance Tuning Worklist).

Approach on Tuning ABAP code for optimization (HANAfication).

2. STEPS:

Step 1: Go to transaction SCI and perform an SCI inspection of the object list you want to analyse for optimization, using a custom check variant or PERFORMANCE_DB.

Step 2: Go to SQLM (SQL Monitor) and manage/create snapshots (make sure the server is active; otherwise, select and activate all servers).

Step 3: Enter a description, select Local System on the data source tab, provide the package details, and press F8 / Create Snapshot. You will get the success message "Snapshot successfully created".

Step 4: Go to SE38 and execute the report program that you want to analyse, with the required test data.

The SQL Monitor will record and analyse the SQL profile of the executed program (there may be a time delay before the data appears).

Step 5: Go to SE38, execute the program 'RSQLM_UPDATE_DATA', and check all the options while executing to finish the SQLM update manually.

Step 6: We need to feed the SQLM snapshot into the SWLT configuration (SQL Monitor tab). Go to SQLM and export the snapshot data as a zip file.

Step 7: Now go to transaction SWLT. On the General tab, provide the package/object details and select 'Show all details'.

On the Static Checks tab, provide the SCI or ATC (ABAP Test Cockpit) inspection name (the results obtained in Step 1).

On the SQL Monitor tab, select the snapshot directly or import the SQLM snapshot file that we exported in Step 6.

Step 8: Now you are set to run SWLT; press F8 or Execute.

You will see a detailed SQL performance tuning worklist highlighting the total DB time and mean DB time for all tables hit, in descending order (most time-consuming at the top), along with error/warning messages and code remediation suggestions.

3. Conclusion.

The key takeaways for ABAP optimization on HANA are summarized below.

As discussed above, in general existing ABAP code will run on SAP HANA as before. However, certain DB-specific ABAP code must be analysed and modified accordingly; ATC (ABAP Test Cockpit) can be used to find and adapt such DB-specific custom code easily. The well-known golden Open SQL rules are still valid on SAP HANA; only some priorities have shifted.
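As a small illustration of one of these golden rules (keep the result set small and transfer only the columns you need), here is a hedged sketch in plain SQL; VBAK is the standard sales document header table, and the date filter is a hypothetical example:

-- Unfavourable: transfers every column of every sales document header
SELECT * FROM VBAK;

-- Better: restrict both the rows and the columns in the database
SELECT VBELN, ERDAT, NETWR
  FROM VBAK
 WHERE ERDAT >= '20170101';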

For effective SQL tuning and to find HANA potential in existing ABAP code, new monitoring tools like the SQL Monitor (SQLM) are available. With the steps and approach described above, the preparation of your custom code can start before the migration to SAP HANA.

Processing XML data in SAP HANA

New capabilities are introduced in SAP HANA with every support package. One important addition is the ability to process XML data, which was introduced with SAP HANA 2.0 SPS 01 and SPS 02.

XML is in common use for the interchange of data over the Internet, so applications require data in XML format for communication with other entities. With the data stored in an RDBMS such as SAP HANA as relational tables, the application layer has to process the data and render it in the format used for communication. Similarly, when the data reaches the application layer as XML, it needs to be processed again so that it can be stored in the database as tables (the format understood by the RDBMS).

To simplify and optimize this process, SAP HANA provides XML functions that process this data and render it to the application layer without additional logic, saving time and complexity on the application side.

FOR XML

<for_xml> ::= FOR XML [ ( <option_string_list> ) ] [ <returns_clause> ]

This clause, introduced in SAP HANA 2.0 SPS 01, is used to render data from the database in XML format. The source can be a database object such as a column or row table, virtual table, multistore table, view, calculation view, hierarchy view, function, etc. It can be used as shown below.

For example, let us create a table with the following metadata:

It is a partitioned table with different data types and column properties.

CREATE COLUMN TABLE "ALLTYPES_COL" ("ID" SMALLINT CS_INT GENERATED BY DEFAULT AS IDENTITY (start with 0 increment by -1 maxvalue 0),
"COUNTRY" VARCHAR (30),
"DATEJOINED" DATE CS_DAYDATE DEFAULT CURRENT_DATE,
"TIMEJOINED" TIME CS_SECONDTIME DEFAULT CURRENT_TIME,
"COMPLETEDATE" SECONDDATE CS_SECONDDATE DEFAULT CURRENT_UTCDATE,
"ENTRYTIME" LONGDATE CS_LONGDATE DEFAULT CURRENT_UTCTIMESTAMP,
"TINYINT_UNITS" TINYINT CS_INT DEFAULT 255,
"SMALLINT_VAL" SMALLINT CS_INT DEFAULT -32767,
"INT_VAL" INTEGER CS_INT DEFAULT -2147483648,
"BIGINT_VAL" BIGINT CS_FIXED DEFAULT -9223372036854775808,
"DEC_VAL" DECIMAL(38,
38) CS_FIXED DEFAULT 0.1) UNLOAD PRIORITY 5 AUTO MERGE
;
ALTER TABLE "ALLTYPES_COL" ADD ("SMALLDECIMAL_VAL" SMALLDECIMAL CS_SDFLOAT GENERATED ALWAYS AS ( 1.12 + 12.9 ))
;
ALTER TABLE "ALLTYPES_COL" ADD ("REAL_VAL" REAL CS_FLOAT DEFAULT 9223372036854.9223372036854)
;
ALTER TABLE "ALLTYPES_COL" ADD ("DOUBLE_VAL" DOUBLE CS_DOUBLE DEFAULT 9223372036854.9223372036854)
;
ALTER TABLE "ALLTYPES_COL" ADD ("FLOAT_VAL_DEF" DOUBLE CS_DOUBLE DEFAULT 9223372036854.9223372036854)
;
ALTER TABLE "ALLTYPES_COL" ADD ("FLOAT_VAL_64" DOUBLE CS_DOUBLE DEFAULT 9223372036854.9223372036854)
;
ALTER TABLE "ALLTYPES_COL" ADD ("FLOAT_VAL_32" REAL CS_FLOAT DEFAULT 9223372036854.9223372036854)
;
ALTER TABLE "ALLTYPES_COL" ADD ("BOOLEAN_VAL" BOOLEAN CS_INT DEFAULT TRUE)
;
ALTER TABLE "ALLTYPES_COL" ADD ("DESCRIPTION" NVARCHAR(40) DEFAULT 'テスト')
;
ALTER TABLE "ALLTYPES_COL" ADD ("ALPHANUM_VAL" ALPHANUM(10) CS_ALPHANUM DEFAULT '10')
;
ALTER TABLE "ALLTYPES_COL" ADD PRIMARY KEY INVERTED VALUE ("ID",
"DATEJOINED")
;
ALTER TABLE "ALLTYPES_COL" WITH PARAMETERS ('PARTITION_SPEC' = 'HASH 4 DATEJOINED');

Insert into the table with the following SQLs:

insert into "ALLTYPES_COL"(country) values ('India');
insert into "ALLTYPES_COL"(country) values ('Germany');
insert into "ALLTYPES_COL"(country) values ('USA');

Querying the table returns the three rows, with the ID values (0, -1, -2) generated by the identity column and the remaining columns filled with their default values.

To render the data in XML format, we query it with the FOR XML clause (select * from "ALLTYPES_COL" for xml;), which returns the result set as an XML document with one <row> element per record.

The clause has an optional set of options to customize the output XML. One of the provided options is presented below as an example:
  • Nullstyle
To omit or include, as attributes, null values in the records

Insert into the table above null value as:

insert into "ALLTYPES_COL"(country) values (null);

By default, null values are omitted; in the following output there is no COUNTRY tag for the row with ID -3:

select * from "ALLTYPES_COL" for xml;

The nullstyle option renders all values, including nulls; with it, a COUNTRY tag with a null attribute is listed for the row with ID -3:

select * from "ALLTYPES_COL" for xml ('nullstyle'='attribute');

There are multiple options such as columnstyle, format, header, incremental, nullstyle, root, rowname, schemaloc, tablename and targetns.

Additionally, the returns clause lets the user choose the output type: VARCHAR(n), NVARCHAR(n), CLOB, or NCLOB (where n is an integer).
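As an illustrative sketch, several of the options listed below and a returns clause can be combined in one statement; the option values chosen here are assumptions for demonstration:

select * from "ALLTYPES_COL"
   for xml ('root'='customers', 'rowname'='customer', 'nullstyle'='attribute')
   returns NCLOB;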

XMLTABLE

XMLTABLE (
   [ <XML_namespace_clause>, ]
   <row_pattern> PASSING <XML_argument>
   COLUMNS <column_definitions>
   <error_option>
);

This function, introduced in SAP HANA 2.0 SPS 02, is used to extract information from an XML document and create a relational table from it. The XML value is provided as an argument to the XMLTABLE function, along with the hierarchy to be parsed and the values to be extracted. It can be used as shown below.

Consider the XML below.

<resultset>
  <row>
    <ID>0</ID>
    <COUNTRY>India</COUNTRY>
    <DATEJOINED>2017-08-11</DATEJOINED>
    <TIMEJOINED>14:24:35</TIMEJOINED>
    <COMPLETEDATE>2017-08-11 12:24:35</COMPLETEDATE>
  </row>
  <row>
    <ID>1</ID>
    <COUNTRY>Germany</COUNTRY>
    <DATEJOINED>2017-08-11</DATEJOINED>
    <TIMEJOINED>14:24:35</TIMEJOINED>
    <COMPLETEDATE>2017-08-11 12:24:35</COMPLETEDATE>
  </row>
</resultset>

To store the data in the database, we would usually have to parse this XML (e.g., with a SAX parser) in the application layer. With XMLTABLE, it becomes very easy to execute a single SQL query, passing the XML value and extracting the data that should be inserted into the database.

For example, we need ID, COUNTRY and COMPLETEDATE from the XML value above. The query below achieves this:

SELECT * FROM
XMLTABLE('resultset/row' PASSING
'<resultset>
 <row>
  <ID>0</ID>
 <COUNTRY>India</COUNTRY>
  <DATEJOINED>2017-08-11</DATEJOINED>
  <TIMEJOINED>14:24:35</TIMEJOINED>
 <COMPLETEDATE>2017-08-11 12:24:35</COMPLETEDATE>
 </row>
 <row>
  <ID>1</ID>
  <COUNTRY>Germany</COUNTRY>
  <DATEJOINED>2017-08-11</DATEJOINED>
  <TIMEJOINED>14:24:35</TIMEJOINED>
  <COMPLETEDATE>2017-08-11 12:24:35</COMPLETEDATE>
 </row>
</resultset>'
COLUMNS 
ID INT PATH 'ID', 
COUNTRY VARCHAR(200) PATH 'COUNTRY',
COMPLETEDATE VARCHAR(30) PATH 'COMPLETEDATE'
) as XTABLE

The result is a relational table with the columns ID, COUNTRY and COMPLETEDATE, one row per <row> element of the XML.

There may also be cases where the XML resides in a table column as a large object. Such a column can be passed to the function to convert its content into a relational table:

create column table CONTENT (
id integer, 
data nvarchar (5000)
);

insert into CONTENT values (1, '<resultset>
 <row>
  <ID>0</ID>
 <COUNTRY>India</COUNTRY>
  <DATEJOINED>2017-08-11</DATEJOINED>
  <TIMEJOINED>14:24:35</TIMEJOINED>
 <COMPLETEDATE>2017-08-11 12:24:35</COMPLETEDATE>
 </row>
 <row>
  <ID>1</ID>
  <COUNTRY>Germany</COUNTRY>
  <DATEJOINED>2017-08-11</DATEJOINED>
  <TIMEJOINED>14:24:35</TIMEJOINED>
  <COMPLETEDATE>2017-08-11 12:24:35</COMPLETEDATE>
 </row>
</resultset>');

SELECT * FROM
XMLTABLE('resultset/row' PASSING
CONTENT.DATA
COLUMNS 
ID INT PATH 'ID', 
COUNTRY VARCHAR(200) PATH 'COUNTRY',
COMPLETEDATE VARCHAR(30) PATH 'COMPLETEDATE'
) as XTABLE

The select query returns the same tabular result as in the previous example.

Thus, the source for the function can be anything: a column or row table, virtual table, extended table, multistore table, view, etc. Similarly, since the output is a tabular result set, it can be used in any operation allowed on a table: joins, view creation, inside a function or procedure, etc.

The XML may also contain namespaces, deep hierarchies, attributes, etc.; any valid XML value can be parsed and converted into a table. If errors occur during parsing, for example due to the hierarchy specification or the output value type and length, the error option can be used to specify the error handling.

These two simple yet effective XML operations make a developer's life easier: XML handling is pushed down to the database, without writing and maintaining explicit parsers and converters at the application layer.

IoT in SAP HANA Cloud Platform & Microsoft Azure

Business Vision:


Here is a little opinion, based on both architecture proposals and on real experience with the integration paradigm:
  • Avoiding the confusion: how do you determine that you are implementing an "IoT concept" and not just another "integration point of view"?
  • The investment evaluation: it is not just about "resources and talent"; you must also consider how to create new business, add value, and support the productivity of your SCM model as a competitive advantage, with "engagement" and "simple" business model proposals, understanding that you may be implementing innovative ideas with more risk in an unpredictable market…
  • Decision-making about quality: it begins with your providers, so the question here is: what is the best "IoT solution" for you, and how do you align it with your strategy?
  • SAP architect proposal: consider new business models, services and products, usage-based pricing, dynamic pricing, etc.
  • What about concepts like big data, business intelligence/analytics, mobile, etc. in your company?
And how do you align your business strategy with this digital era? From the integration of your on-site business users… all the way to one question: how do you reimagine your business, for example by implementing SAP S/4HANA and other new technology offerings?

Process Vision:


Focusing on SAP, I can share some technical and process-flow needs that you can apply in your company:
  • Processes: final report available, new supplies en route, order replacement part, predictive maintenance, connected asset management… and especially with Ariba and Concur.
  • Solution: covering gateway communication through HCP, in-memory computing, predictive analytics, spatial analytics, geo-location, telematics.
  • Communication: connect to transactional and analytical business data: Hadoop, SAP HANA, SAP IQ, SAP HCI…
  • Content: SAP HANA Cloud Platform cockpit / Internet of Things Services

Technical vision:


In this new adventure, considering that you can even implement it by yourself (without complex scenarios), we are simply going to compare the two proposals (SAP / Microsoft): the common components, from device connectivity to the storage and identification that you need to complete the interoperability vision towards your backend systems.

Results:


Management:
  • HCP: the common entry point is the SAP HANA Cloud Platform cockpit, where you can find tools like Device Management and Message Management; after you configure the roles and restart the IoT Services, you will find the Message Management Service with tools like Core Services and Data Services.
  • Azure: you can get a free account, activate the services, and then implement different kinds of logic in Visual Studio 2015 (depending on the technology you want to use, like C#, Java, etc.).

You need to download the azure-iot-sdks-master file and then open the DeviceExplorer project in Visual Studio 2015 (you must have the latest updates to execute it without errors).

Execution:


  • HCP: this is just a tool that we have for testing, but you can also consider, for example, SAP Fiori proposals…
  • Azure: here we simulate the execution with SoapUI, but you also have the DeviceExplorer .NET project to monitor these executions and even to execute async or sync flows.

You need your Azure IoT parameters in order to activate your services.

Monitoring:


  • HCP: as expected, we can monitor executions, but this is not all we can use; we must remember that the architecture considers even more technologies and an "IoT concept focus".
  • Azure: you can log the requests and even install an IAM tool or other REST technologies.

Using Predictive Analytics and Python on SAP Cloud Platform HANA database – Part 1

I was recently working with a customer who was interested in doing Predictive Analytics on top of the HANA database which they recently subscribed to on SAP Cloud Platform. They already have an on-premise server for the Predictive Suite and have been using its tools against an on-premise HANA database. In this blog, I want to share my experience to highlight how easy it is to do the same with a HANA database on the Cloud Platform.

Once you have a HANA database subscribed on your Cloud Platform account, it's important to note that you will need an additional subscription to the Predictive services on the Cloud Platform too. The Predictive service on Cloud Platform offers REST-based APIs which can be used in custom applications that you build on the Cloud Platform.

In this scenario, we are not going to leverage the REST APIs as we are going to use the on-premise PA Suite to handle complex use cases.

Once you have subscribed to Predictive Service, you can navigate to your database and click on “Install Components”

The system will give you a self-service option to install the APL libraries you want. Note that it is recommended to match the APL version with the version of your on-premise PA suite.

Once the APL libraries are installed on your HANA database, the next thing to do is to set up your Cloud Connector.

The next step is to select the "On-Premise to Cloud" option within the Cloud Connector to set up a service channel for your database connection.

After you save the settings, you can now access the hcpta instance using the host name where the Cloud Connector is installed and Port = 39815. In my example, I have installed the Cloud Connector on my laptop and hence will refer to it as localhost.

Search for ODBC Data Source in your programs and, under "System DSN", maintain a new data source connection. In the screen below, I have maintained one with the name "HCP".

In the connection details, I have provided localhost:39815. When I test the connection, it asks for the HANA DB user credentials and gives a success message if everything works fine. I have the Cloud Connector, ODBC and PA software all on the same laptop.

For demonstration purposes, I have created a schema ADM_DEMO which contains demo data on banking customers and their transactions. The transaction table has millions of records, which we can use to predict customer churn.

I am going to show how to get started with Predictive Analytics (on-premise) and connect to the HANA DB on SAP Cloud Platform. I am not an expert on PA; this is just a very basic scenario which creates a table back in HANA. I would recommend using Predictive Analytics version 3.2.

Launch the PA software on your laptop and click on "Create Data Manipulation" under Data Manager.

In the Data sources, select the one created in ODBC and provide the HANA DB login credentials.

Select the Table by browsing through the available schemas.

I have taken the Disposition table to begin with.

I can navigate to the Views tab and explore the data of this table.

Next, I am using the Merge option to connect with the Accounts table.

Account_ID is the key which links both tables. I repeat the same steps to add the Client table.

Once I have added all the tables and linked the keys, I can view the SQL which the system has generated.

I can also view the contents of this Dynamic SQL within PA.

I can apply filters, for example to only consider Client Type with the value "Owner".

Once I have applied the relevant filters and set the required aggregations, I can save the data back into HANA. In the example below, I have given it the table name PA_CHURN.

After execution of this step, I can now view the processed table available in HANA database.

Working with SAP HANA Parent Child Hierarchies

A parent-child relationship can be used to model many types of hierarchy, including ragged, balanced, and unbalanced hierarchies.

SAP Analytics Cloud (SAC) originally required us to use parent-child hierarchies. Often, when connecting live to HANA, you could be modeling your hierarchies in this way.

Below, we can see an example organisational structure. This is an unbalanced hierarchy, as the depth of the hierarchy varies depending on which part of the organisation you look at.

For clarity, we have added the ID of each member. This ID also becomes the child member within the hierarchy.

As we can see below, the parent-child hierarchy only requires a simple structure of two columns: the child entity (Job Title) and its parent, the level above it. It is also common to include the text related to that organisation level.

create column table ORG_STRUCTURE (ORG_ID INT, PARENT_ID INT, JOB_TITLE VARCHAR(50));
insert into ORG_STRUCTURE values (1, NULL, 'CEO');
insert into ORG_STRUCTURE values (2, 1, 'EA');
insert into ORG_STRUCTURE values (3, 1, 'COO');
insert into ORG_STRUCTURE values (4, 1, 'CHRO');
insert into ORG_STRUCTURE values (5, 1, 'CFO');
insert into ORG_STRUCTURE values (6, 1, 'CMO');
insert into ORG_STRUCTURE values (7, 3, 'SVP Sales');
insert into ORG_STRUCTURE values (8, 5, 'SVP Finance');
insert into ORG_STRUCTURE values (9, 6, 'SVP Marketing');
insert into ORG_STRUCTURE values (10, 7, 'US Sales');
insert into ORG_STRUCTURE values (11, 7, 'EMEA Sales');
insert into ORG_STRUCTURE values (12, 7, 'APJ Sales');
insert into ORG_STRUCTURE values (13, 9, 'Global Marketing');
insert into ORG_STRUCTURE values (14, 9, 'Regional Marketing');
insert into ORG_STRUCTURE values (15, 11, 'UK Sales');
insert into ORG_STRUCTURE values (16, 11, 'France Sales');
insert into ORG_STRUCTURE values (17, 11, 'Germany Sales');
insert into ORG_STRUCTURE values (18, 12, 'China Sales');
insert into ORG_STRUCTURE values (19, 12, 'Australia Sales');
select * from ORG_STRUCTURE;

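Before building the view, the parent-child relationship can be checked with a plain self-join; this is just an illustrative query on the table above:

select c.ORG_ID, c.JOB_TITLE, p.JOB_TITLE as MANAGER_TITLE
  from ORG_STRUCTURE c
  left outer join ORG_STRUCTURE p on c.PARENT_ID = p.ORG_ID;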
With just this single table we can create a calculation view to model this structure.

Add a parent child hierarchy, more details on this step can be found in the official documentation.

To be able to report on this we need a measure.

The easiest and most sensible option here is to add a counter to count the ORG_IDs.

To test hierarchies, we should use a tool that properly understands hierarchical structures. Below we can see the hierarchy in SAP BusinessObjects Analysis for Microsoft Office.

Alternatively, if Analysis for Office is not available then a workaround is to view the hierarchy within the Analytic Privileges.

To do this, we need to "Enable Hierarchies for SQL Access" in the Calc View properties. This property is exposed if we have a Star Join within the Calc View.

Within the Analytic Privileges dialogue, we can find our hierarchy after first selecting the child attribute, ORG_ID.

We can then test and browse our hierarchy, here it shows both the ID (Value) and the Label.

So far so good. Now joining this hierarchy dimension to a fact table should be straightforward, and it is, provided you use the correct join: an outer join.

create column table EXPENSES (ORG_ID int, EXPENSE_AMOUNT int);
insert into EXPENSES values (1,430);
insert into EXPENSES values (2,120);
insert into EXPENSES values (3,100);
insert into EXPENSES values (4,250);
insert into EXPENSES values (5,530);
insert into EXPENSES values (6,180);
insert into EXPENSES values (8,450);
insert into EXPENSES values (9,250);
insert into EXPENSES values (10,160);
insert into EXPENSES values (12,350);
insert into EXPENSES values (13,130);
insert into EXPENSES values (14,300);
insert into EXPENSES values (15,140);
insert into EXPENSES values (16,550);
insert into EXPENSES values (18,170);
insert into EXPENSES values (19,150);

A common scenario is that not all the organisation entities appear in the fact table, even though they are still part of the hierarchy. We want to ensure that our reporting is accurate and that we do not lose any information. To achieve this, we should use a left outer join on the dimension table.
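In plain SQL, the join performed by the calculation view corresponds roughly to this sketch, with the dimension on the left so that every organisation entity is kept:

select o.ORG_ID, o.JOB_TITLE, e.EXPENSE_AMOUNT
  from ORG_STRUCTURE o
  left outer join EXPENSES e on o.ORG_ID = e.ORG_ID;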

We now have a simple calculation view with a fact table, dimension and a parent child hierarchy.

Switching to Analysis Office, we can report against our parent-child hierarchy. Notice how all members are returned, including the parent and child members where there are no expense amounts.

Leveraging Predictive Analytics in IT Departments

Leveraging Predictive Analytics in IT


Being in analytics I feel sort of bad for IT. They do a ton of work to deploy BI for their business users, but don’t typically benefit since they’re not the end-user.

But we’ve seen a shift where IT is starting to better leverage the analytical capabilities used by the business.


SAP IT Operations Analytics (SAP ITOA) is an analytic tool for IT that gives a single view of all devices within IT, from applications to network devices to servers: anything that generates a syslog.

While real-time monitoring and alerting are cool, no one likes having their lunch interrupted by a text message saying there's a performance issue. Avoiding degradation or outages in advance could have material benefits for your organization. So, we've taken the expertise SAP has built around Predictive Analytics in other areas of the business and applied it to the datacenter. Here are some practical examples…

Notifications about Deviations from Expected Behavior


A simple example of Predictive Analytics in the data center is comparing expected behavior to actual.

Let’s take CPU usage…if it spikes too high that’s bad, you’ll start to see poor performance of your applications. Similarly, if it drops that could be indicative of an issue, perhaps an application has crashed. SAP ITOA predicts performance as expected and sets maximum and minimum error bars based on deviations we’ve seen previously. If actual performance deviates from predicted performance beyond the max and min deviations you can take a proactive action to remedy the situation before it becomes an issue.

Predictive Maintenance


Monitoring will alert you when a system or component needs reactive repair. MTBF data will allow you to move toward preventive maintenance. To operate more efficiently and to mitigate downtime, you will need early alerts that allow you to do predictive maintenance. Companies that have implemented predictive maintenance have seen cost improvements of up to 40% over their preventive expenditures.

Reducing Log Noise


Data centers generate a lot of logs (one of ours is seeing half a billion events per day!). But not every log entry is a relevant event.

Take the example of URL filtering in firewall logs. The administrator may set logs to capture when a specific URL or category of website is accessed, but if the access is within the corporate policy (maybe only accessing shopping websites outside work hours) there is no need to ingest and do analytics on the event.

Predictive Analytics can help IT administrators identify what is an event, like someone accessing a website outside permitted conditions, and non-events, which can be deleted or archived.

Drawing out Correlations


Predictive automation will also allow your operations center professionals to improve their knowledge base. By interacting directly with the log data, they can determine which variables had the most impact on degraded performance or outages. Your team can conduct their own assessments to determine root causes.

Combining the power of SAP ITOA with Predictive Analytics will allow you to:
  • Have advanced notification of incidents before they occur
  • Improve the thresholds set by monitoring tools
  • Forecast when applications, systems and networks will be at utilization rates that will impact performance
  • Increase the knowledge base of your operations staff.

Public Synonyms in SAP HANA

Introduction:


This blog is about my experience working on synonyms in a HANA migration project.

This blog will give you an idea of how synonyms behave in different situations and how to overcome them.




My strange experience with synonyms:

Let us assume there is a schema called SRIVATSAN and the owner for that schema is the user named SRIVATSAN.

A table is created in the SRIVATSAN schema: TABLE_1.

The same table structure with the same name is also created in another schema called SRIVATSAN_NEW.

So now, there are two tables on two different schemas.

Contents of SRIVATSAN.TABLE_1:

NAME        CITY
SRIVATSAN   CHENNAI
NEHAL       PUNE

Contents of SRIVATSAN_NEW.TABLE_1:

NAME        CITY
VIGNESH     CHENNAI
PANKAJ      VARANASI

The user SRIVATSAN logs into the system and creates a procedure in a schema called PROD as follows:

PROCEDURE "PROD"."TEST.DEV.SYNO::P_TEST_SYNONYM_SRIVATSAN" ( )
                LANGUAGE SQLSCRIPT
                SQL SECURITY INVOKER
                AS
BEGIN

SELECT * FROM TABLE_1;

END;

RESULT: the procedure returns the contents of SRIVATSAN.TABLE_1 (SRIVATSAN and NEHAL).

Now another user named PANKAJ logs into the system and creates a procedure in the same schema (PROD):

PROCEDURE "PROD"."TEST.DEV.SYNO::P_TEST_SYNONYM_PANKAJ" ( )
                LANGUAGE SQLSCRIPT
                SQL SECURITY INVOKER
                AS
BEGIN

SELECT * FROM TABLE_1;

END;

When this procedure is executed, the system throws an error saying that table TABLE_1 is not found in the PANKAJ schema.

Now, a public synonym is created for the table TABLE_1:

         CREATE PUBLIC SYNONYM TABLE_1 FOR SRIVATSAN_NEW.TABLE_1;

Now, the table used in the procedure P_TEST_SYNONYM_PANKAJ will refer to the SRIVATSAN_NEW schema.

So the output would be the contents of SRIVATSAN_NEW.TABLE_1 (VIGNESH and PANKAJ).

But this public synonym will not override the table TABLE_1 used in P_TEST_SYNONYM_SRIVATSAN, because the owner of the procedure (SRIVATSAN) owns the schema (SRIVATSAN) which also has a table of the same name.

So, even after the synonym is created, the result of P_TEST_SYNONYM_SRIVATSAN will still be the contents of SRIVATSAN.TABLE_1.

From this I experienced that an unqualified table name used in the procedures of a user who owns a schema of the same name always resolves to the schema which that owner owns.

Only if the table doesn't exist there does the HANA system look into the synonym definitions.

Therefore, creating public synonyms will not override all the table references.

Overcoming and migrating the procedures to different projects:

To prevent a table used in a procedure from resolving to the owner's schema by default, the procedure has to be created while logged on as a different user who doesn't have that particular table in his/her schema; this forces SAP HANA to look for the synonym and choose the schema for the table.

In short, the user who logs in and creates a procedure in a project should not own the tables (used in the procedure) in his/her schema, so that the public synonym is always used by HANA to decide on the schema for the table.

To check the synonyms that are created for a table, the following query will be helpful:

SELECT * FROM SYNONYMS WHERE OBJECT_NAME LIKE '%<tablename>%';
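Note that public synonyms are listed in a separate system view. On the revisions I have worked with, a query along the following lines shows them (system views can differ between versions, so treat this as a sketch):

SELECT * FROM PUBLIC_SYNONYMS WHERE OBJECT_NAME LIKE '%<tablename>%';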

HANA Window Functions: Delivery Block Duration Example

Introduction


I have been working with databases for ages, and always thought they had few limitations, except for the ability to calculate across rows. In basic SQL it's not possible to refer to values in other rows, which makes some calculations very hard or even impossible.

Now working a lot with the SAP HANA database, I learned about window functions, which really opened up a lot of new possibilities.

Window Functions


You can regard a window function as an in-line aggregation: you get the result of the aggregation function on each line. Some simple examples, based on a table "CustomerPeriodRevenue" with the columns Customer, Period, and Revenue, show the idea and the syntax of a window function.

Let’s use a window function now to sum the total revenue of each customer. Here we use the well-known SUM() function and specify the aggregation level with the ‘over(partition by …)’ extension:

select "Customer", "Period", "Revenue",
sum("Revenue") over (partition by "Customer") as "TotalCustomerRevenue"
from "NSLTECH"."CustomerPeriodRevenue"

Each row of the result now also carries the total revenue of its customer in the TotalCustomerRevenue column.

This would also be possible without window functions, by running a subquery which does the aggregation on customer level and joining it to the original table.

If we add an 'order by' clause, we will actually get a running sum over the periods:

select "Customer", "Period", "Revenue",
sum("Revenue") over (partition by "Customer" order by "Period") as "TotalCustomerRevenue"
from "NSLTECH"."CustomerPeriodRevenue"

Each customer's TotalCustomerRevenue now accumulates from period to period instead of showing the overall total.

Calculating Delivery Block Duration


A common question from business is to analyze the time a delivery block (or any other block) has been active. This is a nice example which we can solve with the window function LAG. The LAG function returns the value of a specific field from the previous row in the partition.

Let’s look at some example change documents regarding delivery blocks in the CDPOS/CDHDR table of SAP:

In this data, one document has been blocked and unblocked twice with the same code (07). The records where VALUE_OLD has a value and VALUE_NEW is empty are the moments the blocks were removed. If we take these records as the basis, we would like to join the corresponding records at which the block was set.

However, this is not easily done with a subquery: you can't just match on keys and block values, because in this case the document has been blocked twice. You actually need to find the block record closest to each unblock. This is where the window function LAG comes in.

First we add a couple of helper columns to the raw data:

⇰ ChangeDate: to_seconddate(concat(UDATE, UTIME), 'YYYYMMDDHH24MISS')
⇰ BlockCode: case VALUE_OLD when '' then p.VALUE_NEW else p.VALUE_OLD end
⇰ BlockChange: case VALUE_OLD when '' then 'Block' else 'Unblock' end

Based on this input we calculate the previous ChangeDate for all records using the LAG function:

LAG("ChangeDate") over (partition by TABKEY, TABNAME, FNAME, "BlockCode" order by "ChangeDate") As "PreviousDate"

The complete query:

select
  TABKEY, "BlockCode", "BlockChange", "ChangeDate",  VALUE_OLD, VALUE_NEW,
  LAG("ChangeDate") over (partition by TABKEY, TABNAME, FNAME, "BlockCode" order by "ChangeDate") As "PreviousDate"
from (
  select
     p.MANDANT, p.CHANGENR, p.TABKEY, p.TABNAME, p.FNAME, h.UDATE, h.UTIME,  p.VALUE_OLD, p.VALUE_NEW,
     case p.VALUE_OLD when '' then 'Block' else 'Unblock' end As "BlockChange",
     case  p.VALUE_OLD when '' then p.VALUE_NEW else p.VALUE_OLD end As "BlockCode",
     to_seconddate(concat(UDATE, UTIME), 'YYYYMMDDHH24MISS') As "ChangeDate"
  from SAP.CDPOS p
     inner join SAP.CDHDR h ON p.MANDANT = h.MANDANT and p.OBJECTCLAS = h.OBJECTCLAS and p.OBJECTID = h.OBJECTID and p.CHANGENR = h.CHANGENR
  where fname= 'LIFSK'
)

Each 'Unblock' row now carries the timestamp of its corresponding 'Block' row in the PreviousDate column.

If you select only the 'Unblock' records from these results and calculate the difference between the ChangeDate and the PreviousDate, you get the duration of each block.
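A sketch of that final step, treating the complete query above as a subquery and using HANA's SECONDS_BETWEEN function:

select TABKEY, "BlockCode",
   "PreviousDate" as "BlockSet", "ChangeDate" as "BlockRemoved",
   SECONDS_BETWEEN("PreviousDate", "ChangeDate") as "BlockDurationSeconds"
from ( /* complete query from above */ )
where "BlockChange" = 'Unblock' and "PreviousDate" is not null;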

Flags to enforce the push-down of filters (available in SAP Web IDE since SAP HANA 2.0 SPS02)

With the HANA Data Modeling Tools SPS02 in SAP Web IDE, two new flags were introduced that enforce the push-down of filters to lower nodes in specific situations in which filter push-down would not happen by default. One flag is available in Rank nodes. The other flag is available in all nodes but only has an effect if the respective node is consumed by more than one succeeding node.

Push-down here means that a filter that is defined at a node (or in a query) becomes effective in lower nodes; figuratively, the filter is "pushed down" to lower nodes. A push-down of filters is normally desired because it reduces the number of records at earlier processing stages. Reducing the number of records early helps save resources like memory and CPU and leads to shorter runtimes. Therefore, this push-down typically happens automatically as long as the semantics, and thus the results, are preserved by it.

This push-down does not occur by default for rank nodes and for nodes that feed into two other nodes. The reason is that here the level at which the filter is applied (before the node or after the node) has different semantic implications. The push-down in these situations can, however, be enforced by the developer by checking the newly introduced flags. Therefore, to benefit from performance improvements through enforced filter push-down while not suffering from unintended semantic implications, a thorough understanding of the flags is required. This is why the impact of these flags is discussed here.

Flag “Allow Filter Push Down” (available in Rank nodes)


If a filter is defined above a rank node on a column that is not used as a partition criterion in the rank node, the filter will not be pushed down by default. This can be overruled by setting the flag "Allow Filter Push Down" in the Mapping tab of the Rank node.
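The semantics behind this can be sketched in plain SQL with a window function; the table SALES_DATA and its columns are hypothetical:

-- Filter applied above the ranking (the default, semantically safe behaviour):
select * from (
   select "Region", "Product", "Year", "Sales",
      row_number() over (partition by "Region" order by "Sales" desc) as "Rank"
   from "SALES_DATA"
) where "Rank" = 1 and "Year" = 2017;

-- Pushing the filter "Year" = 2017 below the ranking would exclude rows from other
-- years before ranking, which can change which row ranks first per region. Setting
-- the flag tells the optimizer that this earlier filtering is acceptable.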

Flag to enforce the push-down of filters below the rank node

Flag “Ignore Multiple Outputs For Filter”


The "Ignore Multiple Outputs For Filter" flag is used in situations where a node is consumed by more than one node. Setting the flag has only local effects, and thus it has to be checked in the respective node that has more than one consumer. If the flag is set in the "View Properties", it will only become effective if another model consumes the model in which the flag is set more than once. In nodes, the flag becomes visible when selecting the Mapping tab of a node without selecting any data source in the Mapping dialog.

Enforcing the push-down of filters with these flags can lead to differences in results. In each case, the developer should ensure that filtering early is indeed the intended semantic. Used correctly, these flags can help reduce data at early stages of processing and thus save memory and runtime.

SAP HANA High Availability with Minimal Setup (a step by step procedure)

This blog gives you information about the minimal setup required for SAP HANA high availability: how to add a standby host and perform a (simulated) failover, and how services, hosts and volumes look before and after the failover.

For high availability, a distributed HANA (scale-out) setup is required.

The minimal setup for a scale-out is two servers (one worker, one standby).

When an active (worker) host fails, a standby host automatically takes its place.

For that, the standby host needs shared access to the database volumes.

Note: standby hosts do not contain any data and do not accept requests or queries.

Host 1 (first node):

host role = Worker

host name = hanasl12

SID = HIA

Instance Number = 00

IP Address = 192.168.1.149

Host 2 (second node):

host role = Standby

hostname = hanadb4 (alias hanadb2)

SID = HIA

Instance number = 00

IP Address = 192.168.1.172

failover group = default

NFS is used here to share the file systems (/hana/shared, /hana/data and /hana/log).

Export /hana from the first node in /etc/exports:
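An illustrative /etc/exports entry on the first node could look like this (the client address and mount options are assumptions; adjust them to your network and security requirements):

/hana   192.168.1.172(rw,no_root_squash,sync)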

On the second node, maintain /etc/fstab as shown below and mount the file systems.
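An illustrative set of /etc/fstab entries on the second node (the NFS options are assumptions):

192.168.1.149:/hana/shared   /hana/shared   nfs   defaults   0 0
192.168.1.149:/hana/data     /hana/data     nfs   defaults   0 0
192.168.1.149:/hana/log      /hana/log      nfs   defaults   0 0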

Install SID HIA on the master node (first node) using the installation medium's HDBLCM; at this point, before the standby node is added, all services run on the first node.

Hosts:

On the master node, execute the action configure_internal_network (using the resident HDBLCM).

Then, on the second node: run the resident HDBLCM to add the host (action add_hosts)
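The equivalent call looks roughly like this (a sketch; the --addhosts value is an assumption derived from the host data above):

# as root on the second node
cd /hana/shared/HIA/hdblcm
./hdblcm --action=add_hosts --addhosts=hanadb4:role=standby:group=default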

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Select Host role as “standby”

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

The screen below shows the services after adding the standby node:

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Hosts – after adding standby node:

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Volumes (before failover): attached to the active (worker) host – the first node

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

To simulate a failover, I killed the daemon process on the first node.
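One way to do this (a sketch; run as the <sid>adm user, here hiaadm, on the first node):

# find the PID of the hdbdaemon process and kill it hard;
# the standby host should then take over
ps -ef | grep hdbdaemon
kill -9 <pid>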

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

The screen below shows the first node stopped; the second node now has Master name server (actual role) and Master index server (actual role).

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Volumes (after failover): attached to the second node

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Start the instance on the first node again; it now takes the standby role (as its actual role).

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Learning, SAP HANA Guides

Video Roundup: Troubleshooting, Installing, Provisioning, the List Goes On…

The videos below all fall into one of these categories:

1. Troubleshooting
2. Installation and setup
3. Informational (backup and restore, adapters, streaming lite)
4. What’s new?

    Troubleshooting


    Can’t Add a Server Connection in Studio
    Use this tip to avoid errors when adding a streaming analytics server connection through the SAP HANA Studio Run-Test perspective.


    An Adapter Fails to Connect to a Project

    Learn how to fix connection issues between external unmanaged adapters and streaming analytics projects.


    Troubleshooting Cluster Connection Problems

    Learn how to check the status of your streaming analytics server in the SAP HANA studio and the SAP HANA cockpit, and find out what to look for when diagnosing cluster connection issues.



    Installation and Setup


    Prepare to Install Streaming Analytics

    Learn what you need to do to prep for installing SAP HANA streaming analytics.



    Install Streaming Analytics on an Existing System

    Learn how to add SAP HANA streaming analytics to an existing SAP HANA system, and optionally add the streaming analytics host at the same time.

    The video below uses the graphical installer, but if you want to use a console interface, take a look at the video here.


    Add a Streaming Host

    Learn how to add the streaming analytics host to an SAP HANA system.

    The video below uses the graphical interface, but if you want to use a console instead, take a look at the video here.


    Provision the Service to a Tenant

    Learn how to provision the streaming analytics service to a tenant. If your HANA system has more than one tenant, this has to be a manual process.


    Initialize the Streaming Service

    After provisioning a service, you need to initialize the connection between the streaming service and the tenant database.


    Info on Various Topics


    Running Adapters in Cluster-Managed Mode

    Along with unmanaged and managed modes, you can also run streaming analytics adapters in cluster-managed mode, which combines some features from the first two.


    Backup and Recovery, Part 1: Backup

    Learn how to do a full backup of the HANA system with SAP HANA streaming analytics. Specifically, learn how to back up HANA, the streaming configuration files, and project files.


    Backup and Recovery, Part 2: Recovery

    Learn how to recover a HANA system with SAP HANA streaming analytics from a backup.


    Deploying a Project in Streaming Lite

    In this video, learn how to install streaming lite, then how to compile and deploy a completed streaming project on it.


    What’s New?


    What’s New in 2.0 SP 02?

    The SAP HANA 2.0 SP 02 release brings various new and improved features to streaming analytics, including a name change, enhancements to the streaming plugin for SAP Web IDE and the streaming analytics runtime tool, guaranteed delivery for the Streaming Web Service, and new polling properties for a few input adapters.


    That’s all for now, but we’re posting a couple of videos every month, so keep an eye out for new ones!

    Calling SOAP Web Service from HANA XSJS

I am writing this blog to share my experience and code snippets for calling a SOAP web service from XSJS: creating an XML input (based on input parameters) and parsing the XML response to build the JSON output of your XSJS service.

Firstly, the xshttpdest file must be located in the same folder as your XSJS file (which, in our case, contains the code calling the SOAP web service) – not just the same main package, but the same folder/sub-package.


Initially, we were not able to access the web service from our HANA server because of the proxy settings on the server. Our network team disabled the proxy settings on the HANA DEV server; for the PROD server they might suggest using a proxy instead.

All the other configurations are the same as for any outbound XSJS call; only creating the XML input and analyzing the XML response are specific to SOAP.

I wanted to share the code snippets even though they might look repetitive, to make the blog complete with all the required code for new users as well.

Let us see what the SOAP web service call will look like. Import the WSDL and open it with a tool like SOAP UI, which can help us test the service. Use the host, port, and path as seen there.

    SAP HANA Certifications, SAP HANA Materials, SAP HANA Tutorials, SAP HANA Learning, SAP HANA
    Xshttpdest file configurations:

    SAP HANA Certifications, SAP HANA Materials, SAP HANA Tutorials, SAP HANA Learning, SAP HANA
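In plain text, such a file typically contains entries along these lines (a sketch; host, port, and path are placeholders for the values taken from SOAP UI):

description = "SOAP web service destination";
host = "services.example.com";
port = 8080;
pathPrefix = "/MyWebService";
authType = none;
useProxy = false;
timeout = 0;
useSSL = false;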

    XSJS Code to call the SOAP web service

    SAP HANA Certifications, SAP HANA Materials, SAP HANA Tutorials, SAP HANA Learning, SAP HANA
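In essence, the call follows the standard XSJS outbound pattern (a sketch; the package, destination, and variable names are assumptions):

// read the destination defined in the .xshttpdest file
var destination = $.net.http.readDestination("my.package", "soapService");
var client = new $.net.http.Client();
var request = new $.web.WebRequest($.net.http.POST, "");
request.headers.set("Content-Type", "text/xml; charset=UTF-8");
request.setBody(soapEnvelope); // the XML input built as a string (see below)
client.request(request, destination);
var response = client.getResponse();
var responseXml = response.body.asString(); // raw SOAP response to parse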

We will now see how to build the XML input for the SOAP web service.

    SAP HANA Certifications, SAP HANA Materials, SAP HANA Tutorials, SAP HANA Learning, SAP HANA

I followed the approach of creating the XML input as a string and replacing placeholders in the string with the values from the input request.
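A sketch of this approach (the element names and the placeholder are assumptions):

// build the SOAP envelope as a string with a placeholder...
var soapEnvelope =
    '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soapenv:Body><GetDetails><CustomerId>$CUSTOMER$</CustomerId></GetDetails></soapenv:Body>' +
    '</soapenv:Envelope>';
// ...and replace the placeholder with the value from the input request
soapEnvelope = soapEnvelope.replace("$CUSTOMER$", $.request.parameters.get("customerId"));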

Observe the response XML in the SOAP UI tool and identify the nodes which give you the required values. In my case, I had to read values from a node only if it satisfied a specific condition.

My XSJS code will look like this:

    SAP HANA Certifications, SAP HANA Materials, SAP HANA Tutorials, SAP HANA Learning, SAP HANA

    SAP HANA Certifications, SAP HANA Materials, SAP HANA Tutorials, SAP HANA Learning, SAP HANA

    SAP HANA Certifications, SAP HANA Materials, SAP HANA Tutorials, SAP HANA Learning, SAP HANA

The extra logic I have included here: finding the indexOf of the required node name and using a Boolean flag so that the values of the child nodes are picked up as well, then building a JSON array with the response data. We can now use this array to construct the actual XSJS response.

    Change Master/Shadow role of cloud connector manually

A High Availability configuration requires a master and a shadow cloud connector. During a takeover, exceptional situations can occur in which both cloud connectors in the High Availability configuration end up in the shadow role or both in the master role. If such a misconfiguration occurs, the SCP connection is lost and the satellite systems cannot connect to SAP Cloud Platform (formerly known as HANA Cloud Platform).

How this can happen:

If the cloud connector HA setup is not done properly, you might end up in a situation where both cloud connectors act as master or both act as shadow.

Options to rectify:

When you install the cloud connector, you can choose between a master and a shadow installation. If you are trapped in the situation described above, you could reinstall both the master and the shadow and choose the roles again during the initial configuration. However, you would lose all the configuration and connections set up with the satellite systems, so this is not recommended.

Other option:

You can change the role as required using the configuration.jar file.

    Prerequisites:

1. You must be logged on at OS level.

2. sapjvm7 or higher must be installed; preferably sapjvm8 or higher for newer versions of the cloud connector.

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides

    3. configuration.jar should have executable rights.

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides

    Procedure:

1. Log on at OS level and go to the folder where SCC is installed. On Linux you can find it under the /opt/sap/scc/ directory.
2. Shut down the cloud connector for which you want to change the role using the command
service scc_daemon stop

3. Run the command below from that directory, where [master/shadow] is the role you want to assign to this cloud connector:

java -jar configuration.jar -be [master/shadow]

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides

4. Start the cloud connector:

    service scc_daemon start

Check the status for confirmation. It should be started, and the output should show you the URL for logging on:
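The status check uses the same daemon script as the stop/start commands above:

service scc_daemon status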

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides

    Getting a Permanent License Key for SAP HANA Streaming Analytics

    When you first install SAP HANA streaming analytics, a temporary license is automatically installed so you can run streaming analytics for 90 days. After that, you’ll need to request and install a permanent license, which you can do using HANA cockpit and the SAP Support Portal. Here’s how:

1. Log into SAP HANA cockpit and connect to your system as a user with the LICENSE ADMIN system privilege.

    2. From the system overview page, scroll down and click Manage system licenses. This will give you some important system info that you’ll need to request a license.

TIP: Don’t worry about jotting this down; this page will remain open so you can refer back to it as needed.

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA

    3. Click Go to SAP Support Portal.
    4. In SAP Support Portal, click Request Keys.
5. On the next page, click the + icon to add a system. For the Product field, select SAP HANA, platform edition and then select the version of SAP HANA that you’re using.

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA

    Additional fields appear. Fill them out using the information from HANA cockpit and click Continue.

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA

    6. On the next screen, click the + icon to create a license key request.
    7. For the License Type field, select SAP HANA smart data streaming and fill out the rest of the fields based on your needs.
    8. Click Add.
    9. Your license key request displays in a row. Select the row and click Generate.
    10. The page refreshes and displays your new license key. You can download or email it.
    11. Back on the license page in SAP HANA cockpit, click the more icon to upload your new license.

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA Guides, SAP HANA

    TIP: The name of the license key file is your system name. 

    12. Upload your file. Once this is done, a message will appear at the bottom of the page letting you know the license key has been installed.

    Size of log backup, complete data backup in HANA

    Basic information:

    Backup performed for

    Data

    The data volume contains the data from the last completed Savepoint.

    Log

    The log volume contains all changes on the data volume since the last completed savepoint.


    Each log volume contains the file logsegment_<partition_ID>_directory.dat and one or more log segment files (logsegment_<partition_ID>_<segment_number>.dat).

Currently only one log partition is supported for each service, so the default file names are:

    ◉ logsegment_000_directory.dat
    ◉ and logsegment_000_00000000.dat, logsegment_000_00000001.dat, logsegment_000_00000002.dat
    ◉ and so on.

    Log segment files are cyclically overwritten depending on the log mode. The log mode determines how logs are backed up. Log volumes only grow if there are no more segment files available for overwriting.

Log backups happen regularly (here: every hour, every day) to free up log segments so that the database can continue to work. The current size of the log segments can be checked, for example, in the monitoring view M_LOG_SEGMENTS.
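For example (a sketch; sizes are reported in bytes):

SELECT HOST, PORT, STATE, COUNT(*) AS SEGMENTS, SUM(TOTAL_SIZE) AS TOTAL_SIZE
  FROM M_LOG_SEGMENTS
 GROUP BY HOST, PORT, STATE;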

Problem description: find out how much log backup volume was generated in one day, one week, or one month.

Idea: To my knowledge, there is no direct table, query, or report available to find out this information. All we know is that the M_BACKUP_CATALOG view contains the information about the log backups.

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA

The M_BACKUP_CATALOG_FILES view additionally gives you the backup size details (BACKUP_SIZE).

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA

Solution:

Join the two views M_BACKUP_CATALOG and M_BACKUP_CATALOG_FILES and sum up BACKUP_SIZE to find the log backup size, for example for one day. Change the dates etc. as per your need.

SELECT SUM(B.BACKUP_SIZE) FROM M_BACKUP_CATALOG A JOIN M_BACKUP_CATALOG_FILES B ON A.BACKUP_ID = B.BACKUP_ID WHERE A.ENTRY_TYPE_NAME = 'log backup' AND UTC_END_TIME BETWEEN '31.08.2017 00:00:00' AND '31.08.2017 23:00:00'

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA

Similarly, you can find the size of a ‘complete data backup’.
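The only change is the entry type name; the date range stays the same:

SELECT SUM(B.BACKUP_SIZE) FROM M_BACKUP_CATALOG A JOIN M_BACKUP_CATALOG_FILES B ON A.BACKUP_ID = B.BACKUP_ID WHERE A.ENTRY_TYPE_NAME = 'complete data backup' AND UTC_END_TIME BETWEEN '31.08.2017 00:00:00' AND '31.08.2017 23:00:00'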

    SAP HANA Tutorials and Materials, SAP HANA Certifications, SAP HANA

    Introducing SAP HANA Cloud Platform predictive services

    Here at the SAP HANA Academy we’ve put together a series of hands-on video tutorials that show the basics of predictive services from both an admin and developer perspective.

    The first video tutorial covers getting started topics such as where to find the documentation.

Tutorials 2, 3, and 4 cover the one-time setup process for new HCP landscapes (admin tasks).

    If you just want to get a feel for how to develop with predictive services you can jump straight into tutorial #5 “First Steps”.


Tutorials 6-11 cover the individual services in more depth: dataset, key influencer, scoring equation, forecast, outliers, and what-if.

Tutorials 12-18 cover how to create an HTML5 app using SAP Web IDE that incorporates the predictive services.

    Each tutorial includes an example and you can follow along yourself as all data and code snippets have been posted to our GitHub repository.

    1. Getting Started


    2. Setup – Deploy & Install APL


    3. Setup – Create Technical User


    4. Setup – Binding, Roles, Start


    5. First Steps


    6. Dataset Service


    7. Key Influencer Service


    8. Scoring Equation Service


    9. Forecast Service


    10. Outliers Service


    11. What If Service


    12. HTML5 – HCP Destination


    13. HTML5 – Create Project


    14. HTML5 – Register a Dataset


    15. HTML5 – Controls for DatasetID and Target Variable


    16. HTML5 – DatasetID onChange


    17. HTML5 – Key Influencers


    18. HTML5 – Deploy to HCP

    General information about Data Aging

Many people have heard about data aging in the context of HANA. You see it on a lot of SAP slides about DLM (Data Lifecycle Management), in BoH / S/4HANA contexts, in books about HANA, and especially in the results of the HANA ABAP sizing reports. In my opinion the naming used there – “clean-up” – is a little misleading: it is more than just housekeeping, and there are a lot of things you have to pay attention to.

The sizing report uses a threshold of 15 days by default; all older data is placed in the cold store and handled as historical data. This should be adjusted to your business data, and in most cases 30 days or more will be used. As a result you will get a different sizing.



    Prerequisites


    1. The SAP application must provide Data Aging Objects
    2. The Profile Parameter abap/data_aging is set to ‘on’
    3. The Data Aging business function (DAAG_DATA_AGING) is switched on
    4. Relevant authorizations for Data Aging activities are provided

SAP Application | Techn. Data Aging Object | Availability
Application Log (BC-SRV-BAL) | BC_SBAL | 7.40 SPS8
ALE Integration Technology (BC-MID-ALE) | BC_IDOC | 7.40 SPS8
G/L Accounting (FI-GL) in Smart Financials 1.0 | FI_DOCUMENT | Simple Finance add-on 1.0
Workflow (BC-BMT-WFM-RUN) | BC_WORKITEM | 7.40 SPS12
Change Document (BC-SRV-ASF-CHD) | BC_CHDO | 7.40 SPS12
Sales Documents | SD_VBAK | S/4 1610
Material Documents | MM_MATDOC | S/4 1511
Deliveries | LE_LIKP | S/4 1610
Billing Documents | SD_VBRK | S/4 1610
Purchase Orders | MM_EKKO | S/4 1610

    Sizing


    SAP HANA Tutorials and Certifications, SAP HANA Materials, SAP HANA Guides

Check in the first section of the sizing report whether you benefit from data aging. You can see the benefit under the description ‘data clean-up’, but this is the sum over all data aging objects. Maybe you only want to activate a specific one and want to know for which tables it is worthwhile. Go to the clean-up details for the exact breakdown.

    SAP HANA Tutorials and Certifications, SAP HANA Materials, SAP HANA Guides

    SAP HANA Tutorials and Certifications, SAP HANA Materials, SAP HANA Guides

For most ERP systems you will see that the biggest benefit can be achieved in the change document area with the aging object BC_CHDO, which affects the tables CDPOS and CDHDR. In this example you save 5.3 GB with a 15-day retention, and since the corresponding 5.3 GB of working space is saved as well, the memory footprint shrinks by roughly 10.6 GB in total. Even so, in this case I would recommend not to use data aging, because the effort to activate it and adapt the custom code may be too high.

    Transactions


Transaction | Description
DAGOBJ | Display all Data Aging Objects
DAGPTC | Customizing for Partitioning
DAGPTM | Manage Partitioning
DAGRUN | Overview of Data Aging Runs
DAGADM | Managing Data Aging Objects
DAGLOG | Data Aging Log

    Checklist

• use the sizing report to estimate your benefits
• compare different sizing runs with report /SDF/READ_HDB_SIZING_RESULTS_2
• check if you have Z-coding that uses the aged tables => use the classes CL_ABAP_SESSION_TEMPERATURE and CL_ABAP_STACK_TEMPERATURE to access cold data
• check the HANA parameters for cold data
• check the sizing of the data volume
• start partitioning on a high level (year > month > week), because you can move to a lower level later but not vice versa
• keep checking your data aging runs

    Working with the predictive analysis library in HANA 2.0 SPS02

HANA 2.0 SPS 02 is now available, and there have been a number of important updates to the predictive analysis library (PAL). The focus is on ease of use rather than on introducing a bunch of new algorithms (though there are a couple).

    In this blog I’ll introduce the updates and show you where to find hands-on tutorial videos:

    ◉ Type-any procedures
    ◉ Web-based modeler
    ◉ New algorithms
    ◉ Enhanced algorithms


    Type-any procedures

If you’ve worked with the PAL in the past you’ll be familiar with the “wrapper generator”. This stored procedure generates stored procedures specific to the data structures of your particular scenario, specified explicitly with table types. This is partly because SQLScript is a statically typed language, so it’s difficult to overload a single procedure with multiple options. Whenever the data structures change – because your scenario has evolved, or indeed the PAL capabilities themselves have evolved – it’s often necessary to re-create the procedure.

The all-new Type-any syntax removes the need for all that by providing a library of ready-built procedures that you can call at will. No wrapper generator or table types required – so less development work for you, less maintenance, and more succinct code. A simple example might look like this:

    -- parameters
    CREATE LOCAL TEMPORARY COLUMN TABLE "#PARAMS" ("NAME" VARCHAR(60), "INTARGS" INTEGER, "DOUBLEARGS" DOUBLE, "STRINGARGS" VARCHAR(100));
    INSERT INTO "#PARAMS" VALUES ('THREAD_RATIO', null, 1.0, null); -- Between 0 and 1
    INSERT INTO "#PARAMS" VALUES ('FACTOR_NUMBER', 2, null, null); -- Number of factors

    -- call : results inline
    CALL "_SYS_AFL"."PAL_FACTOR_ANALYSIS" ("MYSCHEMA"."MYDATA", "#PARAMS", ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);

So you just need to bring the input data, set any run-time parameters, and call the Type-any procedure directly. Notice that the procedure is located in the _SYS_AFL schema and is prefixed with PAL_; it’s pretty much the same for all algorithms. Unfortunately, real-time state-enabled scoring didn’t quite make it – but we can always hope this will be available via the Type-any syntax at some point in the future.

    I don’t know about you but that’s how I envisioned calling a PAL algorithm way way back when it was first introduced (HANA 1.0 SPS 04 anyone)? Better late than never I suppose!

    Web-based modeler

Despite access to the PAL via code now being a piece of cake (see above), some folks prefer not to write any code at all. Hey, I’m one of those!

Well, HANA has provided the application function modeler (AFM) since HANA 1.0 SPS 06, delivering the functionality via the flowgraph editor in HANA Studio (or in Eclipse with the HANA tools installed). However, we’ve never had a web-based version – even though XS Advanced and Web IDE for HANA do support the flowgraph editor.

    Well not anymore!!! With SPS 02 the web-based flowgraph editor now includes a Predictive Analysis node. You can now combine all the ETL stuff you know and love (hey we know that’s often 80% of the job with predictive) and fully integrate that with PAL functions. So your flowgraph might look like this:

    HANA 2.0, SPS02, SAP HANA, SAP HANA Tutorials and Materials

    OK that’s admittedly a very simple example – but you get the idea? You can have as many Predictive Analysis nodes in your process flow as you want. When you build the flowgraph, required stored procedures and tables are generated without you having to write a single line of SQL Script. Yay!

    One thing to bear in mind is that the flowgraph editor is typically about one release behind the PAL. So the PAL functions available correspond to those delivered with HANA 2.0 SPS 01 (i.e. the new algorithms introduced with SPS 02 aren’t yet available). Also you may need to jump through a few hoops to get the Predictive Analysis node to appear – but that’s all covered in the hands-on tutorials.

    New algorithms

    A couple of new algorithms have been introduced to help with data preparation:

    Factor Analysis (did you already spot it in the Type-any example above?) allows you to identify latent variables. Imagine you have information from a survey that includes income, education, and occupation however the responses to all of these are similar as they all relate to social-economic status. The goal of Factor Analysis is to identify scenarios like this and allow you to minimize the number of necessary dimensions to be used in your model.

    Multi-dimensional Scaling is similar in many ways, by allowing you to reduce the number of dimensions in a given dataset. The goal is to simplify the data so that when there are many observed dimensions that are highly similar they can be reduced into a single dimension. This also helps with visualization (how do you visualize a dataset with more than 4 dimensions?)

    Enhanced algorithms

    Finally, a number of algorithms have been enhanced:

    Real-time state-enabled scoring has been extended to support additional PAL functions. Real-time scoring is cool as it allows you to optimize response times when repeatedly scoring from the same model such as when part of a transaction. For complex models, calling a regular scoring function involves parsing the model which can far outweigh the time to actually do the scoring itself. However state-enabled real-time scoring allows you to store the parsed model in-memory ready for ultra-fast execution as and when new transactions arrive. The newly supported functions are LDA Inference, NBC, BPNN, decision trees, PCA projection, cluster assignment, binning assignment, LDA project, LDA CLASSIFY, and Posterior Scaling. I’ve included an example using decision trees in the tutorial videos.

There are some generic parameters worthy of note (a sketch of how they are set follows below):
• THREAD_RATIO is a new parameter allowing you to optimize the use of multi-threading based on a percentage of the available threads
• HAS_ID allows you to specify whether the dataset does or does not have an ID in the first column
• CATEGORICAL_VARIABLE and DEPENDENT_VARIABLE allow you to consistently specify categorical and dependent variables irrespective of the PAL function
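As an illustration, these parameters go into the same parameter table as in the Type-any example above (a sketch; the column names CLASS and REGION are assumptions):

INSERT INTO "#PARAMS" VALUES ('HAS_ID', 1, null, null);                       -- first column is an ID
INSERT INTO "#PARAMS" VALUES ('DEPENDENT_VARIABLE', null, null, 'CLASS');     -- target column
INSERT INTO "#PARAMS" VALUES ('CATEGORICAL_VARIABLE', null, null, 'REGION');  -- treat REGION as categorical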