Channel: SAP HANA Central

What is SAP HANA?

SAP HANA is an in-memory data platform that is deployable as an on-premise appliance or in the cloud. It is a revolutionary platform best suited for performing real-time analytics and for developing and deploying real-time applications. At the core of this real-time data platform is the SAP HANA database, which is fundamentally different from any other database engine on the market today (Figure 1).


Figure 1:  SAP HANA - platform for a new class of real-time analytics and applications

SAP HANA is well suited whenever companies have to go deep within their data sets to ask complex, interactive questions, and at the same time go broad, working with enormous data sets of different types and from different sources. Increasingly, this data needs to be recent, preferably real-time. Add the need for high speed (very fast response times and true interactivity) and the need to do all of this without any pre-fabrication (no data preparation, no pre-aggregates, no tuning), and you have a unique combination of requirements that only SAP HANA can address effectively. Whenever this set of needs, or any subset of it, has to be addressed, SAP HANA is in its element.

Analytics and Applications
  • Real-time analytics – the categories of analytics in which HANA specializes
    • Operational Reporting (real-time insights from transaction systems such as custom or SAP ERP). This covers Sales Reporting (improving fulfillment rates and accelerating key sales processes), Financial Reporting (immediate insights across revenue, customers, accounts payable, etc.), Shipping Reporting (better enabling complete stock overview analysis), Purchasing Reporting (complete real-time analysis of complete order history) and Master Data Reporting (real-time ability to impact productivity and accuracy).
    • Data Warehousing (SAP NetWeaver BW on HANA) – BW customers can run their entire BW application on the SAP HANA platform leading to unprecedented BW performance (queries run 10-100 times faster; data loads 5-10 times faster; calculations run 5-10 times faster), a dramatically simplified IT landscape (leads to greater operational efficiency and reduced waste), and a business community able to make faster decisions. Moreover, not only is the BW investment of these customers preserved but also super-charged. Customers can migrate with ease to the SAP HANA database without impacting the BW application layer at all.
    • Predictive and Text analysis on Big Data - To succeed, companies must go beyond focusing on delivering the best product or service and uncover customer/employee /vendor/partner trends and insights, anticipate behavior and take proactive action. SAP HANA provides the ability to perform predictive and text analysis on large volumes of data in real-time. It does this through the power of its in-database predictive algorithms and its R integration capability. With its text search/analysis capabilities SAP HANA also provides a robust way to leverage unstructured data.
  • Real-time applications – the categories of applications in which HANA specializes
    • Core process accelerators– Accelerate business reporting by leveraging ERP Accelerators, which are non-disruptive ways to take advantage of in-memory technology. These solutions involve an SAP HANA database sitting next to a customer’s SAP ERP system. Transactional data is replicated in real-time from ECC into HANA for immediate reporting, and then results can even be fed back into ECC. Solutions include CO-PA Accelerator, Finance and Controlling Accelerator, Customer Segmentation Accelerator, Sales Pipeline Analysis, and more.
    • Planning, Optimization Apps– SAP HANA excels at applications that require complex scheduling with fast results, and SAP is delivering solutions that no other vendor can match. These include Sales & Operational Planning, BusinessObjects Planning & Consolidation, Cash Forecasting, ATP calculation, Margin calculation,  Manufacturing scheduling optimization (from start-up Optessa), and more.
    • Sense & response apps– These applications offer real-time insights on Big Data such as smart meter data, point-of-sale data, social media data, and more. They involve complexities such as personalized insight and recommendations, text search and mining, and predictive analytics. Only SAP HANA is well suited for such applications, including Smart Meter Analytics, SAP Supplier InfoNet, SAP precision retailing, and Geo-spatial Visualization apps (from start-up Space-Time Insight). Typically these processes tend to be data-intensive and many could not be deployed in the past owing to cost and performance constraints.

What is the secret sauce?

Other database management systems on the market are typically either good at transactional workloads, or analytical workloads, but not both. When transactional DBMS products are used for analytical workloads, they require you to separate your workloads into different databases (OLAP and OLTP). You have to extract data from your transactional system (ERP), transform that data for reporting, and load it into a reporting database (BW). The reporting database still requires significant effort in creating and maintaining tuning structures such as aggregates and indexes to provide even moderate performance.

Due to its hybrid structure for processing transactional workloads and analytical workloads fully in-memory, SAP HANA combines the best of both worlds. You don’t need to take the time to load data from your transactional database into your reporting database, or even build traditional tuning structures to enable that reporting. As transactions are happening, you can report against them live. By consolidating two landscapes (OLAP and OLTP) into a single database, SAP HANA provides companies with massively lower TCO in addition to mind-blowing speed.


Figure 2: Broad Portfolio of SAP HANA enabled Solutions – Like “Games” on the “Xbox”

But even more important is the new application programming paradigm enabled for “extreme” applications. Since the SAP HANA database resides entirely in-memory all the time, additional complex calculations, functions and data-intensive operations can happen on the data directly in the database, without requiring time-consuming and costly movements of data between the database and applications.  This incredible simplification and optimization of the data layer is the “killer feature” of SAP HANA because it removes multiple layers of technology and significant human effort to get incredible speed.  It also has the benefit of reducing the overall TCO of the entire solution.

Some other database engines on the market today might claim to provide one or another benefit that SAP HANA brings. However, none of them can deliver on all of them. This is real-time computing, and customers can take advantage of this today via SAP BW on SAP HANA, Accelerators on SAP HANA and native SAP HANA applications (figure 2).


Why In-Memory Processing?

Necessity is the mother of invention. Consider the statistics on the growth of processing speed versus storage capacity (Figure 3).


Figure 3: Processing Speed versus Storage Capacity

Source: scn.sap.com

Table Transpose in SAP HANA Modeling

Approach 1:
  • Analytic view will be built on each base table column which needs transposition.
  • In this case 6 columns need transposition, hence 6 Analytic views will be created.
  • Calculated Column (VALUE) is created in each Analytic view which derives the value of a particular month in a year.
  • Create Calculation View based on Analytic Views created above and join them together using Union with Constant Value.
  • No need to create Calculated Column (MONTH) in each Analytic view as this can be derived in Calculation View to improve performance. 

Approach 2:
  • One general Analytic view will be created instead of several; in it, the required attributes and measures are selected.
  • In this case we select 6 measures M_JAN, M_FEB, M_MAR, M_APR, M_MAY, M_JUN in addition to common attributes.
  • Create Calculation View based on general Analytic View created above and join them together using Union with Constant Value.
  • Calculated Column (VALUE) is created in each Projection node which derives the value of a particular month in a year. 
Approach 3:
  • No Analytic view will be created; instead, the base table will be used directly.
  • Create a Calculation View that uses the base table directly in each projection node.
  • Here also 6 projection nodes will be used.
  • Calculated Column (VALUE) is created in each Projection node which derives the value of a particular month in a year. 
------------------------------------------------------------------------------------
Approach 4 (Recommended):
With a single SQLScript-based calculation view, the table can be easily transposed.
This is the easiest approach and performs better than the others.
------------------------------------------------------------------------------------
The DDL used for this example is given below:
------------------------------------------------------------------------------------
CREATE COLUMN TABLE TEST.ACTUALS (
     ID INTEGER NOT NULL,
     NAME VARCHAR (20) NOT NULL,
     YEAR VARCHAR (4),
     M_JAN INTEGER,
     M_FEB INTEGER,
     M_MAR INTEGER,
     M_APR INTEGER,
     M_MAY INTEGER,
     M_JUN INTEGER,
     PRIMARY KEY (ID));
INSERT INTO TEST.ACTUALS VALUES (1,'NAME1','2012',101,102,103,104,105,106);
INSERT INTO TEST.ACTUALS VALUES (2,'NAME2','2012',111,112,113,114,115,116);
INSERT INTO TEST.ACTUALS VALUES (3,'NAME3','2012',121,122,123,124,125,126);
INSERT INTO TEST.ACTUALS VALUES (4,'NAME4','2012',131,132,133,134,135,136);
INSERT INTO TEST.ACTUALS VALUES (5,'NAME5','2012',141,142,143,144,145,146);
INSERT INTO TEST.ACTUALS VALUES (6,'NAME6','2013',201,202,203,204,205,206);
INSERT INTO TEST.ACTUALS VALUES (7,'NAME7','2013',211,212,213,214,215,216);
INSERT INTO TEST.ACTUALS VALUES (8,'NAME8','2013',221,222,223,224,225,226);
INSERT INTO TEST.ACTUALS VALUES (9,'NAME9','2013',231,232,233,234,235,236);
INSERT INTO TEST.ACTUALS VALUES (10,'NAME10','2013',241,242,243,244,245,246);
------------------------------------------------------------------------------------
The data in the table consists of the ten rows inserted above. The goal is to transpose the six month columns into rows, so that each record of the form (ID, NAME, YEAR, M_JAN, ..., M_JUN) becomes six records with the columns ID, NAME, YEAR, MONTH, VALUE. For example, the first row above yields (1, 'NAME1', '2012', 'JAN', 101) through (1, 'NAME1', '2012', 'JUN', 106).

Implementation steps for Approach 1:
  • Analytic view will be built on each base table column which needs transposition.
  • In this case 6 columns need transposition, hence 6 Analytic views will be created.
  • Calculated Column (VALUE) is created in each Analytic view which derives the value of a particular month in a year.
  • Create Calculation View based on Analytic Views created above and join them together using Union with Constant Value.
  • No need to create Calculated Column (MONTH) in each Analytic view as this can be derived in Calculation View to improve performance. 
Now let us see this in action.

Let’s start by building the Analytic view AN_M_JAN based on column M_JAN. In the Data Foundation, select the attributes ID, NAME and YEAR, which will be common to all Analytic views, plus only the month column M_JAN, skipping the other columns.


In the Logical Join, create a Calculated Column VALUE whose expression is simply the base table column of the same name ("M_JAN"), and validate the syntax.
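In SQL terms, this first analytic view is roughly equivalent to the following projection (a sketch for illustration only; the view itself is built graphically, and the MONTH label is added later in the calculation view):

SELECT ID, NAME, YEAR, M_JAN AS VALUE
FROM TEST.ACTUALS;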


In the Semantics node, hide the attribute M_JAN, as it is not required in the output.


Now validate and activate the Analytic view and run a data preview. You will see only the values corresponding to M_JAN.


Create a second Analytic view, AN_M_FEB, based on column M_FEB; the process is the same as for M_JAN above. In the Data Foundation, make sure you select the month M_FEB, not M_JAN.


The data preview for AN_M_FEB shows values for M_FEB only.


Similarly, create the other four Analytic views: AN_M_MAR, AN_M_APR, AN_M_MAY and AN_M_JUN.

Create the Calculation View CA_ACTUALS_MONTH. From the scenario panel, drag and drop a "Projection" node and add the Analytic view to it. Do not select the M_JAN column, as the Calculated Column VALUE is used instead. Similarly, add a Projection node for each of the other Analytic views; in total, six Projection nodes are required, one per Analytic view.


Now add a "Union" node above the six "Projection" nodes and join them. In the Details section, click "Auto Map by Name". The only attribute missing in the output is the month. In Target(s) under the Details section, create a target column MONTH with data type VARCHAR and size 3, which will contain the three-letter month names (e.g. JAN, FEB, MAR).


Right-click on MONTH, choose "Manage Mappings", and enter the constant value for each source model accordingly (e.g. the constant 'JAN' for the projection based on AN_M_JAN).


This completes the structure of the Calculation view.

Save, validate and activate the view, then run a data preview. The result is our desired output: the data transposed into the columns ID, NAME, YEAR, MONTH and VALUE.

But what about the performance?

With this sample data set, the information view contains 60 records in total (10 base rows × 6 months).

To check whether filters are pushed down to the Analytic search, you need to find the "BWPopSearch" operation and check the details of that node in the visualized plan. Please refer to the document by Ravindra Channe explaining "Projection Filter push down in Calculation View" (included later in this collection), which in turn points to Lars Breddemann's blog "Show me the timelines, baby!".

Let us apply a filter for the year 2012.

SELECT NAME, YEAR, MONTH, VALUE FROM "_SYS_BIC"."MDM/CA_ACTUALS_VALUE" WHERE YEAR = '2012';


Expanding the Analytic search node in the visualized plan shows that, although the table is small in our case, the filter is pushed down irrespective of table size, so only the required records are fetched from the base table. This helps improve performance.

Implementation steps for Approach 2:
  • One general Analytic view will be created instead of several; in it, the required attributes and measures are selected.
  • In this case we select 6 measures M_JAN, M_FEB, M_MAR, M_APR, M_MAY, M_JUN in addition to common attributes.
  • Create Calculation View based on general Analytic View created above and join them together using Union with Constant Value.
  • Calculated Column (VALUE) is created in each Projection node which derives the value of a particular month in a year.
Let us see this in action.

Create the general Analytic view with no calculated columns; it is simple and straightforward.


Create the Calculation view. Drag and drop a Projection node and add the general Analytic view to it, selecting only the measure M_JAN in addition to the common attributes. Create the Calculated Column VALUE based on M_JAN.


Now add five more Projection nodes, each based on the same Analytic view. Create the Calculated Column VALUE in each Projection node from the respective month column (M_FEB, M_MAR, and so on).


Now add the Union node above these projections; the rest of the process is the same as in Approach 1.


Implementation steps for Approach 3:
  • No Analytic view will be created; instead, the base table will be used directly.
  • Create a Calculation View that uses the base table directly in each projection node.
  • Here also 6 projection nodes will be used.
  • Calculated Column (VALUE) is created in each Projection node which derives the value of a particular month in a year.

------------------------------------------------------------------------------------
Implementation steps for Approach 4: (recommended)

Create the SQLScript as below:

BEGIN
  -- Each SELECT below turns one month column into rows labelled with a constant MONTH value;
  -- UNION combines the six result sets (UNION ALL would also work, since the rows are distinct).
  var_out = 
       SELECT ID, NAME, YEAR, 'JAN' as "MONTH", M_JAN as "VALUE" from TEST.ACTUALS
       UNION
       SELECT ID, NAME, YEAR, 'FEB' as "MONTH", M_FEB as "VALUE" from TEST.ACTUALS
       UNION
       SELECT ID, NAME, YEAR, 'MAR' as "MONTH", M_MAR as "VALUE" from TEST.ACTUALS
       UNION
       SELECT ID, NAME, YEAR, 'APR' as "MONTH", M_APR as "VALUE" from TEST.ACTUALS
       UNION
       SELECT ID, NAME, YEAR, 'MAY' as "MONTH", M_MAY as "VALUE" from TEST.ACTUALS
       UNION
       SELECT ID, NAME, YEAR, 'JUN' as "MONTH", M_JUN as "VALUE" from TEST.ACTUALS
  ;
END
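Once activated, the script-based calculation view can be queried like any other view. A minimal sketch, assuming the view is activated in a package so that its runtime name is "_SYS_BIC"."MDM/CA_ACTUALS_SQL" (the package and view name here are hypothetical):

SELECT NAME, YEAR, MONTH, VALUE
FROM "_SYS_BIC"."MDM/CA_ACTUALS_SQL"
WHERE YEAR = '2012'
ORDER BY NAME, MONTH;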


Output: the same transposed result as before, with the columns ID, NAME, YEAR, MONTH and VALUE.

Isn't it simple compared to the other approaches? Yes, it is.
Now you are familiar with different approaches to transposing a table in SAP HANA modeling.

Source: scn.sap.com

Projection Filter push down in Calculation View

*** Please note that the document below is based on personal experience of using different HANA Information Modeling constructs. The results of performance tests depend upon the data volume and the query filters. It is advisable to make the final judgement based on your own testing of such models.

Many reporting scenarios implement a data comparison business case such as "This Year to Last Year comparison". Such a requirement can be implemented in HANA using a Calculation View model, as discussed in many presentations and training sessions, similar to the one shown below.


Figure 1: Sample model with Projections for the Year on Year data comparison

The base model consists of projections over the Analytic view, with one filter value for the "This Year" projection and one for the "Last Year" projection, and a UNION node to combine the data for reporting.

Let's consider a sample data model based on Sales data. An Analytic view is created on the Sales base table and used in a Calculation view with Projections for This Year and Last Year data as shown in the sample model in Figure 1.


Figure 2: Sample data models based on Sales data for This Year and Last Year data comparison

In most cases, the general requirement is that the user provides the "This Year" value (say month and year) and the system then returns data for that month in this year and for the same month in the last year. Such input can be captured and modeled in multiple ways.

This document outlines the impact on Performance of two such implementations of Filters in Projection. It explains the FILTER PUSH DOWN impact due to the different implementation options.
  1. Using Input Parameters for This Year Month and for Last Year Month
  2. Using Input Parameter for This Year Month and Calculated Column for Last Year Month

What is Filter Push Down:

When data is queried from HANA information models for a subset defined by certain values, that subset is defined using filters. If the filters are applied at the lowest level of dataset generation (the Analytic search), then all subsequent activities such as aggregation happen on this reduced dataset. This provides a high performance benefit.

If the filters are not applied at the Analytic search, then the entire data set (depending upon the column joins) is passed to the aggregation, which has a negative impact on performance.

To check if the filters are pushed down to the Analytic search, you need to find the “BWPopSearch” operation and check the details on the node in the visual plan. Please refer to the awesome blog by Lars Breddemann explaining the Visualize Plan tool.

Impact of using option 1: Using Input Parameters for This Year Month and for Last Year Month

In this implementation, two Input parameters are used
  • For This Year Month - IP_TY_MONTH
  • For Last Year Month - IP_LY_MONTH 

The model can be created as shown below:


Figure 3: This Year and Last Year data comparison model using Input Parameters
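For reference, the projection filters in such a model reference the input parameters directly in their filter expressions. A sketch of the two expressions (the column name C_YEARMONTH is assumed from the query shown further below; the exact expression syntax may differ slightly by revision):

Filter on the "This Year" projection:  "C_YEARMONTH" = '$$IP_TY_MONTH$$'
Filter on the "Last Year" projection:  "C_YEARMONTH" = '$$IP_LY_MONTH$$'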

The sample Sales data consists of a few records with Sales values for different months in different countries.

When such models are queried, the projection filters are pushed down, and the dataset exchanged between the OLAP engine and the Calc engine is quite small: the data is filtered in the OLAP engine, and only the subset of the entire data required for the aggregation is passed to the Calc engine. The execution plan for such a query shows a "Search" being performed as the first operation, reducing the dataset to be passed on for further processing.

Query:
SELECT "C_COUNTRY", "C_YEAR", "C_YEARMONTH", "ZPERIOD", sum("C_SALES") AS "C_SALES"
FROM "_SYS_BIC"."sample/ZGCV_SALES_TY_LY"
( 'PLACEHOLDER' = ('$$IP_LY_MONTH$$', '201201')
, 'PLACEHOLDER' = ('$$IP_TY_MONTH$$', '201301'))
WHERE "C_COUNTRY" = 'US'
GROUP BY "C_COUNTRY", "C_YEAR", "C_YEARMONTH", "ZPERIOD";

Sample data in the underlying Sales table:


Figure 4: Sales data in the underlying table

Output of the query:


Figure 5: Query output based on This Year month and Last Year month Input Parameters and Filter
Execution Plan:


Figure 6: Execution plan for the Query with Projections based on Input Parameters

As seen in Figure 6 above, the execution plan shows that the filters defined in the projections are pushed down to the OLAP engine, resulting in a smaller data set being passed to the Calc engine. This filter push down significantly improves query performance. Hence, it is advisable to model views so that the data is filtered as early as possible.

Impact of using option 2: Using Input Parameter for This Year Month and Calculated Column for Last Year Month

This option provides the flexibility to derive the other filter value in code from a given input parameter. It is quite natural to expect the user to provide a value for ONLY ONE input parameter for a projection filter and to derive the other value from the value provided. Such derivation can be achieved using Calculated Columns.

*** Please note that currently the expression builder for projection filters does not provide much flexibility for data manipulation, so such manipulations need to be performed using a Calculated Column. The expression builder for Calculated Columns supports a wide variety of functions, enabling complex data manipulation.
In due course of time, I am sure that the expression builder for projection filters will support more functions.
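To illustrate the kind of derivation meant here in plain SQL terms (a sketch only; in the model the logic sits in the calculated column expression), with YYYYMM values the Last Year month can be derived from the This Year month by subtracting 100:

SELECT TO_VARCHAR(TO_INTEGER('201301') - 100) AS LY_MONTH FROM DUMMY;  -- returns 201201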

The model for the Option 2 can be created as shown below:


Figure 7: This Year and Last Year data comparison model using Input Parameter and Calculated Column

When such models are queried, the projection filter defined on the Calculated Column is NOT pushed down, and the dataset exchanged between the OLAP engine and the Calc engine is quite large, depending upon the joins of the columns used in the query. The entire data set is passed to the Calc engine, where the subsequent filtering and aggregation happen. The execution plan for such a query shows that NO Search operation happens in the OLAP engine. This has a negative impact on performance, as a large amount of data is exchanged between the engines and more memory and resources are required for the query execution.

Query:
SELECT "C_COUNTRY", "C_YEAR", "C_YEARMONTH", "ZPERIOD", sum("C_SALES") AS "C_SALES"
FROM "_SYS_BIC"."sample/ZGCV_SALES_TY_LY"
( 'PLACEHOLDER' = ('$$IP_LY_MONTH$$', '201201'))
WHERE "C_COUNTRY" = 'US'
GROUP BY "C_COUNTRY", "C_YEAR", "C_YEARMONTH", "ZPERIOD";

The sample data in the underlying Sales table and the output of the query remain the same as for Option 1.

Execution Plan:


Figure 8: Execution plan for the Query with Projections based on Calculated Column and Input Parameter

As seen in Figure 8 above, the execution plan shows that the filter defined in the projection using the Calculated Column is NOT pushed down to the OLAP engine, resulting in a large data set being passed to the Calc engine. It is strongly recommended to look into the query execution plan for your information models and check the impact of such modeling constructs. It is advisable to ensure that filters can be pushed down to the OLAP engine to improve query performance. If possible, you may opt to change an existing model from Calculated Columns to input parameters and perform the data manipulation for the values passed to the input parameters in the front-end reporting tool rather than in HANA. The user input prompt in the front-end tool can be defined to capture one value, and the value to be passed to the second input parameter can be derived from the first prompt value.

As mentioned at the start of this document, I would request fellow HANA practitioners to try and test their models and validate the performance impact of the queries.

Also, as mentioned above, I am very positive that the filter expression for projections will provide a larger set of data manipulation functions in the future, which will enable writing better filter conditions and eliminate the need to define Calculated Columns for filtering in the information model altogether.

Source: scn.sap.com

ODXL - An open source Data Export Layer for SAP/HANA based on OData

Introduction:
ODXL is a framework that provides generic data export capabilities for the SAP/HANA platform. ODXL is implemented as an xsjs web service that understands OData web requests and delivers a response by means of a pluggable data output handler. Developers can use ODXL as a back-end component, or even as a global instance-wide service, to provide clean, performant and extensible data export capabilities for their SAP/HANA applications.

Currently, ODXL provides output handlers for comma-separated values (csv) as well as Microsoft Excel output. However, ODXL is designed so that developers can write their own response handlers and extend ODXL to export data to other output formats according to their requirements.



ODXL is provided to the SAP/HANA developer community as open source software under the terms of the Apache 2.0 License. This means you are free to use, modify and distribute ODXL. For the exact terms and conditions, please refer to the license text.

The source code is available on github. Developers are encouraged to check out the source code and to contribute to the project. You can contribute in many ways: we value any feedback, suggestions for new features, filing bug reports, or code enhancements.

What exactly is ODXL?
ODXL was born of the observation that the SAP/HANA web applications that we develop for our customers often require some form of data export, typically to Microsoft Excel. Rather than creating this type of functionality again for each project, we decided to invest some time and effort to design and develop this solution in such a way that it can easily be deployed as a reusable component. And preferably, in a way that feels natural to SAP/HANA xs platform application developers.

What we came up with, is a xsjs web service that understands requests that look and feel like standard OData GET requests, but which returns the data in some custom output format. ODXL was designed to make it easily extensible so that developers can build their own modules that create and deliver the data in whatever output format suits their requirements.

This is illustrated in the high-level overview below:


For many people, there is an immediate requirement to get Microsoft Excel output. So, we went ahead and implemented output handlers for .xlsx and .csv formats, and we included those in the project. This means that ODXL supports data export to the .xlsx and .csv formats right out of the box. 

However, support for any particular output format is entirely optional and can be controlled by configuration and/or extension:
  • Developers can develop their own output handlers to supply data export to whatever output format they like.
  • SAP/HANA Admins and/or application developers can choose to install only those output handlers they require, and configure how Content-Type headers and OData $format values map to output handlers.

So ODXL is OData? Doesn't SAP/HANA support OData already?
The SAP/HANA platform provides data access via the OData standard. This facility is very convenient for object-level read and write access to database data for typical modern web applications. In this scenario, the web application would typically use asynchronous XML Http requests, and data would be exchanged in either Atom (an XML dialect) or JSON format.


ODXL's primary goal is to provide web applications with a way to export datasets in the form of documents. Data export tasks typically deal with data sets that are quite a bit larger than the ones accessed from within a web application. In addition, a data export document might very well comprise multiple parts - in other words, it may contain multiple datasets. The typical example is exporting multiple lists of different items from a web application to a workbook containing multiple spreadsheets with data. In fact, the concrete use case from whence ODXL originated was the requirement to export multiple datasets to Microsoft Excel .xlsx workbooks.

So, ODXL is not OData. Rather, ODXL is complementary to SAP/HANA OData services. That said, the design of ODXL does borrow elements from standard OData.

OData Features, Extensions and omissions
ODXL GET requests follow the syntax and features of OData standard GET requests. Here's a simple example to illustrate an ODXL GET request:
GET "RBOUMAN"/"PRODUCTS"?$select=PRODUCTCODE, PRODUCTNAME& $filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK gt 1&$orderby=BUYPRICE desc&$skip=0&$top=5

This request is built up like so (a rough SQL equivalent is sketched after the list):
  • "RBOUMAN"/"PRODUCTS": get data from the "PRODUCTS" table in the database schema called "RBOUMAN".
  • $select=PRODUCTCODE, PRODUCTNAME: Only get values for the columns PRODUCTCODE and PRODUCTNAME.
  • $filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK gt 1: Only get products from the vendor 'Classic Metal Creations' with more than one item in stock.
  • $orderby=BUYPRICE desc: Order the data from highest price to lowest.
  • $skip=0&$top=5: Only get the first five results.
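In SQL terms, the request above corresponds roughly to the following query (a sketch for illustration only; ODXL builds the actual statement internally):

SELECT TOP 5 PRODUCTCODE, PRODUCTNAME
FROM "RBOUMAN"."PRODUCTS"
WHERE PRODUCTVENDOR = 'Classic Metal Creations'
  AND QUANTITYINSTOCK > 1
ORDER BY BUYPRICE DESC;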
For more detailed information about invoking the odxl service, check out the section about the sample application. The sample application offers a very easy way to use ODXL for any table, view, or calculation view you can access and allows you to familiarize yourself in detail with the URL format. 

In addition, ODXL supports the OData $batch POST request to support export of multiple datasets into a single response document. 

The reasons to follow OData in these respects are quite simple:
  • OData is simple and powerful. It is easy to use, and it gets the job done. There is no need to reinvent the wheel here.
  • ODXL's target audience, that is to say, SAP/HANA application developers, are already familiar with OData. They can integrate ODXL into their applications and use it with minimal effort, and maybe even reuse the code they use to build their OData queries to target ODXL.
ODXL does not follow the OData standard with respect to the format of the response. This is a feature: OData only specifies Atom (an XML dialect) and JSON output, whereas ODXL can supply any output format. ODXL can support any output format because it allows developers to plug-in their own modules called output handlers that create and deliver the output. 

 Currently ODXL provides two output handlers: one for comma-separated values (.csv), and one for Microsoft Excel (.xlsx). If that is all you need, you're set. And if you need some special output format, you can use the code of these output handlers to see how it is done and then write your own output handler. 

ODXL does respect the OData standard with regard to how the client can specify what type of response they would like to receive. Clients can specify the MIME-type of the desired output format in a standard HTTP Accept:request header:
  • Accept: text/csv specifies that the response should be returned in comma separated values format.
  • Accept: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet specifies that the response should be returned in open office xml workbook format (Excel .xlsx format).
Alternatively, they can specify a $format=<format> query option, where <format> identifies the output format:
  • $format=csv for csv format
  • $format=xlsx for .xlsx format
Note that a format specified by the $format query option will override any format specified in an Accept:-header, as per OData specification. 

ODXL admins can configure which MIME-types will be supported by a particular ODXL service instance, and how these map to pluggable output handlers. In addition, they can configure how values passed for the $format query option map to MIME-types. ODXL comes with a standard configuration with mappings for the predefined output handlers for .csv and .xlsx output.

On the request side of things, most of OData's features are implemented by ODXL:
  • The $select query option to specify which fields are to be returned
  • The $filter query option allows complex conditions restricting the returned data. OData standard functions are implemented too.
  • The $skip and $top query options to export only a portion of the data
  • The $orderby query option to specify how the data should be sorted
ODXL currently does not offer support for the following OData features:
  • $expand
  • $metadata
The features that are currently not supported may be implemented in the future. For now, we feel the effort to implement them and adequately map their semantics to ODXL may not be worth the trouble. However, an implementation can surely be provided should there be sufficient interest from the community.

Installation
Using ODXL presumes you already have a SAP/HANA installation with a properly working xs engine. You also need HANA Studio, or Eclipse with the SAP HANA Tools plugin installed.

Here are the steps if you just want to use ODXL, and have no need to actively develop the project:
  1. In HANA Studio/Eclipse, create a new HANA xs project. Alternatively, find an existing HANA xs project.
  2. Find the ODXL repository on github, and download the project as a zipped folder. (Select a particular branch if you so desire; typically you'll want the master branch.)
  3. Extract the project from the zip. This will yield a folder. Copy its contents, and place them into your xs project directory (or one of its sub directories)
  4. Activate the new content.
After taking these steps, you should now have a working ODXL service, as well as a sample application. The service itself is in the service subdirectory, and you'll find the sample application inside the app subdirectory.

The service and the application are both self-contained xs applications, and should be completely independent in terms of resources. The service does not require the application to be present, but obviously, the application does rely on being able to call upon the service.
 
If you only need the service, for example, because you want to call it directly from your own application, then you don't need the sample application. You can safely copy only the contents of the service directory and put those right inside your project directory (or one of its subdirectories) in that case. But even then, you might still want to hang on to the sample application, because you can use that to generate the web service calls that you might want to do from within your application.
 
If you want to actively develop ODXL, and possibly, contribute your work back to the community, then you should clone or fork the github repository and work from there.

Getting started with the sample application
To get up and running quickly, we included a sample web application in the ODXL project. The purpose of this sample application is to provide an easy way to evaluate and test ODXL.
 
The sample application lets you browse the available database schemas and queryable objects: tables and views, including calculation views (or at least, their SQL queryable runtime representation). After making the selection, it will build up a form showing the available columns. You can then use the form to select or deselect columns, apply filter conditions, and/or specify any sorting order. If the selected object is a calculation view that defines input parameters, then a form will be shown where you can enter values for those too.
 
Meanwhile, as you're entering options into the form, a textarea will show the URL that should be used to invoke the ODXL service. If you like, you can manually tweak this URL as well. Finally, you can use one of the download links to immediately download the result corresponding to the current URL in either .csv or .xlsx format.
 
Alternatively, you can hit a button to add the URL to a batch request. When you're done adding items to the batch, you can hit the download workbook button to download a single .xlsx workbook containing one worksheet for each dataset in the batch.


What versions of SAP/HANA are supported?
We initially built and tested ODXL on SPS9. The initial implementation used the $.hdb database interface, as well as the $.util.Zip builtin.

We then built abstraction layers for both database access and zip support to allow automatic fallback to the $.db database interface, and to use a pure javascript implementation of the zip algorithm based on Stuart Knightley's JSZip library. We tested this on SPS8, and everything seems to work fine there.

We have not actively tested earlier SAP/HANA versions, but as far as we know, ODXL should work on any earlier version. If you find that it doesn't, then please let us know - we will gladly look into the issue and see if we can provide a solution.

How to Contribute
If you want to, there are many different ways to contribute to ODXL.
  • If you want to suggest a new feature, or report a defect, then please use the github issue tracker.
  • If you want to contribute code for a bugfix, or for a new feature, then please send a pull request. If you are considering contributing code, then we do urge you to first create an issue to open up discussion with fellow ODXL developers on how to best scratch your itch.
  • If you are using ODXL and you like it, then consider spreading the word - tell your co-workers about it, write a blog, or a tweet, or a facebook post.

Thank you in advance for your contributions!

Source: scn.sap.com

SAP HANA Extended Application Services

We introduce an exciting new capability called SAP HANA Extended Application Services (sometimes referred to unofficially as XS or XS Engine). The core concept of SAP HANA Extended Application Services is to embed a full featured application server, web server, and development environment within the SAP HANA appliance itself. However this isn't just another piece of software installed on the same hardware as SAP HANA; instead SAP has decided to truly integrate this new application services functionality directly into the deepest parts of the SAP HANA database itself, giving it an opportunity for performance and access to SAP HANA differentiating features that no other application server has.

Before SAP HANA SP5, if you wanted to build a lightweight web page or REST service which consumes SAP HANA data or logic, you would need another application server in your system landscape. For example, you might use SAP NetWeaver ABAP or SAP NetWeaver Java to connect to your SAP HANA system via a network connection and use ADBC (ABAP Database Connectivity) or JDBC (Java Database Connectivity) to pass SQL statements to SAP HANA. Because of SAP HANA’s openness, you might also use Dot Net or any number of other environments or languages which support ODBC (Open Database Connectivity) as well. These scenarios are all still perfectly valid. In particular, when you are extending an existing application with new SAP HANA functionality, these approaches are very appealing because you can easily, and with little disruption, integrate this SAP HANA functionality into your current architecture.

However when you are building a new application from scratch which is SAP HANA specific, it makes sense to consider the option of the SAP HANA Extended Application Services.  With SAP HANA Extended Application Services you can build and deploy your application completely self-contained within SAP HANA; providing an opportunity for a lower cost of development and ownership as well as performance advantages because of the closeness of the application and control flow logic to the database.

Applications designed specifically to leverage the power of SAP HANA are often built in such a way as to push as much of the logic down into the database as possible. It makes sense to place all of your data intensive logic into SQL, SQLScript Procedures, and SAP HANA Views, as these techniques will leverage SAP HANA’s in-memory, columnar table optimizations as well as massively parallel processing. For the end-user experience, we are increasingly targeting HTML5 and mobile based applications where the complete UI logic is executed on the client side. Therefore we need an application server in the middle that is significantly smaller than the traditional application server. This application server only needs to provide some basic validation logic and service enablement. This reduced scope further lends credence to a lightweight, embedded approach like that of SAP HANA Extended Application Services.


Figure 1 - Architectural Paradigm Shift

SAP HANA Studio Becomes a Development Workbench
In order to support developers in creating applications and services directly within this new SAP HANA Extended Application Services, SAP has enhanced the SAP HANA Studio to include all the necessary tools. SAP HANA Studio was already based upon Eclipse; therefore we were able to extend the Studio via an Eclipse Team Provider plug-in which sees the SAP HANA Repository as a remote source code repository similar to Git or Perforce. This way all the development resources (everything from HANA Views,  SQLScript Procedures, Roles, Server Side Logic, HTML and JavaScript content, etc.) can have their entire lifecycle managed with the SAP HANA Database.  These lifecycle management capabilities include versioning, language translation export/import, and software delivery/transport.

The SAP HANA Studio is extended with a new perspective called SAP HANA Development. As Figure 2 shows, this new perspective combines existing tools (like the Navigator view from the Modeler perspective) with standard Eclipse tools (such as the Project Explorer) and new tools specifically created for SAP HANA Extended Application Services development (for example, the Server Side JavaScript editor shown in the figure or the SAP HANA Repository browser). Because SAP HANA Studio is based on Eclipse, we can also integrate other Eclipse based tools into it. For example the SAP UI Development Toolkit for HTML5 (SAPUI5) is also delivered standard in SAP HANA Extended Application Services.  HANA 1.0 SP5 comes pre-loaded with the 1.8 version of the SAPUI5 runtime and the SAPUI5 development tools are integrated into SAP HANA Studio and managed by the SAP HANA Repository like all other XS based artifacts. Although the SAPUI5 development tools are integrated into SAP HANA Studio, they are not installed automatically.  For installation instructions, please see section 10.1 of the SAP HANA Developers Guide.


Figure 2 - SAP HANA Development perspective of the SAP HANA Studio

These extensions to the SAP HANA Studio include developer productivity enhancing features such as project wizards (Figure 3), resource wizards, code completion and syntax highlighting for SAP HANA Extended Application Services server side APIs, integrated debuggers, and so much more.


Figure 3 - Project Wizards for XS Based Development

These features also include team management functionality.  All development work is done based upon standard Eclipse projects.  The project files are then stored within the SAP HANA Repository along with all the other resources. From the SAP HANA Repository browser view, team members can check out projects which have already been created and import them directly into their local Eclipse workspace (Figure 4).

After projects have been imported into the local Eclipse workspace, developers can work offline on them. You can also allow multiple developers to work on the same resources at the same time. Upon commit back to the SAP HANA Repository, any conflicts will be detected and a merge tool will support the developer with the task of integrating conflicts back into the Repository.

The SAP HANA Repository also supports the concept of active/inactive workspace objects.  This way a developer can safely commit their work back to the server and store it there without immediately overwriting the current runtime version.  It isn't until the developer chooses to activate the Repository object, that the new runtime version is created.


Figure 4 - Repository Import Project Wizard

For a deeper look at the basic project creation and Repository object management within SAP HANA Studio, please view the following videos on the topic.


OData Services
There are two main parts of the SAP HANA Extended Application Services programming model. The first is the ability to generate OData REST services from any existing SAP HANA Table or View.  The process is quite simple and easy.  From within an SAP HANA Project, create a file with the extension xsodata. Within this service definition document, the developer needs only to supply the name of the source table/view, an entity name, and, if using an SAP HANA View, the entity key fields.

For example, if you want to generate an OData service for an SAP HANA table named teched.epm.db/businessPartner in the Schema TECHEDEPM, this would be the XSODATA definition file you would create:

service namespace "sap.hana.democontent.epm" {  
       "TECHEDEPM"."teched.epm.db/businessPartner" as "BUYER";  
}


Figure 5 - XSODATA Service Definition and Test

Upon activation of this XSODATA file, we already have an executable service which is ready to test. The generated service supports standard OData parameters like $metadata for introspection (see Figure 6), $filter, $orderby, etc. It also supports body formats of ATOM/XML and JSON (Figure 7 for an example). Because OData is an open standard, you can read more about the URL parameters and other features at http://www.odata.org/.


Figure 6 - OData $metadata support


Figure 7 - Example OData Server JSON Output

The examples in the above figures demonstrate how easily these services can be tested from the web browser, but of course they don't represent how end users would interact with the services. Although you can use a variety of 3rd party tools based upon JavaScript, like Sencha, Sencha Touch, JQuery, JQuery Mobile, and PhoneGap, just to name a few, SAP delivers the UI Development Toolkit for HTML5 (SAPUI5) standard in SAP HANA Extended Application Services. A particularly strong feature of SAPUI5 is the integration of OData service consumption not just at a library level but also with special features within the UI elements for binding to OData services.

For example, within SAPUI5, you can declare an OData model object and connect this model object to the URL of the XSODATA service. Next, create a Table UI element and connect it to this model object. Finally you call bindRows of the Table UI element object and supply the OData entity name you want to use as the source of the table.

var oModel = new sap.ui.model.odata.ODataModel("../../services/buyer.xsodata/", false);
oTable = new sap.ui.table.Table("test", {tableId: "tableID", visibleRowCount: 10});
...
oTable.setModel(oModel);
oTable.bindRows("/BUYER");

This creates a UI element which has built-in events, such as sort, filter, and paging, which automatically call the corresponding OData service to fulfill the event. No additional client side or server side programming is necessary to handle such events.


Figure 8 - OData Bound Table UI Element

For more details on OData service creation in SAP HANA Extended Application Services and utilizing these services within SAPUI5, please view these videos.





Server Side JavaScript
The XSODATA services are great because they provide a large amount of functionality with minimal amounts of development effort.  However there are a few limitations which come with such a framework approach.  For example in SAP HANA SP5, the OData service framework is read only.  Support for Insert, Update, and Delete operations is available with SAP HANA SP6.

Luckily there is an option for creating free-form services where you can not only perform update operations but also have full control over the body format and URL parameter definition. SAP HANA Extended Application Services also allows development on the server side using JavaScript (via project files with the extension XSJS).  Core APIs of SAP HANA Extended Application Services are, therefore, exposed as JavaScript functions; providing easy access to the HTTP Request and Response object as well database access to execute SQL or call SQLScript Procedures.

In this simple example, we can take two numbers as URL Request Parameters and multiply them together and then return the results as text in the Response Body.  This is an intentionally basic example so that you can focus on the API usage.


Figure 9 - Simple XSJS Service

The real power of XSJS services, however, comes from the ability to access database objects while having full control over the body output, and to further manipulate the result set after selection. In this example, we use XSJS to create tab-delimited text output in order to support a download to Excel from a user interface.

function downloadExcel(){
    var body = '';
    var query = 'SELECT TOP 25000 \"PurchaseOrderId\", \"PartnerId\", to_nvarchar(\"CompanyName\"), \"LoginName_1\", \"CreatedAt\", \"GrossAmount\" FROM \"_SYS_BIC\".\"teched.epm.db/PO_HEADER_EXTENDED\" order by \"PurchaseOrderId\"';
    $.trace.debug(query);
    var conn = $.db.getConnection();
    var pstmt = conn.prepareStatement(query);
    var rs = pstmt.executeQuery();
    // Build a tab-delimited header row, then append one line per record
    body = "Purchase Order Id \tPartner Id \tCompany Name \tEmployee Responsible \tCreated At \tGross Amount \n";
    while(rs.next()) {
        body += rs.getString(1)+"\t"+rs.getString(2)+"\t"+rs.getString(3)+"\t"+rs.getString(4)+"\t"+rs.getTimestamp(5)+"\t"+rs.getDecimal(6)+"\n";
    }
    $.response.setBody(body);
    $.response.contentType = 'application/vnd.ms-excel; charset=utf-16le';
    $.response.headers.set('Content-Disposition','attachment; filename=Excel.xls');
}


Closing
This blog hopefully has served to give you a small introduction to many of the new concepts and possibilities with SAP HANA SP5 and SAP HANA Extended Application Services.


Over the coming weeks, we will be posting additional blogs with more advanced examples and techniques, as well as how to integrate SAP HANA Extended Application Services content into additional environments. Likewise there is a substantial Developer’s Guide which expands on many of the concepts introduced here.

Source: scn.sap.com

Creating HANA DB based SAP Netweaver Gateway Service

SAP has provided with GW SP04 a basic integration framework for creating an OData service based on the HANA DB. With this framework you can easily expose the HANA analytical models via the REST based Open Data (OData) format.

I was very enthusiastic to try this one out and I was able to create such a service in very little time. And you don’t have to be an ABAP expert to build such a service.
I have outlined the steps to create such a service, referring to the help document at
http://help.sap.com/saphelp_gateway20sp04/helpdata/en/33/0e14f0590b48318ea38b3a06a92d17/content.htm and partly based on trial and error.
There are also some current limitations of the integration, which are documented there as well. I will create a simple service on an attribute view which will fetch the data for the service.

1. The first step is to log on to the HANA client and create a view (Attribute/Analytic/Calculation). Make a note of the catalog name and the view name.
2. Check that your SAP Netweaver Gateway system, or the backend system in which you are trying to create the service, has the add-on component installed.

3. Now log on to the backend SAP Netweaver Gateway system and maintain an ADBC connection via the table DBCON. Provide the connection details in the table. Make sure you enter the Connection Limit and Optimum Conns as greater than 1; the Data Provider class creates more than one connection to the database, so if the limit is 1 the connection will fail.


4. The next step is to implement the enhancement spot /IWBEP/ES_DESTIN_FINDER. This declares the maintained SAP HANA connection to the framework. Give the enhancement implementation name and description, then the BAdI definition name, description and the implementation class name, and maintain the required filter values. Now redefine the method /IWBEP/IF_DESTIN_FINDER_BADI~GET_DB_CONNECTION, exporting the DBCON connection name, and the method /IWBEP/IF_DESTIN_FINDER_BADI~GET_RFC_DESTINATION, exporting the system alias of the system where the service is created.


5. In the next step, create the model provider class with the superclass /IWHDB/CL_HAI_RT_ABS_MODEL, redefine the class method GET_HDB_ARTIFACTS, and pass the catalog name and view name you noted in step 1. (Here you can also specify multiple views for a single service, which will appear as different entity sets. By default the entity set name will be the view name; if you want a different name, you can specify one in lr_view->entity_set_name.)


6. The next step is to create a model and assign the model provider class.
Go to transaction /n/iwbep/reg_model, enter a model name and version, and press Create. In the next screen enter the MPC and a description and save.
7. Next create a service via transaction /n/iwbep/reg_service. Enter the service name and version and click Create. In the next screen enter the Data Provider class as /IWHDB/CL_HAI_RT_DATA and a description, and save.
Then assign the previously created model using the Assign Model button, enter the model details, and save.
8. Next, test the service. If you have created the service in the backend system, you have to log on to the SAP Netweaver Gateway system to add and test the service.
9. Go to transaction /IWFND/MAINT_SERVICE and click on Add Service. In the next screen, specify the system alias, look for your service and activate it.
10. Now go back to the previous screen and add system aliases to the service you activated. Check that the ICF node for ODATA is active.
11. Click on Explore service to test the service. Check for Service Document, Metadata and Entity sets.



Source: scn.sap.com

Play the game of life with SAP HANA

What is the game of life?
So first of all, what is the game of life? You can find the rules of the game of life on Wikipedia, as follows.

"The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive or dead. Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
2. Any live cell with two or three live neighbours lives on to the next generation.
3. Any live cell with more than three live neighbours dies, as if by overcrowding.
4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed—births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick (in other words, each generation is a pure function of the preceding one). The rules continue to be applied repeatedly to create further generations."

Motivation
I still remember a course on parallel programming from when I studied at TU Berlin. There was an assignment about the game of life in which we had to implement it with MPI. That was the first time I heard of the game of life. Since parallelism is one of the most powerful weapons owned by SAP HANA, an idea came to my mind: why not play the game of life with SAP HANA? Let's give it a shot.

Problems
Consider the following simple 3x3 initial pattern with live cells shown in black and dead cells shown in white.

Play the game of life with SAP HANA

According to the above rules, the first generation will look as follows. You can verify this yourself as an exercise.

Play the game of life with SAP HANA

Now let's start to play the game of life with SAP HANA. The first thing we need to do is model the problem in SAP HANA. For the two-dimensional grid, we can create two axes, the horizontal x-axis and the vertical y-axis. For the two statuses, we can use 1 for live and 0 for dead. So, the initial pattern will be displayed as below.

Play the game of life with SAP HANA

And it can be implemented with the following SQL statements.

CREATE COLUMN TABLE LIFE (  
  X INTEGER, -- x-axis  
  Y INTEGER, -- y-axis  
  S INTEGER, -- status  
  PRIMARY KEY (X, Y)  
);  
INSERT INTO LIFE VALUES (1, 1, 0);  
INSERT INTO LIFE VALUES (1, 2, 0);  
INSERT INTO LIFE VALUES (1, 3, 1);  
INSERT INTO LIFE VALUES (2, 1, 1);  
INSERT INTO LIFE VALUES (2, 2, 1);  
INSERT INTO LIFE VALUES (2, 3, 0);  
INSERT INTO LIFE VALUES (3, 1, 0);  
INSERT INTO LIFE VALUES (3, 2, 1);  
INSERT INTO LIFE VALUES (3, 3, 1);  

Now comes the problem: how can we create the first generation, and further generations, repeatedly? Assume the following two points.

1. We always want to update the "LIFE" table itself instead of creating new tables.
2. We do not care about the order of tuples.

Play the game of life with SAP HANA

Play with self-join
First of all, we may figure out the following approach/steps for this problem.

1. Calculate # of live neighbours for each cell
SELECT A.X, A.Y, A.S, SUM(B.S) - A.S N  
FROM LIFE A INNER JOIN LIFE B ON ABS(A.X - B.X) < 2 AND ABS(A.Y - B.Y) < 2  
GROUP BY A.X, A.Y, A.S;  

Play the game of life with SAP HANA

The logic is very simple. We use the self-join approach.

1. We use "ABS(A.X - B.X) < 2 AND ABS(A.Y - B.Y) < 2" to filter neighbours plus the cell itself.
2. We use "SUM(B.S) - A.S" to calculate # of live neighbours for each cell.

Since we do not care about the order of tuples, the results appear unordered. You can check the correctness of the results manually as shown below. The number in brackets is the # of live neighbours of each cell.

Play the game of life with SAP HANA

There are also several similar alternatives with self-join, e.g.,

Alternative 1
SELECT A.X, A.Y, A.S, SUM(B.S) N  
FROM LIFE A INNER JOIN LIFE B ON ABS(A.X - B.X) < 2 AND ABS(A.Y - B.Y) < 2 AND (A.X <> B.X OR A.Y <> B.Y)  
GROUP BY A.X, A.Y, A.S;  

Play the game of life with SAP HANA

Alternative 2
SELECT A.X, A.Y, A.S, SUM(B.S) N  
FROM LIFE A INNER JOIN LIFE B ON (A.X = B.X AND ABS(A.Y - B.Y) = 1) OR (A.Y = B.Y AND ABS(A.X - B.X) = 1) OR (ABS(A.X - B.X) = 1 AND ABS(A.Y - B.Y) = 1)  
GROUP BY A.X, A.Y, A.S;  

Play the game of life with SAP HANA

2. Apply the rules to get the results of the next generation
SELECT X, Y, CASE S WHEN 0 THEN (CASE N WHEN 3 THEN 1 ELSE 0 END) ELSE (CASE N WHEN 2 THEN 1 WHEN 3 THEN 1 ELSE 0 END) END S  
FROM (SELECT A.X, A.Y, A.S, SUM(B.S) - A.S N  
FROM LIFE A INNER JOIN LIFE B ON ABS(A.X - B.X) < 2 AND ABS(A.Y - B.Y) < 2  
GROUP BY A.X, A.Y, A.S);  

Play the game of life with SAP HANA

Based on the results of step 1, we apply the following simplified rules to calculate the next generation.

1) The current status of the cell is dead
If there are exactly three live neighbours, the next status will be live; otherwise, it will still be dead.

2) The current status of the cell is live
If there are two or three live neighbours, the next status will still be live; otherwise, it will be dead.

You can also check the results manually.

Play the game of life with SAP HANA

3. Update the "LIFE" table with the results in step 2
UPDATE LIFE SET S = NEXTGEN.S FROM LIFE,  
(SELECT X, Y, CASE S WHEN 0 THEN (CASE N WHEN 3 THEN 1 ELSE 0 END) ELSE (CASE N WHEN 2 THEN 1 WHEN 3 THEN 1 ELSE 0 END) END S  
FROM (SELECT A.X, A.Y, A.S, SUM(B.S) - A.S N  
FROM LIFE A INNER JOIN LIFE B ON ABS(A.X - B.X) < 2 AND ABS(A.Y - B.Y) < 2  
GROUP BY A.X, A.Y, A.S)) NEXTGEN  
WHERE LIFE.X = NEXTGEN.X AND LIFE.Y = NEXTGEN.Y;  

Perhaps you are not familiar with the special syntax of "UPDATE" used here; don't worry. You can find an example at the bottom of UPDATE - SAP HANA SQL and System Views Reference - SAP Library. So far, we've created the first generation successfully.

Play the game of life with SAP HANA

You can repeat step 3 to play further.
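To play several generations in one go, the step-3 UPDATE can also be wrapped in a small SQLScript procedure. The following is a minimal sketch that reuses the self-join variant of step 3; the procedure name PLAY_LIFE is hypothetical.

CREATE PROCEDURE PLAY_LIFE(IN GENERATIONS INTEGER) LANGUAGE SQLSCRIPT AS  
i INTEGER;  
BEGIN  
  -- Repeat the generation update the requested number of times
  FOR i IN 1 .. GENERATIONS DO  
    UPDATE LIFE SET S = NEXTGEN.S FROM LIFE,  
    (SELECT X, Y, CASE S WHEN 0 THEN (CASE N WHEN 3 THEN 1 ELSE 0 END) ELSE (CASE N WHEN 2 THEN 1 WHEN 3 THEN 1 ELSE 0 END) END S  
     FROM (SELECT A.X, A.Y, A.S, SUM(B.S) - A.S N  
           FROM LIFE A INNER JOIN LIFE B ON ABS(A.X - B.X) < 2 AND ABS(A.Y - B.Y) < 2  
           GROUP BY A.X, A.Y, A.S)) NEXTGEN  
    WHERE LIFE.X = NEXTGEN.X AND LIFE.Y = NEXTGEN.Y;  
  END FOR;  
END;  

-- Advance the pattern by three generations
CALL PLAY_LIFE(3);  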

Play with window function
If you think this is the end of the blog, you're wrong. What about playing the game of life with window functions in SAP HANA? As I mentioned in Window function vs. self-join in SAP HANA, if a self-join exists in your SQL statement, please consider whether it is possible to implement the logic with a window function. Now let's play with the window function. First of all, in order to compare with the self-join approach, we need to reset the "LIFE" table to the same initial pattern.

Play the game of life with SAP HANA


1. Calculate # of live neighbours for each cell
In the window function approach, we have two sub-steps as follows.

1) Calculate # of live "vertical" neighbours for each cell
SELECT X, Y, S, (S + IFNULL(LEAD(S) OVER (PARTITION BY X ORDER BY Y), 0) + IFNULL(LAG(S) OVER (PARTITION BY X ORDER BY Y), 0)) N FROM LIFE;  

Play the game of life with SAP HANA

In this sub-step, we just calculate N (North) + C (Center) + S (South) for each C (Center). We partition the "LIFE" table by X, i.e. vertically.

Play the game of life with SAP HANA


For instance, # of live "vertical" neighbours of cell (2, 2) is 2.

Play the game of life with SAP HANA

2) Calculate # of live neighbours for each cell
Based on sub-step 1), we can calculate the final result by partitioning the "LIFE" table horizontally. In this sub-step, we partition the table by Y.

SELECT X, Y, S, (N + IFNULL(LEAD(N) OVER (PARTITION BY Y ORDER BY X), 0) + IFNULL(LAG(N) OVER (PARTITION BY Y ORDER BY X), 0) - S) N  
FROM (SELECT X, Y, S, (S + IFNULL(LEAD(S) OVER (PARTITION BY X ORDER BY Y), 0) + IFNULL(LAG(S) OVER (PARTITION BY X ORDER BY Y), 0)) N FROM LIFE);  

Play the game of life with SAP HANA

2. Apply the rules to get the results of the next generation
SELECT X, Y, CASE S WHEN 0 THEN (CASE N WHEN 3 THEN 1 ELSE 0 END) ELSE (CASE N WHEN 2 THEN 1 WHEN 3 THEN 1 ELSE 0 END) END S  
FROM (SELECT X, Y, S, (N + IFNULL(LEAD(N) OVER (PARTITION BY Y ORDER BY X), 0) + IFNULL(LAG(N) OVER (PARTITION BY Y ORDER BY X), 0) - S) N  
FROM (SELECT X, Y, S, (S + IFNULL(LEAD(S) OVER (PARTITION BY X ORDER BY Y), 0) + IFNULL(LAG(S) OVER (PARTITION BY X ORDER BY Y), 0)) N FROM LIFE));  

Play the game of life with SAP HANA

3. Update the "LIFE" table with the results in step 2
UPDATE LIFE SET S = NEXTGEN.S FROM LIFE,  
(SELECT X, Y, CASE S WHEN 0 THEN (CASE N WHEN 3 THEN 1 ELSE 0 END) ELSE (CASE N WHEN 2 THEN 1 WHEN 3 THEN 1 ELSE 0 END) END S  
FROM (SELECT X, Y, S, (N + IFNULL(LEAD(N) OVER (PARTITION BY Y ORDER BY X), 0) + IFNULL(LAG(N) OVER (PARTITION BY Y ORDER BY X), 0) - S) N  
FROM (SELECT X, Y, S, (S + IFNULL(LEAD(S) OVER (PARTITION BY X ORDER BY Y), 0) + IFNULL(LAG(S) OVER (PARTITION BY X ORDER BY Y), 0)) N FROM LIFE))) NEXTGEN  
WHERE LIFE.X = NEXTGEN.X AND LIFE.Y = NEXTGEN.Y;  

Play the game of life with SAP HANA

Self-join vs. window function
Since we have only played the very small 3x3 game of life with SAP HANA, we cannot compare the performance of self-join and window function yet. In order to compare the performance, we need to generate a bigger grid. We can first create a stored procedure which enables us to generate an NxN grid.

CREATE PROCEDURE GENERATE_LIFE(IN X INTEGER) LANGUAGE SQLSCRIPT AS  
i INTEGER;  
j INTEGER;  
BEGIN  
  DELETE FROM LIFE;  
  FOR i IN 1 .. X DO  
  FOR j IN 1 .. X DO  
  INSERT INTO LIFE VALUES (i, j, ROUND(RAND()));  
  END FOR;  
  END FOR;  
END;  

Then we can call the above stored procedure to generate the initial pattern, e.g., a 100x100 grid.

CALL GENERATE_LIFE(100);  

Now let's do some comparisons. Here we just compare step 2 between self-join and window function since step 3 is just the update operation. Hence, we compare the following two SELECT statements.

Self-join
SELECT X, Y, CASE S WHEN 0 THEN (CASE N WHEN 3 THEN 1 ELSE 0 END) ELSE (CASE N WHEN 2 THEN 1 WHEN 3 THEN 1 ELSE 0 END) END S  
FROM (SELECT A.X, A.Y, A.S, SUM(B.S) - A.S N  
FROM LIFE A INNER JOIN LIFE B ON ABS(A.X - B.X) < 2 AND ABS(A.Y - B.Y) < 2  
GROUP BY A.X, A.Y, A.S);  

Window function
SELECT X, Y, CASE S WHEN 0 THEN (CASE N WHEN 3 THEN 1 ELSE 0 END) ELSE (CASE N WHEN 2 THEN 1 WHEN 3 THEN 1 ELSE 0 END) END S  
FROM (SELECT X, Y, S, (N + IFNULL(LEAD(N) OVER (PARTITION BY Y ORDER BY X), 0) + IFNULL(LAG(N) OVER (PARTITION BY Y ORDER BY X), 0) - S) N  
FROM (SELECT X, Y, S, (S + IFNULL(LEAD(S) OVER (PARTITION BY X ORDER BY Y), 0) + IFNULL(LAG(S) OVER (PARTITION BY X ORDER BY Y), 0)) N FROM LIFE));  
Test environment:
  • SAP HANA SPS08 Rev. 80
  • CPU: 8 cores
  • Memory: 64GB
  • Disk: 200GB

                              N = 100                N = 400
Time with self-join           ~40s                   ~1400s
Time with window function     ~30ms                  ~120ms
# of cells                    100 * 100 = 10,000     400 * 400 = 160,000
From the above table, we can find the following results:

1. The performance of the window function approach is much better than that of the self-join.
2. The time with the window function seems to grow roughly linearly with N.
3. The time with the self-join grows far faster than linearly with N.

We can find some of the reasons in the following two visualization plans. About 99.99% of the time in the self-join plan is spent in the large column search node performing the non-equi join and the filter, and the join result has 25,600,000,000 rows! Meanwhile, the visualization plan of the window function approach shows that the two sequential window nodes only need to consume # of cells rows each.
Visualization plan of self-join (N = 400)

Play the game of life with SAP HANA

Visualization plan of window function (N = 400)

Play the game of life with SAP HANA
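If PlanViz is not at hand, a rough text view of a plan can also be captured with EXPLAIN PLAN. This is only a sketch: the statement name 'life_selfjoin' is an arbitrary label, and the exact set of columns available in EXPLAIN_PLAN_TABLE may differ between revisions.

-- Capture the plan of the self-join variant under an arbitrary statement name
EXPLAIN PLAN SET STATEMENT_NAME = 'life_selfjoin' FOR  
SELECT A.X, A.Y, A.S, SUM(B.S) - A.S N  
FROM LIFE A INNER JOIN LIFE B ON ABS(A.X - B.X) < 2 AND ABS(A.Y - B.Y) < 2  
GROUP BY A.X, A.Y, A.S;  

-- Inspect the operators and their estimated output sizes
SELECT OPERATOR_NAME, OPERATOR_DETAILS, OUTPUT_SIZE  
FROM EXPLAIN_PLAN_TABLE  
WHERE STATEMENT_NAME = 'life_selfjoin';  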

Source: scn.sap.com

Complete End-To-End ABAP For HANA 7.4 SP 09 Development

Steps Involved For Displaying Flight Details:
  1. Creation of ABAP CDS view.
  2. Creation of Gateway O Data service.
  3. Testing of GW service in Gateway client.
  4. Creation of FIORI-Like App.
  5. Deployment of SAP UI5 Application to ABAP Back-End.
Let's start...

1. Creation of ABAP CDS view.

          -  Go to the HANA Studio ABAP perspective and choose the package in which you want to create the CDS view. Right-click on the package - New - Other ABAP Repository Object - DDL Source - Next
          -  Provide a name and description
          -  It looks like below
Complete End-To-End ABAP For HANA 7.4 SP 09 Development

  • Check & Activate.
  • Right-click on the newly created DDL source, i.e. ZCDS_FL, and click Open Data Preview.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development

  • The same CDS view is physically available/accessible in SE11.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development

2. Creation of Gateway OData service

  • Now let's create the OData service which will fetch data (i.e. a Query) from ZCDS_FL_SQL.
  • In terms of OData, HTTP and ABAP, this corresponds to the QUERY operation, the GET method and SELECT * FROM <table> INTO @DATA(itab) respectively.
  • Go to transaction SEGW and create a new project ZFlight_Detail.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • Now right-click on Data Model and choose Import - DDIC Structure. Provide the ABAP DDIC structure name, i.e. ZCDS_FL_SQL, as well as the entity name.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • Click Next and select required fields.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development

  • In the next step, mark FLIGHT_ID and CONNID as key fields and click Finish.
  • At the same time the entity set is created, because earlier we checked the Create Default Entity Set checkbox (it's SP 09).
  • Now let's work with the runtime artifacts: click Generate Runtime Objects (the red and white circle); the screen below will pop up - click the Enter button.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • On successful creation we will see the screen below (plenty of green symbols).

Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • Now register your service under the Service Maintenance folder.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • Click OK and you will get the message that the service was created. We have now maintained the service.
  • Now, to fetch data from the DDIC artifact ZCDS_FL_SQL, we need to implement the Query operation, i.e. a code-based implementation.
  • Now open the runtime artifacts, right-click on the class ZCL_FLIGHT_DETAIL_DPC_EXT and click Go To ABAP Workbench.
  • In edit mode, redefine ZFLIGHT_DETAILSE_GET_ENTITYSET, which is nothing but the implementation of the Query operation that returns multiple rows.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • We are fetching complete set of records. Below is the code.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development

  • We are done with the coding part. Let's move on to testing the GW service.
3. Testing of gateway service in gateway client.
  • Now click on SAP Gateway Client under the ICF Nodes box, or use transaction /IWFND/GW_CLIENT.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development

  • Click on the entity set name and select Zflight_DetailSet.
  • It will automatically generate a URI like the one below; hit the Execute button.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • Status code 200 means it's working fine.
  • We can test the same in a web browser (just copy the link from the above screen): http://XXXXXXXXXXXXXXXXX/sap/opu/odata/sap/ZFLIGHT_DETAIL_SRV/Zflight_DetailSet/
  • After opening the above link in the browser, it will ask for a user ID and password (your SAP credentials) and then display the output.
4. Creation of FIORI-Like App.
  • Open the ABAP perspective - right-click in the Project Explorer - New - Other
  • Type SAPUI and select Application Project

Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • Hit next and provide project name.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development

Complete End-To-End ABAP For HANA 7.4 SP 09 Development

  • After creation of project it seems like below.
Complete End-To-End ABAP For HANA 7.4 SP 09 Development

  • Double click on MAIN.view.xml and paste below code 



<core:View xmlns:core="sap.ui.core" xmlns:mvc="sap.ui.core.mvc" xmlns="sap.m"
              controllerName="zcds_fiori.MAIN" xmlns:html="http://www.w3.org/1999/xhtml">
       <Page title="Flight Details">
              <content>
       <Table id="idProductsTable"
              inset="false"
              items="{path:'/Zflight_DetailSet',
              sorter:{path:'FlightId',
              descending:false}
              }">
              <columns>
                     <Column> <Text text="Flight ID" /> </Column>
                     <Column> <Text text="Flight Number" /> </Column>
                     <Column> <Text text="Flight Date" /> </Column>
                     <Column> <Text text="Plane Type" /> </Column>
                     <Column> <Text text="Price" /> </Column>
                     <Column> <Text text="Currency" /> </Column>
                     <Column> <Text text="Flight Name" /> </Column>
              </columns>
              <items>
                     <ColumnListItem>
                           <cells>
                                  <Text text="{FlightId}" />
                                  <Text text="{Connid}" />
                                  <Text text="{Fldate}" />
                                  <Text text="{Planetype}" />
                                  <Text text="{Price}" />
                                  <Text text="{Currency}" />
                                  <Text text="{RhFligntName}" />
                           </cells>
                     </ColumnListItem>
              </items>
       </Table> </content>
       </Page>
</core:View>

  • Now go to MAIN.controller.js and paste below code.
onInit: function() {
       // Create an OData model for the Gateway service and make it available to the whole app and to this view
       var oModel = new sap.ui.model.odata.ODataModel("http://XXXXXXXX/sap/opu/odata/sap/ZFLIGHT_DETAIL_SRV/", false);
       sap.ui.getCore().setModel(oModel);
       this.getView().setModel(oModel);
},

Note : 'XXXXXXXXX' would be your server path.

  • Save and right click on ZCDS_FIORI -  Run As - Web App Preview and result is below
Complete End-To-End ABAP For HANA 7.4 SP 09 Development
  • Now go to the browser and paste the URL that is generated for index.html.
  • If everything went right, then it's time to celebrate.
Result :

Complete End-To-End ABAP For HANA 7.4 SP 09 Development

5. Deployment of SAP UI5 Application to ABAP Back-End

Source: scn.sap.com

How to Deploy and Run SAPUI5 application on ABAP Server

Prerequisite

To run the application on the ABAP server, we need the ABAP Development Tools and the SAPUI5 ABAP Repository Team Provider plugin installed in the Eclipse IDE.

How to Deploy and Run SAPUI5 application on ABAP Server

And on backend SAP system, you need to have below components.

How to Deploy and Run SAPUI5 application on ABAP Server

Before deploying this application to the SAP ABAP AS, the Run As menu displays the options below. Once the application has been successfully deployed, it will also show an option for testing on the ABAP server.

How to Deploy and Run SAPUI5 application on ABAP Server

Procedure

As displayed below, we need to right click on Project and go to menu Team --> Share Project.

How to Deploy and Run SAPUI5 application on ABAP Server

Select repository type as SAPUI5 ABAP Repository.

How to Deploy and Run SAPUI5 application on ABAP Server

In this step, you need to configure connection to SAP system where you want to deploy your SAPUI5 application.

How to Deploy and Run SAPUI5 application on ABAP Server

After successful connection to SAP system, it will display properties of connection.

How to Deploy and Run SAPUI5 application on ABAP Server

On click of Next button, logon screen will be displayed. Provide credentials of SAP system.

How to Deploy and Run SAPUI5 application on ABAP Server

Now you will be provided with 2 options, as shown below. We will select the option to create a new BSP application. By default, the name of the BSP application will be set to your SAPUI5 project name, but in that case you will get the error message displayed in the screen below.

How to Deploy and Run SAPUI5 application on ABAP Server

You need to give a Z* name to your BSP application. You can also browse and select the package in which you want to save your BSP application.

How to Deploy and Run SAPUI5 application on ABAP Server

In this step, you select the transport request. You will be provided with the 3 options below.

How to Deploy and Run SAPUI5 application on ABAP Server

You will get below kind of popup message for Server Runtime Version Check. Click OK.

How to Deploy and Run SAPUI5 application on ABAP Server

Final step of the deployment will be submitting the project. Select option Team --> Submit.

How to Deploy and Run SAPUI5 application on ABAP Server


As displayed below, it will ask for the resources to submit. Click Finish.

How to Deploy and Run SAPUI5 application on ABAP Server

With this step, the SAPUI5 application is deployed to the SAP ABAP AS server, and the SAPUI5 project structure will look like below.

Notice the changes between before and after deployment.

How to Deploy and Run SAPUI5 application on ABAP Server

You will also see that a Run on ABAP Server option has been added, as displayed below. By selecting this option, the SAPUI5 application will run on the ABAP server.

How to Deploy and Run SAPUI5 application on ABAP Server

Here you can see the URL path, which is nothing but the BSP application URL path created in transaction SICF.

How to Deploy and Run SAPUI5 application on ABAP Server

If you look in the backend SAP system, the BSP application has been created as displayed below.

How to Deploy and Run SAPUI5 application on ABAP Server

Also you can see service getting created under the path /sap/bc/ui5_ui5/sap/zflightdemo in SICF transaction.

How to Deploy and Run SAPUI5 application on ABAP Server

Closing Remarks

We can easily deploy and run a SAPUI5 application on the ABAP server. Basically, a BSP application gets created in the backend SAP system when we share and submit the SAPUI5 project.

Updates (Resubmitting UI5 project)

We can easily resubmit changes made to the UI5 application. After making changes in the UI5 application, you need to submit the project again. In that case, you will see the screen below with the operation Update.

How to Deploy and Run SAPUI5 application on ABAP Server

I created a UI5 application with an input field and submitted it as explained in this technical article. Below is the screen of the executed BSP application that was created for the UI5 application.

How to Deploy and Run SAPUI5 application on ABAP Server

After that, I added a TextView UI element and submitted the project again. I executed the BSP application again, and it reflected the changes I had made to my UI5 application.

How to Deploy and Run SAPUI5 application on ABAP Server

Source: scn.sap.com

SAP HANA Information Composer V 1.0 - External Data Upload

Introduction.
This document will walk you through the process of creating tables and uploading data into the HANA database without writing a single SQL statement. The tool creates a physical table in the HANA database in accordance with the sample data set we provide. It removes all the technicalities of modeling and provides an easy-to-use, simple modeling environment with limited functionality. Hence it bridges the gap between the backend system and business users.
It allows business users to gather their own data, upload it, and combine/compare it with corporate sources. They do not need to understand the various modeling technicalities; the tool itself is intelligent enough to suggest joins/unions etc. once it has analysed the data.
Hence SAP HANA (SP3) delivers the first version of a “self-service data warehousing” solution.
What is Information Composer?
Information Composer V 1.0 is a self-service BI data warehousing tool; it is a modeling environment for non-technical people such as end users. A user does not need to be a computer expert to analyze the data set.
You can import external data in workbook format (.xls, .xlsx or .csv) into the SAP HANA database and use it to create Information Views for analysis.
It provides a centralized, secure environment for highly trusted corporate BI data. Separate roles are assigned to Information Composer users to maintain highly secure access.

Note: It is different from the HANA Modeler. The two target different sets of users. The HANA Modeler is a more powerful tool targeting users with sound technical knowledge and hence offers extensive functionality, whereas Information Composer is intended for business users with less technical knowledge. It provides a simple set of functions along with a user-friendly interface.

Scenario example
Uploading local data is needed in situations where business users need to combine data from a local system with already loaded BI data. Examples are profitability reports, where data from a local system has to be brought in from time to time for analysis, or cases where a business user needs to upload day-to-day customer comments to the backend system.

Functionality.

1. Data extraction: It helps to extract, cleanse and preview data and automates the creation of the physical table in the HANA database.
2. Manipulating data: It helps us to combine two HANA objects (physical tables, analytic views, attribute views and calculation views), allows us to add calculated columns (selected as attribute or measure) and creates an information view that can be consumed by SAP BusinessObjects tools such as SAP BusinessObjects Analysis, SAP BusinessObjects Explorer, and commonly used tools like MS Excel.
3. It provides a centralized IT service in the form of a URL that can be accessed from anywhere.

Note: In HANA terms “Information Views” can be of type Attribute View, Analytic View, and Calculation View. The Information View that SAP HANA Information Composer creates are of type Calculation View.

System landscape

SAP HANA Information Composer V 1.0 - External Data Upload

Showing where the Information Composer is integrated in the HANA landscape (new inclusion)  - Source is SAP.

Uploading data
Large amounts of data (up to 5 million cells) can be uploaded using Information Composer.
Below is the address for accessing information Composer:
http://<server>:<port>/IC  

First step is to logon to the Information Composer system.

SAP HANA Information Composer V 1.0 - External Data Upload

Note: If one is using the HANA sandbox Personal Image hosted in the cloud, one must let the process “Start Information Composer” run for 2-3 minutes (it needs to start and initialize all the Java processes). After 2-3 minutes, you can click on the IE shortcut for “Information Composer” on the desktop, and it will open the IC login page (as shown above).

1. One can perform external data load or data manipulations with this tool (as shown below, for this article we will only consider the UPLOAD)

SAP HANA Information Composer V 1.0 - External Data Upload

2. One can load data from .xls, .xlsx or .csv files, or data from the clipboard (copied earlier).

SAP HANA Information Composer V 1.0 - External Data Upload

- > One can even select sheet number from a workbook.
- > Data can be loaded along with the header

Note: As soon as the data is uploaded or the source is selected, the data can be saved as a draft. It can then be viewed under the table folder in the IC_TABLES schema as “ic/<userid>/saved (or temp)/<alphanumeric characters>”. The “saved” entry only appears if the data is saved during the course of uploading the external data. Once the data is published, these additional entries disappear and are replaced by “ic/<name of the table>”. Table-related metadata is stored under the “SAP_IC” schema, in tables such as IC_MODELS and IC_SPREADSHEETS.
One can find the details of tables created using IC in these tables, via the created-by column (in our case there is an entry for S0008812557 and the status appears as SAVED).

SAP HANA Information Composer V 1.0 - External Data Upload

Metadata from these system tables are taken for “My Data” section (explained later).

Once the data is published, one cannot rename the table. In that case one needs to delete the table. This activity cannot be carried out from IC; for the deletion, one needs to log on to the HANA Modeler and write a simple DROP statement, as sketched below.
Note: don’t forget to select the correct schema.
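A minimal sketch of such a deletion from the SQL console of the HANA Modeler; the table name "ic/MY_UPLOAD" is a hypothetical example and must be replaced with the actual name found in the IC_TABLES schema (the generated names contain a slash, so they have to be quoted):

-- List the tables Information Composer has created (names start with "ic/")
SELECT TABLE_NAME FROM SYS.TABLES WHERE SCHEMA_NAME = 'IC_TABLES';

-- Drop the published table; "ic/MY_UPLOAD" is a hypothetical example name
DROP TABLE "IC_TABLES"."ic/MY_UPLOAD";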
Snapshot of IC_SPREADSHEETS table under SAP_IC schema.

SAP HANA Information Composer V 1.0 - External Data Upload

The other way of uploading the data into HANA system is to copy a set of data into the clipboard and then upload that with the help of IC*.

SAP HANA Information Composer V 1.0 - External Data Upload

*IC stands for Information Composer. One can choose header part along with the data set. In that case the “Upload with header” is needed to be checked.

3. Once the data is uploaded (in temporary storage), it will display a preview of the data set uploaded. Spyglass on the right hand side will help to search a particular entry.

SAP HANA Information Composer V 1.0 - External Data Upload

Note: By default the data is stored in columnar format under the schema IC_TABLES, where the table name starts with “ic/”.
Below is a button for a summarized view. It gives an idea of the variety of the data available, where the blue color indicates how often a particular entry is repeated.

SAP HANA Information Composer V 1.0 - External Data Upload.

4. Sometimes the data set may contain inconsistent data (in our case the country may appear as India, india or INDIA) although all of the values have the same meaning. So the tool offers an additional, optional step for cleansing the data.

SAP HANA Information Composer V 1.0 - External Data Upload

The tool is intelligent enough to find variations in the data and estimate the probability that they are the same. It offers manual correction, or one can drag and drop the differences into a single stack as shown.

SAP HANA Information Composer V 1.0 - External Data Upload

If we move our cursor on top of any field, it will display the property of that column. One can change the data source any time during the upload process with the help of “Change Source” link.

SAP HANA Information Composer V 1.0 - External Data Upload

5. Next step is to classify the data, whether it is a measure or an attribute (measures can be used for mathematical calculations). The tool itself shows all the fields that can be considered as a measure. One needs to select them manually by selecting the respective checkbox.

SAP HANA Information Composer V 1.0 - External Data Upload

Note: A separate index field (which is also the primary key) is created by default for every physical table created with the help of IC.
6. Here comes the most important step: publication, the final step of creating and loading the physical table into the HANA database. Provide a technical name and description (make sure no other table with the same name exists inside the IC_TABLES schema).

SAP HANA Information Composer V 1.0 - External Data Upload

Two roles can be assigned to users who will use Information Composer. IC_MODELER is for creating physical tables, uploading data and creating information views. The other one is IC_PUBLIC; this role allows users to view information views created by other users, but does not allow them to upload data or create any information views using IC. (More details are at the end of this article.)

SAP HANA Information Composer V 1.0 - External Data Upload

Once “Share data with other users” is selected, users having the IC_PUBLIC role can view the uploaded data set / physical table.
“Start a new information View based on this data” will automatically jump to the next modeling screen, where the present data set will act as the first data source.
Once “Publish and Finish” is clicked, a physical table is created in the HANA database. If everything goes well, a success message is displayed (as below).

SAP HANA Information Composer V 1.0 - External Data Upload

A quick-access launch bar can be viewed on the left side of the IC, displaying all the recently created and publicly available tables. It takes its information from IC_SPREADSHEETS and IC_MODELS.

SAP HANA Information Composer V 1.0 - External Data Upload

SAP HANA Information Composer V 1.0 - External Data Upload
Various symbols are displayed alongside the tables, explained below:
SAP HANA Information Composer V 1.0 - External Data Upload
This icon helps us to filter on "Draft user data", "Private user data", or "Public user data".
This icon is used to rename, delete, or share the data (data cannot be renamed once it is published).
SAP HANA Information Composer V 1.0 - External Data Upload
This symbol shows that the dataset is already published.
SAP HANA Information Composer V 1.0 - External Data Upload
This symbol indicates that it is shared with other users (having at least the IC_PUBLIC role).

System Requirements - Source is SAP
Server Requirements:
  • At least 2GB of available RAM is required
  • Java 6 (64-bit) must be installed on the server
  • The Information Composer Server must be physically located next to the HANA server

Client Requirements:
  • Internet Explorer with Silverlight 4 installed
User Roles - Source is SAP
Two sets of user roles exist for Information Composer. Assign these roles to users who will use the Information Composer:
  • IC_MODELER Allows users to work with Information Composer and upload data, create Information Views and so on. If you only have the IC_MODELER role assigned to your user, then you can only create and use private Information Views (that is, they cannot be seen by other users). The IC_MODELER role includes the IC_PUBLIC role. This means that users that are granted with this role already include the IC_PUBLIC role
  • IC_PUBLIC Allows users to work with Information Composer and to see workbooks and Information Views that were uploaded or created by other users. For example, if you choose to share your data or Information views with other users, then anyone with the IC_PUBLIC role can see this data.
Source: scn.sap.com

SAP Hana Information Composer - for the Non-technical User - ASUG Webcast

SAP HANA Information Composer (new with SAP HANA 1.0 SP03) is a modeling tool developed specifically for the needs of non-technical users. From installation through end-user experience, you will leave this session with a comprehensive understanding of this modeling environment and how it fits in to your modeling options. You will also learn how more advanced data architects and centralized IT can benefit from and extend scenarios initiated by non-technical users in the SAP HANA Information Composer via the SAP HANA Modeler.
Agenda:
  • User Experience
  • Installation & Admin
User Experience

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 1: Source: SAP

Figure 1 shows the broader picture of SAP Hana, looking at main flow & architecture

3 key layers:
1) Movement of data into hana
2) Management & modeling of data
3) Consumption layer

Composer comes in Modeling layer with 2 tools:
1) Hana Modeling studio
2) Information Composer

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 2: Source: SAP

Modeler is an Eclipse based tool

The Hana modeler is the primary modeler

You can generate time records, different layers, attribute views for descriptive data, analytic view modeling including measures and facts

It uses the multi-dimensional star schema model

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 3: Source: SAP

Information Composer has two main functions as shown in Figure 3 – acquires data and simple data modeling

Focused on non-technical user to allow them to quickly take information, such as a spreadsheet or clipboard and create their own data in Hana and combine with central models from IT.

It is a step-through wizard type environment

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 4: Source: SAP

Figure 4 shows the comparison of the Hana Modeler versus the Information Composer.

The Information Composer user may not know or fully understand the concept of joins/unions, but does know what they want to accomplish – they know their data itself.

Workspaces are kept private for their own use or shared with other users.

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 5: Source: SAP

Figure 5 shows the business and IT benefits.  “Bending and not breaking IT” is what Lori said.

Artifacts created  are stored in the repository – same as the Hana modeler.  It provides transparency and authorizations intact.  Same authorization roles & management

It is not a separate authorization concept itself.  Avoid loop holes, says Lori.

It is business by design: an end user has something to create, perhaps a blend of 2 different data models, maybe something to push out, and then hands it over to the architect who takes it to another level.

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 6: Source: SAP

Hana Information Composer has URL based interface; give users URL and get welcome screens

Lori then showed some demos on SAP Hana Information Composer – please see http://www.experiencesaphana.com/videos/1234
The views are published to HANA and then are visible to the HANA front end.

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 7: Source: SAP

Figure 7 shows the upload features.  IT may be concerned about loads of large amounts of data so there is a restriction of 5 million cells – it is not a technical restriction.  A request has been made to make this a configurable value.

Installation & Administration

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 8: Source: SAP

It is on SMP via ZIP file and will install a lean Java server with a Silverlight UI on the front end. You choose http or https configuration. Port defaults 8080; if you have tomcat you want to make sure the port for Information Composer is unique.

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast

Figure 9: Source: SAP

Figure 9 shows the installation steps and the system configuration.  It creates the IC_TABLES schema tables.

SAP Hana Information Composer  - for the Non-technical User - ASUG Webcast


Figure 10: Source: SAP
Figure 10 shows two new roles after installation – IC_MODELER (to use Information Composer) and IC_PUBLIC (authorized to use tables)

Source: scn.sap.com

LEARN WITH US : S/4 HANA Analytics!!

What it is?

S/4 HANA stands for the Simple 4th Generation Business Suite solution from SAP, which runs on SAP HANA. SAP HANA is one of the preferred products among companies seeking an optimized enterprise solution, because the product has come a long way from its predecessors, which had transaction processing and analytical processing on different platforms, and that meant more time spent on getting data out and making decisions.

It is well known as The Next Generation Business Suite and is the greatest innovation since R/3 and ERP in the SAP world. It unites software and people to let businesses run in real time, networked and in a simple way. It has built-in analytics for hybrid transactional and analytical applications.

What does it do?

LEARN WITH US : S/4 HANA Analytics!!

SAP S/4 HANA provides enhanced analytical capabilities thanks to its architecture based on SAP HANA. SAP HANA is all about insight and immediate action on live data, removing the need for batch processing and ETL. Some of the best features of S/4 HANA analytics are cross-system online reports, a built-in BI system, Smart Business, analytical applications and many more. Real-time reporting is available from the single SAP S/4HANA component, which also helps you reach many other tools for analytical purposes by creating quick queries.



How does real-time data and historical data work together?

SAP S/4 HANA Analytics + SAP Business Warehouse

Now let us have a closer look from the data perspective!

When a BW system is running on an SAP HANA database, the BW data is stored in a special schema known as the BW-managed schema and is exposed via InfoProviders (e.g. DataStore Objects, Info Objects, etc.). In other (remote) SAP HANA schemas, data can be stored in SAP HANA tables and accessed via HANA views which are based e.g. on Core Data Services (CDS) technology.

LEARN WITH US : S/4 HANA Analytics!!

You can make data available from any SAP HANA database schema of your choice in BW. You can also make BW data (data from the BW-managed schema in the SAP HANA database) available in a different SAP HANA schema. To do so you can use virtual access methods such as Open ODS Views (using HANA Smart Data Access for remote scenarios) and data replication methods like SAP LT Replication Server.

S/4 HANA analytics uses the concept of instant insight-to-action by means of its built-in analytics for hybrid transactional and analytical processes. One of the applications that works on this principle is the SAP Smart Business cockpit, which uses advanced analytics to enable the business user to see real-time data instantly in order to resolve any business situation. These cockpits are individualized, more accurate and more collaborative, and can be operated from anywhere, anytime.



The process of combining real-time data and multi-sourced data with S/4 HANA analytics and SAP Business Warehouse respectively has helped the company provide a hybrid solution, which is a strategic move. S/4 HANA analytics complements SAP BW (powered by SAP HANA), helping organizations towards better services and decision making.

Source: scn.sap.com

SAP HANA SPS 12 What's New: Security - by the SAP HANA Academy

In the upcoming weeks we will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA Support Package Stack (SPS) 12.

For Security for the SPS 11 release, see SAP HANA SPS 11 What's New: Security - by the SAP HANA Academy

Overview Video

SAP HANA Security - SPS 12 - YouTube

What's New?

Security Administration with SAP HANA Cockpit

The tile catalog SAP HANA Security Overview has a new tile: Authentication, which lists the status of the password policy and any SSO configuration. The tile opens the Password Policy and Blacklist app.


The new Password Policy and Blacklist app allows you to view and edit password policy and blacklist of the SAP HANA database. In previous versions, SAP HANA studio was required for these tasks.


The Auditing tile now allows you to configure auditing: enable/disable, audit trail targets and create/change/delete audit policies. In previous versions, SAP HANA studio was required for these tasks.


The Data Volume Encryption app now allows you to change the root key used for data volume encryption. Alternatively, the root key can be changed using SQL (see below). In previous versions, the command line tool hdbnsutil was required to perform this task.


You can also see a history of root key changes using the view M_PERSISTENCE_ENCRYPTION_KEYS.
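For example, the key change history can be read with a simple query (the columns returned depend on the revision):

-- Show the history of data volume encryption root key changes
SELECT * FROM M_PERSISTENCE_ENCRYPTION_KEYS;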

Authorization

Two new system views are available for analysing user authorisations:
  • EFFECTIVE_PRIVILEGE_GRANTEES
  • EFFECTIVE_ROLE_GRANTEES
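For example, to see which users effectively hold a given role, a query along the following lines can be used. This is a sketch that assumes the view exposes ROLE_NAME, GRANTEE and GRANTEE_TYPE columns; 'MONITORING' is just an example role name.

-- Who effectively holds the MONITORING role, directly or via nested roles?
SELECT GRANTEE, GRANTEE_TYPE  
FROM EFFECTIVE_ROLE_GRANTEES  
WHERE ROLE_NAME = 'MONITORING';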



Two new roles are available to support users administrating SAP HANA using SAP Solution Manager and SAP NetWeaver tools:
  • sap.hana.admin.roles::SolutionMangagerMonitor
  • sap.hana.admin.roles::RestrictedUserDBSLAccess

Authentication

You can now disable authentication mechanisms that are not used in your environment.


SAP HANA smart data access now supports SSO with Kerberos and Microsoft Active Directory for connections to SAP HANA remote sources.

Encryption

You can change the root key for data volume encryption using either SAP HANA cockpit or SQL (note that no native UI for HANA studio has been included).
SQL> ALTER SYSTEM PERSISTENCE ENCRYPTION CREATE NEW ROOT KEY

SAP HANA studio now supports client certificate validation for the SAP HANA database connection.


The SAP HANA user store (hdbuserstore) now also supports JDBC connections and multitenant databases.

Auditing

Several new user actions in the SAP HANA database can now be audited:
  • CREATE | ALTER | DROP PSE
  • CREATE | DROP CERTIFICATES
  • CREATE | DROP SCHEMA

Cross-database queries in SAP HANA multitenant database containers are now audited in the tenant database in which the query is executed.

The maximum length of an audited statement can be set using the system parameter audit_statement_length in the section [auditing configuration] of global.ini.
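A sketch of setting this parameter via SQL, assuming a system-wide change and an example value of 5000 characters:

-- Set the maximum audited statement length (5000 is just an example value)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')  
SET ('auditing configuration', 'audit_statement_length') = '5000'  
WITH RECONFIGURE;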

Security Checklists and Recommendations

A new guide has been added to the SAP HANA documentation set: SAP HANA Security Checklists and Recommendations. This guide extends and replaces the Security Configuration Checklist paragraph from the SAP HANA Security Guide.


Security for SAP HANA Extended Application Services, Advanced Model

A new paragraph has been added to the SAP HANA Security Guide on the topic of Extended Application Services:

Source: scn.sap.com

Understand HANA SPS and supported Operating Systems

HANA Revision Strategy

SAP is shipping regular corrections and updates. Corrections are shipped in the form of Revisions and Support Packages of the product’s components. New HANA capabilities are introduced twice a year in form of SAP HANA Support Package Stacks (SPS). The Datacenter Service Point (DSP) is based upon the availability of a certain SAP HANA revision, which had been running internally at SAP for production enterprise applications for at least two weeks before it is officially released.

See SAP Note “2021789 - SAP HANA Revision and Maintenance Strategy” for more details.

Understand HANA SPS and supported Operating Systems

Supported Operating Systems

According to SAP Note “2235581 – Supported Operating Systems” the following two Linux distributions can be used with the SAP HANA Platform.
  • SUSE Linux Enterprise Server for SAP Applications (SLES for SAP)
  • Red Hat Enterprise Linux for SAP HANA (RHEL for SAP HANA)
All information in this document refers to these two products from SUSE and Red Hat only.

Several additional notes apply when selecting an operating system release for SAP HANA.
  • HANA SPS revision notes: HANA revision (x) requires a minimum OS release (y) – e.g. SAP Notes 2233148 & 2250138 - SLES 11 SP2 minimum for HANA SPS11
  • Hardware that has been certified within the SAP HANA hardware certification program has been certified for a list of available combinations on a specific operating system version.
  • Only host types released by SAP hardware partners are suitable for using SAP software productively on Linux (SAP Note 171356)
  • SAP cannot support software from third-party providers (e.g. the OS) which is no longer maintained by the manufacturer (SAP Note 52505)
SUSE Linux Enterprise Server for SAP  (SLES for SAP)
With SUSE Linux Enterprise Server for SAP (SLES for SAP) there are major releases 11 & 12 with their appropriate service packs. Several Service Packs (SP) are available in an SUSE release and it is possible to stay with a specific SP until its support ends.  The general support for a Service pack ends at a defined date. These “general end” dates are communicated on SUSE Web pages.

Understand HANA SPS and supported Operating Systems

At the RTC (Release To Customer) date of a HANA SPS, there are supported SUSE Linux Enterprise Server for SAP (SLES for SAP) versions available which can form a supported stack. When a new SAP HANA SPS is released, new “SLES for SAP” versions are available to build a supported combination:

Understand HANA SPS and supported Operating Systems

Red Hat Enterprise Linux for SAP HANA (RHEL for SAP HANA)

Starting with SAP HANA Platform SPS 08 Red Hat Enterprise Linux for SAP HANA is supported for SAP HANA. Red Hat follows an approach with major & minor releases. With Extended Update Support (EUS) for specific RHEL for SAP HANA releases it is possible to remain on a specific minor release even if there is already a subsequent minor release available.

You can find more details about  Life Cycle on Red Hat Enterprise Linux product web pages. When a new SAP HANA SPS is released new RHEL for SAP HANA versions are available which can be used as a supported stack:

Understand HANA SPS and supported Operating Systems

HANA SPS & OS version timeline

We see a timeline of HANA SPS and available OS releases from SUSE and Red Hat. The next overview and all earlier timelines show the lifecycles of HANA and SLES/RHEL, not an official support status for HANA releases.

Understand HANA SPS and supported Operating Systems

The marked intersections show those points in time when there is a supported release from the OS vendors at the RTC date of a new HANA SPS. 
Older operating system releases are no longer supported and disappear from the timeline after SUSE’s “general support end” for “SLES for SAP” or Red Hat’s “EUS” end date, and are replaced with a new OS version or service pack.

OS Validation

End of Life of an OS

The sample in the next picture shows SLES for SAP 11 SP3, which has a general support end date in January 2017. This is the point in time when validation stops for that OS release, and subsequent HANA SPS are no longer supported on SLES for SAP 11 SP3.

Understand HANA SPS and supported Operating Systems

Sunrise of an OS

If a new OS release is available, SAP plans to support it with the next upcoming HANA SPS. The following sample timeline shows SLES for SAP 11 SP4, which was available before HANA SPS11 was released. Subsequent SPS releases are also supported on SLES for SAP 11 SP4, since it has not reached its general support end date yet.

Understand HANA SPS and supported Operating Systems

SLES  for SAP 11 SP4 was validated and SAP supports this OS version with HANA SPS11.

Support Matrix

This validation methodology and the timelines of HANA revisions and OS releases leads to a “Support Matrix” which is presented in this sample table:

Understand HANA SPS and supported Operating Systems

The corridor shows the combinations of OS releases with HANA SPS which will be supported by SAP.

Source: scn.sap.com

HANA Monitoring Handy SQL's

Monitoring Memory Usage

Used Memory
The total amount of memory in use by SAP HANA is referred to as its Used Memory. This is the most precise indicator of the amount of memory that the SAP HANA database uses at any time

When used: to understand the current used memory in HANA when HANA alerts show usage greater than the licensed memory. Understanding memory usage by component helps in troubleshooting and in performing the necessary memory clean-up actions.

Display the current size of the Used Memory; you can use the following SQL statement

SELECT
        ROUND(SUM(TOTAL_MEMORY_USED_SIZE/1024/1024/1024),
        2) AS "Used Memory GB"
FROM SYS.M_SERVICE_MEMORY;

Display current used memory for Column Store Tables

SELECT
        ROUND(SUM(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Column Tables MB Used"
FROM M_CS_TABLES;

Display current memory used breakdown by Schema

SELECT
        SCHEMA_NAME AS "Schema",
        ROUND(SUM(MEMORY_SIZE_IN_TOTAL) /1024/1024) AS "MB Used"
FROM M_CS_TABLES
GROUP BY SCHEMA_NAME
ORDER BY "MB Used" DESC;

Display memory usage by components

SELECT
        host,
        component,
        sum(used_memory_size) used_mem_size
FROM PUBLIC.M_SERVICE_COMPONENT_MEMORY
group by host,
        component
ORDER BY sum(used_memory_size) desc;

Database resident
Resident memory is the physical memory actually in operational use by a process.

SELECT SUM(PHYSICAL_MEMORY_SIZE/1024/1024/1024) "Database Resident" FROM M_SERVICE_MEMORY;

Find the total resident on each node and physical memory size

SELECT
        HOST,
        ROUND(USED_PHYSICAL_MEMORY/1024/1024/1024,
        2) AS "Resident GB",
        ROUND((USED_PHYSICAL_MEMORY + FREE_PHYSICAL_MEMORY)/1024/1024/1024,
        2) AS "Physical Memory GB"
FROM PUBLIC.M_HOST_RESOURCE_UTILIZATION;

Find total Resident

SELECT
        T1.HOST,
        (T1.USED_PHYSICAL_MEMORY + T2.SHARED_MEMORY_ALLOCATED_SIZE)/1024/1024/1024 "Total Resident"
FROM M_HOST_RESOURCE_UTILIZATION AS T1 JOIN (SELECT
        M_SERVICE_MEMORY.HOST,
        SUM(M_SERVICE_MEMORY.SHARED_MEMORY_ALLOCATED_SIZE) AS SHARED_MEMORY_ALLOCATED_SIZE
       FROM SYS.M_SERVICE_MEMORY
       GROUP BY M_SERVICE_MEMORY.HOST) AS T2 ON T2.HOST = T1.HOST;

Maximum peak used memory
SAP HANA database tracks the highest-ever value of Used Memory reached since the database was started. In fact, this is probably the single most significant memory indicator that you should monitor as an overall indicator of the total amount of memory required to operate the SAP HANA database over a long period of time.

SELECT
        ROUND(SUM("M")/1024/1024/1024,
       2) as "Max Peak Used Memory GB"
FROM (SELECT
        SUM(CODE_SIZE+SHARED_MEMORY_ALLOCATED_SIZE) AS "M"
       FROM SYS.M_SERVICE_MEMORY
       UNION SELECT
        SUM(INCLUSIVE_PEAK_ALLOCATION_SIZE) AS "M"
       FROM M_HEAP_MEMORY
       WHERE DEPTH = 0);

Peak used memory
SAP HANA maintains a special Used Memory indicator, called the Peak Used Memory. This is useful to keep track of the peak value (the maximum, or “high water mark”) of Used Memory over time. Here is how to read the Peak Used Memory:

SELECT
        ROUND(SUM("M")/1024/1024/1024,
       2) as "Peak Used Memory GB"
FROM (SELECT
        SUM(CODE_SIZE+SHARED_MEMORY_ALLOCATED_SIZE) AS "M"
       FROM SYS.M_SERVICE_MEMORY
       UNION SELECT
        SUM(INCLUSIVE_PEAK_ALLOCATION_SIZE) AS "M"
       FROM M_HEAP_MEMORY_RESET
       WHERE DEPTH = 0);

Memory usage in server

free -g | awk '/Mem:/ {print "Physical Memory: " $2 " GB."} /cache:/ {print "Resident: " $3 " GB."}'

Memory Cleanup: Forcing the Garbage Collector from the Server
Log on to the HANA server, open HDBAdmin.sh and navigate to Services -> Console.
Select the node where the garbage collection is to be triggered and execute the command below.

mm gc -f
The garbage collector will be triggered and will free up memory. This will not unload the tables.

Resetting Monitoring Views
When used: when testing a report, or when you need to monitor the peak memory usage of a SQL statement, I/O, memory object throughput, or statistics about garbage collection jobs. The statements below allow you to reset these statistics.

Memory allocator statistics
M_HEAP_MEMORY view contains information about memory consumption of various components in the system.
ALTER SYSTEM RESET MONITORING VIEW SYS.M_HEAP_MEMORY_RESET;

M_CONTEXT_MEMORY view contains information about memory consumption grouped by connections and/or users.

ALTER SYSTEM RESET MONITORING VIEW SYS.M_CONTEXT_MEMORY_RESET;

File access statistics
M_VOLUME_IO_STATISTICS_RESET view shows information about basic I/O operations on I/O subsystems (that is, paths).

ALTER SYSTEM RESET MONITORING VIEW SYS.M_VOLUME_IO_STATISTICS_RESET;

Memory object statistics
M_MEMORY_OBJECTS_RESET view provides information about the number and size of resources currently in the resource container and about the throughput of the resource container.

ALTER SYSTEM RESET MONITORING VIEW SYS.M_MEMORY_OBJECTS_RESET;

Garbage collection/history manager statistics
M_GARBAGE_COLLECTION_STATISTICS_RESET view shows various statistics about garbage collection jobs.

ALTER SYSTEM RESET MONITORING VIEW SYS.M_GARBAGE_COLLECTION_STATISTICS_RESET;

Schema/Tables Monitoring
Find Tables loaded into memory & delta records
When used: to see which tables are loaded into memory at any given time. If a report is running slowly, check whether the table is loaded into memory; although tables are loaded lazily, it is a best practice to have the table loaded into memory.

SELECT
       LOADED,
       TABLE_NAME,
        RECORD_COUNT,
        RAW_RECORD_COUNT_IN_DELTA ,
        MEMORY_SIZE_IN_TOTAL,
        MEMORY_SIZE_IN_MAIN,
        MEMORY_SIZE_IN_DELTA
from M_CS_TABLES
where schema_name = 'SCHEMA'
order by RAW_RECORD_COUNT_IN_DELTA Desc

To drill down further and see which columns are loaded / not loaded, use the query below.
Select top 100 LOADED,
HOST,
TABLE_NAME,
COLUMN_NAME,
MEMORY_SIZE_IN_TOTAL
from PUBLIC.M_CS_COLUMNS
WHERE SCHEMA_NAME = 'SCHEMA'
AND LOADED <> 'TRUE'

MERGE DELTA
See if there is delta to be merged. RAW_RECORD_COUNT_IN_DELTA will provide the delta count.
SELECT
        LOADED,
       TABLE_NAME,
        RECORD_COUNT,
        RAW_RECORD_COUNT_IN_DELTA ,
        MEMORY_SIZE_IN_TOTAL,
        MEMORY_SIZE_IN_MAIN,
        MEMORY_SIZE_IN_DELTA
from M_CS_TABLES
where schema_name = 'SCHEMA'
order by RAW_RECORD_COUNT_IN_DELTA Desc

Forcing delta Merge
UPDATE SCHEMA.COLUMN_STATISTICS MERGE DELTA INDEX;
Smart merge
UPDATE <table_name> MERGE DELTA INDEX WITH PARAMETERS ('SMART_MERGE'='ON')
Find Auto Merge On
select TABLE_NAME, AUTO_MERGE_ON from SYS.TABLES

Find Compression
When used: To see the uncompressed size and the compression ratio in HANA for the loaded tables.

SELECT top 100 "SCHEMA_NAME",
sum("DISTINCT_COUNT") RECORD_COUNT,
sum("MEMORY_SIZE_IN_TOTAL") COMPRESSED_SIZE,
sum("UNCOMPRESSED_SIZE") UNCOMPRESSED_SIZE,
(sum("UNCOMPRESSED_SIZE")/sum("MEMORY_SIZE_IN_TOTAL")) as COMPRESSION_RATIO,
100*(sum("UNCOMPRESSED_SIZE")/sum("MEMORY_SIZE_IN_TOTAL")) as COMPRESSION_PERCENTAGE
FROM "SYS"."M_CS_ALL_COLUMNS"
GROUP BY "SCHEMA_NAME"
having sum("UNCOMPRESSED_SIZE") >0
ORDER BY UNCOMPRESSED_SIZE DESC ;
To drill down to a more detailed level and identify which type of compression is applied to each column, and the compression ratio, use the query below.
select
        COLUMN_NAME,
        LOADED,
        COMPRESSION_TYPE,
        MEMORY_SIZE_IN_TOTAL,
        UNCOMPRESSED_SIZE,
        COMPRESSION_RATIO_IN_PERCENTAGE as COMPRESSION_FACTOR
from M_CS_COLUMNS
where schema_name = 'SCHEMA'

Forcing compression on a table
update SCHEMA.COLUMN_STATISTICS  with parameters ('OPTIMIZE_COMPRESSION' = 'TRUE');

Find which node is active
To find which node your session is connected to:
SELECT
        HOST,
        PORT,
        CONNECTION_ID
FROM M_CONNECTIONS
WHERE OWN = 'TRUE';

Expensive Statements
Ensure the expensive statements trace is switched on.
When used: to troubleshoot a report or SQL failure and understand why it failed, to monitor the expensive SQL statements executed in HANA, and to identify opportunities for performance optimization.
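
A hedged sketch of switching the trace on via SQL (section and parameter names follow the usual expensive statements trace settings in global.ini; the 1-second threshold is only an example value):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
       SET ('expensive_statement', 'enable') = 'true',
           ('expensive_statement', 'threshold_duration') = '1000000'  -- microseconds
       WITH RECONFIGURE;
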
Find expensive statements for errors
SELECT
       "HOST",
        "PORT",
        "CONNECTION_ID",
        "TRANSACTION_ID",
        "STATEMENT_ID",
        "DB_USER",
        "APP_USER",
        "START_TIME",
        "DURATION_MICROSEC",
        "OBJECT_NAME",
        "OPERATION",
        "RECORDS",
        "STATEMENT_STRING",
        "PARAMETERS",
        "ERROR_CODE",
        "ERROR_TEXT",
        "LOCK_WAIT_COUNT",
        "LOCK_WAIT_DURATION",
        "ALLOC_MEM_SIZE_ROWSTORE",
        "ALLOC_MEM_SIZE_COLSTORE",
        "MEMORY_SIZE",
        "REUSED_MEMORY_SIZE",
        "CPU_TIME"
FROM  "PUBLIC"."M_EXPENSIVE_STATEMENTS"
WHERE ERROR_CODE > 0
ORDER BY START_TIME DESC;
Finding expensive statements executed by User
SELECT
       "HOST",
        "PORT",
        "CONNECTION_ID",
        "TRANSACTION_ID",
        "STATEMENT_ID",
        "DB_USER",
        "APP_USER",
        "START_TIME",
        "DURATION_MICROSEC",
        "OBJECT_NAME",
        "OPERATION",
        "RECORDS",
        "STATEMENT_STRING",
        "PARAMETERS",
        "ERROR_CODE",
        "ERROR_TEXT",
        "LOCK_WAIT_COUNT",
        "LOCK_WAIT_DURATION",
        "ALLOC_MEM_SIZE_ROWSTORE",
        "ALLOC_MEM_SIZE_COLSTORE",
        "MEMORY_SIZE",
        "REUSED_MEMORY_SIZE",
        "CPU_TIME"
FROM  "PUBLIC"."M_EXPENSIVE_STATEMENTS"
WHERE STATEMENT_STRING LIKE '%NAIRV%'

CONNECTIONS
Find running connections

SELECT "HOST", "PORT", "CONNECTION_ID", "TRANSACTION_ID", "START_TIME", "IDLE_TIME", "CONNECTION_STATUS", "CLIENT_HOST", "CLIENT_IP", "CLIENT_PID", "USER_NAME", "CONNECTION_TYPE", "OWN", "IS_HISTORY_SAVED", "MEMORY_SIZE_PER_CONNECTION", "AUTO_COMMIT", "LAST_ACTION", "CURRENT_STATEMENT_ID", "CURRENT_OPERATOR_NAME", "FETCHED_RECORD_COUNT", "AFFECTED_RECORD_COUNT", "SENT_MESSAGE_SIZE", "SENT_MESSAGE_COUNT", "RECEIVED_MESSAGE_SIZE", "RECEIVED_MESSAGE_COUNT", "CREATOR_THREAD_ID", "CREATED_BY", "IS_ENCRYPTED", "END_TIME", "PARENT_CONNECTION_ID", "CLIENT_DISTRIBUTION_MODE", "LOGICAL_CONNECTION_ID", "CURRENT_SCHEMA_NAME", "CURRENT_THREAD_ID"
FROM "PUBLIC"."M_CONNECTIONS"
WHERE  CONNECTION_STATUS = 'RUNNING'
ORDER BY "START_TIME" DESC
Resetting Connections

Find the connection
SELECT CONNECTION_ID, IDLE_TIME
FROM M_CONNECTIONS
WHERE CONNECTION_STATUS = 'IDLE' AND CONNECTION_TYPE = 'Remote'
  ORDER BY IDLE_TIME DESC
Disconnect Session

ALTER SYSTEM DISCONNECT SESSION '203927';
ALTER SYSTEM CANCEL SESSION '237048';
Find owners of objects

SELECT * FROM "PUBLIC"."OWNERSHIP" WHERE SCHEMA='SCHEMA'

Find Granted Privileges for Users
SELECT * FROM PUBLIC.GRANTED_PRIVILEGES
WHERE GRANTEE_TYPE = 'USER' AND GRANTOR = 'NAIRV'
PASSWORD Policy
Disable the password lifetime check for a user. This is used when you do not want the password expiration policy applied to a user; the password is then valid for the user's lifetime.
ALTER USER <user_name> DISABLE PASSWORD LIFETIME;

Audit Policy
Configure
Enable global auditing
alter system alter configuration ('global.ini',
       'SYSTEM')
set ('auditingconfiguration',
       'global_auditing_state' ) = 'true' with reconfigure;
Set the auditing file type
       alter system alter configuration ('global.ini','SYSTEM')
       set ('auditingconfiguration'
       ,'default_audit_trail_type' ) = 'CSVTEXTFILE'
       with reconfigure;
Audit target path
       alter system alter configuration ('global.ini','SYSTEM')
       set ('auditingconfiguration'
       ,'default_audit_trail_path' ) = 'path'
        with reconfigure;
Find the policy implemented
Select * from public.audit_policies;

To enable/disable global auditing
-- change the configuration for setting the audit
alter system alter configuration ('global.ini',
       'SYSTEM')
set ('auditingconfiguration',
       'global_auditing_state' ) = 'true' with reconfigure;

Add audit policy
CREATE AUDIT POLICY Audit_EDW_DM_DROPTABLE_H00 AUDITING SUCCESSFUL DROP TABLE LEVEL CRITICAL;
Policy enable/disable
ALTER AUDIT POLICY Audit_EDW_DM_DROPTABLE_H00 ENABLE;

CHECK AFL PAL FUNCTIONS ARE INSTALLED
SELECT * FROM SYS.AFL_FUNCTIONS WHERE PACKAGE_NAME='PAL';

Source: scn.sap.com

Capturing and Replaying Workloads - by the SAP HANA Academy

Introduction

One of the new SPS 12 features for monitoring and managing performance in SAP HANA is the ability to capture and replay workloads. This feature enables you to take a performance snapshot of your current system -- a captured workload -- and then execute the same workload again on the system (or another system from backup) after some major hardware or software configuration change has been made. This will help you evaluate potential impacts on performance or stability after, for example, a revision upgrade, parameter modifications, table partition or index changes, or even whole landscape reorganisations.

In this blog, I will describe the required preparation and the operational procedures.

Preparation

Import Delivery Unit

To capture, replay and analyze workloads you use the three new apps in the equally new SAP HANA Performance Monitoring tile catalog of the SAP HANA cockpit.

Capturing and Replaying Workloads - by the SAP HANA Academy

The apps are not included with a standard installation of SAP HANA but are provided as a delivery unit (DU): HANA_REPLAY.

You can import the DU using the SAP HANA Application Lifecycle Management (ALM) tool, which is part of SAP HANA cockpit. Alternatively, you can use the ALM command line tool (hdbalm) or use SAP HANA studio (File > Import > SAP HANA Content > Delivery Unit)

Capturing and Replaying Workloads - by the SAP HANA Academy

Grant Roles

The DU adds the following roles:
  • sap.hana.replay.roles::Capture
  • sap.hana.replay.roles::Replay
  • sap.hana.workloadanalyzer.roles::Administrator
  • sap.hana.workloadanalyzer.roles::Operator
Typically, you would grant the Capture and Replay roles to a user with system administration privileges. This could be the same user or different users.

The workloadanalyzer roles are granted to users who need to perform the analysis on the target system. Operators have read-only access to the workload analysis tool.

Configure SAP HANA cockpit

The Analyze Workload app is added automatically to the SAP HANA cockpit if you have either of the two workloadanalyzer roles. The Capture Workload and Replay Workload apps need to be added manually from the tile catalog.

Configure Replayer Service

On the target system you need to configure and start the replayer service before you can replay a workload.

For this, you need to have access to the SAP HANA host as the system administrator and create the file wlreplayer.ini in the directory $SAP_RETRIEVAL_PATH, typically /usr/sap/<SID>/HDB<instance_number>/<hostname>.

This file needs to contain the following lines:
[communication]
listeninterface = .global

[trace]
filename = wlreplayer
alertfilename = wlreplay_alert

Next, start the replayer service with the hdbwlreplayer command:

hdbwlreplayer -controlhost hana01 -controlinstnum 00 -controladminkey SYSADMIN,HDBKEY -port 12345

Use the following values for the parameters:

  • controladminkey – user name and secure store key (separated by a comma)
  • controldbname – optionally, database name in case of a multitenant database container system
  • controlhost – database host name
  • controlinstnum – database instance number
  • port – available port

Secure Store Key
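
The secure store key referenced by controladminkey (HDBKEY in the example above) is not created by the replayer itself. A hedged sketch of creating it with the hdbuserstore tool as the <sid>adm user (host, port, user name and password are placeholder values):

hdbuserstore SET HDBKEY hana01:30015 SYSADMIN <password>
hdbuserstore LIST HDBKEY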



Procedure

Once you have performed the preparation steps, the procedure is simple.

1. Capture Workload

Connect with SAP HANA cockpit to the system, open the Capture Workload app and click Start New Capture in the Capture Management display area. Provide a name and an optional description, and use the ON/OFF switches to collect an explain plan or performance details. The capture can be started on demand or scheduled. Optionally, filters can be set on the application name, database user, schema user, application user, client, or statement type (DML, DDL, procedure, transaction, session, system). A threshold duration and the passport trace level can also be set.

Capturing and Replaying Workloads - by the SAP HANA Academy


When done, click Stop Capture.

Capturing and Replaying Workloads - by the SAP HANA Academy

Optionally, you can set the capture destination, trace buffer size and trace file size for all captures with Configure Capture.

Capturing and Replaying Workloads - by the SAP HANA Academy

2. Replay Workload: Preprocess

Once one or more captures have been taken, open the Replay Workload app from the SAP HANA cockpit to preprocess the capture. The captured workloads are listed in the Replay Management display area. Click Edit and then click Start Preprocessing at the bottom right.

Capturing and Replaying Workloads - by the SAP HANA Academy

3. Replay Workload

Once the capture has been preprocessed, you can start the replay from the same Replay Workload app.

First select the (preprocessed) replay candidate that you want to replay, then select Configure Replay.

Capturing and Replaying Workloads - by the SAP HANA Academy

In the Replay Configuration window, you need to provide
  • Host, instance number and database mode (Multiple for a multitenant database container system) of the HANA system
  • Replay Admin user (with role sap.hana.replay.roles::Replay) with either password or secure store key
  • Replay speed: 1x, 2x, 4x, 8x, 16x
  • Collect Explain plan
  • Replayer Service
  • User authentication from the session contained in the workload

Capturing and Replaying Workloads - by the SAP HANA Academy

When the Replay has finished, you can select Go to Report to view replay statistics.


Capturing and Replaying Workloads - by the SAP HANA Academy


Capturing and Replaying Workloads - by the SAP HANA Academy


4. Analyze Workload

The final step is to analyze the workload. For this, start the Analyze Workload app from the SAP HANA cockpit. You can analyze along different dimensions such as Service, DB User, Application Name, etc.

Capturing and Replaying Workloads - by the SAP HANA Academy

Video Tutorial

In the video tutorial below, I will show you the whole process, both preparation and procedure, in less than 10 minutes.


Source: scn.sap.com

Data based on Conditions in a table

All the user conditions are available in the Table: USER_CONDITIONS and these conditions will be applied on the Table: EMPLOYEE. To keep it simple, let us consider two tables with sample data as shown below:



Output of information View:


All the dimensions and measures in the view are available in table EMPLOYEE and HEADCOUNT is the aggregation of EMP_ACTIVE. The final output is the data with conditions from the table applied.
  1. A User can create any number of conditions.
  2. A Condition can be based on any number of attributes (Business unit, Gender, etc).
  3. Each Attribute in a Condition can have only one Operator. (In / Not in).
  4. Different Attributes in a Condition can have different Operators (In, Not in, etc).

In order to apply the conditions stored in the table USER_CONDITIONS, we need the user and the condition name, which would be the input parameters to the view. The user who uses the application or runs the Calculation view can be determined using SESSION_USER, hence an input parameter for the user is not required; only the condition name is.

Input parameter to view:
  1. IP_CONDITION_NAME (based on conditions set by user which are saved in table: USER_CONDITIONS)

Approach:

The complexity here is in using the OPERATOR (IN / NOT IN) for the attributes in a Condition. For such scenarios, SQLScript Calculation Views can serve the purpose easily.
  1. Check whether the given input parameter is valid.
  2. Check whether the given condition name exists in the condition table; if not, skip the processing, else move the count to a variable.
  3. Check how many attributes are included in the condition name; if none, skip the processing, else move the count to a variable.
  4. Check whether the operator is IN or NOT IN for the different attributes in a condition.
  5. If the operator is IN then the query should be based on the IN operator (EX: GENDER IN ‘Male’).
  6. If the operator is NOT IN then the query should be based on the NOT IN operator (EX: GENDER NOT IN ‘Male’).
  7. Declare all the used variables.

Let us write the sqlscript code  based on above steps:

1. Check whether the given input parameter is valid.

IF (:IP_CONDITION_NAME IS NOT NULL AND :IP_CONDITION_NAME <> '') THEN
…….
ELSE
…….
END IF;

2. Check whether the given condition name exists in the condition table; if not, skip the processing, else move the count to a variable.

     SELECT COUNT(CONDITION_NAME) INTO VAR_COUNT
        FROM RSALLA.USER_CONDITIONS
          WHERE CONDITION_NAME = :IP_CONDITION_NAME;
       IF VAR_COUNT > 0 THEN
        ….
       ELSE
        …..
       END IF;

3. Check the existence of all the attributes for given Condition name.

         SELECT COUNT(*) INTO COUNT_BU FROM RSALLA.USER_CONDITIONS
          WHERE USER = SESSION_USER AND CONDITION_NAME = :IP_CONDITION_NAME
             AND ATTRIBUTE = 'BUSINESS_UNIT';

4. Check whether the operator is IN or NOT IN for the different attributes in a condition.
                         
IF :COUNT_BU > 0 THEN
   SELECT TOP 1 CASE WHEN OPERATOR = 'IN' THEN 'I'
              WHEN OPERATOR = 'NOT IN' THEN 'N'
              ELSE ''
              END INTO FLAG_OPERATOR_BU
       FROM RSALLA.USER_CONDITIONS
        WHERE USER = SESSION_USER
AND CONDITION_NAME = :IP_CONDITION_NAME
AND ATTRIBUTE = 'BUSINESS_UNIT';
             END IF;

5. If the Operator is IN then query should be based on IN operator (EX: BUSINESS_UNIT IN ‘0001’).

SELECT FISCAL_YEAR, GENDER, BUSINESS_UNIT, SUM(EMP_ACTIVE) AS HEADCOUNT
FROM RSALLA.EMPLOYEE
WHERE    :COUNT_BU = 0 OR
( :FLAG_OPERATOR_BU = 'I' AND BUSINESS_UNIT IN
                         (SELECT DISTINCT VALUE AS BUSINESS_UNIT FROM RSALLA.USER_CONDITIONS
WHERE USER = SESSION_USER AND CONDITION_NAME = :IP_CONDITION_NAME AND ATTRIBUTE = 'BUSINESS_UNIT') )


If there is no condition on the attribute Business Unit, then COUNT_BU = 0 will be true and the rest will be false; no condition will be applied on Business Unit.

If a condition exists, then COUNT_BU = 0 will be false, and if the operator is IN, the operator flag will be 'I' and the IN operator will be applied on Business Unit.

As each attribute in a condition can have only one operator (IN / NOT IN), the query is written in such a way that when IN is true, NOT IN becomes false.


6. If the Operator is NOT IN then query should be based on NOT IN operator (EX: BUSINESS_UNIT NOT IN ‘0001’).

( :FLAG_OPERATOR_BU = 'N' AND (BUSINESS_UNIT NOT IN
         (SELECT DISTINCT VALUE AS BUSINESS_UNIT FROM RSALLA.USER_CONDITIONS
           WHERE USER = SESSION_USER AND CONDITION_NAME = :IP_CONDITION_NAME AND ATTRIBUTE = 'BUSINESS_UNIT')
   OR BUSINESS_UNIT IS NULL ))
      
If the operator is NOT IN, the operator flag will be 'N' and the NOT IN operator will be applied on Business Unit.

If you observe carefully, for the NOT IN operator there is an extra piece of code:
OR BUSINESS_UNIT IS NULL.

This is required because HANA ignores NULL values with the NOT IN operator. Below is an example with NULL and blank values for GENDER.
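
A quick illustrative sketch of this behavior against the EMPLOYEE table (the value 'Male' is just an example):

-- rows where GENDER is NULL are not returned by NOT IN alone
SELECT FISCAL_YEAR, GENDER, BUSINESS_UNIT
  FROM RSALLA.EMPLOYEE
 WHERE GENDER NOT IN ('Male');

-- adding the IS NULL branch brings the NULL rows back
SELECT FISCAL_YEAR, GENDER, BUSINESS_UNIT
  FROM RSALLA.EMPLOYEE
 WHERE GENDER NOT IN ('Male')
    OR GENDER IS NULL;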


7. Declare all the used variables.

DECLARE VAR_COUNT, COUNT_BU, COUNT_GENDER              SMALLINT DEFAULT 0;
DECLARE FLAG_OPERATOR_BU, FLAG_OPERATOR_GENDER  VARCHAR (1) DEFAULT '';

Now we will put all pieces of code together and the final script is:

BEGIN

       DECLARE VAR_COUNT, COUNT_BU, COUNT_GENDER       SMALLINT DEFAULT 0;
       DECLARE FLAG_OPERATOR_BU, FLAG_OPERATOR_GENDER  VARCHAR (1) DEFAULT '';

    IF (:IP_CONDITION_NAME IS NOT NULL AND :IP_CONDITION_NAME <> '') THEN
      SELECT COUNT(CONDITION_NAME) INTO VAR_COUNT
        FROM RSALLA.USER_CONDITIONS
          WHERE USER = SESSION_USER AND CONDITION_NAME = :IP_CONDITION_NAME;

       IF VAR_COUNT > 0 THEN
         SELECT COUNT(*) INTO COUNT_BU FROM RSALLA.USER_CONDITIONS
          WHERE USER = SESSION_USER AND CONDITION_NAME = :IP_CONDITION_NAME
           AND ATTRIBUTE = 'BUSINESS_UNIT';

               IF :COUNT_BU > 0 THEN
                SELECT TOP 1 CASE WHEN OPERATOR = 'IN' THEN 'I'
                                   WHEN OPERATOR = 'NOT IN' THEN 'N'
                                    ELSE ''
                                     END INTO FLAG_OPERATOR_BU
                     FROM RSALLA.USER_CONDITIONS
                      WHERE USER = SESSION_USER
                       AND CONDITION_NAME = :IP_CONDITION_NAME
                        AND ATTRIBUTE = 'BUSINESS_UNIT';
               END IF;
              
               SELECT COUNT(*) INTO COUNT_GENDER FROM RSALLA.USER_CONDITIONS
                WHERE USER = SESSION_USER AND CONDITION_NAME = :IP_CONDITION_NAME
                 AND ATTRIBUTE = 'GENDER';

               IF :COUNT_GENDER > 0 THEN
                SELECT TOP 1 CASE WHEN OPERATOR = 'IN' THEN 'I'
                                   WHEN OPERATOR = 'NOT IN' THEN 'N'
                                    ELSE ''
                                      END INTO FLAG_OPERATOR_GENDER
                     FROM RSALLA.USER_CONDITIONS
                      WHERE USER = SESSION_USER
                       AND CONDITION_NAME = :IP_CONDITION_NAME
                        AND ATTRIBUTE = 'GENDER';
                       
               END IF;
              
               TAB_RESULT =
               SELECT FISCAL_YEAR, GENDER, BUSINESS_UNIT, SUM(EMP_ACTIVE) AS HEADCOUNT
                FROM RSALLA.EMPLOYEE
                 WHERE
                  (:COUNT_BU = 0 OR
                        ( :FLAG_OPERATOR_BU = 'I' AND BUSINESS_UNIT IN
                         (SELECT DISTINCT VALUE AS BUSINESS_UNIT
                           FROM RSALLA.USER_CONDITIONS
                            WHERE USER = SESSION_USER
                             AND CONDITION_NAME = :IP_CONDITION_NAME
                              AND ATTRIBUTE = 'BUSINESS_UNIT') )
                                 OR
                             ( :FLAG_OPERATOR_BU = 'N' AND (BUSINESS_UNIT NOT IN
                              (SELECT DISTINCT VALUE AS BUSINESS_UNIT
                                FROM RSALLA.USER_CONDITIONS
                                 WHERE USER = SESSION_USER
                                  AND CONDITION_NAME = :IP_CONDITION_NAME
                                   AND ATTRIBUTE = 'BUSINESS_UNIT')
                                    OR BUSINESS_UNIT IS NULL ))      
                   )
                    AND
                     (:COUNT_GENDER = 0 OR
                           ( :FLAG_OPERATOR_GENDER = 'I' AND GENDER IN
                            (SELECT DISTINCT VALUE AS GENDER
                              FROM RSALLA.USER_CONDITIONS
                               WHERE USER = SESSION_USER
                                AND CONDITION_NAME = :IP_CONDITION_NAME
                                 AND ATTRIBUTE = 'GENDER') )
                                        OR
                             ( :FLAG_OPERATOR_GENDER = 'N' AND (GENDER NOT IN
                              (SELECT DISTINCT VALUE AS GENDER
                                FROM RSALLA.USER_CONDITIONS
                                 WHERE USER = SESSION_USER
                                  AND CONDITION_NAME = :IP_CONDITION_NAME
                                   AND ATTRIBUTE = 'GENDER')
                                    OR GENDER IS NULL ))      
                     )
                   GROUP BY FISCAL_YEAR, GENDER, BUSINESS_UNIT
                ;

       ELSE
         TAB_RESULT = SELECT '' AS FISCAL_YEAR, '' AS GENDER, '' AS BUSINESS_UNIT,
                       0 AS HEADCOUNT
                        FROM DUMMY;
       END IF;
    ELSE
      TAB_RESULT = SELECT '' AS FISCAL_YEAR, '' AS GENDER, '' AS BUSINESS_UNIT,
                    0 AS HEADCOUNT
                     FROM DUMMY;
    END IF;
      
       VAR_OUT = SELECT FISCAL_YEAR, GENDER, BUSINESS_UNIT,
                  SUM(HEADCOUNT) AS HEADCOUNT
                   FROM :TAB_RESULT
                    GROUP BY FISCAL_YEAR, GENDER, BUSINESS_UNIT;
                                 
END;

Input Parameter:


Data Validation:

Now let's run the Calculation view for different conditions.

Test case 1: User - RSALLA, Condition name - CONDITION_1


The output of the CV matches the data from the EMPLOYEE table.

Test case 2: User - RSALLA, Condition name - CONDITION_2


The output of the CV matches the data from the EMPLOYEE table.

Test case 3: User - RSALLA, Condition name - CONDITION_5 (does not exist)


The output of the CV is just a single row with default values. This can be modified as per the requirement.

Source: scn.sap.com

Troubleshooting SAP HANA Delivery Units and HANA Live Packages issues. (HALM)

Whilst importing delivery units into your HANA system you can sometimes run into common errors which can easily be fixed without opening an SAP incident.

Let's look at an example.

Here you are importing SAP HANA Analytics into your system. During the import you see an error:

Troubleshooting SAP HANA Delivery Units and HANA Live Packages issues. (HALM).

To get a more in-depth look at what actually went wrong here, we need to look into the installation log (this is printed after the import fails) or the indexserver.trc file:

[37654]{228998}[123/-1] 2014-04-07 23:24:06.604933 e REPOSITORY       activator.cpp(01179) : Repository: Activation failed for at least one object;At least one runtime reported an error during revalidation. Please see CheckResults for details.

The problem in such cases is that the person responsible for the import prerequisites did not check SAP Note 1781992 before starting the import.
It is very important to have the necessary tables in the SAP_ECC schema, or else the import will fail. The best thing to do if this fails is to compare the existing tables with the tables listed in the note:

select table_name from m_cs_tables where schema_name = 'SAP_ECC' order by table_name asc;

1: What do you do if the import is still failing after all the tables have been imported?

Check the tables for invalid attributes and make sure the tables are set up correctly. (This just involves recreating the table).

You should also note which Delivery Units have failed to import. Re-importing the Delivery Unit again is also a valid approach to fix activation or deployment errors.

2: What do I do if the activation of some of the views is failing after the import?

Make sure that when the tables are being searched in the schema, the correct schema is being searched; this involves ensuring your schema mapping is done correctly. An example of this can be seen in the trace below:

One table for a calculation view was searched in schema DT4_XT5: "- CalculationNode(WRF_CHARVALT): ColumnTable DT4_XT5:WRF_CHARVALT not found (cannot get catalog object)."

The schema being searched does not exist, therefore the activation of "sap.is.retail.ecc.RetailCharacteristicValue" will obviously fail. So now you have to ask yourself: what schema did I start the installation with, and did I start with a different schema and change it somewhere in between? Also check whether you moved the "WRF_CHARVALT" table elsewhere.

3: What if the user does not have the required privileges when activating CAR HANA Content or any other content?

When checking the logs you see activation errors which refer to, let's say, the SAP_CRM schema. When looking for this schema in the catalog you cannot see it, so this leads to the question: does it actually exist, or is my user prohibited from viewing it? The most likely answer is the latter. Make sure your user has been granted the SELECT privilege on the SAP_CRM schema. A good guide to follow is SAP Note 1936727.

So, referring to the note, you could check whether _SYS_REPO is missing any important privileges such as EXECUTE, INSERT, etc. Please note that _SYS_REPO normally only needs the SELECT privilege for a plain installation.
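
As a minimal sketch (assuming the SAP_CRM schema from the example above), the grant typically looks like the statement below; WITH GRANT OPTION is needed so the repository can in turn grant access to the generated views:

GRANT SELECT ON SCHEMA "SAP_CRM" TO _SYS_REPO WITH GRANT OPTION;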

4: What if you face errors when importing HANA SMART BUSINESS FOR ERP 1.0?

Checking the installation log after the import fails you see:

Object: sap.hba.r.sappl604::BSMOverview.calculationview ==> Error Code: 40117 Severity: 3
Object: sap.hba.r.sappl604::BSMOverview.calculationview ==> Repository: Encountered an error in repository runtime extension;Model inconsistency. Create Scenario failed:


ColumnView _SYS_BIC:sap.hba.r.sappl604/BSMInfoOverview not found (cannot get catalog object)(CalculationNode (BSMInfoOverview))


The following errors occured: Inconsistent calculation model (34011)
Details (Errors):
- CalculationNode (BSMInfoOverview): ColumnView _SYS_BIC:sap.hba.r.sappl604/BSMInfoOverview not found (cannot get catalog object).

The solution for this can be found in SAP Note 2317634

5: You receive errors when performing data previews on the package sap.hba.ecc?

When viewing the sap.hba.ecc packages you can see some calculation views are marked red. When clicking on data preview you see error:

Troubleshooting SAP HANA Delivery Units and HANA Live Packages issues. (HALM).

What we now need to ask is, did any of these views ever work at all or is it specific to individual views? If you answered "yes" to the first question, then I would re-deploy the delivery unit again which should fix this issue.

If it is specific views that are causing the issue, then try to re-deploy each view separately through Studio and activate it again. If this does not work, go into the Diagnosis Files tab in HANA Studio and pull up the most recent errors, which should have been printed in the indexserver.trc file. Check which table is being accessed for this view and which schema it is in. Check whether the user you are logged in with has the correct privileges, along with _SYS_REPO.

The solution for this can also be found in SAP Note 2318731.

Source: scn.sap.com

SAP HANA, S/4 HANA, IBP, cloud, IoT, DevOps, … BINGO!


SAP R/3 – a pioneer era

The current times within IT feel somewhat similar to what the first colonists felt when they started their journey from a struggling Europe to the new world. Everything must have been so different, so innovative, so much better. Everyone tells you what you can do, buy and strive for. Yet at the end, as always, it is just a matter of money.

If we come a little bit closer to the present day, and also closer to IT and business, I am reminded of SAP’s launch of R/3 more than 20 years  ago. Wow, SAP R/3, the single ideal monolithic centre and backbone of a company. Integrated and harmonized standard software supposed to guarantee seamless processes across the entire value chain, harmonized master data, full transparency, no interfaces, state of the art client server technology, comparably cheap to maintain, easy to use and easy to tailor.
It was an amazing feeling to work in this system with preconfigured business processes and an integrated development workbench. One felt far superior, leaving the other IT folks behind to keep the various legacy systems running with their manually edited scripts, UNIX cron jobs and the ancient COBOL programming language without any workbench. I still remember being confused by the non-intuitive and outdated ASCII user frontends with green letters on a black background. That was a great pioneer era in IT. Simply a brave new IT world!

Evolution over time

We all know how the ERP journey continued and more and more specialized SAP systems were needed to be included into a company’s ecosystem. That’s how SAP SCM (Supply Chain Management) for advanced operational planning, SAP SRM for optimized supplier collaboration, SAP CRM for the customer side and even some more systems sneaked their way into the IT landscape, which SAP licensed and sold under the name SAP business suite.
Quite significant portions of work and regular maintenance were required for the interfaces and data harmonization. The most crucial topics turned out to be process breaks caused by system breaks, and data consistency, even though SAP provided “out of the box” real-time interfaces. So, over the last decade, a large amount of effort has been put into interface design, reconciliation and robustness of integration. And everyone who has operated multiple systems supposed to exchange mass data in a consistent and frequent way knows the pain of monitoring interfaces and data reconciliation, don't you?

Along came HANA

Then SAP came along with SAP HANA. A new type of database which stores the data in memory and in a columnar way. Simply a boost for performance. Along with SAP HANA, the evolution of IT hardware, the drop of hardware prices, the success of mobility, small apps and the always online idea, SAP launched a firework of new and reinvented systems and applications. An evolutionary next step renewing the classical business suite by breaking with unbreakable paradigms and replacing the shop-soiled SAP frontends.
That was the birth of S/4 HANA. Reengineered, modernized, remodelled and a simplified core system for all business related operations. A next generation ERP platform powered by SAP’s new database technology SAP HANA. All in one system and ideally hosted in the cloud to reduce the total cost of ownership and accelerate the update cycles (time to market). It was also the start of many more systems which seemed to be waiting for the right catalyst. SAP HANA.
As always, the big marketing machinery kicks in and mixes current trends with SAP’s product portfolio. Terms like SAP IBP, S/4 HANA Simple Logistics, Simple Manufacturing, cloud, ARIBA, Internet of Things, SAP HANA, SAP CAR, Big data, real-time, predictive analytics and so forth are present at each and every symposium.
It feels again like the start of something new. An evolution in the business IT market. Pioneer feelings. Rightly so!
Nevertheless, customers are more unclear than ever about how to adopt, and at which point in time. They ask themselves the same questions:
  • What is the correct IT / business strategy for us?
  • What is the right tool set?
  • What capabilities do we really need?
  • What is the maturity level of those brand new systems? Is it already fit for our business?
  • What are the implied costs (Hardware, Licensing, project costs, user training etc.)?

So how do you stay on top and not take unnecessary risks for your business whilst still being up to date? How do you get prepared for the future?

SAP business suite on SAP HANA – a case study

Whilst new SAP products are pushed into the market and companies start thinking about how to migrate from their current setup to the new one with the lowest possible risk, others are lifting their current business suite implementation to the next level by replacing their existing databases with SAP HANA in order to improve system performance and bring the platform into the necessary shape for upcoming challenges like continuous data growth.
It doesn’t sound fancy? True. It isn’t fancy. But Supply Planning isn’t fancy either. Yet doing it wrongly, inefficiently, reactively or slowly will result in substantial costs.
So what's in it for an SAP client in just migrating its databases to SAP HANA?
I already had the chance to find out, and gained in-depth, first-hand experience performing an Oracle to SAP HANA database migration with a very smooth cutover and go-live.
In the end it is not only the obvious performance improvement which SAP HANA brings to your classical business suite landscape. There is much more.

Example for additional business functions (here for SAP SCM on SAP HANA):


The infamous SAP GUI is not the only frontend possibility anymore. Freely available SAP FIORI apps, tailored SAP LUMIRA dashboards or available add-ons like the Supply Chain Info Center (attention: additional license costs might apply) for SAP SCM open up a broad spectrum of new and appealing user interfaces.
Those tools and techniques, paired with the performance of SAP HANA, open up the gates for business transformation. They are a catalyst for solving problems differently, faster and more intuitively.
I created a prototype to illustrate sales order revenue per region in a spatial manner based on real-time available ECC sales order data with SAP LUMIRA as the frontend tool. The different way of operational analytics paired with the necessary high responsiveness (<4 seconds for more than 3 million datasets) has been really impressive. But that’s just one of a couple of examples which can be used to demonstrate the power behind “yet another database”.
Here are the performance improvement figures which we measured after the migration of a 1-terabyte SAP SCM system. The project was done as a 1:1 migration, i.e. without performing any custom-code-related optimizations.


The improvement bandwidth ranged from 20% up to 73% on an aggregated average per package. Interestingly, most of the runtime improvements showed up in the area of regular IT maintenance jobs and data load procedures, i.e. all those jobs which are necessary but which don't contribute towards an improved business process.
A possible journey does not necessarily need to be complex and cumbersome. It's usually a matter of the right preparation, the right sequence, and the necessary quality and accuracy. The migration prepares the stage for the real value add, the start of a transformational process. Just because of SAP HANA? Yes and no. Not only, but mainly. Sometimes there needs to be the right catalyst to unveil hidden ideas and the true potential.


What’s next?
  • How to make user experience more attractive?
  • How does SAP HANA fit in your individual IT / SAP landscape?
  • What does real time (operational) analytics mean?
  • How to start business transformation?
  • And how to prepare the stage for the next level ERP, called S/4 HANA?

Still a lot of questions which always need tailored answers. But one thing is clear: a new SAP era has started and it needs your full focus to get prepared. SAP HANA is the catalyst, and I am personally convinced that it will once more be groundbreaking.

Source: scn.sap.com

Managing cold data in SAP HANA database memory

So once you have moved the cold data away from main memory, you will notice that it is re-loaded whenever needed. But for how long will the cold data stay in memory so that the aging still makes sense? This blog explains how cold data is managed in memory and describes different strategies for keeping it all under control.

HANA database takes care of loading and unloading data to and from memory automatically with the aim to keep all relevant information in memory. In most cases only the necessary columns (i.e. columns that are actually used) are loaded into memory on the first request and kept there for later use. For example after system restart only a few columns might be initially loaded into memory and only for the hot partition as shown in the figure below.

Managing cold data in SAP HANA database memory

You can monitor history of loads and unloads from system views M_CS_LOADS and M_CS_UNLOADS or Eclipse IDE • Administration • Performance • Load • Column Unloads.

SELECT * FROM "SYS"."M_CS_LOADS" WHERE table_name = 'BKPF';
SELECT * FROM "SYS"."M_CS_UNLOADS" WHERE table_name = 'BKPF';

1. Loading data into memory

The following reasons may cause a column table to be loaded into HANA memory:
  1. First-time access to a table column (for example, when executing a SQL statement)
  2. Explicit loads triggered with the SQL command LOAD
  3. Reload after startup for tables and/or columns defined with the PRELOAD statement. You can check the status in "SYS"."TABLES" and "SYS"."TABLE_COLUMNS" (see the sketch after this list).
  4. Pre-warming – reload after startup based on the columns loaded before the system shutdown (configurable option – see SAP OSS Note 2127458).
As of SPS 09, load and pre-load do not consider cold partitions.
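
A minimal sketch of point 3, using the BKPF table referenced throughout this blog (the column name and the PRELOAD status column are assumptions for illustration):

ALTER TABLE "SAPERP"."BKPF" PRELOAD ("BELNR");   -- mark a column for reload after restart
ALTER TABLE "SAPERP"."BKPF" PRELOAD NONE;        -- remove the preload flags again
SELECT COLUMN_NAME, PRELOAD
  FROM "SYS"."TABLE_COLUMNS"
 WHERE TABLE_NAME = 'BKPF';                      -- check the current preload flags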

2. Paged partitions

Loads of data into memory happen column-wise. However, if paged attributes are used with partitioning, loads will happen in a more granular way i.e. page-wise. This means that only data that is in the area in which you are searching will be loaded.

In case partitions were not created with paged attribute (see e-bite: Data Aging for SAP Business Suite on SAP HANA) you can use report RDAAG_PARTITIONING_MIGRATION to change the loading behavior (see figure below). This can be done for a single table, Data Aging Object or the whole Data Aging Group. For more information check SAP OSS Note 1996342.

Managing cold data in SAP HANA database memory

3. Un-loading data from memory

Unloads happen based on a "least recently used" (LRU) approach. In case of memory shortage, columns that have not been used for the longest period of time are unloaded first. You can also influence this behavior with the UNLOAD PRIORITY setting. However, unload priority can only be set for a whole table, with no possibility to distinguish between cold and hot partitions. It can be 0 to 9, where 0 means not unloadable and 9 means earliest unload.

SELECT UNLOAD_PRIORITY FROM "SYS"."TABLES" WHERE TABLE_NAME = 'BKPF'
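
A minimal sketch of changing the priority for the whole table (the value 7 is just an example):

ALTER TABLE "SAPERP"."BKPF" UNLOAD PRIORITY 7;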

In M_CS_UNLOADS (REASON column) you can see a detailed reason explaining why the unload happened:
  • LOW MEMORY – happens automatically when memory becomes scarce (see SAP OSS Note 1993128).
  • EXPLICIT – un-load triggered with SQL command UNLOAD.
  • UNUSED RESOURCE - Automatic unloads when a column exceeds the configured unused retention period. 
Too many unloads may indicate that memory requirements are exceeded. This affects performance, since tables need to be fetched from disk again on the next access. The time spent on loading table columns during SQL statement preparation can be found in the TOTAL_TABLE_LOAD_TIME_DURING_PREPARATION column of the M_SQL_PLAN_CACHE system view. You can find more information on how to handle column store unloads in SAP OSS Note 1977207.
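
For example, a quick way to spot statements whose preparation spent the most time loading columns (a small sketch based on the columns named above):

SELECT TOP 20
       STATEMENT_STRING,
       TOTAL_TABLE_LOAD_TIME_DURING_PREPARATION
  FROM "SYS"."M_SQL_PLAN_CACHE"
 ORDER BY TOTAL_TABLE_LOAD_TIME_DURING_PREPARATION DESC;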

Even with enough resources in normal circumstances, it may happen that complex queries create high-volume intermediate results and thus lead to a memory shortage. Once the memory allocation limit has been reached on a HANA host, the memory manager will start unloading data (caches, buffers, columns) based on the "least recently used" approach. In such cases it would be desirable to have the cold partitions removed first, as per the business definition they have the lowest priority. The following two chapters present such a mechanism.

4. Unloading paged memory

The paged attribute for partitioning allows separating the data of hot and cold partitions in such a way that cold data is not loaded into memory unnecessarily. However, once data records (columns) have been loaded into memory from cold partitions, they will reside there until a memory shortage occurs; then unloading is triggered according to the LRU approach. As already mentioned, it is not possible to set unload priorities differently for each partition.

However, for cold partitions created with the paged attribute it is possible to pre-empt this mechanism by setting up limits for the paged memory pool. There are two configuration parameters to consider (see also SAP Note 2111649):
  • PAGE_LOADABLE_COLUMNS_MIN_SIZE – in case of low memory, paged resources are unloaded first, before any other resources, as long as the memory they occupy is bigger than the value set for this parameter. Only once their size is below this limit is the standard LRU mechanism triggered.
  • PAGE_LOADABLE_COLUMNS_LIMIT – if paged resources exceed this limit, some of them are unloaded according to the LRU approach until the total size of paged memory is back at the min level.
Values are specified in MB. The default value is 1047527424 MB, which equals 999 TB. The current size of the paged memory can be found in the system view M_MEMORY_OBJECT_DISPOSITIONS, as shown in the figure below.

Managing cold data in SAP HANA database memory

In the example below the parameters were set to small values for demonstration purposes (in Eclipse IDE go to ADMINISTRATION • CONFIGURATION), as shown in the figure below.

Managing cold data in SAP HANA database memory

At this point, executing the statement SELECT * FROM bkpf WHERE _dataaging <> '00000000' will cause the paged memory to grow above the allowed limits. Paged memory will then be unloaded until the minimum limit is reached, as shown in the figure below.

Managing cold data in SAP HANA database memory

5. Data retention

In order to manage cold partitions in memory more selectively, you can use the auto-unload feature of SAP HANA. It allows tables or partitions to be unloaded from memory automatically after a defined unused retention period. Configuring a retention period for unloads typically increases the risk of unnecessary unloads and loads. However, retention periods (unlike priorities) can be set at partition level, so their use for managing cold storage might be justifiable.

In the figure below all partitions (including cold) are loaded partially. Almost all columns from recently accessed partitions are loaded into memory. This includes hot and cold (2015) partitions. Only some key columns from 2013 and 2014 cold partitions are loaded.

Managing cold data in SAP HANA database memory

To see current table setup and memory usage run the following statement in SQL console in Eclipse IDE as shown in figure below:

Managing cold data in SAP HANA database memory

To set retention periods execute the SQL ALTER TABLE statement with UNUSED RETENTION PERIOD option. Retention periods are provided in seconds for the whole table or for each partition separated with commas. If multiple values are specified, the number of values must match the number of table partitions. The first partition which represents hot storage will have the retention periods set to 0. This means that the global default value from system configuration will be used for this partition (see below). When it comes to cold partitions let’s use some short time frames in the example scenario (5 min.):

ALTER TABLE "SAPERP"."BKPF" WITH PARAMETERS('UNUSED_RETENTION_PERIOD'='0, 300, 300, 300');

Retention periods should now be visible in the table setup – see figure below.

Managing cold data in SAP HANA database memory

The next step is to switch on the auto-unload function for the HANA database. In Eclipse IDE go to ADMINISTRATION • CONFIGURATION • GLOBAL.INI • MEMORYOBJECTS and set the configuration parameters as follows (see the figure below and the SQL sketch after this list):
  • UNUSED_RETENTION_PERIOD – number of seconds after which an unused object can be unloaded. The default value is 0, which means auto-unload is switched off by default for all tables. Set the default to 31536000 seconds (1 year).
  • UNUSED_RETENTION_PERIOD_CHECK_INTERVAL – check frequency for objects (tables and partitions) exceeding the retention time. The default value is 7200 (every 2 hours). In the example let's use a short time frame of 600 seconds.
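
The same settings can be applied with SQL; a minimal sketch, assuming the global.ini section memoryobjects shown above and the example values from this scenario:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
       SET ('memoryobjects', 'unused_retention_period') = '31536000' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
       SET ('memoryobjects', 'unused_retention_period_check_interval') = '600' WITH RECONFIGURE;
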
Managing cold data in SAP HANA database memory

After the check interval has passed you may check again the memory status. The total memory used is now much lower for the fourth partition for the year 2015, for which all the columns/pages were unloaded as shown in figure below.

Managing cold data in SAP HANA database memory

In addition in the column view you can see that only columns for the hot partition remain in the memory (figure below).

Managing cold data in SAP HANA database memory

In the M_CS_UNLOADS view you will notice that new events were recorded with reason code UNUSED RESOURCE and only the columns from cold partitions of BKPF were moved away from memory (see figure below). Hot partition and other tables were not affected.

Managing cold data in SAP HANA database memory

Auto-unload lets you manage cold areas of memory more actively; however, it also has a side effect: all the columns/pages from the cold partitions are removed from memory. After that, even when accessing documents from hot storage in SAP ERP, it may happen that the key columns/pages from cold partitions are reloaded from disk (see figure below). This may happen, for example, when trying to display a “hot” document in FB03 without specifying the fiscal year. In this case SAP ERP performs an initial search for the full document key without restricting it to hot storage only (see note 2053698). This may have a negative impact on such queries executed for the first time after auto-unload.

Managing cold data in SAP HANA database memory

6. SAP HANA Scale-out and table distribution

In case cold storage has already grown significantly over the years, it might be advisable to scale out the current HANA system to more than one host. The main purposes of scaling out a HANA system are to reduce delta merge processing time, balance the load on the database and achieve a higher parallelization level for query execution. However, one of the hosts can also be dedicated to cold storage only. When tables are partitioned over several hosts, the unloading mechanism is managed per host. This means that usage of cold data, to any extent, will not impact the performance of queries executed against hot data. Moreover, the cold storage node can be smaller in terms of allocated memory.

You can see the current table and partition distribution in the TABLE DISTRIBUTION editor. You can open it by right-clicking on CATALOG or SCHEMA in the SYSTEMS view in Eclipse IDE and choosing SHOW TABLE DISTRIBUTION – see figure below.

Managing cold data in SAP HANA database memory

Before distributing partitions in the SAP HANA database, you first need to switch off the paged attributes property; the status needs to be reset to deactivated. You can do this with report RDAAG_PARTITIONING_MIGRATION – see section 2, Paged partitions, for more information. SAP HANA provides several automatic redistribution and optimization methods that help significantly improve performance in a multi-node HANA cluster. To manually move cold partitions to a dedicated host, run the following SQL statement:

ALTER TABLE "SAPERP"."BKPF" MOVE PARTITION <part_id> TO '<host:port>' [PHYSICAL];

Where <part_id> is the partition number and <host:port> is the location where the partition is to be moved. The partition will be moved if the target host has sufficient memory.
When moving tables without the PHYSICAL addition, only the link is moved to another host, not the whole table. The physical table is moved when the merge process is triggered. You can see details of the current logical and physical locations in the system views:
  • M_TABLE_LOCATIONS – shows the logical location of a table/partition (1st figure below).
  • M_TABLE_PERSISTENCE_LOCATIONS – shows physical data locations (persistence parts). This includes items from M_TABLE_LOCATIONS but also nodes that still contain some persistence of the table. This happens when a table has been moved but not yet merged (2nd figure below).
Managing cold data in SAP HANA database memory

Managing cold data in SAP HANA database memory