
Port of Antwerp from the Opendata challenge perspective – part 1

Getting Access to the SAP HANA Cloud Platform trial platform

Here is how to register for a developer/trial account: Sign up for a free trial account on SAP HANA Cloud Platform.

This should not take you more than 5 minutes to receive the account activation link and get to the HCP Cockpit. You will be assigned a “P” user id (unless you have an SAP account already and you want to use it with your “S” user id).


For example here is my HCP cockpit landing page. My HCP user id is “p1941109952” and my HCP account name is “p1941109952trial”:

Port of Antwerp from the Opendata challenge perspective – part 1

The distinction is really important, and you will see why later.

Get your HANA instance up and running

Once you are there, you can create your HANA MDC (Multi database container) instance.

You can call the instance whatever you want; just make sure you don’t lose your SYSTEM password, because you will have no way to retrieve it later (or simply use the one provided: Welcome16).

This will give you access to a HANA container where you will be able to do your stuff!!!

The instance creation takes about 15 minutes, so relax, stretch your legs and have some water in the meantime.

Port of Antwerp from the Opendata challenge perspective – part 1

Now just look at the URL of your page. Mine is:
https://account.hanatrial.ondemand.com/cockpit#/acc/p1941109952trial/dbs/mdc/requests

You can note that it includes my HCP account name and my HANA instance name (I called it mdc).

Get access to your HANA instance

Cool, now let’s get back to work!

You can get access to this HANA instance using the SAP HANA Web-based Development Workbench from your browser or using Eclipse with the HANA Cloud toolkit.
Note that you will always need to start the SAP HANA Cockpit after the instance creation, at least to get your SYSTEM user upgraded with the proper roles and privileges.

SAP HANA Web-based Development Workbench

If you want to access the SAP HANA Web-based Development Workbench, go to the Overview of your HANA instance and click on SAP HANA Web-based Development Workbench, like here:

Port of Antwerp from the Opendata challenge perspective – part 1

The URL of the Workbench is:
https://mdcp1941109952trial.hanatrial.ondemand.com/sap/hana/ide

You can notice that it starts with the HANA MDC instance name, followed by the HCP account name.

In HANA, the SYSTEM user is only allowed to edit Security settings and look at the Traces.

Port of Antwerp from the Opendata challenge perspective – part 1

Our goal is to get access to the database “Catalog“, but for this we need to create a HANA user with the relevant roles and privileges.
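
As a minimal sketch (the user name, password, and the exact set of privileges are my own assumptions for this example, not a prescribed role set), such a user could be created from the SQL console while logged on as SYSTEM:

-- create a working user and give it basic catalog access
CREATE USER HCPPSTRIAL PASSWORD "Welcome16" NO FORCE_FIRST_PASSWORD_CHANGE;
GRANT CATALOG READ TO HCPPSTRIAL;
GRANT CREATE SCHEMA TO HCPPSTRIAL;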

This is what the “Catalog” perspective looks like:

Port of Antwerp from the Opendata challenge perspective – part 1

Eclipse with the HANA Tools

Once you have downloaded and installed Eclipse Neon, you will need to install the “SAP HANA Tools” additional Eclipse software using the “Help > Install New Software…” menu and the https://tools.hana.ondemand.com/neon update site, as described in the Procedure section.


Port of Antwerp from the Opendata challenge perspective – part 1

Once you have completed the installation, you can now switch to the SAP HANA Administration Console perspective using the “Window > Perspective > Open Perspective> Other” menu.

You can now add your HANA cloud instance using the “Add Cloud System…” drop-down menu as displayed here:

Port of Antwerp from the Opendata challenge perspective – part 1

Then you will need to input your details:

Port of Antwerp from the Opendata challenge perspective – part 1

You can notice that here we use the HCP account name (with trial at the end), the HCP user name and the password.

Then click on Next to select your HANA MDC instance name (in my case mdc) and enter your HANA credentials (in my case I created a new HANA user named HCPPSTRIAL, but you can use your SYSTEM user as well):

Port of Antwerp from the Opendata challenge perspective – part 1

This is how it looks:

Port of Antwerp from the Opendata challenge perspective – part 1

So, we are now done with this first part, covering the configuration of and access to your own HANA instance on the SAP HANA Cloud Platform.

Enterprise Readiness with SAP HANA

Data Centers as the “Power Plants” of the digital Economy

Data centers are the power plants of today’s businesses. However, planning high availability into a system landscape is often an overlooked aspect for some businesses. As more elements, ranging from devices and people to equipment and other systems, become increasingly connected, the cost of an unplanned outage is becoming ever more significant. In the always-on digital economy, companies that rely on data to make decisions, conduct transactions, and interact with consumers cannot afford data center blackouts.

The average cost of network downtime is over $300k per hour, although there is a large degree of variance depending on the business. The cost for a bank or a manufacturing company undergoing downtime would be higher, possibly due to transactions that could not be executed. What is more important, however, is that the cost is increasing significantly year-on-year: the average cost per minute of an unplanned outage rose from $5,610 in 2010 to $7,908 in 2013 and $8,851 in 2016.

So how much downtime are companies experiencing today on average? 59% of Fortune 500 companies experience a minimum of 1.6 hours of downtime per week. The principal cause of most outages is still UPS system failure. While overall unplanned outages, including those caused by cooling failures and human error, have decreased as processes and equipment become more reliable, deliberate cybercrime is increasingly becoming another cause of data center outages.

The Data Center Readiness Cycle

From SAP’s point of view, managing a data center can be broken down into a lifecycle of planning, building and running phases. Data center readiness often begins in planning, where IT managers should address the stages of installation and update, persistence, as well as backup and recovery solutions. In the build phase, IT managers will need to focus on the various high availability options that can be leveraged cost-effectively. Lastly, the run phase includes putting into operation the various disaster recovery, monitoring and security capabilities.

We will cover each of these stages in detail over the next weeks in six different parts, so do refer to the below for the links to each individual blog. We will update the links once the blogs are ready.

Enterprise Readiness with SAP HANA

Leveraging the SAP HANA Platform

Going forward, as IT landscapes become more complex, maintaining uptime will increasingly become a challenge. It is important for IT managers to consider technologies that can help them simplify their landscape and IT architectures, while getting their systems ready for the future.

Aside from combining transactional and analytical workloads and providing real-time performance, SAP HANA also comes with built-in high availability and disaster recovery features. These help improve the overall enterprise readiness of a business in a more cost-effective way. This matters because, while database reliability is essential today to ensure a continuous digital connection to the business network, it can be cost prohibitive for some customers to maintain.

An example of improving enterprise readiness cost-effectively is the continuous improvement of the SAP HANA system replication feature. Over the years, system replication has been refined with customers’ TCO in mind. Starting with delta data shipping in SPS 11 to maintain better concurrency between the primary and secondary systems, we further introduced a continuous log replay mode to improve the recovery time objective and to reduce the network bandwidth spikes that can occur during delta data shipments (see figure 2).
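
As a rough sketch of what that looks like in practice (the section and parameter name reflect a typical system replication setup and are an assumption here, not something taken from this post), the operation mode of an already-configured secondary can be switched to continuous log replay like this:

-- assumed sketch: switch system replication to continuous log replay
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
    SET ('system_replication', 'operation_mode') = 'logreplay' WITH RECONFIGURE;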

Enterprise Readiness with SAP HANA

These options may allow IT managers to better optimize their IT investments towards maintaining readiness in their data center, and free up more budget for innovation. Beyond just budget, SAP also focuses on reducing the “performance cost”, for example on the network, as with the continuous log replay feature above.

    Enterprise Readiness with SAP HANA – Persistence, Backup & Recovery

    Enterprise Readiness – Planning Your Data Center

    As High Availability and Disaster Recovery become increasingly important topics for many enterprise customers embarking on digitization, we will be taking time to explain more on this topic in a blog series to help customers better plan and prepare their data center operations, with SAP HANA as the platform of focus.

    We began this series with an earlier blog on Enterprise Readiness, and will continue with more topics in the following months.

    Design & Setup, Scalability Options

    We begin this blog with the Plan phase, where we focus on the installation options available for SAP HANA. With SAP HANA, there are two options: a. a full-appliance delivery, or b. a tailored data center integration approach.

    Choosing between the two options above depends on the IT manager’s priorities:
    1. Time to value
    2. Availability of existing assets: servers, storage, network
    3. Landscape limitations: space, and other challenges connecting a new appliance into the landscape
    For managers looking for faster time to value, the HANA appliance comes with a pre-configured hardware setup, along with pre-installed software, and can be up and running in the landscape quickly. However, for managers with large existing IT investments and infrastructures, the HANA tailored data center integration approach can prove an attractive alternative that capitalizes on pre-existing assets.
    Below is a table highlighting the advantages of both options; it provides a rough guide on which option would be more feasible.

    Figure 1

    Enterprise Readiness with SAP HANA – Persistence, Backup & Recovery

    On scalability, the SAP HANA system can be scaled up or scaled out, depending on business requirements. Scale-up options are usually recommended for production workloads, while we recommend a scale-out cluster model for the SAP Business Warehouse application on SAP HANA. Instances of SAP HANA can also be deployed in the cloud, such as the SAP HANA Enterprise Cloud service or the SAP HANA Cloud Platform, or in public clouds such as Amazon Web Services or Microsoft Azure. SAP plans to support additional public clouds as they become available, thus addressing the scalability concerns our customers may face as their business grows.

    Persistence

    While the database holds the bulk of data in memory to ensure maximum performance for both online transaction processing (OLTP) and online analytical processing (OLAP), SAP HANA uses persistent storage as a fallback in case of failure. Every five minutes, SAP HANA pushes this data – along with SQL data, undo log information, and modeling data – to persistent storage assets. This write process is asynchronous.

    Information about transaction log changes on the other hand, is saved directly to persistent storage as soon as transactions are committed. These transaction logs are saved synchronously.
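
    For reference, the savepoint interval is a configurable persistence setting; a minimal sketch of checking and, if needed, changing it could look like the following (the parameter name and the 300-second default are assumptions about a typical configuration, not details from this blog):

    -- inspect the current savepoint interval (typically 300 seconds)
    SELECT * FROM M_INIFILE_CONTENTS
     WHERE SECTION = 'persistence' AND KEY = 'savepoint_interval_s';
    -- change it if required
    ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
        SET ('persistence', 'savepoint_interval_s') = '300' WITH RECONFIGURE;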

    The figure below provides a visual breakdown of the system.

    Figure 2

    Enterprise Readiness with SAP HANA – Persistence, Backup & Recovery

    Backup and Recovery

    SAP HANA supports three backup options: full backups, delta backups (either incremental or differential), and log backups. The figure below provides a visual summary of how these would look, without the delta backups, which we will cover later.

    The data and undo log information is written to the disk (data area) asynchronously every 5 minutes, as described earlier.

    The log captures all changes by database transactions (redo logs), and is saved to disk (log area) continuously and synchronously after each COMMIT of a database transaction (waiting for end of disk write operation).

    In addition to memory and persistent storage for the staging of backups, the database also offers an external backup staging area. The staging area can help protect data in situations such as when a third-party backup tool has a maintenance window. SAP HANA parks backups in the staging area for a defined downtime period.

    Figure 3

    Enterprise Readiness with SAP HANA – Persistence, Backup & Recovery

    Delta Data Backups

    On top of full data backups, delta backups provide more options for the IT manager and reduce the reliance on full data backups, which can be limiting in some cases.
    • Full Data Backups: All data.
    • Incremental Backups: Changed data since the last data backup (delta or full).
    • Differential Backups: Changed data since the last full data backup.


    Figure 4

    Enterprise Readiness with SAP HANA – Persistence, Backup & Recovery

    With delta backups, users can also mix incremental and differential backups, depending on their operational requirements. Additionally, data backups which were successful can be reused for the next recovery attempt: SAP HANA offers resumable recoveries to shorten the time required after a failed recovery attempt, reusing milestones reached during recovery execution as a safety measure. SAP HANA’s latest SPS 12 release also focuses on extending this feature from data backups to log backups. The log backup file handling could be further optimized in future releases to reduce the number of backup files per day.
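
    To make the three backup types concrete, here is a minimal SQL sketch of how they can be triggered manually (the file name prefixes are placeholders; log backups are written automatically and need no statement):

    -- full data backup
    BACKUP DATA USING FILE ('monday_full');
    -- incremental backup: changes since the last data backup (full or delta)
    BACKUP DATA INCREMENTAL USING FILE ('tuesday_incr');
    -- differential backup: changes since the last full data backup
    BACKUP DATA DIFFERENTIAL USING FILE ('wednesday_diff');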

    How a backup recovery works

    To sum it up, the list below shows how a backup recovery is executed after, for example, a power failure (a SQL sketch follows the list):
    1. The database loads from the last full data backup
    2. Data changed since the last full backup (the differential backup) is applied, or
    3. data changed since the last data backup (the incremental backup) is applied
    4. Log backups of transactions are applied
    5. Log entries of all transactions kept in the online log volume of SAP HANA are applied, up to the latest committed state before the power loss.
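
    As an illustration only, a point-in-time recovery boils down to a single statement that replays the steps above; the timestamp is a placeholder, and the exact invocation depends on the tool used (cockpit, studio or hdbsql), since the database must be offline during recovery:

    -- assumed sketch: recover to a point in time from the existing data and log backups
    RECOVER DATABASE UNTIL TIMESTAMP '2016-12-01 08:00:00';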

    Figure 5.
    Enterprise Readiness with SAP HANA – Persistence, Backup & Recovery

    Additional Backup Recommendations

    The features above showcase the native backup capabilities of SAP HANA. Aside from these, storage snapshot backups are also available, combined with SAP HANA internal snapshots to keep internal consistency within the SAP HANA database. This gives the user multiple reset points per day, which, usually with the help of fast snapshot restores, offers a nice extension to the native backup options described before.

    Port of Antwerp from the Opendata challenge perspective – part 2

    Now that we have our HANA instance up and running on the SAP HANA Cloud Platform trial landscape, as described in part 1 of this blog series, we can start importing CSV-type data.

    This dataset is available in different formats: CSV, JSON, XML, KML and MAP.

    Step 1: Download and explore the file locally

    Open the following URL and save the file locally: http://datasets.antwerpen.be/v4/gis/grondwaterstroming.csv

    Open the file with a text editor like Notepad++ or Textpad or any text editor that you are used to. It only contains 123 rows.

    "id";"objectid";"geometry";"shape";"diepte_min_mtaw";"diepte_max_mtaw";"gridcode";"shape_length";"shape_area"
    "1";"22817";"{""type"":""Polygon"",""coordinates"":[[[4.3485873207025,51.346769755136],[4.3478973692569,51.346440719645],[4.347314661279,51.346163674265], [4.3468624645733,51.345941143175],[4.3462929983102,51.345670422578],[4.3449967290939,51.34505534943],[4.3439950525301,51.344583140302],[4.3437740104511,51.344478188117],[4.3434659797483,51.344331225892],[4.3429218895802,51.344074533317],[4.34288534622,51.344057673192],[4.3409137263619,51.343117163245],[4.3403091531947,51.342826173675],[4.3390567402791,51.342233658706],[4.3328651550485,51.33922930772],[4.331886599349,51.338758269204],[4.3311485526292,51.338404419669],[4.330086644905,51.33790188229],[4.3293988266884,51.337573177119],[4.328771087475,51.3372736987],[4.3272616188114,51.336637146853],[4.326651121465,51.336389487602],[4.3264888590539,51.336315846364],[4.3263967478269,51.336274801953],[4.3263400946058,51.33619950784],[4.326180091714,51.335864649775],[4.3261597950016,51.335761703446],[4.3266226585268,51.335868319593],[4.3270050610938,51.335953694495],[4.3273957730855,51.336038936191],[4.32759124408,51.336073687632],[4.328288007978,51.336119150872],[4.3285004665499,51.335954202951],[4.3289469355724,51.335584442431],[4.3297332822139,51.334937988123],[4.331938082533,51.335103690544],[4.3334930195943,51.335202599348],[4.3348311758461,51.335296097318],[4.3357146784906,51.335365575973],[4.3358590121613,51.33544551828],[4.3371420013413,51.335573524889],[4.3385397653228,51.335712205965],[4.3392999786683,51.335789635393],[4.3394574359333,51.335680550615],[4.3397974422046,51.335465064817],[4.340791892702,51.33483459416],[4.3410679772104,51.334672316669],[4.3415907689501,51.334323863148],[4.3418414929863,51.334164247584],[4.3420541077191,51.334028629994],[4.342249447717,51.333890204008],[4.3424026273298,51.333786453157],[4.3426405789147,51.333621485258],[4.342972054204,51.33339278501],[4.3432609134663,51.333190481572],[4.3438771876297,51.332764938422],[4.3442979127241,51.332469543799],[4.3454452057513,51.331671384125],[4.3465415697138,51.330905228029],[4.3473022037807,51.330375773714],[4.3473405482524,51.330354712588],[4.347574992935,51.329928803729],[4.3477466201118,51.329609202486],[4.3479276784484,51.329271617588],[4.3481372006834,51.328893456378],[4.3483373195894,51.328522456603],[4.3485432639002,51.328141903156],[4.3487451551991,51.327771046674],[4.3488613576963,51.327565878624],[4.3488880659212,51.327471652806],[4.3490101755812,51.327061172735],[4.3491188249319,51.326690300427],[4.3492256645973,51.326320398377],[4.3492674139323,51.326170143619],[4.3492822050077,51.326104448175],[4.3488413130698,51.326413448761],[4.3454583087765,51.327564439105],[4.33870870381,51.329080050325],[4.3197710284714,51.329520711012],[4.315591971097,51.330327213735],[4.3094752307029,51.333745773119],[4.3058687602067,51.338716106648],[4.3036718652719,51.343158643902],[4.2978329511419,51.35120174591],[4.2949066636125,51.35714279553],[4.2921285130391,51.365206828188],[4.2888072942056,51.366814517309],[4.2818242091877,51.369352852897],[4.2777400030282,51.372461008749],[4.2762063877381,51.375555926289],[4.3279056730946,51.37558468686],[4.3284916501884,51.3743992826],[4.3325572590312,51.369722140851],[4.3311804387299,51.368369840803],[4.3260762008003,51.367624679135],[4.323509919768,51.366187670841],[4.3220757486114,51.36453314703],[4.3197169172929,51.360409362886],[4.319544615571,51.358579341762],[4.3213585707178,51.354909704834],[4.3248051867653,51.350214989784],[4.3283502486996,51.34901025715],[4.3331034485997,51.347943919169],[4.3464405367933,51.347386342772
],[4.3485873207025,51.346769755136]]]}";"";"2";"4";"2";"21448.786259186";"12187823.977738"
    "4";"22818";"{""type"":""Polygon"",""coordinates"":[[[4.3654619939525,51.354810747631],[4.3640409482971,51.356188478285],[4.3634128505739,51.357430346935],[4.3635748822165,51.357411937903],[4.3637193395027,51.357413911619],[4.3639270482673,51.357393696305],[4.3641145577698,51.357387384207],[4.3641997968326,51.357420102891],[4.3642441925927,51.357483720537],[4.3643298794717,51.357484846745],[4.3644207237189,51.357437384575],[4.3645606881384,51.357436265298],[4.3647284774256,51.357375467121],[4.3648462358112,51.357348219117],[4.3649274294383,51.357346540553],[4.3649949407966,51.357370270318],[4.3651122519885,51.357374767228],[4.3652609584006,51.357376604355],[4.3653782688091,51.357395418694],[4.365496704718,51.357310881313],[4.3655915937937,51.357297823934],[4.3657044118086,51.357299372141],[4.3658062491747,51.357277605472],[4.3659096493009,51.357296284204],[4.3660404109148,51.357309345888],[4.3661891305792,51.357334343617],[4.3662884975973,51.357341229856],[4.3665729050578,51.357324947718],[4.3666987420707,51.357372414218],[4.3667900297115,51.357287587619],[4.3668308563108,51.357279588835],[4.3668990452383,51.357223127658],[4.3670446091818,51.357144772649],[4.3671534084717,51.357091682116],[4.3672850609737,51.357039023053],[4.3673759155645,51.356999980254],[4.3675073521592,51.356953082172],[4.3681223837244,51.356100492855],[4.366934744751,51.355521335176],[4.3655071620833,51.35483247234],[4.3654619939525,51.354810747631]]]}";"";"8";"10";"5";"987.57980133301";"57314.693996644"

    The first row is the header with the column names, and looking at the data we can deduce the data types:

    id               integer
    objectid         integer
    geometry         long text
    shape            (empty, so string)
    diepte_min_mtaw  integer
    diepte_max_mtaw  integer
    gridcode         integer
    shape_length     float
    shape_area       float

    We can also notice that the separator is the semicolon and that the field values are enclosed in double quotes.
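
    For orientation, the table the import wizard generates in the next step corresponds roughly to the following definition (the schema and table names are placeholders; the wizard creates this for you, so the statement is purely illustrative):

    CREATE COLUMN TABLE "MYSCHEMA"."GRONDWATERSTROMING" (
        "id"              INTEGER PRIMARY KEY,
        "objectid"        INTEGER,
        "geometry"        BLOB,      -- converted to ST_GEOMETRY in part 3
        "shape"           NVARCHAR(255),
        "diepte_min_mtaw" INTEGER,
        "diepte_max_mtaw" INTEGER,
        "gridcode"        INTEGER,
        "shape_length"    DOUBLE,
        "shape_area"      DOUBLE
    );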

    Step 2: Import the local file using Eclipse

    First make sure you are using the SAP HANA Administration Console perspective.
    Now, using the “File > Import…” menu, type in “Data from Local File” in the search box, then click on “Next“:

    Port of Antwerp from the Opendata challenge perspective – part 2

    Select your “Target  System” and click “Next“.

    Then use the “Browse” button to select your file, change the “Field Delimiter” to “Semi Colon”, check “Header exists” and input 1 in the field, check “Import all data”, then pick your schema and table name.

    Port of Antwerp from the Opendata challenge perspective – part 2

    Click on “Next”.

    On this screen you will have the ability to adjust the “Table Settings and Data Mapping” settings, where you will have to select “id” as the “Key”:

    Port of Antwerp from the Opendata challenge perspective – part 2

    You might need to adjust the data types here, as they are guessed from the first few hundred rows.

    Click on “Finish“.

    Congratulations, your data has been uploaded. Hit “F5” to refresh the tree:

    Port of Antwerp from the Opendata challenge perspective – part 2
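
    If you want to double-check the load from the SQL console, a quick count is enough (schema and table names are the placeholder names from the table definition sketch earlier):

    SELECT COUNT(*) FROM "MYSCHEMA"."GRONDWATERSTROMING";  -- should return 123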

    Now that you know how to upload a CSV file into HANA, let’s get it a bit more … sophisticated.

    You probably noticed that the “geometry” field was imported as a BLOB, but it looks very much like a piece of geoJSON information, which we should store in an ST_GEOMETRY column and use with the spatial engine.

    However, geoJSON is not supported by HANA out of the box, so in part 3 we will see how we can convert geoJSON into the “Well-Known Text” format: Port of Antwerp from the Opendata challenge perspective – part 3.

    Port of Antwerp from the Opendata challenge perspective – part 3

    As stated before, HANA does not yet support the geoJSON format in the ST_GEOMETRY data type constructors, so we will need to convert this field to the “Well-Known Text” (WKT) format.
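
    To show where we are heading, WKT strings can be consumed directly by the spatial type constructors; a tiny example (the polygon coordinates are made up) looks like this:

    -- WKT is accepted by the ST_GEOMETRY type constructors
    SELECT NEW ST_Polygon('Polygon ((0 0, 0 1, 1 1, 1 0, 0 0))').ST_Area() FROM DUMMY;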

    I found a “Node.js” package to do this (there is only one), so I decided to build a little program to convert the field in the local CSV file.

    First you will need to install Node.js from their website: https://nodejs.org/en/download/

    Once installed you can open command line prompt and run the following command:

    node --version

    This will return the currently installed version and confirm that Node.js is properly installed.

    Now, run the following commands:
    npm install fast-csv
    npm install fs
    npm install terraformer
    npm install terraformer-wkt-parser

    This will install the relevant packages needed to run the program.

    You can now create a new file named ‘convert-geojson-to-wkt-csv.js‘ and add the following code:
    var csv = require('fast-csv');
    var fs = require('fs');
    var Terraformer = require('terraformer');
    var WKT = require('terraformer-wkt-parser');

    // CSV output options: semicolon-separated, headers and values quoted
    var writeOptions = {
        headers: true,
        quoteHeaders: true,
        quoteColumns: true,
        rowDelimiter: '\n',
        delimiter: ';'
    };

    var csvWriteStream = csv.format(writeOptions);

    var writableStream = fs.createWriteStream("C:/temp/grondwaterstroming-out.csv")
        .on("finish", function() {
            console.log("All done");
        });

    csvWriteStream.pipe(writableStream);

    // CSV input options matching the downloaded file (semicolon-separated, double-quoted)
    var readOptions = {
        objectMode: true,
        headers: true,
        delimiter: ';',
        quote: '"',
        escape: '"'
    };

    var csvReadStream = fs.createReadStream("C:/temp/grondwaterstroming.csv");

    var csvStream = csv
        .parse(readOptions)
        .on("data", function(data) {
            // convert the geoJSON string to WKT before writing the row out
            data.geometry = WKT.convert(JSON.parse(data.geometry));
            csvWriteStream.write(data);
        })
        .on("end", function() {
            csvWriteStream.end();
        });

    csvReadStream.pipe(csvStream);

    The program will read the ‘C:/temp/grondwaterstroming.csv‘ file and output the result to ‘C:/temp/grondwaterstroming-out.csv‘, where the geoJSON field is converted to WKT format.

    You can now run the following command:
    node convert-geojson-to-wkt-csv.js

    And you can now check the output file: ‘C:/temp/grondwaterstroming-out.csv‘

    Let’s now import that file in HANA Studio like we did in the last blog, but this time set the geometry field to ST_GEOMETRY instead of BLOB.

    Port of Antwerp from the Opendata challenge perspective – part 3

    But unfortunately this file contains some invalid polygons, and you will get the following error message: “Invalid polygon: multiple exterior rings“.

    So let’s filter out the polygons with holes using the following code:
    var csv = require('fast-csv');
    var fs = require('fs');
    var Terraformer = require('terraformer');
    var WKT = require('terraformer-wkt-parser');

    var writeOptions = {
        headers: true,
        quoteHeaders: true,
        quoteColumns: true,
        rowDelimiter: '\n',
        delimiter: ';'
    };

    var csvWriteStream = csv.format(writeOptions);

    var writableStream = fs.createWriteStream("C:/temp/grondwaterstroming-out.csv")
        .on("finish", function() {
            console.log("All done");
        });

    csvWriteStream.pipe(writableStream);

    var readOptions = {
        objectMode: true,
        headers: true,
        delimiter: ';',
        quote: '"',
        escape: '"'
    };

    var csvReadStream = fs.createReadStream("C:/temp/grondwaterstroming.csv");

    var csvStream = csv
        .parse(readOptions)
        .on("data", function(data) {
            var geometry = new Terraformer.Primitive(JSON.parse(data.geometry));
            if (geometry.hasHoles()) {
                console.log("found holes in " + data.id + ". let's remove the holes");
            } else {
                data.geometry = WKT.convert(JSON.parse(data.geometry));
                csvWriteStream.write(data);
            }
        })
        .on("end", function() {
            csvWriteStream.end();
        });

    csvReadStream.pipe(csvStream);

    The newly generated file will have fewer lines and should import fine with the same procedure as earlier (don’t forget to change the geometry type from BLOB to ST_GEOMETRY).

    R Integration with SAP HANA

    Once you have installed the SAP HANA database client (HANADBClient) on your system, you need to set up a data source.

    Go to Control Panel -> search for Data Sources (ODBC) -> Set up data sources (ODBC).

    You will get the below popup

    R Integration with SAP HANA

    On the current tab, click on Add; a popup will appear asking for the server/port information.

    R Integration with SAP HANA

    Enter any data source name and a description for the server, then enter server:port. If you don’t know the server and port, go to your HANA Studio (SAP HANA Development perspective), right-click on the system, then go to the Properties tab -> Additional Properties tab -> Host details. After entering these details, click on Connect to check whether they are correct by entering the username/password; if all is OK, you will get a prompt that the connection is successful.

    R Integration with SAP HANA

    Once you are done with the above, download R Studio (https://www.rstudio.com/) and the R package (https://cran.r-project.org/).

    Install both components. Once you are done with the R Studio installation, download the RODBC package (https://cran.r-project.org/web/packages/RODBC/index.html), then go to R Studio and install the package there.

    R Integration with SAP HANA

    Once we have added the RODBC package, we can check it in R Studio using the library function.

    R Integration with SAP HANA

    Now we can go ahead and access our HANA database artifacts: establish the database connection using R commands and then execute the sqlQuery function to run SQL statements from R.

    I created a table in the HANA server for billing docs, which I am accessing from “R”; below is the sample code.

    library('RODBC')

    ch <- odbcConnect("data source name", uid = "test_hana", pwd = "test12")

    result <- sqlQuery(ch, 'SELECT * FROM "_SYS_BIC"."BILLING_DATA"')

    Here is the output from “R”

    R Integration with SAP HANA

    Another sample for consuming procedure in “R”

    I created one procedure which returns Company Code , Accounting doc & fiscal year from BSEG table.

    sqlQuery(ch, 'CALL "ABAPDEMO"."GETDATA"(10, ?)');

    Output in “R”

    R Integration with SAP HANA

    Note – you can also use hdbuserstore to store a default key with the username/password for your HANA server, so you don’t need to expose them in your connection string; the link below explains how to secure your username/password when dealing with ODBC:

    https://help.sap.com/saphelp_hanaplatform/helpdata/en/dd/95ac9dbb571014a7d7f0234d762fdb/content.htm

    The secure user store is installed with the SAP HANA client package. After you install the SAP HANA client, the hdbuserstore program is located in one of the following directories:
    • /usr/sap/hdbclient (Linux/UNIX)
    • %SystemDrive%\Program Files\sap\hdbclient (Microsoft Windows)

    Port of Antwerp from the Opendata challenge perspective – part 4

    In the last part (part 3), we saw how to import a CSV file into HANA using HANA Studio, where we converted the geoJSON field into WKT. Let’s now see how we can take care of the same content, but in a JSON format.
    If you remember, our CSV file (http://datasets.antwerpen.be/v4/gis/grondwaterstroming.csv) is also available in JSON format.

    So the link to our JSON content is: http://datasets.antwerpen.be/v4/gis/grondwaterstroming.json. Note that this will only return the first thousand rows (luckily, this dataset has only 123 rows, but I will show you how to implement the pagination).


    We will also be using Node.js to convert the JSON content into CSV and take care of the geometry field at the same time.

    I will assume from now that you have installed Node.js and the packages from the previous blog.

    You will need to install one additional package (the HTTP client used by the script below) with the following command from your working directory:

    npm install super-request

    You can now create a new file named ‘convert-geojson-to-wkt-json.js‘ and add the following code:

    var csv = require('fast-csv');
    var fs = require('fs');
    var request = require("super-request");
    var Terraformer = require('terraformer');
    var WKT = require('terraformer-wkt-parser');

    var input_host = "http://datasets.antwerpen.be";
    var input_path = "/v4/gis/grondwaterstroming.json";

    var writeOptions = {
        headers: true,
        quoteHeaders: true,
        quoteColumns: true,
        rowDelimiter: '\n',
        delimiter: ';'
    };

    var csvWriteStream = csv.format(writeOptions);

    var writableStream = fs.createWriteStream("C:/temp/grondwaterstroming-json-out.csv")
        .on("finish", function() {
            console.log("All done");
        });

    csvWriteStream.pipe(writableStream);

    function transformOpenDataInCSVForHANA(json) {
        var i = 0;
        json.data.forEach(function(item) {
            var geometry = new Terraformer.Primitive(JSON.parse(item.geometry));
            if (geometry.hasHoles()) {
                console.log("found holes in " + item.id + ". let's remove the holes");
            } else {
                item.geometry = WKT.convert(JSON.parse(item.geometry));
                csvWriteStream.write(item);
            }
            i++;
        });
    }

    function getJSONOpenData(param_host, param_path) {
        var record_count = 1;
        // Start the request
        request(param_host)
            // first let's get the number of records from the first page
            .get(param_path)
            .qs(function() {
                return {page_size: record_count};
            })
            .expect(200)
            .end(function(error, response, json) {
                if (!error && response.statusCode === 200) {
                    var page = JSON.parse(json);
                    record_count = page.paging.records;
                } else {
                    console.log(error);
                }
            })
            // now let's get all the records
            .get(param_path)
            .qs(function() {
                return {page_size: record_count};
            })
            .expect(200)
            .end(function(error, response, json) {
                console.log("page_size " + record_count);
                if (!error && response.statusCode === 200) {
                    transformOpenDataInCSVForHANA(JSON.parse(json));
                } else {
                    console.log(error);
                }
                csvWriteStream.end();
            });
    }

    getJSONOpenData(input_host, input_path);

    The program will read the data from the URL built from the input_host and input_path variables (in other words: http://datasets.antwerpen.be/v4/gis/grondwaterstroming.json), then write the output to ‘C:/temp/grondwaterstroming-json-out.csv‘, where the JSON properties are transposed and the geoJSON field is converted to WKT format.

    You can now run the following command:

    node convert-geojson-to-wkt-json.js

    Then you should be able to run the import just like in the last blog part.

    A few more words about the pagination on this Opendata web site: it is implemented using an HTTP parameter named ‘page_size’, and the first page also returns the total number of records.
    This is why the code calls the URL twice: in the first call I retrieve just one record, so that I can get the total number of records and use it as the page size in the second call.
    Obviously this may lead to performance issues if the number of records is really huge, and might require some rework of the way the transformOpenDataInCSVForHANA function is called.

    Input parameter based on procedure of type Date

    Use case: the user is asked to enter a date. If nothing is specified, it should default to the first day of the current month; otherwise, the user-specified value should be used to filter the data, using a graphical calculation view.

    If you are thinking of doing this with an input parameter of type “Derived from Procedure/Scalar function”, then you are almost there, with just 2 hurdles to cross.

    For this demonstration I’m using the table structure below, in which the field DOJ is of type DATE; the input parameter will be applied to this field.

    Input parameter based on procedure of type Date

    Sample Data in the table:

    Input parameter based on procedure of type Date

    Create a procedure which returns the date (first day of current month).


    CREATE PROCEDURE RAJ.FIRST_DAY (OUT FIRST_DAY DATE)
    LANGUAGE SQLSCRIPT
    AS

    BEGIN
    ---- Write the logic based on business requirement.
    ---- This logic gives you the first day of the current month as date
    SELECT ADD_DAYS(ADD_MONTHS(LAST_DAY(CURRENT_DATE),-1),1)
    INTO FIRST_DAY
    FROM DUMMY;
    END
    ;

    Call the procedure to see the output:

    CALL RAJ.FIRST_DAY(?);

    Input parameter based on procedure of type Date

    In the graphical calculation view, create an input parameter based on the above procedure.
    Now you will come across the first hurdle.

    The error message is “Procedure must have scalar parameter of type String”, which is due to a product limitation.

    Input parameter based on procedure of type Date

    The input parameter is based on a date, but there is a product limitation that only string types can be used.
    Fortunately, dates are given in single quotes in a SQL query, so this should not stop us from going ahead.
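
    For example, a plain string literal works fine as a filter on a DATE column (the table name below is assumed for this demo):

    -- the string is implicitly converted to a date when filtering the DATE column DOJ
    SELECT * FROM RAJ.EMP_TABLE WHERE DOJ = '2016-12-01';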

    Input parameter based on procedure of type Date

    Let us go back to the procedure, change the OUT parameter type from DATE to VARCHAR(10), and convert the date output to a string using TO_CHAR.

    DROP PROCEDURE RAJ.FIRST_DAY;
    CREATE PROCEDURE RAJ.FIRST_DAY (OUT FIRST_DAY VARCHAR(10))
    LANGUAGE SQLSCRIPT
    AS

    BEGIN
        SELECT TO_CHAR(ADD_DAYS(ADD_MONTHS(LAST_DAY(CURRENT_DATE),-1),1))
        INTO FIRST_DAY FROM DUMMY;
    END
    ;

    Now, in the input parameter, give the procedure name. This time it should be accepted without an error message. Hurdle number 1 crossed.

    Input parameter based on procedure of type Date

    Apply the input parameter on your required date field in Graphical calculation view (in my case the field is DOJ).

    Input parameter based on procedure of type Date

    Input parameter based on procedure of type Date

    Validate and Activate the view.

    Input parameter based on procedure of type Date

    On the data preview you can see that the input parameter output value is not passed. Here comes hurdle number 2.

    Somehow the output of the procedure does not get populated correctly; I could not find the actual reason for this.

    Generally, when an Attribute/Analytic/Calculation view is activated, a column view of the same is placed in the _SYS_BIC schema, from which the system accesses it.
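
    For context, this _SYS_BIC column view is what the data preview actually queries; a hand-written equivalent (package, view and parameter names are placeholders) would look like:

    -- assumed names: package "demo", view "CV_EMPLOYEE", input parameter IP_DATE
    SELECT * FROM "_SYS_BIC"."demo/CV_EMPLOYEE"
        (PLACEHOLDER."$$IP_DATE$$" => '2016-12-01');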

    So I re-created the procedure in the _SYS_BIC schema; the only change this time is the schema name.

    DROP PROCEDURE _SYS_BIC.FIRST_DAY;
    CREATE PROCEDURE _SYS_BIC.FIRST_DAY (OUT FIRST_DAY VARCHAR(10))
    LANGUAGE SQLSCRIPT
    AS

    BEGIN
    SELECT TO_CHAR(ADD_DAYS(ADD_MONTHS(LAST_DAY(CURRENT_DATE),-1),1)) INTO FIRST_DAY FROM DUMMY;
    END;

    Check the output of your procedure by CALL _SYS_BIC.FIRST_DAY(?);

    Now modify the Input parameter in calculation view to point to schema _SYS_BIC.

    Input parameter based on procedure of type Date

    Activate the view and do the data preview.
    This time you will see the output of the procedure being populated. Hurdle number 2 crossed


    Output of view based on input parameter value returned from procedure:

    Input parameter based on procedure of type Date

    We are getting the required result. Now do the data preview again and give the date we want:

    Input parameter based on procedure of type Date

    Data is fetched as expected.

    SAP HANA 2.0 SPS 00 What’s New: Administration – by the SAP HANA Academy

    Introduction

    We will be posting new videos to the SAP HANA Academy to show new features and functionality introduced with SAP HANA 2.0 Support Package Stack (SPS) 00.

    Tutorial Video

    What’s New?

    SAP HANA Cockpit

    The most significant change in SAP HANA 2.0 for system administration is the new SAP HANA cockpit. The cockpit merges the functionality of the Database Control Center (DBCC) tool, used to manage SAP HANA landscapes, with the functionality of the HANA 1.0 cockpit, used to manage single HANA systems (both single-container and multitenant container systems).

    SAP HANA 2.0 SPS 00 What’s New: Administration – by the SAP HANA Academy

    The new cockpit no longer relies on the SAP Web IDE for SAP HANA development tool for user and trace management, but is now fully integrated with the same look and feel. To browse catalog objects, for example, and to run SQL statements interactively, you can now use the new Database Explorer.

    SAP HANA 2.0 SPS 00 What’s New: Administration – by the SAP HANA Academy

    The following illustration shows the architecture conceptually:

    SAP HANA 2.0 SPS 00 What’s New: Administration – by the SAP HANA Academy

    The new SAP HANA cockpit supports both HANA 2.0 and HANA 1.0 SPS 12 systems. The cockpit runs on a separate system ‘powered by’ HANA express and XS advanced, with cockpit services for landscape and database administration (for example, comparing system parameter configurations), a cockpit manager to add HANA systems as resources, and a cockpit view for system administration.

    Just like the HANA 1.0 cockpit and HANA studio, most of the activity uses a SQL client connection with a named database user. For offline activity, like starting a stopped database, loading trace files from the file system or restoring a database, the host agent is used. This integrates both the existing HANA platform lifecycle management tool (hdblcm) and the HANA cockpit for offline administration tool, which have not changed (much).

    SAP HANA 2.0 SPS 00 What’s New: Administration – by the SAP HANA Academy

    The DB Control Center (DBCC) tool is no longer supported for HANA 2.0; keep this in mind if you want to upgrade HANA 1.0 to 2.0 and this tool is installed.

    Multitenant Database Containers

    For system administration, the following changes have been made to make administration of multitenant database container (MDC) systems easier:
    • A backup of a single-container system database can now be recovered into a tenant database in an MDC system, retaining the backup history. This simplifies conversion to multitenant environments.
    • Secure network communication can be disabled for the copy and move process of tenant databases with the setting global.ini > [multidb] > enforce_ssl_database_replication = off (see the sketch after this list). This simplifies the task for noncritical environments.
    • Performance trace can now be enabled for multiple tenant databases at the same time to analyze cross-database queries. This simplifies monitoring of MDC systems.
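
    A minimal sketch of changing that setting via SQL (the section and key come straight from the bullet above):

    ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
        SET ('multidb', 'enforce_ssl_database_replication') = 'off' WITH RECONFIGURE;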

    Workload Management

    The new admission control feature for workload management allows the administrator to manage peak workloads. You define thresholds for memory and CPU usage, both for when HANA should start to queue and for when HANA should start to reject incoming requests. Enabling admission control helps avoid saturation of the HANA database server.
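
    As an illustration, the thresholds might be set as shown below; note that the parameter names in the [admission_control] section are my assumption about a typical configuration and are not taken from this post:

    -- assumed sketch: queue new requests above 90% CPU usage, reject above 99%
    ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
        SET ('admission_control', 'queue_cpu_threshold') = '90' WITH RECONFIGURE;
    ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
        SET ('admission_control', 'reject_cpu_threshold') = '99' WITH RECONFIGURE;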

    For more information about the different parameters, see below:

    SAP HANA 2.0 SPS 00 What’s New: Administration – by the SAP HANA Academy

    Additionally, a query timeout feature has now been implemented which can be used to apply a maximum time limit to process any SQL statement. This feature can be used as a way of automatically canceling client queries which are hanging or looping indefinitely.

    SAP HANA HDBSQL

    A couple of new options have been added to the SAP HANA database interactive terminal, HDBSQL. This command line tool is mostly used for batch processing and scripting, for example by the SAP HANA installer itself.

    SAP HANA 2.0 SPS 00 What’s New: Administration – by the SAP HANA Academy

    There are new input and output options: with [-V] you can specify variable values for use in SQLScript, and with [-quiet] the welcome screen is not printed.

    We also have two new output format options: with [-b maxlength] you can define the maximum output length for binary and long columns in bytes/characters (the default is 32).
    As of SAP HANA 2.0, hdbsql includes times for executions and fetches by default; with [-oldexectimes] you can revert to the HANA 1.0 execution-only timing.

    Does your CFO need a real time geolocation report?

    In a constantly evolving world, CEOs and CFOs must adapt continuously to their new market dimensions. Having access to real-time reporting gives a serious advantage over competitors; CFOs can obtain the necessary information to have a real-time picture of their finances.
    Real-time reconciliation, introduced by the SAP Simple Finance solution, increases the confidence index of financial reporting. In a more and more competitive world, CFOs can detect declining business-line profitability earlier and take appropriate decisions.
    In addition, connectivity and mobility solutions such as HCP, HANA Live, Fiori, Lumira, etc. enable you to reach your reports from any location or device.


    That being said, let’s see how the SAP HANA tools are integrated. Here is a simple case of turnover geolocation.

    1- Pre-requisite :

    In order to show a finance posting analysis, we need:
    • SAP Simple Finance system fully configured
    • SAP HANA LIVE Studio with the standard package imported
    • SAP Lumira connected to SAP HANA
    • SAP Lumira map extension activated
    • Esri GEOMAP account
    • Your financial posting
    For instance, the following finance posting has been done in the SFIN System.
    Company 1 with 14 888 550 euro of turnover
    Company 2 with 17 780 000  euro of turnover


    2- HANA LIVE Data model creation:

    Create or extend a standard calculation view (based on ACDOCA table) in order to extract turnover data.


    A relevant data selection can be performed.

    3- SAP Lumira analytics :

    Add a new dataset getting the financial report based on the calculation view below.


    Your report is ready; you can display it and break down the data as needed.
    The first screen is a global turnover view “by country”.


    Then you can break it down to display turnover by country and G/L account.


    In conclusion, thanks to these new features, CFOs can benefit from a turnover geolocation presentation on any device, which is very important in the big data era.
    In MRP logistics processes, for example, stock can be displayed by store or by region, which can help with appropriate replenishment.

    Constant Selection in SAP HANA Using Dynamic Join

    In SAP BW, there is this concept of “Constant Selection” where you can mark a selection in the Query Designer as constant. This means that navigation and filtering will have no effect on the selection at runtime.

    In SAP HANA, there is no feature that directly supports this functionality. We have to model it ourselves. One way to implement it is through “self joins” and “dynamic joins”.
    One application of “Constant Selection” is when calculating the market share of a product of a company against the same product of other companies. The problem arises when there are additional attributes (e.g. Country) that a user can select that can affect the result.

    Let’s take this table as an example:

    Constant Selection in SAP HANA Using Dynamic Join

    As you can see from this table, companies A, B, and C have products in the US, while only companies A and B (not company C) have products in Canada.

    If we want to calculate the “market share” of a company’s product, in BW we would make the “sales by company” the constant selection, because we want to relate the sales of the individual company’s product to the total sales of that product across the whole group of companies.
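
    Before modeling this graphically, it may help to see the intent as plain SQL; the table and column names below are assumptions for illustration only:

    -- assumed table SALES_DATA with columns COMPANY, PRODUCT, COUNTRY, SALES
    SELECT s.COMPANY, s.PRODUCT, s.COUNTRY,
           SUM(s.SALES)                 AS SALES,
           t.TOTAL_SALES,
           SUM(s.SALES) / t.TOTAL_SALES AS MARKET_SHARE
      FROM SALES_DATA s
      JOIN (SELECT PRODUCT, COUNTRY, SUM(SALES) AS TOTAL_SALES
              FROM SALES_DATA
             GROUP BY PRODUCT, COUNTRY) t
        ON t.PRODUCT = s.PRODUCT AND t.COUNTRY = s.COUNTRY
     GROUP BY s.COMPANY, s.PRODUCT, s.COUNTRY, t.TOTAL_SALES;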

    To implement this in SAP HANA, we need to create a calculation view and use the same table in two “aggregation” nodes.  Create the first aggregation node as below:

    Constant Selection in SAP HANA Using Dynamic Join

    In the second aggregation node, we will use the same table but will not expose “COMPANY”, because here we want the total sales across all companies. When adding the aggregated measure “SALES” from this node, rename it to “TOTAL_SALES”.

    Constant Selection in SAP HANA Using Dynamic Join

    Now create an inner join between these two aggregation nodes on “COUNTRY” and “PRODUCT”. This effectively is a “self join” as we are joining a table to itself.

    Constant Selection in SAP HANA Using Dynamic Join

    Lastly, to get the market share percentage, we need to create a “calculated column” in the final aggregation node and name it “MARKET_SHARE” using the formula “SALES” / “TOTAL_SALES”.

    Constant Selection in SAP HANA Using Dynamic Join

    Constant Selection in SAP HANA Using Dynamic Join

    Now let’s run the following query including all attributes “PRODUCT”, “COUNTRY”, and “COMPANY”:

    Constant Selection in SAP HANA Using Dynamic Join

    The total sales shown are the sales of a particular product totaled over all companies in a country, which is correct. But now suppose we want to see the market share of a company’s product in North America (both US and Canada).

    To do this, we remove “COUNTRY” from the query:

    Constant Selection in SAP HANA Using Dynamic Join

    But since Company C does not have products in Canada, the total sales for C is only for US.  This is not what we want since we want the total sales to be across both US and Canada.

    To solve this problem, we need to change the join between the two aggregation nodes to a “dynamic join”.

    Constant Selection in SAP HANA Using Dynamic Join

    Now let’s rerun the same query without the “COUNTRY”.

    Constant Selection in SAP HANA Using Dynamic Join

    Problem solved! The total sales are across both US and Canada for all companies. The dynamic join takes into consideration the attributes you use in your query: since “COUNTRY” is not part of the query, HANA does not execute the join on “COUNTRY”, which allows the total sales to remain “constant” across all companies.
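
    In SQL terms, dropping “COUNTRY” from the query effectively changes the grouping and the join to the following (same assumed names as before), which is exactly what the dynamic join produces:

    SELECT s.COMPANY, s.PRODUCT,
           SUM(s.SALES)                 AS SALES,
           t.TOTAL_SALES,
           SUM(s.SALES) / t.TOTAL_SALES AS MARKET_SHARE
      FROM SALES_DATA s
      JOIN (SELECT PRODUCT, SUM(SALES) AS TOTAL_SALES
              FROM SALES_DATA
             GROUP BY PRODUCT) t
        ON t.PRODUCT = s.PRODUCT
     GROUP BY s.COMPANY, s.PRODUCT, t.TOTAL_SALES;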

    Installing Hana Server (SP12) and XS Advanced Runtime Components

    The main purpose of this blog post is to list the steps required to provision a Hana SP12 System as a development server, along with XS Advanced runtime components such as the Database Catalog Tool (HRTT) and WebIDE.

    Note – In this blog post we will be using command line tools for the installation. However, the last few steps should be applicable even if you used the GUI.

    Prerequisites

    In this tutorial, we assume that you have already,
    • Downloaded the latest version of the Hana Database Server
    • Downloaded the latest version of the XS Advanced Runtime
    • Downloaded the latest version of the XS Advanced Runtime Additional Components, such as HRTT, DevX and WebIDE. These should be zip files (which contain mtar files of the applications). For example, sap-xsac-hrtt-2.0.8.zip, sap-xsac-di-4.0.9.zip, sap-xsac-webide-4.0.9.zip.
    • Copied the installation files into an installation folder in your target server.
    The software can be downloaded from the SAP Software Download Center:

    SAP Software Download Center > Download Software > By Alphabetical Index (A-Z) > H > SAP IN-MEMORY (SAP HANA ) > HANA PLATFORM EDITION > SAP HANA PLATFORM EDITION > SAP HANA PLATFORM EDITION 2.0

    Note – Make sure you also download the mtaext files for DevX and WebIDE. Alternatively, you could also create these files as shown below.

    The mtaext file for the DevX components should look like this:

    _schema-version: "2.0.0"
    ID: com.sap.devx.di.xs2-config1
    extends: com.sap.devx.di

    modules:

      - name: di-core
        parameters:
          memory: 512M
        properties:
          JBP_CONFIG_JAVA_OPTS: '[java_opts: ""]'

      - name: di-runner
        parameters:
          memory: 768M

    resources:

      - name: di-builder-memory
        properties:
          DI_BUILDER_MEMORY: 512M

    The mtaext file for the WebIDE should look like this:

    _schema-version: "2.0.0"
    ID: com.sap.devx.webide.ext
    extends: com.sap.devx.webide
    # any change in mtad should be downported to 1SCV repo -com.sap.devx.fassembly/com.sap.devx.fassembly.build/mtad.*
    modules:
      - name: webide
        parameters:
          port: 53075
          host: webide
          memory: 1GB

    Now that we have downloaded the required installation files and copied them into the target server, let’s get started with the installation process.

    1. Make all files executable.

    Run the following command as the sudo user on the target server:

    cd <installation directory>
    chmod -R 777 *

    2. Start HDBLCM installation process to install Hana, XSA Runtime

    Change directories into the directory which contains the hdblcm tool and run the command given below as sudo user. Using the component_dirs option, provide the paths for the Hana server installation files and the XSA Runtime components separately (comma separated).

    If you want to install the HRTT and DevX as part of this step, you need to specify their directories as well. If not, you can install these later.

    Note – Web IDE needs to be installed separately.

    ./hdblcm --component_dirs=<hana server installation directory>,<xsa runtime installation directory>,<additional components directory (hrtt, devx)>

    3. Follow installation prompts.

    You can select default settings for most options, except maybe the ones given below.
    • System index: [Install new system]
    • Components for installation: [All components]
    • Local Host Name: [specify full domain name. eg: hostname.example.company.com]
    • SAP HANA System ID: [eg: DEV]
    • Database Mode: [single_container]
    • System Usage: [development]
    • Organization Name: [eg: SAP]
    • Space Name: [eg: DEV]
    • Routing Mode: [ports]
    • XS Advanced components to be installed: [xsac_di_core, xsac_hrtt, xsac_portal_serv, xsac_monitoring, xsac_services, xsac_ui5_fesv2, etc]
    4. Verify the installation.

    Now that you have installed the Hana Database and XSA Runtime, you can verify the installation. You can do this using the XSA Runtime client tool or using Hana Studio.

    4a. Verify using XSA Runtime client tool
    • Login to server using PuTTY (or SSH) as <sysid>adm (Alternatively, you can use your local XSA Runtime Client tool to login as XSA_ADMIN)
    • Login as XSA_ADMIN
    xs-admin-login
    • Check XSA CLI version
    xs version
    • Check if apps are installed and running
    xs apps ( or xs a )

    4b. Verify using Hana studio
    • Add the system on Hana studio as SYSTEM user
    • Go to the Landscape tab in the system info
    • Verify that xsengine, xscontroller, xsuaaserver, etc are running
    5. Install HRTT (if it wasn’t already installed during previous steps)
    • Login to server using PuTTY (or SSH) as <sysid>adm (Alternatively, you can use your local XSA Runtime Client tool to login as XSA_ADMIN)
    • Login to XSA Runtime as XSA_ADMIN.
    xs-admin-login
    • Note – Make sure that you are on the SAP space.
    • Replace the path and zip file name with your zip file and run the following command:
    xs install <path to additional components>/sap-xsac-hrtt-2.0.8.zip -o ALLOW_SC_SAME_VERSION
    • Verify by running the xs apps command. You should see an app named hrtt-core, that is STARTED. This is the Hana Catalog Tool. You should also be able to login to this application as XSA_ADMIN using the specified app URL. 
    6. Install DevX DI (if it wasn’t already installed during previous steps)
    • Run the following command as XSA_ADMIN (same as above). Replace the path and zip file name with your zip file:
    xs install <path to additional components>/sap-xsac-di-4.0.9.zip -e <path to additional components>/sap-xsac-di-4.0.9.mtaext -o ALLOW_SC_SAME_VERSION
    • Verify by running the xs apps command. You should see a number of apps starting with di* and devx*
    7. Install Web IDE
    • Run the following command as XSA_ADMIN (same as above). Replace the path and zip file name with your zip file:
    xs install <path to additional components>/sap-xsac-webide-4.0.9.zip -e <path to additional components>/sap-xsac-webide-4.0.9.mtaext
    • Verify by running xs a command. You should see an app named webide. 
    8. Add new Role Collections

    In order to access the Web IDE, Space Enablement Tool and Certificate Administration Tool, you will need to create Role Collections and add them to your users. First step is to find the URL for the XS Advanced Administration and Monitoring tool.

    8a. Find the URL for the XS Advanced Administration and Monitoring tool
    • Login as XSA_ADMIN
    • Run the xs version command.
    • You should see an application named xsa-admin with an endpoint. Copy this URL and login as XSA_ADMIN.
    8b. Create new Role Collections named WebIDE developer and WebIDE admin
    • Login to XS Advanced Administration and Monitoring tool as XSA_ADMIN
    • Open Application Role Builder tile
    • Click the three parallel lines (on the left upper side)
    • Select Role Collection
    • Click on the + at the bottom
    • Provide Role Collection name (e.g. WebIDE_DEVELOPER, WebIDE_ADMIN) and click on Create button
    • Once you have created the two Role Collections, click Save at the bottom of the screen
    8c. Add Role collection to Role Template
    • Go to Application Role tab
    • Select webide!i1
    • Select the WebIDE_Developer Role
    • Click Add to Role Collection
    • Select WebIDE_DEVELOPER and click OK
    • Do the same for WebIDE_ADMIN
    8d. Add Role Collection to user
    • Click Home button
    • Go to User Management tile
    • Click on XSA_ADMIN user
    • Go to Role Collections tab
    • Click Add button
    • Add WebIDE_DEVELOPER and WebIDE_ADMIN
    • Click Save button at the bottom of the screen
    Now you should be able to login to the Web IDE, Space Enablement Tool and Certificate Administration Tool

    9. Add space roles to XSA_ADMIN

    xs set-org-role XSA_ADMIN SAP OrgManager
    xs set-space-role XSA_ADMIN SAP DEV SpaceDeveloper
    xs set-space-role XSA_ADMIN SAP DEV SpaceManager

    10. Add Github SSL certificate to Web IDE (required for cloning github repositories)
    • Export the github.wdf.sap.corp SSL certificate using a browser as a Base64-encoded CER (X.509) file (a command-line alternative is sketched after this list)
    • Find URL for Certificate Administration Tool (di-cert-admin-ui) and login as XSA_ADMIN
    • Upload github SSL certificate
    • Now you should be able to clone your github repositories
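    If you prefer the command line to the browser export, one possible way to fetch the certificate is sketched below (assuming OpenSSL is available on a machine that can reach the git server; the PEM output is the Base64-encoded X.509 format mentioned above):

    openssl s_client -connect github.wdf.sap.corp:443 -servername github.wdf.sap.corp </dev/null 2>/dev/null \
      | openssl x509 -outform PEM > github_wdf_sap_corp.cer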
    11. Enable DEV space (required to run builds in the Web IDE)
    • Find URL for Space Enablement Tool (di-space-enablement-ui) and login as XSA_ADMIN
    • Click Enable button for DEV space (this process might take a few minutes to complete)
    • Now you should be able to run builds in the Web IDE
    Now the Hana XSA instance should be ready for developing and deploying apps. You can also create new DEV users and provide them access to the Web IDE and Catalog Tools.

    12. Create DEV users

    To access the WebIDE, your dev users need to be granted the WebIDE_Developer role. For this, you can either use the XS Advanced Administration Tool, or run an SQL command in Hana Studio. In this tutorial we will be using Hana Studio. The second requirement is to specify the new dev user as a SpaceDeveloper in the DEV space (or whichever space you created). The steps are given in detail below.

    12a. Create new users and add WebIDE developer role

    In Hana Studio, open an SQL window as SYSTEM user, and run the following SQL statements.

    CREATE USER <username> PASSWORD "InitialPwd1"
      SET PARAMETER 'XS_RC_XS_CONTROLLER_USER' = 'XS_CONTROLLER_USER';
    ALTER USER <username> SET PARAMETER 'XS_RC_WEBIDE_DEVELOPER' = 'WEBIDE_DEVELOPER';

    12b. Add user as SpaceDeveloper

    Run the following command in the XSA Runtime Client Tool as XSA_ADMIN user:

    xs set-space-role <username> <ORGNAME> <spacename> SpaceDeveloper

    Now your new dev users should be able to login to the WebIDE and Catalog Tools.

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    The Proposed Process

    Step 1: Identify the need
    Step 2: Design
    Step 3: Back-end development
    Step 4: Front-end configuration (or really development at this point)
    What’s most important (always) is Step 1 and Step 2, but this post will focus on Step 3 and Step 4, since I feel that for these steps the documentation is all there in places like experience.sap.com and the SAPUI5 documentation, yet still pretty cryptic for beginners. That said, this is a guide to help you see a real example, not really training material. Actually, it’s more of a dump of information, but if you are persistent and give this a go yourself, it should at least help you know you’ve got all the bits to put it together.

    Step 1 – Identify the Need

    This is not a technology looking for a problem, but a technology that can address real User Experience issues and opportunities for some groups of people. I think the most obvious one is a People Manager Overview Page with a Smart Tile that highlights the need to look at it, e.g. upcoming important dates like birthdays or contract end dates, the Leave Calendar, or Team Timesheet status. These are all Smart Business Tiles at my current customer, but Home Page Tiles are valuable real estate (like your smart phone’s screen), so let’s not force all our functionality onto the home page.

    Step 2 – Design

    Build.me has a fairly rudimentary version of the Overview Page, so we ended up just using drawn pictures in a freestyle build project, and in reality an Excel spreadsheet was the final mockup/documentation for each Card. Hopefully build.me improves over time with the ability to shape the output, but at least producing this for end users helps drive the discussion early and avoid rework after the build.

    Step 2.5 – Design -> Build transition

    Consider this Card design:

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Here we have a few things to consider:
    1. A Heading
    2. A KPI Value with a Unit of Measure
    3. A Comparison KPI Value (for percentage increase/decrease)
    4. Some grouping information (Material Group A and EMEA)
    5. A graph title
    6. An x-axis dimension (year in this case)
    7. A y-axis measure (number of cancelled purchase orders in this case)
    8. A range of data (e.g. 4 data points, 1 for each year)
    9. Potentially the semantic navigation when you click on the header, or select a specific dimension (in this case, “year”).
    10. Potentially the global filter that should be applied at the Overview Page level (note – For multiple cards; the technical name of the fields to filter by needs to be consistent across cards).

    Except for Titles and semantic navigation, this pretty much defines the type of data we need to expose via OData to the Card and helps drive the discussion of what needs to be developed.

    If done right, the data that gets exposed can be used well beyond just the overview page card; and will make the Front-End configuration trivial.

    Note – I will point out, I’m still confused by some design choices for the Card definitions (e.g. Trend calculations in the 1.40 UI5 version compared to 1.38) so your mileage may vary depending on your desired requirements.

    Real World Example… 

    In order to write this post, I’ve taken the following semi-real requirement for an analytical Card.

    What we want to see is an Analytical Card which highlights the % calculation for Preventative Maintenance versus Total Maintenance. We also want to filter it by Work Centre. E.g. It should look something like this.

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Based on above, a quick summary of the information provided is:

    Title: Preventative Maintenance hours versus Total Maintenance hours

    x axis: month

    y axis: Percentage

    y axis unit: %

    KPI Header Value: This month’s percentage

    KPI Comparison Value: Last month’s percentage

    Show: Last 4 months

    Filter: By Work Centre

    Order: By month ascending

    Navigation: To intent “MaintenanceHours-Analysis” (for example)

    … in a Trial World

    Now I don’t like writing a tutorial that you can’t go try yourselves, so I’ve leveraged an MDC HANA instance on an HCP Trial account to create the XSOData service which will expose the right Calculation View for the above to be possible. This will let you test it inside the UI5 WebIDE, but don’t expect to be able to deploy this scenario as is.

    FYI – the one thing required to run this is to create a destination in the HCP Cockpit as follows; the destination can then be referenced in the UI5 WebIDE to call XSOData services exposed from the MDC HANA instance:

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)
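    For reference, the destination boils down to a handful of properties. Here is a minimal sketch (every value below is a placeholder for your own MDC instance and user, not taken from the screenshot above):

    Name=HANA_MDC
    Type=HTTP
    Description=XSOData services on the trial MDC HANA instance
    URL=https://<your-mdc-instance-and-account>.hanatrial.ondemand.com
    ProxyType=Internet
    Authentication=BasicAuthentication
    User=<your HANA user>
    Password=<your HANA password>

    Additional properties:
    WebIDEEnabled=true
    WebIDEUsage=odata_gen
    WebIDESystem=HANA_MDC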

    Step 3 – Backend Development (in HANA)

    For simplicity, I’ve created the following dummy tables and data representing the ERP data (create your own tables and data which is easy to do and worth learning if you don’t know how):
    1. Work Orders (simplified version of AUFK but artificially including Work Centre to simplify the data model)
    2. Time Entries (simplified version of CATSPM)
    e.g.

    Work Orders (where Z1 is preventative, and Z2 is corrective):

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Time Entries:

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    So in essence, we need a Calculation View in HANA which provides a month by month breakdown totalling all Z1’s hours and all Z2’s hours. We also need to calculate the percentage for each month; and be able to provide the current month’s percentage (KPI) and last month’s percentage (comparison KPI).

    I’m not a HANA Calculation View modelling expert and you’re going to need to know some HANA modelling to get through this bit. For those who are experts, I’d love some feedback and I encourage you to blog about some complex calculation view problems you’ve solved.

    So here’s the high level solution, with some screen shots.

    The HANA solution consisted of the following files:

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    e.g.
    • MaintenanceHoursOVP Calculation View which provides us a month by month view of preventative maintenance and corrective maintenance.
    • MaintenanceHoursGroupedByMonth table function which was created to group the previous calculation view by month consistently (the why will be explained below)
    • MaintenanceHoursOVPFinal Calculation View which is the final aggregation on top of the previous Table Function which also calculated the percentage, and “this month”’s percentage and “last month”’s percentage for the KPI value.
    • xsodata definition pointing at MaintenanceHoursOVPFinal Calculation View providing our odata endpoint

    MaintenanceHoursOVP Calculation View looks like this:

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Projection_1 is pretty much just a vanilla projection of the dummy Work Orders table.

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Projection_2 is pretty much just a vanilla projection of the dummy Time Entries table, but using some string manipulation we’ve created 2 calculated columns (Year and Month).

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Projection_4 is using the standard M_TIME_DIMENSION table in HANA (I’ve previously generated time information using the Eclipse wizard to do this).

    I’m working with Month values so I’ve filtered by the 1st of each month; plus looking at only the last 24 months of data.  Note – To do a relative date, I’ve created a calculated column called MonthsBack (this is quite useful in a future calculation you’ll see shortly).
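    To make the idea concrete, here is roughly the same logic expressed in plain SQL rather than in the calculated-column editor (a sketch only; it assumes the generated time table _SYS_BI.M_TIME_DIMENSION and the MONTHS_BETWEEN and DAYOFMONTH functions available in recent HANA revisions):

    -- How many whole months ago is each first-of-month calendar row?
    SELECT "DATE_SQL",
           MONTHS_BETWEEN("DATE_SQL", CURRENT_DATE) AS "MonthsBack"
      FROM "_SYS_BI"."M_TIME_DIMENSION"
     WHERE DAYOFMONTH("DATE_SQL") = 1
       AND MONTHS_BETWEEN("DATE_SQL", CURRENT_DATE) BETWEEN 0 AND 23;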

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    The join between Work order and Time Entry is a simple 1.n relationship from Work Order to Time Entry:

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    The join between the Calendar and the join above uses the month/year. While in reality there will typically always be a work order for every month, we need to mark it as a right outer join to cover the case where there isn’t one (which could occur if you ran this in a new month before any work time had been registered).

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Now the final Aggregation takes the output of Join_2, uses hours as one of the measures, and creates a number of calculated columns:
    • MonthName – Made from the DATE_SQL by using the useful function “monthname”
    • PreventativeHours and CorrectiveHours (requires use of the IF statement to calculate totals based upon work order type – in reality, it will be a few IF statements chained together; see the sketch below)

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    • Percent Preventative (calculated using the other 2 calculated values)
    • Ignore the UoM Calculated columns for the moment as that will be discussed shortly


    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)
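    For reference, the calculated columns behind the bullets above look roughly like this in calculation view expression syntax (a sketch: the work order type column is assumed to be called ORDER_TYPE and the hours measure HOURS in the dummy tables; your own model may use different names):

    PreventativeHours:   if("ORDER_TYPE" = 'Z1', "HOURS", 0)
    CorrectiveHours:     if("ORDER_TYPE" = 'Z2', "HOURS", 0)
    PercentPreventative: if(("PreventativeHours" + "CorrectiveHours") > 0,
                            100 * "PreventativeHours" / ("PreventativeHours" + "CorrectiveHours"), 0)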

    Finally, adjust the semantics on each measure (semantic definitions aren’t actually required at this point):

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Now this Calculation View would actually be quite useful for analysis, but for now it is just the source for our table function. Why do I need a table function? To be honest, it’s to get around anomalies in how (as far as I understand it) the above aggregation responds based upon what you select.

    E.g. the Overview Page makes a few requests: one is the request for the graph data, and another is for the KPI values. The funny thing is that if I add calculated columns called “This Month percentage” and “Last Month percentage” and select only these 2 columns from the calculation view, HANA optimises the request in a way that produces unusual results for me (e.g. not grouping by month). To get around this, we create a Table Function with a GROUP BY clause, hopefully forcing the outcome we want.

    FUNCTION "HANA_USER"."ovp::MaintenanceHoursGroupedByMonth" ( ) 
    RETURNS TABLE ( "MonthName" VARCHAR(20), "MonthsBack" INT, "WORK_CENTRE" VARCHAR(10), 
    "PreventativeHours" FLOAT, "CorrectiveHours" FLOAT, "PercentPreventative" FLOAT )
    LANGUAGE SQLSCRIPT
    SQL SECURITY INVOKER AS
    BEGIN
    return SELECT 
    "MonthName",
    "MonthsBack",
    "WORK_CENTRE",
    sum("PreventativeHours") AS "PreventativeHours",
    sum("CorrectiveHours") AS "CorrectiveHours", 
    sum("PercentPreventative") AS "PercentPreventative" 
    FROM "_SYS_BIC"."ovp/MaintenanceHoursOVP" 
    GROUP BY 
    "DATE_SQL", "MonthsBack", "MonthName", "WORK_CENTRE"
    order by "DATE_SQL";
    END;

    With this, we can now make a dedicated Calculation View to expose to the Card. The bonus of using a dedicated calculation view is that we can do some prefiltering to return just the 4 values we want to display on the graph:

    MaintenanceHoursOVPFinal.calculationview

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    Project_1 looks like this (simply the Table Function as the data source, and 2 calculated fields):

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    This Month Percentage looks like this:

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    and “=1” instead of “=0” for Last Month’s calculated column.

    I also added the filter “MonthsBack” < 4 to just return 4 results.

    E.g. The use of MonthsBack makes the above calculations pretty easy!
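    To make that concrete, the two calculated columns are roughly the following expressions (a sketch in the same calculation view expression syntax, using the column names that the annotation file further below refers to):

    ThisMonthsPercentage: if("MonthsBack" = 0, "PercentPreventative", 0)
    LastMonthsPercentage: if("MonthsBack" = 1, "PercentPreventative", 0)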

    Units of Measure are actually shown in Cards based upon semantic definitions.  In HANA you can tag a measure with the semantic tag “Quantity with Unit of Measure”. It supports a constant value, but it appears that the Overview Page only supports a Unit of Measure that is linked to another column, so this explains why I’ve added the PercentUoM columns.

    Finally we set the dimensions/measures appropriately and we are ready to create the XSOData service.

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    First, we created a dedicated package (as shown above).  The package needs an .xsapp file and a .xsaccess file (new->File):

    .xsapp contains:
    {}

    .xsaccess contains:

    {
      "exposed" : true,
      "authentication": null,
      "prevent_xsrf" : true,
      "headers": {
        "enabled": true,
        "customHeaders": [ {"name":"X-Frame-Options","value":"SAMEORIGIN"} ]
      }
    }

    Then finally the xsodata file itself (new file and call it [service name].xsodata)

    Contents, something like this:


    service {
      "ovp/MaintenanceHoursOVPFinal.calculationview" as "MaintenanceHoursPreventative"
      keys generate local "GeneratedID"
      aggregates always;
    }

    annotations {
      enable OData4SAP;
    }

    At this point, you can run the XSOData service and start playing with the calculation view via the browser. Note down the URL, as you’ll need its relative path later when setting up your destination and HANA connection.
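    For example, assuming (for illustration only) that the service file is called OVP.xsodata and sits in package ovp.xsodata, a quick smoke test from the browser could look like this – adjust the path to your own package and file names:

    /ovp/xsodata/OVP.xsodata/MaintenanceHoursPreventative?$format=json
    /ovp/xsodata/OVP.xsodata/MaintenanceHoursPreventative?$select=MonthName,PreventativeHours,CorrectiveHours,PercentPreventative&$format=json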

    Front End Configuration

    Create an Overview Page Project (New Project from Template, Overview Page Application) pointing at your XSOData service. When the new project wizard shows the Annotation Selection screen, just press Next since we’re going to add a local annotation file after the wizard is finished. Note – If the Overview Page Application is not shown, you’ll need to activate it in the plugins folder in WebIDE settings.

    Now you can add your analytical card (New-> Card from the root folder).

    The only thing that really needs explaining here is the following configuration:

    First – What is an annotation? Put simply, imagine you are moving your furniture and you want boxes to be put in certain rooms.  What you do is write on the box, or put a sticky note on it and say “this goes in the lounge room”; or “this goes into bedroom 1”. Well this is what annotations are used for. To tell the mover (the UI) where to put the box (the field).

    On top of this are qualifiers. A qualifier is like using the same sticky notes for 2 different moving jobs. They both say “this goes into bedroom 1” but maybe you add the qualifier “Fred’s house” and “Terry’s House” to ensure the boxes end up in the right house’s bedroom 1.

    So while you could just press Next and not add anything, I’d suggest you add a qualifier to each of the above annotations. In our case, let’s just add #preventative to each of these.

    If you open the manifest file, it should look something like this:

    "sap.ovp": {
      "_version": "1.1.0",
      "globalFilterModel": "OVP",
      "globalFilterEntityType": "MaintenanceHoursPreventativeType",
      "cards": {
        "Example_card00": {
          "model": "OVP",
          "template": "sap.ovp.cards.charts.analytical",
          "settings": {
            "title": "{{Example_card00_title}}",
            "entitySet": "MaintenanceHoursPreventative",
            "selectionAnnotationPath": "com.sap.vocabularies.UI.v1.SelectionVariant#preventative",
            "chartAnnotationPath": "com.sap.vocabularies.UI.v1.Chart#preventative",
            "presentationAnnotationPath": "com.sap.vocabularies.UI.v1.PresentationVariant#preventative",
            "dataPointAnnotationPath": "com.sap.vocabularies.UI.v1.DataPoint#preventative",
            "identificationAnnotationPath": "com.sap.vocabularies.UI.v1.Identification#preventative"
          }
        }
      }
    }

    The next step is to select the webapp folder and add a local annotation file to your project (with future CDS implementations in S4, you create annotations at the source).

    Now make sure that the Annotation Modeller is enabled in the WebIDE plugins, then go ahead and open the annotation file in the Annotation Modeller.

    Now this is the most cryptic part of annotating, and rather than describe it to you, I’ll give you the final version of the annotations used in this example:

    Let’s Build an Analytical Card for a Fiori Overview Page (with a HANA Backend)

    The full annotation file looks like this:

    <edmx:Edmx xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx" Version="4.0">
      <edmx:Reference Uri="/sap/bc/ui5_ui5/ui2/ushell/resources/sap/ushell/components/factsheet/vocabularies/UI.xml">
        <edmx:Include Alias="UI" Namespace="com.sap.vocabularies.UI.v1"/>
      </edmx:Reference>
      <edmx:Reference Uri="/sap/bc/ui5_ui5/ui2/ushell/resources/sap/ushell/components/factsheet/vocabularies/Communication.xml">
        <edmx:Include Alias="vCard" Namespace="com.sap.vocabularies.Communication.v1"/>
      </edmx:Reference>
      <edmx:Reference Uri="/sap/bc/ui5_ui5/ui2/ushell/resources/sap/ushell/components/factsheet/vocabularies/Common.xml">
        <edmx:Include Alias="Common" Namespace="com.sap.vocabularies.Common.v1"/>
      </edmx:Reference>
      <edmx:Reference Uri="http://docs.oasis-open.org/odata/odata/v4.0/errata02/os/complete/vocabularies/Org.OData.Core.V1.xml">
        <edmx:Include Alias="Core" Namespace="Org.OData.Core.V1"/>
      </edmx:Reference>
      <edmx:Reference Uri="http://docs.oasis-open.org/odata/odata/v4.0/cs01/vocabularies/Org.OData.Measures.V1.xml">
        <edmx:Include Alias="CQP" Namespace="Org.OData.Measures.V1"/>
      </edmx:Reference>
      <edmx:Reference Uri="http://docs.oasis-open.org/odata/odata/v4.0/cs01/vocabularies/Org.OData.Capabilities.V1.xml">
        <edmx:Include Alias="Capabilities" Namespace="Org.OData.Capabilities.V1"/>
      </edmx:Reference>
      <edmx:Reference Uri="http://docs.oasis-open.org/odata/odata-data-aggregation-ext/v4.0/cs02/vocabularies/Org.OData.Aggregation.V1.xml">
        <edmx:Include Alias="Aggregation" Namespace="Org.OData.Aggregation.V1"/>
      </edmx:Reference>
      <edmx:DataServices>
        <Schema xmlns="http://docs.oasis-open.org/odata/ns/edm">
          <Annotations Target="ovp.xsodata.OVP.MaintenanceHoursPreventativeType">
            <Annotation Term="UI.Chart" Qualifier="preventative">
              <Record Type="UI.ChartDefinitionType">
                <PropertyValue Property="ChartType" EnumMember="UI.ChartType/ColumnStacked"/>
                <PropertyValue Property="Measures">
                  <Collection>
                    <PropertyPath>PreventativeHours</PropertyPath>
                    <PropertyPath>CorrectiveHours</PropertyPath>
                  </Collection>
                </PropertyValue>
                <PropertyValue Property="MeasureAttributes">
                  <Collection>
                    <Record Type="UI.ChartMeasureAttributeType">
                      <PropertyValue Property="Measure" PropertyPath="PreventativeHours"/>
                      <PropertyValue Property="Role" EnumMember="UI.ChartMeasureRoleType/Axis1"/>
                      <PropertyValue Property="DataPoint" AnnotationPath="@UI.DataPoint#preventativehours"/>
                    </Record>
                    <Record Type="UI.ChartMeasureAttributeType">
                      <PropertyValue Property="Measure" PropertyPath="CorrectiveHours"/>
                      <PropertyValue Property="Role" EnumMember="UI.ChartMeasureRoleType/Axis2"/>
                      <PropertyValue Property="DataPoint" AnnotationPath="@UI.DataPoint#CorrectiveHours"/>
                    </Record>
                  </Collection>
                </PropertyValue>
                <PropertyValue Property="Dimensions">
                  <Collection>
                    <PropertyPath>MonthName</PropertyPath>
                  </Collection>
                </PropertyValue>
                <PropertyValue Property="DimensionAttributes">
                  <Collection>
                    <Record Type="UI.ChartDimensionAttributeType">
                      <PropertyValue Property="Dimension" PropertyPath="MonthName"/>
                      <PropertyValue Property="Role" EnumMember="UI.ChartDimensionRoleType/Category"/>
                    </Record>
                  </Collection>
                </PropertyValue>
                <PropertyValue Property="Actions">
                  <Collection/>
                </PropertyValue>
              </Record>
            </Annotation>
            <Annotation Term="UI.DataPoint" Qualifier="preventative">
              <Record Type="UI.DataPointType">
                <PropertyValue Property="Title" String="{@i18n&gt;@CURRENT_MONTH_PREVENTATIVE_PERCENTAGE}"/>
                <PropertyValue Property="Value" Path="ThisMonthsPercentage"/>
                <PropertyValue Property="CriticalityCalculation">
                  <Record Type="UI.CriticalityCalculationType">
                    <PropertyValue Property="ImprovementDirection" EnumMember="UI.ImprovementDirectionType/Maximize"/>
                    <PropertyValue Property="ToleranceRangeLowValue" String="50"/>
                    <PropertyValue Property="DeviationRangeLowValue" String="50"/>
                  </Record>
                </PropertyValue>
                <PropertyValue Property="TrendCalculation">
                  <Record Type="UI.TrendCalculationType">
                    <PropertyValue Property="ReferenceValue" Path="LastMonthsPercentage"/>
                    <PropertyValue Property="IsRelativeDifference" Bool="true"/>
                    <PropertyValue Property="UpDifference" Decimal="5"/>
                    <PropertyValue Property="StrongUpDifference" Decimal="10"/>
                    <PropertyValue Property="DownDifference" Decimal="5"/>
                    <PropertyValue Property="StrongDownDifference" Decimal="20"/>
                  </Record>
                </PropertyValue>
              </Record>
            </Annotation>
            <Annotation Term="UI.DataPoint" Qualifier="preventativehours">
              <Record Type="UI.DataPointType">
                <PropertyValue Property="Title" String="{@i18n&gt;@PREVENTATIVE_HOURS}"/>
                <PropertyValue Property="Value" Path="PreventativeHours"/>
              </Record>
            </Annotation>
            <Annotation Term="UI.DataPoint" Qualifier="CorrectiveHours">
              <Record Type="UI.DataPointType">
                <PropertyValue Property="Title" String="{@i18n&gt;@CORRECTIVE_HOURS}"/>
                <PropertyValue Property="Value" Path="CorrectiveHours"/>
              </Record>
            </Annotation>
            <Annotation Term="UI.SelectionFields">
              <Collection>
                <PropertyPath>WORK_CENTRE</PropertyPath>
              </Collection>
            </Annotation>
            <Annotation Term="UI.PresentationVariant" Qualifier="preventative">
              <Record Type="UI.PresentationVariantType">
                <PropertyValue Property="SortOrder">
                  <Collection>
                    <Record Type="Common.SortOrderType">
                      <PropertyValue Property="Property" PropertyPath="MonthsBack"/>
                      <PropertyValue Property="Descending" Bool="true"/>
                    </Record>
                  </Collection>
                </PropertyValue>
                <PropertyValue Property="GroupBy">
                  <Collection/>
                </PropertyValue>
                <PropertyValue Property="TotalBy">
                  <Collection/>
                </PropertyValue>
                <PropertyValue Property="Total">
                  <Collection/>
                </PropertyValue>
                <PropertyValue Property="InitialExpansionLevel" Int="1"/>
                <PropertyValue Property="Visualizations">
                  <Collection/>
                </PropertyValue>
                <PropertyValue Property="RequestAtLeast">
                  <Collection/>
                </PropertyValue>
              </Record>
            </Annotation>
            <Annotation Term="UI.SelectionVariant" Qualifier="preventative">
              <Record Type="UI.SelectionVariantType">
                <PropertyValue Property="Parameters">
                  <Collection/>
                </PropertyValue>
                <PropertyValue Property="SelectOptions">
                  <Collection/>
                </PropertyValue>
              </Record>
            </Annotation>
            <Annotation Term="UI.Identification" Qualifier="preventative">
              <Collection>
                <Record Type="UI.DataFieldForIntentBasedNavigation">
                  <PropertyValue Property="Determining" Bool="false"/>
                  <PropertyValue Property="SemanticObject" String="Action"/>
                  <PropertyValue Property="RequiresContext" Bool="true"/>
                  <PropertyValue Property="Action" String="toappnavsample"/>
                </Record>
              </Collection>
            </Annotation>
          </Annotations>
        </Schema>
      </edmx:DataServices>
    </edmx:Edmx>

    Wrap-Up and General Thoughts

    Overview Pages should be (and to an extent are) easy to build/configure. Though with limitations and changing card characteristics between releases, it’s still a bit disjointed between design and build, since designers really need to understand the possibilities of the various versions of each card. That said, by providing insight to action, and (once the link card is finally released) a place where a very generic “role” can come to work, this really will enhance people’s user experience compared to a Launchpad with Tiles alone. Personally, I hope more detailed real-world how-to guides (that go into much more detail than what I’ve skimmed over) are created, and that for designers, more real-world and complete Overview Page examples are provided within build.me.

    How to – Import SFLIGHT sample data into HANA from a local computer

    As you discovered a few seconds ago, HANA doesn’t come with any preloaded sample data.

    So, before starting to play with your brand new system, you need to find and load some data.

    Luckily, SAP provides its flight data model for free (you could download it here, but unluckily the previous link doesn’t work anymore; I will try to fix this issue asap) – it’s a good starting point since it contains a sufficient amount of data and a limited number of tables.

    In the next steps I’m going to explain (or rather, I’m going to try) how you can import SFLIGHT into a HANA system directly from your local machine.

    1 – Unzip the SFLIGHT archive




    2 – On HANA Studio, expand the desired system.


    3 – Right click on the folder Catalog and then click on Import.


    4 – Select Import Catalog Objects from Current Client and then click on Browse…



    5 – Navigate to the folder where you previously extracted the SFLIGHT archive; select the unzipped folder and click on OK.


    6 – Click on NEXT.


    7 – Select all (CTRL + A) the items shown on the left and click on Add.



    8 – Click on Next.

    9 – To import tables and the corresponding data, select Catalog and Data and then set a convenient Number of Parallel Threads; click on Finish.


    10 – The procedure will run for a few seconds; at the end, after refreshing (F5) the Navigator View, the SFLIGHT schema will “magically” appear.


    Now you’re ready to start the exploration of SAP HANA. Have fun!

    Synonyms in HANA XS Advanced, Introduction

    Why Synonyms?

    A complex HANA data warehouse might use several DB schemas in which tables, views and other
    DB objects reside. E.g., there might be a replicated ERP schema managed by SLT, a Netweaver/BW schema managed by the Netweaver Stack, and a “native” HANA schema, all residing in the same HANA instance and all of them consumed by the same data warehouse application.

    On the database level, tables and other DB objects from different schemas can be accessed just by providing the corresponding object privileges to a user. Synonyms can be used for convenience or to improve the design, but are not needed.

    In XS Advanced (XSA) based HANA data warehouses and applications, development is schema-less. A developer can only access the “local schema” that is generated for the application. Access to objects in other schemas has to be done via private/local synonyms or projection views created in this local schema.

    In this series of three blog posts I will introduce the basic concepts, show how synonyms can be used to access objects in a remote schema, and explain the more complex concepts like configuration and service replacement. Even though I focus on HANA data warehouses, this document also applies to the usage of synonyms in XSA application development in general. I will not cover projection views explicitly, but include them in one of the example repos.

    Use Case

    I consider mainly the following use cases:
    • Accessing tables owned by a classical schema from a XSA based schema (HDI container), e.g. accessing an existing ERP Schema from XSA
    • Accessing tables owned by one XSA based schema from another XSA based schema, e.g. one schema containing data, a different XSA based DW application accessing those data
    • Transport/deployment  to another environment
    The following picture shows the principle. Changing the schema names is optional.

    Synonyms in HANA XS Advanced, Introduction

    HDI Containers

    In XSA, DB objects reside in an HDI container, which is a generated schema. Development has to be done in a schema-less way. This isolates HDI containers from each other completely and makes it easier to deploy multiple containers into the same system, have several developers work independently from each other, etc. Using synonyms is therefore the designated method to access objects in other schemas.

    In a pure SAP BW environment without any native HANA development you might not need this level of isolation. But as soon as native HANA development with multiple development teams becomes part of the use case, the isolation of HDI containers is an extremely powerful security mechanism, superior to the classic repository in that aspect.

    Before continuing, we need to understand the object owner and user concept of XSA. Several pre-defined users are generated for each HDI container.
    • A schema owner (name of the HDI container/schema)
    • An object owner (creates and owns all the objects)
    • An application user (also called runtime user, HANA user that runs XSA Applications within the HDI container)
    • Some other technical users which we are not interested in here
    • Other external user, e.g. from BI Tools
    The following picture tries to illustrate different HDI containers/schemas and users involved:

    Synonyms in HANA XS Advanced, Introduction

    Prerequisites for Examples

    To execute the steps in the coding examples, the following prerequisites apply:
    • You are familiar with the basic concepts of XSA, Web IDE and SQL
    • You have access to the XS advanced run-time environment
    • You have access to SAP Web IDE for SAP HANA
    • You have access to the XS command-line interface client
    • You have access to a “classical” database schema
    Some examples access tables in a classical DB schema. A repository with the description of how to generate the schema can be found at https://github.com/CGilde/syn-prov-classic
    Some examples access tables in an HDI container. A repository with the description how to generate the schema can be found at https://github.com/CGilde/syn-prov-hdi

    The complete coding examples can be found in public github repos at https://github.com/CGilde .
    All examples were tested on HANA 2.0, but most features are available already in 1.0 SPS12.

    Simple example

    I will finish with the most trivial example I can think of. Probably the most often used public synonym in HANA is DUMMY. Since DUMMY is a public synonym pointing to table SYS.DUMMY, for usage in XSA a private synonym pointing to table SYS.DUMMY has to be defined first.

    Instead of issuing a “CREATE SYNONYM” statement, an .hdbsynonym file is included in the db folder of a project, either by using the graphical editor:

    Synonyms in HANA XS Advanced, Introduction

    or by including a file:

    {
      "DUMMY": {
        "target": {
          "object": "DUMMY",
          "schema": "SYS"
        }
      }
    }

    To keep things simple, no namespace is used (.hdinamespace file with empty name and option “subfolder” : “ignore”).
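    For completeness, such an .hdinamespace file in the db folder contains nothing more than this (a sketch of the file just described):

    {
      "name": "",
      "subfolder": "ignore"
    }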

    To test the synonym, a short function is created by including a file TEST_FUNC.hdbfunction in the db folder of the project. This function uses the newly created synonym DUMMY and can be called from the HANA runtime tools (hrtt).

    FUNCTION "TEST_FUNC" ( )
    RETURNS table (result NVARCHAR(100)) 
    LANGUAGE SQLSCRIPT 
    SQL SECURITY INVOKER AS 
    BEGIN 
      return select 'CURRENT_USER: ' || CURRENT_USER result from DUMMY;
    END;

    Executing this function in hrtt/Database explorer will give the application user of the generated HDI container as result (see above for details on the different users involved in HDI).

    Synonyms in HANA XS Advanced, Introduction

    The repo for this example can be found at https://github.com/CGilde/syn-dummy.

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    In the previous post I introduced synonyms in XS Advanced (XSA) and created a very simple synonym. Now I will create synonyms pointing to objects I defined by myself.
    Accessing Objects in a Classical Schema

    I assume that the external schema I want to access already exists. The example schema EPM_DEV can be created by using repo https://github.com/CGilde/syn-prov-classic. I also assume there is already a project with a db module existing in the XSA Web IDE. We will insert the synonyms into this project. A git repo for the project with the complete coding can be found at https://github.com/CGilde/syn-hdi-classic-1.
    To provide access to objects in this schema via synonyms, the following steps have to be performed:

    1. Create roles in external schema

    The roles are used to control access to the objects we want to expose to the HDI container.
    Technically we could also use object privileges, or allow access to the whole external schema. Both have disadvantages over using roles and I will not use them here.

    In schema EPM_DEV, created by using the repo mentioned above, the roles are created using plain SQL. The roles could also be created using design-time role definitions.

    Typically there will be two roles created for synonym access:

    1. A role with grant option. This role is used to allow the HDI object owner access to the external objects and to grant access to other users, e.g. via roles for accessing procedures/functions/calcviews/views in definer mode. In my example this is the role “EPM_XXX::external_access_g”.

    2. A role without grant option. This role is used to allow the HDI application user direct access to the synonyms or via procedures/functions/calcviews in invoker mode. In my example this is the role “EPM_XXX::external_access”.

    -- Create role to be granted for external access via synonym,
    -- we leave out access to SNWD_EMPLOYEES on purpose, just imagine this table contains sensitive data
    create role "EPM_XXX::external_access";
    --grant select on schema EPM_DEV to "EPM_XXX::external_access"; -- use this for allow access to the whole schema
    grant select on SNWD_AD to "EPM_XXX::external_access";
    grant select on SNWD_BPA to "EPM_XXX::external_access";
    ...

    create role "EPM_XXX::external_access_g";
    --grant select on schema EPM_DEV to "EPM_XXX::external_access_g" with grant option; -- use this for allow access to the whole schema
    grant select on SNWD_AD to "EPM_XXX::external_access_g" with grant option;
    grant select on SNWD_BPA to "EPM_XXX::external_access_g" with grant option;
    ...

    2. Create a User Provided Service

    The roles created above have to be granted to the generated users of the HDI container. To achieve this, we first have to create a so called “user provided service”. It connects with a given user to a given DB and executes grant statements during deployment. The grant statements are generated out of the content of .hdbgrants files, which I define later.

    Logon to the XS CLI (Command Line Interface) with administrator privileges:
    xs-admin-login

    Change the target organization and space to those you are developing in (here space DEV, no change of organization):

    xs t -s DEV

    Create the user provided service. Replace host, port, user, password and schema with your values (xs cups is the alias for xs create-user-provided-service):

    xs cups EPM_XXX-table-grantor -p "{\"host\":\"my_host\",\"port\":\"30015\",\"user\":\"EPM_DEV\",\"password\":\"Grant_123\",\"driver\":\"com.sap.db.jdbc.Driver\",\"tags\":[\"hana\"] , \"schema\" : \"EPM_DEV\" }"

    The output should look like this:
    Created environment:
    "EPM_XXX-table-grantor": [
      {
        "schema": "EPM_DEV",
        "password": "Grant_123",
        "driver": "com.sap.db.jdbc.Driver",
        "port": "30015",
        "host": "my_host",
        "user": "EPM_DEV",
        "tags": [ "hana" ]
      }
    ]

    Now every service (e.g. project in XSA Web IDE) within the same organization and space can use the user provided service to obtain access to the external objects. You might think: “well, if every service can access the external objects, what about security?” The answer is simple. Services in XSA have only access to other services in their own organization and space. XSA organizations and spaces, and assignment of developers to them have to be structured in a way that supports the security requirements of your system landscape.

    3. Create .hdbgrants files

    To grant the privileges to the generated users, I include a file SNWD-table.hdbgrants into the db folder of my project in the Web IDE. The .hdbgrants file can be seen as the HDI equivalent of the “GRANT” statement in plain SQL. In the background, SQL “GRANT” statements are generated out of the .hdbgrants file content and executed using the user provided service  “EPM_XXX-table-grantor”  created above.
    {
      "EPM_XXX-table-grantor": {
        "object_owner": {
          "roles": [
            "EPM_XXX::external_access_g"
          ]
        },
        "application_user": {
          "roles": [
            "EPM_XXX::external_access"
          ]
        }
      }
    }

    In this hdbgrants file, “EPM_XXX-table-grantor” corresponds to the user defined service, which grants the privilege (grantor). “object_owner” and “application_user” refer to the users, to which the privileges are granted (grantees). The privileges/roles listed after “object_owner” will be granted directly to the HDI container’s object owner user. The privileges/roles listed after “application_user” will be granted to the HDI application user (aka runtime user) via a specific role (role with suffix ::access_role).

    For backwards compatibility, instead of .hdbgrants also the suffix .hdbsynonymgrantor is supported.

    4. Define dependencies

    The dependencies of our project are defined in the development descriptor file mta.yaml. The db module requires not only an hdi-container, in which the synonyms will be created, but also the user provided service to execute the grant statements. I put some comments into the following mta.yaml file to explain which symbols/properties refer to which development artifacts.
    _schema-version: '2.0'
    ID: syn-hdi-classic-1
    version: 0.0.1

    modules:
      - name: db
        type: hdb
        path: db
        requires:                                        # db module needs:
          - name: hdi-container                          # ...where synonyms are created
            properties:
              TARGET_CONTAINER: ~{hdi-container-service} # defined at (d1)
           
          - name: EPM_XXX-table-grantor                  #...for executing grant statement
           
    resources:
      - name: hdi-container
        type: com.sap.xs.hdi-container
        properties:
          hdi-container-service: ${service-name}        # (d1) get service into variable

      - name: EPM_XXX-table-grantor
        type: org.cloudfoundry.existing-service         # service created with xs cups

    5. Create Synonyms

    Before creating the first synonym, I build the almost empty db module of the project. Currently (2.0 SPS0) this is necessary to use the object search dialog in the graphical synonym editor. When only the text editor is used, the empty project does not have to be built.

    I include a file SNWD.hdbsynonym into the db folder of my project. Finally, I can create the synonyms, either using the graphical synonym editor or the text editor. In the graphical synonym editor I can select objects from the external services I have bound to the db module via the mta.yaml file, in our case from “EPM_XXX-table-grantor”. The search dialog offers all objects that can be read by the external service, and also some system objects that are accessible via some HDI default privileges/roles. The hdbgrants are not considered here.

    SNWD.hdbsynonym in graphical editor with search dialog:

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    SNWD.hdbsynonym in text editor:

    {
      "SNWD_AD": {
        "target": {
          "object": "SNWD_AD",
          "schema": "EPM_DEV"
        }
      } ...

    The .hdbsynonym file defines the synonyms, and references a target object and schema for each synonym. Several synonyms can be defined in the same file.

    I build the db module again. After successfully building, I use the HANA database explorer (or HANA runtime tools) to check the definition and content of the synonyms just created. The synonyms point to the right objects in schema EPM_DEV and show the right data:

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    6. Consume Synonyms

    The synonyms can now be consumed in all places where their base objects could be consumed. I created a very simple calculation view to verify that the synonyms can be used instead of tables. A role for consumption of the calculation view by external users (e.g. from BI client tools) was also created to verify the end-to-end use case. I will not go into further detail here, because this is not specific to synonyms. The calculation view and the role are contained in repo syn-hdi-classic-1.

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    Typical problems that occur when consuming synonyms are authorization problems, caused by insufficient privileges assigned to the generated HDI users or external users. In this case, your roles and hdbgrants were probably not defined as explained above (see 1. Create roles …, 3. Create .hdbgrants…). For table-synonyms, the object owner typically needs select with grant option, the application user typically needs only select privilege. You should also avoid granting more privileges than needed (e.g. giving the application user the grant option) to avoid security risks.

    7. Deployment

    Promoting to production can be done using some software life-cycle management tool, or via the XS CLI (Command Line Interface). I will use the XS CLI, since deployment is not the main topic of this post. After building the whole project in the Web IDE, I download the mtar file from the mta_archives folder. The mtar file has to be downloaded to a place that is accessible from the XS CLI.
    The only thing that is different from deploying a “stand-alone” application is that you have to make sure the user provided service is available in your deployment target before you deploy the application containing the synonyms. Create it if needed using “xs cups…” as described above.
    Trivial deployment:

    xs deploy syn-hdi-classic-1_0.0.1.mtar 

    If I want or have to change the service and/or schema name to which I deploy, I can optionally use an mta-extension. I include this example because it is often a requirement to have a defined, readable schema name.

    mta-extension file syn-hdi-classic-1.mtaext, must be accessible from XS CLI:
    _schema-version: '2.0'
    ID: syn-hdi-classic-1-mtaext
    extends: syn-hdi-classic-1

    resources:
      - name: hdi-container
        parameters:
          service-name: syn-hdi-classic-1-deploy
          config:   
            schema: SYN_HDI_CLASSIC_1

    Deployment with mta extension:

    xs deploy syn-hdi-classic-1_0.0.1.mtar -e syn-hdi-classic-1.mtaext

    Accessing objects in another HDI Container

    Again, I assume that the HDI Container which I want to access already exists. The example HDI container can be created by using repo https://github.com/CGilde/syn-prov-hdi. I created it using “xs deploy” with schema name EPM_DEV_HDI and service name EPM_DEV_HDI_DB.
    I also assume there is already a project with a db module existing in the XSA Web IDE. I will insert the synonyms into this project. The repo for this project can be found at https://github.com/CGilde/syn-hdi-hdi-0

    There are three typical use cases that I will consider when accessing another HDI Container:

    1. Accessing an HDI Container generated from Web IDE during development (same organization/space)
    During development we sometimes want to access an HDI Container which is also still “being developed”. Then we do not want to deploy the other HDI Container first, but want to consume the one that is generated from Web IDE.

    2. Accessing an HDI Container generated during deployment of an MTA archive (same organization/space)
    During deployment, and sometimes during development, we want to access an HDI Container which was generated during deployment.

    3. Accessing an HDI Container in a different organization/space
    Same as 2., but for security or other reasons the HDI Container we want to access was deployed in a different organization/space. We will look at the differences compared to 1. and 2. at the end.

    Let’s see what the differences are between accessing objects in an HDI Container vs. accessing a classical schema. The following steps have to be performed:

    1. Create roles in “external” HDI Container

    The roles are defined using .hdbrole files. Again, I create two roles, one with and one without grant option, named “EPM_XXX::external_access” and “EPM_XXX::external_access_g#“.
    Roles containing privileges/roles with grant option have to use “#” as their last character. Those roles can only be granted to the object owner user.

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema
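    In text form, such an .hdbrole file might look roughly like the following sketch (only two of the tables are listed; the grant-option variant “EPM_XXX::external_access_g#” additionally lists the privileges under privileges_with_grant_option):

    {
      "role": {
        "name": "EPM_XXX::external_access",
        "object_privileges": [
          { "name": "SNWD_AD",  "type": "TABLE", "privileges": [ "SELECT" ] },
          { "name": "SNWD_BPA", "type": "TABLE", "privileges": [ "SELECT" ] }
        ]
      }
    }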

    2. Getting the Service name of target HDI Container

    Creating a user defined service is not necessary when accessing a container in the same organization and space. Every HDI Container can just reference the services from its organization and space. The referenced service could be either a deployed service or – during development – a generated service from Web IDE.

    In the following example, EPM_DEV_HDI_DB is a service created by “xs deploy…”; the other one was generated by the Web IDE, and both use repo https://github.com/CGilde/syn-prov-hdi

    > xs s | grep -E "EPM_DEV|syn-prov-hdi-"
    GILDE-lf10figdtim2i93w-syn-prov-hdi-hdi-container       hana       hdi-shared
    EPM_DEV_HDI_DB                                          hana       hdi-shared   DB

    3. Create .hdbgrants files

    The .hdbgrants files refer to the HDI services instead of the “user provided service”. And we have to use “container_roles” instead of “roles” within the .hdbgrants file. Container roles refer to HDI Container specific roles instead of global roles. Everything else is identical.

    Example with deployed service:
    {
      "EPM_DEV_HDI_DB": {
        "object_owner": {
          "container_roles": [
            "EPM_XXX::external_access_g#"
          ]...

    Example generated Service from Web IDE:

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    4. Define dependencies

    The dependencies in the mta.yaml are defined identically compared to accessing a classical schema. Just use the service names of the HDI Container you want to access.

    5. Create Synonyms

    This step is also identical compared to accessing a classical schema, when you know the schema names.
    Use the schema name you find in the build-log of the Web IDE:

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    … or from a deployed application. Here you either know the schema name already (defined via mta extension), or you can get the generated schema name via XS CLI command “xs env <app_name>”, or by executing “SELECT current_schema from sys.dummy” from the DB explorer:

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    Example synonym pointing to Web IDE generated HDI Container:

    Synonyms in HANA XS Advanced, Accessing Objects in an External Schema

    At this point it becomes quite obvious that we need a more flexible solution when accessing generated schema names. I will show in the next post how flexibility can be achieved.

    6. Consume Synonyms

    This is identical compared with accessing a classical DB schema.

    7. Deployment

    Deployment with hard coded schema names is identical compared with accessing a classical DB schema – but only when the schema and service names are known in advance.

    8. Accessing HDI Container in different organization/space

    Accessing an HDI container that is running in a different organization/space is almost identical to accessing a classical schema.
    The following steps have to be executed:
    • create a DB user
    • create global wrapper roles for the HDI generated roles (like EPM_XXX::external_access from above)
    • grant the DB user those wrapper roles with grant option
    • create a user provided service using the DB user
    • assign the wrapper roles in the .hdbgrants files
    The use of a global additional wrapper role is a workaround, since schema roles are currently not supported in .hdbgrants files and container roles can only be used when accessing HDI containers in the same organization/space.

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    In the previous post I showed the complete end-to-end scenario of using synonyms in XS Advanced. In this post I will show how more flexibility can be achieved.

    The flexibility options do not really depend on whether accessing a classical schema or a generated HDI Container. Everything that is different was explained in the previous post. I will therefore only use a classical schema OR an HDI container as target, not both. And I will focus on the things that are different with flexible synonym targets.
    Using Configuration Files with Templating

    For the examples in this chapter I will use repo https://github.com/CGilde/syn-prov-classic (see previous post) as target. This is a classical DB schema.

    When the target schema of a synonym is not known during design time or if you want to be able to easily switch to another schema as target, we need a mechanism to replace some symbolic or default schema by a concrete schema during deployment. Configuration files and a templating mechanism provide the required functionality. Configuration files also provide the functionality to change the targets of synonyms at a central place.

    The starting point is the repo https://github.com/CGilde/syn-hdi-classic-1 from the previous post. I copy it to a new project. The complete repo can be found at https://github.com/CGilde/syn-hdi-classic-2

    Since I do not want to hard-code the target objects in the .hdbsynonym files anymore, I remove the targets from those files. What remains can be seen as “declaration only” of a synonym, without defining any specific target:

    SNWD.hdbsynonym before…
    {
      "SNWD_AD": {
        "target": {
          "object": "SNWD_AD",
          "schema": "EPM_DEV"
        }...

    and after removing target information:
    {
      "SNWD_AD": {},
      "SNWD_BPA": {}...
    }

    Now I move the information about the target objects into an .hdbsynonymconfig file. Those configuration files have to be located in a folder cfg, which must be a subfolder of the db module’s root folder. The cfg folder can contain subfolders to reflect the project structure. The location of the .hdbsynonym files remains unchanged.

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    If it exists, the configuration file is the place where the target of a synonym is defined. Optionally, we could leave a default configuration in the .hdbsynonym files, which would then be overwritten by a configuration in the .hdbsynonymconfig files. But I find it much clearer if the configuration is only present in one place, and therefore did not use any default configuration.

    The syntax of the .hdbsynonymconfig file is almost identical to the .hdbsynonym file, but allows you to use “schema.configure”, followed by “<service_name>/schema”, instead of a fixed schema name. During deployment (or build in the Web IDE) the schema will then be replaced by the actual schema of the referenced service. This means that we do not have to know the schema we are referencing in advance!
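    Applied to the first synonym from above, the corresponding entry in the .hdbsynonymconfig file would look roughly like this (a sketch; the service name is the grantor service created earlier):

    {
      "SNWD_AD": {
        "target": {
          "object": "SNWD_AD",
          "schema.configure": "EPM_XXX-table-grantor/schema"
        }
      }
    }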

    During deployment the cfg folder is scanned recursively for all synonym configurations found. Those define the targets of the synonyms (or replace an existing default configuration).

    For compatibility reasons with older versions, there is also the possibility to use an .hdbsynonymtemplate file. An example for synonym SNWD_SO_SL is included in project repo.

    Before building the project, I  copy the .hdiconfig file from the src folder to the cfg folder, in case it is not already there.

    I build the project in the Web IDE and check the synonyms in the DB Explorer. The schema name is EPM_DEV, even though we did not mention it explicitly anywhere in the project:

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    If we want to deploy the project to another system or another space, all we have to do is create a service “EPM_XXX-table-grantor” via “xs cups” in that system and space, pointing to the schema we want to access. Accessing another HDI Container works the same way: an HDI Container with the same service name as we use in our project has to be available in the target organization/space of the deployment.

    Using Service Replacement

    Sometimes it is not desirable or not possible to create a grantor service with the name used in the development descriptor mta.yaml and the other files, like the .hdbsynonymconfig or .hdbgrants files. Instead, an existing service shall be used, either in the Web IDE or during deployment. In order to support such use cases, a service replacement mechanism is offered. This feature is available starting with HANA 2.0 SPS0. To use this mechanism, we have to slightly adapt the files that use the service names, and the mta.yaml file of our project.

    For the examples in this chapter I will use repo https://github.com/CGilde/syn-prov-hdi as the target for the synonyms. This is an HDI Container containing some tables.

    The starting point for service replacement is the repo https://github.com/CGilde/syn-hdi-hdi-0 from the previous post. I copy it to a new project. The complete repo can be found at https://github.com/CGilde/syn-hdi-hdi-1

    First, I move the information about the target objects into an .hdbsynonymconfig file in a cfg folder. This is identical to the previous chapter, so I will not explain it further.

    What is different is that I do not use a physical service name within the .hdbgrants and .hdbsynonymconfig files. Instead, I use a logical service name (“EPM_log-table-grantor”).

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    Within the development descriptor mta.yaml file I define a mapping between the logical service name and a physical service name.

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    The development descriptor first defines a mapping from the logical service with key “EPM_log-table-grantor” to a resource “EPM_XXX-table-grantor”. The resource name “EPM_XXX-table-grantor” is also the default value for the physical service name. This default service name is used when no replacement is provided. It can be overwritten with the parameter “service-name” of that resource. I highlighted the lines that create the mapping between the logical service name, the resource name (default physical service), and the optional replacement via properties and the parameter “service-name”.
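    As a rough sketch (not a verbatim copy of the repo), the relevant parts of the mta.yaml could look like this; the group SERVICE_REPLACEMENTS with the properties “key” and “service” carries the mapping:

    modules:
      - name: db
        type: hdb
        path: db
        requires:
          - name: EPM_XXX-table-grantor
            group: SERVICE_REPLACEMENTS
            properties:
              key: EPM_log-table-grantor
              service: '~{the-service-name}'

    resources:
      - name: EPM_XXX-table-grantor
        type: org.cloudfoundry.existing-service
        properties:
          the-service-name: '${service-name}'

    If the optional parameter “service-name” is omitted, the resource name itself is used as the physical service name; supplying the parameter replaces it.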

    This gives us a lot of flexibility. I will show three different use cases.

    1. Targeting the default service

    An example is a deployed HDI Container containing the test data of a development system that shall be accessed by an application during development.

    For this use case, I can deploy/build the project with the mta.yaml file as shown above. Before deploying/building, I deployed the target HDI Container with the service name “EPM_XXX-table-grantor”:

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement
    After building the project within the Web IDE, the target of the synonyms is this generated schema:


    2. Targeting a Web IDE generated service

    An example is an HDI Container that is developed in parallel with an application accessing objects in this HDI Container.

    For this use case I insert the service name of the target HDI Container generated by the Web IDE as a parameter in the last two lines of the mta.yaml file:

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    changed lines of mta.yaml file, providing a fixed parameter for use in Web IDE:

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement
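    What this amounts to, as a sketch (the generated service name shown here is a made-up example):

    resources:
      - name: EPM_XXX-table-grantor
        type: org.cloudfoundry.existing-service
        properties:
          the-service-name: '${service-name}'
        parameters:
          service-name: XSA_DEV_P0123456789-abcdef-syn-prov-hdi-hdi-container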

    After building the project within the Web IDE, the target of the synonyms is now a schema generated by the Web IDE:

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    3. Targeting a service, where the name is not known before deployment

    An example is promoting to production, where you do not know the names of the production services during development.

    To simulate this, I build the whole project without changing the mta.yaml file, download the mtar archive file, create an mta extension file, and deploy the mtar archive using this mta extension to a different space, TST (the development space is DEV). A template for the mta extension file is included in the repo.

    syn-hdi-hdi-1.mtaext file with target service name supplied via parameter:

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement
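    A rough sketch of such an extension descriptor (assuming the MTA ID is syn-hdi-hdi-1; the extension ID is a made-up example, and the service name matches the container created below):

    _schema-version: '2.0'
    ID: syn-hdi-hdi-1.tst
    extends: syn-hdi-hdi-1

    resources:
      - name: EPM_XXX-table-grantor
        parameters:
          service-name: hdi-container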

    I did not want to change the service and schema names of the HDI Container containing the synonyms, so I commented out those lines in the mta extension file.

    Target service in space TST:

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    Deployment in space TST

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    New HDI Container service with the name “hdi-container” (admittedly not the best name…):

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    To check the deployed service with synonyms in space TST, I first have to assign my user as a developer to that space, using XSA Administration UI or XS CLI commands. Until now I was only working in space DEV.

    Synonyms in HANA XS Advanced, Configuration, Templating, Service Replacement

    Additional Information

    Public synonyms can be defined from HDI using an .hdbpublicsynonym file. They should be used with special care, since a public synonym can only be defined once in the whole database (singleton). Special configuration, which can be looked up in the official documentation, is needed to be able to create public synonyms.

    Synonyms can point to a different database container in an MDC-enabled system. In this case, the synonym definition has to include the target database in addition to the target schema. Configuration file templating can use “database.configure” to derive the database from a service.
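    For illustration, a synonym configuration pointing to another database could look roughly like this (service and object names are made up):

    {
      "OTHER_TABLE": {
        "target": {
          "object": "SOME_TABLE",
          "schema.configure": "other-db-grantor/schema",
          "database.configure": "other-db-grantor/database"
        }
      }
    }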

    Security aspects

    I would like to draw your attention to some security aspects that are important when accessing other schemas using synonyms, without any claim to cover this topic completely.

    Every service in an XSA space can be accessed from any other service running in the same XSA space. Organizations and spaces are the concept that ensures that only those services that are allowed to be accessed can actually be accessed. This concept is not XSA specific, but originates in Cloud Foundry.

    If security considerations play a role during development, you should already use multiple spaces during development (available since SPS 12). For example, you could create a space per project or even a space per developer. Within the XSA Admin tool, you assign the developer role only to developers in those spaces they are allowed to work in. In both the Web IDE and the HRTT (the DB Browser in HANA 2.0), a user can only access those spaces for which the user has the developer role.

    When you create the services for accessing objects in other schemas via “xs cups”, you should make them very specific, covering only those objects you want to give access to. If you use a user with too many privileges in your services, those services could be misused to gain access to more objects than needed.

    I would like to give a simple example:

    For legal reasons there is a requirement within your company that only project “VERY SECURE” is allowed to access employee data in a classical DB schema “HR”. In principle this could be solved in the following way:
    • Create a specific development space “Very_Secure”
    • Give only specific developers the developer role in that space
    • Create a role in schema “HR” that allows reading the employee tables (only the tables that shall be exposed, not the whole schema!)
    • Create a special service user that can grant this role to other users
    • Create a service via xs cups in the development space “Very_Secure” (see the sketch after this list)
    • Use this service to access the employee tables
    • Important: DO NOT CREATE general services via xs cups in any other space that allow granting arbitrary existing roles via the system privilege “Role Admin”. This would allow anyone with access to such a service and knowledge of the roles to gain access to the employee data.
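    As a rough illustration of the role and service parts of this setup (all names and the password are placeholders, and the exact properties of the user-provided service depend on your landscape):

    -- Executed in the database, e.g. by an administrator of schema "HR":
    CREATE ROLE "HR_EMPLOYEE_READ";
    GRANT SELECT ON "HR"."EMPLOYEES" TO "HR_EMPLOYEE_READ";

    -- Technical grantor user that may pass on exactly this role:
    CREATE USER HR_GRANTOR PASSWORD "<password>" NO FORCE_FIRST_PASSWORD_CHANGE;
    GRANT "HR_EMPLOYEE_READ" TO HR_GRANTOR WITH ADMIN OPTION;

    -- Executed with the XS CLI, targeting only space "Very_Secure":
    xs cups HR-employee-grantor -p '{"user":"HR_GRANTOR","password":"<password>","schema":"HR","tags":["hana"]}'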

    Did you know you can add Spatial Reference Systems to HANA?

    By default, HANA has 4 preconfigured Spatial Reference Systems (SRS). The two most commonly used preconfigured ones are 4326 (WGS 84 Spheroidal) and 1000004326 (WGS 84 Planar). However, there are over 3000 other spatial reference systems, and many of our customers utilize some of them. So how do you add additional ones?
    There are two ways to add additional ones to HANA. The first is to add one at a time using the CREATE SPATIAL REFERENCE SYSTEM command in a SQL console as a user with the required privileges (e.g. SYSTEM). The second is to use the HANA Geospatial Metadata Installer. This web-based admin tool installs additional SRSs – a total of 3988 [as of HANA SPS 12] spatial reference systems. They will be available in the target HANA instance after an update is applied using the tool. Here is a screenshot of the admin tool after the update was applied.
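    For the first option, the statement could look roughly like the sketch below; the SRS name, SRID, and the definition strings are placeholders and would have to be taken from an authoritative source (e.g. the EPSG registry):

    CREATE SPATIAL REFERENCE SYSTEM "MY_PLANAR_SRS"
      IDENTIFIED BY 1000099999
      TYPE PLANAR
      LINEAR UNIT OF MEASURE "metre"
      DEFINITION 'PROJCS["...well-known text of the projection..."]'
      TRANSFORM DEFINITION '+proj=tmerc +lat_0=... +lon_0=... +ellps=...';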

    Did you know you can add Spatial Reference Systems to HANA?

    In order to install the DU containing the Geospatial Metadata Installer, please follow the full instructions which are available in the appendix of the SAP HANA Spatial Reference Guide.  The basic steps are:
    1. Download the appropriate file from the SAP Portal
    2. Unzip the file (on the client where you’re running HANA Studio)
    3. Import the unzipped file (HCOSPATIALMI.tgz) using HANA Studio
    4. Create two HANA users: one to use the Geospatial Metadata Installer and the second to establish the required SQLCC connection and add new spatial reference systems in that HANA instance
    5. Assign the user for the Geospatial Metadata Installer. This is accomplished using the XS Admin console. Here’s where there may be a slight twist (see below)
    6. Finally, use the Geospatial Metadata Installer.  There are two functions available as shown in the image below:
    Did you know you can add Spatial Reference Systems to HANA?

    You can view the current state, which should show 4 spatial reference systems present. To add additional ones, use the “Start Update Immediately” function. You can’t select which ones are added; it will add the remaining 3984 spatial reference systems (as of SPS 12) to the target HANA instance. The update completes within 10 or 15 seconds. Once the update is completed, you can check the status using the “View Current State…” function, which is shown in the first screenshot at the top.

    The slight twist I mentioned above is logging into the XS Admin console.  I used the SYSTEM user and the result was an Access Forbidden error.  It turns out the SYSTEM user did not have the requisite Application Privileges.  This discussion thread shows which privileges must be added for the user accessing the XS Admin console. 

    After updating the available spatial reference systems, I wanted to make sure a particular SRS (WKID 26781) existed. To verify, I issued the following query:

    select * from ST_SPATIAL_REFERENCE_SYSTEMS WHERE SRS_ID = 26781;

    The query returned the following result:

    Did you know you can add Spatial Reference Systems to HANA?

    Once you’ve verified that the SRID or SRIDs you need exist, you can use spatial methods in SQL to transform between them. The proper spatial method is ST_Transform(<SRID>). At first glance, it looks like ST_SRID(<SRID>) might work as well, but that method only changes the SRID and does not actually transform the geometries.
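    For example, assuming a hypothetical table GEO_POINTS with an ST_GEOMETRY column SHAPE stored in SRS 4326, a transformation to SRS 26781 could look like this:

    SELECT SHAPE.ST_Transform(26781).ST_AsWKT() AS SHAPE_26781
    FROM GEO_POINTS;

    -- ST_SRID(26781) on the same column would only relabel the SRID
    -- without recalculating the coordinates.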

    It’s common to transform from one SRS to another in GIS packages. Typically, at the client level, you’re working with small datasets (100s or 1000s of features).  When large datasets with spatial data need to be transformed, having that capability in SAP HANA means the transformation can be done at high speed.

    In summary, it’s very straightforward to add additional spatial reference systems to SAP HANA and to transform between them at high speed. Just make sure you carefully follow the instructions in the SAP HANA Spatial Reference Guide and in the discussion thread with respect to adding the appropriate Application Privileges for XS Admin console access.

    SAP HANA Tools-Modeler and Web IDE (New and Changed) – SAP HANA Platform 2.0 SP00

    As of SAP HANA Platform 2.0 SP00, the following new features and changes are available in SAP Web IDE and integrated SAP HANA tools.

    SAP Web IDE for SAP HANA is a browser-based integrated development environment (IDE) for the development of SAP HANA-based applications comprised of web-based or mobile UIs, business logic, and extensive SAP HANA data models. SAP Web IDE works in conjunction with the SAP HANA deployment infrastructure (HDI), the Application Lifecycle Management tools (ALM), the XS Advanced runtime platform, and various SAP HANA tools.

    SAP Web IDE (New and Changed):

    1. Git Features (new):

    The Git tools have been enhanced with new capabilities. Now you can:
    ● Set up Git
    ● Configure Git repositories
    ● Use multiple branches
    ● View the History pane

    2. HTML5 Module Templates (new):

    Two new templates are now available for HTML5 modules:
    ● SAPUI5 application with a basic project structure
    ● SAP Fiori Master-Detail application

    3. Layout Editor (new):

    A visual designer is now available for the development of SAPUI5-based HTML5 modules.

    4. Problems View (new):

    A new pane is available to view and analyze information about problems in the modules and projects in your workspace.

    5. Run Console (changed):

    The enhanced Run console provides a holistic view of all running modules in a project and quick access to their logs.

    6. Runtime Performance Improvements (changed):

    Performance has been improved when building and running HTML5 and Node.js modules.

    7. Selective Build (changed):

    You can selectively build artifacts in an HDB module rather than building the entire module. This supports incremental development and shortens the processing time.

    8. User-defined Schema Names (new):

    You can now define the name of the database schema that is automatically created for an HDB module.


    SAP HANA Tools (new and changed):

    SAP HANA Tools Calculation View Editor (Modeler)

    1. Rank Node (Enhanced):

    You can now generate an additional output column for rank nodes to store rank values.

    2. Assigning Semantics (Enhanced):

    In addition to the existing support for assigning semantics to measures, you can now also assign semantics to attributes in a calculation view.

    3. Column Lineage (Enhanced):

    Column lineage support is now extended to trace the source of columns used in calculated column expressions, and also of base measures used in restricted columns.

    4. Cache Invalidation (Enhanced):

    Transaction-based cache invalidation is performed whenever the underlying data is modified.

    5. Restricted Columns (New):

    You can create restricted columns as additional measures based on attribute restrictions. For example, you can choose to restrict the value of the REVENUE column to REGION = APJ and YEAR = 2016.
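    Conceptually, such a restricted column behaves like a conditional aggregation; in plain SQL (with a hypothetical SALES table used only for illustration) it would correspond to something like:

    SELECT SUM(CASE WHEN REGION = 'APJ' AND YEAR = 2016
                    THEN REVENUE END) AS REVENUE_APJ_2016
    FROM SALES;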

    6. Support to Convert Attribute Values to Required Formats (New):

    You can assign conversion functions to attribute columns. These functions handle the conversion from internal to external formats and from external to internal formats.

    7. Support for Debugging Calculation Views (New):

    You can execute debug queries on calculation views and analyze the runtime performance of views. For example, based on the query that you execute, you can identify pruned and unpruned data sources in calculation views at design time.

    8. Handling Null Values in Columns (New):

    You can define default values for columns (both attributes and measures). The system uses these default values in the reporting tools to replace any null values in columns.

    9. Support for Virtual Tables (New):

    In addition to the already supported data source types, you can now also use virtual tables as a data source for modeling calculation views.

    10. Hierarchies (New):

    You can use graphical modeling tools to create and define hierarchies. The tool supports both level hierarchies and parent-child hierarchies.

    11. Support for Generating Time Data and Creating Calculation Views with Time Dimension (New):

    You can generate time data into the default time-related tables present in the _SYS_BI schema and use these tables in calculation views to add a time dimension.

    12. Time Travel Queries (New):

    Calculation views now support time travel queries, which help query the past state of data. You can use input parameters to specify the time stamp in time travel queries.

    C_HANATEC_11 Certification Guide

    All that you need to know about SAP C_HANATEC_11
    SAP HANA Technology is built on in-memory technology, with which customers can analyze huge volumes of data in a matter of seconds. This post is about the certification that can be obtained on SAP HANA Technology. SAP certification builds on the fundamental knowledge gained through related SAP HANA Technology training and is ideally refined by practical experience within an SAP HANA project team, where the consultant applies the acquired knowledge in projects.
    SAP has made available only one Associate Consultant certification, with the code C_HANATEC_11. This exam primarily verifies that the candidate possesses knowledge in the area of SAP HANA Technology for the profile of an SAP HANA Technology consultant.
    Make sure you gain a complete understanding and firm knowledge of the SAP HANA Technology certification syllabus. That includes topics like System Architecture, Users and Authorization, Security, etc.
    Learn and explore everything about SAP HANA Technology for the profile of an SAP HANA Technology consultant.
    Things that you should be aware of for C_HANATEC_11
    Exam details
    Duration of the exam
    Sample Questions that are available
    Topics of the subjects
    Format of the exam
    Areas of the subject
    Required training for the same
    Get complete details on the "SAP Certified Technology Associate - SAP HANA (Edition 2016)" certification exam syllabus.
    Below I have gathered, to the best of my knowledge, all the available data related to the certification exam:
    1. How many questions are there in the exam?
    A: 80
    2. What is the duration of the exam?
    A: 180 minutes
    3. At what level do you graduate with the exam?
    A: Associate
    4. In which languages can the certification be taken?
    A: English, Spanish, Japanese
    5. What is the cut-off or passing percentage?
    A: 64%
    6. What types of questions are asked?
    A: As shown in the SAP HANA Technology sample questions provided by SAP, the questions have single-choice or multiple-choice answers. Also, as shown in the sample questions, the number of correct answers is indicated for each question.
    7. What should you study for the HANA Technology exam?
    A: First, go through the exam topics of the HANA Technology certification. These exam topics prove to be very helpful in the preparation, since they act as the blueprint for the exam: the questions asked in the exam are sourced from these topics. Every candidate needs to know 'how to do' the questions rather than just answer them theoretically.
    8. What easy tips can you keep in mind while taking this exam?
    SAP provides a note "There are 'N' correct answers to this question." in the actual SAP HANA Technology certification exam.
    SAP does not ask "True or False" type questions in the actual SAP C_HANATEC_11 exam.
    SAP provides an option to increase (+) or decrease (-) the font size of the exam screen for better readability in the actual SAP HANA Technology certification exam.
    9. If a question has 3 correct answers, and I answer 2 correctly, will I get partial marks?
    A: No. It is MANDATORY to answer all the answers correctly to get the marks. The scoring is binary - either the question is correct or wrong; no partial marks are available. On this note, pay attention to the indicated number of correct answers and make sure you choose exactly that number. Choosing fewer or more answers than indicated will directly get you 0 points, irrespective of the accuracy of your choices.
    10. What are the various book references available for training?
    A: The following book references are available and recommended:
    1. HA200: A 5-day classroom course on installing and updating an SAP HANA database in version SPS 11. The main idea of the course is to cover the most critical tasks in the daily work of an SAP HANA administrator. The main areas of expertise of this course are as follows:
    It helps you perform the daily tasks of an SAP HANA administrator.
    Start and stop, back up, and troubleshoot an SAP HANA SPS 11 system and change its configuration.
    Back up and recover an SAP HANA SPS 11 database.
    Gives a very good overview of all the different components in SAP HANA SPS 11.
    2. HA240: This training course focuses on various aspects. The main goals can be summarized as follows:
    Security, authorization, and integrated scenarios.
    It is also a good way to revise SAP HANA SPS 11.
    3. HA250: This course helps you understand, step by step, how to start a combined update and migration of an SAP system to an SAP HANA database, with technical details regarding the process. The Database Migration Option (DMO) of the Software Update Manager (SUM) integrates and simplifies the migration steps. The goals of the course are as follows:
    Enable students to independently use the one-step migration procedure (DMO) to SAP HANA.
    It also covers the most critical tasks for an SAP HANA administrator for the combined update and migration of an SAP system to the SAP HANA database.
    Furthermore, it provides details about the preparation of the technical procedure and the process itself.
    11. Is going through the trainings/material enough, or do I need hands-on experience with data modeling/provisioning?
    A: As they say, if you read, you might forget; if you practice, you remember for a long time. Having said that, hands-on experience will surely be helpful and is recommended to pass the exam, but don't waste too much of your time just trying to get access to a system. If your overall concepts are clear, you will do a fairly decent job in the exam.
    12. Which topics are tested in the exam, and how is the exam distributed among the topics?
    A: The table below gives a rough idea of the topics on which you primarily need to focus (source). The percentages are taken from the certification page.
    S.No   Topic                                        % of Exam Topics
    1      System Architecture                          8% - 12%
    2      Users and Authorization                      8% - 12%
    3      Security                                     8% - 12%
    4      High Availability & Disaster Tolerance       8% - 12%
    5      Backup & Recovery                            8% - 12%
    6      Troubleshooting of SAP HANA                  8% - 12%
    7      Monitoring of SAP HANA                       8% - 12%
    8      Operations of SAP HANA                       8% - 12%
    9      SAP HANA Installation & Upgrade              8% - 12%
    10     Database Migration to SAP HANA               8% - 12%

    Before you take the C_HANATEC_11 exam
    Each certification comes with its own set of preparation tactics. We define them as "Topic Areas", and they can be found in each exam description. There you can find the number of questions, the duration of the exam, the expertise you will be tested on, and the recommended courses and content you can refer to.
    Please be aware that professional-level certification also requires several years of on-the-job experience and addresses real-life scenarios.
    For further queries, refer to Take your SAP Certification Exams and our FAQs.
    Things to Remember
    To ensure that you succeed, SAP highly recommends combining your personal experience with the educational courses to prepare for your certification exam, as the questions will not just test your knowledge of the course content, but also your ability to apply it.
    Subscribers of the SAP Learning Hub (LH) also have access to the study material and can participate in the appropriate SAP Learning Room (LR) to gain the required understanding and knowledge.
    You are strictly not allowed to use any reference materials during the certification test; you will have no access to an SAP system or to online documentation.

    Source @ Academia



