Channel: SCN : All Content - SAP HANA Developer Center

Do we have a ROWID or equivalent in HANA ?


Hi,

 

We are in the process of putting our HR data into HANA. The existing Oracle system uses a lot of analytic functions like LEAD, LAG and FIRST_VALUE to manipulate data. I am manually coding these functions, which is making my life really tough. My life would become a little easier if I could find an equivalent of the Oracle ROWID concept. In Oracle, ROWID is a hexadecimal value which identifies a row in the table; for example, if a table has 2 duplicate rows, we can distinguish them by their ROWIDs. Do we have something similar in HANA?

 

When is HANA planning to provide analytic functions like LAG and LEAD to calculate the previous n and next n values of a row, partitioned over the data? These are essential, and we cannot think of proceeding without them.
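
For what it's worth, recent HANA revisions do ship SQL window functions that cover both points, and ROW_NUMBER gives each copy of a duplicate row a distinct handle much as ROWID does in the de-duplication pattern described. A sketch, with the table and column names invented for illustration:

SELECT emp_id,
       change_date,
       salary,
       ROW_NUMBER() OVER (PARTITION BY emp_id ORDER BY change_date) AS rn,
       LAG(salary)  OVER (PARTITION BY emp_id ORDER BY change_date) AS prev_salary,
       LEAD(salary) OVER (PARTITION BY emp_id ORDER BY change_date) AS next_salary
  FROM hr_salary_history;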

 

 

Thanks a ton in advance.

 

Regards,

Samarpan


Not able to schedule HANA XS based batch job for HANA EIM Flow?


Hello Community Members,

 

I have created a simple hdbflowgraph (task) to replicate data from a HANA table using the new Smart Data Integration capabilities of HANA EIM. The data flow works well and I am able to replicate data into a template table by manually executing the job. Now I want to schedule this job to run every so often (say, every 59 minutes). To accomplish this, I have created a procedure that starts the task.

 

PROCEDURE "XXXXXX"."TEST_SDI.SDQ_SDI::TEST_BATCHSCHEDULE" ( )

  LANGUAGE SQLSCRIPT

  SQL SECURITY INVOKER

  DEFAULT SCHEMA XXXXXX

  AS

BEGIN

/*****************************

  Write your procedure logic

*****************************/

START TASK "XXXXXX"."TEST_SDI.SDQ_SDI::FILTER_SORT";

END;

 

I am able to activate this Procedure without any errors.

 

Now the next step is to create an XS job scheduling artifact (.xsjob) so that this procedure runs every 59 minutes. I am using the following code and getting the error "Invalid file content" on activation.

 

{
    "description": "Read VBRK table and update HANA",
    "action": "XXXXXX"."TEST_SDI.SDQ_SDI::TEST_BATCHSCHEDULE",
    "schedules": [
       {
          "description": "Read VBRK table and update HANA",
          "xscron": "* * * * * * 59"
       }
    ]
}

 

I am looking to understand what I can do to fix this error. Any help is highly appreciated.
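
Two things stand out to me. First, the action value is not a single JSON string ("XXXXXX"."TEST_SDI..." is two quoted strings joined by a dot), which by itself would explain "Invalid file content". Second, the xscron fields are year month day weekday hour minute second, so "* * * * * * 59" fires at second 59 of every minute rather than every 59 minutes. A sketch of a corrected .xsjob, assuming (from my reading of the xsjob documentation) that an SQLScript procedure is addressed by package path and procedure name without the schema, and settling for an hourly schedule since xscron has no clean "every 59 minutes":

{
    "description": "Read VBRK table and update HANA",
    "action": "TEST_SDI.SDQ_SDI::TEST_BATCHSCHEDULE",
    "schedules": [
       {
          "description": "Run the SDI task once per hour",
          "xscron": "* * * * * 0 0"
       }
    ]
}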

Insurance Claims triangle - A jab at SQLScripting


The Insurance Claims Triangle / Loss Triangle / Run-off Triangle

 

Before we delve into the prediction, the scripting and all the interesting stuff, let's understand what the claims loss triangle really is. An insurance claims triangle is a way of reporting claims as they develop over a period of time. It is quite typical that claims get registered in a particular year and the payments are paid out over several years, so it becomes important to know how claims are distributed and paid out. An insurance claims triangle does just that. Those who are familiar with the Solvency II norms set by EIOPA would know the claims triangle report: it is mandated and is part of the Quantitative Reporting Templates (QRTs).

 

 


 

Fig : 1 - The claims triangle

 

In figure 1, the rows signify the year of claim registration and the columns the development year. Consider that we are in the year 2013 and are looking at the claims triangle. The first row focuses on the claims registered in the year 2005. The first column of the first row (header 0) gives you the claim amount paid out by the insurance company in that same year. The second column gives you the claim amount paid out in the next year (2006). This goes on until the previous year of reporting, i.e. 2012. The second row does the same thing, but for the claims registered in the year 2006. Logically, with each row the number of columns is one less, which gives the report its triangular shape and hence its catchy name. The claims triangle can be of two types - incremental or cumulative. Incremental is when each column holds the amount paid at that specific intersection of registration year and payment year. The cumulative one, on the other hand, contains the cumulative claims paid out as of that intersection point.

 

The prediction below is based on the cumulative model of the claims triangle. We will base our logic on a set of records stored at the cumulative level. I have uploaded the input data as a CSV to the blog to save you time.

 

The Prediction

 

The interesting part is to fill the second triangle of the rectangle (if you will). Typically R is used for this work, and that would of course be the easier and more reliable way to do it. If you want to follow the R way, I suggest viewing these videos from the SAP Academy channel: https://youtu.be/wogBQ8Rixwc . It was out of sheer curiosity that I planned an SQLScript-based implementation of the loss triangle. Let's try to understand the algorithm first.

 

As an insurance company, it is useful to know what you will have to pay out as claims in the years to come. It helps the company maintain financial reserves for future liabilities and reduce solvency risk. There are quite a few statistical models for predicting the future numbers, but the most accepted one is the chain ladder method presented by T. Mack.

 

Well, let's see the math behind the prediction. I candidly admit that my math is not too refined, so I would rather explain it in words. The algorithm has two parts - building the CLM estimators and the prediction itself.

 

Phase 1: Derivation of the CLM (chain ladder method) estimator

 

The first phase is to determine the multiplication factor for each column, which is later used for the prediction.


Fig : 2 - CLM Estimator derivation

 

 

The above figure shows the CLM estimator of each column. Basically the math is a rather simple division of subsequent columns over an equal number of cells. The CLM estimator for column 3 is derived by dividing the sum of the cumulative values of column 3 by the sum of column 2, excluding the last cell of column 2. The same exercise is repeated over all adjacent pairs of columns to build the estimators.
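
In the standard chain-ladder notation, with C_{i,k} the cumulative claims of registration year i after k development years, the division just described is (my restatement, not taken from the blog's screenshots):

\hat{f}_k = \frac{\sum_{i=1}^{n-k} C_{i,k+1}}{\sum_{i=1}^{n-k} C_{i,k}}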

 

Phase 2 : Predicting the values

 

The prediction is a recursive exercise done one diagonal at a time. Each diagonal signifies the claim payments for one particular future year. Looking again at figure 1, the first empty diagonal holds the predicted values to be paid out in the year 2013 for the claims registered across the different years. The next diagonal is for 2014, and so on.

 



Fig : 3 - Prediction

 

Each predicted value is calculated as the product of the CLM estimator of the target column and the amount in the predecessor column of the same row. Once an entire diagonal is calculated, the next diagonal is calculated the same way, but based on the previously predicted diagonal. The process is repeated until the entire rectangle is complete.
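
In the same notation, each missing cell is filled recursively from its left neighbour, with the known cells as the starting values:

\hat{C}_{i,k+1} = \hat{C}_{i,k} \cdot \hat{f}_k , \quad \hat{C}_{i,k} = C_{i,k} \text{ for the known cells}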

 

 

 

The SQL Scripting

 

Now to get to the meat of this blog. I made a major assumption in the example shown here: I assume the cumulative values for the run-off triangle are available in a table. The reason is that the data for claims and payments could sit in a single table or in multiple tables, depending on how the insurance data model is implemented. An SQL view would have to be written to build the cumulative values, and the whole SQL script shown here can then be pointed at it. For simplicity, I just use a single table.

 

The whole implementation is on a script based calculation view.

 

Semantics

 


 

Fig : 4 - Calculation view semantics

 

As you can see above, the calculation view exposes 5 fields:

  • Claim_year - Year of claim registration
  • Pymt_year - Year of payment (cumulative)
  • Dev_year - Claim development year
  • Predict - A flag to distinguish predicted and historical values
  • Amount - Cumulative amount

Script -> Variable declarations

 


Fig : 5 - Variable declaration

 

Above is just the bunch of variables used in the calculations below. I use an array of type REAL to store the CLM estimators.
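
Since the script itself is only visible as a screenshot, here is a minimal sketch of what the declarations could look like; every name below is my own guess, not taken from the blog:

DECLARE v_min_year INTEGER;
DECLARE v_max_year INTEGER;
DECLARE v_num_years INTEGER;
DECLARE v_est REAL;
DECLARE i INTEGER := 1;       -- development-step counter
DECLARE clm_est REAL ARRAY;   -- one CLM estimator per development step
-- (the counters j and h of the prediction phase are declared implicitly by their FOR loops)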

 

 

Script -> Variable definitions

 


Fig : 6 - Variable definition

 

What you see above builds three values - the minimum year, the maximum year for the calculation, and their difference. The next statement builds the table variable t_claim_table from the pre-calculated cumulative claim amounts stored in the CLAIMS_PAID table. This part of the code can be modified to match the underlying data model and calculation requirements. For example, if you want to execute the claims triangle as of the current date, the max value could be selected as select year(current_date) from dummy, and the min could come from an input parameter or from the table itself, as done here. For the simplicity of my simulation, I have hard-coded the max and obtained the min from the table. The select on CLAIMS_PAID could likewise be changed to fit the data model used. Let's assume we got over this hurdle of building the input data.
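
A sketch of this step under the same assumed names (CLAIMS_PAID comes from the post, its column names follow the view's semantics):

SELECT MIN(claim_year) INTO v_min_year FROM claims_paid;
v_max_year := 2012;  -- hard-coded, as in the blog
v_num_years := :v_max_year - :v_min_year;
-- cumulative amounts plus a derived development year
t_claim_table = SELECT claim_year,
                       pymt_year,
                       pymt_year - claim_year AS dev_year,
                       amount
                  FROM claims_paid;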

 

Script -> Building the CLM estimator

 


Fig : 7 - CLM Estimator

 

 

To understand the math behind the CLM estimator, I recommend reading the topic "The Prediction" above. I use a while loop to iterate over subsequent columns, build the sums and, in the outer query, divide them to arrive at the CLM estimator. The value is then saved into an array. The iteration runs from 0 up to the number of years the run-off triangle covers; for our example, looking at figure 1, this is 2012 - 2005 = 7. So the while loop runs 7 times to calculate the 7 CLM estimator values seen in figure 2. The variable 'i' helps in selecting the correct column. At the end of the while loop, all 7 CLM estimator values are in the array.
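
Reconstructed as a sketch (my own reading of the screenshot, not the blog's code): the self-join pairs each cell with its right-hand neighbour, so only rows that have a successor contribute to either sum:

WHILE :i <= :v_num_years DO
    SELECT SUM(nxt.amount) / SUM(cur.amount) INTO v_est
      FROM :t_claim_table AS cur
      INNER JOIN :t_claim_table AS nxt
              ON nxt.claim_year = cur.claim_year
             AND nxt.dev_year   = cur.dev_year + 1
     WHERE cur.dev_year = :i - 1;
    clm_est[:i] := :v_est;  -- estimator for the step dev_year i-1 -> i
    i := :i + 1;
END WHILE;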

 

 

Script -> Predicting the values

 


Fig : 8 - The prediction

 

To understand the math behind the prediction done here, I again recommend reading the topic "The Prediction" above. Two nested for loops do the work. The inner loop calculates each cell within one diagonal at a time; the outer loop runs as many times as there are diagonals, until the rectangle is filled. The three variables 'i', 'j' and 'h' control the calculation of each value. The CLM estimator is read from the array filled in the previous step. I use a UNION to append records to the existing historical claims; this way, once a diagonal has been predicted, I can use those values to build the next diagonal. At the end of the loops, the table variable t_claim_table holds the historic as well as the predicted values, filling up the rectangle.
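
A sketch of the two loops under the same assumptions; the index arithmetic mirrors the description, with the outer counter j standing for one future payment year and the inner counter h for one cell of that diagonal:

FOR j IN 1 .. :v_num_years DO
    FOR h IN :j .. :v_num_years DO
        i := :v_num_years - :h + :j;      -- dev_year of the cell being predicted
        v_est := :clm_est[:i];
        t_claim_table =
            SELECT claim_year, pymt_year, dev_year, amount
              FROM :t_claim_table
            UNION ALL
            SELECT claim_year, pymt_year + 1, dev_year + 1, amount * :v_est
              FROM :t_claim_table
             WHERE claim_year = :v_min_year + :h
               AND dev_year   = :i - 1;
    END FOR;
END FOR;

With the 2005-2012 example this appends 7 cells for payment year 2013, then 6 for 2014, and so on until the rectangle is full.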

 

 

Script -> Finally the output

 


Fig : 9 - Output

 

The var_out variable is finally filled and returned as the output. The case statement checks whether a record is a predicted or a historic value; the flag is later used for a filter in the report.
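
A sketch of that final projection; the flag simply marks everything beyond the hard-coded maximum year as predicted:

var_out = SELECT claim_year,
                 pymt_year,
                 dev_year,
                 CASE WHEN pymt_year > :v_max_year THEN 'Y' ELSE 'N' END AS predict,
                 amount
            FROM :t_claim_table;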

 

Visualization - SAP Lumira Reports

 

Putting all the moving pieces together, Lumira is a perfect tool to show the output. I used a cross-tab report to demonstrate the triangular layout: the development year runs along the columns and the claim registration year along the rows. Additionally, a filter lets you make the report even more interactive.


 

Fig : 10 - SAP Lumira report showing loss triangle with only historical values

 

 


 

Fig : 11 - SAP Lumira report showing loss triangle with the predicted values

 

I am quite keen to hear your feedback and suggestions on whether there is a better way to script this (without, of course, taking the shortcut of calling R).

Insights into HANA's result cache


I'm wondering if there is any good documentation on HANA's result cache. All I've found so far is this quite helpful SCN thread, but I'm hoping to get a bit more insight into the cache behavior.

 

My situation is that we have views that query the views of another application (let's call it XYZ). XYZ's views return the list of items the user has permission to see. The views are complex, slow, and not something we are willing to modify. I was hoping that the result cache would help improve performance for these views; however, the cached results are being re-used across different users' sessions, which breaks the access control feature.

 

My idea is to use the "resultcache_white_list" configuration setting to enable caching for a single calculation view. This view has an input parameter which is unique to each user's session (i.e. the SESSION_USER variable) and should make the query unique as far as the cache is concerned. In practice this seems to work, but I have trouble getting certain views to cache.
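
For reference, a sketch of setting the whitelist; the key names come from the post, but I am assuming they live in the [cache] section of indexserver.ini, so verify against your revision's documentation:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
    SET ('cache', 'resultcache_enabled')    = 'yes',
        ('cache', 'resultcache_white_list') = '_SYS_BIC:my.package/MY_CACHED_VIEW'
    WITH RECONFIGURE;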

 

So, some questions:

 

  • does this strategy for selective caching sound like a viable solution? any other recommendations?
  • is there documentation of how cache invalidation is implemented?
  • is there a trace for the resultcache that will give more insight than the M_RESULT_ENTRIES and M_CACHE table?
  • is there some attribute of a model that will prevent the model from being cached, apart from the resultcache_* configuration settings?

 

Thanks,

 

Charles

Disable auto commit when saving a procedure


Currently, when I save a procedure in the Project Explorer tab, it automatically updates the server repository. How do I disable this? I want to be able to commit the changes to the repository at a later point.

 

I never see the icon below:

 

The SQLScript Editor Not Committed and Activated Procedure icon shows that your procedure is not committed and not activated.

SQL Analytic Privilege


Hi,

 

In the Apply Privileges property of any information view (in the View Properties tab), there are two options:

  • Analytic Privilege
  • SQL Analytic Privilege

 

When I select SQL Analytic Privilege and activate the view, it activates fine. But when I do a Data Preview, it throws an error.

Could someone explain what SQL Analytic Privilege means?
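
As far as I understand it, selecting SQL Analytic Privilege makes the view expect SQL-based analytic privileges (created with CREATE STRUCTURED PRIVILEGE, available as of SPS09) instead of the classical XML-based ones, and Data Preview typically fails with a not-authorized error until such a privilege is created and granted. A sketch, with the view name and filter invented:

CREATE STRUCTURED PRIVILEGE "AP_MY_VIEW_US"
    FOR SELECT ON "_SYS_BIC"."my.package/MY_VIEW"
    WHERE "COUNTRY" = 'US';
GRANT STRUCTURED PRIVILEGE "AP_MY_VIEW_US" TO MONISSHA;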

 

Regards

Monissha

Developer Mode not exporting Decision Table objects


Hello ,

 

We wanted to export all the HANA-developed views and decision tables from one native SAP HANA server to another.

 

We tried to export the objects using the Developer Mode option and were able to import all the views into the other HANA server, but in Developer Mode the decision table objects do not get attached. We then tried the Delivery Unit option, but the Delivery Unit export package cannot be imported on the other server because of a version mismatch.

 

The objects we want to export are on SAP HANA SPS09, while the server where we want to import the package is on SAP HANA SPS08.

 

Because of some other dependency we are not able to upgrade SAP HANA server from SPS08 to SPS09.

 

Can someone please help us with this issue? Only the decision table objects are not getting imported into the SAP HANA SPS08 server.

 

Thanks,

Shweta.

SAP HANA Smart Data Access - realtime replication


Hello,

 

I tried to set up realtime data replication as shown in the video: SAP HANA Academy - Smart Data Integration/Quality : SAP ECC Replication [SPS09] - YouTube

I have connected a Microsoft SQL Server as a remote source via Smart Data Access and added a virtual table for replication.

Now I would like to create a Flowgraph Model for realtime data replication.

I have selected Flowgraph for Activation as Task Plan and selected the virtual table as data source. The target is a Data Sink (template table).

I have selected realtime behaviour in the containerNode as well as in the data source. The activation of the flowgraph model was successful.

If I try to call the created procedure to start the task plan, I get the error:

Could not execute 'call "MSSQL"."MSSQL::realtime_SP"' in 262 ms 426 µs .

[129]: transaction rolled back by an internal error:  [129] "MSSQL"."MSSQL::realtime_SP": line 5 col 1 (at pos 98): [129] (range 3): transaction rolled back by an internal error: sql processing error: QUEUE: MSSQL::realtime_RS: Failed to add subscription for remote subscription MSSQL::realtime_RS.Error: exception 151050: CDC add subscription failed: Unable to obtain agent name where remote source id = 153481

 

Is it possible to solve this issue?

Or is a running SAP HANA Data Provisioning Agent necessary for realtime replication?

 

Best regards,

Marc


HANA System tabs in HANA Studio taking more time to open


Dear Experts,

 

Currently, we're facing an issue with one HANA system in our HANA Studio. System tabs like "Overview", "Landscape" and "Alerts" are not opening immediately as they do for other HANA systems. For "Overview" it shows "Refreshing overview..." (see the attachment), and for "Landscape", "Alerts" etc. it shows "Pending...". It is taking more than 20 minutes to open these tabs.

 

Any ideas, please?

 

 

Thanks & best regards,

Sreenu

Allowing CORS on OData with authentication in HANA SPS09


Hi,

 

I am trying to allow CORS on an OData service with authentication, created on HANA SPS09.

 

 

I tried the following options -

 

XSADMIN

 

[Screenshot: XS Admin settings]

 

 

Contents of .XSACCESS


{
     "exposed" : true,
     "authentication" :
            {
               "method": "Basic"
            },
     "cache_control" : "must-revalidate",
     "cors" :
            {
             "enabled" : true,
             "allowMethods": ["GET"],
             "allowOrigin": ["*"]
            },
     "enable_etags" : false,
     "force_ssl" : false,
     "prevent_xsrf" : true
}

 

Please let me know if any of these settings are redundant.


I referred to the thread

CORS Issue while consuming Hana's OData

 


I am not able to access my OData service from other domains. Can you please suggest where I am making a mistake?
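
Two hedged observations. Browsers refuse to expose credentialed (Basic auth) responses when Access-Control-Allow-Origin is the wildcard "*", so the exact calling origin may need to be listed; and "prevent_xsrf": true forces clients to fetch an XSRF token from the same origin first, which a foreign domain cannot easily do, making it a common culprit for blocked cross-origin calls. A fragment to test with (allowHeaders is part of the xsaccess cors options):

     "cors" :
            {
             "enabled" : true,
             "allowMethods": ["GET", "OPTIONS"],
             "allowOrigin": ["https://my-calling-app.example.com"],
             "allowHeaders": ["Authorization", "Content-Type"]
            },
     "prevent_xsrf" : false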



Privileges required for Content Folder


Hi

 

What privileges need to be granted to a user to access a package in the Content folder and to create procedures in that package?

Currently the user is not able to see any packages in the Content Folder.
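
For reference, package visibility and editing are controlled by repository privileges granted per package; a minimal sketch, with the package and user names as placeholders:

GRANT REPO.READ ON "my.package" TO KRIS_USER;
-- needed to create, edit and activate design-time objects such as procedures:
GRANT REPO.EDIT_NATIVE_OBJECTS ON "my.package" TO KRIS_USER;
GRANT REPO.ACTIVATE_NATIVE_OBJECTS ON "my.package" TO KRIS_USER;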

 

Thanks

Kris

Library import errors after upgrading HANA from revision 85 to 96


Hi all,

 

Can anyone please help me figure out what is causing this error?

 

===========================

Error: import: failed to load the library  (line 1 position 1 in /sap/sop/sopfnd/services/admin/patchinfo.xsjs)

[12412]{12412}[-1/-1] 2015-06-03 11:17:14.359021 e xsa:sap.sop      SandBox.cpp(01454) : Found the following errors:

===========================

TypeError: $.sap is undefined (line 4 position 1 in /sap/sop/sopfnd/services/admin/patchinfo.xsjs)

[21949]{21949}[-1/-1] 2015-06-03 11:17:24.754510 e xsa:sap.sop      SandBox.cpp(01454) : Found the following errors:

===========================

TypeError: $.sap is undefined (line 4 position 1 in /sap/sop/sopfnd/services/admin/patchinfo.xsjs)

[12022]{12022}[-1/-1] 2015-06-03 11:17:50.579720 e xsa:sap.sop      SandBox.cpp(01454) : Found the following errors:

===========================

TypeError: $.sap is undefined (line 4 position 1 in /sap/sop/sopfnd/services/admin/patchinfo.xsjs)

[15945]{15945}[-1/-1] 2015-06-03 11:20:35.794198 e xsa:sap.sop      sap.sop.services:userpreference.xsjs(00106) :  userpreference executing query  =select PREF_VALUE as PLAN_AREA from SAP_SFND. "sap.sop.sopfnd.catalogue::SOPUM_USER_PREF" where USER_ID = ? and PREF_NAME = ?

[15945]{15945}[-1/-1] 2015-06-03 11:20:35.795765 e xsa:sap.sop      sap.sop.services:userpreference.xsjs(00089) :  userpreference executing query =select PARAMVAL as PLAN_AREA from SAP_SFND. "sap.sop.sopfnd.catalogue::SOPDM_GLOBALCONFIG" where PARAM = ?

[15941]{15941}[-1/-1] 2015-06-03 11:21:06.356773 e xsa:sap.sop      sap.sop.services:userpreference.xsjs(00106) :  userpreference executing query  =select PREF_VALUE as PLAN_AREA from SAP_SFND. "sap.sop.sopfnd.catalogue::SOPUM_USER_PREF" where USER_ID = ? and PREF_NAME = ?

[15941]{15941}[-1/-1] 2015-06-03 11:21:06.358240 e xsa:sap.sop      sap.sop.services:userpreference.xsjs(00089) :  userpreference executing query =select PARAMVAL as PLAN_AREA from SAP_SFND. "sap.sop.sopfnd.catalogue::SOPDM_GLOBALCONFIG" where PARAM = ?

[32964]{32964}[-1/-1] 2015-06-03 11:21:50.073692 e xsa:sap.sop      sap.sop.services:userpreference.xsjs(00106) :  userpreference executing query  =select PREF_VALUE as PLAN_AREA from SAP_SFND. "sap.sop.sopfnd.catalogue::SOPUM_USER_PREF" where USER_ID = ? and PREF_NAME = ?

[32964]{32964}[-1/-1] 2015-06-03 11:21:50.075485 e xsa:sap.sop      sap.sop.services:userpreference.xsjs(00089) :  userpreference executing query =select PARAMVAL as PLAN_AREA from SAP_SFND. "sap.sop.sopfnd.catalogue::SOPDM_GLOBALCONFIG" where PARAM = ?

[12410]{12410}[-1/-1] 2015-06-03 11:23:24.880521 e xsa:sap.sop      sap.sop.sopfnd.services.usermanagement:needsPasswordReset.xsjs(00006) : : Inside needsPasswordReset.xsjs--POST body is {"User":"SOPITADMIN"}

[12410]{12410}[-1/-1] 2015-06-03 11:23:24.880607 e xsa:sap.sop      sap.sop.sopfnd.services.usermanagement:needsPasswordReset.xsjs(00015) : : Inside needsPasswordReset.xsjs: 

[12410]{12410}[-1/-1] 2015-06-03 11:23:24.881340 e xsa:sap.sop      sap.sop.sopfnd.services.usermanagement:needsPasswordReset.xsjs(00019) : user password resetSOPITADMIN

[12410]{12410}[-1/-1] 2015-06-03 11:23:24.891689 e xsa:sap.sop      sap.sop.sopfnd.services.usermanagement:needsPasswordReset.xsjs(00027) :  passwordResetFlag :  false

[12621]{12621}[-1/-1] 2015-06-03 11:23:33.301328 e xsa:sap.sop      sap.sop.services:userpreference.xsjs(00106) :  userpreference executing query  =select PREF_VALUE as PLAN_AREA from SAP_SFND. "sap.sop.sopfnd.catalogue::SOPUM_USER_PREF" where USER_ID = ? and PREF_NAME = ?

[12621]{12621}[-1/-1] 2015-06-03 11:23:33.302889 e xsa:sap.sop      sap.sop.services:userpreference.xsjs(00089) :  userpreference executing query =select PARAMVAL as PLAN_AREA from SAP_SFND. "sap.sop.sopfnd.catalogue::SOPDM_GLOBALCONFIG" where PARAM = ?

[15941]{15941}[-1/-1] 2015-06-03 11:24:50.767708 e xsa:sap.sop      SandBox.cpp(01454) : Found the following errors:

===========================

Error: import: failed to load the library  (line 1 position 1 in /sap/sop/sopfnd/services/admin/patchinfo.xsjs)

[12409]{12409}[-1/-1] 2015-06-03 11:25:18.839425 e REPOSITORY       packageStoreAccessor.cpp(00215) : Repository: Package not found; "": the package does not exist

[32956]{32956}[-1/-1] 2015-06-03 11:25:18.878836 e REPOSITORY       packageStoreAccessor.cpp(00215) : Repository: Package not found; "": the package does not exist

[12408]{12408}[-1/-1] 2015-06-03 11:26:20.657782 e xsa:sap.sop      sap.sop.sopfnd.services.analytics:sopa.xsjs(02132) : POST body is {"reportviewid":"550928E1B1ACB4C3E10000000AAD00CA","ACTION":"getReportViewFilters"}

[32969]{32969}[-1/-1] 2015-06-03 11:26:20.730205 e xsa:sap.sop      sap.sop.sopfnd.services.analytics:sopa.xsjs(02132) : POST body is {"reportviewid":"55317BD54D2275C5E10000000AAD00CA","ACTION":"getReportViewFilters"}

 

 

 

 

 

My piece of code that I am trying to execute is

 

$.import("sap.hana.xs.ide.editor.server.repo", "utilsLib");

$.import("sap.hana.xs.ide.editor.server.repo", "objectLib");

$.import("sap.hana.xs.ide.editor.server.repo", "packageLib");

var utilsLib = $.sap.hana.xs.ide.editor.server.repo.utilsLib;

var objectLib = $.sap.hana.xs.ide.editor.server.repo.objectLib;

var packageLib = $.sap.hana.xs.ide.editor.server.repo.packageLib;

 

 

 

 

//Process Inbound HTTP Request

switch ($.request.method) {

    //HTTP Get

    case $.net.http.GET:

        processGETRequest();

        break;

    case $.net.http.POST:

        processPOSTRequest();

        break;

    default:

        $.response.contentType = "text/plain";

        $.response.setBody(JSON.stringify([{

            status: $.net.http.BAD_REQUEST,

            text: "Request Method not Supported!"

        }]));

        $.response.status = $.net.http.OK;

        break;

}

 

and then I have all the GET/POST method declarations at the bottom.
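
One thing that may be worth trying (a sketch, not a confirmed fix for the revision jump): $.import also returns the library object, so the code can capture it directly instead of navigating the $.sap.* tree, which at least avoids the secondary "$.sap is undefined" error when the import fails to populate the root object:

var utilsLib   = $.import("sap.hana.xs.ide.editor.server.repo", "utilsLib");
var objectLib  = $.import("sap.hana.xs.ide.editor.server.repo", "objectLib");
var packageLib = $.import("sap.hana.xs.ide.editor.server.repo", "packageLib");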

 

Any help would be much appreciated.

 

Regards

Uday


Missing privileges on Hana Cloud Trial


Hello Experts,

 

I am trying to follow this document:

 

http://help.sap.com/openSAP/HANA1/openSAP_HANA1_Week_01_How_to_Import_Delivery_Units.pdf

 

At the step "Start the import of the delivery units" I am getting the error below:

"Insufficient privileges to perform Import Server operation".

 

So I looked this issue up on the internet and found that I am missing a few privileges, as mentioned in the thread below.

 

http://scn.sap.com/thread/3437559

 

What are these missing privileges?

Most importantly, how can I get these privileges?

View on tables generated by JPA


Hi Gurus!

 

I finished Rui's tutorial on the Raspberry Pi (RaspberryPi on SAP HANA Cloud Platform) and I already have lots of measurements. My question is whether it is possible to build a calculation view on those tables (I cannot even find the tables that were created...).

This is the persistence.xml:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
  <persistence-unit name="iotscenario">
  <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <class>org.persistence.Measurement</class>
        <class>org.persistence.Sensor</class>
        <properties>
            <property name="eclipselink.ddl-generation" value="create-tables"/>
        </properties>
  </persistence-unit>
</persistence>

 

Where can I find the generated tables, so I can create a CV on them...

(My aim is to sum the measurements by day from the timestamp in a CV and call that CV from my application)
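
A sketch for locating the tables from the SQL console; EclipseLink defaults the table name to the uppercased entity name, so MEASUREMENT and SENSOR are assumptions, as are the column names in the second query:

SELECT schema_name, table_name
  FROM sys.tables
 WHERE table_name IN ('MEASUREMENT', 'SENSOR');

-- once found, a daily aggregate of the kind described could look like:
SELECT TO_DATE("TIMESTAMP") AS measure_day, SUM("VALUE") AS daily_total
  FROM "YOUR_SCHEMA"."MEASUREMENT"
 GROUP BY TO_DATE("TIMESTAMP");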

 

Thanks,

David.


call PAL procedure with XSJS - infinite execution time - HIGH PRIORITY


Hello,

 

--High Priority--

 

We are experiencing strange behaviour with our HANA instance. We are trying to build a simple linear regression model to demo HANA PAL capabilities. For this we are following the HANA Academy videos and developer guides.

 

Situation:

 

We have built an SAPUI5 frontend where a user can simulate prices and see the impact on sales. Every change of the purchase price on the UI triggers an XSJS call to the backend, which executes the procedures and sends the results from the tables back to the UI.

 

Example Code:

 

Currently we have an xsjs script that keeps running forever. I debugged the script line by line and found that the .execute() call never returns. The code is as follows:

function procExecute(proc) {
    var body;
    var conn;
    $.response.status = $.net.http.OK;
    try {
        conn = $.db.getConnection();
        var price = $.request.parameters.get('PURPRICE');
        var pStmt = conn.prepareCall(proc);
        pStmt.setFloat(1, parseFloat(price));
        var rs = pStmt.execute();  // the execution never ends in here
        conn.commit();
        $.response.status = $.net.http.OK;
    } catch (e) {
        body = "Error generated";
        $.response.status = $.net.http.BAD_REQUEST;
    }
    conn.close();
}
procExecute('call "_SYS_BIC"."path1.path2/gen_proc_purchase_to_sales_pred_values_gen1"(?)');

Procedure to be called:

 

create procedure _SYS_BIC.gen_proc_purchase_to_sales_pred_values_gen1(  in PURPRICE DECIMAL(11,4)  ) language SQLSCRIPT sql security invoker
as
BEGIN
DECLARE PURPRICE1 DOUBLE := :PURPRICE ;
DECLARE id_val INTEGER;
EXEC 'SET SCHEMA APP_TEST';
EXEC 'DROP TABLE APP_TEST.PS_RGP_PREDICT';
EXEC 'DROP TABLE APP_TEST.PS_RGP_PREDICTED';
CREATE COLUMN TABLE APP_TEST.PS_RGP_PREDICT ("ID" INT,"PurchasePrice" DOUBLE);
CREATE COLUMN TABLE APP_TEST.PS_RGP_PREDICTED(ID INT,Fitted DOUBLE);
DELETE FROM APP_TEST.PS_RGP_PREDICT;
select max(ID)+1 into id_val from APP_TEST.V_PS_RG_DATA;
EXEC 'COMMIT';
INSERT INTO APP_TEST.PS_RGP_PREDICT values(:id_val,:PURPRICE1);
DELETE FROM APP_TEST.PS_RGP_PREDICTED;
PS_RGP_PREDICT_TBL = SELECT * FROM APP_TEST.PS_RGP_PREDICT;
PS_RG_COEFF_TBL = SELECT * FROM APP_TEST.PS_RG_COEFF;
PS_RG_PARAMS_TBL = SELECT * FROM APP_TEST.PS_RG_PARAMS;
CALL _SYS_AFL.PAL_PS_RGP1(:PS_RGP_PREDICT_TBL, :PS_RG_COEFF_TBL, :PS_RG_PARAMS_TBL, :lv_ps_rgp_predicted_tbl) WITH OVERVIEW;
INSERT INTO APP_TEST.PS_RGP_PREDICTED SELECT * FROM :lv_ps_rgp_predicted_tbl;
END;

Note: We have put the initial part of the code into other procedures for preparing data and generating statistic values (similar to the HANA PAL developer guide).


When we execute the procedure manually with a value in the SQL console, it runs fine and the predicted values are generated. But when called via XSJS, the execution hangs. Once that has happened, we are not able to execute the procedure via the SQL console either.
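
That last symptom usually points to a lock left behind by the hanging session rather than at PAL itself; a generic diagnostic sketch:

-- shows sessions waiting on locks, including the blocking connection id
SELECT * FROM m_blocked_transactions;
-- a stuck session can then be terminated by its connection id, for example:
-- ALTER SYSTEM DISCONNECT SESSION '200507';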


We do have several xsjs files that call procedures for other applications, and they work absolutely fine.

Can anyone suggest some ideas to fix the above?

Thanks in advance.

How to EXPORT procedure result as CSV via SQL?


Dear HANA Experts,

 

I face some issues with exporting results of SQLScript procedures as CSV to Unix. 

 

I defined some procedures that do joins over multiple tables. The procedure looks like this:

 

CREATE PROCEDURE _SYS_BIC.TABLE_JOIN(OUT TABLE TT_TABLE)
...
TABLE1_RESULT =
SELECT DISTINCT TABLE1.FIELDA,
    TABLE1.FIELDB,
    TABLE1.FIELDC,
    TABLE2.FIELDD,
    TABLE3.FIELDE
    FROM MY.TABLE3 AS TABLE3 INNER JOIN MY.TABLE4 AS TABLE4 ON TABLE3.FIELDE = TABLE4.FIELDE
                INNER JOIN MY.TABLE1 AS TABLE1 ON TABLE3.FIELDA = TABLE1.FIELDA
                INNER JOIN MY.TABLE2 AS TABLE2 ON TABLE2.FIELDD = TABLE1.FIELDD;
...

 

 

 

When calling these within HANA Studio, all works fine. However, now I want to export the result of the joins to the Unix file system using an SQL command like this:

 

 

 

 

 

 

EXPORT "_SYS_BIC"."my.procedures/TABLE_JOIN" AS CSV INTO '/tmp' WITH REPLACE THREADS 10;

 

However, the result is that I get CSV files with the raw data from each of the tables involved in the JOIN operation. In this case I find 4 CSV files, one for each of the tables involved in the query. But I do not get the one table that my procedure generates (using my JOINs, WHERE clauses etc.).

 

Is there any way to write the result of the procedure to a file on Unix, using SQL commands or HDBSQL?
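
As far as I know, EXPORT operates on catalog objects, which is why the base tables get exported rather than the join result. A workaround sketch: materialize the procedure's output with CALL ... WITH OVERVIEW into a physical table and export that table (the staging table and its placeholder column types are mine):

CREATE COLUMN TABLE MY.TABLE_JOIN_RESULT (
    FIELDA NVARCHAR(40), FIELDB NVARCHAR(40), FIELDC NVARCHAR(40),
    FIELDD NVARCHAR(40), FIELDE NVARCHAR(40));  -- types are placeholders
-- WITH OVERVIEW writes the table-typed OUT parameter into the named physical table
CALL "_SYS_BIC"."my.procedures/TABLE_JOIN"(MY.TABLE_JOIN_RESULT) WITH OVERVIEW;
EXPORT MY.TABLE_JOIN_RESULT AS CSV INTO '/tmp' WITH REPLACE;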

 

Thanks

 

Daniel

Difference in values of a procedure in SAP HANA SQL Script?


Hi,

 

I have created a procedure to display notification MTTR & MTBR values.

If I run this procedure in debug mode it shows the correct values, but if I run it directly in the SQL console it shows wrong or fewer values, as below.

 

Values in Debug Mode


 

Values without Debug Mode


 

Can anyone suggest what could be the issue?

 

Regards,

Ramana.

COEP timestamp conversion in HANA


Hi,

 

I am replicating the COEP table in HANA. I need to create an additional column for the table which gives the posting date. COEP has a TIMESTMP column; however, the data is stored in DEC 16 format.

 

I was able to find an ABAP function module / logic to convert it into an SQL date.

 

Convert COEP-TIMESTMP to date and time

 

Please let me know if this can be done in native HANA.

 

Thanks,

Aamod.

client.request(req,dest) throwing Server error (500)


Hi,

I tried to get data from the Northwind OData service using outbound connectivity.

 

This is products.xshttpdest

 

host = "services.odata.org";
port = 80;  
description = "Sample outbound connection";
useSSL = false;
pathPrefix = "/northwind/northwind.svc/Products";
authType = none;
useProxy = false;
proxyHost = "";
proxyPort = 0;
timeout = 0;

And below code is for readData.xsjs

var dest = $.net.http.readDestination("package", "products");    
var client = new $.net.http.Client();    
var req = new $.web.WebRequest($.net.http.GET, "");    
client.request(req, dest);    
var response = client.getResponse();

I am getting an Internal Server Error at line 04 of readData.xsjs (the client.request call).

Please suggest a solution

Are there any permissions or settings required to enable client.request?
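
To see the underlying error instead of a bare 500, the call can be wrapped so that the remote response (or the exception text) is passed through; a sketch using the standard XSJS response API:

try {
    client.request(req, dest);
    var response = client.getResponse();
    $.response.status = response.status;
    $.response.setBody(response.body ? response.body.asString() : "");
} catch (e) {
    $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
    $.response.setBody("Outbound request failed: " + e.message);
}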

 

Thanks,

Vijay
