In the HANA R integration guide, there is emphasis on installing R on a separate server (i.e. not on the same server as
HANA). Why is this?
Thanks
Shabna
Hi All,
I have around 7 years of experience in QA, covering BI report testing and functional testing, with some knowledge of SQL and ETL testing.
Of late I have heard about HANA as a trendsetting tool in the DWH area. I would like to move my career toward SAP HANA, but I don't know which area of SAP HANA to focus on and start learning. As I understand it, someone from QA can get into a functional consultant profile, but are there any other options in this regard?
I appreciate your time and input. Please also share your advice and any documents at jprakash.bdvt@gmail.com
Regards,
Jayaprakash
Hello, I am trying to grant myself SELECT access on a schema I have just created, using the following command from the developer guide:
call
_SYS_REPO.GRANT_SCHEMA_PRIVILEGE_ON_ACTIVATED_CONTENT('select','<schema name>','<user name>');
and I encountered the following error:
"insufficient privilege: Not authorized"
I think I need admin access to run the above command; is that correct? Can someone help me resolve this error please?
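A likely cause is missing EXECUTE rights on the repository procedure. As a sketch (assuming a sufficiently authorized user, e.g. SYSTEM, performs the grant; schema, user and column names are placeholders):

```sql
-- Run as a user authorized to grant on _SYS_REPO objects (e.g. SYSTEM).
-- Grants the calling user the right to execute the repository procedure.
GRANT EXECUTE ON _SYS_REPO.GRANT_SCHEMA_PRIVILEGE_ON_ACTIVATED_CONTENT TO <user name>;

-- Afterwards the original call should succeed (note the closing quote and parenthesis):
CALL _SYS_REPO.GRANT_SCHEMA_PRIVILEGE_ON_ACTIVATED_CONTENT('select', '<schema name>', '<user name>');
```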
Thanks in advance,
Suresh.
SAP has joined forces with several cloud providers to give you a choice of infrastructure platforms. Below is a rough overview of your choices; explore the vendors' websites to find out more about their offerings.
You may have heard of SAP HANA One, a cloud-based SAP HANA database sold via the AWS Marketplace. So how is this different from these offerings? And which HANA do you want?
Well, the main difference is in the license: the developer edition offered here only allows you to develop, test and demo HANA applications. You don't get the right to use those applications productively on the developer edition or to sell the apps as a cloud-based service. For that, you need to move to HANA One (contact us at inmemorydevcenter@sap.com if you need help with the migration). HANA One removes the "dev only" limitation and allows you to use your HANA application in production. There is also an OEM program if you want to sell HANA One applications to your customers. On the other hand, if you are not ready for production use and are just exploring HANA or developing a HANA app, then you would be wasting money on HANA One. Check out this comparison between SAP HANA One Business Edition and the developer edition offered here.
Cloud Partner | Description | Price
---|---|---
AWS (redirects to AWS) | Create and run your own SAP HANA, developer edition environment on the Amazon Web Services (AWS) cloud. Run it on the secure, highly scalable, low-cost elastic infrastructure of the AWS Cloud. With AWS there is no need to procure IT infrastructure weeks or months in advance, so you can create and run a developer environment in minutes, scale your environment according to workload or demand, and pay only for the resources used. *Standard AWS charges and fees will apply.* | starting from $0.55 / hour
kt ucloud (redirects to KT) | Take your own SAP HANA, developer edition on kt ucloud within 5 minutes. Just one click! Most cost-effective! Pay as you go! | starting from $0.286 / hour
CloudShare (redirects to CloudShare) | Get your own SAP HANA, developer edition in the cloud with CloudShare. Start now, no IT required; ready configurations mean you'll be up and running in 5 minutes. | $137 / month
SmartCloudPT (redirects to SmartCloudPT) | Quickly set up your SAP HANA, developer edition environment on Portugal Telecom's SmartCloudPT for a fixed monthly price starting at only 159,95 €/month (just 0,22 €/hour) for a 16 GB RAM cloud instance. SAP HANA, developer edition on SmartCloudPT provides simple and immediate access to a preconfigured SAP HANA development environment, running on the enterprise-grade SmartCloudPT infrastructure (based in Europe) and with an easy-to-connect static/public IP. The service fee is prepaid by credit card and can easily be renewed at the end of each month. Unlike other public clouds, pricing is fixed and fully inclusive, with no hidden costs or surcharges! All environments run on enterprise-grade infrastructure and come with 24/7 instance availability, supported by an expert support team that will quickly and effectively respond to your queries. | starting from €159,95 / month (just 0,22 €/hour)
I can't find a way to convert an IF EXISTS statement from Microsoft SQL Server to HANA.
In MSSQL, IF can be used to execute a SQL command only if the condition is true.
Does someone have an idea of how to do this?
code:
if not exists( select * from SYS.TABLE_COLUMNS
               where SCHEMA_NAME = CURRENT_SCHEMA
                 and TABLE_NAME = 'TableName'
                 and COLUMN_NAME = 'Column' )
alter table TableName
...
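In HANA, one approach is an IF inside a SQLScript block, with the DDL issued dynamically via EXEC. A minimal sketch (this assumes a revision that supports anonymous DO blocks; on older revisions the same body can go into a stored procedure, and the table, column, and type names here are placeholders):

```sql
DO BEGIN
    DECLARE col_count INT;

    -- Check whether the column already exists in the current schema.
    SELECT COUNT(*) INTO col_count
      FROM SYS.TABLE_COLUMNS
     WHERE SCHEMA_NAME = CURRENT_SCHEMA
       AND TABLE_NAME  = 'TableName'
       AND COLUMN_NAME = 'Column';

    -- Only run the DDL if the column is missing; DDL inside SQLScript
    -- has to be issued as dynamic SQL.
    IF :col_count = 0 THEN
        EXEC 'ALTER TABLE TableName ADD (Column NVARCHAR(100))';
    END IF;
END;
```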
Hi, due to performance issues we are facing, we are trying to investigate the query plan.
We found a description of how to generate it in the admin guide; however, it is not sufficient, in the sense that it only covers how to generate the plan, not how to interpret the information it contains.
What is worse, we now cannot generate it on rev. 52, which used to work on rev. 47.
Could someone kindly advise how to resolve the issues above?
Hi all.
I want to know about the JDBC client info API.
I have to use a different query depending on the server revision.
For example:
if the version is 39, then execute the query that is right for Revision 39;
if the version is 51, then execute the query that is right for Revision 51.
I think con.getMetaData() is not enough.
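One way to branch on the server version is to query it from a monitoring view instead of relying on DatabaseMetaData. A sketch (this assumes the connected user may read SYS.M_DATABASE; view and column availability can vary by revision):

```sql
-- Returns a version string such as '1.00.51.xxxxxx'; the third
-- component is the revision, which the client can parse and branch on.
SELECT VERSION FROM SYS.M_DATABASE;
```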
Do you have any idea about this?
Please give me advice.
Thanks.
I created an analytic view with a fact (foundation) table and 2 attribute views (data foundation in a logical join) and created the join between the attribute views and the fact table. I get this error when trying to view data.
Cannot iterate over result set rows: SAP DBTech JDBC [2048]: column store error:
search table error: [2712] error executing physical plan; olap; merging multi value dicts is not
implemented; bwPopJoin1Inwards pop25 (P13856444140: PRODUCTS_CATen.ITEM_SUBCLASS2 to .ITEM_NAME)
in executor::Executor in cube: _SYS_BIC:sidthekid/AVTEST
Hi,
I am using SBOP Advanced Analysis for MS Office 1.3 (AAO1332SP06_0-10010946SP6), 32-bit / Analysis for Microsoft Excel.
For SAP BW queries with "OLE DB for OLAP" disabled / not set, I get an error:
An exception occurred in one of the data sources.
SAP BI Add-in has disconnected all data sources. (ID-111007)
Nested exception. See inner exception below for more details:
Nested exception. See inner exception below for more details:
Nested exception. See inner exception below for more details:
Termination message sent
ERROR BRAIN (136): The query cannot be released for OLE DB for OLAP
MSGV1: %CT
ABEND BRAIN (635): Query IC_SH_COSTS could not be opened.
MSGV1: IC_SH_COSTS
The query cannot be released for OLE DB for OLAP (BRAIN-136)
Query IC_SH_COSTS could not be opened. (BRAIN-635)
Program error in class SAPMSSY1 method : UNCAUGHT_EXCEPTION (RSBOLAP-000)
Is this a product bug or a feature?
Is this planned to be changed?
Thank You
Martin
I was able to validate and activate the analytic view; however, when trying to view raw data I get the following exception:
Cannot iterate over result set rows: SAP DBTech JDBC: [2048]: column store error: search table error: [2712] Error executing physical plan: olap: merging multi value dicts is not implemented;BwPopJoin1Inwards pop58(E2SC_HANA:SITEen.LONG_CODE to .SITE_NAME),in executor::Executor in cube: _SYS_BIC:e2sc-hana/AV_INVENTORY
I see this is also happening in the Calculation view as shown in the attached image.
Hi all.
In our SAP HANA installation, we get a SIGFPE dump when issuing a SQL statement. This is reproducible.
Info from the trace (.trc) file:
[BUILD] build information: (2013-03-21 10:21:57 000 Local)
Version : 1.00.41.370506 (NewDB100_REL)
Build host : ldm053.server
Build time : 2012-11-13 10:27:17
Platform : linuxx86_64
Compiler : cc (SAP release 20121004, based on SUSE gcc43-4.3.4_20091019-0.22.17) 4.3.4 [gcc-4_3-branch revision 152973]
Branch : NewDB100_REL
Git hash : not set
Git mergetime : not set
Weekstone : 0000.00.0
[OK]
...
[CRASH_STACK] stacktrace of crash: (2013-03-21 10:21:57 000 Local)
----> Symbolic stack backtrace <----
0: text_search::short_text_index::estimateDocTermCounts(unsigned int, ltt::vector<int> const&, text_search::result_list<text_searc
Symbol: _ZN11text_search16short_text_index21estimateDocTermCountsEjRKN3ltt6vectorIiEERNS_11result_listINS_9doc_valueEEEPKNS
SFrame: IP: 0x00007f19e27e1369 (0x00007f19e27e0b10+0x859) FP: 0x00007f1576238c90 SP: 0x00007f1576238b60 RP: 0x00007f19dac22
Params: 0x1e, 0x0, 0x0, 0xffffffff00000000, 0x7f02940f8008, 0x8
Regs: rax=0x0, rbx=0x0, rcx=0xffffffff00000000, rdx=0x0, rsi=0x0, rdi=0x1e, rbp=0x7f1576238c80, rsp=0x7f1576238b60, r8=0x7f
Source: core.cpp:1147
Module: /usr/sap/NHX/HDB00/exe/libhdbcs.so
-----------------------------------------
1: AttributeEngine::TextAttribute::_scoreSimpleQueryTermBM25(TRexCommonObjects::BM25QueryTermStatistics<unsigned int> const&, ltt:
Symbol: _ZNK15AttributeEngine13TextAttribute25_scoreSimpleQueryTermBM25ERKN17TRexCommonObjects23BM25QueryTermStatisticsIjEE
SFrame: IP: 0x00007f19dac2299e (0x00007f19dac22950+0x4e) FP: 0x00007f1576238d70 SP: 0x00007f1576238c90 RP: 0x00007f19dac24d
Params: ?, ?, 0x7f1576238eb0
Regs: rbx=0x0, rdx=0x7f1576238eb0, rbp=0x7f1576238d60, rsp=0x7f1576238c90, r12=0x7f1576238e70, r13=0x7f175bcdbea0, r14=0x7f
Source: TextAttribute.cpp:689
Module: /usr/sap/NHX/HDB00/exe/libhdbcs.so
-----------------------------------------
2: AttributeEngine::TextAttribute::_doBM25ranking(AttributeEngine::AttributeQuery const&, ltt::vector<AttributeEngine::QueryInfoBl
Symbol: _ZNK15AttributeEngine13TextAttribute14_doBM25rankingERKNS_14AttributeQueryERKN3ltt6vectorINS_14QueryInfoBlockIjEEEE
SFrame: IP: 0x00007f19dac24d6e (0x00007f19dac244c0+0x8ae) FP: 0x00007f1576238f30 SP: 0x00007f1576238d70 RP: 0x00007f19dac49
Params: ?, ?, 0x41
Regs: rbx=0x0, rdx=0x41, rbp=0x7f1576238f20, rsp=0x7f1576238d70, r12=0x7f15762396d0, r13=0x2, r14=0x7f175bcdbea0, r15=0x0
Source: TextAttribute.cpp:819
Module: /usr/sap/NHX/HDB00/exe/libhdbcs.so
Is there any known defect for this?
Thank you in advance
Andreas
Hi reader,
Today I attended a SAP HANA interview for Cognilytics. Please check the details.
It would be great if you could answer the aforementioned questions.
Regards,
Vikram
Introduction
The integration of SAP HANA with Salesforce.com (SFDC) provides a unique solution that harnesses the user experience of SFDC together with real-time analysis of data using SAP HANA. In this blog we explain a POC developed using the Salesforce adapter provided by SAP BusinessObjects Data Services (BODS) to extract information from Salesforce.com, normalize and store the data in SAP HANA, and expose the processed data via SAP HANA Extended Application Services (XS Server).
Business Value
Statistical analysis of customer data helps the business pinpoint inconsistencies and take corrective action. It also helps track customer behavior and identify trends and potential business opportunities. SAP HANA provides statistical functions as part of the Predictive Analysis Library (PAL). These statistical functions can be used to analyze customer data from SFDC and develop a statistical control process to support the client's business objectives.
Steps to integrate SAP HANA with SalesForce.com
In addition to the data analysis, the other steps that go into building this solution are:
There are multiple options available to import data from SFDC into SAP HANA; in this blog we focus on the Salesforce adapter provided by SAP BODS.
SAP HANA XS Server provides three options for web clients to consume data.
The diagram below shows the components and data flow for the HANA to Salesforce integration.
Importing data using the BODS Salesforce Adapter
To use the Adapter for Salesforce.com from SAP BusinessObjects Data Services, follow these steps.
Develop an OData Web Service using HANA Extended Application Services
OData is a resource-based Web protocol for querying and updating data that maps the persistence model to consumption models. An OData application running in SAP HANA XS can be used to provide the consumption model for client applications exchanging OData queries with the SAP HANA database.
OData enables clients to consume authorized data stored in the SAP HANA database. OData defines operations on resources using RESTful HTTP commands (for example, GET, PUT, POST, and DELETE). Data is transferred over HTTP using either the Atom (XML) or the JSON (JavaScript) format.
Given below are the steps to create an OData web service for the HANA XS Engine:
2. Create an XS Project - A new XS project, xs_order_prj, is created using the Development perspective.
a) Create a new project.
b) Select the project type as XS Project.
c) Project xs_order_prj is created with its location in workspace folder xsandas3. Please note the default location option is unchecked. The workspace folder name is the same as the package where the analytical view was created.
d) The project xs_order_prj created in the step above needs to be shared. It was shared by right-clicking the project name and selecting Team -> Share Project.
3. Set up OData service files - The application descriptor (.xsapp) and application access (.xsaccess) files are created.
The application descriptor (.xsapp) is a blank file.
a) Right-click on the project and use New -> File to create the .xsapp, .xsaccess and order.xsodata files.
b) The application access (.xsaccess) file is created as shown below. This file is used to define what is to be exposed as a web service and who has access.
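As an illustration, a minimal .xsaccess file often looks like the following (a sketch; the exact keys and the authentication method depend on the HANA revision and your security setup):

```
{
    "exposed": true,
    "authentication": [ { "method": "Basic" } ]
}
```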
c) Create the file order.xsodata to expose the analytical view AN_ORDERS as a web service. The OData service is defined as:
service {
"xsandas3/AN_ORDERS.analyticview" as "orders1" keys generate local "GID" aggregates always;
}
4. Commit and activate the service - Commit and activate the .xsapp, .xsaccess and order.xsodata files.
After the files are committed and activated successfully, the orange bubble icon appears.
5. Testing the OData service - OData web services provide multiple parameters and syntax to support operations such as retrieving the metadata definition, applying filters, and selecting a subset of records.
OData URL: The OData URL is based on the server name, the instance number of the HANA system, the package name, the xsodata filename and the entity set name.
Substitute the actual server name, instance number, package and xsodata filename into the pattern http://<servername>:80<instance number>/<package>/<xsodata filename> to determine the URL string.
The table below provides sample URLs, assuming the server name is acmecorp.com, the instance number is 00, the package is xsandas3, the xsodata filename is order.xsodata and the entity set name is orders1.
Operation | URL
---|---
Retrieve metadata | http://acmecorp.com:8000/xsandas3/order.xsodata/$metadata
Get records in JSON format | http://acmecorp.com:8000/xsandas3/order.xsodata/orders1?$format=json
Get records in Atom format | http://acmecorp.com:8000/xsandas3/order.xsodata/orders1/
Apply a filter | http://acmecorp.com:8000/xsandas3/order.xsodata/orders1?$filter=startswith(CUSTOMER,'901')
Building a SOAP-based web service using XMLA
XMLA enables the exchange of analytical data between a client application and a multi-dimensional data provider working over the web, using a SOAP-based web service. Implementing XMLA in SAP HANA enables third-party reporting tools connected to the SAP HANA database to communicate directly with the MDX interface. I will provide details on building an XMLA service in the next edition of this blog.
Accessing the web service from Salesforce.com
Salesforce.com provides APIs that support both SOAP- and REST-based web services. The web services defined in HANA can easily be accessed and consumed.
Reference links:
BODS Salesforce adapter guide - http://help.sap.com/businessobject/product_guides/sboDS41/en/sbo41_ds_salesforce_en.pdf
SAP HANA Developer Guide - http://help.sap.com/hana/hana_dev_en.pdf
Contribution
The following team developed the POC and the blog:
Hi,
Sometimes I see a table under two different schemas (SYS and PUBLIC).
I checked both, and their content appears to be the same, with the same record count.
My question is about SYS vs. PUBLIC: most of the time a table exists in both schemas.
Why is it like this?
Can anybody please explain?
Regards,
SS
Hi Gurus,
I am trying to work with window functions in SAP HANA.
I am using the query below to partition data over a dimension:
select class, val, offset,
ROW_NUMBER() OVER (partition by class order by val) as "row_num"
from T;
I am getting the error below:
Could not execute 'select class, val, offset, ROW_NUMBER() OVER (partition by class order by val) as "row_num" from ...' in 19 ms 136 µs . SAP DBTech JDBC: [257] (at 49): sql syntax error: incorrect syntax near "(": line 2 col 22 (at pos 49)
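For reference, a self-contained version of the example (a sketch; the table definition and sample rows are assumptions modeled on the SQL reference, and window functions such as ROW_NUMBER() are only available on revisions that support them, so the syntax error above may simply indicate an older revision):

```sql
-- Hypothetical table matching the SQL reference example.
CREATE COLUMN TABLE T (class NVARCHAR(10), val INT, offset INT);
INSERT INTO T VALUES ('A', 10, 0);
INSERT INTO T VALUES ('A', 20, 1);
INSERT INTO T VALUES ('B', 30, 0);

-- Numbers the rows within each class, ordered by val.
SELECT class, val, offset,
       ROW_NUMBER() OVER (PARTITION BY class ORDER BY val) AS "row_num"
  FROM T;
```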
Table T was created using the example from the SQL reference manual.
Please advise.
Thanks,
Nikhil
Hello Experts,
I have made the following observation, and I wonder if I'm missing something or if there is a limitation in how string columns can be combined as calculated columns within analytic views.
I created an analytic view solely with the aid of the visual editor. All columns used in the view are defined as string data types, and the row count of one of them is used to define it as a measure column. This works fine, and I get the expected results from the view. Now if I create a calculated column by combining two columns (neither of which is the one I use as a measure) using the + operator, the validation fails with the error that the data type of the measure column has to be numeric. This was acceptable previously, but now that I have introduced a concatenated column (of two string columns), I get a failure at the validation step. Here is a snippet of the error:
Error Message
Internal deployment of object failed; Repository: Encountered an error in repository runtime extension; Internal Error: Create Scenario: failed CalcEngine.createScenario(): The following errors occurred: Inconsistent calculation model (34011)
Details (Errors):
- CalculationNode (dataSource) -> attributes -> attribute (COUNTER): Keyfigure has to be numeric.
- CalculationNode (finalAggregation) -> attributes -> attribute (COUNTER): Keyfigure has to be numeric.
So I removed the column I used as a measure, introduced a measure column from the base table that is defined as numeric, and bingo: the view works with concatenated columns as expected. I then re-introduced the previous column as a second measure, and the validation step failed again with the same error.
From this it appears that you must have only numeric measure columns from the base table in order to create any concatenated string column (as a calculated column). Since this is fairly straightforward to implement in raw SQL, I think this is perhaps an issue with the Studio, unless I have missed a step somewhere.
(Searching through the various posts, I am led to believe that I can indeed use the + operator for concat operations on string columns.)
I'm on CloudShare with:
SAP HANA Studio
Version: 1.0.48
Build id: 201301130825 (372847)
HDB version info:
version: 1.00.48.372797
Any thoughts?
Regards,
Ramesh
Hi,
I was reading the HANA documents regarding backup and restore and have some questions. Please assist:
1. Where do the log files for committed transactions get stored?
2. Are the files I see under /sap/mnt0001/log the online logs?
3. Are the files in /usr/sap/<SID>/HDB00/backup/log created only when performing a backup, or are they created whenever a transaction is committed?
When using a SAP HANA DB, are there any differences in query structure design needed to take advantage of the performance gains?
Are there any aspects you should consider when designing new queries?