
Update methods in InfoPackage - SAP BI7


Introduction

 

The update method controls which data is extracted from the source system into the BI system, and it is defined at the InfoPackage level.

 

We can set the update method on the Update tab of the InfoPackage.

 

The update methods available in the InfoPackage are:

 

1. Full Update

2. Delta Update

3. Initialize Delta Process

     (I) Initialize with data transfer

     (II) Initialize without data transfer

     (III) Early Delta Initialization

 

 

1. Full Update

  • A full update extracts the complete data set from the source system into the PSA every time it runs.
  • No delta functionality is possible with a full update.

 

If you want delta records, you need to use the delta update method after initializing the delta process.

 

2. Delta Update

 

A delta update extracts only the delta records from the BW delta queue in the source system into the BI system.

 

The delta must be initialized first; otherwise it is not possible to load delta records.

 

The following are the four delta types a DataSource can have:

 

F: Flat file provides the delta

E: Extractor determines the delta, e.g. LIS, CO-PA

D: Application determines the delta, e.g. LO, FI-AR/AP

A: Use ALE change log delta

 

Note: You can check the delta properties of a DataSource in table ROOSOURCE in the source system using transaction SE16.


3. Initialize Delta Process


To get delta records, one must first initialize the delta process. During initialization, the system sets an initialization flag for the source system in BI (visible in the Scheduler menu of the InfoPackage) and creates a BW delta queue entry per DataSource in the source system (RSA7). This enables the timestamp mechanism.

 

Initialize with data transfer

 

If you select this option, the initialization data is extracted from the source system into the BI system and delta functionality is enabled.

 

Steps for initialize with data transfer

 

  • Lock the users in the source system.
  • Delete the contents of the setup tables for the specific application component in the source system (transaction LBWG).
  • Fill the setup tables (SBIW or OLI*BW, using 1, 2, 3 ... in place of * according to the application).
  • Run the InfoPackage with Initialize with data transfer.
  • Unlock the users in the source system.


Note: This is a very time-consuming process, because the users have to stay locked until the data reaches the BI system, which affects the client's business.

 

Initialize without data transfer

 

In some cases the init was successful but someone has deleted the init flag. To set the flag again and enable delta loads without disturbing the data, we execute the InfoPackage with this option.

 

Steps for initialize without data transfer

  • Lock the users in the source system.
  • Delete the setup table contents for the specific application component.
  • Fill the setup tables.
  • Run the InfoPackage with the option Initialize without data transfer.
  • Unlock the users in the source system.
  • Load the data into the BI system using a repair full request InfoPackage.

Note: With this method, the users can be unlocked as soon as the data has been loaded into the setup tables, which makes it a better option than Initialize with data transfer.

 

Early Delta Initialization

 

With this option, the delta initialization can be done before the setup tables are filled, so users can keep posting documents while the setup tables are being filled. The records posted in the meantime arrive with the next delta run.

 

Steps for early delta initialization

  • Run the InfoPackage with the Early Delta Initialization option. This enables the BW delta queue and sets the delta timestamp in the source system.
  • Delete the setup tables for the application component.
  • Fill the setup tables.
  • Load the setup table data using a repair full request InfoPackage (Scheduler menu option of the InfoPackage).

 

How to check whether a DataSource supports early delta initialization (a code sketch follows this list):

  • Go to SE16 in ECC, enter table name ROOSOURCE and press Enter.
  • On the next screen, enter the DataSource name, e.g. 2LIS_02_SDR (purchasing document header DataSource), and execute.
  • If the field ZDD_ABLE has the value 'X', the DataSource supports early delta initialization.
  • If the field is blank, the DataSource does not support early delta initialization.
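For reference, the same check can be done in a small ABAP report instead of browsing SE16. This is only a minimal sketch, assuming (as the text above states) that the delta type and ZDD_ABLE live in ROOSOURCE, and that the key fields are OLTPSOURCE and OBJVERS:

* Sketch: check the delta properties of a DataSource in the source system.
REPORT zcheck_early_delta.

PARAMETERS p_ds TYPE roosource-oltpsource DEFAULT '2LIS_02_SDR'.

DATA: lv_delta TYPE roosource-delta,
      lv_early TYPE roosource-zdd_able.

SELECT SINGLE delta zdd_able FROM roosource
  INTO (lv_delta, lv_early)
  WHERE oltpsource = p_ds
    AND objvers    = 'A'.                  " active version

IF sy-subrc = 0.
  WRITE: / 'Delta type:', lv_delta.
  IF lv_early = 'X'.
    WRITE: / p_ds, 'supports early delta initialization'.
  ELSE.
    WRITE: / p_ds, 'does not support early delta initialization'.
  ENDIF.
ELSE.
  WRITE: / 'DataSource not found in ROOSOURCE:', p_ds.
ENDIF.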

Real time cube data - Direct extract to file


Scenario: In a real-time cube, if a key figure value is changed via the input-ready functionality, two records are created: one with a negative sign that cancels out the existing record, and one with the newly entered value. If we need both records for further processing, this becomes a problem, because extracting data from the cube returns only the aggregated result set, i.e. only the value that was entered last. The only place where all the records can be seen is the Display Data option of the cube.

 

This document explains how to get this data into a file on the local workstation; in a similar way the values can be written to a database table or to a file on the application server (see the sketch below).
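As a side note, the application-server variant mentioned above comes down to standard dataset handling in ABAP. A minimal sketch, where the path and the record structure are illustrative assumptions rather than part of this document:

* Sketch: write extracted rows to a file on the application server.
TYPES: BEGIN OF ty_rec,
         field1 TYPE string,
         field2 TYPE string,
       END OF ty_rec.

DATA: lt_records TYPE STANDARD TABLE OF ty_rec,  " rows extracted from the cube
      ls_record  TYPE ty_rec,
      lv_file    TYPE string VALUE '/interfaces/BW/realtime_cube.csv',
      lv_line    TYPE string.

OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc <> 0.
  MESSAGE 'Could not open target file' TYPE 'E'.
ENDIF.

LOOP AT lt_records INTO ls_record.
  CONCATENATE ls_record-field1 ls_record-field2 INTO lv_line SEPARATED BY ';'.
  TRANSFER lv_line TO lv_file.
ENDLOOP.

CLOSE DATASET lv_file.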

 

Step 1: Go to transaction LISTCUBE, provide the cube name as shown below and execute.

1.png

Step 2: You get a selection screen as shown below.

3.png

Step 3: Note the generated program name of the screen via the menu System -> Status -> Program.

4.png

Step 4: With the program still in its executed state, open a new session and go to transaction SE38. Copy the program and activate it.

 

5.png

Step 5: The selection screen will be displayed as follows.

6.png

Step 6: The necessary labels for the different fields can be enabled from the dictionary using the Goto -> Selection texts option.

 

7.png

8.png

Step 7: The selection screen gets modified as shown below.

9.png

Step 8: "Fld Selectn for Output" must be checked for every field that should appear in the output. At this point the display is as shown below.

10.png

Step 9: To keep these fields permanently enabled during program execution and to propose the preferred target location for the file, the following changes are needed. In our case we enable a single characteristic InfoObject, as shown below.

11.png

Step 10: The program code and the corresponding setting in the selection screen are as shown below.

12.png

Step 11: As we plan to place the file on the workstation, we enable the radio button "Store in file (workstation)" and provide the "Name of the table or file" in the text box.

13.png

Step 12: The selection screen gets modified for the settings as shown below.

14.png

Step 13: On executing the program (ZCOPY_PGM), a new file is created on the local workstation with all three records from the cube.

15.png

Transporting a datasource ended with return code 12


Summary

 

DataSource 0FI_AP_4 is not available in active version in the target system. When it is transported to the target system, the transport fails with return code 12, caused by an inactive version of the DataSource in the target system. In this document I explain the problem and its correction.

 

Author(s):    Sreekanth S

Company:    Accenture Services Private Ltd

Created on:  21 Jun 2013

 

Problem Description:

 

Transporting the 0FI_AP_4 DataSource results in return code 12, with the error SAPSQL_ARRAY_INSERT_DUPREC (exception CX_SY_OPEN_SQL_DB).

The current ABAP program "CL_RSAR_PSA===================CP" terminated.

 

The system program terminates at the line below, trying to insert a field that already exists. RSTSODSFIELD is the system table that contains each DataSource and all of its fields.

  FIAP_4 DS1.png

Reason:

 

When the transport is imported into the target system, it checks for the existence of the DataSource/PSA and, if it does not find it, tries to insert the record for the DataSource into database table RSTSODSFIELD. This table, however, already contains the entry and cannot accept a duplicate, so the import ends in a short dump.

 

Collecting the object again or re-importing the request does not correct the problem.

 

Step by Step Solution

 

Step 1: Check the DataSource version in table RSDS in the target system.

FIAP_4 DS2.png

It is not in active version. Check the DataSource version in table RSTSODS; it also shows inactive.

 

Step 2: Execute the program RSDS_DATASOURCE_ACTIVATE_ALL for the DataSource, then check whether the versions in the RSDS and RSTSODS tables have changed. If the version has not changed, proceed with the next step.

 

Step 3: Go to table RSTSODS, enter the PSA name and manually change the OBJSTAT value from INA to ACT.

To do this you can use debugging mode: first select the record and click Display, then type /h in the transaction bar and press Enter. The ABAP debugging session opens; double-click the 'CODE' variable and modify its value so the record becomes changeable.

 

0fiap_4 ds3.png

As in the screen above, press F7 or F8; you will get a screen where you can modify the selected record. Change the value to ACT and save it. (A scripted alternative follows below.)
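If the debug trick feels fragile, the same one-off repair can be done with a small throwaway program. This is a sketch only; it assumes ODSNAME is the RSTSODS key field holding the PSA name, and it should of course be tried in a non-productive system first:

* Sketch: set an inactive PSA entry in RSTSODS to active.
PARAMETERS p_psa TYPE rstsods-odsname.

UPDATE rstsods SET objstat = 'ACT'
  WHERE odsname = p_psa
    AND objstat = 'INA'.

IF sy-subrc = 0.
  COMMIT WORK.
  WRITE: / 'OBJSTAT set to ACT for', p_psa.
ELSE.
  WRITE: / 'No inactive entry found for', p_psa.
ENDIF.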

 

Step 4: Re-import the failed request into the target system; it will now import successfully.

 

Related Content:

 

SAP note 1703960

 

Points to be considered while integrating BW Bex queries with BO Webi


Hi All,

 

There are lots of SAP BW-BO integration projects happening around the world, and as consultants we always face some issue or other while integrating these tools.

 

Here, I am trying to share some points that I have observed while integrating the BW Bex queries with BO Webi.

Environment: BW7.3SP4 and BO4.0SP4
1) Before delivering a query, ensure that the "Allow External Access to this Query" option is checked. Otherwise, the BO team will not be able to access the BEx query.
image001.jpg                 
2) From a performance point of view, it is better to go with the second option, 'X'. The first option, "Query to read all data at once", reads all data at once, including all free characteristics, which in most cases means asking for more data than is actually needed.
image002.jpg
Where hierarchy elements are involved, we can go ahead with option H.
3) Avoid creating query elements with SUMGT, SUMCT, SUMRT etc. in the calculations. It has been observed that when external access is allowed for queries containing such elements, the queries cannot be saved and throw errors like "cannot be released for OLE DB for OLAP".
4) Avoid conditions like Top 5 in BW queries; conditions do not work properly in BO. BO has a ranking option which is quite similar to conditions in BW.
5) For better performance, provide variables in the BW query itself so that creating the variables (prompts) in BO can be avoided. The general rule is that any prompt or filter should be defined in the BW query and not in the Web Intelligence query panel; this helps improve WebI report performance.
6) For large sets of data, use the "selection of structure members" option in the RSRT query properties on the BW side. Checking this property improves performance when there are many restricted key figures and other calculations in the query definition, because it ensures the structure elements are sent to the database for processing.
image002.png
7) Wherever the requirement is to allow multiple values in a prompt, it is better to provide the option "Several Single Values".
image002.png
Avoid providing the Selection Option variant: BO recognizes it as an interval and displays the prompt with a start value and an end value, as shown below, which causes a lot of confusion for the BO consultants.
image002.png
8) Calculated key figures containing mathematical functions should be avoided; they produce errors in BO.
image002.png
9) If you are forced to use exception aggregation in a query, kindly inform the BO team about the reference characteristic used for the exception aggregation. If they ignore the reference characteristic, they get the #UNAVAILABLE error.
10) If the query designer contains elements that are used only for calculation purposes, keep them on "Always Hide".
image002.png
Elements with the property "Hide (Can Be Shown)" are visible in the BO environment. If we use that option in a case like the one below, where two elements have the same text "CMBO VALUE", they are shown as "CMBO Value" and "CMBO Value1" on the BO side.
image002.png
11) Whenever the BW query is delivered to the BO team, kindly ensure that you include a screenshot of the BEx query output and, if possible, a screenshot of the data in the underlying InfoProvider (with the same filter that was given in BEx). This serves three purposes:
1) It is evidence that the query was working fine at the time of delivery.
2) The screenshot can be used as a test case.
3) The BO team does not have to spend time finding the actual filter values for which the underlying DataProvider has data.
12) Never create text variables with replacement path in BW, as they do not work as expected on the BO side.
As a test scenario, the following text variables were created in BW:
image002.png
The default value of the variable is the current calendar month, 01.2013. As expected, changing the entry in the BW variable changes the output column headings accordingly.
But when the same query was taken to BO, the default value 01.2013 came along with the text of the measure (left pane). Even after refreshing the WebI document, the default value did not change in the column headers, and the data was not populated correctly.
The following result appeared in BO even after the prompt was changed from the default value 01.2013 to 10.2011:
image002.png
               
13) When we build the corresponding Query for a dashboard requirement like the following,
image002.png
                 
For bringing in the region here, we would have used the BW object 0CUST_SALES__0SALES_OFF, whose actual text would have been something like "Sales Office". While creating the query, ensure that you change the text from "Sales Office" to "Region" (as mentioned in the dashboard requirement document) so that the BO consultant has no confusion while creating/merging in WebI.
image002.png
                 
14) Try to restrict the query result set by applying global filters wherever possible, so that huge numbers of records are not passed to the BO level.
15) Keep most characteristics in the free characteristics instead of placing them in rows/columns. (It does not affect your WebI output anyway: as far as the WebI report is concerned, a characteristic in the rows or in the free characteristics makes no difference, and everything is shown in the same pane in the BO environment.)
16) Make it a habit to use the technical name of the BEx variable instead of the text in the UserResponse function (for prompts). This helps if the report has to work in a multi-language environment; for example, for the same technical name the prompt text is "Date" in English and "Datum" in German.
17) Compounded attributes will always be displayed in BO, irrespective of whatever trick you try to hide them in BW. This leads to some confusion on the BO side, although they have options like the substring function to remove the compounded part.
But in SP5, SAP has come up with a solution for this.
See the link below-
18) Structures defined in the BW BEx query work fine in BO WebI as well:
Untitled1.pngUntitled1.png
19) Default values given in BW BEx variables work fine in BO WebI as well:
Untitled1.pngUntitled1.png
20) As a best practice, when there are many queries, the BW queries should be created on top of a MultiProvider, and the same MultiProvider can be used in BO IDT or CMC for creating the BICS connection (for easy tracking).
Where there are only a few queries, the BO connections can be made on top of the BW queries themselves.
Also, from the beginning of the project it is always better to use generic system user credentials rather than normal dialog user IDs when creating such connections.
21) From the beginning of the project, a well-defined naming convention (approved by the client) should be in place for the technical objects created on the BW and BO side, and the objects created in the course of the project by the BW and BO team members (such as BEx query names and BO WebI connection names) should be added to and maintained in a tracker (perhaps an Excel sheet) in a common folder or share accessible to all the stakeholders of the project.
22) As soon as the dashboarding requirements are clear, please check the drop-downs/combo boxes (dimensions) used in the charts and trends. Discuss with the client and find the maximum possible number of rows of data that might occur in the production environment. This is very important, as we are all aware of the row limitations (512 up to a maximum of 2000) in the dashboard Excel; anything above 2000 rows will badly affect dashboard performance. Inform the client about this tool limitation before development. See the link below-
23) The rounding-off option in the BEx query properties is not supported on the BO WebI side.
Untitled.png
The BO team can use the =Round() function on their side to achieve the desired results.
24) If possible, try to avoid exposing key figures directly from the cube/MultiProvider in the BEx query; provide them in the form of a selection, RKF, formula or CKF instead. This gives the BW user more flexibility to change the underlying logic at a later stage, even if the BO development is completely done: BO always refers to BEx query elements via the generated GUIDs, and a GUID is not affected by logic changes inside a selection, RKF, formula or CKF.
25) The "Access Type for Result Values" option in the BEx query properties does not work on the BO side.
image001.png
26) If the requirement is to compute cumulative values (in this example, billing quantity), it is always better to do it on the BO side. Although the "Cumulated" option is available in BEx, it may not give the values correctly if there is a null value or a unit change in the key figure. In the example below, the billing quantity cumulation worked fine until a unit change (from EA to L) happened.
BO has the =RunningSum() function to deal with such cumulation requirements.
image002.pngimage003.png
27) Never create restrictions in the Default Values pane (as shown in the screenshot below). Although they work fine on the BW side, BO will not consider such a restriction at all.
Untitled1 (1).png
28) There are situations in BEx where a formula yields 'X' in the report but results in an 'Out of Range' error in BO. We can use the NOERR function in BEx, whereby the X is changed to 0 and the BO error is completely solved.
Untitled2.png          Untitled1.png
29) Similar to Key and Text in BW, BO has similar options called Caption and Unique Name. In certain cases, even if you select only Caption with the intention of seeing only the description, WebI shows both the caption and the unique (technical) name, like below:
Untitled1.png
The underlying Bex query for the above Webi report is as follows:
Untitled2.png
If you observe the BEx query above, you can notice that two objects with different technical names, 0MATERIAL and 0MAT_SALES, have the same description "Material"; this similarity in description is the reason why WebI behaved differently.
So whenever you create a BEx query, ensure that you have not used the same description for two objects that have different technical names.
30) Suppose we have a dashboard requirement where we need to show fiscal year/period as a drop-down in a chart; the underlying BW query would then have fiscal year/period in the rows. When the BO consultant uses this query to create a WebI report for the dashboard and tries to sort the fiscal year/period, the tool unfortunately does not understand year-change scenarios, and the sorting yields a mixed-up result like below:
11.png
So, as a best practice, try to provide the fiscal year and, if possible, the posting period along with the fiscal year/period in the BEx query. This needs to be kept in mind at the time of BW modelling itself. The BO consultant can then drag in the fiscal year and sort on it as well as on the fiscal year/period key, so that the expected sort order is obtained (see the screenshot below).
22.png
31) Please be aware that negative key figures from BW are sometimes shown within brackets (without the minus sign) on the BO side, as in the screenshots below, because the BO developer may have used the custom format option. This awareness is very useful when you create BO variables that involve addition or subtraction with such measures; if not taken care of properly, there is a high possibility that the variable result differs from what you expect.
image002.pngUntitled.png
32) Performance Hints
a) If a BEx query contains hierarchies, do not forget to suppress unassigned nodes in transaction RSH1 on the BW side.
           
b) On the BO side, enable "Query Stripping" in the WebI query panel and, in report design mode, in the Document Properties.
This feature optimizes the query to fetch only the data that is actually dragged onto the report for display, which improves performance and speeds up report rendering for the user.
c) Where possible, define calculated and restricted key figures on the InfoCube instead of locally in the query.
d) When using restricted key figures, filters or selections, try to avoid the exclusion option if possible. Only characteristics in an inclusion can use database indexes; characteristics in an exclusion cannot. With selections it is better to include than to exclude characteristic values.
e) The "Result Rows" setting for each characteristic in the BEx query should be set to "Always Suppress". Web Intelligence does not use the result rows, and performance improves when this option is set.
f) Time characteristics should be restricted, ideally to the current characteristic value.

Move time restrictions to a global filter whenever possible.

 

g) Check the code of all exit variables used in a report to make sure it is optimized and functioning correctly (on the BW side).

 

h) Reduce the selected data volume by introducing mandatory variables (on the BW side).

 

i) Define as navigational attributes only those characteristics for which there is a business requirement; navigational attributes need additional table joins at query runtime.

Even though Web Intelligence does not distinguish between a normal characteristic and a navigational attribute, there is still a performance impact on the BW system when Web Intelligence reports run on BEx queries with navigational attributes.

I will certainly add more points to this post in future....

 

The points above are just a few; you will have faced many limitations like these as well.

I humbly request you to share some of your project experience here, so that this can act as a help document for all our friends who are new to such integration projects.

 

New releases like SP5 with Design Studio will certainly wash out many of the existing limitations of this integration, and they will also enhance the power of BO dashboarding where the source is SAP BW.

 

 

Related Documents:

1) Implementing Bex Data Functions in WEBI:

http://scn.sap.com/community/businessobjects-web-intelligence/blog/2012/05/07/implementing-the-bex-data-function-in-webi-report

 

2) Performance optimization for WEBI reports based on BEX:

http://scn.sap.com/docs/DOC-31081

 

3) Best practices for BO4.0 in combination with Netweaver:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d056e1bc-2794-2e10-959e-8779c5623cc5?QuickLink=index&overridelayout=true&51586852230547

 

4) Query element parameter table:

http://scn.sap.com/people/suhas.karnik/blog/2011/09/06/rszeltprop-the-query-element-parameters-table

 

5) Support for Bex elements in BO4.0 SP5:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f08f6c97-9434-3010-7bae-824833dc59d4?QuickLink=index&overridelayout=true&57909044096146

 

6) Tips for developing smart WEBI reports over BEX:

http://scn.sap.com/docs/DOC-35062

 

7) Performance optimize BO reports based on BW using BICS:

https://scn.sap.com/docs/DOC-33706

 

8) Selecting the right BO tool:

https://scn.sap.com/docs/DOC-11876

 

9) Dynamic rolling graphs in Bex-BO integration:

http://scn.sap.com/docs/DOC-34499

 

10) Create connections in BO 4.0

http://www.google.fi/url?sa=t&rct=j&q=sap%20bo-bw%20blogs&source=web&cd=10&cad=rja&ved=0CGkQFjAJ&url=http%3A%2F%2Fwww.pieterverstraeten.com%2Fblog%2F%3Fdownload%3Dhow-to-create-connections-in-bi-4.0.pdf&ei=VUszUejkL4OKrgeCn4CoAg&usg=AFQjCNETrRa74QhxolhCv8BsZZhhcQIIcQ&bvm=bv.43148975,d.bmk

 

 

PS: Please find below the link to my document which walks you through the new features introduced in BW7.3

http://scn.sap.com/docs/DOC-30461

 

Thanks and Regards
Prabhith

Copying files from one folder path to another in the AL11 directory


Summary

 

In BW, files are stored in a folder of the AL11 directory by open hubs or various FTP processes. If the files are overwritten every day, we may have to copy them to another folder in the directory as a backup. This document explains a detailed solution for backing up/copying the files to another folder.

 

Author:    Sreekanth Surampally

Company:    Accenture Services Private Ltd

Created on:   22 Jun 2013

 

Business Scenario

 

In the BW system, three open hubs place files in an AL11 directory folder; an FTP program takes all the files in that folder and sends them to an Oracle server. This happens every day, and the data sent to Oracle is the daily delta, so on any given day the folder must contain only the current day's files when the FTP program runs. For this reason the files are overwritten every day with the new delta.

No file history is kept in the directory, which makes troubleshooting difficult when, say, some data went missing a month ago.

 

Solution

 

I created an ABAP program to copy the files from one folder to another in the directory and added this step to the process chain.

 

Step 1: Pass the source path value to variable LV_SRC_PATH and then to LV_DIR_PATH. The value can be maintained in the TVARV table, so read it from there and store it in the variable.

Call function module 'EPS_GET_DIRECTORY_LISTING' with the source path value as the exporting parameter. It lists all the files in the folder.

File Backup 1.png

Step 2: Loop over the internal table that holds all the files in the source path.

Header files are ignored by checking a condition: if the first letter is 'S', skip the file. Then concatenate the source path with the file name, and concatenate the target path with the file name plus date and time, so the backed-up files are stored separately.

 

File Backup2.png

Step 3: Copy the source file content to the target file.

 

File Backup3.png

In this subroutine, OPEN DATASET is used to read the source file and write its content to the target file; every line is commented and easy to understand.

File Backup4.png

 

Step 4: Close the datasets; the subroutine is then complete. The subroutine is called inside the loop statement, which runs once for every file in the folder path.

File Backup5.png

After executing the program, the files are copied to the target path in the AL11 directory, along the lines of the sketch below.
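Since the program itself is shown above only as screenshots, here is a compact sketch of the same flow. The paths and the header-file convention are placeholders taken from the description (in the original the source path is read from TVARV):

* Sketch: back up all files from one AL11 folder to another.
REPORT ybw_file_backup_sketch.

CONSTANTS: c_src_path TYPE epsdirnam VALUE '/interfaces/BW/out',
           c_tgt_path TYPE string    VALUE '/interfaces/BW/backup/'.

DATA: lt_files TYPE STANDARD TABLE OF epsfili,
      ls_file  TYPE epsfili,
      lv_src   TYPE string,
      lv_tgt   TYPE string,
      lv_line  TYPE string.

* List all files in the source folder
CALL FUNCTION 'EPS_GET_DIRECTORY_LISTING'
  EXPORTING
    dir_name = c_src_path
  TABLES
    dir_list = lt_files
  EXCEPTIONS
    OTHERS   = 1.
IF sy-subrc <> 0.
  MESSAGE 'Could not read source directory' TYPE 'E'.
ENDIF.

LOOP AT lt_files INTO ls_file.
  " Skip header files, identified here by a leading 'S' as in the text
  IF ls_file-name(1) = 'S'.
    CONTINUE.
  ENDIF.
  CONCATENATE c_src_path '/' ls_file-name INTO lv_src.
  " Append date and time so daily backups do not overwrite each other
  CONCATENATE c_tgt_path ls_file-name '_' sy-datum '_' sy-uzeit INTO lv_tgt.

  OPEN DATASET lv_src FOR INPUT IN TEXT MODE ENCODING DEFAULT.
  IF sy-subrc <> 0.
    CONTINUE.                           " skip unreadable files
  ENDIF.
  OPEN DATASET lv_tgt FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
  DO.
    READ DATASET lv_src INTO lv_line.
    IF sy-subrc <> 0.
      EXIT.
    ENDIF.
    TRANSFER lv_line TO lv_tgt.
  ENDDO.
  CLOSE DATASET lv_src.
  CLOSE DATASET lv_tgt.
ENDLOOP.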

Implementing Data Warehouse InfoObjects - Part 1: ABAP Dictionary Objects


Data Warehouse InfoObjects can be defined as the data elements that populate the Data Warehouse layer of your Enterprise Data Warehouse. For a conceptual overview, please refer to my blog Introducing Data Warehouse InfoObjects - Part 1: Conceptual Overview. For SAP source systems it is advisable to take advantage of the structured and comprehensive ABAP Data Dictionary to generate these Data Warehouse InfoObjects. For more technical details, please refer to my blog Introducing Data Warehouse InfoObjects - Part 2: Technical Details.

I created an ABAP program to generate Data Warehouse InfoObjects for SAP source systems’ DataSources. For more information on using the program please refer to the blog series:

 

In three documents I would like to share detailed technical instructions on how to create the ABAP program and all related objects. Part 1 explains how to create all ABAP Dictionary objects. Implementing Data Warehouse InfoObjects - Part 2: ABAP Programming & Other Objects focuses on ABAP programming and other ABAP Workbench objects. Implementing Data Warehouse InfoObjects - Part 3: ABAP Developments in SAP Source System describes all ABAP developments to be done in every SAP source system that is subject to generating Data Warehouse InfoObjects.

Step 1: Create Domains

SAP Menu: Tools > ABAP Workbench > Development > ABAP Dictionary

T/code: SE11

 

Create Domains YBWPATTERN and YBWPATTERNTYPE as shown in the screenshots.

 

Fig_01_Domain_Pattern_String.jpg

Figure 1: Domain for Pattern String

 

Fig_02_Domain_Pattern_Type_1.jpg

Figure 2: Domain for Pattern Type (1)

 

Fig_03_Domain_Pattern_Type_2.jpg

Figure 3: Domain for Pattern Type (2)

Step 2: Create Data Elements

SAP Menu: Tools > ABAP Workbench > Development > ABAP Dictionary

T/code: SE11

 

Create Data Elements YBWPATTERN and YBWPATTERNTYPE as shown in the screenshots.

 

Fig_04_Data_Element_Pattern_String.jpg

Figure 4: Data Element for Pattern String

 

Fig_05_Data_Element_Pattern_Type.jpg

Figure 5: Data Element for Pattern Type

Step 3: Create Search Helps

SAP Menu: Tools > ABAP Workbench > Development > ABAP Dictionary

T/code: SE11

 

Create Search Helps YBW_DSFIELD and YBW_IOBJFNM as shown in the screenshots.

 

Fig_06_Search_Help_DataSource_Fields.jpg

Figure 6: Search Help for DataSource Fields

 

Fig_07_Search_Help_InfoObject_Field_Name.jpg

Figure 7: Search Help for InfoObject and Field Name

Step 4: Create Tables

SAP Menu: Tools > ABAP Workbench > Development > ABAP Dictionary

T/code: SE11

 

Create Tables YBWADMIN, YBWIOBJBAS, YBWIOBJREF, YBWIOBJUNI, YBWIOBJKYF, YBWIOBJTIM, YBWIOBJEXP and YBWMAPPING as shown in the screenshots.

 

Note: The following Delivery and Maintenance properties apply for all tables.

 

Fig_08_Delivery_Maintenance_Properties.jpg

Figure 8: Delivery and Maintenance Properties

 

Fig_09_Table_Administration_Settings.jpg

Figure 9: Control Table - Administration Settings

 

Fig_10_Table_Basic_Characteristics.jpg

Figure 10: Metadata Repository Table - Basic Characteristics

 

Fig_11_Table_Characteristics_with_Reference.jpg

Figure 11: Metadata Repository Table - Characteristics with Reference

 

Fig_12_Table_Units.jpg

Figure 12: Metadata Repository Table - Units

 

Fig_13_Table_Key_Figures.jpg

Figure 13: Metadata Repository Table - Key Figures

 

Fig_14_Table_Time_Characteristics_1.jpg

Figure 14: Customizing Table - Time Characteristics (1)

 

Fig_15_Table_Time_Characteristics_2.jpg

Figure 15: Customizing Table - Time Characteristics (2)

 

Fig_16_Table_Time_Characteristics_3.jpg

Figure 16: Customizing Table - Time Characteristics (3)

 

Fig_17_Table_Exclude_Patterns.jpg

Figure 17: Customizing Table - Exclude Patterns

 

Fig_18_Table_Mapping_1.jpg

Figure 18: Metadata Repository Table - Mapping (1)

 

Fig_19_Table_Mapping_2.jpg

Figure 19: Metadata Repository Table - Mapping (2)

 

Fig_20_Table_Mapping_3.jpg

Figure 20: Metadata Repository Table - Mapping (3)

 

Fig_21_Table_Mapping_4.jpg

Figure 21: Metadata Repository Table - Mapping (4)

 

Fig_22_Table_Mapping_5.jpg

Figure 22: Metadata Repository Table - Mapping (5)

Step 5: Create Lock Objects

SAP Menu: Tools > ABAP Workbench > Development > ABAP Dictionary

T/code: SE11

 

Create Lock Objects EYBWIOBJBAS, EYBWIOBJREF, EYBWIOBJUNI, EYBWIOBJKYF, and EYBWMAPPING as shown in the screenshots.

 

Fig_23_Lock_Object_Basic_Characteristics_1.jpg

Figure 23: Lock Object - Basic Characteristics (1)

 

Fig_24_Lock_Object_Basic_Characteristics_2.jpg

Figure 24: Lock Object - Basic Characteristics (2)

 

Fig_25_Lock_Object_Characteristics_with_Reference_1.jpg

Figure 25: Lock Object - Characteristics with Reference (1)

 

Fig_26_Lock_Object_Characteristics_with_Reference_2.jpg

Figure 26: Lock Object - Characteristics with Reference (2)

 

Fig_27_Lock_Object_Units_1.jpg

Figure 27: Lock Object - Units (1)

 

Fig_28_Lock_Object_Units_2.jpg

Figure 28: Lock Object - Units (2)

 

Fig_29_Lock_Object_Key_Figures_1.jpg

Figure 29: Lock Object - Key Figures (1)

 

Fig_30_Lock_Object_Key_Figures_2.jpg

Figure 30: Lock Object - Key Figures (2)

 

Fig_31_Lock_Object_Mapping_1.jpg

Figure 31: Lock Object – Mapping Rules (1)

 

Fig_32_Lock_Object_Mapping_2.jpg

Figure 32: Lock Object – Mapping Rules (2)

Implementing Data Warehouse InfoObjects - Part 2: ABAP Programming & Other Objects


Data Warehouse InfoObjects can be defined as the data elements that populate the Data Warehouse layer of your Enterprise Data Warehouse. For a conceptual overview, please refer to my blog Introducing Data Warehouse InfoObjects - Part 1: Conceptual Overview. For SAP source systems it is advisable to take advantage of the structured and comprehensive ABAP Data Dictionary to generate these Data Warehouse InfoObjects. For more technical details, please refer to my blog Introducing Data Warehouse InfoObjects - Part 2: Technical Details.

I created an ABAP program to generate Data Warehouse InfoObjects for SAP source systems’ DataSources. For more information on using the program please refer to the blog series:

 

In three documents I would like to share detailed technical instructions on how to create the ABAP program and all related objects. Implementing Data Warehouse InfoObjects - Part 1: ABAP Dictionary Objects explains how to create all ABAP Dictionary objects. Part 2 focuses on ABAP programming and other ABAP Workbench objects. Implementing Data Warehouse InfoObjects - Part 3: ABAP Developments in SAP Source System describes all ABAP developments to be done in every SAP source system that is subject to generating Data Warehouse InfoObjects.

Step 1: Create Number Range Object

SAP Menu: Tools > ABAP Workbench > Development > Other Tools > Number Ranges

T/code: SNRO

 

Create Number Range Object YBWDSOTMPL as shown in the screenshots.

 

Fig_01_Number_Range_Object_1.jpg

Figure 1: Number Range Object (1)

 

Fig_02_Number_Range_Object_2.jpg

Figure 2: Number Range Object (2)

 

Fig_03_Number_Range_Object_3.jpg

Figure 3: Number Range Object (3)

Step 2: Create Application Log Object and Subobject

SAP Menu: Tools > ABAP Workbench > Development > Other Tools > Application Log

T/code: SLG0

 

Create Application Log Object YBW and Application Log Subobject YBWDWHIOBJ as shown in the screenshot.

 

Fig_04_Application_Log_Object.jpg

Figure 4: Application Log Object and Subobject

Step 3: Create Authorization Object

SAP Menu: Tools > ABAP Workbench > Development > Other Tools > Authorization Objects > Objects

T/code: SU21

 

Create Authorization Object Class YBW and Authorization Object YBWDWHIOBJ as shown in the screenshots.

 

Fig_05_Authorization_Object_Class.jpg

Figure 5: Authorization Object Class

 

Fig_06_Authorization_Object_1.jpg

Figure 6: Authorization Object (1)

 

Fig_07_Authorization_Object_2.jpg

Figure 7: Authorization Object (2)

Step 4: Create Message Class

SAP Menu: Tools > ABAP Workbench > Development > Programming Environment > Messages

T/code: SE91

 

Create Message Class YBWIOBJ as shown in the screenshots.

 

Fig_08_Message_Class_1.jpg

Figure 8: Message Class (1)

 

Fig_09_Message_Class_2.jpg

Figure 9: Message Class (2)

 

Refer to the attached file YCX_BW_DWH_IOBJ_and_YBWIOBJ_v1.txt (Part 1 - Message Class YBWIOBJ) for an overview of all messages with their short text.

Step 5: Create Exception Class

SAP Menu: Tools > ABAP Workbench > Development > Class Builder

T/code: SE24

 

Create class YCX_BW_DWH_IOBJ as shown in the screenshots. Make sure that you flag checkbox With Message Class.

 

Fig_10_Exception_Class_1.jpg

Figure 10: Exception Class (1)

 

Furthermore, specify on the Properties tab Message Class YBWIOBJ.

 

Fig_11_Exception_Class_2.jpg

Figure 11: Exception Class (2)

 

Refer to the attached file YCX_BW_DWH_IOBJ_and_YBWIOBJ_v1.txt. From here you can quite easily build up the class using copy & paste:

  • Public Section: the source code can be found in Part 2 of the attached file;
  • Method CREATE_MSG: the source code can be found in Part 3 of the attached file;
  • Description of Title, Attributes, Method and Parameters can be found in Part 4 of the attached file.

Step 6: Create Class

SAP Menu: Tools > ABAP Workbench > Development > Class Builder

T/code: SE24

 

Create class YCL_BW_DWH_IOBJ as shown in the screenshot.

 

Fig_12_Class.jpg

Figure 12: Class

 

Refer to the attached file YCL_BW_DWH_IOBJ_v1.txt. From here you can quite easily build up the class using copy & paste:

  • Source code can be found in Part 1 of the attached file;
  • Description of Title, Attributes, Methods, Parameters, Types and Text Symbols can be found in Part 2 of the attached file.

 

A manual post-processing action is necessary to restrict the SAP BW system(s) in which the program can be executed. Refer to public static method SYSTEM_CHECK. Just after the comment * Execute system check you can define the system(s). Typically you enter here your Development and Sandbox systems.
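For illustration only (the shipped code is in the attachment), such a check usually just compares sy-sysid against a small whitelist; the system IDs below are placeholders:

* Illustrative shape of the check inside SYSTEM_CHECK; 'DEV' and 'SBX'
* are placeholder system IDs.
* Execute system check
IF sy-sysid <> 'DEV' AND sy-sysid <> 'SBX'.
  MESSAGE 'InfoObject generation is not allowed in this system' TYPE 'E'.
ENDIF.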

Step 7: Create Program

SAP Menu: Tools > ABAP Workbench > Development > ABAP Editor

T/code: SE38

 

Create program YBW_DWH_IOBJ_CREATE as shown in the screenshot.

 

Fig_13_Program.jpg

Figure 13: Program

 

Refer to the attached file YBW_DWH_IOBJ_CREATE_v1.txt. From here you can quite easily build up the program using copy & paste:

  • Source code can be found in Part 1 of the attached file;
  • Description of Title, Text Symbols and Selection Texts can be found in Part 2 of the attached file.

Step 8: Create Transaction Code

SAP Menu: Tools > ABAP Workbench > Development > Other Tools > Transactions

T/code: SE93

 

Create transaction code YIOBJ as shown in the screenshots.

 

Fig_14_Transaction_Code_1.jpg

Figure 14: Transaction Code (1)

 

Fig_15_Transaction_Code_2.jpg

Figure 15: Transaction Code (2)

Implementing Data Warehouse InfoObjects - Part 3: ABAP Developments in SAP Source System


Data Warehouse InfoObjects can be defined as the data elements that populate the Data Warehouse layer of your Enterprise Data Warehouse. For a conceptual overview, please refer to my blog Introducing Data Warehouse InfoObjects - Part 1: Conceptual Overview. For SAP source systems it is advisable to take advantage of the structured and comprehensive ABAP Data Dictionary to generate these Data Warehouse InfoObjects. For more technical details, please refer to my blog Introducing Data Warehouse InfoObjects - Part 2: Technical Details.

I created an ABAP program to generate Data Warehouse InfoObjects for SAP source systems’ DataSources. For more information on using the program please refer to the blog series:

 

In three documents I would like to share detailed technical instructions on how to create the ABAP program and all related objects. Implementing Data Warehouse InfoObjects - Part 1: ABAP Dictionary Objects explains how to create all ABAP Dictionary objects. Implementing Data Warehouse InfoObjects - Part 2: ABAP Programming & Other Objects focuses on ABAP programming and other ABAP Workbench objects. Part 3 describes all ABAP developments to be done in every SAP source system that is subject to generating Data Warehouse InfoObjects.

Step 1: Create Line Structures

SAP Menu: Tools > ABAP Workbench > Development > ABAP Dictionary

T/code: SE11

 

Create Line Structures YBW_S_DATEL, YBW_S_DATELTEXT, YBW_S_DOMAIN and YBW_S_DOMAINTEXT as shown in the screenshots.

 

Fig_01_Line_Structure_Data_Element.jpg

Figure 1: Line Structure Data Element

 

Fig_02_Line_Structure_Data_Element_Text.jpg

Figure 2: Line Structure Data Element Text

 

Fig_03_Line_Structure_Domain.jpg

Figure 3: Line Structure Domain

 

Fig_04_Line_Structure_Domain_Text.jpg

Figure 4: Line Structure Domain Text

Step 2: Create Table Types

Note: This activity has to be executed in every SAP source system

SAP Menu: Tools > ABAP Workbench > Development > ABAP Dictionary

T/code: SE11

 

Create Table Types YBW_T_DATEL, YBW_T_DATELTEXT, YBW_T_DOMAIN and YBW_T_DOMAINTEXT as shown in the screenshots.

 

Fig_05_Table_Type_Data_Element.jpg

Figure 5: Table Type Data Element

 

Fig_06_Table_Type_Data_Element_Text.jpg

Figure 6: Table Type Data Element Text

 

Fig_07_Table_Type_Domain.jpg

Figure 7: Table Type Domain

 

Fig_08_Table_Type_Domain_Text.jpg

Figure 8: Table Type Domain Text

Step 3: Create Function Modules

Note: This activity has to be executed in every SAP source system

SAP Menu: Tools > ABAP Workbench > Development > Function Builder

T/code: SE37

 

Create Function Group YBWDWHIOBJ, Function Module Y_BW_RETRIEVE_DATEL_TEXTS and Function Module Y_BW_RETRIEVE_DOMAIN_TEXTS as shown in the screenshots.

 

Fig_09_Function_Group.jpg

Figure 9: Function Group

 

Fig_10_Function_Module_Data_Element_Texts_1.jpg

Figure 10: Function Module for Data Element Texts (1)

 

Fig_11_Function_Module_Data_Element_Texts_2.jpg

Figure 11: Function Module for Data Element Texts (2)

 

Fig_12_Function_Module_Data_Element_Texts_3.jpg

Figure 12: Function Module for Data Element Texts (3)

 

Refer to the attached file Y_BW_RETRIEVE_DATEL_TEXTS_v1.txt for the source code to be inserted.

 

Fig_13_Function_Module_Domain_Texts_1.jpg

Figure 13: Function Module for Domain Texts (1)

 

Fig_14_Function_Module_Domain_Texts_2.jpg

Figure 14: Function Module for Domain Texts (2)

 

Fig_15_Function_Module_Domain_Texts_3.jpg

Figure 15: Function Module for Domain Texts (3)

 

Refer to the attached file Y_BW_RETRIEVE_DOMAIN_TEXTS_v1.txt for the source code to be inserted.
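The attachments contain the actual implementations. Conceptually, both function modules come down to reads against the ABAP Dictionary text tables DD04T (data element texts) and DD01T (domain texts); the sketch below only illustrates that idea, with assumed variable names, and is not the attached code:

* Conceptual sketch: read data element and domain short texts.
DATA: lv_rollname TYPE dd04t-rollname VALUE 'YBWPATTERN',
      lv_domname  TYPE dd01t-domname  VALUE 'YBWPATTERN',
      lv_deltext  TYPE dd04t-ddtext,
      lv_domtext  TYPE dd01t-ddtext.

SELECT SINGLE ddtext FROM dd04t INTO lv_deltext
  WHERE rollname   = lv_rollname
    AND ddlanguage = sy-langu
    AND as4local   = 'A'.                 " active version

SELECT SINGLE ddtext FROM dd01t INTO lv_domtext
  WHERE domname    = lv_domname
    AND ddlanguage = sy-langu
    AND as4local   = 'A'.

WRITE: / 'Data element text:', lv_deltext,
       / 'Domain text:',       lv_domtext.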


SAP NetWeaver BI Java 7.3 Patching Guide by Start Release


SAP NetWeaver BI Java 7.3 SP/Patch Upgrade without NW Stack Upgrade

For each NW Support Package Stack level (and the BI Java SP level shipped with it), the currently recommended BI Java SP/patch upgrade level that can be applied without an NW SP stack upgrade is listed below, together with the .SCA archives to deploy.

NW SPS 09 (BI Java SP 09) -> recommended BI Java SP 09, Patch #30:
• BIBASES09P_30-10008077.SCA - Patch 30 for BI BASE SERVICES 7.3 SP09 (PL 30)
• BIWEBAPP09P_30-10008080.SCA - Patch 30 for BI WEB APPLICATIONS 7.3 SP09 (PL 30)
• BIBASEE09P_30-10009619.SCA - Patch 30 for BI BASE EXPORT SERVICES 7.3 SP09 (PL 30)
• BIBASEB09P_30-10009618.SCA - Patch 30 for BI BASE FOUNDATION 7.3 SP09 (PL 30)
• BIIBC09P_30-10008078.SCA - Patch 30 for BI INFORMATION BROADCASTING 7.3 SP09 (PL 30)

NW SPS 08 (BI Java SP 08) -> recommended BI Java SP 09, Patch #30: the same five archives and patches as for NW SPS 09.

NW SPS 07 (BI Java SP 07) -> recommended BI Java SP 09, Patch #30: the same five archives and patches as for NW SPS 09.

NW SPS 05 (BI Java SP 05) -> recommended BI Java SP 08, Patch #35:
• BIBASES08H_35-10008077.SCA - Hotfix 5 for BI BASE SERVICES 7.3 SP08 (PL 35)
• BIWEBAPP08H_35-10008080.SCA - Hotfix 5 for BI WEB APPLICATIONS 7.3 SP08 (PL 35)
• BIBASEE08H_35-10009619.SCA - Hotfix 5 for BI BASE EXPORT SERVICES 7.3 SP08 (PL 35)
• BIBASEB08H_35-10009618.SCA - Hotfix 5 for BI BASE FOUNDATION 7.3 SP08 (PL 35)
• BIIBC08H_35-10008078.SCA - Hotfix 5 for BI INFORMATION BROADCASTING 7.3 SP08 (PL 35)

NW SPS 04 (BI Java SP 04) -> recommended BI Java SP 07, Patch #42:
• BIBASES07H_42-10008077.SCA - Hotfix 2 for BI BASE SERVICES 7.3 SP07 (PL 42)
• BIWEBAPP07H_42-10008080.SCA - Hotfix 2 for BI WEB APPLICATIONS 7.3 SP07 (PL 42)
• BIBASEE07H_42-10009619.SCA - Hotfix 2 for BI BASE EXPORT SERVICES 7.3 SP07 (PL 42)
• BIBASEB07H_42-10009618.SCA - Hotfix 2 for BI BASE FOUNDATION 7.3 SP07 (PL 42)
• BIIBC07H_42-10008078.SCA - Hotfix 2 for BI INFORMATION BROADCASTING 7.3 SP07 (PL 42)

NW SPS 03 (BI Java SP 03) -> recommended BI Java SP 05, Patch #41:
• BIBASES05H_41-10008077.SCA - Hotfix 1 for BI BASE SERVICES 7.3 SP05 (PL 41)
• BIWEBAPP05H_41-10008080.SCA - Hotfix 1 for BI WEB APPLICATIONS 7.3 SP05 (PL 41)
• BIBASEE05H_41-10009619.SCA - Hotfix 1 for BI BASE EXPORT SERVICES 7.3 SP05 (PL 41)
• BIBASEB05H_41-10009618.SCA - Hotfix 1 for BI BASE FOUNDATION 7.3 SP05 (PL 41)
• BIIBC05H_41-10008078.SCA - Hotfix 1 for BI INFORMATION BROADCASTING 7.3 SP05 (PL 41)

NW SPS 02 (BI Java SP 02) -> recommended BI Java SP 04, Patch #14:
• BIBASES04H_14-10008077.SCA - Hotfix 4 for BI BASE SERVICES 7.3 SP04 (PL 10)
• BIWEBAPP04H_14-10008080.SCA - Hotfix 4 for BI WEB APPLICATIONS 7.3 SP04 (PL 10)
• BIBASEE04H_14-10009619.SCA - Hotfix 4 for BI BASE EXPORT SERVICES 7.3 SP04 (PL 10)
• BIBASEB04H_14-10009618.SCA - Hotfix 4 for BI BASE FOUNDATION 7.3 SP04 (PL 10)
• BIIBC04H_14-10008078.SCA - Hotfix 4 for BI INFORMATION BROADCASTING 7.3 SP04 (PL 10)

NW SPS 01 (BI Java SP 01) -> recommended BI Java SP 03, Patch #03:
• BIBASES03P_3-10008077.SCA - Patch 3 for BI BASE SERVICES 7.3 SP03 (PL 3)
• BIWEBAPP03P_3-10008080.SCA - Patch 3 for BI WEB APPLICATIONS 7.3 SP03 (PL 3)
• BIBASEE03P_3-10009619.SCA - Patch 3 for BI BASE EXPORT SERVICES 7.3 SP03 (PL 3)
• BIBASEB03P_3-10009618.SCA - Patch 3 for BI BASE FOUNDATION 7.3 SP03 (PL 3)
• BIIBC03P_3-10008078.SCA - Patch 3 for BI INFORMATION BROADCASTING 7.3 SP03 (PL 3)

NW SPS 00 (BI Java SP 00) -> recommended BI Java SP 02, Patch #4:
• BIBASES02P_4-10008077.SCA - Patch 4 for BI BASE SERVICES 7.3 SP02 (PL 4)
• BIWEBAPP02P_4-10008080.SCA - Patch 4 for BI WEB APPLICATIONS 7.3 SP02 (PL 4)
• BIBASEE02P_4-10009619.SCA - Patch 4 for BI BASE EXPORT SERVICES 7.3 SP02 (PL 4)
• BIBASEB02P_4-10009618.SCA - Patch 4 for BI BASE FOUNDATION 7.3 SP02 (PL 4)
• BIIBC02P_4-10008078.SCA - Patch 4 for BI INFORMATION BROADCASTING 7.3 SP02 (PL 4)

* For further upgrades you need to upgrade the NW Support Package Stack level. For technical reasons (codeline synchronization), skipping the NW Support Package Stack upgrade while still upgrading the BI Java SP/patch level is only possible for three NW SPS levels in a row (e.g. when starting on an SPS 04 BI Java patch level, it is possible to upgrade to an SPS 06 BI Java patch level at the most).
Download SAP NW 7.3 BI Java Support Packages in the Software Distribution Center using the following path:
http://service.sap.com/swdc --> Support Packages & Patches --> Browse our Download Catalog --> SAP NetWeaver and complementary products --> SAP NETWEAVER --> SAP NETWEAVER 7.3 --> Entry by Component --> BI Java
Additional Links:

3 Ways to find PSA Table Name


Hi All,

 

This document shows how to find the PSA table name.

 

There are three ways to find the PSA table name.

 

1. Use table RSDSTS in SE11: click Display, choose Contents, enter the DataSource name and execute. You will see the details in a row, as below; a code sketch follows the screenshot.

 

 

1.jpg
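The same lookup can also be scripted instead of browsing SE16/SE11. A sketch, assuming the PSA table name sits in an RSDSTS field named ODSNAME_TECH (this field name is an assumption; verify it in SE11 first):

* Sketch: read the PSA table name for a DataSource from RSDSTS.
PARAMETERS: p_ds  TYPE rsdsts-datasource,
            p_sys TYPE rsdsts-logsys.

DATA lv_psa TYPE rsdsts-odsname_tech.   " field name assumed

SELECT SINGLE odsname_tech FROM rsdsts INTO lv_psa
  WHERE datasource = p_ds
    AND logsys     = p_sys
    AND objvers    = 'A'.

IF sy-subrc = 0.
  WRITE: / 'PSA table:', lv_psa.
ELSE.
  WRITE: / 'No active entry found in RSDSTS.'.
ENDIF.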

 

2. In RSA1, select InfoProvider or DataSource on the left, navigate to the respective DataSource, select your DataSource (marked with a green circle) and click the wrench icon (marked with a red circle). The screen below comes up, where you can see the PSA table.

 

 

233.jpg

 

My 2nd step and the screens below show the same thing, but the way is different: in the link below, you go to the DataSource and choose, from the menu, Goto -> Technical Attributes.

http://rahulursbw.wordpress.com/2010/03/21/find-the-psa-database-table-and-data/

 

3. In RSA1, choose InfoProvider on the left; on the right, go to the respective DataSource and right-click -> Manage. A popup screen appears, and at the top you may see the request PSA name /BIC/B0000XXX. Append '000' to this name to get the PSA table name. For example, my screen below shows /BIC/B0000574, so my PSA table name is /BIC/B0000574000. You can see the same name in the second screen below.

 

4.jpg

My screen below shows the PSA table name /BIC/B0000574000:

 

5.jpg

 

A small piece of information, but it might be useful.

 

Thanks for reading.....

Why use Report Program to generate query output into a flat file and How?


Introduction

BI reports designed in Query Designer can be published in various ways, and this data can be used for further processing by different SAP and non-SAP systems.

The general approach for pushing data from BI to other systems is the APD, but because of APD limitations in some specific cases, we used a report program based on the standard SAP function module RRW3_GET_QUERY_VIEW_DATA to generate the query output into a flat file on the application server.

This approach has some benefits over the APD, and we can avoid the performance overhead of using APDs.

 

Why this Approach?

Objective: Our objective is to execute a query for a certain set of selections and transfer the query output into a flat file.

Solutions:

  • APD:

The most commonly used approach for this kind of requirement is the APD. An APD involves data sources, transformations and data targets: it reads the data from the data sources, performs the transformations and finally stores the result data in the data targets. It is a workbench with an intuitive GUI for creating, executing and monitoring analysis processes. The APD is easy to use and efficient in most scenarios, but it has some limitations.

Limitations of APD’s

  1. APDs have a performance overhead due to intermediate storing of data in a flat BI database table after pulling it from the InfoSources.
  2. In the case of mass data with millions of records, the data needs to be processed in pieces, because an internal table in the analysis process runtime holds the data. For this, the option "Process data in memory" has to be deactivated; the data is then stored in temporary database tables after each "step" of the analysis process.
  3. An APD cannot be used for queries with multiple structures in rows and columns.
  4. If the same OHD needs to be run for different queries, it cannot run in parallel for multiple queries, because an OHD is only capable of handling one query at a time.
  • Report Program :

A report program has some benefits over the APD and can work in specific scenarios where the APD cannot be used.

  1. It can be used for queries with multiple structures in rows and columns.
  2. There is no performance overhead from staging the data on a BI server, because the data is written directly to a flat file that can later be pushed to the non-SAP system.
  3. The new design is capable of parallel processing.

Design Process steps for using Report Program

  1. A process chain executes the report program and passes the different selection inputs.
  2. The report program executes the query for the given selection using FM 'RRW3_GET_QUERY_VIEW_DATA'.
  3. The FM output is processed internally; the output structure is generated dynamically at runtime and written to flat files (SAP directory / user desktop).
  4. After successful execution of the report program, the process chain executes an FMS step to transfer the file to the non-SAP system.

Inputs to report:

Info provider name: optional field of type string (30)

Query name: mandatory field of type string (30)

View name: optional field of type string (30)

Parameters: optional table with two columns [Parameter name, string(30); Parameter value, string(250)]

Stored Procedure name: mandatory field of type string (50)   

 

 

Logic Applied

  • Selection screen for inputs:

DATA: DEPARTURE_DATE TYPE SY-DATUM,
      FISCAL_YEAR    TYPE GJAHR.

SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.

SELECT-OPTIONS: DEPT_DTE FOR DEPARTURE_DATE.   "range of values (multiple) needed for selection

PARAMETERS: QUERY  TYPE RSZCOMPID  DEFAULT 'ZZCRM_U02_Q002_ESM_MS',
            INFOPR TYPE RSINFOPROV DEFAULT 'ZCRM_U02',
            FISCYR TYPE GJAHR      DEFAULT '2011'.   "single value

SELECTION-SCREEN END OF BLOCK B1.

  • Passing selection parameters to function module internal table :

CONSTANTS: C_VAR_NAME          TYPE STRING VALUE 'VAR_NAME_',          "*_1, *_2 ... naming used by the FM
           C_VAR_OPERATOR      TYPE STRING VALUE 'VAR_OPERATOR_',
           C_VAR_VALUE_LOW_EXT TYPE STRING VALUE 'VAR_VALUE_LOW_EXT_'.

DATA: T_TABPARAM TYPE TABLE OF W3QUERY WITH HEADER LINE,
      I_TABPARAM TYPE RRXW3TQUERY,
      V_EXT_NUM(4) TYPE C.

V_EXT_NUM = 1.

CONDENSE V_EXT_NUM.

LOOP AT DEPT_DTE.

  CONCATENATE C_VAR_NAME V_EXT_NUM INTO T_TABPARAM-NAME.

  T_TABPARAM-VALUE = 'ZDEPT_DTE'.        "technical name of the query input selection variable

  APPEND T_TABPARAM.

  CONCATENATE C_VAR_OPERATOR V_EXT_NUM INTO T_TABPARAM-NAME.

  T_TABPARAM-VALUE = 'EQ'.               "operator

  APPEND T_TABPARAM.

  CONCATENATE C_VAR_VALUE_LOW_EXT V_EXT_NUM INTO T_TABPARAM-NAME.

  T_TABPARAM-VALUE = DEPT_DTE-LOW.       "value provided by the selection screen

  APPEND T_TABPARAM.

  V_EXT_NUM = V_EXT_NUM + 1.             "for multiple values this increment is compulsory

  CONDENSE V_EXT_NUM.

  APPEND LINES OF T_TABPARAM[] TO I_TABPARAM[].

  REFRESH T_TABPARAM.                    "avoid duplicating entries on the next loop pass

ENDLOOP.

  • Function module Code:

CALL FUNCTION 'RRW3_GET_QUERY_VIEW_DATA'
  EXPORTING
    I_INFOPROVIDER          = INFOPR
    I_QUERY                 = QUERY
    I_VIEW_ID               = ''
    I_T_PARAMETER           = I_TABPARAM
  IMPORTING
    E_AXIS_INFO             = I_AXIS_INFO
    E_CELL_DATA             = T_CELL_DATA     "columns data
    E_AXIS_DATA             = T_AXIS_DATA     "rows data
    E_TXT_SYMBOLS           = T_TXT_SYMBOLS
  EXCEPTIONS
    NO_APPLICABLE_DATA      = 1
    INVALID_VARIABLE_VALUES = 2
    NO_AUTHORITY            = 3
    ABORT                   = 4
    INVALID_INPUT           = 5
    INVALID_VIEW            = 6
    OTHERS                  = 7.

  • Reading characteristics (rows)

Read table I_AXIS_INFO for axis '001'. The row axis data is again a table with the fields:

CHANM: technical name of the characteristic

CHAVL_EXT: key value of the characteristic

CAPTION: text value of the characteristic

Read all the rows and pass them to an internal table IT_ROW_COLUMN.

  • Reading key figures

Read table I_AXIS_INFO for axis '000' to get the number of key figures (columns). Table T_CELL_DATA contains all the column values.

Read the columns and populate them against the rows in the same internal table IT_ROW_COLUMN.

  • Creation of file in Application server:

CONCATENATE '/interfaces/BW/' 'FILENAME.CSV' INTO V_FILENAME.
"To create the file on your desktop, use CONCATENATE 'C:/TEMP/' 'FILENAME.CSV' INTO V_FILENAME,
"or the path can be passed as a selection parameter.

OPEN DATASET V_FILENAME FOR OUTPUT IN TEXT MODE
     ENCODING DEFAULT MESSAGE D_MSG_TEXT.

IF SY-SUBRC NE 0.
  WRITE: 'FILE CANNOT BE OPENED. REASON:', D_MSG_TEXT.
  EXIT.
ENDIF.

  • Transfer table content to file

LOOP AT IT_ROW_COLUMN INTO WA_ROW_COL.

  CONCATENATE
      WA_ROW_COL-CHARACTERISTIC1_KEY
      WA_ROW_COL-CHARACTERISTIC2_KEY
      WA_ROW_COL-CHARACTERISTIC2_TEXT
      WA_ROW_COL-KEYFIGURE1
      WA_ROW_COL-KEYFIGURE2
    INTO V_STRING SEPARATED BY '|'.   "type of separator required in the file

  TRANSFER V_STRING TO V_FILENAME.

ENDLOOP.

CLOSE DATASET V_FILENAME.

The file written to the application server can be viewed after execution of the report program using transaction AL11, in the /interfaces/BW/ directory.

     

Limitations:

1. The query structure and the required output are static and need to be coded in your report program.
2. One report program can be used with only one query, so each new query requires a new report program.

Points to Remember:

1. Selection inputs need to be provided in the correct format; otherwise the query will not raise an error but will also not filter the output.
2. If the amount of data is very large, the selection inputs should be chosen carefully to avoid a dump in the program.

     

Sample Example

Query Details:

sample Query Details .jpg

Sample Code (refer to the attachment)

Sample Output (refer to the attachment)

All about a Multiprovider


    MultiProvider:

    A MultiProvider is an InfoProvider that uses a union operation to combine data coming from different InfoProviders. A MultiProvider itself does not contain any data; it reads the data from the InfoProviders on which it is based at reporting time.

    Procedure to create a MultiProvider

    Step 1) Right-click your InfoArea and choose Create MultiProvider.

     

    1.JPG

     

    Step 2) Enter the technical name and description of the MultiProvider; in general, user-created objects start with Z or Y.

     

    Step 3) Click the Create icon.

    2.JPG

     

    3.JPG

     

    Step 4)

    InfoProviders that can be used to build a MultiProvider:

    1)     InfoCubes (including VirtualProviders)

    2)     DataStore objects

    3)     InfoObjects

    4)     InfoSets

    5)     Aggregation levels

    Step 5)

    Here you can select how the InfoProviders are displayed:

    1) Display all InfoProviders:

    This is the default option. It displays the entire list of InfoProviders for the selected object type.

     

     

    4.JPG

    Step 6) Use of the Identification of characteristics option:

    When the same characteristic comes from two or more InfoProviders, you need to identify which one to use in reporting. In general, at least two characteristics coming from two different InfoProviders should be selected. You can also identify fields that are not common but are used in reporting; for those you will get '#' (not assigned) values, which is the nature of a MultiProvider.

     

     

    We do the same for key figures:

    Step 7)

    Click the Select Key Figures option and choose the key figures required.

    5.JPG

    Using InfoProviders that do not store data physically in a MultiProvider:

    A MultiProvider can also be created on virtual InfoCubes and InfoSets that do not store data physically.

    Note: Every data target is an InfoProvider, but not every InfoProvider is a data target.

    Data target: an object that stores data physically.

     

     

     

     

    Role of a MultiProvider in query performance:

    In general, when a query is executed, a sub-query runs internally to fetch the data. In the case of a MultiProvider, the main query is divided into a number of sub-queries that execute in parallel: one sub-query is generated for each InfoProvider associated with the MultiProvider.

    If you have data from 2005 to 2009 in a single InfoCube, you can partition it into 5 different InfoCubes depending on the year and build a MultiProvider on top of them, which decreases the query execution time and increases performance.

     

    Multiprovider in BI 7.0 flow:

     

    A MultiProvider is placed between the query and the InfoProviders.

    6.JPG

    Recommendations:

     

    It is not recommended to use more than 10 InfoProviders in one MultiProvider.

     

     

     

     


     

    Standard DSO tables


    Standard DataStore Object

    A standard DSO (SDSO) has three tables:

    > New data table

    > Active data table

    > Change log table

     

    In this screenshot, we have one request loaded to the DSO; we will analyse how the three tables behave when we load another request into it.

     

    1.png

     

     

    Now, we will load another request with the same set of characteristics, but with changed key figure values.

    First, we will analyse the data with the key figures in overwrite mode of aggregation, checking the new data table, active data table and change log table for the changes.

    Second, we will change the key figures material cost, bill amount and profit of sales to summation in the transformation rules, leave the key figure material quantity in overwrite mode, and then analyse the change log table and active data table again.

    The setting to change a key figure from overwrite to summation is available in the transformation:

    Go to the transformation --> click Change --> double-click the rule type --> open the Aggregation dropdown --> select Summation.

     

    New data table:

    Newly loaded data sits in the new data table before activation of the request.

    You can see in the screenshot below that we have loaded another request, and it is not activated yet.

     

    2.png

     

    Now, we check the data in the new data table by going to the Contents tab: click 'New table' and execute to display the data.

     

    3.png

     

    Activation:

    Go to the Requests tab of the InfoProvider administration and click 'Activate' to activate the request in the DSO.

    4.png

    After the activation, we will check the three tables and discuss the changes in them.

    5.png

     

    Now, you can see that the data in the newly loaded request is activated, and it is ready for reporting.

     

    Go to the contents tab, and check the new table for the data. 

     

    6.png

     

    As you can see in the above screenshot, there are ‘0’ records in the new data table after the activation of the request.

     

     

    Change log table:

    After the activation of the request, the changes are stored in the change log table.

     

    7.png

    The change log table now has 90 entries; the records shown are from the previous load. The corresponding records for the current load are shown below.

     

    8.png

     

    You can see that when the second request is activated, the change log contains the first record multiplied by '-1' (the before-image), followed by the new record (the after-image). Because the DSO is in overwrite mode, these are the changes needed to overwrite the previous data.

    For the bill amount of 'C0001':

    1. 5000 - 5000 = 0.
    2. 5000 overwritten by 1000 = 1000.

    For profit of sales (/BIC/ZPRF_SLS):

    1. 3000 - 3000 = 0.
    2. 3000 overwritten by 600 = 600.

    The end result is written to the active data table.

    9.png

    Now, let's look at the result for the same data, but with the key figures in summation mode.

    In the screen below, the change log contains 150 records: we had loaded the request in overwrite mode earlier and then deleted it, but entries in the change log are not removed when a request is deleted, although the active data table is adjusted.

    10.png

    So the 90 previous records plus 60 records for the recent load in summation mode give 150 records in the change log. Below you can see the changes written to the change log for the recent request.

     

    11.png

     

    You can see that for customer 'C0001' the bill amount is -5000 in the first entry and 6000 in the second entry.

    What exactly happens is: when you load the first request, the change log has the entry 5000 for the bill amount. When you load a delta with bill amount 1000, the system adds the delta entry 1000 to the previous value and subtracts the previous entry.

    For data that already exists, the system multiplies the old record by '-1' and then writes the new result.

    For customer 'C0001' - /BIC/ZBILLAMT:

    1. 5000 - 5000 = 0.
    2. 5000 + 1000 = 6000.

    The end result, 6000, for the bill amount of customer 'C0001' is available in the active data table.

    For customer 'C0001' - /BIC/ZPRF_SLS:

    1. 3000 - 3000 = 0.
    2. 3000 + 600 = 3600.

    The end result 3600 is written to the active data table.
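    To recap the arithmetic of the two aggregation modes for customer 'C0001' in one place (values from the example above):

    Bill amount, overwrite:     change log -5000 and +1000 -> active table 1000
    Bill amount, summation:     change log -5000 and +6000 -> active table 6000
    Profit of sales, overwrite: change log -3000 and +600  -> active table 600
    Profit of sales, summation: change log -3000 and +3600 -> active table 3600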

     

     

    Active data table:

    The data is stored in the active data table after the activation of the DSO request.

    The result shown below is for overwrite mode, so the old key figure values have been overwritten with the new ones.

     

    12.png

     

    The result shown below is for summation mode, with the same request loaded as before.

    The key figures of both requests are summed up, and the result is shown in the active data table.

     

    13.png

     

    As you can see in the screenshot above, the result is shown in the active data table. The data in the active data table is the final resultant data of the standard DSO; this is the data that is updated to further InfoProviders.

     

     

     

    For SAP BI videos, please visit my youtube channel below.

    https://www.youtube.com/user/shahid01282

     

    Please visit my personal website for more information about me.

    http://www.shaikshahidimam.com/

    Copying files from one folder path to other in AL11 directory


    Summary

     

    In BW, files from open hubs or FTP jobs are stored in a folder in the AL11 directory. If the files are overwritten every day, we may have to copy them to another folder in the directory as a backup. This document explains a detailed solution for taking such backups.

     

    SDN Photo.jpg

    Author: Sreekanth Surampally

    Company:  Accenture Inc.

    Created on: 22 Jun 2013.

     

    Business Scenario:

    In the BW system, 3 open hubs place files in an AL11 directory folder. An FTP program picks up all the files in the folder and sends them to an Oracle server; this happens every day, and the data sent to Oracle is the daily delta. On any given day, only the current day's files should be in the folder when the FTP program runs, so the files are overwritten every day with the new delta.

    No file history is kept in the directory, which makes troubleshooting difficult when, for example, data from a month ago turns out to be missing.

     

    Solution:

    An ABAP program was created to copy the files from one folder to another in the directory, and this step was added to the process chain.

     

    Step 1: Pass the source path value to variable LV_SRC_PATH, then to LV_DIR_PATH. The value can be maintained in the TVARV table, so read it from there and store it in the variable.

    Call the function module 'EPS_GET_DIRECTORY_LISTING' with the source path as an exporting parameter; it lists all the files in the folder.

     

    File Copy1.png

     

     

    Step 2: Loop over the internal table that holds all the files in the source path, ignoring header files (if the first letter is 'S', skip the file). Then concatenate the source path with the file name, and the target path with the file name plus date and time, so the copies are stored separately.

    Copy file2.png

    Step 3: Copy the source file content to the target file.

    Copy file3.png

    Step 4: In this subroutine, OPEN DATASET is used to read the source file and write its content to the target file; every line is commented and easy to follow.
    File Copy4.png

     

     

    Step 5: Close the dataset; the subroutine is then complete. This subroutine is called inside the loop, which runs once for every file in the folder path.

    Copy file5.png

    Step 6: After execution of the program, the files are copied to the target path in the AL11 directory. A condensed sketch of the whole copy routine is shown below.
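    For reference, a condensed sketch of the copy routine (the folder paths and file name are illustrative, and error handling is kept minimal):

DATA: LV_SRC_FILE TYPE STRING,
      LV_TGT_FILE TYPE STRING,
      LV_LINE     TYPE STRING.

" Source path + file name; target path + file name + date and time
CONCATENATE '/interfaces/BW/' 'FILE1.CSV' INTO LV_SRC_FILE.
CONCATENATE '/interfaces/BW/backup/' 'FILE1_' SY-DATUM SY-UZEIT '.CSV'
       INTO LV_TGT_FILE.

OPEN DATASET LV_SRC_FILE FOR INPUT  IN TEXT MODE ENCODING DEFAULT.
OPEN DATASET LV_TGT_FILE FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.

DO.
  READ DATASET LV_SRC_FILE INTO LV_LINE.
  IF SY-SUBRC <> 0.
    EXIT.                        " end of source file reached
  ENDIF.
  TRANSFER LV_LINE TO LV_TGT_FILE.
ENDDO.

CLOSE DATASET: LV_SRC_FILE, LV_TGT_FILE.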

     

     

    Transporting a datasource ended with return code 12


    Summary

     

    Datasource 0FI_AP_4 is not available in an active version in the target systems. When it is transported, the transport fails with return code 12, because an inactive version of the datasource exists in the target system. In this document, I explain the problem and its correction.

    SDN Photo.jpg

    Author(s): Sreekanth Surampally

    Company: Accenture Inc

    Created on: 21 Jun 2013

     

    Problem Description:

     

    Transporting the 0FI_AP_4 datasource results in return code 12, with the short dump SAPSQL_ARRAY_INSERT_DUPREC (exception CX_SY_OPEN_SQL_DB).

    The current ABAP program "CL_RSAR_PSA===================CP" terminated.

     

    The program terminates at the line below, trying to insert a field that is already there. RSTSODSFIELD is the system table that contains a datasource and all of its fields.

      FIAP 1.png

    Reason:

     

    When the transport is imported into the target system, it checks for the existence of the datasource/PSA; not finding it, it tries to insert the record for the datasource into the RSTSODSFIELD database table. This table already contains the entry and cannot accept a duplicate, so the import ends in a short dump.

     

    Collecting the object again or re-importing the request will not correct the problem.

     

    Step by Step Solution

     

    Step 1: Check the datasource version in the RSDS table in the target system.

    FIAP 2.png

     

    It is not in the active version. Check the datasource version in the RSTSODS table as well; it also shows inactive.

     

    Step 2: Execute the program RSDS_DATASOURCE_ACTIVATE_ALL for the datasource, then check whether the version in the RSDS and RSTSODS tables has changed. If there is no change in the version, proceed with the next step.

     

    Step 3: Go to the RSTSODS table, enter the PSA name and manually change the OBJSTAT value from INA to ACT.

     

    To do this, use debugging mode: select the record, click Display, type /h in the command field and press Enter. The ABAP debugger opens; double-click the 'CODE' variable and change its value.

     

    FIAP 3.png

    As in the screen above, press F7 or F8 and you will get a screen to modify the selected record. Change the value to ACT and save.
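    The same one-off correction can also be scripted; a sketch, to be used with the same caution as the debug approach ('ZPSA_NAME' is an illustrative PSA name, and the RSTSODS field names are assumed from the steps above):

" Set the PSA status to active directly; run once and verify in RSTSODS
UPDATE RSTSODS SET OBJSTAT = 'ACT'
  WHERE ODSNAME = 'ZPSA_NAME'.
IF SY-SUBRC = 0.
  COMMIT WORK.
ENDIF.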

     

    Step 4: Re-import the failed request into the target system; it will now complete successfully.

     

    Related Content: SAP note 1703960


    Job Cancellation directly from the Server in SAP BW & SAP R/3

    Applies to:
     
    SAP NetWeaver Business Warehouse (formerly SAP BI); also works on SAP BW 3.5.
    Summary
    This article gives a clear picture of how to cancel a job directly from the server.
      
    Author: Mohit Saxena
    Company: Infosys Limited.
    Created on: 25 June 2013
    Author Bio
    final.png
    INTRODUCTION
    Sometimes we need to cancel jobs triggered either in SAP BW or in SAP R/3. There are different options for cancelling a job; in this article we discuss how to cancel a job directly from the server when it cannot be cancelled in SM37.
     
     

    HOW TO CANCEL JOB DIRECTLY FROM SERVER IN SAP BW & SAP R/3

     
    Steps:
    Follow these simple steps.

    Step 1:
    Go to transaction SM37 and select the active job that needs to be cancelled.
    3-1.png
    Step 2:
    Double-click the job and you will get the screen below.
    3-2.png
    Step 3:
    Click Job Details.
    4-1.png
    Step 4:
    You will get the server details (in the case of multiple servers) along with the work process row number.
    4-2.png
    Step 5:
    Go to SM51, select the server and double-click it.
    5-1.png
    Step 6:
    Select the row number (as noted in step 4).
    5-2.png
    Step 7:
    Go to the Process menu, choose 'Cancel with core' and then refresh the screen.
    5-3.png

     

     

    The basic difference between the options is as below.

     

    Cancel with Core

    Cancels the work process; a core file is created, which you can view in transaction ST11.

    Cancel with core = log files are created and the job is killed.

    Cancel without Core

    Cancels the work process; no core file is created.

    Cancel without core = no log files are created and the job is killed.

    Step 8:
    The row is then removed from that screen, and when you check again in SM37 the job shows as cancelled.

    6-1.png

     

     


    A quick checklist to improve performance of the BW system at database level


    Performance of the BW system plays a key role in the business, as it directly impacts users' work. To solve performance issues, every project should maintain a standard cycle of checking, validating and resolving them at regular intervals.

    Some of the areas you can check to improve query performance are:

    At the data base level:

     

    1) Data Model: The data model defines the way objects are developed, for example the dimensions in an InfoCube.

    a)      Keep the dimensions of an InfoCube as small as possible; give high-cardinality characteristics their own dimensions.

                              Example: Try not to include product and customer in a single dimension.

     

    2) Query Definition: The way you define a query in the Query Designer also impacts its runtime performance.

    a)      Do most calculations in transformations at InfoProvider level rather than in the query.

    b)      Ensure the InfoCube is compressed if you use non-cumulative key figures.

     

    3) Aggregates: The main use of aggregates is to speed up query response time by reducing the amount of data read; an aggregate is a subset of an InfoCube.

          a)      Define aggregates for queries that have high database read times.

          b)      The more aggregates you have, the longer loading takes, since roll-up has to take place; this affects load performance.

          c)      If the aggregates proposed by you or the system are too large and do not improve performance enough, try using the OLAP cache.

    4) OLAP Cache: This is a buffer area where the data of frequently used queries is stored; when you execute or refresh such a query, the system reads the data from the OLAP cache instead of fetching it from the database.

    a)      There are five cache modes; to find them, go to transaction RSRT.

    b)      Enter your query name and click Properties.

    p7.JPG

    c)      After unchecking the InfoProvider setting, the Cache Mode field becomes editable and you can choose your cache option.

    p8.JPG

    1) Cache inactive

    2) Memory cache without swapping

    3) Memory cache with swapping

    4) Cluster/flat file cache per application server

    5) Cluster/flat file cache cross-application server

     

    d) The stored OLAP cache data is refreshed every time new data is loaded into the respective InfoProvider, and whenever the query is reactivated.

    5) Pre-calculated web templates: Pre-calculation shifts the workload of running a report to times when no one is using it, so the report is ready for very fast access when users open it. The main advantage is that it reduces the load on the server at reporting time.

     

    6) Compression: Compressing an InfoCube transfers the data from the F fact table to the E fact table; the request IDs are deleted and records with the same keys are aggregated.

    a) Compress requests that are not likely to be deleted as early as possible.

    b) The same applies to aggregates: compress them as soon as possible.

    c) Compression significantly improves query performance for InfoCubes containing non-cumulative key figures.

     

    7) Indexes: Secondary indexes can be defined, but most of them have a negative impact on data activation and load performance, since they have to be maintained during the load.

    Example: Drop the indexes before loading data into an InfoCube and rebuild them after loading, as in the sketch below.
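    A sketch of how this is typically done in a custom step, using the standard BW function modules (signature assumed; 'ZIC_SALES' is an illustrative InfoCube name):

CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
  EXPORTING
    I_INFOCUBE = 'ZIC_SALES'.    " drop secondary indexes before the load

" ... load data into the InfoCube here ...

CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
  EXPORTING
    I_INFOCUBE = 'ZIC_SALES'.    " rebuild the indexes after the load

    In process chains, the same effect is achieved with the standard Delete Index and Generate Index process types.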

     

    8) DB Statistics: Make sure database statistics are maintained in the system and updated regularly.

    a) Most DBMSs (database management systems) do not maintain statistics for newly created InfoProviders such as InfoCubes, so make sure statistics exist before the objects are used for reporting.

    b) If you have performance problems when building statistics, use the alternative DBMS_STATS method:

    I.            See SAP note 351163 (Creating ORACLE DB statistics using DBMS_STATS).

    II.            See SAP note 129252 (ORACLE DB Statistics for BW tables) for details.

    Virtual provider


    A virtual provider is a type of InfoProvider that follows the extended star schema and is used to display real-time data. It is just a transient structure and does not store any data itself.

     

    Conditions to be satisfied for a virtual provider:

    • To get data into a virtual provider, we use a special DTP called a direct access DTP.
    • The datasource must also be a direct access datasource.

    To load data into a virtual provider we can use three techniques:

    1) Direct access

    2) BADI/BAPI

    3) Function module

     

     

    Virtual provider flow diagram:

     

     

     

     

    vp1.JPG

     

     

     

    • If you want to use a start routine or expert routine in the virtual cube, you have to include the following code in the inverse routine:

    c_th_fields_inbound = i_th_fields_outbound.  " pass the requested fields through unchanged
    c_r_selset_inbound  = i_r_selset_outbound.   " pass the selection set through unchanged
    c_exact             = rs_c_true.             " flag the inverse mapping as exact

    • We use this code because, when a report on top of the virtual provider is executed or refreshed, the direct access DTP goes to the source system and extracts the data. In simple terms, the query triggers the virtual provider: unless the query is executed or refreshed, no data is fetched, because the virtual provider holds no data of its own.
    • We can use a virtual provider in the following scenarios:

               1)     When you have a small amount of data, whether from R/3, BI or non-SAP sources

               2)     When a small number of concurrent users work on the report (BEx query)

               3)     When you have transaction data

               4)     When you need read-only access to the data

    • A real-time scenario where virtual providers are typically used:

    Take a scenario where the status of process chains is stored in a database table, with one entry for each action in the process chain.

    Different statuses of a process chain:

    1. Start: the process chain has started.
    2. In process: the process chain is running.
    3. Ended: the process chain has finished.

    A query is built on top of the virtual provider, whose source is the BI system itself, based on the tables where the process chain statuses are stored. When you execute or refresh the report, the virtual provider reads directly from the status table, so the query always shows the up-to-date data.

     

    How to check whether the datasource supports direct access:

    There are many ways to find this out; one standard way is:

    Go to transaction RSA2, enter your datasource name and click Display.
    vp2.JPG

    On the next screen, go to the Extraction tab; as in the screen below, the Direct Access field shows '1', which means the datasource supports direct access.

    vp3.JPG

    Once the virtual provider is created and the data is ready to be accessed, choose the option 'Activate Direct Access'.

    vp4.JPG

     

    Once this is done save your DTP and display your data.

     


    Steps to delete the error request in DSO when we cannot delete the request in Manage screen


    This document explains how to delete an error request from a DSO when it cannot be deleted from the Manage screen.

    When we get an error request while loading data into a DSO, we usually delete it from the DSO Manage screen and repeat the load. Sometimes, however, the system does not allow the request to be deleted there.

    Below are the steps to delete the error request:

     

     

    To delete the request, we first have to delete its entries from the following tables:

    ·                     RSICCONT

    ·                     RSMONICP

    ·                     RSODSACTUPDTYPE

    ·                     RSODSACTREQ

     

    Go to transaction SE16 and provide the table name.

    pic1.JPG

    Copy the error request number from the Manage screen of the DSO, enter it in the RNR field and execute.

    pic2.JPG

         Now delete the entries for the request from the table. Repeat the procedure for the remaining tables.

             

         Once the entries for that particular request are deleted from all the tables, you can delete the request from the DSO.
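         A quick read-only check in ABAP can confirm whether an entry for the request still exists before or after the manual deletion; a sketch ('REQU_...' stands for your actual request number):

DATA LV_COUNT TYPE I.

SELECT COUNT(*) INTO LV_COUNT
  FROM RSICCONT
  WHERE RNR = 'REQU_XXXXXXXXXXXXXXXXXXXXXXXXX'.  " your request ID here
WRITE: / 'Entries in RSICCONT for the request:', LV_COUNT.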

     

     

     

    SAP BI Support Data Load Errors and Solutions


    SAP BI/BW Data Load Errors and Solutions for a support Project:

     

    1) BW Error: Failure occurred while a delta update is running from one data target to another.

     

    Possible Causes:

    1. tRFC error
    2. Incorrect data (error records)
    3. Locked by ALEREMOTE or a user

     

    Solution: In the monitor, change the technical status of the request to red, then delete the request from the data target.

    If it is a delta update, reset the delta in the source data target and retrigger the InfoPackage to load the data again.

    If it is a full update, restart the job after deleting the error request from the data target.

     

     

    2) BW Error: Master job abended with an error code, or the PR1 batch job did not run or was delayed.

    Possible Causes: Changes to Maestro (the job scheduler) or changes to the job itself.

    Solution: Maestro jobs are handled by Production Services. If the job abended with an error code, the BASIS team looks at the Maestro problems; if the job itself is the issue, the SAP support team investigates it.

     

     

     

     

    3) BW Error: Database errors: unable to extend table, unable to extend index.

    For example:

    Database error text: "ORA-01653: unable to extend table SAPR3./BIC/AZUSPYDO100 by 8192 in tablespace PSAPODS2D"

    Possible Causes: Lack of space available for further data.

    Solution: This is a database error; a short dump indicates the error message. A ticket is raised for the DBA, who provides the required space. If the update mode is delta, the technical status of the job is changed to red and the request is deleted from the data target; the delta InfoPackage is then triggered again to get the delta back from R/3. If it is a full update, the request is deleted from the data target and the InfoPackage is triggered again to get the full update.

     

    We can navigate to the relevant short dump from the monitor or execute transaction ST22.


    Once the DBA confirms that the space issue is corrected, the job is rerun to get the data from the source again.

     

    4) BW Error: Deadlock

    Possible Causes: This can happen when SMON is running, or due to other DBA activity.

    Solution: Contact the database team; a ticket is raised for the DBA, who can adjust the schedule of the SMON process.

    If the update mode is delta, the technical status of the job is changed to red and the request is deleted from the data target; the delta InfoPackage is triggered again to get the delta back from R/3. If it is a full update, the request is deleted from the data target and the InfoPackage is triggered again.

     

    5) BW Error: Time stamp errors

    Possible Causes: This can happen when changes are made to a datasource and the datasource is not replicated.

    Solution: Execute transaction SE38 in BW, enter the program name RS_TRANSTRU_ACTIVATE_ALL and execute it; enter the InfoSource and source system and activate. This replicates the datasource and sets its status to active. Once this is done, delete the request after changing its technical status to red, and trigger the InfoPackage to get the delta back from the source system.

     

    Replicate the datasource as shown in the screenshot below.


    Go to SE38 and execute program RS_TRANSTRU_ACTIVATE_ALL, specifying the source system and the name of the InfoSource.


    Delete the request from the data target and trigger the InfoPackage again to get either the delta or the full data.

     

     

     

    6) BW Error: Error log in PSA - error occurred while writing to PSA

    Possible Causes: Corrupt data, or data in a format not acceptable to BW.

    Solution: Check the cause of the error in the monitor on the Details tab; this gives the record number and the InfoObject with the format issue. Compare the data with correct values and determine the cause of failure. Change the QM status of the request in the data target to red and delete the request. Correct the incorrect data in the PSA and then upload the data into the data target from the PSA.


    Step 1:

    If the request went directly into the data target, load the data into the PSA as well to correct the issue. Delete the error request from the data target and go to the PSA to investigate.

    Look for messages in the monitor, which may give the name of the InfoObject causing the error. Normally a trailing # character is not permitted in BW.

    Filter the records based on status and start correcting them. Once they are complete, upload the data from the PSA back into the data target.

     

     

     

    7) BW Error: Duplicate data error in master data upload

    Short dump message:

    "SAPSQL_ARRAY_INSERT_DUPREC" CX_SY_OPEN_SQL_DB
    "SAPLRSDRO" or "LRSDROF07"
    "START_RSODSACTREQ_ACTIVATE"

    Possible Causes: This can happen if duplicate records arrive from the source system; BW does not allow duplicate master data records.

     

    Solution:

    To check for duplicate records in a master data characteristic, go to transaction RSRV.

    Select the name of the characteristic and execute; this lists the first 50 records with the problem. Remove the duplicates from the master data and then upload the data again by triggering the InfoPackage.

     

     

    8) BW Error: Error occurred in the data selection

    Possible Causes: Either a bug in the InfoPackage or an incorrect data selection in the InfoPackage.

    Solution: Check the data selection in the InfoPackage and start the job again after changing the technical status to red and deleting the error request from the data target.

    As an example, a bug was found in one InfoPackage where the Value Type field changed after each run of the job and had to be corrected every time.


    9) BW Error: Processing (data packet): No data

    Possible Causes: An issue with the datasource in the source system; the delta update program can be one of the issues.

    Solution: Go to the R/3 source system and check how many records are in the delta queue in transaction RSA7. If the record count is zero and you are sure it cannot be zero, check whether the update program is not running or is stuck. Check BD87, SMQ1 and SM58; the error logs there can suggest the solution to the problem.


    10) BW Error: Process chain failed

    Possible Causes: Errors occurred in one of the jobs in the process chain.

    Solution: Fix the failure manually, then go to the process chain log for today, right-click the next job and select the Repeat option. This executes all remaining jobs in the process chain.


    11) BW Error: Errors occurred in IDocs and tRFC; non-updated IDocs found in the source system.

    Possible Causes: This can happen when the job is terminated in the source system, or when the source system or BW is unavailable during part of the data upload. It can also happen if resources are insufficient in the source system.

    Solution: The IDocs need to be processed manually, either in OLTP or in BW, depending on where the failure occurred. The error message on the Status tab of the monitor takes you to either BW or OLTP, wherever the problem is. Processing the IDocs restarts the left-over packets and finishes the load.

    In this situation, check the IDocs in WE05 and note their status (for example 51, 52 or 53); then go to WE19 and reprocess the failed IDoc, using a successfully loaded IDoc as a template.


    Go to transaction ST22 or see the relevant short dump in the monitor.


    Execute transaction WE02 to see the non-updated IDocs.

     

     

    Process them if they have yellow status.

     

    12) BW Error: Processing (data packet): errors occurred - Update (0 new / 0 changed): errors occurred - error records written to application log.

    Possible Causes: The data reached the PSA but is not acceptable to the data target.

    Solution: Check the data in the PSA for correctness; after fixing the bad data, upload it back into the data target from the PSA.

    Change the QM status of the request to red in the data target and delete the request so that the PSA data can be edited. Go to the PSA associated with this request and edit the records to fix the error reported in the monitor on the Details tab.

    Reload the data from the PSA back into the data target.

     

     

    13) BW Error: Process chains: errors occurred in daily master data

    Possible Causes: This occurs when transaction data is loaded before master data.

    Solution: Ensure that master data is loaded before transaction data. Reload the data depending on the update mode (delta/full update).

     

    14) BW Error: Errors occurred - Transfer rules (0 records)

    Possible Causes: These errors happen when the transfer rules are not active or the mapping of the data fields is incorrect.

    Solution: Check the transfer rules, make the relevant changes and load the data again.


    15) BW Error: Missing messages - Processing end: missing messages

    Possible Causes: Incorrect PSA data, transfer structure, transfer rules, update rules or ODS definition.

    Solution: Check the PSA data, transfer structure, transfer rules, update rules and the data target definition.

     

     

    16) BW Error: Other (messages): errors occurred in data selection

    Possible Causes: The 'valid from' date is smaller than the minimum date (for example: "Error in node time interval with the ID 00000011. Details in next message").

    Solution: Change the selection options and reload the data.

     

    17) BW Error: Activation of ODS failed

    Possible Causes: This happens when the data is not acceptable to the ODS definition; the data needs to be corrected in the PSA.

    Solution: Check the monitor Details tab for the InfoObject that caused the problem. Delete the request from the data target after changing the QM status to red. Correct the data in the PSA and update it back to the data target from the PSA.


    18) BW Error: Source system not available

    Possible Causes: This can happen when the request IDoc is sent to the source system, but the source system is for some reason not available.

    Solution: Ensure that the source system is available. Change the technical status of the request to red and delete the request from the data target. Trigger the InfoPackage again to get the data from the source system.


    19) BW Error: Caller 70 missing

    Possible Causes: IDocs not processed completely.

    Solution: This is an IDoc problem. Either wait until the timeout and process the IDoc from the detail monitor screen, or go to BD87 and process the IDocs with yellow status (be careful when processing IDocs from BD87: choose only the relevant IDocs).

     

     

     

    20) BW Error: Object corrupted - delta management of master data lost, but full update still allowed ("Update mode R not supported")

    Possible Causes: This happened during mass failures due to a database space issue.

    Solution: Reinitialize, or use the repair program for the InfoObject. During reinitialization, make sure to use the PSA-only option first, avoid duplicate data, and update into the data targets subsequently.

    Go to transaction RSRV to check the consistency of the InfoObject.


    21) BW Error: Error while opening the file from the source system

    Possible Causes: This happens when the file is open, not yet deposited on the server, or otherwise not available.

    Solution: Arrange for the file, delete the error request from the data target and trigger the InfoPackage to load the data from the file.


    22) BW Error: Object locked by user

    Possible Causes: This can happen when a user or ALEREMOTE is accessing the same table.

    Solution: Change the technical status of the job to red, delete the request from the data target and trigger the InfoPackage again. If it is a delta update, it will ask for a repeat delta; click Yes.


    23) BW Error: Table locked in R/3 while a load is running

    Possible Causes: This happens when a datasource is reading an R/3 transparent table and a transaction takes place in R/3 at the same time.

    Solution: Change the technical status of the job to red in the monitor and retrigger the job from R/3.


    24) BW Error: Change run already started by ALEREMOTE

    Possible Causes: This can happen when two change runs overlap.

    Solution: Go to transaction RSATTR and see whether a change run is still active. Once it has finished, repeat the job.
