SCN: Document List - SAP Business Warehouse

SAP First Guidance - SEM/BW Configuration in SolMan 7.1


Due to constant questions about the upgrade to SAP NetWeaver 7.3x including SEM add-on components, we created an SAP First Guidance document which describes the successful definition of an SAP NetWeaver BW 7.0/7.01 system with the SEM add-on installed on top. With this information you will be able to integrate the stack.xml into the SUM (Software Update Manager) process and the upcoming DMO (Database Migration Option) included in SUM, since the first input is the location of the stack.xml, which defines the download directory for SUM. Furthermore, the use of the stack.xml enables a smooth integration into the upgrade process.

View this Document


Upgrade to NetWeaver BW 7.3x


With the release of SAP NetWeaver BW 7.30, new and innovative capabilities are added to SAP's EDW premium solution. BW 7.30 is also powered by SAP HANA as the backbone for successful usage of the SAP BusinessObjects platform, and the SAP Business Warehouse Accelerator (BWA) is the heart of the SAP Business Intelligence architecture. This short presentation consolidates the must-know delta information to upgrade quickly and effectively to SAP NetWeaver BW 7.30 or 7.30 EhP1 (7.31).

View this Presentation

Customer Exit Variables in SAP BI!


In many reports, users want to see actual data and compare it with the same period of the previous year - for example, the user enters the current date and wants to see the data from the beginning of the year until today. Also, if you want to compare two periods of two different years, keep in mind that the number of working days can differ, so from the business side it is important to see the actual working days in the report.
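A typical implementation is a customer exit variable filled in the enhancement RSR00001 (function exit EXIT_SAPLRRS0_001, include ZXRSRU01). Below is a minimal sketch, assuming a year-to-date interval is derived from a user entry variable; the variable names ZYTD_RANGE and ZKEYDATE are examples only, not taken from the wiki:

* Include ZXRSRU01 - minimal sketch of a year-to-date customer exit variable.
* ZYTD_RANGE and ZKEYDATE are hypothetical variable names.
  DATA: l_s_range TYPE rsr_s_rangesid,
        l_s_var   TYPE rrs0_s_var_range.

  CASE i_vnam.
    WHEN 'ZYTD_RANGE'.
      IF i_step = 2.                            "after the user has entered values
        READ TABLE i_t_var_range INTO l_s_var
             WITH KEY vnam = 'ZKEYDATE'.        "user entry variable with the key date
        IF sy-subrc = 0.
          CLEAR l_s_range.
          CONCATENATE l_s_var-low(4) '0101'
                 INTO l_s_range-low.            "1st of January of the entered year
          l_s_range-high = l_s_var-low.         "entered key date
          l_s_range-sign = 'I'.
          l_s_range-opt  = 'BT'.
          APPEND l_s_range TO e_t_range.
        ENDIF.
      ENDIF.
  ENDCASE.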

 

Please check my wiki in the link below:

http://wiki.sdn.sap.com/wiki/display/NWTech/Customer+Exit+Variables+in+SAP+BI

 

How to populate and use user exit variables - Code Gallery - SCN Wiki

Automation of Master Data Validation Process


Purpose

In most business scenarios, master data must be validated before it is loaded into the master data objects.

This tool automates the master data validation process and outputs the records that contain invalid master data along with the appropriate error messages.

Overview

The main features of this tool are:

-       If the master data validation fails due to the integrity check in the transformation, the DTP request turns red and the error stack records get updated.

-       Sends an e-mail to the user with the erroneous records as a file attachment.

-       Automatically deletes the erroneous DTP and PSA requests.

The program can be used in process chains to automate the whole process.

Configuration Steps

In this document, the example of a data load from a flat file DataSource to a DSO is considered.

1) Enable the integrity check in the transformation for master data validation. The system will then automatically validate the master data records and turn the DTP request red if the validation fails.

Transformation.png

2) Change the DTP error handling setting: enable "Update Valid Records, No Reporting (Request Red)" in the DTP.

DTP.png

3) Enter the DataSource, the target DSO technical name and the receiver e-mail address in the program, and save them as a variant.

If the master data validation fails due to the integrity check in the transformation, the DTP request turns red and the error stack records get updated.

The program fetches the error records from the error stack table along with the error messages and sends the file to the user as an attachment.

Also, if "Delete DSO & PSA Requests" is checked, it deletes both the erroneous DTP request and the corresponding PSA request.

Variant.png

4) Configure this program as an ABAP process type that runs in case of a DTP load failure, so as to automate the whole process.

 

Process Chain.png

 

Program Code

Create a selection screen with the following parameters (a sketch of the selection screen and the global declarations follows the list):

1) Data Source technical name.

2) Target DSO technical name

3) Receiver e-mail address

4) Check Box - Delete DSO & PSA request
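The selection screen and the global declarations that the following steps rely on (p_tgt, p_dso, so_rcvr, the gv_*/gt_* variables, the field symbols and ty_error_messages) are not shown in the original. A minimal sketch could look as follows; the concrete types are assumptions derived from the tables used later in the code:

REPORT zbw_md_validation.

* Receiver address work field (used for the select-option and the BCS call later)
DATA gv_email TYPE ad_smtpadr.

* Selection screen
PARAMETERS:     p_ds(30) TYPE c.                    "DataSource technical name
PARAMETERS:     p_tgt    TYPE rsmonicdp-icube.      "Target DSO technical name
SELECT-OPTIONS: so_rcvr  FOR gv_email NO INTERVALS. "Receiver e-mail address(es)
PARAMETERS:     p_dso    AS CHECKBOX DEFAULT 'X'.   "Delete DSO & PSA requests

* Global types and data used in the steps below
TYPES: BEGIN OF ty_error_messages,
         msgid TYPE rsberrorlog-msgid,
         msgno TYPE rsberrorlog-msgno,
         msgty TYPE rsberrorlog-msgty,
         msgv1 TYPE rsberrorlog-msgv1,
         msgv2 TYPE rsberrorlog-msgv2,
         msgv3 TYPE rsberrorlog-msgv3,
         msgv4 TYPE rsberrorlog-msgv4,
       END OF ty_error_messages.

DATA: gv_rnr        TYPE rsmonicdp-rnr,             "error request number
      gv_sid        TYPE /bi0/srequid-sid,          "SID of the error request
      gv_dtp        TYPE rsbkrequest-dtp,           "DTP technical name
      gv_odsname    TYPE rstsods-odsname_tech,      "error stack table name
      gt_tab_lines  TYPE soli_tab,                  "e-mail attachment lines
      gt_parent_rnr TYPE STANDARD TABLE OF rsrequid,
      gv_generic    TYPE REF TO data,               "error stack table (generic)
      gv_wa         TYPE REF TO data.               "error stack work area (generic)

FIELD-SYMBOLS: <fs_generic>    TYPE STANDARD TABLE,
               <fs_wa>         TYPE any,
               <fs_datapacket> TYPE any,
               <fs_datarecord> TYPE any,
               <fs_field>      TYPE any.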

 

1) Fetch error request from target DSO

  SELECT rnr UP TO 1 ROWS INTO gv_rnr FROM rsmonicdp
    WHERE icube  = p_tgt
    AND   status = '@0A@'.              "Error status (red request)
  ENDSELECT.
  IF sy-subrc <> 0.
    "No error request found
  ENDIF.


2) Fetch SID of the error request:

SELECT SINGLE sid INTO gv_sid FROM /bi0/srequid
  WHERE requid = gv_rnr.
  IF sy-subrc = 0.
** Retrieve DTP name
    CLEAR: gv_dtp.

    SELECT SINGLE dtp INTO gv_dtp FROM rsbkrequest
      WHERE requid = gv_sid.
    IF sy-subrc = 0.
      "DTP name found
    ENDIF.
  ENDIF.


3) Retrieve Error Stack table name

** Retrieve Error Stack Table name
  CLEAR gv_odsname.

  SELECT odsname_tech UP TO 1 ROWS
    INTO  gv_odsname
    FROM  rstsods
    WHERE version = 'A'
    AND   userobj = gv_dtp.
  ENDSELECT.
  IF sy-subrc = 0.
    "Do nothing
  ENDIF.

  CREATE DATA gv_generic TYPE TABLE OF (gv_odsname).
  CREATE DATA gv_wa TYPE (gv_odsname).

  ASSIGN gv_generic->* TO <fs_generic>.

  ASSIGN gv_wa->* TO <fs_wa>.

** Retrieve data from error stack table
  SELECT * FROM (gv_odsname)
    INTO TABLE <fs_generic>.
  IF sy-subrc <> 0.
    "do nothing
  ENDIF.


4) Retrieve error stack records and its corresponding error messages

DATA: lwa_tab_line   TYPE soli,
        lwa_tab_line2  TYPE soli,
        lv_msg         TYPE string,
        lv_len1        TYPE i,
        lv_len2        TYPE i,
        lv_len3        TYPE i.

  DATA: lv_datapacket_number TYPE i,
        lv_datarecord_number TYPE i,
        lv_tablename         TYPE RSD_TABLNM.

  DATA: lt_error_messages   TYPE STANDARD TABLE OF ty_error_messages,
        lwa_error_messages  TYPE ty_error_messages.

  CONSTANTS: con_tab TYPE c VALUE cl_abap_char_utilities=>horizontal_tab,
             con_cret TYPE c VALUE cl_abap_char_utilities=>cr_lf.

** Identify whether the PSA table is partitioned or not

  CLEAR lv_tablename.

  lv_tablename = gv_odsname.

  CALL FUNCTION 'RSDU_PARTITIONS_INFO_GET'
    EXPORTING
      i_tablnm              = lv_tablename
    EXCEPTIONS
      table_not_exists      = 1
      table_not_partitioned = 2
      inherited_error       = 3
      OTHERS                = 4.
  IF sy-subrc = 0.
    lv_datapacket_number = 2.
    lv_datarecord_number = 4.
  ELSEIF sy-subrc = 2.
    lv_datapacket_number = 2.
    lv_datarecord_number = 3.
  ENDIF.

** Display the error stack table to the output
  LOOP AT <fs_generic> ASSIGNING <fs_wa>.
    CLEAR: lwa_tab_line, lv_len1, lv_len2, lv_len3.
    DO.
      IF sy-index = lv_datapacket_number.
        ASSIGN COMPONENT lv_datapacket_number OF STRUCTURE <fs_wa> TO <fs_datapacket>.
      ELSEIF sy-index = lv_datarecord_number.
        ASSIGN COMPONENT lv_datarecord_number OF STRUCTURE <fs_wa> TO <fs_datarecord>.
      ELSEIF sy-index > lv_datarecord_number.
        ASSIGN COMPONENT sy-index OF STRUCTURE <fs_wa> TO <fs_field>.
        IF sy-subrc = 0.
          DESCRIBE FIELD <fs_field> LENGTH lv_len2 IN BYTE MODE.
          lv_len3 = lv_len1 + lv_len2.
          IF lv_len3 > 255.
            EXIT.
          ENDIF.
          IF lv_len2 > 0.
            MOVE <fs_field> TO lwa_tab_line+lv_len1(lv_len2).
            CONCATENATE lwa_tab_line con_tab INTO lwa_tab_line.
          ENDIF.
          lv_len1 = lv_len1 + lv_len2 + 1.
          lv_len3 = lv_len1.
          WRITE: <fs_field>.
        ELSE.
          EXIT.
        ENDIF.
      ENDIF.
    ENDDO.
    NEW-LINE.

**  Retrieve corresponding error messages

    REFRESH lt_error_messages.

    SELECT msgid msgno msgty msgv1 msgv2 msgv3 msgv4
    FROM   rsberrorlog
    INTO   TABLE lt_error_messages
    WHERE  request   = gv_sid
    AND    datapakid = <fs_datapacket>
    AND    record    = <fs_datarecord>.
    IF sy-subrc EQ 0.
      LOOP AT lt_error_messages INTO lwa_error_messages.

        CLEAR lv_msg.

        CALL FUNCTION 'FORMAT_MESSAGE'
          EXPORTING
            id        = lwa_error_messages-msgid
            lang      = '-D'
            no        = lwa_error_messages-msgno
            v1        = lwa_error_messages-msgv1
            v2        = lwa_error_messages-msgv2
            v3        = lwa_error_messages-msgv3
            v4        = lwa_error_messages-msgv4
          IMPORTING
            msg       = lv_msg
          EXCEPTIONS
            not_found = 1
            OTHERS    = 2.
        IF sy-subrc = 0.
          CONCATENATE lwa_tab_line lv_msg INTO lwa_tab_line2.
          CONDENSE lwa_tab_line2.
          CONCATENATE con_cret lwa_tab_line2 INTO lwa_tab_line2.
          APPEND lwa_tab_line2 TO gt_tab_lines.
          CLEAR lwa_tab_line2.
        ENDIF.
      ENDLOOP.
    ENDIF.

  ENDLOOP.

5) Pass the Error Stack records as an attachment to the email id.

DATA: lv_subject       TYPE        so_obj_des,
         it_contents      TYPE        soli_tab,
         lv_ref_document  TYPE REF TO cl_document_bcs,
         lv_ref_documentw TYPE REF TO cl_bcs,
         lv_recipient     TYPE REF TO if_recipient_bcs,
         lv_result        TYPE        os_boolean,
         lv_count         TYPE        i,
         lv_size          TYPE        sood-objlen.


  lv_subject = 'Master Data Validation Failed Records'(001).

** Create document with contents

  lv_ref_document = cl_document_bcs=>create_document(
        i_type       = 'HTM'
        i_subject    = lv_subject
        i_length     = '1000'
        i_language   = sy-langu
        i_importance = '1'
        i_text       = it_contents ).

** Create attachment

  DESCRIBE TABLE gt_tab_lines LINES lv_count.

  lv_size = lv_count * 255.

  CALL METHOD lv_ref_document->add_attachment
    EXPORTING
      i_attachment_type    = 'XLS'
      i_attachment_subject = 'Error Records'(002)
      i_attachment_size    = lv_size
      i_att_content_text   = gt_tab_lines.

* CREATING PERSISTENT OBJECT WILL ALLOW YOU TO SET THE DOCUMENT IN THE MAIL
  lv_ref_documentw = cl_bcs=>create_persistent( ).
  CALL METHOD lv_ref_documentw->set_document( lv_ref_document ).

* EMAIL AS GIVEN IN THE SELECTION SCREEN.
  LOOP AT so_rcvr.
    gv_email = so_rcvr-low.
    lv_recipient = cl_cam_address_bcs=>create_internet_address( gv_email ).
    "Add recipient to send request
    CALL METHOD lv_ref_documentw->add_recipient
      EXPORTING
        i_recipient = lv_recipient
        i_express   = 'X'.
  ENDLOOP.

* SEND THE MAIL
  CALL METHOD lv_ref_documentw->send(
    EXPORTING
      i_with_error_screen = 'X'
    RECEIVING
      result              = lv_result ).

  IF lv_result = 'X'.
    NEW-LINE.
    NEW-LINE.
    WRITE:/ 'E-mail sent successfully'(003).
    COMMIT WORK.
  ELSE.
    NEW-LINE.
    NEW-LINE.
    WRITE:/ 'Error in sending e-mail'(004).
    ROLLBACK WORK.
  ENDIF.


6) Delete the error request from the target DSO

DATA: lv_msg  TYPE string,
        lv_cube TYPE rsiccont-icube.

  REFRESH gt_parent_rnr.

  SELECT  parent_rnr INTO TABLE gt_parent_rnr
  FROM rsstatmanreqmap
  WHERE rnr         = gv_rnr
  AND dta_dest      = p_tgt
  AND dta_dest_type = 'ODSO'.
  IF sy-subrc = 0.
    "Do nothing
  ENDIF.

  CLEAR lv_cube.

  lv_cube = p_tgt.

** Delete DSO request only if the check box is checked.
  IF p_dso IS NOT INITIAL.
    CALL FUNCTION 'RSSM_DELETE_REQUEST'
      EXPORTING
        request                    = gv_rnr
        infocube                   = lv_cube
      EXCEPTIONS
        request_not_in_cube        = 1
        infocube_not_found         = 2
        request_already_aggregated = 3
        request_already_comdensed  = 4
        no_enqueue_possible        = 5
        cube_in_planning_mode      = 6
        OTHERS                     = 7.
    IF sy-subrc <> 0.
      CLEAR lv_msg.

      CALL FUNCTION 'FORMAT_MESSAGE'
        EXPORTING
          id        = sy-msgid
          lang      = '-D'
          no        = sy-msgno
          v1        = sy-msgv1
          v2        = sy-msgv2
          v3        = sy-msgv3
          v4        = sy-msgv4
        IMPORTING
          msg       = lv_msg
        EXCEPTIONS
          not_found = 1
          OTHERS    = 2.
      IF sy-subrc = 0.
        WRITE:/ lv_msg.
      ENDIF.
    ENDIF.
  ENDIF.


7) Delete corresponding PSA requests

*Retrieve PSA request
  DATA: lv_msg          TYPE string,
        lwa_parent_rnr  TYPE rsrequid.

  DATA: lwa_s_rsseldone TYPE rsseldone,
        lv_new_ds,
        lwa_s_rsis      TYPE rsis,
        lv_applnm       TYPE rsappl-applnm,
        lwa_s_rsdchabas TYPE rsdchabas.

  CONSTANTS: c_dtpprefix(1)     TYPE c VALUE 'D'.

  IF p_dso IS NOT INITIAL.                    " Check box to delete PSA request
    LOOP AT gt_parent_rnr INTO lwa_parent_rnr.
** Delete PSA Request
      CLEAR lwa_s_rsseldone.
      CALL FUNCTION 'RSSM_RSSELDONE_READ'
        EXPORTING
          i_rnr           = lwa_parent_rnr
          i_single_select = 'X'
        IMPORTING
          e_s_rsseldone   = lwa_s_rsseldone.

      IF lwa_s_rsseldone-typ = 'I'.
        CALL FUNCTION 'RSDS_DATASOURCE_OLDNEW'
          EXPORTING
            i_datasource = lwa_s_rsseldone-oltpsource
            i_logsys     = lwa_s_rsseldone-logsys
          IMPORTING
            e_new_isused = lv_new_ds.
        IF lv_new_ds IS INITIAL.
          IF NOT lwa_s_rsseldone-source IS INITIAL.
            SELECT SINGLE * FROM rsis INTO lwa_s_rsis WHERE
                   isource = lwa_s_rsseldone-source.
            lv_applnm = lwa_s_rsis-applnm.
          ELSE.
            lv_new_ds = 'X'.
          ENDIF.
        ENDIF.
      ELSEIF lwa_s_rsseldone-typ = 'O'.
        CALL FUNCTION 'RSDS_DATASOURCE_OLDNEW'
          EXPORTING
            i_datasource = lwa_s_rsseldone-oltpsource
            i_logsys     = lwa_s_rsseldone-logsys
          IMPORTING
            e_new_isused = lv_new_ds.

        IF lv_new_ds IS INITIAL.
          IF NOT lwa_s_rsseldone-source IS INITIAL.
            SELECT SINGLE * FROM rsdchabas INTO lwa_s_rsdchabas WHERE
                   chabasnm = lwa_s_rsseldone-source.
            lv_applnm = lwa_s_rsdchabas-applnm.
          ELSE.
            lv_new_ds = 'X'.
          ENDIF.
        ENDIF.
      ELSE.
        IF lwa_parent_rnr(1) = c_dtpprefix.
          CALL FUNCTION 'RSS2_DTP_REQ_MAP_TO_SELDONE'
            EXPORTING
              i_rnr       = lwa_parent_rnr
            IMPORTING
              e_s_seldone = lwa_s_rsseldone.
          lv_new_ds = 'X'.
        ENDIF.
      ENDIF.

      CALL FUNCTION 'RSAR_ODS_API_DEL'
        EXPORTING
          i_request         = lwa_parent_rnr
          i_date            = lwa_s_rsseldone-seldate
          i_new_psa         = lv_new_ds
        EXCEPTIONS
          no_ods_found      = 3
          parameter_failure = 1
          OTHERS            = 2.
      IF sy-subrc <> 0.             "Delete sy-subrc
        CLEAR lv_msg.

        IF NOT sy-msgid IS INITIAL AND NOT sy-msgno IS INITIAL.   " Error message
          CALL FUNCTION 'FORMAT_MESSAGE'
            EXPORTING
              id        = sy-msgid
              lang      = '-D'
              no        = sy-msgno
              v1        = sy-msgv1
              v2        = sy-msgv2
              v3        = sy-msgv3
              v4        = sy-msgv4
            IMPORTING
              msg       = lv_msg
            EXCEPTIONS
              not_found = 1
              OTHERS    = 2.
          IF sy-subrc = 0.
            WRITE:/ lv_msg.
          ENDIF.
        ELSE.                 " Error message
          CALL FUNCTION 'FORMAT_MESSAGE'
            EXPORTING
              id        = 'RSM1'
              lang      = '-D'
              no        = '206'
              v1        = lwa_parent_rnr
              v2        = sy-msgv2
              v3        = sy-msgv3
              v4        = sy-msgv4
            IMPORTING
              msg       = lv_msg
            EXCEPTIONS
              not_found = 1
              OTHERS    = 2.
          IF sy-subrc = 0.
            WRITE:/ lv_msg.
          ENDIF.
        ENDIF.                " Error message
      ENDIF.                  "Delete sy-subrc
    ENDLOOP.
  ENDIF.                    " Check box to delete PSA request

How to improve runtime of data loading to cube


Introduction:

In SAP BI projects we may face issues with the runtime of loading data into cubes, which can considerably affect the SLA. This document describes a performance optimization technique that helps to reduce the runtime of loads into cubes.

 

Procedure:

 

STEP 1:

 

Execute the standard ABAP program SAP_INFOCUBE_DESIGNS in SE38 and check the output of the report.

PIC C1.png

 

Above is an example of the output of the standard report, where we can see the number of rows in the fact table of a cube, the number of rows in each of the dimensions of the cube, and the ratio of dimension to fact table size.

In the above example, two of the dimensions are marked in red; this signifies bad dimensions. In this report, any dimension whose dimension-to-fact-table size ratio is above 20 percent is considered a bad dimension and is highlighted in red.

 

STEP 2: Check the DTP load step "Conversion of characteristic values to SIDs". The fact table holds the dimension IDs, and the dimension tables hold the SIDs of the characteristic values. In the load history of the cube you can observe how much time is spent on this particular step; if it is alarming, use the report output to check which dimensions are marked in red, i.e. exceed the optimal ratio of 20 percent.

This step converts characteristic values to SIDs, so if a cube has bad dimensions we need to take care of them in order to improve the load performance.

 

PICC2.png

 

STEP 3: Now we need to identify a way to improve this particular step of the load.

 

Execute the function module RSD_CUBE_GET in SE37 and supply the input parameters as shown below.

 

PIC C3.png

 

Execute, and when the following window appears, double-click on E_T_DIME and check the number range object ID corresponding to the dimension of the cube marked in red in the report SAP_INFOCUBE_DESIGNS.

In my example, the dimensions below and their corresponding number range object IDs can be seen.
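If you prefer to do this lookup in code rather than via SE37, a minimal sketch is shown below. It reads the dimension directory instead of calling RSD_CUBE_GET; the table and field names (rsddime, nobject) and the cube name ZSALES01 are assumptions and should be verified in your system:

* Sketch: list the dimensions of a cube and their number range objects.
* Table/field names rsddime and nobject are assumptions - verify before use.
DATA: BEGIN OF ls_dime,
        dimension TYPE rsddime-dimension,
        nobject   TYPE rsddime-nobject,
      END OF ls_dime.
DATA lt_dime LIKE STANDARD TABLE OF ls_dime.

SELECT dimension nobject
  FROM rsddime
  INTO TABLE lt_dime
  WHERE infocube = 'ZSALES01'    "cube flagged red in SAP_INFOCUBE_DESIGNS (example name)
  AND   objvers  = 'A'.          "active version

LOOP AT lt_dime INTO ls_dime.
  WRITE: / ls_dime-dimension, ls_dime-nobject.   "dimension and its number range object ID
ENDLOOP.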

 

 

 

PIC C4.png

 

STEP 4: Go to transaction SNRO and enter the number range object IDs identified in the previous step.

 

PIC C5.png

 

After choosing the option "Main memory buffering", we can set the buffer size up to 1,500 as per SAP recommendations. The exact figure depends on the scenario and on the system health, so involve the Basis team to decide which value to apply.

 

PIC C6.png

The purpose of the buffer is to avoid hitting the database SID/DIM ID tables to look up the values, which in case of a huge number of records gradually increases the runtime. Using the buffer reduces this time considerably.

Input Ready Query through BI-Integrated Planning


Applies to

 

SAP BI Input Ready Query- Integrated Planning in SAP BI.

For more information, visit the Business Intelligence homepage.

 

Summary

 

This article describes the steps required to build an input-ready query to load data into the BW system through BI Integrated Planning. It also describes the steps to create the MultiProvider, the aggregation level and the input-enabled query.

 

 

Integrated Planning

 

            Planning is an integral part of the management cycle. The management cycle helps companies to position themselves in complex environments and to keep on track in reaching their overall company goal. Planning supports decision making, the coordination of different divisions towards a common goal and defines general conditions for operation.


            BI Integrated Planning provides business experts with an infrastructure for realizing and operating planning scenarios or other applications. BI Integrated Planning can be used to create process-oriented planning.

            BI Integrated Planning fulfills both the end user's and the administrator's wishes at the same time. The end user can work from a plan query that is equally valid for all planners in a planning area, open Excel offline planning and then carry out significantly more detailed planning there. Only part of this data is then written back to the input-enabled query, namely the part that refers to the cells in question.

Overview of Input Ready Query

                   Input-ready queries are used to create applications for manual planning. These can range from simple data entry scenarios to complex planning applications. You define queries for manual planning in BEx Query Designer. In Web Application Designer or BEx Analyzer, you can combine input-ready queries with other queries and planning functions to create complex planning applications.


            A query that is defined on an InfoProvider of type aggregation level is input ready and can be used for manual planning. Whether a particular cell is input ready depends on the drill-down, specifically whether characteristic relationships and data slices are permitted for the cell.

 

Prerequisites

            Fundamentals of SAP Business Intelligence, SAP Integrated Planning. Good knowledge of BEx.

 

Business scenario

            The business requirement is to have a Dispatch Plan on top of the actual data. Architecturally this is enabled by using an actuals InfoCube that provides the data used for reporting. The goal is to enable an input-ready field for the Dispatch Plan in the report that allows users to enter data, which is then updated to the real-time cube in the background.


            In the example below, the actuals cube is ‘ZS_ICACT’, which contains the data needed for reporting, and ‘ZS_IPTRD’ is the real-time cube used to store the data entered through the reports.

How to start with Integrated Planning:

 

 

1.jpg

Creating a Real Time Info Cube

 

            Real-time Info Cubes differ from Standard Info Cubes in their ability to support parallel write accesses. Standard Info Cubes are technically optimized for read accesses to the detriment of write accesses.

Right click on the Info Area and select ‘Create Info Cube’.


2.jpg

Give name for InfoCube and click on ‘Create’ button. ‘ZS_IPTRD’ is the name of the InfoCube.


3.jpg

Creating a Multi Provider

 

            A MultiProvider is a type of InfoProvider that combines data from a number of InfoProviders and makes it available for reporting purposes. The MultiProvider does not itself contain any data. Its data comes entirely from the InfoProviders on which it is based. These InfoProviders are connected to one another by a union operation. 

            MultiProviders only exist as a logical definition. The data continues to be stored in the InfoProviders on which the MultiProvider is based. A query based on a MultiProvider is divided internally into subqueries. There is a subquery for each InfoProvider included in the MultiProvider. These subqueries are usually processed in parallel.

Right click on the Info Area and select ‘Create MultiProvider’.



4.jpg

Give name for MultiProvider and click on ‘Create’ button. ‘ZS_MULPRO’ is the name of the Multi Provider.


5.jpg

In the next screen, select the Info Cubes which should be in the MultiProvider. So select the check-box for ‘ZS_ICACT’ and ‘ZS_IPTRD’.


5.jpg

Drag and drop Info Objects from the ‘Involved Info Providers’ pane to the right side pane of MultiProvider. 0Calday and 0Calyear should be dragged to the dimension ‘Time’. Drag key figures to the dimension ‘KeyFigures’. Drag Material to the dimension ‘Dimension1’.


5.jpg

Right click on any characteristic and select the option ‘Identify (Assign)’.


8.jpg

Select the check-boxes for the InfoProvider.


9.jpg

In the same way assign all characteristics and keyfigures to both Info Cubes and activate the MultiProvider.

 

Steps in Planning Modeler

            Go to tcode RSPLAN. Click on the button ‘Start Modeler’ to start the Planning Modeler.

10.jpg

            The Planning Modeler is a Web-based modeling environment for all tasks related to planning and is the central modeling tool for all planning-specific objects. You can use the Planning Modeler to model, manage and test all the metadata that belongs to the planning scenario. The different objects connected with planning, such as aggregation levels, are maintained in the Modeler, as are the relationships between these objects.


Give the name of the MultiProvider and click on either ‘Start’ button or press ‘Enter’ key.


11.jpg


The Info Objects of the MultiProvider will be listed in the lower part of the screen.


12.jpg

Aggregation levels

            To determine the level on which data can be entered or changed (manually through user input or automatically by a planning function), you define an InfoProvider of type aggregation level. An aggregation level consists of a subset of the characteristics and key figures of a MultiProvider or real-time InfoCube. Real-time InfoCubes are used to store data.


13.jpg


            All InfoObjects of the MultiProvider will be listed. Select the InfoObjects that should be included in the aggregation level by checking the check-box in the ‘Used’ column.


14.jpg

Check the Aggregation level by clicking on the ‘Check’ button. Save and activate the Aggregation level.


Steps in BEx Query Designer

            Once you have defined a query on one of the InfoProvider types listed above, you see the Planning tab under the Properties of structural components (in key figures or restricted key figures, for example). You can use the options here to set which structural components of an input-ready query you want to be input ready at run-time. For structural components that are not input ready, you can also set whether they are viewed as data not relevant for locking or are just protected against manual entry.

 

Option: Not input ready (not relevant for locking)
Description: The structural components are not locked for the exclusive access of one user, because many users use this data as a reference (actual data, for example). This is the default setting.

Option: Not input ready (relevant for locking)
Description: If you want to protect structural components against manual entries but allow changes by planning functions, you can use locks to protect this data for one particular user. This ensures that the planning function works with the displayed data only and not with data that has been changed by other users.

Option: Input ready (relevant for locking)
Description: The data is locked for one user and is input-ready for manual planning.
 

For the selection that needs to be input-ready, go to the Planning tab of its properties and under Change Data select ‘Input ready (relevant for locking)’.


15.jpg

Finally, in the overall query properties (Ctrl+P), on the Planning tab select ‘Start Query in Change Mode’.


16.jpg

RSRT Output

 

Then to add the entered values to the Real-time cube click on ‘Save’ and then ‘Transfer’.

 

17.jpg

BEx Analyzer Output


18.jpg

In BEx analyzer to add the entered values to the Real-time cube click on ‘Save Values’ and then ‘Transfer Values’.


19.jpg

Points to be noted:

 

  • Ensure that the real-time InfoCube is in planning mode; a small sketch for switching the load behavior in code follows the screenshot below.

20.jpg
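If the switch between load mode and planning mode has to be done programmatically (for example as a step in a process chain), this is commonly done with the function modules RSAPO_SWITCH_BATCH_TO_TRANS and RSAPO_SWITCH_TRANS_TO_BATCH. Treat the module names and the i_infocube parameter in this sketch as assumptions and verify them in SE37 before use:

* Sketch: switch the real-time InfoCube to planning mode (and back).
* Function module names and the parameter i_infocube are assumptions.
CALL FUNCTION 'RSAPO_SWITCH_BATCH_TO_TRANS'
  EXPORTING
    i_infocube = 'ZS_IPTRD'.     "real-time cube from this example

* After planning, switch back to load mode if data loads are required:
* CALL FUNCTION 'RSAPO_SWITCH_TRANS_TO_BATCH'
*   EXPORTING
*     i_infocube = 'ZS_IPTRD'.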


  • To ensure that only one user is able to change data, “their” data is locked and cannot be changed by other users. Depending on the expected load (determined by the number of users working in parallel and the complexity of the selection), you can specify one of several lock processes as the default. The lock algorithm is used by BW-BPS and BI Integrated Planning.

 

21.jpg

  • Lock can be released using the Tcode: SM12.

 

22.jpg

 

  • All the InfoObjects selected in the aggregation level should be maintained in the query; otherwise the input-ready field may be disabled.
  • Once you click Save in the input-ready query, a request is created in the Manage tab of your InfoCube.


23.jpg


  • Data requests in real-time InfoProviders stay open until the number of records in the data request exceeds 50,000. When an application writes data to the request and the limit of 50,000 records is exceeded, the data is stored and the request is closed. Thus, a request can actually contain more than 50,000 records.
  • It is still possible to roll-up and define aggregates, compress and so on.
  • Note that switching from planning to loading automatically closes the open planning request.

 

24.jpg

Recently Featured Content for SAP NetWeaver BW


Listen to Dr. Vishal Sikka, Member of the SAP Executive Board!

In this interview from SAP TechEd in Bangalore, he talks about SAP NetWeaver BW on SAP HANA. Watch this video from the SAP TechEd India show floor. January 2014

 

Good News - Easier Modeling of the SEM Add-On in Solution Manager

Recently the following option was made available for using the SEM component on top of an existing SAP NetWeaver BW system, starting from release SAP NetWeaver 7.0: you can now define the SEM component as a "normal" add-on on top of the SAP NetWeaver BW system. This means that creating the stack.xml for the Software Update Manager and the database migration option is no longer a big hurdle. Read more... November 2013

 

Overview SAP BW 7.4 SP5 on SAP HANA and further Roadmap

SAP NetWeaver BW on SAP HANA continues to be the cornerstone of SAP’s strategic vision for enterprise data warehousing, providing organizations a solid data foundation to capture, store, transform and manage data in a scalable, enterprise-ready data warehouse. And as new challenges arise in the market (exploding data volumes, new data sources, the need for real-time information access, etc.), SAP BW on HANA continues to evolve in order to meet the growing challenges imposed on IT by these ever-changing market forces. The release of SAP BW running on SAP HANA is a great example of how SAP BW has evolved to ensure organizations continue to leverage their investment in SAP BW to meet these new challenges.

See this presentation to learn what SAP is doing next to evolve SAP BW on SAP HANA with SAP BW 7.4 SP5. October 2013

 

The Support Package Stack 10 for SAP NetWeaver 7.3 is released!

See BW-specific information on the page SAP NetWeaver 7.3 BW ABAP Support Packages - News. September 2013

 

SAP EDW Positioning

Check out this presentation by Lothar Henkes positioning EDW based on the SAP Real-Time Data Platform. August 2013

 

How to Consume SAP HANA Models in SAP NetWeaver BW from Remote HANA Instances

This paper's focus is the use case when BW consumes the data or information models of a remote SAP HANA system (e.g. standalone/native SAP HANA, Accelerator). Read about loading the data from HANA into BW via ODP (Operational Data Provisioning) and a direct access solution leveraging the BW VirtualProvider (modeling options for direct access). The BW system could be running on any database.

 

Scale Out - Best Practices

Check out this latest comprehensive overview on the delta information for scale out implementations by Marc Hartz.

 

The HANA EDW

Read the new blog from Thomas Zurek about the HANA EDW!

 

Support Package 9 for SAP NetWeaver BW 7.3 is released!

Support Package 9 provides the SAP NetWeaver BW near-line storage solution (NLS) based on Sybase IQ. This solution helps to reduce TCO by reducing the data volume in SAP NetWeaver BW, shortens backup time frames, provides high-speed analysis of historical information residing in NLS, and is an SAP-owned solution from a single source. Please also see the corresponding release notes and the information page for SAP NetWeaver BW 7.30 ABAP Support Packages.

 

How to... Log Changes in Plan Data when using the SAP NetWeaver BW Planning Applications Kit

In the How to Paper - Log Changes in Plan Data Using DataStore Objects (see http://scn.sap.com/docs/DOC-16124) we have given an example how to use the Logging BAdI in SAP NetWeaver BW-IP. This paper is building on this How to Paper and describes how the logging functionality can now also be used in conjunction with HANA and the Planning Applications Kit (in memory planning on SAP NetWeaver BW).

 

 

SAP HANA-Native Reporting on BW Data in SAP NetWeaver BW powered by SAP HANA  – Positioning

An important part of the whole SAP NetWeaver BW powered by SAP HANA story is the option to create scenarios with interaction between data owned and modeled in SAP NetWeaver BW and data owned and modeled in native SAP HANA tools. These are what we call mixed scenarios. The interaction can take place in both directions, from SAP HANA to SAP NetWeaver BW and vice versa, and there are options to physically move the data or to virtually expose it. Read more in this positioning paper by Klaus Nagel and Thomas Zurek.

 

How to.. Consume SAP HANA Models with Input Parameters in SAP NetWeaver BW Virtual Providers

Input Parameters in SAP HANA Models enable user-specific parameterization of the model execution. This paper describes how such models can be consumed in SAP NetWeaver BW (using the VirtualProvider interface) thereby allowing SAP NetWeaver BW Users to provide inputs to the models at runtime.

 

 

Automatic Test of Queries with RS Trace Tool

Queries that run quickly and produce the right results are crucial for customers. With the trace tool environment, it is now possible to set up test scenarios for the key queries of your BW solution and to re-execute and evaluate them whenever required. Read about possible use cases presented by Lars Breddemann and check out Tobias Kaufmann’s detailed description of how this tool can help to prepare your upgrade to SAP NetWeaver BW 7.30 on SAP HANA and ensure proper quality of your queries before and after applying SAP Notes and support packages.

 

SCN Space SAP NetWeaver BW on SAP HANA

The new SCN space SAP NetWeaver BW on SAP HANA is now live!

Check out this space for the latest news and summaries. Explore the getting started information and join in the forum discussions.

 

Demo - Failover Scenario SAP NetWeaver BW Powered by SAP HANA

This demo shows business continuity of SAP NetWeaver BW 7.3 powered by SAP HANA in a failover scenario. The technical demo shows the continuous operation of the BW application server when one of the HANA DB high availability nodes goes out of service (due to a hardware failure, etc.).

 

 

Globe and Mail Runs SAP NetWeaver BW to Sell More Papers

Read how Globe and Mail, Canada's premier newspaper company, improves its marketing campaigns and drives increased sales leveraging their investment in SAP NetWeaver BW.

 

Join our CSA Know-How Network Webinar: Migrating to SAP NetWeaver BW on SAP HANA on  December 12, 2012 !

With post copy automation functionality supported by SAP NetWeaver Landscape Virtualization Management software, you can quickly and easily migrate to SAP NetWeaver Business Warehouse (SAP NetWeaver BW) on SAP HANA. During the next Customer Solution Adoption (CSA) Know-How Network Webinar "New Tools to Ease Migration to SAP NetWeaver BW on SAP HANA", we’ll take an in-depth look at using post copy automation to migrate your existing BW system, while enabling parallel delta loading from your ERP system.

Join Sara Hollister from the CSA team  to learn about: Migration paths to SAP NetWeaver BW on SAP HANA, improved migration with automated task lists and dual delta queue creation for side-by-side BW loading

 

 


Easier Migration to SAP NetWeaver BW powered by SAP HANA with ABAP Post-Copy Automation for SAP NetWeaver BW

To reduce downtime in your production landscape, one of the recommended migration paths from SAP NetWeaver Business Warehouse (SAP NetWeaver BW) to SAP NetWeaver BW on SAP HANA comprises a system copy of your SAP NetWeaver BW system. The system copy procedure of SAP NetWeaver BW systems and landscapes is complex for a number of reasons however. There are a large number of configuration settings (such as connections and delta queue handling for data loading) and system copy scenarios of SAP NetWeaver BW (each with different landscape aspects) for example that have to be handled as part of every system copy, regardless of whether the system copy is part of the migration to SAP HANA or if you want to perform regular system copies of your SAP NetWeaver BW landscape.

 

You can now dramatically improve your system copy process by using automated system copy activities, which is essential for a successful, fast and safe SAP NetWeaver BW system copy. To this end, SAP NetWeaver Landscape Virtualization Management offers preconfigured "task lists" used by the ABAP task manager for lifecycle management automation. You can also enable SAP NetWeaver BW powered by SAP HANA to “go productive” with the parallel operation of your existing production system, both connected to the same back-end systems. This is achieved using a special and unique automated solution for delta queue cloning and synchronization on the production systems. SAP Note https://service.sap.com/sap/support/notes/886102 (SMP logon required) thus becomes obsolete. By using post-copy automation for SAP NetWeaver BW (BW PCA) in the migration from SAP NetWeaver BW to SAP NetWeaver BW on SAP HANA, the process can be shortened by weeks and becomes easier, faster and more reliable.

 

Support Package 8 for SAP NetWeaver BW 7.3  is released!

We are pleased to announce the release of SAP NetWeaver BW 7.3, SP8, a service pack release that builds upon your investment in SAP NetWeaver BW 7.3 by adding even more capabilities and functionality to take you to the next level of performance, ease of use and flexibility in SAP NetWeaver BW.

SAP NetWeaver BW Support Package 8 includes many enhancements that came from direct customer feedback during the early ramp-up period of SAP NetWeaver BW 7.3 powered by SAP HANA. In this context, additional integration scenarios have been added between SAP NetWeaver BW on SAP HANA and SAP HANA Data Mart use cases for further flexibility and to enable  the “not-active” data concept for optimized RAM sizing and lower TCO. Further enhancements include support for converting existing SPOs to SAP HANA-optimized DSOs and  InfoCubes, and partitioning of write-optimized DSOs.

 

SAP HANA as Driver of EDW Evolution LSA++ (Layered Scalable Architecture) for SAP NetWeaver BW on SAP HANA

SAP NetWeaver BW on SAP HANA means new options and approaches for modeling and designing SAP NetWeaver BW systems. An adoption of best practices and architecture is necessary to reap maximum benefit from SAP HANA. In this spirit, LSA++ is an adaptation and extension of the well-known Layered Scalable Architecture (LSA) for SAP NetWeaver BW. How do you deal with persistent data in an SAP HANA context, and with operational and real-time data? What are the consequences for the data mart layer? What does it mean for the BW EDW? Overall, how do you leverage SAP HANA to make SAP NetWeaver BW more flexible, more responsive and more agile? LSA++ is the holistic framework for consistent BI covering EDW, operational and agile BI.

 


Interrupts and Events in a Process chain


This document explains, at a very high level, how to use interrupts and events in a process chain effectively.

The advantages of using interrupts and events in a process chain are many: they handle lookups easily, give us more flexibility to run many chains in parallel using meta chains depending on the system size, provide an easy way to monitor and resolve issues, and use only standard SAP functionality.

 

Below I have used two process chains, one raising the event and the other capturing it using an interrupt process. These process chains can be run in two ways. First, trigger PC1; this raises an event. Then trigger PC2; this captures the event and then allows the DTP to run.

 

The second way is to trigger PC2 first; it then waits at the interrupt step for the event to be raised. Only when PC1 is triggered and raises the event does PC2 continue and load the DTP.

 

This is only a high-level view, but imagine having many meta chains and many dependent and independent process chains with many lookups; then this approach can be used very effectively. The event itself can also be raised from ABAP, as sketched below.
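The event that PC2 waits for does not have to be raised by a process chain; it can also be raised from a small ABAP step, for example at the end of a lookup load. A minimal sketch, assuming a background event Z_PC1_DONE has been defined in transaction SM62:

* Sketch: raise the background event that the interrupt step of PC2 waits for.
* The event name Z_PC1_DONE is an example and must exist in SM62.
REPORT z_raise_pc_event.

CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid                = 'Z_PC1_DONE'
  EXCEPTIONS
    bad_eventid            = 1
    eventid_does_not_exist = 2
    eventid_missing        = 3
    raise_failed           = 4
    OTHERS                 = 5.

IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised, sy-subrc =', sy-subrc.
ELSE.
  WRITE: / 'Event Z_PC1_DONE raised.'.
ENDIF.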

 

1.jpg

 

2.jpg

3.jpg

PC 2 :

 

4.jpg

5.jpg

 

Thanks,

Sathya


Certified Partner Software

248077_l_srgb_s_gl.jpg

The partner software listed below is licensed for the certification category Business Information Warehouse.

 

It is updated quarterly. Last update: September 2013. Next Update: December 2013.

For a complete and up-to-date picture, please visit our Partner Information Center at http://www.sap.com/partners/directories/SearchSolution.epx and search in the corresponding certification categories.

 

 

 

 

 

Integration Scenario BW-OBI - OLAP BAPI

 

Certified Partner Software Release | Partner Details
arcplan Enterprise 7 | arcplan Information Services GmbH
Cubeware Cockpit V6PRO | Cubeware GmbH
DeltaMaster 5.5 | Bissantz & Company GmbH
MicroStrategy 9.0 | MicroStrategy Inc
STATISTICA 11 | Statsoft, Inc.
WebFocus BW Query Adapter 7.0 | Information Builders, Inc.
IBM Cognos Business Intelligence 10.2 | IBM Canada, Ltd.

 

 

Integration Scenario BW-ODB- Reporting for OLAP

 

Certified Partner Software Release | Partner Details
DeltaMaster 5.5 | Bissantz & Company GmbH
Tableau 7.0 | Tableau Software

 

 

Integration Scenario BW-OHS Open Hub Service
Certified Partner Software Release | Partner Details
IBM InfoSphere Information Server 8 | IBM Deutschland Research & Development
Informatica PowerExchange 9.1 | Informatica Corporation
Solutions Engineering Adaptor Suite 2.0 | Solutions Engineering Corporation
Teradata Extract/Load Solution 1.3.5 | Teradata Operations, Inc.
Informatica PowerExchange 9.1 | Informatica Corporation
Microsoft SQL Server Integ 11.0 | Microsoft (China) Co. Ltd

 

 

Integration Scenario BW-SCH Job Scheduling
Certified Partner Software Release | Partner Details
A-AUTO 7.2 | BSP Incorporated
ActiveBatch Extension 2 | Advanced Systems Concepts, Inc.
Dollar Universe 6 | ORSYP SA
IBM Tivoli Workload Scheduler for App.. 8.6 | IBM Deutschland Research & Development
Tidal Enterprise Scheduler 5.3 | Cisco Systems, Inc.
Tidal Enterprise Scheduler 6 | Cisco Systems, Inc.
VISUAL TOM 5.3 | ABSYSS
UC4 ONE Automation V9 | Automic Software GmbH

 

 

Integration Scenario BW-STA Data Staging
Certified Partner Software Release | Partner Details
IBM InfoSphere Information Server 8 | IBM Deutschland Research & Development
MS SQL Server Integration Services 10.0 | Microsoft Corporation
Solutions Engineering Adaptor Suite 2.0 | Solutions Engineering Corporation
Informatica PowerExchange 9.5.1 | Informatica Corporation
Microsoft SQL Server Integ 11.0 | Microsoft (China) Co. Ltd

 

 

Integration Scenario BW-UDC-JDBC Universal Data Connect JDBC
Certified Partner Software Release | Partner Details
NonStop SQL/MX 3.1 | Hewlett-Packard Company

 

 

Integration Scenario BW-XMLA XML for Analysis
Certified Partner Software Release | Partner Details
Board Management Intelligence 7.2 | Board International SA
Dundas Dashboard 2.5 | Dundas Data Visualization, Inc.
Next Analytics 5 | Nextanalytics Corporation
PROGNOZ Platform 5.0 | PROGNOZ

 

 

Integration Scenario NW-BI-NLS  Nearline Storage
Certified Partner Software Release | Partner Details
Informatica Nearline 3.1 | Informatica Corporation
OutBoard Data Browser and Analyzer 1.0 | DataVard Ltd.
OutBoard with Storage Management 2.0 | DataVard Ltd.
PBS CBW NLS 3.4 | PBS Software GmbH
PBS CBW NLS IQ 3.4 | PBS Software GmbH
PBS CBW NLS IQ 7.3 | PBS Software GmbH
NW-BW-NLS 7.30 - Informatica ILM Nearline 6.1A | Informatica Corporation

 

 

In addition to the certified partner software, nearline storage can be implemented on IBM DB2.

The nearline storage solution for SAP NetWeaver BW on IBM DB2 LUW utilizes various DB2 capabilities and is included in the SAP DB2 OEM licence. Further information can be found in SAP Note 1405664.

 

High Performance Near-Line Storage leverages the High Performance Storage Saver functionality of the IBM DB2 Analytics Accelerator and DB2 for z/OS. Further information can be found in SAP Note 1815192.

 

 

Partner Integration:
SAP Interface | Certified Partner Software Release | Partner Details
HANA-BW CNT 7.3 - HANA BW Content 7.3 | gkm/SKM 1.0 | Gicom GmbH
BUSINESS INTEL - Business Intelligence | CT-BW Analyzer & Docu 2.1 | CT-Softwareberatungs GmbH
BW-CNT 7.0 - Third-Party Content for SAP NetWeaver BI 7.0 | FinCon 2.0 | InnovatIT Solutions Inc
ABAP 7.0 - ABAP Add-On for SAP NetWeaver Application Server 7.00 | Firsteam Framework Fiscal Solution Suite 1.0 | Firsteam Consulting S.A.
ABAP 7.0 - ABAP Add-On for SAP NetWeaver Application Server 7.00 | International Regulatory Reporting (IRR) 1.0 | BearingPoint Software Solutions GmbH
ABAP 7.0 - ABAP Add-On for SAP NetWeaver Application Server 7.00 | Xtreme Reporting 10.0 | Unit Consulting

 

 

Integration Scenario BO_DS Integration with BusinessObjects
-  SAP BusinessObjects Data Services:

 

SAP Interface | Certified Partner Software Release | Partner Details
BO-DS-WI-OW | Greenplum 4.0 | Greenplum, a division of EMC
BO-DS-WI-OW | Vectorwise ODBC Driver 1.6 | Actian Corporation

SAP First Guidance – SAP NetWeaver BW 7.40 SP5 powered by SAP HANA - CompositeProvider


A CompositeProvider is an InfoProvider in which you can combine data from BW InfoProviders such as InfoObjects, DataStore Objects, SPOs and InfoCubes, or SAP HANA views such as Analytical or Calculation Views using join or union operations to make the data available for reporting. This paper explains how to create a CompositeProvider in SAP BW 7.4 SP5 using the BW modeling tools in SAP HANA Studio.

View this Document

Article Hierarchy in SAP BI : Approach and Solution


Document Objectives:


The objective of this document is to provide the important configuration details of the master data article hierarchy, along with enhancement details.


0MATERIAL does not provide the article hierarchy by default. The hierarchy levels provided for 0MATERIAL come from the standard DataSources below:

 

0MATERIAL_LGEN_HIER

0MATERIAL_LKLS_HIER

0MATERIAL_LPRH_HIER

 

These provide only the merchandise category level of the hierarchy. To derive the exact article (SKU) level we need to use the standard options below. There is no specific document that demonstrates the article hierarchy.


Configuration Details


Go to RSA5 in ECC and look for the IS-R nodes. You will find two kinds of extractors: 0CM_CDT1_ATTR and 0RF_ARTHIER1_ATTR. In my case I had to use the 0RF_ARTHIER1_ATTR series of extractors.


Activate the following DataSources in R/3:

0RF_ARTHIER1_ATTR,0RF_ARTHIER2_ATTR,0RF_ARTHIER3_ATTR,0RF_ARTHIER4_ATTR, 0RF_ARTHIER5_ATTR,0RF_ARTHIER6_ATTR,0RF_ARTHIER7_ATTR,0RF_ARTHIER8_ATTR, 0RF_PRODUCT_ATTR,0RF_PRODUCT_TEXT,0RF_ARTHIER1_TEXT,0RF_ARTHIER2_TEXT, 0RF_ARTHIER3_TEXT,0RF_ARTHIER4_TEXT,0RF_ARTHIER5_TEXT, 0RF_ARTHIER6_TEXT, 0RF_ARTHIER7_TEXT, 0RF_ARTHIER8_TEXT.


Untitled1.png


Replicate the Data source:

In BI, replicate the DataSources for the source system in which you activated them. Also, activate the standard content for the DSO 0RF_DS01; this includes update rules, transfer rules, InfoObjects and InfoSources.


Untitled2.png

After you have activated the 0RF_DS01 DSO, go to RSA5 (on the BI server) and activate the DataSource 0RF_ARTHIER_ACT. After this, activate the InfoSource 0RF_ACT_HIER. Also activate the related update rules from the BI Content.

 

Untitled3.pngUntitled4.png

Now you have to create InfoPackages for each DataSource that you activated in step 1. I used delta loads for the ATTR DataSources and full loads for the TEXT DataSources.

Run the InfoPackages. If the loads are green, then congratulations, half of the work is done: the most important part, having the hierarchy data on the BI server, is complete.

Maintain the Hierarchy

Go to the Hierarchy tab of 0MATERIAL and maintain the external characteristics for the hierarchy. I had to add the following characteristics:


0CM_CDT1 (Category), 0CM_CDT2 (Subcategory), 0CM_CDT3 (Segment), 0CM_CDT4 (Sub segment), 0CM_CDT5 (Sub-Sub segment), 0CM_CDT6 (Product Group),0CM_CDT7 (Brand),0CM_HIEID (Hierarchy ID),0CM_MCATDIV (Industry), and 0RF_SKU (Retail Fashion SKU)

 

Untitled5.png

Once you have these, you can continue. Now expand the 0MATERIAL node, right-click the hierarchy node, and link it to a 3.x InfoSource for flat files. Select the IDoc transfer method instead of the PSA.


Untitled6.png


Create an InfoPackage for this hierarchy. You can choose between frontend and application server. If you choose the server, make sure that both you and the user ALEREMOTE have access to the folder you select, and note down the file name you chose.


Untitled7.png

Now let’s go to SE38. You will create a program that converts the DSO containing the article hierarchy into a file with the layout required for uploading it as a hierarchy for 0MATERIAL. Here is an example of the program I developed:


PROGRAM TO DOWNLOAD FROM DSO to Flat File

 

REPORT zrf_arthier_download.

INCLUDE zrf_arthier_download_top.
INCLUDE zrf_arthier_download_forms.

* Parameters
PARAMETERS: p_hier TYPE /bi0/arf_ds0100-cm_hieid.
PARAMETERS: p_local  RADIOBUTTON GROUP rad1,
            p_server RADIOBUTTON GROUP rad1 DEFAULT 'X'.

START-OF-SELECTION.

  PERFORM selection USING p_hier.

* Checks if there is any hierarchy on the DSO
  IF it_dso[] IS NOT INITIAL.
    PERFORM prepare_download USING p_hier.
* If the download table contains any data, it downloads it.
    IF it_download[] IS NOT INITIAL.
      PERFORM download_file USING p_local.
    ENDIF.
  ELSE.
    EXIT.
  ENDIF.

 

*&---------------------------------------------------------------------*
*&  Include           ZRF_ARTHIER_DOWNLOAD_TOP
*&---------------------------------------------------------------------*
* Types
TYPES: BEGIN OF ty_dso,
         rf_sku     TYPE /bi0/arf_ds0100-rf_sku,
         cm_cdt1    TYPE /bi0/arf_ds0100-cm_cdt1,
         cm_hieid   TYPE /bi0/arf_ds0100-cm_hieid,
         validfrom  TYPE /bi0/arf_ds0100-validfrom,
         cm_mcatdiv TYPE /bi0/arf_ds0100-cm_mcatdiv,
         cm_cdt2    TYPE /bi0/arf_ds0100-cm_cdt2,
         cm_cdt3    TYPE /bi0/arf_ds0100-cm_cdt3,
         cm_cdt4    TYPE /bi0/arf_ds0100-cm_cdt4,
         cm_cdt5    TYPE /bi0/arf_ds0100-cm_cdt5,
         cm_cdt6    TYPE /bi0/arf_ds0100-cm_cdt6,
         cm_cdt7    TYPE /bi0/arf_ds0100-cm_cdt7,
         validto    TYPE /bi0/arf_ds0100-validto,
         rf_tlevel  TYPE /bi0/arf_ds0100-rf_tlevel,
       END OF ty_dso.

TYPES: BEGIN OF ty_download,
         field(255) TYPE c,
       END OF ty_download.

* Internal tables
DATA it_dso      TYPE TABLE OF ty_dso.
DATA it_download TYPE TABLE OF ty_download.

* Work areas
DATA wa_dso      TYPE ty_dso.
DATA wa_download TYPE ty_download.

* Constants
CONSTANTS c_nodeid(7)     TYPE c VALUE 'Node Id'.
CONSTANTS c_infname(15)   TYPE c VALUE 'InfoObject Name'.
CONSTANTS c_nodename(9)   TYPE c VALUE 'Node Name'.
CONSTANTS c_link(4)       TYPE c VALUE 'Link'.
CONSTANTS c_parent(11)    TYPE c VALUE 'Parent Node'.
CONSTANTS c_validto(8)    TYPE c VALUE 'Valid to'.
CONSTANTS c_validfrom(10) TYPE c VALUE 'Valid from'.
CONSTANTS c_language(8)   TYPE c VALUE 'Language'.
CONSTANTS c_short(5)      TYPE c VALUE 'Short'.
CONSTANTS c_medium(6)     TYPE c VALUE 'Medium'.
CONSTANTS c_long(4)       TYPE c VALUE 'Long'.
CONSTANTS c_sep(1)        TYPE c VALUE ','.
CONSTANTS c_en(2)         TYPE c VALUE 'EN'.
CONSTANTS c_hieid(9)      TYPE c VALUE '0CM_HIEID'.
CONSTANTS c_mcatdiv(11)   TYPE c VALUE '0CM_MCATDIV'.
CONSTANTS c_cdt1(8)       TYPE c VALUE '0CM_CDT1'.
CONSTANTS c_cdt2(8)       TYPE c VALUE '0CM_CDT2'.
CONSTANTS c_cdt3(8)       TYPE c VALUE '0CM_CDT3'.
CONSTANTS c_cdt4(8)       TYPE c VALUE '0CM_CDT4'.
CONSTANTS c_cdt5(8)       TYPE c VALUE '0CM_CDT5'.
CONSTANTS c_cdt6(8)       TYPE c VALUE '0CM_CDT6'.
CONSTANTS c_cdt7(8)       TYPE c VALUE '0CM_CDT7'.
CONSTANTS c_0material(9)  TYPE c VALUE '0MATERIAL'.
CONSTANTS c_e(1)          TYPE c VALUE 'E'.
CONSTANTS c_x(1)          TYPE c VALUE 'X'.

 

*&---------------------------------------------------------------------*
*&  Include           ZRF_ARTHIER_DOWNLOAD_FORMS
*&---------------------------------------------------------------------*

*&---------------------------------------------------------------------*
*&      Form  SELECTION
*&---------------------------------------------------------------------*
*       Reads the article hierarchy records for the selected hierarchy
*----------------------------------------------------------------------*
*      -->P_HIER  hierarchy ID from the selection screen
*----------------------------------------------------------------------*
FORM selection USING p_hier.

  REFRESH it_dso.
  REFRESH it_download.
  CLEAR wa_dso.
  CLEAR wa_download.

  SELECT rf_sku
         cm_cdt1
         cm_hieid
         validfrom
         cm_mcatdiv
         cm_cdt2
         cm_cdt3
         cm_cdt4
         cm_cdt5
         cm_cdt6
         cm_cdt7
         validto
         rf_tlevel
    INTO TABLE it_dso
    FROM /bi0/arf_ds0100
    WHERE cm_hieid   = p_hier
    AND   rf_del_ind <> c_x.

ENDFORM.                    " SELECTION

*&---------------------------------------------------------------------*
*&      Form  PREPARE_DOWNLOAD
*&---------------------------------------------------------------------*
*       Builds the flat file lines in the required hierarchy layout
*----------------------------------------------------------------------*
*      -->P_HIER  hierarchy ID from the selection screen
*----------------------------------------------------------------------*
FORM prepare_download USING p_hier.

  TYPES: BEGIN OF ty_parent,
           id(8) TYPE c,
           name  TYPE /bi0/arf_ds0100-cm_cdt1,
         END OF ty_parent.

  DATA it_parent TYPE TABLE OF ty_parent.
  DATA wa_parent TYPE ty_parent.

  DATA l_short(20)  TYPE c.
  DATA l_medium(40) TYPE c.
  DATA l_long(60)   TYPE c.
  DATA l_count(8)   TYPE c.
  DATA l_validfrom  TYPE /bi0/arf_ds0100-validfrom.
  DATA l_validto    TYPE /bi0/arf_ds0100-validto.

  REFRESH it_parent.
  CLEAR wa_parent.

* Put the headers
  CONCATENATE c_nodeid    c_sep
              c_infname   c_sep
              c_nodename  c_sep
              c_link      c_sep
              c_parent    c_sep
              c_validto   c_sep
              c_validfrom c_sep
              c_language  c_sep
              c_short     c_sep
              c_medium    c_sep
              c_long
         INTO wa_download-field.
  APPEND wa_download TO it_download.
  CLEAR wa_download.

* Get the first node
  SELECT SINGLE txtsh
                txtmd
    INTO (l_short,
          l_medium)
    FROM /bi0/tcm_hieid
    WHERE cm_hieid = wa_dso-cm_hieid
    AND   langu    = c_e.

  CLEAR l_validto.
  SELECT MAX( validto )
    INTO l_validto
    FROM /bi0/arf_ds0100
    WHERE cm_hieid = p_hier.

  CLEAR l_validfrom.
  SELECT MIN( validfrom )
    INTO l_validfrom
    FROM /bi0/arf_ds0100
    WHERE cm_hieid = p_hier.

  CONCATENATE '1'         c_sep
              c_hieid     c_sep
              p_hier      c_sep
                          c_sep
              '0'         c_sep
              l_validto   c_sep
              l_validfrom c_sep
              c_en        c_sep
*             l_short
              l_medium    c_sep
              l_medium    c_sep
         INTO wa_download-field.

  APPEND wa_download TO it_download.
  CLEAR wa_download.

* Here I get the second node of the hierarchy
  READ TABLE it_dso INTO wa_dso WITH KEY rf_tlevel = 1.

  SELECT SINGLE txtsh
                txtmd
                txtlg
    INTO (l_short,
          l_medium,
          l_long)
    FROM /bi0/tcm_mcatdiv
    WHERE cm_hieid   = wa_dso-cm_hieid
    AND   cm_mcatdiv = wa_dso-cm_mcatdiv
    AND   langu      = c_e.

  CLEAR l_count.
  l_count = 2.

  CONDENSE l_count.
  CONDENSE wa_parent-id.

  CONCATENATE l_count           c_sep
              c_mcatdiv         c_sep
              wa_dso-cm_hieid
              wa_dso-cm_mcatdiv c_sep
                                c_sep
              '1'               c_sep
              wa_dso-validto    c_sep
              wa_dso-validfrom  c_sep
              c_en              c_sep
*             l_short
              l_medium          c_sep
              l_medium          c_sep
         INTO wa_download-field.

  APPEND wa_download TO it_download.

  wa_parent-id   = l_count.
  wa_parent-name = wa_dso-cm_mcatdiv.
  APPEND wa_parent TO it_parent.
  CLEAR wa_parent.
  CLEAR wa_download.
  CLEAR wa_dso.
  CLEAR l_short.
  CLEAR l_medium.
  CLEAR l_long.

* Get the following nodes
 
LOOPAT it_dso INTO wa_dso
      
WHERE rf_tlevel = 02
      
AND cm_cdt1 <> space
      
AND cm_cdt2 = space.

    l_count = l_count +
1.
   
CONDENSE l_count.

   
SELECTSINGLE txtsh
                  txtmd
                  txtlg
          
INTO (l_short,
                 l_medium,
                 l_long)
          
FROM /bi0/tcm_cdt1
          
WHERE cm_hieid = wa_dso-cm_hieid
          
AND   cm_cdt1  = wa_dso-cm_cdt1
          
AND   langu    = c_e.

   
READTABLE it_parent
        
INTO wa_parent
        
WITHKEY name = wa_dso-cm_mcatdiv.

   
CONCATENATE l_count
                c_sep
                c_cdt1
                c_sep
                wa_dso-cm_hieid
                wa_dso-cm_cdt1
                c_sep
                c_sep
                wa_parent-
id
                c_sep
                wa_dso-validto
                c_sep
                wa_dso-validfrom
                c_sep
                c_en
                c_sep
*                l_short
                l_medium
                c_sep
                l_medium
                c_sep
                l_long
               
INTO wa_download-field.

    wa_parent-
id = l_count.
    wa_parent-name = wa_dso-cm_cdt1.
   
APPEND wa_parent TO it_parent.
   
APPEND wa_download TO it_download.
   
CLEAR wa_download.
   
CLEAR l_short.
   
CLEAR l_medium.
   
CLEAR l_long.
   
CLEAR wa_parent.
 
ENDLOOP.

 
LOOPAT it_dso INTO wa_dso
       
WHERE rf_tlevel = 03.

    l_count = l_count +
1.
   
CONDENSE l_count.

   
SELECTSINGLE txtsh
                  txtmd
                  txtlg
          
INTO (l_short,
                 l_medium,
                 l_long)
          
FROM /bi0/tcm_cdt2
          
WHERE cm_hieid = wa_dso-cm_hieid
          
AND   cm_cdt2  = wa_dso-cm_cdt2
          
AND   langu    = c_e.

   
READTABLE it_parent
        
INTO wa_parent
        
WITHKEY name = wa_dso-cm_cdt1.

   
CONCATENATE l_count
                c_sep
                c_cdt2
                c_sep
                wa_dso-cm_hieid
                wa_dso-cm_cdt2
                c_sep
                c_sep
                wa_parent-
id
                c_sep
                wa_dso-validto
                c_sep
                wa_dso-validfrom
                c_sep
                c_en
                c_sep
*                l_short
                l_medium
                c_sep
                l_medium
                c_sep
                l_long
               
INTO wa_download-field.

    wa_parent-
id = l_count.
    wa_parent-name = wa_dso-cm_cdt2.
   
APPEND wa_parent TO it_parent.
   
APPEND wa_download TO it_download.
   
CLEAR wa_download.
   
CLEAR l_short.
   
CLEAR l_medium.
   
CLEAR l_long.
   
CLEAR wa_parent.
 
ENDLOOP.

 
LOOPAT it_dso INTO wa_dso
     
WHERE rf_tlevel = 04.

    l_count = l_count +
1.
   
CONDENSE l_count.

   
SELECTSINGLE txtsh
                  txtmd
                  txtlg
          
INTO (l_short,
                 l_medium,
                 l_long)
          
FROM /bi0/tcm_cdt3
          
WHERE cm_hieid = wa_dso-cm_hieid
          
AND   cm_cdt3  = wa_dso-cm_cdt3
          
AND   langu    = c_e.

   
READTABLE it_parent
        
INTO wa_parent
        
WITHKEY name = wa_dso-cm_cdt2.

   
CONCATENATE l_count
                c_sep
                c_cdt3
                c_sep
                wa_dso-cm_hieid
                wa_dso-cm_cdt3
                c_sep
                c_sep
                wa_parent-
id
                c_sep
                wa_dso-validto
                c_sep
                wa_dso-validfrom
                c_sep
                c_en
                c_sep
                l_short
                c_sep
                l_medium
                c_sep
                l_long
               
INTO wa_download-field.

    wa_parent-
id = l_count.
    wa_parent-name = wa_dso-cm_cdt3.
   
APPEND wa_parent TO it_parent.
   
APPEND wa_download TO it_download.
   
CLEAR wa_download.
   
CLEAR l_short.
   
CLEAR l_medium.
   
CLEAR l_long.
   
CLEAR wa_parent.
 
ENDLOOP.

 
LOOPAT it_dso INTO wa_dso
   
WHERE rf_tlevel = 05.

    l_count = l_count +
1.
   
CONDENSE l_count.

   
SELECTSINGLE txtsh
                  txtmd
                  txtlg
          
INTO (l_short,
                 l_medium,
                 l_long)
          
FROM /bi0/tcm_cdt4
          
WHERE cm_hieid = wa_dso-cm_hieid
          
AND   cm_cdt4  = wa_dso-cm_cdt4
          
AND   langu    = c_e.

   
READTABLE it_parent
        
INTO wa_parent
        
WITHKEY name = wa_dso-cm_cdt3.

   
CONCATENATE l_count
                c_sep
                c_cdt4
                c_sep
                wa_dso-cm_hieid
                wa_dso-cm_cdt4
                c_sep
                c_sep
                wa_parent-
id
                c_sep
                wa_dso-validto
                c_sep
                wa_dso-validfrom
                c_sep
                c_en
                c_sep
                l_short
                c_sep
                l_medium
                c_sep
                l_long
               
INTO wa_download-field.

    wa_parent-
id = l_count.
    wa_parent-name = wa_dso-cm_cdt4.
   
APPEND wa_parent TO it_parent.
   
APPEND wa_download TO it_download.
   
CLEAR wa_download.
   
CLEAR l_short.
   
CLEAR l_medium.
   
CLEAR l_long.
   
CLEAR wa_parent.
 
ENDLOOP.

 
LOOPAT it_dso INTO wa_dso
   
WHERE rf_tlevel = 06.

    l_count = l_count +
1.
   
CONDENSE l_count.

   
SELECTSINGLE txtsh
                  txtmd
                  txtlg
          
INTO (l_short,
                 l_medium,
                 l_long)
          
FROM /bi0/tcm_cdt5
          
WHERE cm_hieid = wa_dso-cm_hieid
          
AND   cm_cdt5  = wa_dso-cm_cdt5
          
AND   langu    = c_e.

   
READTABLE it_parent
        
INTO wa_parent
        
WITHKEY name = wa_dso-cm_cdt4.

   
CONCATENATE l_count
                c_sep
                c_cdt5
                c_sep
                wa_dso-cm_hieid
                wa_dso-cm_cdt5
                c_sep
                c_sep
                wa_parent-
id
                c_sep
                wa_dso-validto
                c_sep
                wa_dso-validfrom
                c_sep
                c_en
                c_sep
                l_short
                c_sep
                l_medium
                c_sep
                l_long
               
INTO wa_download-field.

    wa_parent-
id = l_count.
    wa_parent-name = wa_dso-cm_cdt5.
   
APPEND wa_parent TO it_parent.
   
APPEND wa_download TO it_download.
   
CLEAR wa_download.
   
CLEAR l_short.
   
CLEAR l_medium.
   
CLEAR l_long.
   
CLEAR wa_parent.
 
ENDLOOP.

 
LOOPAT it_dso INTO wa_dso
   
WHERE rf_tlevel = 07.

    l_count = l_count +
1.
   
CONDENSE l_count.

   
SELECTSINGLE txtsh
                  txtmd
                  txtlg
          
INTO (l_short,
                 l_medium,
                 l_long)
          
FROM /bi0/tcm_cdt6
          
WHERE cm_hieid = wa_dso-cm_hieid
          
AND   cm_cdt6  = wa_dso-cm_cdt6
          
AND   langu    = c_e.

   
READTABLE it_parent
        
INTO wa_parent
        
WITHKEY name = wa_dso-cm_cdt5.

   
CONCATENATE l_count
                c_sep
                c_cdt6
                c_sep
                wa_dso-cm_hieid
                wa_dso-cm_cdt6
                c_sep
                c_sep
                wa_parent-
id
                c_sep
                wa_dso-validto
                c_sep
                wa_dso-validfrom
                c_sep
                c_en
                c_sep
                l_short
                c_sep
                l_medium
                c_sep
                l_long
               
INTO wa_download-field.

    wa_parent-
id = l_count.
    wa_parent-name = wa_dso-cm_cdt6.
   
APPEND wa_parent TO it_parent.
   
APPEND wa_download TO it_download.
   
CLEAR wa_download.
   
CLEAR l_short.
   
CLEAR l_medium.
   
CLEAR l_long.
   
CLEAR wa_parent.
 
ENDLOOP.

 
LOOPAT it_dso INTO wa_dso
   
WHERE rf_tlevel = 08.

    l_count = l_count +
1.
   
CONDENSE l_count.

   
SELECTSINGLE txtsh
                  txtmd
                  txtlg
          
INTO (l_short,
                 l_medium,
                 l_long)
          
FROM /bi0/tcm_cdt7
          
WHERE cm_hieid = wa_dso-cm_hieid
          
AND   cm_cdt7  = wa_dso-cm_cdt7
          
AND   langu    = c_e.

   
READTABLE it_parent
        
INTO wa_parent
        
WITHKEY name = wa_dso-cm_cdt6.

   
CONCATENATE l_count
                c_sep
                c_cdt7
                c_sep
                wa_dso-cm_hieid
                wa_dso-cm_cdt7
                c_sep
                c_sep
                wa_parent-
id
                c_sep
                wa_dso-validto
                c_sep
                wa_dso-validfrom
                c_sep
                c_en
                c_sep
                l_short
                c_sep
                l_medium
                c_sep
                l_long
               
INTO wa_download-field.

    wa_parent-
id = l_count.
    wa_parent-name = wa_dso-cm_cdt7.
   
APPEND wa_parent TO it_parent.
   
APPEND wa_download TO it_download.
   
CLEAR wa_download.
   
CLEAR l_short.
   
CLEAR l_medium.
   
CLEAR l_long.
   
CLEAR wa_parent.
 
ENDLOOP.

 
LOOPAT it_dso INTO wa_dso
 
WHERE rf_tlevel = 00
   
AND rf_sku <> space.

    l_count = l_count +
1.
   
CONDENSE l_count.

   
SELECTSINGLE txtmd
          
INTO l_medium
          
FROM /bi0/tmaterial
          
WHERE material = wa_dso-rf_sku
          
AND   langu    = c_e.

   
IF wa_dso-cm_cdt7 <> space.
     
READTABLE it_parent
     
INTO wa_parent
     
WITHKEY name = wa_dso-cm_cdt7.
   
ELSEIF wa_dso-cm_cdt6 <> space.
     
READTABLE it_parent
     
INTO wa_parent
     
WITHKEY name = wa_dso-cm_cdt6.
   
ELSEIF wa_dso-cm_cdt5 <> space.
     
READTABLE it_parent
     
INTO wa_parent
     
WITHKEY name = wa_dso-cm_cdt5.
   
ELSEIF wa_dso-cm_cdt4 <> space.
     
READTABLE it_parent
     
INTO wa_parent
     
WITHKEY name = wa_dso-cm_cdt4.
   
ELSEIF wa_dso-cm_cdt3 <> space.
     
READTABLE it_parent
     
INTO wa_parent
     
WITHKEY name = wa_dso-cm_cdt3.
   
ELSEIF wa_dso-cm_cdt2 <> space.
     
READTABLE it_parent
     
INTO wa_parent
     
WITHKEY name = wa_dso-cm_cdt2.
   
ELSEIF wa_dso-cm_cdt1 <> space.
     
READTABLE it_parent
     
INTO wa_parent
     
WITHKEY name = wa_dso-cm_cdt1.
   
ENDIF.

   
CONCATENATE l_count
            c_sep
            c_0material
            c_sep
            wa_dso-rf_sku
            c_sep
            c_sep
            wa_parent-
id
            c_sep
            wa_dso-validto
            c_sep
            wa_dso-validfrom
            c_sep
            c_en
            c_sep
            c_sep
            l_medium
            c_sep
           
INTO wa_download-field.

    wa_parent-
id = l_count.
    wa_parent-name = wa_dso-rf_sku.
   
APPEND wa_parent TO it_parent.
   
APPEND wa_download TO it_download.
   
CLEAR wa_download.
   
CLEAR l_short.
   
CLEAR l_medium.
   
CLEAR l_long.
   
CLEAR wa_parent.
 
ENDLOOP.
ENDFORM.                    " PREPARE_DOWNLOAD
*&---------------------------------------------------------------------*
*&      Form  DOWNLOAD_FILE
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*  -->  p1        text
*  <--  p2        text
*----------------------------------------------------------------------*
FORM download_file USING p_local.

  CONSTANTS c_fbd(8)    TYPE c VALUE 'fbdaix98'.
  CONSTANTS c_sidd(3)   TYPE c VALUE 'FBD'.
  CONSTANTS c_fbt(8)    TYPE c VALUE 'fbtaix98'.
  CONSTANTS c_sidt(3)   TYPE c VALUE 'FBT'.
  CONSTANTS c_fbp(8)    TYPE c VALUE 'fbpaix01'.
  CONSTANTS c_sidp(3)   TYPE c VALUE 'FBP'.
  CONSTANTS c_part1(9)  TYPE c VALUE '/usr/sap/'.
  CONSTANTS c_part2(31) TYPE c VALUE '/SYS/global/bi_data/arthier.csv'.

  DATA l_filename  TYPE string.
  DATA l_filelocal TYPE rlgrap-filename.

  CLEAR l_filename.

  IF p_local IS INITIAL.
* Server download: build the target path from the system ID of the current host
    IF sy-host = c_fbd.
      CONCATENATE c_part1 c_sidd c_part2 INTO l_filename.
    ELSEIF sy-host = c_fbt.
      CONCATENATE c_part1 c_sidt c_part2 INTO l_filename.
    ELSEIF sy-host = c_fbp.
      CONCATENATE c_part1 c_sidp c_part2 INTO l_filename.
    ENDIF.

    OPEN DATASET l_filename FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    LOOP AT it_download INTO wa_download.
      TRANSFER wa_download TO l_filename.
    ENDLOOP.
    CLOSE DATASET l_filename.
    COMMIT WORK.

  ELSE.
* Local download via the SAP GUI frontend
    CLEAR l_filelocal.
    CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
      CHANGING
        file_name     = l_filelocal
      EXCEPTIONS
        mask_too_long = 1
        OTHERS        = 2.
    IF sy-subrc <> 0.
*     No file chosen - nothing to download
    ELSE.
      IF l_filelocal <> space.
        l_filename = l_filelocal.
        CALL FUNCTION 'GUI_DOWNLOAD'
          EXPORTING
            filename = l_filename
          TABLES
            data_tab = it_download
          EXCEPTIONS
            OTHERS   = 1.
        IF sy-subrc <> 0.
*         Handle the download error here if required
        ENDIF.
      ENDIF.
    ENDIF.

  ENDIF.
ENDFORM.                    " DOWNLOAD_FILE

 





Go to SE38 again, enter the program name ZAF_ARTHIER_DOWNLOAD and click Execute.


Untitled8.png

 

Select the Hierarchy id

Untitled9.png

Execute the program with the server option; the file is written to the application server and can be checked in transaction AL11.


 

Now come to 0material Hierarchy infopackage,

External data

Select Load External data from Application Server and specify the path,


Untitled10.png

Now Schedule it in the Schedule tab,

 

Check the load, then go to the hierarchy structure; you can see the Article Hierarchy.

Untitled11.png

 

 

Untitled12.png

This is just an example; you can create your own version or modify this one.

Remember to change the file name if you’ve chosen to upload the file from the server.

Now you’re ready to include the program, the Infopackages and the last Infopackage that you created in a process chain. Enjoy your Article Hierarchy!

SAP BW 7.3: How to trigger BW process chains from ECC


In this document I will explain how to trigger a BW process chain from ECC, for example to update a particular report. I hope it can help someone who has to perform this process in the future.

 

According to the requirements, the user - after executing certain transactions in ECC - should be able to trigger a load in BW and see the reports updated with this new information.

 

To do this, we will create an ad-hoc transaction in ECC (ZBW_UPDATE_REPORTS) with an associated program (ZBW_TRIGGER_EVENT). The transaction displays a dynpro where the user can select the business area to update. After selecting the area, the user presses the "Execute" button and the program is launched. The program calls a BW function module (ZFM_TRIGGER_EVENT), passing as parameter the event associated with the selected area. Finally, this function triggers the event (ZECC_EVE01), which starts the appropriate load chain (TR_DT).

 

ECC SIDE

 

Step 1. Create the transaction in SE93. This transaction will be associated with the program that we will create in the next step.

1.png


2.png

 

Create Dialog Transaction:

3.png

 

Step 2. Create an executable program in t-code SE38. In this transaction the user will select the Area of reports that he wants to update.

4.png

 

Global Data:

5.png

 

PBO-Modules:

6.png

 

PAI-Modules:

7.png

 

FORM-Routines:

8.png

Note: SLBWD200 is the name of our SAP BW system.
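
Since the FORM routine is only shown as a screenshot, here is a minimal sketch of what the RFC call from the ECC program could look like. The routine name, the parameter name I_EVENT and the area codes are assumptions for illustration; the destination SLBWD200 and the event names are the ones used in this example.

* Hypothetical sketch of the FORM routine in ZBW_TRIGGER_EVENT (ECC side).
* The FM interface (parameter I_EVENT) is an assumption - adapt it to the
* actual interface of ZFM_TRIGGER_EVENT created in step 6.
FORM trigger_bw_event USING p_area TYPE c.

  DATA lv_event TYPE btceventid.

* Map the selected business area to the BW event (event names from this example)
  CASE p_area.
    WHEN 'T'.                       " Treasury
      lv_event = 'ZECC_EVE01'.
    WHEN 'C'.                       " Consolidation
      lv_event = 'ZECC_EVE02'.
  ENDCASE.

* Call the remote-enabled function module in the BW system
  CALL FUNCTION 'ZFM_TRIGGER_EVENT'
    DESTINATION 'SLBWD200'
    EXPORTING
      i_event = lv_event.

ENDFORM.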

 

Fill the Text Symbols and Selection Texts:

9.png


10.png

 

Step 3. The dynpro looks like this:

11.png

 

BW SIDE

 

Step 4. Create an event in t-code SM64 for each of the different loads we want to trigger. In our case: ZECC_EVE01 (for the Treasury area) and ZECC_EVE02 (for the Consolidation area).

12.png

 

Step 5. We associate the corresponding event with each chain. In our example we schedule the Treasury load chain (TR_DT) to run after the event ZECC_EVE01.

13.png

Note: it is important to plan and activate the chain and check “Periodic job”.

 

Step 6. Create the function module (in our case ZFM_TRIGGER_EVENT). This FM must be remote-enabled; it calls the standard function module BP_EVENT_RAISE, which raises the event received as a parameter.

14.png

 

Attributes:

attributes.jpg

 

Import:

15.png
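
For reference, a minimal sketch of the function module body follows. The import parameter name I_EVENT is an assumption (use whatever name is defined in the Import tab above); the call to the standard function BP_EVENT_RAISE is the part this scenario relies on.

FUNCTION zfm_trigger_event.
*"--------------------------------------------------------------------
*"  Sketch of the local interface (parameter name is an assumption):
*"  IMPORTING
*"     VALUE(I_EVENT) TYPE  BTCEVENTID
*"--------------------------------------------------------------------

* Raise the background event; every process chain scheduled
* "After Event" with this event ID will then start.
  CALL FUNCTION 'BP_EVENT_RAISE'
    EXPORTING
      eventid                = i_event
    EXCEPTIONS
      bad_eventid            = 1
      eventid_does_not_exist = 2
      eventid_missing        = 3
      raise_failed           = 4
      OTHERS                 = 5.

  IF sy-subrc <> 0.
*   Log or return the error as required
  ENDIF.

ENDFUNCTION.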

 

Step 7. Finally, just after executing the transaction on the ECC side, the BW process chain is triggered:

16.png

 

17.png

 

Best regards,

Carlos

Repair Full load problem fix for datasource 0CSM_CASE_ATTR ::


Applies to:

SAP NetWeaver Business Warehouse (formerly BI). This has been tested and working fine on SAP BI 7.0 version.

Applicable for other BW versions as well.

 

 

Summary :

This document explains the steps to fix the issue related to the datasource 0CSM_CASE_ATTR (Case Management: Case (GUID)). This is a necessary step if you have freshly installed the datasource in the BW system. The datasource is delta-enabled.

 

 

Author: Arpit Khandelwal.

Company: Accenture Services Pvt. Ltd.

 

 

Author Bio:


apk.PNG

 

Arpit Khandelwal is a SAP BI Consultant currently working with Accenture Services Private Limited. He has 2.7 years of experience in BW/BI implementation and support projects. He has been particularly involved in SAP BW archiving activities. His major area of work involves the extraction of R/3 data from LO and FI datasources.



Basic Scenario ::


The freshly installed datasource 0CSM_CASE_ATTR (Case Management: Case (GUID)) is available in your BW system.

You have run the RSA3 extractor on the R/3 side and tried to pull the data through a repair full load on the BW side, but the record counts do not match.


Number of records in RSA3: 70,936


s1.png


Number of records in the InfoObject loaded via repair full: 13,228


s2.png




Functional & Technical Aspect ::


A bug in the standard extractor program of this datasource does not allow the data to be loaded correctly through the repair full option. This needs to be fixed by installing the corrections provided by SAP OSS Note 996360 - Case Extractor not able to pull more than 1 data package. If your BW version is not covered by this note, use SAP OSS Note 1159696 - Not able to get the data using case type from case extractor.

 

 

The second OSS note also gives us the option to add one more field to the RSA3 extractor: the CHANGE_TIME field.


If you want to extract the data using CHANGE_TIME, perform the steps below.

1. Execute transaction RSA2.

2. Enter the DataSource (e.g. 0CSM_CASE_ATTR) and click the Change button.

3. Click on the Fields tab.

4. Locate the "CHANGE_TIME" field and change the property value from "P" to "X".

5. Save the changes.


s3.JPG


s1.JPG



You need to install the Note in your source system (R/3) development --> acceptance --> production to make this load work correctly.

After installing the correction note, the data was loaded correctly.


Number of records in RSA3: 62,140


s2.png


Number of records in the InfoObject loaded via repair full: 62,140

 

s4.png

 

 

Reference Sources ::

 

Note 996360 - Case Extractor not able to pull more than 1 data package

 

Note 1159696 - Not able to get the data using case type from case extractor

 

Time desync of data loading from data source - 0CSM_CASE_ATTR

Data Records movement from MCEX --> RSA7 via V3 job ::


Applies to:

SAP NetWeaver Business Warehouse (formerly BI). This has been tested and working fine on SAP BI 7.0 version.

Applicable for other BW versions as well.

 

 

Summary :

This document shows how the MCEX queues are cleared by the V3 jobs and how the data is moved to RSA7 (the BW delta queue).

 

 

Author: Arpit Khandelwal.

Company: Accenture Services Pvt. Ltd.

 

 

Author Bio:


apk.PNG

 

Arpit Khandelwal is a SAP BI Consultant currently working with Accenture Services Private Limited. He has 2.7 years of experience in BW/BI implementation and support projects. He has been particularly involved in SAP BW archiving activities. His major area of work involves the extraction of R/3 data from LO and FI datasources.



Basic Scenario ::


Basically, the new system that has been handed over to you already has a lot of data in the MCEX queues. This data needs to be pulled into the BW system after the historical data has been loaded via setup tables.



Initially, we have the following records in SMQ1 (Outbound Queue).


111.png

 

To run the V3 job, go to LBWE and click on Job Control.


112.JPG


Click on Start Date.

 

113.JPG


Select the next option as per requirement. In our case, we went for immediate.

 

114.JPG


Check and save the setting. Now click on the Print Parameters option and assign the local printer for printing.

It will show a warning, but you can click through to proceed.


115.JPG




Click on Schedule job to schedule the V3 job. In our case, it was done for the Plant Maintenance flow (application 17).
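
As a side note (this is an assumption, please verify the report name in your own system): the job scheduled by LBWE Job Control for application 17 typically executes the standard collective run report RMBWV317, so for a quick test in a sandbox the same collective run could also be started directly:

* Sketch - assumes the collective run report for application 17 (Plant
* Maintenance) is RMBWV317; check the job step created by LBWE Job Control
* before relying on this name.
SUBMIT rmbwv317 AND RETURN.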

The three phases of the job run below show the movement of data from the MCEX queue to the RSA7 queue.


Phase 1::


116.png


Phase 2::


117.png


Phase 3 ::


118.png

 

Observing the three phases, we can see that the MCEX queue is always receiving some data, because user postings are never stopped in the system.


When the V3 job is initiated, it takes the records present in the MCEX queue at that moment and starts moving them from the MCEX queue to the RSA7 queue.


Once the job completes, the number of LUWs available in RSA7 gradually rises with the data moved from the MCEX queue, which in turn keeps the MCEX queue from becoming overloaded. Ultimately, the data from RSA7 can be moved to BW through delta loads.

Semantically Partitioned Objects in SAP BW


Business Scenario:

 

An organization operates its business across many continents/regions. When it comes to data analysis and decision making based on the BW data, the organization might face various problems because of the large geographical spread and the different time zones, for example:

 

  1. The data volume would be very large. It will take significantly more time to retrieve even a small set of records.
  2. Data loading is usually done in non-business hours. Because of the different time zones, some downtime would be required: when a data load is scheduled in the non-business hours of one region, it could be business hours in other regions.
  3. If an error occurs in the data of a particular region, it impacts the global business. The request turns red and users from the other regions are also unable to do their analysis.

 

Solution:

 

To overcome such hurdles, SAP has provided a new type of InfoProvider: the Semantically Partitioned Object. It is an InfoProvider that contains several InfoCubes or DataStore Objects with the same structure. At the time of defining the InfoProvider, we decide whether to make it a Semantically Partitioned Object.

 

Note: This feature is available in BW 7.3 or higher.

 

Creation of Semantically Partition Object:

 

1. Right-click the InfoArea → Select Create InfoCube

 

1.PNG

2. The following screen is displayed. Enter the name of the InfoCube, tick the 'Semantically Partitioned' checkbox, and click the Create button.

 

2.PNG

3. Enter the characteristics InfoObjects for this InfoProvider.

 

3.PNG

 

4. Enter the KeyFigures.

 

4.PNG

 

5. Click Maintain Partition as shown below:

 

5.PNG

6. We get the following screen:

 

6.PNG

 

7. Select the characteristics based on which partition has to be created.

 

7.PNG

 

Here we are creating partition based on geography.

 

8.  Click Partition to create required number of partitions. Enter the geography name as required.

 

8.PNG

 

9. Here we have created 4 partitions as shown:

 

9.PNG

 

10. Now activate the InfoProvider as shown below:

 

10.PNG

 

11. InfoProvider gets activated. Now create the transformation as shown below:

 

11.PNG

12. Enter the details for source.

 

12.PNG

 

13. Transformation is created. Activate the transformation.Now create the DTP as shown below:

 

13.PNG

 

14. Click Folder to create a new template folder as shown below:

 

14.PNG

 

15. Enter the name of Template folder:

 

15.PNG

 

16. Right-click the created template (ZGEO_SL) → Click Create New DTP Template:

 

16.PNG

 

17. Enter DTP Template name.

 

17.PNG

 

18. DTP setting screen is displayed:

 

18.PNG

 

19. Click Save. Following screen is displayed.

       Select DTP Template and the DataSource connected to first partition. Click Assign.

 

19.PNG

 

20. Following screen is displayed. Select the ‘Not Yet Generated’ DTP and click Generate.

 

20.PNG

 

21. Follow steps 19 and 20 to create the DTP for all the partitions. All the DTPs are created as shown below:

 

21.PNG

 

22. Click Create Process Chains to create the process chains  for all the partitions:

 

22.PNG

 

23. Select  first DTP and click Add as shown below:

 

23.PNG

 

24.  Follow the same procedure to add all the remaining DTPs in process chain.

 

25. Once all the DTPs are added , click Generate as shown below:

 

24.PNG

 

26.  After successful generation of process chain, we see the following screen:

 

25.PNG

 

27.  Click Process Chain Maintenance to display the generated process chain:

 

26.PNG

 

28.  We get the following process chain created:

 

27.PNG

 

 

Now run this process chain and check the data in each InfoCube:

 

Data in partition 1:

 

28.PNG

 

Data in partition 2:

 

29.PNG

Data in partition  3:

 

30.PNG

 

Data in partition 4:

 

31.PNG

 

Check the number of records in PSA.

 

32.PNG

 

We find that there are 10 records in the PSA, but depending on the geographical partition, only the relevant data is loaded into each InfoCube.

 

Benefits:

 

1.       Report execution time is greatly reduced.

2.       If an error occurs in one partition, the other partitions are not affected and remain available for reporting.

3.       No downtime is required for data loads; the data in each partition can be loaded as required.


DSO - Step by Step (Part 2 of 2) : Loading, Activation and Maintenance


Note: This is the continuation of the DSO - Step by Step (Part 1 of 2) document. Part 1 dealt with creation, extraction and transformation. This part deals with loading, data activation and DSO maintenance. (If the images in this document appear blurry, click an image to open it in high resolution.)

 

4. Loading


4.1 Create a Data Transfer Process (DTP) - A DTP controls the movement of data within the BW system. It can read data from the PSA or a data target, apply the transformation rules and load the result into other data targets. Below are the steps to create, configure and execute a DTP.


  • In DWW > Modeling > InfoProvider, locate and expand the DSO tree. Right click the Data Transfer Process folder and select Create Data Transfer Process... option

     2.4.1.png


  • In the Create Data transfer Process pop-up, Specify the DataSource as source of the DTP and click the OK (ok button.png) button.

     2.4.2.png

 

  • The Change Data Transfer Process screen opens up. In the Extraction tab, provide details about the DataSource for the DTP. For Extraction Mode, specify Delta, as this ensures that the system loads only the requests that have not yet been loaded into the data target. Package Size determines the number of records to be extracted per package.

     2.4.3.png

 

  • In the Update tab, specify the error handling settings. Select the Update Valid Records, No Reporting (Request Red) option. This ensures that the erroneous records get written to the error stack and only the valid records get loaded into the data target, but the entire request remains unavailable for reporting.

     2.4.4.png

  • Now Save, Check(check button.pngor Ctrl+F2) and Activate(activate button.pngor Ctrl+F3) the DTP.


  • In the Execute tab, there are options to specify Processing Mode. Select Serial Extraction, Immediate Parallel Processing option and click the Execute button. Data gets processed asynchronously in a background process.

     2.4.5.png


  • Click on the Monitor Icon (monitor button.png) to open the Monitor: Data Transfer Process page. This will list the status, duration and break down of all the request processing steps.

     2.4.6.png


  • With this the data is loaded into the DSO. But the loaded data is stored in the Activation Queue and is not available for reporting.

 

 

 

5. Activation


  • In DWW > Modeling > InfoProvider, locate and right click the DSO. Select Manage option from the context menu.

     2.5.1.png


  • Manage InfoProvider screen opens up. In the Requests tab, click the Activate button at the bottom of the screen.

     2.5.2.png


  • A pop-up window opens and lists all the requests that are loaded but not yet activated in the DSO. Select the request that needs to be activated and click the Start button at the bottom of the pop-up.

     2.5.3.png

  • This will submit a data activation request that will be processed in background. To check the status of activation, go back and see the request row in Requests tab.
  • Once the data gets activated, i.e. once the data gets moved from Activation Queue to Active Data table within DSO, the data will be available for reporting and also for loading it into other data targets, usually InfoCubes.

 

 

 

6. Maintenance


6.1 Deleting Change Log Data for Performance Improvement - As the DSO ages, the data in the change log table piles up, creating performance issues, so it is good practice to free it up periodically. Below are the steps to delete data from the change log table.

 

  • Go to Manage InfoProvider Screen for the DSO
  • In SAP GUI's menu bar, open Environment Menu and select Delete Change Log Data option

     6.1.png

 

  • This opens Delete Change Log Data screen where you can select what data should be deleted based on how old the request is or when it was loaded. Enter the date and click the Start button to delete data before that date.

     6.2.png

 

6.2 Request Automation

  • Go to Manage InfoProvider Screen for the DSO
  • In SAP GUI's menu bar, open Environment Menu and select Automatic Request Processing option

     6.6.png

  • In the pop-up window that opens, you can enable/disable automatic setting status to OK and automatically activating requests. Click on the Save button after making the changes.

     6.7.png

 

 

6.3 View DSO Contents - At times there is a need to view the contents of the DSO, be it the activation queue table, the active data table or the change log, for administration and reconciliation purposes. The steps below show how to view the DSO contents.

 

  • Go to Manage InfoProvider screen for the DSO
  • In the Contents tab, there are three buttons, one each to view the activation queue, active data and change log tables. Click any of the buttons to view its content.

     6.3.png

 

  • Click New Data button to view the loaded but not yet activated data. This opens the Data Browser screen as shown below. Click on the Number of Entries button to view the record count in the table.

     6.4.png

  • Click on the Execute button (execute button.png). Data Browser screen refreshes with the contents of DSO table as shown below.

     6.5.png

 

 

This concludes the 2 Part document illustrating the steps for Creation, Extraction, Transformation, Loading, Activation and Maintenance of a Standard DSO.


All the steps discussed in Part 1 & 2 of this document can be automated using Process Chains.

SAP CRM BW Delta flow with BDOC


Applies to:

SAP BI NetWeaver 2004s and SAP CRM 5.0

 

Summary

This white paper is meant to help you understand the SAP CRM-to-BW delta mechanism and to troubleshoot the delta flow of standard CRM datasources. It will also help you analyze the changes moving from CRM to the BW system.

 

Author(s):    Rahul Desai

Company:   Tata Consultancy Services Ltd

Created on:  20th Feb 2014

 

Author Bio:

scn1.jpg

Rahul Desai works as a SAP BW/BI consultant at Tata Consultancy Services Ltd and has 7 years of experience in SAP BW/BI implementation and production support environments.


Introduction

This document will allow you to understand the role of BDoc messages in the BW/BI system, to monitor their flow, and to react effectively to error situations.

It will also help to troubleshoot missing or incorrect delta records coming from CRM to the BW/BI system.

 

CRM to BW/BI delta flow


The diagram below will help you to understand the data flow from the CRM system to the BW system and the role of the BDoc and IDoc.

BWAdapter.jpg

 

In this document we will also see, step by step, how CRM changes flow to the BW system.

 

Below are the basic transactions used to troubleshoot CRM datasources.

 

SPRO Configuration Related Settings

SMOEAC Administration console – CRM Middleware

SBDM BDoc Modeler

SMW01 Transaction for Monitoring BDocs

SMW02 Display BDoc Message Summary

SMQ1 Queue RFC Monitor for Outbound queue

SMQ2 Queue RFC Monitor for Inbound queue

BWA1 BW Adapter

BWA5 BW Adapter DataSources

 


Here in this document we have taken standard datasource 0CRM_SALES_CONTR_I to understand the actual Delta flow.

 

 

Each creation of or change to CRM item-level data creates an ITEM_GUID in the system and assigns the HEADER_GUID for all header-related information.

 

In this document we will change some item-level information of a CRM sales contract and see how those changes are captured in the BDoc.

 

 

Step by Step CRM contract changes:


First, log in to the CRM source system and run transaction CRMD_ORDER. This is the main transaction where you can create a CRM order or change an existing one. Here we have taken the example of CRM contract 5000002.

1.png

After you continue, the screen shows the CRM contract with its line items:

2.png

Next, to create delta records for BW, we need to change one of the pricing details of a contract line item. Here we will pick line item 100 and change the price details:

Currently the price is 500 USD for line item 100 of contract 5000002.

3.png

Now go to the conditions and change the price of the contract to 700 USD, then click SAVE.

4.png

Before saving this transaction, if we check the BW delta queue for datasource "0CRM_SALES_CONTR_I", it shows zero entries.

5.png

After saving the contract in the CRM source system, we check the RSA7 delta queue and the BDoc (SMW01) in the system.

Now RSA7 displays the delta entry for the change we made in CRM:

6.png


Now we will check the same changes reflected in the BDoc (SMW01). Whenever a CRM contract is changed, a BDoc is created and the data is stored in a structured manner in the BDoc queue.

In production support, hundreds of queues are created due to such changes, and it is difficult to find the queue for a single transaction.

To make this easier, we will find the BDoc queue with the help of the CRM contract number 5000002.

See the SMW01 screen below and choose "Expand Additional Select Options".

7.png

Once the selection is expanded, check the additional select options and you will see the "Queue Name" field. The queue name consists of the CRM order number with a prefix and suffix, so you can search for the queue with "*5000002*"; this returns all queue names containing changes to this CRM contract. Then click Execute.

8.png

In the screen below you can see the BDoc queue where your change records exist:

9.png

Select the latest queue according to the date and time of the changes you made, then click the button

10.png

Now you can see the item changes you made in this queue; we will check the pricing changes we made to CRM order item 10.

11.png

You can also check the price changes for this particular contract in the pricing structure:

12.png


Once you have verified the change, you can start the BW delta load for the datasource "0CRM_SALES_CONTR_I".

13.png

After running the delta load, you can check the data in the PSA to verify whether your changes arrived correctly.

The PSA screenshot below shows the change we made to the contract pricing (700 USD).

14.png


After this you can move your changes to the BW data targets.



Related content


www.scn.sap.com

help.sap.com

www.sapww.com/wp-content/uploads/.../BestPractice-BDoc-Analysis-V2.pdf

 

 

 

 







SAP-BW/BI Reporting Authorization


SAP-BW/BI Reporting Authorization

 

The purpose of this document is to provide details of BI authorizations that help in understanding what security setup is required for SAP BI/BW reporting.

The authorization concept is segregated into a number of different categories of users, such as end users, developers, production support, and so on:

 

High-level categories:

     - Functional Authorizations

             - Report Authorizations

             - Data Authorizations

 

Further explanation of these categories:

 

Functional Authorization

        - Functional authorizations restrict which tools or features within the BW toolset a user can execute. Different functional authorizations allow different levels of access to display or change data models, queries/reports, monitoring tools and administration activities. Below are examples of users who perform different activities and are therefore provided with different functional roles:

· Data consumer

· Super User

· Configuration - Display

· Developer - Basic

· Developer - Advanced

· Production Support

· Production Scheduler

· Production Emergency User

        - Functional authorizations are maintained via roles and are therefore maintained using PFCG.

 

Example: for the Data Consumer role, a PFCG role can be created that has access to execute BW queries:

        Pic 1.jpg

Complete objects for RFC_NAME can be maintained as:

BAPI_CUBE*, BAPI_IOBJ*, BAPI_MD*, RFC1, RRMX, RRXWS, RRY1, RSAB, RSAH, RSAN, RSBAPI*, RSBAPI_IOBJ, RSCR*, RSMENU, RSNDI*, RSOB, RS_PERS_BOD, RS_UNIFICATION, RZX0, RZX2, SM02, SMHB, SRFC, SURL, SUSO, SUSW, SU_USER, SYST

Note: If a user experiences an access issue for hierarchy data or any other object, the error can be checked immediately after it occurs in transaction SU53.

Similar to the query consumer role, other roles are created with different access for the different user groups stated above.

 

Report Authorizations

        - The reporting authorization primarily controls security at the InfoProvider level.

        - The report authorization is generally organized by functional area and applies to end users or power users who need to be restricted to a specific area:

        - Different Reporting Roles can be created based on areas like

· Profitability Analysis – COPA

· Account Receivables

· Inventory

· Sales and Distribution

· Account Payable

· Purchasing

· Cost Centre Reporting

· Asset Accounting

        - Report authorizations are maintained via roles and are therefore maintained using PFCG.

 

Example of a reporting role created in PFCG which provides access to run BW queries built on top of GCOPA* providers:

Pic 2.jpg

 

Note: In the above example, InfoProviders follow the GCOPA* naming convention; adjust the values according to the naming conventions used in your system. Here the reporting components provide access to query components with names like 0*, G*, L*, R*, T*. If a query is created with a Z* name, users will not be able to access it until access to Z* is added.

 

Data Authorizations

        - The data authorization allows access to the actual data content held within the BW data warehouse. This is called Analysis Authorization.

        - The data authorizations allow access based on specific data selections within a specified area of reporting.

        - The data authorization includes different objects, such as:

· Company Code

· Sales Organization

· Plant

· Cost Center

        - The same analysis profile can provide access to different areas, or different analysis profiles can be created for each area, e.g. one profile for COPA, one for AR, and so on.

        - Data authorization profiles are generated or manually maintained in transaction RSECADMIN.

        - Profiles can be assigned directly to users, or added to a role which is then assigned to users.

 

        - Analysis authorizations work on an "all-or-nothing" rule, with the exception of display hierarchies and key figures.

                        Scenario: Sufficient Authorizations

                                        Complete selection is subset of authorizations

                                        Query results will be shown

                             Pic 3.jpg

                        Scenario: Insufficient Authorizations

                                        Complete or part of selection is outside of authorizations

                                        Query results will not be shown at all

                                    Pic 4.jpg

  - Data access can be restricted by following Authorizations:

· On InfoCube Level

· On Characteristic Level

· On Characteristic Value Level

· On Key Figure Level

· On Hierarchy Node Level

 

        - Prerequisite to apply Data Authorization

· To apply data authorization restrictions, the Authorization Relevant flag must be set on the InfoObject

· An authorization dimension is a characteristic or navigation attribute

· Authorization of characteristics and navigation attributes can be defined independently of one another

· An authorization-relevant query variable must be included in queries (optional for full * access)

 

Pic 5.JPG

        - Manual creation of data authorizations is possible from central authorization maintenance / transaction RSECADMIN.

 

· In transaction RSECADMIN, under Authorization – Maintenance

· Provide Authorization name and Create

· Authorization Relevant Objects can be added with Insert option (as in right panel)

 

Pic 6.jpg

· Special Authorization Characteristics can be included with option as below

Pic 7.jpg

        - Special Authorization Characteristics

· In addition to generic dimensions, an authorization includes special dimensions.

o InfoProvider - 0TCAIPROV

o Validity - 0TCAVALID

o Activity - 0TCAACTVT

· These special characteristics must be included in at least one authorization for a user; otherwise the user is not authorized to execute a query.

· It’s recommended as best practice to include these special characteristics in every authorization for reasons of clarity and analysis security.

· They must not be included in queries.

 

        - Authorizing Characteristic Values

· To authorize users only for specific sales organizations (e.g., 1101 and 2101), maintain the values as below. Ranges and patterns are also supported in characteristic restrictions.

Pic 8.JPG

· Navigational Attributes can be assigned individually. The referencing characteristic (here: 0CUST_SALES) does not need to be authorization-relevant

Pic 9.jpg

        - Authorizing Hierarchies

· In the same way as with value authorization, authorizations are also granted on hierarchy levels

· Assume the following sales organization hierarchy (these are cities and states in India):

Pic 10.jpg

· Hierarchy level authorizations can be granted by ‘Hierarchy Authorizations’ tab under Details

Pic 11.jpg

· Access can be granted on different nodes (here, access is required for a node and its sub-nodes, and for a leaf of another node). Access is granted to two different nodes in this case.

Pic 12.jpg

· Authorizations can be provided at different levels by maintaining the following values:

o 0 (Only the selected nodes),

o 1 (Subtree below nodes - Generally Used),

o 2 (Subtree below nodes to level (incl.)),

o 3 (Complete hierarchy),

o 4 (Subtree below nodes to (and including) level (relative))

Pic 13.jpg

· As hierarchies can be time-dependent or version-dependent, a validity period can also be defined for hierarchy authorizations:

o 0 (Name, Version Identical, and Key Date Less Than or Equal to -Recommended)

o 1 (Name and Version Identical)

o 2 (Name Identical)

o 3 (All Hierarchies)

Pic 14.jpg

 

        - Assigning Individual Authorizations to users

· In RSECADMIN – Users – Individual Assignment

· Select a user ID and change the assignment

· Then insert individual authorizations to the assigned list

Pic 15.JPG

        - Assigning Authorizations to Roles

· Authorizations can be assigned to roles, which can then be assigned to users

· Authorization object S_RS_AUTH is used for the assignment of authorizations to roles

· Maintain the authorizations as values for field BIAUTH

Pic 16.JPG

 

There is also an automatic authorization generation approach, in which authorizations are generated from details maintained in dedicated DSOs:

     Pic 17.JPG

 

Detailed information on authorization generation is available at help.sap.com:

http://help.sap.com/saphelp_nw73/helpdata/en/4c/65fbce2f841f3ce10000000a42189c/frameset.htm

 

Data modelling in BODS with BW as data target - Part 2



Hi All,

 

These days, BODS is playing a very important role in the SAP system landscape.

 

Many customers use BODS to pull data from non-SAP systems and load it into SAP BW.

These documents are primarily intended for BW consultants who would like to know more about the integration, modelling and transport mechanisms related to BODS.

 

In part 1, we went through how to integrate BW and BODS using an RFC connection.

In this part 2 document, we will try to understand how modelling is done in BODS, especially when SAP BW is the data target.

In part 3, we will try to understand the transport mechanism within BODS (to be published later).

 

Environment:

BODS : Version 14.0.3.273 and BW 7.3SP4

 

 

Let me explain this with a real-life example.

We have an Oracle source table which stores customer call data in the following format:

Oracle table name: LDR_COMMON_INBOUND_CALL_B

1.png

 

We have another Excel sheet which stores region data in the following format:

2.png

 

Our idea here is to merge the state (here Andhra Pradesh) from the first source and the region (here 1) from the second source and map them to the 0REGION object in BW, whose structure is as follows:

3.png

 

All other fields are mapped (1:1) as per the requirement.

We will further use some of the navigational attributes of 0REGION later in the BEx reports.

 

In BODS, create an Oracle datastore and provide the required credentials.

5.png

6.png

 

Now an Oracle datastore is created as shown below. There is a Tables node under the newly created datastore; right-click on it and use the 'Import By Name' option.

7.png

8.png

 

Now give the required table/view name and use the import option.

9.png

 

Once the import is successful, the table name appears under the Tables heading.

 

10.png

 

 

Now use the format option to create a new flat file datastore for importing the excel data:

 

11.png

12.png

13.png

Now a new object named region will be created as follows:

14.png

Now we need to create a target datastore; in this case we create an SAP BW target datastore by providing the BW system credentials.

15.png

 

The BW credentials can be obtained from the System entry properties.

16.png

 

In a BW 7.3 system, we have BO Data Services as a new source system type (the RFC connection between BW and BODS is not explained in this document; see part 1).

17.png

Create a new application component (as a best practice, create separate MD and TD application components).

In this case, we will create a transactional datasource under the TD application component.

18.png

 

In this case, the datasource was created as follows:

19.png

20.png

Now go back to BODS, where we find two options: Master and Transaction transfer structure.

21.png

 

Since we created a TD datasource in BW, we have to import it under the Transaction transfer structure option.

 

22.png

23.png

Now create a Project

24.png

 

Create a new batch job under the project

25.png

 

Batch job would be created as shown below

26.png

 

Create a workflow under the project(not mandatory)

27.png

Create a dataflow under the workflow

28.png

 

Now in the designer workspace, we need to model our data.

 

Drag in the flat file, Oracle and BW target datastores from the left side.

 

Add the query transformations(available in the right palette).

 

Join all the objects as follows:

30.png

Double-click on the flat file datastore. In my case, it looks like below:

31.png

 

 

The Oracle datastore in my case looks like below:

 

32.png

 

We will be merging both the source datastores in the first query transformation.

33.png

 

We can use different BODS functions to model our data.

In this case, functions like 'upper' (see the Mapping tab in the screenshot below) have been used to convert lowercase to uppercase (this purely depends on the requirement).

34.png

In our case, an additional query transformation has been added to include some other functions like decode, lpad and so on (again, depending on the requirement).

35.png

 

Now, go back to BW and complete the modelling up to the cube level.

36.png

 

For the BODS-related InfoPackage, we now have a new option, '3rd Party Selections'.

Enter details such as 'Repository', 'Job server', and 'Job name' (the job name is the batch job that you created in BODS).

37.png

 

Run the InfoPackage and you will now see the data in the expected format in BW.

38.png

 

I hope this gave you a basic understanding of BODS modelling with BW as the target datastore.

 

BR

Prabhith

This Article tells about how to get the F4 help values for the data source 0MATERIAL_CDTH_HIER using RSA3 in ECC


Introduction:

 

After reading this article, you will know how to get the F4 help values for the hierarchy name in RSA3 for the data source 0MATERIAL_CDTH_HIER on the ECC side when the error "List is empty" appears. The steps below help to populate these values.


Issue:


When we check this data source in RSA3 and try to get F4 help for the field HIENM, we get a pop-up saying "List is empty".

fig1.png

 

Below are the prerequisite steps to be checked.

 

  1. Check in ECC whether the article hierarchy nodes have been maintained by using t-code WMATGRP03.


fig2.png

From the above screen, we can see that one hierarchy has the status ACTIVATED, which means this is the hierarchy maintained in ECC and the one we should get in the F4 help on the RSA3 screen. If there is no hierarchy with the Activated status, you have to check this with the functional team.

 

Let’s check the next level:

 

    2. Go to t-code WMATGRP01 and check whether the SAP BW flag has been set.

fig3.png


3. Go to SPRO, path Logistics - General / Article Hierarchy / Edit Control Parameters, and check whether CDT Update is also flagged.

 

 

fig4.png


4. Go to table MATGRP_HIER using t-code SE11 and check whether the maintained hierarchy is displayed in this table. If the data is maintained there, the function modules from which the data is pulled are CMBW_HIERARCHY_CATALOG_CDTH and CMBW_HIERARCHY_TRANSFER_CDTH.

 

But in my case, I still don’t see the required hierarchy in MATGRP_HIER. So let’s go to the next level.

 

fig5.png

5. In table WRF_MATGRP_HIER, I could see my desired hierarchy. If the hierarchy is found there, the required function modules are RTCMBW_HIERARCHY_CATALOG_CDTH and RTCMBW_HIERARCHY_TRANSFER_CDTH.

 

fig6.png

 

  Below is an explanation of these function modules. For each data source there is an extractor and one F4 function module assigned in RSA2 for populating the possible F4 values.

 

For the data source 0MATERIAL_CDTH_HIER in RSA2, we can see the extractor CMBW_HIERARCHY_CATALOG_CDTH and the F4 function module CMBW_HIERARCHY_TRANSFER_CDTH.

 

We need to check whether the data source has been assigned the proper function modules.

 

fig7.png


When we double-click the function modules CMBW_HIERARCHY_CATALOG_CDTH and CMBW_HIERARCHY_TRANSFER_CDTH, we can see that the data is pulled from the MATGRP_HIER and MATGRP_HIERT tables. But since our required hierarchy is stored in the WRF_MATGRP_HIER and WRF_MATGRP_HIERT tables, we need to change the function modules from CMBW_HIERARCHY_CATALOG_CDTH and CMBW_HIERARCHY_TRANSFER_CDTH to RTCMBW_HIERARCHY_CATALOG_CDTH and RTCMBW_HIERARCHY_TRANSFER_CDTH, which pull the data from WRF_MATGRP_HIER and WRF_MATGRP_HIERT.

  

Here are the steps for assigning the correct function modules:

     

  1. Start SE11 for table ROOSOURCE, enter the data source name 0MATERIAL_CDTH_HIER and object version "A".
  2. Change the value of the field EXTRACTOR from CMBW_HIERARCHY_TRANSFER_CDTH to the function module RTCMBW_HIERARCHY_TRANSFER_CDTH.
  3. Go to SE16, table ROOHIECAT, enter the data source 0MATERIAL_CDTH_HIER and object version A. Change the value in the field FHCATALOG from CMBW_HIERARCHY_CATALOG_CDTH to RTCMBW_HIERARCHY_CATALOG_CDTH.
  4. Go to table ROHIEBAS and check for the below fields:

     

  • CHBASNM = 0MATERIAL
  • HCLASS = CDTH
  • ROLLNAME = MATNR
  • FHCATALOG = RTCMBW_HIERARCHY_CATALOG_CDTH
  • FHTRANSFER = RTCMBW_HIERARCHY_TRANSFER_CDTH

       

See if there is an entry; if there is no entry, create one. By performing the steps mentioned above, the correct function modules RTCMBW_HIERARCHY_CATALOG_CDTH and RTCMBW_HIERARCHY_TRANSFER_CDTH will be assigned in RSA2, and in turn the correct tables WRF_MATGRP_HIER and WRF_MATGRP_HIERT will be read instead of MATGRP_HIER and MATGRP_HIERT.
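
As a quick cross-check after the table changes (a sketch only; it assumes the usual ROOSOURCE key fields OLTPSOURCE and OBJVERS - verify them in your system), you can read the extractor that is currently assigned to the DataSource:

* Sketch: display the extractor currently assigned to the DataSource.
* Key field names are assumptions; check table ROOSOURCE in SE11 first.
DATA lv_extractor TYPE roosource-extractor.

SELECT SINGLE extractor
  FROM roosource
  INTO lv_extractor
  WHERE oltpsource = '0MATERIAL_CDTH_HIER'
    AND objvers    = 'A'.

WRITE: / 'Assigned extractor:', lv_extractor.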

  

We can verify in RSA2 if it’s pointing to the correct function modules.

 

fig8.png

 

And for F4

 

fig7.png


And in RSA3 if we check the F4 help, we should get the possible values.

 

fig10.png

 

Hope this will be helpful.

 

Thanks and Regards

Sarala.K
