
Compare and Update Currency / Unit settings across SAP systems


Summary

 

It is critical to have common settings for currencies and units across the development, quality, and production landscapes, and across different SAP systems. This document explains an easy way of comparing and adjusting the SAP system tables related to units (e.g. T006I) and currencies (e.g. TCURX). We can compare the entries in the standard SAP tables and adjust/copy the missing entries from the system we are comparing with.


Author: Yogesh Kulkarni        

Company: Infosys

Created on: 06 Oct 2014


Author Bio


Yogesh Kulkarni is presently working with Infosys. He has worked extensively in the SAP Business Intelligence and data warehousing space.



Table of Contents

 

  • Scenario
  • Step by step guide for table comparison and copy / adjustments
  • Prerequisites
      • Step 1: Logon and check the table contents
      • Step 2: Compare the table contents with other system
      • Step 3: Various options for Comparison and Copy / Adjust
          • Adjust
          • Filter
          • Statistics
          • Legend
  • Conclusion
  • Copyright



Scenario

 

There is always a need to ensure consistency in the currency and unit settings across SAP landscapes. Comparing the table contents manually can be a tedious task if we are dealing with a table like TCURR, which has thousands of entries for exchange rates. This document provides a step-by-step guide for doing such comparisons and adjusting the entries according to the comparison system without hassle.

 

Such a comparison can be done between any two systems that have an RFC connection set up between them.
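As a side note (not part of the original walkthrough), such a comparison can also be scripted: the standard function module RFC_READ_TABLE can read a table from a remote system over an RFC connection. Below is a minimal sketch comparing local and remote row counts for TCURT; the destination name 'BWQ_CLNT100' is an assumed placeholder for a destination maintained in SM59.

REPORT z_compare_tcurt_rfc.

* Minimal sketch: compare local and remote row counts of TCURT.
DATA: lt_remote TYPE STANDARD TABLE OF tab512,
      lv_local  TYPE i,
      lv_remote TYPE i.

* Row count in the local system
SELECT COUNT(*) FROM tcurt INTO lv_local.

* Read the same table from the comparison system via RFC
CALL FUNCTION 'RFC_READ_TABLE'
  DESTINATION 'BWQ_CLNT100'        "assumed SM59 destination
  EXPORTING
    query_table         = 'TCURT'
  TABLES
    data                = lt_remote
  EXCEPTIONS
    table_not_available = 1
    OTHERS              = 2.

IF sy-subrc = 0.
  lv_remote = lines( lt_remote ).
  WRITE: / 'Local TCURT entries: ', lv_local,
         / 'Remote TCURT entries:', lv_remote.
ELSE.
  WRITE: / 'RFC call failed, sy-subrc =', sy-subrc.
ENDIF.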



Step by step guide for table comparison and copy / adjustments

 

We will take the example of table TCURT (Currency Code Names) for our comparison. We will log on to one SAP BW system and compare the TCURT table with another SAP BW system with which we have an RFC connection set up. Please note the prerequisites below before we proceed further.


Prerequisites –

 

  1. The comparison authorization S_TABU_RFC must be assigned to the RFC user that performs the comparison.
  2. An RFC connection between the two systems we want to compare.
  3. No comparison is possible if the comparison client/system is protected against external access with respect to the view/table comparison tool (see client table T000, transaction SCC4).

 

 

Step 1: Logon and check the table contents

 

  • Logon to the system where you need to check the contents of the table and compare it with the other system.

 

  • Go to transaction SPRO - SAP Reference IMG - SAP Customizing Implementation Guide - SAP NetWeaver - General settings - Currencies - Check Currency Codes.

 

  • Click on the button shown below to see the Currency Codes maintained in your system.

1.png

  • You will see the list of Currency Codes as shown below.

2.png


Step 2: Compare the table contents with other system

 

  • Go to menu: Utilities - Adjustment as shown below.

3.png

  • You will get a popup asking for the system/client you want to compare your table with. You can also use the F4 help to see all the available systems/clients (which have an RFC connection set up with your system).

4.png

  • After selecting the system/client, click the 'Continue' button to see the comparison of the TCURT table between your system/client and the comparison system/client.


  • As shown in the picture below, the comparison displays the following details:
      1. View: V_CURC
      2. Logon System: the system you are logged on to
      3. Comparison System: the system you are comparing your table with
      4. Date: current date
      5. Various options like Adjust and Statistics, explained in the next step

5.png



Step 3: Various options for Comparison and Copy / Adjust


Apart from the standard ALV display options, we get a few other helpful options here.


  • Adjust – This option can be used to adjust/copy the entries from the comparison system/client to the logon system/client.

6.png


  • Filter – This option can be used to filter the comparison entries based on the criteria shown below. This is very useful for analyzing the differences between the two systems.

7.png


  • Statistics – This option can be used to see the statistics of the comparison, as shown below.

8.png


  • Legend – This option can be used to see the legend for the color coding used in the comparison data display.

9.png

 

 

Conclusion

 

Though we have a standard way of extracting and loading currency and unit settings from one system to another, it requires a corresponding source system to be created in our system. The standard way also does not give us the flexibility to compare and selectively load the settings from the source system (it is always a full update or rebuild). In such scenarios, this method provides a very sophisticated and flexible way of comparing and adjusting the standard settings for currencies and units across different SAP systems.



Copyright

 

© Copyright 2012 SAP AG. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice.

Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors.

Microsoft, Windows, Excel, Outlook, and PowerPoint are registered trademarks of Microsoft Corporation.

IBM, DB2, DB2 Universal Database, System i, System i5, System p, System p5, System x, System z, System z10, System z9, z10, z9, iSeries, pSeries, xSeries, zSeries, eServer, z/VM, z/OS, i5/OS, S/390, OS/390, OS/400, AS/400, S/390 Parallel Enterprise Server, PowerVM, Power Architecture, POWER6+, POWER6, POWER5+, POWER5, POWER, OpenPower, PowerPC, BatchPipes, BladeCenter, System Storage, GPFS, HACMP, RETAIN, DB2 Connect, RACF, Redbooks, OS/2, Parallel Sysplex, MVS/ESA, AIX, Intelligent Miner, WebSphere, Netfinity, Tivoli and Informix are trademarks or registered trademarks of IBM Corporation.

Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

Adobe, the Adobe logo, Acrobat, PostScript, and Reader are either trademarks or registered trademarks of Adobe Systems Incorporated in the United States and/or other countries.

Oracle is a registered trademark of Oracle Corporation.

UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group.

Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems, Inc.

HTML, XML, XHTML and W3C are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology.

Java is a registered trademark of Oracle Corporation.

JavaScript is a registered trademark of Oracle Corporation, used under license for technology invented and implemented by Netscape.

SAP, R/3, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP Business ByDesign, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries.

Business Objects and the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence, Xcelsius, and other Business Objects products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Business Objects S.A. in the United States and in other countries. Business Objects is an SAP company.

All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary.

These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies ("SAP Group") for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.


DTP Trend analysis in Excel


Hi All,

 

Today I'm going to share with you all how to get the DTP statistics for a particular DTP in your SAP BI system and how to represent them in an Excel chart.

 

In my case I've done a DTP trend analysis for one DTP which runs too long; it is related to inventory aging loads from the daily process chains and actually runs twice a day, i.e. one DTP in two different process chains with different run timings.

 

First identify the name of the DTP for which you need to make the trend analysis for the last couple of months/days/years.


Go to your DTP statistics cube and enter your DTP technical name.


Untitled.png


In the field selection for output, select only the DTP, Run time (in seconds), Calendar Day and Process Chain ID, and take the output to Excel as below.


Untitled.png



Then select all the data and insert a pivot table in Excel as below,


Untitled.png


Then on sheet two you will get the pivot table representation; set the column labels to the process chains (PC-1 and PC-2), the row labels to the calendar day (each day), and the values to the sum of run time.

Untitled.png


The run time of the process chains will be in seconds, but I need it in HH:MM:SS format, so it has to be converted. The Excel formula is shown below:


Capture.PNG
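The screenshot above shows the formula; as an illustration (assuming the run time in seconds is in cell B2), a common way to do this conversion is:

=B2/86400

with the cell formatted as hh:mm:ss, since Excel stores times as fractions of a day (86400 seconds). Alternatively, =TEXT(B2/86400,"hh:mm:ss") returns the formatted text directly.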

After applying the time function, the column labels (PC-1 and PC-2) will be in HH:MM:SS format, which makes it easier to showcase the view in a chart.


Untitled.png


Now select all and go to insert chart,


Untitled.png



But this is not the final one; you need to make it clearer. For that, right-click and choose 'Select Data'.


Untitled.png




Here you can assign your data to the x and y axes.


Untitled1.png


Finally, your DTP trend for the last few days:


Capture.PNG


The green color indicates that the DTP run in PC-2 had its highest run time on 12-06-2014, at 57 minutes, and the red color indicates that the DTP run in PC-1 had its highest run time on 09-06-2014, at 50 minutes.


Like this, you can derive a trend for any process (InfoPackage, DTP, or any job) for the last months/days/years.



Thanks,

Siva








ST22 Short Dump Email Tool


Program for the Short Dump Email Tool:

 

The objective of the program is to notify ABAP short dumps to monitoring DLs (distribution lists) for a specified time duration.

 

Upon execution, based on the selected time duration, the program mails/notifies the short dumps encountered in the last chosen hours, so that a manual check can be avoided.

 

 

REPORT Z_SHORTDUMP_MONITOR LINE-SIZE 200.

 

* Include for declaration of data types

INCLUDE Z_SHORTDUMP_MONITOR_TOP.

 

* Include for Selection Screen elements.

INCLUDE Z_SHORTDUMP_MONITOR_SEL.

 

* Include for storing all forms used

INCLUDE Z_SHORTDUMP_MONITOR_FORM.

 

START-OF-SELECTION.

 

* Fetching data and calculations

  PERFORM GET_DATA.

 

* Creating message

  PERFORM create_message.

 

* Sending Message

  PERFORM send_message.

 

 

 

 

----------------------------------------------------------------------------------------------

 

 

 

*&---------------------------------------------------------------------*

*&  Include           Z_SHORTDUMP_MONITOR_TOP

*&---------------------------------------------------------------------*

TABLES : SNAP_BEG,somlreci1,AGR_USERS .

DATA:

* Declaration of types to be used in the Function module for sending mail

it_objbin   TYPE STANDARD TABLE OF solisti1,   " Attachment data

it_objtxt   TYPE STANDARD TABLE OF solisti1,   " Message body

it_objpack  TYPE STANDARD TABLE OF sopcklsti1, " Packing list

it_reclist  TYPE STANDARD TABLE OF somlreci1,  " Recipient list

it_objhead  TYPE STANDARD TABLE OF solisti1.  " Header

 

 

* Declaration of Work Area  to be used in the Function module for sending mail

DATA: wa_docdata TYPE sodocchgi1,   " Document data

      wa_objtxt  TYPE solisti1,     " Message body

      wa_objbin  TYPE solisti1,     " Attachment data

      wa_objpack TYPE sopcklsti1,   " Packing list

      wa_reclist TYPE somlreci1.    " Recipient list

 

RANGES: r_email FOR somlreci1-receiver.

 

 

 

DATA: w_tab_lines TYPE i.           " Table lines.

 

 

DATA : var(4) TYPE c,

       Rvalue(10) TYPE c,

       v_mailbody type solisti1.

 

* Declaration of variables to be used for the Time range calculations

* to be used in the Radio buttons

  DATA :l_lines type i ,

        lv_time  TYPE SNAP_BEG-uzeit ,

        lv_time1  TYPE SNAP_BEG-uzeit ,

        lv_time2  TYPE SNAP_BEG-uzeit ,

        lv_time3  TYPE SNAP_BEG-uzeit,

        lv_time0  TYPE SNAP_BEG-uzeit  .

 

 

* Declaration of Structure of Internal Table to be used for storing selective data from SNAP-BEG table

  TYPES: BEGIN OF TY_FINAL1,

       date type SNAP_BEG-DATUM,

       time type SNAP_BEG-UZEIT,

       server type SNAP_BEG-AHOST,

       user type SNAP_BEG-UNAME,

       wp type SNAP_BEG-MODNO,

       seqno type SNAP_BEG-SEQNO,

       e_line type SNAP_BEG-FLIST,

       errorid type c length 250,

  END OF TY_FINAL1 .

 

* Declaration of Structure of Internal Table to be used for importing to FM for sending mail

  TYPES: BEGIN OF TY_FINAL,

       date type SNAP_BEG-DATUM,

       time type SNAP_BEG-UZEIT,

       server type SNAP_BEG-AHOST,

       user type SNAP_BEG-UNAME,

       wp(4) type c,

       seqno type SNAP_BEG-SEQNO,

       e_line type SNAP_BEG-FLIST,

       errorid type c length 250,

  END OF TY_FINAL .

 

  DATA : it_final1 TYPE TABLE OF ty_final1 ,

         it_final TYPE TABLE OF ty_final,

 

          wa_final1 TYPE ty_final1,

          wa_final TYPE ty_final.

 

  data : v_errorid type c length 250,

         v_id(2) type c ,

         v_len(3) type c .
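* Note: the executable statements below sit in the TOP include outside
* any event block; in ABAP, such statements implicitly belong to
* START-OF-SELECTION, so they run before the PERFORMs in the main report.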

 

         lv_time = sy-uzeit .

 

*         IF lv_time LE 100000 .

 

         lv_time0 = '000001' .

         lv_time1 = lv_time - 3600 .

         lv_time2 = lv_time - 7200 .

         lv_time3 = lv_time - 21600.

 

*         ENDIF.

 

 

--------------------------------------------------------------------------------------------------------------

 

 

*&---------------------------------------------------------------------*

*&  Include           Z_SHORTDUMP_MONITOR_SEL

*&---------------------------------------------------------------------*

SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.

  parameters:

*  Describing Radio Buttons.

  Last1Hrs radiobutton group rbg1 ,

  Last2Hrs radiobutton group rbg1 ,

  Last6Hrs radiobutton group rbg1 .

  SELECTION-SCREEN END OF BLOCK b1.

 

 

*  Selection screen for accepting Mail id's for broadcasting.

 

SELECTION-SCREEN BEGIN OF BLOCK b2 WITH FRAME TITLE text-004.

SELECT-OPTIONS EmailId FOR somlreci1-receiver NO INTERVALS  DEFAULT 'abc@xyz.com' OBLIGATORY.

SELECTION-SCREEN END OF BLOCK b2.

 

 

 

----------------------------------------------------------------------------------------------------------------

 

*&---------------------------------------------------------------------*

*&  Include           Z_SHORTDUMP_MONITOR_FORM

*&---------------------------------------------------------------------*

*&---------------------------------------------------------------------*

*&      Form  GET_DATA

*&---------------------------------------------------------------------*

*       text

*----------------------------------------------------------------------*

*  -->  p1        text

*  <--  p2        text

*----------------------------------------------------------------------*

FORM GET_DATA .

 

*  Fetching records for Last 1 hour

 

IF Last1Hrs EQ 'X'.

  Rvalue = '1' .

 

  IF sy-uzeit LE '010000'.

 

SELECT DATUM UZEIT AHOST UNAME MODNO SEQNO FLIST

FROM SNAP_BEG

INTO TABLE it_final1

WHERE DATUM eq sy-datum and SEQNO = '000' and UZEIT BETWEEN lv_time0 AND lv_time.

 

  Else.

 

    SELECT DATUM UZEIT AHOST UNAME MODNO SEQNO FLIST

FROM SNAP_BEG

INTO TABLE it_final1

WHERE DATUM eq sy-datum and SEQNO = '000' and UZEIT BETWEEN lv_time1 AND lv_time.

 

      Endif.

 

*  Fetching records for Last 2 hours.

 

ELSEIF Last2Hrs EQ 'X'.

  Rvalue = '2' .

 

  IF sy-uzeit LE '020000'.

 

SELECT DATUM UZEIT AHOST UNAME MODNO SEQNO FLIST

FROM SNAP_BEG

INTO TABLE it_final1

WHERE DATUM eq sy-datum and SEQNO = '000' and UZEIT BETWEEN lv_time0 AND lv_time .

 

  Else.

 

SELECT DATUM UZEIT AHOST UNAME MODNO SEQNO FLIST

FROM SNAP_BEG

INTO TABLE it_final1

WHERE DATUM eq sy-datum and SEQNO = '000' and UZEIT BETWEEN lv_time2 AND lv_time .

 

  Endif.

 

*    Fetching records for Last 6 hours.

 

ELSEIF Last6Hrs EQ 'X'.

  Rvalue = '6' .

 

  IF sy-uzeit LE '060000' .

 

SELECT DATUM UZEIT AHOST UNAME MODNO SEQNO FLIST

FROM SNAP_BEG

INTO TABLE it_final1

WHERE  DATUM eq sy-datum and SEQNO = '000' and UZEIT BETWEEN lv_time0 AND lv_time .

 

  Else .

 

SELECT DATUM UZEIT AHOST UNAME MODNO SEQNO FLIST

FROM SNAP_BEG

INTO TABLE it_final1

WHERE  DATUM eq sy-datum and SEQNO = '000' and UZEIT BETWEEN lv_time3 AND lv_time .

 

    Endif.

 

  ENDIF.

 

 

SORT it_final1 by date  user server wp.

DELETE ADJACENT DUPLICATES FROM it_final1 comparing date user server wp .

 

* Check whether any dumps were found (sy-subrc here would reflect the
* DELETE statement above, not the SELECT)
IF it_final1 IS NOT INITIAL.

DESCRIBE TABLE it_final1 LINES l_lines.

 

LOOP AT it_final1 INTO wa_final1.
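* FLIST packs the dump attributes as length-prefixed segments: the first
* two characters hold the segment ID, the next three the value length,
* followed by the value itself (here the runtime error ID).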

 

 

 

v_id = wa_final1-e_line+0(2).

v_len = wa_final1-e_line+2(3).

v_errorid = wa_final1-e_line+5(v_len).

*Populate it_final

 

wa_final-date   = wa_final1-date.

wa_final-time   = wa_final1-time .

wa_final-server = wa_final1-server.

wa_final-user   = wa_final1-user.

var             = wa_final1-wp.

wa_final-wp  = var.

wa_final-seqno  = wa_final1-seqno.

wa_final-e_line = wa_final1-e_line.

wa_final-errorid = v_errorid .

 

Clear var.

 

APPEND wa_final to it_final .

CLEAR: wa_final, wa_final1.

  ENDLOOP.

* Sort once after the loop instead of on every pass
SORT it_final by time.

Loop at it_final into wa_final.

*  WRITE:/ , 5 wa_final-server ,30 wa_final-user,50 wa_final-wp ,70 wa_final-errorid .

 

  ENDLOOP.

 

  else.

*  WRITE: ' NO Run Time errors found', /.

  ENDIF.

ENDFORM.                    " GET_DATA

*&---------------------------------------------------------------------*

*&      Form  CREATE_MESSAGE

*&---------------------------------------------------------------------*

*       text

*----------------------------------------------------------------------*

 

*----------------------------------------------------------------------*

FORM CREATE_MESSAGE .

 

**  1 Title, Description & Body.

  PERFORM create_title_desc_body.

 

**2 Receivers

  PERFORM fill_receivers.

 

  1. " CREATE_MESSAGE

 

*&---------------------------------------------------------------------*

*&      Form  CREATE_TITLE_DESC_BODY

*&---------------------------------------------------------------------*

*       text

*----------------------------------------------------------------------*

* Title, Description and body

*----------------------------------------------------------------------*

FORM CREATE_TITLE_DESC_BODY .

*  ...Title

  wa_docdata-obj_name  = 'Email notification'.

 

*...Description

  wa_docdata-obj_descr = 'ABAP Runtime Error Alert'.

 

*...Message Body in HTML

  wa_objtxt-line = '<html> <body style="background-color:#FFE4C4;">'.

  APPEND wa_objtxt TO it_objtxt.

 

 

*CONCATENATE 'Hi Team' 'This is to notify the ABAP runtime errors encountered today as listed below for last' Rvalue 'Hrs' INTO  wa_objtxt-line  SEPARATED BY SPACE.

  wa_objtxt-line = 'Hi Team ,Please be notified of the ABAP runtime errors encountered today as listed below for last '.

*  wa_objtxt-line = Rvalue .

*  WRITE: l_lines, 'Run Time errors found', /.

CONCATENATE wa_objtxt-line Rvalue 'Hours'

          INTO wa_objtxt-line SEPARATED BY space .

  APPEND wa_objtxt TO it_objtxt.

 

*   table display

  wa_objtxt-line = '<table style="MARGIN: 10px" bordercolor="NavajoWhite" '.

  APPEND wa_objtxt TO it_objtxt.

  wa_objtxt-line = ' cellspacing="0" cellpadding="3" width="800"'.

  APPEND wa_objtxt TO it_objtxt.

  wa_objtxt-line = ' border="1"><tbody><tr>'.

  APPEND wa_objtxt TO it_objtxt.

 

*   table header

  wa_objtxt-line = '<th><font color="RoyalBlue">Server Name </font></th>'.

  APPEND wa_objtxt TO it_objtxt.

  wa_objtxt-line = '<th><font color="RoyalBlue">User Id</font></th>'.

  APPEND wa_objtxt TO it_objtxt.

  wa_objtxt-line = '<th><font color="RoyalBlue">Time</font></th>'.

  APPEND wa_objtxt TO it_objtxt.

  wa_objtxt-line = '<th><font color="RoyalBlue">Work Process Id</font></th>'.

  APPEND wa_objtxt TO it_objtxt.

  wa_objtxt-line = '<th><font color="RoyalBlue">Message Id</font></th>'.

  APPEND wa_objtxt TO it_objtxt.

 

 

*   table Contents

  LOOP AT it_final INTO wa_final.

    wa_objtxt-line = '<tr>'.

    APPEND wa_objtxt TO it_objtxt.

 

      CONCATENATE '<td><center>' wa_final-server '</center></td>' INTO wa_objtxt-line.

      APPEND wa_objtxt TO it_objtxt.

      CONCATENATE '<td><center>' wa_final-user '</center></td>' INTO wa_objtxt-line.

      APPEND wa_objtxt TO it_objtxt.

      CONCATENATE '<td><center>' wa_final-time '</center></td>' INTO wa_objtxt-line.

      APPEND wa_objtxt TO it_objtxt.

       CONCATENATE '<td><center>' wa_final-wp '</center></td>' INTO wa_objtxt-line.

      APPEND wa_objtxt TO it_objtxt.

      CONCATENATE '<td><center>' wa_final-errorid '</center></td>' INTO wa_objtxt-line.

      APPEND wa_objtxt TO it_objtxt.

 

 

    CLEAR : wa_final.

 

  ENDLOOP.

 

*   table close

  wa_objtxt-line = '</tbody> </table>'.

  APPEND wa_objtxt TO it_objtxt.

 

*   Signature color

  wa_objtxt-line = '<br><br>'.

  APPEND wa_objtxt TO it_objtxt.

  wa_objtxt-line = '<p> Regards,</p>'.

  APPEND wa_objtxt TO it_objtxt.

  wa_objtxt-line = '<p><b> Support Team</b></p>'.

  APPEND wa_objtxt TO it_objtxt.

 

*   HTML close

  wa_objtxt-line = '</body> </html> '.

  APPEND wa_objtxt TO it_objtxt.

 

* Document data

  DESCRIBE TABLE it_objtxt      LINES w_tab_lines.

  READ     TABLE it_objtxt      INTO wa_objtxt INDEX w_tab_lines.

  wa_docdata-doc_size =
      ( w_tab_lines - 1 ) * 255 + strlen( wa_objtxt-line ).

 

* Packing data

  CLEAR wa_objpack-transf_bin.

  wa_objpack-head_start = 1.

  wa_objpack-head_num   = 0.

  wa_objpack-body_start = 1.

  wa_objpack-body_num   = w_tab_lines.

*   we will pass the HTML, since we have created the message

*   body in the HTML

  wa_objpack-doc_type   = 'HTML'.

  APPEND wa_objpack TO it_objpack.

 

  1. " CREATE_TITLE_DESC_BODY

*&---------------------------------------------------------------------*

*&      Form  FILL_RECEIVERS

*&---------------------------------------------------------------------*

*       text

*----------------------------------------------------------------------*

*  -->  p1        text

*  <--  p2        text

*----------------------------------------------------------------------*

FORM FILL_RECEIVERS .

**  wa_reclist-receiver = 'xyz@abc.com'.

*  wa_reclist-rec_type = 'U'.

*  APPEND wa_reclist TO it_reclist.

*  CLEAR  wa_reclist.

  LOOP AT EmailId INTO r_email.

 

  MOVE r_email-low TO wa_reclist-receiver .

  wa_reclist-rec_type = 'U'.

   APPEND wa_reclist TO it_reclist.

 

  ENDLOOP.

*  CLEAR  wa_reclist.

 

  1. " FILL_RECEIVERS

*&---------------------------------------------------------------------*

*&      Form  SEND_MESSAGE

*&---------------------------------------------------------------------*

*       text

*----------------------------------------------------------------------*

*  -->  p1        text

*  <--  p2        text

*----------------------------------------------------------------------*

FORM SEND_MESSAGE .

*   Send Message to external Internet ID

  CALL FUNCTION 'SO_NEW_DOCUMENT_ATT_SEND_API1'

    EXPORTING

      document_data              = wa_docdata

      put_in_outbox              = 'X'

      commit_work                = 'X'

    TABLES

      packing_list               = it_objpack

      object_header              = it_objhead

      contents_txt               = it_objtxt

      receivers                  = it_reclist

    EXCEPTIONS

      too_many_receivers         = 1

      document_not_sent          = 2

      document_type_not_exist    = 3

      operation_no_authorization = 4

      parameter_error            = 5

      x_error                    = 6

      enqueue_error              = 7

      OTHERS                     = 8.

 

*  IF sy-subrc NE 0.

*    WRITE: /'Sending Failed'.

*  ELSE.

*    WRITE: /'Sending Successful'.

*  ENDIF.

 

  1. " SEND_MESSAGE

--------------------------------------------------------------------------------------------------------

Database/Disk Space Analysis of BW System


Purpose:


To have an overview of the data volume in order to check and control the growth of data in the BW system.

A report to serve the following purposes:

  • An analytical and flexible report to check the periodic growth of data
  • To check the components contributing to the growth
  • A report for periodic housekeeping activities and migration projects
  • An alert report in case disk space consumption has reached a defined limit

 

In all previous versions of BW this data was available in several transactions like DB02 and ST03, in some underlying tables like DB6TREORG, and via APIs.

However, with the SAP BW 7.3 release the technical content was enhanced for such analysis by providing the option to load this data into BW data models in the conventional manner, to facilitate a flexible/multidimensional view of the mentioned data.



Solution/Realization:

To use the new technical content, the BW system has to be on a SAP BW 7.3 environment with the latest BI Content version installed.

 

Following are the BI content activation steps required to activate the relevant data stream:


Step 1: Log on to the BW system and go to transaction RSA1 -> BI Content tab

Step 2: Maintain the following settings:

               Source System Assignment : Local BW system (Myself BW system)

               Collection Mode          : Automatically

               Grouping                 : In data flow before and afterwards

Step 3: Expand the MultiProvider section in the object type panel and double-click the option "Select Objects"

Step 4: In the new small window, select the binocular search button and type in 0TCT_MC25, as displayed in the following image:


 

Step 5: After selecting the mentioned row, select the button "Transfer Selections" at the bottom left of the window

Step 6: Then select the Multiprovider and its subsequent objects and select the "Install" button as displayed:

 

 

Step 7: This will install all the objects required for loading data to the InfoCube, and the reports on top of the MultiProvider

Step 8: Run an init data load for InfoCube 0TCT_C25. Please note the following relevant objects for the data flow:

            MultiProvider : 0TCT_MC25

            InfoCube      : 0TCT_C25

            DataSource    : 0TCT_DS25 (the DataSource is delta enabled)

Step 9: Automate the data load in a process chain to load data periodically and store the history.

 


Output/Reports:

The technical content provides a wide range of flexible reporting on the data; a few examples follow:

1) 0TCT_MC25_Q0103: BI Objects Size Overview: This report provides an overview of data in the various table spaces in the BW system, like the following example:

   

 

2) 0TCT_MC25_Q0105: BW DB Usage: Monthly Overview: This report provides an overview of the growth of data in the BW system, like the following:

   

 

3) 0TCT_MC25_Q0205: BI Data Distribution By Table Types: This report provides an overview of data based on the table types in the system:

 

  

 

The technical content also provides a rich set of characteristics to analyze the statistical data from various aspects, like data allocation based on InfoAreas, application component, or the top 10 tables in the system.

 

Automation/Alert:

These reports can be scheduled via the broadcaster and sent to the desired user groups in order to have a consistent periodic overview of the system growth.

Exception alerts can be set on the queries, to be broadcast when disk space consumption has reached certain defined limits.




FI-HR Payroll Integration Scenario based on Wage Type


We came across a requirement where there was a need to integrate FI-HR payroll based on wage types, but we did not find any standard DataSource for this. So we went ahead with the approach shown below to achieve it.

 

 

Hope it helps anyone who has to deal with such a scenario.

 

 

Scenario:

 

 

Our client required details of all the ad-hoc documents which are posted in FI based on postings made to payroll wage types.

 

 

According to the requirement, based on Country Grouping (MOLGA) and Wage Type (LGART), the Symbolic Account (SYMKO) needs to be fetched from the wage types table T52EL. The Symbolic Account which we receive from the wage types table then needs to be passed on to the accounts table (T030) and compared with the Valuation Grouping Code, providing cost center and transaction key selections.

 

 

The accounts table will give us the GL accounts based on Wage Type and Country Grouping as per the above requirement. The task is then to get the documents for the GL accounts which we receive from the accounts table. The GL accounts identified from the accounts table can be passed on to the BSEG/BKPF tables to get the required Accounting Document Number.

 

 

Table T52EL:

 

T52EL.jpg

 

 

Table T030:

 

 

T030.jpg

 

 

Implementation:

 

 

To accomplish the above requirement, where FI and payroll tables need to be mapped, we have to search for an extractor which matches the fields from the wage types and accounts tables, but there is no such DataSource. So we evaluated the options below:

 

 

1) Creating a custom DataSource based on a view over the T52EL and T030 tables.

 

 

Not possible, because T030 is not a transparent table.

 

 

2) Creating a custom DataSource based on a function module for the T52EL and T030 tables.

 

 

Not allowed either, since we would need to use a join operation on these tables in the FM coding, which again stopped us since T030 is not a transparent table.

 

 

3) Creating 3 custom DataSources and applying lookups between them to get the output.

 

 

Possible, but we went with a simpler way:

 

 

4) Using the standard payroll DataSource (0HR_PY_1_CE) and enhancing it with the fields GL Account (ZZ_KONTS) and Symbolic Account (ZZ_SYMKO).

 

 

Now, based on the requirement, we have a standard DataSource which we can use to map our HR table (T52EL) and the table in which the FI postings are made (T030).

 

Enhancement.jpg


 

The next task is to create logic for the enhanced fields:

 

 

WHEN '0HR_PY_1_CE'.

      TYPES: BEGIN OF ty_t52el,
               molga TYPE molga,
               lgart TYPE lgart,
               symko TYPE p_komok40,
             END OF ty_t52el.

      DATA: it_t52el TYPE STANDARD TABLE OF ty_t52el,
            wa_t52el TYPE ty_t52el.

      TYPES: BEGIN OF ty_t030,
               ktopl TYPE ktopl,
               ktosl TYPE ktosl,
               bwmod TYPE bwmod,
               konts TYPE saknr,
             END OF ty_t030.

      DATA: it_t030 TYPE STANDARD TABLE OF ty_t030,
            wa_t030 TYPE ty_t030.

      FIELD-SYMBOLS: <fs_0hr_py_1_ce> TYPE hrcce_hrms_biw_py1.

*     Symbolic accounts for the relevant country grouping and wage types
      SELECT molga lgart symko FROM t52el
        INTO TABLE it_t52el
        WHERE molga = 'AE' AND lgart LIKE '2%'.

*     G/L accounts for the relevant transaction keys and chart of accounts
      SELECT ktopl ktosl bwmod konts
        FROM t030 INTO TABLE it_t030
        WHERE ( ktosl = 'HRC' OR ktosl = 'HRF' )
          AND ktopl = '1000'.

      LOOP AT c_t_data ASSIGNING <fs_0hr_py_1_ce>.

*       Derive the symbolic account from country grouping and wage type
        CLEAR wa_t52el.
        READ TABLE it_t52el INTO wa_t52el
             WITH KEY molga = <fs_0hr_py_1_ce>-molga
                      lgart = <fs_0hr_py_1_ce>-lgart.
        IF sy-subrc = 0.
          <fs_0hr_py_1_ce>-zz_symko = wa_t52el-symko.
        ENDIF.

*       Derive the G/L account via the symbolic account
        CLEAR wa_t030.
        LOOP AT it_t030 INTO wa_t030
                WHERE bwmod = <fs_0hr_py_1_ce>-zz_symko.
          <fs_0hr_py_1_ce>-zz_konts = wa_t030-konts.
          <fs_0hr_py_1_ce>-zz_ktosl = wa_t030-ktosl.
        ENDLOOP.

      ENDLOOP.
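As a note on where this code runs (an assumption, since the original does not show the hook): a WHEN branch keyed on the DataSource name like this is typically placed in the transaction-data extractor enhancement, i.e. user exit EXIT_SAPLRSAP_001 of enhancement RSAP0001 (transaction CMOD) or the BAdI RSU5_SAPI_BADI, where c_t_data is the extracted data package.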

 

 

Create a DSO based on the 0HR_PY_1_CE DataSource, which will provide us the required GL accounts based on the T52EL and T030 tables.

 

 

We need to get the Accounting Document Number for the identified GL accounts from the BSEG table, so we can do a lookup based on InfoProviders on top of the 0FI_GL_40 DataSource, which takes its postings from the BSEG table.

 

 

The lookup logic is provided below (the RESULT_PACKAGE and <RESULT_FIELDS> names indicate a transformation end routine):

 

 

TYPES: BEGIN OF ty_/bic/azfi_o6940,
         comp_code  TYPE /bi0/oicomp_code,
         gl_account TYPE /bi0/oigl_account,
         co_area    TYPE /bi0/oico_area,
         costcenter TYPE /bi0/oicostcenter,
       END OF ty_/bic/azfi_o6940.

DATA: it_ty_/bic/azfi_o6940 TYPE STANDARD TABLE OF ty_/bic/azfi_o6940,
      wa_ty_/bic/azfi_o6940 TYPE ty_/bic/azfi_o6940.

DATA: itab_target TYPE STANDARD TABLE OF _ty_s_tg_1,
      wa_target   TYPE _ty_s_tg_1.

* Look up the GL accounts identified in the payroll DSO
SELECT comp_code gl_account co_area costcenter
  FROM /bic/azfi_o6900
  INTO TABLE it_ty_/bic/azfi_o6940
  FOR ALL ENTRIES IN result_package
  WHERE gl_account = result_package-gl_account.

* Keep only the records whose GL account was found in the lookup DSO
LOOP AT result_package ASSIGNING <result_fields>.

  READ TABLE it_ty_/bic/azfi_o6940 INTO wa_ty_/bic/azfi_o6940
       WITH KEY gl_account = <result_fields>-gl_account.

  IF sy-subrc = 0.
    <result_fields>-gl_account = wa_ty_/bic/azfi_o6940-gl_account.
    APPEND <result_fields> TO itab_target.
  ENDIF.

ENDLOOP.

REFRESH result_package.
result_package[] = itab_target[].

 

 

In this way we can achieve the required Accounting Document Number.

SAP NetWeaver 7.4 BW ABAP Support Packages



Check out the listed SAP BWNews notes to see what is delivered with the corresponding Support Package.

 

SAP BWNews Notes


SAP NetWeaver 7.4 SP #   SAP BWNews Note   Comments
02                       1804758           released
03                       1818593           released
04                       1853730           released
05                       1888375           released
06                       1920525           released
07                       1955499           released
08                       2000326           released
09                       2030800           planned release dates
10                       2070452           planned release dates

 

 

What comes with the Support Packages?

 

Release of SAP BW 7.4 SP8

SAP BW 7.4 SP8 powered by SAP HANA is the next milestone for enterprise data warehousing with BW on HANA and provides the next level of simplification for BW. In addition, SAP BW on HANA's Virtual Data Warehouse capabilities have been enhanced for even more flexibility.

The further push-down of transformations and OLAP capabilities is the next step toward excellent data load and analysis performance.

Customers focusing on the planning capabilities will benefit from the enhanced Planning Application Kit and FOX usability.

 

 

Support Package 07
Transfer of Reporting Objects and Reporting Data with SDATA: Using the new SDATA tool, you can transfer reporting objects and data from a source location to a target location, for test and demo purposes. This enables you, for example, to transfer reporting objects and data to BW trial versions in the cloud. Note that SDATA is intended exclusively for test and demo purposes and is not suitable for transporting BW objects or transferring data in a productive system landscape!

 

In expert mode in the SAP HANA analysis process, the Maintenance Execution function is now available. This function allows support to simulate execution of the SAP HANA analysis process for troubleshooting purposes.

CompositeProvider: The function for setting cardinality has been changed. Conversion routines are now taken into account if you enter constants. The external display is now shown.

In the transformation, you can now use the new rule type Calculation 0RECORDMODE for ODP.

 

Various interface changes and user-friendliness enhancements have been introduced in the data transfer process (DTP):

 

For detailed information visit the Release Notes for SAP BW 7.4 SP7

Support Package 06

 

 

Support Package 05

 

 

Further Information:

 

  • Documentation/release note information will be provided shortly before the release of the corresponding Support Package
  • The release dates for the Support Package Stacks can be found in the Support Package Stack Schedule (SMP login required)
  • For general information about SAP NetWeaver BW 7.4 please visit SAP NetWeaver Business Warehouse 7.4
  • For regular updates please subscribe to the notes above as follows: display the note on the Service Marketplace page, either via the direct links above or via the SAP Notes search with the note number. To subscribe, activate the "Subscribe" button (at the left, above the title line of the note page). Also make sure that your e-mail notification is activated (for activation see note 487366).
  • Importing Support Packages: Please note that after implementing a Support Package, you are usually required to perform additional maintenance in transaction SNOTE. SAP Notes that are already installed may become inconsistent, which can lead to functional or syntax errors. Go to transaction SNOTE and reimplement the SAP Notes that are no longer consistent. Your BW system is only operable and consistent after you have reimplemented these SAP Notes.

SAP BW 7.31 ABAP Support Packages


SAP BWNews Notes

Check out the listed SAP BWNews notes to see what is delivered with the corresponding Support Package.

 

 

Release Notes for the SAP NetWeaver 7.31 Support Package Stacks

Check out the release notes for detailed information on SAP NW 7.31 SPS. (Logon to SAP Help Library is required)

 

SAP NetWeaver 7.31 Stack Number   SAPBWNews Note   Comment
15                                2070453
14                                2030844
13                                1989585          released
12                                1951409          released
11                                1914639          released
10                                1882717          released
09                                1847231          released
08                                1813987          released
07                                1782744          released
06                                1753103          released
05                                1708177          released
04                                1680998          released
03                                1652580          released
02                                1630954          released
01                                1593298          released

 

 

News


For detailed information on all the Support Packages for SAP BW 7.31, please see the release notes.

 

 

  •   Support Package 9: (Logon to SAP Help Library is required)
    • The SAP HANA-optimized DataStore object is now obsolete. See documentation

    • The integration of SAP BW and the SAP Landscape Transformation Replication Server (SAP LT Replication Server) allows real-time replication of data in BW and provides two options: You can use the SAP LT Replication Server for delta procedures for tables that do not contain any fields that are capable of producing delta figures. This makes it possible to use delta procedures for extractors that do not have delta logic.

      The trigger-based replication approach allows you to significantly reduce the administrative load if you have frequent master data updates.

      For further information see "Data Transfer with SAP Landscape Transformation Replication Server (New)"

    • A planning function type is a procedure that can be parameterized and that changes transaction data in BW Integrated Planning and in the Planning Application Kit. You can also implement customer-specific planning function types, in order to implement specific procedures and apply them to transaction data.

    •   Details on Logging Saved Values (Enhanced)

 

 

  • Support Package 8: (Logon to SAP Help Library is required)
    • Data Transfer with SAP Landscape Transformation Replication Server (New)

      The integration of SAP BW and the SAP Landscape Transformation Replication Server (SAP LT Replication Server) allows real-time replication of data from non-SAP sources and SAP sources to BW, and provides the following options:

      You can use the SAP LT Replication Server for delta procedures for tables that do not contain any fields that are capable of producing delta figures. This makes it possible to use delta procedures for extractors that do not have delta logic.

      The trigger-based replication approach allows you to significantly reduce the administrative load if you have frequent master data updates.

      Note that complex extractors should generally not be replaced by replication with the SAP LT Replication Server.

      Learn more about the present possibilities to achieve real-time replication into SAP BW 7.30 powered by SAP HANA and read the presentation "Real-time Data Warehousing with SAP LT Replication Server (SLT) for SAP NetWeaver BW" by Marc Hartz.

 

    • SAP HANA-Optimized DataStore Object: With the next Support Package, or when SAP Note 1849497 is implemented, SAP HANA-optimized activation will be performed for all standard DataStore objects. Conversion of DataStore Objects to SAP HANA-optimized objects will then be obsolete. This conversion will no longer be necessary, since all standard DataStore objects will be optimized for SAP HANA automatically. You can still use existing SAP HANA-optimized DataStore objects, but there will also be a new transaction for reconversion. Related Information.

    • Enhanced ODP Source System: For sources that offer Operational Data Providers (ODP) hierarchically, you can now select the ODP via the source hierarchy in DataSource maintenance. You can select ODPs for SAP HANA, for example, via the package hierarchy. You can now change the field names for the DataSource in the Proposal tab. If you do not make any changes to the field names, they are adapted automatically as before if they are not suitable as PSA field names.

 

 

  • Support Package 7: (Logon to SAP Help Library is required)
    • The structure of the documentation has been enhanced. Some topics were bundled, others expanded and revised, to represent the documentation structure more clearly. See details.
    • The tool for editing the runtime parameters for DataStore objects has been revised and enhanced. It is now more user-friendly and offers users the option of changing the settings for MPP databases and the SAP HANA database.
    • The data flow documentation now contains more details about data transfer processes. Information has been added about the fields used in the filter and the key fields in the error stack: Saving Data Flow Documentation as an HTML File

    • The adapter for Sybase IQ as a near-line solution is delivered with the BW system. Integration of Sybase IQ makes it possible for you to separate data that you access frequently from data that you access rarely, thus making fewer demands on the resources in the BW system. The near-line data is stored in compressed form and needs to be backed up less frequently. You can thus reduce the costs incurred by data that is accessed less frequently.

 

 

  • Support Package 6: (Logon to SAP Help Library is required)
    • SP6 introduces the concept for handling Inactive Data with SAP HANA Database:
      To optimize the use of the main memory in SAP HANA, we introduced the concept of inactive data in the BW system. Optimization is supported as of SAP HANA Support Package 5. Implementing this concept results in higher priority displacement of data from Persistent Staging Area tables and from tables of write-optimized DataStore objects from the main memory, in the event of main memory bottlenecks. This has a positive effect on hardware sizing when dealing with a large quantity of inactive data of these object types.
    • SAP HANA-Optimized DataStore Object: Change Log Compression
      As of SAP HANA Revision 38, you have the option of having a change log created for every activated record in SAP HANA-optimized DataStore objects.

 

 

  • The release dates for the Support Package Stacks can be found in the Support Package Stack Schedule (SMP login required).
  • Please see the SAP NetWeaver 7.31 Support Package Stack page (SMP login required). This page is the central point of information for planning the implementation of Support Package Stacks (SP Stacks) for SAP NetWeaver 7.31.
  • For regular updates please subscribe to the notes above as follows: display the note on the Service Marketplace page, either via the direct links above or via the SAP Notes search with the note number. To subscribe, activate the "Subscribe" button (at the left, above the title line of the note page). Also make sure that your e-mail notification is activated (for activation see note 487366).
  • Importing Support Packages: Please note that after implementing a Support Package, you are usually required to perform additional maintenance in transaction SNOTE. SAP Notes that are already installed may become inconsistent, which can lead to functional or syntax errors. Go to transaction SNOTE and reimplement the SAP Notes that are no longer consistent. Your BW system is only operable and consistent after you have reimplemented these SAP Notes.

Procedure to Load data using Setup Table


Procedure to load data using Setup Table

 

Summary

 

The objective of this document is to give information on loading data using setup tables while delta is still enabled. The information below is available on SCN in different forums; I have tried to collate it, add some more information from my personal experience, and present it in one single document.

 

Introduction

 

Setup tables are tables which are directly linked to application tables. SAP doesn't allow direct access to the application tables for extraction, and hence setup tables act as an interface between the application tables and the extractor.

 

A load from the setup table is used to initialize delta loads, which means it is always a full load from the application tables, based on the selection provided while running the setup job. Once the load from the setup table is completed, we can load deltas through the delta queue as normal.

 

Business Scenario


We have various scenarios for loading historical data to a newly developed InfoProvider, or to an existing InfoProvider with a few enhancements. In these scenarios, we may need the help of setup tables to load the data; this is part of the LO extraction scenario.

 

Business users want a new report based on purchasing which cannot be achieved using a standard InfoCube. Hence we have developed a new InfoCube which needs to have data from 2LIS_02_SCL (which is already delta enabled).

 

In the above scenario, if the business doesn't require historical data, then there is no challenge, since we can add the new InfoCube to the existing delta InfoPackages.

 

If the business requires historical data, then we need to load the history using the setup table and then add the same InfoCube to the existing delta loads.

 

Procedure to fill setup table and load data to BI

 

We need to keep in mind that the setup table is not always filled. If we have a scenario to load full/init, we need to fill the corresponding setup tables by scheduling setup runs (background jobs).

 

Before proceeding to fill the setup tables, we should make sure that they are empty.

 

Below is the procedure to delete setup tables. The procedure is the same for all components, and the deletion is component-dependent, not DataSource-wise. Hence, if our requirement is to load data from the HDR DataSource, we need to fill the whole component, which includes the HDR information as well.

 

The transaction code to delete setup tables is LBWG.

 

 

LBWG.png

 

Once we get the above screen, we can specify the component (02, 11, etc.) and click on the execute button.

 

Once we execute, it gives us the below screen for confirmation. Click 'Yes' to proceed further.

 

(We are not going to delete any data from the application tables. We are deleting the setup table before filling it up, just to avoid duplicate data.)

 

Prompt.png

 

Once the setup table is deleted, the below message will appear at the bottom of the screen.

 

message.png

 

Now we are done with the deletion of the setup table and can proceed to fill it back up.

 

The deletion activity is very simple, with one transaction for all components. But in order to fill the setup tables, we have the following transaction codes for the individual components.

 

 


T-Code    Application
OLI1BW    Material Movements
OLIZBW    Invoice Verification/Revaluation
OLI3BW    Purchasing Documents
OLI4BW    Shop Floor Information System
OLIFBW    Repetitive Manufacturing
OLIQBW    Quality Management
OLIIBW    Plant Maintenance
OLISBW    Service Management (Customer Service)
OLI7BW    SD Sales Orders
OLI8BW    Deliveries
OLI9BW    SD Billing Documents
VIFBW     LES-Shipment Costs
VTBW      LES-Transport
ORISBW    Retail
OLIABW    Agency Business
OLI6BW    Invoice Verification
OLI4KBW   Kanban

 

We can also reach the above through transaction code SBIW via the following path.

 

Settings for Application-Specific Data Sources --> Logistics --> Managing Extract Structures --> Initialization --> Filling in the Setup Table --> Choose the component to fill

Setup Table.png

 

Loading data to BI


Once the required historical data is available in the setup table, we can start loading it to BI. Before triggering the InfoPackage, we need to change the settings in the InfoPackage to load it as a repair full request.


Below are the steps to convert an InfoPackage with a full load into a repair full load.

 

Open the info-package

Menu --> Scheduler --> Click ‘Repair Full Request’ as below

 

Capture.PNG

 

The below screen will appear, in which we need to check the option 'Indicate Request as Repair Request'.

 

Prompt 1.png

 

The purpose of changing the full load to a repair full load is that we would not be able to run delta after a plain full load. Hence the load should be a repair full load.

 

There is also an option to change a full load request into a repair full request after loading, using the program 'RSSM_SET_REPAIR_FULL_FLAG'.


Once the full loads are completed, we need to run the above program, which will give us the below screen.

 

Program.png

 

We need to fill in the above information and execute. All full requests in the InfoProvider will be converted into repair full requests, after which we can do delta loading without any issues.

 

Points to remember

  • The setup table deletion and fill activities should preferably be done outside business hours
  • Before starting the activities, it is suggested to lock the business users, so that deltas will not be lost
  • It is not mandatory to stop the V3 jobs, since the setup table concept runs on an entirely different path and will not disturb the delta. Still, we should run the V3 jobs manually and make sure the delta loads to BI pick up '0' data.
  • When we run the setup table fill jobs, it is always recommended to fill for selective regions or periods. We can never predict how much data is available in the application tables; if we don't specify any selections, the job can take days to complete.

 

Naming Convention of setup tables

 

The setup table name is the extract structure name followed by SETUP.

 

It starts with 'MC', followed by the application component ('01', '02', etc.), then the last digits of the DataSource name, followed by SETUP. We can also derive it from the name of the communication structure in LBWE followed by 'SETUP'. For example, for 2LIS_02_ITM the extract structure is MC02M_0ITM, so the setup table is MC02M_0ITMSETUP.

 

Below is the list of setup table names for the application 02 – Purchasing.

 

With the use of these tables, we can make sure that the data has been deleted/filled after the corresponding jobs (see the sketch after the table below).

 

 


Application   DataSource    Setup Table
Purchasing    2LIS_02_ACC   MC02M_0ACCSETUP
Purchasing    2LIS_02_CGR   MC02M_0CGRSETUP
Purchasing    2LIS_02_HDR   MC02M_0HDRSETUP
Purchasing    2LIS_02_ITM   MC02M_0ITMSETUP
Purchasing    2LIS_02_SCL   MC02M_0SCLSETUP
Purchasing    2LIS_02_SCN   MC02M_0SCNSETUP
Purchasing    2LIS_02_SGR   MC02M_0SGRSETUP
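With a quick ABAP check (a sketch, not part of the original document; the table name is taken from the list above), we can verify that a setup table is empty before a new setup run:

REPORT z_check_setup_table.

* Verify that the purchasing item setup table is empty before a setup run
DATA lv_count TYPE i.

SELECT COUNT(*) FROM mc02m_0itmsetup INTO lv_count.

IF lv_count = 0.
  WRITE: / 'MC02M_0ITMSETUP is empty - ready for a new setup run.'.
ELSE.
  WRITE: / 'MC02M_0ITMSETUP still contains', lv_count, 'records.'.
ENDIF.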

 


Important Transaction Codes

 

NPRT – Log for Setup Table jobs

LBWG – To delete setup tables based on components


SAP BW Archiving data to NLS


Hi All,

 

There are a couple of steps to perform for data archiving into NLS (near-line storage).

 

I was unable to find any specific document matching my requirement when I was performing data archiving to NLS, so I am sharing the steps here.

 

Run the transaction RSA1

 

Select the cube under InfoProvider.

 

Manage.JPG

 

Select the archiving tab

 

Archiving tab.JPG

 

At the bottom of the screen, select the 'Archiving Request' button.

 

Archiving request.JPG

 

A new popup will come up; select the date range as per the requirement.

 

selection criteria.JPG

 

Provide the date range criteria

 

Execute the job in background mode

background.JPG

 

Click the refresh screen to display the updated status for the archiving request.

 

Archiving request_completed.JPG

 

Double-click on the status button to continue with the other steps.

 

Status.JPG

Select the verification phase as per the requirement

 

verification.JPG

Set the deletion phase to 70.

deletion phase.JPG

Execute the job in Background mode.

 

 

background.JPG

Refresh the screen to get the updated details.

 

refresh.JPG

 

Once the data archiving is done successfully, all the statuses will turn green. This means that all the data has been copied and verified, and is set for deletion from BW.

 

Archiving request_completed_finally.JPG

 

 

Hope you all now understand the concept of archiving clearly.

 

But make sure of a couple of things: for the object you have planned to archive, check the number of records first. The runtime varies, and archiving can sometimes run for a couple of days, depending on the scenario and requirement.

 

In my case it ran for a couple of days.


Regards

Mohammed

BW Tip - LISTCUBE !!!


Hi All,

 

Today I'll share a BW tip which is helpful for those in AMS projects; it reduces the effort spent on displaying the same cube data every time during analysis.


This simple method will help you reduce your time on this; just give it a try if you feel comfortable with it.


LISTCUBE is the transaction code you can make use of for this.

 

Have you ever thought of how to save the selection screen and output fields in transaction LISTCUBE (an alternate way to view your cube data)? If not, below are the steps to follow to make use of it.

 

Advantages:

  • This helps with reusability, avoiding running the LISTCUBE transaction again and again for the same selections.
  • This also helps you reduce a certain amount of manual work.

 

Here are the steps to be followed.

 

     1. Go to transaction LISTCUBE.

 

     2. Select the InfoProvider to be displayed.


Untitled.png

3. At the same time, give a program name starting with Z or Y (this is the program which is going to be reused); please make a note of it.


Untitled.png

4. Now execute, which displays your selection screen; select the fields needed for the selection, and also select the fields needed for the output using the 'Field Selection for Output' tab.


5. After selecting them, click the Save button at the top of the screen to save them as a variant.


Untitled.png


6. You will get a popup screen asking for a variant name and description; enter the needed info and save it again via the Save button or the menu Variant ---> Save.


Untitled.png

7. You can make use of the option check boxes available on the screen, like 'Protect Variant', which prevents others from changing this variant in the same program (it can then be changed only by the user who created it). The others are not strictly required; if you need to know their purpose, press F1 on the corresponding check box.


8. Now go to transaction SE38, enter the program name you gave in the LISTCUBE transaction, and execute it.


9. Once you execute it, you will get the screen as you defined it in LISTCUBE. Click the variant button for saved variants, or use the 'Field Selection for Output' tab for changes, like saving another variant or changing the current one.


Untitled.png

10. If you want to view the variants saved from LISTCUBE, use the variant button and select the variant to display.


11. Select the needed variant and execute the program to get your desired output.


Note: you can also store ‘N’ number of variants from this program itself.

 

Now, instead of going into LISTCUBE again and again to make your selections, you can use this program whenever you want to display output from the same InfoProvider any number of times; this reduces the time spent entering your selection options and your desired output fields.

Dependencies: If the structure of the Info Provider changes, a program generated once is not adapted automatically. The program has to be manually deleted and generated again to enable selections of new fields.


Thanks,

Siva




BI Activities involved in SAP BI 7.4 upgrade – Part 1


BI Activities involved in SAP BI 7.4 upgrade – Part 1

 

 

Summary

 

The objective of the document is to give information on the activities performed during and after BI 7.4 upgrade.

 

System Information

 

Before Upgrade: SAP BI 7.0 SP 22 - SAPKW70022

 

After Upgrade: SAP BI 7.4 SP 7 - SAPKW74007

 

Introduction

 

An introduction to SAP BI 7.4 is not necessary here, since a lot of information is already available all over SCN.

 

Please don't expect information that is not available anywhere else; the details below about the activities performed during and after the upgrade may also be found here and there.

 

I have tried to include our project experience during the upgrade, which should be useful in overcoming practical difficulties.

 

We started our actual upgrade in mid-June and finished in mid-September, across four different BI systems: secondary development, development, quality and production. We still have a project testing system which is not yet upgraded.

 

It is always better to exclude one system from the upgrade, so that it can be used for testing and comparison (just a personal thought, since it was very useful for us to check the changes before and after the upgrade).

 

BW Sizing Information required for Quick Sizer

 

Quick Sizer is a web-based tool designed to make the sizing of SAP solutions easier and faster. It was developed by SAP in close cooperation with all platform partners and is free of cost. With Quick Sizer you can translate business requirements into technical requirements: simply fill in the online questionnaire, an up-to-date survey based on business-oriented figures. The results can help you select an economically balanced system that matches your company's business goals.

 

The information we provided outlines whether we also need to upgrade our hardware before the application upgrade.

 

You can get all the information about Quick Sizer on the SAP Service Marketplace using your login ID and password. Let us go into what we actually did to gather the required information.

 

The Basis team is fully responsible for entering the information into the Quick Sizer tool, and a lot more information is provided there beyond BW sizing. Here I will explain what we did from the BI side.

 

The information required from a BI system is divided into five different tables, and the required details vary per table.

 

  • Table 1: Throughput - User Groups of SEM-BPS

 

This table is related to Integrated Planning and BPC information. We provide details such as how many planners there are, how many real-time InfoCubes, the volume of data, etc.

 

  • Table 2: Throughput - Query & User Distribution

 

Here we provide information on normal, business and super users: how many of them use the reports, how many reports there are in total, and how much time they spend on average.

 

  • Table 3: Throughput - Data Upload to BW Server

 

Here we provide the volume of data expected on a daily basis, which is based on the ECC transactions.

 

  • Table 4: Throughput - Definition of Infocubes

 

This is a time-consuming table. As the name suggests, we provide information about each and every InfoCube present in the system: number of dimensions, number of key figures, average volume of data, records in the initial load and much more.

 

  • Table 5: Throughput - Definition of ODS Objects

 

This is the same as the above table, but with DSO-specific information: number of text fields, numeric fields, average volume of data, records in the initial load and so on.

 

Issues faced in Quick Sizer tool

 

The first three tables do not cause many issues, but the tables related to InfoCubes and DSOs give errors stating that the information provided is wrong or insufficient.

 

See a sample error message

 

Capture.PNG

 

In some scenarios we had to adjust the data for those two tables.

 

We might not be able to get some data accurately, for example the average length of character fields in a DSO. There are almost 300 DSOs in our system, and it is not feasible to determine the character length of every field and calculate the average. So initially we left that field blank for all entries; the tool raised errors for only a few DSOs, for which we then calculated the average length and uploaded it. A sketch for calculating this per DSO follows.
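
A minimal sketch for a single DSO, assuming a hypothetical active table /BIC/AZSD_O0100; it averages the lengths of the character-type fields read from the ABAP Dictionary table DD03L:

* Average length of the character fields of one DSO active table
DATA: LT_LENG TYPE TABLE OF DD03L-LENG,
      LV_LENG TYPE DD03L-LENG,
      LV_SUM  TYPE I,
      LV_AVG  TYPE P DECIMALS 1.

SELECT LENG FROM DD03L INTO TABLE LT_LENG
  WHERE TABNAME = '/BIC/AZSD_O0100'   "hypothetical active table name
    AND INTTYPE = 'C'.                "character-type fields only

LOOP AT LT_LENG INTO LV_LENG.
  LV_SUM = LV_SUM + LV_LENG.
ENDLOOP.
IF LINES( LT_LENG ) > 0.
  LV_AVG = LV_SUM / LINES( LT_LENG ).
ENDIF.
WRITE: / 'Average CHAR field length:', LV_AVG.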

 

For some cubes, the total number of requests was more than 10,000, and Quick Sizer initially didn't accept five digits. Later, when all other issues were rectified, it accepted five digits as well.

 

In some cases it did not accept whatever values we gave; we then entered NA and it worked. So it sometimes works on a trial-and-error basis.

 

PS: We didn't spend much time analysing the reason for these issues, since there were more than one or two errors and we were keen to resolve them as early as possible and proceed further.

 

Important Tables which we used to get information

 

We did not use all of the tables below, but most of them. I have tried to list all the tables related to InfoCubes and DSOs, as they are useful in many scenarios; there are many more beyond those listed here. A small counting sketch follows after the list.

 

RSDCUBE - Directory of Infocubes

RSDCUBET - Texts on Infocubes

RSDCUBEIOBJ - Objects per Infocube (where-used list)

RSDDIME - Directory of Dimensions

RSDDIMET - Texts on Dimensions

RSDDIMEIOBJ - InfoObjects for each Dimension (Where-Used List)

RSDCUBEMULTI - Infocubes involved in a Multicube

RSDICMULTIIOBJ - MultiProvider: Selection/Identification of InfoObjects

RSDICHAPRO - Characteristic Properties Specific to an Infocube

RSDIKYFPRO - Flag Properties Specific to an Infocube

RSDICVALIOBJ - InfoObjects of the Stock Validity Table for the Infocube

RSDODSO - Directory of all ODS Objects

RSDODSOT - Texts of all ODS Objects

RSDODSOIOBJ - InfoObjects of ODS Objects

RSDODSOATRNAV - Navigation Attributes for ODS Object

RSDODSOTABL - Directory of all ODS Object Tables

RSODSSETTINGS - Settings for an ODS
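
As a starting point for Tables 4 and 5, a minimal sketch that counts the active InfoCubes and DSOs in the system (OBJVERS = 'A' selects the active object versions):

* Count active InfoCubes and DSOs as a starting point for the sizing tables
DATA: LV_CUBES TYPE I,
      LV_DSOS  TYPE I.

SELECT COUNT( * ) INTO LV_CUBES FROM RSDCUBE WHERE OBJVERS = 'A'.
SELECT COUNT( * ) INTO LV_DSOS  FROM RSDODSO WHERE OBJVERS = 'A'.

WRITE: / 'Active InfoCubes:', LV_CUBES,
       / 'Active DSOs:', LV_DSOS.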

 

This activity seems very simple, but it takes many days to get all the required information from the production system. There are a lot more activities coming in the next couple of documents.


Thanks for your time! Please feel free to add any comments.

 

Other Documents published by me

 

http://scn.sap.com/docs/DOC-58355

http://scn.sap.com/docs/DOC-58038

BW Operations/Support- Utilities



Objective:

 

To provide an overview of standard SAP utilities useful for BW operations and production support.

 

During regular operations there are often situations where the following information is required:

 

1) Analysis of background jobs: what the job was doing, where it spent how much time, how much memory it consumed, etc.

2) BW Accelerator: how often the BWA indexes were used, an overall BWA summary, and which InfoCube lies on which blade (to judge the impact during issues).

3) Query usage analysis: how often a query was executed and by whom.

4) BW process chain analysis: the basic need to monitor chains and observe aspects such as the runtime of each step in a chain.

 

Often a consultant has to spend a lot of time and apply various tricks to find the relevant information. This document provides an overview of some standard utilities that help the support consultant derive it:

 

 

Analysis of Background Job:


Step 1: Go to transaction ST13, select the program BACKGROUND_JOB_ANALYSIS and execute.

 

 

Step 2: Enter the job name/user and the time frame for which you need information, and execute:

 

Step 3:  Execution will provide the list of jobs running in the system based on the selections entered.

 

 

Step 4: Select the job you would like to analyze from the results and click the "STAD" button, as displayed in the following image:

 

 

Step 5: The STAD button displays the statistical information of the job, as shown in the following image:

 

 

Step 6: If further details are required, double-click the result row; this leads to a transaction providing a tabular view of the detailed statistical data.

 

 


BW Accelerator:

 

Step 1: Go to transaction ST13, select the program BWATOOLS and execute.

 

Step 2: A cockpit is displayed to perform various analyses on the BWA, as shown in the following image:

 

 

Index Usage Analysis

Step 3: Select the button "Index Usage Analysis" to analyze how frequently the indexes are used and judge their quality.

 

Step 4: Provide the selections for the analysis and execute, as displayed in the following image:

 

Step 5: Execution provides the statistical information of the selected InfoCube, based on the time frame.

 

 

 

BWA index attributes: to get index details such as the size of the indexes, the number of records per index, and which blade hosts which index.

 

Step 1: Select the button "Index Properties" and execute:

 

Step 2: Enter the appropriate selections for the analysis, as displayed below, and execute:

 

Step 3: Execution provides the statistical information about the indexes for further analysis, as displayed in the following image:

 

 

Step 4: Further details can be obtained by selecting a particular record.




Query Usage Analysis

Step 1: Go to transaction ST13, select the program BW_QUERY_USAGE and execute.



Step 2: Execution displays the following screen to enter the selections for the analysis:

 

Step 3: Uncheck the option "All Queries" if you want to analyze a specific query (here I uncheck it to analyze one particular query) and execute.

Step 4: An additional window appears, asking you to select the relevant query:

 

Step 5: Expand the relevant InfoArea and select the query to analyze:

 

Step 6: Select the query and click the "Open" button at the bottom of the window:

 

 

Step 7: This displays who executed the query and how many times:

The same exercise can be performed for all queries, for a specific time frame, or for a specific user in the system.

 

However, this utility has the limitation that it does not provide the timestamp of each execution, although you can use the time frame field in the selection window to narrow down the findings.

 

 

 

BW Process Chain Analysis

Step 1: Go to transaction ST13, select the program BW-TOOLS and execute.

 

Step 2: This program provides a range of options for various BW analyses; for our example, select the option "Process Chain Analysis" and execute.

 

 

Step 3: The next window provides options to analyze complete chains or particular process types in the system for a specific time range, as shown in the respective images:

 

 

 

Step 4: Enter the name of the chain if you need to analyze a particular chain, otherwise leave it blank, and execute.

 

Step 5: This displays a tabular view of the chains that were running/completed/aborted in the system during the given time frame.

Step 6: The tabular view provides two views of the data:

               i) Tree view: displayed when you select the "Log Id" column of a chain; it shows a tree-shaped view of the chain's progress.

              ii) Hierarchy view: a more analytical view providing the runtime of each step in the process chain; displayed by selecting the "Chain" column.

 

 

This transaction provides lots of useful utilities; I would highly recommend using it and exploring further options based on your needs.

 

Hope it helps !!

Function of RSADMIN parameters for issues occurring in the SAP BW Web Java Runtime

$
0
0

Purpose:
To understand the function of a few of the RSADMIN parameters for some of the issues occurring in the SAP BW Web Java runtime.

 


 

These RSADMIN parameters are set in the relevant BW backend system: go to transaction SE38 and run the report SAP_RSADMIN_MAINTAIN.
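
After setting a parameter you can verify the entry directly in table RSADMIN (fields OBJECT and VALUE). A minimal sketch, using one of the parameters from the list below:

* Check the current value of an RSADMIN parameter
DATA: LV_VALUE TYPE RSADMIN-VALUE.

SELECT SINGLE VALUE FROM RSADMIN INTO LV_VALUE
  WHERE OBJECT = 'RS_BEX_FORCE_CHARTS_RENDERING'.
IF SY-SUBRC = 0.
  WRITE: / 'Parameter value:', LV_VALUE.
ELSE.
  WRITE: / 'Parameter is not set.'.
ENDIF.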

 

A few of the RSADMIN parameters are listed here:


1) RSWR_EXPORT_INTERNAL_ITEM:

Please refer to the following SAP Notes for details and prerequisites:
2057789 - Changes in the default exporting behaviour of images
1660488 - Export internal mime images to excel


2) RS_BEX_FORCE_CHARTS_RENDERING
When executing a Web Template containing the Web Item Chart, the following error message occurs:
"Graph size is too small to display this amount of data"

For this symptom, you can do either of the following:
Increase the size of the chart,
or
alternatively apply the Support Package or patch mentioned in SAP Note 1781468 and define the following RSADMIN parameter on the BW ABAP backend:
RS_BEX_FORCE_CHARTS_RENDERING = X
To maintain the table RSADMIN you can use the ABAP report SAP_RSADMIN_MAINTAIN. After RSADMIN parameters have been changed, a restart of the Java system may be required.
Please be aware that defining this parameter may lead to performance issues or to improperly rendered charts.


3) F4_LIMIT_HIERARCHY_MEMBERS
Please refer to the following SAP Note for details and prerequisites:
1352432 - F4 help: Limit hierachy members in selection screen


4) EXPORT_HIDDEN_ITEM
Please refer to the following SAP Note for details and prerequisites:
1730163 - 730:Visible item inside hidden group item is not exported.


5) BICS_DA_RESULT_SET_LIMIT_MAX
BICS_DA_RESULT_SET_LIMIT_DEF
These are the safety-belt parameters.
Please refer to the following SAP Note for details and prerequisites:
1127156 - Safety belt: Result set is too large


6) IGNORE_RSD1_MA_ATTR
You can set this parameter to X if you do not want the attribute values to be displayed in the F4 help for the hierarchy variable while executing the Web template.

Please refer to the following SAP Note for details and prerequisites:
1144979 - RSD1: Deactivating attribute display in input help


7) F4_LIST_NO_LINEBRAEKS
You can set this parameter to X if the following problem occurs:
"When opening some queries on the Web, one query shows some ??? characters instead of the expected values, or a null pointer exception occurs."

Please refer to the following SAP Note for details and prerequisites:
1368055 - BEx Web 7.0: Nullpointer exception from ToolSingleValue.java


8) RS_ALLOW_WILDCARD_FOR_SELOPT
You can set this parameter to X if the following problem occurs:
"You execute a Web Template/query, the resulting page shows a variable screen, and when you try to add a wildcard entry in the direct input field you get the error message "The entered value <XXX> is not valid. Please enter a valid single value", and the value is not added to the right-hand side of the selection screen."
Please refer to the following SAP Note for the SP and patch-level prerequisites of the BI Java components when setting this parameter to X:
1561846 - Wildcard entry for SELECTION OPTION variable in selector


9) ALLOW_INVALID_VARIABLE_VALUES
DISABLE_NO_MASTER_DATA_WARNING

The following problem occurs:
"While executing the Web Template/query, the resulting page shows a variable screen; after entering the relevant values and clicking Check and OK, the resulting page shows the following error messages:
"Characteristic <InfoObject> has no Master Data for <Technical ID>" and "Value " " does not exist; enter a different value"."

Please refer to the following SAP Notes for details and prerequisites:
1451699 - Suppressing warnings for single values that do not exist
1546051 - Planning Function selection has missing or invalid entries


10) BICS_DA_MEMB_READ_LIMIT_MAX
BICS_DA_MEMB_READ_LIMIT_MAX_DL
BICS_DA_MEMB_READ_LIMIT_MAX_TP

Please refer to the following SAP Note for details and prerequisites:
1938365 - BICS 7.3x: Characteristic member selection safety belt


11) ALLOW_MULTI_DP_RESET
Set this parameter to X in the following situation:
While using the menu entries "Back to Start" and "Back One Navigation Step" in the context menu, or while using commands such as "BACK_TO_PREVIOUS_STATE" and "BACK_TO_INITIAL_STATE", performance is bad or you do not get the correct result.

Please refer to the following SAP Note for details and prerequisites:
1581923 - BEx Web: Optimization on Back with Many Data Providers

 

Also, with regard to issues occurring in the Web Java runtime, please refer to the following SAP KBA:

1899396 - Patch Level 0 for BI Java Installation - Detailed Information

Long text extraction via READ_TEXT FM


Reading long texts is a tough task for a BW consultant when loading into a BW system. SAP provides the function module READ_TEXT for this purpose: it decodes the text that is stored in compressed raw format in tables STXH and STXL for various SAP objects. We recently got such a requirement: we had to pull the sales order texts.

 

I searched the net and came up with a solution; as there were not many documents scripting the complete approach I needed, I used a few documents as references for this development, as mentioned in the bibliography below.


Scenario:

 

Our project is based on SAP Retail and POSDM. The data generated by SAP ECC is pulled by an Informatica system and reported in the MicroStrategy tool.

 

We have implemented below modules:

  1. SAP FI-GL
  2. SAP Purchasing
  3. SAP Inventory Management
  4. SAP Sales Orders
  5. SAP Delivery
  6. SAP Billing
  7. SAP Agency Business
  8. SAP POSDM

The major part of an SAP BW consultant's work in our project is to handle the extraction side of SAP, whether standard or custom DataSources.


Issue:

 

We came across a requirement where we needed the long texts stored, for multiple lines and reasons, related to sales orders.

 

The requirement stated that the Informatica system needs all active text lines for all sales orders. This text is not the description; it is the long text maintained via transactions VA02, VA22 and similar, which is updated in tables STXH, STXL and related tables.

 

Since the text is stored in compressed raw format, even if we create a generic extractor on these tables (i.e. views), we would not be able to extract the required texts from them.

 

SAP provides a standard function module for extracting such texts; the texts are identified by text ID, name and object type.


Solution applied:

 

Let us go through the example below to explain the approach I used for this extraction: a generic DataSource based on a function module.

 

Step 1: Create a structure, named e.g. ZTEST_TEXT, with the attributes and details below.

READ_TEXT_1.jpg

Step 2: Set the enhancement category (to avoid a warning) and activate the structure.

 

Step 3: Copy the function group RSAX (to make it editable) to, for example, ZRSAX_TEXTLG.


Step 4: Copy only the FM RSAX_BIW_GET_DATA_SIMPLE to, say, ZZZZRSAX_BIW_GET_DATA_SIMPLE.

READ_TEXT_2.jpg

Step 5: Maintain the FM ZZZZRSAX_BIW_GET_DATA_SIMPLE in its original language; since its base FM was created in German, it will also ask you to maintain it in German.

 

Step 5.5: Activate the include LZRSAX_TEXTLGTOP.

 

Step 6: Update the code as below; the custom additions are the ZTEST_TEXT structure, the selection on STXL and the READ_TEXT call.


FUNCTION ZZZZRSAX_BIW_GET_DATA_SIMPLE.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(I_REQUNR) TYPE  SRSC_S_IF_SIMPLE-REQUNR
*"     VALUE(I_DSOURCE) TYPE  SRSC_S_IF_SIMPLE-DSOURCE OPTIONAL
*"     VALUE(I_MAXSIZE) TYPE  SRSC_S_IF_SIMPLE-MAXSIZE OPTIONAL
*"     VALUE(I_INITFLAG) TYPE  SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
*"     VALUE(I_READ_ONLY) TYPE  SRSC_S_IF_SIMPLE-READONLY OPTIONAL
*"     VALUE(I_REMOTE_CALL) TYPE  SBIWA_FLAG DEFAULT SBIWA_C_FLAG_OFF
*"  TABLES
*"      I_T_SELECT TYPE  SRSC_S_IF_SIMPLE-T_SELECT OPTIONAL
*"      I_T_FIELDS TYPE  SRSC_S_IF_SIMPLE-T_FIELDS OPTIONAL
*"      E_T_DATA STRUCTURE  ZTEST_TEXT OPTIONAL
*"  EXCEPTIONS
*"      NO_MORE_DATA
*"      ERROR_PASSED_TO_MESS_HANDLER
*"----------------------------------------------------------------------

  TABLES: ZTEST_TEXT.

* Auxiliary selection criteria structure and work areas
  DATA: L_S_SELECT    TYPE SRSC_S_SELECT,
        I_T_DATA1     TYPE STANDARD TABLE OF ZTEST_TEXT,
        LT_TEXT_LINES TYPE STANDARD TABLE OF TLINE,
        LR_TEXT_LINES TYPE TLINE.

  FIELD-SYMBOLS: <FS_ZDS_TEXT_ORDER> TYPE ZTEST_TEXT.

* Parameter buffer, DataPackage counter and (template) cursor
  STATICS: S_S_IF              TYPE SRSC_S_IF_SIMPLE,
           S_COUNTER_DATAPAKID LIKE SY-TABIX,
           S_CURSOR            TYPE CURSOR.

* Select range for the sales document number
  RANGES: SDNO FOR ZTEST_TEXT-VBELN.

* Initialization mode (first call by SAPI) or data transfer mode
* (following calls)?
  IF I_INITFLAG = SBIWA_C_FLAG_ON.

************************************************************************
* Initialization: check input parameters
*                 buffer input parameters
*                 prepare data selection
************************************************************************

* Check DataSource validity
    CASE I_DSOURCE.
      WHEN 'ZDS_TEXT_ORDER'.
      WHEN OTHERS.
        IF 1 = 2. MESSAGE E009(R3). ENDIF.
* This is a typical log call; write every error message like this
        LOG_WRITE 'E'        "message type
                  'R3'       "message class
                  '009'      "message number
                  I_DSOURCE  "message variable 1
                  ' '.       "message variable 2
        RAISE ERROR_PASSED_TO_MESS_HANDLER.
    ENDCASE.

    APPEND LINES OF I_T_SELECT TO S_S_IF-T_SELECT.

* Fill parameter buffer for data extraction calls
    S_S_IF-REQUNR  = I_REQUNR.
    S_S_IF-DSOURCE = I_DSOURCE.
    S_S_IF-MAXSIZE = I_MAXSIZE.

* Fill field list table for an optimized select statement
    APPEND LINES OF I_T_FIELDS TO S_S_IF-T_FIELDS.

  ELSE.                 "Initialization mode or data extraction?

************************************************************************
* Data transfer: all data is delivered with the first package
************************************************************************

    IF S_COUNTER_DATAPAKID = 0.

* Fill range table. BW will only pass down simple selection criteria
* of the type SIGN = 'I' and OPTION = 'EQ' or OPTION = 'BT'.
      LOOP AT S_S_IF-T_SELECT INTO L_S_SELECT WHERE FIELDNM = 'VBELN'.
        MOVE-CORRESPONDING L_S_SELECT TO SDNO.
        APPEND SDNO.
      ENDLOOP.

* Read the text headers of the sales document header texts (VBBK);
* the fields map positionally into ZTEST_TEXT (TDNAME -> VBELN).
* Note: the range SDNO is filled above but not applied here; it could
* be added as "AND TDNAME IN SDNO" to honour InfoPackage selections.
      SELECT MANDT TDNAME TDSPRAS TDID
        FROM STXL
        INTO TABLE I_T_DATA1
        WHERE TDOBJECT = 'VBBK'.

* Decode the long text and keep the first text line per document
      LOOP AT I_T_DATA1 ASSIGNING <FS_ZDS_TEXT_ORDER>.

        REFRESH LT_TEXT_LINES.
        CALL FUNCTION 'READ_TEXT'
          EXPORTING
            CLIENT                  = <FS_ZDS_TEXT_ORDER>-MANDT
            ID                      = <FS_ZDS_TEXT_ORDER>-TDID
            LANGUAGE                = <FS_ZDS_TEXT_ORDER>-TDSPRAS
            NAME                    = <FS_ZDS_TEXT_ORDER>-VBELN
            OBJECT                  = 'VBBK'
          TABLES
            LINES                   = LT_TEXT_LINES
          EXCEPTIONS
            ID                      = 1
            LANGUAGE                = 2
            NAME                    = 3
            NOT_FOUND               = 4
            OBJECT                  = 5
            REFERENCE_CHECK         = 6
            WRONG_ACCESS_TO_ARCHIVE = 7
            OTHERS                  = 8.

        READ TABLE LT_TEXT_LINES INTO LR_TEXT_LINES INDEX 1.
        <FS_ZDS_TEXT_ORDER>-TXTLG = LR_TEXT_LINES-TDLINE(132).
      ENDLOOP.

      SORT I_T_DATA1.
      E_T_DATA[] = I_T_DATA1[].

    ELSE.
* All data was already delivered with the first package
      RAISE NO_MORE_DATA.
    ENDIF.                             "First data package?

    S_COUNTER_DATAPAKID = S_COUNTER_DATAPAKID + 1.

  ENDIF.              "Initialization mode or data extraction?

ENDFUNCTION.

 

Step 7: Create a DataSource based on the FM, with the name ZDS_TEXT_ORDER as below (the name is referenced in the code).

 

READ_TEXT_3.jpg

READ_TEXT_4.jpg

READ_TEXT_5.jpg

Step 8: Open RSA3 for the DataSource ZDS_TEXT_ORDER, enter a selection (or leave it blank), and press F8 (Execute extraction).

 

READ_TEXT_6.jpg

Below is the result:

 

READ_TEXT_7.jpg

Documents used as a reference:

 

HowTo extract Order long text

 

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a0f46157-e1c4-2910-27aa-e3f4a9c8df33?overridelayout=t…

How to improve performance of DTP 7.3


The data loading scenario you have described matches the behaviour of a DTP feature called ‘Semantic Grouping’.

When most DTPs execute, the volume of data being extracted is small enough for the average BW Developer/Support person to not notice (or care about) the impact of semantic grouping. The DTPs are transported into the BW Test and BW Production system with very little consideration of the semantic group configuration. In fairness, most of the time it does not matter that this configuration is ignored.

As the volume of data being extracted in a single request increases, so does the importance of the semantic grouping configuration. In your case, the effect is made even more noticeable due to the width of the records (300+ InfoObjects).

There are three (3) independent suggestions for you to consider implementing. Please ensure you test each of them prior to committing the change into any BW Production system.

1. Disable the DTP ‘Semantic Grouping’ feature.
2. Decrease the number of records per DataPacket.
3. Use write optimised DataStores.

Suggestion 1: Disable the DTP ‘Semantic Grouping’ feature.

The semantic group configuration is located on the ‘Extraction’ tab within a DTP. It is the ‘Semantic Groups’ button just below the ‘Filters’ button. If there is a green tick on the right side of the button then the semantic group feature is turned on.

To turn it off, click the ‘Semantic Groups’ button then ensure all fields are un-ticked, then click the green tick button. Remember to save, check and activate the DTP.


hmm.jpg

The semantic grouping feature of a DTP dramatically changes the way the database table is read when creating the extracted DataPackets (and IDocs) from the DataProvider. It also changes the efficiency of the parallel processing of DataPackets.

When semantic grouping is turned on (because key fields have been identified), the extractor will ensure that the records are delivered in groups according to the master data values of the semantic keys identified (not necessarily in sorted order, but definitely grouped).

For example: if the 0DOC_NUM (EBELN) field is ticked as a key field for semantic grouping on the 'Document Schedule Line (2LIS_02_SCL)' extractor, then the data is delivered in the DataPacket with all the records for the same document number together. The DataPacket continues to fill up with 'groups of records by document number' until it reaches its configured size for 'Number of records per DataPacket'.

It is this ‘grouping’ feature that is causing the long time delay when a DTP is first executed.

A full extract of 14 million records will first make a temporary copy of the data (all 300+ InfoObjects wide), then group the temporary dataset by the key fields identified in the DTP's semantic group configuration. The full DataSet must be prepared first to ensure the grouping is 100% correct; hence it will take quite a while before the very first DataPacket is delivered.

The performance slows down more and more based upon several different factors:
* The total number of records to be extracted for this single request;
* The width of the record being extracted;
* The number of key fields identified in the semantic group.
* The database resources for a full temporary copy of the DataSet to be extracted.
Note: At this point of the extraction, the size of the DataPacket is irrelevant.

When the semantic group feature is disabled, please ensure you diligently regression test the data loading process. Ensure you have considered the impact to:
* Extractor enhancements (BAPI and Customer Exit API);
* Error DTPs;
* Custom ABAP in start, field, end and expert routines;
* InfoSources with key fields leveraging run-time DataPacket compression;
* Process chain timing, specifically in regards to other dependencies.

Suggestion 2: Decrease the number of records per DataPacket.

This increases the number of DataPackets to be processed but also relieves the memory requirements per DataPacket. This can have a significant impact on 'how long it takes for a transformation to process each DataPacket', because the application server is no longer thrashing about doing virtual memory page swapping.

There is a balance point/sweet spot within each BW system where a running program demands more memory from the application server than has been physically allocated. When this in-memory point is reached, the SAP kernel and the operating system begin "paging" blocks of memory out to disk, effectively freezing access to those blocks by running programs. When a program tries to access a 'checked-out page of memory', it pauses while the SAP kernel/operating system retrieves it, and only then can it continue. This reduces the execution speed of the ABAP program to 'how fast the disk can be accessed'. Given that today's CPUs and memory (RAM) run considerably faster than the disk, this difference is very noticeable.

When an ABAP program (like a DTP for extraction, or a transformation for DataPacket processing) tries to handle a number of records that forces virtual memory to be used, the running program takes a lot longer to execute. The same number of records is extracted; it just takes longer.

By decreasing the number of records per DataPacket you reduce the demand on internal ABAP program memory, which in turn (hopefully) lowers the memory requirement below the threshold where a lot of virtual memory thrashing occurs. An extreme example (not recommended) would be 10 records per DataPacket: the demand on the internal ABAP memory would be very low, even for a 300+ InfoObject wide record.

Assuming you are using the recommended default DataPacket size of 50,000 records, try changing this to 15,000. I'm suggesting 15,000 based upon the fact that most ETL pathways contain approx. 30 to 80+ InfoObjects, and 50,000 records per DataPacket processes very well on most BW systems. Since you have a much wider record, we can offset the additional memory requirement by reducing the number of records: exchanging a wider record for a shorter DataPacket.

Suggestion 3: Use write optimised DataStores.

Consider converting any DataStore DataTargets into 'write-optimised' DataStores. This is a viable suggestion as you have clearly stated the DataSet is always a snapshot; hence there is not much chance of the different requests overlapping and becoming delta-optimised during the activation of a request (as a 'standard DataStore' would benefit from). Statistically, the number of records loading into the DataTarget will also be the number of records that flow further downstream to other DataTargets, so you may as well remove the effort involved in a standard DataStore activation that determines delta changes.

 

hmmmmmm.jpg

Closing Notes:

Please keep in mind that the three suggestions are all completely independent from each other. Implementing just one will make a difference.

Suggestion 1 will give you the best performance improvement, specifically when extracting the data from the DataProvider.

Suggestion 2 will mostly improve the performance of DataPacket processing through the Transformations (and InfoSources if you are using them).

Suggestion 3 will improve the committing of data into the DataStore, reducing the activation time.




Thanks,

Siva



Procedure to Load data using Setup Table


Procedure to load data using Setup Table

 

Summary

 

The objective of this document is to explain how to load data using setup tables when delta is still enabled. The information below is available on SCN in different forums; I have tried to collate it, and to add some more information from my personal experience, in one single document.

 

Introduction

 

Setup tables are tables directly linked to the application tables. SAP does not allow direct access to the application tables, so to extract data from them we have the setup table as an interface between the application tables and the extractor.

 

A load from a setup table is used to initialize delta loads; it is always a full load from the application tables, based on the selection provided while running the setup job. Once the load from the setup table is completed, we can load deltas through the delta queue as normal.

 

Business Scenario


There are various scenarios for loading historical data into a newly developed InfoProvider, or into an existing InfoProvider after a few enhancements. In these scenarios we may need setup tables to load the data; this is part of the LO extraction scenario.

 

Business users want a new report based on purchasing that cannot be achieved using the standard InfoCube. Hence we have developed a new InfoCube which needs data from 2LIS_02_SCL (which is already delta enabled).

 

In the above scenario, if the business doesn't require historical data, there is no challenge, since we can simply add the new InfoCube to the existing delta InfoPackages.

 

If the business requires historical data, we need to load it using the setup table and then add the same InfoCube to the existing delta loads.

 

Procedure to fill setup table and load data to BI

 

We need to keep in mind that the setup tables are not always filled. If we have a full/init load scenario, we need to fill the corresponding setup tables by scheduling setup runs (background jobs).

 

Before proceeding to fill the setup tables, we should make sure that they are empty.

 

Below is the procedure to delete setup tables. The procedure is the same for all components, and the deletion is component-dependent, not DataSource-specific. Hence, if our requirement is to load data from the HDR DataSource, we need to fill the whole component, which includes the HDR information.

 

The transaction code to delete setup tables is LBWG.

 

 

LBWG.png

 

On the above screen, specify the component (02, 11, etc.) and click the Execute button.

 

Once we execute, the below screen asks for confirmation. Click 'Yes' to proceed.

 

(We are not going to delete any data from application tables. We are deleting the setup table before filling it up, just to avoid duplicate data)

 

Prompt.png

 

Once the setup table is deleted, the below message appears at the bottom of the screen.

 

message.png

 

Now we are done with the deletion of the setup table and can proceed to fill it back.

 

The deletion activity is very simple, with one transaction for all components. To fill the setup tables, however, we have the following transaction codes for the individual components.

 

 


T-Code  - Application

OLI1BW  - Material Movements
OLIZBW  - Invoice Verification/Revaluation
OLI3BW  - Purchasing Documents
OLI4BW  - Shop Floor Information System
OLIFBW  - Repetitive Manufacturing
OLIQBW  - Quality Management
OLIIBW  - Plant Maintenance
OLISBW  - Service Management (Customer Service)
OLI7BW  - SD Sales Orders
OLI8BW  - Deliveries
OLI9BW  - SD Billing Documents
VIFBW   - LES Shipment Costs
VTBW    - LES Transport
ORISBW  - Retail
OLIABW  - Agency Business
OLI6BW  - Invoice Verification
OLI4KBW - Kanban

 

We could also achieve the above through the transaction code SBIW in the following path.

 

Settings for Application-Specific Data Sources --> Logistics --> Managing Extract Structures --> Initialization --> Filling in the Setup Table --> Choose the component to fill

Setup Table.png

 

Loading data to BI


Once the required historical data is available in the setup table, we can start loading it to BI. Before triggering the InfoPackage, we need to change its settings to load the data as a repair-full request.


Below are the steps to convert a full-load InfoPackage into a repair-full load.

 

Open the info-package

Menu --> Scheduler --> Click ‘Repair Full Request’ as below

 

Capture.PNG

 

The below screen appears, in which we need to check the option 'Indicate Request as Repair Request'.

 

Prompt 1.png

 

We change the full load to a repair-full load because we would not be able to run deltas after a plain full load; hence the load should be a repair-full load.

 

There is also an option to change full-load requests into repair-full requests after loading, using the program 'RSSM_SET_REPAIR_FULL_FLAG'.


Once the full loads are completed, we run the above program, which gives us the below screen.

 

Program.png

 

We need to fill in the above information and execute. All full requests in the InfoProvider will be converted into repair-full requests, after which we can do delta loading without any issues.

 

Points to remember

  • The setup table deletion and fill activities should preferably be done outside business hours.
  • Before starting the activities, it is suggested to lock the business users, so that no deltas are lost.
  • It is not mandatory to stop the V3 jobs, since the setup table concept runs on an entirely different path and will not disturb the delta. Still, we should run the V3 jobs manually until the delta load to BI picks up '0' records.
  • When running setup table fill jobs, it is always recommended to fill for selective regions or periods. We can never predict how much data is in the application tables; without selections, the job can take days to complete.

 

Naming Convention of setup tables

 

The setup table name is the extract structure name followed by SETUP.

 

It starts with 'MC', followed by the application component ('01', '02', etc.), then the last characters of the DataSource name, followed by SETUP. We can also derive it from the name of the communication structure in LBWE, followed by 'SETUP'.

 

Below is the list of setup table names for the application 02 – Purchasing.

 

Using these tables, we can verify that the data has been deleted or filled after the corresponding jobs; see the sketch after the table below.

 

 


Application - Datasource  - Setup Table

Purchasing  - 2LIS_02_ACC - MC02M_0ACCSETUP
Purchasing  - 2LIS_02_CGR - MC02M_0CGRSETUP
Purchasing  - 2LIS_02_HDR - MC02M_0HDRSETUP
Purchasing  - 2LIS_02_ITM - MC02M_0ITMSETUP
Purchasing  - 2LIS_02_SCL - MC02M_0SCLSETUP
Purchasing  - 2LIS_02_SCN - MC02M_0SCNSETUP
Purchasing  - 2LIS_02_SGR - MC02M_0SGRSETUP
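
A quick way to check whether a setup table is empty before filling it (or filled after the setup run) is to count its rows, either via SE16 or with a small sketch like the following, here using the 2LIS_02_SCL setup table from the list above:

* Count the records in the setup table for 2LIS_02_SCL
DATA: LV_COUNT TYPE I.

SELECT COUNT( * ) INTO LV_COUNT FROM MC02M_0SCLSETUP.
WRITE: / 'Records in MC02M_0SCLSETUP:', LV_COUNT.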

 


Important Transaction Codes

 

NPRT – Log for setup table jobs

LBWG – Delete setup tables by component

Implementation - BW on HANA Export/Import

$
0
0

This First Guidance document should help to quickly implement either a fresh SAP NetWeaver BW on SAP HANA installation or an export of an existing system from any DB. As the technical installation steps are the same, this guidance should make separately created customer-specific documentation unnecessary; it complements the existing end-to-end guide for migrating SAP NetWeaver BW to SAP HANA. This version adds a chapter on system export preparation and reflects the latest changes for BW 7.30/7.31, HANA 1.0 SP05 and the SL Toolset 1.0.

View this Document

SAP First Guidance - SAP-NLS Solution with Sybase IQ

$
0
0

This "SAP First Guidance" document should help to quickly implement the newly released option to store historical BW data on an external IQ server, either for system performance or in preparation for a migration to BW powered by SAP HANA. Please note that the SAP-NLS solution can be used with all database versions supported by SAP NetWeaver BW 7.3x; the existence of SAP HANA is not necessary. The document is work in progress and not intended to be exhaustive, but it contains everything needed to successfully implement the SAP-NLS solution with Sybase IQ. For more information please contact roland.kramer@sap.com.

View this Document

Triggering Process chain after completion of ECC Delta Extraction Job


Scenario:

Triggering a process chain after the source system (ECC) delta extraction job completes is very useful when we are using FI-CA or Logistics (MCEX) related extractions. These extractions require the delta queue to be updated before the BW delta job runs, and we cannot predict when the delta queue update job will finish.

In this regard, please follow the guide below for the detailed steps of triggering a process chain after the source system delta queue job completes.

Detailed Steps

STEP 1>

Create an event in BW using transaction SM64. (The start process of the BW process chain should then be scheduled to start after this event.)

Triggering - 1.jpg

STEP 2>

Create a remote-enabled function module (FM) in transaction SE37 in BW and provide the event name created in STEP 1.

I use method “RAISE” in class “CL_BATCH_EVENT”.

For example: a remote FM for Invoice.

Triggering - 2.jpg

ABAP Code for the Remote Function Module:

DATA: eventid TYPE tbtcjob-eventid.

eventid = 'ZFICA_INVOICE_PC_TRIGGER'.

* Raise the background event in BW; the process chain whose start
* process waits on this event is then triggered.
CALL METHOD cl_batch_event=>raise
  EXPORTING
    i_eventid                      = eventid
*   i_eventparm                    =
*   i_server                       =
*   i_ignore_incorrect_server      = 'X'
  EXCEPTIONS
    excpt_raise_failed             = 1
    excpt_server_accepts_no_events = 2
    excpt_raise_forbidden          = 3
    excpt_unknown_event            = 4
    excpt_no_authority             = 5
    OTHERS                         = 6.
IF sy-subrc <> 0.
* Implement suitable error handling here (the EXCEPTIONS block above
* must be active for this sy-subrc check to be meaningful)
ENDIF.
WRITE: sy-subrc.

STEP 3>

Create a program (SE38) in source system (ECC) to call the function module created in STEP 2.

Triggering - 3.jpg

ABAP Code for the program to trigger FM:

REPORT zcc_bi_trigger_process_chain.

PARAMETERS: fm(40)     TYPE c,
            target(10) TYPE c.

* Dynamic RFC call of the given remote FM in the target (BW) system
CALL FUNCTION fm DESTINATION target.

The above program is a generic program to call any remote FM in the target system (BW) from the source system (ECC). As it calls FMs in a remote system, we must specify the target system; hence the TARGET parameter in the program. Example values are sketched below.
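
For clarity, an equivalent hard-coded sketch with example values (both the FM name and the RFC destination are hypothetical and must be adapted to your landscape):

* Hypothetical example values for the generic trigger program
DATA: lv_fm     TYPE rs38l_fnam VALUE 'ZBW_FM_INVOICE',   "remote FM from STEP 2
      lv_target TYPE rfcdest    VALUE 'BWCLNT100'.        "RFC destination of BW

CALL FUNCTION lv_fm DESTINATION lv_target.
WRITE: / 'Remote call returned sy-subrc =', sy-subrc.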

STEP 4>

Create a variant (SE38) for the program created in STEP 3.

For Ex: Based on the example above, let’s take invoice extraction. So, using FM created for invoice in STEP 2, created a variant.

Triggering - 4.jpg

STEP 5>

After creating the event, the remote FM to trigger it and the program to call the FM, it is time to create a variant for the respective extraction program.

Find the respective program to schedule the ECC Delta Extraction job and create a variant as per your requirement.

For Ex:

Kindly go through my document 'Scheduling FI-CA related Delta Extraction' for the FI-CA related extraction details.

In the case of FI-CA invoice extraction, we use the program "RFKK_MA_SCHEDULER", and I created a variant named "BW_INV_EXTR".

Triggering - 5.jpg

Save the variant values and attributes.

STEP 6>

Now that every required object has been created, the main job that starts everything remains; this is the crucial part.

In this step, go to SM36, create a batch job as below.

  a. Create a job, named as per your project naming convention.

Triggering - 6.jpg

     b. Press Enter, then select "ABAP Program" and provide the name of your extraction program and the variant created in STEP 5.

For ex: “RFKK_MA_SCHEDULER”, for FI-CA related extraction and variant created earlier.

Use a user name such as "ALEREMOTE" or "BWREMOTE", or any other background user.

Triggering - 7.jpg

Then click SAVE. It looks as below.

Triggering - 8.jpg

STEP 7>

Now the step that triggers the remote FM (which in turn triggers the respective process chain) has to be added.

  a. Go to the STEP menu, then click on Create.

                    Triggering - 9.jpg
     b. After selecting "ABAP Program" on the next screen, provide the name of the program created to trigger the remote FM, and the variant for your specific FM trigger, as per STEPS 3 & 4.

                    Triggering - 10.jpg

After saving, it looks as below:

Triggering - 11.jpg

STEP 8>

While saving the job in SM36, the system asks you to schedule the job with a specific start date/time and period values.

Scheduling - 6.jpg

Based on the project requirement, this job can be scheduled with a specific start date/time and a frequency of, e.g., 2, 4 or 6 hours.

 

The detailed steps above make it easy to trigger a process chain after the completion of the ECC delta queue job.

 

Please provide your valuable feedback.

Scheduling FI-CA related Delta Extraction


This document guides you through the steps involved in background scheduling of all FI-CA related delta extractions.

Introduction to FI-CA flow:

There are three main areas in FI-CA: invoicing, posting and payment. Below is a small diagram illustrating the overall data flow of SD, FI-CA and FI-CO.

Scheduling - 1.jpg

Explanation of the above illustration:

After SD billing has taken place, the document created is the invoice. When the invoice is saved, a posting document is generated at the same time. The invoice relates to AR accounting, whereas the posting relates to GL accounting.

Afterwards, when the customer pays, a payment document is generated, either via a payment run or via the payment transaction (FPCJ, cash journal).

BW Part:

As highlighted in the red box in the above screen, I will go through the extractors used for FI-CA data extraction.

Below are the extractors for each area:

  1. Invoice – 0FC_INVDOC_00 (FICA Extraction of Invoicing Document Data)
  2. Posting - 0FC_BP_ITEMS (FI-CA Business Partner Items) or 0FC_CI_01 (FICA Cleared Items for Interval) or 0FC_OP_01 (FI-CA Open Items at Key Date)
  3. Payment - 0FC_PAY (Payments)

  Manual extraction of the FI-CA data into Delta Queue:

Below are the transaction codes to manually extract delta/init data into the delta queue.

Invoice –

  1. FKKINV_BW_MON (Analysis of BI Extraction Orders):: to monitor the records and simulate the invoicing extraction
  2. FKKINV_BW_MA (BI Extraction of Invoicing Documents):: to extract data into the delta queue

  Posting –

  1. FPBW (BW Extraction of Open Items):: to extract Open BP Line Items
  2. FPCIBW (BW Extraction of Cleared Items):: to extract Cleared BP Line Items
  3. FPOP (Update of BP Delta Queue):: to extract BP Line items either Open or Cleared

  Payment –

  1. FPCIBW (Update Delta Queue):: to extract payment delta records (check the Payments tick box)

Delta Extraction:

Unfortunately, none of the above transaction codes offers functionality to schedule a background (batch) job to extract the respective delta records.

1st Option::

In order to schedule the delta extractors, we have a common transaction code, FPSCHEDULER.

Before scheduling jobs in FPSCHEDULER, we have to do one manual run to obtain the Date ID and Identification details.

Scheduling - 2.jpg

STEP 1>

  1. Select the "Mass Activity Type" for each area, as below:

  BWOP – BP Line Items

2620 – Invoice

BWET – Payment

  2. Date ID & Identification – pick these up from old runs.

Scheduling - 3.jpg

STEP 2>

In the Program menu, choose 'Execute in Background'.

Scheduling - 4.jpg

Select Output Device as LP01 or LOCL.

Scheduling - 5.jpg

After selecting the output device, choose your scheduled start time and period values (frequency). The screen below is the general scheduling screen.

Scheduling - 6.jpg

STEP 3>

After saving the above schedule, you can check the batch jobs in SM37. Job names are created as below.

Scheduling - 7.jpg

2nd Option::

Another option to schedule the jobs.

Go to transaction SE38 (ABAP program), enter the program name "RFKK_MA_SCHEDULER", and create as many variants as needed for the respective areas.

Scheduling - 8.jpg

Variants:

Scheduling - 9.jpg

Variant Values provided as:

Scheduling - 10.jpg

Using the above two options, we can schedule batch jobs for FI-CA related extractors.




Please provide your valuable feedback on the above guide.
