
All about LO Extraction.... Part 6 - Implementation Methodology

LO Extraction - Part 6 Implementation Methodology

by: P Renjith Kumar

Introduction

First we will check the steps in the ECC system, followed by the steps in the BI system. Assume that you need to load cube 0SD_C05. For this you need two DataSources: 2LIS_11_VAHDR and 2LIS_11_VAITM. As the names indicate, they belong to application component 11 and to VA (Sales), and HDR and ITM represent header and item data respectively.

Link:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0508c20-84c1-2d10-2693-b27ca55cdc9f?quicklink=index&overridelayout=true

All about Inventory Management - Step by Step guide to implement Inventory Management

BI Inventory Management- Data Loading

by:  Gaurav Kanungo and Harishraju Govindaraju

This document aims to explain the concept of Inventory Management using non-cumulative key figures in a simple and straightforward manner. The reader of this document should have moderate knowledge of BW concepts to understand the concept of Inventory Management.


Links:

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/9054ac59-aac4-2d10-3d9b-df98c52b4f31?quicklink=index&overridelayout=true

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/906b837a-0d54-2c10-08b8-bde70337547e?overridelayout=true

All about BEX Query Performance....

Checklist for Query Performance

By: Neelam

1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.

2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)

3. Within structures, make sure the filter order exists with the highest level filter first.

4. Check code for all exit variables used in a report.

5. Move Time restrictions to a global filter whenever possible.

6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).

7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.

8. Move all global calculated and restricted key figures to local ones so as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired.

9. If Alternative UOM solution is used, turn off query cache.

10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries—for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.

11. Turn off formatting and results rows to minimize Frontend time whenever possible.

12. Check for nested hierarchies. Always a bad idea.

13. If “Display as hierarchy” is being used, look for other options to remove it to increase performance.

14. Use Constant Selection instead of SUMCT and SUMGT within formulas.

15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.

16. Check Sequential vs Parallel read on Multiproviders.

17. Turn off warning messages on queries.

18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).

19. Check to see where currency conversions are happening if they are used.

20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.

21. Avoid Cell Editor use if at all possible.

22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.

23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.

25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The “not assigned” nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.

All about.... SAP BW / BI Data Load Performance Analysis and Tuning

SAP BW Data Load Performance Analysis and Tuning

By: Ron Silberstein

Overview:

The staging process of any significant volume of data into SAP BW presents challenges to system resource utilization and timeliness of data. This session discusses the causes of data load performance issues, highlights the troubleshooting process, and offers tuning solutions to help maximize throughput. Many aspects of data load performance analysis and tuning are covered including extraction, packaging, transformation, parallel processing, as well as change run and aggregate rollup.





All about Inventory Management....- Marker Update, No Marker Update (BW-BCT-MM-IM)



by: Colum Cronin

Purpose

To explain the Marker Update & No Marker Update concepts within Inventory Management.

Overview

The use of Marker Update & No Marker Update is a cause of frequent confusion, which I hope to clear up with this page.

Marker Update

  • Updating the marker (i.e. Marker Update) is a special feature available only for noncumulative InfoCubes.
  • This is done to improve the performance of the query read, especially when reading the current stock.
  • It is used to reduce the time of fetching the non-cumulative key figures while reporting.
  • It helps to easily get the values of previous stock quantities while reporting.
  • The marker is a point in time which marks an opening stock balance.
  • Data up to the marker is compressed.

No Marker Update

  • The No Marker Update concept arises if the target InfoCube contains a non-cumulative key figure.
  • For example, take the Material Movements InfoCube 0IC_C03, where stock quantity is a non-cumulative key figure. The process of loading the data into the cube involves two steps:
  1. In the first step, load the records pertaining to the opening stock balance, i.e. the stock present at the time of implementation. For this load the marker must be updated (leave 'no marker update' unchecked) so that the value of the current stock quantity is stored in the marker. After that, when loading the historical movements (stock movements made prior to implementing the cube), check 'no marker update' so that the marker is not updated: these historical movements are already contained in the opening stock, i.e. we have already loaded the present stock and the aggregation of the previous/historical data accumulates to the present data.
  2. Every subsequent successful delta load should be compressed with a marker update (leave 'no marker update' unchecked) so that the changes in the stock quantity are reflected in the marker value. The marker is only updated for records that are compressed; it is not updated for uncompressed requests. Hence every delta request should be compressed.

Check or uncheck the Marker Option

  • To compress a request with the stock marker (marker update), leave the 'no marker update' option unchecked.
  • To compress a load without the stock marker, check the 'no marker update' option.

Relevant FAQs

  • The marker isn't relevant when no data is transferred (e.g. during a delta init without data transfer).
  • The marker update is just like a check point (it will give the snapshot of the stock on a particular date when it is updated).
  • The request in which the opening stock (Initialization) is loaded must always be compressed WITH a marker update.
  • The request in which historical material documents are contained must always be compressed WITHOUT a marker update. This is necessary because the historical material movements are already contained in the opening stock.
  • Successive delta uploads must always be compressed WITH marker updates.

Related Content

Related Notes

SAP Note 643687 Compressing non-cumulative InfoCubes
SAP Note 834829 Compression of BW InfoCubes without update of markers
SAP Note 745788 Non-cumulative management in BW: Verifying and correcting data
SAP Note 586163 Composite Note on SAP R/3 Inventory Management in SAP BW

Related Documentation


All about Inventory Management.... 10 important points to consider while working with inventory management

 by: Ranjit Rout

1.  There are 3 SAP delivered transactional data sources for stock management.
2LIS_03_BX: Always needed to carry out the initialization of the stocks.

2LIS_03_BF: Initialize this in the source system if you need historical data; otherwise an empty init can be done to load only future delta records. If the source system is new, then no initialization is needed.

2LIS_03_UM: Only needed if revaluations are carried out in the source. This DataSource is helpful if adjustments in material prices are made from time to time; otherwise it won't extract any data.

2.  Checklist for the source system, to be carried out before the initialization of the above 3 DataSources.

A.   Table TBE11: Maintain entry ‘NDI’ with text ‘New Dimension Integration’ and activate the flag (Note 315880)
B.   Table TPS01: The entry should be as below (Note 315880)
PROCS – 01010001
INTERFACE – SAMPLE_PROCESS_01010001
TEXT1 – NDI Exits Active
C.   Table TPS31: The entry should be as below (Note 315880)
PROCS – 01010001
APPLK – NDI
FUNCT – NDI_SET_EXISTS_ACTIVE
D.   Tcode – MCB_
In most cases you need to set the industry sector as ‘standard’. For more info please see Note 353042.
E.   Tcode – BF11
Set the indicator to active for the Business Warehouse application entry. This entry may need to be transported to the production system (Note 315880). A small read-only check of these table entries is sketched right after this list.
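The table entries above can be spot-checked with a short read-only ABAP report. This is only a sketch: the TBE11 field names (APPLK, AKTIV) are assumptions to verify in SE11, while the TPS01/TPS31 field names follow the entries listed above.

  REPORT z_check_ndi_settings.
  " Sketch only: spot-check the NDI entries from the checklist above (Note 315880).
  " The TBE11 field names (APPLK, AKTIV) are assumptions - verify them in SE11.
  DATA: lv_aktiv TYPE c LENGTH 1,
        lv_value TYPE c LENGTH 40.

  " A. TBE11 - the 'NDI' application entry must exist and be active
  SELECT SINGLE aktiv FROM tbe11 INTO lv_aktiv WHERE applk = 'NDI'.
  IF sy-subrc <> 0 OR lv_aktiv IS INITIAL.
    WRITE: / 'TBE11: NDI entry missing or not active'.
  ENDIF.

  " B. TPS01 - process 01010001 must point to SAMPLE_PROCESS_01010001
  SELECT SINGLE interface FROM tps01 INTO lv_value WHERE procs = '01010001'.
  IF sy-subrc <> 0 OR lv_value <> 'SAMPLE_PROCESS_01010001'.
    WRITE: / 'TPS01: entry for process 01010001 missing or incorrect'.
  ENDIF.

  " C. TPS31 - process 01010001 must call NDI_SET_EXISTS_ACTIVE
  SELECT SINGLE funct FROM tps31 INTO lv_value WHERE procs = '01010001'.
  IF sy-subrc <> 0 OR lv_value <> 'NDI_SET_EXISTS_ACTIVE'.
    WRITE: / 'TPS31: entry for process 01010001 missing or incorrect'.
  ENDIF.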

3.  After running the setup data, check the data in the fields BWVORG, BWAPPLNM, MENGE. If no data is available in these fields, then some settings mentioned in the above checklist are missing in R/3. Correct the issue and rerun the setup data.

4.  Data staging through a DSO is not allowed for the BX extractor. The data should be loaded directly from the extractor to the cube, and only once. Choose the extraction mode 'Initial Non-cumulative for Non-cumulative values' in the DTP.

5.  A DSO is possible for BF. If you are creating a standard DSO, choose the fields MJAHR, BWCOUNTER, MBLNR, ZEILE as key fields. Some of these fields won't be available in the standard DataSource, but the DataSource can be enhanced using the LO Cockpit (LBWE) to add them. In addition to these, other fields are possible depending upon the DSO structure.
Note  417703 gives more info on this.

6.  Point 5 is valid for UM as well. The key fields could be a combination of the fields MJAHR, MBLNR, ZEILE, BUKRS (Note 581778).

7.  Data load to the cube should follow the below process

A.      Load the BX data. Compress the request with the stock marker (uncheck the 'no marker update' option).
B.      Load the BF and UM init data without marker update, as all the historical data of BF and UM is already reflected in the marker from the BX load. Compress these loads without the stock marker (check the 'no marker update' option).
C.      The future delta loads from BF and UM should be compressed with the stock marker (uncheck the 'no marker update' option).

8.  If in the future the cube needs to be deleted due to some issue, then the load process should again be carried out as above (only the init of BF and UM should be loaded first and then the deltas should be processed).

9.  To check the data consistency of a non-cumulative cube, the standard program SAP_REFPOINT_COMPLETE can be used. To check the compression status of the cube, the table RSDCUBE can be referred to, as sketched below: before the compression of the BX request the 'REFUPDATE' field should be blank, and after the compression the value should become 'X'. Check Note 643687 for more info.
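A minimal ABAP sketch of this check follows; the cube name 'ZSTOCKCUBE' and the restriction to the active version (OBJVERS = 'A') are assumptions for illustration.

  REPORT z_check_marker_status.
  " Sketch: read the marker/reference-point status of a non-cumulative cube.
  DATA lv_refupdate TYPE c LENGTH 1.

  SELECT SINGLE refupdate FROM rsdcube INTO lv_refupdate
    WHERE infocube = 'ZSTOCKCUBE'   " placeholder cube name
      AND objvers  = 'A'.           " active version (assumed restriction)
  IF lv_refupdate = 'X'.
    WRITE: / 'Reference point updated - the BX request has been compressed with marker update'.
  ELSE.
    WRITE: / 'REFUPDATE still blank - compress the BX request first'.
  ENDIF.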

10. After the BX data load to the cube, the data won't be visible via LISTCUBE. Only after compression can the data be seen by running a query on the non-cumulative cube.

All about Process Chains... in SAP BW Step By Step


By: Anonymous
 
I want to continue my series for beginners new to SAP BI. In this blog I write down the necessary steps to create a process chain that loads data with an InfoPackage and with a DTP, and to activate and schedule this chain.

1.)    Call transaction RSPC

(Screenshot: RSPC)


 RSPC is the central transaction for all your process chain maintenance. Here you find on the left existing process chains sorted by “application components”.  The default mode is planning view. There are two other views available: Check view and protocol view.
2.)    Create a new process chain
To create a new process chain, press “Create” icon in planning view. In the following pop-Up window you have to enter a technical name and a description of your new process chain.

(Screenshot: name chain)


The technical name can be up to 20 characters long. Usually it starts with a Z or Y. See your project-internal naming conventions for it.
3.)    Define a start process
After entering a process chain name and description, a new window pops up. You are asked to define a start variant.
(Screenshot: Start variant)



That’s the first step in your process chain! Every process chain has one and only one start step. A new step of type “Start process” will be added. To be able to define a unique start process for your chain, you have to create a start variant. The same has to be done for each of the subsequent steps: first drag a process type onto the design window, then define a variant for this type, and you thereby create a process step. The formula is:
 Process Type + Process Variant = Process Step!
If you save your chain, the process chain name will be saved into table RSPCCHAIN. The process chain definition with its steps is stored in table RSPCPROCESSCHAIN as a modified version. So press the “Create” button, and a new pop-up appears:

(Screenshot: start variant name)


Here you define a technical name for the start variant and a description. In the next step you define when the process chain will start. You can choose between direct scheduling and “start using meta chain or API”. With direct scheduling you can define either to start immediately upon activating and scheduling, or at a defined point in time, as you know it from job scheduling in any SAP system. With “start using meta chain or API” you are able to start this chain as a subchain or from an external application via the function module “RSPC_API_CHAIN_START”. Press Enter, choose an existing transport request or create a new one, and you have successfully created the first step of your chain.
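The function module RSPC_API_CHAIN_START mentioned above can also be called from your own ABAP, for example to trigger the chain from an external application. A minimal sketch follows; the chain name is a placeholder and the parameter names are assumptions to verify in SE37.

  REPORT z_start_chain_demo.
  " Sketch: trigger a process chain from ABAP via the API function module.
  DATA lv_logid TYPE c LENGTH 25.

  CALL FUNCTION 'RSPC_API_CHAIN_START'
    EXPORTING
      i_chain = 'ZMY_CHAIN'     " technical name of the process chain (placeholder)
    IMPORTING
      e_logid = lv_logid.       " log ID of the triggered chain run (assumed parameter)

  WRITE: / 'Chain started with log ID', lv_logid.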
 4.)    Add a loading step
If you have defined the starting point for your chain, you can now add a loading step for loading master data or transaction data. For all of this data choose “Execute InfoPackage” from the available process types. See picture below:

(Screenshot: loading step)


You can easily move this step with drag & drop from the left side into your design window. A new pop-up window appears, where you can choose which InfoPackage you want to use. You can’t create a new one here. Press F4 help and a new window will pop up with all available InfoPackages sorted by use. At the top are the InfoPackages used in this process chain, followed by all other available InfoPackages not used in the process chain. Choose one and confirm. This step will now be added to your process chain. Your chain should now look like this:

(Screenshot: first steps)


How do you connect these two steps? One way is to right-click on the first step and choose Connect with -> Load Data and then the InfoPackage you want to be the successor.

(Screenshot: connect step)


Another possibility is to select the starting point and keep the left mouse button pressed. Then move the mouse down to your target step; an arrow should follow your movement. Release the mouse button and a new connection is created. From the Start process to every second step it’s a black line.
5.)    Add a DTP process
In BI 7.0 systems you can also add a DTP to your chain. From the process type window (see above) choose “Data Transfer Process”. Drag & drop it onto the design window. You will be asked for a variant for this step. Again, as with InfoPackages, press F4 help and choose from the list of available DTPs the one you want to execute. Confirm your choice and a new step for the DTP is added to your chain. Now you have to connect this step again with one of its possible predecessors. As described above, choose the context menu and Connect with -> Data Transfer Process. But now a new pop-up window appears.

(Screenshot: connection red green)
 
Here you can choose whether this successor step shall be executed only if the predecessor was successful, only if it ended with errors, or always, regardless of success. With this connection type you can control the behaviour of your chain in case of errors. Whether a step ends successfully or with errors is defined in the process step itself. To see the settings for each step you can go to Settings -> Maintain Process Types in the menu. In this window you see all defined (standard and custom) process types. Choose Data Transfer Process and display the details via the menu. In the new window you can see:

(Screenshot: dtp setting)


 A DTP step can have the possible events “Process ends successful” or “incorrect”, has the ID @VK@ (which actually denotes the icon), and appears under category 10, which is “Load process and post-processing”. Your process chain can now look like this:

(Screenshot: two steps)



You can now add all other steps necessary. By default the process chain itself suggests successors and predecessors for each step. For loading transaction data with an infopackage it usually adds steps for deleting and creating indexes on a cube. You can switch off this behaviour in the menu under “Settings -> Default Chains". In the pop-up choose “Do not suggest Process” and confirm.

(Screenshot: default chains)


Then you have to add all necessary steps yourself.
6.)    Check chain
Now you can check your chain via the menu “Goto -> Checking View” or by pressing the “Check” button. Your chain will be checked to ensure that all steps are connected and have at least one predecessor. Logical errors are not detected; that’s your responsibility. If the chain check returns warnings or is OK, you can activate it. If the check returns errors, you have to remove them first.
7.)    Activate chain
After successful checking you can activate your process chain. In this step the entries in table RSPCPROCESSCHAIN are converted into an active version. You can activate your chain via the menu “Process chain -> Activate” or by pressing the activation button in the toolbar. You will find your new chain under the application component "Not assigned". To assign it to another application component you have to change it: choose the "application component" button in change mode of the chain, save and reactivate it. Then refresh the application component hierarchy; your process chain will now appear under the new application component.
8.)    Schedule chain
After successful activation you can now schedule your chain. Press the “Schedule” button or use the menu “Execution -> Schedule”. The chain will be scheduled as a background job. You can see it in SM37: you will find a job named “BI_PROCESS_TRIGGER”. Unfortunately, every process chain is scheduled with a job of this name; in the job variant you will find which process chain will be executed. During execution, the steps defined in RSPCPROCESSCHAIN are executed one after another. The execution of the next step is triggered by events defined in the table. You can watch SM37 for newly executed jobs starting with “BI_” or look at the protocol view of the chain.
9.)    Check protocol for errors
You can check chain execution for errors in the protocol or process chain log. Choose in the menu “Go to -> Log View”. You will be asked for the time interval for which you want to check chain execution. Possible options are today, yesterday and today, one week ago, this month and last month or free date. For us option “today” is sufficient.
Here is an example of another chain that ended incorrectly:
(Screenshot: chain log)


On the left side you see when the chain was executed and how it ended. On the right side you see, for every step, whether it ended successfully or not. As you can see, the first two steps were successful and the step “Load Data” of an InfoPackage failed. You can now check the reason via the context menu entries “Display messages” or “Process monitor”. “Display messages” shows the job log of the background job and the messages created by the request monitor. With “Process monitor” you get to the request monitor and see detailed information on why the loading failed. The logs are stored in tables RSPCLOGCHAIN and RSPCPROCESSLOG. Examining the request monitor will be a topic of one of my upcoming blogs.
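Since the log tables are named above, here is a hedged sketch of reading them directly, for example to list the most recent runs of one chain; the field names CHAIN_ID and LOG_ID are assumptions to confirm in SE11.

  REPORT z_read_chain_logs.
  " Sketch: read recent run log IDs of one chain straight from RSPCLOGCHAIN.
  DATA: lt_logs TYPE STANDARD TABLE OF rspclogchain,
        ls_log  TYPE rspclogchain.

  SELECT * FROM rspclogchain INTO TABLE lt_logs
    UP TO 10 ROWS
    WHERE chain_id = 'ZMY_CHAIN'.   " placeholder chain name

  LOOP AT lt_logs INTO ls_log.
    WRITE: / ls_log-chain_id, ls_log-log_id.
  ENDLOOP.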


 10.) Comments
Here is just a little feature list with comments.
- You can search for chains, but it does not work properly (at least in BI 7.0 SP15).
- You can copy existing chains to new ones. That works really fine.
- You can create subchains and integrate them into so-called meta chains. But the application component menu does not reflect this structure. There is no function available to find all meta chains for a subchain or, vice versa, to list all subchains of a meta chain. This would be really nice to have for projects.
- Nice to have would also be the possibility to schedule chains with a user-defined job name and not always as "BI_PROCESS_TRIGGER".
But now it's your turn to create process chains.

All about Process Chains.... Tips

By: Chandiraban singu

Process chain:
A Process chain is a sequence of processes that wait in the background for an event. Some of these processes trigger a separate event that can start other processes in turn.
If you use Process chains, you can
  1. Automate the complex schedules in BW with the help of the event-controlled processing,
  2. Visualize the schedule by using network applications, and
  3. Centrally control and monitor the processes.
This article will provide you a few (seven) tips on the management of Process chains.

    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/00a1f389-ec7c-2c10-04bc-9d81b3084171?overridelayout=true

    All About.... Infocube Dimension design: A different perspective

    By: Manohar Delampadi         scn.sap.com

    Objective: The objective of this post is to simplify the understanding on dimension designs of an infocube and to decide upon the dimensions based on the repetition of the data held in the dimension tables.

    Pre-requisites: An infocube is already created and active, and filled with data, which will be used for the analysis of dimension tables.

    Dimension to Fact Ratio Computation: This ratio is a percentage figure of the number of records that exist in the dimension table to the number of records in the fact table, i.e. what percentage of the fact table size a dimension table is. Mathematically, the equation would be as below:

              Ratio = No of rows in Dimension table X 100 / No of rows in Fact Table

    Dimension Table Design Concept: We have been reading and hearing over and over again that characteristics should be added to the same dimension if there is a 1:1 or 1:M relation between them, and should be in separate dimensions if there is an M:M relation. What is this 1:1 or 1:M? It is the relation which the characteristics share with each other.
    For instance, if one Plant can have only one Storage Location and one storage location can belong to only one plant at any given point of time, then the relation shared between them is 1:1.
    If one Functional Location can have many pieces of equipment but one piece of equipment can belong to only one functional location, then the relation shared between the functional location and the equipment is 1:M.
    If one sales order can have many materials and one material can exist in different sales orders, then there is absolutely no dependence between these two and the relation between them is many to many, or M:M.

    Challenges in understanding the relationship: Often we SAP BI consultants depend on the Functional consultants to help us out with the relationship shared between these characteristics / fields. Due to time constraint we generally cannot dedicate time to educate the functional consultants on the purpose of this exercise, and it takes a lot of time to understand this relationship thoroughly.


    Scenario: An infocube ZPFANALYSIS had a few dimensions which were way larger than the preferred 20% ratio. It had to be redesigned so that the ratios came in under 20%.
    This ratio can either be derived manually, by comparing the number of entries in the desired dimension table (/BIC/D<infocube name><dimension number>) to the fact table (/BIC/F<Infocube Name> or /BIC/E<Infocube name>), or the program SAP_INFOCUBE_DESIGNS can be executed in SE38, which reports this ratio for all dimensions of all infocubes in the system.
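    The manual derivation described above boils down to two row counts. A minimal ABAP sketch, with placeholder table names built from the /BIC/D<cube><n> and /BIC/F<cube> naming pattern, is shown below.

      REPORT z_dim_fact_ratio.
      " Sketch: dimension-to-fact ratio for one dimension, following the formula above.
      DATA: lv_dimtab    TYPE tabname VALUE '/BIC/DZPFANLSYS2',
            lv_facttab   TYPE tabname VALUE '/BIC/FZPFANLSYS',
            lv_dim_rows  TYPE i,
            lv_fact_rows TYPE i,
            lv_ratio     TYPE p DECIMALS 2.

      SELECT COUNT(*) FROM (lv_dimtab)  INTO lv_dim_rows.
      SELECT COUNT(*) FROM (lv_facttab) INTO lv_fact_rows.

      IF lv_fact_rows > 0.
        lv_ratio = lv_dim_rows * 100 / lv_fact_rows.
        WRITE: / 'Dimension-to-fact ratio (%):', lv_ratio.
      ENDIF.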

    SAP_INFOCUBE_DESIGNS:
    We can see from the report that the total number of rows in the fact table is 643850. Dimension 2 (/BIC/DZPFANLSYS2) has around 640430 rows, which is 99% (99.49%) of the fact table rows, and Dimension 4 (/BIC/DZPFANLSYS4) has around 196250 rows, which is 30% (30.48%) of the fact table rows.

    Infocube ZPFANLSYS:

    Approach:

    Step 1: Analyse the dimension table /BIC/DZPFANLSYS2 to plan how to reduce the number of records.
    /BIC/DZPFANLSYS2

    Fact table:
    Dimension table holds 1 record more than the fact table.
    View the data in the table /BIC/DZPFANLSYS2 (Table related to Dimension 2) in SE12 and sort all the fields. This sorting will help us spot the rows which have repeated values for many columns, which will eventually lead to understanding the relationship between the characteristics (columns in dimension table).

    Identifying the relationships:
    Once the sorting is done we need to look for the number of values that repeat across the columns. All the records which repeat could have been displayed in a single row with one dimension ID assigned, if all the columns had the same data. The repetition is the result of one or more columns which contribute a unique value to each row. If such columns are removed from the table, the number of rows in the table will come down.

    In the screenshot below I’ve highlighted in green the rows that were repeating themselves with new dimension IDs, as only 2 columns, SID_ZABNUM and SID_0NPLDA, have new values for every row. These two columns having new values for every row have caused the rest of the columns to repeat themselves, in turn increasing the data size in the dimension table. Hence it can easily be said that these two columns do not belong in this dimension table, so the related characteristics (ZABNUM and 0NPLDA) need to be moved out of this dimension.
    A few rows could be found which repeat themselves for most of the columns but have a new value once in a while in some columns, as highlighted in yellow in the screenshot below. This indicates that these columns share a 1:M relation with the rest of the columns with repeated rows, and these could be left in the same dimension.
    Conclusion: The columns marked in green belong in this dimension table and the columns marked in red need to be in other dimension tables.
    Step 2: Create a copy infocube C_ZPFAN and create new dimensions to accommodate ZABNUM and 0NPLDA.
    ZABNUM was added to dimension C_ZPFAN8 and 0NPLDA was added to C_ZPFAN7. These were marked as line item dimensions as they have only one characteristic each.
    The issue with dimension 4 was analysed in a similar way, and other dimensions were changed to help the situation.

    Post changes, loaded the data into the copy infocube C_ZPFAN and found the number of records in the dimension table /BIC/DC_ZPFAN2 to be 40286.

    Ratio: 40286 / 657400 * 100 = 6.12 %


    SAP_INFOCUBE_DESIGNS:

    Dimension2 of the copy infocube: /BIC/DC_ZPFAN2
    Even now there are a few repeated rows and columns, but the ratio is within 20%. We can create up to 13 dimensions, but it is always better to keep a dimension or two free for future enhancements.

    Hope this was helpful.

    Recovering deleted web templates in development from quality – reverse transport

    By Manohar 

    Objective:
    It is human to err, but some errors unfortunately carry high price tags. This post explores the options for recovering a deleted web template in the SAP BW development system. I have not tried this for other objects, so I am unaware of the possibilities there. It is always a best practice to avoid any deletion of objects or data without proper confirmation.

    Requirement: Recovering a web template in development, which got deleted by mistake.

    Pre-requisite: This web template’s latest version has been transported to quality and other target systems previously.

    Procedure: Let us first create a scenario by deleting an existing web template:
    The Stock Overview report shown below has been deleted from the development system BAD.
    After choosing YES, the template ceases to exist in the development system, which is reconfirmed by the checks below:
    Running the URL:
    Landscape and direction of transport movement:
    Step 1: Login to Quality - BAQ
    Step 2: Identify the transport request containing the latest version of the required web template.
    In transaction SE03 choose “Search for Objects in Requests/Tasks” under the “Objects in Requests” folder.
    Enter the object type TMPL (for web templates) and the name of the web template that was deleted, and execute.
    The result will display all the requests that contain the web template; it is wiser to choose the latest one, as that holds the latest working version of the web template. Make a note of the transport request (a direct table lookup alternative is sketched below).
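    As an alternative to the SE03 search, the transport object lists can also be read directly from the standard table E071; a hedged sketch follows, with a placeholder template name.

      REPORT z_find_tmpl_requests.
      " Sketch: find transport requests that contain a given web template (object type TMPL).
      DATA: lt_trkorr TYPE STANDARD TABLE OF e071-trkorr,
            lv_trkorr TYPE e071-trkorr.

      SELECT trkorr FROM e071 INTO TABLE lt_trkorr
        WHERE object   = 'TMPL'
          AND obj_name = 'ZSTOCK_OVERVIEW'.   " placeholder web template name

      LOOP AT lt_trkorr INTO lv_trkorr.
        WRITE: / 'Contained in request:', lv_trkorr.
      ENDLOOP.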
    Step 3: Create a Transport Request in Quality of request type “Transport of copies” and target as DEVELOPMENT.
    In transaction SE10 create a transport request of type “Transport of copies” checked.
    Under create request choose the radio button “Transport of copies”.
    Make sure to choose the TARGET as the DEVELOPMENT system; use the possible-entries help here. If you choose only a workbench request, the possible-entries help will not display the development system in the list.
    Step 4: Insert the web template into the request from the transport request identified in step 2.
    Choose to “Include Objects” into the selected request.
    Select the radio button “Object List from Request” and specify the request number which holds the latest version of the required web template.
    The message bar should indicate that the objects have been inserted from one request into another; this can also be confirmed by checking the objects in the new request.
    Step 5: Release the request from Quality so that it could be imported in Development.
    Step 6: Login to development system and import the released request containing the web template from Quality:
    Go to transaction STMS_IMPORT in development.
    You should be able to see the request released from quality here, provided you did not forget to mention the target as development when creating the request in Step 3.

    Choose the required options and the import should start.
    Step 7: Moment of Truth.
    After a successful import of the transport, run the development-relevant URL and you should find the web template opening up.
    You should also be able to find the deleted web template when searching:

    Even though we know the cure, prevention is always better. Please be extremely cautious before deleting objects from any system.

    All about Data Transfer Process (DTP) - SAP BW 7

    Credits: Bhushan Raval 

    Data Transfer Process (DTP)


    A DTP determines the process for transferring data between two persistent/non-persistent objects within BI.
    As of SAP NetWeaver 7.0, an InfoPackage loads data from a source system only up to the PSA. It is the DTP that determines the further loading of data thereafter.



    Use
    • Loading data from PSA to InfoProvider(s).
    • Transfer of data from one InfoProvider to another within BI.
    • Data distribution to a target outside the BI system; e.g. Open HUBs, etc.

    In the process of transferring data within BI, the Transformations define the mapping and logic for updating data to the data targets, whereas the Extraction mode and Update mode are determined by the DTP.

    NOTE: A DTP is used to load data within the BI system only, except when used in Virtual InfoProvider scenarios, where a DTP can determine a direct data fetch from the source system at run time.


    Key Benefits of using a DTP over conventional IP loading
    1. A DTP follows a one-to-one mechanism between a source and a target, i.e. one DTP sources data to only one data target, whereas an IP loads data to all data targets at once. This is one of the major advantages over the InfoPackage method, as it helps in achieving a lot of the other benefits.
    2. Isolation of the data load from the source into the BI system (PSA) from the data load within the BI system. This helps in scheduling data loads to InfoProviders at any time after loading data from the source.
    3. A better error-handling mechanism with the use of the temporary storage area, semantic keys and the error stack.


    Extraction
    There are two types of Extraction modes for a DTP – Full and Delta.



    Full:


    Update mode Full is the same as in an InfoPackage.
    It selects all the data available in the source based on the filter conditions mentioned in the DTP.
    When the source of the data is any one of the below InfoProviders, only the FULL extraction mode is available.
    • InfoObjects
    • InfoSets
    • DataStore Objects for Direct Update

    Delta is not possible when the source is any one of the above.


    Delta:

                        
    Unlike an InfoPackage, a delta transfer using a DTP doesn’t require an explicit initialization. When a DTP is executed with extraction mode Delta for the first time, all existing requests up to that point are retrieved from the source and the delta is automatically initialized.

    The below 3 options are available for a DTP with Extraction Mode: Delta.
    • Only Get Delta Once.
    • Get All New Data Request By Request.
    • Retrieve Until No More New Data.


         I      Only get delta once:
    If this indicator is set, a snapshot scenario is built. The Data available in the Target is an exact replica of the Source Data.
    Scenario:
    Let us consider a scenario wherein data is transferred from a flat file to an InfoCube. The target needs to contain the data from the latest flat file load only. Each time a new request is loaded, the previous request needs to be deleted from the target. For every new data load, any previous request loaded with the same selection criteria is to be removed from the InfoCube automatically. This is necessary whenever the source delivers only the last status of the key figures, similar to a snapshot of the source data.
    Solution – Only Get Delta Once
    A DTP with a full load would satisfy the requirement. However, it is not recommended to use a full DTP, the reason being that a full DTP loads all the requests from the PSA regardless of whether they were loaded previously or not. So, in order to avoid duplication of data due to full loads, we would always have to schedule a PSA deletion before the full DTP is triggered again.

    ‘Only Get Delta Once’ does this job in a much more efficient way, as it loads only the latest request (delta) from the PSA to the data target.
    1. Delete the previous Request from the data target.
    2. Load data up to PSA using a Full InfoPackage.
    3. Execute DTP in Extraction Mode: Delta with ‘Only Get Delta Once’ checked.

    The above 3 steps can be incorporated in a Process Chain which avoids any manual intervention.


         II     Get all new data request by request:
    If you set this indicator in combination with ‘Retrieve Until No More New Data’, a DTP gets data from one request in the source. When it completes processing, the DTP checks whether the source contains any further new requests. If the source contains more requests, a new DTP request is automatically generated and processed.

    NOTE: If ‘Retrieve Until No More New Data’ is unchecked, the above option automatically changes to ‘Get One Request Only’, which in turn gets only one request from the source.
    Also, once the DTP is activated, the option ‘Retrieve Until No More New Data’ no longer appears in the DTP maintenance.



    Package Size

    The number of Data records contained in one individual Data package is determined here.
    Default value is 50,000.
      
     

    Filter

      
    The selection criteria for fetching data from the source are determined / restricted by the filter.

    We have the following options to restrict to a value / range of values:

       Multiple selections

       OLAP variable

       ABAP Routine

    A check mark to the right of the Filter button indicates that filter selections exist for the DTP.
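    For the ABAP Routine option listed above, the routine ultimately fills a ranges-style selection table. The sketch below shows only that core logic as a standalone demo; the generated routine frame (form name, parameter names, the exact range structure) differs per system, so the names used here are illustrative only.

      REPORT z_dtp_filter_logic_demo.
      " Sketch of the kind of logic placed inside a DTP filter ABAP routine.
      DATA: BEGIN OF ls_range,
              sign   TYPE c LENGTH 1,
              option TYPE c LENGTH 2,
              low    TYPE c LENGTH 10,
              high   TYPE c LENGTH 10,
            END OF ls_range,
            l_t_range LIKE STANDARD TABLE OF ls_range.

      " Example: restrict the calendar day to the current month.
      ls_range-sign   = 'I'.
      ls_range-option = 'BT'.
      CONCATENATE sy-datum(6) '01' INTO ls_range-low.   " first day of the current month
      ls_range-high   = sy-datum.                       " today
      APPEND ls_range TO l_t_range.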




    Semantic Groups

    Choose Semantic Groups to specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, define key fields. Data records that have the same key are combined in a single data package.
    This setting is only relevant for DataStore objects with data fields that are overwritten. This setting also defines the key fields for the error stack. By defining the key for the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected.

    A check mark to the right of the ‘Semantic Groups’ button indicates that semantic keys exist for the DTP.
      
      

    Update


    Error Handling


    • Deactivated:
    If an error occurs, the error is reported at the package level and not at the data record level.
    The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.
    This results in faster processing.

    • No Update, No Reporting:
    If errors occur, the system terminates the update of the entire data package. The request is not released for reporting. The incorrect record is highlighted so that the error can be assigned to the data record.
    The incorrect records are not written to the error stack since the request is terminated and has to be updated again in its entirety.

    • Valid Records Update, No Reporting (Request Red):
    This option allows you to update valid data. This data is only released for reporting after the administrator checks the incorrect records that are not updated and manually releases the request (by a QM action, that is, setting the overall status on the Status tab page in the monitor).
    The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.

    • Valid Records Update, Reporting Possible (Request Green):
    Valid records can be reported immediately. Automatic follow-up actions, such as adjusting the aggregates, are also carried out.
    The incorrect records are written to a separate error stack in which the records are edited and can be updated manually using an error DTP.



    Error DTP

    Erroneous records from a DTP load are written to a stack called the Error Stack.
    The Error Stack is a request-based table (a PSA table) into which erroneous data records from a data transfer process (DTP) are written. The error stack is based on the data source (PSA, DSO or InfoCube), that is, records from the source are written to the error stack.
    In order to upload the data to the data target, we need to correct the data records in the Error Stack and manually run the Error DTP.


    Execute



    Processing Mode

    Serial Extraction, Immediate Parallel Processing:
    A request is processed in a background process when a DTP is started in a process chain or manually.

      
     

    Serial in dialog process (for debugging):
    A request is processed in a dialog process when it is started in debug mode from DTP maintenance.
    This mode is ideal for simulating the DTP execution in Debugging mode. When this mode is selected, we have the option to activate or deactivate the session Break Points at various stages like – Extraction, Data Filtering, Error Handling, Transformation and Data Target updating.
    You cannot start requests for real-time data acquisition in debug mode.

    Debugging Tip:
    When you want to debug a DTP, you cannot set a session breakpoint in the editor where you write the ABAP code (e.g. the DTP filter). You need to set the session breakpoint(s) in the generated program, as shown below:



    No data transfer; delta status in source: fetched:
    This processing is available only when DTP is operated in Delta Mode. It is similar to Delta Initialization without data transfer as in an InfoPackage.
    In this mode, the DTP executes directly in Dialog. The request generated would mark the data found from the source as fetched, but does not actually load any data to the target.
    We can choose this mode even if the data has already been transferred previously using the DTP.
      
      

    Delta DTP on a DSO
    There are special data transfer options when the data is sourced from a DSO to another data target.


    • Active Table (with Archive)
           The data is read from the DSO active table and from the archived data.

    • Active Table (Without Archive)
           The data is only read from the active table of the DSO. If there is data in the archive or in near-line storage at the time of extraction, this data is not extracted.

    • Archive (Full Extraction Only)
           The data is only read from the archive data store. Data is not extracted from the active table.

    • Change Log
      The data is read from the change log and not the active table of the DSO.


    All About Process Chains....Use of RSWAITSEC Program to introduce delay in the Process Chains

    By Mohanavel at scn.sap.com

    Objective: Introducing a time delay in a process chain with the help of the standard SAP-provided program RSWAITSEC.


    Background: With the interrupt process type in a process chain we can give a fixed delay to the subsequent process types once the interrupt step is reached. But when we use the standard interrupt process type we have to specify a date and time or an event name.
    In many cases the interrupt step might not help: if an interrupt step is introduced to delay the subsequent processes by a definite period of time, and all the steps above the interrupt complete early, then instead of passing the trigger to the subsequent step after the desired wait time, the interrupt will force the chain to wait until the conditions in the interrupt are satisfied.
    In order to achieve a delay in the trigger flow from one process type to another in a process chain, without a condition on a fixed point in time or an event being raised, we can use the RSWAITSEC program.
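    I have not reproduced the source of RSWAITSEC here; the sketch below only illustrates the idea behind such a delay step (a report with a seconds parameter that simply waits), which is what the program variant in the ABAP process step parameterizes.

      REPORT z_wait_demo.
      " Illustrative only - RSWAITSEC itself is the SAP-delivered program to use in the chain.
      PARAMETERS p_secs TYPE i DEFAULT 900.   " 900 seconds = 15 minutes

      WAIT UP TO p_secs SECONDS.

      WRITE: / 'Waited for', p_secs, 'seconds - the next chain step can start now.'.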


    Scenario: In our project one of the master data chains is scheduled at 23:00 IST. This load supplies data to a report based on 0CALWEEK. The data load and an ABAP program in the process chain make use of SY-DATUM, so if a load that starts on Sunday at 23:00 does not complete by 23:59:59 (a one-hour window), the entire data gets wrongly mapped to the next week. This causes a discrepancy in the data.
    So it was required to schedule the chain at 23:00 IST every day except Sunday, and at 22:45 IST (15 minutes earlier) on Sundays.


    Different Ways to Achieve the above Situation:
    1.  Creating two different process chains and scheduling the 1st process chain at 23:00 IST for Monday to Saturday (using a factory calendar), and scheduling the 2nd at 22:45 IST only for Sunday.
    Disadvantage of the 1st method:
    Two chains are created unnecessarily for the same loads, which leads to multiple chains in the system.


    2.  Scheduling the same chain at 22:45 IST and adding a decision step plus an interrupt of 15 minutes for Monday to Saturday, so that on Sunday the load starts at 22:45.


    Process Chain with Interrupt Process Type:

    Disadvantage of the 2nd method:
    If you want to execute this chain immediately at some other time, the interrupt step will still wait until 23:00 IST to start the load for Monday to Saturday.


    A Better Way of Achieving This with the RSWAITSEC Program:


    Schedule the chain at 22:45 and add a decision step to determine whether it is Sunday or another day. If it is Sunday, the next step is directly the local chain; if the day is between Monday and Saturday, the next step is an ABAP step with the program RSWAITSEC (an SAP standard program). In the program variant we have to specify the desired time delay in seconds (900 seconds).

         Compared to the above two methods, this is the better way to achieve the desired output. Even if I run the process chain with the start process set to immediate on a day other than Sunday, the local chain will not wait until 23:00 IST; it will wait for 15 minutes and then get triggered.




           As this is an SAP-delivered program, there is no need to move any transport for it; it can be used directly even in production.



    Process chain with RSWAITSEC:


    ABAP process type with the RSWAITSEC program (as shown in the above PC):




    Setting the Variant value (required time):


    In the variant value we need to specify the desired delay in seconds. My requirement is a 15-minute delay, so I have given 900 seconds in the variant value.




    So we can use this program at any stage of a process chain to introduce a fixed period of delay.


    Hope this will be helpful.



    Improve performance - by designing InfoCube dimensions correctly in #SAP #BW

    By Martin Grob


    Introduction


    In reality, dimensions in an InfoCube are often designed along business terms (like material, customer etc.). This often leads to the impression that InfoCube dimensions should be designed based on business constraints. This, however, should not be the leading criterion and shouldn't drive the decision.
    Aside from the data volume, which depends on the granularity of the data in the InfoCube, performance very much depends on how the InfoObjects are arranged in the dimensions. Although this has no impact on the size of the fact table, it certainly has one on the size of the dimensions.


    How is a dimension then designed?

    The main goal when distributing the InfoObjects across their dimensions must be to keep the dimensions as small as possible. The decision on how many dimensions to use and which InfoObjects go where is purely technically driven. In some cases this matches the organisational view, but that would only be a coincidence and not the goal.

    There are a few guidelines that should be considered when assigning InfoObjects to dimensions:
    • Use as many dimensions as necessary, but it's more important to minimize dimension size than the number of dimensions.
    • Within a dimension, only characteristics that have a 1:n relation should be added (e.g. material and product hierarchy).
    • Within a dimension there shouldn't be n:m relations (e.g. product hierarchy and customer).
    • Document-level InfoObjects or big characteristics should be designed as line item dimensions. Line item dimensions are not true dimensions; they have a direct link between the fact table and the SID table.
    • The most selective characteristics should be at the top of the dimension table.
    • Don't mix in characteristics with values that change frequently, causing large dimension tables (e.g. material and promotions).
    • Also consider combining unrelated characteristics; it can improve performance by reducing the number of table joins (you only have 13 dimensions, so combine the small ones).

    As a help, the report SAP_INFOCUBE_DESIGNS (SE38) can be used.
    The dimension marked in yellow should be converted into a line item dimension if it contains a document-level characteristic; otherwise it is simply bad design.

    The maximum number of entries a dimension can potentially have is the Cartesian product of all its SIDs (e.g. 10,000 customers and 1,000 product hierarchies lead to 10,000,000 possible combinations in the dimension table). It is unlikely that this will actually happen, and while designing the dimension this should also be considered; in this case, analyse how realistic it is that every customer buys every product.
    In cases where there is an m:n relationship, it usually means there is a missing entity between the two characteristics, and they should therefore be stored in different dimensions.
    Once data is loaded into the InfoCube, the actual number of records in each dimension table should be checked against the number of records in the fact table. As a rule of thumb, the ratio should be between 1:10 and 1:20.
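    A quick way to perform that check is to count the rows of the two tables directly. The sketch below assumes a cube named ZSALES and its second dimension; the table names are illustrative, not taken from this article:

    DATA: lv_dimtab  TYPE tabname VALUE '/BIC/DZSALES2',   " assumed dimension table
          lv_facttab TYPE tabname VALUE '/BIC/FZSALES',    " assumed F fact table
          lv_dim     TYPE i,
          lv_fact    TYPE i,
          lv_ratio   TYPE p LENGTH 8 DECIMALS 2.

    SELECT COUNT( * ) FROM (lv_dimtab)  INTO lv_dim.
    SELECT COUNT( * ) FROM (lv_facttab) INTO lv_fact.

    IF lv_fact > 0.
      lv_ratio = lv_dim * 100 / lv_fact.
      WRITE: / 'Dimension-to-fact ratio (%):', lv_ratio.
    ENDIF.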


    Degenerated Dimensions

    If a large dimension table reaches almost the size of the fact table, measured by the number of rows, it is a degenerated dimension. The OLAP processor then has to join two big tables, which is bad for query performance. Such a dimension can be marked as a line-item dimension, causing the database not to create an actual dimension table. Checking the fact table /BIC/F<INFOCUBE> will then show that, instead of the DIMID dimension key, the SID of the degenerated dimension is placed in the fact table (field name RSSID). With this, one join of two tables is eliminated. Such a dimension can only hold one InfoObject, as a 1:1 relationship must exist between the SID value and the DIMID.
    Dimensions with a lot of unique values can be set to High Cardinality, which changes the method of indexing the dimension (Oracle databases only). This results in a switch from a bitmap index to a B-tree index.
    Defining a dimension as Line Item Dimension / High Cardinality:


    Conclusion

    Finding the optimal model and balancing the size and the number of dimensions is a delicate exercise.
    Dimensions in a MultiProvider do not have to follow the underlying InfoCubes' definitions. They can be focused on the end users' needs and be structured according to the organizational meaning. This does not affect performance, as the MultiProvider does not have a physically existing data model on the database.
    Designing the dimensions of an InfoCube correctly can bring a significant improvement in performance!


    All About LO....SAP BW - LO EXTRACTION MADE SIMPLE

    SAP BW - LO EXTRACTION MADE SIMPLE:


    PART 1:

    EXTRACTION:

    CONTENTS:
    1) INTRODUCTION
    2) WHY GO FOR EXTRACTION?
    3) Dimensions of extraction process?
    4) FREQUENTLY USED IMPORTANT TERMS IN LO:
    5) EXTRACTION TYPES
    6) LO Cockpit
    7) TRANSACTION CODES IN USE FOR LO EXTRACTION
    8) LO- EXTRACTION APPLICATIONS
    9) DELTA EXTRACTION-ABR MECHANISM
    10) UPDATES & UPDATE MODES AVAILABLE IN LO- EXTRACTION
    A. DELTA UPDATES:
    i. V1
    ii. V2
    iii. V3
    B. DELTA UPDATE MODES:
    i. DIRECT DELTA
    ii. QUEUED DELTA
    iii. UNSERIALIZED V3 UPDATE
    11) FEATURES & DIFFERENCES BETWEEN V1 V2 & V3 UPDATES
    12) LO-EXTRACTION (logistics data)
    13) NAMING CONVENTIONS
    a. NAMING CONVENTION FOR TRANSACTION DATA SOURCE
    b. NAMING CONVENTION FOR EXTRACT STRUCTURE
    c. NAMING CONVENTION FOR SETUP TABLES
    d. TABLE REPRESENTATION:
    NAMING CONVENTION FOR TRANSACTION DATA SOURCE, EXTRACT STRUCTURE & SETUP TABLES

    Extraction:
    Extraction? What is extraction?
    Extraction itself means extracting data.
    From where?
    For what?
    Why?

    Just think! It is not arduous, but it is not as easy as it sounds either.
    I think you are familiar with modeling, which we can call an art of designing. So what have you designed?
    Answer: models of InfoCubes and DataStore Objects (DSO, formerly ODS), which are well known as DataTargets.

    With what? InfoObjects, InfoAreas, IDoc and PSA transfer methods; you also know update rules and transfer rules. Good. After gradually building a data model, i.e. an InfoCube (an Extended Star Schema), the next step is: how do you fill the InfoCube?

    Now you can get the answer: by extraction, the extractor, the extractor mechanism.
    Am I right?
    Yes! It is simply filling the InfoCube!
    How do you fill it?
    There is much more to learn about SAP interfaces!

    SAP has an amazing feature: it supports external data from third-party ETL tools, e.g. Ascential Software, ETI and Informatica, through the Staging BAPI, to load data and metadata from many different databases and file formats. SAP entered into a strategic partnership with Ascential Software and bundles their DataStage ETL software package with SAP BW.
    Extraction programs read the data from extract structures and send it to the data staging mechanisms in the required format.

    Logistics is an important aspect of an enterprise and forms a core part of the ERP application. It covers the applications listed in the LO extraction applications table later in this document, such as Purchasing, Inventory Controlling, Shop Floor Controlling, Quality Management, Invoice Verification, Shipment, SD Sales BW, LE Shipping BW, SD Billing BW, Plant Maintenance BW, Customer Service BW, etc.



    Day to day, thousands of transactions occur and build up the database volume. An important point to notice here is that Microsoft Excel is chosen by small organizations with a low volume of transaction data.
    But when a large number of transaction records is updated daily, ERP software must be chosen. Thus we choose an SAP implementation and, as we know, it has its own pre-delivered Business Content for all applications; according to the requirement we must install and adapt the Business Content.

    After data modeling we move on to extraction and then reporting. So we must opt for a dedicated mechanism in extraction, the so-called extraction mechanism, i.e. an alternative method that reduces performance-related issues and minimizes the impact on the OLTP (transaction) system. Thus we choose LO Extraction as an alternative that meets real-time analytic requirements.

    The main conclusion here is that we need exact data for reporting, i.e. we eliminate unwanted or repeated data from millions of records. In other words: the right data for the right reporting.

    For example, for more clarity: we will not recruit the same person many times in one interview process. Here we are talking about data: the person is the data, the interview is the InfoCube, the interviewer is the BW Delta Queue, and the organization is BI or BW. The extracted data is then analysed in reporting, with drill-down, drill-across and so on.

    SAP BW can extract and load data from a variety of source systems including Non-SAP source systems:
    Data from non-SAP data sources can be loaded into SAP BW in one of three ways:
    1. Flat file source systems (loaded through flat files)
    2. External source systems—connect to external systems with third-party custom extraction tools (using staging BAPIs).
    3. External data providers such as Nielsen

    SAP source systems:
    1. SAP R/3 source systems (after release 3.1H)
    2. SAP BW systems (using the Myself source system connection)
    3. Other SAP products


    Methods of Data Load into SAP BW:
    There are four methods to update the DataTargets (ODS or InfoCube) in SAP BW:
    1. Update PSA, then data targets. This is the standard data transfer method. Data is first updated in the PSA and can subsequently be updated into the data targets.

    2. Update PSA and data targets in parallel. This is a way of carrying out a high-performance update of data in the PSA and one or more InfoCubes.

    3. Update PSA only. You can store data in the PSA and bypass InfoCube or ODS objects. For example, if a high volume of data needs to be extracted into SAP BW, the PSA can be loaded several times a day (e.g., every 15 minutes) and the InfoCube and ODS can be loaded subsequently once a day (outside of business hours when there is limited on-line activity and load on the BW server). This is one method to improve overall performance and manage high volumes of data.

    4. Update data targets only. You can load the InfoCubes and bypass the PSA. This is one way to reduce the overhead associated with the PSA. The drawback of this method is that you lose the possibilities the PSA offers, such as checking the loaded records and reloading them into the data targets without extracting them from the source system again.

    The preferred method for loading data into SAP BW is to use the PSA if possible.





    2) What are the dimensions of the extraction process?
    Four dimensions are generally used to describe the different methods and properties of extraction processes:
    Every extraction process can be viewed along these four dimensions.
    1. Extraction mode.
    2. Extraction scenario.
    3. Data latency.
    4. Extraction scope.
    For further explanation see the SAP online documentation or Wiley's Mastering the SAP Business Information Warehouse.



    3) FREQUENTLY USED IMPORTANT TERMS IN LO:

    ABR MECHANISM : ABR Delta Mechanism is a PUSH DELTA, pushes the data from APPLICATION to the QUEUED DELTA by means of V1 or V2 Update.
    COMMUNICATION STRUCTURE : In the Communication Structure, data from an InfoSource is staged in the SAP (BW) System. The Communication Structure displays the structure of the InfoSource. It contains all (logically grouped) of the InfoObjects belonging to the InfoSource of the SAP (BW) System. Data is updated in the InfoCubes from the Communication Structure. InfoCubes and Characteristics interact with InfoSources to get Source system data.
    DATA SOURCE : The table ROOSOURCE holds all details about a DataSource. You can enter your DataSource name and get all relevant details about it (see the sketch after this list).
    EXTRACTOR : Extractor enables the upload of business data from source systems into the data warehouse
    EXTRACT RULES : Extract rules define how the data will be moved from extract structure to transfer structure.
    EXTRACT STRUCTURE : The Extract Structure is a record layout of InfoObjects. In the Extract Structure, data from a DataSource is staged in the SAP (R/3) source system. The Extract Structure contains the set of fields that are offered by an Extractor in the SAP (R/3) source system for the data loading process. For flat files there is no Extract Structure and no Transfer Structure.
    EXTRACTION : Extraction is an alternate method for extracting data from the direct database (which is the result of ‘n’ number of transaction data updated daily through OLTP, resulting huge volume of data) mainly to meet OLAP requirements.
    SETUP TABLES : LO uses the concept of setup tables, into which the application tables' data is collected. Setup tables are used as interim data storage and serve to initialize delta loads and to run full loads.
    TRANSFER RULES : Transfer rules transform data from several transfer structures from source system into a single communication structure or InfoSource.
    Data is transferred 1:1 from the Transfer Structure of the SAP (R/3) Source System into the Transfer Structure of the SAP (BW) System, and is then transferred into the SAP (BW) System Communication Structure using the TransferRules. In the transfer rules maintenance, you determine whether the communication structure is filled with fixed values from the transfer structure fields, or using a local conversion routine.
    TRANSFER STRUCTURE : The Transfer Structure maps DataSource fields to InfoSource InfoObjects.
    The Transfer Structure is the structure in which the data is transported from the SAP (R/3) source system into the SAP (BW) system. In the Transfer Structure maintenance, you determine which Extract Structure fields are to be transferred to the SAP (BW) system. When you activate the DataSource of the SAP (R/3) source system from the SAP (BW) system, an identical Transfer Structure to the one in the SAP (R/3) source system is created in the SAP (BW) system.
    UPDATE MODES : DELTA DIRECT DELTA, QUEUED DELTA, UNSERIALIZED V3 UPDATE, SERIALIZED UPDATE
    UPDATE RULES : Update rules transform data from the communication structure of an InfoSource into one or more data targets using characteristics, key figures and time characteristics.
    The update rules specify how the InfoObjects (key figures, time characteristics and characteristics) are updated in the DataTargets from the Communication Structure of an InfoSource. You are therefore connecting an InfoSource with an InfoCube or ODS object. The update rules assign InfoObjects in the InfoSource to InfoObjects in the data targets. Update rules are independent of the source system or DataSource and are specific to the data targets.
    Example: you can use update rules to globally change data independently of the source system.
    UPDATES : V1- Synchronous, V2-Asynchronous, V3- Batch asynchronous updates
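    As referenced in the DATA SOURCE entry above, the delta properties of a DataSource can be read straight from ROOSOURCE on the source system. This is only a small sketch; the DataSource 2LIS_11_VAHDR is used as an example and only a few of the table's fields are shown:

    DATA ls_roosource TYPE roosource.

    " Read the active version of the DataSource definition
    SELECT SINGLE * FROM roosource
      INTO ls_roosource
      WHERE oltpsource = '2LIS_11_VAHDR'
        AND objvers    = 'A'.

    IF sy-subrc = 0.
      WRITE: / 'DataSource:  ', ls_roosource-oltpsource,
             / 'Delta method:', ls_roosource-delta,
             / 'Extractor:   ', ls_roosource-extractor.
    ENDIF.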

    4) EXTRACTION TYPES:


    SAP BW Extractors:
    SAP BW extractors are the programs in SAP R/3 that enable the extraction and load of data from SAP R/3 to SAP BW. SAP provides extractors that are programmed to extract data from most applications of SAP R/3 and delivered as part of the business content. Extractors exist in release 3.1H and up.

    There are three types of extractors in SAP BW:
    1. SAP BW content extractors. For extraction of application-specific data from SAP R/3 (e.g., FI, CO, HR, LO Cockpit).
    2. Customer-generated, application-specific extractors. For extraction of data from customer-defined structures in SAP R/3 (e.g., LIS, FI-SL, CO-PA).
    3. Generic extractors, cross-application. For extraction of data from SAP R/3 across applications, based on database tables/views and SAP Query.






    5) LO COCKPIT-WHAT IS IT, WHAT IT DO & WHAT IT CONSIST OF?

    The LO Cockpit is a tool to manage the extraction of logistics data from SAP R/3, and it consists of:
    • New standard extract structures
    • New DataSources
    The functionality of the LO cockpit includes
    • Maintaining extract structure.
    • Maintaining DataSources.
    • Activating the update.
    • Controlling the V3 update.


    6) TRANSACTION CODES IN USE FOR LO EXTRACTION:
    TCODE DESCRIPTION
    LBWE : LO DATA EXTRACTION: CUSTOMIZING COCKPIT
    LBWF : BW LOG
    LBWG : DELETION OF SETUP DATA – MANUALLY
    NPRT : LOG FOR SETUP OF STATISTICAL DATA
    OLI*BW : INITIALIZATION OR SETUP (STATISTICAL)
    LBWQ : EXTRACTION QUEUE (QUEUED DELTA RECORDS, MCEX* QUEUES)
    OLI8BW : DELIVERY
    OLI9BW : BILLING
    RSA2 : DATA SOURCE METADATA IN SAP ECC 6.0 (DATA SOURCE REPOSITORY)
    RSA3 : TO CHECK ANY DATA THAT IS AVAILABLE RELATED TO YOUR DATA SOURCE
    RSA5 : ACTIVATION OF BUSINESS CONTENT DATA SOURCE
    RSA6 : TO SEE THE ACTIVATED DATASOURCES
    RSA7 : DELTA QUEUE (DELTA IMAGE & REPEAT DELTA)
    SBIW : IMG FOR SAP BI EXTRACTION
    SE11 : TO SEE APPLICATION TABLES STORED
    SE37 : CHECKING THE EXTRACTOR
    SM13 : TO CHECK THE VARIOUS UPDATES THAT GETS PROCESSED DURING THE PROCESS OF UPDATING THE SALES DOCUMENT CHANGE
    SM37 : TO KNOW THE JOB STATUS
    SMQ1 : CLEAR EXTRACTOR QUEUES
    SMQ2 : QRFC MONITOR (INBOUND QUEUES)
    PROCESS KEYS
    PROGRAM RMCVNEUA : SETUP THE INFORMATION FOR SALES DOCUMENTS
    TMCLVBT : IT MAINTAINS THE PROCESS KEYS & THEIR DESCRIPTION
    TMCLVBW : IT MAINTAINS THE PROCESS KEYS FOR EACH APPLICATION



    7) LO- EXTRACTION APPLICATIONS:

    As we know, SAP BW has its own Business Content, i.e. SAP pre-delivered Business Content. This is the touchstone of SAP BW and makes it a pioneer in the ERP market. LO makes it a one-stop shop: the transaction DataSources are available in the LO Cockpit. We can see these DataSources by entering transaction code LBWE.

    LO EXTRACTION APPLICATIONS
    APPLICATION NUMBER : APPLICATION NAME
    02 : Purchasing
    03 : Inventory Controlling
    04 : Shop Floor Controlling
    05 : Quality Management
    06 : Invoice Verification
    08 : Shipment
    11 : SD Sales BW
    12 : LE Shipping BW
    13 : SD Billing BW
    17 : Plant Maintenance BW
    18 : Customer Service BW

    8) DATA FLOW IN BW & BI:



    Here we can see that the BI system extracts the data as a one-time activity for the init data load. After a successful data load, the setup tables (restructuring tables) should be deleted to avoid redundant storage.
    Important points here to know in SAP BW is
    i. Installing Business Content (which contains the SAP-delivered standard DataSources, comprising data structures and extraction programs). SAP-delivered Business Content is shipped in the D (delivered) version, so we must activate it into the A (active) version before use.
    ii. Customizing
    iii. Deploying

    LO uses the concept of setup tables to carry out the initial data extraction. The data extractors for HR and FI extract data directly by accessing the application tables; LO DataSources, in contrast, do not read the application tables directly. Setup tables, also known as restructuring tables, are cluster tables that hold the respective application data. When we run an init or full load in BW, the data is read from the setup tables. Afterwards, delta records are written to the delta queue by the V3 job runs and are extracted from the delta queue. After a successful init run, the setup tables can be deleted.

    Important terms:
    INIT/FULL LOAD: First Time (START)
    DELTA: Only Changes (AFTER START)


    9) DELTA EXTRACTION-ABR MECHANISM:
    ABR MECHANISM:
    The LO DataSources support the ABR delta mechanism (a push delta mechanism), which is compatible with ODS/DSO and InfoCube targets. USE OF ABR DELTA:
    The ABR delta creates delta records with:
    • AFTER image: no minus sign, the record status after the change
    • BEFORE image: minus sign, the delta before-image

    So what are pull data and Push data?
    Data is pulled from the delta queue (source system) or pushed into the data warehouse when a transaction is saved; the SAP BW system follows the pull scenario, while the SAP BI system follows the push scenario. The ABR delta creates a before-image with a minus sign for data that is deleted or changed, and an after-image that provides the status after the change or for newly added records. The ABR delta mechanism uses the V1 or V2 update: a delta is generated for a document change or posting, and the program that updates the application tables for the transaction pushes/triggers the data. Data extraction in BW means extracting data from various tables in the R/3 or BW systems. There are standard delta extraction methods available for master data and transaction data, and you can also build your own with the help of the transaction codes provided by SAP. The standard delta extraction for master data uses change pointer tables in R/3. For transaction data, delta extraction can be done using LIS structures or the LO Cockpit. The delta queue for an LO DataSource is generated automatically after a successful initialization and can be viewed in transaction RSA7, or in transaction SMQ1 under the name MCEX*.


    10) UPDATES & UPDATE MODES AVAILABLE IN LO- EXTRACTION:
    Before we discuss the update modes, we must first know about the update types, because the update types are used in the update modes.

    A. UPDATES:
    i. V1 - Synchronous update
    ii. V2 - Asynchronous update
    iii. V3 - Batch asynchronous update

    i. V1 - Synchronous update:
    • The V1 update (the basis of the direct delta) is a synchronous, time-critical update; its execution is automatic, and the process is done only once, never a second time.
    • It runs in a single update work process and belongs to the same LUW (Logical Unit of Work); it reads data from the documents and writes to the application tables (and, with direct delta, to the delta queue).
    • V1 extraction can be scheduled at any time, with the mandatory step of locking users to prevent simultaneous updates during setup.
    • V1 is carried out for critical/primary changes; these affect objects that have a controlling function in the SAP system.
    • During the creation of an order, for example, the V1 update writes the data into the application tables and the order gets processed.

    ii. V2 - Asynchronous update:
    • The V2 update (the basis of the queued delta) is an asynchronous statistical update; it is non-time-critical (used for updating statistical tables of the transaction tables), its execution is never automatic, and it uses report RMBWV311.
    • The process is not done in a single update and is carried out in a separate LUW; it reads data from the documents, uses the extractor queue and writes to the statistics tables.
    • The V2 update is scheduled, e.g. hourly, with the mandatory step of locking users to prevent simultaneous updates.
    • A V2 update is executed for less critical, secondary changes; these are pure statistical updates resulting from the transaction.
    • V1 updates must be processed before V2.


    iii. V3 - Batch asynchronous update:
    • V3 (used by the unserialized V3 update mode) is a collective update scheduled in the background, i.e. a batch asynchronous update; it uses the delta queue technology, its execution is never automatic, and the background run uses report RSM13005.
    • The process can be done at any time after the V1 and V2 updates, i.e. V3 builds on the V1 or V2 update; it reads data from the documents and uses a collective run to move the data to the delta queue.
    • V3 can be scheduled at any time, with the mandatory step of locking users to prevent simultaneous updates during setup.


    B. UPDATE MODES:

    The LO DataSource implements its delta functionality using the above discussed V1, V2 & V3 update methods, individually or by combination of them.
    PI 2002.1 is the relevant plug-in release. So what update modes are available with LO DataSources as of PI 2002.1?

    i. DIRECT DELTA
    ii. QUEUED DELTA
    iii. SERIALIZED V3 UPDATE
    iv. UNSERIALIZED V3 UPDATE

    i. DIRECT DELTA : V1 update:

    Document postings and delta updates have a 1:1 relationship, i.e. with direct delta the V1 update writes the document postings directly to the BW delta queue, and these are extracted to the BI system periodically.

    In doing so, each document posting with delta extraction is posted as exactly one LUW in the respective BW delta queue.

    Users must be locked during the setup, because any posting made between the start of the recompilation (setup) run in OLTP and the successful update of all init requests in BW would be completely lost.



    Advantages:
    SUITABILITY : Suitable for customers with few documents; no monitoring of update data / extraction queue required
    SERIALIZATION BY DOCUMENT : The posting process ensures document-by-document serialization while writing the delta to the delta queue within V1
    EXTRACTION : Extraction is independent of the V2 update

    Disadvantages:
    SUITABILITY : V1 is burdened more heavily; not suitable for a high number of documents
    RE-INITIALIZATION (USER LOCKS) : Setup and initialization are required before document postings can be resumed
    IMPLEMENTATION : Less

    ii. QUEUED DELTA (V1 + V3) update: **SAP RECOMMENDS**

    Delta queues are essentially tables capturing the key values of changed or inserted records, or the entire transaction records. Data is pushed to the extraction queue by means of the V1 update and then moved to the delta queue by a collective run, similar to a V3 update. Since this is a collective run, the data gathered in the extraction queue is processed by a scheduled background job. SAP recommends the queued delta for customers with a high number of documents.

    The collective run uses report RMBWV311 and the naming convention is MCEX_UPDATE_<application>, for example MCEX_UPDATE_11 for sales.

    Advantages:
    SUITABILITY : Suitable for customers with a high number of documents (SAP recommends it)
    SERIALIZATION BY DOCUMENT : Ensured by the collective run, which is executed after V1
    EXTRACTION : Extraction is independent of the V2 update
    IMPLEMENTATION : **SAP RECOMMENDS**

    Disadvantages : No disadvantages


    iii. UNSERIALIZED V3 UPDATE

    Data extracted from the application tables is written to the update tables using the unserialized V3 update mode and is then processed by the collective update run. The important point here is that the data is read from the update tables without regard to sequence and transferred to the BW delta queue. The unserialized update, which runs after V1 and V2, does not guarantee serialization of the document data posted to the delta queue; the order of entries in the delta queue may not match the order of the updates made to the application. This method can therefore produce erroneous data when the DataSource feeds a DSO in overwrite mode, because an earlier change may overwrite a later one. It is, however, acceptable when the data is updated additively, for example into an InfoCube. V3 runs and processes the data after V2.

    Advantages
    SUITABILITY : Suitable for customers with a high number of documents where the document order does not matter
    SERIALIZATION BY DOCUMENT : None; the unserialized V3 run, executed after V1 and V2, does not guarantee serialization of the document data
    EXTRACTION : Collective
    DOWNTIME : Not efficient
    IMPLEMENTATION : Less

    Disadvantages
    SUITABILITY : Not suggested if documents are subject to frequent changes and the change history must be tracked in order



    iv. SERIALIZED V3 UPDATE

    Here the update tables are processed in sequence. The main problem with this method is that if users are not locked, document postings occur while we are extracting data, and the data loses consistency, e.g. a document may be changed twice or three times while the data is being extracted.
    The important difference to analyse here is:
    Serialized V3 vs. unserialized V3
    As the term serialization itself implies, the data is read sequentially and transferred to the BW delta queue in order. Unserialized means the process is not sequential, i.e. the data is read in the update collective run without taking the sequence into account and is then transferred to the BW delta queues.

    Looking at the data flow, the serialized and unserialized V3 updates are twins with a different process.



    11) DATA FLOW CHARTS OF DIRECT DELTA, QUEUED DELTA, SERIALIZED V3 & UNSERIALIZED V3 UPDATES:



    12) FEATURES , DIFFERENCES BETWEEN V1 V2 & V3 UPDATES: (COMPARISION TABLE):



    13) NAMING CONVENTIONS:
    • NAMING CONVENTION FOR TRANSACTION DATA SOURCE
    • NAMING CONVENTION FOR SETUP TABLES
    • NAMING CONVENTION FOR EXTRACT STRUCTURE
    • TABLE REPRESENTATION






    All about BW Table Types (MD tbl, SID tbl, DIM tbl, etc)

    Attribute tables:
    ·         Attribute tbl for Time Independent attributes:
    ·         /BI*/P<characteristic_name>
    ·         stored with characteristic values

    Attribute tbl for Time Dependent attributes:
    ·         /BI*/Q<characteristic_name>
    ·         Fields DATETO & DATEFROM are included in time dependent attribute tbl.
    ·         stored with characteristic values

    Dimension tables:
    ·         Dimension tbls (i.e. DIM tables): /BI*/D<Cube_name><dim.no.>
    ·         stores the DIMID, the pointer between fact tbl & master data tbl
    ·         data is inserted during upload of transact.data (data is never changed, only inserted)
    ·         Examples:
    o    /bic/D(cube name)P is the package dimension of a content cube
    o    /bic/D(cube name)U is the unit dimension of a content cube
    o    /bic/D(cube name)T is the time dimension of a content cube
    o    /bic/D(cube name)I is the user defined dimension of a content cube


    External Hierarchy tables:
    ·         /BI*/I*, /BI*/J*, /BI*/H*, /BI*/K*
    ·         /BI0/0P...
    ·         these are tables that occur in the course of an optimized hierarchy preprocessing, which involves several tables
    ·         /bic/H(object name) contains the hierarchy data of the object
    ·         For more information see Note 514907.


    Fact tables:
    ·         In SAP BW, there are two fact tables for including transaction data for Basis InfoCubes: the F and the E fact tables.
    o    /bic/F(cube name) is the F-fact table of a content cube
    o    /bic/E(cube name) is the E-fact table of a content cube
    ·         The Fact tbl is the central tbl of the InfoCube. Here key figures (e.g. sales volume) & pointers to the dimension tbls are stored (dim tbls, in turn, point to the SID tbls).
    ·         If you upload data into an InfoCube, it is always written into the F-fact table.
    ·         If you compress the data, the data is shifted from the F-fact table to the E-fact table.
    ·         The F-fact tables for aggregates are always empty, since aggregates are compressed automatically
    ·         After a change run, the F-fact table can contain entries as well, as it can when you use the 'do not compress requests' functionality for aggregates.
    ·         E-fact tbl is optimized for Reading => good for Queries
    ·         F-fact tbl is optimized for Writing => good for Loads
    ·         see Note 631668 


    Master Data tables
    ·         /BI0/P<char_name>
    ·         /bic/M(object name) master data of object
    ·         Master data tables are independent of any InfoCube
    ·         Master data & master data details (attributes, texts & hierarchies) are stored.
    ·         The master data table stores all time-independent attributes (display & navigational attributes)


    Navigational attributes tables:
    ·         SID Attribute table for time independent navigational attributes: /BI*/X<characteristic_name>
    ·         SID Attribute tbl for time dependent navigational attributes: /BI*/Y<characteristic_name>
    ·         Nav. attribs can be used for navigation purposes (filtering, drill down).
    ·         The attribs are not stored as char values but as SIDs (master data IDs).


    P table:
    ·         P-table only gets filled if you load master data explicitly.
    ·         As soon as the SID table is populated, the P tbl is populated as well


    SID table:
    ·         SID tbl: /BI*/S<characteristic>
    ·         stores the char value (e.g. customer number C95) & the SID. The SID is the pointer that is used to link the master data tbls & the dimension tbls. The SID is generated during the upload (uniqueness is guaranteed by a number range object).
    ·         Data is inserted during the upload of master data or of transactional data.
    ·         The S table also gets filled whenever transaction data is loaded: if any new characteristic value for that object appears in the transaction data, the SID table gets filled (a small read sketch follows at the end of this overview).


    Text table:
    ·         Text tbl: /BI*/T<characteristic>
    ·         stores the text for the chars
    ·         data is inserted & changed during the upload of text data attribs for the InfoObject
    ·         stored either language dependent or independent
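    To make the pointer logic concrete, the SID and attribute tables can be read directly. The following is only a minimal sketch and assumes that the standard characteristic 0MATERIAL is active with master data in the system; the field selection and row limit are purely illustrative:

    TYPES: BEGIN OF lty_md,
             material TYPE /bi0/oimaterial,
             sid      TYPE rssid,
             objvers  TYPE rsobjvers,
           END OF lty_md.

    DATA lt_md TYPE STANDARD TABLE OF lty_md.

    " Join the S table (value -> SID pointer) with the P table (time-independent attributes)
    SELECT s~material s~sid p~objvers
      FROM /bi0/smaterial AS s
      INNER JOIN /bi0/pmaterial AS p
        ON s~material = p~material
      INTO TABLE lt_md
      UP TO 50 ROWS
      WHERE p~objvers = 'A'.      " active master data version only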


    All About....Transaction Codes For Filling Setup Tables LO Extractors


    Transaction Codes For Filling Setup Tables LO Extractors
    Credits: Todor Peev

    An overview of DataSources and the programs that fill the relevant setup tables (named MC*SETUP). With this handy table you can find the status of your current job or of previous initialization jobs through SM37.

    T-Code         Purpose
    OLI1BW       INVCO Stat. Setup: Material Movemts
    OLI2BW       INVCO Stat. Setup: Stor. Loc. Stocks
    OLI3BW       Reorg.PURCHIS BW Extract Structures
    OLI4BW       Reorg. PPIS Extract Structures
    OLI4KBW     Initialize Kanban Data
    OLI6BW        Recompilation Appl. 06 (Inv. Ver.)
    OLI7BW        Reorg. of VIS Extr. Struct.: Order
    OLI8BW        Reorg. VIS Extr. Str.: Delivery
    OLI9BW        Reorg. VIS Extr. Str.: Invoices
    OLIABW       Setup: BW agency business
    OLIB             PURCHIS: StatUpdate Header Doc Level
    OLID             SIS: Stat. Setup - Sales Activities
    OLIE              Statistical Setup - TIS: Shipments
    OLIFBW        Reorg. Rep. Manuf. Extr. Structs
    OLIGBW       Reconstruct GT: External TC
    OLIH             MRP Data Procurement for BW
    OLIIBW         Reorg. of PM Info System for BW
    OLIKBW        Setup GTM: Position Management
    OLILBW        Setup GTM: Position Mngmt w. Network
    OLIM             Periodic stock qty - Plant
    OLIQBW       QM Infosystem Reorganization for BW
    OLISBW       Reorg. of CS Info System for BW
    OLIX             Stat. Setup: Copy/Delete Versions
    OLIZBW       INVCO Setup: Invoice Verification

    Datasource                          Tcode           Program
    2LIS_02*                              OLI3BW       RMCENEUA
    2LIS_03_BX                         MCNB          RMCBINIT_BW
    2LIS_03_BF                         OLI1BW       RMCBNEUA
    2LIS_03_UM                        OLIZBW       RMCBNERP
    2LIS_04* orders                   OLI4BW       RMCFNEUA
    2LIS_04* manufacturing      OLIFBW       RMCFNEUD
    2LIS_05*                              OLIQBW      RMCQNEBW
    2LIS_08*                              VTBW          VTRBWVTBWNEW
    2LIS_08* (COSTS)              VIFBW        VTRBWVIFBW
    2LIS_11_V_ITM                   OLI7BW     RMCVNEUA
    2LIS_11_VAITM                   OLI7BW     RMCVNEUA
    2LIS_11_VAHDR                  OLI7BW     RMCVNEUA
    2LIS_12_VCHDR                 OLI8BW     RMCVNEUL
    2LIS_12_VCITM                   OLI8BW     RMCVNEUL
    2LIS_12_VCSCL                  OLI8BW     RMCVNEUL
    2LIS_13_VDHDR                 OLI9BW     RMCVNEUF
    2LIS_13_VDITM                   OLI9BW     RMCVNEUF
    2LIS_17*                              OLIIBW      RMCINEBW
    2LIS_18*                              OLISBW     RMCSNEBW
    2LIS_45*                              OLIABW     RMCENEUB
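    Each of these transactions essentially wraps the listed setup report, so the same run can also be triggered directly from ABAP. A minimal sketch for the SD sales documents setup (application 11); in practice the transaction, e.g. OLI7BW, is the normal route and the selection screen still has to be filled in:

    " Start the statistical setup for SD sales documents via its report
    SUBMIT rmcvneua VIA SELECTION-SCREEN.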

    Document update is where the transaction (document) is updated in the application tables. This update is normally a synchronous update, i.e. if the update does not go through for whatever reason, the complete transaction is rolled back.
    Statistical update is the update of statistics for the transaction – like LIS or extractors for BW.
    V1 – synchronous update. If the update is set to V1, then all tables are updated, and if any one fails, all are rolled back. This is done for all transactions plus critical statistics like credit management, etc.
    V2 – asynchronous update – transactions are updated and statistical updates are done when the processor has free resources. If the statistical update fails, the transaction will still have gone through, and these failures have to be addressed separately.
    V3 – batch update – statistics are updated using a batch (periodic) job, for example every hour or at the end of the day. The failure behavior is the same as for V2 updates.

    Statistical update is also used to describe the initial setup of the statistical tables for LO/LIS. When old transactions are updated into LO/LIS as a one-time exercise, this is also called a statistical update. Once these tables are up to date with all transactions, every new transaction is updated in them using V1, V2 or V3.

    All about Dimensions....



    Dimension design: A different perspective


    Pre-requisites: An InfoCube is already created, active and filled with data, which will be used for the analysis of the dimension tables.

    Dimension to Fact Ratio Computation: This ratio is the percentage of the number of records that exist in the dimension table relative to the number of records in the fact table, i.e. what percentage of the fact table size a dimension table is. Mathematically, the equation is as below:

              Ratio = (No. of rows in dimension table × 100) / No. of rows in fact table

    Dimension Table Design Concept: We have been reading and hearing over and over again that characteristics should be added to the same dimension if they share a 1:1 or 1:M relation, and should go into separate dimensions if there is an M:M relation. What is this 1:1 or 1:M? It is the relation the characteristics share with each other.
    For instance, if one plant can have only one storage location and one storage location can belong to only one plant at any given point in time, then the relation between them is 1:1.
    If one functional location can have many pieces of equipment, but one piece of equipment can belong to only one functional location, then the relation between functional location and equipment is 1:M.
    If one sales order can have many materials, and one material can exist in different sales orders, then there is no dependency between the two, and the relation between them is many-to-many, or M:M.

    Challenges in understanding the relationship: We SAP BI consultants often depend on the functional consultants to help us out with the relationships between these characteristics/fields. Due to time constraints we generally cannot dedicate time to educate the functional consultants on the purpose of this exercise, and it takes a lot of time to understand these relationships thoroughly.


    Scenario: An InfoCube ZPFANALYSIS had a few dimensions which were way larger than the preferred 20% ratio. It had to be redesigned so that each dimension stayed under the 20% ratio.
    This ratio can either be derived manually by checking the number of entries in the desired dimension table (/BIC/D<infocube name><dimension number>) against the fact table (/BIC/F<Infocube Name> or /BIC/E<Infocube name>), or the program SAP_INFOCUBE_DESIGNS can be executed in SE38, which reports this ratio for all the dimensions of all the InfoCubes in the system.

    SAP_INFOCUBE_DESIGNS:
    We can see from the report that the total number of rows in the fact table is 643850. Dimension 2 (/BIC/DZPFANLSYS2) has around 640430 rows, which is 99% (99.49%) of the fact table rows, and dimension 4 (/BIC/DZPFANLSYS4) has around 196250 rows, which is 30% (30.48%) of the fact table rows.

    Infocube ZPFANLSYS:

    Approach:

    Step 1: Analysis of the dimension table /BIC/DZPFANLSYS2 to plan how to reduce the number of records.
    /BIC/DZPFANLSYS2

    Fact table:
    The dimension table holds one record more than the fact table.
    View the data of table /BIC/DZPFANLSYS2 (the table for dimension 2) in SE12 and sort all the fields. This sorting helps us spot the rows that have repeated values in many columns, which will eventually lead to understanding the relationship between the characteristics (the columns of the dimension table).

    Identifying the relationships:
    Once the sorting is done, we need to look at how many values repeat across the columns. All the repeating records could have been represented by a single row with one dimension ID if all the columns had the same data. The repetition is the result of one or more columns that contribute a unique value to each row. If such columns are removed from the table, the number of rows in the table will come down.

    In the screenshot below I have highlighted in green the rows that repeat themselves with new dimension IDs, because only the two columns SID_ZABNUM and SID_0NPLDA have new values for every row. These two columns having new values for every row cause the rest of the columns to repeat themselves, which in turn increases the size of the dimension table. Hence it can easily be said that these two columns do not belong in this dimension table, so the related characteristics (ZABNUM and 0NPLDA) need to be moved out of this dimension.
    A few rows can be found that repeat themselves for most of the columns but get a new value once in a while in some columns, as highlighted in yellow in the screenshot below. This indicates that these columns share a 1:M relation with the rest of the columns with repeated rows, and they can be left in the same dimension.
    Conclusion: The columns marked in green belong in this dimension table and the columns marked in red need to go into other dimension tables.
    Step 2: Create a copy InfoCube C_ZPFAN and create new dimensions to accommodate ZABNUM and 0NPLDA.
    ZABNUM was added to dimension C_ZPFAN8 and 0NPLDA was added to C_ZPFAN7. These were marked as line-item dimensions as they contain only one characteristic each.
    The issue with dimension 4 was analysed in a similar way, and other dimensions were changed to help the situation.

    After the changes, the data was loaded into the copy InfoCube C_ZPFAN and the number of records in the dimension table /BIC/DC_ZPFAN2 was found to be 40286.

    Ratio: 40286 / 657400 * 100 ≈ 6.13 %


    SAP_INFOCUBE_DESIGNS:

    Dimension2 of the copy infocube: /BIC/DC_ZPFAN2
    Even now there are a few repeated rows and columns, but the ratio is within 20%. We can create up to 13 dimensions, but it is always better to keep a dimension or two free for future enhancements.

    Hope this was helpful.

    All About SAP BW - ABAP performance tuning

    Credits - Lakshminarasimhan N


    ABAP performance tuning for SAP BW system

    Applies to:
    SAP BW 7.x system



    Details
    In an SAP BW system we use ABAP in many places, most commonly in start routines, end routines and expert routines. This document points out ways to fine-tune the ABAP code written in the SAP BW system.

    Rule 1 – Never use "SELECT *". "SELECT *" should be avoided, and "SELECT ... ENDSELECT" loops must be avoided at any cost.

    Rule 2 – Always check that the internal table is not empty before using "FOR ALL ENTRIES". When you use a SELECT statement with "FOR ALL ENTRIES", make sure the driving internal table is not empty; otherwise the WHERE condition is effectively dropped and the whole database table is selected.

    Example:

    SELECT CSM_CASE
           CSM_EXID
           CSM_CRDA
           CSM_TYPE
           CSM_CATE
           CSM_CLDA
      FROM /BIC/AZCACSMDS00
      INTO TABLE LIT_ZCACSMDS
      FOR ALL ENTRIES IN RESULT_PACKAGE   " RESULT_PACKAGE must not be empty
      WHERE CSM_CASE = RESULT_PACKAGE-CSM_CASE.

    Hence we need to check whether the internal table is empty, and only if it is not empty proceed with the SELECT statement.

    IF RESULT_PACKAGE[] IS NOT INITIAL.
      SELECT CSM_CASE
             CSM_EXID
             CSM_CRDA
             CSM_TYPE
             CSM_CATE
             CSM_CLDA
        FROM /BIC/AZCACSMDS00
        INTO TABLE LIT_ZCACSMDS
        FOR ALL ENTRIES IN RESULT_PACKAGE
        WHERE CSM_CASE = RESULT_PACKAGE-CSM_CASE.
    ENDIF.

    Rule 3 – Always use the "Code Inspector" and the "Extended Program Check". Double-click the transformation and choose "Display Generated Program" from the menu. The entire generated program is displayed; then run the "Code Inspector" and the "Extended Program Check" as shown in the screenshot below.
    Correct the warning and error messages that are reported.


    Rule 4 – Always use the "TYPES" statement to declare a local structure in the program; the same structure can then be used in the SELECT statement.
    Example –
    If you want to read the PO number, PO item and actual quantity delivered from the purchasing DSO, create a local structure using a TYPES statement.
    TYPES: BEGIN OF lty_pur,
             oi_ebeln TYPE /bi0/oioi_ebeln,
             oi_ebelp TYPE /bi0/oioi_ebelp,
             pdlv_qty TYPE /bi0/oipdlv_qty,
           END OF lty_pur.
    DATA: lt_pur TYPE STANDARD TABLE OF lty_pur.   " internal table declared based on the local type
    SELECT oi_ebeln oi_ebelp pdlv_qty FROM /bi0/apur_o0100 INTO TABLE lt_pur.

    Rule 5 – Always try to use "hashed" and "sorted" internal tables in your routines. When you cannot and you fall back on a "standard" internal table, make sure you SORT the table in ascending order by the keys you use in the READ statement, and then use "BINARY SEARCH" in the READ statement. This improves the performance of the READ statement. When a standard table is sorted and then read this way, make sure the READ keys match the sort order, otherwise you will not get the correct result.

    Example –
    DATA la_pur TYPE lty_pur.
    SELECT oi_ebeln oi_ebelp pdlv_qty FROM /bi0/apur_o0100 INTO TABLE lt_pur.
    IF sy-subrc = 0.
      SORT lt_pur BY oi_ebeln oi_ebelp.   " sort by the keys used in the READ statement
      LOOP AT result_package ASSIGNING <result_fields>.
        READ TABLE lt_pur INTO la_pur
             WITH KEY oi_ebeln = <result_fields>-oi_ebeln
                      oi_ebelp = <result_fields>-oi_ebelp BINARY SEARCH.
        IF sy-subrc = 0.
          " <logic to populate the fields>
        ENDIF.
      ENDLOOP.
    ENDIF.

    Rule 6 – Never use "INTO CORRESPONDING FIELDS OF TABLE". Follow Rule 4: declare a structure via a TYPES statement and use it to create the internal table. Do not use "INTO CORRESPONDING FIELDS OF TABLE" in the SELECT statement.

    Example  --
    Never code it the way shown below; follow the example of Rule 4 instead.
    DATA: lt_pur TYPE STANDARD TABLE OF /bi0/apur_o0100.
    SELECT oi_ebeln oi_ebelp pdlv_qty FROM /bi0/apur_o0100 INTO CORRESPONDING FIELDS OF TABLE lt_pur.

    Rule 7 – In the SELECT statement make sure you use the primary keys. For DSOs with a huge volume of data, create secondary indexes and use them in the SELECT statement.

    Rule 8 – Never use Include program in your transformations.

    Rule 9 – Try to minimize the use of 'RSDRI_INFOPROV_READ'. If you do need to use it, request only the necessary characteristics and key figures, and make sure the cube is compressed.

    Rule 10 – Make sure to clear the work areas and temporary variables before they are used inside a loop.

    Rule 11 – Prefer field symbols over work areas; this way you can avoid the "MODIFY" statement, as in the sketch below.
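    A minimal sketch of this in an end routine; _ty_s_TG_1 is assumed to be the generated target structure type of the transformation, and /BIC/ZFLAG is an assumed target field used purely for illustration:

    FIELD-SYMBOLS <result_fields> TYPE _ty_s_tg_1.   " assumed generated line type

    LOOP AT result_package ASSIGNING <result_fields>.
      " The row is changed in place through the field symbol - no work area, no MODIFY
      <result_fields>-/bic/zflag = 'X'.              " assumed target field
    ENDLOOP.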

    Rule 12 – When the code in the transformation is huge and complicated, make sure the DTP package size is reduced for a faster data load.

    Rule 13 – Never use hard-coded “BREAK-POINT” in the transformation.

    Rule 15 – Add plenty of comments in the transformation, along with the developer name, functional owner, technical change, CR number, etc.

    Rule 16 – Delete duplicates before you use "FOR ALL ENTRIES".

    Example –

    You select the “status profile” from CRM DSO.

    Select CSM_CASE CSM_EXID CSM_SPRO from  /BIC/AZCSM_AGE00 into table lt_csm_pro.

    Let us assume that there are 1 million records and all of them end up in the table lt_csm_pro.
    Now I need to extract from another table using the "status profile".
    So,

    Select 0CSM_TYPE 0CSM_CATE from /BIC/AZCSM_BHF00 into table lt_csm_bhf for all entries in
    lt_csm_pro where CSM_SPRO = lt_csm_pro-CSM_SPRO.

    The above SELECT statement will take a very long time to execute, as lt_csm_pro has 1 million records.
    We know that the status profile has duplicates; when we remove the duplicates we
    will be left with only 90 status profiles. So the best approach is to remove the duplicates and then use the table in "FOR ALL ENTRIES".
    Copy the table lt_csm_pro to another internal table lt_csm_pro_1.

    lt_csm_pro_1[] = lt_csm_pro[].

    Sort lt_csm_pro_1 by CSM_SPRO.

    Delete adjacent duplicates from lt_csm_pro_1 comparing CSM_SPRO.

    After the DELETE statement, lt_csm_pro_1 will contain only 90 records, and hence the statement below will run fast.

    Select 0CSM_TYPE 0CSM_CATE from /BIC/AZCSM_BHF00 into table lt_csm_bhf for all entries in
    lt_csm_pro_1 where CSM_SPRO = lt_csm_pro_1-CSM_SPRO.

    Rule 17 – Always use the method new_record__end_routine to add new records to the result_package. We could manually sort the result_package by record number and then append the records, but it is recommended to use the method new_record__end_routine instead.

    Rule 18 – Use the "global declaration" part to declare internal tables only when you need to keep records between the start routine, the transformation and the end routine.

    Rule 19 – Make use of "Documents" to record detailed steps related to the code in the transformation, dependent loads and any other details.

    Example –



    Rule 20 – Try to use the “DTP filter” and “DTP filter routines” to filter the incoming data from the source InfoProvider.

    Rule 21 – Try to use SAP-provided features such as master data reads and DSO reads in the transformations, rather than lookups coded in ABAP!!!! :-)

    Rule 22 – Before writing code, check the volume of data in the PRD system and how quickly it is growing; this allows you to foresee challenges and write better code.

    Rule 23 – Use BAdIs instead of CMOD user exits, and write classes and methods instead of function modules and subroutines.

    Rule 24 – Always use the MONITOR_REC table to capture exceptional records, instead of updating them into a custom Z table.

    Rule 25 – Use the exceptions cx_rsrout_abort and cx_rsbk_errorcount cautiously.

    Rule 26 – Within the start and end routines, don't add a new "LOOP AT result_package ... ENDLOOP" for every small change. Avoid multiple "LOOP AT result_package ... ENDLOOP" blocks and use the existing one; try to put the entire logic inside a single loop. This keeps the code uniform and clear.

    Rule 27 – Use "constants", which make maintenance easier. It is even better to maintain the constant values in an InfoObject master data table and read them in the ABAP lookup. Parameter tables can also be used.

    Rule 28 – "FOR ALL ENTRIES" will not fetch duplicate records, so there might be data loss, whereas an inner join would fetch all records; hence "FOR ALL ENTRIES" must be used cautiously. Make sure to use all the primary keys when fetching records with an internal table in "FOR ALL ENTRIES".
    scn.sap.com/thread/2029157

    Final Rule – Avoid as much ABAP code as possible!!! :-) The reason is very simple: when you power your BW system with HANA, transformations that contain ABAP code cannot be pushed down and executed in the HANA database.

    All About....Common Production Failures Encountered at BW Production Support....

    Common Production Failures Encountered at BW Production Support -

    Author at SDN by Devakar Reddy TatiReddy

