
Useful Transactions and Notes for NetWeaver 7.0

RSRD_ADMIN - Broadcasting Administration - Available in the BI system for the administration of Information Broadcasting.
CHANGERUNMONI - Use this transaction to monitor the status of the attribute change run.
RSBATCH - Dialog and Batch Processes. BI background management functions:
  • Managing background and parallel processes in BI
  • Finding and analyzing errors in BI
  • Reports for BI system management 
are available under Batch Manager.
RRMX_CUST - Set directly in this transaction which BEx Analyzer version is called by RRMX.
Note: 970002 - Which BEx Analyzer version is called by RRMX?
RS_FRONTEND_INT - Use this transaction, field QD_EXCLUSIVE_USER, to block new front-end components from migrating to the 7.0 version.
Note: 962530 - NW04s - How to restrict access to Query Designer 2004s.
WSCONFIG - This transaction is to create, test and release the Web Service definition.
WSADMIN - Administration Web Services - This transaction is to display and test the endpoint.
RSTCO_ADMIN - Use this transaction to install basic BI objects and to check whether the installation was carried out successfully. If the installation status is red, restart the installation by calling transaction RSTCO_ADMIN again and check the installation log.
Note 1000194 - Incorrect activation status in transaction RSTCO_ADMIN.
Note 1039381 - Error when activating the content Message no. RS062 (Error when installing BI Admin Cockpit).
Note 834280 - Installing technical BI Content after upgrade.
Note 824109 - XPRA - Activation error in NW upgrade. The XPRA installs technical BW Content objects that are necessary for the productive use of the BW system. (An error occurs during the NetWeaver upgrade in the XPRA RS_TCO_Activation_XPRA; the system ends the execution of the method with status 6.)
RSTCC_INST_BIAC - For activating the Technical Content for the BI admin cockpit
Run report RSTCC_ACTIVATE_ADMIN_COCKPIT in the background
Note 934848 - Collective note - (FAQ) BI Administration Cockpit
Note 965386 - Activating the technical content for the BI admin cockpit
Attachment for report RSTCC_ACTIVATE_ADMIN_COCKPIT source code
Terminations and errors that occur when activating Technical Content objects:
Note 1040802 - Terminations occur when activating Technical Content Objects
RSBICA - BI Content Analyzer - Check programs for analyzing inconsistencies and errors in custom-defined InfoObjects, InfoProviders, etc. With the central transaction RSBICA, schedule the delivered check programs for the local system or for a remote system via an RFC connection. The results of the check programs can be loaded to the local or remote BI system to provide a single point of entry for analyzing the BI landscape.
RSECADMIN - Transaction for maintaining new authorizations. Management of Analysis Authorizations.
Note 820123 - New Authorization concept in BI.
Note 923176 - Support situation authorization management BI70/NW2004s.
RSSGPCLA - For the regeneration of RSDRO_* Objects. Set the status of the programs belonging to program classes "RSDRO_ACTIVATE", "RSDRO_UPDATE" and "RSDRO_EXTRACT" to "Generation required". To do this, select the program class and then activate the "Set statuses" button.
Note 518426 - ODS Object - System Copy, migration
RSDDBIAMON - BI Accelerator - Monitor with administrator tools.
  • Restart BIA server: restarts all the BI accelerator servers and services.
  • Restart BIA Index Server: restart the index server.
  • Reorganize BIA Landscape: If the BI accelerator server landscape is unevenly distributed, redistributes the loaded indexes on the BI accelerator servers.
  • Rebuild BIA Indexes: If a check discovers inconsistencies in the indexes, delete and rebuild the BI accelerator indexes.
RSDDSTAT - For Maintenance of Statistics properties for BEx Query, InfoProvider, Web Template and Workbook.
Note 964418 - Adjusting ST03N to new BI-OLAP statistics in Release 7.0
Note 934848 - Collective Note (FAQ) BI Administration Cockpit.
Note 997535 - DB02 : Problems with History Data.
Note 955990 - BI in SAP NetWeaver 7.0: Incompatibilities with SAP BW 3.X.
Note 1005238 - Migration of workload statistics data to NW2004s.
Note 1006116 - Migration of workload statistics data to NW2004s (2).
DBACOCKPIT - This new transaction replaces the old transactions ST04 and DB02 and comes with Support Package 12; it is used for database monitoring and administration.
Note 1027512 - MSSQL: DBACOCKPIT  for basis release 7.00 and later.
Note 1072066 - DBACOCKPIT - New function for DB monitoring.
Note 1027146 - Database administration and monitoring in the DBA Cockpit.
Note 1028751 - MaxDB/liveCache: New functions in the DBA Cockpit.
BI 7.0 iView Migration Tool
Note 1128730 - BI 7.0 iView Migration Tool
Attachments for the iView Migration Tool:
  • bi migration PAR
  • bi migration SDA
  • BI iView Migration Tool
For Setting up BEx Web
Note 917950 - SAP NetWeaver2004s : Setting Up BEx Web
Handy attachments for setting up BEx Web:
  • Problem Analysis
  • WDEBU7 Setting up BEx Web
  • System Upgrade Copy
  • Checklist
To Migrate BW 3.X Query Variants to NetWeaver 2004s BI:
Run report RSR_VARIANT_XPRA from transaction SE38 to fill the source table with the BW 3.x variants that need to be migrated to SAP NetWeaver 2004s BI. After upgrading the system to Support Package 12 or higher, run the migration report RSR_MIGRATE_VARIANTS to migrate the existing BW 3.x query variants to the new NetWeaver 2004s BI variant storage.
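If you prefer to trigger both steps programmatically instead of starting them manually from SE38, a minimal wrapper along the following lines can be used. This is only a sketch: the wrapper report name ZRUN_VARIANT_MIGRATION is hypothetical, and it assumes both standard reports can be submitted without additional selection-screen input.

*&---------------------------------------------------------------------*
*& Hypothetical wrapper report: runs the two migration steps in sequence.
*&---------------------------------------------------------------------*
REPORT zrun_variant_migration.

START-OF-SELECTION.
* Fill the source table with the BW 3.x variants that still need migration.
  SUBMIT rsr_variant_xpra AND RETURN.

* Migrate the collected BW 3.x query variants to the new variant storage
* (requires Support Package 12 or higher, as described above).
  SUBMIT rsr_migrate_variants AND RETURN.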
Note 1003481 - Variant Migration - Migrate all Variants
To check for missing elements and repair the errors, run report ANALYZE_MISSING_ELEMENTS.
Note 953346 - Problem with deleted InfoProvider in RSR_VARIANT_XPRA
Note 1028908 - BW Workbooks MSA: NW2004s upgrade looses generic variants
Note 981693 - BW Workbooks MSA: NW2004s upgrade looses old variants
For the Migration  of Web Templates from BW 3.X to SAP NetWeaver 2004s:
Note 832713 - Migration of Web Templates from BW 3.X to NetWeaver 2004s
Note 998682 - Various errors during the Web Template migration of BW 3.X
Note 832712 - BW - Migration of Web items from 3.x to 7.0
Note 970757 - Migrating BI Web Templates to NetWeaver 7.0 BI  which contain chart
Upgrade Basis Settings for SAP NetWeaver 7.0 BI
SAP NetWeaver 7.0 BI applications on 32-bit architecture are reaching their limits. Building high-quality reports on SAP NetWeaver BI sources requires an installation based on 64-bit architecture.
With the SAP NetWeaver 7.0 BI upgrade, change the basis parameter settings of the SAP kernel from the 32-bit to the 64-bit version. Given the added functionality, applications and BI reports with large data sets use a lot of memory, which adds load to the application server; the application server can fail to start up because the sum of all buffer allocations exceeds the 32-bit limit.
Note 996600 - 32 Bit platforms not recommended for productive NW2004s apps
Note 1044441 - Basis parameterization for NW 7.0 BI systems
Note 1044330 - Java parameterization for BI systems
Note 1030279 - Reports with very large result sets/BI Java
Note 927530 - BI Java sizing
Intermediate Support Packages for NetWeaver 7.0 BI
A BI Intermediate Support Package consists of an ABAP Support Package and a Front-End Support Package; the ABAP BI Intermediate Support Package is compatible with the delivered BI Java stack.
Note 1013369 - SAP NetWeaver 7.0 BI - Intermediate Support Packages
Microsoft Excel 2007 integration with NetWeaver 7.0 BI
Microsoft Excel 2007 functionality is now fully supported by NetWeaver 7.0 BI: advanced filtering, pivot tables, advanced formatting, the new graphics engine, currencies, query definition and data mart fields.
Note 1134226 - New SAP BW OLE DB for OLAP files delivery - Version 3
Full pivot table functionality for analyzing NetWeaver BI data.
Microsoft Excel 2007 integrates with NetWeaver 7.0 BI for building new queries, defining filter values, generating charts and creating top-n analyses from NetWeaver BI data.
Microsoft Excel 2007 now provides design mode, currency conversion and unit of measure conversion.

SAP BW BI Interview Pattern - RealTime approach

By:
Chandiraban singu 
sdn.sap.com

Interviewers and their questions will not remain the same, but you can find the pattern.

Brief of your profile
Brief of what you did in the project
Your challenging and complex situations
Problems you regularly faced and what you did to prevent them permanently
Interviewers may pose complex or recent situations for your analysis.

Someone may add:
Your system landscape
System architecture
Release management
Team size, organization structure, ...

If your experience includes production support, then the questions are generally about your roles, authorizations, and commonly faced errors.
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/b0c1d94f-b825-2c10-15ae-ccfc59acb291

About data source enhancement
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/00c1f726-1dc2-2c10-f891-ddfbffdb1a46

About data flow during delta
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f03da665-bb6f-2c10-7da7-9e8a6684f2f9


If your experience includes implementation:
Modules you implemented
Methodology adopted
https://weblogs.sdn.sap.com/pub/wlg/13745
Approach to implementation
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8917
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/8920
Testing system
Business scenario
How you did data modelling: why a standard LO DataSource? Why a DSO? Why this many layers? ...
Documentation: what your functional spec and technical spec templates, content and information look like

Design a SAP NetWeaver - Based System Landscape
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50a9952d-15cc-2a10-84a9-fd9184f35366
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/8877

BI - Soft yet Hard Challenges
https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/9068

Best Practice for new BI project
https://www.sdn.sap.com/irj/sdn/thread?threadID=775458&tstart=0

Guidelines to Make Your BI Implementations Easier
http://www.affine.co.uk/files/Guidelines%20to%20Make%20Your%20BI%20Implementations%20Easier.pdf


Specific BW interview questions

https://www.sdn.sap.com/irj/scn/advancedsearch?query=SAP+BW+INTERVIEW+QUESTIONS&cat=sdn_all
200 BW Questions and Answers for INTERVIEWS
http://sapdocs.info/sap-overview/sap-interview-questions/
http://www.erpmastering.com/bwfaq.htm
http://www.allinterview.com/showanswers/33349.html
http://searchsap.techtarget.com/generic/0,295582,sid21_gci1182832,00.html
http://prasheelk.blogspot.com/2008_05_12_archive.html

Best of luck for your interviews. Be clear about what you have done.
http://saptutions.com/SAPBW/BW_openhub.asp
http://www.scribd.com/doc/6343052/BW-EXAM



SAP BI Inventory Management

By: Raj Kandula and Jitu Krishna
 Source: sdn.sap.com
Hi,
     Few points in Inventory Management.

BW inventory movement cubes are initialised in two stages. First, you load and compress the initial on-hand stocks as of "today's" date; second, you load and compress the historic stock movements. Once that is all done, you run your regular delta loads to keep the information up to date.

The marker is used as a reference point during compression to keep a running total of what is on hand. When you initially load "today's" on-hand stocks, you UNCHECK the "No Marker Update" box so that the marker records those stock levels. When you load the historic stock movements, you CHECK the "No Marker Update" box so that these movements do not affect the marker (as those movements have already affected the current on-hand level). For the regular delta loads, you UNCHECK the "No Marker Update" box again so that the "future" movements net off against the marker as they go.



The marker is used to reduce the time needed to fetch non-cumulative key figures during reporting.
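To make the marker logic concrete, here is a small illustrative ABAP sketch (a hypothetical report with invented numbers, not part of any standard program). It only demonstrates the arithmetic: the marker captures the on-hand level from loads compressed with "No Marker Update" unchecked, historic movements compressed with the flag checked leave it untouched, and regular deltas adjust it.

REPORT zmarker_illustration.

DATA: lv_marker         TYPE i VALUE 100,  " on-hand stock captured at initialisation
      lv_historic_mvmts TYPE i VALUE 30,   " compressed with "No Marker Update" CHECKED
      lv_delta_mvmts    TYPE i VALUE -20,  " regular deltas, flag UNCHECKED
      lv_current_stock  TYPE i.

START-OF-SELECTION.
* Historic movements are already contained in the initial on-hand level,
* so they must not be applied to the marker again; only later deltas are.
  lv_current_stock = lv_marker + lv_delta_mvmts.

  WRITE: / 'Marker at initialisation         :', lv_marker,
         / 'Historic movements (not applied) :', lv_historic_mvmts,
         / 'Current on-hand stock            :', lv_current_stock.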

Refer
https://www.sdn.sap.com/irj/sdn/thread?messageID=4885115
https://www.sdn.sap.com/irj/sdn/thread?messageID=4764397
https://www.sdn.sap.com/irj/sdn/thread?messageID=4862257
https://www.sdn.sap.com/irj/sdn/thread?messageID=3254753#3254753
https://www.sdn.sap.com/irj/sdn/thread?threadID=422530
Inventory management
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/documents/a1-8-4/How%20to%20Handle%20Inventory%20Management%20Scenarios.pdf
How to Handle Inventory Management Scenarios in BW (NW2004)
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
https://www.sdn.sap.com/irj/sdn/thread?threadID=776637&tstart=0

• Refer to page 18 in the "Upgrade and Migration Aspects for BI in SAP NetWeaver 2004s" paper:
http://www.sapfinug.fi/downloads/2007/bi02/BI_upgrade_migration.pdf
Non-Cumulative Values / Stock Handling
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/93ed1695-0501-0010-b7a9-d4cc4ef26d31
Non-Cumulatives
http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/da1640dc88e769e10000000a155106/frameset.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a62ebe07211d2acb80000e829fbfe/frameset.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a62f8e07211d2acb80000e829fbfe/frameset.htm
Here you will find all the Inventory Management BI Contents:
http://help.sap.com/saphelp_nw70/helpdata/en/fb/64073c52619459e10000000a114084/frameset.htm

Hope this helps.


Production Planning Module Business content Implementation

These are the BW DataSources for the PP area:
2LIS_04_P_MATNR (material level),
2LIS_04_P_COMP (component level),
2LIS_04_P_ARBPL (operation level) and
0PRODORDER_ATTR (process/production order master data).

BI Best Practice Business content for PP:
http://help.sap.com/saphelp_nw70/helpdata/en/79/0a383fdb800804e10000000a114084/frameset.htm


To reconcile the data with R/3:

COOIS will serve your purpose, but make sure of the selections.

These are the standard tables on the PP side:
BOM-related tables:
STKO --- BOM - header
STPO --- BOM - item
STAS --- BOMs - Item Selection
STPN --- BOMs - follow-up control
STPU --- BOM - sub-item
STZU --- Permanent BOM data
PLMZ --- Allocation of BOM - items to operations
MAST --- Material to BOM link
KDST --- Sales order to BOM link
PLPLAN_PARAM --- BOM-dependent planning parameter
SNUM --- BOM explosion number
T415S --- BOM status
T415T --- BOM Status Texts
T416 --- BOM Usage - Item Statuses
T416T --- BOM Usage Texts
T618EM --- BOM of CAP material components
TCS09 --- BOM Control Depending on Category and Usage
TCS11 --- BOM Objects
TCS15 --- BOMs with History Requirement
TCS17 --- BOMs with History Requirement
TCS19 --- BOM Item Object Types
TCS21 --- BOM Item Object Types: internal > external
TCS22 --- BOM Item Object Types: external > internal
TCS31 --- BOM Field Selection - Usage (T416)
TCS32 --- BOM to Plant Allocation - Field Selection
TCS33 --- BOM Field Selection (SAP)
TCS34 --- BOM Field Selection - Control Data (TCS03)

Engineering Change Management:
AENR Customer and priority
AEOI Revision Numbers

Work Center:
CRHD Workcenter Header Data
CRCA Workcenter Capacity Allocation
CRCO Workcenter Cost Center Assignment
CRHH Hierarchy Header
CRHS Hierarchy Structure
CRTX Workcenter Text
KAKO Capacity Header
KAZY Intervals of Capacity

Routing:
PLPO Routing Operation Details
PLKO Routing Header Details
MAPL Routing Link to Material
PLAB Relationships - Standard Network
PLAS Task List - Selection of Operations
PLMZ Component Allocation
PLPH CAPP Sub-operations
PLFH PRT Allocation
PLWP Maintenance Package Allocation
PLMK Inspection Characteristics

Bill of Material:
STPO BOM Item Details
STPU BOM Sub Items (designators)
STKO BOM Header Details
MAST BOM Group to Material
STZU BOM History Records
STAS BOM Item Selection
STPF BOM Explosion Structure

Line Design:
LDLH Line Hierarchy Header
LDLP Line Hierarchy Items
LDLT Line Hierarchy Takt Times
LDLBC Takts/No. Individual Capacities per Line
LDLBH Line Balance Header
LDLBP Line Balance Items
LDLBT Line Hierarchy Entry and Exit Takts

PRTs:
CRFH PRT Master Data
CRVD_A Link of PRT to Document
CRVD_B Link of Document to PRT
CRVE_A Assignment of PRT data to Equipment
CRVE_B Assignment of equipment to PRT data
CRVM_A Link of PRT data to Material
CRVM_B Link of Material to PRT data
CRVS_A Link of PRT Internal number to PRT External number
CRVS_B Link of PRT External number to PRT Internal number

Loading data to a newly added field in DSO using an ABAP program.

· Added by nasarat anjum, last edited by Arun Varadarajan on Sep 26, 2009

Business Context

        In a BW infrastructure, data targets are built, and there can be a scenario in the future where an existing data target needs to be enhanced or modified. Suppose a DSO is built and we need to add a key figure to it. Data has been present in the DSO for a long time, and you cannot take the pain of reloading the same DSO just to populate values into the new key figure.

       To handle such a scenario, an ABAP program can be written that loads data into just that key figure, based on the corresponding old values.

       We faced a similar scenario, where we had a key figure of type "Integer". We then had to change this key figure to type "Number - FLTP". Changing the existing key figure and reloading the existing DSO (10 million records of data) would have required a lot of time.

The solution we implemented was:
1. Create a new key figure and include it in the DSO.
2. Create an ABAP program to populate the new key figure with the values from the already existing key figure.
3. Once this is done for existing data, we can stop loading the old key figure (by modifying the transformations) and continue loading the new key figure through the normal data loading procedure.

Advantage(s):

This saves us a lot of the time needed for reloading the DSO.

Assumption(s):
Data is loaded into the data target prior to scheduling this program.

Limitation(s):
This program is hard-coded to specific key figures; it could be generalized to accept the old and new key figures at run time and do the processing based on the input values.

PROGRAM SOURCE CODE:

*&---------------------------------------------------------------------*
*& Report  YPOPULATE_DSO_KF
*&
*&---------------------------------------------------------------------*
*& This program has a selection variable called "Month". Based on the
*& month entered, the program updates the Key figure values into newly
*& added key figure for all the records corresponding to the month.
*&
*&---------------------------------------------------------------------*

REPORT YPOPULATE_DSO_KF.

***SELECTION SCREEN
***--------------------------------------------------------------------*
TABLES: /BIC/AZDSO_O0100,
        /BIC/AZDSO_O0200,
        /BIC/AZDSO_O0300,
        /BIC/AZDSO_O0400.

SELECTION-SCREEN: BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
PARAMETERS L_DSO(10) TYPE C.
SELECT-OPTIONS: L_EFFMTH FOR /BIC/AZDSO_O0100-/BIC/ZEFF_MNTH.
SELECTION-SCREEN: END OF BLOCK B1.
*************************************************************************

* Check whether a Month value has been entered.
IF L_EFFMTH IS INITIAL.
* No month restriction: update all records of the selected DSO.
  CASE L_DSO.
*   If 'ZDSO_O01' is selected.
    WHEN 'ZDSO_O01'.
      UPDATE /BIC/AZDSO_O0100 SET /BIC/ZNEWFLD = /BIC/ZOLDFLD.
*   If 'ZDSO_O02' is selected.
    WHEN 'ZDSO_O02'.
      UPDATE /BIC/AZDSO_O0200 SET /BIC/ZNEWFLD = /BIC/ZOLDFLD.
*   If 'ZDSO_O03' is selected.
    WHEN 'ZDSO_O03'.
      UPDATE /BIC/AZDSO_O0300 SET /BIC/ZNEWFLD = /BIC/ZOLDFLD.
*   If 'ZDSO_O04' is selected.
    WHEN 'ZDSO_O04'.
      UPDATE /BIC/AZDSO_O0400 SET /BIC/ZNEWFLD = /BIC/ZOLDFLD.
  ENDCASE.

* An Effective Month value has been entered: restrict the update to it.
ELSE.
  CASE L_DSO.
*   If 'ZDSO_O01' is selected.
    WHEN 'ZDSO_O01'.
      UPDATE /BIC/AZDSO_O0100 SET /BIC/ZNEWFLD = /BIC/ZOLDFLD
        WHERE /BIC/ZEFF_MNTH IN L_EFFMTH.
*   If 'ZDSO_O02' is selected.
    WHEN 'ZDSO_O02'.
      UPDATE /BIC/AZDSO_O0200 SET /BIC/ZNEWFLD = /BIC/ZOLDFLD
        WHERE /BIC/ZEFF_MNTH IN L_EFFMTH.
*   If 'ZDSO_O03' is selected.
    WHEN 'ZDSO_O03'.
      UPDATE /BIC/AZDSO_O0300 SET /BIC/ZNEWFLD = /BIC/ZOLDFLD
        WHERE /BIC/ZEFF_MNTH IN L_EFFMTH.
*   If 'ZDSO_O04' is selected.
    WHEN 'ZDSO_O04'.
      UPDATE /BIC/AZDSO_O0400 SET /BIC/ZNEWFLD = /BIC/ZOLDFLD
        WHERE /BIC/ZEFF_MNTH IN L_EFFMTH.
  ENDCASE.
ENDIF.


All about Infocube….

· Added by Krish, last edited by Arun Varadarajan on Mar 16, 2009

InfoCube:
Definition
An object that can function as both a data target and an InfoProvider.

From a reporting point of view, an InfoCube describes a self-contained dataset, for example, of a business-orientated area. This dataset can be evaluated in a BEx query.        

An InfoCube is a quantity of relational tables arranged according to the star schema: A large fact table in the middle surrounded by several dimension tables.

Use
InfoCubes are supplied with data from one or more InfoSources or ODS objects (Basic InfoCube) or with data from a different system (RemoteCube, SAP RemoteCube, virtual InfoCube with Services, transactional InfoCube).

Structure
There are various types of InfoCube:
1.      Physical data stores:

Basic InfoCubes
Transactional InfoCubes

2.      Virtual data stores:
RemoteCube
SAP RemoteCube
Virtual InfoCube with Services

Only Basic InfoCubes and transactional InfoCubes physically contain data in the database. Virtual InfoCubes are only logical views of a dataset. By definition, they are not data targets. However, the InfoCube type is of no importance from the reporting perspective, since an InfoCube is accessed as an InfoProvider.

Integration
You can access the characteristics and key figures defined for an InfoCube in the Query Definition in the BEx Web or in the BEx Analyzer.

Basic InfoCube
Definition
A Basic InfoCube is a type of InfoCube that physically stores data. It is filled with data using BW Staging. Afterwards, it can be used as an InfoProvider in BEx Reporting.
Structure
As with other InfoCube types, the structure of a Basic InfoCube corresponds to the Star Schema.
For more information, see InfoCube
Integration
The Basic InfoCube is filled using the Scheduler, provided that Update Rules have been maintained.
It is then made available to Reporting as an InfoProvider. It can also be updated into additional data targets or used to build a MultiProvider together with other data targets.

Transactional InfoCubes
Definition
Transactional InfoCubes differ from Basic InfoCubes in their ability to support parallel write accesses. Basic InfoCubes are technically optimized for read accesses to the detriment of write accesses.
Use
Transactional InfoCubes are used in connection with the entry of planning data. See also Overview of Planning with BW-BPS. The data from this kind of InfoCube is accessed transactionally, meaning data is written to the InfoCube (possibly by several users at the same time). Basic InfoCubes are not suitable for this. You should use Basic InfoCubes for read-only access (for example, when reading reference data).
Structure
Transactional InfoCubes can be filled with data using two different methods: using the BW-BPS transactions to enter planning data, and using BW staging, in which case planning data cannot be loaded at the same time. You have the option to convert a transactional InfoCube: choose Convert Transactional InfoCube from the context menu of your transactional InfoCube in the InfoProvider tree. By default, Transactional Cube Can Be Planned, Data Loading Not Permitted is selected. Switch this setting to Transactional Cube Can Be Loaded With Data; Planning Not Permitted if you want to fill the cube with data via BW staging.

During entry of planning data, the data is written to a transactional InfoCube data request. As soon as the number of records in a data request exceeds a threshold value, the request is closed and a rollup is carried out for this request in defined aggregates (asynchronously).  You can still rollup and define aggregates, collapse, and so on, as before.

According to the database on which they are based, transactional InfoCubes differ from Basic InfoCubes in the way they are indexed and partitioned.  For an Oracle DBMS this means, for example, no bitmap index for the fact table and no partitioning (initiated by SAP BW) of the fact table according to the package dimensions.

Reduced read-only performance is accepted as a drawback of transactional InfoCubes, in the face of the parallel (transactional) writing option and improved write performance.

Creating a transactional InfoCube
Select the Transactional indicator when creating a new Basic InfoCube in the Administrator Workbench.
Converting a basic InfoCube into a transactional InfoCube
InfoCube conversion: Removing transaction data
If the Basic InfoCube already contains transaction data that you no longer need (for example, test data from the implementation phase of the system), proceed as follows:
1.      In the InfoCube maintenance in the Administrator Workbench choose, from the main menu, InfoCube → Delete Data Content. The transaction data is deleted and the InfoCube is set to "inactive".
2.      Continue with the same procedure as with creating a transactional InfoCube.
InfoCube conversion: retaining transaction data
If the Basic InfoCube already contains transaction data from the production operation you still need, proceed as follows:
Execute the ABAP report SAP_CONVERT_TO_TRANSACTIONAL, specifying the name of the corresponding InfoCube. For InfoCubes with more than 10,000 data records, you should schedule this report as a background job to avoid a potentially long runtime.
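A minimal sketch for scheduling the conversion in the background, using the standard JOB_OPEN / SUBMIT ... VIA JOB / JOB_CLOSE pattern. The selection parameter name P_INFOCUBE and the cube name ZSALESCUBE are placeholders only; check the actual selection screen of SAP_CONVERT_TO_TRANSACTIONAL on your system before using something like this.

REPORT zschedule_cube_conversion.

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_CONVERT_TRANSACTIONAL',
      lv_jobcount TYPE tbtcjob-jobcount.

START-OF-SELECTION.
* Open a background job.
  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount
    EXCEPTIONS
      OTHERS   = 1.
  IF sy-subrc <> 0.
    MESSAGE 'Could not open background job' TYPE 'E'.
  ENDIF.

* Submit the conversion report into the job.
* P_INFOCUBE / ZSALESCUBE are placeholder names.
  SUBMIT sap_convert_to_transactional
    WITH p_infocube = 'ZSALESCUBE'
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

* Release the job so that it starts immediately.
  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobcount  = lv_jobcount
      jobname   = lv_jobname
      strtimmed = 'X'
    EXCEPTIONS
      OTHERS    = 1.
  IF sy-subrc <> 0.
    MESSAGE 'Could not release background job' TYPE 'E'.
  ENDIF.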
Integration

The following typical scenarios arise for the use of transactional InfoCubes in BW-BPS.
1st Scenario:
Actual data (read-only access) and planned data (read-only and write access) have to be held in different InfoCubes. Therefore, use a Basic InfoCube for actual data and a transactional InfoCube for planned data. Data integration is achieved using a multi-planning area that contains the areas that are assigned to the InfoCubes. Access to the two different InfoCubes is controlled here by the characteristic "Planning area", which is added automatically.
2nd Scenario:
In this scenario, the planned and actual data have to be together in one InfoCube. This is the case, for example, with special rolling forecast variants. Here you have to use a transactional InfoCube, since both read-only and write accesses take place. You can no longer load data directly that has already arrived in the InfoCube by means of an upload or import source. To be able to load data nevertheless, you have to make a copy of the transactional InfoCube that is identified as a Basic InfoCube and not as transactional. Data is loaded as usual here and subsequently updated to the transactional InfoCube.

RemoteCube
Definition
A RemoteCube is an InfoCube whose transaction data is not managed in the Business Information Warehouse but externally. Only the structure of the RemoteCube is defined in BW. The data is read for reporting using a BAPI from another system.
Use
Using a RemoteCube, you can carry out reporting using data in external systems without having to physically store transaction data in BW. You can, for example, include an external system from market data providers using a RemoteCube.
By doing this, you can reduce the administrative work on the BW side and also save memory space.
Structure
When reporting using a RemoteCube, the Data Manager, instead of using a BasicCube filled with data, calls the RemoteCube BAPI and transfers the parameters.
Selection
Characteristics
Key figures
As a result, the external system transfers the requested data to the OLAP Processor.
Integration
To report using a RemoteCube you have to carry out the following steps:
1.      In BW, create a source system for the external system that you want to use.
2.      Define the required InfoObjects.
3.      Load the master data:
Create a master data InfoSource for each characteristic; load texts and attributes
4.      Define the RemoteCube
5.      Define the queries based on the RemoteCube

SAP RemoteCube
Definition
An SAP RemoteCube is a RemoteCube that allows the definition of queries with direct access to transaction data in other SAP systems.
Use
Use SAP RemoteCubes if:
You need very up-to-date data from an SAP source system
You only access a small amount of data from time to time
Only a few users execute queries simultaneously on the database.
Do not use SAP RemoteCubes if:
You request a large amount of data in the first query navigation step, and no appropriate aggregates are available in the source system
A lot of users execute queries simultaneously
You frequently access the same data
Structure
SAP RemoteCubes are defined based on an InfoSource with flexible updating. They copy the characteristics and key figures of the InfoSource. Master data and hierarchies are not read directly in the source system. They are already replicated in BW when you execute a query.
The transaction data is called during execution of a query in the source system. During this process, the selections are passed on to the InfoObjects if the transformation is only a simple mapping of the InfoObject. If you have specified a constant in the transfer rules, the data is transferred only if this constant can be fulfilled. With more complex transformations such as routines or formulas, the selections cannot be transferred. It then takes longer to read the data in the source system because the amount of data is not limited. To prevent this, you can create an inversion routine for every transfer routine. Inversion is not possible with formulas, which is why SAP recommends that you use routines instead of formulas.
Integration
To be assigned to an SAP RemoteCube, a source system must meet the following requirements:
BW Service API functions (contained in the SAP R/3 plug-in) are installed.
The Release status of the source system is at least 4.0B
In BW, a source system ID has been created for the source system
DataSources from the source system that are released for direct access are assigned to the InfoSource of the SAP RemoteCube. There are active transfer rules for these combinations.

Virtual InfoCubes with Services
Definition
A virtual InfoCube with services is an InfoCube that does not physically store its own data in BW. The data source is a user-defined function module. You have a number of options for defining the properties of the data source more precisely. Depending on these properties, the data manager provides services to convert the parameters and data.
Use
You use a virtual InfoCube with services if you want to display data from non-BW data sources in BW without having to copy the data set into the BW structures. The data can be local or remote. You can also use your own calculations to change the data before it is passed to the OLAP processor.
This function is used primarily in the SAP Strategic Enterprise Management (SEM) application.
In comparison to the RemoteCube, the virtual InfoCube with services is more generic. It offers more flexibility, but also requires more implementation effort.
Structure
When you create an InfoCube you can specify the type. If you choose Virtual InfoCube with Services as the type for your InfoCube, an extra Detail pushbutton appears on the interface. This pushbutton opens an additional dialog box, in which you define the services.
1.      Enter the name of the function module that you want to use as the data source for the virtual InfoCube. There are different default variants for the interface of this function module. One method for defining the correct variant, together with the description of the interfaces, is given at the end of this documentation.
2.      The next step is to select options for converting/simplifying the selection conditions. You do this by selecting the Convert Restrictions option. These conversions only change the transfer table in the user-defined function module. The result of the query is not changed because the restrictions that are not processed by the function module are checked later in the OLAP processor.
Options:
No restrictions: If this option is selected, no restrictions are passed to the InfoCube.
Only global restrictions: If this option is selected, only global restrictions (FEMS = 0) are passed to the function module. Other restrictions (FEMS > 0) that are created, for example, by setting restrictions on columns in queries, are deleted.
Simplify selections: Currently this option is not yet implemented.
Expand hierarchy restrictions: If this option is selected, restrictions on hierarchy nodes are converted into the corresponding restrictions on the characteristic value.
3.      Pack RFC: This option packs the parameter tables in BAPI format before the function module is called and unpacks the data table that is returned by the function module after the call is performed. Since this option is only useful in conjunction with a remote function call, you have to define a logical system that is used to determine the target system for the remote function call, if you select this option.
4.      SID support: If the data source of the function module can process SIDs, you should select this option.
If this is not possible, the characteristic values are read from the data source and the data manager determines the SIDs dynamically. In this case, wherever possible, restrictions that are applied to SID values are converted automatically into the corresponding restrictions for the characteristic values.
5.      With navigation attributes: If this option is selected, navigation attributes and restrictions applied to navigation attributes are passed to the function module.
If this option is not selected, the navigation attributes are read in the data manager once the user-defined function module has been executed. In this case, in the query, you need to have selected the characteristics that correspond to these attributes. Restrictions applied to the navigation attributes are not passed to the function module in this case.
6.      Internal format (key figures): In SAP systems a separate format is often used to display currency key figures. The value in this internal format is different from the correct value in that the decimal places are shifted. You use the currency tables to determine the correct value for this internal representation.
If this option is selected, the OLAP processor incorporates this conversion during the calculation.
Dependencies
If you use a remote function call, SID support must be switched off and the hierarchy restrictions must be expanded.
Description of the interfaces for user-defined function modules
Variant 1:
Variant 2:
Additional parameters for variant 2 for transferring hierarchy restrictions, if they are not expanded:
With hierarchy restrictions, an entry for the 'COMPOP' = 'HI' (for hierarchy) field is created at the appropriate place in table I_T_RANGE (for FEMS 0) or I_TX_RANGETAB (for FEMS > 0), and the 'LOW' field contains a number that can be used to read the corresponding hierarchy restriction from table I_TSX_HIER, using field 'POSIT' .  Table i_tsx_hier has the following type:
Variant 3:
SAP advises against using this interface.
The interface is intended for internal use only and only half of it is given here.
Note that SAP may change the structures used in the interface.
Method for determining the correct variant for the interface
The following list describes the procedure for determining the correct interface for the user-defined function module. Go through the list from top to the bottom. The first appropriate case is the variant that you should use:
If Pack RFC is activated: Variant 1
If SID Support is deactivated: Variant 2


DSO Types

· Added by B. Rajesh, last edited by Arun Varadarajan.

Hi,

A small description about DSO types in BI is as follows, just for your reference.

SAP BI DSO Types

DataStore object types:

Standard DataStore object

· Data provided using a data transfer process

· SID values can be generated

· Data records with the same key are aggregated during activation

· Data is available for reporting after activation

 

Write-optimized DataStore object

· Data provided using a data transfer process

· SID values cannot be generated

· Records with the same key are not aggregated

· Data is available for reporting immediately after it is loaded

 

DataStore object for direct update

· Data provided using APIs

· SIDs cannot be generated

· Records with the same key are not aggregated
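The difference in key handling can be illustrated with a small in-memory ABAP sketch (a hypothetical demo report, not the real activation logic; a real standard DSO may overwrite or add during activation, depending on the key-figure settings in the transformation):

REPORT zdso_key_handling_demo.

TYPES: BEGIN OF ty_rec,
         doc_number TYPE c LENGTH 10,   " semantic key
         quantity   TYPE i,             " key figure
       END OF ty_rec.

DATA: lt_standard TYPE STANDARD TABLE OF ty_rec,  " behaves like a standard DSO (condensed per key)
      lt_writeopt TYPE STANDARD TABLE OF ty_rec,  " behaves like a write-optimized DSO (keeps every record)
      ls_rec      TYPE ty_rec,
      lv_lines    TYPE i.

START-OF-SELECTION.
* Two loaded records share the same semantic key '4711'.
  ls_rec-doc_number = '4711'. ls_rec-quantity = 10.
  COLLECT ls_rec INTO lt_standard.   " condensed: quantities are summed per key
  APPEND  ls_rec TO lt_writeopt.     " kept exactly as loaded

  ls_rec-doc_number = '4711'. ls_rec-quantity = 20.
  COLLECT ls_rec INTO lt_standard.   " still one record, quantity now 30
  APPEND  ls_rec TO lt_writeopt.     " now two records

  lv_lines = lines( lt_standard ).
  WRITE: / 'Rows in the standard-like table       :', lv_lines.
  lv_lines = lines( lt_writeopt ).
  WRITE: / 'Rows in the write-optimized-like table:', lv_lines.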

And you can find more information / examples on this topic at:

http://help.sap.com/saphelp_nw04s/helpdata/en/F9/45503C242B4A67E10000000A114084/content.htm

Thanks and Regards


Delta Administration Tables and Important Fields

· Added by Vincent Judge, last edited by Arun Varadarajan on Jul 28, 2009

ROOSPRMSC 
RLOGSYS: target system 
SLOGSYS: source system 
INITRNR: init request 
DELTARNR: last delta request 
INITSTATE: if X then the delta update is active

ROOSPRMSF: selection criteria for the init request
FIELDNM 
RSLOW 
RSHIGH


RSSDLINIT
LOGSYS: source system
RNR: Init request number
INITSTATE: = X -> Delta update is active
LAST DELTA RNR: last delta update


RSSDLINITSEL: selection criteria for the init request
IOBJNM
FIELDNAME
RSLOW
RSHIGH

RSSDLINITDEL: all deleted initialisations
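As a quick check, the delta status can also be read directly from ROOSPRMSC on the source system. The following is only a sketch: the DataSource column name OLTPSOURCE is assumed, and the DataSource and logical system values are placeholders.

REPORT zcheck_delta_init.

DATA ls_prmsc TYPE roosprmsc.

START-OF-SELECTION.
  SELECT SINGLE * FROM roosprmsc
    INTO ls_prmsc
    WHERE oltpsource = '2LIS_11_VAITM'   " placeholder DataSource
      AND rlogsys    = 'BWPCLNT100'.     " placeholder BW target system

  IF sy-subrc = 0 AND ls_prmsc-initstate = 'X'.
    WRITE: / 'Delta update is active.',
           / 'Init request :', ls_prmsc-initrnr,
           / 'Last delta   :', ls_prmsc-deltarnr.
  ELSE.
    WRITE / 'No active delta initialisation found.'.
  ENDIF.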


How-to enhance LO-Cockpit DataSources with delta-relevant fields

· Added by Davide Cavallari, last edited by Arun Varadarajan on Mar 09, 2010

Summary

We need to add a user-defined field to a LO-Cockpit DataSource. This field can change with no other field of the standard extract structure changing at the same time. We want this change to be registered by the DataSource's delta queue.

In the following example we will see how to add a custom field for the sales rep specified in a sales document. We will assume that the sales rep is modeled as a line-item partner function, and we will therefore enhance the DataSource 2LIS_11_VDITM (Sales Document Item Data).

Author: Davide Cavallari

Company: Trilog S.p.A.
Created on: 07/09/2009
Author Bio
After studying piano and composition, taking a degree in physics and working for two years in a neuroscience laboratory, Davide got involved with IT and Web technologies. After joining Trilog S.p.A., he moved to the SAP Netweaver platform, focusing on the Information Integration layer (BI, KM, Search and Classification). His main focus now is on business requirements gathering and data modelling for the SAP BI platform.

Table of Contents

· Summary

· Author: Davide Cavallari

· Introduction

· 1. Enhancing the LIS communication structure

· 2. Customizing the extract structure

· 3. Implementing the extraction logic for delta process

· 4. Implementing the extraction logic for the initialisation run

· Related Content


Introduction

The extraction process for the Logistics DataSources (those found in the LO Customizing Cockpit, transaction code LBWE) is pretty complex. No wonder that, when you need to add a custom field to one of these DataSources, you have to pay especially careful attention to avoid losing data from the DataSource's extraction or delta queues. On top of that, if your custom field is delta relevant as well, you may well end up with a broken delta process, where document changes are not always sent to the delta queue.

Here delta relevant means that when this field changes, the change should be picked up by the V3 delta process, even though no other field in the standard extract structure changes. It can be difficult to figure out exactly which steps are needed to ensure that the delta process works; although there is some good information around, such as this one, all in all I find the available documentation pretty fragmented.

In the following I will concisely discuss an example which illustrates how to add a custom field for the sales rep specified in a sales document. We will assume that the sales rep is modeled as a line-item partner function, which in our example will have the value 'Z1'. We will therefore enhance the DataSource 2LIS_11_VDITM (Sales Document Item Data). I will not give details on how to use the LO Customizing Cockpit (LBWE) or how to implement an enhancement through CMOD, though. If you need information about these tools, you should look for specific documents on those topics. I do not even explain the concept of before- and after-image records; should you need such information, please refer to the related content section at the end of this document.

1. Enhancing the LIS communication structure

First of all, we need to add an extra field (YYKAM, in the example) in the LIS communication structure for sales documents' item data (MCVBAP).

We will insert an append structure (ZAMCVBAPUS, in the example) into the include structure MCVBAPUSR, and insert the new field YYKAM into this append structure:


2. Customizing the extract structure

The custom field YYKAM is now available in the LO Customizing Cockpit (transaction code LBWE), so we can add it to the extract structure MC11VA0ITM:


3. Implementing the extraction logic for delta process

In this step we will implement the code for extracting the value of the sales rep when a sales order's line item is created, modified or deleted (delta process). In the last step, instead, we will implement the extraction logic needed when performing the initialisation of a delta process.

The code for the delta extraction has to be written into the function EXIT_SAPLMCS1_002 (this function refers to the sales orders' line items) of the enhancement MCS10001 (SIS: Statistics update, sales documents). This can be done via the transaction CMOD:


Every time a line item is changed, deleted, or added, the EXIT function is called twice: one execution processes the status of the data before the change (the before-image), while the other processes the status after the change (the after-image).

The field YYKAM has to be filled with the value of the sales rep before or after the change, depending on whether the record is the before- or after-image. We can figure out which record is being processed through the field i_xmcvbap-supkz, its value being 1 for the before-image or 2 for the after-image record:

*----------------------------------------------------------------------*
*   INCLUDE ZXMCVU02                                                   *
*----------------------------------------------------------------------*
*  importing   VALUE(I_XMCVBAK) LIKE  MCVBAKB STRUCTURE  MCVBAKB
*"             VALUE(I_XMCVBUK) LIKE  MCVBUKB STRUCTURE  MCVBUKB
*"             VALUE(I_XMCVBAP) LIKE  MCVBAPB STRUCTURE  MCVBAPB
*"             VALUE(I_XMCVBUP) LIKE  MCVBUPB STRUCTURE  MCVBUPB
*"             VALUE(I_XMCVBKD) LIKE  MCVBKDB STRUCTURE  MCVBKDB
*"             VALUE(I_CONTROL) LIKE  MCCONTROL STRUCTURE  MCCONTROL
*"       EXPORTING
*"             VALUE(E_XMCVBAPUSR) LIKE  MCVBAPUSR
*"                             STRUCTURE  MCVBAPUSR
*----------------------------------------------------------------------*
*
* [...]
*
[...]

DATA lv_old_kz VALUE '1'.
DATA lv_new_kz VALUE '2'.

[...]

CASE i_xmcvbap-supkz.

** before-image record
  WHEN lv_old_kz.

    [...]

** after-image record
  WHEN lv_new_kz.

    [...]

ENDCASE.
[...]

Since the sales rep is a partner function, its before- and after-image values will be read from the internal tables YVBPA and XVBPA respectively. These two internal tables are defined and filled in the program SAPMV45A, and to access their content we need to reference them through the ABAP statement ASSIGN:

FIELD-SYMBOLS: <fs_yvbpa> TYPE table.
FIELD-SYMBOLS: <fs_xvbpa> TYPE table.

DATA lv_old_kz VALUE '1'.
DATA lv_new_kz VALUE '2'.
DATA tb_pa LIKE vbpavb OCCURS 0 WITH HEADER LINE.
REFRESH tb_pa. CLEAR tb_pa.

CASE i_xmcvbap-supkz.

** before-image record
  WHEN lv_old_kz.
    " reference to the before-image table
    ASSIGN ('(SAPMV45A)YVBPA[]') TO <fs_yvbpa>.
    IF sy-subrc EQ 0.
      tb_pa[] = <fs_yvbpa>.

      [...]

    ENDIF.

** after-image record
  WHEN lv_new_kz.
    " reference to the after-image table
    ASSIGN ('(SAPMV45A)XVBPA[]') TO <fs_xvbpa>.
    IF sy-subrc EQ 0.
      tb_pa[] = <fs_xvbpa>.
    ENDIF.

ENDCASE.
[...]

When users modify the sales rep in an order line item, they can enter the original value again. When this happens, the before-image and the after-image values for the field YYKAM have to be the same. In this case, however, the table YVBPA does not contain the before-image value, so we will need to fetch it from the after-image table XVBPA:

** before-image record
WHEN lv_old_kz.
  " reference to the before-image table
  ASSIGN ('(SAPMV45A)YVBPA[]') TO <fs_yvbpa>.
  IF sy-subrc EQ 0.
    tb_pa[] = <fs_yvbpa>.

    READ TABLE tb_pa WITH KEY parvw = 'Z1'.
    " Z1 is the sales rep's partner function
    IF sy-subrc NE 0.
      " when the user does not change the sales rep, the Y- table does not
      " contain the partner function Z1, so the before-image state has to
      " be read from the X- table
      ASSIGN ('(SAPMV45A)XVBPA[]') TO <fs_xvbpa>.
      IF sy-subrc EQ 0.
        tb_pa[] = <fs_xvbpa>.
      ENDIF.
    ENDIF.

  ENDIF.

Here is the complete example code:

*----------------------------------------------------------------------*
*   INCLUDE ZXMCVU02                                                   *
*----------------------------------------------------------------------*
*  importing   VALUE(I_XMCVBAK) LIKE  MCVBAKB STRUCTURE  MCVBAKB
*"             VALUE(I_XMCVBUK) LIKE  MCVBUKB STRUCTURE  MCVBUKB
*"             VALUE(I_XMCVBAP) LIKE  MCVBAPB STRUCTURE  MCVBAPB
*"             VALUE(I_XMCVBUP) LIKE  MCVBUPB STRUCTURE  MCVBUPB
*"             VALUE(I_XMCVBKD) LIKE  MCVBKDB STRUCTURE  MCVBKDB
*"             VALUE(I_CONTROL) LIKE  MCCONTROL STRUCTURE  MCCONTROL
*"       EXPORTING
*"             VALUE(E_XMCVBAPUSR) LIKE  MCVBAPUSR
*"                             STRUCTURE  MCVBAPUSR
*----------------------------------------------------------------------*
*
*  See SAP Note 216448 for information on 'before-' and 'after-image'
*----------------------------------------------------------------------*

FIELD-SYMBOLS: <fs_yvbpa> TYPE table.
FIELD-SYMBOLS: <fs_xvbpa> TYPE table.

DATA lv_old_kz VALUE '1'.
DATA lv_new_kz VALUE '2'.
DATA tb_pa LIKE vbpavb OCCURS 0 WITH HEADER LINE.
REFRESH tb_pa. CLEAR tb_pa.

CASE i_xmcvbap-supkz.

** before-image record
  WHEN lv_old_kz.
    " reference to the before-image table
    ASSIGN ('(SAPMV45A)YVBPA[]') TO <fs_yvbpa>.
    IF sy-subrc EQ 0.
      tb_pa[] = <fs_yvbpa>.

      READ TABLE tb_pa WITH KEY parvw = 'Z1'.
      IF sy-subrc NE 0.
        " when the user does not change the sales rep, the Y- table does not
        " contain the partner function Z1, so the before-image state has to
        " be read from the X- table
        ASSIGN ('(SAPMV45A)XVBPA[]') TO <fs_xvbpa>.
        IF sy-subrc EQ 0.
          tb_pa[] = <fs_xvbpa>.
        ENDIF.
      ENDIF.

    ENDIF.

** after-image record
  WHEN lv_new_kz.
    " reference to the after-image table
    ASSIGN ('(SAPMV45A)XVBPA[]') TO <fs_xvbpa>.
    IF sy-subrc EQ 0.
      tb_pa[] = <fs_xvbpa>.
    ENDIF.

ENDCASE.

** we take the line-item value unless it is not present,
** in which case we take the header value
READ TABLE tb_pa WITH KEY posnr = i_xmcvbap-posnr
                          parvw = 'Z1'.
IF sy-subrc NE 0.
  READ TABLE tb_pa WITH KEY posnr = '000000'
                            parvw = 'Z1'.
ENDIF.

IF sy-subrc EQ 0.
  MOVE tb_pa-lifnr TO e_xmcvbapusr-yykam.
ENDIF.

4. Implementing the extraction logic for the initialisation run

The code in the EXIT above is only run when a sales document line is created, modified, or deleted (delta process). However, that code is not executed during the initialisation of the delta process, i.e. when all data from setup tables are loaded in BW.

In order for the extraction to take place also during the initialisation run, we need to implement the same extraction logic in the function EXIT_SAPLRSAP_001 (normally used to enhance non-LO transactional DataSources) of the enhancement RSAP0001 (Customer function calls in the service API).

Since we have already implemented the extraction logic for the delta process, we have to make sure that the code we put here is executed only when data are requested in full mode, i.e. when the field i_updmode has the value 'F' (transfer of all requested data), 'C' (initialization of the delta transfer), 'S' (simulation of initialization of the delta transfer), or 'I' (transfer of an opening balance for non-cumulative values, not relevant in our case), but not when its value is 'D' (transfer of the delta since the last request) or 'R' (repetition of the transfer of a data packet):

[...]

CASE i_datasource.

  [...]

  WHEN '2LIS_11_VAITM'.
    DATA: s_mc11va0itm LIKE mc11va0itm.

    LOOP AT c_t_data INTO s_mc11va0itm.
      wk_tabx = sy-tabix.

*---- The following code must be executed during initialisation only
*---- (the extraction logic for the delta update is in EXIT_SAPLMCS6_002)
*
** we take the line-item value unless it is not present,
** in which case we take the header value

*     only during initialisation
      IF i_updmode EQ 'F' OR  " F  Transfer of all requested data
         i_updmode EQ 'C' OR  " C  Initialization of the delta transfer
         i_updmode EQ 'S' OR  " S  Simulation of initialization of delta transfer
         i_updmode EQ 'I'.    " I  Transfer of an opening balance for non-cumulative values
*       D  Transfer of the delta since the last request
*       R  Repetition of the transfer of a data packet

        SELECT SINGLE lifnr INTO l_lifnr
          FROM vbpa
          WHERE vbeln = s_mc11va0itm-vbeln AND
                posnr = s_mc11va0itm-posnr AND
                parvw = 'Z1'.

        IF sy-subrc NE 0.
          SELECT SINGLE lifnr INTO l_lifnr
            FROM vbpa
            WHERE vbeln = s_mc11va0itm-vbeln AND
                  posnr = '000000' AND
                  parvw = 'Z1'.
        ENDIF.

        IF sy-subrc EQ 0.
          MOVE l_lifnr TO s_mc11va0itm-yykam.
        ENDIF.
      ENDIF.

Related Content

· SAP Network Blog LOGISTIC COCKPIT - WHEN YOU NEED MORE - First option: enhance it !

· SAP Note 576886: Change to user-defined fields not extracted (SMP login required)

· SAP Note 216448: BW/SIS: Incorrect update / SD user exit (SMP login required)

· SAP Note 757361: Additional data records in BW when document changed (SMP login required)


Data Transfer Process and Error handling process

Introduction about Data Transfer Process

  1. You use the data transfer process (DTP) to transfer data within BI from a persistent object to another object in accordance with certain transformations and filters. In this respect, it replaces the data mart interface and the InfoPackage. As of SAP NetWeaver 2004s, the InfoPackage only loads data to the entry layer of BI (the PSA).
  2. The data transfer process makes the transfer processes in the data warehousing layer more transparent. Optimized parallel processing improves the performance of the transfer process. You can use the data transfer process to separate delta processes for different targets, and you can use filter options between the persistent objects at various levels. For example, you can use filters between a DataStore object and an InfoCube.
  3. Data transfer processes are used for standard data transfer, for real-time data acquisition and for accessing data directly. 

Interesting Benefits of New Data Transfer Process

  1. Loading data from one layer to other layers (except InfoSources).
  2. Separation of the delta mechanism for different data targets.
  3. Enhanced filtering in the data flow.
  4. Improved transparency of staging processes across data warehouse layers.
  5. Improved performance: optimized parallelization.
  6. Enhanced error handling in the form of an error stack.
  7. Enables real-time data acquisition.

Most important advantage in Data Transfer Process

  1. Delta logic can be handled separately for separate data targets.
  2. Delta logic is part of the DTP.
  3. Example of delta-logic separation: one source PSA and two targets, one DSO keeping daily data and the other keeping weekly data.

Five processes for handling errors in DTP

Process # 1 - Enhanced Filtering, Debugging and Error Handling Options

Process # 2 - Handling Data Records With Errors

  1. Using the error handling settings on the Update tab page in the data transfer process, when data is transferred from a DTP source to a DTP target, you can specify how the system is to react if errors occur in the data records.
  2. These settings were previously made in the Info Package. When using data transfer processes, Info Packages write to the PSA only. Error handling settings are therefore no longer made in the Info Package, but in the data transfer process

Process # 3 - Error Handling Features

  1. Possibility to choose in the scheduler to:
    • Abort the process when errors occur
    • Process the correct records but do not allow reporting on them
    • Process the correct records and allow reporting on them
  2. Number of wrong records which leads to a wrong request
  3. Invalid records can be written into an error stack
  4. Keys should be defined for the error stack to enable error handling for DataStore objects
  5. Temporary data storage can be switched on/off for each substep of the loading process
  6. Invalid records can be updated into the data targets after their correction


Process # 4 - Error Stack

  1. Stores erroneous records
  2. Keeps the right sequence of records, for consistent DataStore handling
  3. The key of the error stack defines which data should be detained from the update after the erroneous data record
  4. After correction, the error DTP updates data from the error stack to the data target

Note: Once the request in the source object is deleted, the related data records in the error stack are automatically deleted.

    1. Key of the error stack = semantic group
    2. Subset of the key of the target object:
      1. Max. 16 fields
      2. Defines which data should be detained from the update after an erroneous data record (for DataStore objects)
      3. The bigger the key, the fewer records will be written to the error stack

Process # 5 - Temporary Data Storage
  1. In order to analyze the data at various stages, you can activate temporary storage in the DTP
  2. This allows you to determine the reasons for errors

I enjoyed sharing my knowledge. I hope the above information is useful; please provide your feedback. Thank you.

Related link: http://help.sap.com/saphelp_nw2004s/helpdata/en/42/f98e07cc483255e10000000a1553f7/frameset.htm


How to Restore Query into Older Version

 

  • Added by Mahesh Kumar, last edited by Arun Varadarajan
    Once older-version (3.x) queries are migrated to the latest version (7.0), a query can still be restored to 3.x after migration. The backup for any query created in 3.x is taken when it is first opened for editing in the 7.0 Query Designer. The backup contains the last change made to the query using the 3.x editor; any changes made in the 7.0 Query Designer will be lost on restore. A query originally created in 7.0 cannot be restored to older versions, as there is no 3.x backup.

   Queries can be restored to 3.x version using program COMPONENT_RESTORE.

Steps for restoring a query to the 3.x version:

Step 1: Execute the program COMPONENT_RESTORE in SE38.


Step 2: On the next screen, select the InfoProvider and the component type. Different component types are available.

Step 3: Select REP as the component type to revert the query.

Step 4: Execute (F8). The next screen displays all queries for that particular InfoProvider.

Step 5: Search for the query that you want to revert to the older version.

Step 6: Choose Transfer Selection. The system then displays a confirmation prompt.

Step 7: Once you select Yes, the query is successfully restored to the older version.


BI FAQ - Important Transaction Codes in SAP Business Intelligence

 

  • Added by Arun Varadarajan, last edited by Arun Varadarajan

Compiled below is a list of transaction codes used in SAP BI:
 

Tcode Description
DB02 Tables and Indexes Monitor
DB14 Display DBA Operation Logs
DB16 Display DB Check Results
DB20 Update DB Statistics
KEB2 Display Detailed Info on CO-PA DataSource (R/3)
LISTCUBE List viewer for InfoCubes
LISTSCHEMA Show InfoCube schema
LBWE LO Data Extraction: Customizing Cockpit
LBWF BW Log
LBWG Delete Setup Tables
OLI*BW Fill Setup Tables (* = application component, e.g. OLI7BW for SD Sales Orders)
OS06 Local Operating System Activity
OB08 Currency Exchange Rates
RSA1 Administrator Workbench
RSA2 OLTP Metadata Repository
RSA3 Extractor Checker
RSA5 Install Business Content
RSA6 Maintain DataSources
RSA7 BW Delta Queue Monitor
RSA8 DataSource Repository
RSA9 Transfer Application Components
RSA11 Calling up AWB with the IC tree
RSA12 Calling up AWB with the IS tree
RSA13 Calling up AWB with the LG tree
RSA14 Calling up AWB with the IO tree
RSA15 Calling up AWB with the ODS tree
RSBBS Maintain Query Jumps (RRI Interface)
RSBICA BI Content Analyzer
RSCUSTA Maintain BW Settings
RSCUSTA2 ODS Settings
RSCUSTV* Maintain BW Customizing settings (various views)
RSD1 Characteristic maintenance
RSD2 Maintenance of key figures
RSD3 Maintenance of units
RSD4 Maintenance of time characteristics
RSD5 Edit InfoObjects
RSDBC DB connect
RSDCUBE Start: InfoCube editing
RSDCUBED Start: InfoCube editing
RSDCUBEM Start: InfoCube editing
RSDDBIAMON BI Accelerator Monitor
RSDDV Maintaining Aggregates / BIA Indexes
RSDIOBC Start: InfoObject catalog editing
RSDIOBCD Start: InfoObject catalog editing
RSDIOBCM Start: InfoObject catalog editing
RSDL DB Connect - Test Program
RSDMD Master Data Maintenance w.Prev. Sel.
RSDMD_TEST Master Data Test
RSDMPRO Initial Screen: MultiProvider Proc.
RSDMPROD Initial Screen: MultiProvider Proc.
RSDMPROM Initial Screen: MultiProvider Proc.
RSDMWB Customer Behavior Modeling
RSDODS Initial Screen: ODS Object Processing
RSDS Data Source Repository
RSIMPCUR Load Exchange Rates from File
RSINPUT Manual Data Entry
RSIS1 Create InfoSource
RSIS2 Change InfoSource
RSIS3 Display InfoSource
RSISET Maintain InfoSets
RSKC Maintaining the Permitted Extra Chars
RSLGMP Maintain RSLOGSYSMAP
RSMO Data Load Monitor Start
RSMON BW Administrator Workbench
RSOR BW Metadata Repository
RSORBCT BI Business Content Transfer
RSORMDR BW Metadata Repository
RSPC Process Chain Maintenance
RSPC1 Process Chain Display
RSPCM Monitor daily process chains
RSPLAN Modeling BI Integrated Planning
RSPLSE BI Planning Lock Management
RSRCACHE OLAP: Cache Monitor
RSRT Start of the report monitor
RSRT1 Start of the Report Monitor
RSRT2 Start of the Report Monitor
RSRTRACE Set trace configuration
RSRTRACETEST Trace tool configuration
RSRV Analysis and Repair of BW Objects
RSSM Authorizations for Reporting
RZ20 To see log for Process Chains
SE03 Transport Organizer Tools
SE06 Set Up Transport Organizer
SE07 CTS Status Display
SE09 Transport Organizer
SE10 Transport Organizer
SE11 ABAP Dictionary
SE18 Business Add-Ins: Definitions
SE18_OLD Business Add-Ins: Definitions (Old)
SE19 Business Add-Ins: Implementations
SE19_OLD Business Add-Ins: Implementations
SE21 Package Builder
SE24 Class Builder
SE80 Object Navigator
SE93 Transaction Code Maintenance
SM04 User List
SM12 Display and Delete Locks
SM21 Online System Log Analysis
SM37 Overview of job selection
SM50 Work Process Overview
SM51 List of SAP Systems
SM58 Asynchronous RFC Error Log
SM59 RFC Destinations (Display/Maintain)
SM66 Global work process Monitor
SMQ1 qRFC Monitor (Outbound Queue)
SMQ2 qRFC Monitor (Inbound Queue)
ST22 ABAP Runtime Error (Dumps)
ST14 BW Evaluation Application Analysis
WE02 Display IDoc
WE05 IDoc Lists
WE06 Active IDoc monitoring
WE07 IDoc statistics
WE08 Status File Interface
WE09 Search for IDoc in Database
WE10 Search for IDoc in Archive
WE11 Delete IDocs
WE12 Test Modified Inbound File
WE14 Test Outbound Processing
WE15 Test Outbound Processing from MC
WE16 Test Inbound File
WE17 Test Status File
WE18 Generate Status File
WE19 Test tool
WE20 Partner Profiles
WE21 Port definition
WE23 Verification of IDoc processing

DSO SERIES


The Tech details of Standard ODS / DSO in SAP DWH

Raj
A long time ago I was sitting back thinking about the architecture of the ODS and how invaluable it has been since its inception with BW 2.0 in the BW Data Warehouse layer (DWH). This motivated me to write this blog with the necessary technical details of the ODS - or, as it is called as of NW04s / BI 7.0, the DataStore Object (DSO).

"An Operational Data Store object (ODS object) is used to store consolidated and cleansed data (transaction data or master data for example) on a document level (atomic level)" - Refered from SAP Docs.It describes a consolidated dataset from one or more Info Sources / transformations (7.0) as illustrated below in Fig.1.
In this blog we will look at the Standard Data Store Object. We have other types namely Data Store Object with Direct Update (Transactional ODS in 3.x) and Write Optimized Data Store new with BI 7.x which contains only Active data table used to manage huge data loads for instance - Here is the link from Help portal Write optimised DSO

Architecture of Standard ODS /DSO (7.x)
"ODS Objects consist of three tables as shown in the Architecture graphic below" - Refered from SAP Docs:


Figure 1: ODS Architecture - Extracted from SAP Docs

TIP: The new data status is written to the active data table in parallel with the write to the change log, taking advantage of parallel processes, which can be customized globally or at the object level in the system.

Let's go through a scenario.
In this example we take the master data object material-plant (0MAT_PLANT compounded with 0PLANT) with a few attributes for demonstration purposes. Now define an ODS / DSO as below, where material and plant form the key and the corresponding attributes are data fields.


Figure 2: ODS / DSO definition

Let's create a flat file DataSource (a 3.x InfoSource in this example, to simplify the scenario) with all the InfoObjects we have defined in the ODS structure.


Figure 3: Info source definition
Let's check the flat file records. Remember that the key fields are plant and material, and we have a duplicate record, as shown in Fig. 4 below. The 'Unique Data Records' option is unchecked, which means duplicate records are expected.

Figure 4: Flat file Records
Check the monitor entries: 3 records are transferred to the update rules, but only two records are loaded into the new data table, as we haven't activated the request yet. This is because we have a duplicate record for the key in the ODS, which gets overwritten (check the first two records in Fig. 4).

Figure 5: Monitor Entries
Now check the data in the new data / activation queue table: we have only two records, as the duplicate record gets overwritten by the most recent record with the same material and plant key (compare the first two records in the PSA).

Figure 6: Activation Queue

Figure 7: PSA data for comparison
Tip: Key figures have the overwrite option by default; additionally, a summation option is available to suit certain scenarios, and characteristics are always overwritten. The technical name of the new data / activation queue table is /BIC/A<ODS name>40 for customer objects and /BI0/A<ODS name>40 for SAP objects.
Once we activate the data, we have two records in the ODS active data table. As we see below, the active data table always contains the semantic key (material, plant).

Figure 8: Active Data Table
TIP: The name of the active table is /BIC/A<ODS name>00 (/BI0/A<ODS name>00 for SAP objects).
The change log table has these 2 entries with the new image (N). Remember the record mode; we will look into it later. The technical key (request ID, data packet ID, record number) is part of the change log.

Figure 9: Change Log Table
TIP: The technical name of the change log table is always /BIC/B<internally generated number>.
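As a quick illustration of these naming conventions, here is a minimal ABAP sketch that reads a few rows from the generated active table of a hypothetical customer DSO named ZSALES01 (the DSO name, and therefore the table names, are assumptions for illustration only):

* Assumption: a customer DSO named ZSALES01 exists. Its generated tables
* then follow the conventions described above:
*   Activation queue : /BIC/AZSALES0140
*   Active data      : /BIC/AZSALES0100
*   Change log       : /BIC/B<internally generated number>
DATA lt_active TYPE STANDARD TABLE OF /bic/azsales0100.

SELECT *
  FROM /bic/azsales0100
  INTO TABLE lt_active
  UP TO 10 ROWS.                      "peek at a few active records
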
Now we add two new records (material 75 / plant 1 and material 80 / plant 1) and change the existing record for the key material 1 / plant 1, as below.

Figure 10: Add more records
When we look at the monitor, there are 3 records in the activation queue table, as the duplicate record gets filtered out - in this example the first record in Fig. 10.

Figure 11: Monitor
Look at the new data table (activation queue): we have the 3 records that were updated, as seen in the monitor.

Figure 12: Activation Queue
How does the change log work?
We check the change log table to see how the deltas are handled. The highlighted records are from the first request, which is uniquely identified by the technical key (request number, data packet number, partition value of the PSA and data record number).

Figure 13: Change log Table 1

With the second load, i.e. the second request, the change log table writes the before and after images for the relevant records (the non-highlighted part of Fig. 13).

In the above example, material 1 / plant 1 has the before image with record mode 'X' (row 3 in the figure above), and all the key figures carry a '-' sign because we opted for the overwrite option; the characteristics are always overwritten.

Figure 14: Change log Table 2
The after image ' ' reflects the change in the data record (check row 4 in the figure above). We have changed the characteristic profit center from SECOND to SE, and the key figure processing time is changed from 1 to 2. A new record (last row in the figure above) is added with the status 'N', as it is a new record.
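To make the sign handling concrete: for the changed record material 1 / plant 1, the before image (record mode 'X') carries processing time = -1 and the after image (record mode ' ') carries processing time = 2. A delta update from the change log into an additive target therefore posts -1 + 2 = +1, which, combined with the +1 already loaded from the first request, yields the new value 2.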

Summary
This gives us an overview of the standard ODS object and how the change log works. The various record modes available are:

Figure 15: Record Modes
Check note 399739 for the details of the record mode. The record mode(s) that a particular DataSource uses for its delta mechanism largely depend on the type of extractor. Check table RODELTM for the BW delta process methods and the record modes they use, as well as our well-known table ROOSOURCE for the extractor-specific delta method.
For instance, the LO Cockpit extractors use the 'ABR' delta method, which supplies after images, before images, new images and reverse images. Extractors in HR and Activity-Based Costing use the delta method 'ADD', i.e. record mode 'A', and the FI-GL/AR/AP extractors are based on the delta method 'AIE', i.e. record mode space ' ' (after image). The list goes on.
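As a small sketch of how to check this yourself, the delta method of an extractor can be read from ROOSOURCE in the source system (the DataSource 2LIS_11_VAITM below is just an example):

* Look up the delta method (e.g. 'ABR', 'ADD', 'AIE') of a DataSource.
DATA lv_delta TYPE roosource-delta.

SELECT SINGLE delta
  FROM roosource
  INTO lv_delta
  WHERE oltpsource = '2LIS_11_VAITM'
    AND objvers    = 'A'.

WRITE: / 'Delta method:', lv_delta.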

Raj is an SAP certified NW BI consultant.


LO COCKPIT - V3 update: Questions and answers

Added By Vinay

Question 1
Update records are written to SM13, although you do not use the extractors from the Logistics Cockpit (LBWE) at all.
Active DataSources have been accidentally delivered in a PI patch. For that reason, extract structures are set to active in the Logistics Cockpit. Go to transaction LBWE and deactivate the active structures. From then on, no additional records are written into SM13.
If the system displays update records for application 05 (QM) in transaction SM13, even though the structure is not active, see note 393306 for a solution.

Question 2
How can I selectively delete update records from SM13?
Start report RSM13005 for the respective module (e.g. MCEX_UPDATE_03).
  • Status COL_RUN INIT: without Delete_Flag but with VB_Flag (the records are updated).
  • Status COL_RUN OK: with Delete_Flag (the records are deleted for all modules with COL_RUN = OK).
    With the IN_VB flag, data is only deleted if there is no delta initialization; otherwise, the records are updated. MAXFBS: the number of records processed without a commit.
ATTENTION: The delta records are deleted irrevocably after executing report RSM13005 (without flag IN_VB). You can reload the data into BW only with a new delta-initialization!
Question 3
What can I do when the V3 update loops?
Refer to note 0352389. If you need a fast solution, simply delete all entries from SM13 (executed for V2); however, this does not solve the actual problem.
ATTENTION: THIS CAUSES DATA LOSS. See question 2 !
Question 4
Why has SM13 not been emptied even though I have started the V3 update?
  • The update record in SM13 contains several modules (for example, MCEX_UPDATE_11 and MCEX_UPDATE_12). If you start the V3 update only for one module, then the other module still has INIT status in SM13 and is waiting for the corresponding collective run. In some cases, the entry might also not be deleted if the V3 update has been started for the second module. In this case, schedule report RSM13005 with the DELETE_FLAG (see question 2).
  • V3 updating no longer functions after the PI upgrade because you did not load all the delta records into the BW system prior to the upgrade. Proceed as described in note 328181.
Question 5
The entries from SM13 have not been retrieved even though I followed note 0328181!
Check whether all entries were actually deleted from SM13 for all clients. Look for records within the last 25 years with user * .
Question 6
Can I schedule V3 update in parallel?
The V3 update already uses collective processing. You cannot run it in parallel.

Question 7
The Logistics Cockpit extractors deliver incorrect numbers. The update contains errors!
Have you installed the most up-to-date PI in your OLTP system?
You should have at least PI 2000.1 patch 6 or PI 2000.2 patch 2.

Question 8
Why has no data been written into the delta queue even though the V3 update was executed successfully?
You have probably not started a delta initialization. You have to start a delta initialization for each DataSource from the BW system before you can load the delta. Check RSA7 for an entry with a green status for the required DataSource. Refer also to note 0380078.

Question 9
Why does the system write data into the delta queue, even though the V3 update has not been started?
You are using automatic goods receipt posting (transaction MRRS) and start it in the background. In this case the system writes the records for DataSources of application 02 directly into the delta queue (RSA7). This does not cause duplicate data records and does not result in any inconsistencies.

Question 10
Why am I not able to carry out a structural change in the Logistics Cockpit although SM13 is blank?
Inconsistencies occurred in your system. There are records in update table VBMOD for which there are no entries in table VBHDR. Due to those missing records, there are no entries in SM13. To remove the inconsistencies, follow the instructions in the solution part of note 67014. Please note that in any case no postings must be made in the system during the reorganization!

Question 11
Why is it impossible to schedule a V3 job from the Logistics Cockpit?
The job always terminates immediately. Due to missing authorizations, the update job cannot be scheduled. For further information see note 445620.
Posted by Vinay

Best Practice for Data Source Enhancement

Added by MAYURI SINHA

Data Extractor Enhancement - Overview of Approach
Following are the steps for data source enhancements:

The flowchart explains the steps we need to follow while enhancing a standard LIS extractor.
Determine the fields with which the extractor is to be enhanced. Recheck that these fields are not already provided by SAP; if they are, we need not enhance them. As a next check, look at the LBWE field pool, which offers some extra fields that can easily be added to the DataSource.
If these fields do not exist in the LBWE pool either, we need to enhance the DataSource.
The flowchart above gives an overview; here we go through each of those steps in detail. The approach we are going to discuss is the function module approach to DataSource enhancement.
Steps of Data Source Enhancement - Function Module Approach
Step 1: Go to T Code CMOD and choose the project you are working on.
Step 2: Choose the exit which is called when the data is extracted. 
Step 3: This is the step where we have a difference from the normal approach.

Normal Approach: CMOD Code

Code Sample:

    WHEN '2LIS_05_Q0ACTY'.

-------
Note: This is just sample code; the logic will vary according to the requirements. In this normal approach, which is followed in most BW implementations, we write ABAP CASE/WHEN conditions, one branch per DataSource, as fleshed out in the sketch below.
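A minimal sketch of the classic approach in include ZXRSAU01 (EXIT_SAPLRSAP_001): the extract structure MC05Q0ACT and the appended field ZZPRIO are assumptions used purely for illustration.

* Classic approach: one WHEN branch per DataSource inside ZXRSAU01.
DATA l_s_data TYPE mc05q0act.          "extract structure (assumption)

CASE i_datasource.
  WHEN '2LIS_05_Q0ACTY'.
    LOOP AT c_t_data INTO l_s_data.
*     fill the appended field, e.g. via a lookup on a customer table
      l_s_data-zzprio = 'A'.
      MODIFY c_t_data FROM l_s_data.
    ENDLOOP.
ENDCASE.
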
Function Module Approach: CMOD Code
*&---------------------------------------------------------------------*
*&  Include           ZXRSAU01                                         *
*&---------------------------------------------------------------------*
DATA: L_FNAME  TYPE RS38L_FNAM,
      L_EXP_FNAME type rs38l_fnam,
      L_EXP_ACTIVE TYPE RS38L_GLOB,
      L_ACTIVE TYPE RS38L_GLOB,
      L_S_SELECT TYPE RSSELECT.
SELECT SINGLE FUNC
         INTO L_FNAME

         FROM ZTEST

        WHERE DSNAM  = I_DATASOURCE.
*&---------------------------------------------------------------------*
* Check to see if a local version of the extractor exists
*&---------------------------------------------------------------------*
IF L_FNAME IS NOT INITIAL.
  CALL FUNCTION 'RS_FUNCTION_ACTIVE_CHECK'
    EXPORTING
      FUNCNAME  = L_FNAME
    IMPORTING
      ACTIVE    = L_ACTIVE
    EXCEPTIONS
      NOT_FOUND = 1
      OTHERS    = 2.
  IF SY-SUBRC EQ 0 AND L_ACTIVE IS NOT INITIAL.
    CALL FUNCTION L_FNAME
      EXPORTING
        I_DATASOURCE             = I_DATASOURCE
        I_ISOURCE                = I_ISOURCE
        I_UPDMODE                = I_UPDMODE
      TABLES
        I_T_SELECT               = I_T_SELECT
        I_T_FIELDS               = I_T_FIELDS
        C_T_DATA                 = C_T_DATA
        C_T_MESSAGES             = C_T_MESSAGES
      EXCEPTIONS
        RSAP_CUSTOMER_EXIT_ERROR = 1.
  ENDIF.                               " IF SY-SUBRC EQ 0...
ELSE.
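* No entry maintained in ZTEST for this DataSource: fall back to the
* naming convention ZTEST_<DataSource> and call that FM if it is active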
  CLEAR L_FNAME.
  CONCATENATE 'ZTEST_' I_DATASOURCE INTO L_FNAME.
  CALL FUNCTION 'RS_FUNCTION_ACTIVE_CHECK'
    EXPORTING
      FUNCNAME  = L_FNAME
    IMPORTING
      ACTIVE    = L_ACTIVE
    EXCEPTIONS
      NOT_FOUND = 1
      OTHERS    = 2.
  IF SY-SUBRC = 0 AND L_ACTIVE = 'X'.
    CALL FUNCTION L_FNAME
      EXPORTING
        I_DATASOURCE             = I_DATASOURCE
        I_ISOURCE                = I_ISOURCE
        I_UPDMODE                = I_UPDMODE
      TABLES
        I_T_SELECT               = I_T_SELECT
        I_T_FIELDS               = I_T_FIELDS
        C_T_DATA                 = C_T_DATA
        C_T_MESSAGES             = C_T_MESSAGES
      EXCEPTIONS
        RSAP_CUSTOMER_EXIT_ERROR = 1.
  ENDIF.                               " IF SY-SUBRC = 0...
ENDIF.                                 " IF L_FNAME IS NOT INITIAL.
Note: This is reusable code. Here ZTEST is a custom table that maintains, for each DataSource, the DataSource name (DSNAM) and the corresponding function module (FUNC).

Step 4: In this step we create a function module for each DataSource - a new FM in SE37, as sketched below.
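A minimal sketch of what such a function module could look like. The name ZTEST_2LIS_05_Q0ACTY follows the fallback convention used in the exit above; the extract structure MC05Q0ACT and the field ZZPRIO are assumptions for illustration only.

FUNCTION ztest_2lis_05_q0acty.
*"----------------------------------------------------------------------
*" The interface (maintained in SE37) mirrors the parameters forwarded
*" by ZXRSAU01: I_DATASOURCE, I_ISOURCE, I_UPDMODE, tables I_T_SELECT,
*" I_T_FIELDS, C_T_DATA, C_T_MESSAGES and the exception
*" RSAP_CUSTOMER_EXIT_ERROR.
*"----------------------------------------------------------------------
  DATA l_s_data TYPE mc05q0act.        "extract structure (assumption)

  LOOP AT c_t_data INTO l_s_data.
*   enhancement logic for this single DataSource lives here
    l_s_data-zzprio = 'A'.
    MODIFY c_t_data FROM l_s_data.
  ENDLOOP.
ENDFUNCTION.
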
Data Extractor Enhancement - Best Practice/Benefits
  This is the best practice for DataSource enhancement. It has the following benefits:

1.       No more locking of the CMOD include by one developer, which would stop others from enhancing other extractors.

2.       Testing of an extractor becomes independent of the other extractors.

3.       A faster and more robust approach.


SAP BW Data Source Enhancement

Added By: Venkata Chalapathi Challapalli, R. Prem Kumar, Vikas Agarwal, Seema John and Tapan Kumar Jain.

1. Introduction
As we can read on http://help.sap.com, SAP Business Information Warehouse provides preconfigured objects under the collective term “Business Content” (BC). Business Content includes DataSources, InfoObjects, data targets, and InfoSources that support the entire flow of data within BW. It provides a rapid starting point when modeling key business information requirements, and it is also intended to cover most of the traditional reporting requirements that companies face. However, Business Content will not solve all your data information needs.


Now the obvious question is: what happens if a standard Business Content DataSource, as provided in its standard (ready-to-use) configuration, does not completely meet our data model requirements?

The answer is: DataSource enhancement.

The links below show the ways to do DataSource enhancement.

SAP BW Data Source Enhancement


Data Source Enhancement using User Exit


Enhancing LO Datasource Step by Step


Data Extraction & DS Enhancement in SAP BI - Step by Step


Constant selection in query (Apples and Oranges)

Added From Sven van Leuken’s Blog

The great thing about an infoset is that it can combine data from different sources via a "JOIN" (inner/outer) mechanism. This may come in very handy if you want to report on two different sources (apples & oranges) without having to show the annoying '#'.
But what do you do if you want to show data from the Apples bucket even when there is no corresponding entry in the Oranges bucket (and vice versa)?
Eg.
Apples: 10 pieces in January 2010 for customer A
Oranges: 5 pieces in January 2010 for customer B
With an infoset (JOIN) on Apples & Oranges (where Apples is 'leading') it is impossible to show the Oranges of January for customer B because Apples has no customer B entry.
To be able to show the Oranges data, you have to use a multiprovider which uses the UNION mechanism. The downside of the multiprovider is that it will show a # for January Oranges for Customer A and a # for January Apples for Customer B.
This unwanted # behaviour can be overruled by using constant selection
Step 1: Create restricted keyfigure

Step 2: Tick on constant selection

Without constant selection:
January 2010;Customer A; Apples 10; Oranges #
January 2010;Customer B; Apples #; Oranges 5
With constant selection:
January 2010;Customer A; Apples 10; Oranges 5
January 2010;Customer B; Apples 10; Oranges 5


Writing a Virtual Characteristic or Key figure

Added From Arun KK Blog sapbikk

Writing a virtual characteristic is something I really struggled to get going with - more because of the lack of material than the complexity itself.
Here I have documented my approach to writing one - I hope it is a good reference for someone planning to write one.
CAUTION: Virtual char and KFs seriously impact the performance of the query; therefore use them with discretion.
---------------------------------------

Need for virtual char & KFs:

To perform calculations on or manipulation of characteristics or key figures at query run time.

Walk-through:

We’ll go through the virtual char & KFs using this example.

‘Calc Date’ is an IO that holds the below logic

If Current Date > Revised Promise Date Then

Calc Date = ‘PD’

Else

Calc Date = Revised Promise Date

We shall see how this is implemented using virtual characteristics.

Step 1: Creation of dummy IO

Create a ‘dummy’ info object (without any mapping) that would be the holder for Calc Date

I created IO - ZCRPDTCLC.

Step 2: Associating IO to data target

Add this info object to the data target that would be used for reporting.

I added ZCRPDTCLC under DSO ZPP_DS06 and again to ZPU_M03 MP.

The next steps would involve writing the code for calculating Calc Date.

The code is written in 3 modules.

ZXRSRTOP - Global declaration of variables is done here

ZXRSRU02 - Mode of access for each of the IOs used in the exit is defined here.

ZXRSRZZZ - Association of the global variable to the actual IO is done here.

The exit logic is also written here.

Step 3: Global declaration of variables

Go to ZXRSRTOP module in ABAP editor (tcode se38)

In the ‘Include for Virtual Char’ block (it could be in other blocks also, code written here to make it more organized), declare global variables for Calc Date and Revised Promise Date (these are the objects that will be used in the calculation)

The global variables declared should be in the format:

g_pos_<data target>_<InfoObject>

The data type should be integer (TYPE I).

Eg:

Data:

g_pos_ZPP_DS06_ZCRPDTCLC TYPE I,

g_pos_ZPP_DS06_ZCRPDT TYPE I.

Step 4: Defining mode of access for the IO.

Go to ZXRSRU02 module in ABAP editor (tcode se38)

There will be a single CASE block for the structure i_s_rkb1d

Eg:

CASE i_s_rkb1d-infocube.

ENDCASE.

This structure ‘i_s_rkb1d’ has all details regarding the query like query tech name, data target, etc.

Thereby, i_s_rkb1d-infocube will contain the data target for each query.

CASE i_s_rkb1d-infocube.

WHEN 'ZPP_DS06'.

* Fields to read

g_t_chanm-mode = rrke_c_mode-read.

g_t_chanm-chanm = 'ZCRPDT'.

APPEND g_t_chanm to e_t_chanm.

* Fields to write

g_t_chanm-mode = rrke_c_mode-no_selection.

g_t_chanm-chanm = 'ZCRPDTCLC'.

APPEND g_t_chanm to e_t_chanm.

ENDCASE.

We check the info cube attribute for the corresponding data target related to our query.

‘rrke_c_mode’ is the structure that defines the mode of access for each IO (read mode, write mode).

‘g_t_chanm’ is the structure that will hold the name of characteristics that will be used

‘g_t_kyfnm’ is the structure that will hold the name of key figures that will be used

In our example, we use only characteristics and hence only the ‘g_t_chanm’ structure.

The rest of the code should be self-explanatory.

For each new object, you assign its technical name to the ‘g_t_chanm-chanm’ field and append it to ‘e_t_chanm’, which is the output structure.

Similarly, for key figures, ‘e_t_kyfnm’ is the output structure. However, for key figures there is no structure to set the mode of access (the mode of access is read/write by default).

Another example to drive the point home:

* Characteristics and Units

g_t_chanm-mode = rrke_c_mode-read.

g_t_chanm-chanm = '0MATERIAL'. append g_t_chanm to e_t_chanm.

g_t_chanm-chanm = '0PLANT'. append g_t_chanm to e_t_chanm.

g_t_chanm-mode = rrke_c_mode-no_selection.

g_t_chanm-chanm = 'PR_ID1'. append g_t_chanm to e_t_chanm.

g_t_chanm-chanm = 'PR_ID2'. append g_t_chanm to e_t_chanm.

g_t_chanm-chanm = 'PR_YEAR1'. append g_t_chanm to e_t_chanm.

g_t_chanm-chanm = 'PR_YEAR2'. append g_t_chanm to e_t_chanm.

g_t_chanm-chanm = 'PR_CURR1'. append g_t_chanm to e_t_chanm.

g_t_chanm-chanm = 'PR_CURR2'. append g_t_chanm to e_t_chanm.

* Key Figures

append '0QUANT_B' to e_t_kyfnm.

append 'AMOUNT1' to e_t_kyfnm.

append 'AMOUNT2' to e_t_kyfnm.

For ‘g_t_kyfnm’ we need not set any mode.

Step 5: Writing the logic for virtual char/KF

Go to ZXRSRZZZ module in ABAP editor (tcode se38)

Here, create a new form for the data target being used.

The form name should begin with ‘USER_’ followed by the data target’s name.

C_S_DATE is the structure that would hold the data in the query.

To access the IOs in the code, we have to create field symbols that act as aliases.

Then the global variables created are associated to these aliases using the ASSIGN statement.

The rest of the code involves the implementation logic as in the snippet below.

FORM USER_ZPP_DS06 USING I_S_RKB1D TYPE RSR_S_RKB1D

CHANGING C_S_DATE TYPE ANY.

DATA: L_DATE TYPE SCAL-DATE.

DATA: L_WEEK TYPE SCAL-WEEK.

CONSTANTS: PAST_DUE(2) TYPE C VALUE 'PD'.

FIELD-SYMBOLS: <fs_zcrpdt>, <fs_zcrpdtclc>. "aliases for ZCRPDT (read) and ZCRPDTCLC (write); names are illustrative

ASSIGN COMPONENT g_pos_ZPP_DS06_ZCRPDT OF STRUCTURE C_S_DATE TO <fs_zcrpdt>.

ASSIGN COMPONENT g_pos_ZPP_DS06_ZCRPDTCLC OF STRUCTURE C_S_DATE TO <fs_zcrpdtclc>.

L_DATE = SY-DATUM. "Today's Date

CALL FUNCTION 'DATE_GET_WEEK'

EXPORTING

DATE = L_DATE

IMPORTING

WEEK = L_WEEK.

IF L_WEEK GT <fs_zcrpdt>. "If Current Week is greater than Revised Promise Date, Calc_Date holds "PD"

<fs_zcrpdtclc> = PAST_DUE.

ELSE. "Calc_Date holds Revised Promise Date

<fs_zcrpdtclc> = <fs_zcrpdt>.

ENDIF.

ENDFORM.

Overall Program Flow for Calc Date

ZXRSRTOP

Data:

g_pos_ZPP_DS06_ZCRPDTCLC TYPE I,

g_pos_ZPP_DS06_ZCRPDT TYPE I.

ZXRSRU02

CASE i_s_rkb1d-infocube.

WHEN 'ZPP_DS06'.

* Fields to read

g_t_chanm-mode = rrke_c_mode-read.

g_t_chanm-chanm = 'ZCRPDT'.

APPEND g_t_chanm to e_t_chanm.

* Fields to write

g_t_chanm-mode = rrke_c_mode-no_selection.

g_t_chanm-chanm = 'ZCRPDTCLC'.

APPEND g_t_chanm to e_t_chanm.

ENDCASE.

ZXRSRZZZ

FORM USER_ZPP_DS06 USING I_S_RKB1D TYPE RSR_S_RKB1D

CHANGING C_S_DATE TYPE ANY.

DATA: L_DATE TYPE SCAL-DATE.

DATA: L_WEEK TYPE SCAL-WEEK.

CONSTANTS: PAST_DUE(2) TYPE C VALUE 'PD'.

FIELD-SYMBOLS: <fs_zcrpdt>, <fs_zcrpdtclc>. "aliases for ZCRPDT (read) and ZCRPDTCLC (write); names are illustrative

ASSIGN COMPONENT g_pos_ZPP_DS06_ZCRPDT OF STRUCTURE C_S_DATE TO <fs_zcrpdt>.

ASSIGN COMPONENT g_pos_ZPP_DS06_ZCRPDTCLC OF STRUCTURE C_S_DATE TO <fs_zcrpdtclc>.

L_DATE = SY-DATUM. "Today's Date

CALL FUNCTION 'DATE_GET_WEEK'

EXPORTING

DATE = L_DATE

IMPORTING

WEEK = L_WEEK.

IF L_WEEK GT <fs_zcrpdt>. "If Current Week is greater than Revised Promise Date, Calc_Date holds "PD"

<fs_zcrpdtclc> = PAST_DUE.

ELSE. "Calc_Date holds Revised Promise Date

<fs_zcrpdtclc> = <fs_zcrpdt>.

ENDIF.

ENDFORM.


Stages in BW project (ASAP Methodology)

Stages in BW project
1 Project Preparation / Requirement Gathering
2 Business Blueprint
3 Realization
4 Final Preparation
5 GO Live & Support


1. Project Preparation / Requirement Gathering
Collect requirements through interviews with business teams / core users / information leaders.
Study & analyze the KPIs (key figures) of the business processes.
Identify the measurement criteria (characteristics).
Understand the drill-down requirements, if any.
Understand the business process data flow, if any.
Identify the need for data staging layers in BW (i.e. the need for an ODS, if any).
Understand the system landscape.
Prepare the final requirements document in the form of functional specifications containing:
Report owners,
Data flow,
KPIs,
Measurement criteria,
Report format along with drill-down requirements.

2. Business Blueprint
Check Business Content against the requirements.
Check for appropriate InfoObjects - key figures & characteristics.
Check for InfoCubes / ODS.
Check for DataSources & identify fields in the source system.
Identify master data.
Document all the information in a file - follow standard templates.
Prepare the final solution.
Identify differences (gaps) between Business Content & the functional specification. Propose new solutions/developments & changes, if required, at different levels such as InfoObjects, InfoCubes, DataSources etc. Document the gaps & the respective solutions proposed - follow standard templates.
Design & Documentation:
Design the ERD & MDM diagrams for each cube & related objects.
Design the primary keys/data fields for intermediate storage in the ODS.
Design the data flow charts right from the data source up to the cube.
Consider the performance parameters while designing the data models.
Prepare high-level / low-level design documents for each data model - follow standard templates.
Identify the roles & authorizations required and document them - follow standard templates.
Final review of the design with core BW users.
Sign off the BBP documents.

3. Realization
Check & Apply Latest Patches/Packages ...in BW & R/3 systems.
Activate/Build & enhance the cubes/ODS as per data model designs...maintain the version documents .
Identify & activate Info objects / Master data info sources / attributes ,prepare update rules
Assign data sources .prepare transfer rules , prepare multi providers . prepare Info packages .
perform the unit testing for data loads….both for master data & transaction data .
develop & test the end user queries .
Design the process chains ,schedule & test
create authorizations / Roles …assign to users ..and test
Apply necessary patches & Notes if any .
freeze & release the final objects to quality systems
perform quality tests .
Re design if required . (document changes, maintain versions)


4. Final Preparation
Prepare the final checklist of objects to be released; identify the dependencies & sequence of release.
Perform Go-Live checks as recommended by SAP in the production system.
Keep the patch levels up to date in the production system.
Test the production scenarios in a pre-production system which is a replica of the production system.
Do not encourage changes at this stage.
Freeze the objects.


5 GO Live & Support
Keep the patch levels up to date.
Release the objects to the production system.
Run the setups in the R/3 source system & initialize the loads in BW.
Schedule batch jobs in the R/3 system (delta loads).
Schedule the process chains in BW.
Performance tuning - an ongoing activity.
Enhancements - if any.
