Sunday, August 4, 2013

Questions & Answers in SAP BI

Go to Transaction LBWE (LO Customizing Cockpit)
1). Select Logistics Application
      e.g. SD Sales BW
            Extract Structures
2). Select the desired Extract Structure and deactivate it first.
3). Give the Transport Request number and continue
4). Click on 'Maintenance' to maintain the extract structure
       Select the fields of your choice and continue
             Maintain DataSource if needed
5). Activate the extract structure
6). Give the Transport Request number and continue
Next step is to delete the setup tables
7). Go to T-Code SBIW
8). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Delete the content of Setup tables (T-Code LBWG)
vi. Select the application (01 – Sales & Distribution) and Execute
Now, Fill the Setup tables
9). Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Filling the Setup tables
vi. Application-Specific Setup of statistical data
vii. SD Sales Orders – Perform Setup (T-Code OLI7BW)
        Specify a run name, date and time (use a future date)
Check the data in Setup tables at RSA3
Replicate the DataSource
Use of setup tables:
You fill the setup tables in the R/3 system (the setup tables are maintained via SBIW) and extract the data to BW; after that you can run delta extractions by initializing the extractor.
Full loads are always taken from the setup tables.
Type 1: Direct Delta
·         Each document posting is directly transferred into the BW delta queue
·         Each document posting with delta extraction leads to exactly one LUW in the respective BW delta queues
Type 2: Queued Delta
·         Extraction data is collected for the affected application in an extraction queue
·         Collective run as usual for transferring data into the BW delta queue
Type 3: Un-serialized V3 Update
·         Extraction data is written, as before, into the update tables with a V3 update module
·          V3 collective run transfers the data to BW Delta queue
·         In contrast to serialized V3, the collective update run reads the data from the update tables without regard to sequence
1.      Select the DataSource type and give it a technical name.
2.      Choose Create.
The screen for creating a generic DataSource appears.
3.      Choose an application component to which the DataSource is to be assigned.
4.      Enter the descriptive texts. You can choose any text.
5.      Choose from which datasets the generic DataSource is to be filled.
·         Choose Extraction from View if you want to extract data from a transparent table or a database view. Choose Extraction from Query if you want to use an SAP Query InfoSet as the data source. Select the required InfoSet from the InfoSet catalog.
·         Choose Extraction using FM, if you want to extract data using a function module. Enter the function module and extract structure.
·         With texts, you also have the option of extraction from domain fixed values.
6.      Maintain the settings for delta transfer where appropriate.
7.      Choose Save.
When extracting using an SAP Query InfoSet, see SAP Query: Assigning to a User Group.
Note when extracting from a transparent table or view:
If the extract structure contains a key figure field that references a unit of measure or currency field, this unit field must be included in the same extract structure as the key figure field.
A screen appears in which you can edit the fields of the extract structure.
8. Choose DataSource → Generate.
The DataSource is now saved in the source system.
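If you choose extraction using a function module (step 5 above), the FM is typically modeled on the SAP template RSAX_BIW_GET_DATA_SIMPLE. The following is a heavily simplified sketch only: the interface parameters (I_INITFLAG, I_MAXSIZE, E_T_DATA, exception NO_MORE_DATA) follow that template and are defined in SE37, while the FM name and source table ZMYDATA are hypothetical.

```abap
* Simplified sketch of a generic extractor function module, modeled on
* the SAP template RSAX_BIW_GET_DATA_SIMPLE. Interface parameters are
* defined in SE37 as in the template; ZMYDATA is an illustrative table.
FUNCTION z_biw_get_mydata.
  STATICS: s_cursor            TYPE cursor,
           s_counter_datapakid TYPE sy-tabix.

* Initialization call: selections are transferred, no data is returned
  IF i_initflag = 'X'.
    s_counter_datapakid = 0.
    EXIT.
  ENDIF.

* First data call: open a database cursor on the source table
  IF s_counter_datapakid = 0.
    OPEN CURSOR WITH HOLD s_cursor FOR
      SELECT * FROM zmydata.
  ENDIF.

* Fetch the next package of at most I_MAXSIZE records
  FETCH NEXT CURSOR s_cursor
    APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
    PACKAGE SIZE i_maxsize.
  IF sy-subrc <> 0.
    CLOSE CURSOR s_cursor.
    RAISE no_more_data.
  ENDIF.
  s_counter_datapakid = s_counter_datapakid + 1.
ENDFUNCTION.
```

The extractor is called repeatedly: once with I_INITFLAG = 'X', then once per data package until NO_MORE_DATA is raised.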
Step 1: Go to T Code CMOD and choose the project you are working on.
Step 2: Choose the exit which is called when the data is extracted.
Step 3: There are two options
Normal Approach: CMOD Code
Function Module Approach: CMOD Code
Step 4: Here we create a function module for each DataSource: a new FM created in SE37 (Function Builder).
Data Extractor Enhancement - Best Practice/Benefits:
This is the best practice of data source enhancement. This has the following benefits:
·         No more locking of the single CMOD include by one developer, which would block others from enhancing other extractors.
·         Testing an extractor becomes independent of the others.
·         A faster and more robust approach.
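The approach can be sketched as follows: the CMOD include only dispatches, one function module per DataSource. The exit and include shown (EXIT_SAPLRSAP_001 / ZXRSAU01, used for transaction data) are the usual place for such code, and the ZBW_ENH_* function module names are hypothetical.

```abap
* Sketch of include ZXRSAU01 (user exit EXIT_SAPLRSAP_001 for
* transaction data): dispatch to one function module per DataSource,
* so developers no longer lock each other in a single CMOD include.
* The ZBW_ENH_* function module names are hypothetical.
CASE i_datasource.
  WHEN '2LIS_11_VAHDR'.
    CALL FUNCTION 'ZBW_ENH_2LIS_11_VAHDR'
      TABLES
        c_t_data = c_t_data.
  WHEN '2LIS_13_VDITM'.
    CALL FUNCTION 'ZBW_ENH_2LIS_13_VDITM'
      TABLES
        c_t_data = c_t_data.
ENDCASE.
```

Each ZBW_ENH_* module then enhances only its own DataSource's data package, which keeps testing and transports independent.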
This field from the extraction structure of a DataSource meets one of the following criteria:
1. The field has the following type: Time stamp. New records to be loaded into the BW using a delta upload have a higher entry in this field than the time stamp of the last extraction.
2. The field has the following type: Calendar day. The same criterion applies to new records as in the time stamp field.
3. The field has another type. This case is only supported for SAP Content DataSources. In this case, the maximum value to be read must be displayed using a DataSource-specific exit when beginning data extraction.
This field is used by DataSources that determine their delta generically using a monotonously increasing field in the extract structure.
This field holds the safety interval: the difference between the current maximum of the delta-relevant field at the time of the delta (or delta init) extraction and the data that has actually been read.
Leaving the value blank increases the risk that the system does not extract records arising during the extraction.
Example: A time stamp is used to determine the delta. The time stamp that was last read is 12:00:00. The next delta extraction begins at 12:30:00. In this case, the selection interval is 12:00:00 to 12:30:00. At the end of extraction, the pointer is set to 12:30:00.
A record (for example, a document) is created at 12:25 but not saved until 12:35. It is not contained in the extracted data, and because of its time stamp it is not extracted the next time either.
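This scenario is what the safety interval (lower limit) is for: the next delta re-selects a small overlap before the last pointer, so the 12:25 record saved at 12:35 is still picked up. A minimal sketch of the arithmetic with illustrative variable names; in reality the pointer handling happens inside the generic extractor.

```abap
* Sketch: applying a 5-minute safety interval (lower limit) to the
* delta pointer. Variable names and values are illustrative.
DATA: lv_last_read TYPE t VALUE '120000',  " pointer from last delta
      lv_now       TYPE t VALUE '123000',  " current extraction time
      lv_safety    TYPE i VALUE 300,       " safety interval: 5 min
      lv_low       TYPE t.

* Lower limit of this delta's selection interval: 11:55:00
lv_low = lv_last_read - lv_safety.

* Selection interval for this delta: lv_low .. lv_now.
* Records in the overlap are extracted twice; a DataStore object
* with overwrite absorbs the duplicates.
```

The trade-off is deliberate: a small overlap (and an overwrite-capable target) is cheaper than silently losing records that were created before, but saved after, the pointer was set.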
R/3 System
1. Run KEB0
2. Select Datasource 1_CO_PA_CCA
3. Select Field Name for Partitioning (Eg, Ccode)
4. Initialize
5. Select characteristics & Value Fields & Key Figures
6. Select Development Class/Local Object
7. Workbench Request
8. Edit your Data Source to Select/Hide Fields
9. Extract Checker at RSA3 & Extract
BW System
1. Replicate Data Source
2. Assign Info Source
3. Transfer all Data Source elements to Info Source
4. Activate Info Source
5. Create Cube on Infoprovider (Copy str from Infosource)
6. Go to Dimensions and create dimensions, Define & Assign
7. Check & Activate
8. Create Update Rules
9. Insert/Modify KF and write routines (const, formula, abap)
10. Activate
11. Create InfoPackage for Initialization
12. Maintain Infopackage
13. Under Update Tab Select Initialize delta on Infopackage
14. Schedule/Monitor
15. Create Another InfoPackage for Delta
16. Check on DELTA Option
17. Ready for Delta Load
Delta records can be monitored in RSA7 (BW delta queue), LBWQ (extraction queue) and SMQ1 (outbound queue), and via IDocs.
BW Data Modeling
Start Routine
The start routine is run for each data package at the start of the transformation. The start routine has a table in the format of the source structure as input and output parameters. It is used to perform preliminary calculations and store these in a global data structure or in a table. This structure or table can be accessed from other routines. You can modify or delete data in the data package.
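A minimal sketch of such a start routine; the method frame and the SOURCE_PACKAGE parameter are generated by the transformation, and the field name LOEKZ (deletion flag) is an illustrative assumption.

```abap
* Sketch of a start routine in a transformation: drop records that
* should not be processed before any transformation rules run.
* The field LOEKZ (deletion flag) is an illustrative example.
METHOD start_routine.
    DELETE source_package WHERE loekz = 'X'.
ENDMETHOD.
```

Because the start routine sees the whole data package at once, it is also the usual place to buffer lookup data in a global table that the field routines can read later.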
Routine for Key Figures or Characteristics
This routine is available as a rule type; you can define the routine as a transformation rule for a key figure or a characteristic. The input and output values depend on the selected field in the transformation rule.
End Routine
An end routine is a routine with a table in the target structure format as input and output parameters. You can use an end routine to postprocess data after transformation on a package-by-package basis. For example, you can delete records that are not to be updated, or perform data checks.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE). You have to use a dummy rule to override this.
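A minimal sketch of such an end routine; the method frame, RESULT_PACKAGE and the generated type _TY_S_TG_1 come from the transformation, while the field names STATUS and AMOUNT are illustrative assumptions.

```abap
* Sketch of an end routine: post-process the result package after all
* transformation rules have run. STATUS and AMOUNT are illustrative.
METHOD end_routine.
    FIELD-SYMBOLS <result_fields> TYPE _ty_s_tg_1.

*   Delete records that are not to be updated to the target
    DELETE result_package WHERE status = ''.

*   Package-by-package data check / correction
    LOOP AT result_package ASSIGNING <result_fields>.
      IF <result_fields>-amount < 0.
        <result_fields>-amount = 0.
      ENDIF.
    ENDLOOP.
ENDMETHOD.
```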
Expert Routine
This type of routine is only intended for use in special cases. You can use the expert routine if there are not sufficient functions to perform a transformation. The expert routine should be used as an interim solution until the necessary functions are available in the standard routine.
You can use this to program the transformation yourself without using the available rule types. You must implement the message transfer to the monitor yourself.
If you have already created transformation rules, the system deletes them once you have created an expert routine.
If the target of the transformation is a DataStore object, key figures are updated by default with the aggregation behavior Overwrite (MOVE).
Rule Group 
A rule group is a group of transformation rules. It contains one transformation rule for each key field of the target. A transformation can contain multiple rule groups.
Rule groups allow you to combine various rules. This means that for a characteristic, you can create different rules for different key figures.
Ans: See this table:

Type                                 Structure                                           Data Supply                  SID Generation
Standard DataStore Object            Activation queue, active data table, change log     From data transfer process   Yes (during activation)
Write-Optimized DataStore Object     Table of active data only                           From data transfer process   No
DataStore Object for Direct Update   Table of active data only                           From APIs                    No
You sometimes need to compound InfoObjects in order to map the data model. Some InfoObjects cannot be defined uniquely without compounding.
For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant. In this case, compound characteristic Storage Location to Plant, so that the characteristic is unique.
Using compounded InfoObjects extensively, particularly if you include a lot of InfoObjects in compounding, can influence performance. Do not try to display hierarchical links through compounding. Use hierarchies instead.
A maximum of 13 characteristics can be compounded for an InfoObject. Note that characteristic values can also have a maximum of 60 characters. This includes the concatenated value, meaning the total length of the characteristic in compounding plus the length of the characteristic itself.
1.      Line item: This means the dimension contains precisely one characteristic. This means that the system does not create a dimension table. Instead, the SID table of the characteristic takes on the role of dimension table. Removing the dimension table has the following advantages:
·         When loading transaction data, no DIM IDs are generated for entries in the dimension table. This number-range operation can hurt performance, precisely in the case of such a degenerate dimension.
·         A table with a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler, and in many cases the database optimizer can choose better execution plans.
Nevertheless, it also has a disadvantage: A dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
It is recommended that you use DataStore objects, where possible, instead of InfoCubes for line items.
 2.      High cardinality: This means that the dimension will have a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform. Different index types are used than is normally the case. A general rule is that a dimension has a high cardinality when the number of dimension entries is at least 20% of the number of fact table entries. If you are unsure, do not mark a dimension as having high cardinality.
You want to modify an InfoCube into which data has already been loaded. You use remodeling to change the structure of the object without losing data.
If you want to change an InfoCube into which no data has been loaded yet, you can change it in InfoCube maintenance.
You may want to change an InfoProvider that has already been filled with data for the following reasons:
·         You want to replace an InfoObject in an InfoProvider with another, similar InfoObject, for example because you created an InfoObject yourself but now want to replace it with a BI Content InfoObject.
·         The structure of your company has changed. The changes to your organization make different compounding of InfoObjects necessary.
At runtime, erroneous data records are written to an error stack if the error handling for the data transfer process is activated. You use the error stack to update the data to the target destination once the error is resolved.
With an error DTP, you can update the data records to the target manually or by means of a process chain. Once the data records have been successfully updated, they are deleted from the error stack. If there are any erroneous data records, they are written to the error stack again in a new error DTP request.
  1. On the Extraction tab page under Semantic Groups, define the key fields for the error stack.
  2. On the Update tab page, specify how you want the system to respond to data records with errors:
  3. Specify the maximum number of incorrect data records allowed before the system terminates the transfer process.
  4. Make the settings for the temporary storage by choosing Goto → Settings for DTP Temporary Storage.
  5. Once the data transfer process has been activated, create an error DTP on the Update tab page and include it in a process chain. If errors occur, start it manually to update the corrected data to the target.
If you choose a template InfoObject, you copy its properties and use them for the new characteristic. You can edit the properties as required.
Several InfoObjects can use the same reference InfoObject. InfoObjects of this type automatically have the same technical properties and master data.
BW Reporting (BEx)         
In the Query Designer, you use selections to determine the data you want to display at the report runtime. You can alter the selections at runtime using navigation and filters. This allows you to further restrict the selections.
The Constant Selection function allows you to mark a selection in the Query Designer as constant. This means that navigation and filtering have no effect on the selection at runtime. This allows you to easily select reference sizes that do not change at runtime.
e.g. In the InfoCube, actual values exist for each period. Plan values only exist for the entire year. These are posted in period 12. To compare the PLAN and ACTUAL values, you have to define a PLAN and an ACTUAL column in the query, restrict PLAN to period 12, and mark this selection as a constant selection. This means that you always see the plan values, whichever period you are navigating in.
When you define selection criteria and formulas for structural components, and a query contains two structures, generic cell definitions are created at the intersections of the structural components; these determine the values to be presented in the cells.
Cell-specific definitions allow you to define explicit formulas and selection conditions for cells as well as implicit cell definitions. This means that you can override implicitly created cell values. This function allows you to design much more detailed queries.
In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed and serve as containers for help selections or help formulas.
1. In the new formula window right click on Formula Variable and choose New Variable
2. Enter the Variable Name, Description and select Replacement Path in the Processing by field.
Click the Next Button
3. In the Characteristic screen, select the date characteristic that represents the first date to use in the calculation
4. In the Replacement Path screen select Key in the Replace Variable with field. Leave all the other options as they are (The offset values will be set automatically).
5. In the Currencies and Units screen select Date as the Dimension ID
Repeat the same steps to create a formula variable for second date and use them in the calculation.
Type 1:Characteristic value variables
Characteristic value variables represent characteristic values and can be used wherever characteristic values can be used.
If you restrict characteristics to specific characteristic values, you can also use characteristic value variables.
Type 2: Hierarchy variables
Hierarchy variables represent hierarchies and can be used wherever hierarchies can be selected.
If you restrict characteristics to hierarchies or select presentation hierarchies, you can also use hierarchy variables.
Type 3: Hierarchy node variables
Hierarchy node variables represent a node in a hierarchy and can be used wherever hierarchy nodes can be used.
If you restrict characteristics to hierarchy nodes, you can also use hierarchy node variables.
Type 4: Text variables
Text variables represent a text and can be used in descriptions of queries, calculated key figures and structural components.
You can use text variables when you create calculated key figures, restricted key figures, selections and formulas in the description of these objects. You can change the descriptions in the properties dialog box.
Type 5: Formula variables
Formula variables represent numerical values and can be used in formulas
The processing type of a variable determines how a variable is filled with a value for the runtime of the query or Web application.
The following processing types are available:
●     Manual Entry/Default Value
●     Replacement Path
●     Customer Exit
●     SAP Exit
●     Authorizations
You can restrict the key figures of an InfoProvider for reuse by selecting one or more characteristics. The key figures that are restricted by one or more characteristic selections can be basic key figures, calculated key figures, or key figures that are already restricted.
In the Query Designer, you use formulas to recalculate the key figures in an InfoProvider so that you can reuse them. Calculated key figures consist of formula definitions containing basic key figures, restricted key figures or precalculated key figures.
It is used to aggregate the result of a key figure in a manner different from the standard OLAP functionality: it aggregates the key figures depending on some characteristic value. In other words, exception aggregation counts the occurrences of a key figure value relative to one or more other characteristics.
The OLAP processor executes the aggregations in the following sequence:
Type 1: Standard aggregation:
Standard aggregation is executed first. Possible types of aggregation are summation (SUM), minimum (MIN), and maximum (MAX). Minimum and maximum can be set, for example, for date key figures. This type of aggregation is handled at the standard key figure level.
Type 2: Exception aggregation with respect to the reference characteristic:
The aggregation of a selected characteristic takes place after the standard aggregation (exception aggregation). Possible exception aggregations are average, counter, first value, last value, minimum, maximum, no aggregation, standard deviation, summation and variance. Cases where exception aggregation applies include, for example, storage non-cumulatives that cannot be totaled over time, or counters that count the number of values of a particular characteristic.
Type 3: Currency and unit aggregation
Aggregation by currency and unit is executed last. If two figures with different currencies or units are aggregated, the system marks the result with '*'. Formulas are only calculated after the figures have been fully aggregated. Exception aggregation is used in scenarios where we do not want to show the result of a key figure as simply the total of all the values.
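As a rough illustration of the 'counter' exception aggregation described above, here is a small, self-contained ABAP sketch with invented data and names; in BW itself the OLAP processor performs this, not custom code.

```abap
* Worked example of exception aggregation 'counter' with reference
* characteristic customer. Data and names are invented.
TYPES: BEGIN OF ty_sale,
         material TYPE c LENGTH 10,
         customer TYPE c LENGTH 10,
         quantity TYPE i,
       END OF ty_sale.
DATA: lt_sales     TYPE STANDARD TABLE OF ty_sale,
      lv_customers TYPE i.

lt_sales = VALUE #( ( material = 'MAT1' customer = 'C1' quantity = 5 )
                    ( material = 'MAT1' customer = 'C2' quantity = 3 )
                    ( material = 'MAT1' customer = 'C1' quantity = 2 ) ).

* Standard aggregation (SUM) would return quantity 10 for MAT1.
* The 'counter' exception aggregation over customer instead returns
* the number of distinct customers who bought MAT1:
SORT lt_sales BY material customer.
DELETE ADJACENT DUPLICATES FROM lt_sales COMPARING material customer.
lv_customers = lines( lt_sales ).   " 2 distinct customers
```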
To improve the efficiency of data analysis, you can formulate conditions. In the results area of the query, the data is filtered according to the conditions so that only the part of the results area that you are interested in is displayed.
If you apply conditions to a query, you are not changing any figures; you are just hiding the numbers that are not relevant for you. Conditions therefore have no effect on the values displayed in the results rows. The results row of a query with an active condition is the same as the results row of a query without this condition (see Ranked List Condition: Top 5 Products).
You can define multiple conditions for a query. Conditions are evaluated independently of each other. The result set is therefore independent of the evaluation sequence. The result is the intersection of the individual conditions. Multiple conditions are linked logically with AND. A characteristic value is only displayed when it fulfills all (active) conditions of the query
In exception reporting you select and highlight objects that are in some way different or critical. Results that fall outside a set of predetermined threshold values (exceptions) are highlighted in color or designated with symbols. This enables you to identify immediately any results that deviate from the expected results.
Exception reporting allows you to determine the objects that are critical for a query, both online, and in background processing.
Global filters are applicable to the complete result set of query and local filters work only for a specific key figure.
Default values filter can be changed during query navigation but characteristic restrictions filter cannot be changed once restricted.
When is reconstruction allowed?

1. When a request is deleted in an ODS/Cube, will it be available under reconstruction?
Ans: Yes, it will be available under the reconstruction tab, but only if the processing went through the PSA. Note: This function is particularly useful if you are loading deltas, that is, data that you cannot request again from the source system.
2. Should the request be turned red before it is deleted from the target so as to enable reconstruction?
Ans: To enable reconstruction you do not need to make the request red, but to enable a repeat of the last delta you have to make the request red before you delete it.
3. If the request is deleted with its status green, does the request get deleted from the reconstruction tab too?
Ans: No, it won't get deleted from the reconstruction tab.
4. Does the behaviour of reconstruction and deletion differ when the target is different (ODS vs. Cube)?
Ans: Yes.

How to Debug Update and Transfer Rules
1. Go to the Monitor.
2. Select the 'Details' tab.
3. Click 'Processing'.
4. Right-click any data package.
5. Select 'Simulate update'.
6. Tick the checkboxes 'Activate debugging in transfer rules' and 'Activate debugging in update rules'.
7. Click 'Perform simulation'.

Error loading master data - Data record 1 ('AB031005823') : Version 'AB031005823' is not valid
Problem: Created a flat file DataSource for uploading master data. The data loaded fine up to the PSA. Once the DTP which runs the transformation is scheduled, it ends in the error above.

Solution: After referring to many links on SDN, I found that since the data comes from an external file, it does not match the SAP-internal format. So we should mark the "External" format option in the DataSource (in this case for Material) and apply the conversion routine MATN1 as shown in the picture below.

Once the above changes were done, the load was successful.
Knowledge from SDN forums: Conversion takes place when converting the contents of a screen field from display format to SAP-internal format and vice versa, and when outputting with the ABAP statement WRITE, depending on the data type of the field.

Check the info: the conversion exit (MATN1) adds leading zeros to the material number. When you query MAKT with MATNR as just 123 you will not get any values, so you should use this conversion exit to add the leading zeros.

Function Module to Turn a Yellow Request RED
Use SE37 to execute the function module RSBM_GUI_CHANGE_USTATE. On the next screen, enter the request ID for I_REQUID and execute. On the screen after that, select the 'Status Erroneous' radio button and continue. This function module changes the status of a request from green/yellow to red.

What happens if a green request is deleted?
Deleting a green request is no harm. If you are loading via the PSA, you can go to the 'Reconstruction' tab, select the request, and choose 'Insert/Reconstruct' to get it back. But, for example, you may need to repeat this delta load from the source system: if you delete the green request, you will not get these delta records from the source system again.
Explanation: When the request is green, the source system gets the message that the data sent was loaded successfully, so the next time the delta load is triggered, only new records are sent. If for some reason you need to repeat the same delta load from the source, making the request red sends the message that the load was not successful, so the source does not discard these delta records. The delta queue in R/3 keeps them until the next upload is performed successfully in BW; the same records are then extracted into BW in the next requested delta load.

Appearance of Values in the Characteristic Input Help Screen
Which settings can I make for the input help, and where can I maintain these settings?
In general, the following settings are relevant and can be made for the input help for characteristics:
Display: Determines the display of the characteristic values, with the options "Key", "Text", "Key and text" and "Text and key".
Text type: If there are different text types (short, medium and long text), this determines which text type is used to display the text.
Attributes: You can determine which attributes of the characteristic are displayed initially in the input help. When a characteristic has a large number of attributes, it makes sense to display only a selected subset. You can also determine the display sequence of the attributes.
F4 read mode: Determines the mode in which the input help obtains its characteristic values. This includes the modes "Values from the master data table (M)", "Values from the InfoProvider (D)" and "Values from the query navigation (Q)".

Note that you can set a read mode, on the one hand, for the input help for query execution (for example, in the BEx Analyzer or in the BEx Web) and, on the other hand, for the input help for the query definition (in the BEx Query Designer). You can make these settings in InfoObject maintenance using transaction RSD1 in the context of the characteristic itself, in the InfoProvider-specific characteristic settings using transaction RSDCUBE in the context of the characteristic within an InfoProvider, or in the BEx Query Designer in the context of the characteristic within a query. Note that not all the settings can be maintained in all the contexts. The following table shows where certain settings can be made:

Setting                         RSD1   RSDCUBE   BEx Query Designer
Display                          X        X            X
Text type                        X        X            X
Attributes                       X        -            -
Read mode (query execution)      X        X            X
Read mode (query definition)     X        -            -
Note that the respective input helps in the BEx Web as well as in the BEx Tools enable you to make these settings again after executing the input help.

When do I use the settings from InfoObject maintenance (transaction RSD1) for the characteristic for the input help?

The settings that are made in InfoObject maintenance are active in the context of the characteristic and may be overwritten at higher levels if required. At present, the InfoProvider-specific settings and the BEx Query Designer belong to the higher levels. If the characteristic settings are not explicitly overwritten at a higher level, the characteristic settings from InfoObject maintenance are active.

When do I use the settings from the InfoProvider-specific characteristic settings (transaction RSDCUBE) for the input help?

You can make InfoProvider-specific characteristic settings in transaction RSDCUBE -> context menu for a characteristic -> InfoProvider-Specific Properties. These settings are active in the context of the characteristic within an InfoProvider and may be overwritten at higher levels if required. At present, only the BEx Query Designer belongs to the higher levels. If the characteristic settings are not explicitly overwritten at a higher level and settings are made in the InfoProvider-specific settings, these are active. Note that they in turn overwrite the settings from InfoObject maintenance.

When do I use the settings in the BEx Query Designer for characteristics for the input help?

In the BEx Query Designer, you can make the input-help-relevant settings on the tab pages "Display" and "Advanced" in the "Properties" area for the selected characteristic. These settings are active in the context of the characteristic within a query and cannot be overwritten at higher levels at present. If the settings are not made explicitly, the settings made at the lower levels take effect.

How to Suppress Messages Generated by BW Queries
Standard Solution :
You might be aware of a standard solution. In transaction RSRT, select your query and click on the "message" button. Now you can determine which messages for the chosen query are not to be shown to the user in the front-end.

Custom Solution:
Only selected messages can be suppressed using the standard solution. However, there's a clever way to implement your own solution, and you don't need to modify the system for it! All messages are collected using the function module RRMS_MESSAGE_HANDLING. So all you have to do is implement an enhancement at the start of this function module. Now it's easy: code your own logic to check the input parameters, such as the message class and number, and skip the remainder of the processing logic if you don't want a message to show up in the front end.

FUNCTION rrms_message_handling.
* Filter BIA message: suppress warning RSD_TREX 136
  IF i_class = 'RSD_TREX' AND i_type = 'W' AND i_number = '136'.
    EXIT.   " skip the remainder of the processing logic
  ENDIF.
* ... remainder of the standard processing logic ...
ENDFUNCTION.

How can I display attributes for the characteristic in the input help?
Attributes for the characteristic can be displayed in the respective filter dialogs in the BEx Java Web or in the BEx Tools using the settings dialogs for the characteristic. Refer to the related application documentation for more details. In addition, you can determine the initial visibility and the display sequence of the attributes in InfoObject maintenance on the tab page "Attributes" -> "Detail" -> column "Sequence F4". Attributes marked with "0" are not displayed initially in the input help.

Why do the settings for the input help from the BEx Query Designer and from the InfoProvider-specific characteristic settings not take effect on the variable screen?
On the variable screen, you use input helps for selecting characteristic values for variables that are based on characteristics. Since variables from different queries and from potentially different InfoProviders can be merged on the variable screen, you cannot clearly determine which settings should be used from the different queries or InfoProviders. For this reason, you can use only the settings on the variable screen that were made in InfoObject maintenance.

Why do the read mode settings for the characteristic and the provider-specific read mode settings not take effect during the execution of a query in the BEx Analyzer?

The query read mode settings always take effect in the BEx Analyzer during the execution of a query. If no setting was made in the BEx Query Designer, then default read mode Q (query) is used.

How can I change settings for the input help on the variable screen in the BEx Java Web?

In the BEx Java Web, at present, you can make settings for the input help only using InfoObject maintenance. You can no longer change these settings subsequently on the variable screen.

Selective Deletion in Process Chain
The standard procedure :
1. Create a variant which is stored in the table RSDRBATCHPARA for the selection to be deleted from a data target.
2. Execute the generated program.
The generated program deletes the data from the data target based on the given selections. It also removes the variant created for this selective deletion from the RSDRBATCHPARA table, so the generated program won't delete anything on a second execution.

If we want to use this program for scheduling in a process chain, we can comment out the step where the program deletes the generated variant.

The relevant fragment of the generated program looks roughly like this (reconstructed from the fragments above; the variant name ZSEL_DELETE_QM_C10 is just an example):

CALL FUNCTION 'RSDRD_SEL_DELETION'
  EXPORTING
    i_threshold = '1.0000E-01'
    i_mode      = 'C'
  CHANGING
    c_t_msg     = l_t_msg.
EXPORT l_t_msg TO MEMORY ID sy-repid.
* Comment out this deletion so the variant survives for reuse:
DELETE FROM rsdrbatchpara WHERE repid = 'ZSEL_DELETE_QM_C10'.

ABAP program to find prev request in cube and delete
There will be cases when we cannot use the SAP built-in settings to delete the previous request, because the logic to determine the previous request is highly customised. In such cases you can write an ABAP program that calculates the previous request based on your own logic. The following tables are used: RSICCONT (list of all requests in any particular cube) and RSSELDONE (request number, source, target, selection InfoObject, selections, etc.). One example is to select the request based on the selection conditions used in the InfoPackage.

TCURF, TCURR and TCURX
TCURF is always used in reference to the exchange rate (in the case of currency translation). For example, say we want to convert figures from the FROM currency to the TO currency at the daily average rate (M), and the stored exchange rate is 2,642.34. The factors for this currency combination for M in TCURF are, say, 100,000:1. The effective exchange rate then becomes 0.02642.
Question (taken from SDN): Can't we have an exchange rate of 0.02642 and not use the factors from the TCURF table at all? I suppose we still have to maintain factors as 1:1 in TCURF if we use the exchange rate 0.02642, am I right? But why is this so? Can't I get rid of TCURF? What is the use of TCURF co-existing with TCURR?
Answer: TCURF is normally used to allow greater precision in calculations, i.e. 0.00011 with no factors gives a different result to 0.00111 with a factor of 10:1. So TCURF allows greater precision in calculations, and its factor should be applied before the exchange rate is used.
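The factor arithmetic above can be sketched as plain code (this is just the arithmetic described in the text, not an SAP API; the function name is ours):

```python
# The stored TCURR rate applies to <from_factor> units of the source
# currency against <to_factor> units of the target currency, so the
# per-unit effective rate scales the stored rate by to_factor/from_factor.
def effective_rate(stored_rate, from_factor, to_factor):
    """Per-unit exchange rate from a stored rate and its TCURF factors."""
    return stored_rate * to_factor / from_factor

# Example from the text: stored rate 2,642.34 with factors 100,000:1
print(effective_rate(2642.34, 100_000, 1))  # roughly 0.0264234
```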

-------------------------------------------------------------------------------------
TCURR
The TCURR table is generally used when we create currency conversion types. The currency conversion types refer to the entries in TCURR defined against each currency (with time reference) and get the exchange rate factor from the source currency to the target currency.

The TCURX table is used to define the exact number of decimal places for any currency. Its effect shows in the BEx report output.
-------------------------------------------------------------------------------------
How to define F4 Order Help for an infoobject for reporting
Open the attributes tab of the infoobject definition. There you will see a column for F4 order help against each attribute of that infoobject.
This field defines whether and where the attribute should appear in the value help. Valid values:
• 00: The attribute does not appear in the value help.
• 01: The attribute appears at the first position (leftmost) in the value help.
• 02: The attribute appears at the second position in the value help.
• 03: ... and so on.
Altogether, only 40 fields are permitted in the input help. In addition to the attributes, the characteristic itself, its texts, and the compounded characteristics are also generated in the input help. The total number of these fields cannot exceed 40.
The infoobjects are changed accordingly. For example, for infoobject 0VENDOR: if 0COUNTRY (an attribute of 0VENDOR) should not be shown in the F4 help of 0VENDOR, then mark 0 against the attribute 0COUNTRY in the infoobject definition of 0VENDOR.

Dimension Size Vs Fact Size
The current size of all dimensions in relation to the fact table can be monitored by running report SAP_INFOCUBE_DESIGNS via t-code SE38. We can also test the infocube design with RSRV tests, which give the dimension-to-fact ratio.

The ratio of a dimension should be less than 10% of the fact table. In the report, dimension tables look like /BI[C/0]/D[xxx]
Fact tables look like /BI[C/0]/[E/F][xxx]
Use T-CODE LISTSCHEMA to show the different tables associated with a cube.

When a dimension grows very large in relation to the fact table, the database optimizer can't choose an efficient access path to the data, because the guideline that each dimension should hold less than 10 percent of the fact table's records has been violated.

A dimension with such large data growth is called a degenerate dimension. To fix it, move the characteristics to different dimensions, but this can only be done when there is no data in the InfoCube.

Note: If you have a requirement to include item-level details in the cube, the dimension-to-fact size ratio will naturally be higher and that cannot be helped. In that case, make the item characteristic a line-item dimension. A line-item dimension is a dimension containing only one characteristic. Since there is only one characteristic in the dimension, the fact table entry can link directly to the SID of the characteristic without using any DIM ID (the DIM ID in the dimension table usually connects the SID of the characteristic with the fact table). Since the link effectively bypasses the dimension table, queries perform faster.
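The 10% rule of thumb above is easy to state as code (an illustrative check of the guideline; in practice the row counts come from SAP_INFOCUBE_DESIGNS):

```python
# Flag a dimension whose row count is at or above threshold * fact rows,
# i.e. the "degenerate dimension" condition described in the text.
def dimension_ratio_ok(dim_rows, fact_rows, threshold=0.10):
    """True if the dimension holds fewer rows than threshold * fact rows."""
    if fact_rows == 0:
        return True  # empty cube: nothing to judge yet
    return dim_rows / fact_rows < threshold

# A 2,000-row dimension against a 100,000-row fact table is fine (2%);
# a 15,000-row dimension against the same fact table is degenerate (15%).
print(dimension_ratio_ok(2_000, 100_000), dimension_ratio_ok(15_000, 100_000))
```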

BW Main tables
Extractor related tables: ROOSOURCE - On source system R/3 server, filter by: OBJVERS = 'A'
Data source / DS type / delta type/ extract method (table or function module) / etc
RODELTAM - Delta type lookup table.
ROIDOCPRMS - Control parameters for data transfer from the source system, result of "SBIW - General setting - Maintain Control Parameters for Data Transfer" on OLTP system.
MAXSIZE: Maximum size of a data packet in kilobytes
STATFRQU: Frequency with which status IDocs are sent
MAXPROCS: Maximum number of parallel processes for data transfer
MAXLINES: Maximum number of lines in a data packet
MAXDPAKS: Maximum number of data packages in a delta request
SLOGSYS: Source system

Query related tables:
RSZELTDIR: filter by OBJVERS = 'A', DEFTP: REP = query, CKF = calculated key figure. Reporting component elements: query, variable, structure, formula, etc.
RSZELTTXT: similar to RSZELTDIR; texts of reporting component elements.
To get a list of query elements built on a cube: RSZELTXREF, filter by OBJVERS = 'A', INFOCUBE = [cubename]
To get all queries of a cube: RSRREPDIR, filter by OBJVERS = 'A', INFOCUBE = [cubename]
To get query change status (version, last changed by, owner) of a cube: RSZCOMPDIR, OBJVERS = 'A'

Workbooks related tables:
RSRWBINDEX List of binary large objects (Excel workbooks)
RSRWBINDEXT Titles of binary objects (Excel workbooks)
RSRWBSTORE Storage for binary large objects (Excel workbooks)
RSRWBTEMPLATE Assignment of Excel workbooks as personal templates
RSRWORKBOOK 'Where-used list' for reports in workbooks

Web templates tables:
RSZWOBJ Storage of the Web Objects
RSZWOBJTXT Texts for Templates/Items/Views
RSZWOBJXREF Structure of the BW objects in a template
RSZWTEMPLATE Header table for BW HTML templates

Data target loading/status tables:
rsreqdone, " Request-Data
rsseldone, " Selection for current Request
rsiccont, " Request posted to which InfoCube
rsdcube, " Directory of InfoCubes / InfoProvider
rsdcubet, " Texts for the InfoCubes
rsmonfact, " Fact table monitor
rsdodso, " Directory of all ODS Objects
rsdodsot, " Texts of ODS Objects
sscrfields. " Fields on selection screens

Tables holding characteristics:
OBJVERS -> A = active; M = modified; D = delivered
(business content characteristics that have only a D version and no A version are not activated yet)
TXTTABFL -> = X -> has texts
ATTRIBFL -> = X -> has attributes
RSREQICODS: requests in ODS
RSMONICTAB: all requests

Transfer structures live in PSAPODSD
/BIC/B0000174000 Transfer structure
Master Data lives in PSAPSTABD
/BIC/IXXXXXXX SID Structure of hierarchies:
/BIC/JXXXXXXX Hierarchy intervals
/BIC/KXXXXXXX Conversion of hierarchy nodes - SID:
/BIC/PXXXXXXX Master data (time-independent):
/BIC/SXXXXXXX Master data IDs:
/BIC/TXXXXXXX Texts
/BIC/XXXXXXXX Attribute SID table

Master Data views
/BIC/MXXXXXXX master data tables:
/BIC/RXXXXXXX View SIDs and values:
/BIC/ZXXXXXXX View hierarchy SIDs and nodes

InfoCube Names in PSAPDIMD
/BIC/Dcube_name1 Dimension 1
...
/BIC/Dcube_nameA Dimension 10
/BIC/Dcube_nameB Dimension 11
/BIC/Dcube_nameC Dimension 12
/BIC/Dcube_nameD Dimension 13
/BIC/Dcube_nameP Data Packet
/BIC/Dcube_nameT Time
/BIC/Dcube_nameU Unit
/BIC/Ecube_name Fact table (compressed, E table)
/BIC/Fcube_name Fact table (uncompressed, F table)

ODS Table names (PSAPODSD)
BW 3.5:
/BIC/AXXXXXXX00 ODS object XXXXXXX : Active records
/BIC/AXXXXXXX40 ODS object XXXXXXX : New records
/BIC/AXXXXXXX50 ODS object XXXXXXX : Change log

BW 7.x:
/BIC/AXXXXXXX00 ODS object XXXXXXX : Active records
/BIC/AXXXXXXX10 ODS object XXXXXXX : New records
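The naming patterns listed in this section can be summarised in a small sketch (an illustrative helper of ours, not an SAP API; the prefix-to-role mapping follows the lists above):

```python
import re

# Map the letter after the /BIC/ or /BI0/ namespace to the table role.
TABLE_KINDS = {
    "A": "DSO/ODS active or new data",
    "P": "master data (time-independent)",
    "S": "SID table",
    "T": "text table",
    "X": "attribute SID table",
    "D": "dimension table",
    "E": "fact table (E)",
    "F": "fact table (F)",
}

def classify(table_name):
    """Classify a generated BW table name by its prefix letter."""
    m = re.match(r"/BI[C0]/([A-Z])\w+", table_name)
    if not m or m.group(1) not in TABLE_KINDS:
        return "unknown"
    return TABLE_KINDS[m.group(1)]

print(classify("/BIC/FSALESCUBE"))  # fact table (F)
```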

T-code tables:
TSTC -- table of transaction codes, text and program name
TSTCT -- transaction code texts

1. What are tickets? Give an example.
The typical tickets in a production Support work could be:
1. Loading any of the missing master data attributes/texts.
2. Create ADHOC hierarchies.
3. Validating the data in Cubes/ODS.
4. If any of the loads runs into errors then resolve it.
5. Add/remove fields in any of the master data/ODS/Cube.
6. Data source Enhancement.
7. Create ADHOC reports.
1. Loading any of the missing master data attributes/texts - This would be done by scheduling the info packages for the attributes/texts mentioned by the client.
2. Create ADHOC hierarchies. - Create hierarchies in RSA1 for the info-object.
3. Validating the data in Cubes/ODS. - By using the Validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors then resolve it. - Analyze the error and take suitable action.
5. Add/remove fields in any of the master data/ODS/Cube. - Depends upon the requirement
6. Data source Enhancement.
7. Create ADHOC reports. - Create some new reports based on the requirement of client.
Tickets are the tracking tool by which the user tracks the work we do. A ticket can be a change request, a data load issue, or anything else. Tickets are typically classified as critical or moderate; "critical" can mean it needs to be solved within a day or half a day, depending on the client. After solving the issue, the ticket is closed by informing the client. Tickets are raised during a support project for any issue or problem. If a support person faces an issue, he requests the operator to raise a ticket; the operator raises it and assigns it to the respective person. The concept of a ticket varies from contract to contract between companies. Generally, tickets raised by the client are handled based on priority, e.g. high priority, low priority and so on. A high-priority ticket has to be resolved ASAP; a low-priority ticket is considered only after attending to the high-priority tickets.
Checklists for a support project of BPS - To start the checklist:
1) Info Cubes / ODS / data targets 2) planning areas 3) planning levels 4) planning packages 5) planning functions 6) planning layouts 7) global planning sequences 8) profiles 9) list of reports 10) process chains 11) enhancements in update routines 12) any ABAP programs to be run and their logic 13) major bps dev issues 14) major bps production support issues and resolution .

What are the tools to download tickets from the client? Are there any standard tools, or does it depend upon the company or client?
Yes, there are tools for that. We use HP OpenView; it depends on what the client uses. There are many tools available, and some clients develop their own using Java, ASP and other software. Some clients use just Lotus Notes. Generally 'Vantive' is used for tracking user requests and tickets.
It has a vantive ticket ID, field for description of problem, severity for the business, priority for the user, group assigned etc.
Different technical groups will have different group ID's.
User talks to Level 1 helpdesk and they raise ticket.
If they can solve the issue, fine; else the helpdesk assigns the ticket to the Level 2 technical group.
Ticket status keeps changing: open, working, resolved, on hold, back from hold, closed, etc. The way we handle tickets varies depending on the client. Some companies use SAP CS to handle the tickets; we have been using Vantive. The ticket is handled with a change request; when you get the ticket it carries the priority level with which it is to be handled, along with a ticket ID. It is a client-specific tool. The common fields are: ticket ID, priority, consultant ID/name, user ID/name, date of post, resolving time, etc.
Ideally there is also a knowledge repository to search for a similar problem and the solutions given if it occurred earlier. You can also have training manuals (with screenshots) for simple transactions like viewing a query or saving a workbook, so that such queries can be addressed by using them.
When the problem is logged to you as a consultant, you need to analyze the problem, check whether a similar problem occurred earlier and use the ready solutions, find out the exact server on which it occurred, etc.
You have to solve the problem (assuming you have access to the dev system), post the solution, and ask the user to test after the preliminary testing from your side. Get it transported to production once tested and post it as closed, i.e. close the ticket.

3. What are User Authorizations in SAP BW?
Authorizations are very important; for example, you don't want to expose an important financial report to all users. You can have authorization at the object level: mark the object as authorization-relevant in the RSD1 and RSSM t-codes. Similarly, you set up authorization for certain users by assigning them the required authorizations in the PFCG t-code: create a role, include the t-codes, BEx reports, etc. into the role, and assign this role to the user ID.

General Errors in BI

General Errors in BW:
1. Time stamp errors: These happen when changes are made to a DataSource and the DataSource is not replicated.
Execute t-code SE38 in BW, enter program name RS_TRANSTRU_ACTIVATE_ALL and execute the program. Give the InfoSource and source system and activate. This replicates the DataSource and changes its status to active. Once this is done, delete the request by changing its technical status to red, and trigger the InfoPackage to get the delta back from the source system.
2. Error log in PSA- Error occurred while writing to PSA: This is because of corrupt data or data is not in acceptable format to BW.
Check the cause of the error in the Monitor, Details tab strip. This gives the record number and the InfoObject with the format issue. Compare the data with correct values and determine the cause of the failure. Change the QM status of the request in the data target to red and delete the request. Correct the incorrect data in the PSA and then upload the data into the data target from the PSA.
3. Duplicate data error in master data uploads: This can happen if there are duplicate records from the source system. BW does not allow duplicate data records.
 If it is a delta update, change the technical status in the monitor to red and delete the request from the data target. If it is full upload delete the request.
Schedule again with the option in the Info package, "without duplicate data" for master data upload.
 4. Error occurred in the data selection: This can occur due to either a bug in the InfoPackage or an incorrect data selection in the InfoPackage.
Check the data selection in the InfoPackage, change the technical status to red, delete the error request from the data target, and start the job again.
5. Processing (data packet) Errors occurred-Update (0 new / 0 changed): This can be because of data not acceptable to data target although data reached PSA.
Data checked in PSA for correctness and after changing the bad data uploaded back into data target from PSA.
6. Processing (data packet) Errors occurred-Transfer rules (0 Records): These errors happen when the transfer rules are not active and mapping the data fields is not correct.
 Check for transfer rules, make relevant changes and load data again.
 7. Missing messages - processing end: This can be because of incorrect PSA data, transfer structure, transfer rules, update rules or ODS definition.
 Check PSA data, Transfer structure, transfer rules, Update rules or data target definition.
 8. Activation of ODS failed: This happens when data is not acceptable to ODS definition. Data need to be corrected in PSA.
 Check for Info object which has caused this problem in the monitor details tab strip. Delete request from data target after changing QM status to red. Correct data in PSA and update data back to data target from PSA.
   9. Source system not available: This can happen when the request IDoc is sent to the source system, but the source system is for some reason not available.
 Ensure that source system is available. Change technical status of request to red and delete request from data target. Trigger Info package again to get data from source system.
 10. Error while opening file from the source system: This happens when either file is open or file is not deposited on server or not available.
 Arrange for file, delete error request from data target and trigger Info package to load data from file.

 11. R/3 table is locked while the load is going on: This happens when a DataSource is accessing an R/3 transparent table and some transaction takes place in R/3.
 Change the technical status of job to red in the monitor and retrigger the job again from R/3.
 12. Object locked by user: This can happen when user or ALEREMOTE is accessing the same table.
 Change the technical status of the job to red, delete the request from the data target and trigger the InfoPackage again. If it's a delta update it will ask for a repeat delta; click on the Yes button.
 13. Process Chains Errors occurred in Daily Master Data: This occurs when Transaction data is loaded before Master data.
 Ensure to load Master data before Transaction data. Reload data depending on update mode (Delta/Full Update)
 14. Processing (data packet) no data: This can be because of a bug in the InfoPackage; rescheduling with another InfoPackage corrects the problem.
 We can solve this type of problem by copying the InfoPackage and rescheduling the load.
 15. Database errors Enable to extend Table, enable to extend the Index: This is due to lack of space available to put further data.

16. Transaction job fails with message "NO SID FOUND FOR CERTAIN DATA RECORD": This is due to illegal characters in the data records.

 17. Error Asking for Initialization: If you want to load data with the delta update you must first initialize the delta process. Afterwards the selection conditions that were used in the initialization can no longer be changed.
 18. Job Failure at Source System: Go to the background processing overview in the source system. You can get to this with the Wizard or the menu path Environment -> Job Overview -> In the source system (This is the alternate path to see the "Job Overview" at the Source System.)
    At source system you can see the reason for the Job Failure. Thus we need to take the action accordingly.
19. Invalid characters in load: BW accepts just capital letters and certain characters. The permitted characters list can be seen via transaction RSKC.
There are several ways to solve this problem:
1)       Removing the erroneous character from R/3 (for example, the vendor number that needs to be changed can be found in the PSA, in the line shown in the error message)
2)       Changing or removing the character in the update rules (needs to be done in ABAP)
3)       Putting character to BW permitted characters, if character is really needed in BW
4)       If the bad character only happens once then it can be directly change/removed by editing the PSA
5)       Put ALL_CAPITAL in permitted characters. Needs to be tested first!
Editing and updating from the PSA: first ensure that the load has reached the PSA, then delete the request from the data target. Edit the PSA by double-clicking the field you wish to change and save. Do not mark the line and press change; this will result in incorrect data. After you have corrected the PSA, right-click on the not-yet-loaded PSA request and choose "Start immediately".
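The permitted-character rule above can be sketched as a simple filter (an illustration of the kind of check RSKC enforces; the exact permitted set below is an assumption, not the system default, which you should read from your own RSKC settings):

```python
# Assumed permitted set: upper-case letters, digits, space and a small
# set of punctuation. ALL_CAPITAL and RSKC entries would extend this.
PERMITTED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 !\"%&'()*+,-./:;<=>?_")

def invalid_chars(value):
    """Return the distinct characters in a load value that would be rejected."""
    return sorted({ch for ch in value if ch not in PERMITTED})

print(invalid_chars("ACME@corp"))  # ['@', 'c', 'o', 'p', 'r']
```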
 20. Update mode R is not supported by the extraction API: This happens when loading deltas of master data attributes (the root cause is not covered here). Replicate the DataSource, then run program RS_TRANSTRU_ACTIVATE_ALL via SE38.
Subsequently perform an initial load.
  1. Go to the info package.
  2. Delete the previous initial load
  3. Load the initial
  4. After the initial is successful, check the solution by loading a delta
SAP BI Production Support Issues
Production support errors:
1) Invalid characters while loading: When you load data you may get special characters like @#$% etc., and BW will throw an "invalid characters" error. Go to transaction RSKC, enter all the invalid characters to be permitted, and execute; they are stored in the RSALLOWEDCHAR table. Then reload the data. You won't get the error again because these characters are now eligible via RSKC.

2) IDoc or tRFC error: At the "Status" screen you may see "Sending packages from OLTP to BW lead to errors".
Diagnosis: No IDocs could be sent to SAP BW using RFC.
System response: There are IDocs in the source system ALE outbox that did not arrive in the ALE inbox of SAP BW.
Further analysis: Check the tRFC log. You can get to this log using the wizard or the menu path "Environment -> Transact. RFC -> In source system".
Removing errors: If the tRFC is incorrect, check whether the source system is completely connected to SAP BW. Check especially the authorizations of the background user in the source system.
Action to be taken: If the source system connection is OK, reload the data.

3) PROCESSING IS OVERDUE FOR PROCESSED IDOCS
Diagnosis: IDocs were found in the ALE inbox for the source system that are not updated; processing is overdue.
Error correction: Attempt to process the IDocs manually. You can process the IDocs manually using the wizard, or by selecting the IDocs with incorrect status and processing them manually.
Analysis: The IDocs are found in the ALE inbox for the source system and are not updated.
Action to be taken: Process the IDocs manually via RSMO -> Header tab -> Process manually.

4) LOCK NOT SET FOR LOADING MASTER DATA (TEXT / ATTRIBUTE / HIERARCHY)
Diagnosis: User ALEREMOTE is preventing you from loading texts to characteristic 0COSTCENTER. The lock was set by a master data loading process with the request number.
System response: For reasons of consistency, the system cannot allow the update to continue and has terminated the process.
Procedure: Wait until the process that is causing the lock is complete. You can call transaction SM12 to display a list of the locks. If a process terminates, the locks that were set by this process are reset automatically.
Analysis: The user is locked.
Action to be taken: Wait for some time and try reloading the master data manually from the InfoPackage in RSA1.

5) Flat file loading error
Diagnosis: Data records were marked as incorrect in the PSA.
System response: The data package was not updated.
Procedure: Correct the incorrect data records in the data package (for example by manually editing them in PSA maintenance). You can find the error message for each record in the PSA by double-clicking on the record status.
Analysis: The PSA contains incorrect records.
Action to be taken: There are two methods: (i) rectify the data in the source system and then load it, or (ii) correct the incorrect record in the PSA and then upload the data into the data target from there.

6) Object requested is currently locked by user ALEREMOTE
Diagnosis: An error occurred in BI while processing the data; the error is documented in an error message: "Object requested is currently locked by user ALEREMOTE".
Procedure: Look in the lock table to establish which user or transaction is using the requested lock (Tools -> Administration -> Monitor -> Lock entries).
Analysis: The object is locked; there might be some other background process running.
Action to be taken: Delete the error request, wait for some time and repeat the chain.

Idocs between R3 and BW while extraction
1) When BW executes an InfoPackage for data extraction, the BW system sends a request IDoc (RSRQST) to the ALE inbox of the source system. Information bundled in the request IDoc (RSRQST) is:
Request Id ( REQUEST )
Request Date ( REQDATE )
Request Time (REQTIME)
Info-source (ISOURCE)
Update mode (UPDMODE )
2) The source system acknowledges the receipt of this IDoc by sending an info IDoc (RSINFO) back to the BW system. The status is 0 if it is OK, or 5 for a failure.
3) Once the source system receives the request IDoc successfully, it processes it according to the information in the request. The request starts the extraction process in the source system (typically a batch job with a naming convention that begins with BI_REQ). The request IDoc status now becomes 53 (application document posted), which means no further processing of this IDoc is required.
4)The source system confirms the start of the extraction job by the source system to BW by sending another info IDoc (RSINFO) with status = 1
5)Transactional Remote Function Calls (tRFCs) extract and transfer the data to BW in data packages. Another info IDoc (RSINFO) with status = 2 sends information to BW about the data package number and number of records transferred
6)At the conclusion of the data extraction process (i.e., when all the data records are extracted and transferred to BW), an info IDoc (RSINFO) with status = 9 is sent to BW, which confirms the extraction process.
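The handshake above can be summarised as a lookup table (the status meanings are taken only from the six steps in this section; RSINFO carries more statuses than listed here):

```python
# RSINFO status values as they appear in the extraction flow described above.
RSINFO_STATUS = {
    0: "request IDoc received OK",
    1: "extraction job started in the source system",
    2: "data package info: package number and records transferred",
    5: "request IDoc failed",
    9: "extraction finished, all records transferred",
}

def describe(status):
    """Human-readable meaning of an RSINFO status from the flow above."""
    return RSINFO_STATUS.get(status, "unknown status")

print(describe(9))  # extraction finished, all records transferred
```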

Give details of standard BI reports that are relevant to Purchasing.
For any reports, just go to the RSRREPDIR table, execute it, and see the reports cube-wise/module-wise.
E.g. give cube = 0PUR*, execute, and it will display all reports. Then take the COMPUID, go to the RSZELTTXT table, give this ID and get the description.

These are some of the BI standard reports for purchasing:
Contract Details : Technical Name: 0SRCT_DS1_Q003
Consolidated Purchase Order Value Analysis (Over Three Months) : Technical Name: 0BBP_C01_Q029
ABC Analysis :Technical name: 0BBP_C01_Q041
Procurement Values with/Without Contracts : Technical Name: 0BBP_C01_Q020
Expiring Contracts : Technical Name: 0SRCT_DS1_Q004
Quantity reliability :Technical Name: 0BBP_C01_Q011
Procurement Values per Service Provider :Technical name: 0BBP_C01_Q019
Delivery Delay of Last Confirmation :Technical Name: 0BBP_C01_Q012
Procurement Card Use :Technical Name: 0BBP_C01_Q015
Procurement Values per Vendor : Technical Name: 0BBP_C01_Q005
Procurement Values per Product Category :Technical Name: 0BBP_C01_Q004

Explain the main purpose of t-code SMQ1 (qRFC queue).

SMQ1 is the Tcode where you can view the delta queue for the delta enabled extractors.

SMQ1 is generally an outbound queue used to monitor the status of the logical units of work (LUWs) for the different DataSources used in BW.

The qRFC monitor is, we can say, the same as the delta queue (RSA7), but we can't distinguish the current and repeat delta in the qRFC monitor, whereas in RSA7 we can see them separately. So it is better to use RSA7 for monitoring purposes.

In RSA7 we can also view the delta queue; so what is the difference between RSA7 and SMQ1?

Any changes or new posting will hit the qRFC queue immediately and this will be reflected in that queue. To pull to BW we need to run the job control to get it collected in RSA7. The RSA7 queue has the entries which are to be pulled to BW.

Clearing SMQ1 Queue   
In a test phase, I want to clear all the entries from a queue in SMQ1, there are 450 or so LUW's.  Is it necessary to delete line by line?  This is a slow process, maybe if this is not possible through the standard transaction then someone might have some code to do so through the qRFC API? 
If this a test system and you are sure to delete all queues, why don't you do a select all F5 in the first screen of SMQ1 and click delete.

'Select All' and 'Delete Selected' options are available in edit menu. 
Deleting an outbound queue SMQ1 

Is it OK to delete an outbound queue in SMQ1? The situation is: we tried a delta load but aborted it since it is not needed anymore. But when we check SMQ1, the queue is still there and its status is running (transaction executing). What will happen if that queue is deleted? Will it cause data loss?
Yes, you can delete the queue in SMQ1. Make sure you also delete the delta events for that object in R3AC4, so that no more deltas will come to CRM.

You can delete it. You won't lose any data, because this only deletes the queue that sends data from one system to the other; it does not delete data on the source side.

This shouldn't be a problem. And there won't be any data loss. however you might see additional table entries in your CRM or ECC (depending on what was the destination) for the LUWs that were already processed in your delta load.

Q) Difference Between BW Technical and Functional
In general, Functional means deriving the functional specification from the business requirement document. This job is normally done either by the business analyst or the system analyst, who has very good knowledge of the business. In some large organizations there is a business analyst as well as a system analyst.
Any business requirement or need for new reports or queries originates with the business user. This requirement is recorded after discussion by the business analyst. A system analyst analyses these requirements and generates the functional specification document. In the case of BW this could also be called the logical design in data modeling.
After review, this logical design is translated into a physical design. This process defines all the required dimensions, key figures, master data, etc.
Once this design is approved and signed off by the requester (users), it is converted into practically usable tasks using the SAP BW software. This is called Technical. The whole process of creating an InfoProvider, InfoObjects, InfoSources, source systems, etc. falls under the Technical domain.
What role does a consultant play if the title is BW administrator? What are his day-to-day activities, and which are the main focus areas in which he should be proficient?
BW Administrator is the person who provides authorization access to different roles and profiles depending upon the requirement.
For eg. There are two groups of people : Group A and Group B.
  Group A - Manager
  Group B - Developer
Now the Authorization or Access Rights for both the Groups are different.
So for doing this sort of activity.........we required Administrator.

Q) Common BW Support Project Errors
Below are some of the errors in a support project which will be a great help for new learners:
by: Anoo
1) RFC connection lost.
A) We can check it in the SM59 t-code:
RFC Destinations
+ R/3 connections
CRD client (our R/3 client)
Double-click, then choose Test Connection from the menu.
2) Invalid characters while loading.
A) Change them in the PSA & load them.
3) ALEREMOTE user is locked.
1) Ask your Basis team to unlock the user. It is mostly ALEREMOTE.
2) The password may have been changed.
3) There may have been too many incorrect attempts to log in as ALEREMOTE.
4) Use the SM12 t-code to find out whether there are any locks.
4) Lower case letters not allowed.
A) Uncheck the lowercase letters checkbox under the "General" tab in the InfoObject.

5) Object locked.
A) It might be locked by some other process or a user. Also check for authorizations
6) "Non-updated Idocs found in Source System".
A) Check whether any tRFCs are stuck in the source system; you can check this in SM58. If no tRFCs are stuck, trigger the load again: change the request status to red, delete the bad request, and run the load. Check whether the load is delta or full. If it is full, just go ahead with the above step.
If it is delta, check whether the failure is in the source system or in BW. If it is in the source system, go for a repeat delta. If it is in BW, you need to reset the data mart status.
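The decision steps above can be codified in a small sketch. This is a hypothetical helper, not any SAP API; the load type and failure location are assumed to be known from RSMO/SM58:

```python
def recovery_action(load_type, stuck_trfcs, failed_in=None):
    """Suggest a recovery step for a 'non-updated IDocs found' failure.

    load_type: 'full' or 'delta'; stuck_trfcs: True if tRFCs are still
    pending in SM58; failed_in: 'source' or 'bw' (delta loads only).
    """
    if stuck_trfcs:
        return "process the stuck tRFCs in SM58 first"
    if load_type == "full":
        return "set request to red, delete it, rerun the load"
    # delta load: where the failure happened decides the fix
    if failed_in == "source":
        return "run a repeat delta"
    return "reset the data mart status, then reload"
```
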

7) Extraction job aborted in r3
A) It might have been cancelled because it ran longer than expected, or it may have been cancelled by R/3 users if it was hampering performance.
8) Repeat of last delta not possible.
A) Repeat of last delta is not an option but a mandate in case the delta run failed. In such a case, we can't simply run the delta again. The system runs a repeat of the last delta, so as to collect the failed delta's data again as well as any data collected since the failure.
For a repeat of the last delta to run, the previous delta must have failed. In your case I am not sure whether the delta failed or was deleted. If it was a deletion, then we need to catch hold of the request and set its status to red. This tells the system that the delta failed (although it ran successfully, you are forcing this message to the system). Now, if you run the delta InfoPackage, it will fetch the data related to the 22nd plus all the changes from then until today.
An essential point here: you should not have run any deltas between the 22nd and now; only then will the repeat of last delta work. Otherwise the only option is to run a repair full request with data selections, if we know the selection parameters.
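The repeat-delta behaviour described above can be modelled with a toy queue. This is only a conceptual sketch of the idea, not how the BW delta queue is actually implemented:

```python
class ToyDeltaQueue:
    """Toy model of a delta queue: a normal delta hands over new
    changes; a repeat delta re-sends the last handover plus anything
    collected since (what BW does after a failed delta)."""

    def __init__(self):
        self.pending = []      # changes not yet handed over to BW
        self.last_delta = []   # kept until the next delta supersedes it

    def post(self, change):
        self.pending.append(change)

    def delta(self):
        # hand over pending changes and remember them for a possible repeat
        self.last_delta, self.pending = self.pending, []
        return list(self.last_delta)

    def repeat_delta(self):
        # the failed delta's data plus everything collected since
        self.last_delta = self.last_delta + self.pending
        self.pending = []
        return list(self.last_delta)
```
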
9) Datasource not replicated
A) Replicate the DataSource from R/3 through the source system in the AWB, assign it to the InfoSource, and activate it again.
10) Datasource/transfer structure not active.
A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it
11) ODS activation error.
A) ODS activation errors can occur mainly for the following reasons:
1. Invalid characters (#-like characters)
2. Invalid data values for units/currencies, etc.
3. Invalid values for the data types of characteristics and key figures
4. Errors in generating SID values for some data
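The first three causes can be illustrated with a small record-check sketch. The field names, the allowed character set, and the currency list here are all assumptions for illustration, not BW's actual checks:

```python
# simplified "permitted characters" set, standing in for BW's real list
ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ._-")

def activation_errors(record):
    """Return the activation problems found in one record; 'material',
    'currency', and 'amount' are hypothetical field names."""
    errors = []
    if not set(record["material"].upper()) <= ALLOWED:
        errors.append("invalid characters")
    if record["currency"] not in {"USD", "EUR", "INR"}:
        errors.append("invalid currency")
    if not isinstance(record["amount"], (int, float)):
        errors.append("invalid key figure type")
    return errors
```
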
Q) Tickets and Authorization in SAP Business Warehouse
What are tickets? Give an example.
The typical tickets in a production Support work could be:
1. Loading any of the missing master data attributes/texts.
2. Create ADHOC hierarchies.
3. Validating the data in Cubes/ODS.
4. If any of the loads runs into errors then resolve it.
5. Add/remove fields in any of the master data/ODS/Cube.
6. Data source Enhancement.
7. Create ADHOC reports.
1. Loading any of the missing master data attributes/texts - This would be done by scheduling the infopackages for the attributes/texts mentioned by the client.
2. Create ADHOC hierarchies. - Create hierarchies in RSA1 for the info-object.
3. Validating the data in Cubes/ODS. - By using the Validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors then resolve it. - Analyze the error and take suitable action.
5. Add/remove fields in any of the master data/ODS/Cube. - Depends upon the requirement
6. Data source Enhancement.
7. Create ADHOC reports. - Create some new reports based on the requirement of client.
Tickets are the tracking tool by which the user tracks the work we do. They can be change requests, data loads, or whatever. They will be of types such as critical or moderate. Critical can mean "needs to be solved in one day or half a day", depending on the client. After solving, the ticket is closed by informing the client that the issue is resolved. Tickets are raised during a support project for any issues or problems. If the support person faces an issue, he will request the operator to raise a ticket; the operator raises the ticket and assigns it to the respective person. Critical means it is the most complicated kind of issue; how you measure this depends on the client. The concept of a ticket varies from contract to contract between companies. Generally, a ticket raised by the client is handled based on its priority, like high priority, low priority, and so on. A ticket of high priority has to be resolved ASAP; a ticket of low priority should be considered only after attending to the high-priority tickets.
Checklists for a support project of BPS - To start the checklist:
1) InfoCubes / ODS / data targets
2) planning areas
3) planning levels
4) planning packages
5) planning functions
6) planning layouts
7) global planning sequences
8) profiles
9) list of reports
10) process chains
11) enhancements in update routines
12) any ABAP programs to be run and their logic
13) major BPS development issues
14) major BPS production support issues and resolutions
Q) What are the tools to download tickets from client? Are there any standard tools or it depends upon company or client...?
Yes, there are some tools for that. We use HP OpenView; it depends on what the client uses. You are right: there are many tools available and, as you said, some clients develop their own tools using Java, ASP, and other software. Some clients use just Lotus Notes. Generally, 'Vantive' is used for tracking user requests and tickets.
It has a vantive ticket ID, field for description of problem, severity for the business, priority for the user, group assigned etc.
Different technical groups will have different group ID's.
The user talks to the Level 1 helpdesk, and they raise a ticket.
If they can solve the issue, fine; otherwise the helpdesk assigns the ticket to the Level 2 technical group.
The ticket status keeps changing: open, working, resolved, on hold, back from hold, closed, etc. The way we handle tickets varies depending on the client. Some companies use SAP CS to handle tickets; we have been using Vantive. The ticket is handled with a change request; when you get the ticket, it comes with a ticket ID and the priority level with which it is to be handled. It's a totally client-specific tool. The common features here can be
- A ticket Id, 
- Priority, 
- Consultant ID/Name, 
- User ID/Name, 
- Date of Post, 
- Resolving Time etc.
Ideally there is also a knowledge repository to search for a similar problem and the solutions given if it occurred earlier. You can also have training manuals (with screenshots) for simple transactions like viewing a query or saving a workbook, so that such queries can be addressed by using them.
When the problem is logged on to you as a consultant, you need to analyze the problem, check if you have a similar problem occurred earlier and use ready solutions, find out the exact server on which this has occurred etc.
You have to solve the problem (assuming you have access to the dev system), do preliminary testing from your side, post the solution, and ask the user to test. Once tested, get it transported to production and close the ticket.

What is User Authorizations in SAP BW?
Authorizations are very important; for example, you don't want to show an important financial report to all users. You can have authorization at the object level: if you want to keep the authorization specific to an InfoObject, you have to mark that object as authorization-relevant in the RSD1 and RSSM t-codes. Similarly, you set up authorizations for certain users by giving those users certain authorizations in the PFCG t-code: you create a role, include the t-codes, BEx reports, etc. in the role, and assign this role to the user ID.

Q) Differences Between BW and BI Versions
List the differences between BW 3.5 and BI 7.0 versions.

Major differences between SAP BW 3.5 and SAP BI 7.0:
  1. In InfoSets you can now include InfoCubes as well.
  2. The Remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.
  3. The BI Accelerator (for now, only for InfoCubes) helps reduce query run time by almost a factor of 10 - 100. The BI Accelerator is a separate box and costs more; vendors for these are HP or IBM.
  4. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP person on your project for implementing the portal! :)
  5. Search functionality has improved! You can search any object, unlike in 3.5.
  6. Transformations are in and routines are passe! Yes, you can always revert to the old transactions too.
  7. The Data Warehousing Workbench replaces the Administrator Workbench.
  8. Functional enhancements have been made for the DataStore object: New type of DataStore object Enhanced settings for performance optimization of DataStore objects.
  9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12. The DataSource:
There is a new object concept for the DataSource.
Options for direct access to data have been enhanced.
From BI, remote activation of DataSources is possible in SAP source systems.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now formally known as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW). The new features / major differences include:
a) ODS renamed to DataStore.
b) Inclusion of the write-optimized DataStore, which does not have a change log and whose requests do not need any activation.
c) Unification of transfer and update rules.
d) Introduction of the "end routine" and "expert routine".
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI Accelerator, which significantly improves performance.
g) Load through the PSA has become a must. (I am not too sure about this; it looks like we no longer have the option to bypass the PSA.)
16. Load through the PSA has become mandatory; you can't skip it, and there is also no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaced the transfer and update rules. Also, in the transformation we can now use a start routine, expert routine, and end routine during data load.
New features in BI 7 compared to earlier versions:
  i. New data flow capabilities such as Data Transfer Process (DTP), Real time data Acquisition (RDA).
 ii. Enhanced and Graphical transformation capabilities such as Drag and Relate options.
iii. One level of Transformation. This replaces the Transfer Rules and Update Rules
iv. Performance optimization includes new BI Accelerator feature.
 v. User management (includes new concept for analysis authorizations) for more flexible BI end user authorizations. 
Q) What Is Different Between ODS & IC
What is the difference between an IC and an ODS? How do we load flat data to an IC and an ODS?

An ODS is a data store where you can store data at a very granular level. It has overwrite capability, and the data is stored in two-dimensional tables. A cube, on the other hand, is based on multidimensional modeling, which facilitates reporting on different dimensions. Its data is stored in aggregated form, unlike an ODS, and it has no overwrite capability. Reporting and analysis can be done on multiple dimensions, unlike with an ODS.

ODS objects are used to consolidate data. Normally an ODS contains very detailed data; technically there is the option to overwrite or add single records. InfoCubes are optimized for reporting. There are options to improve performance, like aggregates and compression, and it is not possible to replace single records: all records sent to an InfoCube are added up.

The most important difference between an ODS and an InfoCube is the existence of key fields in the ODS. In the ODS you can have up to 16 InfoObjects as key fields; any other InfoObjects will either be added or overwritten. So if you have flat files and want to be able to upload them multiple times, you should not load them directly into the InfoCube; otherwise you need to delete the old request before uploading a new one. There is the disadvantage that if you delete rows in the flat file, the rows are not deleted in the ODS.
I also use ODS-Objects to upload control data for update or transfer routines. You can simply do a select on the ODS-Table /BIC/A<ODSName>00 to get the data. 
An ODS is used as an intermediate storage area of operational data for the data warehouse. An ODS contains highly granular data. ODS objects are based on flat tables, resulting in simple modeling of the ODS. We can cleanse, transform, merge, and sort data to build staging tables that can later be used to populate an InfoCube.
An InfoCube is a multidimensional data container used as a basis for analysis and reporting. The InfoCube is a fact table and its associated dimension tables in a star schema: the fact table appears in the middle of the graphic, along with several surrounding dimension tables. The central fact table is usually very large, measured in gigabytes; it is the table from which you retrieve the interesting data. The size of the dimension tables amounts to only 1 to 5 percent of the size of the fact table. Common dimensions are unit, time, etc. There are different types of InfoCubes in BW, such as basic InfoCubes, remote InfoCubes, etc.
An ODS is a flat data container used for reporting and data cleansing/quality assurance purposes. ODS objects are not based on a star schema and are used primarily for detail reporting rather than for dimensional analysis.
An InfoCube has a fact table, which contains its facts (key figures) and relations to dimension tables. This means that an InfoCube consists of more than one table, and these tables all relate to each other. This is also called the star schema, because the dimension tables all relate to the fact table, which is the central point. A dimension is, for example, the customer dimension, which contains all data that is important for the customer.
An ODS is a flat structure: it is just one table that contains all the data. Most of the time you use an ODS for line-item data, and then you aggregate this data into an InfoCube.
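The key behavioural difference (overwrite in an ODS vs. additive updates in an InfoCube) can be sketched with two toy load functions. Plain dicts stand in for the tables; the document key and a single key figure are simplifying assumptions:

```python
def load_to_ods(ods, records):
    """ODS-style update: the key determines the row; the key figure
    of a record with the same key overwrites the existing value."""
    for key, value in records:
        ods[key] = value
    return ods

def load_to_cube(cube, records):
    """InfoCube-style update: every record is added up, never replaced."""
    for key, value in records:
        cube[key] = cube.get(key, 0) + value
    return cube
```

Loading the same document twice therefore leaves the corrected value in the ODS, but double-counts in the cube, which is why flat files loaded repeatedly should go through an ODS first.
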
Q) Difference Between PSA, ALE IDoc, ODS
What is difference between PSA and ALE IDoc?  And how data is transferd using each one of them?
The following update types are available in SAP BW:
1. PSA
2. ALE (data IDoc) 
You determine the PSA or IDoc transfer method in the transfer rule maintenance screen. The process for loading the data for both transfer methods is triggered by a request IDoc to the source system. Info IDocs are used in both transfer methods. Info IDocs are transferred exclusively using ALE
A data IDoc consists of a control record, a data record, and a status record. The control record contains administrative information such as the receiver, the sender, and the client. The status record describes the status of the IDoc, for example "Processed". If you use the PSA for data extraction, you benefit from increased flexibility (treatment of incorrect data records). Since you store the data temporarily in the PSA before updating it into the data targets, you can check the data and change it if necessary. Unlike a data request with IDocs, the PSA gives you various options for additional data updates into data targets:
InfoObject/Data Target Only - This option means that the PSA is not used as a temporary store. You choose this update type if you do not want to check the source system data for consistency and accuracy, or you have already checked this yourself and are sure that you no longer require this data since you are not going to change the structure of the data target again.
PSA and InfoObject/Data Target in Parallel (Package by Package) - BW receives the data from the source system, writes the data to the PSA and at the same time starts the update into the relevant data targets.  Therefore, this method has the best performance.
The parallel update is described in detail in the following: a dialog process is started per data package, in which the data of this package is written into the PSA table. If the data is posted successfully into the PSA table, the system starts a second, parallel dialog process that writes the data to the data targets. In this dialog process the transfer rules are applied to the data records of the data package, the data is transferred to the communication structure, and then written to the data targets. The first dialog process (data posting into the PSA) confirms to the source system that it is completed, and the source system sends a new data package to BW while the second dialog process is still updating the data into the data targets.
The parallelism relates to the data packages; that is, the system writes the data packages into the PSA table and into the data targets in parallel. Caution: the maximum number of processes set in the source system in customizing for the extractors does not restrict the number of processes in BW. Therefore, BW can require many dialog processes for the load process. Ensure that there are enough dialog processes available in the BW system; if there are not enough processes on the system side, errors occur. For this reason, this method is the least recommended.
PSA and then into InfoObject/Data Targets (Package by Package) - Updates data in series into the PSA table and into the data targets by data package. The system starts one process that writes the data packages into the PSA table. Once the data is posted successfully into the PSA table, it is then written to the data targets in the same dialog process. Updating in series gives you more control over the overall data flow compared to the parallel transfer, since there is only one process per data package in BW. In the BW system, the maximum number of dialog processes required for each data request corresponds to the setting that you made in customizing for the extractors in the control parameter maintenance screen. In contrast to the parallel update, the system confirms that the process is completed only after the data has been updated into the PSA and also into the data targets for the first data package.
Only PSA - The data is not posted further from the PSA table immediately. It is useful to transfer the data only into the PSA table if you want to check its accuracy and consistency and, if necessary, modify the data. You then have the following options for updating data from the PSA table:
Automatic update - In order to update the data automatically in the relevant data target after all data packages are in the PSA table and updated successfully there, in the scheduler when you schedule the InfoPackage, choose Update Subsequently in Data Targets on the Processing tab page.   *-- Sunil
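The serial variant ("PSA and then into data targets, package by package") can be sketched as a simple loop. This is a conceptual model only; `transfer_rule` stands in for the real transfer rules and the lists stand in for the PSA and target tables:

```python
def load_request(packages, psa, targets, transfer_rule):
    """'PSA and then into data targets': each package is written to the
    PSA first; only after that succeeds is it transformed and posted
    on, with one process per package (modelled here as one loop pass)."""
    for package in packages:
        psa.extend(package)                                 # step 1: persist raw data
        targets.extend(transfer_rule(r) for r in package)   # step 2: update targets
    return psa, targets
```
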

What is difference between PSA and ODS?
PSA: This is just an intermediate data container. This is NOT a data target. Main purpose/use is for data quality maintenance. This has the original data (unchanged) data from source system.
ODS: This is a data target. Reporting can be done through ODS. ODS data is overwriteable. For datasources for which delta is not enabled, ODS can be used to upload delta records to Infocube.  
You can do reporting in ODS. In PSA you can't do reporting directly
An ODS contains detail-level data. In the PSA, the requested data is saved unchanged from the source system: request data is stored in the transfer structure format in transparent, relational database tables in the Business Information Warehouse. The data format remains unchanged, meaning that no summarization or transformations take place.
An ODS has three tables: active data, new data, and change log. The PSA has none of these.
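The interplay of those three ODS tables can be sketched as a toy activation step. This is a simplified model (dicts and a list standing in for the tables), not the actual activation program:

```python
def activate(new_table, active_table, change_log):
    """Move requests from the ODS 'new data' table into the active
    table, writing (key, before, after) entries to the change log so
    that deltas can later be derived from it."""
    for key, value in new_table.items():
        if key in active_table and active_table[key] != value:
            change_log.append((key, active_table[key], value))
        elif key not in active_table:
            change_log.append((key, None, value))
        active_table[key] = value
    new_table.clear()   # new data table is emptied after activation
    return active_table, change_log
```
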
Q) Daily Tasks in Support Role and Infopackage Failures
1. Why there is frequent load failures during extractions? and how they are going to analyse them? 
If these failures are related to data, there might be a data inconsistency in the source system, even though you handle it properly in the transfer rules. You can monitor these issues in the RSMO t-code and in the PSA (failed records), and update from there.
If you are talking about the whole extraction process, there might be issues with work process scheduling and IDoc transfer from the source system to the target system. These loads can be re-initiated by cancelling that specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.
2. Can anyone explain briefly about 0record modes in ODS? 
0RECORDMODE is an SAP-delivered InfoObject that is added to the ODS object on activation. Using it, the ODS is updated correctly during delta loads. Among its possible values are X, D, and R: D and R are for deleting and reversing records, while X (the before image) is skipped during an overwrite delta load.
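A simplified sketch of how those record-mode values could drive an overwrite ODS update (the value set and the overwrite behaviour are simplified assumptions; the real handling covers more cases):

```python
def apply_delta(active, key, value, recordmode):
    """Apply one delta record to an ODS active table according to a
    few common 0RECORDMODE values (simplified)."""
    if recordmode == "X":          # before image: skipped for overwrite ODS
        return active
    if recordmode in ("D", "R"):   # delete / reversal: record is removed
        active.pop(key, None)
        return active
    active[key] = value            # ''/'N': after or new image overwrites
    return active
```
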
3. What is reconciliation in bw? What the procedure to do reconciliation? 
Reconciliation is the process of comparing the data after it is transferred to the BW system with the source system. The procedure: you can check the data with SE16 if the data comes from a particular table only. If the DataSource is a standard DataSource, the data comes from many tables; in that scenario, I used to ask the R/3 consultant to report on those particular selections, get the data in an Excel sheet, and reconcile it with the data in BW. If you are familiar with the R/3 reports, you are good to go, meaning you need not be dependent on the R/3 consultant (it is better to know which reports to run to check the data).
4. What is the daily task we do in production support.How many times we will extract the data at what times. 
It depends. Data load timings are in the range of 30 minutes to 8 hours. This time depends on the number of records and the kind of transfer rules you have provided. If the transfer rules are roundabout and the update rules have calculations for customized key figures, long times are to be expected.
Usually you need to work in RSMO, see which records are failing, and update from the PSA.
5. What are some of the frequent failures and errors?
There is no fixed reason for a load to fail; from an interview perspective I would answer it this way:
a) Loads can fail due to invalid characters
b) Because of a deadlock in the system
c) Because of a previous load failure, if the load is dependent on other loads
d) Because of erroneous records
e) Because of RFC connections
Q) Questions Answers on SAP BW
What is the purpose of setup tables?
Setup tables are a kind of interface between the extractor and the application tables. The LO extractor takes data from the setup tables during initialization and full upload, so hitting the application tables for selection is avoided. As these tables are required only for full and init loads, you can delete the data after loading in order to avoid duplicates. Setup tables are filled with data from the application tables: they sit on top of the actual application tables (i.e. the OLTP tables storing transaction records) and are filled during the setup run. Normally it is good practice to delete the existing setup tables before executing the setup runs, so as to avoid duplicate records for the same selections.
We are having Cube. what is the need to use ODS. what is the necessary to use ODS though we are having cube?
1)  Remember, a cube has aggregated data and an ODS has granular data.
2)  In the update rules of an InfoCube you do not have an overwrite option, whereas for an ODS the default is overwrite.
What is the importance of transaction RSKC? How is it useful in resolving issues with special characters?
Using this t-code, you can allow the BW system to accept special characters in the data coming from source systems. The list of characters can be obtained after analyzing the source system's data, or can be confirmed with the client during the design-specs stage.
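The effect of maintaining extra characters can be sketched as a membership check. The permitted character set below is an illustrative stand-in, not BW's exact default list:

```python
# stand-in for BW's default permitted characters
PERMITTED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 !\"%&'()*+,-./:;<=>?_")

def is_loadable(value, extra_allowed=""):
    """True if every character is in the permitted set or among the
    extra characters maintained (conceptually, via RSKC)."""
    allowed = PERMITTED | set(extra_allowed)
    return set(value.upper()) <= allowed
```

A value like `PLANT#01` would fail a load until `#` is added to the maintained list.
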
How to handle double data loading in SAP BW?
What do you mean by SAP exit, User exit, Customer exit?
2A. These exits are customized for handling data transfer in various scenarios (e.g. replacement path in reports - a way to pass a variable to a BW report). Some can be developed by a BW/ABAP developer and inserted wherever required. Some of these programs are already available as part of SAP Business Content; these are called SAP exits. Depending on the requirement, we may need to extend some exits and customize them.
What are some of the production support isues-trouble shooting guide?
3A. Production issues are different for each BW project and most common issues can be obtained from some of the previous mails. (data load issues).
When we go for Business content extraction and when go for LO/COPA extraction?
What are some of the few infocube name in SD and MM that we use for extraction and load them to BW?
How to create indexes on ODS and fact tables?
What are data load monitor (RSMO or RSMON)?
LIS extraction is old school and not preferred in big BW systems; here you can expect issues related to performance and data duplication in the setup tables.
LO extraction came with most of the advantages: using it, you can extend existing extract structures and use customized DataSources.
If you can fetch all required data elements using the SAP-provided extract structures, you don't need to write custom extractions. You can get a clear idea of this after analyzing the source system's data fields and the required fields in the target system's data target structure.
MM - 0PUR_C01 (Purchasing data), 0PUR_C03 (Vendor Evaluation)
SD - 0SD_C01 (Customer), 0SD_C03 (Sales Overview), etc.
You can do this by choosing "Manage Data Target" option  and click on few buttons available in "performance" tab. 
RSMO is used to monitor data flow to target system from source system. You can see data by request, source system, time request id etc.... just play with this..

What is KPI?
KPI stands for Key Performance Indicator.
KPIs are values companies use to manage their business, e.g. net profit.
In detail: 
Stands for Key Performance Indicators. A KPI is used to measure how well an organization or individual is accomplishing its goals and objectives. Organizations and businesses typically outline a number of KPIs to evaluate progress made in areas where performance is harder to measure. 
For example, job performance, consumer satisfaction and public reputation can be determined using a set of defined KPIs. Additionally, KPI can be used to specify objective organizational and individual goals such as sales, earnings, profits, market share and similar objectives. 
KPIs selected must reflect the organization's goals, they must be key to its success, and they must be measurable. Key performance indicators usually are long-term considerations for an organization   
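As a tiny worked example of a measurable KPI, net profit margin is just net profit expressed as a percentage of revenue (the figures below are made up for illustration):

```python
def net_profit_margin(revenue, costs):
    """One classic KPI: net profit as a percentage of revenue."""
    return round((revenue - costs) / revenue * 100, 1)
```

For example, revenue of 200,000 against costs of 150,000 gives a margin of 25%.
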
Q) Business Warehouse SAP Interview
1.  How to convert a BeX query Global structure to local structure  (Steps involved)?
To convert a BEx query global structure to a local structure, the steps are:
You use a local structure when you want to add structure elements that are unique to the specific query. Changing a global structure changes the structure for all the queries that use it; that is the reason you go for a local structure.
Coming to the navigation part--
1. In the BEx Analyzer, from the SAP Business Explorer toolbar, choose the Open Query icon (the icon that looks like a folder).
2. On the SAP BEx Open dialog box, choose Queries.
3. Select the desired InfoCube and choose New.
4. On the Define the Query screen, in the left frame, expand the Structure node.
5. Drag and drop the desired structure into either the Rows or Columns frame.
6. Select the global structure, right-click, and choose Remove Reference. A local structure is created.
Remember that you cannot revert back the changes made to global structure in this regard. You will have to delete the local structure and then drag n drop global structure into query definition.
When you try to save a global structure, a dialog box prompts you to confirm changes to all queries; that is how you identify a global structure.
2.  I have an RKF and a CKF in a query. If the report gives an error, which one should be checked first, the RKF or the CKF, and why? (This was asked in one of the interviews.)
An RKF consists of a key figure restricted with certain characteristic combinations; a CKF holds calculations that fully use various key figures.
They are not interdependent on each other; you can have both at the same time.
To my knowledge there is no documented limit on the number of RKFs and CKFs; the only concern would be performance. Restricted and calculated key figures would not be an issue; however, the number of key figures that you can have in a cube is limited to around 248.
Restricted key figures restrict the key figure values based on a characteristic. (Remember, it won't restrict the query, only the KF values.)
Ex: You can restrict the values based on particular month
Now I create an RKF like this (ZRKF):
restrict it with a funds KF,
with a period variable entered by the user.
This is defined globally and can be used in any of the queries on that InfoProvider. In the columns, let's assume there are three company codes. In a new selection, I drag in:
Company Code1
Similarly I do for other company codes.
Which means I have created a RKF once and using it in different ways in different columns(restricting with other chars too)
In the properties I give the relevant currency to be converted, which is displayed after converting the value from the native currency to the target currency.
Similarly for other two columns with remaining company codes.
3.  What is the use of Define cell in BeX & where it is useful?
Use of cells in BEx:
When you define selection criteria and formulas for structural components and there are two structural components of a query, generic cell definitions are created at the intersection of the structural components that determine the values to be presented in the cell.
Cell-specific definitions allow you to define explicit formulas, along with implicit cell definition, and selection conditions for cells and in this way, to override implicitly created cell values. This function allows you to design much more detailed queries.
In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed and serve as containers for help selections or help formulas.
You need two structures to enable cell editor in bex. In every query you have one structure for key figures, then you have to do another structure with selections or formulas inside.
With two structures, the cross of them results in a fixed reporting area of n rows * m columns. The cross of any row with any column can be defined as a formula in the cell editor.
This is useful when you want a particular cell to behave differently from the general behaviour described in your query definition.
For example imagine you have the following where % is a formula kfB/KfA * 100.
     kfA  kfB    %
chA    6    4   66%
chB   10    2   20%
chC    8    4   50%
Now suppose you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor you can then write a formula specifically for that cell as the sum of the two cells above it: chC/% = chA/% + chB/%. Then:
     kfA  kfB    %
chA    6    4   66%
chB   10    2   20%
chC    8    4   86%
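The arithmetic in this example can be sketched in Python (purely illustrative; BEx evaluates cell definitions inside the OLAP processor, and the row and key-figure names here are simply taken from the example above):

```python
# Rows of the example reporting area: characteristic -> key figure values.
rows = {
    "chA": {"kfA": 6, "kfB": 4},
    "chB": {"kfA": 10, "kfB": 2},
    "chC": {"kfA": 8, "kfB": 4},
}

def pct(r):
    # General formula of the % column: kfB / kfA * 100,
    # truncated to whole percent as in the example table.
    return int(r["kfB"] / r["kfA"] * 100)

# First the implicit (general) definition fills every cell of the % column...
result = {name: pct(vals) for name, vals in rows.items()}

# ...then the cell-specific definition overrides one intersection:
# chC/% = chA/% + chB/%
result["chC"] = result["chA"] + result["chB"]
```

Running this yields 66%, 20% and 86%, matching the second table.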
Q) SAP BW Interview Questions 2
1) What is a process chain? How many types are there? How many do we use in real-time scenarios? Can we define interdependent processes, with tasks like data loading, cube compression, index maintenance, and master data & ODS activation, with the best possible performance & data integrity?
2) What is data integrity and how can we achieve it?
3) What is index maintenance and what is its purpose in real time?
4) When and why do we use InfoCube compression in real time?
5) What is meant by data modelling and what does the consultant do in data modelling?
6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate Business Content)?
7) What is fine-tuning, how many types are there, and for what purpose do we tune in real time? Can tuning only be done through InfoCube partitions and creating aggregates, or by other means too?
8) What is meant by a MultiProvider and for what purpose do we use one?
9) What are scheduled and monitored data loads, and for what purpose?
Ans # 1:
Process chains exist in the Administrator Workbench. Using them we can automate ETL processes, and they allow BW consultants to schedule and monitor all activities (T-Code: RSPC).
PROCESS CHAIN - Before defining a PROCESS CHAIN, let us define a PROCESS within a process chain: it is a procedure, either within SAP or external to it, with a start and an end. This process runs in the background.
A PROCESS CHAIN is a set of such processes linked together in a chain. In other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.
This is normally done in order to automate a job or task that has to execute more than one process to complete.
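The dependency idea behind a process chain can be sketched outside of BW as follows (hypothetical process names; real chains are built, scheduled and monitored in RSPC, not in Python):

```python
# Sketch of "each process depends on the previous one": steps run in order,
# and a failed step blocks its successors, as in a BW process chain.
log = []

def load_data():      log.append("load")
def compress_cube():  log.append("compress")
def build_indexes():  log.append("index")

# The chain: ordering encodes the dependencies.
chain = [load_data, compress_cube, build_indexes]

def run_chain(chain):
    for step in chain:
        try:
            step()
        except Exception:
            # A failure stops the chain; later steps never run.
            break

run_chain(chain)
```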
1. Check the Source System for that particular PC.
2. Select the request ID (it will be in Header Tab) of PC
3. Go to SM37 of Source System.
4. Double Click on the Job.
5. You will navigate to a screen
6.  In that Click "Job Details" button
7. A small Pop-up Window comes
8. In the Pop-up screen, take a note of
a) Executing Server
b) WP Number/PID
9. Open a new SM37 session (/OSM37 command)
10. In it, click the "Application Servers" button
11. You can see the different Application Servers.
12. Go to the Executing Server (point 8 (a)) and double-click it
13. Go to the PID (point 8 (b))
14. On the left-most side you can see a check box
15. "Check" the check box
16. On the menu bar you can see "Process"
17. Under "Process" you have the option "Cancel with Core"
18. Click on that option.
Ans # 2:
Data integrity is about eliminating duplicate entries in the database and achieving normalization.
Ans # 4:
InfoCube compression moves the request data of an InfoCube from the F fact table into the E fact table, eliminating the request dimension and aggregating duplicate records. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is: once you compress, you can no longer delete the compressed data by request. You are safe as long as you don't have any error in modeling.
This compression can be done through Process Chain and also manually.
Tips by: Anand
Indexing is a process where data is stored with indexes to speed up retrieval. E.g. a phone book: when we write down somebody's number, Prasad's number goes under "P" and Rajesh's number under "R". The phone book works by indexing; similarly, storing data by creating indexes on it is called indexing.
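The phone-book analogy can be shown as a tiny index (an illustration only, nothing BW-specific): grouping names by first letter means a lookup scans one bucket instead of the whole list.

```python
from collections import defaultdict

names = ["Prasad", "Rajesh", "Priya", "Ravi"]

# Build the "phone book": first letter -> names filed under it.
index = defaultdict(list)
for name in names:
    index[name[0]].append(name)

# A lookup now touches only the "P" bucket, not every entry.
p_names = index["P"]
```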
Data modelling is a process where you collect the facts, the attributes associated with the facts, navigation attributes, etc., and after collecting all of these you decide which ones you will be using. This collection is done by interviewing the end users, the power users, the stakeholders, etc. It is generally done by the Team Lead, Project Manager, or sometimes a Sr. Consultant (4-5 yrs of exp). So if you are new you don't have to worry about it, but do remember that it is an important aspect of any data warehousing solution, so make sure that you have read about data modelling before attending any interview or even starting to work.
We can enhance Business Content by adding fields to it. Since BC is delivered by SAP, it may not contain all the InfoObjects, InfoCubes, etc. that you want to use according to your company's data model. E.g. you have a customer InfoCube (in BC), but your company uses an attribute for, say, apartment number; then instead of constructing a whole new InfoCube you can add that field to the existing BC InfoCube and get going.
Tuning is one of the most important processes in BW. Tuning is done to increase efficiency: lowering the time for loading data into a cube, lowering the time for accessing a query, lowering the time for doing a drill-down, etc. Fine-tuning = lowering time (for everything possible). Tuning can be done by many means, not only by partitions and aggregates; for example, compression, etc.
A MultiProvider can combine various InfoProviders for reporting purposes: you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or InfoCubes, ODS objects and master data, etc.
A scheduled data load means you have scheduled the loading of data for some particular date and time; you can do it in the scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or other loads, by using transaction RSMON.
Q) What is ODS?
It is an operational data store. The ODS is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes and that allows BEx (Business Explorer) reporting. It is not based on the star schema and is used primarily for detailed reporting rather than for dimensional analysis. ODS objects do not aggregate data as InfoCubes do. Data is loaded into an ODS object by inserting new records, updating existing records, or deleting old records, as specified by the RECORDMODE value.    *-- Viji
1. How much time does it take to extract 1 million of records from an infocube?
2. How much does it take to load (before question extract) 1 million of records to an infocube?
3. What are the four ASAP Methodologies?
4. How do you measure the size of infocube?
5. Difference between infocube and ODS? 
6. Difference between display attributes and navigational attributes?  *-- Kiran
1. Ans: It depends; if you have complex coding in update rules it will take longer, or else it will take less than 30 mins.
3. Ans:
Project plan
Requirements gathering
Gap Analysis
Project Realization
4. Ans:
In no of records
5. Ans:
An InfoCube is structured as a star schema (extended), where a fact table is surrounded by different dimension tables which connect to SIDs. Data-wise, you will have aggregated data in the cubes.
An ODS is a flat structure (flat table) with no star schema concept, and it holds granular data (detailed level).
6. Ans:
A display attribute is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain a navigational attribute in the cube as a characteristic (that is the advantage) in order to drill down.
*-- Ravi
Ans: But how is that possible? If you load it manually twice, then you can delete it by request.
Sure you can.  ODS is nothing but a table.
Yes of course.  For example, for loading text and hierarchies we use different data sources but the same infosource.
Data flows from transactional system to analytical system(BW).  DS on the transactional system needs to be replicated on BW side and attached to infosource and update rules respectively.
Full and delta.
Q7. As we use SBWNN, SBIW1, SBIW2 for delta update in LIS, what is the procedure in LO-Cockpit?
There is no LIS in the LO-Cockpit. We have DataSources there, which can be maintained (append fields). Refer to the white paper on LO-Cockpit extractions.
It holds granular data.
In the PSA table.
The volume of data one data target holds(in no.of records)
Basic, Virtual (remote, SAP remote and multi)
Can be made of ODSs and objects
In R/3 or in BW.  2 in R/3 and 2 in BW
They exist in the InfoObject, transfer routines, update routines and the start routine.
Rows and columns; you can create structures.
Variable with default entry
Replacement path
SAP exit
Customer exit
You can drill down to any level you want using Nav attributes and jump targets
Indexes are database indexes, which help in retrieving data quickly.
Refer to the documentation.
KPIs indicate the performance of a company. These are key figures.
After image(correct me if I am wrong)
Refer to the documentation.
ST*,Number ranges,delete indexes before load ..etc
There should be some tool to run the job daily(SM37 jobs)
Profile generator
What are you expecting??
Of course
Refer to the help. What are you expecting? A MultiCube works on a union condition.
Dev ---> Q and Dev ---> P
Q) BW Query Performance
1. What kind of tools are available to monitor the overall Query Performance?
o BW Statistics
o BW Workload Analysis in ST03N (Use Export Mode!)
o Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
o Yes, you need to turn on the BW Statistics:
  RSA1, choose Tools -> BW statistics for InfoCubes
  (Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyse a specific query in detail?
o Transaction RSRT
o Transaction RSRTRACE
4. Do I have an overall query performance problem?
o Use ST03N -> BW System load values to recognize the problem. Use the
  numbers given in the table 'Reporting - InfoCubes: Share of total time (s)'
  to check whether one of the columns %OLAP, %DB, %Frontend shows a high
  number for all InfoCubes.
o You need to run ST03N in expert mode to get these values
5. What can I do if the database proportion is high for all queries?
o Check whether the database statistics strategy is set up properly for your DB platform
  (above all for the BW-specific tables)
o Check whether the database parameter setup accords with SAP Notes and SAP Services (EarlyWatch)
o Check whether buffers, I/O, CPU, and memory on the database server are exhausted
o Check whether cube compression is used regularly
o Check whether database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
o Check whether the CPUs on the application server are exhausted
o Check whether the SAP R/3 memory setup is done properly (use TX ST02)
o Check whether the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,
  Customizing default)
7. What can I do if the client proportion is high for all queries?
o Check whether most of your clients are connected via a WAN Connection and the amount 
  of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
o Again you can use ST03N -> BW System Load
o Depending on the time frame you select, you get historical data or
  current data.
o To get to a specific query you need to drill down using the InfoCube
o Use Aggregation Query to get more runtime information about a
  single query. Use tab All data to get to the details.
  (DB, OLAP, and Frontend time, plus Select/ Transferred records,
  plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
   values for a specific query?
(Use Details to get the runtime segments)
o High Database Runtime
o High OLAP Runtime
o High Frontend Runtime
10. What can I do if a query has a high database runtime?
o Check if an aggregate is suitable (use All data to get values
  "selected records to transferred records", a high number here would
  be an indicator for query performance improvement using an aggregate)
o Check if database statistics are up to date for the
  Cube/Aggregate; use TX RSRV output (use the database check for statistics
  and indexes)
o Check if the read mode of the query is unfavourable - Recommended (H)
11. What can I do if a query has a high OLAP runtime?
o Check if a high number of Cells transferred to the OLAP (use
  "All data" to get value "No. of Cells")
o Use RSRT technical Information to check if any extra OLAP-processing
  is necessary (Stock Query, Exception Aggregation, Calc. before
  Aggregation, Virtual Char. Key Figures, Attributes in Calculated
  Key Figs, Time-dependent Currency Translation)
  together with a high number of records transferred.
o Check if user exit usage is involved in the OLAP runtime
o Check if large hierarchies are used and the entry hierarchy level is
  as deep as possible. This limits the levels of the hierarchy that must
  be processed. Use SE16 on the inclusion tables and use the List of
  Values feature on the columns successor and predecessor to see which
  entry level of the hierarchy is used.
o Check if a proper index on the inclusion table exists
12. What can I do if a query has a high frontend runtime?
o Check if a very high number of cells and formattings are transferred
  to the frontend (use "All data" to get the value "No. of Cells"), which
  causes high network and frontend (processing) runtime.
o Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
o Check if the bandwidth for WAN connection is sufficient

Q) The Three Layers of SAP BW
SAP BW has three layers:
  • Business Explorer: As the top layer in the SAP BW architecture, the Business Explorer (BEx) serves as the reporting environment (presentation and analysis) for end users. It consists of the BEx Analyzer, BEx Browser, BEx Web, and BEx Map for analysis and reporting activities.

  • Business Information Warehouse Server: The SAP BW server, as the middle layer, has two primary roles:

• Data warehouse management and administration: These tasks are handled by the production data extractor (a set of programs for the extraction of data from R/3 OLTP applications such as logistics, and controlling), the staging engine, and the Administrator Workbench.
• Data storage and representation: These tasks are handled by the InfoCubes in conjunction with the data manager, Metadata repository, and Operational Data Store (ODS).
  • Source Systems: The source systems, as the bottom layer, serve as the data sources for raw business data. SAP BW supports various data sources:

• R/3 Systems as of Release 3.1H (with Business Content) and R/3 Systems prior to Release 3.1H (SAP BW regards them as external systems)
• Non-SAP systems or external systems
• Other SAP components (such as mySAP SCM, mySAP SEM, mySAP CRM, or R/3 components) or another SAP BW system.
Q) What Is SPRO In BW Project?
1) What is spro?
2) How to use in bw project?
3) What is difference between idoc and psa in transfer methods?
1.  SPRO is the transaction code for the Implementation Guide, where you do configuration settings.
* Type SPRO in the transaction box and you will get the Customizing screen:
   Execute Project.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to do the configuration settings:
   SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information Warehouse.
2.  SPRO is used to configure the following settings :
* General settings: like printer settings, fiscal year settings, ODS object settings, authorisation settings, settings for displaying SAP documents, etc.
* Links to other systems: like links between flat files and BW systems, R/3 and BW, and other data sources; links between the BW system and Microsoft Analysis Services, Crystal Enterprise, etc.
* UD Connect settings: like configuring BI Java Connectors, establishing the RFC destination to SAP BW for the J2EE Engine, and installation of availability monitoring for UD Connect.
* Automated processes: like settings for batch processes, background processes, etc.
* Transport Settings : like settings for source system name change after transport and create destination for import post-processing.
* Reporting Relevant Settings : Like Bex Settings, General Reporting Settings.
* Settings for Business Content : which is already provided by SAP.
3.  PSA (Persistent Staging Area): a holding area of raw data. It contains detailed requests in the format of the transfer structure. It is defined per DataSource and source system, and is source-system dependent.
IDocs (Intermediate Documents): data structures used as API working storage for applications that need to move data into or out of SAP systems.
Q) What the difference between data validation and data reconciliation?
By : Anuradha
Data validation is nothing but:
Validation enforces solid data entry according to special rules. Based on the defined rules, the system evaluates an entry, and a message can appear on the user's terminal if a check statement is not met. A validation step contains a prerequisite statement and a check statement; both of them are defined using Boolean logic or by calling an ABAP/4 form.
Data Reconciliation:
Reconciliation is the process of comparing the data, after it is transferred to the BW system, with the source system. To do reconciliation, you can either check the data via SE16 if the data comes from a particular table only, or, if the DataSource is a standard DataSource whose data comes from many tables, ask the R/3 consultant to report on those particular selections, get the data into an Excel sheet, and reconcile it with the data in BW. If you are familiar with the R/3 reports, you are good to go, meaning you need not depend on the R/3 consultant (it is better to know which reports to run to check the data).

How to do Reconciliation?
There are two ways for Reconciliation:
1) Create a basic cube and load the data from the source system. In the same way, create another cube of type virtual cube. After creating those two cubes, create one MultiProvider using the basic cube and the virtual cube; in the identification of the MultiProvider select the two cubes. Then go to reporting, create a query, and write a formula to compare the values of these two cubes.
2) See the contents of the basic cube in BW. In that screen there is a "SAVE AS" button; click it, select "Spreadsheet", and save as .xls. On the source system side, go to T-Code RSA3, select the DataSource you assigned to the basic cube, click execute, and see the contents.
Now again select the "SAVE AS" button, choose the spreadsheet format, and save as an .xls file. Now your two flat files are ready. Move one file into the other by using "Move or Copy", so the two flat files are in one Excel workbook in different sheets. Then write a formula to compare the values of sheet 1 and sheet 2, in either sheet 1 or sheet 2.
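The spreadsheet comparison in step 2 boils down to a keyed value match. A minimal sketch, with made-up records standing in for the two exported sheets (real data would come from the BW cube contents and the RSA3 extract):

```python
# Hypothetical exports: key (e.g. document number) -> amount.
bw_rows = {"4711": 1200.0, "4712": 850.0, "4713": 300.0}   # from the BW cube
r3_rows = {"4711": 1200.0, "4712": 900.0, "4713": 300.0}   # from RSA3

# Reconciliation: keep only the keys whose values differ between the systems
# (a missing key on either side also shows up, as a None on that side).
mismatches = {
    key: (bw_rows.get(key), r3_rows.get(key))
    for key in set(bw_rows) | set(r3_rows)
    if bw_rows.get(key) != r3_rows.get(key)
}
```

An empty `mismatches` dict would mean BW and R/3 agree for the selection.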

Q) How can we compare the R/3 data with B/W after the data loading.  What are the procedures to be followed?
Data validation Steps:
Following step-by-step solutions are only an example.
1. Run transaction SE11 in the R/3 system to create a view based on the tables COEP and COBK.
These two tables are the source information for extractor 0CO_OM_CCA_9 (CO costs on the line item level).
2. Define selection conditions. 
Only CO objects with prefix 'KS' or 'KL' should be selected, because only these objects are relevant for the extraction and for the reconciliation.
'KS' denotes cost center objects; 'KL' denotes cost center/activity type objects.
3. Setting the ‘Maintenance Status’.  Status ‘Display/Maintenance Allowed’ allows you to display and edit this view.
4. Create a DataSource in transaction RSO2. 
Assign the DataSource to the appropriate application component. 
The view, which is created by following the steps above, should be used in this field.
Click the ‘Save’ button to save this DataSource. 
You will get a pop-up for the development class. 
For testing purposes you can save this DataSource as a local object. If you want to transport this DataSource into any other systems it should be saved with the appropriate development class.
5. Replicate this new DataSource 'ZCOVA_DS1' to BI and create the InfoSource / transfer rule in the BI system.
6. Because the value of InfoObject ’0costcenter’ is determined in extractor 0CO_OM_CCA_9 and this logic cannot be replaced by the view
‘ZCORECONCILIATION’, this InfoObject has to be determined in the transfer rule using formula:
0costcenter = substring (object number, 6, 10).
7. InfoObject ‘0fiscvarnt’ can be assigned to a constant for testing purposes. 
In this example we assume that K4 is the fiscal year variant for the company. 
You can also determine the value of InfoObject ‘0fiscvarnt’ by reading the attribute value of InfoObject ‘0COMP_CODE’ which is available in the transfer structure.
8. In ‘ZCOVA_DS1’ InfoObject ‘0Fiscper’ (fiscal period) can be added to the InfoSource to make the comparison fairly easy. This InfoObject can be determined in the
transfer rule using formula: 
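The substring rule from step 6 can be sketched in Python. The object number value below is made up; the sketch assumes the usual CO object number layout, where the prefix 'KS' (2 characters) plus the 4-character controlling area occupy the first 6 characters, so offset 6 with length 10 yields the cost center:

```python
def costcenter_from_objnr(objnr: str) -> str:
    # 0COSTCENTER = substring(object number, 6, 10):
    # skip the first 6 characters, take the next 10.
    return objnr[6:6 + 10]

# Hypothetical example: 'KS' + controlling area 1000 + cost center 0000004711.
objnr = "KS10000000004711"
cost_center = costcenter_from_objnr(objnr)
```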

Q) What are all the differences between RSA5 and RSA6?
RSA5 - contains all the Business Content DataSources in the delivered version.
RSA6 - after activation from RSA5, the delivered objects appear in RSA6 as the active version.
T-code used in Extraction:
RSO2 -> Generic DataSource
SE11 -> Database dictionary
SE37 -> Function module
LBWE -> Logistic Datasource
LBWG -> Deletion of setup table data
RSA5 --- Transfer Business Content DataSources
** Makes these DataSources available to the BW side for extracting data.
RSA6 --- DataSource enhancement
** Enhancement of a DataSource to include extra fields in it; editing, displaying, and test extraction of a DataSource (RSA3) are functions available in RSA6.
In transaction RSA5 you see the DataSources in their delivered state, whereas in transaction RSA6 (Post-Process DataSources and Hierarchy) you can view the DataSources in their activated state, the activation being done in RSA5 only.

Elaborating the main point.
RSA5 - Transaction from which business content data sources delivered by SAP can be activated/installed for productive use with live data.
RSA6 - Transaction to maintain the currently active data sources in your system.  Here you would find custom and SAP delivered installed datasources. You could branch to changing the data source from here.
Now, in RSA6 you can see not only the SAP DataSources currently active in the system, but also the custom DataSources (Y* or Z*) that you have created and activated in the system.
So, in a nutshell we can say that RSA5 gives all the DELIVERED datasources in the system and RSA6 gives all the ACTIVE datasources available for use.
RSA5 ->
In the BW system, we use transaction RSA5 (Install DataSources from Business Content) to install the DataSources from the application components.
We have to install Business Content using RSA5 before we can use it in SAP R/3.
By installing Business Content (BC) we change the version of the BC component from delivered ("D") to active. No modifications to the DataSource are possible here. Only after installing can we use the DataSources in LBWE.
RSA6 ->
Once you activate a DataSource in RSA5, it becomes available in RSA6.
RSA6 lists the active DataSources; as you can see in the menu, the functions are: create application component, display/change DataSource, test extraction (similar to RSA3), and enhance DataSource. Here the user can modify the DataSources: in RSA6 you can, e.g., append fields, hide fields, and make fields selection-enabled.
The function is the same in R/3 and BW.

Q) Data load in SAP BW
What is the strategy to load, for example, 500,000 entries into BW (material master, transactional data)?
How can these entries be separated into small packages and transferred to BW automatically?
Is there some strategy for that?
Is there some configuration for that?
See OSS note 411464 (an example concerning info structures from purchasing documents) on creating smaller jobs in order to integrate a large amount of data.
For example, if you wish to split your 500,000 entries in five intervals:
- Create 5 variants in RMCENEAU for each interval
- Create 5 jobs (SM36) that execute RMCENEAU for each variant
- Schedule your jobs
- You can then see the result in RSA3
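The interval split behind the five variants can be sketched as follows (the number-range boundaries are illustrative; in practice each range becomes the selection of one RMCENEAU variant):

```python
def make_intervals(total: int, parts: int):
    # Split a contiguous number range 1..total into `parts` equal intervals.
    size = total // parts
    return [(i * size + 1, (i + 1) * size) for i in range(parts)]

# 500,000 entries split into 5 ranges, one per variant/job.
intervals = make_intervals(500_000, 5)
```

Each `(low, high)` pair would be entered as the selection range of one variant, and the five SM36 jobs then run independently.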
Loading Data From a Data Target
Can you please guide me in carrying out this activity with some important steps?
I have a few requests without the data mart status. How can I use only them & create an export DataSource?
Can you please tell me how my data mart mechanism will work after the loading?
Follow these steps:

1. Select the source data target (in your case X); in the context menu click on Create Export DataSource.
A DataSource (InfoSource) with the name 8(name of data target) will be generated.

2. In the Modelling menu click on Source Systems, select the logical source system of your BW server, and in the context menu click on Replicate DataSources.

3. In Data Modelling click on InfoSources and search for InfoSource 8(name of data target). If it is not found in the search, refresh. If you still cannot find it, then from Data Modelling click on InfoSources, in the right-side window again select InfoSources, and in the context menu click on Insert Lost Nodes.
Now search and you will definitely find it.

4. Now go to the receiving data targets (in your case Y1, Y2, Y3) and create update rules.
In the next screen select the InfoCube radio button and enter the name of the source data target (in your case X). Click the Next Screen button (Shift+F7), here select the Addition radio button, then select the Source Key Field radio button and map the key fields from the source cube to the target cube.

5. In Data Modelling click on InfoSources, select the InfoSource which you replicated earlier, and create an InfoPackage to load the data.
Q) SAP R/3 BW Source and SID Table
R/3 Source Table.field - How To Find?
What is the quickest way to find the R/3 source table and field name for a field appearing on the BW InfoSource?
By: Sahil
With some ABAP knowledge you can find some info:
1. Start ST05 (SQL trace) in R/3
2. Start RSA3 in R/3 for just some records
3. After RSA3 finishes, stop the SQL trace in ST05
4. Analyze the SQL statements in ST05
You can find the tables, but this process doesn't help e.g. for the LO-Cockpit DataSources.
Explain tables and sid tables.
A basic cube consists of a fact table surrounded by dimension tables. SID tables link these dimension tables to the master data tables.
A SID is a surrogate ID generated by the system. The SID tables are created when we create a master data InfoObject. In the SAP BW star schema, a distinction is made between two self-contained areas: the InfoCube, and the master data tables/SID tables.
The master data doesn't reside in the star schema, but in separate tables which are shared across all the star schemas in SAP BW. A numeric ID is generated which connects the dimension tables of the InfoCube to the master data tables.
The dimension tables contain the DIM ID and the SID of a particular InfoObject. Using this SID, the attributes and texts of a master data InfoObject are accessed.
The SID table is connected to the associated master data tables via the characteristic key.
Sid Tables are like pointers in C
The details of the tables in Bw :

Tables Starting with  Description:
M - View of master data table
Q  - Time Dependent master data table
H - Hierarchy table
K - Hierarchy SID table
I  - SID Hierarchy structure
J  - Hierarchy interval table
S  - SID table
Y  - Time Dependent SID table
T  - Text Table
F  - Fact Table - Direct data for cube ( B-Tree Index )
E  - Fact Table - Compress cube ( Bitmap Index ) 
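The surrogate-ID idea from the SID discussion above can be sketched as follows (a simplified illustration; BW maintains this mapping in the S table of the InfoObject, and the values here are hypothetical):

```python
# Characteristic value -> numeric surrogate ID (the "S table").
sid_table = {}
next_sid = [1]   # one-element list so the counter is mutable in the closure

def get_sid(value: str) -> int:
    # A value gets a SID once; every later occurrence reuses the same SID,
    # which is what lets dimension records point at shared master data.
    if value not in sid_table:
        sid_table[value] = next_sid[0]
        next_sid[0] += 1
    return sid_table[value]

# Loading three dimension records; the repeated customer reuses its SID.
dim_rows = [get_sid("CUST01"), get_sid("CUST02"), get_sid("CUST01")]
```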
Q) Explain the what is primary and secondary index.
When you activate an object, say an ODS / DSO, the system automatically generates an index based on the key fields; this is the primary index.
In addition, if you wish to create more indexes, they are called secondary indexes.
The primary index is distinguished from the secondary indexes of a table. The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
You can also create further indexes on a table. These are called secondary indexes. This is necessary if the table is frequently accessed in a way that does not take advantage of the sorting of the primary index for the access. Different indexes on the same table are distinguished with a three-place index identifier.
Let's say you have an ODS and the primary key is defined as Document Nbr, Cal_day. These two fields ensure that the records are unique, but let's say you frequently run queries where you select data based on Bus Area and Document Type. In this case, we could create a secondary index on Bus Area, Doc Type. Then when the query runs, instead of having to read every record, it can use the index to select records that contain just the Bus Area and Doc Type values you are looking for.
Just because you have a secondary index, however, does not mean it will be or should be used. This gets into the cardinality of the fields you are thinking about indexing. For most DBs, an index must be fairly selective to be of any value. That is, given the values you provide in a query for Bus Area and Doc Type, if the index retrieves a very small percentage of the rows from the table, the DB probably should use it; but if it would result in retrieving, say, 40% of the rows, it is almost always better to just read the entire table.
Having current DB statistics and possibly histograms can be very important as well. The DB statistics hold information on how many distinct values a field has, e.g. how many distinct values of Business Area there are, how many Doc Types.
Secondary indexes are usually added to ODS objects (which you can do using the Administrator Workbench) based on your most frequently used queries. Secondary indexes might also be added to selected dimension and master data tables, but that usually requires a DBA, or someone with similar privileges, to create them in BW.
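The selectivity argument can be made concrete with a small sketch. The 10% cutoff below is a hypothetical rule of thumb, not a real optimizer's logic; actual databases decide using their statistics and cost models:

```python
def should_use_index(matching_rows: int, total_rows: int,
                     threshold: float = 0.1) -> bool:
    # An index pays off only when the predicate matches a small
    # fraction of the table; otherwise a full scan is cheaper.
    return matching_rows / total_rows < threshold

# 500 of 1,000,000 rows match Bus Area / Doc Type -> index is worthwhile.
selective = should_use_index(500, 1_000_000)

# 400,000 of 1,000,000 rows match (the 40% case above) -> prefer a scan.
unselective = should_use_index(400_000, 1_000_000)
```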

Q) Types of Update Methods
What are these update methods, and which one should be used for what purpose?
R/3 update methods:
1. Serialized V3 Update
2. Direct Delta
3. Queued Delta
4. Un-serialized V3 Update
By: Anoo
a) Serialized V3 Update
This is the conventional update method, in which the document data is collected in the sequence in which it was created and transferred to BW by a batch job. The sequence of the transfer does not always match the sequence in which the data was created.
b) Queued Delta
In this mode, extraction data is collected from document postings in an extraction queue from which the data is transferred into the BW delta queue using a periodic collective run. The transfer sequence is the same as the sequence in which the data was created
c) Direct delta.
When a document is posted, it is first saved to the application table and also directly saved to RSA7 (the delta queue); from there it is moved to BW.
So you can see that for the delta flow from R/3, the delta queue is the exit point.
d) Queued Delta
When a document is posted, it is saved to the application table and also to the extraction queue (this is the difference from direct delta); you then have to schedule a V3 job to move the data to the delta queue periodically, and from there it is moved to BW.
e) Unserialized V3 Update
This method is largely identical to the serialized V3 update. The difference lies in the fact that the sequence of document data in the BW delta queue does not have to agree with the posting sequence. It is recommended only when the sequence that data is transferred into BW does not matter (due to the design of the data targets in BW).
You can use it for Inventory Management, because once a Material Document is created, it is not edited. The sequence of records matters when a document can be edited multiple times. But again, if you are using an ODS in your inventory design, you should switch to the serialized V3 update.
Q) Deltas Not Working for Installation Master Data
I am having trouble with the deltas for the master data object "installation". The changes are clearly recorded in the time-dependent and time-independent tables EANL/EANLH. The delta update mode uses ALE pointers; does anyone know of a table where I can check where these deltas/changes are temporarily stored, or what the process behind this type of delta is?
The following steps must be executed: 
1. Check whether the ALE change pointers are active in your source system (Transaction BD61) and whether the number range is maintained (Transaction BDCP).
2. In addition, check in the ALE Customizing, whether all message types you need are active (Transaction SALE -> Model and implement business processes -> Configure the distribution of master data -> Set the replication of changed data -> Activate the change pointer for each message type ).
3. Check, whether the number range for the message type BI_MSTYPE is maintained (Transaction SNUM -> Entry 'BI_MSTYPE' -> Number range -> Intervals). The entry for 'No.' must be exactly '01'. In addition, the interval must start with 0000000001, and the upper limit must be set to 0000009999.
4. Go to your BW system and restart the Admin. workbench.
All of the following activities occur in the InfoSource tree of the Admin. Workbench.
5. Carry out the function "Replicate DataSource" on the affected attached source system for the InfoObject carrying the master data and texts.
6. Activate the transfer structure.
All changes, initial data creations, and deletions of records from now on are recorded in the source system.
7. Create an InfoPackage for the source system. In the tabstrip 'Update parameters', there are three alternative extraction modes:
Full update
Delta update
Initialization of delta procedure
First, initialize the delta procedure and then carry out the delta update. 
An update on this issue:
In the EMIGALL process, SAP decided to bypass all the standard processes that update the delta queues on IS-U, because they would cause too much overhead during the migration. It is still possible to modify the standard programs, but it is not recommended, unless you want to crash your system.
The other options are as follows:
- Extract master data with full extractions using intervals.
- Modify the standard to put the data in a custom table on which you create a generic delta.
- Modify the standard to put the ALE pointers in a custom table and then use a copy of the standard functions to extract the data.
- Extract the data you want into a flat file and load it into BW.
By the way, if you want to extract the data from IS-U, do not do it during migration; find another way to extract it afterwards.
PS: If you have a generic extractor and a huge data volume, you can do multiple inits with ranges as selection criteria and then a single delta (which is the summation of all inits) in order to improve performance with the generic delta.
Q) Explain about deltas load and where we use it exactly.
A data load into a BI ODS/master data/cube can be either FULL or DELTA.
Full load is when you load data into BI for the first time, i.e. you are seeding the destination BI object with initial data. A delta data load means that you are loading changes to already-loaded data or adding new transactions.
Usually delta loads are done when the process has to sync any new data/changed data from the OLTP system i.e. SAP ECC or R/3 to SAP BI (DSS/BI). DSS stands for Decision Support Systems or system that is used for deriving Business Intelligence.
Let's say you are trying to derive a report to empower the management to figure out who are the customers who have bought the most from your company.
On the BI side, you create the necessary master data elements. You use the master data elements to create an ODS and a cube. The ODS and the cube will house the daily transactions that get added to the OLTP systems via a variety of applications.
Now you identify the datasource in ECC that will bring the necessary transactions to BI. You replicate the datasource in BI and map the data source to the ODS and map the ODS to the cube. Hence you create the Transformation and DTP as a full load for the first time.
At this point in time, your ODS and cube have the data for the last x years, where x stands for the life of your company. You also need to capture the daily transactions from here on. What you do now is change the DTP to allow only delta records.
Now you schedule the execution of the datasource and loading of the data in a process chain. At run time, the process chain will get the new records from OLTP (since the datasource is already replicated keeping in mind that the datasource structure has not changed) and import those changes to the ODS and hence to the cube.
Any such loads that brings in new transactions or changes to earlier transactions will be called delta records and hence the load is called delta load.
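As a rough illustration of the full-vs-delta idea above (hypothetical record layout, not an SAP API), the destination can be modeled as a key-value store where a delta load overwrites changed documents and inserts new ones:

```python
# Hypothetical sketch of full vs. delta loading into an ODS-like store,
# keyed by document number. A delta record either overwrites an existing
# key (changed document) or adds a new key (new document).

def full_load(records):
    # Initial (full) load: seed the store with all history.
    return {r["doc"]: r for r in records}

def delta_load(store, delta_records):
    # Delta load: apply only the new/changed records.
    for r in delta_records:
        store[r["doc"]] = r  # overwrite change or insert new
    return store

store = full_load([{"doc": 1, "amount": 100}, {"doc": 2, "amount": 50}])
delta_load(store, [{"doc": 2, "amount": 75},    # changed document
                   {"doc": 3, "amount": 20}])   # new document
```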

Q) Removing '#' in Analyzer (Report)
In ODS, there are records having a value BLANK/EMPTY for some of the fields. EX: Field: `Creation Date' and there is no value for some of the records. 
For the same, when I execute the query in Analyzer, the value  `#' is displaying in place of `BLANK' value for DATE and other Characteristic fields. Here, I want to show it as `BLANK/SPACE' instead of `#'.  How to do this? 
I had a similar problem; our client didn't want to see '#' signs in the report, and this is what I did. I created a macro in the workbook as SAPBEXonRefresh and ran my code in the Visual Basic editor. You can run the same code on the query as well: when you refresh the query, the '#' sign will be taken care of. You can find similar code on the SAP marketplace.
I would still suggest not taking out the '#' sign, as it represents 'no value' in the DataMart, and this is the SAP standard. I convinced my client of this and later they were OK with it.
The codes are below:
Sub SAPBEXonRefresh(queryID As String, resultArea As Range)
    If queryID = "SAPBEXq0001" Then
        'Remove '#'
        Selection.Cells.Replace What:="#", Replacement:="", LookAt:=xlWhole, _
            SearchOrder:=xlByRows, MatchCase:=False, MatchByte:=True
        'Remove 'Not assigned'
        Selection.Cells.Replace What:="Not assigned", Replacement:="", LookAt:=xlWhole, _
            SearchOrder:=xlByRows, MatchCase:=False, MatchByte:=True
    End If
    'Set focus back to top of results
    resultArea(1, 1).Select
End Sub
Q) How To Convert Base Units Into Target Units In BW Reports
My client has a requirement to convert the base units of measure into target units of measure in BW reports. How do I write the conversion routine? Also, please describe the conversion routine used so that the characteristic value (key) of an InfoObject can be displayed or used in a different format from how it is stored in the database.
Have a look at the how to document "HOWTO_ALTERNATE_UOM2"
You can use the function module 'UNIT_CONVERSION_SIMPLE':

CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
  EXPORTING
    input                = actual_quantity
    unit_in              = actual_uom     " source UoM
    unit_out             = 'KG'           " UoM you want to convert to
  IMPORTING
    output               = w_output-h_qtyin_kg
  EXCEPTIONS
    conversion_not_found = 1
    division_by_zero     = 2
    input_invalid        = 3
    output_invalid       = 4
    overflow             = 5
    type_invalid         = 6
    units_missing        = 7
    unit_in_not_found    = 8
    unit_out_not_found   = 9
    OTHERS               = 10.
IF sy-subrc <> 0.
  " handle the conversion error here
ENDIF.
Q) Non-cumulative key figures are key figures that are not cumulated over certain characteristic values (typically time). You will find these non-cumulative KFs when you extract data from MM DataSources.
For example, you have a requirement to show this month's stock in the report, meaning the key figure must not be cumulated over the time characteristic. When you create a KF, you get the Aggregation tab; there you have Aggregation and Exception Aggregation. You set Aggregation to summation and Exception Aggregation to Last Value. Once you select non-cumulative, it asks on which characteristic the KF is not to be cumulated.
Non-cumulative with inflow or outflow!
There has to be two additional cumulative key figures as InfoObjects for non-cumulative key figures - one for inflows and one for outflows. The cumulative key figures have to have the same technical properties as the non-cumulative key figure, and the aggregation and exception aggregation have to be SUM.
You can evaluate separately the non-cumulative changes on their own, or also the inflow and outflow, according to the type of chosen non-cumulative key figure in addition to the non-cumulative. For Example Sales volume (cumulative value):
Sales volume 01.20 + sales volume 01.21 + sales volume 01.23 gives the total sales volume for these three days.
Warehouse stock (non-cumulative key figure):
Stock 01.20 + stock 01.21 + stock 01.23 does not give the total stock for these three days.
Technically, non-cumulatives are stored using a marker for the current time (current non-cumulative) and the storage of non-cumulative changes, or inflows and outflows. The current, valid end non-cumulative (to 12.31.9999) is stored in the marker. You can determine the current non-cumulative or the non-cumulative at a particular point in time. You can do this from the current, end non-cumulative and the non-cumulative changes and/or the inflows and outflows.
Queries for the current non-cumulative can be answered very quickly, since the current non-cumulative is created as a directly accessible value. There is only one marker for each combination of characteristic values that is always updated when the non-cumulative InfoCube (InfoCube that includes the non-cumulative key figures) is compressed. So that access to queries is as quick as possible, compress the non-cumulative InfoCubes regularly.
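A minimal sketch of the marker logic described above (hypothetical numbers and date keys, not SAP code): the non-cumulative value at a point in time is derived by taking the current marker and subtracting all non-cumulative changes recorded after that point.

```python
# Hypothetical sketch: deriving a historical stock value from the
# marker (current stock, valid to 12/31/9999) and the recorded
# non-cumulative changes, reading backwards from the marker.

marker = 120  # current stock held in the marker
changes = {"01.20": +50, "01.21": -10, "01.22": +30}  # daily net changes

def stock_at(day, marker, changes):
    # Subtract every change that happened after the requested day.
    later = [v for d, v in changes.items() if d > day]
    return marker - sum(later)

# With the numbers above: stock on 01.21 = 120 - 30 = 90
```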
Cumulative Keyfigures With Exception Aggregation:
It's a 'normal' KF (with summation, minimum, or maximum as its aggregation behaviour), but you set an exception to this behaviour. For example, you can say that a KF normally aggregated by summation has to show the maximum value (or the average, or zero, or something else) when it is used in combination with 0DOC_DATE (or another characteristic): that is the 'exception aggregation', and 0DOC_DATE is the 'exception aggregation characteristic reference'. In this case the OLAP processor gives you the possibility of seeing your KF with different behaviour depending on whether you drill down by 0DOC_DATE (in our example, MAX) or by something else (summation).
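A small sketch of the difference (hypothetical rows, plain Python): with standard aggregation the key figure is simply summed, while with exception aggregation MAX over 0DOC_DATE the values are first summed within each date and the maximum across dates is shown.

```python
# Hypothetical sketch of exception aggregation: a key figure normally
# aggregated by SUM, with exception aggregation MAX over the reference
# characteristic 0DOC_DATE.

rows = [("2023-01-01", "A", 10),
        ("2023-01-01", "B", 40),
        ("2023-01-02", "A", 25)]

# Standard aggregation: plain summation over all rows.
total = sum(v for _, _, v in rows)  # 10 + 40 + 25 = 75

# Exception aggregation MAX over 0DOC_DATE: first sum within each
# date, then take the maximum across the dates.
by_date = {}
for date, _, v in rows:
    by_date[date] = by_date.get(date, 0) + v
max_over_dates = max(by_date.values())  # max(50, 25) = 50
```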
Q) In a real time scenario where do we use cell definition in query designing.
A cell is the intersection between two structural components. The term cell for the function Defining Exception Cells should not be confused with the term cell in Microsoft Excel. The formulas or selection conditions that you define for a cell always take effect at the intersection between two structural components. If a drilldown characteristic has two different characteristic values, the cell definition always takes effect at the intersection between the characteristic value and the key figure.
Use of cell definition:
When you define selection criteria and formulas for structural components and there are two structural components of a query, generic cell definitions are created at the intersection of the structural components that determine the values to be presented in the cell.
Cell-specific definitions allow you to define explicit formulas and selection conditions for cells as well as implicit cell definitions. This means that you can override implicitly created cell values. This function allows you to design much more detailed queries.
In addition, you can define cells that have no direct relationship to the structural components. These cells are not displayed and serve as containers for help selections or help formulas.
For example:
You have already implemented the sales order system in your company. You have given the reports to the end users including open order reports.
Users come and tell you that they want some special calculations for particular customers. Say, for example, your report has 5 customers: Nike, Coke, Philips, Sony and Microsoft. Per your users' requirement, you need to provide a discount or some special exception for Microsoft only, in the 5th month only. Microsoft's 5th-month detail always appears in our report in the fifth column, fifth row.
For this scenario you know exactly which column and row you need to calculate, so you can use the Cell Editor function to define the calculation for that particular cell.
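The idea can be sketched as follows (purely illustrative, not a BEx API): every intersection gets the generic, implicitly defined value, except for one explicitly defined cell, which is overridden, here with an assumed 10% discount at row 5, column 5:

```python
# Hypothetical sketch of a cell definition: the generic (implicit)
# value applies at every row/column intersection, but one specific
# cell carries an explicit override formula.

def generic_cell(value):
    # Implicit cell definition: the value as determined by the
    # intersection of the two structural components.
    return value

def cell_value(row, col, value):
    # Explicit cell definition overrides the implicit one at (5, 5),
    # e.g. an assumed 10% discount for that one customer/month.
    if (row, col) == (5, 5):
        return value * 0.9
    return generic_cell(value)
```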
Q) How to extract data using BW from CRM?
Steps for Extracting data from CRM:
Configuration Steps:
1. Click on -> Assign Dialog RFC destination
If your default RFC destination is not a dialog RFC destination, you need to create an additional dialog RFC destination in addition and then assign it to your default RFC destination.
2. Execute Transaction SBIW in CRM
3. Open BC DataSources.
4. Click on Transfer Application Component Hierarchy
Application Component hierarchy is transferred.
5. Execute SPRO in CRM. Go to CRM -> CRM Analytics
6. Go to transaction SBIW -> Settings for Application specific Data Source ->Settings for BW adapter 
7. Click on Activate BW Adapter Metadata
Select the relevant data sources for CRM sales 
8. Click on Copy data sources
Say yes and proceed
9. Logon to BW system and execute transaction RSA1.
Create a source system to establish connectivity with CRM Server
A source system is created. (LSYSCRM200)(Prerequisites: Both BW and CRM should have defined Back ground, RFC users and logical systems)
10. Business content activation for CRM sales area is done
11. Click on source system and choose replicate datasources.
In CRM6.0, do we need to use BWA1 tcode to map the fields between CRM and BW, the way we used to do in earlier CRM versions?
Below are the steps for CRM(6.0) extraction as per my knowledge:
1. Activate the DS in RSA5.
2. Replicate into BI.
3. Schedule Init data load.
4. Schedule Delta.
5. Use Rsa3 and RSA7 tcodes to check data in CRM system.
If you have SAP CRM 5.x or later, you activate the DataSources in RSA5 and maintain them in RSA6, as you do for FI DataSources in R3/ECC.
The only difference in the technology is that the extraction goes through a BW Adaptor on SAP CRM and passes through a Service API to SAP BW. In the end, though, it's really no difference than FI.
Same as FI extractors in R3/ECC. 
After you have activated the DataSource in RSA5:
1) Go to RSA6 and click on the Enhance Extraction Structure button.
2) Append your custom fields to the structure and activate.
3) Create the User Exit in CMOD to populate the custom fields.
4) Re-activate the DataSource.
5) Test extraction in RSA3.
6) Replicate the DataSource in BW.
There are no interval settings required, as there are for FI. Here's the technical description of the CRM extraction process for both full and delta extraction.
1) SAP BW calls the BW Adapter using the Service API.
2) BW Adapter reads the data from the source table for the DataSource.
3) Selected data is converted into the extract structure as a BDoc.
4) The BDoc type determines the BAdI that is called by the BW Adapter.
5) Data Package is transferred to SAP BW using the Service API.
Some considerations for Delta are:
1) Net changes in CRM are communicated via BDoc.
2) The flow controller for BDocs calls the BW Adapter. 
3) BW Adapter checks if net change is in BDoc that's BW relevant. 
4) Non-relevant net changes are not sent to SAP BW.
5) Relevant net changes are transferred to SAP BW.
6) CRM standard DataSources use AIMD delta method.
CRM systems use what is called as a BW adapter to extract data - for other systems it is the Service API - hence these tcodes will be used - this is because CRM systems are based on BDocs and traditional R/3 systems are based on iDocs and ALE technology.
BWA5 is used to activate 'delta' for CRM datasource.
BWA1 is used for mapping fields in extract structure with BDoc.
Q) Transport Process Chains to Test System
What is the best way to transport process chains to test system?
Many additional, unwanted objects got collected when I tried to collect the process chains from the Transport Connection.
To transport a process chain, the best approach is to transport only the objects created for the process chain. On my system I created specific objects for the PC: InfoPackages, jobs, variants. Those objects are used only for the PC. This way I avoid errors when users restart a load or a job manually.
So when I want to transport a process chain, I go into the monitor, select the PC, make a grouping on only the necessary objects, and go through the generated tree to select only what I need. Then I go into SE10 to check that the transport contains no other objects which could impact my target system.
You can avoid some unnecessary objects by clicking Grouping > Data Flow Before & Data Flow After. For example, you already have InfoPackages in your target system but not process chains, and you only want to transport the process chain without any other objects like transfer structures or InfoPackages; you can choose the Before or After option.
You can also choose Hierarchies or the List option from the Display tab if you have objects in bulk, but make sure all objects are selected (when different process chains have different kinds of objects, it is better to use Hierarchy, not List).
While creating these transport requests, some objects may be in use or locked in another TR, so first release them via transaction SE03, using Unlock Object (Expert Tool).
These options can reduce your effort while collecting your objects. If, even after all this effort, you get a warning or an error like "objects are already in the system", ask Basis to use overwrite mode.
Transport a specific infoobject
How to transport a specific info object? I tried to change it and then save but the transport request won't appear. How to manually transport that object? 
1. Administrator Workbench (RSA1), then Transport Connection
2. Object Types, then from the list of objects put the requested one on the right side of the screen (drag & drop)
3. Click "Transport Objects", put the development class name and specify the transport (or create the new one)
4. Transaction SE01, check transport and release it
5. Move the transport up to the other system.
If you change and reactivate the infoobject, but get no transport request, this means that your infoobject is still in $tmp class.
Go into the maintenance of the InfoObject, menu Extras -> Object Directory Entry, and change the development class. At this point you should get a pop-up requesting a transport request.
If you're not getting a transport request when you change and activate, it could also be that the InfoObject is already on an open transport. 
When you collect the object in the transport connection as described above, you will see in the right hand pane an entry called Transport Request. If there is an entry here, the object is already on a transport and this gives you the transport number. 
You can then use SE01 or SE10 to delete the object from the existing transport if that is what you want to do then, when you change and activate the object again, you should be prompted for a transport request. Alternatively, you can use the existing transport depending on what else is on it.
How To Do Transports in BW?
Step by step procedure for transporting in BW:
1. In RSA1 go to Transport Connection
2. Select Object Types Your Object that you want to transfer.
3. Choose grouping method (in data flow before and after)
4. Drag and drop your object.
5. Click the Truck button to transfer
6. In the source system (e.g. Dev), go to SE09.
    a. Expand to choose your child request
    b. Click on the release button (truck)
    c. Choose the parent request and click the truck button to release.
7. In the target system (e.g. QA) go to STMS
    a. Click on the truck button (Import Overview)
    b. Double-click on your QA system queue
    c. Click on Refresh
    d. Click on Adjust Import Queue
    e. Select your request and click on F11.                             *-- David Kazi
Is it possible to copy a process chain in BW 3.1? If so, how?
In RSPC, double click the process chain so that you can see it in the left hand pane. In the box where you type in the transaction code, type COPY and hit Enter. 
Q) Infocube Compression
I was dealing with the 'Compression' tab while managing the InfoCube. I was able to compress the InfoCube and send the data to the E table, but was unable to find concrete answers on the following issues:
1. What is the exact scenario when we use compression?
2. What actually happens in the practical scenario when we do compression?
3. What are the advantages of compressing a infocube?
4. What are the disadvantages of compressing a infocube?
1. Compression consolidates the cube's contents, summing duplicate records into one.
2. When you compress, BW does a GROUP BY on the dimensions and a SUM on the key figures; this eliminates redundant records.
3. Compressed InfoCubes require less storage space and are faster for retrieval of information.
4. Once a cube is compressed, you cannot alter the information in it (for example, delete by request). This can be a big problem if there is an error in some of the data that has been compressed.
I understand that the advantage of compressing the InfoCube is performance. But I have a doubt: if I compress one or more request IDs of my InfoCube, will the data continue to appear in my reports (Analyzer)?
The data will always be there in the InfoCube. The only thing that would be missing is the request IDs; you can take a look into your packet dimension and see that it would be empty after you compress.
Yes, compression is for performance. But before compressing, you should keep two things very carefully in mind:
1) If your cube is loading data with custom-defined deltas, you should check whether the delta is happening properly; the procedure is to compress some requests and then schedule the delta.
2) If your system has outbound flows from the cube that work with request IDs, you need to follow some other procedure, because request IDs won't be available after compression.
These two things are very important when you go for compression.
Q) How to Compress InfoCube Data
How Info cube compression is done?
Create aggregates for that infocube
I guess what the question was is how we can compress the data inside a cube; I assume that's usually done by deleting the Request ID column value.
This can be done through Manage - > Compress Tab.
Go to RSA1
Under Modeling --> Choose InfoProvider --> InfoArea and then --> Select your InfoCube
Right Click on your infocube --> from context menu --> choose Manage
Once you are in manager data Targets screen:
Find out the request numbers – decide till what request id you want to compress
Go to Collapse tab – under compress --> choose request ID and click Release
The selected request ID and anything below will be compressed.
What is happening behind the scenes is: "After the compression, the F fact table contains no more data. Instead, the compressed data now appears in the E fact table."
Q) Cube to Cube Load 
You need to move some data from one cube to another.
The steps involved are :-
You need to first create 'Export Data Source' from original cube (right-click on the InfoCube and select Generate Export Data Source). 
Then, assign the new datasource to new cube. (you may click on 'Source system' and select your BW server and click 'Replicate'). 
Then, you can configure your infosource, and infopackage. 
Lastly, you are ready to load.
Q) Question:
A datasource was changed and a document date was added to the standard datasource. 
How to find which user has changed the datasource?
You can use table ROOSOURCE and provide your data source here.
ROOSOURCE is a table in the source system which has information about all the data sources in the system.
When you create a DataSource, it updates three tables, among them:
- ROOSOURCET
Take OLTP version as A and then execute.
In the output you can see the last change user and its time stamp.
You can use the TCODE : RSA2 (Datasource Repository ) to display the datasource.
In the general Tab you can see the Last Changed by : and the date and time of change.
Error message: DataSource does not exist in version A.
This means the DataSource is not active in the system; you will have to activate the DataSource from RSA5.
Make sure you are providing the right technical name of the DataSource while checking.
You have to activate your DataSource first and then check in table ROOSOURCE with OLTP version A, then execute.
Q) What is meant by Selection field, Hide field, Inversion and Field only Known exit?  What is the Use of these?
by: Anoo
When scheduling a data request in the BW Scheduler, you can enter the selection criteria for the data transfer. For example, you may want to determine that data requests are only to apply to data from the previous month.
If you set the Selection indicator for a field within the extract structure, the data for this field is transferred in correspondence with the selection criteria in the scheduler.
Hide field
You should set this indicator to exclude an extract structure field from the data transfer. As a result of your action, the field is no longer made available in BW when setting the transfer rules and generating the transfer structure.
If you don't want this field to be seen, set this indicator; the field will then not be visible in BW, even though it is available in the extract structure.
Inversion
The field is inverted in the case of a reverse posting, meaning it is multiplied by (-1). For this, the extractor has to support a delta record transfer process in which the reverse posting is identified as such.
If the option 'Field recognized only in Customer Exit' is chosen for the field, the field is not inverted. You cannot activate or deactivate inversion if this option is set.
Field known only in Exit:
The indicator Field known only in Exit is set for the fields in an append structure, meaning that by default, these fields are not passed to the extractor in the field list and the selection table.
For Example:
You had posted one record into the cube, and all the key figures were updated (some added, some subtracted), but you want to revert it. If the data is still present in the PSA, you can reverse-post that request so that the signs of the key figures are reversed (i.e. additions become subtractions and vice versa) and the net key figure change is nullified, i.e. the total change is zero. In such cases, only those key figures which have 'Inversion' set will be reversed.
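A minimal sketch of the inversion described above (hypothetical record, plain Python): key figures flagged for inversion are multiplied by (-1) in the reversal record, so the original posting and the reverse posting net to zero:

```python
# Hypothetical sketch of a reverse posting: key figures flagged for
# inversion are multiplied by -1, so original + reversal = 0.

original = {"quantity": 10, "amount": 250.0}

def reverse_post(record, invertible=("quantity", "amount")):
    # Only fields with the 'Inversion' flag get their sign flipped.
    return {k: (-v if k in invertible else v) for k, v in record.items()}

reversal = reverse_post(original)
net = {k: original[k] + reversal[k] for k in original}  # nets to zero
```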
Q) Explain the steps to load master data hierarchies from R/3 system.
by: Reddy
A summary of the steps are as follows:
1) Go to the Hierarchy tab in the InfoObject onto which you are loading hierarchy data.
2) Select 'With Hierarchies'.
3) Select the hierarchy properties (time-dependent or not time-dependent, etc.).
4) Click on 'External Chars in Hierarchies' and select the characteristics on which this hierarchy depends.
5) Then create the InfoSource and assign the DataSource.
6) Create an InfoPackage to load the hierarchies.
7) On the Hierarchy selection tab in the InfoPackage, select 'Load Hierarchy' and refresh the available hierarchies from OLTP. If it is time-dependent, select the time interval on the Update tab.
8) Then start the load.
If you want to load from a flat file, the procedure is somewhat different.
It is normally done as follows:
Transfer the master DataSources in RSA5 to RSA6, then replicate the DS into BW, assign the DS to an InfoSource, create an InfoPackage and load the data into the master tables.
Generally, the control parameters for data transfer from a source system are maintained in extractor customizing. In extractor customizing, you can access the corresponding source system in the source system tree of the SAP BW Administrator Workbench by using the context menu.
To display or change the settings for data transfer at source system level, choose Business Information Warehouse --> General Settings --> Maintaining Control Parameters for Data Transfer.
Note: The values for the data transfer are not hard limitations. It depends on the DataSource if these limits can be followed.
In the SAP BW Scheduler, you can determine the control parameters for data transfer for individual DataSources. You can determine the size of the data packet, the number of parallel processes for data transfer and the frequency with which the status IDocs are sent, for every possible update method for a DataSource.
To do so, choose Scheduler --> DataSource --> Default Settings for Data transfer.
In this way you can, for example, update transaction data in larger data packets in the PSA. If you want to update master data in dialog mode, smaller packets ensure faster processing.
Q) Real-time InfoCubes differ from standard InfoCubes in their ability to support parallel write accesses. Standard InfoCubes are technically optimized for read accesses, to the detriment of write accesses.
Real-time InfoCubes are used in connection with the entry of planning data.
The data is simultaneously written to the InfoCube by multiple users. Standard InfoCubes are not suitable for this. You should use standard InfoCubes for read-only access (for example, when reading reference data).
Real-time InfoCubes can be filled with data using two different methods: using the transaction for entering planning data, and using BI staging, whereby planning data cannot be loaded simultaneously. You have the option to convert a real-time InfoCube. To do this, in the context menu of your real-time InfoCube in the InfoProvider tree, choose Convert Real-Time InfoCube. By default, Real-Time Cube Can Be Planned, Data Loading Not Permitted is selected. Switch this setting to Real-Time Cube Can Be Loaded With Data; Planning Not Permitted if you want to fill the cube with data using BI staging.
When you enter planning data, the data is written to a data request of the real-time InfoCube. As soon as the number of records in a data request exceeds a threshold value, the request is closed and a rollup is carried out for this request in defined aggregates (asynchronously). You can still rollup and define aggregates, collapse, and so on, as before.
Depending on the database on which they are based, real-time InfoCubes differ from standard InfoCubes in the way they are indexed and partitioned. For an Oracle DBMS, this means, for example, no bitmap indexes for the fact table and no partitioning (initiated by BI) of the fact table according to the package dimension.
Reduced read-only performance is accepted as a drawback of real-time InfoCubes, in favor of the option of parallel (transactional) writing and improved write performance.
Creating a Real-Time InfoCube
When creating a new InfoCube in the Data Warehousing Workbench, select the Real-Time indicator.
Converting a Standard InfoCube into a Real-Time InfoCube
Conversion with Loss of Transaction Data
If the standard InfoCube already contains transaction data that you no longer need (for example, test data from the implementation phase of the system), proceed as follows:
1. In the InfoCube maintenance in the Data Warehousing Workbench, from the main menu, choose InfoCube -> Delete Data Content. The transaction data is deleted and the InfoCube is set to inactive.
2. Continue with the same procedure as with creating a real-time InfoCube.
Conversion with Retention of Transaction Data
If the standard InfoCube already contains transaction data from the production operation that you still need, proceed as follows:
Execute ABAP report SAP_CONVERT_NORMAL_TRANS under the name of the corresponding InfoCube. Schedule this report as a background job for InfoCubes with more than 10,000 data records because the runtime could potentially be long.
Q) Difference between 'F' fact table & an 'E' Fact table?
A cube has 2 fact tables - E and F. When the requests in the cube are not compressed the data exists in the F fact table and when the requests are compressed the data lies in the E fact table.
When the requests are compressed, all the request IDs are lost (set to NULL) and you can no longer select or delete the data by request ID. The data in the E fact table is compressed and occupies less space than the F fact table.
When you load a data target, say a cube, the data is stored in the F fact table. If the cube is compressed, the data in the F fact table is transferred to the E fact table.
The F table uses B-tree indexes; the E table uses bitmap indexes. Index types: the primary index is created automatically when the table is created in the database; secondary indexes (usually on ABAP tables); bitmap indexes (created by default on each dimension column of a fact table); and B-tree indexes.

Does anybody know what the compression factor is between the F-table and the E-table?
I.e. when you move 100 rows from the F-table, how many rows will be added to the E-table?
There is no fixed conversion factor. All the request IDs are deleted when you compress the cube, and the records are aggregated based on the remaining dimension IDs.
For example, suppose there is only one customer, C100, doing transactions, and across 100 requests there are 100 records.
Then when you eliminate the request IDs, all records are aggregated into 1 record.
If there are 100 different customers and you entered each customer's data in a separate request, then after compression there will still be 100 records, because the customer number varies.
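The compression behaviour described above can be sketched in a few lines of Python (a conceptual illustration only, not actual BW code; the table layout and field names are invented):

```python
from collections import defaultdict

# Hypothetical F-table rows: (request_id, customer, amount)
f_table = [(request_id, "C100", 1) for request_id in range(1, 101)]

def compress(rows):
    """Drop the request ID and aggregate by the remaining keys,
    as happens when data moves from the F to the E fact table."""
    e_table = defaultdict(int)
    for _request_id, customer, amount in rows:
        e_table[customer] += amount
    return dict(e_table)

compress(f_table)  # 100 rows for one customer collapse into a single row
```

With 100 different customers spread over 100 requests, the same function would still return 100 rows, because the customer key differs from row to row.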

Does BEx access the records from the F table or the E table of an InfoCube?
BEx accesses both the F and E fact tables. If data exists in both tables, it picks from both.
If the cube is not compressed it reads from the F table; if fully compressed, from the E table; with partial compression, from both F and E.

If accessing from the E table is true, do we have to move the records from the F table to the E table in order to make the records available for reporting?
Data is automatically moved from the F to the E fact table when you compress the data in the cube. You can do this on the Collapse tab of the cube's Manage screen.
The E table will have data once you do compression, and compression is not mandatory in all cases. Try using aggregates for reports.

When we do a roll-up in InfoCube maintenance, are records moved to aggregates, or moved from the F table to the E table?
Roll-up adds a copy of the records from the F or E table to the aggregate tables. The records are not moved out of F or E.
Q) How  can I list all the inactive objects of a cube. Is there any transaction code for it?
To check inactive objects:
Go to SE11 -> Utilities -> Inactive Objects, or
Go to SE38 -> Utilities -> Inactive Objects.
To check whether all the objects (programs, tables, classes, FMs, etc.) on the server are active or not:
There is no single table that will get you all the information.
For programs, the table mentioned above is the one to use.
R3STATE is the field for the status.
Note :
If a program is first in ACTIVE state and then becomes inactive (due to some modifications), this table will contain two entries for it.
a) A = active
b) I = inactive
2. Same Table for FUNCTION MODULES.
In the case of FM,
You will have to check the INCLUDE name for the corresponding FM.
eg. ZAM_FG01 = function group
ZAM_F06 = Function Name.
LZAM_FG01U02 = include name for this FM.
(it can be 02, 03, 01 etc.)
3. For Tables : DD02L
Field name = AS4LOCAL
(There will be more than one record if the table is in an inactive state)
A Entry was activated or generated in this form
L Lock entry (first N version)
N Entry was edited, but not activated
S Previously active entry, backup copy
T Temporary version when editing
We can also use the above FM.
For the field object type, the following is necessary:
Program : REPS
Table = TABU
Q) How to derive 0FISCYEAR, 0FISCPER & 0FISCPER3 from 0CALMONTH?
Use the formulas in the update rules that are available under the time characteristics.
Go to your update rules and select the time characteristics.
The snippets below call the function module UMC_FISCPER_FROM_CALMONTH_CALC, which is the one commonly used for this conversion.
Code for 0FISCPER
* fill the internal table "MONITOR", to make monitor entries
DATA: l_fiscper TYPE rsfiscper.
CALL FUNCTION 'UMC_FISCPER_FROM_CALMONTH_CALC'
  EXPORTING
    iv_calmonth = COMM_STRUCTURE-calmonth
    iv_periv    = 'K4'
  IMPORTING
    ev_fiscper  = l_fiscper.
* result value of the routine
RESULT = l_fiscper.
Code for 0FISCPER3
DATA: l_fiscper3 TYPE t009b-poper.
CALL FUNCTION 'UMC_FISCPER_FROM_CALMONTH_CALC'
  EXPORTING
    iv_calmonth = COMM_STRUCTURE-calmonth
    iv_periv    = 'K4'
  IMPORTING
    ev_fiscper3 = l_fiscper3.
* result value of the routine
RESULT = l_fiscper3.
Code for 0FISCYEAR
DATA: l_fiscyear TYPE t009b-bdatj.
CALL FUNCTION 'UMC_FISCPER_FROM_CALMONTH_CALC'
  EXPORTING
    iv_calmonth = COMM_STRUCTURE-calmonth
    iv_periv    = 'K4'
  IMPORTING
    ev_fiscyear = l_fiscyear.
* result value of the routine
RESULT = l_fiscyear.
Just copy and paste the code in your system. 
Note: K4 is the Variant. Change Variant according to your requirement.
Q) There are two ways to measure the size of the cube. One is an estimate and other is the accurate reading in MB or GB.
Before you build the cube, if you want to estimate what the size of the cube will be, you can use the following formula.
Formula is:
 IC = F x ((ND x 0.30) + 2) x NR x NP = required disk space in bytes
 F = ((ND + 3) x 4 bytes) + (22 bytes x NK) = fact table record size in bytes
 The factor (ND x 0.30) allows 30% per dimension in the fact table; the +2 adds 100% for aggregates and 100% for indexes.
 ND = number of dimensions
 NK = number of key figures
 NR = number of records (per period)
 NP = number of periods
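The estimate can be written out as a small calculation. The sketch below just implements the formula above; the example figures for dimensions, key figures, records, and periods are assumed:

```python
def fact_record_bytes(nd, nk):
    """F = ((ND + 3) x 4 bytes) + (22 bytes x NK)."""
    return (nd + 3) * 4 + 22 * nk

def estimated_cube_bytes(nd, nk, nr, np):
    """IC = F x ((ND x 0.30) + 2) x NR x NP:
    30% overhead per dimension, plus 100% each for aggregates and indexes."""
    return fact_record_bytes(nd, nk) * ((nd * 0.30) + 2) * nr * np

# Assumed example: 8 dimensions, 10 key figures,
# 50,000 records per period, 12 periods
size = estimated_cube_bytes(8, 10, 50_000, 12)
print(f"{size / 1024 ** 2:.1f} MB")
```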
But in your case you already have a cube and an ODS ready, so use the following calculations (this is for the cube).
Data on the BW side is measured in number of records, not TB or GB. The size, if required, has to be calculated. You either use the formula given above to translate the number of records into TB or GB, or, the easy way, estimate from the data growth and put an intelligent guess on it. It depends how accurate you want to be.
The exact method, however, still remains as under:
Go through SE16. For example if the cube is ZTEST, then look at either E table or F table by typing in /BIC/EZTEST or /BIC/FZTEST and clicking on "number of records", just the way we do for other tables.
If the cube has never been compressed (a rare case if you are working on a reasonable project), then you need to bother only on the F Fact table as all the data is in F Fact table only.
You can get the table width by going to SE11, type in the table name, go to "extras" and "table width". Also you can get the size of each record in the fact table in bytes. Next, you can find out the size of all dimension tables by doing this. The complete picture of extended star schema should be clear in your mind to arrive at the correct figure.
Add all these sizes (fact table width + all dimension table widths) and multiply by the number of records in the fact table. This gives you the total size of the cube.
If the cube is compressed (as may be the case), then you will need to add the records in the E table as well, because after compression data moves from the F fact table to the E fact table, so you need to look into the E fact table too. Hope this helps.
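The exact method amounts to simple arithmetic, sketched here in Python (the widths and row counts below are invented example inputs; in practice they come from SE11 and SE16):

```python
def cube_size_bytes(fact_width, dim_widths, f_rows, e_rows=0):
    """Exact-method sketch: total row width (fact table width plus all
    dimension table widths, from SE11 -> Extras -> Table Width) multiplied
    by the row counts read via SE16 from the /BIC/F* and /BIC/E* tables."""
    row_width = fact_width + sum(dim_widths)
    return row_width * (f_rows + e_rows)

# Assumed example: 120-byte fact rows, three dimension tables, compressed cube
total = cube_size_bytes(120, [30, 24, 18], f_rows=200_000, e_rows=1_800_000)
```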
This is all done for the Cube. For the ODS you can get direct info from DB02
Q) Assume my DataSource is initialized for the first time today and a V3 run collected 10 records into RSA7.
My understanding is that RSA7 displays these 10 records under both Delta Update and Delta Repetition. When the InfoPackage runs, 10 records will transfer to BW, but RSA7 still shows these 10 records under Delta Repetition until the next V3 run. Suppose the next V3 run collects 5 records into RSA7. This time RSA7 shows the newly added 5 records under Delta Update and both the new and old records (15 records) under Delta Repetition. When the InfoPackage runs the next time, 5 records will transfer to BW, and RSA7 shows the newly added 5 records in Delta Repetition until the next V3 run, with the old 10 records deleted.


Yes, your assumption is correct. But with one caveat. The data would get deleted from Delta repeat section only when the next delta run is successful. This is done to ensure that no delta is lost.


Assume my DataSource is Initialized for the first time today and V3 run collected 10 records to RSA7. My understanding is, RSA7 displays these 10 records under both Delta Update & Delta Repetition.

I guess you would see 10 records only for delta update and no records for repetition as you are yet to run the first delta and you cannot do a delta repetition.


When the InfoPackage runs, 10 records will transfer to BW, but RSA7 still shows these 10 records under Delta Repetition until the next V3 run. Suppose the next V3 run collects 5 records into RSA7. This time RSA7 shows the newly added 5 records under Delta Update and both the new and old records (15 records) under Delta Repetition.
If the status in the monitor is green for the delta update, you will have 10 records in delta repetition; these will be cleared from delta update. If the next V3 brings 5 records, you will have 10 in repetition and 5 in delta update.

When the InfoPackage runs the next time, 5 records will transfer to BW, and RSA7 will show the newly added 5 records in Delta Repetition until the next V3 run, with the old 10 records deleted.

5 records in repetition, and any new records from V3 under delta update.
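The delta queue behaviour discussed in this thread can be modelled as a tiny state machine. The Python below is a conceptual toy only, not SAP code:

```python
class DeltaQueue:
    """Toy model of an RSA7 delta queue: 'update' holds records collected by
    V3 and awaiting extraction; 'repeat' holds the last successfully sent
    delta, kept for a possible delta repetition."""

    def __init__(self):
        self.update = []
        self.repeat = []

    def v3_run(self, records):
        # The V3 job pushes freshly collected records into Delta Update
        self.update.extend(records)

    def delta_load(self):
        # A successful InfoPackage delta sends the update section to BW;
        # it then replaces the previous repeat section, whose old records
        # are deleted only now, so that no delta can be lost.
        sent = list(self.update)
        self.repeat = sent
        self.update = []
        return sent

q = DeltaQueue()
q.v3_run(range(10))   # first V3 run: 10 records under Delta Update
q.delta_load()        # delta sends 10; they move to Delta Repetition
q.v3_run(range(5))    # next V3 run: 5 under Delta Update, 10 still in Repetition
```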

Q) Explain the steps for performance tuning in the BW and R/3 systems.
by: Anoo
With an increasing number of data records in the InfoCube, not only the load but also the query performance can be reduced. This is attributed to the increasing demands on the system for maintaining indexes. The indexes that are created in the fact table for each dimension allow you to easily find and select the data.
By using partitioning you can split up the whole dataset for an InfoCube into several, smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, or also when deleting data from the InfoCube.
Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates serve, in a similar way to database indexes, to improve performance.
Compressing the Infocube
Infocube compression means aggregation of the data ignoring the request id’s. After compression, the system need not perform aggregation using the request ID every time you execute a query.
Based on these, you may have doubts such as:
- How do the above techniques compare and contrast?
- Are all of the above techniques meant to improve query performance?
- What techniques do we follow to improve data load performance?
To address these doubts:
Yes, the creation of indexes should be done after loading, because an index works just like a book index. Aggregates improve query performance because the OLAP processor takes much time to calculate the output the first time a query is executed, but the next time it is faster.
In what ways and what combinations should they be implemented in a project?
It means the following. In a project, depending on the client requirement, if the reports are running slowly, loads are running slowly, or there are other such issues, we need to study the situation by maintaining statistical information, using transaction codes, procedures, and tables such as RSDDSTAT, ST22, and DB02, and then analyze the issue and apply the techniques required.
Basically, the following are the points to keep in mind to improve loading performance.
1. When you are extracting data from the source system using the PSA transfer method:
Using PSA and data target in parallel: faster loading.
Using only PSA and updating the data targets subsequently: reduces the burden on the server.
2. Data packet size: when extracting data from the source system to BW, we use data packets. As per the SAP standard, we prefer to have 50,000 records per data packet.
For every data packet, a commit and save is performed, so fewer data packets are preferable.
If you have 100,000 records per data packet and there is an error in the last record, the entire packet fails.
3. In a project, we have millions of records to extract from different modules into BW. All loads run in the background every one or two hours approximately, handled by work processes. We need to make sure that the work processes are neither over-utilized nor under-utilized.
4. Drop index of a cube before loading
5. Distribute work load among multiple server instances
6. Prefer delta load: as it loads only newly added or modified records.
7. We should deploy parallelism: multiple InfoPackages should be run simultaneously.
8. Update routines and transfer routines should be avoided unless necessary, and any routine should be optimized code.
9. We should prefer to load master data before transaction data, because when you load master data the SIDs are generated, and these SIDs are reused when the transaction data is loaded.
That is my overall picture of performance issues.
Q) COPA Extraction Steps
Below are the steps with explanation for COPA extraction.
R/3 System
1. KEB0
2. Select Datasource 1_CO_PA_CCA
3. Select Field Name for Partitioning (Eg, Ccode)
4. Initialise
5. Select characteristics & Value Fields & Key Figures
6. Select Development Class/Local Object
7. Workbench Request
8. Edit your Data Source to Select/Hide Fields
9. Extract Checker at RSA3 & Extract
BW System
1. Replicate Data Source
2. Assign Info Source
3. Transfer all Data Source elements to Info Source
4. Activate Info Source
5. Create Cube on Infoprovider (Copy str from Infosource)
6. Go to Dimensions and create dimensions, Define & Assign
7. Check & Activate
8. Create Update Rules
9. Insert/Modify KF and write routines (const, formula, abap)
10. Activate
11. Create InfoPackage for Initialization
12. Maintain Infopackage
13. Under Update Tab Select Initialize delta on Infopackage
14. Schedule/Monitor
15. Create Another InfoPackage for Delta
16. Check the Delta option
17. Ready for Delta Load
LIS, CO/PA, and FI/SL are customer-generated generic extractors, while LO extractors are BW Content extractors.
LIS is a cross application component LIS of SAP R/3 , which includes, Sales Information System, Purchasing Information System, Inventory Controlling....
Similarly CO/PA and FI/SL are used for specific Application Component of SAP R/3.
CO/PA collects all the OLTP data for calculating contribution margins (sales, cost of sales, overhead costs). FI/SL collects all the OLTP data for financial accounting, special ledger
1) Add the fields to the operating concern. So that the required field is visible in CE1XXXX table and other concerned tables CE2XXXX, CE3XXXX etc.
2) After you have enhanced the operating concern then you are ready to add it to the CO-PA data source. Since CO-PA is a regenerating application you can't add the field directly to the CO-PA data source. You need to delete the data source and then need to re-create using KEB2 transaction.
3) While re-creating the data source, use the same old name so that there won't be any changes on the BW side when you assign the data source to the InfoSource. Just replicate the new data source on the BW side and map the new field in the InfoSource. If you re-create it using a different name, you will need extra build effort to take the data into BW through the InfoSource all the way up to the InfoCube. I would personally suggest keeping the same old data source name as before.

If you are adding the fields from the same operating concern, then go to KE24, edit the datasource, and add your fields. However, if you are adding fields from outside the operating concern, then you need to append the extract structure and populate the fields in a user exit using ABAP code. Reference OSS note: 852443.
1. Check RSA7 on your R3 to see if there is any delta queue for COPA. (just to see, sometimes there is nothing here for the datasource, sometimes there is)
2. On BW go to SE16 and open the table RSSDLINIT
3. Find the line(s) corresponding to the problem datasource.
4. You can check the load status in RSRQ using the RNR from the table
5. Delete the line(s) in question from RSSDLINIT table
6. Now you will be able to open the infopackage. So now you can ReInit. But before you try to ReInit ....
7. In the infopackage go to the 'Scheduler' menu > 'Initialization options for the source system' and delete the existing INIT (if one is listed)        
Q) Delete unwanted Objects in QA system
I have deleted unwanted Update rules and InfoSources (that have already been transported to QA system) in my DEV system. How do I get them out of my QA system? I cannot find the deletions in any transports that I have created. Although they could be buried somewhere. Any help would be appreciated. 
I had the same problem as you, and I have been told there is a way to delete the unwanted objects. You may request the Basis team to temporarily open up the test box to remove the obsolete update rules and InfoSources. Remember to delete the request created in the test system after you have removed the update rules and InfoSources.
When I tried to delete the master data, I got the following message: "Lock NOT set for: Deleting master data attributes". What do I need to do in order to be able to delete the master data?
Since, technically, the master data tables are not locked via SAP locks but via a BW-specific locking mechanism, it may occur in certain situations, that a lock is retained after the termination of one of the above transactions. This always happens if the monitor no longer has control, for example in the case of a short dump. If the monitor gets the control back after an update termination (regular case), it analyzes whether all update processes (data packets) for a request have been updated or whether they have terminated. If this is the case, the lock is removed. 
Since the master data table lock is no SAP lock, this can neither be displayed nor deleted via Transaction SM12. There is an overview transaction in the BW System, which can display and delete all currently existing master data table locks. Via the button in the monitor with the lock icon or via Transaction code RS12 you can branch to this overview. 
A maximum of two locks is possible for each basis characteristic: 
Lock of the master data attribute tables
Lock of the text table 
Changed by, Request number, Date and Time is displayed for every lock. Furthermore, a flag in the overview shows whether locks have been created via master data maintenance or master data deletion. 
During a loading process the first update process starting to update data into the BW System (several processes update may update in parallel for each data request), sets the lock entry. All other processes only check whether they belong to the same data request. The last process, which has either been updated or has terminated, causes the monitor to trigger the deletion of the lock. 
Q) Differences Among Query, Workbook and View
Many people are confused by the differences among: Query, Workbook, and View.  
Here are my thoughts on the subject:
A query definition is saved on the server. Never anywhere else.
Although people say a workbook contains a query (or several queries); it does not. It contains a reference to a query. The workbook can be saved on the server; or anywhere else that you might save an Excel workbook.
What happens if someone changes the query definition on the server? 
Answer: the next time you refresh the query in the Excel workbook, the new query definition replaces the old query definition in the workbook. Maybe. It depends on what change was made.
For example, if someone added a Condition to the query definition, the workbook will be virtually invisible to this. The Condition is available; but, is not implemented in the workbook. (Until the user of the workbook manually adds the view of the Condition and then activates it.)
For example, if someone changed the definition of a KF in the query definition, the revised KF will show up in place of the old KF in the workbook.
But ... if, for example, someone deleted the old KF and added a new KF, we get a different story. Now the old KF no longer appears (it does not exist); but, the new KF does not appear (it was not marked to be visible in the workbook).
About workbooks as views ... OK, a workbook may very well have a certain "view" of the query (drilldown, filters, et cetera). And, if the workbook is saved to the server in a Role where everyone can access it, this is good. But, if the workbook is saved to one's favorites, then this "view" is only accessible to that individual. Which may be good. Or may not.
A "saved view", on the other hand is stored on the server. So, it is available to all.
If you navigate in a workbook you can back up. You can back up, though, only as far as you navigated in the current session. You cannot back up to where you were in the middle of last week's session. Unless you saved that navigation state as a "saved view". Then, you can jump to that view at any time.
The downside of saved views is that they are easy for anyone to set up and difficult for most to delete.
Q) Customer Exit Variable In Bex
The customer exit works at:
1. Extraction side
After enhancing a datasource in RSA6, we need to populate those enhanced fields. In that case we have to create a project in transaction CMOD, select the enhancement assignment RSAP0001, select the appropriate FM, and write the select statement in the appropriate include:
EXIT_SAPLRSAP_001 - Transaction data
EXIT_SAPLRSAP_002 - Master data
EXIT_SAPLRSAP_003 - Text
EXIT_SAPLRSAP_004 - Hierarchy
The above is done on the source system side, e.g. R/3.
2. Reporting side
We need to write the user exit to populate reporting-related variables in the enhancement assignment RSR00001: select the FM EXIT_SAPLRRS0_001, and then write the code in the include ZXRSRU01. This is helpful especially when we need to derive a variable.
Along with that:
BEx User Exit allows the creation and population of variables and calculations for key figures and variables on a runtime basis.
R/3 User Exit is found in R/3 under CMOD and contains additional programming that is needed to fill field additions to extract structures. 
Q) Restricted Key figures:
The key figures that are restricted by one or more characteristic selections can be basic key figures, calculated key figures or key figures that are already restricted.
Calculated key Figure:
Calculated key figures consist of formula definitions containing basic key figures, restricted key figures or precalculated key figures.
Procedure for Defining a new restricted key figure:
1. In the InfoProvider screen area, select the Key Figures entry and choose New Restricted Key Figure from the context menu (secondary mouse button).
If a restricted key figure has already been defined for this InfoProvider, you can also select the Restricted Key Figures entry and then choose New Restricted Key Figure from the context menu.
The entry New Restricted Key Figure is inserted and the properties for the restricted key figure are displayed in the Properties screen area.
2. Select the New Restricted Key Figure entry and choose Edit from the context menu (secondary mouse button).
  • The Change Restricted Key Figure dialog box appears.
  • You can also call the Change Restricted Key Figure dialog box from the Properties screen area by choosing the Edit pushbutton.
  • You make the basic settings on the General tab page.
  • The text field, in which you can enter a description of the restricted key figure, is found in the upper part of the screen.
  • You can use text variables in the description (see Using Text Variables).
  • Next to that, you can enter a technical name in the Technical Name field.
  • Underneath the text field, to the left of the Detail View area, the directory of all objects available in the InfoProvider is displayed. The empty field for defining the restricted key figure (Details of the Selection) is on the right-hand side of the screen.
3. Using drag and drop, choose a key figure from the InfoProvider and restrict it by selecting one or more characteristic values. See Restricting Characteristics.
You can also use variables instead of characteristic values. However, note that you cannot use the following variable types in restricted key figures for technical reasons:
  • Variables with the process type Replacement with Query (see also Replacement Path: Replacement with Query)
  • Variables that represent a precalculated value set (see also Details)
You can use these variable types to restrict characteristics in the rows, columns, or in the filter.
4. Make any necessary settings for the properties of the restricted key figure on the other tab pages. See Selection/Formula Properties.
5. Choose OK. The new restricted key figure is defined for the InfoProvider.
Q) V1 - Synchronous update
V2 - Asynchronous update
V3 - Batch asynchronous update
These are different work processes on the application server that take the update LUW (which may contain various DB manipulation SQLs) from the running program and execute it. They are separated to optimize transaction processing capabilities.
Synchronous Updating (V1 Update)-->>
The statistics update is made synchronously with the document update.
While updating, if problems that result in the termination of the statistics update occur, the original documents are NOT saved. The cause of the termination should be investigated and the problem solved. Subsequently, the documents can be entered again.
Asynchronous Updating (V2 Update)-->>
With this update type, the document update is made separately from the statistics update. A termination of the statistics update has NO influence on the document update (see V1 Update).
Asynchronous Updating (V3 Update) -->>
With this update type, updating is made separately from the document update. The difference between this update type and the V2 Update lies, however, with the time schedule. If the V3 update is active, then the update can be executed at a later time.
If you create/change a purchase order (me21n/me22n), when you press 'SAVE' and see a success message (PO.... changed..), the update to underlying tables EKKO/EKPO has happened (before you saw the message). This update was executed in the V1 work process.
There are some statistics collecting tables in the system which can capture data for reporting. For example, LIS table S012 stores purchasing data (it is the same data as EKKO/EKPO stored redundantly, but in a different structure to optimize reporting). Now, these tables are updated with the txn you just posted, in a V2 process. Depending on system load, this may happen a few seconds later (after you saw the success message). You can see V1/V2/V3 queues in SM12 or SM13.
V3 is specifically for BW extraction. The update LUW for these is sent to V3 but is not executed immediately. You have to schedule a job (eg in LBWE definitions) to process these. This is again to optimize performance.
V2 and V3 are separated from V1 as these are not as realtime critical (updating statistical data). If all these updates were put together in one LUW, system performance (concurrency, locking etc) would be impacted.
Serialized V3 update is called after V2 has happened (this is how the code running these updates is written) so if you have both V2 and V3 updates from a txn, if V2 fails or is waiting, V3 will not happen yet.
BTW, 'serialized' V3 is discontinued now, in later releases of PI you will have only unserialized V3.
In contrast to V1 and V2 Updates , no single documents are updated. The V3 update is, therefore, also described as a collective update.
The update flow involves the following tables:
1. Application tables (R/3 tables)
2. Statistical tables (for reporting purposes)
3. Update tables
4. BW delta queue
Statistical tables are for reporting on R/3 while update tables are for BW extraction. Is data stored redundantly in these two (three if you include application tables) sets of table?
Yes it is.
Difference is the fact that update tables are temporary, V3 jobs continually refresh these tables (as I understand). This is different from statistics tables which continue to add all the data. Update tables can be thought of as a staging place on R/3 from where data is consolidated into packages and sent to the delta queue (by the V3 job).
Update tables can be bypassed (if you use 'direct' or 'queued' delta instead of V3) to send the updates (data) directly to the BW delta queue. V3 is, however, better for performance, so it is one option along with the others, and it uses update tables.
Statistical table existed since pre BW era (for analytical reporting) and have continued and are in use when customers want their reporting on R/3.
The structure of statistical table might be different from the update table/BW queue, so, even though it is based on same data, these might be different subsets of the same superset.
V3 collective update means that the updates are going to be processed only when the V3 job has run. I am not sure about 'synchronous V3'. Do you mean serialized V3?
At the time of oltp transaction, the update entry is made to the update table. Once you have posted the txn, it is available in the update table and is waiting for the V3 job to run. When V3 job runs, it picks up these entries from update table and pushes into delta queue from where BW extraction job extracts it.
Q) As a rule of thumb, we can say that aggregates improve query performance.
Question: OK, then what is that rule of thumb?
Rules for Efficient Aggregates:
"Valuation" column evaluates each aggregate as either good or bad. The valuation starts at "+++++" for very useful, to "-----" for delete. This valuation is only meant as a rough guide. For a more detailed valuation, refer to the following rules:
1. An aggregate must be considerably smaller than its source, meaning the InfoCube or the aggregate from which it was built. Aggregates that are not often affected by a change run have to be at least 10 times smaller than their source. Other aggregates have to be even smaller. The number of records contained in a filled aggregate is found in the "Records" column in the aggregate maintenance. The "Summarized Records (Mean Value)" column tells you how many records on average have to be read from the source to create a record in the aggregate. Since the aggregate should be ten times smaller than its source, this number should be greater than ten.
2. Delete aggregates that are no longer used, or that have not been used for a long time. The last time the aggregate was used is in the "Last Call" column, and the frequency of the calls is in the "Number of Calls" column. Do not delete the basic aggregates that you created to speed up the change run. Do not forget that particular aggregates might only not be used at particular times (holidays, for example).
3. Determine the level of detail you need for the data in the aggregate. Insert all the characteristics that can be derived from these characteristics. For example, if you define an aggregate on a month level, you must also include the quarter and the year in the aggregate. This enhancement does not increase the quantity of data for the aggregate. It is also only at this point, for example, that you can actually build a year aggregate from this aggregate, or that queries that need year values are able to use this aggregate.
4. Do not use a characteristic and one of its attributes at the same time in an aggregate. Since many characteristic values have the same attribute value, the aggregate with the attribute is considerably smaller than the aggregate with the characteristic. The aggregate with both the characteristic and the attribute has the same level of detail, and therefore the same size, as the aggregate with the characteristic alone; it is, however, affected by the change run. The attribute information is already reachable from the characteristic via the join with the master table, so the aggregate with both the characteristic and the attribute only saves the database join. For this reason, you cannot normally create this kind of aggregate. If such an aggregate would ever be useful, because otherwise the database optimizer creates bad execution plans, you can create it in expert mode (in 2.0B: in the aggregate maintenance, select an aggregate and choose Extras > Expert Mode, or enter "EXPT" in the OK code field).
The factor of ten used above is only a rule of thumb; the exact value depends on the users, the system, and the database. If, for example, the database optimizer has problems creating a useful plan for SQL statements with many joins, aggregates with less summarization can still be useful if they save joins.
Q) Explain how to use the return table option.
With the return table option, an update routine returns a table of records instead of a single value, so one incoming record can produce several records for a key figure. Since you can also decide dynamically whether a key figure is updated or not (via the RETURNCODE), the quantity can be updated but not the value (or vice versa).
If you want to split data for more key figures it is better to do in start routine.
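A return-table routine in the 3.x update rules looks roughly like the sketch below. This is only an illustrative sketch: the structure names /BIC/VMYCUBET and /BIC/CSMYSOURCE and the field QUANTITY are placeholders for your own InfoCube, InfoSource, and key figure, not real objects. One incoming record is split into two result records, and RETURNCODE controls whether the result is used:

```abap
* Illustrative sketch of a return-table update routine (3.x update
* rules). /BIC/VMYCUBET, /BIC/CSMYSOURCE and QUANTITY are placeholders.
FORM compute_key_figure
  TABLES   RESULT_TABLE STRUCTURE /BIC/VMYCUBET
  USING    COMM_STRUCTURE LIKE /BIC/CSMYSOURCE
           RECORD_NO LIKE SY-TABIX
           RECORD_ALL LIKE SY-TABIX
           SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
  CHANGING RETURNCODE LIKE SY-SUBRC
           ABORT LIKE SY-SUBRC.

  DATA: ls_result TYPE /BIC/VMYCUBET.

* Copy the characteristics of the incoming record and split the
* quantity evenly across two result records.
  MOVE-CORRESPONDING COMM_STRUCTURE TO ls_result.
  ls_result-quantity = COMM_STRUCTURE-quantity / 2.
  APPEND ls_result TO RESULT_TABLE.
  APPEND ls_result TO RESULT_TABLE.

* RETURNCODE = 0 keeps the records; a value <> 0 skips this record.
  RETURNCODE = 0.
  ABORT = 0.
ENDFORM.
```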
The main difference between a start routine and a field-level routine is:
With a field-level routine you do not need to maintain new or additional fields in the InfoSource. But if you want to write a start routine, all the fields must also be available in the InfoSource, and you have to map them one-to-one in the update rules.
If you are loading into a cube, time distribution is also available (for example, if you receive data at month level and want to distribute it to week level, you can use time distribution with no coding).
For sample code, please search this forum with the search term "start routine"; you will find a lot.
As a simple example, suppose you receive price and quantity from the DataSource and want to calculate a value, and you have a separate key figure (Value) for it in your target.
You can achieve this in two ways:
1. Field-level routine.
2. Start routine.
If you write it at field level, you do not need to maintain the new key figure (Value) in the InfoSource; you can calculate it either with a formula or with a routine.
But if you want to calculate it in the start routine, this new key figure (Value) must be available in the InfoSource; only then is it available in DATA_PACKAGE in the start routine, and only then can you assign the calculation and map the field one-to-one in the update rules.
If you are creating a new InfoSource you can simply add the required fields, but when you are loading from one ODS/cube to another ODS/cube (the data mart scenario), the system generates the DataSource and InfoSource, and then it is a bit more difficult.
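For the price times quantity example above, a 3.x start routine could look like the following sketch. The field names /BIC/PRICE, /BIC/QUANTITY and /BIC/VALUE are placeholders for your own InfoObjects; as noted above, /BIC/VALUE must already exist in the InfoSource, otherwise it is not part of DATA_PACKAGE:

```abap
* Illustrative 3.x start routine sketch; field names are placeholders.
* DATA_PACKAGE contains all InfoSource fields, so the target key
* figure /BIC/VALUE must be part of the InfoSource.
LOOP AT DATA_PACKAGE.
  DATA_PACKAGE-/BIC/VALUE = DATA_PACKAGE-/BIC/PRICE
                          * DATA_PACKAGE-/BIC/QUANTITY.
  MODIFY DATA_PACKAGE.
ENDLOOP.

* ABORT <> 0 would terminate the update; 0 continues normally.
ABORT = 0.
```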
Q) RDA - Real-time data acquisition - brings real-time data into BW from R/3 or from web services.
It uses a background process called a daemon that controls the data flow into BW and takes care of the extraction from the source system.
With a RemoteCube we cannot handle large data volumes or large numbers of users.
With RDA we can report on large data volumes for large numbers of users, because here the data is stored physically in BW.
DAEMON: Data Extraction and Monitoring
With RDA, data can be stored only in a DSO.
The DataSource must support real-time extraction.
The daemon supports only two levels: delta queue to PSA, and then PSA to DSO.
Sources that support RDA are web services and SAP source systems.
Daemon monitor: transaction RSRDA
RDA cannot be scheduled through a process chain, only through the daemon.
Only one real-time InfoPackage can be created per DataSource.
Q) What are the steps to create RFC connection on Business Warehouse?
Step 1 :- On the BW side :-
1. Create a logical system: SPRO -> ALE -> Sending & Receiving Systems -> Logical Systems -> New Entries (e.g. BWCLNT800 for client 800)
2. Assign the client to the logical system.
Step 2 :- Follow the same procedure on the R/3 side to create a logical system there.
Step 3 :- On the BW side :- Create an RFC connection in SM59.
RFC destination name - should be the logical system name of the R/3 system.
Connection type :- 3 (ABAP connection)
1st tab, Technical Settings:
Target host :- IP address of the R/3 server
System number :- 03
2nd tab, Logon/Security:
             Client - R/3 client number
             User - R/3 user
             Password - R/3 password
Step 4 :- On the R/3 side, the same procedure in SM59.
RFC destination name - should be the logical system name of the BW system.
Connection type :- 3 (ABAP connection)
1st tab, Technical Settings:
Target host :- IP address of the BW server
System number :- 03
2nd tab, Logon/Security:
             Client - BW client number
             User - BW user
             Password - BW password
Step 5 :- SPRO -> SAP Reference IMG -> Business Information Warehouse -> Links to Other Systems -> Links between R/3 and BW
Create the ALE user in the source system -> select BWALEREMOTE -> back
Step 6 :- In BW
username BWREMOTE
profile S_BI_WHM_RFC
Step 7 :- In R/3
username ALEREMOTE
profile S_BI_WX_RFC
Step 8 :- In R/3
Create RFC user
user RFCUser  create
usertype system
pwd 1234
profiles SAP_ALL
Step9 :-
In table RSADMINA, enter the default BW client in the field BWMANDT.
Step10 :- In bw
user RFCUser  create
usertype system
pwd 1234
profiles SAP_ALL
Step11 :- In bw
RSA1 -> Source system -> Create
RFC destination
Target system: host name of the R/3 server
System number
Source system user: ALEREMOTE
Background user: BWREMOTE
Q) Explain "BW Statistics" and how it is useful in improving performance.
BW Statistics is nothing but an SAP-delivered MultiProvider and a set of InfoCubes that collect statistics about the objects you have developed. You have to enable and activate BW Statistics for the particular objects whose statistics you want to see, so that the required data is gathered. This by itself will in no way improve performance; but you can analyze the statistics data and, based on it, decide on ways to improve performance, i.e. setting the read mode, compression, partitioning, creation of aggregates, etc.
BW Statistics is a tool
-for the analysis and optimization of Business Information Warehouse processes
-to get an overview of the BW load and analysis processes
The following objects can be analyzed here:
  • Roles
  • SAP BW users
  • Aggregates
  • Queries
  • InfoCubes
  • InfoSources
  • ODS
  • DataSources
  • InfoObjects
Of the two sub-areas, BW Statistics is the most important:
1. BW Statistics
2. BW Data Slice
BW Statistics data is stored in the Business Information Warehouse. 
This information is provided by a MultiProvider (0BWTC_C10), which is based on several BW BasisCubes.
  • OLAP (0BWTC_C02)
  • OLAP Detail Navigation (0BWTC_C03)
  • Aggregates (0BWTC_C04)
  • WHM (0BWTC_C05)
  • Metadata ( 0BWTC_C08 )
  • Condensing InfoCubes (0BWTC_C09)
  • Deleting Data from an InfoCube (0BWTC_C11)
BW Data Slice gives an overview of the characteristic combinations requested for particular InfoCubes and of the number of records that were loaded. This information is based on the following BasisCubes:
-BW Data Slice
-Requests in the InfoCube
BW Data Slice
BW Data Slice contains information about which characteristic combinations of an InfoCube are to be loaded and with which request, that is, with which data request.
Requests in the InfoCube
The InfoCube "Requests in the InfoCube" does not contain any characteristic combinations; you can create queries on this InfoCube that return the number of data records for the corresponding InfoCube and for the individual requests. The data flow falls into two areas:
- data load / data management
- data analysis
Q) What is the use of Match or Copy in Business Content?
Match (X) or Copy
If the SAP delivery version and the active version can be matched, a checkbox is displayed in this column.
With the most important object types, the active version and the SAP delivery version can be matched.
From a technical point of view, the SAP delivery version (D version) is matched with the M version. As in most cases the M version is identical to the active version (A version) in a customer system, this is referred to as a match between the D and A versions for reasons of simplification.
When a match is performed, particular properties of the object are compared in the A version and the D version. First it has to be decided whether these properties can be matched automatically or whether this has to be done manually. A match can be performed automatically for properties if you can be sure that the object is to be used in the same way as before it was transferred from Business Content. When performing matches manually you have to decide whether the characteristics of a property from the active version are to be retained, or whether the characteristics are to be transferred from the delivery version.
Example of an automatic match
Additional customer-specific attributes have been added to an InfoObject in the A version. In the D version, two additional attributes have been delivered by SAP that do not contain the customer-specific attributes. In order to be able to use the additional attributes, the delivery version has to be installed from Business Content again. At the same time, the customer-specific attributes are to be retained. In this case, you have to set the indicator (X) in the checkbox. After installing the Business Content, the additional attributes are available and the customer-specific enhancements have been retained automatically. However, if you have not checked the match field, the customer-specific enhancements in the A version are lost.
Example of a manual match
An InfoObject has a different text in the A version than in the D version. In this case the two versions have to be matched manually. When Business Content is installed, a details screen appears which asks you to specify whether the text should be transferred from the active version or from the D version.
The Match indicator is set as default in order to prevent the customer version being unintentionally overwritten. If the Content of the SAP delivery version is to be matched to the active version, you have to set the Install indicator separately.
The active version is overwritten with the delivery version if
- the match indicator is not set and
- the install indicator is set.
In other words, the delivery version is copied to the active version.
If the Install indicator is not set, the object is not copied or matched. In this case, the Match indicator has no effect.
In the context menu, two options are available:
a. Merge All Below
The object in the selected hierarchy level and all objects in the lower levels of the hierarchy are flagged for matching.
b. Copy All Below
The Match indicators are removed for the object in the selected hierarchy level and all objects in the lower levels of the hierarchy. If the Install indicator is also set, these objects are copied from the delivery version to the active version.