
Introduction to the ST12 trace: Transaction ST12 is similar to a combination of the standard ABAP and SQL trace transactions SE30 and ST05. Transaction ST12 is used to integrate the ABAP trace with the performance traces (SQL, enqueue, RFC; transaction ST05) and to make the tracing and analysis process faster and more convenient. The ABAP trace with ST12 is the central entry point for performance analysis. It should be used to detect any performance hotspot top-down, for functional time distribution analysis, and to optimize ABAP/CPU-bound issues.

This blog will show how to take a trace with ST12 and how to analyse the resulting trace.

Before you start: we need to understand the SAP system response times, which are shown below.


The ST12 trace combines the ABAP and performance (SQL) traces into a single transaction, with major functional enhancements, especially for the ABAP trace part. In a joint switch-on/switch-off with the performance trace, ST12 allows you to activate the ABAP trace for another user. ABAP and performance traces can be activated on another server, or even on all servers, to catch e.g. incoming RFCs.

ST12 makes it easy to keep valuable trace results and pass them on, e.g. to the SAP back office. The ABAP trace results are collected completely into the database.

For the performance trace, ST12 remembers the time frame and server, and one click navigates directly into the ST05 trace display on the proper server. Selected results from the performance trace and other findings can be saved as annotation texts in a trace analysis.

The ST12 ABAP trace summary quickly shows the contribution of known expensive functionalities. It can also estimate the time contribution of certain programs, especially user exits and customer coding.

With ST12, program hierarchies can be analyzed in the aggregated ABAP trace 'per call'. The non-aggregated ABAP trace, with its large trace file sizes, is therefore not needed and was omitted from ST12.

ABAP trace for beginners

The ABAP trace measures two different kinds of things. The first are certain simple and potentially expensive ABAP statements, such as database accesses and statements on internal tables; these are easy to understand. The second are calls to modularization units such as PERFORM, CALL FUNCTION/METHOD, CALL SCREEN, or PAI/PBO modules. These are more complex because they are hierarchical containers and resemble nested Russian dolls: their hierarchies can branch and also merge again.

When and how to use ST12 ABAP trace?

Historically, the ABAP trace was recommended only for analyzing gaps in the SQL trace or pure CPU issues. The ABAP trace with ST12 can and should be used to:

a) Identify top-down any performance hotspot and get an exact functional time distribution

b) Find customer modifications and user exits

c) Detect issues in the call hierarchy and

d) Search for localized technical tuning potential, i.e. CPU-expensive ABAP statements

ABAP real-time example: a custom report calls a custom Smart Form to display customer payments. A user raised a query: why does the transaction take more than 5 minutes to complete printing? So I asked the user whether the report itself takes time to display, or whether the time is spent in printing. This report has a selection screen, which is shown below.


When the user executes this report, an ALV list is displayed; the user then selects a particular line and clicks the print button. The time is therefore consumed in data selection and internal table processing.

Note: Before activating a trace, run the report/transaction two or three times. On the first run the records are selected for the first time, so the database response time will obviously be a bit high, whereas the ABAP (internal table) processing time will hardly differ between runs.

Please close all sessions except two (report execution and ST12). It is also fine if the report is being run by the user while you take the trace, but make sure the user works in a single session.

Step 1) Go to transaction code ST12, enter a comment and the user name, and start the trace.


Step 2) Run your report/transaction, then return to ST12 and click 'End traces & collect'.

Step 3) At the bottom of the ST12 screen you can see the 'Full screen' button; click it.


Now select the row and click 'ABAP trace' in the toolbar. Then select the fourth column (Net) and click the 'Sort descending' button, as shown below.


I deliberately used a small input selection on the selection screen, so the total execution time is 91.2 seconds. The snapshot above shows the split into ABAP, database and system time: the ABAP runtime is 88.0% and the database runtime is 12.0%, so we can improve both response times, ABAP (internal table processing) and database. The marked rectangle in the screen above shows internal table processing and a database SELECT query that can be improved.


The system now takes you to the respective program/object at the exact line number. In this program there is a SELECT statement with the highest response time (we will come to it later).

The information below gives an overview of the ABAP trace columns:

S.No Column Detailed explanation
01 Call The ABAP statement or modularization unit that was executed
02 No. The number of times the call was executed, e.g. PERFORM, CALL FUNCTION/METHOD, CALL SCREEN, or PAI/PBO modules
03 Gross Gross time (in microseconds): the total time over all executions of the call, i.e. the time summarized over all call executions
04 Net Net time (in microseconds): the gross time minus the time in which this modularization unit calls other modularization units, minus the durations of simple statements within this unit that are explicitly measured (see the example after this table)
05 Gross (%) Gross time / total runtime x 100
06 Net (%) Net time / total runtime x 100
07 Program (called program) The program being called
08 Type The call type, e.g. "DB", "DB->" or "Sys."
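As a small illustration of gross versus net time, consider this hypothetical sketch (the form names F_OUTER and F_INNER are invented for the example):

* Hypothetical sketch of gross vs. net time attribution in the ABAP trace.
PERFORM F_OUTER.

FORM F_OUTER.
  " Statements executed directly here count towards F_OUTER's net time.
  PERFORM F_INNER.
  " F_INNER's runtime is included in F_OUTER's gross time,
  " but subtracted when the trace computes F_OUTER's net time.
ENDFORM.

FORM F_INNER.
  " Time spent here counts as both gross and net time of F_INNER.
ENDFORM.

The gross time of F_OUTER covers everything between its start and end, including F_INNER; its net time is what remains after the trace subtracts F_INNER's runtime.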

Now we will consider the ABAP (internal table processing) performance improvements.

Step 3.1) Let us analyse READ TABLE INT_BSEG

READ TABLE INT_BSEG INTO WA_BSEG WITH KEY AUGBL = WA_BKPF-BELNR
                                          GJAHR = WA_BKPF-GJAHR.

This READ statement is called 11,084 times and accounts for a high share of the response time in this report. The internal table INT_BKPF, which is looped over, contains 2,771 records, and inside the loop the READ on INT_BSEG is executed. We have the key fields BUKRS, BELNR and GJAHR in INT_BKPF, so the same fields can be passed (the field BUKRS is missing in the READ statement) and BINARY SEARCH can be used.

Note: If the internal table is a standard table, we have to sort it before using BINARY SEARCH.
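A minimal sketch of the tuned access, using the internal tables from this report and assuming INT_BSEG is a standard table (the same pattern applies to INT_KNA1 in step 3.2):

* Sort once, before the loop, so that BINARY SEARCH is valid.
SORT INT_BSEG BY AUGBL GJAHR.

LOOP AT INT_BKPF INTO WA_BKPF.
  READ TABLE INT_BSEG INTO WA_BSEG
       WITH KEY AUGBL = WA_BKPF-BELNR
                GJAHR = WA_BKPF-GJAHR
       BINARY SEARCH.
  IF SY-SUBRC = 0.
    " Process WA_BSEG here.
  ENDIF.
ENDLOOP.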

Step 3.2) Let us analyse READ TABLE INT_KNA1

READ TABLE INT_KNA1 INTO WA_KNA1 WITH KEY KUNNR = WA_BSAD-KUNNR.

This READ statement is likewise called 11,084 times and also takes a high share of the response time; the internal table INT_KNA1, which is read inside the loop, contains 800 records. In this case the company code is not available, but KUNNR is the key field of table KNA1, so sorting INT_KNA1 by KUNNR and adding BINARY SEARCH will give a better response time.

We can analyse per call or per modularization unit (the latter shows the set of calls inside a unit, e.g. PERFORM F_GET_DATA).

The information below will be useful when writing ABAP code (each item gives a guideline and, where applicable, its advantage):

01 Use FOR ALL ENTRIES: FOR ALL ENTRIES is an Open SQL construct. Use it instead of ranges (WHERE ... IN), and delete the duplicate records in the driver table before using it. Advantage: prevents the short dumps that overly large range tables can cause.
02 Use joins: A join hits the database only once. Advantage: faster results.
03 Reduction of selected columns: Avoid "SELECT *" and instead list the column names needed in the selection. Unnecessary retrieval of columns of type STRING is especially expensive. The result table columns should match the selection structure for optimal results. If the result table has more fields, but the fields to be retrieved have names identical to those of the table, you can use the addition CORRESPONDING FIELDS; this causes no additional runtime and at the same time improves the robustness of the code. Advantage: saves memory and time.
04 Optimized query: If at all possible, use all fields of an existing index in the database query. If this is not possible, at least include the first index fields. This restricts the sequential search to as few data records as possible. Advantage: reduces system load.
05 Reduction of line items: Use the WHERE clause to restrict the selection and to minimize the amount of data returned to the ABAP system. Use SELECT SINGLE or UP TO n ROWS whenever you only need some lines. Advantage: avoids unwanted records.
06 Existence checks: Do not use COUNT(*) to find out whether there are records for specific selection criteria. Use SELECT SINGLE instead, with a field included in the index accessed for the selection. Advantage: avoids unnecessary table accesses.
07 Aggregates: Aggregates like MIN or MAX are always resolved on the database server, so table buffering is circumvented. This can cause a high load on the DB system if it is accessed from several installations and application servers. Developers should therefore check how large the data volume for the aggregate will be and whether it makes sense, i.e. if it is not too large, to first load the data into an internal table and do the aggregation there. Note, however, that as far as is known today, HANA-based database queries have their strengths especially in aggregations, so for HANA their usage is explicitly encouraged. Example: to calculate the average amount of a large number (>100,000) of orders, it makes sense to have the DB system determine this aggregate; it then passes only one or a few values back to the application server instead of hundreds of thousands of items, which pays off especially if the calculation is done rarely (e.g. just once per day). If, on the other hand, you often need the sum of the positions of a single order, on all available application servers, doing the calculation in ABAP is usually the better option.
08 Updates: The statement UPDATE ... SET makes it possible to restrict the list of fields to be updated (instead of updating the complete record) and should be preferred where possible. Where available, perform updates through update function modules.
09 Number of DB-access executions: Each execution of an Open SQL statement comes with a certain overhead (parsing, checking against the statement buffer in the DBMS, etc.), so each statement should retrieve as much data as possible at once. If you need the data of 50 orders, do not read them in 50 individual SELECTs; retrieve them with one statement that supports the array fetch, recognizable by the additions INTO TABLE for SELECT and FROM TABLE for UPDATE. Avoid Open SQL statements within loops at all cost: with such constructs you pay the statement overhead on every loop iteration. Do not use MODIFY: within an application it should be clear whether data records are created or existing ones are updated, and the statement is extremely critical from a performance perspective. Even with the addition FROM TABLE, the database is accessed once for each line of the internal table; for each line an UPDATE is tried first and, if this is not successful, an INSERT is done. If you have many new data records to insert, this means not n database accesses but 2n, where n is the number of lines in the internal table.
10 Views/joins: Nested SELECTs and SELECT statements in loops should be avoided. As an alternative, make use of views, joins, or the addition FOR ALL ENTRIES. Keep the following in mind with FOR ALL ENTRIES: a) if the internal table referenced with FOR ALL ENTRIES is empty, all rows will be loaded; b) if the internal table contains duplicate entries, the related data records may be loaded twice from the database, so it makes sense to get rid of duplicates via DELETE ADJACENT DUPLICATES first.
11 Choose the table type:
1) Standard tables are suitable for data that are rarely or never searched by specific criteria, or when the amount of data is very small; if no searches are needed, it is not worth the cost of creating and maintaining the additional key structures the other table types require.
2) Sorted tables are suitable if the data often need to be searched via (partial) keys, but it cannot be guaranteed that the key fields are unambiguous. READs that use only some of the first key fields should be done only with this table type.
3) Hashed tables are perfectly suited to searching for unambiguous keys in dictionary-like constructs. If the uniqueness of the entries regarding the key fields can be guaranteed, and if the search always uses the complete key (all key fields are checked against a value), this table type is usually the best.
12 SORTED or HASHED access: If a table of type SORTED or HASHED is accessed, always do so with a suitable (partial) key. This means using WITH TABLE KEY for READ TABLE, and querying as many of the key fields as possible, in their proper sequence, with "=" in a LOOP AT ... WHERE construct. The internally built key structures are then used to find the corresponding entries as quickly as possible.
13 Mass operations on internal tables: As with DB accesses, there are single and mass operations for internal tables. Whenever possible, use the mass operations; they are performance-optimized compared with multiple single operations. For example, appending the lines of a partial result table to an overall result table should be done with APPEND LINES OF ... TO instead of a LOOP AT with single APPEND statements.
14 SORT command: When using the SORT command, always specify the needed sort fields. This improves the legibility of the code, and standard table types more often than not have no table key defined; without such a key, the complete table line is used as the key and all fields of the table are compared during sorting, which leads to a considerable loss of performance. If a table needs to be sorted by user and date, use SORT table BY USER DATE even if the table structure starts with these fields and the resulting sequence would be the same.
15 DELETE ADJACENT DUPLICATES: Before using DELETE ADJACENT DUPLICATES, always ensure that the table has been sorted by the same fields, so that duplicate entries are actually eliminated; as the command indicates, only adjacent table rows are compared. As with SORT, always specify the fields to be considered; otherwise the complete row is compared field by field, even if only two fields are relevant from a process perspective.
16 READ TABLE ... TRANSPORTING NO FIELDS: If you are only interested in the existence of a table row, and its content is not needed for subsequent processing, always use READ TABLE ... TRANSPORTING NO FIELDS.
17 Field symbols: Routinely make use of field symbols when accessing internal tables.
18 Passing parameters: Use as few parameters as possible and pass them by reference; use pass-by-value only where it is technically required. (Guidelines 11, 13, 16 and 17 are illustrated in the sketch after this list.)
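As a hedged illustration of guidelines 11, 13, 16 and 17, here is a minimal ABAP sketch; the tables LT_KNA1, LT_PARTIAL and LT_TOTAL and the customer number are invented for this example:

* Hashed table for unique full-key lookups (guidelines 11 and 12).
DATA: LT_KNA1    TYPE HASHED TABLE OF KNA1 WITH UNIQUE KEY KUNNR,
      LT_PARTIAL TYPE STANDARD TABLE OF KNA1,
      LT_TOTAL   TYPE STANDARD TABLE OF KNA1.

FIELD-SYMBOLS: <FS_KNA1> TYPE KNA1.

* Existence check without copying any data (guideline 16).
READ TABLE LT_KNA1 WITH TABLE KEY KUNNR = '0000100001'
     TRANSPORTING NO FIELDS.
IF SY-SUBRC = 0.
  " The customer exists; no work area was filled.
ENDIF.

* Mass operation instead of single APPENDs in a loop (guideline 13).
APPEND LINES OF LT_PARTIAL TO LT_TOTAL.

* Field symbol avoids copying each row into a work area (guideline 17).
LOOP AT LT_TOTAL ASSIGNING <FS_KNA1>.
  " <FS_KNA1> points directly at the table row.
ENDLOOP.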

Now we will consider the database performance improvements.

Note: If you would like to analyse a database performance issue without taking a trace, please read http://www.sappractical.com/home/blogd/sap-abap-performance-tuning/

Go to the trace analysis full-screen list, select the row, and click "SQL trace summary".


The report is already sorted by duration (microseconds).

The information below gives an overview of the SQL trace summary columns:

S.No Title Detailed explanation
01 Executions The total number of times the SQL statement was executed for a table
02 Redundant Number of redundant identical selects, i.e. how many database calls read the same records. The same SELECT may exist in different places with a similar WHERE clause; such calls can be avoided
03 Percentage identical statements Percentage of redundant identical selects
04 Duration Total execution time of the SQL statement in microseconds, over all executions
05 Records Total number of records read from the database table
06 Time/Exec Time per execution = Duration / Executions
07 Rec/Exec Records per execution = Records / Executions
08 Avg Time/R. Average processing time per record = Time per execution / Records per execution
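For example, with hypothetical numbers: a statement with 100 Executions, a Duration of 2,000,000 microseconds and 400 Records gives Time/Exec = 2,000,000 / 100 = 20,000 microseconds, Rec/Exec = 400 / 100 = 4, and Avg Time/R. = 20,000 / 4 = 5,000 microseconds.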

a) Check the Ident% column: the snapshot above shows that 75% of the SELECT queries are identical, which means the same records are selected multiple times; this can be avoided. Select the row and click 'Statement Details'.


Select the calling source code location and click "ABAP", which is marked in yellow in the snapshot above.


Note: The current program uses FOR ALL ENTRIES with a driver table built from BKPF, and the driver table contains duplicate lines. Before using FOR ALL ENTRIES, SORT the driver table and use DELETE ADJACENT DUPLICATES; this prevents the identical selects on the table.
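A minimal sketch of this pattern, assuming the driver table INT_BKPF and result table INT_BSEG from this report (the selected columns are illustrative):

* Remove duplicates from the driver table first.
SORT INT_BKPF BY BUKRS BELNR GJAHR.
DELETE ADJACENT DUPLICATES FROM INT_BKPF COMPARING BUKRS BELNR GJAHR.

* Guard against an empty driver table: FOR ALL ENTRIES with an empty
* table would select every row of BSEG.
IF INT_BKPF IS NOT INITIAL.
  SELECT BUKRS BELNR GJAHR BUZEI AUGBL
         FROM BSEG
         INTO CORRESPONDING FIELDS OF TABLE INT_BSEG
         FOR ALL ENTRIES IN INT_BKPF
         WHERE BUKRS = INT_BKPF-BUKRS
           AND BELNR = INT_BKPF-BELNR
           AND GJAHR = INT_BKPF-GJAHR.
ENDIF.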

Next, we need to check the execution plan of the SQL statement.


Note: The execution plan shows the table RFBLG even though we did not use this table; we selected from BSEG. BSEG is a cluster table, so the underlying database table is different.


Now let us go to the second table, KNA1; its execution plan is given below. The identical selects are still high, so we can follow the same steps described above.


The snapshot above shows that the CPU costs are low, but there is still a chance to improve the SQL execution time.



Note: Check the 'Last statistics' date; if it is older than about two months (depending on how frequently the table changes), ask the BASIS team to update the table statistics.

Actions taken after our trace analysis

a) Deleted duplicate rows from the table INT_BKPF

b) Copied INT_BSEG into INT_BSEG1 and deleted duplicate records based on KUNNR

c) Used the SORT statement on both INT_BSEG and INT_KNA1 (a prerequisite for BINARY SEARCH)

d) Moved the SELECT on J_1BBRANCH outside the loop and used a READ statement instead (see the sketch below)
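A hedged sketch of change d), assuming J_1BBRANCH is keyed by company code and branch; the parameter P_BUKRS, the work areas, and the BKPF branch field are assumptions for this illustration:

* Before: a SELECT on J_1BBRANCH inside the loop meant one DB access per row.
* After: one array fetch before the loop, then READs on an internal table.
DATA: INT_BRANCH TYPE SORTED TABLE OF J_1BBRANCH
                 WITH UNIQUE KEY BUKRS BRANCH,
      WA_BRANCH  TYPE J_1BBRANCH.

SELECT * FROM J_1BBRANCH
       INTO TABLE INT_BRANCH
       WHERE BUKRS = P_BUKRS.           " P_BUKRS: assumed selection parameter

LOOP AT INT_BKPF INTO WA_BKPF.
  READ TABLE INT_BRANCH INTO WA_BRANCH
       WITH TABLE KEY BUKRS  = WA_BKPF-BUKRS
                      BRANCH = WA_BKPF-BRNCH.
  IF SY-SUBRC = 0.
    " Use WA_BRANCH here instead of selecting inside the loop.
  ENDIF.
ENDLOOP.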

After all the changes, the trace was collected again; the trace comparison is given below.