2/4/2016 8:07:22 PM

This blog post discusses Oracle database performance problems, with SAP documentation as the main source. It covers frequently asked questions concerning the ST03N, ST04, SM51, SM50, and SM66 transactions.

Question: How can I determine whether the general database performance can be optimized?

Answer: You can use transaction ST03 or ST03N to determine the extent to which accesses to the Oracle database contribute to the overall response time. This share is shown in the "Database time" column and should not exceed 40% of the overall response time.

Furthermore, the value for "Time/User call" can be determined in transaction ST04. Experience indicates that values of more than 10 ms show room for improvement. In individual cases, however, the value can be much higher because special idle events are included, and this limits the relevance of this figure.

The ratio of "Busy wait time" to "CPU time" in ST04 is also an indicator of optimization potential. A ratio of 60:40 generally indicates a well-tuned system. Significantly higher values (such as 80:20) indicate that system performance can be improved by wait event tuning. If the CPU share is significantly higher than 40%, a CPU bottleneck may be responsible. In this case, check the CPU utilization of the database server as described below.
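As a hedged sketch (using only columns and statistic names documented for Oracle's V$ views), the raw CPU and wait figures behind this ratio can be read directly from the database; idle events such as "SQL*Net message from client" should be excluded from the wait total:

```sql
-- Cumulative CPU time consumed by sessions, in 1/100 s
SELECT VALUE AS cpu_time_cs
FROM   V$SYSSTAT
WHERE  NAME = 'CPU used by this session';

-- Cumulative wait time per event, in 1/100 s; idle events must be filtered out manually
SELECT EVENT, TOTAL_WAITS, TIME_WAITED
FROM   V$SYSTEM_EVENT
ORDER  BY TIME_WAITED DESC;
```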

Question: How can I tell how much time a particular transaction spends on the database, and on which activity?

Answer: You can derive the database time of a transaction step from the corresponding statistical data record in transaction STAD (formerly: STAT).

Note that for extremely long-running programs, overflows may occur in the components of the database time (for values > 2^32 microseconds, roughly 71 minutes), and the subsequent time is then wrongly recorded as processing time.

To determine which database accesses are primarily responsible for the database response time, you can set the R/3 parameter stat/tabrec to a value greater than 0 (the number of tables to be logged).

If this parameter is used in connection with batch jobs, the additional statistical data is not saved into the statistical data records of the actual job, but instead into a statistical data record of the sequentially-logged RSBTCRTE report in Release 6.40 and lower releases.

Question: How can performance data be accessed in Oracle?

Answer: Oracle allows access to performance data through special V$ views, including:

V$SYSTEM_EVENT: Totals of all wait events

V$SYSSTAT: Statistics values for the entire system

V$SESSION_EVENT: Totals of the wait events per session

V$SESSION_WAIT: Current wait event per session

V$SESSTAT: Statistics values per session

V$FILESTAT: File system accesses

V$LATCH: Latch information

V$WAITSTAT: Information about Buffer Busy Waits

Oracle also provides a collection of scripts (Statspack) to collect and evaluate performance-relevant data.
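As an illustration, the views listed above can be queried with ordinary SELECT statements (a sketch; column names as documented by Oracle):

```sql
-- Distribution of buffer busy waits by block class
SELECT *
FROM   V$WAITSTAT
ORDER  BY TIME DESC;

-- Read/write activity and times per data file
SELECT FILE#, PHYRDS, PHYWRTS, READTIM, WRITETIM
FROM   V$FILESTAT;
```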

Question: How can Oracle performance data be accessed by R/3?

Answer: In R/3, you can use transaction "ST04 -> Detail Analysis Menu -> Display V$ Values" or the report RSORAVDV to directly access the Oracle V$ views. In addition, other evaluation options are available within ST04 or through special reports (such as RSORAVSE for the history of V$SYSTEM_EVENT).

If the new transaction ST04N is already available, you can use the associated report RSORADJV to execute any SQL statement on DBA and V$ views (including joins and functions). Note, however, that an alias of your choice must be added to every function used.
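A hypothetical illustration of this requirement (the view queried here is arbitrary): RSORADJV accepts a function call only once it carries an alias, here CNT:

```sql
-- Without the alias CNT, the function call would be rejected
SELECT COUNT(*) CNT
FROM   V$SESSION;
```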


Question: In which time unit is performance data usually specified in Oracle and R/3?

Answer: Oracle usually stores times internally in 1/100 seconds (that is, in hundredths of a second). In R/3 transactions such as ST04, on the other hand, times are generally specified in 1/1000 seconds (milliseconds). In more recent Oracle releases, more and more times are stated in 1/1,000,000 seconds (that is, microseconds).

Question: How do I proceed if I come across lengthy database accesses in SM50 or SM66?

Answer: First of all, determine the Oracle session that belongs to the work process. To do this, note the PID from SM50 or SM66. In transaction ST04, choose "Detail analysis menu -> Oracle session" and look for the row containing the PID from SM50/SM66 in the "Client PID" column. Caution: If the work process has built up secondary connections to the same database, several Oracle sessions may exist; you can assign them to the work process using the "Client PID".
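This lookup can also be done with a direct query on V$SESSION (a sketch; the PROCESS column holds the operating system PID of the client, i.e. the work process):

```sql
-- :pid is the process ID noted in SM50/SM66
SELECT SID, SERIAL#, STATUS, PROGRAM
FROM   V$SESSION
WHERE  PROCESS = :pid;
```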

Question: What is the reason for significant differences in run-time when identical database accesses are executed?

Answer: In many cases, significant differences in run-time can be explained by one of the following factors:

The execution of a statement for the first time often takes longer than subsequent executions, because numerous blocks have to be read from the hard disk. In subsequent accesses, the blocks are already present in the memory in the SGA, which reduces the run-time considerably.

If Bind Value Peeking is active because _OPTIM_PEEK_USER_BINDS is not set to FALSE as recommended by SAP, this feature may cause the database to behave unpredictably. Depending on the access path chosen, the run-time may differ greatly as a result.

Changes to the CBO statistics (for example, as part of a BRCONNECT statistics run) may also lead to varying run-times.

More precise information can only be obtained on the basis of a detailed SQL analysis.

Question: What causes poor performance of INSERT statements?

Answer: While SELECT, UPDATE, and DELETE statements are executed on the basis of execution plans, INSERT statements work according to a completely different mechanism based on free lists. Therefore, the performance of INSERT statements cannot be improved by SQL tuning. Instead, you should analyze in detail the wait events that occur.

Among other things, the following factors can trigger poor performance for INSERT statements:

If extents that are too small are used for the object in question in dictionary-managed tablespaces, performance problems may occur due to space transactions and ST enqueues. See also Note 745639.

Caution: Even without ST enqueue wait situations, the performance of INSERTs can be seriously affected by space transactions. It is time-intensive not only to wait for the ST enqueue, but also to carry out the space transaction itself, which in certain cases can last longer than one second. In this case, a number of LMODE=6 entries of type 'ST' can be found in V$LOCK, and the "recursive cpu usage" (STATISTIC#=8) for this session in V$SESSTAT is relatively high.

If parallel INSERTs wait for a long time on "buffer busy waits", increasing the number of free lists may help.

Bulk INSERTs ("INSERT ... FROM TABLE" in ABAP) may insert a large number of data records per INSERT, depending on the size of the internal table, and this can cause longer run-times. You can use an SQL trace to determine how many records are inserted per INSERT. Note, however, that if the program receives return code 1 (ORA-00001: unique constraint violated), some data records were processed unsuccessfully because an identical key combination already exists. If most INSERTs fail with ORA-00001, the INSERT run-time may be very long even though only a few data records are actually inserted.

A large number of indexes or index fragmentation can also cause the performance to deteriorate because more blocks have to be read and changed in the SGA.

If you use CBO statistics for the Oracle DDIC, incorrect CBO decisions relating to recursive SQL statements can extend the INSERT run-time considerably.

INSERT operations using the traditional PGA administration can sometimes show high CPU consumption together with a high PGA allocation.
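The ST enqueue situation described above can be checked with queries like the following sketch (V$LOCK, V$SESSTAT, and V$STATNAME are standard views; the statistic is looked up by name rather than relying on STATISTIC#=8, which is release-dependent):

```sql
-- Sessions holding or requesting the ST enqueue
SELECT SID, TYPE, LMODE, REQUEST
FROM   V$LOCK
WHERE  TYPE = 'ST';

-- Recursive CPU usage of a suspect session (:sid)
SELECT S.VALUE
FROM   V$SESSTAT S, V$STATNAME N
WHERE  S.STATISTIC# = N.STATISTIC#
AND    N.NAME = 'recursive cpu usage'
AND    S.SID = :sid;
```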

Question: What causes poor performance of COMMIT statements?

Answer: Increased COMMIT times usually occur in direct connection with increased wait times on the Oracle wait event "log file sync". To optimize them, you must improve the write I/O to the online redo logs.
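As a sketch, the relevant wait events can be checked in V$SYSTEM_EVENT; if "log file parallel write" is also slow, the log writer's own write I/O is the bottleneck:

```sql
-- TIME_WAITED is in 1/100 s; AVERAGE_WAIT = TIME_WAITED / TOTAL_WAITS
SELECT EVENT, TOTAL_WAITS, TIME_WAITED, AVERAGE_WAIT
FROM   V$SYSTEM_EVENT
WHERE  EVENT IN ('log file sync', 'log file parallel write');
```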

Question: What do "Sequential read" and "Direct read" mean in SM50/SM66?

Answer: "Direct read" refers to accesses in which at most one row is returned by the database. This includes fully qualified primary key accesses and aggregate queries such as SELECT COUNT.

A "Sequential read" refers to all other read accesses to the database, which may return more than one row. The term "Sequential read" is misleading, since it implies that blocks are always read sequentially (as in a full table scan). In reality, however, an index is used in most cases.

"Direct read" and "Sequential read" in SM50 are generally of little use for a performance analysis. They only tell you that the work processes are busy processing one of the queries directed at the database interface. You cannot determine from them whether the wait time is caused by physical I/O, database locks, the network, the Oracle client, the DBSL, the DBI, or other reasons. For a more precise analysis, you can either use a buffer trace in ST05 or create a work process trace of the DBI and DBSL (SM50).
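On the Oracle side, the current wait reason of the session found via the client PID can be checked directly (a sketch using the standard V$SESSION_WAIT view):

```sql
-- What is session :sid waiting for right now?
SELECT SID, EVENT, P1, P2, P3, WAIT_TIME, SECONDS_IN_WAIT
FROM   V$SESSION_WAIT
WHERE  SID = :sid;
```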

Question: How is the data buffer quality determined?

Answer: The data buffer quality (or "hit ratio") can be determined in various ways:

a) Determining the ratio of physical reads to logical reads on the basis of V$SYSSTAT:

(1 - "physical reads" / "session logical reads") * 100

b) Determining the ratio of physical reads to logical reads on the basis of V$SYSSTAT, in which direct reads are not considered, since direct reads bypass the data buffer:

(1 - ("physical reads" - "physical reads direct") / ("session logical reads" - "physical reads direct")) * 100

c) Determining the ratio of physical reads to logical reads on the basis of V$BUFFER_POOL_STATISTICS, where direct reads are generally not included in this view:

(1 - PHYSICAL_READS / (DB_BLOCK_GETS + CONSISTENT_GETS)) * 100

Transaction ST04 uses method a), while transaction ST04N uses method b). When there are many direct reads (parallel query, LOBs, PSAPTEMP accesses, ...), the hit ratio reported by ST04N can be significantly better than the ST04 hit ratio.

Method c) delivers similar results to method b), although the hit ratio determined on the basis of V$BUFFER_POOL_STATISTICS can be lower, since in many situations fewer logical reads are logged there than in V$SYSSTAT.
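Methods a) and b) can be sketched as SQL directly on V$SYSSTAT (statistic names as documented by Oracle):

```sql
-- Method a): all reads included
SELECT (1 - phy.VALUE / lr.VALUE) * 100 AS hit_ratio_a
FROM   (SELECT VALUE FROM V$SYSSTAT WHERE NAME = 'physical reads') phy,
       (SELECT VALUE FROM V$SYSSTAT WHERE NAME = 'session logical reads') lr;

-- Method b): direct reads excluded on both sides of the ratio
SELECT (1 - (phy.VALUE - dir.VALUE) / (lr.VALUE - dir.VALUE)) * 100 AS hit_ratio_b
FROM   (SELECT VALUE FROM V$SYSSTAT WHERE NAME = 'physical reads') phy,
       (SELECT VALUE FROM V$SYSSTAT WHERE NAME = 'physical reads direct') dir,
       (SELECT VALUE FROM V$SYSSTAT WHERE NAME = 'session logical reads') lr;
```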
