oacore / forms stdout eating up your CPU and I/O? It’s a bug!!

We’re going live in a few weeks, and my performance tests showed a disturbing trend: the oacore processes were consuming about 40% CPU between them and churning out 8 GB of stdout files per managed server in a few hours.

Opening an SR gave me a method to create a filter that would filter out these messages, but I would then have had to use the EBS scripts to start and stop the managed servers. And I liked the WebLogic control console 🙂

After a bit of further digging, I chanced upon Oracle Doc ID 1981206.1, which points to bug 18345006.

JBO DEBUG LOGGING ON WITHOUT -D FLAG, NOTICE LEVEL AND PROD MODE=TRUE

After applying the patch, JBO messages are no longer being thrown into the stdout, and the CPU and I/O are back to normal.


Monitoring Long Operations in Oracle DB using V$SESSION_LONGOPS

Most long-running SQLs can be monitored using the Oracle view v$session_longops. This view is not meant to be queried standalone. Rather, you should identify the session that you want to monitor using v$session and then use its SID as a filter. I also add the condition time_remaining > 0 so that only the active operations are listed.
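To find the SID in the first place, a quick look at v$session does the job. A minimal sketch (the USERNAME filter is just an illustration – use whatever identifies your session):

SELECT sid, serial#, username, status, program
  FROM v$session
 WHERE username = 'APPS'   -- hypothetical filter; adjust to your session
   AND status   = 'ACTIVE';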

SQL to Monitor v$session_longops

SELECT SID,
       ROUND(TIME_REMAINING / 60, 2)  "time remaining(min)",
       ROUND(ELAPSED_SECONDS / 60, 2) "elapsed_mins",
       SOFAR,
       TOTALWORK - SOFAR              "Work Remaining",
       SQL_PLAN_OPTIONS || ' ' || SQL_PLAN_OPERATION "PLAN OPERATION",
       MESSAGE
  FROM V$SESSION_LONGOPS
 WHERE TIME_REMAINING > 0;

There are a number of criteria that make a SQL appear in V$SESSION_LONGOPS, and I am not sure of all of them. In the case of full table scans, the scan has to run for at least 6 seconds and the table scanned should occupy at least 10,000 database blocks. The view also lists long operations in hash joins, sorts, sort outputs, gather table stats and gather table index stats. The 6-second criterion appears to apply across all of these, while the 10K-block requirement is limited to table scans alone.

Note that the time_remaining column in the view should never be taken as the “actual” time remaining. In most cases, especially those involving hash joins, the time grows exponentially rather than linearly. It depends on the hash_area_size (sized automatically by default in 11g) and the number of blocks to be processed. And if the operation starts spilling to TEMP for the sorts and you know there is a large amount of data, you should probably start tweaking the SQL 🙂

Exceptions:

When executing DML (DELETE and UPDATE), in addition to modifying the rows in the table, Oracle has to update the indexes on the table. The time spent doing the index modifications is not reflected properly in v$session_longops, as the view looks at table blocks only. The estimate is likewise based on table blocks and hence will be lower than the actual time required.

Custom Applications:

DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS can be used to add this facility to custom applications. Once it is set, Oracle will display the information about that session in the view. For it to work effectively, you must know the total amount of work you are planning to do and how much work you have done so far.

Syntax

DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS (
   rindex      IN OUT BINARY_INTEGER,
   slno        IN OUT BINARY_INTEGER,
   op_name     IN     VARCHAR2       DEFAULT NULL,
   target      IN     BINARY_INTEGER DEFAULT 0,
   context     IN     BINARY_INTEGER DEFAULT 0,
   sofar       IN     NUMBER         DEFAULT 0,
   totalwork   IN     NUMBER         DEFAULT 0,
   target_desc IN     VARCHAR2       DEFAULT 'unknown target',
   units       IN     VARCHAR2       DEFAULT NULL)  ;

set_session_longops_nohint constant BINARY_INTEGER := -1;

The syntax above is referenced from the Oracle documentation.
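Here is a minimal sketch of how a custom batch job might use it (the operation name, target description and row counts are hypothetical):

DECLARE
   l_rindex    BINARY_INTEGER := DBMS_APPLICATION_INFO.set_session_longops_nohint;
   l_slno      BINARY_INTEGER;
   l_totalwork NUMBER := 1000;   -- total units of work, known up front
BEGIN
   FOR i IN 1 .. l_totalwork LOOP
      -- process one unit of work here, then report progress
      DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS (
         rindex      => l_rindex,
         slno        => l_slno,
         op_name     => 'Custom batch load',      -- hypothetical operation name
         sofar       => i,
         totalwork   => l_totalwork,
         target_desc => 'XX_CUSTOM_TABLE',        -- hypothetical target
         units       => 'rows');
   END LOOP;
END;
/

While this runs, the session shows up in v$session_longops with sofar/totalwork populated, so the monitoring query above works on it too.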

Oracle Global Optimizer Hints

Table hints refer to the tables in the statement block in which the hint is specified. If you have a view in your statement block and you try to hint the base tables using a table hint, it won’t work.

Here we have two options:

1. Embed the hint in the view definition

2. Use GLOBAL hints

Oracle recommends global hints as the preferred method to hint base tables inside views. Any Oracle table hint can be transformed into a global hint by using a bit of extra syntax for the table name.

Consider the below example:

CREATE OR REPLACE VIEW v1 AS
SELECT *
  FROM students
 WHERE student_no < 200;

CREATE OR REPLACE VIEW v2 AS
SELECT v1.student_no,
       v1.student_name,
       lecturers.lecturer_name,
       lecturers.subject,
       lecturers.subject_id
  FROM v1, lecturers
 WHERE v1.subject = lecturers.subject;

SELECT /*+ INDEX(V2.V1.STUDENTS STUDENTS_N01) */ *
  FROM V2
 WHERE subject_id = 2;

The above SELECT has a global hint which specifies an INDEX scan for the STUDENTS table, which is referenced in view V1, which in turn is referenced in view V2. If I had used the plain table hint

/*+ INDEX(students students_n01)*/,

the optimizer would have ignored it, as the STUDENTS table is not present in the FROM clause of the statement block.

Note:

1. In case of a UNION or UNION ALL in the view referenced by a GLOBAL HINT, the hint is applied to the first statement branch that contains the hinted table.

2. The global hint syntax also applies to nonmergeable views.

Gather Schema Statistics on EBS 11.5.8 with Gather Auto

Recently a client of ours who is on EBS 11.5.8 and 9i came up with a request – to run Gather Schema Statistics with the Gather Auto option “without setting statistics_level to typical”. Yes, you heard me right.

I met with many roadblocks along the way, and now, finally, I have overcome them all. Below are some very brief pointers on how I went about it. The reason I am posting this is that I see many hits coming from Google with the search term “EBS 11.5.8 Gather Stats” in my hit counter. So there are people out there looking for similar solutions.

I analyzed the FND_STATS package, which is called by the program “Gather Schema Statistics” (GSS). I found that FND_STATS is actually a wrapper around the DBMS_STATS package, the database package that enables statistics gathering. FND_STATS adds extra validation and a few important functionalities on top of DBMS_STATS. As per Oracle, we require the parameter statistics_level to be set to “typical” in order to use GATHER AUTO.

Analyzing FND_STATS showed that whenever Gather Schema Statistics is run with the GATHER AUTO parameter, the package first gets a list of tables which do not have MONITORING enabled and then proceeds to gather stats for all tables in this list at an estimate percentage of 10 and option GATHER. It then executes an ALTER TABLE … MONITORING statement for each such table. This means that from that point on, data about the INSERT/UPDATE/DELETE activity on the table will be collected in all_tab_modifications.

What this means for us: we can run GSS with the GATHER AUTO option and, on the first run, it will do two things:

a. Gather stats at 10% for ALL tables satisfying certain criteria.

b. Enable MONITORING for all the tables in the above step.

I tried running the normal GSS with GATHER AUTO, and some requests completed while others errored out. Upon analyzing the errored schemas, it was found that they failed because the session could not get a lock with the NOWAIT clause – which meant that certain tables in those schemas were being accessed and locked by another process at the time.

I went back to the FND_STATS wrapper and deduced the logic it uses to get the list of tables and enable MONITORING. I then wrote a script which would do this for me schema by schema. I was able to update all tables except those in the APPLSYS schema. This is because once the EBS application is up, it continuously accesses certain tables like FND_CONCURRENT_PROCESSES, FND_CONCURRENT_QUEUES, FND_CONCURRENT_REQUESTS etc. I ran this script with the application (incl. CCM) down, and all tables were updated.
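A minimal sketch of that kind of script (this is not the exact FND_STATS logic – just the same idea, with the schema name as a substitution variable):

-- Generate ALTER TABLE ... MONITORING statements for tables
-- in a schema that do not have monitoring enabled yet.
SELECT 'ALTER TABLE ' || owner || '.' || table_name || ' MONITORING;'
  FROM dba_tables
 WHERE owner      = UPPER('&schema_name')
   AND monitoring = 'NO'
   AND temporary  = 'N'
   AND (iot_type IS NULL OR iot_type <> 'IOT_OVERFLOW');

Spool the output and run it as the next step.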

I then ran the request set with the LIST AUTO parameter (this lists all the tables for which stats would be gathered if we used GATHER AUTO). To my surprise, it did not list a single table, whereas it should have listed quite a number. A search on Metalink pointed me to a bug which causes Oracle to ignore the default value of 10 for the parameter “modifications threshold”. This parameter determines the cut-off percentage change in a table after which stats are gathered for it. There are no patches as of now; however, a workaround exists and I implemented it (you just have to enter the value manually instead of leaving it blank for Oracle to default; neat, huh?). I ran the GSS request set with LIST AUTO again and it picked up many tables. Then I submitted GSS with the GATHER AUTO option. The program completed successfully.

Conclusion:

Here I have run GSS with GATHER AUTO at the default modifications threshold of 10%. In different environments we might need different values for this parameter for sets of tables which are very dynamic. Such a list can be deduced by monitoring the performance of the application over a period of time.

This post is meant as a pointer for people out there who are trying to achieve something similar. Also a note: FND_STATS can be customized to reflect a customer’s own needs. Though this is not supported by Oracle, there is no harm in doing it on an already desupported version. Let me know if any of you are interested.

All you need to know about Execution Plans – I

This article will cover all that you need to know about execution plans in order to do SQL tuning. As this is a vast topic (believe me), I will cover this in a series of articles.

Whenever I have a SQL which is running long, the first step I take is to obtain an execution plan. An execution plan lists the steps taken by Oracle to execute a SQL statement. It gives you details about ordering, access methods, join methods, data operations, optimization and cardinalities. This is one torch which will show you the path 95% of the time.

There are 3 main ways in which one can get an execution plan for a SQL. They are given below:

  1. Using “explain plan” statement.
  2. Using the oracle provided dynamic performance views.
  3. Enable Trace / Autotrace.

Explain Plan statement

The Explain Plan statement shows the execution plan chosen by Oracle for SQL statements (SELECT, UPDATE, INSERT or DELETE).

Pre-reqs

  1. You should have a plan table which will hold the output of this statement. You can create one using the script $ORACLE_HOME/rdbms/admin/utlxplan.sql. If you wish to create separate plan tables for different users, you can edit this script and create multiple tables.
  2. You should be able to execute the SQL for which you are trying to obtain the explain plan. If your SQL contains views, then you must have sufficient privileges on the underlying tables and views. In many cases you may be able to execute a select on a view but not an explain plan; this is because your user id does not have privileges on the underlying tables.

Syntax

The syntax for this is shown below (ref: Oracle documentation):

EXPLAIN PLAN
    [ SET STATEMENT_ID = 'text' ]
    [ INTO [schema.]table ]
FOR statement;

Eg:

  1. Using all the above clauses:
EXPLAIN PLAN
SET STATEMENT_ID = 'plan1'
INTO my_plan_table
FOR
SELECT employee_name, department, salary FROM employee WHERE salary > 10000;
  2. You may use the default plan table in Oracle; in that case you can do away with the INTO clause.
  3. SET STATEMENT_ID is required when many people are using the same plan table. The statement IDs will help you differentiate your plan from others.

In my case, I am the only guy working on tuning and I use

EXPLAIN PLAN FOR
SELECT employee_name, department, salary FROM employee WHERE salary > 10000;

Displaying the plan

  1. To retrieve the plan from the plan table, you can use a simple query like:
Select * from plan_table where statement_id = 'plan1';
  2. You may also make use of the display function in the dbms_xplan package to display the plan. This will retrieve the latest plan in the plan table:
Select * from table(dbms_xplan.display);
  3. You may also construct a SQL to retrieve rows from your custom plan table. I use the one shown below:
SELECT     ID, parent_id,
LPAD (' ', LEVEL - 1) || operation || ' ' || options operation,
object_name, NVL (BYTES, 0) BYTES,
   CASE
        WHEN cardinality > 1000000
        THEN    TO_CHAR (TRUNC (cardinality / 1000000))|| 'M'
        WHEN cardinality > 1000
        THEN TO_CHAR (TRUNC (cardinality / 1000)) || 'K'
        ELSE cardinality || ''
    END no_of_rows,
   CASE
        WHEN COST > 1000000
        THEN TO_CHAR (TRUNC (COST / 1000000)) || 'M'
        WHEN COST > 1000
        THEN TO_CHAR (TRUNC (COST / 1000)) || 'K'   
        ELSE COST || ''
    END COST
   FROM table_plan_8508
  WHERE STATEMENT_ID = 'plan1'
  START WITH ID = 0 AND STATEMENT_ID = 'plan1'
CONNECT BY PRIOR ID = parent_id AND STATEMENT_ID = 'plan1';

Here, table_plan_8508 is my custom plan table and plan1 is the statement_id of the execution plan I require. This query can also be used against performance views with appropriate modifications.

Using the oracle provided dynamic performance views

We can make use of the oracle views v$sql_plan and v$sql_plan_statistics to determine the execution plan. Of these, v$sql_plan is almost the same as the plan_table mentioned above. The only difference would be a few extra columns for cursor identification etc. v$sql_plan_statistics view gives us the execution stats (time, # of rows etc.) for each operation in the v$sql_plan. You can write a query joining these two views to retrieve the runtime execution plan of a query.

There is another view, v$sql_plan_statistics_all, which combines the information in the above two views. I’d rather query this view than write a complex join. Note that the initialization parameter statistics_level needs to be set to “ALL” (or the query hinted with /*+ GATHER_PLAN_STATISTICS */) for the execution statistics to be gathered. Also note that collecting them causes an overhead which cannot be ignored. Given below is a sample query which I use to get the plan from the dynamic performance views.

 SELECT   ID, parent_id,
          LPAD (' ', LEVEL - 1) || operation || ' ' || options operation,
          object_name, NVL (BYTES, 0) BYTES,
   CASE
          WHEN cardinality > 1000000
          THEN    TO_CHAR (TRUNC (cardinality / 1000000))|| 'M'
          WHEN cardinality > 1000
          THEN TO_CHAR (TRUNC (cardinality / 1000)) || 'K'
          ELSE cardinality || ''
    END no_of_rows,
   CASE
          WHEN COST > 1000000
          THEN TO_CHAR (TRUNC (COST / 1000000)) || 'M'
          WHEN COST > 1000
          THEN TO_CHAR (TRUNC (COST / 1000)) || 'K'
          ELSE COST || ''
    END COST
   FROM v$sql_plan
  WHERE hash_value  = &&hash_value
  START WITH ID     = 0 AND hash_value         = &&hash_value
CONNECT BY PRIOR ID = parent_id AND hash_value = &&hash_value;
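On 10g and above, an alternative to rolling your own query is the dbms_xplan.display_cursor function, which formats the runtime plan straight from the library cache. A sketch, assuming you have the sql_id:

-- 'ALLSTATS LAST' shows actual rows/time for the last execution,
-- provided the execution statistics were collected (see above).
SELECT *
  FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR ('&sql_id', NULL, 'ALLSTATS LAST'));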

Using Trace

There are many types of traces. However, from the performance point of view, the most important ones are those associated with event 10046 and event 10053.

Event 10046

This is the one I use extensively when any of the EBS programs is running long. It can also be used to capture what is going on in a SQL*Plus (or TOAD etc.) session. This may also be the only trace officially supported by Oracle.

Syntax

1) If you want to trace the current session, you may use:

ALTER SESSION SET events '10046 trace name context forever, level 12';

ALTER SESSION SET events '10046 trace name context off';

(Level 12 captures bind values and wait events in addition to the basic trace; use level 1 if you only need the SQL and timings.)

2) If you want to trace a concurrent program session OR another session which already exists, then use:

EXEC DBMS_SYSTEM.SET_EV(sid, serial#, 10046, 12, '');
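For example, a sketch with illustrative values (look up the actual sid and serial# in v$session first):

EXEC DBMS_SYSTEM.SET_EV(123, 4567, 10046, 12, '');  -- tracing on at level 12 (binds + waits)
-- ... let the program run through the part you want traced ...
EXEC DBMS_SYSTEM.SET_EV(123, 4567, 10046, 0, '');   -- level 0 switches tracing off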

Event 10053

The event 10053 trace captures the optimizer’s decisions. Please note that this should be used as a last resort – basically because the data contained in the trace can be very difficult to comprehend in most cases. Have you ever wondered why the optimizer chose a particular index and not another? Well, this trace will give you an answer – or at least guide you to it.

The trace file gives you information like the peeked values of the binds in the SQL, initialization parameter values, stored statistics, query blocks, single table access paths, sanity checks, join orders etc. Explaining how to interpret it is beyond the scope of this article; I plan to cover it later in the series. Those in a hurry can refer to chapter 12 of Jonathan Lewis’ book “Cost-Based Oracle Fundamentals”. It’s a great read.

Syntax

ALTER SESSION SET events '10053 trace name context forever, level 1';

ALTER SESSION SET events '10053 trace name context off';
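In either case, the trace file lands in the directory pointed to by user_dump_dest:

SELECT value
  FROM v$parameter
 WHERE name = 'user_dump_dest';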

The rest will come in part II – how to make use of the execution plan that you just obtained. Stay tuned.

Gather Schema Stats in Oracle EBS 11.5.10

Oracle E-Business Suite (EBS) has a concurrent program called “GATHER_SCHEMA_STATISTICS” which is used to gather stats for the smooth functioning of the CBO (Cost Based Optimizer). It uses FND_STATS, which is basically a wrapper around DBMS_STATS. In this article, I will explain the basic working of FND_STATS and the different options available. I have omitted the explanation of the input parameters which this procedure/program takes and have instead concentrated on the logic it uses.

The details below are taken from an 11.5.10 EBS instance, but AFAIK they apply equally to 11.5.9.

_FND_STATS.GATHER_SCHEMA_STATS_

If options = GATHER

Select list of tables which satisfy the following criteria:

1) dba_tables.iot_type <> 'IOT_OVERFLOW' or iot_type is null

2) Table is not a temporary table

3) It is not an excluded table (determined from fnd_stats_hist and fnd_exclude_table_stats)

It then calls the FND_STATS.GATHER_TABLE_STATS. The functioning of this procedure is described towards the end of this write-up.

If options = GATHER AUTO

1) For db versions above 8.1, flush the database monitoring info from the SGA down to the dba_tab_modifications view. Otherwise use whatever data is already available in dba_tab_modifications.

2) Select a list of tables satisfying the following criteria:

a. dba_tab_modifications.table_owner = upper(schema name)

b. (nvl(dtm.inserts,0) + nvl(dtm.updates,0) + nvl(dtm.deletes,0)) > (modpercent * nvl(dt.num_rows,0)) / 100

where modpercent (default value = 10) is the threshold level of staleness after which a table is picked for stats gathering. This value can be passed in from the concurrent program. This is how the wrapper can control the threshold level, as opposed to the DBMS_STATS package, where the threshold is fixed at 10%. (A sketch of this staleness test appears after this list.)

The logic of the select query is such that there can be some redundant stats gathering – once for the table and again for a partition, when both have stale data. A note in the code says that such a scenario will be very rare and that the code will be revisited if needed.

3) It submits FND_STATS.GATHER_TABLE_STATS for each table and partition in the list from step 2.

4) GATHER AUTO includes GATHER EMPTY, so stats are also gathered on any unanalyzed tables and/or indexes. This is determined from dba_tables.last_analyzed IS NULL; in the case of indexes, it is dba_indexes.last_analyzed IS NULL.

5) Check if there are any tables in the schema which do not have monitoring enabled. If so, gather stats for them at 10% and enable monitoring on them, so that we have data for them in dba_tab_modifications next time.
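The staleness test in step 2 can be approximated with a query along these lines (a sketch, not the exact FND_STATS SQL; the literal 10 stands in for the modifications threshold):

-- Tables whose DML activity exceeds 10% of their last known row count
SELECT dtm.table_owner, dtm.table_name
  FROM dba_tab_modifications dtm, dba_tables dt
 WHERE dtm.table_owner = dt.owner
   AND dtm.table_name  = dt.table_name
   AND NVL(dtm.inserts,0) + NVL(dtm.updates,0) + NVL(dtm.deletes,0)
       > (10 * NVL(dt.num_rows,0)) / 100;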

For options = LIST AUTO

It writes the list of tables in step 2 to the log file

For options = GATHER EMPTY

It gathers stats for any unanalyzed tables and/or indexes which are not excluded.

For options = LIST EMPTY

It writes the list of any unanalyzed tables and/or indexes which are not excluded into the log file

If the schema_name = ALL

It will recursively submit FND_STATS.GATHER_SCHEMA_STATS for all schemas satisfying the below condition:

1) The schema should have a valid entry in the fnd_product_installations table. This means people migrating from 11.5.8 or 11.5.9 to 11.5.10 will have to re-register their custom applications in fnd_product_installations for them to be picked up by Gather Schema Statistics with schema name = ALL.

If schema_name = SYS

It will error out saying – Gathering statistics on the SYS schema using FND_STATS is not allowed. Please use the DBMS_STATS package to gather stats on SYS objects.

_FND_STATS.GATHER_TABLE_STATS_

1) For better performance, tables smaller than small_tab_for_par_thold are gathered serially.

2) For better stats, tables smaller than small_tab_for_est_thold should be gathered at 100%.

3) If db_versn >= 9.2, build up the method_opt if histogram columns are present:

method := method || ' ' || x.column_name || ' SIZE ' || x.hsize – this will give something like FOR ALL COLUMNS SIZE 1

4) If no histogram cols then nullify method, else call it with histogram cols.

5) If version is pre 9.2, use the old method of calling twice.

There are a few other checks which do not have any significance in the current context. The procedure then calls another procedure XXX, a private package procedure, which in turn calls DBMS_STATS.GATHER_TABLE_STATS dynamically depending upon the version of the database. For 8i, dbms_stats is called as before; for higher versions, it is called with the no_invalidate flag.
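For reference, here is a sketch of the kind of call FND_STATS ends up issuing (the schema, table and parameter values are purely illustrative, not what FND_STATS hard-codes):

BEGIN
   DBMS_STATS.GATHER_TABLE_STATS (
      ownname          => 'APPLSYS',                  -- illustrative schema
      tabname          => 'FND_CONCURRENT_REQUESTS',  -- illustrative table
      estimate_percent => 10,
      method_opt       => 'FOR ALL COLUMNS SIZE 1',
      cascade          => TRUE,
      no_invalidate    => FALSE);                     -- the flag mentioned above
END;
/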

So, in short, the Gather Schema Statistics concurrent program internally submits DBMS_STATS.GATHER_TABLE_STATS for each eligible table in that particular schema.