DBA Sensation

June 9, 2010

Something about checkpoints

Filed under: 1. Oracle, [System Performance tuning] — zhefeng @ 2:31 pm

Reading an article about checkpoints on Metalink (Checkpoint Tuning and Troubleshooting Guide [ID 147468.1]).

Here are some good points about checkpoints:

Oracle writes the dirty buffers to disk only under certain conditions:
– When a shadow process has scanned more than one-quarter of the db_block_buffers parameter.
– Every three seconds.
– When a checkpoint occurs.

A checkpoint occurs on five types of events:
– At each switch of the redo log files.
– When the delay for LOG_CHECKPOINT_TIMEOUT is reached.
– When the size in bytes corresponding to:
(LOG_CHECKPOINT_INTERVAL * OS block size)
has been written to the current redo log file.
– Directly by the ALTER SYSTEM SWITCH LOGFILE command.
– Directly with the ALTER SYSTEM CHECKPOINT command.

During a checkpoint the following occurs:
– The database writer (DBWR) writes all modified database blocks in the buffer cache back to the datafiles.
– The checkpoint process (CKPT) updates the headers of all the datafiles to indicate when the last checkpoint occurred (SCN).
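
To see this in action, here is a minimal sketch (standard commands and views; run it on a test system): force the last two checkpoint triggers from the list above, then confirm that the checkpoint SCN recorded in the datafile headers has advanced.

ALTER SYSTEM SWITCH LOGFILE;   -- triggers a checkpoint via a log switch
ALTER SYSTEM CHECKPOINT;       -- requests a checkpoint directly

-- the checkpoint SCN and time recorded in each datafile header
SELECT file#, checkpoint_change#,
       to_char(checkpoint_time, 'yyyy-mm-dd hh24:mi:ss') checkpoint_time
FROM v$datafile_header;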


March 12, 2010

Why Isn’t Oracle Using My Index?!

Filed under: [System Performance tuning] — zhefeng @ 4:02 pm

By Jonathan Lewis
http://www.dbazine.com/oracle/or-articles/jlewis12

The question in the title of this piece is probably the single most frequently occurring question that appears in the Metalink forums and Usenet newsgroups. This article uses a test case that you can rebuild on your own systems to demonstrate the most fundamental issues with how cost-based optimisation works. And at the end of the article, you should be much better equipped to give an answer the next time you hear that dreaded question.

Because of the wide variety of options that are available when installing Oracle, it isn't usually safe to predict exactly what will happen when someone runs a script that you have dictated to them. But I'm going to risk it, in the hope that your database is a fairly vanilla installation, with the default values for the most commonly tweaked parameters. The example has been built and tested on an 8.1.7 database with the db_block_size set to the commonly used value of 8K and the db_file_multiblock_read_count set to the equally commonly used value 8. The results may be a little different under Oracle 9.2.
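
If you want to confirm those two settings on your own system before running the scripts, a quick check from SQL*Plus (standard commands) is:

show parameter db_block_size
show parameter db_file_multiblock_read_count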

Run the script from Figure 1, which creates a couple of tables, then indexes and analyses them.

create table t1 as
select
trunc((rownum-1)/15) n1,
trunc((rownum-1)/15) n2,
rpad('x', 215) v1
from all_objects
where rownum <= 3000;

create table t2 as
select
mod(rownum,200) n1,
mod(rownum,200) n2,
rpad('x',215) v1
from all_objects
where rownum <= 3000;

create index t1_i1 on t1(n1);
create index t2_i1 on t2(n1);

analyze table t1 compute statistics;
analyze table t2 compute statistics;

Figure 1: The test data sets.

Once you have got this data in place, you might want to convince yourself that the two sets of data are identical — in particular, that the N1 columns in both data sets have values ranging from 0 to 199, with 15 occurrences of each value. You might try the following check:

select n1, count(*)
from t1
group by n1;

and the matching query against T2 to prove the point.

If you then execute the queries:

select * from t1 where n1 = 45;
select * from t2 where n1 = 45;

You will find that each query returns 15 rows. However, if you

set autotrace traceonly explain

you will discover that the two queries have different execution paths.

The query against table T1 uses the index, but the query against table T2 does a full tablescan.

So you have two sets of identical data, with dramatically different access paths for the same query.
What Happened to the Index?

Note: if you've ever come across any of those "magic number" guidelines regarding the use of indexes, e.g., "Oracle will use an index for less than 23 percent, 10 percent, 2 percent (pick number at random) of the data," then you may at this stage begin to doubt their validity. In this example, Oracle has used a tablescan for 15 rows out of 3,000, i.e., for just one half of one percent of the data!

To investigate problems like this, there is one very simple ploy that I always try as the first step: Put in some hints to make Oracle do what I think it ought to be doing, and see if that gives me any clues.

In this case, a simple hint:

/*+ index(t2, t2_i1) */

is sufficient to switch Oracle from the full tablescan to the indexed access path. The three paths with costs (abbreviated to C=nnn) are shown in Figure 2:

select * from t1 where n1 = 45;

EXECUTION PLAN
--------------
TABLE ACCESS BY INDEX ROWID OF T1 (C=2)
INDEX(RANGE SCAN) OF T1_I1 (C=1)

select * from t2 where n1 = 45;

EXECUTION PLAN
--------------
TABLE ACCESS FULL OF T2 (C=15)

select /*+ index(t2 t2_i1) */
*
from t2
where n1 = 45;

EXECUTION PLAN
--------------
TABLE ACCESS BY INDEX ROWID OF T2 (C=16)
INDEX(RANGE SCAN) OF T2_I1 (C=1)

Figure 2: The different queries and their costs.

So why hasn't Oracle used the index by default for the T2 query? Easy — as the execution plan shows, the cost of doing the tablescan is cheaper than the cost of using the index.
Why is the Tablescan Cheaper?

This, of course, is simply begging the question. Why is the cost of the tablescan cheaper than the cost of using the index?

By looking into this question, you uncover the key mechanisms (and critically erroneous assumptions) of the Cost Based Optimiser.

Let's start by examining the indexes by running the query:

select
table_name,
blevel,
avg_data_blocks_per_key,
avg_leaf_blocks_per_key,
clustering_factor
from user_indexes;

The results are given in the table below:

                      T1      T2
Blevel                 1       1
Data blocks / key      1      15
Leaf blocks / key      1       1
Clustering factor     96    3000

Note particularly the value for "data blocks per key." This is the number of different blocks in the table that Oracle thinks it will have to visit if you execute a query that contains an equality test on a complete key value for this index.

So where do the costs for our queries come from? As far as Oracle is concerned, if we fire in the key value 45, we get the data from table T1 by hitting one index leaf block and one table block — two blocks, so a cost of two.

If we try the same with table T2, we have to hit one index leaf block and 15 table blocks — a total of 16 blocks, so a cost of 16.

Clearly, according to this viewpoint, the index on table T1 is much more desirable than the index on table T2. This leaves two questions outstanding, though:

Where does the tablescan cost come from, and why are the figures for the avg_data_blocks_per_key so different between the two tables?

The answer to the second question is simple. Look back at the definition of table T1 — it uses the trunc() function to generate the N1 values, dividing "rownum - 1" by 15 and truncating.

Trunc(675/15) = 45
Trunc(676/15) = 45
...
Trunc(689/15) = 45

All the rows with the value 45 do actually appear one after the other in a tight little clump (probably all fitting one data block) in the table.

Table T2 uses the mod() function to generate the N1 values, using modulus 200 on the rownum:

mod(45,200) = 45
mod(245,200) = 45
...
mod(2845,200) = 45

The rows with the value 45 appear every two hundredth position in the table (probably resulting in no more than one row in every relevant block).

By doing the analyze, Oracle was able to get a perfect description of the data scatter in our table. So the optimiser was able to work out exactly how many blocks Oracle would have to visit to answer our query — and, in simple cases, the number of block visits is the cost of the query.
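
A handy way to see this scatter in the dictionary (my addition, not part of the original test): compare each index's clustering_factor with the table's blocks and num_rows. A value close to the block count (96 for T1) means the rows for each key are tightly clumped; a value close to the row count (3000 for T2) means they are widely scattered.

select i.index_name, i.clustering_factor, t.blocks, t.num_rows
from user_indexes i, user_tables t
where t.table_name = i.table_name
and i.table_name in ('T1', 'T2');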
But Why the Tablescan?

So we see that an indexed access into T2 is more expensive than the same path into T1, but why has Oracle switched to the tablescan?

This brings us to the two simple-minded, and rather inappropriate, assumptions that Oracle makes.

The first is that every block acquisition equates to a physical disk read, and the second is that a multiblock read is just as quick as a single block read.

So what impact do these assumptions have on our experiment?

If you query the user_tables view with the following SQL:

select
table_name,
blocks
from user_tables;

you will find that our two tables each cover 96 blocks.

At the start of the article, I pointed out that the test case was running a version 8 system with the value 8 for the db_file_multiblock_read_count.

Roughly speaking, Oracle has decided that it can read the entire 96 block table in 96/8 = 12 disk read requests.

Since it takes 16 block (= disk read) requests to access the table by index, it is clearly quicker (from Oracle's sadly deluded perspective) to scan the table — after all, 12 is less than 16.

Voila! If the data you are targeting is suitably scattered across the table, you get tablescans even for a very small percentage of the data — a problem that can be exaggerated in the case of very big blocks and very small rows.
Correction

In fact, you will have noticed that my calculated number of scan reads was 12, whilst the cost reported in the execution plan was 15. It is a slight simplification to say that the cost of a tablescan (or an index fast full scan for that matter) is

'number of blocks' /
db_file_multiblock_read_count.

Oracle uses an "adjusted" multi-block read value for the calculation (although it then tries to use the actual requested size when the scan starts to run).

For reference, the following table compares a few of the actual and adjusted values:
Actual    Adjusted
     4       4.175
     8       6.589
    16      10.398
    32      16.409
    64      25.895
   128      40.865

As you can see, Oracle makes some attempt to protect you from the error of supplying an unfeasibly large value for this parameter. Using the adjusted value for our example, the tablescan cost works out as 96 / 6.589 ≈ 14.6, which rounds up to the 15 reported in Figure 2.

There is a minor change in version 9, by the way, where the tablescan cost is further adjusted by adding one to the result of the division — which means tablescans in V9 are generally just a little more expensive than in V8, so indexes are just a little more likely to be used.
Adjustments

We have seen that there are two assumptions built into the optimizer that are not very sensible.

* A single block read costs just as much as a multi-block read — (not really likely, particularly when running on file systems without direct I/O)
* A block access will be a physical disk read — (so what is the buffer cache for?)

Since the early days of Oracle 8.1, there have been a couple of parameters that allow us to correct these assumptions in a reasonably truthful way.

See Tim Gorman's article for a proper description of these parameters, but briefly:

Optimizer_index_cost_adj takes a value between 1 and 10000 with a default of 100. Effectively, this parameter describes how cheap a single block read is compared to a multiblock read. For example the value 30 (which is often a suitable first guess for an OLTP system) would tell Oracle that a single block read costs 30% of a multiblock read. Oracle would therefore incline towards using indexed access paths for low values of this parameter.

Optimizer_index_caching takes a value between 0 and 100 with a default of 0. This tells Oracle to assume that that percentage of index blocks will be found in the buffer cache. In this case, setting values close to 100 encourages the use of indexes over tablescans.

The really nice thing about both these parameters is that they can be set to "truthful" values.

Set the optimizer_index_caching to something in the region of the "buffer cache hit ratio." (You have to make your own choice about whether this should be the figure derived from the default pool, keep pool, or both).

The optimizer_index_cost_adj is a little more complicated. Check the typical wait times in v$system_event for the events "db file scattered read" (multiblock reads) and "db file sequential read" (single block reads). Divide the latter by the former and multiply by one hundred.
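
Putting the two together, a sketch of deriving and setting both parameters (the event names and views are standard; the values 30 and 90 below are purely illustrative):

-- average_wait is reported in centiseconds
select event, average_wait
from v$system_event
where event in ('db file sequential read', 'db file scattered read');

-- e.g., if single block reads average 1cs and multiblock reads 3.3cs,
-- then 1 / 3.3 * 100 is roughly 30
alter session set optimizer_index_cost_adj = 30;

-- e.g., if the buffer cache hit ratio is around 90 percent
alter session set optimizer_index_caching = 90;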
Improvements

Don't forget that the two parameters may need to be adjusted at different times of the day and week to reflect the end-user workload. You can't just derive one pair of figures, and use them for ever.

Happily, in Oracle 9, things have improved. You can now collect system statistics, which originally include just these four figures:

+ Average single block read time
+ Average multi block read time
+ Average actual multiblock read
+ Notional usable CPU speed.

Suffice it to say that this feature is worth an article in its own right — but do note that the first three allow Oracle to discover the truth about the cost of multiblock reads. And in fact, the CPU speed allows Oracle to work out the CPU cost of unsuitable access mechanisms, like reading every single row in a block to find a specific data value, and behave accordingly.

When you migrate to version 9, one of the first things you should investigate is the correct use of system statistics. This one feature alone may reduce the amount of time you spend trying to "tune" awkward SQL.
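
As a starting point, a minimal sketch of gathering workload system statistics in 9i (the workload window is illustrative; dbms_stats.gather_system_stats is the documented interface):

SQL> exec dbms_stats.gather_system_stats('START');
-- ...run a representative workload for a while, then...
SQL> exec dbms_stats.gather_system_stats('STOP');

-- inspect what was collected (sreadtim, mreadtim, mbrc, cpuspeed)
select pname, pval1 from sys.aux_stats$ where sname = 'SYSSTATS_MAIN';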

In passing, despite the wonderful effect of system statistics, both of the optimizer-adjusting parameters still apply — although the exact formula for their use seems to have changed between version 8 and version 9.
Variations on a Theme

Of course, I have picked one very special case — equality on a single column non-unique index, where there are no nulls in the table — and treated it very simply. (I haven't even mentioned the relevance of the index blevel and clustering_factor yet.) There are numerous different strategies that Oracle uses to work out more general cases.

Consider some of the cases I have conveniently overlooked:

+ Multi-column indexes
+ Part-used multi-column indexes
+ Range scans
+ Unique indexes
+ Non-unique indexes representing unique constraints
+ Index skip scans
+ Index only queries
+ Bitmap indexes
+ Effects of nulls

The list goes on and on. There is no one simple formula that tells you how Oracle works out a cost — there is only a general guideline that gives you the flavour of the approach and a list of different formulae that apply in different cases.

However, the purpose of this article was to make you aware of the general approach and the two assumptions built into the optimiser's strategy. And I hope that this may be enough to take you a long way down the path of understanding the (apparently) strange things that the optimiser has been known to do.

March 11, 2010

How to Troubleshoot Bad Execution Plans

Filed under: [System Performance tuning] — zhefeng @ 11:36 am

Very good SQL tuning article from Greg Rahn.

Original Link:

One of the most common performance issues DBAs encounter is bad execution plans. Many try to resolve bad execution plans by setting optimizer related parameters or even hidden underscore parameters. Some even try to decipher a long and complex 10053 trace in hopes of finding an answer. While changing parameters or analyzing a 10053 trace might be useful for debugging at some point, I feel there is a much more simple way to start to troubleshoot bad execution plans.

Verify The Query Matches The Business Question

This seems like an obvious thing to do, but I’ve seen numerous cases where the SQL query does not match the business question being asked. Do a quick sanity check verifying things like: join columns, group by, subqueries, etc. The last thing you want to do is consume time trying to debug a bad plan for an improperly written SQL query. Frequently I’ve found that this is the case for many of those “I’ve never got it to run to completion” queries.

What Influences The Execution Plan

I think it's important to understand what variables influence the Optimizer in order to focus the debugging effort. There are quite a number of variables, but frequently the problems come down to two things: (1) non-default optimizer parameters and (2) non-representative object/system statistics. Based on my observations I would say that the most abused Optimizer parameters are:

* OPTIMIZER_INDEX_CACHING
* OPTIMIZER_INDEX_COST_ADJ
* DB_FILE_MULTIBLOCK_READ_COUNT

Many see setting these as a solution to get the Optimizer to choose an index plan over a table scan plan, but this is problematic in several ways:

1. This is a global change to a local problem
2. Although it appears to solve one problem, it is unknown how many bad execution plans resulted from this change
3. The root cause of why the index plan was not chosen is unknown, just that tweaking parameters gave the desired result
4. Using non-default parameters makes it almost impossible to correctly and effectively troubleshoot the root cause

Object and system statistics can have a large influence on execution plans, but few actually take the time to sanity check them during triage (a quick sanity-check query is sketched after the list). These statistics exist in views like:

* ALL_TAB_COL_STATISTICS
* ALL_PART_COL_STATISTICS
* ALL_INDEXES
* SYS.AUX_STATS$
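
As a first-pass sanity check of those views (a sketch; substitute your own owner and table name for MY_TABLE), something like the following quickly exposes missing or stale statistics:

-- column-level stats: look for missing columns, zero NDV, old LAST_ANALYZED
select column_name, num_distinct, num_nulls, histogram, last_analyzed
from all_tab_col_statistics
where owner = user
and table_name = 'MY_TABLE';

-- the system statistics the Optimizer is working with
select pname, pval1 from sys.aux_stats$ where sname = 'SYSSTATS_MAIN';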

Using GATHER_PLAN_STATISTICS With DBMS_XPLAN.DISPLAY_CURSOR

As a first step of triage, I would suggest executing the query with a GATHER_PLAN_STATISTICS hint followed by a call to DBMS_XPLAN.DISPLAY_CURSOR. The GATHER_PLAN_STATISTICS hint allows for the collection of extra metrics during the execution of the query. Specifically, it shows us the Optimizer’s estimated number of rows (E-Rows) and the actual number of rows (A-Rows) for each row source. If the estimates are vastly different from the actual, one probably needs to investigate why. For example: In the below plan, look at line 8. The Optimizer estimates 5,899 rows and the row source actually returns 5,479,000 rows. If the estimate is off by three orders of magnitude (1000), chances are the plan will be sub-optimal. Do note that with Nested Loop Joins you need to multiply the Starts column by the E-Rows column to get the A-Rows values (see line 10).
select /*+ gather_plan_statistics */ … from … ;
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows |
------------------------------------------------------------------------------------------
| 1 | SORT GROUP BY | | 1 | 1 | 1 |
|* 2 | FILTER | | 1 | | 1728K |
| 3 | NESTED LOOPS | | 1 | 1 | 1728K |
|* 4 | HASH JOIN | | 1 | 1 | 1728K |
| 5 | PARTITION LIST SINGLE | | 1 | 6844 | 3029 |
|* 6 | INDEX RANGE SCAN | PROV_IX13 | 1 | 6844 | 3029 |
| 7 | PARTITION LIST SINGLE | | 1 | 5899 | 5479K |
|* 8 | TABLE ACCESS BY LOCAL INDEX ROWID | SERVICE | 1 | 5899 | 5479K |
|* 9 | INDEX SKIP SCAN | SERVICE_IX8 | 1 | 4934 | 5479K |
| 10 | PARTITION LIST SINGLE | | 1728K | 1 | 1728K |
|* 11 | INDEX RANGE SCAN | CLAIM_IX7 | 1728K | 1 | 1728K |
------------------------------------------------------------------------------------------

Using The CARDINALITY Hint

Now that I've demonstrated how to compare the cardinality estimates to the actual number of rows, what are the debugging options? If one asserts that the Optimizer will choose the optimal plan if it can accurately estimate the number of rows, one can test using the not so well (un)documented CARDINALITY hint. The CARDINALITY hint tells the Optimizer how many rows are coming out of a row source. The hint is generally used like so:
select /*+ cardinality(a 100) */ * from dual a;

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100 | 200 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| DUAL | 100 | 200 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------

In this case I told the Optimizer that DUAL would return 100 rows (when in reality it returns 1 row) as seen in the Rows column from the autotrace output. The CARDINALITY hint is one tool one can use to give the Optimizer accurate information. I usually find this the best way to triage a bad plan as it is not a global change; it only affects a single execution of a statement in my session. If luck has it that using a CARDINALITY hint yields an optimal plan, one can move on to debugging where the cardinality is being miscalculated. Generally the bad cardinality is the result of non-representative table/column stats, but it also may be due to data correlation or other factors. This is where it pays off to know and understand the size and shape of the data. If the Optimizer still chooses a bad plan even with the correct cardinality estimates, it's time to place a call to Oracle Support as more in-depth debugging is likely required.

Where Cardinality Can Go Wrong

There are several common scenarios that can lead to inaccurate cardinality estimates. Some of those on the list are:

1. Data skew: Is the NDV inaccurate due to data skew and a poor dbms_stats sample?
2. Data correlation: Are two or more predicates related to each other?
3. Out-of-range values: Is the predicate within the range of known values?
4. Use of functions in predicates: Is the 5% cardinality guess for functions accurate?
5. Stats gathering strategies: Is your stats gathering strategy yielding representative stats?

Some possible solutions to these issues are:

1. Data skew: Choose a sample size that yields accurate NDV. Use DBMS_STATS.AUTO_SAMPLE_SIZE in 11g.
2. Data correlation: Use Extended Stats in 11g (see the sketch after this list). If <= 10.2.0.3 use a CARDINALITY hint if possible.
3. Out-of-range values: Gather or manually set the statistics.
4. Use of functions in predicates: Use a CARDINALITY hint where possible.
5. Stats gathering strategies: Use AUTO_SAMPLE_SIZE. Adjust only where necessary. Be mindful of tables with skewed data.
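
For item 2, a minimal sketch of what Extended Statistics look like in 11g (the table and column pair are illustrative):

-- create a column group so the Optimizer knows A and B are correlated
select dbms_stats.create_extended_stats(user, 'T1', '(A,B)') from dual;

-- regather so the new column group gets statistics
exec dbms_stats.gather_table_stats(user, 'T1', method_opt => 'for all columns size auto');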

How To Best Work With Oracle Support

If you are unable to get to the root cause on your own, it is likely that you will be in contact with Oracle Support. To best assist the support analyst I would recommend you gather the following in addition to the query text:

1. Output from the GATHER_PLAN_STATISTICS and DBMS_XPLAN.DISPLAY_CURSOR
2. SQLTXPLAN output. See Metalink Note 215187.1
3. 10053 trace output. See Metalink Note 225598.1
4. DDL for all objects used (and dependencies) in the query. This is best captured as an expdp (Data Pump) export using CONTENT=METADATA_ONLY. This will also include the object statistics.
5. Output from: select pname, pval1 from sys.aux_stats$ where sname='SYSSTATS_MAIN';
6. A copy of your init.ora

Having this data ready before you even make the call (or create the SR on-line) should give you a jump on getting a quick(er) resolution.

Summary

While this blog post is not meant to be a comprehensive troubleshooting guide for bad execution plans, I do hope that it does help point you in the right direction the next time you encounter one. Many of the Optimizer issues I’ve seen are due to incorrect cardinality estimates, quite often due to inaccurate NDV or the result of data correlation. I believe that if you use a systematic approach you will find that debugging bad execution plans may be as easy as just getting the cardinality estimate correct.

DBMS_STATS, METHOD_OPT and FOR ALL INDEXED COLUMNS

Filed under: [System Performance tuning] — zhefeng @ 10:14 am

Another very good article about dbms_stats package:
http://structureddata.org/2008/10/14/dbms_stats-method_opt-and-for-all-indexed-columns/

I’ve written before on choosing an optimal stats gathering strategy but I recently came across a scenario that I didn’t directly blog about and think it deserves attention. As I mentioned in that previous post, one should only deviate from the defaults when they have a reason to, and fully understand that reason and the effect of that decision.

Understanding METHOD_OPT

The METHOD_OPT parameter of DBMS_STATS controls two things:

1. on which columns statistics will be collected
2. on which columns histograms will be collected (and how many buckets)

It is very important to understand #1 and how the choice of METHOD_OPT affects the collection of column statistics.

Prerequisite: Where Do I Find Column Statistics?

Understanding where to find column statistics is vital for troubleshooting bad execution plans. These views will be the arrows in your quiver:

* USER_TAB_COL_STATISTICS
* USER_PART_COL_STATISTICS
* USER_SUBPART_COL_STATISTICS

Depending on whether the table is partitioned or subpartitioned, and depending on what GRANULARITY the stats were gathered with, the latter two of those views may or may not be populated.

The Bane of METHOD_OPT: FOR ALL INDEXED COLUMNS

If you are using FOR ALL INDEXED COLUMNS as part of your METHOD_OPT you probably should not be. Allow me to explain. Using METHOD_OPT=>'FOR ALL INDEXED COLUMNS SIZE AUTO' (a common METHOD_OPT I see) tells DBMS_STATS: "only gather stats on columns that participate in an index, and based on the data distribution and workload of those indexed columns decide if a histogram should be created and how many buckets it should contain". Is that really what you want? My guess is probably not. Let me work through a few examples to explain why.

I’m going to start with this table.
SQL> exec dbms_random.initialize(1);

PL/SQL procedure successfully completed.

SQL> create table t1
2 as
3 select
4 column_value pk,
5 round(dbms_random.value(1,2)) a,
6 round(dbms_random.value(1,5)) b,
7 round(dbms_random.value(1,10)) c,
8 round(dbms_random.value(1,100)) d,
9 round(dbms_random.value(1,100)) e
10 from table(counter(1,1000000))
11 /

Table created.

SQL> begin
2 dbms_stats.gather_table_stats(
3 ownname => user ,
4 tabname => 'T1' ,
5 estimate_percent => 100 ,
6 cascade => true);
7 end;
8 /

PL/SQL procedure successfully completed.

SQL> select
2 COLUMN_NAME, NUM_DISTINCT, HISTOGRAM, NUM_BUCKETS,
3 to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
4 from user_tab_col_statistics
5 where table_name='T1'
6 /

COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS LAST_ANALYZED
----------- ------------ --------------- ----------- -------------------
PK 1000000 NONE 1 2008-13-10 18:39:51
A 2 NONE 1 2008-13-10 18:39:51
B 5 NONE 1 2008-13-10 18:39:51
C 10 NONE 1 2008-13-10 18:39:51
D 100 NONE 1 2008-13-10 18:39:51
E 100 NONE 1 2008-13-10 18:39:51

6 rows selected.

This 6 column table contains 1,000,000 rows of randomly generated numbers. I’ve queried USER_TAB_COL_STATISTICS to display some of the important attributes (NDV, Histogram, Number of Buckets, etc).

I’m going to now put an index on T1(PK), delete the stats and recollect stats using two different METHOD_OPT parameters that each use ‘FOR ALL INDEXED COLUMNS’.
SQL> create unique index PK_T1 on T1(PK);

Index created.

SQL> begin
2 dbms_stats.delete_table_stats(user,'T1');
3
4 dbms_stats.gather_table_stats(
5 ownname => user ,
6 tabname => 'T1' ,
7 estimate_percent => 100 ,
8 method_opt => 'for all indexed columns' ,
9 cascade => true);
10 end;
11 /

PL/SQL procedure successfully completed.

SQL> select COLUMN_NAME, NUM_DISTINCT, HISTOGRAM, NUM_BUCKETS,
2 to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
3 from user_tab_col_statistics
4 where table_name='T1'
5 /

COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS LAST_ANALYZED
----------- ------------ --------------- ----------- -------------------
PK 1000000 HEIGHT BALANCED 75 2008-13-10 18:41:10

SQL> begin
2 dbms_stats.delete_table_stats(user,'T1');
3
4 dbms_stats.gather_table_stats(
5 ownname => user ,
6 tabname => 'T1' ,
7 estimate_percent => 100 ,
8 method_opt => 'for all indexed columns size auto' ,
9 cascade => true);
10 end;
11 /

PL/SQL procedure successfully completed.

SQL> select COLUMN_NAME, NUM_DISTINCT, HISTOGRAM, NUM_BUCKETS,
2 to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
3 from user_tab_col_statistics
4 where table_name='T1'
5 /

COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS LAST_ANALYZED
----------- ------------ --------- ----------- -------------------
PK 1000000 NONE 1 2008-13-10 18:41:12

Notice that in both cases only column PK has stats on it. Columns A,B,C,D and E do not have any stats collected on them. Also note that when no SIZE clause is specified, it defaults to 75 buckets.

Now one might think that is no big deal or perhaps they do not realize this is happening because they do not look at their stats. Let’s see what we get for cardinality estimates from the Optimizer for a few scenarios.
SQL> select /*+ gather_plan_statistics */
2 count(*)
3 from t1
4 where a=1
5 /

COUNT(*)
----------
500227

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID 4df0g0r99zmba, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.24 | 3466 |
|* 2 | TABLE ACCESS FULL| T1 | 1 | 10000 | 500K|00:00:00.50 | 3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("A"=1)

Notice the E-Rows estimate for T1. The Optimizer is estimating 10,000 rows when in reality there are 500,227. The estimate is off by more than an order of magnitude (50x). Normally the calculation for the cardinality would be (for a single-table, single equality predicate):
number of rows in T1 * 1/NDV = 1,000,000 * 1/2 = 500,000
but in this case 10,000 is the estimate. Strangely enough (or not), 10,000 is exactly 0.01 (1%) of 1,000,000. Because there are no column stats for T1.A, the Optimizer is forced to make a guess, and that guess is 1%.
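
When column statistics do exist, you can reproduce this arithmetic straight from the dictionary (a sketch; here, with no stats on A, the 1% default applies instead):

-- expected cardinality for "where a = :x" is num_rows / num_distinct
select t.num_rows, c.num_distinct,
       round(t.num_rows / c.num_distinct) expected_card
from user_tables t, user_tab_col_statistics c
where t.table_name = 'T1'
and c.table_name = t.table_name
and c.column_name = 'A';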

As you can see from the 10053 trace (below), since there are no statistics on the column, defaults are used. In this case they yield very poor cardinality estimations.

SINGLE TABLE ACCESS PATH
-----------------------------------------
BEGIN Single Table Cardinality Estimation
-----------------------------------------
Column (#2): A(NUMBER) NO STATISTICS (using defaults)
AvgLen: 13.00 NDV: 31250 Nulls: 0 Density: 3.2000e-05
Table: T1 Alias: T1
Card: Original: 1000000 Rounded: 10000 Computed: 10000.00 Non Adjusted: 10000.00
-----------------------------------------
END Single Table Cardinality Estimation
-----------------------------------------

Now that I’ve demonstrated how poor the cardinality estimation was with a single equality predicate, let’s see what two equality predicates gives us for a cardinality estimate.
SQL> select /*+ gather_plan_statistics */
2 count(*)
3 from t1
4 where a=1
5 and b=3
6 /

COUNT(*)
----------
124724

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID ctq8q59qdymw6, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.19 | 3466 |
|* 2 | TABLE ACCESS FULL| T1 | 1 | 100 | 124K|00:00:00.25 | 3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(("A"=1 AND "B"=3))

Yikes. In this case the cardinality estimate is 100 when the actual number of rows is 124,724, a difference of over 3 orders of magnitude (over 1000x). Where did the 100 row estimate come from? In this case there are two equality predicates so the selectivity is calculated as 1% * 1% or 0.01 * 0.01 = 0.0001. 1,000,000 * 0.0001 = 100. Funny that. (The 1% is the default selectivity for an equality predicate w/o stats.)

Now let’s add a derived predicate as well and check the estimates.
SQL> select /*+ gather_plan_statistics */
2 count(*)
3 from t1
4 where a=1
5 and b=3
6 and d+e > 50
7 /

COUNT(*)
----------
109816

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID 5x200q9rqvvfu, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3
and d+e > 50

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.22 | 3466 |
|* 2 | TABLE ACCESS FULL| T1 | 1 | 5 | 109K|00:00:00.33 | 3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(("A"=1 AND "B"=3 AND "D"+"E">50))

Doh! The cardinality estimate is now 5, but the actual number of rows being returned is 109,816. Not good at all. The Optimizer estimated 5 rows because it used a default selectivity of 1% (for A=1) * 1% (for B=3) * 5% (for D+E > 50) * 1,000,000 rows. Now can you see why column statistics are very important? All it takes is a few predicates and the cardinality estimation becomes very small, very fast. Now consider this:

* What is likely to happen in a data warehouse where the queries are 5+ table joins and the fact table columns do not have indexes?
* Would the Optimizer choose the correct driving table?
* Would nested loops plans probably be chosen when it is really not appropriate?

Hopefully you can see where this is going. If you don’t, here is the all too common chain of events:

* Non representative (or missing) statistics lead to
* Poor cardinality estimates which leads to
* Poor access path selection which leads to
* Poor join method selection which leads to
* Poor join order selection which leads to
* Poor SQL execution times

Take 2: Using the Defaults

Now I’m going to recollect stats with a default METHOD_OPT and run through the 3 execution plans again:
SQL> begin
2 dbms_stats.delete_table_stats(user,'t1');
3
4 dbms_stats.gather_table_stats(
5 ownname => user ,
6 tabname => 'T1' ,
7 estimate_percent => 100 ,
8 degree => 8,
9 cascade => true);
10 end;
11 /

PL/SQL procedure successfully completed.

SQL> select column_name, num_distinct, histogram, NUM_BUCKETS,
2 to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
3 from user_tab_col_statistics where table_name='T1'
4 /

COLUMN_NAME NUM_DISTINCT HISTOGRAM NUM_BUCKETS LAST_ANALYZED
----------- ------------ --------- ----------- -------------------
PK 1000000 NONE 1 2008-13-10 19:44:32
A 2 FREQUENCY 2 2008-13-10 19:44:32
B 5 FREQUENCY 5 2008-13-10 19:44:32
C 10 FREQUENCY 10 2008-13-10 19:44:32
D 100 NONE 1 2008-13-10 19:44:32
E 100 NONE 1 2008-13-10 19:44:32

6 rows selected.
SQL> select /*+ gather_plan_statistics */
2 count(*)
3 from t1
4 where a=1
5 /

COUNT(*)
----------
500227

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID 4df0g0r99zmba, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.20 | 3466 |
|* 2 | TABLE ACCESS FULL| T1 | 1 | 500K| 500K|00:00:00.50 | 3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("A"=1)
SQL> select /*+ gather_plan_statistics */
2 count(*)
3 from t1
4 where a=1
5 and b=3
6 /

COUNT(*)
----------
124724

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID ctq8q59qdymw6, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.14 | 3466 |
|* 2 | TABLE ACCESS FULL| T1 | 1 | 124K| 124K|00:00:00.25 | 3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(("B"=3 AND "A"=1))
SQL> select /*+ gather_plan_statistics */
2 count(*)
3 from t1
4 where a=1
5 and b=3
6 and d+e > 50
7 /

COUNT(*)
----------
109816

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID 5x200q9rqvvfu, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3
and d+e>50

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.17 | 3466 |
|* 2 | TABLE ACCESS FULL| T1 | 1 | 6236 | 109K|00:00:00.22 | 3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(("B"=3 AND "A"=1 AND "D"+"E">50))

As you can see, the first two queries have spot-on cardinality estimates, but the third query isn't as good, because it uses a column combination and there are no stats on the D+E combination, only on D and E individually. I'm going to rerun the third query with dynamic sampling set to 4 (in 10g it defaults to 2) and reevaluate the cardinality estimate.
SQL> alter session set optimizer_dynamic_sampling=4;

Session altered.

SQL> select /*+ gather_plan_statistics */
2 count(*)
3 from t1
4 where a=1
5 and b=3
6 and d+e > 50
7 /

COUNT(*)
----------
109816

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID 5x200q9rqvvfu, child number 1
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3
and d+e > 50

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.17 | 3466 |
|* 2 | TABLE ACCESS FULL| T1 | 1 | 102K| 109K|00:00:00.22 | 3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(("B"=3 AND "A"=1 AND "D"+"E">50))

Note
-----
- dynamic sampling used for this statement

Bingo! Close enough to call statistically equivalent.

Summary

I hope this little exercise demonstrates how important it is to have representative statistics and that when statistics are representative the Optimizer can very often accurately estimate the cardinality and thus choose the best plan for the query. Remember these points:

* Recent statistics do not necessarily equate to representative statistics.
* Statistics are required on all columns to yield good plans – not just indexed columns.
* You probably should not be using METHOD_OPT => ‘FOR ALL INDEXED COLUMNS SIZE AUTO’, especially in a data warehouse where indexes are used sparingly.
* Dynamic Sampling can assist with cardinality estimates where existing stats are not enough.
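
On the last point, dynamic sampling can also be requested per statement rather than per session, which keeps the change local (a sketch using the standard hint form against the test table above):

select /*+ dynamic_sampling(t1 4) */ count(*)
from t1
where a=1
and b=3
and d+e > 50;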

March 10, 2010

Using Histograms to Help Oracle Cost-Based Optimizer Make Better Decisions

Filed under: [System Performance tuning] — zhefeng @ 5:36 pm

Found a very good article talking about histograms; here is the original link:
http://support.confio.com/blog/tag/methodopt/38/

Introduction

Histograms are a feature of the cost-based optimizer (CBO) that allow the Oracle engine to determine how data is distributed within a column. They are most useful for a column that is included in the WHERE clause of a SQL statement and whose data distribution is skewed.

Example

Assume a table named PROCESS_QUEUE with one million rows including a column named PROCESSED_FLAG with five distinct values. Also assume a query similar to the following is executed:

SELECT id, serial_number
FROM process_queue
WHERE processed_flag = 'N';

SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1087 Card=260363 Bytes=7029801)
  TABLE ACCESS (FULL) OF 'PROCESS_QUEUE' (TABLE) (Cost=1087 Card=260363 Bytes=7029801)

Without histograms and only five distinct values, Oracle assumes an even data distribution and would most likely perform a full table scan for this query. With one million rows and five values, Oracle assumes that each value would return 200,000 rows, or 20% of the rows.

Data Skew

However, what if the data for the PROCESSED_FLAG column was skewed:

SELECT processed_flag, COUNT(1)
FROM process_queue
GROUP BY processed_flag;

PROCESSED_FLAG      COUNT
-------------- ----------
P                      24
Y                  999345
E                      30
S                     568
N                      33

In this case, only 33 rows have a value of 'N', so there has to be a way to tell Oracle to use the index on the PROCESSED_FLAG column. That is where histograms come into use. A histogram would include data similar to the above and allow Oracle to know that only 33 rows would be returned for this query.

Collecting Histograms

To collect histograms for this column, a command similar to the following could be used:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS(user, 'PROCESS_QUEUE', method_opt => 'for columns processed_flag size 5')

SELECT id, serial_number
FROM process_queue
WHERE processed_flag = 'N';

SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=28 Bytes=756)
  TABLE ACCESS (BY INDEX ROWID) OF 'PROCESS_QUEUE' (TABLE) (Cost=1 Card=28 Bytes=756)
    INDEX (RANGE SCAN) OF 'PQ_IX1' (INDEX) (Cost=1 Card=28)
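
After gathering, you can confirm the histogram is in place and look at its buckets (a sketch; on 10g and later the HISTOGRAM column reports the histogram type):

SELECT column_name, histogram, num_buckets
FROM user_tab_col_statistics
WHERE table_name = 'PROCESS_QUEUE';

SELECT endpoint_value, endpoint_number
FROM user_tab_histograms
WHERE table_name = 'PROCESS_QUEUE'
AND column_name = 'PROCESSED_FLAG'
ORDER BY endpoint_number;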

Notes About Histograms

Note 1: Using histograms works best for SQL statements that use literal values. If a statement uses a bind variable, the first time the query is parsed, Oracle will peek at the value of the bind variable and choose a plan accordingly. That same plan will be used until the SQL is reparsed. In this case, if the bind variable was ‘Y’ the first time, Oracle may perform a full table scan for this query no matter what value was passed in from then on.

The opposite may also be true. Assume a similar data distribution to the above but with 100 distinct values for the PROCESSED_FLAG column. The rows that have a 'Y' value are still 95% of the rows. However, if you used the criteria "WHERE processed_flag='Y'", without histograms Oracle may decide to use the index when a full table scan may be a better option.

Note 2: The defaults for the METHOD_OPT parameter changed between Oracle 9i and 10g. In 9i the parameter defaulted to 'for all columns size 1', which essentially turns off histograms. The default value in Oracle 10g is 'for all columns size auto', which means that Oracle will decide whether or not to collect histograms for a column. In my experience it seems that unnecessary histograms are collected and histogram data is not collected for some columns where it would be useful.
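
If you conclude that histograms are hurting more than helping for a particular table, you can explicitly fall back to the 9i-style behaviour (a sketch; 'size 1' means one bucket per column, i.e., no histograms):

EXECUTE DBMS_STATS.GATHER_TABLE_STATS(user, 'PROCESS_QUEUE', method_opt => 'for all columns size 1')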

Conclusion

Histograms allow Oracle to make much better performance decisions. The case we discussed in this article is one way that histograms are used and is commonly referred to as “table access method” histograms. Another use for histograms, referred to as “table order join” histograms, is to help Oracle decide the order in which tables will be joined. This helps the CBO know the size of the result sets or “cardinality” to properly determine the correct order in which to do joins.

March 8, 2010

Index Full Scan vs Index Fast Full Scan

Filed under: [System Performance tuning] — zhefeng @ 2:06 pm

http://spaces.msn.com/members/wzwanghai/

[Oracle] Index Full Scan vs Index Fast Full Scan
Author: Wanghai
Date: 14-Aug-2005
Source: http://spaces.msn.com/members/wzwanghai/


Do "index full scan" and "index fast full scan" refer to the same thing? The answer is no. Although the two look almost the same on paper, the mechanisms behind them are completely different. Let's see where the differences lie.

First, where can an IFS or FFS be used? In a SQL statement, if all of the columns we want to select are contained in an index, then both an index full scan and an index fast full scan can be used in place of a full table scan. For example:

SQL> CREATE TABLE TEST AS SELECT * FROM dba_objects WHERE 0=1;

SQL> CREATE INDEX ind_test_id ON TEST(object_id);

SQL> INSERT INTO TEST
SELECT *
FROM dba_objects
WHERE object_id IS NOT NULL AND object_id > 10000
ORDER BY object_id DESC;

17837 rows created.

SQL> analyze table test compute statistics for table for all columns for all indexes;

Table analyzed.

SQL> set autotrace trace;

SQL> select object_id from test;

17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=68 Card=17837 Bytes=71348)
1 0 TABLE ACCESS (FULL) OF 'TEST' (Cost=68 Card=17837 Bytes=71348)

At this point Oracle chooses a full table scan, because the object_id column is nullable by default. Change it to NOT NULL:

SQL>alter table test modify(object_id not null);

SQL> select object_id from test;

17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=17837 Bytes=71348)
1 0 INDEX (FAST FULL SCAN) OF 'IND_TEST_ID' (NON-UNIQUE) (Cost=11 Card=17837 Bytes=71348)

Of course, we can also use an index full scan:

SQL> select/*+ index(test ind_TEST_ID)*/ object_id from test;

17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=41 Card=17837 Bytes=71348)
1 0 INDEX (FULL SCAN) OF 'IND_TEST_ID' (NON-UNIQUE) (Cost=101 Card=17837 Bytes=71348)

We have seen that both can be used in this situation, so what is the difference between them? There is one place where the difference shows up: the output of the two scans. To make it easier to see, we take only 10 rows.

INDEX FAST FULL SCAN

SQL> select object_id from test where rownum < 11;

 OBJECT_ID
----------
     66266
       ...

10 rows selected.

INDEX FULL SCAN

SQL> select/*+ index(test ind_TEST_ID)*/ object_id from test where rownum < 11;

 OBJECT_ID
----------
     10616
       ...

10 rows selected.

The fast full scan returns the rows in the physical order of the index blocks, while the full scan returns them in ascending key order. Now find the object_id of the index itself:

SQL> select object_id from dba_objects where object_name='IND_TEST_ID';

 OBJECT_ID
----------
     70591

The object_id of the index is 70591. A tree dump lets us see the structure of the index tree:

SQL> ALTER SESSION SET EVENTS 'immediate trace name TREEDUMP level 70591';

----- begin tree dump
branch: 0x6809b8d 109091725 (0: nrow: 100, level: 1)
leaf: 0x6809b96 109091734 (-1: nrow: 294 rrow: 0)
leaf: 0x6c07ec1 113278657 (0: nrow: 262 rrow: 0)
leaf: 0x6c07ebd 113278653 (1: nrow: 518 rrow: 0)
leaf: 0x6c07eb1 113278641 (2: nrow: 524 rrow: 0)
leaf: 0x6c07ead 113278637 (3: nrow: 524 rrow: 0)
leaf: 0x6c07ea9 113278633 (4: nrow: 524 rrow: 0)
leaf: 0x6c07ea5 113278629 (5: nrow: 524 rrow: 0)
leaf: 0x6c07ea1 113278625 (6: nrow: 524 rrow: 0)
leaf: 0x6c07e9d 113278621 (7: nrow: 524 rrow: 0)
leaf: 0x6c07e99 113278617 (8: nrow: 524 rrow: 0)
leaf: 0x6c07e95 113278613 (9: nrow: 532 rrow: 0)
leaf: 0x6c07e91 113278609 (10: nrow: 524 rrow: 0)
leaf: 0x6c07e8d 113278605 (11: nrow: 524 rrow: 0)
leaf: 0x6c07ec8 113278664 (12: nrow: 524 rrow: 0)
leaf: 0x6c07ec4 113278660 (13: nrow: 524 rrow: 0)
leaf: 0x6c07ec0 113278656 (14: nrow: 524 rrow: 0)
leaf: 0x6c07ebc 113278652 (15: nrow: 524 rrow: 0)
leaf: 0x6809bb2 109091762 (16: nrow: 524 rrow: 0)
leaf: 0x6c07eb8 113278648 (17: nrow: 524 rrow: 0)
leaf: 0x6c07eb4 113278644 (18: nrow: 524 rrow: 0)
leaf: 0x6c07eb0 113278640 (19: nrow: 524 rrow: 0)
leaf: 0x6c07eac 113278636 (20: nrow: 524 rrow: 0)
leaf: 0x6809bae 109091758 (21: nrow: 524 rrow: 0)
leaf: 0x6c07ea8 113278632 (22: nrow: 524 rrow: 0)
leaf: 0x6c07ea4 113278628 (23: nrow: 524 rrow: 0)
leaf: 0x6c07ea0 113278624 (24: nrow: 105 rrow: 105)
leaf: 0x6c07e9c 113278620 (25: nrow: 129 rrow: 129)
leaf: 0x6c07eb9 113278649 (26: nrow: 123 rrow: 123)
leaf: 0x6809baa 109091754 (27: nrow: 246 rrow: 246)
leaf: 0x6c07e98 113278616 (28: nrow: 246 rrow: 246)
leaf: 0x6c07e94 113278612 (29: nrow: 246 rrow: 246)
leaf: 0x6809ba6 109091750 (30: nrow: 246 rrow: 246)
leaf: 0x6809bce 109091790 (31: nrow: 246 rrow: 246)
leaf: 0x6809bca 109091786 (32: nrow: 246 rrow: 246)
leaf: 0x6809c05 109091845 (33: nrow: 248 rrow: 248)
leaf: 0x6809c01 109091841 (34: nrow: 246 rrow: 246)
leaf: 0x6809bfd 109091837 (35: nrow: 246 rrow: 246)
leaf: 0x6809bf9 109091833 (36: nrow: 246 rrow: 246)
leaf: 0x6809bf5 109091829 (37: nrow: 246 rrow: 246)
leaf: 0x6809bf1 109091825 (38: nrow: 246 rrow: 246)
leaf: 0x6809bed 109091821 (39: nrow: 246 rrow: 246)
leaf: 0x6809be9 109091817 (40: nrow: 246 rrow: 246)
leaf: 0x6809be5 109091813 (41: nrow: 246 rrow: 246)
leaf: 0x6809be1 109091809 (42: nrow: 246 rrow: 246)
leaf: 0x6809bdd 109091805 (43: nrow: 246 rrow: 246)
leaf: 0x6809bd9 109091801 (44: nrow: 246 rrow: 246)
leaf: 0x6809bd5 109091797 (45: nrow: 246 rrow: 246)
leaf: 0x6809bd1 109091793 (46: nrow: 248 rrow: 248)
leaf: 0x6809bcd 109091789 (47: nrow: 246 rrow: 246)
leaf: 0x6809bc9 109091785 (48: nrow: 246 rrow: 246)
leaf: 0x6809c08 109091848 (49: nrow: 246 rrow: 246)
leaf: 0x6809c04 109091844 (50: nrow: 246 rrow: 246)
leaf: 0x6809c00 109091840 (51: nrow: 246 rrow: 246)
leaf: 0x6809bfc 109091836 (52: nrow: 246 rrow: 246)
leaf: 0x6809bf8 109091832 (53: nrow: 246 rrow: 246)
leaf: 0x6809bf4 109091828 (54: nrow: 246 rrow: 246)
leaf: 0x6809bf0 109091824 (55: nrow: 246 rrow: 246)
leaf: 0x6809bec 109091820 (56: nrow: 246 rrow: 246)
leaf: 0x6809be8 109091816 (57: nrow: 246 rrow: 246)
leaf: 0x6809be4 109091812 (58: nrow: 246 rrow: 246)
leaf: 0x6809be0 109091808 (59: nrow: 248 rrow: 248)
leaf: 0x6809bdc 109091804 (60: nrow: 246 rrow: 246)
leaf: 0x6809bd8 109091800 (61: nrow: 246 rrow: 246)
leaf: 0x6809bd4 109091796 (62: nrow: 246 rrow: 246)
leaf: 0x6809bd0 109091792 (63: nrow: 246 rrow: 246)
leaf: 0x6809bcc 109091788 (64: nrow: 246 rrow: 246)
leaf: 0x6809c07 109091847 (65: nrow: 246 rrow: 246)
leaf: 0x6809c03 109091843 (66: nrow: 246 rrow: 246)
leaf: 0x6809bff 109091839 (67: nrow: 246 rrow: 246)
leaf: 0x6809bfb 109091835 (68: nrow: 246 rrow: 246)
leaf: 0x6809bf7 109091831 (69: nrow: 246 rrow: 246)
leaf: 0x6809bf3 109091827 (70: nrow: 246 rrow: 246)
leaf: 0x6809bef 109091823 (71: nrow: 246 rrow: 246)
leaf: 0x6809beb 109091819 (72: nrow: 248 rrow: 248)
leaf: 0x6809be7 109091815 (73: nrow: 246 rrow: 246)
leaf: 0x6809be3 109091811 (74: nrow: 246 rrow: 246)
leaf: 0x6809bdf 109091807 (75: nrow: 246 rrow: 246)
leaf: 0x6809bdb 109091803 (76: nrow: 246 rrow: 246)
leaf: 0x6809bd7 109091799 (77: nrow: 246 rrow: 246)
leaf: 0x6809bd3 109091795 (78: nrow: 246 rrow: 246)
leaf: 0x6809bcf 109091791 (79: nrow: 246 rrow: 246)
leaf: 0x6809bcb 109091787 (80: nrow: 246 rrow: 246)
leaf: 0x6809c06 109091846 (81: nrow: 246 rrow: 246)
leaf: 0x6809c02 109091842 (82: nrow: 246 rrow: 246)
leaf: 0x6809bfe 109091838 (83: nrow: 246 rrow: 246)
leaf: 0x6809bfa 109091834 (84: nrow: 246 rrow: 246)
leaf: 0x6809ba2 109091746 (85: nrow: 129 rrow: 129)
leaf: 0x6c07eb5 113278645 (86: nrow: 123 rrow: 123)
leaf: 0x6809bf6 109091830 (87: nrow: 246 rrow: 246)
leaf: 0x6809bf2 109091826 (88: nrow: 246 rrow: 246)
leaf: 0x6809bee 109091822 (89: nrow: 246 rrow: 246)
leaf: 0x6809bea 109091818 (90: nrow: 246 rrow: 246)
leaf: 0x6809b9e 109091742 (91: nrow: 246 rrow: 246)
leaf: 0x6809be6 109091814 (92: nrow: 246 rrow: 246)
leaf: 0x6809be2 109091810 (93: nrow: 246 rrow: 246)
leaf: 0x6809bde 109091806 (94: nrow: 246 rrow: 246)
leaf: 0x6809bda 109091802 (95: nrow: 246 rrow: 246)
leaf: 0x6809b9a 109091738 (96: nrow: 246 rrow: 246)
leaf: 0x6809bd6 109091798 (97: nrow: 246 rrow: 246)
leaf: 0x6809bd2 109091794 (98: nrow: 246 rrow: 246)
----- end tree dump

The index full scan reads block 0x6c07ea0 (the first leaf in key order that still contains rows), while the index fast full scan reads block 0x6809b9a, the data-bearing block that sits earliest in physical storage. Let's look at the contents of these two blocks:
0x6c07ea0 = 113278624 decimal
0x6809b9a = 109091738 decimal

SQL> select dbms_utility.data_block_address_file(113278624) "file", dbms_utility.data_block_address_block(113278624) "block" from dual;

      file      block
---------- ----------
        27      32416

SQL> select dbms_utility.data_block_address_file(109091738) "file", dbms_utility.data_block_address_block(109091738) "block" from dual;

      file      block
---------- ----------
        26      39834

SQL> alter system dump datafile 27 block 32416;

SQL> alter system dump datafile 26 block 39834;

The first 10 rows of block 32416:

row#0[6564] flag: -----, lock: 2
col 0; len 4; (4): c3 02 07 11
col 1; len 6; (6): 07 00 7c 20 00 2b
row#1[6578] flag: -----, lock: 2
col 0; len 4; (4): c3 02 16 4e
col 1; len 6; (6): 07 00 7c 20 00 2a
row#2[6592] flag: -----, lock: 2
col 0; len 4; (4): c3 02 16 4f
col 1; len 6; (6): 07 00 7c 20 00 29
row#3[6606] flag: -----, lock: 2
col 0; len 4; (4): c3 02 16 50
col 1; len 6; (6): 07 00 7c 20 00 28
row#4[6620] flag: -----, lock: 2
col 0; len 4; (4): c3 02 18 02
col 1; len 6; (6): 07 00 7c 20 00 27
row#5[6634] flag: -----, lock: 2
col 0; len 4; (4): c3 02 23 60
col 1; len 6; (6): 07 00 7c 20 00 26
row#6[6648] flag: -----, lock: 2
col 0; len 4; (4): c3 02 24 25
col 1; len 6; (6): 07 00 7c 20 00 25
row#7[6662] flag: -----, lock: 2
col 0; len 4; (4): c3 02 24 28
col 1; len 6; (6): 07 00 7c 20 00 24
row#8[6676] flag: -----, lock: 2
col 0; len 4; (4): c3 02 28 18
col 1; len 6; (6): 07 00 7c 20 00 23
row#9[6690] flag: -----, lock: 2
col 0; len 4; (4): c3 02 42 04
col 1; len 6; (6): 07 00 7c 20 00 22

The first 10 rows of block 39834:
row#0[4591] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 43
col 1; len 6; (6): 02 81 71 f6 00 36
row#1[4605] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 44
col 1; len 6; (6): 02 81 71 f6 00 35
row#2[4619] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 45
col 1; len 6; (6): 02 81 71 f6 00 34
row#3[4633] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 46
col 1; len 6; (6): 02 81 71 f6 00 33
row#4[4647] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 47
col 1; len 6; (6): 02 81 71 f6 00 32
row#5[4661] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 48
col 1; len 6; (6): 02 81 71 f6 00 31
row#6[4675] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 49
col 1; len 6; (6): 02 81 71 f6 00 30
row#7[4689] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 4a
col 1; len 6; (6): 02 81 71 f6 00 2f
row#8[4703] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 4b
col 1; len 6; (6): 02 81 71 f6 00 2e
row#9[4717] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 4c
col 1; len 6; (6): 02 81 71 f6 00 2d

Compare these with the earlier result sets. The first row of block 32416 is 10616; its internal storage format should be:

SQL> select dump(10616,16) from dual;

DUMP(10616,16)
----------------------
Typ=2 Len=4: c3,2,7,11

This is indeed what the block dump shows:

row#0[6564] flag: -----, lock: 2
col 0; len 4; (4): c3 02 07 11
col 1; len 6; (6): 07 00 7c 20 00 2b

Now look at the first row of block 39834:

SQL> select dump(66266,16) from dual;

DUMP(66266,16)
-----------------------
Typ=2 Len=4: c3,7,3f,43

This matches the dump as well:

row#0[4591] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 43
col 1; len 6; (6): 02 81 71 f6 00 36

This proves the difference between index full scan and index fast full scan described above.
We can also use event 10046 to trace the path each scan takes.

SQL> ALTER SESSION SET EVENTS 'immediate trace name flush_cache';

(Flush the buffer cache so that the 'db file sequential read' and 'db file scattered read' events can be observed.)

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> select object_id from test where rownum < 11;

10 rows selected.

SQL> alter session set events '10046 trace name context off';

Session altered.

[oracle@csdbc udump]$ grep read cs-dbc_ora_15596.trc

Redo thread mounted by this instance: 1
WAIT #1: nam='db file sequential read' ela= 33 p1=26 p2=39820 p3=1
WAIT #1: nam='db file sequential read' ela= 21 p1=26 p2=39817 p3=1
WAIT #1: nam='db file sequential read' ela= 17 p1=26 p2=39819 p3=1
WAIT #1: nam='db file parallel read' ela= 53 p1=2 p2=2 p3=2
WAIT #1: nam='db file scattered read' ela= 466 p1=26 p2=39821 p3=16

The 'db file sequential read' waits at the front come from reading the segment header and similar operations. Focus on the 'db file scattered read' event: the index fast full scan uses multiblock reads, reading db_file_multiblock_read_count blocks (16 in this example) starting from block 39821. The block we care about, 39834, lies within that range.

Now look at the 10046 trace for the index full scan:

SQL> ALTER SESSION SET EVENTS 'immediate trace name flush_cache';

(Flush the buffer cache again so that the read events can be observed.)

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> select/*+ index(test ind_TEST_ID)*/ object_id from test where rownum < 11;

 OBJECT_ID
----------
10616
12177
12178
12179
12301
13495
13536
13539
13923
16503

10 rows selected.

SQL> alter session set events '10046 trace name context off';

Session altered.

[oracle@csdbc udump]$ grep read cs-dbc_ora_15609.trc

Redo thread mounted by this instance: 1
WAIT #1: nam='db file sequential read' ela= 49 p1=26 p2=39821 p3=1
(this is the root block, exactly the 0x6809b8d seen in the earlier index tree dump)
WAIT #1: nam='db file sequential read' ela= 32 p1=26 p2=39830 p3=1
WAIT #1: nam='db file sequential read' ela= 40 p1=27 p2=32449 p3=1
WAIT #1: nam='db file sequential read' ela= 35 p1=27 p2=32445 p3=1
WAIT #1: nam='db file sequential read' ela= 28 p1=27 p2=32433 p3=1
WAIT #1: nam='db file sequential read' ela= 19 p1=27 p2=32429 p3=1
WAIT #1: nam='db file sequential read' ela= 34 p1=27 p2=32425 p3=1
WAIT #1: nam='db file sequential read' ela= 32 p1=27 p2=32421 p3=1
WAIT #1: nam='db file sequential read' ela= 33 p1=27 p2=32417 p3=1
WAIT #1: nam='db file sequential read' ela= 29 p1=27 p2=32413 p3=1
WAIT #1: nam='db file sequential read' ela= 37 p1=27 p2=32409 p3=1
WAIT #1: nam='db file sequential read' ela= 32 p1=27 p2=32405 p3=1
WAIT #1: nam='db file sequential read' ela= 35 p1=27 p2=32401 p3=1
WAIT #1: nam='db file sequential read' ela= 34 p1=27 p2=32397 p3=1
WAIT #1: nam='db file sequential read' ela= 31 p1=27 p2=32456 p3=1
WAIT #1: nam='db file sequential read' ela= 29 p1=27 p2=32452 p3=1
WAIT #1: nam='db file sequential read' ela= 31 p1=27 p2=32448 p3=1
WAIT #1: nam='db file sequential read' ela= 30 p1=27 p2=32444 p3=1
WAIT #1: nam='db file sequential read' ela= 38 p1=26 p2=39858 p3=1
WAIT #1: nam='db file sequential read' ela= 31 p1=27 p2=32440 p3=1
WAIT #1: nam='db file sequential read' ela= 32 p1=27 p2=32436 p3=1
WAIT #1: nam='db file sequential read' ela= 35 p1=27 p2=32432 p3=1
WAIT #1: nam='db file sequential read' ela= 31 p1=27 p2=32428 p3=1
WAIT #1: nam='db file sequential read' ela= 29 p1=26 p2=39854 p3=1
WAIT #1: nam='db file sequential read' ela= 36 p1=27 p2=32424 p3=1
WAIT #1: nam='db file sequential read' ela= 32 p1=27 p2=32420 p3=1
WAIT #1: nam='db file sequential read' ela= 36 p1=27 p2=32416 p3=1

The path taken by the index full scan is exactly what was described at the start of this article: locate the root block, then read block after block along the leaf-block chain. By now you should have a good grasp of the difference between index full scan and index fast full scan. Finally, a few words on how the two differ when it comes to sorting.

SQL> set autotrace trace;

SQL> select object_id from test order by object_id;

17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=41 Card=17837 Bytes=71348)
1 0 INDEX (FULL SCAN) OF 'IND_TEST_ID' (NON-UNIQUE) (Cost=101 Card=17837 Bytes=71348)

Because the query has an ORDER BY, Oracle automatically chose the index full scan and avoided a sort. What if we force an index fast full scan?

SQL> select /*+ index_ffs(test ind_test_id) */ object_id from test order by object_id;
17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=59 Card=17837 Bytes=71348)
1 0 SORT (ORDER BY) (Cost=59 Card=17837 Bytes=71348)
2 1 INDEX (FAST FULL SCAN) OF 'IND_TEST_ID' (NON-UNIQUE) (Cost=11 Card=17837 Bytes=71348)

The index fast full scan plan needs an extra SORT (ORDER BY) step. If you have followed this article closely, the reason should be clear: a fast full scan reads the index segment with multiblock reads in physical block order, so the rows come back unsorted, whereas a full scan walks the leaf-block chain and therefore returns the keys already in order.

March 10, 2009

dbms_stats throws an error when gathering statistics on case-sensitive objects

Filed under: [System Performance tuning] — Tags: , , , , — zhefeng @ 3:33 pm

I was trying to use dbms_stats to gather statistics for a table named "XTREME_EN"."Financials", but got this error:

sys@FUN10U09> exec dbms_stats.gather_table_stats(ownname => 'XTREME_EN',tabname
=> 'Financials');
BEGIN dbms_stats.gather_table_stats(ownname => 'XTREME_EN',tabname => 'Financials'); END;

*
ERROR at line 1:
ORA-20000: Unable to analyze TABLE “XTREME_EN”.”FINANCIALS”, insufficient privil
eges or does not
exist
ORA-06512: at “SYS.DBMS_STATS”, line 13427
ORA-06512: at “SYS.DBMS_STATS”, line 13457
ORA-06512: at line 1

However, the traditional analyze command works:

sys@FUN10U09> ANALYZE TABLE "XTREME_EN"."Financials" COMPUTE STATISTICS;

Table analyzed.

Is that because dbms_stats is dumber than analyze? Of course not; Oracle always encourages you to use dbms_stats instead of analyze.

After searching Metalink, I found this doc: "DBMS_STATS Reports ORA-20000 and ORA-06512 On Case Sensitive Object Names", Doc ID: 343355.1

https://metalink2.oracle.com/metalink/plsql/f?p=130:14:4774819970862237887::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,343355.1,1,1,1,helvetica

According to this doc, we have to enclose the object name in double quotes inside the string argument, much as SQL itself requires double quotes for mixed-case identifiers.

For our case:

sys@FUN10U09> exec dbms_stats.gather_table_stats(ownname => 'XTREME_EN',tabname
=> '"Financials"');

PL/SQL procedure successfully completed.

Works perfectly!

December 19, 2008

how to use the dbms_stats package in an efficient way

Filed under: [System Performance tuning] — zhefeng @ 4:31 pm

I found a very good doc about the Oracle dbms_stats package on the dba-oracle website; here is the link (http://www.dba-oracle.com/oracle_tips_dbms_stats1.htm)

The old-fashioned "analyze table" and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance, because the cost-based SQL optimizer (CBO) relies on the quality of the statistics to choose the best execution plan for every SQL statement. The dbms_stats utility does a far better job of estimating statistics, especially for large partitioned tables, and better stats result in faster SQL execution plans.

Let’s see how dbms_stats works. It’s easy! Here is a sample execution of dbms_stats with the options clause:
exec dbms_stats.gather_schema_stats( -
ownname => 'SCOTT', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size repeat', -
degree => 34 -
)

When the options clause is specified you may specify GATHER options. When GATHER AUTO is specified, the only additional valid parameters are ownname, stattab, statid, objlist and statown; all other parameter settings are ignored.
exec dbms_stats.gather_schema_stats( -
ownname => 'SCOTT', -
options => 'GATHER AUTO' -
)

There are several values for the options parameter that we need to know about:

*gather – re-analyzes the whole schema.

*gather empty – Only analyze tables that have no existing statistics.

*gather stale – Only re-analyze tables with more than 10% modifications (inserts, updates, deletes).

*gather auto – This will re-analyze objects which currently have no statistics and objects with stale statistics. (like “gather empty”+”gather stale”)

Note that both gather stale and gather auto require monitoring. If you issue the “alter table xxx monitoring” command, Oracle tracks changed tables with the dba_tab_modifications view. Below we see that the exact number of inserts, updates and deletes are tracked since the last analysis of statistics.
SQL> desc dba_tab_modifications;

Name Type
--------------------------------
TABLE_OWNER VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
SUBPARTITION_NAME VARCHAR2(30)
INSERTS NUMBER
UPDATES NUMBER
DELETES NUMBER
TIMESTAMP DATE
TRUNCATED VARCHAR2(3)
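
To see what has been tracked, you can query the view directly. A minimal sketch, assuming SCOTT's tables are being monitored (the counters are buffered in memory, so flush them first):

SQL> -- flush the in-memory counters so the view is current
SQL> exec dbms_stats.flush_database_monitoring_info;

SQL> select table_name, inserts, updates, deletes, timestamp
  2  from dba_tab_modifications
  3  where table_owner = 'SCOTT';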

The most interesting of these options is the gather stale option. Because all statistics will become stale quickly in a robust OLTP database, we must remember the rule for gather stale is > 10% row change (based on num_rows at statistics collection time).

Hence, in a busy OLTP system almost every table except read-only tables will be re-analyzed with the gather stale option, which means gather stale is only really selective on systems that are largely read-only. For example, if only 5% of the database tables get significant updates, then only 5% of the tables will be re-analyzed with the "gather stale" option.

The CASCADE option

When analyzing specific tables, the cascade option can be used to analyze all related objects based on foreign-key constraints. For example, stats$snapshot has foreign key referential integrity into all subordinate tables (stats$sysstat, etc.), so a single analyze can invoke an analyze of all subordinate tables:

exec dbms_stats.gather_table_stats( -
ownname => 'PERFSTAT', -
tabname => 'STATS$SNAPSHOT', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size skewonly', -
cascade => true, -
degree => 7 -
)

The DEGREE Option

Note that you can also parallelize the collection of statistics, because dbms_stats performs full-table and full-index scans. When you set degree=x, Oracle will invoke parallel query slave processes to speed up table access. Degree is usually set to about the number of CPUs minus 1 (one CPU is left for the OPQ query coordinator).
Automating sample size with dbms_stats

Now that we have seen how the dbms_stats options work, let's see how to specify the sample size for dbms_stats. The following estimate_percent argument is a newer way to let dbms_stats automatically estimate the "best" percentage of a segment to sample when gathering statistics:

estimate_percent => dbms_stats.auto_sample_size

You can verify the accuracy of the automatic statistics sampling by looking at the dba_tables sample_size column. It is interesting to note that Oracle chooses between 5% to 20% for a sample_size when using automatic sampling.
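
A quick way to check this for a schema you have just analysed; a sketch, assuming SCOTT:

SQL> select table_name, num_rows, sample_size,
  2         round(sample_size*100/num_rows,1) pct_sampled
  3  from dba_tables
  4  where owner = 'SCOTT'
  5  and num_rows > 0;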

In our next installment we will look at automating the collection of histogram data with dbms_stats.

December 11, 2008

How to set trace for other sessions, for your own session and at instance level

Filed under: [System Performance tuning] — zhefeng @ 10:32 am

This is a very good, detailed article about Oracle tracing. The original link is here:
http://www.petefinnigan.com/ramblings/how_to_set_trace.htm

Tools to analyse trace files

Up to and including Oracle 10g, the tool generally used to analyse trace files is called tkprof; it formats the generated trace files into a more readable form. Understanding the raw trace file format seems daunting on first inspection; a good source for the details is Metalink note 39817.1. 10g introduces a second tool, trcsess, designed for the new trace facilities that identify trace by client identifier or by a combination of service name / module / action. This allows tracing to work even when connection pooling and multi-threading are used, where an individual client can share many different sessions, by collating the relevant pieces of many trace files into one.
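
For reference, the two tools are invoked roughly like this (trace file names are illustrative; the tkprof sort and filter options are optional):

$ tkprof sans_ora_15596.trc sans_ora_15596.prf sys=no sort=exeela
$ trcsess output=combined.trc clientid='pete id' sans_ora_*.trc
$ tkprof combined.trc combined.prf sys=no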

Find out where the trace file will be written to

If the user you are using is not a DBA, or more specifically has not been granted access to the data dictionary view V$PARAMETER, then you will need this technique to find out where your trace files are written:

SQL> set serveroutput on size 1000000 for wra
SQL> declare
2 paramname varchar2(256);
3 integerval binary_integer;
4 stringval varchar2(256);
5 paramtype binary_integer;
6 begin
7 paramtype:=dbms_utility.get_parameter_value('user_dump_dest',integerval,stringval);
8 if paramtype=1 then
9 dbms_output.put_line(stringval);
10 else
11 dbms_output.put_line(integerval);
12 end if;
13 end;
14 /
C:\oracle\admin\sans\udump

PL/SQL procedure successfully completed.

SQL>

If the user you are using has access to the base views then you can do the following instead.

SQL> select name,value
2 from v$parameter
3 where name='user_dump_dest';

NAME
----------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
user_dump_dest
C:\oracle\admin\sans\udump

SQL>

Making trace files available

There is an undocumented parameter, _trace_files_public, which if set to true changes the file permissions in the user_dump_dest directory so that the trace files created there are readable by everyone. Beware: this is an undocumented parameter and should not be routinely set to true, as some information in trace files can be used by hackers or malicious users. You can set the parameter by adding the following lines to the init.ora file:

# allow trace files to be created with public permissions
_trace_files_public=true
# to disable the feature, comment the line out:
#_trace_files_public=true
# or set it explicitly to false:
_trace_files_public=false

Here is the SQL to check the value of this parameter:

SQL> select x.ksppinm name,y.ksppstvl value
2 from sys.x$ksppi x,sys.x$ksppcv y
3 where x.inst_id=userenv('Instance')
4 and y.inst_id=userenv('Instance')
5 and x.indx=y.indx
6 and x.ksppinm='_trace_files_public';

NAME
----------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
_trace_files_public
FALSE

SQL>

Let's start with some examples of how to set trace for another session that is connected to the database.

Now find the SID and SERIAL# of the other session

We are using a simple example: the session we are looking for belongs to the user SCOTT, and we are logged in AS SYSDBA. We need to be logged in as SYS or AS SYSDBA so that we can access the DBMS_SUPPORT and DBMS_SYSTEM packages needed to set trace in another session or in our own session. As with the first example about access to v$parameter, a user with access to the views V$SESSION and V$PROCESS is needed. First let's find the SID and SERIAL#:

SQL> connect system/manager@sans as sysdba
Connected.
SQL> col sid for 999999
SQL> col serial# for 999999
SQL> col username for a20
SQL> col osuser for a20
SQL> select s.sid,s.serial#,s.username,s.osuser
2 from v$session s,v$process p
3 where s.paddr=p.addr;

SID SERIAL# USERNAME OSUSER
------- ------- -------------------- --------------------
1 1 SYSTEM
2 1 SYSTEM
3 1 SYSTEM
4 1 SYSTEM
5 1 SYSTEM
6 1 SYSTEM
7 1 SYSTEM
8 1 SYSTEM
9 253 SYSTEM ZULIA\pete
10 20 SCOTT ZULIA\pete

10 rows selected.

SQL>

Great: the SID and SERIAL# that we need are 10 and 20.

A word about trace levels

Before we use the DBMS_SYSTEM package to set trace in SCOTT's session we need to discuss what levels are. Trace in fact sets an event in the Oracle kernel. (What is an event? An event is simply a flag to the Oracle kernel telling it to emit some trace messages, to add some additional processing, or to activate some new functionality. Some events are used by support analysts and developers to force certain conditions to occur for testing purposes.) In our case we want event number 10046, which tells the Oracle kernel to emit trace lines and timings. The levels available through the various interfaces used to set trace are:

* Level 0 = No statistics generated
* Level 1 = standard trace output including parsing, executes and fetches plus more.
* Level 2 = Same as level 1.
* Level 4 = Same as level 1 but includes bind information
* Level 8 = Same as level 1 but includes wait information
* Level 12 = Same as level 1 but includes binds and waits

For a complete list of events that can be set, look at the file $ORACLE_HOME/rdbms/mesg/oraus.msg on Unix or Linux; this file is not shipped on Windows systems. Also, setting any event other than trace (10046) should not be done without the guidance of Oracle support.
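
On Unix or Linux you can look up an event's one-line description straight from that file; a sketch (the exact message text can vary by version):

$ grep '^10046' $ORACLE_HOME/rdbms/mesg/oraus.msg
10046, 00000, "enable SQL statement timing"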

Set trace in another session using DBMS_SYSTEM

First let's set trace in SCOTT's session using the DBMS_SYSTEM package. Before we do, let's turn on timed statistics so that the trace files get timing info, and also set the dump file size so that there is plenty of room for the trace being generated.

SQL> exec dbms_system.set_bool_param_in_session(10,20,'timed_statistics',true);

PL/SQL procedure successfully completed.

SQL> exec dbms_system.set_int_param_in_session(10,20,'max_dump_file_size',2147483647);

PL/SQL procedure successfully completed.

OK, here we set trace in SCOTT’s session

SQL> -- now use the standard dbms_system interface
SQL> exec dbms_system.set_sql_trace_in_session(10,20,true);

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> exec dbms_system.set_sql_trace_in_session(10,20,false);

PL/SQL procedure successfully completed.

SQL>

A second way to set trace in another session – This time setting trace level as well

Next we can again use the DBMS_SYSTEM interface but this time use the set event syntax. This allows us to set any event in the database. This is of course not sanctioned by Oracle support and can cause damage to your database if not done correctly. Use this interface with care and just set 10046 (trace) events. Here is how it is done:

SQL> exec dbms_system.set_ev(10,20,10046,8,'');

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> exec dbms_system.set_ev(10,20,10046,0,'');

PL/SQL procedure successfully completed.

Installing the DBMS_SUPPORT package

Using the example above we set trace to level 8; you can of course set it to any level you wish from the list we discussed above. Next we will use the DBMS_SUPPORT package to set trace. This package is not installed by default and is in fact undocumented; indeed, on some platforms and versions it is not even shipped, and you will need to talk to Oracle support and get it from Metalink. First we will install the package:

SQL> -- now do the same with dbms_support
SQL> -- the package has to be installed first - you should ask Oracle first though!
SQL> @%ORACLE_HOME%\rdbms\admin\dbmssupp.sql

Package created.

Package body created.

SQL>

Use DBMS_SUPPORT to set trace in another user's session

Next, use this interface to set trace again for SCOTT's session that we found earlier. Here it is:

SQL> exec dbms_support.start_trace_in_session(10,20,waits=>true,binds=>false);

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> exec dbms_support.stop_trace_in_session(10,20);

PL/SQL procedure successfully completed.

SQL>

Use DBMS_SUPPORT to set trace in your own session

OK, that's how to set trace in SCOTT's session. How do we set trace in our own session? First, we can use any of the approaches seen above and pass in the SID and SERIAL# of our own session, but there are other methods too. The first again uses the DBMS_SUPPORT package. Here it is:

SQL> exec dbms_support.start_trace(waits=>true,binds=>false);

PL/SQL procedure successfully completed.

SQL> -- run some code
SQL> exec dbms_support.stop_trace;

PL/SQL procedure successfully completed.

SQL>

Use DBMS_SESSION to set trace in your own session

The next method for setting trace in our own session also uses a built-in package, this time DBMS_SESSION. Here it is:

SQL> -- in your own session using dbms_session
SQL> exec dbms_session.set_sql_trace(true);

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> exec dbms_session.set_sql_trace(false);

PL/SQL procedure successfully completed.

SQL>

Using oradebug to set trace through SQL*Plus

oradebug is an essentially undocumented debugging utility intended for use by Oracle support analysts for various tasks, one of which is setting trace. oradebug is available from svrmgrl before Oracle 9i and from SQL*Plus afterwards. The first step in using this tool is to find the OS PID or the Oracle PID of the process you want to analyse. You can do this as follows:

SQL> connect system/manager@sans as sysdba
Connected.
SQL> col sid for 999999
SQL> col serial# for 999999
SQL> col spid for a8
SQL> col username for a20
SQL> col osuser for a20
1 select s.sid,s.serial#,p.spid,p.pid,s.username,s.osuser
2 from v$session s,v$process p
3* where s.paddr=p.addr
SQL> /

SID SERIAL# SPID PID USERNAME OSUSER
------- ------- -------- ---------- -------------------- --------------------
1 1 2528 2 SYSTEM
2 1 2536 3 SYSTEM
3 1 2540 4 SYSTEM
4 1 2544 5 SYSTEM
5 1 2552 6 SYSTEM
6 1 2604 7 SYSTEM
7 1 2612 8 SYSTEM
8 1 2652 9 SYSTEM
10 343 3740 12 SYS ZULIA\pete
12 70 864 13 SCOTT ZULIA\pete

10 rows selected.

Now that we have found the Operating System PID and Oracle PID (values 864 and 13 in this case) of SCOTT’s session we can use this to set trace with the oradebug tool as follows:

SQL> -- set the OS PID
SQL> oradebug setospid 864
Windows thread id: 864, image: ORACLE.EXE
SQL> -- or set the Oracle pid
SQL> oradebug setorapid 13
Windows thread id: 864, image: ORACLE.EXE
SQL> -- set the trace file size to unlimited
SQL> oradebug unlimit
Statement processed.
SQL> -- now turn on trace for SCOTT
SQL> oradebug event 10046 trace name context forever, level 12
Statement processed.
SQL> -- run some queries in another session and then turn trace off
SQL> oradebug event 10046 trace name context off
Statement processed.

Some things to be aware of

You should be aware that some of these methods allow setting of extended trace and some do not. Those that allow extended trace are easy to spot: they either let you set the trace level explicitly or take suitably named parameters such as waits or binds, which again enable the extended trace facilities. Some methods have a fixed default level; dbms_support.start_trace, for example, defaults waits to true, which corresponds to level 8. The rest set trace at the normal level (level 1, plain sql_trace).

One other point to note: so far we have looked at ways to set trace in a session other than the one you are logged into, and at ways of setting trace in your own session. There is a third option, which is to set trace for the whole system (i.e. for all users' sessions). This is not recommended unless you know what you are doing and are monitoring the trace output, as you can quickly fill the file system.

Setting trace at the instance level using the init.ora

Trace can be set in the database initialization file, the init.ora. If you use an spfile you can still edit the init.ora and then copy it to the spfile. Simply add the following line to the init.ora file:

sql_trace=true

You can also set timed_statistics and max_dump_file_size in the init.ora file in the same way, i.e.:

timed_statistics=true
max_dump_file_size=unlimited

Trace can also be disabled at the instance level by simply commenting out the same parameter or by deleting it. A commented line is shown next:

#sql_trace=true

Or you can set the same parameter to false:

sql_trace=false

A second instance level method – setting events

Another method that can be used to set trace at the instance level is to add an event (or multiple events) to the initialization file, the init.ora, as described above. Again, if you use spfiles then you can copy the init.ora to the spfile or use ALTER SYSTEM to set the value in the spfile. Here is an example of setting the trace event 10046 to level 12 in the initialization file:

# set the event in the init.ora
event = "10046 trace name context forever, level 12"
# to turn off the event simply comment out the line as follows:
# event = "10046 trace name context forever, level 12"

Using ALTER SESSION to set trace in your own session

The alter session command can be used to set trace for the current session as follows:

SQL> alter session set sql_trace=true;

Session altered.

SQL> -- execute some code
SQL> alter session set sql_trace=false;

Session altered.

SQL>

This method can also be used to set timing and dump file size for the current session as follows:

SQL> alter session set timed_statistics=true;

Session altered.

SQL> alter session set max_dump_file_size=unlimited;

Session altered.

SQL>

Using ALTER SESSION to set extended trace using events

One last method I want to demonstrate is the alter session syntax to set events. Again, stick to 10046 (trace) and do not attempt to set any of the other available events on a supported system without Oracle's say-so. Here is an example of setting trace to level 12, including binds and waits:

SQL> alter session set events '10046 trace name context forever, level 12';

Session altered.

SQL> -- execute some code
SQL> alter session set events '10046 trace name context off';

Session altered.

SQL>

A sample logon trigger to set trace

Quite often you would like trace to be set for a session as soon as the user logs on. Also you may want to be able to set trace for a specific set of users when they log in. This can easily be done with a database logon trigger. Here is a sample trigger.

Connected to:
Personal Oracle9i Release 9.2.0.1.0 – Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 – Production

SQL> create or replace trigger set_trace after logon on database
2 begin
3 if user not in ('SYS','SYSTEM') then
4 execute immediate 'alter session set timed_statistics=true';
5 execute immediate 'alter session set max_dump_file_size=unlimited';
6 execute immediate 'alter session set sql_trace=true';
7 end if;
8 exception
9 when others then
10 null;
11 end;
12 /

Trigger created.

SQL> sho errors
No errors.
SQL>

OK, that was easy. You can also use the alter session set events '10046 trace name context forever, level 12' syntax in the trigger if you prefer, and you can add other checks using any valid PL/SQL logic; a variant that traces only a specific set of users is sketched below. One tip: if you have any trouble with your system trigger and it causes logins to fail, always include, as I have, an exception handler that calls null for any error condition. If all else fails you can disable system triggers by setting the parameter _system_trig_enabled=false in the initialisation file; this undocumented / hidden parameter stops the processing of system triggers such as logon triggers.
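
For example, a minimal sketch of a trigger that enables level 12 trace only for the (hypothetical) users SCOTT and HR:

SQL> create or replace trigger set_trace_some_users after logon on database
  2  begin
  3    -- trace only the named accounts; adjust the list to suit
  4    if user in ('SCOTT','HR') then
  5      execute immediate 'alter session set timed_statistics=true';
  6      execute immediate 'alter session set max_dump_file_size=unlimited';
  7      execute immediate
  8        'alter session set events ''10046 trace name context forever, level 12''';
  9    end if;
 10  exception
 11    -- never block a login because tracing could not be enabled
 12    when others then null;
 13  end;
 14  /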

Using ALTER SYSTEM to set trace at the instance level

Finally you can also use the alter system syntax to set trace at the system level. Here is a simple example:

SQL> alter system set sql_trace=true scope=spfile;

System altered.

SQL>
SQL> — to turn it off again do:
SQL> alter system set sql_trace=false scope=spfile;

System altered.

SQL>

Checking the privileges of the packages used to set trace

Some of the packages used in these examples have to be run as SYS, or you need to be logged in AS SYSDBA, or specific privileges need to be granted on those packages to the user that will run them. The default privileges for DBMS_SYSTEM, DBMS_SUPPORT and DBMS_SESSION are shown next in output from who_can_access.sql (a script that shows privileges hierarchically for an object whose name is passed in). Here they are:

-- check who has access to dbms_system
who_can_access: Release 1.0.0.0.0 – Production on Fri Feb 27 12:53:24 2004
Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved.

get user input

NAME OF OBJECT TO CHECK [USER_OBJECTS]: dbms_system
OWNER OF THE OBJECT TO CHECK [USER]: sys
OUTPUT METHOD Screen/File [S]:
FILE NAME FOR OUTPUT [priv.lst]:
OUTPUT DIRECTORY [/tmp]:

Checking object => SYS.DBMS_SYSTEM
====================================================================

Object type is => PACKAGE (TAB)
Privilege => EXECUTE is granted to =>
Role => OEM_MONITOR which is granted to =>
User => SYS

PL/SQL procedure successfully completed.

For updates please visit http://www.petefinnigan.com/tools.htm

SQL>

-- check who has access to dbms_support
who_can_access: Release 1.0.0.0.0 – Production on Fri Feb 27 12:54:29 2004
Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved.

get user input

NAME OF OBJECT TO CHECK [USER_OBJECTS]: dbms_support
OWNER OF THE OBJECT TO CHECK [USER]: sys
OUTPUT METHOD Screen/File [S]:
FILE NAME FOR OUTPUT [priv.lst]:
OUTPUT DIRECTORY [/tmp]:

Checking object => SYS.DBMS_SUPPORT
====================================================================

PL/SQL procedure successfully completed.

For updates please visit http://www.petefinnigan.com/tools.htm

SQL>

-- check who has access to dbms_session
who_can_access: Release 1.0.0.0.0 – Production on Fri Feb 27 12:55:31 2004
Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved.

get user input

NAME OF OBJECT TO CHECK [USER_OBJECTS]: dbms_session
OWNER OF THE OBJECT TO CHECK [USER]: sys
OUTPUT METHOD Screen/File [S]:
FILE NAME FOR OUTPUT [priv.lst]:
OUTPUT DIRECTORY [/tmp]:

Checking object => SYS.DBMS_SESSION
====================================================================

Object type is => PACKAGE (TAB)
Privilege => EXECUTE is granted to =>
Role => PUBLIC

PL/SQL procedure successfully completed.

For updates please visit http://www.petefinnigan.com/tools.htm

SQL>

That's it: there are many ways to set trace in your own session, in other sessions, and at system level, and many ways to enable extended trace. Beware of the privileges needed to run some of them, and beware of setting events explicitly.

New tracing methods in Oracle 10g – DBMS_MONITOR

Oracle 10g offers a new package that allows sessions to be traced end to end in multi-tier architectures that share sessions using connection pooling or multi-threading. This package allows applications written using, for instance, JDBC / Java or something like Forte to be traced where it would normally be very difficult to identify the database session belonging to a client, as the session / client pairings change with time.

The new functionality works at three levels. You can use the old SID / SERIAL# pairing to identify a session, but you can also use a client identifier, or a service name / module / action combination, to identify the client session to be traced. The package also offers a set of procedures to gather statistics for the same groupings; these statistics can then be selected from dynamic views.

Let's now take a look at some of the features of this package.

Setting trace with DBMS_MONITOR using SID / SERIAL#

Trace can be set for another user's session, for a session identified by SID alone, or for the current session. First let's look at tracing another user's session; we need to get the SID and SERIAL#, and we will use SCOTT connected through SQL*Plus as our sample session:

SQL> select s.sid,s.serial#,s.username
2 from v$session s, v$process p
3 where s.paddr=p.addr
SQL> /


SID SERIAL# USERNAME
---------- ---------- ------------------------------
248 153 SCOTT
258 61 DBSNMP
251 418 SYSMAN
255 961 SYS
249 215

27 rows selected.

SQL>

OK, as with the previous methods we can use the SID / SERIAL# pair of 248 and 153. Let's set trace for this user session:

SQL> exec dbms_monitor.session_trace_enable(248,153,TRUE,FALSE);

PL/SQL procedure successfully completed.

SQL> -- execute some sql
SQL> -- in the other session
SQL> -- turn trace off
SQL> exec dbms_monitor.session_trace_disable(248,153);

PL/SQL procedure successfully completed.

SQL>

Setting trace at the session level using DBMS_MONITOR

The same procedures can be used to set trace for the session by omitting the serial#. This is demonstrated next:

SQL> exec dbms_monitor.session_trace_enable(248);

PL/SQL procedure successfully completed.

SQL> -- execute some sql in the other session
SQL> -- turn off trace
SQL> exec dbms_monitor.session_trace_disable(248);

PL/SQL procedure successfully completed.

SQL> -- or you can turn it on with
SQL> exec dbms_monitor.session_trace_enable(248,null);

PL/SQL procedure successfully completed.

SQL> -- turn off again with:
SQL> exec dbms_monitor.session_trace_disable(248,null);

PL/SQL procedure successfully completed.

SQL>

Setting trace for the current session using DBMS_MONITOR

Setting trace for the current user session is done by leaving out the SID and SERIAL# altogether: either set them to NULL or omit the arguments entirely. Here is an example:

SQL> -- trace the current session
SQL> exec dbms_monitor.session_trace_enable(null,null);

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> -- turn it off again
SQL> exec dbms_monitor.session_trace_disable(null,null);

PL/SQL procedure successfully completed.

SQL> -- to get waits and binds do
SQL> exec dbms_monitor.session_trace_enable(null,null,true,true);

PL/SQL procedure successfully completed.

SQL> -- execute some code
SQL> -- then turn off trace
SQL> exec dbms_monitor.session_trace_disable(null,null);

PL/SQL procedure successfully completed.

SQL> -- or turn it on like this
SQL> exec dbms_monitor.session_trace_enable();

PL/SQL procedure successfully completed.

SQL> -- execute some SQL and then turn off trace
SQL> exec dbms_monitor.session_trace_disable();

PL/SQL procedure successfully completed.

SQL>

That completes some of the ways to use DBMS_MONITOR to set trace: by SID and SERIAL#, for a session identified by SID alone, or for the current session.

Set trace using a client identifier

Tracing using the client identifier allows trace to be set across multiple sessions, as many Oracle shadow processes can work on behalf of one client. The trace setting also persists across all instances and restarts. First we need to see how the client identifier is set; this can be done using the DBMS_SESSION package as follows:

SQL> exec dbms_session.set_identifier('pete id');

PL/SQL procedure successfully completed.

SQL>

We can now check for a specific identifier in the V$SESSION view using the client_identifier column.

SQL> select s.username,s.client_identifier
2 from v$session s,v$process p
3 where s.paddr=p.addr
4 and client_identifier is not null;

USERNAME
------------------------------
CLIENT_IDENTIFIER
----------------------------------------------------------------
SCOTT
pete id

SQL>

OK, now we can use this information to set trace for this client identifier as follows:

SQL> exec dbms_monitor.client_id_trace_enable('pete id',true,false);

PL/SQL procedure successfully completed.

SQL> -- wait for the client session to do something
SQL> -- turn off trace as follows:
SQL> exec dbms_monitor.client_id_trace_disable('pete id');

PL/SQL procedure successfully completed.

SQL>

That was quite easy! Next let's look at setting trace at the service / module / action level.

Setting trace for service/module/action with DBMS_MONITOR

This method of setting trace acts hierarchically. Trace is set globally for the whole database (all instances) unless you override this by naming an instance in the call that turns on trace; as I am on a single-instance database for this example, I will leave that parameter at its default. There are three levels to the hierarchy: if we set ACTION to NULL, all actions for the given module and service are traced; if we also set MODULE to NULL, all actions for all modules under the specified service name are traced. The trace will be collected into multiple trace files, and the new trcsess tool must be used to collate them into one usable file, as sketched below.
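
A minimal sketch of that collation step, run from the udump directory (file names are illustrative):

$ trcsess output=sans_accounts.trc service=SANS module=ACCOUNTS *.trc
$ tkprof sans_accounts.trc sans_accounts.prf sys=no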

The service name can be set using the package DBMS_SERVICE and the procedure CREATE_SERVICE. Here is an example:

SQL> exec dbms_service.create_service('Test Service','test network');

PL/SQL procedure successfully completed.

SQL> -- it can be deleted with
SQL> exec dbms_service.delete_service('Test Service');

PL/SQL procedure successfully completed.

SQL>

The service name may well be set already by the tool you are using. It can be used to group together a set of programs / modules that perform some business task. Next let's see how the module and action can be set.

SQL> -- set action
SQL> exec dbms_application_info.set_action('PAYMENT');

PL/SQL procedure successfully completed.

SQL> -- set the module
SQL> exec dbms_application_info.set_module('ACCOUNTS','PAYMENT');

PL/SQL procedure successfully completed.

SQL>

To view the relevant service names, modules and actions for sessions in the database, you can use the V$SESSION view as follows:

SQL> col service_name for a15 wrapped
SQL> col username for a15 wrapped
SQL> col module for a15 wrapped
SQL> col action for a15 wrapped
SQL> select s.username,s.service_name,s.module,s.action
2 from v$session s,v$process p
3 where s.paddr=p.addr;

USERNAME SERVICE_NAME MODULE ACTION
--------------- --------------- --------------- ---------------
SYSMAN SANS
SYSMAN SANS OEM.SystemPool
DBSNMP SYS$USERS emagent@emil (T
NS V1-V3)

DBSNMP SYS$USERS emagent@emil (T
NS V1-V3)

SYS$USERS
SYS SANS ACCOUNTS PAYMENT
SCOTT SANS SQL*Plus

29 rows selected.

SQL>

As we deleted our sample service name set up with DBMS_SERVICE.CREATE_SERVICE, we will just use the default value SANS inserted by Oracle in our test case. Let's test some of the methods of setting trace with this functionality.

SQL> -- set trace for all modules and actions for the SANS service name
SQL> exec dbms_monitor.serv_mod_act_trace_enable('SANS',DBMS_MONITOR.ALL_MODULES,DBMS_MONITOR.ALL_ACTIONS,TRUE,FALSE,NULL);

PL/SQL procedure successfully completed.

SQL> -- turn it off
SQL> exec dbms_monitor.serv_mod_act_trace_disable('SANS');

PL/SQL procedure successfully completed.

SQL> -- now trace all actions for service SANS and module ACCOUNTS
SQL> exec dbms_monitor.serv_mod_act_trace_enable('SANS','ACCOUNTS',DBMS_MONITOR.ALL_ACTIONS,TRUE,FALSE,NULL);

PL/SQL procedure successfully completed.

SQL> -- now turn it off
SQL> exec dbms_monitor.serv_mod_act_trace_disable('SANS','ACCOUNTS');

PL/SQL procedure successfully completed.

SQL> -- finally test service SANS, module ACCOUNTS and action PAYMENT
SQL> exec dbms_monitor.serv_mod_act_trace_enable('SANS','ACCOUNTS','PAYMENT',TRUE,FALSE,NULL);

PL/SQL procedure successfully completed.

SQL> -- turn it off
SQL> exec dbms_monitor.serv_mod_act_trace_disable('SANS','ACCOUNTS','PAYMENT');

PL/SQL procedure successfully completed.

SQL> -- you can turn on or off binds and waits as well, or use the waits=>true
SQL> -- syntax instead.
SQL>

OK, that wraps up the new procedures in 10g that can be used to turn on trace in different ways to capture true end-to-end trace for multi-tier applications. You should also be aware that DBMS_MONITOR provides procedures to enable statistics gathering at the same levels of client identifier and service name / module / action. These statistics are stored and can then be accessed by selecting from the V$SERV_MOD_ACT_STATS and V$CLIENT_STATS views; a small sketch follows. I will not detail those procedures further here, as this short paper concentrates on trace only.
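
A minimal sketch of the statistics side, reusing the 'pete id' client identifier from above (the exact statistic names vary by version):

SQL> exec dbms_monitor.client_id_stat_enable('pete id');

SQL> -- let the client do some work, then:
SQL> select stat_name, value
  2  from v$client_stats
  3  where client_identifier = 'pete id';

SQL> exec dbms_monitor.client_id_stat_disable('pete id');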

One last idea – use AUTOTRACE in SQL*Plus

OK, one final way to set and get trace is to use the SQL*Plus AUTOTRACE facilities. There are a few settings that you can use, as follows:

* set autotrace off – The default – no output
* set autotrace on explain – This shows only the optimizer path
* set autotrace on statistics – This only shows SQL statistics
* set autotrace on – Includes both of the above
* set autotrace traceonly – As above but the query output is not displayed

One more final item – CBO trace 10053

One other event that you might like to experiment with is the 10053 event. This event traces the cost based optimizer (CBO): it shows all of the plans the CBO tried and the costs assigned to them in its search for the lowest cost, and it shows how it came to its decision. The 10053 event has two levels, 1 and 2; more detail is emitted at level 1 than at level 2. The output again goes to a trace file in the directory specified by user_dump_dest. The trace is only generated if the SQL is hard parsed and, obviously, only if it uses the CBO. To get a trace file you can use any of the methods above that allow the event number to be specified. An example is:

SQL> alter session set events '10053 trace name context forever, level 1';

Session altered.

SQL> -- execute some SQL to create a CBO trace.
SQL> -- turn CBO trace off
SQL> alter session set events '10053 trace name context off';

Session altered.

SQL>

An excellent document describing how to interpret CBO trace files, "A Look under the Hood of CBO - the 10053 Event.pdf", has been written by Wolfgang Breitling of Centrex Consulting Corporation. The URL for Wolfgang's site is http://www.centrexcc.com/papers.html

Add-on:
About 10046 tracing
10046 has 4 tracing levels:
1 - Standard SQL_TRACE, the same as sql_trace
4 - Level 1 + bind values
8 - Level 1 + wait events
12 - Level 1 + Level 4 + Level 8
Like sql_trace, 10046 can be set at system level or session level.
1. system level
Add the parameter to the init.ora:
event="10046 trace name context forever,level 12"

2. session level
SQL> alter session set events '10046 trace name context forever';
SQL> alter session set events '10046 trace name context forever, level 8';
SQL> alter session set events '10046 trace name context off';

Note: to find the corresponding trace file for the current session, use this query to get its location:
select d.value||'/'||lower(rtrim(i.instance, chr(0)))||'_ora_'||p.spid||'.trc' trace_file_name
from
( select p.spid
from v$mystat m,v$session s,v$process p
where m.statistic# = 1 and s.sid = m.sid and p.addr = s.paddr) p,
( select t.instance from v$thread t,v$parameter v
where v.name = 'thread' and (v.value = 0 or t.thread# = to_number(v.value))) i,
( select value from v$parameter where name = 'user_dump_dest') d;
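
As an aside, on 11g and later the same information is exposed directly through v$diag_info (not available in the 9i/10g versions discussed above):

SQL> select value from v$diag_info where name = 'Default Trace File';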

Using import (imp) or data pump import (impdp) to import a table without data results in the table’s statistics being locked in 10gR2

Filed under: [System Performance tuning] — zhefeng @ 9:03 am

When you import a table without data using impdp, its statistics are locked after the import. However many times you re-gather the statistics they won't change, and you won't even get an error telling you that they are locked.

There are several solutions:
1. Gather the statistics with force => true, which overrides the lock:
exec dbms_stats.gather_table_stats(ownname => 'WORLDDB9_REP', tabname => 'TRIGGER_BASED', force => TRUE);

2. Unlock the statistics using DBMS_STATS.UNLOCK_[SCHEMA|TABLE]_STATS, then gather statistics on the table using DBMS_STATS.GATHER_[SCHEMA|TABLE|INDEX]_STATS:
EXEC DBMS_STATS.UNLOCK_TABLE_STATS(ownname => 'WORLDDB9_REP', tabname => 'TRIGGER_BASED');
exec dbms_stats.unlock_schema_stats(ownname => 'WORLDDB9_REP');

3. To prevent import (imp) from locking the table’s statistics when importing a table without the rows (rows=n), use statistics=none. To prevent data pump import (impdp) from locking the table’s statistics when importing a table without the rows (content=metadata_only), use exclude=(table_statistics,index_statistics).
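
For illustration, the corresponding command lines look roughly like this (connection details and file names are hypothetical):

imp scott/tiger file=exp.dmp tables=EMP rows=n statistics=none
impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=exp.dmp content=metadata_only exclude=table_statistics exclude=index_statistics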

Metalink: Note:433240.1
