DBA Sensation

March 11, 2010

DBMS_STATS, METHOD_OPT and FOR ALL INDEXED COLUMNS

Filed under: [System Performance tuning] — zhefeng @ 10:14 am

Another very good article about the dbms_stats package:
http://structureddata.org/2008/10/14/dbms_stats-method_opt-and-for-all-indexed-columns/

I’ve written before on choosing an optimal stats gathering strategy but I recently came across a scenario that I didn’t directly blog about and think it deserves attention. As I mentioned in that previous post, one should only deviate from the defaults when they have a reason to, and fully understand that reason and the effect of that decision.

Understanding METHOD_OPT

The METHOD_OPT parameter of DBMS_STATS controls two things:

1. on which columns statistics will be collected
2. on which columns histograms will be collected (and how many buckets)

It is very important to understand #1 and how the choice of METHOD_OPT affects the collection of column statistics.
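For concreteness, here is what those choices look like as arguments to DBMS_STATS.GATHER_TABLE_STATS. A minimal sketch using the T1 table from the examples below; pick one call, since each gather overwrites the previous stats:

SQL> begin
  2    -- 10g default: stats on all columns, histograms where Oracle decides they help
  3    dbms_stats.gather_table_stats(user, 'T1', method_opt => 'for all columns size auto');
  4    -- stats on all columns, no histograms (one bucket per column):
  5    -- dbms_stats.gather_table_stats(user, 'T1', method_opt => 'for all columns size 1');
  6    -- the variant this post warns about: indexed columns only:
  7    -- dbms_stats.gather_table_stats(user, 'T1', method_opt => 'for all indexed columns size auto');
  8  end;
  9  /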

Prerequisite: Where Do I Find Column Statistics?

Understanding where to find column statistics is vital for troubleshooting bad execution plans. These views will be the arrows in your quiver:

* USER_TAB_COL_STATISTICS
* USER_PART_COL_STATISTICS
* USER_SUBPART_COL_STATISTICS

Depending on whether the table is partitioned or subpartitioned, and on what GRANULARITY the stats were gathered with, the latter two views may or may not be populated.
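As a sketch, checking the partition-level view looks like this (assuming a partitioned table named T1; against a non-partitioned table the view simply returns no rows):

SQL> select partition_name, column_name, num_distinct, histogram
  2  from user_part_col_statistics
  3  where table_name='T1'
  4  /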

The Bane of METHOD_OPT: FOR ALL INDEXED COLUMNS

If you are using FOR ALL INDEXED COLUMNS as part of your METHOD_OPT you probably should not be. Allow me to explain. Using METHOD_OPT=>'FOR ALL INDEXED COLUMNS SIZE AUTO' (a common METHOD_OPT I see) tells DBMS_STATS: "only gather stats on columns that participate in an index, and based on the data distribution and workload of those indexed columns decide if a histogram should be created and how many buckets it should contain". Is that really what you want? My guess is probably not. Let me work through a few examples to explain why.

I’m going to start with this table.
SQL> exec dbms_random.initialize(1);

PL/SQL procedure successfully completed.

SQL> create table t1
  2  as
  3  select
  4    column_value pk,
  5    round(dbms_random.value(1,2)) a,
  6    round(dbms_random.value(1,5)) b,
  7    round(dbms_random.value(1,10)) c,
  8    round(dbms_random.value(1,100)) d,
  9    round(dbms_random.value(1,100)) e
 10  from table(counter(1,1000000))
 11  /

Table created.

SQL> begin
  2    dbms_stats.gather_table_stats(
  3      ownname => user ,
  4      tabname => 'T1' ,
  5      estimate_percent => 100 ,
  6      cascade => true);
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL> select
  2    COLUMN_NAME, NUM_DISTINCT, HISTOGRAM, NUM_BUCKETS,
  3    to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
  4  from user_tab_col_statistics
  5  where table_name='T1'
  6  /

COLUMN_NAME NUM_DISTINCT HISTOGRAM       NUM_BUCKETS LAST_ANALYZED
----------- ------------ --------------- ----------- -------------------
PK               1000000 NONE                      1 2008-13-10 18:39:51
A                      2 NONE                      1 2008-13-10 18:39:51
B                      5 NONE                      1 2008-13-10 18:39:51
C                     10 NONE                      1 2008-13-10 18:39:51
D                    100 NONE                      1 2008-13-10 18:39:51
E                    100 NONE                      1 2008-13-10 18:39:51

6 rows selected.

This 6 column table contains 1,000,000 rows of randomly generated numbers. I’ve queried USER_TAB_COL_STATISTICS to display some of the important attributes (NDV, Histogram, Number of Buckets, etc).

I’m going to now put an index on T1(PK), delete the stats and recollect stats using two different METHOD_OPT parameters that each use ‘FOR ALL INDEXED COLUMNS’.
SQL> create unique index PK_T1 on T1(PK);

Index created.

SQL> begin
  2    dbms_stats.delete_table_stats(user,'T1');
  3
  4    dbms_stats.gather_table_stats(
  5      ownname => user ,
  6      tabname => 'T1' ,
  7      estimate_percent => 100 ,
  8      method_opt => 'for all indexed columns' ,
  9      cascade => true);
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> select COLUMN_NAME, NUM_DISTINCT, HISTOGRAM, NUM_BUCKETS,
  2    to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
  3  from user_tab_col_statistics
  4  where table_name='T1'
  5  /

COLUMN_NAME NUM_DISTINCT HISTOGRAM       NUM_BUCKETS LAST_ANALYZED
----------- ------------ --------------- ----------- -------------------
PK               1000000 HEIGHT BALANCED          75 2008-13-10 18:41:10

SQL> begin
  2    dbms_stats.delete_table_stats(user,'T1');
  3
  4    dbms_stats.gather_table_stats(
  5      ownname => user ,
  6      tabname => 'T1' ,
  7      estimate_percent => 100 ,
  8      method_opt => 'for all indexed columns size auto' ,
  9      cascade => true);
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> select COLUMN_NAME, NUM_DISTINCT, HISTOGRAM, NUM_BUCKETS,
  2    to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
  3  from user_tab_col_statistics
  4  where table_name='T1'
  5  /

COLUMN_NAME NUM_DISTINCT HISTOGRAM       NUM_BUCKETS LAST_ANALYZED
----------- ------------ --------------- ----------- -------------------
PK               1000000 NONE                      1 2008-13-10 18:41:12

Notice that in both cases only column PK has stats on it. Columns A, B, C, D and E do not have any stats collected on them. Also note that when no SIZE clause is specified, SIZE defaults to 75 buckets.

Now one might think that is no big deal, or perhaps not even realize this is happening because they never look at their stats. Let's see what the Optimizer gives us for cardinality estimates in a few scenarios.
SQL> select /*+ gather_plan_statistics */
  2    count(*)
  3  from t1
  4  where a=1
  5  /

  COUNT(*)
----------
    500227

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  4df0g0r99zmba, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.24 |    3466 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |  10000 |    500K|00:00:00.50 |    3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("A"=1)

Notice the E-Rows estimate for T1. The Optimizer is estimating 10,000 rows when in reality there are 500,227. The estimate is off by more than an order of magnitude (50x). Normally the calculation for the cardinality (for a single-table, single equality predicate) would be:
number of rows in T1 * 1/NDV = 1,000,000 * 1/2 = 500,000
but in this case 10,000 is the estimate. Strangely enough (or not), 10,000 is exactly 0.01 (1%) of 1,000,000. Because there are no column stats for T1.A, the Optimizer is forced to make a guess, and that guess is 1%.

As you can see from the 10053 trace (below), since there are no statistics on the column, defaults are used. In this case they yield very poor cardinality estimations.

SINGLE TABLE ACCESS PATH
-----------------------------------------
BEGIN Single Table Cardinality Estimation
-----------------------------------------
Column (#2): A(NUMBER) NO STATISTICS (using defaults)
AvgLen: 13.00 NDV: 31250 Nulls: 0 Density: 3.2000e-05
Table: T1 Alias: T1
Card: Original: 1000000 Rounded: 10000 Computed: 10000.00 Non Adjusted: 10000.00
-----------------------------------------
END Single Table Cardinality Estimation
-----------------------------------------

Now that I've demonstrated how poor the cardinality estimation was with a single equality predicate, let's see what two equality predicates give us for a cardinality estimate.
SQL> select /*+ gather_plan_statistics */
  2    count(*)
  3  from t1
  4  where a=1
  5  and b=3
  6  /

  COUNT(*)
----------
    124724

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  ctq8q59qdymw6, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.19 |    3466 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    100 |    124K|00:00:00.25 |    3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("A"=1 AND "B"=3))

Yikes. In this case the cardinality estimate is 100 when the actual number of rows is 124,724, a difference of over 3 orders of magnitude (over 1000x). Where did the 100 row estimate come from? In this case there are two equality predicates so the selectivity is calculated as 1% * 1% or 0.01 * 0.01 = 0.0001. 1,000,000 * 0.0001 = 100. Funny that. (The 1% is the default selectivity for an equality predicate w/o stats.)

Now let’s add a derived predicate as well and check the estimates.
SQL> select /*+ gather_plan_statistics */
  2    count(*)
  3  from t1
  4  where a=1
  5  and b=3
  6  and d+e > 50
  7  /

  COUNT(*)
----------
    109816

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  5x200q9rqvvfu, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3
and d+e > 50

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.22 |    3466 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |      5 |    109K|00:00:00.33 |    3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("A"=1 AND "B"=3 AND "D"+"E">50))

Doh! The cardinality estimate is now 5, but the actual number of rows being returned is 109,816. Not good at all. The Optimizer estimated 5 rows because it used a default selectivity of 1% (for A=1) * 1% (for B=3) * 5% (for D+E > 50) * 1,000,000 rows. Now can you see why column statistics are very important? All it takes is a few predicates and the cardinality estimation becomes very small, very fast. Now consider this:

* What is likely to happen in a data warehouse where the queries are 5+ table joins and the fact table columns do not have indexes?
* Would the Optimizer choose the correct driving table?
* Would nested loops plans probably be chosen when it is really not appropriate?

Hopefully you can see where this is going. If you don’t, here is the all too common chain of events:

* Non representative (or missing) statistics lead to
* Poor cardinality estimates which leads to
* Poor access path selection which leads to
* Poor join method selection which leads to
* Poor join order selection which leads to
* Poor SQL execution times

Take 2: Using the Defaults

Now I’m going to recollect stats with a default METHOD_OPT and run through the 3 execution plans again:
SQL> begin
  2    dbms_stats.delete_table_stats(user,'t1');
  3
  4    dbms_stats.gather_table_stats(
  5      ownname => user ,
  6      tabname => 'T1' ,
  7      estimate_percent => 100 ,
  8      degree => 8,
  9      cascade => true);
 10  end;
 11  /

PL/SQL procedure successfully completed.

SQL> select column_name, num_distinct, histogram, NUM_BUCKETS,
  2    to_char(LAST_ANALYZED,'yyyy-dd-mm hh24:mi:ss') LAST_ANALYZED
  3  from user_tab_col_statistics where table_name='T1'
  4  /

COLUMN_NAME NUM_DISTINCT HISTOGRAM       NUM_BUCKETS LAST_ANALYZED
----------- ------------ --------------- ----------- -------------------
PK               1000000 NONE                      1 2008-13-10 19:44:32
A                      2 FREQUENCY                 2 2008-13-10 19:44:32
B                      5 FREQUENCY                 5 2008-13-10 19:44:32
C                     10 FREQUENCY                10 2008-13-10 19:44:32
D                    100 NONE                      1 2008-13-10 19:44:32
E                    100 NONE                      1 2008-13-10 19:44:32

6 rows selected.
SQL> select /*+ gather_plan_statistics */
  2    count(*)
  3  from t1
  4  where a=1
  5  /

  COUNT(*)
----------
    500227

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  4df0g0r99zmba, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.20 |    3466 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    500K|    500K|00:00:00.50 |    3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("A"=1)
SQL> select /*+ gather_plan_statistics */
  2    count(*)
  3  from t1
  4  where a=1
  5  and b=3
  6  /

  COUNT(*)
----------
    124724

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  ctq8q59qdymw6, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.14 |    3466 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    124K|    124K|00:00:00.25 |    3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("B"=3 AND "A"=1))
SQL> select /*+ gather_plan_statistics */
  2    count(*)
  3  from t1
  4  where a=1
  5  and b=3
  6  and d+e > 50
  7  /

  COUNT(*)
----------
    109816

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  5x200q9rqvvfu, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3
and d+e>50

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.17 |    3466 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |   6236 |    109K|00:00:00.22 |    3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("B"=3 AND "A"=1 AND "D"+"E">50))

As you can see, the first two queries now have spot-on cardinality estimates, but the third query is not as good, because it uses a column combination and there are no stats on the expression D+E, only on D and E individually. I'm going to rerun the third query with dynamic sampling set to level 4 (in 10g it defaults to 2) and reevaluate the cardinality estimate.
SQL> alter session set optimizer_dynamic_sampling=4;

Session altered.

SQL> select /*+ gather_plan_statistics */
  2    count(*)
  3  from t1
  4  where a=1
  5  and b=3
  6  and d+e > 50
  7  /

  COUNT(*)
----------
    109816

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
SQL_ID  5x200q9rqvvfu, child number 1
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from t1 where a=1 and b=3
and d+e > 50

Plan hash value: 3724264953

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.17 |    3466 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    102K|    109K|00:00:00.22 |    3466 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("B"=3 AND "A"=1 AND "D"+"E">50))

Note
-----
- dynamic sampling used for this statement

Bingo! Close enough to call statistically equivalent.
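If changing the session parameter is too broad, the same level can be requested for a single statement with the DYNAMIC_SAMPLING hint. A sketch against the same query (the hint takes the table alias and a sampling level):

SQL> select /*+ gather_plan_statistics dynamic_sampling(t1 4) */
  2    count(*)
  3  from t1
  4  where a=1
  5  and b=3
  6  and d+e > 50
  7  /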

Summary

I hope this little exercise demonstrates how important it is to have representative statistics and that when statistics are representative the Optimizer can very often accurately estimate the cardinality and thus choose the best plan for the query. Remember these points:

* Recent statistics do not necessarily equate to representative statistics.
* Statistics are required on all columns to yield good plans – not just indexed columns.
* You probably should not be using METHOD_OPT => ‘FOR ALL INDEXED COLUMNS SIZE AUTO’, especially in a data warehouse where indexes are used sparingly.
* Dynamic Sampling can assist with cardinality estimates where existing stats are not enough.
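Putting these points together, the plain default gather is usually the right starting point. A minimal sketch for the T1 example above (in 10g the default METHOD_OPT is already 'FOR ALL COLUMNS SIZE AUTO', so it is simply omitted):

SQL> begin
  2    dbms_stats.gather_table_stats(
  3      ownname => user ,
  4      tabname => 'T1' ,
  5      estimate_percent => 100 ,
  6      cascade => true);
  7  end;
  8  /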


March 10, 2010

Using Histograms to Help Oracle Cost-Based Optimizer Make Better Decisions

Filed under: [System Performance tuning] — Tags: , , , — zhefeng @ 5:36 pm

Found a very good article talking about histograms; here is the original link:
http://support.confio.com/blog/tag/methodopt/38/

Introduction

Histograms are a feature of the cost-based optimizer (CBO) that allows the Oracle engine to determine how data is distributed within a column. They are most useful for a column that is included in the WHERE clause of SQL where the data distribution is skewed.

Example

Assume a table named PROCESS_QUEUE with one million rows including a column named PROCESSED_FLAG with five distinct values. Also assume a query similar to the following is executed:

SELECT id, serial_number
FROM process_queue
WHERE processed_flag = 'N';

SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1087 Card=260363 Bytes=7029801)
  TABLE ACCESS (FULL) OF 'PROCESS_QUEUE' (TABLE) (Cost=1087 Card=260363 Bytes=7029801)

Without histograms and only five distinct values, Oracle assumes an even data distribution and would most likely perform a full table scan for this query. With one million rows and five values, Oracle assumes that each value would return 200,000 rows, or 20% of the rows.

Data Skew

However, what if the data for the PROCESSED_FLAG column was skewed:

SELECT processed_flag, COUNT(1)
FROM process_queue
GROUP BY processed_flag;

PROCESSED_FLAG  COUNT
--------------- ----------
P                       24
Y                   999345
E                       30
S                      568
N                       33

In this case, only 33 rows have a value of 'N', so there has to be a way to tell Oracle to use the index on the PROCESSED_FLAG column. That is where histograms come into use. A histogram would include data similar to the above and allow Oracle to know that only 33 rows would be returned for this query.

Collecting Histograms

To collect histograms for this column, a command similar to the following could be used:

EXECUTE DBMS_STATS.GATHER_TABLE_STATS(user, 'PROCESS_QUEUE', method_opt => 'for columns processed_flag size 5')

SELECT id, serial_number
FROM process_queue
WHERE processed_flag = 'N';

SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=28 Bytes=756)
  TABLE ACCESS (BY INDEX ROWID) OF 'PROCESS_QUEUE' (TABLE) (Cost=1 Card=28 Bytes=756)
    INDEX (RANGE SCAN) OF 'PQ_IX1' (INDEX) (Cost=1 Card=28)
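After gathering, the buckets themselves can be inspected in the dictionary. A sketch using the documented USER_TAB_HISTOGRAMS view (note that for character columns the endpoint values are stored numerically encoded):

SELECT column_name, endpoint_number, endpoint_value
FROM user_tab_histograms
WHERE table_name = 'PROCESS_QUEUE'
AND column_name = 'PROCESSED_FLAG';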

Notes About Histograms

Note 1: Using histograms works best for SQL statements that use literal values. If a statement uses a bind variable, the first time the query is parsed, Oracle will peek at the value of the bind variable and choose a plan accordingly. That same plan will be used until the SQL is reparsed. In this case, if the bind variable was 'Y' the first time, Oracle may perform a full table scan for this query no matter what value was passed in from then on.

The opposite may also be true. Assume a similar data distribution to the above, but with 100 distinct values for the PROCESSED_FLAG column, where rows with a 'Y' value are still 95% of the rows. In that case, if you used the criteria "WHERE processed_flag='Y'", without histograms Oracle may decide to use the index when a full table scan may be a better option.
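To watch the peeking behavior yourself, a sketch in SQL*Plus (the bind variable name is arbitrary; whichever value is peeked on the first hard parse fixes the plan for later executions until a reparse):

SQL> variable flag varchar2(1)
SQL> exec :flag := 'Y'
SQL> SELECT id, serial_number FROM process_queue WHERE processed_flag = :flag;

-- the plan built while peeking 'Y' is reused here, even though 'N' is far more selective:
SQL> exec :flag := 'N'
SQL> SELECT id, serial_number FROM process_queue WHERE processed_flag = :flag;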

Note 2: The defaults for the METHOD_OPT parameter changed between Oracle 9i and 10g. In 9i the parameter defaulted to 'for all columns size 1', which essentially turns off histograms. The default value in Oracle 10g is 'for all columns size auto', which means that Oracle will decide whether or not to collect histograms for a column. In my experience it seems that unnecessary histograms are collected and histogram data is not collected for some columns where it would be useful.
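On 10g you can check the current default with DBMS_STATS.GET_PARAM, a sketch of which follows (on a stock 10g install it should return FOR ALL COLUMNS SIZE AUTO; 11g replaces GET_PARAM with GET_PREFS):

SELECT DBMS_STATS.GET_PARAM('METHOD_OPT') FROM dual;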

Conclusion

Histograms allow Oracle to make much better performance decisions. The case we discussed in this article is one way that histograms are used and is commonly referred to as “table access method” histograms. Another use for histograms, referred to as “table order join” histograms, is to help Oracle decide the order in which tables will be joined. This helps the CBO know the size of the result sets or “cardinality” to properly determine the correct order in which to do joins.

March 8, 2010

Index Full Scan vs Index Fast Full Scan

Filed under: [System Performance tuning] — Tags: , , , — zhefeng @ 2:06 pm

http://spaces.msn.com/members/wzwanghai/

[Oracle] Index Full Scan vs Index Fast Full Scan
Author: Wanghai (汪海)
Date: 14-Aug-2005
Source: http://spaces.msn.com/members/wzwanghai/

Index Full Scan vs Index Fast Full Scan

Are index full scan and index fast full scan the same thing? The answer is no. Although the two look similar at first glance, their underlying mechanisms are completely different. Let's take a look at where they differ.

First, where can an IFS or FFS be used? If all the columns a SQL statement needs are contained in an index, then either an index full scan or an index fast full scan can be used in place of a full table scan. For example, the following statements:

SQL> CREATE TABLE TEST AS SELECT * FROM dba_objects WHERE 0=1;

SQL> CREATE INDEX ind_test_id ON TEST(object_id);

SQL> INSERT INTO TEST
SELECT *
FROM dba_objects
WHERE object_id IS NOT NULL AND object_id > 10000
ORDER BY object_id DESC;

17837 rows created.

SQL> analyze table test compute statistics for table for all columns for all indexes;

Table analyzed.

SQL> set autotrace trace;

SQL> select object_id from test;

17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=68 Card=17837 Bytes=71348)
1 0 TABLE ACCESS (FULL) OF 'TEST' (Cost=68 Card=17837 Bytes=71348)

Here Oracle chooses a full table scan, because the object_id column is nullable by default. Change it to NOT NULL:

SQL>alter table test modify(object_id not null);

SQL> select object_id from test;

17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=17837 Bytes=71348)
1 0 INDEX (FAST FULL SCAN) OF 'IND_TEST_ID' (NON-UNIQUE) (Cost=11 Card=17837 Bytes=71348)

Of course, we can also use an index full scan:

SQL> select/*+ index(test ind_TEST_ID)*/ object_id from test;

17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=41 Card=17837 Bytes=71348)
1 0 INDEX (FULL SCAN) OF 'IND_TEST_ID' (NON-UNIQUE) (Cost=101 Card=17837 Bytes=71348)

We have seen that both can be used in this situation, so what is the difference between them? One place the difference shows up is in the output each produces. To make this easier to see, we will fetch only 10 rows.

INDEX FAST FULL SCAN

SQL> select object_id from test where rownum < 11;

(10 rows selected; the rows come back in physical storage order, the first being 66266)

INDEX FULL SCAN

SQL> select/*+ index(test ind_TEST_ID)*/ object_id from test where rownum < 11;

(10 rows selected; the rows come back in index key order, the first being 10616)

SQL> select object_id from dba_objects where object_name='IND_TEST_ID';

OBJECT_ID
----------
70591

The index's object_id is 70591. A tree dump shows the structure of the index tree:

SQL> ALTER SESSION SET EVENTS 'immediate trace name TREEDUMP level 70591';

----- begin tree dump
branch: 0x6809b8d 109091725 (0: nrow: 100, level: 1)
leaf: 0x6809b96 109091734 (-1: nrow: 294 rrow: 0)
leaf: 0x6c07ec1 113278657 (0: nrow: 262 rrow: 0)
leaf: 0x6c07ebd 113278653 (1: nrow: 518 rrow: 0)
leaf: 0x6c07eb1 113278641 (2: nrow: 524 rrow: 0)
leaf: 0x6c07ead 113278637 (3: nrow: 524 rrow: 0)
leaf: 0x6c07ea9 113278633 (4: nrow: 524 rrow: 0)
leaf: 0x6c07ea5 113278629 (5: nrow: 524 rrow: 0)
leaf: 0x6c07ea1 113278625 (6: nrow: 524 rrow: 0)
leaf: 0x6c07e9d 113278621 (7: nrow: 524 rrow: 0)
leaf: 0x6c07e99 113278617 (8: nrow: 524 rrow: 0)
leaf: 0x6c07e95 113278613 (9: nrow: 532 rrow: 0)
leaf: 0x6c07e91 113278609 (10: nrow: 524 rrow: 0)
leaf: 0x6c07e8d 113278605 (11: nrow: 524 rrow: 0)
leaf: 0x6c07ec8 113278664 (12: nrow: 524 rrow: 0)
leaf: 0x6c07ec4 113278660 (13: nrow: 524 rrow: 0)
leaf: 0x6c07ec0 113278656 (14: nrow: 524 rrow: 0)
leaf: 0x6c07ebc 113278652 (15: nrow: 524 rrow: 0)
leaf: 0x6809bb2 109091762 (16: nrow: 524 rrow: 0)
leaf: 0x6c07eb8 113278648 (17: nrow: 524 rrow: 0)
leaf: 0x6c07eb4 113278644 (18: nrow: 524 rrow: 0)
leaf: 0x6c07eb0 113278640 (19: nrow: 524 rrow: 0)
leaf: 0x6c07eac 113278636 (20: nrow: 524 rrow: 0)
leaf: 0x6809bae 109091758 (21: nrow: 524 rrow: 0)
leaf: 0x6c07ea8 113278632 (22: nrow: 524 rrow: 0)
leaf: 0x6c07ea4 113278628 (23: nrow: 524 rrow: 0)
leaf: 0x6c07ea0 113278624 (24: nrow: 105 rrow: 105)
leaf: 0x6c07e9c 113278620 (25: nrow: 129 rrow: 129)
leaf: 0x6c07eb9 113278649 (26: nrow: 123 rrow: 123)
leaf: 0x6809baa 109091754 (27: nrow: 246 rrow: 246)
leaf: 0x6c07e98 113278616 (28: nrow: 246 rrow: 246)
leaf: 0x6c07e94 113278612 (29: nrow: 246 rrow: 246)
leaf: 0x6809ba6 109091750 (30: nrow: 246 rrow: 246)
leaf: 0x6809bce 109091790 (31: nrow: 246 rrow: 246)
leaf: 0x6809bca 109091786 (32: nrow: 246 rrow: 246)
leaf: 0x6809c05 109091845 (33: nrow: 248 rrow: 248)
leaf: 0x6809c01 109091841 (34: nrow: 246 rrow: 246)
leaf: 0x6809bfd 109091837 (35: nrow: 246 rrow: 246)
leaf: 0x6809bf9 109091833 (36: nrow: 246 rrow: 246)
leaf: 0x6809bf5 109091829 (37: nrow: 246 rrow: 246)
leaf: 0x6809bf1 109091825 (38: nrow: 246 rrow: 246)
leaf: 0x6809bed 109091821 (39: nrow: 246 rrow: 246)
leaf: 0x6809be9 109091817 (40: nrow: 246 rrow: 246)
leaf: 0x6809be5 109091813 (41: nrow: 246 rrow: 246)
leaf: 0x6809be1 109091809 (42: nrow: 246 rrow: 246)
leaf: 0x6809bdd 109091805 (43: nrow: 246 rrow: 246)
leaf: 0x6809bd9 109091801 (44: nrow: 246 rrow: 246)
leaf: 0x6809bd5 109091797 (45: nrow: 246 rrow: 246)
leaf: 0x6809bd1 109091793 (46: nrow: 248 rrow: 248)
leaf: 0x6809bcd 109091789 (47: nrow: 246 rrow: 246)
leaf: 0x6809bc9 109091785 (48: nrow: 246 rrow: 246)
leaf: 0x6809c08 109091848 (49: nrow: 246 rrow: 246)
leaf: 0x6809c04 109091844 (50: nrow: 246 rrow: 246)
leaf: 0x6809c00 109091840 (51: nrow: 246 rrow: 246)
leaf: 0x6809bfc 109091836 (52: nrow: 246 rrow: 246)
leaf: 0x6809bf8 109091832 (53: nrow: 246 rrow: 246)
leaf: 0x6809bf4 109091828 (54: nrow: 246 rrow: 246)
leaf: 0x6809bf0 109091824 (55: nrow: 246 rrow: 246)
leaf: 0x6809bec 109091820 (56: nrow: 246 rrow: 246)
leaf: 0x6809be8 109091816 (57: nrow: 246 rrow: 246)
leaf: 0x6809be4 109091812 (58: nrow: 246 rrow: 246)
leaf: 0x6809be0 109091808 (59: nrow: 248 rrow: 248)
leaf: 0x6809bdc 109091804 (60: nrow: 246 rrow: 246)
leaf: 0x6809bd8 109091800 (61: nrow: 246 rrow: 246)
leaf: 0x6809bd4 109091796 (62: nrow: 246 rrow: 246)
leaf: 0x6809bd0 109091792 (63: nrow: 246 rrow: 246)
leaf: 0x6809bcc 109091788 (64: nrow: 246 rrow: 246)
leaf: 0x6809c07 109091847 (65: nrow: 246 rrow: 246)
leaf: 0x6809c03 109091843 (66: nrow: 246 rrow: 246)
leaf: 0x6809bff 109091839 (67: nrow: 246 rrow: 246)
leaf: 0x6809bfb 109091835 (68: nrow: 246 rrow: 246)
leaf: 0x6809bf7 109091831 (69: nrow: 246 rrow: 246)
leaf: 0x6809bf3 109091827 (70: nrow: 246 rrow: 246)
leaf: 0x6809bef 109091823 (71: nrow: 246 rrow: 246)
leaf: 0x6809beb 109091819 (72: nrow: 248 rrow: 248)
leaf: 0x6809be7 109091815 (73: nrow: 246 rrow: 246)
leaf: 0x6809be3 109091811 (74: nrow: 246 rrow: 246)
leaf: 0x6809bdf 109091807 (75: nrow: 246 rrow: 246)
leaf: 0x6809bdb 109091803 (76: nrow: 246 rrow: 246)
leaf: 0x6809bd7 109091799 (77: nrow: 246 rrow: 246)
leaf: 0x6809bd3 109091795 (78: nrow: 246 rrow: 246)
leaf: 0x6809bcf 109091791 (79: nrow: 246 rrow: 246)
leaf: 0x6809bcb 109091787 (80: nrow: 246 rrow: 246)
leaf: 0x6809c06 109091846 (81: nrow: 246 rrow: 246)
leaf: 0x6809c02 109091842 (82: nrow: 246 rrow: 246)
leaf: 0x6809bfe 109091838 (83: nrow: 246 rrow: 246)
leaf: 0x6809bfa 109091834 (84: nrow: 246 rrow: 246)
leaf: 0x6809ba2 109091746 (85: nrow: 129 rrow: 129)
leaf: 0x6c07eb5 113278645 (86: nrow: 123 rrow: 123)
leaf: 0x6809bf6 109091830 (87: nrow: 246 rrow: 246)
leaf: 0x6809bf2 109091826 (88: nrow: 246 rrow: 246)
leaf: 0x6809bee 109091822 (89: nrow: 246 rrow: 246)
leaf: 0x6809bea 109091818 (90: nrow: 246 rrow: 246)
leaf: 0x6809b9e 109091742 (91: nrow: 246 rrow: 246)
leaf: 0x6809be6 109091814 (92: nrow: 246 rrow: 246)
leaf: 0x6809be2 109091810 (93: nrow: 246 rrow: 246)
leaf: 0x6809bde 109091806 (94: nrow: 246 rrow: 246)
leaf: 0x6809bda 109091802 (95: nrow: 246 rrow: 246)
leaf: 0x6809b9a 109091738 (96: nrow: 246 rrow: 246)
leaf: 0x6809bd6 109091798 (97: nrow: 246 rrow: 246)
leaf: 0x6809bd2 109091794 (98: nrow: 246 rrow: 246)
----- end tree dump

The index full scan reads block 0x6c07ea0 (the first leaf along the key-ordered leaf chain that still contains rows; the leaves before it show rrow: 0), while the index fast full scan reads block 0x6809b9a, the data-bearing block that comes first in physical storage. Let's look at the contents of these two blocks:
0x6c07ea0 = 113278624 in decimal
0x6809b9a = 109091738 in decimal

SQL> select dbms_utility.data_block_address_file(113278624) "file", dbms_utility.data_block_address_block(113278624) "block" from dual;

file block
---------- ----------
27 32416

SQL> select dbms_utility.data_block_address_file(109091738) "file", dbms_utility.data_block_address_block(109091738) "block" from dual;

file block
---------- ----------
26 39834

SQL> alter system dump datafile 27 block 32416;

SQL> alter system dump datafile 26 block 39834;

The first 10 rows of block 32416:

row#0[6564] flag: -----, lock: 2
col 0; len 4; (4): c3 02 07 11
col 1; len 6; (6): 07 00 7c 20 00 2b
row#1[6578] flag: -----, lock: 2
col 0; len 4; (4): c3 02 16 4e
col 1; len 6; (6): 07 00 7c 20 00 2a
row#2[6592] flag: -----, lock: 2
col 0; len 4; (4): c3 02 16 4f
col 1; len 6; (6): 07 00 7c 20 00 29
row#3[6606] flag: -----, lock: 2
col 0; len 4; (4): c3 02 16 50
col 1; len 6; (6): 07 00 7c 20 00 28
row#4[6620] flag: -----, lock: 2
col 0; len 4; (4): c3 02 18 02
col 1; len 6; (6): 07 00 7c 20 00 27
row#5[6634] flag: -----, lock: 2
col 0; len 4; (4): c3 02 23 60
col 1; len 6; (6): 07 00 7c 20 00 26
row#6[6648] flag: -----, lock: 2
col 0; len 4; (4): c3 02 24 25
col 1; len 6; (6): 07 00 7c 20 00 25
row#7[6662] flag: -----, lock: 2
col 0; len 4; (4): c3 02 24 28
col 1; len 6; (6): 07 00 7c 20 00 24
row#8[6676] flag: -----, lock: 2
col 0; len 4; (4): c3 02 28 18
col 1; len 6; (6): 07 00 7c 20 00 23
row#9[6690] flag: -----, lock: 2
col 0; len 4; (4): c3 02 42 04
col 1; len 6; (6): 07 00 7c 20 00 22

The first 10 rows of block 39834:
row#0[4591] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 43
col 1; len 6; (6): 02 81 71 f6 00 36
row#1[4605] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 44
col 1; len 6; (6): 02 81 71 f6 00 35
row#2[4619] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 45
col 1; len 6; (6): 02 81 71 f6 00 34
row#3[4633] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 46
col 1; len 6; (6): 02 81 71 f6 00 33
row#4[4647] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 47
col 1; len 6; (6): 02 81 71 f6 00 32
row#5[4661] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 48
col 1; len 6; (6): 02 81 71 f6 00 31
row#6[4675] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 49
col 1; len 6; (6): 02 81 71 f6 00 30
row#7[4689] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 4a
col 1; len 6; (6): 02 81 71 f6 00 2f
row#8[4703] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 4b
col 1; len 6; (6): 02 81 71 f6 00 2e
row#9[4717] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 4c
col 1; len 6; (6): 02 81 71 f6 00 2d

Now compare with the earlier result sets. The first row of block 32416 is 10616, whose internal storage format should be:

SQL> select dump(10616,16) from dual;

DUMP(10616,16)
----------------------
Typ=2 Len=4: c3,2,7,11

which indeed equals what the block dump shows:

row#0[6564] flag: -----, lock: 2
col 0; len 4; (4): c3 02 07 11
col 1; len 6; (6): 07 00 7c 20 00 2b

Now look at the first row of block 39834:

SQL> select dump(66266,16) from dual;

DUMP(66266,16)
-----------------------
Typ=2 Len=4: c3,7,3f,43

which also matches the dump:

row#0[4591] flag: -----, lock: 2
col 0; len 4; (4): c3 07 3f 43
col 1; len 6; (6): 02 81 71 f6 00 36

This proves the difference between index full scan and index fast full scan described above.
We can also use the 10046 event to trace the path each one takes.

SQL> ALTER SESSION SET EVENTS 'immediate trace name flush_cache';

(Flush the buffer cache so that the 'db file sequential read' and 'db file scattered read' events can be observed.)

SQL> alter session set events '10046 trace name context forever,level 12';

Session altered.

SQL> select object_id from test where rownum < 11;

(10 rows selected)

SQL> alter session set events '10046 trace name context off';

Session altered.

[oracle@csdbc udump]$ grep read cs-dbc_ora_15596.trc

Redo thread mounted by this instance: 1
WAIT #1: nam='db file sequential read' ela= 33 p1=26 p2=39820 p3=1
WAIT #1: nam='db file sequential read' ela= 21 p1=26 p2=39817 p3=1
WAIT #1: nam='db file sequential read' ela= 17 p1=26 p2=39819 p3=1
WAIT #1: nam='db file parallel read' ela= 53 p1=2 p2=2 p3=2
WAIT #1: nam='db file scattered read' ela= 466 p1=26 p2=39821 p3=16

The leading 'db file sequential read' waits come from reading the segment header and similar bookkeeping. Focus on the 'db file scattered read' event: the index fast full scan uses multiblock reads, reading db_file_multiblock_read_count blocks (set to 16 in this example) starting from block 39821. The block we care about, 39834, falls inside that range.
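If you want to check the multiblock read size on your own system, a quick sketch (standard SQL*Plus command):

SQL> show parameter db_file_multiblock_read_count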
Now look at the 10046 trace for the index full scan:

SQL> ALTER SESSION SET EVENTS 'immediate trace name flush_cache';

(Flush the buffer cache so that the 'db file sequential read' and 'db file scattered read' events can be observed.)

SQL> alter session set events '10046 trace name context forever,level 12';

Session altered.

SQL> select/*+ index(test ind_TEST_ID)*/ object_id from test where rownum < 11;

OBJECT_ID
----------
10616
12177
12178
12179
12301
13495
13536
13539
13923
16503

10 rows selected.

SQL> alter session set events '10046 trace name context off';

Session altered.

[oracle@csdbc udump]$ grep read cs-dbc_ora_15609.trc

Redo thread mounted by this instance: 1
WAIT #1: nam='db file sequential read' ela= 49 p1=26 p2=39821 p3=1
(the root block, exactly the 0x6809b8d seen earlier at the top of the index tree dump)
WAIT #1: nam='db file sequential read' ela= 32 p1=26 p2=39830 p3=1
WAIT #1: nam='db file sequential read' ela= 40 p1=27 p2=32449 p3=1
WAIT #1: nam='db file sequential read' ela= 35 p1=27 p2=32445 p3=1
WAIT #1: nam='db file sequential read' ela= 28 p1=27 p2=32433 p3=1
WAIT #1: nam='db file sequential read' ela= 19 p1=27 p2=32429 p3=1
WAIT #1: nam='db file sequential read' ela= 34 p1=27 p2=32425 p3=1
WAIT #1: nam='db file sequential read' ela= 32 p1=27 p2=32421 p3=1
WAIT #1: nam='db file sequential read' ela= 33 p1=27 p2=32417 p3=1
WAIT #1: nam='db file sequential read' ela= 29 p1=27 p2=32413 p3=1
WAIT #1: nam='db file sequential read' ela= 37 p1=27 p2=32409 p3=1
WAIT #1: nam='db file sequential read' ela= 32 p1=27 p2=32405 p3=1
WAIT #1: nam='db file sequential read' ela= 35 p1=27 p2=32401 p3=1
WAIT #1: nam='db file sequential read' ela= 34 p1=27 p2=32397 p3=1
WAIT #1: nam='db file sequential read' ela= 31 p1=27 p2=32456 p3=1
WAIT #1: nam='db file sequential read' ela= 29 p1=27 p2=32452 p3=1
WAIT #1: nam='db file sequential read' ela= 31 p1=27 p2=32448 p3=1
WAIT #1: nam='db file sequential read' ela= 30 p1=27 p2=32444 p3=1
WAIT #1: nam='db file sequential read' ela= 38 p1=26 p2=39858 p3=1
WAIT #1: nam='db file sequential read' ela= 31 p1=27 p2=32440 p3=1
WAIT #1: nam='db file sequential read' ela= 32 p1=27 p2=32436 p3=1
WAIT #1: nam='db file sequential read' ela= 35 p1=27 p2=32432 p3=1
WAIT #1: nam='db file sequential read' ela= 31 p1=27 p2=32428 p3=1
WAIT #1: nam='db file sequential read' ela= 29 p1=26 p2=39854 p3=1
WAIT #1: nam='db file sequential read' ela= 36 p1=27 p2=32424 p3=1
WAIT #1: nam='db file sequential read' ela= 32 p1=27 p2=32420 p3=1
WAIT #1: nam='db file sequential read' ela= 36 p1=27 p2=32416 p3=1

The path taken by the index full scan is exactly what was described at the beginning: locate the root block, then read one block at a time along the leaf-block chain. By now the difference between index full scan and index fast full scan should be clear. Finally, a supplementary note on how the two differ when it comes to sorting.

SQL> set autotrace trace;

SQL> select object_id from test order by object_id;

17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=41 Card=17837 Bytes=71348)
1 0 INDEX (FULL SCAN) OF 'IND_TEST_ID' (NON-UNIQUE) (Cost=101 Card=17837 Bytes=71348)

Because of the ORDER BY, Oracle automatically chose an index full scan, which avoids the sort. What happens if we force an index fast full scan?

SQL> select/*+ index_ffs(test ind_test_id)*/object_id from test order by object_id;
17837 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=59 Card=17837 Bytes=71348)
1 0 SORT (ORDER BY) (Cost=59 Card=17837 Bytes=71348)
2 1 INDEX (FAST FULL SCAN) OF 'IND_TEST_ID' (NON-UNIQUE) (Cost=11 Card=17837 Bytes=71348)

The index fast full scan needs an extra SORT (ORDER BY) step. Anyone who has read this article carefully should know the reason by now: a fast full scan reads the index blocks in physical order using multiblock I/O, so its rows do not come back in key order and must be sorted afterwards.

November 12, 2009

dbconsole can’t be started with ssl error

Filed under: [RAC] — Tags: , , , — zhefeng @ 2:36 pm

Hit a problem where dbconsole failed to start; checking the trace file showed this:
emdctl.trc
-----------
2008-09-15 10:58:20 Thread-4136126688 ERROR http: 8: Unable to initialize ssl connection with
server, aborting connection attempt
2008-09-15 10:59:52 Thread-4136126688 ERROR ssl: nzos_Handshake failed, ret=29024.

After searching Metalink, I found that you just need to unsecure and then resecure the dbconsole to renew the expired dbconsole certificate:

1. Unsecure the Dbconsole
- Unsecure database control using
$ORACLE_HOME/bin>emctl unsecure dbconsole

2. Force an upload:

$ORACLE_HOME/bin> emctl upload

3. Also consider Resecuring the Dbconsole
- Secure database control using
$ORACLE_HOME/bin>emctl secure dbconsole
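To verify afterwards, something like the following should show dbconsole running (emctl also accepts start dbconsole and stop dbconsole):

$ORACLE_HOME/bin> emctl status dbconsole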

Starting with 10.2.0.4, HTTPS is used by default.

July 27, 2009

two ways to call Oracle Stored procedure from VB.net

Filed under: [PL/SQL dev&tuning] — Tags: , , , — zhefeng @ 10:05 pm

PROCEDURE TEST_C(temp out varchar2,a IN varchar2, b in varchar2)
IS
BEGIN
temp:=a || b;
END;

Solution 1:
add "Imports System.Data.OleDb" at the beginning of the code

Dim dbConn As New OleDbConnection
Dim dbComm As OleDbCommand

dbConn.ConnectionString = "Provider=MSDAORA;User ID=xxx;Password=xxx;Data Source=xxx;"
dbConn.Open()
dbComm = dbConn.CreateCommand

' parameter names must match the stored procedure's parameter names
dbComm.Parameters.Add("temp", OleDbType.VarChar, 30).Direction = ParameterDirection.Output
dbComm.Parameters.Add("a", OleDbType.VarChar, 30).Direction = ParameterDirection.Input
dbComm.Parameters("a").Value = "test "
dbComm.Parameters.Add("b", OleDbType.VarChar, 30).Direction = ParameterDirection.Input
dbComm.Parameters("b").Value = "OK"

dbComm.CommandText = "TEST_C"
dbComm.CommandType = CommandType.StoredProcedure
dbComm.ExecuteNonQuery()
dbConn.Close()

MessageBox.Show(dbComm.Parameters("temp").Value)

Solution 2:
add "Imports System.Data.OracleClient" at the beginning of the code

Dim oraConn As New OracleConnection
Dim oraComm As New OracleCommand

oraConn.ConnectionString = "Data Source=xxx;User Id=xxx;Password=xxx"
oraComm.Connection = oraConn

' parameter names must match the stored procedure's parameter names
oraComm.Parameters.Add("temp", OracleType.VarChar, 10).Direction = ParameterDirection.Output
oraComm.Parameters.Add("a", OracleType.VarChar, 10).Direction = ParameterDirection.Input
oraComm.Parameters("a").Value = "test "
oraComm.Parameters.Add("b", OracleType.VarChar, 10).Direction = ParameterDirection.Input
oraComm.Parameters("b").Value = "OK"

oraConn.Open()
oraComm.CommandText = "TEST_C"
oraComm.CommandType = CommandType.StoredProcedure
oraComm.ExecuteNonQuery()
oraConn.Close()

MessageBox.Show(oraComm.Parameters("temp").Value)

Note: the first parameter's name has to be the same as the Oracle stored procedure's parameter name;
if there is a dblink in the stored procedure, then Solution 1 is the only choice.

If you are trying to call an Oracle stored procedure passing CLOB/LOB parameters, don't use the ODBC solution, because it has a 32K limitation. Oracle has a Metalink note about this:
From metalink: Subject-32k Limitation When Passing LOB Parameter Through Stored Procedure Doc ID: 252102.1
https://metalink2.oracle.com/metalink/plsql/f?p=130:14:6916025231277951933::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,126125.1,1,0,1,helvetica

Workaround
~~~~~~~~~~

Workaround 1:

Use the Oracle Provider for OLEDB instead, making sure to set the command
property SPPrmsLOB to TRUE:

objCmd.Properties("SPPrmsLOB") = TRUE

Workaround 2:

Instead of passing a CLOB as a parameter to a stored procedure, use a method
that directly interfaces with the database and does not require the use of
a stored procedure to update the CLOBs as in the following example:

Note 126125.1 – ADO Streaming BLOB & CLOB Example Using ODBC and OLEDB in VB (SCR 1388).

June 26, 2009

11g rac could not be started

Filed under: [RAC] — Tags: , , , , , — zhefeng @ 1:53 pm

Today, after rebooting the RAC node servers, RAC 11g could not be started.
Here are the errors and solutions:

Errors:
1.[root@db03 racg]# crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.

[root@db03 racg]# crsctl check crs
Failure 1 contacting Cluster Synchronization Services daemon
Cannot communicate with Cluster Ready Services
Cannot communicate with Event Manager

[root@db03 racg]# ps -ef|grep -i init.d
root 3895 1 0 Jun21 ? 00:00:00 /bin/sh /etc/init.d/init.evmd run
root 3896 1 0 Jun21 ? 00:00:00 /bin/sh /etc/init.d/init.cssd fatal
root 3897 1 0 Jun21 ? 00:00:00 /bin/sh /etc/init.d/init.crsd run
root 3961 3895 0 Jun21 ? 00:00:04 /bin/sh /etc/init.d/init.cssd startcheck
root 4031 3896 0 Jun21 ? 00:00:04 /bin/sh /etc/init.d/init.cssd startcheck
root 4123 3897 0 Jun21 ? 00:00:04 /bin/sh /etc/init.d/init.cssd startcheck
root 5230 24639 0 12:58 pts/0 00:00:00 grep -i init.d

--check the system log
[root@db03 racg]# tail -f /var/log/messages
Jun 26 13:15:49 db03 automount[3295]: create_udp_client: hostname lookup failed: Operation not permitted
Jun 26 13:15:49 db03 automount[3295]: create_tcp_client: hostname lookup failed: Operation not permitted
Jun 26 13:15:49 db03 automount[3295]: lookup_mount: exports lookup failed for d
Jun 26 13:15:49 db03 logger: Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.4031.
Jun 26 13:15:49 db03 logger: Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.4031.

--check the trace file
[root@db03 racg]# cat /tmp/crsctl.4031
Oracle Cluster Registry initialization failed accessing Oracle Cluster Registry device: PROC-26: Error while accessing the physical storage Operating System error [Permission denied] [13]

--verify the raw files to see if they are bound
[root@db03 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 1
/dev/raw/raw2: bound to major 8, minor 2

--check the permissions, since the log mentioned them
[root@db03 ~]# cd /dev/raw
[root@db03 raw]# ls -al
total 0
drwxr-xr-x 2 root root 80 Jun 21 07:08 .
drwxr-xr-x 14 root root 3760 Jun 24 08:17 ..
crw------- 1 root root 162, 1 Jun 21 07:08 raw1
crw------- 1 root root 162, 2 Jun 21 07:08 raw2
--looks like the permissions are not correct

--change permissions (on both nodes)
[root@db03 raw]# chown oracle:dba /dev/raw/raw1
[root@db03 raw]# chown oracle:dba /dev/raw/raw2
[root@db03 raw]# chmod 660 /dev/raw/raw1
[root@db03 raw]# chmod 660 /dev/raw/raw2
[root@db03 raw]# chown oracle:dba /dev/sda1
[root@db03 raw]# chown oracle:dba /dev/sda2
[root@db03 raw]# chmod 660 /dev/sda1
[root@db03 raw]# chmod 660 /dev/sda2

--after that, check init.cssd: it's up!
[root@db03 raw]# ps -ef|grep init.d
root 3895 1 0 Jun21 ? 00:00:00 /bin/sh /etc/init.d/init.evmd run
root 3896 1 0 Jun21 ? 00:00:03 /bin/sh /etc/init.d/init.cssd fatal
root 3897 1 0 Jun21 ? 00:00:00 /bin/sh /etc/init.d/init.crsd run
root 7588 3896 0 13:25 ? 00:00:00 /bin/sh /etc/init.d/init.cssd oprocd
root 7606 3896 0 13:25 ? 00:00:00 /bin/sh /etc/init.d/init.cssd oclsomon
root 7630 3896 0 13:25 ? 00:00:00 /bin/sh /etc/init.d/init.cssd daemon
root 20251 6701 0 14:15 pts/0 00:00:00 grep init.d

--check that the crs service is also working now
[root@db03 db]# crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy

--bring up the rac resources again by using srvctl

Reference:
"why my oracle cluster could not start" http://surachartopun.com/2009/04/why-my-oracle-cluster-could-not-start.html

June 15, 2009

Oracle connection idle timeout with firewall

Since our Oracle 10g RAC was moved behind a firewall, we kept getting disconnected/timed out by the firewall whenever a connection went idle.
Searching Metalink, I found this article really useful:

Resolving Problems with Connection Idle Timeout With Firewall

An Overview

Firewalls (FW) have become common in today's networking as a way to protect the network environment. The firewall recognizes the TCP protocol and records the client/server socket end-points. It also recognizes TCP connection closure, at which point it releases the resources allocated to tracking the open connection. For every end-point pair, the firewall must allocate some resources (possibly small).

When the client or server closes the communication, it sends a TCP FIN packet; this is a normal socket closure. However, it is not uncommon for client/server communication to end abruptly without properly closing the end-points with a FIN packet, for example when the client or server crashes, loses power, or a network error prevents the closure packet from reaching the other end. In those cases the firewall will not know that the end-points will no longer use the opened channel. As a passive intermediary, it has no way to determine whether the end-points are still active. Since it cannot maintain resources forever, and since keeping a port open for an undefined time is a security threat, the firewall imposes a BLACKOUT on connections that stay idle for a predefined amount of time.

Initially, FWs were designed to protect the application servers and the network, and then to protect the client/server connection. With that in mind, a time-out measured in hours (1 hour is the default for most FWs) is reasonable. With the advent of more complex security schemes, FWs now sit not only between client and server, but also between different application servers (intranet, demilitarized zone (DMZ), and such) and database servers. So a horizon of 1 hour of idle time for communication between servers may not be appropriate.

Idle connections are to be expected from an application server. Consider J2EE using pooled JDBC connections: the pool usually returns the first available connection to the requester, so the first connections in the pool list are the most likely to be active. The last ones, at the end of the list, are used only at peak loads and will be inactive most of the time.

Other cases are the connections established from an HTTP Server, either SQL connections from mod_plsql or AJP connections from mod_oc4j.

Blackout

One of the inconveniences of these blackouts is that they are passive. Neither end-point is notified that the communication was banned. Only when the client or server tries to contact its peer does it discover that the peer is no longer reachable and the communication has already been broken.

The worst of all scenarios is the so-called "passive listener". It will never know, because a passive listener is a process at an end-point that simply waits for commands to arrive from the other end. A typical example is the backend database server process, which reads from the socket looking for new SQL statements to execute and, after each request is answered, returns to its passive state. When a blackout occurs, it will stay in this reading state forever, unless some of the following techniques are applied.

Resolving problems with connection idle time-out

TCP KeepAlive

You can enable the TCP KeepAlive option at the Operating System (OS) level. Once TCP KeepAlive is enabled and configured, a small probe packet is sent to the other end after every predefined inactivity interval, and an ACK is expected back. The ACK will be returned only if the other end is alive and reachable. If the ACK is not returned, then after some retries the OS closes the end-points and releases the resources allocated for them. The application listening on that socket receives the error, so it can take the necessary action upon receiving the error signal from the OS.

When a communication is blacked out by the firewall, the probe will not reach its other end, and then the OS will close the socket end points and the application will be notified of the exception.

The steps to configure TCP KeepAlive depend on the specific Operating System; refer to the appropriate OS documentation.

It is common to enable the TCP KeepAlive option at the server end, because the server holds many resources for a communication; if a communication is broken, those server resources should be released rather than held for an indefinite time. By default, TCP KeepAlive is not enabled in the OS.

TCP KeepAlive applies to all network applications running on that particular Operating System.

DCD for Database Servers

For database connections, one of the end-points is a passive listener, either a dedicated process or a dispatcher process. If the connection is blacked out, this backend will never know that the client cannot send any more requests, and it will keep holding important resources such as database sessions, locks, and at the very least a file descriptor used for maintaining the socket.

A solution is to make this backend "not so" passive, using DCD (dead connection detection) to figure out whether communication is still possible.

Simply set SQLNET.EXPIRE_TIME=10 (10 minutes, for example) in the server-side $ORACLE_HOME/network/admin/sqlnet.ora. With this parameter in place, after 10 minutes of inactivity the server sends a small 10-byte probe packet to the client. If the packet is not acknowledged, the connection is closed and the associated resources are released.
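In the file itself this is a single line; a minimal sketch (10 is the probe interval in minutes):

# server-side $ORACLE_HOME/network/admin/sqlnet.ora
SQLNET.EXPIRE_TIME = 10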

There are two benefits to this DCD:
1. If SQLNET.EXPIRE_TIME is less than the FW connection idle time-out, the firewall counts the probe packet as activity, and the idle time-out (firewall blackout) will never happen as long as both the client and the server processes are alive.

2. If SQLNET.EXPIRE_TIME is (let's say a little bit) higher than the FW idle limit, then as soon as the blackout happens the RDBMS will know about it and close the connection.

The first case is recommended when the connection comes from another application server; the second makes sense for client applications.

DCD works at the application level, on top of the TCP/IP protocol. If you set SQLNET.EXPIRE_TIME=10, do not expect connections to be closed exactly 10 minutes after a blackout or network outage; the TCP timeout and TCP retransmission values add to this time.
Please note that some newer firewalls may not see DCD packets as valid traffic, in which case DCD may not help. The firewall timeout should then be increased, or users should not leave the application idle for longer than the idle time-out configured on the firewall.

AJP Connections

It is not the default behavior in 9.0.2, but if Patch 2862660 is installed, the connection between an OHS server process and the J2EE container can be maintained for more than a single request. If the parameter Oc4jConnTimeout is set, the OHS maintains the connection for at least that time. The problem is that the child process may become inactive before that time-out occurs, in which case the connection remains open. While the child process is inactive, the connection is idle, and there is a chance it will be blacked out by the FW.

If this happens, the first thing the child does when it becomes active again is close the connection. But at that point the TCP socket closure cannot be completed, due to the blackout. Although the HTTP child process can simply ignore the closing failure and continue creating a new connection, the passive listener at the J2EE side (the worker thread) will be left hanging, with no chance for its resources to be released.

To solve this, Patch 3151686 must be installed and the java-option
-Dajp.keepalive=true
must be enabled.

After this, the blackout detection will rely on the TCP KeepAlive provided by the operating system.

As with DCD, this mechanism consists of sending probes (empty packets) when a socket has been inactive for a period of time. If there is no response, the socket is closed, and then even the passive listener receives an exception or signal to let it know that no further communication is possible.

CONCLUSION

As firewalls extend their functionality and are now placed between application servers, some tuning and parameter adjustment must be done to overcome the default rules established for client/server communications.

Note id: 257650.1
Link: https://metalink2.oracle.com/metalink/plsql/f?p=130:14:9497610239050450074::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,257650.1,1,1,1,helvetica

June 5, 2009

export name case-sensitive tables in oracle

Filed under: 1, [backup and recovery] — zhefeng @ 3:22 pm

exp xtreme file=xtreme_tables.dmp tables=('\"Product\"','\"Supplier\"',CUSTOMER,ORDERS)
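The backslashes are only there to get the quotes past the shell; a parameter file avoids the escaping entirely. A sketch, untested, using a hypothetical file named xtreme_tables.par containing:

file=xtreme_tables.dmp
tables=("Product","Supplier",CUSTOMER,ORDERS)

Then run:

exp xtreme parfile=xtreme_tables.par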

March 30, 2009

keep data consistency when using oracle exp/expdp

Filed under: [backup and recovery] — Tags: , , , , — zhefeng @ 1:25 pm

When we were using the old Oracle exp, we usually set consistent=y (the default is n) to ensure data consistency (the image taken of the data in the tables being exported represents the committed state of the table data at the same single point in time for all of the tables being exported).

However, starting with 10g this parameter was decommissioned. Today I happened to have a requirement for this, searched Metalink, and found this useful piece:

The versions 10gR1 and 10gR2 additionally put the message in the expdp header:

FLASHBACK automatically enabled to preserve database integrity

Does this guarantee export consistency to a single point of time?

Cause
The message:

FLASHBACK automatically enabled to preserve database integrity

only means that some of the tables will be assigned special SCNs (needed for Streams and Logical Standby). There is no consistency guaranteed between exported tables.

The next example demonstrates this:

1. Create the environment

connect / as sysdba

create or replace directory flash as '/tmp';
grant read, write on directory flash to system;

drop user usr001 cascade;
purge dba_recyclebin;

create user usr001 identified by usr001 default tablespace users temporary tablespace temp;
grant connect, resource to usr001;

connect usr001/usr001
create table part001
(
col001 number,
col002 date,
col003 varchar2(1000)
)
partition by range (col001)
(
partition p001 values less than (500001),
partition p002 values less than (1000001)
);

2. Populate the partitioned table: partition P001 contains 500000 rows and partition P002 contains 10 rows

connect usr001/usr001
begin
  for i in 1..500010 loop
    insert into part001 values (i, sysdate, lpad (to_char(i), 1000, '0'));
  end loop;
  commit;
end;
/

3. Start expdp

#> expdp system/passwd directory=flash dumpfile=usr001_1.dmp logfile=exp_usr001_1.log schemas=usr001

4. During point 3. is running, run in a separate session:

connect usr001/usr001
delete from part001 where col001 in (500001, 500002, 500003, 500004, 500005);
commit;

This will delete 5 rows in partition P002.

5. Expdp completes with:

Export: Release 10.2.0.3.0 - 64bit Production on Friday, 05 September, 2008 13:59:59

Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_02": system/******** directory=flash dumpfile=usr001_1.dmp logfile=exp_usr001_1.log schemas=usr001
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 568.0 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "USR001"."PART001":"P001" 486.3 MB 500000 rows
. . exported "USR001"."PART001":"P002" 10.50 KB 5 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_02 is:
/media/usbdisk/TESTS/FLASH/usr001_1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully completed at 14:02:47

=> From partition P002 only 5 rows were exported, so the written export dump is not consistent.

Solution

To generate a consistent Data Pump database or schema export similar to exports generated with the exp parameter CONSISTENT=Y, use the Data Pump parameters FLASHBACK_SCN and FLASHBACK_TIME for this functionality.

Following the example above, run expdp with:

#> expdp system/passwd directory=flash dumpfile=usr001_2.dmp logfile=exp_usr001_2.log schemas=usr001 flashback_time=\"TO_TIMESTAMP \(TO_CHAR \(SYSDATE, \'YYYY-MM-DD HH24:MI:SS\'\), \'YYYY-MM-DD HH24:MI:SS\'\)\"

This will end with:

Export: Release 10.2.0.3.0 - 64bit Production on Friday, 05 September, 2008 14:15:38

Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining Scoring Engine options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_02": system/******** directory=flash dumpfile=usr001_2.dmp logfile=exp_usr001_2.log schemas=usr001 flashback_time="to_timestamp (to_char (sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 568.0 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported “USR001”.”PART001″:”P001″ 486.3 MB 500000 rows
. . exported “USR001”.”PART001″:”P002″ 15.48 KB 10 rows
Master table “SYSTEM”.”SYS_EXPORT_SCHEMA_02″ successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_02 is:
/media/usbdisk/TESTS/FLASH/usr001_2.dmp
Job “SYSTEM”.”SYS_EXPORT_SCHEMA_02″ successfully completed at 14:16:55

=> Partition P002 contains all 10 rows, even though 5 rows were deleted while expdp was running. The FLASHBACK_TIME parameter guarantees consistency.
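
FLASHBACK_SCN achieves the same result with an explicit SCN instead of a timestamp. A minimal sketch (the dump/log file names here are illustrative, not from the original test):

-- In SQL*Plus, capture the current SCN:
select current_scn from v$database;

-- Then pass that SCN to expdp:
#> expdp system/passwd directory=flash dumpfile=usr001_3.dmp logfile=exp_usr001_3.log schemas=usr001 flashback_scn=<scn_from_query>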

Link: "Doc ID: 377218.1"

https://metalink2.oracle.com/metalink/plsql/f?p=130:14:4374876471460387797::::p14_database_id,p14_docid,p14_show_header,p14_show_help,p14_black_frame,p14_font:NOT,377218.1,1,0,1,helvetica

March 15, 2009

Oracle XML practice

Filed under: [PL/SQL dev&tuning] — zhefeng @ 3:39 pm

A very good Oracle XML practice from this link: http://www.itpub.net/viewthread.php?tid=899445&highlight=xml

Quoted here for future reference (original is in Chinese; translated below).

1. Verify that the database XML functionality has been installed successfully

C:\>sqlplus /nolog

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Nov 26 10:35:23 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

idle>conn sys/sys@doscdb as sysdba
Connected.

DOSCDB(sys)>select * from v$version;

BANNER
----------------------------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

DOSCDB(sys)>select username from dba_users;

USERNAME
-----------------------------------------------------------
DIP
TSMSYS
DBSNMP
SYSMAN
MDSYS
ORDSYS
CTXSYS
ANONYMOUS
EXFSYS
DMSYS
WMSYS
XDB
ORDPLUGINS
SI_INFORMTN_SCHEMA
MGMT_VIEW
SYS
SYSTEM
OUTLN
UOA
UBASE
UWF
UOCEAN
UBI
UTEMP

24 rows selected.

-- The XDB user exists

DOSCDB(sys)>DESCRIBE RESOURCE_VIEW;
Name                                                  Null?    Type
----------------------------------------------------- -------- ------------------------------------
RES                                                            XMLTYPE(XMLSchema "http://xmlns.orac
le.com/xdb/XDBResource.xsd" Element
"Resource")
ANY_PATH                                                       VARCHAR2(4000)
RESID                                                          RAW(16)

-- RESOURCE_VIEW describes successfully, which confirms that the XDB functionality is installed
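
Another way to confirm the installation (my addition, not in the original post) is to check the XML-related components in DBA_REGISTRY; 'Oracle XML Database' should show STATUS = VALID:

select comp_name, version, status
from dba_registry
where comp_name like '%XML%';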

2. Prepare the xml_test.xml file and place it in the e:\testxml directory

-- Contents of xml_test.xml

<xwork>
<include file="webwork-default.xml"/>
<package name="default" extends="webwork-default">
<Refpeng>PENG XML_TEST </Refpeng>
<result-types>
<result-type1 name="excel" class="com.hisoft.web.view.ExcelResult"/>
<result-type2 name="excelReport" class="com.hisoft.web.view.ExcelReportResult"/>
</result-types>

<interceptors>
<interceptor name="auth" class="com.hisoft.web.interceptor.RoleInterceptor" />
<interceptor name="XData" class="com.hisoft.web.interceptor.XDataInterceptor" />
<interceptor-stack name="roleStack">
<interceptor-ref name="defaultStack"/>
<interceptor-ref name="auth"/>
</interceptor-stack>
</interceptors>

<!-- global results -->
<global-results>
<result name="error" type="freemarker">/WEB-INF/pages/globalError.ftl</result>
<result name="redirect" type="freemarker">/WEB-INF/pages/redirect.ftl</result>
<result name="login" type="freemarker">/WEB-INF/pages/login.ftl</result>
</global-results>

<action name="login" class="com.hisoft.web.action.LoginAction">
<result name="success" type="freemarker">/WEB-INF/pages/globalredirect.ftl</result>
<result name="input" type="freemarker">/WEB-INF/pages/login.ftl</result>
<interceptor-ref name="validationWorkflowStack" />
</action>

<action name="index" class="com.opensymphony.xwork.ActionSupport">
<result name="success" type="freemarker">/WEB-INF/pages/index.ftl</result>
<interceptor-ref name="roleStack" />
</action>

<action name="left" class="com.hisoft.web.action.LeftMenuAction">
<result name="success" type="freemarker">/WEB-INF/pages/leftmenu.ftl</result>
<interceptor-ref name="roleStack" />
</action>

<action name="top" class="com.hisoft.web.action.TopAction">
<result name="success" type="freemarker">/WEB-INF/pages/top.ftl</result>
<interceptor-ref name="roleStack" />
</action>

<action name="main">
<result name="success" type="freemarker">/WEB-INF/pages/welcome.ftl</result>
<interceptor-ref name="roleStack" />
</action>

<action name="exportExcel" class="com.hisoft.web.action.ExportExcelAction">
<result name="success" type="excel">
<param name="filename">product.</param>
</result>
</action>

<action name="exportExcelReport" class="com.hisoft.web.action.ExportExcelListAction">
<result name="success" type="excelReport">
<param name="filename">product</param>
<param name="template">/WEB-INF/pages/product.xls</param>
</result>
</action>
</package>

</xwork>

3. Create an XMLType table to store the XML data

DOSCDB(sys)>create table xml_test of xmltype;

Table created.

DOSCDB(sys)>desc xml_test;
Name                                                  Null?    Type
----------------------------------------------------- -------- ------------------------------------
TABLE of XMLTYPE

DOSCDB(sys)>CREATE DIRECTORY xmldir AS 'e:\testxml\';

Directory created.

DOSCDB(sys)>INSERT INTO xml_test VALUES (XMLType(bfilename('XMLDIR', 'xml_test.xml'),
2   nls_charset_id('AL32UTF8')));

1 row created.

DOSCDB(sys)>commit;

Commit complete.

4. Now that the document is stored, how do we make use of it?

DOSCDB(sys)>desc user_xml_tables;
Name                                                  Null?    Type
----------------------------------------------------- -------- ------------------------------------
TABLE_NAME                                                     VARCHAR2(30)
XMLSCHEMA                                                      VARCHAR2(700)
SCHEMA_OWNER                                                   VARCHAR2(30)
ELEMENT_NAME                                                   VARCHAR2(2000)
STORAGE_TYPE                                                   VARCHAR2(17)

DOSCDB(sys)>select table_name from user_xml_tables;

TABLE_NAME
------------------------------------------------------------
XML_TEST

-- Retrieve the content of the whole document

DOSCDB(sys)>SELECT OBJECT_VALUE from xml_test;

OBJECT_VALUE
----------------------------------------------------------------------------------------------------
<xwork>
<include file="webwork-default.xml"/>
<package name="default" extends="webwork-default">
<Refpeng>PENG XML_TEST </Refpeng>
<result-types>
<result-type1 name="excel" class="com.hisoft.web.view.ExcelResult"/>
<result-type2 name="excelReport" class="com.hisoft.web.view.ExcelReportResult"/>
</result-types>
<interceptors>
<interceptor name="auth" class="com.hisoft.web.interceptor.RoleInterceptor"/>
<interceptor name="XData" class="com.hisoft.web.interceptor.XDataInterceptor"/>
<interceptor-stack name="roleStack">
<interceptor-ref name="defaultStack"/>

<action name="left" class="com.hisoft.web.action.LeftMenuAction">
<result name="success" type="freemarker">/WEB-INF/pages/leftmenu.ftl</result>
<interceptor-ref name="roleStack"/>
</action>
<action name="top" class="com.hisoft.web.action.TopAction">
<result name="success" type="freemarker">/WEB-INF/pages/top.ftl</result>
<interceptor-ref name="roleStack"/>
</action>
<action name="main">
<result name="success" type="freemarker">/WEB-INF/pages/welcome.ftl</result>
<interceptor-ref name="roleStack"/>
</action>
<action name="exportExcel" class="com.hisoft.web.action.ExportExcelAction">
<result name="success" type="excel">
<param name="filename">product.</param>
</result>
</action>
<action name="exportExcelReport" class="com.hisoft.web.action.ExportExcelListAction">
<result name="success" type="excelReport">
<param name="filename">product</param>
<param name="template">/WEB-INF/pages/product.xls</param>
</result>
</action>
</package>
</xwork>

-- Retrieve the content inside <xwork>

DOSCDB(sys)>SET LONG 10000
DOSCDB(sys)>SET PAGESIZE 100
DOSCDB(sys)>SELECT extract(OBJECT_VALUE, '/xwork')
2    FROM xml_test;

DOSCDB(sys)>SELECT extract(OBJECT_VALUE, '/xwork/package')
2  from xml_test;

EXTRACT(OBJECT_VALUE,'/XWORK/PACKAGE')
----------------------------------------------------------------------------------------------------
<package name="default" extends="webwork-default"><Refpeng>PENG XML_TEST </Refpeng><result-types><re
sult-type1 name="excel" class="com.hisoft.web.view.ExcelResult"/><result-type2 name="excelReport" cl
ass="com.hisoft.web.view.ExcelReportResult"/></result-types><interceptors><interceptor name="auth" c
lass="com.hisoft.web.interceptor.RoleInterceptor"/><interceptor name="XData" class="com.hisoft.web.i
/pages/login.ftl</result><interceptor-ref name="valida
tionWorkflowStack"/></action><action name="index" class="com.opensymphony.xwork.ActionSupport"><resu
lt name="success" type="freemarker">/WEB-INF/pages/index.ftl</result><interceptor-ref name="roleStac
k"/></action><action name="left" class="com.hisoft.web.action.LeftMenuAction"><result name="success"
type="freemarker">/WEB-INF/pages/leftmenu.ftl</result><interceptor-ref name="roleStack"/></action><
action name="top" class="com.hisoft.web.action.TopAction"><result name="success" type="freemarker">/
WEB-INF/pages/top.ftl</result><interceptor-ref name="roleStack"/></action><action name="main"><resul
t name="success" type="freemarker">/WEB-INF/pages/welcome.ftl</result><interceptor-ref name="roleSta
ck"/></action><action name="exportExcel" class="com.hisoft.web.action.ExportExcelAction"><result nam
e="success" type="excel"><param name="filename">product.</param></result></action><action name="expo
rtExcelReport" class="com.hisoft.web.action.ExportExcelListAction"><result name="success" type="exce
lReport"><param name="filename">product</param><param name="template">/WEB-INF/pages/product.xls</pa
ram></result></action></package>
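
To iterate over repeating nodes, e.g. every <action> in the package, this release supports an XMLSequence/TABLE() pattern. A sketch of my own, not from the quoted post:

-- List each action name in the package
SELECT extractvalue(value(t), '/action/@name') action_name
FROM xml_test x,
     TABLE(XMLSequence(extract(x.OBJECT_VALUE, '/xwork/package/action'))) t;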

5. A quick test drive
-- Simple access

DOSCDB(sys)>SELECT extract(OBJECT_VALUE, '/xwork/package/Refpeng')
2  from xml_test;

EXTRACT(OBJECT_VALUE,'/XWORK/PACKAGE/REFPENG')
----------------------------------------------------------------------------------------------------
<Refpeng>PENG XML_TEST </Refpeng>

-- Now pull out just the text value

DOSCDB(sys)>SELECT extractvalue(OBJECT_VALUE, '/xwork/package/Refpeng')
2  from xml_test;

EXTRACTVALUE(OBJECT_VALUE,'/XWORK/PACKAGE/REFPENG')
----------------------------------------------------------------------------------------------------
PENG XML_TEST

-- Trying a WHERE clause

DOSCDB(sys)>SELECT extractValue(OBJECT_VALUE, '/xwork/package/Refpeng') "Peng Test"
2  FROM xml_test
3  WHERE existsNode(OBJECT_VALUE,
4                     '/xwork/package[Refpeng="PENG XML_TEST "]') = 1;

Peng Test
---------------------------------------------------------------------------------------------
PENG XML_TEST

-- Trying an UPDATE

DOSCDB(sys)>UPDATE xml_test
2    SET OBJECT_VALUE =
3          updateXML(OBJECT_VALUE,
4                    '/xwork/package/Refpeng/text()',
5                    'PENG XML_UPDATE_TEST'
6                    )
7    WHERE existsNode(OBJECT_VALUE,
8                       '/xwork/package[Refpeng="PENG XML_TEST "]') = 1;

1 row updated.

DOSCDB(sys)>COMMIT;

Commit complete.

DOSCDB(sys)>SELECT extractValue(OBJECT_VALUE, '/xwork/package/Refpeng') "Peng Test"
2  FROM xml_test
3  /

Peng Test
----------------------------------------------------------------------------------------------------
PENG XML_UPDATE_TEST
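
Removing a node works along the same lines with deleteXML (introduced in 10gR2, so it should be available on this release). A sketch I am adding for completeness, not part of the quoted post:

-- Delete the <Refpeng> node entirely
UPDATE xml_test
   SET OBJECT_VALUE = deleteXML(OBJECT_VALUE, '/xwork/package/Refpeng')
 WHERE existsNode(OBJECT_VALUE, '/xwork/package/Refpeng') = 1;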
