Hidden Benefits of 10g

by Jonathan Lewis

There are various features of Oracle 10g that get all the publicity and press, and for many people, the decision to migrate is driven by whether or not they happen to see anything in the high-profile features that might be useful.

However, there are numerous other changes that are likely to be of benefit to almost everyone – and in this article I have described just a few of them.

This article is based on material first published in the presentation, “The evolution of Optimisation: 8i to 10g,” and the seminar “Optimising Oracle – Performance by Design.”

Optimizer Improvements (1)

When an Oracle database is used well, most of the work it does is in the SQL – so anything that improves the ability of the optimiser to find a better path is likely to be a good thing.

You may have heard already about the ability of the 10g optimiser to perform an outer join “the wrong way round” if it's a hash join. Historically the “preserved” table had to appear in the join order before the “outer” table (the one with the additional null row indicated in Oracle syntax by the (+) in the join clause). Consider the query:

select  *
from
        small_table st,
        large_table lt
where   st.n1(+) = lt.n1

In Oracle 8 and 9, the join order has to be large table -> small table because of the outer join. This means Oracle has to hash the large table into memory, and then probe with the small table – but hash joins usually work best if the “small table” is the one that is hashed into memory. In 10g, this can happen. Spot the difference in the execution plans from 9i and 10g:

Execution Plan (9i)
  HASH JOIN (OUTER) (Cost=51 Card=10000)
    TABLE ACCESS (FULL) OF 'LARGE_TABLE' (Cost=29 Card=10000)
    TABLE ACCESS (FULL) OF 'SMALL_TABLE'

Execution Plan (10g)
  HASH JOIN (RIGHT OUTER) (Cost=36 Card=10000)
    TABLE ACCESS (FULL) OF 'SMALL_TABLE' (TABLE) (Cost=3)
    TABLE ACCESS (FULL) OF 'LARGE_TABLE' (TABLE) (Cost=33 Card=10000)

Note especially how the 10g execution plan has the (RIGHT OUTER) option on the hash join, and how the cost of the 10g hash join is simply the sum of the two required tablescans. The 9i plan is not so lucky: the large table would not hash into memory, and the cost reflects the expected disk access due to the one-pass hash (a prediction reflected at run time by the statistic "workarea executions - onepass").
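The mechanics of that "right outer" join can be sketched outside Oracle. The following Python model is purely illustrative (it is not Oracle's implementation): hash the small, deficient row source into memory, then probe it once with each row of the large, preserved table — every preserved row is emitted, null-padded when no match is found.

```python
from collections import defaultdict

def right_outer_hash_join(preserved, deficient, key):
    """Toy model: build on the small (deficient) side, probe with the preserved side."""
    buckets = defaultdict(list)
    for row in deficient:                       # build phase: only the small table is hashed
        buckets[row[key]].append(row)
    result = []
    for row in preserved:                       # probe phase: one lookup per preserved row
        matches = buckets.get(row[key])
        if matches:
            result.extend((row, m) for m in matches)
        else:
            result.append((row, None))          # null-padded, as the (+) requires
    return result

large = [{"n1": i} for i in range(5)]           # stands in for large_table lt
small = [{"n1": 1}, {"n1": 3}]                  # stands in for small_table st
joined = right_outer_hash_join(large, small, "n1")
```

Because only the two-row table is ever held in memory, the join itself adds essentially nothing beyond the cost of scanning both row sources — which is why the 10g plan costs out as the sum of the two tablescans.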

Optimizer Improvements (2)

But there are other, more subtle optimisations that you might never spot (apart from the improved performance) unless you look very closely. Consider the following query:

select  /*+ index(t1 i1) */
        *
from    t1
where   n1 = 1
and     n2 = 1
and     n4 = 1
and     (n3+1 = 1 or n3+1 = 2 or n3+1 = 3)

To appreciate this example you have to know that there is an index on table t1, with the columns (n1, n2, n3, n4).

In a simple test case against Oracle 9i and Oracle 10g, the older version performed a full tablescan while the newer version used an index range scan. There were various reasons for this, but the most significant one showed up when I forced Oracle 9i to use the index (hints are orders, not suggestions) and then ran a full explain plan, rather than just a quick autotrace.

The columns in the plan to watch out for are the access_predicates and filter_predicates. It is quite hard to pack a full explain plan onto a short line – but the following extract gives you a good indication of what's going on:

Execution Plan (9i)
TABLE ACCESS T1 (by index rowid)  -
        Filter (“T1”.”N3”+1=1 OR “T1”.”N3”+1=2 OR “T1”.”N3”+1=3)
  INDEX NON-UNIQUE I1 (range scan)  -
        Access (“T1”.”N1”=1 AND “T1”.”N2”=1 AND “T1”.”N4”=1)  -
        Filter (“T1”.”N4”=1)

Execution Plan (10g)
TABLE ACCESS T1 (by index rowid)
  INDEX I1 (range scan)  -
        Access (“N1”=1 AND “N2”=1 AND “N4”=1)  -
        Filter (“N4”=1 AND (“N3”+1=1 OR “N3”+1=2 OR “N3”+1=3))

Note the critical difference: in Oracle 9i, the optimiser considers only columns n1, n2, and n4 when examining the entries in the index leaf blocks. Any examination of column n3 is postponed until after the table has actually been visited.

In Oracle 10g, the optimiser has managed to infer that column n3 could be examined as the index leaf blocks are being acquired, and before going to the table – so the only table blocks visited are exactly the right ones.

As most DBAs know, unless all your table accesses by index range scan are high precision (which means just two or three rows targeted by the index), then most of the cost of range scans is in the number of table blocks visited. (This is one of the main reasons why index rebuilds are generally a waste of effort, of course.)

So here, in the upgrade from 9i to 10g, we see a tiny enhancement in the optimiser that hits a critical performance area very precisely – the more redundant table visits you eliminate, the faster your queries go. And with this optimisation, Oracle reduces the redundant visits without introducing a penalty elsewhere.
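A toy model makes the saving concrete (this is hypothetical illustration, not Oracle internals): walk a set of index entries, count how many table rows get fetched when the n3 filter is applied in the index leaf blocks versus postponed to the table visit. Both strategies ultimately return the same rows; only the number of table visits differs.

```python
# Each index entry is (n1, n2, n3, n4, rowid); the table is addressed by rowid.
index_entries = [(1, 1, n3, 1, rowid) for rowid, n3 in enumerate(range(10))]

def table_visits(entries, filter_at_index):
    """Count table rows fetched during the range scan."""
    visits = 0
    for n1, n2, n3, n4, rowid in entries:
        if not (n1 == 1 and n2 == 1 and n4 == 1):    # predicates on indexed columns
            continue
        if filter_at_index and (n3 + 1) not in (1, 2, 3):
            continue                                  # 10g-style: rejected in the leaf block
        visits += 1                                   # table block visit by rowid
    return visits

late = table_visits(index_entries, filter_at_index=False)   # 9i-style: filter at the table
early = table_visits(index_entries, filter_at_index=True)   # 10g-style: filter in the index
```

With ten candidate index entries but only three values of n3 that survive the filter, the early-filtering scan touches three table rows instead of ten.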

Developer Fixes

But it's not just the optimiser. You've probably heard about the improvements in PL/SQL – it now has a proper optimising compiler built in, so it can do things like eliminate redundant assignments, promote constant assignments outside the loop, and so on.

But there are some tricks in PL/SQL that aren't relevant to “normal” compilers, because “normal” languages don't understand Oracle databases. Here's the cutest (and possibly most dangerous) specialist enhancement – one that isn't in the press releases. Consider the following PL/SQL fragment:

declare
        m_n number;
begin
        m_n := 0;
        for i in (select v1 from t1) loop
                i.v1 := upper(i.v1);
                m_n := m_n + 1;
                exit when m_n = 200;
        end loop;
end;
/

Single row processing often results in a major scalability problem – there are lots of little overheads and contention points all over the place if you have to run this type of code with any significant degree of concurrency. So 10g fixes the code up behind the scenes. If you look for the driving statement in v$sql, and check the number of executions, fetches and rows_processed, you will see the following differences:

select  executions, rows_processed, fetches, sql_text
from    v$sql
where   lower(sql_text) = 'select v1 from t1';

                      9.2     10.1
Executions              1        1
Rows processed        200      200
Fetches               200        2

Note the change in the number of fetches. Oracle 10g processed the same number of rows as the 9.2 equivalent, but it converted the single-row processing into array processing with an array size of 100. In general, this improves performance and scalability. (In my test case, the number of buffer_gets also dropped from 202 to 10, another indication of the typical benefit, relating particularly to cache-buffers-chains latching, that you get from array processing.)
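The arithmetic behind the fetch counts is easy to mimic: a cursor that hands back one row per call needs one round trip per row, while fetching in batches of 100 needs only one round trip per batch. A hedged Python sketch (the batch size of 100 mirrors what 10g appears to use; nothing here is Oracle code):

```python
def fetch_all(rows, arraysize):
    """Return all rows plus the number of fetch round trips needed."""
    fetched, fetches = [], 0
    for start in range(0, len(rows), arraysize):
        fetches += 1                               # one "fetch" call per batch
        fetched.extend(rows[start:start + arraysize])
    return fetched, fetches

data = list(range(200))
_, single_row = fetch_all(data, arraysize=1)       # 9.2 behaviour: 200 fetches
_, arrayed = fetch_all(data, arraysize=100)        # 10.1 behaviour: 2 fetches
```

The same 200 rows arrive either way; the batched version just pays the per-call overhead 2 times instead of 200.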

If you’re wondering why my predicate was based on lower(sql_text), there’s yet another little enhancement. Oracle 10g tries to “standardise” the SQL as much as possible before it hits the shared SQL area: text is capitalised, and white space eliminated before the parser has to check to see if the incoming SQL has been previously parsed.
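The idea is the same as any shared-pool keying scheme: normalise the text before the lookup so that trivially different texts hash to the same entry. A rough Python illustration of the flavour (the rules sketched here — upper-casing and collapsing white space — are my assumption for the demo, not Oracle's documented algorithm):

```python
import re

def normalise(sql):
    """Hypothetical pre-parse standardisation: upper-case, collapse white space."""
    return re.sub(r"\s+", " ", sql.strip()).upper()

a = normalise("select v1   from t1")
b = normalise("SELECT v1\nFROM   t1")
# both texts map to the same shared-SQL key
```

Two statements that differ only in case and spacing become a single entry, which is why a case-insensitive predicate was needed to find the statement in v$sql.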

You probably noticed my comment about “in general, this improves performance.” There are cases when it does exactly the opposite. The array size seems to be fixed at 100, so if I change my code to exit after just two rows, here’s what’s going to happen if there is some complicated and expensive expression that has to be evaluated for each row I return:

declare
        m_n number;
begin
        m_n := 0;
        for i in (select wait_row(v1, 0.02) v1 from t1) loop
                i.v1 := upper(i.v1);
                m_n := m_n + 1;
                exit when m_n = 2;              -- quit after two rows
        end loop;
end;
/

As a simple demonstration, the wait_row() function in this example returns its first input after waiting the number of seconds given by its second input. Under Oracle 9.2, the loop completes in about 0.04 seconds. Under Oracle 10g, it takes 2.00 seconds, because the array fetch triggers the wait 100 times.
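The penalty is easy to reproduce in miniature: if the fetch layer eagerly evaluates an expensive expression for a whole batch of 100 rows, a consumer that stops after two rows has still paid for 100 evaluations. A Python sketch of the effect (the batch size of 100 is assumed from the behaviour above, and the expensive work is replaced by a call counter rather than a real 0.02-second sleep):

```python
calls = 0

def expensive(v):
    """Stands in for wait_row(): counts evaluations instead of sleeping."""
    global calls
    calls += 1
    return v

def batched_cursor(rows, arraysize=100):
    for start in range(0, len(rows), arraysize):
        # Eager evaluation of the whole batch before the first row is delivered.
        batch = [expensive(v) for v in rows[start:start + arraysize]]
        yield from batch

consumed = 0
for _ in batched_cursor(list(range(1000))):
    consumed += 1
    if consumed == 2:           # exit when m_n = 2
        break
# Only 2 rows were consumed, but the first array fetch evaluated 100 of them.
```

With a 0.02-second cost per evaluation, 100 evaluations explains the 2.00-second elapsed time against the 0.04 seconds that two evaluations would cost.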

There is a workaround for this problem, brought to my attention by Paul Drake and described in a presentation by Steven Feuerstein. There are three different levels for PL/SQL optimisation, and this particular optimisation is one of the effects that appears only at level 2 (the default). This particular feature is disabled if you set the optimisation level back to 1 or zero, for example, with the command:

alter session set plsql_optimize_level = 1;

Moreover, as Steven Feuerstein's presentation shows, you can ensure that a package is compiled at a particular level and stays at that level when it is subsequently recompiled. So if you do have some code for which this feature is a threat, you can handle it tidily.


There are many ways in which Oracle has been enhanced to reduce resource consumption, improve performance, and increase scalability. Even if none of the headline features holds any appeal, it would be sensible to do some testing of 10g to see if some of your performance problems disappear "for no apparent reason."


Jonathan Lewis is a freelance consultant with more than 16 years' experience of Oracle. He specialises in physical database design and the strategic use of the Oracle database engine. He is the author of Practical Oracle 8i: Building Efficient Databases, published by Addison-Wesley, and has contributed to two other books on Oracle. Further details of his published papers, presentations, tutorials and seminars can be found at his website, which also hosts The Co-operative Oracle Users' FAQ for the Oracle-related Usenet newsgroups.

Last modified 2005-04-18 07:58 PM