Oracle 10G R2 New Features Part 2
Enhanced Commit
Before we discuss 10G R2's new Enhanced Commit feature, a little background
information is in order. For all you Oracle gurus who already know how a commit works,
please skip ahead to the paragraph titled "Oracle's New Commit Parameters."
Commit Process
When a transaction commits, the server process places a commit record in the
log buffer and tells LGWR to perform a contiguous write of all the redo log
buffer entries up to and including the COMMIT record to the redo log files (not
the data files!). Oracle is now able to guarantee that the changes will not
be lost even if there is an instance failure. Please note that the flushing
of dirty data buffers to the data files is performed independently by DBWR and
can occur either before or after the commit. This could result in Oracle having
committed data on the redo log files but not on the data files.
The Synchronization Process
If a failure occurs and the instance terminates abnormally, Oracle must restore
the database to a transaction consistent state just prior to the failure. The
database must remove all uncommitted data from the data files and replay all
changes committed and recorded in the redo log files but not recorded on the
data files (remember that a commit forces LGWR to flush, not DBWR). Oracle restores
the database to a transaction consistent state using roll forward and roll backward
processes.
Roll Forward Phase
The purpose of the roll forward is to apply all changes recorded in the log
files to the corresponding data blocks. Rollback/undo segments are populated
during the roll forward phase. A rollback/undo segment entry is added if an
uncommitted data block is found on the data files and no undo entry exists in
the rollback/undo segment. At the end of this phase, all committed data is in
the data files, although uncommitted data may still exist. The database is then
opened to allow user access to database data.
Roll Backward Phase
Oracle removes the uncommitted data by using the rollback/undo segments populated
during the roll forward phase or prior to the crash. Blocks are rolled back
when requested by the Oracle server or a user's server process, depending on
which process requests the block first. The database is therefore available
even while the roll backward phase is executing. Only those data blocks that
are actively participating in the rollback are not available. Total data synchronization
has now occurred.
Oracle's New Commit Parameters
10G R2's new COMMIT clause provides options that control how the redo stream
is written from the redo log buffer to the online redo log files. As a result,
it is really important that we fully understand these new parameters because
they affect the way Oracle commits data as we now know it.
In the past, we knew that a COMMIT guaranteed that a transaction's data was safely stored in the database. After 10G R2, this guarantee depends upon the COMMIT statement options chosen for that particular transaction. The end result is that the data you think should be in the database might not be. Oh boy!
Let's take a look at 10G R2's new COMMIT WRITE <option> clause (a few examples follow the list):
- COMMIT WRITE WAIT; - This is the default option. WAIT tells the COMMIT to function as it has in the past. The transaction waits until LGWR posts a message back stating that all redo changes were flushed from the redo log buffer to the online redo log files on disk. If you issue a COMMIT with no other parameters, this is the option you get.
- COMMIT WRITE NOWAIT; - The transaction continues processing without waiting for LGWR to post a message back stating that all redo changes are on disk.
- COMMIT WRITE BATCH; - Oracle batches groups of COMMIT statements together and flushes their redo to the redo logs as a single unit. If you have batch jobs that commit too frequently, this is a possible remedy. I would recommend that you first ask the developers to change their code to increase the amount of work performed between commits before you implement this option.
- COMMIT WRITE IMMEDIATE; - Notifies Oracle to flush the log buffer to the online redo logs immediately.
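Here is a minimal sketch of how these options might look inside a transaction. The ORDERS table, STATUS column and ORDER_ID values are made up purely for illustration:

-- Hypothetical table and values, used only to illustrate the COMMIT WRITE syntax.
UPDATE orders
   SET status = 'SHIPPED'
 WHERE order_id = 1001;

-- Same durability guarantee as a plain COMMIT: wait for LGWR to confirm
-- the redo is on disk before returning control to the session.
COMMIT WRITE WAIT;

UPDATE orders
   SET status = 'SHIPPED'
 WHERE order_id = 1002;

-- Return immediately; the session does not wait for LGWR's confirmation.
COMMIT WRITE NOWAIT;

UPDATE orders
   SET status = 'SHIPPED'
 WHERE order_id = 1003;

-- Let Oracle group this commit's redo with other commits and flush it as a batch.
COMMIT WRITE BATCH;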
Oracle also provides the dynamic initialization parameter COMMIT_WRITE to control redo stream processing at the database or session level.
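As a rough sketch of how the parameter can be set (the value string combines a batching option with a wait option; treat the exact value strings below as illustrative):

-- Set the default commit behavior for the whole instance.
ALTER SYSTEM SET COMMIT_WRITE = 'BATCH,NOWAIT';

-- Or override the behavior for a single session only.
ALTER SESSION SET COMMIT_WRITE = 'IMMEDIATE,WAIT';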
Why would Oracle do this? The answer is performance. Commit processing has a measurable impact on transaction performance. One of the first questions a DBA asks (or should ask) when tuning a long running batch job is how many updates are occurring between commits. If the answer is "after every record", a lengthy discussion on the detrimental impact of performing too many commits is justified.
Oracle provides the "log file sync" wait event to allow administrators to monitor commit performance. The wait is measured when Oracle foreground processes issue a COMMIT or ROLLBACK statement and are waiting for LGWR to post a message back stating that all redo changes are safely on disk. Oracle documentation on Metalink provides the following information to help reduce the number of log file sync wait events.
"If there are lots of short duration transactions see if it is possible to BATCH transactions together so there are fewer distinct COMMIT operations. Each commit has to have it confirmed that the relevant REDO is on disk. Although commits can be "piggybacked" by Oracle reducing the overall number of commits by batching transactions can have a very beneficial effect."
Notice that Oracle is recommending that users BATCH transactions together to reduce the number of COMMIT statement operations.
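To see how much time an instance is spending waiting on commit confirmations, you can query the cumulative wait statistics. A minimal sketch (the view and event name are standard; timing columns are reported in centiseconds):

SELECT event,
       total_waits,
       time_waited,    -- cumulative wait time in centiseconds
       average_wait    -- average wait time in centiseconds
  FROM v$system_event
 WHERE event = 'log file sync';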
Transportable AWR Snapshots
10G R2's Enterprise Manager allows administrators to transfer Automatic Workload
Repository snapshots to other 10G R2 workload repositories for offline analysis.
This is accomplished by the administrator specifying a snapshot range and extracting
the AWR data to a flat file. The flat file is then loaded into a user-specified
staging schema in the target repository. To complete the transfer, the data
is copied from the staging schema into the target repository's SYS schema. The
data in the SYS schema is then used as the source for the ADDM analysis.
If
the snapshot range already exists in the SYS or staging schemas, the data being
imported is ignored. All data in snapshot ranges that does not conflict with
existing data is loaded. 10G R2 contains a new package DBMS_SWRF_INTERNAL to
provide AWR snapshot export and import functionality.
The example below exports a snapshot range starting with 101 and ending at 105 to the output dump file 'awr_export_wmprod1_101_105' in the directory '/opt/oracle/admin/awrdump/wmprod1':
BEGIN
  DBMS_SWRF_INTERNAL.AWR_EXTRACT
  (
    DMPFILE => 'awr_export_wmprod1_101_105',
    DMPDIR  => '/opt/oracle/admin/awrdump/wmprod1',
    BID     => 101,
    EID     => 105
  );
END;
/
We then use the AWR_LOAD procedure to load the data into our target repository staging schema:
BEGIN
  DBMS_SWRF_INTERNAL.AWR_LOAD
  (
    SCHNAME => 'foot',
    DMPFILE => 'awr_export_wmprod1_101_105',
    DMPDIR  => '/opt/oracle/admin/awrdump/wmprod1'
  );
END;
/
The last step is to transfer the data from our staging schema (FOOT) to the SYS schema for analysis:
BEGIN
  DBMS_SWRF_INTERNAL.MOVE_TO_AWR
  (
    SCHNAME => 'foot'
  );
END;
/
Transportable SQL Tuning Sets
Administrators are now able to transfer SQL Tuning Set workloads from one database
to another. This allows the administrator to transfer poorly performing SQL statements
to test systems where they can be safely analyzed and tuned. This
prevents administrators from having to perform extended diagnostics on SQL
statements in their production environments.
In addition, once a statement is tuned, it must be re-executed to determine if the tuning changes accomplished their intended results. If the results don't match expectations, another round of tuning and monitoring is warranted. Transporting the statements in bulk to another database facilitates the monitoring and testing of multiple SQL statements. How do you create a SQL Tuning Set? That will be the topic of one of my upcoming blogs after we finish discussing 10G R2. I'll also show you how to move it to another environment at that time.
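In the meantime, here is a minimal preview sketch of the packing side of the transfer using DBMS_SQLTUNE. The tuning set name MY_STS, staging table STS_STAGE, and schema FOOT are made-up examples, and exact parameter names may vary slightly in your release:

BEGIN
  -- Create a staging table in the source database to hold the tuning set.
  DBMS_SQLTUNE.CREATE_STGTAB_SQLSET
  (
    table_name  => 'STS_STAGE',
    schema_name => 'FOOT'
  );

  -- Copy (pack) the tuning set into the staging table. The staging table
  -- can then be moved to the test system with Data Pump or a database link.
  DBMS_SQLTUNE.PACK_STGTAB_SQLSET
  (
    sqlset_name          => 'MY_STS',
    staging_table_name   => 'STS_STAGE',
    staging_schema_owner => 'FOOT'
  );
END;
/

On the target side, DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET reverses the operation.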
DBMS_OUTPUT Line Length and Output Limit Increases
Oracle provides the DBMS_OUTPUT package to display debugging information from
PL/SQL code. The package was not designed to display reports or format output
to standard output. But there are times when developers need to pump out large
amounts of diagnostic information to help them debug their programs.
But developers using 10G R1 and earlier versions were frequently constrained by DBMS_OUTPUT's 255 byte single line limit and total session limit of 1,000,000 bytes. As a result, when using SQL*Plus to debug their PL/SQL code, developers often received the following error messages:
ORA-20000: ORU-10028: line length overflow, limit of 255 bytes per line
Single line limit of 255 bytes exceeded.
ORA-20000: ORU-10027: buffer overflow, limit of 1000000 bytes
Maximum amount of 1 million bytes per session exceeded.
10G R2 increases the single line limit to 32,767 bytes and removes the overall session limit altogether.
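To take advantage of the new limits, something like the following should work in a 10G R2 SQL*Plus session. This is a minimal sketch: SIZE UNLIMITED is the new SQL*Plus syntax, and passing a NULL buffer size to DBMS_OUTPUT.ENABLE requests an unlimited session buffer:

SET SERVEROUTPUT ON SIZE UNLIMITED

BEGIN
  -- NULL buffer size = no overall session limit in 10G R2.
  DBMS_OUTPUT.ENABLE(buffer_size => NULL);

  -- Lines longer than the old 255 byte limit are now allowed (up to 32,767 bytes).
  DBMS_OUTPUT.PUT_LINE(RPAD('x', 5000, 'x'));
END;
/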
Memory Corruption Checking
Setting the DB_BLOCK_CHECKSUM system initialization parameter to TRUE notifies
DBWR to calculate a value based on the number of bytes in the block and store
that value in the cache header of every data block when writing it to disk.
This calculated number is called a checksum. Checksums are verified when the
block is read and are used by Oracle to detect corruption caused by the various
components of disk storage systems.
In Oracle 10G R2, administrators can set DB_BLOCK_CHECKSUM to FULL to notify Oracle to verify the checksum as it is being transferred from one memory construct to another (i.e. user session to buffer cache). This improves Oracle's capability of identifying block corruptions and is particularly useful when propagating blocks to standby databases.
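Since DB_BLOCK_CHECKSUM is a dynamic parameter, it can be changed without bouncing the instance. A minimal sketch (SCOPE = BOTH assumes the instance is running from an spfile):

-- Verify checksums as blocks move between memory areas as well as on disk reads.
ALTER SYSTEM SET DB_BLOCK_CHECKSUM = FULL SCOPE = BOTH;

-- Confirm the current setting.
SELECT value FROM v$parameter WHERE name = 'db_block_checksum';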
Online Redefinition of a Single Partition
Oracle9i introduced a feature that allows DBAs to perform complex table redefinitions
online. The DBMS_REDEFINITION utility allows users to change column names and
datatypes, manipulate data, add and drop columns and partition tables while
the table is being accessed by online transactions. DBMS_REDEFINITION provides
significant benefits over more traditional methods of altering tables that require
the object to be taken off-line during the redefinition process.
10G R2 enhances DBMS_REDEFINITION by providing it with the capability of redefining a single partition of a partitioned table. One benefit that stands out is that administrators can now use DBMS_REDEFINITION to move a single partition to a different tablespace while the data is being updated. In addition, this enhancement allows a partitioned table to be redefined, one partition at a time.
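A rough sketch of the single-partition flow is shown below. The table SALES, partition P2004, interim table SALES_INTERIM, and schema FOOT are made-up names; the PART_NAME parameter is the piece that is new in 10G R2, and exact parameter names should be checked against your release:

BEGIN
  -- Confirm the partition can be redefined (rowid-based, no primary key required).
  DBMS_REDEFINITION.CAN_REDEF_TABLE
  (
    uname        => 'FOOT',
    tname        => 'SALES',
    options_flag => DBMS_REDEFINITION.CONS_USE_ROWID,
    part_name    => 'P2004'
  );

  -- Start copying the partition's rows into the interim table,
  -- which was pre-created in the desired target tablespace.
  DBMS_REDEFINITION.START_REDEF_TABLE
  (
    uname        => 'FOOT',
    orig_table   => 'SALES',
    int_table    => 'SALES_INTERIM',
    options_flag => DBMS_REDEFINITION.CONS_USE_ROWID,
    part_name    => 'P2004'
  );

  -- Swap the interim segment in for the original partition.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE
  (
    uname      => 'FOOT',
    orig_table => 'SALES',
    int_table  => 'SALES_INTERIM',
    part_name  => 'P2004'
  );
END;
/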
Transportable Tablespace from RMAN Backups
A new feature in Oracle8i provided a high-speed mechanism for copying tablespaces
(datafiles) between identical systems and allowing the same data to be accessed
by both systems. Now data movement could be as fast as simply transferring the
files between machines.
Transporting tablespaces is particularly useful for:
- Feeding data from OLTP systems to data warehouse staging systems
- Updating data warehouses and data marts from staging systems
- Loading data marts from central data warehouses
- Archiving OLTP and data warehouse systems efficiently
The Export and Import utilities are used to "unplug" and "plug-in" the data dictionary structural information (metadata) that pertains to the objects in the tablespace set being transported. The data is copied by operating system commands and is only limited by network bandwidth and hardware limitations. Since the Export/Import utilities only transfer the metadata, they run much faster than their more traditional counterparts.
The problem is that to "unplug"
the tablespace, it had to be placed in read-only mode first. This could be a problem for
applications that require 24 x 7 data availability. In 10G R2, administrators
are able to create transportable tablespaces from RMAN backups instead of the
online database datafiles. This allows the data to be moved without making it
read only in the source database.
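A minimal sketch of the new RMAN command; the tablespace name USERS_HIST and the destination paths are made-up examples:

RMAN> TRANSPORT TABLESPACE users_hist
        TABLESPACE DESTINATION '/opt/oracle/transport'
        AUXILIARY DESTINATION '/opt/oracle/aux';

Behind the scenes, RMAN restores the tablespace from its backups into a temporary auxiliary instance, recovers it, and writes the transportable datafiles and metadata dump to the TABLESPACE DESTINATION, leaving the source tablespace read-write the entire time.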
Up Next
In my final blog on 10G R2, we'll discuss DML error logging, ASM command
line, RMAN backup encryption, Transparent Data Encryption and a whole LOT more!
Some questions I have:
What happens if I issue COMMIT WRITE NOWAIT, continue with the logic of my program, and a few milliseconds later the commit fails for whatever reason? Do we need to check the SQLCODE a few times after issuing the commit nowait to see if the commit was successful?
How do we know how often/how many commits are batched together when we issue commit batch?
What are the differences between commit nowait and immediate?
Some examples to illustrate the differences would be great!
Thanks