Tuning and Performance

by Christopher Foot

Choosing a Database Block Size (DB_BLOCK_SIZE) Parameter

Bigger blocks mean more data transferred per I/O call. This usually improves performance, because the setup portion of an I/O (disk seek plus rotational latency) makes up the majority of its cost; transferring a few extra kilobytes per call is comparatively cheap. The benefit is not limited to data blocks. A bigger block size also means more space for key storage in the branch nodes of B-tree indexes, which reduces index height and improves the performance of indexed queries.

But if a segment is subject to high transactional concurrency, bigger blocks concentrate that concurrency onto individual data blocks: you are storing more rows in each block, so more transactions access the same block at the same time. If you have a very high number of concurrent users and you increase the database block size without taking INITRANS and MAXTRANS into consideration, you may cause performance problems.

In addition, Oracle has a tendency to waste memory when it writes undo into rollback segment blocks. If your database block size is large, your undo per transaction is small, and your transaction concurrency is high (two or more transactions per rollback segment), you may see transaction slowdowns in highly volatile OLTP environments.

Recommendations:

      • Use a DB_BLOCK_SIZE of 2048 only in special circumstances. A neat trick is to use a 2K block size for stress tests in which you are trying to drive server workload artificially high so you can analyze the bottlenecks less expensively (i.e., without generating a lot of test data and test transactions).
      • Use a DB_BLOCK_SIZE of 8192 for most transaction processing systems. This represents a good balance between the advantages of larger blocks and the disadvantages for undo segments.
      • Use a DB_BLOCK_SIZE larger than 8192 for OLTP systems in which the data structures naturally drive the block size higher. If your most popular tables have row sizes greater than 8KB, consider a larger block size to decrease the cost of accessing a single row.
      • Use a DB_BLOCK_SIZE larger than 8192 for systems in which undo generation isn't a meaningful part of the workload (e.g., data warehouses and decision support systems). These systems read large amounts of data to give users the information they need to make business decisions, and storing more data in each block reduces I/O.
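
DB_BLOCK_SIZE is fixed when the database is created, so the first step is usually just confirming what you have. A minimal check against V$PARAMETER (parameter names are stored in lowercase):

SELECT name, value FROM v$parameter WHERE name = 'db_block_size';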

Changing DB_FILE_MULTIBLOCK_READ_COUNT

This is a neat trick I learned from one of Rich Niemiec's articles. Rich has written numerous articles and given dozens of presentations on Oracle. If you don't have his latest book on tuning, titled Oracle Performance Tuning Tips & Techniques, you need to buy it.

If you are thinking about changing the DB_BLOCK_SIZE parameter to increase database performance, consider increasing the DB_FILE_MULTIBLOCK_READ_COUNT parameter. More blocks will be read during sequential read operations. In some cases, it will provide benefits similar to a larger DB_BLOCK_SIZE parameter and you won't have to rebuild the entire database.

Increasing DB_FILE_MULTIBLOCK_READ_COUNT may also affect access path selection. Full table scans use multiblock reads, so the cost of a full table scan depends on the number of multiblock reads required to read the entire table, and that number in turn depends on how many blocks each multiblock read covers, which is what DB_FILE_MULTIBLOCK_READ_COUNT specifies.

For this reason, the optimizer may be more likely to choose a full table scan when the value of this parameter is high.
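
The parameter can be set in the init.ora or, for testing, at the session level. A minimal sketch (the value 16 is only an example; pick one appropriate for your I/O subsystem and block size):

ALTER SESSION SET db_file_multiblock_read_count = 16;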

When Is the SYSTEM Rollback Segment Used?

When you create a database, Oracle creates a rollback segment called SYSTEM in the system tablespace. The system rollback segment is only used for transactions that occur inside the system tablespace. The system rollback segment's main purpose is to handle the rollback generated for DDL statements. DDL statements cause updates to be applied to the data dictionary tables in the system tablespace.

It is possible for the system rollback segment to be used for non-data dictionary tables, but only if those tables are created inside the system tablespace. Application objects should not be stored in the system tablespace. Create separate tablespaces for application tables and indexes.
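
A quick way to confirm which rollback segments exist and where they live is to query DBA_ROLLBACK_SEGS; your application rollback segments should appear in a tablespace other than SYSTEM:

SELECT segment_name, tablespace_name, status
FROM dba_rollback_segs;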

Why Does Oracle Now Recommend That All Rollback Segments Have a Minimum of Twenty Extents?

Rollback segments are very dynamic in nature. They allocate space when it is required and deallocate it when it is no longer needed. The larger and fewer the extents, the less granular these space allocations and deallocations become.

A 200 megabyte rollback segment consisting of two 100 megabyte extents would allocate another 100 megabytes each time it requires additional space. A single extension increases the rollback segment size by 50% and may acquire much more space than is actually needed. If the rollback segment instead consisted of twenty 10 megabyte extents, additional space would be acquired in 10 megabyte chunks.

When a rollback segment consists of twenty or more extents, any single change in the number of extents moves the total size of the rollback segment by no more than 5%, resulting in a much smoother allocation and deallocation of space. Increasing the minimum number of extents beyond twenty has been found to yield diminishing returns.
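
As a sketch of the recommendation (the segment name, tablespace and storage values are examples only), a 200 megabyte rollback segment built from twenty 10 megabyte extents would look something like this:

CREATE ROLLBACK SEGMENT rbs01
TABLESPACE rbs
STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20 OPTIMAL 200M);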

Using V$WAITSTAT to Size Rollback Segments

The V$WAITSTAT performance table can be queried to prevent rollback segment contention between user processes. The following query will display the number of waits since startup:

SELECT class, count FROM v$waitstat WHERE class LIKE '%undo%';

Any non-zero value in the count column indicates rollback segment header contention.
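
To narrow the contention down to individual rollback segments, the waits-to-gets ratio in V$ROLLSTAT is worth a look; a commonly quoted rule of thumb is that anything much above 1% suggests adding rollback segments:

SELECT n.name, s.gets, s.waits,
       ROUND(s.waits / DECODE(s.gets, 0, 1, s.gets) * 100, 2) pct_waits
FROM v$rollstat s, v$rollname n
WHERE s.usn = n.usn;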

Finding Problem Queries

Execute the following query to identify the SQL responsible for the most disk reads:

SELECT disk_reads, executions, disk_reads/executions, sql_text FROM
v$sqlarea WHERE disk_reads > 5000 ORDER BY disk_reads;

Execute the following query to identify the SQL responsible for the most buffer gets (logical reads):

SELECT buffer_gets, executions, buffer_gets/executions, sql_text FROM
v$sqlarea WHERE buffer_gets > 100000 ORDER BY buffer_gets;

When I was first starting as a DBA, an Oracle consultant was kind enough to send me an entire directory's worth of tuning scripts. These two queries were contained in the directory along with a couple of dozen others that provided buffer hit ratios and other performance related statistical information.

I never knew that these two queries were the most important ones in the bunch until I attended an IOUG presentation given by Richard Niemiec from TUSC. Rich focused on the benefits these two queries provide; the light bulb finally came on, and I now use them religiously.

It's common knowledge that poorly performing SQL is responsible for the majority of database performance problems. The first query returns SQL statements responsible for generating more than 5,000 disk reads, while the second returns SQL statements responsible for generating more than 100,000 buffer gets.

These are good numbers to start with and you can adjust them according to the size of the system you are tuning. I modified the original queries to divide the number of disk and buffer reads by the number of statement executions. If a statement is generating 1,000,000 disk reads but is executed 500,000 times, it probably doesn't need tuning.

Heavy disk reads per statement execution usually mean a lack of proper indexing. Heavy buffer gets usually mean the exact opposite: indexes are being used when they shouldn't be.

Pinning Objects in the Shared Pool

Repeatedly reloading the same objects into the shared pool and shared pool fragmentation are common problems in many Oracle databases. The keys to tuning the shared pool are:

   1. Identifying the objects being reloaded into the shared pool time and time again:

      SELECT owner, name||' - '||type name, loads, sharable_mem
      FROM v$db_object_cache
      WHERE loads > 1
      AND type IN ('PACKAGE', 'PACKAGE BODY', 'FUNCTION', 'PROCEDURE')
      ORDER BY loads DESC;

   2. Identifying large objects that may not be able to be loaded into a fragmented shared pool:

      SELECT owner, name||' - '||type name, sharable_mem
      FROM v$db_object_cache
      WHERE sharable_mem > 10000
      AND type IN ('PACKAGE', 'PACKAGE BODY', 'FUNCTION', 'PROCEDURE')
      ORDER BY sharable_mem DESC;

The objects identified by the above queries can be pinned in the shared pool by executing the DBMS_SHARED_POOL.KEEP procedure. Several Oracle-supplied packages that should be pinned in the shared pool are STANDARD, DBMS_STANDARD and DBMS_UTILITY.
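
A minimal sketch of the pinning itself (DBMS_SHARED_POOL is created by the dbmspool.sql script shipped with the database; run it as SYS if the package isn't already installed):

EXECUTE dbms_shared_pool.keep('SYS.STANDARD');
EXECUTE dbms_shared_pool.keep('SYS.DBMS_STANDARD');

Pins are lost at instance shutdown, so these calls are usually placed in a database startup script.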

Function-Based Indexes

Generally, the fastest way to access Oracle data is with an index. Oracle's indexing schemes are designed to provide complementary performance functionality. While standard B-tree indexes are most effective for columns containing a high number of different values (good selectivity), bitmapped indexes are most appropriate for columns with a limited number (poor selectivity) of possible values.

Administrators supporting large data stores use partitioned indexes to decompose large index structures into smaller, more manageable pieces called index partitions. Oracle8i places index data in the separate index partitions based on the index's partitioning key. The partitioning key of an index must include one or more of the columns that define the index.

Oracle's latest release solves an indexing problem that has been affecting database performance for close to a decade. Before Oracle8i, any SQL statement that applied a function or expression to an indexed column in the WHERE clause could not use the index on that column. For example, the statement

SELECT emp_lname FROM employee WHERE salary + commission > 100000;

would not use an index; a full table scan would be required to retrieve the desired result set. Oracle8i changes this: we can now use B-tree and bitmap indexes to speed up such queries.

In Oracle8i, we are able to build both bitmap and B-tree indexes on columns containing the aforementioned functions or expressions. A function-based index precomputes the value of the function or expression and stores it in the index. The following index could be used to increase performance of the query:

CREATE INDEX sal_comm ON employee (salary + commission);
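
One caveat, hedged because the exact requirements vary by release: in Oracle8i the cost-based optimizer will only consider a function-based index when query rewrite is enabled (and the index creator needs the QUERY REWRITE privilege), along these lines:

ALTER SESSION SET query_rewrite_enabled = TRUE;
ALTER SESSION SET query_rewrite_integrity = TRUSTED;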

B-Tree Indexes

A traditional B-Tree (balanced tree) index stores the key values and pointers in an inverted tree structure. The key to good B-Tree index performance is to build the index on columns having a lot of different values; Oracle describes this as "good selectivity." Oracle is able to quickly bypass rows that do not meet the search criteria when searching through indexes built on columns having a high degree of selectivity.

Bitmap Indexes

Conversely, bitmapped indexes perform better when the selectivity of an index is poor. The fewer different values a bitmapped index contains, the better it will perform.

Bitmap indexes, in certain situations, can provide impressive performance benefits. Bitmapped indexes are most appropriate for complex and ad-hoc queries that contain lengthy WHERE clauses on columns that have a limited number of different values (poor selectivity).
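
Creating one looks just like creating a B-tree index with the BITMAP keyword added (the table and column names are examples):

CREATE BITMAP INDEX employee_gender_bix ON employee (gender);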

Optimizer and Bitmapped Indexes

The optimizer can be stubborn at times. It can be particularly stubborn when you want it to choose a bitmapped index for an access path. A single bitmap index may not be chosen at all. The optimizer will be more inclined to choose bitmapped indexes as an access path if it can use multiple bitmapped indexes simultaneously. If you want Oracle to choose bitmapped indexes as an access path, build a lot of them or use hints.
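
As a hedged illustration of the hint approach (the INDEX_COMBINE hint asks the optimizer to merge several bitmapped indexes; the table, alias and columns are made up):

SELECT /*+ INDEX_COMBINE(e) */ COUNT(*)
FROM employee e
WHERE gender = 'F'
AND hair_color = 'BROWN';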

Bitmapped Index Concurrency

Anyone accustomed to database programming understands the potential for concurrency problems. When one application program tries to update data that is in the process of being changed by another, the DBMS must forbid access until the modification is complete in order to ensure data integrity.

Each entry in a B-Tree index contains a single ROWID. When the index entry is locked during an update, a single row is affected. A bitmapped index entry can potentially cover a range of ROWIDs. When Oracle locks the bitmapped index entry, the entire range of ROWIDs is also locked. The number of ROWIDs contained in the range affects the level of concurrency: as the number of ROWIDs covered by a bitmap segment increases, the level of concurrency decreases.

When to Use Bitmapped Indexes

Locking issues affect data manipulation operations in Oracle. As a result, bitmapped indexes are not appropriate for OLTP applications that have a high level of concurrent insert, update and delete operations. Concurrency is usually not an issue in a data warehousing environment where the data is maintained by bulk loads, inserts and updates.

In addition, bitmapped index maintenance is deferred until the end of the bulk DML operation. If 100 rows are inserted into a table, the inserted rows are placed into a sort buffer and the updates of all 100 index entries are applied as a group. As a result, bitmapped indexes are appropriate for most decision support applications (even those that have bulk updates applied on a regular basis).

Mass updates, inserts and deletes will run faster if you drop the bitmapped indexes, execute the DML and re-create the bitmapped indexes when the DML completes. Run timings for the straight DML and compare them to the total time consumed by the drop-index/execute-DML/re-create-index process.

DBAs must also take column cardinality into consideration when determining the potential performance benefits of bitmapped indexes. The higher the index's cardinality, the larger the index becomes. If a table containing the gender (two different values) and hair_color (four different values) columns contains 1 million rows, the bitmapped index on gender would require 2 million entries and the bitmapped index on hair_color would require 4 million entries.

Bitmap Versus B-Tree

We just learned that standard B-tree indexes are most effective for columns containing a high number of different values (good selectivity) and bitmapped indexes are most appropriate for columns with a limited number (poor selectivity) of possible values.

Where is the selectivity cutoff point? What is that magic number of different values when a bitmapped or a B-Tree index is no longer considered to be a viable indexing option? There are many variables (number of rows, column cardinality, query composition, update frequency) that must be considered when determining the potential performance benefits of bitmapped and B-Tree indexes.

The only definitive method of determining the potential performance benefits of bitmapped and B-Tree indexes is to create them and monitor the application's performance. I've seen B-Tree indexes bring back 25% of the rows faster than table scans and have seen bitmapped indexes with a couple of dozen different values perform flawlessly.

Remember that "rules of thumb" are just general recommendations. The only definitive way to determine index performance is to create the index and test query performance.

Indexing

Some key points on indexing:

      • If you are using the cost based optimizer, run analyze on all tables in the query. Oracle doesn't like to make assumptions
      • Index on join columns that have the proper selectivity
      • Always index on foreign key columns. Make sure the columns in the index and the columns in the foreign key match exactly in column order and in the number of columns. If you don't, two things will happen when you execute DML against the parent table:
            • Oracle will have to scan the child table, which will impact performance
            • Oracle will hold a table-level lock on the child table, which will impact concurrency
      • The leading column of the index must be used in the WHERE clause for the index to be chosen (except when using fast full scan)
      • Index column order - put the columns most often used in queries in front of columns used less frequently
      • The inner table of a hash or sort merge join cannot be probed solely using an index on the join columns
      • The inner table of a nested loop can be probed by an index on the join column, but the outer table can't
      • Create indexes on columns used in local (non-join) predicates on the outer table of a nested loop join. Index local predicates to influence the outer and inner table selection in nested loop joins
      • Use histograms on skewed data
      • If the query being tuned doesn't select a lot of columns and you have all of the columns in the WHERE clause in the index, consider INDEX ONLY access. Add all of the columns in the SELECT clause to the end of the column list in the index to achieve lightning fast data access
      • Consider creating multiple, single column indexes on a table to achieve multiple index access
      • Primary and unique key indexes seem to have a greater chance of being used than other indexes. You may have index-only access with another index, but you will probably be forced to use a hint to get Oracle to use it
      • Indexes can't be used when Oracle is forced to perform implicit datatype conversion
      • Subselects and views may cause the optimizer to rewrite the SQL which could result in indexes not being used
      • Modifications to the column side of a predicate prevent an index from being used on the column being modified
      • Use the EXISTS function instead of the IN function when the tables in the subselect portion of the query are small (see the example after this list)
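
A sketch of the EXISTS rewrite from the last bullet (the tables and columns are made up for illustration):

-- IN form
SELECT e.emp_lname
FROM employee e
WHERE e.dept_id IN (SELECT d.dept_id FROM small_dept d);

-- EXISTS form
SELECT e.emp_lname
FROM employee e
WHERE EXISTS (SELECT 1 FROM small_dept d WHERE d.dept_id = e.dept_id);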

PCTFREE and PCTUSED

If you don't have enough information to make an educated guess on these parameters, don't make an uneducated one. Select the default values and monitor for chained rows by using the LIST CHAINED ROWS option in the ANALYZE statement.
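
A minimal sketch of the monitoring step (the CHAINED_ROWS table is created by the utlchain.sql script; the table being analyzed is an example):

ANALYZE TABLE employee LIST CHAINED ROWS INTO chained_rows;

SELECT count(*) FROM chained_rows WHERE table_name = 'EMPLOYEE';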

Caching Tables

Before we start our discussion on table caching, a brief description of Oracle's Least Recently Used chain (LRU chain) is in order. When server processes read data blocks from files and place them into Oracle's data buffer cache, the block addresses are recorded in the Most Recently Used (MRU) side of the LRU chain.

As new blocks are read into the data buffer cache, their addresses are also placed on the MRU side of the LRU chain. Oracle makes space available in the chain for the new addresses by moving the existing addresses towards the Least Recently Used (LRU) side of the chain. Oracle chooses buffers from the LRU side of the chain when selecting blocks to age out; modified buffers selected this way are placed on the dirty list and written to disk by the database writer.

If you have small lookup (some call them code or domain) tables that are being forced out of memory, you can favor keeping them in memory by using the CACHE option of the CREATE TABLE and ALTER TABLE statements. You can also use the CACHE hint on SQL statements to turn caching on and off.

A common misconception is that the CACHE option forces tables into memory and they are never aged out. When Oracle detects a full table scan, it moves the block addresses obtained by the full table scan to the LRU side of the LRU chain. Oracle wants blocks returned by a full table scan to be aged out more quickly than blocks returned by other access paths.

The problem is that smaller tables, like lookup tables, are most often scanned by a full table scan (and they should be). The CACHE parameter forces Oracle to place the results on the MRU side of the LRU chain. They will still age out, just not as quickly.
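
Both forms are simple (the table name is an example):

ALTER TABLE state_codes CACHE;

SELECT /*+ FULL(s) CACHE(s) */ * FROM state_codes s;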

Clustering Data in Tables

Other than clusters, Oracle doesn't have a mechanism to keep the data in a table sorted to match the order of a particular index. Clusters can be difficult to tune and administer and have limited applications. If you can't, or don't want to, use clusters, try this option.

If the application always accesses the table by the same column, or the table is always joined to other tables by the same column, consider sorting the data on that column before loading it. Static tables are the easiest to sort, but I have seen administrators of OLTP systems regularly pump data out of tables into flat files, sort the records in the proper order and reload them using the load utility.

Don't be misled by Oracle experts who say that data order doesn't improve data retrieval and join performance - it most definitely does.

Merging Tables to Increase Performance

If you constantly join two or more tables together in an application, consider merging them into one table. It's called "de-normalizing for performance" (your data administrator may call it, and you, other things). It's simple and it works.

Hints

Don't use hints unless absolutely necessary. You are altering a SQL statement to influence the optimizer, and that can backfire when the data, the application or the database release changes: performance may degrade and you won't know why.

In most cases, you probably aren't as smart as the optimizer. Let the optimizer make the choice unless you are absolutely certain the optimizer is choosing the incorrect access path. One exception to the rule is when you use Oracle's Parallel Query Option (see next recommendation).

Parallel Hints

Oracle allows you to set the degree of parallelism for queries at the database, table and statement levels. Statement hints allow you to fine tune individual statements without increasing or decreasing the degree of parallelism for other statements. Set the degree of parallelism at the database and table level carefully. You must balance the number of parallel processes being spawned with a finite amount of system resources.
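
A statement-level sketch (the table, alias and degree are examples):

SELECT /*+ FULL(s) PARALLEL(s, 4) */ COUNT(*)
FROM sales s;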

Performance Testing

Remember the buffer cache when benchmarking and testing SQL. Subsequent runs may be faster because the data has been cached from previous runs.

Parallel Query

Use profiles in a pre-Oracle8i parallel query environment to limit the number of simultaneous sessions to the database and thereby "throttle" the number of parallel query processes spawned. When using parallel query, users can spawn a surprising number of processes even with a fairly low degree of parallelism on the tables. A two-table join with both tables having DEGREE=4 would use 9 sessions (4 per table + 1 for SQL*Plus). This may be acceptable for a few users, but it may not be when you have dozens of users accessing the tables in question. If a user's profile limits the number of sessions allowed, Oracle will parallelize one table and single-thread through the other.
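
A hedged sketch of the throttle (the profile name, user name and limit are examples; profile limits are only enforced when RESOURCE_LIMIT is TRUE):

ALTER SYSTEM SET resource_limit = TRUE;   -- or set resource_limit = true in the init.ora

CREATE PROFILE pq_throttle LIMIT SESSIONS_PER_USER 5;

ALTER USER report_user PROFILE pq_throttle;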

Space Utilization for Parallel Table Creates

You must adjust the INITIAL extent parameter for a table when using the PARALLEL clause of a CREATE TABLE ... AS SELECT statement. Each parallel process will allocate an extent defined by the table's INITIAL extent parameter. Here's an example:

CREATE TABLE test2
STORAGE (INITIAL 2047M NEXT 500M ...)
PARALLEL 4
AS SELECT * FROM test1;

The above statement would create a table that has 4 extents with each extent being 2047M in size.

Buffer Performance is Critical to Good Oracle Performance

Make sure enough memory is allocated to the data buffer cache, the shared pool and the log buffer. Their order of importance and target hit ratios are listed below; queries for checking each one follow the list:

      • Shared pool (mainly because it contains the library cache) - the hit ratio should be 98% or higher
      • Data buffer cache - the hit ratio should be 95% or higher
      • Log buffer - keep redo log space requests to a minimum
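
The queries I use for these checks are sketched below (the view and statistic names are the standard V$ ones):

SELECT ROUND(SUM(pins - reloads) / SUM(pins) * 100, 2) library_cache_hit_pct
FROM v$librarycache;

SELECT ROUND((1 - phy.value / (db.value + con.value)) * 100, 2) buffer_cache_hit_pct
FROM v$sysstat phy, v$sysstat db, v$sysstat con
WHERE phy.name = 'physical reads'
AND db.name = 'db block gets'
AND con.name = 'consistent gets';

SELECT value redo_log_space_requests
FROM v$sysstat
WHERE name = 'redo log space requests';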

Tuning Oracle on NT

There are two quick changes you can make on NT that will improve performance:

      • Change the machine configuration from the default "Maximize throughput for file sharing" to "Maximize throughput for network applications" (Right click on Network Neighborhood/ Services/ Server/ Properties)
      • Make sure that all threads run with NORMAL priority (see ORACLE_PRIORITY registry value)

Terminating Oracle Processes (Threads) on NT

Oracle on NT provides the ORAKILL command to kill Oracle processes (called threads) on NT. You need to specify the thread ID in Hex. Try ORAKILL /? or ORAKILL /h for more details.
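
A hedged sketch of finding the thread to kill (on NT, V$PROCESS.SPID holds the thread ID; the user name, SID and thread number below are examples, and you may need to convert the thread ID to hex as noted above):

SELECT s.username, s.sid, s.serial#, p.spid thread_id
FROM v$session s, v$process p
WHERE s.paddr = p.addr
AND s.username = 'SCOTT';

-- then, from the command prompt:
-- orakill ORCL 2144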

Tuning Pack

In my very unscientific polling of numerous DBAs, the tuning pack was rated to be the most beneficial of the three packs available from Oracle. The expert advisor is pretty smart. If you don't have tuning expertise, use it until you get the experience you need.

Backup and Recovery

It's the little things that bite you. Most botched recoveries can be attributed to human error. Make sure all tapes have the proper retention periods, verify that all backups are executing correctly and run test recoveries on a regular basis. Don't let missing tapes or backups cause you to lose data. Prevent miscommunications from occurring by meeting with operations and systems personnel on a regular basis. You don't want to hear statements like "I didn't know you wanted that backed up too" or "I thought that the tape retention was 1 day, not 7 days" during a recovery.

RELAX

When you are notified of a database failure, take a deep breath and relax. Don't immediately begin to paste the database back together without a plan. Create a recovery plan, put it on paper, have others review it if you can, and then execute it. You shouldn't be trying to determine what the next step is in the middle of the recovery process. Plan your attack and then execute it.

Exports and Recovery

A common misconception is that a database can be recovered up to the current point in time by recreating the database, reloading the data with a full database import and using the archived logs to roll forward. This recovery methodology will not work. An export is a logical copy of the data; it does not preserve the physical data file contents and file headers that the roll-forward process depends on. The database will only be consistent to the point in time when the export file was created.

V$ Tables

Remember that many V$ tables are cumulative in nature. They track information since the database has been started.

V$RECOVER_FILE

Query the V$RECOVER_FILE view to determine how many files are missing or corrupted before you have them restored. Oracle will list the first file that it can't find in the error message it generates during startup. The problem is that it won't tell you how many other files are missing. The operators will like you much better and the recovery process will be a lot faster if you restore all of the files at once. Find all of the missing files by querying V$RECOVER_FILE, restore them to their original locations and recover them by executing the RECOVER DATABASE command.
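
The query itself is short; joining to V$DATAFILE gives you the file names:

SELECT r.file#, d.name, r.error
FROM v$recover_file r, v$datafile d
WHERE r.file# = d.file#;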

Watch NOLOGGING

The NOLOGGING option is great for performance, but it tends to complicate current point-in-time recoveries. If you load or insert data using NOLOGGING and you don't immediately take a backup, you're asking for trouble. If you have to recover the database to the current point in time, the database will be out of synch; the recovery process will not recover the data that was inserted with the NOLOGGING option.

Transactions replayed after the NOLOGGING statement or utility execution may be working against a data file that does not contain all of the data it should. Take a backup after any NOLOGGING statement or utility execution.
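
One hedged way to spot data files touched by unlogged operations is the unrecoverable columns in V$DATAFILE (available in Oracle8 and later):

SELECT file#, unrecoverable_change#, unrecoverable_time
FROM v$datafile
WHERE unrecoverable_change# > 0;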

Striped File Systems

If mirroring is not used, exercise caution when placing redo logs and/or control files on the same disk stripes as data files. If you lose any disk contained in the stripe, you also lose your ability to recover the database to the current point-in-time.

Data Files and Redo Logs

If mirroring is not used, don't place redo logs and data files on the same disk. Once again, you may lose your ability to recover the database to the current point-in-time. Separate your redo logs from your data files. If you lose either type of file, your database will still be recoverable up to the current point-in-time.

Redo Log and Control File Multiplexing

Make sure your redo log groups have two or more members in each group with each member being on a different disk drive. Multiple copies of the control file should also be maintained on different disks. There should be a very limited number of reasons why you have to recover a database due to the loss of all members of a redo log group or control file set. Oracle instructors hate teaching the loss of all members of a redo log group or all controlfiles because they are so easily protected and so complicated to recover from.
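
Adding a member to an existing group is a one-liner, and the control file copies are simply listed in the CONTROL_FILES initialization parameter (the paths below are examples):

ALTER DATABASE ADD LOGFILE MEMBER 'd:\oradata\orcl\redo01b.log' TO GROUP 1;

# init.ora
control_files = (c:\oradata\orcl\control01.ctl, d:\oradata\orcl\control02.ctl)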

OCOPY for Hot Backups on NT

Oracle on NT provides the OCOPY command to circumvent NT file locking problems. Use OCOPY instead of the COPY command when performing hot backups on NT. COPY commands will receive a file locking error if they attempt to copy data files while the database is running.

Automate Your Hot Backup Scripts

Don't hard code tablespaces and data files in your hot backup scripts. Every time you alter the structure of the database (add a data file, create a tablespace, drop a tablespace) you must change the backup scripts to reflect the changes. If you miss updating the backup scripts after a database structure change occurs, your backups become unusable. Write your scripts to access the database V$ tables to generate the backup statements.

The following scripts are examples of how to automate a complete hot database backup of an Oracle8 database on NT and on UNIX. The traditional caveats apply (use at your own risk). The scripts are easily changed to fit your specific environment. Both scripts back up the entire database, so they may not be the most appropriate scripts for large environments. Also, be careful about hard coding an account and password in a script, as these examples do.

NT Script

NTBACKUP.CMD sets the ORACLE_SID and calls GENERATE_BACK.SQL. GENERATE_BACK.SQL reads the data dictionary and writes the backup commands to RUNBACK.SQL. NTBACKUP.CMD then performs the actual backup by running RUNBACK.SQL.

NTBACKUP.CMD
set ORACLE_SID=orcl
plus80 system/manager @d:\chris\sql\generate_back.sql
plus80 system/manager @d:\chris\sql\runback.sql

GENERATE_BACK.SQL
set heading off
set feedback off
set termout off
set echo off
spool d:\chris\sql\runback.sql
select 'set heading off' from dual;
select 'set echo on' from dual;
select 'spool c:\backup\runback.out' from dual;
select 'select ''backup started on'',TO_CHAR(SYSDATE, ''fmMonth DD, YYYY:HH:MI:SS'') from dual;' from dual;
select 'alter tablespace '||tablespace_name||' begin backup;' from sys.dba_tablespaces;
select ' host ocopy80 '||file_name||' c:\backup' from sys.dba_data_files;
select 'alter tablespace '||tablespace_name||' end backup;' from sys.dba_tablespaces;
select 'alter system switch logfile;' from dual;
select 'alter database backup controlfile to ''c:\backup\cntrlfile.bck'' reuse;' from dual;
select 'alter database backup controlfile to trace;' from dual;
select 'select ''backup ended on'',TO_CHAR(SYSDATE, ''fmMonth DD, YYYY:HH:MI:SS'') from dual;' from dual;
select 'spool off' from dual;
select 'exit' from dual;
spool off
exit

UNIX Script

This script is a little different. It runs on UNIX, does some simple error checking and backs up the tablespace data files one at a time so it won't dominate the I/O channels. It is another quick example that shows how to read the data dictionary to generate backup commands.

UNIXBACKUP.KSH sets several variables that will be used throughout the two scripts.

Runtime messages are written to HOTBACKUP.LOG. UNIXBACKUP.KSH calls GENERATE_HOTBACKUP.KSH which reads the data dictionary and writes the backup commands to RUNHOTBACK.SQL. UNIXBACKUP.KSH then performs the actual backup by running RUNHOTBACK.SQL.

UNIXBACKUP.KSH
#!/bin/ksh
ORACLE_SID=cif
export ORACLE_SID
GEN_BACKUP=/home/oracle/cif/scripts/generate_hotbackup.ksh
export GEN_BACKUP
RUN_BACKUP=/home/oracle/cif/tmp/runhotback.sql
export RUN_BACKUP
BACKUP_LOG=/home/oracle/cif/logs/hotbackup.log
export BACKUP_LOG
ARCH_DEST=/orcarc/cif
export ARCH_DEST
BACKUP_ARCH=
export BACKUP_ARCH
HOTBACKUP=/orc10/cif/backup
export HOTBACKUP
PROG=`basename $0`;export PROG
echo "Starting the Database Hot Backup" > $BACKUP_LOG

if [ ! -f $GEN_BACKUP ]
then
    echo "Shell Script $GEN_BACKUP not found!" >> $BACKUP_LOG
    exit 1
fi

$GEN_BACKUP $HOTBACKUP $RUN_BACKUP
if [ $? -ne 0 ]
then
    echo "execution of $GEN_BACKUP failed." >> $BACKUP_LOG
    exit 1
fi

echo "
@$RUN_BACKUP
" | sqlplus / >> $BACKUP_LOG
if [ $? -ne 0 ]
then
    echo "execution of $RUN_BACKUP failed." >> $BACKUP_LOG
    exit 1
fi

egrep "DBA-|ORA-" $BACKUP_LOG > /dev/null
if [ $? -eq 0 ]
then
    echo "errors were detected while performing a hot backup." >> $BACKUP_LOG
    echo "hot backup failed." >> $BACKUP_LOG
else
    echo "hot backup completed successfully." >> $BACKUP_LOG
fi
exit 0

GENERATE_HOTBACKUP.KSH
#!/bin/ksh
HOT_BACKUP=$1
RUN_BACKUP=$2
OUT_BACKUP=${RUN_BACKUP%.*}.out
sqlplus -s / << eof
set heading off
set feedback off
set termout off
set echo off
spool $RUN_BACKUP
select 'set heading off' from dual;
select 'set echo on' from dual;
select 'alter session set NLS_DATE_FORMAT = ''DD-MON-YY, HH24:MI:SS'';'
from dual;
select 'spool $OUT_BACKUP;' from dual;
select 'select ''backup started on:'',SYSDATE from dual;' from dual;
select 'alter system switch logfile;' from dual;
select 'alter tablespace '||tablespace_name||' begin backup;' from
sys.dba_tablespaces;
select ''||tablespace_name||' ! cp '||file_name||' $HOT_BACKUP' from
sys.dba_data_files;
select 'alter tablespace '||tablespace_name||' end backup;' from
sys.dba_tablespaces;
select 'alter system switch logfile;' from dual;
select 'alter database backup controlfile to ''$HOT_BACKUP/cntrlfile.bck''
reuse;' from dual;
select 'alter database backup controlfile to trace;' from dual;
select 'select ''backup ended on:'',SYSDATE from dual;' from dual;
select 'spool off' from dual;
select 'exit' from dual;
spool off
exit
eof
nawk '{if (($3 == "cp") && ($5 == backdest))
{
split($4, fname, "/")
printf("%s%s%s\n %s%s %s %s\n %s%s/%s\n%s%s%s\n", "alter tablespace ", $1, " begin backup;", $2, $3, $4, $5, "!compress ", backdest, fname[5], "alter tablespace ", $1, " end backup;")
}
else {print $0}}' < $RUN_BACKUP backdest=$HOT_BACKUP > tmpfile

/usr/bin/mv tmpfile $RUN_BACKUP

--

Christopher Foot has been involved in database management for over 18 years, serving as a database administrator, database architect, trainer, speaker, and writer. Currently, Chris is employed as a Senior Database Architect at RemoteDBA Experts, a remote database services provider. Chris is the author of over forty articles for a variety of magazines and is a frequent lecturer on the database circuit having given over a dozen speeches to local, national and international Oracle User Groups. His book titled OCP Instructors Guide for DBA Certification, can be found at http://www.dba-oracle.com/bp/bp_book14_OCP.htm.

