DBAzine.com » Chris Foot's Oracle10g Blog
10 Things I Like About 10G - Part 3

This series covers a few of the features that you may often overlook when using 10G. There are so many features in each new release of Oracle's flagship database that the challenge is to leverage as many features as possible to make your day-to-day life easier.

The key to being a successful Oracle DBA is that you must be willing to accept the fact that with each new database release, you will have to adjust the way you administer your environments. In my profession, I work with many DBAs from companies across the globe. The most successful administrators I have met are the ones that intelligently leverage the new features that are available in the releases of the database they are administering.

Running Commands and Scripts Against Multiple Targets
Being involved in an IT audit, like death and taxes, is inevitable. If you work as a technician for any length of time, you will be working with an auditor at some point. Since we provide remote database services, our organization is audited numerous times per year. SOX audits, security audits, HIPAA audits - you name the audit and we have been involved in it. We have an office that is left open full-time for onsite auditors.

Auditors often ask you to run scripts on the operating systems and databases your unit is responsible for supporting. In the past, you would review the scripts and then distribute them to team members to run on the environments they were responsible for supporting. 10G Grid Control makes this task a whole lot simpler. Grid provides you with the capability of running host commands and scripts against multiple host targets at the same time. Virtually any command or script that can be executed on a single host can also be executed on multiple hosts using 10G Grid Control R2. Here's a blog that shows you how to use 10G Grid Control R2 to run operating system scripts against multiple targets.

10G Grid Control R2 also provides an easy-to-use panel that allows administrators to run SQL scripts against multiple database targets simultaneously. Just like its host command counterpart, all you need to do is code up a SQL statement, choose a set of database targets, run the SQL against the multiple targets and review the output. The benefits this feature provides are constrained only by the DBA's imagination. Here's a link to a blog that will show you how to run scripts against multiple database targets.
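As a sketch of the kind of script an auditor might hand you, here are two illustrative data dictionary queries (the specific checks are my own examples, not ones from any particular audit) that could be run against every database target at once:

```sql
-- Illustrative audit checks: accounts granted the DBA role ...
SELECT grantee
FROM   dba_role_privs
WHERE  granted_role = 'DBA';

-- ... and open accounts plus the profile that governs them.
SELECT username, account_status, profile
FROM   dba_users
WHERE  account_status = 'OPEN';
```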

Restore Points
How many times have database recoveries been performed because of incorrect changes made to database data? A user deletes or updates "one too many rows" by mistake and the result is a time-consuming and error-prone process to restore and recover the database to a point-in-time before the error occurred.

Let's go back to my days as an Oracle instructor…. A common question in the Oracle DBA backup and recovery class was "why can't I just roll the database back to remove unwanted changes instead of restoring the database from a backup and applying the redo logs to roll forward to a point-in-time before the error occurred?" The question was invariably followed by "don't you think that would be easier?" The answer was always the same "yes, it would be easier, but the Oracle database doesn't have that capability."

That was before Flashback Database came along. Oracle's Flashback features provided new tools in the DBA's recovery toolbox. Flashback Table allows the DBA to "roll back" a table or set of tables, and Flashback Database rolls back the entire database to a previous point-in-time. Administrators often used an SCN or a timestamp during the execution of flashback operations.

10G R2 enhances Flashback Database by allowing administrators to associate a user-defined name with a point-in-time. The user-defined name, called a restore point, can be used in place of an SCN or timestamp when executing a FLASHBACK TABLE or FLASHBACK DATABASE statement.

A SQL statement is used to create the restore point, which means it can be embedded in application programs as well as executed on an as-needed basis in SQL*Plus. The example below shows the creation of a restore point:

CREATE RESTORE POINT batch_daily_cycle_complete;

Now if I execute the following statement:

UPDATE cti.employee_salary_table SET
salary = 200000 WHERE empl_lname = 'FOOT';

My boss can execute this statement to correct the 'mistake' (although I prefer to call it a fully justifiable pay adjustment):

FLASHBACK TABLE cti.employee_salary_table TO RESTORE POINT batch_daily_cycle_complete;

10G R2 also provides guaranteed restore points, which ensure that the database can be flashed back to the point-in-time at which they were created. Guaranteed restore points save disk space because only the flashback logs required to meet the guaranteed restore point need to be retained by the database. The statement below creates a guaranteed restore point:

CREATE RESTORE POINT batch_daily_cycle_complete GUARANTEE FLASHBACK DATABASE;
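To see which restore points exist, and which of them are guaranteed, you can query the V$RESTORE_POINT view; a minimal sketch:

```sql
-- List existing restore points; GUARANTEE_FLASHBACK_DATABASE
-- shows 'YES' for guaranteed restore points.
SELECT name, scn, time, guarantee_flashback_database
FROM   v$restore_point
ORDER  BY time;

-- Guaranteed restore points must be dropped explicitly when no
-- longer needed, or their flashback logs continue to accumulate:
DROP RESTORE POINT batch_daily_cycle_complete;
```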

Automatic Undo Retention
In Oracle9i, administrators had their choice of continuing to manage rollback segments on their own (manual undo management) or configuring the database to manage its own before image data (automatic undo management). Oracle refers to system managed before image segments as undo segments.

You didn't have to be an Oracle expert to know that manual rollback segments were "somewhat troublesome." Out-of-space conditions, contention, poor performance and the perennial favorite "snapshot too old" errors had been plaguing Oracle database administrators for over a decade. Oracle finally decided that the database could probably do a better job of managing before images of data than we could.

But implementing automatic undo management didn't necessarily guarantee users a trouble-free undo environment. There really is only one parameter that administrators can tweak in a system that uses automatic undo. The UNDO_RETENTION parameter specifies the amount of time, in seconds, that Oracle attempts to keep undo data available. Setting this parameter to the appropriate value could be described as more of an art than a science.

Set it too low and you waste the disk space allocated to the undo tablespace; in addition, you aren't taking advantage of being able to flash back your data as far back as that disk space allows. Set it too high and you are in danger of running out of free space in the undo tablespace.

10G R2 comes to the rescue! The database now collects undo usage statistics, identifies the amount of disk space allocated to the undo tablespace and uses that information to tune the undo retention period to provide maximum undo data retention. Administrators can determine the current retention time period by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT view.
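A quick sketch of checking the automatically tuned value (V$UNDOSTAT keeps one row per ten-minute statistics interval):

```sql
-- Show the retention period (in seconds) that Oracle has
-- automatically tuned, along with recent undo block activity.
SELECT begin_time, end_time, tuned_undoretention, undoblks
FROM   v$undostat
ORDER  BY begin_time DESC;
```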

RMAN Automatically Creates Temporary Datafiles During Restore
Ever perform a restore and forget to recreate the temporary tablespace's datafiles? I have. The database comes online, but the first time a user's sort overflows from memory to disk, an error code is returned. Recreating a database's temporary files is one of those activities that is often overlooked. 10G R2 RMAN will automatically recreate temporary datafiles belonging to locally managed temporary tablespaces during the recovery operation.
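Before this enhancement, the fix after a restore was a manual step along these lines (the file name and size here are illustrative):

```sql
-- Manually re-add a tempfile to a locally managed temporary
-- tablespace after a restore (the pre-10G R2 routine):
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/oradata/orcl/temp01.dbf'
  SIZE 500M AUTOEXTEND ON;
```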

Enterprise Manager Statistics Management Support
A new link titled 'Manage Optimizer Statistics' has been added to the database administration home page in Enterprise Manager. Clicking on the link displays the new Manage Optimizer Statistics home page. The new home page provides wizards and templates that facilitate and automate optimizer statistics management. In 10G R2, administrators are able to use Enterprise Manager to gather, restore, delete, lock and unlock statistics.

Why would you want to lock and unlock statistics? Let me describe one real-life example from a recent project. We have a warehouse system that contains an extremely volatile table. Describing it as volatile is like saying the Titanic sprang a small leak. Hundreds of thousands of rows are added and removed around the clock.

The level of activity is based on the particular business process taking place. At one time the table can contain hundreds of thousands of rows; at other times it can contain dozens. OK, so when do you generate statistics? If you run DBMS_STATS at the same time each day, you would think you would get a predictable set of statistics. Not so in this table's case: sometimes you catch hundreds of rows, and sometimes hundreds of thousands.

If you are unlucky and generate statistics on the table when it has hundreds of rows, access paths suffer when the table grows to hundreds of thousands. So, we ran DBMS_STATS when we knew the table contained numerous rows and never generated statistics on it again. The problem is that we can no longer easily use the SCHEMA option for DBMS_STATS. Freezing statistics on this table will allow us to use the SCHEMA option without regenerating statistics for selected tables.
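With 10G R2 this approach becomes directly supported via statistics locking. A sketch of the idea, with a hypothetical schema and table name standing in for our warehouse table:

```sql
-- Gather statistics once, while the table is representative:
EXEC DBMS_STATS.GATHER_TABLE_STATS('WHSE', 'VOLATILE_ORDERS');

-- Lock them so later gathers leave this table alone:
EXEC DBMS_STATS.LOCK_TABLE_STATS('WHSE', 'VOLATILE_ORDERS');

-- A schema-level gather now skips the locked table:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('WHSE');
```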

DML Error Logging
What I like about SQL*Loader is its ability to continue processing through load errors. If the record being loaded is rejected because of a unique constraint violation or discarded because it does not meet some user-defined criteria, SQL*Loader places the record into a discard or reject file and keeps on running until it reaches a user-specified maximum number of rejects or discards.

The loader's log file will show how many records were loaded, rejected or discarded. I can look at the messages, review the discard or reject files, fix the problem and attempt to reload the records.

In 10G R2, this same type of processing has been applied to bulk DML operations. Users are able to specify whether they want to log errors or abort the entire DML statement, set the amount of detail information logged and the maximum error threshold. So instead of the entire DML statement blowing up and rolling back, Oracle will log the errors to the error table and continue processing until it exceeds the maximum number of errors. Just like SQL*Loader!

Oracle 10G R2's DBMS_ERRLOG package is used to create the error log output table and link it to the table being updated. The package's specification is provided below:

DBMS_ERRLOG.CREATE_ERROR_LOG (
dml_table_name IN VARCHAR2,
err_log_table_name IN VARCHAR2 := NULL,
err_log_table_owner IN VARCHAR2 := NULL,
err_log_table_space IN VARCHAR2 := NULL,
skip_unsupported IN BOOLEAN := FALSE);

Most of the parameters are pretty self-explanatory: the name of the table being updated, the error log table name, the owner of the error log table and the error log table's tablespace. If SKIP_UNSUPPORTED is set to TRUE, column types that are not supported by error logging will be skipped over and not added to the error logging table. If it is set to FALSE, an unsupported column type will cause the procedure to fail.

Here's a quick example:

Let's create our table that will be updated:
SQL> r
1 CREATE TABLE foot.emp_table
2 (empno number(4), ename varchar2(10), job varchar2(8))
3* TABLESPACE users;

Table created.

Add a primary key:
SQL> ALTER TABLE foot.emp_table ADD PRIMARY KEY(empno)
2 USING INDEX TABLESPACE users;

Table altered.

Load some rows:

SQL> INSERT INTO foot.emp_table VALUES
2 (7499, 'ALLEN', 'SALESMAN');

1 row created.

SQL> INSERT INTO foot.emp_table VALUES
2 (7521, 'WARD', 'SALESMAN');

1 row created.

SQL> INSERT INTO foot.emp_table VALUES
2 (7566, 'JONES', 'MANAGER');

1 row created.
Let's cause a unique constraint violation:

SQL> insert into foot.emp_table select * from foot.emp_table;
insert into foot.emp_table select * from foot.emp_table
*
ERROR at line 1:
ORA-00001: unique constraint (FOOT.SYS_C009069) violated

Create the 10G R2 error logging table:

exec dbms_errlog.CREATE_ERROR_LOG ('emp_table','dml_errors_emp_table')

Let's create a load input table and change one row in it, so that one row will load successfully and two will fail and be placed into the DML_ERRORS_EMP_TABLE:

SQL> create table foot.emp_load_table tablespace users as select * from foot.emp_table;

Table created.

Update one row to change the EMPNO column's value to avoid the primary key violation:

SQL> update foot.emp_load_table set empno=123 where empno=7499;
1 row updated.

Rerun the statement specifying the new 10G R2 error logging syntax. Use our new load input table so that our one changed row will be loaded and two will be rejected and placed into the DML_ERRORS_EMP_TABLE:

SQL> insert into foot.emp_table select * from foot.emp_load_table
LOG ERRORS INTO foot.dml_errors_emp_table('test_load_20050718')
REJECT LIMIT UNLIMITED;

1 row created.

OK, we loaded one. What happened to our other two? Let's see what our DML_ERRORS_EMP_TABLE contains.

First, let's describe the DML_ERRORS_EMP_TABLE:

SQL> DESC foot.dml_errors_emp_table

ORA_ERR_NUMBER$ NUMBER
ORA_ERR_MESG$ VARCHAR2(2000)
ORA_ERR_ROWID$ ROWID
ORA_ERR_OPTYP$ VARCHAR2(2)
ORA_ERR_TAG$ VARCHAR2(2000)
EMPNO VARCHAR2(4000)
ENAME VARCHAR2(4000)
JOB VARCHAR2(4000)

Our error logging table contains the Oracle error number (ORA_ERR_NUMBER$), the error message, the ROWID, the operation type, the tag (which contains our user-specified name from above - 'test_load_20050718') and the three columns of our table.

Let's select from the table. I have truncated the ORA_ERR_MESG$ column values for readability's sake:

SQL> SELECT ora_err_number$, ora_err_mesg$, empno FROM foot.dml_errors_emp_table;

ORA_ERR_NUMBER$ ORA_ERR_MESG$                                      EMPNO
--------------- -------------------------------------------------- ------
              1 ORA-00001: unique constraint (FOOT.SYS_C009069)...  7521
              1 ORA-00001: unique constraint (FOOT.SYS_C009069)...  7566

We loaded one row and rejected two rows due to primary key constraint violations.
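Because each run can carry its own tag, a quick sketch of summarizing rejects per load run (the tag value assumed here is the one from the example above):

```sql
-- Count rejected rows per load run using the tag column:
SELECT ora_err_tag$, COUNT(*) AS rejected_rows
FROM   foot.dml_errors_emp_table
GROUP  BY ora_err_tag$;
```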


Conclusion
I hope you enjoyed this series on 10G features.


Monday, April 30, 2007