Oracle 10G R2 New Features Part 3

We have reached the final blog on 10G R2 New Features. I think that after reading the last blog of this series, you'll agree that 10G R2 (like all new Oracle releases) contains a LOT of interesting and beneficial enhancements.

I'll get back on track in the next blog by discussing how to use 10G Enterprise Manager SQL Tuning Sets.
DML Error Logging
What I like about SQL*Loader is its ability to continue processing through load errors. If a record being loaded is rejected by the database (e.g., a unique constraint violation) or discarded because it does not meet some user-defined criteria, SQL*Loader places the record into a reject or discard file. SQL*Loader keeps on running until it completes or reaches a user-specified maximum number of rejects or discards.

The loader's log file shows how many records were loaded, rejected or discarded. I can review the messages and the discard or reject files, fix the problem and attempt the load again.

In 10G R2, this same type of processing has been applied to bulk DML operations. Users are able to specify whether they want to log errors or abort the entire DML statement, set the amount of detail information logged and the maximum error threshold. So instead of the entire DML statement blowing up and rolling back, Oracle will log the errors to the error table and continue processing until it exceeds the maximum number of errors. Just like SQL*Loader!

Oracle 10G R2's DBMS_ERRLOG package is used to create the error log output table and link it to the table being updated. The package's specification is provided below:

DBMS_ERRLOG.CREATE_ERROR_LOG (
   dml_table_name      IN VARCHAR2,
   err_log_table_name  IN VARCHAR2 := NULL,
   err_log_table_owner IN VARCHAR2 := NULL,
   err_log_table_space IN VARCHAR2 := NULL,
   skip_unsupported    IN BOOLEAN  := FALSE);

Most of the parameters are pretty self-explanatory: the name of the table being updated, the error log table name, the owner of the error log table and the error log table's tablespace. If SKIP_UNSUPPORTED is set to TRUE, column types that are not supported by error logging will be skipped over and not added to the error logging table. If it is set to FALSE, an unsupported column type will cause the procedure to fail.

Here's a quick example:

Let's create the table we are going to update:
SQL> r
1 CREATE TABLE foot.emp_table
2 (empno number(4), ename varchar2(10), job varchar2(8))
3* TABLESPACE users;

Table created.

Add a primary key:

SQL> ALTER TABLE foot.emp_table ADD PRIMARY KEY(empno)
2 USING INDEX TABLESPACE users;

Table altered.

Load some rows:

SQL> INSERT INTO foot.emp_table VALUES (7499, 'ALLEN', 'SALESMAN');

1 row created.

SQL> INSERT INTO foot.emp_table VALUES (7521, 'WARD', 'SALESMAN');

1 row created.

SQL> INSERT INTO foot.emp_table VALUES (7566, 'JONES', 'MANAGER');

1 row created.

Let's cause a unique constraint violation:

SQL> insert into foot.emp_table select * from foot.emp_table;
insert into foot.emp_table select * from foot.emp_table
*
ERROR at line 1:
ORA-00001: unique constraint (FOOT.SYS_C009069) violated

Create the 10G R2 error logging table:

exec dbms_errlog.CREATE_ERROR_LOG ('emp_table','dml_errors_emp_table');

Let's create a new table and load it with all of the rows from our original table:

SQL> create table foot.emp_load_table tablespace users as select * from foot.emp_table;

Table created.

Update one row to change the EMPNO column's value to avoid the primary key violation:

SQL> update foot.emp_load_table set empno=123 where empno=7499;
1 row updated.

Rerun the statement specifying the new 10G R2 error logging syntax. We'll use our new load input table so that our one changed row will be loaded and two will be rejected and placed into the DML_ERRORS_EMP_TABLE:

SQL> insert into foot.emp_table select * from foot.emp_load_table
LOG ERRORS INTO dml_errors_emp_table('test_load_20050718')
REJECT LIMIT UNLIMITED;

1 row created.

OK, we loaded one row. What happened to the other two rows? Let's see what our DML_ERRORS_EMP_TABLE contains.

First, let's describe the DML_ERRORS_EMP_TABLE:

SQL> DESC foot.dml_errors_emp_table

ORA_ERR_NUMBER$ NUMBER
ORA_ERR_MESG$ VARCHAR2(2000)
ORA_ERR_ROWID$ ROWID
ORA_ERR_OPTYP$ VARCHAR2(2)
ORA_ERR_TAG$ VARCHAR2(2000)
EMPNO VARCHAR2(4000)
ENAME VARCHAR2(4000)
JOB VARCHAR2(4000)

Our error logging table contains an incrementing error counter, the error message, ROWID, error type, tag (contains our user specified name from above -'test_load_20050718') and the three columns of our table.


Let's select from the table. I have truncated the ORA_ERR_MESG$ column output for readability's sake:

SQL> SELECT ora_err_number$, ora_err_mesg$, empno FROM foot.dml_errors_emp_table;

ORA_ERR_NUMBER$ ORA_ERR_MESG$                                            EMPNO
--------------- -------------------------------------------------------- -----
              1 ORA-00001: unique constraint (FOOT.SYS_C009069) violated  7521
              2 ORA-00001: unique constraint (FOOT.SYS_C009069) violated  7566


We loaded one row and rejected two rows due to primary key constraint violations.


Quick ASM Overview
Oracle 10G R1 introduced Automatic Storage Management (ASM), the database's own disk storage management system. Database administrators are no longer required to use hardware vendor or third-party (Veritas, EMC, etc.) disk volume managers to provide striping and mirroring functionality. ASM manages the raw disks within the Oracle database architecture. Administrators assign disks to disk groups, which can then be striped and/or mirrored to provide high performance and high availability. During tablespace creation, the administrator assigns the tablespace's datafile to a disk group. This differs from previous Oracle releases, which required that datafiles be assigned to the individual disks themselves.
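To make that concrete, here's a minimal sketch of tablespace creation on ASM storage. The disk group name DATA and the tablespace name are assumptions for illustration:

CREATE TABLESPACE app_data
   DATAFILE '+DATA' SIZE 100M
   AUTOEXTEND ON;

ASM names and places the datafile within the disk group itself, so no operating system path is specified.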

ASM Command-Line Utility (ASMCMD)
Administrators are able to use 10G R2's new ASM command-line utility to administer ASM environments. The command line utility provides all of the features and functionality of its Enterprise Manager SGT (Sissy GUI Tool) counterpart.
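As a hedged sketch, a typical ASMCMD session might look like the following (the disk group and directory names are hypothetical):

$ asmcmd
ASMCMD> lsdg
ASMCMD> cd +DATA/ORCL/DATAFILE
ASMCMD> ls -l
ASMCMD> du

The lsdg command lists disk groups along with their redundancy settings and free space, while ls and du navigate the ASM file hierarchy much like their UNIX namesakes.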

10G R2 ASM 3-Way Mirrors
ASM provides templates to facilitate the creation of database data files. ASM's file templates now include a redundancy attribute. If the attribute is set to HIGH, ASM will provide 3-way mirroring.
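Here's a minimal sketch of a disk group that can support 3-way mirroring (the disk paths and failure group names are assumptions):

CREATE DISKGROUP data_hi HIGH REDUNDANCY
   FAILGROUP fg1 DISK '/dev/raw/raw1'
   FAILGROUP fg2 DISK '/dev/raw/raw2'
   FAILGROUP fg3 DISK '/dev/raw/raw3';

HIGH redundancy requires at least three failure groups, since each extent is written to three different disks.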

RMAN Backup Encryption
If you remember anything from this upcoming discussion on Oracle security, make it the next sentence: experienced hackers understand that they don't have to gain access to the production database to get your data. The following information is so important that I think a quick deviation from our discussion on 10G R2 new features is justified.

Many hackers have a strong understanding of the various support activities required to administer an Oracle database. As a result, they often look for data in places that are traditionally left unsecured. The listing below provides some of the areas that hackers look for your data before they attempt to access your production database:

  • Output from hot and cold database backups. We (and the hackers) know that database backups are exact copies of your production database. It is a relatively easy process to find the scripts that create the backups once the hacker is in your system.

  • Oracle Export utility output files. Oracle Export files can be easily transferred and loaded into any Oracle database. The hacker can transfer the Export file to their machine and have unlimited access to your data without having a connection to your database. In addition, full database Exports contain all of the accounts and passwords contained in the database. The hacker will be able to use that data to identify the account/password combinations, which will provide them with easy access to your production environment.

  • QA, test, development, reporting and disaster recovery databases. How many times have you been asked to refresh these databases with production data? Once the data is refreshed, they must then be treated as production and secured accordingly.

  • Using LOGMINER to scan Oracle online and archived redo logs. Now that Oracle has provided us with a quick and easy way to access data changes stored in the redo logs, these files and the LOGMINER utility also need to be secured.

  • The UTL_FILE_DIR directories. UTL_FILE_DIR is specified in the parameter file (which is often left unprotected) and is used as the output directory for flat files created from PL/SQL stored programs. Hackers are able to easily find the output directory and gain access to the output files.

10G R2 provides several RMAN backup encryption features that will make it a lot harder for hackers to use backups to gain access to sensitive data. There are three modes of backup encryption now available:

  • Password Encryption - The user is required to specify a password when executing RMAN backups and restores. Oracle provides the syntax below to activate password-protected backups:

    SET ENCRYPTION ON IDENTIFIED BY password ONLY

  • Transparent Encryption - To implement transparent encryption, users must first configure the Oracle Encryption Wallet. Once the Oracle Encryption Wallet is configured, no further DBA intervention is required to encrypt RMAN backups. Transparent encryption is discussed later in this blog.
  • Dual-Mode Encryption - Backups can be restored using either the password or the Encryption Wallet. This is handy for backups that are normally restored on-site (using the wallet) but may occasionally need to be restored elsewhere, where only the password is available.
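A hedged sketch of a password-encrypted backup and restore session (the password is, of course, a placeholder):

RMAN> SET ENCRYPTION ON IDENTIFIED BY MyBackupPwd ONLY;
RMAN> BACKUP DATABASE;

RMAN> SET DECRYPTION IDENTIFIED BY MyBackupPwd;
RMAN> RESTORE DATABASE;

The ONLY keyword tells RMAN to use password encryption exclusively, even if an Encryption Wallet is configured; SET DECRYPTION supplies the password at restore time.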

Transparent Data Encryption
Transparent is a good name for Oracle's new data encryption mechanism because it allows users to encrypt data in columns without being forced to explicitly manage encryption keys. Key management is handled automatically by the database, so data can be encrypted and decrypted without developers being required to change application code; encryption and decryption occur automatically for users authorized to access the table.

To protect the data, Transparent Data Encryption encrypts data at the column level using the AES and 3DES standard encryption algorithms. The encryption keys are stored in an external security module. Each table has its own encryption key, which is encrypted with the database master key. Both the database master key and the table key must be used to decrypt the table's columns.
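A minimal sketch, assuming the Encryption Wallet has already been configured in sqlnet.ora (the table and column names are hypothetical):

ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";

CREATE TABLE foot.customer_payment
   (cust_id     NUMBER,
    card_number VARCHAR2(19) ENCRYPT USING 'AES192');

The ALTER SYSTEM statement is a one-time operation that creates the database master key; the ENCRYPT clause then transparently encrypts the card_number column for authorized users.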

Reduced Privileges for Connect Role
I wonder how many times grizzled DBA veterans have said "Don't grant the Connect role to users, it gives them too many privileges. Grant them Create Session." to junior-level administrators.

Oracle must have also been listening, because Oracle 10G R2 reduces the Connect role's privileges to one: CREATE SESSION. Users granted the Connect role will no longer have the CREATE TABLE, CREATE VIEW, CREATE SEQUENCE, CREATE CLUSTER, CREATE SYNONYM, CREATE DATABASE LINK and ALTER SESSION privileges granted to them.
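You can verify this on a 10G R2 database with a quick data dictionary query:

SELECT privilege
  FROM role_sys_privs
 WHERE role = 'CONNECT';

On 10G R2 this should return only CREATE SESSION; on earlier releases it returns the longer list above.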

RMAN Automatically Creates Temporary Datafiles During Restore
Ever perform a restore and forget to recreate the temporary tablespace's data files? I have. The database comes on-line but the first time a user's sort overflows from memory to disk, an error code is returned. Recreating a database's temporary files is one of those activities that are often overlooked. 10G R2 RMAN will automatically recreate temporary datafiles belonging to locally managed temporary tablespaces during the recovery operation.

Restore Points
How many times have database recoveries been performed because of incorrect changes made to database data? A user deletes or updates "one too many rows" by mistake and the result is a time-consuming and error prone process to restore and recover the database to a point-in-time before the error occurred.

Let's go back to my days as an Oracle instructor…. A common question in the Oracle DBA backup and recovery class was "why can't I just roll the database back to remove unwanted changes instead of restoring the database from a backup and applying the redo logs to roll forward to a point-in-time before the error occurred?" The question was invariably followed by "don't you think that would be easier?" The answer was always the same "yes, it would be easier, but the Oracle database doesn't have that capability."

That was before 9I's Flashback feature came along. Oracle9I allowed administrators to use before images stored in automatic undo tablespaces to undo changes made to tables. Pushing tables back in time was only dependent upon the amount of undo that was available (and whether any DDL changes were made between the current and destination times). Oracle 10G improved upon this feature with Flashback Table, which "rolls back" a table or set of tables using undo data, and Flashback Database, which rolls the entire database back to a previous point-in-time using flashback logs. Administrators often used an SCN or timestamp during the execution of flashback operations.

10G R2 enhances Flashback Database by allowing administrators to associate a user-defined name with a point-in-time. The user-defined name, called a restore point, can be used in place of a SCN or timestamp when executing a FLASHBACK TABLE or FLASHBACK DATABASE statement.

A SQL statement is used to create the restore point which means it can be embedded in application programs as well as executed on an as-needed basis in SQL*PLUS. The example below shows the creation of a restore point:

CREATE RESTORE POINT batch_daily_cycle_complete;

Now if I execute the following statement:

UPDATE gianteagle.employee_salary_table SET
salary = 200000 WHERE empl_lname = 'FOOT';

My boss can execute this statement to correct the 'mistake' (although I prefer to call it a fully justifiable pay adjustment):

FLASHBACK TABLE gianteagle.employee_salary_table TO batch_daily_cycle_complete;

10G R2 also provides guaranteed restore points, which ensure that the database can be flashed back to the point-in-time at which they were created. Guaranteed restore points save disk space because only the flashback logs required to meet the guaranteed restore point need to be retained by the database. The statement below creates a guaranteed restore point:

CREATE RESTORE POINT batch_daily_cycle_complete GUARANTEE FLASHBACK DATABASE;
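Existing restore points can be listed through the new V$RESTORE_POINT view; here's a quick sketch:

SELECT name, scn, time, guarantee_flashback_database
  FROM v$restore_point;

The GUARANTEE_FLASHBACK_DATABASE column shows YES for guaranteed restore points.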

Enterprise Manager Enhancements
If you have been reading this blog, you know that we have been spending a LOT of time on Enterprise Manager. Well, 10G R2 has certainly added some important enhancements that we must cover if we are going to call ourselves 10G R2 Certified EMEs (Enterprise Manager Experts).

Rest assured that we will be covering all of these enhancements in-depth in future blogs so I'll cover them at a high-level for now. Think of the remainder of this blog as a preview of upcoming blogs.

Enterprise Manager Statistics Management Support
A new link titled 'Manage Optimizer Statistics' has been added to the database administration home page in Enterprise Manager. Clicking on the link displays the new Manage Optimizer Statistics home page, which provides wizards and templates that facilitate and automate optimizer statistics management. In 10G R2, administrators are able to use Enterprise Manager to gather, restore, delete, lock and unlock statistics.

Why would you want to lock and unlock statistics? Let me describe one real-life example from a recent project here at Giant Eagle. We have a warehouse system that contains an extremely volatile table. Describing it as volatile is like stating the Titanic sprang a small leak. Hundreds of thousands of rows are added and removed around the clock.

The level of activity is based on the particular business process taking place. At any particular time, the table can contain hundreds of thousands of rows and at other times it can contain dozens. OK, so when do you generate statistics? If you run DBMS_STATS at the same time each day, in most cases you would think you would get a predictable set of statistics generated. Not so in this table's case. At one time you may get hundreds of rows and other times you get hundreds of thousands of rows.

If you are unlucky and generate statistics on the table when it has hundreds of rows, access paths suffer when the table grows to hundreds of thousands. So, we ran DBMS_STATS when we knew the table contained numerous rows and never generated statistics on it again. The problem is that we could then no longer easily use the SCHEMA option of DBMS_STATS. Locking the statistics on this table allows us to use the SCHEMA option without regenerating statistics for selected tables.
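This is exactly what the new DBMS_STATS locking calls are for. A hedged sketch (the schema and table names are made up):

EXEC DBMS_STATS.GATHER_TABLE_STATS('WAREHOUSE', 'VOLATILE_ORDERS');

EXEC DBMS_STATS.LOCK_TABLE_STATS('WAREHOUSE', 'VOLATILE_ORDERS');

EXEC DBMS_STATS.GATHER_SCHEMA_STATS('WAREHOUSE');

Statistics are gathered once while the table is representative, then locked; subsequent schema-level gathers skip the locked table.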

Enterprise Manager Enhanced Performance Monitoring
OK, let me start this section by stating that we are going to spend some dedicated time learning this new feature in upcoming blogs.

10G R2 EM provides the ability to view user session activity historically. This historical information will help administrators finally answer questions like "a few of my programs ran long two hours ago, can you find out why?"

Other performance monitoring enhancements include:

  • Ability to compare specific time periods of ADDM snapshot data. In the past, administrators were only able to compare time periods using ADDM's regular snapshot intervals. 10G R2 EM provides drop-down menus that allow users to choose either snapshot IDs or wall-clock times as input to snapshot data comparisons.
  • Administrators now have the capability of running ADDM snapshots on demand.
  • System-level and CPU-level activity can be viewed for any duration in real-time, recent-time or historically.


I hope you enjoyed these blogs on 10G R2 new features. Thanks for reading!


Monday, August 01, 2005