10 Things I like about Oracle 10G Part 2 (and other stuff)

I thought I would combine two different topics of conversation. We'll review some of the 10G features that we take for granted or that are easily overlooked. The discussion also continues its lighthearted review of day-to-day DBA life and provides a few helpful hints and tips to make that life easier. It's a weird combination, but it may make for an interesting and informative read.

Here are a couple of my favorite DBA discussions…

Favorite DBA Conversation #1
Any discussion that ends in "I hope you're not too busy because we need it now." I really dislike the conversation that ends in "we needed it yesterday." If you needed it yesterday, you should have stopped in to discuss the request a week before yesterday. We all know these types of requests because we have all gotten them. One of your customers pops in and states:

"We need a complete refresh of XYZ database from production. Oh, and its an ERP application database and we don't have enough disk space in test to hold everything so you'll need to figure out how to reduce its disk size requirements AND make sure the application still works.

"I know the box only has 1 CPU and 250 MEG of memory, but can you make sure we still get good performance? We're testing some application coding changes and we need to get performance benchmarks. We'll compare the stats from the test box to the ones we get from our 22 CPU production box. It may not be an "apples to apples" comparison, but I think the boxes are close enough to compare the performance stats.

"I know it's Friday afternoon at 3:00 and even though we knew about this refresh 14 weeks ago, we didn't think we needed to tell you until 1 day before we wanted it. So, we would like to have the system ready by Monday."

Over the last 20 years, I have received dozens of these last-minute requests. Requests that have made me cancel plans, postpone trips, and generate numerous apologies to friends and family. The work never bothered me, but the timing of the requests sometimes did. It also reduced the quality of service I provided to my other customers. When you have DBAs jumping from one unplanned request to another, it affects their ability to work on planned requests as well as perform proactive administrative duties.

Fix #1: You minimize these requests by creating, advertising, and enforcing DBA work request SLAs. Agreements that clearly state lead times for each type of request. The more widespread the problem is at your organization, the more advertising you must do. It really becomes a sales pitch on the importance of a DBA unit being able to forecast workloads. The unit can't provide quality service to its customers if it is being affected by work requests that could have been planned for - but weren't.

The mantra should be "all of our customers suffer when we receive unplanned work requests." I'm not afraid of telling application manager A that she can't get her planned requests done because application manager B just gave us 32 emergency requests. Kind of like putting them both in a jar and shaking it up a bit.

Fix #2: 10G Grid Control allows you to clone both the Oracle Home software installation and the Oracle database. Grid Control automatically adjusts the host-specific home properties (IP address, host name, listener settings) during the cloning process. 10G DBCA templates allow you to clone a new database from an existing database. In addition, DBCA allows you to clone just the source database's file structures, or the structures and the data.
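If you prefer to script the DBCA route, the silent-mode commands look something like this. Treat it as a sketch - the SID, template, and database names here are made up, and the exact options can vary a bit by release, so verify them with dbca -help on your system:

dbca -silent -createCloneTemplate -sourceSID orcl -templateName orcl_clone.dbc
dbca -silent -createDatabase -templateName orcl_clone.dbc -gdbName test10g -sid test10g

The first command captures the source database (structure plus datafiles) as a clone template; the second stamps out a new database from that template, adjusting the host-specific settings along the way.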

You can also use some of the 10G Data Pump features to tailor the new database to your users' specifications (see the sketch after the list below). 10G Data Pump provides numerous options that:

  • Specify what objects you want to transfer from one database to another
  • Transfer a subset of data from those objects using various filtering options
  • Remap objects during the migration process. You can remap schema objects from one user to another or from one tablespace to another
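
Here's a rough sketch of those options in action. Every schema, tablespace, directory object, and file name below is invented for illustration; the parameters themselves (SCHEMAS, QUERY, REMAP_SCHEMA, REMAP_TABLESPACE) are standard Data Pump options:

expdp system/manager DIRECTORY=dpump_dir DUMPFILE=prod_app.dmp SCHEMAS=prod_app QUERY=prod_app.orders:"WHERE order_date > SYSDATE - 90"

impdp system/manager DIRECTORY=dpump_dir DUMPFILE=prod_app.dmp REMAP_SCHEMA=prod_app:test_app REMAP_TABLESPACE=prod_data:test_data

The export pulls just the PROD_APP schema and trims the ORDERS table down to the last 90 days of data; the import then retargets everything at a different user and tablespace. (Shell quoting rules for QUERY vary, so a parameter file is usually the cleaner way to pass it.)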

You already know I'm a huge fan of 10G Data Pump. You need to leverage the features this tool provides to customize schema and data transfers from one database to another. You shouldn't be writing scripts when you don't need to. The only way you will realize the full potential of 10G Data Pump is to READ THE UTILITIES manual and see how powerful 10G Data Pump really is. It will make your life easier.

Favorite DBA Conversation #2
You come into a meeting and you're confronted by an application manager who tells you that you need to tune your database because his ace developer "Sir Lawrence of LongSQL" can't get his 32-page SQL statement to achieve sub-second response times.

You know the SQL statement that I'm talking about. It's the one that contains 15 inner and outer joins, 12 subselects, and… (feel free to add any other type of complex SQL syntax here).

Note from database administrators to application developers:
Just because you can code a 32-page SQL statement doesn't mean you should. We all understand how powerful the SQL language is, but do you really need to try and use every feature available in a single statement? There's a difference between leveraging the power of the SQL language and attempting to perform too much work in a single SQL statement. Generating long, complex SQL statements that no one can debug does NOT classify you as a supercoder. It does classify you as something else, but I can't say that publicly.

Favorite DBA Conversation #3
Favorite DBA Conversation #3 combines components of conversations #1 and #2. It occurs when you are approached by a customer who says they have developed this "small" application that goes live in two weeks and it isn't performing "as well as we think it should." This means that you have to review the SQL from 23 new application programs in those two weeks.

Fix #1: Reinforce the need for the DBA team to be involved in all application development projects. This can be done by creating a series of application design review meetings. I dedicated an entire blog entry to this subject.

Fix #2: You will need to start a top-down performance analysis project that focuses on tuning the most poorly performing queries first. If you don't have access to the 10G Grid Control sissy GUI tools, you can use the V$ views to retrieve the top resource-consuming statements. You can use the V$SQLAREA view for Oracle9i and below. 10G offers the V$SQLSTATS view, which provides some distinct advantages over its previous counterparts.

10G V$SQLSTATS Performance View
Before we discuss the new V$SQLSTATS view, let me deviate for a minute to provide you with some quick SQL tuning advice. V$SQLAREA is one of my favorite SQL tuning views. Until V$SQLSTATS came along…

If I want to perform a traditional "top down" tuning approach and tune the highest resource consuming SQL, I'll use the statements below to identify the top resource consuming queries.

The following query identifies the SQL responsible for the most disk reads:

SELECT disk_reads, executions, disk_reads/DECODE(executions, 0, 1, executions), address, sql_text FROM v$sqlarea WHERE disk_reads > 5000 ORDER BY disk_reads DESC;

The following query identifies the SQL responsible for the most buffer hits:

SELECT buffer_gets, executions, buffer_gets/DECODE(executions, 0, 1, executions), address, sql_text FROM v$sqlarea WHERE buffer_gets > 10000 ORDER BY buffer_gets DESC;


You can create a more readable report in SQL*Plus by inserting report breaks between the output lines. To generate the report breaks in SQL*Plus, issue the appropriate BREAK command before running the query:

BREAK ON disk_reads SKIP 2    (for the disk read report)
BREAK ON buffer_gets SKIP 2   (for the buffer get report)


The first query returns SQL statements responsible for generating more than 5,000 disk reads, while the second returns SQL statements responsible for generating more than 10,000 buffer gets. I used these numbers just as an example, but I sometimes use them as a starting point. I'll then adjust them up or down accordingly, based on their output. The numbers also depend on the system I'm reviewing. I'll use different numbers for OLTP environments than I would for data warehouses.

You'll notice that I divide the number of disk and buffer reads by the number of statement executions. If a statement is generating 1,000,000 disk reads but is executed 500,000 times, that's only two reads per execution - it probably doesn't need tuning.

Heavy disk reads per statement execution usually mean a lack of proper indexing, poor selection criteria, etc. Heavy buffer reads sometimes mean the exact opposite - indexes are being used when they shouldn't be.

But I'm personally most interested in workload, which is why I most often use buffer cache hits in my initial queries.

One caveat: the SQL_TEXT column in V$SQLAREA does not provide the entire text of the SQL statement. That's why I include the address column in the report. I can use that value to dump the entire SQL statement from V$SQLTEXT using the statement below (where xxxxxxxx is the value in the address column from the V$SQLAREA reports above):

select sql_text from v$sqltext where address = 'xxxxxxxx' order by piece;
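
If you'd rather not copy addresses around by hand, you can also join the two views and pull the full text of every heavy hitter in one pass - a quick sketch, reusing the same illustrative 10,000 buffer get threshold:

SELECT t.address, t.piece, t.sql_text FROM v$sqltext t, v$sqlarea a WHERE t.address = a.address AND t.hash_value = a.hash_value AND a.buffer_gets > 10000 ORDER BY t.address, t.piece;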

Oracle 10G R2 provides a new view called V$SQLSTATS that contains a combination of columns that appear in V$SQL and V$SQLAREA. The benefits that V$SQLSTATS provides are as follows:

  • Since V$SQLSTATS contains the entire text of the SQL statement AND its associated performance statistics, we are no longer required to access both V$SQLTEXT and V$SQLAREA to obtain the information we need.
  • Oracle states that V$SQLSTATS is faster and more scalable.

The data in V$SQLAREA has a tendency to get flushed out just when you need to pull additional information from it. The V$SQLSTATS view provides users with a longer access window. That's one of the key benefits of this view.
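To tie it together, the earlier buffer get report might look like this when rewritten against V$SQLSTATS - again just a sketch with an illustrative threshold. SQL_FULLTEXT is the CLOB column that holds the complete statement text, so there's no second trip to V$SQLTEXT:

SELECT sql_id, buffer_gets, executions, buffer_gets/DECODE(executions, 0, 1, executions), sql_fulltext FROM v$sqlstats WHERE buffer_gets > 10000 ORDER BY buffer_gets DESC;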

Next Up
The series continues.


Monday, April 16, 2007