3 Quick And Painless Ways To Fine Tune SQL Server Query Performance
Effective database design is an aspect often overlooked in the application design phase. A flawed data model leads to flawed transactions that can compromise your data as well as produce poor response times. Many factors affect the performance of your database, ranging from hardware to good design. This article focuses on tips and tricks for improving your data structure design that can increase SQL Server query performance: index design, stored procedures, and triggers.
Index design is a major concern when designing your data structure. SQL Server indexes are similar to the indexes you would find in a library or the tabs in a dictionary.
There are four main types of indexes:
- Primary Keys (creates a unique index to enforce a primary key)
- Unique (creates a unique index but not a primary key)
- Clustered (indexes based on the actual table data)
- Non-Clustered (not based on the table data)
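To illustrate, here is how each index type might be created in Transact-SQL. This is a sketch only; the table and column names are hypothetical, not from any particular schema.

```sql
-- Primary key: creates a unique index (clustered by default)
CREATE TABLE Customers (
    CustomerID int NOT NULL PRIMARY KEY,
    Email varchar(100) NOT NULL
)

-- Unique index: enforces uniqueness without being the primary key
CREATE UNIQUE INDEX ix_Customers_Email ON Customers (Email)

-- Clustered index: the table rows are physically stored in this order
CREATE TABLE Orders (OrderID int NOT NULL, CustomerID int NOT NULL)
CREATE CLUSTERED INDEX ix_Orders_OrderID ON Orders (OrderID)

-- Non-clustered index: a separate structure that points back to the rows
CREATE NONCLUSTERED INDEX ix_Orders_CustomerID ON Orders (CustomerID)
```

Note that a table can have only one clustered index, because it defines the physical order of the rows; that is why the clustered index above is shown on a second table.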
I'm going to touch on how setting your indexes can affect performance. To retrieve data from a table without an index, a SQL statement causes the table to be scanned linearly until the needed data is found. Indexes eliminate whole-table scans by allowing queries to locate data through the index directly. This allows queries to perform lookups much more efficiently.
Indexes help with table SELECTs, UPDATEs, and DELETEs, but they slow INSERTs down. The reason is simple if you think about it: inserting a record into a table with many indexes requires the insert to update every one of those indexes. This is where the design aspect comes into play. If the majority of the work running against a table is lookups, you will want indexes; if the majority of the work is inserts, you probably do not want them. Another consideration is table size: if a table is small, it may be more efficient to do a table scan than to use an index.
To summarize, when thinking about indexes and performance you must consider the following:
- What type of actions will this table primarily be used for? If the table will be used for more than just INSERTs, indexes are a good idea.
- Can an index be created that covers the majority of the data you will be pulling? If so, queries need look no further than the index itself for the data, eliminating the need to hit the actual data tables.
- Is the table large enough to benefit from an index? Indexing small tables can slow your performance down.
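As a sketch of the second point, consider a hypothetical Orders table that is frequently queried by customer name. A non-clustered index whose key includes every column the query needs lets SQL Server answer the query from the index alone, without ever touching the base table:

```sql
-- Frequent query to be served:
--   SELECT ItemNum, OrderDate FROM Orders WHERE CustomerName = 'Smith'

-- This index "covers" that query: CustomerName is searchable as the
-- leading column, and ItemNum and OrderDate are carried along in the
-- index, so the data pages are never read.
CREATE NONCLUSTERED INDEX ix_Orders_Covering
ON Orders (CustomerName, ItemNum, OrderDate)
```

The column names here are assumptions for illustration; the point is to match the index to the columns your most frequent queries actually select and filter on.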
Another "surefire" way to improve your response times and performance is to use stored procedures whenever possible. Stored procedures are prebuilt, precompiled SQL statements that are stored within the database itself. Stored procedures can accept parameters as input and return values as output. Stored procedures are efficient for many reasons. First off, stored procedures exist as part of the database itself. Because of this, they are precompiled and prechecked for errors. SQL statements that are going to be used frequently such as table inserts, retrievals, deletions, and updates should be built as a stored procedure. Data validation can be built into stored procedures to ensure that business rules are followed and data integrity is maintained. When executing a stored procedure it is only necessary to pass an execute command. For example, to call a stored procedure called sp_AddCustomer, you would send
EXEC sp_AddCustomer ParameterList
to the database along with any parameters needed. This is much more efficient than sending entire SQL statements over the network, especially if there are many users performing the same actions.
Let's say we have a table called orders. We may want to add orders frequently (we would hope!). We could build a stored procedure that will INSERT a row into the orders table based on a list of parameters. For example:
CREATE PROCEDURE sp_AddOrder
    @CustomerName varchar(50),
    @CustomerAddress varchar(100),
    @ItemNum int,
    @OrderDate datetime
AS
INSERT INTO Orders (CustomerName, CustomerAddress, ItemNum, OrderDate)
VALUES (@CustomerName, @CustomerAddress, @ItemNum, @OrderDate)

Then to execute:

EXEC sp_AddOrder 'Jane Smith', '123 Main St', 1001, '2005-08-01'
Stored procedures make database design quick, scalable, and maintainable. When deciding whether to use stored procedures, keep the points below in mind.
- Stored procedures execute by simply passing an execute command to the database. This relieves the network from having to carry an entire SQL statement, saving bandwidth and response time.
- Stored procedures are precompiled, so error checking is done at the time they are built. This ensures solid SQL statements and fast execution times.
- Stored procedures are fantastic for maintaining data integrity. Good data integrity always translates into good response times.
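To sketch the data-integrity point, here is a variant of the order-insert procedure that enforces a business rule before touching the table. The rule itself (that an item number must be positive) and the procedure name are hypothetical, purely for illustration:

```sql
CREATE PROCEDURE sp_AddOrderChecked
    @CustomerName varchar(50),
    @ItemNum int
AS
-- Enforce the business rule before the insert ever happens
IF @ItemNum <= 0
BEGIN
    RAISERROR ('ItemNum must be positive.', 16, 1)
    RETURN
END
INSERT INTO Orders (CustomerName, ItemNum, OrderDate)
VALUES (@CustomerName, @ItemNum, GETDATE())
```

Because the check lives in the procedure, every caller gets the same validation, no matter which application sent the request.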
Triggers are very similar to stored procedures in that they also reside in the database itself. They too are precompiled and have a very similar structure.
So what are the differences? Triggers execute automatically in response to a particular action on a table; INSERT, DELETE, and UPDATE are all common catalysts for a trigger.
Why would you want an additional statement to run against the database if you are already passing one? The answer: triggers can be used for anything from audit trails to data validation. A trigger on a table is considered part of any statement run against that table. Because of this, a ROLLBACK in a trigger causes the entire SQL statement to be rolled back. This makes it very difficult for anyone to corrupt your data. As we said before, good data means good response times.
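As a sketch of validation with rollback, assuming the same hypothetical orders table and item-number rule used elsewhere in this article, a trigger can reject a bad UPDATE outright:

```sql
CREATE TRIGGER tu_orders_check ON orders FOR UPDATE
AS
-- The "inserted" pseudo-table holds the new values for the updated
-- rows; if any of them violate the rule, undo the whole statement.
IF EXISTS (SELECT * FROM inserted WHERE ItemNum <= 0)
BEGIN
    RAISERROR ('Invalid ItemNum; statement rolled back.', 16, 1)
    ROLLBACK TRANSACTION
END
```

Since the trigger runs as part of the UPDATE statement itself, the ROLLBACK undoes the offending change along with everything else in the statement.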
If you need to maintain data integrity, you can move all validation into triggers and take that burden off the client machine. In a large-scale environment this can save valuable resources. If you need to maintain an audit trail of sensitive data, you no longer need to place that processing in a client application or on the shoulders of the person maintaining the database. Every time you update data in an orders table, for example, you could move the old data into a table called audit_orders.
All this work, which would normally be done on the client side or by the DBA, is now handled automatically by a trigger. This not only takes the burden away from others, but also ensures there can be no mistakes in tracking the history of the data. The general structure for designing a trigger is shown below.
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'TR' AND name = 'ti_orders')
    DROP TRIGGER ti_orders
GO
CREATE TRIGGER ti_orders ON orders FOR UPDATE
AS
INSERT INTO audit_orders (CustomerName, CustomerAddress, ItemNum, OrderDate)
SELECT CustomerName, CustomerAddress, ItemNum, OrderDate FROM deleted
GO
The above example copies the old version of each updated row into the audit_orders table, giving you a snapshot of how the data existed through time. And because this is prebuilt, you only need to write it once and never worry again about maintaining the audit table.
In an enterprise situation it is often difficult to say what vehicles will be used to access or modify your data. There may be more than one group of people, more than one application, or more than one DBA.
Because of this, strong validation on the database side in the form of stored procedures, triggers, and indexes will ensure your database remains scalable, your data remains safe, and response times remain high.
Poorly designed data models can only frustrate your users and yourself. If you follow the steps above when designing your database, you will find you end up with a very scalable and responsive structure you can build upon.
Phillip Elbert is a webmaster at BMC Software.
Contributors : Phillip Elbert
Last modified 2005-08-10 07:25 AM