Chapter 3. Locking and Concurrency - Part 1

by Tom Kyte

From Expert Oracle, Signature Edition: Programming Techniques and Solutions for Oracle 7.3 through 8.1.7, Berkeley, Apress, April 2005.


One of the key challenges in developing multi-user, database-driven applications is to maximize concurrent access but, at the same time, to ensure that each user is able to read and modify the data in a consistent fashion. The locking and concurrency controls that allow this to happen are key features of any database, and Oracle excels in providing them. However, Oracle’s implementation of these features is unique and it is up to you, the application developer, to ensure that when your application performs data manipulation, it uses these mechanisms correctly. If you fail to do so, your application will behave in an unexpected way and, inevitably, the integrity of your data will be compromised (as was demonstrated in Chapter 1 on Developing successful Oracle Applications).

In this chapter we’re going to take a detailed look at how Oracle locks data, and the implications of this model for writing multi-user applications. We will investigate the granularity to which Oracle locks data, how Oracle achieves ‘multi-version read consistency’, and what that all means to you, the developer. When appropriate, I’ll contrast Oracle’s locking scheme with other popular implementations, mostly to dispel the myth that ‘row level locking adds overhead’. It only adds overhead if the implementation adds overhead.

What are Locks?

A lock is a mechanism used to regulate concurrent access to a shared resource. Note how I used the term ‘shared resource’, not ‘database row’. It is true that Oracle locks table data at the row level, but it also uses locks at many other levels to provide concurrent access to various resources. For example, while a stored procedure is executing, the procedure itself is locked in a mode that allows others to execute it, but will not permit another user to alter it in any way. Locks are used in the database to permit concurrent access to these shared resources, while at the same time providing data integrity and consistency.

In a single-user database, locks are not necessary. There is, by definition, only one user modifying the information. However, when multiple users are accessing and modifying data or data structures, it is crucial to have a mechanism in place to prevent concurrent modifications to the same piece of information. This is what locking is all about.

It is very important to understand that there are as many ways to implement locking in a database as there are databases. Just because you are experienced with the locking model of one particular RDBMS does not mean you know everything about locking. For example, before I got heavily involved with Oracle, I used other databases such as Sybase and Informix. All three of these databases provide locking mechanisms for concurrency control, but there are deep and fundamental differences in the way locking is implemented in each one. In order to demonstrate this, I’ll outline my progression from a Sybase developer to an Informix user and finally an Oracle developer. This happened many years ago, and the Sybase fans out there will tell me ‘but we have row-level locking now’. It is true: Sybase now uses row-level locking, but the way in which it is implemented is totally different to the way in which it is done in Oracle. It is a comparison between apples and oranges, and that is the key point.

As a Sybase programmer, I would hardly ever consider the possibility of multiple users inserting data into a table concurrently – it was something that just didn’t often happen in that database. At that time, Sybase provided only for page-level locking and, since all the data tended to be inserted into the last page of non-clustered tables, concurrent inserts by two users were simply not going to happen. Exactly the same issue affected concurrent updates (since an UPDATE was really a DELETE followed by an INSERT). Perhaps this is why Sybase, by default, commits or rolls back immediately after execution of each and every statement.

Compounding the fact that, in most cases, multiple users could not simultaneously modify the same table, was the fact that while a table modification was in progress, many queries were also effectively blocked against that table. If I tried to query a table and needed a page that was locked by an update, I waited (and waited and waited). The locking mechanism was so poor that providing support for transactions that took more than a microsecond was deadly – the entire database would appear to ‘freeze’ if you did. I learned a lot of bad habits here. I learned that transactions were ‘bad’, that you ought to commit rapidly and never hold locks on data. Concurrency came at the expense of consistency. You either wanted to get it right or get it fast. I came to believe that you couldn’t have both.

When I moved on to Informix, things were better, but not by much. As long as I remembered to create a table with row-level locking enabled, then I could actually have two people simultaneously insert data into that table. Unfortunately, this concurrency came at a high price. Row-level locks in the Informix implementation were expensive, both in terms of time and memory. It took time to acquire and ‘unacquire’ or release them, and each lock consumed real memory. Also, the total number of locks available to the system had to be computed prior to starting the database. If you exceeded that number then you were just out of luck. Consequently, most tables were created with page-level locking anyway, and, as with Sybase, both row and page-level locks would stop a query in its tracks. As a result of all this, I found that once again I would want to commit as fast as I could. The bad habits I picked up using Sybase were simply reinforced and, furthermore, I learned to treat a lock as a very scarce resource, something to be coveted. I learned that you should manually escalate locks from row-level to table-level to try to avoid acquiring too many of them and bringing the system down.

When I started using Oracle I didn’t really bother reading the manuals to find out how locking worked here. After all, I had been using databases for quite a while and was considered something of an expert in this field (in addition to Sybase and Informix, I had used Ingres, DB2, Gupta SQLBase, and a variety of other databases). I had fallen into the trap of believing that I knew how things should work, so of course they would work that way. I was wrong in a big way.

It was during a benchmark that I discovered just how wrong I was. In the early days of these databases, it was common for the vendors to ‘benchmark’ for really large procurements – to see who could do the work the fastest, the easiest, with the most features. This one was set up between Informix, Sybase, and Oracle. Oracle was first. Their technical people came on-site, read through the benchmark specs, and started setting it up. The first thing I noticed was that they were going to use a database table to record their timings, even though we were going to have many dozens of connections doing work, each of which would frequently need to insert and update data in this log table. Not only that, but they were going to read the log table during the benchmark as well! I, being a nice guy, pulled one of them aside to ask him if he was crazy – why would they purposely introduce another point of contention into the system? Wouldn’t the benchmark processes all tend to serialize around their operations on this single table? Would we jam up the benchmark by trying to read from this table as others were heavily modifying it? Why would you want to introduce all of these extra locks you would need to manage? I had dozens of ‘why would you even consider that?’ type questions. The technical folks from Oracle thought I was a little daft at that point. That is, until I pulled up a window into Sybase or Informix and showed them the effects of two people inserting into a table, or of someone trying to query a table while others were inserting rows (the query returns zero rows per second). The differences between the way Oracle does it and the way almost every other database does it are phenomenal – they are night and day. Needless to say, neither Informix nor Sybase was too keen on the database log table approach during their attempts. They preferred to record their timings to flat files in the operating system.

The moral of this story is twofold: all databases are fundamentally different, and when designing your application you must approach each as if you had never used a database before. Things you would do in one database are either not necessary, or simply won’t work, in another database.

In Oracle you will learn that:

      • Transactions are what databases are all about; they are good.
      • You should defer committing as long as you have to. You should not commit quickly in an effort to avoid stressing the system; it does not stress the system to have long or large transactions. The rule is: commit when you must, and not before. Your transactions should only be as small or as large as your business logic dictates.
      • You should hold locks on data as long as you need to. They are tools for you to use, not things to be avoided. Locks are not a scarce resource.
      • There is no overhead involved with row level locking in Oracle, none.
      • You should never escalate a lock (for example, use a table lock instead of row locks) because it would be ‘better on the system’. In Oracle it won’t be better for the system – it will save no resources.
      • Concurrency and consistency can be achieved. You can get it fast and correct, every time.

As we go through the remaining components in this chapter, we’ll be reinforcing the above points.

Locking Issues

Before we discuss the various types of locks that Oracle uses, it is useful to look at some locking issues – many of which arise from badly designed applications that do not make correct use (or make no use) of the database’s locking mechanisms.

Lost Updates

A lost update is a classic database problem. Simply put, a lost update occurs when the following events occur, in the order presented:

      1. User 1 retrieves (queries) a row of data.
      2. User 2 retrieves that same row.
      3. User 1 modifies that row, updates the database and commits.
      4. User 2 modifies that row, updates the database and commits.

This is called a lost update because all of the changes made in step three above will be lost. Consider, for example, an employee update screen – one that allows a user to change an address, work number and so on. The application itself is very simple – a small search screen to generate a list of employees and then the ability to drill down into the details of each employee. This should be a piece of cake. So, we write the application with no locking on our part, just simple SELECT and UPDATE commands.

So, an end user (user 1) navigates to the details screen, changes an address on the screen, hits Save, and receives confirmation that the update was successful. Fine, except that when user 1 checks the record the next day, in order to send out a tax form, the old address is still listed. How could that have happened? Unfortunately, it can happen all too easily. In this case, another end user (user 2) had queried the same record about five minutes before user 1 and still had the old data displayed on their screen. User 1 came along, queried up the data on his terminal, performed his update, received confirmation, and even requeried to see the change for himself. However, user 2 then updated the work telephone number field and hit Save – blissfully unaware of the fact that he had just overwritten user 1’s changes to the address field with the old data! The reason this can happen is that the application developer, finding it easier to update all columns instead of figuring out exactly which columns changed, wrote the program such that when one particular field is updated, all fields for that record are ‘refreshed’.
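To see how, consider a sketch of the statement pair behind such a screen. This is my illustration, not code from the chapter; the ADDRESS and WORK_PHONE columns are hypothetical:

     select empno, ename, address, work_phone
       from emp
      where empno = :empno;

     -- ... the end user edits one field on screen; the application then
     -- sends every column back, whether it was changed or not ...

     update emp
        set ename = :ename, address = :address, work_phone = :work_phone
      where empno = :empno;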

Notice that for this to happen, user 1 and user 2 didn’t even need to be working on the record at the exact same time. All it needed was for them to be working on the record at about the same time.

This is a database issue that I’ve seen crop up time and time again when GUI programmers, with little or no database training, are given the task of writing a database application. They get a working knowledge of SELECT, INSERT, UPDATE and DELETE and then set about writing the application. When the resulting application behaves in the manner described above, it completely destroys people’s confidence in it, especially since it seems so random, so sporadic, and is totally irreproducible in a controlled environment (leading the developer to believe it must be user error).

Many tools, such as Oracle Forms, transparently protect you from this behavior by ensuring the record is unchanged from the time you queried it, and by locking it before you make any changes – but many others (such as a handwritten VB or Java program) do not. What the tools that do protect you do behind the scenes, or what the developers must do themselves, is to use one of two types of locking.

Pessimistic Locking

This locking method would be put into action in the instant before we modify a value on screen – for example when the user selects a specific row and indicates their intention to perform an update (by hitting a button on the screen, say). So, a user queries the data out without locking:

     scott@TKYTE816> SELECT EMPNO, ENAME, SAL FROM EMP WHERE DEPTNO = 10;

          EMPNO ENAME             SAL
     ---------- ---------- ----------
           7782 CLARK            2450
           7839 KING             5000
           7934 MILLER           1300

Eventually, the user picks a row they would like to update. Let’s say in this case, they choose to update the MILLER row. Our application will at that point in time (before they make any changes on screen) issue the following command:

     scott@TKYTE816> SELECT EMPNO, ENAME, SAL
       2    FROM EMP
       3   WHERE EMPNO = :EMPNO
       4     AND ENAME = :ENAME
       5     AND SAL = :SAL
       6     FOR UPDATE NOWAIT
       7  /

          EMPNO ENAME             SAL
     ---------- ---------- ----------
           7934 MILLER           1300

What the application does is supply values for the bind variables from the data on the screen (in this case 7934, MILLER and 1300) and re-queries up this same row from the database – this time locking the row against updates by other sessions. This is why this approach is called pessimistic locking. We lock the row before we attempt to update because we doubt that the row will remain unchanged otherwise.

Since all tables have a primary key (the above SELECT will retrieve at most one record since it includes the primary key, EMPNO) and primary keys should be immutable (we should never update them) we’ll get one of three outcomes from this statement:

      • If the underlying data has not changed, we will get our MILLER row back, and this row will be locked from updates by others (but not from reads).
      • If another user is in the process of modifying that row, we will get an ORA-00054 Resource Busy error immediately (that is what the NOWAIT buys us). We must wait for the other user to finish with it and try again.
      • If, in the time between selecting the data and indicating our intention to update, someone has already changed the row, then we will get zero rows back. The data on our screen is stale. The application needs to requery and lock the data before allowing the end user to modify any of it, in order to avoid the lost update scenario described above. In this case, with pessimistic locking in place, when user 2 attempts to update the telephone field the application would now recognize that the address field had been changed, and would requery the data. Thus user 2 would not overwrite user 1’s change with the old data in that field.
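One convenient way for the application to react to all three outcomes is to map the ORA-00054 error to a named exception via PRAGMA EXCEPTION_INIT. The following PL/SQL block is a sketch of mine, not code from the chapter (the -20001/-20002 error codes and messages are arbitrary):

     declare
         resource_busy exception;
         pragma exception_init( resource_busy, -54 );   -- give ORA-00054 a name
         l_rec emp%rowtype;
     begin
         select *
           into l_rec
           from emp
          where empno = :empno
            and ename = :ename
            and sal   = :sal
            for update nowait;
         -- outcome 1: we got the row back and it is locked; safe to update
     exception
         when resource_busy then
             -- outcome 2: another session has the row locked right now
             raise_application_error( -20001, 'Row is locked by another session' );
         when no_data_found then
             -- outcome 3: the data changed underneath us; requery before updating
             raise_application_error( -20002, 'Data has changed, please requery' );
     end;
     /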

Once we have locked the row successfully, the application will issue some update and commit the changes:

     scott@TKYTE816> UPDATE EMP
       2     SET ENAME = :ENAME, SAL = :SAL
       3   WHERE EMPNO = :EMPNO;

     1 row updated.

     scott@TKYTE816> commit;

     Commit complete.

We have now very safely changed that row. It is not possible for us to overwrite someone else’s changes as we verified the data did not change between the time we initially read it out and when we locked it.

Optimistic Locking

The second method, referred to as optimistic locking, is to keep the old and new values in the application and upon updating the data use an update like this:

     Update table
        Set column1 = :new_column1, column2 = :new_column2, ....
      Where column1 = :old_column1
        And column2 = :old_column2

Here, we are optimistically hoping that the data doesn’t get changed. In this case, if our update updates one row – we got lucky, the data didn’t change between the time we read it out and the time we got around to submitting the update. If we update zero rows, we lose – someone else changed the data and now we must figure out what we want to do in order to avoid the lost update. Should we make the end user re-key the transaction after querying up the new values for the row (potentially frustrating them no end as there is a chance the row will change yet again on them)? Should we try to merge the values of the two updates, performing update conflict resolution based on business rules (lots of code)? Of course, for disconnected users, the last option is the only one available.

It should be noted that you can use a SELECT FOR UPDATE NOWAIT here as well. The UPDATE above will in fact avoid a lost update, but it does stand a chance of blocking – hanging while it waits for an UPDATE of that row by another session to complete. If all of your applications use optimistic locking, then using a straight UPDATE is generally OK; rows are locked for a very short duration as updates are applied and committed. If some of your applications use pessimistic locking, however, which will hold locks on rows for relatively long periods, then you would want to consider using a SELECT FOR UPDATE NOWAIT immediately prior to the UPDATE to avoid getting blocked by another session.
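Using the EMP example from earlier, the optimistic version of the MILLER update might look like the following sketch (mine, not the chapter’s). One row updated means we won; zero rows updated means the data changed and we must requery:

     scott@TKYTE816> UPDATE EMP
       2     SET ENAME = :NEW_ENAME, SAL = :NEW_SAL
       3   WHERE EMPNO = :EMPNO
       4     AND ENAME = :OLD_ENAME
       5     AND SAL   = :OLD_SAL;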

So, which method is best? In my experience, pessimistic locking works very well in Oracle (but perhaps not other databases) and has many advantages over optimistic locking.

With pessimistic locking the user can have confidence that the data they are modifying on the screen is currently ‘owned’ by them – they in effect have the record checked out and nobody else can modify it. Some would argue that if you lock the row before changes are made, other users would be locked out from that row and the scalability of the application would be decreased. The fact is that, however you do it, only one user will ultimately be able to update the row (if we want to avoid the lost update). If you lock the row first, and then update it, your end user has a better experience. If you don’t lock it and try to update it later, your end user may put time and energy into making changes only to be told ‘sorry, the data has changed, please try again’. To limit the time that a row is locked before updating, you could have the application release the lock if the user walks away and doesn’t actually use the record for some period of time, or you could use Resource Profiles in the database to time out idle sessions.
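For instance, a resource profile that times out sessions idle for more than 30 minutes might be set up as follows. This is a sketch of mine, not from the chapter; the profile name and limit are arbitrary, and the RESOURCE_LIMIT parameter must be enabled for profile limits to be enforced:

     sys@TKYTE816> alter system set resource_limit = true;
     System altered.

     sys@TKYTE816> create profile interactive_user limit idle_time 30;
     Profile created.

     sys@TKYTE816> alter user scott profile interactive_user;
     User altered.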

Furthermore, locking the row in Oracle does not prevent reads of that record as it does in other databases; the locking of the row does not prevent any normal activity from taking place. This is due 100% to Oracle’s concurrency and locking implementation. In other databases, the converse is true. If I tried to do pessimistic locking in them, no application would work. The fact that a locked row in those databases blocks queries prevents this approach from even being considered. So, it may be necessary to unlearn ‘rules’ you have learned in one database in order to be successful in a different database.

Blocking

Blocking occurs when one session holds a lock on a resource that another session is requesting. As a result, the requesting session will be blocked; it will ‘hang’ until the holding session gives up the locked resource. In almost every case, blocking is avoidable. In fact, if you do find yourself blocking in an interactive application, you are most likely suffering from the lost update bug described above (your logic is flawed, and that is the cause of the blocking).

There are four common DML statements that will block in the database – INSERT, UPDATE, DELETE, and SELECT FOR UPDATE. The solution to a blocked SELECT FOR UPDATE is trivial: simply add the NOWAIT clause and it will no longer block. Instead, your application would report back to the end user that the row is already locked. The interesting cases are the remaining three DML statements. We’ll look at each of them and see why they should not block, and when they do – how to correct that.

Blocked Inserts

The only time an INSERT will block is when you have a table with a primary key or unique constraint placed on it and two sessions simultaneously attempt to insert a row with the same value. One of the sessions will block until the other session either commits (in which case the blocked session will receive an error about a duplicate value) or rolls back (in which case the blocked session succeeds). This typically happens with applications that allow the end user to generate the primary key/unique column value. It is most easily avoided via the use of Oracle sequences in the generation of primary keys as they are a highly concurrent method of generating unique keys in a multi-user environment. In the event that you cannot use a sequence, you can use the technique outlined in Appendix A, on the DBMS_LOCK package, where I demonstrate how to use manual locks to avoid this issue.
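A minimal sketch of the sequence approach follows (the sequence name is mine). Each session draws its key from the sequence rather than computing it, so two sessions can never collide on the same value:

     tkyte@TKYTE816> create sequence emp_seq;
     Sequence created.

     tkyte@TKYTE816> insert into emp ( empno, ename )
       2  values ( emp_seq.nextval, :ename );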

Blocked Updates and Deletes

In an interactive application – one where you query some data out of the database, allow an end user to manipulate it, and then ‘put it back’ into the database – a blocked UPDATE or DELETE indicates that you probably have a lost update problem in your code. You are attempting to UPDATE a row that someone else is already updating; in other words, one that someone else already has locked. You can avoid the blocking issue by using the SELECT FOR UPDATE NOWAIT query to:

      • Verify the data has not changed since you queried it out (lost update prevention)
      • Lock the row (prevents the update or delete from blocking)

As discussed earlier, you can do this regardless of the locking approach you take – both pessimistic and optimistic locking may employ the SELECT FOR UPDATE NOWAIT to verify the row has not changed. Pessimistic locking would use that statement the instant the user indicated their intention to modify the data. Optimistic locking would use that statement immediately prior to updating the data in the database. Not only will this resolve the blocking issue in your application, it will also correct the data integrity issue.

Deadlocks

Deadlocks occur when two sessions each hold a resource that the other wants. For example, if I have two tables, A and B, in my database, and each has a single row in it, I can demonstrate a deadlock easily. All I need to do is open two sessions (two SQL*Plus sessions, for example), and in session A, I update table A. In session B, I update table B. Now, if I attempt to update table A in session B, I will become blocked. Session A has this row locked already. This is not a deadlock; this is just blocking. We have not yet deadlocked, since there is a chance that session A will commit or roll back, and session B will simply continue at that point.
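A minimal transcript of that sequence of events (my sketch; it assumes each table has a single numeric column X, matching the error message shown below):

     -- Session A:
     update a set x = x+1;     -- locks the single row in A

     -- Session B:
     update b set x = x+1;     -- locks the single row in B
     update a set x = x+1;     -- blocks: session A holds this row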

If we go back to session A, and then try to update table B, we will cause a deadlock. One of the two sessions will be chosen as a ‘victim’, and will have its statement rolled back. For example, the attempt by session B to update table A may be rolled back, with an error such as:

     update a set x = x+1
            *
     ERROR at line 1:
     ORA-00060: deadlock detected while waiting for resource

Session A’s attempt to update table B will remain blocked – Oracle will not roll back the entire transaction. Only one of the statements that contributed to the deadlock is rolled back. Session B still has the row in table B locked, and session A is patiently waiting for the row to become available. After receiving the deadlock message, session B must decide whether to commit the outstanding work on table B, roll it back, or continue down an alternate path and commit later. As soon as this session does commit or roll back, the other blocked session will continue on as if nothing ever happened.

Oracle considers deadlocks to be so rare, so unusual, that it creates a trace file on the server each and every time one does occur. The contents of the trace file will look something like this:

     *** 2001-02-23 14:03:35.041
*** SESSION ID:(8.82) 2001-02-23 14:03:35.001
DEADLOCK DETECTED
Current SQL statement for this session:
update a set x = x+1
The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL. The following...

Obviously, Oracle considers deadlocks a self-induced error on the part of the application and, for the most part, that is correct. Unlike in many other RDBMSs, deadlocks are so rare in Oracle that they can be considered almost non-existent. Typically, you must come up with artificial conditions to get one.

The number one cause of deadlocks in the Oracle database, in my experience, is un-indexed foreign keys. There are two cases where Oracle will place a full table lock on a child table after modification of the parent table:

      • If I update the parent table’s primary key (a very rare occurrence if you follow the rule of relational databases that primary keys should be immutable), the child table will be locked in the absence of an index.
      • If I delete a parent table row, the entire child table will be locked (in the absence of an index) as well.

So, as a demonstration of the first point, if I have a pair of tables set up as follows:

     tkyte@TKYTE816> create table p ( x int primary key );
     Table created.

     tkyte@TKYTE816> create table c ( y references p );
     Table created.

     tkyte@TKYTE816> insert into p values ( 1 );
     tkyte@TKYTE816> insert into p values ( 2 );

     tkyte@TKYTE816> commit;

And then I execute:

     tkyte@TKYTE816> update p set x = 3 where x = 1;
     1 row updated.

I will find that my session has locked the table C. No other session can delete, insert, or update any rows in C. Again, updating a primary key is a huge ‘no-no’ in a relational database, so this is generally not really an issue. Where I have seen this updating of the primary key become a serious issue is when you use tools that generate your SQL for you, and those tools update every single column – regardless of whether the end user actually modified that column or not. For example, if you use Oracle Forms and create a default layout on any table, Oracle Forms will by default generate an update that modifies every single column in the table you choose to display. If you build a default layout on the DEPT table and include all three fields, Oracle Forms will execute the following command whenever you modify any of the columns of the DEPT table:

     update dept set deptno=:1,dname=:2,loc=:3 where rowid=:4

In this case, if the EMP table has a foreign key to DEPT, and there is no index on the DEPTNO column in the EMP table, the entire EMP table will be locked after an update to DEPT. This is something to watch out for carefully if you are using any tools that generate SQL for you. Even though the value of the primary key does not change, the child table EMP will be locked after the execution of the above SQL statement. In the case of Oracle Forms, the solution is to set the block’s Update Changed Columns Only property to Yes. Oracle Forms will then generate an update statement that includes only the changed columns (not the primary key).

Problems arising from deletion of a row in a parent table are far more common. If I delete a row in table P, then the child table, C, will become locked – preventing other updates against C from taking place for the duration of my transaction (assuming no one else was modifying C, of course; in which case my delete will wait). This is where the blocking and deadlock issues come in. By locking the entire table C, I have seriously decreased the concurrency in my database – no one will be able to modify anything in C. In addition, I have increased the probability of a deadlock, since I now ‘own’ lots of data until I commit. The probability that some other session will become blocked on C is now much higher; any session that tries to modify C will get blocked. Therefore, I’ll start seeing lots of sessions that hold some pre-existing locks getting blocked in the database. If any of these blocked sessions are, in fact, holding a lock that my session needs, we will have a deadlock. The deadlock in this case is caused by my session obtaining many more locks than it ever needed. When someone complains of deadlocks in the database, I have them run a script that finds un-indexed foreign keys, and ninety-nine percent of the time we locate an offending table. By simply indexing that foreign key, the deadlocks, and lots of other contention issues, go away. Here is an example of how to automatically find these un-indexed foreign keys:

     tkyte@TKYTE816> column columns format a30 word_wrapped
     tkyte@TKYTE816> column tablename format a15 word_wrapped
     tkyte@TKYTE816> column constraint_name format a15 word_wrapped

     tkyte@TKYTE816> select table_name, constraint_name,
       2         cname1 || nvl2(cname2,','||cname2,null) ||
       3         nvl2(cname3,','||cname3,null) || nvl2(cname4,','||cname4,null) ||
       4         nvl2(cname5,','||cname5,null) || nvl2(cname6,','||cname6,null) ||
       5         nvl2(cname7,','||cname7,null) || nvl2(cname8,','||cname8,null)
       6             columns
       7    from ( select b.table_name,
       8                  b.constraint_name,
       9                  max(decode( position, 1, column_name, null )) cname1,
      10                  max(decode( position, 2, column_name, null )) cname2,
      11                  max(decode( position, 3, column_name, null )) cname3,
      12                  max(decode( position, 4, column_name, null )) cname4,
      13                  max(decode( position, 5, column_name, null )) cname5,
      14                  max(decode( position, 6, column_name, null )) cname6,
      15                  max(decode( position, 7, column_name, null )) cname7,
      16                  max(decode( position, 8, column_name, null )) cname8,
      17                  count(*) col_cnt
      18             from (select substr(table_name,1,30) table_name,
      19                          substr(constraint_name,1,30) constraint_name,
      20                          substr(column_name,1,30) column_name,
      21                          position
      22                     from user_cons_columns ) a,
      23                  user_constraints b
      24            where a.constraint_name = b.constraint_name
      25              and b.constraint_type = 'R'
      26            group by b.table_name, b.constraint_name
      27         ) cons
      28   where col_cnt > ALL
      29         ( select count(*)
      30             from user_ind_columns i
      31            where i.table_name = cons.table_name
      32              and i.column_name in (cname1, cname2, cname3, cname4,
      33                                    cname5, cname6, cname7, cname8 )
      34              and i.column_position <= cons.col_cnt
      35            group by i.index_name
      36         )
      37  /

     TABLE_NAME                     CONSTRAINT_NAME COLUMNS
     ------------------------------ --------------- ------------------------------
     C                              SYS_C004710     Y

This script works on foreign key constraints that have up to 8 columns in them (if you have more than that, you probably want to rethink your design). It starts by building an inline view, named CONS in the above query. This inline view transposes the appropriate column names in the constraint from rows into columns, the result being a row per constraint and up to 8 columns that have the names of the columns in the constraint. Additionally, there is a column, COL_CNT, which contains the number of columns in the foreign key constraint itself. For each row returned from the inline view, we execute a correlated subquery that checks all of the indexes on the table currently being processed. It counts the columns in each index that match columns in the foreign key constraint, and then groups them by index name. So, it generates a set of numbers, each of which is a count of matching columns in some index on that table. If the original COL_CNT is greater than all of these numbers, then there is no index on that table that supports that constraint. If COL_CNT is less than or equal to any of these numbers, then there is at least one index that supports the constraint. Note the use of the NVL2 function (new to Oracle 8.1.5), which we used to ‘glue’ the list of column names into a comma-separated list. This function takes three arguments: A, B, C. If argument A is not null then it returns argument B, else it returns argument C. This query assumes that the owner of the constraint is the owner of the table and index as well. If another user indexed the table, or the table is in another schema (both rare events), it will not work correctly.
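A quick illustration of NVL2 (mine, not from the text):

     select nvl2( 'x',  'B', 'C' ) from dual;   -- 'x' is not null, so this returns B
     select nvl2( null, 'B', 'C' ) from dual;   -- returns C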

So, this script shows us that table C has a foreign key on the column Y, but no index. By indexing Y, we can remove this locking issue altogether (the one-line fix is shown after the following list). In addition to this table lock, an un-indexed foreign key can also be problematic in the following cases:

      • When you have an ON DELETE CASCADE and have not indexed the child table. For example, EMP is a child of DEPT. DELETE FROM DEPT WHERE DEPTNO = 10 should CASCADE to EMP. If DEPTNO in EMP is not indexed, you will get a full table scan of EMP. This full scan is probably undesirable and, if you delete many rows from the parent table, the child table will be scanned once for each parent row deleted.
      • When you query from the parent to the child. Consider the EMP/DEPT example again. It is very common to query the EMP table in the context of a DEPTNO. If you frequently run the following query, say to generate a report, you’ll find that not having the index in place will slow down the queries:

             select * from dept, emp
              where emp.deptno = dept.deptno and dept.deptno = :X;
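Returning to table C, the fix for the original locking issue is a one-line index on the foreign key column (the index name is my own):

     tkyte@TKYTE816> create index c_y_idx on c(y);
     Index created.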

So, when do you not need to index a foreign key? The answer is, in general, when the following conditions are met:

      • You do not delete from the parent table.
      • You do not update the parent table’s unique/primary key value (watch for unintended updates to the primary key by tools!).
      • You do not join from the parent to the child (like DEPT to EMP).

If you satisfy all three above, feel free to skip the index – it is not needed. If you do any of the above, be aware of the consequences. This is the one very rare time when Oracle tends to ‘over lock’ data.

Lock Escalation

When lock escalation occurs, the system is decreasing the granularity of your locks. An example would be the database system turning your 100 row-level locks against a table into a single table-level lock. You are now using ‘one lock to lock everything’ and, typically, you are also locking a whole lot more data than you were before. Lock escalation is used frequently in databases that consider a lock to be a scarce resource, overhead to be avoided.

Oracle will never escalate a lock. Never.

Oracle never escalates locks, but it does practice lock conversion, or lock promotion – terms that are often confused with lock escalation.

The terms ‘lock conversion’ and ‘lock promotion’ are synonymous – Oracle typically refers to the process as conversion.

It will take a lock at the lowest level possible (the least restrictive lock possible) and will convert that lock to a more restrictive level. For example, if you select a row from a table with the FOR UPDATE clause, two locks will be created. One lock is placed on the row(s) you selected (and this will be an exclusive lock; no one else can lock that specific row in exclusive mode). The other lock, a ROW SHARE TABLE lock, is placed on the table itself. This will prevent other sessions from placing an exclusive lock on the table, and thus will prevent them from altering the structure of the table, for example. All other statements against the table are permitted. Another session can even come in and make the table read-only using LOCK TABLE X IN SHARE MODE, preventing modifications. In reality, however, this other session cannot be allowed to prevent the modification that is already taking place. So, as soon as the command to actually update that row is issued, Oracle will convert the ROW SHARE TABLE lock into the more restrictive ROW EXCLUSIVE TABLE lock, and the modification will proceed. This lock conversion happens transparently.
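You can actually observe this conversion. The following is a sketch of mine (not from the chapter) that inspects V$LOCK from a second session; in V$LOCK, the TM (DML) lock on the table shows LMODE = 2 (ROW SHARE) after the FOR UPDATE, and LMODE = 3 (ROW EXCLUSIVE) once the row is actually updated:

     -- session 1:
     select * from emp where empno = 7934 for update;

     -- session 2, binding session 1's SID for :sid
     select type, lmode
       from v$lock
      where sid = :sid
        and type in ( 'TM', 'TX' );    -- TM lock shows LMODE = 2 (row share)

     -- session 1 now actually modifies the row:
     update emp set sal = sal + 100 where empno = 7934;

     -- re-running the query in session 2 shows the TM lock at LMODE = 3
     -- (row exclusive); the conversion happened transparently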

Lock escalation is not a database ‘feature’. It is not a desired attribute. The fact that a database supports lock escalation implies there is some inherent overhead in its locking mechanism – that there is significant work performed to manage hundreds of locks. In Oracle, the overhead to have one lock or a million locks is the same: none.

--

Thomas Kyte is the Vice President of the Core Technologies Group at Oracle Corporation and has been with them since version 7.0.9 was released in 1993. Kyte, however, has been working with Oracle since version 5.1.5c. At Oracle Corp., Kyte works with the Oracle database, and more specifically, he helps clients who are using the Oracle database and works directly with them specifying and building their systems or rebuilding and tuning them. Prior to working at Oracle, Kyte was a systems integrator who built large scale, heterogeneous databases and applications for military and government clients. Tom Kyte is the same “Ask Tom” whose column appears in Oracle Magazine, where he answers questions about the Oracle database and tools that developers and DBAs struggle with every day.

