An Introduction to Triggers in DB2 for OS/390

by Craig S. Mullins

IBM added support for triggers to DB2 for OS/390 in Version 6. Slowly but surely, more organizations are looking at triggers and analyzing their functionality for benefits that may be gained by using them.

Trigger Basics

A trigger is a piece of code that is executed in response to a data modification statement; that is, an INSERT, UPDATE, or DELETE. To be a bit more precise: triggers are event-driven, specialized procedures that are stored in, and managed by, the DBMS. Each trigger is attached to a single, specified table. Triggers can be thought of as an advanced form of "rule" or "constraint" written using an extended form of SQL. A trigger cannot be directly called or executed; it is automatically executed (or "fired") by the DBMS as the result of a data modification to the associated table.

Once a trigger is created it is always executed when its "firing" event occurs (except for LOAD utility processing). Therefore, triggers are automatic, implicit, and non-bypassable.

Triggers Versus Stored Procedures

At a high level, triggers can be viewed as being similar to stored procedures. Both consist of procedural logic that is stored at the database level. However, stored procedures are not event-driven and are not attached to a specific table. A stored procedure is explicitly executed by invoking a CALL to the procedure (instead of being implicitly executed like triggers). Additionally, a stored procedure can access many tables without being specifically associated to any of them. DB2 has supported stored procedures since Version 4.

Trigger Usage

Triggers can be useful for implementing code that must be executed on a regular basis due to a pre-defined event. By utilizing triggers, scheduling and data integrity problems can be eliminated, because the trigger will be fired whenever the triggering event occurs. You need not remember to schedule or code an activity to perform the logic in the trigger; the code runs automatically by virtue of being in the trigger. This is true for static and dynamic SQL alike, whether planned or ad hoc.

Triggers can be implemented for many practical purposes. Quite often it is impossible to code business rules into the database using only DDL. For example, DB2 does not support complex constraints (only value-based CHECK constraints). Neither does DB2 support certain types of referential constraints (such as pendant DELETE processing or ON UPDATE CASCADE). Using triggers, a very flexible environment is established for implementing business rules and constraints in the DBMS. This is important because having the business rules in the database ensures that everyone uses the same logic to accomplish the same process.

Triggers can be coded to access and/or modify other tables, print informational messages, and specify complex restrictions. For example, consider the standard suppliers and parts application used in most introductory database texts. A part can be supplied by many suppliers and a supplier can supply many parts. Triggers can be used to support the following scenarios:

      • What if a business rule exists specifying that no more than three suppliers are permitted to supply any single part? A trigger can be coded to reject any insert that would violate this requirement.
      • A trigger can be created to allow only orders for parts that are already in stock. Or, maybe for parts that are already in stock or are on order and planned for availability within the next week.
      • Triggers can be used to perform calculations such as ensuring that the order amount for the parts is calculated appropriately given the suppliers chosen to provide the parts. This is especially useful if the order purchase amount is stored in the database as redundant data.
      • To curb costs, a business decision may be made that the low cost supplier will always be used. A trigger can be implemented to disallow any order that is not the current "low cost" order.
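To illustrate the first scenario, the sketch below assumes a hypothetical SHIPMENTS table with a PARTNO column recording which supplier supplies which part; the table, column, and trigger names are illustrative only, and the sketch assumes the WHEN clause may contain a subquery. A "before" trigger raises an error when a fourth supplier would be added for a part:

   CREATE TRIGGER SUPP_LIMIT
     NO CASCADE BEFORE INSERT ON SHIPMENTS
     REFERENCING NEW AS N
     FOR EACH ROW MODE DB2SQL
     WHEN (3 <= (SELECT COUNT(*)
                 FROM SHIPMENTS
                 WHERE PARTNO = N.PARTNO))
       SIGNAL SQLSTATE '75002' ('Part already has three suppliers');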

The number of business rules that can be implemented using triggers is truly limited only by your imagination (or, more appropriately, your business needs).

Additionally, triggers can access non-DB2 resources. This can be accomplished by invoking a stored procedure or a user-defined function that takes advantage of the OS/390 resource recovery services (RRS). Data stored in the non-DB2 resource can be accessed or modified in the stored procedure or user-defined function that is called.

A Sample Trigger

To create a trigger you will need to specify the following:

      • Trigger Name - each trigger is named
      • Triggering Table - each trigger is defined on a single table
      • Activation (BEFORE / AFTER) - each trigger must specify whether it is to run before or after the firing activity occurs
      • Triggering Event (INSERT, UPDATE, DELETE) - each trigger must define the firing activity that will cause it to run
      • Granularity (row / statement) - each trigger specifies if it is to be run for every row impacted by the firing activity, or just once because the firing activity occurred
      • Transition Variables (old / new) - triggers can use transition variables to view the data both before and after the firing activity occurred
      • Triggered Action - finally, each trigger has code that defines what that trigger will do

Usually it is easiest to learn by example. The sample trigger below is an update trigger, coded on the EMP table. This trigger implements a simple check to ensure that raises do not exceed 50%. When the new salary is more than 1.5 times the prior salary, an error is raised.


   CREATE TRIGGER SALARY_UPDATE
     NO CASCADE BEFORE UPDATE OF SALARY ON EMP
     REFERENCING NEW AS NEW OLD AS OLD
     FOR EACH ROW MODE DB2SQL
     WHEN (NEW.SALARY > (OLD.SALARY * 1.5))
       SIGNAL SQLSTATE '75001' ('Raise exceeds 50%');

The trigger executes once for each row. So if multiple rows are modified by a single update, the trigger will run multiple times, once for each row modified. Also, the trigger runs BEFORE the actual modification occurs. Finally, take special notice of how NEW and OLD are used to check values before and after the update.

Firing Triggers

Two options are available to indicate when the trigger fires:

      1. before the firing activity occurs or
      2. after the firing activity occurs.

DB2 supports both "before" and "after" triggers. Appropriately enough, a "before" trigger executes before the firing activity occurs; an "after" trigger executes after the firing activity occurs. In DB2 V6, "before" triggers are restricted in that they cannot perform updates.

It is important to know how the triggers in your database operate. Without this knowledge, properly functioning triggers cannot be coded, supported, or maintained effectively. The valid SQL statements that can be used in the trigger body depend on the trigger activation time:

      • BEFORE triggers cannot issue an INSERT, UPDATE, or DELETE statement
      • AFTER triggers cannot set transition variables
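An "after" trigger is therefore the appropriate activation time for audit logic, because it is permitted to issue data modification statements. The sketch below assumes a hypothetical EMP_AUDIT table with the columns shown; the table, column, and trigger names are illustrative, not from the text:

   CREATE TRIGGER SAL_AUDIT
     AFTER UPDATE OF SALARY ON EMP
     REFERENCING OLD AS OLD NEW AS NEW
     FOR EACH ROW MODE DB2SQL
     BEGIN ATOMIC
       INSERT INTO EMP_AUDIT
         (EMPNO, OLD_SALARY, NEW_SALARY, CHANGED_AT)
       VALUES (OLD.EMPNO, OLD.SALARY, NEW.SALARY, CURRENT TIMESTAMP);
     END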

The rules for trigger behavior in DB2 are outlined in Table 1.

   SQL Statement             Valid Before   Valid After
   ------------------------  -------------  ------------
   CALL                      Yes            Yes
   SET transition-variable   Yes            No
   INSERT                    No             Yes
   UPDATE                    No             Yes
   DELETE                    No             Yes

Table 1: Trigger activation time and SQL statement validity

Another interesting feature of DB2 triggers is the order in which they are fired. If multiple triggers are coded on the same table, which trigger is fired first? It can make a difference as to how the triggers should be coded, tested, and maintained. The rule for order of execution is simple to understand, but can be difficult to maintain: triggers of the same type are executed in the order in which they were created. For example, if two "delete" triggers are coded on the same table, the one that was physically created first is executed first. Keep this in mind as you make changes to your database. If you need to drop the table and re-create it to implement a schema change, make sure you create the triggers in the desired (same) order to keep the functionality the same.

As can readily be seen, determining the procedural activity that is required when triggers are present can be a complicated task. It is of paramount importance that all developers are schooled in the firing methods utilized for triggers in DB2 for OS/390.

Triggers Can Fire Other Triggers

As we've already learned, a trigger is fired by an insert, update, or delete. However, a trigger can also contain insert, update, and delete logic within itself. Therefore, a trigger is fired by a data modification, but can also cause another data modification, thereby firing yet another trigger. When a trigger contains insert, update, and/or delete logic, the trigger is said to be a nested trigger.

Most DBMSes, however, place a limit on the number of nested triggers that can be executed within a single firing event. If this were not done, it would be quite possible to have triggers firing triggers ad infinitum until all of the data was removed from an entire database!

If referential integrity is combined with triggers, additional cascading updates and/or deletes can occur. If a delete or update results in a series of additional updates or deletes that need to be propagated to other tables then the update or delete triggers for the second table also will be activated.

This combination of multiple triggers and referential integrity constraints is capable of setting a cascading effect into motion, which can result in multiple data changes. DB2 limits this cascading effect to 16 levels in order to prevent endless looping. If more than 16 levels of nesting occur, the transaction is aborted.

The ability to nest triggers provides an efficient method for implementing automatic data integrity. Because triggers generally cannot be bypassed, they provide an elegant solution to the enforced application of business rules. Use caution, however, to ensure that the maximum trigger nesting level is not reached. Failure to heed this advice can create an environment where certain types of updates cannot occur!

Trigger Packages

When a trigger is created, DB2 creates a trigger package for the statements in the triggered action. The trigger package is recorded in SYSIBM.SYSPACKAGE and has the same name as the trigger. The trigger package cannot be executed directly; it runs only when the trigger is activated by a triggering operation.

By default, when a trigger is created, the trigger package is bound by DB2 with EXPLAIN set to NO. So you will not have the access paths for the SQL statements in the trigger available to you. However, you can immediately rebind the trigger package specifying EXPLAIN YES to obtain access paths.
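For example, a rebind along the following lines makes access path information available for the trigger package; the collection ID and trigger name shown are placeholders to be replaced with your own:

   REBIND TRIGGER PACKAGE(COLLID1.SALARY_UPDATE) EXPLAIN(YES)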

To delete the trigger package, you must use the DROP TRIGGER statement.

Trigger Limitations

There are limits on what triggers can accomplish. You cannot define DB2 triggers on:

      • A system catalog table
      • PLAN_TABLE
      • A view
      • An alias
      • A synonym
      • Any table with a three-part name

Using Triggers to Implement Referential Integrity

One of the primary uses for triggers is to support referential integrity (RI). Although DB2 supports a very robust form of declarative RI, no current DBMS fully supports all possible referential constraints. This is true of DB2, as well. Refer to Table 2 below for a listing of the RI possibilities. DB2 for OS/390 does not provide declarative support for UPDATE CASCADE or PENDANT DELETE referential constraints.

Triggers can be coded, in lieu of declarative RI, to support all of the RI rules in Table 2. Of course, when you use triggers, it necessitates writing procedural code for each rule for each constraint, whereas declarative RI constraints are coded in the DDL that is used to create relational tables. Therefore, triggers should only be considered for RI for those constraints not supported by DB2's built-in declarative RI support.
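As one sketch of this approach, the PENDANT DELETE rule (which DB2 does not support declaratively) might be coded as an "after" delete trigger on a hypothetical SHIPMENTS dependent table, removing the PARTS parent row once its last dependent row is gone; all object names here are illustrative only:

   CREATE TRIGGER PENDANT_DEL
     AFTER DELETE ON SHIPMENTS
     REFERENCING OLD AS OLD
     FOR EACH ROW MODE DB2SQL
     BEGIN ATOMIC
       DELETE FROM PARTS
       WHERE PARTNO = OLD.PARTNO
         AND NOT EXISTS (SELECT *
                         FROM SHIPMENTS
                         WHERE PARTNO = OLD.PARTNO);
     END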

      • DELETE RESTRICT: If any rows exist in the dependent table, the primary key row in the parent table cannot be deleted.
      • DELETE CASCADE: If any rows exist in the dependent table, the primary key row in the parent table is deleted, and all dependent rows are also deleted.
      • DELETE NEUTRALIZE: If any rows exist in the dependent table, the primary key row in the parent table is deleted, and the foreign key column(s) for all dependent rows are set to NULL.
      • UPDATE RESTRICT: If any rows exist in the dependent table, the primary key column(s) in the parent table cannot be updated.
      • UPDATE CASCADE: If any rows exist in the dependent table, the primary key column(s) in the parent table are updated, and all foreign key values in the dependent rows are updated to the same value.
      • UPDATE NEUTRALIZE: If any rows exist in the dependent table, the primary key column(s) in the parent table are updated, and all foreign key values in the dependent rows are set to NULL.
      • INSERT RESTRICT: A foreign key value cannot be inserted into the dependent table unless a primary key value already exists in the parent table.
      • FK UPDATE RESTRICTION: A foreign key cannot be updated to a value that does not already exist as a primary key value in the parent table.
      • PENDANT DELETE: When the last foreign key value in the dependent table is deleted, the primary key row in the parent table is also deleted.

Table 2: Referential integrity rules

In order to use triggers to support RI rules, it is sometimes necessary to know the values impacted by the action that fired the trigger. For example, consider the case where a trigger is fired because a row was deleted. The row, and all of its values, has already been deleted, because an "after" trigger executes after its firing action occurs. Two specialized aliases available only inside triggers (NEW and OLD) allow the trigger to ascertain whether referentially connected rows exist with those values.

Each trigger can have one NEW view of the table and one OLD view of the table available. Once again, these "views" are accessible only from triggers. They provide access to the modified data by viewing information in the transaction log. The transaction log is a record of all data modification activity, automatically maintained by the DBMS.

When an INSERT occurs, the NEW table contains the rows that were just inserted into the table to which the trigger is attached. When a DELETE occurs, the OLD table contains the rows that were just deleted from the table to which the trigger is attached. An UPDATE statement logically functions as a DELETE followed by an INSERT. Therefore, after an UPDATE, the NEW table contains the new values for the rows that were just updated in the table to which the trigger is attached; the OLD table contains the old values for the updated rows.

Therefore, the trigger can use these specialized NEW and OLD table views to query the affected data. Remember, too, that SQL data modification can occur a set-at-a-time. One DELETE or UPDATE statement can impact multiple rows. This must be taken into account when coding the actual trigger logic.
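When set-level processing matters, the OLD data can also be referenced as a whole via a transition table in a statement-level trigger. The sketch below archives every deleted row in one pass; it assumes a hypothetical PART_ARCHIVE table with the same columns as PARTS, and all names are illustrative:

   CREATE TRIGGER PART_ARCH
     AFTER DELETE ON PARTS
     REFERENCING OLD_TABLE AS DELETED_ROWS
     FOR EACH STATEMENT MODE DB2SQL
     BEGIN ATOMIC
       INSERT INTO PART_ARCHIVE
         SELECT * FROM DELETED_ROWS;
     END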

Additionally, the alias names OLD and NEW can be changed if so desired (for example, to INSERTED and DELETED, the names used by SQL Server).

Trigger Granularity

Because SQL is a set-level language, any single SQL statement can impact multiple rows of data. For example, one DELETE statement can actually cause zero, one, or many rows to be removed. Triggers need to take this into account.

Therefore, there are two levels of granularity that a trigger can have: statement level or row level. A statement level trigger is executed once upon firing, regardless of the actual number of rows inserted, deleted, or updated. A row level trigger, once fired, is executed once for each and every row that is inserted, deleted, or updated.

Different business requirements will drive what type of trigger granularity should be chosen.
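For example, a statement-level trigger suits summary maintenance that should happen once per modification, regardless of how many rows were touched; the ORDERS and ORDER_STATS tables in this sketch are hypothetical:

   CREATE TRIGGER ORD_STAT
     AFTER INSERT ON ORDERS
     FOR EACH STATEMENT MODE DB2SQL
     BEGIN ATOMIC
       UPDATE ORDER_STATS
       SET LAST_LOAD = CURRENT TIMESTAMP;
     END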


Triggers provide many potential benefits for enhancing DB2 databases. With triggers your DB2 tables can become active - meaning that DB2 will take action implicitly as the data in the database is modified. Triggers are a very powerful way to enhance the data integrity of your DB2 databases.


Craig Mullins is an independent consultant and president of Mullins Consulting, Inc. Craig has extensive experience in the field of database management having worked as an application developer, a DBA, and an instructor with multiple database management systems including DB2, Sybase, and SQL Server. Craig is also the author of the DB2 Developer’s Guide, the industry-leading book on DB2 for z/OS, and Database Administration: Practices and Procedures, the industry’s only book on heterogeneous DBA procedures. You can contact Craig via his web site at

Last modified 2006-01-16 04:21 AM