Use (and Abuse!) of DB2 for z/OS Stored Procedures - Part 2

by Dan Luksetich and Susan Lawson

Part 1  |  Part 2

Introduction

In part one of this article, we discussed how stored procedures can be a performance benefit for distributed applications, and even local applications. In this part, we cover how they can become a performance problem.

In every good implementation there are trade-offs. Most of the trade-offs involve sacrificing performance for things like flexibility, reusability, security, and time to delivery. It is possible to minimize the impact of distributed application performance with the proper use of stored procedures. However, when stored procedures are used in the wrong place the overhead can become unbearable.

Most Common Abuses of Stored Procedures

In many organizations, we see an overuse of stored procedures often leading to performance problems, especially in high volume situations. In this part, we will look at the most common abuses of stored procedures and briefly discuss what other alternatives may exist. Some of the common abuses include:

      • Use of stored procedures as I/O modules
      • Use of stored procedures for security or predictable access paths
      • Blind conversion of stored procedures from another DBMS/platform
      • Overuse of nesting
      • Use of DB2 managed SPAS vs. WLM
      • Converting to stored procedures just in case

Using Stored Procedures as I/O Modules

Too often, stored procedures are written for developer convenience rather than for efficiency. If a stored procedure simply encapsulates a single (or even a few) simple SQL statements, then the overhead of the stored procedure and its execution environment will work against any performance efforts for the application.
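A minimal sketch of such an "I/O module" procedure is shown below; the table, column, and procedure names are invented for illustration. The client could issue this SELECT directly from a static package and avoid the procedure-scheduling cost entirely.

```sql
-- Hypothetical "I/O module" stored procedure: it wraps exactly one
-- SELECT, yet every call pays the cost of scheduling the procedure
-- in its address space and passing parameters in and out.
CREATE PROCEDURE GET_CUST_NAME
  (IN  P_CUST_ID   INTEGER,
   OUT P_CUST_NAME VARCHAR(40))
  LANGUAGE SQL
BEGIN
  SELECT CUST_NAME
    INTO P_CUST_NAME
    FROM CUSTOMER
   WHERE CUST_ID = P_CUST_ID;
END
```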

Using Stored Procedures for Security or Predictable Access Paths

Stored procedures are one way of obtaining static access to data, which often provides more security than dynamic SQL; and, with stored procedures, the access paths are known ahead of time. These two benefits are often cited as reasons to convert to stored procedures instead of using dynamic SQL. However, there may be other ways to achieve them. Static SQL Profiling is a way to capture dynamic SQL and bind it into a static package, addressing both concerns. Incoming statements are then matched against the existing packages, and if there is a match, the existing package/SQL is used. It does work, but it is limited. To see if this will work for you, and for more information, refer to the IDUG Solutions Journal article written by Dan and Susan in Fall 2004.

Also, keep in mind that simply putting the SQL in a stored procedure does not necessarily guarantee security or predictable access paths at run time.

Blind Conversion of Stored Procedures from another DBMS/Platform

DB2 on z/OS does not work the same as other DBMSs. Stored procedures are often implemented differently (i.e., as I/O modules in a very object-oriented environment) on smaller platforms, and a conversion of an application from another DBMS does not mean a one-to-one conversion of its stored procedures is always correct. As discussed before, using stored procedures as I/O modules is just asking for application overhead. We have seen a system with 80 tables in which each table had four stored procedures (one each for select, insert, update, and delete), for a total of 320 stored procedures to manage just to execute single SQL DML statements. The unnecessary overhead in a DB2 z/OS environment would be tremendous. This is one of the reasons conversions from smaller platforms to DB2 are often seen as bad performers.
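The per-table pattern described above might look like the following hypothetical sketch (table and procedure names invented); multiplied across 80 tables, it produces hundreds of trivial procedures, each adding scheduling overhead to a single DML statement.

```sql
-- Hypothetical per-table "CRUD" procedures carried over from another
-- platform. Each wraps a single DML statement, so every call pays
-- stored-procedure scheduling cost for no added logic.
CREATE PROCEDURE CUSTOMER_SELECT
  (IN P_ID INTEGER, OUT P_NAME VARCHAR(40))
  LANGUAGE SQL
BEGIN
  SELECT NAME INTO P_NAME FROM CUSTOMER WHERE ID = P_ID;
END

CREATE PROCEDURE CUSTOMER_INSERT
  (IN P_ID INTEGER, IN P_NAME VARCHAR(40))
  LANGUAGE SQL
BEGIN
  INSERT INTO CUSTOMER (ID, NAME) VALUES (P_ID, P_NAME);
END
-- ...plus CUSTOMER_UPDATE and CUSTOMER_DELETE, repeated for every table.
```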

Overuse of Nesting

Nesting of stored procedures can become difficult to manage, not to mention hard to troubleshoot when problems occur. As stored procedures become nested, performance can also begin to degrade, depending on the processing occurring at each level and the information that must be passed from one level to another.
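A hedged sketch of such a nesting chain follows (procedure and table names are invented). Each level adds another scheduled procedure invocation plus parameter passing, and a failure three levels down is harder to trace back to the originating call.

```sql
-- Hypothetical three-level nesting chain: LEVEL1 calls LEVEL2,
-- which calls LEVEL3. Each CALL adds scheduling and parameter-
-- passing overhead on top of the actual work done at the bottom.
CREATE PROCEDURE LEVEL3 (IN P_ID INTEGER)
  LANGUAGE SQL
BEGIN
  UPDATE ACCOUNT
     SET LAST_SEEN = CURRENT TIMESTAMP
   WHERE ID = P_ID;
END

CREATE PROCEDURE LEVEL2 (IN P_ID INTEGER)
  LANGUAGE SQL
BEGIN
  CALL LEVEL3(P_ID);   -- nesting level 3
END

CREATE PROCEDURE LEVEL1 (IN P_ID INTEGER)
  LANGUAGE SQL
BEGIN
  CALL LEVEL2(P_ID);   -- nesting level 2
END
```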

Use of DB2-Managed (Stored Procedure Address Spaces) SPAS vs. WLM

WLM-managed address spaces provide many advantages over DB2-managed Stored Procedure Address Spaces (SPAS) including the following:

    • Multiple isolated environments for stored procedures — This offers a great benefit because failures in one environment need not affect other stored procedures, as they can in a DB2-managed SPAS.
    • Reduced demand for storage below the 16MB line — This removes the limitation on the number of stored procedures that can run concurrently within an address space. With WLM, the stored procedures also inherit the dispatching priority of the DB2 thread that issues the CALL statement. This allows high-priority work to have its stored procedures execute ahead of lower-priority work and their stored procedures. In a DB2-established address space, prioritization of stored procedures is not possible and you are limited by storage in the address space. There is also no separation of the work by dispatching priorities, so high-priority work could potentially be penalized.
    • Static priority assignment and dynamic workload balancing — High-priority stored procedures in WLM achieve very consistent response times, and WLM provides dynamic workload balancing and distribution. WLM routes incoming requests to the stored procedure address space that is the least busy, or starts new address spaces if required. Once the environment is established, these actions are fully automatic, requiring no monitoring, tuning, or operator intervention.
    • Ability to stop runaway stored procedures — If a procedure is looping outside of DB2's control, we have different options for regaining control. If DB2-managed SPAS are used, then we have to cancel the address space; this, of course, is not a good option, as it affects everything running in that address space. In a WLM-managed application environment in goal mode, we have the option to refresh the environment, which will quiesce all address spaces running under that environment, start a new address space, and route all new requests to the new address space. Once we are assured that all normal work in the address space containing the runaway stored procedure has finished, we can cancel that address space. This completely isolates all other stored procedures from the failure. With compatibility mode, we can only cancel and restart the address space.

We also have the ability to cancel runaway or looping stored procedures with ASUTIME. WLM allows us to establish priorities and service goals. Using ASUTIME on the stored procedure itself (to cancel looping or runaway stored procedures based on CPU time used) in conjunction with WLM provides an additional mechanism for tight control of system resource usage. By using both ASUTIME and WLM priorities and goals, we can have total control of the stored procedures.
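The combination described above can be sketched in the procedure definition itself. This is a hedged example: the procedure, load module, and environment names are invented, and the ASUTIME limit (expressed in CPU service units) is an arbitrary illustrative value.

```sql
-- Hypothetical external COBOL procedure assigned to a WLM
-- application environment, with a CPU ceiling via ASUTIME.
-- If the procedure exceeds the service-unit limit, DB2 cancels it,
-- so a loop cannot consume the system.
CREATE PROCEDURE ORDER_SUMMARY
  (IN  P_ORDER_ID INTEGER,
   OUT P_TOTAL    DECIMAL(11,2))
  EXTERNAL NAME 'ORDSUM'
  LANGUAGE COBOL
  PARAMETER STYLE GENERAL
  WLM ENVIRONMENT WLMENV1   -- isolated, WLM-managed address space
  ASUTIME LIMIT 5000;       -- cancel after 5000 CPU service units
```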

Note that DB2 Version 8 and beyond will require all stored procedures to be created with a WLM application environment.

Converting to Stored Procedures “Just in Case”

Another case of overuse (and eventual frustration) with stored procedures arises from directives to convert legacy application code (i.e., COBOL) to stored procedures to allow for flexibility in case a new Java client application may want to use it. It is true that today we do call COBOL stored procedures containing legacy code from Java clients for performance reasons (most having to do with Java on z/OS). However, do this only where it is necessary and the code is actually being used and called from a remote client. We saw a situation where all legacy code was encapsulated into stored procedures and the two methods were benchmarked against each other. In some cases, the overhead was around 60 percent. A simple insert statement in COBOL may take around 15K instructions, where the same statement may take around 60K instructions in a stored procedure, not to mention the environment management. So here we have no gain, just overhead, when done without a defined purpose.

Summary

As you see, there are many common misuses of stored procedures that can often lead to obvious and not-so-obvious problems. In part 1, you learned the benefits of using stored procedures, which included lessening network traffic, improving security, encapsulating code, and providing access to non-relational data.

In summary, the proper implementation of stored procedures, which involves encapsulating logic as well as SQL statements within the procedure, properly setting up the environment, and exploring the appropriate alternatives if necessary, will ensure that the stored procedures not only deliver the desired enhancement, but also the performance needed by today’s applications.

--

Susan Lawson and Dan Luksetich are DB2 Performance Specialists for YL&A. They both have several years of experience working with a variety of DB2 clients and have been designing and tuning some of the world’s largest and most complex DB2 databases.


Contributors : Dan Luksetich, Susan Lawson
Last modified 2006-01-04 03:36 PM
