DB2 Subsystem Tuning in an ERP Environment - Part 2

by Lockwood Lyon

Part 1  |  Part 2  |  Part 3

In Part 1, we reviewed the most important issues facing the system programmer and DBA when planning for ERP package implementation.

In this part, we will address DB2 subsystem configuration issues. These include sizing the catalog and directory, virtual storage management, log management, and recommended ZParm settings. Part 3 will review monitoring and tuning options.

DB2 Subsystem Configuration

The most important issues facing the systems programmer when configuring a DB2 subsystem to successfully support an ERP package installation are:

      • Catalog / Directory Management
      • Log Management
      • DBM1 Address Space Management
      • Dynamic SQL Caching
      • Virtual Pool Selection and Sizing

Catalog and Directory Management

The issues faced when preparing the Catalog and Directory for an ERP installation fall into several categories. These are:

      • General sizing issues (especially SECQTY, FREEPAGE, and PCTFREE)
      • Reorgs
      • SYSDBASE Considerations
      • Growth of SYSLGRNX and its Indexes
      • Growth of SYSCOPY
      • Growth of SYSHIST
      • Sizing DBD01 (especially its Indexes)

Re-size and Reorg the Catalog, Directory, and associated indexes prior to ERP package installation. Plan for multiple instances of the ERP package in the subsystem, and size accordingly. Why now? This is not to prepare the Catalog for the flood of new objects; rather, it is to give you some much-needed practice. Doing a successful backup and Reorg of the Catalog and Directory provides the chance to test your JCL, to gather timing information, and to practice backout procedures.

You can use the Copy function of ADRDSSU to move datasets and eliminate logical extents. This may be done independently of Reorg. Then, issue ALTER TABLESPACE commands to set the SECQTY, FREEPAGE, and PCTFREE parameters of all of the Catalog and Directory tablespaces and indexes appropriately.

I recommend settings of FREEPAGE 0 and PCTFREE 10 for most objects. You may wish to make these somewhat larger if you expect to install additional instances of your ERP package in the future. SYSCOPY is a possible exception. I recommend considering FREEPAGE 15 or less, depending on your growth projections for this object.

Don't forget any user-defined indexes that you have created on Catalog tables.

As for SECQTY, too often I see shops under-allocating this parameter. I recommend setting SECQTY to approximately one quarter (25%) of the PRIQTY. Ensure that your tablespace sizing is adequate for Reorg to keep your pagesets in a single extent.
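As a concrete sketch, an ALTER TABLESPACE statement applying these recommendations to one catalog tablespace might look like the following. The quantities are illustrative only; substitute values from your own sizing exercise, and repeat for each Catalog and Directory tablespace and index.

```sql
-- Illustrative values: SECQTY is roughly 25% of the primary
-- allocation; FREEPAGE 0 / PCTFREE 10 per the recommendations above.
ALTER TABLESPACE DSNDB06.SYSDBASE
      SECQTY 1800
      FREEPAGE 0
      PCTFREE 10;
```

The new space attributes take effect at the next dataset allocation, so follow the ALTERs with a Reorg to apply them.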

Based on procedures documented in the Utility Guide, set up the required Reorg JCL for the Catalog and Directory and test them in your development environment. Don't forget the backups you should take before any such changes! Also construct (and test) any backout or restore procedures, as well as DB2 restart procedures. Take timings of your Reorgs for later analysis, so that you can better plan any future outages due to additional Reorgs.

It will be important to Reorg the Catalog and Directory after installation of the ERP package. By that time you should have used and tested all of your processes and JCL, including backout procedures.

While we tend to think of the Catalog as the place where “DB2 looks for everything”, in fact the Directory may be more heavily accessed; hence, the Directory and its indexes need to be Reorged as well.

Your biggest concern is the SYSDBASE tablespace (and the corresponding Directory object, DBD01). This contains most of the system object tables (SYSTABLES, SYSCOLUMNS, SYSINDEXES, etc.); hence, this is the one that will expand the most and is probably your biggest. Prepare to Reorg this (and DBD01 as well) soon after your ERP install(s).

Ensure that your REORG job is restartable! Don’t use temporary datasets for the intermediate datasets.

Again, see the Utility Guide for specifics. For example, when Reorging DSNDB06.SYSDBASE the following options are ignored: WORKDDN, SORTDATA, SORTDEVT, SORTNUM, SORTKEYS, COPYDDN, RECOVERYDDN. Also (this may be obvious) the Reorg of SYSDBASE must be done with SHRLVL NONE.
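For instance, a minimal control statement for the SYSDBASE Reorg might look like this. It is a sketch, not a tested job; see the Utility Guide for the full syntax and the required DD statements.

```sql
-- SYSDBASE must be reorganized with SHRLVL NONE; options such as
-- SORTDATA and COPYDDN are ignored for this tablespace.
REORG TABLESPACE DSNDB06.SYSDBASE
      SHRLVL NONE
```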

With more open pagesets, you should expect SYSLGRNX and its indexes to increase in size. Depending upon your Copy philosophy, many additional tablespaces will need Image Copying, which will drive a need for a bigger SYSCOPY.

As you make changes to objects the Catalog History tables (those with a "HIST" suffix) will fill quickly, so these tablespaces will need to be re-sized as well.

Log Management

ERP package installation involves creating a lot (!) of DB2 objects. This means lots of logging during installation. First, there is the logging of the DBD changes caused by the DDL CREATE statements. Next, there is any initial table population. Finally, there is any logging due to Load or Reorg with LOG YES.

I recommend sizing your Logs appropriately for “normal” operations. In other words, size according to your standards with respect to log archive frequency, archive file size (will it fit on a tape cartridge?), and so forth. Then, during ERP package installation, add additional active log datasets to the BSDS. These will take up the slack of the major changes you’re doing, and can be removed afterwards.

One other note: Depending on your DASD configuration, you may need to have your storage management group add VTOC space to some volumes. This is to handle the tens of thousands of VSAM datasets (typically index pagesets) you will create during the install. (Even with ESS/Shark or Ramac, the operating system still sees and uses the VTOC dataset and its index.)

Log Checkpointing

As the LOGLOAD (or CHKFREQ) ZParm controls the frequency of system checkpoints based on the number of log records, I recommend you increase this during the Initial Object Creation step. You may want to consider adding additional / larger Log files (and Archives!) as well. Afterwards, set the parameter for normal operations.
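On DB2 levels that support the -SET LOG command (introduced in Version 7), the checkpoint interval can be raised and restored dynamically, without recycling the subsystem. A sketch, with illustrative values:

```sql
-- Raise the checkpoint interval during mass object creation
-SET LOG LOGLOAD(500000)

-- Return to your normal setting afterwards
-SET LOG LOGLOAD(50000)
```

On earlier levels, the same change requires updating the ZParm and recycling DB2, so plan the installation window accordingly.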

There are three “normal” kinds of system checkpoints: The normal subsystem checkpoint (controlled by LOGLOAD), the shutdown checkpoint, and the end restart checkpoint.

During the subsystem checkpoint, DB2 basically:

      • Records on the Log the status of each Unit of Recovery (UR)
      • Writes pageset checkpoint records for each open pageset (up to DSMAX)
      • Writes DBETable checkpoint log records
      • Schedules modified DB pages for I/O
      • Updates the BSDSs with restart information

Because of this, in an ERP environment you will now have many more open URs, more open pagesets, and more Databases (more DBET entries); hence, the checkpoint process will take longer. I recommend that you attempt to avoid the combination of frequent checkpoints (low LOGLOAD / CHKFREQ) and long checkpoint times by either: (1) raising LOGLOAD / CHKFREQ, or (2) lowering the number of open pagesets (i.e., collapse multiple tables into a tablespace).

DBM1 Address Space Management

Of all of the DB2 address spaces, the DBM1 address space ends up being the biggest user of virtual storage. To ensure that this storage is used wisely I recommend starting with a simple spreadsheet listing the common control blocks and their sizes. (I’m grateful to J. J. Campbell for providing the basis for this idea). Here is a list of some common items:

      • DB2 executable code
      • The Virtual Pools (Buffer Pools)
      • Local Dynamic Statement Cache
      • User Thread control blocks
      • The EDM Pool
      • The RID Pool
      • The RDS OP Pool (Workfiles)
      • Tablespace compression dictionaries
      • VSAM DSCBs for open datasets
      • The Storage Cushion

Some of these items are fixed. For example, the DB2 executable code takes up about 4.3 MB. Others such as the EDM Pool and RID Pool are set via ZParm. Still others depend on ZParms and other external data. Here is a short list of them.

Local Dynamic Statement Cache

        MAXKEEPD x (1.8 KB + n x 0.2 KB) where “n” is # additional tables 

User Threads

        (CTHREAD + MAXDBAT) x 40 KB

Compression Dictionaries

        Up to 64 KB each

DSCBs for Open Datasets

        1.8 KB per dataset 

I regret that I have been unable to find a reference for calculation of the Storage Cushion. This area of memory, introduced in DB2 Version 5 via APAR PQ14132, allocates an area of DBM1 virtual memory that is reserved for 'must complete' processes. This helps to ensure that code errors, orphaned control blocks, or memory leaks can't cause your DB2 to abend with an out-of-storage condition.

The Storage Cushion is calculated based on the ZParms DSMAX, CTHREAD, and MAXDBAT. I recommend that these parameters not be over-allocated, as this will make the Storage Cushion larger and subtract from the available storage for the other control blocks.

To confirm the virtual storage used by the DBM1 address space, you can use the RMF Virtual Storage Report.

With this spreadsheet in hand, you can now analyze your DBM1 address space virtual memory assignments. ERP packages typically involve increased virtual pools and a larger EDM pool. These things, coupled with hundreds (perhaps thousands) of tablespaces defined with COMPRESS YES, may constrain your DBM1 address space. Begin with the spreadsheet, monitor, and plan for future expansion.

Dynamic SQL Caching

There are two memory areas in the DBM1 address space where dynamic SQL statement caching occurs: the Local Cache and the Global Cache. ERP packages typically make use of lots of dynamic SQL, so understanding these areas is important.

The Local Dynamic SQL Cache (LDSC) is a separate area of memory in the DBM1 address space (not in the EDM Pool). If ZParm CACHEDYN=YES and plans / packages are bound with KEEPDYNAMIC(YES) (the Default is NO), then the user copy of a prepared statement is stored in the LDSC. The maximum size of this area is controlled by ZParm MAXKEEPD. (See formula in Admin Guide for “size of a prepared statement”).

The Global Cache is allocated when CACHEDYN=YES. It is usually allocated in the EDM Pool (with total EDM Pool size controlled by the EDMPOOL ZParm). In DB2 V6 and above, it is optionally allocated in a separate data space when ZParm EDMDSPAC=YES. The general recommendation is not to use data spaces unless your DBM1 address space is already virtual storage constrained.

Note: Some ERP packages use parameter markers in SQL to achieve better dynamic SQL caching; however, this may influence the Optimizer’s choice of access path, especially with range predicates.

With parameter markers the Optimizer will use default filter factors to help determine the best access path; literals might provide a better one. Consider re-coding queries (if possible) to use literals, or consider using the REOPT(VARS) Bind parameter.
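To illustrate the difference (the table and column names are hypothetical):

```sql
-- With a parameter marker, the Optimizer must apply a default
-- filter factor to the range predicate at PREPARE time:
SELECT ORDER_NO
  FROM ORDERS
 WHERE ORDER_DATE > ?;

-- With a literal, it can use the column's catalog statistics:
SELECT ORDER_NO
  FROM ORDERS
 WHERE ORDER_DATE > '2002-06-30';
```

With REOPT(VARS), DB2 re-determines the access path at run time using the actual host-variable values, at the cost of a re-prepare on each execution.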

Virtual Pools

Typically, object bufferpool assignments are the DBA’s responsibility; however, the DBA needs to coordinate changes closely with the systems programmer because of potential DBM1 address space memory issues. The systems programmer also needs to provide numbers and feedback on alternatives (data spaces, coupling facility storage, etc.).

DB2 Virtual pools may reside in any of three places: memory in the DBM1 address space, memory in a hiperspace, or memory in a data space. There is probably no immediate performance benefit to assigning virtual pools to data spaces until you run DB2 on a processor with more than 2 GB of real memory.

To alleviate sort problems, you might consider the use of a data space for the virtual pool you assign to the work files (usually DSNDB07 in a non-data-sharing DB2 subsystem). Only consider this if you are virtual storage constrained, or if you have a requirement to cache large tables or indexes.

I recommend setting the virtual pool thresholds to 'normal' according to your standards.

If you are storage constrained, or looking for a (possibly) minor performance gain, consider allocating hiperpools and treating them as “read-only” extensions of the virtual pools. For example, instead of allocating BP4 at 20,000 buffers, allocate BP4 at 10,000 with a 10,000-page hiperpool. Logically, you now have a “total” BP4 size of 20,000 with half of it “reserved” for read-only pages, which may now reside in the hiperpool for a longer period.
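The split can be made with the -ALTER BUFFERPOOL command; the sizes below (in pages) mirror the example above and are illustrative only:

```sql
-- Halve the virtual pool and back it with an equal-sized hiperpool
-ALTER BUFFERPOOL(BP4) VPSIZE(10000) HPSIZE(10000)
```

Use -DISPLAY BUFFERPOOL afterwards to confirm the new sizes and to watch hiperpool read/write activity.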

Sort Pool and RID Pool

Note that the SRTPOOL ZParm setting applies to “each concurrent sort user” (Admin Guide), not globally. If you have many threads requiring sorts, then they will allocate multiple areas of the size specified by SRTPOOL. If too many threads make this request, your DBM1 address space will experience storage constraints. Sometimes lowering the size of a pool can result in better performance!

After attempting to use the SRTPOOL area in DBM1, DB2 will use workfiles within your workfile database (usually DSNDB07 in a non-data-sharing subsystem). Tablespaces in the workfile database are assigned to virtual pools (most shops assign them to BP7). Since the virtual pool is in memory as well, it is possible that sorts will still complete within memory. Setting SRTPOOL lower may have only a minimal effect on performance, but may keep your DBM1 address space out of trouble.

The RID Pool

The RID Pool (set by the MAXRBLK ZParm) is used for sorting RIDs during list prefetch, multi-index access, and hybrid joins. If it is not big enough, SQL statements may fall back to a tablespace scan.

Monitor it using the RID Pool Processing section of DB2PM (or its equivalent). This pool is not allocated until needed. When required, DB2 will allocate memory in 16KB blocks until the maximum is reached. Result: If you have over-allocated the RID Pool you will eventually be memory-constrained, and it may not become apparent until a “killer query” runs.

Recommendation: allocate the RID Pool based on formula in Admin Guide, and measure usage with DB2PM.


In Part 1 we reviewed the most important issues facing the system programmer and DBA when planning for ERP package implementation. This part addressed DB2 subsystem configuration issues. In Part 3 we will review monitoring and tuning options.


Lockwood Lyon is a Systems and Database Performance specialist in the Midwest. He has over twenty years of experience in Information Technology as an IMS and DB2 database analyst, systems analyst, manager, and consultant. Most recently he's spent quite a lot of time on DB2 subsystem installation and performance tuning. Lockwood is the author of MIS Manager's Appraisal Guide (McGraw-Hill, 1993), Migrating to DB2 (John Wiley & Sons, 1991), and The IMS/VS Expert's Guide (Van Nostrand, 1990).

Contributors : Lockwood Lyon
Last modified 2006-01-04 03:21 PM