
IMS and z/OS

by Dean Meltz, Rick Long, Mark Harrington, Robert Hain, Geoff Nicholls
from the new book, An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System, Prentice Hall PTR, 2004.

This chapter describes how IMS subsystems are implemented on a z/OS system and how IMS uses some of the facilities that are a part of the z/OS operating system.

How IMS Relates to z/OS

IMS is a large application that runs on z/OS. There is a symbiotic relationship between IMS and z/OS. Both are tailored to provide the most efficient use of the hardware and software components.

IMS runs as a z/OS subsystem and uses several address spaces: one controlling address space, several separate address spaces that provide IMS services, and several address spaces that run IMS application programs. z/OS address spaces are sometimes called regions, as in the IMS control region; the term region is synonymous with a z/OS address space.

The various components of an IMS system are explained in more detail in “Structure of IMS Subsystems.”

Structure of IMS Subsystems

This section describes the various types of z/OS address spaces and their interrelationships. The control region is the core of an IMS subsystem, running in one z/OS address space. Each control region uses many other address spaces that provide additional services to the control region, and in which the IMS application programs run.

Some IMS applications and utilities run in separate, standalone regions, called batch regions. Batch regions are separate from an IMS subsystem and its control region and have no connection with it. For more information, see IMS Batch Environment.

IMS Control Region

The IMS control region is a z/OS address space that can be initiated through a z/OS START command or by submitting a job control language (JCL) job.
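As a sketch of these two start methods (the procedure name IMS matches the IBM-supplied member listed in Table 4-2, but the job name and accounting details are illustrative):

```jcl
//* Method 1: the operator issues a z/OS START command for the
//* control region procedure:
//*
//*   S IMS
//*
//* Method 2: submit a JCL job that executes the same cataloged
//* procedure:
//IMSCTL   JOB (ACCT),'IMS CONTROL REGION',CLASS=A
//CTLRGN   EXEC IMS
```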

The IMS control region provides the central point of control for an IMS subsystem. The IMS control region:

      • Provides the interface to z/OS for the operation of the IMS subsystem.
      • Controls, schedules, and dispatches the application programs that are running in separate regions, called dependent regions.
      • Provides the interface to the SNA network for IMS TM functions.
      • Provides the OTMA interface for access to non-SNA networks.
      • Provides the ODBA interface for DB2 UDB for z/OS stored procedures and other z/OS application programs.

The IMS control region also provides all logging, restart, and recovery functions for the IMS subsystem. The terminals, message queues, and logs are all attached to this region. Fast Path (one of the IMS database types) database data sets are also allocated by the IMS control region. A z/OS type-2 supervisor call (SVC) routine is used to pass control information, message data, and database data between the control region and the other regions.

Four different types of IMS control regions can be defined using the IMS system definition process; the type you choose depends on which IMS functions you need. The four types of IMS control regions support the four IMS environments. These environments are discussed in more detail in IMS Environments.

IMS Environments

Each of the IMS environments is a distinct combination of hardware and programs that supports distinct processing goals. The four IMS environments are:

      • DB/DC, which contains all the functionality of both IMS TM and IMS DB (see IMS DB/DC Environment).
      • DBCTL (pronounced DB Control), which contains the functionality of only IMS DB (see IMS DBCTL Environment).
      • DCCTL (pronounced DC Control), which contains the functionality of only IMS TM (see IMS DCCTL Environment).
      • Batch, which contains the functionality of IMS DB, but is used only for batch jobs (see IMS Batch Environment).

IMS DB/DC Environment

The DB/DC environment has both IMS TM and IMS DB installed and has the functionality of the entire IMS product. The processing goals of the DB/DC environment are to:

      • Enable terminal users to retrieve data and modify the database with satisfactory real-time performance. Some typical applications are banking, airline reservations, and sales orders.
      • Ensure that retrieved data is current.
      • Distribute transaction processing among multiple processors in a communications network.
      • Run batch application programs to update databases at certain intervals (for example, process a payroll or produce an inventory report).
      • Run database utilities using batch.

As shown in Figure 4-1, the DB/DC control region provides access to the:

      • Network, which might include a z/OS console, terminals, Web servers, and more.
      • IMS message queues for IMS applications running in message processing regions (MPRs) or Java message processing regions.
      • IMS libraries.
      • IMS logs.
      • Fast Path databases.
      • DL/I separate address space.
      • Database Recovery Control (DBRC) facility region.
      • IMS Fast Path (IFP) region.
      • Java message processing program (JMP) region.
      • Java batch processing program (JBP) region.
      • Batch message processing program (BMP) region.

Figure 4-1: Structure of a sample IMS DB/DC environment.

IMS DBCTL Environment

The DBCTL environment has only IMS DB installed. The processing goals of the DBCTL environment are to:

      • Process network transactions without IMS TM; that is, use IMS DB with a different transaction management subsystem, such as CICS.
      • Run batch application programs to update databases at certain intervals (for example, process a payroll or produce an inventory report).
      • Run database utilities using batch.

DBCTL can provide IMS database functions to batch message programs (BMP and JMP application programs) connected to the IMS control region, and to application transactions running in CICS regions, as shown in Figure 4-2.

When a CICS system connects to IMS using the database resource adapter (DRA), each CICS system has a predefined number of connections with IMS. Each of these connections is called a thread. Although threads are not jobs from the perspective of IMS, each thread appears to the IMS system to be another IMS dependent region. When a CICS application issues a DL/I call to IMS, the DL/I processing runs in one of these dependent regions.

When a DB/DC environment is providing access to IMS databases for a CICS region, it is referred to in some documentation as providing DBCTL services, though it might, in fact, be a full DB/DC environment and not just a DBCTL environment.

IMS DCCTL Environment

The DCCTL environment is an IMS Transaction Manager subsystem that has no database components.

A DCCTL environment is similar to the “DC” component of a DB/DC environment.

The primary difference is that a DCCTL control region owns no databases and does not service DL/I database calls. The processing goals of the DCCTL environment are to:

      • Process network transactions without IMS DB by using IMS TM with an external database management subsystem, such as DB2 UDB for z/OS.
      • Use DBRC to maintain system log information that might be needed to restart IMS.
      • Run batch application programs in a TM batch region by using IMS TM to do batch processing with DB2 UDB for z/OS.

Figure 4-2: Structure of a Sample IMS DBCTL Environment.

As shown in Figure 4-3, the DCCTL system, in conjunction with the IMS External Subsystem Attach Facility (ESAF), provides a transaction manager facility to external subsystems (for example, DB2 UDB for z/OS). Most IMS customers use a DB/DC environment as a transaction manager front end for DB2 UDB for z/OS.

In a DCCTL environment, transaction processing and terminal management are identical to transaction processing and terminal management in a DB/DC environment.

IMS Batch Environment

The IMS batch environment consists of a batch region (a single address space) where an application program and IMS routines reside. The batch job that runs the batch environment is initiated with JCL, like any operating-system job.

There are two types of IMS batch environments: DB Batch and TM Batch. These environments are discussed in “DB Batch Environment” and in “TM Batch.”

DB Batch Environment In the DB Batch environment, IMS application programs that use only IMS DB functions can be run in a separate z/OS address space that is not connected to an IMS online control region. These batch applications are typically very long-running jobs that perform large numbers of database accesses, or applications that do not perform synchronization-point processing to commit the work. DB Batch applications can access only full-function databases, which are explained in “Implementation of IMS Databases,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System.

In a DB Batch environment, the JCL is submitted through TSO or a job scheduler, and all of the IMS code used by the application resides in the address space in which the application is running. The job executes an IMS batch region controller that then loads and calls the application. Figure 4-4 shows an IMS batch region. The batch address space opens and reads the IMS database data sets directly.
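A minimal sketch of such a batch job, assuming the IBM-supplied DLIBATCH procedure from Table 4-2 and hypothetical program and PSB names (MYPROG, MYPSB):

```jcl
//DBBATCH  JOB (ACCT),'IMS DB BATCH',CLASS=A
//* The batch region controller loads and calls the application
//* named by MBR; PSB names its program specification block.
//BATCH    EXEC DLIBATCH,MBR=MYPROG,PSB=MYPSB
//* Database and log data sets are allocated directly by this
//* address space; no IMS online control region is involved.
```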

The batch region controller writes its own separate IMS log. In the event of a program failure, it might be necessary to take manual action (for example, submit jobs to run IMS utilities) to recover the databases to a consistent point. With online dependent application regions, this is done automatically by the IMS control region. You can also use DBRC to track the IMS logs and ensure that correct recovery action is taken in the event of a failure.

ATTENTION:

If multiple programs, either running under the control of an IMS control region or in other batch regions, need to access databases at the same time, then you must take steps to ensure data integrity. See Chapter 9, “Data Sharing,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System for more information about how the data can be updated by multiple applications in a safe manner.

Figure 4-3: Structure of a sample IMS DCCTL environment.

An application can be written so that it can run in both a batch address space and a BMP address space without change. You can vary the execution environment of a program between batch and BMP address spaces depending on its run time, on whether other applications need to access the data at the same time, or on your procedures for recovering from application failures.

TM Batch IMS TM supports a batch region for running TM batch application programs. Using TM Batch, you can either take advantage of the IMS Batch Terminal Simulator for z/OS or access an external subsystem through the IMS External Subsystem Attach Facility (ESAF). One example of an external subsystem is DB2 UDB for z/OS.

You can connect DB2 UDB for z/OS in an IMS TM batch environment in one of two ways. You can use the SSM parameter on the TM batch-region execution JCL and specify the actual name of the batch program on the MBR parameter. Alternatively, you can code the DDITV02 DD statement on the batch-region execution JCL and specify the name of the DB2 UDB for z/OS module, DSNMTV01, on the MBR parameter.
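The two attachment methods might be sketched as follows. The procedure name TMBATCH, the application name MYAPP, and the SSM-member and parameter data set names are illustrative; the SSM, MBR, and DDITV02 parameters and the DSNMTV01 module come from the text:

```jcl
//* Way 1: name the application on MBR and point SSM at the
//* subsystem member that defines the DB2 connection:
//TMBATCH1 EXEC TMBATCH,MBR=MYAPP,SSM=DSNSSM
//*
//* Way 2: run the DB2-supplied module DSNMTV01 as the region
//* program and describe the connection on a DDITV02 DD statement:
//TMBATCH2 EXEC TMBATCH,MBR=DSNMTV01
//DDITV02  DD DSN=IMSP.PROCLIB(DSNPARMS),DISP=SHR
```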

TM Batch does not provide DL/I database capabilities.

Figure 4-4: Structure of an IMS DB batch environment.

IMS Separate Address Spaces

The IMS control region has separate address spaces that provide some of the IMS subsystem services.

These regions are automatically started by the IMS control region as part of its initialization, and the control region does not complete initialization until these regions have started and connected to the IMS control region. All separate address spaces (except for DBRC) are optional, depending on the IMS features used. For DL/I, separate address space options can be specified at IMS initialization.

DBRC Region

The DBRC region provides all access to the DBRC recovery control (RECON) data sets. The DBRC region also generates batch jobs for DBRC (for example, for archiving the online IMS log). Every IMS control region must have a DBRC region because it is needed, at a minimum, for managing the IMS logs.

DL/I Separate Address Space

The DL/I separate address space (DLISAS) performs most data set access functions for IMS DB (except for the Fast Path DEDB databases). The DLISAS allocates full-function database data sets and also contains some of the control blocks associated with database access and some database buffers.

For a DBCTL environment, the DLISAS is required and always present.

For a DB/DC environment, you have the option of having IMS database accesses performed by the control region or having the DB/DC region start DLISAS. For performance and capacity reasons, use DLISAS.

DLISAS is not present for a DCCTL environment because the Database Manager functions are not present.

Dependent Regions

IMS provides address spaces for the execution of system and application programs that use IMS services. These address spaces are called dependent regions.

The dependent regions are started by the submission of JCL to the operating system. The JCL is submitted as a result of a command issued to the IMS control region, through automation, or by a regular batch job submission.

After the dependent regions are started, the application programs are scheduled and dispatched by the IMS control region. In all cases, the z/OS address space executes an IMS control region program. The application program is then loaded and called by the IMS code.

Up to 999 dependent regions can be connected to one IMS control region, made up of any combination of the following dependent region types:

      • Message processing region (MPR)
      • IMS Fast Path (IFP) region, processing Fast Path applications or utilities
      • Batch message processing (BMP) region, running with or without HSSP (High Speed Sequential Processing)
      • Java message processing (JMP) region
      • Java batch processing (JBP) region
      • DBCTL thread (DBT)

Table 4-1 describes the support for dependent regions by IMS environment type.

Message Processing Region Message processing regions (MPRs) run applications that process messages that come into IMS TM as input (for example, from terminals or online programs). MPRs can be started by IMS submitting the JCL as a result of an IMS command. The address space does not automatically load an application program but waits until work becomes available.

Table 4-1: Support for dependent region type by IMS environment.

Application Address Space Type  DB/DC  DBCTL  DCCTL  DB Batch  TM Batch
BMP (transaction-oriented)      Y      N      Y(a)   N         N
BMP (batch-oriented)            Y      Y      Y      N         N
Batch                           N      N      N      Y         Y

(a) BMP regions attached to a DCCTL control region can access only IMS message queues and DB2 UDB for z/OS databases.

Priority settings determine which MPR runs the application program. When IMS determines that an application is to run in a particular MPR, the application program is loaded into that region and receives control. The application processes the message and any further messages for that transaction that are waiting to be processed. Then, depending on options specified on the transaction definition, the application either waits for further input, or another application program is loaded to process a different transaction.

IMS Fast Path Region An IMS Fast Path (IFP) region runs application programs to process messages for transactions that have been defined as Fast Path transactions.

Fast Path applications are very similar to the applications that run in an MPR. Like MPRs, the IFP regions can be started by the IMS control region submitting the JCL as a result of an IMS command. The difference between MPRs and IFP regions is in the way IMS loads and dispatches the application program and handles the transaction messages. To allow for this different processing, IMS imposes restrictions on the length of the application data that can be processed in an IFP region as a single message.

IMS uses a user-written exit routine (or the IBM-supplied sample) to determine whether a transaction message should be processed in an IFP region and in which IFP region it should be processed.

The IMS Fast Path facility that processes messages is called the expedited message handler (EMH). The EMH speeds the processing of the messages by having the applications loaded and waiting for input messages, and, if the message is suitable, dispatching it directly in the IFP region, bypassing the IMS message queues.

IFP regions can also be used for other types of work besides running application programs. IFP regions can be used for Fast Path utility programs. For further discussion on using these regions for other types of work, see the IMS Version 9: Installation Volume 2: System Definition and Tailoring.

Batch Message Processing Region Unlike MPR or IFP regions, a BMP region is not usually started by the IMS control region, but is started by submitting a batch job, for example by a user from TSO or by a job scheduler. The batch job then connects to an IMS control region that is defined in the execution parameters.

Two types of applications can run in BMP regions:

      • Message-driven BMP applications (also called transaction-oriented BMP applications), which read and process messages from the IMS message queue
      • Non-message-driven BMP applications (batch-oriented), which do not process IMS messages

BMP regions have access to the IMS full-function and Fast Path databases, provided that the control region has the Database Manager component installed. BMP regions can also read and write to z/OS sequential files, with integrity, using the IMS GSAM access method (see “GSAM Access Method,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System).

BMP regions can also be used for other types of work besides running application programs. BMP regions can be used for jobs that, in the past, were run as batch update programs. The advantage of converting batch jobs to run in BMP regions is that the batch jobs can then run alongside a transaction environment, and these BMP applications can run concurrently instead of sequentially. For a further discussion on using these regions for other types of work, see the IMS Version 9: Installation Volume 2: System Definition and Tailoring.

Java Dependent Regions Two IMS dependent regions provide a Java Virtual Machine (JVM) environment for Java or object-oriented COBOL applications:

Java message processing (JMP) regions

JMP regions are similar to MPR regions, but JMP regions allow the scheduling only of Java or object-oriented COBOL message-processing applications. A JMP application is started when there is a message in the queue for the JMP application and IMS schedules the message to be processed. JMP applications are executed through transaction codes submitted by users at terminals and from other applications. Each transaction code represents a transaction that the JMP application processes. A single application can also be started from multiple transaction codes.

JMP applications are very flexible in how they process transactions and where they send the output. JMP applications send any output messages back to the message queues and process the next message with the same transaction code. The program continues to run until there are no more messages with the same transaction code. JMP applications share the following characteristics:

      • They are small.
      • They can produce output that is needed immediately.
      • They can access IMS or DB2 data in a DB/DC environment and DB2 data in a DCCTL environment.

Java batch processing (JBP) regions

JBP regions run flexible programs that perform batch-type processing online and can access the IMS message queues for output (similar to non-message-driven BMP applications). JBP applications are started by submitting a job with JCL or from TSO. JBP applications are like BMP applications, except that they cannot read input messages from the IMS message queue. Similarly to BMP applications, JBP applications can use symbolic checkpoint and restart calls to restart the application after an abend. JBP applications can access IMS or DB2 data in a DB/DC or DBCTL environment and DB2 data in a DCCTL environment.

Figure 4-5 shows a Java application that is running in a JMP or JBP region. JDBC or IMS Java hierarchical interface calls are passed to the IMS Java layer, which converts them to DL/I calls.

JMP and JBP regions can run applications written in Java, object-oriented COBOL, or a combination of the two.

Related Reading: For more information about writing Java applications for IMS, see Chapter 18, “Application Programming in Java,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System, or IMS Version 9: IMS Java Guide and Reference.

Common Queue Server address space

Common Queue Server (CQS) is a generalized server that manages data objects on a z/OS coupling facility on behalf of multiple clients. CQS is used by IMS shared queues and the Resource Manager address space in the Common Service Layer.

CQS uses the z/OS coupling facility as a repository for data objects. Storage in a coupling facility is divided into distinct objects called structures. Authorized programs use structures to implement data sharing and high-speed serialization. The coupling facility stores and arranges the data according to list structures. Queue structures contain collections of data objects that share the same names, known as queues. Resource structures contain data objects organized as uniquely named resources.

Figure 4-5: JMP or JBP application that uses the IMS Java function.

CQS receives, maintains, and distributes data objects from shared queues on behalf of multiple clients. Each client has its own CQS to access the data objects on the coupling facility list structure. IMS is one example of a CQS client that uses CQS to manage both its shared queues and shared resources.

CQS runs in a separate address space that can be started by the client (IMS). The CQS client must run on the same z/OS image as the CQS address space.

CQS is used by IMS DCCTL and IMS DB/DC control regions if they are participating in sysplex sharing of IMS message queues or resource structures. IMS DBCTL can also use CQS and a resource structure if it is using the IMS coordinated online change function.

Clients communicate with CQS using CQS requests that are supported by CQS macro statements. Using these macros, CQS clients can communicate with CQS and manipulate client data on shared coupling facility structures. Figure 4-6 shows the communications and the relationship between clients, CQSs, and the coupling facility.

Related Reading: For complete information about CQS, see IMS Version 9: Common Queue Server Guide and Reference.

Common Service Layer

The IMS Common Service Layer (CSL) is a collection of IMS system address spaces that provide the infrastructure needed for systems management tasks.

Figure 4-6: Client systems, CQS, and a coupling facility.

The IMS CSL reduces the complexity of managing multiple IMS systems by providing you with a single-image perspective in an IMSplex. An IMSplex is one or more IMS subsystems that can work together as a unit. Typically, these subsystems:

      • Share either databases or resources or message queues (or any combination)
      • Run in a z/OS sysplex environment
      • Include an IMS CSL

The CSL address spaces include Operations Manager (OM), Resource Manager (RM), and Structured Call Interface (SCI). They are briefly described in the following sections.

Related Reading: For a further discussion of IMS in a sysplex environment, see:

Operations Manager Address Space The Operations Manager (OM) controls the operations of an IMSplex. OM provides an application programming interface (the OM API) through which commands can be issued and responses received. With a single point of control (SPOC) interface, you can submit commands to OM. The SPOC interfaces include the TSO SPOC, the REXX SPOC API, and the IMS Control Center. You can also write your own application to submit commands.

Related Reading: For a further discussion of OM, see “Operations Manager,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System.

Resource Manager Address Space The Resource Manager (RM) is an IMS address space that manages global resources and IMSplex-wide processes in a sysplex on behalf of RM’s clients. IMS is one example of an RM client.

Related Reading: For a further discussion of RM, see “Resource Manager,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System.

Structured Call Interface Address Space The Structured Call Interface (SCI) allows IMSplex members to communicate with one another. The communication between IMSplex members can happen within a single z/OS image or among multiple z/OS images. Individual IMS components do not need to know where the other components reside or what communication interface to use.

Related Reading: For a further discussion of SCI, see “Structured Call Interface,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System.

Internal Resource Lock Manager

The internal resource lock manager (IRLM) is delivered as an integral part of IMS, but you do not have to install or use it unless you need to perform block-level or sysplex data sharing. IRLM is also the required lock manager for DB2 UDB for z/OS.

The IRLM address space is started before the IMS control region with the z/OS START command. If the IMS start-up parameters specify IRLM, the IMS control region connects to the IRLM that is specified on startup and does not complete initialization until the connection is successful.
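Operationally, this sequencing might look like the following console sketch, using the DXRJPROC member from Table 4-2 (the IRLM name IRLM1 passed to IMS is illustrative):

```jcl
//* 1. Start IRLM before the IMS control region:
//*
//*      S DXRJPROC
//*
//* 2. Then start IMS; with IRLM specified in its start-up
//*    parameters, the control region connects to the named IRLM
//*    before completing initialization:
//*
//*      S IMS,IRLM=Y,IRLMNM=IRLM1
```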

Typically, one IRLM address space runs on each z/OS system to service all IMS subsystems that share the same set of databases. For more information on data sharing in a sysplex environment, see:

Running an IMS System

IBM supplies the procedures to run IMS address spaces. The procedures for each type of region are located in the IMS.PROCLIB data set.

RECOMMENDATION:

Do not use the same IRLM address space for IMS and DB2 UDB for z/OS because the tuning requirements of IMS and DB2 are different and conflicting. The IRLM code is delivered with both IMS and DB2 UDB for z/OS and interacts closely with both. Therefore, you might want to install the IRLM code for IMS and DB2 UDB for z/OS separately (that is, in separate SMP/E zones) so that you can maintain release and maintenance levels independently. Installing the IRLM code separately can be helpful if you need to install prerequisite maintenance on IRLM for one database product because doing so does not affect the use of IRLM by the other product.

You must modify the procedures in the IMS.PROCLIB data set with the correct data set names for each IMS system. Table 4-2 contains the procedure member names in IMS.PROCLIB, along with the type of region that each member generates.

Related Reading: For details of these and other procedures supplied in IMS.PROCLIB, see the “Procedures” chapter in IMS Version 9: Installation Volume 2: System Definition and Tailoring.
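For example, the STEPLIB concatenation inside a copied procedure might be tailored to point at this installation's libraries. The IMSP high-level qualifier and PGMLIB data set are illustrative, while SDFSRESL is the conventional name of the IMS resident library:

```jcl
//* Tailored STEPLIB inside a copied IMS.PROCLIB member:
//STEPLIB  DD DSN=IMSP.SDFSRESL,DISP=SHR
//         DD DSN=IMSP.PGMLIB,DISP=SHR
```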

Running Multiple IMS Systems

You can run multiple IMS systems on a single z/OS image or on multiple z/OS images. One instance of an IMS system (a control region and all its associated dependent regions) is referred to as one IMS system. In many cases, these IMS systems would be production and testing systems. A batch IMS system (for example, DB batch) is also considered one IMS system.

Procedure Member Name   Region Type
DBC                     DBCTL control region
DBRC                    Database Recovery Control region
DCC                     DCCTL control region
DFSJBP                  Java batch processing (JBP) region
DFSJMP                  Java message processing (JMP) region
DFSMPR                  Message processing region (MPR)
DLIBATCH                DB batch region
DLISAS                  DL/I separate address space
DXRJPROC                Internal resource lock manager (IRLM) region
FPUTIL                  Fast Path utility region
IMS                     DB/DC control region
IMSBATCH                IMS batch message processing region (BMP)
IMSFP                   IMS Fast Path (IFP) region
IMSRDR                  IMS JCL reader region

Table 4-2: IMS procedure members and the region type they generate.

Running Multiple IMS Systems on a Single z/OS Image

The number of IMS subsystems you can run on a single image of z/OS depends on many factors, including the size of each IMS system (the amount of z/OS common service area [CSA] required by each IMS is often one of the most limiting factors in the equation). In most installations, you can run up to four IMS subsystems, although some installations run as many as eight small subsystems concurrently.

Each IMS subsystem should have unique VTAM access method control block (ACB) and IMSID (IMS subsystem identifier to the operating system) names. The dependent regions use the IMSID to connect to the corresponding IMS control region. If the dependent region starts and there is no control region running using that IMSID, the dependent region issues a message to the z/OS system console and then waits for a reply. Each IMS subsystem can have up to 999 dependent regions. However, there are other limiting factors, such as storage limitations due to pool usage.
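As a sketch, a dependent region points at its control region through the IMSID parameter of its procedure; here the DFSMPR member from Table 4-2 is assumed to accept an IMSID override, and IMSA is an illustrative subsystem identifier:

```jcl
//* Start an MPR that connects to the control region whose
//* IMSID is IMSA; if no control region with that IMSID is
//* running, the region issues a console message and waits.
//MPRA     EXEC DFSMPR,IMSID=IMSA
```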

Running Multiple IMS Systems on Multiple z/OS Images

There are three main ways to run multiple IMS subsystems on multiple z/OS images.

How IMS Uses z/OS Services

IMS is designed to make the best use of the features of the z/OS operating system. IMS does so by:

      • Running in multiple address spaces: IMS subsystems (except for IMS batch applications and utilities) normally consist of a control region address space, separate address spaces for system services, and dependent address spaces for application programs. Running in multiple address spaces provides the following advantages:
          • Maximizes the use of a central processor complex (CPC). Address spaces can be dispatched in parallel.
          • Isolates the application programs from the IMS system code. Doing so reduces outages from application failures.
      • Running multiple tasks in each IMS address space: IMS, particularly in the control region, creates multiple z/OS subtasks for the various functions to be performed. Doing so allows other IMS subtasks to be dispatched by z/OS while one IMS subtask waits for system services.

      • Using the z/OS cross memory services: IMS uses z/OS cross memory services to communicate between the various address spaces that make up an IMS system. IMS also uses the z/OS CSA and ECSA to store IMS control blocks that are frequently accessed by the address spaces of that IMS system. Doing so minimizes the overhead of running in multiple address spaces.
      • Using the z/OS subsystem feature: IMS dynamically registers itself as a z/OS subsystem and uses the z/OS subsystem feature to detect when dependent address spaces fail, thus preventing the cancellation of dependent address spaces through z/OS command entry.
      • Using a z/OS sysplex: Multiple IMS subsystems can run on the z/OS systems that make up the sysplex and, therefore, can access the same IMS databases and the same message queue. Doing so provides:
          • High availability: z/OS systems and IMS subsystems can be taken in and out of service without interrupting production. 
          • High capacity: multiple IMS subsystems can process far greater volumes than individual IMS subsystems can.

Related Reading: For information about data sharing and shared queues in a sysplex environment, see:

Transmission Control Protocol/Internet Protocol (TCP/IP)

IMS provides support for z/OS TCP/IP communications through a function called Open Transaction Manager Access (OTMA). Any TCP/IP application can access IMS by using OTMA. Examples of such TCP/IP applications are:

      • IMS Connect (a function within IMS TM), which uses the OTMA interface to connect IMS to Web servers
      • CICS
      • DB2 UDB for z/OS stored procedures
      • WebSphere MQ
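
Generically, any such client reaches IMS over an ordinary TCP/IP socket conversation. The sketch below stands up a trivial local listener and exchanges one message with it; the port, the plain-text payload, and the "ACK" reply are illustrative assumptions only, not the actual IMS Connect message protocol, which defines its own structured headers:

```python
import socket
import threading

def listener(server_sock):
    """Accept one connection and acknowledge the bytes received.
    (Stands in for a TCP/IP listener such as IMS Connect; the real
    protocol is considerably more elaborate.)"""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ACK:" + data)

# Bind to an ephemeral local port so the example is self-contained.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=listener, args=(server,), daemon=True).start()

# Client side: open a TCP connection and exchange one message.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"TRANCODE INPUT-DATA")  # hypothetical payload
    reply = client.recv(1024)

server.close()
print(reply)  # → b'ACK:TRANCODE INPUT-DATA'
```
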

Related Reading: For information about OTMA and IMS Connect, see:

Advanced Program-to-Program Communications (APPC)

IMS supports the z/OS CPI-C (Common Programming Interface for Communications) interface, which is based on Logical Unit type 6.2 formats and protocols for program-to-program communication.

APPC is an implementation of the LU type 6.2 protocol. IMS’s support for APPC is called APPC/IMS.

APPC/IMS enables applications to be distributed throughout an entire network and to communicate with each other regardless of the underlying hardware.

Related Reading: For more information about IMS’s support for APPC, see “APPC/IMS and LU 6.2 Devices,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System.

Resource Access Control Facility (RACF)

IMS was developed before the introduction of RACF, which is part of the Security Server for z/OS, and other security products. As a result, IMS has its own security mechanisms to control user access to IMS resources, transactions, and databases.

With the introduction of RACF, IMS was enhanced so that it can use RACF (or an equivalent product) to control access to IMS resources. You can use the original IMS security features, the RACF features, or a combination of both.

Recommendation: Use RACF for security because it provides more flexibility than the original IMS security features provide.

Related Reading: For more information about protecting IMS resources, see “IMS Security,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System. For complete information about IMS security, see the security chapter in IMS Version 9: Administration Guide: System.

Resource Recovery Services (RRS)

z/OS includes a facility for managing system resource recovery, called resource recovery services (RRS). RRS is the sync-point manager: it coordinates the update and recovery of multiple protected resources. RRS controls how and when protected resources are committed by coordinating with the resource managers (such as IMS) that have registered with RRS.

RRS provides a system resource recovery platform so that applications that run on z/OS can have access to local and distributed resources and have system-coordinated recovery management of these resources. RRS support includes these features and services:

      • A sync-point manager to coordinate the two-phase commit process3
      • Implementation of the SAA® commit and backout callable services for use by application programs
      • A mechanism to associate resources with an application instance
      • Services for resource manager registration and participation in the two-phase commit process with RRS
      • Services to allow resource managers to express interest in an application instance and be informed of commit and backout requests
      • Services to enable resource managers to obtain system data to restore their resources to a consistent state
      • A communications resource manager (called APPC/PC for APPC/Protected Conversations) so that distributed applications can coordinate their recovery with participating local resource managers
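
The two-phase commit flow that a sync-point manager coordinates can be sketched as follows. This is a simplified illustration of the protocol itself; the class and method names are invented for the example and do not correspond to actual RRS services:

```python
class ResourceManager:
    """A registered resource manager (for example, a database manager)."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.state = "active"

    def prepare(self):
        # Phase 1: vote on whether this manager is ready to commit.
        self.state = "prepared" if self.healthy else "failed"
        return self.healthy

    def commit(self):
        self.state = "committed"

    def backout(self):
        self.state = "backed out"

def sync_point(resource_managers):
    """Phase 1: poll every registered manager. Phase 2: commit only if
    all voted yes; otherwise instruct every manager to back out."""
    if all(rm.prepare() for rm in resource_managers):
        for rm in resource_managers:
            rm.commit()
        return "committed"
    for rm in resource_managers:
        rm.backout()
    return "backed out"

outcome1 = sync_point([ResourceManager("IMS"), ResourceManager("DB2")])
outcome2 = sync_point([ResourceManager("IMS"),
                       ResourceManager("DB2", healthy=False)])
print(outcome1, outcome2)  # → committed backed out
```

Because the commit instruction is issued only after every participant has voted yes, no participant can end up committed while another is backed out.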

Related Reading: For more information about how IMS uses RRS, see IMS Version 9: Administration Guide: System.

Parallel Sysplex

A Parallel Sysplex environment in z/OS is a combination of hardware and software components that enable sysplex data sharing. Data sharing means the ability for sysplex member systems and subsystems to store data into, and retrieve data from, a common area of a coupling facility. In short, a Parallel Sysplex can have multiple CPCs and multiple applications (such as IMS) that can directly share the workload.

In a Parallel Sysplex environment, you can run multiple IMS subsystems that share message queues and databases. This sharing enables workload balancing and insulation from individual IMS outages. If one IMS in the sysplex fails or is stopped, others continue to process the workload, so the enterprise is minimally affected.
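
The shared-queues idea can be sketched with an ordinary in-process queue (an illustration only: the subsystem names are invented, and real shared queues live in a coupling facility structure, not in process memory). When one consumer is out of service, the others simply absorb its share of the queued work:

```python
import queue
import threading

shared_queue = queue.Queue()
for i in range(10):
    shared_queue.put(f"message-{i}")

processed = {"IMSA": 0, "IMSB": 0}
lock = threading.Lock()

def ims_subsystem(name, available=True):
    """Each subsystem pulls work from the shared queue; an unavailable
    subsystem contributes nothing while the others drain the queue."""
    if not available:
        return
    while True:
        try:
            shared_queue.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed[name] += 1

# IMSB is out of service; IMSA continues to process the shared workload.
workers = [threading.Thread(target=ims_subsystem, args=("IMSA",)),
           threading.Thread(target=ims_subsystem, args=("IMSB", False))]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(processed["IMSA"], processed["IMSB"], shared_queue.empty())  # → 10 0 True
```
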

Related Reading: For more information, see Chapter 27, “Introduction to Parallel Sysplex” and Chapter 28, “IMSplexes,” An Introduction to IMS™: Your Complete Guide to IBM’s Information Management System.


1The concept of a region originated in the MVT (Multiprogramming with Variable Number of Tasks) operating system, a precursor to z/OS.

2A control language that is used to identify a job to an operating system and to describe the job’s requirements.

3Two-phase commit processing is a two-step process by which recoverable resources and an external subsystem are committed. During the first phase, the participating subsystems are polled to ensure that they are ready to commit. During the second phase, if all subsystems responded positively, the sync-point manager instructs them to commit; otherwise, the changes are backed out.

Contributors : Dean Meltz, Rick Long, Mark Harrington, Robert Hain, Geoff Nicholls
Last modified 2005-05-19 11:24 AM