Measuring DBA Effectiveness

by Craig S. Mullins

I visit and talk to many DBAs and DBA groups throughout the course of each year, and I also communicate with a lot of folks via e-mail. Some messages are simply comments on my articles and books, but many more ask questions and solicit advice. I welcome this input from readers, and every now and then I take the opportunity to answer particularly intriguing questions in print. A common question I am asked is “What is a good way to measure how effective your DBA group is?”

This is not a very easy question to answer because a DBA has to be a “jack of all trades.” And each of these “trades” can have multiple metrics for measuring success. For example, a metric suggested by one reader was to measure the number of SQL statements that are processed successfully. But what does “successfully” mean? Does it mean simply that the statement returned the correct results, or does it mean it returned the correct results in a reasonable time? And what is a “reasonable” time? Two seconds? One minute? A half hour? Unless you have established service level agreements, it is unfair to measure the DBA on response time. And the DBA must participate in establishing reasonable SLAs (in terms of cost and response time), lest he be handed a task that cannot be achieved.
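To make the SLA dependency concrete, here is a minimal sketch (in Python) of what a “successful statement” metric might look like once a response-time threshold has been agreed upon. The log structure, field names, and two-second threshold are illustrative assumptions, not part of any DBMS or standard.

from dataclasses import dataclass

# Hypothetical execution-log entry; the field names are illustrative only.
@dataclass
class SqlExecution:
    correct_result: bool    # did the statement return the right answer?
    elapsed_seconds: float  # observed response time

def sla_success_rate(executions, sla_seconds=2.0):
    """Percentage of statements returning correct results within the SLA threshold."""
    if not executions:
        return 0.0
    ok = sum(1 for e in executions
             if e.correct_result and e.elapsed_seconds <= sla_seconds)
    return 100.0 * ok / len(executions)

# Example: only 1 of 3 statements meets a 2-second SLA, so the rate is 33.3%.
log = [SqlExecution(True, 0.4), SqlExecution(True, 3.1), SqlExecution(False, 0.2)]
print(f"{sla_success_rate(log):.1f}%")

The point of the sketch is that the threshold comes from the SLA, not from the DBA or from the measurement itself.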

Measuring the number of incident reports was another metric suggested. Well, this is fine if it is limited to only true problems that might have been caused by the DBA. But not all database problems are legitimately under the control of the DBA. Should the DBA be held accountable for bugs in the DBMS (caused by the DBMS vendor), or for design decisions forced on him or her by an overzealous development team (which happens all the time with RAD and the e-business rush)?

I like the idea of using an availability metric, but it should be tempered against your specific environment and your organization’s up-time requirements. In other words, what is the availability required? Once again, back to SLAs. And the DBA should not be judged harshly for missing availability targets if the DBMS does not make availability achievable (e.g., online reorg and change management) or the organization does not purchase reasonable availability solutions from a third-party vendor. Many times the DBA is hired well after the DBMS has been selected. Should the DBA be held accountable for deficiencies in the DBMS itself if he or she had no input at all into the DBMS purchase decision?
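For reference, the availability calculation itself is simple; what matters is the target it is compared against. The sketch below uses an illustrative 99.9 percent target and invented outage numbers.

def availability_pct(total_minutes, downtime_minutes):
    """Availability as a percentage of the measurement period."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# Example: a 30-day month (43,200 minutes) with 90 minutes of outage.
measured = availability_pct(30 * 24 * 60, 90)  # about 99.79%
sla_target = 99.9                              # hypothetical SLA target
print(f"measured {measured:.2f}% vs. target {sla_target}% ->",
      "met" if measured >= sla_target else "missed")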

And what about those DBA tools that can turn downtime into up-time and ease administrative tasks? Well, most DBAs want every one of these tools they can get their hands on. But if the organization has no (or little) budget, the tools will not be bought. And should the DBA be held responsible for downtime when he is not given the proper tools to manage the problem?

OK then, what about a metric based on response to problems? This metric would not necessarily mean that the problem was resolved, but that the DBA has responded to the “complaining” entity and is working on a resolution. Such a metric would lean toward treating database administration as a service or help desk type of function. This sounds more reasonable, at least from the perspective of the DBA, but I actually think this is much too narrow a metric for measuring DBAs.
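If responsiveness were the chosen yardstick, the arithmetic would be no harder than a mean-time-to-respond calculation over problem tickets. The timestamps below are invented, and the metric measures acknowledgement, not resolution.

from datetime import datetime, timedelta

# Hypothetical problem tickets: (time reported, time a DBA first responded).
tickets = [
    (datetime(2006, 1, 9, 8, 15),  datetime(2006, 1, 9, 8, 30)),
    (datetime(2006, 1, 9, 13, 0),  datetime(2006, 1, 9, 14, 10)),
    (datetime(2006, 1, 10, 9, 45), datetime(2006, 1, 10, 9, 50)),
]

def mean_time_to_respond(tickets):
    """Average elapsed time between the report and the first DBA response."""
    total = sum((ack - reported for reported, ack in tickets), timedelta())
    return total / len(tickets)

print(mean_time_to_respond(tickets))  # 0:30:00 for the sample data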

        

What is Service Level Management?

According to Sturm, Morris and Jander in Foundations of Service Level Management, service level management (SLM) is “the disciplined, proactive methodology and procedures used to ensure that adequate levels of service are delivered to all IT users in accordance with business priorities and at acceptable cost.” So, in order to effectively manage service levels, the business needs to prioritize applications (and individual transactions) and identify the amount of time, effort, and capital that can be expended delivering service for those applications.

For a service level agreement (SLA) to be successful, all of the parties involved must agree upon stated objectives for availability and performance. The end users must be satisfied with the performance of their applications, and the DBAs and technicians must be content with their ability to manage the system to the objectives. Compromise is essential to reach a useful SLA.

In practice, though, many organizations do not institutionalize SLM. Oh, when new applications are delivered there may be vague requirements and promises of sub-second response time, but the prioritization and costing required to assure such service levels are rarely tackled unless the IT function is outsourced. Internal IT organizations are loath to sign service level agreements because any SLA worth pursuing will be difficult to achieve. Furthermore, once an SLA has been created, the business finds it easier to hand that SLA to an outsourcer: with the difficult negotiation of service levels complete, the business can seek out lower-cost providers than the internal IT group.

Any fair DBA evaluation metric must be developed with an understanding of the environment in which the DBA works. This requires in-depth analysis of things like:

      • number of applications that must be supported,
      • number of databases and size of those databases,
      • number of database servers,
      • use of the databases (OLTP, OLAP, web-enabled, data mining, ad hoc, etc.),
      • number of different DBMSs (that is, Oracle, DB2, Informix, etc.),
      • number of OS platforms to be supported (Windows 2000, UNIX, OS/390, AS/400, etc.),
      • special consideration for ERP applications due to their non-standard DBMS usage,
      • number of users and number of concurrent users,
      • type of Service Level Agreements in effect or planned,
      • availability required (24/7 or something less),
      • the impact of database downtime on the business ($$$),
      • performance requirements (subsecond or longer - gets back to the SLA issue),
      • type of applications (mission critical vs. non-mission critical),
      • frequency of change requests.

This is probably an incomplete list, but it accurately represents the complexity and challenges faced by DBAs on a daily basis.

Of course, the best way to measure DBA effectiveness is to judge the quality of all the tasks that DBAs perform. But many aspects of such measurement will be subjective. Keep in mind that a DBA performs many tasks to ensure that the organization’s data and databases are useful, usable, available, and correct. These tasks include data modeling, logical and physical database design, database change management, performance monitoring and tuning, assuring availability, security and authorization, backup and recovery, ensuring data integrity, and, really, anything that interfaces with the company’s databases. Developing a consistent, non-subjective metric for measuring these tasks is challenging.

You'll probably need to come up with a complex formula of all of the above — and more — to do the job correctly. That is probably why I've never seen a fair, non-subjective, metric-based measurement program put together for DBAs. If you (or anyone else reading this) implement such a program, I'd love to hear the details of the program — and how it is accepted by the DBA group.
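For what it is worth, such a composite might be sketched along the following lines. The factor names, weights, and sample scores are purely illustrative assumptions that would have to be negotiated against your own SLAs and environment; they are not a recommendation.

# Hypothetical weighted composite score for DBA effectiveness.
# Every factor is normalized to a 0-100 scale; the factor names, weights,
# and sample scores are illustrative assumptions, not recommendations.
weights = {
    "sla_availability":  0.30,  # measured availability vs. the SLA target
    "sla_response":      0.25,  # statements meeting the response-time SLA
    "incident_response": 0.20,  # timeliness of response to reported problems
    "change_success":    0.15,  # change requests implemented without incident
    "recoverability":    0.10,  # successful backup and recovery tests
}

scores = {
    "sla_availability":  99.8,
    "sla_response":      97.5,
    "incident_response": 92.0,
    "change_success":    95.0,
    "recoverability":   100.0,
}

composite = sum(weights[k] * scores[k] for k in weights)
print(f"composite effectiveness score: {composite:.1f} / 100")  # 97.0 for the sample data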

--

Craig Mullins is an independent consultant and president of Mullins Consulting, Inc. Craig has extensive experience in the field of database management having worked as an application developer, a DBA, and an instructor with multiple database management systems including DB2, Sybase, and SQL Server. Craig is also the author of the DB2 Developer’s Guide, the industry-leading book on DB2 for z/OS, and Database Administration: Practices and Procedures, the industry’s only book on heterogeneous DBA procedures. You can contact Craig via his web site at http://www.craigsmullins.com.

