Making sure that database applications are running optimally is one of the biggest ongoing challenges faced by DBAs. DBAs are constantly monitoring and managing performance while also reacting to end user complaints. When users have to wait longer than usual for their applications to respond, it is understandable that they will reach out to the DBAs for assistance. But what causes a formerly fast application to deliver sub-par performance? If there were an easy answer to that question, many DBAs would be out of work. Instead of attempting to answer that question in depth, let’s examine the basics of performance management and optimization that apply to all database applications.
First of all, we need a definition for database performance. In a nutshell, database performance can be defined as the optimization of resource use to increase throughput and minimize contention, enabling the largest possible workload to be processed.
The Three Things
With this definition in mind, it is important for DB2 performance tuners to understand the desired effect of their tuning efforts. All database tuning must impact at least one of three things:
- I/O – the fewer disk reads required to achieve a task the better performance will be
- CPU – the fewer CPU cycles required to achieve a task the better performance will be
- Concurrency – the more tasks that can be accomplished in the same time window the better throughput will be
I/O and CPU are computing resources to be optimized. The transactions and SQL of your applications constitute the workload that needs to be processed. Throughput defines the capacity of the system to process workload. When the demand (workload) for a particular resource is high, contention can result. Contention is the condition in which two or more components of the workload are attempting to use a single resource in a conflicting way (for example, dual updates to the same piece of data).
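The “dual updates to the same piece of data” scenario can be sketched in SQL. This is a minimal illustration using a hypothetical ACCOUNTS table; the exact waiting behavior depends on the isolation level and the LOCKTIMEOUT setting:

```sql
-- Session 1 updates a row and holds its row lock until COMMIT:
UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1;
-- (no COMMIT yet)

-- Session 2, run concurrently, attempts a conflicting update on the
-- same row and must wait on the lock (or time out, per LOCKTIMEOUT):
UPDATE accounts SET balance = balance + 50 WHERE acct_id = 1;

-- Only when Session 1 issues COMMIT (or ROLLBACK) can Session 2 proceed:
COMMIT;
```

The longer Session 1 holds its locks, the more other work queues up behind it, which is why contention directly limits throughput.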
Next, we need to understand the various tunable components of applications that rely on database systems.
The Tunable Components in a DB2 World
Every database application, at its core, requires three components in order to operate: the system, the database, and the application. To deliver performance, the DBA must be able to monitor and tune each of these components. This is easier said than done.
The system consists of the system software and hardware required for the application to provide service. This includes the computer itself, its disk subsystems, network connections, and all the peripherals. From a software perspective the system includes the operating system, the file system, the DBMS itself, networking protocols, and any related middleware such as transaction processors or message queues.
To deliver system performance the DBA must have the resources to monitor, manage and optimize the performance of these disparate pieces of hardware and software. Some of the tasks required for system tuning include the proper allocation and management of memory structures (e.g. buffer pools, program cache area, etc.), lock management, storage management, integration of the DBMS with other system software, proper usage of database logs, and coordination of the operating system resources used by the DBMS. Additionally, the DBA must control the installation, configuration, and migration of the DBMS software. If the system is not performing properly, everything that uses the system will perform poorly. In other words, a poorly performing system impacts every database application.
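As one example of memory-structure tuning, buffer pool effectiveness can be checked from SQL. This is a sketch, assuming DB2 for LUW with the MON_GET_BUFFERPOOL table function available; the ratio of physical to logical reads indicates how often DB2 had to go to disk:

```sql
-- Logical vs. physical data page reads per buffer pool
-- (NULL = all pools, -2 = all database members):
SELECT bp_name,
       pool_data_l_reads,
       pool_data_p_reads
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS t;

-- If a pool is undersized, its capacity can be increased, for example:
ALTER BUFFERPOOL ibmdefaultbp SIZE 50000;
```

A high proportion of physical reads suggests the pool is too small for its workload, though the right size always depends on available system memory.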
The second component is the database. The database stores the data that is used by the application. When the application needs to access data, it does so through DB2 (the DBMS) to the database of choice (for example, DBCUST01). Of course, the database is a high-level collection of other database objects (e.g. table spaces, tables, indexes) that actually contain the data to be accessed. If the database objects are not optimally organized or stored, the data contained therein will be difficult or slow to access, and the performance of every application that requires this data will be negatively impacted.
Over time, as data is modified and updated, the DBMS may have to move the data around within the database. Such activity causes the data to become fragmented and inefficiently ordered. The longer the database remains online and the more changes made to the data, the more inefficient database access can become. To overcome disorganized and fragmented databases, the DBA can run a reorganization to refresh the data and make the database efficient once again. But the key to successful reorganization is to reorganize only when the database requires it. Some companies instead over-reorganize by scheduling regular database reorganization jobs to be run whether or not the database is fragmented. This wastes valuable CPU cycles.
The DB2 runstats command can be used to collect statistics on your database objects. These statistics are used by the DB2 optimizer to formulate efficient access paths for SQL statements (more on this in a moment) and provide information that is useful to help determine when a reorg is required. DBAs can run the reorgchk command to gather or access statistics on the database to determine if tables or indexes, or both, need to be reorganized.
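The statistics-and-reorganization workflow described above can be sketched from the DB2 command line. The table name MYSCHEMA.CUSTOMER is a hypothetical example:

```shell
# Collect current statistics for the optimizer:
db2 "RUNSTATS ON TABLE myschema.customer WITH DISTRIBUTION AND DETAILED INDEXES ALL"

# Check whether the table or its indexes need reorganizing
# (asterisks in the REORG column flag candidates):
db2 "REORGCHK CURRENT STATISTICS ON TABLE myschema.customer"

# Reorganize only if reorgchk indicates it is needed:
db2 "REORG TABLE myschema.customer"

# Refresh statistics afterward so the optimizer sees the new layout:
db2 "RUNSTATS ON TABLE myschema.customer AND INDEXES ALL"
```

Running runstats after the reorg matters: without fresh statistics, the optimizer continues to plan against the old, fragmented picture of the data.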
Reorganization is only one of many database performance tasks performed by the DBA. Others include data set placement, partitioning for parallel access, managing free space, and assuring optimal compression.
The third, and final, component of database performance is the application itself. Indeed, as much as 80% of all database performance problems are caused by inefficient application code. The application code consists of two parts: the SQL code and the host language code in which the SQL is embedded.
SQL is simple to learn and easy to start using. But SQL tuning and optimization is an art that takes years to master. Every DBMS provides a method of inspecting the actual access paths that will be used to satisfy SQL requests. For DB2, that method is the explain tool. Using explain, DB2 externalizes information about the access paths used to satisfy SQL statements, including the methods and indexes used to run the SQL.
There are three types of explain available in DB2 for LUW: dynexpln (for explaining dynamic SQL, though this command has been deprecated in favor of db2expln), db2exfmt (for showing explain information in text format), and DB2 Visual Explain (for a graphical explain).
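A typical db2exfmt workflow looks like the following sketch. It assumes a database named SAMPLE, a hypothetical query, and that the explain tables have already been created (the DDL ships with DB2 as EXPLAIN.DDL in sqllib/misc):

```shell
# Populate the explain tables for a statement:
db2 connect to sample
db2 "EXPLAIN PLAN FOR SELECT * FROM myschema.customer WHERE cust_id = 100"

# Format the most recent explain output into a readable text file:
db2exfmt -d sample -1 -o explain.txt
```

The resulting file shows the access plan graph, estimated costs, and the indexes (if any) chosen by the optimizer.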
The DBA must be an expert at understanding the different types of access paths, as well as which ones are best in which situation. The DBA must be able to interpret the output of the access path explanation, since it can be somewhat cryptic to a novice.
Each access path choice can make sense for one type of SQL statement, but cause performance problems for another. The DBA must be adept at identifying and tuning SQL to arrive at efficient access paths.
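A classic example of this kind of tuning is rewriting a predicate so the optimizer can use an index. The ORDERS table and its index on ORDER_DATE here are hypothetical:

```sql
-- Wrapping the indexed column in a function typically prevents
-- index access and forces a scan:
SELECT order_id FROM orders
WHERE YEAR(order_date) = 2023;

-- Rewritten as a range predicate on the bare column, the same
-- query becomes eligible to use the index on ORDER_DATE:
SELECT order_id FROM orders
WHERE order_date BETWEEN '2023-01-01' AND '2023-12-31';
```

Running explain before and after such a rewrite confirms whether the access path actually changed.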
Host language code refers to the application programs written in C, Java, or the programming language du jour. It is quite possible to have finely tuned SQL embedded inside of inefficient host language code. And, of course, that would cause a performance problem.
The Bottom Line
DBAs must understand all of these aspects of database performance management. Furthermore, the DBA must be able to identify problems when they occur. Once identified, the problem must be analyzed to determine its cause. And only then can a proper tuning strategy be deployed to rectify the problem.
Modern database performance management tools that identify and alert DBAs to issues are indispensable for being able to achieve optimal performance.