Optimize MySQL Transaction Performance: Using Small Transactions

By: Vikram Vaswani

Because a database that supports transactions must work much harder than a nontransactional one to keep different user sessions isolated from each other, it is natural for this extra effort to show up in performance. Complying with the other ACID rules, in particular preserving database integrity across a system failure by writing to a transaction log, adds further overhead. MySQL is no exception to this rule: other things being equal, nontransactional MyISAM tables are much faster than the transactional InnoDB and BDB table types.

That said, if you have no choice but to use a transactional table type, you can still do a few things to ensure that your transactions don’t add undue overhead to the system.

Clichéd though it might be, the KISS (Keep It Simple, Stupid!) principle is particularly applicable in the complex world of transactions. MySQL's InnoDB tables use row-level locking to prevent simultaneous transactions from editing the same record and possibly corrupting it. A lock safeguards the data, but it has the disadvantage of forcing other transactions to wait until the transaction holding the lock has completed its work. As long as each transaction is small, this wait time is barely noticeable. In a large database with many complex transactions, however, the time transactions spend waiting for each other to release locks can significantly degrade performance.
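To see this blocking behavior concretely, here is a minimal Python sketch using the standard `sqlite3` module as a stand-in for a MySQL connection. (SQLite locks the whole database file for writing rather than individual rows, so this illustrates lock waiting in general, not InnoDB's row granularity; the `accounts` table and file name are invented for the demo.)

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer 1 opens a transaction and holds its write lock.
conn1 = sqlite3.connect(path, isolation_level=None)  # autocommit; we issue BEGIN ourselves
conn1.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn1.execute("INSERT INTO accounts VALUES (1, 100)")
conn1.execute("BEGIN IMMEDIATE")  # acquire the write lock up front
conn1.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")

# Writer 2 must now wait; with a short timeout it gives up instead of blocking forever.
conn2 = sqlite3.connect(path, isolation_level=None, timeout=0.1)
try:
    conn2.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

conn1.execute("COMMIT")           # releasing the lock lets other writers proceed
conn2.execute("BEGIN IMMEDIATE")  # now succeeds
conn2.execute("COMMIT")
print(blocked)  # True
```

The longer `conn1` holds its transaction open, the longer every other writer sits in that `except` branch; keeping transactions short keeps the queue moving.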

For this reason, it is generally considered a good idea to keep the size of your transactions small and to have them make their changes quickly and exit so that other transactions queued behind them do not get unduly delayed. At the application level, two common strategies exist for accomplishing this.

  • Ensure that all user input required for the transaction is available before issuing a START TRANSACTION statement. Novice application designers often begin a transaction before the complete set of values it needs is available; other transactions started at the same time must then wait while the user enters data, the application processes it, more data is requested, and so on. In a single-user environment these delays matter little, because no other transactions are contending for the database. In a multiuser scenario, however, the delay caused by a single transaction can ripple through every other transaction queued in the system, resulting in severe performance degradation.
  • Break large transactions into smaller subtransactions and execute them independently. Each subtransaction then completes quickly and releases its locks, instead of tying up system resources for the duration of one long-running transaction.
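Both strategies above can be sketched in a few lines of Python, again using the standard `sqlite3` module as a stand-in for a MySQL connection (the `orders` table and the `place_order` and `bulk_load` helpers are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; explicit BEGIN/COMMIT
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)")

def place_order(conn, item, qty):
    # Strategy 1: every value is already in hand before the transaction starts,
    # so the transaction itself is brief and releases its locks quickly.
    conn.execute("BEGIN")
    conn.execute("INSERT INTO orders (item, qty) VALUES (?, ?)", (item, qty))
    conn.execute("COMMIT")

def bulk_load(conn, rows, batch_size=100):
    # Strategy 2: split one large transaction into small, independently
    # committed batches, so no single transaction holds locks for long.
    for start in range(0, len(rows), batch_size):
        conn.execute("BEGIN")
        conn.executemany("INSERT INTO orders (item, qty) VALUES (?, ?)",
                         rows[start:start + batch_size])
        conn.execute("COMMIT")  # locks released after each small batch

place_order(conn, "widget", 3)
bulk_load(conn, [("bolt", i) for i in range(250)])
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 251
```

One trade-off worth noting: with batched commits, a failure partway through leaves the earlier batches committed, so this approach suits work where each batch is independently valid rather than an all-or-nothing change.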

Another way to keep transaction overhead down is to select an appropriate isolation level, which is covered in Part 2 of this article.
