Using VSAN to Maximize Database Performance

By Staff Contributor on September 28, 2015


Earlier this year I had the opportunity to speak at VMworld in San Francisco. The session title was "Using Virtual SAN to Maximize Database Performance," and in that session I spent time focusing on performance as well as recovery. Why would recovery be part of the talk? Because:


Every database administrator knows that they get paid for performance, but they keep their jobs with recovery. Failing to make a query run faster won't get you fired nearly as fast as failing to restore a database.

In many cases, a DBA will find that, due to the design of the application, the code, or the database, query performance is not something they can influence to any meaningful degree. That's when you start hearing the phrase "let's throw hardware at the problem." In many cases it can be easier, and more cost-effective, to buy a server with more hardware capacity than to spend the time it would take a team of developers to refactor the application code and/or design.

VSAN can help eliminate storage bottlenecks

For database administrators and developers, a traditional pain point in query performance involves the storage devices used. Since database systems need to read and write data from disk, the general idea is that faster disks should equate to faster performance. While not always true, it is worth the effort to take steps to minimize the chances that storage is the performance bottleneck.

VSAN is meant to offer performance and flexibility for enterprise customers who need to solve issues facing their current storage architecture. You can read more about what VSAN is and all that it offers here. I’m not going to explain the inner workings of VSAN technology in this post; instead, I want to focus on the important things you will want to consider when using VSAN for your production SQL Server® workloads.

Imagine a world where you are able to improve performance by 40% or more without upgrading hardware, rewriting queries, or changing even one line of code.

VSAN makes this possible through software-defined storage applied as policies that can modify performance dynamically.

VSAN storage policies

VSAN storage policies are the new secret sauce in the world of virtualization storage. By using policy-based management to define how storage is configured inside the datastore, VSAN gives users the ability to adjust settings such as the number of copies of your data (the number of failures to tolerate) or the number of disk stripes per object.

Here’s what the VSAN storage policy screen looks like:

[Screenshot: the VSAN storage policy screen]

Here’s why this one screen matters so much:

Performance. I can configure different policies and measure workload performance in several test scenarios. For example, using SQL Server Distributed Replay, a DBA can easily run tests against a variety of storage options in order to decide which set of configuration settings is ideal, and then use that information to set the policy for the disk(s) in use by SQL Server.
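As a rough illustration, here is one way a DBA could measure each test scenario. This is only a sketch: it clears and then reads the instance-wide wait statistics around a replay pass, and the wait types listed are simply common storage-related ones; the replay itself and the policy changes happen outside this script.

```sql
-- Run before starting a replay pass: clears the instance-wide wait statistics.
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

-- Run after the replay pass completes: top storage-related waits for this run.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       1.0 * wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('PAGEIOLATCH_SH', 'PAGEIOLATCH_EX', 'WRITELOG', 'IO_COMPLETION')
ORDER BY wait_time_ms DESC;
```

Repeating that capture for each storage policy under test gives you a simple, apples-to-apples way to see which configuration keeps I/O waits lowest.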

Recovery. Some of your workloads may require more fault tolerance than others. Your Tier 1 finance system, for example, may require you to keep three copies of your data, while some of your test or development servers may get by with less. You can work with your server administrator as well as your business users to make certain that your data is recoverable according to their specific needs. This means your environment no longer needs to waste space on copies of data that are not necessary.

Summary

As a result of our VMworld session, I was fortunate to have a lab server that allowed me to run some benchmark tests against a SQL Server workload. Kicking the tires on VSAN technology was a great way for me to understand the many benefits it provides. By the time my research and testing were done, I had come up with a list of three main takeaways everyone should think about when deploying a VSAN.

Examine the VSAN configuration. You must know what your current maximum throughputs are; otherwise, you won’t know whether storage is truly the bottleneck or whether it is something else (for example, a poorly configured VM or an over-allocated host). You can examine the formulas used to determine capacity and performance for various VSAN configurations here. Those formulas are what you will need in order to know whether you are getting the expected IOPS from your datastore.
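To make the capacity half of that concrete, here is a back-of-the-napkin sketch, not the official sizing formula. It assumes mirror-based protection, where an object consumes roughly (failures to tolerate + 1) times its size in raw capacity; the numbers are made up, and real sizing also needs to reserve slack space for rebuilds, which the linked formulas account for.

```sql
-- Back-of-the-napkin capacity math, assuming mirrored (RAID-1 style) protection:
-- each object consumes roughly (FTT + 1) copies of raw capacity. Numbers are illustrative.
DECLARE @raw_capacity_gb decimal(12,2) = 16000;  -- total raw capacity across the disk groups
DECLARE @ftt             int           = 1;      -- number of failures to tolerate

SELECT @raw_capacity_gb / (@ftt + 1) AS approx_usable_capacity_gb;  -- ~8000 GB with FTT = 1
```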

Know your workloads. You can use tools like SQL Server Dynamic Management Objects (DMOs) to get a sense of which of your current workloads would benefit from different VSAN disk policies. The sys.dm_io_virtual_file_stats function is one you can use to locate read/write-intensive workloads, thereby allowing you to come up with recommendations for VSAN policies for your enterprise.
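Here is a minimal example of the kind of query you could start with; it ranks databases by cumulative read/write volume and I/O stalls since the last restart. How you translate the results into policy recommendations is up to you.

```sql
-- Rank databases by cumulative I/O since the last restart to spot the read- or
-- write-heavy workloads that might warrant their own VSAN policy. The counters
-- are cumulative, so interpret them relative to instance uptime.
SELECT DB_NAME(vfs.database_id)                   AS database_name,
       SUM(vfs.num_of_bytes_read) / 1048576.0     AS mb_read,
       SUM(vfs.num_of_bytes_written) / 1048576.0  AS mb_written,
       SUM(vfs.io_stall_read_ms)                  AS read_stall_ms,
       SUM(vfs.io_stall_write_ms)                 AS write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id
ORDER BY SUM(vfs.num_of_bytes_read) + SUM(vfs.num_of_bytes_written) DESC;
```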

Know your metrics. You can use performance metrics to better understand whether your workload is hitting a VSAN capacity limit (see above) or a VM or host capacity limit, or whether it is suffering from internal contention (such as locking and blocking). That way you can make adjustments to your configurations before taking on any necessary code changes.
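One hedged way to start separating "storage is the problem" from "the workload is fighting itself" is to bucket the same cumulative wait statistics into storage waits versus lock waits versus everything else. The bucket names and wait-type patterns below are my own shorthand, not an official taxonomy, and the "other" bucket will include plenty of benign system waits.

```sql
-- Bucket cumulative waits into storage vs. lock contention vs. everything else.
-- Focus on the storage vs. lock comparison; 'other' includes benign idle waits.
SELECT CASE
         WHEN wait_type LIKE 'PAGEIOLATCH%' OR wait_type IN ('WRITELOG', 'IO_COMPLETION')
           THEN 'storage'
         WHEN wait_type LIKE 'LCK_M_%'
           THEN 'lock contention'
         ELSE 'other'
       END                      AS wait_bucket,
       SUM(wait_time_ms)        AS total_wait_ms,
       SUM(waiting_tasks_count) AS total_waits
FROM sys.dm_os_wait_stats
GROUP BY CASE
           WHEN wait_type LIKE 'PAGEIOLATCH%' OR wait_type IN ('WRITELOG', 'IO_COMPLETION')
             THEN 'storage'
           WHEN wait_type LIKE 'LCK_M_%'
             THEN 'lock contention'
           ELSE 'other'
         END
ORDER BY total_wait_ms DESC;
```

If the lock-contention bucket dwarfs the storage bucket, no storage policy change is going to save you; that is when the conversation turns back to the code.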

What I like most about VSAN technology is that it gets us closer to a world of software-defined storage, where we can control performance through the use of automated policies. Add some predictive analytics, and it won’t be long before your database servers are able to heal themselves.

And when the time comes, I just want everyone to know that I welcome our new robot overlords.
