Discovery in the Virtual Environment

on August 3, 2015


You Don’t Know What You Don’t Know

Last week, I provided an explanation of virtualization that even your grandmother could relate to and understand. I also listed a set of skills that were imperative to any virtualization admin’s long-term career success: discovery, alerting, remediation, and troubleshooting. This week I’ll cover what we mean when we say “discovery” in the virtual environment.

Discovery: Finding out what’s going on

This simple principle should guide you in understanding the health and risks of your virtualized assets, such as VMs, data stores — or virtual hard disks — and virtual networks. Discovery begins by establishing a point-in-time baseline for the health and risks of your environment. Once you understand what’s in your environment, each component’s health and risks and their connection to the applications and services you support, you can start managing the changes that occur in your environment. This is especially true for systems that can break, or situations that can lead to slow applications, which in turn leads to poor end-user experience.

Why is discovery important?

Discovery serves three key functions in a virtualized environment. You need to:

  • Identify all of your virtual assets and resources across all of your data centers and show their connected context;
  • Provide a point-in-time baseline for the performance and risk of your data center that can be logged for trend lines and analyzed for anomalies and optimization opportunities; and
  • Populate the data used to calculate the data center efficacy equation: how efficient and effective is your IT implementation versus your IT spend.

These three functions map to three use cases. The first function provides your first measure of control against the dreaded shadow IT. Shadow IT describes IT systems and solutions built and used inside an organization without the IT department's approval, rigor, and due diligence. According to Gartner's Hank Marquis, 28% of IT spend occurs outside of the IT department. A separate report from IDC projects that number to increase by 5% over the next five years.

The second function is all about uncovering the health and risks in your virtual environment. When combined with the first function, it is the perfect counter to resource sprawl. It also helps with capacity planning and right-sizing your resources. This improves your data center efficiency (ROI for you business-savvy techies), and eliminates potential security holes. Essentially, your virtual data center can support more VMs and applications while delivering acceptable Quality-of-Service (QoS). When you add it up, your IT department will save money and mitigate potential security risks.

The third function bridges IT and business operations with utilization data on operational efficacy, and allows dollars to be assigned around efficiency and effectiveness. Operational efficacy is defined as how effectively and efficiently your data center runs relative to your architectural design, resource allocation, and application landscape.

Discover the process and pain

The discovery process is straightforward. You'll need a virtualization management tool that maintains a record of the virtual data center environment, including an enumerated list of VMs and their data stores along with the physical resources they depend on. (These physical resources include the host servers and the storage constructs.) Next, you need tools that monitor and log virtual machine performance so you can establish baselines and build trend lines for the data center environment. Schedule routine scans to keep the record up-to-date and to add data points to your trend line. This skill applies at any granularity, from a single service through an entire application stack and across the entire data center environment.
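In scripting terms, a scheduled discovery pass boils down to three steps: enumerate the assets and their connected context, stamp the result with a timestamp, and append it to a log you can later trend against. Here is a minimal, tool-agnostic sketch; `enumerate_vms` returns hypothetical stand-in records, not output from any real hypervisor API:

```python
import json
import time

def enumerate_vms():
    # Stand-in for a real inventory call to your management tool's API.
    # Each record ties a VM to its data store and host -- the "connected context".
    return [
        {"vm": "web-01", "datastore": "ds-ssd-1", "host": "esx-a", "cpu_pct": 42.0},
        {"vm": "db-01",  "datastore": "ds-ssd-2", "host": "esx-b", "cpu_pct": 71.5},
    ]

def discovery_pass(log_path="discovery.log"):
    # One point-in-time snapshot: the inventory plus a timestamp,
    # appended to a log so repeated passes form a trend line.
    snapshot = {"taken_at": time.time(), "assets": enumerate_vms()}
    with open(log_path, "a") as log:
        log.write(json.dumps(snapshot) + "\n")  # append-only history
    return snapshot

snap = discovery_pass()
print(len(snap["assets"]))  # number of assets found this pass
```

A scheduler (cron, or your management tool's own job engine) would invoke `discovery_pass` on whatever cadence your maintenance windows allow, which is exactly the trade-off discussed below.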

Trade-offs to consider with discovery include frequency, time, and resources. Discovery consumes resources and time, since it has to scan your entire data center to get up-to-date information on your infrastructure, especially if you have a large ecosystem spread across multiple providers. You may then have to compile the results manually to see which resources have been added or removed. In addition, the discovery cadence should factor in your IT ops schedule, including maintenance windows, integration and build-out, and day-to-day operations, to minimize disruption to your production environment.

This effort only gets you the working data set for your virtual assets. Analysis is still required to identify sprawled resources or shadow IT assets. This might materialize as a simple difference of two lists of VM assets, but it can grow far more complex, especially given the dynamic nature of VM resource properties. The performance data gives you more to sift through in support of your governance efforts, and also provides the basis for troubleshooting and remediating any application bottlenecks that arise. Discovery is the first step to becoming a master of virtualization. It takes the guesswork out of your day-to-day. Plus, only the highest levels of management in your organization can claim plausible deniability as an excuse.
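The "difference of two lists" mentioned above can be as simple as a set operation: compare the inventory IT has approved against what discovery actually found. The VM names below are hypothetical:

```python
# Hypothetical inventories: what IT sanctioned vs. what discovery found.
approved = {"web-01", "db-01", "cache-01"}
discovered = {"web-01", "db-01", "cache-01", "test-old", "rogue-01"}

# Running in the environment but never sanctioned: shadow IT or sprawl.
shadow_or_sprawl = discovered - approved

# Sanctioned but not found: decommissioned, failed, or misreported.
missing = approved - discovered

print(sorted(shadow_or_sprawl))  # ['rogue-01', 'test-old']
print(sorted(missing))           # []
```

The real complexity comes from matching the lists in the first place: VM names, IDs, and resource properties change over time, so production tooling typically keys on stable identifiers rather than display names.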

Next week I’ll walk us through alerting, the next critical skill set any virtualization admin must master.

___________________________________________________________________________

You can also download my latest eBook that walks you through each of the 4 essential skills any virtualization admin will need to master.

Download a free copy of 4 Skills to Master Your Virtualization Universe eBook here.
