You have insight into your applications, data infrastructure, and IT data center environment; now, how do you leverage that insight to make cloud decisions? This tip is the second in a two-part mini-series (read part I here) on informed decision making to avoid flying blind in, or with, clouds.
Depending on the size, type, and scope of your environment, you might have an IT Service Management (ITSM) system, a Configuration Management Database (CMDB), or a Performance Management Database (PMDB). You might also have other data infrastructure tools, Data Center Infrastructure Management (DCIM) tools, and digital asset management (IT inventory) tools.
These resources, along with the steps and considerations in part one of this mini-series, provide the tools, insight, and awareness to make effective data infrastructure decisions, including in support of migrations to, from, or between cloud and legacy environments.
Are you looking to use the cloud for data protection, including archiving, backup/restore, business continuity (BC), disaster recovery (DR), and high availability (HA)? Leveraging the insight into your applications and their requirements, look at different cloud offerings and determine whether you simply need a place to drop off documents, folders, objects, files, and other items, such as a sync-and-share service.
Alternatively, do you need a cloud target destination that your existing backup/restore, archive, replication, or other software and storage appliances can send data to? Where will your applications reside? Will they need data restored from the cloud, or will you restore your applications to where the data is (e.g., in the cloud)? Keep in mind that there are different cloud service and storage offerings besides object and bulk storage. Some are online, some are near-line, with different levels of availability and resiliency, and besides object access, some offer file access.
Data protection, BC, DR, Backup, HA
Understand the various cloud storage offerings, which range from bulk and object storage with different attributes, including online or near-line access, as well as standard and reduced resiliency. There are also file-based cloud storage offerings, among others, so knowing your needs helps you explore the different options to meet them. Speaking of data protection, if you are going to protect your data to the cloud, and assuming you will also back up or replicate a copy of your applications (and configuration settings) to the same place, where will you restore?
Will you use the new cloud in an old way? That is, as a destination to send backups or replicas, yet restore back to somewhere else? Alternatively, will you use the cloud in a new way, recovering your applications (and settings) in the cloud along with the data that is already there? Perhaps go a step further and set up your cloud to be a replica HA site for what you have on premises, or go all in and set up resiliency across geographically dispersed cloud sites.
Another option is to move your applications and run them in the cloud. How do you know what size and type of compute instance or container to use? Simple: look at your inventory and assessment insight. With an awareness of your needs and of cloud service provider offerings, decide what is applicable. The same applies to using different types and tiers of storage and networking options. Put another way, if you know what you need and understand the cloud offerings, leverage your tradecraft experience and skills to be an informed shopper making smart decisions, just as you would for traditional server, storage, and I/O networking hardware and software.
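The right-sizing step above can be sketched as a simple lookup: take each server's measured peak usage from your inventory and assessment data, add headroom, and pick the smallest instance that fits. The instance catalog and workload numbers below are illustrative assumptions, not any specific provider's offerings.

```python
# Hypothetical sketch: map measured server workloads to cloud instance types.
# Candidate instance types: (name, vCPUs, memory GB), smallest first.
INSTANCE_TYPES = [
    ("small", 2, 4),
    ("medium", 4, 16),
    ("large", 8, 32),
    ("xlarge", 16, 64),
]

def right_size(peak_vcpus, peak_mem_gb, headroom=1.25):
    """Pick the smallest instance covering peak usage plus 25% headroom."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gb * headroom
    for name, vcpus, mem_gb in INSTANCE_TYPES:
        if vcpus >= need_cpu and mem_gb >= need_mem:
            return name
    return "no single instance fits; consider scaling out"

# Example: a server peaking at 3 vCPUs and 10 GB RAM needs 3.75 vCPUs
# and 12.5 GB with headroom, so "medium" (4 vCPU / 16 GB) is the fit.
print(right_size(3, 10))  # -> medium
```

The same pattern extends to storage tiers and network options: express what you measured, add a policy (headroom, growth), and match against the provider's catalog rather than guessing.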
A question that comes up about here is: how do you get all of your data into the cloud? You can use fast networks, or you can make a copy on disk, tape, or another medium, ship it, restore it, and then resync with your active data. Most service providers have offerings that vary in size from a single HDD to large petabyte-sized appliances to assist with data (and application) movement.
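A quick back-of-the-envelope calculation shows why shipping media is sometimes faster than the network. The link efficiency figure below is an assumption (real-world throughput on a WAN link is well below the rated line speed); substitute your own measured numbers.

```python
# Back-of-the-envelope: how long to move a data set over the network?
def transfer_days(data_tb, link_gbps, efficiency=0.7):
    """Days to move data_tb terabytes over a link_gbps link,
    assuming only `efficiency` of the raw bandwidth is usable."""
    bits = data_tb * 1e12 * 8                     # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400                        # seconds -> days

# 100 TB over a 1 Gbps link at 70% efficiency is roughly 13 days,
# which is when a shipped disk or appliance starts to look attractive.
print(round(transfer_days(100, 1), 1))  # -> 13.2
```

Run the numbers before choosing: a 10 Gbps link changes the answer, as does whether you can tolerate a multi-week initial seed followed by an incremental resync.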
Now that you have migrated your applications and data, either in whole or in part, start monitoring your cloud resources. If you have existing ITSM or DCIM and data infrastructure monitoring, alert, notification, and reporting capabilities, tie the cloud resources and application service metrics into them.
Things to monitor include your applications' performance, availability, capacity and resource usage, and economic costs. This includes availability, security and privacy, and durability, including snapshots, point-in-time (PIT) copies, and versions, as well as other data protection tasks. Monitoring also includes notification of events and of limits or thresholds on resource usage, to trigger spending or service alerts.
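The threshold-and-alert idea above can be sketched in a few lines. The metric names and limits here are hypothetical examples; in practice, these would come from your ITSM/DCIM tooling or the provider's monitoring service.

```python
# Minimal sketch of threshold-based alerting on cloud resource metrics.
# Names and limits are illustrative assumptions, not a real tool's schema.
THRESHOLDS = {
    "storage_used_pct": 85,     # alert when capacity usage passes 85%
    "monthly_spend_usd": 5000,  # alert when spend passes the budget
    "p95_latency_ms": 200,      # alert when latency degrades
}

def check_metrics(metrics):
    """Return an alert string for each metric over its threshold."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"{name}={value} exceeds limit {limit}")
    return alerts

sample = {"storage_used_pct": 91, "monthly_spend_usd": 4200,
          "p95_latency_ms": 250}
for alert in check_metrics(sample):
    print(alert)
```

The point is less the code than the policy: decide the limits up front, so overspending or degraded service triggers a notification instead of a surprise on the invoice.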
Keep in mind that various cloud services and their offerings will have different costs as well as fees. Some have a low cost for storing data, including importing it into the cloud, but then charge for API calls (e.g., using or accessing data) as well as bandwidth fees. Good, reputable cloud service providers have their standard and extra fees documented on their websites; granted, you have to go look at, or for, them, but they are there.
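A simple model makes those extra fees concrete: total monthly cost is storage plus API requests plus egress bandwidth. All rates below are made-up placeholders; substitute the pricing published on your provider's website.

```python
# Rough monthly cost model: storage + API requests + egress bandwidth.
# Every rate here is an assumption for illustration, not real pricing.
def monthly_cost(stored_gb, get_requests, egress_gb,
                 storage_rate=0.023,    # $/GB-month stored (assumed)
                 request_rate=0.0004,   # $ per 1,000 GET requests (assumed)
                 egress_rate=0.09):     # $/GB transferred out (assumed)
    storage = stored_gb * storage_rate
    requests = (get_requests / 1000) * request_rate
    egress = egress_gb * egress_rate
    return round(storage + requests + egress, 2)

# 10 TB stored, 5 million GETs, 500 GB egress in a month:
# 230.00 storage + 2.00 requests + 45.00 egress
print(monthly_cost(10_000, 5_000_000, 500))  # -> 277.0
```

Note how egress dominates the request fees here; a "cheap to store" tier can still be expensive to read from, which matters for restore-heavy data protection use cases.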
On the other hand, now is a good opportunity to put in place dashboards, orchestration, and automation tools for configuration, as well as proactive monitoring, reporting, and management augmentation. Just as there are different tools for collecting application workload and resource usage information, many options exist for monitoring, insight, reporting, notification, and orchestration. Some of these tools come from the hypervisor, operating system, and cloud service providers, as well as from third parties, including SolarWinds among others.
What this all means
Clouds in some shape and form, public, private, or hybrid, are in your future. Leverage insight and awareness to make informed decisions, stay safe, and avoid getting lost in a fog or cloud bank. Even with instruments and automation, those are only as good as the data they receive and the rules and policies defined, as well as whoever is flying the data infrastructure for safe, on-time, cost-effective information delivery.
Ok, nuff said, for now…
About the author
Greg Schulz is Founder and Sr. Consulting Analyst of independent IT advisory consultancy firm Server StorageIO and UnlimitedIO LLC (e.g. StorageIO®). He has worked in IT for an electrical utility, financial services, and transportation firms in roles ranging from business applications development to systems management, architecture, strategy, performance, and capacity planning. Mr. Schulz is the author of the Intel Recommended Reading List books “Cloud and Virtual Data Storage Networking” and “The Green and Virtual Data Center” via CRC Press and “Resilient Storage Networks” (Elsevier). Greg has a new book due out spring 2017 “Software Defined Data Infrastructure Essentials” (CRC) and is also a Microsoft MVP as well as VMware vSAN vExpert. Learn more at www.storageio.com and www.storageioblog.com. Follow on Twitter @StorageIO.