Four Ways Your Network Could Change in Four Years

While network administrators are used to keeping the lights on, the time is approaching for a major change in the way networks look and run. SDN is still in its infancy, but its value is indisputable, as are the initial attempts at implementation in the form of Cisco® ACI and VMware® NSX. With ARIN down to about 250 IPv4 address blocks, the move to IPv6 is imminent. And IoT is proving to be one of those technologies that will already have happened by the time many companies decide to draft a policy.

All of that and more—cloud, virtualization (not to mention the ability to vMotion® machines from inside a company to the cloud, which VMware unveiled at VMworld® this year), <fill in the blank> as a Service, etc.—promise (threaten?) to change networking at a fundamental level. This means the work of maintaining a network will change as much, if not more so.

I recently sat down with a good buddy of mine, SolarWinds CTO/CIO, Joel Dolisy, to talk about the top trends that are likely to change the way you will view and manage the network by 2020.

1. More context

Network administrators still spend a lot of time troubleshooting—doing it more or less in the dark, or based on a single “blind men and the elephant” view of the network.

What do I mean by that? Plenty of network engineers go about their daily work without any tools to gain insight beyond SSHing to each device and running commands. Which is to say "blind." Some have tools, but of the home-grown variety, which require far more care and feeding, contain quirky behaviors that most would label "suboptimal" (and that's being polite), and frequently can't keep up with new technologies, protocols, or needs of the business. And still other engineers have tools—or more accurately "a" tool. Maybe it was all they could justify; maybe it was all the company could afford; maybe nobody thought more was needed. Whatever the case, that one tool provides a slice of the status of the network, but is usually myopic in other areas. There is no one-size-fits-all tool.

That this is still the case bothers Dolisy, who believes data could be better used to free up administrators’ time.

“Applications hold a lot of data and a lot of context, it’s just that they don’t bubble it up to administrators like they could,” he said.

Dolisy sees a future for administrators that is steeped in automated intelligence that provides more context—where an emailed alert not only describes a present problem, but also includes details of similar past incidents and recent configuration changes that could be related.

Seeing a “point issue” in relation to the entire environment—from the WIC on a router to the SAN array (including the context of the affected applications), the virtual machines they are running on, and the host servers that host those VMs—can go a long way toward pinpointing both the nature and focus of an issue. The end goal is one where the system becomes better at detecting problems, and can mine a knowledge base to arm the administrator with enough information to fix problems much faster.
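As a rough sketch of what that kind of context enrichment could look like (all names here are hypothetical illustrations, not any vendor's actual API), an alert handler might pull similar past incidents and recent configuration changes for the affected device before the email goes out:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    device: str
    metric: str
    opened: datetime
    resolution: str

@dataclass
class ConfigChange:
    device: str
    summary: str
    applied: datetime

def enrich_alert(device, metric, now, incident_history, config_changes,
                 lookback_days=90, change_window_hours=24):
    """Attach similar past incidents and recent config changes to an alert."""
    similar = [i for i in incident_history
               if i.device == device and i.metric == metric
               and now - i.opened <= timedelta(days=lookback_days)]
    recent_changes = [c for c in config_changes
                      if c.device == device
                      and now - c.applied <= timedelta(hours=change_window_hours)]
    return {"device": device, "metric": metric,
            "similar_incidents": similar, "recent_changes": recent_changes}
```

An emailed alert built from this payload carries the "what happened last time" and "what changed recently" context Dolisy describes, instead of just a metric name and a timestamp.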

2. Predicting and correcting problems before they become issues for users

If network management becomes more automated and more able to tap into a historical knowledge base when problems arise, it is not out of the question that the same system could be used to predict problems before they occur.

Automated network management can take several forms. One is comprehensive automatic downstream suppression of events in a router → switch → server design. You don't want to alarm on the server or switch if it's the router that is actually down.
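The suppression logic itself can be quite simple. A minimal sketch, assuming the topology is modeled as a map from each device to its upstream parent (a simplification; real networks have redundant paths):

```python
def suppress_downstream(alerts, topology, down_devices):
    """Drop alerts for devices whose path to the root passes through a
    device that is already known to be down.

    topology maps each device name to its upstream parent (None = root).
    """
    def upstream_is_down(device):
        parent = topology.get(device)
        while parent is not None:
            if parent in down_devices:
                return True
            parent = topology.get(parent)
        return False

    return [device for device in alerts if not upstream_is_down(device)]
```

With a router → switch → server chain and the router down, only the router alert survives; the switch and server alerts are suppressed because their upstream path is broken.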

Another way network monitoring will become more predictive is through continuous monitoring, which creates a baseline of what is “normal” for that system. In this case, alerts aren’t necessarily set to a fixed threshold. Rather, they will trigger when the performance deviates by a specific percentage from the expected baseline for a particular duration.
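A rough illustration of that baseline-deviation idea (the window size, percentage, and duration below are arbitrary example values, and a nonzero baseline is assumed):

```python
from statistics import mean

def baseline_alert(samples, window=20, deviation_pct=30.0, duration=3):
    """Trigger when the most recent `duration` samples each deviate from
    the rolling-window baseline by more than `deviation_pct` percent."""
    if len(samples) < window + duration:
        return False  # not enough history to establish "normal" yet
    # Baseline = average of the window immediately before the recent samples
    baseline = mean(samples[-(window + duration):-duration])
    def deviates(value):
        return abs(value - baseline) / baseline * 100 > deviation_pct
    return all(deviates(v) for v in samples[-duration:])
```

A metric that briefly spikes for one sample stays quiet; only a sustained departure from the learned baseline fires the alert.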

Network monitoring tools are also evolving, allowing technicians to build alert triggers that evaluate data from multiple points in the "application stack." Similar to the OSI model, this view displays all the elements used to deliver a service in a stack, starting with disks and storage arrays at the bottom and moving up through SAN, virtualization, the OS installed on guest machines, and applications, with room for key services like databases, the Web layer, etc. So instead of a straight hardware alert, an alert may trigger only when network, SAN, application, and database performance metrics all indicate a true issue. At that point, the very occurrence of an alert provides the background a technician needs to know what to fix, and when.
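As a toy illustration of a stack-wide trigger (the layer names and thresholds here are made up for the example), the alert fires only when every layer breaches its limit, so a single noisy counter doesn't page anyone:

```python
def stack_alert(metrics, thresholds):
    """Fire only when every monitored layer of the application stack
    breaches its threshold; return the per-layer detail for the alert body."""
    breached = {layer: metrics[layer] > limit
                for layer, limit in thresholds.items()}
    return all(breached.values()), breached
```

The per-layer detail is worth returning alongside the verdict: it becomes the context in the alert email, showing the technician which layers agree that something is genuinely wrong.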

Finally, no monitoring and alerting system will be complete without the ability to add automated response and re-test actions. If the response to a "disk full" alert is to clear the temp files, why not have automation do that first, re-check disk space, and cut a ticket only AFTER that step has failed?
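A minimal Python sketch of that "fix first, ticket later" flow. The `clear_temp` and `open_ticket` callables are placeholders for whatever remediation and ticketing hooks a given environment actually has:

```python
import shutil

def handle_disk_full(path, free_bytes_needed, clear_temp, open_ticket):
    """Try the automated fix first; escalate to a human only if it fails."""
    clear_temp()  # e.g. delete aged files under the temp directory
    free = shutil.disk_usage(path).free  # re-check after remediation
    if free >= free_bytes_needed:
        return "resolved"
    open_ticket(f"disk on {path} still below {free_bytes_needed} bytes free")
    return "escalated"
```

The ticket is only cut on the escalation path, so routine "disk full" noise never reaches a human unless the automated cleanup genuinely failed.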

3. Network virtualization

After several years in the wild, network virtualization has turned a corner, and is likely to see more widespread adoption, according to Dolisy.

Between software-defined networking (SDN), network function virtualization (NFV), and virtual overlay technology like VMware NSX, customers are beginning proofs of concept in greater numbers.

The SDN market in Asia Pacific alone “will grow from $6.2 million in 2013 to over $1 billion by 2018,” according to analyst firm IDC.

“People are talking about it,” said Dolisy. “There’s definitely more noise around this than there used to be.”

According to Gartner, one reason for recent customer movement (at least on SDN) is that misconceptions about virtualization being only for large-scale networks are being overturned (for more details, see Andrew Lerner's post on misconceptions about SDN: http://blogs.gartner.com/andrew-lerner/2015/03/17/sdnot/).

As a result of SDN (as well as the other factors mentioned earlier), virtualization will have a place in every network, from giant global conglomerates to five-person start-ups. Dolisy believes network management and monitoring tools will also need to evolve.

“All tools that make up that portfolio of network management need to be aware of the construct of what network virtualization is and the specificities it brings,” he said.

“There’s a level of awareness that the tools will need to bring in to stay relevant in this more hybrid and software-defined world.”

4. The rise of edge processing

The Internet of Things (IoT) peaked on Gartner’s hype cycle for emerging technologies in mid-2014.

While definitions vary, the concept of IoT is really about connecting and networking unconventional things and turning them into data collection points. Think everything from sensors in rice fields and on oysters to connected cars and home appliances.

A lot of these “things” are testing where the boundaries of the network edge lie and where data processing needs to take place.

“With the scale of IoT, it’s not possible to get everything completely centralised, so there’s a natural tendency to make processing more decentralised and push it to the edge,” said Dolisy.

“I think it’s a fine model, but if you look at technology trends in general, you’ve got this pendulum going from centralised to distributed back to centralised going back to distributed. I think the answer probably lies in the middle.”

In a recent survey of 200 Australian IT professionals, we found that IoT would bring “significant complexity” to network structures over the next three to five years.

Dolisy recommended building internal capability (staff and technology) to “maintain visibility over the growing number of connected devices.”

There are plenty more insights and trends captured in that survey. Be sure to check out the IoT Report 2015 Infographic from SolarWinds to see what’s in store.