Configuring and Monitoring the Next Gen Network

By Staff Contributor on December 8, 2015


In Intent-Based Networking – A Must for SDN, an article published on this site, Faisal Khan makes the very valid point that “SDN is about making networking more flexible and agile through programming network elements. [P]rogramming needs to be done in a standard way, [thus] standardizing the southbound protocol.” I can think of three parts of that southbound protocol stack that need attention for this to work.

Transport

The transport protocol determines how a controller connects to a device. This needs to be one protocol connecting on one standard port (NETCONF implementations, please take note). Encryption should be mandatory, and I suggest that compression would be a useful option as well. Based on those criteria, SSH and SSL would both be reasonable choices, especially if HTTP/2 (which grew out of SPDY) were used for the latter.
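To make that concrete, here’s a minimal sketch of a controller opening a NETCONF-over-SSH session on the standard port (830) using Python’s ncclient library. The address and credentials are placeholders, and nothing here is tied to a particular vendor’s device.

# Minimal sketch: NETCONF over SSH on the standard port (830), via ncclient.
# The host and credentials are placeholders.
from ncclient import manager

with manager.connect(
    host="192.0.2.1",        # placeholder management address
    port=830,                # the standard NETCONF-over-SSH port
    username="admin",        # placeholder credentials
    password="admin",
    hostkey_verify=False,    # fine for a sketch; verify host keys in production
) as session:
    # SSH provides the encrypted transport; NETCONF rides inside it.
    running = session.get_config(source="running")
    print(running)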

Data encapsulation

The data encapsulation protocol determines how the command and response data are presented within the transport stream. XML (and by implication, XML-RPC) is a fairly well-established choice here, but it’s surely not the only option; REST API users may be calling for HTTP GET/POST and, perhaps, JSON responses. It really doesn’t matter (in principle) as long as it’s consistent across all implementations and has relatively low overhead for the systems involved (always fun where encryption and compression are involved).
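As a quick illustration (the payloads are hypothetical, not taken from any specification), here is the same “set the MTU on an interface” request encapsulated as JSON for a REST-style API and as XML for an XML-RPC/NETCONF-style one. Either form carries exactly the same information; the encapsulation is orthogonal to the command itself.

# The same request encoded two ways, using only the Python standard library.
import json
import xml.etree.ElementTree as ET

# JSON body, as a REST-style client might POST it
json_body = json.dumps({"interface": {"name": "eth0", "mtu": 9000}})

# Equivalent XML body, as an XML-RPC/NETCONF-style client might send it
root = ET.Element("interface")
ET.SubElement(root, "name").text = "eth0"
ET.SubElement(root, "mtu").text = "9000"
xml_body = ET.tostring(root, encoding="unicode")

print(json_body)  # {"interface": {"name": "eth0", "mtu": 9000}}
print(xml_body)   # <interface><name>eth0</name><mtu>9000</mtu></interface>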

Syntax

The specific syntax of the commands sent to a southbound device (and of the responses returned) is probably the protocol that Faisal was referring to. This mythical Grand Unified Protocol, sadly, is where everything theoretically comes to a grinding halt. Every vendor has its own way of configuring its devices, and each offers features or “knobs” that the others do not, which puts the concept of a single common southbound language in danger of becoming a functional lowest common denominator, and thus unlikely to see adoption.

OpenConfig

Recently I had the pleasure of hearing Google’s Anees Shaikh present on OpenConfig, a project aiming to build vendor-neutral data models for the configuration and management of networking hardware and software platforms. There are multiple reasons this project should be of interest to most of us.

Participants

The participants in OpenConfig are all operators who bring to the table their biggest use cases and needs for this solution. The network vendors, in contrast, are engaged afterwards and collaborate to implement the models. Vendors of note include Cisco®, Juniper®, and Arista, and apparently there are others also working with the group. Without implementation, a protocol definition is little more than a white paper, so these vendor relationships are critical to any future success.

Configuration

How is it possible to configure devices from multiple vendors using a single language? The OpenConfig data models take an approach similar in some ways to SNMP’s Private Enterprise concept. Unlike SNMP, however, instead of putting all vendor-specific configuration within a single vendor-specific branch, the models allow vendors to augment, extend, and deviate from the core model at the appropriate location. The theory is that all core data can remain within the common model, and vendor-specific configuration can be included as needed alongside the common configuration.
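To make that placement idea concrete, here’s a purely conceptual sketch written as a plain Python dictionary rather than a formal YANG instance. The acme-qos branch is a hypothetical vendor augmentation, shown only to illustrate that it sits alongside the core leaves it extends instead of in a separate silo.

# Conceptual sketch only; not a formal instance of any published model.
interface_config = {
    "name": "eth0",
    "config": {
        "type": "ethernetCsmacd",    # core model leaf
        "mtu": 9000,                 # core model leaf
        "enabled": True,             # core model leaf
        "acme-qos:shaping": {        # hypothetical vendor augmentation, placed
            "rate-kbps": 50000,      # right next to the core leaves it extends
        },
    },
}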

In practice, this means that while deploying a complex BGP arrangement between a Cisco ASR and a Juniper MX might require quite a few vendor-specific elements, the more common day-to-day tasks, like configuring interfaces with Layer 2 settings (access, trunk, allowed VLANs, encapsulation), Layer 3 settings (IP addresses), and basic status (enable/disable), will fall firmly within the core model and should require no vendor-specific configuration elements at all. Yet the model avoids becoming the fated “lowest common denominator.”
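For those day-to-day tasks, the configuration might look something like the sketch below, using only core-model-style elements and no vendor-specific knobs. The structure is loosely modeled on the OpenConfig interface data and the names are illustrative, so the exact containers and leaves may differ from the published models.

# Illustrative only; container and leaf names are approximations.
trunk_port = {
    "name": "eth1",
    "config": {"description": "uplink to dist-1", "enabled": True},
    "switched-vlan": {"interface-mode": "TRUNK", "trunk-vlans": [10, 20, 30]},
}

routed_port = {
    "name": "eth2",
    "config": {"description": "p2p to core-1", "enabled": True},
    "ipv4": {"addresses": [{"ip": "192.0.2.9", "prefix-length": 31}]},
}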

There is a catch, of course. SNMP also had a core model of sorts: MIB II was supposed to provide all the common interface information, and then vendors would use their enterprise OIDs to provide additional information. In reality, the “good” data is in the enterprise MIBs, and the often-ignored data is in MIB II, in part because MIB II isn’t designed to present the potentially more interesting vendor-specific data. With this being the case, management systems have to decide whether to read information from MIB II, the enterprise MIBs, or a mixture of the two. If the enterprise MIB also includes all the information that’s stored within MIB II, then MIB II may effectively be abandoned. OpenConfig runs a similar risk: a vendor could augment the model with a vendor-specific branch that includes everything for a device and stop using the core model. Hopefully, by allowing the core model to be flexible, this can be avoided, because if the core model isn’t implemented, the whole idea of a single common configuration language goes out the window.

Network management

Network management and monitoring (telemetry) are also key elements in the OpenConfig plan. In the data model, each object (an interface, for example) has two major containers: config, which contains read/write elements, and state, which is read-only.

+--rw interface [name]
   +--rw config
   |  +--rw type
   |  +--rw mtu
   |  +--rw name
   |  +--rw description
   |  +--rw enabled
   +--ro state
      +--ro type
      +--ro mtu
      +--ro name
      +--ro description
      +--ro enabled
      +--ro oper-status
      +--ro counters
         +--ro in-octets
         +--ro out-octets

Naming is consistent between the two containers, so if, for example, you set the MTU via the config container, it’s easy to read the current MTU setting back again from the state container. More importantly, it means that the same vendor-neutral data model is now being used for configuration and monitoring. As before, vendors can augment and alter the model within reason, but the core data model should remain constant. There’s hope yet for the MIB II concept.
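As a sketch of that symmetry, setting the MTU and then reading it back might look like the following. The RESTCONF-style URL paths and the credentials are placeholders of my own, not something defined by OpenConfig.

# Hypothetical RESTCONF-style calls; the point is that the same leaf name
# (mtu) appears under both the config and state containers.
import requests

BASE = "https://192.0.2.1/restconf/data/interfaces/interface=eth0"  # placeholder
AUTH = ("admin", "admin")                                            # placeholder

# Write the desired MTU into the config container (read/write)
requests.patch(f"{BASE}/config", json={"mtu": 9000}, auth=AUTH, verify=False)

# Read the operational values back from the state container (read-only)
state = requests.get(f"{BASE}/state", auth=AUTH, verify=False).json()
print(state.get("mtu"), state.get("oper-status"))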

Events

In August, before I had learned more about OpenConfig, I wrote a post in the Geek Speak section of thwack® entitled Could SNMP Please Just Die Already, in which, if the title wasn’t subtle enough, I was pleading for a replacement for SNMP. Specifically, I expressed the hope that events could be sent to subscribers using an XML event stream, and that SNMP polling might be replaced with something REST- or XML-RPC-based. As if by magic, along comes OpenConfig, which proposes a TCP-based streaming telemetry model that can push a time-series data stream (e.g., interface counters) to a collector and send asynchronous event reports to connected subscribers. Data can also be polled as needed, and all of it is based on a common model.
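To give a feel for the subscriber side, here’s a conceptual sketch. The wire format (newline-delimited JSON over a plain TCP socket) and the port number are assumptions of mine for illustration only; they are not an encoding defined by OpenConfig.

# Conceptual telemetry subscriber; the framing and port are placeholders.
import json
import socket

def subscribe(host, port=9999):
    """Consume a stream of telemetry updates and print path/value pairs."""
    with socket.create_connection((host, port)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break                      # the device/collector closed the stream
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                update = json.loads(line)  # e.g. {"path": ".../in-octets", "value": 123}
                print(update.get("path"), update.get("value"))

# subscribe("192.0.2.1")   # placeholder device or collector address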

How does this affect me?

Let’s assume that OpenConfig is a success, with support both from networking vendors and from management software companies like SolarWinds®. What’s the migration path? As so many “thwackers” pointed out on my Geek Speak post, SNMP is so embedded in everything we do that it’s going to be around for a long time yet. Similarly, the data models for OpenConfig are likely to be incomplete initially, especially as the primary use-case contributors are service providers; the needs of the enterprise may be more of a second wave. Nonetheless, wide adoption of OpenConfig as the new monitoring paradigm would mean running a hybrid OpenConfig/SNMP network of some sort for quite some time.

There will be a need for OpenConfig collectors that can adapt OpenConfig event streams into SNMP traps for legacy event managers. Similarly, there will need to be SNMP collectors that convert SNMP traps into an OpenConfig event stream to send to compatible event managers.
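A sketch of the first of those shims might look like this. The shape of the incoming event is an assumption of mine for illustration; the OIDs are the standard linkDown, ifDescr, and ifOperStatus identifiers, and actually emitting the trap is left to whatever SNMP library the collector already uses.

# Map a hypothetical streamed interface-down event to linkDown trap varbinds.
LINK_DOWN_TRAP = "1.3.6.1.6.3.1.1.5.3"   # standard linkDown notification OID
IF_DESCR = "1.3.6.1.2.1.2.2.1.2"         # ifDescr
IF_OPER_STATUS = "1.3.6.1.2.1.2.2.1.8"   # ifOperStatus (2 = down)

def event_to_trap(event):
    """Return (trap OID, varbind list) for a legacy event manager."""
    index = event.get("ifindex", 0)
    return LINK_DOWN_TRAP, [
        (IF_DESCR + "." + str(index), event["interface"]),
        (IF_OPER_STATUS + "." + str(index), 2 if event["oper-status"] == "DOWN" else 1),
    ]

print(event_to_trap({"interface": "eth0", "ifindex": 3, "oper-status": "DOWN"}))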

For performance and status monitoring, it would be nice to see NMS products rapidly adopting the OpenConfig model (after all, much of it is “write once, use many”), and subscribing to the real-time event streams to populate statistical data rather than using SNMP to poll for it. I believe that converting a stream into an SNMP MIB so that a legacy system can poll it would be somewhat pointless, but if there’s a need for it, I’m sure somebody will sell the solution (and at least you might be able to use one MIB for multiple vendor devices).

Will SolarWinds, as a company creating network and systems management tools, jump on the OpenConfig train and be an early adopter? I’d like to think so, but again, given the service-provider focus of OpenConfig’s initial development, it may not be high on the agenda. This is pure speculation on my part, but I will watch with interest, and I would love to see SolarWinds at the forefront of this new technology.

Interesting?

I hope you found this to be an interesting look at a very different approach to network configuration and telemetry, and one that looks like it will have multi-vendor support in the networking space. We may have to wait a year or so before we see this out in the wild, but I’m excited to see a data model that not only creates the kind of standardized southbound model that Faisal notes is essential for SDN, but could also transform the network management and monitoring space.
