Monday, November 14, 2016

Rethinking Micro-segmentation

Traditional Security Architectures

Traditional security architectures enforce security policy at rigidly defined trust boundaries. At the most basic level, this is the perimeter of the network: a firewall sits between the untrusted public Internet and the trusted private network. If inbound access from the Internet is required, a DMZ is often created to segment Internet-exposed resources from the trusted internal network. A network can be segmented further using additional zones on the perimeter firewall, access lists on distribution switches, and further layers of security at various points in the network.

In this traditional model, as security increases, so do configuration complexity, management overhead, and the margin for human error. In addition, implicit trust between devices on a network segment is inherent to traditional security architectures: if one device is breached, an attacker can use it to launch attacks against other devices on the same segment. Traditional security architectures are therefore often ill-equipped to secure east-west traffic in a modern data center.
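To make that implicit-trust problem concrete, here is a minimal Python sketch of zone-based filtering; the zones, addresses, and rules are invented for illustration. Policy is consulted only when traffic crosses a zone boundary, so neighbors on the same segment are never inspected.

```python
# Toy model of traditional zone-based filtering: policy is evaluated only
# when traffic crosses a zone boundary, so two devices in the same segment
# are implicitly trusted. Zones, prefixes, and rules are hypothetical.
ZONE_POLICY = {
    ("internet", "dmz"): "permit",       # inbound web traffic to the DMZ
    ("dmz", "internal"): "deny",         # DMZ hosts cannot reach inside
    ("internal", "internet"): "permit",  # outbound user traffic
}

def zone_of(ip: str) -> str:
    """Toy zone lookup; a real firewall maps interfaces/subnets to zones."""
    if ip.startswith("10.1."):
        return "internal"
    if ip.startswith("10.2."):
        return "dmz"
    return "internet"

def allowed(src_ip: str, dst_ip: str) -> bool:
    src, dst = zone_of(src_ip), zone_of(dst_ip)
    if src == dst:
        # Intra-zone traffic is never inspected: the implicit-trust problem.
        return True
    return ZONE_POLICY.get((src, dst), "deny") == "permit"

# Two hosts on the same internal segment: permitted without any policy check.
assert allowed("10.1.0.5", "10.1.0.9")
```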

What is micro-segmentation?

In two words: Trust nothing. The goal is to eliminate implicit trust and apply security policy between all devices within the purview of the micro-segmentation solution. By using this zero-trust model, micro-segmentation solutions aim to prevent attackers from moving laterally through a network after breaching an initial target.
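Here is the same toy model rewritten as a zero-trust policy check, a minimal sketch with invented groups and rules: every endpoint pair is evaluated against explicit rules, with a default deny, even for neighbors on the same segment.

```python
# Hypothetical zero-trust policy check: anything not explicitly permitted
# is dropped, even traffic between neighbors on the same subnet.
RULES = [
    # (src_group, dst_group, dst_port)
    ("web", "app", 8443),
    ("app", "db", 5432),
]

GROUPS = {
    "10.1.0.5": "web",
    "10.1.0.9": "app",
    "10.1.0.12": "db",
}

def allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    rule = (GROUPS.get(src_ip), GROUPS.get(dst_ip), dst_port)
    return rule in RULES  # default deny: no implicit trust anywhere

# web -> app on the permitted port is allowed; web -> db is denied
# even though all three hosts sit on the same segment.
assert allowed("10.1.0.5", "10.1.0.9", 8443)
assert not allowed("10.1.0.5", "10.1.0.12", 5432)
```

Note that the subnet a host sits on no longer matters; only its group membership and the explicit rules do.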

There are a few fundamentally different approaches to micro-segmentation in the data center. Several current micro-segmentation solutions are built into larger data center orchestration and automation platforms. I'll avoid mentioning specific products, because such comparisons tend to devolve into vi vs. Emacs debates or arguments over the best Linux distribution.

That said, the solutions I am most familiar with enforce security policy in one of two ways:
    • Enforce policy in the network device and/or vSwitch
    • Enforce policy in the hypervisor kernel

Regardless of where the actual enforcement occurs, at a high level the micro-segmentation functionality itself is comparable. An engineer logs into a controller, defines a security policy, and centrally pushes that policy to a number of devices in order to restrict traffic between endpoints. These endpoints can be bare-metal servers, VMs, containers, or other resources supported by the micro-segmentation platform. The fundamental difference is the point of policy enforcement: hypervisor, vSwitch, or network device.
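As a rough illustration of that workflow, here is a hypothetical Python sketch. The controller endpoints, URL paths, and payload shape are all invented; every real platform exposes its own API.

```python
# Hypothetical sketch of the central-controller workflow: define a policy
# once, then push it to every enforcement point. URLs and payload are
# invented for illustration only.
import json
import urllib.request

POLICY = {
    "name": "web-to-app",
    "src_group": "web",
    "dst_group": "app",
    "port": 8443,
    "action": "permit",
}

ENFORCEMENT_POINTS = [
    "https://hv-01.example.com/api/policy",    # hypervisor kernel
    "https://vsw-01.example.com/api/policy",   # vSwitch
    "https://leaf-01.example.com/api/policy",  # network device
]

def push_policy(policy: dict) -> None:
    body = json.dumps(policy).encode()
    for url in ENFORCEMENT_POINTS:
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            print(f"{url}: {resp.status}")

# push_policy(POLICY)  # one definition, many enforcement points
```

Whether that push lands in a hypervisor kernel module, a vSwitch flow table, or a network device is the architectural distinction between the approaches above.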

Thursday, October 6, 2016

Big Data Analytics for Your Network

The help desk just called. Users are reporting the wireless is down in your office, and nobody can get on the network. The wireless seems fine to you. You're connected. You ask a few people nearby, and they're connected too. You log into the WLC and don't see any problems. Speedtest.net works fine. Maybe you should just turn the controller off and then back on again. That worked last time. No, that's a bad idea. It's the middle of the day and you actually need to troubleshoot it.

After a bit of troubleshooting, you determine that the cause of the issue is not the wireless: the DHCP scope is exhausted. Users could connect, but they couldn't obtain an IP address. You shorten the lease time, expand the scope, and call it a day. While you're at it, you wonder if DHCP is the reason connecting has been taking longer than usual, so you fire up Wireshark.
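Before the packets, a quick aside on why the shorter lease helps; the numbers below are invented for illustration. A lease stays allocated for its full duration even after a transient client walks out the door, so the scope has to cover every client seen within one lease window, not just the clients present at any instant.

```python
# Back-of-the-envelope sketch with illustrative numbers: transient clients
# hold leases for the full lease time, so long leases inflate the number
# of concurrent leases the scope must cover.
scope_size = 254        # usable addresses in a /24 scope
clients_per_hour = 60   # transient devices joining each hour

for lease_hours in (8, 1):
    leases_in_flight = clients_per_hour * lease_hours
    status = "EXHAUSTED" if leases_in_flight > scope_size else "ok"
    print(f"{lease_hours}h lease: ~{leases_in_flight} concurrent leases ({status})")

# 8h lease: ~480 concurrent leases (EXHAUSTED)
# 1h lease: ~60 concurrent leases (ok)
```

Now, on to the capture.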

Discover, offer, request, acknowledge. You remember that from a CCNA class half a lifetime ago. Looks good. Well, you think it looks good. It takes about 227 milliseconds from discover to offer. That's normal, right? You realize you're not sure what normal is. You don't know your baseline, and you have no idea how long DHCP should take from discover to offer or request to acknowledge. What about dot1x? Is the RADIUS server slowing things down? You really have no idea. It works. It's lunch time. Nobody is complaining - right now.
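If you wanted to turn that guesswork into numbers, a few lines of Scapy can pull discover-to-offer timings out of a capture. This is a minimal sketch, assuming Scapy is installed and the exchange is saved as dhcp.pcap (a placeholder filename):

```python
# Measure DHCP discover->offer latency from a capture, correlating
# packets by the BOOTP transaction ID (xid).
from scapy.all import rdpcap, BOOTP, DHCP

def dhcp_msg_type(pkt):
    for opt in pkt[DHCP].options:
        if isinstance(opt, tuple) and opt[0] == "message-type":
            return opt[1]  # 1=discover, 2=offer, 3=request, 5=ack
    return None

discovers = {}
for pkt in rdpcap("dhcp.pcap"):
    if not pkt.haslayer(DHCP):
        continue
    xid, mtype = pkt[BOOTP].xid, dhcp_msg_type(pkt)
    if mtype == 1:
        discovers[xid] = pkt.time
    elif mtype == 2 and xid in discovers:
        delta_ms = (pkt.time - discovers[xid]) * 1000
        print(f"xid {xid:#010x}: discover->offer {delta_ms:.1f} ms")
```

Run it against captures taken over a few days and you have the beginnings of a baseline.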

OK, hopefully the way you run your network is nothing like this. However, let's face it: this is an exaggerated version of the reality many deal with on a day-to-day basis. There is often little insight into the individual operations that contribute to network performance as a whole. "The wireless is down" could mean any number of things, many of which may be outside the purview of the team managing the wireless network. Troubleshooting is often a reactive process, and even when there is visibility into network operations and baselines are known, it can be difficult to determine whether your "normal" is actually optimal.

I recently attended a presentation by Nyansa at Networking Field Day 12. Nyansa is a startup focusing on what they call Cloudsourced Network Analytics. Their goal is to go beyond providing visibility in the form of pretty graphs and actually provide actionable insight about how to improve the end user experience.

Tuesday, August 9, 2016

Opengear and the Evolution and Consolidation of Network Devices

Opengear at Tech Field Day Extra 2016

I recently attended Cisco Live 2016 in Las Vegas and was invited to Tech Field Day Extra as a delegate. The first presenter was Opengear, a maker of console access servers and remote management gateways. They describe their products as "next generation Smart Solutions for managing and protecting critical IT and communications infrastructure."

While the term "next generation" is frequently overused, I can't argue with Opengear. Opengear extends the functionality of a console access server into a more complete out-of-band management solution. First, the Opengear presentation made me reevaluate what I should look for in a console access server. What should it do? What shouldn't it do, and what roles should be held by separate devices?