Enterprise Security: The wood for the trees?

We have been talking a fair bit over the past few years about what we consider to be some of the big, hidden challenges of information security [1][2][3]. We figured it would be useful to highlight one of them in particular: focusing on the right things.

As infosec creeps past its teenage years we've found ourselves with a number of accepted truths and best practices. These were well intentioned and may hold some value (to some orgs), but they can often be misleading and dangerous. We have seen companies with huge security teams, spending tens to hundreds of millions of dollars on information security, burning time, money and manpower on best practices that don't significantly improve the security posture of their organization. These companies invest in the latest products, attend the hottest conferences and look to hire smart people. They have dashboards tracking "key performance areas" (and some of them might even be in the green), but they still wouldn't hold up to about 4 days of serious attacker attention. All told, a single vulnerability/exploit would probably lead to the worst day of their lives (if an attacker bothered).

The "draining the swamp" problem.
"When you’re up to your neck in alligators, it’s easy to forget that the initial objective was to drain the swamp."

Even a cursory examination of the average infosec team in a company will reveal a bunch of activities that occupy time & incur costs, but are for the most part dedicated to fighting alligators. As time marches on and staff churn happens, it's entirely possible to end up with an entire team dedicated to fighting alligators (with nobody realising that they originally existed to drain the swamp).

How do I know if my organization is making this mistake too?
It is both easy and more comfortable to be in denial about this. Fortunately, once it is considered, it is just as easy to determine where your organization sits on this spectrum.

The litmus test we often recommend is this:
Imagine the person (or people, or systems) that matter most to your company from a security point of view: the ones that would offer your adversaries the most value if compromised. Now, realistically, try to determine how difficult it would be to compromise those people / systems.

In most cases, an old browser bug, some phishing emails and an afternoon's worth of effort will do it. I'd put that at about $1,000 in attacker cost. Now it's time for you to do some calculations: if $1,000 in attacker costs can hit you where you would hurt most, then it's a safe bet that you have been focusing on the wrong things.

How is this possible?
It's relatively easy to see how we got here. Aside from vendors who work hard to convince us that we desperately need whatever it is that they are selling, we have also suffered from a lack of the right kind of feedback loops. Attackers are blessed with inherently honest metrics and a strong positive feedback loop. They know when they break in, they know when they grab the loot and they know when they fail. Defenders are deprived of this immediate feedback, and often only know their true state when they are compromised. To make matters worse, due to a series of rationalizations and platitudes, we sometimes even manage to go through compromises without acknowledging our actual state of vulnerability.

Peter Drucker famously said:
"What gets measured gets managed, even when it’s pointless to measure and manage it, and even if it harms the purpose of the organization to do so"

We have fallen into a pattern of measuring (and managing) certain things. We need to make sure that those things _are_ the things that matter.

What can we do?
As with most problems, the first step lies in acknowledging the problem. A ray of hope here is that, in most cases, the problem doesn't appear to be an intractable one. In many ways, re-examining what truly matters for your organization can be liberating for the security team.

If it turns out that the Crown Jewels are a handful of internal applications, then defending them becomes a solvable problem. If the Crown Jewels turn out to be the machines of a handful of execs (or scientists), then defending them becomes technically solvable. What's needed, though, is the acute realization that patching 1000 servers on the corporate network (and turning that red dial on the dashboard to green) could pale in significance to giving your CFO a dedicated iOS device as his web browser *.

In his '99 keynote (which has held up pretty well), Dr Mudge admonished us to make sure we knew where the company's crown jewels were before we planned any sort of defense. With hamster wheels of patching, alerts and best practices, this is easily forgotten, and we are more vulnerable for it.


* Please don't leave a comment telling me how patching the servers _is_ more important than protecting the CFO. This was one example. If your crown jewels are hittable through the corporate server farm (or dependent on the security of AD), then yes, it's where you should be focusing.

Stripping encryption from Microsoft SQL Server authentication


"Communication flow in the TDS 4.2 protocol" [msdn]
Our recent PyConZA talk had several examples of why Python is often an easy choice of language for us to quickly try things out. One example came from looking at the network traffic of a client authenticating with Microsoft SQL Server (in order to simulate the server later). By default, we can't see what the authentication protocol looks like on the wire because the traffic is encrypted. This post is a brief account of stripping that encryption with a little help from Python's Twisted framework.

The clean overview of the authentication protocol on MSDN suggests that it would be as easily readable as its diagram. Our first packet captures weren't as enlightening: only the initial connection request messages from the client and server were readable. Viewing the traffic in Wireshark showed several further messages, without a hint that the payloads were encrypted. A clearer hint was in the MSDN description of the initial client and server messages. There's a byte field in the header called ENCRYPTION. By default, both the client's and the server's byte is set to ENCRYPT_OFF (0x00), which actually means encryption is supported but just turned off. Once both endpoints are aware that the other supports encryption, they begin to upgrade their connection.
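
To make that field concrete: the pre-login payload is a list of option tokens (a one-byte type, a two-byte offset and a two-byte length, terminated by 0xFF), and the ENCRYPTION option points at a single byte. Here is a minimal Python 3 sketch (not from the talk; the function name and constants are ours, based on our reading of MS-TDS) that pulls that byte out of a captured pre-login packet:

```python
import struct

# PRELOGIN option tokens and ENCRYPTION values, as described in MS-TDS.
ENCRYPTION_TOKEN = 0x01
TERMINATOR = 0xFF
ENCRYPT_NAMES = {0x00: 'ENCRYPT_OFF', 0x01: 'ENCRYPT_ON',
                 0x02: 'ENCRYPT_NOT_SUP', 0x03: 'ENCRYPT_REQ'}

def encryption_setting(packet):
    """Return the ENCRYPTION byte from a raw pre-login packet (TDS header included)."""
    payload = packet[8:]                      # skip the 8-byte TDS packet header
    pos = 0
    while payload[pos] != TERMINATOR:
        token = payload[pos]
        offset, length = struct.unpack('>HH', payload[pos + 1:pos + 5])
        if token == ENCRYPTION_TOKEN:
            return payload[offset]            # offsets are relative to the payload start
        pos += 5
    return None

# e.g. with a pre-login packet pulled out of a pcap:
# print(ENCRYPT_NAMES.get(encryption_setting(prelogin_bytes)))
```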

Initial packet capture: upgrading to encrypted connection begins after initial pre-login messages

For our purposes, it would be better if the ENCRYPTION fields were set to ENCRYPT_NOT_SUP (0x02), so that the server thinks the client doesn't support encryption and vice versa. We hacked together a crude TCP proxy to do this. We connect the client to the proxy, which in turn connects to the server and starts relaying data back and forth. The proxy watches for the specific string of bytes that marks the ENCRYPTION field from either the client or the server, and changes it. All other traffic passes through unaltered.

Proxying the MSSQL authentication

The proxy is built with Twisted, which simplifies the connection setup. Twisted's asynchronous, event-driven style of network programming makes it easy to match bytes in the traffic and flip a bit in the match before sending it along again. The match and replace happen in the dataReceived methods, which Twisted calls with the data being sent in either direction.
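
As an illustration of that shape, here is a minimal sketch of such a proxy using Twisted's stock portforward helpers (Python 3). It isn't the original code: where the crude proxy above matched a specific string of bytes, this version walks the pre-login option list to find the ENCRYPTION byte, it assumes the whole pre-login message arrives in a single dataReceived call, and the server address and ports are placeholders.

```python
from twisted.internet import reactor
from twisted.protocols import portforward

def strip_encryption(data):
    """Set the pre-login ENCRYPTION option to ENCRYPT_NOT_SUP (0x02)."""
    if not data or data[0] != 0x12:           # only pre-login packets (TDS type 0x12)
        return data
    packet = bytearray(data)
    pos = 8                                   # option tokens start after the TDS header
    while pos < len(packet) and packet[pos] != 0xFF:
        token = packet[pos]
        offset = int.from_bytes(packet[pos + 1:pos + 3], 'big')
        if token == 0x01:                     # ENCRYPTION option
            packet[8 + offset] = 0x02         # ENCRYPT_NOT_SUP
        pos += 5
    return bytes(packet)

class MSSQLProxyClient(portforward.ProxyClient):
    def dataReceived(self, data):             # server -> client
        portforward.ProxyClient.dataReceived(self, strip_encryption(data))

class MSSQLProxyClientFactory(portforward.ProxyClientFactory):
    protocol = MSSQLProxyClient

class MSSQLProxyServer(portforward.ProxyServer):
    clientProtocolFactory = MSSQLProxyClientFactory
    def dataReceived(self, data):             # client -> server
        portforward.ProxyServer.dataReceived(self, strip_encryption(data))

class MSSQLProxyFactory(portforward.ProxyFactory):
    protocol = MSSQLProxyServer

# Listen locally and relay to the real server (placeholder hostname);
# the SQL client is then pointed at the proxy instead of the server.
reactor.listenTCP(1433, MSSQLProxyFactory('sql-server.example.net', 1433))
reactor.run()
```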

With the proxy in place, both sides think the other doesn't support encryption and the authentication continues in the clear.

Traffic between the proxy and the server of an unencrypted authentication


It's to be expected that opportunistic encryption of a protocol can be stripped by a man-in-the-middle. Projects like tcpcrypt explicitly chose this tradeoff for interoperability with legacy implementations, in the hope of gaining widespread deployment of protection against passive eavesdropping. The reason for Microsoft SQL Server authentication going this route isn't spelled out, but it's possible that interoperability with older implementations was a concern.