Slack[ing] off our notifications

We :heart: Slack. The older members of our team were IRC die-hards, but Slack won even them over (if for no other reason, then for their awesome iOS changelogs).


Thanks to Slack integrations, its robust API and webhooks, we have data from all over filtering into our Slack, from exception reporting to sales enquiries. If it’s something we need to know, we have it pushed through to Slack.


At the same time, our Canary product (which prides itself on helping you “Know. When it matters”) was able to push out alerts via email, SMS or over its RESTful API. Canaries are designed from the ground up not to be loquacious, i.e. they don’t talk much, but when they do, you should probably pay attention. Having them pipe their results into Slack seemed a no-brainer.


Our initial stab at this was simple: By allowing a user to enter the URL for a webhook in their Console, we could send events through to the Slack channel of their choosing.
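Posting an event to an incoming webhook is just a JSON POST to the user-supplied URL. A minimal sketch of the idea (the payload fields shown are illustrative, not Canary's actual alert format):

```python
import json
from urllib import request

def build_payload(text, channel=None):
    """Assemble the JSON body for an incoming-webhook post."""
    payload = {"text": text}
    if channel:
        payload["channel"] = channel  # override the webhook's default channel
    return payload

def post_alert(webhook_url, text, channel=None):
    """POST an alert to the Slack webhook URL the user entered in their Console."""
    body = json.dumps(build_payload(text, channel)).encode()
    req = request.Request(webhook_url, data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)
```

Once the user has pasted a valid webhook URL into their Console, that's all the plumbing an alert needs.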

[Screenshot: Thinkst Canary Console configuration page]


Of course, this wasn’t all that was needed to get this working. The user would first have to create their webhook. Typically, this would require the user to:

1. Click on his team name, and navigate to “Apps & Integrations”
2. Hit the Slack apps page and navigate to “Build”
3. Be confused for a while before choosing “Make a custom integration”
4. Select “Incoming Webhooks”


At this point the user either:
1. Decides this is too much work and goes to watch Game of Thrones
2. Goes to read the “Getting started” guide before going to [a]
3. Chooses his destination channel and clicks “Add Incoming Webhooks Integration”


After all this, the user’s reward is a page with way more options than is required for our needs (from a developer's point of view, the options are a delight and the documentation is super helpful, but for an end user... Oy vey!)

Finally... the user can grab the webhook URL, and insert it in the settings page of their console.

(This isn’t the most complicated thing ever... It’s not as confusing as trying to download the JDK - but Canary is supposed to make our users' lives easier, not drive them to drink)

With a bit of searching, we found the Slack Button.

[Image: the “Add to Slack” button]

This is Slack's way of allowing developers to make deploying integrations quick and painless. This means that our previous 8 step process (9 if you count watching Game of Thrones) becomes the following:

1. The user clicks on the “Add to Slack” button (above)
2. He is automatically directed to a page where he authorises the action (and chooses a destination channel)

There is no step 3.



Of course, we do a little more work to allow our users to easily add multiple integrations, but that’s because we are pretty fanatical about what we do.
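Behind the button is a standard OAuth2 flow: after the user authorises, Slack redirects back to us with a `code`, and exchanging it at the `oauth.access` endpoint returns, among other things, a ready-made incoming-webhook URL for the channel the user picked. A rough sketch of the server side (credential values are placeholders):

```python
import json
from urllib import request, parse

def webhook_from_reply(reply):
    """Pull the incoming-webhook URL out of Slack's oauth.access response."""
    return reply["incoming_webhook"]["url"]

def exchange_code(code, client_id, client_secret):
    """Swap the OAuth code Slack handed us for a webhook URL to store."""
    data = parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
    }).encode()
    with request.urlopen("https://slack.com/api/oauth.access", data=data) as resp:
        return webhook_from_reply(json.loads(resp.read()))
```

Requesting the `incoming-webhook` scope on the button is what makes Slack create the webhook (and show the channel picker) during authorisation.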

At the end of it though, two quick steps and you too can have Canary goodness funnelled to one of your Slack channels!

[Screenshot: Canary alert posted to a Slack channel]

At the moment we simply use incoming webhooks to post alerts into Slack, but there is lots of room to expand using slash commands or bot users, and we hear that all the cool kids are building bots. (aka: watch this space!)

P.S. If you are a client, visit /settings on your console to see the new functionality.

Certified Canarytokens: Alerts from signed Windows binaries and Office documents

As part of a talk at the ITWeb Security Summit last week, we discussed how to trigger email alerts when file signatures are validated with our Canarytokens project. Building on that alerting primitive, we can make signed executables that alert when run or signed Office documents that alert when opened. 


Canarytokens is our exploration of lightweight ways to detect when something bad has happened on the inside of a network. (It’s not at all concerned with leaks in that dubious, non-existent line referred to as “the perimeter” of a network.) We built an extensible server for receiving alerts from passive tokens that are left lying around. Tokens are our unit of alerting: when a token URL is fetched or a token DNS name is queried, the Canarytokens server triggers an alert. With these (and other tokens) we set out to build alerts for more significant incidents.

Office Document Signatures


A security researcher, Alexey Tyurin, drew our attention to how opening signed Office documents can trigger token alerts. On opening a signed Word document, Office verifies the signature automatically against the certificate embedded in the document. A notable exception is when a document is opened in Protected View (typically after the document is downloaded from the web or opened as an email attachment); in that case the signature verification happens only after the user clicks to disable Protected View. During verification, a URL from the certificate is fetched. We can set the retrieved URL to a token URL (which fires an alert via Canarytokens). The URL we set is in a field called Authority Information Access (AIA). This field tells the signature verifier where to fetch more information about the CA (such as intermediate CAs needed to verify the signing certificate).


Signed document that has already triggered an alert

Signing Word documents gives us another way to alert when a document is opened. The previous technique, which is implemented on Canarytokens, uses a remote tracking image embedded in the document. While document signing is not currently integrated into Canarytokens, it can easily be automated. This requires creating a throwaway CA with token URLs, generating a tokened signing certificate, and then signing a document. Thanks to Tyurin, creating the CA is a short script. Signing the document programmatically can be tricky to get right; we’ve automated this by offloading the signing to the Apache POI library in a Java program.
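To give a flavour of what such a short script does: using Python's `cryptography` library, minting a self-signed throwaway CA certificate whose AIA caIssuers field carries a token URL looks roughly like this (the token URL is a placeholder, not a real Canarytokens endpoint):

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID, AuthorityInformationAccessOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

TOKEN_URL = "http://token-server.invalid/u/xyzzy/ca.crt"  # placeholder token URL

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Throwaway CA")])
now = datetime.datetime.utcnow()

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    # The AIA caIssuers URL is what the verifier fetches while building the chain
    .add_extension(
        x509.AuthorityInformationAccess([
            x509.AccessDescription(
                AuthorityInformationAccessOID.CA_ISSUERS,
                x509.UniformResourceIdentifier(TOKEN_URL),
            )
        ]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)
```

A signing certificate issued under this CA carries the tokened AIA chain, so any verifier that chases caIssuers URLs trips the alert.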

It’s worth looking more closely at how the token URL is hit: Office offloads the signature verification to the Microsoft CryptoAPI, which is what fetches the URL. (In our tests the User-Agent that hits the URL is Microsoft-CryptoAPI/6.1.) We should be able to reuse this trick with other applications that offload signature verification in this way.

Windows Executables Signatures


A signed copy of Wireshark
If signed documents can be used to trigger Canarytokens, we wondered where else this could work. Microsoft’s Authenticode allows signing Windows PE files, such as executables and DLLs. The executables’ signatures are verified on launch if the corresponding setting is enabled in the security policy. The name of the setting is a mouthful: “System settings: Use Certificate Rules on Windows Executables for Software Restriction Policies”. Our initial tests of signed .NET DLLs were able to trigger alerts when loaded by custom executables, even without the setting enabled. If Authenticode can alert us when Windows executables have been launched, we have a uniquely useful way of knowing when binaries have been executed, without any endpoint solutions installed.

To deploy signed executables, all that is needed is to token executables that attackers routinely run (such as ipconfig.exe, whoami.exe and net.exe) to alert us to an attacker rummaging around where they shouldn’t be. Zane Lackey’s highly recommended talk (and slides) on building defenses in reaction to real-world attack patterns makes the case for how alerts like these can build solid attacker detection.

The verification, just like in the Office document case, is offloaded to Microsoft CryptoAPI. Signing certificates for the executables are produced in the same way; however, the signing certificate must also have the Code Signing extended key usage set. Creating signed binaries is made simple by Didier Stevens’ extensive work on Authenticode. This is integrated into Canarytokens to make signing a binary as simple as uploading a copy to sign, but it’s also available as a standalone tool from the source.


AIA fields of a signing certificate
To sign an executable on Canarytokens, you upload an executable to the site. The site signs the binary with a tokened signing certificate. Simply replace the original executable with the tokened one and verify that signature verification for executables is enabled. An attacker who lands on the machine and runs the tokened executable will trigger the signature verification, which gets an alert email sent (via Canarytokens) to let you know that something bad has happened.

Many of our other canary tokens are built on top of application-specific quirks. Adobe Reader, for example, has the peculiar behaviour of pre-flighting certain DNS requests on opening a PDF file. What the Office document and executable signings point to is a more generic technique for alerting on signature (and certificate) validation. This is a more notable alerting primitive, and likely more stable than application quirks, given that URL-fetching extensions are enshrined in certificate standards. Although in this post we’ve used the technique in only two places, more may be lying in wait.

Edited 2016-06-14: Thanks to Leandro in the comments and over email, this post has been updated with his observation that Office document signature verification won't happen automatically when the document opens in Protected View.

Enterprise Security: The wood for the trees?

We have been talking a fair bit over the past few years on what we consider to be some of the big, hidden challenges of information security [1][2][3]. We figured it would be useful to highlight one of them in particular: focusing on the right things.

As infosec creeps past its teenage years, we've found ourselves with a number of accepted truths and best practices. These were well intentioned and may hold some value (to some orgs), but can often be misleading and dangerous. We have seen companies with huge security teams, spending tens to hundreds of millions of dollars on information security, burning time, money and manpower on best practices that don't significantly improve the security posture of their organization. These companies invest in the latest products, attend the hottest conferences and look to hire smart people. They have dashboards tracking "key performance areas" (and some of them might even be in the green), but they still wouldn't hold up to about four days of serious attacker attention. All told, a single vulnerability/exploit would probably lead straight to the worst day of their lives (if an attacker bothered).

The "draining the swamp" problem.
"When you’re up to your neck in alligators, it’s easy to forget that the initial objective was to drain the swamp."

Even a cursory examination of the average infosec team in a company will reveal a bunch of activities that occupy time and incur costs, but are for the most part dedicated to fighting alligators. As time marches on and staff churn happens, it's entirely possible to end up with an entire team dedicated to fighting alligators (with nobody realising that they originally existed to drain the swamp).

How do I know if my organization is making this mistake too?
It is both easy and comfortable to be in denial about this. Fortunately, once considered, it is just as easy to determine where your organization sits on this spectrum.

The litmus test we often recommend is this:
Imagine the people (or systems) that matter most to your company from a security point of view: the ones that would offer your adversaries the most value if compromised. Now realistically try to determine how difficult it would be to compromise those people or systems.

In most cases, an old browser bug, some phishing emails and an afternoon's worth of effort will do it. I'd put that at about $1000 in attacker cost. Now it's time for you to do some calculations: if $1000 in attacker costs can hit you where you would hurt most, then it's a safe bet that you have been focusing on the wrong things.

How is this possible?
It's relatively easy to see how we got here. Aside from vendors who work hard to convince us that we desperately need whatever it is that they are selling, we have also suffered from a lack of the right kind of feedback loops. Attackers are blessed with inherently honest metrics and a strong positive feedback loop. They know when they break in, they know when they grab the loot and they know when they fail. Defenders are deprived of this immediate feedback, and often only know their true state when they are compromised. To make matters worse, due to a series of rationalizations and platitudes, we sometimes even manage to go through compromises without acknowledging our actual state of vulnerability.

Peter Drucker famously said:
"What gets measured gets managed, even when it’s pointless to measure and manage it, and even if it harms the purpose of the organization to do so"

We have fallen into a pattern of measuring (and managing) certain things. We need to make sure that those things _are_ the things that matter.

What can we do?
As with most problems, the first step lies in acknowledging the problem. A ray of hope here is that, in most cases, the problem doesn't appear to be an intractable one. In many ways, re-examining what truly matters for your organization can be genuinely liberating for the security team.

If it turns out that the Crown Jewels are a handful of internal applications, then defending them becomes a solvable problem. If the Crown Jewels turn out to be the machines of a handful of execs (or scientists), then defending them becomes technically solvable. What's needed though is the acute realization that patching 1000 servers on the corporate network (and turning that red dial on the dashboard to green) could pale in significance to giving your CFO a dedicated iOS device as his web browser *.

In his '99 keynote (which has held up pretty well), Dr Mudge admonished us to make sure we knew where the company's crown jewels were before we planned any sort of defense. With hamster wheels of patching, alerts and best practices, this is easily forgotten, and we are more vulnerable for it.


* Please don't leave a comment telling me how patching the servers _is_ more important than protecting the CFO. This was one example. If your crown jewels are hittable through the corporate server farm (or dependent on the security of AD), then yes, that's where you should be focusing.

Stripping encryption from Microsoft SQL Server authentication


"Communication flow in the TDS 4.2 protocol" [msdn]
Our recent PyConZA talk had several examples of why Python is often an easy choice of language for us to quickly try things out. One example came from looking at network traffic of a client authenticating with Microsoft SQL Server (in order to simulate the server later). By default, we can't see what the authentication protocol looks like on the wire because the traffic is encrypted. This post is a brief account of stripping that encryption with a little help from Python's Twisted framework.

The clean overview of the authentication protocol on MSDN suggests that it would be as easily readable as its diagram. Our first packet captures weren't as enlightening: only the initial connection request messages from the client and server were readable. Viewing the traffic in Wireshark showed several further messages without a hint that the payloads were encrypted. A clearer hint was in the MSDN description of the initial client and server messages: there's a byte field in the header called ENCRYPTION. By default, both the client and server's byte is set to ENCRYPT_OFF (0x00), which actually means encryption is supported but currently turned off. Once both endpoints are aware that the other supports encryption, they begin to upgrade their connection.

Initial packet capture: upgrading to encrypted connection begins after initial pre-login messages

For our purposes, it would be better if the ENCRYPTION fields were set to ENCRYPT_NOT_SUP (0x02), so that the server thinks the client doesn't support encryption and vice versa. We hacked together a crude TCP proxy to do this: we connect the client to the proxy, which in turn connects to the server and starts relaying data back and forth. The proxy watches for the specific string of bytes that marks the ENCRYPTION field from either the client or the server, and changes it. All other traffic passes through unaltered.

Proxying the MSSQL authentication

The proxy is built with Twisted, which simplifies the connection setup. Twisted's asynchronous, event-driven style of network programming makes it easy to match bytes in the traffic and flip a bit in the match before sending it along again. The match and replace take place in the dataReceived methods, which Twisted calls with data being sent in either direction.
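The byte-flipping itself is tiny. Here is a sketch of the idea, parsing the PRELOGIN option list as laid out in the MS-TDS spec rather than doing the cruder byte-match our proxy actually used (header size and token values are per the spec):

```python
ENCRYPTION_TOKEN = 0x01   # PL_OPTION_TOKEN value for the ENCRYPTION option
ENCRYPT_NOT_SUP = 0x02    # "this side does not support encryption"
TDS_HEADER_LEN = 8        # every TDS packet starts with an 8-byte header

def strip_encryption(packet):
    """Rewrite a TDS PRELOGIN packet so ENCRYPTION reads ENCRYPT_NOT_SUP.

    Options are (token:1, offset:2, length:2) big-endian triples after the
    header, terminated by 0xFF; offsets are relative to the PRELOGIN payload.
    """
    data = bytearray(packet)
    i = TDS_HEADER_LEN
    while i < len(data) and data[i] != 0xFF:  # walk the option tokens
        token = data[i]
        offset = int.from_bytes(data[i + 1:i + 3], "big")
        if token == ENCRYPTION_TOKEN:
            data[TDS_HEADER_LEN + offset] = ENCRYPT_NOT_SUP
            break
        i += 5  # advance to the next (token, offset, length) triple
    return bytes(data)
```

Dropped into the dataReceived methods for both directions, this makes each side believe the other can't encrypt.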

With the proxy in place, both sides think the other doesn't support encryption and the authentication continues in the clear.

Traffic between the proxy and the server of an unencrypted authentication


It's to be expected that opportunistic encryption of a protocol can be stripped by a MITM. Projects like tcpcrypt explicitly chose this tradeoff for interoperability with legacy implementations, in the hope of gaining widespread deployment of protection against passive eavesdropping. The reason for Microsoft SQL authentication going this route isn't spelled out, but it's possible that interoperability with older implementations was a concern.


Unicorns, Startups and Hosted Email

A few days ago, @jack (currently the CEO of both Square && Twitter) posted a pic of his iPhone.

[original tweet]
It struck me as slightly surprising that both Square & Twitter could be using Gmail. Both companies have a ton of talent who deeply understand message delivery and message queues. I wouldn't be at all surprised if both companies have people working there who worked on Sendmail or Postfix. On some levels, Twitter competes with Google (and if Google Pay is a thing, then so does Square).

Of course, this is one of those times when you see a classic mismatch between "paranoid security guy" thinking and "scale quick Silicon Valley" thinking. The paranoid security guy thinks: "So every time a Twitter executive sends an email, people at Google can read it?" while the SV entrepreneur says: "It isn't core.. lets not spend engineering time on it at all".

I'm not going to make a call here on which route is better, but I did wonder how common it was. So I took a list of the current US/EU Unicorns and decided to check who handles their mail. What you get is the following:


Interestingly, about 60% of the current Unicorn set have their email handled by Gmail. A further 13.6% have their mail handled by outlook.com, which means about 70% of the current startups with billion-dollar valuations don't handle their own email.
The list of companies using Gmail in that set are:
If we avoid the hyper focus on "Unicorns" and look elsewhere (like Business Insider's list of 38 coolest Startups) this percentage grows even bigger:


It is interesting that Gmail so completely dominates email handling, and it is equally surprising that so many companies so completely outsource this function. On this trajectory, it won't be long before we can stop calling it email and simply refer to it as gmail instead.
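For the curious, the survey mechanics are simple: look up each company domain's MX records and bucket the mail exchanger hostname by provider. The classifier is the only interesting part; a sketch is below (the actual MX lookups would need a DNS library such as dnspython, not shown here):

```python
def classify_mx(mx_host):
    """Bucket a mail exchanger hostname by who operates it."""
    host = mx_host.lower().rstrip(".")
    if host.endswith(("google.com", "googlemail.com")):
        return "Gmail"
    if host.endswith("outlook.com"):
        return "outlook.com"
    return "self-hosted/other"
```

Run over a list of domains, a counter of the returned buckets gives the percentages above.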

PS. Anyone want to buy a book on sendmail macros?


Canarytokens.org - Quick, Free, Detection for the Masses

Introduction

This is part 2 in a series of posts on our 2015 BlackHat talk, and covers our Canarytokens work.

You'll be familiar with web bugs, the transparent images that track when someone opens an email. They work by embedding a unique URL in a page's image tag and monitoring incoming GET requests.
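Generating such a bug takes two steps: mint a unique token and embed it in an image URL, so that any GET for that URL identifies the exact email that was opened. A minimal sketch (the server hostname is a placeholder, not a real Canarytokens endpoint):

```python
import uuid

TOKEN_SERVER = "http://tokens.example.invalid"  # placeholder token server

def new_web_bug():
    """Return (token, img tag) for a classic email web bug."""
    token = uuid.uuid4().hex  # unique per email sent
    url = "{}/{}/pixel.gif".format(TOKEN_SERVER, token)
    tag = '<img src="{}" width="1" height="1" alt="">'.format(url)
    return token, tag
```

The server side is then just a web server that logs which tokens were requested, and when.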

Imagine doing that, but for file reads, database queries, process executions, patterns in log files, Bitcoin transactions or even LinkedIn profile views. Canarytokens does all this and more, letting you implant traps in your production systems rather than setting up separate honeypots.

[Read More]

BlackHat 2015 - Bring back the HoneyPots

This year we gave a talk at BlackHat titled “Bring back the Honeypots”. You can grab a quickly annotated version of the slides from [here].


As usual, we had waaaaaay more content than time (which should have been expected with about 142 slides and multiple demos) but we like to live dangerously..

The linked slides are annotated, so you should be able to gather the gist of our thoughts, but some of them (especially the demos) require their own coverage. Over the next few days, we will aim to put out three quick posts to cover the three sections of the talk.

As always, shout if you have thoughts, questions or comments.