• A “Safety Net” for AWS Canarytokens

    The AWS API Key Canarytoken (paid and free) is a great way to detect attackers who have compromised your infrastructure. The full details are in a previous blog post, but in short:

    1. You go to https://canarytokens.org and generate a set of valid AWS API credentials;
    2. Leave those credentials in ~/.aws/config on a machine that’s important to you;
    3. Done!
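    For the curious, the dropped file is just an ordinary AWS credentials block. A sketch of what it looks like (the key values below are placeholders; the token generator hands you real-looking ones):

    ```ini
    [default]
    aws_access_key_id = AKIAXXXXXXXXEXAMPLE
    aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxEXAMPLE
    region = us-east-1
    ```

    Nothing more is needed; any attacker tooling that harvests ~/.aws/ will pick it up.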

    If that machine is ever breached, the sort of attackers who keep you up at night will look for AWS API credentials, and they will try them. 

    And when they do, we let you know that you’ve been breached.

    When you receive an email/SMS/Slack message letting you know that the AWS API key you left only on BuildServer-7 in Server-room #12 just got used to log in to AWS, you know you have a problem.

    The underlying Canarytoken infrastructure relies on AWS APIs logging their own execution to CloudTrail. This lets us identify: which IP made the call; which API was executed (including both the service name and function); plus other details about the client executing the call.

    The effectiveness of the AWS API Key Canarytoken shouldn’t be underestimated. Attackers have to try them; they could be the keys to the victim’s kingdom. If they don’t try the keys, they might be missing a golden opportunity. AWS API Keys are about the juiciest bait you can dangle in front of adversaries.

    For defenders, the keys we supply are simple and entirely safe. They’re not tied to anything owned by the defender (the infrastructure sits completely at Thinkst), the keys have no permissions so they can’t be used to do anything, and no agent or software needs to be installed. You simply drop a text file and wait to be notified if anything happens.

    Between Canarytokens.org and our commercial Canarytokens offering, thousands and thousands of machines worldwide have AWS API Canarytokens lying in wait for attackers.

    Can AWS Canarytokens be detected?

    This introduces a new goal for the super stealthy attacker: if they find an AWS API key on a server, can they tell whether it’s a Canarytoken or a legitimate key? Our view is that this only matters if an attacker can determine this without triggering the token. In other words, the Canarytoken fails if attackers can come across API credentials and perform some test that reveals whether or not the credentials are Canarytokens, without defenders ever being notified of the test.

    The key (heh) to making this happen as an attacker would be to find AWS APIs that don’t log their execution, but do reveal information about the calling API key to the caller.

    For our purposes, there are four classes of error responses from AWS APIs:

    1. Logs to CloudTrail, reveals no information about the API key
    2. Logs to CloudTrail, does reveal account details from the API key
    3. No CloudTrail logs, reveals no information about the API key
    4. No CloudTrail logs, does reveal account details from the API key

    As defenders, class 4 is the worst case. If an API falls into class 4, then Canarytokens can be detected. Class 3 is not great either, but softens the impact to “attackers can determine where credentials are valid, but not whether they’re Canarytokens”.

    The next bit may surprise those who’ve never worked with it: the AWS API is a mishmash of response codes, error codes, error strings, exception names, and data formatting. It is anything but consistent.

    In the past, a tiny subset of API errors did fulfill the constraints for class 4. The folks at Rhino Security blogged about this (RIP Spencer). While AWS does (very slowly) seem to react to reports of APIs which don’t show up in CloudTrail, that’s not sufficient. As it stands, we’re not aware of any class 4 error responses currently in the AWS API, but at the rate at which new APIs are added, that’s little solace when thinking about the future.

    There are certainly class 3 error responses in the current AWS API if you go looking hard enough.

    In summary, the AWS API has a small number of endpoints which don’t log their own usage to CloudTrail, creating a blind spot for defenders. What do we do in those instances?

    IAM Credential Reports

    Scott Piper pointed out that Amazon actually does provide a backup mechanism for identifying credential usage. The IAM service lets you generate and download a report for all your credentials. This is a CSV file where each row belongs to an IAM user, and some of the columns identify when an access key was used, and on which AWS service it was used.

    With the credential report, there are no longer class 3 or 4 error responses in a practical sense; we can also check the credential report to see if an API key was used, so we no longer need to rely solely on CloudTrail.
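    To make the mechanics concrete, here’s a sketch of checking a credential report for recent key usage. (The column names match those in IAM credential reports; the helper name and sample rows are our own illustration, not production code.)

    ```python
    import csv
    import io
    from datetime import datetime, timezone

    def keys_used_since(report_csv, since):
        """Return (user, last_used, service) for access keys used after `since`.

        `report_csv` is the CSV body of an IAM credential report. Keys that
        have never been used carry "N/A" in the last-used columns.
        """
        hits = []
        for row in csv.DictReader(io.StringIO(report_csv)):
            last_used = row.get("access_key_1_last_used_date", "N/A")
            if last_used in ("N/A", "no_information", ""):
                continue
            when = datetime.fromisoformat(last_used)
            if when > since:
                hits.append((row["user"], when, row["access_key_1_last_used_service"]))
        return hits

    # Illustrative two-row report: one token key that was used, one that wasn't.
    sample = (
        "user,access_key_1_last_used_date,access_key_1_last_used_service\n"
        "canarytoken-user-1,2021-06-01T08:30:00+00:00,sts\n"
        "canarytoken-user-2,N/A,N/A\n"
    )
    hits = keys_used_since(sample, datetime(2021, 1, 1, tzinfo=timezone.utc))
    ```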

    The report has three main drawbacks.

    1. The report can only be generated once every four hours, so at worst there’s a four-hour delay between credentials being used and you seeing the notification. 
    2. The fidelity of the reports is low; you can only tell the last time the key was used and on which service (e.g. ec2, iam, sts, etc). You can’t tell which function was called. 
    3. You don’t get any client information, such as IP addresses or client versions.

    In spite of these drawbacks, this is a huge step up in the reliability of this token type. Attackers no longer have places to sneakily test their API keys. This safety net means that every key usage will be detected and alerted on, albeit sometimes with lower-fidelity artifacts. 

    Even if you didn’t know the attacker’s IP address, would it be worth knowing that the key you placed on \\code-sign-22 was used to log in to AWS this morning? Absolutely!

    In fact, if a key is only used on what was previously a class 3 or 4 API, that’s even more of a signal since it implies the attacker is actively trying to avoid detection.

    So we built an AWS safety net, to help us catch attackers who actively try to avoid detection.

    Building an AWS Safety Net

    The architecture is straightforward. An AWS Lambda function fires on a periodic schedule. It pulls in the credential report from IAM, iterates over the rows, and determines whether the last recorded use of each key happened more recently than the CloudTrail logs show. If so, the Safety Net kicks in and sends out the alert.
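    The comparison at the heart of that Lambda can be sketched in a few lines (a simplification; the function name and shape are ours, not the production code):

    ```python
    from datetime import datetime, timezone, timedelta

    def needs_safety_net_alert(report_last_used, cloudtrail_last_seen):
        """True when the credential report shows a use CloudTrail never logged.

        Either argument may be None: the key was never used, or CloudTrail
        has no record of it.
        """
        if report_last_used is None:
            return False  # the key has never been used at all
        if cloudtrail_last_seen is None:
            return True   # used, but CloudTrail never saw it
        return report_last_used > cloudtrail_last_seen

    # A key last used an hour ago, with CloudTrail silent for six hours: alert.
    now = datetime.now(timezone.utc)
    assert needs_safety_net_alert(now - timedelta(hours=1), now - timedelta(hours=6))
    ```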

    Browsing to the history page for the incident, you see an annotation telling you that the Safety Net picked up this usage.

    The Safety Net was deployed to our commercial customers a little while back thanks to Jason, and we’ve recently rolled it out to the free users on Canarytokens.org too.

    Wrapping Up

    AWS API logging previously left a small but significant gap that potentially gave attackers a way to use Canarytoken API keys without triggering alarms. With the deployment of the Safety Net, this gap has been plugged.

  • Canary Rice Toss

    “To see a World in a Grain of Rice”

    (with apologies to William Blake)

    If you are on TikTok (or a fan of talk shows) at the moment, then no doubt your feed has included coloured rice being tossed in the air in the form of song lyrics, beloved cartoon characters, and even famous faces.


    Whilst coloured rice is not a new thing (for most preschool teachers, it is a cheap and effective way to keep kids entertained), a bunch of TikTok-ers have made a living off turning this simple play-thing into a full-on career. And, obviously, when a current trend is well-suited to our logo, we have to give it a go. Here’s how we got there:

    What we used

    • Rice
      • Our whole logo only needed 500g; however, we needed a few attempts to get it right and ended up using about 3kg. White rice is reusable; a jumble of multi-coloured rice, not so much…
    • Food colouring
    • Vinegar
      • 1 teaspoon per cup of rice
    • Large (relatively stiff) rice tossing surface
      • We used a shelf from our cupboard…

    How we did it

    • We started by colouring the rice
      • We mixed 4 cups of rice with 4 teaspoons of vinegar and added food colouring until we got to the colour we wanted. The vinegar helps the colour spread evenly and get absorbed by the rice, so that, when playing with it, the colour doesn’t stain your fingers.

    • Once mixed well (and to your colour-liking), we then dried the rice on trays lined with paper towel. The thinner the layer, the quicker it dries. Ours took about 2 hours.

    • We scaled and sketched our logo onto the board
      • If we had had a projector, this would have been a much quicker + easier process, but alas.
      • Note that the image you sketch out on the board will be reversed when tossed into the air. We drew the logo on in the correct orientation and then simply flipped the footage in editing.
    • Practice makes perfect
      • We had a few attempts with random shapes, just using white rice, so that we could practise the flip motion (and then could reuse the white rice)
      • We found the trick was to not raise the board too aggressively, but drop it as quickly as possible
    • Once we were happy with our flipping technique, we then created our logo using the coloured rice.
      • We packed the rice relatively densely to make sure that, when tossed, the logo came out as clear and vibrant as possible.
    • We got the cameras set up to capture the magic.
      • We used both a GoPro + the SlowMo setting on an iPhone. Whilst both worked, slowing down the GoPro footage in editing gave us better quality.
    • The next step was to toss our Canary logo
      • It took us 3 solid attempts to get our final result (with plenty of sweeping in between)
      • We did not put down a sheet to capture the rice (because…yolo), but we would recommend this method (unless you plan on finding runaway rice in all of your living room crevices for the next few days)



    Easy and cheap way to get creative (and certainly gave us a good laugh). Would recommend.
  • Building WireGate: A WireGuard front to detect compromised keys

    Earlier this year we released our WireGuard Canarytoken. This allows you to add a “fake” WireGuard VPN endpoint on your device in seconds. The idea is that if your device is compromised, a knowledgeable attacker is likely to enumerate VPN configurations and try to connect to them. Our Canarytoken means that if this happens, you receive an alert. This can be useful at moments like national border crossings, when devices can be seized and inspected out of sight.
    Using the WireGuard Canarytoken
    If all you want is to scatter a million of these WireGuard VPN configs across all the devices you care about, there’s no need to read any further: they’re now freely available from canarytokens.org for anyone to grab! (Paying Canary customers will already have seen these on their private Canary Consoles.)
    If you’re interested in how we built these tokens and how they manage to work reliably and safely at scale, then this post is for you. Along the way we’ll cover some of our design choices and what makes the WireGuard protocol design so elegantly suited to our needs.

    Our Goal

    The simplified version of our goal is to notify the owner of a client key when it’s used to connect to our WireGuard server. (WireGuard considers both ends of the tunnel as peers instead of clients and servers, but we only care here about the “client” config deployed on someone’s phone or machine.)

    Version 1: a rough draft

    The first proof-of-concept we did started with an existing WireGuard implementation. (The userspace Go WireGuard actually proved invaluable throughout this project.) With a temporary hand-wave over some implementation details to get us started, we can imagine the following:
    1. a database tracking which keys have been issued, 
    2. a WireGuard server configured to accept these keys, 
    3. a patched WireGuard that looks up and notifies key owners when their keys are used.

    Problems to Address:

    The rough draft would do its job but raises a few points we’d need to consider:
    1. WireGuard creates an actual network interface on a host. How do we ensure that the traffic arriving at the interface is routed nowhere? It also ought to get nowhere quickly, to limit the opportunities for malicious packets to cause harm; the interface could be isolated within a network namespace on Linux, or on an entire host of its own. 
    2. The selected isolation mechanism would need to support as many WireGuard Canarytokens as possible. A single enterprise deploying the tokens organization-wide could easily employ tens of thousands of tokens, making support for many thousands of keys table stakes.
    3. Extending the proof-of-concept would need to consider the number of keys per WireGuard interface (or the number of WireGuard interfaces per host).

    A WireGate instead of a whole WireGuard 

    Our goal admits an important simplification: an attacker trying out the WireGuard Canarytoken client config must first initiate an encrypted session before she is allowed to do anything else.
    A closer look at the WireGuard protocol shows how this can be done. Although peers on either end of a WireGuard tunnel exchange a few different types of messages to transfer encrypted payloads and to initiate encrypted sessions, the initiation is a single round trip of handshake messages between the two peers. After just the handshake initiation message, the server (responder) knows which client has initiated the handshake, and in our case, who to alert.
    The fit with our Canarytoken use is even better with a closer look at the handshake initiation message:
    Once the static client public key is decrypted, we know who to notify. If the encrypted timestamp that follows also decrypts, we can confidently say that only a device with knowledge of the corresponding client private key could have produced this message.
    A caveat to this is that the message could have been replayed by a passive observer of the device holding the client private key. It works in our favour here, though, that being a Canarytoken, the client private key is rarely, if ever, used, and that our server can insist on fresh timestamps to reject stale initiations. This gives us a high degree of confidence that whatever sent the initiation message has gained access to the client private key installed on only a single device.
    All this can be inferred from just the first handshake initiation message. So instead of supporting the full WireGuard protocol, we implemented a small “WireGate” service which only supports the handshake initiation message. This simplifies a bunch of the problems from the initial rough draft:
    • There’s no need to null-route traffic, because there isn’t any routing happening. Individual UDP packets are checked to see whether they’re valid handshake initiations, and everything else can be ignored. 
    • There isn’t a need to maintain a shared set of valid keys with a separate WireGuard service.
    • As the number of keys grows, it’s also much simpler to reason about WireGate’s performance. (The initial handshake decryption is done in relatively constant time so won’t limit the number of keys created.) 
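    To make that concrete: the handshake initiation message is a fixed 148-byte packet (layout per the WireGuard whitepaper). A minimal sketch of splitting one into its fields (this is our illustration, not WireGate’s actual code):

    ```python
    import struct
    from collections import namedtuple

    HandshakeInitiation = namedtuple(
        "HandshakeInitiation",
        "sender_index ephemeral encrypted_static encrypted_timestamp mac1 mac2",
    )

    def parse_initiation(msg):
        """Split a 148-byte WireGuard handshake initiation into its fields."""
        if len(msg) != 148 or msg[0] != 1:   # message type 1 = handshake initiation
            return None                       # silently ignore everything else
        (sender,) = struct.unpack_from("<I", msg, 4)
        return HandshakeInitiation(
            sender_index=sender,
            ephemeral=msg[8:40],              # unencrypted ephemeral public key
            encrypted_static=msg[40:88],      # client static key + Poly1305 tag
            encrypted_timestamp=msg[88:116],  # TAI64N timestamp + Poly1305 tag
            mac1=msg[116:132],
            mac2=msg[132:148],
        )
    ```

    Decrypting `encrypted_static` identifies who to alert; decrypting `encrypted_timestamp` proves the sender holds the client private key.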
    With only a partial protocol implementation, it’s necessary to ask whether WireGate is realistic enough for an attacker to interact with. To have done its job, WireGate only needs to set off the alert letting us know someone has the client private key, but it’s less useful if an attacker can trivially detect that it is only a partial WireGuard implementation.
    WireGuard is one of the most impressive protocols we’ve seen for this. By design, it considers silence a virtue. Clients don’t get responses without a client key known to the server endpoint. An attacker can try to fingerprint the server by throwing packets at it to explore corner cases handled differently between implementations, but they get no packets in response. Only with a valid handshake can they begin to interact – by which time we are able to identify their client key (and by extension, generate the corresponding alert).

    Adding WireGate to canarytokens.org

    Canarytokens.org is the free Canarytokens server we host for the world. It has generated close to a million tokens, but this introduces a new complexity for us. Paid customers get their own Canarytokens servers, so we can run separate instances of WireGate per customer, but the public server is shared with everyone on the internet. If the same server key were used in all the Canarytoken WireGuard client configs we issue, that would make for an easy tell. We had to do better.
    In the ideal case we’d simply create a new server key for each Canarytoken WireGuard client config. To see why the naive approach doesn’t work, consider the fields in the initial handshake packet.
    The handshake is negotiated with the static server public key in the Canarytoken WireGuard client config. Without prior knowledge of the corresponding server private key, it isn’t possible to decrypt the encrypted client public key and determine the owner to alert. The naïve approach of trying every server key for every client key ever issued as a Canarytoken would take ever longer to handle each handshake initiation message that arrived.
    The simple workaround is that Canarytokens.org uses a fixed-size pool of server keys, and the Canarytoken WireGuard client configs are issued with server keys chosen randomly from the pool. To avoid managing thousands of keys, the pool of keys is derived from a single private key seed.
    Decrypting the client key with each server key in turn would work fine, but it can be done much faster. The initiation message includes an unencrypted keyed MAC, allowing us to guess and confirm which server key it was encrypted for. For each server key, we derive the corresponding MAC key and verify the incoming handshake MACs against each. This finds the correct server key much faster, and only increases the processing time for a handshake initiation message by a constant factor (the number of pool keys). It comes down to a few lines in Python.
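    As an illustration of those few lines (the mac1 construction follows the WireGuard whitepaper: a keyed BLAKE2s over everything preceding the mac1 field; the helper names are ours):

    ```python
    import hashlib
    import hmac

    LABEL_MAC1 = b"mac1----"  # label from the WireGuard whitepaper

    def mac1_key(server_public):
        # The mac1 key is HASH(LABEL_MAC1 || server_public), HASH = BLAKE2s-256.
        return hashlib.blake2s(LABEL_MAC1 + server_public).digest()

    def find_server_key(msg, pool):
        """Return the pool key the initiation's mac1 was computed for, else None.

        For the 148-byte initiation message, mac1 (bytes 116..131) is a keyed
        BLAKE2s-128 over the preceding 116 bytes.
        """
        mac1 = msg[116:132]
        for pub in pool:
            tag = hashlib.blake2s(msg[:116], digest_size=16, key=mac1_key(pub)).digest()
            if hmac.compare_digest(tag, mac1):
                return pub
        return None
    ```

    The loop is a constant number of cheap hashes per incoming packet, which is why the pool barely affects handshake processing time.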
    As the WireGuard whitepaper points out, this is only a very slight weakness in the handshake initiation’s identity hiding: it makes guessing the server public key possible (not the server private key). It’s only usefully exploitable here because of the circumstances we contrived: the WireGuard server already knows all the server keys (unlike an attacker, who can only passively observe messages). We’d be interested to know if there are better cryptographic tricks to find some biased bits in the handshake initiation message to build an index for efficiently looking up either server keys or client public keys. Our attempts all brushed up against parts of cryptographic primitives designed to resist exactly what we were doing. We like to think that, rather than this being a limitation of our own engineering abilities, it’s a virtue of the good cryptography used by WireGuard: it is hard to misuse in ways it was not designed for.


    At Thinkst Canary, we’ve been rolling our own partial protocol implementations since almost day one, wherever it makes sense for security and performance on lower-resource devices. That said, partial protocol implementations won’t fit every problem as well as WireGate fits WireGuard. If this problem also required emulating network services accessible over the WireGuard VPN, we’d be better off with a full implementation, ideally an isolated emulation. (For meatier protocols where native code implementations are unavoidable, those run sandboxed.) If it wasn’t clear already: we think WireGuard is great. The more time we spend working with it, the more we’re convinced others should too.
  • A Kubeconfig Canarytoken

    Introducing the new Kubeconfig Canarytoken

    A while back we asked:

    “What will an attacker do if they find an AWS API key on your server?” (We are pretty convinced they will try to use it, and when they do, you get a reliable message that badness is going on).

    Last month we asked:

    “What will an attacker do if they find a large MySQLDump file on your machine?” (We think there’s a good chance they will load it into a temp MySQL db, and when they do, you get a reliable message that badness is going on).

    This month, a similar question comes to the container world:

    “What will an attacker do if they find a good looking kubeconfig file on one of your servers?”

    If the answer is: “They will try to use it to access your Kubernetes cluster”, then again, you will receive a high-fidelity alert that badness is happening.

    This quick post presents our shiny new Kubeconfig Token (which emulates a kubeconfig file, the configuration text file that ordinarily contains credentials to interact with a Kubernetes cluster).

    A Canarytoken refresher

    Canarytokens are a great way to tripwire important servers and locations. With just a few clicks you get to drop legitimate-looking resources on your network that alert you when they’re used. Chosen correctly, the Canarytoken is impossible for an attacker to resist, while also guaranteeing an alert when it is used or accessed. This sort of detection tactic is super powerful because it is technology agnostic while exploiting the attacker’s objectives: it doesn’t matter if she broke into your network/cloud via a phishing attack or via a mega-complex supply chain attack. What matters is that she is there, and has objectives. If she finds a key to what could be your AWS presence, can she avoid using it? If she finds a doorway to one of your Kubernetes clusters, can she resist popping in?
    It’s not just about attacker curiosity. Most sophisticated attacks are less one massive blow, and more like death by a thousand cuts. With Canary (and Canarytokens), each of those cuts is an opportunity to reliably detect the compromise early on.
    We know that Canarytokens work because we’ve seen them used to catch attackers, pen-testers and nation-state adversaries around the globe. We also have great attacker feedback that quickly gets to the core:
    Attackers are forced to slow down and are forced to work harder when they don’t know what they can trust. As Paul McMillan points out, more tokens widely deployed actually slows down attackers everywhere since they increasingly can trust fewer things.

    Creating and using the Kubeconfig Canarytoken:

    Like all Canarytokens, using it is simple and straightforward:
    1. Surf to canarytokens.org and select the kubeconfig token;
    2. Enter your email address (to receive the alert) and a memo to remind you where you are dropping this token;
    3. We give you a working config file that you leave on the server and forget about.
    That’s it! Your work is done here!
    If, weeks or months from now, an attacker does manage to SolarWinds you, they’re going to find that config, and will probably use it to extend their compromise. And you will get a reliable notification that something isn’t kosher on \\CodeSignServer-03.
    The Kubeconfig Canarytoken can be placed on any server to make it seem like a Kubernetes node, or even on your engineers’ machines to make them seem like systems that regularly interact with your production Kubernetes cluster. Kubeconfigs are also used in CI/CD pipelines to authenticate to a cluster and utilize it for running jobs, so it can also be placed on a server used as a node in the pipeline.

    Background: The kubeconfig file

    A kubeconfig is a YAML formatted file that contains all materials needed to authenticate to the control plane of a Kubernetes cluster. This includes the name of the cluster, API Server endpoint and the user information.
    This file is typically used with kubectl, which is a command-line utility used to run commands against a Kubernetes cluster. The kubeconfig file can be used with kubectl by:
    1. Specifying its path as the value of the --kubeconfig flag when running kubectl
    2. Specifying its path as the value of the KUBECONFIG environment variable
    3. Placing its contents at $HOME/.kube/config (this is the most common approach)
    Our Kubeconfig Token is a kubeconfig file generated by us that can be used just like any other kubeconfig file — it contains the API server endpoint that kubectl should connect to, as well as the credentials needed to access the API server. These credentials are unique to your Canarytoken, and are how the token works. Using the kubeconfig file will simply return permission errors for an attacker, and result in a neat, timely alert for you!
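    For reference, a kubeconfig has roughly this shape (the values below are placeholders for illustration; the generated token contains a working endpoint and credentials unique to it):

    ```yaml
    apiVersion: v1
    kind: Config
    clusters:
    - name: prod-cluster
      cluster:
        certificate-authority-data: <base64 CA certificate>
        server: https://<API server endpoint>:6443
    users:
    - name: admin-user
      user:
        client-certificate-data: <base64 client certificate, unique per token>
        client-key-data: <base64 client key>
    contexts:
    - name: default
      context:
        cluster: prod-cluster
        user: admin-user
    current-context: default
    ```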

    Background: Why Kubernetes?

    Kubernetes is a beast. It’s the most popular container orchestrator and has continued to be adopted widely at a steady pace, even in odd places: the U.S. Department of Defense has even run Kubernetes on F-16 fighter jets!
    But because Kubernetes is also a complex, often misunderstood and misconfigured beast, it is easy to get things wrong when setting up a cluster. Predictably, malicious actors now routinely search for Kubernetes-specific environment markers in their recon phase after compromising internet-exposed services.
    Palo Alto’s Unit 42 has documented recent attacks against containerized workloads running on Kubernetes clusters in the cloud, where attackers either hunt specifically for the kubeconfig file or employ malware that searches for files with the same structure:
    From https://unit42.paloaltonetworks.com/azure-container-instances/
    From https://unit42.paloaltonetworks.com/siloscape/


    Kubeconfig files are attractive to attackers and are already hunted for during active campaigns.
    With a few clicks on https://canarytokens.org we give you a kubeconfig that will alert you when it’s used. Grab a few and sprinkle them around. It’s free, and “it just works”.
    P.S. Our paid Canary service also now enables you to trivially deploy container Canaries in Kubernetes or Docker. Check out https://canary.tools to find out more.
  • Good attacks make good detections make good attacks make..

    (The making of a MySQL Canarytoken)


    Consider this scenario: An industrious attacker lands on one of your servers and finds a 5MB MySQL dump file (say, called prod_primary.dump). What do they do next?

    Typically, they would load this dump-file into a temporary database to rummage through the data.

    As soon as they do, you get an email/SMS/alert letting you know:

    Canarytoken Alert

    Ed’s note: You can create and deploy these by visiting canarytokens.org

    (completely free; no registration needed)

    There are obvious benefits to these sorts of booby-traps, but some rise above the rest:

      • They can be deployed in seconds;
      • They aren’t prone to high false-positives;
      • An attacker who suspects you are using these is no better off for knowing this (if nothing else, they now have to second-guess everything they touch);
      • It’s such a pure illustration of attack-minded defense.
    In this post I’m going to write about the process of discovering and building our new MySQL dump-file token.

    It Begins…

    While working on our recent ThinkstScapes release I grabbed an archival dump of the Thinkst Con Collector to generate some stats. With an auto-generated dump file, the easiest way to explore it is to import it and run queries against it. I used a common Docker one-liner.
    docker exec -i mysql_cnt mysql -uroot -proot db < dump.sql
    This made me wonder: Could I Canarytoken a dump to let me know when it was being opened like this by an attacker?
    There are two base techniques that are used by many of our Canarytokens:
    • Can we get the system to surf to a URL we control?
    • Can we get the system to look up a DNS entry that we control?
    Forcing a DBMS to reach out to other servers is a classic SQL injection challenge, and past work laid out techniques [datathief][squeeza] to solve this on other DBMSes (Oracle and MSSQL) by leaning heavily on their robust shell programming abilities. MySQL is different, though, with less functionality in this regard. On my Linux host (or Docker container), I couldn’t use cute tricks like calling LOAD DATA INFILE with a file path containing a Canary DNS entry
    (e.g., LOAD DATA INFILE '\\DNS-entry.thinkst.com';).
    A quick search of attack-minded blogs seemed to corroborate my thinking: MySQL in its default state is restricted enough to prevent these types of tricks.

    Sizing up the problem

    After much fiddling and searching I found a promising lead through database replication.
    This was a mixed bag. The community/open-source version of MySQL does support replication, but a server would need to be configured (through its config files) to support it. We can’t expect an attacker who stands up a temp MySQL server to first reconfigure their DBMS, and I almost gave up. But then in-band signalling once more saved the day (or led to horrible insecurity): it turns out that this configuration can be overridden at runtime by SQL commands!
    Keep in mind that all we want at this stage is a set of MySQL queries (run on a standard MySQL instance) that will reach out and touch a foreign server somehow.
    I set up a netcat listener on a remote server, and on the MySQL server side I configured the replication to point to it.
    Then, I typed START REPLICA; and waited.
    And waited.. and… nothing. The netcat listener showed no evidence of a connection, and I had nothing on my MySQL session.
    Running SHOW REPLICA STATUS; on the MySQL server revealed that the server did try to connect (already enough for a DNS token), but the connection failed. I replaced netcat with tcpdump and retried the process. This time I did see a connection to my server, but no data was exchanged, and the MySQL status still reported a failure.

    A deeper dive into the MySQL Handshake

    Turning to the MySQL documentation on how a connection between a client and server is established, we learn that although the client (or replica-seeking server) initiates the connection, once open the client then waits for the server to return a Handshake packet. This packet describes the server’s capabilities and supported authentication plugins to the client, so they can mutually agree on the fullest feature set for the connection.
    We had enough capability at this point to create a DNS-based Canarytoken, i.e.:
    1. Create a unique DNS host-name on a domain we control; (this is done by simply visiting Canarytokens.org and creating a DNS-Canarytoken: we get back something like: 0iep6h5na3p4coxx4hax132b8.canarytokens.com)
    2. On the MySQL server, issue the commands CHANGE MASTER TO MASTER_HOST='0iep6h5na3p4coxx4hax132b8.canarytokens.com'; START REPLICA;
    Even though the actual replication never takes place, the MySQL server resolves the DNS name of the foreign server, effectively tripping the DNS token and letting us know that it’s happened.
    But… while a DNS-based Canarytoken (like this one) is great at letting us know that something happened (someone ran this MySQL import), it isn’t great at giving us more details. Our DNS server is able to tell that the record 0iep6h5na3p4coxx4hax132b8 was requested, but can only tell us the IP address of the DNS server that made the request.
    We can do better.

    If we can get the remote server to complete the MySQL handshake, we’d get a connecting IP address and we can possibly stuff more information from the MySQL server into the username/password fields that we submit.

    There were a few useful blog posts describing the packet format, but it’s a relatively complex encoding that includes some fixed-width fields, some NULL-terminated fields, and some packing fields. Since I didn’t need the connection to complete, I simply captured (above figure) the Handshake packet from a real MySQL server (configured explicitly not to support SSL, to simplify the next steps), and wrote a small Python server to send those captured bytes to every connection.
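    A minimal sketch of such a replay server (the handshake bytes below are placeholders; in practice they’re the exact bytes captured from the real server):

    ```python
    import socketserver

    # Placeholder only: in the real token these are the raw Handshake packet
    # bytes captured from an actual MySQL server (with SSL support disabled).
    CAPTURED_HANDSHAKE = b"\x2e\x00\x00\x00\x0a5.7.34-fake\x00" + bytes(40)

    class FakeMySQLHandler(socketserver.BaseRequestHandler):
        """Replay the canned server Handshake to every client, then read back
        the client's login request (which carries the username, and in our
        token, the extra host details smuggled into it)."""

        def handle(self):
            self.request.sendall(CAPTURED_HANDSHAKE)
            login = self.request.recv(4096)
            if login:
                # A real implementation would parse the username out of this
                # login request and fire the corresponding Canarytoken alert.
                print("received %d bytes of login data" % len(login))

    def serve(host="0.0.0.0", port=3306):
        with socketserver.ThreadingTCPServer((host, port), FakeMySQLHandler) as srv:
            srv.serve_forever()
    ```

    Because the greeting is replayed verbatim, any MySQL client that connects happily proceeds to send its login packet, which is all we need.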
    This allows the handshake to go further (and lets us submit a username/password combination from MySQL to the foreign server).
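    A minimal version of that replaying server might look like this. This is a sketch, not the Canarytokens implementation: the captured handshake bytes are a placeholder, and the username parsing assumes the client sends a standard HandshakeResponse41 packet.

```python
import socket

# Placeholder: in practice these are the raw Handshake bytes captured
# from a real MySQL server (with SSL support disabled).
CAPTURED_HANDSHAKE = b"..."

def extract_username(packet: bytes) -> str:
    """Pull the username out of a HandshakeResponse41 packet.

    Layout: 4-byte packet header (3-byte length + 1-byte sequence id),
    then 4 bytes of capability flags, 4 bytes max packet size, 1 byte
    charset and 23 reserved bytes -- followed by a NUL-terminated username.
    """
    payload = packet[4:]
    return payload[32:].split(b"\x00", 1)[0].decode("utf-8", "replace")

def serve(host: str = "0.0.0.0", port: int = 3306) -> None:
    """Accept connections, replay the captured Handshake, log the login."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                # Replay the captured server Handshake verbatim...
                conn.sendall(CAPTURED_HANDSHAKE)
                # ...and read back the client's login attempt.
                response = conn.recv(4096)
                if response:
                    print(addr[0], extract_username(response))
```

    Because the server never validates anything, it happily "accepts" a connection from any MySQL replica, which is all we need to capture the connecting IP and the submitted username.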
    This gives us all the pieces we need to create a point-and-click Canarytoken to cover this use case (and we did). Here’s how you can use it.

    Using the Canarytoken

    If you visit https://canarytokens.org and click the token selection dropdown, you’ll notice a shiny new MySQL token.
    In typical Canarytoken style, you then supply just two pieces of information.
    1. An email address to receive the alert;
    2. A reminder note to jog your memory when this alert fires;
    That’s it. When you hit “Create my Canarytoken”, we give you two quick ways to make use of the token, choose the one that’s best suited for your scenario (likely option 2):


    We give you the MySQL snippet you need to add to a MySQL dump of your own. If this statement is included in a dump-file that an attacker loads into their MySQL server, their server will reach out to ours, to let us know it’s happened.
    By default, we encode this snippet so that an attacker eyeballing a plundered MySQL dump file doesn’t spot it immediately. If we un-select the “Encode snippet” option (marked as [3]) we can take a closer look at what’s happening.
    The snippet now looks like:
    SET @bb = CONCAT("CHANGE MASTER TO MASTER_PASSWORD='my-secret-pw', MASTER_RETRY_COUNT=1, MASTER_PORT=3306, MASTER_HOST='k5sk4zeo5csej6pps4vkgzmtp.canarytokens.com', MASTER_USER='k5sk4zeo5csej6pps4vkgzmtp", @@lc_time_names, @@hostname, "';");
    PREPARE stmt FROM @bb;
    EXECUTE stmt;
    Notice that we use the unique hostname created by the Canarytoken server as the MASTER_HOST and as part of the MASTER_USER. This means that the attacker’s MySQL server will (in the best case) actually do three things for us:
        1. Her server will look up k5sk4zeo5csej6pps4vkgzmtp.canarytokens.com, triggering the DNS token and letting us know that the MySQL dump file we left safely on NYC-DC1 was just loaded into MySQL somewhere;
        2. Her server will connect to our fake MySQL server (which now knows her MySQL server’s IP address, a strong thread to pull on);
        3. Her server will attempt to log in to our fake MySQL server with a username built from the unique token plus the values of @@lc_time_names and @@hostname, which gives us more information on the attacker that we can report on when letting you know the token has tripped.
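    Recovering the individual fields from that concatenated username is straightforward if you assume fixed lengths. This is a sketch: the 25-character token length and the 5-character ll_CC locale format are assumptions about this example, not guarantees about the real alerting backend.

```python
TOKEN_LEN = 25   # length of the unique Canarytoken label in this example
LOCALE_LEN = 5   # MySQL locale names are of the form ll_CC, e.g. en_US

def split_username(username: str) -> dict:
    """Split the CONCAT()-built username back into its parts.

    Assumed layout: <25-char token><5-char @@lc_time_names><@@hostname>.
    """
    return {
        "token": username[:TOKEN_LEN],
        "locale": username[TOKEN_LEN:TOKEN_LEN + LOCALE_LEN],
        "hostname": username[TOKEN_LEN + LOCALE_LEN:],
    }
```

    So a login attempt with username k5sk4zeo5csej6pps4vkgzmtpen_USattacker-db01 decodes back into the token, the server’s locale, and the attacker’s hostname.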


    For users who don’t have a MySQL dump file lying around (or don’t want to risk creating one with real data), we also offer option 2, where we take a sample MySQL file, run it through a quick mixer to generate some random linked tables with data, and insert the tokened snippet into it. No mess, no fuss. Simply drop the file on your server (or in your Dropbox, or in your email) and if you ever get the notification to let you know that it’s been loaded, you know you have a problem.

    In many ways, tokens like this are a joy. If you leave this Staff_Salaries-2021.mysql-dump.sql file in your email and forget about it for 10 years, it costs you nothing.

    If you get a notification letting you know that the mysql dump you left in your mailbox just got loaded into a MySQL server in $FOREIGN_COUNTRY? That’s priceless.

    There’s no chance of it being an accidental alert, and there’s almost no chance that an attacker would find a large enough MySQL dump file without loading it into a DB to check it out.

    Absolute win.

    Bizarro Bonus:

    It’s super common for security companies to release attack tooling and then briefly mention some form of defense. We think a full 180° on this trend is worth it.
    The START REPLICA technique effectively gives us a usable new method for SQLi exfiltration from MySQL servers. On the offense side, tools have existed for years to exfiltrate a database when access is limited to [blind] SQLi: Data Thief used Oracle functionality to pull data directly to a remote server, and squeeza used DNS on MSSQL as both a command-and-control channel and a data channel back to the attacker in constrained networks (DNS is usually permitted even when other egress ports are restricted). In theory this technique could be used as a bandwidth-limited channel to exfiltrate data from a database that has no other egress methods. A domain name can contain up to 253 characters, though some of those are consumed by the base domain and the dots separating the labels, and MySQL usernames max out at 16 characters, so the exfiltration bandwidth is limited to a bit over 200 bytes per replica attempt.
    It was simple to modify the Python server I wrote for Canarytoken alerting to collect the usernames and append them to a string buffer, so I quickly prototyped a SEND_STR(str); function in MySQL [5] to act as an exfil primitive. Coupling that with some code to GROUP_CONCAT rows together, it was easy enough to [slowly] send a table (or select columns therein) over a blind SQLi. To test the speed, I generated a random 1024-character string and timed sending it: ~26 seconds. This works out to a ~315 bit/s channel using only the username field as an exfil path, plenty to grab some password hashes or payment information. Future work would include using subdomains of a short domain to transmit more data per replica request.
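    The arithmetic behind those numbers can be sketched as follows (SEND_STR and the server plumbing are elided; this only shows the chunking and the back-of-envelope throughput):

```python
def chunk_for_usernames(data: str, limit: int = 16) -> list[str]:
    # Each replica connection attempt can carry at most one
    # username-sized chunk (historically 16 characters on MySQL).
    return [data[i:i + limit] for i in range(0, len(data), limit)]

# A 1024-character string splits into 64 username-sized chunks,
# i.e. 64 replica connection attempts.
chunks = chunk_for_usernames("A" * 1024)

# The observed wall-clock time for the 1024-char test was ~26 seconds,
# which works out to roughly 315 bits per second.
throughput_bps = 1024 * 8 / 26
```
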

    Closing remarks

    We hope you’ve enjoyed following along in building a new type of Canarytoken, and seeing how this defensive capability is deeply rooted in offense. Once you start thinking about high leverage positions to detect attackers, it’s pretty hard to stop. Once you build some infrastructure to help you do it, doing it more becomes trivial.
  • RDP, cmdkey, Canary (and thee)

    Last month Florian Roth (cyb3rops) reacted to the news of Mimikatz dumping RDP credentials by asking how we could easily inject fake credentials into machines.


    Markus Neis pointed out that on Windows, cmdkey allows you to do this: 

    This is pretty awesome. Mimikatz is used by attackers the world over, and having control of the data a Mimikatzer will see is a powerful tool to have. One route to spotting Mimikatz usage is injecting false credentials into lsass and watching for their use in Active Directory, but tracking that credential usage requires some work on your domain controllers (or your SIEM). With the RDP service back on version 3 Canaries, we can use cmdkey to point attackers at our Canaries, and not have to worry about Active Directory integration.
    Let’s start by setting up a Canary as a Windows Server (called \\02-FINANCE-02).
    This will take all of 1 minute (if you’re a slow typer).
    Now even with the default settings, this Canary looks legit on the network:
    You can enrol Canaries into Active Directory, so enterprising attackers will sniff around for it on the network and inevitably try its file shares.

    Which gives you that one alert, when it matters.

    We put in significant work to make sure that Canaries don’t expose your network to additional attack surface by running full-blown operating systems, while still looking legit enough that an attacker feels compelled to use them.
    Markus’ cmdkey trick gives you another way to point an attacker to your bird.
    This means that an attacker that compromises this desktop gets bogus credentials [administrator/super-secret-123] but more importantly, they get a pointer to your Canary [\\02-FINANCE-02]
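    The seeding step itself is a one-liner. The hostname and credentials below are the example values from this post (pick ones that fit your environment), and for RDP-specific credential stores the /generic:TERMSRV/<host> form is commonly used instead:

```bat
REM Run in cmd.exe on the workstation you want to seed.
REM Stores a bogus credential pointing at the Canary masquerading as 02-FINANCE-02.
cmdkey /add:02-FINANCE-02 /user:administrator /pass:super-secret-123

REM Confirm the entry landed in the Credential Manager:
cmdkey /list
```

    Anything that later dumps credentials from this machine (Mimikatz included) will happily scoop up the fake entry along with the real ones.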
    We’re starting to repeat ourselves here, but this gives you that one alert that lets you know when bad stuff is happening.
  • Would you know if your phone was hacked?

    Would you know if your phone was hacked? Even the most powerful people in the world (if you use wealth as a proxy for power) don’t.

    The problem is that, much like your networks, there are an almost unlimited number of ways for attackers to break into your phones, so this problem seems intractable at first blush. But (just like when they break into your networks) attackers who break into your phones are looking to achieve certain objectives, and you can use these objectives to reliably detect them.
    Today we released our new version of Canary, and with it, customers also get the shiny new WireGuard Canarytoken appearing on their consoles.

    What’s a WireGuard?

    WireGuard is the incredible VPN built by Jason Donenfeld. We love it. We use it. People smarter than us think you should too.

    What’s a WireGuard Canarytoken?

    Once a serious attacker gets onto your device, they have a certain set of objectives.
    • Grab salacious data;
    • Grab access to other services;
    • Ensure repeat access or spread their compromise further.
    In general, Canarytokens are little tripwires that you can place on your devices or in your applications which reliably let you know when they are accessed. Canary customers are able to deploy an unlimited number of these tokens around their networks and non-Canary customers can get the same benefit by visiting our free hosted server at canarytokens.org.
    So, the WireGuard token looks like a regular WireGuard VPN connection on your device.
    To deploy a WireGuard Canarytoken to your CEO’s or CFO’s phone, simply download the setup from your Canary Console (as a handy QR code if you like), and configure the VPN on their device. (In fact, you should probably drop some on key servers too.) And then… forget about it!
    When the sort of attacker you care about compromises that phone (or that device), they see a VPN and can’t resist checking what’s on the other side of it. And as soon as they do, you’ll get a notification saying someone tried to connect to that VPN (and it’ll identify whose phone it was).
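    For reference, the deployed token looks like an ordinary WireGuard client configuration. Every value below is a placeholder; the Console generates real keys and endpoints per token:

```ini
[Interface]
# Placeholder private key; each token gets its own keypair.
PrivateKey = <placeholder-private-key>
Address = 10.10.0.2/32

[Peer]
# The "server" side is the Canarytoken endpoint that alerts on any handshake.
PublicKey = <placeholder-public-key>
Endpoint = <token-endpoint-hostname>:51820
AllowedIPs = 0.0.0.0/0
```

    To the attacker it is indistinguishable from a real corporate VPN profile, and any handshake attempt against the endpoint is the alert.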

    Setting up the token is dead simple:

    1) Select the token from your Canary console;
    2) Give the token a friendly reminder note;
    3) Use the WireGuard app to snap the generated QR code;
    4) That’s it.
    So when you notice that your CEO is still in Seattle, but his VPN was just accessed from Saudi Arabia… that’s a raging clue.
    We love Canarytokens like the WireGuard token because even if the attacker suspects that it’s a trap, it’s really hard to avoid using it.
    What do you do as an attacker? Simply ignore the potential access it might yield? (You could, but that’s going to slow you down terribly, which would be a win for team defense too).
    The WireGuard Canarytoken is available on all Canary servers today and will make its way to the public Canarytoken server in the next few weeks (when we will explain more of its inner workings).
  • We bootstrapped to $11 million in ARR

    This year Thinkst Canary crossed the line to $11M in ARR. That number is reasonably significant in the startup world, where Lemkin refers to it as “initial scale”. For us, it’s a happy reminder of Canary’s spread into the market. $11M ARR certainly isn’t our end goal, but it provides the fuel for us to keep building the company we want to work at.

    We got here without raising a dime in capital, shipping a hardware/SaaS hybrid, sitting way outside Silicon Valley. That’s different enough from many startups that we figured it was worth a post with some thoughts on how we got here¹.


    To be clear, we’re not anti-VCs. From the beginning though, we wanted to try bootstrapping. In the past we’ve spoken on how founder ego can nudge you towards building VC-backed companies (and why you might not need to), but that’s less focused on VCs and more aimed at founders.


    Canary launched in mid-2015, after we worked on it privately for about a year. Our thesis was that security honeypots deployed internally would allow customers to discover when they are breached (without them having to be informed by third parties some 300 days later). 
    The key was that deployment and management needed to be dead-simple. 
    Honeypots have a long history in the security world (pre-dating Canary by decades), but were always painful to install and run. Removing this pain became our North Star. At launch we demoed for Ars Technica who wrote an early article on us.
    That one article was about the extent of the international press coverage we received, despite actively pitching to a bunch of publications. What’s interesting here is that funded (now defunct) competitors did not seem to have the same trouble getting press. When we dug into this, the answer we received from one journalist was forthright: funding is a positive signal for journalists. If they have funding, they’ll get the coverage.
    I mention it because, while it was disheartening at the time, it turns out we didn’t really need launch coverage. The product found its market without lots of press, by being something people want (to butcher pg’s maxim). For bootstrapped startups, worry less about the coverage and more about whether your product does what you promise.

    Customers who love you

    The Canary pitch makes two key promises:
    1. We promise we will be super easy to deploy (v1 used to take 4 minutes and now we are down to almost half that);
    2. We promise we won’t drown you in alerts.
    We knew that Canaries worked, but we also knew how much of a leap of faith it takes to buy a $7.5k security tool over the Internet. We were so crazily grateful for the early customers who took a chance on us that all 5 of us worked like crazy to make sure we never dropped a promise. Being genuinely grateful for our customers and working hard to keep our promises is still baked into everything we do and still guides all our product decisions.
    That gratitude means our sales will never be spammy, and our legal documents try hard to be in simple English and gotcha-free.

    Promises are funny things. It’s easy to make one, and it sounds good when you do, but the test only comes later. In a broken organisation, one group makes promises which another must deliver on (and often can’t). We recognise that sometimes we make mistakes: perhaps a customer hit a bug, or we sent a poorly worded reply to a customer query, or a service was offline. The key for us is that the promises we make are known throughout the company, and it’s everyone’s job to hold us to them. Anyone in sales, support, or success can raise issues with engineering; sales folks never promise delivery on requests unless confirmed with the product team. Everyone is aligned on this key point: we exist to give our customers the right alert when it matters. Anything which gets in the way of this must go…
    We sweat every detail of the product to make sure it’s simple, does what it says on the tin, and then gets out of the user’s way. We don’t have trackers around every page-load and aren’t trying to maximize users’ time in-app. Is it the simplest it could be? Of course not! We’re still sweating those details.
    Customer focus is the main reason that almost all of our customers from year-1 are still customers today (specific shoutout to Bill from San Diego, you know who you are!). In that time we’ve never raised our prices, but our customers increased their spend with us (more than 10x in some cases). 
    With that focus on customers, a feedback loop is necessary to judge whether you’re doing things right. Our last NPS run came in at 80 (and we once had a customer write us a song). We’re also pretty active on Twitter and love interacting with customers or potential customers there. All of this becomes part of our Canary brand, which aims to be low-key, earnest, kinda-humorous, and effective.
    We still get tweets like this on a regular basis:

    Twitter is amazing for unfiltered views and we saw lots of chatter about Canary. At some point (early on) we realised this was a kind of virtuous cycle: treat customers well, and they post unsolicited comments on Twitter, which helps us attract more customers. We started a page to highlight some of these unsolicited tweets about our birds and https://canary.love is probably our top sales person today. (This approach works well if customers actually like your product; it’s a nightmare if there’s negativity towards it. We’ve seen a few other young security startups try this approach too, with success).

    RSAC and Trade-Shows

    One of the joys of building a company “our way” is that we get to nibble at how things are traditionally done, and still get to add our spin on it. In 2017, we visited the RSA Conference for the first time as attendees and decided to hire a booth in 2018. But just because we were boothing, didn’t mean we had to go all the way into booth ridiculousness.

    We did the show, and documented our experience (and our costs) extensively in a 5000 word blog post. If you are a young company considering a trade-show, it’s worth a read. (tl;dr: done right, the show was easily worth it for us). We’ve since met dozens of startups who told us that the info in that post convinced them to try trade shows too.

    Trade shows are one place to spot companies that recently got funding; they make outsized splashes in terms of booth size, staffing levels, swag, and (it seems most importantly) the parties they throw.

    Conference Parties!

    In 2017 we attended a bunch of evening functions and most were not really our scene. So in 2018, we rented a venue about a kilometre from the madness for a little gathering of our own. While all the parties and bars were full and loud music was making sure nobody could hear anybody else, we had a quiet location where we bought drinks and pizza as Halvar Flake spoke about his experiences selling his first company to Google.
    In 2019 we had the same deal with Jon Oberheide talking about how they scaled and sold Duo Security to Cisco.

    The audience was mostly friends or customers, so this wasn’t a sales push. It was our take on an RSA conference event, a quiet collection of smart folks talking about security and how they built their security companies. It’s the thing we wanted but hadn’t seen: in a week where everyone is selling all the time, an opportunity to mingle without getting a badge scanned or a business card thrust in your face. Learning is one of the reasons we built our company, and if we’re going to be in SF during RSAC, we might as well learn from some of the smart people in town.

    Open Source

    Like most young tech companies, we’ve cut our teeth on Open Source software. Although we make a living by selling Canary and Canarytokens, we give away Docker images of Canarytokens and build and support the BSD-licensed OpenCanary. (Aside from the time we put into it, our own free software is probably the “competitor” we bump into most in the marketplace, even if it’s not that often).
    We also get to contribute monetarily. Supporting projects with the proceeds of our growth is a great pleasure, and we’ve either sponsored or donated to Twisted, OpenBSD, iTerm, Homebrew, WireGuard, and smaller projects. We’re also a USENIX benefactor. We don’t see these as advertising opportunities, but genuinely think they deserve it and we’re super glad (and proud) we can play a positive part in the ecosystem.

    A nice place to work

    The rush to market in our early days demanded long hours, but we’ve been able to grow our team over this time so we didn’t all have to be “on” constantly. Today, while still small by sprawling SV standards (we are 22 people all in) we get to work pretty normal hours. (It’s still totally normal to see people chatting on Slack in the wee hours of the morning, but that’s just as likely to be commentary on recent NFT craziness as it is to be about work.)
    We get to focus on projects that reasonably stretch each other, we get to work on features we think are important, and we get a chance to ship stuff that doesn’t suck. (Just committing to not sucking is surprisingly rare.) Over time this means we get to build a team of smart people who enjoy what they do and enjoy how we do it.

    Customer Swag/Gifts

    We hate poor-quality swag. Walk around a typical tech (or security) conference and cheap gifts abound. T-shirts get handed out by the truckload, but often they’re stiff, scratchy, and not from the supplier’s premium range. This is crazy. Uncomfortable apparel simply doesn’t get worn, and gets repurposed as rags around the home. Those weird conference bags get turfed, and the plastic pens go to the bottom of a drawer. It’s genuinely strange how many companies dole out cheap swag for marketing.
    It’s such a huge thing to have someone willingly wear your logo. They’re publicly associating themselves with the Canary brand we’ve poured so much effort into building. It makes no sense to give them the cheapest T-shirt/hoodie possible. We spend time designing our gear and then spend time getting it just right because we can’t imagine doing it any other way.
    Incidentally, this is also slightly different from typical customer acquisition costs, where the spend goes directly to non-customers: the biggest recipients of our swag are our existing customers. We don’t think it’s the reason they stay with us, but it makes us smile to make our customers smile.

    No doubt there are people who swing by the RSA booth and grab a T-shirt without knowing who we are, but I choke up when I see customers we love wearing our gear.
    Last year we grew tired of swag-fulfilment companies messing up the last-mile of our gift deliveries and built our own https://gift.canary.tools/ to handle this end-to-end. It’s a tiny site, but gives us flexibility for easily sending anyone around the world a gift. Everyone at Thinkst is empowered to send gifts. Maybe it’s for a pull request on one of our Open Source projects, or a great idea, or a heads-up on typos in documentation, or even just a happy email. Customers in return love it, so for us, the whole thing is an absolute no brainer.

    Giving Back

    One of our core values at Thinkst is that we can do well by doing good. We sit in South Africa, which ranks first in the world for income inequality. Last year we were able to cover tuition and accommodation for three university students, and over the past two years we’ve managed to donate over a million dollars to local charities. We have some cool plans to do a little more in this space, but we’ll save those announcements until we have more of those runs on the board.

    The Product, the product, the product

    If there’s one take-away from this post for young startups, let it be this: 
    The product absolutely matters. 
    Hot takes about how “better products don’t always win” might be able to find examples where it’s true, but a great product covers you from lots of other weaknesses. If you can combine a great product with a really low burn-rate you are in fantastic shape for the road ahead (that’s particularly true for entrepreneurs outside the Valley).
    From day-1 we’ve focused relentlessly on making Canary better, easier to deploy and easier to manage. We constantly research and develop new Canarytokens that can be used to detect badness with high levels of certainty (for small deployment costs). As we’ve grown, we’ve been able to hire smarter people so we’ve been able to continually up our game, from the devices we ship to the infrastructure that makes it all possible.
    We gave an entire keynote at VB2019, titled “The products we deserve”, lamenting the current state of security products (but explaining why we’re hopeful for positive change).

    What do you miss out on when you bootstrap?

    It wouldn’t be fair to write this post without discussing things we missed out on when choosing to bootstrap. We’ve not been completely isolated from VCs; to the contrary we’ve had lots of open dialogue with a bunch of them. This gives us some insight into what we’ve missed.
    • Great VCs make good sounding boards for problems you might be having (but great VCs are pretty rare). So founders need to make really careful choices, especially if they’re looking for advisors in their VC.
    • As mentioned previously, early press is easier to come by for funded startups, but we also don’t think that that sort of press is super helpful.
    • The journalist who asked about our funding indirectly highlighted the key point: VCs give you credibility and in some ways give you permission to act grown up. 
    • One of the challenges when you are tiny is trying to find your place in the world. Are you a CEO when it’s you and 4 friends building software? (You almost certainly are post a $15M A-Round). Aligned with that is a type of beauty contest; companies funded by tier 1 investors get to hobnob with other successful founders, and then get to act even more grown up. Does that have an impact on the company operations or product development? Tough to measure, but it almost certainly has benefits for exits.
    • Additional funding rounds often lead to more press, but I’m dubious of this benefit, at least for products like ours (with bottom-up sales), and it certainly won’t lead to more customer love.

    What now?

    For us, it’s absolutely business as usual. We know that we’re still judged by our next update.  That all our previous customer interactions don’t matter if we screw up the next one. So we continue to work like hell keeping our promises, growing Canary by making even more customers happy. If that takes us to $100M ARR, we’ll blog again!
    ¹ Posts like these can sometimes feel prescriptive (“We got here because of these, so you should do likewise”), but that’s not our intention. We’re thrilled to cross that $11M ARR line, and there were many moments along the way. Even with hindsight it’s hard to know which of these were important, but they were fun. We’re not going to dive into numbers, but we’ve been profitable since year-1 and we continue to grow.
  • On SolarWinds, Supply Chains and Enterprise Networks

    The recent SolarWinds incident has managed to grab headlines outside of our security ecosystem. The many (many) headlines and column inches dedicated to the event are testament to the security worries that continue to reverberate around the globe. But we think that most of these articles have buried the lede.

    Most discussions take the position that our enterprises are horribly exposed because of supply chain issues and that any network running SolarWinds should consider themselves compromised. 

    We think it’s actually more dire than that (and suspect it’s going to get worse). Let us lay out the case for why SolarWinds should concern you even if their tools are nowhere near your networks.

    It’s easy to whip up a think-piece in the wake of a public security incident, especially as a vendor; the multitude of vendor mails riding the SolarWinds incident is overflowing our inboxes. But even a stopped clock is right twice a day, and this is one of those times.

    An abstracted, low resolution summary for those (very few) who haven’t paid attention to the incident:

    • SolarWinds make a network management product called Orion, which is deployed in tens of thousands of organisations worldwide;
    • Attackers broke into SolarWinds and made their way to the SolarWinds build environment;
    • They compromised the build pipelines, to inject malicious code into the SolarWinds update process;
    • Organisations all over the world updated themselves with this poisoned update;
    • (Now) Compromised SolarWinds servers worldwide attacked internal networks of selected organizations;
    • Almost nobody noticed any of this for months, until a security company discovered their own compromise.
    The technique of compromising a single source, which then updates other nodes isn’t novel. As recently as yesterday we saw headlines like: “Barcode Scanner app on Google Play infects 10 million users with one update” and indeed this was how Will Smith and Jeff Goldblum saved us when the aliens first made contact.
    The attack gets called a “supply chain” attack, which hints at war-time tactics and, I’m willing to bet, will launch a dozen cyber security / resilience startups. People are (rightfully) worried about the knock-on effect, since the SolarWinds attackers had access to several other software product companies and could have poisoned those wells too. This is definitely scary! But hear me out: it’s actually a little bit worse than you might think.

    Why it’s actually worse than we think

    The state of enterprise security: While we’ve made progress in some areas of information security (e.g. the degree of knowledge and skill required to exploit memory corruption bugs in modern OSs), enterprise security is still stuck pretty firmly in the early 2000s. An enterprise network consists of an untold number of disparate products, loosely coupled through poorly documented interfaces, where the standard for product integration is often “this config works, don’t touch it”. Any moderately skilled attacker will decimate an internal corporate network long before they are discovered, and the average time it takes to gain Domain Admin is measured in hours and days instead of weeks or months.
    Most organizations don’t know this though. They know they spend money on security and they know they see charts tracking progress. Most have no clue that faced with an average attacker of moderate skill, they’d almost certainly come off second best.
    Enterprise Products: Even ignoring the weakness that comes with cobbling together many products (security at the joints), most enterprise products won’t hold up very well to serious security testing. Heavyweight vendors like Adobe and Microsoft were publicly spanked into upping their game years ago, but it drops off pretty steeply after them. There’s an interesting carveout for online SaaS companies, who have to build security competency since they run their own infrastructure and compromising their products is the same as compromising them. But for products installed into an enterprise network the incentives are horribly misaligned. Owning, say, Symantec’s antivirus agent doesn’t compromise Symantec; it compromises you (who are running it), and this separation makes all the difference.
    Enterprise networks have too many moving parts: The past few years have seen creative hackers exploit software in places that we never knew were running software. The Thunderstrike crew ran code on Apple video adaptors. Ang Cui has run code on monitors, and office phones. Bunny and xobs ran code on SD-cards, and a number of people have now run Linux on hard drive controllers. This makes it clear that the average office network is connected to dozens and dozens of types of devices that won’t ever make it into a regular audit, but that are nonetheless capable of hiding attackers and injecting badness into your network.
    3rd Party Risk evaluations: The joke going around after the incident was that SolarWinds had negatively impacted hundreds of organisations, but definitely passed their 3rd party risk evaluations. It’s slightly unfair, but also true. We don’t have a good way for most organizations to test software like this, and 3rd party questionnaires have always been a weak substitute. Even if we could tell if a product was meeting a minimum security bar (using safe patterns, avoiding unsafe calls, using compile time safety nets, and so on) auto-updates mean that tomorrow’s version of the product might not be the product you tested today. And if the vendor doesn’t know when they are compromised, then they probably won’t know when their update mechanism is used to convert their product into an attacker’s proxy.
    (Please note: We aren’t saying that auto-updates are bad. We believe they solve important problems and we make use of them in our product, but they do introduce a new set of variables that need to be considered. We discussed this in more detail in a previous post of ours: “If i run your software, can you hack me?”)
    The current focus on “supply chain” security will no doubt birth a bunch of companies claiming to solve the problem, but this part of the problem seems intractable. There’s the “easy” suite of software you know about: applications installed on your infrastructure, their dependencies, and so on. But for one, this ignores your vendors’ own vendors. And what product is going to provide guidance on the provenance of the code running in your monitors (on processors we didn’t even know were there)? Will we examine the firmware on the microphone that people are now using for their Zoom calls? Will we re-examine it after its next update? There are just too many connected pieces of code to tackle the problem from this angle.
    Enterprise Security Software: Amazingly, if enterprise products as a whole can be classified as insecure, enterprise security products in general are super duper insecure. Dr Mudge warned us in the early 2000s that security products were not necessarily secure products, but not enough people took notice. Many a Veracode report has placed enterprise security products near the bottom of the product pile when tested for security defects.
    FX famously quipped that “basically by quality level you would be better off defending your network with microsoft word than a checkpoint firewall”. (It’s funny because it’s sad).
    If it takes just hours or days to successfully compromise an internal network, and if the average network has enough hiding places for skilled attackers to burrow deep, what do you think happens when attackers are allowed to move around undetected for months?
    All of these factors have been true for decades without visibly leading to too many melt-downs. That is changing, though, because of a kind of “Roger Bannister” effect. Breaking the 4-minute mile seemed impossible until Roger Bannister did it in 1954. Then it was matched repeatedly in fairly quick succession. (Today, several high school runners have matched the feat.) Often people just need to see someone else cross that line. It’s not uncommon for certain bugs to be considered unexploitable for a while, only to have the floodgates open once the first working exploit is released.
    When STUXNET made the news in 2010, the result was a global realisation that software exploits could be used to real-world effect, but the attack remained fairly magical and esoteric: it targeted centrifuges, involved multiple 0-days, and infected Step7 compilers to get manually introduced to the PLCs. The Snowden leaks a few years later, however, made it clear that smaller-scale, well-targeted exploits could achieve results too. If any countries were slow to get into offensive cyber pre-Snowden, very few that were paying attention remained so afterwards. Governments started tooling up, and the commercial industry didn’t hesitate to fill in the gaps.
    When the Ukrainian tax accounting package MEDoc had its update mechanism compromised to deploy malware to its clients, the writing was on the wall. Attacking popular vendors as a route into their customers was clearly effective and, to some actors, squarely on the table.
    A bunch of analysts looking at the SolarWinds incident point out (correctly) that compromised SolarWinds servers sat on so many networks that the ripples of this attack could be wildly exponential. What this analysis misses is that the average enterprise runs dozens and dozens of SolarWinds look-alikes too.
    Ransomware didn’t spring up overnight. Networks hit by ransomware were typically vulnerable for years, running along blissfully unaware until attackers evolved a method to take advantage of them. Most enterprises have been completely vulnerable to their vendors’ horrible insecurity too; the SolarWinds incident just published a blueprint for how to abuse it.
    The situation is dire not because we are fighting some fundamental law of physics, but because we’ve deluded ourselves for a long time. If there’s a silver lining, it’s that customers will hopefully start demanding more from their vendors: proof that they’ve gone through more than compliance checklists, and proof that they’d have a shot at knowing when they were compromised. Hopefully more enterprises will ask: “How would we fare if those boxes in the corner turned evil? Would we even know?”
    PS: We’ve written previously about how we think about security as a vendor [here].
    PPS: We build Thinkst Canary, a quick, low-effort, high-fidelity way to detect badness on your network. We didn’t write this article because we built Canary. We build Canary because we believe what we’ve written in this article…
  • Hackweek 2020

    Because we can

    One of our great pleasures and privileges at Thinkst is that every year we set aside a full week for pure hacking/building. The goals for our “Hackweek” are straightforward: build stuff while learning new things. Last week was the 2020 Hackweek work-from-home edition, and this post is a report on how it went.

    Now in its fourth year, our Hackweek has come to serve as a kind of capstone to our year, and folks start thinking about their projects months in advance. The previous editions produced some truly awesome projects, and topping them was a serious challenge. Without question, this has been our finest so far.

    We run Hackweek for multiple reasons. We’re a company of tinkerers and builders, and dedicating time towards scratching that itch just feels right to us. Of course, there are sometimes downstream benefits to Thinkst, either in terms of the projects folks worked on or the skills they’ve picked up. (Replacing our Redmine with Phabricator was a Hackweek ’17 project that brought us much value and is still in use.) But that’s a pleasant side-effect, not the objective. A key underpinning of Hackweek is that the projects don’t need to be related to Canary or other work projects. When we say “build something”, it can literally be anything, and some folks steered far from tech (as we’ll see shortly). We want folks to continually learn, and this sets the tone. While we provide training through the year for topics in our day-to-day work, Hackweek gives the team a chance to stretch themselves in directions they hadn’t previously considered.

    Hackweek format

    The structure of Hackweek is that we kick off on Monday, and on Friday afternoon everyone demos their project. Following that, we vote on projects in three separate categories:

    • Most Joyful
    • Most Useful
    • Most Hacky

    The progression of Hackweek over the years tracks well with the team growth we’ve seen at Thinkst. In the first few editions, an afternoon was more than sufficient for all the demos, but we had 20 projects this year and that’s tricky to squeeze in. It’s apparent that a rethink is needed for the next edition. Nice problems to have!

    The three winning projects

    The prizes are secondary to the aim of the week, and mostly provide a fun incentive for folks to aim in different directions. Here’s a run-through of the winning projects, with a report on the others below.

    Most Hacky

    Jay and Max decided that their years of gaming experience weren’t enough of an edge when playing Counter-Strike: Global Offensive. To make up the gap, they created a series of game hacks for CS:GO. Their hacks run as a separate program which accesses the CS:GO game’s memory and changes values on the fly. Hit a key shortcut, and other players become visible through walls. Hit another shortcut, and the crosshairs snap onto the nearest enemy’s head for a guaranteed headshot every time, even taking recoil patterns into account. Yet another shortcut, and enemies show up on your game radar, so you always know where they are. No fair!

    See-through walls? Sure, why not.

    Most Joyful

    Louise taught herself how to crochet this week, starting from scratch. Crocheting has a bunch of technical details in how the knots are tied, the different patterns, and how they’re put together to produce articles. But she didn’t just limit herself to 2D articles; she went all out and produced three separate 3D birds, plus a crocheted Canary device. To top it off, she took them on a hike near Stonehenge for this final shot:

    Early morning birds

    Expect to see more from the “Inyarnis” in our weekly mails.

    Most Useful

    Sherif decided to hit a problem near and dear to his heart. We use Salesforce as a CRM, and for the Customer Support and Success teams, switching to Salesforce to look up details is a common daily task. But there’s friction in doing so, and he wanted to file down that edge. Slack is our internal comms tool of choice, so Sherif built a Slackbot which interfaces with Salesforce to allow customer details to be queried directly from within Slack. The Support and Success teams are thrilled!

    Slack command to quickly get an overview of a customer
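A Slack-to-Salesforce bridge like this boils down to two steps: fetch the account record via the Salesforce API, then render it as a Slack message. The rendering half could be sketched roughly as below; the field names and labels here are illustrative placeholders, not the actual fields Sherif's bot queries.

```python
def format_customer_summary(record: dict) -> str:
    """Render a CRM account record (hypothetical field names) as a
    Slack-formatted overview message, skipping any missing fields."""
    lines = [f"*{record.get('Name', 'Unknown account')}*"]
    for field, label in [
        ("Type", "Account type"),
        ("Owner", "Account owner"),
        ("LastActivityDate", "Last activity"),
    ]:
        value = record.get(field)
        if value:
            lines.append(f"{label}: {value}")
    return "\n".join(lines)
```

The real bot would hand a string like this back to Slack as the slash command's response payload.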

    Great projects

    Here’s a rundown of the other projects.

    Anna created CN-D, a machine to forge signatures (or draw anything in pen). She built a CNC machine, replaced the drill with a pen holder, and figured out a workflow to take SVGs to CNC files. With an SVG of someone’s signature or a scan of a written page, she could sign documents as them 🤦‍♀️. She also had it draw our logo in pen. It’s an amazing project to pull off in one week.

    Haroon writes “remotely”

    Nick mostly stepped away from tech and built a wooden arcade cabinet called Birdbox to house a monitor, joysticks, and a RetroPie. However, he had one tech addition: a Flappy Bird clone with a Canary theme and Haroon’s voice!

    The logo rounds it off beautifully

    Bradley revisited a topic we’ve looked at previously: how to automatically grab a fingerprint of a production server and produce a Canary configuration which mimics that server. Mimic Rebooted sets up a Canary to imitate a server already live in your environment, to save having to manually configure each detail.

    Generating a Canary configuration by scanning a production machine
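The heart of a mimicking tool like this is mapping what a scan finds on the production host to the matching Canary services. A minimal sketch of that mapping step, assuming a simple port-to-service table and config shape (both illustrative — the real Mimic fingerprinting is far richer than open ports):

```python
# Illustrative mapping from discovered open ports to Canary-style
# service names; the actual supported services and config schema differ.
PORT_TO_SERVICE = {
    22: "ssh",
    80: "http",
    443: "https",
    445: "smb",
    3389: "rdp",
}

def mimic_config(open_ports: set) -> dict:
    """Enable a matching service entry for each recognised open port,
    leaving unrecognised services disabled."""
    return {
        service: {"enabled": port in open_ports}
        for port, service in PORT_TO_SERVICE.items()
    }
```

The generated dict would then be pushed to the bird as its personality, so it answers on the same ports as the server it imitates.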

    Shereen repaired and repurposed a toy crane to add a remote control function to the previous wired design. Using MicroPython running on two Microbits, she had one drive the crane motors, and the other serve as the remote control, with wireless comms between the two.

    Parts and the finished crane

    Lissa put together a Raspberry-Pi gaming console, her first foray into a Hackweek project and one guaranteed to bring hours of fun.

    Retro-gaming is best gaming

    Matt also took a crack at a carpentry project by building an infinity table. He added a distinctly tech twist by using a bunch of individually addressable LEDs (as opposed to a single LED strip), then wrote a Python-based webserver to set the LED colours!

    This demo had the viewers clamouring for Shopify links

    Todor re-implemented the fundamental Canary functionality by imagining what a “home-use” Canary might look like, where the hardware platform is super lightweight (ESP32) and the bird talks directly to Firebase. He then wrote a mobile app for receiving the Canary alerts, to build a PoC for a new kind of Canary.

    Alerts direct from bird to phone

    Mike designed a 3-in-1 projector from phones and tablets. It literally had three separate projection lenses, which is some kind of record for projectors, and he could wirelessly stream content to each of the three lenses.

    I see your one projector lens, and raise you three! In different directions!

    Benjamin solved a problem which had previously vexed him (and me): some models of Subaru don’t have a temperature gauge, and only display a warning light when the oil temperature is too high. So he built a device that plugs into his car’s OBD-II port, grabs the temperature measurement, and streams the data via Bluetooth to an app on his phone.

    Homemade temperature monitor
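The decoding side of a gadget like this is pleasantly simple: the standard OBD-II PID for engine oil temperature (mode 01, PID 0x5C) returns a single data byte A, and the temperature is A − 40 °C. A sketch of just that decoding step (the Bluetooth transport and the phone app are out of scope here):

```python
def oil_temp_celsius(response: bytes) -> int:
    """Decode an OBD-II mode 01, PID 0x5C (engine oil temperature)
    response. A typical reply frame is 41 5C A, where 0x41 acknowledges
    mode 01 and the data byte A encodes (A - 40) degrees Celsius."""
    if len(response) < 3 or response[0] != 0x41 or response[1] != 0x5C:
        raise ValueError("not a PID 0x5C response")
    return response[2] - 40
```

So a data byte of 0x8C (140) decodes to 100 °C, which is roughly where you'd want that warning light to start worrying you.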

    Az leveraged the T2 chip on his Mac to develop a custom tool for cryptographically signing things with a single tap on the TouchID pad. He targeted two separate actions: the firmware images we produce, and the code we commit.

    Code signing and verification in commit logs

    Deena also took on Salesforce, setting up a flow so that when new Customers are created in Salesforce, we’re alerted in Slack. This solves a particular problem: as new customers sign up, parts of our org are simply unaware of them. Now everyone gets a chance to see the new logos in our customer stable.

    New customers in Salesforce show up in Slack

    Yusuf learned his lesson from last year, and set his sights on a manageable problem this time around. He finished sooner than expected, however, so kept going on other projects 🙂 He built a custom Canary link shortener usable from Slack (expect to see this in customer mails soon), a voice note app for Slack, and an in-browser video-to-GIF conversion tool leveraging ffmpeg and Wasm.

    CanaryLinker: create short URLs in Slack
    CanaryCaster: send voice notes in Slack
    CanaryGifyfier: Convert videos to GIFs directly in-browser

    Caleb added a new platform to the six we already support for Canary, by producing a Canary that runs on OpenStack. It’s still early days, but if the interest is there we can consider adding OpenStack as a supported platform.

    Virtual Canary running on OpenStack

    Keagan built and published a Chrome extension called Re-chord to assist folks in their music practice. It tracks links for music pieces, and will recall them when you want to practise at a later date.

    Tracking your music practice links with Re-chord

    Riaan made a device for discreetly defacing public displays. A Pi-Zero is plugged into the HDMI port of any compatible display. It then polls a public DNS record, and when the trigger value is returned in the DNS response, the Pi-Zero switches the display’s HDMI input to the Pi and plays a video, before switching back to the original input. He tested it on his family, and suitably freaked everyone out!

    Surprise Rick!
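The control channel here is just a DNS poll loop: resolve a record you control, and fire when it carries the trigger value. A rough sketch of that loop, with the actual DNS lookup and HDMI switch-and-play routine passed in as stand-in callables (the record value, function names, and interval are all illustrative):

```python
import time

TRIGGER = "play"  # hypothetical payload published in the TXT record


def should_switch(txt_value: str, trigger: str = TRIGGER) -> bool:
    """True when the polled DNS TXT record carries the trigger value."""
    return txt_value.strip().strip('"') == trigger


def poll_and_play(resolve_txt, play_video, interval=60, max_polls=None):
    """Poll the record via resolve_txt(); on trigger, call play_video().

    resolve_txt and play_video are stand-ins for the real DNS lookup
    and the Pi-Zero's HDMI switch-and-play routine.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        if should_switch(resolve_txt()):
            play_video()
            return True
        polls += 1
        time.sleep(interval)
    return False
```

Driving playback from public DNS is a neat trick: the Pi needs no inbound connectivity at all, and flipping the record value from anywhere on the internet sets off the prank.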

    Haroon built love.pl (a golang tool without go in its name), then turned his remaining attention to needlework, producing pillows with the Canary logo on them as a pleasing backdrop for his Zoom calls. Keep an eye out for them next time you vidchat with him!

    So tasteful

    Lastly, I delved into Terraform, Packer, and Saltstack to automate a particular environment we’ve pondered for a little while.

    Wrapping up

    Hackweek was a great success, and as a yardstick for our growth it highlighted some of the logistics we need to improve on. But that’s a key part of why we do it: growth in Thinkst is dependent on growth in Thinksters, and a learning org is what we are. Onwards to next year!

Authored with 💚 by Thinkst