Canarytokens: Token Anything, Anywhere

InfoSec superstar (and long-time Canary fan) theGrugq recently mused on Twitter about generating alerts when certain binaries are run on your hosts.

We definitely think the idea has its uses, and we figured it would be worth discussing a quick way to make this happen (using the existing free Canarytokens infrastructure).

TL;DR: You can pass arbitrary data to a web-token allowing you to use it as a reliable, generic alerter of sorts.

We often refer to our Web and DNS Canarytokens as our token ‘primitives’. With these two tokens, you can create traps for attackers nearly anywhere, on any system for any kind of scenario. In fact, nearly all of our other token types are built on top of the Web and DNS tokens.

A brief overview of how they work:

Web token

  1. Visit and create a web token with the label “Fake email in the finance folder of Adrian’s inbox”.
  2. The server gives me a unique Canarytoken/link. I place it in the finance folder of Adrian’s inbox.
  3. If an attacker clicks/follows the link, I get an alert.
The association between this unique token URL and my label tells me someone MUST be accessing that fake email in the finance folder of my inbox.
DNS token

  1. Visit and create a DNS token with a label, describing where I plan to hide it.
  2. The server gives me a unique DNS name.
  3. If an attacker (or software the attacker is using) causes the DNS name to be looked up, I get an alert.
The mere act of resolving this DNS hostname causes the alert to trigger.
The lovely thing about tokens is that you can now use those base tokens to mint heaps of creative alternatives.

Back to theGrugq's Question

There are many cases where you can get the alert you need simply by dropping the Canarytoken natively: a link in an email folder, a link shared in a private Zoom chat, a link shared in a private Slack channel, etc. In those cases, when an attacker views the link, we get a notification that a link that should never have been seen, has been seen (and the free Canarytoken server will go a little further, geolocating the attacker, telling you about her browser, etc.).

On typical Linux boxes, everything’s a file and the operating system is heavily script-driven. There’s much we can use to our advantage here. It means that in a pinch, we can create and trip our own web token, just to get a reliable, coherent message out.

One advantage of the Web Canarytoken is that web servers expect to receive a User Agent string from web browsers.

We can use this field to smuggle out some data related to the attacker!

Since we can use curl, wget or even just bash to make the request, this technique works in a bunch of places.
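For example, the following sketch shows the same trip performed three ways. (The token URL here is a placeholder, not a real token; substitute the unique URL your own web Canarytoken gives you.)

```shell
# Placeholder token URL -- substitute the unique URL from your own web Canarytoken.
TOKEN_URL="http://canarytokens.example/xyz123"

# Smuggle some attacker-related data out in the User-Agent field.
UA="tripwire host=$(hostname) user=$(whoami)"

# curl and wget variants (failures are ignored so callers see nothing):
curl -s -A "$UA" "$TOKEN_URL" >/dev/null 2>&1 || true
wget -q -U "$UA" -O /dev/null "$TOKEN_URL" >/dev/null 2>&1 || true

# Pure-bash variant using /dev/tcp (no curl or wget needed):
( exec 3<>/dev/tcp/canarytokens.example/80 &&
  printf 'GET /xyz123 HTTP/1.0\r\nUser-Agent: %s\r\n\r\n' "$UA" >&3 ) 2>/dev/null || true
```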

We can create users with login-scripts that trip the token and leave their ssh-keys lying around. We could also wrap binaries that are usually run by attackers with a wrapper that first trips the token.

Let's create a simple wrapper for netcat:
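A minimal sketch, written to /tmp purely for illustration (the token URL is a placeholder, and the path to the real nc may differ on your system):

```shell
# Install a fake nc that phones home before behaving normally.
cat > /tmp/nc-wrapper <<'EOF'
#!/bin/sh
# Placeholder token URL -- use the unique URL from your own web Canarytoken.
TOKEN_URL="http://canarytokens.example/xyz123"

# Trip the token, smuggling the user and host out in the User-Agent field.
curl -s -A "nc-tripwire user=$(whoami) host=$(hostname)" \
    "$TOKEN_URL" >/dev/null 2>&1 || true

# Hand off to the real netcat so nothing looks amiss.
exec /bin/nc "$@"
EOF
chmod +x /tmp/nc-wrapper
```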

Now, an attacker who runs nc actually runs your script. It looks the same to her, but you've received your alert letting you know that bad stuff is afoot:

There are a few ways to use this wrapper. To set the tripwire for a single user, set an alias in .zshrc, .profile, .bashrc or your equivalent.

alias nc='~/.local/bin/'

To set the trap globally on the system, drop a similar alias in /etc/profile, /etc/zshrc or your equivalent. Another alternative would be replacing the in-path binary with a script and/or using symlinks to ensure your wrapper runs before the real nc binary (but beware package updates breaking this approach).
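To sketch the PATH-shadowing idea without touching system binaries, here is a scratch-directory demo. All paths are illustrative, and the stand-in wrapper just echoes; a real one would trip your token and then exec the real binary.

```shell
# Create a scratch bin directory that will shadow the real nc.
mkdir -p /tmp/shadowbin
cat > /tmp/shadowbin/nc <<'EOF'
#!/bin/sh
# In practice: trip your Canarytoken here, then exec the real binary.
exec /bin/echo "wrapped nc called with: $*"
EOF
chmod +x /tmp/shadowbin/nc

# Prepend the scratch dir so lookups find the wrapper first.
PATH="/tmp/shadowbin:$PATH"
command -v nc   # now resolves to /tmp/shadowbin/nc
```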

Maybe this server doesn’t have outbound Internet access? No problem - simply swap the curl/wget approach with a DNS lookup command and a DNS token.
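A sketch of that variant, again with a placeholder hostname (any of host, nslookup or dig will do the job):

```shell
# Placeholder DNS token -- use the unique hostname from your own DNS Canarytoken.
DNS_TOKEN="xyz123.canarytokens.example"

# Prepend a label to smuggle out a little context; labels must be
# hostname-safe and under 63 characters.
LOOKUP="u-$(whoami).$DNS_TOKEN"

# The lookup itself is the alert -- no outbound HTTP required.
host "$LOOKUP" >/dev/null 2>&1 || nslookup "$LOOKUP" >/dev/null 2>&1 || true
```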

Dominic White was kind enough to code up and donate a wrapper as well, which shows there are many ways to implement this idea. He also has some guidance on compiling it.

Why tokens work

Not every dropped token will get tripped, and some tokens may be discovered, but this really isn't a problem. In the real world, an attacker isn’t usually thinking “is this a trap” every step of the way, they’re thinking “this could be the jackpot”. Sooner or later, they tip their hand and announce their presence. (And if they do suspect traps/tripwires, that’s also to your advantage - it’s going to slow them down considerably because now they will second-guess everything).

Even if a dropped Canarytoken isn’t a perfect match for the environment, an attacker's M.O. forces them to follow the lead.

Any red-teamer/attacker will tell you that successful breaches are usually a death by a thousand cuts. You find a file on one machine that points to a new network. You find credentials on that network that give you access to a jump box. If I find an AWS API key on your machine, I will never be able to ignore it, telling myself: "I don't think they run any AWS infrastructure". Instead, it's much more likely I've stumbled on someone's skunkworks, and chances are it's ripe for the taking.

It doesn’t matter if an Excel document named 20200504_daily_settlement.xlsx is discovered on a Linux server instead of a Windows server. A criminal on the hunt for payment data to steal has to open it.

Token anything, anywhere. Embed tokens multiple layers deep. Make attackers question their sanity and their desire to continue the attack.

3D-Printed Emergency Services Face Shields

tl;dr: If you are looking to 3D-print face-shield frames for emergency services, but have a print-bed that's too small, here is an STL that should allow for the same result (with a modular frame).

For convenience, you can 3D-print these clips, which seem to work for it too.


Last week we saw a tweet from Lize Hartley saying that they were printing protective shields and handing them out to emergency services.

This seemed like an easy way for us to get involved, and four of the folks from Thinkst started following links and printing shields.

(Design by russiank)

One quick PR that makes this worth sharing is the modification to the original design to allow the shield to be printed on smaller print-beds.

If I run your software, can you hack me?

In our previous post (Are Canaries Secure?) we showed (some of) the steps we’ve taken to harden Canary and limit the blast radius from a potential Canary compromise. Colloquially, that post aimed to answer the question: “are Canaries Secure?”

This post aims at another question that pops up periodically: “If I run your Canaries on my network, can you use them to hack me?”

This answer is a little more complicated than the first, as there is some nuance. (Because my brutally honest answer is: “yeah… probably”.)

But this isn’t because Canary gives us special access; it’s true because most of your other vendors can too. If you run software with an auto-update facility (and face it, it’s the gold standard for updates these days), then the main thing stopping that vendor from using that software to gain a foothold on your network is a combination of that vendor's imagination, ethics, and discomfort with the size of jail cells. It may not be a comfortable fact, but the fact remains true with no apparent appreciation for our comfort levels.

Over a decade ago we gave two talks on tunneling data in and out of networks through all sorts of weird channels (the pinnacle was a remote timing-based SQL injection to carry TCP packets to internal RDP machines). [“It’s all about the timing”, “Pushing the Camel through the Eye of the Needle”] 

The point is that with a tiny foothold we could expand pretty ridiculously. Sending actual code down to my software that’s already running inside an organization is like shooting fish in a barrel. This doesn’t just affect appliances or devices on your network, it extends to any software.

Consider VLC, the popular video player. Let’s assume it’s installed on your typical corporate desktop. Even if you reversed the hell out of the software to be reasonably sure that the posted binaries aren’t backdoored (which you didn’t), you have no idea what last night’s auto-update brought down with it. 

You don’t allow auto-updates? Congratulations, you now have hundreds of vulnerable video players waiting to be exploited by a random video of cats playing pianos. 

This ignores the fact that even if the video player doesn't download malicious code, it's always possible that it simply downloads vulnerable code, which can then be exploited.

It’s turtles all the way down.

So what does this mean? Fundamentally it means that if you run software from a 3rd party vendor which accepts auto-updates (and you do) you are accepting the fact that this 3rd party vendor (or anyone who compromises them) probably can pivot from the internet to a position on your network.

Chrome has successfully popularised the concept of silent auto-updates and it’s a good thing, but it’s worth keeping in mind what we give up in exchange for the convenience. (NB. We’re not arguing against auto-updates at all; in fact we think you’d be remiss not enabling them). 

You can mitigate this in general by disabling updates, but that opens you up to a new class of problems with only a handful of solutions:
  • A new model of computation – You could mitigate this by moving to Chromebooks or really limited end-user devices. But remember: no third-party Chrome apps or extensions, or you fall into the same trap.
  • You can be more circumspect about whose software you run. Ultimately the threat of legal action is what provides the boundaries for contracts and business relationships, which goes a long way in building trust in third parties. If you have a mechanism to recover damages from or lay charges against a vendor for harmful actions, you’ll be more likely to give their software a try. But this still ignores the risk of a vendor being compromised by an unrelated attacker.
  • You can hope to detect when the software you don’t trust does something you don't expect.
For the second solution, software purchasers can demand explanations from their vendors for how code enters the update pipeline, and how its integrity is maintained. We’ve discussed our approach in a previous post. (It’s also why we believe that customers should be more demanding of their vendors!)

The last solution is interesting. We’re obviously huge fans of detection, and previous posts even mention how we detect if our Consoles start behaving unusually. On corporate networks, where the malicious software could be your office phones or your monitors or the lightbulbs, pretty much your only hope is having some way of telling when your kettle is poking around the network.

Ages ago, Marcus Ranum suggested that a quick diagnostic when inheriting a network would be to implement internal network chokepoints (and to then investigate connections that get choked). We (obviously) think that dropping Canaries is a quick, painless way to achieve the same thing.

It's trite, but still true. Until there are fundamental changes to our methods of computation, our only hope is to “trust, but verify” and on that note, we try hard to be part of the solution instead of the problem.

Are Canaries Secure?

What a question. In an industry frequently criticised for confusing security software with secure software, and where security software is ranked poorly against other software segments, it's no surprise we periodically hear this question when talking to potential customers. We figured we'd write a quick blog post with our thoughts on it.

We absolutely love the thought of this question coming up. Far too many people have been far too trusting of security products, which is how we end up with products so insecure that FX said you'd be "better off defending your networks with Microsoft Word".

In fact, it's one of the things we actively pushed for in our 2019 talk on "the Products we Deserve":

So, how do we think about security when building Canary?

Most of our founding team have a long history in offense and we've worked really hard to avoid building the devices we've taken advantage of for years. From base architectural choices, to individual feature implementations, defensive thinking has been baked into Canary at multiple layers.

We're acutely aware that customers are trusting our code in their networks. We go to great lengths to ensure that a Canary does not introduce additional risk to our customers. The obvious solution here is to make it "more secure" (i.e. that it's a harder target to compromise than other hosts on the network). But that's not sufficient, a harder target is not an impossible target given enough time.

So the second part of "not introducing additional risk" is to ensure that there's nothing of value on the Canaries themselves that attackers might want.

tl;dr: Canaries should be harder to compromise than other targets and should leave an attacker no better off for compromising them.

What follows are some examples of our thinking. We've left out some bits (where prudent), but we (strongly) feel that customers should be asking vendors how they reduce their threat profile, and figure we should demonstrate it ourselves.


Memory-safe languages

All the important services on the Canary are written in memory-safe languages and are then sandboxed. The Canary itself holds no secrets of importance to your network. Choosing memory-safe languages has a performance tradeoff, and one we're happy to make. With that architectural decision, the only potential memory corruption bugs are in the underlying interpreter, which is well-tested (and harder to reach) at this point.

Network spanning

We also don't allow Canaries to be dual-homed or span VLANs. That's because it would violate the principle of not having anything valued by an attacker on the Canaries. Compromising a dual-homed Canary would allow an attacker to jump across networks, and we won't let this happen on our watch.

Cryptographic underpinnings

During their initial setup, Canaries create and exchange crypto keys with your console. From that point on, all communication between the Canary and your console is encrypted using these keys.

The underlying symmetric encryption library used is NaCl, which provides the Salsa20 stream cipher for encryption and Poly1305 MAC for authentication. Again, we could have chosen slightly more space-efficient cryptographic constructs, but we followed the best practice of selecting a cryptographic library which doesn't permit choices and removes all footguns.


Remote updates

Our birds are remotely updated to make sure they stay current, and that's a common subject of questions from potential customers. To maintain the integrity of our updates, your Canary will only accept an update that's been signed by our offline signing infrastructure. Furthermore, each update file is further signed (and encrypted) by your Console so your bird won't accept an update from another Console (even if it's a legitimate one). Lastly, the update is delivered via our custom DNS transport overlay, which is also encrypted. An attacker wishing to push code to your Canary would need to compromise both your cloud Console and the physical offline update-signing infrastructure.

Console monitoring

Your Console is a dedicated instance running on EC2. This simple architectural decision means that even if one customer console were breached, there's no other customer data present. This single-tenant model also removes the risk of web-app bugs yielding data from other customers.

Aside from the usual hardening, we've taken other steps to further minimise "surprises". All syscalls across our fleet are monitored, and any server doing anything "new" quickly raises alarms. (We also make sure that the server only serves content we've expressly permitted).

By default, your Console won't hold any special data from your network. Alerts come through with information related only to a detected attack, and even though we support masking in the alert to make sure that you won't have an attacker-supplied password lying in your inbox, it's probably a good idea to cycle a password that an attacker has made use of. :)

(Password "masked" in email alert)

Customer-Support access

On the back-end, selected Thinkst staff need to jump through several hoops and jump-points before gaining access to your console. At every jump, they are required to MFA, and access is both logged and generates an alert. (Once more this means that such access can't happen under the radar).

(CS access to a Canary Console)
In addition to this, some customers request that no Thinkst staff access their console. These customers have the back-end authentication/MFA link broken. This means that Thinkst staff will not be able to authenticate to the customer console at all.

Third-party assessments

We've also had a crystal-box assessment of both the Canaries and the Console performed by one of the leading app-sec teams in the business. A copy of their report is available on request, but their pertinent, summarising snippet is:

"The device platform and its software stack (outside of the base OS) has been designed and implemented by a team at Thinkst with a history in code product assessments and penetration testing (a worthy opponent one might argue), and this shows in the positive results from our evaluation.
Overall, Thinkst have done a good job and shown they are invested in producing not only a security product but also a secure product."

Wrapping up

So, is Canary an impossible target? Of course not; that's why we wrote "safer designs" above, not "safe designs".

But we have put a lot of thought into making sure we don't introduce vulnerabilities to a customer network. We've put tons of effort into making sure that we limit the blast radius of any problem that does show up. And if a bird can get off just one warning before it's owned, it's totally lived up to its namesake and earned its keep...

HackWeek 2019

Last week team Thinkst downed tools again for our bi-annual HackWeek. The rules of HackWeek are straightforward:
  • Make Stuff;
  • Learn;
  • Have fun.
We discussed HackWeek briefly last year:
Our HackWeek parameters are simple: We down tools on all but the most essential work (primarily anything customer-facing) and instead scope and build something. The project absolutely does not have to be work-related, and people can work individually or in teams. The key deadline is a 10-minute demo on the Friday afternoon. The demos are in front of the rest of the team, and results count more than intentions.
We pride ourselves on being a "learning organization" and HackWeek is one of the things that help make that happen. It's always awesome seeing a software developer solder their first board, or someone non-technical write their first lines of Python.

Project highlights this year: 

Az used the SimH simulator to run an obscure Soviet mainframe (the BESM-6):

Eventually, he had the mainframe pushing the keys on a Pokemon game running in a simulator using Fortran (because, of course!). Along the way he had to deal with Russian manuals and, uh, learning Fortran.

Mike built "Incubator" to manage our stock of Canary raw materials:

Riaan threw in a physical hack to make sure fewer cars were scratched when parking in the basement, and built a physical status monitor for our support queues:

Keagan decided to combine ModSecurity hackery & testing to add in extra protection onto our new flocks consoles:

Haroon took a crack at some d3 fiddling to create art (and inspectable graphs) with our customer logos but sadly this can't be shown :)

Quinton used an Arduino and some jury-rigged hardware to keep better track of scores for the indoor cricket games held in the Jhb office:

Jay used the incredible work by the openDrop people to create a fake AirDrop service on our Canaries.

You simply configure it through the Canary Console:

Once the bird loads, it becomes visible to people in its vicinity using AirDrop on their Macs or iPhones:

After an attacker submits a file, the Canary alerts as usual:

Donovan flirted with Flask and Python to make another interface to download Canarytokens.

Danielle dived into Verilog to get her Quartus II FPGA to voice-print individuals:

Marco extended our Phabricator setup to allow us phriction-phree-phlowcharting:

Max broke out Unity to build a game for the Oculus:

Matt wrote a game for his Nintendo switch:

Bradley attempted to give Apple designers aneurysms by affixing a travel LCD to his laptop for a MacGyver'd screen extender:

Nick and Anna paired up to create a hardware/software combo. They used Raspberry Pis, a pack of blank credit cards, stepper motors and (toothpicks?) to create a 9-digit split-flap display for the Cape Town office.


(I would have totally given it the prize for "most soothing sound made by any HackWeek project, ever".)

Adrian combined the Canary API and his nostalgia for CLI interfaces to make a lo-fi Canary Console:

Yusuf built an app/bot that could be summoned on Twitter to compile tweet-storms to blog posts (and learned the harsh lessons of unforgiving HackWeek deadlines.)

"A fun time was had by all" (tm)

Canary Alerts, Part 2 - Bonus Flavours

Canaries and Canarytokens are tripwires that can alert you to intrusions. When alerts trigger, we want to make sure you get them where you need them. While our Slack integration is cool, you might prefer to send alerts through your SIEM. Or to a security automation tool. Maybe you want to leverage our API to integrate Canary alerts into a custom SOC tool. Want to turn a smart light bulb red and play the Imperial March? You could do that too.
(IFTTT applet that makes a light blink when a Canary alert is received)

Your way or the highway

We often puzzle over products that require customers to totally revamp how they do things. We never presume to be the most important tool in your toolbox, which is why our product is designed to be installed, configured, and (somewhat) forgotten, in minutes. We’d rather disappear into your existing workflow, only becoming visible again when you need us most.

Our customers dictate where and how they see our alerts. To enable this, we provide a wide variety of flexible options for sending and consuming alerts.

By default, you’ll get alerts on your console...

In your email…

...and as a text message.

And that’s not all…

For those of you wondering where the SIEM love is at, don’t worry. We can send syslog where you need it, as secure as you need it. A quick email to support@canary.tools with the details for your syslog endpoint will get the logs flowing in no time.

For Splunk fans, we have a Splunk app that works with both Splunk Enterprise and Splunk Cloud. Details on installing and configuring the Splunk app can be found in our help documentation.

Email can also be an easy way to integrate Canary alerts with other tools. For example, most task and ticket management systems support creating tickets or tasks with an email. ServiceNow and BMC Remedy are common in large enterprises, but what about something simpler, with a free use plan? Something you could set up in minutes, like a Canary?

Build a SOC dashboard in 5 minutes, for free

We’re going to use Trello as an example of how flexible email can be for alert integration.

It turns out, Trello aligns well with the spirit of simple, fast and ‘just works’. Finding the custom email address that allows new card creation takes just a few clicks. Then, paste it in the email notifications list in your console settings and you’re good to go. Canary alerts will start showing up in Trello on the board and list you chose to attach the Trello email to.

A simple three-list configuration should work for basic alert triage: new alerts, acknowledged (being worked) and completed.

Any Canaries or Canarytokens triggered will result in a new card dropping into the New Alerts column immediately. Drag the card over to the Ack column and assign it to someone and Trello can notify them (based on your Trello configuration). Each card contains the full content of the alert and supports comments and attachments.

Once the investigation is complete, the card can be dragged over to the final column.

And, of course, an API

Anything you can do or view in the Canary console can be done via our fully documented API. It’s possible to control Canaries, create Canarytokens, view alerts, manage alerts and much more. Following is a simple bash script demonstrating how to grab a week’s worth of alerts and dump them into a spreadsheet-friendly format (CSV). Also available as a gist.

# Create a CSV with the last week's worth of alerts from your Canary console
# Requires curl and jq to be in the path

# Set this variable to your API token
export token=deadbeef12345678

# Customize this variable to match your console URL (the value below is a placeholder)
export console=myconsole.canary.tools

# Date format (one week ago). Note: -v is BSD/macOS date syntax;
# on GNU/Linux use: date -d "1 week ago" "+%Y-%m-%d-%H:%M:%S"
export dateformat=$(date -v-1w "+%Y-%m-%d-%H:%M:%S")

# Filename date (right now)
export filedate=$(date "+%Y%m%d%H%M%S")

# Complete filename
export filename=$filedate-$console-1week-alert-export.csv

# Base URL
export baseurl="https://$console/api/v1/incidents/all?auth_token=$token&shrink=true&newer_than"

# Run the jewels
echo Datetime,Alert Description,Target,Target Port,Attacker,Attacker RevDNS > $filename
curl "$baseurl=$dateformat" | jq -r '.incidents[] | [.description | .created_std, .description, .dst_host, .dst_port, .src_host, .src_host_reverse | tostring] | @csv' >> $filename

Taking Flight

Like everything else Canary-related, alerts should be dead simple and easy to work with. Though alert volumes from Canaries are incredibly low (customers with dozens of Canaries report just a handful of alerts per year) we include a bunch of options to cover everything from common requests to esoteric requirements.

If you have any clever ideas on integrating alerts or consuming them, we’d love to hear them! Drop us a message on Twitter @ThinkstCanary or via email, support at canary dot tools.

Alerts Come in Many Flavours

‪If you force people to jump through hoops to handle alerts, they’ll soon stop doing it 🤯‬
Canary optimizes for fewer alerts, but we also ensure that you can handle alerts easily without us. So it takes just 4 minutes to set up a Canary, but far less to pull our alerts into Slack.

By default, your console will send you alerts via email or SMS, but there are a few other tricks up its sleeve. It is trivial to also get alerts via webhooks, syslog or our API.

This post will show you how to get alerts into your Slack. The process is similar for Microsoft Teams and other messaging apps that use webhooks for integration. It’s quick, painless and super useful.

(This post is unfortunately now also bound to be anti-climactic - it’s going to take you longer to read this than to do the integration).

Did you know how easy this can be?
The Canary Console can integrate with Microsoft Teams and Slack in seconds, and with a few more steps can integrate with any other webhook-friendly platform. The process is similar for most platforms, but here’s how it looks for Slack.

  1. Enable Webhooks in your Canary Console settings.
  2. Click Add to Slack, choose the channel to drop alerts into and click Allow.
  3. That’s it! You now have Canary alerts showing up in Slack. Elapsed setup time? About 30 seconds.

Now that you’ve got Canary alerts integrated into Slack, you can actually interact with them. When an alert shows up in Slack, you’re given an option to mark it as ‘seen’, which removes it from the queue of unacknowledged alerts.

You can even permanently delete it from inside Slack - no need to even log into the console. Here’s a peek at what the process looks like.

Why we’re so keen to get alerts out of the console

You’ve got enough consoles already. Heck, you may even have multiple "single panes of glass". We’re not interested in adding our console to the already long list of security tools to check on a daily or hourly basis. We realise and deeply understand that it’s not about us, it’s about you. That’s why we make it so easy to pull Canary alerts into your existing workflows.

Live in Slack? We’ll alert you there.
Live on your phone? We’ll text you.
Live in Outlook? We’ll drop you an email.
Want all-of-the-above, just in case? We can do that too.