(Better) Canary Alerts in Slack

One of the things that surprises new Canary customers is that we don't try particularly hard to keep them looking at their consoles. (In fact, an early design goal for Canary was to make sure that our users didn't spend much time in our console at all.)

We make sure that the console is pretty and functional, but we aren't trying to become a customer's "one pane of glass". We want the Canaries deployed, and then we strive to get out of your way. You decide where your alerts should go (email, SMS, API, webhooks, Syslog, SIEM app), set up your birds, and then don't visit your console again until a Canary chirps.


We have hundreds of customers who never log in to their consoles after the initial setup, and we're perfectly happy with this. Their alerts go to their destination of choice and that's what matters. Of these, dozens and dozens of customers rely heavily on getting their alerts piped into a Slack channel of their choice.

Getting your alerts into Slack is trivial:

  1. Create a channel in Slack;
  2. Go to Setup, Webhooks, and select "Add Slack XXX";
  3. Select the channel you want your alerts to go to.
  4. (That's it! Your Slack integration is done!)


Until recently, alerts that went into Slack were simple one-way traffic, containing incident details.


While this suffices for most users, Max and Jay recently sat down to make it even better. Alerts into Slack now look like this:


You'll notice that, by default, potentially sensitive fields like passwords are now masked in Slack. This can be toggled on your Settings page. We're also including additional historical context to assist your responders.

Best of all though, you can now manage these alerts (Mark as seen and Delete) right from inside Slack, so you never have to log in to your Console.


Once an event has been acknowledged, the incident details will be visually "struck", and a new field will indicate the name of the person who ack'd it.


Clicking "Delete" will then collapse the now superfluous details, and will track the name of the deleting user.


So, if your security team is using Slack, consider using the integration. It takes just seconds to set up, and should make your life a little easier.



A Week with Saumil (aka "The ARM Exploit Laboratory")

Last month we downed tools for a week as we hosted a private, on-site version of the well-regarded “ARM Exploit Laboratory” (by Saumil Shah). The class is billed as “a practical hands-on approach to exploit development on ARM based systems”, and Saumil is respected worldwide, having delivered versions of the class at conferences like 44CON, REcon and Black Hat for years.

It.absolutely.delivered!

With a quick refresher on ARM assembly and system programming on day-1, by day-2 everyone in the class was fairly comfortable writing their own shellcode on ARM. By the end of day-3 everyone was comfortable converting their payloads to ROP gadgets and by day-4 everybody had obtained reverse shells on emulated systems and actual vulnerable routers and IP-Cameras. Without any false modesty, this is due to Saumil's skill as an educator much more than anything else.

Pre-Class Preparation


While our Canary is used by security teams the world over, many people on the team have backgrounds in development (not security), so we felt we had some catching up to do. A few months before the class, we formed an #arm-pit Slack channel and started working through the excellent Azeria Labs chapters and challenges. (It’s worth noting that Saumil's class worked for people on the team who were not taking part in our weekly #arm-pit sessions, but those of us who did the sessions were glad we did them anyway.)

A special shout-out to @anna, who didn’t attend the ARM exploitation sessions but made sure that everything from food and drinks to the conference room and accommodation was sorted. It echoed the same lesson: great preparation made for a great experience. Thank you @anna.

The Class


We’ve all sat in classes where the instructor raced ahead, and knowledge that we thought we had proved to be poorly understood when we needed to apply it. As this course progressed, each new concept was challenged with practical exercises. Each concept needed to be understood, as the following concepts (and exercises) largely built on prior knowledge. In this fashion, we quickly weeded out gaps in our knowledge, because we simply could not apply something we did not understand.

The addition of shellcode-restrictions (and processor mitigations) tested a particular way of thinking which seemed to come more naturally to those of us with a history of “breaking” versus “building”. The breakers learned some new tricks, but the builders learned some completely new ways of thinking. It was illuminating.

The class was chock-full of other little gems, from methodologies for debugging under uncertainty, to just how slickly Saumil shares his live thoughts with his students via a class webserver (which converts his ASCII-art memory layout diagrams to prettier SVG versions in real time).





It’s the mark of an experienced educator to have spotted the areas that students struggle with, and to have built examples and tooling to help overcome them. We didn’t just learn ARM exploitation from the class; it was also a master class in professionalism and how to educate others.

Where to now?


A bunch of people now have a gleam in their eyes and have started looking at their routers and IoT devices with relish. Everyone has a much deeper understanding of memory corruption attacks and the current state of mitigation techniques.

Why we took the course


Team Thinkst is quite a diverse bunch, and exploitation isn’t part of anyone’s day job. We do, however, place a huge emphasis on learning, and the opportunity to dedicate some time to bare metal, syscalls and shellcode was too good to pass up. We’ve taken group courses before, but this is the first time we’ve felt compelled to write one up. Two thumbs up! Will strongly recommend.

Using the Linux Audit System to detect badness

Security vendors have a mediocre track record in keeping their own applications and infrastructure safe. As a security product company, we need to make sure that we don’t get compromised. But we also need to plan for the horrible event that a customer console is compromised, at which point the goal is to quickly detect the breach. This post talks about how we use Linux's Audit System (LAS) along with ELK (Elasticsearch, Logstash, and Kibana) to help us achieve this goal.

Background

Every Canary customer has multiple Canaries on their network (physical, virtual, cloud) that report in to their console, which is hosted in AWS.


Consoles are single-tenant, hardened instances that live in an AWS region. This architecture choice means that a single customer console being compromised won’t translate to a compromise of other customer consoles. (In fact, customers would not trivially even discover other customers' consoles, but that's irrelevant for this post.)

Hundreds of consoles running the same stack afford us an ideal opportunity to perform fine-grained compromise detection across our fleet. Going into the project, we surmised that a bunch of servers doing the same thing with similar configs should mean we can detect and alert on deviations with low noise.

A blog post and tool by Slack's Ryan Huber pointed us in the direction of the Linux Audit System. (If you haven’t yet read Ryan's post, you should.)

LAS has been a part of the Linux kernel since at least 2.6.12. The easiest way to describe it is as an interface through which all syscalls can be monitored. You provide the kernel with rules for the things you’re interested in, and it pushes back events every time something happens which matches your rules. The audit subsystem itself is baked into the kernel, but the userspace tools for working with it come in various flavours, most notably the official “auditd” tools, “go-audit” (from Slack) and Auditbeat (from Elastic).
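To make that concrete, kernel audit rules (in classic auditctl syntax) look something like the two lines below. These are illustrative examples only, not the rules we actually deploy:

# Watch /etc/passwd for writes and attribute changes; tag matching events "identity"
-w /etc/passwd -p wa -k identity
# Record every execve() on the 64-bit syscall ABI; tag matching events "exec"
-a always,exit -F arch=b64 -S execve -k exec

Anything matching a rule is emitted as an audit event, tagged with the key you chose, which makes searching and alerting on it later much easier.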

Despite our love for Ryan/Slack, we went with Auditbeat, mainly because it played so nicely with our existing Elasticsearch deployment. It meant we didn't need to bridge syslog or log files into Elasticsearch; Auditbeat reads from the audit Netlink socket and ships events directly.
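As a rough sketch (the rules and hostname here are illustrative, not our production config), the Auditbeat side boils down to a config along these lines, shipping to the region-local Logstash aggregator described in the next section:

auditbeat.modules:
- module: auditd
  audit_rules: |
    # the same style of rules shown earlier
    -w /etc/passwd -p wa -k identity
    -a always,exit -F arch=b64 -S execve -k exec
output.logstash:
  # hypothetical hostname for the region-local aggregator
  hosts: ["logstash.internal:5044"]

Auditbeat loads the rules into the kernel, consumes the resulting events off the Netlink socket, and ships them to whatever output you configure.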

From Audit to ELK

Our whole set-up is quite straightforward. In the diagram below, let's assume we run consoles in two AWS regions, US-East-1 and EU-West-2.




We run:
  • Auditbeat on every console to collect audit data and ship it off to Logstash;
  • A Logstash instance in each AWS region to consolidate events from all consoles and ship them off to Elasticsearch;
  • Elasticsearch for storage and querying;
  • Kibana for viewing the data;
  • ElastAlert (Yelp) to periodically run queries against our data and generate alerts;
  • Custom Python scriptlets to produce results that can't be expressed in search queries alone.

So, what does this give us?

A really simple one is to know whenever an authentication failure occurs on any of these servers. We know that the event will be linked to PAM (the subsystem Linux uses for most user authentication operations) and we know that the result will be a failure. So, we can create a rule which looks something like this:

auditd.result:fail AND auditd.data.op:PAM*


What happens here then, is:
  1. Attacker attempts to authenticate to an instance;
  2. This failure matches an audit rule, is caught by the kernel's audit subsystem and is pushed via Netlink socket to Auditbeat;
  3. Auditbeat immediately pushes the event to our logstash aggregator;
  4. Logstash performs basic filtering and pushes this into Elasticsearch (where we can view it via Kibana);
  5. ElastAlert runs every 10 seconds and generates our alerts (Slack/Email/SMS) to let us know something bad(™) happened.
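As a sketch, the PAM query above drops into an ElastAlert rule file that looks something like this (rule name, index pattern and webhook are illustrative):

name: pam-authentication-failures
type: any
index: auditbeat-*
filter:
- query:
    query_string:
      query: "auditd.result:fail AND auditd.data.op:PAM*"
alert:
- slack
slack_webhook_url: "https://hooks.slack.com/services/..."

The "any" rule type simply fires on every matching document, which is what we want for something as rare (and as interesting) as an authentication failure.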






Let's see what happens when an attacker lands on one of the servers, and attempts to create a listener (because it’s 1999 and she is trying a bindshell).
In 10 seconds or less we get this:


which expands to this:
From here, either we expect the activity and dismiss it, or we can go to Kibana and check what activity took place.

Filtering at the Elasticsearch/ElastAlert level gives us several advantages. As Ryan pointed out, keeping as few rules/filters as possible on the actual hosts leaves a successful attacker in the dark about what we are looking for.

Unknown unknowns

ElastAlert also gives us the possibility of using more complex rules, like “new term”.

This allows us to trivially alert when a console makes a connection to a server we’ve never contacted before, or if a console executes a process which it normally wouldn’t.
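A minimal sketch of such a rule (the field name is an assumption and depends on how your events are mapped):

name: console-new-outbound-destination
type: new_term
index: auditbeat-*
fields:
- destination.ip
terms_window_size:
  days: 30
alert:
- slack

ElastAlert keeps track of every value it has seen for the listed fields over the window, and alerts the first time a new one appears.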

Running Auditbeat on these consoles also gives us the opportunity to monitor file integrity. While standard audit rules allow you to watch reads, writes and attribute changes on specific files, Auditbeat also provides a file integrity module which makes this a little easier by allowing you to specify entire directories (recursively if you wish).
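In Auditbeat this is just another module entry; a minimal sketch (the directories are examples, not our real watch list):

- module: file_integrity
  paths:
  - /etc
  - /usr/local/bin
  recursive: true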

This gives us timeous alerts the moment any sensitive files or directories are modified.



Going past ordinary alerts

Finally, for alerts which require computation that can't be expressed in search queries alone, we use Python scripts. For example, we implemented a script which queries the Elasticsearch API to obtain a list of hosts which have sent data in the last n minutes. By maintaining state between runs, we can tell which consoles have stopped sending audit data (either because the console experienced an interruption or because Auditbeat was stopped by an attacker). Elasticsearch provides a really simple REST API as well as some powerful aggregation features which make working with the data super simple.
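A stripped-down sketch of that scriptlet idea is below; the endpoint, index pattern, field name and time window are illustrative, and real error handling and alerting are omitted:

import json
import requests

ES = "http://elasticsearch.internal:9200"      # hypothetical endpoint
STATE_FILE = "/var/lib/audit-watch/hosts.json"  # hypothetical state file

# Ask Elasticsearch which hosts have sent audit events in the last 15 minutes.
query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-15m"}}},
    "aggs": {"hosts": {"terms": {"field": "host.name", "size": 1000}}},
}
resp = requests.post(ES + "/auditbeat-*/_search", json=query).json()
seen_now = {bucket["key"] for bucket in resp["aggregations"]["hosts"]["buckets"]}

# Compare against the hosts we saw on the previous run.
try:
    with open(STATE_FILE) as f:
        seen_before = set(json.load(f))
except FileNotFoundError:
    seen_before = set()

# Any host that was reporting before but isn't now has gone quiet.
for host in sorted(seen_before - seen_now):
    print("ALERT: %s has stopped sending audit data" % host)

with open(STATE_FILE, "w") as f:
    json.dump(sorted(seen_now), f)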

Operations

Our setup was fairly painless to get up and running, and we centrally manage and configure all the components via SaltStack. This also means that rules and configuration live in our regular configuration repo and that administration overhead is low.

ELK is a bit of a beast, and the flow from hundreds of Auditbeat instances means that one can easily get lost in endless months of tweaking and optimizing. Indeed, if disk space is a problem, you might have to start this tweaking sooner rather than later, but we optimized instead for “shipping”. After a brief period of tweaking the filters for obvious false positives, we pushed into production, and our technical team now picks up the audit/Slack alerts as part of our regular monitoring.

Wrapping up

It’s a straightforward setup, and it does what it says on the tin (just like Canary!). Combined with our other defenses, the Linux Audit System helps us sleep a little more soundly at night. I'm happy to say that so far we've never had an interrupted night's sleep!

RSAC 2018 - A Recap...

This year we attended the RSAC expo in San Francisco as a vendor (with booth, swag & badge scanners!).

We documented the trip, its quirks, costs and benefits, along with some thoughts on the event.

Check it out, and feel free to drop us a note on the post or by tweeting at @ThinkstCanary.

Considering an RSAC Expo booth? Our Experience, in 5,000 words or less



A third party view on the security of the Canaries

(Guest post by Ollie Whitehouse)

tl;dr

Thinkst engaged NCC Group to perform a third party assessment of the security of their Canary appliance. The Canaries came out of the assessment well. When compared in a subjective manner to the vast majority of embedded devices and/or security products we have assessed and researched over the last 18 years, they were very good.

Who is NCC Group and who am I?

Firstly, it is prudent to introduce myself and the company I represent. My name is Ollie Whitehouse and I am the Global CTO for NCC Group. My career in cyber spans over 20 years in areas such as applied research, internal product security teams at companies like BlackBerry and, of course, consultancy. NCC Group is a global professional and managed security firm with its headquarters in the UK and offices in the USA, Canada, Netherlands, Denmark, Spain, Singapore and Australia to mention but a few.

What were we engaged to do?

Quite simply, we were tasked to see if we could identify any vulnerabilities in the Canary appliance that would have a meaningful impact on real-world deployments in real-world threat scenarios. The assessment was entirely white box (i.e. undertaken with full knowledge, code access, etc.).

Specifically the solution was assessed for:

·       Common software vulnerabilities

·       Configuration issues

·       Logic issues including those involving the enrolment and update processes

·       General privacy and integrity of the solution

The solution was NOT assessed for:

·       The efficacy of Canary in an environment

·       The ability to fingerprint and detect a Canary

·       Operational security of the Thinkst SaaS

What did NCC Group find?

NCC Group staffed a team with a combined experience of over 30 years in software security assessments to undertake this review for what I consider a reasonable amount of time given the code base size and product complexity.

We found a few minor issues, including a few broken vulnerability chains, but overall we did not find anything that would facilitate a remote breach.

While we would never make any warranties, it is clear from the choice of programming languages, design and implementation that there is a defence-in-depth model in place. The primitives around cryptography usage are also robust, avoiding many of the pitfalls seen more widely in the market.

The conclusion of our evaluation is that the Canary platform is well designed and well implemented from a security perspective. Although there were some vulnerabilities, none of these were significant, none would be accessible to an unauthenticated attacker and none affected the administrative console. The Canary device is robust from a product security perspective based on current understanding.

So overall?

The device platform and its software stack (outside of the base OS) has been designed and implemented by a team at Thinkst with a history in code product assessments and penetration testing (a worthy opponent one might argue), and this shows in the positive results from our evaluation.

Overall, Thinkst have done a good job and shown they are invested in producing not only a security product but also a secure product.

_________

<haroon> Are you a customer who wishes to grab a copy of the report? Mail us and we will make it happen.


Sandboxing: a dig into building your security pit

Introduction

Sandboxes are a good idea. Whether it's improving kids’ immune systems or isolating your apps from the rest of the system, sandboxes just make sense. Despite their obvious benefits, they are still relatively uncommon. We think this is because they remain obscure to most developers, and we hope this post will fix that.

Sandboxes? What’s that?

Software sandboxes isolate a process from the rest of the system, constraining the process’ access to the parts of the system that it needs and denying access to everything else. A simple example of this would be opening a PDF in (a modern version of) Adobe Reader. Since Adobe Reader now makes use of a sandbox, the document is opened in a process running in its own constrained world, isolated from the rest of the system. This limits the harm that a malicious document can cause, and is one of the reasons why malicious PDFs have dropped from being the number-one attack vector seen in the wild as more and more users updated to sandbox-enabled versions of Adobe Reader.

It's worth noting that sandboxes aren't magic; they simply limit the tools available to an attacker and constrain an exploit’s immediate blast radius. Bugs in the sandboxing process can still yield full access to key parts of the system, rendering the sandbox almost useless.

Sandboxes in Canary

Long time readers will know that Canary is our well-loved honeypot solution. (If you are interested in breach detection that’s quick to deploy and works, check us out at https://canary.tools/)


A Canary is a high quality, mixed interaction honeypot. It’s a small device that you plug into your network which is then able to imitate a large range of machines (a printer/ your CEO's laptop/ a file server, etc). Once configured it will run zero or more services such as SSH, Telnet, a database or Windows File Sharing. When people interact with these fake hosts and fake services, you get an alert (and a high quality signal that you should cancel your weekend plans).

Almost all of our services are implemented in a memory-safe language, but in the event that customers want a Windows File Share, we rely on the venerable Samba project. (Before settling on Samba, we examined other SMB possibilities, like the excellent impacket library, but Samba won since our Canaries (and their file shares) can also be enrolled into Active Directory.) Since Samba runs as its own service and we don't have complete control over its internal workings, it becomes a prime candidate for sandboxing: we want to be able to restrict its access to the rest of the system in case it is ever compromised.

Sandboxing 101

As a very brief introduction to sandboxing we'll explain some key parts of what Linux has to offer (a quick Google search will yield far more comprehensive articles, but one interesting resource, although not Linux focused, is this video about Microsoft Sandbox Mitigations).

Linux offers several ways to limit processes, which we took into consideration when deciding on a solution that would suit us. When implementing a sandbox you would choose a combination of these, depending on your environment and what makes sense.


Control groups

Control groups (cgroups) limit and control access to and usage of resources such as CPU, memory, disk, network, etc.


Chroot

This involves changing the apparent root directory on a file-system that the process can see. It ensures that the process does not have access to the whole file system, but only parts that it should be able to see. Chroot was one of the first attempts at sandboxes in the Unix world, but it was quickly determined that it wasn’t enough to constrain attackers.


Seccomp

Standing for "secure computing mode", this lets you limit the syscalls that a process can make. Limiting syscalls means that a process will only be able to perform the system operations you expect it to perform, so if an attacker compromises your application, they won't be able to run wild.


Capabilities

These are the set of privileged operations that can be performed on the Linux system. Some capabilities include setuid, chroot and chown. For a full list you can take a look at the source here. However, they’re also not a panacea, and spender has shown (frequently) how individual Capabilities can be leveraged into the full capability set.


Namespaces

Without namespaces, any process would be able to see all processes' system resource information. Namespaces virtualise resources like hostnames, user IDs and network resources so that a process cannot see information belonging to other processes.

Adding sandboxing to your application in the past meant using some of these primitives natively (which probably seemed hairy to most developers). Fortunately, these days there are a number of projects that wrap them up in easy-to-use packages.



Choosing our solution

We needed to find a solution that would work well for us now, but would also allow us to easily expand once the need arises without requiring a rebuild from the ground up.

The solution we wanted would need to at least address Seccomp filtering and a form of chroot/pivot_root. Filtering syscalls is easy to manage if you can get the full profile of a service, and once filtered you can sleep a little safer knowing the service can't perform syscalls that it shouldn't. Limiting the view of the filesystem was another easy choice for us: Samba only needs access to specific directories and files, and many of those files can also be set to read-only.

We evaluated a number of options, and decided that the final solution should:

  • Isolate the process (Samba)
  • Retain the real hostname
  • Still be able to interact with a non-isolated process
Another process had to be able to intercept Samba network traffic which meant we couldn’t put it in a network namespace without bringing that extra process in.

This ruled out something like Docker, as although it provided an out-of-the-box high level of isolation (which is perfect for many situations), we would have had to turn off a lot of the features to get our app to play nicely.

Systemd and nsroot (which looks abandoned) both focused more on specific isolation techniques (seccomp filtering for Systemd and namespace isolation for nsroot) but weren’t sufficient for our use case.

We then looked at NsJail and Firejail (Google vs Mozilla, although that played no part in our decision). Both were fairly similar and provided us with flexibility in terms of what we could limit, putting them a cut above the rest.

In the end we decided on NsJail, but since they were so similar we could easily have gone the other way, i.e. YMMV.


NsJail
NsJail, as simply stated in its overview, "is a process isolation tool for Linux" developed by the team at Google (though it's not officially recognised as a Google product). It provides isolation for namespaces, file-system constraints, resource limits, seccomp filters, cloned/isolated ethernet interfaces and control groups.

Furthermore, it uses kafel (another non-official Google product) which allows you to define syscall filtering policies in a config file, making it easy to manage/maintain/reuse/expand your configuration.
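For example, a kafel policy file uses the same grammar you'll see in the --seccomp_string example further down, just kept in its own file (syscall list trimmed here for brevity); NsJail can then load it via its seccomp policy-file option instead of an inline string:

POLICY smbd_policy {
    ALLOW {
        read, write, open, close, fstat, mmap, mprotect,
        munmap, brk, rt_sigaction, exit_group
    }
}
USE smbd_policy DEFAULT KILL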

A simple example of using NsJail to isolate processes would be:

./nsjail -Mo --chroot /var/safe_directory --user 99999 --group 99999 -- /bin/sh -i
Here we are telling NsJail to:
-Mo:               launch a single process using clone/execve
 
--chroot:          set /var/safe_directory as the new root directory for the process

--user/--group:    set the uid and gid to 99999 inside the jail

-- /bin/sh -i:     our sandboxed process (in this case, launch an interactive shell)
We are setting our chroot to /var/safe_directory, a valid chroot that we created beforehand. You can instead use --chroot / for your testing purposes (in which case you really aren’t using the chroot at all).

If you launch this and run ps aux and id you’ll see something like the below:
$ ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
99999        1  0.0  0.1   1824  1080 ?        SNs  12:26   0:00 /bin/sh -i
99999       11  0.0  0.1   3392  1852 ?        RN   12:32   0:00 ps ux
$ id
uid=99999 gid=99999 groups=99999
What you can see is that you are only able to view processes initiated inside the jail.

Now let's try adding a filter to this:

./nsjail -Mo --chroot /var/safe_directory --user 99999 --group 99999 --seccomp_string 'POLICY a { ALLOW { write, execve, brk, access, mmap, open, newfstat, close, read, mprotect, arch_prctl, munmap, getuid, getgid, getpid, rt_sigaction, geteuid, getppid, getcwd, getegid, ioctl, fcntl, newstat, clone, wait4, rt_sigreturn, exit_group } } USE a DEFAULT KILL' -- /bin/sh -i
Here we are telling NsJail to:
-Mo:               launch a single process using clone/execve
 
--chroot:          set /var/safe_directory as the new root directory for the process

--user/--group:    set the uid and gid to 99999 inside the jail

--seccomp_string:  use the provided seccomp policy

-- /bin/sh -i:     our sandboxed process (in this case, launch an interactive shell)
If you try to run id now, you should see it fail. This is because we have not given it permission to use the required syscalls:
$ id
Bad system call
The idea for us then would be to use NsJail to execute smbd as well as nmbd (both are needed for our Samba setup) and only allow expected syscalls.

Building our solution
Starting with a blank config file, and focusing on smbd, we began adding restrictions to lock down the service.

First we built the seccomp filter list to ensure the process only had access to syscalls that were needed. This was easily obtained using perf:

perf record -e 'raw_syscalls:sys_enter' -- /usr/sbin/smbd -F
This recorded all syscalls used by smbd into perf's format. To output the syscalls in a readable list format we used:
perf script | grep -oP "(?<= NR )[0-9]+" | sort -nu
One thing to mention here is that syscall names can differ depending on where you look. Even just between `strace` and `nsjail`, a few syscall names have slight variations from the names in the Linux source. This means that if you use syscall names you won't be able to reuse the exact same list directly between different tools, but may need to rename a few of them. If you are worried about this, you can opt to use the syscall numbers instead: these are a robust, tool-independent way of identifying syscalls.
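If you do want names rather than numbers, one way (assuming the auditd userspace tools, which include ausyscall, are installed) is to translate the numbers as a final step:

perf script | grep -oP "(?<= NR )[0-9]+" | sort -nu | xargs -n1 ausyscall

Just remember the caveat above: the names ausyscall prints may still differ slightly from what nsjail/kafel expects.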

After we had our list in place, we set about limiting FS access as well as fiddling with some final settings in our policy to ensure it was locked down as tight as possible.

A rather interesting way to test that the config file was working as expected was to launch a shell using the config and test the protections manually:

./nsjail --config smb.cfg -- /bin/sh -i
Once the policy was tested and we were happy that smbd was running as expected, we did the same for nmbd.

With both services sandboxed we performed a couple of long running tests to ensure we hadn't missed anything. This included leaving the services running over the weekend as well as testing them out by connecting to them from different systems. After all the testing and not finding anything broken, we were happy to sign off.

What does this mean for us?

Most canned exploits against Samba expect a stock system with access to system resources. At some point in the future, when the next Samba 0-day surfaces, there’s a good chance that generic exploits against our Samba will fail as it tries to exercise syscalls we haven’t expressly permitted. But even if an attacker were to compromise Samba, and spawn himself a shell, this shell would be of limited utility with a constrained view of the filesystem and the system in general.

What does this mean for you?
We stepped you through our process of implementing a sandbox for our Samba service. The aim was to get you thinking about your own environment and how sandboxing could play a role in securing your applications. We wanted to show you that it isn't an expensive or overly complicated task. You should try it, and if you do, drop us a note to let us know how it went!


On anti-patterns for ICT security and international law

(Guest Post by @marasawr)
Author’s note : international law is hard, and these remarks are extremely simplified.
Thinkst recently published a thought piece on the theme of 'A Geneva Convention, for software.'[1] Haroon correctly anticipated that I'd be a wee bit crunchy about this particular 'X for Y' anti-pattern, but probably did not anticipate a serialised account of diplomatic derpitude around information and communications technologies (ICT) in international law over the past twenty years. Apparently there is a need for this, however, because this anti-pattern is getting out of hand.
Microsoft President and Chief Legal Officer Brad Smith published early in 2017 on 'The need for a digital Geneva Convention,' and again in late October on 'What the founding of the Red Cross can teach us about cyber warfare.'[2] In both cases, equivalences are drawn between perturbations in the integrity or availability of digital services, and the circumstances which prompted ratification of the Fourth Geneva Convention, or the circumstances prompting the establishment of the ICRC. And this is ridiculous.

Nation-state hacking is not a mass casualty event

The Fourth Geneva Convention (GCIV) was drafted in response to the deadliest single conflict in human history. Casualty statistics for the Second World War are difficult, but regardless of where in the range of 60-80 million dead a given method of calculation falls, the fact remains that the vast majority of fatalities occurred among civilians and non-combatants. The Articles of GCIV, adopted in 1949, respond directly to these deaths as well as other atrocities and deprivations endured by persons then unprotected by international law.[3] The founding of the ICRC was similarly prompted by mass casualties among wounded soldiers in European conflicts during the mid-nineteenth century.[4] But WannaCry was not Solferino; Nyetya was not the Rape of Nanjing.
Microsoft's position is, in effect, that nation-state hacking activities constitute an equivalent threat to civilian populations as the mass casualty events of actual armed conflict, and require commensurate regulation under international law. 'Civilian' is taken simply to mean 'non-government.' The point here is that governments doing government things cost private companies money; this is, according to Smith, unacceptable. Smith isn't wrong that this nation-state stuff impacts private companies, but what he asks for is binding protection under international law against injuries to his bottom line. I find this type of magical thinking particularly irksome, because it is underpinned by the belief that a corporate entity can be apatride and sovereign all at once. Inconveniently for Microsoft, there is no consensus in the customary law of states on which to build the international legal regime of their dreams.
The Thinkst argument in favour of a Geneva Convention for software is somewhat less cynical. Without a common, binding standard of conduct, nation-states are theoretically free to coerce, abuse, or otherwise influence local software companies as and when they please. Without a common standard, the thinking goes, (civilian) software companies and their customers remain in a perpetual state of unevenly and inequitably distributed risk from nation-state interference. Without binding protections and a species of collective bargaining power for smaller economies, nation-states likewise remain unacceptably exposed.[5]
From this starting point, a binding resolution of some description for software sounds more reasonable. However, there are two incorrect assumptions here. One is that nothing of the sort has been previously attempted. Two is that nation-states, particularly small ones, have a vested interest in neutrality as a guiding principle of digital governance. Looking back through the history of UN resolutions, reports, and Groups of Governmental Experts (GGEs) on — please bear with me — 'Developments in the field of information and telecommunications in the context of international security,’ it is clear this is not the case.[6] We as a global community actually have been down this road, and have been at it for almost twenty years.

International law, how does it work?

First, what are the Geneva Conventions, and what are they not?[7] The Geneva Conventions are a collection of four treaties and three additional protocols which comprise the body of international humanitarian law governing the treatment of non-combatant (i.e. wounded, sick, or shipwrecked armed forces, prisoners of war, or civilian) persons in wartime. The Geneva Conventions are not applicable in peacetime, with signatory nations agreeing to abide by the Conventions only in times of war or armed conflict. Such conflicts can be international or non-international (these are treated differently), but the point to emphasise is that an armed conflict with the characteristics of war (i.e. one in which human beings seek to deprive one another of the right to life) is a precondition for the applicability of the Conventions.
UN Member States which have chosen to become signatory to any or all of the Conventions which comprise international humanitarian law (IHL) and the Law of Armed Conflict (LOAC) have, in effect, elected to relinquish a measure of sovereignty over their own conduct in wartime. The concept of Westphalian sovereignty is core to international law, and is the reason internal conflicts are not subject to all of the legal restrictions governing international conflicts.[8] Just to make life more confusing, reasonable international law scholars disagree regarding which conventions and protocols are bucketed under IHL, which are LOAC, and which are both.
In any event, IHL and LOAC do not cease to apply in wartime because Internet or computers; asking for a separate Convention applicable to software presumes that the digital domain is currently beyond the scope of IHL and LOAC, which it is not. That said, Tallinn Manuals 1.0 and 2.0 do highlight some problem areas where characteristics of informatic space render transposition of legal principles presuming kinetic space somewhat comical.[9] IHL and LOAC cannot accommodate all eventualities of military operations in the digital domain without severe distortion to their application in kinetic space, but that is a protocol-sized problem, not a convention-sized problem. It is also a very different problem from those articulated by Microsoft.

19 years of ICT and international security at the UN

What Thinkst and Microsoft both point to is a normative behavioural problem, and there is some fascinating (if tragic) history here. Early in 2017 Michele Markoff appeared for the US Department of State on a panel for the Carnegie Endowment for International Peace, and gave a wonderfully concise breakdown of this story down from its beginnings at the UN in 1998.[10] I recommend watching the video, but summarise here as well.
In late September of 1998, the Permanent Representative to the UN for the Russian Federation, Sergei Lavrov, transmitted a letter from his Minister of Foreign Affairs to the Secretary-General.[11] The letter serves as an explanatory memorandum for an attached draft resolution seeking to prohibit the development, production, or use by Member States of ‘particularly dangerous forms of information weapons.’[12] The Russian document voices many anxieties about global governance and security related to ICT which today issue from the US and the EU. Weird, right? At the time, Russian and US understandings of ‘information warfare’ were more-or-less harmonised; the term encompassed traditional electronic warfare (EW) measures and countermeasures, as well as information operations (i.e. propaganda). Whether or not the Russian ask in the autumn of 1998 was sincere is subject to debate, but it was unquestionably ambitious. UN A/C.1/53/3 remains one of my favourite artefacts of Russia's wild ‘90s and really has to be read to be believed.
So what happened? The US did their level best to water down the Russian draft resolution. In the late 1990s the US enjoyed unassailable technological overmatch in the digital domain, and there was no reason to yield any measure of sovereignty over their activities in that space at the request of a junior partner (i.e. Russia). Or so the magical thinking went. The resolution ultimately adopted (unanimously, without a vote) by the UN General Assembly in December 1998 was virtually devoid of substance.[13] And it is that document which has informed the character of UN activities in the area of ‘Developments in the field of information and telecommunications in the context of international security’ ever since.[14] Ironically, the US and like-minded states have now spent about a decade trying to claw their way back to a set of principles not unlike those laid out in the original draft resolution transmitted by Lavrov. Sincere or not, the Russian overture of late 1998 was a bungled opportunity.[15]

State sovereignty vs digital governance

This may seem illogical, but the fault line through the UN GGE on ICT security has never been large vs small states.[16] Instead, it has been those states which privilege the preservation of national sovereignty and freedom from interference in internal affairs vs those states receptive to the idea that their domestic digital governance should reflect existing standards set out in international humanitarian and human rights law. And states have sometimes shifted camps over time. Remember that the Geneva Conventions apply in a more limited fashion to internal conflicts than they do to international conflicts? Whether a state is considering commitment to behave consistently with the spirit of international law in their internal affairs, or commitment to neutrality as a desirable guiding principle of digital governance, both raise the question of state sovereignty.
As it happens, those states which aggressively defend the preservation of state sovereignty in matters of digital governance tend to be those which heavily censor or otherwise leverage their ICT infrastructure for the purposes of state security. In early 2015 Permanent Representatives to the UN from China, Kazakhstan, the Russian Federation, Tajikistan, and Uzbekistan sent a letter to the Secretary-General to the effect of ‘DON’T TREAD ON ME’ in response to proposed ’norms, rules, and principles for the responsible behaviour of States’ by the GGE for ICT security.[17] Armenia, Belarus, Cuba, Ecuador, Turkey, and others have similarly voiced concern in recent years that proposed norms may violate their state sovereignty.[18]
During the summer of 2017, the UN GGE for ICT security imploded.[19] With China and the Russian Federation having effectively walked away 30 months earlier, and with decades of unresolved disagreement regarding the relationship between state sovereignty, information, and related technologies... colour me shocked.

Hard things are hard

So, how do we safeguard against interference with software companies by intelligence services or other government entities in the absence of a binding international standard? The short answer is : rule of law.
Thinkst’s assertion that ‘there is no technical control that’s different’ between the US and Russian hypotheticals is not accurate. Russian law and lawful interception standards impose technical requirements for access and assistance that do not exist in the United States.[20] When we compare the two countries, we are not comparing like to like. Declining to comply with a federal law enforcement request in the US might get you a public showdown and fierce debate by constitutional law scholars, because that can happen under US law. It is nigh unthinkable that a Russian company could rebel in this manner without consequences for their operations, profitability, or, frankly, for their physical safety, because Russian law is equally clear on that point.
Software companies are not sovereign entities; they do not get to opt out of the legal regimes and geopolitical concerns of the countries in which they are domiciled.[21] In Kaspersky’s case, thinking people around DC have never been hung up on the lack of technical controls ensuring good behaviour. What we have worried about for years is the fact that the legal regime Kaspersky is subject to as a Russian company comfortably accommodates compelled access and assistance without due process, or even a warrant.[22] In the US case, the concern is that abuses by intelligence or law enforcement agencies may occur when legal authorisation is exceeded or misinterpreted. In states like Russia, those abuses and the technical means to execute them are legally sanctioned.
It is difficult enough to arrive at consensus in international law when there is such divergence in the law of individual states. But when it comes to military operations (as distinct from espionage or lawful interception) in the digital domain, we don’t even have divergence in the customary law of states as a starting point. Until states begin to acknowledge their activities and articulate their own legal reasoning, their own understandings of proportionate response, necessity, damage, denial, &c. for military electromagnetic and information operations, the odds of achieving binding international consensus in this area are nil. And there is not a lot compelling states to codify that reasoning at present. As an industry, information security tends to care about nation-state operations to the extent that such attribution can help pimp whatever product is linked below the analysis, and no further. With the odd exception, there is little that can be called rigorous, robust, or scientific about the way we do this. So long as that remains true – so long as information security persists in its methodological laziness on the excuse that perfect confidence is out of reach – I see no externalities which might hasten states active in this domain to admit as much, let alone volunteer a legal framework for their operations.
At present, we should be much more concerned with encouraging greater specificity and transparency in the legal logics of individual states than with international norms creation on a foundation of sand. The ‘X for Y’ anti-pattern deserves its eyerolls in the case of a Geneva Convention for software, but for different reasons than advocates of this approach generally appreciate.
-mara 

[1] Thinkst Thoughts, ‘A Geneva Convention, for software,’ 26 October 2017, http://blog.thinkst.com/2017/10/a-geneva-convention-for-software.html.
[2] Brad Smith, Microsoft On the Issues : ‘The need for a digital Geneva Convention,’ 14 February 2017, https://blogs.microsoft.com/on-the-issues/2017/02/14/need-digital-geneva-convention/; Brad Smith and Carol Ann Browne, LinkedIn Pulse : ‘What the founding of the Red Cross can teach us about cyber warfare,’ 29 October 2017, https://www.linkedin.com/pulse/what-founding-red-cross-can-teach-us-cyber-warfare-brad-smith/.
[3] See Jean S Pictet, Commentary : the Geneva Conventions of 12 August 1949, (Geneva : International Committee of the Red Cross, 1958), https://www.loc.gov/rr/frd/Military_Law/pdf/GC_1949-IV.pdf.
[4] See Jean S Pictet, Commentary : the Geneva Conventions of 12 August 1949, (Geneva : International Committee of the Red Cross, 1952), https://www.loc.gov/rr/frd/Military_Law/pdf/GC_1949-I.pdf.
[5] Groups of Governmental Experts (GGEs) are convened by the UN Secretary-General to study and develop consensus around questions raised by resolutions adopted by the General Assembly. When there is need to Do Something, but nobody knows or can agree on what that Something is, a GGE is established. Usually after a number of other, more ad hoc experts' meetings have failed to deliver consensus. For brevity we often refer to this GGE as 'the GGE for ICT security' or 'the GGE for cybersecurity'. https://www.un.org/disarmament/topics/informationsecurity/
[6] Thinkst Thoughts, ‘A Geneva Convention, for software,’ 26 October 2017, http://blog.thinkst.com/2017/10/a-geneva-convention-for-software.html.
[8] Regulating internecine conflict is extra hard, and also not very popular. See Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), 8 June 1977.
[9] Col Gary D Brown has produced consistently excellent work on this subject. See, e.g., Gary D Brown, "The Cyber Longbow & Other Information Strategies: U.S. National Security and Cyberspace” (28 April 2017). 5 PENN. ST. J.L. & INT’L AFF. 1, 2017, https://ssrn.com/abstract=2971667; Gary D Brown “Spying and Fighting in Cyberspace: What is Which?” (1 April 2016). 8 J. NAT’L SECURITY L. & POL’Y, 2016, https://ssrn.com/abstract=2761460; Gary D Brown and Andrew O Metcalf, “Easier Said Than Done : Legal Review of Cyber Weapons” (12 February 2014). 7 J. NAT’L SECURITY L. & POL’Y, 2014, https://ssrn.com/abstract=2400530. See also, Gary D Brown, panel remarks, ’New challenges to the laws of war : a discussion with Ambassador Valentin Zellweger,’ (Washington, DC : CSIS), 30 October 2015, https://www.youtube.com/watch?v=jV-A21jQWnQ&feature=youtu.be&t=27m36s.
[10] Michele Markoff, panel remarks, ‘Cyber norms revisited : international cybersecurity and the way forward’ (Washington, DC : Carnegie Endowment for Int’l Peace) 6 February 2017, https://www.youtube.com/watch?v=nAuehrVCBBU&feature=youtu.be&t=4m10s.
[11] United Nations, General Assembly, Letter dated 23 September 1998 from the Permanent Representative of the Russian Federation to the United Nations addressed to the Secretary-General, UN GAOR 53rd Sess., Agenda Item 63, UN Doc. A/C.1/53/3 (30 September 1998), https://undocs.org/A/C.1/53/3.
[12] ibid., (3)(c).
[13] GA Res. 53/70, 'Developments in telecommunications and information in the context of international security,’ UN GAOR 53rd Sess., Agenda Item 63, UN Doc. A/RES/53/70 (4 December 1998), https://undocs.org/a/res/53/70.
[14] See GA Res. 54/49 of 1 December 1999, 55/28 of 20 November 2000, 56/19 of 29 November 2001, 57/53 of 22 November 2002, 58/32 of 8 December 2003, 59/61 of 3 December 2004, 60/45 of 8 December 2005, 61/54 of 6 December 2006, 62/17 of 5 December 2007, 63/37 of 2 December 2008, 64/25 of 2 December 2009, 65/41 of 8 December 2010, 66/24 of 2 December 2011, 67/27 of 3 December 2012, 68/243 of 27 December 2013, 69/28 of 2 December 2014, 70/237 of 23 December 2015, and 71/28 of 5 December 2016.
[15] This assessment is somewhat complicated. Accepting any or all of the proposed definitions, codes of conduct, &c. proffered by the Russian Federation over the years may have precluded some actions allegedly taken by the United States, but unambiguously would have prohibited the massive-scale disinformation and influence operations that have become a hallmark of Russian power projection abroad. Similarly, Russian innovations in modular malware with the demonstrated purpose and capability to perturb, damage, or destroy physical critical infrastructure systems would have been contraindicated by their own language.
[16] See, e.g., the Russian reply to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 54th Sess., Agenda Item 71, UN Doc. A/54/213 (9 June 1999), pp. 8-10, https://undocs.org/a/54/213; the Russian reply to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 55th Sess., Agenda Item 68, UN Doc. A/55/140 (12 May 2000), pp. 3-7, https://undocs.org/a/55/140; the Swedish reply (on behalf of Member States of the European Union) to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 56th Sess., Agenda Item 69, UN Doc. A/56/164 (26 June 2001), pp. 4-5, https://undocs.org/a/56/164; and the Russian reply to ibid., UN GAOR 56th Sess., Agenda Item 69, UN Doc. A/56/164/Add.1 (21 June 2001), pp. 2-6, https://undocs.org/a/56/164/add.1.
[17] United Nations, General Assembly, Letter dated 9 January 2015 from the Permanent Representatives of China, Kazakhstan, Kyrgyzstan, the Russian Federation, Tajikistan and Uzbekistan to the United Nations addressed to the Secretary-General, UN GAOR 69th Sess., Agenda Item 91, UN Doc. A/69/723 (9 January 2015), https://undocs.org/a/69/723.
[18] States’ replies since the 65th Session (2010) indexed at https://www.un.org/disarmament/topics/informationsecurity/.
[19] See, e.g., Arun Mohan Sukumar, ‘The UN GGE failed. Is international law in cyberspace doomed as well?,’ Lawfare, 4 July 2017, https://lawfareblog.com/un-gge-failed-international-law-cyberspace-doomed-well, and Elaine Korzak, The Debate : ‘UN GGE on cybersecurity : the end of an era?,’ The Diplomat, 31 July 2017, https://thediplomat.com/2017/07/un-gge-on-cybersecurity-have-china-and-russia-just-made-cyberspace-less-safe/.
[20] Prior to the 2014 Olympics in Sochi, US-CERT warned travellers that
Russia has a national system of lawful interception of all electronic communications. The System of Operative-Investigative Measures, or SORM, legally allows the Russian FSB to monitor, intercept, and block any communication sent electronically (i.e. cell phone or landline calls, internet traffic, etc.). SORM-1 captures telephone and mobile phone communications, SORM-2 intercepts internet traffic, and SORM-3 collects information from all forms of communication, providing long-term storage of all information and data on subscribers, including actual recordings and locations. Reports of Rostelecom, Russia’s national telecom operator, installing deep packet inspection (DPI) means authorities can easily use key words to search and filter communications. Therefore, it is important that attendees understand communications while at the Games should not be considered private.’
Department of Homeland Security, US-CERT, Security Tip (ST14-01) ’Sochi 2014 Olympic Games’ (NCCIC Watch & Warning : 04 February 2014). https://www.us-cert.gov/ncas/tips/ST14-001 See, also, Andrei Soldatov and Irina Borogan, The Red Web : the struggle between Russia’s digital dictators and the new online revolutionaries, (New York : Public Affairs, 2017 [2015]).
[21] In the United States, this has become a question of the extraterritorial application of the Stored Communications Act (18 USC § 2703) in the presence of a warrant, probable cause, &c. dressed up as a privacy debate. See Andrew Keane Woods, ‘A primer on Microsoft Ireland, the Supreme Court’s extraterritorial warrant case,’ Lawfare, 16 October 2017, https://lawfareblog.com/primer-microsoft-ireland-supreme-courts-extraterritorial-warrant-case.
[22] At the time of writing, eight Russian law enforcement and security agencies are granted direct access to SORM : the Ministry of Internal Affairs (MVD), Federal Security Service (FSB), Federal Protective Service (FSO), Foreign Intelligence Service (SVR), Federal Customs Service (FTS), Federal Drug Control Service (FSKN), Federal Penitentiary Service (FSIN), and the Main Intelligence Directorate of the General Staff (GRU). Federal Laws 374-FZ and 375-FZ of 6th July 2016 ('On Amendments to the Criminal Code of the Russian Federation and the Code of Criminal Procedure of the Russian Federation with regard to establishing additional measures to counter terrorism and ensure public security’), also known as the ‘Yarovaya laws,’ will enter into force on 1st July 2018; these laws substantially eliminate warrant requirements for communications and metadata requests to Russian telecommunications companies and ISPs, and additionally impose retention and decryption for all voice, text, video, and image communications. See, e.g., DR Analytica, report, ‘Yarovaya law : one year after,’ 24 April 2017, https://analytica.digital.report/en/2017/04/24/yarovaya-law-one-year-after/.