A third party view on the security of the Canaries

(Guest post by Ollie Whitehouse)

tl;dr

Thinkst engaged NCC Group to perform a third-party assessment of the security of their Canary appliance. The Canaries came out of the assessment well. Compared, subjectively, with the vast majority of embedded devices and/or security products we have assessed and researched over the last 18 years, they were very good.

Who is NCC Group and who am I?

Firstly, it is prudent to introduce myself and the company I represent. My name is Ollie Whitehouse and I am the Global CTO for NCC Group. My career in cyber spans over 20 years in areas such as applied research, internal product security teams at companies like BlackBerry and, of course, consultancy. NCC Group is a global professional and managed security firm with its headquarters in the UK and offices in the USA, Canada, Netherlands, Denmark, Spain, Singapore and Australia to mention but a few.

What were we engaged to do?

Quite simply we were tasked to see if we could identify any vulnerabilities in the Canary appliance that would have a meaningful impact on real-world deployments in real-world threat scenarios. The assessment was entirely white box (i.e. undertaken with full knowledge, code access, etc.).

Specifically the solution was assessed for:

·       Common software vulnerabilities

·       Configuration issues

·       Logic issues including those involving the enrolment and update processes

·       General privacy and integrity of the solution

The solution was NOT assessed for:

·       The efficacy of Canary in an environment

·       The ability to fingerprint and detect a Canary

·       Operational security of the Thinkst SaaS

What did NCC Group find?

NCC Group staffed a team with a combined experience of over 30 years in software security assessments to undertake this review for what I consider a reasonable amount of time given the code base size and product complexity.

We found a few minor issues, including a few broken vulnerability chains, but overall we did not find anything that would facilitate a remote breach.

While we would never make any warranties, it is clear from the choice of programming languages, design and implementation that there is a defence-in-depth model in place. The primitives around cryptography usage are also robust, avoiding many of the pitfalls seen more widely in the market.

The conclusion of our evaluation is that the Canary platform is well designed and well implemented from a security perspective. Although there were some vulnerabilities, none of these were significant, none would be accessible to an unauthenticated attacker and none affected the administrative console. The Canary device is robust from a product security perspective based on current understanding.

So overall?

The device platform and its software stack (outside of the base OS) have been designed and implemented by a team at Thinkst with a history in code and product security assessments and penetration testing (a worthy opponent, one might argue), and this shows in the positive results of our evaluation.

Overall, Thinkst have done a good job and shown they are invested in producing not only a security product but also a secure product.

_________

<haroon> Are you a customer who wishes to grab a copy of the report? Mail us and we will make it happen.


Sandboxing: a dig into building your security pit

Introduction

Sandboxes are a good idea. Whether it's improving kids’ immune systems, or isolating your apps from the rest of the system, sandboxes just make sense. Despite their obvious benefits, they are still relatively uncommon. We think this is because they remain obscure to most developers, and we hope this post will fix that.

Sandboxes? What’s that?

Software sandboxes isolate a process from the rest of the system, constraining the process’ access to the parts of the system that it needs and denying access to everything else. A simple example of this would be opening a PDF in (a modern version of) Adobe Reader. Since Adobe Reader now makes use of a sandbox, the document is opened in a process running in its own constrained world so that it is isolated from the rest of the system. This limits the harm that a malicious document can cause, and is one of the reasons why malicious PDFs have dropped from being the number one attack vector seen in the wild as more and more users updated to sandbox-enabled versions of Adobe Reader.

It's worth noting that sandboxes aren't magic; they simply limit the tools available to an attacker and limit an exploit’s immediate blast radius. Bugs in the sandboxing process can still yield full access to key parts of the system, rendering the sandbox almost useless.

Sandboxes in Canary

Long time readers will know that Canary is our well-loved honeypot solution. (If you are interested in breach detection that’s quick to deploy and works, check us out at https://canary.tools/)


A Canary is a high quality, mixed interaction honeypot. It’s a small device that you plug into your network which is then able to imitate a large range of machines (a printer/ your CEO's laptop/ a file server, etc). Once configured it will run zero or more services such as SSH, Telnet, a database or Windows File Sharing. When people interact with these fake hosts and fake services, you get an alert (and a high quality signal that you should cancel your weekend plans).

Almost all of our services are implemented in a memory-safe language, but in the event that customers want a Windows File Share, we rely on the venerable Samba project (before settling on Samba, we examined other SMB possibilities, like the excellent impacket library, but Samba won since our Canaries (and their file shares) can be enrolled into Active Directory too). Since Samba is running as its own service and we don't have complete control over its internal workings, it becomes a prime candidate for sandboxing: we wanted to be able to restrict its access to the rest of the system in case it is ever compromised.

Sandboxing 101

As a very brief introduction to sandboxing we'll explain some key parts of what Linux has to offer (a quick Google search will yield far more comprehensive articles, but one interesting resource, although not Linux focused, is this video about Microsoft Sandbox Mitigations).

Linux offers several ways to limit processes, which we took into consideration when deciding on a solution that would suit us. When implementing a sandbox you would choose a combination of these, depending on your environment and what makes sense.


Control groups

Control groups (cgroups) limit and account for a process's usage of resources such as CPU, memory, disk I/O and network.
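
If you want to poke at cgroups by hand, one way (a sketch assuming a cgroup-v1 hierarchy mounted under /sys/fs/cgroup; cgroup v2 uses a unified tree and memory.max instead) is to create a group, cap its memory and move your shell into it:

# create a memory cgroup and cap it at 256MB (cgroup v1 layout)
sudo mkdir /sys/fs/cgroup/memory/demo
echo $((256*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# move the current shell (and anything it spawns) into the group
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs

Anything started from that shell now shares the 256MB cap.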


Chroot

This involves changing the apparent root directory on a file-system that the process can see. It ensures that the process does not have access to the whole file system, but only parts that it should be able to see. Chroot was one of the first attempts at sandboxes in the Unix world, but it was quickly determined that it wasn’t enough to constrain attackers.
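
Building a minimal chroot by hand is straightforward. A rough sketch, using the same /var/safe_directory path as the NsJail examples later in this post (you would also need to copy in whichever shared libraries ldd reports for the binaries you want available):

sudo mkdir -p /var/safe_directory/bin
sudo cp /bin/sh /var/safe_directory/bin/
# copy the libraries that `ldd /bin/sh` lists into matching paths under /var/safe_directory
sudo chroot /var/safe_directory /bin/sh

Inside that shell, / is /var/safe_directory and nothing outside it is reachable via the filesystem.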


Seccomp

Standing for "secure computing mode", this lets you limit the syscalls that a process can make. Limiting syscalls means that a process will only be able to perform the system operations you expect it to perform, so if an attacker compromises your application, they won't be able to run wild.
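
You will see seccomp in action in the NsJail examples later in this post, but if you have a reasonably recent systemd around you can get a feel for it without any extra tooling (a sketch; it assumes systemd-run on your distribution accepts the SystemCallFilter property for transient units):

# run a shell as a transient unit restricted to the @system-service syscall group
sudo systemd-run -t -p SystemCallFilter=@system-service /bin/sh -i

Any syscall outside the allowed group gets the process killed, much like the DEFAULT KILL policy used further down.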


Capabilities

These are the set of privileged operations that can be performed on the Linux system. Some capabilities include CAP_SETUID, CAP_SYS_CHROOT and CAP_CHOWN. For a full list you can take a look at the kernel source (or the capabilities(7) man page). However, they’re also not a panacea and spender has shown (frequently) how individual Capabilities can be leveraged into full root.
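
capsh (shipped with libcap) is a quick way to see capabilities in action: it drops entries from the bounding set before exec'ing a command. A small sketch; with CAP_CHOWN dropped, even a root shell can no longer chown:

# as root, chown would normally succeed; with cap_chown dropped it fails with "Operation not permitted"
sudo capsh --drop=cap_chown -- -c 'touch /tmp/captest && chown nobody /tmp/captest'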


Namespaces

Without namespaces, any process would be able to see system resource information for all other processes. Namespaces virtualise resources such as hostnames, user IDs, process IDs and network interfaces, so that a process cannot see information belonging to other processes.
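
The unshare tool from util-linux makes this easy to demonstrate: started in a new PID namespace (with /proc re-mounted so ps reads the right data), a shell only sees itself:

# start a shell in new PID and mount namespaces; ps shows only processes inside the namespace
sudo unshare --pid --fork --mount-proc /bin/sh -c 'ps aux'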

Adding sandboxing to your application in the past meant using some of these primitives natively (which probably seemed hairy for most developers). Fortunately, these days, there are a number of projects that wrap them up in easy-to-use packages.



Choosing our solution

We needed to find a solution that would work well for us now, but would also allow us to easily expand once the need arises without requiring a rebuild from the ground up.

The solution we wanted would need to at least address Seccomp filtering and a form of chroot/pivot_root. Syscall filtering is easy to get right if you can obtain the full syscall profile of a service, and once it's in place you can sleep a little safer knowing the service can't perform syscalls it shouldn't. Limiting the view of the filesystem was another easy choice for us: Samba only needs access to specific directories and files, and many of those files can also be set to read-only.

We evaluated a number of options, and decided that the final solution should:

  • Isolate the process (Samba)
  • Retain the real hostname
  • Still be able to interact with a non-isolated process
The last requirement exists because another process on the Canary needs to intercept Samba's network traffic, which meant we couldn't put Samba in its own network namespace without dragging that process in as well.

This ruled out something like Docker, as although it provided an out-of-the-box high level of isolation (which is perfect for many situations), we would have had to turn off a lot of the features to get our app to play nicely.

Systemd and nsroot (which looks abandoned) both focused more on specific isolation techniques (seccomp filtering for Systemd and namespace isolation for nsroot) but weren’t sufficient for our use case.

We then looked at NsJail and Firejail (Google vs Mozilla, although that played no part in our decision). Both were fairly similar and provided us with flexibility in terms of what we could limit, putting them a cut above the rest.

In the end, we decided on NsJail, but since they were so similar we could easily have gone the other way, i.e. YMMV.


NsJail
NsJail, as simply stated in its overview, "is a process isolation tool for Linux" developed by the team at Google (though it's not officially recognised as a Google product). It provides isolation for namespaces, file-system constraints, resource limits, seccomp filters, cloned/isolated ethernet interfaces and control groups.

Furthermore, it uses kafel (another non-official Google product) which allows you to define syscall filtering policies in a config file, making it easy to manage/maintain/reuse/expand your configuration.
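
As a taste of what that looks like, the policy used inline in the example further down could instead live in its own file and be handed to NsJail as a policy file (a sketch; the ALLOW list is whatever your profiling turns up, and we believe the relevant flag is --seccomp_policy, so check nsjail --help on your build):

POLICY smb {
  ALLOW {
    read, write, open, close, mmap, mprotect, munmap, brk, exit_group
  }
}
USE smb DEFAULT KILL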

A simple example of using NsJail to isolate processes would be:

./nsjail -Mo --chroot /var/safe_directory --user 99999 --group 99999 -- /bin/sh -i
Here we are telling NsJail to:
-Mo:               launch a single process using clone/execve
 
--chroot:          set /var/safe_directory as the new root directory for the process

--user/--group:    set the uid and gid to 99999 inside the jail

-- /bin/sh -i:     our sandboxed process (in this case, launch an interactive shell)
We are setting our chroot to /var/safe_directory. It is a valid chroot that we have created beforehand. You can instead use  --chroot / for your testing purposes (in which case you really aren’t using the chroot at all).

If you launch this and run ps aux and id you’ll see something like the below:
$ ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
99999        1  0.0  0.1   1824  1080 ?        SNs  12:26   0:00 /bin/sh -i
99999       11  0.0  0.1   3392  1852 ?        RN   12:32   0:00 ps ux
$ id
uid=99999 gid=99999 groups=99999
What you can see is that you are only able to view processes initiated inside the jail.

Now let's try adding a filter to this:

./nsjail -Mo --chroot /var/safe_directory --user 99999 --group 99999 --seccomp_string 'POLICY a { ALLOW { write, execve, brk, access, mmap, open, newfstat, close, read, mprotect, arch_prctl, munmap, getuid, getgid, getpid, rt_sigaction, geteuid, getppid, getcwd, getegid, ioctl, fcntl, newstat, clone, wait4, rt_sigreturn, exit_group } } USE a DEFAULT KILL' -- /bin/sh -i
Here we are telling NsJail to:
-Mo:               launch a single process using clone/execve
 
--chroot:          set /var/safe_directory as the new root directory for the process

--user/--group:    set the uid and gid to 99999 inside the jail

--seccomp_string:  use the provided seccomp policy

-- /bin/sh -i:     our sandboxed process (in this case, launch an interactive shell)
If you try to run id now you should see it fail. This is because we have not given it permission to use the required syscalls:
$ id
Bad system call
The idea for us then would be to use NsJail to execute smbd as well as nmbd (both are needed for our Samba setup) and only allow expected syscalls.

Building our solution
Starting with a blank config file, and focusing on smbd, we began adding restrictions to lock down the service.

First we built the seccomp filter list to ensure the process only had access to the syscalls it needed. This was easily obtained using perf:

perf record -e 'raw_syscalls:sys_enter' -- /usr/sbin/smbd -F
This recorded all syscalls used by smbd into perf's format. To output the syscalls in a readable list format we used:
perf script | grep -oP "(?<= NR )[0-9]+" | sort -nu
One thing to mention here is that syscalls can be named differently depending on where you look. Even just between `strace` and `nsjail`, a few syscall names vary slightly from the names in the Linux source. This means that if you use syscall names you won't be able to reuse the exact same list between different tools without renaming a few entries. If you are worried about this, you can opt to use syscall numbers instead: these are a robust, tool-independent way of identifying syscalls.
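
If you do go the numeric route, translating between numbers and names is simple enough. For example (ausyscall ships with the audit userspace tools, and the header path below is the Debian/Ubuntu location for x86_64):

# translate a syscall number to its name (59 is execve on x86_64)
ausyscall 59
# or grep the kernel's syscall table header directly
grep -w 59 /usr/include/x86_64-linux-gnu/asm/unistd_64.h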

After we had our list in place, we set about limiting FS access as well as fiddling with some final settings in our policy to ensure it was locked down as tight as possible.
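
To give an idea of the shape of the config file, here is a heavily trimmed sketch rather than our actual smb.cfg. The field names are from NsJail's protobuf config as we recall it, so treat them as assumptions and check the configs/ directory in the NsJail repo for the authoritative schema:

name: "smbd"
mode: ONCE
chroot_dir: "/var/safe_directory"
uidmap { inside_id: "99999" outside_id: "99999" }
gidmap { inside_id: "99999" outside_id: "99999" }
mount { src: "/etc/samba" dst: "/etc/samba" is_bind: true rw: false }
clone_newuts: false   # keep the real hostname (one of our requirements)
clone_newnet: false   # traffic still needs to reach the non-isolated interception process
seccomp_string: "POLICY smb { ALLOW { ... } } USE smb DEFAULT KILL"
exec_bin { path: "/usr/sbin/smbd" arg: "-F" }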

A rather interesting way to test that the config file was working as expected was to launch a shell using the config and test the protections manually:

./nsjail --config smb.cfg -- /bin/sh -i
Once the policy was tested and we were happy that smbd was running as expected, we did the same for nmbd.

With both services sandboxed we performed a couple of long running tests to ensure we hadn't missed anything. This included leaving the services running over the weekend as well as testing them out by connecting to them from different systems. After all the testing and not finding anything broken, we were happy to sign off.

What does this mean for us?

Most canned exploits against Samba expect a stock system with access to normal system resources. At some point in the future, when the next Samba 0-day surfaces, there’s a good chance that generic exploits against our Samba will fail as they try to exercise syscalls we haven’t expressly permitted. But even if an attacker were to compromise Samba and spawn a shell, that shell would be of limited utility, with a constrained view of the filesystem and of the system in general.

What does this mean for you?
We stepped you through our process of implementing a sandbox for our Samba service. The aim was to get you thinking about your own environment and how sandboxing could play a role in securing your applications. We wanted to show you that it isn't an expensive or overly complicated task. You should try it, and if you do, drop us a note to let us know how it went!


On anti-patterns for ICT security and international law

(Guest Post by @marasawr)
Author’s note : international law is hard, and these remarks are extremely simplified.
Thinkst recently published a thought piece on the theme of 'A Geneva Convention, for software.'[1] Haroon correctly anticipated that I'd be a wee bit crunchy about this particular 'X for Y' anti-pattern, but probably did not anticipate a serialised account of diplomatic derpitude around information and communications technologies (ICT) in international law over the past twenty years. Apparently there is a need for this, however, because this anti-pattern is getting out of hand.
Microsoft President and Chief Legal Officer Brad Smith published early in 2017 on 'The need for a digital Geneva Convention,' and again in late October on 'What the founding of the Red Cross can teach us about cyber warfare.'[2] In both cases, equivalences are drawn between perturbations in the integrity or availability of digital services, and the circumstances which prompted ratification of the Fourth Geneva Convention, or the circumstances prompting the establishment of the ICRC. And this is ridiculous.

Nation-state hacking is not a mass casualty event

The Fourth Geneva Convention (GCIV) was drafted in response to the deadliest single conflict in human history. Casualty statistics for the Second World War are difficult, but regardless of where in the range of 60-80 million dead a given method of calculation falls, the fact remains that the vast majority of fatalities occurred among civilians and non-combatants. The Articles of GCIV, adopted in 1949, respond directly to these deaths as well as other atrocities and deprivations endured by persons then unprotected by international law.[3] The founding of the ICRC was similarly prompted by mass casualties among wounded soldiers in European conflicts during the mid-nineteenth century.[4] But WannaCry was not Solferino; Nyetya was not the Rape of Nanjing.
Microsoft's position is, in effect, that nation-state hacking activities constitute an equivalent threat to civilian populations as the mass casualty events of actual armed conflict, and require commensurate regulation under international law. 'Civilian' is taken simply to mean 'non-government.' The point here is that governments doing government things cost private companies money; this is, according to Smith, unacceptable. Smith isn't wrong that this nation-state stuff impacts private companies, but what he asks for is binding protection under international law against injuries to his bottom line. I find this type of magical thinking particularly irksome, because it is underpinned by the belief that a corporate entity can be apatride and sovereign all at once. Inconveniently for Microsoft, there is no consensus in the customary law of states on which to build the international legal regime of their dreams.
The Thinkst argument in favour of a Geneva Convention for software is somewhat less cynical. Without a common, binding standard of conduct, nation-states are theoretically free to coerce, abuse, or otherwise influence local software companies as and when they please. Without a common standard, the thinking goes, (civilian) software companies and their customers remain in a perpetual state of unevenly and inequitably distributed risk from nation-state interference. Without binding protections and a species of collective bargaining power for smaller economies, nation-states likewise remain unacceptably exposed.[5]
From this starting point, a binding resolution of some description for software sounds more reasonable. However, there are two incorrect assumptions here. One is that nothing of the sort has been previously attempted. Two is that nation-states, particularly small ones, have a vested interest in neutrality as a guiding principle of digital governance. Looking back through the history of UN resolutions, reports, and Groups of Governmental Experts (GGEs) on — please bear with me — 'Developments in the field of information and telecommunications in the context of international security,’ it is clear this is not the case.[6] We as a global community actually have been down this road, and have been at it for almost twenty years.

International law, how does it work?

First, what are the Geneva Conventions, and what are they not?[7] The Geneva Conventions are a collection of four treaties and three additional protocols which comprise the body of international humanitarian law governing the treatment of non-combatant (i.e. wounded, sick, or shipwrecked armed forces, prisoners of war, or civilian) persons in wartime. The Geneva Conventions are not applicable in peacetime, with signatory nations agreeing to abide by the Conventions only in times of war or armed conflict. Such conflicts can be international or non-international (these are treated differently), but the point to emphasise is that an armed conflict with the characteristics of war (i.e. one in which human beings seek to deprive one another of the right to life) is a precondition for the applicability of the Conventions.
UN Member States which have chosen to become signatory to any or all of the Conventions which comprise international humanitarian law (IHL) and the Law of Armed Conflict (LOAC) have, in effect, elected to relinquish a measure of sovereignty over their own conduct in wartime. The concept of Westphalian sovereignty is core to international law, and is the reason internal conflicts are not subject to all of the legal restrictions governing international conflicts.[8] Just to make life more confusing, reasonable international law scholars disagree regarding which conventions and protocols are bucketed under IHL, which are LOAC, and which are both.
In any event, IHL and LOAC do not cease to apply in wartime because Internet or computers; asking for a separate Convention applicable to software presumes that the digital domain is currently beyond the scope of IHL and LOAC, which it is not. That said, Tallinn Manuals 1.0 and 2.0 do highlight some problem areas where characteristics of informatic space render transposition of legal principles presuming kinetic space somewhat comical.[9] IHL and LOAC cannot accommodate all eventualities of military operations in the digital domain without severe distortion to their application in kinetic space, but that is a protocol-sized problem, not a convention-sized problem. It is also a very different problem from those articulated by Microsoft.

19 years of ICT and international security at the UN

What Thinkst and Microsoft both point to is a normative behavioural problem, and there is some fascinating (if tragic) history here. Early in 2017 Michele Markoff appeared for the US Department of State on a panel for the Carnegie Endowment for International Peace, and gave a wonderfully concise breakdown of this story from its beginnings at the UN in 1998.[10] I recommend watching the video, but summarise here as well.
In late September of 1998, the Permanent Representative to the UN for the Russian Federation, Sergei Lavrov, transmitted a letter from his Minister of Foreign Affairs to the Secretary-General.[11] The letter serves as an explanatory memorandum for an attached draft resolution seeking to prohibit the development, production, or use by Member States of ‘particularly dangerous forms of information weapons.’[12] The Russian document voices many anxieties about global governance and security related to ICT which today issue from the US and the EU. Weird, right? At the time, Russian and US understandings of ‘information warfare’ were more-or-less harmonised; the term encompassed traditional electronic warfare (EW) measures and countermeasures, as well as information operations (i.e. propaganda). Whether or not the Russian ask in the autumn of 1998 was sincere is subject to debate, but it was unquestionably ambitious. UN A/C.1/53/3 remains one of my favourite artefacts of Russia's wild ‘90s and really has to be read to be believed.
So what happened? The US did their level best to water down the Russian draft resolution. In the late 1990s the US enjoyed unassailable technological overmatch in the digital domain, and there was no reason to yield any measure of sovereignty over their activities in that space at the request of a junior partner (i.e. Russia). Or so the magical thinking went. The resolution ultimately adopted (unanimously, without a vote) by the UN General Assembly in December 1998 was virtually devoid of substance.[13] And it is that document which has informed the character of UN activities in the area of ‘Developments in the field of information and telecommunications in the context of international security’ ever since.[14] Ironically, the US and like-minded states have now spent about a decade trying to claw their way back to a set of principles not unlike those laid out in the original draft resolution transmitted by Lavrov. Sincere or not, the Russian overture of late 1998 was a bungled opportunity.[15]

State sovereignty vs digital governance

This may seem illogical, but the fault line through the UN GGE on ICT security has never been large vs small states.[16] Instead, it has been those states which privilege the preservation of national sovereignty and freedom from interference in internal affairs vs those states receptive to the idea that their domestic digital governance should reflect existing standards set out in international humanitarian and human rights law. And states have sometimes shifted camps over time. Remember that the Geneva Conventions apply in a more limited fashion to internal conflicts than they do to international conflicts? Whether a state is considering commitment to behave consistently with the spirit of international law in their internal affairs, or commitment to neutrality as a desirable guiding principle of digital governance, both raise the question of state sovereignty.
As it happens, those states which tend to aggressively defend the preservation of state sovereignty in matters of digital governance tend to be those which heavily censor or otherwise leverage their ICT infrastructure for the purposes of state security. In early 2015 Permanent Representatives to the UN from China, Kazakhstan, the Russian Federation, Tajikistan, and Uzbekistan sent a letter to the Secretary-General to the effect of ‘DON’T TREAD ON ME’ in response to proposed ‘norms, rules, and principles for the responsible behaviour of States’ by the GGE for ICT security.[17] Armenia, Belarus, Cuba, Ecuador, Turkey, and others have similarly voiced concern in recent years that proposed norms may violate their state sovereignty.[18]
During the summer of 2017, the UN GGE for ICT security imploded.[19] With China and the Russian Federation having effectively walked away 30 months earlier, and with decades of unresolved disagreement regarding the relationship between state sovereignty, information, and related technologies... colour me shocked.

Hard things are hard

So, how do we safeguard against interference with software companies by intelligence services or other government entities in the absence of a binding international standard? The short answer is : rule of law.
Thinkst’s assertion that ‘there is no technical control that’s different’ between the US and Russian hypotheticals is not accurate. Russian law and lawful interception standards impose technical requirements for access and assistance that do not exist in the United States.[20] When we compare the two countries, we are not comparing like to like. Declining to comply with a federal law enforcement request in the US might get you a public showdown and fierce debate by constitutional law scholars, because that can happen under US law. It is nigh unthinkable that a Russian company could rebel in this manner without consequences for their operations, profitability, or, frankly, for their physical safety, because Russian law is equally clear on that point.
Software companies are not sovereign entities; they do not get to opt out of the legal regimes and geopolitical concerns of the countries in which they are domiciled.[21] In Kaspersky’s case, thinking people around DC have never been hung up on the lack of technical controls ensuring good behaviour. What we have worried about for years is the fact that the legal regime Kaspersky is subject to as a Russian company comfortably accommodates compelled access and assistance without due process, or even a warrant.[22] In the US case, the concern is that abuses by intelligence or law enforcement agencies may occur when legal authorisation is exceeded or misinterpreted. In states like Russia, those abuses and the technical means to execute them are legally sanctioned.
It is difficult enough to arrive at consensus in international law when there is such divergence in the law of individual states. But when it comes to military operations (as distinct from espionage or lawful interception) in the digital domain, we don’t even have divergence in the customary law of states as a starting point. Until states begin to acknowledge their activities and articulate their own legal reasoning, their own understandings of proportionate response, necessity, damage, denial, &c. for military electromagnetic and information operations, the odds of achieving binding international consensus in this area are nil. And there is not a lot compelling states to codify that reasoning at present. As an industry, information security tends to care about nation-state operations to the extent that such attribution can help pimp whatever product is linked below the analysis, and no further. With the odd exception, there is little that can be called rigorous, robust, or scientific about the way we do this. So long as that remains true – so long as information security persists in its methodological laziness on the excuse that perfect confidence is out of reach – I see no externalities which might hasten states active in this domain to admit as much, let alone volunteer a legal framework for their operations.
At present, we should be much more concerned with encouraging greater specificity and transparency in the legal logics of individual states than with international norms creation on a foundation of sand. The ‘X for Y’ anti-pattern deserves its eyerolls in the case of a Geneva Convention for software, but for different reasons than advocates of this approach generally appreciate.
-mara 

[1] Thinkst Thoughts, ‘A Geneva Convention, for software,’ 26 October 2017, http://blog.thinkst.com/2017/10/a-geneva-convention-for-software.html.
[2] Brad Smith, Microsoft On the Issues : ‘The need for a digital Geneva Convention,’ 14 February 2017, https://blogs.microsoft.com/on-the-issues/2017/02/14/need-digital-geneva-convention/; Brad Smith and Carol Ann Browne, LinkedIn Pulse : ‘What the founding of the Red Cross can teach us about cyber warfare,’ 29 October 2017, https://www.linkedin.com/pulse/what-founding-red-cross-can-teach-us-cyber-warfare-brad-smith/.
[3] See Jean S Pictet, Commentary : the Geneva Conventions of 12 August 1949, (Geneva : International Committee of the Red Cross, 1958), https://www.loc.gov/rr/frd/Military_Law/pdf/GC_1949-IV.pdf.
[4] See Jean S Pictet, Commentary : the Geneva Conventions of 12 August 1949, (Geneva : International Committee of the Red Cross, 1952), https://www.loc.gov/rr/frd/Military_Law/pdf/GC_1949-I.pdf.
[5] Groups of Governmental Experts (GGEs) are convened by the UN Secretary-General to study and develop consensus around questions raised by resolutions adopted by the General Assembly. When there is need to Do Something, but nobody knows or can agree on what that Something is, a GGE is established. Usually after a number of other, more ad hoc experts' meetings have failed to deliver consensus. For brevity we often refer to this GGE as 'the GGE for ICT security' or 'the GGE for cybersecurity'. https://www.un.org/disarmament/topics/informationsecurity/
[6] Thinkst Thoughts, ‘A Geneva Convention, for software,’ 26 October 2017, http://blog.thinkst.com/2017/10/a-geneva-convention-for-software.html.
[8] Regulating internecine conflict is extra hard, and also not very popular. See Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), 8 June 1977.
[9] Col Gary D Brown has produced consistently excellent work on this subject. See, e.g., Gary D Brown, "The Cyber Longbow & Other Information Strategies: U.S. National Security and Cyberspace” (28 April 2017). 5 PENN. ST. J.L. & INT’L AFF. 1, 2017, https://ssrn.com/abstract=2971667; Gary D Brown “Spying and Fighting in Cyberspace: What is Which?” (1 April 2016). 8 J. NAT’L SECURITY L. & POL’Y, 2016, https://ssrn.com/abstract=2761460; Gary D Brown and Andrew O Metcalf, “Easier Said Than Done : Legal Review of Cyber Weapons” (12 February 2014). 7 J. NAT’L SECURITY L. & POL’Y, 2014, https://ssrn.com/abstract=2400530. See also, Gary D Brown, panel remarks, ’New challenges to the laws of war : a discussion with Ambassador Valentin Zellweger,’ (Washington, DC : CSIS), 30 October 2015, https://www.youtube.com/watch?v=jV-A21jQWnQ&feature=youtu.be&t=27m36s.
[10] Michele Markoff, panel remarks, ‘Cyber norms revisited : international cybersecurity and the way forward’ (Washington, DC : Carnegie Endowment for Int’l Peace) 6 February 2017, https://www.youtube.com/watch?v=nAuehrVCBBU&feature=youtu.be&t=4m10s.
[11] United Nations, General Assembly, Letter dated 23 September 1998 from the Permanent Representative of the Russian Federation to the United Nations addressed to the Secretary-General, UN GAOR 53rd Sess., Agenda Item 63, UN Doc. A/C.1/53/3 (30 September 1998), https://undocs.org/A/C.1/53/3.
[12] ibid., (3)(c).
[13] GA Res. 53/70, 'Developments in telecommunications and information in the context of international security,’ UN GAOR 53rd Sess., Agenda Item 63, UN Doc. A/RES/53/70 (4 December 1998), https://undocs.org/a/res/53/70.
[14] See GA Res. 54/49 of 1 December 1999, 55/28 of 20 November 2000, 56/19 of 29 November 2001, 57/53 of 22 November 2002, 58/32 of 8 December 2003, 59/61 of 3 December 2004, 60/45 of 8 December 2005, 61/54 of 6 December 2006, 62/17 of 5 December 2007, 63/37 of 2 December 2008, 64/25 of 2 December 2009, 65/41 of 8 December 2010, 66/24 of 2 December 2011, 67/27 of 3 December 2012, 68/243 of 27 December 2013, 69/28 of 2 December 2014, 70/237 of 23 December 2015, and 71/28 of 5 December 2016.
[15] This assessment is somewhat complicated. Accepting any or all of the proposed definitions, codes of conduct, &c. proffered by the Russian Federation over the years may have precluded some actions allegedly taken by the United States, but unambiguously would have prohibited the massive-scale disinformation and influence operations that have become a hallmark of Russian power projection abroad. Similarly, Russian innovations in modular malware with the demonstrated purpose and capability to perturb, damage, or destroy physical critical infrastructure systems would have been contraindicated by their own language.
[16] See, e.g., the Russian reply to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 54th Sess., Agenda Item 71, UN Doc. A/54/213 (9 June 1999), pp. 8-10, https://undocs.org/a/54/213; the Russian reply to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 55th Sess., Agenda Item 68, UN Doc. A/55/140 (12 May 2000), pp. 3-7, https://undocs.org/a/55/140; the Swedish reply (on behalf of Member States of the European Union) to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 56th Sess., Agenda Item 69, UN Doc. A/56/164 (26 June 2001), pp. 4-5, https://undocs.org/a/56/164; and the Russian reply to ibid., UN GAOR 56th Sess., Agenda Item 69, UN Doc. A/56/164/Add.1 (21 June 2001), pp. 2-6, https://undocs.org/a/56/164/add.1.
[17] United Nations, General Assembly, Letter dated 9 January 2015 from the Permanent Representatives of China, Kazakhstan, Kyrgyzstan, the Russian Federation, Tajikistan and Uzbekistan to the United Nations addressed to the Secretary-General, UN GAOR 69th Sess., Agenda Item 91, UN Doc. A/69/723 (9 January 2015), https://undocs.org/a/69/723.
[18] States’ replies since the 65th Session (2010) indexed at https://www.un.org/disarmament/topics/informationsecurity/.
[19] See, e.g., Arun Mohan Sukumar, ‘The UN GGE failed. Is international law in cyberspace doomed as well?,’ Lawfare, 4 July 2017, https://lawfareblog.com/un-gge-failed-international-law-cyberspace-doomed-well, and Elaine Korzak, The Debate : ‘UN GGE on cybersecurity : the end of an era?,’ The Diplomat, 31 July 2017, https://thediplomat.com/2017/07/un-gge-on-cybersecurity-have-china-and-russia-just-made-cyberspace-less-safe/.
[20] Prior to the 2014 Olympics in Sochi, US-CERT warned travellers that
Russia has a national system of lawful interception of all electronic communications. The System of Operative-Investigative Measures, or SORM, legally allows the Russian FSB to monitor, intercept, and block any communication sent electronically (i.e. cell phone or landline calls, internet traffic, etc.). SORM-1 captures telephone and mobile phone communications, SORM-2 intercepts internet traffic, and SORM-3 collects information from all forms of communication, providing long-term storage of all information and data on subscribers, including actual recordings and locations. Reports of Rostelecom, Russia’s national telecom operator, installing deep packet inspection (DPI ) means authorities can easily use key words to search and filter communications. Therefore, it is important that attendees understand communications while at the Games should not be considered private.’
Department of Homeland Security, US-CERT, Security Tip (ST14-01), ’Sochi 2014 Olympic Games’ (NCCIC Watch & Warning : 04 February 2014), https://www.us-cert.gov/ncas/tips/ST14-001. See also Andrei Soldatov and Irina Borogan, The Red Web : the struggle between Russia’s digital dictators and the new online revolutionaries, (New York : Public Affairs, 2017 [2015]).
[21] In the United States, this has become a question of the extraterritorial application of the Stored Communications Act (18 USC § 2703) in the presence of a warrant, probable cause, &c. dressed up as a privacy debate. See Andrew Keane Woods, ‘A primer on Microsoft Ireland, the Supreme Court’s extraterritorial warrant case,’ Lawfare, 16 October 2017, https://lawfareblog.com/primer-microsoft-ireland-supreme-courts-extraterritorial-warrant-case.
[22] At the time of writing, eight Russian law enforcement and security agencies are granted direct access to SORM : the Ministry of Internal Affairs (MVD), Federal Security Service (FSB), Federal Protective Service (FSO), Foreign Intelligence Service (SVR), Federal Customs Service (FTS), Federal Drug Control Service (FSKN), Federal Penitentiary Service (FSIN), and the Main Intelligence Directorate of the General Staff (GRU). Federal Laws 374-FZ and 375-FZ of 6th July 2016 ('On Amendments to the Criminal Code of the Russian Federation and the Code of Criminal Procedure of the Russian Federation with regard to establishing additional measures to counter terrorism and ensure public security’), also known as the ‘Yarovaya laws,’ will enter into force on 1st July 2018; these laws substantially eliminate warrant requirements for communications and metadata requests to Russian telecommunications companies and ISPs, and additionally impose retention and decryption for all voice, text, video, and image communications. See, e.g., DR Analytica, report, ‘Yarovaya law : one year after,’ 24 April 2017, https://analytica.digital.report/en/2017/04/24/yarovaya-law-one-year-after/.

A Geneva Convention, for software

The anti-pattern “X for Y” is a sketchy way to start any tech think piece, and with “cyber” stories guaranteeing eyeballs, you’re already tired of the many horrible articles predicting a “Digital Pearl Harbour” or “cyber Armageddon”. In this case however, we believe this article’s title fits and are going to run with it. (Ed’s note: So did all the other authors!)


The past 10 years have made it clear that the internet (both the software that powers it and the software that runs on top of it) is fair game for attackers. The past 5 years have made it clear that nobody has internalized this message as well as the global Intelligence Community. The Snowden leaks pulled back the curtains on massive Five Eyes efforts in this regard, from muted deals with Internet behemoths to amusing grab-all efforts like collecting still images from Yahoo webcam chats(1).


In response to these revelations, a bunch of us predicted a creeping Balkanization of the Internet, as more people became acutely aware of their dependence on a single country for all their software and digital services. Two incidents in the last two months have caused these thoughts to resurface: the NotPetya worm (2), and the accusations against Kaspersky AV.


To quickly recap NotPetya: a mundane accounting package called M.E.Doc, with wide adoption in Ukraine, was abused to infect victims. Worms and viruses are a dime a dozen, but a few things made NotPetya stand out. For starters, it used an infection vector repurposed from an NSA leak, it seemed to target Ukraine pretty specifically, and it had tangible side effects in the real world (the Maersk shipping company reported losses of up to $200 million due to NotPetya (3)). What interested us most about NotPetya, however, was its infection vector. Having compromised the wide-open servers of M.E.Doc, the attackers proceeded to build a malicious update for the accounting package. This update was then automatically downloaded and applied by thousands of clients. Auto-updates are common at this point, and considered good security hygiene, so it’s an interesting twist when the update itself becomes the attack vector.


The Kaspersky saga also touched on “evil updates” tangentially. While many in the US Intelligence Community have long looked down on a Russian AntiVirus company gaining popularity in the US, Kaspersky has routinely performed well enough to gain considerable market share. This came to a head in September this year when the US Dept. of Homeland Security (DHS) issued a directive for all US governmental departments to remove Kaspersky software from their computers (4). In the days that followed, a more intriguing narrative emerged. According to various sources, an NSA employee who was working on exploitation and attack tooling took some of his work home, where his home computer (running Kaspersky software) proceeded to slurp up his “tagged” files.


Like most things infosec, this has kicked off a distracting sub-drama involving Israeli, Russian and American cyber-spooks. Kaspersky defenders have come out calling the claims outrageous, Kaspersky detractors claim that their collusion with Russian intelligence is obvious and some timid voices have remained non-committal while waiting for more proof. We are going to ignore this part of the drama completely.


What we _do_ care about though is the possibility that updates can be abused to further nation state interests. The American claim that Russian Intelligence was pushing updates selectively to some of its users (turning their software into a massive, distributed spying tool) is completely feasible from a technical standpoint. Kaspersky has responded by publishing a plan for improved transparency, which may or may not maintain their standing with the general public. But that ignores the obvious fact that as with any software that operates at that level, a “non-malicious” system is just one update away from being “malicious”. The anti-Kasperskians are quick to point out that even if Kaspersky has been innocent until now, they could well turn malicious tomorrow (with pressure from the GRU) and that any assurances given by Kaspersky are dependent on them being “good” instead of being technical controls.


For us, as relative non-combatants in this war, the irony is biting. The same (mostly American) voices who are quick to float the idea of the GRU co-opting Russian companies into bad behaviour claim that US-based companies would never succumb to US IC pressure, because of the threat to their industry position should it come out. There is no technical control that’s different in the two cases; US defenders are betting that the US IC will do the “right thing”, not only today but also far into the future. This naturally leads to an important question: do the same rules apply if the US is officially (or unofficially) at war with another nation?


In the Second World War, Germany nationalized English assets located in Germany, and the British did likewise. It makes perfect sense and will probably happen during future conflicts too. But Computers and the Internet change this. In a fictitious war between the USA and Germany, the Germans could take over every Microsoft campus in the country, but it wouldn’t protect their Windows machines from a single malicious update propagated from Redmond. The more you think about this, the scarier it gets. A single malicious update pushed from Seattle could cripple huge pieces of almost every government worldwide. What prevents this? Certainly not technical controls. [Footnote: Unless you build a national OS like North Korea did, https://en.wikipedia.org/wiki/Red_Star_OS].


This situation is without precedent. That a small number of vendors have the capacity to remotely shut down government infrastructure, or vacuum up secret documents, is almost too scary to wrap your head around. And that’s without pondering how likely they are to be pressured by their governments. In the face of future conflict, is the first step going to be disabling auto-updates for software from that country?


This bodes badly for us all; the internet is healthier when everyone auto-updates. When ecosystems delay patching, we are all provably worse off. (When patching is painful, botnets like Mirai take out innocent netizens with 620 Gbit/s of traffic (5)). Even just considering the possibilities leads us to a dark place. South Korea owns about 30% of the phone market in the USA (and supplies components in almost all of them). Chinese factories build hardware and ship firmware in devices we rely on daily. Like it or not, we are all dependent on these countries behaving as good international citizens, but have very little in terms of a carrot or a stick to encourage “good behavior”.


It gets even worse for smaller countries. A type of mutually assured technology destruction might exist between China and the USA, but what happens when you are South Africa? You don’t have a dog in that fight. You shovel millions and millions of dollars to foreign corporations and you hope like hell that it’s never held against you. South Africa doesn’t have the bargaining power to enforce good behavior, and neither does Argentina, or Spain, but together, we may.


An agreement between all participating countries can be drawn up, where each country commits to not using its influence over a local software company to negatively affect other signatories. Countries found violating this principle risk repercussions from all member countries for all software produced by the country. In this way, any Intelligence Agency that seeks to abuse influence over a single company’s software risks all software produced by that country with all member countries. This creates a shared stick that keeps everyone safer.


This clearly isn’t a silver bullet. An intelligence agency may still break into software companies to backdoor their software, and probably will. They just can’t do it with the company’s cooperation. Countries will have a central arbitrator (like the International Court of Justice) that will field cases to determine whether IC machinations were done with or without the consent of the software company, and, like the Geneva Conventions, the agreement would still be enforceable during times of conflict or war.

Software companies have grown rich by selling to countries all over the world. Software (and the Internet) have become massive shared resources that countries the world over are dependent on. Even if they do not produce enough globally distributed software to have a seat at the table, all countries deserve the comfort of knowing that the software they purchase won’t be used against them. The case against Kaspersky makes it clear that the USA acknowledges this as a credible threat and is taking steps to protect itself. A global agreement protects the rest of us too.