On anti-patterns for ICT security and international law

(Guest Post by @marasawr)
Author’s note : international law is hard, and these remarks are extremely simplified.
Thinkst recently published a thought piece on the theme of 'A Geneva Convention, for software.'[1] Haroon correctly anticipated that I'd be a wee bit crunchy about this particular 'X for Y' anti-pattern, but probably did not anticipate a serialised account of diplomatic derpitude around information and communications technologies (ICT) in international law over the past twenty years. Apparently there is a need for this, however, because this anti-pattern is getting out of hand.
Microsoft President and Chief Legal Officer Brad Smith published early in 2017 on 'The need for a digital Geneva Convention,' and again in late October on 'What the founding of the Red Cross can teach us about cyber warfare.'[2] In both cases, equivalences are drawn between perturbations in the integrity or availability of digital services, and the circumstances which prompted ratification of the Fourth Geneva Convention, or the circumstances prompting the establishment of the ICRC. And this is ridiculous.

Nation-state hacking is not a mass casualty event

The Fourth Geneva Convention (GCIV) was drafted in response to the deadliest single conflict in human history. Casualty statistics for the Second World War are difficult, but regardless of where in the range of 60-80 million dead a given method of calculation falls, the fact remains that the vast majority of fatalities occurred among civilians and non-combatants. The Articles of GCIV, adopted in 1949, respond directly to these deaths as well as other atrocities and deprivations endured by persons then unprotected by international law.[3] The founding of the ICRC was similarly prompted by mass casualties among wounded soldiers in European conflicts during the mid-nineteenth century.[4] But WannaCry was not Solferino; Nyetya was not the Rape of Nanjing.
Microsoft's position is, in effect, that nation-state hacking activities constitute an equivalent threat to civilian populations as the mass casualty events of actual armed conflict, and require commensurate regulation under international law. 'Civilian' is taken simply to mean 'non-government.' The point here is that governments doing government things cost private companies money; this is, according to Smith, unacceptable. Smith isn't wrong that this nation-state stuff impacts private companies, but what he asks for is binding protection under international law against injuries to his bottom line. I find this type of magical thinking particularly irksome, because it is underpinned by the belief that a corporate entity can be apatride and sovereign all at once. Inconveniently for Microsoft, there is no consensus in the customary law of states on which to build the international legal regime of their dreams.
The Thinkst argument in favour of a Geneva Convention for software is somewhat less cynical. Without a common, binding standard of conduct, nation-states are theoretically free to coerce, abuse, or otherwise influence local software companies as and when they please. Without a common standard, the thinking goes, (civilian) software companies and their customers remain in a perpetual state of unevenly and inequitably distributed risk from nation-state interference. Without binding protections and a species of collective bargaining power for smaller economies, nation-states likewise remain unacceptably exposed.[5]
From this starting point, a binding resolution of some description for software sounds more reasonable. However, there are two incorrect assumptions here. One is that nothing of the sort has been previously attempted. Two is that nation-states, particularly small ones, have a vested interest in neutrality as a guiding principle of digital governance. Looking back through the history of UN resolutions, reports, and Groups of Governmental Experts (GGEs) on — please bear with me — 'Developments in the field of information and telecommunications in the context of international security,’ it is clear this is not the case.[6] We as a global community actually have been down this road, and have been at it for almost twenty years.

International law, how does it work?

First, what are the Geneva Conventions, and what are they not?[7] The Geneva Conventions are a collection of four treaties and three additional protocols which comprise the body of international humanitarian law governing the treatment of non-combatant (i.e. wounded, sick, or shipwrecked armed forces, prisoners of war, or civilian) persons in wartime. The Geneva Conventions are not applicable in peacetime, with signatory nations agreeing to abide by the Conventions only in times of war or armed conflict. Such conflicts can be international or non-international (these are treated differently), but the point to emphasise is that an armed conflict with the characteristics of war (i.e. one in which human beings seek to deprive one another of the right to life) is a precondition for the applicability of the Conventions.
UN Member States which have chosen to become signatory to any or all of the Conventions which comprise international humanitarian law (IHL) and the Law of Armed Conflict (LOAC) have, in effect, elected to relinquish a measure of sovereignty over their own conduct in wartime. The concept of Westphalian sovereignty is core to international law, and is the reason internal conflicts are not subject to all of the legal restrictions governing international conflicts.[8] Just to make life more confusing, reasonable international law scholars disagree regarding which conventions and protocols are bucketed under IHL, which are LOAC, and which are both.
In any event, IHL and LOAC do not cease to apply in wartime because Internet or computers; asking for a separate Convention applicable to software presumes that the digital domain is currently beyond the scope of IHL and LOAC, which it is not. That said, Tallinn Manuals 1.0 and 2.0 do highlight some problem areas where characteristics of informatic space render transposition of legal principles presuming kinetic space somewhat comical.[9] IHL and LOAC cannot accommodate all eventualities of military operations in the digital domain without severe distortion to their application in kinetic space, but that is a protocol-sized problem, not a convention-sized problem. It is also a very different problem from those articulated by Microsoft.

19 years of ICT and international security at the UN

What Thinkst and Microsoft both point to is a normative behavioural problem, and there is some fascinating (if tragic) history here. Early in 2017 Michele Markoff appeared for the US Department of State on a panel for the Carnegie Endowment for International Peace, and gave a wonderfully concise breakdown of this story from its beginnings at the UN in 1998.[10] I recommend watching the video, but summarise here as well.
In late September of 1998, the Permanent Representative to the UN for the Russian Federation, Sergei Lavrov, transmitted a letter from his Minister of Foreign Affairs to the Secretary-General.[11] The letter serves as an explanatory memorandum for an attached draft resolution seeking to prohibit the development, production, or use by Member States of ‘particularly dangerous forms of information weapons.’[12] The Russian document voices many anxieties about global governance and security related to ICT which today issue from the US and the EU. Weird, right? At the time, Russian and US understandings of ‘information warfare’ were more-or-less harmonised; the term encompassed traditional electronic warfare (EW) measures and countermeasures, as well as information operations (i.e. propaganda). Whether or not the Russian ask in the autumn of 1998 was sincere is subject to debate, but it was unquestionably ambitious. UN A/C.1/53/3 remains one of my favourite artefacts of Russia's wild ‘90s and really has to be read to be believed.
So what happened? The US did their level best to water down the Russian draft resolution. In the late 1990s the US enjoyed unassailable technological overmatch in the digital domain, and there was no reason to yield any measure of sovereignty over their activities in that space at the request of a junior partner (i.e. Russia). Or so the magical thinking went. The resolution ultimately adopted (unanimously, without a vote) by the UN General Assembly in December 1998 was virtually devoid of substance.[13] And it is that document which has informed the character of UN activities in the area of ‘Developments in the field of information and telecommunications in the context of international security’ ever since.[14] Ironically, the US and like-minded states have now spent about a decade trying to claw their way back to a set of principles not unlike those laid out in the original draft resolution transmitted by Lavrov. Sincere or not, the Russian overture of late 1998 was a bungled opportunity.[15]

State sovereignty vs digital governance

This may seem illogical, but the fault line through the UN GGE on ICT security has never been large vs small states.[16] Instead, it has been those states which privilege the preservation of national sovereignty and freedom from interference in internal affairs vs those states receptive to the idea that their domestic digital governance should reflect existing standards set out in international humanitarian and human rights law. And states have sometimes shifted camps over time. Remember that the Geneva Conventions apply in a more limited fashion to internal conflicts than they do to international conflicts? Whether a state is considering commitment to behave consistently with the spirit of international law in their internal affairs, or commitment to neutrality as a desirable guiding principle of digital governance, both raise the question of state sovereignty.
As it happens, those states which most aggressively defend the preservation of state sovereignty in matters of digital governance tend to be those which heavily censor or otherwise leverage their ICT infrastructure for the purposes of state security. In early 2015 Permanent Representatives to the UN from China, Kazakhstan, Kyrgyzstan, the Russian Federation, Tajikistan, and Uzbekistan sent a letter to the Secretary-General to the effect of ‘DON’T TREAD ON ME’ in response to proposed ‘norms, rules, and principles for the responsible behaviour of States’ by the GGE for ICT security.[17] Armenia, Belarus, Cuba, Ecuador, Turkey, and others have similarly voiced concern in recent years that proposed norms may violate their state sovereignty.[18]
During the summer of 2017, the UN GGE for ICT security imploded.[19] With China and the Russian Federation having effectively walked away 30 months earlier, and with decades of unresolved disagreement regarding the relationship between state sovereignty, information, and related technologies... colour me shocked.

Hard things are hard

So, how do we safeguard against interference with software companies by intelligence services or other government entities in the absence of a binding international standard? The short answer is : rule of law.
Thinkst’s assertion that ‘there is no technical control that’s different’ between the US and Russian hypotheticals is not accurate. Russian law and lawful interception standards impose technical requirements for access and assistance that do not exist in the United States.[20] When we compare the two countries, we are not comparing like to like. Declining to comply with a federal law enforcement request in the US might get you a public showdown and fierce debate by constitutional law scholars, because that can happen under US law. It is nigh unthinkable that a Russian company could rebel in this manner without consequences for their operations, profitability, or, frankly, for their physical safety, because Russian law is equally clear on that point.
Software companies are not sovereign entities; they do not get to opt out of the legal regimes and geopolitical concerns of the countries in which they are domiciled.[21] In Kaspersky’s case, thinking people around DC have never been hung up on the lack of technical controls ensuring good behaviour. What we have worried about for years is the fact that the legal regime Kaspersky is subject to as a Russian company comfortably accommodates compelled access and assistance without due process, or even a warrant.[22] In the US case, the concern is that abuses by intelligence or law enforcement agencies may occur when legal authorisation is exceeded or misinterpreted. In states like Russia, those abuses and the technical means to execute them are legally sanctioned.
It is difficult enough to arrive at consensus in international law when there is such divergence in the law of individual states. But when it comes to military operations (as distinct from espionage or lawful interception) in the digital domain, we don’t even have divergence in the customary law of states as a starting point. Until states begin to acknowledge their activities and articulate their own legal reasoning, their own understandings of proportionate response, necessity, damage, denial, &c. for military electromagnetic and information operations, the odds of achieving binding international consensus in this area are nil. And there is not a lot compelling states to codify that reasoning at present. As an industry, information security tends to care about nation-state operations to the extent that such attribution can help pimp whatever product is linked below the analysis, and no further. With the odd exception, there is little that can be called rigorous, robust, or scientific about the way we do this. So long as that remains true – so long as information security persists in its methodological laziness on the excuse that perfect confidence is out of reach – I see no externalities which might hasten states active in this domain to admit as much, let alone volunteer a legal framework for their operations.
At present, we should be much more concerned with encouraging greater specificity and transparency in the legal logics of individual states than with international norms creation on a foundation of sand. The ‘X for Y’ anti-pattern deserves its eyerolls in the case of a Geneva Convention for software, but for different reasons than advocates of this approach generally appreciate.
-mara 

[1] Thinkst Thoughts, ‘A Geneva Convention, for software,’ 26 October 2017, http://blog.thinkst.com/2017/10/a-geneva-convention-for-software.html.
[2] Brad Smith, Microsoft On the Issues : ‘The need for a digital Geneva Convention,’ 14 February 2017, https://blogs.microsoft.com/on-the-issues/2017/02/14/need-digital-geneva-convention/; Brad Smith and Carol Ann Browne, LinkedIn Pulse : ‘What the founding of the Red Cross can teach us about cyber warfare,’ 29 October 2017, https://www.linkedin.com/pulse/what-founding-red-cross-can-teach-us-cyber-warfare-brad-smith/.
[3] See Jean S Pictet, Commentary : the Geneva Conventions of 12 August 1949, (Geneva : International Committee of the Red Cross, 1958), https://www.loc.gov/rr/frd/Military_Law/pdf/GC_1949-IV.pdf.
[4] See Jean S Pictet, Commentary : the Geneva Conventions of 12 August 1949, (Geneva : International Committee of the Red Cross, 1952), https://www.loc.gov/rr/frd/Military_Law/pdf/GC_1949-I.pdf.
[5] Groups of Governmental Experts (GGEs) are convened by the UN Secretary-General to study and develop consensus around questions raised by resolutions adopted by the General Assembly. When there is need to Do Something, but nobody knows or can agree on what that Something is, a GGE is established. Usually after a number of other, more ad hoc experts' meetings have failed to deliver consensus. For brevity we often refer to this GGE as 'the GGE for ICT security' or 'the GGE for cybersecurity'. https://www.un.org/disarmament/topics/informationsecurity/
[6] Thinkst Thoughts, ‘A Geneva Convention, for software,’ 26 October 2017, http://blog.thinkst.com/2017/10/a-geneva-convention-for-software.html.
[8] Regulating internecine conflict is extra hard, and also not very popular. See Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), 8 June 1977.
[9] Col Gary D Brown has produced consistently excellent work on this subject. See, e.g., Gary D Brown, "The Cyber Longbow & Other Information Strategies: U.S. National Security and Cyberspace” (28 April 2017). 5 PENN. ST. J.L. & INT’L AFF. 1, 2017, https://ssrn.com/abstract=2971667; Gary D Brown “Spying and Fighting in Cyberspace: What is Which?” (1 April 2016). 8 J. NAT’L SECURITY L. & POL’Y, 2016, https://ssrn.com/abstract=2761460; Gary D Brown and Andrew O Metcalf, “Easier Said Than Done : Legal Review of Cyber Weapons” (12 February 2014). 7 J. NAT’L SECURITY L. & POL’Y, 2014, https://ssrn.com/abstract=2400530. See also, Gary D Brown, panel remarks, ’New challenges to the laws of war : a discussion with Ambassador Valentin Zellweger,’ (Washington, DC : CSIS), 30 October 2015, https://www.youtube.com/watch?v=jV-A21jQWnQ&feature=youtu.be&t=27m36s.
[10] Michele Markoff, panel remarks, ‘Cyber norms revisited : international cybersecurity and the way forward’ (Washington, DC : Carnegie Endowment for Int’l Peace) 6 February 2017, https://www.youtube.com/watch?v=nAuehrVCBBU&feature=youtu.be&t=4m10s.
[11] United Nations, General Assembly, Letter dated 23 September 1998 from the Permanent Representative of the Russian Federation to the United Nations addressed to the Secretary-General, UN GAOR 53rd Sess., Agenda Item 63, UN Doc. A/C.1/53/3 (30 September 1998), https://undocs.org/A/C.1/53/3.
[12] ibid., (3)(c).
[13] GA Res. 53/70, 'Developments in telecommunications and information in the context of international security,’ UN GAOR 53rd Sess., Agenda Item 63, UN Doc. A/RES/53/70 (4 December 1998), https://undocs.org/a/res/53/70.
[14] See GA Res. 54/49 of 1 December 1999, 55/28 of 20 November 2000, 56/19 of 29 November 2001, 57/53 of 22 November 2002, 58/32 of 8 December 2003, 59/61 of 3 December 2004, 60/45 of 8 December 2005, 61/54 of 6 December 2006, 62/17 of 5 December 2007, 63/37 of 2 December 2008, 64/25 of 2 December 2009, 65/41 of 8 December 2010, 66/24 of 2 December 2011, 67/27 of 3 December 2012, 68/243 of 27 December 2013, 69/28 of 2 December 2014, 70/237 of 23 December 2015, and 71/28 of 5 December 2016.
[15] This assessment is somewhat complicated. Accepting any or all of the proposed definitions, codes of conduct, &c. proffered by the Russian Federation over the years may have precluded some actions allegedly taken by the United States, but unambiguously would have prohibited the massive-scale disinformation and influence operations that have become a hallmark of Russian power projection abroad. Similarly, Russian innovations in modular malware with the demonstrated purpose and capability to perturb, damage, or destroy physical critical infrastructure systems would have been contraindicated by their own language.
[16] See, e.g., the Russian reply to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 54th Sess., Agenda Item 71, UN Doc. A/54/213 (9 June 1999), pp. 8-10, https://undocs.org/a/54/213; the Russian reply to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 55th Sess., Agenda Item 68, UN Doc. A/55/140 (12 May 2000), pp. 3-7, https://undocs.org/a/55/140; the Swedish reply (on behalf of Member States of the European Union) to 'Developments in telecommunications and information in the context of international security,’ Report of the Secretary-General, UN GAOR 56th Sess., Agenda Item 69, UN Doc. A/56/164 (26 June 2001), pp. 4-5, https://undocs.org/a/56/164; and the Russian reply to ibid., UN GAOR 56th Sess., Agenda Item 69, UN Doc. A/56/164/Add.1 (21 June 2001), pp. 2-6, https://undocs.org/a/56/164/add.1.
[17] United Nations, General Assembly, Letter dated 9 January 2015 from the Permanent Representatives of China, Kazakhstan, Kyrgyzstan, the Russian Federation, Tajikistan and Uzbekistan to the United Nations addressed to the Secretary-General, UN GAOR 69th Sess., Agenda Item 91, UN Doc. A/69/723 (9 January 2015), https://undocs.org/a/69/723.
[18] States’ replies since the 65th Session (2010) indexed at https://www.un.org/disarmament/topics/informationsecurity/.
[19] See, e.g., Arun Mohan Sukumar, ‘The UN GGE failed. Is international law in cyberspace doomed as well?,’ Lawfare, 4 July 2017, https://lawfareblog.com/un-gge-failed-international-law-cyberspace-doomed-well, and Elaine Korzak, The Debate : ‘UN GGE on cybersecurity : the end of an era?,’ The Diplomat, 31 July 2017, https://thediplomat.com/2017/07/un-gge-on-cybersecurity-have-china-and-russia-just-made-cyberspace-less-safe/.
[20] Prior to the 2014 Olympics in Sochi, US-CERT warned travellers that
‘Russia has a national system of lawful interception of all electronic communications. The System of Operative-Investigative Measures, or SORM, legally allows the Russian FSB to monitor, intercept, and block any communication sent electronically (i.e. cell phone or landline calls, internet traffic, etc.). SORM-1 captures telephone and mobile phone communications, SORM-2 intercepts internet traffic, and SORM-3 collects information from all forms of communication, providing long-term storage of all information and data on subscribers, including actual recordings and locations. Reports of Rostelecom, Russia’s national telecom operator, installing deep packet inspection (DPI) means authorities can easily use key words to search and filter communications. Therefore, it is important that attendees understand communications while at the Games should not be considered private.’
Department of Homeland Security, US-CERT, Security Tip (ST14-001) ’Sochi 2014 Olympic Games’ (NCCIC Watch & Warning : 04 February 2014), https://www.us-cert.gov/ncas/tips/ST14-001. See also Andrei Soldatov and Irina Borogan, The Red Web : the struggle between Russia’s digital dictators and the new online revolutionaries, (New York : Public Affairs, 2017 [2015]).
[21] In the United States, this has become a question of the extraterritorial application of the Stored Communications Act (18 USC § 2703) in the presence of a warrant, probable cause, &c. dressed up as a privacy debate. See Andrew Keane Woods, ‘A primer on Microsoft Ireland, the Supreme Court’s extraterritorial warrant case,’ Lawfare, 16 October 2017, https://lawfareblog.com/primer-microsoft-ireland-supreme-courts-extraterritorial-warrant-case.
[22] At the time of writing, eight Russian law enforcement and security agencies are granted direct access to SORM : the Ministry of Internal Affairs (MVD), Federal Security Service (FSB), Federal Protective Service (FSO), Foreign Intelligence Service (SVR), Federal Customs Service (FTS), Federal Drug Control Service (FSKN), Federal Penitentiary Service (FSIN), and the Main Intelligence Directorate of the General Staff (GRU). Federal Laws 374-FZ and 375-FZ of 6th July 2016 ('On Amendments to the Criminal Code of the Russian Federation and the Code of Criminal Procedure of the Russian Federation with regard to establishing additional measures to counter terrorism and ensure public security’), also known as the ‘Yarovaya laws,’ will enter into force on 1st July 2018; these laws substantially eliminate warrant requirements for communications and metadata requests to Russian telecommunications companies and ISPs, and additionally impose retention and decryption for all voice, text, video, and image communications. See, e.g., DR Analytica, report, ‘Yarovaya law : one year after,’ 24 April 2017, https://analytica.digital.report/en/2017/04/24/yarovaya-law-one-year-after/.

A Geneva Convention, for software

The anti-pattern “X for Y” is a sketchy way to start any tech think piece, and with “cyber” stories guaranteeing eyeballs, you’re already tired of the many horrible articles predicting a “Digital Pearl Harbour” or “cyber Armageddon”. In this case however, we believe this article’s title fits and are going to run with it. (Ed’s note: So did all the other authors!)


The past 10 years have made it clear that the internet (both the software that powers it and the software that runs on top of it) is fair game for attackers. The past 5 years have made it clear that nobody has internalized this message as well as the global Intelligence Community. The Snowden leaks pulled back the curtains on massive Five Eyes efforts in this regard, from muted deals with Internet behemoths, to amusing grab-all efforts like collecting still images from Yahoo webcam chats(1).


In response to these revelations, a bunch of us predicted a creeping Balkanization of the Internet, as more people became acutely aware of their dependence on a single country for all their software and digital services. Two incidents in the last two months have caused these thoughts to resurface: the NotPetya worm (2), and the accusations against Kaspersky AV.


To quickly recap NotPetya: a mundane accounting package called M.E.Doc with wide adoption (in Ukraine) was abused to infect victims. Worms and viruses are a dime a dozen, but a few things made NotPetya stand out. For starters, it used an infection vector repurposed from an NSA leak, it seemed to target Ukraine pretty specifically, and it had tangible side effects in the real world (the Maersk shipping company reported losses of up to $200 million due to NotPetya (3)). What interested us most about NotPetya, however, was its infection vector. Having compromised the wide open servers of M.E.Doc, the attackers proceeded to build a malicious update for the accounting package. This update was then automatically downloaded and applied by thousands of clients. Auto-updates are common at this point, and considered good security hygiene, so it’s an interesting twist when the update itself becomes the attack vector.


The Kaspersky saga also touched on “evil updates” tangentially. While many in the US Intelligence Community have long looked down on a Russian AntiVirus company gaining popularity in the US, Kaspersky has routinely performed well enough to gain considerable market share. This came to a head in September this year when the US Dept. of Homeland Security (DHS) issued a directive for all US governmental departments to remove Kaspersky software from their computers (4). In the days that followed, a more intriguing narrative emerged. According to various sources, an NSA employee who was working on exploitation and attack tooling took some of his work home, where his home computer (running Kaspersky software) proceeded to slurp up his “tagged” files.


Like most things infosec, this has kicked off a distracting sub-drama involving Israeli, Russian and American cyber-spooks. Kaspersky defenders have come out calling the claims outrageous, Kaspersky detractors claim that their collusion with Russian intelligence is obvious and some timid voices have remained non-committal while waiting for more proof. We are going to ignore this part of the drama completely.


What we _do_ care about though is the possibility that updates can be abused to further nation state interests. The American claim that Russian Intelligence was pushing updates selectively to some of its users (turning their software into a massive, distributed spying tool) is completely feasible from a technical standpoint. Kaspersky has responded by publishing a plan for improved transparency, which may or may not maintain their standing with the general public. But that ignores the obvious fact that, as with any software that operates at that level, a "non-malicious" system is just one update away from being "malicious". The anti-Kasperskians are quick to point out that even if Kaspersky has been innocent until now, they could well turn malicious tomorrow (with pressure from the GRU), and that any assurances given by Kaspersky depend on them being "good" rather than on technical controls.


For us, as relative non-combatants in this war, the irony is biting. The same (mostly American) voices who are quick to float the idea of the GRU co-opting Russian companies into bad behaviour claim that US-based companies would never succumb to US IC pressure, because of the threat to their industry position should it come out. There is no technical control that’s different in the two cases; US defenders are betting that the US IC will do the “right thing”, not only today but also far into the future. This naturally leads to an important question: do the same rules apply if the US is officially (or unofficially) at war with another nation?


In the Second World War, Germany nationalized English assets located in Germany, and the British did likewise. It makes perfect sense and will probably happen during future conflicts too. But computers and the Internet change this. In a fictitious war between the USA and Germany, the Germans could take over every Microsoft campus in the country, but it wouldn’t protect their Windows machines from a single malicious update propagated from Redmond. The more you think about this, the scarier it gets. A single malicious update pushed from Seattle could cripple huge pieces of almost every government worldwide. What prevents this? Certainly not technical controls. [Footnote: Unless you build a national OS like North Korea did, https://en.wikipedia.org/wiki/Red_Star_OS].


This situation is without precedent. That a small number of vendors have the capacity to remotely shut down government infrastructure, or vacuum up secret documents, is almost too scary to wrap your head around. And that’s without pondering how likely they are to be pressured by their governments. In the face of future conflict, is the first step going to be disabling auto-updates for software from that country?


This bodes badly for us all; the internet is healthier when everyone auto-updates. When ecosystems delay patching, we are all provably worse off. (When patching is painful, botnets like Mirai take out innocent netizens with 620 Gbit/s of traffic (5)). Even just the possibilities lead us to a dark place. South Korea owns about 30% of the phone market in the USA (and supplies components in almost all of them). Chinese factories build hardware and ship firmware in devices we rely on daily. Like it or not, we are all dependent on these countries behaving as good international citizens, but have very little in terms of a carrot or a stick to encourage “good behavior”.


It gets even worse for smaller countries. A type of mutually assured technology destruction might exist between China and the USA, but what happens when you are South Africa? You don’t have a dog in that fight. You shovel millions and millions of dollars to foreign corporations and you hope like hell that it’s never held against you. South Africa doesn’t have the bargaining power to enforce good behavior, and neither does Argentina, or Spain, but together, we may.


An agreement between all participating countries can be drawn up, where a country commits to not using their influence over a local software company to negatively affect other signatories. Countries found violating this principle risk repercussions from all member countries for all software produced by the country. In this way, any Intelligence Agency that seeks to abuse influence over a single company’s software, risks all software produced by that country with all member countries. This creates a shared stick that keeps everyone safer.


This clearly isn’t a silver bullet. An intelligence agency may still break into software companies to backdoor their software, and probably will. They just can’t do it with the company’s cooperation. Countries will have a central arbitrator (like the International Court of Justice) that will field cases to determine whether IC machinations were done with or without the consent of the software company, and, like the Geneva Conventions, the agreement would remain enforceable during times of conflict or war.

Software companies have grown rich by selling to countries all over the world. Software (and the Internet) have become massive shared resources that countries the world over are dependent on. Even if they do not produce enough globally distributed software to have a seat at the table, all countries deserve the comfort of knowing that the software they purchase won’t be used against them. The case against Kaspersky makes it clear that the USA acknowledges this as a credible threat and is taking steps to protect itself. A global agreement protects the rest of us too.

Canarytokens' new member: AWS API key Canarytoken

This is the fourth post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.

Introduction

In this blog post, we will introduce you to the newest member of our Canarytokens family, the Amazon Web Services API key token. This new Canarytoken allows you to sprinkle AWS API keys around and then notifies you when they are used. (If you stick around to the end, we will also share some of the details behind how we built it).

Background

Amazon Web Services offers a massive range of services that are easily integratable with each other. This encourages companies to build entire products and product pipelines using the AWS suite. In order to automate and manipulate AWS services using their API, we are given access keys which can be restricted by AWS policies. Access keys are defined on a per user basis which means there are a few moving parts in order to lock down an AWS account securely.
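As a rough illustration of how that works in practice, an access key pair is all a client library needs to drive the API on behalf of an IAM user. A minimal sketch (the key values below are AWS's documented placeholder examples, not real credentials):

    # A minimal sketch: authenticating to AWS with an access key pair via boto3.
    # The key values are AWS's documented placeholder examples, not real keys.
    import boto3

    session = boto3.Session(
        aws_access_key_id="AKIAIOSFODNN7EXAMPLE",
        aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    )

    # Every call made with this session is attributed to the IAM user who owns
    # the key, and is allowed or denied by the policies attached to that user.
    iam = session.client("iam")
    print(iam.get_user())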

Take it for a spin - using an AWS API key Canarytoken

Using the AWS API key Canarytoken is as simple as can be. Simply make use of the free token server at http://canarytokens.org or use the private Canarytoken server built into your Canary console. Select the ‘AWS Keys’ token from the drop down list.



Enter an email and a token reminder (Remember: The email address is the one we will notify when the token is tripped, and the reminder will be attached to the alert. Choose a unique reminder, nothing sucks more than knowing a token is tripped, but being unsure where you left it). Then click on “Create my Canarytoken”.



You will notice that we arrange your credentials in the same way as the AWS console usually does, so you can get straight down to using (or testing) them. So let’s get to testing. Click “Download your AWS Creds” and save the file somewhere you will find it.

For our tests, we are going to use the AWS Commandline tool (if you don’t have it yet, head over to http://docs.aws.amazon.com/cli/latest/userguide/installing.html). Below is a simple bash script that will leverage the AWS command line tool to create a new user named TestMePlease using your new, almost-authentic AWS API keys.
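The script itself isn't reproduced in this text; as a rough stand-in, a minimal Python equivalent of the same idea using boto3 (rather than the AWS CLI) might look like this sketch:

    # Rough Python/boto3 equivalent of the test_aws_creds.sh idea described above:
    # attempt to create an IAM user called TestMePlease with the tokened keys.
    import sys
    import boto3

    access_key_id, secret_access_key = sys.argv[1], sys.argv[2]

    iam = boto3.client(
        "iam",
        aws_access_key_id=access_key_id,
        aws_secret_access_key=secret_access_key,
    )

    try:
        iam.create_user(UserName="TestMePlease")
    except Exception as exc:
        # The tokened key has no permissions, so AWS refuses the call, but the
        # attempt alone generates the CloudTrail event that trips the alert.
        print("API call attempted: %s" % exc)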

Simply go to your command line, navigate to the same location as the script and type ./test_aws_creds.sh <access_key_id> <secret_access_key>. If all went to plan, you should receive an alert notifying you that your AWS API key Canarytoken was used.

NB: Due to the way these alerts are handled (by Amazon) it can sometimes take up to 20 minutes for the alert to come through.

Waiting...waiting...waiting (0-20mins later). Ah we got it!


Check...it...out! This is what your AWS API key Canarytoken alert will look like, delivered by email. The email will contain some useful details such as User Agent, Source IP and a reminder of where you may have placed this Canarytoken (we always assumed you’re not going to use only one! Why would you? They are free!!).

The simple plan then should be: create a bunch of fake keys. Keep one on the CEO’s laptop (he will never use it, but the person who compromises him will). Keep one on your webserver (again, no reason for it to be used, except by the guy who pops a shell on that box), and so on.

Under the hood - steps to creating an AWS API key Canarytoken

The AWS API key Canarytoken makes use of a few AWS services to ensure that the Canarytoken is an actual AWS API key - indistinguishable from a real working AWS API key. This is important because we want to encourage attackers to have to use the key to find out how juicy it actually is - or isn’t. We also want this to be dead simple to use. Enter your details and click a button. If you want to see how the sausage is made, read on:


Creation - And on the 5th day…


The first service necessary for creating these AWS API key Canarytokens is an AWS Lambda that is triggered by an AWS API Gateway event. Let’s follow the diagram’s flow. Once you click the ‘Create my Canarytoken’ button, a GET request is sent to the AWS API Gateway. This request contains query parameters for the domain (of the Canarytokens server), the username (if we want to specify one, otherwise a random one is generated) and the actual Canarytoken that will be linked to the created AWS API key. This is where the free version and commercial versions diverge slightly.

Our free version of Canarytokens (canarytokens.org) does not allow you to specify your own username for the AWS API key Canarytoken. The domain of the Canarytoken server is used in conjunction with the Canarytoken to create the AWS user on the account. (This is still completely useful, because the only way an attacker is able to obtain the username tied to the token is to make an API call, and this call itself will trigger the alert). Our private Canary consoles enjoy a slightly different implementation. This uses an AWS DynamoDB database that links users to their tokens, allowing clients to specify what the username for the AWS user should be.

If the AWS API Gateway determines that sufficient information is included in the request, it triggers the lambda responsible for creating the AWS API key Canarytoken. This lambda creates a new user with no privileges on the AWS account, generates AWS API keys for that user and responds to the request with a secret access key and an access key id.
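A stripped-down sketch of such a creation lambda (not our actual implementation; the event shape assumes an API Gateway proxy integration and the naming scheme is illustrative):

    # Illustrative sketch of the creation lambda: make a permissionless IAM user
    # and hand back freshly minted keys for it. Not the production code.
    import json
    import boto3

    iam = boto3.client("iam")

    def handler(event, context):
        params = event.get("queryStringParameters") or {}
        domain = params["domain"]          # Canarytokens server domain
        token = params["canarytoken"]      # Canarytoken linked to this key
        username = params.get("username") or "%s@%s" % (token, domain)

        # New user with no policies attached: the keys authenticate,
        # but authorise nothing.
        iam.create_user(UserName=username)
        key = iam.create_access_key(UserName=username)["AccessKey"]

        return {
            "statusCode": 200,
            "body": json.dumps({
                "access_key_id": key["AccessKeyId"],
                "secret_access_key": key["SecretAccessKey"],
            }),
        }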


We should note that the newly created user has no permissions (to anything), so anyone with this AWS API key can’t do anything of importance. (Even if they did, it’s a user on our infrastructure, not yours!). Of course, before the attacker is able to find out how impotent her key is, she first has to use it, and this is when we catch her out (detection time!).

Detection - I see you! 

Now that the AWS API key has been created and returned to the user, let’s complete the loop and figure out when these AWS API keys are being used. The first service in our detection process, spoken about in our previous posts, is CloudTrail. CloudTrail is super useful when monitoring anything on an AWS account because it logs all important (but not all) API calls, recording the username, the keys used, the methods called, the user-agent information and a whole lot more.

We configure CloudTrail to send its logs to another AWS logging service known as CloudWatch. This service allows subscriptions and filtering rules to be applied. This means that if a condition in the logs from CloudTrail is met in the CloudWatch service, it will trigger whichever service you configure it to - in our case another AWS Lambda function. In pure AWS terms, we have created a subscription filter which will send logs that match the given filter to our chosen lambda.

For the AWS API key Canarytoken, we use a subscription filter such as

  • "FilterPattern": "{$.userIdentity.type = IAMUser}"

This filter will check the incoming logs from CloudTrail and only send on entries where the user identity type is IAMUser - as opposed to activity performed with root credentials, where the identity type is ‘Root’.
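Wiring the filter to the lambda is a single API call; roughly like this (the log group name and lambda ARN are placeholders, and the lambda must separately grant CloudWatch Logs permission to invoke it):

    # Sketch: subscribe a lambda to the CloudTrail log group, forwarding only
    # activity performed by IAM users. Names and ARNs here are placeholders.
    import boto3

    logs = boto3.client("logs")

    logs.put_subscription_filter(
        logGroupName="CloudTrail/DefaultLogGroup",
        filterName="canarytoken-iam-user-activity",
        filterPattern="{$.userIdentity.type = IAMUser}",
        destinationArn="arn:aws:lambda:us-east-1:123456789012:function:alert-lambda",
    )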

Alert - Danger Will Robinson, danger!

All that’s left now is for us to generate our alert. We employ an AWS Lambda (again) to help us with this. This lambda receives the full log of the attempted AWS API call and bundles it into a custom HTTP request that trips the Canarytoken. Our Canarytoken server receives the request with all this information and relays the alert to you with all the information formatted neatly.
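Sketched roughly, that alerting lambda might look like the following (the Canarytokens URL and alert fields are illustrative, not the real ones):

    # Sketch of the alerting lambda: unpack the CloudWatch Logs payload and relay
    # the interesting CloudTrail fields to the Canarytokens server.
    import base64
    import gzip
    import json
    import urllib.request

    CANARYTOKEN_URL = "https://canarytokens.org/SOME_TOKEN_ID"  # placeholder

    def handler(event, context):
        # CloudWatch Logs delivers matched events gzipped and base64-encoded.
        payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
        for log_event in json.loads(payload)["logEvents"]:
            record = json.loads(log_event["message"])   # one CloudTrail record
            alert = {
                "user": record["userIdentity"].get("userName"),
                "source_ip": record.get("sourceIPAddress"),
                "user_agent": record.get("userAgent"),
                "event_name": record.get("eventName"),
            }
            req = urllib.request.Request(
                CANARYTOKEN_URL,
                data=json.dumps(alert).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)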

Summary - TLDR;

Amazon Web Services is a massive collection of easily integratable services which enables companies of all sizes to build entire products and services with relative ease. This makes AWS API keys an attractive target for many attackers.

The AWS API key Canarytoken allows the creation of real AWS API keys which can be strewn around your environment. An attacker using these credentials will trigger an alert informing you of his presence (and other useful meta-information). It’s quick, simple, reliable and a high-quality indicator of badness.

Farseeing: a look at BeyondCorp

This is the third post in a series highlighting bits from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.


Introduction

In our BlackHat talk, "Fighting the Previous War", we showed how attacks against cloud services and cloud-native companies are still in their nascent stages of evolution. The number of known attacks against AWS is small, which is at odds with the huge number (and complexity) of services available. It's not a deep insight to argue that the number of classes of cloud specific attacks will rise.

However, the "previous war" doesn't just refer to cloud stuff. While our talk primarily dealt with cloud services, we also spent some time on another recent development, Google's BeyondCorp. In the end, the results weren't exciting enough to include fully in the talk and so we cut slides from the presentation, but the original slides are in the PDF linked above.

In this post we'll provide our view on what BeyondCorp-like infrastructure means for attackers, and how it'll affect their approaches.

What is BeyondCorp?

We start with a quick overview of BeyondCorp that strips out less important details (Google has a bunch of excellent BeyondCorp resources if you've never encountered it before.)

In an ossified corporate network, devices inside the perimeter are more trusted than devices outside the perimeter (e.g. they can access internal services which are not available to the public Internet). In addition, devices trying to access those services aren't subject to checks on the device (such as whether the device is known, or is fully patched).

In the aftermath of the 2009 Aurora attacks on Google, where attackers had access to internal systems once the boundary perimeter was breached, Google decided to implement a type of Zero Trust network architecture. The essence of the new architecture was that no trust was placed in the location of a client regardless of whether the client was located inside a Google campus or sitting at a Starbucks wifi. They called it BeyondCorp.

Under BeyondCorp, all devices are registered with Google beforehand and all access to services is brokered through a single Access Proxy called ÜberProxy.

This means that all Google's corporate applications can be accessed from any Internet-connected network, provided the device is known to Google and the user has the correct credentials (including MFA, if enabled.)

Let's walk through a quick example. Juliette is a Google engineer sitting in a Starbucks leeching their wifi, and wants to review a bug report on her laptop. From their documentation, it works something like this (we're glossing over a bunch of details; a toy sketch of the final authorization check follows the list):
  1. Juliette's laptop has a client certificate previously issued to her machine.
  2. She opens https://tickets.corp.google.com in her browser.
  3. The DNS response is a CNAME pointing to uberproxy.l.google.com (this is the Access Proxy). The hostname identifies the application.
  4. Her browser connects using HTTPS to uberproxy.l.google.com, and provides its client certificate. This identifies her device.
  5. She's prompted for credentials if needed (there's an SSO subsystem to handle this). This identifies her user.
  6. The proxy passes the application name, device identifier (taken from the client certificate), and credentials to the Access Control Engine (ACE).
  7. The ACE performs an authorization check to see whether the user is allowed to access the requested application from that device.
  8. The ACE has access to device inventory systems, and so can reason about device trust indicators such as:
    1. a device's patch level
    2. its trusted boot status
    3. when it was last scanned for security issues
    4. whether the user has logged in from this device previously
  9. If the ACE passes all checks, the access proxy allows the request to pass to the corporate application, otherwise the request fails.
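As promised above, here is a toy rendering of that final check (entirely illustrative and invented; the real ACE reasons over far more signals and policy than this):

    # Toy illustration of the Access Control Engine decision described above.
    # Entirely invented; shown only to make the flow concrete.
    from dataclasses import dataclass

    @dataclass
    class Device:
        known: bool
        patch_level: int
        trusted_boot: bool
        last_scan_days_ago: int

    @dataclass
    class AccessRequest:
        user: str
        app: str
        device: Device
        mfa_passed: bool

    REQUIRED_PATCH_LEVEL = {"tickets": 42, "payroll": 50}   # per-app policy, invented

    def authorize(req: AccessRequest) -> bool:
        # Location plays no part: only the user, the device and the application do.
        if not (req.device.known and req.mfa_passed):
            return False
        if not req.device.trusted_boot or req.device.last_scan_days_ago > 30:
            return False
        return req.device.patch_level >= REQUIRED_PATCH_LEVEL.get(req.app, 999)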
Google's architecture diagrams include more components than we've mentioned above (and the architecture changed between their first and most recent papers on BeyondCorp). But the essence is a proxy that can reason about device status and user trust. Note that it's determining whether a user may access a given application, not what they do within those applications.

One particularly interesting aspect of BeyondCorp is how Google supports a bunch of protocols (including RDP and SSH) through the same proxy, but we won't look at that today. (Another interesting aspect is that Google managed to migrate their network architecture without interruption, which is perhaps the biggest takeaway from their series of papers. It's an amazingly well-planned migration.)

This sucks! (For attackers)

For ne'er-do-wells, this model changes how they go about their business. 

Firstly, tying authorisation decisions to devices has a big limiting effect on credential phishing. A set of credentials is useless to an external attacker if the authorisation decision includes an assertion that the device has previously been used by this user. Impersonation attacks like this become much more personal, as they require device access in addition to credentials.

Secondly, even if a beachhead is established on an employee's machine, there's no flat network to laterally move across. All the attacker can see are the applications for which the victim account had been granted access. So application-level attacks become paramount in order to laterally move across accounts (and then services).

Thirdly, access is fleeting. The BeyondCorp model actively incorporates updated threat information, so that (for example), particular browser versions can be banned en masse if 0days are known to be floating around. 

Fourthly, persistence on end user devices is much harder. Google use verified boot on some of their devices, and BeyondCorp can take this into account. On verified boot devices, persistence is unlikely to take the form of BIOS or OS-level functionality (these are costly attacks with step changes across the fleet after discovery, making them poor candidates). Instead, higher level client-side attacks seem more likely.

Fifthly, in addition to application attacks, bugs in the Access Control Engine or mistakes in the policies come into play, but these must be attacked blind as there is no local version to deploy or examine.

Lastly, targeting becomes really important. It's not enough to spam random @target.com addresses with dancingpigs.exe, and focus once inside the network. There is no "inside the network"; at best you access someone's laptop, and can hit the same BeyondCorp apps as your victim.

A quick look at targeting

The lack of a perimeter is the defining characteristic of BeyondCorp, but that means anyone outside Google has a similar view to anyone inside Google, at least for the initial bits needed to bootstrap a connection.

We know all services are accessed through the ÜberProxy. In addition, every application gets a unique CNAME (in a few domains we've seen, like corp.google.com and googleplex.com).

DNS enumeration is a well-mapped and frequently-trod path, and effective at discovering corporate BeyondCorp applications. Pick a DNS enumeration tool (like subbrute), run it across the corp.google.com subdomain, and get 765 hostnames. Each maps to a Google corporate application. Here's a snippet from the output:
  • [...]
  • pitch.corp.google.com
  • pivot.corp.google.com
  • placer.corp.google.com
  • plan.corp.google.com
  • platform.corp.google.com
  • platinum.corp.google.com
  • plato.corp.google.com
  • pleiades.corp.google.com
  • plumeria.corp.google.com
  • [...]
But DNS isn't the only place to identify BeyondCorp sites. As is the fashion these days, Google is quite particular about publishing new TLS certificates in the Certificate Transparency logs. These include a bunch of hostnames in corp.google.com and googleplex.com. From these, more BeyondCorp applications were discovered.
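Certificate Transparency logs are public, so this sort of discovery takes only a few lines. For example, using crt.sh as one convenient front-end to the logs (a sketch):

    # Sketch: pull hostnames for a domain out of the Certificate Transparency
    # logs via crt.sh's JSON interface.
    import json
    import urllib.request

    def ct_hostnames(domain):
        url = "https://crt.sh/?q=%%25.%s&output=json" % domain   # %25 is a wildcard
        with urllib.request.urlopen(url) as resp:
            entries = json.load(resp)
        names = set()
        for entry in entries:
            for name in entry.get("name_value", "").splitlines():
                names.add(name.strip())
        return sorted(names)

    for host in ct_hostnames("corp.google.com"):
        print(host)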

Lastly, we scraped the websites of all the hostnames found to that point and found additional hostnames referenced in some of the pages and redirects. For fun, we piped them into PhantomJS and screencapped all the sites for quick review.

Results? We don't need no stinking results!


The end result of this little project was a few thousand screencaps of login screens:

[Screencaps: quite a few errors showing “my device isn’t allowed access to this service”, the occasional straight 403, and so, so many login screens.]
Results were not exciting. The only site that was open to the Internet was a Cafe booking site on one of Google's campuses.

However, a few weeks ago a high school student posted the story of his bug bounty which appeared to involve an ÜberProxy misconfiguration. The BeyondCorp model explicitly centralises security and funnels traffic through proxy chokepoints to ease authN and authZ decisions. Like any centralisation, it brings savings but there is also the risk of a single issue affecting all applications behind the proxy. The takeaway is that mistakes can (and will) happen. 


So where does this leave attackers?

By no means is this the death of remote attacks, but it shifts focus from basic phishing attacks and will force attackers into more sophisticated plays. These will include more narrow targeting (of the BeyondCorp infrastructure in particular, or of specific endusers with the required application access), and change how persistence on endpoints is achieved. Application persistence increases in importance, as endpoint access becomes more fleeting.

With all this said, it's unlikely an attacker will encounter a BeyondCorp environment in the near future, unless they're targeting Google. There are a handful of commercial solutions which claim BeyondCorp-like functionality, but none matches the thoroughness of Google's approach. For now, these BeyondCorp attack patterns remain untested.

Disrupting AWS S3 Logging

This post continues the series of highlights from our recent BlackHat USA 2017 talk. An index of all the posts in the series is here.


Introduction

Before today's public clouds, best practice was to store logs separately from the host that generated them. If the host was compromised, the logs stored off it would have a better chance of being preserved.

At a cloud provider like AWS, a storage service within an account holds your activity logs. A sufficiently thorough compromise of an account could very well lead to disrupted logging and heightened pain for IR teams. It's analogous to logs stored on a single compromised machine: once access restrictions to the logs are overcome, logs can be tampered with and removed. In AWS, however, removing and editing logs looks different to wiping logs with rm -rf.

In AWS jargon, the logs originate from a service called CloudTrail. A Trail is created which delivers the current batch of activity logs in a file to a pre-defined S3 bucket at variable intervals. (Logs can take up to 20 mins to be delivered).

CloudTrail logs are often collected in the hope that should a breach be discovered, there will be a useful audit trail in the logs. The logs are the only public record of what happened while the attacker had access to an account, and form the basis of most AWS defences. If you haven't enabled them on your account, stop reading now and do your future self a favour.

Prior work

In his blog post, Daniel Grzelak explored several fun consequences of the fact that logs are stored in S3. For example, he showed that when a file lands in an S3 bucket, it triggers an event. A function, or Lambda in AWS terms, can be made to listen for this event and delete logs as soon as they arrive. The logs continue to arrive as normal (except for the logs evaporating on arrival.)
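A minimal sketch of that deletion lambda, subscribed to the bucket's ObjectCreated events (this mirrors the idea rather than the original code):

    # Minimal sketch of the log-evaporation idea: a lambda subscribed to
    # s3:ObjectCreated:* on the CloudTrail bucket deletes each log file as it lands.
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            s3.delete_object(Bucket=bucket, Key=key)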

Flow of automatic log deletion

Versions, lambdas and digests

Adding "versioning" to S3 buckets (which keeps older copies of files once they are overwritten) won't help, if an attacker can grant permission to delete the older copies. Versioned buckets do have the option of having versioned items protected from deletion by multi-factor auth ("MFA-delete"). Unfortunately it seems like only the AWS account's root user (as the sole owner all S3 buckets in an account) can configure this, making it less easy to enable in typical setups where root access is tightly limited.

In any case, an empty logs bucket will inevitably raise the alarm when someone comes looking for logs. This leaves the attacker with a pressing question: how do we erase our traces but leave the rest of the logs available and readable? The quick answer is that we can modify the lambda to check every log file and delete any dirty log entries before overwriting them with a sanitised log file.

But a slight twist is needed: when modifying logs, the lambda itself generates more activity which in turn adds more dirty entries to the logs. By adding a unique tag to the names of the pieces of the log-sanitiser (such as the names of the policies, roles and lambdas), these can be deleted like any other dirty log entries so that the log-sanitiser eats its own trail. In this code snippet, any role, lambda or policy that includes thinkst_6ae655cf will be kept out of the logs.
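The snippet isn't reproduced in this text; the sanitising step amounts to something like the following rough sketch (assuming CloudTrail's usual gzipped-JSON log layout, with the dirty markers invented for illustration):

    # Rough sketch of the sanitising step: drop any CloudTrail record that
    # mentions the attacker's dirt or the sanitiser's own tagged components,
    # then write the cleaned file back over the original.
    import gzip
    import json
    import boto3

    DIRTY_MARKERS = ("thinkst_6ae655cf", "attacker-role-name")   # tag + example dirt

    s3 = boto3.client("s3")

    def sanitise(bucket, key):
        raw = gzip.decompress(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
        log = json.loads(raw)
        log["Records"] = [
            record for record in log["Records"]
            if not any(marker in json.dumps(record) for marker in DIRTY_MARKERS)
        ]
        s3.put_object(Bucket=bucket, Key=key,
                      Body=gzip.compress(json.dumps(log).encode()))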

That would seem to present a complete solution, except that AWS Cloudtrail also offers log validation (aimed specifically at mitigating silent changes to logs after delivery). At regular intervals, the log trail delivers a (signed) digest file that attests to the contents of all the log files delivered in the past interval. If a log file covered by the digest changes, that digest file validation fails.

A slew of digest files

At first glance this stops our modification attack in its tracks; our lambda modified the log after delivery, but the digest was computed on the contents prior to our changes. So the contents and the digest won't match.

Also covered by each digest file is the previous digest file. This creates a chain of log validation starting at the present and going back up the chain into the past. If the previous digest file has been modified or is missing, the next digest file validation will fail (but subsequent digests will be valid.) The intent behind this is clear: log tampering should cause the AWS command line log validation to show an error.

Chain of digests and files they cover
Contents of a digest file



It would seem that one option is to simply remove digest files, but S3 protects them and prevents deletion of files that are part of an unbroken digest chain.

There's an important caveat to be aware of though: when log validation is stopped and started on a Trail (as opposed to stopping and starting the logging itself), the log validation chain is broken in an interesting way. The next digest file that is delivered doesn't refer to the previous digest file, since validation was stopped and started. Instead, the next digest file references null as its previous file, as if it's a new digest chain starting afresh.
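Toggling validation is one API call in each direction, for example with boto3 (the trail name is a placeholder):

    # Sketch: break the digest chain by stopping and starting log file validation
    # on the trail. Logging itself is never interrupted.
    import boto3

    cloudtrail = boto3.client("cloudtrail")

    cloudtrail.update_trail(Name="my-trail", EnableLogFileValidation=False)
    # ... remove or alter the now-unreferenced digest files ...
    cloudtrail.update_trail(Name="my-trail", EnableLogFileValidation=True)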

Digest file (red) that can be deleted following a stop-start
In the diagram above, after the log files in red were altered, log validation was stopped and started. This broke the link between digest 1 and digest 2.

Altered logs, successful validation

We said that S3 prevented digest file deletion on unbroken chains. However, older digest files can be removed so long as no other file refers to them. That means we can delete digest 1, then delete digest 0.

What this means is that on the previous log validation chain, we can now delete the latest digest file without failing any digest log validation. The log validation will start at the most recent chain, and move back up. When the validation encounters the first item on the previous chain, it simply moves on to the latest available item of the previous chain. (There may be a note about no log files being delivered for a period, but this is the same message that appears during periods when no log files were genuinely delivered.)

No validity complaints about missing digest files

And now?

It's easy to imagine that log validation is simply included in automated system health-checks; so long as it doesn't fail, no one will be verifying logs. Until they're needed, of course, at which point the logs could have been changed without validation producing an error condition.

The signature of this attack is that validation was stopped and started (rather than logging being stopped and started). It underscores the importance of alerting on CloudTrail updates, even when logging itself is never interrupted. (One way would be to alert on UpdateTrail events using the AWS CloudWatch service.) After even a single validation stop-and-start event, it is not safe to assume that the AWS CLI tool reporting that all logs validate means that the logs haven't been tampered with. The log validation should be especially suspect if there are breaks in the digest validation chain, which would have to be manually verified.
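As a rough sketch of that alerting suggestion (the log group, namespace and SNS topic names are placeholders):

    # Sketch: a CloudWatch Logs metric filter on UpdateTrail events, with an alarm
    # on the resulting metric. Names and the SNS topic ARN are placeholders.
    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    logs.put_metric_filter(
        logGroupName="CloudTrail/DefaultLogGroup",
        filterName="cloudtrail-updatetrail",
        filterPattern='{ $.eventName = "UpdateTrail" }',
        metricTransformations=[{
            "metricName": "UpdateTrailEvents",
            "metricNamespace": "Security",
            "metricValue": "1",
        }],
    )

    cloudwatch.put_metric_alarm(
        AlarmName="cloudtrail-updated",
        Namespace="Security",
        MetricName="UpdateTrailEvents",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
    )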

Much like in the case of logs stored on a single compromised host, logs should be interpreted with care when we are dealing with compromised AWS accounts that had the power to alter them.