Thinkst in Santa Clara
Last week Haroon and I found ourselves at the 28th USENIX Security Symposium in balmy Santa Clara. We made the trip from Vegas for Haroon’s invited talk at the main event, and I took the opportunity to present at one of the side workshops (HotSec). This is a short recap of our USENIX experience.
Neither Haroon nor I had attended a USENIX event previously, despite over 20 Black Hat USAs between the two of us. What’s worse, we both used to read ;login: regularly, and the research coming out of USENIX Security is typically thorough. When this opportunity presented itself, we couldn’t turn it down.
Drawing comparisons between USENIX and Black Hat/DEF CON is a bit unfair as they have different goals entirely, but given the consecutive weeks they run on, I think it’s ok. Compared to Black Hat/DEF CON, obvious differences are the smaller scale (there were fewer speaking rooms and smaller audiences), primarily academic focus, and no side events that we saw. (Black Hat and DEF CON usually have a ton of parties and events in parallel with the main conference.) USENIX is billed as a place for industry and academia to meet, but most of the talks were academic. I’ll come back to this shortly.
Event organisation was slick, and the venue was a welcome respite from the Vegas casinos. No one was screwing around with the WiFi (at least, not detectably…) and the AV just worked for the most part. Session chairs played their role admirably in corralling speakers and audiences, keeping close track of time and chiming in with questions when the audience was quiet.
USENIX was much more sedate than either Black Hat or DEF CON. No Expo area (the handful of sponsors each had a table in a small foyer), no special effects, no massive signs, no hacker cosplay, no one shouting in the corridors, no visible media, no gags or gimmicks. The list goes on. It just reinforces how much of an outlier the Vegas events are.
Haroon’s talk used our Canarytokens as a lens to explore how defensive teams need to embrace quicker, hackier solutions to win. The central thesis is that the current battlefield is too fluid, which favours the lighter, more agile M.O. of attackers. We’ll publish more details on this in the coming weeks.
My talk was a brief exposition of our take on honeypots as breach detectors rather than observers, with the experience of running Canary to back it up. In the next few days we’ll publish another post here delving into this.
Turning to the talks, virtually all of them were densely packed with information. The acceptance rate was something like 15% (115 from 740 submissions), and (as is typical for academic conferences) authors submitted completed works. To rise above the pack, papers must cover lots of ground. Accepted authors only have a 20-minute presentation slot to talk about their work and take questions. It means the authors fly through the highlights of their research, frequently leaving out large chunks of content and deferring to the paper in order to make the time limit. That’s at odds with Black Hat’s 50-minute slots, which usually include a weighty background section (I recall us having to fill 75 minutes at Black Hat at one point.)
Abbreviated talks also mean that sometimes the speakers just have to assume the audience has a background in their topic; there’s simply not enough time to cover background plus what they did. In those talks you can expect to be reading the papers, as you can quickly be left behind.
In contrast to Black Hat’s many parallel tracks with breaks between every talk, USENIX ran three parallel tracks with up to five 20-minute talks in a row. This meant that you could potentially see 53 talks if you sat in one of the main speaking rooms for the three days. It’s a firehose of new work, and it was great.
Academia dominated the talks. Of the 36 talks I saw, just two were purely from industry (both were from the Google Chrome folks). I suspect the completed paper requirement serves as a natural barrier against submissions from industry. A completed paper is the output of regular academic work, and finding a publication venue afterwards is a stressful but comparatively minor part.
For industry folks, a research paper isn’t a natural goal; they’d need to set aside time for this side project. It’s easier to hit an earlier result (like a bug or technique) and submit to an industry conference. Since career progression isn’t tied to paper publication, there’s much less incentive to write one.
In addition, there are also very different standards for the talks. It’s clear that merely finding a bug or figuring out a way to find bugs isn’t going to get your paper accepted. Virtually every paper had extensive surveys or evaluations. At USENIX there’s a big push towards gathering data, either in the form of measuring prevalence or in designing tests for evaluating new defences, and then making that data available for others to analyse. Collecting data is a large part of the research effort. Contrast that with a Black Hat talk which describes (say) a new heap manipulation technique demonstrated with a specific bug. The bug’s prevalence will get a cursory slide or two, but talks are accepted if the exploitation techniques are novel.
As for the content itself, it was a deluge. Speculative execution attacks had a bunch of attention, with new variants being demonstrated as well as a deepening of the fundamental understanding of the attacks. One of these highlighted that not only is execution speculative, but so are other operations like memory loads. The authors demonstrated a speculative load attack, in which an attacker can leak physical memory address mappings. This category of research is now squarely being led by the academy.
There was a talk on threats in the NPM ecosystem, showing how the average NPM package trusts 79 other packages and 39 maintainers. That’s food for thought when worrying about supply chain attacks and software provenance. The authors also showed how a tiny group of 100 maintainers appears in 50% of the dependencies (i.e. a small group to subvert, to affect a huge swathe of dependents). A later talk on In-Toto, a software supply chain protection tool, provides some limited hope for finding our way out of the supply chain mess.
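The “trusts 79 other packages” figure is a transitive measure: installing one package implicitly trusts everything reachable through its dependency graph. A toy sketch of how such a count could be computed (the package names and graph below are invented, not the talk’s actual dataset or methodology):

```python
from collections import deque

# Toy dependency graph: package -> direct dependencies (invented names)
deps = {
    "app": ["left-pad-ish", "http-lib"],
    "left-pad-ish": ["string-utils"],
    "http-lib": ["string-utils", "tls-shim"],
    "string-utils": [],
    "tls-shim": [],
}

def transitively_trusted(pkg):
    """Return every package reachable from pkg's dependencies (BFS)."""
    seen, queue = set(), deque(deps[pkg])
    while queue:
        d = queue.popleft()
        if d not in seen:
            seen.add(d)
            queue.extend(deps.get(d, []))
    return seen

# Installing "app" means implicitly trusting 4 other packages,
# even though it declares only 2 direct dependencies.
print(len(transitively_trusted("app")))
```

Run the same closure over maintainer ownership instead of packages and you get the “39 maintainers” style of metric; the long tail of shared utility packages is what makes a small set of maintainers show up in so many dependency trees.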
I enjoyed the ERIM talk, which claims a way to achieve in-process memory isolation. This could be used to let a process compute cryptographic results over a key stored in the process’ own memory, but still prevent the process from reading the memory. Kinda wild to think about.
There was one honeypot-related talk I saw. The authors realised that honeypot smart contracts are a thing; apparently scammers deploy contracts which appear to have flaws in them, prompting folks looking for smart contract vulnerabilities to send Ether to the contracts in the hopes of exploiting the contracts. However the flaws are mirages; it’s an example of a scam that takes advantage of other scammers.
There were further talks on crypto (cryptography, cryptographic attacks, and cryptocurrencies), hardware, side-channels galore, web stuff, and much much more. A good portion dealt with building better defences, which is in further contrast to Black Hat’s primarily offence-oriented talks.
We hope to return to USENIX soon; while the time away was significant, it was well worth it.
PS. Seeing Rik Farrow in person was a delight, exactly what you’d imagine he might look like. Sandals, Hawaiian shirt and ponytail!