Panel on Cyberwarfare and Cyberattacks at 9th Circuit Judicial Conference

July 20th, 2015 by kc

I had the honor of contributing to a panel on “Cyberwarfare and cyberattacks: protecting ourselves within existing limitations” at this year’s 9th Circuit Judicial Conference. The panel moderator was Hon. Thomas M. Hardiman, and the other panelists were Professor Peter Cowhey of UCSD’s School of Global Policy and Strategy, and Professor and Lt. Col. Shane R. Reeves of the U.S. Military Academy at West Point. Lt. Col. Reeves gave a brief primer on the framework of the Law of Armed Conflict, distinguished an act of cyberwar from a cyberattack, and described the implications for political and legal constraints on governmental and private sector responses. Professor Cowhey followed with a perspective on how economic forces also constrain cybersecurity preparedness and response, drawing comparisons with other industries for which the cost of security technology is perceived to exceed its benefit by those who must invest in its deployment. I used a visualization of an Internet-wide cybersecurity event to illustrate the technical, economic, and legal dimensions of the ecosystem that render the fundamental vulnerabilities of today’s Internet infrastructure so persistent and pernicious. A few people said I talked too fast for them to understand all the points I was trying to make, so I thought I should post the notes I used during my panel remarks. (My remarks borrowed heavily from two of Dan Geer’s essays: Cybersecurity and National Policy (2010) and his more recent Cybersecurity as Realpolitik (video), both of which I highly recommend.)

After explaining the basic concept of a botnet, I showed a video derived from CAIDA’s analysis of a botnet scanning the entire IPv4 address space (discovered and comprehensively analyzed by Alberto Dainotti and Alistair King). I gave a (too) quick rundown of the technological, economic, and legal circumstances of the Internet ecosystem that facilitate the deployment of botnets and other threats to networked critical infrastructure.
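For a sense of what the simplest layer of such an analysis involves, here is a hypothetical sketch (the capture file name and the threshold are my illustrative assumptions, not CAIDA’s actual pipeline) of rudimentary scan detection over traffic captured at a network telescope, i.e., a block of unused address space where every arriving packet is by definition unsolicited:

```python
# Hypothetical sketch of rudimentary scan detection over darknet traffic;
# not CAIDA's actual analysis pipeline. Uses the third-party scapy library
# to read a packet capture. File name and threshold are illustrative.
from collections import defaultdict
from scapy.all import IP, PcapReader

targets_per_source = defaultdict(set)  # source IP -> set of probed destinations

for pkt in PcapReader("telescope_sample.pcap"):  # hypothetical capture file
    if IP in pkt:
        targets_per_source[pkt[IP].src].add(pkt[IP].dst)

# A source probing many distinct addresses in space no one legitimately uses
# is almost certainly scanning; 100 is an arbitrary cutoff for illustration.
scanners = [src for src, dsts in targets_per_source.items() if len(dsts) > 100]
print(f"{len(scanners)} suspected scanning sources")
```

CAIDA’s actual analysis was of course far more involved, but the core idea is the same: in darknet traffic, scanners stand out starkly.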

First, the underlying technology of the Internet, which is insecure by design. I confessed the stunning technological conditions in which we find ourselves: we are all now utterly dependent on an Internet protocol architecture that was originally designed for use in a trusted environment. There are well-known and long-exploited vulnerabilities inherent to the most fundamental layers of the architecture (addressing, naming, and routing) for which technological solutions have been developed but have failed to gain traction. Briefly, in turn: (1) Addressing: The source address in an Internet packet can be — and when part of a denial-of-service attack often is — forged (aka spoofed), thus increasing the cost and complexity of attack attribution, often prohibitively. (2) Naming: The Internet naming protocol (the DNS), which converts hostnames like www.google.com to destination IP addresses understood by routers, typically (i.e., for the vast majority of the hundreds of millions of domain names on the Internet) executes this mapping transaction without any cryptographic authentication. Interference with this mapping for malice or profit is a common vector of attack. For example, a malicious actor might arrange for a fake response to a query for a hostname’s IP address, and then capture passwords and other private information typed into a fraudulent web site hosted at that incorrect IP address. (3) Routing: The routing layer (BGP), which propagates Internet topology information and traffic transit policies among the tens of thousands of independent autonomous systems on the Internet, likewise uses no cryptographic authentication to secure the integrity of this information exchange.
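To make the first two of these vulnerabilities concrete, below is a minimal sketch using the third-party scapy packet library; the addresses are RFC 5737 documentation addresses, not real hosts, and nothing here should be read as anyone’s production tooling. It illustrates that the source address of an IP packet is simply a field the sender fills in, and that a classic DNS query carries no cryptographic protection at all:

```python
# Minimal illustrative sketch (requires the third-party scapy library):
# an IP packet with a forged (spoofed) source address carrying a plain,
# unauthenticated DNS query. All addresses are documentation addresses.
from scapy.all import DNS, DNSQR, IP, UDP

pkt = (
    IP(src="203.0.113.99",   # forged source: just a field the sender sets;
       dst="198.51.100.53")  # nothing in IP itself verifies it
    / UDP(sport=33333, dport=53)
    / DNS(rd=1, qd=DNSQR(qname="www.google.com"))  # classic DNS: no crypto;
)                                                  # first plausible answer wins

pkt.show()  # inspect the forged packet; scapy's send(pkt) would transmit it
            # (requires privileges; networks deploying BCP 38 source-address
            # validation would drop the spoofed packet at the edge)
```

Absent DNSSEC, a querier has no way to verify that the answer it accepts came from the legitimate nameserver rather than a faster-responding attacker; the routing layer suffers analogously, in that a BGP speaker will by default believe whatever routes its neighbors announce.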

I ended this confession on a positive note, tempered by one legal problem. The U.S. federal research and development community is well aware of this situation (and not only because they created it), and for the last decade or so the National Science Foundation has funded academic research on developing new, more secure Internet architectures. Having been involved in one such project (NDN), I have come to believe that inventing a new global Internet architecture today will require special legal protections against the modern IT patent war zone. As technically ambitious as such a project is, its biggest obstacle is legal, not technical.

Second: the political economy, now inextricably tied to the technological limitations of the architecture. To wit, the technological solutions alluded to above have failed to gain traction due to the political and economic constraints of real-world deployment. The incentive to retrofit layers of security, which add cost and complexity, onto an insecure architecture is typically not held by those who must finance and manage the retrofit. This incentive misalignment runs the spectrum, from the wealthiest financial institutions in the world, to electronic device manufacturers, to home users. Specifically, banks do not know how to model the threat of an insecure routing system, and do not have good data on whether others, or indeed they themselves, are harmed by routing system exploits, and thus are not incented to invest (or at least not to invest first) in available but imperfect sociotechnical solutions. Elsewhere in the ecosystem, information technology vendors make low-cost home router devices for sale at retail stores; these devices ship with five- to six-year-old Linux operating systems that have never been patched, never will be, and have no upgrade path. There is no legal framework to support a product recall when a vulnerability is discovered that could allow a malicious actor to easily take over and disable millions of such home routers, potentially breaking Internet connectivity to all homes hosting them. And speaking of the home, it hosts the most interesting data point on misaligned incentives: home users are not going to spend time and money to find out whether their PC is hosting malicious software intended to harm other users, if that software has zero impact on their own use of the PC.

Last but not least, the legal issues, of which I will selectively mention three. (Dan Geer has a more comprehensive analysis.) First, the incentive to write malicious software to compromise operating systems is highest in an ecosystem with near-monopoly control of the operating system market. The desktop operating system monopoly was found to be illegally maintained in a U.S. federal antitrust case fifteen years ago, so it is fair to call this an (il)legal circumstance that has made us less secure. Second, even if responsible companies want to find security vulnerabilities in software they purchase today, typically they cannot do so, due to the proprietary software licenses they agree to upon installing the software. That’s a legal instrument that makes us less secure. Third, the producer of the software has no incentive to find security vulnerabilities, because that same proprietary license shields it from any liability for anything bad that may happen when we use the software. That’s another legal instrument that makes us less secure. If building codes were like this, great harm would come. Cyberspace “building codes” are essentially non-existent, and their absence induces great, if immeasurable, economic harm. Which brings us to another issue: measurement.

Dan Geer provided an excellent analysis of how the Centers for Disease Control and Prevention (CDC) has managed to accomplish in the physical world what we can only aspire to in the digital realm. He points out that the CDC is fundamentally effective due to three capabilities: (1) mandatory reporting of communicable diseases (a legal construct); (2) stored data and data-analytic skill to distinguish a statistical anomaly from an outbreak; and (3) deployable teams to take charge of remote threats (i.e., capitalized resources to throw at problems in real time). Of these, he notes, the first is the most fundamental. You have privacy with your medical provider until evidence arises that triggers a regulation on mandatory reporting of communicable disease conditions, which is not only federal law but part of public health law in all fifty states. Most states now require mandatory reporting of one type of cybersecurity compromise, in the form of data breach laws. Aviation has similar reporting requirements for accidents, as well as broad industry acceptance of voluntary reporting of near misses to help improve aviation safety.

Dan’s view, which I share, is that we need a combination of mandatory reporting for events above a certain threshold and a voluntary framework for reporting events below that threshold. Of course, that leaves open a lot of questions that we need to begin to address. First, there is a definitional obstacle to measuring security failures. Engineers define security operationally (a system is secure if it continues operating according to specification even while under attack), while humans and policymakers tend to define security failures in terms of outcomes: observable harms to users or enterprises. Translating between the two domains is not a solved, or even well-articulated, problem. Second, the government does not have a stellar record in accountable and transparent data-sharing practices related to cybersecurity, and for many good reasons: it is an intractably complex, and genuinely hard, space in which to operate. There are no easy solutions here. We have seen efforts to promote voluntary data sharing for years (here is the effort du jour), but thus far they have not been effective. (Although it is fair to ask: how would we know?) Nonetheless, I predict industry-wide reporting is where we will ultimately have to go to gain any policy-informing understanding of the nature and scope of cybersecurity attacks against the U.S. private sector.

[To the judicial audience, I noted] I know that this audience doesn’t just care about justice; you make it. But you don’t get justice without accountability. You don’t get accountability without transparency. You don’t get transparency without some idea of what metrics to use and what data to share. And in a world where there is imaginable if not tremendous risk in sharing data at all, you won’t get data sharing without mandatory reporting requirements. Because competing interests do not share data on their own weaknesses without an imperative. That reality is not complex at all.


Other related material from the conference is available here.
