Archive for the 'Policy' Category

CRA Congressional visit to Washington D.C.

Tuesday, September 27th, 2016 by kc

As part of a Computing Research Association (CRA) effort to introduce policymakers to the contributions and power of IT research for the nation and the world, this month I had the honor of visiting with the offices of four U.S. senators and a U.S. Representative.

Internet-specific topics I discussed included the importance of scientific measurement infrastructure to support empirical network and security research, broadband policy, and Internet governance.

We left them with a terrific infographic from the National Academy study “Continuing Innovation in Information Technology”, which shows the economic impact of different areas of fundamental IT research. The 2-page flyer and the whole National Academy report, Depicting Innovation in Information Technology, are available on the National Academies of Sciences, Engineering, and Medicine Computer Science and Telecommunications Board (CSTB) site.
Continuing Innovation in Information Technology

Even with many folks in Congress having a higher priority of passing a budget and getting back home to their districts to prepare for elections, all the staffers were gracious and genuinely interested in our field. (Who wouldn’t be? 😉 )

Kudos to the Computing Research Association for providing a wonderful opportunity to engage with policy folks.

NSF WATCH series talk: Mapping Internet Interdomain Congestion

Friday, August 26th, 2016 by kc

Last week I gave a talk at NSF’s 39th Washington Area Trustworthy Computing Hour (WATCH) seminar series on CAIDA’s efforts to map Internet interdomain congestion. A recorded webcast of the talk is available.


We used the Ark infrastructure to support an ambitious collaboration with MIT to map the rich mesh of interconnection in the Internet, with a focus on congestion induced by evolving peering and traffic management practices of CDNs and access ISPs, including methods to detect and localize the congestion to specific points in networks. We undertook several studies to pursue two dimensions of this challenge. First, we developed methods and tools to identify interconnection borders, and in some cases their physical locations, from comprehensive Internet topology measurements taken from many edge vantage points. Then, we developed and deployed scalable performance measurement tools to observe performance at thousands of interconnections, algorithms to mine the resulting data for evidence of persistent congestion, and a system to visualize the results. We also produced other related data collections and analyses to enable evaluation of these measurements in the larger context of the evolving ecosystem: quantifying a given network service provider’s global routing footprint, and business-related classifications of networks. In parallel, we examined the peering ecosystem from an economic perspective, exploring fundamental weaknesses and systemic problems of the currently deployed economic framework of Internet interconnection that will continue to cause peering disputes between ASes.

The slides presented are posted on the CAIDA website: Mapping Internet Interdomain Congestion

CAIDA’s 2015 Annual Report

Tuesday, July 19th, 2016 by kc

[Executive summary and link below]

The CAIDA annual report summarizes CAIDA’s activities for 2015, in the areas of research, infrastructure, data collection and analysis. Our research projects span Internet topology, routing, security, economics, future Internet architectures, and policy. Our infrastructure, software development, and data sharing activities support measurement-based Internet research, both at CAIDA and around the world, with focus on the health and integrity of the global Internet ecosystem. The executive summary is excerpted below:

Mapping the Internet. We continued to pursue Internet cartography, improving our IPv4 and IPv6 topology mapping capabilities using our expanding and extensible Ark measurement infrastructure. We improved the accuracy and sophistication of our topology annotation capabilities, including classification of ISPs and their business relationships. Using our evolving IP address alias resolution measurement system, we collected, curated, and released another Internet Topology Data Kit (ITDK).

Mapping Interconnection Connectivity and Congestion.
We used the Ark infrastructure to support an ambitious collaboration with MIT to map the rich mesh of interconnection in the Internet, with a focus on congestion induced by evolving peering and traffic management practices of CDNs and access ISPs, including methods to detect and localize the congestion to specific points in networks. We undertook several studies to pursue different dimensions of this challenge: identification of interconnection borders from comprehensive measurements of the global Internet topology; identification of the actual physical location (facility) of an interconnection in specific circumstances; and mapping observed evidence of congestion at points of interconnection. We continued producing other related data collection and analysis to enable evaluation of these measurements in the larger context of the evolving ecosystem: quantifying a given ISP’s global routing footprint; classification of autonomous systems (ASes) according to business type; and mapping ASes to their owning organizations. In parallel, we examined the peering ecosystem from an economic perspective, exploring fundamental weaknesses and systemic problems of the currently deployed economic framework of Internet interconnection that will continue to cause peering disputes between ASes.

Monitoring Global Internet Security and Stability. We conduct other global monitoring projects, which focus on security and stability aspects of the global Internet: traffic interception events (hijacks), macroscopic outages, and network filtering of spoofed packets. Each of these projects leverages the existing Ark infrastructure, but each has also required new measurement, data aggregation, and analysis tools and infrastructure, now at various stages of maturity. We were tremendously excited to finally finish and release BGPstream, a software framework for processing large amounts of historical and live BGP measurement data. BGPstream serves as one of several data analysis components of our outage-detection monitoring infrastructure, a prototype of which was operating at the end of the year. We published four other papers that leverage the results of Internet scanning and other unsolicited traffic to infer macroscopic properties of the Internet.

Future Internet Architectures. The current TCP/IP architecture is showing its age, and the slow uptake of its ostensible upgrade, IPv6, has inspired NSF and other research funding agencies around the world to invest in research on entirely new Internet architectures. We continue to help launch this moonshot from several angles — routing, security, testbed, management — while also pursuing and publishing results of six empirical studies of IPv6 deployment and evolution.

Public Policy. Our final research thrust is public policy, an area that expanded in 2015, due to requests from policymakers for empirical research results or guidance to inform industry tussles and telecommunication policies. Most notably, the FCC and AT&T selected CAIDA to be the Independent Measurement Expert in the context of the AT&T/DirecTV merger, which turned out to be as much of a challenge as it was an honor. We also published three position papers, each aimed at optimizing different public policy outcomes in the face of a rapidly evolving information and communication technology landscape. We contributed to the development of frameworks for ethical assessment of Internet measurement research methods.

Our infrastructure operations activities also grew this year. We continued to operate active and passive measurement infrastructure with visibility into global Internet behavior, and associated software tools that facilitate network research and security vulnerability analysis. In addition to BGPstream, we expanded our infrastructure activities to include a client-server system for allowing measurement of compliance with BCP38 (ingress filtering best practices) across government, research, and commercial networks, and analysis of resulting data in support of compliance efforts. Our 2014 efforts to expand data sharing by making older topology and some traffic data sets public have dramatically increased use of our data, as reflected in our data sharing statistics. In addition, we were happy to help launch DHS’ new IMPACT data sharing initiative toward the end of the year.

Finally, as always, we engaged in a variety of tool development and outreach activities, including maintaining web sites, publishing 27 peer-reviewed papers, 3 technical reports, 3 workshop reports, 33 presentations, and 14 blog entries, and hosting 5 workshops. This report summarizes the status of our activities; details about our research are available in papers, presentations, and interactive resources on our web sites. We also provide listings and links to software tools and data sets shared, and statistics reflecting their usage. Finally, we offer a “CAIDA in numbers” section: statistics on our performance, financial reporting, and supporting resources, including visiting scholars and students, and all funding sources.

For the full 2015 annual report, see

Report from the 2nd NDN Community Meeting (NDNcomm 2015)

Tuesday, November 10th, 2015 by kc

The report for the Second NDN Community Meeting (NDNcomm 2015) is available online now. The meeting, held at UCLA in Los Angeles, California on September 28-29, 2015, provided a platform for attendees from 63 institutions across 13 countries to exchange recent NDN research and development results, to debate existing and proposed functionality in NDN forwarding, routing, and security, and to provide feedback into the NDN architecture design evolution.

[The workshop was partially supported by the National Science Foundation (CNS-1345286, CNS-1345318, and CNS-1457074). We thank the NDNcomm Program Committee members for putting together an excellent program, and all participants for their insights and feedback at the workshop.]

Recent papers on policy

Wednesday, October 21st, 2015 by kc

We recently posted two papers on policy that are worth highlighting:

Anchoring policy development around stable points: an approach to regulating the co-evolving ICT ecosystem, published in Telecommunications Policy, Aug 2015.


The daunting pace of innovation in the information and communications technology (ICT) landscape, a landscape of technology and business structure, is a well-known but under-appreciated reality. In contrast, the rate of policy and regulatory innovation is much slower, partly due to its inherently more deliberative character. We describe this disparity in terms of the natural rates of change in different parts of the ecosystem, and examine why it has impeded attempts to impose effective regulation on the telecommunications industry. We explain why a recent movement to reduce this disparity by increasing the pace of regulation – adaptive regulation – faces five obstacles that may hinder its feasibility in the ICT ecosystem. As a means to achieve more sustainable regulatory frameworks for ICT industries, we introduce an approach based on finding stable points in the system architecture. We explore the origin and role of these stable points in a rapidly evolving system, and argue that they can provide a means to support development of policies, including adaptive regulation approaches, that are more likely to survive the rapid pace of evolution in technology.

Full paper available on the CAIDA website.
Accompanying slides are also available.

Adding Enhanced Services to the Internet: Lessons from History
Presented at the Telecommunications Policy Research Conference (TPRC), Sep 2015.


We revisit the last 35 years of history related to the design and specification of Quality of Service (QoS) on the Internet, in hopes of offering some clarity to the current debates around service differentiation. We describe the continual failure to get QoS capabilities deployed on the public Internet, including the technical challenges of the 1980s and 1990s, the market-oriented (business) challenges of the 1990s and 2000s, and recent regulatory challenges. Our historical perspective draws on, among other things, our own work from the 1990s that offered proposals for supporting enhanced services using the Internet Protocol (IP) suite, and our attempts to engage both industry and policymakers in understanding the dynamics of the Internet ecosystem. In short, the engineering community successfully developed protocols and mechanisms to implement enhanced services (QoS), and a few individual service providers have deployed them internally or in trusted two-party scenarios. The long-standing failure has been to deploy this capability across the public Internet.

We reflect on lessons learned from the history of this failure, the resulting tensions and risks, and their implications for the future of Internet infrastructure regulation. First, the continued failure of QoS over the last three decades derives from political and economic (business) obstacles as well as technical obstacles. The competitive nature of the industry, and a long history of antitrust regulation (at least in the U.S.), conflict with the need for competing providers to agree on protocols that require sharing operational data with each other to parameterize and verify committed service qualities. Second, QoS technology can yield benefits as well as harms, so policymaking should focus on harms rather than mechanisms. To assure the benefit to consumers, regulators may need to require transparency about the state of congestion and provisioning on networks using such mechanisms. Third, using quality of experience (QoE) as the basis for any regulation will require research, tools, and capabilities to measure, quantify, and characterize QoE, and developing metrics of service quality that better reflect our understanding of QoS and QoE for a range of applications. Finally, profound shifts in interconnection arrangements suggest a reshaping of the debate over QoS on the public Internet. Some access networks are interconnecting their private IP-based network platforms to support enhanced services, and using this interconnected platform to vertically integrate infrastructure and applications. Access networks are also connecting directly to large content providers to minimize the risk of performance impairments. These changes trigger new regulatory concerns over the fate of the public Internet, including capital investment incentives and gaps across different bodies of law.

Barriers to the deployment of scalable interprovider QoS may be insurmountable, but since any Internet of the future will face them, it is worth developing a systematic understanding of the challenge of enhanced services, and documenting successes and failures over the history of the Internet as carefully as possible.

Full paper available on the CAIDA website.

Panel on Cyberwarfare and Cyberattacks at 9th Circuit Judicial Conference

Monday, July 20th, 2015 by kc

I had the honor of contributing to a panel on “Cyberwarfare and cyberattacks: protecting ourselves within existing limitations” at this year’s 9th Circuit Judicial Conference. The panel moderator was Hon. Thomas M. Hardiman, and the other panelists were Professor Peter Cowhey, of UCSD’s School of Global Policy and Strategy, and Professor and Lt. Col. Shane R. Reeves of the U.S. Military Academy at West Point.

Lt. Col. Reeves gave a brief primer on the framework of the Law of Armed Conflict, distinguished an act of cyberwar from a cyberattack, and described the implications for political and legal constraints on governmental and private sector responses. Professor Cowhey followed with a perspective on how economic forces also constrain cybersecurity preparedness and response, drawing comparisons with other industries for which the cost of security technology is perceived to exceed its benefit by those who must invest in its deployment. I used a visualization of an Internet-wide cybersecurity event to illustrate technical, economic, and legal dimensions of the ecosystem that render the fundamental vulnerabilities of today’s Internet infrastructure so persistent and pernicious.

A few people said I talked too fast for them to understand all the points I was trying to make, so I thought I should post the notes I used during my panel remarks. (My remarks borrowed heavily from Dan Geer’s two essays: Cybersecurity and National Policy (2010), and his more recent Cybersecurity as Realpolitik (video), both of which I highly recommend.) After explaining the basic concept of a botnet, I showed a video derived from CAIDA’s analysis of a botnet scanning the entire IPv4 address space (discovered and comprehensively analyzed by Alberto Dainotti and Alistair King). I gave a (too) quick rundown of the technological, economic, and legal circumstances of the Internet ecosystem that facilitate the deployment of botnets and other threats to networked critical infrastructure.

Comments on Cybersecurity Research and Development Strategic Plan

Wednesday, July 1st, 2015 by kc

An excerpt from a comment that David Clark and I wrote in response to Request for Information (RFI)-Federal Cybersecurity R&D Strategic Plan, posted by the National Science Foundation on 4/27/2015.

The RFI asks “What innovative, transformational technologies have the potential to enhance the security, reliability, resiliency, and trustworthiness of the digital infrastructure, and to protect consumer privacy?”

We believe that it would be beneficial to reframe and broaden the scope of this question. The security problems that we face today are not new, and do not persist because of a lack of a technical breakthrough. Rather, they arise in large part from the larger context in which the technology sits, a space defined by misaligned economic incentives that exacerbate coordination problems, lack of clear leadership, regulatory and legal barriers, and the intrinsic complications of a globally connected ecosystem with radically distributed ownership of constituent parts of the infrastructure. Worse, although the public and private sectors have both made enormous investments in cybersecurity technologies over the last decade, we lack relevant data that can characterize the nature and extent of specific cybersecurity problems, or assess the effectiveness of technological or other measures intended to address them.

We first examine two inherently disconnected views of cybersecurity, the correct-operation view and the harm view. These two views do not always align. Attacks on specific components, while disrupting correct operation, may not map to a specific and quantifiable harm. Classes of harms do not always derive from a specific attack on a component; there may be many stages of attack activity that result in harm. Technologists tend to think about assuring correct operation while users, businesses, and policy makers tend to think about preventing classes of harms. Discussions of public policy including research and development funding strategies must bridge this gap.

We then provide two case studies to illustrate our point, and emphasize the importance of developing ways to measure the return on federal investment in cybersecurity R&D.

Full comment:

Background on authors: David Clark (MIT Computer Science and Artificial Intelligence Laboratory) has led network architecture and security research efforts for almost 30 years, and has recently turned his attention toward non-technical (including policy) obstacles to progress in cybersecurity through a new effort at MIT funded by the Hewlett Foundation. kc claffy (UC San Diego’s Center for Applied Internet Data Analysis (CAIDA)) leads Internet research and data analysis efforts aimed at informing network science, architecture, security, and public policy. CAIDA is funded by the U.S. National Science Foundation, Department of Homeland Security’s Cybersecurity Division, and CAIDA members. This comment reflects the views of its authors and not necessarily the agencies sponsoring their research.

Workshop on Internet Economics (WIE2014) Final Report

Tuesday, May 19th, 2015 by kc

The final report for our Workshop on Internet Economics (WIE2014) is available for viewing. The abstract:

On December 10-11, 2014, we hosted the 4th interdisciplinary Workshop on Internet Economics (WIE) at UC San Diego’s Supercomputer Center. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates. The objective for this year’s workshop was a structured consideration of whether and how policy-makers should try to shape the future of the Internet. To structure the discussion about policy, we began the workshop with a list of potential aspirations for our future telecommunications infrastructure (a list we had previously collated), and asked participants to articulate an aspiration or fear they had about the future of the Internet, which we summarized and discussed on the second day. The focus on aspirations was motivated by the high-level observation that before discussing regulation, we must agree on the objective of the regulation, and why the intended outcome is justified. In parallel, we used a similar format as in previous years: a series of focused sessions, where 3-4 presenters each prepared 10-minute talks on issues in recent regulatory discourse, followed by in-depth discussions. This report highlights the discussions and presents relevant open research questions identified by participants.

See the full workshop report at

Slides from workshop presentations are available at

Draft white paper that motivated the workshop at:

Mapping the Technological Frontier and Sources of Innovation

Friday, February 13th, 2015 by kc

Last weekend I had the honor of participating in a conference on “The Digital Broadband Migration: First Principles for a Twenty First Century Innovation Policy” hosted by the Silicon Flatirons Center at the University of Colorado. David Clark and I kicked off a panel on the topic of “Mapping the Technological Frontier and the Sources of Innovation”. The full video is archived on YouTube (Panel starts ~10m52s.) (slides here). A great conference hosted by a great organization (and a law school that seems like a wonderful place to teach and learn).

Report from the 1st NDN Community Meeting (NDNcomm)

Tuesday, January 13th, 2015 by kc

The report for the 1st NDN Community Meeting (NDNcomm) is available online now. This report, “The First Named Data Networking Community Meeting (NDNcomm)”, is a brief summary of the first NDN Community Meeting held at UCLA in Los Angeles, California on September 4-5, 2014. The meeting provided a platform for attendees from 39 institutions across seven countries to exchange recent NDN research and development results, to debate existing and proposed functionality in security support, and to provide feedback into the NDN architecture design evolution.

The workshop was supported by the National Science Foundation (CNS-1457074, CNS-1345286, and CNS-1345318). We thank the NDNcomm Program Committee members for putting together an excellent program, and all participants for their insights and feedback at the workshop.