Archive for the 'Review' Category

CAIDA’s 2015 Annual Report

Tuesday, July 19th, 2016 by kc

[Executive summary and link below]

The CAIDA annual report summarizes CAIDA’s activities for 2015 in the areas of research, infrastructure, and data collection and analysis. Our research projects span Internet topology, routing, security, economics, future Internet architectures, and policy. Our infrastructure, software development, and data sharing activities support measurement-based Internet research, both at CAIDA and around the world, with a focus on the health and integrity of the global Internet ecosystem. The executive summary is excerpted below:

Mapping the Internet. We continued to pursue Internet cartography, improving our IPv4 and IPv6 topology mapping capabilities using our expanding and extensible Ark measurement infrastructure. We improved the accuracy and sophistication of our topology annotation capabilities, including classification of ISPs and their business relationships. Using our evolving IP address alias resolution measurement system, we collected, curated, and released another Internet Topology Data Kit (ITDK).

Mapping Interconnection Connectivity and Congestion.
We used the Ark infrastructure to support an ambitious collaboration with MIT to map the rich mesh of interconnection in the Internet, with a focus on congestion induced by evolving peering and traffic management practices of CDNs and access ISPs, including methods to detect and localize the congestion to specific points in networks. We undertook several studies to pursue different dimensions of this challenge: identification of interconnection borders from comprehensive measurements of the global Internet topology; identification of the actual physical location (facility) of an interconnection in specific circumstances; and mapping observed evidence of congestion at points of interconnection. We continued producing other related data collection and analysis to enable evaluation of these measurements in the larger context of the evolving ecosystem: quantifying a given ISP’s global routing footprint; classification of autonomous systems (ASes) according to business type; and mapping ASes to their owning organizations. In parallel, we examined the peering ecosystem from an economic perspective, exploring fundamental weaknesses and systemic problems of the currently deployed economic framework of Internet interconnection that will continue to cause peering disputes between ASes.

Monitoring Global Internet Security and Stability. We conduct other global monitoring projects, which focus on security and stability aspects of the global Internet: traffic interception events (hijacks), macroscopic outages, and network filtering of spoofed packets. Each of these projects leverages the existing Ark infrastructure, but each has also required the development of new measurement, data aggregation, and analysis tools and infrastructure, now at various stages of development. We were tremendously excited to finally finish and release BGPstream, a software framework for processing large amounts of historical and live BGP measurement data. BGPstream serves as one of several data analysis components of our outage-detection monitoring infrastructure, a prototype of which was operating at the end of the year. We published four other papers that leverage the results of Internet scanning and other unsolicited traffic to infer macroscopic properties of the Internet.

Future Internet Architectures. The current TCP/IP architecture is showing its age, and the slow uptake of its ostensible upgrade, IPv6, has inspired NSF and other research funding agencies around the world to invest in research on entirely new Internet architectures. We continue to help launch this moonshot from several angles — routing, security, testbed, management — while also pursuing and publishing results of six empirical studies of IPv6 deployment and evolution.

Public Policy. Our final research thrust is public policy, an area that expanded in 2015 due to requests from policymakers for empirical research results or guidance to inform industry tussles and telecommunication policies. Most notably, the FCC and AT&T selected CAIDA to be the Independent Measurement Expert in the context of the AT&T/DirecTV merger, which turned out to be as much of a challenge as it was an honor. We also published three position papers, each aimed at optimizing different public policy outcomes in the face of a rapidly evolving information and communication technology landscape. We contributed to the development of frameworks for ethical assessment of Internet measurement research methods.

Our infrastructure operations activities also grew this year. We continued to operate active and passive measurement infrastructure with visibility into global Internet behavior, and associated software tools that facilitate network research and security vulnerability analysis. In addition to BGPstream, we expanded our infrastructure activities to include a client-server system for allowing measurement of compliance with BCP38 (ingress filtering best practices) across government, research, and commercial networks, and analysis of resulting data in support of compliance efforts. Our 2014 efforts to expand our data sharing efforts by making older topology and some traffic data sets public have dramatically increased use of our data, reflected in our data sharing statistics. In addition, we were happy to help launch DHS’ new IMPACT data sharing initiative toward the end of the year.
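At its core, testing compliance with BCP38 reduces to a simple question at the network edge: does a packet’s source address fall within a prefix actually assigned to the network it arrived from? If not, ingress filtering says the packet should be dropped. A minimal sketch of that check, using Python’s standard `ipaddress` module (the prefixes and addresses below are hypothetical documentation examples, not data from our measurement system):

```python
import ipaddress

def bcp38_permits(source_ip: str, assigned_prefixes: list[str]) -> bool:
    """Return True if a BCP38-compliant edge router should forward a
    packet with this source address, i.e. the address originates from
    one of the prefixes assigned to the attached customer network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in ipaddress.ip_network(p) for p in assigned_prefixes)

# Hypothetical customer network assigned 192.0.2.0/24 (TEST-NET-1).
prefixes = ["192.0.2.0/24"]

print(bcp38_permits("192.0.2.17", prefixes))   # legitimate source: forward
print(bcp38_permits("203.0.113.5", prefixes))  # spoofed source: drop
```

A network that forwards packets failing this check is exactly what the client-server measurement system described above is designed to detect from the inside.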

Finally, as always, we engaged in a variety of tool development and outreach activities, including maintaining web sites, publishing 27 peer-reviewed papers, 3 technical reports, 3 workshop reports, 33 presentations, and 14 blog entries, and hosting 5 workshops. This report summarizes the status of our activities; details about our research are available in papers, presentations, and interactive resources on our web sites. We also provide listings and links to shared software tools and data sets, and statistics reflecting their usage. Finally, we offer a “CAIDA in numbers” section: statistics on our performance, financial reporting, and supporting resources, including visiting scholars and students, and all funding sources.

For the full 2015 annual report, see

The 2nd NDN Project Retreat

Sunday, February 5th, 2012 by kc

I kicked off 2012 with a visit to Colorado State University in Fort Collins, CO to attend the principal investigators (PI) retreat for the Named Data Networking Project, one of four projects funded under NSF’s “Future Internet Architecture” (FIA) program. The group has made impressive progress since the first FIA meeting, with substantial development and coordination of the NDN Testbed connecting the initial participating institutions, including network status reporting, the state of (phase-one) OSPF routing, and testbed status pages. This two-day meeting packed in a wide range of collaborative discussions of architecture and implementation issues, including: topology and namespace structure and constraints; organizational structure and network management; routing and forwarding strategy; security issues such as attribution and privacy; early experiences with application development; evaluation and measurement; social and ethical values in technology design; and educational outreach (classes teaching NDN concepts). We also discussed how to dispel the misconception that NDN is simply collaborative web caching. (The caching is essential, but the most revolutionary piece of this new communication model is retrieving data by name.)


my third FCC TAC meeting — the most exciting yet

Monday, July 25th, 2011 by kc

My third FCC Technical Advisory Council meeting (3-hr. video archive here) was the most exciting yet. The TAC’s Critical Legacy Transition working group, studying the legacy public switched telephone network, recommended that the Council advise the FCC to set a concrete date to sunset (shut down) the Public Switched Telephone Network (PSTN). (!) The working group recommended the year 2018 as a starting point for lively discussion.


Exhausted IPv4 address architectures

Tuesday, May 3rd, 2011 by kc

In light of available data on global IPv6 deployment, ISPs, and those who build equipment for them, have already accepted that multi-level network address translation (NAT, between IPv4 and IPv6 networks) is here for the foreseeable future, with all its limits on end-to-end reachability and application functionality, and the unscalable per-protocol hacks it requires. Whether “carrier-grade” NAT (CGN) technology supports a transition to IPv6 or becomes the endgame itself is irrelevant to the planning horizon of public companies, who must now develop sustainable business models that accommodate, if not support, IPv4 scarcity. I’ve heard a few notable predicted outcomes from engineers in the field.
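The end-to-end breakage is easy to see in miniature: a NAT forwards inbound packets only when an earlier outbound packet created a translation-table entry, so an outside host can never initiate a connection to a host behind the NAT without some per-protocol workaround. A toy sketch of that mapping logic (all names, addresses, and ports are illustrative; real CGNs add timeouts, address pooling, and per-subscriber port limits):

```python
class Nat:
    """Toy NAT: outbound flows create mappings; unsolicited inbound
    traffic matches nothing and is dropped."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}  # public port -> (private_ip, private_port)

    def outbound(self, private_ip: str, private_port: int):
        """An outbound packet allocates a public (ip, port) mapping."""
        pub_port = self.next_port
        self.next_port += 1
        self.table[pub_port] = (private_ip, private_port)
        return (self.public_ip, pub_port)

    def inbound(self, public_port: int):
        """Deliver an inbound packet only if a mapping exists;
        unsolicited traffic is dropped (returns None)."""
        return self.table.get(public_port)

nat = Nat("198.51.100.1")
ip, port = nat.outbound("10.0.0.5", 12345)  # inside host opens a connection
print(nat.inbound(port))    # the reply gets through to (10.0.0.5, 12345)
print(nat.inbound(55555))   # unsolicited inbound connection: dropped (None)
```

Stack two of these (NAT444), as CGN deployments do, and every inbound-connection workaround has to traverse both layers, which is where the per-protocol hacks come from.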


my second FCC TAC meeting, and its IPv6 promise

Saturday, April 30th, 2011 by kc

I recently remotely attended my second meeting of the FCC’s Technological Advisory Council (slides but no video archives). The chairs of four working groups created at the first TAC meeting (Critical Transitions; IPv6; Broadband Infrastructure Deployment; and Sharing Opportunities) presented their interim results. The FCC then issued a set of “TAC recommendations” (which the TAC never saw); it is mostly a wish list from industry to the FCC. Ironically, IPv6 did not appear anywhere in the recommendations, despite being the most popular topic at the first TAC meeting last November, and despite our having run out of IPv4 addresses since the last TAC meeting. But the TAC’s IPv6 WG did commit to (on slide 53) delivering a report by November 2011 on what the FCC could or should do to help promote IPv6 deployment. Specifically, the WG has the following charter:


my first “Future Internet Architecture” PI meeting

Wednesday, January 5th, 2011 by kc

Among the interesting meetings I attended in 2010 was the principal investigators (PI) meeting for NSF’s new “Future Internet Architecture” (FIA) program. The FIA program builds on the successes of NSF’s previous Future Internet Design (FIND) program, the recommendations of a review panel, and a community summit in October 2009. (The FIND program itself has been integrated into NSF’s new Network Science and Engineering research program, while the four FIA teams are attempting to implement some of the ideas developed thus far.) CAIDA is participating in one of these projects — Named Data Networking (NDN), led by Van Jacobson at Xerox PARC and Lixia Zhang at UCLA. (Background links: a 2010 technical report describing the proposed architecture, Van’s August 2006 video lecture, and a 2009 ACM Queue Q&A on NDN ideas.)


my first FCC TAC meeting

Monday, November 15th, 2010 by kc

I recently attended my first FCC Technological Advisory Council meeting (video archives). A week before the meeting we received a memo from the chairman of the committee (Tom Wheeler) notifying the committee of a “clear and challenging mandate from Chairman Genachowski: to generate ideas and spur actions that lead to job creation and economic growth in the ICT [information and communication technologies] ecosystem.” Specifically, “The TAC will focus on the short term implementation of innovative ideas to create investment and jobs, as opposed to long term regulatory changes.”


What’s Belmont Got To Do With It?

Friday, June 12th, 2009 by Erin Kenneally

Recently a group of Internet technology researchers, attorneys and policy professionals participated in a DHS-sponsored workshop, “Ethical Principles and Guidelines for the Protection of Human Subjects in Information and Communications Technology Network and Security Research.” Possible nickname: Belmont Flux Workshop. If you’re still glassy-eyed: (1) you have yet to engage the depths of an Institutional Review Board (IRB) in the context of network and security research; (2) you gave up after seeing “Ethical principles”; and/or (3) you think human subjects issues and network research are orthogonal.

Here’s a summary of the event, and hopefully some inspiration.


a recent visit to the fcc

Tuesday, June 9th, 2009 by kc

I spent a few hours at the FCC two weeks back, where I presented a slide version of a top ten list I wrote last year. Requested discussion topics: obstacles to data collection, how data is collected and used, policy-making based on inference, how to develop an objective knowledge base for science and policy, and privacy expectations/rights versus the need to understand the system as critical infrastructure. The audience was mostly lawyers, worried about how they are going to accomplish a reasonable broadband plan. As I tried to describe in my five-minute presentation slot (and 1 slide, and a more expansive blog entry) on the broadband panel at the DOC ten weeks ago, solutions begin with recognition of some underlying empirical facts, starting with one that is strangely not being emphasized by lobbyists: you can’t make Wall-Street-approved margins moving bits around over long distances. There are lots of implications to that reality; the sooner we admit it, the more realistic our broadband plan will be.

ethical phishing experiments have to lie?

Monday, May 4th, 2009 by kc

Stefan pointed me at a paper titled “Designing and Conducting Phishing Experiments” (in IEEE Technology and Society Special Issue on Usability and Security, 2007) that makes an amazing claim: it might be more ethical not to debrief the subjects of your phishing experiments after the experiments are over; in particular, you might ‘do less harm’ if you do not reveal that some of the sites you had them browse were phishing sites.