Archive for the 'Updates' Category
[Executive summary and link below]
The CAIDA annual report summarizes CAIDA’s activities for 2015 in the areas of research, infrastructure, and data collection and analysis. Our research projects span Internet topology, routing, security, economics, future Internet architectures, and policy. Our infrastructure, software development, and data sharing activities support measurement-based Internet research, both at CAIDA and around the world, with a focus on the health and integrity of the global Internet ecosystem. The executive summary is excerpted below:
Mapping the Internet. We continued to pursue Internet cartography, improving our IPv4 and IPv6 topology mapping capabilities using our expanding and extensible Ark measurement infrastructure. We improved the accuracy and sophistication of our topology annotation capabilities, including classification of ISPs and their business relationships. Using our evolving IP address alias resolution measurement system, we collected, curated, and released another Internet Topology Data Kit (ITDK).
Mapping Interconnection Connectivity and Congestion. We used the Ark infrastructure to support an ambitious collaboration with MIT to map the rich mesh of interconnection in the Internet, with a focus on congestion induced by evolving peering and traffic management practices of CDNs and access ISPs, including methods to detect and localize the congestion to specific points in networks. We undertook several studies to pursue different dimensions of this challenge: identification of interconnection borders from comprehensive measurements of the global Internet topology; identification of the actual physical location (facility) of an interconnection in specific circumstances; and mapping observed evidence of congestion at points of interconnection. We continued related data collection and analysis efforts to enable evaluation of these measurements in the larger context of the evolving ecosystem: quantifying a given ISP’s global routing footprint; classification of autonomous systems (ASes) according to business type; and mapping ASes to their owning organizations. In parallel, we examined the peering ecosystem from an economic perspective, exploring fundamental weaknesses and systemic problems of the currently deployed economic framework of Internet interconnection that will continue to cause peering disputes between ASes.
Monitoring Global Internet Security and Stability. We conduct other global monitoring projects, which focus on security and stability aspects of the global Internet: traffic interception events (hijacks), macroscopic outages, and network filtering of spoofed packets. Each of these projects leverages the existing Ark infrastructure, but each has also required the development of new measurement, data aggregation, and analysis tools and infrastructure, now at various stages of development. We were tremendously excited to finally finish and release BGPstream, a software framework for processing large amounts of historical and live BGP measurement data. BGPstream serves as one of several data analysis components of our outage-detection monitoring infrastructure, a prototype of which was operating at the end of the year. We published four other papers that leverage the results of Internet scanning and other unsolicited traffic to infer macroscopic properties of the Internet.
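BGPstream’s value is in turning raw BGP measurement feeds into a simple stream of records that analysis code can filter. A toy sketch of that stream-processing style, useful for intuition only (the record fields and helper below are invented for illustration and are not the BGPstream API; real code would iterate BGPstream’s own record and element types):

```python
# Illustrative sketch of stream-style BGP analysis, in the spirit of BGPstream.
# Field names and sample data are hypothetical, not the real BGPstream API.
from dataclasses import dataclass

@dataclass
class BGPElem:
    record_type: str   # "announcement" or "withdrawal"
    peer_asn: int      # AS of the peer that exported this element
    prefix: str        # announced/withdrawn prefix
    as_path: list      # AS path; origin AS is the last element

def origin_asns(stream, prefix):
    """Collect origin ASNs seen announcing `prefix` across a stream.

    Multiple distinct origins for one prefix (a MOAS conflict) is one of
    the signals that hijack-detection systems built on BGP streams use."""
    origins = set()
    for elem in stream:
        if elem.record_type == "announcement" and elem.prefix == prefix:
            if elem.as_path:
                origins.add(elem.as_path[-1])
    return origins

elems = [
    BGPElem("announcement", 3356, "192.0.2.0/24", [3356, 65001]),
    BGPElem("announcement", 1299, "192.0.2.0/24", [1299, 65002]),  # second origin
    BGPElem("withdrawal", 3356, "198.51.100.0/24", []),
]
print(origin_asns(elems, "192.0.2.0/24"))  # two distinct origins: possible MOAS
```

The same loop shape scales from a few synthetic elements to the historical and live feeds BGPstream handles; only the source of the stream changes.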
Future Internet Architectures. The current TCP/IP architecture is showing its age, and the slow uptake of its ostensible upgrade, IPv6, has inspired NSF and other research funding agencies around the world to invest in research on entirely new Internet architectures. We continue to help launch this moonshot from several angles — routing, security, testbed, management — while also pursuing and publishing results of six empirical studies of IPv6 deployment and evolution.
Public Policy. Our final research thrust is public policy, an area that expanded in 2015, due to requests from policymakers for empirical research results or guidance to inform industry tussles and telecommunication policies. Most notably, the FCC and AT&T selected CAIDA to be the Independent Measurement Expert in the context of the AT&T/DirecTV merger, which turned out to be as much of a challenge as it was an honor. We also published three position papers each aimed at optimizing different public policy outcomes in the face of a rapidly evolving information and communication technology landscape. We contributed to the development of frameworks for ethical assessment of Internet measurement research methods.
Our infrastructure operations activities also grew this year. We continued to operate active and passive measurement infrastructure with visibility into global Internet behavior, and associated software tools that facilitate network research and security vulnerability analysis. In addition to BGPstream, we expanded our infrastructure activities to include a client-server system for measuring compliance with BCP38 (ingress filtering best practices) across government, research, and commercial networks, and analysis of the resulting data in support of compliance efforts. Our 2014 efforts to expand data sharing by making older topology and some traffic data sets public dramatically increased use of our data, as reflected in our data sharing statistics. In addition, we were happy to help launch DHS’ new IMPACT data sharing initiative toward the end of the year.
Finally, as always, we engaged in a variety of tool development and outreach activities, including maintaining web sites, publishing 27 peer-reviewed papers, 3 technical reports, 3 workshop reports, 33 presentations, and 14 blog entries, and hosting 5 workshops. This report summarizes the status of our activities; details about our research are available in papers, presentations, and interactive resources on our web sites. We also provide listings and links to software tools and data sets shared, along with statistics reflecting their usage. Finally, we offer a “CAIDA in numbers” section: statistics on our performance, financial reporting, and supporting resources, including visiting scholars and students, and all funding sources.
For the full 2015 annual report, see http://www.caida.org/home/about/annualreports/2015/
The Named Data Networking project recently published the NDN-NP annual report covering activities from May 2015 through April 2016.
V. Jacobson, J. Burke, L. Zhang, T. Abdelzaher, B. Zhang, k. claffy, P. Crowley, J. Halderman, C. Papadopoulos, and L. Wang, “Named Data Networking Next Phase (NDN-NP) Project May 2015 – April 2016 Annual Report”, Tech. rep., Named Data Networking (NDN), Jun 2016.
This report summarizes our accomplishments during the second year of the Named Data Networking Next Phase (NDN-NP) project (the 5th year of the overall project). This phase of the project focuses on deploying and evaluating the NDN architecture in four environments: building automation management systems, mobile health, multimedia real-time conferencing tools, and scientific data applications. Implementation and testing of pilot applications in these network environments further demonstrated our research progress in namespace design, trust management, and encryption-based access control. Highlights from this year include:
- Continued evolution of the NDN Forwarding Daemon (NFD), to support application-driven experimentation with new NDN protocol features.
- Development of an Android version of NFD to promote NDN experimentation on mobile platforms.
- Implementation of a new transport protocol (InfoMax) that can intelligently filter streams of information in order to reduce transmitted data volume, while minimizing loss of information.
- A growing portfolio of supporting software libraries, including new APIs, transport mechanisms (Sync, information maximization), and security functionality, that leverage inherent capabilities of NDN, e.g., schematized trust, name-based access control.
- Demonstration of extremely scalable forwarding implementation using a billion synthetic names.
- Implementation and evaluation of hyperbolic routing performance to understand its feasibility for NDN routing.
- Multi-faceted evaluation of the architecture, from instrumentation of applications on the testbed, to uses of ndnSIM and the Mini-NDN emulator environment.
- Continued use of NDN in the four courses taught by principal investigators.
- The second annual NDN Community meeting, hosted by the NDN Consortium to promote a vibrant open source ecosystem of research and experimentation around NDN.
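The forwarding-scalability highlight above rests on longest-prefix match over hierarchical, slash-delimited names, the core lookup an NDN forwarder performs against its FIB. A minimal sketch of that lookup (the FIB entries and face labels are invented for illustration; real forwarders like NFD use far more elaborate data structures):

```python
# Minimal longest-prefix match over hierarchical NDN-style names.
# FIB contents and face labels are hypothetical, for illustration only.
def lpm(fib, name):
    """Return the next-hop face for the longest FIB prefix matching `name`.

    `fib` maps name prefixes (tuples of name components) to face labels."""
    components = tuple(c for c in name.split("/") if c)
    # Try the full name first, then successively shorter prefixes.
    for i in range(len(components), 0, -1):
        hop = fib.get(components[:i])
        if hop is not None:
            return hop
    return None

fib = {
    ("ndn", "edu"): "face1",
    ("ndn", "edu", "ucla"): "face2",
}
print(lpm(fib, "/ndn/edu/ucla/videos/demo.mp4"))  # face2 (longest match wins)
print(lpm(fib, "/ndn/edu/arizona/data"))          # face1 (falls back to /ndn/edu)
```

Scaling this lookup to a billion names is exactly where the engineering challenge lies: the linear prefix probe here must become a compressed trie or hash-based structure with per-component matching.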
The NDN team has made tremendous progress in the last five years, and a larger community of information-centric networking research has evolved in parallel. Our progress revealed the importance of demonstrating NDN capabilities in IoT and big data environments, and highlighted the need for accessible software platform support and emulation capabilities to facilitate R&D on both the NDN architecture and applications that leverage it. We have received a year of supplemental funding to complete four tasks: 1) completing and disseminating native NDN applications and associated design patterns; 2) demonstrating NDN scalability; 3) documenting and releasing reference implementations; and 4) documenting NDN design decisions and lessons learned.
The report for the Second NDN Community Meeting (NDNcomm 2015) is available online now. The meeting, held at UCLA in Los Angeles, California on September 28-29, 2015, provided a platform for attendees from 63 institutions across 13 countries to exchange recent NDN research and development results, to debate existing and proposed functionality in NDN forwarding, routing, and security, and to provide feedback to the NDN architecture design evolution.
[The workshop was partially supported by the National Science Foundation CNS-1345286, CNS-1345318, and CNS-1457074. We thank the NDNcomm Program Committee members for their effort of putting together an excellent program. We thank all participants for their insights and feedback at the workshop.]
We recently posted two papers on policy that are worth highlighting:
Anchoring policy development around stable points: an approach to regulating the co-evolving ICT ecosystem, published in Telecommunications Policy, Aug 2015.
The daunting pace of innovation in the information and communications technology (ICT) landscape, a landscape of technology and business structure, is a well-known but under-appreciated reality. In contrast, the rate of policy and regulatory innovation is much slower, partly due to its inherently more deliberative character. We describe this disparity in terms of the natural rates of change in different parts of the ecosystem, and examine why it has impeded attempts to impose effective regulation on the telecommunications industry. We explain why a recent movement to reduce this disparity by increasing the pace of regulation – adaptive regulation – faces five obstacles that may hinder its feasibility in the ICT ecosystem. As a means to achieve more sustainable regulatory frameworks for ICT industries, we introduce an approach based on finding stable points in the system architecture. We explore the origin and role of these stable points in a rapidly evolving system, and argue that they can provide a means to support development of policies, including adaptive regulation approaches, that are more likely to survive the rapid pace of evolution in technology.
Adding Enhanced Services to the Internet: Lessons from History
Presented at the Telecommunications Policy Research Conference (TPRC), Sep 2015.
We revisit the last 35 years of history related to the design and specification of Quality of Service (QoS) on the Internet, in hopes of offering some clarity to the current debates around service differentiation. We describe the continual failure to get QoS capabilities deployed on the public Internet, including the technical challenges of the 1980s and 1990s, the market-oriented (business) challenges of the 1990s and 2000s, and recent regulatory challenges. Our historical perspective draws on, among other things, our own work from the 1990s that offered proposals for supporting enhanced services using the Internet Protocol (IP) suite, and our attempts to engage both industry and policymakers in understanding the dynamics of the Internet ecosystem. In short, the engineering community successfully developed protocols and mechanisms to implement enhanced services (QoS), and a few individual service providers have deployed them internally or in trusted two-party scenarios. The long-standing failure has been to deploy this capability across the public Internet.
We reflect on lessons learned from the history of this failure, the resulting tensions and risks, and their implications for the future of Internet infrastructure regulation. First, the continued failure of QoS over the last three decades derives from political and economic (business) obstacles as well as technical obstacles. The competitive nature of the industry, and a long history of anti-trust regulation (at least in the U.S.) conflicts with the need for competing providers to agree on protocols that require sharing operational data with each other to parameterize and verify committed service qualities. Second, QoS technology can yield benefits as well as harms, so policymaking should focus on harms rather than mechanisms. To assure the benefit to consumers, regulators may need to require transparency about the state of congestion and provisioning on networks using such mechanisms. Third, using QoE as the basis for any regulation will require research, tools and capabilities to measure, quantify, and characterize QoE, and developing metrics of service quality that better reflect our understanding of QoS and QoE for a range of applications. Finally, profound shifts in interconnection arrangements suggest a reshaping of the debate over QoS on the public Internet. Some access networks are interconnecting their private IP-based network platforms to support enhanced services, and using this interconnected platform to vertically integrate infrastructure and applications. Access networks are also connecting directly to large content providers to minimize the risk of performance impairments. These changes trigger new regulatory concerns over the fate of the public Internet, including capital investment incentives and gaps across different bodies of law.
Barriers to the deployment of scalable interprovider QoS may be insurmountable, but since any Internet of the future will face them, it is worth developing a systematic understanding of the challenge of enhanced services, and documenting successes and failures over the history of the Internet as carefully as possible.
Full paper available on the CAIDA website.
[Executive Summary from our annual report for 2014:]
This annual report covers CAIDA’s activities in 2014, summarizing highlights from our research, infrastructure, data-sharing and outreach activities. Our research projects span Internet topology, routing, traffic, security and stability, future Internet architecture, economics and policy. Our infrastructure activities support measurement-based Internet studies, both at CAIDA and around the world, with focus on the health and integrity of the global Internet ecosystem.
The Named Data Networking project recently published the NDN-NP annual report covering activities from May 2014 through April 2015.
V. Jacobson, J. Burke, L. Zhang, B. Zhang, K. Claffy, C. Papadopoulos, T. Abdelzaher, L. Wang, J. Halderman, and P. Crowley, “Named Data Networking Next Phase (NDN-NP) Project May 2014 – April 2015 Annual Report”, Tech. rep., Jun 2015.
This report catalogs a wide range of our accomplishments during the first year of the “NDN Next Phase (NDN-NP)” project. This phase of the project is environment-driven, in that we are focusing on deploying and evaluating the NDN architecture in two specific environments: building automation management systems and mobile health, together with a cluster of multimedia collaboration tools.
The final report for our Workshop on Internet Economics (WIE2014) is available for viewing. The abstract:
On December 10-11, 2014, we hosted the 4th interdisciplinary Workshop on Internet Economics (WIE) at the San Diego Supercomputer Center at UC San Diego. This workshop series provides a forum for researchers, Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to inform current and emerging regulatory and policy debates. The objective for this year’s workshop was a structured consideration of whether and how policy-makers should try to shape the future of the Internet. To structure the discussion about policy, we began the workshop with a list of potential aspirations for our future telecommunications infrastructure (a list we had previously collated), and asked participants to articulate an aspiration or fear they had about the future of the Internet, which we summarized and discussed on the second day. The focus on aspirations was motivated by the high-level observation that before discussing regulation, we must agree on the objective of the regulation, and why the intended outcome is justified. In parallel, we used a similar format as in previous years: a series of focused sessions, where 3-4 presenters each prepared 10-minute talks on issues in recent regulatory discourse, followed by in-depth discussions. This report highlights the discussions and presents relevant open research questions identified by participants.
See the full workshop report at http://www.caida.org/publications/papers/2015/wie2014_report/
Slides from workshop presentations are available at http://www.caida.org/workshops/wie/1412/
I feel that somewhere up there Jon Postel is smiling about Matthew’s RFC 7514, published today:
The deployment of Explicit Congestion Notification (ECN) [RFC3168] remains stalled. While most operating systems support ECN, it is currently disabled by default because of fears that enabling ECN will break transport protocols. This document proposes a new ICMP message that a router or host may use to advise a host to reduce the rate at which it sends, in cases where the host ignores other signals such as packet loss and ECN. We call this message the “Really Explicit Congestion Notification” (RECN) message because it delivers a less subtle indication of congestion than packet loss and ECN.
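A RECN message would ride inside a standard 8-byte ICMP header (type, code, checksum, plus a 4-byte rest-of-header field). A sketch of building such a header in Python, assuming an experimental ICMP type value for illustration (the type constant below is hypothetical, not the value RFC 7514 assigns):

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                      # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Hypothetical: using an RFC 3692-style experimental ICMP type for the
# sketch, NOT the type actually assigned by RFC 7514.
RECN_TYPE = 253

def build_recn(code: int = 0) -> bytes:
    """Build an 8-byte ICMP header for a (hypothetical) RECN message."""
    # Pack with checksum zeroed, compute the checksum, then repack.
    header = struct.pack("!BBHI", RECN_TYPE, code, 0, 0)
    csum = internet_checksum(header)
    return struct.pack("!BBHI", RECN_TYPE, code, csum, 0)

msg = build_recn()
assert internet_checksum(msg) == 0  # a valid ICMP header checksums to zero
```

As with any ICMP message, a receiver verifies the header by checksumming the whole message and checking for zero; whether hosts would heed a "really explicit" congestion signal any better than ECN is, of course, the joke.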
Last weekend I had the honor of participating in a conference on “The Digital Broadband Migration: First Principles for a Twenty First Century Innovation Policy” hosted by the Silicon Flatirons Center at the University of Colorado. David Clark and I kicked off a panel on the topic of “Mapping the Technological Frontier and the Sources of Innovation”. The full video is archived on YouTube (Panel starts ~10m52s.) (slides here). A great conference hosted by a great organization (and a law school that seems like a wonderful place to teach and learn).