Archive for the 'Updates' Category

9th Workshop on Internet Economics

Tuesday, January 29th, 2019 by kc

On December 12-13, 2018, CAIDA and the Massachusetts Institute of Technology (MIT) hosted the (invitation-only) 9th interdisciplinary Workshop on Internet Economics (WIE) at the University of California San Diego in La Jolla, CA.

The goal of this workshop series is to provide a forum for researchers, commercial Internet facilities and service providers, technologists, economists, theorists, policy makers, and other stakeholders to empirically inform emerging Internet regulatory and policy debates.

Presenters were asked to write abstracts for their talks, addressing four questions:

  1. What is the policy goal or fear you’re addressing?
  2. What data is needed to measure progress toward (or away from) this goal or fear?
  3. What methods do you propose to use (or are currently being used) to gather such data?
  4. Who should execute such methods and how, and should the resulting data be shared or not?

With a specific focus on measurement challenges, the topics we discussed included: analyzing the evolution of the Internet in a layered-platform context to gain new insights; measurement and analysis of the economic impacts of new technologies using old tools; security and trustworthiness; reach (universal service) and reachability; sustainability of investment in public Internet infrastructure; and infrastructure to measure the public Internet.

Some of the takeaways from the workshop included:
(more…)

CAIDA wins Best Paper at ACM SIGCOMM 2018!

Wednesday, August 22nd, 2018 by CAIDA Webmaster

Congratulations to Amogh Dhamdhere, David Clark, Alexander Gamero-Garrido, Matthew Luckie, Ricky K.P. Mok, Gautam Akiwate, Kabir Gogia, Vaibhav Bajpai, Alex Snoeren, and kc claffy, for being awarded Best Paper at SIGCOMM 2018!

The abstract from the paper, “Inferring Persistent Interdomain Congestion”:

There is significant interest in the technical and policy communities regarding the extent, scope, and consumer harm of persistent interdomain congestion. We provide empirical grounding for discussions of interdomain congestion by developing a system and method to measure congestion on thousands of interdomain links without direct access to them. We implement a system based on the Time Series Latency Probes (TSLP) technique that identifies links with evidence of recurring congestion suggestive of an under-provisioned link. We deploy our system at 86 vantage points worldwide and show that congestion inferred using our lightweight TSLP method correlates with other metrics of interconnection performance impairment. We use our method to study interdomain links of eight large U.S. broadband access providers from March 2016 to December 2017, and validate our inferences against ground-truth traffic statistics from two of the providers. For the period of time over which we gathered measurements, we did not find evidence of widespread endemic congestion on interdomain links between access ISPs and directly connected transit and content providers, although some such links exhibited recurring congestion patterns. We describe limitations, open challenges, and a path toward the use of this method for large-scale third-party monitoring of the Internet interconnection ecosystem.

Read the full paper on the CAIDA website.
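
For readers curious what the TSLP inference looks like mechanically, here is a minimal sketch, assuming RTT time series have already been collected toward the routers on the near and far side of a candidate interdomain link. It illustrates the idea of recurring far-side latency elevation only; the thresholds and data layout are placeholder assumptions, not CAIDA’s implementation.

```python
from statistics import median

def shows_recurring_congestion(samples, elevation_ms=5.0, min_days=3):
    """Toy TSLP-style check.

    samples: list of (day, hour, near_rtt_ms, far_rtt_ms) tuples, where the
    near/far RTTs are measured to the routers on the near and far side of
    the interdomain link under study.

    Flag the link if, for some hour of the day, the far-side RTT is elevated
    above its baseline on several distinct days while the near-side RTT stays
    flat (a near-side rise would implicate the path up to the link instead).
    """
    near_base = median(s[2] for s in samples)
    far_base = median(s[3] for s in samples)

    elevated_days = {}  # hour of day -> set of days showing far-side elevation
    for day, hour, near, far in samples:
        if far - far_base >= elevation_ms and near - near_base < elevation_ms:
            elevated_days.setdefault(hour, set()).add(day)

    # "Recurring" congestion: the same hour of day is elevated on enough days.
    return any(len(days) >= min_days for days in elevated_days.values())
```

A real system must also identify the near and far interfaces of each interdomain link, manage probing rates, filter measurement noise, and validate inferences against ground truth, challenges the paper addresses in detail.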

CAIDA’s Annual Report for 2017

Tuesday, May 29th, 2018 by kc

The CAIDA annual report summarizes CAIDA’s activities for 2017, in the areas of research, infrastructure, data collection, and analysis. Our research projects span Internet topology, routing, security, economics, future Internet architectures, and policy. Our infrastructure, software development, and data sharing activities support measurement-based Internet research, both at CAIDA and around the world, with a focus on the health and integrity of the global Internet ecosystem. The executive summary is excerpted below:
(more…)

CAIDA’s 2016 Annual Report

Tuesday, May 9th, 2017 by kc

[Executive summary and link below]

The CAIDA annual report summarizes CAIDA’s activities for 2016, in the areas of research, infrastructure, data collection, and analysis. Our research projects span Internet topology, routing, security, economics, future Internet architectures, and policy. Our infrastructure, software development, and data sharing activities support measurement-based Internet research, both at CAIDA and around the world, with a focus on the health and integrity of the global Internet ecosystem. The executive summary is excerpted below:

Mapping the Internet. We continued to expand our topology mapping capabilities using our Ark measurement infrastructure. We improved the accuracy and sophistication of our topology annotations, including classification of ISPs, business relationships between them, and geographic mapping of interdomain links that implement these relationships. We released two Internet Topology Data Kits (ITDKs) incorporating these advances.

Mapping Interconnection Connectivity and Congestion. We continued our collaboration with MIT to map the rich mesh of interconnection in the Internet in order to study congestion induced by evolving peering and traffic management practices of CDNs and access ISPs. We focused our efforts on the challenge of detecting and localizing congestion to specific points in between networks. We developed new tools to scale measurements to a much wider set of available nodes. We also implemented a new database and graphing platform to allow us to interactively explore our topology and performance measurements. We produced related data collection and analyses to enable evaluation of these measurements in the larger context of the evolving ecosystem: infrastructure resiliency, economic tussles, and public policy.

Monitoring Global Internet Security and Stability. We conducted infrastructure research and development projects that focus on security and stability aspects of the global Internet. We developed continuous fine-grained monitoring capabilities that establish a baseline of connectivity awareness against which to interpret observed changes due to network outages or route hijacks. We released (in beta form) a new operational prototype service that monitors the Internet in near-real-time and helps identify macroscopic Internet outages affecting the edge of the network.

CAIDA also developed new client tools for measuring IPv4 and IPv6 spoofing capabilities, along with services that provide reporting and allow users to opt in or out of sharing the data publicly.

Future Internet Architectures. We continued studies of IPv4 and IPv6 paths in the Internet, including topological congruency, stability, and RTT performance. We examined the state of security policies in IPv6 networks, and collaborated to measure CGN deployment in U.S. broadband networks. We also continued our collaboration with researchers at several other universities to advance development of a new Internet architecture: Named Data Networking (NDN) and published a paper on the policy and social implications of an NDN-based Internet.

Public Policy. Acting as an Independent Measurement Expert, we posted our agreed-upon revised methodology for measurement methods and reporting requirements related to the AT&T Inc. and DirecTV merger (MB Docket No. 14-90). We published our proposed method and a companion justification document. Inspired by this experience and a range of contradictory claims about interconnection performance, we introduced a new model describing measurements of interconnection links of access providers, and demonstrated how it can guide sound interpretation of interconnection-related measurements regardless of their source.

Infrastructure operations. It was an unprecedented year for CAIDA from an infrastructure development perspective. We continued support for our existing active and passive measurement infrastructure to provide visibility into global Internet behavior, and associated software tools and platforms that facilitate network research and operational assessments.

We made available several data services that have been years in the making: our prototype Internet Outage Detection and Analysis service, with several underlying components released as open source; the Periscope platform to unify and scale querying of thousands of looking glass nodes on the global Internet; our large-scale Internet topology query system (Henya); and our Spoofer system for measurement and analysis of source address validation across the global Internet. Unfortunately, due to continual network upgrades, we lost access to our 10 Gbps backbone traffic monitoring infrastructure. We are now considering approaches to acquire new monitors capable of packet capture on 100 Gbps links.

As always, we engaged in a variety of tool development and outreach activities, including maintaining web sites, publishing 13 peer-reviewed papers, 3 technical reports, 4 workshop reports, one (our first) BGP hackathon report, 31 presentations, 20 blog entries, and hosting 6 workshops (including the hackathon). This report summarizes the status of our activities; details about our research are available in papers, presentations, and interactive resources on our web sites. We also provide listings and links to software tools and data sets shared, and statistics reflecting their usage. Finally, we report on web site usage, personnel, and financial information, to provide the public a better idea of what CAIDA is and does.

For the full 2016 annual report, see http://www.caida.org/home/about/annualreports/2016/

Adding geographic annotations to ISP interconnects

Tuesday, September 20th, 2016 by Bradley Huffaker
Geographic annotations on AS links.

The Internet arises from the interconnection of thousands of independently operated networks. Its structure is often modeled as a graph in which the nodes are Autonomous Systems (ASes) and the links are the interconnects across which they exchange traffic. These models are reductive by nature: a large international organization made up of thousands of machines and cables is reduced to a single node, and multiple exchange points are reduced to a single link.

We extended this model with the introduction of geographic locations attached to links between ISPs, represented by ASes. This extension maintains the simple node and link structure of the AS graph, and allows us to capture some of the geographic complexity in the topology.

AS graph with geographic locations.

Consider the path from UCSD to U.Washington depicted in the illustration above. Level 3 has two possible paths: Level 3 ➡ Cogent ➡ U.Wash and Level 3 ➡ NTT ➡ U.Wash. Both paths have the same AS path length. Assuming Level 3 uses hot-potato routing, it hands traffic off to another provider as early as possible in order to spend as little as possible carrying it. In this example, NTT’s Los Angeles interconnection is closer to San Diego than Cogent’s Las Vegas interconnection, so Level 3 routes the traffic through NTT.

AS path with geographically annotated links.
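
As a minimal sketch of how geographically annotated links can drive such a hot-potato decision, the snippet below encodes each interconnect as an (AS, AS) pair annotated with the coordinates of the interconnection point, and picks the next hop whose exit is closest to the traffic source. The AS names, coordinates, and distance function are illustrative assumptions, not CAIDA’s data or code.

```python
import math

# Geo-annotated AS links: (local AS, next-hop AS) -> (lat, lon) of the
# interconnection point. Coordinates are approximate and illustrative.
geo_links = {
    ("Level3", "NTT"):    (34.05, -118.24),   # Los Angeles
    ("Level3", "Cogent"): (36.17, -115.14),   # Las Vegas
}

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def hot_potato_next_hop(local_as, source_location, candidates):
    """Among candidate next-hop ASes reachable over equal-length AS paths,
    pick the one whose interconnection with local_as is closest to where the
    traffic enters, i.e., hand the traffic off as early as possible."""
    return min(candidates,
               key=lambda nh: distance_km(source_location, geo_links[(local_as, nh)]))

san_diego = (32.72, -117.16)  # approximate location of UCSD
print(hot_potato_next_hop("Level3", san_diego, ["NTT", "Cogent"]))  # -> NTT
```

Run on a San Diego source, the sketch prefers the Los Angeles interconnect with NTT over the Las Vegas interconnect with Cogent, matching the example above.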

In addition to supporting research on path prediction, this type of geographic annotation of links can provide a more realistic indication of the network’s resilience to link failure. In the figure below, duplicate links between ASes reflect multiple interconnects between those ASes. For example, the figure implies that a single link failure would disconnect UCSD from Level 3, while three links would have to fail for Level 3 and NTT to become disconnected.

Multiple links between ASes that connect in multiple locations.
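
A similarly hedged sketch of the resilience point: counting the geographically distinct interconnects per AS pair (illustrative data, not CAIDA’s) immediately exposes pairs whose connectivity depends on a single link.

```python
from collections import Counter

# Each entry is one interconnect between a pair of ASes, annotated with the
# city in which it is located (illustrative data only).
interconnects = [
    ("UCSD", "Level3", "San Diego"),
    ("Level3", "NTT", "Los Angeles"),
    ("Level3", "NTT", "San Jose"),
    ("Level3", "NTT", "Seattle"),
    ("NTT", "U.Wash", "Seattle"),
]

# Number of distinct interconnects per AS pair: a count of 1 means a single
# link failure disconnects the pair, at least in this annotated view.
link_count = Counter((a, b) for a, b, _ in interconnects)
for (a, b), n in link_count.items():
    note = "single point of failure" if n == 1 else f"{n} links must fail"
    print(f"{a} -- {b}: {note}")
```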

Details on our geographic link annotation methods, along with the data itself, are available on CAIDA’s AS Relationships with geographic annotations page.

AIMS 2016 workshop report

Monday, August 1st, 2016 by kc

The final report for our 8th Workshop on Active Internet Measurements (AIMS-8) is available for viewing. The abstract:

(more…)

CAIDA’s 2015 Annual Report

Tuesday, July 19th, 2016 by kc

[Executive summary and link below]

The CAIDA annual report summarizes CAIDA’s activities for 2015, in the areas of research, infrastructure, data collection, and analysis. Our research projects span Internet topology, routing, security, economics, future Internet architectures, and policy. Our infrastructure, software development, and data sharing activities support measurement-based Internet research, both at CAIDA and around the world, with a focus on the health and integrity of the global Internet ecosystem. The executive summary is excerpted below:

Mapping the Internet. We continued to pursue Internet cartography, improving our IPv4 and IPv6 topology mapping capabilities using our expanding and extensible Ark measurement infrastructure. We improved the accuracy and sophistication of our topology annotation capabilities, including classification of ISPs and their business relationships. Using our evolving IP address alias resolution measurement system, we collected, curated, and released another Internet Topology Data Kit (ITDK).

Mapping Interconnection Connectivity and Congestion.
We used the Ark infrastructure to support an ambitious collaboration with MIT to map the rich mesh of interconnection in the Internet, with a focus on congestion induced by evolving peering and traffic management practices of CDNs and access ISPs, including methods to detect and localize the congestion to specific points in networks. We undertook several studies to pursue different dimensions of this challenge: identification of interconnection borders from comprehensive measurements of the global Internet topology; identification of the actual physical location (facility) of an interconnection in specific circumstances; and mapping observed evidence of congestion at points of interconnection. We continued producing other related data collection and analysis to enable evaluation of these measurements in the larger context of the evolving ecosystem: quantifying a given ISP’s global routing footprint; classification of autonomous systems (ASes) according to business type; and mapping ASes to their owning organizations. In parallel, we examined the peering ecosystem from an economic perspective, exploring fundamental weaknesses and systemic problems of the currently deployed economic framework of Internet interconnection that will continue to cause peering disputes between ASes.

Monitoring Global Internet Security and Stability. We conducted other global monitoring projects, which focus on security and stability aspects of the global Internet: traffic interception events (hijacks), macroscopic outages, and network filtering of spoofed packets. Each of these projects leverages the existing Ark infrastructure, but each has also required the development of new measurement, data aggregation, and analysis tools and infrastructure, now at various stages of development. We were tremendously excited to finally finish and release BGPstream, a software framework for processing large amounts of historical and live BGP measurement data. BGPstream serves as one of several data analysis components of our outage-detection monitoring infrastructure, a prototype of which was operating at the end of the year. We published four other papers that leverage the results of Internet scanning and other unsolicited traffic to infer macroscopic properties of the Internet.
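
As a flavor of the kind of analysis BGPstream enables, the sketch below uses its Python bindings (pybgpstream) to count prefix announcements per origin AS over a short window from one public collector. The collector name, time window, and simple origin extraction are illustrative choices; the snippet follows the bindings’ documented interface rather than any CAIDA-internal pipeline.

```python
import pybgpstream

# Ten minutes of BGP updates from one RouteViews collector (values illustrative).
stream = pybgpstream.BGPStream(
    from_time="2015-12-01 00:00:00",
    until_time="2015-12-01 00:10:00",
    collectors=["route-views.sg"],
    record_type="updates",
)

# Count announcements per origin AS (last hop of the AS path).
origins = {}
for elem in stream:
    if elem.type == "A":  # "A" marks a prefix announcement
        path = elem.fields["as-path"].split()
        if path:
            origins[path[-1]] = origins.get(path[-1], 0) + 1

for origin, count in sorted(origins.items(), key=lambda kv: -kv[1])[:10]:
    print(origin, count)
```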

Future Internet Architectures. The current TCP/IP architecture is showing its age, and the slow uptake of its ostensible upgrade, IPv6, has inspired NSF and other research funding agencies around the world to invest in research on entirely new Internet architectures. We continue to help launch this moonshot from several angles — routing, security, testbed, management — while also pursuing and publishing results of six empirical studies of IPv6 deployment and evolution.

Public Policy. Our final research thrust is public policy, an area that expanded in 2015, due to requests from policymakers for empirical research results or guidance to inform industry tussles and telecommunication policies. Most notably, the FCC and AT&T selected CAIDA to be the Independent Measurement Expert in the context of the AT&T/DirecTV merger, which turned out to be as much of a challenge as it was an honor. We also published three position papers each aimed at optimizing different public policy outcomes in the face of a rapidly evolving information and communication technology landscape. We contributed to the development of frameworks for ethical assessment of Internet measurement research methods.

Our infrastructure operations activities also grew this year. We continued to operate active and passive measurement infrastructure with visibility into global Internet behavior, and associated software tools that facilitate network research and security vulnerability analysis. In addition to BGPstream, we expanded our infrastructure activities to include a client-server system for measuring compliance with BCP38 (ingress filtering best practices) across government, research, and commercial networks, and for analyzing the resulting data in support of compliance efforts. Our 2014 efforts to expand data sharing by making older topology and some traffic data sets public have dramatically increased use of our data, as reflected in our data sharing statistics. In addition, we were happy to help launch DHS’ new IMPACT data sharing initiative toward the end of the year.

Finally, as always, we engaged in a variety of tool development and outreach activities, including maintaining web sites, publishing 27 peer-reviewed papers, 3 technical reports, 3 workshop reports, 33 presentations, 14 blog entries, and hosting 5 workshops. This report summarizes the status of our activities; details about our research are available in papers, presentations, and interactive resources on our web sites. We also provide listings and links to software tools and data sets shared, and statistics reflecting their usage. In addition, we offer a “CAIDA in numbers” section: statistics on our performance, financial reporting, and supporting resources, including visiting scholars and students, and all funding sources.

For the full 2015 annual report, see http://www.caida.org/home/about/annualreports/2015/

NDN Next Phase Annual Report (2015-2016)

Thursday, June 30th, 2016 by kc

The Named Data Networking project recently published the NDN-NP annual report covering activities from May 2015 through April 2016.

V. Jacobson, J. Burke, L. Zhang, T. Abdelzaher, B. Zhang, k. claffy, P. Crowley, J. Halderman, C. Papadopoulos, and L. Wang, “Named Data Networking Next Phase (NDN-NP) Project May 2015 – April 2016 Annual Report”, Tech. rep., Named Data Networking (NDN), Jun 2016.

This report summarizes our accomplishments during the second year of the Named Data Networking Next Phase (NDN-NP) project (the 5th year of the overall project). This phase of the project focuses on deploying and evaluating the NDN architecture in four environments: building automation management systems, mobile health, multimedia real-time conferencing tools, and scientific data applications. Implementation and testing of pilot applications in these network environments further demonstrated our research progress in namespace design, trust management, and encryption-based access control. Highlights from this year include:

  1. Continued evolution of the NDN Forwarding Daemon (NFD), to support application-driven experimentation with new NDN protocol features.
  2. Development of an Android version of NFD to promote NDN experimentation on mobile platforms.
  3. Implementation of a new transport protocol (InfoMax) that can intelligently filter streams of information in order to reduce transmitted data volume, while minimizing loss of information.
  4. A growing portfolio of supporting software libraries, including new APIs, transport mechanisms (Sync, information maximization), and security functionality, that leverage inherent capabilities of NDN, e.g., schematized trust and name-based access control.
  5. Demonstration of an extremely scalable forwarding implementation using a billion synthetic names.
  6. Implementation and evaluation of hyperbolic routing performance to understand its feasibility in supporting NDN’s interdomain routing.
  7. Multi-faceted evaluation of the architecture, from instrumentation of applications on the testbed to uses of ndnSIM and the Mini-NDN emulator environment.
  8. Continued use of NDN in the four courses taught by principal investigators.
  9. The second annual NDN Community Meeting, hosted by the NDN Consortium to promote a vibrant open source ecosystem of research and experimentation around NDN.

The NDN team has made tremendous progress in the last five years, and a larger community of information-centric networking research has evolved in parallel. Our progress revealed the importance of demonstrating NDN capabilities in IoT and big data environments, and highlighted the need for accessible software platform support and emulation capabilities to facilitate R&D on both the NDN architecture and applications that leverage it. We have received a year of supplemental funding to complete four tasks: 1) completing and disseminating native NDN applications and associated design patterns, 2) demonstrating NDN scalability, 3) documenting and releasing reference implementations, and 4) documenting NDN design decisions and lessons learned.

Report from the 2nd NDN Community Meeting (NDNcomm 2015)

Tuesday, November 10th, 2015 by kc

The report for the Second NDN Community Meeting (NDNcomm 2015) is available online now. The meeting, held at UCLA in Los Angeles, California on September 28-29, 2015, provided a platform for attendees from 63 institutions across 13 countries to exchange recent NDN research and development results, to debate existing and proposed functionality in NDN forwarding, routing, and security, and to provide feedback on the evolving NDN architecture design.

[The workshop was partially supported by National Science Foundation grants CNS-1345286, CNS-1345318, and CNS-1457074. We thank the NDNcomm Program Committee members for their effort in putting together an excellent program. We thank all participants for their insights and feedback at the workshop.]

Recent papers on policy

Wednesday, October 21st, 2015 by kc

We recently posted two papers on policy that are worth highlighting:

Anchoring policy development around stable points: an approach to regulating the co-evolving ICT ecosystem, published in Telecommunications Policy, Aug 2015.

Abstract:

The daunting pace of innovation in the information and communications technology (ICT) landscape, a landscape of technology and business structure, is a well-known but under-appreciated reality. In contrast, the rate of policy and regulatory innovation is much slower, partly due to its inherently more deliberative character. We describe this disparity in terms of the natural rates of change in different parts of the ecosystem, and examine why it has impeded attempts to impose effective regulation on the telecommunications industry. We explain why a recent movement to reduce this disparity by increasing the pace of regulation – adaptive regulation – faces five obstacles that may hinder its feasibility in the ICT ecosystem. As a means to achieve more sustainable regulatory frameworks for ICT industries, we introduce an approach based on finding stable points in the system architecture. We explore the origin and role of these stable points in a rapidly evolving system, and argue that they can provide a means to support development of policies, including adaptive regulation approaches, that are more likely to survive the rapid pace of evolution in technology.

Full paper available on the CAIDA website.
Accompanying slides are also available.

Adding Enhanced Services to the Internet: Lessons from History
Presented at the Telecommunications Policy Research Conference (TPRC), Sep 2015.

Abstract:

We revisit the last 35 years of history related to the design and specification of Quality of Service (QoS) on the Internet, in hopes of offering some clarity to the current debates around service differentiation. We describe the continual failure to get QoS capabilities deployed on the public Internet, including the technical challenges of the 1980s and 1990s, the market-oriented (business) challenges of the 1990s and 2000s, and recent regulatory challenges. Our historical perspective draws on, among other things, our own work from the 1990s that offered proposals for supporting enhanced services using the Internet Protocol (IP) suite, and our attempts to engage both industry and policymakers in understanding the dynamics of the Internet ecosystem. In short, the engineering community successfully developed protocols and mechanisms to implement enhanced services (QoS), and a few individual service providers have deployed them internally or in trusted two-party scenarios. The long-standing failure has been to deploy this capability across the public Internet.

We reflect on lessons learned from the history of this failure, the resulting tensions and risks, and their implications for the future of Internet infrastructure regulation. First, the continued failure of QoS over the last three decades derives from political and economic (business) obstacles as well as technical obstacles. The competitive nature of the industry, and a long history of anti-trust regulation (at least in the U.S.) conflicts with the need for competing providers to agree on protocols that require sharing operational data with each other to parameterize and verify committed service qualities. Second, QoS technology can yield benefits as well as harms, so policymaking should focus on harms rather than mechanisms. To assure the benefit to consumers, regulators may need to require transparency about the state of congestion and provisioning on networks using such mechanisms. Third, using QoE as the basis for any regulation will require research, tools and capabilities to measure, quantify, and characterize QoE, and developing metrics of service quality that better reflect our understanding of QoS and QoE for a range of applications. Finally, profound shifts in interconnection arrangements suggest a reshaping of the debate over QoS on the public Internet. Some access networks are interconnecting their private IP-based network platforms to support enhanced services, and using this interconnected platform to vertically integrate infrastructure and applications. Access networks are also connecting directly to large content providers to minimize the risk of performance impairments. These changes trigger new regulatory concerns over the fate of the public Internet, including capital investment incentives and gaps across different bodies of law.

Barriers to the deployment of scalable interprovider QoS may be insurmountable, but since any Internet of the future will face them, it is worth developing a systematic understanding of the challenge of enhanced services, and documenting successes and failures over the history of the Internet as carefully as possible.

Full paper available on the CAIDA website.