Archive for the 'Commentaries' Category

CAIDA’s 2021 Annual Report

Monday, May 30th, 2022 by kc

The CAIDA annual report summarizes CAIDA’s activities for 2021 in the areas of research, infrastructure, data collection and analysis. Our research projects span Internet cartography and performance; security, stability, and resilience studies; economics; and policy. Our infrastructure, software development, and data sharing activities support measurement-based Internet research, both at CAIDA and around the world, with focus on the health and integrity of the global Internet ecosystem.
The executive summary is excerpted below:
(more…)

IRR Hygiene in the RPKI Era

Friday, April 1st, 2022 by Ben Du

The Border Gateway Protocol (BGP) is the protocol that networks use to exchange (announce) routing information across the Internet. Unfortunately, BGP has no mechanism to prevent the propagation of false announcements such as hijacks and misconfigurations. The Internet Routing Registry (IRR) and the Resource Public Key Infrastructure (RPKI) emerged as different solutions to improve BGP routing security and operation by allowing networks to register routing information and develop route filters based on information other networks have registered.

The IRR was first introduced in 1995 and has remained a popular tool for BGP route filtering. However, route origin information in the IRR suffers from inaccuracies because registrants lack incentives to keep their information up to date and because different IRR database providers use non-standardized validation procedures.

Over the past few years, the Resource Public Key Infrastructure (RPKI), a system providing cryptographically attested route origin information, has seen steady growth in its deployment and has become widely used for Route Origin Validation (ROV) among large networks.

Some networks are unable to adopt RPKI filtering for technical or administrative reasons and continue to rely only on existing IRR-based route filtering. Such networks may not be able to construct correct routing filters due to IRR inaccuracies, thus compromising routing security.

In our paper IRR Hygiene in the RPKI Era, we at CAIDA (UC San Diego), in collaboration with MIT, study the scale of inaccurate IRR information by quantifying the inconsistency between IRR and RPKI. In this post, we will succinctly explain how we compare records and then focus on the causes of such inconsistencies and provide insights on what operators could do to keep their IRR records accurate.

IRR and RPKI trends

For our study we downloaded IRR data from four IRR database providers: RADB, RIPE, APNIC, and AFRINIC, and RPKI data from all Trust Anchors published by the RIPE NCC. Figure 1 shows that the IRR covers more IPv4 address space than the RPKI, but the RPKI has grown faster, doubling its coverage over the past six years.

Figure 1. IPv4 coverage of the IRR and RPKI databases. RADB, the largest IRR database, has records representing almost 60% of routable IPv4 address space. The RPKI covers almost 30% of that address space but has been growing steadily in the past few years.

Checking the consistency of IRR records

We classified IRR records following the procedure in Figure 2: first we check whether a Route Origin Authorization (ROA) record in the RPKI covers the IRR record; if one does, we check whether the origin ASN is consistent; and if the ASN is consistent, we compare the IRR prefix length against the maximum length attribute of the matching RPKI records. This procedure yields four categories (a minimal code sketch follows Figure 2):

  1. Not In RPKI: the prefix in the IRR record has no matching or covering prefix in the RPKI.
  2. Inconsistent ASN: the IRR record has matching RPKI records, but none has the same ASN.
  3. Inconsistent Max Length: the IRR record has the same ASN as its matching RPKI records, but the prefix length in the IRR record exceeds the Max Length field in the RPKI records.
  4. Consistent: the IRR record has a matching RPKI record with the same ASN and a Max Length that accommodates the IRR prefix length.

Figure 2. Classification of IRR records
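
For concreteness, the following minimal Python sketch implements this classification. The ROA representation (dicts with prefix, asn, and max_length fields) is a hypothetical stand-in for validated RPKI data, not the format our pipeline actually used.

    import ipaddress

    def classify_irr_record(irr_prefix: str, irr_asn: int, roas: list) -> str:
        """Classify one IRR route object against a list of RPKI ROAs.

        Each ROA is a dict with 'prefix', 'asn', and 'max_length' keys
        (a hypothetical representation for this sketch).
        """
        prefix = ipaddress.ip_network(irr_prefix)
        # Step 1: find ROAs whose prefix matches or covers the IRR prefix.
        covering = [r for r in roas
                    if prefix.subnet_of(ipaddress.ip_network(r["prefix"]))]
        if not covering:
            return "Not In RPKI"
        # Step 2: among covering ROAs, keep those with the same origin ASN.
        same_asn = [r for r in covering if r["asn"] == irr_asn]
        if not same_asn:
            return "Inconsistent ASN"
        # Step 3: the IRR prefix must not be more specific than the ROA
        # max length allows.
        if all(prefix.prefixlen > r["max_length"] for r in same_asn):
            return "Inconsistent Max Length"
        return "Consistent"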

Which is more consistent? RADB vs RIPE, APNIC, AFRINIC

As of October 2021, we found that only 38% of RADB records with matching ROAs were consistent with the RPKI, meaning there were more inconsistent than consistent records in RADB (Figure 3, left). In contrast, 73%, 98%, and 93% of RIPE, APNIC, and AFRINIC IRR records, respectively, were consistent with the RPKI, a much higher consistency than RADB (Figure 3, right).

We attribute this large difference in consistency to a few reasons. First, the IRR databases we collected from the RIRs are their respective authoritative databases, meaning each RIR manages all the prefixes in its database and verifies the registration of IRR objects against address ownership information. This verification process is stricter than RADB’s and leads to higher-quality IRR records. Second, APNIC provides its registrants a management platform that automatically creates IRR records for a network when it registers its prefixes in the RPKI. This platform contributes to a larger share of consistent records compared to the other RIRs.

Figure 3. RIR-managed IRR databases have higher consistency with RPKI compared to RADB.

What caused the inconsistency?

In our analysis we found that inconsistent max length was mostly caused by IRR records that are too specific, as in the example shown in Figure 4, and to a lesser extent by misconfigured max length attributes in RPKI records. We also found that inconsistent ASN records are largely caused by customer networks failing to remove records after returning address space to their provider network, as in the example in Figure 5.

Inconsistent Max Length (Figure 4)

  • 713 caused by misconfigured RPKI Max Length.
  • 39,968 caused by too-specific IRR records.

Figure 4. IRR record with inconsistent max length: the IRR prefix length exceeds the RPKI max length value.

Inconsistent ASN (Figure 5)

  • 4,464 caused by customer networks failing to remove RADB records after returning address space to their provider network.

Figure 5. IRR record with inconsistent ASN: the IRR record ASN differs from the RPKI record ASN.
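
To make the two failure modes concrete, here is a toy example using the classifier sketched after Figure 2, with documentation prefixes and ASNs (192.0.2.0/24, AS64496, AS64511) standing in for real records:

    roas = [{"prefix": "192.0.2.0/24", "asn": 64496, "max_length": 24}]

    # Figure 4 pattern: the IRR route object is a /28, more specific than
    # the covering ROA's max length of /24.
    print(classify_irr_record("192.0.2.0/28", 64496, roas))  # Inconsistent Max Length

    # Figure 5 pattern: a stale record still names another network's ASN.
    print(classify_irr_record("192.0.2.0/24", 64511, roas))  # Inconsistent ASN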

Improving IRR accuracy

Although RPKI is becoming more widely deployed, we do not see a decrease in IRR usage, and therefore we should improve the accuracy of information in the IRR. We suggest that networks keep their IRR information up to date and IRR database providers implement policies that promote good IRR hygiene.

Networks currently using IRR for route filtering can avoid the negative impact of inaccurate IRR information by using IRRd version 4, which validates IRR information against RPKI, to ignore incorrect IRR records.
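
As a rough illustration of that behavior, the sketch below (in the spirit of RPKI-aware filtering, not IRRd’s actual implementation) reuses the classifier from the Figure 2 sketch to drop only the records the RPKI actively contradicts, while keeping records the RPKI says nothing about:

    def filter_irr(records, roas):
        """Keep IRR (prefix, asn) pairs that are Consistent or not covered
        by any ROA; drop records that conflict with the RPKI."""
        return [(prefix, asn) for prefix, asn in records
                if classify_irr_record(prefix, asn, roas)
                in ("Consistent", "Not In RPKI")]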

Response to NSTC’s JCORE

Wednesday, May 19th, 2021 by David Clark and kc claffy

In January 2020, kc claffy, CAIDA Director and UCSD Adjunct Professor of Computer Science and Engineering, responded with collaborator David Clark, Senior Scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, to the Request for Information (RFI) put out by the National Science and Technology Council’s (NSTC) Joint Committee on the Research Environment (JCORE). The response establishes the critical importance of the Internet to the infrastructure of society and the need for governments, specifically the U.S. government, to send a strong signal to the private sector through high-level policymaking: the only path to understanding the characteristics of the Internet is data sharing, and responsible sharing of data for documented scientific research will not generate corporate liability.

Another benefit of the policy we call for is the development and delivery of academic training for professionals to work with large data sets focused on communications and networking.

You can see the complete response and more related material posted in the CAIDA resource catalog.

Guiding principles for a “Bureau of Cyber Statistics”

Saturday, April 24th, 2021 by David Clark and kc claffy

The recent Cyberspace Solarium Commission report (1) set out a strategic plan to improve the security of cyberspace. Among its many recommendations is that the government establish a Bureau of Cyber Statistics, to provide the government with the information that it needs for informed planning and action. A recent report from the Aspen Institute echoed this call. (2) Legal academics and lobbyists have already started to consider its structure. (3) The Internet measurement community needs to join this conversation.

The Solarium report proposed some specific characteristics: it recommends a bureau located in the Department of Commerce, funded and authorized to gather necessary data. The report also says that “the center should be funded and equipped to host academics as well as private sector and independent security researchers as a part of extended exchanges”. We appreciate that the report acknowledges the value of academic researchers, and that this objective requires careful thought to achieve. The report specifically mentions “purchasing private or proprietary data repositories”. Will “extended exchanges” act as the only pattern of access, where an academic would work under a Non-Disclosure Agreement (NDA), unable to publish results that relied on proprietary data? Would this allow graduate students to participate, i.e., how would they publish a thesis? The proposal does not indicate a deep understanding of how academic research works. As an illustrative example, CAIDA/UCSD and MIT were hired by AT&T as “independent measurement experts” to propose and oversee methods for AT&T to satisfy FCC reporting requirements imposed as a merger condition. (4) An NDA covered all the data we received, and we were not able to publish any details about what we learned. This sort of work does not qualify as academic research. It is consulting.

In our view, the bureau must be organized in such a way that academics are able and incentivized to utilize the resources of the bureau for research on questions that motivate the creation of the bureau in the first place. But this requires that when the U.S. government establishes the bureau, it makes apparent the value of academic participation and the modes of operation that will allow it.

These reports focus on cybersecurity, and indeed, security is the most prominent national challenge of the Internet. But the government needs to understand many other issues related to the character of the Internet ecosystem, many of which are inextricably related to security. We cannot secure what we do not understand, and we cannot understand what we do not measure. Measurement of the Internet raises epistemological challenges that span many disciplines, from network engineering and computer science to economics, sociology, ethics, law, and public policy. The following guiding principles can help accommodate these challenges, and the sometimes conflicting incentives across academic, government, commercial, and civic stakeholders.

  1. Incentivize academic participation. A national infrastructure must be organized in such a way that academics are able and incentivized to utilize its resources. This requires designing and implementing modes of operation that will incentivize independent researcher participation.
  2. Demonstrate innovation and value through real projects that address national-scale problems with data-intensive science and engineering research. To justify substantial U.S. government investment in cyberinfrastructure, the research community must demonstrate its value as an independent voice with important results that help to inform the future of the Internet. This demonstration will not be effective if it is hypothetical. Real projects are tricky, because the data does not necessarily exist yet, and if it does, may be proprietary. So researchers must overcome the chicken-and-egg problem of how to demonstrate the value of an independent research community before the Bureau exists.
  3. Start with public data and shared community infrastructure. The starting point must be to work with public data, and translate research results into forms that are meaningful to a constituency broader than the research community. But this path reveals more specific barriers: Who would fund such research? What are the incentives of the academic research community to undertake it? Yet if we do not recognize and overcome this challenge, the independent research community may essentially be written out of the story, as more and more data is proprietary and hidden away.
  4. Make specific and concrete calls for data of national importance. In our view, the community needs a focal point for discussion about collection and use of data, presenting an opportunity and responsibility to transform abstract calls for access to data into more specific and concrete articulations.
  5. Prioritize framework for research access to proprietary data. Sharing of proprietary data must address the reasons that data is considered proprietary. Understanding these reasons is required to design approaches to allow reasonable access for research purposes.
  6. Integrate focus on and metrics to evaluate workforce training efforts. The other risk of continuing on the current path, rather than confronting the data access problem, is the lost opportunity to train students to interpret complex operational data about Internet infrastructure, which is crucial to developing a globally competitive U.S. cybersecurity workforce capable of securing Internet infrastructure.

Other parts of the globe have moved to regularize cybersecurity data, and they have explicitly recognized the importance of engaging and sustaining the academic research establishment in developing cybersecurity tools to secure network infrastructure. (5) If the U.S. does not take coherent steps to support its research community, it risks being sidelined in shaping the future of the Internet. The European Union’s proposed regulation for Digital Services (6) also discusses the importance of ensuring access to proprietary data by the academic research community:

Investigations by researchers on the evolution and severity of online systemic risks are particularly important for bridging information asymmetries and establishing a resilient system of risk mitigation, informing online platforms, Digital Services Coordinators, other competent authorities, the Commission and the public. This Regulation therefore provides a framework for compelling access to data from very large online platforms to vetted researchers.

They clarify what they mean by “vetted researchers”:

In order to be vetted, researchers shall be affiliated with academic institutions, be independent from commercial interests, have proven records of expertise in the fields related to the risks investigated or related research methodologies, and shall commit and be in a capacity to preserve the specific data security and confidentiality requirements corresponding to each request.

This regulation emphasizes a structure that allows the academic community to work with proprietary data, sending an important signal that they intend to make their academic research establishment a recognized part of shaping the future of the Internet in the EU. The U.S. needs to take a similar proactive stance.

References

  1. Cyberspace Solarium Commission report
  2. The Aspen Institute: A National Cybersecurity Agenda for Digital Infrastructure
  3. Lawfare: Considerations for the Structure of the Bureau of Cyber Statistics
  4. CAIDA: First Amended Report of AT&T Independent Measurement Expert: Reporting requirements and measurement methods
    CAIDA: Report of AT&T Independent Measurement Expert Background and supporting arguments for measurement and reporting requirements
  5. Directive of the European Parliament and of the Council on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148
  6. Regulation of the European Parliament and of the Council on a Single Market For Digital Services

Unintended consequences of submarine cable deployment on Internet routing

Tuesday, December 15th, 2020 by Roderick Fanou

Figure 1: This picture shows a line of floating buoys that designate the path of the long-awaited SACS (South-Atlantic Cable System). This submarine cable now connects Angola to Brazil (Source: G Massala, https://www.menosfios.com/en/finally-cable-submarine-sacs-arrived-to-brazil/, Feb 2018.)

The network layer of the Internet routes packets regardless of the underlying communication media (Wi-Fi, cellular telephony, satellites, or optical fiber). The underlying physical infrastructure of the Internet includes a mesh of submarine cables, generally shared by network operators who purchase capacity from the cable owners [2,11]. As of late 2020, over 400 submarine cables interconnect continents worldwide and constitute the oceanic backbone of the Internet. Although they carry more than 99% of international traffic, little academic research has isolated the end-to-end performance changes induced by their launch.

In mid-September 2018, Angola Cables (AC, AS37468) activated the SACS cable, the first trans-Atlantic cable traversing the Southern Hemisphere [1][A1]. SACS connects Angola in Africa to Brazil in South America. Most people assume that the deployment of undersea cables between continents improves Internet performance between the two continents. In our paper, “Unintended consequences: Effects of submarine cable deployment on Internet routing”, we shed empirical light on this hypothesis by investigating the operational impact of SACS on Internet routing. We presented our results at the Passive and Active Measurement Conference (PAM) 2020, where the work received the best paper award [11,7,8]. Below we summarize the contributions of our study, including our methodology, data collection, and key findings.

[A1] Note that in the same year, Camtel (CM, AS15964), the incumbent operator of Cameroon, and China Unicom (CH, AS9800) deployed the 5,900 km South Atlantic Inter Link (SAIL), which links Fortaleza (Brazil) to Kribi (Cameroon) [17], but this cable was not yet lit as of March 2020.

(more…)

CAIDA Resource Catalog

Tuesday, October 27th, 2020 by Bradley Huffaker

One of CAIDA’s primary missions has been to improve our understanding of the Internet infrastructure, through data-driven science. To this end, CAIDA has collected and maintains one of the largest collections of Internet-related data sets in the world, and developed tools and services to curate and process that data. Along with this success has come the challenge of helping new students and researchers to find and use that rich archive of resources.

As part of our NSF-funded DIBBs project, CAIDA has developed a rich-context resource catalog, served at catalog.caida.org. The goal of the catalog is to help both newcomers and experienced users with data discovery and to reduce the time between finding the data and extracting knowledge and insights from it.

In addition to linking datasets to related papers and presentations, the catalog will also link to code snippets, user-provided notes, and recipes for performing common analytical tasks with the data.

The catalog can be found at: https://catalog.caida.org

Please explore and provide feedback!

Spoofer Surpasses One Million Sessions and Publishes Final Report

Tuesday, October 13th, 2020 by Josh Polterock

On October 10, 2020, the Spoofer system logged its 1,000,000th measurement session. Finishing its 7th year under CAIDA stewardship, the project recently published its final report documenting the improvements to the software and hardware infrastructure made possible by support from the two-year DHS award “ASPIRE – Augment Spoofer Project to Improve Remediation Efforts”, co-led by Matthew Luckie of the University of Waikato’s Faculty of Computing and Mathematical Sciences. The report describes (1) updates to the open source client-server source address validation (SAV) testing system (developed under DHS S&T contract D15PC00188) to expand visibility of networks behind Network Address Translation devices (NATs); (2) expanded notifications and reporting through our operator-focused private reporting engine and public regionally-focused notifications to operational mailing lists; and (3) several publications documenting analysis of the effectiveness of different approaches to stimulating remediation activities [1, 2, 3]. These tasks achieved testing and evaluation of work developed under the previous contract, and analysis of options for technology transition to a broader cross-section of security research, operations, risk management, and public policy stakeholders. The resulting technologies and data improved the U.S. government’s ability to identify, monitor, and mitigate the infrastructure vulnerability that serves as the primary vector of massive DDoS attacks on the Internet.

Excerpted from the ASPIRE final report:

Of the 587 remediation events we inferred between May 2016 and August 2019, 25.2% occurred in the U.S., and 23.5% occurred in Brazil. Figure 8 shows that nearly 90% of the remediation events in Brazil occurred after we began sending monthly emails to the Brazilian operator email list (GTER). We calculate the remediation rate by dividing the number of ASes for which we inferred a remediation event by the total number of ASes that sent a spoofed packet during the same interval. For the year prior to commencing the GTER emails to Brazilian network operators, 14 of 67 ASes (21%) remediated; in the year after, 52 of 168 ASes (31%) remediated. This improvement is supported by NIC.br’s “Program for a Safer Internet” [37], which offers training courses and lectures to support network operators in deploying security best practices in Brazil. The rate of remediation in the U.S. is lower: prior to sending the NANOG emails to U.S. network operators, 21 of 132 ASes (16%) remediated; in the year after, 35 of 147 ASes (24%) remediated. While the rate of remediation is lower in the U.S. than in Brazil, the relative improvement in both is equivalent, ≈50%. Note that remediation in Brazil has slowed since the outbreak of Covid-19 there.

Figure 8: Remediation in the U.S. and Brazil.
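
The remediation-rate arithmetic quoted above is easy to reproduce; this small sketch recomputes the before/after rates and the roughly 50% relative improvement in both countries:

    def remediation_rate(remediated_ases: int, spoofing_ases: int) -> float:
        # Fraction of ASes that sent spoofed packets and later remediated.
        return remediated_ases / spoofing_ases

    brazil_before = remediation_rate(14, 67)   # ~21%
    brazil_after = remediation_rate(52, 168)   # ~31%
    us_before = remediation_rate(21, 132)      # ~16%
    us_after = remediation_rate(35, 147)       # ~24%

    # Relative improvement is roughly 50% in both countries.
    print(f"Brazil: +{brazil_after / brazil_before - 1:.0%}")  # +48%
    print(f"U.S.:   +{us_after / us_before - 1:.0%}")          # +50%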

We hope you will take the time to read the full final report, download the client software, and test your network to help us better understand the state of IP spoofing.

References:

1. M. Luckie, R. Beverly, R. Koga, K. Keys, J. Kroll, and k. claffy, “Network Hygiene, Incentives, and Regulation: Deployment of Source Address Validation in the Internet”, in ACM Computer and Communications Security (CCS), Nov 2019.

2. L. Müller, M. Luckie, B. Huffaker, k. claffy, and M. Barcellos, “Challenges in Inferring Spoofed Traffic at IXPs”, in ACM SIGCOMM Conference on emerging Networking EXperiments and Technologies (CoNEXT), Dec 2019.

3. L. Müller, M. Luckie, B. Huffaker, k. claffy, and M. Barcellos, “Spoofed traffic inference at IXPs: Challenges, methods and analysis”, Computer Networks, vol. 182, Aug 2020.

IPv4 History Visualization

Thursday, August 6th, 2020 by Nicole Lee

IPv4 (the fourth version of the Internet Protocol) is the governing standard of today’s Internet. As in any other network, unique identifiers (here, IP addresses) play an integral role in Internet routing. This visualization shows how the growing demand for those addresses transformed the governance model from a handful of scientists and engineers managing these addresses to the multi-stakeholder governance model we have today. We group IP address blocks based on the organization that regulates their allocation, as recorded in IANA’s IPv4 address space file and the relevant RFCs.

Please view the visualization at: https://www.caida.org/publications/visualizations/ipv4-history/

Screenshots of the visualization

This was created with the support of the National Science Foundation (NSF). For any questions or comments on this project, please contact info@caida.org.

CAIDA’s Annual Report for 2019

Monday, July 6th, 2020 by kc

The CAIDA annual report summarizes CAIDA’s activities for 2019 in the areas of research, infrastructure, data collection and analysis. Our research projects span Internet mapping, performance measurement, security, economics, and policy. Our infrastructure, software development, and data sharing activities support measurement-based Internet research, both at CAIDA and around the world, with focus on the health and integrity of the global Internet ecosystem. The executive summary is excerpted below:
(more…)

AS Rank v2.1 Released (RESTful/Historical/Cone)

Wednesday, May 13th, 2020 by Bradley Huffaker

Responding to feedback from our user community, CAIDA has released version 2.1 of the AS Rank API. This update reduces some of the complexity of the full-featured GraphQL interface by offering a simplified RESTful API.

AS Rank API version 2.1 adds support for historical queries as well as for AS Customer Cones, defined as the set of ASes an AS can reach using customer links. You can learn more about AS relationships, customer cones, and how CAIDA sources the data at https://asrank.caida.org/about.

You can find the documentation for AS Rank API version 2.1 at https://api.asrank.caida.org/v2/restful/docs.

You can find documentation detailing how to make use of historical data and customer cones at https://api.asrank.caida.org/v2/docs.
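
As an illustration, a minimal lookup against the RESTful interface might look like the sketch below; the /asns/{asn} path and response shape are assumptions based on the documentation linked above, so check the docs before relying on them.

    import json
    import urllib.request

    BASE = "https://api.asrank.caida.org/v2/restful"

    def asrank_lookup(asn: int) -> dict:
        # Fetch the AS Rank record for one ASN (path is an assumption;
        # see the RESTful docs linked above for the exact schema).
        with urllib.request.urlopen(f"{BASE}/asns/{asn}") as resp:
            return json.load(resp)

    # Example: print the raw record for a large transit provider.
    print(json.dumps(asrank_lookup(3356), indent=2))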

CAIDA Team