Archive for the 'Security' Category

Panel on Cyberwarfare and Cyberattacks at 9th Circuit Judicial Conference

Monday, July 20th, 2015 by kc

I had the honor of contributing to a panel on “Cyberwarfare and cyberattacks: protecting ourselves within existing limitations” at this year’s 9th Circuit Judicial Conference. The panel moderator was Hon. Thomas M. Hardiman, and the other panelists were Professor Peter Cowhey of UCSD’s School of Global Policy and Strategy and Professor and Lt. Col. Shane R. Reeves of the United States Military Academy at West Point. Lt. Col. Reeves gave a brief primer on the framework of the Law of Armed Conflict, distinguished an act of cyberwar from a cyberattack, and described the implications for political and legal constraints on governmental and private sector responses. Professor Cowhey followed with a perspective on how economic forces also constrain cybersecurity preparedness and response, drawing comparisons with other industries for which the cost of security technology is perceived to exceed its benefit by those who must invest in its deployment. I used a visualization of an Internet-wide cybersecurity event to illustrate the technical, economic, and legal dimensions of the ecosystem that render the fundamental vulnerabilities of today’s Internet infrastructure so persistent and pernicious.

A few people said I talked too fast for them to understand all the points I was trying to make, so I thought I should post the notes I used during my panel remarks. (My remarks borrowed heavily from two of Dan Geer’s essays: Cybersecurity and National Policy (2010) and his more recent Cybersecurity as Realpolitik (video), both of which I highly recommend.)

After explaining the basic concept of a botnet, I showed a video derived from CAIDA’s analysis of a botnet scanning the entire IPv4 address space (discovered and comprehensively analyzed by Alberto Dainotti and Alistair King). I then gave a (too) quick rundown of the technological, economic, and legal circumstances of the Internet ecosystem that facilitate the deployment of botnets and other threats to networked critical infrastructure.
(more…)

DHS S&T PREDICT PI Meeting, Marina del Rey, CA

Friday, June 6th, 2014 by Josh Polterock

On 28-29 May 2014, DHS Science and Technology Directorate (S&T) held a meeting of the Principal Investigators of the PREDICT (Protected Repository for the Defense of Infrastructure Against Cyber Threats) Project, an initiative to facilitate the accessibility of computer and network operational data for use in cybersecurity defensive R&D. The project is a three-way partnership among government, critical information infrastructure providers, and security development communities (both academic and commercial), all of whom seek technical solutions to protect the public and private information infrastructure. The primary goal of PREDICT is to bridge the gap between producers of security-relevant network operations data and technology developers and evaluators who can leverage this data to accelerate the design, production, and evaluation of next-generation cybersecurity solutions.

In addition to presenting project updates, each PI presented on a special topic suggested by Program Manager Doug Maughan. I presented some reflective thoughts on 10 Years Later: What Would I Have Done Differently? (Or What Would I Do Today?). In this presentation, I revisited my 2008 top-ten list of things lawyers should know about the Internet to frame some proposed forward-looking strategies for the PREDICT project in 2014.

Also noted at the meeting: DHS recently released a new broad agency announcement (BAA) that will contractually require investigators to contribute to PREDICT any data created or used in testing and evaluation of the funded work (provided the investigator has redistribution rights, and subject to appropriate disclosure control).

Carna botnet scans confirmed

Monday, May 13th, 2013 by Alistair King

On March 17, 2013, the authors of an anonymous email to the “Full Disclosure” mailing list announced that last year they had conducted a complete probing of the entire IPv4 Internet. They claimed they used a botnet (named the “Carna” botnet) created by infecting machines that were vulnerable because they used default login/password pairs (e.g., admin/admin). The botnet instructed each of these machines to execute a portion of the scan and then transfer the results to a central server. The authors also published a detailed description of how they operated, along with 9TB of raw logs of the scanning activity.
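To get a feel for the scale involved, here is a toy sketch of how a full IPv4 scan might be partitioned across many workers. This illustrates the arithmetic only; it is not the Carna botnet's actual distribution scheme.

```python
def partition_ipv4(num_workers):
    """Split the 2**32 IPv4 address space into contiguous,
    non-overlapping [start, end) integer ranges, one per worker."""
    total = 2 ** 32
    chunk = total // num_workers
    ranges = []
    start = 0
    for i in range(num_workers):
        # the last worker absorbs the remainder
        end = total if i == num_workers - 1 else start + chunk
        ranges.append((start, end))
        start = end
    return ranges

# e.g. spread the scan over 1000 hypothetical workers
ranges = partition_ipv4(1000)
```

Each worker would then probe the addresses in its range and report results back, which is the structure the email's authors describe.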

Online magazines and newspapers reported the news, which triggered some debate in the research community about the ethical implications of using such data for research purposes. A more fundamental question received less attention: since the authors went out of their way to remain anonymous, and the only data available about this event is the data they provide, how do we know this scan actually happened? If it did, how do we know that the resulting data is correct?
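One way to begin answering that question, sketched here as a hypothetical cross-check rather than any published validation method, is to compare the scanner addresses in the released logs against sources independently observed at a passive monitor (e.g., a darknet) during the claimed scan window:

```python
def corroboration_ratio(claimed_sources, observed_sources):
    """Fraction of claimed scanner addresses that were also seen
    by an independent vantage point during the scan window."""
    claimed, observed = set(claimed_sources), set(observed_sources)
    if not claimed:
        return 0.0
    return len(claimed & observed) / len(claimed)

# illustrative, made-up addresses from the RFC 5737 documentation ranges:
claimed = ["192.0.2.1", "192.0.2.2", "198.51.100.3", "203.0.113.4"]
observed = ["192.0.2.2", "198.51.100.3", "203.0.113.99"]
ratio = corroboration_ratio(claimed, observed)
```

A high overlap would support the claim that the scan happened as described; a low one would cast doubt on it.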

(more…)

Correlation between country governance regimes and the reputation of their Internet (IP) address allocations

Monday, April 15th, 2013 by Bradley Huffaker

[While getting our feet wet with D3 (what a wonderful tool!), we finally tried this analysis tidbit that’s been on our list for a while.]

We recently analyzed the reputation of a country’s Internet (IPv4) addresses by examining the number of blacklisted IPv4 addresses that geolocate to a given country. We compared this indicator with two qualitative measures of each country’s governance. We hypothesized that countries with more transparent, democratic governmental institutions would harbor a smaller fraction of misbehaving (blacklisted) hosts. The available data confirms this hypothesis. A similar correlation exists between perceived corruption and fraction of blacklisted IP addresses.
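For illustration only, the kind of rank correlation underlying such an analysis can be sketched in a few lines, using made-up numbers rather than the study's data (the CAIDA page linked below documents the real data sources and methodology):

```python
def ranks(xs):
    """Assign rank 1..n by sorted order (assumes distinct values;
    a real analysis would average the ranks of ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# hypothetical illustrative values, NOT the study's data:
democracy_index = [9.1, 8.0, 6.5, 4.2, 2.9]
blacklisted_pct = [0.3, 0.5, 1.1, 2.4, 3.8]
rho = spearman(democracy_index, blacklisted_pct)  # negative: higher democracy, fewer blacklisted hosts
```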

For more details of data sources and analysis, see:
http://www.caida.org/research/policy/country-level-ip-reputation/

[Interactive plots: IP population % vs. Corruption Perceptions Index; IP population % vs. Democracy Index; IP infection % vs. Democracy Index]

Interactive graph and analysis on the CAIDA website

Syria disappears from the Internet

Wednesday, December 5th, 2012 by Alistair King and Alberto Dainotti

On the 29th of November, shortly after 10am UTC (12pm Damascus time), the Syrian state telecom (AS29386) withdrew the majority of BGP routes to Syrian networks (see reports from Renesys, Arbor, CloudFlare, BGPmon). Five prefixes allocated to Syrian organizations remained reachable for several more hours, served by Tata Communications. By midnight UTC on the 29th, as reported by BGPmon, these five prefixes had also been withdrawn from the global routing table, completing the disconnection of Syria from the rest of the Internet.
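The mechanics of such a disconnection can be pictured as a sequence of BGP withdrawals shrinking the set of visible prefixes. A toy replay over a watched prefix set (events and prefixes below are invented, drawn from the RFC 5737 documentation ranges, not the actual Syrian routes):

```python
def visible_over_time(events, watched):
    """Replay (time, action, prefix) BGP messages and record how many
    prefixes from `watched` remain in the routing table after each event.
    action is "A" (announcement) or "W" (withdrawal)."""
    table = set(watched)          # assume all watched prefixes start visible
    timeline = []
    for t, action, prefix in sorted(events):
        if prefix not in watched:
            continue
        if action == "W":
            table.discard(prefix)
        elif action == "A":
            table.add(prefix)
        timeline.append((t, len(table)))
    return timeline

watched = {"192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"}
events = [
    (1, "W", "192.0.2.0/24"),
    (2, "W", "198.51.100.0/24"),
    (3, "A", "192.0.2.0/24"),     # briefly re-announced
    (4, "W", "192.0.2.0/24"),
    (5, "W", "203.0.113.0/24"),   # last prefix withdrawn: full disconnection
]
timeline = visible_over_time(events, watched)
```

When the visible count reaches zero, the watched networks have vanished from the global routing table, which is the pattern the reports above describe for Syria.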

(more…)

CAIDA at the NSF Secure and Trustworthy Cyberspace (SaTC) Principal Investigators’ Meeting

Tuesday, December 4th, 2012 by Alberto Dainotti

Last week CAIDA researchers (Alberto and kc) visited National Harbor (Maryland) for the 1st NSF Secure and Trustworthy Cyberspace (SaTC) Principal Investigators Meeting. The National Science Foundation’s SaTC program is an interdisciplinary expansion of the old Trustworthy Computing program sponsored by CISE, extended to include the SBE, MPS, and EHR directorates. The SaTC program also includes a bold new Transition to Practice category of project funding — to address the challenge of moving from research to capability — which we are excited and honored to be a part of.

(more…)

Unsolicited Internet Traffic from Libya

Wednesday, March 23rd, 2011 by Emile Aben

Amidst the recent political unrest in the Middle East, researchers have observed significant changes in Internet traffic and connectivity. In this article we tap into a previously unused source of data: unsolicited Internet traffic arriving from Libya. The traffic data we captured shows distinct changes in unsolicited traffic patterns since 17 February 2011.
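In spirit, the first step of such an analysis is simple: bucket the unsolicited packets by time, keeping only sources that geolocate to the country of interest. A toy sketch follows; the geolocation function and packet records here are invented placeholders, not the actual data or tooling behind this article.

```python
from collections import Counter
from datetime import datetime, timezone

def packets_per_hour(packets, country, geolocate):
    """Bucket (unix_timestamp, source_ip) records by UTC hour,
    keeping only sources that geolocate to `country`.
    `geolocate` stands in for a real IP-geolocation lookup."""
    counts = Counter()
    for ts, src in packets:
        if geolocate(src) == country:
            hour = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:00")
            counts[hour] += 1
    return counts

# toy geolocation: map one documentation prefix to "LY"
def toy_geolocate(ip):
    return "LY" if ip.startswith("198.51.100.") else "??"

packets = [
    (1297900800, "198.51.100.1"),   # 2011-02-17 00:00 UTC
    (1297900860, "198.51.100.2"),
    (1297904400, "192.0.2.9"),      # not "Libyan" under this toy mapping
]
counts = packets_per_hour(packets, "LY", toy_geolocate)
```

Plotting such hourly counts over days is what makes connectivity changes, like the ones beginning 17 February 2011, visually apparent.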

Most of the information already published about Internet connectivity in the Middle East has been based on four types of data:

(more…)

ethical phishing experiments have to lie?

Monday, May 4th, 2009 by kc

Stefan pointed me at a paper titled “Designing and Conducting Phishing Experiments” (in IEEE Technology and Society, Special Issue on Usability and Security, 2007) that makes an amazing claim: it might be more ethical not to debrief the subjects of your phishing experiments after the experiments are over; in particular, you might ‘do less harm’ if you do not reveal that some of the sites you had them browse were phishing sites.

(more…)

spoofer: measure your network’s hygiene!

Sunday, April 5th, 2009 by kc

Update: In May 2015, ownership of Spoofer transferred from MIT to CAIDA

We are studying an empirical question central to the Internet’s security, stability, and sustainability: how many networks allow packets with spoofed (fake) source IP addresses to leave their network destined for the global Internet? In collaboration with MIT, we have designed an experiment that enables the most rigorous analysis of the prevalence of IP spoofing thus far, and we need your help running a measurement to support this study.

This week Rob Beverly finally announced to NANOG an update to Spoofer that he has been working on for a few months. Spoofer is one of the coolest Internet measurement tools we’ve seen in a long time — especially now that he is using Ark nodes as receivers (of spoofed and non-spoofed packets), giving him 20X more path coverage than he could get with a single receiver at MIT.
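In outline, a spoofing test of this kind works by having the client embed its real address in the packet payload while forging the IP source address; whether the forged packet reaches a receiver reveals whether the client's network applies egress filtering (BCP 38). A minimal sketch of the receiver-side bookkeeping (the field names and logic are simplified assumptions for illustration, not Spoofer's actual implementation):

```python
def classify(received):
    """Classify each test client by whether any of its forged packets
    escaped its network.  `received` is a list of dicts with
    'ip_src' (source address observed on the wire) and
    'claimed_src' (the client's real address, embedded in the payload)."""
    results = {}
    for pkt in received:
        client = pkt["claimed_src"]
        if pkt["ip_src"] != client:
            # a packet with a forged source arrived: no egress filtering
            results[client] = "spoofable"
        else:
            # only set if we haven't already seen a forged packet
            results.setdefault(client, "filtered-or-unspoofed")
    return results

pkts = [
    {"ip_src": "203.0.113.7",  "claimed_src": "203.0.113.7"},   # control packet
    {"ip_src": "192.0.2.1",    "claimed_src": "203.0.113.7"},   # forged, got through
    {"ip_src": "198.51.100.9", "claimed_src": "198.51.100.9"},  # control only
]
results = classify(pkts)
```

The control (unforged) packets establish that the client can reach the receiver at all, so the absence of forged packets can be attributed to filtering rather than loss.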

(more…)

the inevitable conflict between data privacy and science

Sunday, January 4th, 2009 by kc

Balancing individual privacy against other needs, such as national security, critical infrastructure protection, or even science, has long been a challenge for law enforcement, policymakers, and scientists. It is good news when regulations prevent unauthorized people from examining the contents of your communications, but current privacy laws often make it hard, and sometimes impossible, to provide academic researchers with the data needed to scientifically study the Internet.

Our critical dependence on the Internet has rapidly grown much stronger than our comprehension of its underlying structure, performance limits, dynamics, and evolution, and unfortunately current privacy law is part of the problem: legal constraints intended to protect individual communications privacy also leave researchers and policymakers trying to analyze the global Internet ecosystem essentially in the dark. To make matters worse, the few data points we do have suggest a dire picture, shedding doubt on the Internet’s ability to sustain its role as the world’s preferred communications substrate. In the meantime, Internet science struggles to make progress with far less empirical data available than in most fields of scientific inquiry.

(more…)