Reading about NASA’s recent DNSSEC snafu, and especially Comcast’s impressively cogent description of what went wrong (i.e., a mishap that seems way too easy to ‘hap’), I’m reminded of the page I found most interesting in The Checklist Manifesto:
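One way such mishaps "hap" is that DNSSEC signatures quietly age past their validity window, at which point validating resolvers (like Comcast's) start returning SERVFAIL for the zone. A minimal sketch of the kind of expiry check a monitoring script could run (the function name and timestamps are illustrative, not drawn from the incident writeups):

```python
from datetime import datetime, timezone
from typing import Optional

def rrsig_window_valid(inception: str, expiration: str,
                       now: Optional[datetime] = None) -> bool:
    """Return True if `now` falls inside an RRSIG's validity window.

    Timestamps use the YYYYMMDDHHMMSS presentation format defined for
    RRSIG records (RFC 4034); all times are treated as UTC.
    """
    fmt = "%Y%m%d%H%M%S"
    t_inc = datetime.strptime(inception, fmt).replace(tzinfo=timezone.utc)
    t_exp = datetime.strptime(expiration, fmt).replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    # A zone that is not re-signed before t_exp fails this check, and
    # validating resolvers will then refuse its answers.
    return t_inc <= now <= t_exp
```

Running a check like this on a schedule, well before signatures lapse, is precisely the sort of rote verification a checklist formalizes.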
Archive for the 'Domain Name System (DNS)' Category
As is well known to most CircleID readers — but importantly, not to most other Internet users — in March 2011, ICANN knowingly and purposefully embraced an unprecedented policy that will encourage filtering, blocking, and/or redirecting entire virtual neighborhoods, i.e., “top-level domains” (TLDs). Specifically, ICANN approved the creation of the “.XXX” suffix, intended for pornography websites. Although the owner of the new .XXX TLD deems a designated virtual enclave for morally controversial material to be socially beneficial for the Internet, this claim obfuscates the dangers such a policy creates under the hood.
In response to the U.S. National Telecommunications and Information Administration’s recent Further Notice of Inquiry on the Internet Assigned Names and Numbers Authority (IANA) Functions [Docket No. 110207099-1319-0], I submitted the following comment:
My recently submitted public comments on the increasingly controversial issue of ICANN’s plans to expand the generic Top Level Domain namespace indefinitely:
- a repeat of my still unaddressed comments from the last (June 2010) economic report,
- an attempt to summarize some public comments to that June 2010 report,
- and an abbreviated historical timeline of ICANN’s economic research commitment to launching new gTLDs.
[I submitted the following public comment to ICANN in response to their second attempt at commissioning An Economic Framework for the Analysis of the Expansion of Generic Top-Level Domain Names. I'll link to ICANN's summary of all public comments on this report when available. -k]
This second economic report, posted 16 June (pdf), is an improvement over the June 2009 reports by Dennis Carlton (pdf, pdf), but there are still too many — and too fundamental — flaws for it to serve as the basis of any ICANN policy on new gTLDs:
I was delighted to see Sid Faber and Tim Shimeall co-teaching a “Network situational awareness” course at Carnegie Mellon University last semester, using DatCat and DITL data; they even put the class projects online. Not only did some of the students use DITL data (contributed by Japanese academics) as well as Internet2’s netflow data, but they used DatCat to find both data sets. To quote Sid,
“About three weeks into the class, we finally got across one of the key features to the students: we were looking at how things really work on the internet, not just a theoretical discussion of RFCs. The data sets were invaluable, but we had challenges dealing with anonymization, sampling, and the overall volume of the data sets — kind of understandable for the first offering of the course.”
In November 2008 I had the honor of being invited to speak at the Chilean Computer Science Society Annual Meeting, this year at the Universidad de Magallanes in Punta Arenas, Chile. I followed a colleague who has been visiting CAIDA for the last two years, Sebastian Castro, back to his sponsoring institution, NIC Chile. We started out with an interesting meeting with a core of technical folk where I learned about the activities of NIC Chile’s recently established research arm (NIC Labs). We exchanged valuable information on the common (and less common) challenges of doing successful research in our respective environments.
The next day I presented to the DHS/SRI Infosec Technology Transition Council (ITTC), where “experts and leaders from the government, private, financial, IT, venture capitalist, and academia and science sectors come together to address the problem of identity theft and related criminal activity on the Internet.”
It is only a three-hour meeting, held a few times a year (this was my first), but intense. They had a timely panel first, “Integrity in Elections”, where they reviewed so many methodological flaws in voting procedures that they cast substantial doubt on the prospect of fair national elections anytime soon. John Sebes motivated the computational science challenge well: if we are not capable of building a trustworthy computational system to accomplish the conceptually simple task of tallying a vote, what can we expect to be capable of building trusted computational systems to do? And while there is inspirational work on documenting and proposing how to solve the technology issues that threaten election integrity, the bottom line is disheartening.