ethical phishing experiments have to lie?

May 4th, 2009 by kc

Stefan pointed me at a paper titled “Designing and Conducting Phishing Experiments” (in IEEE Technology and Society Special Issue on Usability and Security, 2007) that makes an amazing claim: it might be more ethical not to debrief the subjects of your phishing experiments after the experiments are over. In particular, you might ‘do less harm’ if you do not reveal that some of the sites you had them browse were phishing sites.

This brings us to the question: Does a phishing experiment that deceives a subject and exposes the subject to a fake phishing attack adversely affect the subject’s rights or welfare? As noted above, as long as the researcher can ensure the security of any personal information or any information released by the subject (the procedures of which are outlined below), neither a laboratory phishing study nor a naturalistic phishing study should adversely affect the welfare of the subject. However, we question whether the use of debriefing in naturalistic phishing studies might, in fact, adversely affect the welfare of the subject and propose that this, in part, is justification for not debriefing subjects in these types of phishing studies. In regards to adversely affecting the rights of subjects, the use of deception or waiving consent is not seen as a violation of a personal right, see 45 CFR 46 [5], 116 and [7]. Although laudable, the right to know the truth is not a recognized absolute right. However, the federal regulations and ethicists recognize that it is advisable to address this issue and use debriefing to provide the pertinent information relevant to the truth, when appropriate, see 45 CFR 46 [5], 116(d)4, and [7]. The question we raise is whether using debriefing in a naturalistic phishing study is appropriate.

“Designing and Conducting Phishing Experiments”, Peter Finn and Markus Jakobsson

This is an interesting but questionable position: “If people know what’s happening, then they will be upset. But what they will be upset by is learning they were deceived; therefore we must completely deceive them.” That’s an argument that makes a case against itself in one sentence.

There are other problems with the approach, including its assumption of implicit rationality in users; it does not address the prevalence or degree of anxiety, even fear, of being observed in digital media. The researchers present the problem as dichotomous, choosing not to explore methods that could establish the degree of difference between behavior under informed consent and without it. At what sample size and study interval do informed consent procedures change behavior? (If you told someone you were studying their behavior on the Internet for the next hour, they’d probably change it. But over the next year?) Also, what’s wrong with knowing only conservative estimates of phishing vulnerability? If phishing is such a big problem, wouldn’t even those estimates be influential in designing anti-phishing sites and informing policymakers and law enforcement?

There is a lot of research that is compromised — or completely impossible — under informed consent. But in cases where those compromises can be studied, and estimates of the resulting uncertainty established, perhaps researchers (especially psychology researchers?) should not be exempt from that process.

However, I’ve also heard from commercial security consultants that the “tricking users into getting phished without telling them” approach is exactly how many corporations measure the extent to which their own employees are getting phished on corporate networks. Of course, commercial entities don’t need their internal research projects to pass IRB approval or peer review, much less public review. The paper’s most important contribution may be its acknowledgement of the lack of current guidelines for how to conduct ethical Internet research. DHS S&T’s upcoming workshop on Ethical Issues in Network Research (26-27 May, by invitation) is happening not a moment too soon. More on this workshop later.
