Sunday, April 26, 2009

Interaction Design Ethics


This post at the IxDA Discussion forum got me thinking about something I've wondered about for a while: the ethics involved in applying the results of user research and usability testing to product design. Specifically, I was thinking about users' misperceptions of what a product does or how it works, and how to deal with those misperceptions in successive product iterations. When is it all right to allow, or even exploit, users' misperceptions, and when is it inappropriate? Where is the line separating ethical decisions from unethical ones, and what defines that line?

The way I thought through this problem was by considering two extreme examples. One example where it seems acceptable, maybe even preferable, to let users continue to misperceive how something works is whether a hypertext link needs to be single-clicked or double-clicked. A single click is all it takes to follow a link, but many less-experienced users double-click links, probably confusing this action with the one required to open a file or a shortcut on a Windows desktop. Designers of web browsing software accept either a single click or a double-click on a link, but by tolerating double-clicks they also allow users to go on misunderstanding how the browser actually works. Of course, it's really no more work for the user to double-click instead of single-click, and their inaccurate mental model of how links work is likely never to matter. In other words, the cost of misperceiving the functionality is essentially zero.
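To make the "forgiving the extra click" idea concrete, here is a minimal sketch of how a web page could absorb the second click of a double-click on a link so it behaves exactly like a single click. This is purely illustrative and hypothetical; it is not how any actual browser implements link activation, and the handler name and timing window are my own assumptions.

```typescript
// Hypothetical sketch: tolerate double-clicks on links by ignoring a second
// click that arrives within a short window, so the user who double-clicks
// gets exactly the same result as the user who single-clicks.

const DOUBLE_CLICK_WINDOW_MS = 500; // assumed threshold, not a browser standard
let lastActivation = 0;

document.addEventListener("click", (event) => {
  const target = event.target as HTMLElement;
  const link = target.closest("a");
  if (!link) return; // not a click on (or inside) a link

  const now = Date.now();
  if (now - lastActivation < DOUBLE_CLICK_WINDOW_MS) {
    // Second click of a double-click: swallow it so the link isn't followed twice.
    event.preventDefault();
    return;
  }
  lastActivation = now;
  // Otherwise let the browser follow the link as usual.
});
```

The point of the sketch is just that accommodating the misperception costs the designer almost nothing, which mirrors the near-zero cost to the user.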

However, what about a product with an interaction that has a higher cost of misperception, say a medical device? Imagine a product with an interface that has, among other things, a green circular button and a switch labeled "Auto-protect." The way the system actually works is that once the various parameters have been programmed elsewhere on the interface, the operator presses the green button to deliver a drug intravenously. The "Auto-protect" switch is not related to the drug-delivery functionality. However, because the "Auto-protect" switch is positioned a little too close to the green button, some users misperceive it as protecting against an inadvertent overdose and, as a result, are observed to always flip the "Auto-protect" switch on before pressing the green button.

In this example, it would obviously be unethical to conclude that since users want the device to include a mechanism that protects against overdoses, we can simply fool them into believing the device offers that functionality by moving the "Auto-protect" switch even closer to the green button, to better afford a relationship between the two. Allowing device operators to mistakenly believe they are protected from delivering an overdose, when in reality they are not, has a very high cost (i.e. patient death), so there seems to be no room here for letting users think whatever they want as long as they're able to use the device.

So is the cost of the misperception the thing that determines where the line separating ethical from unethical decisions lies? Or is there something more to it?

