I’m going to be presenting this idea later this year at the International Communication Association conference, and I gave a talk on some other research using these methods at the Association of Internet Researchers conference last October, so it’s maybe not entirely vital that I blog about it. But something I read a while back made me want to do it, and now there’s time in the schedule to do so.
So I’m going for it, in part because this platform has broader accessibility than either conference. Though, as yet, not a broader audience. Someday.
This blog post had its genesis, as many of mine do, in something that I read. Embarrassingly enough, I can’t remember how I found this particular article; I have a strong suspicion that it came to me by being cited in something I read from Culture Digitally, though I can’t find it again there.
Regardless of how I came by it, however, Mary Hodder’s TechCrunch post, Why Amazon Didn’t Just Have a Glitch, pointed to the kinds of issues that make one of my methodological innovations in fan studies vital.
In the piece, Hodder described an incident in which all books with LGBT content were filtered out of general Amazon.com search results because they were classified as “adult,” thus resulting in, among other outcomes, A Parent’s Guide to Preventing Homosexuality (to which I will NOT be linking, thank you very much) being the top result for a search for “homosexuality.” This generated a flutter of Twitter activity under #AmazonFail. But, as Hodder explains:
The issue with #AmazonFail isn’t that a French Employee pressed the wrong button or could affect the system by changing “false” to “true” in filtering certain “adult” classified items, it’s that Amazon’s system has assumptions such as: sexual orientation is part of “adult”. And “gay” is part of “adult.” In other words, #AmazonFail is about the subconscious assumptions of people built into algorithms and classification that contain discriminatory ideas. When other employees use the system, whether they themselves agree with the underlying assumptions of the algorithms and classification system, or even realize the system has these point’s [sic] of view built in, they can put those assumptions into force, as the Amazon France Employee apparently did according to Amazon.
That idea about the “subconscious assumptions of people built into algorithms” and the ways in which, as employees use a system, “whether they themselves agree with the underlying assumptions of the algorithms and classification system, or even realize the system has these points of view built in, they can put those assumptions into force,” is exactly why my research operates from the premise that it’s vital to take the interface seriously as a way power/knowledge gets enacted—that is, as a discourse.
In my dissertation, I examine technology—specifically, the interface of official media company websites for objects of fandom—in much the same way as certain branches of cultural studies (a field to which I have an uneasy relationship, to be sure) examine representation. Technology, I argue, is—like representation—not natural or inevitable but the product of social processes. (I am, of course, not alone in this contention, but I do seem to be the only proponent among those studying fans.)
Once socially produced, then, technologies render certain uses possible and not others, and I investigate this through the “affordances” of these official websites—defined by H. Rex Hartson in his 2003 piece, Cognitive, Physical, Sensory, and Functional Affordances in Interaction Design as what a site “offers the user, what it provides or furnishes” (p. 316).
The key terms I’m deploying here are Hartson’s concepts of “functional affordance,” which is what a site can actually do; “cognitive affordance,” which lets users know what a site can do; and “sensory affordance,” which “enables the user in sensing (e.g., seeing, hearing, feeling) something” (p. 322, emphasis removed).
With these latter two types of affordances, I consider the role of the site’s menu labels, how easy it is to tell what a feature does (and distinguish it from other features), and which features are easier or harder to locate due to their position on the page or how noticeable they are (Hartson, 2003).
I also build on Mia Consalvo‘s 2003 discussion, in Cyber-Slaying Media Fans: Code, Digital Poaching, and Corporate Control of the Internet, of the ways in which “corporations have created new multimedia formats that circumvent the easy ‘copy and paste’ usability of older standards,” as with the advent of Flash video, to consider other technological processes that remain below the threshold of the user’s perception, such as cookies that track behavior (p. 82). Finally, I examine the sites’ Privacy Policies and Terms of Service to determine how they frame the site-fan interaction.
In part, I begin from the common argument made in evolutionary psychology and design research that affordances exist only in relation to a user, and I read affordances back to uncover what type of user an interface implies, considering the ways in which the interfaces of these official sites work to, as Ian Hutchby put it in his 2001 piece Technologies, Texts and Affordances, “configure the user” (p. 451). In this sense, my project also resembles that of Michele White, whose 2006 book The Body and the Screen: Theories of Internet Spectatorship examines how interfaces work to gender and embody an ideal user.
Ultimately, I seek to examine how the industry’s decisions about features work both to a) produce a particular set of behaviors and bodies as what counts as fandom and b) consume this preferred mode of fandom as a source of value for the company, keeping in mind, as Hodder does above, that this doesn’t require ill intent or even awareness of these processes on the part of employees.
Assumptions, as things that both reflect and produce a sense of how things are or should be, are powerful things, and I want to work to bring them to light.