Monthly Archives: June 2013

Last week, I suggested that the U.S. might benefit from seeing leaking as a useful tool to support democracy in a nontransparent age.

This week, rather than setting up the two as opposed, I want to begin from the premise that what the NSA’s PRISM surveillance and the leak that revealed it have in common is that they were undertaken with the belief that the ends justify the means.

First, under the logic of democratic transparency I described last time, it would seem clear that people had a right to know what was happening, but did that end justify the means? Leaker Edward Snowden obviously thought that it did. He claims that his “sole motive is to inform the public as to that which is done in their name and that which is done against them,” and he disregards the means necessary to achieve that end altogether, maintaining, “I know I have done nothing wrong” (Guardian).

(Jeremy Hammond, who hacked a private security firm to expose its manipulation of public opinion, said much the same thing.)

Writing in The Atlantic, Bruce Schneier also prioritizes the leak’s ends, actively encouraging further leaking: “I am asking for people to engage in illegal and dangerous behavior. Do it carefully and do it safely, but — and I am talking directly to you, person working on one of these secret and probably illegal programs — do it.”

Others, however, think that the ends don’t justify the means. This, I think, is where you get polling data saying both that Snowden did the right thing and that he should be punished.

WikiLeaks documentary director Alex Gibney, interviewed in The Atlantic, makes a similar argument that both values the ends and attaches consequences to the means with regard to leaker Bradley Manning: “you have to acknowledge that he broke an oath to the military, and we wouldn’t want a world, at least I wouldn’t want a world, in which every soldier leaked every bit of information that he or she had. Manning broke an oath and he’s actually pled guilty to it, and he’s willing to face the consequences.”

At the far end of the spectrum from Snowden’s complete focus on ends, Director of National Intelligence James Clapper (predictably) doesn’t even consider the possibility of value in the ends, contending that “Disclosing information about the specific methods the government uses to collect communications can obviously give our enemies a ‘playbook’ of how to avoid detection” (Washington Post).

There is also an ends vs. means question about the surveillance itself. It’s quite possible that people are actually being protected by this blanket surveillance. Maybe fifty plots have really been foiled. Certainly, having people not blown up is an admirable end. But at what cost? A case could be made that the means are destroying the very freedoms they’re intended to secure.

My concern here puts me in the minority. There has been significant nonchalance about these surveillance revelations, such that it seems people are okay with the surveillance means out of support for their ends. A Pew Research Center poll found a majority saying tracking phone data was acceptable. Daniel Solove’s Washington Post piece sought to dispel privacy myths like “Only people with something to hide should be concerned about their privacy.”

Unconcern about the NSA may seem like a startling abdication of privacy, but it is actually a relatively prevalent attitude. As Alyson Leigh Young and Anabel Quan-Haase argued in their recent article on Facebook and privacy, people (undergrads, in their sample) are generally much less worried about institutional privacy issues like corporate or government surveillance than they are about social privacy (their mom or boss seeing their drunk party photos).

Jan Fernback, in a post at Culture Digitally, similarly argues that “when thinking about appropriate information flows, surveillance contexts, and notions of ethics and trust, we must distinguish the legal dimensions of privacy law from the social dimensions.”

Ultimately, the different things being evaluated against each other in this case may be operating in such different registers that they’re incommensurable. As Fernback notes, “privacy opponents argue that we need surveillance to catch wrongdoers while privacy advocates argue that surveillance harms individuals. How do these contexts differ? What good is being served? What interests are being weighed? Is trust being violated? What power imbalances are evident? What technical regimes are implicated? How is information being used?”

It’s this sort of calculus that has to be used to really parse the ends and the means. Under this view, then, one problem with surveillance as a means is that, as Moxie Marlinspike argues in Wired, “we won’t always know when we have something to hide.”

Marlinspike quotes one of Supreme Court Justice Breyer’s opinions describing “the complexity of modern federal criminal law, codified in several thousand sections of the United States Code and the virtually infinite variety of factual circumstances that might trigger an investigation into a possible violation of the law.” People don’t always know what’s illegal. Or, things previously legal may become illegal. (This invites the argument that “ignorance of the law is not an excuse,” but when laws are so voluminous and often nonsensical it’s hard to hold the line on that.)

Or, the opposite: things that were previously illegal may become legal, but—as Marlinspike points out—we can’t agitate to change those laws without being able to break them and see that they shouldn’t exist. The Wired piece uses the examples of marijuana legalization and same-sex marriage, and we can think of others, but if there were perfect surveillance, forget about any of it.

These means, that is, have many attendant consequences that we have to balance against their ends. And, to circle back a bit to last week, that’s why there has to be transparency, so that we can work through what those consequences are and see whether the ends are justified, as much for the leaking as for the surveillance itself. We simply can’t assess these programs unless we know how they work.

For the first time in this blog’s history, this topic has produced a three-parter. Stay tuned next week for a consideration of due process. 

Nothing useful rhymes with arms; I checked.

This week’s post is of course about the revelation of the United States National Security Agency’s PRISM program. But more particularly, it was inspired by two things.

First, a tweet (which I got via @kouredios):

[screenshot of McDonald’s tweet]

Second, I got a petition from feminist organization UltraViolet via the clicktivism platform Change.org entitled “Hacker Who Helped Expose Steubenville Could Get More Prison Time Than The 2 Convicted Rapists.”

Put alongside the DemandProgress.org/RootsAction.org petition in support of PRISM leaker Edward Snowden, who is apparently “in hiding on the other side of the world because he rightfully fears for his safety — and he says he never expects to see home again,” this all got me thinking.

Both Steubenville hacker Deric Lostutter and Snowden took action to expose wrongdoing, and they are being criminalized for doing so.

The idea that exposing malfeasance is a crime has gotten a great deal of traction in the national security conversation since at least the Bradley Manning/WikiLeaks moment. Both The Atlantic and The Guardian characterize the Obama Administration’s crackdown on whistleblowers as unprecedented.

The statement from Director of National Intelligence James Clapper that “The unauthorized disclosure of information about this important and entirely legal program is reprehensible and risks important protections for the security of Americans” certainly participates in this logic of whistleblowing as crime.

A second petition for Lostutter from Credo Action notes that he “was recently targeted by an aggressive FBI raid for his participation in bringing that evidence to light. A dozen agents with weapons confiscated computers belonging to Lostutter, his girlfriend, and his brother, while putting him in handcuffs outside his home,” and certainly the disproportionate and public response looks like a warning to other potential well-intentioned hackers as much as anything.

However, despite this stance on the part of the administration—and members of Congress (Speaker of the House John Boehner called Snowden a traitor, according to RootsAction)—there are important reasons to see whistleblowing not as a crime but, more in line with McDonald’s framing above, as a vital way to keep the government in check. Certainly one argument against calling the release of classified information criminal or treacherous is that “the government has been systematically over-classifying information since 9/11” (Rebecca Rosen in The Atlantic).

That this is in fact “a secrecy binge,” as Bruce Schneier framed it in The Atlantic, rather than a legitimate act of national security is clear from the fact that “we learn, again and again, that our government regularly classifies things not because they need to be secret, but because their release would be embarrassing.” It seems obvious that exposing things that shouldn’t have been secret shouldn’t be a crime.

However, even if the secrecy serves a purpose other than humiliation-avoidance, there may still be a case to be made for releasing the information under the right to know inherent to a democracy. Schneier again: “democracy requires an informed citizenry in order to function properly, and transparency and accountability are essential parts of that.”

Jennifer Granick, Director of Civil Liberties at the Stanford Center for Internet and Society, wrote a blog post that called for 

public hearings on this scandal so that the American people can find out exactly what our government is doing. Congress should convene something like the Church Commission, which investigated illegal surveillance of civil rights and anti-war groups, to learn how the government conducts secret surveillance and what it does, if anything, to protect the privacy of American citizens.

This is particularly vital in light of what appear to be efforts precisely to avoid oversight. People would, Daniel Solove argues in the Washington Post, “be fine giving up some privacy as long as appropriate controls, limitations, oversight and accountability mechanisms were in place.”

However, “we know that the NSA has many domestic-surveillance and data-mining programs with codenames like Trailblazer, Stellar Wind, and Ragtime — deliberately using different codenames for similar programs to stymie oversight and conceal what’s really going on” (Schneier).

It may well be “entirely legal,” as Clapper says, but we don’t really have any way of knowing that with the information available to us. And even if it is legal, I don’t think that this is what people thought they were signing up for in the post-9/11 surveillance-approval frenzy. As Mike Masnick put it at Techdirt, “those in power keep screaming ‘terrorists!’ to get Congress to pass these laws, and then everyone’s shocked (shocked!) when the government goes and does what Congress and the courts have specifically allowed.”

However, Masnick goes on, “the ‘good news’ in all of this (if there is any good news) is that if it’s true that everything that was done didn’t actually violate the law, then we just need to fix the laws” if we think this isn’t legitimate. But we cannot do that without knowing how the laws are being interpreted currently.

It is this right to know, vital to democracy, that leads to McDonald’s desire in the above-quoted tweet to frame leaking in terms of the better-known American ideology about how democracy is preserved: the second amendment. This is actually a very interesting parallel given that both anti-surveillance and pro-gun partisans deploy the Benjamin Franklin quote “they who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.”

The difference between the two positions lies in belief in government process. The anti-surveillance position holds that knowledge is enough: public opinion can rein in government excess, or oversteppers can be voted out of office. The pro-gun position seems to foresee the full dystopian scenario requiring force of arms. Even in their distrust, lefties trust the government more.

The absurdity of fighting the world’s most powerful military with civilian-grade weapons, even assault rifles, notwithstanding, I don’t think we’re likely to replace gun rights with leaking rights—cold, dead hands and all that.

But I think the right to leak—which Schneier framed as a duty to leak—is an excellent twenty-first century supplement to push back on government overreach.

Next week, because apparently two-parters are a thing I do now, ends vs. means in PRISM and leaking.

For its 2013 conference (#IR14 if you’d like to follow along at home October 23-27), the Association of Internet Researchers (AoIR) implemented a new format for submissions. The template went beyond asking for the standard abstract fare of a “description/summary of the work’s intellectual merit with respect to its findings”; it also required a discussion of “its relation to extant research and its broader impacts,” “a description of the methodological approach or the theoretical underpinnings informing the research inquiry” and “conclusions or discussion of findings,” and it wanted all of this in a space of 1000-1200 words (CFP).

This was a departure from the previous template, which allowed submission of either a 500-word abstract or a full paper. It’s also a pretty unusual conference submission format, one I hadn’t ever seen in the 7 years I’ve been doing this job, and based on comments about it on AoIR’s mailing list (AIR-L), neither had anyone else. It was challenging for me and my panelists to produce something that kind of explained our work (but didn’t have space to, really), but we did it and we were accepted and yay for us.

But as acceptances and rejections came back, AIR-L exploded starting May 30 in something that seems to me to be a paradigm skirmish (like a war, but smaller!), centering on whether the submission process had been tilted toward empiricist work at the expense of the theoretical.

Conflict between paradigms is an area of interest to me in general, but what I found particularly interesting was the incidence of people making incommensurable arguments—using different criteria but not realizing they were on different planes. This is something that I discussed (and attempted to resolve) in the field of Communication in a piece I published last year in Communication Theory, which articulated a model akin to intersectionality for disciplines, allowing similarity and difference on multiple research axes (ontology/epistemology, methodology, axiology) rather than grouping people by a single characteristic a la identity politics.

So what I’d like to do here is explore that disconnect, but also the ways in which the conversation reinforced empiricist projects as “real” research and perpetuated a quite normative definition of rigor. I’m going to do so in a way that names no names and uses no direct quotes. You can go look up the archives if you want—they’re open—but there are way too many people for me to ask permission of all of them and it’s not strictly public, so I’m going to err on the side of caution.

AoIR describes itself as “an academic association dedicated to the advancement of the cross-disciplinary field of Internet studies. It is a member-based support network promoting critical and scholarly Internet research independent from traditional disciplines and existing across academic borders,” but this inclusiveness, cross-disciplinarity, and border-crossing were troubled by the introduction of the new submission format.

First, it was quite clear in the debate that non-social scientists felt alienated by the template. Some said they had trouble cramming what they did into it, and others said they hadn’t submitted at all because they couldn’t figure out how to explain their work on its terms.

And emails to the list suggested that some researchers were in fact not accepted to the conference because the format didn’t accommodate them very well. Several noted that theoretical work was rejected for lacking methods (or for insufficiently specified methods), where that was not an appropriate evaluation. Others specifically noted the humanities as what was disadvantaged, with one scholar pointing to the normalizing force of the subheadings, charts, and diagrams built into the conference template.

There were some gestures in the debate toward a hypothetical “qualified” reviewer who could understand disciplinary difference, preserve AoIR’s diversity, and not judge one paradigm by another, but mostly that reviewer seems not to have materialized. Many participants complained about being assessed based on inappropriate criteria (like methods/findings in a non-social-scientific paper) or about reviewers just being pedantic about the template rather than making substantive critiques. Some called for better guidelines for reviewers to avoid this.

One thing that was not explicitly recognized is that ultimately a great deal of this is a question of reviewing labor. It is my understanding that conferences reviewed by their own submitters endemically overrepresent junior scholars (especially grad students) among the reviewers. Senior scholars are busy or can’t be bothered, or whatever (in addition to being outnumbered)—but regardless of the reason, this has consequences for review quality.

Many of the people making these judgment calls were likely inexperienced, reviewing based on their (seemingly faulty) sense of the rules or on the paradigm in which they were trained, rather than on a developed gut instinct for good work across types of research (which I feel like I can say now because I have at least partially developed that instinct). This is the risk of inexperienced reviewers, a risk a couple of participants in the discussion also noted, and it’s particularly dangerous to an internally diverse organization such as AoIR.

The response to the theory/humanities complaint was pushback from other scholars who argued that the conference has not been rigorous enough in the past and that this year’s submission process was an improvement. There was little recognition among these proponents that this conflated rigor with scientistic modes of inquiry and presentation.

The new format was held up as a way to lessen the chances of bad presentations at the conference itself by catching those who can write good abstracts or latch on to a trendy topic but then not deliver, a goal certainly worth attempting. But there was a clear divide around the relationship between incomplete research and bad research.

It was social scientists who raised the specter of the cocktail-napkin presentation or simply argued that it’s hard to assess the quality of to-be-completed research. The other camp contended that requiring the work to be complete in February or March in order to present it in October seemed to exclude a lot of people and types of work. Members of this group pointed out that some presentations are just bad, irrespective of done-ness.

Part of the argument about rigor stemmed from the different “home” disciplines to which AoIR members belong. Social scientists have had the experience that AoIR isn’t taken seriously. They mentioned being unable to get funding to attend or that attending AoIR wouldn’t “count” for tenure or other evaluations.

In large part, it seems, this has been because AoIR doesn’t require full papers. In previous years, one had the option to submit a paper and then go through a review process to be published in Selected Papers of Internet Research, but one could get accepted without doing so. And indeed, one rationale for the new format was that almost no one was using the full paper option, such that it’s clear that AoIR was primarily an abstract-based conference—which, discussion participants noted, some disciplines see as lazy.

That interdisciplinarity can be constrained by one’s “home” discipline was also clear from the disciplinary divide around the subject of conference proceedings. The folks hooked in to science-type conferences like the Association for Computing Machinery noted the lack of proceedings as another source of disrespect and of the conference seeming less rigorous.

(This is interesting to me because I always thought of conference proceedings as what people did when they weren’t good enough for a real, journal publication. But my field doesn’t use them, so I just had to figure out what they were for as I encountered them—and by comparison to the average journal article they’re kind of shoddy.)

Ultimately, though AoIR is founded on inclusiveness of different research modes, it is clear that the template’s language of methods and findings (and charts and subheads and figures) conflated the conference’s push for rigor with a more scientistic mode. That is, while people could recast that language into terms that made sense for their work, and some did, that recasting wasn’t always accepted in the review process.

It made me wonder what the equivalent humanities/cultural studies-centric template would look like. Can we even imagine it? “Be sure to include your theoretical framing and account for race, class, gender, and sexuality”? Related to this, one participant in the discussion noted that if she had applied her humanities criteria to a social science paper and rejected it for being boring and dated, there would be a huge outcry, but making the same assessment the other direction was totally acceptable.

Thus, it is unsurprising that, while there were certainly statements of valuing other types of research than the one any given participant did, this was an unequal sort of mutual respect. Empiricist research got to stand as “straight” or default or unmarked research (even in some statements by the humanities folks, hello internalized inequality!).

It is, after all, often the case that dominant/more socially valued groups get to stand as normative/universal. When social scientists advocated for including other types of work, they tended to ghettoize it out of normative presentation venues like paper sessions into roundtables, workshops, etc.

Of course, there was also some devaluation going the other way, with the humanities proponents concerned about the danger of producing dated research by talking about something that happened a year ago on a rapidly changing Internet. One wondered what the point was of sitting through a paper that is going to be published in the next month or two.

As a whole, the AoIR debate points to two sides of a single concern: if the research is closed (completed), and the structure for participation is closed (restricted), what gets shut out? While some participants were worried about research being boring or stale, others suggested bigger stakes: that this was an anti-interdisciplinary move—perhaps even a betrayal of what AoIR stands for.

This is an important question. Some modes of research are more respected than others—this is something that is currently true about the world, however much we might dislike it and seek to change it in the long term. Doing interdisciplinarity without recognizing the existence of this hierarchy produces circumstances like the scuffle that took place on AIR-L over the IR14 conference template.

Last week, I talked about the various economic and legal issues involved in Kindle Worlds, like unpaid labor, extraction of value, fair use, and ownership of one’s own creative products. (And that’s what you missed on Glee!)

And now, for the exciting (and quite long) conclusion, a discussion of the cultural issues at stake.

One recurring comment about Kindle Worlds is that it is set up in a way that suggests a lack of understanding of fandom, as in this comment from Aja Romano (via @bertha_c):

[screenshot of Aja Romano’s tweet]

Or this exchange between Melanie Kohnen and me:

[screenshots of the Twitter exchange]

The question “Is it really fanfic?” has repeatedly been raised, with Karen Hellekson noting that “if you define fan fiction as ‘derivative texts written for free within the context of a specific community,’ then this isn’t that. True, they are fans. And they write… fiction. But what Amazon Worlds is doing is extending the opportunity to writers to work for hire by writing, on spec, derivative tie-ins in a shared universe, under terms that professional writers would be inclined to reject.”

Noah Berlatsky of The Atlantic playfully noted that “you could even say that Amazon is turning the term ‘fan fiction’ into fan fiction itself, lifting it from its original context and giving it a new purpose and a new narrative, related to the original but not beholden to it.” John Scalzi also questioned whether it’d qualify as “fan fiction,” deciding that it is and it isn’t.

However, some fannish commentators have been in favor of Kindle Worlds, untroubled by these factors, such that they might also, paradoxically, be open to the charge of not understanding fandom.  Rebecca Pahle, writing for The Mary Sue, noted that some may be upset that “giant corporations (the publisher of Gossip Girl, Pretty Little Liars, and Vampire Diaries is owned by Warner Bros.) will be making money off of the labor of their fans. That’s not a viewpoint I share, though, because that’s what happens anyway: Fans put thousands of hours of effort into creating fic, graphics, crafts, etc., expecting nothing in return other than the object of their fandom being good,” adding that “I for one want to see more authors earn money off of it.”

At OTW Fannews, Curtis Jefferson noted that some are “concerned about what this development will mean for fanfiction communities, though the less they know about them, the more likely they think of Kindle Worlds as a great development.”

As suggested by Jefferson, acceptance or rejection of Kindle Worlds seems to be related to whether people are embedded in the community. Now, there isn’t really just one community, but people who have been in fandom for a while, and in several fandoms over time, have been exposed to and/or acculturated into a set of practices and values that has had some continuity.

In that very limited sense, as an amorphous and internally heterogeneous thing, “the community” singular isn’t totally unreasonable. It’s an imagined community rather than an actual set of interconnections among the people. In fact, assume scare-quotes on it going forward.

One main part of this normative tradition is that fandom in general and fan fiction in particular should be noncommercial. This is an ideal rather than a fact; as Kristina Busse noted in an email exchange in which I participated, “I think the longer I’m in fandom, the more I see that the economies always have overlapped,” and Pahle’s point above gestures toward this, too. Fandom isn’t isolated from market values, not least because it tends to respond to capitalist-produced media.

But normatively those things have traditionally been kept apart, as shown by the extensive work on gift economies in fandom (some of my favorites: Hellekson’s “A Fannish Field of Value: Online Fan Gift Culture” and Suzanne Scott’s “Repackaging Fan Culture: The Regifting Economy of Ancillary Content Models”).

Part of this is that fans understand themselves as getting other, nonmonetary benefits. As Livia Penn put it, “I keep seeing people saying ‘you’ll get 20% to 35% of the profit. And that’s better than nothing!’ (Well, sidebar: I don’t get ‘nothing’ from writing fanfic. If you’re not a fanfic writer who shares their fic with a community of readers, it would take me another two thousand words to explain what you *do* get, but trust me. It isn’t nothing.)”

Scalzi likewise notes that “there’s a difference between writing fan fiction because you love the world and the characters on a personal level, and Amazon and Alloy actively exploiting that love for their corporate gain and throwing you a few coins for your trouble.”

So there has to this point been a fan community with some (rather) loose norms about how fiction works, among which are a non-monetary system of reward and exchange and a relationship to industry somewhere between wary and hostile. This is what fan scholarship has long described, due substantially to the fact that two generations of scholars (maybe two and a half or three) have been to varying degrees embedded in this community. This comes out of the founding of fan studies as a project of fan-scholars wanting to speak for themselves and their own community.

But I am beginning to wonder if the community, already minoritized in the world at large and within fandom, might also become minoritized in fan fiction itself. That is, while Kindle Worlds is not fan fiction as it has been, it might be fanfic as it will be.

Generational turnover in the population has happened, and from my own limited and anecdotal experience, younger fanbases are often not within the tradition. I don’t know whether they know it exists and have rejected it; whether the influx of fans was too great to teach them all how it had been done before; or whether they don’t know it exists at all, because searchability provides routes to finding out that there is such a thing as fic without ever encountering how it has traditionally been done. (Actually, can someone do the research and find out why, plzthx?)

So, this generation shift is one route by which we may see the end of fan fiction as we know it. It seems to me that the proportion of writers that aren’t within the tradition is steadily rising: lots of fic I am seeing doesn’t use beta readers, the reciprocity of feedback as payment for creativity is decaying, some of the old rules about acceptable content have vanished, etc.

I take no position on whether this shift in the normative way of writing fic is good or bad. I like the tradition, and I think it is valuable, but I’m not one for prescribing how people go about doing things that give them pleasure. However, I do think it’s important to consider carefully whether this is a large-scale change and to think about its implications.

I also have to wonder if the fact that most of the scholarship has been done by people embedded in the tradition might be why we haven’t seen this coming. This isn’t to critique those people or their work per se (and indeed there is some recognition that there are other ways, as with Busse’s comment that “maybe there is a market–it just won’t be ours, I think”) but rather to point to the tradeoff that every angle of vision makes some things more visible than others.

Related to this question of generations and fannish continuity, the comparisons of Kindle Worlds to the 2007 for-profit fan fiction archive FanLib were not long in coming. Scott asked whether fandom would “respond as quickly/vehemently this time around,” and I think that the (potential) ongoing generation shift has to be taken into account in answering any such question. Has fandom hit a tipping point (which it hadn’t at the time of FanLib) over into a critical mass of people who will see this as legitimate? That will, I think, be the deciding factor.

That point of contact, between fan norms and industry action, is the other place to think about the end of fan fiction as we know it. The practices and populations seem to be changing, or at least new ones are being added, and only some versions are being built into industry logics.

Kindle Worlds, like many other projects at this historical moment, is about (as Jenkins, Ford, and Green parse the distinction in Spreadable Media) “‘fans,’ understood as individuals who have a passionate relationship to a particular media franchise,” not “‘fandoms,’ whose members consciously identify as part of a larger community to which they feel some degree of commitment and loyalty” (p. 166).

What I want to suggest is that Kindle Worlds is part of a broader shift to incite fans-the-individuals to ever-greater investment and involvement but to manage them through disarticulating them from the troublesome resistive capacity of fandom-the-community.

On one hand, this is part of the monetization of everything. As Busse commented, “I think the thing that unsettles me is when copyright holders ask us to create material to sell it back to us,” which she noted is something that “many of the recent fan studies works have all but explicitly encouraged them to do.” Or, more snarkily, “Don’t you know that things don’t exist until some dude somewhere makes money off it?”

But there is also a sense in which industry is defining this (and maybe only this) as fan fiction, despite the fact that it’s not the only way and traditionally hasn’t been the primary way. As Hellekson notes, “‘work for hire, on spec, for certain tie-ins’ doesn’t really have the ring of ‘fan fiction,’ does it? By using the term fan fiction, they are shorthanding their future writers as well as their perceived audience.” That “shorthanding” stakes a claim on those writers and readers.

And that claim has weight as a definitional move, as is clear from Sean P. Aune’s wondering at TechnoBuffalo (via OTW Fannews) “if the studios that license the properties will continue to allow fans to publish their works for free around the Web. In theory not much should change, but there is now a financial stake in this sub-section of fandom where companies can earn money from the work of others, so there might be an incentive to drive people towards the pay version of fan fiction.”

Or from Betsy Rosenblatt, chair of the legal committee of the Organization for Transformative Works, who noted to Wired that the narrow range of acceptable content in Kindle Worlds “underline[s] the importance of unrestricted fan platforms, like OTW’s Archive of Our Own, which ‘allow fans to express the full range of their creativity and appreciate the creativity of other fans through fair use.’”

Scalzi puts it in broader context: “I suspect this is yet another attempt in a series of long-term attempts to fundamentally change the landscape for purchasing and controlling the work of writers in such a manner that ultimately limits how writers are compensated for their work, which ultimately is not to the benefit of the writer. This will have far-reaching consequences that none of us really understand yet.”

I am not familiar enough with the landscape of professional writing to assess Scalzi’s point in that context, but there does seem to be a creative consolidation going on (alongside a small-scale proliferation enabled by technology), wherein ever more aspects of creative production are coming under the umbrella of corporate ownership and authorship rather than an individual creative person and a corporate production and distribution apparatus. And that bears thought, for fandom and beyond.