
Category Archives: politics

Last week, I suggested that the U.S. might benefit from seeing leaking as a useful tool to support democracy in a nontransparent age.

This week, rather than setting up the two as opposed, I want to begin from the premise that what the NSA’s PRISM surveillance and the leak that revealed it have in common is that they were undertaken with the belief that the ends justify the means.

First, under the logic of democratic transparency I described last time, it would seem clear that people had a right to know what was happening, but did that end justify the means? Leaker Edward Snowden obviously thought that it did. He claims that his “sole motive is to inform the public as to that which is done in their name and that which is done against them,” disregarding the means necessary to achieve it altogether as he maintains, “I know I have done nothing wrong” (Guardian).

(Jeremy Hammond, who hacked a private security firm to expose its manipulation of public opinion, said much the same thing.)

Writing in The Atlantic, Bruce Schneier also prioritizes the leak’s ends, actively encouraging further leaking: “I am asking for people to engage in illegal and dangerous behavior. Do it carefully and do it safely, but — and I am talking directly to you, person working on one of these secret and probably illegal programs — do it.”

Others, however, think that the ends don’t justify the means. This, I think, is where you get polling data showing both that Snowden did the right thing and that he should be punished.

WikiLeaks documentary director Alex Gibney, interviewed in The Atlantic, makes a similar argument that both values the ends and  attaches consequences to the means with regard to leaker Bradley Manning: “you have to acknowledge that he broke an oath to the military, and we wouldn’t want a world, at least I wouldn’t want a world, in which every soldier leaked every bit of information that he or she had. Manning broke an oath and he’s actually pled guilty to it, and he’s willing to face the consequences.”

At the far end of the spectrum from Snowden’s complete focus on ends, Director of National Intelligence James Clapper (predictably) doesn’t even consider the possibility of value in the ends, contending that “Disclosing information about the specific methods the government uses to collect communications can obviously give our enemies a ‘playbook’ of how to avoid detection” (Washington Post).

There is also an ends vs. means question on the surveillance itself. It’s quite possible that people are actually being protected by this blanket surveillance. Maybe fifty plots have really been foiled.  Certainly, having people not blown up is an admirable end. But at what cost? A case could be made that the means are destroying the very freedoms they’re intended to secure.

My concern here is in the minority. There has been some significant nonchalance about these surveillance revelations, such that it seems people accept the means of surveillance out of support for its ends. A Pew Research Center poll found a majority saying tracking phone data was acceptable. Daniel Solove’s Washington Post piece sought to dispel privacy myths like “Only people with something to hide should be concerned about their privacy.”

This unconcern about the NSA may seem like a startling abdication of privacy, but it is actually a relatively prevalent attitude. As Alyson Leigh Young and Anabel Quan-Haase argued in their recent article on Facebook and privacy, people (undergrads, in their sample) are generally much less worried about institutional privacy issues like corporate or government surveillance than they are about social privacy (their mom or boss seeing their drunk party photos).

Jan Fernback, in a post at Culture Digitally, similarly argues that “when thinking about appropriate information flows, surveillance contexts, and notions of ethics and trust, we must distinguish the legal dimensions of privacy law from the social dimensions.”

Ultimately, the different things that are being evaluated against each other in this case may be operating in such different registers from each other that they’re incommensurable. As Fernback notes, “privacy opponents argue that we need surveillance to catch wrongdoers while privacy advocates argue that surveillance harms individuals. How do these contexts differ?  What good is being served? What interests are being weighed? Is trust being violated? What power imbalances are evident? What technical regimes are implicated? How is information being used?”

It’s this sort of calculus that has to be used to really parse the ends and the means. Under this view, then, one problem with surveillance as a means is that, as Moxie Marlinspike argues in Wired, “we won’t always know when we have something to hide.”

The piece quotes one of Supreme Court Justice Breyer’s opinions describing “the complexity of modern federal criminal law, codified in several thousand sections of the United States Code and the virtually infinite variety of factual circumstances that might trigger an investigation into a possible violation of the law.” People don’t always know what’s illegal. Or, things previously legal may become illegal. (This invites the argument that “ignorance of the law is not an excuse,” but when laws are so voluminous and often nonsensical it’s hard to hold the line on that.)

Or, the opposite: things that were previously illegal may become legal, but—as Marlinspike points out—we can’t agitate to change those laws without being able to break them and see that they shouldn’t exist. The Wired piece uses the examples of marijuana legalization and same-sex marriage, and we can think of others, but if there were perfect surveillance, forget about any of it.

These means, that is, have many far-reaching consequences that we have to balance against their ends. And, to circle back a bit to last week, that’s why there has to be transparency, so that we can work through what those consequences are and see whether the ends are justified, as much for the leaking as for the surveillance itself. We simply can’t assess these programs unless we know how they work.

For the first time in this blog’s history, this topic has produced a three-parter. Stay tuned next week for a consideration of due process. 

I have always been perplexed by the insistence that the word “American,” unmodified, should not be used to refer to citizens of the United States.

Certainly, the alternatives are decidedly less than euphonious: U.S. American, United Statesian, and North American (to refer to the commonalities between the U.S. and Canada, which is sure to irritate both Mexico and francophone parts of Canada) are all wildly awkward. But though I’m a sucker for well-named things, that’s only a tiny portion of my problem with it.

I’ve heard the complaint about “American” mostly from or in reference to folks from Latin America, and the argument is that the Americas are a big place and the U.S. doesn’t have a monopoly on the term. As a scholar of unmarked categories like whiteness and masculinity, I know the power that being the default holds, so I understand the impulse from that angle. Refusing the way Anglo folks get to be “American,” full stop, and everyone else is hyphenated makes a lot of sense, particularly in the age of automatic suspicion of any Latino in places like Arizona.

But I’m not sure that the people who contest the way “American” is currently deployed would actually benefit from access to this word.

I get the feeling that most of the world doesn’t have very positive associations with “American.” There’s the “Ugly American” tourist idea. And then, election cycle upon election cycle has appealed to “real America” as white, socially conservative, religious, and inclined to gut all government but super-size the military. And anyone who thinks otherwise is a traitor. “American,” that is, seems to refer to this type of image:

[image: ’Murica meme]

Too fat to walk but still eating junk? Flag-waving and militaristic? Sounds about right. This is the stuff of the ’Murica meme for a reason: it’s both distinctly “American” and ridiculous to anyone who doesn’t share this particular mindset.

I’d imagine the common attitude, if you did a global poll about the word “American,” is inflected by the logic driving this image. The stance is likely at best mild irritation—and obviously some folks have stoked that into outright hatred to further their own ends (certain terrorist groups and antagonistic nations come to mind).

This then raises the question: Why is anyone who is not hailed by the image of the true, patriotic “American” beating down the door to be associated with this word?

Indeed, the battle for “American” seems to be the mirror image of the ethnocentric American assumption that everyone wants to be American. Seeing this tweet the other day solidified my intent to blog about this issue:

[image: tweet]

Lots of people surely want access to the economic opportunities and legal protections available in the United States, to the point where they risk their lives to get them—even though ultimately opportunity is highly stratified and not nearly as available as it’s made out to be. In the State of the Union address on February 12, 2013, President Obama discussed how people he met in Burma hoped that U.S.-style justice and rule of law might be coming their way soon. The nation definitely has things to offer that some other places lack.

(I was late in watching the SOTU, but the enhanced version was pretty cool. Though I could have used a bit more Pop-Up Video style trivia to label the people in the audience who were apparently important enough to show on camera. Elizabeth Warren, Eric Holder, and Tammy Duckworth I got, but some help would have been nice.)

But “Americanness” as a cultural identity is not so widely championed. Indeed, the only people I know of who attach a positive valence to it other than not-yet-disillusioned immigrants are “Americans.”  That is, it’s popular with a particular subset of people who live in the U.S.—not coincidentally often those demanding cultural and linguistic assimilation of immigrants. Those who’d put flags on their socks but find it offensive to burn one, etc.

Valorizing the state of being “American,” that is, seems to go along with a particular version of patriotism and conservatism and flag-waving and red-state-ness. It’s certainly meaningful to those people, but I remain baffled as to why anyone who didn’t share those demographic and political characteristics wouldn’t just say “actually, we really don’t want that word, we’re going to find something that talks about us in a way we like.”

A second line of thought that came out of my reading of Huw Lemmey’s “Devastation in Meatspace” was: How would I teach undergrads about this? This is the kind of thing I’ve been considering a lot lately, perhaps because I’m not teaching this semester for the first time since I started teaching in earnest, and I’m unable to exercise my educational creativity muscles.

But then, part of it is being struck by feeling like it’s impossible to have conversations about social inequality with anyone who hasn’t had the years of training in thinking structurally that I have—and students in my classes are the most common example of that in my life, living as I do in an academic cocoon.

Third, there’s the particular challenge of this case, because support for Israel is such a knee-jerk, unquestionable thing for so many Americans. Certainly, as demonstrated by the hubbub over the 2012 Democratic Party Platform not including Jerusalem in its original iteration and the subsequent revision to add in a statement that Jerusalem is the capital of Israel, it’s basically impossible not to support the Israeli state in mainstream American politics.

I’m not really sure why that is, historically. A historian friend of mine speculated that it had to do with the US’s role in establishing Israel in the first place and also suggested that the linkage between the two nations intensified as a result of the Six-Day War, as the idea of Israel as a nation under attack fit nicely with late-60s white anxieties about the US as under attack and helped produce the special bond that’s come to exist. Now, the historian in question would make no claim to certainty on this explanation—since, though he’s a well-read and geopolitically-aware human, he doesn’t study any of those places and times in particular—but it’s a compelling supposition.

Regardless, though obligatory mainstream support for the Israeli state was well established by the time 9/11 happened, it clearly intensified after that terrorist attack, as Muslims and Arabs were moved into a category of assumed-automatic-enemies for many Americans—a position they already occupied for the Israeli state and some portion of its citizenry (though clearly not all, and maybe not even most).

So, this is the lay of the land: in the American mainstream, Israel is always right. Indeed, questioning Israeli state policy in many circles is automatically equated to anti-Semitism. (Even though Arabs are also Semites, which I have never understood. I asked Judith Butler about this once—because I was 20 and we were reading Holocaust literature and Palestinian poetry in her class on loss, memory, and mourning, and it seemed like a good idea at the time—and she couldn’t explain it either, sigh.)

Though obviously my formative years were in a different, pre-9/11 era when the Middle East was much less central to the American imaginary, I certainly never remember having any awareness of Palestinian refugees and their conditions until college. I was, like many of my students are, a well-meaning white liberal teenager with a savior complex very concerned about all kinds of injustices, but the Palestinian situation was not on my radar until probably Ananya Roy’s Women’s Studies 14 class in Spring 2001.

I can’t assume my students will share the level of un-awareness I had when I was their age, of course, but given the lay of the political land on this issue, it seems fairly likely that my students will come into any discussion believing that Israel = good, Palestinians = terrorists.

And indeed, I already teach the topic of terrorism in my upper division Gender in the Media course, using Jasbir K. Puar and Amit S. Rai’s Monster, Terrorist, Fag: The War on Terrorism and the Production of Docile Patriots, trying to get my students to pull back from their beliefs about the 9/11 attacks, whatever they are,  enough to see the weirdness of the particular gendered and sexualized forms the reaction took.

This went pretty well the first time I taught the course, but on the second go-around I remember vividly having a student exclaim something like “but they killed all those people!” or “but they attacked us!” Her comments about sports teams in online discussion had already revealed she was from New York, and so there’s a fair chance that she had only a few degrees of separation to someone who died in the towers.

(It was also at this point that I realized I had been assuming that the South Asians in the room [of which she was one] a) were aware of the racist backlash and b) would be less knee-jerk in favor of post-9/11 jingoism, but that’s my failing as a white person and a teacher.)

So then I had to slow down and go back and come down from my big structural discussion back to the grounding in “Some people did something awful, that we don’t condone, but the response to it doesn’t make any sense in the absence of a history of imagining the East as a site of gender and sexual deviance.”

And I guess that’s the way forward to teach Palestine as well: We never condone violence. That includes acts undertaken by Palestinians, but it also includes the violences of the Israeli state. So, we can hold that in place and think about broader structures in how those particular violences arise and what forms they take.

Because the fact is that I do parse these kinds of complexities for my students, and expect them to, about other issues. Though I suppose the ones who are actively racist, sexist, or homophobic, rather than having a passive, culturally-received sense that whiteness or maleness or straightness is superior, probably quickly realize that mine is not the class for them to air those beliefs.

I think it’s possible to condemn terrorist tactics but also understand the backed-into-a-corner-ness that makes them seem like a/the viable option. I think it’s possible to get across that there are real, legitimate concerns being expressed in illegitimate ways.

I think it’s possible to get students to disarticulate the actions of the Israeli state, which even not all Israelis agree with, from Jews writ large. I think it’s possible to help students see how individuals within structures benefit from the violence done on their behalf and thus share some responsibility even as they do not directly or completely control the system that produces the violence.

I think it’s possible to push back on the culturally “obvious” without alienating your students. The trick lies in keeping the large, structural factors and the concrete, tangible loss of life both in view at once.

I’m taking the next 2 weeks off, since there’s no point in posting on Christmas Eve and New Year’s Eve when no one will read it, but I’ll see you back here in January!

Privacy has been a hot topic in the last few years, due largely to the confluence of digital media that travels easily with social platforms that encourage inputting all the information about one’s life. But the term gets thrown around and used to mean keeping all kinds of things private from all kinds of people.

I read something recently that made an offhand remark about privacy and privatization while citing Amitai Etzioni’s 1999 book The Limits of Privacy (I don’t know what I was reading and I really did go looking but I can’t find it again; however, I’m fairly certain it was either Saskia Sassen or Nick Dyer-Witheford, based on when it was in the semester, and the latter seems the more likely suspect).

That reading, whatever it was, sparked me to think about the relationship between privacy and privatization, public and publicity, and what we talk about when we talk about privacy. (Lapsed English major FTW with the Raymond Carver reference!)

It seems that people are most concerned with interpersonal privacy. They don’t want their mom to know they got totally wasted last weekend, their employer to know they lied to go to a party,  or potential stalkers nearby to know their location.


They are, to a lesser degree, concerned with privacy from the government. Post-9/11 surveillance in the name of counterterrorism has gotten some pushback—certainly, SumofUs.org wants me to petition Facebook not to give its members’ information to the government without a warrant, which seems both important and like a drop in the proverbial bucket o’ surveillance—but the sheer trauma of that event was sufficient to convince at least some people of something Etzioni contended in 1999 that Americans generally steadfastly refused to accept: that public goods (like safety and health) sometimes require violating privacy (p. 2).

However, there is markedly less concern about privacy when it comes to corporations. As Etzioni put it, “although our civic culture, public policies, and legal doctrines are attentive to privacy when it is violated by the state, when privacy is threatened by the private sector our culture, policies, and doctrines provide a surprisingly weak defense” (p. 10).

There are exceptions, of course, as shown by discomfort with the fact that Target can figure out women are pregnant based on their purchases and will send them coupons for pregnancy and baby items, often before they’ve told anyone in their immediate families.  By and large, though, protecting privacy from corporations doesn’t generate a lot of attention among the general public.

Likely this is at least somewhat because most people are not aware of how Facebook or Google or any of the other big Internet companies works. They get, to use Dallas Smythe‘s famous terminology, a free lunch, and they think that’s in exchange for the advertisements they can freely ignore, so it seems like a good deal.

However, what the company really gets from them is not their attention, but the traces of their life—demographics, location, social relationships, likes, hobbies, what they click on, what other websites they visit, and so on—left behind every time they do anything, much like footprints, fingerprints, or dead skin cells in the physical world.

(For a detailed critique of Google’s use of data, which forms some of my background knowledge here, see Christian Fuchs’ Google’s “New” Terms of Use and Privacy Policy: Old Exploitation and User Commodification in a New Ideological Skin.)

However, I suspect that even if people did know how it worked, protecting privacy from corporations still wouldn’t get very many people’s dander up, for two reasons:

  1. Privacy is imagined in relation to publicity, such that as long as the information is impersonal, aggregate, and not released to the public, it seems compatible with privacy; and
  2. The strong pro-privatization ethos in much of U.S. public discourse has tended to operate with the assumption that the private sector is in some sense controlled by the public through competition and people voting with their dollars.


Etzioni described this as “the privacy paradox: Although they fear Big Brother most, they need to lean on him to protect privacy better from Big Bucks” (p. 10), but I think that’s no longer true (and indeed I’m skeptical that it ever was). That is, though multinational capital is capable of overpowering any other force on the planet, with the possible exceptions of the U.S., E.U., and Chinese governments should they suddenly decide to stand up to it, there’s a persistent and mistaken belief that “the market” can keep it in check and thereby keep customers in the driver’s seat as companies compete for their dollars.

The real paradox, then, is that ultimate belief in consumer sovereignty leads consumers to quite freely give up sovereignty over their own data. Or, to put it another way: we don’t want our data to be publicized, and we don’t want the public sector to intervene, but, as Safiya Noble points out, these Web technologies are themselves framed as a “public good,” which constrains how (and how much) they can be critiqued.

To say: “Obviously it’s good! It gives people access to information and social connection! For free! Well yeah, maybe it also takes, but it’s worth it! And my information is still private!” takes some pretty complex mental gymnastics and willful ignorances, and the fact that those contortions have become unremarkable is actually quite remarkable.

This week, a special post-Fourth of July edition to think about the complications and contradictions of that funny little word “free.”

“Free,” first off, means a couple of different things. As Wendy Brown points out in her 2003 article Neo-liberalism and the End of Liberal Democracy, “in economic thought, liberalism [ . . . ] refers to a maximization of free trade and competition achieved by minimum interference from political institutions. In the history of political thought, while individual liberty remains a touchstone, liberalism signifies an order in which the state exists to secure the freedom of individuals on a formally egalitarian basis.” This, she notes, “may lean more in the direction of maximizing liberty (its politically ‘conservative’ tilt) or in maximizing equality (its politically ‘liberal’ tilt)” (s. 6).

When we talk about things being “free” in contemporary American political discourse, then, we mean both individual freedoms and the free market. This has some important consequences when, as David Savran notes in his Taking it Like a Man: White Masculinity, Masochism, and Contemporary American Culture (sadly, out of print, but one of my favorite books of all time and I’m totally going to inflict portions of it on my students next semester), “the old-style American liberalisms, variously associated (reading from Left to Right) with trade unionism, reformism, and competitive individualism, tend to value freedom above all other qualities” (p. 270).

So, on one hand, the American instantiation of liberalism places the highest value on freedom, but on the other, “free” means two different things, and this leads to some interesting conflations. Savran actually enacts this—perhaps unintentionally—when he goes on to say that “taking the ‘free’ individual subject as the fundamental social unit, it has long been associated with the principle of laissez-faire and the ‘free’ market” (p. 270).

That is, the individual is imagined to be free in the same way that the market is free: both are understood to be the product of nonintervention. On one hand, just as proponents of laissez-faire argue about the market, this position holds that the fewer laws constraining individuals from doing whatever they choose, the better. On the other hand, the relationship also runs the other way, with people assumed to be acting freely unless they are constrained by a law.

Here’s where things get interesting, because what this does is relocate problems and solutions to individuals. As Brown argues in Regulating Aversion (which has apparently become my go-to book lately), framing freedom as only the absence of a law telling you what to do works to “reframe inequality or domination as personal prejudice or enmity” (p. 142).

Under this logic, that is, only when someone is racist does race matter. Otherwise, we’re all the same and it’s that bad person’s fault for noticing. The same argument gets made about sexism or homophobia or whatever it may be, that inequality is personal prejudice, and the absence of personal prejudice is equality since we’re all the same under the law.

(Unless you mix your cocaine with baking soda. Then you are 18 [formerly 100] times more dangerous to society than someone who doesn’t—and your blackness has nothing to do with that determination, we swear.)

The trouble with equating freedom and lack of legal coercion is that “the reduction of freedom to rights, and of equality to equal standing before the law, eliminates from view many sources of subordination, marginalization, and inequality that organize liberal democratic societies and fashion their subjects” (Brown 2006 p. 17-18).

In the process of “formulating freedom as choice and reducing the political to policy and law,” that is, “liberalism lets loose, in a depoliticized underworld, a sea of social powers nearly as coercive as law and certainly as effective in producing subjectivated subjects” (Brown 2006 p. 197).

Let’s think about an example: “the contrast between the nearly compulsory baring of skin by American teenage girls and compulsory veiling in a few Islamic societies is drawn routinely as absolute lack of choice, indeed tyranny, ‘over there’ and absolute freedom of choice (representatively redoubled by near-nakedness) ‘over here’” (Brown 2006 p. 188-9).

What is interesting about this is that there’s always someone ready to get offended by somebody forcing women to cover up, but forcing them to uncover is equally objectionable. Or, rather it should be; it’s typically not.

Feminists recognized the demand to bare oneself as objectionable when it came to the hypersexualization of (white) women, but unfortunately many of them have missed the boat on the hypocrisy of denying Muslim women the right to wear what they want if what they want happens to be the hijab.

Instead, the conversation has been dominated by a right-wing-flavored framing of the god-given “right” to wear less being denied to Muslim women.

As Brown goes on to say,

This is not to deny difference between the two dress codes and the costs of defying them, but rather to note the means and effects of converting these differences into hierarchized opposites. If successful American women are not free to veil, are not free to dress like men or boys, are not free to wear whatever they choose on any occasion without severe economic or social consequences, then what sleight of hand recasts their condition as freedom and individuality contrasted with hypostasized tyranny and lack of agency? What makes choices ‘freer’ when they are constrained by secular and market organizations of femininity and fashion rather than by state or religious law? (189)

If freedom is only juridical, only measured as the lack of a law prescribing your dress code, then people in the West are free. However, as this example shows, law isn’t the only thing that constrains action—things like social norms are really powerful, and indeed far more powerful than laws to the extent that we don’t even know they exist.

This is probably not surprising to many (or even most) of my readers, but what’s interesting is how much it is grounded in “free” meaning two totally contradictory things: individual liberty and the lack of constraint on the market get conflated into lack of constraint on individuals, and then we have, mistakenly, tended to call it a day and consider freedom achieved.