On May 20, Twitter announced some “Updates to Twitter and Our Policies,” which they pushed out to all accounts—indeed, I got it three times across the two accounts I actually use and one I set up for a conference that then went unused.
The announcement struck me as particularly user-friendly compared to other companies’ policies. That is, compared to the way these issues are discussed in the couple of pieces by Christian Fuchs on privacy policies that I’ve read in the last few months, this seemed not so bad. Not perfect, clearly, but not as awful as it might be. In light of that, I thought I’d try to work through the updates and tease out what it was about them that seemed less nefarious than usual.
First, what caught my attention is that the announcement said: “We’ve provided more details about the information we collect and how we use it to deliver our services and to improve Twitter. One example: our new tailored suggestions feature, which is based on your recent visits to websites that integrate Twitter buttons or widgets, is an experiment that we’re beginning to roll out to some users in a number of countries. Learn more here.”
Now, there are some problems here—they frame collecting user information solely as making the service better rather than owning up to their self-interest, and they track you when you browse elsewhere while logged into Twitter, which is a little Big Brother.
However, the interesting part is that they’re providing information about what they’re collecting and what they’re doing with it. I don’t expect that there’s full disclosure on their part, of course, but they are operating from the assumption that users have a right to know these things or that they will run into PR or regulatory trouble if they don’t say they care about these things, and that feels like an advance, even if a small one.
Twitter also actively made information available about “the many ways you can set your preferences to limit, modify or remove the information we collect. For example, we now support the Do Not Track (DNT) browser setting, which stops the collection of information used for tailored suggestions.”
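For readers curious what honoring DNT actually involves, here is a minimal sketch—my own illustration, not Twitter’s code—of a server checking the `DNT` request header before recording a visit for tailored suggestions:

```python
# Toy sketch (not Twitter's implementation): honoring the Do Not Track
# (DNT) browser setting. A browser with DNT enabled sends the header
# "DNT: 1" on every request; the server checks it before logging.

def should_tailor(headers):
    """Return False when the browser sent DNT: 1, i.e. the user opted out."""
    return headers.get("DNT") != "1"

def handle_widget_request(headers, visit_log, url):
    # Record the visit for tailored suggestions only when DNT allows it.
    if should_tailor(headers):
        visit_log.append(url)

log = []
handle_widget_request({"DNT": "1"}, log, "http://example.com")  # opted out
handle_widget_request({}, log, "http://example.org")            # no DNT header
# log now contains only the second visit
```

The key point is how little the user has to do: flip one browser setting, and every site that chooses to respect the header stops collecting—which is also why DNT only works if companies volunteer to honor it.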
Fuchs is not wrong that opt-out is to the benefit of the company and that it’s often hard to figure out how to actually do so, but this feels different. Twitter pushed this out to all users and told all of them that if they want to alter how they interact with the system, they can do so. This, I think, points to a shift similar to the one discussed above—either they recognize that users have a right to a say in how their data gets used, or they feel like they’re supposed to pretend to think so, but in both cases it’s not just companies doing whatever they like with impunity.
There are, of course, some larger problems here. Twitter says, “We’ve clarified the limited circumstances in which your information may be shared with others (for example, when you’ve given us permission to do so, or when the data itself is not private or personal).” This frames the issue as being about personally identifiable information, when such an understanding in fact misses the point.
As Fuchs puts it in his recent article “The Political Economy of Privacy on Facebook,” this kind of attitude “engages in privacy fetishism by focusing on information disclosures by users” (p. 142). That is, “the main privacy issue is not how much information users make available to the public, but rather: Which user data are used for Facebook for advertising purposes; in which sense users are exploited in this process; and how users can be protected from the negative consequences of economic surveillance on Facebook” (p. 141).
So Twitter, in assuming that everyone agrees it’s okay to share information when it’s “not private or personal,” is working from this same framework. The set of anonymized user data is fair game: it provides vital market research not only for Twitter’s own service but for any company to whom it sells the data, and it lets Twitter sell advertising where it can report very specifically what kind of people are seeing the ad, which makes for more valuable ads. (This is how Google can make money while claiming it doesn’t sell user data; contrary to Fuchs, I don’t think they’re lying about selling so much as using their data to profit in this indirect way.) That’s a fairly standard industry-wide assumption, and Twitter has not broken from it.
Opt-in, of course, is better than opt-out, but the idea that your personal information would ever get combined with data on your searching and click-through is pretty scary, and fortunately that’s one road Twitter seems not to have gone down (even if, as is likely, it’s only because their service doesn’t lend itself to collecting the same sorts of data).
Similarly, Google likely thinks of itself as taking a stand for privacy when it says that “When showing you tailored ads, we will not associate a cookie or anonymous identifier with sensitive categories, such as those based on race, religion, sexual orientation or health,” but as Fuchs’s blog post points out, “algorithms can never perfectly analyze the semantics of data. Therefore use of sensitive data for targeted advertising cannot be avoided as long as search queries and other content are automatically analyzed.”
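Fuchs’s point is easy to see with a toy example—my own illustration, not anything Google actually runs. Suppose sensitive categories were screened out with a simple keyword filter; it both misses and overreaches:

```python
# Toy illustration (mine, not Google's): a naive keyword filter meant to
# keep "sensitive" queries out of ad targeting, showing how it misfires
# because keywords are not semantics.

SENSITIVE_KEYWORDS = {"religion", "cancer", "hiv"}

def is_sensitive(query):
    """Flag a query as sensitive if any word matches a blocklist."""
    return any(word in SENSITIVE_KEYWORDS for word in query.lower().split())

# Under-blocking: a clearly health-related query slips through,
# because no blocklisted keyword appears in it.
print(is_sensitive("chemotherapy side effects"))   # not flagged

# Over-blocking: a history essay topic gets flagged as sensitive.
print(is_sensitive("wars of religion in europe"))  # flagged
```

Real systems are far more sophisticated than a blocklist, but the underlying problem scales with them: any automatic analysis of content infers meaning imperfectly, so sensitive information leaks into targeting in both directions.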
Ultimately, Fuchs makes the bold claim that “the main form of privacy on Facebook is the opacity of capital’s use of personal user data based on its private appropriation” (p. 147), and I find it suggestive, even provocative. I’m willing to claim that Twitter, in pushing out information and making it easy to understand what they do with user data, is doing a better job with it than some of its contemporaries.
But that is only because the bar is so low.