Papers tagged ‘Censorship’

Inferring Mechanics of Web Censorship Around the World

I’ve talked a bunch about papers that investigate what is being censored online in various countries, but you might also want to know how it’s done. There are only a few ways it could be done, and this paper does a good job of laying them out:

  • By DNS name: intercept DNS queries either at the router or the local DNS relay, and return either a "no such host" error or the address of a server that will hand out errors for everything.

  • By IP address: in an intermediate router, discard packets intended for particular servers, and/or respond with TCP RST packets (which make the client disconnect) or forged responses. (In principle, an intermediate router could pretend to be the remote host for an entire TCP session, but it doesn’t seem that anyone does.)

  • By data transferred in cleartext: again in an intermediate router, allow the initial connection to go through, but if blacklisted keywords are detected then forge a TCP RST.

There are a few elaborations and variations, but those are the basic options if you are implementing censorship in the backbone of the network. The paper demonstrates that all are used. It could also, of course, be done at either endpoint, but that is much less common (though not unheard of) and the authors of this paper ruled it out of scope. It’s important to understand that the usual modes of encryption used on the ’net today (e.g. HTTPS) do not conceal either the DNS name or the IP address of the remote host, but do conceal the remainder of an HTTP request. Pages of an HTTPS-only website cannot be censored individually, but the entire site can be censored by its DNS name or server IP address. This is why Github was being DDoSed a few months ago to try to get them to delete repositories being used to host circumvention tools [1]: Chinese censors cannot afford to block the entire site, as it is too valuable to their software industry, but they have no way to block access to the specific repositories they don’t like.

Now, if you want to find out which of these scenarios is being carried out by any given censorious country, you need to do detailed network traffic logging, because at the application level, several of them are indistinguishable from the site being down or the network being unreliable. This also means that the censor could choose to be stealthy: if Internet users in a particular country expect to see an explicit message when they try to load a blocked page, they might assume that a page that always times out is just broken. [2] The research contribution of this paper is in demonstrating how you do that, through a combination of packet logging and carefully tailored probes from hosts in-country. They could have explained themselves a bit better: I’m not sure they bothered to try to distinguish packets being dropped at the border router from packets being dropped by a misconfigured firewall on the site itself, for instance. Also, I’m not sure whether it’s worth going to the trouble of packet logging, frankly. You should be able to get the same high-level information by comparing the results you get from country A with those you get from country B.
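
To make these distinctions concrete, here’s a rough sketch (mine, not the authors’; the field names, thresholds, and decision order are all invented) of how one might classify the likely blocking mechanism from the symptoms visible when fetching a single URL from inside the country, compared against an uncensored control measurement:

```python
# Hypothetical sketch: classify the likely censorship mechanism from the
# symptoms observed while fetching one URL, given the DNS answer seen
# in-country, the DNS answer seen from an uncensored control network, and
# what happened at the TCP/HTTP layer.  The fields and decision order are
# illustrative assumptions, not the paper's actual methodology.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    dns_answer: Optional[str]          # IP returned in-country; None = NXDOMAIN/timeout
    dns_answer_control: Optional[str]  # IP returned from the uncensored network
    tcp_connected: bool                # did the TCP handshake complete?
    got_rst: bool                      # did a TCP RST arrive mid-stream?
    http_status: Optional[int]         # HTTP status code, None if no response

def classify(obs: Observation) -> str:
    # DNS tampering: no answer, or an answer that disagrees with the control
    # (which could also just be a CDN; real studies compare many vantage points).
    if obs.dns_answer is None and obs.dns_answer_control is not None:
        return "DNS: query blackholed or NXDOMAIN injected"
    if obs.dns_answer is not None and obs.dns_answer != obs.dns_answer_control:
        return "DNS: forged answer (block server or garbage address)"
    # IP blocking: correct address, but the handshake never completes
    # (indistinguishable, from here, from the host simply being down).
    if not obs.tcp_connected:
        return "IP: packets dropped en route (or the host is just down)"
    # Keyword filtering: the connection works until the cleartext request
    # goes out, then a forged reset or a block page arrives.
    if obs.got_rst:
        return "Keyword: forged TCP RST after the request was sent"
    if obs.http_status in (403, 451):
        return "Overt block page returned"
    return "No censorship observed at this layer"

# Example: DNS agrees with the control and the handshake succeeds, but a RST
# arrives as soon as the HTTP request is sent -- keyword-based filtering.
print(classify(Observation("93.184.216.34", "93.184.216.34",
                           tcp_connected=True, got_rst=True, http_status=None)))
```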

Another common headache in this context is knowing whether the results you got from your measurement host truly reflect what a normal Internet user in the country would see. After all, you are probably using a commercial data center or academic network that may be under fewer restrictions. This problem is one of the major rationales for Encore, which I discussed a couple weeks ago [3]. This paper nods at that problem but doesn’t really dig into it. To be fair, they did use personal contacts to make some of their measurements, so those may have involved residential ISPs, but they are (understandably) vague about the details.

Tangler: A Censorship-Resistant Publishing System Based On Document Entanglements

Over the years there have been several attempts to build anonymous publication or distributed anonymous storage systems—usually they start with a peer-to-peer file sharing protocol not entirely unlike BitTorrent, and then build some combination of indexing, replication, encryption, and anonymity on top. All have at least one clever idea. None has achieved world domination. (We’ll know someone’s finally gotten it right when the web browsers start shipping native support for their protocol.)

Tangler is a relatively old example, and its one clever idea is what they call document entanglement. To understand document entanglement you have to know about something called k-of-n secret sharing. This is a mathematical technique that converts a secret into n shares. Each share is the same size as the original secret, possibly plus a little overhead. Anyone who has a copy of k of those n shares can reconstruct the original secret, but if they have even just one fewer, they can’t. k and n can be chosen arbitrarily. Secret sharing is normally not used for large secrets (like an entire document) because each share is the same size as the original, so you’ve just increased your overall storage requirement n times—but in a distributed document store like Tangler, you were going to do that anyway, because the document should remain retrievable even if some of the peers holding shares drop out of the network.

Document entanglement, then, is secret sharing with a clever twist: you arrange to have some of the n shares of your document be the same bitstring as existing shares for other documents. This is always mathematically possible, as long as fewer than k existing shares are used. This reduces the amount of data added to the system by each new document, but more importantly, it makes the correspondence between shares and documents many-to-many instead of many-to-one. Thus, operators can honestly say they do not know which documents are backed by which shares, and they have an incentive not to cooperate with deletion requests, since deleting one document may render many other documents inaccessible.
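
To make the mechanics concrete, here’s a toy sketch of k-of-n sharing plus the entanglement trick (my own illustration over a small prime field; Tangler’s actual construction operates on fixed-size blocks with different parameters):

```python
# Toy k-of-n secret sharing (Shamir's scheme over a prime field), plus the
# entanglement trick: reuse up to k-1 existing shares as points on the new
# document's polynomial.  Purely illustrative; Tangler's real construction
# works on fixed-size blocks and uses different parameters.

import random

P = 2**127 - 1   # a convenient prime; real deployments would use a larger field

def _interpolate(points, x):
    """Lagrange interpolation over GF(P), evaluated at x."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def make_shares(secret, k, n, fixed=()):
    """Split `secret` into n shares, any k of which reconstruct it.  `fixed`
    is an optional list of existing (x, y) shares to reuse verbatim (the
    entanglement trick); it must contain fewer than k points."""
    assert len(fixed) < k <= n
    # A degree-(k-1) polynomial is determined by k points: the secret at x=0,
    # the reused shares, and random filler points.
    points = [(0, secret % P)] + list(fixed)
    used = {x for x, _ in points}
    while len(points) < k:
        x = random.randrange(1, P)
        if x not in used:
            points.append((x, random.randrange(P)))
            used.add(x)
    shares = list(fixed)
    while len(shares) < n:
        x = random.randrange(1, P)
        if x not in used:
            shares.append((x, _interpolate(points, x)))
            used.add(x)
    return shares

def recover(any_k_shares):
    """Reconstruct the secret: evaluate the interpolated polynomial at x = 0."""
    return _interpolate(any_k_shares, 0)

# Split one document 3-of-5, then entangle a second document with two of the
# first one's shares, so those two shares now serve both documents.
doc1 = make_shares(123456789, k=3, n=5)
doc2 = make_shares(987654321, k=3, n=5, fixed=doc1[:2])
assert recover(doc1[:3]) == 123456789
assert recover(doc2[2:]) == 987654321   # three fresh shares suffice
```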

I am not convinced entanglement actually provides the security benefit claimed; deleting all n of the shares belonging to one document should cause other documents to lose no more than one share and thus not be permanently damaged. (The originators of those documents would of course want to generate new shares to preserve redundancy.) It is still probably worth doing just because it reduces the cost of adding new documents to the system, but security-wise it’s solving the wrong problem. What you really want here is: server operators should be unable to determine which documents they hold shares for, even if they know the metadata for those documents. (And yet, somehow, they must be able to hand out the right shares on request!) Similar things are possible, under the name private information retrieval, and people are trying to apply that to anonymous publication, but what I said one really wants here is even stronger than the usual definition of PIR, and I’m not sure it’s theoretically possible.

Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests

As I’ve mentioned a few times here before, one of the biggest problems in measurement studies of Web censorship is taking the measurement from the right place. The easiest thing (and this may still be difficult) is to get access to a commercial VPN exit or university server inside each country of interest. But commercial data centers and universities have ISPs that are often somewhat less aggressive about censorship than residential and mobile ISPs in the same country—we think. [1] And, if the country is big enough, it probably has more than one residential ISP, and there’s no reason to think they behave exactly the same. [2] [3] What we’d really like is to enlist spare CPU cycles on a horde of residential computers across all of the countries we’re interested in.

This paper proposes a way to do just that. The authors propose to add a script to globally popular websites which, when the browser is idle, runs tests of censorship. Thus, anyone who visits the website will be enlisted. The first half of the paper is a technical demonstration that this is possible, and that you get enough information out of it to be useful. Browsers put a bunch of restrictions on what network requests a script can make—you can load an arbitrary webpage in an invisible <iframe>, but you don’t get notified of errors and the script can’t see the content of the page; conversely, <img> can only load images, but a script can ask to be notified of errors. Everything else is somewhere in between. Nonetheless, the authors make a compelling case for being able to detect censorship of entire websites with high accuracy and minimal overhead, and a somewhat less convincing case for being able to detect censorship of individual pages (with lower accuracy and higher overhead). You only get a yes-or-no answer for each thing probed, but that is enough for many research questions that we can’t answer right now. Deployment is very easy: a simple matter of adding one more third-party script to websites that want to participate.

The second half of the paper is devoted to ethical and practical considerations. Doing this at all is controversial—in a box on the first page, above the title of the paper, there’s a statement from the SIGCOMM 2015 program committee, saying the paper almost got rejected because some reviewers felt it was unethical to do anything of the kind without informed consent by the people whose computers are enlisted to make measurements. SIGCOMM also published a page-length review by John Byers, saying much the same thing. Against this, the authors argue that informed consent in this case is of dubious benefit, since it does not reduce the risk to the enlistees, and may actually be harmful by removing any traces of plausible deniability. They also point out that many people would need a preliminary course in how Internet censorship works and how Encore measures it before they could make an informed choice about whether to participate in this research. Limiting the pool of enlistees to those who already have the necessary technical background would dramatically reduce the scale and scope of measurements. Finally they observe that the benefits of collecting this data are clear, whereas the risks are nebulous. In a similar vein, George Danezis wrote a rebuttal of the public review, arguing that the reviewers’ concerns are based on a superficial understanding of what ethical research in this area looks like.

Let’s be concrete about the risks involved. Encore modifies a webpage such that web browsers accessing it will, automatically and invisibly to the user, also access a number of unrelated webpages (or resources). By design, those unrelated webpages contain material which is considered unacceptable, perhaps to the point of illegality, in at least some countries. Moreover, it is known that these countries mount active MITM attacks on much of the network traffic exiting the country, precisely to detect and block access to unacceptable material. Indeed, the whole point of the exercise is to provoke an observable response from the MITM, in order to discover what it will and won’t respond to.

The MITM has the power to do more than just block access. It almost certainly records the client IP address of each browser that accesses undesirable material, and since it’s operated by a state, those logs could be used to arrest and indict people for accessing illegal material. Or perhaps the state would just cut off their Internet access, which would be a lesser harm but still a punishment. It could also send back malware instead of the expected content (we don’t know if that has ever happened in real life, but very similar things have [4]), or turn around and mount an attack on the site hosting the material (this definitely has happened [5]). It could also figure out that certain accesses to undesirable material are caused by Encore and ignore them, causing the data collected to be junk, or it could use Encore itself as an attack vector (i.e. replacing the Encore program with malware).

In addition to the state MITM, we might also want to worry about other adversaries in a position to monitor user behavior online, such as employers, compromised coffee shop WiFi routers, and user-tracking software. Employers may have their own list of material that people aren’t supposed to access using corporate resources. Coffee shop WiFi is probably interested in finding a way to turn your laptop into a botnet zombie; any unencrypted network access is a chance to inject some malware. User-tracking software might become very confused about what someone’s demographic is, and start hitting them with ads that relate to whatever controversial topic Encore is looking for censorship of. (This last might actually be a Good Thing, considering the enormous harms behavioral targeting can do. [6])

All of these are harms to someone. It’s important to keep in mind that, except for poisoning the data collected by Encore (harm to the research itself), all of them can happen in the absence of Encore. Malware, ad networks, embedded videos, embedded like buttons, third-party resources of any kind: all of these can and do cause a client computer to access material without its human operator’s knowledge or consent, including accesses to material that some countries consider undesirable. Many of them also offer an active MITM the opportunity to inject malware.

The ethical debate over this paper has largely focused on increased risk of legal, or quasilegal, sanctions taken against people whose browsers were enlisted to run Encore tests. I endorse the authors’ observation that informed consent would actually make that risk worse. Because there are so many reasons a computer might contact a network server without its owner’s knowledge, people already have plausible deniability regarding accesses to controversial material ("I never did that, it must have been a virus or something"). If Encore told its enlistees what it was doing and gave them a chance to opt out, it would take that away.

Nobody involved in the debate knows how serious this risk really is. We do know that many countries are not nearly as aggressive about filtering the Internet as they could be, [7] so it’s reasonable to think they can’t be bothered to prosecute people just for an occasional attempt to access stuff that is blocked. It could still be that they do prosecute people for bulk attempts to access stuff that is blocked, but Encore’s approach—many people doing a few tests—would tend to avoid that. But there’s enough uncertainty that I think the authors should be talking to people in a position to know for certain: lawyers and activists from the actual countries of interest. There is not one word either in the papers or the reviews to suggest that anyone has done this. The organizations that the authors are talking to (Citizen Lab, Oxford Internet Institute, the Berkman Center) should have appropriate contacts already or be able to find them reasonably quickly.

Meanwhile, all the worry over legal risks has distracted from worrying about the non-legal risks. The Encore authors are fairly dismissive of the possibility that the MITM might subvert Encore’s own code or poison the results; I think that’s a mistake. They consider the extra bandwidth costs Encore incurs, but they don’t consider the possibility of exposing the enlistee to malware (in the cases where Encore loads an entire page). More thorough monitoring and reportage on Internet censorship might cause the censor to change its behavior, and not necessarily for the better—for instance, if it’s known that some ISPs are less careful about their filtering, that might trigger sanctions against them. These are just the things I can think of off the top of my head.

In closing, I think the controversy over this paper is more about the community not having come to an agreement about its own research ethics than it is about the paper itself. If you read the paper carefully, you’ll see that the IRB at each author’s institution did not review this research; they declined to engage with it. This was probably a correct decision from the board’s point of view, because an IRB’s core competency is medical and psychological research. (They’ve come in for criticism in the past for reviewing sociological studies as if they were clinical trials.) They do not, in general, have the background or expertise to review this kind of research. There are efforts underway to change that: for instance, there was a Workshop on Ethics in Networked Systems Research at the very same conference where this paper was presented. (I wish I could have attended.) Development of a community consensus here will, hopefully, lead to better handling of future, similar papers.

Detecting Internet Filtering from Geographic Time Series

We’re picking back up with a paper that’s brand new—so new that it exists only as an arXiv preprint and I don’t know if it is planned to be published anywhere. It probably hasn’t gone through formal peer review yet.

Wright and colleagues observe that because Tor is commonly used to evade censorship, changes in the number of people using Tor from any given country are a signal of a change in the censorship régime in that country. This isn’t a new idea: the Tor project itself has been doing something similar since 2011. What this paper does is present an improved algorithm for detecting such changes. It uses PCA to compare the time series of Tor active users across countries. The idea is that if there’s a change in Tor usage worldwide, that probably doesn’t indicate censorship, but a change in just a few countries is suspicious. To model this using PCA, they tune the number of principal components so that the projected data matrix is well-divided into what they call normal and anomalous subspaces; large components in the anomalous subspace for any data vector indicate that that country at that time is not well-predicted by all the other countries, i.e. something fishy is going on.
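
To illustrate the subspace idea, here’s a rough sketch (mine, not the paper’s; the standardization, the choice of r, and the toy data are all invented) of flagging a country-day whose Tor usage is poorly predicted by the worldwide pattern:

```python
# Rough sketch of residual-subspace anomaly detection on a countries-by-days
# matrix of Tor user counts.  The standardization, the choice of r, and the
# toy data are my own assumptions, not the paper's actual procedure.

import numpy as np

def anomaly_scores(counts, r=1):
    """counts: array of shape (n_countries, n_days).  Returns squared
    residuals after projecting each day's country-vector onto the top-r
    principal components (the "normal" subspace); large values are anomalous."""
    # Standardize each country's series so big countries don't dominate.
    X = counts - counts.mean(axis=1, keepdims=True)
    X = X / (counts.std(axis=1, keepdims=True) + 1e-9)
    # Principal directions in country-space: eigenvectors of the country
    # covariance matrix (columns of U, sorted by decreasing eigenvalue).
    U, _, _ = np.linalg.svd(X @ X.T)
    normal = U[:, :r]
    residual = X - normal @ (normal.T @ X)   # component in the anomalous subspace
    return residual ** 2                     # per-country, per-day anomaly scores

# Toy data: five countries with correlated usage; country 3 suddenly loses
# 95% of its users on day 60 (a blocking event the detector should flag).
rng = np.random.default_rng(0)
base = 1000 + 50 * np.sin(np.arange(100) / 7)
counts = np.vstack([s * base + rng.normal(0, 20, 100) for s in (1, 2, 5, 0.5, 3)])
counts[3, 60:] *= 0.05
scores = anomaly_scores(counts, r=1)
country, day = np.unravel_index(scores.argmax(), scores.shape)
print(f"most anomalous: country {country}, day {day}")
```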

They show that their algorithm can pick out previously known cases where a change in Tor usage is correlated with a change in censorship, and that its top ten most anomalous countries are mostly the countries one would expect to be suspicious by this metric—but also a couple that nobody had previously suspected, which they highlight as a matter needing further attention.

PCA used as an anomaly detector is new to me. It seems like they could be extracting more information from it than they are. The graphs in this paper show what’s probably a global jump in Tor usage in mid-2013; this has a clear explanation, and they show that their detector ignores it (as it’s supposed to), but can they make their detector call it out separately from country-specific events? PCA should be able to do that. Similarly, it seems quite probable that the ongoing revolutions and wars in the Levant and North Africa are causing correlated changes to the degree of censorship region-wide; PCA should be able to pull that out as a separate explanatory variable. These would both involve taking a closer look at the normal subspace and what each of its dimensions means.

It also seems to me that a bit of preprocessing, using standard time series decomposition techniques, would clean up the analysis and make its results easier to interpret. There’s not one word about that possibility in the paper, which seems like a major omission; decomposition is the first thing that anyone who knows anything about time series analysis would think of. In this case, I think seasonal variation should definitely be factored out, and removing linear per-country trends might also be helpful.
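
A first pass at that preprocessing, assuming pandas and statsmodels are available and guessing that the dominant cycle in daily counts is weekly, might look like this:

```python
# Sketch of the suggested preprocessing: strip a weekly seasonal component
# and the slow trend from one country's daily Tor-user series, leaving a
# residual that is easier to scan for censorship events.  Assumes pandas and
# statsmodels; the series itself is synthetic.

import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

days = pd.date_range("2015-01-01", periods=120, freq="D")
t = np.arange(120)
users = pd.Series(
    5000 + 3 * t                                    # slow linear growth
    + 200 * np.sin(2 * np.pi * t / 7)               # weekly cycle
    + np.random.default_rng(1).normal(0, 50, 120),  # noise
    index=days)

decomp = seasonal_decompose(users, model="additive", period=7)
residual = decomp.resid   # the series minus its trend and seasonal components
print(residual.dropna().describe())
```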

Whiskey, Weed, and Wukan on the World Wide Web

The subtitle of today’s paper is On Measuring Censors’ Resources and Motivations. It’s a position paper, whose goal is to get other researchers to start considering how economic constraints might affect the much-hypothesized arms race or tit-for-tat behavior of censors and people reacting to censorship: as they say,

[…] the censor and censored have some level of motivation to accomplish various goals, some limited amount of resources to expend, and real-time deadlines that are due to the timeliness of the information that is being spread.

They back up their position by presenting a few pilot studies, of which the most compelling is the investigation of keyword censorship on Weibo (a Chinese microblogging service). They observe that searches are much more aggressively keyword-censored than posts—that is, for many examples of known-censored keywords, one is permitted to make a post on Weibo containing that keyword, but searches for that keyword will produce either no results or very few results. (They don’t say whether unrelated searches will turn up posts containing censored keywords.) They also observe that, for some keywords that are not permitted to be posted, the server only bothers checking for variations on the keyword if the user making the post has previously tried to post the literal keyword. (Again, the exact scope of the phenomenon is unclear—does an attempt to post any blocked keyword make the server check more aggressively for variations on all blocked keywords, or just that one? How long does this escalation last?) And finally, whoever is maintaining the keyword blacklists at Weibo seems to care most about controlling the news cycle: terms associated with breaking news that the government does not like are immediately added to the blacklist, and removed again just as quickly when the event falls out of the news cycle or is resolved positively. They give detailed information about this phenomenon for one news item, the Wukan incident, and cite several other keywords that seem to have been treated the same.

They compare Weibo’s behavior to similar keyword censorship by chat programs popular in China, where the same patterns appear, but whoever is maintaining the lists is sloppier and slower about it. This is clear evidence that the lists are not maintained centrally (by some government agency) and they suggest that many companies are not trying very hard:

At times, we often suspected that a keyword blacklist was being typed up by an over-worked college intern who was given vague instructions to filter out anything that might be against the law.

Sadly, I haven’t seen much in the way of people stepping up to the challenge presented, designing experiments to probe the economics of censorship. You can see similar data points in other studies of China [1] [2] [3] (it is still the case, as far as I know, that ignoring spurious TCP RST packets is sufficient to evade several aspects of the Great Firewall), and in reports from other countries. It is telling, for instance, that Pakistani censors did not bother to update their blacklist of porn sites to keep up with a shift in viewing habits. [4] George Danezis has been talking about the economics of anonymity and surveillance for quite some time now [5] [6] but that’s not quite the same thing. I mentioned above some obvious follow-on research just for Weibo, and I don’t think anyone’s done that. Please tell me if I’ve missed something.

Automated Detection and Fingerprinting of Censorship Block Pages

This short paper, from IMC last year, presents a re-analysis of data collected by the OpenNet Initiative on overt censorship of the Web by a wide variety of countries. Overt means that when a webpage is censored, the user sees an error message which unambiguously informs them that it’s censored. (A censor can also act deniably, giving the user no proof that censorship is going on—the webpage just appears to be broken.) The goal of this reanalysis is to identify block pages (the error messages) automatically, distinguish them from normal pages, and distinguish them from each other—a new, unfamiliar format of block page may indicate a new piece of software is in use to do the censoring.

The chief finding is that block pages can be reliably distinguished from normal pages just by looking at their length: block pages are typically much shorter than normal. This is to be expected, seeing that they are just an error message. What’s interesting, though, is that this technique works better than techniques that look in more detail at the contents of the page. I’d have liked to see some discussion of what kinds of misidentification appear for each technique, but there probably wasn’t room for that. Length is not an effective tactic for distinguishing block pages from each other, but term frequency is (they don’t go into much detail about that).
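
The core of the length heuristic could be as small as this sketch (the threshold and the test strings are invented; the paper tunes and validates against labeled data):

```python
# Toy version of the length heuristic: a fetched page whose body is much
# shorter than the known-good copy of the same URL is probably a block page.
# The 30% threshold is an invented illustration, not the paper's tuned value.

def looks_like_block_page(measured_body: bytes, reference_body: bytes,
                          ratio: float = 0.3) -> bool:
    """True if the measured page is dramatically shorter than the reference."""
    if not reference_body:
        return False
    return len(measured_body) < ratio * len(reference_body)

reference = b"<html>" + b"x" * 50_000 + b"</html>"   # a normal content page
blocked = b"<html><body>This site has been blocked by order of ...</body></html>"
print(looks_like_block_page(blocked, reference))     # True
print(looks_like_block_page(reference, reference))   # False
```

Note that an ordinary 404 page would trip this test just as easily, which is exactly the ambiguity discussed next.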

One thing that’s really not clear is how they distinguish block pages from ordinary HTTP error pages. They mention that ordinary errors introduce significant noise in term-frequency clustering, but they don’t explain how they weeded them out. It might have been done manually; if so, that’s a major hole in the overall automated-ness of this process.

Censorship in the Wild: Analyzing Internet Filtering in Syria

Last week we looked at a case study of Internet filtering in Pakistan; this week we have a case study of Syria. (I think this will be the last such case study I review, unless I come across a really compelling one; there’s not much new I have to say about them.)

This study is chiefly interesting for its data source: a set of log files from the Blue Coat brand DPI routers that are allegedly used [1] [2] to implement Syria’s censorship policy, covering a 9-day period in July and August of 2011, leaked by the Telecomix hacktivist group. Assuming that these log files are genuine, this gives the researchers what we call ground truth: they can be certain that sites appearing in the logs are, or are not, censored. (This doesn’t mean they know the complete policy, though. The routers’ blacklists could include sites or keywords that nobody tried to visit during the time period covered by the logs.)

With ground truth it is possible to make more precise deductions from the phenomena. For instance, when the researchers see URLs of the form http://a1b2.cdn.example/adproxy/cyber/widget blocked by the filter, they know (because the logs say so) that the block is due to a keyword match on the string "proxy", rather than the domain name, the IP address, or any other string in the HTTP request. This, in turn, enables them to describe the censorship policy quite pithily: Syrian dissident political organizations, anything and everything to do with Israel, instant messaging tools, and circumvention tools are all blocked. This was not possible in the Pakistani case—for instance, they had to guess at the exact scope of the porn filter.
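
As a toy illustration of that style of inference (not the authors’ code; the blacklists, log fields, and URL handling are invented), one can check which candidate rule would explain a logged block:

```python
# Hypothetical sketch: given a blocked request from the logs, check which
# candidate filtering rule would have fired.  The blacklists and the URL are
# invented; the real rule set is only visible indirectly through the logs.

from urllib.parse import urlparse

DOMAIN_BLACKLIST = {"badsite.example"}
IP_BLACKLIST = {"198.51.100.7"}
KEYWORD_BLACKLIST = {"proxy", "israel", "hopto.org"}

def explain_block(url: str, dest_ip: str) -> list[str]:
    parsed = urlparse(url)
    reasons = []
    if parsed.hostname in DOMAIN_BLACKLIST:
        reasons.append(f"domain match: {parsed.hostname}")
    if dest_ip in IP_BLACKLIST:
        reasons.append(f"IP match: {dest_ip}")
    for kw in KEYWORD_BLACKLIST:
        if kw in url.lower():
            reasons.append(f"keyword match: {kw!r}")
    return reasons or ["no rule explains this block"]

# The ad-network URL from the example above is blocked only because the
# substring "proxy" happens to appear in its path.
print(explain_block("http://a1b2.cdn.example/adproxy/cyber/widget", "203.0.113.9"))
```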

Because the leaked logs cover only a very short time window, it’s not possible to say anything about the time evolution of Syrian censorship, which is unfortunate, considering the tumultuous past few years that the country has had.

The leak is from several years ago, and the logs show heavy reliance on keyword filtering; it would be interesting to know whether this has changed since, what with the increasing use of HTTPS making keyword filtering less useful. For instance, since 2013 Facebook has defaulted to HTTPS for all users. This would have made it much harder for Syria to block access to specific Facebook pages, as they were doing during the period covered by this study.

A Look at the Consequences of Internet Censorship Through an ISP Lens

When a national government decides to block access to an entire category of online content, naturally people who wanted to see that content—whatever it is—will try to find workarounds. Today’s paper is a case study of just such behavior. The authors were given access to a collection of bulk packet logs taken by an ISP in Pakistan. The ISP had captured a day’s worth of traffic on six days ranging from October 2011 through August 2013, a period that included two significant changes to the national censorship policy. In late 2011, blocking access to pornography became a legal mandate (implemented as a blacklist of several thousand sites, maintained by the government and disseminated to ISPs in confidence—the authors were not allowed to see this blacklist). In mid-2012, access to Youtube was also blocked, in retaliation for hosting anti-Islamic videos [1]. The paper analyzes the traffic in aggregate to understand broad trends in user behavior and how these changed in response to the censorship.

The Youtube block triggered an immediate and obvious increase in encrypted traffic, which the authors attribute to an increased use of circumvention tools—the packet traces did not record enough information to identify exactly what tool, or to discriminate circumvention from other encrypted traffic, but it seems a reasonable assumption. Over the next several months, alternative video sharing/streaming services rose in popularity; as of the last trace in the study, they had taken over roughly 80% of the market share formerly held by Youtube.

Users responded quite differently to the porn block: roughly half of the inbound traffic formerly attributable to porn just disappeared, but the other half was redirected to different porn sites that didn’t happen to be on the official blacklist. The censorship authority did not react by adding the newly popular sites to the blacklist. Perhaps a 50% reduction in overall consumption of porn was good enough for the politicians who wanted the blacklist in the first place.

The paper also contains some discussion of the mechanism used to block access to censored domains. This confirms prior literature [2] so I’m not going to go into it in great detail; we’ll get to those papers eventually. One interesting tidbit (also previously reported) is that Pakistan has two independent filters, one implemented by local ISPs, which falsifies DNS responses, and another operating in the national backbone, which forges TCP RSTs and/or HTTP redirections.

The authors don’t talk much about why user response to the Youtube block was so different from the response to the porn block, but it’s evident from their discussion of what people do right after they hit a block in each case. This is very often a search engine query (unencrypted, so visible in the packet trace). For Youtube, people either search for proxy/circumvention services, or they enter keywords for the specific video they wanted to watch, hoping to find it elsewhere, or at least a transcript. For porn, people enter keywords corresponding to a general type of material (sex act, race and gender of performers, that sort of thing), which suggests that they don’t care about finding a specific video, and will be content with whatever they find on a site that isn’t blocked. This is consistent with analysis of viewing patterns on a broad-spectrum porn hub site [3]. It’s also consistent with the way Youtube is integrated into online discourse—people very often link to or even embed a specific video on their own website, in order to talk about it; if you can’t watch that video you can’t participate in the conversation. I think this is really the key finding of the paper, since it gets at when people will go to the trouble of using a circumvention tool.

What the authors do talk about is the consequences of these blocks on the local Internet economy. In particular, Youtube had donated a caching server to the ISP in the case study, so that popular videos would be available locally rather than clogging up international data channels. With the block and the move to proxied, encrypted traffic, the cache became useless and the ISP had to invest in more upstream bandwidth. On the other hand, some of the video services that came to substitute for Youtube were Pakistani businesses, so that was a net win for the local economy. This probably wasn’t intended by the Pakistani government, but in similar developments in China [4] and Russia [5], import substitution is clearly one of the motivating factors. From the international-relations perspective, that’s also highly relevant: censorship only for ideology’s sake probably won’t motivate a bureaucracy as much as censorship that’s seen to be in the economic interest of the country.

Regional Variation in Chinese Internet Filtering

This is one of the earlier papers that looked specifically for regional variation in China’s internet censorship; as I mentioned when reviewing Large-scale Spatiotemporal Characterization of Inconsistencies in the World’s Largest Firewall, assuming that censorship is monolithic is unwise in general and especially so for a country as large, diverse, and technically sophisticated as China. This paper concentrates on variation in DNS-based blockade: they probed 187 DNS servers in 29 Chinese cities (concentrated, like the population, toward the east of the country) for a relatively small number of sites, both highly likely and highly unlikely to be censored within China.

The results reported are maybe better described as inconsistencies among DNS servers than regional variation. For instance, there are no sites called out as accessible from one province but not another. Rather, roughly the same set of sites is blocked in all locales, but all of the blocking is somewhat leaky, and some DNS servers are more likely to leak—regardless of the site—than others. The type of DNS response when a site is blocked also varies from server to server and site to site; observed behaviors include no response at all, an error response, or (most frequently) a success response with an incorrect IP address. Newer papers (e.g. [1] [2]) have attempted to explain some of this in terms of the large-scale network topology within China, plus periodic outages when nothing is filtered at all, but I’m not aware of any wholly compelling analysis (and short of a major leak of internal policy documents, I doubt we can ever have one).
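
The probing itself is conceptually simple; a sketch along these lines (assuming the dnspython package, with placeholder resolver addresses) captures the per-server comparison being reported:

```python
# Sketch of the sort of probe used here: ask several resolvers inside the
# country for the same name and compare their answers with what an
# uncensored resolver returns.  Assumes the dnspython package; the resolver
# addresses below are placeholders, not real measurement targets.

import dns.resolver
import dns.exception

def probe(nameserver: str, name: str, timeout: float = 3.0) -> str:
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [nameserver]
    res.lifetime = timeout
    try:
        answer = res.resolve(name, "A")
        return ",".join(sorted(r.address for r in answer))
    except dns.resolver.NXDOMAIN:
        return "NXDOMAIN"
    except dns.exception.Timeout:
        return "timeout"
    except dns.exception.DNSException as e:
        return f"error:{type(e).__name__}"

control = probe("9.9.9.9", "example.com")        # uncensored reference answer
for server in ["203.0.113.1", "203.0.113.2"]:    # placeholder in-country resolvers
    answer = probe(server, "example.com")
    verdict = "consistent" if answer == control else "inconsistent"
    print(server, answer, verdict)
```

In practice, disagreement with the control can also just mean a CDN handing out region-specific addresses, which is one reason to probe sites that are highly unlikely to be censored as well.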

There’s also an excellent discussion of the practical and ethical problems with this class of research. I suspect this was largely included to justify the author’s choice to only look at DNS filtering, despite its being well-known that China also uses several other techniques for online censorship. It nonetheless provides valuable background for anyone wondering about methodological choices in this kind of paper. To summarize:

  • Many DNS servers accept queries from the whole world, so they can be probed directly from a researcher’s computer; however, they might vary their response depending on the apparent location of the querent, their use of UDP means it’s hard to tell censorship by the server itself from censorship by an intermediate DPI router, and there’s no way to know the geographic distribution of their intended clientele.

  • Studying most other forms of filtering requires measurement clients within the country of interest. These can be dedicated proxy servers of various types, or computers volunteered for the purpose. Regardless, the researcher risks inflicting legal penalties (or worse) on the operators of the measurement clients; even if the censorship authority normally takes no direct action against people who merely try to access blocked material, they might respond to a sufficiently high volume of such attempts.

  • Dedicated proxy servers are often blacklisted by sites seeking to reduce their exposure to spammers, scrapers, trolls, and DDoS attacks; a study relying exclusively on such servers will therefore tend to overestimate censorship.

  • Even in countries with a strong political commitment to free expression, there are some things that are illegal to download or store; researchers must take care not to do so, and the simplest way to do that is to avoid retrieving anything other than text.

Censorship Resistance: Let a Thousand Flowers Bloom?

This short paper presents a simple game-theoretic analysis of a late stage of the arms race between a censorious national government and the developers of tools for circumventing that censorship. Keyword blocking, IP-address blocking, and protocol blocking for known circumvention protocols have all been instituted and then evaded. The circumvention tool is now steganographically masking its traffic so it is indistinguishable from some commonly-used, innocuous cover protocol or protocols; the censor, having no way to unmask this traffic, must either block all use of the cover protocol, or give up.

The game-theoretic question is, how many cover protocols should the circumvention tool implement? Obviously, if there are several protocols, then the tool is resilient as long as not all of them are blocked. On the other hand, implementing more cover protocols requires more development effort, and increases the probability that some of them will be imperfectly mimicked, making the tool detectable. [1] This might seem like an intractable question, but the lovely thing about game theory is that it lets you demonstrate that nearly all the fine details of each player’s utility function are irrelevant. The answer: if there’s good reason to believe that protocol X will never be blocked, then the tool should only implement protocol X. Otherwise, it should implement several protocols, based on some assessment of how likely each protocol is to be blocked.
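
Here’s a back-of-the-envelope version of that calculation (the numbers and the payoff function are invented; the paper’s model is more careful about both players’ utilities):

```python
# Back-of-the-envelope version of the protocol-choice question: given a
# per-protocol probability of being blocked and a per-protocol implementation
# cost, which subset of cover protocols maximizes the tool's expected payoff?
# All numbers and the payoff function are invented for illustration.

from itertools import combinations

protocols = {            # name: (probability it gets blocked, implementation cost)
    "https": (0.05, 3.0),
    "skype": (0.30, 2.0),
    "email": (0.50, 1.0),
}
VALUE_IF_USABLE = 10.0   # payoff if at least one cover protocol survives

def expected_payoff(subset):
    p_all_blocked = 1.0
    cost = 0.0
    for name in subset:
        p_block, c = protocols[name]
        p_all_blocked *= p_block   # assumes blocking decisions are independent
        cost += c
    return VALUE_IF_USABLE * (1 - p_all_blocked) - cost

best = max(
    (s for r in range(1, len(protocols) + 1) for s in combinations(protocols, r)),
    key=expected_payoff)
print(best, round(expected_payoff(best), 2))
```

With these made-up numbers the single nearly-unblockable protocol wins outright, matching the first half of the paper’s answer; make the blocking probabilities more even and the maximum shifts to implementing several protocols.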

In real life there probably won’t be a clear answer to "will protocol X ever be blocked?" As the authors themselves point out, the censors can change their minds about that quite abruptly, in response to political conditions. So, in real life several protocols will be needed, and that part of the analysis in this paper is not complete enough to give concrete advice. Specifically, it offers a strategy that is stable at the Nash equilibrium (that is, neither party can improve their outcome by unilaterally changing strategy), but, again, the censors might abruptly change their utility function in response to political conditions, disrupting the equilibrium. (The circumvention tool’s designers are probably philosophically committed to free expression, so their utility function can be assumed to be stable.) This requires an adaptive strategy. The obvious adaptive strategy is for the tool to use only one or two protocols at any given time (using more than one protocol may also improve the verisimilitude of the overall traffic as seen by the censors), but implement several others and be able to activate them if one of the active protocols stops working. The catch here is that the change in behavior may itself reveal the tool to the censor. Also, it requires all the engineering effort of implementing multiple protocols, but some fraction of that may go to waste.

The paper also doesn’t consider what happens if the censor is capable of disrupting a protocol in a way that only mildly inconveniences normal users of that protocol, but renders the circumvention tool unusable. (For instance, the censor might be able to remove the steganography without necessarily knowing that it is there. [2]) I think this winds up being equivalent to the censor being able to block that protocol without downside, but I’m not sure.