Papers tagged ‘China’

Regional Variation in Chinese Internet Filtering

This is one of the earlier papers that looked specifically for regional variation in China’s internet censorship; as I mentioned when reviewing Large-scale Spatiotemporal Characterization of Inconsistencies in the World’s Largest Firewall, assuming that censorship is monolithic is unwise in general, and especially so for a country as large, diverse, and technically sophisticated as China. This paper focuses on variation in DNS-based blocking: the authors probed 187 DNS servers in 29 Chinese cities (concentrated, like the population, toward the east of the country) for a relatively small list of sites, some highly likely and some highly unlikely to be censored within China.

The reported results are perhaps better described as inconsistencies among DNS servers than as regional variation. For instance, there are no sites called out as accessible from one province but not another. Rather, roughly the same set of sites is blocked in all locales, but all of the blocking is somewhat leaky, and some DNS servers are more likely to leak—regardless of the site—than others. The type of DNS response when a site is blocked also varies from server to server and site to site; observed behaviors include no response at all, an error response, or (most frequently) a success response with an incorrect IP address. Newer papers (e.g. [1] [2]) have attempted to explain some of this in terms of the large-scale network topology within China, plus periodic outages when nothing is filtered at all, but I’m not aware of any wholly compelling analysis (and short of a major leak of internal policy documents, I doubt we can ever have one).
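
To make that taxonomy of blocking behaviors concrete, here is a minimal sketch, assuming the dnspython library, of how one might classify a single resolver’s answer into those three buckets. This is not the paper’s measurement code; the resolver address and domain names are placeholders.

    # Minimal sketch (not the paper's code): classify one resolver's answer
    # to one query into the behaviors described above. The resolver address
    # and the domains in the usage loop are placeholders.
    import dns.exception
    import dns.message
    import dns.query
    import dns.rcode
    import dns.rdatatype

    def classify_response(resolver_ip, domain, timeout=5.0):
        query = dns.message.make_query(domain, "A")
        try:
            response = dns.query.udp(query, resolver_ip, timeout=timeout)
        except dns.exception.Timeout:
            return "no response at all"
        if response.rcode() != dns.rcode.NOERROR:
            return "error response (%s)" % dns.rcode.to_text(response.rcode())
        addresses = [rdata.address
                     for rrset in response.answer
                     for rdata in rrset
                     if rdata.rdtype == dns.rdatatype.A]
        if not addresses:
            return "success but empty answer"
        # Whether this "success" is genuine or a forged address can only be
        # judged against ground truth resolved outside the censored network.
        return "success: " + ", ".join(addresses)

    # Hypothetical usage; 203.0.113.53 stands in for a resolver under study.
    for domain in ("example.com", "probably-blocked.example"):
        print(domain, classify_response("203.0.113.53", domain))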

There’s also an excellent discussion of the practical and ethical problems with this class of research. I suspect this was largely included to justify the author’s choice to only look at DNS filtering, despite its being well-known that China also uses several other techniques for online censorship. It nonetheless provides valuable background for anyone wondering about methodological choices in this kind of paper. To summarize:

  • Many DNS servers accept queries from the whole world, so they can be probed directly from a researcher’s computer. However, they might vary their responses depending on the apparent location of the querier; because the queries travel over UDP, it is hard to tell censorship by the server itself from censorship by an intermediate DPI router (the sketch after this list illustrates one way to probe for the difference); and there is no way to know the geographic distribution of their intended clientele.

  • Studying most other forms of filtering requires measurement clients within the country of interest. These can be dedicated proxy servers of various types, or computers volunteered for the purpose. Regardless, the researcher risks inflicting legal penalties (or worse) on the operators of the measurement clients; even if the censorship authority normally takes no direct action against people who merely try to access blocked material, they might respond to a sufficiently high volume of such attempts.

  • Dedicated proxy servers are often blacklisted by sites seeking to reduce their exposure to spammers, scrapers, trolls, and DDoS attacks; a study relying exclusively on such servers will therefore tend to overestimate censorship.

  • Even in countries with a strong political commitment to free expression, there are some things that are illegal to download or store; researchers must take care not to do so, and the simplest way to do that is to avoid retrieving anything other than text.
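
Regarding the first point above, one heuristic (my illustration, not something from this paper) for separating on-path injection from the resolver’s own behavior is to send a single UDP query and keep listening for every answer that arrives within a window: injected answers often show up alongside, or instead of, the real one, and can even arrive when the target IP runs no resolver at all. A rough sketch using scapy, which needs raw-socket privileges; the target address and domain are placeholders.

    # Rough sketch (not from the paper): send one DNS query over UDP and
    # collect every reply that arrives within the timeout, instead of only
    # the first. Conflicting answers in a single probe, or answers from an
    # address that runs no resolver, point to on-path injection rather than
    # behavior of the resolver itself.
    # Needs raw-socket privileges; the target IP and domain are placeholders.
    from scapy.all import DNS, DNSQR, IP, UDP, sr

    def collect_dns_replies(target_ip, domain, timeout=3):
        probe = (IP(dst=target_ip)
                 / UDP(sport=34567, dport=53)
                 / DNS(rd=1, qd=DNSQR(qname=domain)))
        # multi=True keeps listening for further matching replies after the
        # first one instead of returning immediately.
        answered, _ = sr(probe, multi=True, timeout=timeout, verbose=False)
        return [(reply[IP].src, reply[DNS].summary())
                for _, reply in answered if reply.haslayer(DNS)]

    # Hypothetical usage: more than one reply, or replies that disagree,
    # deserve a closer look.
    print(collect_dns_replies("203.0.113.53", "probably-blocked.example"))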

Large-scale Spatiotemporal Characterization of Inconsistencies in the World’s Largest Firewall

Lots of academic research on Internet censorship treats the countries doing the censorship as monoliths: that is, measurements will typically only be conducted from one client in one fixed location (often a commercial VPS or colocation provider), and the results are assumed to reflect the situation countrywide. When you’re talking about a country as large as China, that assumption seems poorly justified, and there have been several studies aiming to collect more fine-grained information. [1] [2] [3] This paper is in that line of research, with a systematic survey of censorship of one application (Tor) in roughly 150 different locations across China, repeating the measurement at hourly intervals for 27 days. The measurement clients are diverse both in terms of geographic location and network topology.

The results largely confirm what was already suspected. This particular application is indeed blocked consistently across China, with the possible exception of CERNET (China Education and Research Network), whose filtering is less aggressive. The filtering occurs at major China-wide IXPs, as suspected from previous studies. The firewall appears to operate primarily by dropping inbound traffic to China; the authors don’t try to explain this, but earlier related research [4] points out that the firewall must wait to see a TCP SYN/ACK packet before it can successfully forge RST packets in both directions. Finally, there is concrete evidence for failures, lasting hours at a time, uncorrelated with geographic location, where traffic passes uncensored. This was anecdotally known to happen but not previously studied in any kind of detail, to my knowledge. This paper doesn’t speculate at all on why the failures happen or how we could figure that out, which I think is unfortunate.
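
To make the directionality argument concrete: from a vantage point outside China, sending a single SYN toward a host inside and recording everything that comes back already distinguishes the candidate behaviors. Silence is consistent with the SYN (or the returning SYN/ACK) being dropped in transit, a clean SYN/ACK means no visible interference, and a SYN/ACK accompanied by RSTs looks like injection rather than dropping. Here is a rough scapy illustration of that distinction; it is my sketch, not the paper’s methodology, the target is a placeholder, and raw-socket privileges are required.

    # Rough illustration (not the paper's methodology): send one SYN and
    # collect everything that comes back for a few seconds. Silence is
    # consistent with packets being dropped in transit; a clean SYN/ACK means
    # the handshake was not interfered with; a SYN/ACK accompanied by RSTs
    # looks like injection rather than dropping.
    # Placeholder target; needs raw-socket privileges.
    from scapy.all import IP, TCP, sr

    def probe_handshake(target_ip, target_port, timeout=5):
        syn = IP(dst=target_ip) / TCP(sport=40123, dport=target_port, flags="S")
        answered, _ = sr(syn, multi=True, timeout=timeout, verbose=False)
        flags_seen = [reply.sprintf("%TCP.flags%")
                      for _, reply in answered if reply.haslayer(TCP)]
        if not flags_seen:
            return "no reply: dropped somewhere in transit"
        if "SA" in flags_seen and any("R" in f for f in flags_seen):
            return "SYN/ACK plus RST(s): looks like injection"
        if "SA" in flags_seen:
            return "clean SYN/ACK: no visible interference"
        return "got only %s" % ", ".join(flags_seen)

    # Hypothetical usage against a placeholder address.
    print(probe_handshake("203.0.113.10", 443))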

The techniques used to collect the data are more interesting, at least to me. The principal method is called hybrid idle scanning, first presented by some of the same authors in a different paper [5]. It allows a measurement host to determine whether a client can complete a TCP handshake with a server, without itself being either the client or the server; if the handshake does not complete successfully, it reveals whether client-server or server-client packets are being lost. It does rely on an information leak in older client TCP stacks (predictable IP-ID sequences, [6]) but millions of hosts worldwide still run operating systems with these bugs—the authors report an estimate that they comprise 1% of the global IPv4 address space. Thus, it’s possible to find a measurement client in any geographic location with reasonably common Internet usage. Data from this technique is backed up with more detailed information from traceroutes and SYN probes from a smaller number of locations. They describe a previously-unreported server-side information leak in Linux’s handling of half-open TCP connections, which can be used to study what IP-based blacklisting of a server looks like to that server, without access to that server.
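
The core of the side channel is easier to see in code. Below is a much-simplified sketch, assuming scapy, of the IP-ID trick: probe the client’s IP-ID counter, spoof a SYN to the server with the client’s address so that the server’s SYN/ACK (if it gets through) goes to the client, then probe the IP-ID again and see how far it jumped. This is emphatically not a reimplementation of the paper’s method, which probes repeatedly over time and applies statistical tests to cope with the client’s background traffic and with retransmissions. All addresses and ports are placeholders, and sending spoofed packets requires both raw-socket privileges and a network that does not filter forged source addresses.

    # Much-simplified sketch of the IP-ID side channel behind hybrid idle
    # scanning, NOT a reimplementation of the paper's method. Assumes the
    # "client" host has a single, globally incrementing IP-ID counter.
    # All addresses/ports are placeholders; requires raw-socket privileges
    # and a network that does not filter spoofed source addresses.
    import time
    from scapy.all import IP, TCP, send, sr1

    def current_ipid(host):
        # Elicit a RST by sending an unsolicited SYN/ACK, and read the IP-ID
        # the host stamped on that RST.
        reply = sr1(IP(dst=host) / TCP(sport=50123, dport=80, flags="SA"),
                    timeout=3, verbose=False)
        return reply[IP].id if reply is not None else None

    def client_reacts_to_server(client_ip, server_ip, server_port):
        before = current_ipid(client_ip)
        # Spoof a SYN so that the server's SYN/ACK, if it reaches the client,
        # provokes a RST from the client (which never sent the SYN).
        send(IP(src=client_ip, dst=server_ip)
             / TCP(sport=45678, dport=server_port, flags="S"),
             verbose=False)
        time.sleep(2)   # give the SYN/ACK and the client's RST time to happen
        after = current_ipid(client_ip)
        if before is None or after is None:
            return "client did not answer the IP-ID probes"
        # Our own second probe accounts for one increment; any jump beyond
        # that means the client sent other packets in between, e.g. a RST in
        # response to the server's SYN/ACK.
        jump = (after - before) % 65536
        return "client sent extra packets" if jump > 1 else "no extra packets seen"

A single before/after comparison like this is hopelessly noisy in practice; the point is only to show where the information comes from.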

I’m also impressed with the authors’ systematic presentation of the hypotheses they wanted to test and how they chose to test each of them. Anyone interested in network measurements could probably learn something about how to structure an experiment from this paper.