> Good thinking!  I summarized the methodology on the graph page as: The
> graph above is based on sanitized Tor web server logs [0]. These are a
> stripped-down version of Apache's "combined" log format without IP
> addresses, log times, HTTP parameters, referers, and user agent strings.
> If you spot anything in the data that you think should be sanitized
> more thoroughly, please let us know!

Interesting, thanks. Here are some thoughts based on looking through one of 
these logs (from archeotrichon.torproject.org on 2015-09-20):
  1. The order of requests appears to be preserved. If so, this allows an 
adversary to determine fine-grained timing information by inserting requests of 
his own at known times.
  2. The size of the response is included, which potentially allows an 
adversary observing the client side to perform a correlation attack (combined 
with #1 above). This could allow the adversary to learn interesting things like 
(i) this person is downloading arm and thus is probably running a relay, or (ii) 
this person is creating Trac tickets with onion-service bugs and is likely 
running an onion service somewhere (or is Trac excluded from these logs?). The 
size could also be used as a time-stamping mechanism, as an alternative to #1, 
if the size of the response can be changed by the adversary (e.g. via blog 
comments).
  3. Even without fine-grained timing information, daily per-server logs might 
include data from few enough clients that multiple requests can be reasonably 
inferred to be from the same client, which can collectively reveal lots of 
information (e.g. country, based on the browser localization used; platform; 
blog posts viewed or commented on, if the blog server also releases logs).

I also feel compelled to raise the question of whether or not releasing these 
logs went through Tor's own recommended procedure for producing data on its 
users (https://research.torproject.org/safetyboard.html#guidelines):
        • Only collect data that is safe to make public.
        • Don't collect data you don't need (minimization).
        • Take reasonable security precautions, e.g. about who has access to 
your data sets or experimental systems.
        • Limit the granularity of data (e.g. use bins or add noise).
        • The benefits should outweigh the risks.
        • Consider auxiliary data (e.g. third-party data sets) when assessing 
the risks.
        • Consider whether the user meant for that data to be private.
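For what it's worth, the "limit the granularity" guideline could be applied to 
these logs roughly like this (the bin width and noise parameter are purely 
illustrative, not a recommendation):

```python
# Hypothetical sketch of the "limit the granularity" guideline: round
# response sizes up to coarse bins, and add Laplace noise to per-URL
# request counts before publication (as in differential privacy).
import random

def bin_size(size, bin_width=1024):
    """Round a response size up to the nearest bin boundary."""
    return -(-size // bin_width) * bin_width

def noisy_count(true_count, epsilon=1.0):
    """Add Laplace(1/epsilon) noise to a count, clamped at zero.
    (The difference of two Exp(epsilon) draws is Laplace-distributed.)"""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return max(0, round(true_count + noise))

print(bin_size(548302))   # -> 548864, the next 1 KiB boundary
```

Coarse size bins would blunt the correlation attack in #2 above, and noisy 
counts would make it harder to confirm that a specific request happened at all.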
I definitely see the value of analyzing these logs, though, and it certainly 
helps that some sanitization was applied :-)



tor-dev mailing list
