Thanks for your comments, you are definitely touching interesting aspects.
Here are thoughts regarding your objections:
1) The cost of IPs vs. bandwidth is definitely a function of market
offers. Your $500/Gbps/month seems quite expensive compared to what can
be found on OVH (which is hosting a large number of relays): they ask ~3
euros/IP/month, including unlimited 100 Mbps traffic. If we assume that
wgg = 2/3 and a water level at 10Mbps, this means that, if you want to
have 1Gbps of guard bandwidth,
- the current Tor mechanisms would cost you 3 * 10 * 3/2 = 45 euros/month
- the waterfilling mechanism would cost you 3 * 100 = 300 euros/month
We do not believe that this is conclusive, as the market changes, and
there certainly are dozens of other providers.
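To make the arithmetic above easy to check, here is a small sketch of the cost comparison. All numbers are the illustrative figures from this thread (OVH-style pricing, an assumed 10 Mbps water level), not measured values:

```python
# Illustrative cost comparison, using the figures quoted in this thread.
EUR_PER_IP = 3          # ~3 euros/IP/month
MBPS_PER_IP = 100       # unlimited 100 Mbps per IP
WGG = 2 / 3             # fraction of a Guard relay's weight used as guard
WATER_LEVEL_MBPS = 10   # assumed water level

def monthly_cost_current(target_guard_mbps):
    # Under current Tor, a Guard relay contributes WGG of its bandwidth
    # in the guard position, so 1 Gbps of guard bandwidth requires
    # 1 / WGG = 1.5 Gbps of relay bandwidth, i.e. 15 IPs.
    ips = target_guard_mbps / WGG / MBPS_PER_IP
    return EUR_PER_IP * ips

def monthly_cost_waterfilling(target_guard_mbps):
    # Under waterfilling, each relay is used in the guard position only
    # up to the water level, so 1 Gbps of guard bandwidth requires
    # 1000 / 10 = 100 IPs.
    ips = target_guard_mbps / WATER_LEVEL_MBPS
    return EUR_PER_IP * ips

# monthly_cost_current(1000)      -> ~45 euros/month
# monthly_cost_waterfilling(1000) -> ~300 euros/month
```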
The same applies for 0-day attacks: if you need to buy them just for
attacking Tor, then they are expensive. If you are an organization in
the business of handling 0-day attacks for various other reasons, then
the costs are very different. And it is unclear whether it is
easier/cheaper to compromise 1 top relay or 20 mid-level relays.
And we are not sure that the picture is so clear about botnets either:
bots that can become guards need to have high availability (in order to
pass the guard stability requirements), and such high availability bots
are also likely to have a bandwidth that is higher than the water level
(abandoned machines in university networks, ...). As a result,
waterfilling would increase the number of high-availability bots that
are needed, which is likely to be hard to achieve.
2) Waterfilling makes it necessary for an adversary to run a larger
number of relays. Apart from the costs charged by service providers,
this large number of relays needs to be managed in an apparently
independent way, otherwise they would look suspicious to community
members, like
nusenu who is doing a great job spotting all anomalies. It seems
plausible that running 100 relays in such a way that they look
independent is at least as difficult as doing that with 10 relays.
3) The question of the protection from relays, ASes or IXPs is puzzling,
and we do not have a strong opinion about it. We focused on relays
because they are what is available to any attacker, compared to ASes or
IXPs which are more specific adversaries. But, if there is a consensus
that ASes or IXPs should rather be considered as the main target, it is
easy to implement waterfilling at the AS or IXP level rather than at the
IP level: just aggregate the bandwidth relayed per AS or IXP, and apply
the waterfilling level computation method to them. Or we could mix the
weights obtained for all these adversaries, in order to get some
improvement against all of them, instead of an improvement against only
one while remaining agnostic about the others.
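The aggregation step suggested above is simple; here is a minimal sketch, where the relay list format is hypothetical and the point is just that per-AS totals can be fed to the same water-level computation normally applied to per-relay weights:

```python
from collections import defaultdict

def weight_per_as(relays):
    """Aggregate consensus weight per AS, as suggested above.

    relays: iterable of (as_number, consensus_weight) pairs
            (hypothetical input format, for illustration only).
    Returns a dict mapping AS number to total weight; these totals are
    the units on which an AS-level waterfilling would operate.
    """
    totals = defaultdict(int)
    for asn, weight in relays:
        totals[asn] += weight
    return dict(totals)

# weight_per_as([("AS1", 10), ("AS2", 5), ("AS1", 7)])
#   -> {"AS1": 17, "AS2": 5}
```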
4) More fundamentally, since the core idea of Tor is to mix
traffic through a large number of relays, it seems to be a sound design
principle to make the choice of the critical relays as uniform as
possible, as Waterfilling aims to do. A casual Tor user may be concerned
to see that their traffic is very likely to be routed through a very
small number of top relays, and this effect is likely to increase once
a multi-core-capable implementation of Tor arrives (the Rust
development effort). Current top relays, which suffer from the main CPU
bottleneck, will probably be free to relay even more bandwidth than
they already do, and gain an even more disproportionate consensus
weight. Waterfilling might prevent that, and keep those useful relays
doing their job at the middle position of paths.
We hope those thoughts can help, and thanks again for sharing yours.
Florentin and Olivier
On 2018-03-05 23:30, Aaron Johnson wrote:
I recently took the time to read the waterfilling paper. I'm not sure it's a
good idea even for the goal of increasing the cost of traffic correlation
attacks. It depends on whether it is easier for an adversary to run many small
relays of total weight x or a few large relays of total weight y, where x = y*c
with c the fraction of a Guard-flagged relay used in the guard position (I
believe that c=2/3 currently, as Wgg=7268 and Wmg=2732). Just to emphasize it:
waterfilling requires *less bandwidth* to achieve a given guard probability
than is needed in Tor currently.
Based on prices I’ve seen (~$2/IP/month vs. ~$500/Gbps/month), it’s
significantly cheaper to add a new relay than it is to add bandwidth
commensurate with the highest-bandwidth relays. If an adversary finds it easier
to compromise machines, then waterfilling might help as it lowers the guard
probability of high-bandwidth relays. However, for adversaries with the
resources to possess zero-day vulnerabilities against the well-run
high-bandwidth relays, it seems to me that those adversaries would easily also
have the resources to run relays instead, and in fact it would probably be
cheaper for them to run relays as zero-days are expensive. Adversaries with
botnets, which have many IPs but generally low bandwidth, would benefit from
waterfilling, as it would increase the number of clients choosing them as
guards that they can then attack. Waterfilling doesn’t clearly make things
better or worse against network-level adversaries.
Thus, it doesn’t seem to me that waterfilling protects Tor’s users against
their likely adversaries, and in fact is likely to make things less secure in a
few important cases.
On Jan 31, 2018, at 5:01 PM, teor <teor2...@gmail.com> wrote:
On 1 Feb 2018, at 07:15, Florentin Rochet <florentin.roc...@uclouvain.be> wrote:
On 18/01/18 01:03, teor wrote:
I've added this concern within the 'unanswered questions' section. This
proposal assumes relay measurements are reliable (consensus weight).
Current variance is 30% - 40% between identical bandwidth authorities, and
30% - 60% between all bandwidth authorities.
Is this sufficient?
My apologies, I was not specific enough: we assume that bandwidth
measurements are reliable as a hypothesis to make the claim that
Waterfilling is not going to reduce or improve performance. If these
measurements are not reliable enough, then Waterfilling might make
things better, worse, or both compared to the current bandwidth-weights,
in some unpredictable way.
This variance is measurement error. In this case, discretization error is
less than 1%.
We need to know whether measurement inaccuracy makes the network
weights converge or diverge under your scheme.
It looks like they converge on the current network with the current
bandwidth authorities. This is an essential property we need to keep.
All of this depends on the bandwidth
authority. Anyway, I readily agree that we need some kind of tooling
able to report on such modifications. Besides, those tools could be
reused for any new proposal impacting the path selection, such as
research protecting against network adversaries or even some of the
changes you already plan to do (such as Prop 276).
Yes, we are hoping to introduce better tools over time.
- The upper bound in (a) is huge, and would be welcome for an
adversary running relays. The adversary could manage to set up relays
with almost 2 times the consensus weight of the water level, and still
be used at 100% in the entry position. This would greatly reduce the
benefits of this proposal, right?
I do not know how much the benefits of the proposal depend on the exact
water level, and how close relays are to the water level.
How much variance will your proposal tolerate?
Because current variance is 30% - 60% anyway (see above).
The variance is not a problem if the water level is adapted
(re-computed) at each consensus.
I'm not sure we're talking about the same thing here.
The variance I am talking about here is measurement error and
discretization error. Re-computation doesn't change the error.
(And going from relay measurement to consensus bandwidth can take hours.)
See my comment above about convergence: we need to converge in
the presence of discretization error, too.
With your explanations below (weight change on clients), and given that
the consensus diff size is a thing, I am inclined to believe that the
weight calculation should be done on clients. Anyway, I have added a
remark about this possibility within the proposal.
Another alternative is to apply proposal 276 weight rounding to these
weights as well.
I think this may be our best option, because running all these divisions
on some mobile clients would be very slow and cost a lot of power.
Added this to the proposal. We might also "divide" the algorithm: what
about computing the weights on dirauths but broadcasting only the pivot
(the index of the relay at the water level)? Clients could then resume
the computation and produce the weights themselves at a reduced cost.
- The weight calculation would be O(n) on clients (n being the size of
the guard set) instead of O(n*log(n))
- No impact on the consensus diff (well, except 1 line, the pivot value).
- We still have O(n) divisions on the client, each time we download a
new consensus.
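Concretely, the dirauth-side computation could look like the sketch below: find the water level L such that sum(min(w_i, L)) over the guard weights equals the target guard capacity. This is our reading of the waterfilling idea, not the reference implementation:

```python
def water_level(weights, guard_fraction):
    """Find the level L such that sum(min(w, L)) over all guard
    weights equals guard_fraction * sum(weights).

    Sketch only (our reading of the proposal); assumes
    0 < guard_fraction <= 1 so that the target is reachable.
    """
    target = guard_fraction * sum(weights)
    ws = sorted(weights, reverse=True)
    remaining = sum(ws)  # total weight of relays not yet capped
    for k in range(1, len(ws) + 1):
        remaining -= ws[k - 1]
        # Cap the top k relays at L: k * L + remaining == target
        level = (target - remaining) / k
        next_w = ws[k] if k < len(ws) else 0
        if next_w <= level <= ws[k - 1]:
            return level
    return ws[-1]

# Example: two 100-unit relays and one 10-unit relay, guard
# fraction 2/3: target is 140, and capping the two big relays at
# 65 gives 65 + 65 + 10 = 140, so the water level is 65.
```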
Why not list the waterfilling level on a single line in the consensus?
* authorities do the expensive calculation
* clients can re-weight relays using a simple calculation:
  if the relay's weight is less than or equal to the waterfilling level:
    use the relay's weight as its guard weight
    use 0 as its middle weight
  otherwise:
    use the waterfilling level as the relay's guard weight
    use the relay's weight minus the waterfilling level as its middle weight
This is O(n) and requires one comparison and one subtraction per relay in
the worst case.
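That client-side rule is small enough to state in a few lines; a sketch (function name hypothetical), following the comparison-and-subtraction rule described above:

```python
def reweight(consensus_weight, water_level):
    """Split a Guard-flagged relay's consensus weight into
    (guard_weight, middle_weight), given the waterfilling level
    published in the consensus.

    Sketch of the client-side rule described above; one comparison
    and at most one subtraction per relay.
    """
    if consensus_weight <= water_level:
        # Relay entirely below the water level: all guard, no middle.
        return consensus_weight, 0
    # Relay above the water level: guard usage capped at the level,
    # the excess goes to the middle position.
    return water_level, consensus_weight - water_level

# reweight(50, 65)  -> (50, 0)
# reweight(100, 65) -> (65, 35)
```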
tor-dev mailing list