Hi,
As we’re nearing WGLC for the DDoS protection draft, there is a question we’ve
left open in the past that should be resolved.
The issue is the variability of the time it takes to solve a puzzle. To
demonstrate it, I ran a simulation of solving a 20-bit puzzle 1000 times on
some specific hardware [1]. The results are available in ODS [2] and XLSX [3]
formats, and are also summarized below:
Average                 1.3226669
Min                     0.0001190
Max                     7.9450660
Standard Deviation      1.2873387
5th percentile          0.0843579
95th percentile         3.8978098
95th / 5th percentile        46.2
Max / Min                 66765.3
So an “unlucky” initiator, defined here as the 95th percentile for solution
time, takes 46x as long
to solve the puzzle as a “lucky” one (defined as the 5th percentile). This
introduces delays that are visible
to human users, whether they are looking at a dialog box on a remote access
client or are waiting
for their networked application to work.
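To illustrate the spread, here is a minimal Python sketch. It is not the draft's actual puzzle mechanism: it just models each attempt as succeeding with probability 2^-20 and approximates the geometric number of attempts with an exponential draw.

```python
import random

random.seed(1)

def attempts_to_solve(difficulty_bits):
    """Number of PRF evaluations until one success, each attempt
    succeeding with probability 2**-difficulty_bits (exponential
    approximation of the geometric distribution)."""
    return int(random.expovariate(2.0 ** -difficulty_bits)) + 1

samples = sorted(attempts_to_solve(20) for _ in range(1000))
p5, p95 = samples[50], samples[950]
print("5th percentile:  %d attempts" % p5)
print("95th percentile: %d attempts" % p95)
print("95th / 5th ratio: %.1f" % (p95 / p5))
```

With 1000 runs the 95th/5th ratio lands in the same ballpark as the measurements above, confirming this is inherent to the single-solution puzzle rather than an artifact of the hardware.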
A while ago Scott Fluhrer suggested a way to make this more fair [4]. The
puzzle we were discussing
then was a little different, but his method can be adapted to the puzzle that
is in the current draft.
Instead of requiring just one solution, require the client to find several
solutions. The puzzle can be made correspondingly easier. The expected time to
find one solution to a 20-bit puzzle should be the same as the time to find
four solutions to an 18-bit puzzle, or 16 solutions to a 16-bit puzzle, since
the expected number of PRF invocations is the same in each case
(2^20 = 4 * 2^18 = 16 * 2^16). The results in [2] and [3] bear this out.
While the expected time is pretty similar, the spread between worst case and
average case is much smaller:

Test Parameters          20x1        18x4       16x16
Average             1.3226669   1.3400430   1.3445518
Min                 0.0001190   0.1013960   0.5210680
Max                 7.9450660   3.9507410   3.0532380
Standard Deviation  1.2873387   0.6568984   0.3353262
5th percentile      0.0843579   0.4548676   0.8529980
95th percentile     3.8978098   2.5217214   1.9443243
95th / 5th perc.         46.2         5.5         2.3
Max / Min             66765.3        39.0         5.9
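Both the equivalence of the averages and the reduced spread can be sanity-checked with a small simulation. This again approximates each search with exponential draws rather than the actual PRF:

```python
import random

random.seed(2)

def total_attempts(difficulty_bits, num_solutions):
    """Total PRF calls to find num_solutions independent solutions,
    each modelled as an exponential with mean 2**difficulty_bits."""
    p = 2.0 ** -difficulty_bits
    return sum(random.expovariate(p) for _ in range(num_solutions))

spread = {}
for bits, k in [(20, 1), (18, 4), (16, 16)]:
    runs = sorted(total_attempts(bits, k) for _ in range(1000))
    spread["%dx%d" % (bits, k)] = runs[950] / runs[50]
    print("%2dx%-2d  mean=%9.0f  95th/5th=%5.1f"
          % (bits, k, sum(runs) / len(runs), runs[950] / runs[50]))
```

The means all come out near 2^20 PRF calls, and the 95th/5th ratio shrinks as the solution count grows, mirroring the measured table.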
In my opinion the right thing to do is to choose a multi-solution puzzle and
adjust the difficulty accordingly.
This adds a little complexity but provides a more consistent delay. There have
been two arguments against it so far.
I tend to support this idea, but I think in this case the sub-puzzles must be
chained to deal with parallel solving.
Something like the following:
Puzzle_data[0] = cookie
Puzzle_data[1] = Puzzle_solution[0] | cookie
Puzzle_data[2] = Puzzle_solution[1] | cookie
...
Puzzle_data[n] = Puzzle_solution[n-1] | cookie
Or perhaps someone could suggest a more clever construction?
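As an illustration only, here is one way such a chained construction might look, using HMAC-SHA256 as the PRF, a leading-zero-bits success test, and an 8-octet counter as the key. All names and encodings here are hypothetical, not the draft's, and "|" is read as concatenation:

```python
import hashlib
import hmac
import os
import struct

DIFFICULTY = 8  # deliberately low so the sketch runs quickly

def leading_zero_bits(digest):
    """Count leading zero bits of a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        return bits + (8 - byte.bit_length())
    return bits

def solve(data):
    """Find an 8-octet key (a counter) so that HMAC(key, data)
    starts with at least DIFFICULTY zero bits."""
    counter = 0
    while True:
        key = struct.pack(">Q", counter)
        digest = hmac.new(key, data, hashlib.sha256).digest()
        if leading_zero_bits(digest) >= DIFFICULTY:
            return key
        counter += 1

cookie = os.urandom(16)
solutions = []
data = cookie                    # Puzzle_data[0] = cookie
for i in range(4):               # four chained sub-puzzles
    sol = solve(data)
    solutions.append(sol)
    data = sol + cookie          # Puzzle_data[i+1] = Puzzle_solution[i] | cookie
print("solved", len(solutions), "chained sub-puzzles")
```

Because each Puzzle_data depends on the previous solution, sub-puzzle i+1 cannot be started before sub-puzzle i is finished, which is exactly what defeats parallel solving.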
Note that the draft already supports multiple puzzles by allowing the responder
to request another puzzle after verifying the received solution. However, this
is different from the proposed mechanism and is more focused on solving the
problem of the wide range of initiators' capabilities. The two mechanisms would
complement each other.
Should the number of sub-puzzles be fixed in the protocol, or should we allow
the responder to specify it in the PUZZLE notification? It seems that beyond 4
(or 8?) sub-puzzles the effect of sub-puzzling becomes less and less visible.
1. The Initial request will become larger with multiple solutions. This is not
necessarily true, since the initiator is still limited in the number of
possible keys it can try. We can have the PRF keys be all zeros except for the
last 32 or 64 bits. This is good enough, because no current hardware is able to
test 2^32 keys in a reasonable time, and 64 bits gives us protection for the
foreseeable future. So a single result can be as small as 4 or 8 octets.
The size of the request will be larger in any case. Currently the Puzzle
Solution payload may contain as few bytes of the key as needed (for PRFs with
variable-size keys). But I don't think the number of sub-puzzles should be more
than 4 (or 8), so that's not such a big problem.
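For illustration, reconstructing a full-length PRF key from the short transmitted solution might look like this. The 32-octet key length and the left-zero-padding are assumptions for the sketch, not the draft's format:

```python
# Hypothetical: pad a 4- or 8-octet transmitted solution with zero
# octets up to an assumed full PRF key length (32 octets, as for an
# HMAC-SHA256 key of block-friendly size).
PRF_KEY_LEN = 32

def expand_solution(solution_octets, key_len=PRF_KEY_LEN):
    """Left-pad the short solution with zeros to the full key length."""
    return b"\x00" * (key_len - len(solution_octets)) + solution_octets

key = expand_solution(b"\x12\x34\x56\x78")
print(len(key), key[-4:].hex())
```

Only the trailing octets ever go on the wire; both sides expand them identically before running the PRF.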
2. The Responder will need to spend more resources verifying more solutions.
This is not necessarily true either, because the Responder can just pick one or
two solutions at random and verify only those. Not that a single run of the
HMAC function takes that many resources.
That's true; however, there is a tradeoff. If the number of sub-puzzles is
small (say 4), then verifying a single solution gives an attacker that solves a
single sub-puzzle a 25% chance to pass. Is that always acceptable for the
responder? I think the responder SHOULD verify all of them (or at least half).
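The tradeoff can be quantified: if the responder verifies r of the n returned solutions chosen uniformly at random, an attacker holding only s valid solutions passes when every verified slot holds a valid one, i.e. with probability C(s,r)/C(n,r). A quick sketch (this assumes independent sub-puzzles; chaining would change the picture):

```python
from math import comb

def pass_probability(n, s, r):
    """Chance an attacker with s valid solutions out of n passes when
    the responder verifies r solutions chosen uniformly at random."""
    if r > s:
        return 0.0
    return comb(s, r) / comb(n, r)

print(pass_probability(4, 1, 1))  # verify 1 of 4, one valid solution
print(pass_probability(4, 1, 2))  # verify 2 of 4, one valid solution
print(pass_probability(4, 2, 2))  # verify 2 of 4, two valid solutions
```

Checking a single solution out of four gives the 25% figure above; checking two drives a one-solution attacker's chance to zero and a two-solution attacker's to 1/6.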
Yoav
Regards,
Valery.
_______________________________________________
IPsec mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/ipsec