On Thu, May 31, 2012 at 2:03 PM, Marsh Ray <ma...@extendedsubset.com> wrote:
> On 05/31/2012 11:28 AM, Nico Williams wrote:
>> Yes, but note that one could address that with some assumptions, and
>> with some techniques that one would reject when making a better hash
>> -- the point is to be slow,

> [...]
> Starting with:
>
> Ep = password entropy in bits (chosen by the user)
>
> N =  iteration count (chosen by the defender)

For memory-hard PBKDFs you also need a memory size parameter, though
you might derive that from N.
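
A rough sketch of what I mean, with hashlib.scrypt standing in for
the memory-hard function (deriving the memory size from a single cost
knob is my assumption, nothing standardized):

    import hashlib

    def memory_hard_kdf(password, salt, cost):
        # scrypt's peak memory is roughly 128 * r * n bytes, so
        # deriving n from the cost parameter fixes the memory size
        # too -- one knob instead of two.
        n = 2 ** cost   # CPU/memory cost; must be a power of two
        r = 8           # block size (scrypt's customary default)
        p = 1           # parallelization
        return hashlib.scrypt(password, salt=salt, n=n, r=r, p=p,
                              dklen=32)

    key = memory_hard_kdf(b"hunter2", b"per-user salt", cost=14)  # ~16 MiB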

> [...]
> The defender's advantage to having a work factor looks something like:
>
>           N * 2**Ep
> Ad = ----------------------
>       Dd(N) * Aa * Ah * Ap

Nicely summarized.
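
Plugging invented numbers into it, just to see the shape (every value
below is hypothetical, and I'm reading the denominator terms per your
elided definitions):

    # All values hypothetical, purely to exercise the formula.
    Ep = 40                            # password entropy, bits
    N = 100_000                        # iteration count
    Dd_N = 0.1                         # Dd(N), defender's per-guess cost
    Aa, Ah, Ap = 1.0, 100.0, 1000.0    # attacker advantage factors

    Ad = (N * 2**Ep) / (Dd_N * Aa * Ah * Ap)
    print(f"Ad = {Ad:.3e}")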

> * If individual users can be shown to present a different Ep to the
> attacker, it could be beneficial to adjust N independently per user. For
> example a website might say to a user: "Want to log in 2 seconds faster next
> time? Pick a longer password!"

Nice.  But again, in practice this won't work.  You might think that
for a single master password users could get into the 48 bits of
entropy range, but probably not.

>> Can PBKDF2' be weaker than PBKDF2?
>
> Yes.

Mine or any PBKDF2' in principle?

> It could turn out to result in an Aa or Ah significantly greater than 1.0.
> It could end up being usefully parallelizable so it can be evaluated more
> efficiently when testing many passwords at one time rather than one at a
> time. Even if it just reduced the performance of other things sharing the
> defender's hardware, say webserver software, it could be a step backwards.

I don't see how my PBKDF2' construction ends up being more
parallelizable than PBKDF2 such that PBKDF2' can run faster than
PBKDF2 -- you need to compute PBKDF2 before you can compute PBKDF2'
and I see no way around that.  If memory_hard() is not nearly as
memory-hard (and/or slow) as initially thought, then PBKDF2' will not
help much, but it should not hinder either.  Unless... unless N is
tuned down on the theory that memory_hard() adds strength -- if it
turns out not to, then having turned N down will greatly help the
attacker.
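
To make the serial dependency concrete, the shape I have in mind is
roughly the following (a sketch, with scrypt standing in for
memory_hard() and all parameters invented):

    import hashlib

    def pbkdf2_prime(password, salt, n_iter):
        # Stage 1: plain PBKDF2; the attacker pays for this per guess.
        k = hashlib.pbkdf2_hmac("sha256", password, salt, n_iter,
                                dklen=32)
        # Stage 2: the memory-hard step is keyed by stage 1's output,
        # so it cannot start before stage 1 finishes -- PBKDF2' cannot
        # be evaluated faster than PBKDF2 itself.
        return hashlib.scrypt(k, salt=salt, n=2**14, r=8, p=1,
                              dklen=32)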

> It could also be that memory_hard(P, S, p) introduces
> impractical-to-mitigate timing or other side channel attacks that leaked p.

Sure.  But let's assume that memory_hard() isn't awful.  It just isn't
yet accepted, and it might have some weaknesses.  I'd hope that by now
everyone pays attention to timing side channels (but power side
channels? not so much).  We can test for timing leaks.  We can't test
for as-yet undiscovered optimizations.
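
And on "we can test for timing leaks": a crude black-box check is to
time the candidate over degenerate vs. random inputs and compare the
distributions (a sketch; my_kdf is a placeholder for whatever
memory_hard candidate is under test, and serious testing wants a
proper statistical tool in the spirit of dudect):

    import os, time, statistics

    def median_runtime(kdf, password, trials=50):
        samples = []
        for _ in range(trials):
            t0 = time.perf_counter()
            kdf(password, b"fixed salt")          # salt held constant
            samples.append(time.perf_counter() - t0)
        return statistics.median(samples)

    # my_kdf(password, salt) is hypothetical.  A large, stable gap
    # between these medians suggests input-dependent timing.
    t_zeros = median_runtime(my_kdf, b"\x00" * 16)
    t_random = median_runtime(my_kdf, os.urandom(16))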

> Last I checked, PBKDF2 re-introduced all the starting entropy from P and S
> at every iteration. So it shouldn't lose any significant entropy.

Ah, yes, I wasn't sure, but now I see that it does.  It's possible
that the chosen PRF will toss a particular bit every time (e.g., the
high bit in every byte of the password), but let's assume it doesn't.
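
For reference, the per-block inner loop from RFC 2898, with
HMAC-SHA-256 as the PRF -- P re-keys the PRF on every iteration,
while S enters directly only in the first step:

    import hmac, hashlib

    def pbkdf2_block(P, S, c, i):
        # F(P, S, c, i) = U_1 xor U_2 xor ... xor U_c
        U = hmac.new(P, S + i.to_bytes(4, "big"), hashlib.sha256).digest()
        F = U
        for _ in range(c - 1):
            # P is the HMAC key here too: the full password entropy
            # is mixed back in at every iteration.
            U = hmac.new(P, U, hashlib.sha256).digest()
            F = bytes(a ^ b for a, b in zip(F, U))
        return F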

> If memory_hard did not take P and S and instead took
>       memory_hard(PBKDF2("usage 2" + P, S, N))
> it might be easier to think about. We would have less to worry about if P
> were passed through a salted one way function before it were handed to
> memory_hard.

Sure, but I think you're making my point: there are things we could do
to make a PBKDF2' that is at least as strong as PBKDF2.  As long as we
don't respond to PBKDF2' by tuning down N I think that's possible,
trivially possible.  But the risk is probably quite high that N will
get turned down ("it used to take 1s to log in, now it takes 2s!" ->
tune down N so it takes 1s with the new PBKDF2').
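
For what it's worth, the trivially-safe shape I have in mind folds
both outputs into the final key, so even a worthless memory_hard()
degrades to exactly PBKDF2 at the same N (a sketch, using your
"usage 2" pre-hash so memory_hard() never sees P; all names and
parameters are illustrative):

    import hashlib

    def pbkdf2_prime_combined(password, salt, n_iter):
        k1 = hashlib.pbkdf2_hmac("sha256", password, salt, n_iter,
                                 dklen=32)
        # Salted one-way pre-hash of P (the "usage 2" idea above),
        # so memory_hard() only ever sees a derived value.
        k2_in = hashlib.pbkdf2_hmac("sha256", b"usage 2" + password,
                                    salt, 1, dklen=32)
        k2 = hashlib.scrypt(k2_in, salt=salt, n=2**14, r=8, p=1,
                            dklen=32)
        # The final key depends on both stages; cracking it still
        # requires the full PBKDF2 work even if memory_hard() fails.
        return hashlib.sha256(k1 + k2).digest()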

Nico