In message <v04011703b379ec9890f1@[24.218.56.100]>, "Arnold G. Reinhold" writes:

> 
> It is also desirable to be able to increase the memory footprint of your
> key stretcher as well its iteration count.

That's far from clear.  More or less any reasonable factor is dwarfed by
the rapid expansion in memory sizes.
> 
> >>
> >> b.  Use a unique per-passphrase salt of at least 32 bits.
> >>
> >Why 32 bits?  Salts are good, and often cheap, but I'm curious what your
> >rationale is.  Traditionally, a salt serves two purposes:  to increase the
> >expense (CPU and storage) of precomputing a dictionary, and to make it harder
> >to find two passwords with the same salt.  UNIX uses a 12-bit salt, which is
> >often quite sufficient.  Given a million-entry dictionary, and using your
> >number of .5 seconds/guess, with no salt at all it would take ~139 hours to
> >precompute the hash of all guesses.  Clearly, that's too low.  But 4096*139
> >is ~24000 days.  Granted, that process can be parallelized, but it's still
> >a lot of computing -- 240 days for a hundred machines crunching.
> 
> I would argue that UNIX is an excellent object lesson for John's point. 12
> bits was a bad design decision, even in the 70's. Remember for a dictionary
> attack, the 24000 machine days compute only has to take place once.  If we
> just store the first 32 bits of each pre computation, we are talking
> 4*4096*10**6 = 16 gigabytes, a few DVD's, to store the complete dictionary.
> (And I don't know of anyone who is doing .5 sec of key stretching.)

You're arguing with 20-20 hindsight.  16 gig of disk space wasn't even
comprehensible then.  16 *meg* of disk space sufficed to bring up UNIX.
(I just checked my 7th Edition manual.  The binaries took 2.5M bytes; the
source to the entire system took 9M.)

The .5 second number was John's recommendation, but if I recall correctly
that was about how long crypt() took on UNIX in 1979 on the PDP-11.  (In
a quick scan through the Morris-Thompson paper, I didn't see a precise number.)
> 
> >
> >That number is still too low for long-term safety of a password used for
> >secrecy, rather than login.  But it's far from clear to me that one needs
> >to go to 32 bits, if that process is at all expensive.  (UNIX implemented
> >its salt by swapping entries in the E-box of DES.  That's not a scheme that
> >scales well to a bigger salt.  Note, though, that it also had the advantage of
> >ruling out the use of stock DES chips for password-guessing.)
> 
> The E-box argument is bogus. They could have used the first 12 bits to
> scramble the E box and appended additional salt bits to the password.
> Anyway, the big guys make custom chips.

Please -- let's not argue about the wrong things.  I simply stated that that
scheme wouldn't scale.  But appending salt bits to the password wouldn't
have worked very well -- they were using the password as the key to
encrypt constant plaintext.  Where were they to put the extra bits?

To be sure, the constant plaintext could have been modified by the salt.
The intent was to make it settable on a per-site basis.  Presumably, extra
salt bits could have been XORed with the per-site constant or some such.
(Also remember that in 1978, there were no cryptographic hash functions,
nor had anyone turned DES into one.  The earliest reference I have handy
on that is from 1983.)

As for custom chips -- we're talking about authentication passwords, not
secrecy passwords.  I don't think many big boys are building password-guessers.
The goal was to be sure that a secrecy-cracker -- in those days, a
Diffie-Hellman DES-cracking machine -- couldn't be turned into a password-
guesser.
> 
> Steve, how many bits of salt do you think are enough? 20? 24? In how many
> applications does 8 to 12 bits of savings matter? If you are protecting a
> system with a large number of users, there is a good chance at least one
> password will be in the first 10,000 dictionary entries. How much will a
> terabyte cost in 20 years?  Why mess around?

My own simple-minded scheme for hashed password storage is N iterations
of SHA-1 on the concatenation of the password, the salt, and a per-site
constant.  If you do that, more or less any size salt will cause no perceptible
difference in time.  Sure, use a long salt.
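
A minimal sketch of that scheme, in Python for concreteness.  The details
not stated above -- re-hashing the previous digest on each round, the value
of N, and the particular site constant -- are illustrative assumptions, not
part of the scheme as described:

    # Hashed password storage: N iterations of SHA-1 over
    # password || salt || per-site constant.
    # Assumptions: each round after the first re-hashes the previous
    # digest, and N = 10000 is an arbitrary illustrative count.
    import hashlib
    import os

    SITE_CONSTANT = b"example.org-site-secret"   # hypothetical per-site value

    def hash_password(password: bytes, salt: bytes, n: int = 10000) -> bytes:
        digest = hashlib.sha1(password + salt + SITE_CONSTANT).digest()
        for _ in range(n - 1):
            digest = hashlib.sha1(digest).digest()
        return digest

    salt = os.urandom(16)          # any size salt; 128 bits shown here
    stored = hash_password(b"correct horse", salt)

Whatever the salt length, the per-guess cost is dominated by the N hash
invocations -- which is the point: a longer salt is essentially free.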

But you misunderstood my basic point:  I'm asking for engineering numbers.
John cited a precise figure; I want to know where it comes from.  When I
do the calculations, I keep coming up with numbers that are a lot closer
to 12 bits than to 32.

Btw -- large password files using anything like this scheme are obsolescent.
You can't use a hashed password for challenge/response, which is the technique
of choice for PPP.  Similarly, POP3 can support APOP.  Both require plaintext
passwords on the server side.  Sure, there are weaknesses to doing that.
They're less relevant today for large installations than they were 20 years
ago, because they're stored on special-purpose machines.  We can't pick
tomorrow's answers from yesterday's questions.
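
To make the challenge/response point concrete, here is a rough APOP-style
exchange in the spirit of RFC 1939 (the challenge string and secret are
illustrative values).  The server can only check the response by recomputing
it, which is why it has to keep the shared secret in recoverable form:

    # APOP-style challenge/response: the client returns
    # MD5(challenge || password) in hex, and the server must recompute
    # the same value -- so a one-way hash of the password won't do.
    import hashlib

    def apop_response(challenge: str, password: str) -> str:
        return hashlib.md5((challenge + password).encode()).hexdigest()

    # Client side:
    challenge = "<1896.697170952@dbc.mtview.ca.us>"   # server's one-time banner
    response = apop_response(challenge, "tanstaaf")

    # Server side: look up the stored plaintext secret and recompute.
    assert response == apop_response(challenge, "tanstaaf")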

How would I like to do it, given a blank slate?  Most likely, I'd use SHA-1 on
the user's password, probably concatenated with a salt, to produce a DSA
private key; the server would store the corresponding public key.  (It's
harder to pull a trick like that using RSA keys.)  But really, I'd get rid
of passwords -- in the sense of "a typed string, some function of which is
stored on the server" -- entirely.  Passwords should be entered into a
user's smart card to unlock a real key.  The fundamental problem is that
users pick bad passwords and passphrases -- and tinkering with the machinery
to hide them isn't going to produce a fundamental change in the outcome.
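
For the blank-slate idea above, a rough sketch of the key-derivation step.
Reducing the SHA-1 output mod q, generating the domain parameters with the
third-party 'cryptography' package, and the 2048-bit size are all my
assumptions for illustration; the idea as stated is only "hash the password
and salt into a DSA private key":

    # Derive a DSA private key from SHA-1(password || salt); the server
    # stores only the matching public key y = g^x mod p.
    # Assumptions: x is the hash value reduced mod q; domain parameters
    # come from the 'cryptography' package.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric import dsa

    nums = dsa.generate_parameters(key_size=2048).parameter_numbers()
    p, q, g = nums.p, nums.q, nums.g

    def private_key_from_password(password: bytes, salt: bytes) -> int:
        x = int.from_bytes(hashlib.sha1(password + salt).digest(), "big") % q
        return x or 1                  # avoid the degenerate x == 0

    x = private_key_from_password(b"users passphrase", b"per-user salt")
    y = pow(g, x, p)                   # all the server needs to keep

The reason this trick is easy for DSA and not for RSA is that any value in
[1, q-1] is a valid DSA private key, whereas an RSA key pair can't be derived
directly from an arbitrary hash output.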
