Hi guys,

I'll over-quote a little, then comment below:

On Tue, Jun 11, 2013 at 08:55:21PM +0200, Peter Bex wrote:
> On Fri, Jun 07, 2013 at 06:29:48PM +0200, Krzysztof Katowicz-Kowalewski wrote:
> > Version 3.5.1 (the latest) of the popular blogging engine WordPress
> > suffers from a remote denial of service vulnerability.  The bug is
> > in the password hashing module (class-phpass.php).  Exploitation of
> > this vulnerability is possible only when at least one post is
> > protected by a password.
[...]
> > More information (including proof of concept):
> > https://vndh.net/note:wordpress-351-denial-service
[...]
> This phpass.php isn't hand-rolled like you stated in your blog post; it's
> a copy of a public domain crypt()-workalike: http://www.openwall.com/phpass/
> There are several other systems which implement their password hashing
> using this library.
> 
> Having said that, being able to control the setting looks like a mistake on
> the part of Wordpress, so I'm not sure the bug is in phpass, strictly
> speaking.  However, have you considered contacting upstream
> (Solar Designer/OpenWall) about this?

Web apps (like WordPress) were indeed not supposed to expose the ability
for untrusted users to specify arbitrary "setting" strings (which
include the configurable cost).  I am unfamiliar with WordPress, so I
don't know why they do it here - is this instance of their use of phpass
perhaps meant to achieve goals similar to those of tripcodes?  If so,
yes, they should be sanitizing the cost setting (perhaps with a site
admin configurable upper bound).  However, for password hashes coming
from the WordPress user/password database (the primary intended use of
phpass), this
should not be necessary.  (Indeed, a similar DoS attack could be
performed by someone having gained write access to the database, but
that would likely be the least of a site admin's worries.)
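
For illustration, a check of this sort could look like the sketch below
(this is C rather than WordPress's actual PHP, and phpass_setting_ok()
is a name I made up here; it assumes the standard phpass layout, where
the fourth character of the setting encodes log2 of the iteration count
as an index into the itoa64 alphabet):

#include <string.h>

/* Sketch only: validate the cost field of an untrusted phpass setting
 * before passing the setting on to the hashing code. */
static const char itoa64[] =
    "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

int phpass_setting_ok(const char *setting, int max_log2)
{
    const char *p;

    /* phpass accepts both the "$P$" and "$H$" identifiers */
    if (strncmp(setting, "$P$", 3) && strncmp(setting, "$H$", 3))
        return 0;
    if (!setting[3] || !(p = strchr(itoa64, setting[3])))
        return 0;
    /* phpass itself accepts log2 counts of 7 to 30 only; max_log2
     * would be the site admin configurable bound suggested above */
    return p - itoa64 >= 7 && p - itoa64 <= max_log2;
}

WordPress would of course do this in PHP, but the check is the same:
refuse any setting whose encoded cost exceeds the configured bound.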

The problem of DoS attacks via attacker-chosen cost settings with
tunable password hashing schemes like this is actually a general one
(and it's even more of a problem when the memory cost is also tunable).
An example is the Apache web server, where local DoS is possible via
malicious bcrypt or (more recently) also SHA-crypt hashes in .htpasswd
files.  (And to a lesser extent also via extended DES-based hashes,
which are supported on *BSDs and more.)  Although the DoS is local, it
affects other users of the Apache instance (not just the attacking local
user) and potentially of the entire system.  Arguably, the fact that
Apache is in general very susceptible to various DoS attacks qualifies
as an excuse (there's no expectation of any DoS resistance with Apache,
is there?) ;-)  (e.g., wouldn't having it read a "huge" sparse file
result in similar behavior?)
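
To make the Apache case concrete, a malicious .htpasswd entry of the
sort described above could look like this (the hash string is made up
for illustration; what matters is the attacker-chosen log2 cost of 31,
which would keep bcrypt grinding for many hours per authentication
attempt, assuming an Apache build that accepts bcrypt hashes):

victim:$2y$31$abcdefghijklmnopqrstuvuCv6g0RPHSyWmvUpvtaMOu9TWcWN2jq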

Arguably, library code should reject the most insane parameter values.
For example, musl libc - http://www.musl-libc.org - version 0.9.10
rejects bcrypt's log2(cost) > 19 and limits SHA-crypt's rounds count
to < 10M for this reason (the original SHA-crypt specification limits
it to < 1 billion).
However, on one hand this is insufficient (if an application exposes the
setting as untrusted input, it should have its own sanitization and/or
other safety measures anyway) and on the other hand the arbitrary limits
may be problematic in some obscure cases (e.g., when reusing the same
underlying password hashing scheme as a KDF for encrypting a rarely-used
piece of data locally).  So it's more of a partial workaround for the
present state of things (e.g., with Apache) than a real solution.
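
For illustration, limits of the musl sort can be applied up front,
before any expensive computation, along these lines (illustrative C,
not musl's actual code):

#include <string.h>
#include <stdlib.h>

static int crypt_cost_sane(const char *setting)
{
    /* bcrypt: "$2a$NN$..." and friends, NN being log2 of the cost */
    if (setting[0] == '$' && setting[1] == '2' &&
        setting[2] && setting[3] == '$')
        return strtoul(setting + 4, NULL, 10) <= 19;
    /* SHA-crypt: "$5$" (SHA-256) or "$6$" (SHA-512), with an
     * optional "rounds=N$" field right after the identifier */
    if (!strncmp(setting, "$5$", 3) || !strncmp(setting, "$6$", 3)) {
        if (!strncmp(setting + 3, "rounds=", 7))
            return strtoul(setting + 10, NULL, 10) < 10000000UL;
        return 1;  /* no rounds field: the default (5000) applies */
    }
    return 1;  /* other schemes: no limits in this sketch */
}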

Maybe future password hashing APIs should include a function to sanitize
a provided setting string given certain high-level limits (not abstract
log2(cost) numbers, but e.g. microseconds and kilobytes - even though
the function may have to use estimates of the expected actual usage).
Applications would then be advised to use this function if and where
appropriate.  Alternatively, maybe the password hashing function itself
should accept these upper limits as optional inputs (and refuse to work,
in some fail-closed manner, if the limits would likely be exceeded).
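
A hypothetical interface along these lines might look like the sketch
below.  The function name, signature, and the toy bcrypt cost model are
all invented here; a real implementation would derive its estimates
from benchmarks of the local system and would know about many more
schemes:

#include <string.h>
#include <stdlib.h>

/* Hypothetical: return 1 if the setting's estimated resource use fits
 * within the caller's high-level limits, 0 otherwise (failing closed). */
int crypt_setting_within_limits(const char *setting,
                                unsigned long max_usec,
                                unsigned long max_kbytes)
{
    /* bcrypt: roughly 4 KiB of working state, and time doubling with
     * each increment of the log2 cost.  The ~100 us base is a made-up
     * constant standing in for a local benchmark result. */
    if (!strncmp(setting, "$2", 2) && setting[2] && setting[3] == '$') {
        unsigned long log2cost = strtoul(setting + 4, NULL, 10);
        if (log2cost > 24)
            return 0;  /* absurdly slow regardless of hardware */
        return (100UL << log2cost) <= max_usec && 4 <= max_kbytes;
    }
    return 0;  /* unknown scheme: refuse, per the fail-closed idea */
}

An application could then pass in its own budget, e.g. 100000
microseconds and 16384 KB, before accepting an untrusted setting.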

Except for the specific upper limits imposed by musl, which were chosen
last year, none of the above is new - it's just that we all have been
sitting on this general issue for many years.  It's been about 20 years
since extended DES-based hashes with variable iteration counts were
introduced in BSD/OS in the early 1990s and reimplemented in FreeSec in
1993, and Apache's .htpasswd (or rather that of its NCSA httpd
predecessor) is maybe only slightly younger.

Nice find regarding the specific WordPress issue, though!  And a nice
reminder, too.

Alexander
