RE: The password-reset paradox

2009-02-23 Thread Charlie Kaufman
I would assume (hope?) that when you have an OTP token, you get two factor
authentication and don't stop needing a password. You would need a password
either to unlock the OTP device or to enter alongside the OTP value. Otherwise,
someone who finds your token can impersonate you.

Assuming that's true, OTP tokens add costs by introducing new failure modes
(e.g., I lost it, I ran it through the washing machine, etc.). I suspect a
similar study would find that the cost of the OTP token would be $500-$700/yr.
even if the device itself only cost $5. After all, passwords are free!

--Charlie

-Original Message-
From: owner-cryptogra...@metzdowd.com [mailto:owner-cryptogra...@metzdowd.com] 
On Behalf Of Peter Gutmann
Sent: Thursday, February 19, 2009 5:36 AM
To: cryptography@metzdowd.com
Subject: The password-reset paradox

There are a variety of password cost-estimation surveys floating around that
put the cost of password resets at $100-200 per user per year, depending on
which survey you use (Gartner says so, it must be true).

You can get OTP tokens for as little as $5.  Barely anyone uses them.

Can anyone explain why, if the cost of password resets is so high, banks and
the like don't want to spend $5 (plus one-off background infrastructure costs
and whatnot) on a token like this?

(My guess is that the password-reset cost estimates are coming from the same
place as software and music piracy figures, but I'd still be interested in any
information anyone can provide).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com



RE: DNSSEC to be strangled at birth.

2007-04-07 Thread Charlie Kaufman
I wonder if the DHS has any idea what it's asking for. The news
totally mangled what you might be able to do with that key. Even
people on this list have trouble figuring it out. Perhaps they just
heard about this root key thing, thought it sounded cool and important,
and since they recently watched Sneakers they thought they better have
it.

The news articles didn't say whether they wanted to be the only ones
to have it (which they could argue was a good idea because who better
to secure the Internet, but it would mean they would have work to do),
or whether they just wanted a copy (which would be of absolutely no
value defensively - it constitutes a tool for mounting an extremely
difficult and quickly detected attack on the Internet).

--Charlie

p.s. "Strangled at birth" seems a bad metaphor. DNSSEC may still be
in diapers, but it turned 10 in January. More like "added another nail."

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of James A. Donald
Sent: Friday, April 06, 2007 12:16 PM
To: Nicolas Williams
Cc: Paul Hoffman; [EMAIL PROTECTED]; cryptography@metzdowd.com
Subject: Re: DNSSEC to be strangled at birth.

Nicolas Williams wrote:
> Which means that the MITM would need the cooperation
> of the client's provider in many/most cases (a
> political problem) in order to be able to quickly get
> in the middle so close to a leaf node (a technical
> problem).

Not a very large political problem.  Most ISPs not only
roll over for the DOJ, the FBI, and the DHS, they also
roll over for the Russian mafias.

With the root key and the cooperation of nodes close to
the client, you can intercept SSH and SSL communications
that rely on DNSSEC.  Without the root key, you cannot.
This is huge.

This, of course, means the sensible man configures SSH
not to rely on DNSSEC by default, which substantially
reduces the benefit of SSH.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]

RE: Factorization polynomially reducible to discrete log - known fact or not?

2006-07-10 Thread Charlie Kaufman
I believe this has been known for a long time, though I have never seen the 
proof. I could imagine constructing one based on quadratic sieve.

I believe that a proof that the discrete log problem is polynomially reducible 
to the factorization problem is much harder and more recent (as in sometime in 
the last 20 years). I've never seen that proof either.
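The forward direction is folklore; here is a small Python sketch of the underlying idea (my illustration, not from the thread). It uses a brute-force stand-in for the discrete-log oracle: any multiple of the order of a random element mod N, combined with a square root of 1 other than +/-1, splits N via a gcd.

```python
import math
import random

def multiple_of_order(a, N):
    # Stand-in for a discrete-log oracle: brute-force the multiplicative
    # order of a mod N. A real oracle mod N would yield such a multiple
    # (e.g., any e with a^e = a^2 mod N makes e - 2 a multiple of ord(a)).
    x, m = a % N, 1
    while x != 1:
        x = x * a % N
        m += 1
    return m

def factor(N, tries=200):
    # Halve the even part of a multiple of ord(a); a square root of 1
    # that is neither 1 nor N-1 splits N (Miller's technique).
    for _ in range(tries):
        a = random.randrange(2, N - 1)
        g = math.gcd(a, N)
        if g > 1:
            return g                       # lucky hit: a shares a factor
        m = multiple_of_order(a, N)
        while m % 2 == 0:
            m //= 2
            x = pow(a, m, N)
            if x == N - 1:
                break                      # trivial root of 1; try another a
            if x != 1:
                return math.gcd(x - 1, N)  # nontrivial root of 1 splits N
    return None

print(factor(91))  # 7 or 13, since 91 = 7 * 13
```

The brute-force oracle makes this practical only for toy moduli; the point is that the factoring step needs nothing from the oracle beyond a multiple of an element's order.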

--Charlie

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ondrej Mikle
Sent: Sunday, July 09, 2006 12:22 PM
To: cryptography@metzdowd.com
Subject: Factorization polynomially reducible to discrete log - known fact or 
not?

Hello.

I believe I have the proof that factorization of N=p*q (p, q prime) is
polynomially reducible to discrete logarithm problem. Is it a known fact
or not? I searched for such proof, but only found that the two problems
are believed to be equivalent (i.e. no proof).

I might still have some error in the proof, so it needs to be checked by
someone else. I'd like to know if it is already known (in that case there
would be no reason to bother with it).

Thanks
   O. Mikle

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



It's almost enough to make you feel sorry for Diebold

2005-12-19 Thread Charlie Kaufman
Reportedly, some people demonstrated falsifying votes on Diebold voting 
machines using only resources and techniques available to thousands of election 
workers. It will be interesting to see the fallout. These weaknesses have 
apparently long been known, but denied by Diebold.

http://www.bbvforums.org/cgi-bin/forums/board-auth.cgi?file=/1954/15595.html


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


FW: Fermat's primality test vs. Miller-Rabin

2005-11-10 Thread Charlie Kaufman
(resending after bounce)

-Original Message-
From: Charlie Kaufman 
Sent: Tuesday, November 08, 2005 3:11 PM
To: 'Travis H.'; 'cryptography@metzdowd.com'
Subject: RE: Fermat's primality test vs. Miller-Rabin

> Is that the distinction that makes
> Miller-Rabin a stronger primality test?

Yes. The Miller-Rabin test will fail to detect that a Carmichael number is 
composite at most 25% of the time. Thus, repeating the Miller-Rabin test many 
times gives ever greater confidence. Fermat's test will fail to detect that a 
Carmichael number is composite 100% of the time (for any base coprime to it), 
so repeating it doesn't help (in the fringe case of a Carmichael number).

In practice, the probability of randomly choosing a Carmichael number of size 
250 bits is vanishingly small. The probability of a single run of Miller-Rabin 
or Fermat not detecting that a randomly chosen number is composite is almost 
vanishingly small. I've heard but not confirmed a figure of one failure in 20 
million. I've never heard an estimate of the probability that two runs would 
fail to detect the composite. It couldn't be better than one failure in 20 
million squared or worse than one in 80 million.
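A small Python sketch (my illustration, not part of the original message) makes the contrast concrete using 561 = 3 * 11 * 17, the smallest Carmichael number: every base coprime to 561 passes the Fermat test, while far fewer than 25% of bases pass Miller-Rabin.

```python
from math import gcd

def fermat_probable_prime(n, a):
    # Fermat test: a^(n-1) = 1 mod n for primes (and for Carmichael
    # numbers with any coprime base, which is the problem).
    return pow(a, n - 1, n) == 1

def miller_rabin_probable_prime(n, a):
    # Write n - 1 = 2^s * d with d odd, then check the square-root chain.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

n = 561  # 3 * 11 * 17, the smallest Carmichael number
bases = [a for a in range(2, n - 1) if gcd(a, n) == 1]
fermat_fooled = sum(fermat_probable_prime(n, a) for a in bases)
mr_fooled = sum(miller_rabin_probable_prime(n, a) for a in bases)
print(fermat_fooled == len(bases))    # True: every coprime base fools Fermat
print(mr_fooled < len(bases) / 4)     # True: well under the 25% bound
```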

--Charlie Kaufman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Travis H.
Sent: Tuesday, November 08, 2005 3:05 AM
To: cryptography@metzdowd.com
Subject: Fermat's primality test vs. Miller-Rabin

In Practical Cryptography, Schneier states that you can prove
that when n is not a prime, a certain property of a mod n holds for at
most 25% of possible values 1 < a < n.  He later states that Fermat's
test can be fooled by Carmichael numbers, and finally he basically
says that Miller-Rabin is based on Fermat's test.

It appears that Fermat's test can be fooled by Carmichael numbers,
whereas Miller-Rabin is immune, but I'm not sure why.  It appears that
M-R tests whether, just before the squaring of a that produces a residue
of 1, the residue is n-1.  Apparently that's not true for most bases
of Carmichael numbers.  Is that the distinction that makes
Miller-Rabin a stronger primality test?

It's amazing how many words that took to state, and I didn't even
specify the squaring process.
--
http://www.lightconsulting.com/~travis/  --
We already have enough fast, insecure systems. -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



FW: How broad is the SPEKE patent.

2005-11-10 Thread Charlie Kaufman
(resending after bounce)

-Original Message-
From: Charlie Kaufman 
Sent: Wednesday, November 09, 2005 8:59 PM
To: 'James A. Donald'; [EMAIL PROTECTED]; cryptography@metzdowd.com
Subject: RE: How broad is the SPEKE patent.

James A. Donald said:
> Does SPEKE claim to patent any uses of zero knowledge
> proof of possession of the password for mutual
> authentication, or just some particular method for
> establishing communications?  Is there any way around
> the SPEKE patent for mutual authentication and
> establishing secure communications on a weak passphrase?

That's the wrong question.

The patents related to SPEKE (at least the ones that have issued - there's no 
way to know whether there are additional submarines out there) are available 
free on line from the patent office. You can read the claims and make your own 
judgment.

The right question is whether there is any strong password protocol - either 
known or that you invent yourself - that you can implement without fear of 
being sued for patent infringement.

And the answer is no.

Patent claims, like the U.S. Constitution, mean whatever the courts decide they 
mean. The only way to have confidence that you won't be sued for implementing 
any technology is to observe that lots of other people in similar situations to 
yours are doing it and not being sued.

I am not aware of anyone who is publicly shipping - either in a commercial 
product or as open source - an implementation of a strong password protocol 
without having paid protection money to either Lucent or Phoenix (or both). It 
would be great if someone would.

But proclaiming that the King is wearing no clothes is not without risk.

--Charlie Kaufman

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


FW: How broad is the SPEKE patent.

2005-11-10 Thread Charlie Kaufman
(resending after bounce)

-Original Message-
From: Charlie Kaufman 
Sent: Wednesday, November 09, 2005 9:54 PM
To: 'Steven M. Bellovin'; James A. Donald
Cc: [EMAIL PROTECTED]; cryptography@metzdowd.com
Subject: RE: How broad is the SPEKE patent. 

Steven M. Bellovin wrote:
> Radia Perlman and Charlie Kaufman invented PDM specifically as a
> patent-free method.  However, the claim was made that it infringed the
> SPEKE patent.  Since it wasn't patented, there was no one willing to
> spend the money on legal fees to fight that claim, per a story I heard.

I am not aware that any claim (in the legal sense) has been made that PDM 
infringed SPEKE. Radia and I were pushing PDM in various standards bodies as a 
strong password protocol after convincing our employers not to patent it (no 
easy feat!). We were approached by David Jablon, the inventor of SPEKE but no 
longer the patent holder, who suggested that we should not assume that PDM did 
not infringe SPEKE and should not make such claims to others. This was based on 
claims in a patent filed many years before but which through various techniques 
had been prevented from issuing (a practice known as 'submarining').

While we convinced our employers not to patent the protocol, we certainly 
weren't going to get them to defend it.

That was sufficient to get us to stop promoting PDM. If anyone would like to 
pick it up, they are of course free to do so. From a legal perspective, they 
would probably have a better chance with SRP, since Stanford holds a patent and 
might be motivated to support the challenge.

--Charlie

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: Propping up SHA-1 (or MD5)

2005-03-25 Thread Charlie Kaufman
All hash functions I'm aware of consist of an inner compression function
that hashes a fixed size block of data into a smaller fixed size block
and an outer composition function that applies the inner function
iteratively to the variable length data to be hashed. Essentially you're
proposing a modification to the outer layer of the hash construction.

All of the standard hash functions since MD4 have been constructed so
that a collision in the inner compression function is likely to lead to
a collision in the hash function. MD2 did not have that property. It
computed a cheap checksum of the variable length data in parallel with
the digesting process and digested the checksum following the data. I
have often wondered whether such a cheap addition would strengthen the
newer hashes. (It would fix the suffixing attacks that motivated the
development of HMAC).

It's not obvious whether this would make the functions more secure or
just make them harder to analyze. Perhaps someone from the research
community could comment on why the checksum was removed in the evolution
from MD2 to MD4.

Your proposed encoding has the disadvantage that it would require two
passes over the message being digested. This would be bad news for
hardware implementations and should be avoided if possible.

You note with the construction:

H'(x)=Random || H(Random || x)

(reminiscent of the salted hash calculation for UNIX passwords) that the
hash gets longer. The hash need not get longer. If you have 40 random
bits and the first 120 bits of H(Random || x), you match the size of
SHA-1 and get improved security against most practical attacks. If your
system depends on a fixed length hash, you're in trouble already because
the fixed length is probably 128 bits and the world is headed toward
256.

A problem that does exist with this construction is that some uses of
hash functions assume that if you hash the same data you get the same
hash (or indirectly, that if you sign the same data you get the same
signature). In particular, you now need separate functions for
generating a hash and for checking one.
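A sketch of that fixed-length variant (my illustration, instantiated with SHA-1 from Python's hashlib; the 40/120-bit split follows the paragraph above):

```python
import hashlib
import os

SALT_BYTES = 5    # 40 random bits
TRUNC_BYTES = 15  # first 120 bits of SHA-1(salt || data)

def make_hash(data: bytes) -> bytes:
    # salt || truncated SHA-1(salt || data): 160 bits total,
    # the same length as a plain SHA-1 digest.
    salt = os.urandom(SALT_BYTES)
    return salt + hashlib.sha1(salt + data).digest()[:TRUNC_BYTES]

def check_hash(data: bytes, h: bytes) -> bool:
    # Checking must parse the salt back out of the hash value, so
    # generating and checking are necessarily separate functions.
    salt, trunc = h[:SALT_BYTES], h[SALT_BYTES:]
    return hashlib.sha1(salt + data).digest()[:len(trunc)] == trunc

h = make_hash(b"message")
assert len(h) == 20               # same 160-bit size as SHA-1
assert check_hash(b"message", h)
assert not check_hash(b"tampered", h)
# make_hash(b"message") almost never reproduces h: the randomized hash
# is not deterministic, which is exactly the caveat raised above.
```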

--Charlie Kaufman

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ben Laurie
Sent: Monday, March 21, 2005 3:57 AM
To: Cryptography; [EMAIL PROTECTED]
Subject: Propping up SHA-1 (or MD5)

It was suggested at the SAAG meeting at the Minneapolis IETF that a way 
to deal with weakness in hash functions was to create a new hash 
function from the old like so:

H'(x)=Random || H(Random || x)

However, this allows an attacker to play with Random (the advice I've 
seen is that if one is going to use an IV with a hash function, then one 
should transfer the IV with integrity checks to deny attackers this 
freedom).

Another objection is that this construction changes the API at the 
sender end, which could lead to a great deal of complexity when the use 
of the hash API is deeply embedded.

A third is that the length of the hash is changed, which could break 
existing protocols.

Musing on these points, I wondered about the construction:

H'(x)=H(H(x) || H(H(x) || x))

which doesn't allow an attacker any choice, doesn't change APIs and 
doesn't change the length of the hash. Does this have any merit? Note 
that this is essentially an HMAC where the key is H(x). I omitted the 
padding because it seems to me that this actually makes HMAC weaker 
against the current attacks.
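For concreteness, the proposed construction in Python (my sketch, instantiated with SHA-1 from hashlib):

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def h_prime(x: bytes) -> bytes:
    # H'(x) = H( H(x) || H( H(x) || x ) )
    # i.e., roughly HMAC with key H(x), minus the ipad/opad padding.
    hx = sha1(x)
    return sha1(hx + sha1(hx + x))

d = h_prime(b"hello")
assert len(d) == 20             # output length unchanged from SHA-1
assert h_prime(b"hello") == d   # deterministic: no attacker-chosen Random
assert h_prime(b"hello.") != d
```

Note that as written this still reads x twice (once for H(x), once inside the inner hash), the two-pass cost raised in the reply.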

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to
[EMAIL PROTECTED]



RE: Propping up SHA-1 (or MD5)

2005-03-25 Thread Charlie Kaufman
Whether these various tricks help depends on the technical details of
the attacks found. I hope that the bit twiddling crypto types who are
finding the attacks are going to propose something to fix them.

There are probably cheaper fixes, down in the inner loops of these algorithms
(such as the change from SHA to SHA-1), than the 2x or 3x performance loss of
your algorithm, and I expect these will come out. I'm reluctant to
jump on the SHA-256 bandwagon or to come up with some ad hoc fix until a
more thorough analysis is done. SHA-256 was designed before these
attacks were known and probably has related flaws (though they are even
less likely to be practically exploitable). We have the luxury of having
the current break being largely theoretical, so waiting even a year for
the mathematicians is probably OK. But it's never too early to start
preparing for a new algorithm - perhaps with a new hash size - in our
protocols. Further, given that lots of attacks (past and present) are
not exploitable if every hashed quantity includes some value chosen by a
trusted party and unpredictable by an attacker, it seems reasonable to
consider that as a desirable characteristic as we design our protocols.

--Charlie Kaufman

p.s. Your formulae below have unbalanced parentheses, but I can guess
what you probably meant.

-Original Message-
From: Ben Laurie [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 24, 2005 2:39 AM
To: Charlie Kaufman
Cc: Cryptography; [EMAIL PROTECTED]
Subject: Re: Propping up SHA-1 (or MD5)

Charlie Kaufman wrote:
> All hash functions I'm aware of consist of an inner compression function
> that hashes a fixed size block of data into a smaller fixed size block
> and an outer composition function that applies the inner function
> iteratively to the variable length data to be hashed. Essentially you're
> proposing a modification to the outer layer of the hash construction.
>
> All of the standard hash functions since MD4 have been constructed so
> that a collision in the inner compression function is likely to lead to
> a collision in the hash function. MD2 did not have that property. It
> computed a cheap checksum of the variable length data in parallel with
> the digesting process and digested the checksum following the data. I
> have often wondered whether such a cheap addition would strengthen the
> newer hashes. (It would fix the suffixing attacks that motivated the
> development of HMAC).
>
> It's not obvious whether this would make the functions more secure or
> just make them harder to analyze. Perhaps someone from the research
> community could comment on why the checksum was removed in the evolution
> from MD2 to MD4.
>
> Your proposed encoding has the disadvantage that it would require two
> passes over the message being digested. This would be bad news for
> hardware implementations and should be avoided if possible.

I suggested in a later version these two constructions:

H'(x)=H(H(x || H(0 || x) || H(0 || x))

or:

H'(x)=H(H(x || H(0 || x) || H(1 || x))

which only require a single pass (but, unfortunately, two or three 
different instances of the hash). This seems similar to the mechanism 
used in MD2, except the checksum is expensive.

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


RE: I'll show you mine if you show me, er, mine

2005-03-15 Thread Charlie Kaufman
James A. Donald said:
> There seem to be a shitload of protocols, in addition to SPEKE
> and DH-EKE
> ...
> Can anyone suggest a well reviewed, unpatented, protocol that
> has the desired properties?

Unpatented will be your biggest hurdle.

I collaborated on the development of a strong password protocol with the
explicit goal of having such a protocol that was not patented. For
details, see:
http://www.usenix.org/events/sec01/full_papers/kaufman/kaufman.pdf

But while we got our employers to agree not to patent the algorithm,
neither we nor they are willing to defend it against infringement claims
by others. (It also has not been extensively reviewed. There is no
particular motivation for anyone to do so since its performance is
inferior to other schemes and its patent status is uncertain.)

Basically, there is no way to establish that any technology is
unpatented. The best you can do is hide behind someone with deeper
pockets than yours who is doing the same thing. Like hiding behind IBM
when using Linux.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]