Re: OT: SSL certificate chain problems

2007-01-30 Thread Peter Gutmann
Victor Duchovni [EMAIL PROTECTED] writes:

Wouldn't the old root also (until it actually expires) verify any
certificates signed by the new root? If so, why does a server need to send
the new root?

Because the client may not have the new root yet, and when they try and verify
using the expired root the verification will fail.

(There's a lot of potential further complications in there that I'm going to
 spare people the exposure to, but that's the basic idea).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Intuitive cryptography that's also practical and secure.

2007-01-30 Thread Ed Gerck
Matt Blaze wrote:
 an even more important problem
 than psychic debunking, namely electronic voting. I think intuitive
 cryptography is a very important open problem for our field.

The first problem of voting is that neither side (paper vote vs e-vote)
accepts that voting is hard to do right -- and that we have not done
it yet. Paper is not the gold standard of voting.

The real-world voting problem is actually much harder than people think.
Voting is an open-loop process with an intrinsic vote gap, such that
no one may know for sure what the vote cast actually was -- unless one
is willing to sacrifice the privacy of the vote. This problem is
technology-agnostic.

A solution [1], however, exists, where one can fully preserve privacy
and security, if a small (as small as you need) margin of error is
accepted. Because the margin of error can be made as small as
one needs and is willing to pay for, it is not really relevant, even
when all operational procedures and flaws, including fraud and bugs,
are taken into account.

The solution seems fairly intuitive. In fact, it was used about 500
years ago by the Mughals in India to prevent fraud.

The solution is also technologically neutral, but has more chances for
success, and less cost, with e-voting.

Best,
Ed Gerck

[1] In Shannon's cryptography terms, the solution reduces the probability
of existence of a covert channel to a value as close to zero as we want.
This is done by adding different channels of information, as intentional
redundancy. See http://www.vote.caltech.edu/wote01/pdfs/gerck-witness.pdf
I can provide more details on the fraud model, in case of interest.



Re: Free WiFi man-in-the-middle scam seen in the wild.

2007-01-30 Thread Florian Weimer
* Perry E. Metzger:

 If you go over to, say, www.fidelity.com, you will find that you can't
 even get to the http: version of the page any more -- you are always
 redirected to the https: version.

Of course, this only helps if users visit the site using bookmarks
that were created after the switch.  If they enter "fidelity.com" (or
even just "fidelity") into their browsers to access it, the switch to
HTTPS won't help at all.  Perhaps this explains why someone might
think that serving the login page over HTTPS is just security theater.

In the same "we use HTTPS and are still vulnerable to MITM
attacks" department, there's the really old issue of authenticating
cookies which are not restricted to HTTPS, but will be happily sent
over HTTP as well. *sigh*

Apart from that, the article you linked to does not even mention
actual attacks with an identity theft motive.  What's worse, the
suggested countermeasures don't protect you at all.  Ad-hoc networks
are insecure, and those with an access point are secure?  Yeah, right.



Re: Intuitive cryptography that's also practical and secure.

2007-01-30 Thread Ed Gerck
[Perry, please use this one if possible]

Matt Blaze wrote:
 an even more important problem
 than psychic debunking, namely electronic voting. I think intuitive
 cryptography is a very important open problem for our field.

Matt,

You mentioned in your blog about the crypto solutions for voting and
that they have been largely ignored. The reason is that they are either
solutions to artificially contrived situations that would be impractical
in real life, or postulate conditions such as threshold trust to protect
voter privacy that would not work in real life. Technology-oriented
colleagues are not even aware why threshold trust would not work in
elections.

Thus, the first problem of voting is that neither side (paper vote vs
e-vote) accepts that voting is hard to do right -- and that we have not
done it yet.

The real-world voting problem is actually much harder than people think.

Voting is an open-loop process with an intrinsic vote gap, such that
no one may know for sure what the vote cast actually was -- unless one
is willing to sacrifice the privacy of the vote. This problem is
technology-agnostic.

A solution [1], however, exists, where one can fully preserve privacy
and security, if a small (as small as you need) margin of error is
accepted. Because the margin of error can be made as small as
one needs and is willing to pay for, it is not really relevant, even
when all operational procedures and flaws, including fraud and bugs,
are taken into account.

The solution seems fairly intuitive. In fact, it was used about 500
years ago by the Mughals in India to prevent fraud.

The solution is also technologically neutral, but has more chances for
success, and less cost, with e-voting.

Best,
Ed Gerck

[1] In Shannon's cryptography terms, the solution reduces the probability
of existence of a covert channel to a value as close to zero as we want.
The covert channel is composed of several MITM channels between the voter
registration, the voter, the ballot box, and the tally accumulator. This
is done by adding different channels of information, as intentional
redundancy. See http://www.vote.caltech.edu/wote01/pdfs/gerck-witness.pdf
I can provide more details on the fraud model, for those who are
interested.



News.com: IBM donates new privacy tool to open-source Higgins

2007-01-30 Thread John Gilmore
http://news.com.com/IBM+donates+new+privacy+tool+to+open-source/2100-1029_3-6153625.html

IBM donates new privacy tool to open-source
  By  Joris Evers
  Staff Writer, CNET News.com
  Published: January 25, 2007, 9:00 PM PST

IBM has developed software designed to let people keep personal  
information secret when doing business online and donated it to the  
Higgins open-source project.

  The software, called Identity Mixer, was developed by IBM  
researchers. The idea is that people provide encrypted digital  
credentials issued by trusted parties like a bank or government agency  
when transacting online, instead of sharing credit card or other  
details in plain text, Anthony Nadalin, IBM's chief security architect,  
said in an interview.

  "Today you traditionally give away all of your information to the man
in the middle and you don't know what they do with it," Nadalin said.
"With Identity Mixer you create a pseudonym that you hand over."

  For example, when making a purchase online, buyers would provide an  
encrypted credential issued by their credit card company instead of  
actual credit card details. The online store can't access the  
credential, but passes it on to the credit card issuer, which can  
verify it and make sure the retailer gets paid.

  "This limits the liability that the storefront has, because they don't
have that credit card information anymore," Nadalin said. "All you hear
about is stores getting hacked."

  Similarly, an agency such as the Department of Motor Vehicles could  
issue an encrypted credential that could be used for age checks, for  
example. A company looking for such a check won't have to know an  
individual's date of birth or other driver's license details; the DMV  
can simply electronically confirm that a person is of age, according to  
IBM.

  The encrypted credentials would be for one-time use only. The next  
purchase or other transaction will require a new credential. The  
process is similar to the one-time-use credit card numbers that  
Citigroup card holders can already generate on the bank's Web site.

  IBM hopes technology such as its Identity Mixer helps restore trust in  
the Web. Several surveys in past years have shown that the seemingly  
incessant stream of data breaches and threats such as phishing scams  
are eroding consumer confidence in online shopping and activities such  
as banking on the Web.

  To get Identity Mixer out of the lab and into the real world, IBM is  
donating its work to the Higgins project, a broad, open-source effort
backed by IBM and Novell that promises to give people more control of  
their personal data when doing business online. Higgins also aims to  
make the multiple authentication systems on the Net work together,  
making it easier for people to manage Internet logins and passwords.

  "We expect Higgins to get wide deployment and usage. You'll get the
ability by using Higgins to anonymize data," Nadalin said.

  Higgins is still under development. A first version of the project's
work is slated to be done sometime midyear, said Mary Ruddy, a Higgins
project leader. "We were thrilled to get this donation to Higgins; IBM
has done a lot of good work."



data under one key, was Re: analysis and implementation of LRW

2007-01-30 Thread Travis H.
On Wed, Jan 24, 2007 at 03:28:50PM -0800, Allen wrote:
 If 4 gigs is right, would it then be records to look for to break 
 the code via birthday attacks would be things like seismic data,

In case anyone else couldn't parse this, he means "the amount of
encrypted material necessary to break the key would be large" or "the
size of a lookup table would be large" or something like that.

 Currently I'm dealing 
 with very large - though not as large as 4 gig - x-ray, MRI, and 
 similar files that have to be protected for the lifespan of the 
 person, which could be 70+ years after the medical record is 
 created. Think of the MRI of a kid to scan for some condition 
 that may be genetic in origin and has to be monitored and 
 compared with more recent results their whole life.

That's longer than computers have been available, and also longer
than modern cryptography has existed.  The only way I would propose
to be able to stay secure that long is either:
1) use a random key as large as the plaintext (one-time-pad)
2) prevent the ciphertext from leaking
   (quantum crypto, spread-spectrum communication, steganography)

Even then, I doubt Lloyd's would insure it.  Anyone who claims to know
what the state of the art will be like in 70+ years is a fool.  I
would be cautious about extrapolating more than five years.

The problem is not the amount of data under one key; that's easy
enough: generate random keys for every n bits of plaintext and encrypt
them with a meta-key, creating a two-level hierarchy.  You calculate an
information-theoretic bound on n by computing the entropy of the
plaintext and the unicity distance of the cipher.  Note that the data
(keys) encrypted directly with the meta-key is completely random, so
the unicity distance is infinite.  Furthermore, one can't easily
brute-force the meta-key by trying the decrypted normal keys on the
ciphertext because all the plaintext under one key equivocates because
it is smaller than the unicity distance.  I'm not sure how it
compounds when the meta-key encrypts multiple keys, I'd have to look
into that.  In any case, you can create a deeper and deeper hierarchy
as you go along.
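As a sketch of that two-level hierarchy (the keystream here is a toy
SHA-256 counter construction standing in for a real cipher, and all the
names and record IDs are invented for illustration):

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # toy SHA-256 counter keystream, a stand-in for a real cipher
    out = bytearray()
    for i in range(0, len(data), 32):
        ks = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], ks))
    return bytes(out)

def encrypt_record(meta_key: bytes, record_id: bytes, plaintext: bytes):
    data_key = secrets.token_bytes(32)                    # fresh random key per n bits
    wrap_key = hashlib.sha256(meta_key + record_id).digest()
    return keystream_xor(wrap_key, data_key), keystream_xor(data_key, plaintext)

def decrypt_record(meta_key: bytes, record_id: bytes, wrapped: bytes, ct: bytes):
    wrap_key = hashlib.sha256(meta_key + record_id).digest()
    data_key = keystream_xor(wrap_key, wrapped)           # unwrap, then decrypt
    return keystream_xor(data_key, ct)

meta_key = secrets.token_bytes(32)
wrapped, ct = encrypt_record(meta_key, b"mri-0001", b"imaging data ...")
assert decrypt_record(meta_key, b"mri-0001", wrapped, ct) == b"imaging data ..."
```

Note the point made above in code form: what the meta-key encrypts (the
data keys) is uniformly random, so ciphertext under the meta-key gives
an attacker nothing to equivocate against.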

This bound is the limit for information-theoretic, or unconditional
security.  Shannon proved that a system with these characteristics is
unbreakable.  If you don't know what the entropy of the plaintext is,
you have to use a one-time pad.  The unicity distance of DES, last
time I looked, was so low that one might as well use a one-time pad.

With computational security, you can fudge a little by trying to
calculate how much data you can safely encrypt under one key.
However, I believe this value can only go down over time, as new
cryptanalytic attacks are developed against the cipher.

Another method is to derive many data keys from bits of a larger
meta-key in a way that is computationally infeasible to reverse.
However, every time you hear "computationally infeasible", remember
that it is an argument from ignorance: we don't know an efficient way
to break it yet, or if someone does, they aren't talking.
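A common shape for this, sketched with stdlib HMAC (the label and
record-id scheme are invented for the example):

```python
import hashlib
import hmac

def derive_data_key(meta_key: bytes, record_id: bytes) -> bytes:
    # each record gets its own key; recovering meta_key from derived keys
    # would mean inverting HMAC-SHA256, believed computationally infeasible
    return hmac.new(meta_key, b"data-key|" + record_id, hashlib.sha256).digest()

mk = b"\x07" * 32   # example meta-key
k1 = derive_data_key(mk, b"record-1")
k2 = derive_data_key(mk, b"record-2")
assert k1 != k2 and len(k1) == 32
assert derive_data_key(mk, b"record-1") == k1   # deterministic, no key database
```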

You can also make this argument more scientific by extrapolating
future attacks and computational advances from trends (Moore's Law
et al.) -- see "Rules of Thumb in Data Engineering" from Microsoft;
it's on the storagemojo.com blog and well worth reading.

Furthermore, you should provide a mechanism for the crypto to be
changed transparently as technology progresses; an installed base is
forever, but computational security is not.  Permitting multiple
security configurations is complex, but I don't think anything short
of OTP can give an absolute assurance of confidentiality when the
opponent has access to the plaintext.

Another simple solution, the belt-and-suspenders method, is to
superencrypt the ciphertext with a structurally different cipher.
This basically makes the plaintext fed to the top-level cipher
computationally indistinguishable from random data, and so the unicity
distance of the top-level cipher is infinite according to
computational security of the lower-level cipher.  I'm mixing
terminology here, but the net result is that you're guaranteed that
the combination is as secure as either alone, and in most cases a
weakness in one cipher will not be a weakness in the other (this
is because of the structurally independent assumption).
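A sketch of the belt-and-suspenders idea, with two hash-based counter
keystreams standing in for structurally different ciphers (stdlib only;
a toy illustration, not a vetted construction):

```python
import hashlib

def stream(hash_name: str, key: bytes, n: int) -> bytes:
    # counter keystream from a hash; SHA-256 and BLAKE2b have structurally
    # different compression functions, standing in for independent ciphers
    out, ctr = bytearray(), 0
    while len(out) < n:
        out.extend(hashlib.new(hash_name, key + ctr.to_bytes(8, "big")).digest())
        ctr += 1
    return bytes(out[:n])

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pt = b"seventy-year medical record"
inner = xor(pt, stream("sha256", b"inner key", len(pt)))         # first cipher
outer = xor(inner, stream("blake2b", b"outer key", len(inner)))  # superencryption
back = xor(xor(outer, stream("blake2b", b"outer key", len(outer))),
           stream("sha256", b"inner key", len(pt)))              # peel in reverse
assert back == pt
```

The outer cipher only ever sees the inner ciphertext, which is
computationally indistinguishable from random data if the inner cipher
holds up.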

You get the same effect by sending an encrypted file over an encrypted
network connection (unless the file is converted to base64 or
something prior to transmission), assuming that the opponent is not an
insider with access to decrypted network traffic.




Some assumptions to consider are:

What problem(s) are we trying to solve, and why?

Can we set up a secure distribution network for key material?

Who is the opponent?  How many years will they remain interested in a
captured record?

What is the budget?

Who are the users?  How competent are they?  How much can we educate them?

How will we fix bugs or update it?

What are the security priorities?

Re: OT: SSL certificate chain problems

2007-01-30 Thread Victor Duchovni
On Sat, Jan 27, 2007 at 02:12:34PM +1300, Peter Gutmann wrote:

 Victor Duchovni [EMAIL PROTECTED] writes:
 
 Wouldn't the old root also (until it actually expires) verify any
 certificates signed by the new root? If so, why does a server need to send
 the new root?
 
 Because the client may not have the new root yet, and when they try and verify
 using the expired root the verification will fail.

I am curious how the expired trusted old root helps to verify the as
yet untrusted new root... Is there a special-case behaviour when the
old and new root share the same DN and public key? Is such special-case
behaviour standard for trust chain verification implementations (allowing
the lifetime of root CAs to be indefinitely extended by issuing new certs
with the same keys)?
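To make concrete why chain verification cannot tell two such roots
apart while both are valid, here is a toy model; toy_sign is a made-up
stand-in for a real X.509 signature, but it captures the relevant point
that verification depends only on the issuer's public key and the
signed bytes, which re-issued roots share:

```python
import hashlib

def toy_sign(key: bytes, tbs: bytes) -> bytes:
    # made-up stand-in for a real signature scheme; it depends only on
    # the CA key and the to-be-signed bytes, as a real signature does
    return hashlib.sha256(b"sig|" + key + tbs).digest()

def toy_verify(key: bytes, tbs: bytes, sig: bytes) -> bool:
    return toy_sign(key, tbs) == sig

ca_key = b"example-root-keypair"   # same keypair in both root certs
old_root = {"subject": "CN=Example Root", "key": ca_key, "notAfter": 1999}
new_root = {"subject": "CN=Example Root", "key": ca_key, "notAfter": 2028}

server_tbs = b"CN=www.example.com issued by CN=Example Root"
sig = toy_sign(ca_key, server_tbs)

# either root validates the same server signature, because chain building
# matches on issuer DN and public key, which the two roots share
assert toy_verify(old_root["key"], server_tbs, sig)
assert toy_verify(new_root["key"], server_tbs, sig)
```

Whether a verifier then also applies the roots' validity dates, and in
what order, is exactly the implementation-specific behaviour the
question is about.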

-- 
Viktor.



Re: Private Key Generation from Passwords/phrases

2007-01-30 Thread Steven M. Bellovin
On Mon, 22 Jan 2007 16:57:34 -0800
Abe Singer [EMAIL PROTECTED] wrote:

 On Sun, Jan 21, 2007 at 12:13:09AM -0500, Steven M. Bellovin wrote:
  
  One sometimes sees claims that increasing the salt size is
  important. That's very far from clear to me.  A collision in the
  salt between two entries in the password file lets you try each
  guess against two users' entries.  Since calculating the guess is
  the hard part, that's a savings for the attacker.  With 4K possible
  salts, you'd need a very large password file to have more than a
  very few collisions, though.  It's only a benefit if the password
  file (or collection of password files) is very large.
 
 Definition of "very large" can vary (alliteration intended).  Our
 userbase is about 6,000 active users, and over the past 20 years
 we've allocated at least 12,000 accounts.  So we definitely have
 collisions in 4k salt space. I'm not speaking to collisions in
 passwords, just salts.
 
 UCSD has maybe 60,000 active users.  I think "very large" is very
 common in the University environment.
 
Is that all in one /etc/passwd file (or the NIS equivalent)?  Or is it a
Kerberos KDC?  I note that a salt buys the defense much less in a
Kerberos environment, where capture of the KDC database lets an
attacker roam freely, and the salt simply protects other sites where
users may have used the same password.

Beyond that, 60K doesn't make that much of a difference even with a
traditional /etc/passwd file -- it's only an average factor of 15
reduction in the attacker's workload.  While that's not trivial, it's
also less than, say,  a one-character increase in average password
length.  That said, the NetBSD HMAC-SHA1 password hash, where I had
some input into the design, uses a 32-bit salt, because it's free.
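The arithmetic behind that factor of 15, for anyone who wants to check
it:

```python
# average number of password-file entries sharing one salt value
users, salts = 60_000, 4096
avg_users_per_salt = users / salts
print(round(avg_users_per_salt, 1))   # 14.6, i.e. roughly a factor of 15

# compare: one extra password character from even a 26-letter alphabet
# multiplies the attacker's search space by 26, which more than offsets
# what salt collisions give back
assert 26 > avg_users_per_salt
```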



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: OT: SSL certificate chain problems

2007-01-30 Thread Thor Lancelot Simon
On Fri, Jan 26, 2007 at 11:42:58AM -0500, Victor Duchovni wrote:
 On Fri, Jan 26, 2007 at 07:06:00PM +1300, Peter Gutmann wrote:
 
  In some cases it may be useful to send the entire chain, one such being 
  when a
  CA re-issues its root with a new expiry date, as Verisign did when its roots
  expired in December 1999.  The old root can be used to verify the new root.
 
 Wouldn't the old root also (until it actually expires) verify any
 certificates signed by the new root? If so, why does a server need to
 send the new root? So long as the recipient has either the new or the
 old root, the chain will be valid.

That doesn't make sense to me -- the end-of-chain (server or client)
certificate won't be signed by _both_ the old and new root, I wouldn't
think (does X.509 even make this possible?).

That means that for a party trying to validate a certificate signed by
the new root, but who has only the old root, the new root's certificate
will be a necessary intermediate step in the chain to the old root, which
that party trusts (assuming the new root is signed by the old root, that
is).

Or do I misunderstand?

Thor



Re: OT: SSL certificate chain problems

2007-01-30 Thread Victor Duchovni
On Sun, Jan 28, 2007 at 12:47:18PM -0500, Thor Lancelot Simon wrote:

  Wouldn't the old root also (until it actually expires) verify any
  certificates signed by the new root? If so, why does a server need to
  send the new root? So long as the recipient has either the new or the
  old root, the chain will be valid.
 
 That doesn't make sense to me -- the end-of-chain (server or client)
 certificate won't be signed by _both_ the old and new root, I wouldn't
 think (does x.509 even make this possible)?

 Or do I misunderstand?

The key extra information is that old and new roots share the same issuer
and subject DNs and public key, only the start/expiration dates differ,
so in the overlap when both are valid, they are interchangeable, both
verify the same (singly-signed) certs. What I don't understand is how
the old (finally expired) root helps to validate the new unexpired root,
when a verifier has the old root and the server presents the new root
in its trust chain.

-- 

Victor Duchovni
IT Security, Morgan Stanley

NOTICE: If received in error, please destroy and notify sender.  Sender
does not waive confidentiality or privilege, and use is prohibited.



length-extension and Merkle-Damgard hashes

2007-01-30 Thread Travis H.
So I was reading this:
http://en.wikipedia.org/wiki/Merkle-Damgard

It seems to me the length-extension attack (given one collision, it's
easy to create others) is not the only one, though it's obviously a
big concern to those who rely on it.

This attack thanks to Schneier:

If the ideal hash function is a random mapping, Merkle-Damgard hashes
which don't use a finalization function have the following property:

If h(m0||m1||...mk) = H, then h(m0||m1||...mk||x) = h(H||x) where the
elements of m are the same size as the block size of the hash, and x
is an arbitrary string.  Note that encoding the length at the end
permits an attack for some x, but I think this is difficult or
impossible if the length is prepended.
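The property is easy to demonstrate with a toy Merkle-Damgard chain
that, as described, has no length encoding and no finalization function
(here the first block serves as the initial state, so h(H||x) literally
resumes the chain from H; real hashes differ in exactly the details
this toy omits):

```python
import hashlib

BLOCK = 32  # toy: block size equals the chaining-value size

def compress(state: bytes, block: bytes) -> bytes:
    return hashlib.sha256(state + block).digest()

def md_hash(msg: bytes) -> bytes:
    # toy Merkle-Damgard chain: the first block is the initial state,
    # with no length encoding and no finalization function
    state = msg[:BLOCK]
    for i in range(BLOCK, len(msg), BLOCK):
        state = compress(state, msg[i:i + BLOCK])
    return state

m = b"A" * BLOCK + b"B" * BLOCK + b"C" * BLOCK   # m0 || m1 || m2
x = b"X" * BLOCK                                 # arbitrary extension block
H = md_hash(m)
assert md_hash(m + x) == md_hash(H + x)          # h(m||x) == h(H||x), as claimed
```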

-- 
The driving force behind innovation is sublimation.
-- URL:http://www.subspacefield.org/~travis/
For a good time on my UBE blacklist, email [EMAIL PROTECTED]




Re: Private Key Generation from Passwords/phrases

2007-01-30 Thread Abe Singer
On Sun, Jan 28, 2007 at 11:52:16AM -0500, Steven M. Bellovin wrote:
  
 Is that all in one /etc/passwd file (or the NIS equivalent)?  Or is it a
 Kerberos KDC?  I note that a salt buys the defense much less in a

For SDSC, one file.  For UCSD, not sure, but I suspect it's (now) a KDC.
(Brian, are you on this list?)

 Kerberos environment, where capture of the KDC database lets an
 attacker roam freely, and the salt simply protects other sites where
 users may have used the same password.

Agreed.

 Beyond that, 60K doesn't make that much of a difference even with a
 traditional /etc/passwd file -- it's only an average factor of 15
 reduction in the attacker's workload.  While that's not trivial, it's
 also less than, say,  a one-character increase in average password
 length.  That said, the NetBSD HMAC-SHA1 password hash, where I had
 some input into the design, uses a 32-bit salt, because it's free.


I don't disagree with you.  I was just addressing your implication
(or at least, what I *read* as an implication ;-) that more than 4096
users was rare.

FWIW, the glibc MD5 crypt function uses a 48-bit salt.

Also FWIW, salt length significantly affects the work factor and storage
requirements for precomputed hashes built from dictionaries.
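A rough illustration of that scaling, assuming a million-word
dictionary (an invented figure for the example) and one precomputed
entry per (word, salt) pair:

```python
# precomputation must cover every possible salt for every dictionary word,
# so the table grows by a factor of 2 per salt bit
dictionary_words = 1_000_000   # assumed dictionary size, for illustration
for salt_bits in (12, 32, 48):
    table_entries = dictionary_words * 2 ** salt_bits
    print(salt_bits, table_entries)
```

At 48 salt bits the table is already about 2.8 * 10**20 entries per
million words, which is why large salts kill dictionary precomputation.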
