Re: Private Key Generation from Passwords/phrases

2007-02-03 Thread Alexander Klimov
On Sun, 28 Jan 2007, Steven M. Bellovin wrote:
 Beyond that, 60K doesn't make that much of a difference even with a
 traditional /etc/passwd file -- it's only an average factor of 15
 reduction in the attacker's workload.  While that's not trivial, it's
 also less than, say,  a one-character increase in average password
 length.  That said, the NetBSD HMAC-SHA1 password hash, where I had
 some input into the design, uses a 32-bit salt, because it's free.

In many cases the real goal is not to find all (or many) passwords,
but to find at least one, so one may concentrate on the most often
used salt. (Of course, with 60K passwords there is almost surely at
least one "password1" or "Steven123", and thus the salts are
irrelevant.)
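The point about concentrating on one salt can be made concrete: with a small salt space, an attacker groups accounts by salt and pays one hash computation per guess for every account sharing the most common salt. A toy sketch (hypothetical records; plain SHA-256 stands in for a real password-hash function):

```python
import hashlib
from collections import Counter

def pw_hash(salt: bytes, password: str) -> bytes:
    # Stand-in for a real salted password hash (e.g. MD5-crypt).
    return hashlib.sha256(salt + password.encode()).digest()

# Hypothetical /etc/passwd-style records: (salt, hash), tiny salt space.
records = [(bytes([i % 4]), pw_hash(bytes([i % 4]), f"secret{i}"))
           for i in range(100)]
records.append((b"\x00", pw_hash(b"\x00", "password1")))  # the inevitable weak one

# Attack only the most common salt: each candidate hash is compared
# against every account sharing that salt at no extra cost.
common_salt, _ = Counter(s for s, _ in records).most_common(1)[0]
targets = {h for s, h in records if s == common_salt}

dictionary = ["letmein", "password1", "qwerty"]
found = [w for w in dictionary if pw_hash(common_salt, w) in targets]
```

With a large (or per-account unique) salt space this grouping buys the attacker nothing, which is the argument for bigger salts when they are free.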

-- 
Regards,
ASK

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: News.com: IBM donates new privacy tool to open-source Higgins

2007-02-03 Thread Anne Lynn Wheeler

John Gilmore wrote:

http://news.com.com/IBM+donates+new+privacy+tool+to+open-source/2100-1029_3-6153625.html

IBM donates new privacy tool to open-source
  By  Joris Evers
  Staff Writer, CNET News.com
  Published: January 25, 2007, 9:00 PM PST


...

For example, when making a purchase online, buyers would provide an  
encrypted credential issued by their credit card company instead of  
actual credit card details. The online store can't access the  
credential, but passes it on to the credit card issuer, which can  
verify it and make sure the retailer gets paid.


"This limits the liability that the storefront has, because they don't
have that credit card information anymore," Nadalin said. "All you hear
about is stores getting hacked."


  Similarly, an agency such as the Department of Motor Vehicles could  
issue an encrypted credential that could be used for age checks, for  
example. A company looking for such a check won't have to know an  
individual's date of birth or other driver's license details; the DMV  
can simply electronically confirm that a person is of age, according to  
IBM.


this was somewhat the issue with x.509 identity certificates from the early 90s,
they were being overloaded with personal information ... and then the proposal
that everybody should then spray such digital certificates frequently all
over the world. in this period, they were also being touted for use in
electronic driver's licenses, passports, etc.

In the mid-90s, with the realization of the enormous privacy exposures of
such a paradigm ... there were some parties retrenching to relying-party-only
certificates ... basically a record pointer ... which was then used as a
reference to the record with the necessary information ... and only the
absolutely necessary information was then divulged.
http://www.garlic.com/~lynn/subpubkey.html#rpo

however, it was trivially possible to demonstrate that the actual
digital certificate was redundant and superfluous ... all that was
necessary was the record pointer and a digital signature ... and the
responsible agency could verify the digital signature with the public
key on file ... at the same time they processed the request using
the record pointer.

This was basically the FSTC organization's
http://www.fstc.org/
model for FAST (financial authenticated secure transaction). The transaction
is mapped into an existing ISO 8583 message and uses the existing infrastructure
operations. Rather than divulging age, a FAST (/8583) transaction ... digitally
signed by the individual ... could ask whether the person meets some age
criteria, address criteria, etc ... getting a YES/NO response ... w/o divulging
any additional information (like actual date of birth). This was modeled
after existing (ISO 8583, debit, credit, etc) financial transactions, which
effectively ask whether the merchant gets paid or not (simply a YES/NO
response).
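A minimal sketch of such a signed YES/NO predicate query (hypothetical names throughout; HMAC with an on-file per-account key stands in for the digital signature and registered public key, since the Python standard library has no asymmetric signing):

```python
import hmac, hashlib, datetime

# Hypothetical responsible-agency database: the record pointer (account
# number) maps to the data on file and the key used to verify requests.
AGENCY_DB = {
    "acct-123": {"dob": datetime.date(1980, 5, 1), "key": b"alice-key"},
    "acct-456": {"dob": datetime.date(1995, 5, 1), "key": b"bob-key"},
}

def sign(key: bytes, message: bytes) -> bytes:
    # Stand-in for the individual's digital signature over the request.
    return hmac.new(key, message, hashlib.sha256).digest()

def age_check(account, min_age, sig, today=datetime.date(2007, 2, 3)):
    """Answer a signed 'is this person at least min_age?' query with
    YES/NO, without divulging the date of birth itself."""
    rec = AGENCY_DB[account]
    msg = f"{account}:age>={min_age}".encode()
    if not hmac.compare_digest(sign(rec["key"], msg), sig):
        return "REJECT"  # a bare account number is useless without the signature
    dob = rec["dob"]
    years = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return "YES" if years >= min_age else "NO"
```

The REJECT branch is the point of the X9.59-style business rule below: a harvested account number alone cannot produce a valid transaction.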

This is also, effectively the X9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959
http://www.garlic.com/~lynn/subpubkey.html#x959

The X9A10 financial standard working group, in the mid-90s, was given the
requirement to preserve the integrity of the financial infrastructure for
all retail payments. The transaction is sent with a digital signature,
the responsible agency validates the digital signature, examines the
transaction request, and then responds YES/NO regarding whether the
merchant gets paid or not.

The other characteristic of X9.59 was that it included a business rule that
X9.59 account numbers couldn't be used in non-X9.59 transactions. That
made the associated account numbers (record pointers) unusable w/o the
accompanying digital signature, i.e. random people couldn't generate
random (valid) transactions against the account number/record number.
This had the effect of eliminating a lot of the existing skimming/harvesting
exploits:
http://www.garlic.com/~lynn/subintegrity.html#harvest

It isn't necessary to have an encrypted credential ... since it is
purely static data ... and simply presenting such static data exposes
the infrastructure to various kinds of replay attacks. In that sense,
the static data can be any recognizable information specific to the
responsible agency handling the transaction. The static data is used
by the responsible party to look up the actual information (including,
if necessary, the public key) ... and the digital signature on every
transaction prevents various kinds of replay attacks ... that might be
possible in an infrastructure relying on only static data. If the
agency is going to look up something (rather than have it carried around
in a large encrypted packet ...) then it becomes immaterial whether the
actual (static data) record locator is encrypted or not.

related post in this thread:
http://www.garlic.com/~lynn/2007c.html#43  Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#46  Securing financial transactions a high priority for 2007

Chaos on a chip

2007-02-03 Thread Sean McGrath


 Original Message 

Subject: Physics News Update 810

PHYSICS NEWS UPDATE
The American Institute of Physics Bulletin of Physics News
Number 810   30 January 2007 by Phillip F. Schewe, Ben Stein, Turner
Brinton, and Davide Castelvecchi www.aip.org/pnu

[...]

CHAOS ON A CHIP.  For the first time physicists have shown that well
structured chaos can be initiated in a photonic integrated circuit.
Furthermore, this represents the first time scientists have been
able to study optical chaos at gigahertz rates.
The output of a semiconductor laser is normally regular.  However,
if certain laser parameters are tweaked, such as by modulating the
electric current pumping the laser or by feeding back some of the
laser’s light from an external mirror, the overall laser output will
become chaotic; that is, the laser output will be unpredictable.  To
make the chaos even more dramatic (and exploitable) Mirvais Yousefi
and his colleagues at the Technische Universiteit Eindhoven (in the
Netherlands) use paired lasers, lasers built very close to each
other on a chip in such a way that each affects the operation of the
other.  The Eindhoven chip, using the paired-laser
mutual-perturbation approach to triggering chaos, is the first to
exhibit chaos directly, revealing telltale strange attractors on
plots of laser power at one instant versus laser power at a slightly
later instant, rather than indirectly through recording laser spectra.
Looking ahead to the day when opto-photonic chips are covered with
thousands or millions of lasers, the Eindhoven approach could allow
troubleshooters to pinpoint the whereabouts of misbehaving
lasers; not only that, but possibly even exploit localized chaotic
effects to their advantage.
According to Yousefi ([EMAIL PROTECTED]) other possible uses for
chip-based chaos will be the business of encryption, tomography, and
possibly even the establishment of multi-tiered logic protocols,
those based not just on the binary logic of 1s and 0s but on the
many intensity levels corresponding to the broadband output of the
chaotic laser system. (Yousefi et al., Physical Review Letters, 26
January 2007; text at www.aip.org/physnews/select )

[...]

***
PHYSICS NEWS UPDATE is a digest of physics news items arising
from physics meetings, physics journals, newspapers and
magazines, and other news sources.  It is provided free of charge
as a way of broadly disseminating information about physics and
physicists. For that reason, you are free to post it, if you like,
where others can read it, providing only that you credit AIP.
Physics News Update appears approximately once a week.

AUTO-SUBSCRIPTION OR DELETION: By using the expression
"subscribe physnews" in your e-mail message, you
will have automatically added the address from which your
message was sent to the distribution list for Physics News Update.
If you use the "signoff physnews" expression in your e-mail message,
the address in your message header will be deleted from the
distribution list.  Please send your message to:
[EMAIL PROTECTED]
(Leave the Subject: line blank.)



Re: Intuitive cryptography that's also practical and secure.

2007-02-03 Thread Leichter, Jerry
| ...I agree with you about intuitive cryptography.  What you're
| complaining about is, in effect, Why Johnny Can't Hash.  There was
| another instance of that in today's NY Times.  In one of the court
| cases stemming from the warrantless wiretapping, the Justice
| Department is, in the holy name of security, effectively filing court
| papers with itself -- it's depositing the filings in a secure
| facility, rather than with the court, to protect them.  I won't go
| into the legal, political, judicial, or downright bizarre aspects of
| this case (save to note that one of the plaintiff's attorneys was
| quoted as saying "Sometime during all of this, I went on Amazon and
| ordered a copy of Kafka's 'The Trial,' because I needed a refresher
| course in bizarre legal procedures."), but one point the article
| mentioned is relevant here:  how is the record preserved for a
| possible appeal?  Indeed, one of the judges involved has commented on
| that point.
| 
| ...There's an obvious cryptographic solution, of course: publish the
| hash of any such documents.  Practically speaking, it's useless.  Apart
| from having to explain hash functions to lawyers, judges, members of
| Congress, editorial page writers, bloggers, and talk show hosts,...
This is a common misconception.  The legal system does not rely on
lawyers, judges, members of Congress, and so on understanding how
technology or science works.  It doesn't rely on them coming to accept
the trustworthiness of the technology on any basis a technologist would
consider reasonable.  All it requires is that they accept the authority
of experts in the subject area, and that those experts agree strongly
enough that the mechanism is sound.

How many people understand DNA matching?  How much do you think *you*
understand about DNA matching?  Could you name a single reagent used in
doing a DNA match?  Could you distinguish between a good match and a bad
match?  If someone handed you one of those pictures of different bands
on an electrophoresis plate, could you tell if it was real or faked?
Does any of this influence your faith in the validity of DNA matching as
a forensic technology?

Just as DNA matching can be explained in very simple, if fundamentally
very limited terms, as something like fingerprint matching only more
sophisticated, one can easily explain hashing in pretty much the same
terms.  It would not be hard to find highly credentialed experts who
would testify as to the worth, applicability, and general acceptance by
those in the field, of the technique.  Sure, lawyers on the other side
of a case trying to gain acceptance for hashing could probably find
*someone* to cast doubt on it - but it's unlikely they would be very
good expert witnesses - and in the end that's what determines the
outcome.

| this a time you'd want to stand up before a Congressional committee and
| testify that some NSA technology, i.e., SHA-512, that NIST thinks needs
| replacing, is still strong enough to protect documents that concern
| possible NSA misconduct?  And of course, collision attacks are
| precisely the concern here.
Well, there will always be tin-hatters out there who will doubt
absolutely everything.  We rely on the police to hold on to evidence
concerning the people charged with crimes - who are sometimes corrupt
cops, politicians who control police funds, etc., etc.  There are
procedural safeguards around the chain of custody of materials.

When it comes to records of decided cases, the courts hold on to this
stuff.  Just how secure are *their* facilities?  There is rarely reason
for anyone to mount a concerted attack against them.  If you're worrying
about the NSA modifying stored evidence, what makes you think they would
have much trouble mounting a black-bag attack against some court's
storage room somewhere?

There are a number of very troubling issues about this series of cases
and the way the courts have allowed them to be handled (so far; history
shows that the courts, just like the other branches of government, are
very protective of what they perceive as their domain of responsibility,
and they tend to take back their roles).  But I'm not particularly
concerned about the NSA using some secret technique to find a second
preimage of a hash of the evidence.  Of course, the practical
difficulties of even getting to the point of being able to compute a
hash over a large collection of papers, books, various kinds of records,
and likely some other pieces of physical evidence are considerable.

-- Jerry



Re: Intuitive cryptography that's also practical and secure.

2007-02-03 Thread Steven M. Bellovin
On Tue, 30 Jan 2007 16:10:47 -0500 (EST)
Leichter, Jerry [EMAIL PROTECTED] wrote:



 | 
 | ...There's an obvious cryptographic solution, of course: publish the
 | hash of any such documents.  Practically speaking, it's useless.
 | Apart from having to explain hash functions to lawyers, judges,
 | members of Congress, editorial page writers, bloggers, and talk
 | show hosts,... 

 This is a common misconception.  The legal system does
 not rely on lawyers, judges, members of Congress, and so on
 understanding how technology or science works.  It doesn't rely on
 them coming to accept the trustworthiness of the technology on any
 basis a technologist would consider reasonable.  All it requires is
 that they accept the authority of experts in the subject area, and
 that those experts agree strongly enough that the mechanism is
 sound.

I don't dispute your analysis.  However, this case is not just a legal
one, it's a political issue, which is why I spoke of editorial page
writers, bloggers, and talk show hosts.  All it will take is for
enough technically-skilled conspiracy theorists to raise the issue of
hash function collisions and NSA, and we won't hear the end of it for
decades to come.  (Did you know that President Kennedy was actually
killed by a large prime factor discovered by the CIA...?)



--Steve Bellovin, http://www.cs.columbia.edu/~smb



Re: Intuitive cryptography that's also practical and secure.

2007-02-03 Thread Leichter, Jerry
|  | 
|  | ...There's an obvious cryptographic solution, of course: publish the
|  | hash of any such documents.  Practically speaking, it's useless.
|  | Apart from having to explain hash functions to lawyers, judges,
|  | members of Congress, editorial page writers, bloggers, and talk
|  | show hosts,... 
| 
|  This is a common misconception.  The legal system does
|  not rely on lawyers, judges, members of Congress, and so on
|  understanding how technology or science works.  It doesn't rely on
|  them coming to accept the trustworthiness of the technology on any
|  basis a technologist would consider reasonable.  All it requires is
|  that they accept the authority of experts in the subject area, and
|  that those experts agree strongly enough that the mechanism is
|  sound.
| 
| I don't dispute your analysis.  However, this case is not just a legal
| one, it's a political issue, which is why I spoke of editorial page
| writers, bloggers, and talk show hosts.  All it will take is for
| enough technically-skilled conspiracy theorists to raise the issue of
| hash function collisions and NSA, and we won't hear the end of it for
| decades to come.  
I doubt *anything* would eliminate the conspiracy theorists.  Intuitive
cryptography or otherwise, any convincing argument that the records
had *not* been tampered with would require careful examination - and
conspiracy theorists don't carefully examine evidence *against* their
positions.

|   (Did you know that President Kennedy was actually
| killed by a large prime factor discovered by the CIA...?)
Actually, it's well known that aliens controlled both Lee Harvey Oswald
and Jack Ruby - their control over Ruby was slipping, he was about to go
public revealing what he knew, so having Ruby kill Oswald did a great
job of covering up the ongoing invasion.

These aliens presented a take-it-or-leave-it surrender document to
President Truman at Area 51 shortly after WW II.  Kennedy was about to
start an aggressive campaign against them - as, later, was Robert
Kennedy, which is why the aliens arranged his death, too.

-- Jerry :-)

(What was the name of the TV series a number of years back that was
built on this premise?  Not very good, but cleverly done.)

| 
| 
|   --Steve Bellovin, http://www.cs.columbia.edu/~smb
| 
| 



Re: OT: SSL certificate chain problems

2007-02-03 Thread Peter Gutmann
Victor Duchovni [EMAIL PROTECTED] writes:

What I don't understand is how the old (finally expired) root helps to
validate the new unexpired root, when a verifier has the old root and the
server presents the new root in its trust chain.

You use the key in the old root to validate the self-signature in the new
root.  Since they're the same key, you know that the new root supersedes the
expired one.

Peter.



Re: OT: SSL certificate chain problems

2007-02-03 Thread Victor Duchovni
On Wed, Jan 31, 2007 at 01:57:04PM +1300, Peter Gutmann wrote:

 Victor Duchovni [EMAIL PROTECTED] writes:
 
 What I don't understand is how the old (finally expired) root helps to
 validate the new unexpired root, when a verifier has the old root and the
 server presents the new root in its trust chain.
 
 You use the key in the old root to validate the self-signature in the new
 root.  Since they're the same key, you know that the new root supersedes the
 expired one.

So this is a special trick to extend root CA lifetimes. How widely is
this logic implemented, and is extending root CA key lifetime in this
manner standard practice? I may have to revise the Postfix documentation
to advise users to send the root cert.

My most recent experience is ironically in the opposite direction:

Peer finally upgrades from Windows Server 2000 to Windows Server 2003,
and replaces unexpired Verisign CA certs (updated at some point in
the past in the working Windows 2000) with now expired CA certs that
were good way back, when the Windows 2003 CDs were burned :-)

-- 

 /\  ASCII RIBBON   Victor Duchovni       NOTICE: If received in error,
 \ / CAMPAIGN       IT Security,          please destroy and notify
  X  AGAINST        Morgan Stanley        sender. Sender does not waive
 / \ HTML MAIL                            confidentiality or privilege,
                                          and use is prohibited.



RE: Intuitive cryptography that's also practical and secure.

2007-02-03 Thread Anton Stiglic
I am not convinced that we need intuitive cryptography.  
Many things in life are not understood by the general public.
How does a car really work? Most people don't know, but they still drive one.
How does a microwave oven work?

People don't need to understand the details, but the high-level concept
should be simple.  If that is what you are trying to convey, I agree with
you.

I guess we could very well do with some cryptographic simplifications.  Hash
functions are one example.  We have security against arbitrary collisions,
2nd pre-image resistance, preimage resistance.  Most of our hash functions
today don't satisfy all of these properties:  "Oh, SHA1 is vulnerable to
arbitrary collision attacks, but it is still safe against 2nd pre-image
attacks, so don't worry!"
Why do we need all of these properties?  In most cases, we don't.
Mathematical masturbation might be to blame?
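The gap between these properties can at least be made concrete with a toy 16-bit hash (SHA-256 truncated to 2 bytes; an illustration only, not any real construction): a birthday-style collision shows up after roughly 2**8 tries, while inverting a specific output would take about 2**16.

```python
import hashlib

def toy_hash(data: bytes) -> bytes:
    # 16-bit "hash": SHA-256 truncated to 2 bytes (toy, for counting work).
    return hashlib.sha256(data).digest()[:2]

# Birthday search: remember every output; a repeat is a collision.
# With a 16-bit output we expect one after roughly 2**8 = 256 inputs,
# far sooner than the ~2**16 work a preimage search would need.
seen, pair = {}, None
for i in range(1 << 17):
    h = toy_hash(i.to_bytes(4, "big"))
    if h in seen:
        pair = (seen[h], i)  # two distinct inputs, same 16-bit hash
        break
    seen[h] = i
```

This is why collision resistance is always the first property to fall, and why an "only collisions are broken" hash can still be fine for applications that need only preimage resistance.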
Block cipher encryption.  How many modes of operation exist?  Some use a
counter, others need a random, non-predictable IV, others just need a
non-repeating IV.  Do we need all of this?
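The IV requirements are not arbitrary, though. A toy counter-mode sketch shows why a CTR-style mode needs a non-repeating IV (SHA-256 in counter mode stands in for a real block cipher; hypothetical keys and messages):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream; a stand-in for a real block cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 16, b"n" * 8
p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))  # nonce reuse: the sin

# The attacker, without the key, learns p1 XOR p2 from the ciphertexts.
leaked = xor(c1, c2)
```

One repeated IV and the two ciphertexts XOR to the XOR of the plaintexts, which classical cribbing techniques then unravel.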
I often find myself explaining these concepts to non-cryptographers.  I'm
often taken for a crazy mathematician.

What is the length of a private key?  In 1024-bit RSA, your d is about 1024
bits.  But is d your private key, or is it (d,N), in which case there are
more than 1024 bits?  No, N is public, the known modulus, but you need it to
decrypt; you can't just use d by itself.  Oh, in DSA the private key is much
shorter.  You actually also need a random k, which you can think of as part
of your key, but it's just a one-time value.  Are we talking about key
lengths, or modulus lengths, really?
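The point that d is useless without N can be seen with the classic textbook-sized RSA numbers (illustration only; real RSA needs padding and 2048-bit moduli):

```python
# Textbook RSA with toy primes. Decryption is c**d mod N, so the
# "private key" is effectively the pair (d, N), even though N is public.
p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent
d = 2753         # private exponent: (e * d) % lcm(p-1, q-1) == 1

m = 65
c = pow(m, e, n)            # encrypt: m**e mod n
recovered = pow(c, d, n)    # decrypt: needs BOTH d and n
```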

When you encrypt with RSA, you need padding.  With Elgamal, you don't need
any; complicated story.  And don't use just any padding.  You would be
foolish to use PKCS#1 v1.5 padding, everybody knows that, right?  Use OAEP.
It is provably broken, but works like a charm when you encrypt with RSA!

Going back to the million dollar paranormal challenges:  Something like a
Windows SAM file containing the NTLMv2 hash of the passphrase consisting of
the answer might be something to consider?  Not perfect, but...

--Anton




-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Matt Blaze
Sent: January 26, 2007 5:58 PM
To: Cryptography
Subject: Intuitive cryptography that's also practical and secure.

I was surprised to discover that one of James Randi's million dollar
paranormal challenges is protected by a surprisingly weak (dictionary-
based) commitment scheme that is easily reversed and that suffers from
collisions. For details, see my blog entry about it:
http://www.crypto.com/blog/psychic_cryptanalysis/

I had hoped to be able to suggest a better scheme to Randi (e.g., one
based on a published, scrutinized bit commitment protocol).  Unfortunately,
I don't know of any that meets all his requirements, the most important
(aside from security) being that his audience (non-cryptographers
who believe in magic) be able to understand and have confidence in it.

It occurs to me that the lack of secure, practical crypto primitives and
protocols that are intuitively clear to ordinary people may be why
cryptography has had so little impact on an even more important problem
than psychic debunking, namely electronic voting. I think intuitive
cryptography is a very important open problem for our field.

-matt



RE: Private Key Generation from Passwords/phrases

2007-02-03 Thread Anton Stiglic
Bill Stewart wrote:
Salt is designed to address a couple of threats
- Pre-computing password dictionaries for attacking wimpy passwords
...

Yes indeed.  The rainbow-table-style attacks are important to protect
against, and a salt does the trick.  This is why you can find rainbow tables
for LanMan and NTLMv1 hashed passwords, but not for NTLMv2.
This to me is the most important property achieved with a salt, and the salt
doesn't have to be that big to be effective.
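The precomputation point can be sketched in a few lines (SHA-256 stands in for the actual LanMan/NTLM constructions; the dictionary and passwords are hypothetical):

```python
import hashlib, os

dictionary = ["password1", "letmein", "qwerty"]

# The attacker precomputes a table of unsalted hashes once, offline.
table = {hashlib.sha256(w.encode()).hexdigest(): w for w in dictionary}

# Unsalted hash (LanMan/NTLMv1-style): one table lookup cracks it.
unsalted = hashlib.sha256(b"password1").hexdigest()
cracked = table.get(unsalted)

# Salted hash (NTLMv2-style): the precomputed table is useless; the
# attacker would need a separate table for every possible salt value.
salt = os.urandom(8)
salted = hashlib.sha256(salt + b"password1").hexdigest()
missed = table.get(salted)
```

Even a modest salt multiplies the table size by the number of salt values, which is why the salt doesn't need to be huge to kill this attack.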

--Anton






Re: length-extension and Merkle-Damgard hashes

2007-02-03 Thread Amir Herzberg

Travis H. wrote:

So I was reading this:
http://en.wikipedia.org/wiki/Merkle-Damgard

It seems to me the length-extension attack (given one collision, it's
easy to create others) is not the only one, though it's obviously a
big concern to those who rely on it.

This attack thanks to Schneier:

If the ideal hash function is a random mapping, Merkle-Damgard hashes
which don't use a finalization function have the following property:

If h(m0||m1||...mk) = H, then h(m0||m1||...mk||x) = h(H||x) where the
elements of m are the same size as the block size of the hash, and x
is an arbitrary string.  Note that encoding the length at the end
permits an attack for some x, but I think this is difficult or
impossible if the length is prepended.
  
This is the well known `classical attack` against the `naive Merkle's 
construction`, which does NOT use either an IV (for the 1st block), or 
`MD-strengthening` (append the length appropriately, etc.). Which is why 
hash functions use a fixed IV for the first block, or append length or 
otherwise encode the end block, or, most often, do both (as e.g. in 
SHA1, MD5).
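The classical extension property of the naive construction can be seen with a toy Merkle-Damgard iteration (a truncated SHA-256 call stands in for the compression function; this is not any real hash's internals):

```python
import hashlib

BLOCK = 8  # toy block size in bytes

def comp(state: bytes, block: bytes) -> bytes:
    # Toy compression function: 2 blocks in, 1 block out (stand-in only).
    return hashlib.sha256(state + block).digest()[:BLOCK]

def naive_md(msg: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    # Naive Merkle-Damgard: no length padding, no finalization.
    state = iv
    for i in range(0, len(msg), BLOCK):
        state = comp(state, msg[i:i + BLOCK])
    return state

m = b"block--1block--2block--3"   # three 8-byte blocks
x = b"suffix-x"                   # one extra block

# h(m || x) equals comp(h(m), x): anyone who knows h(m) can extend the
# hash without knowing m, which is exactly the quoted property.
extended = comp(naive_md(m), x)
```

MD-strengthening (appending the length) breaks this equality, because the length block of m sits in the middle of m || x rather than at its end.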


Prepending the length [without an IV], btw, is not necessarily a good 
solution.


For example, let's assume, for simplicity, that the length (in blocks) 
is encoded as 20 bits, prepended to the first block; and say the 
input/output block length is L, and for simplicity, assume a compression 
function from 2L to L (so the minimal input length is 2L). Let TWO be 
the 20-bit encoding of the number 2, and let THREE be the 20-bit 
encoding of the number 3.


Repeat until you find a collision (which you will, soon enough...):

- Pick a random (2L-20)-bit string $r \in_R \{0,1\}^{2L-20}$.
- Let $H = h(THREE || r)$.
- If TWO = [first 20 bits of H] then, for every block $x \in \{0,1\}^L$, we
have $h(H||x) = h((THREE||r)||x)$, i.e. a collision.


Best, Amir



RE: OT: SSL certificate chain problems

2007-02-03 Thread Geoffrey Hird

Victor Duchovni wrote:
 On Sun, Jan 28, 2007 at 12:47:18PM -0500, Thor Lancelot Simon wrote:

  That doesn't make sense to me -- the end-of-chain (server or client)
  certificate won't be signed by _both_ the old and new root, I wouldn't
  think (does x.509 even make this possible)?
 
  Or do I misunderstand?
 
 The key extra information is that old and new roots share the same issuer
 and subject DNs and public key, only the start/expiration dates differ,
 so in the overlap when both are valid, they are interchangeable, both
 verify the same (singly-signed) certs.

To expand on what Duchovni said, you might want to look into
the concept of cross-certificates (which are heavily used with
bridges).  The surprising thing, at first, is that you can re-issue
any certificate after it was originally issued.  I can re-issue
the leaf cert you got from Verisign last year.  Tomorrow, I
could create my own SS Root CA, and issue a cert for the
Verisign Intermediate CA, by putting myself as the Issuer,
the Verisign Intermediate CA as the Subject, and putting the
Verisign ICA public key in it.  Your leaf cert will now chain
happily up to either the Verisign SS Root, or my new SS Root.
So this is not just a thing that works for renewing self-signed
roots.
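The chain-building described above can be modeled structurally (certificates as plain dicts, issuer/subject names and keys only; no real signatures are created or checked, and all names are hypothetical):

```python
# Structural sketch of cross-certification: a leaf chains to whichever
# root's pool contains an ICA cert with the matching subject and key.
verisign_root = {"subject": "Verisign Root", "issuer": "Verisign Root", "key": "vr-key"}
verisign_ica  = {"subject": "Verisign ICA",  "issuer": "Verisign Root", "key": "ica-key"}
leaf          = {"subject": "example.com",   "issuer": "Verisign ICA",  "key": "leaf-key"}

# A new self-signed root cross-certifies the same ICA subject and public
# key under a different issuer name.
my_root  = {"subject": "My SS Root", "issuer": "My SS Root", "key": "my-key"}
my_cross = {"subject": "Verisign ICA", "issuer": "My SS Root", "key": "ica-key"}

def build_chain(cert, pool):
    """Walk issuer -> subject links until a self-issued root is reached.
    (A real validator would also verify each signature with the parent key.)"""
    chain = [cert]
    while chain[-1]["issuer"] != chain[-1]["subject"]:
        parent = next(c for c in pool if c["subject"] == chain[-1]["issuer"])
        chain.append(parent)
    return [c["subject"] for c in chain]

chain_a = build_chain(leaf, [verisign_ica, verisign_root])
chain_b = build_chain(leaf, [my_cross, my_root])
```

Because the cross-certificate carries the same ICA public key, the leaf's signature verifies identically in both chains; only the anchor of trust differs.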

 What I don't understand is how
 the old (finally expired) root helps to validate the new unexpired root,
 when a verifier has the old root and the server presents the new root
 in its trust chain.

I shouldn't speak for Gutmann, but I assumed that he meant
that the server should send the new root *before* the old root
expires, so that the client can prepare in advance for the expiry.

As an aside, there are some funny issues around having a
signature done before the signer cert expired, but deciding
*after* the cert has expired, whether to trust it.  It was
ok yesterday, but maybe it's not ok today -- what has changed...?

Geoffrey



deriving multiple keys from one passphrase

2007-02-03 Thread Travis H.
Hey, quick question.

If one wants to have multiple keys, but for ease-of-use considerations
wants to have the user enter only one, is there a preferred way to
derive multiple keys that, while not independent, are computationally
independent?

I was thinking of hashing the passphrase with a unique string for each
one; is this sufficient?  If sufficient, is a cryptographically strong
hash necessary?
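One common construction, sketched under the assumption that a cryptographically strong hash is indeed wanted: stretch the passphrase once with a slow KDF, then derive each subkey by keying HMAC with a distinct label. Hashing with a unique string per key is essentially this; using a keyed strong hash is the conservative choice. (The iteration count and labels below are illustrative.)

```python
import hashlib, hmac, os

def derive_keys(passphrase, salt, labels):
    # Stretch the passphrase once (PBKDF2; count is illustrative only).
    master = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    # One subkey per label: distinct HMAC outputs are computationally
    # independent even though they share the same master secret.
    return {lab: hmac.new(master, lab.encode(), hashlib.sha256).digest()
            for lab in labels}

salt = os.urandom(16)
keys = derive_keys("correct horse battery staple", salt, ["encrypt", "mac"])
```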

I got a clarification about the "use CRCs to process the passphrase" idea
someone mentioned.  The salient bit is that he was using several CRCs
(not sure if they're random or carefully chosen), and each one is run on
the passphrase, and the output of all of them is concatenated to
initialize a PRNG seed.  The passphrase and seed are both secret, so
according to him there's no need to use a cryptographically strong
hash, and CRCs have a well-understood mathematical basis.

I presume this would be insufficient for deriving independent keys,
but perhaps there is a way to do that with careful selection of the
CRC polys?

-- 
The driving force behind innovation is sublimation.
-- URL:http://www.subspacefield.org/~travis/
For a good time on my UBE blacklist, email [EMAIL PROTECTED]




convenience vs risk -- US public elections by email and beyond

2007-02-03 Thread Ed Gerck
The social aspects of ease-of-use versus security are well-known.
People would rather use something that works than something that
is secure but hard to use. Ease-of-use trumps risks.

What is less recognized, even though it seems intuitive, is that
convenience (even though costlier and harder to use) can also make
people ignore risks. Convenience trumps ease-of-use, which trumps
risks.

For example, people will often send a cell phone text message
that requires dozens of button-clicks, costs money, and is less
secure (US Rep. Mark Foley case)... than make a one-click, free
phone call. We all use regular email even though it is totally
insecure -- because it's convenient.

Convenience has a lot to do with personal comfort. It is often
more comfortable to send a text message or email than call and
actually speak with the person.

That you can do it on your own time, or save time, is a very
important component of personal comfort. A convenience store,
for example, sells items that save the consumer a stop or
separate trip to the grocery store.

What happens when convenience is ignored? If convenient ways are
not available?

Let me note that opposition to any type of e-voting has led to
public elections in the US being carried out via regular email
in 2006.

It may be hard to imagine why opposition to e-voting would in any
way make adoption of email voting more likely.

It happens because voting is useful and voters want to vote.
Therefore, voters will find ways that are not safe but convenient
and available ...if more convenient and safe ways are blocked.

We already discovered that for the system to be usable is more
important than any security promises that might be made. Security
innovation has often improved usability -- for example, even though
public-key cryptography is hard to use by end-users, it represented
a major usability improvement for IT administrators. Usable
security is a major area of innovation today.

We are discovering that convenience is an even stronger force to
bring about innovation.

How about paper voting? It does not prevent large-scale fraud, which
has accompanied paper elections for over a century, and it is not
convenient: it lacks personal comfort and personal use of time. Lack
of convenience (not lack of security) will, eventually, kill paper
voting.

Regarding voting, our future is pretty obvious. Online voting
will be mainstream, and is already here in the public and private
sectors. But, to be secure, it should not happen with regular
email, e-commerce web sites, or current "trust me" e-voting machines
(DRE).

The socially responsible thing to do regarding voting is, thus, to
develop online voting so that it is secure _and_ easy to use. It
already has the top quality that paper voting and e-voting machines
(DRE) cannot have: convenience.

But the real-world voting security problem is very hard. Voting is an
open-loop process with an intrinsic vote gap, such that no one may
know for sure what the vote cast actually was -- unless one is willing
to sacrifice the privacy of the vote.

A solution [1], however, exists, where one can fully preserve privacy
and security if a small (as small as you need) margin of error is
accepted. Because the margin of error can be made as small as
one needs and is willing to pay for, it is not really relevant -- even
when all operational procedures and flaws, including fraud and bugs,
are taken into account.

The solution is technologically neutral but has a better chance of
success, and lower cost, with online voting -- which just adds to the
winning hand for online voting, led by convenience.

I would like to invite your comments on this, to help build the trust
and integrity that our election system needs -- together with the
convenience that voters want. Personal replies are welcome. I am
thinking of opening a blog for such dialogue. Moderators are welcome
too.

Best,
Ed Gerck

[1] Based on a general, information-theory model of voting that applies
to any technology, first presented in 2001. See
http://safevote.com/doc/VotingSystems_FromArtToScience.pdf
It provides any desired number of independent records, which are readily
available for review by observers, without ever linking voters to
ballots.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: man in the middle, SSL

2007-02-03 Thread Ivan Krstić
James Muir wrote:
 It is my understanding that SSL is engineered to resist mitm attacks, so
 I am suspicious of these claims.  I wondered if someone more familiar
 with SSL/TLS could comment.
 Isn't it the case that the application doing SSL on the client should
 detect what this proxy server is doing and display a warning to the user?

There's nothing new or interesting about this; SSL MITM tools have been
around for a long time. When you're connecting to a website via SSL, you
have no out of band knowledge of the certificate that the server is
supposed to use (e.g. you can't query DNS and get the certificate
fingerprint). SSL clients generally do three checks on the server cert:
they verify it's still valid on today's date, that the name in the cert
matches the server you're connecting to, and that you trust the CA that
issued the cert.
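As a rough sketch of those three checks (a hypothetical illustration, not from the original post): Python's stdlib ssl module bundles all three into its default context, and the handshake fails if any one of them does.

```python
import ssl

def make_strict_context():
    # Enforces the three checks described above:
    #   1. the certificate is valid on today's date,
    #   2. the name in the cert matches the server being connected to,
    #   3. the cert chains to a trusted CA.
    ctx = ssl.create_default_context()   # loads the platform trust store
    ctx.check_hostname = True            # check 2
    ctx.verify_mode = ssl.CERT_REQUIRED  # checks 1 and 3 during handshake
    return ctx

# Usage: ctx.wrap_socket(sock, server_hostname="www.example.com")
# raises ssl.SSLCertVerificationError if any of the checks fails.
```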

An SSL MITM proxy can trivially satisfy two of those three checks. If an
attacker had sufficiently strong incentive and a specific target site,
presumably he could satisfy the third as well (get a trusted CA to
sign a bogus cert for the server in question -- remember Microsoft from
a few years back).
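One way to recover the missing out-of-band knowledge is certificate pinning: compare the served certificate's fingerprint against a value learned through a trusted channel. A minimal, hypothetical sketch (the expected fingerprint would have to come from out of band):

```python
import hashlib

def cert_fingerprint_ok(der_cert: bytes, expected_sha256_hex: str) -> bool:
    # A MITM proxy's substitute certificate fails this check even when a
    # trusted CA signed it, because its fingerprint differs from the pin.
    actual = hashlib.sha256(der_cert).hexdigest()
    return actual == expected_sha256_hex.lower().replace(":", "")
```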

So yes, in the general case, the web browser will notice the MITM, and
inform the user that two checks pass and one fails. And almost all users
will hit continue and not care, because they don't understand SSL or
the risks involved. They shouldn't have to, either; it's for this reason
that I think SSL is just altogether broken in the way we use it on the
web. It passes the technical requirements, but utterly fails at being a
usable security technology.

-- 
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D



Re: man in the middle, SSL

2007-02-03 Thread Erik Tews
On Friday, 2007-02-02 at 16:15 -0500, James Muir wrote:
  You can find more and download Odysseus here:
  
  http://www.bindshell.net/tools/odysseus
 
 It is my understanding that SSL is engineered to resist mitm attacks,
 so 
 I am suspicious of these claims.  I wondered if someone more familiar 
 with SSL/TLS could comment.
 
 Isn't it the case that the application doing SSL on the client should
 detect what this proxy server is doing and display a warning to the user?

An unmodified SSL/TLS client should display a warning message that the
server certificate is invalid, or something similar. So this is not a
valid man-in-the-middle attack against SSL/TLS.

Perhaps you are going to use this tool for debugging purposes. If so, you
can generate a certificate with a private key. The certificate is
installed in your SSL/TLS client as a trusted certification authority,
and the certificate and the private key are then used by Odysseus to make
these warning messages go away.


signature.asc
Description: This is a digitally signed message part


Re: man in the middle, SSL

2007-02-03 Thread Anne Lynn Wheeler

James Muir wrote:
It is my understanding that SSL is engineered to resist mitm attacks, so 
I am suspicious of these claims.  I wondered if someone more familiar 
with SSL/TLS could comment.


Isn't it the case that the application doing SSL on the client should
detect what this proxy server is doing and display a warning to the user?


My oft-repeated comment about when we were asked to consult with this small
client/server startup that wanted to do payments on servers ... and had this
technology called SSL ...
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

The browser was to check that what the person typed in ... matched the domain
name in the digital certificate that the server provided (that the server the
client thought they were talking to was the server they were actually talking to).
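That browser name check can be sketched roughly like this (a simplified, hypothetical illustration; real clients follow the RFC rules for wildcards and subjectAltName entries):

```python
def hostname_matches(cert_name: str, typed_host: str) -> bool:
    # Simplified version of the browser's check: exact match, or a
    # single-label wildcard such as *.example.com matching shop.example.com.
    cert, host = cert_name.lower(), typed_host.lower()
    if cert.startswith("*."):
        return "." in host and host.split(".", 1)[1] == cert[2:]
    return cert == host
```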

There were some other ancillary things that we were interested in ... that the
digital certificate actually represented something more ... i.e. that it was
issued by the acquiring financial institution that financially stood behind the
merchant ... since the merchant was already paying a lot of money to cover doing
business. However, that never happened ... so the digital certificate just
represents that it belongs to the owner of the domain. This issue is somewhat
touched on in this blog posting
http://www.garlic.com/~lynn/aadsm26.htm#25 EV - what was the reason, again?
in this blog
https://financialcryptography.com/mt/archives/000863.html

However, early on, merchant webservers found that doing SSL for the whole
shopping experience cut their thruput by something like 80-90 percent ... so the
industry fairly quickly switched to using SSL just for the payment/checkout
portion, when you click on their button. Now the URL is being provided by the
server (button) ... not by the client. As a result, the effect is no longer that
the client is talking to the server that the client thinks they are talking to
... since the server is supplying both the URL and the digital certificate ...
the result is that the server is the server that the server claims to be (unless
it is a really dumb crook/attacker).

It isn't sufficient for there to be SSL certificates as a countermeasure to
MITM-attacks; the security process has to include that the server is validated
against something the client supplies ... not that the server is validated
against something the server supplies (i.e. I can prove that I am whoever I
claim to be ... not that I can prove that I am who you think I am).

This is also behind a lot of the phishing stuff ... the attacker can claim to
be something ... and provide a field/button for you to click on ... the SSL
certificate then just proves that the server matches the URL provided by the
field/button. Since the attacker-supplied field/button is producing the URL ...
and not the client ... a MITM-attack can take advantage of the opening/crack
between what is claimed for the button and what the URL actually is ... since
only the URL is being validated by the SSL certificate ... not what the client
thinks is claimed for the field/button. Some more comments in these posts:

http://www.garlic.com/~lynn/2007c.html#3 New Universal Man-in-the-Middle Phishing Kit?
http://www.garlic.com/~lynn/2007c.html#32 Securing financial transactions a high priority for 2007

lots of past posts mentioning MITM-attacks
http://www.garlic.com/~lynn/subintegrity.html#mitm

i.e. you have to understand the end-to-end business process (of security) ...
where all the cracks are ... and just which (of a possibly large number of)
MITM vulnerabilities you have specifically created a countermeasure for.

so one of the things that we did as part of early deployment (of this stuff
that has since come to be called electronic commerce) was go around and do some
detailed end-to-end audits of these emerging operations that were calling
themselves certification authorities and producing these things that were being
called SSL domain name digital certificates. At the time, we coined the term
certificate manufacturing to try and differentiate what was happening
http://www.garlic.com/~lynn/subpubkey.html#manufacture
from what was in the literature about public key infrastructure ... for a
little topic drift ... a proposal from 1981 for a (small i) infrastructure:
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network

Part of the audits was figuring out just what it was they were doing as part of
the process that they were calling certification ... as the business process
supporting the technology that produced the actual digital certificates (i.e. a
credential/certificate that is a stand-in representation of that certification
they were performing). This gave rise to a lot of comments/observations about
if the domain name infrastructure actually provided a direct, higher integrity
operation ... it would obsolete any

Re: man in the middle, SSL

2007-02-03 Thread Scott G Kelly
James Muir wrote:
 I was reading a hacking blog today and came across this:
 
 http://www.darknet.org.uk/2007/02/odysseus-win32-proxy-telemachus-http-transaction-analysis/
 
 
 Odysseus is a proxy server, which acts as a man-in-the-middle during
 an HTTP session. A typical HTTP proxy will relay packets to and from
 a client browser and a web server. Odysseus will intercept an HTTP
 session’s data in either direction and give the user the ability to
 alter the data before transmission.

 For example, during a normal HTTP SSL connection a typical proxy will
 relay the session between the server and the client and allow the two
 end nodes to negotiate SSL. In contrast, when in intercept mode,
 Odysseus will pretend to be the server and negotiate two SSL
 sessions, one with the client browser and another with the web
 server.

 As data is transmitted between the two nodes, Odysseus decrypts the
 data and gives the user the ability to alter and/or log the data in
 clear text before transmission.

 You can find more and download Odysseus here:

 http://www.bindshell.net/tools/odysseus
 
 It is my understanding that SSL is engineered to resist mitm attacks, so
 I am suspicious of these claims.  I wondered if someone more familiar
 with SSL/TLS could comment.
 
 Isn't it the case that the application doing SSL on the client should
 detect what this proxy server is doing and display a warning to the user?

If the user's browser is configured to accept a CA cert for which the
proxy holds the signing key, then the proxy can generate a (bogus) cert
for the destination site on the fly, and this will be transparent to the
user.
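The configuration Scott describes can be made concrete with a toy model of the client's acceptance logic (hypothetical field names; real clients parse X.509 structures):

```python
import datetime

def client_accepts(cert: dict, trusted_cas: set, requested_host: str,
                   today: datetime.date) -> bool:
    # All three standard checks pass for an on-the-fly certificate as long
    # as the proxy's CA has been added to the client's trust store.
    return (cert["not_before"] <= today <= cert["not_after"]
            and cert["subject"] == requested_host
            and cert["issuer"] in trusted_cas)
```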

Scott



Re: man in the middle, SSL ... addenda

2007-02-03 Thread Anne Lynn Wheeler

re:
http://www.garlic.com/~lynn/aadsm26.htm#26 man in the middle, SSL

basically digital certificates were designed as the electronic equivalent for offline distribution of information ... a paradigm left over from the letters of credit and letters of introduction of the sailing ship days (and earlier). As things moved into the online age ... certification authorities and digital certificates moved into the generic low-value/no-value market segment.


this is the difference between a generic employee badge for door entry ... that
is identical for all employees ... vis-a-vis doing stuff specific and tailored
to each employee.

this is somewhat the x.509 identity certificate example mentioned in the original post ... from the early 90s ... overloaded with personal information and a paradigm that promoted repeatedly spraying the identity certificates all over the world. By the mid-90s, it was starting to dawn that such a paradigm wasn't such a good idea ... and there was retrenchment to relying-party-only certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo


which basically only contained a public key and some sort of record locator
(which contains the real information). However, in the payment sector ... even
these truncated relying-party-only certificates still represented enormous
payload and processing bloat

especially when it was trivial to demonstrate that you could have the public key
along with all the other necessary information in the designated record ... and
that the digital certificate was redundant and superfluous. This is also
somewhat the scenario raised in the domain name infrastructure for on-file
public keys ... creating a significant catch-22 for the SSL domain name
certification authority industry

... just upgrade the existing domain name infrastructure with on-file public
keys (a requirement also suggested by the SSL domain name certification
authority industry) ... but that can quickly result in a certificate-free,
public key infrastructure
http://www.garlic.com/~lynn/subpubkey.html#certless
... also the reference from 1981
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network

i.e. for the most part now ... SSL is just being used to prove that you have
some valid domain name ... and that the domain name you claim is the domain
name you have ... this is somewhat equivalent to the low-value door badge
readers simply checking whether you are some valid employee ... without regard
to any higher value scenario requiring specific detail about which valid
employee.

so one of the points i repeatedly raise is that while digital certificates (as
representations of some certification) are part of an offline paradigm
construct ... and in the migration of the world to an online environment ...
digital certificates have attempted to find a place in the no-value/low-value
market niches ... as soon as there is some online component (like a record
locator) ... it then becomes trivial to show that such digital certificates
become redundant and superfluous.

so the SSL domain name infrastructure was originally primarily to address what
came to be called electronic commerce (and still may be the primary use) ... for:

1) is the browser actually talking to the webserver that the person thinks it
is talking to
and
2) hide (encrypt) the account number during transmission over the internet.

there have been some number of technical implementation vulnerabilities with
respect to SSL and things like MITM-attacks ... but the big business process
issue was that the deployment fairly early changed from is the browser actually
talking to the webserver the person thinks it is talking to ... to the browser
is talking to the webserver that the webserver claims to be (since the same
webserver was supplying both the URL and the digital certificate confirming the
webserver-supplied URL).

The second feature of SSL (encrypting to hide transmitted account numbers) was
somewhat to put transactions flying over the anarchy of the world-wide Internet
... on a level playing field with the transactions that flew over dedicated
telephone wires. However, the major vulnerability during that period ... and
continuing today ... wasn't eavesdropping on the transaction during public
transmission ... but vulnerabilities at the end-points ... which SSL does
nothing to address. The end-point webservers somewhat increased vulnerabilities
(compared to non-internet implementations) since a lot of the transaction logs
became exposed to attacks from the internet. This matter is slightly debatable
since long term studies ... continuing up thru at least recently ... show that
seventy percent of the resulting fraudulent transactions involve some sort of
insider.

So 1) the resulting major deployments of SSL negating much of the original
countermeasure against MITM-attacks (related to integrity issues in

Re: man in the middle, SSL

2007-02-03 Thread Ivan Krstić
[I prefer to keep discussions on-list where possible. CCing the list.]

Beryllium Sphere LLC wrote:
 Bruce Schneier pointed out years ago that it's trivial for a virus
 or Trojan to add a new trusted CA to the browser's list of trusted
 roots. At least one advertising support web accelerator installs
 itself in the browser configuration as a peer of Verisign and can
 then proxy SSL without any warning to the user.

Right. I was talking about the kind of MITM where an attacker is
physically between your machine and the SSL destination, such as sitting
on your network's egress. MOYM (man on your machine) attacks are a bit
of a lost cause with most modern OS environments, though I've been
working pretty hard to try and change that on the One Laptop per Child
machines.

-- 
Ivan Krstić [EMAIL PROTECTED] | GPG: 0x147C722D
