Re: Extracting uniform randomness from noisy source

2002-08-11 Thread John Kelsey

At 11:09 PM 8/7/02, David Wagner wrote:
>John Kelsey  wrote:
>>a.  If my input samples have enough entropy to make my outputs random, then
>>I need to resist computationally unbounded attackers.  (Otherwise, why
>>bother with distilling entropy; just use a PRNG.)
>>
>>b.  If my input samples are within some much more forgiving bound, then I
>>need to resist computationally-bounded attackers.  
>>
>>c.  Attackers with control over some part of my inputs mustn't be able to
>>cancel out or block entropy in the other parts of the inputs.  
>
>I agree these would be great properties to have.  Sadly, I don't know
>of any construction that plausibly achieves all three, in theory or
>in practice.

Hmmm.  I don't see that they're really unattainable, though they may not be
possible to prove secure without unreasonable assumptions.  

Consider using SHA1 to hash each input string, but then outputting only 80
bits of the output.

Now, since we're only outputting half of the 160 output bits, we don't have
to worry about the fixed-suffix attack I pointed out earlier.  In fact, we
have a nice, 160-bit-wide pipe all the way to the end, so any internal
collision problems basically just go away.  I think this meets all three
criteria.  (But the output ought never be used directly in a way an
attacker can see; it should be shielded by some longer-term cryptographic
secret, so that it retains computational security even if occasional input
strings are totally known to the attacker.)  

Specifically:

a.  With far more than 2^{80} equally likely inputs, we expect to get very
nearly uniform output distribution from this scheme.  

b.  With exactly 2^{80} equally likely inputs, we expect a fair number of
birthday collisions, but the total entropy lost is still less than a bit.

c.  With fewer than 2^{80} equally likely inputs, we expect fewer collisions
and correspondingly less entropy loss, but we may be susceptible to
brute-force search of the possible input strings to distinguish our output
from a random 80-bit output.  
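Concretely, the extractor described above can be sketched in a few lines
(a minimal illustration; the function name is mine):

```python
import hashlib

def distill(sample: bytes) -> bytes:
    """Hash the input sample with SHA-1 and keep only 80 of the
    160 output bits, as described above."""
    digest = hashlib.sha1(sample).digest()  # 20 bytes = 160 bits
    return digest[:10]                      # keep 80 bits (10 bytes)
```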

...
>If we have to give up one, the one I'd be most willing to give up is
>property a.  After all, we almost never have enough entropy, and we almost
>always take the output of the PRNG and just use it in some cryptosystem
>that is insecure against computationally unbounded attacks anyway.

I mostly agree with this.  It's hard to convince yourself you have enough
entropy against the most powerful possible attackers, anyway.  But if
someone is claiming to provide you random noise from your system or
something, it sure seems like its strength ought not to be based on the
cryptographic strength of SHA1.  Otherwise, why not just use SHA1 in one of
the obvious ways to generate PRNG outputs once you've gotten a
hopefully-secure seed?  Or AES, for that matter?  

Your really big assumptions about SHA1 (or AES-CBC-MAC with an all-zero
key, or a 32-bit CRC, or whatever else may be used here) involve how well
they distill entropy.  This basically requires that you assume that the set
of input strings that occurs isn't pathologically bad with respect to
causing collisions in SHA1, or the 80 bits of SHA1 you actually output, or
whatever.  (It's not possible to choose any function that never has this
happen, but it's pretty easy to choose functions that almost never have it
happen, assuming randomly-selected input strings.  But, of course, the
input strings we'll get in a real-world system aren't randomly selected,
they're just selected in a way that's independent of the structure of the
entropy distillation function.  For example, for the overwhelming majority
of sets of 2^{128} random 1024-bit strings, simply XORing the eight 128-bit
blocks of each string together will successfully distill the entropy from
the strings; however, such a flat XOR folding would be a bad idea with the
kinds of input we expect in real-world systems.)  
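For reference, the flat XOR folding dismissed in the parenthetical looks
like this (a hypothetical helper; fine on random inputs, bad on the
structured samples real systems produce):

```python
def xor_fold(s: bytes, width: int = 16) -> bytes:
    """XOR consecutive `width`-byte (128-bit) blocks of `s` together,
    folding e.g. a 1024-bit string down to 128 bits."""
    assert len(s) % width == 0
    out = bytearray(width)
    for i in range(0, len(s), width):
        for j in range(width):
            out[j] ^= s[i + j]
    return bytes(out)
```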

>Think of a. like asking for your encryption scheme to be information
>theoretically secure.  Sure, if you can afford such an encryption scheme,
>that's great.  But in practice the one-time pad is too expensive, so we
>gladly settle for mere security against computationally bounded attacks.
>I think PRNGs are similar.

Right.  But it's important to specify which you're trying to accomplish.
Distilling entropy and outputting it in some form is supposed to be
information-theoretically secure, right?  You want it to fail gracefully
into computationally secure if your input entropy estimates are messed up,
but that's not the original goal.  There are really different assumptions
and requirements for the two different goals.

For example, in a system whose only goal is to be computationally-secure,
we can distill entropy by computing a 128-bit CRC over our sample string,
using that result as an AES key, and running AES in counter mode.  We can
output trillions of bits per string, even if we have only a set of 2^{80}
equally-likely input strings.  This is computationally secure given a set
of assumptions about the interaction of the CRC with our input string set,
which 
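A structural sketch of that computationally-secure scheme, using only the
standard library: zlib.crc32 (chained four times) stands in for the 128-bit
CRC, and SHA-256 in counter mode stands in for AES-CTR.  Both substitutions
are mine, purely to keep the sketch self-contained; the shape of the
construction (non-cryptographic checksum as key, counter-mode keystream as
output) is the point.

```python
import hashlib
import zlib

def derive_key(sample: bytes) -> bytes:
    # Stand-in for the 128-bit CRC over the sample string:
    # four chained CRC-32s, concatenated (illustrative only).
    parts, crc = [], 0
    for _ in range(4):
        crc = zlib.crc32(sample, crc)
        parts.append(crc.to_bytes(4, "big"))
    return b"".join(parts)  # 16 bytes = 128 bits

def counter_mode_stream(key: bytes, nbytes: int) -> bytes:
    # Counter-mode keystream; SHA-256 stands in for AES here.
    out, ctr = bytearray(), 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + ctr.to_bytes(16, "big")).digest()
        ctr += 1
    return bytes(out[:nbytes])
```

The prefix property of counter mode means you can keep pulling output from
the same key: `counter_mode_stream(k, 32)` is a prefix of
`counter_mode_stream(k, 1000)`.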

Re: Thanks, Lucky, for helping to kill gnutella

2002-08-11 Thread Sean Smith


i guess it's appropriate that the world's deepest
hole is next to something labelled a "trust territory" :)

--Sean

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Thanks, Lucky, for helping to kill gnutella

2002-08-11 Thread R. A. Hettinga

I'm genuinely sorry, but I couldn't resist this...

At 12:35 PM -0400 on 8/11/02, Sean Smith wrote:


> Actually, our group at Dartmouth has an NSF "Trusted Computing"
> grant to do this, using the IBM 4758 (probably with a different
> OS) as the hardware.
>
> We've been calling the project "Marianas", since it involves a
> chain of islands.

...and not the world's deepest hole, sitting right next door?

;-)

Cheers,
RAH



> --Sean
>
>>If only there were a technology in which clients could verify and
>>yes, even trust, each other remotely.  Some way in which a digital
>>certificate on a program could actually be verified, perhaps by
>>some kind of remote, trusted hardware device.  This way you could
>>know that a remote system was actually running a well-behaved
>>client before admitting it to the net. This would protect Gnutella
>>from not only the kind of opportunistic misbehavior seen today, but
>>the future floods, attacks and DOSing which will be launched in
>>earnest once the content companies get serious about taking this
>>network down.


-- 
-
R. A. Hettinga 
The Internet Bearer Underwriting Corporation 
44 Farquhar Street, Boston, MA 02131 USA
"... however it may deserve respect for its usefulness and antiquity,
[predicting the end of the world] has not been found agreeable to
experience." -- Edward Gibbon, 'Decline and Fall of the Roman Empire'




Re: Seth on TCPA at Defcon/Usenix

2002-08-11 Thread John Gilmore

> It reminds me of an even better way for a word processor company to make
> money: just scramble all your documents, then demand ONE MILLION DOLLARS
> for the keys to decrypt them.  The money must be sent to a numbered
> Swiss account, and the software checks with a server to find out when
> the money has arrived.  Some of the proposals for what companies will
> do with Palladium seem about as plausible as this one.

Isn't this how Windows XP and Office XP work?  They let you set up the
system and fill it with your data for a while -- then lock up and
won't let you access your locally stored data, until you put the
computer on the Internet and "register" it with Microsoft.  They
charge less than a million dollars to unhand your data, but otherwise
it looks to me like a very similar scheme.

There's a first-person report about how Office XP made the computers
donated for the 9/11 missing persons database useless after several
days of data entry -- so the data was abandoned, and re-entered into a
previous (non-DRM) Microsoft word processor.  The report came through
this very mailing list.  See:

  http://www.mail-archive.com/cryptography@wasabisystems.com/msg02134.html

This scenario of word processor vendors denying people access to their
own documents until they do something to benefit the vendor is not
just "plausible" -- it's happening here and now.

John








Re: TCPA/Palladium -- likely future implications (Re: dangers ofTCPA/palladium)

2002-08-11 Thread Peter Fairbrother

Adam Back wrote:
[...]
> - It is always the case that targetted people can have hardware
> attacks perpetrated against them.  (Keyboard sniffers placed during
> court authorised break-in as FBI has used in mob case of PGP using
> Mafiosa [1]).

[...]

> [1] "FBI Bugs Keyboard of PGP-Using Alleged Mafioso", 6 Dec 2000,
> slashdot

That was a software keylogger (actually two software keyloggers), not
hardware. 

(IMO Scarfo's lawyers should never have dealt, assuming the evidence was
necessary for a conviction, but the FBI statement about the techniques used
was probably too obfuscated for them - it took me a good week to understand
it. I emailed them, but got no reply.

Incidentally, Nicky Scarfo used his father's prison number for the password,
so a well-researched directed dictionary attack would have worked anyway.)


The FBI reputedly can (usually, on Windows boxen) now install similar
software keyloggers remotely, without needing to break in.


-- Peter Fairbrother





Re: Thanks, Lucky, for helping to kill gnutella

2002-08-11 Thread Sean Smith


Actually, our group at Dartmouth has an NSF "Trusted Computing"
grant to do this, using the IBM 4758 (probably with a different
OS) as the hardware.   

We've been calling the project "Marianas", since it involves a chain of
islands.

--Sean

>If only there were a technology in which clients could verify and yes,
>even trust, each other remotely.  Some way in which a digital certificate
>on a program could actually be verified, perhaps by some kind of remote,
>trusted hardware device.  This way you could know that a remote system was
>actually running a well-behaved client before admitting it to the net.
>This would protect Gnutella from not only the kind of opportunistic
>misbehavior seen today, but the future floods, attacks and DOSing which
>will be launched in earnest once the content companies get serious about
>taking this network down.

-- 
Sean W. Smith, Ph.D. [EMAIL PROTECTED]   
http://www.cs.dartmouth.edu/~sws/   (has ssl link to pgp key)
Department of Computer Science, Dartmouth College, Hanover NH USA







Re: dangers of TCPA/palladium

2002-08-11 Thread Ben Laurie

AARG!Anonymous wrote:
> Adam Back writes:
> 
> 
>>- Palladium is a proposed OS feature-set based on the TCPA hardware
>>(Microsoft)
> 
> 
> Actually there seem to be some hardware differences between TCPA and
> Palladium.  TCPA relies on a TPM, while Palladium uses some kind of
> new CPU mode.  Palladium also includes some secure memory, a concept
> which does not exist in TCPA.

This is correct. Palladium has "ring -1", and memory that is only 
accessible to ring -1 (or I/O initiated by ring -1).

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff





Re: md5 for bootstrap checksum of md5 implementations? (Re: [ANNOUNCE] OpenSSL 0.9.6f released)

2002-08-11 Thread Roy M . Silvernail

On Friday 09 August 2002 12:23 pm, Barney Wolff <[EMAIL PROTECTED]> wrote:

> Does anybody offer a public MD5 web service?  Though if your omnipotent
> attacker sits between you and the world, this does no good.

For the hell of it, I knocked together this:

http://www.scytale.com/cgi-bin/md5.cgi

Comments welcome.
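(The actual md5.cgi source isn't shown; something in this spirit would do
the job.  The function name and the CGI framing are my guesses at the idea,
not the real script.)

```python
#!/usr/bin/env python
import hashlib

def md5_hex(data: bytes) -> str:
    """Return the hex MD5 digest of the submitted data."""
    return hashlib.md5(data).hexdigest()

# A CGI wrapper would read the request body and emit a text/plain
# response, e.g.:
#   print("Content-Type: text/plain\r\n")
#   print(md5_hex(sys.stdin.buffer.read()))
```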
-- 
Roy M. Silvernail [ ] [EMAIL PROTECTED]
DNRC Minister Plenipotentiary of All Things Confusing, Software Division
PGP Key 0x1AF39331 :  71D5 2EA2 4C27 D569  D96B BD40 D926 C05E
 Key available from [EMAIL PROTECTED]
I charge to process unsolicited commercial email




Re: Thanks, Lucky, for helping to kill gnutella

2002-08-11 Thread Paul Crowley

AARG!Anonymous <[EMAIL PROTECTED]> writes:

> Be sure and send a note to the Gnutella people reminding them of all
> you're doing for them, okay, Lucky?

Do the Gnutella people share your feelings on this matter?  I'd be
surprised.
-- 
  __  Paul Crowley
\/ o\ [EMAIL PROTECTED]
/\__/ http://www.ciphergoth.org/
