Cryptography-Digest Digest #161

2001-04-16 Thread Digestifier

Cryptography-Digest Digest #161, Volume #14  Mon, 16 Apr 01 17:13:01 EDT

Contents:
  Re: NSA is funding stegano detection (Mok-Kong Shen)
  Re: Why Not OTP ? (Mok-Kong Shen)
  Re: There Is No Unbreakable Crypto (David Wagner)
  Re: Note on combining PRNGs with the method of Wichmann and Hill ("Brian Gladman")
  Re: AES poll ("Jack Lindso")
  Re: MS OSs "swap" file:  total breach of computer security. (wtshaw)
  Re: MS OSs "swap" file:  total breach of computer security. ("Christian Bohn")
  Re: LFSR Security ("Trevor L. Jackson, III")
  Re: LFSR Security ("Trevor L. Jackson, III")
  Re: MS OSs "swap" file:  total breach of computer security. ("Tom St Denis")
  Re: LFSR Security (Ian Goldberg)
  Re: Note on combining PRNGs with the method of Wichmann and Hill (Mok-Kong Shen)
  Re: LFSR Security ("Trevor L. Jackson, III")
  Re: LFSR Security ("Trevor L. Jackson, III")
  Re: AES poll ("Trevor L. Jackson, III")



From: Mok-Kong Shen [EMAIL PROTECTED]
Crossposted-To: comp.security.misc,talk.politics.crypto
Subject: Re: NSA is funding stegano detection
Date: Mon, 16 Apr 2001 21:21:51 +0200



Bernd Eckenfels wrote:
 

 as long as stegano is theoretically safe but in practice detectable, it is a
 nice mind experiment but otherwise completely useless.

Yes. How easy or difficult that detection is remains
under discussion. I would like to ask experts in image
processing to answer one rather global question: in the
average case, if one arbitrarily modifies the LSB
of one tenth of the coefficients of the Fourier transform in
one colour, is there anything that can be noticed by the
naked eye when comparing the pictures? Thanks.

M. K. Shen
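Shen's proposed experiment can be checked numerically. The sketch below is my own interpretation (numpy, random data standing in for one colour plane, and my own choice to quantise coefficient magnitudes to integers before flipping LSBs):

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in for one colour plane of an image (values 0..255)
channel = rng.integers(0, 256, size=(64, 64)).astype(float)

coeffs = np.fft.fft2(channel)
mag = np.rint(np.abs(coeffs)).astype(np.int64)   # quantised magnitudes

# arbitrarily modify the LSB of one tenth of the coefficients
flip = rng.random(mag.shape) < 0.1
mag[flip] ^= 1

# rebuild with the original phases and invert the transform
stego = np.fft.ifft2(mag * np.exp(1j * np.angle(coeffs))).real

# worst-case per-pixel error the embedding introduced
print(np.max(np.abs(stego - channel)))
```

With these parameters the worst-case per-pixel change stays below one grey level (each flipped coefficient changes by magnitude 1, and the inverse transform divides by the pixel count), which suggests nothing visible; real images and JPEG-style DCT quantisation behave differently, so this is only a starting point.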

--

From: Mok-Kong Shen [EMAIL PROTECTED]
Subject: Re: Why Not OTP ?
Date: Mon, 16 Apr 2001 21:43:44 +0200



John Savard wrote:
 
 Frank Gerlach [EMAIL PROTECTED] wrote:
 
 Why is it that people do not like OTP ? It seems that some people do not like
 Public-Key crypto, so why not just exchanging a box of CDs ?
 
 Well, it is cumbersome and expensive. Worse yet, it imposes a strict
 limit on how many communications can be exchanged - and it may
 become important to communicate securely just when exchanging a new
 box of CDs has suddenly become harder.

Very long ago, I read that the Washington-Moscow hotline
was based on OTP. I suspect that that may no longer
be true. Does anyone happen to have any information
on the type of encryption used?

M. K. Shen

--

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: There Is No Unbreakable Crypto
Date: 16 Apr 2001 19:59:50 GMT

Henrick Hellström wrote:
The only reference I found was "(e.g., Bellare and Goldwasser's course
notes)" and a web search turned up blank. Could you please specify where I
should look?

http://www-cse.ucsd.edu/users/mihir/papers/gb.html
Once you understand the background of provable security, see Theorem 5.5.1.

However, I just took a look and it seems that they don't prove the theorem
in the lecture notes, so you may need to refer to the original paper by
Goldreich, Goldwasser, and Micali ("How to construct random functions").
Or, you can prove it yourself: It's not hard to prove once you understand
the basics of PRGs, PRFs, and provable security.
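For the curious, the GGM construction itself is small enough to sketch. This is a toy version only: SHA-256 stands in for the length-doubling PRG, which is my own shortcut and not what the proof assumes.

```python
import hashlib

def G(seed: bytes) -> bytes:
    """Stand-in length-doubling generator: 32-byte seed -> 64 bytes."""
    return hashlib.sha256(seed + b"0").digest() + hashlib.sha256(seed + b"1").digest()

def ggm_prf(key: bytes, x_bits: str) -> bytes:
    """GGM PRF: walk a binary tree keyed at the root, taking the left
    half of G's output for a 0 bit and the right half for a 1 bit."""
    s = key
    for bit in x_bits:
        expanded = G(s)
        s = expanded[:32] if bit == "0" else expanded[32:]
    return s

print(ggm_prf(b"k" * 32, "0110").hex())
```

The security argument in the paper is exactly about this tree walk: distinguishing the leaf values from random would let you distinguish G's output from random.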

--

From: "Brian Gladman" [EMAIL PROTECTED]
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Mon, 16 Apr 2001 21:09:12 +0100


"Mok-Kong Shen" [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]...


 Brian Gladman wrote:
 

  If two different PRNGs giving uniformly distributed random numbers in
  [0.0:1.0) are added and the result is taken 'mod 1.0', this output will
  then be uniformly distributed in [0.0:1.0).  A bit of maths shows that
  the output in [0.0:2.0) is not uniform but that the mod function
  combines the ranges [0.0:1.0) and [1.0:2.0) in such a way that a
  uniform distribution results.
 
  But if the outputs of the generators are multiplied by constants close
  to 1.0 before combination, the output will not generally be uniformly
  distributed in [0.0:1.0).
 
  This can be seen by considering a single PRNG giving uniformly
  distributed random numbers in [0.0:1.0) and considering the output
  after multiplying by a number (1.0 + delta), close to 1.0, and taking
  the output 'mod 1.0'.  In this case numbers in the range [0.0:delta)
  will occur twice as often as those in the range [delta:1.0).
 
  Although the maths is more complicated when several generators are
  combined, the same issue turns up.
 
  The uneven distributions that result may not be a problem in some
  applications but they will frequently be undesirable.

 One can consider the contin
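Both of Gladman's claims are easy to check numerically. A quick sketch (the constants and sample size are my own choices; Python's stock generators stand in for the PRNGs):

```python
import random

rng_a, rng_b = random.Random(1), random.Random(2)
N = 100_000

# sum of two uniform [0,1) variates, taken mod 1.0: uniform again
combined = [(rng_a.random() + rng_b.random()) % 1.0 for _ in range(N)]
counts = [0] * 10
for x in combined:
    counts[int(x * 10)] += 1
print(counts)            # each decile holds roughly N/10 samples

# a single generator scaled by (1 + delta) before 'mod 1.0':
# values in [0, delta) now occur at twice the density of the rest
delta = 0.2
skewed = [((1.0 + delta) * rng_a.random()) % 1.0 for _ in range(N)]
frac_low = sum(1 for x in skewed if x < delta) / N
print(frac_low)          # ~ 2*delta/(1+delta) = 1/3, not delta = 0.2
```

The 2:1 density ratio follows because [0, delta) is hit both by small inputs and by inputs near 1.0 that wrap around under the mod.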

Cryptography-Digest Digest #161

2000-11-15 Thread Digestifier

Cryptography-Digest Digest #161, Volume #13  Wed, 15 Nov 00 06:13:01 EST

Contents:
  Re: RC4 on FPGAs? (Ian Goldberg)
  Re: hardware RNG's (David Schwartz)
  Re: Learning Differential and Linear Cryptanalysis? (David Wagner)
  Re: vote buying... (David Wagner)
  Re: vote buying... (Paul Rubin)
  Re: Integer encoding on a stream ("D. He")
  Re: Why remote electronic voting is a bad idea (was voting through pgp) (Tommy the 
Terrorist)
  Re: vote buying... ("Trevor L. Jackson, III")
  Re: Black Market Internet Information - my visits and tradeshows (nemo outis)
  Re: The SHAs ("kihdip")
  Re: vote buying... (Volker Hetzer)
  Re: sci.crypt archive ([EMAIL PROTECTED])
  Re: MY BANANA REPUBLIC (Andre van Straaten)
  Re: Thoughts on the sci.crypt cipher contest (David Formosa (aka ? the Platypus))
  Re: vote buying... (David Wagner)
  Re: The ultimate cipher (Mok-Kong Shen)
  Re: On an idea of John Savard (Mok-Kong Shen)
  Re: On an idea of John Savard (Mok-Kong Shen)
  Re: On an idea of John Savard (Mok-Kong Shen)
  Question about ANSI X9.19/X9.9 Message Authentication ("Paul Sheer")
  Re: Thoughts on the sci.crypt cipher contest (Paul Crowley)



From: [EMAIL PROTECTED] (Ian Goldberg)
Subject: Re: RC4 on FPGAs?
Date: 15 Nov 2000 02:58:28 GMT

In article [EMAIL PROTECTED],
ajd [EMAIL PROTECTED] wrote:
Hi,

Has anyone implemented the RC4 algorithm on an FPGA (or can anyone point me
to someone who has)? What sort of throughput did you get?

You might also check out Dave's and my paper:

http://www.cs.berkeley.edu/~iang/isaac/hardware/main.html

This is the paper that became Chapter 10 of the EFF DES Cracker book, but
it's from 1996, and so the numbers are certainly out-of-date...

   - Ian
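For readers who haven't seen it, the cipher in question is tiny in software, which is part of why hardware throughput numbers are interesting. A plain-Python reference version (nothing to do with the FPGA work itself):

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream from the given key."""
    # key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

print(rc4_keystream(b"Key", 9).hex())
```

Encryption is just XOR of the keystream with the plaintext, so the state-swap loop above is the whole per-byte cost an FPGA implementation has to pipeline.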

--

From: David Schwartz [EMAIL PROTECTED]
Subject: Re: hardware RNG's
Date: Tue, 14 Nov 2000 19:17:02 -0800


Paul Pires wrote:
 
 David Schwartz [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]...
 
  Paul Pires wrote:
 
   It seems to me that you have refined a usable output stream from a poor
   input stream by rejecting enough input to correct for its flaws. You
   have not made a good output, just thrown out some bad. Can you
   deterministically fix it and leave the input/output ratio at 1:1?
 
  Yes, assuming by "input/output ratio" you mean the ratio of input
  entropy to output entropy.
 
 No. I meant 1:1 input to output bit size. It is clear that output
 entropy only comes from input entropy.

Right. And output randomness only comes from input randomness. And
output unpredictability only comes from input unpredictability (assuming
an attacker who knows the algorithm).

 I wanted to know if you were saying something else. It seems to me that
 there are two axes to the problem: make better input / make better
 post-processors. The latter seems less ideal since, by definition, it
 requires trashing some of your hard-earned entropy, or a complex
 process to refine what you have in a way that minimises the loss. The
 quality of the "miracle" you performed, see:
 
  Now, if you insist on your original definition of "unpredictable", I've
  just performed a major miracle. I've taken a predictable input stream
  and deterministically produced an unpredictable output stream from it!
 
 is less astounding when it is seen to be simple surgery.

It is entirely astounding if you accept the suggested (and, IMO,
ridiculous) definition of "unpredictable".
 
  The point is, if the input stream is deterministically fixable, then it
  contained sufficient randomness. Otherwise no deterministic process
  could fix it.
 
 How can Something contain "Sufficient Randomness"? Kinda paradoxical. 

How is that paradoxical?

 If you know it is, it isn't, 'cause it wouldn't be random. "Sufficient
 unpredictability" is better, but not much. How do you determine its
 sufficiency and therefore know if you have fixed it?

I showed in my example exactly how you do that, so I'm not sure I
understand what you're asking me.

 I'm not ragging on you, I actually feel the same way, but it does me no
 practical good. The problem is still there. If it starts out bad and
 you say you fixed it, how do I know?

Are you saying you don't believe that if I take an input bit stream
that is unpredictable but biased then the algorithm I suggested will
produce an unpredictable and unbiased output? A proof is not difficult
to compose. If I show a proof and you still don't believe me, then
you're beyond reason.

DS

--

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Learning Differential and Linear Cryptanalysis?
Date: 15 Nov 2000 03:49:26 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)

Simon Johnson  wrote:
Where can I find reference material, books etc. with a clear and consis

Cryptography-Digest Digest #161

2000-07-04 Thread Digestifier

Cryptography-Digest Digest #161, Volume #12   Tue, 4 Jul 00 21:13:01 EDT

Contents:
  Re: #sci.crypt moved to Dalnet (Simon Johnson)
  Re: Java implementation of DES and 3DES (Benjamin Goldberg)
  Re: Hash and Entropy ("Joseph Ashwood")
  Re: Java implementation of DES and 3DES ([EMAIL PROTECTED])
  Re: Hash and Entropy (Benjamin Goldberg)
  Re: Blowfish for signatures? (Thierry Nouspikel)
  Re: Hashing Function (not cryptographically secure) (lordcow77)
  Re: Public-domain Blowfish (Bruce Schneier)
  Re: AES: It's been pretty quiet for some time... (Bruce Schneier)
  Re: Hash and Entropy ("Scott Fluhrer")
  Re: Diffie Hellman Primes : Speed Tradeoff Q (Anton Stiglic)
  Re: Crypto Contest: CHUTZPAH... ("Paul Pires")
  Re: Crypto Contest: CHUTZPAH... ("Paul Pires")
  Re: Use of EPR "paradox" in cryptography (Tim Tyler)
  Re: Hash and Entropy (David A Molnar)



From: Simon Johnson [EMAIL PROTECTED]
Subject: Re: #sci.crypt moved to Dalnet
Date: Tue, 04 Jul 2000 21:59:25 GMT

In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] (Frank M. Siegert) wrote:
 On Sun, 02 Jul 2000 10:37:35 GMT, Simon Johnson
 [EMAIL PROTECTED] wrote:

 
 
 I've moved it to Dalnet, due to the frequent netsplitting on EFNET.
 
 Hope to see you soon.

 Haven't found you today... is this channel up 24/7?

 - Frank


Some guy has registered the channel before I got a chance to set it
up...

So I've registered #sci-crypt

You'll catch me around 6-12:00pm GMT :)

Hope to see ya around :)

--
Hi, i'm the signuture virus,
help me spread by copying me into Signiture File


Sent via Deja.com http://www.deja.com/
Before you buy.

--

From: Benjamin Goldberg [EMAIL PROTECTED]
Subject: Re: Java implementation of DES and 3DES
Date: Tue, 04 Jul 2000 22:14:54 GMT

dexMilano wrote:
 
 The algorithm seems to work perfectly. I've verified some cases that
 produce the error.
 
 For example, if the resulting byte value is 143, the corresponding
 character (ASCII 143) is "\uFFFD" as a single char! If I try to decrypt
 a sequence where I have this char, I get a wrong decryption.
 I don't receive any error, but the algorithm doesn't work.
 
 That's why I think the problem could be in the way Java manages
 characters.

The problem is that ASCII 143 is '\u008F', whereas the number you're
using, '\uFFFD', has the value 65533.  GIGO.

--

From: "Joseph Ashwood" [EMAIL PROTECTED]
Subject: Re: Hash and Entropy
Date: Tue, 4 Jul 2000 15:25:43 -0700

Let's see if I can get this right, or at least get my own
errors pointed out.
A hash is a function that takes an input of arbitrary length
and produces an output of fixed length. A cryptographically
secure hash does so in a fashion that makes it difficult to
find two inputs that generate the same output value, and
difficult to find an input that generates a specific output
value; it also spreads its outputs approximately evenly
across all possible output values, with the result that the
output passes all statistical tests.

Entropy is a measure of the non-determinism inherent in a
string generated by some method. It has so many different
necessary interpretations that it becomes difficult to
consider, in spite of the fact that all the (valid)
interpretations are in fact equivalent. It has so many
intricacies that it is quite difficult to define briefly.
For some purposes it can be defined as the unpredictability
of a sub-sequence given all other output values of the
generator. It is also, almost by definition, difficult to
locate dependably: even a coin flip is not purely entropic,
since the coin could, under very rare circumstances, land on
its edge; also, if you check a US penny, it tends to land on
one side more than the other (tails more than heads, IIRC).
Joe
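Ashwood's first definition is easy to see concretely with any modern hash (SHA-256 here as an arbitrary example):

```python
import hashlib

# arbitrary-length inputs, fixed-length output
for msg in [b"", b"a", b"a much longer message " * 1000]:
    digest = hashlib.sha256(msg).digest()
    print(len(msg), len(digest), digest.hex()[:16])
# every digest is exactly 32 bytes, however long the input,
# and the digests are spread evenly across the output space
```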



--

From: [EMAIL PROTECTED]
Subject: Re: Java implementation of DES and 3DES
Date: Tue, 04 Jul 2000 22:35:47 GMT

Boris Kazak [EMAIL PROTECTED] wrote:
 ASCII 143 can be interpreted by the system as "unsigned char", in which 
 case its value will really be 143, or as "signed char", in which case 
 its value will be 143 - 256 = -113. 
 There must be some way in Java to treat and concatenate bytes as 
 "unsigned chars", otherwise no cipher implementation will work
 correctly.

No, there's a primitive type, byte, which is what all of the ciphers
expect arrays of. For files, it's not a problem, as you can read and
write by the byte. Strings, on the other hand, need to be converted
before being passed to the engine. Assuming, of course, you're using
the standard cipher framework.

-- 
Matt Gauthier [EMAIL PROTECTED]
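The 143 example from this thread can be reproduced outside Java (Python's struct module used here to mimic the signed and unsigned single-byte interpretations being discussed):

```python
import struct

raw = bytes([143])
unsigned = struct.unpack('B', raw)[0]   # unsigned char view: 143
signed = struct.unpack('b', raw)[0]     # signed char view: 143 - 256 = -113
print(unsigned, signed)

# the usual Java idiom (b & 0xFF) recovers the unsigned value
print(signed & 0xFF)                    # 143 again
```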

--

From: Benjamin Goldberg [EMAIL PROTECTED]
Subject: Re: Hash and Entropy
Date: Tue, 04 Jul 2000 22:40:02 GMT

Future 

Cryptography-Digest Digest #161

2000-02-20 Thread Digestifier

Cryptography-Digest Digest #161, Volume #11  Sun, 20 Feb 00 06:13:01 EST

Contents:
  Re: NSA Linux and the GPL (Uri Blumenthal)
  Re: EOF in cipher??? (John Savard)
  Re: Question about OTPs (ChenNelson)
  Re: NIST publishes AES source code on web (John Savard)
  Re: EOF in cipher??? (wtshaw)
  Re: Question about OTPs ("Douglas A. Gwyn")
  Re: OAP-L3 Encryption Software - Complete Help Files at web site (Peter Rabbit)
  Re: UK publishes 'impossible' decryption law (Eric Smith)
  Re: UK publishes 'impossible' decryption law (Eric Smith)
  Re: Is Phi perfect? (Xcott Craver)
  Re: OAP-L3 Encryption Software - Complete Help Files at web site (Tony L. Svanstrom)
  Re: Biggest keys needed (was Re: Does the NSA have ALL Possible PGP  keys?) 
([EMAIL PROTECTED])
  Cryptography FAQ (01/10: Overview) ([EMAIL PROTECTED])
  Cryptography FAQ (02/10: Net Etiquette) ([EMAIL PROTECTED])



From: Uri Blumenthal [EMAIL PROTECTED]
Subject: Re: NSA Linux and the GPL
Date: Sat, 19 Feb 2000 21:39:51 -0500
Reply-To: [EMAIL PROTECTED]

"Douglas A. Gwyn" wrote:
 Oh, good grief!  NSA was one of the very first licensees of UNIX
 source code, and has had various flavors of UNIX, among numerous
 other OSes (including some devised within NSA), for decades.  There
 is no particular reason they need to use Linux as opposed to more
 fully developed genuine UNIX-based systems.

One reason could be the desire to use off-the-shelf PCs running
a Unix-like operating system, again off-the-shelf.

No doubt NSA could port one of their "flavors" of Unix, but
this game becomes boring very quickly. And Linux seems to
be "it", aka "popular Unix running on a PC".

 As another poster hinted, no matter how much the security of
 Linux is beefed up, it will not become Multi-Level Secure...

Hmm... Any reason why "it will not" (assuming there's market
for MLS Linux)?
-- 
Regards,
Uri
-=-=-==-=-=-
Disclaimer

--

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: EOF in cipher???
Date: Sun, 20 Feb 2000 04:00:37 GMT

On Sun, 20 Feb 2000 01:33:39 GMT, "Douglas A. Gwyn" [EMAIL PROTECTED]
wrote, in part:

The C programming language is
defined by a certain standards document.

Which came out in previous editions.

The term "C programming" is usually applied, in the vernacular, to
programming for compilation on any compiler which, at one time,
compiled programs written in the C language as it was then understood,
whether that was at a time when a previous version of the standard was
in effect, or at a time before standardization documents existed for
the language.

Some features in recent C compilers, such as the declaration of the
types of function arguments in the function header, and the // form of
comment, have proven to be so useful and convenient that they have
been generally used in new programs.

But it is indeed a sound rule not to use any feature not found in the
very first C implementation, any feature that has not been a part of
the C language from the very beginning, any feature that is not a part
of every single compiler purporting to compile C or some subset
thereof ... without cause. One writes a program *that it may be of
use*; that is, that it may produce the results of a calculation on
various people's computers. Making it easy to compile the program on
just about any compiler, however inadequate, contributes to achieving
this goal.

--

From: [EMAIL PROTECTED] (ChenNelson)
Subject: Re: Question about OTPs
Date: 20 Feb 2000 04:37:11 GMT

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Actually, in my lab class I think I've stumbled across a very
efficient way of generating an OTP. Take an oscilloscope, hook it to an
A/D board on the computer, and have the oscilloscope record noise.
Then, for all voltages > 0 output a 1, and for all voltages < 0 output
a 0 (or the other way around). Have the sample rate fast enough to be
efficient, but not so fast that randomness is lost (you must test
statistically here; it is better to sample slower and be surer).

Later,
Nelson Chen
-----BEGIN PGP SIGNATURE-----
Version: PGP for Personal Privacy 5.5.2
Comment: For public key, go to key server with key ID 0xD28C0DD9

iQA/AwUBOK9wAW1ACZTSjA3ZEQJpFgCfR4OVKI51yMzzvB+gzXAQJhYBkz8An3G8
vlWwnD50Tpd4Y+wuBdgkp1zn
=46yX
-----END PGP SIGNATURE-----
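Chen's thresholding step is simple to model in software (Gaussian samples stand in for the A/D noise here; the statistical testing he recommends is not shown):

```python
import random

rng = random.Random(42)
# stand-in for sampled noise voltages from the A/D board
samples = [rng.gauss(0.0, 1.0) for _ in range(10_000)]

# threshold at zero: positive voltage -> 1, negative -> 0
bits = [1 if v > 0 else 0 for v in samples]
print(len(bits), sum(bits))   # ones should be roughly half
```

Real noise sources are rarely this symmetric, which is why the thread's advice to test statistically (and to debias if needed) matters.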


--

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: NIST publishes AES source code on web
Date: Sun, 20 Feb 2000 04:31:44 GMT

On Sun, 20 Feb 2000 01:29:19 GMT, "Douglas A. Gwyn" [EMAIL PROTECTED]
wrote, in part:
Mok-Kong Shen wrote:

 ... Remembering that previously it has been the firm and resolute
 opinions of a number of 

Cryptography-Digest Digest #161

1999-02-28 Thread Digestifier

Cryptography-Digest Digest #161, Volume #9   Sun, 28 Feb 99 14:13:03 EST

Contents:
  Re: Define Randomness (R. Knauer)



From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Define Randomness
Date: Sun, 28 Feb 1999 18:36:10 GMT
Reply-To: [EMAIL PROTECTED]

On Sun, 28 Feb 1999 07:07:21 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:

But I *am* claiming that inductive reasoning is often
false, unlike deductive reasoning.  We thus must be very, very careful
when we use induction, if we want correct results.  

Often? I would grant you "sometimes", but not "often". If it were
often, science would not progress at the rate we see it progressing.

It is *often* FALSE to draw a conclusion about the whole based upon
some parts of the whole.  This is of course what we do in statistics,
and may be seen as the reason for speaking in probabilities about the
results.  But probabilities under 1.0 are not PROOF that every other
possibility has been ruled out.  But that is what we want:  We are
trying to rule out ANY POSSIBILITY AT ALL of weakness.  At least that
is what PROVABLY "unbreakable" means to me.  

Statistical reasoning is supposedly different from reasoning based on
pac-learning. In the latter you are trying to confirm an hypothesis by
presenting it with more data in the hopes that you will converge on a
result that is trustworthy.

The pac-learning method called Occam's Razor is an example of how this
process works. It is believed that the correct result is the one that
is simplest, as long as it takes full account of the data. Therefore,
given the data, one begins to remove parts of the hypothesis to see if
they are extraneous.

As Sherlock Holmes said: "You write down all the possibilities and
begin eliminating them. What remains, however unbelievable it may
seem, is the answer to your problem."

I realize that you are claiming that one can never know all the
possibilities, so that method won't work reliably. But I would point
out that the strength of a formal axiomatic system, the kind used to
perform apparently bullet-proof deduction, also suffers from the same
kind of fatal flaw, as Godel-Turing-Chaitin pointed out. Chaitin, for
example, has shown that even in the formal axiomatic system of simple
number arithmetic there are undecidable problems. 

So the problem of which you speak, the limitations of knowledge, are
inherent in any reasoning process. I believe induction is much more
powerful than deduction because with it you can step outside the
system - and that is required to have an understanding of reality.

Suppose we have a house with 20 rooms:  After looking at 2 rooms, can we
possibly hope to "induce" the contents of the other 18?  Of course not.
In this situation there is no reason for any particular room to
reflect any other.  So conclusions which are based on a small subset
have no reason to be true.  Induction should not be used here.  

That is a straw man argument. There was no a priori reason to believe
that there would be any regularity to the rooms. But even then, there
is some regularity. After inspecting a few carefully selected rooms
one could say with a high degree of certainty that all rooms have a
floor, a ceiling and more than three walls, and one or more doors.

But it is true that one cannot say such things with absolute
certainty. But then that is also true of formal axiomatic systems in
arithmetic, as Chaitin has shown.

Suppose someone in a 20-room house committed murder, and we want to
find the weapon:  Suppose we search 19 rooms without finding it; shall
we now say that we have "proven" to a 95% probability that the weapon
is not in the house?

Supposedly you can. That is Laplace's rule: the current probability is
simply related to the past instances of some event. For example, if
you take the observation that the Sun has risen every day for the past
10,000 years, then the probability that it will not rise tomorrow is
about 1/3,650,000 - and therefore the probability that it will rise
tomorrow is 1 - 1/3,650,000. (See Li & Vitanyi, op. cit.)
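The sunrise figure follows from Laplace's rule of succession, under which n straight successes give a probability of failure of about 1/(n+2) on the next trial; a quick check (the 10,000-year count is the post's own):

```python
# Laplace's rule of succession: after n successes and no failures,
# P(success on the next trial) = (n + 1) / (n + 2)
n = 10_000 * 365                # days the Sun has been observed to rise
p_no_rise = 1 / (n + 2)
p_rise = (n + 1) / (n + 2)
print(n, p_no_rise, p_rise)     # p_no_rise is about 1/3,650,000
```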

I suppose, but of what use is such a "proof"?
What we really want to know is whether the weapon is there,
*anywhere*.  So as long as even *one* room remains unsearched, there
is no PROOF that the weapon is not there.

And even if you search all 20 rooms and turn up empty-handed, that does
not mean you can conclude with absolute certainty that the weapon is
not in the house. You need some other reason to draw that conclusion -
some reason to believe that a weapon cannot escape detection.

In cryptography we take even unimaginably small risks *very*
seriously.

You might, and I might, but I suspect that the vast majority of users
of crypto do not have a clue as to the kinds of risks there are. I
consider the development of quantum computation to be a very serious
risk for cryptosystems that are