Cryptography-Digest Digest #576, Volume #10      Tue, 16 Nov 99 16:13:03 EST

Contents:
  Re: High Speed (1GBit/s) 3DES Processor (James Felling)
  Re: Proposal: Inexpensive Method of "True Random Data" Generation ("Trevor Jackson, III")
  Re: AES cyphers leak information like sieves (David Wagner)
  Re: Scientific Progress and the NSA (Tim Tyler)
  Re: Proposal: Inexpensive Method of "True Random Data" Generation ("Trevor Jackson, III")
  Re: Proposal: Inexpensive Method of "True Random Data" Generation ("james d. hunter")
  Interested in bulk encryption devices (zenlight)
  Re: AES cyphers leak information like sieves (DJohn37050)
  Re: AES cyphers leak information like sieves (DJohn37050)
  Re: The DVD Hack: What Next? (John Savard)
  Re: Scientific Progress and the NSA (Scott Nelson)
  Re: SCOTT16U SOLUTION ON THE WEB ("Peter K. Boucher")
  Re: more about the random number generator (William Rowden)
  Re: MPQS factoring method: proof of its complexity. (Bob Silverman)

----------------------------------------------------------------------------

From: James Felling <[EMAIL PROTECTED]>
Subject: Re: High Speed (1GBit/s) 3DES Processor
Date: Tue, 16 Nov 1999 11:34:26 -0600

Not fast enough for some of the newer routers -- unless multiple chips are
working in parallel!

Justin wrote:

> On 14 Nov 1999 [EMAIL PROTECTED] wrote:
>
> > We have developed a prototype Encryption system which runs 3DES at 1
> > GBits/sec (this is not just processing, but with real I/O at 1 GBit/sec).
> > Are there any commercial applications for this type of technology?  It
> > would be interesting to get some feedback.  Possibly bulk encryption with
> > ATM switches etc? It should also be possible to multiplex multiple high
> > speed inputs into our system.
>
> How about encryption for backbone links?  Keeps people from being able to
> go out on a country road in Kansas and splice into an OC48.
>
> Justin


------------------------------

Date: Tue, 16 Nov 1999 12:49:21 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.misc,sci.physics
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation

Coen Visser wrote:

> "Trevor Jackson, III" wrote:
>
> > john baez wrote:
>
> > > Second of all, the proof is not hard.  The basic idea was already
> > > described in this thread: there just aren't enough short descriptions
> > > to go around to give *all* strings descriptions shorter than the
> > > strings themselves.
>
> > Of course.  This treats each string as a member of the set.  It is the set's
> > property that not all elements can be compressed.  But any individual string
> > within any given set can be compressed -- at the expense of the other members of
> > the set.
>
> I think you have the following in mind (correct me if I'm wrong).
> Take a set, say: {0, 1, 00, 101, 100, 0000, 011011, 1011001101010110}
> Now you can compress "1011001101010110" to "0" by saying
> string "0" maps to string no. 7 of this set. So now you have
> a compressor that can compress this "large" random-looking string
> to just one bit and decompress it using just the information "0" -> nr7.
> That is completely true *if* the set is given. But a description of a
> set costs information too. And the more difficult it is to describe
> a set, the more information you need to give to your compressor.

Yes.  And when the set is "all finite strings", as you previously described, it
is not possible to compress elements of the set without inflating other elements
of the set.  The only time this is worthwhile is when you have some non-uniform*
weighting of the set members.  When you have such a model, equivalent to a
source model, you can decide which elements are _worth_compressing_, but that is
not the same as claiming that those elements are _compressible_ and concluding
that the others are not capable of compression.

*Here uniform does not mean uniform frequency of occurrence, but a distribution
weighted by the inverse of the string lengths.
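
A minimal sketch of the counting argument behind this (my own illustration in
Python, not part of the thread): any uniquely decodable code must satisfy the
Kraft inequality, sum(2**-len) <= 1, so shortening one member's codeword
necessarily lengthens others.

    # Hypothetical codeword-length assignments for a set of 8 strings,
    # such as the one in Coen's example above.
    def kraft_sum(code_lengths):
        """Sum of 2**-L over all codeword lengths L; must be <= 1."""
        return sum(2.0 ** -length for length in code_lengths)

    fair = [3] * 8                # every string gets a 3-bit codeword
    print(kraft_sum(fair))        # 1.0 -- exactly fills the code space

    greedy = [1] + [3] * 7        # compress one string to a single bit...
    print(kraft_sum(greedy))      # 1.375 > 1 -- no such code can exist

    legal = [1] + [4] * 7         # ...legally, by inflating the other seven
    print(kraft_sum(legal))       # 0.9375 <= 1 -- a valid trade-off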


------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: AES cyphers leak information like sieves
Date: 16 Nov 1999 09:41:44 -0800

You seem to be confused.  You write about flaws if a block cipher is
used without any chaining (what is typically known as ECB mode), but
those flaws are extremely well-known (taught in Crypto 101a).  If you
use a block cipher properly, these issues don't come up.

Go read, e.g., Schneier's _Applied Cryptography_, or even just the
newsgroup FAQ, and I believe you will see what people actually use,
and how they resist your attack.
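
To make the point concrete, here is a small demonstration (mine, not Wagner's;
it assumes Python with the PyCryptodome package installed) of how ECB repeats
ciphertext blocks where CBC does not:

    import os
    from Crypto.Cipher import AES

    key = os.urandom(16)
    plaintext = b"ATTACK AT DAWN!!" * 4   # four identical 16-byte blocks

    # ECB: equal plaintext blocks give equal ciphertext blocks -- the "leak".
    ecb = AES.new(key, AES.MODE_ECB).encrypt(plaintext)
    print(len({ecb[i:i+16] for i in range(0, len(ecb), 16)}))  # 1 distinct block

    # CBC with a random IV hides the repetition entirely.
    cbc = AES.new(key, AES.MODE_CBC, os.urandom(16)).encrypt(plaintext)
    print(len({cbc[i:i+16] for i in range(0, len(cbc), 16)}))  # 4 distinct blocks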

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Scientific Progress and the NSA
Reply-To: [EMAIL PROTECTED]
Date: Tue, 16 Nov 1999 17:46:55 GMT

Bill Unruh <[EMAIL PROTECTED]> wrote:
: In <[EMAIL PROTECTED]> Tim Tyler <[EMAIL PROTECTED]> writes:
:>Bruce Schneier <[EMAIL PROTECTED]> wrote:
:>: Nicol So <[EMAIL PROTECTED]> wrote:

[the NSA is not much ahead of the open research community??!]

:>: I agree.  We cannot reliably tell.  I was just giving my thoughts.

:>I'd have thought history suggests that intelligence agencies have been
:>ahead of open researchers in a large number of areas of cryptography and
:>cryptanalysis.  Such organisations generally have access to the funding
:>and the staff to perform pretty good work.

: Actually, intelligence agencies have a severe handicap, and that is
: their secrecy. Things go much much faster if you have a bunch of equally
: smart people constantly looking over your shoulders for your stupid
: assumptions, etc.

They have such handicaps, it's true.  Obviously, they believe the price
of these is worth paying.

:>Certainly we now know that public key cryptography was invented in
:>secret independently of the public community some years before the
:>rest of the world heard about it.

: Well no we do not know it. [...]

?

Isn't it now in the history books that the maths behind public key
cryptography based on prime factorisation was invented at Britain's GCHQ?

:>If quantum computers can be used to crack public key systems, money
:>can be bet on intelligence agencies developing these internally before
:>the rest of the world wakes up to the idea.

: Uh, the rest of the world woke to it long before the intelligence
: community did.

How do you know the intelligence community didn't invent quantum computers
a long time ago?  Are you an ex-services man?

Quantum computation has been known to be connected with the factorisation
problem virtually from day 1.  It was about the first problem demonstrated
to be subject to dramatic speed improvements.  The government has known
this for just as long as everyone else, if not longer.

: There is no evidence that they have any expertise in
: quantum physics, certainly no more than the outside world.

Of *course* there's no evidence!

That's why they're called an intelligence agency: hiding evidence is part
of their job.
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

You'll never get dizzy doing a good turn.

------------------------------

Date: Tue, 16 Nov 1999 13:01:28 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.misc,sci.physics
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation

[EMAIL PROTECTED] wrote:

> Trevor Jackson, III ([EMAIL PROTECTED]) wrote:
> : This verges on the claim that RNG outputs should be filtered for
> : compressibility prior to use.  The definition of incompressible as
> : random contributes to this suggestion.
>
> And of course any filtering does make them slightly compressible, since
> random-looking sequences are a subset of all sequences. Yet using an
> all-zero one-time pad is something people are unlikely to do in reality.
> This is an interesting philosophical problem.
>
> : How would you address the following issue? An incompressible string used
> : once is no more compressible than prior to use.  But an unpredictable
> : sequence (shorthand for a sequence generated stochastically) is no longer
> : unpredictable after use.
>
> The following quote gives part of the answer: if the same one-time pad is
> used twice, then since the *plaintext* is compressible, the first message
> can be used to give the pad for the second in terms of the difference
> between them...
>
> : > 14159 26535 89793... compresses to "the decimal part of pi", but any
> : > sequence of numbers can compress to "the contents of the one-time-pad that
> : > I secretly photographed"...
>
> : By this definition all [sufficiently long] strings, even infinite ones,
> : are compressible.  Whatever mechanism specifies/describes/locates the
> : string is a compressed reference to the string
>
> Not "are compressible", but "are potentially compressible". A description
> of a string as "the contents of the third one-time-pad in my safe" is
> useless if you can't open that safe.
>
> : > There _is_ a sense in which true incompressibility means "as good as
> : > random", since it does mean haphazard, not generated by a simple rule.
>
> : What is the distinction between "true" incompressibility and (I suppose)
> : "pseudo" incompressibility?  Are references to sequences pseudo-compressions?
>
> No, I meant "true incompressibility" in the opposite sense; incompressible
> even when compressions making use of side information were allowed. As
> distinct from mere _intrinsic_ incompressibility, which is the more
> well-behaved mathematical concept.
>
> : Is there an instance of a truly incompressible sequence?
>
> One that is genuinely random, and drawn from a uniform distribution.
> Anything else _might_ be truly incompressible, but one can never be sure.

Here is another instance of the characterization of a source versus the
characterization of an individual string.  Given a genuinely random and uniform
source, its _potential_ output is incompressible because (1) the potential
state space is exactly as large as the space of available encodings and (2)
there is no benefit to remapping the spaces, due to the uniformity of the
distribution.

However, once the source has generated a particular sequence, it is no longer
potential.  It is actual.  Most of the time the actual sequence appears
haphazard, with no simple (i.e., efficient) re-encoding apparent.  However, as
you observe above, the actual sequence might include a long string of identical
bits.  The philosophical question is whether the actual string is considered
compressible due to the presence of a simple pattern, or incompressible due to
the description of the source provided above.

IMHO, the answer depends upon the intended usage of the sequence.  If, for
example, it is to be used as a representative example of the generator, it
should be considered incompressible.  OTOH, if it is to be used as an OTP key,
it may be considered compressible.

But the sequence/string itself tells us nothing about these criteria.



------------------------------

From: "james d. hunter" <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.misc,sci.physics
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation
Date: Tue, 16 Nov 1999 12:48:52 -0500
Reply-To: [EMAIL PROTECTED]

James Felling wrote:

[...]

> I think Kolmogorov complexity is a useful thing, but as it is so very sensitive
> to the representational language, it is a weak tool for the quantification of
> randomness.  In fact I have some doubt as to whether there is a way it can be
> used to label a single string that has any meaning beyond L.


  As of this point in time, all tools are weak for quantifying the 
  undefined quantity "randomness".

------------------------------

From: zenlight <[EMAIL PROTECTED]>
Subject: Interested in bulk encryption devices
Date: Tue, 16 Nov 1999 18:07:36 GMT

Does anyone have information on the bulk encryption devices listed in
http://sva.theriver.com/millist/m14.html#a2761



------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: AES cyphers leak information like sieves
Date: 16 Nov 1999 18:26:07 GMT

CBC mode has been analyzed by Rogaway.  It XORs the previous ciphertext block
with the current plaintext block, so that what gets encrypted appears random.
If you are HYPER paranoid, you can use double CBC mode or similar; then every
ciphertext bit depends on every bit of the key and plaintext.
Don Johnson
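
A sketch of the chaining Johnson describes, C[i] = E_k(P[i] XOR C[i-1]) with
C[-1] = IV (my Python illustration, built on PyCryptodome's raw AES as the
block primitive and checked against the library's own CBC mode):

    import os
    from Crypto.Cipher import AES

    def cbc_encrypt(key, iv, plaintext):
        """CBC-encrypt by hand; plaintext length must be a multiple of 16."""
        block_cipher = AES.new(key, AES.MODE_ECB)   # raw block encryption E_k
        prev, out = iv, b""
        for i in range(0, len(plaintext), 16):
            # XOR this plaintext block with the previous ciphertext block,
            # then encrypt the result; that is ciphertext block C[i].
            mixed = bytes(a ^ b for a, b in zip(plaintext[i:i+16], prev))
            prev = block_cipher.encrypt(mixed)
            out += prev
        return out

    key, iv, msg = os.urandom(16), os.urandom(16), os.urandom(64)
    assert cbc_encrypt(key, iv, msg) == AES.new(key, AES.MODE_CBC, iv).encrypt(msg)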

------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: AES cyphers leak information like sieves
Date: 16 Nov 1999 18:28:59 GMT

The break is expected to occur at around the square root of the number of
possible blocks (2^(n/2) blocks for an n-bit block cipher), due to the
birthday paradox.  For a 128-bit block, that is 2^64 blocks, which is a LOT.
Don Johnson
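
Putting rough numbers on that (my back-of-the-envelope Python, standard
library only): the 2^(n/2)-block birthday bound corresponds to the following
volumes of ciphertext under one key.

    def human(nbytes):
        """Render a byte count in a convenient binary unit."""
        for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"):
            if nbytes < 1024:
                return f"{nbytes:.3g} {unit}"
            nbytes /= 1024
        return f"{nbytes:.3g} ZiB"

    for n_bits in (64, 128):
        blocks = 2 ** (n_bits // 2)        # expected collision point
        print(f"{n_bits}-bit block: ~2^{n_bits // 2} blocks,",
              human(blocks * n_bits // 8))
        # 64-bit block:  ~2^32 blocks, 32 GiB   (the DES/3DES case)
        # 128-bit block: ~2^64 blocks, 256 EiB  (the AES case: a LOT)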

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: The DVD Hack: What Next?
Date: Tue, 16 Nov 1999 18:32:14 GMT

"Mark Keiper" <[EMAIL PROTECTED]> wrote, in part:

>One thing though- the ripping process requires buttloads of disk space (5 to
>10 gigs) for average sized movies according to the article I read.

Well, the movies themselves require 4.7 Gigabytes of space or
thereabouts, so that's hardly surprising.

John Savard (don't snooze, don't snore)
http://www.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: Scientific Progress and the NSA
Reply-To: [EMAIL PROTECTED]
Date: Tue, 16 Nov 1999 18:33:56 GMT

On Tue, 16 Nov 1999 17:46:55 GMT, Tim Tyler <[EMAIL PROTECTED]> wrote:

[snip]
>:>If quantum computers can be used to crack public key systems, money
>:>can be bet on intelligence agencies developing these internally before
>:>the rest of the world wakes up to the idea.
>
>: Uh, the rest of the world woke to it long before the intelligence
>: community did.
>
>How do you know the intelligence community didn't invent quantum computers
>a long time ago?  Are you an ex-services man?
>
It's just as easy to play the "how do you know" game in
reverse, i.e. how do you know the intelligence community
actually invented differential cryptanalysis first?  Maybe
they just _claimed_ they did so they wouldn't look foolish
when they were scooped by a tiny group of researchers from IBM.

Scott Nelson <[EMAIL PROTECTED]>

------------------------------

Date: Tue, 16 Nov 1999 11:36:57 -0700
From: "Peter K. Boucher" <[EMAIL PROTECTED]>
Subject: Re: SCOTT16U SOLUTION ON THE WEB

"SCOTT19U.ZIP_GUY" wrote:
[snip]
>     You are the one with a closed, limited mind, Tom.  I have responded to
> reasonable questions.  The problem is you're incapable of following the
> answers and you keep reasking the same questions without appearing
> to learn anything.  I see no reason to answer the questions over and over
> just because you're too stupid to understand.

It's obviously overwhelmingly tempting for you to assume that the reason
people keep asking you the same question is that experts in the field
of cryptography are all too stupid to follow your answers, rather than
to consider the alternative hypothesis that your answers have never
clearly or convincingly addressed the questions.  Regardless of which
alternative is correct, it would actually be in your own best interest
to assume that the second hypothesis is correct.

If everyone here is too stupid to understand what you say, you are
wasting your time posting here at all.  Conversely, if you simply need
to express your theories more convincingly, then some effort to do so
has a non-zero chance of bringing about a more meaningful exchange of
ideas.

-- 
Peter

------------------------------

From: William Rowden <[EMAIL PROTECTED]>
Subject: Re: more about the random number generator
Date: Tue, 16 Nov 1999 18:37:20 GMT

In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:
> William Rowden wrote:
> > > Entropy = 1.000000 bits per bit.
> > That's wonderful.
>
> It's more than wonderful, it's fantastic (in the literal meaning
> of the word).
>
> > With 208771 zeros and 209021 ones, ...
>
> How can that have an "entropy" of 1.000000?

I'm experiencing deja vu.  You and I discussed this in reference to
Johnny Bravo's claims about the number of bits of entropy of a given
password and passphrase.

Obviously theory says there's some redundancy.  It's small, though; try
the calculation:

Assuming the independence of each bit, and taking this sample as
representative of the population produced by this PRNG, the source model
would be p(0) = 208771 / 417792 ~ 0.4997 and p(1) = 209021 / 417792 ~
0.5003.  By definition, H(p) = - sum ( p(t) * lg p(t) ) for t from
source.  Here t is a bit, so

H = - ( 208771 / 417792 * lg (208771 / 417792) +
        209021 / 417792 * lg (209021 / 417792) ),

which is 0.9999997... bits per bit.  The entropy was given to only 6
decimal places, however, making 1.000000 the correct result.
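
The same arithmetic in a few lines of Python (standard library only),
reproducing both figures:

    from math import log2

    zeros, ones = 208771, 209021
    total = zeros + ones

    # H = - sum( p(t) * lg p(t) ), with p estimated from the observed counts.
    H = -sum(p * log2(p) for p in (zeros / total, ones / total))
    print(f"{H:.7f} bits per bit")   # 0.9999997
    print(f"{H:.6f} bits per bit")   # 1.000000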
--
    -William
SPAM filtered; damages claimed for UCE according to RCW19.86
PGP key: http://www.eskimo.com/~rowdenw/pgp/rowdenw.asc until 2000-08-01
Fingerprint: FB4B E2CD 25AF 95E5 ADBB  DA28 379D 47DB 599E 0B1A



------------------------------

From: Bob Silverman <[EMAIL PROTECTED]>
Subject: Re: MPQS factoring method: proof of its complexity.
Date: Tue, 16 Nov 1999 19:10:25 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (MENIER Clément) wrote:
> Hello everyone,
>
>      I am doing research on the Quadratic Sieve factoring algorithm.
> But up to now I have found no proof of its complexity:
>
>     exp( sqrt( (1+o(1)) * ln(n) * ln(ln(n)) ) )
>
>    I'd like to know if someone knows a proof of it

No one does.  There is no proof.  The complexity function depends
upon an unproved assumption: that integers in the range of a
quadratic polynomial behave as 'random' integers of the same size with
respect to their smoothness properties.
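
For a feel of how that estimate grows (my own illustration, ignoring the
o(1) term entirely), Python can evaluate it directly:

    from math import exp, log

    # Heuristic QS cost: exp( sqrt( ln(n) * ln(ln(n)) ) ), o(1) dropped.
    for bits in (256, 512, 768):
        n = 2 ** bits
        L = exp((log(n) * log(log(n))) ** 0.5)
        print(f"{bits}-bit n: L(n) ~ 2^{log(L, 2):.0f}")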

> (even if it is not exact) or if someone knows where I can find one.

The derivation is virtually the same as that for CFRAC.  You can
find the derivation for the latter in Knuth Vol 2.  Or you can
look at the derivation for the Number Field Sieve in Lenstra's (ed)
book.

>
>    I am also looking for some example of the o(1) constant in the
> algorithm for different implementations and machines.

What do you mean by "example"?  I have (a lot of) data on the o(1) term
for my implementation on Sparcs and on Pentium-based PCs.  What are
you looking for?


Bob Silverman
"You can lead a horse's ass to knowledge, but you can't make him think"



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
