Cryptography-Digest Digest #329, Volume #10      Wed, 29 Sep 99 11:13:01 EDT

Contents:
  Re: IBM's security chip (Built on the motherboard)! ("Melih Abdulhayoglu")
  Re: Hardest ever ECDL solved by INRIA researcher and 195 volunteers (Bob Silverman)
  Re: msg for Dave Scott (Patrick Juola)
  Re: Comments on ECC (Patrick Juola)
  Re: Electronic envelopes (Mok-Kong Shen)
  Re: simple algorithm for hardware device? (Volker Hetzer)
  Re: Compress before Encryption (SCOTT19U.ZIP_GUY)
  About differential cryptanalysis.... (OTTO)
  Cryptic manuscript... Help (Computer Technician)
  Re: Hardest ever ECDL solved by INRIA researcher and 195 volunteers (Robert Harley)
  Re: Compress before Encryption (SCOTT19U.ZIP_GUY)
  Re: ECDL and distinguished points (John Sager)
  How good is java.security.SecureRandom ? (Stanley Chow)
  Re: Ritter's paper (SCOTT19U.ZIP_GUY)

----------------------------------------------------------------------------

From: "Melih Abdulhayoglu" <[EMAIL PROTECTED]>
Subject: Re: IBM's security chip (Built on the motherboard)!
Date: Tue, 28 Sep 1999 21:30:21 +0100

Any more info, please?

Anton Stiglic <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> see
>
> http://dailynews.yahoo.com/h/nm/19990927/tc/ibm_security_2.html
>
> for a starters...
>
> anton
>



------------------------------

From: Bob Silverman <[EMAIL PROTECTED]>
Subject: Re: Hardest ever ECDL solved by INRIA researcher and 195 volunteers
Date: Wed, 29 Sep 1999 12:20:34 GMT

In article <[EMAIL PROTECTED]>,
  "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> The strange thing is that the press release claims that this
> shows that cracking a 97-bit EC system is harder than cracking
> a 512-bit RSA system.

No!   It is substantially easier.

The time to break 512-bit RSA is e^(1.92 (log n)^(1/3) (log log n)^(2/3)),
which for n = 2^512 comes to about 1.6 x 10^19.

A 97-bit ECDL costs sqrt(pi/2 * 2^97) EC point additions ~ 5 x 10^14.
Even if each point addition takes 10^3 operations, this is still
less work.

I know.  Using actual numbers to dispute a claim is an unfair
way to argue   :-)
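(Sanity-checking that arithmetic in Python; a toy sketch using only the constants quoted above, with no constant factors beyond those shown:)

```python
import math

# NFS estimate for 512-bit RSA: e^(1.92 (ln n)^(1/3) (ln ln n)^(2/3))
n = 2.0 ** 512
ln_n = math.log(n)
rsa_ops = math.exp(1.92 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Pollard-rho estimate for a 97-bit ECDL: sqrt(pi/2 * 2^97) point additions
ec_adds = math.sqrt(math.pi / 2 * 2.0 ** 97)

print(f"512-bit RSA (NFS estimate): {rsa_ops:.2e}")   # about 1.6 x 10^19
print(f"97-bit ECDL (rho estimate): {ec_adds:.2e}")   # about 5 x 10^14
```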


--
Bob Silverman
"You can lead a horse's ass to knowledge, but you can't make him think"


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: msg for Dave Scott
Date: 29 Sep 1999 09:01:10 -0400

In article <[EMAIL PROTECTED]>,
JPeschel <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] (jerome) wrote in part:
>
>>if 'brute force' means to try all the possible keys, DES only needs to try
>>2^55 keys and not 2^56 because of a special property called 'reflexive'
>>or something close. if brute force means something else, please define it.
>>
>>
>
>Brute-force means trying all the possible keys until you find
>the correct key.  When you brute-force a cipher, DES, for instance,
>you are likely to find the correct key after searching through half of
>the possible keys:  2^55 keys.

However, *because* DES has that property (the complementation property:
encrypting the complemented plaintext under the complemented key yields the
complemented ciphertext, so each trial encryption tests two keys), you should
be able to find the proper key in an expected 2^54 operations, yes?
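(A quick Monte-Carlo check of those expected counts; a toy sketch in which a 16-bit keyspace stands in for DES's 56-bit one, not an actual DES cracker:)

```python
import random

def expected_trials(n_keys, pair_trick, samples=2000, seed=0):
    # Average number of trial encryptions to hit a uniformly random key.
    # Without the trick, keys are tested one at a time: ~n/2 on average.
    # With the complementation property, each trial eliminates a key AND
    # its bitwise complement, i.e. a pair per trial: ~n/4 on average.
    rng = random.Random(seed)
    total = 0
    for _ in range(samples):
        target = rng.randrange(n_keys)
        if pair_trick:
            # pairs (0, n-1), (1, n-2), ... are tested together
            total += min(target, n_keys - 1 - target) + 1
        else:
            total += target + 1
    return total / samples

print(expected_trials(65536, False))   # roughly 65536/2
print(expected_trials(65536, True))    # roughly 65536/4
```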

        -kitten

------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Comments on ECC
Date: 29 Sep 1999 08:58:08 -0400

In article <[EMAIL PROTECTED]>, Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
>Patrick Juola wrote:
>> If you assume that we know, for whatever reason, that P=NP, then
>> that knowledge will *give* us the algorithm we need.
>
>No, it won't.  Suppose I tell you that P=NP and for some
>reason you believe that I have a proof.

But I don't.  And I have no reason to believe that you'll be able
to find a non-constructive proof, given that I don't think I've
seen *any* non-constructive proofs in theory of computation.

        -kitten


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Electronic envelopes
Date: Mon, 27 Sep 1999 21:16:29 +0200

Anton Stiglic wrote:
> 
> Assuming Alice deposits the enveloppe to Bob.  The scheme
> you described is possible if Bob can communicate with Alice
> at the moment Bob is to open the enveloppe.  When the content
> in the enveloppe is a bit, this is called a bit commitement scheme.

I wrote in my post that after deposition the sender is assumed to
be no longer available.

M. K. Shen

------------------------------

From: Volker Hetzer <[EMAIL PROTECTED]>
Subject: Re: simple algorithm for hardware device?
Date: Wed, 29 Sep 1999 15:19:10 +0200

Luigi Funes wrote:
> 
> Hi all! I wonder if someone can help me!
> 
> I'm building a high speed hardware encryption/decryption
> device working on a data stream of 16 bit words.
> Data comes in variable-size packets at 40 Mword/sec, and every word
> must be encrypted almost immediately; more exactly, the delay between
> the input and output of every word must be < 5 ns.
What about this:
Run DES in counter mode. Pipeline it properly so that you have no problem
generating a stream of 64-bit values at the required rate.
Then simply XOR the output of that DES PRNG into your input data.
The point is that you get rid of the delay of DES by NOT encrypting
the data but the counter value. The keystream is then already there, waiting
for the data to be XORed with it. This is safe because without the key an
enemy cannot guess the output for the next counter value.
  (If your enemy can guess the plaintext and the counter value (by
  counting the packets), he still has to break DES. That can be done
  today with special hardware in about 20 hours; a Pentium would need many
  years. If that's not safe enough, consider this:
  the counter-mode trick makes ANY block cipher usable for you,
  including 3DES, the AES candidates or Skipjack. Take your pick.)
Take care that the counter never overflows: always change keys before it
does. You might want to consider a 128-bit cipher like the AES candidates.

The only timing requirement is that the DES-Pipeline is filled before the
first data packet arrives. Therefore, the initial filling of the DES-Pipeline
can be done during the power on reset.
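In software the scheme looks roughly like this (a sketch only; SHA-256 of key-plus-counter stands in for the pipelined DES, since any PRF or block cipher slots into counter mode the same way):

```python
import hashlib

def keystream_block(key: bytes, counter: int) -> bytes:
    # Stand-in for the pipelined cipher: encrypt the COUNTER, not the data.
    # SHA-256(key || counter) is used purely for illustration.
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()

def ctr_xor(key: bytes, data: bytes, start_ctr: int = 0) -> bytes:
    # XOR the precomputable keystream into the data.  Because XOR is its
    # own inverse, the same function both encrypts and decrypts.
    out = bytearray()
    ctr, i = start_ctr, 0
    while i < len(data):
        block = keystream_block(key, ctr)
        chunk = data[i:i + len(block)]
        out.extend(d ^ k for d, k in zip(chunk, block))
        i += len(block)
        ctr += 1
    return bytes(out)
```

Note that the keystream blocks depend only on the key and counter, so they can all be computed before the data arrives -- which is exactly why the per-word latency disappears.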

Greetings!
Volker

-- 
Hi! I'm a signature virus! Copy me into your signature file to help me spread!

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compress before Encryption
Date: Wed, 29 Sep 1999 14:55:08 GMT

In article <7sstjs$snq$[EMAIL PROTECTED]>, Tom St Denis <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] wrote:
>> Tom St Denis <[EMAIL PROTECTED]> wrote:
>> :   [EMAIL PROTECTED] wrote:
>>
>> :> Hasn't this holy war gone on long enough?  When do you ever quit?
>>
>> : What I don't get is almost EVERYONE agrees that compression before
>> : encryption is a good idea.  So why is he carrying this on?
>>
>> Perhaps because he has recently developed the only vaguely sensible
>> compression product targeted at encrypted data that anyone knows of?
>>
>> As he mentions, using an unsuitable compression routine may actually
>> weaken the encryption.
>
>But if the compression is 'weak' cryptographically, that means it's bad
>compression-wise as well.
>
>Take LZSS for example (or LZSS+Huffman, which a lot of people use).  You can
>just run it on the data without putting headers or anything.  And ten bucks
>says it will compress 30% better than huffman alone.  Or take DEFLATE, a
>very fine-tuned algorithm.  Since its output is highly efficient, that means
>predicting the output is as easy as predicting its input.
>
>I personally don't see where this comes in.  Obviously encrypting known
>headers is a bad idea.  No matter what compression you use, if you encrypt
>the header you are gonna give out known plaintext, whether you use LZSS,
>DEFLATE or his huffman coder.  Perhaps the most flawed line of thinking
>here: a compression routine doesn't output headers; the program does.  For
>example you can LZSS a 10kb buffer and have zero bytes of headers or anything
>(well, maybe 4 bytes for the compressed size).  Nuff rambling.  Show some
>proof and I will follow
>
>Tom
>
   The problem, Tom, is you're too stupid to understand proof. The point is I
have stated how one can test for "one to one" compression, and you're too damn
lazy to think. I started with adaptive huffman compression because it was the
easiest to make "one to one", but the concept is beyond your small brain.
  I am currently testing a huffman with RLE and a limited LZSS capability.
I have been told that the second may violate patents. As for the other, I am
waiting for the status of a paper.
 Yes, I tried to write a paper for the ACM. I try about once a year to write a
paper, but of course most, like the AES thing, are really closed. The AES group
did not really want good ciphers, since the NSA candidate will obviously win.
The ACM is taking HTML text. Of course I really don't expect to get it
published. I just wish I lived near a friend of mine who used to write. We
could send two papers; he would write both but change the names, and guess
what: the one with my name will not make it. But at least I try, and I know
the game is corrupt.
  And no, I would not have entered scott16u or scott19u for AES; they are too
secure, and they are not what I would have entered, since file security for
transfers was not the real criterion.
  For those of you not following this thread, or for Tommy, since his brain
can't seem to retain anything that a crypto god has hand fed him: if you
use a compression/decompression pair that can treat any file as a
valid compressed file, then it is safe to use on your data before you
encrypt. Most methods, and as far as I can tell that is everything out
there at this time but my adaptive huffman coder (the proof that a static
huffman with a table in front can't do it was posted here earlier), lack this
feature. Why they lack this feature I do not know. But it is easy to test
whether this feature is present. Of course Tommy is too stupid to even test a
method for this.
  The problem with a non-one-to-one compression is this: you may send a file
that is completely unknown, but if it went through a standard compression
routine before encryption, it may be that the only valid compressed file that
could have come out is the one you actually compressed with the bad
compression method. This would not happen if you used a one-to-one compression
in the first place. But Tommy acts like he is too stupid to understand this.
If he is this stupid, I would not have much faith in his program. The game in
crypto is to give no hooks for the attacker to use. That this topic is not a
hot topic in the books makes me think that the NSA does not want people to use
compression for a first pass that would make their job harder. And while we
are at it, a reverse huffman pass is not a bad idea either.
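Here is that test written out (an illustrative Python sketch; zlib stands in for a typical non-"one to one" compressor, and the trivial identity codec for one that passes):

```python
import random
import zlib

def is_one_to_one(compress, decompress, trials=200, max_len=64, seed=1):
    # The test described above: treat arbitrary byte strings as "compressed
    # files", decompress each one, recompress the result, and check that the
    # original bytes come back.  A one-to-one (bijective) pair passes for
    # every input; an ordinary compressor rejects or alters almost all of them.
    rng = random.Random(seed)
    for _ in range(trials):
        blob = bytes(rng.randrange(256)
                     for _ in range(rng.randrange(1, max_len)))
        try:
            if compress(decompress(blob)) != blob:
                return False
        except Exception:
            return False          # not even a valid compressed file
    return True

print(is_one_to_one(lambda b: b, lambda b: b))        # identity codec passes
print(is_one_to_one(zlib.compress, zlib.decompress))  # zlib does not
```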






David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: OTTO <[EMAIL PROTECTED]>
Subject: About differential cryptanalysis....
Date: Wed, 29 Sep 1999 21:34:50 +0800

Hi,

    I want to know more details of differential cryptanalysis.
    Can anyone send me the relevant documents?

    Thanks....


------------------------------

Date: Wed, 29 Sep 1999 08:32:44 -0500
From: Computer Technician <[EMAIL PROTECTED]>
Subject: Cryptic manuscript... Help

Hi All,

I've been trying to think of the name of a manuscript that I'd seen in the
past, and can't for the life of me remember what it was called. So I thought
I'd try here; any help would be greatly appreciated. It was done in a coded
language and was from early times. I believe it started with a V... and was
written in a weird text. I know a lot of people were trying to decode it. It
also had a lot of illustrations in the margins, like plants and other
things. If anyone can help with this, I would be most grateful. Thank you,

Bob
[EMAIL PROTECTED]


------------------------------

From: Robert Harley <[EMAIL PROTECTED]>
Subject: Re: Hardest ever ECDL solved by INRIA researcher and 195 volunteers
Date: 29 Sep 1999 15:19:37 +0200


Bob Silverman <[EMAIL PROTECTED]> writes:
> No!   It is  substantially easier.
>[...]
> I know.  Using actual numbers to dispute a claim is an unfair
> way to argue   :-)

Have you finally gone off the deep end?

Calling something "actual numbers" while completely omitting the
*minor issue* of constant factors is a ridiculous way to argue anything.


If you knew a way to make 97-bit ECDL "substantially easier" than
512-bit factorisation, everybody would be all ears.  But you don't.


Bye,
  Rob.

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compress before Encryption
Date: Wed, 29 Sep 1999 15:02:28 GMT

In article <7st5oh$c9m$[EMAIL PROTECTED]>, [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) 
wrote:
>In article <7sstjs$snq$[EMAIL PROTECTED]>, Tom St Denis <[EMAIL PROTECTED]>
> wrote:
>>In article <[EMAIL PROTECTED]>,
>>  [EMAIL PROTECTED] wrote:
>>> Tom St Denis <[EMAIL PROTECTED]> wrote:
>>> :   [EMAIL PROTECTED] wrote:
>>>
>>> :> Hasn't this holy war gone on long enough?  When do you ever quit?
>>>
>>> : What I don't get is almost EVERYONE agrees that compression before
>>> : encryption is a good idea.  So why is he carrying this on?
>>>
>>> Perhaps because he has recently developed the only vaguely sensible
>>> compression product targeted at encrypted data that anyone knows of?
>>>
>>> As he mentions, using an unsuitable compression routine may actually
>>> weaken the encryption.
>>
>>But if the compression is 'weak' cryptographically, that means it's bad
>>compression-wise as well.
>>
>>Take LZSS for example (or LZSS+Huffman, which a lot of people use).  You can
>>just run it on the data without putting headers or anything.  And ten bucks
>>says it will compress 30% better than huffman alone.  Or take DEFLATE, a
>>very fine-tuned algorithm.  Since its output is highly efficient, that means
>>predicting the output is as easy as predicting its input.
>>

   Folks, again I will ask for the dozenth time. I am convinced there should
be other "one to one" compression programs out there. But removing the known
headers on most is not enough. Has anyone seen any other available
compression/decompression programs that have this feature? And if they
haven't seen any, can anyone (but Tommy) speculate as to why they
are not common, and as to why the phony crypto gods even in their books
give zero space to this topic?





David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: [EMAIL PROTECTED] (John Sager)
Subject: Re: ECDL and distinguished points
Date: 29 Sep 1999 13:43:08 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (jerome) writes:
 : i just read the faq at http://pauillac.inria.fr/~harley/ecdl6/faq.html,
 : written by robert harley, and i think i don't understand the
 : distinguished points.
 :
 : The purpose seems to be to make the centralisation easier by reporting
 : only the distinguished points and not the others. But apparently for
 : ECC2-97, only 1 point in 2^30 is 'distinguished'.
 :
 : 1. does this reduce the efficiency of the algorithm? (at first sight,
 :    reporting only 1 point in 2^30 greatly reduces the possibility of
 :    collisions)
 :
 : 2. can a big computer be much faster than the distributed algorithm
 :    used for ECC2-97?
 :
 : i probably missed something obvious here :)

The factor 2^30 is to some extent arbitrary. 2^29 would take half as
long per point but you would need to store twice as many points (on
average) before you found a match. The work factor hardly changes
over a wide range of values. 2^30 requires about 1 billion iterations
of the client algorithm to find a distinguished point, giving times of
around 1 hour per point for the fastest clients. This is psychologically
useful as contributors can readily see their totals growing. One could
reduce it to, say, 5 minutes but the storage & comparison server has to
have that much more storage capacity for all the extra points stored.
Plus there is all that extra network traffic.

The particular method used is ideal for distribution. There may be
non-distributed algorithms that are faster; I don't know. I would
think that a rather different mathematical approach is required.
Anyway, the 'big computer' which would do it in 40 days doesn't exist.
(I would dare to speculate, not even at the NSA:)
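In miniature, the distinguished-point method looks like this (a toy sketch over a small prime-order multiplicative group, not the actual ECC2-97 client; the curve arithmetic is replaced by modular arithmetic, and d_bits plays the role of the 2^30 threshold):

```python
import random

def rho_dlog_dp(p, g, h, q, d_bits=4, seed=7):
    """Toy distributed rho: find x with g^x = h (mod p), g of prime order q.

    Each 'client' runs a pseudo-random walk z -> z * m_j, tracking (a, b)
    with z = g^a * h^b, but reports z only when its low d_bits are zero
    (a 'distinguished point').  The 'server' stores the reports; a collision
    between two walks at a distinguished point yields x.
    """
    rng = random.Random(seed)
    # Walk definition shared by all clients: 16 fixed multipliers.
    steps = [(rng.randrange(q), rng.randrange(q)) for _ in range(16)]
    mults = [pow(g, s[0], p) * pow(h, s[1], p) % p for s in steps]
    server = {}                        # distinguished point -> (a, b)
    while True:
        # Fresh client walk from a random start.
        a, b = rng.randrange(q), rng.randrange(q)
        z = pow(g, a, p) * pow(h, b, p) % p
        for _ in range(100 * (1 << d_bits)):
            if z % (1 << d_bits) == 0:        # distinguished: report it
                if z in server:
                    a2, b2 = server[z]
                    if (b2 - b) % q:
                        # g^a h^b = g^a2 h^b2  =>  x = (a-a2)/(b2-b) mod q
                        return (a - a2) * pow(b2 - b, -1, q) % q
                server[z] = (a, b)
                break                          # report sent; start over
            j = z % 16
            z = z * mults[j] % p
            a = (a + steps[j][0]) % q
            b = (b + steps[j][1]) % q
```

Raising d_bits gives fewer, longer walks (less traffic and server storage) for essentially the same total work, which is exactly the trade-off described above.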

-- 
John

--
Sorry about the address.
This is me, not BT.

------------------------------

From: Stanley Chow <[EMAIL PROTECTED]>
Crossposted-To: comp.lang.java.security
Subject: How good is java.security.SecureRandom ?
Date: Wed, 29 Sep 1999 10:28:11 -0400

We are doing some Java code and need a good random number generator.
The documentation for the java.security.SecureRandom class seems to
claim pretty good entropy for its seeding (it certainly takes long
enough at it).

Has anyone done/seen any evaluation of the cryptographic strength
of the SecureRandom class? Any pointers are appreciated.

-- 
Stanley Chow              phone: (613) 271-9446  Fax: (613) 271-9447
VP Engineering            email: [EMAIL PROTECTED]
Cloakware Corp.

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Ritter's paper
Date: Wed, 29 Sep 1999 15:10:22 GMT

In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]> 
wrote:
>Terry Ritter wrote:
>> 
>> Whether you think I like to design parameterized families of ciphers
>> mostly depends upon what you call a parameter:  I do argue for the use
>> of scalable ciphers, which would certainly be parameterized in size,
>> but specifically *not* parameterized in ways which would change their
>> operation.  I do point out that many new cipher constructions simply
>> have not been considered by academics, which is a loss for all of us,
>> not just me.
>
>I think that a general parameterized cipher by definition can have
>sizes (block sizes, key lengths, table sizes), round numbers and
>operations (statically or dynamically determined) selected by
>parameter values entered by the users. Limiting parametrization to
>sizes or a size excludes benefits that may accrue from other
>parametrization dimensions. Parametrization delivers at least a part
>of the advantages of using multiple ciphers, for the analyst has to
>figure out the parameter values to attack the cipher effectively,
>i.e. his work load is increased. Parametrization allows a cipher to
>adapt to advances in technology that become available to the analyst,
>e.g. faster processor chips, and thus promises a longer useful
>service life for the cipher. That's why I have never yet understood
>why parametrization is not favoured in the AES project.
>
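(The parametrization dimensions Shen describes, sizes and round counts chosen by the user, are easy to sketch as a generic Feistel family; a toy illustration only, with a SHA-256-based round function standing in for a real design, and certainly not a vetted cipher:)

```python
import hashlib

def feistel(block, key, rounds=8, half_bits=32, decrypt=False):
    # A cipher *family* parameterized by block size (2 * half_bits) and
    # round count.  Any choice of parameters yields a pair of inverse
    # permutations, which is the point of the construction.
    mask = (1 << half_bits) - 1

    def F(i, half):
        # Round function keyed by (key, round index); truncated hash.
        d = hashlib.sha256(key + bytes([i]) + half.to_bytes(16, "big")).digest()
        return int.from_bytes(d, "big") & mask

    L, R = (block >> half_bits) & mask, block & mask
    if decrypt:
        for i in reversed(range(rounds)):
            L, R = R ^ F(i, L), L
    else:
        for i in range(rounds):
            L, R = R, L ^ F(i, R)
    return (L << half_bits) | R
```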



   Mok,
  I thought I would answer this last question for you. The AES contest
is about finding a WEAK method so that it can be used for all encryption
in all applications, even small smart cards. If there was room to allow
parametrization, the NSA would have a harder job of reading your files.
It is best for the NSA to have only one cipher in common use that they
can break, so this country can maintain a fair competitive edge over the
rest of the world. You don't really think we wish to battle the rest of the
world without knowledge of all their secrets and weaknesses, do you?



David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
