Cryptography-Digest Digest #332, Volume #10 Wed, 29 Sep 99 16:13:05 EDT
Contents:
Re: hidden channel in Peekboo (wtshaw)
Re: Please review proposed rebuttal... (Tom Cooper)
Re: newbie ecc (DJohn37050)
Re: SNAKE Web Page ("(me!)")
Re: Compress before Encryption ("karl malbrain")
Re: Increasing password security dramatically without making it harder to remember
(Johnny Bravo)
Re: NEMA, Swiss cipher machine (Mok-Kong Shen)
Re: Need advice for encrypting information (James Felling)
Re: Compress before Encryption (SCOTT19U.ZIP_GUY)
Re: ECDL and distinguished points (Tom Cooper)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: hidden channel in Peekboo
Date: Wed, 29 Sep 1999 12:34:49 -0600
In article <7sst4r$sck$[EMAIL PROTECTED]>, Tom St Denis
<[EMAIL PROTECTED]> wrote:
> In article <[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] (wtshaw) wrote:
>
> > An algorithm with a build-in back-door.....ah, something to please a
> > wayward mind if the source was unavailable or purposefully corrupted.
>
> hmm muhahahahaha... Well if anyone is paranoid they could check the source...
> muhahahah
>
It is an unfortunate condition that someone who just wants to get
encrypted information from point A to B must become a programming guru.
Certain popular algorithms do carry this burden. I would reject those
that carry this condition as being very bad for a practical world, as there
are possible algorithms that do not seem to have such complicated and
obscure problems.
Yes, call demanding geek inspection a form of security by obscurity.
--
Still a good idea from Einstein: If you can't explain something clearly to a child,
you do not understand it well enough.
So much for models of trust, they generally are ill-founded.
------------------------------
From: [EMAIL PROTECTED] (Tom Cooper)
Subject: Re: Please review proposed rebuttal...
Date: Wed, 29 Sep 1999 18:43:24 GMT
On Tue, 28 Sep 1999 17:52:05 GMT, Bob Silverman <[EMAIL PROTECTED]> wrote:
<snip>
>One needs a very large Cray-class machine to deal with the matrix.
>(fairly big bucks, i.e. $5-10 Million)
>
<snip>
Would you please briefly and in broad strokes explain what you have to
do to this matrix that requires a big machine, and what aspect of the
machine needs to be big, e.g. core memory?
Thanks Tom.
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: newbie ecc
Date: 29 Sep 1999 18:49:50 GMT
Check out www.certicom.com for tutorials on ECC, including Java toy examples to
play with. Also check out the ECDSA white paper.
Don Johnson
------------------------------
From: "(me!)" <[EMAIL PROTECTED]>
Subject: Re: SNAKE Web Page
Date: Tue, 28 Sep 1999 15:45:14 +0100
Paul Onions wrote in message
<[EMAIL PROTECTED]>...
>In article <[EMAIL PROTECTED]>, Peter Gunn
><[EMAIL PROTECTED]> wrote:
>> Paul Onions wrote:
>> > Peter Gunn wrote:
>> > >
>> > > I'm wondering if this can be fixed by simply not allowing { 0,1,g }
>> > > for g^Y1 mod f(1,P,R) ??
>> >
>> > Looks like it to me. But disallow those values for both A and B.
>> >
>> > > Perhaps not allow any values J such that g^J < Z ??
>> > > (Z being the minimum f(1,P,R))
>> >
>> > I'd just go for something simple like:-
>> >
>> > 1 < w < p, w != g
>> I think I'll stick to...
>> J < w < p
>> since J is easily calculated (or estimated for g==3 or g==4) and should
>> hopefully make it more difficult to find other special values for w.
>> I will have to explicitly state that w != g though for people who want to
>> use large modulus values (which might be bigger than J).
>> I also forgot to put code in RATTLER to check the values it receives for
>> w... I'll patch that up at the weekend :-)
>
>I think maybe you're misunderstanding things a bit here.
>
>Legitimate parties send values w of the form w = g^x mod p for some x and p,
>but adversaries need not do so. The problem you will have is that, on
>receiving a value w, how do you compute x in order to ensure that x > J ?
>
>The only efficient way I can see to do this is to compute all j <= J and check
>that w != g^j for all j. This either involves successive computations or
>table lookups.
>
>But having said this, you are on the right track because I've just found
>another attack that would defeat my suggestion all the time and yours
>potentially some of the time! (depending on the prime moduli used)
>
>The attack details are below, but the main weakness that is exploited is the
>ability for an adversary to generate x values such that w = g^x < p. In order
>to stop this attack we must make it impossible for an attacker to generate
>such x's for *any* of the prime moduli.
>
>Using your approach this would mean disallowing all J such that g^J < Z,
>where Z is the *maximum* f(1,P,R).
>
>Another approach would be to choose a generator such that g > sqrt(Z), where
>again Z is the *maximum* f(1,P,R). This has the advantage of only needing
>to check w != g, not powers of g.
I think I've got this covered: I'm restricting values for
g^X mod f(k,P,R) such that
g^(g^X mod f(k,P,R)) > Z (Z being the minimum f(k,P,R)).
[ J was the public value in the previous example ]
I don't need to 'crack' the public values, just do a compare...
for instance, when g==4, if I restrict public values to
be > 256 (value, not bits, since 4^256 is > any 512-bit value)
then all powers of g < Z are forbidden.
So the attack cannot happen... (hopefully :-)
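Peter's compare-only check could be sketched as follows (a hypothetical sketch; the function and parameter names are mine, and the 256 threshold is his g==4 example for 512-bit moduli):

```python
def accept_public_value(w: int, g: int, p: int, min_value: int = 256) -> bool:
    """Reject a received DH public value w that could be a small power of g.
    With g == 4 and min_value == 256, any accepted w satisfies
    g^w = 4^w >= 4^257 > 2^512, i.e. larger than any 512-bit modulus,
    so the low-exponent values used in the described attack are screened out."""
    if not (1 < w < p):       # basic range check on the received value
        return False
    if w == g:                # explicitly disallow w == g
        return False
    return w > min_value      # forbid small values: g^w must exceed Z
```

No discrete log needs to be computed; the receiver just compares the value against the threshold, which is the point of the "just do a compare" remark.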
>
>Attack details
>==============
>The adversary impersonates B.
>
>In step (2) she returns S, g^n, g^n, g^n, ... where n is a low exponent, so
>that g^n has a chance of being less than all of the moduli used in this pass.
>
>Now the adversary captures E[K](S) and aborts.
>
>Offline, the adversary can mount a search attack on the weak shared secret P
>as follows.
>
>Guessing P gives the moduli used in the exponentiations. So the adversary
>can now compute (g^X1)^Y1 mod f(1,P,R) = (g^X1)^n mod f(1,P,R), etc.
>
>Since the key is given by (from your modified web-page):-
>
>K=H(U,S,R,P,g^X1 mod f(1,P,R),
> g^X2 mod f(2,P,R),
> g^X3 mod f(3,P,R), ...,
> g^Y1 mod f(1,P,R),
> g^Y2 mod f(2,P,R),
> g^Y3 mod f(3,P,R), ...,
> (g^X1)^Y1 mod f(1,P,R),
> (g^X2)^Y2 mod f(2,P,R),
> (g^X3)^Y3 mod f(3,P,R),...)
>
>the adversary can compute this based on her guess at P and then test this
>out on E[K](S).
>
>Thus eventually finding P.
>
>Regards,
>Paul(o)
PS I've just thought of a new 'coiled' SNAKE where the order of the public DH
values is based on P and R... should add an extra bit or two to the max
password entropy that will be encoded.
PPS I'm out and about just now... apologies if I missed the point... I'll
reread later.
------------------------------
Reply-To: "karl malbrain" <[EMAIL PROTECTED]>
From: "karl malbrain" <[EMAIL PROTECTED]>
Subject: Re: Compress before Encryption
Date: Wed, 29 Sep 1999 11:59:53 -0700
Tom St Denis <[EMAIL PROTECTED]> wrote in message
news:7stkqq$ete$[EMAIL PROTECTED]...
(...)
> Hey Dave, if you have any papers lying around I wouldn't mind reading them.
> I would like to see what ACM/AES turned down (if there really is anything at
> all).
>
> Please drop this insane benign argument. You talk the talk but you never
> walk the walk. You say a lot is weak, why not prove it? Your insane
> arguments are not proof. I would like to see a deterministic process for
> breaking modern cryptosystems like PGP (maybe even Peekboo) that use 'weak'
> deflate code (well peekboo doesn't use compression but I would like to see
> you break a 'kids' cryptosystem, since I am really stupid...).
>
> Anyways, seriously can we drop this? Please? I am begging you... just don't
> reply to this message and NEVER ever start another compress/encrypt thread...
While I would grant your REQUEST (per PROTOCOL), you're going to have to
decide at some point in your life to drop the VULGARITY of IMPUGNING others'
motivation. Especially here, since everything is PRESUPPOSED FREE of CHARGE,
but as yet UNGROUNDED.
Hope you fare better in the future. Karl M
------------------------------
From: [EMAIL PROTECTED] (Johnny Bravo)
Crossposted-To: alt.security.pgp,comp.security.pgp
Subject: Re: Increasing password security dramatically without making it harder to
remember
Date: Tue, 28 Sep 1999 10:11:34 GMT
On Tue, 28 Sep 1999 14:08:51 +0100, "Gary" <[EMAIL PROTECTED]> wrote:
>By iterating the one-way secure hash (say SHA, 160 bits) of the password (say
>originally 40 bits) a fixed number of times that takes about 10 seconds on
>an average machine, the attacker has to spend on average 10 seconds per
>brute-force try. (2^40)*10 seconds for all combinations is a long time
>compared to 2^40 average computing cycles. Remember he will only know if his
>guess is correct if the iterated hash opens the encrypted file.
That's assuming that the attacker is only using one computer as slow as yours.
Your one computer won't crack a DES key in your lifetime; it took tens of
thousands of computers more than a month to crack such a key, yet the
Electronic Frontier Foundation's DES machine did it in less than three days at a
cost of less than $250,000. Imagine how many equivalent machines a large
corporation or a government can afford to build and own.
You have to assume that the attacker knows how many iterations of SHA you are
using and just repeats the hash the correct number of times for each of the 2^40
keys to try them all. Piece of cake compared to a weak 56-bit DES key. You are
better off just using a bigger key in the first place, putting it out of reach
even for the super machines. Rather than add linear time to the key check, add
an exponential increase in the total number of keys.
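The iterated-hash scheme Gary describes (and Johnny is attacking) could be sketched like this; the function name is mine, and a real deployment would also mix in a salt:

```python
import hashlib

def stretch_password(password: bytes, iterations: int) -> bytes:
    """Iterate SHA-1 over the password so that every brute-force guess
    costs the attacker `iterations` hash computations instead of one.
    Johnny's point stands: this multiplies the work linearly, while
    each extra key bit multiplies it by two."""
    digest = hashlib.sha1(password).digest()
    for _ in range(iterations - 1):
        digest = hashlib.sha1(digest).digest()
    return digest
```

The attacker who knows the iteration count simply pays the same per-guess cost, so a 2^40 search stays feasible; a longer password grows the search space exponentially instead.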
Johnny Bravo
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: NEMA, Swiss cipher machine
Date: Wed, 29 Sep 1999 20:58:20 +0200
[EMAIL PROTECTED] wrote:
>
> A vocabulary of _meaningful_ terms is an extremely valuable thing to have
> in a field to make it understandable to people.
Mersch posted a draft of such a dictionary on 21 September. It is, however,
in German.
M. K. Shen
------------------------------
From: James Felling <[EMAIL PROTECTED]>
Subject: Re: Need advice for encrypting information
Date: Wed, 29 Sep 1999 13:47:03 -0500
Tom St Denis wrote:
> In article <[EMAIL PROTECTED]>,
> James Felling <[EMAIL PROTECTED]> wrote:
> l quibble. It is possible that you get lucky and have two messages
> > in a row have the same salt (you are picking randomly, aren't you?). I think
> > what you meant is that the probability of two messages having the same salt+
> > password combo is 1 - (( 1 - (2^(-n)) )^m), where m = number of messages
> > sent. Your n/2 figure sounds like you are expecting a birthday-type
> > coincidence -- actually in theory you could send n messages and never have a
> > duplicate key used -- mind you it is so unlikely as to be ludicrous for any
> > reasonable sized n)
>
> So if I send say 1000 messages I have a 1 - ((1 - 2^-64)^1000) chance of
> seeing the same salt twice?
>
>
Nope, but that's my error.
Here is the math, with an n-bit salt.
Given any pair of random keys, the chance of a collision is 2^(-n).
So the chance that they do not collide is 1 - 2^(-n).
Given a key K and a set S of m other keys, the chance that K is not already
in S is ( 1 - 2^(-n) )^m, so the odds that K is in S is 1 - ( ( 1 - 2^(-n) )^m ).
Now consider the probability, given a set S of m keys, of there being a pair of
identical keys used. There are m(m-1)/2 unique pairs of keys in S
(m^2 possible ordered pairs; m of them are of the form (Ki, Ki), and since
(Ki, Kj) is equivalent to (Kj, Ki), symmetry halves the remainder).
So the probability of being collision-free is ( 1 - 2^(-n) )^( m(m-1)/2 ).
Thus the probability of a pair somewhere in S colliding is
1 - ( ( 1 - 2^(-n) )^( m(m-1)/2 ) ).
In your example the probability of such a collision is vanishingly small.
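The final formula is easy to evaluate numerically. A small sketch (the function is my own; log1p/expm1 are used so the tiny 2^(-n) term is not lost to floating-point rounding when n is large):

```python
import math

def collision_probability(n_bits: int, m: int) -> float:
    """Probability that some pair among m random n-bit salts collides:
    1 - (1 - 2^(-n))^(m(m-1)/2).  log1p/expm1 preserve precision when
    2^(-n) is far below double-precision epsilon (e.g. n = 64)."""
    pairs = m * (m - 1) / 2
    log_no_collision = pairs * math.log1p(-(2.0 ** -n_bits))
    return -math.expm1(log_no_collision)
```

For Tom's example of 1000 messages with a 64-bit salt this gives roughly 2.7e-14, vanishingly small as stated.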
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compress before Encryption
Date: Wed, 29 Sep 1999 20:03:37 GMT
In article <7stkqq$ete$[EMAIL PROTECTED]>, Tom St Denis <[EMAIL PROTECTED]> wrote:
>In article <7st5oh$c9m$[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
>> In article <7sstjs$snq$[EMAIL PROTECTED]>, Tom St Denis
> <[EMAIL PROTECTED]> wrote:
>> >In article <[EMAIL PROTECTED]>,
>> > [EMAIL PROTECTED] wrote:
>> >> Tom St Denis <[EMAIL PROTECTED]> wrote:
>> >> : [EMAIL PROTECTED] wrote:
>> >>
>> >> :> Hasn't this holy war gone on long enough? When do you ever quit?
>> >>
>> >> : What I don't get is almost EVERYONE agrees that compression before
>> >> : encryption is a good idea. So why is he carrying this on?
>> >>
>> >> Perhaps because he has recently developed the only vaugely sensible
>> >> compression product targetted at encrypted data that anyone knows of?
>> >>
>> >> As he mentions, using an unsuitable compression routine may actually
>> >> weaken the encryption.
>> >
>> >But if the compression is 'weak' cryptographically that means it's bad
>> >compressionally as well.
>> >
>> >Take LZSS for example (or LZSS+Huf, which a lot of people use). You can just
>> >run it on the data without putting headers or anything. And ten bucks says
>> >it will compress 30% better than huffman alone. Or DEFLATE for example is a
>> >very fine-tuned algorithm. Since its output is highly efficient, that means
>> >predicting the output is as easy as predicting its input.
>> >
>> >I personally don't see where this comes in. Obviously encrypting known
>> >headers is a bad idea. No matter what compression you use, if you encrypt
>> >the header you are gonna give out known plaintext. Whether you use LZSS,
>> >DEFLATE or his huffman coder. Possibly his most flawed line of thinking is
>> >that a compression routine doesn't output headers. The program does. For
>> >example you can LZSS a 10kb buffer and have zero bytes of headers or anything
>> >(well maybe 4 bytes for the compressed size). Nuff rambling. Show some
>> >proof and I will follow
>> >
>> >Tom
>> >
>> The problem Tom is you're too stupid to understand proof. The point is I have
>> stated how one can test for "one to one" compression and you're too damn lazy
>> to think. I started with adaptive huffman compression because it was the
>> easiest to make "one to one" but the concept is beyond your small brain.
>> I currently am testing a huffman with RLE and a limited LZSS capability.
>> I have been told that the second may violate patents. As for the other, I am
>> waiting for the status of a paper.
>> Yes I tried to write a paper for the ACM. I try about once a year to write a
>> paper but of course most, like the AES thing, are really closed. The AES group
>> did not really want good ciphers since the NSA candidate will obviously win.
>> The ACM is taking html text. Of course I really don't expect to get it
>> published. I just wish I lived near a friend of mine who used to write. We
>> could send two papers; he would write both but change the name, and guess what,
>> the one with my name will not make it. But at least I try and I know the game
>> is corrupt.
>> And brain dead or not, I would not have entered scott16u or scott19u; they are
>> too secure and those are not what I would have entered for AES, since file
>> security for transfers was not the real criterion.
>> For those of you not following this thread, or for Tommy since his brain
>> can't seem to retain anything that a crypto god has hand fed him: if you
>> use a compression/decompression that can treat any file as a
>> valid compressed file, then it is safe to use on your data before you
>> encrypt. If you use methods that lack this feature, and as far as I can tell
>> that is everything out there at this time but my adaptive huffman coder (the
>> proof that a static huffman with a table in front can't be one to one was
>> posted here earlier), you do not get this safety. Why they lack this feature
>> I do not know. But it is easy to test if this feature is present. Of course
>> Tommy is too stupid to even test a method for this.
>> The problem with a non one-to-one compression is that you can send
>> a file that is completely unknown, but if you go through
>> a standard compression routine and then encrypt, it may be possible
>> that the only valid compressed file that can come out is the one you
>> compressed with the bad compression method. This would not happen
>> if you used a one-to-one compression in the first place. But Tommy
>> acts like he is too stupid to understand this. If he is this stupid I would
>> not have much faith in his program. But the game in crypto is not
>> to give hooks for the attacker to use. Why this topic is not a
>> hot topic in the books makes me think that the NSA does not
>> want people to use compression for a first pass that would make
>> their job harder. And while we are at it, a reverse huffman pass
>> is not a bad idea either.
>
>1) First off LZSS is not patented. 2) LZSS can treat anything as valid input
>to produce crap output. Only dictionary methods like LZW can't, because the
>output is codes/literals which could point to empty strings. At any rate it is
>possible to allow any LZ method to decompress anything as long as you work
>around things like null strings... 3) That is not a requirement in your view,
>since even if your magic will decompress anything, IT WILL NOT DECOMPRESS
>ANYTHING INTO VALID ASCII TEXT, which can be used to detect successful
>decryptions. 4) See #2 5) Actually read #2 6) Make sure you understand #2 7)
>Good, let's carry on with life?
Actually if you would get off your ass and check, I have some "one to one"
conditional huffman compressors that will compress to a binary file and
then only decompress to the set of symbols of one's choice. But you
would have to get off your ass to check. It may be possible that there
is a form of LZSS that is one to one. But I have tested many routines
and have not found any. If you know of one, can you point to source code
and an executable that does such a thing? It is what I have asked for dozens
of times. You say it is there, but where, ASSHOLE?
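The "easy to test" claim about the one-to-one property can be sketched as a round-trip harness (my own code, not Scott's; zlib stands in here for a DEFLATE-style compressor):

```python
import os
import zlib

def is_one_to_one(decompress, compress, trials: int = 20, size: int = 64) -> bool:
    """Empirical test of the 'one to one' property: every byte string must
    decompress without error, and recompressing the result must reproduce
    the original bytes exactly.  A single failure disproves the property;
    passing all trials is only evidence, not proof."""
    for _ in range(trials):
        blob = os.urandom(size)
        try:
            plain = decompress(blob)
        except Exception:
            return False      # not every file is a valid compressed file
        if compress(plain) != blob:
            return False      # decompress-then-compress is not the identity
    return True
```

zlib fails on the first trial with overwhelming probability (random bytes are almost never a valid DEFLATE stream), which illustrates the complaint about standard compressors: a decryption guess that doesn't decompress can be rejected immediately.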
>
>Hey Dave, if you have any papers lying around I wouldn't mind reading them.
>I would like to see what ACM/AES turned down (if there really is anything at
>all).
I don't think you can read.
>
>Please drop this insane benign argument. You talk the talk but you never
>walk the walk. You say a lot is weak, why not prove it? Your insane
>arguments are not proof. I would like to see a deterministic process for
>breaking modern cryptosystems like PGP (maybe even Peekboo) that use 'weak'
>deflate code (well peekboo doesn't use compression but I would like to see
>you break a 'kids' cryptosystem, since I am really stupid...).
>
You don't know the meaning of the word proof. You're just an obnoxious
creep who does not know anything. You comment on everything but you
don't know shit.
>Anyways, seriously can we drop this? Please? I am begging you... just don't
>reply to this message and NEVER ever start another compress/encrypt thread...
>
>Tom
Tell you what, ASSHOLE: you drop the insane rantings. You're the one
full of crap. I can start a compress/encrypt thread any time I like;
just because you're too stupid to understand the concept does not mean
everyone else is.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
http://members.xoom.com/ecil/index.htm
NOTE EMAIL address is for SPAMERS
------------------------------
From: [EMAIL PROTECTED] (Tom Cooper)
Subject: Re: ECDL and distinguished points
Date: Wed, 29 Sep 1999 18:58:12 GMT
On Wed, 29 Sep 1999 01:17:00 GMT, [EMAIL PROTECTED] (jerome) wrote:
>I just read the FAQ at http://pauillac.inria.fr/~harley/ecdl6/faq.html
>written by Robert Harley, and I think I don't understand the
>distinguished points.
>
>The purpose seems to be to make the centralisation easier by reporting only
>the distinguished points and not the others. But apparently for
>ECC2-97, only 1 point in 2^30 is 'distinguished'.
>
>1. does this reduce the efficiency of the algorithm ? (at first sight,
> to report only 1 point in 2^30 greatly reduces the possibility of
> collisions)
>
As I understood the page, the efficiency of the algorithm is not
reduced significantly. If two computers find a common point then they
will find a common distinguished point soon after. I suppose the
algorithm is mostly deterministic for finding successive points, but
has random jumps after a distinguished point is found. I may have
misunderstood. Tom.
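The 1-in-2^30 criterion Jerome quotes is typically just a bit test on a point coordinate (a hypothetical sketch of the idea, not Harley's actual code):

```python
def is_distinguished(x_coord: int, zero_bits: int = 30) -> bool:
    """A point is 'distinguished' when the low `zero_bits` bits of its
    x-coordinate are all zero; a random point qualifies with probability
    2^(-zero_bits), i.e. about 1 in 2^30 for ECC2-97 as described."""
    return x_coord & ((1 << zero_bits) - 1) == 0
```

Because the walk from one point to the next is deterministic, two walks that ever land on a common point thereafter produce identical successors and so reach the same next distinguished point, which is why reporting only 1 in 2^30 points loses almost no collisions.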
<snip>
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************