Cryptography-Digest Digest #136, Volume #10      Sun, 29 Aug 99 17:13:04 EDT

Contents:
  Re: NEW THREAD on compression (SCOTT19U.ZIP_GUY)
  Re: NEW THREAD on compression (Mok-Kong Shen)
  Re: RC4 question (Paul Rubin)
  Re: 512 bit number factored (Paul Rubin)
  On employing message-decoys (Mok-Kong Shen)
  Re: Key Establishment Protocols - free chapter from Handbook of Applied Cryptography (Paul Rubin)
  Re: Q: Cross-covariance of independent RN sequences in practice
  Re: NEW THREAD on compression
  Re: Will CIA be an actor of end-times ? ("collomb")
  Re: WT Shaw temporarily sidelined (Jim Dunnett)
  Re: SHA-1 test vectors (Tomas)
  What if RSA / factoring really breaks? ("David J Whalen-Robinson")
  Re: Fooling Key Escrow ("Douglas A. Gwyn")
  Re: What if RSA / factoring really breaks? (David A Molnar)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: NEW THREAD on compression
Date: Sun, 29 Aug 1999 15:55:09 GMT

In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]> 
wrote:
>SCOTT19U.ZIP_GUY wrote:
>> 
>
>>   if I rewrite this in hex it is easier for me
>> 
>>    FF compresses to 00 and 00 decompresses back to FF
>>    FF decompresses to 00 and 00 compresses back to FF
>..........
>
>Simply because of my curiosity/laziness:
>
>Could you also test the following with your new code?
>
>    C1 = 00100100 11111111
>
>    C2 = 00100100 11111111 11111111
>
>It would be desirable that some sort of correctness proof of your
>modifications to the Huffman algorithm be available. That would
>help, I think, in finding the answer to your question of determining
>the best one-to-one algorithm (delivering the shortest sequence).
>
>M. K. Shen

  I doubt I can prove it to you. We seem to have a language problem.
I stated in great detail a summary of the rules I used for ending the
Huffman compression on an 8-bit boundary. I think I took care of every
case. If you can think of a case that does not work, let me know.
 You can construct problems of the form using values for the
current state of the table and the cutoff, and just show the last few bytes
or Huffman tokens, and I can step through using only the rules I
last sent you. Since I was a little harsh in the beginning and never
thought we would get this far, I have done the examples above.

 24 FF compresses to DB 00, and that decompresses back to 24 FF
 24 FF decompresses to DB such that there are 9 bytes the same, and that
compresses back to 24 FF

 24 FF FF compresses to DB 00 A0, and that decompresses back to 24 FF FF
24 FF FF decompresses to DB .. for 17 such bytes, and that compresses back to 24 FF FF

If you want, I can write a short DOS batch program so that you can do things
like:
xxx.bat  file1.xxx
which does all 4 sets of compression/decompression for you,
and supply xdump and xundump executables so that you
can generate your own examples to test this.

 I am as much interested in one-to-one compression as encryption,
and am playing with lossy text compression. It is not what you
think. What happens is that you can compress text without headers,
as in "one to one", but you use several different one-to-one compressors
and only use the one that compresses to the smallest file before you
encrypt. If one uses text, this would be attractive. But if one is using
high-entropy files, it would not be. What this forces your recipient to do
is to run several decompression routines and use his "eyeball" to
detect which is the correct message. In the unlikely event that more
than one method could actually decompress to text, he can recompress
by the methods used to compress and see if they lead to the same file.
It is possible the one in error may be shorter by one of the other methods,
so it could be tossed out. But it is lossy, and there is a very, very small
chance that the sender's message is lost due to collisions of two text
files compressing to the same code, though in practice this would be highly
unlikely. This also complicates the codebreaker's job, since when the
wrong key is guessed, he would have to do a whole series of decompressions
to rule out the key.
 Such variable-length Huffman compression can be seen just using the
two methods I have at my site, h2com and h3com. You can compress
with both methods, then reverse the file and compress with both methods again.
However, you can also focus the Huffman code by making small modifications
to the Huffman table when you write using h2com. I will release this
code soon. The h2com and h2coma methods compress to the
same length, but since the symbols are not really a nice power of two, you
can make it so the imbalances compress differently under the next Huffman
compression after the file is reversed. This makes it very hard to decompress
the file from a small recovered fragment of the file.
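
To make the scheme concrete, here is a minimal Python sketch of the
pick-the-smallest / try-them-all structure. The compressors are stand-ins
(zlib and bz2 carry headers and are not one to one, unlike h2com/h3com);
only the shape of the idea is shown:

    import bz2
    import zlib

    # Stand-in compressors; a real deployment would use headerless
    # one-to-one methods so the ciphertext betrays nothing.
    METHODS = {
        "zlib": (zlib.compress, zlib.decompress),
        "bz2":  (bz2.compress, bz2.decompress),
    }

    def compress_smallest(data: bytes) -> bytes:
        # Try every method and keep only the smallest output; no field
        # saying which method was used is transmitted.
        return min((comp(data) for comp, _ in METHODS.values()), key=len)

    def candidate_plaintexts(blob: bytes):
        # The recipient runs every decompressor.  Wrong ones usually fail;
        # the survivors are judged "by eyeball" (is it readable text?).
        for name, (_, decomp) in METHODS.items():
            try:
                yield name, decomp(blob)
            except Exception:
                pass  # this method could not have produced the blob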

Take Care

David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE: EMAIL address is for SPAMMERS

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: NEW THREAD on compression
Date: Sun, 29 Aug 1999 18:10:12 +0200

[EMAIL PROTECTED] wrote:
> 
> SCOTT19U.ZIP_GUY ([EMAIL PROTECTED]) wrote:
> : At least if you always have a file that
> : decompresses the attacker does not know for sure that you did not send
> : a binary file.
> 
> That is a valid point, but there is a flaw in your approach. There's
> always the chance that somewhere in the decrypted message there will be a
> string of too many zeroes in a row. Or, there might not be enough zeroes
> at the end of the message, causing it to end in the middle of a symbol.

Your comment above shows (independently of the point you addressed) a
trouble with my scheme. My scheme indeed fails if the file to be
decompressed (the outcome of decryption with a wrong key) contains a
sufficiently long sequence of 0's. (The chance of this occurring
might be argued to be small, but it is certainly real.)

Since Scott's scheme, if I don't err, also excludes certain
bit sequences from being valid Huffman codes, I am still unclear
how his program behaves if fed a file containing such a bit
sequence, say, at the very beginning of the file. Perhaps Mr. Scott
would kindly say something about this in plain English, explaining
the program logic for handling such situations.

On the other hand, on the assumption that a certain small defect
described below is acceptable, I would like to propose the following
modification of my scheme:

(1) Encode with the normal Huffman algorithm.

(2) If the last bit output is not at a byte boundary (or word boundary,
    if output is to be in whole words), fill with 0's to the byte/word
    boundary.

(3) Delete trailing zero bytes/words.

(4) On decompressing, if the buffer is not empty at end of file,
    append as many 0's as are necessary to form a valid Huffman
    code.

This evidently suffices to achieve Scott's 'one to one' property.
The aforesaid defect is that the legitimate communication partner
may, under some circumstances, obtain on decompressing some spurious
symbols (corresponding to the Huffman code of all 0's) at the end of the
message. If the message is a natural-language text, the recipient
should normally be able to recognize these spurious symbols. If the
message is arbitrary binary, a message length needs to be transmitted
to the communication partner. (In favourable cases, where for whole-word
transmission the spurious symbols occupy less than a full word,
the length information is not necessary.) So the scheme is not
perfect but seems to be usable in practice nevertheless.
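
As a concrete illustration of steps (1)-(4), here is a small Python
sketch; the four-symbol code table is invented for the example:

    CODE   = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
    DECODE = {v: k for k, v in CODE.items()}

    def compress(text: str) -> bytes:
        bits = ''.join(CODE[ch] for ch in text)        # step (1): Huffman encode
        bits += '0' * (-len(bits) % 8)                 # step (2): pad to a byte
        out = bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))
        return out.rstrip(b'\x00')                     # step (3): drop zero bytes

    def decompress(data: bytes) -> str:
        bits = ''.join(format(byte, '08b') for byte in data)
        out, buf = [], ''
        for bit in bits:
            buf += bit
            if buf in DECODE:
                out.append(DECODE[buf]); buf = ''
        while buf:                                     # step (4): append 0's until
            buf += '0'                                 # a valid code completes
            if buf in DECODE:
                out.append(DECODE[buf]); buf = ''
        return ''.join(out)

    # compress('b') == b'\x80', but decompress(b'\x80') == 'baaaaaa': the
    # spurious trailing 'a' symbols (the all-0 code) described above.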

> Ensure that the Huffman code in use contains at least one symbol as 
> long as eight bits.

> After the message is compressed, note how many bits remain in the last
> byte. Pad those bits by filling them with the start of a symbol that 
> is at least one bit longer than the remaining bits.

I am afraid that this doesn't work for the case where the file is
obtained through decryption with a wrong key, for one doesn't have
control of the bits at these filling positions.

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: RC4 question
Date: 29 Aug 1999 17:56:02 GMT

Red_Blue  <[EMAIL PROTECTED]> wrote:
>As I understand it, current US legislation allows export of RC4 tech of any
>key length for banks in EU countries.

Yes, this is correct.  You can find more info at www.verisign.com in
the pages about Global Server ID's.  Note that older browsers (Netscape
3.x and earlier, and MSIE 3.x unless you install a special service pack)
don't support GSID's so you'll still get 40 bit cryptography with them.

GSID's are much more expensive than ordinary certificates and you
have to jump through some bureaucratic hoops to get them.  The bank
probably wouldn't notice the extra cost, but the paperwork might be
more hassle than they want to deal with.

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: 512 bit number factored
Date: 29 Aug 1999 17:53:00 GMT

In article <7qblse$[EMAIL PROTECTED]>,  <[EMAIL PROTECTED]> wrote:
>>>But I'm curious about the assertion that 95% of the keys used
>>>are 512 bit keys.  Admittedly the sample is small, but my PGP keyring
>>
>>PGP is NOT the primary method to "secure electronic commerce". Those are
>>proprietary schemes used by banks, etc.

I had the impression that the 95% referred to SSL secret keys used for
securing e-commerce web sites.  I've looked at a lot of SSL
certificates in the past few months and most of them correspond to
1024 bit keys (though some are 512 bits).  It could be that a few
years ago, 95% of them used 512 bits, but SSL certificates issued by
the commercial CA's normally expire one year from issue, and until
recently they weren't renewable.  Normally, when one expired, you'd
generate a new key and get its certificate signed.  Secure server
products I've looked at these days generally use 1024 bits as
the default length.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: On employing message-decoys
Date: Sun, 29 Aug 1999 19:59:59 +0200

On the assumption that absolute security is neither attainable nor
(often) necessary/affordable, that no adversary is omnipotent (having
infinite resources) and that no real-life message needs to be kept 
secret till eternity, let's consider the following scenario:

Alice wants to send a message to Bob. The content of the message
needs to be kept secret for only 24 hours (e.g. matters concerning
decision making in commercial negotiations), and she estimates that
this time frame is just within the capability of the adversary. She
sends, in addition to the true message, 9 dummy messages with random
or bogus contents, employing different keys. Doesn't this increase
the security of her message tenfold, because the chance of analysis
is reduced to 1/10 of the original? If she sends 99 dummy messages,
doesn't she have a legitimate hope that the adversary would probably
give up?

I believe that employing message-decoys is a fairly valuable means
of achieving enhanced information security in practice, since higher
message transmission cost is nowadays barely a matter of concern for
material requiring some nontrivial protection. Certainly the scheme
can be subsumed under what has long been known in cryptology as the
'busy channel'. But I think that this special case probably does
deserve separate mention to the common users of encryption
algorithms.
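
A minimal sketch of the idea in Python, assuming some encrypt(key, msg)
primitive (a hypothetical stand-in for whatever cipher Alice actually
uses):

    import secrets

    def encrypt(key: bytes, msg: bytes) -> bytes:
        raise NotImplementedError  # stand-in for the cipher actually in use

    def send_with_decoys(real_msg: bytes, real_key: bytes, n: int = 10) -> list:
        # One true message hidden among n-1 dummies of similar length,
        # each dummy encrypted under its own random throwaway key.
        batch = [encrypt(real_key, real_msg)]
        for _ in range(n - 1):
            dummy = secrets.token_bytes(len(real_msg))   # random/bogus contents
            batch.append(encrypt(secrets.token_bytes(32), dummy))
        secrets.SystemRandom().shuffle(batch)  # position must reveal nothing
        return batch  # the adversary now has n messages to analyse, not 1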

M. K. Shen
=========================
http://home.t-online.de/home/mok-kong.shen   (new addr.)

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: Key Establishment Protocols - free chapter from Handbook of Applied Cryptography
Date: 29 Aug 1999 18:21:09 GMT

In article <7qb9nt$psl$[EMAIL PROTECTED]>,
Alfred John Menezes <[EMAIL PROTECTED]> wrote:
>The publisher cannot monitor day-to-day sales since most sales are
>made through third parties such as amazon.com or Springer-Verlag in 
>Europe. They have, however, noticed that monthly sales have stayed
>roughly steady for the past 12 months, which they think is a good sign
>for a book that is in its third year of life. 

It's quite easy to monitor day to day sales on Amazon.  Amazon
displays the sales ranking for every book (4057th or whatever) and
it's updated several times a day.  For a book like HAC, even one
or two copies being sold in a day would cause a blip in the ranking.

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: Q: Cross-covariance of independent RN sequences in practice
Date: 29 Aug 99 17:58:56 GMT

Mok-Kong Shen ([EMAIL PROTECTED]) wrote:
: Because of imperfection in this world, e.g. impossibility of 
: making objects of exact sizes or attaining the temperature of absolute 
: zero, I suppose that there is a certain not too small lower bound of 
: the (average) value of the cross-covariance obtainable in practice. 

Since independent random sequences can be made at widely separated places,
while many things have lower bounds due to imperfection, this is not one
of the stronger examples.

You would be surprised at how many decibels of separation are possible
between the left channel of my stereo system, and the right channel of
somebody else's stereo system in Nebraska.

John Savard

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: NEW THREAD on compression
Date: 29 Aug 99 18:02:20 GMT

SCOTT19U.ZIP_GUY ([EMAIL PROTECTED]) wrote:
:  By the way, what form of chaining are you presently claiming that PGP uses?
: Your site has stated that it was CBC; the code I saw did not use this. Are
: you ever going to double-check, or do you care?

PGP version 2 point whatever claimed in its documentation to use CBC.
Countless people have looked at its code, have they not?

John Savard

------------------------------

From: "collomb" <[EMAIL PROTECTED]>
Subject: Re: Will CIA be an actor of end-times ?
Date: 29 Aug 1999 18:26:13 GMT

Good morning, Derek.
I received your message. I would have preferred a precise quotation
of Sanborn drawn from a newspaper.
I have a question to ask you, since you know the Vigenere square
very well. His square is a square. Then why is the <square> of
Vigenere-Sanborn not a square? Why did Sanborn add, on the right side, the
columns A,B,C,D, which seem too many? And why add a line at the bottom,
A,B,C... Z, which also seems superfluous?
I tackled the problem of Kryptos in a <structuralist> way of thinking,
and this is why I have assigned a real square to the first
series, while the second series is joined to this first
square as the image of the transformed square of Sanborn-Vigenere.
In my solution (see again, please, my website) the 97 characters are solved
WITH the 4 other series. The message of Sanborn emanates from a totality
and cannot, in my opinion, be deciphered piece by piece, leaving aside the
97 characters.
On July 20, 1999, a certain [EMAIL PROTECTED] wrote on this
newsgroup sci.crypt:
<Congratulations, I think you actually have cracked it! Your whole
explanation makes perfect sense.>
Mario Lyken has disappeared, his e-mail address is erroneous, and his
contribution has faded away from www.deja.com.
Do you know Mario Lyken?
Regards
Website: http://calvaweb.calvacom.fr/collomb/
[EMAIL PROTECTED]


------------------------------

From: [EMAIL PROTECTED] (Jim Dunnett)
Subject: Re: WT Shaw temporarily sidelined
Date: Sun, 29 Aug 1999 18:00:27 GMT
Reply-To: Jim Dunnett

On Sat, 28 Aug 1999 19:57:15 GMT, [EMAIL PROTECTED] (don garrisan) wrote:

>Bill has asked me to let you guys know that he will be off line for a
>short while.  Currently in the hospital, he will be back in touch with
>the group as physical progress, time and laptop make it possible.
>
>In the mean time, hang in there........that is what he is doing.

Please convey my best wishes for a speedy recovery to him.

-- 
Regards, Jim.                  | We have decided not to go to France
amadeus%netcomuk.co.uk         | this summer as it is too full of
dynastic%cwcom.net             | unattractive, shirtless Englishmen
                               | talking into mobile 'phones.
PGP Key: pgpkeys.mit.edu:11371 | -  Auberon Waugh.

------------------------------

From: [EMAIL PROTECTED] (Tomas)
Subject: Re: SHA-1 test vectors
Date: 29 Aug 1999 19:08:00 GMT


Thanks to everyone for the suggestions. I have been working on an assembler 
(NASM) implementation of SHA-1, and I have got it working now. It is quite 
fast (about 2x faster than my C implementation), and I would like to make it 
available, but the server does not seem to accept attachments ...
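
For anyone wanting to check an implementation, the test vectors from the
SHA-1 standard (FIPS 180-1) can be verified in a few lines; this sketch
uses Python's built-in hashlib as the reference:

    import hashlib

    # The three test vectors given in FIPS 180-1.
    VECTORS = [
        (b"abc",
         "a9993e364706816aba3e25717850c26c9cd0d89d"),
        (b"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
         "84983e441c3bd26ebaae4aa1f95129e5e54670f1"),
        (b"a" * 1000000,
         "34aa973cd4c4daa4f61eeb2bdbad27316534016f"),
    ]

    for msg, expected in VECTORS:
        assert hashlib.sha1(msg).hexdigest() == expected
    print("all SHA-1 test vectors match")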

Tomas

------------------------------

From: "David J Whalen-Robinson" <[EMAIL PROTECTED]>
Crossposted-To: alt.math,sci.math
Subject: What if RSA / factoring really breaks?
Date: Sun, 29 Aug 1999 15:38:19 -0400

I would like to start a thread to discuss this HYPOTHETICAL situation.
(Don't panic, this is completely made up, but nonetheless possible given
the number of researchers working on improving factoring around the world
in universities, corporations and governments.)

Following an event where a new factoring technology was announced,
which demonstrated that large numbers could be factored in constant
or log time (or any time that would make factoring a "fast" problem):

1.  What cryptographic techniques would fall prey to the new fast method?

2. How would the world react?  What cryptographic techniques would the world
    move to?

(continue those last 2 on sci.crypt only if possible)

3. More generally, what other mathematical problems and algorithms would
    "break", such that modifications to them, or new algorithms in their
    place, could change the speed barriers for the problems those
    algorithms solve?
    What would this contribute to the NP vs. P problem?

4. If somebody reading this group found such a method, how should it be
    released? (Obviously you can't just release it; every cracker would
    have an info-looting spree before anybody could react.)
    How long would it take to move away from "broken" cryptographic methods?
    How long would it take to reap the benefits of new, faster algorithms
    for solving problems?

5. Finally, what would the next big math problem be to beat?

This should be an interesting hypothetical discussion, as factoring
technology continues to be worked on around the world. I think that we
could learn a lot from this thread too. I'm not well versed on several
of these points; maybe if we all put our best ideas into answering them,
we can come up with pretty good answers to all of them.



------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Fooling Key Escrow
Date: Sat, 28 Aug 1999 21:11:00 GMT

Gary wrote:
> Are there cryptographic systems that can produce decoy keys for
> key-escrow that yield a decoy message?

It absolutely doesn't matter.  "The jig is up" with key escrow
whenever the authority applies the escrowed keys to the ciphertext.
All a Bad Guy has to do is to use his own, secure encryption
*within* the key-escrowed envelope.  Nothing will be suspected
until it is too late.

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: What if RSA / factoring really breaks?
Date: 29 Aug 1999 20:23:07 GMT

In sci.crypt David J Whalen-Robinson <[EMAIL PROTECTED]> wrote:
> which demonstrated that large numbers could be factored in constant
> or log time, (or any time that would make factoring a "fast" problem),

> 1.  What cryptographic techniques would fall prey to the new fast method?

Everything which can be reduced to the RSA problem or factoring.
Taking n'th roots, for example -- this will break the Blum-Blum-Shub
generator and the Feige-Fiat-Shamir signature scheme just to start.
All RSA padding techniques, too.
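
To make one of these reductions concrete, here is a toy Blum-Blum-Shub
generator in Python; the primes are tiny and for illustration only, and
the point is that its security stands or falls with the hardness of
factoring n:

    # Toy Blum-Blum-Shub: x_{i+1} = x_i^2 mod n, output the low bit each step.
    p, q = 499, 547            # toy Blum primes (both congruent to 3 mod 4)
    n = p * q                  # public modulus; factoring it breaks everything

    def bbs_bits(seed: int, count: int):
        x = (seed * seed) % n  # square the seed to land on a quadratic residue
        for _ in range(count):
            x = (x * x) % n
            yield x & 1

    # Anyone who can factor n can take square roots mod n, run the generator
    # backwards from any observed state, and so predict every output bit.
    print(list(bbs_bits(seed=1234, count=16)))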

> 2. How would the world react?  What cryptographic techniques would the world
>     move to?

I don't know - maybe stop all e-commerce for a day? It would not be pretty.
What techniques the world moves to depends on _how_ it's broken. Presumably
any radically new idea which makes factoring fast may have other
applications. 

Discrete log could be unaffected, though. As far as I know, there's no
reduction from one to the other. If there is, then the rest of this post
is wrong. 

> (continue those last 2 on sci.crypt only if possible)

> 3. More generally, what other mathematical problems and algorithms would
>     "break", such that modifications to them, or new algorithms in their
>     place, could change the speed barriers for the problems those
>     algorithms solve?


Anything reducible to factoring becomes "fast". Others depend on whether
the idea of the algorithm can be applied to them.

>      What would this contribute to the NP vs. P problem?

It shows us that a problem not known to be NP-complete is in P. So unless
factoring can be shown to be NP-complete, I don't know of any more general
implications. Again, whatever idea solves factoring may be more applicable
elsewhere.

> 4. If somebody reading this group found such a method, how should it be
> released? (Obviously you can't just release it; every cracker would
> have an info-looting spree before anybody could react.)

You're assuming that it will be released as an implementation? I'd think
that it would be announced at a conference like CRYPTO in paper form.
There would be some time involved in properly implementing it from such a
paper. Perhaps not enough time.

 Maybe the "right thing" is to do what Cryptography Research is said to
have done with differential power analysis: release results to the
industry a year or so before publishing. Let them develop alternatives.
Then release the details to the public. 

>     How long would it take to move away from "broken" cryptographic methods?

Every recent copy of an SSL-enabled browser supports DSS/DH key exchange,
right? So for web sites it's a matter of switching to that. There would
probably be problems with reconstructing the CA certificates if the CA had
an RSA private key, since you would have to assume it broken. :-)

Assuming that discrete log /Elgamal isn't affected, the alternatives
exist. Ironically, because of patent issues, people have already been
moving away from RSA to Elgamal type systems. The joke will be on us if
discrete log is easy and factoring is hard.

In my case, I'm setting up an Apache+OpenSSL server with a DSS/DH cert
because all of the commercial servers cost too much. Plus, RSAREF isn't
available from RSA anymore; is its license still valid? (And why isn't it?)

>     How long would it take to reap the benefits of new faster algorithms for
> solving problems?

I can't speculate without seeing the solution first. Maybe someone else
could look at likely areas for a breakthrough and extrapolate.

> 5. Finally, what would the next big math problem be to beat?

Discrete log...if you're in crypto. 

-David

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
