Cryptography-Digest Digest #902, Volume #10      Fri, 14 Jan 00 05:13:01 EST

Contents:
  Re: ARC4 discussion? ("Scott Fluhrer")
  Re: New Crypto Export Regs (David Hopwood)
  Re: Triple-DES and NSA??? (anonymous)
  Re: "1:1 adaptive huffman compression" doesn't work (SCOTT19U.ZIP_GUY)
  Re: AES & satellite example (Terry Ritter)
  Re: Blum, Blum, Shub generator (Terry Ritter)
  Cryptography Exporting? (NFN NMI L.)
  Re: Triple-DES and NSA??? (NFN NMI L.)
  Ciphers for Parallel Computers (John Savard)
  Re: Random numbers generator (Terje Elde)
  Re: Cryptography in Tom Clancy (Terje Elde)
  Re: LSFR (Scott Nelson)
  Re: LSFR (Scott Nelson)
  Re: Random numbers generator (Scott Nelson)
  Re: "1:1 adaptive huffman compression" doesn't work (Mok-Kong Shen)
  Re: "1:1 adaptive huffman compression" doesn't work (Mok-Kong Shen)
  Re: "1:1 adaptive huffman compression" doesn't work (Mok-Kong Shen)
  Re: Ciphers for Parallel Computers (Mok-Kong Shen)

----------------------------------------------------------------------------

From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: ARC4 discussion?
Date: Thu, 13 Jan 2000 20:36:17 -0000

<[EMAIL PROTECTED]> wrote in message
news:85m1tc$b51$[EMAIL PROTECTED]...
> Would it be appropriate, in this NG, to
> discuss details of the ARC4 algorithm ?

Absolutely.  What do you want to know?
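
For anyone joining the thread, here is a minimal C sketch of the alleged
RC4 ("ARC4") algorithm as it has circulated on Usenet since 1994; the key
and plaintext below are arbitrary placeholders, not anyone's test data:

#include <stdio.h>

/* Alleged RC4 ("ARC4"): key-scheduling algorithm (KSA) plus the
 * pseudo-random generation algorithm (PRGA).  Illustrative only. */

static unsigned char S[256];

static void ksa(const unsigned char *key, int keylen)
{
    int i, j = 0;
    unsigned char t;

    for (i = 0; i < 256; i++)
        S[i] = (unsigned char)i;
    for (i = 0; i < 256; i++) {
        j = (j + S[i] + key[i % keylen]) & 255;
        t = S[i]; S[i] = S[j]; S[j] = t;        /* swap */
    }
}

/* Returns the next keystream byte; XOR it with plaintext to encrypt. */
static unsigned char prga(void)
{
    static int i = 0, j = 0;
    unsigned char t;

    i = (i + 1) & 255;
    j = (j + S[i]) & 255;
    t = S[i]; S[i] = S[j]; S[j] = t;            /* swap */
    return S[(S[i] + S[j]) & 255];
}

int main(void)
{
    const unsigned char key[] = "Key";          /* placeholder key */
    const char *pt = "Plaintext";               /* placeholder message */
    int n;

    ksa(key, (int)(sizeof key) - 1);
    for (n = 0; pt[n] != '\0'; n++)
        printf("%02X", (unsigned char)(pt[n] ^ prga()));
    printf("\n");
    return 0;
}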

--
poncho




------------------------------

Date: Fri, 14 Jan 2000 04:37:33 +0000
From: David Hopwood <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: New Crypto Export Regs

=====BEGIN PGP SIGNED MESSAGE=====

"John E. Kuslich" wrote:
> 
> Anyone know where to find the "New" crypto export rules released today??
> 
> I looked at the BXA site and they seem to have not heard about crypto.

http://www.cdt.org/crypto/admin/000110cryptoregs.shtml

- -- 
David Hopwood <[EMAIL PROTECTED]>
PGP public key: http://www.users.zetnet.co.uk/hopwood/public.asc
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5  0F 69 8C D4 FA 66 15 01

"Attempts to control the use of encryption technology are wrong in principle,
unworkable in practice, and damaging to the long-term economic value of the
information networks."  -- UK Labour Party pre-election policy document


=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv

iQEVAwUBOH6n8TkCAxeYt5gVAQG7uggAuWx34rL11XHPXBHav5uv0KREl8KbAifH
fWHdG8a4xdf5cdl2nPvEbFPdzPgBnNig22R7sl8VpBtbeVwuOnfj41KD7RG8g5M8
+0lso7Sfi6rl8Rq0U9/FjVvb098AJ6lz/7WVVe6yJRiQN46m0TXkK9j6FX7ttBW6
ka3nSBMtd9d7ewbUgkqh00/3+zdbUxGtUt7PAYuCjczqnzcrXRD1PTTw1ioPOBDg
j0lmeyh1uO9UyoNAYFEnMnAp+NSpNXKbFk6ZAF3xkLoiIncR5Sdwf6qeYnXK63zI
EgvJd1xSAvy6oqKYWuu3H/o/Ivl2BdyaLHYTwta9hfB2JCHGNIdgPw==
=KOIW
=====END PGP SIGNATURE=====


------------------------------

From: anonymous <[EMAIL PROTECTED]>
Subject: Re: Triple-DES and NSA???
Date: Wed, 12 Jan 2000 15:13:07 -0500

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
says...
> Triple DES is DES three times over. Either the NSA "screwed around" with DES
> and hence 3DES is no good, or 3DES is good. The NSA didn't really have anything
> to do with 3DES. Duuuuh.
> It's considered reliable in comparison to IDEA and CAST.
> 
> S. "Jabibbian" L.
> 
Well duuuuuhhhhh...EVERYBODY knows that.
-- 
Navid

http://spamcop.net
Protect privacy, boycott Intel: http://www.bigbrotherinside.org


------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Fri, 14 Jan 2000 06:19:41 GMT

In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]> 
wrote:
>SCOTT19U.ZIP_GUY wrote:
>> 
>> <[EMAIL PROTECTED]> wrote:
>> >SCOTT19U.ZIP_GUY wrote:
>> >>
>> >
>> >>  If the one intercepting knows your compression program there has to
>> >> be a means to separate where the compression ends and the random
>> >> data begins. If one does that then the added random information did
>> >> nothing.
>> >> Better to use a 1-1 compression where the data is compressed to fit the
>> >> space available and where no extra info is added. Since the decompression
>> >> program must be able to separate out this random stuff anyway.
>> >
>> >Sorry, I don't yet understand. The compression software is public.
>> >Everyone can have it. Yes, the software has the additional work
>> >to get appropriate filling bits and to put these in on compression
>> >and to throw these away on decompression. (The software 'knows'
>> >how it is to be done.) But are you arguing that's too much
>> >computational work or what? I don't think that's too much work.
>> >I have never said that my proposal is 'better' than any 1-1
>> >compression scheme, only that it 'suffices' for the (practical)
>> >purpose at hand. Now that you and some others have developed
>> >1-1 compressors, one can certainly (or even better) use these (at
>> >least theoretically more satisfying) products. But in retrospect,
>> >if my proposal were put forth earlier, there would be in my humble
>
>>     What humble opinion
>
>If you consider modesty is bad, then substitute it with 'my high 
>opinion'. o.k. for you now??

   I don't consider modesty bad. I consider false modesty bad.

>
>> >view no absolute 'necessity' to develop the 1-1 compressors, as far
>> >'practical needs' (in contrast to theoretical desires) are concerned.
>> >Have I explained the essential points of a previous follow-up
>> >clearly enough here?
>> 
>>   No you haven't explained it well enough
>
>Another trial: My proposal is able to satisfy the requirement you
>originally raised, i.e. to give the analyst no information via
>the decompression-(re-)compression process. Hence, if that is 
>implemented, then there is no necessity to spend time and energy to 
>develop 1-1 software of the sort that you have done. And further I 
>am of the opinion that the benefit of having that software isn't 
>worth the time and energy you and others have spent in developing 
>the software.
>

   Well, I am glad you don't think it's worth it. I would hate to have
us all think the same; that would make life boring.






David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMERS***

I leave you with this final thought from President Bill Clinton:

   "The road to tyranny, we must never forget, begins with the destruction of the 
truth." 

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: AES & satellite example
Date: Fri, 14 Jan 2000 05:39:51 GMT


On Fri, 14 Jan 2000 03:13:57 GMT, in <85m49a$2h2$[EMAIL PROTECTED]>, in
sci.crypt [EMAIL PROTECTED] wrote:

>In article <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] (Terry Ritter) wrote:
>> Since we cannot know cipher strength, ...
>
>The gubmnt is also not certain about the safety of prozac,
>ritalin, carbon emissions, MTBE, cell phones, gene-altered
>vegetables, guns, crypto exports, etc. Life goes on.

The implication here is that in life we often know some general
quantity, if not a precise value.  

Alas, the reality of ciphering is that we do not even know the general
quantity.  The situation of ciphering is almost unique in life, and to
deal with reality it is important that we recognize what we have.  

We do not know our opponents, or their capabilities, and they do not
tell us when they succeed.  Thus, we know *nothing* about how well our
ciphers protect our data from them.  

In engineering terms, we use ciphers in "open loop," without any
knowledge of the result: We have no feedback at all to tell us how
well we are doing.  And this very lack of information emboldens us
about how strong our ciphers must be, since we have not heard
otherwise.  But that is invalid reasoning.

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Blum, Blum, Shub generator
Date: Fri, 14 Jan 2000 05:39:56 GMT


On 13 Jan 2000 21:40:16 -0000, in
<[EMAIL PROTECTED]>, in sci.crypt lcs Mixmaster
Remailer <[EMAIL PROTECTED]> wrote:

>[...]
>This is not an asymptotic result, it applies to specific moduli of 512
>or 1024 bits.  If any one of those has a short cycle that can be found in
>a tractable amount of time, it can be factored.  That's the bottom line.
>
>The advice to choose moduli with guaranteed long cycles, and to choose
>seeds that way, is completely useless.  It is like the advice you
>used to hear to choose "strong" RSA moduli by careful choice of p and q.
>No one does this any more, because it has been proven to be pointless.
>The attacks this was meant to protect against are not effective against
>moduli of the sizes in use today.
>
>The same thing is true of concern about cycle lengths.  Factoring RSA
>moduli by trying to guess values with short cycles is an inefficient
>way of attacking the problem.  That's why the RSA factoring efforts
>don't use this algorithm, they use NFS and related algorithms instead.
>Worry about cycle lengths is nothing more or less than superstition.

On the contrary: Worry about cycle lengths is fundamental to security
in (conventional) stream cipher usage: If we use a cycle which repeats
quickly, the system is insecure, no matter how impossible it may be to
factor N.  

As far as I know, the current thinking is that the *probability* that
a random x0 will be on a short cycle is very low.  Fine so far.

But an admitted *possibility* of weakness -- no matter how small --
shows there is no *proof* of strength, unless "proof" no longer
carries an absolute assurance.  

The whole point in using BB&S is to have a *proof* of strength.  Maybe
that proof ultimately will be shown to be flawed; I don't know.  But
whatever assurance that proof does carry comes from the *real* BB&S
design.  Not following the whole recipe in BB&S means that we do not
have BB&S, we have something else.  Maybe that something else is just
as good, in which case it should have its own construction, name, and
proof.  But using the term "BB&S" when we do not follow the BB&S
recipe seems little more than an appalling deception.  
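
To make the cycle-length point concrete, here is a toy C sketch of the
BB&S iteration x[i+1] = x[i]^2 mod N.  The modulus (7*19) and seed are
deliberately tiny, illustrative values chosen only so that a short cycle
becomes visible; they are not usable parameters:

#include <stdio.h>

/* Toy Blum-Blum-Shub sketch: square mod N, output the low bit.
 * N must really be the product of two large primes p, q with
 * p = q = 3 (mod 4); here N = 7 * 19 = 133 just for illustration. */
int main(void)
{
    unsigned long N  = 7UL * 19UL;  /* both factors are 3 mod 4 */
    unsigned long x  = 4;           /* seed: 2^2 mod N, a quadratic residue */
    unsigned long x0 = x;
    int i;

    for (i = 0; i < 32; i++) {
        x = (x * x) % N;            /* the BB&S step */
        printf("%lu", x & 1);       /* emit the least significant bit */
        if (x == x0)                /* naive check for a return to the seed */
            printf("  <- cycle of length %d\n", i + 1);
    }
    printf("\n");
    return 0;
}

With these toy numbers the state returns to the seed after only six
squarings, which is exactly the sort of behaviour a real parameter
choice is supposed to rule out.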

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (NFN NMI L.)
Subject: Cryptography Exporting?
Date: 14 Jan 2000 06:29:43 GMT

Hi. I read:
http://www.cdt.org/crypto/admin/000110cryptoregs.shtml
And was quite confused. So, what the heck are the rules now? I'm curious
because I have a 1024-bit RSA program I'll be releasing soon.

S.T.L.

------------------------------

From: [EMAIL PROTECTED] (NFN NMI L.)
Subject: Re: Triple-DES and NSA???
Date: 14 Jan 2000 06:30:32 GMT

<<EVERYBODY knows that.>>

'cept for the original poster.

S. T. "andard Mode" L.

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Ciphers for Parallel Computers
Date: Fri, 14 Jan 2000 07:00:30 GMT

While I can't think of a form of encryption suitable for use on a
quantum computer, the ultimate form of parallel computer (quantum
cryptography has to do with a secure communications medium, and thus
relates to invisible ink rather than being algorithmic), an ordinary
parallel computer could be used efficiently for one type of cipher.

The reason encryption is generally a serial process has to do with the
fact that encryption must be invertible. In a Feistel round, however,
the f-function can be designed in any way one likes, as the round
structure produces invertibility, even when the f-function is not
invertible, as indeed is the case for DES.

One can therefore imagine a block cipher that works like this:

a Feistel round, but the f-function is actually the XOR of 1000
different f-functions...and since they all take the unaffected half of
the block as their input, they can all be carried out in parallel.
(Then ten XOR times are consumed XORing them all together.)

Thus, if technology reaches an upper limit on how _fast_ a single
processor can be made - but not on how _cheap_ such a processor can
be, allowing very large parallel computers to be built - this kind of
approach makes it possible to use such a computer fully for
encipherment. That prevents the otherwise inevitable advantage the
cryptanalyst would gain from technology moving in that direction.

John Savard (teneerf <-)
http://www.ecn.ab.ca/~jsavard/index.html

------------------------------

From: [EMAIL PROTECTED] (Terje Elde)
Subject: Re: Random numbers generator
Date: Fri, 14 Jan 2000 08:22:28 GMT

In article <85kq97$17c$[EMAIL PROTECTED]>, Simone Molendini wrote:
>where can I find C code for a *good* random number generator?
>
>The C rand() routine seems to me to be a weak one: it has a cycle of only
>32768 (it seems to me).

You'll need to set up the input yourself, but check out Yarrow at
counterpane.com.

Terje Elde
-- 

Hi! I'm a .signature virus! Copy me into your ~/.signature to help me
spread!


------------------------------

From: [EMAIL PROTECTED] (Terje Elde)
Subject: Re: Cryptography in Tom Clancy
Date: Fri, 14 Jan 2000 08:22:29 GMT

In article <[EMAIL PROTECTED]>, Glenn Larsson wrote:
>Terje Elde wrote:
>> I would not even think about bothering with it. The chances of
>> successfully decrypting it are about the same as getting hit by
>> lightning 42535295865117307932921825 times at the same moment you're
>> falling out of bed.
>
>I once heard a good saying:
>
>-"If you're falling from the sky, you may as well try to fly"

If I were falling out of an airplane or whatever, I would not give a
rat's ass about flying; I would rather concentrate on trying to land on
a soft spot and on how to minimize the impact.

The same is true for the 128-bit key thing... attack from another pov.

Anyway, do you people know whether what Clancy was talking about was a real
brute force or a brute-force dictionary attack? If we're talking dictionary,
then regularly breaking 128-bit keys in 3 hours is quite possible.

Terje Elde
-- 

Hi! I'm a .signature virus! Copy me into your ~/.signature to help me
spread!


------------------------------

From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: LSFR
Reply-To: [EMAIL PROTECTED]
Date: Fri, 14 Jan 2000 08:24:59 GMT

On Thu, 13 Jan 2000 "Michael Darling" wrote:

>if you order the list then you must store the sequence index along with the
>item.
>This would double the size of the data storage required.
>

Some orderings are more compact, but yes, for efficiency
you should save more than just the data value.

If increasing memory use by a factor of two is significant
(in other words, if you're seriously contemplating doing this
and are already thinking about memory optimization),
then you should consider how many of those states are 
really reachable.  You might be able to rule out states 
that won't be reached for several hundred years, 
by which time computers will have improved enough to 
make a complete search more reasonable.  

Scott Nelson <[EMAIL PROTECTED]>

------------------------------

From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: LSFR
Reply-To: [EMAIL PROTECTED]
Date: Fri, 14 Jan 2000 08:38:36 GMT

On Thu, 13 Jan 2000 "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
>
>Have you looked at gray codes?  They have no carry propagation because only a
>single bit changes between states.  And the transform from gray numbers to
>normal numbers is quite simple in both directions.
>

You might also consider a hybrid approach:
as much ripple counter as you can stand, and the
rest LFSR.
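
As an aside, the binary/Gray conversions Trevor mentions really are only
a couple of lines each; a minimal C sketch of the standard reflected
Gray code:

#include <stdio.h>

/* Binary -> reflected Gray code: one shift and one XOR. */
static unsigned bin_to_gray(unsigned b)
{
    return b ^ (b >> 1);
}

/* Gray -> binary: fold the bits back down with successive shifts. */
static unsigned gray_to_bin(unsigned g)
{
    unsigned b = g;
    while (g >>= 1)
        b ^= g;
    return b;
}

int main(void)
{
    unsigned i;
    for (i = 0; i < 8; i++)
        printf("%u -> gray %u -> back %u\n",
               i, bin_to_gray(i), gray_to_bin(bin_to_gray(i)));
    return 0;
}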

Scott Nelson <[EMAIL PROTECTED]>

------------------------------

From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: Random numbers generator
Reply-To: [EMAIL PROTECTED]
Date: Fri, 14 Jan 2000 08:48:18 GMT

On Thu, 13 Jan 2000 "Simone Molendini" wrote:

>where can I find C code for a *good* random number generator?
>
If you're just looking for an unbiased RNG, try
http://www.helsbreth.org/random/unbiased.html
They are NOT unpredictable (in other words,
they're worthless for crypto), but since you
didn't specify, I'm assuming you don't need that.

If you need unreproducible numbers, check out Yarrow -
http://www.counterpane.com/yarrow.html
or /dev/random if you're using a unix system.

Doubtless others will point out some of the
many cryptographically secure pseudorandom
generators available, if you really do need
unpredictable but repeatable output.
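
If /dev/random is the route taken, reading from it is just ordinary file
I/O; a minimal Unix-only sketch (note that /dev/random may block while
the kernel gathers entropy, and /dev/urandom is the usual non-blocking
alternative):

#include <stdio.h>
#include <stdlib.h>

/* Pull 16 bytes from the kernel's entropy pool and print them in hex. */
int main(void)
{
    unsigned char buf[16];
    size_t i;
    FILE *f = fopen("/dev/random", "rb");

    if (f == NULL) {
        perror("/dev/random");
        return EXIT_FAILURE;
    }
    if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
        fprintf(stderr, "short read\n");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);

    for (i = 0; i < sizeof buf; i++)
        printf("%02x", buf[i]);
    printf("\n");
    return 0;
}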

Scott Nelson <[EMAIL PROTECTED]>

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Fri, 14 Jan 2000 10:15:35 +0100

SCOTT19U.ZIP_GUY schrieb:
> 
> In article <[EMAIL PROTECTED]>, Mok-Kong Shen 
><[EMAIL PROTECTED]> wrote:
> >SCOTT19U.ZIP_GUY wrote:
> >>
> >> <[EMAIL PROTECTED]> wrote:

> >> >least theoretically more satisfying) products. But in retrospect,
> >> >if my proposal were put forth earlier, there would be in my humble
> >
> >>     What humble opinion
> >
> >If you consider modesty is bad, then substitute it with 'my high
> >opinion'. o.k. for you now??
> 
>    I don't consider modesty bad. I consider false modesty bad.

What do you mean by false modesty here? I don't absolutely exclude that this
is because I am not a native English speaker. But to the best of my knowledge
there is nothing 'particular' implied in the 'courtesy' words in
phrases like 'yours sincerely', IMHO, etc. etc. Look at what the French 
people write when they end their letters!! On the other hand, I find
those (not infrequent) posts on the internet that contain bad words 
(swear-words) simply horrible.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Fri, 14 Jan 2000 10:15:25 +0100

Tim Tyler wrote:
> 
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> 
> : My proposal is able to satisfy the requirement you
> : originally raised, i.e. to give the analyst no information via
> : the decompression-(re-)compression process.
> 
> Which it would do reasonably well, /if/ you could get hold of some "real"
> random numbers.  Getting hold of them in a portable way would be better
> still.

I have explained in a previous follow-up that one needs only 
'non-constancy'. Here is a repetition: Suppose the decrypted
compressed file is uncompressed and compressed again. The two
files can be compared to easily find out the filling bits. Suppose
the old and new filling bits are 00 and 11 respectively. What
information does the analyst get from that? He knows that if, for
example, he continues to do uncompress-compress, he will get
in most cases different filling bits. So from these filling bits
he knows nothing. If you don't agree with this, please kindly show
where my argument is wrong with details. Simply claiming that it 
doesn't work or doesn't work well is not sufficient.

> 
> : Hence, if that is implemented, then there is no necessity to spend time
> : and energy to develop 1-1 software of the sort that you have done.
> 
> Unless you want theoretical perfection - rather than a non-deterministic
> compressor, whose security depends partly on a supposedly random process.
> 
> One additional positive thing about 1-1 compressors is that you can
> mechanically verify that the rest of the compression program (besides the
> ending scheme) is acting in the correct 1-1 manner - by testing lots of
> files.

Please kindly again explain in view of what I said above.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Fri, 14 Jan 2000 10:15:53 +0100

Tim Tyler wrote:
> 
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

> : Do you think that my proposed scheme has 'practically' solved the
> : problem or not?
> 
> In practice, the Huffman ending problem appears to offer the attacker few
> footholds anyway.  Seven bits per message may compromise some systems,
> but not very many.  At worst it knocks this number of bits off the
> keyspace.  Most systems should withstand this, most of the time.
> 
> I don't know if your scheme reduces the problem to acceptable levels.
> What is "acceptable" depends on your application.

May I remark that your last-but-one sentence says nothing in a 
scientific discussion? I could similarly say 'I don't know' this and
that and a lot of other things. If you want to show the weakness of 
someone else's argument, please show it plainly and with sharp focus.

 
> I see your scheme as a source of unnecessary and easily eliminated
> potential weakness.  I don't see what the problem is with using the ideal
> Huffman file ending scheme, now that it has been discovered.

I have said many times that I don't claim my scheme is better.
As to 'potential weakness', I repeat what I said above: Please show 
it. Saying something like 'I guess there could be something in there'
doesn't really help the proper discussion. That I agree with your last
sentence has been explicitly stated in a previous follow-up.


> : How does one proceed to exploit the 'certain  non-random stuff'?
> 
> I've given in some detail one way the analyst might get information from
> this, even if the padding is /totally/ random.
> 
> Giving the analyst unnecessary information about plaintext statistics in
> messages is not a good idea.  I'm sure you can figure out what an analyst
> can do with such information for yourself.

If I 'periodically' append to my encrypted messages the plaintexts
'AA', 'AB', 'BA' and 'BB', what can the analyst do with that??

 
> : Please tell me a 'concrete' way to get some 'information' out of
> : these that is 'actually' sensible to his task, namely to decrypt
> : to obtain the information in the bit sequences 'preceeding'
> : these 2 filling bits.
> 
> If he has one message, his job is not easy.  Also, your question
> is not necessarily the right one.  With two bits of information, he's
> not going to get /much/ useful information about the rest of the message -
> unless the message has a two bit key.
> 
> I've explained how /even/ totally random padding can give the analyst
> information he wouldn't have if a deterministic compressor had been used,
> that might help identify the sender.

That's interesting. I said in another post the compressor is public.
HOW is the analyst going to identify the sender with the compressor
the sender uses, since anyone can use that compressor??

> 
> :> : In fact, I don't need 'true randomness', nor even
> :> : 'pseudo-randomness', only 'non-constancy'.
> :>
> :> This won't do at all, IMO.
> 
> : Please explain.
> 
> You think (say) preferentially padding with zeros is generally acceptable
> behaviour, despite the fact that it gives away probably-known plaintext?

Always padding with zeros may not be good, but padding with stuff
that varies periodically (all different patterns have almost equal
frequency of occurrence) doesn't leak information in any practical
sense in my opinion. You can simply regard these as separate messages
that 'happen' to be 'attached' to the proper messages. Since the
content of these attached messages has nothing that needs protection,
it follows that it doesn't matter whether the analyst correctly
gets them or not.


> Would you more more concerned if you were padding to a 128-bit block
> rather than an 8-bit one?

No, if the padding patterns occur with equal frequency and after
seeing one pattern one does not know which is the next to be expected.
(Please note that the compressor may deterministically emit the
filling bit patterns. But this is only for the case in the present
context when the analyst 'repeatedly' uncompresses and compresses the 'same'
stuff that he obtains with one particular trial key. He knows that,
if the key he assumes were indeed correct and the sender were to 
send the same message once again, he has equal chance of seeing
other filling bit patterns. Hence the particular pattern he sees
tells him nothing.)

 
> I was under the impression that giving away probable plaintext to
> attackers was generally considered undesirable.  I'm a bit puzzled
> about being asked for "supporting material".  The attacker gains
> statistical knowledge about the plaintext, which was previously
> unavailable to him.  What more do I need to say?

See what I wrote above about the 'attached messages'.


> You want me to spell out how to analyse frequency information in
> messages, when statistical characteristics of the plaintext are
> known?

No.

> I'll give an example, which should demonstrate how an analyst might be
> helped.
> 
> He has a bunch of messages, which he knows from the context of
> their interception contain encyphered-compressed password data.
> 
> The passwords were generated by a random number generator.
> 
> Each password is encyphered with its own key.
> 
> The analyst wants access to the passwords.
> 
> [In this instance compression doesn't help, but the compressor was built
> into the cypher-machine.]
> 
> The analyst has access to a captured table of keys to the cypher
> - but he doesn't know where the user started in his key table, so he tries
> starting positions one at a time, assuming consecutive keys were used
> for consecutive messages, as specified in the manual of a captured
> machine.
> 
> Since the messages themselves are random, his only source of information
> is the padding used by the compressor.  If he decyphers a large number of
> password files in a row, and they exhibit the same statistical anomalies
> that are introduced by the padding scheme of the compressor, he knows he
> has found the correct starting key.
> 
> None of this would have been possible, were it not for a very few
> non-random bits of information appended to the files.

Are you saying that the assumption that consecutive passwords are
encrypted with consecutive keys helps the analyst? But what has
this to do with the present context? Here the analyst tries a key
K to get a compressed file which he finds, say, to have 2 filling
bits 00. On repeated uncompression-compression he finds other
patterns (01, etc.) and that all these patterns occur equally frequently.
So that original '00' is nothing 'particular' and hence he can't
derive any information from that particular occurrence of the
filling bits, can he?? (Sorry that I repeat what I said.)

> : In the example case mentioned above, all four filling bit patterns are
> : 'equally likely', there is no 'most likely' one. Could the analyst
> : still do something with his 'statistical data' of the filling bits?
> 
> You said they were used in sequence.  Consequently he knows that if the
> message has a two-bit padding - he knows which two bits will be used.
> 
> Consequently some bits *are* more likely than other ones - despite each
> two-bit sequence being used with equal frequency.
> 
> Can the attacker do anything with this?  It's not very likely that he can
> do much with 1/4 of a bit.  Analysts are clever folk, though.  I wouldn't
> like to say it was always totally useless.

If an analyst tries a key K1, he can do uncompress-compress to find
out what the filling bits (and of course how many filling bits) the 
original sender's compressor 'happened' to have used, IF indeed the 
key K1 is correct. If he tries another key K2, not only the pattern
of the filling bits but also the length will generally be different.
So he doesn't know even the length of the 'correct' filling bits 
through trying different keys. I used in my arguments 2 filling 
bits constantly because I wanted to illustrate that the subset of
those messages with 2 filling bits together doesn't leak information
simply due to the fact that these 2 filling bits are there.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Ciphers for Parallel Computers
Date: Fri, 14 Jan 2000 10:28:27 +0100

John Savard wrote:
> 
> While I can't think of a form of encryption suitable for use on a
> quantum computer, the ultimate form of parallel computer (quantum
> cryptography has to do with a secure communications medium, and thus
> relates to invisible ink rather than being algorithmic), an ordinary
> parallel computer could be used efficiently for one type of cipher.
> 
> The reason encryption is generally a serial process has to do with the
> fact that encryption must be invertible. In a Feistel round, however,
> the f-function can be designed in any way one likes, as the round
> structure produces invertibility, even when the f-function is not
> invertible, as indeed is the case for DES.
> 
> One can therefore imagine a block cipher that works like this:
> 
> a Feistel round, but the f-function is actually the XOR of 1000
> different f-functions...and since they all take the unaffected half of
> the block as their input, they can all be carried out in parallel.
> (Then ten XOR times are consumed XORing them all together.)
> 
> Thus, if technology reaches an upper limit on how _fast_ a single
> processor can be made - but not on how _cheap_ such a processor can
> be, allowing very large parallel computers to be built - this kind of
> approach makes it possible to use such a computer fully for
> encipherment. That prevents the otherwise inevitable advantage the
> cryptanalyst would gain from technology moving in that direction.

That parallel processing is used in all kinds of computing (where 
hardware is economically available) is a fact today. Clever compilers
do quite a lot of automatic parallelizing of programs. But I don't yet
clearly see an advantage that pertains to only one side of the
antagonistic pair user-analyst. Higher speed can be of 'advantage' to
both sides. On the other hand, I suppose some sort of 'inefficiencies' 
are disadvantageous to the analyst.

M. K. Shen

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
