Cryptography-Digest Digest #170, Volume #14      Tue, 17 Apr 01 19:13:01 EDT

Contents:
  Re: Distinguisher for RC4 ("Joris Dobbelsteen")
  "UNCOBER" = Universal Code Breaker (newbie)
  Re: Size of dictionaries (Jim Gillogly)
  Re: There Is No Unbreakable Crypto (David Wagner)
  DES Optimization - Can Someone Explain? ("Kevin D. Kissell")
  Re: "differential steganography/encryption" ("Dopefish")
  Re: HOW THE HELL DO I FIND "D"?!?! ("Dopefish")
  Re: "UNCOBER" = Universal Code Breaker ("Joseph Ashwood")
  lagged fibonacci generator idea ("Tom St Denis")
  Re: Size of dictionaries (Mok-Kong Shen)
  Re: "differential steganography/encryption" (Mok-Kong Shen)
  Re: "UNCOBER" = Universal Code Breaker (newbie)
  Re: Note on combining PRNGs with the method of Wichmann and Hill ("Brian Gladman")

----------------------------------------------------------------------------

From: "Joris Dobbelsteen" <[EMAIL PROTECTED]>
Subject: Re: Distinguisher for RC4
Date: Tue, 17 Apr 2001 22:53:00 +0200

What about using a mode like CBC, OFB or CFB? These XOR the plaintext with a
result that comes out of a block cipher, making it possible to substitute a
block cipher for a stream cipher. This may still not be suited to every
situation, but it can be adapted easily...
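
As a rough illustration of the idea, here is a sketch of OFB (one of the modes mentioned) in Python. The `toy_block_encrypt` function is only a stand-in, built from SHA-256, for a real block cipher such as 3DES or AES; the point is just that the block cipher's output gets XORed with the data, keystream-style, so the combination behaves like a stream cipher.

```python
import hashlib

BLOCK = 16  # bytes per "block"

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher (e.g. 3DES or AES); a keyed
    # pseudorandom function modelled here with a keyed hash.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ofb_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    # OFB mode: repeatedly encrypt the previous cipher output to get a
    # keystream, then XOR it with the data.  Encryption and decryption
    # are the same operation, exactly as with a stream cipher.
    out = bytearray()
    feedback = iv
    for i in range(0, len(data), BLOCK):
        feedback = toy_block_encrypt(key, feedback)
        chunk = data[i:i + BLOCK]
        out.extend(b ^ k for b, k in zip(chunk, feedback))
    return bytes(out)

key, iv = b"sixteen byte key", b"initialisation!!"
pt = b"Attack at dawn."
ct = ofb_xor(key, iv, pt)
assert ofb_xor(key, iv, ct) == pt  # the same function decrypts
```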

- Joris

"David Formosa (aka ? the Platypus)" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On 15 Apr 2001 06:24:34 GMT, David Wagner <[EMAIL PROTECTED]>
wrote:
>
> [...]
>
> > Well, it does scare me away from the idea of using RC4 in new
> > systems.
> [...]
> > Why bother with such uncertainty when we have 3DES and AES?
> > But then, maybe I'm too paranoid.
>
> RC4 is a stream cipher/pseudorandom number generator, while 3DES is a
> block cipher.  You can't replace RC4 with 3DES in most situations.
>
> --
> Please excuse my spelling as I suffer from agraphia. See
> http://dformosa.zeta.org.au/~dformosa/Spelling.html to find out more.
> Free the Memes.



------------------------------

From: newbie <[EMAIL PROTECTED]>
Subject: "UNCOBER" = Universal Code Breaker
Date: Tue, 17 Apr 2001 16:49:34 -0300

Cryptographers are still talking about unbreakable codes, sometimes even
after someone breaks the code.
So all cryptographers are always trying to find the "unbreakable".
Have cryptographers thought about a universal way to break any code?
Have cryptographers thought about the basis for a universal theory to
break any code?
I found a funny name for that theory: "UNCOBER" = Universal Code Breaker


Newbie

------------------------------

From: Jim Gillogly <[EMAIL PROTECTED]>
Subject: Re: Size of dictionaries
Date: Tue, 17 Apr 2001 14:15:02 -0700

Mok-Kong Shen wrote:
> 
> I am interested to know the order of magnitude of the
> minimum size of a dictionary in English that could permit
> ordinary messages of common people (private and commercial)
> to be written in as terse a manner as possible without
> negatively affecting the essential semantics that is to be
> conveyed. In other words, I am thinking of writing in a style

CK Ogden chopped the vocabulary down to 850 words for his
proposal "Basic English" which he said "will cover those needs
of everyday life for which a vocabulary of 20,000 words is
frequently employed."  It was first proposed in the 1920s,
during the time when the second- and third-generation constructed
languages (Esperanto, Ido, etc.) were hitting their stride.  It
had a few high-profile supporters, such as Winston Churchill
and Franklin D. Roosevelt.  I may be mistaken, but I think he
left out verbs; presumably few enough would be needed to keep
the total under 10 bits.  Shakespeare looks quite odd in Basic
English, as you might guess.

> like telegrams. Among technical points to be clarified seem
> to be those of grammatical forms, e.g. whether the present
> participle of a verb is to be in the dictionary and how
> special words, if needed but not in the dictionary, can be
> represented (using escape symbols?), etc. etc. As a foreigner,
> I clearly can barely have good ideas about these. Would
> 2^12 be an appropriate size of the dictionary? In that case

As a foreigner (umm -- you mean a foreigner in Germany?  we're
from all over), what would be your estimate for one of the
languages you know?  I understand unabridged English dictionaries
are larger than those of most (all?) other languages, so English
might not be the best choice for your plan.  You might consider
an artificial language such as Esperanto or one of the more
modern efforts, depending on what application you have in mind.

> each word of a message could be coded into 12 bits, which
> means quite a significant compression ratio, isn't it?

It seems to me that a proposal like this would be useful only
in extreme conditions of some sort, where constructing the
message could be very human-intensive labor, and the transmission
channel is extremely expensive.  An example might be communicating
with scientists on the 30-year Titan probe, assuming they had lost
the use of their main antenna and can receive mail on only a very
limited-bandwidth umbrella-sized telemetry antenna.  I would
hazard a guess that most encryption used these days is on
high-bandwidth channels with data (financial transactions,
telemetry, and so on) rather than English or other text, and
that humans spend very little time preparing the messages.
-- 
        Jim Gillogly
        Hevensday, 26 Astron S.R. 2001, 17:35
        12.19.8.2.12, 13 Eb 10 Pop, Seventh Lord of Night

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: There Is No Unbreakable Crypto
Date: 17 Apr 2001 21:16:24 GMT

Mok-Kong Shen  wrote:
>I continue to think, as said in another post, that this
>means one can generate from, say, 128 random bits, a
>secure bit string of infinite length, which seems to
>be very counter-intuitive.

Well, not infinite: only polynomial length, and only _if_
you have a secure, length-doubling PRG.  But yes, it's a
marvelous, counter-intuitive, beautiful result.
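
The construction Wagner is alluding to can be sketched as follows. SHA-256 is only a stand-in here for a genuinely secure length-doubling PRG G: {0,1}^n -> {0,1}^2n; with such a G, half of each output becomes the next state and the other half is emitted, so n seed bits stretch to any polynomial number of pseudorandom bits.

```python
import hashlib

def G(seed: bytes) -> bytes:
    # Stand-in for a length-doubling PRG: 32 bytes in, 64 bytes out.
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left + right

def stretch(seed: bytes, nbytes: int) -> bytes:
    # Iterate G: keep the left half as the next state, emit the right
    # half.  If G is a secure PRG, a hybrid argument shows the whole
    # (polynomial-length) output is indistinguishable from random.
    state, out = seed, bytearray()
    while len(out) < nbytes:
        expanded = G(state)
        state, block = expanded[:32], expanded[32:]
        out.extend(block)
    return bytes(out[:nbytes])

stream = stretch(b"\x00" * 32, 1000)  # 256-bit seed -> 1000 bytes
assert stream == stretch(b"\x00" * 32, 1000)  # deterministic in the seed
```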

------------------------------

From: "Kevin D. Kissell" <[EMAIL PROTECTED]>
Subject: DES Optimization - Can Someone Explain?
Date: Tue, 17 Apr 2001 22:23:30 +0200

I've been studying the DES inner loop, using, among
other references, Bruce Schneier's book, Applied 
Cryptography (Second Edition).  I figured out the hack 
of pre-permuting the S-box outputs to save an explicit 
permutation step in the inner loop, but I note that the permuted 
S-box outputs in the source code in the Applied Cryptography
appendix (the "SPx" tables) do not seem to correspond to the 
naive result of applying the P-permutation to the selected bits.
Can anyone here tell me whether some further optimization 
was simultaneously applied when the published tables were 
generated?  If so, could someone explain (or provide a pointer
to an explanation) just what that optimization might be?
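
The basic trick, independent of whatever extra transformation the book's tables use, can be shown with a toy example: P is applied once, at table-build time, to every possible S-box output, so the inner loop does a single lookup. (The S-box below is row 0 of DES S1; the 4-bit permutation is made up for illustration. The real SP tables are 6-bit-in/32-bit-out words, and real implementations may also fold in expansion alignment or rotations, which could be the source of the discrepancy.)

```python
# Toy illustration of folding a bit permutation P into an S-box table.
S = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]  # DES S1, row 0

# A made-up 4-bit permutation: output bit i is taken from input bit P[i].
P = [2, 0, 3, 1]

def permute(x: int) -> int:
    y = 0
    for i, src in enumerate(P):
        y |= ((x >> src) & 1) << i
    return y

# Precompute the combined table once...
SP = [permute(S[x]) for x in range(16)]

# ...so the inner loop needs one lookup instead of lookup + permutation.
assert all(SP[x] == permute(S[x]) for x in range(16))
```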

            Kevin D. Kissell
            MIPS Technologies Inc.
            kevink at MIPS dot com



------------------------------

From: "Dopefish" <[EMAIL PROTECTED]>
Subject: Re: "differential steganography/encryption"
Date: Tue, 17 Apr 2001 16:30:22 -0500

I am a non-crypto person, so please use English... I know a bit about
encryption and stuff... but basically what I am saying is that I take a file
(regardless of the type), convert every 8 bits into its corresponding
ASCII code, do the same with the other file, and find the
difference between the numbers... as an example (just hope that the
formatting comes out right):

    BASE FILE    DIFFERENCE FILE    MESSAGE FILE
        1               4                 5
        2               2                 4
        3               0                 3
        4              -2                 2
        5              -4                 1

(this is only if my base file had binary code that read as 12345 when
converted to ASCII and I had a message that read 54321 when converted to
ASCII)


james

--
======BEGIN SIGNATURE======
A.K.A "Dopefish" or "fish" for short on Usenet.

Microsoft?  Is that some kind of toilet paper?

"Rockin' the town like a moldy crouton!"
                 - Beck (Soul Suckin' Jerk - Reject)

"Help me, I broke apart my insides. Help me,
I've got no soul to sell. Help me, the only thing
that works for me, help me get away from
myself."
                 - Nine Inch Nails (Closer)


=====BEGIN GEEK CODE BLOCK=====
Version: 3.12
GO dpu s++:++ a---- C++++ U--->UL
 P L+ E? W++ N+++ o+ K--- w+>w+++++
 O--- M-- V? PS+++ PE Y-- PGP t 5--
 X+ R tv b+ DI D+ G-- e- h! r z
======END GEEK CODE BLOCK======
(www.geekcode.com)

======END SIGNATURE======
Mok-Kong Shen <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> Dopefish wrote:
> >
> > even if you did have a text file larger than your base file it would
> > work, but the part that overlaps wouldn't be different..... for those
> > that might have not understood my last post I will make a model of this
> > system...
> >
> > ---BASE FILE--- + ---DIFFERENCE FILE--- = ---MESSAGE FILE---
> >
> > or, if you had the message and the difference file then you could
> > generate the base file
>
> Given two equal length bit sequences X and Y, generate
> Z = X xor Y, the difference file. Then recovery of
> X from Y and Z is simply X = Z xor Y. BTW, I think
> that the UNIX diff command may be what you want. (I
> have not used UNIX for a long time, so I can't give you
> the details about that command, sorry.) On the other
> hand, there are version management software that
> generate the differences between different versions of
> a program code in the delta files, which are in general
> small files. With the original version and the deltas,
> one can obtain with the help of the software the updated
> version.
>
> M. K. Shen



------------------------------

From: "Dopefish" <[EMAIL PROTECTED]>
Subject: Re: HOW THE HELL DO I FIND "D"?!?!
Date: Tue, 17 Apr 2001 16:35:34 -0500

I know that, but they do not exactly explain how to get d from the
extended Euclidean algorithm
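
For what it's worth, the step the references usually skip: run the extended Euclidean algorithm on e and phi = (p-1)(q-1); the Bezout coefficient of e, reduced mod phi, is d. A sketch using the common textbook toy numbers p = 61, q = 53, e = 17:

```python
def egcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def find_d(e, p, q):
    phi = (p - 1) * (q - 1)
    g, x, _ = egcd(e, phi)
    assert g == 1, "e must be coprime to phi"
    return x % phi          # the coefficient of e, reduced mod phi

p, q, e = 61, 53, 17
d = find_d(e, p, q)
assert (e * d) % ((p - 1) * (q - 1)) == 1
assert d == 2753

# Sanity check: an RSA round trip with n = p*q.
n = p * q
m = 65
assert pow(pow(m, e, n), d, n) == m
```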


james


pjf <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> You have to use the Extended Euclidean Algorithm, which is listed
> about halfway down that page.
>
> Or you can just try plugging in numbers (slow) for small values of p,
> q, and e.
>
> -pjf
>
> --
> [EMAIL PROTECTED]
> http://www.staticengine.com
> Developer, Know Wonder Inc.
> Musician, Static Engine
> ---
> Digital Certificates provide no actual security for
> electronic commerce; it's a complete sham.
>           -Bruce Schneier, Secrets & Lies
>
>
>



------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Tue, 17 Apr 2001 15:01:57 -0700

Such a device simply cannot exist, for one very fundamental reason: an
unbreakable encryption algorithm exists. It's called the OTP (one-time pad).

Now let's get some terminology out of the way before this goes any further.
"newbie" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> [Cryptanalysts] are still talking about unbreakable [encryption
> algorithms]. And sometimes after someone breaks the [algorithm].

No. Real cryptanalysts talk about the amount of work needed to break
encryption algorithms. Trolls talk about unbreakable ......

> So, all the cryptographers are always trying to find the "unbreakable".

No, we are trying to find algorithms that require more work to break than is
possible to apply.

> [Have] [cryptanalysts] thought to universal way to break any code?

Such an algorithm is unlikely to exist; in fact I believe the problem can be
reduced to the halting problem. Let's try. Obviously checking whether a
machine breaks an algorithm is easier than finding and checking the solution,
so let's simply limit ourselves to the verification process. Now, in order to
get an answer from the verification, the process must stop, so let's simply
restrict ourselves to seeing if the supplied machine stops, and worry about
whether or not it works later. That leaves the halting problem as a small
portion of the work, so the final algorithm cannot exist.

> [Have] [cryptanalysts] thought [about] the basis to elaborate universal
> theory to break any code?

I just did, above: it can't exist. Of course Hollywood will come out with
another movie in which a magical box does exactly that, but they aren't
restricted to reality.
                        Joe



------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: lagged fibonacci generator idea
Date: Tue, 17 Apr 2001 22:20:53 GMT

I was wondering: given the output of a lagged Fibonacci generator (LFG), how
do you regenerate the state?  Just plug the values in directly?  (Assume you
know the taps.)

I was also thinking: what if we made a bijective, random, key-dependent S-box
and simply ran the LFG output through the S-box?  I.e.

output = SBOX[prng]

That way the actual output is not known... Obviously the same input to the
S-box would map to the same output, so it would have some characteristic like
that... but it's non-linear and would be hard to use to reconstruct the
values???
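
On the first question: for an additive LFG, yes — the last k raw outputs *are* the state, so you can plug them in directly and clone the generator. A sketch of both that and the key-dependent S-box idea (the lags, modulus, and key here are arbitrary choices for illustration):

```python
import random
from collections import deque

J, K, M = 5, 17, 1 << 16  # lags and modulus; small, arbitrary choices

def lfg(state):
    # Additive lagged Fibonacci: s[n] = (s[n-K] + s[n-J]) mod M.
    s = deque(state, maxlen=K)
    while True:
        x = (s[-K] + s[-J]) % M
        s.append(x)
        yield x

seed = [random.randrange(M) for _ in range(K)]
g = lfg(seed)
outputs = [next(g) for _ in range(100)]

# State recovery: the last K outputs are exactly the current state.
clone = lfg(outputs[-K:])
assert [next(clone) for _ in range(10)] == [next(g) for _ in range(10)]

# Masking idea: push each output through a key-dependent bijective S-box
# (a key-seeded random permutation of 0..M-1).
key_rng = random.Random(b"secret key")
sbox = list(range(M))
key_rng.shuffle(sbox)
masked = [sbox[x] for x in outputs]
```

As the post notes, the S-box hides the raw values but still maps equal inputs to equal outputs, so repeats in the LFG stream remain visible in the masked stream.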

Just wondering.
--
Tom St Denis
---
http://tomstdenis.home.dhs.org



------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Size of dictionaries
Date: Wed, 18 Apr 2001 00:52:56 +0200



Jim Gillogly wrote:
> 
[snip]

> As a foreigner (umm -- you mean a foreigner in Germany?  we're
> from all over), what would be your estimate for one of the
> languages you know?  I understand unabridged English dictionaries
> are larger than those of most (all?) other languages, so English
> might not be the best choice for your plan.  You might consider
> an artificial language such as Esperanto or one of the more
> modern efforts, depending on what application you have in mind.

A concrete case I know is unfortunately too remote from
English to allow any reasonable 'extrapolations'. The 
Chinese telegraphic code book has 10^4 entries. That's  
definitely (in my view far more than) enough for 
telegraphic messages of common people.

> It seems to me that a proposal like this would be useful only
> in extreme conditions of some sort, where constructing the
> message could be very human-intensive labor, and the transmission
> channel is extremely expensive.  An example might be communicating
> with scientists on the 30-year Titan probe, assuming they had lost
> the use of their main antenna and can receive mail on only a very
> limited-bandwidth umbrella-sized telemetry antenna.  I would
> hazard a guess that most encryption used these days is on
> high-bandwidth channels with data (financial transactions,
> telemetry, and so on) rather than English or other text, and
> that humans spend very little time preparing the messages.

I agree. With current technology, there are indeed 
barely any essential constraints posed by bandwidth 
availability. I imagine that one special case where high 
reduction of message volume could be desirable is stego, 
in case the cover materials are for some practical 
reasons not abundantly available.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "differential steganography/encryption"
Date: Wed, 18 Apr 2001 00:53:20 +0200



Dopefish wrote:
> 
> i am a non-crypto person so please use english....i know a bit about
> encryption and stuff...but basically what i am saying is that i take a file
> (regardless of the type) and i convert every 8 bits into its corresponding
> ASCII code then take the other file and do the same and i find the
> difference between the numbers....as an example:  (just hope that the
> formatting comes out right)

You can in general find the difference between file A 
and file B (there is no need to find the ASCII symbols)
by doing an operation (that is called 'xor') on each bit 
of A and its corresponding bit of B. With that difference 
file, let's call it D, you can get back A, if someone 
gives you B, and you can get back B, if someone gives 
you A. All you need to do is again an 'xor' operation. 
Have I explained it clearly enough? (xor gives the bit 1 if 
the two operands are different; it gives the bit 0 if 
the two operands are the same.)
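
A minimal sketch of the above in Python (the file contents are stand-in byte strings of equal length):

```python
def xor_bytes(x: bytes, y: bytes) -> bytes:
    # Bitwise XOR of two equal-length byte strings.
    return bytes(a ^ b for a, b in zip(x, y))

A = b"the base file contents"
B = b"the message contents!!"   # must be the same length as A
D = xor_bytes(A, B)             # the difference file

assert xor_bytes(D, B) == A     # D and B give back A
assert xor_bytes(D, A) == B     # D and A give back B
```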

M. K. Shen

------------------------------

From: newbie <[EMAIL PROTECTED]>
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Tue, 17 Apr 2001 19:00:10 -0300

I'm not talking about a universal algorithm, but about a "unified theory".
Theory means designing concepts and tools to solve types of encryption.
Theory means thinking about the hypotheses, conditions, etc. needed to solve
any encryption.
Theory means a conceptual vision of the cryptanalysis concern.
Theory does not mean creating a super algorithm-breaker.

 

Joseph Ashwood wrote:
> 
> Such a device simply cannot exist, for one very fundamental reason: an
> unbreakable encryption algorithm exists. It's called the OTP (one-time pad).
> 
> Now let's get some terminology out of the way before this goes any further.
> "newbie" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > [Cryptanalysts] are still talking about unbreakable [encryption
> > algorithms]. And sometimes after someone breaks the [algorithm].
> 
> No. Real cryptanalysts talk about the amount of work needed to break
> encryption algorithms. Trolls talk about unbreakable ......
> 
> > So, all the cryptographers are always trying to find the "unbreakable".
> 
> No, we are trying to find algorithms that require more work to break than is
> possible to apply.
> 
> > [Have] [cryptanalysts] thought to universal way to break any code?
> 
> Such an algorithm is unlikely to exist; in fact I believe the problem can be
> reduced to the halting problem. Let's try. Obviously checking whether a
> machine breaks an algorithm is easier than finding and checking the
> solution, so let's simply limit ourselves to the verification process. Now,
> in order to get an answer from the verification, the process must stop, so
> let's simply restrict ourselves to seeing if the supplied machine stops, and
> worry about whether or not it works later. That leaves the halting problem
> as a small portion of the work, so the final algorithm cannot exist.
> 
> > [Have] [cryptanalysts] thought [about] the basis to elaborate universal
> > theory to break any code?
> 
> I just did above, it can't exist. Of course Hollywood will come out with
> another movie in which a magical box does exactly that, but they aren't
> restricted to reality.
>                         Joe

------------------------------

From: "Brian Gladman" <[EMAIL PROTECTED]>
Crossposted-To: sci.crypt.random-numbers
Subject: Re: Note on combining PRNGs with the method of Wichmann and Hill
Date: Wed, 18 Apr 2001 00:13:28 +0100

"Mok-Kong Shen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
[snip]
> > I am confident in this result without the need for a formal test but you
> > are free to run one if you want to prove that my confidence is misplaced.
>
> Again sorry. I am not a statistician. I can trust the
> methods stated in most statistical textbooks but can't
> trust others. Chi-square test is also given in Knuth
> vol. 2. The other method in question is Kolmogorov-
> Smirnov. For the issue in point you have to use some
> good real-life pseudo-random variables and do the test
> in a manner recommended by the textbooks (in particular,
> size of sample) and compare the results, once with all
> factors 1.0 and once with deviated values, eventually
> repeating the experiments an appropriate number of times.
> (This is not any dis-appreciation of the value of your
> method of testing uniformity. I would certainly be happy
> to be among the first people to use it, if it has been
> published in an article in a journal of statistics.
> Until then, I can only accept results from methods
> currently known in the literature, unfortunately.)

There is nothing wrong or unscientific in the way I have presented the
results.  Plotting distributions based on a large number of trials is a well
known, widely accepted and extensively documented practice.  Moreover it is
a basic technique that will be known to anyone who understands high school
statistics.

But I will indulge you, just to show how silly it is to insist on a
more sophisticated test for such an obviously non-uniform distribution.

Firstly, Knuth suggests that the number of trials should be such that
the expected number of entries in each category is as large as possible,
and certainly much larger than 5.  In my tests this is 1,000,000 in
each of 10 categories (buckets) - I hope you will accept that 1,000,000 is
'much larger than 5'.

The Chi-squared result values (as in Knuth) for the three generators,
measured against an expectation of a uniform distribution, are:

 mwc :    9.26  (A)
 cong:   11.28  (B)
 comb: 3991.00  ((0.9 * A + 1.1 * B) mod 1.0)

For 10 categories the 50% and 75% confidence values are 9.3 and 12.5
respectively, which suggests that there is nothing odd about the two basic
PRNGs with values of 9 and 11. The comb PRNG gives a value of about 13 when
the multipliers are both equal to 1.0, which is also in the normal range.

But for the combined generator above, a uniform PRNG would only give a value
of 36 or greater 0.01% of the time.  And the probability (p) of a uniform
random generator giving a value of 3990 is roughly 1/10^3500000 (plus or
minus a few thousand powers of 10) - that is log10(1 / p) ~ 3.5 million.
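
The flavour of the computation can be reproduced with stand-in generators. Python's built-in PRNG replaces mwc and cong here, which is an assumption on my part, so the exact statistic will differ; the gross non-uniformity of the combined stream shows up the same way.

```python
import random

def chi_squared(samples, buckets=10):
    # Pearson chi-squared statistic against a uniform expectation
    # over `buckets` equal-width categories on [0, 1).
    counts = [0] * buckets
    for x in samples:
        counts[int(x * buckets)] += 1
    expected = len(samples) / buckets
    return sum((c - expected) ** 2 / expected for c in counts)

N = 500_000
a = random.Random(1)   # stand-in for mwc
b = random.Random(2)   # stand-in for cong

plain = [a.random() for _ in range(N)]
comb = [(0.9 * a.random() + 1.1 * b.random()) % 1.0 for _ in range(N)]

chi_plain = chi_squared(plain)  # around 9 expected for a uniform source (df = 9)
chi_comb = chi_squared(comb)    # much larger: the combination is non-uniform
assert chi_comb > chi_plain
```

The non-uniformity comes from the fold: (0.9*A + 1.1*B) mod 1.0 has a trapezoidal density once reduced mod 1, so the end buckets are visibly underweight.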

Hence your confidence that I am wrong about the combined generator being
very non-uniform will prove to be correct once in 10^3,500,000 trials.  And
I claim the remainder as justifying my position.

No doubt you will now ask for another test since this Chi-squared result
does not meet with your expectations.

     Brian Gladman

PS The graph of the chi-squared value for comb as the multipliers (1-d) and
(1+d) move away from 1.0 is quite interesting but I will leave this as an
exercise for others.




------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
