Cryptography-Digest Digest #335, Volume #9        Sun, 4 Apr 99 04:13:04 EDT

Contents:
  Re: Announce - ScramDisk v2.02h (Lincoln Yeoh)
  FSE Report (Thirteen)
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  Re: Is initial permutation in DES necessary? ("Douglas A. Gwyn")
  Re: GPS, encrypted data base and mushroom hunting (wtshaw)
  Re: Software for breaking polyalphabetic substitution ciphers (wtshaw)
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  !!!Meet Singles Online! ([EMAIL PROTECTED])
  [EMAIL PROTECTED] (Chris Liu)
  Re: Announce - ScramDisk v2.02h (Ghostly1)
  Re: New Hash Algorithim (Rheda Barretina)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Lincoln Yeoh)
Crossposted-To: alt.security.pgp
Subject: Re: Announce - ScramDisk v2.02h
Date: Sun, 04 Apr 1999 03:46:48 GMT
Reply-To: [EMAIL PROTECTED]

On Fri, 02 Apr 1999 19:03:38 GMT, [EMAIL PROTECTED] (Terry Ritter) wrote:

>5.  One of the facts of ciphering life is that we cannot prove the
>strength of any cipher.  Even NIST review and group-cryptanalysis does
>not give us proven strength in a cipher, so any cipher we select might
>be already broken, and we would not know.  We cannot change this, but
>we can greatly improve our odds as a user, by multi-ciphering under
>different ciphers.  Doing this means an Opponent must break *all* of
>those ciphers -- not just one -- to expose our data.  I like the idea
>of having three layers of different cipher, each with its own key.  

Yeah. I like the idea of superencryption too, and I don't know why so few
people seem to like it. So far I have not heard a good answer to how an
attacker would even know whether he or she has succeeded.

In fact I'd go further- by using many layers of encryption, not just 3, with
each layer leaking no headers at all, the attacker's work becomes very much
harder, even if each layer is just 40 bit crypto.

I would think that if we use ciphers A, B, C in such a headerless manner,
A.B.C is unlikely to be weaker than A, B, or C alone, despite some FUD about
"cipher interaction". Heck, if such an effect were likely, we'd see more
cryptanalysts encrypting intercepted material with various other ciphers in
order to decrypt it.

Since no one can prove a single cipher is secure, it is hubris to assume
that one cipher is better than the rest or that one can even select a
single preferred one. For high confidentiality one could perhaps super
encrypt data with the top 6 AES contenders. 
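The layering idea can be sketched in a few lines of Python. The "ciphers" below are stand-in XOR keystream layers derived from SHA-256 (my own illustrative choice, not real ciphers and certainly not the AES candidates); the point is only the structure: each layer has its own key and emits no header.

```python
import hashlib

def xor_stream_layer(data: bytes, key: bytes, label: bytes) -> bytes:
    """One stand-in 'cipher' layer: XOR with a hash-derived keystream.

    Encryption and decryption are the same operation here; a real
    layer would be an actual block or stream cipher with its own key.
    """
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(
            label + key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def superencrypt(data: bytes, keys: list[bytes]) -> bytes:
    # Apply every layer in order; intermediate results carry no
    # headers, so each layer's output looks like random bytes.
    for i, key in enumerate(keys):
        data = xor_stream_layer(data, key, b"layer%d" % i)
    return data

def superdecrypt(data: bytes, keys: list[bytes]) -> bytes:
    # Undo the layers in reverse order.
    for i, key in reversed(list(enumerate(keys))):
        data = xor_stream_layer(data, key, b"layer%d" % i)
    return data

keys = [b"key-A", b"key-B", b"key-C"]
ct = superencrypt(b"attack at dawn", keys)
assert superdecrypt(ct, keys) == b"attack at dawn"
```

An attacker stripping only the outer layer still sees headerless noise, which is the "how would he know he succeeded" problem above.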

As for CPU consumption, I'm wondering whether chaining reduced-round ciphers
together would be too risky. But your point about having a large pool of
ciphers can help here. With the number of possibilities, analysis could
become prohibitive. The opponent may just have to resort to brute force -
trying out various keys with various ciphers. 

I'm wondering whether cryptanalysis of plaintext encrypted with ABCDEFG would
be any different from analysis of AGDCBEF.

Have fun!

Link.

****************************
Reply to:     @Spam to
lyeoh at      @[EMAIL PROTECTED]
pop.jaring.my @ 
*******************************

------------------------------

From: Thirteen <[EMAIL PROTECTED]>
Subject: FSE Report
Date: Sat, 03 Apr 1999 20:34:03 -1000

The Fast Software Encryption Workshop 6 was held 9 days ago in Rome.
Here is a continuation of the report on FSE-6.


Miss In the Middle Attack by Biham, Biryukov, and Shamir

This paper was presented by Alex Biryukov and it involves
impossible differentials. It uses converse ideas from the
meet in the middle attack and from differential cryptanalysis,
so a little background on those older ideas will be given first.

Differential cryptanalysis in its basic form uses the probabilities
with which particular output XOR differences occur when pairs of
inputs with certain input XOR differences are encrypted.
If some output differences occur with a higher probability than most
others, those higher probabilities can be used to advantage to find
key bits. The converse, "impossible differentials",
uses the observation that some XOR differences can never
occur. If a candidate key, when tried, would produce one of these
impossible differentials, then that candidate cannot be
the key being sought, so it can be eliminated from consideration.
If the eliminated candidates are known to form a large subclass of
the possible keys, then the search space of keys can be reduced.
The miss in the middle attack uses impossible differentials.
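The idea can be made concrete with a toy example: build the difference distribution table (DDT) of a small S-box and look for entries that are zero (impossible differentials) or unusually large (high-probability ones). The 4-bit S-box below is an arbitrary permutation chosen just for illustration.

```python
# Difference distribution table of a toy 4-bit S-box.
# ddt[din][dout] counts input pairs with input XOR din whose
# outputs differ by dout.  A zero entry for nonzero din is an
# impossible differential; a large entry is a high-probability one.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

ddt = [[0] * 16 for _ in range(16)]
for x in range(16):
    for din in range(16):
        dout = SBOX[x] ^ SBOX[x ^ din]
        ddt[din][dout] += 1

# For nonzero input differences, some output differences never occur.
impossible = [(din, dout)
              for din in range(1, 16) for dout in range(16)
              if ddt[din][dout] == 0]
best = max(ddt[din][dout] for din in range(1, 16) for dout in range(16))
print(len(impossible), "impossible differentials; largest count:", best)
```

Classical differential cryptanalysis exploits the large entries; the miss in the middle attack exploits the zero entries.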

The meet in the middle attack starts at the first round by using
encryption, and at the last round using decryption. If the key
used from both ends gives the same result at the mid-point, then
the right key has been found.
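As a toy illustration of that table-matching strategy (a deliberately weak 8-bit cipher I made up, not IDEA or anything from the paper): encrypt the known plaintext forward under every candidate first key, decrypt the ciphertext backward under every candidate second key, and match at the midpoint.

```python
# Toy meet-in-the-middle attack on double encryption.
def enc(k: int, x: int) -> int:
    x = (x + k) & 0xFF
    x = ((x << 3) | (x >> 5)) & 0xFF   # rotate left by 3
    return x ^ k

def dec(k: int, y: int) -> int:
    y ^= k
    y = ((y >> 3) | (y << 5)) & 0xFF   # rotate right by 3
    return (y - k) & 0xFF

k1, k2 = 0x3A, 0xC5                    # the secret key pair
pt = 0x42                              # one known plaintext
ct = enc(k2, enc(k1, pt))              # its double-encrypted ciphertext

# Forward table: midpoint value -> candidate first-half keys.
middle = {}
for ka in range(256):
    middle.setdefault(enc(ka, pt), []).append(ka)

# Backward pass: decrypt under each second-half key, match midpoints.
candidates = [(ka, kb) for kb in range(256)
              for ka in middle.get(dec(kb, ct), [])]
assert (k1, k2) in candidates   # the true key pair always matches
```

With a single known pair many false key pairs also survive; in practice a second known pair is used to filter the candidate list down to the true keys.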

The miss in the middle attack is the converse of that method. Keys
are eliminated from consideration if the results do not match at
the midpoint of the round structure. The technique was illustrated
for the IDEA algorithm by explaining the exact details of a single
round. Since sci.crypt does not support graphics, this report will
only try to communicate the strategy used.

A single round of IDEA was displayed on the screen. The 64 bit 
plaintext block is divided into 4 16 bit words. The round uses
multiplication mod 2^16 + 1, addition mod 2^16, XOR, and a
multiply-add (MA) structure. A 4 way Feistel-style word swap is
present in the round. Each round is an M and a T calculation,
where M is word swapping with the multiply-add, and T is key
mixing using the modular multiplication.
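IDEA's multiplication operation can be sketched in a few lines. The convention (part of IDEA's actual definition) is that the 16-bit value 0 encodes 2^16, so multiplication acts on the group of nonzero residues modulo the prime 2^16 + 1 and every value is invertible:

```python
def idea_mul(a: int, b: int) -> int:
    """Multiplication mod 2^16 + 1, with 0 encoding 2^16 (as in IDEA)."""
    if a == 0:
        a = 0x10000
    if b == 0:
        b = 0x10000
    r = (a * b) % 0x10001
    return 0 if r == 0x10000 else r

def idea_mul_inv(a: int) -> int:
    """Multiplicative inverse under the same encoding (2^16 + 1 is prime)."""
    if a == 0:
        a = 0x10000
    r = pow(a, 0x10001 - 2, 0x10001)   # Fermat: a^(p-2) mod p
    return 0 if r == 0x10000 else r

assert idea_mul(3, idea_mul_inv(3)) == 1
assert idea_mul(0, idea_mul_inv(0)) == 1
```

Invertibility of this operation is what lets the differences be traced both forward and backward through the M and T steps below.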

The "main observation is that IDEA has a 2.5 round differential
with probability zero". 2.5 rounds is M-T-M-T-M. An input 
difference is applied (in 2 blocks) which has words (a, 0, a, 0)
which cannot cause an output difference (b, b, 0, 0). Biryukov
went on to explain how this is known to be true. After M, and
swapping words, the output difference is (a, a, 0, 0). After T
the output difference is (c, d, 0, 0).

The last part of the 2.5 rounds, M, is considered next. If there
is an output difference (b, b, 0, 0), then the pair before M
must have a difference of (b, 0, b, 0). Then the pair before
the last T must be (e, 0, f, 0).

All that is left to describe is the middle M step. We know its
input and output differences from the previous description.
The authors next tell why these inputs and outputs are 
inconsistent.  Once this inconsistency is proven, a way to
find impossible differentials has been established by means of
the miss in the middle attack.

In the middle M step, it is known that the input difference is
(c, d, 0, 0) and the output difference is (e, 0, f, 0). That output
difference before the swap is (e, f, 0, 0). Therefore c=e and d=f
at the input to the MA box, so the output differences c XOR e and
d XOR f are both zero: A CONTRADICTION!

To clarify that, see the following:

(c, d, 0, 0)
(e, f, 0, 0) so (c, d)=(e, f) at the input to MA (see figures)

and after the swap (0, 0) results from the XORs mentioned above.

Due to symmetry, an impossible differential is known to be
input (0, a, 0, b) and output (0, 0, b, b).

Now that the miss in the middle attack has found some impossible
differentials, we are ready to break a 3.5 round IDEA! (See page
5 of the paper in the Fast Software Encryption Workshop 6 
digest of technical papers). Choose 2^32 plaintexts with certain
differentials. Collect 2^31 ciphertext pairs with certain 
differentials. 

For each pair, try all 2^32 possible subkeys used
in the first T that affect certain differentials. Collect 2^16
subkeys which produce ciphertexts with selected differentials 
which are equal. 

For each pair, try all 2^32 possible subkeys used
in the last T that affect certain differentials. Collect 2^16
subkeys which produce ciphertexts with selected differentials 
which are equal. 

Make a list of all 2^32 64 bit subkeys from that work. These
keys cannot be the right key.

Repeat that work for each of the 2^31 pairs in the structure.
Use 90 structures. Collect all impossible keys. The only one 
remaining is the correct 64 bit half key. To get the rest
of the 128 bit key, use the other differential mentioned, that
was found by symmetry. That will get 46 more key bits, since
some subkey bits are in common in both analyses. Guess the final
18 bits of the key.
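A back-of-the-envelope sieve model (my own toy arithmetic, not the paper's analysis) shows why repeating the elimination over many structures whittles the candidates down: if each structure independently eliminates each wrong key with some probability p, the expected number of wrong survivors after t structures is N*(1-p)^t.

```python
# Toy sieve model: N candidate half-keys; assume each structure
# independently eliminates a wrong key with probability p.
# Expected wrong survivors after t structures: N * (1 - p)^t.
N = 2.0 ** 64        # candidate 64-bit half-keys
p = 0.5              # assumed per-structure elimination rate

t = 0
survivors = N
while survivors >= 1.0:      # run until under one wrong key expected
    survivors *= (1.0 - p)
    t += 1

print(t, "structures leave", survivors, "expected wrong keys")
```

The elimination rate per structure in the real attack is different, which is why the paper needs 90 structures rather than the 65 this crude model gives; the point is only the geometric decay.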

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sun, 04 Apr 1999 04:45:46 GMT

"R. Knauer" wrote:
> On Sat, 03 Apr 1999 10:10:06 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
> wrote:
> >In fact it is not hard to create
> >a RNG whose output passes almost any standard statistical test
> >for a uniform, uncorrelated sequence, yet can be cracked by a
> >good cryptanalyst.
...
> >The main thing is, *if* system security requires certain
> >distributional properties of the keystream, and *if* one
> >can discover with an appropriate test that the keystream
> >is reproducibly not meeting those requirements, *then*
> >the keystream is defective for this security application.
> What are the distributional  properties of the keystream for the
> general (proveably secure) OTP cryptosystem? By "general", I mean no
> assumptions and no restrictions on its intended usage.

Absolute security of an OTP keystream against any possible attack
under any possible circumstances requires more than can be
expressed in such terms; see what I said in that earlier paragraph.
More importantly, in practice, the actual operation of the OTP
system usually falls short of the intended operation, thereby
giving the cryptanalyst something to exploit.  The VENONA case
provides one example of this.

I've broken what might be genuine OTP systems myself (at
least, I never did find any regularities in the key streams).
It would not have mattered in the least had the keystreams
been generated by your so-called "TRNG".

It is almost pointless to analyze cryptography from the OTP
point of view, since the impracticalities of key management
mean that in practice OTPs are almost never used.  Actual
cryptosystems have much more interesting regularities,
which is what we actually need to analyze if we want to
know how secure actual systems are, or how to break them.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Is initial permutation in DES necessary?
Date: Sun, 04 Apr 1999 05:00:35 GMT

John Savard wrote:
> I have heard it claimed that an early design of DES ...

It is best to ignore such rumors and go on to more interesting
things.  By far the majority of such rumors are modifications
of other rumors, or speculation tinged with paranoia.  As with
most historical events, there were many active factors, some
of which weren't fully appreciated even by the people directly
involved; so the best possible reconstruction of the history
would still be flawed.  This is made worse by the lack of
accurate public information.  My suggestion is to give weight
to what Coppersmith has said about the history, allowing for
the fact that he isn't telling everything (some of which he
probably doesn't know).  Although I know a lot more about the
Agency's role in the development of DES than I said some time
back, the rules don't allow me to say more.  However, it sure
wasn't an evil conspiracy to snoop on the American public.
(As they say in Sneakers, "No, that's the FBI" :-)

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: GPS, encrypted data base and mushroom hunting
Date: Sat, 03 Apr 1999 23:07:54 -0600

In article <OArVazcf#GA.211@upnetnews05>, "hapticz"
<[EMAIL PROTECTED]> wrote:

> yep, it's just like the salespeople say,...  "Location, location,location!"
> 
A good use of the GPS system would be one that took readings about once a
minute, warned a patrol that it was about to enter enemy territory, and
periodically sent a log of the route.  Then, the Macedonian situation
would be one of record.  

But, I suppose you would need to have all parties agree in advance where
the boundaries actually were.
-- 
Too much of a good thing can be much worse than none.

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: Software for breaking polyalphabetic substitution ciphers
Date: Sat, 03 Apr 1999 23:14:40 -0600

In article <7e5iob$rsg$[EMAIL PROTECTED]>, mike <[EMAIL PROTECTED]> wrote:

> Does any of you know where can I find some good software for breaking
> polyalphabetic substitution ciphers. They are encrypted using a key and
> Vigenere Tableaux.
> 
Sure is news to lots of folks, but the Vigenere is not really
polyalphabetic since the key just applies offsets into one endless string.
True polyalphabetic substitution uses unrelated sequences in the various
alphabets.

About your problem, the size of the key is most important, making hand
solution practical or not, and machine solution easy or hard.  Trivial
keys would be only a few letters long. 
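The point that Vigenere is just offsets into one alphabet is plain in code; a minimal sketch of the standard A-Z Vigenere (nothing specific to mike's ciphertexts):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Vigenere is a repeating Caesar shift: each key letter is just
    an offset into the one ordinary alphabet, not a new alphabet."""
    out = []
    for i, ch in enumerate(text):
        shift = ord(key[i % len(key)]) - ord('A')
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
    return ''.join(out)

ct = vigenere("ATTACKATDAWN", "LEMON")
assert ct == "LXFOPVEFRNHR"          # the classic textbook example
assert vigenere(ct, "LEMON", decrypt=True) == "ATTACKATDAWN"
```

Since every alphabet is a shift of the same string, Kasiski examination and index-of-coincidence methods recover the key length, and then each position is a simple Caesar problem.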

Surely Jim G. has the ready software.  Eh, guy?
-- 
Too much of a good thing can be much worse than none.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sun, 04 Apr 1999 04:09:47 GMT

"R. Knauer" wrote:
> From my recollection of some 30 years ago, I recall that data
> distributions must be square-integrable for the CLT to be valid.

That's the same requirement as for them to have Fourier transforms.
It's the same as requiring the system to have finite total energy.
This is of course not very restrictive in practice.

> Nevertheless, Triola does not restrict the CLT to just normal or
> binomial sample spaces.

That's because the CLT isn't restricted to those.  It holds very
generally, which is the beauty of the theorem.  This is somewhat
similar to Markov's inequality: P{X >= a} <= E(X)/a for any r.v.
X >= 0, or Chebyshev's inequality: P{|X - E(X)| >= k} <= Var(X)/k^2
for any r.v. with finite E(X) and Var(X).  (E(X) is the mean and
Var(X) is the variance).  These bounds aren't nearly as tight as
we can come up with if we do know more about the distribution,
but they are universal.  I'm sure that Triola made this point in
his discussion of the Chebyshev inequality.
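The universality of the Chebyshev bound is easy to check empirically; a quick sketch (the exponential distribution and sample size are arbitrary choices of mine):

```python
import random

random.seed(1)

# Empirically check Chebyshev: P{|X - E(X)| >= k*sigma} <= 1/k^2,
# here for an Exp(1) r.v., which has mean 1 and s.d. 1.
N = 100_000
sample = [random.expovariate(1.0) for _ in range(N)]
mu, sigma = 1.0, 1.0                   # exact for Exp(1)

for k in (2, 3):
    tail = sum(abs(x - mu) >= k * sigma for x in sample) / N
    assert tail <= 1 / k**2            # the universal bound holds
    print(f"k={k}: empirical tail {tail:.4f} <= bound {1/k**2:.4f}")
```

As expected, the bound is loose: for Exp(1) the true two-sided tail at k=2 is about 0.05, far below Chebyshev's 0.25, which is the price paid for assuming nothing about the distribution.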

> Statistics also requires that you know the actual distribution of the
> data so you can calculate the significance of sample means in terms of
> supporting or rejecting the null hypothesis.

No, in fact if you knew the actual distribution of the data (how?
only by observing the *entire* population), there would be no
need to resort to statistics.  It is a *model* of the population
that is tested via statistical computation on the sample.

> IOW, you must know with certainty the mean and variance of the actual
> distribution for the data in order to calculate the significance of
> samples taken to support or reject the null hypothesis. The Z-score
> requires certain knowledge of both mu and sigma for the data
> distribution. Furthermore, according to Triola, the mean and standard
> deviation of the data distribution must be known for an infinitely
> large population.

Having helped a student through Triola last summer, I know he
didn't say things the way that you're trying to reinterpret
them.  There is a distinction between the "true" mean and
variance of a population (which is in principle unknowable if
the population is infinite, and unknowable in practice if the
population is too large to permit exhaustive sampling) vs. our
best estimates of the population mean and variance computed
from the actual sample.  Triola shows how the latter can be
correctly used in reliable statistical testing; one does *not*
have to know the unknowable.  The ideal is used as the basis
for developing an exact, simple theory to understand the
concepts, whereas the actual is used as the basis for *doing*
the tests.  This doesn't invalidate the tests, because they
take into account the necessary adjustments.

The more one *knows* (or can safely assume) about the parent
distribution, the sharper (more powerful) the tests can be.
Since many cases in practice are approximately Gaussian, and
since the CLT says that Gaussian is appropriate in many other
situations, most elementary statistics courses emphasize
tests against the normal distribution.  But there are other
tests that are "model-free", i.e. don't need to make such
assumptions.  One could cobble one up directly from the
Chebyshev inequality, for example (actually we can do better),
and it would be a universal (model-free) test.

> Can anyone give the distribution for a true randomness process?

Sure; in fact if you have no clue what it is you cannot build
your TRNG correctly.

> What are its mean and standard deviation in the infinite limit?

What infinite limit?  They're just parameters used in the process,
or computable from such parameters.  First, you need to say which
process you have in mind.  If it is supposed to output uniformly
random bits, and the r.v. X is the value of a generated bit, then
X has mean 0.5 and s.d. 0.5.
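Those parameters follow directly: for a uniform bit, E(X) = 0.5 and Var(X) = E(X^2) - E(X)^2 = 0.5 - 0.25 = 0.25, so the s.d. is 0.5. A quick simulation sketch (sample size and seed are arbitrary):

```python
import random

random.seed(7)

# X is one uniformly random bit: E(X) = 0.5,
# Var(X) = E(X^2) - E(X)^2 = 0.5 - 0.25 = 0.25, s.d. = 0.5.
N = 200_000
bits = [random.getrandbits(1) for _ in range(N)]
mean = sum(bits) / N
var = sum((b - mean) ** 2 for b in bits) / N

assert abs(mean - 0.5) < 0.01
assert abs(var - 0.25) < 0.01
```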

> And does the CLT apply for that distribution?

Of course it does.

> If you cannot give the actual distribution for a true random process,
> or if you cannot model it with a pseudo-random process, then you
> cannot claim that statistical tests have any validity in the
> determination that a process is not truly random.

Too bad for you that I did just give the parameters of the
distribution (actually, just the first two moments,
but the others are readily calculable) for your "true
random process".

------------------------------

Crossposted-To: 
schl.stu.high,biz.marketplace.comptuers,alt.binaries.janet-jackson,fur.announce,ott.onag.draconian.cabal,microsoft.public.usasalesinfo.developer.multimedia
Subject:    !!!Meet Singles Online!
From: [EMAIL PROTECTED]
Date: Sun, 04 Apr 1999 06:19:34 GMT


Meet singles in your area or around the world for love, romance, dating and more! 
Search a huge database with photos and place your own ad for free!
www.singleconnect.com




------------------------------

From: Chris Liu <[EMAIL PROTECTED]>
Subject: [EMAIL PROTECTED]
Date: Sun, 04 Apr 1999 06:49:04 +0000

http://www.geocities.com/SiliconValley/Network/2811/ec/ec_faq.htm

------------------------------

Date: 4 Apr 1999 07:13:53 -0000
From: Ghostly1 <[EMAIL PROTECTED]>
Subject: Re: Announce - ScramDisk v2.02h
Crossposted-To: alt.security.pgp

May I just add that I am very, very grateful for all your hard work, Aman.

I know it is sometimes very hard to understand how ungrateful some people
appear to be.   Considering the program is virtually bug free (totally bug free
as far as I am concerned), has more facilities than either BestCrypt or
PGPDisk, includes 9 ciphers, and adds a really useful steganography feature,
it all adds up to a bargain.   Then I remember it is totally free.   

Bargain?   It's a fantastic bargain!

For anyone to criticize Scramdisk suggests some hidden agenda.   Maybe to
deliberately piss you off sufficiently that you will drop it.   Perhaps that is
my paranoia showing, but don't let 'em stop you, Aman.    This program is truly
excellent.

From a very grateful user.

Ghostly



------------------------------

From: Rheda Barretina <[EMAIL PROTECTED]>
Subject: Re: New Hash Algorithim
Date: Sun, 04 Apr 1999 14:05:33 +0200

There's a thing I don't really understand about hash functions...

I've looked at your code and at MD5 pseudo-code also.. but the two
algorithms seem to be based just on tons of mathematical and binary
operations, which - I think - each have an inverse operation... So, what
makes them really irreversible?

Wouldn't the use of irreversible operations make a hash function really
secure? (except for plain text attacks & brute force, of course..)

As you have noticed, I'm a newbie :)

Thanks


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
