Cryptography-Digest Digest #520, Volume #9        Sun, 9 May 99 13:13:04 EDT

Contents:
  Re: DES cracked in hardware? ("Douglas A. Gwyn")
  Re: True Randomness & The Law Of Large Numbers ("Douglas A. Gwyn")
  How was this key constructed? ("Tim Stoner")
  Re: How was this key constructed? (Jim Gillogly)
  Re: True Randomness & The Law Of Large Numbers (R. Knauer)
  Re: Factoring breakthrough? ([EMAIL PROTECTED])
  Re: Scramdisk/Norton query ("hapticz")
  Scramdisk: Security flaw in VxD? ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: DES cracked in hardware?
Date: Sun, 09 May 1999 08:17:27 GMT

Keith Brodie wrote:
>     I think you can take it as a given that a DES cracker existed at
> the time it was introduced, ...

I think not.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sun, 09 May 1999 08:14:46 GMT

"R. Knauer" wrote:
> Maybe in that instance 10,000 keys is overkill - but whatever that
> number is, it is certainly not 1. If you think 1-bit bias adequately
> characterizes a TRNG process for the purposes of secure crypto, then
> go ahead and run the FIPS-140 Monobit Test, but at least run it enough
> times under varying circumstances so that you can see a decent
> distribution, from which you can then infer the 1-bit bias
> characteristic of the TRNG.

The FIPS-140 Monobit Test was never meant to certify the cryptographic
quality of the *algorithm*; it is just one of a handful of simple checks
to be performed on presumed high-grade systems to detect when they
might have broken during operation.

> Using the Monobit Test only once has no theoretical justification.
> Claiming that you are measuring 20,000 samples of a single bit has no
> meaning in terms of modeling the TRNG. What you want to do is to
> characterize the overall generation process, not a single bit
> operation (unless you plan on sending only 1-bit messages). You want
> to see how the TRNG behaves for 10,000 bit keys, not 1 bit keys.

Sure, it has a very good theoretical justification.  The required
key stream properties are such that a UBP is a very good model, and
the Monobit Test checks the actual data against one property of that
model.  Sure, it doesn't check serial correlation properties, but it
is just one of a battery of tests; the other tests specified in
FIPS-140 check other properties.  If you want to design a test that
checks all possible properties at once, be my guest -- it has been
suggested (off-line) that Maurer's universal test might be suitable.
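
For reference, the test itself is almost trivial to state; here is a
sketch in Python, using the pass bounds given in FIPS 140-1:

    def monobit_test(bits):
        # FIPS 140-1 Monobit Test: take 20,000 consecutive bits from
        # the generator and count the ones; the sample passes iff the
        # count X satisfies 9654 < X < 10346.
        assert len(bits) == 20000
        ones = sum(bits)
        return 9654 < ones < 10346

    # bits would come from the generator's raw output, e.g.:
    # bits = [(byte >> i) & 1 for byte in raw_bytes for i in range(8)]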

> So you must generate many 10,000 bit keys until you have a
> sufficiently large sample of such keys.  Maybe 1,000 such keys is
> enough to get a distribution which will let you know with reasonable
> certainty that the TRNG will generate crypto-secure 10,000 bit keys.

Unfortunately, by then the brokenness of the key generator has
allowed thousands of sensitive messages to be read by enemy
cryptanalysts.  So, what can you do with only 20,000 sequential
bits?

> Any one 10,000 bit key can be anomalous - that is what Feller and
> Li & Vitanyi have been trying to tell you.

Gee, we don't need them to tell us that, because it is exceedingly
obvious.  The FIPS-140 key-stream-monitoring tests were designed so
that anomalies capable of generating a spurious warning in a
correctly operating generator would be rare enough to not be much
of a nuisance in the anticipated applications.

> Therefore you cannot use just one time average from a single
> sequence to infer anything about the ensemble average.

"I see no ensemble here."  Just a specific instance of a generator
which might or might not be functioning properly.

------------------------------

From: "Tim Stoner" <[EMAIL PROTECTED]>
Subject: How was this key constructed?
Date: Sun, 9 May 1999 05:54:22 -0400

I deciphered the code by figuring out the key, but I don't know how the key
was constructed.  I don't see any apparent pattern.  Does anyone?

A V
B O
C M
D G
E Z
F U
G D
H P
I K
J T
K I
L Y
M C
N X
O B
P H
Q R
R Q
S W
T J
U F
V A
W S
X N
Y L
Z E

Might help to print out.....



------------------------------

From: Jim Gillogly <[EMAIL PROTECTED]>
Subject: Re: How was this key constructed?
Date: Sun, 09 May 1999 06:48:49 -0700

Tim Stoner wrote:
> 
> I deciphered the code by figuring out the key, but I don't know how the key
> was constructed.  I don't see any apparent pattern.  Does anyone?

- ABCDEFGHIJKLMNOPQRSTUVWXYZ
- VOMGZUDPKTIYCXBHRQWJFASNLE

The interesting thing about this is that it's a reciprocal key: A goes
to V, and V goes to A, and similarly for each pair of letters.  One
way to get this kind of thing is to have an Enigma or other reciprocal
machine with its rotors stuck; or one might recover it from one
column of such a cipher machine in depth -- Zendian students will have
puzzled over some like this.  If that's the source, a single alphabet
might not give much insight into how it was constructed.
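
(The reciprocity is easy to confirm mechanically; a quick check in
Python:)

    import string

    plain  = string.ascii_uppercase
    cipher = "VOMGZUDPKTIYCXBHRQWJFASNLE"
    key = dict(zip(plain, cipher))
    # Reciprocal: enciphering any letter twice returns the original.
    assert all(key[key[c]] == c for c in plain)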

Another way it can come about is by sliding a mixed alphabet against itself
at an offset of 13.  For example, ROT-13 has a reciprocal key, and is
the result of sliding the direct alphabet against itself.  I think the
shortest simply keyed alphabet based on a password that would produce
this alphabet is 11 letters long, and there are quite a few possibilities
that would work.  For example, the keyword UDIYCXBHQWJ followed by the
rest of the alphabet in order gives:

UDIYCXBHQWJAEFGKLMNOPRSTVZ
FGKLMNOPRSTVZUDIYCXBHQWJAE

which produces an equivalent set of substitutions to the one above.
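
A few lines of Python confirm the equivalence (a quick check, not part
of the original key recovery):

    import string

    keyword = "UDIYCXBHQWJ"
    mixed = keyword + "".join(c for c in string.ascii_uppercase
                              if c not in keyword)
    # Slide the keyed alphabet against itself at offset 13; because
    # 13 + 13 = 0 (mod 26), the substitution pairs off reciprocally.
    sub = dict(zip(mixed, mixed[13:] + mixed[:13]))
    assert "".join(sub[c] for c in string.ascii_uppercase) == \
           "VOMGZUDPKTIYCXBHRQWJFASNLE"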

There are many other possibilities for this offset-13 keyed alphabet
with longer keys; perhaps one of them is produced from real words.
If so, it hasn't sprung out at me.

Another possibility is that a mixed key was formed by some means,
such as running it through a transposition block, then slid against
itself at offset 13 as above.  If this were suspected, an ambitious
cryptanalyst could try a dictionary search using all the key mixing
methods shown in Military Cryptanalytics, or hill-climb using
features that those key mixing methods would produce.

-- 
        Jim Gillogly
        Mersday, 18 Thrimidge S.R. 1999, 12:48
        12.19.6.3.3, 6 Akbal 11 Uo, Ninth Lord of Night

------------------------------

From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Sun, 09 May 1999 12:45:38 GMT
Reply-To: [EMAIL PROTECTED]

On Sun, 09 May 1999 08:14:46 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:

>Sure, it has a very good theoretical justification.

Then why does Feller claim that it is fundamentally incorrect to infer
the properties of random number generation from the time average of a
single sequence?

>The required
>key stream properties are such that a UBP is a very good model,

The UBP model states that the outcome of a random process is dual
valued with p=1/2. That a TRNG is dual valued comes from the fact that
it is based on a random binary process, so that takes care of the BP
part of the UBP. The fact that it must ideally have p=1/2 comes from
the requirement of a flat distribution of sequences, so that takes
care of the U part of the UBP.

Therefore claiming that a TRNG can be modeled by a UBP says nothing
that we do not already know - it adds nothing substantial to the
discussion.

>and the Monobit Test checks the actual data against one property of that
>model.

And just what might that "one property" be? 1-bit bias perhaps? If so,
then a sequence of 20,000 bits is but one sample used to determine 1
value of that 1-bit bias.

The Monobit Test seems to be saying that John Jones is not likely to
be a salesman because he earns far less than the typical salesman. The
typical salesman makes $100,000 per year and most salesmen (say 95%)
make between $85,000 and $115,000 per year, so Jones cannot be a
salesman since he makes only $25,000 per year.

It is indeed true that Jones earns an amount far outside the central
part of the distribution of salesmen's earnings, and therefore he is
not a "typical" salesman. But what does that say about all the rest of
the people who are salesmen? How can Jones's poor earnings be a
reflection on the typical earnings of the vast majority of salesmen?

The Monobit Test is an attempt to characterize a random process on
the basis of some statistical expectation applied to only one sample
sequence. That's like saying that a herd is not likely made up of
horses because there is one unicorn among them. Similarly, the Monobit
Test purports to show that a TRNG is broken because just one of its
sequences failed the test.

>Sure, it doesn't check serial correlation properties,

We never claimed it did. It is only a simplistic test of 1-bit bias on
one sample.

If you were to take 10,000 such samples at random times and find the
distribution of 1-bit biases from them, you would have a much better
picture of the bias characteristics of the TRNG. That's because such
an extensive test would be closer to an ensemble average than the
simple time average of one sample that the Monobit Test uses.

Here is a prescription for a meaningful test: Take a 1,000-bit sample
and calculate the 1-bit bias for it. Now repeat that 1,000 times at
random and get 1,000 1-bit bias values. Plot those as a distribution
and see how it behaves. Calculate an expectation and a variance, and
see how they compare to the UBP model parameters.
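
Here is a sketch of that experiment in Python; os.urandom stands in
only as a placeholder for the TRNG under test:

    import os, statistics

    def one_bit_bias(nbits=1000):
        # Fraction of ones minus 1/2 in one nbits-long sample.
        data = os.urandom(nbits // 8)   # placeholder for the TRNG
        ones = sum(bin(byte).count("1") for byte in data)
        return ones / nbits - 0.5

    biases = [one_bit_bias() for _ in range(1000)]
    # The UBP model predicts mean 0 and standard deviation
    # sqrt(0.25/1000) ~ 0.0158 for these 1,000 bias values.
    print(statistics.mean(biases), statistics.stdev(biases))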

Then you will gain a real insight into the TRNG process, something you
cannot possibly do with a single sample of 20,000 bits and just one
value of the 1-bit bias.

>but it is just one of a battery of tests; the other tests specified in FIPS-140 check 
>other properties.

There are a total of 4 such tests, the Monobit Test being the first.
After we arrive at a consensus on the Monobit Test, if ever, we need
to consider the merits of the remaining tests one at a time.

I have mentioned that the Long Run Test appears to have some
theoretical justification because a floating or shorted output is a
possible mechanism of failure for a TRNG. But let's discuss each test
in due time. If we cannot resolve the issues for the first test, there
is little reason to go on to a discussion of the remaining tests.
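
(For reference, the Long Run Test is as easy to implement as the
Monobit Test; a sketch in Python, using the FIPS 140-1 limit of 34:)

    def long_run_test(bits):
        # FIPS 140-1 Long Run Test: fail on any run of 34 or more
        # identical bits in the 20,000-bit sample - exactly the
        # stuck-at/floating-output failure mode mentioned above.
        run, prev = 0, None
        for b in bits:
            run = run + 1 if b == prev else 1
            prev = b
            if run >= 34:
                return False
        return True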

>If you want to design a test that
>checks all possible properties at once, be my guest -- it has been
>suggested (off-line) that Maurer's universal test might be suitable.

I do not believe it is possible to design a single test. Each test is
a measure of the strength against a particular attack, and since there
are many different attacks against a stream cipher, I conclude that
there must be many different tests.

>Unfortunately, by then the brokenness of the key generator has
>allowed thousands of sensitive messages to be read by enemy
>cryptanalysts.

Not necessarily, if you buffer the keystreams and use them only after
testing. Memory and disk space are cheap.

>So, what can you do with only 20,000 sequential bits?

Beats me.

I do not think you can do anything meaningful with only one such
sequence. Perhaps you could break the sequence into 1,000 samples of
20 bits each and use them to plot the distribution and calculate the
parameters of the UBP model. But 20 bits does not seem all that many
to calculate a 1-bit bias and 1,000 samples does not seem all that
many for getting a true distribution.

Perhaps someone can give us the calculations for how large any single
sample must be and how many such samples we would need to arrive at
"reasonable certainty" regarding the distribution of 1-bit biases and
UBP model parameters.
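
As a first cut, the usual normal approximation gives an answer; a
sketch in Python (only a rough estimate, not the rigorous treatment I
am asking for):

    import math

    def bits_needed(epsilon, z=1.96):
        # Normal approximation: the standard error of the measured
        # bias of a p=1/2 source is 1/(2*sqrt(n)), so pinning the
        # bias to within +/- epsilon at confidence level z requires
        # about (z/(2*epsilon))**2 bits.
        return math.ceil((z / (2 * epsilon)) ** 2)

    print(bits_needed(0.01))    # 9604 bits for +/- 0.01
    print(bits_needed(0.001))   # 960400 bits for +/- 0.001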

But that is not what the Monobit Test is doing. It is calculating a
single 1-bit bias value from a single sequence of 20,000 bits and
claiming that a TRNG is broken if that single bias value is outside
some statistical range of expected values. That is snake oil of the
purest kind.

>> Any one 10,000 bit key can be anomalous - that is what Feller and
>> Li & Vitanyi have been trying to tell you.

>Gee, we don't need them to tell us that, because it is exceedingly
>obvious.

And your statement is exceedingly smug - which is not at all
surprising coming from you.

Nothing about true randomness is "exceedingly obvious" - except to an
idiot.

Recall how statisticians screwed up when Feller confronted them with
the results of his random walk simulations. They could not believe
that only 8 paths out of 10,000 would end up at the origin. After all,
the origin represents the expected value of the random walk variable,
and most paths are contained within a narrow range of that expected
value. Therefore, according to those statisticians who considered
randomness to be "exceedingly obvious", it is not possible for the
random walk variable to be at the origin only 8 times out of 10,000
trials.

And mathematicians once proved that bumble bees could not fly.

>The FIPS-140 key-stream-monitoring tests were designed so
>that anomalies capable of generating a spurious warning in a
>correctly operating generator would be rare enough to not be much
>of a nuisance in the anticipated applications.

Thereby giving a false sense of security.

Tests on a single sequence can either give a lot of false alarms if
they are too tight or can give a false sense of security if they are
too loose.

>"I see no ensemble here." 

I take that to be a quote of Herman Rubin.

Herman Rubin is entitled to his opinion on the matter of the existence
of ensembles, but many people in the sciences and mathematics do refer
to them, if only conceptually. Feller himself refers to the concept of
an ensemble throughout his books.

In our case, the ensemble represents the collection of all possible
sequences of finite length that a TRNG can generate. I see no harm in
using that as a conceptual framework for our discussions.

>Just a specific instance of a generator
>which might or might not be functioning properly.

Well, which is it? Is it functioning properly or not?

You cannot know, if all you take is one sample.

Bob Knauer

"There is much to be said in favour of modern journalism. By giving us the opinions
of the uneducated, it keeps us in touch with the ignorance of the community."
-- Oscar Wilde


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Factoring breakthrough?
Date: Sun, 09 May 1999 11:45:34 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (wtshaw) wrote:
> In article <7h2i7i$ag7$[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>
> > In article <[EMAIL PROTECTED]>,
> >   [EMAIL PROTECTED] (wtshaw) wrote:
> > >
> > > Precision of an analog device depends largely on the observer, which might
> > > be finer than the abilities of a digital instrument to quantify, or not.
> > > Your first line is not necessarily true; it all depends on the
> > > circumstances.
> >
> >     I suspect that one of these is always traded at the expense of
> >     the other in the finest level of detail, just as waves are traded
> >     for particles and vice versa in quantum mechanics leading to
> >     the idea of the generic wave-particle or quanta which tries to
> >     manage the dualism wholistically (not necessarily holistically
> >     since quantum field theory is still in its infancy).
>
> Waves are analog and particles are digital, so to speak.  Either approach
> at this level is just one view of what cannot be fully described in a
> unified manner.

       I see others approaching the unification. Are you aware
       of some fundamental law which forbids this unification?

> > > The essence of good management is in effective handling of impossible
> > > situations.  The choice between analog or digital is not necessarily a
> > > difficult one.
> >
> >     Yes. When time does not permit, choices must be made.
> >     Instinct overtakes cognition. Longer contexts breed more
> >     general solutions though.
> >
> > > Since the group is sci.crypt, perhaps I should try to relate this to
> > > something cryptological: If you can make an analyst happy with the
> > > precision of his preliminary judgements about data while making them
> > > inaccurate, you have him pretty well at your mercy.  This is the essence
> > > of the value of laying a false trail, which is really a sneaky thing to do
> > > in ciphertext.
> >
> >
> >     In cryptography, I imagine that this effect is as inescapable
> >     as it is in physics (being related to Heisenberg's uncertainty
> >     through the corresponding Fourier uncertainty). And this requires
> >     the analyst to seek the best case in terms of optimization
> >     just as one dissects information from the noise of any signal
> >     with a bag of tools and not just one (news:comp.dsp comp.speech)
> >     The NSA seems to recognize "data fusion" as an optimization problem.
>
> If you say so.  I wrote your last sentence down; it sounds important,
> at least impressive. But seriously, I'm sure NSA likes complication in
> what it makes, but doesn't in what it must attack.

      Assuming what the NSA publishes is what isn't considered secret,
      with sufficient publication one can infer some things about what
      it does know, if one already knows what it can know. Picking
      a needle out of a haystack is easy when you understand what
      properties the needle has, or must have, that distinguish it
      from the background.

      I imagine this statement applies to any ciphertext as well.
      If one knows the constraints of encipherment, one can extract
      a plaintext from the background noise of the ciphertext.

> >     The physical aspects being relevant to cryptography at the
> >     theoretical level seem to suggest many things on a practical
> >     level which is why I think analog/digital is as important
> >     to crypt as wave/particle is to quantum physics.
>
> The new optical inspection breaking routine is surely a combination of
> analog and digital modes of handling information, but the reality of
> seeing through feet, much less a few layers of transparencies, calls
> into question the very usefulness of the suggested technique; the laws
> of optics, including limits on the behavior of light, are a significant
> obstacle in taking the desired simple design into the real world.

       At the most fundamental level, I see little difference between
       cryptography and quantum physics. Since all constraints on
       encryption are based on the difficulty of analysis, the
       problem reduces to a physical and not a mathematical one.

       The mathematics seems only to define a relative and subjective
       magnitude of the hardness, while the physics/technology
       determines its absolute magnitude.

       Perhaps there is some technique that I am unaware of which
       addresses physical complexity as well as mathematical complexity?



------------------------------

From: "hapticz" <[EMAIL PROTECTED]>
Subject: Re: Scramdisk/Norton query
Date: Sun, 9 May 1999 10:41:34 -0400

approach with this method:

delete these "rogue" files, and then immediately create a known
temporary fake file large enough to entirely displace the expected
returning .SVL file, or even the entire disk freespace (an attempt
to starve the disk).

if the space is still being re-allocated, then there may be reason to
believe that the file system is corrupted.
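
a filler file of a given size can be made with a few lines of python;
the path and filename below are only placeholders:

    import shutil

    free = shutil.disk_usage("C:\\").free  # free bytes on the volume
    chunk = b"\x00" * (1 << 20)            # write 1 MB at a time
    with open("C:\\filler.tmp", "wb") as f:
        for _ in range(free // len(chunk)):
            f.write(chunk)
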
--
best regards
[EMAIL PROTECTED]

remove first "email" from address, sorry i had to do this!

N wrote in message ...
|Thanks for your feedback, Shaun - appreciated.  But I think you may have
|misunderstood my problem: the option to 'temporarily allow deletion of SVL
|files' does not, I believe, impact on my circumstances as described...
|
|These SVL files are **completely spurious** - they simply never
|existed!  I don't have a 200Mb container file on my system and the
|containers I DO have don't have any extension label.  These apparently
|protected files seem to be nothing more than a figment of Norton's
|imagination, although I can't really see how I can hold it to blame for
|this.
|
|When these protected files appear in my protected files list, Norton
|informs me that the directory they were deleted from was
|C:\Recycled\Nprotect, which is spooky since this is a Norton folder
|which - as I understand it - only exists to hold protected files!
|It doesn't make any sense (to me at least)!  When I remove these
|files individually from Norton protection, they simply reappear either
|moments later or by the time Windows has rebooted. Clearly something is
|amiss, and I can only imagine it must be the Scramdisk driver at fault
|because I think this is the only program running which can be generating
|such large SVL files and then deleting them (although I don't
|understand the mechanics of this).
|
|I am, naturally, unhappy about apparently having over 200Mb of my
|disc space taken up continuously to protect phantom files!
|
|Regards
|N
|
|
|
|Shaun Hollingworth wrote in message <[EMAIL PROTECTED]>...
|>On Fri, 23 Apr 1999 23:44:45 GMT, "N" <[EMAIL PROTECTED]> wrote:
|>
|>>Can anyone tell me why deleted files with an SVL extension keep
|>>appearing in my Norton protected recycle bin, even though no
|>>container files have been loaded or deleted and the Scramdisk
|>>utility program has not been running?
|>>
|>>When I remove them from the bin, they always reappear, often
|>>within minutes!
|>>They normally have a name such as 00000011.svl or 00007337.svl,
|>>for example, and range in size from 20K to 200Mb!  I have tried
|>>excluding this file extension from Norton Protection, but to no
|>>avail.  Norton cannot identify which program deleted them, but
|>>since the Scramdisk utility program isn't running presumably it
|>>must be the work of the driver SD.VXD?
|>>
|>
|>The driver protects the deletion of SVL files.
|>
|>It does not protect the renaming, so you can either:
|>
|>1: Run the Scramdisk exe and click on allow deletion of SVL files
|>in config. This is just set temporarily.
|>
|>2: Rename the files to something other than .SVL and then delete them.
|>
|>Windows gives an error box if you use explorer to try and delete them.
|>Perhaps you are using another program that assumes you can never get
|>an error from delete.......
|>
|>Regards,
|>Shaun.
|>
|>>It does seem to be a gross waste of space for spurious files as large as
|>>200Mb to be taking up this kind of space continually!
|>>
|>>Thanks
|>>N

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: alt.security.pgp,comp.security.misc
Subject: Scramdisk: Security flaw in VxD?
Date: Sun, 09 May 1999 16:49:28 GMT

While looking at ScramDisk, I came across what appears to be a
significant flaw in the way it handles (caches) passwords.

In particular; it is possible to write an application that can interrogate the
driver for the (plaintext!) passwords it has cached.

Normally, to mount a SVL file, the user performs the following steps:

1) Launch ScramDisk
2) Enter passwords
3) Mount the SVL (scrambled) file
4) Exit ScramDisk (or anything else)

The security flaw occurs between steps 2 and 3: at this time, the password is
stored in the ScramDisk driver's cache in plaintext, and can be read easily by
a covert "sniffer" program.

To demonstrate this problem, I've written a short Delphi program that
displays the passwords entered as a volume is mounted; it is available
(with source) from:
http://www.fortunecity.com/skyscraper/true/882/ScramDiskFlaw.htm

Looking at it, it should be fairly trivial (as the application stands
at the moment) to write a program that quietly runs itself at startup
and harvests passwords to be collected (emailed off?) at a later date.

This flaw is due to GETPASSWORDBUFFER in the VxD, and it appears even
if you enter your passwords using the RED screen, defeating the object
of this otherwise pretty neat feature.

(This anomaly affects v2.02h, and presumably earlier versions.)

--
Sarah Dean
http://www.fortunecity.com/skyscraper/true/882/


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
