Cryptography-Digest Digest #840, Volume #13       Thu, 8 Mar 01 19:13:01 EST

Contents:
  Re: The Foolish Dozen or so in This News Group (Benjamin Goldberg)
  Re: One-time Pad really unbreakable? (Paul Rubin)
  Re: The Foolish Dozen or so in This News Group (Benjamin Goldberg)
  Re: The Foolish Dozen or so in This News Group (Benjamin Goldberg)
  Re: New unbreakable code from Rabin? (Tim Tyler)
  Re: Super strong crypto (David Wagner)
  Re: OT: The Big Breach (book) available for download (Patton Echols)
  Re: qrpff-New DVD decryption code (Sundial Services)
  Re: => FBI easily cracks encryption ...? ("Mxsmanic")
  Re: Super strong crypto ("Douglas A. Gwyn")

----------------------------------------------------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Crossposted-To: alt.hacker
Subject: Re: The Foolish Dozen or so in This News Group
Date: Thu, 08 Mar 2001 22:52:32 GMT

Anthony Stephen Szopa wrote:
> 
> Scott Fluhrer wrote:
> >
> > It is written "Argue not with a fool, lest you be like him".  Here I
> > go again ignoring that excellent advice...
> >
> > Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote in message
> > news:[EMAIL PROTECTED]...
> > > Scott Fluhrer wrote:
> > > >
> > > > Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote in message
> > > > news:[EMAIL PROTECTED]...
> > > > > The Foolish Dozen or so in This News Group
> > > > >
> > > > > Read below.  It'll be just like being forced to look into a
> > > > > mirror.  You'll see who you all really are.  Read it and weep.
> > > > >
> > > > > Here is why I think Ciphile Software's OverWrite program
> > > > > actually
> > > > > <majorsnip>
> > > >
> > > > careful with your fopen() and your fclose() functions won't help
> > > > you there either.
> > > >
> > > > --
> > > > poncho
> > >
> > >
> > > "...After all, it's in control of the file system...  See,
> > > logically..."
> > >
> > > What people fail to perceive quickly enough is that we are talking
> > > to several sick minds.
> > Of course, "sick minds" is being defined as "anyone with any
> > technical experience that indicates that the problem might be
> > somewhat less trivial than Anthony Szopa believes."
> >
> > >
> > [Re: OS's buffering writes]
> > > I guess what you say is true even if I am writing to the file in
> > > append binary mode, I suppose?  NOT!
> > I suspect you meant writing the file in update mode, but in any
> > case, of course the OS (and the disk driver, and the disk controller
> > itself) can buffer writes.
> >
> > [Do any of you claim that any OS that you know of actually returns
> > before actually writing the data to the disk]
> > > "...known as Unix which as precisely this property..."
> > >
> > > You are slandering UNIX.
> > Hardly.  You may not be aware of this, but several jobs ago, a large
> > part of my job was to maintain several versions of the Unix kernel.
> > There was very little that the Unix kernel did that I was unaware of
> > (it helped somewhat that, back in those days, the Unix kernel was
> > considerably smaller).  And, I can swear unequivocally that those
> > versions of the Unix kernel did have precisely that property.  If
> > you disagree, may I ask after your qualifications as a Unix kernel
> > hacker?
> >
> > In any case, I also mentioned (in a part you snipped) that I
> > verified experimentally that Linux (a Unix work-alike) did in fact
> > have that property (either that, or the disk on that computer spins
> > at 384,000 RPM).
> >
> > > So, what you are saying is that everything goes on in cache and
> > > that disk space is not under the operator's control.  A file can
> > > be written to one place on a hard drive then read into cache.
> > > Processed then written to a completely different place on the hard
> > > drive.  And this process can continue I suppose until the entire
> > > hard drive has been written over once and no bit locations have
> > > been overwritten.
> > Yes, at least in theory.  And, according to Darren New, the Atari
> > disk OS has pretty much this behavior.  I'm glad to see we agree.

Part of the problem, I think, is that you, Szopa, exaggerated something
that's perfectly valid, and Fluhrer agreed with you.  "Everything goes
on in cache and that disk space is not under the operator's control."
Perfectly correct.  "A file can be written to one place on a hard drive
then read into cache."  Very confused statement, this.  Scott Fluhrer
was probably wrong to simply say yes without first asking you to
clarify what it's supposed to mean.

Let's ignore for the moment the C library buffers (which fwrite fills,
and which get flushed into the OS when full; these, btw, are what
fclose flushes).  You have a file opened in "rb+" mode.  You write() to
some location.  This results in the OS loading that sector from disk
(not just the one location, but the whole sector surrounding it), and
modifying the one byte you write().  Repeated over the whole file, this
would in theory (on a computer with an infinitely large cache) result
in the entire file being loaded from disk into memory.  In practice,
the least-recently-used OS buffers get written back to disk as main
memory fills up.
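
To make the layering concrete, here is a minimal sketch of a one-byte
overwrite (my own illustration, not Szopa's actual code, and the
filename is made up), with comments marking how far each call actually
pushes the data:

    #include <stdio.h>

    /* Overwrite one byte in the middle of a file.  Each call below
     * only moves the data one layer closer to the platter. */
    int main(void)
    {
        FILE *fp = fopen("victim.dat", "rb+");  /* update, no truncate */
        if (fp == NULL)
            return 1;
        fseek(fp, 1000L, SEEK_SET);
        fputc(0, fp);   /* lands in the C library (stdio) buffer      */
        fflush(fp);     /* stdio buffer -> OS buffer cache, no further */
        /* fsync(fileno(fp)) would ask the OS to push its buffers to
         * the drive -- and even then the drive's own cache may still
         * hold the data. */
        fclose(fp);     /* flushes stdio only, just as fflush did */
        return 0;
    }

Nothing in this program ever forces the platter itself to change.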

> > > I would think this is not likely because of the optimization you
> > > claim to be expounding.  The drive heads are already over these
> > > bit locations.  To wander all over the hard drive writing to no
> > > predetermined fixed hard drive bit locations would be inefficient
> > > and un-optimizing.
> > For one, "unlikely" != "can't happen".  The drive heads might not be
> > "over these bit locations", as some other program may have forced
> > the drive heads elsewhere.  An OS might decide to write the sectors
> > where the heads happen to be.  I have already mentioned one case
> > (disk compression) where relocating sectors is quite likely.  Darren
> > New mentioned another.  An OS attempting to do on-the-fly disk
> > optimization is a fourth conceivable example.
> >
> > > With the reliability of computers these days most operations do
> > > take place successfully.  There is no reason for the write
> > > locations to be moved around repeatedly.  Give us a real world
> > > reason that happens regularly.  You have only given us a once in a
> > > blue moon possibility that it might happen.  What do you think,
> > > once in 100,000,000 writes?

How do you know that most hdd operations take place successfully?  After
all, with write-behind and automatic relocation of bad sectors, any disk
errors are completely invisible to the normal user.

> > Actually, in my previous posting, I gave one real world reason
> > ("disk compression").  Above, I gave three others.
> >
> > And, in any case, the "mapping around write defects dynamically"
> > algorithm was not given as an example of relocating sectors, it was
> > a reason why write buffering may be a valid optimization, even if
> > you have an imperfect disk (and, some systems have disks, such as
> > SCSI, which claim to be perfect).
> >
> > > Your hard drive was quiet because the heads did not move once they
> > > were in position.  Is that about 1GB with no head movement in 156
> > > seconds?  If you say so.  This is not the point.
> > Are you claiming that my disk really spins at 384,000 RPM?  If not,
> > how could it have possibly done one million writes to one sector in
> > 156 seconds?

Szopa, please address this point.
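
(For anyone who wants to reproduce something like Fluhrer's experiment,
here is a rough sketch -- mine, not his actual program, and the
filename is invented.  A truly synchronous rewrite of one sector costs
a full disk rotation, about 8.3 ms at 7200 RPM, so a million of them
should take over two hours.  Finishing in 156 seconds works out to
Fluhrer's 384,000 RPM; if this loop finishes in minutes, a cache is
absorbing the writes:)

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        char sector[512];
        long i;
        time_t start;
        FILE *fp = fopen("testfile.dat", "rb+");
        if (fp == NULL)
            return 1;
        memset(sector, 0xAA, sizeof sector);
        start = time(NULL);
        for (i = 0; i < 1000000L; i++) {
            fseek(fp, 0L, SEEK_SET);         /* same spot every time */
            fwrite(sector, 1, sizeof sector, fp);
            fflush(fp);     /* past stdio, into the OS -- no further */
        }
        fclose(fp);
        printf("elapsed: %ld seconds\n", (long)(time(NULL) - start));
        return 0;
    }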

> > > The specific point is whether or not a write operation is actually
> > > taking place in Ciphile Software's OverWrite Security program.  I
> > > am saying that it is by all my reasoning.  The fclose() function
> > > with a conditional statement seems to force a write but you are
> > > saying that the head is wandering all over the hard drive and
> > > writing the resultant files repeatedly all over the hard drive in
> > > binary append mode.
> > >
> > > So what you are saying is that even if I just overwrite one
> > > character in a file this entire file will be loaded into cache if

You overwrite one character, and the entire sector (not necessarily
entire file, unless it's a small file) will be loaded into cache.  One
character in the cache gets modified, as per your overwrite.  Then this
sector gets written back to disk (eventually...  OS buffers don't write
out until main memory is full, or a timer says to write the page out, or
shutdown occurs).

> > > it will fit and this one character will be changed and then the
> > > entire file will be written to some other location on the hard
> > > drive so that there will be two files on the hard drive:  the

When the sector gets written to disk, it may or may not go to the
location which the sector was originally read from.

> > > original file and this newly written file.  And your position is

Two sectors, not two files.

> > > that this is your view of optimization.

Yes.  It is not slower to write in a new location than the old
location, and may be faster if the disk heads are already in position.
Also, in a transaction-based scheme, we DON'T want to actually lose the
old on-disk version until the new on-disk version is totally complete.
Also, with disk compression, our newly written block may fit into a
smaller space, so it is an optimization to *write* it to a smaller
space.
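
(This, by the way, is exactly the pattern a careful application uses on
its own: write the new version to a temporary file, and only then
rename it over the old one.  A minimal sketch, with made-up filenames:)

    #include <stdio.h>

    /* Safe-update pattern: the old version stays intact on disk until
     * the new version is completely written.  After the rename, the
     * old file's sectors go back to the free list unerased -- exactly
     * the copies a file overwriter never gets to see. */
    int main(void)
    {
        FILE *fp = fopen("data.tmp", "wb");
        if (fp == NULL)
            return 1;
        fputs("new contents\n", fp);
        fclose(fp);
        /* replaces data.dat; atomic on POSIX systems */
        if (rename("data.tmp", "data.dat") != 0)
            return 1;
        return 0;
    }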

> > > Interesting.
> > Yes.  Do you engage in strawman arguments often?
> >
> > >
> > > "...These things typically have no knowledge of the file
> > > system..."
> > > You'll run to any length it is apparent to avoid the topic.
> > The fact that the disk caching routines may not be informed of
> > fclose's is irrelevant?  I thought that fclose forcing writes was
> > the basis of your argument, and so a disk caching routine not being
> > informed about it would appear to be highly relevant.  And, yes,
> > some disk caches are ignorant of fclose's -- I know, because I wrote
> > some.

Address this, please, Szopa.

> > > "...Another nasty problem arises if disk compression is in
> > > use...."
> > > Run on.  Run on.
> > Actually, if I were you, I'd look at disk compression real closely
> > -- it's a very sticky problem for file overwriters.  For example, if
> > a file originally took up 2000 physical sectors, and you overwrote
> > it with data that compresses down to 500 sectors, why would you
> > expect the OS to overwrite the 1500 remaining sectors?

Szopa, tell us how your OverWrite deals with disk compression, please.

> > > Has anyone seen the topic of this thread laying around here
> > > anywhere?
> > I thought the topic was "Is the algorithm that Anthony Szopa gave
> > guaranteed to properly overwrite an arbitrary file?".  I claim that
> > all the points I brought up were relevant to that.

I think that by now, everyone [except Szopa] is able to see that his
algorithm is not guaranteed to properly overwrite an arbitrary file.

> > > "...disk drives often write bits to parts of the drive without you
> > > asking it to...."
> > >
> > > What does this mean:  a bit here a bit there?  Stopped running to
> > > other topics to run off about who knows what this means:  "...disk
> > > drives often write bits to parts of the drive without you asking
> > > it to...."
> > Well, (at least, as reported earlier), disk drives often save data
> > to hidden parts of the disk for various reasons.  This is perfectly
> > valid, because as long as the disk drive correctly performs the
> > operations asked of it ("accept writes to sectors, and then give
> > back that exact data when that sector is later read"), it can do
> > anything it finds expedient.  This is important to a disk
> > overwriter, because writes may not overwrite this redundant storage.
> >
> > Other things that occur to me that a disk overwriter might need to
> > worry about:
> >
> > - Disk optimizers -- run either at a user command, or as a
> > background process.  These routines shuffle files around so they can
> > be accessed faster, usually by making them consecutive.  I would
> > claim that this is a problem for a disk overwriter, because after
> > the disk overwriter has completed, there may still be copies of the
> > file remaining in the free list, and this violates the
> > security guarantee that the overwriter ought to make.

Szopa, please address this.  What happens if I write a file, then
defragment my hdd (resulting in the file being relocated, and the old
sectors going onto the free list), and then want to OverWrite it?
Does your OverWrite do anything about the old (pre-defrag) location of
the file?

> > - Versioning file systems.  These are out-of-favor currently, but
> > they do exist.

Szopa, please address this.

> > - Transactioning file systems.  I'm not sure how these work, but I
> > wouldn't be shocked to hear that they also move sectors around at
> > times.

I know transactioning file systems exist.  Any professional large
database system uses one, when possible.

> > - RAID disks.  Again, I'm not sure if this is really a problem, but
> > have you studied all the various RAID levels to verify that there
> > isn't a problem lurking there somewhere?

Szopa, please address this.

> >
> > --
> > poncho
> 
> You are another propagandist.

What's a propagandist?

> I have replied with detailed cogent points that have gotten the
> better of most of you.  At no time did I indicate that I thought
> these were trivial points.  So admit you are lying.

Erm, no, you've entirely missed the important points Fluhrer was making.

It seems that he's gotten the better of you, not the other way around.

> The fclose() flushes the buffers.  So explain to us which buffers
> are not flushed and why you think then that these are the ones that
> are used by the conniving OS to conspire to not write to the hard
> drive as the code instructions demand.

fclose flushes the C library buffers.  Not the OS buffers, not the disk
buffers.

Also, the "hard coded instructions" you speak of are *requests* for
things to take place, not demands.  And for that matter, they are soft
coded, not hard coded.  If they were in ROM, they would be hard coded.
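
If OverWrite wants the data pushed at least as far down as the OS will
promise, the usual incantation on a POSIX-ish system looks like this (a
sketch only; the function name is mine):

    #include <stdio.h>
    #include <unistd.h>     /* fsync() */

    /* Push buffered data as far down the stack as we can request. */
    static int flush_all_the_way(FILE *fp)
    {
        if (fflush(fp) != 0)          /* stdio buffer -> OS cache */
            return -1;
        if (fsync(fileno(fp)) != 0)   /* OS cache -> the drive    */
            return -1;
        return 0;   /* the drive's own write cache is still beyond us */
    }

And even fsync is a *request* in the above sense: it returns once the
drive has accepted the data, but a drive doing write-behind caching of
its own may still be holding it.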

> You once thought you knew all there was to UNIX and you were
> admittedly wrong.  It is better than even money that you don't
> know any better now.

Ehh?  When did Fluhrer say anything wrong about UNIX?

> We don't agree.  This is all off topic.  This is a windows program,
> not Atari.  Stick with the thread and make a point if you have one
> that can last more than its original posting.

The only reason that explanations were given wrt *NIX rather than
windows is that windows is closed source.  There is nothing whatsoever
to indicate that its write-behind disk caching is any different, at
all, from *nix or Atari, or any other system which uses it.

> Well, there you go again.  You are making another hypothetical to
> detract from the fact that you cannot stick with the thread.

The topic of the thread is whether your overwrite can somehow work in
spite of write-behind.  We don't have source for windows write-behind,
but we do have source for linux write-behind.  We have no reason to
believe that windows write-behind works differently from linux
write-behind, so proving that your algorithm will not work on linux due
to write-behind strongly implies that it will not work on windows for
the same reasons.

> This is a personal use program for windows.  I think it is fair to
> say this program is for and intended for a non multitasking
> environment.

hehe.  I guess you could call windows a non-multitasking environment, if
you really dislike it.  However, it actually *is* a multiprogramming
environment, whatever you choose to believe (just not a very good one).

> I've given you enough of my time.
> 
> Take a look at my floppy disk test results I just posted.  They seem
> to cramp pretty much everything all the detractors have been saying.

Well, if write-behind were used for floppies, it would.

> Let's see you come up with some more off topic hypothetical BS
> refuting this floppy test.

You're the one whose eyes are turning brown.

-- 
The difference between theory and practice is that in theory, theory and
practice are identical, but in practice, they are not.

------------------------------

From: Paul Rubin <[EMAIL PROTECTED]>
Subject: Re: One-time Pad really unbreakable?
Date: 08 Mar 2001 15:06:10 -0800

Tim Tyler <[EMAIL PROTECTED]> writes:
> You can read more about such exploits in a popular book on the subject:
> The Newtonian Casino - Thomas A. Bass -
>   http://www.amazon.co.uk/exec/obidos/ASIN/0140145931/
> 
> ``The true story of how a group of young physicists and computer
>   scientists developed a computer to predict the results of roulette. They
>   then smuggled the device into the casinos of Las Vegas, hidden in the
>   soles of their shoes.''
> 
> : In other words: here is a counterexample - a sequence of physically
> : generated random numbers that cannot effectively be predicted.

In the US this book is called "The Eudaemonic Pie".  It's excellent.

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Crossposted-To: alt.hacker
Subject: Re: The Foolish Dozen or so in This News Group
Date: Thu, 08 Mar 2001 23:12:31 GMT

Anthony Stephen Szopa wrote:
> 
> Darren New wrote:
> >
> > David Hopwood wrote:
> > > And yes, this does mean that if power is lost suddenly or the OS
> > > crashes, some changes may not be written, even though fclose
> > > succeeded.
> >
> > Actually, I just loaded a "Windows Update" on my machine that added
> > a 2-second delay after the OS flushes the disks before it actually
> > removes power from the drives during shutdown, to give the hard disk
> > a chance to actually finish the write. The behavior is
> > well-documented on MS's update web pages.
> >
> > Now, maybe Mr Szopa would care to explain why, if fclose() makes
> > sure all the sectors are written, the OS has to wait two seconds for
> > the write to finish after all your programs have exited.
> 
> In any event, I updated the OverWrite software by adding one simple
> line of code that is only 8 characters long for each pass.

This is not an explanation.  Szopa, please give an explanation why the
OS has to wait two seconds etc.

> Now when you run it you can see it go to the hard drive 27 times
> just by watching your hard drive LED access light and perhaps by
> listening to your hard drive as well. 

The LED goes on for bus activity with the drive.  The drive has its own
cache.  How do you work around this?

> Choose a file about 2 - 6 KB for starters.  Depending on the file size
> it may access the hard drive more than this:  it appears that it works
> in multiples of 27 but they can get hard to count.  Depending on the
> file size some parts of multiple accesses for a single pass on larger
> files can be pretty short.

For a large enough file, writing to later portions of the file results
in more buffers being needed for them, and earlier buffers being forced
onto disk.  Thus, your example only works if the selected file is larger
than the cache.  If not, then all modifications take place in memory.

> I could see the hard drive access LED clearly lighting up 27 times.
> I had to cup my hand over the LED to make it dark enough so I could
> see it clearly.  Can't hear much on my one system's hard drive but
> I can sure hear it on my other system's hard drive.

The light is only the bus transfer.  The actual write takes place later,
when the hard drive's buffers get full (or when a clock inside the drive
says to).

-- 
The difference between theory and practice is that in theory, theory and
practice are identical, but in practice, they are not.

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Crossposted-To: alt.hacker
Subject: Re: The Foolish Dozen or so in This News Group
Date: Thu, 08 Mar 2001 23:20:58 GMT

Eric Lee Green wrote:
[snip]
> Well, there's three layers to consider here: Filesystem, buffer cache,
> and underlying hardware.

Err, four.  Szopa's using fopen, fwrite, fclose, so there's also a C
library cache in addition to the other layers.

-- 
The difference between theory and practice is that in theory, theory and
practice are identical, but in practice, they are not.

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: New unbreakable code from Rabin?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 8 Mar 2001 23:15:18 GMT

[EMAIL PROTECTED] wrote:
: Tim Tyler <[EMAIL PROTECTED]> writes:

:> Which is not as high as the authors seem to think and in no way justifies
:> their apparently ridiculous security claims :-(

: If you would actually read and understand their security claims you
: would not be so likely to say this.

At the head of this thread: ``"This is the first provably unbreakable code
that is really efficient" Dr. Rabin said.''

That seems to be the claim that a break is impossible.

I seem to be justified in describing this claim as "apparently
ridiculous".  It doesn't matter /what/ the details of the system
proposed are, there's no way such a description can be justified.
Provably unbreakable codes are simply not known to exist.
-- 
__________
 |im |yler  [EMAIL PROTECTED]  Home page: http://alife.co.uk/tim/

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Super strong crypto
Date: 8 Mar 2001 23:35:35 GMT
Reply-To: [EMAIL PROTECTED] (David Wagner)

I'd like to repeat my request for a precise definition of what you mean by
a "fairly good" block cipher.  Without this, claims of security for the
"super-strong" mode cannot be evaluated.  You professed a hope that the
"super-strong" mode could be proven secure, but before we can even begin
to think about proving some security theorem, we have to know what
the statement of the theorem would say.  In the absence of a precise
statement of assumptions, claims of security for this mode are `not
even wrong'; no sensible truth value can be assigned to them.  I would
hope that the examples I gave of ciphers that make the mode insecure
would serve as a strong motivating force for the need to identify more
carefully the requirements on the block cipher.

In any case, there is no need to engage in informal analysis.
I've already described a mode of operation that seems to fulfill all
your goals.  I can make precise exactly what properties are required
of the block cipher, and I can rigorously prove that (if you have such
a block cipher) the resulting mode is secure against _all_ efficient
chosen-plaintext/ciphertext attacks.  Best of all, it is simple and fairly
efficient.  Is there some reason it is unsuitable for your purposes?

------------------------------

From: Patton Echols <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,alt.security.scramdisk,comp.security.pgp.discuss
Subject: Re: OT: The Big Breach (book) available for download
Date: Thu, 08 Mar 2001 15:47:18 -0800

I have been trying to DL the pdf or pdf.zip file and had no luck.  I
get about 100k into the download and then it stalls.  Has anyone had
success with this?  I'm sure it's not my connection; the 1.2 MB zip
file should take only a few minutes over DSL.


Sam Simpson wrote:
> 
> The book Big Breach (by R.Tomlinson, ex-MI6) is available for download at
> the URLs below.....It's caused a lot of controversy in England, so is
> probably worth a read ;)
> 
> Just so happens to have been released a week after I ordered my copy :(
> 
> --
> Regards,
> 
> Sam
> http://www.scramdisk.clara.net/
> 
> Acrobat PDF
> Zipped   - http://thebigbreach.com/download/bbpdf.zip
> Unzipped - http://thebigbreach.com/download/bbpdf.pdf
> 
> MS Word
> Zipped   - http://thebigbreach.com/download/bbword.zip
> Unzipped - http://thebigbreach.com/download/bbword.doc
> 
> Text
> Zipped   - http://thebigbreach.com/download/bbtxt.zip

------------------------------

Date: Thu, 08 Mar 2001 16:54:52 -0700
From: Sundial Services <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: qrpff-New DVD decryption code

From a marketing perspective your points are entirely valid.  And, I
think, the market is what eventually will prevail.  ("Money talks.")  

The recording industry torpedoed itself once before, with the DAT
tape-drive, only to be blindsided by the proliferation of CD-recorders
and, of course, Napster (and all of its non-centralized cousins). 
Eventually, when all the dust settles, "what the customer really wants,
the customer is going to get, and [suh-prize!] PAY for!"

Much of the present media monopoly is based on having one's thumb on the
sole distribution-channel, viz, "the only practical way to publish a
movie is to record it on film and play it to masses in a theatre," or
"the only practical way to distribute music is to record it on a compact
disc."  For as long as those assumptions remain true, a lucrative
monopoly is assured.  

However ... 

... the bandwidth on the Internet is only getting bigger and bigger ...

... high-definition television cameras are becoming cheaper and cheaper
...

... video editing on a $3,000 Macintosh computer is trivial to do ...

... a 60-minute feature will fit on a readily available hard drive ...

... a hit song is only a 10-megabyte file ...

... artists only get $1 or so from a sale of a $16 CD ...

... artists are often deeply in debt to their labels, thanks to
accounting which leaves them the last to be paid ...

... radio stations are losing listeners steadily; it's not what the
public wants any more ...

Change is in the wind, and attempts to preserve the monopolist's
status-quo, through cryptography or whatever other means, are a
quixotic quest -- merely tilting at windmills.

I would certainly squawk, as any consumer can and rightfully should, if
"my laptop purports to play dvd's and one out of four do not."  That's a
25% failure rate and ... what would you do if 25% of the time a
lightbulb doesn't light?  If 25% of the time you walk into a theatre and
the movie doesn't play?  

It has nothing to do with crypto.  This is rather like a monopolist who
has cornered the market on vinyl records, trying to use crypto to
prevent the encroachment of the CD:  utterly ridiculous.

And the market -- money! -- will ultimately prove this to be so.  "Money
talks."



>Jim Steuert wrote:
> 
>    I recently bought a laptop which purportedly plays dvds. I've legally
> rented 4 dvds and tried to play them on my laptop. While three
> have played, one did not because (I presume) of country code.
> Should I return the Toshiba laptop for false advertising? My own
> sony dvd player played that particular dvd ok.
>    If dvd-media sellers  are going to do this, I think the MPAA should
> be sued for false labeling of their expensive product. This is a form
> of false advertising, not by the laptop makers, but by the MPAA. They
> should not be allowed to defraud the consumer. Perhaps I should
> bring this up with the local attorney general's office.
>    This is analogous to buying a book, but then having to buy a special
> pair of very expensive glasses to read that particular book, because someone
> "might" xerox the book. We are presumed guilty.
>    I feel perfectly justified in using decryption software to read some media that
> I have purchased for my own personal viewing. Is it violating a copyright
> to be able to read something I have already paid to own/rent?
>    -Jim Steuert

------------------------------

From: "Mxsmanic" <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp,talk.politics.crypto
Subject: Re: => FBI easily cracks encryption ...?
Date: Fri, 09 Mar 2001 00:06:28 GMT

"Matthew Montchalin" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...

> Not easy if you have to go through one monitor
> to get to the next one; there's bound to be
> interference sufficient to make it easier to tap
> a phone line physically than point a gizmo
> directly at a window.

The interference has a pattern; by compensating for the pattern, you can
screen out the interference.

> It seems that having a "line of sight" to the
> target is something of a necessity.

It would depend on the radiation being measured, but yes, this is
probably very often the case.  The line of sight could be through a
wall, though.



------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Super strong crypto
Date: Thu, 8 Mar 2001 23:04:29 GMT

Mok-Kong Shen wrote:
> Perhaps you should also check, in those cases of yours, whether
> the keys need to be perfectly random (barely realistic)
> and, if not, whether using pseudo-random keys
> (thus avoiding any expansion of bandwidth) is feasible.

The enemy must have no information about the actual
(shared secret) key, which means it must be generated by a
random process.  If instead you generate it using some
deterministic method, the enemy can use the same method.

If you mean that the initial key is a truly randomly
generated shared secret but that later "keys" are derived
from it using a deterministic algorithm, then that key
scheduling algorithm can be cryptanalyzed.  One of the
changes from Lucifer to DES was to address that issue.
In effect the key scheduling algorithm becomes part of the
"general system" and only the initial key is the "key" for
that system.  That reduces the problem to where we started.
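
(To make the second case concrete, here is a toy illustration of the
kind of deterministic key schedule being discussed -- each session key
hashed from its predecessor; SHA-1 via OpenSSL is merely my choice for
the sketch.  Because the derivation is deterministic and public, an
enemy who recovers any one K_i can run the same chain forward, which is
exactly why only the initial key is a real "key".)

    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>    /* SHA1(), SHA_DIGEST_LENGTH */

    /* Derive K_{i+1} = SHA1(K_i).  Only K_0 is a truly random shared
     * secret; everything after it follows from a public algorithm. */
    static void next_key(unsigned char key[SHA_DIGEST_LENGTH])
    {
        unsigned char out[SHA_DIGEST_LENGTH];
        SHA1(key, SHA_DIGEST_LENGTH, out);
        memcpy(key, out, SHA_DIGEST_LENGTH);
    }

    int main(void)
    {
        unsigned char k[SHA_DIGEST_LENGTH] = {0};  /* stand-in for K_0 */
        int i;
        next_key(k);                               /* K_1 */
        for (i = 0; i < SHA_DIGEST_LENGTH; i++)
            printf("%02x", k[i]);
        printf("\n");
        return 0;
    }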

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
