Cryptography-Digest Digest #469, Volume #10      Sat, 30 Oct 99 01:13:04 EDT

Contents:
  Re: the ACM full of Dolts? (SCOTT19U.ZIP_GUY)
  Re: Proposal: Inexpensive Method of "True Random Data" Generation (Will Ware)
  Re: Bruce Schneier's Crypto Comments on Slashdot (jerome)
  Re: Bruce Schneier's Crypto Comments on Slashdot (David Crick)
  Re: Build your own one-on-one compressor (Tim Tyler)
  Re: the ACM full of Dolts? (jerome)
  Re: Proposal: Inexpensive Method of "True Random Data" Generation ("Dr. Michael Albert")
  Re: ComCryption
  Re: ComCryption
  Re: ComCryption
  Re: the ACM full of Dolts? ("Trevor Jackson, III")
  Re: Compression: A ? for David Scott (Clinton Begin)
  Re: This compression argument must end now (Tom St Denis)
  Re: Symetric cipher (Tom St Denis)
  Re: Unbiased One to One Compression

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: the ACM full of Dolts?
Date: Sat, 30 Oct 1999 00:32:51 GMT

In article <[EMAIL PROTECTED]>, Mok-Kong Shen <[EMAIL PROTECTED]> 
wrote:
>SCOTT19U.ZIP_GUY wrote:
>> 
>
>> Quote Start
>> -- There are several major technical pieces that are missing from this
>> article. Most importantly, no motivation is ever presented for designing
>> compression algorithms to be one-to-one. Further, I have an easier
>> solution to the "file ending problem" -- use a filesystem that stores the
>> bit- length of each file rather than the byte-length. (After all, the
>> conventional view that a file's size is some multiple of 8 bits is an
>> illusion
>> provided by the filesystem, which actually allocates in larger chunks.)
>> Quote End
>
>In another thread (Unbiased one-to-one compression, initiated by
>John Savard), I happened to have expressed the view that,
>if one uses an adaptive Huffman with an initial distribution
>unknown to the analyst, then one could allow the 'luxury' of 
>explicitly stating the length in number of bits, thus circumventing 
>in some sense the one-to-one problem. I guess that it could be that 
>the referee had something similar to that in mind.
>
    Since there was no communication with the referee, one will never 
know what he had in mind. But if you hard-code the number of bits
as a number then it will not be one-to-one, and you are inserting information
that would be of use to the attacker. The whole point of one-to-one was to
prevent the addition of information to the file when one uses compression.
The ending problem is solved in my version of adaptive Huffman compression,
so it is wasteful to add the length to the file.
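To make the point concrete, here is a hypothetical sketch (in Python; this is not Scott's actual coder, and the one-byte header format and function names are invented for illustration) of why an explicit length header stops a scheme from being one-to-one: most byte strings are no longer valid outputs, so an attacker can discard any trial key whose decryption has an inconsistent header.

```python
# Hypothetical illustration: prepend a one-byte bit-length header to a
# payload.  The encoding is injective but NOT onto: most byte strings are
# not valid encodings, so an attacker can reject trial decryptions whose
# header is inconsistent with the payload length.

def encode_with_length(payload: bytes, bit_len: int) -> bytes:
    assert 0 < bit_len <= 8 * len(payload)
    return bytes([bit_len]) + payload  # header constrains/leaks information

def is_valid_encoding(data: bytes) -> bool:
    if len(data) < 2:
        return False
    bit_len, payload = data[0], data[1:]
    # Header must match the payload's byte count exactly.
    return (bit_len + 7) // 8 == len(payload)

# A random byte string is usually NOT a valid encoding:
print(is_valid_encoding(b'\xff\x41\x42'))  # False: header claims 32 bytes, payload has 2
```

Every rejected candidate is exactly the "added information" a one-to-one scheme is supposed to deny the attacker.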





David A. Scott
--

SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
                    
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm

Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm

Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm

**NOTE EMAIL address is for SPAMMERS***

------------------------------

Crossposted-To: sci.math,sci.misc,sci.physics
From: [EMAIL PROTECTED] (Will Ware)
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation
Date: Fri, 29 Oct 1999 23:48:07 GMT

DSM ([EMAIL PROTECTED]) wrote:
: Currently, any experiment (or other procedure) for which "true"
: random data is required must be conducted on a computer equipped
: with a special-purpose peripheral device (usually quite expensive.)

Most (maybe all) of your ideas have been incorporated in the
/dev/random device driver for Linux. You can solve your problem
by upgrading your OS. If you want even more randomness, I put
together a very cheap circuit a few years ago, described at
http://world.std.com/~wware/hw-rng.html

In order to feed the /dev/random entropy pool from the circuit,
you'd need to interface it either to a serial port or to a bus
slot, and write some more device-driver-ish code. I once discussed
this with one of the authors of /dev/random, but at the time I
lacked the kernel-hacking expertise to do it. Now I lack the
ambition. Anybody who wishes to try has my admiration.
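For readers who just want to sanity-check the kernel pool rather than build hardware, here is a minimal sketch (assuming Python on a Unix-like system; `os.urandom` draws from the same kernel pool as /dev/urandom):

```python
# Pull bytes from the kernel's entropy-driven RNG and run a crude monobit
# sanity check.  Reading is the easy half; *feeding* the pool, as described
# above, needs driver-level code or the RNDADDENTROPY ioctl.
import os

def monobit_balance(n_bytes: int = 4096) -> float:
    """Fraction of 1-bits in n_bytes read from the kernel RNG."""
    data = os.urandom(n_bytes)  # same pool as /dev/urandom
    ones = sum(bin(b).count('1') for b in data)
    return ones / (8 * n_bytes)

print(round(monobit_balance(), 2))  # expect roughly 0.5 for a healthy RNG
```

A balanced bit count is a necessary condition only; passing it says nothing about cryptographic quality.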
-- 
 - - - - - - - - - - - - - - - - - - - - - - - -
Resistance is futile. Capacitance is efficacious.
Will Ware       email:    wware @ world.std.com

------------------------------

From: [EMAIL PROTECTED] (jerome)
Subject: Re: Bruce Schneier's Crypto Comments on Slashdot
Reply-To: [EMAIL PROTECTED]
Date: Fri, 29 Oct 1999 23:56:49 GMT

On 29 Oct 1999 15:41:33 -0700, Dylan Thurston wrote:
>Indeed, very interesting and well-written.  But there is one point
>where I think he's mistaken:
>
>> And when it becomes a reality, it does not destroy all
>> cryptography. Quantum computing reduces the complexity of arbitrary
>> calculations by a factor of a square root.  This means that key
>> lengths are effectively halved. 128-bit keys are more than secure
>> enough today; 256-bit keys are more than secure enough against quantum
>> computers.
>
>I don't think it's true that "quantum computing reduces the complexity
>of arbitrary calculations by a factor of a square root."  It is true
>that it would reduce key searching (and, in general, exhaustive
>search) calculations by such a factor, as he says.

does it reduce the complexity (i.e. the number of operations) or the
time to reach the result ?
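One way to unpack the "square root" claim quoted above: Grover's algorithm searches N = 2^k keys in about (pi/4)*sqrt(N) oracle queries, and that count is a number of (quantum) operations, not merely elapsed time; classically the expected count is N/2. A rough sketch of the arithmetic:

```python
# Back-of-the-envelope arithmetic for "key lengths are effectively halved":
# Grover's search over N = 2^k keys needs about (pi/4) * sqrt(N) oracle
# queries, so a k-bit key costs roughly what a (k/2)-bit key costs
# classically.
import math

def grover_queries(key_bits: int) -> float:
    n = 2 ** key_bits
    return (math.pi / 4) * math.sqrt(n)

for k in (56, 128, 256):
    print(k, "->", math.log2(grover_queries(k)))  # ~ k/2 effective bits
```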

------------------------------

From: David Crick <[EMAIL PROTECTED]>
Subject: Re: Bruce Schneier's Crypto Comments on Slashdot
Date: Sat, 30 Oct 1999 02:25:11 +0100

Dylan Thurston wrote:
> 
> Indeed, very interesting and well-written.  But there is one point
> where I think he's mistaken:
> 
> > And when it becomes a reality, it does not destroy all
> > cryptography. Quantum computing reduces the complexity of arbitrary
> > calculations by a factor of a square root.  This means that key
> > lengths are effectively halved. 128-bit keys are more than secure
> > enough today; 256-bit keys are more than secure enough against quantum
> > computers.
> 
> I don't think it's true that "quantum computing reduces the complexity
> of arbitrary calculations by a factor of a square root."  It is true
> that it would reduce key searching (and, in general, exhaustive
> search) calculations by such a factor, as he says.

This is indeed true. See Grover's Quantum Database Search algorithm.

   David.

-- 
+-------------------------------------------------------------------+
| David Crick  [EMAIL PROTECTED]  http://members.tripod.com/vidcad/ |
| Damon Hill WC96 Tribute: http://www.geocities.com/MotorCity/4236/ |
| M. Brundle Quotes: http://members.tripod.com/~vidcad/martin_b.htm |
| ICQ#: 46605825  PGP Public Keys: RSA 0x22D5C7A9 DH/DSS 0xBE63D7C7 |
+-------------------------------------------------------------------+

------------------------------

Crossposted-To: comp.compression
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Build your own one-on-one compressor
Reply-To: [EMAIL PROTECTED]
Date: Sat, 30 Oct 1999 01:08:54 GMT

In sci.crypt Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

:> : If I understand correctly, you are now on compression not operating
:> : on groups of 8 bits but on group of bytes.
:> 
:> This is a strange way of putting it.  There is no possibility of a
:> non-byte aligned end-of-file, if you work with byte-aligned symbol
:> strings.
:> 
:> Of course my method is not *confined* to the use of byte-confined
:> symbol strings - but your strings *must* differ from their replacements
:> by a multiple of 8 bits or you will introduce the type of file-ending
:> problems that only David Scott knows how to solve ;-)

: Let me explain what I meant. You have certain symbols; let's
: for convenience identify these with A, B, C, etc. (they could in fact 
: be anything you define through grouping any number of bits). Now your
: dictionary does translations. An example could be that 'ABCD' is
: translated to 'HG'. This is the compression direction. On decompression
: it goes backward to translate 'HG' to 'ABCD'. Am I o.k. till here?

Yes.

: Now I denote the side with 'ABCD' above side1 and the side with 'HG'
: side2. So on compression one matches the source string with entries
: of side1 and replaces the found entries with the corresponding
: entries of side2. On decompression one reverses that.

Such a scheme would fail to be one-on-one unless all the entries in side1
were also in side2.  If not, imagine if the original text was:

"xxxxABCDxxxxHGxxx".  It would "compress" to:
"xxxxHGxxxxHGxxx", which would then decompress to:

"xxxxABCDxxxxABCDxxx" :-(

: Now side1, if correctly constructed, should be able to process any given
: source input and translate that to the target output. (I like to note 
: however that it needs some care for ensuring this property, if your 
: symbols are arbitrarily defined.) Let's say that in a concrete case 
: one has XYZ....MPQABCD compressed to UV.....ERHG. Suppose now I 
: change the compressed string to UV.....ERHN (HG is changed to HN). 
: Then this modified string is decompressible if and only if there 
: is an entry 'HN' on side2. How are you going to ensure conditions
: of this sort of your dictionary?

I believe I went into all this in my initial post - and in much greater
detail on the associated web site.

:> : (2) due to the larger granularity the compression ratio is
:> : likely to be poor, I believe.
:> 
:> One of the routine's targets is plain-text.  This is also "granular" in
:> 8-bit chunks.

: The normal Huffman, operating on ASCII, has codes that, if of
: unequal size, may differ in length by only one bit. In your 
: case the codes may, in the analogous situation, differ in length 
: by one 'unit' that is presumably larger than one bit. That's what I
: meant by larger granularity.

My method is quite capable of dealing with pairs of bitstrings with
arbitrary length differences - IFF the resulting end-of-file alignment
problem can be solved.  I don't understand David's trick well enough to
know if he could apply it to my algorithm.

:> It appears that while bijective *compression* routines are rare,
:> there are a number of other known reversible operations that are of
:> interest to the prospective bijective compressor as they may
:> "reveal structure" in certain types of file.
:> 
:> One is the fourier transform, and its sister algorithms.  /If/ a routine
:> that can reversibly transform a file *of arbitrary size* into the
:> frequency domain can be found, this would be of significance to makers of
:> one-on-one compressors.  *Many* common types of data exhibit regularities
:> in the frequency domain.

: I am not aware of any lossless compression scheme that employs Fourier
: transform and the like. I doubt that that could ever be done.

FFTs are completely reversible.  The only problem that occurs to me
is getting such a routine to deal with arbitrary-length files well.
Getting an arbitrary file into the frequency domain is almost certainly
possible.  However, I do not currently have code which demonstrably does
this.
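As a sketch of that reversibility (restricted, as noted, to convenient power-of-two lengths; the arbitrary-length case is exactly the open problem), a minimal radix-2 FFT and its inverse round-trip a block of bytes up to floating-point error:

```python
# Sketch of FFT reversibility on power-of-two lengths.  Two caveats relevant
# to the thread: (1) this only handles lengths 2^k, and (2) the round trip
# is exact only up to floating-point error, so a byte-exact bijection would
# need an integer-valued transform instead.
import cmath

def fft(x, inverse=False):
    n = len(x)
    if n == 1:
        return list(x)
    sign = 1 if inverse else -1
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def ifft(x):
    return [v / len(x) for v in fft(x, inverse=True)]

data = [complex(b) for b in b"FFT round trips!"]  # 16 = 2^4 samples
recovered = bytes(round(v.real) for v in ifft(fft(data)))
print(recovered)  # b'FFT round trips!'
```

Rounding hides the tiny float error here; a bijective file transform could not rely on that.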

:> Another operation of potential interest is treating the data as a
:> byte-stream, and then rearranging the bits so that all the 0th bits
:> come first in the file, followed by all the 1-bits ... followed by the
:> 7th bit in each byte at the end of the file.
:> 
:> Perform this bijective operation on some ASCII text, and the regularity in
:> the 7th bit winds up as a handy long string of 00s at the end of the
:> file...

: This is a permutation of the bits on the whole file. It destroys the 
: natural correlations that are present in the input. This could be of
: value cryptologically. But for compression this should lead
: to less compressibility than the original source, i.e. disadvantageous,
: if your goal is compression.

One approach would probably be to compress once using my method (using a
compressed-symbol table that avoided high bits) *then* send all the
7th bits to the end of the file, and finally compress again, perhaps using
an adaptive method.

This would avoid the need for generating a "compressed symbol" table that
adequately spanned the 128-255 range with a reasonable frequency of symbol
occurrence (probably a near-impossible task).

*Most* ASCII data is 7-bit - or less - so it makes sense to try and treat it
as such.
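The bit-plane rearrangement described above is easy to sketch; this toy version (invented names, bit-string output for readability rather than a packed file) shows both the bijectivity and the run of zeros from the top plane of 7-bit ASCII:

```python
# Bit-plane rearrangement: bit 0 of every byte first, then all the bit-1s,
# ..., bit 7 last.  The operation is a bijection, and on 7-bit ASCII the
# final plane comes out as a long run of zeros.

def to_planes(data: bytes) -> str:
    # Returns a bit-string for clarity; a real coder would repack to bytes.
    return ''.join(str((b >> i) & 1) for i in range(8) for b in data)

def from_planes(bits: str, n: int) -> bytes:
    out = [0] * n
    for i in range(8):
        for j in range(n):
            out[j] |= int(bits[i * n + j]) << i
    return bytes(out)

msg = b"Plain ASCII text"
bits = to_planes(msg)
assert from_planes(bits, len(msg)) == msg  # bijective
print(bits[-len(msg):])                    # bit-7 plane: all zeros
```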
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

Please refill this tagline dispenser.

------------------------------

From: [EMAIL PROTECTED] (jerome)
Subject: Re: the ACM full of Dolts?
Reply-To: [EMAIL PROTECTED]
Date: Sat, 30 Oct 1999 01:30:50 GMT

On Sat, 30 Oct 1999 00:01:26 GMT, SCOTT19U.ZIP_GUY wrote:
>In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>>Obviously if you had written your letter with your 'carefree writing style',
>>it is unlikely to get an answer.
>>Dolts or not, most people don't like to be called 'pompous assholes'.
>
>   No, while waiting to see if it was to be published I pretended to be a
>nice guy. But it is hard for me to fake it like most of you can.

I read your article; what follows is my opinion as a non-specialist. It is 
about the style, not the technical part. The style is a part of the game:
you have to comply with the rules. You may find them stupid, but that is
irrelevant; you don't have the power to change them.

- Your style isn't the 'scientific paper' one. You write as if you were talking 
  to somebody, e.g. "I will now show you how to do that". It isn't an oral
  presentation at a trade show but a written work destined to be read
  by specialists.
- It is usually 'in good taste' to adopt a humble attitude, especially 
  for a beginner; e.g. "i will show you" and "the solution" aren't suitable.
- It is full of typos, which proves the article hasn't been carefully 
  reviewed, not even with a spelling checker; an example of the 'carefree' 
  attitude.
- No bibliography, except some internet links. Even Dijkstra, in 'A 
  Discipline of Programming', commented on the absence of a 
  bibliography in his book. A bibliography shows that you have
  read the past work, so you are less likely to repeat it.

If you don't comply with the rules, the referee/reviewer prevents you from
playing the game. It is like sport, except that here the rules are tacit.
If you read scientific papers and extract the common pattern, you 
will probably find it easily.

My advice: write your next paper so close to the common pattern that 
the referee will judge only the real contents and not the style.

All of that is my opinion, and I am not an authority. Follow my advice or
not; I wrote this because I think it can help you.



------------------------------

From: "Dr. Michael Albert" <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.misc,sci.physics
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation
Date: Fri, 29 Oct 1999 21:56:59 -0400

> PROPOSAL: Make use of minute electronic inaccuracies in existing
> computer

The Linux folks are working on putting a random number generating
device into the operating system (/dev/random).  I believe
it uses hardware interrupts (or at least key-board interrupts)
as a "source of entropy".  The manual page mentions this reference:

RFC 1750, "Randomness Recommendations for Security"
http://www.ietf.org/rfc/rfc1750.txt

This document shows much detailed thought on
where one can get random numbers.

The Pentium III apparently has hardware support for random 
number generation on chip:

http://intel.com/pressroom/archive/speeches/pg012099.htm

See also:
        http://www.fourmilab.ch/hotbits/
and 
         http://www.fourmilab.to/hotbits/hardware.html

for random number generation by atomic decays.

Best wishes,
 Mike



------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: ComCryption
Date: 30 Oct 99 02:41:48 GMT

Mok-Kong Shen ([EMAIL PROTECTED]) wrote:
: It's certainly not a good idea for 'compression', but not a bad idea 
: for encryption in my humble view. Compression is the technique being 
: 'borrowed' here for pursuing another purpose, namely encryption. It 
: is the large number of potentially possible compression schemes that 
: thwarts the analyst. This is an example of application of the principle 
: of variability.

While I would have nothing against it as a technique to further frustrate
a cryptanalyst, trying to use it alone without "real" encryption
afterwards is, in *my* humble opinion, a stupid idea.

Basically, this is because a compression algorithm, even one chosen at
random, even with some extra randomization thrown in, is not an encryption
algorithm, so it won't be very resistant to analysis.

But part of my point is that even respected people can sometimes have
ideas that are duds; they're still distinguished from the cranks by not
flogging them afterwards.

Also, while it may be a "stupid idea" to just go ahead and implement it as
it stands and expect high security, that doesn't mean it isn't still a
fruitful idea, a source of inspiration. With some extra features added, or
used for a special purpose, or as a starting point for something
different, it could still be helpful. An idea can be thought-provoking
without being immediately very useful in its original form.

John Savard

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: ComCryption
Date: 30 Oct 99 02:35:35 GMT

SCOTT19U.ZIP_GUY ([EMAIL PROTECTED]) wrote:
:    Do you know if these compression algorithms were one-to-one
: or not?

Almost certainly they weren't.

:  Gee did someone on here ask about it?

No, but somebody on the coderpunks mailing list did; it is reproduced
on a newsgroup.

John Savard

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: ComCryption
Date: 30 Oct 99 02:44:48 GMT

Mok-Kong Shen ([EMAIL PROTECTED]) wrote:
: James Felling wrote:

: > Given the SOTA in compression algorithms I don't think that this is so --
: > most modern compression algorithms leave very recognisable elements
: > embedded in their file structures, and thus will probably be easily
: > defeated by basic analysis.

: Would you please give an example so that one has some better
: idea of what you mean by 'very recognisable elements'? Thanks.

I'll admit that if one is using adaptive Huffman-type techniques, but
applying them to arithmetic coding, one will have to work a bit to find
'recognizable elements'.

But I already gave an example for LZW and its relatives; the first few
bytes will be unchanged, except for a prefix bit, until a repeated string
is found.
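That LZW property is visible even in a textbook-style sketch (this is generic LZW, not any particular archiver's on-disk format): until the first repeated string, every emitted code is just the literal byte value.

```python
# Textbook LZW encoder showing the 'recognizable element': leading output
# codes equal the raw byte values until the dictionary starts matching
# repeated strings.

def lzw_encode(data: bytes):
    table = {bytes([i]): i for i in range(256)}
    out, w = [], b""
    for c in data:
        wc = w + bytes([c])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = bytes([c])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"ABABABAB")
print(codes)  # [65, 66, 256, 258, 66] -- leading codes are plain 'A', 'B'
```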

It will be a bit harder for other compression schemes, but he is correct.

John Savard

------------------------------

Date: Fri, 29 Oct 1999 23:04:44 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: the ACM full of Dolts?

jerome wrote:

> On Sat, 30 Oct 1999 00:01:26 GMT, SCOTT19U.ZIP_GUY wrote:
> >In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
> >>Obviously if you had written your letter with your 'carefree writing style',
> >>it is unlikely to get an answer.
> >>Dolts or not, most people don't like to be called 'pompous assholes'.
> >
> >   No, while waiting to see if it was to be published I pretended to be a
> >nice guy. But it is hard for me to fake it like most of you can.
>
> I read your article; what follows is my opinion as a non-specialist. It is
> about the style, not the technical part. The style is a part of the game:
> you have to comply with the rules. You may find them stupid, but that is
> irrelevant; you don't have the power to change them.
>
> - Your style isn't the 'scientific paper' one. You write as if you were talking
>   to somebody, e.g. "I will now show you how to do that". It isn't an oral
>   presentation at a trade show but a written work destined to be read
>   by specialists.
> - It is usually 'in good taste' to adopt a humble attitude, especially
>   for a beginner; e.g. "i will show you" and "the solution" aren't suitable.
> - It is full of typos, which proves the article hasn't been carefully
>   reviewed, not even with a spelling checker; an example of the 'carefree'
>   attitude.
> - No bibliography, except some internet links. Even Dijkstra, in 'A
>   Discipline of Programming', commented on the absence of a
>   bibliography in his book. A bibliography shows that you have
>   read the past work, so you are less likely to repeat it.
>
> If you don't comply with the rules, the referee/reviewer prevents you from
> playing the game. It is like sport, except that here the rules are tacit.
> If you read scientific papers and extract the common pattern, you
> will probably find it easily.
>
> My advice: write your next paper so close to the common pattern that
> the referee will judge only the real contents and not the style.
>
> All of that is my opinion, and I am not an authority. Follow my advice or
> not; I wrote this because I think it can help you.

If he won't read "Elements of Programming Style" why would he read Strunk & White?


------------------------------

From: Clinton Begin <[EMAIL PROTECTED]>
Subject: Re: Compression: A ? for David Scott
Date: Sat, 30 Oct 1999 03:00:45 GMT


> *All* compressed files are valid using David's scheme.  I presume you
> are referring to files you have generated without using the
> compressor?

Yes, and also information that has been encrypted and then decrypted
with the wrong key (as in the cryptanalysis procedure I described).
For example:

Where M = compressed message
k = proper key
r = random key attempt
E = encrypt
D = decrypt
C = cipher text

C = E[k](M)
M1= D[r](C)

Here when M1 is decompressed, it will not change in size (as with
David's compression examples) or the decryption will fail (as with many
other compression schemes).
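The same key-elimination idea can be sketched with stock tools; here the "cipher" is a deliberately toy keyed-XOR stream (illustrative only, not secure) and zlib stands in for a compressor whose output carries structure. A wrong key almost always produces bytes the decompressor rejects outright:

```python
# Toy demonstration of the key-elimination attack sketched above.  The
# keystream is derived from the key with SHA-256 in counter mode (purely
# illustrative, not a real cipher), and zlib plays the compressor whose
# structured output lets trial keys be discarded.
import hashlib
import zlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

M = zlib.compress(b"attack at dawn, attack at dawn")  # compressed message
C = xor_stream(b"proper key", M)                      # cipher text

def key_is_plausible(trial_key: bytes) -> bool:
    try:
        zlib.decompress(xor_stream(trial_key, C))
        return True
    except zlib.error:
        return False  # decompressor rejects the trial decryption

print(key_is_plausible(b"proper key"))  # True
print(key_is_plausible(b"random key"))  # False (decompression fails)
```

This rejection is exactly the extra information a one-to-one compressor is meant to deny the attacker.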

> Not at all obvious.  Say the compression is treating a JPEG, or a zip
> file. It won't be able to compress this further very well - and
> "compression" will result in a file that differs from the original
> by "one or two percent".

Thank you, I believe I identified that as a possibility.  I also
described how this could easily be added as a step in the cryptanalysis
process I provided in my second example.  In these cases, there would
likely be only a few compression formats the hacker would have to check
for (jpg, mpg, zip, mp3 etc.).  Most of these formats can be easily
identified in the first few bytes of the file.

> : I believe this attack may be just as dangerous as the one you
> describe
> : (first example).
> *Very* doubtful.

Why?  Perhaps you could explain a little further.  I am not an expert,
nor am I a mind reader, so simply saying it is doubtful doesn't do
much good.

From what I can see, the two procedures of cryptanalysis I provided in
my examples are very similar.

> : it appear as though it could be valid.
> Which it sometimes does...

Not in the tests I ran.  Perhaps you could post a 2k file to the
newsgroup which has not been compressed with H3ENC, but will decompress
to a larger size (say 2x or 3x).

I am not questioning David's methods; I am only trying to show that it
is impossible to hide all structure in information -- compressed or
not.  Information without structure is not information at all.

Cheers,

  Clinton.

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:
> Clinton Begin <[EMAIL PROTECTED]> wrote:
>
> : I think I found an interesting property of your compression scheme (and
> : most likely all compression schemes) that may involve the structure of
> : the information.
>
> : When your decompressor encounters invalid 'compressed' information, the
> : size of the information remains almost the same (within 1% or 2%) when
> : the information is 'decompressed'.
>
> *All* compressed files are valid using David's scheme.  I presume you are
> referring to files you have generated without using the compressor?
>
> If so, your statement is still not true, except in a statistical sense.
>
> : Obviously, when valid information is decompressed, the size doubles or
> : triples (otherwise the 'compressed' information is invalid or the
> : information was compressed more than once).
>
> Not at all obvious.  Say the compression is treating a JPEG, or a zip
> file. It won't be able to compress this further very well - and
> "compression" will result in a file that differs from the original by "one
> or two percent".
>
> [much snip]
>
> : I believe this attack may be just as dangerous as the one you describe
> : (first example).
>
> *Very* doubtful.
>
> : In order to avoid it, the decompressor would have to successfully
> : decompress and change the size of the information to make
> : it appear as though it could be valid.
>
> Which it sometimes does...
>
> : Given this, my feeling is that sticking with the best possible
> : compression ratio is probably still the best bet.
>
> The issues appear to be the same as they were before you made your post.
>
> There are two types of security problems, with different characteristics.
>
> Failure to compress well leaves patterned information in the file, which
> may be used to target analytic attacks.
>
> Systematically adding information during the compression introduces a new
> way of eliminating keys, and /may/ provide a target for analytic attack
> even if the pre-compressed file is completely patternless and random.
>
> Sticking with "the best possible compression ratio" is not possible in
> practice.  Today, if you use the best compressor available for your data
> type, this is unlikely to be one-on-one, and may open you to the second
> attack.  In a number of cases, introducing this new problem will *not* be
> "your best bet".
> --
> __________
>  |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]
>
> People who deal with bits should expect to get bitten.
>


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: This compression argument must end now
Date: Sat, 30 Oct 1999 03:11:11 GMT

In article <[EMAIL PROTECTED]>,
  "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> > > ``While investigating the PKZIP stream cipher I found that the plaintext
> > >   characteristics of a compression file built using that extremely-popular
> > >   program are anything BUT unpredictable.  In fact, I was able to modify
> > >   the "known plaintext attack" against that stream cipher to achieve
> > >   breaks in some cases without knowing -anything- about the plaintext
> > >   except that it was, say, a compressed TXT file, or an EXE or DLL.
> Tom St Denis wrote:
> > This is obviously a lie.  If you don't need to know the contents, what
> > good does knowing the file extension provide?  Obviously they are
> > looking for some plaintext characteristic.  [Guessing the plaintext
> > falls into the same attack as knowing the plaintext].
>
> If you don't understand what is being said, you should remain silent.

Ok, explain to me what good knowing the file extension would be?

Tom



------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Symetric cipher
Date: Sat, 30 Oct 1999 03:09:50 GMT

In article <7vcua6$26oo$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY) wrote:
>   Actually this must be Mr BS's new form of SPAM, since I have not recently
> got a direct mailing from him. He does not answer mail. It would be far better
> to just look on the web for encryption sources. Or if you're weak and must look
> at the book, go to Barnes and Noble, where you can read the relevant sections
> by just browsing. But just keep in mind he admits that he is buddy-buddy with
> the NSA, so his book most likely supports their point of view. Which is to
> keep the masses truly ignorant about encryption so they can read your mail.
> It fails to cover important aspects about encryption, like when using
> compression with encryption one must be very careful or you can be making it
> easier to break than if you did not compress at all.

Funny, he responds to my messages [albeit not quickly, but he does run a
business].  Maybe you should take the hint?

At any rate, when did the NSA become anything more than a bunch of smart
people?  And why can't smart people exist outside of the NSA?

Tom



------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: Unbiased One to One Compression
Date: 30 Oct 99 03:07:23 GMT

Douglas A. Gwyn ([EMAIL PROTECTED]) wrote:
: "SCOTT19U.ZIP_GUY" wrote:
: >    No, one does not assume the opponent usually knows the size of the message.

: But if not knowing the size of the message is essential to the system's
: security, it is a serious flaw, because there are only a relatively
: small
: number of likely sizes in many (communications-oriented) contexts,
: so the attacker can simply try them all (in parallel, perhaps).

I strongly doubt that is what he meant. But it might be desirable to
conceal the size of messages, because that is an overlooked source of
information about their contents. (Naturally, this comes as a free benefit
in systems designed to conceal routing information and the like.) I think
that is all he meant here.

John Savard

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
