Cryptography-Digest Digest #913, Volume #13 Fri, 16 Mar 01 08:13:00 EST
Contents:
Re: Analysis of PCFB mode ("Henrick Hellström")
Algebraic 1024-bit block cipher ("Alexander Ernst")
Re: Encryption software (Richard Herring)
Re: Algebraic 1024-bit block cipher ("Tom St Denis")
Re: primes for Blum Blum Shub generator ("Tom St Denis")
Re: OverWrite: best wipe software? (Anthony Stephen Szopa)
Re: How to eliminate redondancy? ("Tom St Denis")
Re: SSL secured servers and TEMPEST ("Lyalc")
Re: OverWrite: best wipe software? ("Tom St Denis")
Re: Freeware issues? ("Joseph Ashwood")
Re: Encryption software ("Joseph Ashwood")
Re: Zero Knowledge Proof ("Joseph Ashwood")
Re: Noninvertible encryption (SCOTT19U.ZIP_GUY)
----------------------------------------------------------------------------
From: "Henrick Hellström" <[EMAIL PROTECTED]>
Subject: Re: Analysis of PCFB mode
Date: Fri, 16 Mar 2001 12:20:53 +0100
"David Wagner" <[EMAIL PROTECTED]> wrote in message
news:98rrfb$j5b$[EMAIL PROTECTED]...
> Henrick Hellström wrote:
> >If you have this kind of control you could
> >mount exactly the same attack against e.g. XCBC mode, in this case simply
> >by correspondingly asking for the encryption of X || g(X) || Y.
>
> No, it doesn't work against XCBC mode, because in XCBC mode the g()
> function is keyed. (However, if you were to use an unkeyed g() function,
> then yes, the attack would work against XCBC, too.)
You need the full plain text of one previous message, and the legitimate
server must not change the value of r0. Then you extract the value of z0
from the last plain text block with an xor operation.
Hence, you must also be able to trick the server into sending an
authenticated message while the client decrypts it as unauthenticated. If
this cannot be done, then the attack seems to fail.
> >At the present moment
> >I have no reason to doubt that PCFB mode too is provably secure in this
> >sense, provided that it is used with a suitable authentication scheme.
>
> But -- even if you had a proof of this -- it wouldn't be a
> very useful result. CBC mode already gives you this property,
> and it does come with a proof.
>
> Compare to competitors like IAPCBC, which are provably secure for
> both confidentiality and integrity, and moreover don't need to be
> used with a separate authentication scheme. My main interest in
> PCBC mode is that it has the potential to provide both confidentiality
> and message integrity; if you're suggesting use of an authentication
> scheme as well, I see no reason to prefer PCBC mode to CBC mode.
You mean "PCFB" and not "PCBC", I guess. By "authentication scheme" I mean
an extension of the algorithm of the kind we have been discussing so far,
e.g. a format check applied to the plain text directly, or messages of the
form R || k0 || X1 || R || k1 || X2 || ..., where R is the arbitrary
"signature" that must check out, ki is the offset to the next R, and Xi is
k(i-1) bits (bytes, blocks) of plain text.
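The R || k0 || X1 || ... format might be sketched like this. The value of R,
byte (rather than bit) granularity, reading ki as the byte length of the
following chunk, and the 2-byte length encoding are all assumptions made for
illustration, not details from the post:

```python
# Hypothetical sketch of the redundancy format R || k0 || X1 || R || k1 || X2 ...
# Here ki is taken to be the byte length of the next plaintext chunk.

R = b"\xDE\xAD\xBE\xEF"  # arbitrary "signature" that must check out


def encode(plaintext: bytes, chunk: int = 16) -> bytes:
    out = bytearray()
    for i in range(0, len(plaintext), chunk):
        x = plaintext[i:i + chunk]
        out += R + len(x).to_bytes(2, "big") + x  # R || ki || Xi
    return bytes(out)


def decode(message: bytes) -> bytes:
    out, pos = bytearray(), 0
    while pos < len(message):
        if message[pos:pos + len(R)] != R:
            raise ValueError("signature check failed")  # reject the message
        pos += len(R)
        k = int.from_bytes(message[pos:pos + 2], "big")
        pos += 2
        out += message[pos:pos + k]
        pos += k
    return bytes(out)
```

Any decryption under the wrong key would scramble the embedded copies of R,
so the format check fails, which is the kind of authentication being discussed.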
--
Henrick Hellström [EMAIL PROTECTED]
StreamSec HB http://www.streamsec.com
------------------------------
From: "Alexander Ernst" <[EMAIL PROTECTED]>
Crossposted-To:
alt.computer.security,alt.security,alt.security.pgp,comp.security.misc,de.comp.security.firewall,de.comp.security.misc
Subject: Algebraic 1024-bit block cipher
Date: Fri, 16 Mar 2001 12:55:07 +0100
An objective of this cipher is to use
pure finite group algebra for encryption and decryption.
The design uses no permutations or XOR operations.
The performance of this implementation is approximately
4.8 Mbyte/sec, and the measured avalanche effect is 49.7%.
The block size is 1024 bits (128 bytes) and the secret key
length is 256 bytes. We use a finite group of order 65536
whose elements are words (2 bytes), so we call this a
word architecture.
A 128-byte block consists of 64 words, each considered
an element of the group of order 65536.
Our implementation uses a two-round approach:
two groups, Group1 and Group2, are derived from the
secret key. A plain text block is encrypted first
using Group1 and then using Group2; a cipher block is
decrypted first using Group2 and then using Group1.
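The post does not say what the group operation is; one plausible reading (a
sketch only, not the actual Delphi algorithm) is the multiplicative group of
GF(65537), which has order 65536, with the word 0 standing for the element
65536 as in IDEA. The per-word key elements below are invented:

```python
# Sketch of a two-round "word" cipher over the multiplicative group of
# GF(65537) (order 65536). The group choice and key schedule are assumptions.

M = 65537  # prime; the multiplicative group of GF(65537) has order 65536


def mul(a, b):
    # IDEA-style convention: the 16-bit word 0 represents the element 65536
    a, b = a or 65536, b or 65536
    r = (a * b) % M
    return 0 if r == 65536 else r


def inv(a):
    # multiplicative inverse in GF(65537), mapped back to word form
    r = pow(a or 65536, M - 2, M)
    return 0 if r == 65536 else r


def encrypt_block(words, group1, group2):
    # round 1 combines with Group1 elements, round 2 with Group2 elements
    tmp = [mul(w, k) for w, k in zip(words, group1)]
    return [mul(w, k) for w, k in zip(tmp, group2)]


def decrypt_block(words, group1, group2):
    # undo the rounds in reverse order, as the post describes
    tmp = [mul(w, inv(k)) for w, k in zip(words, group2)]
    return [mul(w, inv(k)) for w, k in zip(tmp, group1)]
```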
Delphi source code and description in pdf are
available for download at www.alex-encryption.de.
Please, follow the link for algebraic cipher
at the end of download list.
Regards.
Alex
------------------------------
From: [EMAIL PROTECTED] (Richard Herring)
Subject: Re: Encryption software
Date: 16 Mar 2001 11:58:29 GMT
Reply-To: [EMAIL PROTECTED]
In article <[EMAIL PROTECTED]>, Benjamin Goldberg
([EMAIL PROTECTED]) wrote:
> If someone would be kind enough to design a 100% GUI version of PGP,
> which automagically does all the things which users dislike about
> regular PGP, then the problem would be solved, more or less.
Turnpike is a mail and news application which pretty well does
just that. http://www.turnpike.com
--
Richard Herring | <[EMAIL PROTECTED]>
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Crossposted-To:
alt.computer.security,alt.security,alt.security.pgp,comp.security.misc,de.comp.security.firewall,de.comp.security.misc
Subject: Re: Algebraic 1024-bit block cipher
Date: Fri, 16 Mar 2001 12:13:28 GMT
"Alexander Ernst" <[EMAIL PROTECTED]> wrote in message
news:98supd$jrk$[EMAIL PROTECTED]...
> An objective of this cipher is to use
> pure finite group algebra for encryption and decryption.
> In this design we do not use permutations or XOR
> operations. Performance of this implementation is
> approximately 4,8 Mbyte/sec. Measured avalanche
> effect is 49,7%. Block size is 1024 bit or 128 bytes.
> Secret key length is 256 bytes. We use finite group
> of the order 65536. Elements of the group are words
> (2 bytes). So we call this word architecture.
Right off the bat: what the heck does "group of order 65536" mean? Do
you mean a multiplicative sub-group of GF(65537) where your base is
primitive?
Second, at 4.8MB/sec it's too slow for "big block mass encryption". If you
got that up to, say, 50MB/sec (on a PII 400, etc.) that would be cool.
> 128 byte block consists of 64 words. Each word is
> considered to be an element of group of order 65536.
> In our implementation we use two round approach.
> Two groups Group1 and Group2 are derived from the
> secret key. A plain text block is encrypted first
> using Group1 and then using Group2. Cipher block is
> decrypted first using Group2 and then using Group1.
How is it encrypted using "group1 then group2"? You mean substituted?
Tom
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: primes for Blum Blum Shub generator
Date: Fri, 16 Mar 2001 12:20:49 GMT
"Risto Kuusela" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Tom St Denis wrote:
>
> > BBS is only (provably) secure when the primes are secret. Otherwise
> > there is not much of a point.
>
> Not quite so. It is only proven that the ability to predict BBS
> implies the ability to factor the BBS modulus, but it is not known if the
> converse is true. So according to present knowledge the primes don't
> have to be secret, only the "seed" value x (unless I have missed some
> recent results).
Nope, I am right. Given "y = x^2 mod pq", finding 'x' from 'y' is provably
as hard as factoring pq. Thus my claim.
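For reference, the generator under discussion is tiny. A toy sketch with
deliberately small primes (both congruent to 3 mod 4, as BBS requires; real
parameters must be large enough that n = p*q is hard to factor):

```python
# Toy Blum Blum Shub generator. The primes below are illustrative only;
# the security argument rests entirely on n = p*q being hard to factor.

p, q = 499, 547        # both primes, both ≡ 3 (mod 4)
n = p * q


def bbs_bits(seed, count):
    x = (seed * seed) % n      # start from a quadratic residue mod n
    out = []
    for _ in range(count):
        x = (x * x) % n        # x_{i+1} = x_i^2 mod n
        out.append(x & 1)      # emit the least significant bit
    return out
```

The seed must be coprime to n; with the primes secret, predicting the output
bits is provably as hard as the underlying squaring problem.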
Tom
------------------------------
From: Anthony Stephen Szopa <[EMAIL PROTECTED]>
Crossposted-To: alt.hacker
Subject: Re: OverWrite: best wipe software?
Date: Fri, 16 Mar 2001 04:23:26 -0800
"Trevor L. Jackson, III" wrote:
>
> Anthony Stephen Szopa wrote:
>
> > Benjami
>
> <snip>
>
> > can be compressed by a factor of 100 or more, yielding only a
> > tiny amount of compressed data, thus leaving the original data untouched.
>
> [snip drivel]
Let's say you have a dedicated partition that can hold 20MB of data.
Let's call it compressed hard drive h:\ (I know: 20MB of some
types of data may be 60MB of another type of data. It will make no
difference. Read on.)
You begin writing files to H:\ one at a time. Let's say these files
are about 2MB each. In short order H:\ becomes full.
Now you delete the middle file. Let's say this leaves you with about
a 2MB area of free space on H:\ pretty much in the middle of the 20MBs
in H:\
You use this 2MB to write and process your sensitive data.
Now you want to delete this data and overwrite this 2MB area on H:\
I say that you should first delete all of your sensitive data thus
freeing up this 2MB area on H:\
Then delete one of the original 2MB files you wrote to H:\ on each
side of the 2MB you originally freed up to use for your sensitive
data writing and processing.
Now you have about 6MB of your original total of 20MB freed up.
Now, the solution is straightforward, but there are some things to
consider: since the data is compressed before it is written to H:\, the
specific overwrite bit patterns will have essentially no effect as
originally intended on a compressed drive. Secondly, if you overwrite on a
compressed-byte-for-compressed-byte basis, then what you say has some
validity.
The first problem cannot be addressed unless you know how the data
is being compressed, etc.
But the second point is handled not by overwriting byte for byte but
by overwriting until at least nearly all the remaining space from
this 6MB area is overwritten. This would require a slightly more
sophisticated process than currently implemented in OverWrite
Version 1.2.
But the solution is a simple one: overwrite with successive files of the
same bit patterns contained in a single pass until the free space has
all, or nearly all, been overwritten.
For instance, overwrite the 6MB with successive files of the given
pass bit patterns. Eventually you will get a disk-out-of-space
error. If the files were appropriately small, you can be sure
that the last file began its write well past where you had written
and processed your sensitive data. Then go on with the next pass of
successive file overwrites until you once again get an out-of-disk-space
error, and so on.
It is really a simple process to achieve the overwrite: overwrite
until you have run out of space to continue, using
relatively small files, thus ensuring that by the time the last file
is written and fails you have overwritten well past the area where
the sensitive data was written.
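The pass described above might be sketched like this. The file names and
sizes are invented, and a max_bytes safety cap is added so the sketch can be
exercised without actually filling a disk; on a compressed volume an
incompressible pattern would also be needed:

```python
# Sketch of one "fill the free space" overwrite pass: write small files of
# the pass pattern until the volume reports out-of-space, then delete them.

import os


def overwrite_free_space(directory, pattern=b"\x55",
                         file_size=64 * 1024, max_bytes=None):
    written, total = [], 0
    try:
        while max_bytes is None or total < max_bytes:
            path = os.path.join(directory, f"wipe{len(written)}.tmp")
            with open(path, "wb") as f:
                f.write(pattern * file_size)  # file_size copies of the pattern
                f.flush()
                os.fsync(f.fileno())          # push the data past OS caches
            written.append(path)
            total += file_size
    except OSError:                           # disk full: this pass is done
        pass
    for path in written:                      # free the space for the next pass
        os.remove(path)
    return len(written)
```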
The details are easily worked out and implemented in the case of a
compressed drive if one were inclined to address this very small
minority situation.
But the best solution I can recommend with the OverWrite Version
1.2 program as currently implemented is not to use it on a
compressed drive.
Would you use a pitchfork to eat fried rice?
Would you throw a tomahawk at an F-18 Hornet making a bombing run
against you?
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: How to eliminate redondancy?
Date: Fri, 16 Mar 2001 12:24:15 GMT
"Mok-Kong Shen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> br wrote:
> >
> > It will reduce the redundancy a little bit. So compression is better.
> > Is there any other way other than compression. That's my question.
>
> Depending on whether you allow it in the context of your
> question, another means (than telegram style) of eventually
> obtaining less redundancy is to translate it to another
> language that has lower redundancy (if such a language exists).
Which again is a form of compression.
If you take a lump of data (a message) and represent it in a smaller
amount of data (a stream), then you are compressing it. Simple as that.
The OP has to get it into his head that there is no magical process here.
All compression does is make things smaller, so his question doesn't
really make sense.
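A quick way to see that "less redundancy" and "compression" measure the same
thing: highly patterned data shrinks dramatically under a general-purpose
compressor, while data that is already close to random barely shrinks at all.

```python
# Compressed size as a rough redundancy measure: patterned data compresses
# well; (pseudo)random data does not.

import os
import zlib


def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data))


patterned = b"the quick brown fox " * 100   # 2000 highly redundant bytes
random_ish = os.urandom(2000)               # 2000 bytes with no usable pattern
```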
Tom
------------------------------
From: "Lyalc" <[EMAIL PROTECTED]>
Subject: Re: SSL secured servers and TEMPEST
Date: Fri, 16 Mar 2001 23:21:43 +1100
I agree that the theory seems approachable.
But recording everything from DC to daylight, in the appropriate bandwidth,
antenna orientations, E field, and H field, all accurately time-synchronised,
is not a trivial task.
To then, at some later time, match and synchronise a large number (e.g. a
million) of asynchronous events, averaging out noise and correlating
'intelligent' signals, and then make sense of it all, is harder still.
A 2 GHz bandwidth A-D converter with 24 bits of precision would generate at
least 24 x 4x10^9 bits/second = about 12 Gbytes of data per second, or
roughly 10^15 bytes/day.
Analog media like tape may be possible - but are there any media that can
store 2 GHz of bandwidth?
In reality I think (though I haven't done the detailed maths) that this
attack against private keys is infeasible in practice without consistent
trigger events to reduce the amount of data gathered and to simplify the
time-matching of unrelated events.
Lyal
Frank Gerlach wrote in message <[EMAIL PROTECTED]>...
>Lyalc wrote:
>>
>> Look at the cryptome site, where copies of the released portions of the
>> TEMPEST standards, NACSEM 5100 et al reside.
>> Not much useful material.
>>
>> The minimum idea is that the signal needs to be above the noise floor in
>> the bandwidth required for detection.
>As I tried to outline in previous postings, the "plaintext signal" can
>be well below the noise floor, if it is transmitted multiple times. With
>CRT signals and SSL private keys we have exactly this situation.
>Being below the noise floor helps against amateurs, but not against the
>determined organization willing to use a huge directional antenna and a
>truck full of recorders to record everything 0..2GHz for a couple of
>days. After recording everything, they will then spend some tera-flop
>years doing signal processing on the recorded signals.
>I bet you can increase the useful recording distance with this method
>from a couple of meters to a mile or so.
>I agree that this is a very sophisticated attack, but I consider it
>important to be aware of what is possible at maximum effort.
>Taking into account how much money Uncle Sam spent on ridiculous
>operations like digging tunnels, pulling subs up from deep sea levels and
>so on, I would not rule out that they do some kind of recording similar
>to what I described above against targets which allow them to safely
>place the "trucks" (e.g. NATO allies, embassies, etc.).
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Crossposted-To: alt.hacker
Subject: Re: OverWrite: best wipe software?
Date: Fri, 16 Mar 2001 12:29:37 GMT
"Anthony Stephen Szopa" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> "Trevor L. Jackson, III" wrote:
> >
> > Anthony Stephen Szopa wrote:
> >
> > > Benjami
> >
> ><snip>
> >
> > can be compressed by a factor of 100 or more, yielding only a
> > tiny amount of compressed data, thus leaving the original data
> > untouched.
> >
> > [snip drivel]
>
>
> Let's say you have a dedicated partition that can hold 20MB of data.
> Let's call it compressed hard drive h:\ (I know: 20MB of some
> types of data may be 60MB of another type of data. It will make no
> difference. Read on.)
>
> You begin writing files to H:\ one at a time. Let's say these files
> are about 2MB each. In short order H:\ becomes full.
>
> Now you delete the middle file. Let's say this leaves you with about
> a 2MB area of free space on H:\ pretty much in the middle of the 20MBs
> in H:\
What's the "middle file"? You can't reliably tell unless you read the FAT.
Winblows doesn't have to store things sequentially, and in fact most other
OSes don't either.
> You use this 2MB to write and process your sensitive data.
>
> Now you want to delete this data and overwrite this 2MB area on H:\
>
> I say that you should first delete all of your sensitive data thus
> freeing up this 2MB area on H:\
>
> Then delete one of the original 2MB files you wrote to H:\ on each
> side of the 2MB you originally freed up to use for your sensitive
> data writing and processing.
Again "either side" requires you to read the FAT.
> Now you have about 6MB of your original total of 20MB freed up.
>
> Now, the solution is straight forward but there are some things to
> consider: since the data is being compressed before it is written
> to H:\ the specific overwrite bit patterns will have essentially no
> effect as originally intended on a compressed drive. Secondly if
> you overwrite on a compressed byte per compressed byte basis then
> what you say has some validity.
Moot. Why would you compress it anyway if it was for security purposes?
If you use a compressed drive you are asking for trouble. Winblows is more
likely to cache the data, since compression etc. is slow. And again, if you
compress the drive, how can you be assured your files are 2MB each?
> The first problem cannot be addressed unless you know how the data
> is being compressed, etc.
>
> But the second point is handled not by overwriting byte for byte but
> by overwriting until at least nearly all the remaining space from
> this 6MB area is overwritten. This would require a slightly more
> sophisticated process than currently implemented in OverWrite
> Version 1.2.
I would say so.
> But the solution is a simple one. Overwrite successive files of the
> same bit patterns contained in a single pass until the free space has
> all or nearly all been overwritten.
>
> For instance, Overwrite the 6MB with successive files of the given
> pass bit patterns. Eventually you will get a disk out of space
> error. If the files were appropriately small enough you can be sure
> that the last file began its write well past where you had written
> and processed your sensitive data. Then go on with the next pass of
> successive file overwrites until you once again get an out of disk
> space error, etc.
Again, on a compressed drive you can't tell exactly how big your files will
be. The OS might actually expand them if they don't compress, etc.
Your methods are crude and ineffective. If you read the drive's FAT you can
reliably overwrite files on a normal disk... of course that requires
research that you are not willing to do.
Tom
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Freeware issues?
Date: Mon, 12 Mar 2001 12:43:56 -0800
[sent both to sci.crypt and private e-mail]
"Dan Hargrove" <[EMAIL PROTECTED]> wrote in message
> I hope this is on-topic.
Close enough to get replies. I will only reply to those portions that I do
know.
> 2) On-the-fly; volumes; partitions;
>
> One commonly available product is E4M (it uses IDEA or DES). How secure
> is E4M compared to commercially available software?
Using IDEA it should be roughly as secure as others, using DES it should be
considered insecure (RSA DES Challenge III 22 hours 15 minutes)
> It seems to me that
> a program would have to be entirely decrypted before it would run
> properly, but what do I know?
It depends on how the program is organised, how windows loads it, etc. If
windows decides to only load a small portion of the file from disk, then
most of it will remain encrypted on the disk.
> What attacks (in laymans terms) could be successful
> against encrypted volumes that would not be successful against encrypted
> partitions?
Disk Caching is more likely to take place, with Windows attempting to
"optimize" the directory, however it will not be able to "optimize" the
partition.
> ScramDisk is another available product. Does ScramDisk have
> weaknesses that are not presented in the documentation?
It has no known weaknesses that it does not fairly accurately present.
> 3) Blowfish and IDEA;
>
> These are commonly used for file encryption, but are general-purpose
> algorithms. Are there better algorithms/cyphers for file/folder
> encryption?
That depends heavily on your requirements. The decision of one encryption
algorithm over another can be a very long, difficult process. It is
generally easier to just say that 3DES and Rijndael are likely to be at
least as resistant to cryptanalysis, and both have government blessings.
However, 3DES will be much slower, and Rijndael is much newer.
>
> 4) Key size; passwords;
>
> I suspect the information I am reading about brute force attacks. It
> seems to me that with approximately 4 kb of key (including all subsets),
> a brute force attack would be very effective.
Actually it would be completely impossible. An approximation of the scaling
is below; this is based on a perfect cipher with a key of the given size,
and assumes a machine that could break DES in 1 minute (the current public
record is 22 hours 15 minutes):
bits    time
56      1 minute
64      4 hours 16 minutes
80      32 years
96      2 million years
128     10^15 years
4096    10^1210 years
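The scaling in that table can be reproduced in a few lines, assuming a machine
that exhausts a full 56-bit keyspace in one minute:

```python
# Reproduce the brute-force scaling table: a full search of a b-bit keyspace
# at 2**56 keys per minute takes 2**(b - 56) minutes.


def brute_force_minutes(bits, base_bits=56):
    # exact integer minute count for the full keyspace
    return 2 ** (bits - base_bits)


def brute_force_years(bits):
    return brute_force_minutes(bits) / (60 * 24 * 365.25)
```

Each added key bit doubles the time, which is why the 4096-bit row is
astronomically far beyond any conceivable machine.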
>There are only so many combinations,
> though (I imagine) exceeding the trillions.
Actually it grows much faster than that. 2^32 is already 4 billion, and
2^40 ~= 10^12.
> What about a database that
> contains, say, 10% of all possible variations for 128 bit Blowfish keys?
A machine to hold just the 128-bit keys for 10% of the 2^128 possible
Blowfish keys would need roughly a 129-bit byte address space; assuming
that a 256-megabyte memory would cost 1 cent, such a machine would cost
$20282409603651670423947251286.
> Isn't that feasible?
I don't think it's feasible.
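The quoted cost figure can be reproduced under the assumptions that
"256-megabyte" means 2^28 bytes and that each 128-bit key is stored as 16
raw bytes:

```python
# Reproduce the cost estimate for storing 10% of all 128-bit Blowfish keys
# at 1 cent per 256 MB (taken here as 2**28 bytes) of memory.

keys = 2 ** 128 // 10                     # 10% of all 128-bit keys
total_bytes = keys * 16                   # 16 bytes per stored key
modules = total_bytes // (256 * 2 ** 20)  # number of 256 MB memories needed
cost_dollars = modules // 100             # at 1 cent per module
```

The result is on the order of 2 x 10^28 dollars, so the "database of 10% of
the keys" idea fails on storage cost alone, before any lookup concerns.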
>
> 5) Swapfile;
>
> The swapfile issue with Windows appears to me to be insurmountable with
> Windows. I have looked at two freeware programs that overwrite the
> swapfile. One writes binary zeros to the entire file. The other
> overwrites the swapfile seven times (Scorch) with hash. How effective are
> these tactics against modern retrieval techniques?
Simply overwriting with 0s will be effective against software-based
techniques and against rudimentary forensic techniques. However, to be
truly sure that a file has been erased takes several passes by a random
number generator. Scorch will probably be suitable for many purposes.
There are numerous caveats, however; if you look in the sci.crypt history
of the last week or so you will see a very long, tense conversation
regarding just such problems. This is just one specific secure-deletion
problem.
>
> 6) Secure deletion;
>
> There are many freeware products for "burning" files. What are the most
> effective methods against modern retrieval techniques? Are they available
> in freeware products?
The most effective method of secure deletion, and the only method that we
have any assurance will work permanently, is extreme heat; generally a
couple thousand degrees or more will be highly effective. However, we
currently have evidence that overwriting a file once with perfect random
data would be sufficient. There is currently no known way of getting
perfect random data (see the OneTimePad discussions), so we settle for a
large number of overwrites with data as random as we can get. There are a
great many additional issues, many of which were covered in the recent
(very long) thread regarding OverWrite. In general I suggest wiping the
disk instead of individual files. For this, while the source code is
rather limited in availability, I generally recommend PGPwipe (comes with
PGP). However, outside of extreme temperatures we can only say that more
passes appear to be harder to recover from.
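A minimal sketch of the multi-pass overwrite idea, assuming a plain
uncompressed filesystem; as the thread notes, caching, journaling,
compression, and sector remapping can all defeat this, so it illustrates the
principle rather than guaranteeing destruction:

```python
# Multi-pass in-place overwrite before deletion, using OS-provided random
# bytes ("as random as we can get") and fsync to push each pass to the disk.

import os


def wipe_file(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # one pass of pseudo-random data
            f.flush()
            os.fsync(f.fileno())       # ask the OS to commit the pass
    os.remove(path)                    # finally unlink the file
```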
Joe
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Encryption software
Date: Mon, 12 Mar 2001 11:55:17 -0800
I believe we have a fundamental disagreement about one of your examples.
"Henrick Hellstr�m" <[EMAIL PROTECTED]> wrote in message
news:98fu0s$7sb$[EMAIL PROTECTED]...
> PGP is vulnerable to MITM attacks
Is a blatantly false statement.
> in so far that
> you usually can't be sure that the public keys you have are authentic.
Completely incorrect. It's a chain-of-trust model; if you are going to
insist on spouting incorrect knowledge, at least spout it where no one
cares. I know my key is authentic; I verify a certain subset of other keys
as authentic (either through in-person verification, a la a Cypherpunks
meeting, or through some other established trust) and build a trust
relationship with them, and the individuals behind those keys perform the
same trust building with others.
> For
> example, the owner of the SMTP/POP server you are using might in theory
> replace all PGP keys included in e-mail passing through your account and
> thereby be able to decrypt/read/encrypt each encrypted e-mail sent later
> on.
Wrong again. No entity (of believed strength) can remove the encryption
supplied by PGP (and its variants) without knowledge of the private key. I
refer to the above for trust in the private key.
Joe
------------------------------
From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Zero Knowledge Proof
Date: Mon, 12 Mar 2001 15:11:14 -0800
I think this is another one of those situations where you are not as
knowledgeable as you think.
A zero knowledge proof is not for the transfer of secrets. They come in two
general flavors. First, the kind where you prove you both have knowledge of
some secret, but you prove knowledge of that secret in such a way that you
don't reveal the secret. A simple (flawed) example of this is the following:
A->B: The challenge is C1
B->A: The challenge is C2
B->A: The response H1 = hash(C1 | secret)
A->B: The response H2 = hash(C2 | secret)
Both have proven knowledge of the secret, provided that hash() is strong
enough for the data, and subject to a few other caveats (like being open
to MITM).
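The exchange above can be written out in a few lines, with SHA-256 standing
in for hash(); this toy version keeps the stated flaws (MITM, offline
guessing of low-entropy secrets), so it is only the skeleton of the idea:

```python
# Flawed challenge-response proof of knowledge: each side hashes the other's
# challenge together with the shared secret, never sending the secret itself.

import hashlib


def respond(challenge: bytes, secret: bytes) -> bytes:
    return hashlib.sha256(challenge + secret).digest()


def verify(challenge: bytes, secret: bytes, response: bytes) -> bool:
    return respond(challenge, secret) == response
```

A sends C1 and checks B's response; then the roles reverse with C2, so both
sides prove knowledge without revealing the secret to a passive observer.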
The second kind is public agreement of a shared secret. This is generally
considered a zk proof only because it uses the first kind to verify that
the secret really is shared (see SRP for one example).
While an attacker may guess at the secret and quickly verify the guess
(e.g., knowledge of the secret, C1, and H1 permits quick verification), the
knowledge provided is insufficient to determine what the secret is; it only
allows a guess to be verified. This is in very sharp contrast to your claim
that Scott19u is superior for this purpose. If an encryption algorithm of
any kind is used for this, then not only can guesses be verified (which is
debatable, depending on the protocol), but enough information is supplied
to recover the secret from the transmission. Additionally, true ZKPs have
the property that if B does not know the secret, the secret remains unknown
to B; if the secret were encrypted, proving knowledge of it would amount to
decrypting it, which would reveal the secret.
Joe
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Crossposted-To: sci.math
Subject: Re: Noninvertible encryption
Date: 16 Mar 2001 12:48:14 GMT
[EMAIL PROTECTED] (Paul Crowley) wrote in
<[EMAIL PROTECTED]>:
>David Schwartz <[EMAIL PROTECTED]> writes:
>> In this case, it actually weakens things. Uncompressed data is much
>> more likely to contain exploitable patterns than compressed data. In
>> fact, compressibility is pretty much a measure of how patterned
>> something is.
>
>But there's no real security issue since our ciphers are designed to
>resist known-plaintext attack.
I like the disclaimer "known-plaintext attack". Unfortunately,
it's the plaintext attack that you don't know about that ends up biting
you in the ass when you use such a method.
>
>To put this in perspective: I recently found a very slight bias in the
>output of a cryptographic pseudo-random number generator (CPRNG). You
>have to observe 64 GIGABYTES of output from the RNG before the bias is
>detectible, and we've found no way to use it to find anything out
>about the cryptographic key (or indeed any use for it at all beyond
>detecting that this CPRNG is being used); yet to the community, this
>is a devastating result that puts the CPRNG beyond use until the
>problem is fixed. That's the sort of standard against which our
>ciphers are measured.
At present there is a double standard in the crypto community. They
worry a lot about bias in the output of a cryptographic pseudo-random
number generator, since if you are able to prove a certain level of
bias, it is feared that there may be more bias than what you actually
see. But as of yet the community, which is very closed and not open to
free thinking, does not seem to care about the obvious bias of many
poor compression routines commonly used with encryption. The closed-minded
belief is that compression headers are the only problem, and not the
general structure of the output file. Well, they are wrong.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website **now all allowed**
http://members.xoom.com/ecil/index.htm
Scott LATEST UPDATED source for scott*u.zip
http://radiusnet.net/crypto/ then look for
sub directory scott after pressing CRYPTO
Scott famous Compression Page
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************