Cryptography-Digest Digest #514, Volume #10 Fri, 5 Nov 99 17:13:03 EST
Contents:
Hash with truncated results (jerome)
Re: cryptohoping ([EMAIL PROTECTED])
Re: Proposal: Inexpensive Method of "True Random Data" Generation (John Savard)
Re: Q: Removal of bias (Mok-Kong Shen)
Re: Q: Removal of bias - reply (Mok-Kong Shen)
Re: Build your own one-on-one compressor (Mok-Kong Shen)
Re: Data Scrambling references (Mok-Kong Shen)
Re: Re: An encryption proposal from a Newbie... <- A modification (CoyoteRed)
Re: The Code Book (John Savard)
Re: Q: Removal of bias (John Savard)
Re: Q: Removal of bias (John Savard)
Re: PGP Cracked ? (SCOTT19U.ZIP_GUY)
Re: Compression: A ? for David Scott (SCOTT19U.ZIP_GUY)
Re: Incompatible algorithms (Tom St Denis)
Re: What is the deal with passwords? (repost) (William Rowden)
Montgomery vs Square-and-Multiply speed ([EMAIL PROTECTED])
Military cryptology veteran groups ([EMAIL PROTECTED])
Re: Lenstra on key sizes (Tom St Denis)
Re: What is the deal with passwords? (repost) (Tom St Denis)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (jerome)
Subject: Hash with truncated results
Reply-To: [EMAIL PROTECTED]
Date: Fri, 05 Nov 1999 20:24:14 GMT
In IPsec, the result of the hash functions is often truncated to 96 bits
(for MD5 or SHA-1). As far as I know, SHA-1 is considered stronger mainly
because of its digest size of 160 bits versus 128 bits for MD5.
Are there any reasonable grounds to believe that SHA-1 truncated to 96 bits
is stronger or weaker than MD5 truncated to 96 bits?
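For concreteness, the truncation in question just keeps the leading 96 bits (12 bytes) of the digest. A sketch in Python using the standard hashlib module (note the IPsec constructions actually truncate the HMAC output, not the bare hash as shown here):

```python
import hashlib

def truncated_digest(algorithm, data, bits=96):
    # Hash `data` and keep only the leading `bits` bits, in the style
    # of IPsec's HMAC-MD5-96 / HMAC-SHA-1-96 truncation (shown on the
    # bare hash rather than the HMAC, for brevity).
    digest = hashlib.new(algorithm, data).digest()
    return digest[: bits // 8]

msg = b"test vector"
print(len(truncated_digest("md5", msg)))   # 12 bytes = 96 bits
print(len(truncated_digest("sha1", msg)))  # 12 bytes = 96 bits
```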
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: cryptohoping
Date: Fri, 05 Nov 1999 18:54:28 GMT
In article <[EMAIL PROTECTED]>,
fungus <[EMAIL PROTECTED]> wrote:
>
> ill-omen wrote:
>>
>>has anyone ever heard of a program that hops crypto algorithms
>>like a frequency-hopping radio does? so that the ciphertext would
>>actually be the result of multiple algorithms rather than just one
>
> FROG did this.
>
> In the end though, it doesn't make any difference to security.
There may be important security advantages if you can build a "cipher
generator" that produces independent ciphers. See:
http://www.deja.com/threadmsg_ct.xp?AN=539793780
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Crossposted-To: sci.math,sci.misc,sci.physics
Subject: Re: Proposal: Inexpensive Method of "True Random Data" Generation
Date: Fri, 05 Nov 1999 19:12:31 GMT
[EMAIL PROTECTED] (Bill McGonigle) wrote, in part:
>In article <[EMAIL PROTECTED]>,
>[EMAIL PROTECTED] (John Savard) wrote:
>> Ditto Rand's One Million Digits (avalable on
>> the web for free from Rand, good guys!).
>> There are times in cryptography when you _really_ need randomness.
>If everybody can order a copy is it good for cryptography? I can see how
>it'd be useful for private things like simulations.
If everyone can order a copy, it isn't good for most purposes in
cryptography; that applies to that book (you can download the digits
for free, not just order the book) and to the number pi.
Of course, there are *some* purposes in cryptography where a sequence
of numbers must be random-looking, but can be public, i.e. the S-boxes
inside DES. But that is a separate issue.
For keys, you need real randomness, and you need your own private
random digits.
John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Removal of bias
Date: Fri, 05 Nov 1999 20:10:40 +0100
[EMAIL PROTECTED] wrote:
>
> Mok-Kong Shen wrote:
> > Suppose one does a frequency count and finds that there is quite
> > a bit deviation from uniform distribution
>
> So you rule out a generator if the frequency count
> is far enough from the expected value to reject
> the hypothesis that all sequences are equally
> likely?
If a generator does not provide the sort of distribution that
I want, I must either try to postprocess the output to suit,
or choose another generator. There doesn't seem to be a third
alternative, I am afraid.
>
> > and applies different
> > methods to obtain a sequence of improved distribution, I like to
> > know which method is better, i.e. how they compare with one another
> > in practice, including also the computational costs.
>
> A large range of methods pass the test. Since
> the test doesn't distinguish any of them, we have
> to say that in practice they tie. For example,
> take enough of the input stream that we expect
> 128 bits of entropy, hash it with SHA-1 and key
> RC-4 with the digest. Output the RC-4 stream.
> In practice the method works well and efficiently
> for producing an unbiased stream from a biased
> one. Practice can be so misleading.
If the results of a number of methods all pass the tests that are
considered to be relevant, then that's very valuable information
for the user, in my humble opinion. What I would like to see in this
case is a report giving the test results.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Removal of bias - reply
Date: Fri, 05 Nov 1999 20:10:47 +0100
vic wrote:
>
> Your question is rather broad - read the following and let me know if
> this tells you what you want to know :
........
Thank you for the post. I have the impression that these are
not methods that are being used in practice, at least not
currently. What I would like to have is a report of practical
experiences in bias removal.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: comp.compression
Subject: Re: Build your own one-on-one compressor
Date: Fri, 05 Nov 1999 20:10:53 +0100
Phil Norman wrote:
>
> Actually, the main problem with this compression method, as
> I see it (please correct me if I'm wrong) is one of prefixes.
> Ignoring the huffman aspect and concentrating on the 1-1
> mapping, the idea of mapping words to their 16-bit number
> counterparts doesn't result in 2^16 as a dictionary size.
> Since no code may be a subset of any other code, none of
> the numeric codes are allowed to be of values such that
> their ASCII representation is a pair of alphabetic characters
> (or at least, they can't be a pair of alphabetic characters
> which form the first two letters of any word in the
> dictionary).
If your English dictionary gives a number, of 16 bits, for each entry,
then you can simply translate your sentences into a sequence
of numbers which together form a binary string. These numbers
occupy a fixed number of bytes, so no prefix property needs to hold.
You might, but needn't, try to compress the output again. That
translation should give you a file that is smaller than the input.
There could be some problems that need to be resolved, for dictionaries
don't list plurals of nouns, nor the various tenses of verbs, etc.
But one could devise some coding rules to take care of that, I believe.
The main trouble with a scheme like this is getting the general
public to accept a standard (or de facto standard) of numerical coding.
On the other hand, Unicode encodes Chinese 'words' in two bytes,
so there is already a precedent.
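The fixed-width scheme described above can be sketched as follows (the four-word dictionary is purely illustrative; a real scheme would need a standardized word list and the coding rules for inflections mentioned above):

```python
import struct

# Toy dictionary: each word gets a fixed 16-bit code.
words = ["the", "quick", "brown", "fox"]
code_of = {w: i for i, w in enumerate(words)}
word_of = {i: w for i, w in enumerate(words)}

def compress(sentence):
    # Replace each word by its 16-bit code (big-endian). Because every
    # code is exactly two bytes, no prefix property is needed.
    return b"".join(struct.pack(">H", code_of[w]) for w in sentence.split())

def decompress(blob):
    codes = struct.unpack(">%dH" % (len(blob) // 2), blob)
    return " ".join(word_of[c] for c in codes)

s = "the quick brown fox"
print(len(compress(s)))  # 8 bytes, versus 19 bytes of ASCII
```

The round trip `decompress(compress(s)) == s` holds for any sentence built from the dictionary, which is the one-to-one property this thread is concerned with.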
I would like to take this opportunity to remark that I have the
impression that the discussions in this thread have been highly biased
toward the 'one-to-one' property, while the compression aspect seems
to have received comparatively little attention. Perhaps it may be
pointed out that if a compression scheme is such that the 'one-to-one'
property (as understood in this thread) only fails very rarely, then
the analyst couldn't get much information from that direction, and we
could, in my humble opinion, fairly well let such a scheme be used
in crypto applications.
M. K. Shen
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Data Scrambling references
Date: Fri, 05 Nov 1999 20:10:26 +0100
Larry Mackey wrote:
>
> Unfortunately, we have to do the "scrambling" while the data is in parallel
> format before the serializer. I agree that implementing LFSR at the serial
> stream would be the norm for this type of application but this application
> won't allow us to do that.
Couldn't you then use, e.g., a separate LFSR on each one of the
parallel streams?
M. K. Shen
------------------------------
From: [EMAIL PROTECTED] (CoyoteRed)
Subject: Re: Re: An encryption proposal from a Newbie... <- A modification
Date: Fri, 05 Nov 1999 19:09:33 GMT
Reply-To: this news group unless otherwise instructed!
On Fri, 5 Nov 1999 15:12:07 GMT, "Douglas A. Gwyn" <[EMAIL PROTECTED]>
wrote:
>CoyoteRed wrote:
>> I've noticed a weakness that some of you picked up on and that's the
>> index keys. So, I proposes the following change: ...
>
>Aargh! That's why we discourage posting of newbie attempts at
>cryptosystem design. No matter how much work people put into
>analyzing the flaws, the newbie will just make another change
>and the process starts all over again. Eventually, people get
>tired of pointing out the flaws, at which point the newbie
>thinks that he has finally devised a great system because
>nobody seems to be able to find a flaw in it.
I can see that...
But I didn't change it just to change it. The index keys being sent
encrypted or not was a serious weakness, just as an OTP can be
trivially weak if the same pad is used twice. My basic system still
stands; however, to hide the index keys we will "generate" them on the
fly from a Seed Index Key. This SIK will not give any clue as to what
the first index key is, as that key is generated from a hash of
several unknown files (some of the 64 1k-byte files). Each subsequent
key is then hashed from the previous two 1k generated files.
What I'm trying to determine is whether the following is true:
(With this assumption: our hash routine will give us a radically
different number if even one bit is changed.)
We have a possible 2^8192 files that are 1024 bytes in length.
We choose (or I should say generate) 64 of those 2^8192 possible
files, and we can then mix them in various combinations so that we are
able to get nearly (2^64)-64 new different 1024-byte files.
We should be able to use several million of these files before we have
even a remote chance of having a collision with a file that already
has been used. (Because of the very nature of randomness, we are not
in control of the output and may actually get a collision. The only
control we have is that we can get the same output twice by using the
same SIK twice.)
Because of the random nature of these files, we will not be able to
foresee a hash of any particular combination of these files.
Therefore we are not in control of the index keys.
Because we are using a hash of two 1k byte files, even if the attacker
/knew/ the plain text, he wouldn't be able to test to confirm it
unless he knew the previously used 1k files (and the first one is not
used for anything other than getting the first hash so he will not be
able to determine what this file is.)
Because our attacker does not know the hash, he will not be able to
compare it to our clear sent SIK. Here we block another clue.
Do we need to use a vastly different SIK each time, or can we use
SIKs in sequence starting with 1 and still get the same security? By
this I mean, can one use the hash of one file and the hash of another
file to get the hash of the two files together? If so, this is a
weakness that must be addressed by using a hashing algorithm that will
not allow this.
Other than the origin of the random file (we are assuming a really,
really random file), physical security of the computers, the overhead
used on each computer, and getting the 64K-byte file to your partner:
point out the flaw in this system and I will do a lot more studying
before submitting any more "schemes."
You know, the more that I think about it, the more I think I'm trying
to build a PRNG that has no definable internal complex math, relying
on an algorithm to recombine a really random number in many different
ways to get an equally random, but repeatable, number using a
"seed" that will only work with /this/ set of 1k files.
Hmmm...
--
CoyoteRed
CoyoteRed <at> bigfoot <dot> com
http://go.to/CoyoteRed
PGP key ID: 0xA60C12D1 at ldap://certserver.pgp.com
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: The Code Book
Date: Fri, 05 Nov 1999 19:19:27 GMT
"Sandy Macpherson" <[EMAIL PROTECTED]> wrote, in part:
>My apologies: it reads like gibberish to me as well now. What I meant to
>draw your attention to is this:
>in creating a homophonic substitution cipher, the cryptographer bases the
>distribution of homophones according to the frequency that each (plaintext)
>letter occurs. Since the total of all these frequencies must equal 100%, it
>follows that the number of homophones in the ciphertext will be 100, or a
>multiple thereof.
Unless the cryptographer decided to pretend he was from Mars, where
the people have four digits on each hand...
in that case, the total of the frequencies would equal 64 per
quatnorb, and so the homophonic character set would contain 64
elements or a multiple of it...
I'm sorry, but this claim is a howler. Just because a character occurs
3% of the time in text does NOT mean that the cryptographer will
assign 3 symbols to it, or 6 symbols. It is perfectly possible to
decide that -
symbols occurring less than 2.57% of the time will get one symbol
only, even if there are some symbols that only occur 0.0003% of the
time,
and
other symbols will get, roughly, as many symbols as their frequency is
a multiple of 2.57%...not 1%, or 0.5%, or 0.33%.
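The allocation rule sketched above can be made concrete. A sketch (the letter frequencies are hypothetical round numbers, and 2.57% is just the post's illustrative threshold; nothing forces the homophone total to be 100 or a multiple of it):

```python
# Hypothetical per-letter frequencies, in percent.
freqs = {"e": 12.7, "t": 9.1, "q": 0.1, "z": 0.07}
threshold = 2.57  # designer's choice; need not divide 100

def homophone_count(pct):
    # Letters below the threshold get exactly one homophone; others get
    # roughly their frequency as a multiple of the threshold.
    return max(1, round(pct / threshold))

alloc = {c: homophone_count(p) for c, p in freqs.items()}
print(alloc)                 # 'e' gets 5 homophones, rare letters get 1
print(sum(alloc.values()))   # total is 11 here, not 100
```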
John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Q: Removal of bias
Date: Fri, 05 Nov 1999 19:27:22 GMT
[EMAIL PROTECTED] wrote, in part:
>Mok-Kong Shen asked:
>> Could anyone give references to practical results of removal of
>> bias in random number/bit sequences, showing the efficiency of
>> the techniques?
>What do you know about your input stream and what
>do you require for the output stream? If I
>described a method that returns the stream
>0,0,0,0,... with probability 0.5 and the stream
>1,1,1,1,... with probability 0.5, you would
>probably object, even though for all i, the i'th
>bit is equally likely to be 0 or 1.
I think one can assume what people want from a method for "removing
bias":
- bias is to be removed
- the output sequence is desired to more closely approximate total
randomness than the input sequence, and will therefore be shorter than
the input sequence
- some entropy may be lost as a result of shortening the output more
than needed, but the intent is to leave enough in to make the output
fully random
Thus, the mapping
00 -> ignore
01 -> 0
10 -> 1
11 -> ignore
wastes a lot of entropy, particularly if the input stream was unbiased
or nearly so to begin with, but removes a certain type of bias, and
only generates bits not filled with a bit's worth of entropy if the
source has other, subtler, types of bias than the one it is designed
to remove.
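That 01/10 mapping is the classic von Neumann corrector; a minimal sketch:

```python
def von_neumann(bits):
    # Examine non-overlapping pairs: 01 -> 0, 10 -> 1, 00/11 -> discard.
    # For independent bits with a fixed bias this yields unbiased
    # output, at the cost of discarding at least half the input.
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann([1, 1, 0, 1, 1, 0, 0, 0, 1, 0]))  # -> [0, 1, 1]
```

As the post notes, the independence assumption is the weak point: correlated input bits defeat this construction even though each pair is examined "fairly".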
John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Q: Removal of bias
Date: Fri, 05 Nov 1999 19:31:57 GMT
[EMAIL PROTECTED] wrote, in part:
>For example,
>take enough of the input stream that we expect
>128 bits of entropy, hash it with SHA-1 and key
>RC-4 with the digest. Output the RC-4 stream.
>In practice the method works well and efficiently
>for producing an unbiased stream from a biased
>one. Practice can be so misleading.
That does satisfy the purpose, if one ignores the implicit definition.
Bias removal means: take enough of the input stream that we expect 128
bits of entropy...produce 128 unbiased bits...then move on and take
more of the input stream.
It does *not* mean use the RC4 output forever. That's "obvious". While
I agree that accurate definitions are useful, how is it useful to
respond to an enquiry about bias removal that it failed to contain a
comprehensive and rigorous definition of bias removal?
John Savard ( teneerf<- )
http://www.ecn.ab.ca/~jsavard/crypto.htm
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: PGP Cracked ?
Date: Fri, 05 Nov 1999 20:47:11 GMT
In article <[EMAIL PROTECTED]>, "Harry Solomon"
<[EMAIL PROTECTED]> wrote:
>A security expert at my place of work states that PGP can be cracked. He
>says that, today being Friday, he will give me my passphrase by cracking
>the code by the following Tuesday. Is this possible?
>
>
Well, give it to him and find out.
David A. Scott
--
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website NOT FOR WIMPS
http://members.xoom.com/ecil/index.htm
Scott rejected paper for the ACM
http://members.xoom.com/ecil/dspaper.htm
Scott famous Compression Page WIMPS allowed
http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
------------------------------
From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Compression: A ? for David Scott
Date: Fri, 05 Nov 1999 20:45:12 GMT
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
>
>This is coming back to the "you can't teach a pig to sing" concept.
>
>
Yes I see I can't teach you to sing.
David A. Scott
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Incompatible algorithms
Date: Fri, 05 Nov 1999 20:11:10 GMT
In article <[EMAIL PROTECTED]>,
Max Polk <[EMAIL PROTECTED]> wrote:
> I had said:
> > > Has any research been done on the existence of "incompatible"
> > > algorithms, especially in the context of producing arbitrarily
> > > secure ciphertext?
>
> In article <7vob2g$2lv$[EMAIL PROTECTED]>,
> [EMAIL PROTECTED] says...
> > Refer to the MARS AES submission, and do not reply until you know
> > why I said that.
>
> I just downloaded and read the detailed description of MARS from:
>
> http://www.research.ibm.com/security/mars.html
>
> I suppose the topic of focus is a "type-3 Feistel network". The non-
> copyrighted document stated:
>
> "A type-3 Feistel network consists of many rounds, where in each
> round one data word (and a few key words) are used to modify all the
> other data words. Compared with a type-1 Feistel network (where in
> each round one data word is used to modify one other data word), this
> construct provides much better diffusion properties with only a
> slightly added cost. Hence, fewer rounds can be used to achieve the
> same strength."
>
> Some test data showed using the output of one "round" as the input to
> the next. By repeating the same sub-algorithm 40 or 100 or more
> times, the bits of data spread well.
>
> There were several sub-algorithms used to make up the MARS algorithm.
> Steps were taken at the beginning and end differently than in the
> middle. Lots of bit-shifting, addition, multiplication, table lookup
> and other techniques were used. It was stated MARS could replace DES
> for many years to come.
>
> And I suppose that by using different kinds of sub-algorithms in
> different combinations using bits from various places, it produced
> enough confusion of data so as to make cryptanalysis difficult.
>
> Still, I would like to see if anyone has researched mathematically
> "incompatible" algorithms that, simply by applying them in sequence,
> can make plaintext arbitrarily secure.
You got my point, I suppose. MARS implements three 'levels' of
crypto: an initial key mixing, some linear mixing, the crypto rounds,
more linear mixing, and a final key mixing. These layers are nowhere
near 'compatible' in any sense such as K1 = K2.
This is a good example of the idea you are researching. 'Incompatible'
equations or mathematics can occur in the same round function; these
can be viewed as non-isomorphisms and/or non-linear functions [of the
key and some input, obviously]. TEA, for example, includes
'incompatible' operations in the same equation:
y = y + ((((x << 4) xor (x >> 5)) + x) xor (s + K[s and 3]))
[ I think that's it; this is the XTEA variant ]
Here the use of xor and addition is incompatible for the most part.
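As a sketch of how the two operations interleave, here is one such XTEA-style half-round in Python, following the post's y/x/s/K names (an illustration of the mixing structure, not a vetted cipher implementation):

```python
MASK = 0xFFFFFFFF  # all arithmetic is modulo 2**32

def mix(y, x, s, K):
    # One XTEA-style half-round. Shifts and xor are interleaved with
    # addition mod 2**32; neither operation distributes over the other,
    # which is what frustrates simple algebraic cryptanalysis.
    t = ((((x << 4) & MASK) ^ (x >> 5)) + x) & MASK
    t ^= (s + K[s & 3]) & MASK
    return (y + t) & MASK

print(mix(1, 2, 3, [0, 1, 2, 3]))  # -> 37
```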
Tom
------------------------------
From: William Rowden <[EMAIL PROTECTED]>
Subject: Re: What is the deal with passwords? (repost)
Date: Fri, 05 Nov 1999 20:14:45 GMT
Johnny Bravo <[EMAIL PROTECTED]> wrote:
> On Tue, 02 Nov 1999 17:06:30 GMT, Tom St Denis
> <[EMAIL PROTECTED]> wrote:
[snip]
> >I don't think I could memorize a2AS21vhy65$1! for more than a day
> >or two before it fades...
[snip]
> Your example password has about 92 bits of entropy, the following
> passphrase "mince sequin drama gorky ajax yoga loan" has 90 bits of
> entropy
This newbie has a question:
How is the entropy for a text sample calculated? I know this formula
for source entropy:
H(p) = - sum ( p(t) * lg p(t) ) for t from source
This formula gives me about 4 bits per character for the longer pass
phrase if I assume it's representative of the frequencies of the
source, and about 3+1/2 bits per character for the shorter password
under the same assumption. I don't know how to modify this
calculation for a *particular* event.
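For what it's worth, the order-0 calculation above can be run directly on a sample, using the sample's own character counts as p(t). Note this measures the string itself rather than the source it was drawn from, which is exactly the gap the question points at:

```python
from collections import Counter
from math import log2

def empirical_entropy(text):
    # Order-0 estimate in bits per character, taking the sample's own
    # character frequencies as the probabilities p(t).
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(round(empirical_entropy("a2AS21vhy65$1!"), 2))  # about 3.5 bits/char
print(round(empirical_entropy("mince sequin drama gorky ajax yoga loan"), 2))
```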
--
-William
Damages claimed for unsolicited commercial email (RCW19.86 & 47USC227)
PGP key: http://www.eskimo.com/~rowdenw/pgp/rowdenw.asc until 2000-08-01
Fingerprint: FB4B E2CD 25AF 95E5 ADBB DA28 379D 47DB 599E 0B1A
------------------------------
From: [EMAIL PROTECTED]
Subject: Montgomery vs Square-and-Multiply speed
Date: Fri, 05 Nov 1999 20:10:18 GMT
Hello,
Does anyone know what the speed difference between a Montgomery ExpMod
and a Square-and-Multiply ExpMod is? (by ExpMod I'm referring to an
"a^b mod c" type of operation).
I'm trying to write my own ExpMod in C for large numbers, but the
Montgomery method seems too complicated. I'd just like to know what
the performance difference will be before I start coding, because if
square-and-multiply is too slow, then it doesn't make sense to put any
effort into it.
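For reference, plain square-and-multiply (right-to-left binary method) is only a few lines; Montgomery's contribution is to replace each expensive division-based reduction with a cheaper word-level reduction, not to change this loop structure. A sketch in Python (a C version for large numbers would additionally need a bignum layer):

```python
def expmod(a, b, c):
    # Right-to-left binary exponentiation: one squaring per exponent
    # bit, plus one multiplication per set bit, all reduced mod c.
    result = 1
    a %= c
    while b > 0:
        if b & 1:
            result = (result * a) % c
        a = (a * a) % c
        b >>= 1
    return result

print(expmod(5, 117, 19))  # matches pow(5, 117, 19)
```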
Thanks,
Derek.
------------------------------
From: [EMAIL PROTECTED]
Subject: Military cryptology veteran groups
Date: Fri, 05 Nov 1999 20:11:53 GMT
Here is a list of military cryptology veteran groups for the brave women
and men of the military components of the Central Security Service,
the military arm of the National Security Agency responsible for
cryptology (SIGINT, COMSEC, etc.).
Air Force: the Air Force Intelligence Agency (67th IW), which was the
Air Force Security Service and was redesignated the Electronic Security
Command in 1979.
http://www.harborside.com/home/a/aowens/warriors.htm
http://members.tripod.com/~usafss/index.htm USAFSS WebRings
http://www.netxpress.com/~ftva/index.html
Army: The Army Security Agency is now the Army Intelligence and Security
Command
http://nasaa.npoint.net
Marine Corps: Marine Support Battalions.
http://www.flash.net/~davemose/mcca.html
Navy: the Naval Security Group Command (NSGC);
http://www.usncva.org
Canada
http://www.geocities.com/Pentagon/Bunker/7803/index.htm#index
http://watserv1.uwaterloo.ca/~brobinso/cse.html
Mike
"Better to practice cryptology than be a theoretical cryptologist with
a Ph.D."
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Lenstra on key sizes
Date: Fri, 05 Nov 1999 20:19:01 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (DJohn37050) wrote:
> Regarding my "naysayer" comment above, I wish to explain more.
>
> Arjen is one of the most accomplished algorithm "crackers" and I have
the
> highest respect for his ability. He has expressed concern about ECC
in the
> past, and such concern was posted by RSA on their ECC website. I
should have
> mentioned this in my previous posting instead of using the
term "naysayer".
> Arjen points out in his recent paper that there has been no
significant
> progress in attacking ECC, but there is a record of continuous
improvement in
> attacking RSA.
Funny; when RSA was proposed, 512-bit keys were very far out of reach.
Tom
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: What is the deal with passwords? (repost)
Date: Fri, 05 Nov 1999 20:24:28 GMT
In article <7vvdrc$6m9$[EMAIL PROTECTED]>,
William Rowden <[EMAIL PROTECTED]> wrote:
> This newbie has a question:
>
> How is the entropy for a text sample calculated? I know this formula
> for source entropy:
>
> H(p) = - sum ( p(t) * lg p(t) ) for t from source
>
> This formula gives me about 4 bits per character for the longer pass
> phrase if I assume it's representative of the frequencies of the
> source, and about 3+1/2 bits per character for the shorter password
> under the same assumption. I don't know how to modify this
> calculation for a *particular* event.
There is essentially an asymptote you approach as the order of your
predictor approaches a certain size. An order-0 predictor will give an
idea of the entropy, but order-1 will be n times better [assuming a
distribution of m symbols from a set of n].
So if you are working with bytes, order-1 can be up to 256 times more
accurate. If you want a general idea, run an order-2 or -3 predictor
on the password [obviously this is a static predictor] and you get a
good idea of the entropy.
An order-3 predictor will require 26^4 or 456976 counters (each can be
a byte in this case). Then you calculate the probability as the
frequency / size_of_message. You can probably incorporate other orders
as well... I am not much of a stats man, and heck, some of the facts
here may be off, but generally: build a frequency table and work from
there :)
Tom
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************