Cryptography-Digest Digest #185, Volume #14      Thu, 19 Apr 01 18:13:01 EDT

Contents:
  Re: First cipher ("M.S. Bob")
  Re: A practical idea to reinforce passwords ("Tom St Denis")
  Re: Reusing A One Time Pad ("Mark G Wolf")
  Re: Reusing A One Time Pad (James Felling)
  Re: Current best complexity for factoring? (Terry Boon)
  Re: "I do not feel secure using your program any more." (James Felling)
  Re: "I do not feel secure using your program any more." ("Tom St Denis")
  Re: Basic AES question (SCOTT19U.ZIP_GUY)
  Re: Any unbroken knapsack cryptosystem? ("Joseph Ashwood")
  Re: OTP breaking strategy (newbie)
  Re: "UNCOBER" = Universal Code Breaker (James Felling)
  Re: OTP breaking strategy ("Tom St Denis")
  Re: OTP breaking strategy ("Tom St Denis")
  Re: OTP breaking strategy (Joe H Acker)
  Re: Current best complexity for factoring? (SCOTT19U.ZIP_GUY)

----------------------------------------------------------------------------

From: "M.S. Bob" <[EMAIL PROTECTED]>
Subject: Re: First cipher
Date: Thu, 19 Apr 2001 21:39:36 +0100

[EMAIL PROTECTED] wrote:
> 
> Here's my first attempt at a block cipher. Please critique
> and explain WHY as well as  where I'm going wrong.
> 
> 1.) Feistel network, blocklength 64 bits, 128-bit key, 16 rounds
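
(For concreteness, here is a minimal generic Feistel skeleton with those
parameters, as a Python sketch; the round function F and the key schedule
are illustrative stand-ins, not the poster's actual design.)

# Sketch of a 64-bit-block, 128-bit-key, 16-round Feistel structure.
# F and subkeys() are toy placeholders, NOT the poster's design.
def F(half, subkey):
    # mix the 32-bit half with a 32-bit subkey (toy round function)
    return ((half + subkey) & 0xFFFFFFFF) ^ (((half << 3) | (half >> 29)) & 0xFFFFFFFF)

def subkeys(key128):
    # toy key schedule: reuse the four 32-bit words of the 128-bit key
    return [(key128 >> (32 * (i % 4))) & 0xFFFFFFFF for i in range(16)]

def encrypt_block(block64, key128):
    L, R = block64 >> 32, block64 & 0xFFFFFFFF
    for k in subkeys(key128):
        L, R = R, L ^ F(R, k)            # standard Feistel round
    return (R << 32) | L                 # undo the final swap

def decrypt_block(block64, key128):
    L, R = block64 >> 32, block64 & 0xFFFFFFFF
    for k in reversed(subkeys(key128)):  # same rounds, reversed subkeys
        L, R = R, L ^ F(R, k)
    return (R << 32) | L

key = 0x00112233445566778899AABBCCDDEEFF
pt  = 0x0123456789ABCDEF
assert decrypt_block(encrypt_block(pt, key), key) == pt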

Short (pointers to more) reading list:

Memo to the Amateur Cipher Designer
<http://www.counterpane.com/crypto-gram-9810.html#cipherdesign>

Self-Study Course in Block Cipher Cryptanalysis
<http://www.counterpane.com/self-study.html>


Memo to the Amateur Cipher Designer
by Bruce Schneier (October 15, 1998)

Congratulations. You've just invented this great new cipher, and you
want to do something with it. You're new in the field; no one's heard of
you, and you don't have any credentials as a
cryptanalyst. You want to get well-known cryptographers to look at your
work. What can you do? 

Unfortunately, you have a tough road ahead of you. I see about two new
cipher designs from amateur cryptographers every week. The odds of any
of these ciphers being secure are slim. The odds
of any of them being both secure and efficient are negligible. The odds
of any of them being worth actual money are virtually non-existent. 

Anyone, from the most clueless amateur to the best cryptographer, can
create an algorithm that he himself can't break. It's not even hard.
What is hard is creating an algorithm that no one else
can break, even after years of analysis. And the only way to prove that
is to subject the algorithm to years of analysis by the best
cryptographers around. 

"The best cryptographers around" break a lot of ciphers. The academic
literature is littered with the carcasses of ciphers broken by their
analyses. But they're a busy bunch; they don't have time
to break everything. How do they decide what to look at? 

Ideally, cryptographers should only look at ciphers that have a
reasonable chance of being secure. And since anyone can create a cipher
that he believes to be secure, this means that
cryptographers should only look at ciphers created by people whose
opinions are worth something. No one is impressed if a random person
creates a cipher he can't break; but if one of the
world's best cryptographers creates a cipher he can't break, now that's
worth looking at. 

The real world isn't that tidy. Cryptographers look at algorithms that
are either interesting or are likely to yield publishable results. This
means that they are going to look at algorithms by
respected cryptographers, algorithms fielded in large public systems
(e.g., cellular phones, pay-TV decoders, Microsoft products), and
algorithms that are published in the academic literature.
Algorithms posted to Internet newsgroups by unknowns won't get a second
glance. Neither will patented but unpublished algorithms, or proprietary
algorithms embedded in obscure products. 

It's hard to get a cryptographic algorithm published. Most conferences
and workshops won't accept designs from unknowns and without extensive
analysis. This may seem unfair: unknowns
can't get their ciphers published because they are unknowns, and hence
no one will ever see their work. In reality, if the only "work" someone
ever does is in design, then it's probably not worth
publishing. Unknowns can become knowns by publishing cryptanalyses of
existing ciphers; most conferences accept these papers. 

When I started writing _Applied Cryptography_, I heard the maxim that
the only good algorithm designers were people who spent years analyzing
existing designs. The maxim made sense, and I
believed it. Over the years, as I spend more time doing design and
analysis, the truth of the maxim has gotten stronger and stronger. My
work on the Twofish design has made me believe this
even more strongly. The cipher's strength is not in its design; anyone
could design something like that. The strength is in its analysis. We
spent over 1000 man-hours analyzing Twofish, breaking
simplified versions and variants, and studying modifications. And we
could not have done that analysis, nor would we have had any confidence
in that analysis, had not the entire design team had
experience breaking many other algorithm designs. 

A cryptographer friend tells the story of an amateur who kept bothering
him with the cipher he invented. The cryptographer would break the
cipher, the amateur would make a change to "fix" it,
and the cryptographer would break it again. This exchange went on a few
times until the cryptographer became fed up. When the amateur visited
him to hear what the cryptographer thought, the
cryptographer put three envelopes face down on the table. "In each of
these envelopes is an attack against your cipher. Take one and read it.
Don't come back until you've discovered the other
two attacks." The amateur was never heard from again. 

I don't mean to be completely negative. People occasionally design
strong ciphers. Amateur cryptographers even design strong ciphers. But
if you are not known to the cryptographic community,
and you expect other cryptographers to look at your work, you have to do
several things: 

1. Describe your cipher using standard notation. This doesn't mean C
code. There is established terminology in the literature. Learn it and
use it; no one will learn your specialized terminology. 

2. Compare your cipher with other designs. Most likely, it will use some
ideas that have been used before. Reference them. This will make it
easier for others to understand your work, and shows
that you understand the literature. 

3. Show why your cipher is immune against each of the major attacks
known in the literature. It is not good enough just to say that it is
secure; you have to show why it is secure against these
attacks. This requires, of course, that you not only have read the
literature, but also understand it. Expect this process to take months,
and result in a large heavily mathematical document. And
remember, statistical tests are not very meaningful. 

4. Explain why your cipher is better than existing alternatives. It
makes no sense to look at something new unless it has clear advantages
over the old stuff. Is it faster on Pentiums? Smaller in
hardware? What? I have frequently said that, given enough rounds, pretty
much anything is secure. Your design needs to have significant
performance advantages. And "it can't be broken" is not
an advantage; it's a prerequisite. 

5. Publish the cipher. Experience shows that ciphers that are not
published are most often very weak. Keeping the cipher secret does not
improve the security once the cipher is widely used, so if
your cipher has to be kept secret to be secure, it is useless anyway. 

6. Don't patent the cipher. You can't make money selling a cipher. There
are just too many good free ones. Everyone who submitted a cipher to the
AES is willing to just give it away; many of the
submissions are already in the public domain. If you patent your design,
everyone will just use something else. And no one will analyze it for
you (unless you pay them); why should they work for
you for free? 

7. Be patient. There are a lot of algorithms to look at right now. The
AES competition has given cryptographers 15 new designs to analyze, and
we have to pick a winner by Spring 2000. Any good cryptographer with
spare time is poking at those designs. 

If you want to design algorithms, start by breaking the ones out there.
Practice by breaking algorithms that have already been broken (without
peeking at the answers). Break something no one
else has broken. Break another. Get your breaks published. When you have
established yourself as someone who can break algorithms, then you can
start designing new algorithms. Before then,
no one will take you seriously. 

Creating a cipher is easy. Analyzing it is hard. 

See "Self-Study Course in Block Cipher Cryptanalysis":
http://www.counterpane.com/self-study.html

------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: A practical idea to reinforce passwords
Date: Thu, 19 Apr 2001 20:45:39 GMT


"Niklas Frykholm" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> In article <[EMAIL PROTECTED]>, Harald
> Korneliussen wrote:
>
> >My idea is that upon selecting a password, X bits of
> >random data is added to the password. You are not
> >informed of what these bits are, nor does the computer
> >store them. The computer only stores how many bits
> >there are, and brute-forces them every time you enter
> >your password.
>
> Yes, this is a good idea, we want to slow down the attacker
> as much as possible. And if we can slow down the attacker
> 1000 times (which is easy to do without putting much strain
> on your computer), it _does_ make a difference.
>
> Methods similar to this one are employed in many encryption
> systems (see, for example, PKCS #5, which Jakob referred to),
> however the slowdown is typically not done in this way.
> Instead, a slow key derivation function (KDF) is used to
> transform the password to a key
>
> K = KDF(PW)
>
> Usually, some parameter to the KDF controls the number of
> iterations (the amount of slowdown).
>
> This gives a constant slowdown (rather than a random one,
> which we get with your method), which might be preferable.
> However, I think the main reason that this method is
> preferred is that we do not have to worry about related key
> attacks.
>
> With your method we want to compute H(PW || R_i) for all
> possible R_i and compare it to a stored hash value.  It is
> possible that there are some weaknesses in the hash function
> that allow us to do this faster than by trying all possible
> R_i.
>
> For strong hash functions, such as SHA, no such weaknesses
> are known, AFAIK, and your method would work just as well.
> However, to be on the safe side, it might be better to use a
> KDF.
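
(For concreteness, a minimal sketch of the two approaches under discussion
-- X hidden random bits that must be brute-forced at verification time,
versus an iterated KDF.  SHA-256, X = 10, and the iteration count are
illustrative assumptions, not anything specified in the thread.)

# Sketch only: hidden-random-bits check vs. iterated-KDF slowdown.
import hashlib, os

X = 10  # hidden bits; verification costs up to 2**X hash evaluations

def enroll(password: bytes) -> bytes:
    r = int.from_bytes(os.urandom(2), "big") & ((1 << X) - 1)  # never stored
    return hashlib.sha256(password + r.to_bytes(2, "big")).digest()

def verify(password: bytes, stored_digest: bytes) -> bool:
    # brute-force the X unknown bits, as the original poster suggests
    return any(
        hashlib.sha256(password + r.to_bytes(2, "big")).digest() == stored_digest
        for r in range(1 << X)
    )

def kdf(password: bytes, salt: bytes, iterations: int = 1000) -> bytes:
    # constant-cost alternative: iterate the hash (PKCS #5-style idea)
    h = password + salt
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h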

Why not just use higher entropy passwords or keys instead of making more
work for legitimate users?

Tom



------------------------------

From: "Mark G Wolf" <[EMAIL PROTECTED]>
Subject: Re: Reusing A One Time Pad
Date: Thu, 19 Apr 2001 15:53:31 -0500

Your point being?  Are you some kind of stalker?  If so, why?  Is your own
personal life so pointless?  Do you feel small and powerless?  It fascinates
me that people get off on gossiping about one another so much.  To me it's
like eating raw sugar; I get very nauseous after only a little bit and stop.
I am not part of your global hell village.  Congress is way behind in
passing laws to protect privacy and unfortunately it's going to correct
itself on its own in a very harsh way, it seems.  Eh, what can you do.

And Freud was a mother fu*king as*hole that spawned a small universe of
devils, like you perhaps.  But that's a bit off topic so I will refrain
from commenting any further.  Perhaps in a psych group.




------------------------------

From: James Felling <[EMAIL PROTECTED]>
Subject: Re: Reusing A One Time Pad
Date: Thu, 19 Apr 2001 16:17:24 -0500



Mark G Wolf wrote:

> > Let K be a bitstring of length n, which is the key. This is securely
> > exchanged between both parties (Alice and Bob of course). To encipher a
> > message M of length p, where p <= n, randomly selected bits of K are
> > used to encipher message M by
> >   for i = 1 to p
> >      c_i = m_i ^ k_rand[i]
> >
> > ^ means exclusive-or
> > and the ciphertext is C.
> >
> > I don't see how Bob can decrypt the message without knowing which
> > key bits Alice used to encipher the message.
> >
> > If this "random selection" is based on a function, then it is not
> > random, and we can analyze this unspecified function for weaknesses.
>
> Let's just call it the secret sauce.

Ok. Here is what you have done. You have put all your security into this
"secret sauce".  If the selection method is truly random (bits from a "true"
RNG) you are basically using an OTP via remote control -- this could be secure
-- provided the method by which the stream is used to choose the bits is sound.

If your "sauce" is less random you lose your information-theoretic guarantee.
What you have then is a method of generating bits that is at best as secure
as the PRNG used, and the "pad" merely serves as a secondary obfuscation
step.
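
(A small illustrative sketch of the scheme as described, assuming the
"secret sauce" is a selection sequence both parties can reproduce; the
point is that the selection indices carry all of the security.)

# Sketch: encipher with randomly selected key bits, as described above.
# Both parties must reproduce the SAME selection sequence.
import random

def select_indices(seed, n, p):
    rng = random.Random(seed)          # stand-in for the "secret sauce"
    return [rng.randrange(n) for _ in range(p)]

def crypt(bits, key_bits, indices):
    # XOR is its own inverse, so one routine enciphers and deciphers
    return [b ^ key_bits[i] for b, i in zip(bits, indices)]

key = [1, 0, 1, 1, 0, 0, 1, 0]         # the shared pad K (n = 8)
msg = [0, 1, 1, 0, 1, 0, 1]            # message M (p = 7)
idx = select_indices("shared secret", len(key), len(msg))
c   = crypt(msg, key, idx)
assert crypt(c, key, idx) == msg       # Bob recovers M only with idx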


------------------------------

From: [EMAIL PROTECTED] (Terry Boon)
Subject: Re: Current best complexity for factoring?
Date: Thu, 19 Apr 2001 21:17:39 GMT
Reply-To: [EMAIL PROTECTED]

=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1

On Wed, 18 Apr 2001 17:03:02 -0700, Joseph Ashwood <[EMAIL PROTECTED]> wrote:

>"Terry Boon" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...

>> On Wed, 11 Apr 2001 06:37:36 GMT, Samuel Paik <[EMAIL PROTECTED]> wrote:

[discussing method of generating random n-bit prime - snip]

>> >Generally, pick random odd n-bit number (this means the high order
>> >bit and low order bit are set to 1 and the rest of the bits are chosen
>> >randomly).  Test for probabilistic primality.  If not prime, increment by
>> >2 and go to test, otherwise, accept as prime.

[snip]

>> Does this not bias the "random" selection of a prime towards primes
>> which come after a long run of composites?

[snip]

>Well if that doesn't work for you try:
>pick a random number
>repeat until prime
>It'll work, but it'll be a slower process.

[snip - more algorithms]

I agree that these will produce an unbiased distribution, at the cost
of more processing time and more random bits than the add-two method.
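
(To make that trade-off concrete, here is a small sketch of both generation
strategies in Python; the toy Miller-Rabin test and the 512-bit size are
illustrative choices, not anything from the thread.)

# Sketch: increment-by-2 search vs. fresh-random rejection sampling.
import random

def is_probable_prime(n, rounds=20):
    # toy Miller-Rabin test
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_by_increment(bits=512):
    # biased toward primes preceded by long runs of composites
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    while not is_probable_prime(n):
        n += 2
    return n

def prime_by_rejection(bits=512):
    # unbiased, but consumes fresh random bits on every attempt
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n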

I wonder if there is an efficient algorithm (say, polynomial in the
number of bits) which minimises the (expected) number of random
bits needed.

(The method which minimises the number of random bits needed would be
to generate a random number (n, say) between 1 and (number of primes
between 2^511 and 2^512-1), and then take the nth prime after 2^511.
Unfortunately, this cannot be done in reasonable time.)

>They will all work, the add 2 method was chosen because while the bias
>towards primes that comes after strings of composites is there, it is not
>known how to determine which primes come after such a stream

Well, we have a probabilistic way of generating them...

However, I'll concede that "pick random prime using the add-two
method, and see if it divides the number we want to factor" still
isn't going to be terribly efficient, even if it does take advantage
of the bias.

>Given this unless someone finds a factoring algorithm that is easier
>when the primes come after a long stream of composites, there is no
>additional risk.

This is what I suspect.  I would find it curious and surprising to
find a factoring algorithm that had this property.

But for all the trouble that some people seem willing to go to to
generate random bits, it seemed a little strange to bias the
subsequent prime generation...

- -- 
Terry Boon, Hertfordshire, UK
[EMAIL PROTECTED] 
=====BEGIN PGP SIGNATURE=====
Version: GnuPG v1.0.4 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.5

iD8DBQE631bZB+GG7A6DEUARAqz5AKC9g6q4cEOzl2KQAcmebi44TN5azgCeLwle
aFsHcvNqHgaCJY6jg35NNLo=
=7lcc
=====END PGP SIGNATURE=====

------------------------------

From: James Felling <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.misc,alt.hacker
Subject: Re: "I do not feel secure using your program any more."
Date: Thu, 19 Apr 2001 16:27:23 -0500



Anthony Stephen Szopa wrote:

> "I do not feel secure using your program any more."
>
> You sure jumped to a hasty conclusion.

Perhaps, perhaps not.  Your program can produce sound output, but you need
to enter far more data and do much more in the way of babysitting the code to
get the same level of quality you would get with any modern stream cypher.
The program you sell is slow compared with other methods such as RC4, is
memory-consumptive, has poor key agility, and if not used properly produces
bad output without warning the user.

>

{snip}



------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.misc,alt.hacker
Subject: Re: "I do not feel secure using your program any more."
Date: Thu, 19 Apr 2001 21:29:41 GMT


"James Felling" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> Anthony Stephen Szopa wrote:
>
> > "I do not feel secure using your program any more."
> >
> > You sure jumped to a hasty conclusion.
>
> Perhaps, perhaps not.  Your program can produce sound output, but you need
> to enter far more data and do much more in the way of babysitting the code to
> get the same level of quality you would get with any modern stream cypher.
> The program you sell is slow compared with other methods such as RC4, is
> memory-consumptive, has poor key agility, and if not used properly produces
> bad output without warning the user.

To be objective, when was the last time sha/tiger/haval warned you that your
password has too little entropy?

Tom



------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Basic AES question
Date: 19 Apr 2001 21:37:17 GMT

[EMAIL PROTECTED] (Frank Gerlach) wrote in <[EMAIL PROTECTED]>:

>
>
>Lou Grinzo wrote:
>
>> I'm just starting to learn about AES, and I was wondering:
>> Why does the AES standard support only the key sizes of
>> (I think) 128, 192, and 256 bits?  Is it purely to keep
>
>These are the magic numbers as defined by the Pope. Remember, the
>algorithm was invented by a catholic university.
>

    Then why wasn't 666 used?
I thought most of the European Catholics lived in Spain or Italy.
Also, what is the Vatican's position on this method?  I thought
they were big in crypto, or are they assuming that it is weak?



David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Any unbroken knapsack cryptosystem?
Date: Thu, 19 Apr 2001 14:37:14 -0700

I'd say that's no longer true. The NTRU system is a variation on the knapsack
theme; however, it has so far shown itself to be better than any previous one,
and it seems to be at the same level as the other public-key
systems. While I may not personally find it interesting to my work, I do not
believe that Schneier's statement is wholly correct at this time.
                                Joe
"JamesBaud" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Consider the "knapsack" method to be broken, as almost every large & small
> algo has been
> broken or sneered on...
>
> {ref:  Applied Cryptography, Bruce Shneier}
>
>
> -- JamesBaud




------------------------------

From: newbie <[EMAIL PROTECTED]>
Subject: Re: OTP breaking strategy
Date: Thu, 19 Apr 2001 17:44:51 -0300

So every bit-string is truly random.

input :    10010101
ciphertext 10100101
output :   00110000

Is this output random?

What is the probability that each bit-string of size n is truly
random?
Let me explain:
let the plain-text 0110101
 ciphertext OTP    1010010
 output            1101111

is this output random?
Let me choose all bit-strings with only one or two 0's as output.
They certainly correspond to some plaintext.
Are they random?

I'm not wrong. What I under-estimated is the probability that a given
bit-string could be truly random or not.
You can still tell whether the output is random or not.
When I introduce my selected word, even when I combine it with a truly
random string, the output could be non-random.
Try to analyze all the possible outputs combined with a truly random key:
some outputs do not meet the requirements of randomness.

Not 100%. Nearly 100%.
That is my error.
I recognize it.

I have a selected input combined with the same random key. The result is
variable.

Suppose the random key, as a sample, is 1001 and my ciphertext is 1101.
I try all possibilities of input:
Input ciphertext output

0000  1101       1101
0001  1101       1100 
0010 etc...
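
(The full table being described can be generated mechanically; a short
Python sketch, one line per candidate input -- note that every 4-bit value
appears exactly once in the output column.)

# Sketch: enumerate every 4-bit input against the fixed ciphertext 1101.
c = 0b1101
for m in range(16):
    print(f"{m:04b}  {c:04b}  {m ^ c:04b}")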

Some inputs are selective. If the output does not meet the conditions of
randomness I eliminate it: 111111111111111 or 111110111111 or
1110111111111011111 etc. do not meet those conditions.

I select by input and by output.
If the input is valid and the output is valid that is OK.
  
But my question is: how many outputs can I eliminate? With extra
information I know which inputs I can select.
That is my question.





Joseph Ashwood wrote:
> 
> "newbie" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > And every output is not necessarly (100 %) truly random.
> 
> Completely and totally WRONG!!!!!!!!!! There is a fundamental proof of the
> output of XOR having at least the entropy of the maximum of the entropy of
> it's inputs. Since one of the inputs to a OTP is purely entropic the output
> of the XOR is purely entropic, or as you said 100% random.

------------------------------

From: James Felling <[EMAIL PROTECTED]>
Subject: Re: "UNCOBER" = Universal Code Breaker
Date: Thu, 19 Apr 2001 16:52:20 -0500



newbie wrote:

> What is important is not to find all the key.
> You have to analyse the context of the plain-text.
> You are not going to find a word "homo erectus " in business letter. It
> is quite impossible.
> But dollars yes, company, yes etc...
> Algo to decrypt OTP :
> - look for words used in the context (business, personal, etc...)
> - try to slide the corresponding bit-string on ciphertext until it
> matches.
> - build the disclose little by little.
>
> If you try to find the key, it is a wrong strategy.

I have encoded 2 messages using an OTP over the digits 0 to 9. Random keys
were generated and added on a per-digit basis, dropping carries.

Message one is 1234567890, message two is 1234567899.

A 7480954378
B 3274620925

Which one is which?

Is message A 1234567890 encoded with key 6256497588, or is it 1234567899
encoded with key 6256497589?
Each is equally probable, and if my RNG is truly random, there is no way for
you to guess.

If I use my OTP twice then yes, it can be broken easily, and that is obviously
where you got your method, but if it is used properly there is no chance
of it being broken.
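
(A short sketch of the per-digit, carry-dropping arithmetic described above,
confirming that ciphertext A is consistent with either plaintext under the
two keys mentioned.)

# Sketch: per-digit addition mod 10, carries dropped.
def encipher(msg, key):
    return "".join(str((int(m) + int(k)) % 10) for m, k in zip(msg, key))

def decipher(ct, key):
    return "".join(str((int(c) - int(k)) % 10) for c, k in zip(ct, key))

# Both pairings yield the same ciphertext A = 7480954378:
assert encipher("1234567890", "6256497588") == "7480954378"
assert encipher("1234567899", "6256497589") == "7480954378"
assert decipher("7480954378", "6256497588") == "1234567890"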

(spoiler space)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
\
|
\
|
\/

Message two is A, Message one B


------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: OTP breaking strategy
Date: Thu, 19 Apr 2001 21:53:41 GMT


"newbie" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> So every bit-string is truly random.
>
> input :    10010101
> ciphertext 10100101
> output :   00110000
>
> this ouput is random?
>
> What is the probability that each of bit-string of size n to be truly
> random?
> I explain
> let the plain-text 0110101
>  ciphertext OTP    1010010
>  output            1101111
>
> is this output random?

<snip>

2^1000 zeroes in a row *could* be random as long as the probability of it
occurring is 2^-1000.  I.e., if you had a 2^10000-bit string you should see
about 1024 runs of either zero or one.

Tom



------------------------------

From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: OTP breaking strategy
Date: Thu, 19 Apr 2001 21:55:04 GMT


"Tom St Denis" <[EMAIL PROTECTED]> wrote in message
news:F7JD6.20$[EMAIL PROTECTED]...
>
> "newbie" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > So every bit-string is truly random.
> >
> > input :    10010101
> > ciphertext 10100101
> > output :   00110000
> >
> > this ouput is random?
> >
> > What is the probability that each of bit-string of size n to be truly
> > random?
> > I explain
> > let the plain-text 0110101
> >  ciphertext OTP    1010010
> >  output            1101111
> >
> > is this output random?
>
> <snip>
>
> 2^1000 zeroes in a row *could* be random as long as the probability of it
> occurring is 2^-1000.  I.e., if you had a 2^10000-bit string you should see
> about 1024 runs of either zero or one.

er that's not quite right but the idea is valid...

Tom



------------------------------

From: [EMAIL PROTECTED] (Joe H Acker)
Subject: Re: OTP breaking strategy
Date: Thu, 19 Apr 2001 23:49:24 +0200

newbie <[EMAIL PROTECTED]> wrote:

<snip>

Perhaps you're confused because you don't understand why the ciphertext
of the OTP is random. This is a fact that *appears* to be somewhat
counter-intuitive, but it's a fact nonetheless. ("Random" is the wrong word
though...that's why people who are more precise talk about highest
entropy instead.)

If the OTP preserved word boundaries, the ciphertext would reveal a
lot of information, but of course it would be silly to use an OTP in
such a way. So there are no word boundaries, just random letters of the
alphabet (e.g., all ASCII characters).

Now as an example, assume as an attacker that the first word is "dear".
You XOR "dear" with the first letters of the ciphertext and the result
is a random sequence. Now take any other 4-letter sequence that seems to
be a reasonable assumption: the result of XORing with the ciphertext
will be a random sequence. Since there's no bias and you don't know the
key, you cannot decide which assumption was the right one. Of course you
might have good reasons to assume that the first word was "dear", but
fiddling around with the ciphertext of the OTP will not give you any
better reasons to believe so.  
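
(A tiny sketch of that point: XORing any plausible 4-letter guess against
the same ciphertext always yields some candidate key, and nothing
distinguishes the right one; the pad and the guesses below are made up for
illustration.)

# Sketch: every plaintext guess is consistent with SOME key, so the
# ciphertext alone cannot confirm any of them.
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(4)                  # the (unknown) one-time pad
ciphertext = xor(b"dear", pad)       # what the attacker actually sees

for guess in (b"dear", b"from", b"memo", b"sirs"):
    candidate_key = xor(ciphertext, guess)
    # each candidate key "works"; none is distinguishable from random
    assert xor(ciphertext, candidate_key) == guess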

Even worse, suppose you *know* that the first 4 letters are "dear". Will
that help you in any way to find out other plaintext? No. Because the
key by definition is purely random, and that means that no subsequent
bits of the key sequence are dependent on the previous ones. Of course,
if the first word is "dear", the second one is likely to be a proper
name or a title. But which one? Or suppose you know the word "shares" is
somewhere in the ciphertext. There are indeed more or less likely
positions where this word can occur, because the OTP does not diffuse the
bits of "shares" over the ciphertext. But this won't help you to find
out where exactly the word "shares" is enciphered. Of course, you may
assume further that "the " might occur before "shares", or that " of" or
" and" or " at" etc. occur afterwards. But you have no means to prove
your assumptions without knowing at least parts of the key.

In the end, you will have as many more or less probable solutions as the
linguistic rules of the language and your assumptions have suggested to
you---but you will not know anything more than you have already assumed
before.  

Don't you think so?

Regards,

Erich

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Current best complexity for factoring?
Date: 19 Apr 2001 21:51:05 GMT

[EMAIL PROTECTED] (Terry Boon) wrote in
<[EMAIL PROTECTED]>: 

>
>>Given this unless someone finds a factoring algorithm that is easier
>>when the primes come after a long stream of composites, there is no
>>additional risk.
>
>This is what I suspect.  I would find it curious and surprising to
>find a factoring algorithm that had this property.
>

   You can bet the NSA has devoted a great deal of research to
taking advantage of this flaw in the way primes are picked.
But I don't think they would tell you how much of an advantage it
is.  Also, many primes come in clumps. You could increment by two, then
take the following prime if it is within a certain distance, just to
hopefully throw off such methods if they exist.

>But for all the trouble that some people seem willing to go to to
>generate random bits, it seemed a little strange to bias the
>subsequent prime generation...
>

    No stranger than using a nonbijective compressor to compress
in PGP before one encrypts. Also no stranger than doing a quick check
using the first few bytes of the encrypted file to see if one has the right
key. All these things are weaknesses that the NSA is sure to exploit.

    If one is making one's own keys, the software should allow for the testing
of a new key instead of the plus-two method. It should also allow one
to pick two primes via an external program and then use them. But it
doesn't; ease of use is what they push, but it concentrates where
an attacker can look for weaknesses.

David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE "OLD VERSIOM"
        http://www.jim.com/jamesd/Kong/scott19u.zip
My website http://members.nbci.com/ecil/index.htm
My crypto code http://radiusnet.net/crypto/archive/scott/
MY Compression Page http://members.nbci.com/ecil/compress.htm
**NOTE FOR EMAIL drop the roman "five" ***
Disclaimer:I am in no way responsible for any of the statements
 made in the above text. For all I know I might be drugged or
 something..
 No I'm not paranoid. You all think I'm paranoid, don't you!


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
