Cryptography-Digest Digest #148, Volume #12       Sun, 2 Jul 00 15:13:01 EDT

Contents:
  Re: Diffie Hellman Primes : Speed Tradeoff Q (Wei Dai)
  Re: sliding window exp.:CLNW vs. VLNW (Wei Dai)
  Re: very large primes (William Rowden)
  #sci.crypt moved to Dalnet (Simon Johnson)
  Re: Newbie question about factoring (Simon Johnson)
  Re: How Uncertain? (Tim Tyler)
  Re: Compression & Encryption in FISHYLAND (Tim Tyler)
  Mail servers ("knoest")
  Tying Up Lost Ends III (SCOTT19U.ZIP_GUY)
  source code for a basic cryptography programm (Reiter Tommi)
  Re: Newbie question about factoring (Dido Sevilla)
  Re: Comment on version 3. [Section 3.2] (Mark Wooding)
  Reward increased to $400 ([EMAIL PROTECTED])
  Re: Blowfish for signatures? (Mark Wooding)

----------------------------------------------------------------------------

From: Wei Dai <[EMAIL PROTECTED]>
Subject: Re: Diffie Hellman Primes : Speed Tradeoff Q
Date: Sat, 1 Jul 2000 22:48:44 -0700

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> Bob Silverman <[EMAIL PROTECTED]> wrote:
> 
> > False.  Generating a 1024 bit prime should take a few tenths of a
> > second AT MOST if done correctly.
> 
> Not many people doing it correctly, then!  This may be true if you can
> do hundreds of modexps per second, but mortals like me need special
> hardware for that.

My code seems to average about one second to generate a 1024 bit prime, 
so tenths of a second seems plausible. I'm not sure about the "AT 
MOST" part though. Sometimes you're unlucky and it takes much longer 
than average to find a prime. (BTW I assume we're talking about primes 
that don't have any special properties. DH protocols sometimes specify 
strong primes, which would take much longer to generate.)
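
For concreteness, a minimal sketch of the usual approach (pick random
odd candidates and Miller-Rabin test until one passes) -- 64-bit here
so it stands alone; a real 1024-bit generator would do the same thing
with a bignum type such as Crypto++'s Integer, and its running time is
dominated by the modexps inside the test:

  #include <cstdint>
  #include <iostream>
  #include <random>

  // modular multiply/exponentiate without overflow
  static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
      return (uint64_t)((unsigned __int128)a * b % m);
  }
  static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
      uint64_t r = 1; b %= m;
      for (; e; e >>= 1) { if (e & 1) r = mulmod(r, b, m); b = mulmod(b, b, m); }
      return r;
  }

  // Miller-Rabin; these fixed bases are deterministic for 64-bit n
  static bool is_prime(uint64_t n) {
      if (n < 2) return false;
      for (uint64_t p : {2ull,3ull,5ull,7ull,11ull,13ull,17ull,19ull,23ull,29ull,31ull,37ull})
          if (n % p == 0) return n == p;
      uint64_t d = n - 1; int s = 0;
      while ((d & 1) == 0) { d >>= 1; ++s; }
      for (uint64_t a : {2ull,3ull,5ull,7ull,11ull,13ull,17ull,19ull,23ull,29ull,31ull,37ull}) {
          uint64_t x = powmod(a, d, n);
          if (x == 1 || x == n - 1) continue;
          bool witness = true;
          for (int i = 1; i < s; ++i) {
              x = mulmod(x, x, n);
              if (x == n - 1) { witness = false; break; }
          }
          if (witness) return false;
      }
      return true;
  }

  int main() {
      std::mt19937_64 rng(std::random_device{}());
      uint64_t n;
      do { n = rng() | 1 | (1ull << 63); } while (!is_prime(n));  // odd, top bit set
      std::cout << n << "\n";
  }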

-- 
cryptopp.com - a free C++ class library of cryptographic schemes

------------------------------

From: Wei Dai <[EMAIL PROTECTED]>
Subject: Re: sliding window exp.:CLNW vs. VLNW
Date: Sat, 1 Jul 2000 22:59:05 -0700

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> Do these go from most significant to least, or the other way around?

They come in pairs, one for each direction. It's a consequence of the 
duality principle of addition chains. See 4.6.3 of Knuth's 
Seminumerical Algorithms for more details.
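
For anyone following along, a bare-bones sketch of left-to-right
sliding-window exponentiation (windows of at most k bits, each ending
in a 1 bit); the CLNW/VLNW strategies discussed in the thread are
refinements of how those windows are partitioned. 64-bit arithmetic
only, so the example is self-contained rather than library code:

  #include <algorithm>
  #include <cstdint>
  #include <iostream>
  #include <vector>

  static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
      return (uint64_t)((unsigned __int128)a * b % m);
  }

  // precompute g^1, g^3, ..., g^(2^k - 1), then scan e from the top bit down
  static uint64_t powmod_window(uint64_t g, uint64_t e, uint64_t m, int k) {
      std::vector<uint64_t> odd(1u << (k - 1));
      uint64_t g2 = mulmod(g, g, m);
      odd[0] = g % m;
      for (size_t i = 1; i < odd.size(); ++i)
          odd[i] = mulmod(odd[i - 1], g2, m);          // odd[i] = g^(2i+1)

      uint64_t r = 1;
      for (int i = 63; i >= 0; ) {
          if (((e >> i) & 1) == 0) {                   // 0 bit: just square
              r = mulmod(r, r, m);
              --i;
          } else {
              int j = std::max(i - k + 1, 0);          // window is e[i..j]
              while (((e >> j) & 1) == 0) ++j;         // shrink so it ends in 1
              uint64_t w = (e >> j) & ((1ull << (i - j + 1)) - 1);
              for (int s = 0; s <= i - j; ++s)
                  r = mulmod(r, r, m);                 // one squaring per bit
              r = mulmod(r, odd[(w - 1) / 2], m);      // multiply by g^w
              i = j - 1;
          }
      }
      return r;
  }

  int main() {
      // 5^1000003 mod 1000000007, window width 4
      std::cout << powmod_window(5, 1000003, 1000000007ull, 4) << "\n";
  }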

-- 
cryptopp.com - a free C++ class library of cryptographic schemes

------------------------------

From: [EMAIL PROTECTED] (William Rowden)
Subject: Re: very large primes
Date: 2 Jul 2000 05:59:49 GMT

In article <8jlm2i$iav$[EMAIL PROTECTED]>, Zach <[EMAIL PROTECTED]> wrote:
>"Darren New" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]...
>> > is (n!-1) always a prime, and does anyone know of a proof or disproof?
>> I think you're thinking of (n!+1)
>This above idea is actually very close to a proof done by Euclid involving
>the infinitely greater prime number.  What that proof stated was that (n!
>+1) will be prime.  n! creates a number that is divisible by every number up
>to n.

IIRC, the proof used the product of every *prime* up to and including
the supposed "last" prime (it's a proof by contradiction, not
induction).

> Obviously, this is not correct: n = 5, (5!+1) = 121 = 11 * 11,
> thereby not being prime.  But why does the reasoning for it seem
> almost logical?

Because n! is "almost" a prime factorization?

Try 2 * 3 * 5 + 1 = 31, a prime.  (But not the last one, because 2 * 3
* 5 * 7 * 11 * 13 * 17 * 19 * 23 * 29 * 31 + 1 can have no factors
other than itself and 1.  :-])
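
A quick way to see both halves of this at the keyboard: the
product-of-the-first-primes-plus-one numbers are not always prime, but
their smallest factor always exceeds the primes used (which is the
actual content of Euclid's argument). A small check of the first few
cases:

  #include <cstdint>
  #include <iostream>

  int main() {
      const uint64_t primes[] = {2, 3, 5, 7, 11, 13};
      uint64_t n = 1;
      for (uint64_t p : primes) {
          n *= p;                                   // primorial 2*3*...*p
          uint64_t m = n + 1, f = 2;
          while (f * f <= m && m % f != 0) ++f;     // smallest factor by trial division
          if (f * f > m)
              std::cout << m << " is prime\n";
          else
              std::cout << m << " has smallest factor " << f << "\n";
      }
      // 2*3*5*7*11*13 + 1 = 30031 = 59 * 509: composite, but 59 > 13
  }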
-- 
    -William
PGP key: http://www.eskimo.com/~rowdenw/pgp/rowdenw.asc until 2000-08-01
Fingerprint: FB4B E2CD 25AF 95E5 ADBB  DA28 379D 47DB 599E 0B1A

------------------------------

From: Simon Johnson <[EMAIL PROTECTED]>
Subject: #sci.crypt moved to Dalnet
Date: Sun, 02 Jul 2000 10:37:35 GMT



I've moved it to DALnet due to the frequent netsplitting on EFnet.

Hope to see you soon.
--
Hi, I'm the signature virus,
help me spread by copying me into your signature file



------------------------------

From: Simon Johnson <[EMAIL PROTECTED]>
Subject: Re: Newbie question about factoring
Date: Sun, 02 Jul 2000 11:19:15 GMT

In article <8jl6h3$hhc$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Daniel A. Jimenez) wrote:
> In article <8jl2gt$vps$[EMAIL PROTECTED]>,
> Scott Fluhrer <[EMAIL PROTECTED]> wrote:
> >
> >Daniel A. Jimenez <[EMAIL PROTECTED]> wrote in message
> >news:8jl1bm$gm0$[EMAIL PROTECTED]...
> >[snip]
> >> Finding the optimal tour for the Travelling Salesman Problem is
> >> NP-hard, but not NP-complete.
> >Huh?  You appear to have some sort of clue, so maybe you know
> >something I don't.  I was under the impression that the Travelling
> >Salesman Problem was NP-complete, and while "Finding the optimal tour
> >for the Travelling Salesman Problem" is not a decision problem, you
> >can easily use a Travelling Salesman Problem oracle to find the
> >optimal tour in polynomial time.  Or, am I missing something?
>
> As a decision problem, the Travelling Salesman Problem is NP-complete.
> As an optimization problem, it isn't NP-complete, simply because only
> decision problems can be NP-complete.  Both decision and optimization
> TSPs are NP-hard, because someone a long time ago (Knuth, I think)
> decided that NP-hard also includes function problems, not just
> decision problems.  There was a discussion on comp.theory about this
> about a year or so ago; a Deja search might turn up some references.
> It's just a matter of semantics, but it's important to separate these
> things out in order to have meaningful discussions.
>
> Saying "X is NP-hard" means the problem X is at least as hard as
> anything in NP, that is, there is a polynomial-time transformation of
> anything in NP to X.  Saying "X is NP-complete" means that X is in NP
> (and is thus a decision problem) and X is NP-hard.  The comp.theory
> FAQ has more to say about this; it's posted pretty frequently.
> --
> Daniel Jimenez                     [EMAIL PROTECTED]
> "I've so much music in my head" -- Maurice Ravel, shortly before his
> death.
> "                             " -- John Cage
>

I was wondering: what are the exact definitions of NP, NP-complete,
and this NP-hard?

I have a vague idea, but it's probably not correct :)


--
Hi, I'm the signature virus,
help me spread by copying me into your signature file



------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: How Uncertain?
Reply-To: [EMAIL PROTECTED]
Date: Sun, 2 Jul 2000 11:46:12 GMT

Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
: Mark Wooding wrote:

:> Maybe Doug is having a little joke with us about the amount of random
:> rubbish which gets posted to Usenet. ;-)

: No, I'm serious.  One bit per octet is certainly totally entropic
: (parity).  Another bit is nearly totally entropic, since half the
: ASCII range includes almost all characters actually used.  So far
: we have exceeded BS's estimate without even considering redundancy
: in the (English) language.

...but taking account of the redundancy of the English language would
decrease the measure of entropy per byte - not increase it further.

Entropy equates (crudely) to unpredictability.  The redundancy in 
English plaintexts increases the predictability of each letter, once
the preceding text is known.

: Experiments similar to Shannon's show that a very large fraction of
: English discourse is redundant, thus my estimate of at least 6 bits
: out of every 8 altogether.

But this is redundancy - surely the opposite of entropy in this context.
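
The per-octet figure is easy to estimate empirically, at least to
zeroth order. A small sketch that tallies byte frequencies on stdin;
because it ignores the context-dependence described above it gives an
upper bound on the entropy rate -- typically somewhere around 4 to 4.5
bits per byte for plain English text, well above the 1 to 1.5 bits per
character that Shannon-style guessing experiments suggest:

  #include <cmath>
  #include <cstdint>
  #include <cstdio>

  int main() {
      uint64_t count[256] = {0}, total = 0;
      for (int c; (c = std::getchar()) != EOF; ) { ++count[(unsigned char)c]; ++total; }
      if (total == 0) return 0;
      double h = 0.0;
      for (int i = 0; i < 256; ++i) {
          if (!count[i]) continue;
          double p = (double)count[i] / (double)total;
          h -= p * std::log2(p);                     // zeroth-order entropy
      }
      std::printf("%.3f bits per byte over %llu bytes\n", h, (unsigned long long)total);
  }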
-- 
__________  Lotus Artificial Life  http://alife.co.uk/  [EMAIL PROTECTED]
 |im |yler  The Mandala Centre   http://mandala.co.uk/  Goodbye cool world.

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Compression & Encryption in FISHYLAND
Reply-To: [EMAIL PROTECTED]
Date: Sun, 2 Jul 2000 11:59:38 GMT

John Savard <[EMAIL PROTECTED]> wrote:

[Huffman compression]

: However, some of those 256 combinations take place when the
: unconverted message ends in a particular sequence of two bits, like
: 01, and others of them take place when the unconverted message
: (Huffman compressed, but without the additional wrinkle that gets to
: even bytes) ends in a particular sequence of seven bits, like 1011011.

: Although there are 32 times more possible 2156 bit messages than there
: are 2151 bit messages, that does *not* mean that people send 2156 bit
: messages 32 times more often than 2151 bit messages. [...]

Do you think that it follows from this argument that when compressing
totally random files (smaller than some specified size) with David's
compressor, some final bytes are likely to be significantly more probable
than others?

If so, this is a relatively simple thing to check.
-- 
__________  Lotus Artificial Life  http://alife.co.uk/  [EMAIL PROTECTED]
 |im |yler  The Mandala Centre   http://mandala.co.uk/  This tagline no verb.

------------------------------

From: "knoest" <[EMAIL PROTECTED]>
Subject: Mail servers
Date: Sun, 2 Jul 2000 14:44:30 +0200

Perhaps this is a bit off topic here, but I need a few mail servers
that support IP hiding.  Can anybody help me with that?



------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Tying Up Lost Ends III
Date: 2 Jul 2000 13:18:58 GMT

Tying Up Loose Ends MORE



This is based on John Savard's work, but he misses a few minor points.

 In the previous two posts I cleaned up the loose ends in John's work.
In this follow-up I will show that what he did was a waste of time, and
unfortunately it is most likely the result of reading bad BS crypto
books that have led him astray. So one should not directly fault him
for his poor understanding of compression, encryption, and what the
definition of random means.

My files (at least in the finitely odd stage) end in the form
yyyy1zzz, where each z is always zero and the y's are either one or
zero. The point is that the last 1 is the marker in the finitely odd
file for the end of the compression. The number of y's and z's sums to
7, so I am describing the last byte in the finitely odd form of the
file.
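
(A guess at this in code -- not scott19u itself, just the ending rule
as described: append a single 1 bit after the compressed bitstring,
then pad with 0 bits to the next byte boundary, so the last byte has
the yyyy1zzz shape and the final 1 marks where the data stops.)

  #include <cstdint>
  #include <iostream>
  #include <vector>

  // bits: compressed output, most significant bit of each byte first
  std::vector<uint8_t> to_finitely_odd(std::vector<bool> b) {
      b.push_back(true);                                // end-of-data marker
      while (b.size() % 8 != 0) b.push_back(false);     // the z zeros
      std::vector<uint8_t> out(b.size() / 8, 0);
      for (size_t i = 0; i < b.size(); ++i)
          if (b[i]) out[i / 8] |= (uint8_t)(0x80 >> (i % 8));
      return out;
  }

  std::vector<bool> from_finitely_odd(const std::vector<uint8_t>& bytes) {
      std::vector<bool> b;
      for (uint8_t v : bytes)
          for (int i = 7; i >= 0; --i) b.push_back((v >> i) & 1);
      while (!b.empty() && !b.back()) b.pop_back();     // strip the zeros
      if (!b.empty()) b.pop_back();                     // strip the marker 1
      return b;
  }

  int main() {
      std::vector<bool> payload = {true, false, true, true, false};
      std::vector<uint8_t> packed = to_finitely_odd(payload);
      std::cout << std::hex << (int)packed[0] << "\n";             // b4 = 10110100
      std::cout << (from_finitely_odd(packed) == payload) << "\n"; // 1
  }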

John's form is rather exotic and requires random numbers (where the
hell you get them is another story), but my finitely odd files can be
transformed into the form he really intended but has not yet written
code for. John's form covers the last two bytes of a file. It is
NNNqqqqq qqqqqqqq, where the three N's represent the number of random
bits padded at the end of the file, the front part of the q's is
material that results from the compression, and the last part is the
random numbers.

Let's take a look at these two sets of endings in a way that maybe
John can even grasp, or his BS crypto friends if they give it any
thought. But if they are the stuffed-shirt kind of guys with ties,
they may not have sufficient blood flow to the brain. I have two
forms of John's; one has more random numbers. The third line below is
that alternate form; the * means the same file length as mine.

NNN=000
yyyyyyy1
000yyyyy yyrrrrrr
110yyyyy yyrrrrrr

NNN=001
yyyyyy10
001yyyyy yrrrrrrr
111yyyyy yrrrrrrr

NNN=010
yyyyy100
010yyyyy rrrrrrrr
000yyyyy yyyyyyyy *

NNN=011
yyyy1000
011yyyyr rrrrrrrr
001yyyyy yyyyyyyr *

NNN=100
yyy10000
100yyyrr rrrrrrrr
010yyyyy yyyyyyrr *

NNN=101
yy100000
101yyrrr rrrrrrrr
011yyyyy yyyyyrrr *

NNN=110
y1000000
110yrrrr rrrrrrrr
100yyyyy yyyyrrrr *

NNN=111
10000000
111rrrrr rrrrrrrr
101yyyyy yyyrrrrr *

 Above are all eight endings of mine versus a correct formulation of
John's. Suppose mine is weak and that the ending 10000000 is common to
some messages. This may be true, but notice that if one used John's
method to compress the file, one could search for the N being 111 as
easily as for the 10000000; notice, though, that if random the 111
pattern comes up 32 times more often when using a "bad key". What this
means is that the enemy potentially has to check 32 times as many
files for this one case.
  Now if the N is 000 in mine, you are looking at searching for a 1 in
the last bit position of the file, meaning half the files of that
length are candidates, while in his method only one eighth of the
files are candidates, so his is worse by a factor of 4, while in the
first case his was better by a factor of 32. However, in the last case
there are 128 times as many possible files. This is true even if 90%
of the compressed files in use are in some fixed range of 100 to 10000
bytes, because at whatever byte your compression ends you still get
more of the form yyyyyyy1 than of the form 10000000. But John fails to
see this. Half of the files that are X bytes long in the finitely odd
file stage end in yyyyyyy1. So the NSA would have 4 times the
likelihood of breaking the code if you used John's compression, since
half the compressed files of interest would have NNN=000, which is not
good.
 Of course it is far better to encrypt the string of bits not
including the last 1, as this would give the enemy a much harder time.
 As another example to ponder: what happens if the finitely odd file
is mapped to an even number of bytes instead of just a whole number of
bytes? One can easily show that the mapping to pairs of bytes is far
more dangerous to the one hiding information than to the one seeking
it. Here is the reason: if one looks at the second byte of every
possible ending pair, you search the same number of cases for possible
candidates. However, for the first byte of the pair, when you map to a
whole number of single bytes there are always 256 times more possible
files that one could consider, so it is far better to map to whole
bytes than to even pairs of bytes. As the block size of the code gets
larger this effect is more noticeable. This is the downside of using a
large whole number of blocks for encryption and not giving file ends
special consideration.


 So what is to be gained by using John's formulation? The answer,
folks, is nothing. The only thing John's method does is add more
information to the file with the r's. Example: suppose John's random
number generator is built by a company friendly to the NSA and, for
the sake of argument, it spits out 101010101010... continuously. Now
suppose a message is compressed my way and his, and the NSA is trying
to break it and they guess a wrong key, and in my compression the
correct ending just happens to be 10000000 and the NSA knows it from
me leaking that info. Now suppose John compresses the same text and
uses the same encryption program I did with the same key. His correct
compression ends with the r's being 10101010... However, when the
wrong key is used on a test file, my decrypted file has 10000000 as
the last byte. It is highly unlikely that John's r's will be
10101010... for the wrong key, so the NSA would love you to use his
instead of mine.
 But as an old wise whore I once knew at Fran's legal Nevada
whorehouse once said: you can't teach a John anything, so why try.


David A. Scott
-- 
SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
        http://www.jim.com/jamesd/Kong/scott19u.zip
Scott famous encryption website NOT FOR WIMPS **no JavaScript allowed**
        http://members.xoom.com/ecil/index.htm
Scott rejected paper for the ACM
        http://members.xoom.com/ecil/dspaper.htm
Scott famous Compression Page WIMPS allowed ** JavaScript OK**
        http://members.xoom.com/ecil/compress.htm
**NOTE EMAIL address is for SPAMERS***
I leave you with this final thought from President Bill Clinton:
   "The road to tyranny, we must never forget, begins with the destruction 
of the truth." 

------------------------------

From: Reiter Tommi <[EMAIL PROTECTED]>
Subject: source code for a basic cryptography programm
Date: Sun, 02 Jul 2000 15:37:00 +0200

Hi,
I need source code in C++ which translates a text into ASCII codes,
with each ASCII code separated by a period or, better, a semicolon.

Can anybody help me?

CU Tommi
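
A minimal sketch of what seems to be asked for -- read text on stdin
and print the ASCII code of each character, separated by semicolons:

  #include <iostream>
  #include <string>

  int main() {
      std::string line;
      while (std::getline(std::cin, line)) {
          for (std::size_t i = 0; i < line.size(); ++i) {
              std::cout << (int)(unsigned char)line[i];   // the ASCII code
              if (i + 1 < line.size()) std::cout << ';';
          }
          std::cout << '\n';
      }
  }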


------------------------------

From: Dido Sevilla <[EMAIL PROTECTED]>
Crossposted-To: comp.theory
Subject: Re: Newbie question about factoring
Date: Sun, 02 Jul 2000 22:57:09 +0800

Bob Silverman wrote:
> 
> The size of a number IS its number of digits.
> 
> You contradict yourself.

And you're playing semantics.  How big is a number?  It's its
magnitude.  The number of digits is its number of digits.

--
Rafael R. Sevilla <[EMAIL PROTECTED]>         +63 (2)   4342217
ICSM-F Development Team                         +63 (917) 4458925
University of the Philippines Diliman

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Comment on version 3. [Section 3.2]
Date: 2 Jul 2000 16:24:10 GMT

William Rowden <[EMAIL PROTECTED]> wrote:

>    * What are the advantages and disadvantages of these algorithm
> changes?:
>       RSA -> ElGamal

The ciphertext becomes twice as large.  If you're using ElGamal for
session-key negotiation only, consider using offline Diffie-Hellman
rather than ElGamal.

With current algorithms, the difficulty of breaking ElGamal is slightly
higher than RSA for the same key length, since the matrixing step of the
GNFS must be done using modular integer arithmetic rather than in GF(2).

ElGamal (or Diffie-Hellman) is slower.  You can improve matters by using
a smaller subgroup -- this will reduce the size of the exponents, but be
careful to ensure that the values you receive are actually members of
the subgroup.  (This will cost yet more execution time.)  RSA gets away
with (a) smaller exponents for the public key operations and (b) smaller
moduli for private key operations.

I'd be tempted to stick with RSA here, to be honest.

>       PKCS #1 v1.5 -> v2 (OAEP) w/MGF1

This is an excellent decision.  Even were you to choose to remain with
RSA, I'd strongly recommend moving to OAEP rather than staying with
PKCS#1 1.5 packing.

>       MD5 -> SHA-1

This is a good decision too.  MD5 is showing its age far too much to be
used in new protocols, except perhaps as an extra fallback (as it is in
SSL).  You might consider using both MD5 and SHA1 in conjunction.  My
personal preference is for RIPEMD-160 over either, although I don't have
much reason for that.

> Some considerations might be patents (for a little while), message
> size, and known attacks.
>    * Should version 3 consider ECC?  AES candidates?

I find elliptic curves interesting, but I'd be tempted to stick with
old-fashioned RSA and DH in `real' protocols.  I'd consider using
Blowfish (in place of triple-DES) for the speed and possible security
improvement, but I'd be cautious of AES candidates with the possible
exception of Serpent.

>    * Are there better asymmetric-symmetric combinations or hybrid
> cryptosystems?  (dmolnar suggests DHAES or EPOC.)

I see DHAES as basically codifying existing good practice.  It's
basically offline Diffie-Hellman with a symmetric cipher and a MAC.

>    * Are the key sizes still adequate?

Probably.

-- [mdw]

------------------------------

From: [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
Subject: Reward increased to $400
Date: Sun, 02 Jul 2000 11:05:17 -0700

The reward for breaking Semiramis has been increased to $400.

http://CascadeResearch.ebz.com/


------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Blowfish for signatures?
Date: 2 Jul 2000 18:03:59 GMT

Thierry Nouspikel <[EMAIL PROTECTED]> wrote:
> Hi there,
> 
> Forgive me if this is a stupid question:I'm new to cryptography, I just
> made my first steps in the field by implementing Blowfish on my old
> TI-99/4A home computer. 
> 
> My question is: can I use Blowfish to produce a digital signature? I
> mean, the kind of thing that lets you verify that the document you are
> reading is indeed the original, and wasn't doctored in any fashion.

Yes.  Look up `Rabin's One-Time Signature Scheme' in a good crypto book,
such as HAC.  With a symmetric cipher (such as Blowfish), and a hash
function (such as Blowfish in abreast Davies-Meyer, if you've got all
day spare), you can use this cunning signature system which will allow
you to sign one message for each key, which may be verified once.

Neat, huh?
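
To give the flavour of such a scheme, here is a Lamport-style one-time
signature, a close relative of Rabin's: for each message bit the
signer keeps two secrets and publishes their hashes, and signing
reveals one secret of each pair. The hash below is FNV-1a, purely a
placeholder so the example is self-contained; a real implementation
would use a proper cryptographic hash, or a cipher-based construction
like the Davies-Meyer mode mentioned above.

  #include <array>
  #include <cstdint>
  #include <iostream>
  #include <random>

  static uint64_t hash64(uint64_t x) {          // placeholder, NOT cryptographic
      uint64_t h = 1469598103934665603ull;
      for (int i = 0; i < 8; ++i) { h ^= (x >> (8 * i)) & 0xff; h *= 1099511628211ull; }
      return h;
  }

  struct OneTimeKey {
      std::array<std::array<uint64_t, 2>, 64> secret;   // two secrets per message bit
      std::array<std::array<uint64_t, 2>, 64> pub;      // their hashes (published)
  };

  static OneTimeKey keygen(std::mt19937_64& rng) {
      OneTimeKey k;
      for (int i = 0; i < 64; ++i)
          for (int b = 0; b < 2; ++b) {
              k.secret[i][b] = rng();
              k.pub[i][b] = hash64(k.secret[i][b]);
          }
      return k;
  }

  // msg stands in for a 64-bit hash of the document being signed
  static std::array<uint64_t, 64> sign(const OneTimeKey& k, uint64_t msg) {
      std::array<uint64_t, 64> sig;
      for (int i = 0; i < 64; ++i) sig[i] = k.secret[i][(msg >> i) & 1];
      return sig;
  }

  static bool verify(const std::array<std::array<uint64_t, 2>, 64>& pub,
                     uint64_t msg, const std::array<uint64_t, 64>& sig) {
      for (int i = 0; i < 64; ++i)
          if (hash64(sig[i]) != pub[i][(msg >> i) & 1]) return false;
      return true;
  }

  int main() {
      std::mt19937_64 rng(42);
      OneTimeKey k = keygen(rng);
      uint64_t msg = hash64(0x0123456789abcdefull);
      std::array<uint64_t, 64> sig = sign(k, msg);
      std::cout << verify(k.pub, msg, sig)               // 1: genuine
                << " " << verify(k.pub, msg ^ 1, sig)    // 0: doctored message
                << "\n";
  }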

-- [mdw]

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
