Cryptography-Digest Digest #245, Volume #9       Wed, 17 Mar 99 05:13:03 EST

Contents:
  Re: Variable Cipher ("D")
  Re: DIE HARD and Crypto Grade RNGs. (Patrick Juola)
  Re: Variable Cipher ("Trevor Jackson, III")
  How do you discover the order of an elliptic curve over Zp? ("jason main")
  Re: How do you discover the order of an elliptic curve over Zp? (Paul Rubin)
  Re: Throwing complexity at known PRNGs (revised) (Christopher)
  Re: Finite Field with Hard Division? (Gregory G Rose)
  Proof for complexity of algorithm MPQS (Cryptolab)
  Funny mail? ("Jonas Thörnvall")
  Re: Another extension to CipherSaber ([EMAIL PROTECTED])
  Re: One-Time-Pad program for Win95/98 or DOS ("Johan Hartzenberg")
  Re: Test vectors for RC4 (Mok-Kong Shen)
  Re: CD Cipher (Mok-Kong Shen)
  Re: DIE HARD and Crypto Grade RNGs. (Mok-Kong Shen)
  a5 ("Maciej Maciejonek")

----------------------------------------------------------------------------

From: "D" <[EMAIL PROTECTED]>
Subject: Re: Variable Cipher
Date: Tue, 16 Mar 1999 19:20:35 -0500

  -- This means you really have two fundamental operations: substitute and
transpose (intersperse).  All of the other operations you listed: add, xor,
multiply, and reverse (bakwordize) are trivial forms of a more general
substitution.

  If this is true, then all encryption algorithms have only 2 operations.





------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: DIE HARD and Crypto Grade RNGs.
Date: 16 Mar 1999 14:48:59 -0500

In article <[EMAIL PROTECTED]>,
mok-kong shen  <[EMAIL PROTECTED]> wrote:
>Patrick Juola wrote:
>
>> Actually, this would be quite useful if you could get the amount
>> of theoretical support for this definition that K-complexity has
>> for it.
>> 
>> For example, there are some useful theorems available that state
>> that Kolmogorov complexity is universal (to within a constant)
>> irrespective of the particular implementation of a Turing Machine
>> you use, so, for example, you get the same complexity definitions
>> whether you're using a RAM or a pure Turing machine.
>> 
>> It would be both useful and interesting if you could show that,
>> for example, a particular class of encryption schemes requires the
>> same amount of computational effort to break irrespective of the
>> amount of cyphertext you have available -- or irrespective of
>> whether you've got plaintext/cyphertext pairs.  Or a proof that
>> the solution to *this* encryption scheme is independent of
>> the solution to *that* one.  All of which are available as analogues
>> for Kolmogorov complexity.
>> 
>> It sounds to me like you're whining that a hammer won't paint the
>> walls.  Lots of people like hammers, lots of people use hammers,
>> and if you're trying to paint the wall with a hammer, don't blame
>> the designer of the hammer -- or the designer of the wall, for
>> that matter.  If Kolmogorov complexity doesn't appear to solve the
>> problem you're interested in, then why the hell don't you get a
>> different tool?
>
>The problem I see is the following: One defines a quantity which
>one can never really compute and then theorize in the 'world' in which
>this quantity is (assumed to be) computable/manipulable and then 'map'
>the results thus obtained into our real world. I personally doubt that
>this is a genuinely valid scientific approach.

Depends on whether or not you find a method of manipulating something
close to Kolmogorov complexity -- and of finding something close to
Kolmogorov complexity.  As it happens, we've got something "close to"
Kolmogorov complexity for some applications -- it's called, for instance,
Lempel-Ziv complexity (or maybe Ziv-Lempel).  And if you look at the
theorems closely enough, you can get meaningful (over)estimates of
the Kolmogorov complexity of some sequences generated by some kinds of
sources.
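
For concreteness, here is a bare-bones sketch of an LZ76-style phrase
count in Python (a toy version, not the estimator from any particular
paper):

    import random

    def lz76_phrases(s):
        # Greedy left-to-right parse: each phrase is the shortest
        # prefix of the remaining input not yet seen as a phrase.
        # The phrase count (times a log factor) behaves like an
        # upper estimate of Kolmogorov complexity for sequences
        # from ergodic sources.
        seen = set()
        i, n, count = 0, len(s), 0
        while i < n:
            j = i + 1
            while j <= n and s[i:j] in seen:
                j += 1
            seen.add(s[i:j])
            count += 1
            i = j
        return count

    periodic = "01" * 500
    noisy = "".join(random.choice("01") for _ in range(1000))
    print(lz76_phrases(periodic), lz76_phrases(noisy))  # few vs. many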

If you want an example of that in Real Life, check out my paper in the
most recent Journal of Quantitative Linguistics, or my paper in the
proceedings of CogSci-98, both of which are applications of K-complexity
estimates in AI.  Chater and Hahn have an interesting application in
psychology, published in SimCat 97, IIRC. 

Formally speaking, we can't measure momentum, either -- but we find
our approximations to momentum to be sufficiently good that physicists
and engineers use it in their computations.

As I said, if you don't like what K-complexity does for you, don't use it.
Some of us find it extremely useful in some applications.  

        -kitten

------------------------------

Date: Tue, 16 Mar 1999 21:15:40 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Variable Cipher

D wrote:
> 
>   -- This means you really have two fundamental operations: substitute and
> transpose (intersperse).  All of the other operations you listed: add, xor,
> multiply, and reverse (bakwordize) are trivial forms of a more general
> substitution.
> 
>   If this is true, then all encryption algorithms have only 2 operations.

No, there are four classes of useful cryptographic operations. 
Transposition and translation (substitution) are two of the four. 
Transformation is another (grinding the bits of the plaintext against
each other).  The last category isn't really an operation on the
plaintext, but it can be useful: injecting noise.

The reason I made the original comment is that the operations you
listed are mostly simple forms of substitution.  Consider that addition
can be emulated with a simple substitution table.  So can
multiplication, exclusive-or, rotation, etc.  On a general-purpose ALU
these operations are usually a single instruction, so they are more time
efficient than a table lookup (and much more efficient space-wise, as
they need no table).
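
To make that concrete, a toy sketch in Python (the key byte and the
keying of the shuffle are invented for illustration): the "trivial"
operations collapse into structured tables, while a general
substitution is an arbitrary permutation.

    import random

    K = 0x5A  # illustrative key byte

    # Addition and XOR with a fixed key byte are just 256-entry tables:
    add_table = [(x + K) % 256 for x in range(256)]
    xor_table = [x ^ K for x in range(256)]

    # A general substitution is an arbitrary permutation of 0..255,
    # e.g. a key-dependent shuffle.  There are 256! such tables,
    # against only 256 possible add- or xor-tables.
    rng = random.Random(K)  # toy keying only
    general_table = list(range(256))
    rng.shuffle(general_table)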

However, the actual time spent operating on the elements of the
plaintext is only a part of the overall computation cost.  Using
substitution instead of add/mul/xor/rot increases the time cost a bit,
but it radically increases the "power" of the operation.  If you are
going to mess with a data element, do so in a serious way, not in a
primitive, simplistic way.

------------------------------

From: "jason main" <[EMAIL PROTECTED]>
Subject: How do you discover the order of an elliptic curve over Zp?
Date: 17 Mar 1999 02:33:02 GMT

I need to find out the order of a general elliptic curve over the
integers modulo a large prime.  Any descriptions, or pointers to
documentation, would be helpful.  I've heard that Shanks' method is
used, but all references that I've tracked down say that it is a
discrete log algorithm.  If I had the DL, I would not need the order,
so I am confused.  This is being used to decrypt an ElGamal-style
protocol....


Thanks in advance!!!!
Jason Main

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: How do you discover the order of an elliptic curve over Zp?
Date: Wed, 17 Mar 1999 02:53:09 GMT

In article <01be4fb8$14f37970$d218080f@CC_KXU1>,
jason main <[EMAIL PROTECTED]> wrote:
>I need to find out the order of a general elliptic curve over the
>integers modulo a large prime.  Any descriptions, or pointers to
>documentation, would be helpful.  I've heard that Shanks' method is
>used, but all references that I've tracked down say that it is a
>discrete log algorithm.  If I had the DL, I would not need the order,
>so I am confused.  This is being used to decrypt an ElGamal-style
>protocol....

You are thinking of Schoof's algorithm.  It is complicated.  I don't
even begin to understand it.  The original version is impractical, but
improvements make it usable, I think.  Henri Cohen, "A Course in
Computational Algebraic Number Theory", has some info and references,
but might not be the best place to look.
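
For what it's worth, for toy-sized primes you can get the order by
brute force (a sketch; useless at cryptographic sizes, which is exactly
why Schoof-type algorithms exist):

    def curve_order(a, b, p):
        # Count points on y^2 = x^3 + a*x + b over GF(p), p an odd
        # prime, including the point at infinity.
        sqrt_count = {}  # value -> number of square roots mod p
        for y in range(p):
            sqrt_count[y * y % p] = sqrt_count.get(y * y % p, 0) + 1
        count = 1  # the point at infinity
        for x in range(p):
            rhs = (x * x * x + a * x + b) % p
            count += sqrt_count.get(rhs, 0)
        return count

    # Hasse: the order lies within 2*sqrt(p) of p + 1.
    print(curve_order(2, 3, 97))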


------------------------------

From: [EMAIL PROTECTED] (Christopher)
Subject: Re: Throwing complexity at known PRNGs (revised)
Date: Tue, 16 Mar 1999 21:09:19 -0500


 |:| The idea of using several generators of the form 
 |:| 
 |:|         x[n] = x[n-i] OP x[n-j],
 |:| 
 |:| where OP is one of several operations which make this work, and
 |:| XORing the results, has been suggested for statistical purposes,
 |:| after one was found to have weak statistical properties; this had
 |:| been suspected before.  These are called generalized Tausworthe
 |:| generators; the original Tausworthe, or shift register, generators
 |:| had the x's as single bits, and used XOR.
 |:| 
 |:| I do not know how successful these have been.  For statistical
 |:| purposes, the ones considered have periods far too long to be
 |:| attained, and very long seeds.  If XOR is the operation, and
 |:| the parameters (the i's and j's) are known, the generators can
 |:| be obtained from a series of reasonable length.

Thanks, that gives me something to go on when doing an internet search.
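
In the meantime, here is roughly the shape of such a generator as I
understand it (a toy Python sketch; the lags and word size are made up,
not a vetted parameter set):

    from collections import deque
    from operator import xor

    def lagged_generator(seed, i=24, j=55, op=xor, mask=(1 << 32) - 1):
        # x[n] = x[n-i] OP x[n-j], on 32-bit words.  `seed` must
        # supply at least j words and, for XOR, must not be all
        # zeros, or the generator sticks at zero.
        state = deque(seed[-j:], maxlen=j)
        while True:
            x = op(state[-i], state[-j]) & mask
            state.append(x)
            yield x

    g = lagged_generator(list(range(1, 56)))
    print([next(g) for _ in range(5)])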


------------------------------

From: [EMAIL PROTECTED] (Gregory G Rose)
Subject: Re: Finite Field with Hard Division?
Date: 16 Mar 1999 21:54:25 -0800

In article <[EMAIL PROTECTED]>,
Safuat Hamdy  <[EMAIL PROTECTED]> wrote:
>[EMAIL PROTECTED] (Gregory G Rose) writes:
>> The operative word here is "field". Fields have [...]
>
>Greg, where's the point here?  In a field of size q, q a prime power,
>and x a field element != 0, x^(q-2) *always* gives you the inverse of x.

My point was that I think the original poster
thought that *any* modular arithmetic formed a
field, whether the modulus was prime, or a prime
power, or composite. Perhaps I misinterpreted him.
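
A quick sketch of the distinction (Python's pow does the modular
exponentiation):

    # In a prime field, x^(p-2) really is the inverse (Fermat):
    p, x = 101, 7
    inv = pow(x, p - 2, p)
    assert x * inv % p == 1

    # With a composite modulus there is no field: elements sharing
    # a factor with the modulus have no inverse at all, and the
    # same formula just returns a useless value.
    n, y = 15, 6               # gcd(6, 15) = 3
    cand = pow(y, n - 2, n)    # evaluates to 6
    assert y * cand % n != 1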

Greg.
-- 
Greg Rose                                     INTERNET: [EMAIL PROTECTED]
QUALCOMM Australia        VOICE:  +61-2-9181 4851   FAX: +61-2-9181 5470
Suite 410, Birkenhead Point              http://people.qualcomm.com/ggr/ 
Drummoyne NSW 2047      B5 DF 66 95 89 68 1F C8  EF 29 FA 27 F2 2A 94 8F

------------------------------

From: Cryptolab <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Proof for complexity of algorithm MPQS
Date: Wed, 17 Mar 1999 07:28:38 +0100
Reply-To: [EMAIL PROTECTED]

Hi everyone,

  I am currently studying the MPQS factoring algorithm, and in trying
to prove its complexity I am trying to solve an exercise from Knuth,
"The Art of Computer Programming", Vol. 2, Section 4.5.4, Ex. 30. I
can't understand two points:

  * Why must we cut in two to find the good (e1,...,em)?
  * How can we calculate sum(n_a^2, 0 < a < 2^d - 1)?

  If anyone has the answer to these points, that would be
fantastic.

Thanks.

                MENIER Clement.

E-Mail : [EMAIL PROTECTED]

------------------------------

From: "Jonas Thörnvall" <[EMAIL PROTECTED]>
Subject: Funny mail?
Date: Wed, 10 Mar 1999 20:34:07 +0100

I've got one of those funny mails again; can someone help me?
219,130,185,17,49,74,102,9,80,5,59,154,253,74,93,25,43,29,197,226,
184,138,153,144,113,60,28,157,225,179,49,93,210,15,128,137,17,136,
156,207,97,114,165,87,35,32,102,132,159,14,0,176,200,240,152,42,
191,88,23,107,207,139,218,39,243,150,209,138,88,91,162,25,209,69,
210,201,23,9,153,190,84,133,17,0,171,153,69,25,136,139,185,59,220,
134,41,128,161,243,130,196,203,91,51,153,50,41,169,231,120,43,27,
24,1,74,37,137,248,179,179,139,117,75,248,17,10,147,136,226,96,
159,174,165,153,101,48,173,21,139,131,24,206,27,128,145,187,35,
202,130,105,250,161,28,209,133,16,152,134,241,152,187,201,226,50,
42,61,155,12,74,100,3,146,59,217,18,121,11,80,163,151,20,194,130,
67,255,



------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Another extension to CipherSaber
Date: Wed, 10 Mar 1999 22:37:37 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Paul Rubin) wrote:
> In article <7bnaba$fq2$[EMAIL PROTECTED]>,  <[EMAIL PROTECTED]> wrote:
> >> This doesn't seem too useful; it has about the same effect on the
> >> RC4 internal state as doing a single key initialization and then
> >> running the cipher for a while and discarding some output.
> >
> >Increasing the number of initialization rounds does have about the same
> >effect as discarding the equivalent number of cipher bytes, though the key
> >is mixed in more in the former. It seemed to me that initializing with
> >only 256 operations was a design defect in RC4 and simply increasing the
> >number was a more natural way to correct that defect. Also it is simpler
> >to implement. Since a major goal of CipherSaber is ease of implementation
> >by novice programmers, the added complexity is significant. I have a bunch
> >of e-mail from aspiring CipherKnights to prove it.
>
> We don't know how RC4(tm) initialization as implemented by its
> designers really worked.  The version that's published was reverse
> engineered from applications that used the RSA library but may not
> include the setup procedures.

I think the fact that the reverse-engineered version interoperates with
the RSA version in many applications demonstrates that they are
identical.

> I agree that running the cipher for a
> while is similar to re-running the key setup, though not the same.
> Re-running the key setup with the same original key uses the key closer
> to the final output, which may leak info about it.

I find that argument a bit weak. In standard RC4, why not stop adding in the
key after all the key bytes have been used once? Would that make it stronger?
I think both methods are valid. The one big virtue of discarding bytes is
that one can claim one is using standard RC4. I need a stronger reason than
that to change.
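
To make the two variants concrete, here is a sketch of the published
(alleged) RC4 setup and output loops in Python; `passes` is the
extension discussed above, and discarding is done by pulling bytes and
throwing them away:

    def ksa(key, passes=1):
        # Key setup; passes > 1 re-runs the mixing loop with the key.
        S = list(range(256))
        j = 0
        for _ in range(passes):
            for i in range(256):
                j = (j + S[i] + key[i % len(key)]) % 256
                S[i], S[j] = S[j], S[i]
        return S

    def prga(S):
        # Output generator.
        i = j = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            yield S[(S[i] + S[j]) % 256]

    key = b"example key"           # illustrative key only
    g1 = prga(ksa(key, passes=5))  # variant 1: extra setup passes
    g2 = prga(ksa(key))            # variant 2: discard early output
    for _ in range(256):
        next(g2)                   # throw away the first 256 bytes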

>
> Also, I can't comprehend why anyone thinks re-running the key
> initialization is simpler.  Both are trivial.  You already have to be
> able to do both the key setup and output generation; so throwing away
> N bytes of output (usually N=256) is no more complicated than running
> the setup N times.
>

I don't want to quibble, but what is trivial for an experienced programmer is
another hurdle for a novice.

I do plan to add something to the page about hex as the standard for
ASCII armor.

Arnold Reinhold

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: "Johan Hartzenberg" <[EMAIL PROTECTED]>
Crossposted-To: alt.security,alt.privacy
Subject: Re: One-Time-Pad program for Win95/98 or DOS
Date: Wed, 17 Mar 1999 09:13:02 +0200

Hi

I'm probably going to put my foot in it, but that's the way I learn.


>+++++
>A true random number is one that is produced by a process that is
>capable of generating all finite sequences equiprobably. That process
>is called a True Random Number Generator (TRNG) and can never be the
>result of algorithmic calculation. Equiprobable means independent and
>equidistributed.
>+++++


>which is not good enough for provably secure cryptosystems such as
>the OTP if you intend to have high message volume.


What you need for a totally secure OTP is a 100% unpredictable block of
data.

Equiprobability, by your above definition, requires equidistribution.
You can have unpredictable sequences of numbers where some numbers
occur less commonly than others.

For example

1 2 1 1 3 2 3 4 3 2 4 1 2 1 9 1 2 1 2 3

The sequence has a greater tendency toward numbers closer to 1 than
numbers closer to 9 (the range being 1 to 9 in the above example).

There are infinitely many possible combinations for such sequences of
numbers, despite the fact that some numbers occur less often than
others.

If your definition of randomness used the classical one, an
unpredictable sequence of numbers, then it doesn't need to be equally
distributed.

>Statistical tests cannot test for all forms of bias and correlation,
>so at best they can only test for the *appearance* of randomness,
>which is not good enough for provably secure cryptosystems such as
>the OTP if you intend to have high message volume.

As such, statistical analysis becomes useless in testing for
randomness.  It cannot prove it, or even test for it.
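
A trivial illustration in Python: a plain counter is perfectly
equidistributed, so a bare frequency count "passes" it, even though
every digit is completely predictable.

    from collections import Counter

    seq = [i % 10 for i in range(100000)]  # utterly predictable
    print(Counter(seq))  # every digit appears exactly 10000 times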

  _Johan




------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Test vectors for RC4
Date: Wed, 17 Mar 1999 09:53:44 +0100

[EMAIL PROTECTED] wrote:

> Well if there is a 1:1 ratio, and you have a 128-bit key, there are so
> many valid keys that you can't even imagine trying to crack it.  That's
> if no one has found a weakness in RC4.

To have a large key space is good, but a huge key space by itself
(alone) is no indication of the quality of an algorithm. The classical
monoalphabetic cipher has 26! possible keys, yet it is highly
susceptible to analysis. Only if the algorithm is such that the analyst
has to brute force does the size of the key space become critical.
That's why Wassenaar sets an upper limit of 56 bits. One can
nonetheless achieve security under such a limitation by increasing the
processing time such that the time required for brute-forcing becomes
infeasible. (It appears best to increase only the algorithm setup time,
so that long messages do not suffer a proportional increase in total
processing time. See my design of WEAK3-EX.) I proposed this as a new
paradigm: Security through Inefficiency.
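
For scale (a quick check in Python): the monoalphabetic key space above
is roughly 88 bits, yet letter-frequency analysis breaks the cipher
easily.

    import math

    print(math.factorial(26))             # 403291461126605635584000000
    print(math.log2(math.factorial(26)))  # about 88.4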

M. K. Shen

======================================================
M. K. Shen, Postfach 340238, D-80099 Muenchen, Germany   (permanent) 
http://www.stud.uni-muenchen.de/~mok-kong.shen/ (Updated: 12 Mar 99)    
(Origin site of WEAK2-EX, WEAK3-EX and WEAK4-EX, three Wassenaar-conform
 algorithms based on the new paradigm Security through Inefficiency.
 Containing 2 mathematical problems with rewards totalling US$500.)

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: CD Cipher
Date: Wed, 17 Mar 1999 10:08:28 +0100

R. Knauer wrote:
> 
> On Mon, 15 Mar 1999 13:06:13 GMT, [EMAIL PROTECTED] (Paul
> Onions) wrote:
> 
> >>+++++
> >>It is important that the two block ciphers should have many
> >>similarities, just like "twins". This indicates that using only a
> >>two-key cooperation within one well-designed algorithm seems better,
> >>but in this approach one has to guarantee that the two keys do not
> >>specify the same encryption transformation; otherwise there is no
> >>cooperation within the system.
> >>+++++
> 
> >What?!?
> 
That's what it said. I believe one of the authors came up with the CD
Cipher.

It is not obvious why similarity should be a virtue here. One normally
expects combining two algorithms of quite different natures to be
better. Could anyone say something on this point?

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: DIE HARD and Crypto Grade RNGs.
Date: Wed, 17 Mar 1999 10:18:20 +0100

Patrick Juola wrote:
> 
> In article <[EMAIL PROTECTED]>,
> mok-kong shen  <[EMAIL PROTECTED]> wrote:
>
> >The problem I see is the following: One defines a quantity which
> >one can never really compute and then theorize in the 'world' in which
> >this quantity is (assumed to be) computable/manipulable and then 'map'
> >the results thus obtained into our real world. I personally doubt that
> >this is a genuinely valid scientific approach.
> 
> Depends on whether or not you find a method of manipulating something
> close to Kolmogorov complexity -- and of finding something close to
> Kolmogorov complexity.  As it happens, we've got something "close to"
> Kolmogorov complexity for some applications -- it's called, for instance,
> Lempel-Ziv complexity (or maybe Ziv-Lempel).  And if you look at the
> theorems closely enough, you can get meaningful (over)estimates of
> the Kolmogorov complexity of some sequences generated by some kinds of
> sources.
> 
> If you want an example of that in Real Life, check out my paper in the
> most recent Journal of Quantitative Linguistics, or my paper in the
> proceedings of CogSci-98, both of which are applications of K-complexity
> estimates in AI.  Chater and Hahn have an interesting application in
> psychology, published in SimCat 97, IIRC.
> 
> Formally speaking, we can't measure momentum, either -- but we find
> our approximations to momentum to be sufficiently good that physicists
> and engineers use it in their computations.
> 
> As I said, if you don't like what K-complexity does for you, don't use it.
> Some of us find it extremely useful in some applications.

If you have some quantity that one can at least determine approximately
(to some arbitrarily good degree), even if not 'exactly', like the
momentum you mentioned, then it is practical and certainly o.k. But
there is NO method (algorithm) at all to determine the Kolmogorov
complexity. It is hence a 'fictive' quantity. That's the distinction to
be kept in mind.

M. K. Shen

------------------------------

From: "Maciej Maciejonek" <[EMAIL PROTECTED]>
Subject: a5
Date: Wed, 17 Mar 1999 10:31:40 +0100

Does anybody know any facts about the cipher A5 (also A3, A8)?
I'm interested in the algorithm of this cipher, because I'm considering
implementing it in hardware (Altera environment).



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
