Cryptography-Digest Digest #347


Cryptography-Digest Digest #347, Volume #13  Sun, 17 Dec 00 06:13:01 EST

Contents:
  Re: security by obscurity ... ([EMAIL PROTECTED])
  Re: Nonlinearity question (Benjamin Goldberg)
  Re: Q: Result of an old thread? (Benjamin Goldberg)
  Re: Encryption detail added to cipher page (Jim Gillogly)
  Possibly repeated idea (Benjamin Goldberg)
  Really simple, but hopefully secure, cipher (Benjamin Goldberg)
  Cryptography FAQ (01/10: Overview) ([EMAIL PROTECTED])
  Cryptography FAQ (02/10: Net Etiquette) ([EMAIL PROTECTED])



From: [EMAIL PROTECTED]
Subject: Re: security by obscurity ...
Date: Sun, 17 Dec 2000 04:15:23 GMT

In article 91b8h5$7nl$[EMAIL PROTECTED],
  [EMAIL PROTECTED] (David Wagner) wrote:

Look at FROG if you need an example to make this concrete.  There,
the key determined the connection structure: i.e., the avalanche
properties were key-dependent.  (This was how FROG ensured that each
key created a different-looking algorithm.)  No big surprise, then,
that some keys led to very bad avalanche properties.  Voila!  Weak
keys.

Well put. You yourself discovered that FROG has weak keys (if I recall
correctly, about one in 2^30). It may be true that any FROG-like
cipher that creates a different algorithm for each key will have weak
keys. You can decrease the probability of a weak key to 2^-60 by
concatenating two FROGs, say FROG(key)*FROG(scrambled key), but then
you get a slower cipher.

What I find more interesting is this: the reason that weak keys exist
is that it may be impossible to make all of the different created
algorithms strong. In other words, it will be relatively easy to design
an attack able to break some of the created algorithms, rendering the
corresponding keys weak. For the same reason, though, it may be
extremely difficult to find an attack that breaks all created
algorithms. A traditional cipher with a fixed structure does not enjoy
this advantage: there a break normally means that *all* keys are
rendered weak.

In hindsight FROG can be improved, but only in the sense that weak keys
will become rarer or, in other words, that more analytic work will be
needed to find them. On the other hand, a FROG-like algorithm may be a
very good candidate for super-encipherment because its design principle
is so different. Of the AES candidates only MARS made a clear effort to
mix different kinds of rounds. In applications where speed is not
critical, IMHO super-enciphering AES*FROG will be a better bet than
AES*AES.
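
A minimal Python sketch of the super-encipherment composition described
above. Neither FROG nor AES is implemented here; a toy Feistel
permutation built from SHA-256 stands in for "some block cipher", and
the second key is derived ("scrambled") from the first, purely to
illustrate the FROG(key)*FROG(scrambled key) idea:

import hashlib

def toy_block_encrypt(key, block16):
    # Toy 4-round Feistel on a 16-byte block with SHA-256 as the round
    # function.  A stand-in for "some block cipher"; NOT a real cipher.
    L, R = block16[:8], block16[8:]
    for rnd in range(4):
        f = hashlib.sha256(key + bytes([rnd]) + R).digest()[:8]
        L, R = R, bytes(a ^ b for a, b in zip(L, f))
    return L + R

def cascade_encrypt(key, block16):
    # Super-encipherment E_k2(E_k1(x)), with k2 derived ("scrambled")
    # from k1, mirroring FROG(key)*FROG(scrambled key).
    k2 = hashlib.sha256(b"scramble" + key).digest()
    return toy_block_encrypt(k2, toy_block_encrypt(key, block16))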



Sent via Deja.com
http://www.deja.com/

--

From: Benjamin Goldberg [EMAIL PROTECTED]
Subject: Re: Nonlinearity question
Date: Sun, 17 Dec 2000 05:57:03 GMT

Tom St Denis wrote:
 
 In article [EMAIL PROTECTED],
   Mok-Kong Shen [EMAIL PROTECTED] wrote:
 
 
  Benjamin Goldberg wrote:
  
   It's known that the Rijndael/AES sboxes have good nonlinearity
   properties.  The operations used to create them are fast enough
    that if RAM is in short supply, we don't need to actually use
   lookup tables, but can instead compute the substitution using just
   the poly, more or less.
  
   I am wondering, is it possible to make a transform, similar to how
   AES makes its sboxes, using 128 bit, instead of 8 bit values?
  
   You'd have to find an appropriate order 128 primitive polynomial,
   and you'd need to change the affine function, but it wouldn't be
   too difficult to do.
  
   If this one step has good enough nonlinearity properties, then the
   only other thing we need for our round function is a key add step.
  
   Does this step have any merit, or is it utterly stupid?
 
  My knowledge is certainly very poor in this issue. However,
  I guess the difficulty is a practical one, namely in
  verifying that the S-box obtained is indeed good, which in
  case of 128 bit could presumably take up big resources.
  (Note that there are also other criteria than nonlinearity
  that one would like to check.)
 
 If your 129-bit polynomial is indeed irreducible then inversion has
 KNOWN properties, such as high nonlinearity, a low XOR-profile and a
 low algebraic degree.

Right, which is why I suggested it.  As Mok wrote, the only problem is a
practical one, though certainly not in the way he implied.  Only an
utter madman would try to implement a 128x128 sbox as a lookup table.
Calculating the inverse modulo the polynomial would be done as a
function, *not* a table.  The practical problem is that of speed, not
space.  Inversion doesn't use much memory, but it's not very fast, and
it's possible that there might be timing attacks on it.
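
For concreteness, here is a small Python sketch (not from the original
posts) of computing a Rijndael S-box entry on the fly: inversion in
GF(2^8) modulo x^8+x^4+x^3+x+1 by square-and-multiply, followed by the
fixed affine transform.  The same square-and-multiply idea is what
would scale to GF(2^128) with a degree-128 irreducible polynomial,
whereas a lookup table obviously would not:

def gf_mul(a, b):
    # Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B).
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
    return p

def gf_inv(a):
    if a == 0:
        return 0                      # Rijndael maps 0 to 0
    r, base, e = 1, a, 254            # a^254 = a^(-1) in GF(2^8)
    while e:
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r

def aes_sbox(x):
    # Inversion followed by the fixed affine transform (constant 0x63).
    y = gf_inv(x)
    out = 0
    for i in range(8):
        bit = ((y >> i) ^ (y >> ((i + 4) % 8)) ^ (y >> ((i + 5) % 8))
               ^ (y >> ((i + 6) % 8)) ^ (y >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

assert aes_sbox(0x00) == 0x63 and aes_sbox(0x01) == 0x7C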

-- 
There are three methods for writing code in which no bug can be found:
1) Make the code so straightforward that there are obviously no bugs.
2) Make the code so complicated that there are no obvious bugs.
3) Insist that any apparent bugs were really intentional features.

--

From: Benjamin Goldberg [EMAIL


Cryptography-Digest Digest #347, Volume #12   Thu, 3 Aug 00 08:13:00 EDT

Contents:
  Re: The key schedule in Twofish (=?iso-8859-1?Q?H=E5vard?= Raddum)
  Re: A  new  crypto  algorithm   Wolf_Cub_2 (Runu Knips)
  Re: The key schedule in Twofish ("kihdip")
  Re: Proving continued possession of a file (Mark Wooding)
  Re: Sending Messages in Morse Code (John Savard)
  Re: A non-linear extension of the Hill cipher (John Savard)
  Re: The key schedule in Twofish (=?iso-8859-1?Q?H=E5vard?= Raddum)
  New block cipher for the contest ("Manuel Pancorbo")
  Re: The key schedule in Twofish ("kihdip")
  Re: What is the word on TC5? (tomstd)
  Re: Q: Quantum dots (Tim Tyler)
  Re: New William Friedman Crypto Patent (filed in 1933) (John Savard)
  ANNOUNCE - BUGS v3.2.2 (Sylvain Martinez)
  Re: ANNOUNCE - BUGS v3.2.2 (Sylvain Martinez)
  Re: New William Friedman Crypto Patent (filed in 1933) (jungle)



From: =?iso-8859-1?Q?H=E5vard?= Raddum [EMAIL PROTECTED]
Subject: Re: The key schedule in Twofish
Date: Thu, 03 Aug 2000 10:30:10 +0200

kihdip wrote:

 I have downloaded the Twofish paper (15 June 1998) and the source code from
 Counterpane.
 But I'm not sure if I have misunderstood a vital point in the algorithm:

 In the key schedule (4.3, page 8) the RS matrix is multiplied with 64 bits
 of the key to form 32 bits called S (the third word vector).
 But is this just a multiplication between the RS matrix and M (m8i,
 m8i+1...m8i+7) ???
 I have used the RS_MDS_Encode function from the source code to produce a
 result, and with ordinary arithmetic (and my calculator) I have multiplied
 the two matrices the normal way.

 The problem is: I do not get the same result!!!
 (Of course the 64-bit key "material" was the same.)

 Why is the result not the same whether I use the source code or my
 calculator?  Have I forgotten anything?

 Kim Hyldgaard

Since you say you've used a calculator, have you done the multiplications mod
2^8 or something similar?  The multiplication is done in GF(2^8), generated
by x^8+x^6+x^3+x^2+1.  I haven't seen calculators that do finite field
arithmetic before, but then again I'm not aware of all the different types of
calculators floating around, so I could be missing the target here.
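
To make the point concrete, here is a Python sketch (not from the
Twofish source) of multiplication in GF(2^8) with the RS-stage
reduction polynomial x^8+x^6+x^3+x^2+1.  Each output byte of the RS
multiply is a GF(2^8) dot product of one matrix row with eight key
bytes, which is why a calculator doing ordinary integer arithmetic
gives different answers.  The RS matrix entries themselves should be
taken from the Twofish paper and are not reproduced here:

def gf_mul(a, b):
    # Multiply two bytes in GF(2^8) reduced by x^8 + x^6 + x^3 + x^2 + 1.
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x4D        # low 8 bits of the reduction polynomial 0x14D
    return p

def rs_dot(row, key_bytes):
    # One output byte of the RS multiply: XOR of GF(2^8) products of one
    # matrix row (8 bytes, from the Twofish paper) with 8 key bytes.
    acc = 0
    for r, k in zip(row, key_bytes):
        acc ^= gf_mul(r, k)
    return acc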


--

Date: Thu, 03 Aug 2000 10:46:14 +0200
From: Runu Knips [EMAIL PROTECTED]
Subject: Re: A  new  crypto  algorithm   Wolf_Cub_2

Mark Wooding wrote:
 
 Leonid  Nowikow [EMAIL PROTECTED] wrote:
 
  This algorithm  is  very steady  to  the   chosen-plaintext attack .
 
 [snip binary mush]
 
 Please repost with something readable.
 
 -- [mdw]

First impression: Ouch.

It works on single bit variables and is incomplete (Psi is
some function...).

Translated the .doc to text:

===

The description of the algorithm Wolf_Cub_2.

Leonid Nowikow, Jeroen van de Erve.

June 2000.
___

This algorithm Wolf_Cub_2 is designed for an encryption of any
input file Fi to an output file Fo and also for a restoration
of the starting file.

A function Cub_2 is a first part of the algorithm Wolf_Cub_2
and is designed for the encryption:

   Fo = Cub_2 (Fi, P)

We may suppose that any file and any password are sequences of
bits and here Fi, Fo and P are these three sequences of bits:

the input file Fi =
   { Fi[0], Fi[1], ..., Fi[i], ..., Fi[NF-1] },
 Fi[i] = 0 or 1, i = 0, 1, 2, ..., NF-1;
the output file Fo =
   { Fo[0], Fo[1], ..., Fo[j], ..., Fo[NF-1] },
 Fo[j] = 0 or 1, j = 0, 1, 2, ..., NF-1;
the  password P =
   { P[0], P[1], ..., P[k], ..., P[NP-1] },
 P[k] = 0 or 1, k = 0, 1, 2, ..., NP-1.

An other function Wolf_2 is a second part of the algorithm
Wolf_Cub_2 and is designed for a restoration:

FILE_restored = Wolf_2 (FILE_encrypted, P)

___

The function Cub_2:  Fo = Cub_2 (Fi, P).

Input:
   Fi = { Fi[0], Fi[1], ..., Fi[i], ..., Fi[NF-1] },
   P  = { P[0], P[1], ..., P[k], ..., P[NP-1] }.
Output:
   Fo = { Fo[0], Fo[1], ..., Fo[j], ..., Fo[NF-1] }.


   Ri[0] and Ri[1] are two current points in the file Fi,
   Ro[0] and Ro[1] are two current points in the file Fo,
   i.e. Ri[m] and Ro[m] are integer numbers,
  0 <= Ri[m] < NF,
  0 <= Ro[m] < NF, m = 0 or 1.

   Ei[0] and Ei[1] are corresponding boundaries for these
   Ri[0] and Ri[1], Eo[0] and Eo[1] are corresponding
   boundaries for these Ro[0] and Ro[1],
   i.e. Ei[m] and Eo[m] are integer numbers,
  0 <= Ei[m] < NF,
  0 <= Eo[m] < NF, m = 0 or 1.

The initial values for these current points
Ri[0], Ri[1], Ro[0], Ro[1]
are the following:

Ri0[0] = Psi (P[0], P[1], ..., P[N


Cryptography-Digest Digest #347, Volume #11  Thu, 16 Mar 00 13:13:01 EST

Contents:
  Re: Improvement on Von Neumann compensator? (Tim Tyler)
  Re: US export status of RC4? (Kent Briggs)
  Re: Improvement on Von Neumann compensator? (Tim Tyler)
  Re: Looking for a special one way function ("Reuben Sumner")
  Re: new/old encryption technique ("Reuben Sumner")
  Re: Cellular automata based public key cryptography (Tim Tyler)
  Re: brute force attack on a 128 bit SSL key? (Tim Tyler)
  Re: Really Interested, Where do I look for ... (John Savard)
  Re: Q: Coding of music notes (Stephen Houchen)
  Re: Cellular automata based public key cryptography (James Felling)
  Re: Cellular automata based public key cryptography (James Felling)
  Re: Special One way function (James Felling)
  Re: A weird DES crypt() question (Bill Unruh)



From: Tim Tyler [EMAIL PROTECTED]
Subject: Re: Improvement on Von Neumann compensator?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 16 Mar 2000 15:06:25 GMT

Michael Sierchio [EMAIL PROTECTED] wrote:
: in sci.crypt [EMAIL PROTECTED] (Guy Macon) wrote:

:  [xyz] the way Fourmilab does

: WTF is Fourmilab?  Is that like, an ant farm, or something?
: Mebbe ya mean Fermilab?

No - see http://www.fourmilab.ch/
-- 
__
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

A company is known by the people it keeps.

--

From: Kent Briggs [EMAIL PROTECTED]
Subject: Re: US export status of RC4?
Date: Thu, 16 Mar 2000 15:35:59 GMT

Impervious wrote:

 As far as the exporting regs... It looks like I won't be able to
 export the program without crippling it, so either I have to do all
 the extra work to be sure it stays in the US or I just won't release
 it to the public.

Apply to the BXA to get your application classified as a "retail"
encryption product, and then you can export it at full strength.

--
Kent Briggs, [EMAIL PROTECTED]
Briggs Softworks, http://www.briggsoft.com



--

From: Tim Tyler [EMAIL PROTECTED]
Subject: Re: Improvement on Von Neumann compensator?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 16 Mar 2000 15:17:44 GMT

Guy Macon [EMAIL PROTECTED] wrote:
: In article [EMAIL PROTECTED], [EMAIL PROTECTED] (Scott Nelson) 
:wrote:

:Unless you can absolutely guarantee that the events are
:100% independent, Von Neumann's method won't produce
:perfectly unbiased bits.
:In real systems, you just can't get that.
:That's why it's better to use a hashing function.
:
:SHA1 doesn't care what kind of bias you have; 
:interdependence, sequency, frequency, offset - 
:it doesn't matter.  Other "cheaper" hashing functions
:like CRC or even XOR will do the same, though secure 
:hashing functions tend to be a little better at
:retaining entropy.  Hashing functions also have
:the nice properties that they finish in a deterministic
:amount of time, and you can do away with almost
:all the other processing.

: This makes a lot of sense to me.  I am not well learned in the
: field of crypto, so please forgive me if this is a stupid
: question:  if I have (assuming such exists) a true random bit
: stream, can a SHA1 hash possibly introduce a bias? [...]

Yes.  SHA1 will practically always introduce a bias when used on
a random stream - due to some values of the hash occurring more
frequently over its domain than others.

The longer the stream of random input you "condense" with the hash, the
smaller the bias will be - though it will never become zero.

: I am thinking that making the input and the output of the hash have
: the same number of bits would be necessary but not sufficient.

No finite size is sufficient if you want to retain /all/ the entropy.

If you use 160 bits of input to 160 bits of output, the resulting output
stream is likely to have an entropy of a little under 0.995 bits/bit.

: I know that if I XOR a true random bitstream with any other bitstream
: (except those derived from the true random bitstream), the result
: will be a true random bitstream.  Can this property be proven for
: any cryptologically strong hash functions such as SHA1?   

It can be practically proven false - the hash of a fixed-length finite
random bitstream is never itself random - though it can become /very/
nearly so.
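
For readers following along, a minimal Python sketch of the two
approaches being compared; the representation of bits as a list of 0/1
integers and the use of SHA-1 from hashlib are illustrative assumptions.
The Von Neumann step removes bias only when input bits are independent,
and the hash step can never emit more entropy than its input (or its
160-bit output) contains, which is the point made above:

import hashlib

def von_neumann(bits):
    # Von Neumann compensator: 01 -> 0, 10 -> 1, discard 00 and 11.
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

def condense(bits):
    # "Condense" a long (possibly biased) bit string to 160 bits via SHA-1.
    padded = bits + [0] * (-len(bits) % 8)
    data = bytes(int("".join(map(str, padded[i:i + 8])), 2)
                 for i in range(0, len(padded), 8))
    digest = hashlib.sha1(data).digest()
    return [(byte >> (7 - j)) & 1 for byte in digest for j in range(8)]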
-- 
__
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

When I gave her the ring, she gave me the finger.

--

From: "Reuben Sumner" [EMAIL PROTECTED]
Subject: Re: Looking for a special one way function
Date: Thu, 16 Mar 2000 17:27:49 +0200

Sorta.

Suppose for a fixed k-bit x you want to generate x_0 .. x_{n-1}.
Let f0 and f1 be two one-way functions.  Write i in bits as
i0,i1,i2,...,i{n-1}.  Then let x_i = f_{i{n-1}}(f_{i{n-2}}(...(f_{i0}(x))...)).

I don't recall where this is described.
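
A hedged Python sketch of the construction as read above; the bit order
(least significant first) and the use of SHA-256 with a one-byte domain
separator as the two one-way functions f0, f1 are assumptions for
illustration, not part of the original post:

import hashlib

def f(bit, x):
    # Two fixed one-way functions f0 and f1, modelled here as SHA-256
    # keyed by a one-byte domain separator (illustrative choice only).
    return hashlib.sha256(bytes([bit]) + x).digest()

def derive(x, i, n_bits):
    # x_i = f_{i{n-1}}( ... f_{i1}( f_{i0}(x) ) ... ), applying f0/f1
    # according to the bits of i, least significant bit first.
    y = x
    for k in range(n_bits):
        y = f((i >> k) & 1, y)
    return y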

Reuben

[EMAIL PROTECTED] wrote in message 8aocs6$ql1$[EMA


Cryptography-Digest Digest #347, Volume #10   Fri, 1 Oct 99 17:13:03 EDT

Contents:
  Re: authenticating an executing binary (Tom St Denis)
  Re: Compress before Encryption ("Douglas A. Gwyn")
  Re: EAR Relaxed? Really? ("Trevor Jackson, III")
  Re: Secure hash function question (Tom St Denis)
  Re: crypto export rules changing (Greg)
  Re: NEW DATA SCOTT19U CONTEST ("Trevor Jackson, III")
  Re: EAR Relaxed? Really? ("karl malbrain")
  Re: EAR Relaxed? Really? (Greg)
  Re: NEW DATA SCOTT19U CONTEST (Tom St Denis)
  Re: New Export Regulations (Greg)
  Re: Compress before Encryption (SCOTT19U.ZIP_GUY)



From: Tom St Denis [EMAIL PROTECTED]
Subject: Re: authenticating an executing binary
Date: Fri, 01 Oct 1999 19:11:58 GMT

In article [EMAIL PROTECTED],
  Todd Owen [EMAIL PROTECTED] wrote:
 hi all,

 I'm pretty skeptical about this being possible, but it's an interesting
 concept and I wondered if anybody had any thoughts about it.  The
 hypothetical situation is:

 A comms link is established between a server, S, and a client program,
 C, running on a remote computer.  C is a program (meaning a .exe file,
 for instance) that has been distributed to the user.  The server wants
 to verify that C is genuinely the .exe file that was distributed, down
 to the last byte.

 This does not mean verifying the program's digest, because you can do
 that without the program actually running.  The server wants a guarantee
 that the program, unmodified, is running right at that moment.  Clearly
 this would involve some sort of challenge from the server, or a replay
 attack would be easy.  It sounds like it would involve not just
 cryptography but perhaps also something to do with the platform and the
 OS and the way code executes on it.  C has to generate some response
 which could (god knows how) _only_ be generated by an executing,
 unmodified version of the program.  And best of all, S needs to verify
 this response somehow.

 As I say, it sounds rather tricky.  The only (inadequate) substitute I
 can think of is just making the program as complicated as possible, and
 throwing in some checksums for good measure.  And I know _exactly_ what
 that constitutes - an open challenge to any cracker with some spare time
 on her hands!

 Thanks for getting this far!

If an illegitimate user can get info they shouldn't simply by recoding the
program, you have to rethink your crypto.  What you are aiming at is security
through obscurity, which we all know is not possible.  You should rely on the
user having some private key info THAT THEY WANT TO KEEP.  For example, I
don't want to give out my login/password for my ISP because I want to keep my
access to myself.  If giving out my l/p cost me nothing, I would give it out.
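
A minimal sketch, for illustration only, of the naive challenge-response
the original poster describes (server sends a nonce, client MACs it
under a hash of its own image).  As the reply points out, this only
proves the responder knows the bytes of the file, not that an
unmodified copy is what is actually executing; the function names and
file handling here are hypothetical:

import hashlib, hmac, os

def server_challenge():
    return os.urandom(16)

def client_response(nonce, exe_path):
    # Client hashes its own distributed image and MACs the nonce with it.
    with open(exe_path, "rb") as fh:
        key = hashlib.sha256(fh.read()).digest()
    return hmac.new(key, nonce, hashlib.sha256).digest()

def server_verify(nonce, response, known_image_hash):
    expected = hmac.new(known_image_hash, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)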

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

--

From: "Douglas A. Gwyn" [EMAIL PROTECTED]
Subject: Re: Compress before Encryption
Date: Fri, 01 Oct 1999 19:24:11 GMT

Tim Tyler wrote:
 Indeed.  However, is precompression that fails to have error correction,
 and consequently offers clues to the attacker that they have found a
 correct key, mentioned?

Why should it be?  Obviously, once one has the correct key,
the original plaintext can be reproduced.  (Decompression
is part of the "general system", which we assume is known.)

Data compression in general doesn't incorporate any error
correction; that can be layered on separately in the comm
system if desired, but is irrelevant to compression.

 : The term "one to one" has a technical meaning different from the way
 : D.Scott uses the term, so it is not used that way in the literature.
 It seems to me that he is using the term in the same way that everyone
 else uses it.  What is the "technical meaning" of which you speak?

"One to one mapping" (injection) is a standard mathematical term,
typically taught in high school or even earlier.  It means that
the mapping takes distinct elements in the domain to distinct
elements in the range.  Thus, a "one to one compression" would
merely produce different compressed outputs for different
inputs.  But *all* lossless compression schemes have
this property.  D.Scott seems to want to describe a different
property than this, apparently that the mapping is "onto"
(surjective), although it isn't clear what the range set is.
(If the domain is V^n for some n, the range cannot be V^n if
there is *compression*, and if *any* input is truly compressed,
then *some* input must be *expanded*.)
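
The counting argument in the last parenthesis can be made concrete in a
few lines of Python (a sketch, not part of the original post): there are
2^n strings of length exactly n but only 2^n - 1 strings of all shorter
lengths combined, so an injective map cannot shorten every input.

n = 4
shorter = sum(2**k for k in range(n))   # strings of length 0 .. n-1
exactly_n = 2**n                        # strings of length exactly n
print(shorter, exactly_n)               # 15 16: one pigeonhole short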

 : I think the plain fact is that the small amount of structure
 : from the compression header is generally not considered sufficient
 : to exploit, given that a cipher is already infeasible to brute-force
 : even in a known-plaintext attack.
 It's trivially true that if your cypher is already totally se