Cryptography-Digest Digest #839, Volume #11      Mon, 22 May 00 19:13:01 EDT

Contents:
  Re: 3DES SOURCE (Ichinin)
  Re: Blowfish Claims ... something not right (David A. Wagner)
  Re: Unbreakable encryption. ("Theophilus Samuels")
  Re: First differential attack (Baruch Even)
  Re: Q: Recording on magnetic cards (Mok-Kong Shen)
  Re: Yet another block cipher: Storin (Mok-Kong Shen)
  Re: More on Pi and randomness (Mok-Kong Shen)
  Re: More on Pi and randomness (Clive Tooth)
  Re: Final Comments from Twofish Team (Mok-Kong Shen)
  Re: More on Pi and randomness (Clive Tooth)
  Re: Patent state of Elliptic Curve PK systems? (Roger Schlafly)
  Re: Blowfish Claims ... something not right ("Trevor L. Jackson, III")

----------------------------------------------------------------------------

From: Ichinin <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: 3DES SOURCE
Date: Sun, 21 May 2000 12:48:59 +0200

If this URL isn't in the FAQ already, I suggest someone should
put it there:

ftp://ftp.funet.fi/pub/crypt/cryptography/

Regards,
Glenn

"You can't beat the system, but you can edit the registry"

------------------------------

From: [EMAIL PROTECTED] (David A. Wagner)
Subject: Re: Blowfish Claims ... something not right
Date: 22 May 2000 12:30:00 -0700

In article <[EMAIL PROTECTED]>,
Will Dickson <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] (David A. Wagner) wrote:
> >It's hard for me to imagine a reasonable scenario where we should
> >be worried about exhaustive keysearch attacks on 256-bit keys.  If
> >not for defense against keysearch, what do longer keys buy you?
> 
> Extra resistance to implementational cockups. [...] As many have
> pointed out (and I have recently been discovering), generating entropy
> is rather harder than it's sometimes given credit for.

I suppose I would argue that the natural defense against this failure
mode is to use a better key generator: i.e., generate 448 bits using
the (flawed) generator, hash it down to 256 bits, and then use the
resulting 256-bit key with your favorite AES cipher.
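
For illustration, a minimal Python sketch of that key-derivation idea: draw
more raw material than needed from the (possibly flawed) generator, hash it
down, and use the digest as the key.  SHA-256 and os.urandom are stand-ins
for the unspecified hash and generator.

import hashlib
import os

def derive_key(raw_bits: int = 448) -> bytes:
    # Output of the "flawed" generator -- os.urandom is only a placeholder.
    raw = os.urandom(raw_bits // 8)
    # Hash the 448 raw bits down to a 256-bit key.
    return hashlib.sha256(raw).digest()

key = derive_key()
assert len(key) * 8 == 256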

But I guess you might argue that this is a serious enough failure mode
that you always want to have an extra margin of security against flawed
key generators, and so it's better to build this into the cipher.  Ok,
that seems like a reasonable position.  Point taken.

------------------------------

From: "Theophilus Samuels" <[EMAIL PROTECTED]>
Subject: Re: Unbreakable encryption.
Date: Mon, 22 May 2000 20:57:58 +0100

> Well, if you look at most cells in your body, you notice that they have
> a cell wall, and inside contains the DNA and other goodies. Things are
> passed to the cell through the cell wall.  This is very similar to
> passing variables to a component through an
> interface/function.

  My dear fellow, the cells in the body that I and other members of this
group possess have cell 'membranes'. I'm afraid the cells of plants are
the ones with cell walls, and therefore, perhaps your 'published'
inconsistencies could be explained by the fact that you are not of the
species Homo sapiens!
  Best wishes,
T.L.S.
<[EMAIL PROTECTED]> wrote in message news:8fnu8o$2ai$[EMAIL PROTECTED]...
>
>
> Thank you cow, you have given me appropriate publicity.
> Here is more...
>
> A New language...
>
>
> The evolution of languages started first with load/store primitive
> operations for the CPU.  This led to assembly, which was basically a
> labeling system for the actual numbers the CPU understands.  When you
> have labels, people created an abstract macro layer above that so that
> it produced multiple assembly labels to be compiled into machine code.
> This is basically what Basic, Fortran, and COBOL did.  But as the
> languages became longer and longer, people started inventing ways to
> package some of the often used routines, and created the concept of a
> method (java), function (c), procedure (pascal), subroutine (basic and
> perl).  They are basically a way to store often used instructions for
> the CPU in a nice place (label it), and call them anytime you want.  But
> since these functions usually need information passed into them to work,
> pass by reference and pass by value became the norm, and were
> incorporated into the language.  Usually
> when a function is called, the calling function stores the passed
> values on a stack (a separate place in memory).  The called function
> then takes these values from the stack and diddles with them.  When it
> is done, it usually returns one value (either a reference or a value) by
> placing it back on the stack, and the caller then takes it off of the
> stack.  To make it consistent, and less error prone, some languages
> invented the concept of a prototype, which basically says: this
> function can only take in 4 passed parameters of type integer, and
> return type char, etc.  Up to this point you have C and Pascal.
>
> Then someone comes along and says, hey, why not package multiple
> functions and data into a structure.  And that is how classes got
> started (which led to object oriented languages like C++ and java).
> Now when you are calling a function, instead of "eat food", you say
> "tom, eat food"; you have to say which structure/class should do it.
> Of course tom would have to know how to eat and what food to eat.
> That is why Tom needs a function inside called eat, and you need to
> pass him food as a parameter.
>
> Well this is fine, but each time you turn on the computer, the classes
> disappear (well, at least the data stored inside of them).  So people
> invented components that sort of made classes a little smarter.
> Components are basically classes that (usually, not all) contain
> serialization and function query.  Serialization means a component can
> be stored to the hard drive and back and keep its data intact.
> Function query (my own term) means you can
> ask a component what functions/methods it supports.  Some people group
> the functions into ports and some group them into interfaces.  These
> are basically another layer above the functions that you use to
> actually execute a function inside a component.  COM, Corba, and
> JavaBeans are like these.  (I did some things on Ports, like smart
> ports, which automatically do introspection (check parameter types)
> and automatically link methods between components if they match.)  I
> actually created a new component architecture before (well, contributed
> to it), but it wasn't truly a new language (you still had to program in
> java to create the component pieces to play with).
>
> Well, Perl is something new to me.  It has some old vestiges of
> assembly-type language structures.  For example, a called function
> manually pops its arguments off of the stack (well, indirectly), as
> opposed to their being bound to variables named in the function
> definition.  It can also return multiple values (interesting).
>
> Well, if you look at most cells in your body, you notice that they have
> a cell wall, and inside contains the DNA and other goodies. Things are
> passed to the cell through the cell wall.  This is very similar to
> passing variables to a component through an
> interface/function.
>
> This new language would have components with no constraints on the
> values passed to them.
> As for the values: instead of being dedicated to a method in a
> component/class/function, they would be placed on a bloodstream/"bus".
> Cells/components would grab them when they have appropriate functions
> that can use them.
>
> And of course there is the concept of dynamic operators.  Operators are
> basically built up from other components.
>
>
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.



------------------------------

From: Baruch Even <[EMAIL PROTECTED]>
Subject: Re: First differential attack
Date: Mon, 22 May 2000 23:27:48 +0200

I've now read your cipher's source code, and these are my observations on it
with regard to differential cryptanalysis.

Let us look at additive differences rather than xor differences, as they
are more suitable in the context of this cipher (it uses only additions in
the encryption). I will also write the inputs as (a,b), where each is a
16-bit word and together they form the plaintext split into left and
right halves.

The cipher can be classified as a Feistel network cipher with a 32-bit block.
It uses addition in place of xor, and thus its overall round structure is:
a = a + F(b,r)

Its F function is F(b,r) = Sbox[g(b,r) % 256],
where g(b,r) = (Sbox[256+r] + b) % 257

If two plaintexts have an input difference of (0, 257), it means that b = b' + 257.
Obviously, in the g function two inputs with this difference go to the same
output, and thus this input difference causes a zero difference in the output
of F, so we get a differential characteristic of the form:
(0,257) -> (0,0) with probability 1.

Also, you can see that there is a two-round differential characteristic of
the form:
(257, 257) -> (257, 257) with probability 1.
The two-round characteristic is iterative (it can be concatenated with itself).

Note that the numbers are differences and not the actual numbers that 
go into the encryption itself.
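
For illustration, a minimal Python sketch of that probability-1 differential
through F (the S-box contents are random stand-ins, since the argument does
not depend on them, and the posted g formula is read as reduced mod 257):

import random

SBOX = [random.randrange(1 << 16) for _ in range(263)]  # hypothetical contents

def g(b, r):
    return (SBOX[256 + r] + b) % 257

def F(b, r):
    return SBOX[g(b, r) % 256]

# Any pair of right halves differing by 257 (without wrapping past 2^16)
# collapses to the same g() value, hence the same F() output, for every
# round index r -- the zero output difference claimed above.
for r in range(7):                        # entries 256..262 give r = 0..6
    for _ in range(1000):
        b = random.randrange((1 << 16) - 257)
        assert F(b, r) == F(b + 257, r)
print("F output difference is 0 for every tested pair with input difference 257")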

In order to attack the cipher, you then need two chosen plaintexts.

The attack:
1. Choose x and y, two 16-bit words.
2. Ask for the encryptions of (x+257, y+257) and (x, y).
3. Using the iterative characteristic, extend it to 15 rounds.
4. Try decrypting one round by guessing the key; the key is
composed of the S-box entry in [0..255] (32 bits) and the 8 bits
of the Sr [256..262], a total of 40 bits. Check that the resulting
difference of the decryption is 257, as predicted by the differential.

From here on you need to calculate the probability that a wrong guess
will give you the right difference, and from this find how many chosen
plaintext pairs you really need to find the right keys.  I won't do it, due
to lack of time (and of interest; this cipher is very simple, as shown
by a probability-1 non-trivial (i.e. non-zero) characteristic).

P.S. You have a bug in the key schedule: you need to zero out the 16
words of the round sbox (entries 256 to 262).

P.P.S. I suggest you recheck my logic above; I'm no expert myself.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Recording on magnetic cards
Date: Mon, 22 May 2000 23:22:13 +0200



Francois Grieu wrote:

> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
>
> > Except the part covered by the black stripe, which I couldn't look
> > through, there was no indication of presence of a chip.
>
> Strange. I am unaware of any technology that can read (much less write)
> a magnetic stripe at more than 0.1 mm distance. And the only
> technologies I know that can read and write more than a few bits
> into a card at a distance all use an IC in the card.
>
> > One acts normally in economical ways. What isn't needed is not done.
> > Why should the designer take the trouble to hide the chip?
>
> When manufacturing a contactless IC card, the most economical process
> is to assemble the IC and coil, then laminate these between two plastic
> foils. This, as a side effect, hides the IC. It is more costly to
> manufacture a contactless IC card that also has contacts: the contacts
> have a cost, they need to be precisely located, and it is hard to have
> them go through one side of the card.
>
> BTW, even in contact IC cards, the IC is hidden behind the metal that is
> between the contacts.

I'll see whether I can get some information by contacting the
manufacturer. If I have luck, I'll report back.

M. K. Shen


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Yet another block cipher: Storin
Date: Mon, 22 May 2000 23:22:29 +0200



Mark Wooding wrote:

> My objective is to test the strength of the matrix multiplication as a
> primitive.  If it looks good, I may try building a faster stream cipher

You are using Hill's method. Am I right?

M. K. Shen


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: More on Pi and randomness
Date: Mon, 22 May 2000 23:28:47 +0200



Clive Tooth wrote:

> Go on until the quantity we are adding to T is less than 16^-3 (a few more
> terms may be required). In general we will need to take a few more than N
> terms (about 14 in this case).

Does N here mean one is interested in the N-th hex digit of Pi? If so, for
very large N one would have to compute very many terms and the
rounding errors would be considerable. Very probably I have missed
something.

M. K. Shen


------------------------------

From: Clive Tooth <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: More on Pi and randomness
Date: Mon, 22 May 2000 22:40:40 +0100

Mok-Kong Shen wrote:

> Clive Tooth wrote:
> 
> > Consider a constant, S, defined by a series of the form
> >
> > S = sigma(k=0, +infinity)[1/((b^(c*k))*p(k))]
> >
> > where b and c are positive integers, and p(k) is a polynomial with
> > integer coefficients. Note that the digits in the base b expansion of S,
> > beginning at position n+1, can be obtained from the fractional part of
> > S*b^n. So:
> >
> > S*b^n mod 1 = sigma(k=0, +infinity)[b^(n-c*k)/p(k) mod 1]
> >
> >  = sigma(k=0, floor(n/c))[(b^(n-c*k) mod p(k))/p(k) mod 1] +
> >    sigma(k=floor(n/c)+1, +infinity)[b^(n-c*k)/p(k) mod 1]
> >
> > For each term of the first summation, the binary exponentiation scheme
> > is used to evaluate the numerator. Then floating point arithmetic is
> > used to perform the division and add the result to the sum mod 1. The
> 
> A question of ignorance: How about the rounding errors in these divisions?

As in any high precision computation an analysis of the various sources
of error must be done. It is possible to analyze the worst-case effect
of these floating point errors, particularly if the floating point
arithmetic is known to be IEEE compliant.


> > second summation, where the exponent of b is negative, may be evaluated,
> > as written, using floating point arithmetic. It is only necessary to
> > compute a few terms of this second summation, just enough to ensure that
> > the remaining terms sum to less than the "epsilon" of the floating point
> > arithmetic being used. The final result, a fraction between 0 and 1, is
> > then converted to the desired base b.
> 
> Is the 'a few terms ..... just enough ...' somehow precisely determined
> by a formula for estimation, or is it done heuristically?

It is possible to precisely determine the number of terms required. See
"On the Rapid Computation of Various Polylogarithmic Constants" by D H
Bailey, P B Borwein and S Plouffe, Mathematics of Computation 66 (1997),
903-913 for further discussion of the computational details.

However, in these computations several hex digits are computed in one
go. For example, a computation may yield hex digits 1,000,000 to
1,000,010 of pi. Typically a second computation yielding digits 999,999
to 1,000,009 will also be performed. This is a good check, since the
arithmetic operations are totally different between the two runs. If
agreement is reached then digits 1,000,000 to 1,000,009 may be
considered to have been determined, subject to the usual reservations
about strings of trailing F's or 0's.
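
For illustration, a minimal Python sketch of the digit-extraction scheme
described above, using the standard BBP series for pi (so b=16, c=1 and
p(k) = 8k+j for j in {1,4,5,6}); it works in plain double precision, so it
is only trustworthy for modest n:

from math import floor

def series(j, n):
    # frac( sum_k 16^(n-k) / (8k+j) ), split exactly as in the post.
    s = 0.0
    # Terms with non-negative exponent: binary modular exponentiation.
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    # A few terms with negative exponent, until they drop below double epsilon.
    k = n + 1
    while True:
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digits(n, count=8):
    # Hex digits of pi starting at position n+1 after the point.
    x = (4 * series(1, n) - 2 * series(4, n) - series(5, n) - series(6, n)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        d = int(floor(x))
        digits += "0123456789ABCDEF"[d]
        x -= d
    return digits

print(pi_hex_digits(0))        # 243F6A88 -- pi = 3.243F6A88... in hex
print(pi_hex_digits(1000000))  # a run of digits near position one million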

-- 
 
Clive Tooth
http://www.pisquaredoversix.force9.co.uk/
End of document

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Final Comments from Twofish Team
Date: Mon, 22 May 2000 23:38:02 +0200



Runu Knips wrote:

> It is however funny, that for any of the five AES
> candidates, there seem to be people which fight for
> it in this NG. Even Mars still has its fans :))

Cryptology is (yet) not comparable to mathematics. One can
hardly entirely eliminate subjectivity in evaluations, I believe.

M. K. Shen



------------------------------

From: Clive Tooth <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: More on Pi and randomness
Date: Mon, 22 May 2000 23:17:10 +0100

Mok-Kong Shen wrote:

> Clive Tooth wrote:
> 
> > Go on until the quantity we are adding to T is less than 16^-3 (a few more
> > terms may be required). In general we will need to take a few more than N
> > terms (about 14 in this case).
> 
> Does N here mean one is interested in the N-th hex digit of Pi? If so, for
> very large N one would have to compute very many terms and the
> rounding errors would be considerable. Very probably I have missed
> something.

I don't think you have missed something, and yes, N here means one is
interested in the Nth hex digit of pi.

It has not been my intention in this thread to detail the error analysis
for the computation of the Nth hex digit of pi, but rather to explain
the overall working of the algorithm. However, you are correct in saying
that the rounding errors will be considerable. Suppose we wish to
compute d hex digits of pi in the region of the Nth hex digit of pi. In
order to get the 4d bits for the d digits it is likely that the
calculation will need to be carried out with roughly 4d+log_2(N) bits of
floating point accuracy. So, to compute the 10^15th hex digit, roughly 50
guard bits would need to be carried. To compute the exact number of
guard bits required would require some careful analysis of the
algorithms used, and the floating point operations carried out at each
step.
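
As a quick illustration of that rule of thumb in Python (the d = 10 digits
and N = 10^15 figures are just examples):

import math

def working_bits(d, N):
    # Roughly 4*d bits for the d hex digits plus log2(N) guard bits.
    return 4 * d + math.log2(N)

print(working_bits(10, 10**15))   # about 89.8: ~50 guard bits on top of 40 digit bits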

-- 
 
Clive Tooth
http://www.pisquaredoversix.force9.co.uk/
End of document

------------------------------

From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: Patent state of Elliptic Curve PK systems?
Date: Mon, 22 May 2000 15:22:14 -0700

Paul Rubin wrote:
> >Not if you stick with software.  Most ECC patents cover specific
> >hardware speedups or special fields.  Too much has been in the
> >public domain for too long for this to be a real threat.
> 
> Are you sure of that?  The most interesting ECC variant is ECC over
> GF(2^n) with optimal normal basis representation, since you can do that
> in software without any multiplication.  But I thought it was patented
> by Certicom.  Is it not?  What about ECC over GF(2^n) in general, an
> obvious method if there ever was one?

Certicom says it has some patent applications pending, but
it is hard to see how they will stop anyone. Most people
get better performance in software with other bases, i.e.,
trinomial bases. Or one can use (non-optimal) normal bases.
Or use prime fields where you have more curves and fewer
known attacks.

------------------------------

Date: Mon, 22 May 2000 18:45:35 -0400
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Blowfish Claims ... something not right

"David A. Wagner" wrote:

> It's hard for me to imagine a reasonable scenario where we should
> be worried about exhaustive keysearch attacks on 256-bit keys.  If
> not for defense against keysearch, what do longer keys buy you?

One might choose a slightly larger key than otherwise necessary in order to
guard against a poor key generator.  I'd guess such a safety margin might be a
percentage increase in key length, certainly no more than a factor of two.

One might use another factor of two to guard against attacks of cost
O(sqrt(N)), as might be possible within the lifetime of the data to be protected
(QC, divine inspiration creating a new class of attack, other crypto-boogeyman
threats).
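
As a quick illustration of the factor-of-two argument in Python (the key sizes
are examples only): against an attack of cost O(sqrt(N)) over a key space of
N = 2^bits, doubling the key length restores the original work factor.

for bits in (128, 256):
    print(f"{bits}-bit key -> about 2^{bits // 2} work under a sqrt(N) attack")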

Another factor that might inflate key length is the desire to increase the
independence of sub/round keys.




------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
