Cryptography-Digest Digest #671, Volume #9        Sun, 6 Jun 99 16:13:03 EDT

Contents:
  Re: "Cipher Systems", Beker & Piper?
  Re: evolving round keys
  Re: Challenge to SCOTT19U.ZIP_GUY (Tim Redburn)
  Re: Scottu: I actually saw something usefull (Tim Redburn)
  Re: Challenge to SCOTT19U.ZIP_GUY (Thomas Pornin)
  Re: evolving round keys (Terry Ritter)
  SCOTT16U Key Generation (SCOTT19U.ZIP_GUY)
  Re: Finding a 192 bit hash (Was: Using symmetric encryption for hashing) (Boris Kazak)
  Re: SCOTT19U pass in nut shell (Richard Iachetta)
  Re: KRYPTOS ("Douglas A. Gwyn")
  Re: Challenge to SCOTT19U.ZIP_GUY (SCOTT19U.ZIP_GUY)
  Re: Challenge to SCOTT19U.ZIP_GUY (Tim Redburn)
  Re: Challenge to SCOTT19U.ZIP_GUY (SCOTT19U.ZIP_GUY)
  Re: SCOTT19U pass in nut shell (SCOTT19U.ZIP_GUY)
  Key lengths vs cracking time ("Jan Wessels")
  Re: Challenge to SCOTT19U.ZIP_GUY (Thomas Pornin)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: "Cipher Systems", Beker & Piper?
Date: 6 Jun 99 15:22:09 GMT

Terry Ritter ([EMAIL PROTECTED]) wrote:
: Well, I can only do so much:  I just checked and -- yes, indeed --
: found it in the references from no less than three of my Cryptologia
: articles from the early 90's, plus the crypto DSP article.  

Yes, your site is one of the places my web search turned up.

: Cipher Systems has much more about feedback shift registers (FSR's)
: than we find in most modern texts.

: The text also has more crypto statistics and practical cryptanalysis
: of older systems than we find in modern texts; for example, they talk
: about attacking the M209 quite a bit.  In that sense it is more
: down-to-earth than modern texts.  It has two chapters on classical
: Shannon theory, the chapter on LFSR's and another on NLFSR's.

Well, from this, it seems that they do cover material that tends not to
occur in other sources, and which, at least according to rumour, touches
a bit more closely on the still-classified early electronic cipher
devices of the post-rotor era.

John Savard

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: evolving round keys
Date: 6 Jun 99 16:58:34 GMT

[EMAIL PROTECTED] wrote:

: If you have a strong enough char. then you can simply remove the delta
: from adjacent blocks.

I was thinking about your initial question - why do the keys remain
constant - and David Wagner's comments related to your following example,
just changing the whitening keys.

A static whitening key is indeed 'invisible' to differential
cryptanalysis. If the whitening keys are instead completely
unpredictable, you will never have two blocks whose difference is known;
but if your method of varying the whitening keys has any weaknesses, they
remain open to attack.

So I accept that that kind of scheme is still weak, since the two pieces
are, to a certain extent, open to separate attacks. But if you vary the
subkeys instead - which rules out using a standard DES chip - and use a
secure scheme, this _is_ a good method.

: How can you patent a swap?  It's a table (or function) which is
: dynamically updated.  You cannot patent a swap, or at least you
: shouldn't.

The essence of Terry Ritter's invention is this: after a byte has been
encrypted through a lookup table, swap two table entries, one of which is
the entry just used to encrypt that byte. This was original at the time,
and has the benefit of combining the nonlinearity of a lookup table with
a high degree of efficiency in changing its contents, by changing only
the part that really matters - the part that was just used.

John Savard

------------------------------

From: [EMAIL PROTECTED] (Tim Redburn)
Subject: Re: Challenge to SCOTT19U.ZIP_GUY
Date: Sun, 06 Jun 1999 18:31:30 GMT

On Sun, 06 Jun 1999 15:07:42 GMT, [EMAIL PROTECTED] wrote:

><snip>
>
>Look at : http://members.xoom.com/ecil/page2.htm
>
>Which briefly describes the algorithm.
>
<snip>

I originally looked at those pages some time
ago, in fact they were what I based my
analysis of his S-Box generation on.

However, as I showed, the sums in the pages
are inaccurate, but David refuses to correct them.
Anyone else reading the pages is likely to duplicate
work that has already been done by others, without
actually realising it (I wasn't the first to point
out problems with David's S-Box generation).


When commenting about the accuracy of those
pages, all David is prepared to say is that he
would have written them differently. 

He refuses to give a concrete yes or no as to whether
they give a completely accurate description of his
algorithm.

Until David corrects the parts that have been pointed
out to him as incorrect, AND until he confirms absolutely the
accuracy of the rest of the document, I am not going
to spend time analysing it further, and I wouldn't
recommend anyone else do so either.

Without absolute confirmation from David as to the document's
accuracy, if you do perform any analysis of it and actually
find a weakness, it is highly probable that David will turn
round and claim that it wasn't actually done that way in
his program.

The guy that wrote that description had the same problems
as the rest of us, in that he had to try to derive the algorithm
from the source code. If you've looked at the source code,
you will see that it's not the easiest-to-follow C source ever
written, and that guy, in another post on another thread, has
also requested that David give an absolute statement as
to the document's accuracy.

If David is prepared to correct the parts that have been demonstrated
as incorrect, AND he makes an *absolute* statement as to the rest of
the document's accuracy (in other words, he *must* be prepared to stand
by such a statement, so he will actually need to check the document in
detail first), then he will have successfully completed the challenge.

- Tim.

------------------------------

From: [EMAIL PROTECTED] (Tim Redburn)
Subject: Re: Scottu: I actually saw something usefull
Date: Sun, 06 Jun 1999 18:31:31 GMT

On Sat, 05 Jun 1999 22:16:14 -1000, Horst Ossifrage <[EMAIL PROTECTED]>
wrote:


>Hello Tom, I wrote the documentation page for David A. Scott
>last year. It may have some errors in it, but David says he did
>not proof read my description of his algorithm. I hope that someday
>he will proof read that website and make any corrections that
>are necessary. Until he says that the documentation is accurate, 
>you should only consider that document to be a rough draft.
>
>Horst Ossifrage

Let's make it easy for David........

David, here are four possible responses (that I hope cover all
options). Please reply to this message, deleting the 3 responses 
that don't apply ........

Response 1:

"I have looked at the description of my algorithm, that Horst
provided, and find it to be absolutely accurate and complete. If any 
weaknesses are found through an analysis based on that 
description, then they are genuine weaknesses in my algorithm
and I will accept them as such."

Response 2:

"I have looked at the description of my algorithm that Horst has
provided and believe there are mistakes in it, and any analysis
based on that description will not be a genuine analysis of
my algorithm. I DO intend to correct those mistakes though,
and when I have done so, if any 
weaknesses are found through an analysis based on that new
description, then they are genuine weaknesses in my algorithm
and I will accept them as such. I will let people know, via
this news group, when the new description is available."

Response 3:

"I have looked at the description of my algorithm that Horst has
provided and believe there are mistakes in it, and any analysis
based on that description will not be a genuine analysis of
my algorithm. I do NOT intend to correct those mistakes though."

Response 4:

"I am not going to look at the description in detail. I therefore 
will not make any claims as to it's accuracy."


Pick your response David. 


-Tim.


------------------------------

From: [EMAIL PROTECTED] (Thomas Pornin)
Subject: Re: Challenge to SCOTT19U.ZIP_GUY
Date: 6 Jun 1999 18:38:56 GMT

According to Tim Redburn <[EMAIL PROTECTED]>:
> He refuses to give a concrete yes or no as to whether they give a
> completely accurate description of his algorithm.

Quite understandable, since it might make people write fewer messages
about his toys. If that were to happen, he would shrivel up and die.

He is just, Darwinianly, trying to remain alive, after all.

        --Thomas Pornin

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: evolving round keys
Date: Sun, 06 Jun 1999 18:41:58 GMT


On Sun, 06 Jun 1999 03:29:50 GMT, in <7jcpv8$l8r$[EMAIL PROTECTED]>, in
sci.crypt [EMAIL PROTECTED] wrote:

>[...]
>> If you swap the s-box entries that are _used_ in a block
>encipherment, you
>> would be using a patented technique: Terry Ritter's Dynamic
>Substitution.
>> But yes, it _is_ a good idea.

   http://www.io.com/~ritter/#DynSubTech


>How can you patent a swap?  

   http://www.io.com/~ritter/PATS/DYNSBPAT.HTM

"I claim as my invention: 

1. A mechanism for combining a first data source and a second data
source into result data, including: 

      (a) substitution means for translating values from said first
data source into said result data or substitute values, and 

      (b) change means, at least responsive to some aspect of said
second data source, for permuting or re-arranging a plurality of the
translations or substitute values within said substitution means,
potentially after every substitution operation."
 

>It's a table (or function) which is
>dynamically updated.  You cannot patent a swap, or at least you
>shouldn't.

If you need some background in basic patent law, there are many
sources, including the PTO itself.  Start with my links:  

   http://www.io.com/~ritter/NETLINKS.HTM#PatentLinks

I have also archived various past Usenet patent discussions for your
convenience: 

   http://www.io.com/~ritter/#UsenetPatents

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: SCOTT16U Key Generation
Date: Sun, 06 Jun 1999 19:21:33 GMT


 The area on the page that Horst refers to corresponds to this subroutine
in scott16u; there is a similar routine in scott19u:

void
rem_tab16(un16 * prtb, un16 * pftb, un16 * pbtb)
{
   /*
    * this code builds the S table for a 16 bit look up table. The
    * lookup table, with a proper choice of remainder values, can be
    * any one of the complete cycle mappings of the 16 bit wide look
    * up table. Since there are 65536 slots, zero can map to 65535
    * values; say it maps to x, then x in turn can map
    * to only 65534 values, and so on. This means I am using
    * 65535! mappings instead of 65536!; this is still an effective
    * key of over 900,000 bits. I have just eliminated the multiple
    * cycle mappings.
    */

   /*
    * input   prtb  pointer to remainder table
    * output  pftb  pointer to forward look up table
    * output  pbtb  pointer to reverse look up table
    */

  This routine gets used every time there is a change in the S-table.
Even if one uses a passphrase to protect a keyenc.key file, that
pass phrase is assumed to be the remainder table. And when the
remainder table is built, if the input file is short it gets repeated
until the table is full. If one uses variable-length passwords like "a"
and "aa", they amount to the same password. Yes, I know that that is not
pure, but that is what I did here. But if you type in, in ASCII, the
octal values of a 128-bit key, you have 2**128 different keys. The name
remainder table comes from the fact that when using a "true key file",
which is shorter, building the table takes a long time in C; in assembly
it still was 45 seconds or so. When building the unique single-cycle
S-table one must create a series of remainders modulo a decreasing
function of the slots left in the table. Since this is the slowest part
of using a "true key", I use a modified key for speed, so that instead
of the numbers in successive fields decreasing, one does a decreasing
mod of the field. This creates a small bias as a trade-off for speed.
Redburn showed what this bias was for scott19u; the entropy was such
that the key space for 19u was still over one million bytes. Does this
mean anything for short keys? Like I said, in 19u, since you repeat the
file over and over while building the table, there is no other octal
combination of the octal representation that would result in the same
key. This is mainly because you keep using different remainders as the
number of slots gets smaller, and you keep using the same number to get
different remainders. For example, a%b1 = r1, a%b2 = r2 and a%b3 = r3;
at any point there may be more than one value of a, namely (a+b1), that
yields r1, but soon you run out of candidates such that
(a+n*b1)%b2 = r2 and (a+n*b1)%b3 = r3.

 I could have used PI or some other thing to scramble the starting table,
or done transforms to prevent "a" from producing the same key as "aa",
but I chose to keep the implementation simple, since any and every
single-cycle S-table is possible.
  The routine, once the inputs are known, is self-explanatory. Horst may
have used j as an increment where I chose to increment the pointer
directly, but the concept is basically correct.
 


David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: Boris Kazak <[EMAIL PROTECTED]>
Subject: Re: Finding a 192 bit hash (Was: Using symmetric encryption for hashing)
Date: Sun, 06 Jun 1999 11:31:45 -0400
Reply-To: [EMAIL PROTECTED]

Paul Onions wrote:
> 
> Well, since 2^32+1 is not prime, choosing a multiplier that is not coprime
> to the modulus will again allow one to find collisions.
=======================
 Would you please elaborate a bit on this one. Actually, to me it seems
quite the contrary: the absence of a multiplicative inverse makes the
function really one-way.
=======================
> 
> On the other hand, choosing a multiplier coprime to 2^32+1 results in another
> weakness.  This is because it makes the block-multiplication function invertible.
> (Assuming my understanding of it is correct - please correct me if I'm wrong).
> 
> To see this, consider the final mod 2^32+1 multiplication applied in the final
> (3rd) round of block processing.  Since we know its output and one of its inputs
> (the constant multipler) we can compute the other input and so undo the effect
> of this multiplication.  We can carry on doing this to undo the entire 3-round
> block processing operation.
> 
> Denoting your hash scheme as H[i+1] = B(M[i] XOR H[i]) where the M[i] are
> the message blocks and H[0] = IV a fixed quantity and B() is the block processing
> operation.  With the final M[i] being a padding/message-length encoding block
> and the final hash value being the XOR of left and right halves of the last H[i].
> 
> Now, since B() is invertible, we can compute preimages of a given hash output
> as follows:-
> 
> Given a hash value, create a H[n] value consistent with it.
==================
  You are correct, and even more than that: for each given left we can
produce a consistent right just by XOR-ing the arbitrary "left" with the
"hash".
==================
> Now invert B() giving
> H[n-1] XOR M[n-1].  Set H[n-1] consistent with the length of the preimage we want,
> thus also setting M[n-1].  Invert B() on H[n-1] giving us H[n-2] XOR M[n-2].
> Choose M[n-2] as desired and invert H[n-2] and so on.  When we get to H[0] XOR M[0]
> then set M[0] so that H[0] = IV.
> 
> Thus the hash is not one-way (and so obviously also not collision-resistant).
==================
 100% correct, finding collisions in this scheme is trivial. Sorry, it
means that I overlooked the obvious. The colliding message will be utter
garbage with probability 1-2^-n, but still it is a collision...
 Still, maybe there exists a trick to make this operation one-way? How
about XOR-ing the multiplication product into the block instead of just
replacing the 32-bit multiplicand part? Do you see an easy way to
reverse this?
====================
> 
> Does this make sense?
> Paul(o)
> 
> --
> Paul Onions                     [EMAIL PROTECTED]
>                                  PGP 2.6.3 key available
>                             D704688BEFBF2D5D 546BC1D603E2A8E0
====================
  I shall make the appropriate changes, eliminate the multiplier table,
and XOR the product. If you wish to get the improved version, please let
me know.
     Thanks and best wishes        BNK

------------------------------

From: [EMAIL PROTECTED] (Richard Iachetta)
Subject: Re: SCOTT19U pass in nut shell
Date: Sun, 6 Jun 1999 13:50:24 -0500

In article <7jcvi0$2fnm$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> 8 bits would have been easier. But I guess I just had a hair up
> my ass to make it 9. There was no real reason other I liked it.
> And I felt others would hate it. Since everything is done is 
> 8 byte increments.

Can anyone now trust this algorithm as anything more than a toy?  Would any 
serious cryptographer choose the parameters of his algorithm based on a 
hair up his ass or that others would hate the choice rather than having 
some actual reason for making the algorithm a certain way?  Imagine an 
author of an AES candidate algorithm giving reasons like these to the 
review committee when questioned about the algorithm.

-- 
Rich Iachetta
[EMAIL PROTECTED]
I do not speak for IBM.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: KRYPTOS
Date: Sun, 06 Jun 1999 18:24:10 GMT

[EMAIL PROTECTED] wrote:
> I was just wondering if anyone knows what the latest is with the
> KRYPTOS sculpture? Has anyone deciphered it yet?

The sculpture is still there, at least it was a few weeks ago.
There have been no correct decipherments that I have heard of.
(There is a totally bogus one somewhere on the Web.)

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Challenge to SCOTT19U.ZIP_GUY
Date: Sun, 06 Jun 1999 20:32:11 GMT

In article <7jef3b$a29$[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Thomas Pornin) wrote:
>According to Tim Redburn <[EMAIL PROTECTED]>:
>> So why is decrementing a pointer slower than incrementing it ?
>
>Because it is not. Don't ask.
>
>        --Thomas Pornin

 Actually it isn't. It is just that in most code I have looked at, people
increment it. But as I stated somewhere, and as it will be in the new
stuff I am about to put out there on the net, I put the file in virtual
memory, so there was little actual difference in the speed of encryption
and decryption. But when the data stays in files, code is usually set up
to handle reading a file from the front to the back. But if one uses the
data as read to decrypt, one must use the data from the back to the
front. Sorry if the concept is over your head. You ask for simple
explanations and then you fall apart. So do you really want to
understand the routine and the thoughts behind it, or what?


David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: [EMAIL PROTECTED] (Tim Redburn)
Subject: Re: Challenge to SCOTT19U.ZIP_GUY
Date: Sun, 06 Jun 1999 18:31:32 GMT

On Sun, 06 Jun 1999 15:11:53 GMT, [EMAIL PROTECTED]
(SCOTT19U.ZIP_GUY) wrote:

<snip>
>><snip>
>>>  If you look at my code you will realize that decryption if anything
>>>would be slower than encryption since I am sort of going against
>>>the normal data flow........
>><snip>
>>
>>What do you mean by ".. against the normal data flow .." ? 
>

<snip>

>  In scott19u yes I load the whole file in memmory since
>that is was the easiest way to do it. But I still acess most
>of that file by incrementing pointers in the forward direction
>for encryption and I must do the reverse (against the grain)
>to do the decryption so that the pointers are decremented.
>

So why is decrementing a pointer slower than incrementing it ?


-Tim.

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: Challenge to SCOTT19U.ZIP_GUY
Date: Sun, 06 Jun 1999 20:35:11 GMT

In article <7jef80$a49$[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Thomas Pornin) wrote:
>According to Tim Redburn <[EMAIL PROTECTED]>:
>> He refuses to give a concrete yes or no as to whether they give a
>> completely accurate description of his algorithm.
>
   I think an argument can be made that the "source code" is a completely
accurate description of the algorithm. It is not my fault some of you can't
read C.

>Quite understandable, since it might make people write less messages
>about his toys. If this was to happen, he would shrivel up and die.
>
>He is just, darwinly, trying to remain alive, after all.

   Would it be too much to ask for a clear and accurate description of this
last sentence?



David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: SCOTT19U pass in nut shell
Date: Sun, 06 Jun 1999 20:26:36 GMT

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
(Richard Iachetta) wrote:
>In article <7jcvi0$2fnm$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
>> 8 bits would have been easier. But I guess I just had a hair up
>> my ass to make it 9. There was no real reason other I liked it.
>> And I felt others would hate it. Since everything is done is 
>> 8 byte increments.
>
>Can anyone now trust this algorithm as anything more than a toy?  Would any 
>serious cryptographer choose the parameters of his algorithm based on a 
>hair up his ass or that others would hate the choice rather than having 
>some actual reason for making the algorithm a certain way?  Imagine an 
>author of an AES candidate algorithm giving reasons like these to the 
>review committee when questioned about the algorithm.
>

  You're correct, of course; the more serious reason would have been to say
"rectal retrieval". What difference does it make? Either it is hard to crack
or not. I am sure that if a lot of the AES candidates gave an honest answer
for every little detail they would say something like that. My god, no wonder
Clinton is president; idiots like you voted for him because he talks nice. I
still want to see Jesse Ventura as president. But then stuffed shirts like
you couldn't handle the honesty.


David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: "Jan Wessels" <[EMAIL PROTECTED]>
Subject: Key lengths vs cracking time
Date: Sun, 6 Jun 1999 21:21:50 +0200

Does anyone have some information relating encryption algorithms and key
length to the time needed to break the encrypted data?

Thanx,

Jan Wessels



------------------------------

From: [EMAIL PROTECTED] (Thomas Pornin)
Subject: Re: Challenge to SCOTT19U.ZIP_GUY
Date: 6 Jun 1999 18:36:27 GMT

According to Tim Redburn <[EMAIL PROTECTED]>:
> So why is decrementing a pointer slower than incrementing it ?

Because it is not. Don't ask.

        --Thomas Pornin

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
