Cryptography-Digest Digest #490, Volume #13      Thu, 18 Jan 01 17:13:00 EST

Contents:
  Re: using AES finalists in series? (Terry Ritter)
  Re: NSA and Linux Security (Greggy)
  Re: using AES finalists in series? (Terry Ritter)
  Re: block algorithm on variable length without padding? (DJohn37050)
  Membership Signature Scheme ([EMAIL PROTECTED])
  Re: SAC question (Benjamin Goldberg)
  Membership Signature Scheme (Splaat23)
  Re: block algorithm on variable length without padding? ("Adrian S. Thompson")
  Re: Problem with Lanaki Lession #1 (William Rowden)
  Re: Why Microsoft's Product Activation Stinks (David Schwartz)
  Initial Cryptanalysis of the RSA SecurID Algorithm (Kingpin)
  Re: using AES finalists in series? (Roger Schlafly)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: using AES finalists in series?
Date: Thu, 18 Jan 2001 20:12:21 GMT


On Thu, 18 Jan 2001 11:50:50 -0800, in
<3a674992$[EMAIL PROTECTED]>, in sci.crypt "Paul Pires"
<[EMAIL PROTECTED]> wrote:

>Gary Watson <[EMAIL PROTECTED]> wrote in message
>news:nuF96.1804$wL5.36733@NewsReader...
>>
>> If one had sufficient CPU power or minimal throughput requirements, is there
>> any reason why one couldn't use all five AES round two finalists in series?
>> This would guard against a weakness being found in one of them, or if one or
>> two of the candidates were deliberately weak systems promulgated by sinister
>> government forces.  (Is it necessarily true that security must improve at
>> least a little each time you run the ciphertext through a new crypto
>> algorithm?  Don't know, this isn't my line of work...)
>
>Not binary answers here but something to think about...
>
>Is the security of a process only reduced when important elements
>are omitted? Can a flaw be introduced by adding operations?
>I believe this has been demonstrated in some cases. 

Sure, if we use the same cipher, in decipher mode, with the same key,
we can say that has "weakened" the cipher.  

So don't do that.

In particular, when using multiple ciphers, the key for each cipher
must be independently selected.
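
A cascade with independently selected keys might be sketched like this (a toy
illustration only -- the counter-mode keystream generator here is a stand-in,
not one of the AES finalists, and the keys and plaintext are made up):

```python
# Toy cascade: two stream-cipher layers, each under an independently
# selected key, as recommended above.  Illustrative only.
import hashlib, os

def keystream_xor(key, data):
    # SHA-256 in counter mode as a stand-in keystream generator.
    out = bytearray()
    ctr = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(b ^ k for b, k in zip(data, out))

key1, key2 = os.urandom(32), os.urandom(32)       # independently selected
pt = b"attack at dawn"
ct = keystream_xor(key2, keystream_xor(key1, pt)) # layer 1, then layer 2
assert keystream_xor(key1, keystream_xor(key2, ct)) == pt  # unwrap in reverse
```

Because the two keys are independent, stripping one layer still leaves the
other layer's full keyspace in the attacker's way.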

If one could make a cipher weaker simply by adding another ciphering
layer, that would be a way to attack the cipher.  Yet we don't see
such attacks.  Why?  


>So,
>how are you going to get any confidence in your process to justify
>the added complexity? How are you going to maintain the confidence
>in the components you use that are being used under different
>conditions than they were analyzed?

Ciphers are analyzed for arbitrary data.  Ciphertext is data.


>Can you randomly mix chemicals and achieve a non-toxic
>result?
>
>Sure.
>
>Should you test them by ingestion?

False analogy, false implication.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: Greggy <[EMAIL PROTECTED]>
Subject: Re: NSA and Linux Security
Date: Thu, 18 Jan 2001 20:08:47 GMT

In article <[EMAIL PROTECTED]>,
  Shawn Willden <[EMAIL PROTECTED]> wrote:
> Greggy wrote:
>
> >  Shawn Willden <[EMAIL PROTECTED]> wrote:
> > > Greggy wrote:
> > > > FDR and the congress technically, legally declared the citizens
> > > > of the US enemies of the US
> >
> > > This is quite a statement.  Can you provide a reference?
> >
> > Absolutely.  I will look it up later tonight and post it back here
> > for you.
>
> Any luck on finding that reference?

I replied to my previous post.  No luck about it.  I have the stuff
here at home in print.  And you can use the search engines on the web
to look this stuff up.  Look at the material these web sites cite.


>
> Shawn.
>
>

--
I prefer my fourth amendment rights over a dope free
society, even if the latter could actually be achieved.
Al Gore and the Florida Robes - More than just another rock group;
a clear and present danger to America's national security.


Sent via Deja.com
http://www.deja.com/

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: using AES finalists in series?
Date: Thu, 18 Jan 2001 20:26:03 GMT


On Thu, 18 Jan 2001 20:01:29 -0000, in
<6YH96.1391$eI2.46244@NewsReader>, in sci.crypt "Gary Watson"
<[EMAIL PROTECTED]> wrote:

>[...]
>If I were to embed a crypto system into something of my own
>design, I'd probably stick to Rijndael and be done with it.  After all, Mr
>I.B. Pomposity III, your typical CEO, is going to use his wife's birthday as
>the input key anyway.

Well, if IB gets to choose his key, he may well have a weak key at
his desk.  So if one had physical access to the desk, the weakness
could be exploited.  But this sort of access is distinct from having
weak keys for data transmission.

No general cipher system should be sending data under a user-selected
key.  At the very least, a weak key might be used for sending the long
random key used for the data (a "message key").  A better approach
would be for the weak key to open a key-repository of long re-usable
keys, one of which is then used to send a message key.  
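
The first layering described above might be sketched as follows (an assumed
construction for illustration, not a vetted protocol; the keystream helper
is a stand-in cipher):

```python
# Sketch: a weak user key protects only a random "message key", which
# actually encrypts the data.  Illustrative only.
import hashlib, os

def stream_xor(key, data):
    # SHA-256 in counter mode as a stand-in keystream generator.
    out = bytearray()
    ctr = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(b ^ k for b, k in zip(data, out))

weak_key = hashlib.sha256(b"wife's birthday").digest()  # user-chosen, weak
message_key = os.urandom(32)                            # long, random, per-message

wrapped = stream_xor(weak_key, message_key)        # weak key sends only this
ciphertext = stream_xor(message_key, b"the actual data")

# Receiver side: recover the message key, then the data.
recovered_mk = stream_xor(weak_key, wrapped)
assert stream_xor(recovered_mk, ciphertext) == b"the actual data"
```

The weak key never touches the data directly, which confines the weakness
to the key-transport step.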

If we force the security weakness to be in a particular location,
"all" we need do is protect that location.  That is an advantage.  It
is not as good as having strong user keys, but we do what we can.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Date: 18 Jan 2001 20:27:40 GMT
Subject: Re: block algorithm on variable length without padding?

The previous ciphertext block for block number 1 is the initialization vector. 
This is best if it is varying, looks random, and is OK to be public.  I thought
this was well known.
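
In CBC terms the IV acts as ciphertext block zero.  A minimal sketch of that
chaining (the block "cipher" here is a keyed XOR, a stand-in chosen purely to
show the structure; a real block cipher slots in the same way):

```python
# CBC chaining: c[i] = E(k, p[i] XOR c[i-1]), with c[0] = IV.
import hashlib, os

BLOCK = 16

def toy_block_encrypt(key, block):
    # Keyed XOR as a stand-in block cipher -- structure only, not secure.
    mask = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(b ^ m for b, m in zip(block, mask))

toy_block_decrypt = toy_block_encrypt   # XOR is its own inverse

def cbc_encrypt(key, iv, blocks):
    prev, out = iv, []
    for p in blocks:
        c = toy_block_encrypt(key, bytes(a ^ b for a, b in zip(p, prev)))
        out.append(c)
        prev = c                        # ciphertext chains into the next block
    return out

def cbc_decrypt(key, iv, blocks):
    prev, out = iv, []
    for c in blocks:
        d = toy_block_decrypt(key, c)
        out.append(bytes(a ^ b for a, b in zip(d, prev)))
        prev = c
    return out

key = os.urandom(16)
iv = os.urandom(BLOCK)                  # varying, random-looking, fine to be public
pt = [b"A" * BLOCK, b"A" * BLOCK]       # two identical plaintext blocks
ct = cbc_encrypt(key, iv, pt)
assert cbc_decrypt(key, iv, ct) == pt
```

Note that the two identical plaintext blocks encrypt to different ciphertext
blocks, which is exactly what the chaining (seeded by the IV) buys you.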
Don Johnson

------------------------------

From: [EMAIL PROTECTED]
Subject: Membership Signature Scheme
Date: Thu, 18 Jan 2001 20:33:28 GMT

For a project of mine I am in need of a signature
scheme that allows a signature to be verified
offline as to whether it had been signed by a
member of a group (network) of people, without
having to know anything specific about that
person. Essentially, the following would
constitute the protocol:

1) A trusted arbiter of the group generates his
secret key, and uses that secret key to generate
unique secret keys for every member of the group,
which he then distributes securely.
2) With his secret key, a user can sign a message
and send that message plus the generated
signature to any other member. He has no
information at this point as to whom he will be
sending the signature + message to.
3) That member can, using only some global public
data (and possibly his own secret key), verify
that the signature matches the message and was
generated by a member of the group.


Obviously, each member's signature of identical
messages must be unique.

Most crucial is the size of the signature. It
must remain small (bandwidth is at a premium),
and the best solutions I can think of right now
involve sending a standard RSA signature of the
message hash + a (trusted arbiter) signed RSA
public key that generated the signature.
Obviously, for large key sizes this could be a 1K
signature.
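
The "RSA signature + arbiter-signed public key" idea can be sketched with
textbook RSA (tiny fixed primes, a bare hash, no padding -- wildly insecure
and structure only; the primes and message below are made up for
illustration):

```python
# Certificate-style membership signature: the arbiter signs each member's
# public key; a verifier needs only the arbiter's public key.
import hashlib

def h(msg, n):
    # Bare hash reduced mod n -- illustration only, not a secure encoding.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def rsa_keys(p, q, e=65537):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (n, e), (n, d)

# Arbiter and member key pairs (hypothetical small primes).
arb_pub, arb_priv = rsa_keys(32416190071, 32416187567)
mem_pub, mem_priv = rsa_keys(15485863, 32452843)

# Arbiter certifies the member's public key once, at enrollment.
cert = pow(h(repr(mem_pub).encode(), arb_pub[0]), arb_priv[1], arb_priv[0])

# Member signs a message with his own private key.
msg = b"hello group"
sig = pow(h(msg, mem_pub[0]), mem_priv[1], mem_priv[0])

# Verifier, holding only the arbiter's public key, checks both links:
# the key is a certified member key, and the signature matches the message.
assert pow(cert, arb_pub[1], arb_pub[0]) == h(repr(mem_pub).encode(), arb_pub[0])
assert pow(sig, mem_pub[1], mem_pub[0]) == h(msg, mem_pub[0])
```

The size problem is visible here: the transmitted bundle is (mem_pub, cert,
sig), i.e. a key plus two full-size RSA values, which is what makes this
approach large.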

I have been searching as much math as I can find,
but I can't seem to find anything quite like what
I want. If anyone out there has a solution as to
how I could do this or something very close, that
help would be greatly appreciated!


Sent via Deja.com
http://www.deja.com/

------------------------------

From: Benjamin Goldberg <[EMAIL PROTECTED]>
Subject: Re: SAC question
Date: Thu, 18 Jan 2001 20:43:41 GMT

Tom St Denis wrote:
> 
> In article <[EMAIL PROTECTED]>,
>   Benjamin Goldberg <[EMAIL PROTECTED]> wrote:
> > Tom St Denis wrote:
> > >
> > > In article <[EMAIL PROTECTED]>,
> > >   Benjamin Goldberg <[EMAIL PROTECTED]> wrote:
> > > > In the normal definition of the Strict Avalanche Criterion (SAC), changing any 
>input bit, or selection of input bits, should change each output bit with probability 
>of 1/2.
> > > >
> > > >       However, for an invertable function, there *must* be some bias. In the 
>entry for SAC in Terry Ritter's glossary, he gives the example that for a 2 bit 
>table, if one entry is the original value, there are only 3 "changed" values, so an 
>input difference can cause each output bit to change with probability 2/3, not 1/2.
[snip]
> I don't get what you are saying.  It's only logical that the function will not 
>affect the input, so no duh you only get upto 3 bits different.  So what?

There are 3 *outputs* different.  Not 3 *bits* different.  The function only *outputs* 
2 bits.  There's no way for there to be 3 *bits* different.  It's quite clear that you 
are still confused on this issue.

Let's consider the possible 1-bit input differences to a 2x2 function, 01 and 10.  The 
first of these can have an output difference of 01, 10, or 11.  The second of these 
can have an output difference of 01, 10, or 11.  Clearly, if the possible output 
differences are equiprobable, then the first bit of the output difference is 0, 1, or 
1 with equal probability, and the second bit is 1, 0, or 1 with equal probability.  
This means that the first output bit changes with probability 2/3 and stays the same 
with probability 1/3; likewise, the second output bit changes with probability 2/3 and 
stays the same with probability 1/3.  Get it?  A 2x2 function, which comes as close to 
SAC as possible, will have each bit change with probability 2/3 for any random 1-bit 
input change.  See?

[snip]
> > No.  The strict definition of SAC says output bits change 1/2 of the time.  If the 
>count is 2^(n-1) that it changes, then the count is 2^(n-1)-1 of it not changing.  
>Since (2^(n-1))/((2^n)-1) is not 1/2, then no substitution box of finite size is SAC.
> 
> I beg to differ.  If there are 256 possible inputs, then it must flip for 128 of 
>them and not flip for 128 of them.  That satisfies SAC.

In an 8x8 table, you can count 256 possible changes only if you count a change of 0 
bits as a change.  If you consider a change of 0 bits to not be a change, then there 
are 255 possible changes.  If each of those 255 possible input changes produces an 
output change, well, there are 255 possible output changes.  Consider just the first 
bit of output: the output change values 1..127 produce a change of 0 in the first bit; 
the output change values 128..255 produce a change of 1 in the first bit.  There are 
127 cases where the first bit doesn't change, and there are 128 cases where the first 
bit does change.

> 
> > Suppose I have an 8 bit table.  Flipping 1 bit of input can cause any of 255 
>different differences; The odds of a change is 128/255.  Not 1/2.  It's almost SAC, 
>but it is not SAC.
> 
> Yeah but SAC only concerns a SINGLE bit of input *AND* a SINGLE bit of output.  Not 
>multiple.  So a 8x8 table can be SAC.

Strict SAC requires that each input change cause each output bit to change with 
exactly 50% probability.  As I said above, in an 8x8 table, nonzero input changes 
cause output bits to change with probability 128/255.  128/255 != 50%.  This is not 
SAC.

> > > >       Is there any particular term for this type of "Almost SAC?"
> > >
> > > Not SAC.
> > >
> > > >       For instance, if a 128 bit cipher fulfills the property that if an input 
>bit changes, then each of the output bits change with probability (2^127)/(2^128-1), 
>what do you call that property?
> >
> > > >       Also, is there any term for calculating SAC on larger units than single 
>bits?  Maybe "bytewise SAC," or "wordwise SAC?"
> > >
> > > Stochastically random?  If you are talking binary, bytewise is 'seventh-order' 
>so you would say "seventh-order SAC" i.e changing any eight bits will change any 
>other unit of eight bits with prob 1/256 (all bits flip simulataneously).
> >
> > Who cares about all 8 bits changing at once?  I care about at least one of the 8 
>bits changing, which is quite different.  Changing any input byte (any of the 255 
>possible nonzero changes to that byte) causes any output byte to change in one of 256 
>possible ways, of which 255 are nonzero differences, and 1 of which is a zero 
>difference -- that is, there's a 255/256 probability of the output byte changing, and 
>a 1/256 probability of it not changing.
> 
> Um no, if a 8x8 function is truly random the bit will flip with a probability of 1/2 
>not 255/256.  I think you are mixing up various tests.  SAC is on SINGLE bits at a 
>time.

        Clearly you're the one getting things confused.
        First, I was talking about changing single bits in an N bit by N bit function, 
and looking at output bits.  Each output bit changes with probability 
(2^(N-1))/((2^N)-1).
        Later, as a separate, distinct question, I asked about examining x bit words 
in a situation where SAC applies perfectly to bits.  If each output bit changes with 
probability 1/2, then each output word has at least one bit change with probability 
((2^x)-1)/(2^x), and each output word has no bits change with probability 1/(2^x).
        After that, as another separate, distinct question, I asked about x bit words 
changing in an N bit by N bit function.  Each output word has a nonzero change with 
probability ((2^N)-(2^(N-x)))/((2^N)-1).

When I had said:
> > Who cares about all 8 bits changing at once?  I care about at least one of the 8 
>bits changing, which is quite different.  Changing any input byte (any of the 255 
>possible nonzero changes to that byte) causes any output byte to change in one of 256 
>possible ways, of which 255 are nonzero differences, and 1 of which is a zero 
>difference -- that is, there's a 255/256 probability of the output byte changing, and 
>a 1/256 probability of it not changing.

I was talking about x=8 (the second question).  Not N=8 (the first question), as you 
mistakenly seemed to think.

If you don't believe me about the 128/255 wrt my first question, why not do some tests?

Calculate avalanche with something like this:

#define N 8                     /* table is N bits in, N bits out */
int sbox[1<<N];                 /* fill with the table under test */
int ones[N] = {0}, i, j, k, x;
for( i = 0; i < (1<<N); ++i )
        for( j = 0; j < N; ++j ) {
                x = sbox[i] ^ sbox[i ^ (1<<j)]; /* output diff for a 1-bit input change */
                for( k = 0; k < N; ++k )
                        ones[k] += (x>>k) & 1;  /* tally changes of output bit k */
        }

Try it on a few hundred randomly chosen 2x2, 4x4, and 8x8 sboxes, and why don't you 
see if the average value of the ones[] counters is always
        (2**N)*N * (2**(N-1))/(2**N)
as you say it should be, or
        (2**N)*N * (2**(N-1))/((2**N)-1)
as I say it should be.
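
For N=2 the disagreement can even be settled exhaustively rather than by
sampling: average over all 24 permutations of a 2x2 table (done here as a
Python sketch of the same experiment, using exact rational arithmetic):

```python
# Exhaustive avalanche count over every 2x2 invertible sbox.
from fractions import Fraction
from itertools import permutations

N = 2
total = Fraction(0)
count = 0
for s in permutations(range(1 << N)):       # all 24 bijective 2x2 tables
    for i in range(1 << N):                 # every input
        for j in range(N):                  # every 1-bit input change
            x = s[i] ^ s[i ^ (1 << j)]      # output difference
            total += bin(x).count("1")      # tally changed output bits
    count += 1

avg = total / count                          # average bit-flips per sbox
per_pair = avg / ((1 << N) * N)              # flips per (input, flipped-bit) pair

# Per pair, the expected number of flipped output bits is N * 2^(N-1)/(2^N - 1),
# not N * 1/2 as it would be for a random (non-invertible) function.
assert per_pair == Fraction(N * (1 << (N - 1)), (1 << N) - 1)   # = 4/3
assert per_pair != Fraction(N, 2)                               # != 1
```

So per output bit the change probability is exactly 2/3 = 2^(N-1)/(2^N - 1),
matching the formula with the (2^N)-1 denominator.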

Also, consider a case of a 1x1 sbox: this size box can only be either the not function 
or the identity function.  You predict that for a random sbox, single bit changes 
cause output changes with probability 1/2.  I predict that the probability is 
2**0/(2**1-1).

I don't see how you could possibly disagree with what I said in my second question.  
And the equation I gave in the third follows logically from the first and second -- I 
won't offer a way for you to test this, and won't discuss it further until we are 
settled on the first two items.

-- 
Most scientific innovations do not begin with "Eureka!"  They begin with "That's odd.  
I wonder why that happened?"

------------------------------

From: Splaat23 <[EMAIL PROTECTED]>
Subject: Membership Signature Scheme
Date: Thu, 18 Jan 2001 20:48:46 GMT

For a project I am working on, I am in need of a signature scheme that
allows for a signature to be verfied offline as to its having been
signed by a member of a group of people, without having to know the
identity of the person or anything else about him/her. Essentially, the
protocol would be as follows:

1) A trusted arbiter of the group generates a secret key, and uses that
secret key to generate unique keys for each member, and transmits those
keys to each member securely.
2) A member, with his secret key, computes a signature on a message and
sends the message + the signature (or just the signature if it supports
message recovery) to another member of the group. The generation of the
signature cannot depend on knowing which member he is sending to,
because sometimes he will be broadcasting the message to multiple
people.
3) The recipient can, using only publicly available data (and
optionally his secret key) verify the signature.

Obviously, each member's signature of identical messages must be unique.

Most crucial is the space such a signature would take up. My best idea
right now involves sending a conventional RSA signature + a (trusted
arbiter) signed RSA public key, but that is very large. For reasonably
secure signatures it would be > .5K in size.

I have searched all the math I know and can find and have found nothing
really useful. If anyone has ideas on how to implement this or
something close to this, please respond. Thanks in advance!

- Andrew


Sent via Deja.com
http://www.deja.com/

------------------------------

From: "Adrian S. Thompson" <[EMAIL PROTECTED]>
Subject: Re: block algorithm on variable length without padding?
Date: Thu, 18 Jan 2001 15:13:24 -0600

Placing an EOF (End Of File) marker at the beginning of the padding after
encrypting will trick the file system into thinking the file is as long as it
was before the padding was added.

Hypothetical example:

77k = file size
3k = padding

The encrypted text after padding would be 80k; after placing the EOF at the
77,001st byte, the OS/App would only see 77k of data.  This would keep the
plaintext and the ciphertext the same size.

Take care,
-=Adrian=-

"N. Weicher" wrote:

> But then you fail the requirement that the ciphertext be the exact same
> length as the plaintext, unless I am missing your point.
>
> Neil
>
> -----------------------------------------------
>
> "Adrian S. Thompson" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Hi,
> >
> > Couldn't a person just pad the plaintext, encrypt the plaintext, then
> > place an EOF at the first byte of the padding?  Just a hypothesis. ;-)
> >
> > Take care,
> > -=Adrian=-
> > > N. Weicher <[EMAIL PROTECTED]> wrote in message
> > > news:kAj96.59822$[EMAIL PROTECTED]...
> > > > Is it possible to use a block algorithm (such as Blowfish or DES) to
> > > > encrypt plaintext where the length is not a multiple of eight bytes?
> > > > I know about padding, but what if padding is not an option, ie, the
> > > > encrypted data must be the exact same length as the plaintext data?
> > > > Is this feasible?  If so, how is it done?


------------------------------

From: William Rowden <[EMAIL PROTECTED]>
Subject: Re: Problem with Lanaki Lession #1
Date: Thu, 18 Jan 2001 21:18:07 GMT

In article <[EMAIL PROTECTED]>,
  "Rob Marston" <[EMAIL PROTECTED]> wrote:
> V W H A Z S J X I H   S K I M F   M W C G M V   W O J S I F  -
> A G F J A Q   Q M N R J K Z M G R S W M F.   J A T W   X H   -
> A W F.    F I Q Q W F F X I H   F K H B A O Z   J S M A H H F.
> T G A H P K D   X M A W O V F S A R F    X H K I M A F S.
[snip]
> Next we develop a CT Letter Position Chart.
>      F : I    2    3     -     3     2     E
>  A  11 :      /    /    .....  ///   /              i

I agree; even counting the A in "XHAWF" as a middle letter, there
are still two words with A as a third letter ("TGAHPKD"
and "XMAWOVFSARF").

>  F  13 : /    /         .....        /     /////

Clearly, there are two (not one) words with F as the initial
letter ("FIQQWFFXIH FKHBAOZ") and no words with F as the second
letter.

>  M   9 :/    //    /    ..           //

There are nine Ms, but only eight tallies.

>  O   3 :      /                      /

There are three Os, but only two tallies.  The O in the middle
of "XMAWOVFSARF" is missing.

> > and I'm having problems getting his Tally chart to work.

A quick script produced the following tally chart.  Does it
match your counts?

        :-      I-      S-      T-      M-      A-      P-      F
A 11    :-      -       /-      //-     ....-   ///-    /-      -
B 1     :-      -       -       -       .-      -       -       -
C 1     :-      -       -       /-      -       -       -       -
D 1     :-      -       -       -       -       -       -       /-
F 13    :-      //-     -       -       .....-  -       /-      /////-
G 4     :-      -       /-      -       ..-     /-      -       -
H 9     :-      -       //-     //-     .-      /-      /-      //-
I 6     :-      -       /-      -       ...-    -       //-     -
J 6     :-      //-     -       /-      ..-     /-      -       -
K 5     :-      -       //-     /-      .-      -       /-      -
M 9     :-      /-      //-     /-      ..-     -       ///-    -
N 1     :-      -       -       /-      -       -       -       -
O 3     :-      -       /-      -       .-      -       /-      -
P 1     :-      -       -       -       -       /-      -       -
Q 4     :-      /-      -       /-      .-      -       -       /-
R 3     :-      -       -       -       ..-     -       /-      -
S 7     :-      /-      /-      -       ....-   -       -       /-
T 2     :-      /-      -       -       -       -       /-      -
V 3     :-      /-      -       -       .-      -       -       /-
W 8     :-      /-      //-     -       ..-     /-      /-      /-
X 5     :-      ///-    -       -       -       //-     -       -
Z 3     :-      -       -       -       ..-     -       -       /-

(I placed a dash before tabs and newlines because Deja tends to mangle
those.)

> > Can anybody tell me what I'm doing wrong?

Perhaps it is incorrect to assume there are no typographic errors in
the tally chart.
--
    -William
PGP key: http://www.eskimo.com/~rowdenw/pgp/rowdenw.asc until 2001-02-01


Sent via Deja.com
http://www.deja.com/

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,misc.survivalism
Subject: Re: Why Microsoft's Product Activation Stinks
Date: Thu, 18 Jan 2001 13:22:39 -0800


Gordon Walker wrote:
 
> On Thu, 18 Jan 2001 01:32:36 -0800, David Schwartz
> <[EMAIL PROTECTED]> wrote:
 
> >       That I have a problem with. If, on the other hand, they only refused to
> >permit installation if they knew a key was being abused, that would be a
> >totally different story.
 
> Which begs the question of how they could tell it was being abused.

        Oh, come on. You think they don't already know which serial numbers are
being abused? They're posted in warez newsgroups for chrissakes.

        DS

------------------------------

From: Kingpin <[EMAIL PROTECTED]>
Subject: Initial Cryptanalysis of the RSA SecurID Algorithm
Date: Thu, 18 Jan 2001 17:05:10 -0500

Abstract:

Recently, I.C. Wiener published a reverse engineering effort of the RSA 
SecurID algorithm. I.C. Wiener's posting offered little speculation on the 
security ramifications of the algorithm, so this note is an effort to 
touch upon areas of concern. We have verified that I.C. Wiener's released 
version of the proprietary algorithm is accurate by comparing it with our 
own prior reverse engineering of the same algorithm.

Due to the time sensitivity imposed by the public release of RSA's 
proprietary algorithm, we felt it necessary to release this brief to 
help people better understand and work toward reducing the risks to 
which they might currently be exposed. The risk profile of token devices 
changes when they are implemented in an uncontrolled environment, such 
as the Internet, and the research in this paper aims to educate and to 
help manage those risks. The primary concern is the possibility of 
generating a complete cycle of tokencode outputs given a known secret, 
which is equivalent to the cloning of a token device.

This short paper will examine several statistical irregularities 
discovered in functions used within the SecurID algorithm: the time 
computation and final conversion routines. We explore where and how these 
irregularities can be mitigated by usage and policy. We plan to release a 
more thorough analysis in the near future. This paper does not present 
methods of determining the secret component by viewing previously 
generated or successive tokencodes.


Direct link to full paper:
http://www.atstake.com/research/reports/initial_securid_analysis.pdf

Additional reports:
http://www.atstake.com/research/reports/index.html


-kp


------------------------------

From: Roger Schlafly <[EMAIL PROTECTED]>
Subject: Re: using AES finalists in series?
Date: Thu, 18 Jan 2001 14:08:34 -0800

Terry Ritter wrote:
> >Sure, you can do all those things if you want. But your ideas are
> >directly contrary to the idea of a standard. The point of AES is
> >to have one cipher that is efficient and secure for a lot of
> >people to use and interoperate. It is not to encourage people to
> >roll their own crypto because they don't trust the AES design.
> Wrong.  The point of AES is to have a government-approved cipher so
> banks need not worry about legal liability.

They already had that with triple-DES.

> "Interoperation" is precisely what we design ciphers to not do.
> Ciphers specifically do not interoperate without correct keys.  But if
> we can deliver correct keys, we can also deliver the correct cipher --
> or perhaps just a description of what the correct cipher should be.

Those with no need or use for standardization can ignore the standard.

> AES is of course an attempt to limit cipher development, and
> especially to limit the profits available to continue cipher
> development.

If so, then it has been a huge failure. It sure looks to me that
AES has spurred the development of good ciphers.

> A widely-used standard cipher is the best possible target for real
> cryptanalysis.  By breaking only one cipher, one exposes a fount of
> information, and nobody will know -- or can afford to believe -- that
> has happened.  Absent unarguable proof of strength, embracing any
> single cipher puts real security at risk.

The internet is a target for similar reasons.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
