Cryptography-Digest Digest #907, Volume #12      Fri, 13 Oct 00 01:13:00 EDT

Contents:
  Re: CRC vs. HASH functions (Mack)
  Re: BlowFry... ("Will Newland")
  Crypto technology recommendations? ("Will Newland")
  Re: FTL Computation ([EMAIL PROTECTED])
  More on the SDMI challenge (Mack)
  Re: NIST Random Generator Test Suite Results (Jason Stratos Papadopoulos)
  Re: FTL Computation ("Paul Lutus")
  Re: A new paper claiming P=NP (Carsten Friedrich)
  Re: Code Book Cipher Challenge Cracked ("Joseph Ashwood")
  Re: Code Book Cipher Challenge Cracked (Paul Rubin)
  Re: A new paper claiming P=NP (Jeff Erickson)
  Re: Dense feedback polynomials for LFSR (Joaquim Southby)
  Re: Public Key Algorithms and Analysis ("John A. Malley")
  Re: Dense feedback polynomials for LFSR (Joaquim Southby)
  Re: Comments on the AES winner ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Mack)
Date: 13 Oct 2000 01:34:13 GMT
Subject: Re: CRC vs. HASH functions

>Mack wrote:
>
>> CRC's are generally best for:
>>
>> 1) compressing entropy free from outside influence.
>
>That one is debatable.  Noise (the part we want in this
>application) that appeared largely in multiples of the
>CRC polynomial would be very surprising, but not
>inconceivable.
>

No more inconceivable than that noise would produce a collision
in a secure hash function.  Various CRC polynomials have
varying degrees of probability of "noise" interacting badly.
For primitive polynomials that are dense the probability is
low.  Most MD-style hash functions use some sort of CRC-type
polynomial on the front end, so this is an issue there as well.

>> 2) Hashing data for table lookups or other non-security oriented
>> identification.
>> 3) Random single bit error detection and burst error detection.
>
>I think any time I need a digest larger than 64-bits, I
>would always use a cryptographic hash.  I cannot envision a
>case where I would believe I hit a 2^-64 shot (or even a
>2^-32) before I would suspect foul play.
>

Sending 2^32 * 128 bits of data is no longer just hypothetical; 15 hours
on a 10 Mbps ethernet will deliver that amount of data.  With 32-bit
CRCs some errors may go undetected.  128-bit CRCs are easier to implement
in both software and hardware than secure hash functions.  Another thread
suggested that a hash may be made as fast as a CRC, but only at the expense
of silicon space.
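
For the general reader, here is a minimal bit-at-a-time CRC-32 in C (a
sketch using the reflected IEEE 802.3 polynomial), included only to show
how little logic a CRC needs compared with an MD-style hash; a hardware
version is essentially a shift register plus a few XOR gates:

#include <stdio.h>
#include <stdint.h>

/* Bit-at-a-time CRC-32, reflected polynomial 0xEDB88320. */
uint32_t crc32(const unsigned char *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)   /* one shift + conditional XOR per bit */
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

int main(void)
{
    const unsigned char msg[] = "123456789";
    printf("%08lX\n", (unsigned long)crc32(msg, 9));  /* prints CBF43926 */
    return 0;
}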

>
>--Bryan
>--
>email: bolson at certicom dot com
>


Mack
Remove njunk123 from name to reply by e-mail

------------------------------

From: "Will Newland" <[EMAIL PROTECTED]>
Subject: Re: BlowFry...
Date: Fri, 13 Oct 2000 01:49:08 GMT

"Runu Knips" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Rob Marston wrote:
> > To this end I have produced an 8bit version of BlowFish
> > which for fun I have christened BlowFry.


I told my wife I heard there's a "millennium edition" of Blowfish called
'BlowME'.  She wasn't amused.  Oh, well.





------------------------------

From: "Will Newland" <[EMAIL PROTECTED]>
Subject: Crypto technology recommendations?
Date: Fri, 13 Oct 2000 02:06:31 GMT

Hi,
I'm doing research for my employer on what will be the best fit,
technologically, for our crypto implementation.  I expect we'll be doing:

1. Bulk encryption of text files,
2. PKI authentication, and
3. Secure sessions (SSL?)

We use NT and Unix servers on a fairly large network.  I'd appreciate some
recommendations from the group on what you all feel would be the best way
to implement crypto (Java, DLLs, COM, etc.), whether to build or buy crypto
code, and what things are considered crucial in an effort like this.
Thanks in advance for your input!

---Will
"Remember: it's only kinky the first time..."



------------------------------

Date: Thu, 12 Oct 2000 22:16:52 -0700
From: [EMAIL PROTECTED]
Crossposted-To: sci.astro,sci.physics.relativity,sci.math
Subject: Re: FTL Computation

Paul Lutus wrote:
> 
> <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> 
> > > > If the projection of a spot of light can virtually move FTL
> > > > then so too can the projected images of a slide rule's slides.
> > > > The computation 'in effect', takes place FTL.
> > >
> > > Not "in effect," not at all. The projection of the light does not move
> at
> > > FTL, not virtually, not really, not at all.
> >
> > It does move FTL.
> 
> Now we have to imitate Bill Clinton and say exactly what we mean by "it."
> 
> I argued that it was the hypothetical path of the light, not the projection
> of photons along that path, that changed superluminally. The light itself
> never exceeds c as it travels along the new path. The changed angle of the
> light's path, if extrapolated out into space, has moved superluminally, but
> the actual light can only catch up with this change at c, no more.
> 
> Like water leaving a hose. You can rotate the hose spigot faster than the
> water's velocity, but the water doesn't instantaneously change direction all
> along its path.
> 
> The remainder of your argument makes sense, but it is indistinguishable from
> an example that uses two light beams taking off in different directions. I
> believe this is not what the OP had in mind, but as you point out, it also
> does not violate normal rules of causality.
> 

I have no problems with that.  I just didn't see that in your
statements at the beginning of your posting.  The light spot
IS moving faster than c, but that doesn't mean that any physical
object is doing that since the light spot at different places
is being made by different photons.

John Anderson

------------------------------

From: [EMAIL PROTECTED] (Mack)
Date: 13 Oct 2000 02:29:54 GMT
Subject: More on the SDMI challenge

Without sufficient time to completely analyze the SDMI technologies, I have
not pursued them.  However, it is interesting that it was so easy to identify
the 'watermark' in the file I examined.  It certainly seems feasible to
do the same with the other technologies.

On another note,  having thought some about the problem it would
seem possible to use a field power operation (modular exponentiation)
to hide data in a noise field in such a way that a public and private
key would exist.  This would allow the public key to read the data
but not to replicate it.  Assuming some part of the 'watermark' could
not be removed (unlikely), this would prevent copying.
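
For concreteness, a minimal sketch of the field power operation itself,
using textbook RSA numbers and a toy 32-bit modulus (a real scheme would
need a bignum library, and the embedding into a noise field is not shown):

#include <stdio.h>
#include <stdint.h>

/* Square-and-multiply modular exponentiation -- the "field power
   operation" mentioned above.  Toy parameters for illustration only. */
static uint32_t pow_mod(uint32_t base, uint32_t exp, uint32_t mod)
{
    uint64_t result = 1, b = base % mod;
    while (exp) {
        if (exp & 1)
            result = (result * b) % mod;
        b = (b * b) % mod;
        exp >>= 1;
    }
    return (uint32_t)result;
}

int main(void)
{
    /* Textbook RSA: n = 61*53, e*d == 1 (mod phi(n)).  Only the holder
       of the private exponent d can produce s, but anyone holding the
       public exponent e can read it back -- read but not replicate. */
    uint32_t n = 3233, e = 17, d = 2753, m = 65;
    uint32_t s = pow_mod(m, d, n);                 /* "embed" with d */
    printf("recovered: %u\n", pow_mod(s, e, n));   /* prints 65      */
    return 0;
}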

It should be possible to phase shift different parts of the 'recording' to
defeat the watermarking.  In the challenge I believe this resulted in a
'bad quality' failure.  In real life I don't think this would be noticeable;
I certainly didn't notice it in the sample I had.
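
As an aside for the curious, one cheap way to nudge phase without changing
amplitude is a first-order all-pass filter.  This toy C sketch (coefficient
and I/O handling invented for illustration, and not what was actually tried
in the challenge) prints the filter's impulse response:

#include <stdio.h>

/* First-order all-pass: unity gain at every frequency, but a
   frequency-dependent phase shift -- the kind of gentle mangling
   that might disturb a phase-sensitive watermark inaudibly.
   Difference equation: y[n] = -a*x[n] + x[n-1] + a*y[n-1]       */
int main(void)
{
    double a = 0.4;              /* sets where the phase shift bites */
    double x1 = 0.0, y1 = 0.0;   /* one-sample delay state */
    double in[8] = { 1, 0, 0, 0, 0, 0, 0, 0 };   /* unit impulse */

    for (int n = 0; n < 8; n++) {
        double y = -a * in[n] + x1 + a * y1;
        x1 = in[n];
        y1 = y;
        printf("%f\n", y);
    }
    return 0;
}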

The SDMI challenge was definitely flawed.

The quality of the samples that were unmarked was horrible.
The static definitely masked the characteristics that needed to
be analyzed.

The oracle did not return sufficient information. In real life the watermarking
would be separate from the quality issue.

Time for a successful attack was not sufficient.  It took me about
60 tries to successfully download 2 files.  The transfer rate was
horrible (about 500 bytes/sec).  That translates to about 60 hours
just to download the complete files.

Insufficient information was available about the algorithms used.
In real life the physical parts containing the algorithms will be available.
Anyone with access to an STM will be able to look at the circuitry.

On a final note ... why would everyone replace their current CD or DVD
players with new ones?  DVD hasn't been the success the music/movie
industry was hoping for, even without the DeCSS incident.
Mack
Remove njunk123 from name to reply by e-mail

------------------------------

From: Jason Stratos Papadopoulos <[EMAIL PROTECTED]>
Subject: Re: NIST Random Generator Test Suite Results
Date: 13 Oct 2000 02:56:31 GMT

Cristiano <[EMAIL PROTECTED]> wrote:
:>
:> What language and algorithms do you use to compute e and pi?

: I use this C routine (**warning** UBYTE is unsigned long):

There's code all over for this sort of thing. You can generate millions of
pi digits in only a few minutes with the latest PC programs. For a start,
try 

www.glue.umd.edu/~jasonp/pipage.html

Two of these programs have computed a billion or more digits of pi.
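
For readers who just want something small and self-contained rather than
the fast binary-splitting programs on that page, here is a sketch of the
classic Rabinowitz-Wagon spigot in C.  It is far too slow for millions of
digits, but it prints the first hundred using nothing but int arithmetic
(the last digit or two at the boundary may be unreliable):

#include <stdio.h>

#define N    100                 /* decimal digits to print */
#define LEN  (N * 10 / 3 + 1)    /* mixed-radix digits needed */

int main(void)
{
    static int a[LEN];
    int nines = 0, predigit = 0;

    for (int i = 0; i < LEN; i++)
        a[i] = 2;                /* pi = 2.2222... in radix (1/3, 2/5, 3/7, ...) */

    for (int j = 0; j < N; j++) {
        int carry = 0;
        for (int i = LEN - 1; i > 0; i--) {   /* multiply the tail by 10, */
            int x = 10 * a[i] + carry;        /* carrying in mixed radix  */
            a[i]  = x % (2 * i + 1);
            carry = (x / (2 * i + 1)) * i;
        }
        int x = 10 * a[0] + carry;
        a[0] = x % 10;
        int q = x / 10;                       /* next candidate digit */
        if (q == 9) {
            nines++;                          /* hold 9s: a carry may ripple */
        } else if (q == 10) {
            printf("%d", predigit + 1);       /* ripple: 9s become 0s */
            for (; nines > 0; nines--) putchar('0');
            predigit = 0;
        } else {
            if (j > 0) printf("%d", predigit);  /* first pass just primes it */
            for (; nines > 0; nines--) putchar('9');
            predigit = q;
        }
    }
    printf("%d\n", predigit);    /* digits run together, no decimal point */
    return 0;
}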

jasonp

------------------------------

From: "Paul Lutus" <[EMAIL PROTECTED]>
Crossposted-To: sci.astro,sci.physics.relativity,sci.math
Subject: Re: FTL Computation
Date: Fri, 13 Oct 2000 03:13:45 GMT

<[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...

> I have no problems with that.  I just didn't see that in your
> statements at the beginning of your posting.  The light spot
> IS moving faster than c, but that doesn't mean that any physical
> object is doing that since the light spot at different places
> is being made by different photons.

With emphasis on "The light spot IS moving faster than c," it is important
to clarify that no photons are individually tracking this geometric
relationship. I know you agree, I just want to emphasize this, because some
reading this may think that photons may exceed c so long as information
doesn't.

The lighthouse effect is sort of a curved wavefront (imagine a spinning
firework throwing sparks out in a spiral), each part of which can arrive at
a target at whatever time is dictated by the overall geometry. But that
wavefront is composed of individual photons traveling at c.
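
For concreteness, the arithmetic behind the superluminal spot (a
back-of-the-envelope addition, not from the original post):

% Spot swept by a source rotating at angular velocity \omega,
% projected at distance R:
\[
  v_{\mathrm{spot}} = \omega R > c
  \quad\Longleftrightarrow\quad
  R > \frac{c}{\omega}
\]
% At one revolution per second (\omega = 2\pi\ \mathrm{rad/s}) the spot
% is superluminal beyond R \approx 3\times 10^{8} / 2\pi
% \approx 4.8\times 10^{7}\ \mathrm{m}, yet every photon still travels
% radially at exactly c.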

This is for the general reader, not you -- I know you understand this.

--

Paul Lutus
www.arachnoid.com





------------------------------

From: [EMAIL PROTECTED] (Carsten Friedrich)
Crossposted-To: comp.theory,sci.math,sci.op-research
Subject: Re: A new paper claiming P=NP
Date: Fri, 13 Oct 2000 03:16:16 GMT

>>
>>For example, think about your favorite exp-time hard game, such as Chess or 

Chess has a finite set of states and is therefore solvable in constant
time and space.

>>Go. 

Same for Go.


Carsten


------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Code Book Cipher Challenge Cracked
Date: Thu, 12 Oct 2000 17:43:15 -0700

[pondering what could have been accomplished if the teams had worked
together]
Considering that someone has successfully broken the monolith model for the
matrix portion, we should now be able to scale roughly along the asymptotic
curve for a while; by that estimate 1024-bit is only about 35% harder
computationally, but I suspect that the RAM requirements would still be for
the most part out of reach (I don't know specifics, so I can't state
absolutes).  Maybe the team members themselves will grace us with statements
of how much they think their advances will push the front line forward.
                    Joe



------------------------------

From: Paul Rubin <[EMAIL PROTECTED]>
Subject: Re: Code Book Cipher Challenge Cracked
Date: 12 Oct 2000 20:27:51 -0700

Jim Gillogly <[EMAIL PROTECTED]> writes:
> I estimate that we (John Palagyi, Alec Muffett, and me) were
> about 2.5 months away from success -- we got a late start, but
> were sieving up a storm.

How were you going to do the linear algebra?

------------------------------

Crossposted-To: comp.theory,sci.math,sci.op-research
Subject: Re: A new paper claiming P=NP
From: [EMAIL PROTECTED] (Jeff Erickson)
Date: Fri, 13 Oct 2000 03:39:23 GMT

Stas Busygin <[EMAIL PROTECTED]> writes:
| According to the publishing rules of my *REPOSITORY* any stuff in it
| can be renewed at any time, so all found faults are to be corrected
| by authors.  That is why I think my publishing policy makes sense --
| anyone who wishes can become familiar with proposed stuff in its raw
| form and help authors to correct/improve it.

Electronic preprint repositories have a definite place, but based on
my swift glance at the paper and the posted reactions of other
readers, I think it was not polished enough to be a preprint.

Perhaps this was merely an English language problem.  If that's the
case, the author should publish the paper in Russian.  There must be a few
Russian mathematicians and computer scientists who can validate his results!

Ultimately, it is the author's responsibility to write clean, clear,
correct papers.  It is not the research community's responsibility to
decipher unreadable drafts or to fix the author's bugs.

| do you think it's ok when a paper submitted in 1995 was published
| only in the beginning of 1998?

Of course not.  But there are plenty of other ways to disseminate
results, the most popular being conferences like STOC and FOCS, which
have turnaround times of only a few months.  Trust me, if his P<NP
proof is correct, everyone in the world will hear about it YEARS
before it appears in a print journal.

| Personally I think that an algorithmic proposal has a great chance
| to be outdated for such term if its field develops appropriately...

Huh?  Outdated???  It took two years for Wiles' and Taylor's proof to
be accepted.  What's the rush?

-- 
Jeff Erickson                                         [EMAIL PROTECTED]
Computer Science Department                  http://www.uiuc.edu/~jeffe
University of Illinois, Urbana-Champaign       Non sum, ergo non cogito.

------------------------------

From: Joaquim Southby <[EMAIL PROTECTED]>
Subject: Re: Dense feedback polynomials for LFSR
Date: 13 Oct 2000 03:38:59 GMT

In article <8s4eil$qkc$[EMAIL PROTECTED]> David A Molnar,
[EMAIL PROTECTED] writes:
>even without that, this seems to me to be exactly the kind of expertise an
>organization with time, energy, and an institutional memory could
>build. it's also not the kind of thing which attracts much attention in
>public academic cryptography. so your adversary could be very very good at
>this and you wouldn't know. 
>
That's the second of the three questions I always ask customers when they
talk about protecting something:

1) What are you trying to protect?
2) What is it worth?
3) How long does it have to be protected?

Your point about an organization dedicating a block of resources to
cracking my protection is correct.  The thing to consider, though, is how
much they are spending vs. how much my info is worth.  If the cost of the
loss of my info plus the cost of protecting it is less than the cost of
cracking it, I come out ahead.  You shouldn't spend a lot of resources on
protecting Aunt Granny's secret chocolate chip cookie recipe, especially
if she copied it off the back of a bag of chocolate drops.  The resources
spent should be just enough to convince the attacker that the information
(or its equivalent in terms of ROI) can be gotten at less cost somewhere
else.  If he decides that these other cookies are just as tasty as Aunt
Granny's and the recipe is more available, the protection is successful.

------------------------------

From: "John A. Malley" <[EMAIL PROTECTED]>
Subject: Re: Public Key Algorithms and Analysis
Date: Thu, 12 Oct 2000 21:02:18 -0700

Joseph Ashwood wrote:
> 
> I've gotten to a point where I need at least some pointers to some papers.
> I'm examining alternative public key encryption and/or signature algorithms
> (alternative to RSA, DH, ECC, DSA, ECDSA). I am aware of a small handful
> (LUC, XTR, NTRU, RPK), but finding analysis of these is proving daunting,
> and I'd like to have more options. To expediate my search I figured I'd ask
> around a bit, so does anyone have any reference on the ones I mentioned or
> any others? (I've already found http://www.luc.co.nz, http://www.ntru.com).
> I'd also of course appreciate avoiding sites I'd have to pay for (like the
> LNCS location I found for XTR).
>                         Joe

Here's a resource - Marc Joye's thesis, "Security Analysis of RSA-type
Cryptosystems", written at the Université catholique de Louvain.  He covers
RSA, LUC, and elliptic curve public key systems.  His advisor was Prof.
Jean-Jacques Quisquater.
 
His thesis may be found at

http://www.dice.ucl.ac.be/crypto/phds/thesis_joye.ps.gz

It's excellent background information. 

His personal web site is 

http://www.geocities.com/marcjoye/

He will provide copies of his papers (if not listed) per request by
email.


John A. Malley
[EMAIL PROTECTED]

------------------------------

From: Joaquim Southby <[EMAIL PROTECTED]>
Subject: Re: Dense feedback polynomials for LFSR
Date: 13 Oct 2000 04:18:35 GMT

In article <8s57fc$3he$[EMAIL PROTECTED]> zapzing, [EMAIL PROTECTED]
writes:
>Well you would still have the key bits of security
>that index the table. But I was thinking more in
>terms of having the tap sequence be dense at about
>50% ones and 50% zeros. Then the tap sequence would
>actually be the key. The key space would not be "flat"
>since not all tap sequences represent primitive
>polynomials, but then you would not have to carry
>around a look up table, either. On the other hand
>you could not specify any key, you would have to
>use a key generation program.
>
You would still need to provide an init vector for the LFSR.  The number
of primitive polynomials for an n-bit LFSR is roughly 2^n/2n.  The number
of possible init vectors is 2^n - 1, so the key space using the init
vector as the key is always going to be about 2n times larger than the
key space where the tap sequence is the key.

There is also no shortcut to generating a primitive polynomial.  You
choose a candidate polynomial and then test it to see if it is primitive.
These tests can be very time-consuming, especially with higher degrees
of polynomials.
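
A brute-force illustration of both points, as a C sketch (assuming a
Fibonacci-style register; a tap mask corresponds to a primitive polynomial
exactly when the state sequence from a nonzero seed has full period
2^n - 1):

#include <stdio.h>

#define N       8                   /* register length, small on purpose */
#define PERIOD  ((1u << N) - 1)     /* full period for an N-bit LFSR */

/* One Fibonacci-LFSR step: shift left, feed back parity of tapped bits. */
static unsigned step(unsigned state, unsigned taps)
{
    unsigned p = state & taps;
    p ^= p >> 4;  p ^= p >> 2;  p ^= p >> 1;   /* parity fold, N <= 8 */
    return ((state << 1) | (p & 1u)) & PERIOD;
}

/* Full period from seed 1 <=> the tap mask is primitive.  Each test
   walks up to 2^N - 1 states -- which is exactly why this gets
   painful at higher degrees. */
static int is_full_period(unsigned taps)
{
    unsigned s = 1;
    for (unsigned i = 1; i <= PERIOD; i++) {
        s = step(s, taps);
        if (s == 1)
            return i == PERIOD;
    }
    return 0;   /* never returned to the seed: not full period */
}

int main(void)
{
    unsigned count = 0;
    for (unsigned taps = 1; taps <= PERIOD; taps++)
        if (is_full_period(taps))
            count++;
    printf("degree %d: %u primitive tap masks; 2^n/2n estimate: %u\n",
           N, count, (1u << N) / (2 * N));   /* both come out to 16 */
    return 0;
}

For n = 8 the true count, phi(255)/8 = 16, matches the 2^n/2n estimate
exactly, and both are dwarfed by the 2^n - 1 = 255 possible init vectors.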

OTOH, there is nothing that says you have to use a primitive polynomial. 
Some of the state spaces of non-primitive polynomials are as large as 99%
of the state space of primitive polynomials for the same size register. 
You just have to find the right state space.  I would also steer clear of
any tap sequence that doesn't include the output bit or that doesn't
contain an even number of taps, since those can lead to degenerate state
spaces.  (By "even number of taps", I mean the physical representation of
the LFSR, which does not include the 2^0 polynomial term.)

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Comments on the AES winner
Date: Fri, 13 Oct 2000 04:33:46 GMT

In article <[EMAIL PROTECTED]>,
  David Schwartz <[EMAIL PROTECTED]> wrote:
>
> [EMAIL PROTECTED] wrote:
>
>>What should preoccupy us the most is the possibility of somebody in
>>the future discovering in the open a catastrophic break of the AES.
>>That is why I suggest that standard protocols should be in place that
>>would allow the AES algorithm to be substituted by another one in an
>>efficient manner if need should arise.
>
>These protocols and substitution mechanisms then become a link that is
>likely to be weaker than the AES. You can't make a system more secure
>by adding another link to the chain.

Let's see.  Suppose we add one more field to all emails which describes
the cipher used for encrypting it: 0 means no cipher, 1 means AES, 2 to
5 mean the other four AES finalists.  Now it is a simple matter for
software companies that produce email clients, browsers, etc., to
include all 5 ciphers in their products (the code is there and it is
free).  Everybody starts by using 1 (the AES) as the default.  If
somebody discovered a catastrophic attack against it, then let us
hope that another cipher, say number 4, remains strong against this new
attack.  It would then be a simple matter for users to change a
parameter in their email client or browser to specify 4 as the
default algorithm.
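
A sketch of how cheap the mechanism is in C (the names, the ID-to-finalist
assignment beyond "1 = AES", and the stub are all invented for
illustration; a shared interface is plausible because NIST required every
finalist to support a 128-bit block with 128/192/256-bit keys):

#include <stdio.h>

/* Hypothetical one-byte header field: which cipher encrypted the body. */
enum cipher_id {
    CIPHER_NONE     = 0,
    CIPHER_RIJNDAEL = 1,   /* the AES, default */
    CIPHER_MARS     = 2,
    CIPHER_RC6      = 3,
    CIPHER_SERPENT  = 4,
    CIPHER_TWOFISH  = 5
};

typedef void (*encrypt_fn)(const unsigned char key[16],
                           const unsigned char in[16],
                           unsigned char out[16]);

/* Placeholder standing in for a real finalist implementation;
   XOR is NOT a cipher, it just keeps the sketch self-contained. */
static void stub_encrypt(const unsigned char key[16],
                         const unsigned char in[16],
                         unsigned char out[16])
{
    for (int i = 0; i < 16; i++)
        out[i] = in[i] ^ key[i];
}

static const struct { const char *name; encrypt_fn fn; } ciphers[] = {
    { "none",     NULL         },
    { "Rijndael", stub_encrypt },
    { "MARS",     stub_encrypt },
    { "RC6",      stub_encrypt },
    { "Serpent",  stub_encrypt },
    { "Twofish",  stub_encrypt },
};

int main(void)
{
    int id = CIPHER_SERPENT;   /* the one parameter a user would flip */
    if (id <= 0 || id >= (int)(sizeof ciphers / sizeof ciphers[0])) {
        puts("unknown id -- refuse, or fall back to plaintext by policy");
        return 0;
    }
    printf("encrypting with cipher %d (%s)\n", id, ciphers[id].name);
    return 0;
}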

This is just a simple example, and I fail to see how this would
decrease security in any way. On the contrary it would offer us some
insurance against unforeseen advances in cryptanalysis. Luckily all AES
candidates are interchangeable as a consequence of NIST's
specifications. (In fact, it would be a good idea if future designs of
new ciphers would follow these specifications.)




Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
