Cryptography-Digest Digest #190, Volume #11      Thu, 24 Feb 00 02:13:01 EST

Contents:
  Re: DES algorithm (JPeschel)
  Re: DES algorithm ([EMAIL PROTECTED])
  Re: RSA private key representation w/3 primes (Paul Rubin)
  Re: OAP-L3 Encryption Software - Complete Help Files at web site (Terry Ritter)
  Re: NIST, AES at RSA conference (Terry Ritter)
  Re: NSA Linux and the GPL ("Trevor Jackson, III")
  Re: Processor speeds. ("Trevor Jackson, III")
  Re: Implementation of Crypto on DSP (Thierry Moreau)
  Re: EOF in cipher??? ("Trevor Jackson, III")
  Re: EOF in cipher??? ("Scott Fluhrer")
  Re: Processor speeds. ("Clockwork")
  Re: EOF in cipher??? ("Douglas A. Gwyn")
  Re: DES algorithm ("Douglas A. Gwyn")
  Re: Processor speeds. ("Clockwork")
  Re: DES algorithm (Nemo psj)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (JPeschel)
Subject: Re: DES algorithm
Date: 24 Feb 2000 02:30:39 GMT

[EMAIL PROTECTED]  (John Savard) writes:

>the only thing I can do with it in a browser is to
>type it in in the URL box. Only if Acrobat Reader isn't installed (or
>it's on the disk, but the file types aren't registered) do I get the
>chance to save the file.

Set up your browser to warn you of a "security hazard."
That will give you the choice of opening the file or saving
it.

Joe
__________________________________________

Joe Peschel 
D.O.E. SysWorks                                 
http://members.aol.com/jpeschel/index.htm
__________________________________________


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: DES algorithm
Date: Thu, 24 Feb 2000 02:38:45 GMT


> >http://www.ams.org/notices/200003/fea-landau.pdf
>
> I notice that URLs are occasionally provided directly to .pdf
> documents. That will make them come up in the browser, which requires
> both the browser and Acrobat Reader to be running at the same time,
> which may lead to system crashes on older computers with less memory.

   I never get anything except a blank browser page from these even
though I see the Acrobat Reader logo for a few seconds.  What would
cause this?

    -- Jeff Hill






Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: RSA private key representation w/3 primes
Date: 24 Feb 2000 03:16:25 GMT

In article <891sg7$lq9$[EMAIL PROTECTED]>,  <[EMAIL PROTECTED]> wrote:
>Forgive the possibly stupid question,  but I am looking
>for a statement of the decryption operation and key
>representation for RSA with 3 primes that is analogous
>to the following 2-prime procedure as articulated in
>PKCS#1:

Basically phi(n) = (p-1)(q-1)(r-1) and everything else works out
mostly the same way as before.  You do secret key operations using the
residues mod p,q,r and combine them with Garner's algorithm (see
Knuth vol. 2 or any similar book).

I have to ask, though, why do you want to mess around with a scheme
like this, especially if you don't know enough basic math to be able
to easily figure out all the details?

------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Crossposted-To: talk.politics.crypto,alt.privacy
Subject: Re: OAP-L3 Encryption Software - Complete Help Files at web site
Date: Thu, 24 Feb 2000 03:37:37 GMT


On Wed, 23 Feb 2000 23:20:22 GMT, in <[EMAIL PROTECTED]>, in
sci.crypt Tim Tyler <[EMAIL PROTECTED]> wrote:

>In sci.crypt David A. Wagner <[EMAIL PROTECTED]> wrote:
>: In article <[EMAIL PROTECTED]>, Tim Tyler  <[EMAIL PROTECTED]> wrote:
>
>:> Any algorithm that comes with a mathematical proof that it's unbreakable
>:> is unlikely to be analysed by the world's leading codebreakers.
>:> 
>:> Instead it is likely to be dismissed out-of-hand - as the output of
>:> someone with little idea about the nature of the field.
>
>: Nonsense.  Cryptosystems that are provably secure (under some assumptions)
>: are published all the time, and broken some of the time.
>
>An "unbreakable" code??  Give me a break! ;-)

"Provably secure" is the sort of "in joke" which has become common in
academia:  Simply by re-defining ordinary words and phrases one can
achieve apparently breathtaking results.  But in practice, "provably
secure (under some assumptions)" means "no more secure than anything
else."  

Admittedly, there is some motive for continued progress in what can be
proven in ciphers.  But until we get a complete reasonable proof,
using the phrase "provably secure" for a cipher which is *not* in fact
provably secure in practice comes remarkably close to deliberate
academic deception.  

Similar things happen in randomness testing.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: NIST, AES at RSA conference
Date: Thu, 24 Feb 2000 03:41:15 GMT


On Thu, 24 Feb 2000 01:26:09 GMT, in <[EMAIL PROTECTED]>, in
sci.crypt Tim Tyler <[EMAIL PROTECTED]> wrote:

>[complete snip of Ritter's "multiple cypher" schemes]
>
>My 2p worth:
>
>Essentially I thoroughly agree with all this material about multiple
>layers of different types of encypherment being an appropriate route to
>strength, in the absence of any concrete knowledge about the strength of
>our cyphers.
>
>One slight concern with it is that it seems desirable in terms of security
>to keep things as simple as possible while retaining strength.
>
>I fear that early implementations of negotiating multiple cyphers between
>heterogeneous systems are likely to suffer more from security problems due
>to bugs than conventional systems.

*Any* additional logic provides some opportunity for bugs.  So we
design the system to be tested and then actually test it.  We only add
stuff which is required. 

The use of a single unchanging cipher makes us vulnerable to the
single-point fault of that cipher being weak.  We cannot test for that
fault.  And if we continue to use a weak cipher, we continue to expose
our data.  So if we really want the security we claim to want, we have
no option: we *must* change ciphers.  To do that, we *must* coordinate
cipher changes on both ends.  That *will* require new logic.  

Cipher changing presumably would be a full-handshake event, and bugs
which result in a cipher mismatch would be immediately evident.  Bugs
which involve the list of acceptable ciphers should be fairly
straightforward and we *can* test for those.  What remains is the
random selection process, which already had to be visited in system
design anyway (for message keys).  So I don't see a large opportunity
for error; this is a limited, well-controlled extension.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

Date: Wed, 23 Feb 2000 23:02:56 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: NSA Linux and the GPL

Mike Rosing wrote:

> John E. Kuslich wrote:
> >
> > It is rather remarkable to think about "secure" computers in light of the
> > recent revelations about John Deutch (former CIA Director).
> >
> > What is even more remarkable is the fact that this government fat cat will
> > probably get off scot-free while that poor Chinese fellow at the Department
> > of Energy will wind up facing a firing squad!!
> >
> > Why has John Deutch not been arrested and charged with violations of the
> > law regarding care of classified information?????????
>
> Because he knows all the illegal crap a lot of other high level people
> have done, and half the government would be implicated in some kind of
> judicial proceeding if he is.  Lots easier to let him go.

Does "mens rea" (intent) count for nothing?




------------------------------

Date: Wed, 23 Feb 2000 23:11:51 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Processor speeds.

Clockwork wrote:

> "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Mok-Kong Shen wrote:
> > > I am convinced that the Cray type is out even though I am personally
> > > acquainted with persons who are still 'fans' of that for reasons
> > > comprehensible (as well as incomprehensible) to me.
> >
> > Re-read my previous posting.  Supercomputers, like PCs, evolve.
> > Some problems simply do not benefit much from massively parallel
> > processing.  And some problems that are highly parallelizable
> > also require coordination among the processing units that is
> > hard or impossible to achieve effectively with networked PCs.
> > So networked PCs cannot totally replace true supercomputing.
>
> People talk about developing distributed super-computers using standard PC
> chips, but here is an excellent idea: Why not use the newer, 128-bit game
> consoles instead of PC-based systems?  I see several advantages of doing
> such a thing:
>
> 1. Cost Effective: For the price of one complete PC system, you can purchase
> more than 10 console systems.  (The networking hardware is trivial and
> built-in to most of these systems.  Additionally, the console manufacturer
> takes a hit on the cost of the hardware :)
>
> 2. Performance: A single 128-bit, console system outperforms a PC system
> because it is optimized for what we are interested in -- high-performance
> vector calculations.  (FYI, MHz is not the true measure of performance.)
>
> 3. OS Overhead: Your algorithms run happily without any operating system
> overhead.
> 4. Bus Bandwidth: The PC is limited to 200 MHz bus components at 32-bits.
> The consoles blow that away at 128-bits wide (amazing).
>
> 5. Size and Heat: Most new console systems are self-contained and
> liquid-cooled.  You can place upwards of 5 stripped consoles in the same
> space as a rack-mounted PC.

I think part of the answer is the phase change that happened more than 25 years
ago.  I cannot find the attribution, but the thought is that "We do not write
software to tell the machine what to do.  We buy hardware to execute the
software."  The translation is that software compatibility increasingly
outweighs raw hardware performance.

When a game console can handle both the applications and the tools needed to
produce them your comparison might make sense.  But AFAIK consoles are "not as
widely supported" as PCs or even mainframes.



------------------------------

From: Thierry Moreau <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Implementation of Crypto on DSP
Date: Wed, 23 Feb 2000 23:29:26 -0500

[EMAIL PROTECTED] wrote:
> 
> I am surprised that there is not much benefit in hand optimisation. 
> ....Are C compilers that good..?
> 

Don't expect too much out of compiler optimization for crypto
algorithms.

Here is a representative example: for RSA decryption or signature, the
optimization strategy starts from (a) the Montgomery multiplication
algorithm and (b) the Chinese remainder theorem (otherwise, you position
your product out of the market performance-wise).  The Montgomery
algorithm (a) benefits nicely from a 16x16->40-bit MAC
(multiply-accumulate) operation found on DSPs, but such a construct is
not part of "portable C".  For the Chinese remainder theorem (b), some
newer DSPs can do two such MACs (sustained) every instruction cycle,
which is great for the CRT implementation.  The C optimizer *might* not
know beforehand about the Chinese remainder theorem, so you'll never
get the sustained MIPS rate without explicitly coding it for the
specific DSP instruction set.

- Thierry

------------------------------

Date: Thu, 24 Feb 2000 00:09:40 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: EOF in cipher???

Bryan Olson wrote:

> Trevor Jackson, III wrote:
> [...]
> > But only true in so far as the compiler is conformant with
> > modern C standards. There was a time not that long ago
> > that compilers did _not_ promise 8-bit bytes.
>
> Thankfully, those days have passed and ANSI/ISO C has taken
> over the C world.  (Actually the safe assumption is _at
> least_ 8-bits in a char.)

For new code it's a reasonable assumption.  But it is not the safest for
any kind of code.  Making no (unnecessary) assumptions is safer than
making _any_ assumption, whether based upon a standard or the assurance
of the end user.

The continuing evolution of C in the direction of interoperability as a
successor to portability is probably a good thing, but I do not have
confidence that the process converges.  One of the critical observations
that came out of Bell Labs along with C & Unix is that there is a point
past which further improvement of a program is impossible.  The
enhancements have to stop somewhere.  I suspect the same principle
applies to languages.  Not that improvement will actually come to a
halt, but that the rate of change should decrease.  This is what I meant
by convergence.

>
>
> > The critical issue is this:  why do you care?  It is
> > perfectly possible to write programs _without_ the
> > assumption of 8-bit bytes.  Since the assumption gains
> > you little there is little value in embedding it in
> > code, and significant risk in doing so.
>
> Assumptions based on the minimum magnitudes for values of
> the constants in <limits.h> are safe.  Programmers can gain
> simplicity by relying on them, and portability by not
> relying on larger magnitudes.

It's true that there are gains in portability by not assuming larger
magnitudes, but I suspect the simplicity you allude to is deceptive.
Programmers often complain about not knowing how big an int is.
Certainly, knowing an int is 32 bits allows you to embed magic constants,
often hex, in your code, but this isn't the simplicity that leads to
elegance; it is merely simplicity that comes cheap.  I find that in almost
every instance it is not hard, and often in fact simpler, to reason
about the problem _without_ examining the details.

Consider how much can be done with sizeof(int) * CHAR_BIT.  You can't use
it as a predicate for the preprocessor, but anywhere else it is a
suitable (IMHO superior) replacement for 0x20.

>
>
> > One of the related topics was the relationship between
> > sizeof(int) and sizeof(char), with the attendant
> > assumption that chars are always smaller than
> > ints.  Sophisticated users might assume <= rather
> > than <.  But both assumptions are unnecessary.
>
> As noted in another strand of the thread, the int return
> value of getc() needs to be able to represent every
> character value, plus the distinct EOF value.

And for some compilers that is impossible because sizeof(int) ==
sizeof(char).  In those circumstances declaring the var as an int does
not hurt you, and (c != EOF) is safe because -1 promotes to UINT_MAX and
CHAR_MAX is logically possible, but not practically possible.  Unicode
was designed for exactly this situation, with 0xffff reserved.  They
also reserved 0xfffe to handle endian issues so that programmers don't
need to care what flavor of machine they have.  These are aspects of
software quality that are orthogonal to standards compliance.


------------------------------

From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: EOF in cipher???
Date: Wed, 23 Feb 2000 21:00:58 -0800


Douglas A. Gwyn <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Runu Knips wrote:
> > How long should this discussion continue?  CHAR_BIT is AT LEAST 7,
> > so UCHAR_MAX is AT LEAST 127 != 255. That's what my K&R says.
>
> CHAR_BIT is said to be a minimum of 8 in K&R 2nd Edition
> (I just checked the 12th printing, and since it's not in
> the Errata on Dennis's Web site, it must be unchanged from
> previous printings -- I can check the 1st printing when I
> get home).  This agrees with the C standard, and that is
> not just a coincidence.  I wonder what "K&R" you refer to.

Indeed.  The ANSI standard, which was published back in 1989, and to which
virtually all modern C compilers currently claim conformance, clearly
states that UCHAR_MAX must be at least 255.  What the first edition of K&R
says makes little difference; the language has changed since it was
published.

(I posted this mostly so Doug doesn't feel quite so alone in this fight)

--
poncho




------------------------------

Reply-To: "Clockwork" <[EMAIL PROTECTED]>
From: "Clockwork" <[EMAIL PROTECTED]>
Subject: Re: Processor speeds.
Date: Thu, 24 Feb 2000 05:30:32 GMT


"Mok-Kong Shen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Mike Rosing wrote:
> >
>
> > machines and building a supercomputer from those.  At only $200 the 128
> > bit machines are pretty cheap for what they do, and I totally agree
>
> Most PCs today have only 32 bit registers. I guess that this jump to
> 128 bits probably can more than compensate for the slower processor
> speed (if any), resulting in substantial benefits especially in doing
> multi-precision arithmetic operations. So this appears to be
> indeed an interesting project.
>
> M. K. Shen

Amen :)



------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: EOF in cipher???
Date: Thu, 24 Feb 2000 06:13:07 GMT

Bryan Olson wrote:
> Assumptions based on the minimum magnitudes for values of
> the constants in <limits.h> are safe.  Programmers can gain
> simplicity by relying on them, and portability by not
> relying on larger magnitudes.

Further assistance can be found in the new standard headers
<stdint.h> and <inttypes.h>, which provide a way to specify
the needed widths for integer types and to test which widths
the implementation provides.  If you don't have these headers
in your C implementation, you can pick up mine from the
Lysator C site and use them until such time as your compiler
catches up.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: DES algorithm
Date: Thu, 24 Feb 2000 06:15:21 GMT

[EMAIL PROTECTED] wrote:
>    I never get anything except a blank browser page from these even
> though I see the Acrobat Reader logo for a few seconds.  What would
> cause this?

Lots of possibilities, but perhaps you just didn't wait long enough.
PDF files often take a long time to download, especially via dial-up.

------------------------------

Reply-To: "Clockwork" <[EMAIL PROTECTED]>
From: "Clockwork" <[EMAIL PROTECTED]>
Subject: Re: Processor speeds.
Date: Thu, 24 Feb 2000 06:17:57 GMT


"Trevor Jackson, III" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...

> When a game console can handle both the applications and the tools needed
> to produce them your comparison might make sense.  But AFAIK consoles are
> "not as widely supported" as PCs or even mainframes.

I have tons of experience on these machines (PCs, Consoles, and other
proprietary hardware).  I develop games and dabble in crypto.  Please look
at my previous posts for more discussion, but to answer your concerns
directly:

1. Games involve more complex systems than the most complex crypto systems.
The applications "can" and "will" be developed for these systems -- with
ease.  (Most systems use a standard RTOS and/or WinCE(TM).)

2. Nintendo, Sony, and Sega not
supported??????????????!!!!!!!!!!!!!!!!!!!!!!

People really need to research this before posting.  These systems are
incredible and can perform amazing feats of number-crunching performance
(at 128 bits).  Go pick up a new console system and then ask yourself, "How
do you simulate a complete 3D environment with lighting, physics, collision
detection, artificial intelligence, geometry transformations, Dolby(TM)
audio, and precision input?"  You buy a $2000.00 (US) PC or you buy a
$200.00 (US) console.  Case closed.

At this time, I can guarantee that a small cluster of consoles could become
a supercomputer of exceptional potential -- for a fraction of the cost and
space of any distributed system.  Especially if the system focuses on one
task instead of creating beautiful 3D graphics.

I predict you can factor numbers in one 128-bit register (US export
standards), simulate weather systems, simulate a nuclear explosion, and
render movie sequences from Toy Story(TM) or Jurassic Park(TM) on a
shoestring budget.

BTW, look around the Internet and read what Sony is planning for their next
system.  They ARE going to move the next PlayStation(TM) to a workstation --
it is just a matter of time.

Clock




------------------------------

From: [EMAIL PROTECTED] (Nemo psj)
Subject: Re: DES algorithm
Date: 24 Feb 2000 06:49:07 GMT

I just grabbed that book from my book store and started reading it.  I
haven't gotten past the first couple of pages -- is it any good?
>There is also a review by Jim Reeds of Singh's "The Code Book"
>in the same issue.



------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
