Cryptography-Digest Digest #181, Volume #9        Wed, 3 Mar 99 21:13:05 EST

Contents:
  Where can I get a Public Key system? (Frank LaRosa)
  Re: Unicity of English, was Re: New high-security 56-bit DES: Less-DES (Bryan Olson)
  Re: idea for random numbers (Medical Electronics Lab)
  Re: need simple symmetric algorithm (John Bailey)
  Re: idea for random numbers (Jim Dunnett)
  Re: Common meaning misconception in IT, was Re: Unicity of English, was  (Bryan Olson)
  HELP ME (MC1148 User)
  Re: Intel/Microsoft ID ([EMAIL PROTECTED])
  Re: Intel/Microsoft ID ([EMAIL PROTECTED])
  Re: Define Randomness ("Douglas A. Gwyn")
  Re: Testing Algorithms [moving off-topic] ("Douglas A. Gwyn")
  Re: Intel/Microsoft ID (J. Mark Brooks)
  Elliptic Curve Cryptography (nobody)
  Re: Unicity of English, was Re: New high-security 56-bit DES: Less-DES ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: Frank LaRosa <[EMAIL PROTECTED]>
Subject: Where can I get a Public Key system?
Date: Wed, 03 Mar 1999 13:18:12 -0600

Hello,

I need a reasonably secure public-key cryptography algorithm that I can
use in a commercial product. I'm only encrypting a small amount of data
(about 20 bytes). What are my options? Do I have to buy a license from
RSA, or are there alternatives available?


------------------------------

From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: Unicity of English, was Re: New high-security 56-bit DES: Less-DES
Date: Wed, 03 Mar 1999 11:37:13 -0800


Apologies if this isn't threaded in the correct place.  My news server
is dropping articles.

[EMAIL PROTECTED] wrote:
>   [EMAIL PROTECTED] wrote:
> > [EMAIL PROTECTED] wrote:
> > > No. Given ciphertext, the two equivocations are still independent -- as they
> > > measure different things and they also depend on the type of cipher used,
> > > number of keys, plaintext entropy, etc.  And, for a given message length,
> > > message equivocation can be zero much before key equivocation is zero -- see
> > > Fig. 9 in Shannon's paper for example.
> >
> > First, none of those show independence.
> 
> You snipped it... 

I snipped one sentence of yours before the quoted paragraph (the
sentence was quoted from a previous post and did not argue independence).
I then quoted the paragraph above in its entirety.  The next sentence was
your conclusion:

| | > So, both conditional entropies (ie, equivocations) ARE independent equations
| | > and there can be no doubts about it.

Your conclusion does not follow and it is _not_ because of anything
I snipped.  I try to be scrupulously fair in my quoting, and if you
think I'm not you can always re-include snipped context.

> I have been saying often enough that independence is shown
> simply by looking at the two formulas. I said so even in the same quoted
> message.

I was quoting from the message available at:

http://x1.dejanews.com/=hg/[ST_rn=qs]/getdoc.xp?AN=449426113&search=thread&CONTEXT=920418615.1288503500&HIT_CONTEXT=920418615.1288503500&HIT_NUM=7&hitnum=35

in which you wrote no such thing.  (Also what you said is wrong.)

[Bryan:]
> > Second, what I've been
> > saying is that message equivocation H_E(M) must be zero at or
> > before the number of intercepted letters for which H_E(K) is
> > zero.  You say it can be zero "much before".  Sure.
> 
> You say many things, Bryan, some of them interesting, but unfortunately you
> tend to change them around when they do not suit the discussion.

I've certainly been consistent on this one.

> As above, I see that you try to get my words to stand for your words -- and I
> am happy about that, but pls acknowledge for dialogue's sake. For example,
> you wrote before:
> 
> BO> Shannon say that the unicity point has been reached when the
> BO> key equivocation drops negligibly far from zero.

Yes.

> which is however at odds with your own new "old" words above.

Why?  H_E(K) is zero at the unicity point and H_E(M) is zero at
or before the unicity point.  Why do you think that's inconsistent?

> Indeed, message
> equivocation  is what defines the unicity condition (even to Shannon), not
> zero key equivocation  -- because message equivocation can be zero before key
> equivocation is zero.

In your own paper, you've noted that Shannon defines unicity distance
on page 693 of his "Communication Theory of Secrecy Systems".

    It will be seen from Fig. 7 that the equivocation curves approach
    zero rather sharply.  Thus we may, with but little ambiguity, speak
    of a point at which the solution becomes unique.  This number of
    letters will be called the unicity distance.  For the random cipher
    it is approximately H(K)/D.

The equivocation curves in figure 7 show key equivocation.  Shannon does
not even draw message equivocation on the graph to which he refers in the
definition of unicity distance.  So where in Shannon's definition do you 
see H_E(M), the message equivocation curve?

> This is what I have been saying all along and I am glad you now agree to such
> an extent that you made it your own point!
>
> Given that, I believe we are in essential agreement in "what you have been
> saying" ;-)

Great!  Now that we agree "message equivocation H_E(M) must be zero 
at or before the number of intercepted letters for which H_E(K) is
zero", I'll use it to prove that the unicity distance of a cipher
with a keyspace of one key is zero.

Proof:  In a key space of one key H_E(K) is always zero since P(K)
is always one for the one key.  Thus H_E(K) is zero at N=0 intercepted
letters.  Since H_E(M) must be zero at or before the number of 
intercepted letters for which H_E(K) is zero, it is also zero at zero 
intercepted letters.  The unicity distance is zero.
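Shannon's random-cipher approximation n = H(K)/D makes the one-key case easy to check numerically. A minimal Python sketch (my own illustration, using a commonly quoted figure of roughly 3.2 bits/letter for the redundancy of English):

```python
import math

def unicity_distance(num_keys: int, redundancy: float) -> float:
    """Shannon's random-cipher approximation: n ~ H(K)/D, with
    H(K) = log2(number of keys) and D the plaintext redundancy
    in bits per letter."""
    return math.log2(num_keys) / redundancy

D = 3.2  # rough redundancy of English, bits per letter (assumed figure)

print(unicity_distance(2**56, D))  # DES-sized keyspace: about 17.5 letters
print(unicity_distance(1, D))      # one key: H(K) = 0, so the distance is 0
```

With a single key the numerator H(K) = log2(1) vanishes, which is exactly the degenerate case argued above.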

> Regarding the random cipher assumption and its careless usage implication
> that a message is known before it is transmitted, I believe I can be spared
> to further argue against that.

Arguing your opinions without ever referring to anything actually in 
Shannon's random cipher was rather careless.  I've carefully gone
through the construction and verified that the random cipher is 
perfectly well defined and all its assumptions are fulfilled for a 
keyspace of one key.  Feel free to check or spare yourself the work 
as you wish.

--Bryan

------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Re: idea for random numbers
Date: Wed, 03 Mar 1999 12:52:08 -0600

Douglas A. Gwyn wrote:
> Good hardware RNGs already exist, why not just buy one.

Because that's no fun.  Sure, if you have a job to do, you
better go buy one.  But if you want to stay out of a gang,
it's really a good way to hide out in your basement.

:-)

Patience, persistence, truth,
Dr. mike

------------------------------

From: [EMAIL PROTECTED] (John Bailey)
Subject: Re: need simple symmetric algorithm
Date: 3 Mar 1999 20:41:40 GMT

On Sun, 28 Feb 1999 10:14:18 GMT, "Daniel Feurle" <[EMAIL PROTECTED]> wrote:

>I need a simple symmetric algorithm to decrypt and encrypt this
>number field which is only a 16-bit integer number.
The JavaScript at
http://www.ggw.org/donorware/flip.html
was written with exactly that application in mind.  Because it's
JavaScript, the source code can be viewed directly off the page.
Simple, symmetric algorithms don't seem to get much play, so if you get
any input regarding the strength of this, please let me know.
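The linked script isn't reproduced here, but as a rough illustration of the kind of construction being asked for -- a keyed, invertible map on 16-bit values -- here is a toy Feistel sketch in Python (my own illustration, not the cipher on that page, and not secure):

```python
def _round_fn(x: int, k: int) -> int:
    # Arbitrary non-linear mixing of an 8-bit half with an 8-bit subkey.
    return ((x * 167 + k) ^ (x >> 3)) & 0xFF

def _subkey(key: int, i: int) -> int:
    # Cycle through four 8-bit subkeys drawn from a 32-bit key.
    return (key >> ((i % 4) * 8)) & 0xFF

def encrypt16(value: int, key: int, rounds: int = 8) -> int:
    """Toy Feistel cipher on a 16-bit block; invertible by construction."""
    left, right = value >> 8, value & 0xFF
    for i in range(rounds):
        left, right = right, left ^ _round_fn(right, _subkey(key, i))
    return (left << 8) | right

def decrypt16(value: int, key: int, rounds: int = 8) -> int:
    """Inverse of encrypt16: undo the rounds in reverse order."""
    left, right = value >> 8, value & 0xFF
    for i in reversed(range(rounds)):
        left, right = right ^ _round_fn(left, _subkey(key, i)), left
    return (left << 8) | right
```

Any round function gives an invertible cipher because the Feistel structure guarantees it; strength, of course, is another matter entirely.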

John

------------------------------

From: [EMAIL PROTECTED] (Jim Dunnett)
Subject: Re: idea for random numbers
Date: Wed, 03 Mar 1999 21:29:38 GMT
Reply-To: Jim Dunnett

On Tue, 02 Mar 1999 21:02:53 -0800, bob taylor <[EMAIL PROTECTED]> wrote:

>Reading the noise from a sound card for random numbers is not a new
>idea, nor very random if taken as whole bytes.  But what if you try
>this:
>  Only use the LSB (bit 0) for the random sample, and you can even take
>the next bit (bit 1) to control the sample interval (stagger).  You get
>random data sampled at random intervals.  It may be slow but I don't see
>any statistical problems.  If the data can't be predicted or calculated,
>wouldn't it be good enough for OTP? (You can always hash it too.)

Would seem to be good enough for OTP. Why not run an entropy-checking
program over it?

Anyone got a DOS/Windows program to read the noise?
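An entropy check of the kind suggested takes only a few lines. A rough Python sketch (my own; a frequency count like this is a sanity check, not a proof of randomness):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Estimated bits of entropy per byte; 8.0 is the maximum for bytes."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy(b"\x00" * 1000))   # constant data has zero entropy
sample = os.urandom(65536)               # stand-in for sampled LSB bytes
print(round(shannon_entropy(sample), 2)) # good data should read near 8.0
```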

-- 
Regards, Jim.                | An atheist is a man who has
olympus%jimdee.prestel.co.uk | no invisible means of support.
dynastic%cwcom.net           | 
nordland%aol.com             | - John Buchan  1875 - 1940.
marula%zdnetmail.com         |
Pgp key: pgpkeys.mit.edu:11371

------------------------------

From: Bryan Olson <[EMAIL PROTECTED]>
Subject: Re: Common meaning misconception in IT, was Re: Unicity of English, was 
Date: Wed, 03 Mar 1999 14:03:04 -0800


[EMAIL PROTECTED] wrote:
> John Savard wrote:
> > Language statistics, as they become more detailed, are simply
> > approximations to a human writer - or reader.
| > Hence, the redundancy of
| > English text can only be approximated through language statistics,
| > which give a _lower bound_ for the actual redundancy.

> ...in letter-frequency or even in word-frequency or phrase frequency -- but
> NEVER in sense. The Information theory definition of entropy and the derived
> definitions of conditional entropy and unicity have nothing to do with
> meaning, sense or knowledge as I explained in the message.

John Savard's observation is entirely correct.  Shannon computes
the amount of information in a message from the probability of that
message.  We can use the frequencies of letters, words, and
phrases to estimate these probabilities.  The language model derived
from measured frequencies is an approximation of the actual language,
but the probabilities in the actual language define its redundancy.
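The first-order (single-letter) version of the estimate described above is easy to compute. A sketch (my own illustration), with the caveat that a frequency model only bounds the true redundancy from below, exactly as the post says:

```python
import math
from collections import Counter

def first_order_entropy(text: str) -> float:
    """Bits per letter estimated from single-letter frequencies alone.
    Richer models (digrams, words, phrases) can only lower the entropy
    estimate, so each model yields a lower bound on the true redundancy."""
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 50
h1 = first_order_entropy(sample)
print(round(h1, 2))                  # first-order entropy, bits per letter
print(round(math.log2(26) - h1, 2))  # lower bound on redundancy vs. 26 letters
```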

--Bryan

------------------------------

From: MC1148 User <[EMAIL PROTECTED]>
Subject: HELP ME
Date: Wed, 03 Mar 1999 12:31:26 -0500

My name is Jodi and I'm doing a project on how automated teller machines
(ATMs) work, I mean the kind of coding and/or signal processing that
goes on behind the scenes when the user puts the card in the machine.  I
know it uses DES, but I know nothing about DES or about anything else.
Please help me!
Thanks
email me until Thursday March 4, at [EMAIL PROTECTED] and for
March 5- March 12 at [EMAIL PROTECTED]


------------------------------

Crossposted-To: talk.politics.crypto
From: [EMAIL PROTECTED]
Subject: Re: Intel/Microsoft ID
Date: Wed, 03 Mar 1999 17:23:28 -0600

In <[EMAIL PROTECTED]>, on 03/03/99 
   at 09:57 AM, [EMAIL PROTECTED] (wtshaw) said:

>In article <7bildd$[EMAIL PROTECTED]>, "Roger Schlafly"
><[EMAIL PROTECTED]> wrote:

>> While the debate rages over Intel's unique ID and its plans to use
>> it for identification over the internet, the NY Times reports that
>> Microsoft uses the unique ID on the network card (or generates
>> a substitute) and secretly inserts it into MS Word and Excel
>> documents for purposes of identification.
>> 
>The method of recovering embedded numbers was posted yesterday.  To be
>really sneaky, consider looking at files to recover numbers used by
>certain individuals.  Imagine that you could use a utility to replace
>numbers in files you write with those of selected parties either to mask
>who you are, or the cause those giving faith to the embedded numbers to
>attribute your comments to someone else.  Better yet, include a statement
>saying that the numbers are faked to keep you off of any legal grappling
>hook.

Can someone e-mail me a copy of the article?  I can't seem to log
in to the nyt site to download it.

tks

-- 
===============================================================
William H. Geiger III  http://www.openpgp.net
Geiger Consulting    Cooking With Warp 4.0

Author of E-Secure - PGP Front End for MR/2 Ice
PGP & MR/2 the only way for secure e-mail.
OS/2 PGP 5.0 at: http://www.openpgp.net/pgp.html
Talk About PGP on IRC EFNet Channel: #pgp Nick: whgiii
===============================================================


------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: talk.politics.crypto
Subject: Re: Intel/Microsoft ID
Date: Wed, 03 Mar 1999 22:43:42 GMT

[EMAIL PROTECTED] (John Savard) wrote:

> If this is correct, it is far more serious than the inclusion of a
> serial number on Pentium III chips; and, of course, this sort of
> behavior on the part of software companies would also be a powerful
> argument against including a serial number on microprocessors.

Get a grip.  It is no secret that MicroFraud was using the ethernet MAC
address to generate GUIDs for interfaces and object IDs.  Anyone with a
copy of DevFool^H^H^H^HStudio can run genguid.exe and see for themselves; 
I've been aware of this "secret" for years.  Anyone can run commonly
available document file viewers and discern the GUIDs contained therein. 
Again, we have the makings of a non-story here.  Once all the crocodile
tears are washed away there is ... well ... nothing left.
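For what it's worth, the mechanics are easy to demonstrate with Python's standard uuid module (an illustration of the general MAC-in-GUID scheme, not Microsoft's actual code):

```python
import uuid

# A time-based (version 1) UUID carries a 48-bit "node" field which, per
# the UUID spec, is normally the machine's MAC address (Python substitutes
# a random number when no MAC is available).
u = uuid.uuid1()
print(u, hex(u.node))

# Anyone holding only the UUID string can read the node field back out --
# which is exactly the identification concern under discussion.
recovered = uuid.UUID(str(u)).node
assert recovered == u.node
```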

After reading that story, I can only imagine what the next one will be like.

Perhaps it will reveal the shocking truth:  IP numbers are a nefarious
world-wide conspiracy that tracks millions of network interfaces.  Discovered
by a pimply geek watching a network sniffer.  "I couldn't believe my eyes",
said the hacker [photograph of said hacker, bug-eyed while watching the
output of tcpdump scroll by].  It can quote EFF and EPIC representatives,
slack-jawed at this revelation.  "Years ... DECADES ... the depth and scale of
this privacy violation defies description.  Shame on the IETF for architecting
this neo-orwellian monster and unleashing it onto an unsuspecting world.  SHAME!"

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Define Randomness
Date: Wed, 3 Mar 1999 18:33:25 GMT

Terry Ritter wrote:
> I agree, with the addition that experimental evidence can *disprove* a
> theory, just not prove one.

That is false on its face.  Consider:
        Theory A: whatever
        Theory B: "Theory A is false"
If an experiment disproves Theory A then it proves Theory B,
providing an example of experimental evidence proving a theory.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Testing Algorithms [moving off-topic]
Date: Wed, 3 Mar 1999 18:35:02 GMT

Darren New wrote:
> Has anyone figured out what causes a state vector to collapse, or how
> often it happens? If not, then I'd have to say "I don't know."

Two things:
        (1) There was an interaction.
        (2) The observation changed your knowledge about the state.

------------------------------

From: [EMAIL PROTECTED] (J. Mark Brooks)
Crossposted-To: talk.politics.crypto
Subject: Re: Intel/Microsoft ID
Date: 4 Mar 1999 00:10:22 GMT
Reply-To: [EMAIL PROTECTED]

As I don't have access to the nytimes.com site, posting the
article or emailing it to me would be appreciated.  Thanks.

John Savard <[EMAIL PROTECTED]> wrote:
>"Roger Schlafly" <[EMAIL PROTECTED]> wrote, in part:
>
>>While the debate rages over Intel's unique ID and its plans to use
>>it for identification over the internet, the NY Times reports that
>>Microsoft uses the unique ID on the network card (or generates
>>a substitute) and secretly inserts it into MS Word and Excel
>>documents for purposes of identification.
>
>>http://www.nytimes.com/library/tech/99/03/biztech/articles/03privacy.html
>
>If this is correct, it is far more serious than the inclusion of a
>serial number on Pentium III chips; and, of course, this sort of
>behavior on the part of software companies would also be a powerful
>argument against including a serial number on microprocessors.
>
>John Savard (teneerf is spelled backwards)
>http://members.xoom.com/quadibloc/index.html


-- 
****************************************
* J. Mark Brooks, Attorney at Law      *
* P.O. Box 39, Randleman, NC 27317     *
* [EMAIL PROTECTED]              *
* ICQ# 31732382                        *
* http://www.jmbrooks.net/law.html     *
****************************************

------------------------------

From: nobody <[EMAIL PROTECTED]>
Subject: Elliptic Curve Cryptography
Date: Wed, 03 Mar 1999 18:50:30 -0500

I am searching for some "down-to-earth" information about elliptic curve
cryptography.  I am not well-educated in any area of cryptology--in
fact, I am quite new to the field.  I'm interested in doing a project on
E.C.C.  Is there any good information available that doesn't require a
PhD to understand it?  Where should I start my study in this area of
cryptography?  Any information would be greatly appreciated.
Thanks,  Jon
Re:   [EMAIL PROTECTED]


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Unicity of English, was Re: New high-security 56-bit DES: Less-DES
Date: Thu, 04 Mar 1999 01:07:32 GMT

In article <[EMAIL PROTECTED]>,
  Bryan Olson <[EMAIL PROTECTED]> wrote:

> [EMAIL PROTECTED] wrote:
> > I have been saying often enough that independence is shown
> > simply by looking at the two formulas. I said so even in the same quoted
> > message.

Bryan:

You can find my quote above in the sci.crypt archives, at the same place
where I corrected Shannon's misprinted equation for conditional message
entropy -- which BTW I did just to make sure you were not being misguided by
the misprint.

Again, just looking at the two (correct) equations should convince anyone
versed in high-school math that they are independent. If you are still not
convinced, I cannot help further.
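For readers without the paper at hand, the two conditional entropies in question have the standard forms below (my transcription of the usual definitions, with E ranging over cryptograms, K over keys, and M over messages; not a quote from Shannon):

```latex
H_E(K) = \sum_{E,K} P(E,K)\,\log \frac{1}{P_E(K)}
\qquad
H_E(M) = \sum_{E,M} P(E,M)\,\log \frac{1}{P_E(M)}
```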


> >
> > BO> Shannon say that the unicity point has been reached when the
> > BO> key equivocation drops negligibly far from zero.
>
> Yes.

which is false as a definition because it is not causal. You cannot define an
effect to be true "when" an unrelated effect is true -- where is the cause?

It would be similar to saying: the car runs out of gas (an effect) when the
engine's temperature is negligibly far from the ambient temperature (also an
effect) -- which is absurd.

Sure, if the car runs out of gas (a cause now) then its engine temperature
will be negligibly far from the ambient temperature after a while (its effect)
-- however, there is no causal connection for the reverse path that would
justify using "when" for the other phrase.

You are simply confusing A --> B (if A then B) with A <-- B (when B then A).
The use of both expressions in A <--> B is justified if and only if A
and B are equivalent, since neither logical connective (-->, <--) implies the
other.

But I am very happy that now and then you are dropping the wrong term
"distance" and using "unicity point" as above -- in your own words. Again, as
"you have been saying all along" ;-)

>
> > which is however at odds with your own new "old" words above.
>
> Why?  H_E(K) is zero at the unicity point and H_E(M) is zero at
> or before the unicity point.  Why do you think that's inconsistent?

Read above.

>
> > Indeed, message
> > equivocation  is what defines the unicity condition (even to Shannon), not
> > zero key equivocation  -- because message equivocation can be zero before
> > key equivocation is zero.
>
> In your own paper, you've noted that Shannon defines unicity distance
> on page 693 of his "Communication Theory of Secrecy Systems".
>

But I did not say he defined it in words I could use -- in fact, I revisited
the words, for good reasons as I explained. However, the basic idea is the
same -- hence I call it "revisitation" and not "revision". I am not revising
Shannon's concept -- it is still the same basic idea. See below.

>     It will be seen from Fig. 7 that the equivocation curves approach
>     zero rather sharply.

This passage is correct but becomes misleading when used with the next
sentence, since he meant key and message equivocation curves (yes, BOTH
curves, see my comment below) but they can have quite different behaviors in
general. For example, it is possible that message equivocation drops to zero
at some point but key equivocation **never** becomes zero until and after the
received message ends. Do you need examples?

Thus, a better wording would have been:

"It will be seen from Fig. 7 that message equivocation approaches zero rather
sharply."

>    Thus we may, with but little ambiguity, speak
>     of a point at which the solution becomes unique.

Solution of what? The only fitting answer is "cryptogram" -- which implies
that Shannon meant message equivocation, as I have worded above.

>     This number of
>     letters will be called the unicity distance.

Which is meant as: "This number of characters is called the unicity."

> For the random cipher
>     it is approximately H(K)/D.

Ok.

>
> The equivocation curves in figure 7 show key equivocation.  Shannon does
> not even draw message equivocation on the graph to which he refers in the
> definition of unicity distance.

You are mistaken -- but I take your emphatic words once again as a sign of
good-will and ignorance, not purposeful lack of dialogue and fool's arrogance.

Please read the text one line above that Fig. 7 -- and one line after Fig. 7.
Which ends with: "After a rounded transition, it follows the He(K) curve
down." I guess you can tell me what the "it" stands for?

HINT: Just read.


>  So where in Shannon's definition do you
> see H_E(M), the message equivocation curve?

As above -- and in plain sight.

The rest of your message is trivial and has been answered before; please
see the archives.

BTW, this exchange with you, in the sci.crypt archives, is living proof that
the English unicity of Shannon's work may, for some observers, far surpass the
lower limit I estimated before for general English text, of about a few
characters per word.  But that is just one data point, of course.

Cheers, and keep RFM,

Ed Gerck


------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
