Cryptography-Digest Digest #246, Volume #10      Thu, 16 Sep 99 13:13:03 EDT

Contents:
  The good things about "bad" cryptography
  Re: SCOTT19U.ZIP_GUY/Questions Please (Tom St Denis)
  Re: Ritter's paper
  Re: Can you believe this?? (Paul Koning)
  Re: RC4-40 Cracking (Ian Goldberg)
  Re: Second "_NSAKey" ("Douglas A. Gwyn")
  Steganography on Casio QV-10A ( Doug Goncz)
  ECC (again...) (Emmanuel Drouet)
  Comments on ECC (Medical Electronics Lab)
  Re: NSA and MS windows (Hamish Allan)
  Re: ECC (again...) (SCOTT19U.ZIP_GUY)
  Re: Mystery inc. (Beale cyphers) ([EMAIL PROTECTED])
  Re: some information theory (Tim Tyler)
  Analogues to ECC over higher dim. abelian groups (Alex)
  Re: Analogues to ECC over higher dim. abelian groups, err varieties (Alex)
  Re: Comments on ECC (Alex)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] ()
Subject: The good things about "bad" cryptography
Date: 16 Sep 99 13:05:58 GMT

There are two schools of thought about how to choose a cipher to securely
encrypt your messages.

On the surface, it doesn't seem very hard to decide which one you should
follow.

One school of thought notes that many new cipher designs have turned out,
after brief examination, to be seriously flawed. Hence, because of this
high risk, it is not advisable to rely on any cipher that hasn't been
subjected to extensive study by the foremost experts in the open academic
world.

Another school of thought notes things like this:

- if an attacker doesn't know the algorithm being used, he will have a
harder time of even beginning an attack;

- most well-known algorithms have key sizes that are just enough to resist
a brute-force search, even though it's not difficult to increase the key
size for a symmetric algorithm by an order of magnitude;

- no amount of study can prove that the crack for an algorithm isn't just
around the corner, and such a crack seems likelier to be both found and
publicized for a well-known algorithm if it exists.

Despite the fact that the advocates of the first viewpoint are among the
most respected authorities in the field, while variations of the second
viewpoint have often been raised by people who are, or who resemble,
cranks and crackpots, the irritating fact is that the points cited here
under the second point of view _are all valid_.

Since the basis for the first point of view is *also* valid, this isn't an
argument for abandoning it. But if security is the goal, we do have to
widen our horizons. Multiple encryption allows us to do so, to address the
concerns of the second point of view while still addressing those of the
first.
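The multiple-encryption idea can be sketched in a few lines -- a toy
illustration only, with hash-based keystreams standing in for two
independent real ciphers (the function names here are made up for the
example):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def multiple_encrypt(msg: bytes, key_a: bytes, key_b: bytes) -> bytes:
    # Encipher with cipher A, then encipher the result with cipher B
    # under an independent key; an attacker must defeat both layers.
    layer1 = xor(msg, keystream(key_a, len(msg)))
    return xor(layer1, keystream(key_b, len(msg)))

def multiple_decrypt(ct: bytes, key_a: bytes, key_b: bytes) -> bytes:
    # Peel the layers in reverse order.
    layer1 = xor(ct, keystream(key_b, len(ct)))
    return xor(layer1, keystream(key_a, len(ct)))
```

If either layer turns out to be flawed, the attacker still faces the
other -- which is exactly how superencipherment addresses both schools
of thought at once.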

John Savard

------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: SCOTT19U.ZIP_GUY/Questions Please
Date: Thu, 16 Sep 1999 13:15:00 GMT

In article <[EMAIL PROTECTED]>,
  "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> tunafish wrote:
> > What they seem to have done is deliberately weaken these algorithms
> > by asking those who submitted to make certain modifications to the
> > code...
>
> Oh, good grief!  That's the old conspiracy theory resurrected
> from the early DES debate.  What EVIDENCE do you have that this
> has occurred?

He has wild, unsubstantiated speculation; what else could you need?

Tom
--
damn windows... new PGP key!!!
http://people.goplay.com/tomstdenis/key.pgp
(this time I have a backup of the secret key)


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: Ritter's paper
Date: 16 Sep 99 12:52:07 GMT

Organization: Edmonton Community Network
Distribution: 

Douglas A. Gwyn ([EMAIL PROTECTED]) wrote:
: "Trevor Jackson, III" wrote:
: > There are several gaps here.  The glaring one is that we have no
: > ciphers (excluding OTP) that are secure.  We have only ciphers that
: > are not secure or whose security we are unable to determine.  Note
: > that last: it does not mean we "think" they are secure.  It means we
: > do not know.

: (a) OTP is clearly not secure *in practice*.  In a simplified
: theoretical framework, it has certain mathematical properties
: that are usually summarized by "is secure", but the exact
: formulation is important.

Since he only mentioned OTP in order to exclude it, it's enough that OTP
is a cipher with the potential of being secure; it is not essential to his
argument that OTP always be secure. We know it isn't robust under errors
in key handling.

: (b) Other cipher systems have been described in the open literature
: under the appellation "provably secure".  Again, one has to examine
: the details to know exactly what that means.

In the case of these other ciphers, such as Blum-Blum Shub, the term
always means "provably as secure as" a mathematical problem, such as
factoring or discrete logarithm, which cannot itself be proved to be truly
hard.
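For concreteness, the Blum-Blum-Shub generator itself is only a few
lines (toy parameters for illustration; real use requires large secret
primes p, q, both congruent to 3 mod 4):

```python
def bbs_bits(seed: int, p: int, q: int, count: int) -> list:
    """Blum-Blum-Shub: iterate x -> x^2 mod n and emit the low bit of
    each state.  Predicting the output is provably as hard as factoring
    n = p*q -- but factoring itself is not proved to be hard."""
    n = p * q
    x = seed * seed % n            # start from a quadratic residue
    bits = []
    for _ in range(count):
        x = x * x % n
        bits.append(x & 1)
    return bits

# Toy parameters (499 and 547 are primes congruent to 3 mod 4):
stream = bbs_bits(159, 499, 547, 64)
```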

: (c) Shannon showed one way in which degree of security could be
: quantified, in his description of unicity point.  An elaboration
: of this idea can be used to prove certain bounds on insecurity
: for systems on the proper side of the unicity point.  (These
: might not correspond to systems in actual use, but it shows that
: there are non-OTP theoretical counterexamples to your claim.)

If a "one-time pad" containing a whole random alphabet for each letter
were used twice, instead of a series of displacements, the cryptanalyst
would only know that certain letters in the two messages were equal or
unequal. This is why two messages with the same starting point wouldn't be
sufficient to provide a good entry for an attack on the SIGABA; but in the
case of the SIGABA, where the alphabets are not truly random, we have
already left the realm where one can really prove security. Also, if
messages are well-compressed before encryption (_very_ well compressed, to
an extent not done in the open literature) one may need to intercept quite
a few messages encrypted under the same key to have a possibility of
solving them.

Perhaps this sort of thing is what you are referring to, or not.

: (d) By "we" you must mean "Trevor Jackson and people I know about."
: How do you know that point (c), or some other approach, hasn't been
: developed into a full, practical theory by people you *don't* know
: about?

He's describing the situation the open cryptographic community finds
itself in. This may be an important caveat, but it doesn't affect a
discussion about how people outside the NSA should behave when choosing
ciphers for their correspondence, which is in fact the topic of this
discussion.

I was hoping you might have some interesting input to this discussion; I'm
a bit disappointed to see here what appear to me to be merely quibbles.
But then I've still fallen short of what I might say, so far merely
reacting to what I see as "obvious" errors by Terry Ritter.

John Savard

------------------------------

From: Paul Koning <[EMAIL PROTECTED]>
Subject: Re: Can you believe this??
Date: Thu, 16 Sep 1999 09:46:46 -0400

Anton Stiglic wrote:
> 
> >
> > /dev/urandom is NOT a valid source for key generation/challenge string.
> > /dev/random is a good source for high quality entropy to seed a secure
> > random number generator.  Even the authors themselves have said that
> > /dev/urandom should only be used for simple programming uses (like games and
> > such).  If you would like, I can dig up the article that says it.  Better
> > invest in a secure RNG.
> 
> I'd be interested in reading a well written article on that subject.

You're not likely to find one on the claim that the other
guy made, since it's all wrong...

        paul

------------------------------

From: [EMAIL PROTECTED] (Ian Goldberg)
Subject: Re: RC4-40 Cracking
Date: 16 Sep 1999 14:33:01 GMT

In article <[EMAIL PROTECTED]>, yoni  <[EMAIL PROTECTED]> wrote:
>Can you help me clarify something ?
>
>When you refer to Cracking the RC4 you mean a "brute force" attack ?
>simply try all possible combinations of the key ?

Yes.

>Do you use a known plaintext attack ?

Yes; RC4 is a stream cipher.

>RC4-40 is RC4 initialized with 40 Bits key (5 bytes)?

Not quite.  When used in SSL (the most common usage of RC4-40), it works
like this:

There are 5 bytes of secret key, and 11 bytes of "salt" (the salt is
generated randomly for each message, but is transmitted in the clear
along with the message).  Those 16 bytes are hashed through MD5 to produce
a 16-byte output.  Those 16 bytes are used as the (128-bit) key for RC4.
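That derivation can be sketched in Python (illustrative only -- the
exact concatenation order of secret and salt is an assumption here, and
real SSL implementations do this inside the record layer):

```python
import hashlib

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Minimal RC4: key scheduling (KSA), then n bytes of PRGA output."""
    S = list(range(256))
    j = 0
    for i in range(256):                              # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for _ in range(n):                                # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4_40_key(secret5: bytes, salt11: bytes) -> bytes:
    """Export-grade key setup as described above: MD5 over the 5 secret
    bytes plus the 11 cleartext salt bytes yields the 16-byte (128-bit)
    RC4 key -- but only 2^40 of the inputs are unknown to an attacker."""
    assert len(secret5) == 5 and len(salt11) == 11
    return hashlib.md5(secret5 + salt11).digest()
```

So a brute-force attack tries all 2^40 secret values, recomputing the
MD5 hash with the known salt each time.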

   - Ian

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Second "_NSAKey"
Date: Thu, 16 Sep 1999 06:28:44 GMT

"Trevor Jackson, III" wrote:
> We may not be able to determine what the actual purpose of their
> "backup key" may have been, ...

To the contrary, Microsoft *has* explained the purpose, and it
was quite plausible (although perhaps ill-advised).  The "role
that NSA played" was, according to Microsoft, that NSA would be
reviewing the product for export, and Microsoft didn't want to
be forced to hand over their private key to NSA, so they
anticipated this by providing for a second, NSA-private, key
that could not be used to authenticate Microsoft modules but
could be used by NSA to verify how the framework operated,
using NSA's own (test) modules.

I know I've mentioned this in previous posts; haven't they
reached the newsgroup?

------------------------------

From: [EMAIL PROTECTED] ( Doug Goncz )
Subject: Steganography on Casio QV-10A
Date: 16 Sep 1999 14:52:22 GMT

By now most digital camera users understand the pictures in their cameras are
highly compressed. There are possibilities for steganographically encoding
data in the least significant bits of a digital camera image, and there is
software to do it.

I found a feature in software provided with my digital camera, one of the
earliest ones, a Casio QV-10A.

There's a comment field and a date field. Neither is visible to the camera
operator, nor on the thumbnails. But open an image by double-clicking or
File/Open and the Edit menu now has a Comment entry. Each of the two
fields holds 64 characters.

Now I can tag my HUGE inventory of things that "might be useful" in my work. I
have two stacks of boxes, one on either side of the living room window, each
with 96 boxes. And that's the full capacity of the camera. I'll consider an
ebay auction or post them. If I can get my personal belongings into a pocket
picture index, that would be real useful. I need to get rid of roughly half,
and that's a reachable goal.

You can carry 64x2x96 = 12,288 bytes in your pocket with a certain amount
of security this way.

Adding information to the LSB in compressed pictures could result in
information loss.

The frigging resistor array in the interface cable cracked. A new cable is $39.
A new array is 50 cents. One way or the other....


 Yours,

 Doug Goncz
 Experimental Machinist ( DOT 600.260-022 ) ( A.A.S.M.E.T. )
 Replikon Research ( USA 22044-0094 )
 http://users.aol.com/DGoncz or /ReplikonVA
 http://www.deja.com/profile.xp?[EMAIL PROTECTED]

------------------------------

From: Emmanuel Drouet <[EMAIL PROTECTED]>
Subject: ECC (again...)
Date: Thu, 16 Sep 1999 16:51:39 +0100

Hello !

I'm looking for elliptic curve algorithms:
a public key cryptosystem which doesn't derive from Diffie-Hellman.

The only algorithms I found are based on a shared secret key and use a
symmetric cryptosystem (ECAES, for example)...
Why is it so difficult to find a public key cryptosystem which "simply"
encodes the text?

Could you help me ?

Manu


------------------------------

From: Medical Electronics Lab <[EMAIL PROTECTED]>
Subject: Comments on ECC
Date: Thu, 16 Sep 1999 10:09:41 -0500

In his latest "Crypto-Gram", Bruce Schneier wrote:

>Certicom used the event to tout the benefits of elliptic curve public-key
>cryptography.  Elliptic-curve algorithms, unlike algorithms like RSA,
>ElGamal, and DSA, are not vulnerable to the mathematical techniques that
>can factor these large numbers.  Hence, they reason, elliptic curve
>algorithms are more secure than RSA and etc.  There is some truth here, but
>only if you accept the premise that elliptic curve algorithms have
>fundamentally different mathematics.  I wrote about this earlier; the short
>summary is that you should use elliptic curve cryptography if memory
>considerations demand it, but RSA with long keys is probably safer.

The mathematics *is* fundamentally different, Bruce!!  There are over
200 years of work that's been done on elliptic curve math; for you to
imply that it's the same thing as RSA-type math tells me you don't
really understand it.  The fundamental difference is that RSA works
in the field directly, but ECC works "on top" of the field.  It is a
higher level of algebra, a "more abstract" mathematics, to put it in
plain English.

ECC is more secure than RSA for the following reason:
It takes exponentially increasing effort to solve the ECDLP for
each bit of key added compared to the sub-exponentially increasing
effort associated with each bit of RSA key.

The method of attack is different from RSA's: ECC is very similar to
the DH-type problem (discrete log), and this too is very different from
the factoring problem.  In some sense it's easier; there's no final
matrix you need to solve.  However, you have to search harder to find
two different routes to the same "distinguished point", and it's that
search process which grows exponentially with key size.
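The growth rates described above can be put into a back-of-the-envelope
cost model (hypothetical formulas for illustration: Pollard's rho for a
generic group, and the heuristic GNFS running time L_n[1/3, 1.923] for
factoring; both returned as log2 of the operation count):

```python
import math

def rho_log2_cost(bits: int) -> float:
    """Pollard rho on a ~bits-bit curve order takes roughly 2^(bits/2)
    group operations -- fully exponential, so log2(cost) = bits / 2."""
    return bits / 2.0

def gnfs_log2_cost(bits: int) -> float:
    """Heuristic GNFS cost L_n[1/3, 1.923] for factoring a bits-bit
    modulus -- sub-exponential, so each added bit buys RSA less."""
    ln_n = bits * math.log(2)
    ln_cost = 1.923 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_cost / math.log(2)
```

With this model rho_log2_cost(160) is 80 and gnfs_log2_cost(1024) comes
out in the 80s -- the usual "160-bit curve is comparable to 1024-bit
RSA" pairing -- while doubling the curve size adds far more work for the
attacker than doubling the RSA modulus does.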

> It's tiring when people don't listen to                           
>cryptographers when they say that something is insecure, waiting instead
>for someone to actually demonstrate the insecurity.

But when cryptographers call something insecure which is very
secure, then waiting for someone to "actually demonstrate the
insecurity" is going to be a very long wait indeed.

Bruce, your field of expertise is clearly symmetric ciphers.  Stay
with it, and good luck on getting Twofish as the AES winner.  But if
you don't understand math, don't make false proclamations.  It's
obvious mathematically that ECC is more secure than RSA, and it's
obvious in engineering terms that it uses fewer resources in time
and space than RSA for the same level of security.

Patience, persistence, truth,
Dr. mike

------------------------------

From: Hamish Allan <[EMAIL PROTECTED]>
Subject: Re: NSA and MS windows
Date: Thu, 16 Sep 1999 16:33:15 +0100


fungus wrote:

> OTOH, I've also heard that the second key can be replaced by
> anything you like and strong crypto suddenly appears on your
> machine - a bit like Netscape which can be "upgraded" by changing
> one byte of the program.

It's not quite as simple as that - you overwrite the second key with
your public key, then use your private key to sign crypto modules, which
Windows will accept as properly authenticated. Therefore you can install
your own strong crypto on your machine.

The danger is that a trojan can install a new key over the NSAKEY and
install modules signed with the other half of that key. You might be
thinking, what does the NSAKEY have to do with this - couldn't a trojan
install a new key over the primary Microsoft key and install crypto
modules that way anyway? Yes, but you would notice because all your
other modules would fail to authenticate. The weakness is that the
system accepts authentication by *either* key, and that the second key
is intended as a backup key - i.e. official modules will always be
signed by the first.
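A minimal model of that accept-either-key weakness (hypothetical; an
HMAC stands in for real RSA signature verification, and none of this is
Microsoft's actual code):

```python
import hmac
import hashlib

def verify(pubkey: bytes, module: bytes, sig: bytes) -> bool:
    # Toy stand-in for signature verification: a "public key" here is
    # just an HMAC key, standing in for RSA verification in CryptoAPI.
    expected = hmac.new(pubkey, module, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

def module_accepted(module: bytes, sig: bytes,
                    primary_key: bytes, secondary_key: bytes) -> bool:
    # The weakness: authentication succeeds if EITHER key verifies the
    # signature, so overwriting the secondary key lets unofficial
    # modules load while everything signed by the primary still works.
    return (verify(primary_key, module, sig)
            or verify(secondary_key, module, sig))
```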

Bruce Schneier wrote about this in some detail; I'll try to dig up the
URL.

Hamish



------------------------------

From: [EMAIL PROTECTED] (SCOTT19U.ZIP_GUY)
Subject: Re: ECC (again...)
Date: Thu, 16 Sep 1999 17:07:53 GMT

In article <[EMAIL PROTECTED]>, Emmanuel Drouet <[EMAIL PROTECTED]> 
wrote:
>Hello !
>
>I'm looking for elliptic curve algorithms:
>a public key cryptosystem which doesn't derive from Diffie-Hellman.
>
>The only algorithms I found are based on a shared secret key and use a
>symmetric cryptosystem (ECAES, for example)...
>Why is it so difficult to find a public key cryptosystem which "simply"
>encodes the text?
     The reason is that all the known "public key" methods are zero
knowledge methods, meaning that the seeds of what was encrypted are in the
file itself. The attacker needs to know nothing about what your message
is: if he gets a key that works, he knows everything. This is not
necessarily true with secret key methods that are done correctly, since
the attacker needs more information than what is in the file to decode it.
For example, if I send a random file to my son with a secret key method
and you try several keys, you can never really know what I sent my son,
even if you used the correct key as one of your guesses. This is not true
with a zero knowledge method.
   Also, since many public key methods are weak to various plaintext
attacks, you try to limit their use, and you try to be sure that they only
encrypt a random string, so that info is not leaked to an attacker.
   If you used the public key (zero knowledge) method for your whole text,
it would be very slow and subject to more forms of attack than just a
secret key method.
  >
>Could you help me ?
      Maybe
>
>Manu
>



David A. Scott
--
                    SCOTT19U.ZIP NOW AVAILABLE WORLD WIDE
                    http://www.jim.com/jamesd/Kong/scott19u.zip
                    http://members.xoom.com/ecil/index.htm
                    NOTE EMAIL address is for SPAMERS

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Mystery inc. (Beale cyphers)
Date: Thu, 16 Sep 1999 15:29:03 GMT

In article <19990915230006.150$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Curt Welch) wrote:
>
> It seems to me that it's a bit misleading to think about an adjustment
> like that as a "typo".  I say that because calling it a typo implies
> it happened at some point after all the cipher work was done.  And that
> implies "fixing" the cipher text as published is the correct route to
> take.  (i.e. you are just correcting a mistake the printer, or someone
> copying the cipher at a later date made).  But it's not clear to me
> that this is the case here.
< snip >
> But, think of this.  It's possible that as B2 was encoded, they
> recorded all the numbers to use for each letter on a separate sheet of
> paper.  And this sheet of paper (not the numbered DOI) was the "key"
> which was going to be used to decode the message.  And this "key" may
> have been used to create the B1 "key".  So even if a letter was encoded
> incorrectly (because of a simple clerical error in the encoding
> process), this off-by-one number as seen in B2 may be the number that
> was written down on the "key".  And that number may be the one we need
> to use when trying to break B1.
> So, all I'm saying, is that those "typo" errors aren't something you
> should just "fix" and forget about.  You need to think about both
> possible values as you search for various solutions.

   Well, the only thing you have to go on is the cleartext.   You know
from reading it that a cipher number is either right or wrong, and you
can pick any one of hundreds from the DOI to make it read correctly.
The numbers I choose have a common theme, i.e. typical typos, but other
people have been satisfied just to make the cleartext read correctly.
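For readers unfamiliar with the mechanics: B2 is a book cipher in which
each cipher number selects a word of the (numbered) Declaration of
Independence and the plaintext letter is that word's first letter, so a
single off-by-one number yields one wrong letter.  A minimal sketch,
with a made-up key text rather than the DOI:

```python
def decode_book_cipher(numbers, keytext):
    """Beale-style book cipher: each number picks a word of the key
    text (1-indexed); the plaintext letter is that word's first
    letter."""
    words = keytext.split()
    return "".join(words[n - 1][0].lower() for n in numbers)

# With a toy key text, the numbers 1, 2, 3 spell "abe":
decode_book_cipher([1, 2, 3], "always be extra careful decoding")
```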

> > and 440
> > becomes 40.
>
> That's a little harder to swallow, but I guess also very likely.
> (it's not like the document was typed into a computer and the typist
> double hit the 4 key).  If it was being copied by hand, it seems like
> an unlikely error to me (i.e. read 40, write 440).  Maybe it's a
> type of error that would be easy to make when typesetting the Ward
> paper???

   I don't know when the linotype machine was invented, so as far as I
know the Ward pamphlet of 1885 was set by hand, letter by letter and
number by number.  I do know that when Babbage was developing his
analytical engine in the 1850's, one of his goals was to have the
machine set its own type in order to eliminate the errors that were
common when type was set by hand.   I would like to believe that the
author of the pamphlet was given a proof to correct and that he
painstakingly compared every number to the original cipher.  However, I
know from proofreading my own email that I often see "what I meant"
instead of "what I wrote".

> If there are this many errors in B2 (the one where everyone has the
> key), imagine how many errors might be in B1 and B3

   Actually, some people take comfort from the fact that there are only
seven typographical errors.   The DOI renumbering can be discounted by
supposing that Beale was working from a variant copy of the DOI and his
numbering was correct for that variant.  Thus Beale himself (some would
say) made no errors in his work.  The errors came later during the
publication of the pamphlet and fortunately are quite rare.

< (assuming there's some truth to the story and that these documents
< have been floating around, and re-copied various times before they
< were published by Ward).

   According to the pamphlet, Ward was acting as the agent for an
unnamed author.  If the pamphlet is a hoax, it's possible that Ward
himself was just as much a victim as anyone who bought the pamphlet,
except that he undoubtedly received a fee for his services.  If you
accept the author's story at face value, then he had in his possession
the original ciphers prepared by Beale and the pamphlet was prepared
directly from these papers.

> I know Ed told me that even with his photocopy of the Ward paper
> there were a handful of numbers that could not be correctly read
> because the document had been folded and the paper torn on the fold.

  Ed had a copy of the pamphlet that the BCA located in the William F.
Friedman collection in the Marshall Library in 1979.  There was a large
hole in the center of the pamphlet.

> I wonder if there's a clear enough copy of the Ward paper were all the
> numbers can be read without question?

  Peter Viemeister found an undamaged copy and reprinted it in his
book, The Beale Treasure, A History of a Mystery, in 1987.  This is a
photographic reproduction of the original pamphlet, not one that was
retyped by Viemeister.  I worked exclusively from this reprint in
preparing the .doc files that are now in the Crypto Drop Box.  Being a
perfectionist, I checked my work several times (ten years ago) and I
believe that cipher.doc contains the ciphers exactly as they appear in
Viemeister's book.

  -- Jeff Hill



------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: some information theory
Reply-To: [EMAIL PROTECTED]
Date: Thu, 16 Sep 1999 15:27:14 GMT

Tom St Denis <[EMAIL PROTECTED]> wrote:
:   [EMAIL PROTECTED] wrote:
:> Anti-Spam <[EMAIL PROTECTED]> wrote:

:> : First, Compressed data is NOT necessarily random data.
:>
:> If your compressed data is distinguishable from randomness, you're using
:> a sub-optimal compression scheme.

: If your compressed file is random it can't expand into anything real.  Note
: that the compressed stream is as a random as the input message.  It can't be
: any more/less random.

You and I appear to be talking at cross-purposes.  The compressed stream
will be more random in the sense that it will exhibit higher entropy, and
more nearly pass tests for randomness.

I don't see what you mean by "if your compressed file is random it can't
expand into anything real".  When I described the compressed file as
"random" essentially I meant that it would pass or more nearly pass
tests for randomness.  There's no reason why such a file should /not/
expand into a real message on decompression.

:> : Many of us assume the compressed form of a file is "equivalent" in some
:> : form to true random data.  It is not.
:>
:> It certainly /should/ be - or your compression algorithm is likely to
:> be behaving sub-optimally.

: Try finding the average spacing for symbols (order 0) and you will see it's
: rarely even (for byte symbols it should be around 256).  That's one way to
: 'detect' compressed files (this works with LHA and PKZIP).

Did anyone ever claim PKZIP was an optimal compression system for any
class of file?  For most things, even ARJ is better ;-)

:> : Compressed files will not pass statistical tests for random bit streams.
:> : A compressed file is non-random.
:>
:> Speak for your own compressed files ;-)

: True the entropy 'per byte' is higher but the entropy 'per message' is not.

I thought entropy was commonly taken to be a property of a source, not
a property of a string.  Entropy "per byte" conforms to this common usage,
while entropy "per message" does not.

If your compressed file is distinguishable from a random stream of data
then it is likely to contain patterns which a better compression algorithm
would have eliminated.  Maximally compressed files should approach the
ideal of being statistically random.

One definition of what constitutes "randomness" mentions that random
data is generally incompressible.  Conversely, incompressible data should
look random - if there's any order in it, it will be fodder for a better
algorithm that identifies that order and squeezes it out.
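The "entropy per byte" point is easy to demonstrate (a sketch using zlib
as the compressor; any general-purpose compressor shows the same
effect):

```python
import math
import zlib
from collections import Counter

def entropy_per_byte(data: bytes) -> float:
    """Order-0 Shannon entropy of a byte string, in bits per byte
    (8.0 would mean perfectly flat byte frequencies)."""
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total)
                for c in counts.values())

text = b"the quick brown fox jumps over the lazy dog " * 200
packed = zlib.compress(text, 9)
# The compressed stream shows much higher entropy per byte than the
# English-like input -- though a general-purpose compressor still falls
# short of the 8.0 ideal, which is exactly the residual pattern that
# lets tools "detect" compressed files statistically.
```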
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

I owe, I owe, so off to work I go.

------------------------------

From: Alex <[EMAIL PROTECTED]>
Subject: Analogues to ECC over higher dim. abelian groups
Date: 16 Sep 1999 12:42:50 -0400


=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1


Hi.

Could someone point me at any papers studying analogues to Elliptic
Curve cryptography over higher dimensional abelian groups?

Alex.

=====BEGIN PGP SIGNATURE=====
Version: PGP for Personal Privacy 5.0
Comment: Processed by Mailcrypt 3.5b6, an Emacs/PGP interface
Charset: noconv

iQA/AwUBN+Ed7nzN4ZFYpUPAEQI6OwCfTMf0sD+40g49o6dC1D5KpUeIh38An1Ko
qIb7a1wVbQJMinokFrd984Hf
=pv6R
=====END PGP SIGNATURE=====

------------------------------

From: Alex <[EMAIL PROTECTED]>
Subject: Re: Analogues to ECC over higher dim. abelian groups, err varieties
Date: 16 Sep 1999 12:46:23 -0400


=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1



Sorry, I meant to write Abelian varieties, not Abelian groups. :)

Alex.
=====BEGIN PGP SIGNATURE=====
Version: PGP for Personal Privacy 5.0
Comment: Processed by Mailcrypt 3.5b6, an Emacs/PGP interface
Charset: noconv

iQA/AwUBN+Ee2HzN4ZFYpUPAEQKrBgCeLQz0Tez6wvXuy4dmxK7JBFD/9uMAoKVC
zsMfy0jvcbfUsd265mUQucDb
=Otbw
=====END PGP SIGNATURE=====

------------------------------

From: Alex <[EMAIL PROTECTED]>
Subject: Re: Comments on ECC
Date: 16 Sep 1999 12:49:36 -0400


=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1


> ECC is more secure than RSA for the following reason: It takes
> exponentially increasing effort to solve the ECDLP for each bit of key
> added compared to the sub-exponentially increasing effort associated
> with each bit of RSA key.

Is that for the best currently-known upper bounds, or can you prove that
solving the ECDLP has exponential time complexity?

Alex.

=====BEGIN PGP SIGNATURE=====
Version: PGP for Personal Privacy 5.0
Comment: Processed by Mailcrypt 3.5b6, an Emacs/PGP interface
Charset: noconv

iQA/AwUBN+EfkHzN4ZFYpUPAEQKerwCgrvN+7Cpz2gz/WqpF7u5QlRcMfDgAnj15
uuOl2DInE+sjSdwTG7pCcIuH
=bG5R
=====END PGP SIGNATURE=====

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
