Cryptography-Digest Digest #576, Volume #12      Thu, 31 Aug 00 00:13:01 EDT

Contents:
  Re: PGP ADK Bug: What we expect from N.A.I. (David A. Wagner)
  Re: RSA public exponent (Scott Contini)
  Re: Steganography question (David A Molnar)
  Re: QKD and The Space Shuttle (Ron B.)
  Re: PGP 6.5.8 test: That's NOT enough !!! (David Hopwood)
  Re: Bytes, octets, chars, and characters ("Douglas A. Gwyn")
  Re: QKD and The Space Shuttle (David A Molnar)
  Re: Patent, Patent is a nightmare, all software patent shuld not be allowed (qun ying)
  Re: blowfish problem ("Douglas A. Gwyn")
  Re: blowfish problem ("Douglas A. Gwyn")
  Re: Idea for creating primes ("Scott Fluhrer")
  Re: Destruction of CDs (Eric Smith)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (David A. Wagner)
Crossposted-To: alt.security.pgp,comp.security.pgp.discuss
Subject: Re: PGP ADK Bug: What we expect from N.A.I.
Date: 30 Aug 2000 18:11:44 -0700

Mark Wooding <[EMAIL PROTECTED]> wrote:
> I disagree that it's a totally daft feature.
> 
> There is a definite requirement for an employer to want to recover
> messages sent to an employee in the course of his or her job if the
> employee is unable to decrypt messages for some reason (e.g., run over
> by a bus, disgruntled and left, or whatever).

But for email!?

There is an important distinction between escrowing communications
and escrowing stored data.  Escrowing communications is rarely sensible.
Escrowing stored data -- also called "making backups" -- may well make
a lot of sense for some businesses.

Compare to what the law-enforcement agencies want: they want escrow
of communications keys and don't care so much about storage keys.
That's backwards from what businesses typically want.

I think I first saw this point raised by Matt Blaze.

------------------------------

From: [EMAIL PROTECTED] (Scott Contini)
Subject: Re: RSA public exponent
Date: 31 Aug 2000 01:22:50 GMT

In article <8oje6p$74c$[EMAIL PROTECTED]>,
Thomas Pornin <[EMAIL PROTECTED]> wrote:
>According to John Matzen <jmatzen(at)origin(d0t)ea(d0t)com>:
>> I've read that good values for the RSA public exponent are 3 or 65537.
>> Then I read that you should use an exponent such that the GCD of
>> (q-1)(p-1) and the private exponent (e) is 1. So, my question is which
>> is better and what accounts for the disparity.
>
>
>e must be coprime to (p-1)(q-1), so, when you choose your key by
>randomly selecting p and q, you are more likely to have this property
>if e is prime. The odds that the first two primes you choose are good
>in that respect are about 44% with e = 3, and almost 100% with
>e = 65537.
>
>When enciphering with the public exponent, you perform a modular
>exponentiation, using usually the famous "square and multiply"
>algorithm. The complexity of this algorithm can be expressed in modular
>multiplications, and depends upon the binary representation of the
>exponent: each bit (except the first one) in the exponent means one
>squaring, and one extra multiplication if that bit is set to 1. 65537
>has the binary representation 10000000000000001, which means that there
>will be only one multiplication, and 16 modular squarings.
>
>
>e = 3 is the original exponent proposed in the paper from Rivest, Shamir
>and Adleman. However, there is a (real slight) security concern: if
>you encipher the same message three times with three different public
>keys and public exponent 3, an attacker will be able to retrieve the
>plaintext from the three ciphertexts: if you have m^3 modulo N1, N2 and
>N3, you can easily get m^3 modulo N1*N2*N3 by the Chinese Remainder
>Theorem. However, N1*N2*N3 is larger than m^3 (without the modulo), so
>you have m^3 with no modular reduction, and a simple integer cube root
>will give you m.

There are other security fears for small exponents.  I would suggest
looking through some of the papers on Dan Boneh's homepage:
        http://crypto.stanford.edu/~dabo/pubs.html

Such as:

        "Twenty years of attacks on the RSA cryptosystem."
        "Breaking RSA may not be equivalent to factoring."

There may be other papers here relevant to small public exponent RSA also.
A while ago Don Johnson posted some other concerns about using small exponent
RSA.  You'll have to search through the sci.crypt archives to find it.

Scott


------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Steganography question
Date: 31 Aug 2000 01:29:04 GMT

Harris Georgiou <[EMAIL PROTECTED]> wrote:

> security of the actual data transmitted, I want to know how can one be sure
> that the stego in the message won't be noticed (with some high probability),
> and if it is, how difficult it is for someone to pinpoint and extract the
> info data from the message (no matter what he does with them). I don't think
> I've seen any references or postings here on that issue.

I think I mentioned Christian Cachin's work in passing before. 
Here is the full reference : 

Christian Cachin. An information-theoretic model for steganography. In
David Aucsmith, editor, Information Hiding, 2nd International Workshop,
volume 1525 of Lecture Notes in Computer Science, pages 306-318.
Springer, 1998. Revised version, June 2000. Copyright © Springer-Verlag.

http://www.zurich.ibm.com/~cca/papers/stego.ps

It is probably worth looking at. 

Now it seems to me that you can define a notion of security for
steganography in a manner similar to the notion of "indistinguishability
of encryptions" -- you give the adversary two copies of what looks
like the same file and tell him "one and only one of these files has a
stego'd message in it. Which one is it?" 
The adversary's advantage over blind guessing should be negligible. 
Note that we've said nothing about whether the adversary can recover
any of the stego'd message, just whether the adversary can determine
if such a message exists!

Framing it like this then allows one to start looking for adaptations of
some of the techniques and tricks from cryptography. In particular, the
notion of probability ensembles which are indistinguishable by any
computationally bounded adversary seems to be relevant -- if you
consider the two files as each being drawn from probability 
distributions -- one of them the "normal message" distribution 
and the other one the "stego'd message present" distribution, suddenly 
this notion seems to make sense and may allow you to make use
of tools previously developed in the cryptographic setting.
 
For instance, there is a very strong notion of "semantic security" in
cryptography, which roughly and informally stated says that an adversary
learns "no new information" from a ciphertext. Anything the adversary
could have computed with the ciphertext, it could have computed without
the ciphertext. This sounds like a pretty strong notion, and yet it is
equivalent to the "indistinguishability" notion in the cryptographic
setting. 
 
Yevgeniy Dodis recently showed in his thesis that, in fact, this
equivalence runs deeper : you can prove equivalence of the two notions
with no reference to encryption at all. Just statements about random
experiments performed by the adversary. He used this to settle an
open question in the theory of all or nothing transforms (i.e. "Is
AONT semantic security == AONT indistinguishability?"), but it is actually
more powerful than that -- it seems to me to say that *ANY* definition of
indistinguishability for *ANY* setting will give rise to an equivalent
definition of semantic security for that setting and vice versa. 

So this means that you define a notion of indistinguishability-based
security for steganography and you get this other notion for free...
Most times, it's easier to formulate something with indistinguishability,
because it's intuitive just what it is we want to be hard to distinguish
-- in the case of steganography, we want an adversary to be totally
baffled by the choice between a file with message and a file without. 
I do wonder, however, if there are settings in which a semantic security
notion is easier to come up with or to work with, and then this general
equivalence will allow us to devise hitherto unknown definitions of
indistinguishability. Right now I'm playing with definitions of
security/anonymity for network protocols, in which this might or might not
come in handy. 

Anyway, the above is speculation off the top of my head. I suggest reading
Cachin's paper for serious thought. Also the Information Hiding Workshop
proceedings. Also Ross Anderson and Fabien Petitcolas's absolutely 
superdoublepluswonderful bibliography of steganography at 
http://www.cl.cam.ac.uk/users/fapp2/steganography/bibliography/index.html

-David

------------------------------

From: Ron B. <[EMAIL PROTECTED]>
Crossposted-To: sci.space.shuttle,talk.politics.crypto
Subject: Re: QKD and The Space Shuttle
Date: Thu, 31 Aug 2000 01:35:10 GMT

On Wed, 30 Aug 2000 20:53:22 +0100, "Richard Bembridge"
<[EMAIL PROTECTED]> wrote:

>What kind of payloads can the Shuttle handle?
>What is the typical altitude of the Shuttle when in Low Earth Orbit (LEO)?
>What is the current record for QKD through free-space over a non-folded
>(i.e. without the use of mirrors that may compensate for turbulence) path?
>What does this record equate to in terms of 'straight up in the air'?
>What are the mass and dimensions of the receiver (Bob) in the record-holding
>apparatus?
>Would it be possible to make these smaller (mass and dimension)?
>Could they be fitted into the Shuttle?
>Does anybody see where this is leading?
>
>I wonder if...

Good questions.  _But_ how is this relevant to sci.crypt and
talk.politics.crypto ?

------------------------------

Date: Thu, 31 Aug 2000 02:28:22 +0100
From: David Hopwood <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Crossposted-To: alt.security.pgp,comp.security.pgp.discuss
Subject: Re: PGP 6.5.8 test: That's NOT enough !!!

=====BEGIN PGP SIGNED MESSAGE=====

[EMAIL PROTECTED] wrote:
> 
> Aye, for once we agree.  PGP have handled this problem extremely poorly
> from both a technical and PR perspective.
> 
> I can't see a "pretty" way forward...As I stated in a recent mail to
> PGP-Users mailing list, possibly the best thing to do is draw a line
> under this mess and release a new revised RFC that solves this
> problem.

RFC 2440 does 'solve' this problem, by not including support for ADKs.
The bug is in a feature that has never been included in any RFC (and
all due credit should go to those who made sure that was the case).

As has already been pointed out, though, the fix in 6.5.8 is not
sufficient; forged ADKs should produce a Big Scary Warning. Other
implementations should do the same even when they don't support ADKs,
in order to make distributing keys with a forged ADK more risky for
an attacker.

- -- 
David Hopwood <[EMAIL PROTECTED]>

Home page & PGP public key: http://www.users.zetnet.co.uk/hopwood/
RSA 2048-bit; fingerprint 71 8E A6 23 0E D3 4C E5  0F 69 8C D4 FA 66 15 01
Nothing in this message is intended to be legally binding. If I revoke a
public key but refuse to specify why, it is because the private key has been
seized under the Regulation of Investigatory Powers Act; see www.fipr.org/rip


=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv

iQEVAwUBOa20YDkCAxeYt5gVAQGbxgf8CJqZI3kNDO3VGY2AajcoBTQGu8rWSqlX
e1s/m0THYmMr1XLDj5QFYZAOKmiKYPmABh+RRJxBN0jt0PulRgv9it6UX8hnySoN
TPsr/GJbD/P9FqHXN/0a442cluaFEgzWDWl4tYimGA5NfxDEH8w91lJBGAfWTrpf
hxIxPiLDvZkU3+3GhpVmL6XgXye5UlEmrpZP+PmjWpEKJkrtcaZhJvLZOfp2TlRn
xuxOq+2lWs7uiPfLyAQyEqr0qi75q5oOEer4GfHA3vrXHAiveJUKZsSGzRtt/FlV
xymyOIHr9sBaEooGglDz2S0SsvLIrWUYLQ09T2y2000L/rXpAFVH+g==
=IjZq
=====END PGP SIGNATURE=====


------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Bytes, octets, chars, and characters
Date: Wed, 30 Aug 2000 22:25:41 -0400

"S. T. L." wrote:
> <<ISO-646>>
> Naw, this is some subset of ASCII, right?
> Hence the "need" for trigraphs.  Heh heh.

No, ISO 646 actually specifies a family of national variants of
USASCII, with alternate glyphs (e.g. Æ and Ø) for several code values.
All of the codesets, however, embed the "invariant subset",
which unfortunately did not include some of the C source
code characters.  Some of the European WG14 members,
particularly the Danes, were insistent on being able to
enter C source code *in some standard way* using just the
keys on their existing keyboards.  Therefore, trigraphs
were devised, with the property that they get converted to
the equivalent C source character nearly as soon as
possible during translation.  Later on, a different Dane
in a different standards activity (C++) agitated for
standard 2-character replacements instead of 3-character
replacements.  Therefore, the digraphs, which are converted
much later in translation than are trigraphs (this is NOT
a good property, but was arrived at through negotiation and
compromise).  <iso646.h> was added at the same time to meet
related demands without reserving new keywords.

In my opinion, the whole business was a mistake.
As it turns out, every conforming implementation *still*
has to somehow provide support for distinct single source
characters for each of the C basic source characters.
Details of mapping between interchange formats and those
host characters should have been kept outside the language
specification.

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Crossposted-To: sci.space.shuttle,talk.politics.crypto
Subject: Re: QKD and The Space Shuttle
Date: 31 Aug 2000 02:05:31 GMT

In sci.crypt Ron B. <[EMAIL PROTECTED]> wrote:

> Good questions.  _But_ how is this relevant to sci.crypt and
> talk.politics.crypto ?

"Give me a random beacon and I shall secure the world..." - some greek
                                                            misquoted.

There are several cryptographic protocols which make use of public random
sources available to all parties. The public random source is used
as a kind of very weakly trusted third party, providing data known
to be independent of all other parties. 

* Rabin's contract signing protocol. Divide time into rounds. 
  At each round, the beacon broadcasts a number in 0..N.
  At each round I pick a number in 0..N and broadcast it.
  At each round you pick a number in 0..N and broadcast it.

  We consider ourselves both bound to the contract if and only if
  all three parties (you, me, and the beacon) pick the same number
  in the same round.

  Now I can withhold my assent forever, or agree with you on the first
  round (if I wait until after you and the beacon broadcast), but
  there is never a point where you are bound and I am not or vice
  versa. 

* Maurer's "secret key agreement by public discussion." Easier to
  demonstrate than explain. See this web page :
  http://www.inf.ethz.ch/department/TI/um/research/keydemo/

* There is a "random string in the sky" model used in some papers,
  in which a _fixed_ random string is available to all parties. 
  A satellite could provide such a string, although all  parties
  would have to store it, since the satellite won't be repeating bits. 

* Aumann-Rabin protocol for stretching a shared secret key in the 
  "Bounded storage space model." 
  The idea is that the satellite provides an unending stream of
  nonrepeating random bits. The two communicating parties, Alice 
  and Bob, share a secret key which acts as an index into these 
  bits. So they only have to store a small fraction. 
  The eavesdropper does not know the key, so does not know the
  bits Alice and Bob are storing. 
  So now we assume that the eavesdropper has some bound on its
  storage space. Maybe a big bound, doesn't matter. This is 
  all we assume (no computational assumptions).
  At some point it will run out of space and "overflow", losing
  bits. Because the satellite is random, it can't get those bits back. 
  Alice and Bob exploit this to stretch their short secret into 
  something much longer by picking bits which the adversary probably
  couldn't store. 


The problem with all of these protocols is that if an adversary can
replace the random beacon with his own source, all bets are off.
So some people would *like* to see a satellite in the sky broadcasting
random bits to the world. There will still be issues with ground-side
jamming and with authentication of the satellite, though, which are
not yet fully ironed out (at least not that I've seen). 

-David

------------------------------

From: qun ying <[EMAIL PROTECTED]>
Subject: Re: Patent, Patent is a nightmare, all software patent shuld not be allowed
Date: Thu, 31 Aug 2000 02:44:03 GMT



There are 2 more patents:
US 5615268
http://www.patents.ibm.com/details?&pn=US05615268__
A system and method is provided that implements digital encryption for
the electronic transmission, storage and retrieval of authenticated
documents and that enables the establishment of the identity of the
originator of an electronic document and of the integrity of the
information contained in such a document. Together these provide
irrevocable proof of authenticity of the document. The system and
method make it possible to provide "paper-less" commercial
transactions, such as real-estate transactions and the financial
transactions secured by real estate. A Certification Authority provides
tools for initializing and managing the cryptographic material required
to sign and seal electronic documents. An Authentication Center
provides "third party" verification that a document is that executed
and transmitted by the document's originator. The Certification
Authority and the Authentication Center together provide for
third-party assumption of the risk of the authenticity of documents, an
audit trail of the documents, and storage and retrieval of the
documents by authorized parties. The system and method eliminates the
need for "hard copies" of original documents as well as hard-copy
storage. Retrieval of an authenticated document from the Authentication
Center may be done by any number of authorized parties at any time by
on-line capability.

and US 5748738
http://www.patents.ibm.com/details?&pn=US05748738__
Methods and apparatus are provided that implement digital signing
and/or encryption for the electronic transmission, storage, and
retrieval of authenticated documents and that enable the establishment
of the identity of the originator of an electronic document and of the
integrity of the information contained in such a document. Together
these provide irrevocable proof of authenticity of the document. The
methods and apparatus make it possible to provide "paper-less"
commercial transactions, such as real-estate transactions and the
financial transactions secured by real estate. A Certification
Authority provides tools for initializing and managing the
cryptographic material required to sign and seal electronic documents.
An Authentication Center provides "third party" verification that a
document is executed and transmitted by the document's originator. The
methods and apparatus eliminate the need for "hard copies" of original
documents as well as hard-copy storage. Retrieval of an authenticated
document from the Authentication Center may be done by any number of
authorized parties at any time by on-line capability.


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Crossposted-To: comp.lang.c
Subject: Re: blowfish problem
Date: Wed, 30 Aug 2000 23:22:46 -0400

"Trevor L. Jackson, III" wrote:
> On this I have a slight preference against your preference and in
> favor of the current standard.  My rationale is that there are some
> very large character types, including an ISO standard for 32-bit
> characters in which Unicode is merely "plane zero" of a segmented
> 16:16 space.  I would find it uncomfortable to use a system in which
> the char type was 32 bits wide.

Here is a list of the essential differences between the two
approaches (+ means advantage, - means drawback, = means neutral):

"short char" approach:
        + exactly one type (char) for use with text characters.
        + string literals "..." all continue to be array of char.
        + only one set of library functions for handling text
          (the traditional <ctype.h>, str*(), <stdio.h>).
        - byte-array (storage-unit) operations involve a new
          type (short char, to avoid introducing a new keyword).
        = sizeof measured in short chars.
        - sizeof(char) not necessarily 1.
        + byte (short char) could conceivably be just 1 bit.
        = binary-oriented functions (mem*(), malloc()) use byte
          units (short chars).
        + "byte" and "character" are never used synonymously.
        + stream orientation (binary vs. text) determined by
          the mode argument (e.g. "rb" vs. "r").
        + conversion between external multi-byte codings and
          internal char coding, including maintaining shift
          state, is performed automatically in the I/O support.
        - programs that need to access the multibyte encodings
          must open the stream in binary mode.
        - stdin/stdout are stuck in text mode (this could have
          been addressed by a new function setmode(fp,mode).)

"wchar_t" approach:
        - two types explicitly associated with text characters;
          char for a subset including traditional C source
          characters, and wchar_t as an internal code for more
          extensive character sets.
        - two kinds of string literals "..." and L"...".
        - duplicate sets of library functions for handling text
          encoded in the two different character types.
        + byte (storage-unit) uses existing type char.
        = sizeof measured in chars.
        + sizeof(char) necessarily 1.
        - byte (char) must be at least 8 bits.
        = binary-oriented functions use existing type char.
        - "byte" and "character" are used nearly synonymously;
          "wide character" and "multibyte character sequence"
          are introduced to describe the non-byte manifestations
          of characters.
        - stream orientation determined by which of the duplicate
          functions is first used with the stream.
        - multibyte encoding is visible to the application and
          there are even explicit functions to translate from
          m.b.c. sequences to wchar_t arrays and vice versa;
          for shift-state encodings, the state must be handled.
        + programs that need to access multibyte encodings
          can do so merely by using char arrays.
        + stdin/stdout are stuck in text mode, but that doesn't
          affect whether wide or narrow conversion occurs.

I have tried to be fair about the +/-/= assessments.  These
points were debated over and over and there is no point in
arguing about them now.  The C standard definitely
chose the wchar_t approach.  While technically the language
is still open to adding the short char approach too, I don't
envision that doing so would cause char to converge to wchar_t
at some later date.  The one real problem to ever changing
this aspect of C is that we have officially locked in the
guarantee that sizeof(char)==1, so strictly conforming programs
are now *with confidence* using constructs like:
        char foo[5] = "XYZZY";
        char *p = malloc(5);    // instead of 5*sizeof(char)
        memcpy(p,foo,5);
Before that guarantee was standardized, it could be argued
that such code was making an unwarranted assumption about the
relationship between the units of storage allocation (bytes)
and the units of character storage (chars).  Some programmers
(including me) were careful to include the sizeof(char) factors.
But since 1989 we are stuck with honoring this guarantee, and
it would undoubtedly break a *lot* of code to change it now.

Instead of "short char" we might look into some form of
requirements-based type specification, e.g.
"int(unsigned,width=1,nopadding)" meaning a single addressable
bit that could be used in arrays.  While that means that there
would be several kludgy uses of CHAR_BIT in our bit-array object
allocations (via malloc), at least we would finally have them.
There are many other reasons for wanting a general facility of
this kind, e.g. to streamline access to externally-imposed
record formats, and if there is some experience with trial
implementations of something along these lines, we would consider
adopting it for the next revision of the C standard.

> This cannot be the end of the story.  What benefit is there to
> retaining both char and byte terms?  What disadvantage would
> accrue if the terms were explicitly defined to be synonyms?

"char" designates a specific type.  "Byte" denotes a specific
function.  The fact that these are closely linked in the C
spec does not imply that the meanings are interchangeable.
A better objection, which in fact we received during public
review of the draft standards, would be to the use of the
word "character" to mean different things.  We tried to find
and fix all uses of that word so that "character", "wide
character", "character type", "char", and "byte" are all used
consistently with the meanings I described in my previous note.
Much of the confusion is brought on by the multiple kinds of
character implicit in the wchar_t approach.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Crossposted-To: comp.lang.c
Subject: Re: blowfish problem
Date: Wed, 30 Aug 2000 23:35:12 -0400

"Trevor L. Jackson, III" wrote:
> "Douglas A. Gwyn" wrote:
> > "unsigned char" is a *type*; "byte" is a unit of storage.  Those are
> > quite different concepts.
> They _can_ be quite different, but in C-standard-ese they aren't.

Yes, they are different concepts, although there is a logical
connection between them introduced by the C specification.
Types have attributes that storage does not, and vice versa.

Your argument is essentially an analytic model for definitions,
in programming terms a macro-expansion approach, which is not
surprising since that disease has taken over most of academic
philosophy.  But it doesn't reflect the way that concepts are
actually formed and used.  Two distinct concepts can turn out
to have exactly the same set of referents, just as two distinct
math functions can have exactly the same set of solutions (zeros).

------------------------------

From: "Scott Fluhrer" <[EMAIL PROTECTED]>
Subject: Re: Idea for creating primes
Date: Wed, 30 Aug 2000 20:19:06 -0700


<[EMAIL PROTECTED]> wrote in message news:8ojb0o$kf3$[EMAIL PROTECTED]...
> In article <8oj7hd$mqg$[EMAIL PROTECTED]>,
>   "Scott Fluhrer" <[EMAIL PROTECTED]> wrote:
> > Actually, it's about as fast as a few passes with the Miller-Rabin
> test.  In
> > both cases, you pick a g, determine g^k mod p, where k is about as
> large as
> > p, and then run quick tests on the results.  With Tom's test, you run
> it
> > once for every prime dividing p-1 (plus a few times to find an
> appropriate
> > g).  With Miller-Rabin's test, you run it until you get error
> probability
> > "small enough".  And, in both cases, when p is not prime, then almost
> all
> > the time, you'll find it out the very first time you compute g^k mod
> p.
> > And, of course, with both methods, you do some quick composite checks
> (eg.
> > checking for small factors) first.
>
> That's slightly right.  My problem works the other way in that as soon
> as you find a primitive generator you know it's prime.  However when
> you start with g=2 "g^k mod p" will not tell you right away that it's
> prime or not, unlike MR which will almost always immediately
> say "composite" if it is composite.
Actually, if you start with k = (p-1)/2, then if you get anything other than
1 or -1, you know you have a composite, and the vast majority of composites
will fail here.  This brings the efficiency to MR standards...

And, if p is prime, and with the factorization of p-1 you mentioned, a
random g will almost always be either a quadratic residue (g^((p-1)/2)==1)
or a generator.  There are members of other subgroups, but the probability
of running into one is approximately 2^-128 for the parameters you
mentioned.  So, in terms of time efficiency, the possibility of rejecting a
g because you stumbled into one can be ignored.

> With my method however you can
> discard values of g once you find a subgroup they belong to, so at most
> you will have to try a few times per base 'g'.
>
> > What I suspect your book says that it's a slow way to prove a number
> p prime
> > assuming you don't know a priori the factorization of p-1, because
> factoring
> > p-1 is a bear.  With Tom's approach, that isn't a problem.
>
> Well that's why the factors of (p - 1) are small, it makes proving they
> are prime is easier.  And of course you can recurse this to use 64-bit
> primes to make the 128-bit primes, etc...
However, if the factors of p-1 are too small, that makes factoring p*q
easier.  That means a recursive approach may be better...

--
poncho





------------------------------

From: Eric Smith <[EMAIL PROTECTED]>
Subject: Re: Destruction of CDs
Date: 30 Aug 2000 20:41:36 -0700

I wrote:
> Better yet, use the CD-RW, erase it multiple times, then physically
> destroy it.

Jeffrey Williams <[EMAIL PROTECTED]> writes:
> Why erase it multiple times if you're going to physically destroy it?  It
> would almost certainly take less time to melt the disk in the fireplace (or
> the oven) than to erase it multiple times.

As a precaution in case the "physically destroy it" stage isn't 100%
successful.  As Paul Rubin noted, several things you might do to
physically destroy a disc actually still may leave some recoverable
bits around.

Just as "physically destroy it" is just a precaution in case the "erase
it multiple times" isn't 100% successful.

If implemented perfectly, either part would be sufficient on its own.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
