Cryptography-Digest Digest #932, Volume #8 Tue, 19 Jan 99 17:13:03 EST
Contents:
Re: Metaphysics Of Randomness (Darren New)
Re: Metaphysics Of Randomness (R. Knauer)
Re: Metaphysics Of Randomness ("Trevor Jackson, III")
Re: Metaphysics Of Randomness ("Trevor Jackson, III")
Re: Metaphysics Of Randomness (R. Knauer)
Re: Metaphysics Of Randomness (Coen L.S. Visser)
Re: Too simple to be safe ("Kazak, Boris")
Re: Working out session key ("Kazak, Boris")
Newbie Hashing Question (chris)
Re: Java speed vs 'C' (was Re: New Twofish Source Code Available) (Ian Miller)
Re: sci.crypt intelligence test. (Robert I. Eachus)
----------------------------------------------------------------------------
From: Darren New <[EMAIL PROTECTED]>
Subject: Re: Metaphysics Of Randomness
Date: Tue, 19 Jan 1999 19:46:57 GMT
R. Knauer wrote:
> Recall that the machine here is exactly specified - the Universal
> Turing Machine (UTM).
And I ask again, which "the" UTM? I answered this in another post, though,
so we can just pick up this thread there. A UTM isn't a specific
individual thing, as the machine it simulates isn't fixed. I.e., there
will be a different UTM to simulate programs on a machine with a tape of
binary cells versus a tape of trinary cells.
> >but it's pretty easy to see that one can calculate an arbitrarily tight
> >bound on the percentage of programs that halt, simply by simulating all
> >the machines in parallel, knowing how many there are (2^N) and how many
> >have halted so far. The longer you let it run, the more
> >programs-that-will-ever-halt will have halted.
>
> It would seem that such simulations would be the same as actually
> running the programs on a UTM, which is not the same as a formal
> proof. Chaitin accepts the need for Experimental Mathematics.
Right. But how can a number be "random" if I can calculate it to an
arbitrary degree? That's the question. The answer is that Chaitin is
using the term "random" in a non-intuitive way, I believe. Uncalculable
doesn't mean random, in normal parlance.
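The counting argument quoted at the top of this post ("simulating all
the machines in parallel") can be sketched in a few lines of Python.
The toy machine model below is my own stand-in for a real program
enumeration, not anything from the thread: a 1 under the program
counter halts the machine, a 0 advances the counter with wraparound,
so the all-zero program runs forever.

```python
from itertools import product

def step(state):
    """One step of a toy machine over bit programs: a 1 at the program
    counter halts the machine, a 0 advances the counter, wrapping at
    the end (so the all-zero program loops forever)."""
    program, pc = state
    if program[pc] == 1:
        return state, True
    return (program, (pc + 1) % len(program)), False

def halting_lower_bound(n_bits, max_steps):
    """Dovetail all 2^n_bits programs, one step per round, and return
    the fraction known to have halted so far -- a monotone lower bound
    on the true halting fraction for this machine."""
    live = [(p, 0) for p in product((0, 1), repeat=n_bits)]
    total = len(live)
    halted = 0
    for _ in range(max_steps):
        survivors = []
        for state in live:
            state, done = step(state)
            if done:
                halted += 1
            else:
                survivors.append(state)
        live = survivors
    return halted / total
```

The returned fraction can only grow as `max_steps` increases, which is
exactly why it is an arbitrarily tight lower bound but never a proof
about the programs still running.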
> Anyway, all Chaitin needs is for some programs to be indeterminate
> regarding whether they halt or not. If some programs cannot be proven
> formally to halt or not, then the bits of his Halting Probability
> Omega are indeterminate, which he terms random in his algorithmic
> information theory.
Actually, any program on a finite TM can be proven to either halt or not
in finite time. So we have to base all calculations on things that
cannot exist anyway.
> It appears that you are trying to sweep the entire Turing Halting
> Problem under the rug. Because of the Halting Problem there are real
> numbers that are uncomputable. Chaitin claims that Omega is one of them
> which has fundamental significance for number theory.
That's fine. The fact that it's uncomputable does not make it random,
however. Any given program either halts or it doesn't. The fact that you
cannot write a program to determine that for every program does not
change the fact. The fact that there are uncomputable numbers is neat.
But they're not random, unless that's how you choose to define random, in
which case we're done.
--
Darren New / Senior Software Architect / MessageMedia, Inc.
San Diego, CA, USA (PST). Cryptokeys on demand.
"You could even do it in C++, though that should only be done
by folks who think that self-flagellation is for the effete."
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Metaphysics Of Randomness
Date: Tue, 19 Jan 1999 19:17:24 GMT
Reply-To: [EMAIL PROTECTED]
On Tue, 19 Jan 1999 18:36:00 GMT, Darren New <[EMAIL PROTECTED]>
wrote:
>Where does one get the definition of "the" Universal Turing Machine? I
>thought there were as many UTMs as one cared to create.
I always thought there was only one UTM possible, that is, the Turing
Machine on which all TM programs could be run.
>Secondly, the first sentence makes no sense. You seem to be saying "if
>the program 1010101010101010 halts, then no program that starts with
>1010101010101010 can be considered in our calculations." But if you know
>whether 101010101010 halts, then you've pretty much blown the whole idea
>behind Omega, which is that you can't tell whether that program halts.
Omega is constructed from all possible programs that can be run on a
UTM. If a particular program is indeterminate as to whether it halts
or not, then Omega is indeterminate to that extent. Just because some
programs can be shown formally to halt does not make Omega
determinate.
>What would be invalid about a program that halts without executing all
>its instructions? Consider:
>
>begin
> output 1
> halt
> output 0
>end
>
>Is that valid? If not, how do you know? If so, what does your last
>sentence there mean?
I will have to refer you to Chaitin's comments in his paper entitled
"Randomness In Arithmetic And The Decline And Fall Of Reductionism In
Pure Mathematics":
"...no extension of a valid program is a valid program."
Later he goes on to comment:
"In 1974 I redid algorithmic information theory with 'self-delimiting'
programs and then I discovered the halting probability Omega"
"Omega cannot be defined if you think of programs in the normal way".
>If a program starts with 1010101010101010 and halts, how do you know a
>longer program that starts with the same string would not halt?
>If a program starts with 1010101010101010 and never halts, what makes
>you think a longer program that starts with the same string is going to
>halt?
>It sounds like you're excluding from consideration some programs that
>halt, without being able to tell which they are. If that's not the case,
>please define what you mean by "valid program".
See Chaitin for that.
>Perhaps he is mistaken, or using different terms. Clearly, whether any
>given program halts or not is 100% non-random.
It is for some programs. That is what is behind the Turing Halting
> Problem - the indeterminacy of some programs with regard to their
ever halting or not.
>Otherwise, Omega wouldn't be well-defined.
Why not? Chaitin claims that Omega is well-defined as long as you only
consider the shortest programs that halt and exclude any extensions to
those programs. He does that to make Omega lie in the range:
0 < Omega < 1
If you consider all programs, including extensions to programs that
halt, then Omega is infinite since there are an infinite number of
programs that halt under that condition.
He says it took him 10 years to figure that out, and that his earlier
algorithmic information theory was wrong until he fixed it.
>Being random and being algorithmically incalculable in
>finite time are different, just as being random and having the value Pi
>are different, in spite of the fact that it's impossible to calculate
>the value of Pi precisely in finite time.
First I would ask you to define exactly what you mean by the term
"random" as used above. Then depending on how you define it, whether
it is the kind of randomness used in crypto for the OTP or the kind
that Chaitin uses for his algorithmic complexity theory, I would then
ask you to make a case for your statement above.
I have maintained from the outset that the kind of randomness that
Chaitin uses for algorithmic complexity theory is not the same as the
crypto-grade randomness needed for the provably secure OTP system.
The main reason for my assertion is that Chaitin excludes some
possible numbers as non-random based on the fact that they can be
significantly reduced in complexity - and such exclusions are not
permitted with a TRNG.
Bob Knauer
"Whatever you can do, or dream you can, begin it. Boldness has
genius, power and magic in it."
--Goethe
------------------------------
Date: Tue, 19 Jan 1999 15:37:20 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Metaphysics Of Randomness
R. Knauer wrote:
> But, what is meant by "real" randomness?
>
> If you take a true random sequence, a key K produced by a TRNG and you
> mix it (xor) with a non-random sequence, a message M made out of
> intelligible text, you get what looks like a random sequence, the OTP
> cipher C:
>
> C = K xor M.
>
> But once you do that mixing, the key K is no longer random. It is no
> longer one of the possible sequences of a given length produced
> equiprobably by a TRNG. It becomes a particular sequence having weight
> or significance 1. You have converted one sequence out of a possible
> 2^N sequences into the one and only sequence of significance for
> purposes of the cipher C.
This is silly. The randomness of K cannot in any sensible way be dependent
upon the post processing of other data with K. In fact C in the example above
is just as random as K. The key concept here is that the value of K was
selected from a pool of values with identical probabilities.
THE SAME IS TRUE OF C. Knowing only M, C is completely unpredictable.
RANDOM. Both statistically and cryptographically.
> For example, because of its significance (participation in the
> cipher), you can mix the cipher and the key to get the non-random
> message back:
>
> M = K xor C.
>
> That alone shows that the key K is no longer random, otherwise it
> could not have resulted in the non-random message M when mixed with
> the cipher C. IOW, if K were truly random, how could K xor C, the
> mixture of two random sequences, result in a non-random message?
> Entropy considerations would not permit this to happen if the key K
> were totally random. Therefore it is not random once it is used to
> create the cipher C.
Here you have strained the definition of entropy past the breaking point.
> It's as if you collapsed the random wavefunction of the TRNG when you
> mixed a particular output sequence (the key) with the message.
Nice allusion, but invalid. Operations on probability amplitudes are not
invertible. Mathematical operations are invertible. Even those whose operands
are entropic quantities.
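Trevor's claim that xor is invertible regardless of any "collapse" of K
can be checked directly. The message and key below are arbitrary
stand-ins; `secrets.token_bytes` plays the role of the TRNG draw.

```python
import secrets

def xor_bytes(a, b):
    """Byte-wise xor of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # K: uniformly random draw

cipher = xor_bytes(key, message)         # C = K xor M
recovered = xor_bytes(key, cipher)       # M = K xor C

assert recovered == message
```

Nothing about K changes when it is used; the recovery works because xor
is its own inverse, not because K somehow loses its randomness.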
------------------------------
Date: Tue, 19 Jan 1999 15:46:18 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Metaphysics Of Randomness
Darren New wrote:
> > It is for some programs. That is what is behind the Turing Halting
> > Problem - the indeterminacy of some programs with regard to their
> > ever halting or not.
>
> No. It's not. If I give you a deterministic program, and you run it on a
> deterministic machine, it will either halt, or it will not. If that's
> not the case, then there's a basic flaw in your logic. It halts, or it
> doesn't. True or not true.
Not quite. The missing phrase is "... halt in finite time". For any
specific amount of time we can consider three kinds of programs: those
that halt within the time limit, those that halt after the time limit,
and those that never halt. The last two sets are intertwined in that
for any given finite time limit we cannot tell them apart.
Thus a program either halts within the time limit, or has an
indeterminate result (halt or not).
> The fact that you cannot look at the program and with 100% assurance
> determine whether it halts does not change the fact that it either halts
> or it doesn't halt.
>
> If you're defining "random" this way, then I think Omega is pretty
> trivial. Omega is random because we define it as something that's by
> definition random.
>
> > >Otherwise, Omega wouldn't be well-defined.
> >
> > Why not? Chaitin claims that Omega is well-defined as long as you only
> > consider the shortest programs that halt and exclude any extensions to
> > those programs.
>
> That would seem to make it ill-defined to me! You're defining Omega in
> terms of the probability of a program from a particular set halting, and
> then you define the set in terms of something that can't be calculated.
> It doesn't take much to see that Omega can't be calculated by
> definition, ignoring its construction.
>
> The paper I read about it mentioned all programs less than N bits long.
> It seemed to be saying that the probability that a program chosen at
> random from that list halts is what he's defining to
> be Omega. That's fine. That's clearly an uncalculable number. But it's
> no more "random" than the sequence of states a non-halting TM goes
> through.
>
> Perhaps I'm reading the wrong papers. The one I read wasn't real
> coherent on how to get from the set of programs to a probability in
> [0..1].
>
> > He does that to make Omega lie in the range:
> >
> > 0 < Omega < 1
> >
> > If you consider all programs, including extensions to programs that
> > halt, then Omega is infinite since there are an infinite number of
> > programs that halt under that condition.
>
> How do you exclude them from consideration? If you just *do* that by
> fiat, I can come up with a much simpler Omega-like number based on
> nondeterministic TMs, which are just as abstract.
>
> > >Being random and being algorithmically incalculable in
> > >finite time are different, just as being random and having the value Pi
> > >are different, in spite of the fact that it's impossible to calculate
> > >the value of Pi precisely in finite time.
> >
> > First I would ask you to define exactly what you mean by the term
> > "random" as used above. Then depending on how you define it, whether
> > it is the kind of randomness used in crypto for the OTP or the kind
> > that Chaitin uses for his algorithmic complexity theory, I would then
> > ask you to make a case for your statement above.
>
> It seems pretty clear to me that neither definition works well.
>
> Obviously, Omega can't be random in the way an OTP is random, since
> everything going into the calculation of Omega is deterministic. Omega's
> randomness comes from the fact that some things take forever to
> calculate, not from nondeterminism. Obviously Omega isn't going to be
> very close to 0 (as there are a lot of programs that halt), nor is it
> going to be very close to 1 (as there are a lot of programs which
> don't). Hence, not all values are equiprobable, and indeed one can put a
> definite upper limit on Omega that is less than 1, assuming you use the
> construction with all programs <N bits long.
>
> In the "uncalculable" sense, from what I understand having read one of
> Chaitin's lectures, Omega is uncalculable because it is constructed by
> reference to things known to be uncalculable. Basically, the halting
> problem shows that it's impossible to look at a function and determine
> if it's a partial function in all cases, and Omega is the probability
> that any given function in some particular formal system is a partial
> function. Fine. Now, if one assumes there's no finite closed form for
> Pi, then its value too is incalculable in finite time. I cannot,
> obviously, prove that Pi is uncalculable in the same way that Chaitin
> proved that Omega is uncalculable, but perhaps an actual mathematician
> could.
>
> In any case, I don't see the point in using the word "random" instead of
> "uncalculable" since both are commonly used and have different meanings.
> If Chaitin defines "random" in some way other than "uncalculable in
> finite time" then I am misunderstanding him.
>
> --
> Darren New / Senior Software Architect / MessageMedia, Inc.
> San Diego, CA, USA (PST). Cryptokeys on demand.
> "You could even do it in C++, though that should only be done
> by folks who think that self-flagellation is for the effete."
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Metaphysics Of Randomness
Date: Tue, 19 Jan 1999 21:18:12 GMT
Reply-To: [EMAIL PROTECTED]
On Tue, 19 Jan 1999 19:46:57 GMT, Darren New <[EMAIL PROTECTED]>
wrote:
>Right. But how can a number be "random" if I can calculate it to an
>arbitrary degree? That's the question.
How can you calculate Omega if the things it is made out of are
indeterminate?
>The answer is that Chaitin is
>using the term "random" in a non-intuitive way, I believe.
He is using the term random in the way that makes sense in his
algorithmic complexity theory. According to him, a number is random if
it cannot be reduced algorithmically by more than 10 bits of
complexity. By that measure, fewer than one number in a thousand is
non-random.
>Uncalculable doesn't mean random, in normal parlance.
Uncomputable means indeterminate. It means that one cannot decide what
the number is. That means that it is random in Chaitin's algorithmic
complexity theory, since it is irreducible.
This can be made to agree with some aspects of what we ordinarily
associate with randomness, although I maintain it does not completely
agree with crypto-grade randomness. For example, if reduction of the
complexity of the number is not possible then it has a high entropy,
which is sometimes associated with randomness. Also, since it is not
reducible, that implies that there is no underlying reason for it to
exist the way it does, and therefore in that sense it is random.
The book "Fire In The Mind" spends one whole chapter comparing and
contrasting Chaitin's concept of complexity randomness with Shannon's
concept of entropy randomness and then relates these two different
notions of randomness with yet a different form of randomness, namely,
quantum mechanical randomness.
In any event, Chaitin maintains that the indeterminacy in the Turing
halting problem is what his theory is based on. So to that extent,
Chaitin's randomness comes from the formal undecidability of certain
algorithmic calculations.
>Actually, any program on a finite TM can be proven to either halt or not
>in finite time. So we have to base all calculations on things that
>cannot exist anyway.
Is a UTM a finite TM? And where does the Turing halting problem come
in?
In one of his papers, Chaitin goes through the Turing argument in
detail, using the Cantor diagonal method. The reason for uncomputable
numbers is that one or more Turing Machine programs never halt and
there is no way to know whether they ever will.
>That's fine. The fact that it's uncomputable does not make it random,
>however.
It does for Chaitin's Omega.
>Any given program either halts or it doesn't. The fact that you
>cannot write a program to determine that for every program does not
>change the fact. The fact that there are uncomputable numbers is neat.
>But they're not random, unless that's how you choose to define random, in
>which case we're done.
That is how Chaitin chose to define random - if something is
algorithmically complex and cannot be reduced it is random according
to his definition. I have maintained that his definition does not
apply directly to crypto, since he excludes numbers that can be output
by a TRNG.
But there are other concepts in his theory that I believe have a
bearing on crypto-grade randomness, albeit in a metaphysical way - or
should that be "meta-mathematical" way?
For example, the indeterminacy seen in Quantum Mechanics has a direct
bearing on the randomness of processes like radioactive decay. There
is no determining factor that causes it - it just happens
spontaneously from vacuum fluctuations, which is what gives
radioactive decay its random nature.
Chaitin argues that the same kind of indeterminacy is behind his form
of randomness, that the algorithmic indeterminacy of whether the kth
Turing Machine program halts or not is a form of randomness. I agree
with you that such randomness is not what we usually mean by the term,
certainly not as it applies to crypto.
Bob Knauer
"Whatever you can do, or dream you can, begin it. Boldness has
genius, power and magic in it."
--Goethe
------------------------------
From: [EMAIL PROTECTED] (Coen L.S. Visser)
Subject: Re: Metaphysics Of Randomness
Date: 19 Jan 1999 21:21:37 GMT
Darren New <[EMAIL PROTECTED]> writes:
>R. Knauer wrote:
[...]
>> Says you. You won't mind terribly if I take Chaitin's word for it
>> instead of yours. Unless, of course, you are prepared to argue
>> conclusively against Chaitin's theories.
>
>Uh, I hate to point this out, but the fact that the halting problem is
>in general unsolvable doesn't mean there are no programs for which
>halting can be proven. I'm not sure what percentage of 2^N programs N
>bits long would halt (obviously it would depend on the machine), but
>it's pretty easy to see that one can calculate an arbitrarily tight
Yes, but the calculation could take a very long time if the complexity
of your algorithm is too great. See [1] for details.
>bound on the percentage of programs that halt, simply by simulating all
>the machines in parallel, knowing how many there are (2^N) and how many
>have halted so far. The longer you let it run, the more
>programs-that-will-ever-halt will have halted.
[1] "An Introduction to Kolmogorov Complexity and Its Applications"
Second Edition
Ming Li, Paul Vitanyi
Springer Verlag
Regards,
Coen Visser
------------------------------
From: "Kazak, Boris" <[EMAIL PROTECTED]>
Subject: Re: Too simple to be safe
Date: Tue, 19 Jan 1999 16:28:59 -0500
Reply-To: [EMAIL PROTECTED]
Paul Rubin wrote:
> > Essentially the convolution is described by the following
> >formula, sorry, it is not easy to reproduce it in the ASCII text file.
> > If K[n] is some random byte sequence and G[i] is some file
> >( of unknown origin, it can be any .BMP or .WAV file):
> > *********************************************************
> > n=N
> > CONV[m] = SUM {G[n+m]*K[n]} ;
> > n=1
> >
> > where N is the number of points in K[n]
> > *********************************************************
> > Obviously, the number of points in G[i] must be big enough to
> >allow this procedure. The random sequence K[n] can be, for example,
> >the first 64 bytes of PI, multiplication is unsigned Mod 255, all
> >eventual carries and overflows are disregarded (anyway, this is meant
> >to be a one-way procedure).
>
> This looks terrible to me. If the K vector is something like bytes
> from pi, there is no entropy in it. So by guessing K and the wav
> file, the whole key file is recovered. Even if K is unknown,
> recovering some keys (and there's no point to using several keys
> instead of a single one unless you think some keys will be recovered)
> gives information about the rest of the keys. Convolution is a linear
> operator (A * (aX + bY) = a(A*X) + b(A*Y)). So if the wav or bmp file
> is known, K can be recovered. If the wav or bmp file isn't known,
> these files still have well-known statistical characteristics
> including likely spectral characteristics. So knowing some keys,
> a "known plaintext" attack against the key file might begin by looking
> at the Fourier transform of the known keys, dividing out by the expected
> Fourier transform of wav files, etc.
=============================
You would be absolutely correct, if:
Multiplication would not be mod 255...
Addition would not be mod 256...
ALL carries and overflows would not be disregarded...
This procedure works on bytes and produces 1 byte as a result
of 64 modular multiplications and 64 modular additions. Whether
there remains any linearity after this, I doubt sincerely...
(this was meant to be a one-way procedure).
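The convolution as Kazak describes it (multiplication mod 255, the
running sum mod 256) can be sketched as follows. The function names
are mine, and indexing is 0-based where the post's formula is 1-based.

```python
def conv_byte(G, K, m):
    """CONV[m] = SUM over n of G[n+m]*K[n], with each product taken
    mod 255 and the running sum mod 256, per the scheme above."""
    total = 0
    for n in range(len(K)):
        total = (total + (G[n + m] * K[n]) % 255) % 256
    return total

def conv(G, K, count):
    """First `count` output bytes of the convolution of file G with
    key sequence K; G must be at least len(K) + count - 1 bytes."""
    return bytes(conv_byte(G, K, m) for m in range(count))
```

The deliberate mismatch between the two moduli is what is supposed to
break the linearity that Paul Rubin's deconvolution attack relies on;
whether it actually does is the open question in this thread.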
> Of course, if the authorities know you're using cryptography
> at all, there's already a problem (Yeltsin issued a "Ukaz" a couple
> years ago banning unauthorized cryptography in Russia). So you may be
> even better off using some kind of steganography instead of encryption.
===========================
Advice worth its weight in gold!
===========================
>
> >> Best would be to use a public key method like Diffie-Hellman key
> >> exchange for every message to create a new secret key, and destroy
> >> each new secret key after using it once.
> >------------------------
> >Yes, I am thinking on going ahead with this, especially since NSA
> >declassified its KeyExchangeAlgorithm (along with Skipjack). This
> >KEA is simple, all the constants are provided in the document, so
> >one must only install an appropriate BigNumber math package.
>
> KEA is just DH, pretty much.
Agreed. Respectfully BNK
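Since KEA is "just DH, pretty much", the core exchange is tiny. The
parameters below are classroom-sized toys of my own choosing; a real
deployment needs the large prime and generator from the declassified
document, plus a big-number package.

```python
# Toy Diffie-Hellman key agreement. The tiny prime p and generator g
# are illustrative stand-ins, NOT real parameters.
p, g = 23, 5

a = 6                    # Alice's secret exponent
b = 15                   # Bob's secret exponent

A = pow(g, a, p)         # Alice sends g^a mod p
B = pow(g, b, p)         # Bob sends g^b mod p

k_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
k_bob = pow(A, b, p)     # Bob computes (g^a)^b mod p

assert k_alice == k_bob  # both arrive at g^(ab) mod p
```

Each new message gets a fresh (a, b) pair, and the shared value is
hashed into a session key and destroyed after one use.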
------------------------------
From: "Kazak, Boris" <[EMAIL PROTECTED]>
Subject: Re: Working out session key
Date: Tue, 19 Jan 1999 16:34:57 -0500
Reply-To: [EMAIL PROTECTED]
Dave Roberts wrote:
>
> I was wondering how easy it would be to work out the key
> used if the plaintext and the encrypted data are both known.
>
> Here's a scenario:
>
> You use a hashed version of your password to encrypt lots of
> files, where the encrypted data is in an obvious format, and
> the encryption algorithm is something like DES3, IDEA etc.
> ie a symmetric algorithm only.
>
> You then accidently leave a copy of one of the plaintext files
> lying around.
>
> Someone takes a copy of both the plaintext, and the relevant
> encrypted data, and attempts to work out the session key used,
> thereby gaining access to all your stuff.
>
> Is this feasible / practical?
>
> Obviously, I know nothing about the internals of these
> algorithms.
>
> TIA - Dave.
===========================
This is called a known-plaintext attack; volumes have already been
written about it. For some ciphers it works, for others
it does not...
BNK
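Whether the attack works really does depend on the cipher. DES3 and
IDEA are designed to resist known plaintext; by contrast, a naive
repeating-key xor (shown below as a deliberately weak example of my
own, not anything proposed in the thread) gives up its key instantly.

```python
def xor_repeat(data, key):
    """'Encrypt' by xoring with a repeating key -- deliberately weak."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

plaintext = b"MEET ME AT THE USUAL PLACE"
key = b"k3y!"
ciphertext = xor_repeat(plaintext, key)

# Known-plaintext attack: xoring plaintext with ciphertext yields the
# repeating keystream, and hence the key, directly.
keystream = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
recovered_key = keystream[:len(key)]
assert recovered_key == key
```

For a modern block cipher, no comparably cheap computation maps a
(plaintext, ciphertext) pair back to the key; brute force remains the
generic option.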
------------------------------
From: [EMAIL PROTECTED] (chris)
Subject: Newbie Hashing Question
Date: Tue, 19 Jan 1999 21:42:25 GMT
is one implementation of Sapphire II-based hashing the same as
all other implementations?
will the results of Updating a Sapphire II hash object with a
given string yield the same results as Updating a different
implementation with the same string?
-c
------------------------------
From: [EMAIL PROTECTED] (Ian Miller)
Subject: Re: Java speed vs 'C' (was Re: New Twofish Source Code Available)
Date: Tue, 19 Jan 1999 21:43:34 +0000
In article <781vl4$qrv$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:
> Actually this is not true. Good assembly language is damn hard to beat.
In the short term, only.
>I have seen old code written on old univacs fly circles around the
>newer compiled stuff even though the newer univacs have a larger
>instruction set.
This is entirely predictable if the compiler code generators have not been
brought up to date. Once they have been, the old assembly code will lose
out.
>Also, with the newer compilers, the designers have not
>given much thought to good design.
On this I disagree radically. Compiler design is now a very well
understood art, and modern processors are designed (among other criteria)
to be easy to optimise for. The best modern compilers generate first rate
code.
>Lots of times an old fortran
>v program will easily beat a newer ascii program in time and size.
This has often far more to do with the simplicity of the Fortran program.
If you were writing for a 1 MHz machine, you did _nothing_ extraneous. If
you put any of the Fortran that I wrote in the 70s on a modern machine, it
would go into orbit.
> I find it hard to believe a pure C version could run any faster
>if you allowed the assembly version to run on the same processor.
Well, you ought to try it some time. To take an extreme example, I believe
8086 assembler will run on a Pentium. Do you really believe it is going to
out-perform well optimised C?
>Sure, maybe it goes 25% faster on the 68030 hardware than the
>assembly on the 68020, but that might be due to HARDWARE speed
>increases and not SOFTWARE.
No. That wasn't my comparison. I was comparing 68020 assembler running on
68030 with C code running on a 68030. It was the _same_ hardware. The
performance difference was entirely in the software. Of course as the
68020 assembler was only using 68020 instructions, it might as well have
been running on a 68020 of the same clock-speed. That is its problem.
With a few extremely rare exceptions, assembler is a waste of time. It is
rarely used these days but still used orders of magnitude too much.
Ian
------------------------------
From: [EMAIL PROTECTED] (Robert I. Eachus)
Crossposted-To: talk.politics.crypto
Subject: Re: sci.crypt intelligence test.
Date: 19 Jan 1999 22:09:08 GMT
In article <[EMAIL PROTECTED]> Terje Mathisen
<[EMAIL PROTECTED]> writes:
> "Trust us, it is really secure, but we cannot tell you how it works."
I usually translate that as, "I have no idea how well it works."
Sometimes, I go find the real experts on the system, but often it is
better to decide the system is unsuited to this application, or any
application for that matter. You don't need the information that
would help decrypt the system to know whether or not the cipher is
suitable for your purposes, but you do have to have some idea of how
the system works. If you can't understand how the protocols work and
why, the system is not secure in your hands.
--
Robert I. Eachus
with Standard_Disclaimer;
use Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************