Cryptography-Digest Digest #320, Volume #9 Thu, 1 Apr 99 17:13:03 EST
Contents:
Re: Random Walk (Herman Rubin)
Re: My Book "The Unknowable" ("karl malbrain")
Re: True Randomness & The Law Of Large Numbers (Herman Rubin)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: Live from the Second AES Conference (Matthias Bruestle)
Re: New York Times article on Differential Fault Analysis (DJohn37050)
Re: Is initial permutation in DES necessary? ([EMAIL PROTECTED])
Re: FSE information anyone? (John Savard)
Re: New York Times article on Differential Fault Analysis (David A Molnar)
Re: FSE information anyone? (John Savard)
Re: Random Walk (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
Re: True Randomness & The Law Of Large Numbers (R. Knauer)
S/MIME interoperability: 40 bits only? (Peter Pearson)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Herman Rubin)
Subject: Re: Random Walk
Date: 1 Apr 1999 14:00:07 -0500
In article <[EMAIL PROTECTED]>,
R. Knauer <[EMAIL PROTECTED]> wrote:
>On 31 Mar 1999 15:34:19 -0500, [EMAIL PROTECTED] (Herman
>Rubin) wrote:
>>The UBP, and related highly specific processes, only exist as
>>theoretical abstractions. All that can be hoped for is a
>>sufficiently close approximation. The laws of nature are not
>>what we write down.
>Are you implying that ideal models are worthless - that the ideal
>model of the circle is not useful?
Not at all. But one should always be aware that they are not
valid for the actual data. Nor does "not being rejected" mean
that they can be used as if correct.
>>The statistical properties are what they are. In using tests,
>>do not jump to conclusions about what they are.
>The only statistical measure of randomness that has been stated here
>in the many debates we have had over two periods lasting a total of a
>year is bit bias. There has been talk of correlation tests, but most
>of the emphasis has been on determining 1-bit bias, since it is so
>easy to do - just count bits and see if there is an excess of one over
>the other.
Lack of dependence, which is not restricted to lack of correlation,
is every bit as important, and it is more elusive. PRNGs have low
or no bit bias, and can be constructed with low correlations, but
can still have poor independence properties. Bit bias can be
corrected for with little loss, but the others are much harder.
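The claim that bit bias can be corrected for with little loss is commonly illustrated with the von Neumann extractor. A minimal sketch (an editorial illustration, not from the thread), assuming the input bits are independent, which is exactly the property the extractor cannot create:

```python
import random

def von_neumann_extract(bits):
    """Debias a bit stream: read non-overlapping pairs, emit 0 for
    (0,1) and 1 for (1,0), discard (0,0) and (1,1).  Output is
    unbiased if the input bits are independent, whatever the bias."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# A heavily biased but independent source: P(1) = 0.8.
rng = random.Random(42)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(200000)]
debiased = von_neumann_extract(biased)
print(sum(biased) / len(biased))      # near 0.8
print(sum(debiased) / len(debiased))  # near 0.5
```

If the input bits are dependent, pairs are no longer exchangeable and the output stays skewed; debiasing fixes the first property but not the second, which is the point above.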
.................
Instead of quoting probabilists who do not understand statistics,
I suggest you look at problems of statistical decision making.
>I hardly consider that "jumping to conclusions".
>>Intuition can be quite dangerous.
>Yeah, like the intuition that time averages are the same as ensemble
>averages.
The very word "ensemble" indicates a lack of understanding of the
problems of physical probability. Probability is more than limiting
relative frequency.
>Or the naive intuition that pseudorandomness is the same as
>true randomness. Or how about the many instances where the law of
>large numbers is misapplied. How about the so-called "law of averages"
>which (falsely) indicates that the lead in a coin-tossing game should
>change sides many times.
Other than those who do not believe in actual randomness, the only
ones who would confuse the two are the ones who use computer packages
without understanding. This may well be a large majority.
...............
>True randomness is at the very essence of quantum mechanics, which
>itself is very counter-intuitive. I can easily understand why so many
>people have a false intuition about true randomness.
>>To someone who works in probability, this idealization is extremely
>>well known. Mathematics does not deal with idealizations of the
>>real world, but with abstractions; its utility is the extent to
>>which real world entities behave like the abstractions.
>Would you put that in such a way that an Informed Layman (tm) can
>understand it. I have absolutely no clue what you just said.
Mathematics deals with formal concepts. Some of these concepts may
have enough similarity with the real world that the real world can
be modeled in the mathematical world. The accuracy of this modeling
is not a problem of mathematics, but of the similarity of reality
and the mathematical universe.
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054 FAX: (765)494-0558
------------------------------
From: "karl malbrain" <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.physics,sci.logic
Subject: Re: My Book "The Unknowable"
Date: Thu, 1 Apr 1999 11:09:19 -0800
Paul Healey <[EMAIL PROTECTED]> wrote in message
news:OhM$[EMAIL PROTECTED]...
> In article, karl malbrain <[EMAIL PROTECTED]> writes,
>
> >UNKNOWABLE along this thread attempts to tie OWNERSHIP to INFORMATION. I
> >believe the original author's work mistakes MATERIAL ownership with
> >USABILITY (ability to apply information to material things)
>
> 'Tie', implies they can be unbound. Does not attempting to tie
> information with data, confound reason ? - one is the essence of the
> other. Essence cannot be applied to anything. Information is subjective,
> in that it requires data. Data is objective, in that you can own it or
> use it. Winning the game, or knowing its rules implies information is a
> a piece of data. Likewise, data has no value without information.
> Your ability to play the game, depends on a method of reasoning that can
> be applied to it. Application implies a method, and this can be
> associated with a patent or copyright.
No. Data is exterior information, Reason is interior complexity. The point
being that one cannot OWN information as a thing in itself, anymore than
someone can OWN the air we all breathe. It's a naturally ACCUMULATING thing
for USABILITY. Application implies WORK, OWNERSHIP of application as a
thing in itself implies a patent or copyright. Where else can you base this
very newsgroup????
> >
> >
> >Here you fall into VULGAR MATERIALISM. Material is Paramount, not
> >Information (which accumulates as a function of TIME)
> >
>
> Materialism for-itself is vulgar, so vulgar materialism is for-itself -
> the principles which belong to it cannot be discerned by its own schema.
> It presents itself as if this is the way things actually are, but in
> fact it is only concerned with the appearance of things. It is just a
> more modern variant of empiricism, as Hegel points out ! Both are
> ethically suspect, just as transcendental logic is, in that they fail to
> ground their schema's. If 20th century logicians, had more respect for
> philosophy, no doubt our universities would be less like monasteries and
> more like oracles.
No, Materialism as a thing in-itself is VULGAR. Historically, it's the
method used to bring down Rome by objectifying subjectivity. You reduce
someone to speech without subjects and impose redundancy instead -- you can
have it only if 50% of you can say it.
> (... snipped, I'm no philosopher ...) What Gorgias, like many a modern
> logician
> fails to do, is differentiate between what a thing is in-itself and that
> which it belongs to. Constructing a schema which can do this, which is
> categorically consistent, I claim has a ground, whereas an arbitrarily
> set of axioms is merely a correspondence with what things are in-
> themselves. This is why, axiomatic deductive models of reasoning fail to
> capture the dynamic nature of reasoning as it is related to language;
> you can have a dialogue with someone else, precisely because you have
> some idea, or can work out and learn what they mean. This requires, that
> what exists has to be knowable.
I think you mean differentiating thing-in-itself-objects from
thing-for-all-subjects. Yes, a schema is required to get any WORK done.
Read ORGANIZATION here. There's not much to go on in a newsgroup, except
perhaps to discern the LISP/SNOBOL machines from the authors. Karl M
------------------------------
From: [EMAIL PROTECTED] (Herman Rubin)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: 1 Apr 1999 14:07:34 -0500
In article <[EMAIL PROTECTED]>, Dave Knapp <[EMAIL PROTECTED]> wrote:
>"R. Knauer" wrote:
>> On Wed, 31 Mar 1999 01:49:21 GMT, Dave Knapp <[EMAIL PROTECTED]> wrote:
>> >Let me make sure I get this straight: you are claiming that Sn and Sn+1
>> >are uncorrelated?
>> Define "correlated".
>"Correlation," which is defined in any first-year book on statistics, is
>the dependence of one value on another.
This is incorrect. It is the signed extent of LINEAR dependence.
>> My understanding of the meaning of "correlated" is that if you know
>> Sn, you can determine Sn+1. But that is impossible because to do so
>> you would have to be able to determine Xn+1, and since it is a random
>> variable, it is not possible to determine it.
>No, that would be _complete_ or 100% correlation.
>Correlation is generally a value between +1 and -1; a correlation of
>+/-1 means that one value is completely determined by the other.
>A correlation of 0 means that the two values are independent.
This is only true under certain strong assumptions about the joint
distribution. One can have zero correlation and very high dependence.
>Sn+1 depends, to some extent, on Sn; they are not completely
>independent. Therefore they are correlated.
Suppose that Sn is uniformly distributed from -1 to 1. Suppose
the sign of Sn is changed by some random variable independent
of Sn to form Sn+1. The correlation will be 0, but |Sn+1| = |Sn|.
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054 FAX: (765)494-0558
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 01 Apr 1999 19:14:23 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 01 Apr 1999 09:58:42 -0600, Jim Felling
<[EMAIL PROTECTED]> wrote:
>> >Here n=1000000 p=q=0.5, so the standard deviation is
>> >500 units. Very few particles will be 20 standard deviations
>> >or more from the mean.
>> But more than a negligible number of particles will be outside +- 5%
>> of the mean
>The mean is 500000; 5% of that is 25000, so within 5% of the mean is
>equivalent to between 475000 and 525000. The SD is 500.
Hmm... I am growing suspicious that we are talking about two completely
different things. Maybe there is a world of difference between
probability theory and statistical theory.
I am talking about the distribution of sequence bias (measured by Sn)
and you are talking about the distribution of the number of sequences.
In my distribution, the mean is zero, since for every sequence of +
bias, there is a bitwise complement with - bias. My distribution
displays the spatial distribution of particles undergoing diffusion,
which also is related to the bias Sn.
>5% of the mean is equal to SD * 50 so anything outside of that interval is
>more than 50 SD's off the mean.
I am at a complete loss to understand what you have just said. What
are the random variables you are quantifying with that distribution?
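For reference, the arithmetic behind those figures is straightforward to check for the count-of-ones variable (an editorial sanity check; the relation to the walk displacement Sn is an added note):

```python
import math

n, p = 1_000_000, 0.5
mean = n * p                       # expected number of ones: 500000
sd = math.sqrt(n * p * (1 - p))    # binomial standard deviation: 500
five_pct = 0.05 * mean             # 5% of the mean: 25000
# Note: the walk displacement Sn = 2*X - n has mean 0 and SD 2*sd = 1000,
# which is the other distribution being discussed in this exchange.
print(mean, sd, five_pct / sd)     # 500000.0 500.0 50.0
```

So a count more than 5% away from its mean really is 50 standard deviations out, for the count-of-ones variable; the confusion in the thread is between that variable and the zero-mean displacement Sn.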
>In the ensemble view. A TRNG randomly 'picks' a sequence out of the pool of
>all possible sequences.
Yes, and it does it in a manner that is equiprobable too.
>The odds of it picking such a biased sequence is so
>vanishingly small that it would be much more likely that the device is bad
>than that the sequence was legitimately produced by a working TRNG.( not
>impossible, but vanishingly tiny odds)
I do not see how your distribution above has anything to do with an
assessment of bias.
In the ensemble, there is a significant number of sequences with
"abnormal" bias. Feller (op. cit.) spends whole chapters in his book
on introductory probability theory exposing many of them. How you can
claim that a biased sequence in general is vanishingly unlikely to be
part of the total ensemble is beyond me.
I suspect that you are claiming that the biased sequences are small in
number when compared to the worst case of bias, namely a run. I will
agree that most strings have small bias compared to the worst case.
What I am unwilling to do is accept that such makes those "small
biased" sequences have an insignificant amount of bias compared to the
expected bias for the ensemble.
Let's say that we are talking about a 1000 step random walk. There are
2^1000 possible paths, and two of them are at the extreme of +- 1000
steps. Yes, it is true that compared to a bias of 100 steps, most
sequences are "close to the origin". I have repeatedly said that the
Gaussian is a very broad distribution.
But just because most sequences are not near 1000 step extremes, does
not mean that most sequences are very close to the expectation bias of
zero. As Feller points out, a non-trivial number of sequences are a
considerable distance from zero, although they are also a considerable
distance from the maximum too. I agree that there are more near zeros,
and that the fraction decreases as you move away from the origin, but
it does not fall off dramatically - it falls off more gradually.
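The quantitative version of this point can be sketched with a short simulation (editorial, not from the thread; the 2-SD threshold is an arbitrary choice for illustration). For a 1000-step walk the endpoint's standard deviation is sqrt(1000), about 31.6, and a normal approximation puts roughly 4.6% of walks beyond two standard deviations: small, but not vanishing.

```python
import math
import random

rng = random.Random(7)
steps, trials = 1000, 5000
sd = math.sqrt(steps)              # SD of the endpoint for +/-1 steps

ends = []
for _ in range(trials):
    # Sum of 1000 independent +/-1 steps: the endpoint of one walk.
    ends.append(sum(rng.choice((-1, 1)) for _ in range(steps)))

beyond_2sd = sum(abs(e) > 2 * sd for e in ends) / trials
print(round(beyond_2sd, 3))        # near 0.046: not vanishingly small
```

This is consistent with the paragraph above: the mass falls off gradually, and a non-trivial fraction of walks sit a considerable distance from zero while still nowhere near the +-1000 extremes.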
Whatever is driving you to make that claim above is at the very heart
of this issue. So if you can explain why you believe you can make such
a determination, maybe we will get closer to understanding why people
insist that statistical tests are valid in determining with reasonable
certainty that a TRNG is not truly random.
You correctly assert that most sequences are far away from the
maximum. But does that imply that they are extremely close to the
mean? If that were the case, then diffusion would never occur (and I
realize that diffusion works because the extremes are extremely far
away from the mean, as Brian Olsen pointed out the other day).
>I'd probably still kick it out. The odds of a TRNG picking such a sequence
>'by chance' are roughly 10^9 / 2^100 < 10^-27. It can happen, but I'll
>still bet against it.
I would use it as a very strong diagnostic indication of a very likely
malfunctioning TRNG - but no more than that.
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (Matthias Bruestle)
Subject: Re: Live from the Second AES Conference
Date: Thu, 1 Apr 1999 18:27:49 GMT
Mahlzeit
Bruce Schneier ([EMAIL PROTECTED]) wrote:
> Sure. When building smart-card based systems, I try very hard to make
> sure all secrets within a device can be known by the person holding
> the device.
This is probably not the case with most payment cards. The German
Geldkarte (="money card") has diversified 3DES keys for debit/credit/etc.
Maybe this will be better when cards which support public key algos
are more common.
Mahlzeit
endergone Zwiebeltuete
--
PGP: SIG:C379A331 ENC:F47FA83D I LOVE MY PDP-11/34A, M70 and MicroVAXII!
--
Stuart Kills Three and Eats their Kidneys.
------------------------------
From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: New York Times article on Differential Fault Analysis
Date: 1 Apr 1999 19:54:43 GMT
Look on Dan Boneh's web page. Use any search engine to find it.
Don Johnson
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Is initial permutation in DES necessary?
Date: 1 Apr 1999 20:06:51 GMT
Reply-To: [EMAIL PROTECTED]
[EMAIL PROTECTED] (John Savard) writes:
>no overarching conspiracy need be hypothesized.
But what fun is that?
--
Lamont Granquist ([EMAIL PROTECTED])
ICBM: 47 39'23"N 122 18'19"W
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: FSE information anyone?
Date: Thu, 01 Apr 1999 20:30:04 GMT
Thirteen <[EMAIL PROTECTED]> wrote, in part:
>First on the agenda
>is a very impressive paper by David Wagner, a student at Berkeley,
>and a participant in sci.crypt. The Boomerang attack is a new
>form of differential cryptanalysis in which work is done on half
>of the rounds.
Yes, this sounds like a very important development indeed.
John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html
------------------------------
From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: New York Times article on Differential Fault Analysis
Date: 1 Apr 1999 20:31:32 GMT
DJohn37050 <[EMAIL PROTECTED]> wrote:
> Look on Dan Boneh's web page. Use any search engine to find it.
> Don Johnson
or http://theory.stanford.edu/~dabo/
------------------------------
From: [EMAIL PROTECTED] (John Savard)
Subject: Re: FSE information anyone?
Date: Thu, 01 Apr 1999 20:50:33 GMT
[EMAIL PROTECTED] (John Savard) wrote, in part:
>There's some stuff at
>http://www.dmi.ens.fr/users/vaudenay/dec_feedback.html
And, of course, there's always
http://www.cs.berkeley.edu/~daw/papers/
John Savard (teneerf is spelled backwards)
http://members.xoom.com/quadibloc/index.html
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: Random Walk
Date: Thu, 01 Apr 1999 20:54:47 GMT
Reply-To: [EMAIL PROTECTED]
On 1 Apr 1999 14:00:07 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:
>Lack of dependence, which is not restricted to lack of correlation,
>is every bit as important, and it is more elusive. PRNGs have low
>or no bit bias, and can be constructed with low correlations, but
>can still have poor independence properties. Bit bias can be
>corrected for with little loss, but the others are much harder.
What is an example of "lack of independence" in the context you are
talking about? And why would such a lack of independence not manifest
itself in bit bias and/or correlation? In what third property of an RNG
would a lack of independence manifest itself?
>Instead of quoting probabilists who do not understand statistics,
>I suggest you look at problems of statistical decision making.
It looks like I am going to have to. I have labored under the
assumption that probability theory is sacred, but apparently you
statisticians do not buy that. I therefore need to learn the
distinguishing characteristics of statistical theory so I can find out
what makes it so different from probability theory.
As you certainly know, physicists are brought up on probability
theory, not statistical theory. In fact, "statistical mechanics" is
really "probability mechanics". So is it any wonder that I adhere so
strongly to probability concepts, since the very foundation of quantum
mechanics is based on it?
>The very word "ensemble" indicates a lack of understanding of the
>problems of physical probability. Probability is more than limiting
>relative frequency.
I never claimed to be an Expert. I am just going on what I have read,
things written by the Experts. When they say that statistical tests
cannot be used to characterize non-randomness, I pay attention.
>Other than those who do not believe in actual randomness, the only
>ones who would confuse the two are the ones who use computer packages
>without understanding. This may well be a large majority.
Some of them are lurking around here right now, listening to our every
word.
>Mathematics deals with formal concepts. Some of these concepts may
>have enough similarity with the real world that the real world can
>be modeled in the mathematical world. The accuracy of this modeling
>is not a problem of mathematics, but of the similarity of reality
>and the mathematical universe.
OK, now I understand what you said earlier.
Can you explain exactly why the model of randomness that is used to
justify statistical testing is correct. I have read, and have adopted
the position, that statistics is based on a notion called
pseudo-randomness, which gets some of its properties from infinite
sequences - but which are not applicable to finite sequences with
reasonable certainty.
They might be possibly related, might even be likely related, but are
not reasonably certainly related. The transition to the infinite is
fundamental, not superficial. All sorts of things happen with infinite
mathematical objects that do not come close to happening with finite
objects.
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 01 Apr 1999 21:34:21 GMT
Reply-To: [EMAIL PROTECTED]
On Thu, 1 Apr 1999 12:46:36 -0600, "Franzen" <[EMAIL PROTECTED]> wrote:
>As I understand the hypothetical TRNG concept, it cannot generate
>non-randomness. Since it is hypothetical, it cannot suffer process
>failure either.
We wish!
The model for the TRNG that I, an erstwhile physicist who designed and
built most of his own instrumentation from scratch, take is that of a
piece of scientific equipment. If you have ever worked with scientific
equipment (and I do not mean routine analytical machines but brand new
designs for brand new experiments) - IOW, equipment used in basic
research - then you can appreciate the reasons I would choose such a
model.
>Chi-square is not a measurement which is limited to only measuring
>pseudo-randomness; at least not in any literature I am aware of. I
>understand it to be a yardstick which will yield differing results to
>me when measuring fundamentally differing processes.
I have no problem with that.
>If a particular PRNG produces a sequence which is characteristically
>different from a TRNG sequence, complete Chi-square test results will
>differ also. The alternative is that the incomplete Gamma function is an
>invalid measurement concept.
I have no problem with that either.
I would ask what are you using for the TRNG sequence in that test.
IOW, you must be comparing a purported TRNG to a known TRNG. What is
that known TRNG?
>With the present dearth of knowledge about uniform randomness, who can
>say that a particular subset sequence from a PRNG which turns out to be
>indistinguishable from an equal length TRNG subset is any less
>uniformly random. Does uniform randomness have to have a pedigree to go
>with its other innate characteristics?
Numbers themselves are not random - the process that generates them is
random.
>First you state statistical tests cannot distinguish uniform randomness
>from pseudo-randomness (first paragraph above). Now you parenthetically
>state some PRNG's can generate "close to" uniformly random sequences.
>How can you possibly know? Not only would you have to be able to
>distinguish between the two randomness potentials, but you would also
>have to have a way to measure them with some degree(s) of precision.
You evidently were not here when we discussed that issue. We all
realize that it is impossible to build a classical TRNG that is 100%
random. There will always be flaws which disturb the process and give
it some small amount of non-randomness, such as slight 1-bit bias.
That is why we came to the conclusion that the most we can hope for is
something we call "reasonable certainty" that a TRNG will be
crypto-grade.
That, however, is not the same as statistical measures of confidence.
Just as a woman cannot be pregnant with a 95% confidence level - she
is either fully pregnant or not pregnant at all - you can't use
confidence levels to determine the "reasonable certainty" that a TRNG
is sufficiently random to meet the requirements for a given
cryptosystem. That can only be determined by a complete audit of the
design of the TRNG and careful diagnostics of the subsystems - just as
a scientist would do for his experimental equipment to make sure the
results he gets from his experiments are "reasonably certain" to be
accurate.
Here are some of the statistical tests recommended in FIPS-140.
http://csrc.ncsl.nist.gov/fips/fips1401.htm
+++++
Statistical random number generator tests.
Cryptographic modules that implement a random or pseudorandom number
generator shall incorporate the capability to perform statistical
tests for randomness. For Levels 1 and 2, the tests are not required.
For Level 3, the tests shall be callable upon demand. For level 4, the
tests shall be performed at power-up and shall also be callable upon
demand. The tests specified below are recommended. However,
alternative tests which provide equivalent or superior randomness
checking may be substituted.
A single bit stream of 20,000 consecutive bits of output from the
generator is subjected to each of the following tests. If any of the
tests fail, then the module shall enter an error state.
The Monobit Test
1.Count the number of ones in the 20,000 bit stream. Denote this
quantity by X.
2.The test is passed if 9,654 < X < 10,346.
The Poker Test
1.Divide the 20,000 bit stream into 5,000 contiguous 4 bit segments.
Count and store the number of occurrences of each of the 16 possible
4 bit values. Denote f(i) as the number of each 4 bit value i where
0 <= i <= 15.
2.Evaluate the following:
X = (16/5000) * Sum {f(i)^2} - 5000
3.The test is passed if 1.03 < X < 57.4.
The Runs Test
1.A run is defined as a maximal sequence of consecutive bits of either
all ones or all zeros, which is part of the 20,000 bit sample stream.
The incidences of runs (for both consecutive zeros and consecutive
ones) of all lengths ( >= 1 ) in the sample stream should be counted
and stored.
2.The test is passed if the number of runs that occur (of lengths 1
through 6) is each within the corresponding interval specified below.
This must hold for both the zeros and ones; that is, all 12 counts
must lie in the specified interval. For the purpose of this test, runs
of greater than 6 are considered to be of length 6.
Length of Run Required Interval
1 2,267-2,733
2 1,079-1,421
3 502-748
4 223-402
5 90-223
6+ 90-223
The Long Run Test
1.A long run is defined to be a run of length 34 or more (of either
zeros or ones).
2.On the sample of 20,000 bits, the test is passed if there are NO
long runs.
+++++
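The four quoted tests are simple to implement. The sketch below is an editorial illustration, with the pass/fail bounds copied verbatim from the quoted text; the run-count intervals are assumed inclusive, since the quote does not say.

```python
import random

def fips_140_1(bits):
    """Run the four FIPS 140-1 statistical tests on 20,000 bits.
    Returns a dict mapping test name to pass (True) / fail (False)."""
    assert len(bits) == 20000
    results = {}

    # Monobit: the count of ones must lie in (9654, 10346).
    ones = sum(bits)
    results["monobit"] = 9654 < ones < 10346

    # Poker: chi-square-like statistic over 5000 4-bit segments.
    counts = [0] * 16
    for i in range(0, 20000, 4):
        v = bits[i] * 8 + bits[i+1] * 4 + bits[i+2] * 2 + bits[i+3]
        counts[v] += 1
    x = (16 / 5000) * sum(c * c for c in counts) - 5000
    results["poker"] = 1.03 < x < 57.4

    # Runs: count runs of each length 1..6+ for zeros and ones;
    # runs longer than 6 are counted as length 6, per the quoted text.
    intervals = {1: (2267, 2733), 2: (1079, 1421), 3: (502, 748),
                 4: (223, 402), 5: (90, 223), 6: (90, 223)}
    runs = {0: {k: 0 for k in intervals}, 1: {k: 0 for k in intervals}}
    longest = 0
    i = 0
    while i < 20000:
        j = i
        while j < 20000 and bits[j] == bits[i]:
            j += 1
        length = j - i
        longest = max(longest, length)
        runs[bits[i]][min(length, 6)] += 1
        i = j
    results["runs"] = all(lo <= runs[b][k] <= hi
                          for b in (0, 1)
                          for k, (lo, hi) in intervals.items())

    # Long run: no run of length 34 or more, of either symbol.
    results["long run"] = longest < 34

    return results

rng = random.Random(0)
sample = [rng.randint(0, 1) for _ in range(20000)]
print(fips_140_1(sample))        # a good generator should pass all four
print(fips_140_1([0] * 20000))   # a degenerate stream fails all four
```

Note what each test actually checks: counts of ones, of 4-bit patterns, and of run lengths against intervals a binomial model predicts; that is, conformance of the output's appearance to expectation, which is the sense in which they are tests for the appearance of randomness.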
What is the justification for the purported validity of all these
statistical tests in terms of true randomness? What model of true
randomness is it that causes these tests to be valid? They all just
look to me like tests for the *appearance* of randomness, namely
tests for pseudo-randomness.
Doesn't anyone realize that if you could model true randomness with
mathematical objects, you could solve some of the mysteries of
quantum mechanics?
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (R. Knauer)
Subject: Re: True Randomness & The Law Of Large Numbers
Date: Thu, 01 Apr 1999 21:40:06 GMT
Reply-To: [EMAIL PROTECTED]
On 1 Apr 1999 14:07:34 -0500, [EMAIL PROTECTED] (Herman
Rubin) wrote:
>>"Correlation," which is defined in any first-year book on statistics, is
>>the dependence of one value on another.
>This is incorrect. It is the signed extent of LINEAR dependence.
Aw crap!
Does this mean I have to be careful which "first-year book on
statistics" I read?
I sure hope Triola's book isn't as screwed up as the "first-year book
on statistics" the original poster took that bogus definition from,
because I am getting his book from the library tonight.
How many more fundamental concepts have the so-called "experts" around
here screwed up thus far?
(I never claimed to be an "expert" so I can screw up all I want. :-)
Bob Knauer
"The laws in this city are clearly racist. All laws are racist.
The law of gravity is racist."
- Marion Barry, Mayor of Washington DC
------------------------------
From: [EMAIL PROTECTED] (Peter Pearson)
Subject: S/MIME interoperability: 40 bits only?
Date: Thu, 1 Apr 1999 21:38:27 GMT
I'm trying to use Netscape's 4.02 Communicator to
exchange encrypted email with a correspondent who uses
a Microsoft mail reader. I have deselected all ciphers
except 168-bit 3DES, and my correspondent has specified
168-bit 3DES for outgoing messages, but when I read
email from him, Communicator says it was encrypted with
40-bit RC2, and similarly when he reads email from me.
Is this pathetic capability all we can expect from these
products, or am I overlooking some important setting?
Is there, at least, a way to tell Communicator that if
it's going to encrypt an outgoing message with a joke
cipher instead of the cipher I asked for, it should at
least %$#$in warn me?
Much thanks to any who can shed light for me.
- Peter Pearson
[EMAIL PROTECTED]
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************