Cryptography-Digest Digest #171, Volume #9 Tue, 2 Mar 99 01:13:03 EST
Contents:
Re: True Randomness - DOES NOT EXIST!!! (BRAD KRANE)
Re: XOR ([EMAIL PROTECTED])
RANDOM (let's end this?) ([EMAIL PROTECTED])
Re: Hardware Random Numbers: Not an *explicit* feature (Marc Warden)
Re: Musings on the PKZip stream-cipher (Sundial Services)
Re: New Encryption (I would like some analysis) (Scott Fluhrer)
Re: IDEA ("D")
Re: Common meaning misconception in IT, was Re: Unicity of English, was Re: New
high-security 56-bit DES: Less-DES ([EMAIL PROTECTED])
Re: My Book "The Unknowable" (Neil Nelson)
Re: Define Randomness ("Trevor Jackson, III")
Re: Hardware Random Numbers: Not an *explicit* feature (Somniac)
----------------------------------------------------------------------------
From: BRAD KRANE <[EMAIL PROTECTED]>
Subject: Re: True Randomness - DOES NOT EXIST!!!
Date: Mon, 01 Mar 1999 23:47:57 GMT
Read on.
"R. Knauer" wrote:
> On 1 Mar 99 11:39:09 -0400, [EMAIL PROTECTED] (John Briggs)
> wrote:
>
> >Still devoid of meaning. If it's outside the Universe, it can't affect
> >something inside the Universe. That's basic to pretty much any definition
> >of "the Universe".
>
> That is a definition of the Universe in terms of Physics. But Physics
> is not intended to address questions outside the material realm.
>
> The best you can do is claim that only material objects exist - IOW,
True.
>
> deny the existence of the non-material (spiritual) real - but that
> does no good, because you still have to explain how finite mutable
> objects came into existence.
They were always there to begin with. Just like your religion's God.
>
>
> >Fallacy one: Why just one first cause? Why not two? Or three?
>
> I am not invoking any "first cause" arguments. I do not care for
> Aquinas's famous "Five Ways".
>
> >Fallacy two: Why not a causal loop?
>
> ???
>
> >Fallacy three: Why not infinite regress?
>
> ???
>
> >Fallacy four: What causes the first cause?
>
> The essence of the Supreme Being is existence. It has no cause.
>
> >Looks like a fallacious argument to me.
>
> These are the standard arguments against the existence of the Supreme
> Being. They are straw men arguments, since the Five Ways were never
> meant to be rigorous proofs of the existence of the Supreme Being.
>
> >>>The universe can proceed perfectly well without this "law".
> >>
> >> Oh really - the very Universe we observe, eh?
>
> >Yes, the very Universe we observe. We see plenty of effects without
> >any visible cause.
>
> Name one. And don't give us this nonsense about virtual particles. We
> want real physical processes, not virtual processes.
>
> BTW, relativity is based on the law of causality.
>
> >I didn't say that there is no such thing as cause and effect. I said
> >that the law of cause and effect _WHICH I EXPLICITLY STATED AND WHICH
> >YOU HAD LEFT COMPLETELY UNSPECIFIED_ was not needed by the Universe.
>
> Then you claim that the efficient cause of the Universe is Nothing.
> How can Nothing cause the Universe, when Nothing does not exist?
>
> The cause of the existence of the Supreme Being, by contrast, is the
> Supreme Being. The very uncausality that you are so willing to
Just as the cause of the Universe is the Universe.
>
> attribute to a finite mutable world, where it is impossible to be, is
> contained in the Supreme Being, where it is possible to be.
>
> >Now, if you want to loosen up the definition of "cause and effect" to
> >the point where radioactive decay and quantum fluctuations in the vacuum
> >have causes then you can make a credible argument in favor of this law.
>
> Radioactive decay is an instance of spontaneous emission, which is
> caused by zero point fluctuations in the quantum vacuum.
>
> >But then you are left with the question: What causes a TRNG based on
> >radioactive decay to emit the sequence it does?
>
> Vacuum fluctuations.
>
> The better question is what keeps the nucleus from decaying
> instantaneously? What keeps it in an excited state for so long? Vacuum
> fluctuations only help it to lower its energy by decaying - by supplying
> the needed randomness to get it to make the transition. And what keeps
> the electron in hydrogen from radiating electromagnetic energy and
> ending up inside the nucleus permanently?
>
> Must be God playing dice, eh.
No, we just know next to nothing about the universe, so everything we don't know or
don't understand we attribute to a Supreme Being. Take thunder and lightning, for
example: people thought they were acts of God until science proved them wrong. The same
goes for the Greek gods. When the Greeks couldn't explain something with their
known knowledge, they attributed it to their made-up gods, which were proved very wrong
by scientists years later.
This is exactly what people still do today.
~NuclearMayhem~
PS. Don't think that I'm trying to insult your beliefs. It's just that I only believe in
science.
>
>
> Bob Knauer
>
> "There is much to be said in favour of modern journalism. By giving us the opinions
> of the uneducated, it keeps us in touch with the ignorance of the community."
> --Oscar Wilde
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: XOR
Date: Tue, 02 Mar 1999 02:17:44 GMT
> >Hi,
> > Why we need to use XOR to do the encryption process? Any other
> >operations that are better than XOR?
>
> XNOR perhaps? XOR inverted.
XOR is usually a one cycle (machine cycle), reversible operation.
Consider a plain text of 00001011, and you xor with 10101010 then:
00001011
10101010
========
10100001
And if you xor with 10101010 again
10100001
10101010
========
00001011
Tada!
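The worked example generalizes: because (p ^ k) ^ k == p for any byte values, one routine both encrypts and decrypts. A minimal C sketch (function and variable names are mine):

```c
#include <stddef.h>

/* XOR each buffer byte with the corresponding key byte (key repeats).
   Applying the same routine twice restores the original plaintext. */
void xor_cipher(unsigned char *buf, size_t len,
                const unsigned char *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];
}
```

Running the 8-bit example above: 0x0B ^ 0xAA gives 0xA1, and 0xA1 ^ 0xAA gives 0x0B back.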
Tom
============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own
------------------------------
From: [EMAIL PROTECTED]
Subject: RANDOM (let's end this?)
Date: Tue, 02 Mar 1999 02:21:17 GMT
Not that I don't like religious quotes on the universe, but the fact is: how do
you create an arbitrary number? An algorithm! How do you create a random number?
There has to be some factor in the decision. The more factors, the more
'random'.
So there is no such thing as random, just close enough.
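One way to read "the more factors the more 'random'" in code: mix several unpredictable inputs so the result is at least as hard to guess as the hardest single input. A toy sketch (the factor values and names are mine, not real entropy sources):

```c
/* Toy mixer: XOR several "factor" words together. If the factors are
   independent, the combined seed is at least as unpredictable as the
   most unpredictable single factor. Real designs hash the inputs
   rather than merely XORing them. */
unsigned long mix_factors(const unsigned long *factors, int n)
{
    unsigned long seed = 0;
    for (int i = 0; i < n; i++)
        seed ^= factors[i];
    return seed;
}
```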
Tom
------------------------------
From: Marc Warden <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,comp.sys.intel
Subject: Re: Hardware Random Numbers: Not an *explicit* feature
Date: Mon, 01 Mar 1999 18:38:43 -0800
Reply-To: [EMAIL PROTECTED]
I read the info at the site pointed to, and from my interpretation of the
information, the diode's state can't be queried via software from any
instruction. The on-die diode is monitored by a receiver of some type
located on the motherboard.
In fact here is a snippet from Intel's site:
Internal performance counters for performance monitoring and event counting.
Incorporates an on-die diode that can be used to monitor the die temperature.
A thermal sensor located on the motherboard can monitor the die temperature of
the Pentium III processor for thermal management purposes.
The thermal management code can live in the power management code. When the
motherboard monitoring hardware detects a suitably elevated temperature, it
can trigger a system management interrupt (SMI); the SMI handler deduces the
source of the interrupt (thermal, as opposed to battery power or some other
source) and uses this to inform the user that the processor's temperature is
getting high. In extreme cases it can slow, or even halt, the CPU clock to
prevent thermal damage.
Sincerely,
MarcW.
John Savard wrote:
> At
>
> http://developer.intel.com/design/PentiumIII/prodbref/
>
> and
>
> http://developer.intel.com/procs/perf/PentiumIII/brief/summary.htm
>
> it is noted that the Pentium III chip contains a special diode, on the
> chip itself, which can be used to check that the chip is not getting
> too hot.
>
> That feature is probably the source of the claims - not echoed in the
> list of chip features - that the chip has a built-in random number
> generation capability.
>
> I haven't yet, however, located the instruction to access that diode,
> but hopefully, with this lead, someone will do so shortly.
>
> John Savard (teneerf is spelled backwards)
> http://members.xoom.com/quadibloc/index.html
------------------------------
Date: Mon, 01 Mar 1999 18:52:14 -0700
From: Sundial Services <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Musings on the PKZip stream-cipher
Terry Ritter wrote:
> I have several times reported here that some years ago I was able to
> resolve this part of the PKzip cipher...
[Naturally I immediately went to http://www.dejanews.com and retrieved
and printed the full text of all the articles that Mr. Ritter posted to
the newsgroup as early as 1996. For those of you not yet familiar with
dejanews, it is a valuable resource ... and also one that must give you
pause when you post to a newsgroup (and joy when you encounter a FAQ!).
The full text of these articles, and the threads they belong to, was
readily obtainable and extremely enlightening. On the Internet, we have
access not only to today's postings -- but those made many years ago.]
/mr/
------------------------------
From: Scott Fluhrer <[EMAIL PROTECTED]>
Subject: Re: New Encryption (I would like some analysis)
Date: Tue, 02 Mar 1999 02:43:42 GMT
In article <7bfdgk$it1$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:
>Ok I just started writing my own algorithms, and I came up with what I call E
>(short for encrypt). Source code is included. I would like some analysis
>from professionals/amateurs. If it sucks, please tell me. It's the only way
>I will learn.
>
>It uses an 8192-byte value/position-dependent key, using XORs for the actual
>encryption. The entropy of the output is high (all 256 possible symbols are
>about evenly probable and distributed). Repeated chars like 'aaaa' are
>encoded with different entries in the key.
>
>Please have a look, if it's any good, it's free to use for anyone. If not,
>flame me!!!
A few issues to flame you about (since you asked :-)
- 65536 bits of key is way too much. Why do I say it's too much? Well, how
are you going to store them? Memorize them? I don't think so. Store them
on disk? Well, that means that if the attacker finds that disk file, he
can read the file.
- When you tell people about your new cipher, it's generally considered polite
to describe your algorithm using mathematical notation. It's not considered
polite to just provide some C source and hope people will be willing to
bother to deduce the algorithm from the source. And, while we're at it, if
you could describe why you suspect your algorithm is any good, that would be a
bonus (such as why it is resistant to particular attacks).
>
>btw, I also believe that even knowing the plaintext and ciphertext, it's
>equally improbable to recover the key...
It had better be -- resistance to known-plaintext attacks is considered a
prerequisite around here...
--
poncho
------------------------------
From: "D" <[EMAIL PROTECTED]>
Subject: Re: IDEA
Date: Mon, 1 Mar 1999 21:40:02 -0500
Also, I would like to know what this error is, as I am looking at
cryptanalyzing IDEA as well. Based on what the book says, I think I have an
idea about breaking IDEA with a known-plaintext attack, which could possibly
be extended to a ciphertext-only attack.
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: Common meaning misconception in IT, was Re: Unicity of English, was Re:
New high-security 56-bit DES: Less-DES
Date: Tue, 02 Mar 1999 01:33:24 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (John Savard) wrote:
> [EMAIL PROTECTED] wrote, in part:
>
> >The
> >example calls upon unicity in order to define it and uncity is defined by
> >language statistics not by a savvy human reader.
>
> Language statistics, as they become more detailed, are simply
> approximations to a human writer - or reader.
...in letter frequency or even in word frequency or phrase frequency -- but
NEVER in sense. The information-theory definition of entropy and the derived
definitions of conditional entropy and unicity have nothing to do with
meaning, sense, or knowledge, as I explained in the message.
So, what a savvy human reader or any other less-than-savvy human reader would
"read" is irrelevant (as I added above) -- only the letter-frequency counts in
unigrams, digrams, trigrams, etc. are relevant, as any dumb computer can also
read them.
Cheers,
Ed Gerck
------------------------------
From: Neil Nelson <[EMAIL PROTECTED]>
Crossposted-To: sci.math,sci.physics,sci.logic
Subject: Re: My Book "The Unknowable"
Date: Tue, 02 Mar 1999 04:05:35 GMT
In article <[EMAIL PROTECTED]>,
Bob Knauer wrote:
[ Kolmogorov-Chaitin Randomness is a different kind from the
[ crypto-grade randomness needed to make the OTP system provably
[ secure. For example, regular sequences that fail the test for
[ randomness in terms of complexity are valid for OTP ciphers. In
[ fact you cannot filter out any sequences in the OTP system, regular
[ or complex, or else the attacker will be able to use that to
[ advantage.
Neil Nelson wrote:
> First we must define what it is to have a random number, which was
> just indicated to be according to a non-random perspective (a
> language). If, according to the previous discussion, random means a
> string sequence not compressible in the language then a sufficiently
> long run of 0's would be compressible and hence that string not
> random wherever it might appear.
[ Although it would be incredibly dumb to send a cipher made from the
[ null key, it still is a valid key if it is produced by a TRNG.
[ Fortunately it is incredibly improbable for any sequence of usable
[ length.
I am out of my element in discussing the details of cryptography, but
if a TRNG can produce a null key and it is incredibly dumb to send a
cipher made from a null key, it would be desirable to remove the
possibility of a null key, a very simple check, no matter how
improbable its occurrence. I.e., if you were entrusted with securing
a message on which lives depended, the use of a null key, no matter how
well justified by theory, would have a very tragic result.
Hence I expect the keys generated by a TRNG should be confirmed to a
sufficient complexity value before their use. What you are saying is
that the probability of obtaining an easily deciphered TRNG key is
very small, and it is likely this same small proportion of keys that
would be avoided by an appropriate complexity randomness test.
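The "very simple check" suggested above might look like the following sketch (the function name is mine; whether any such filtering is wise is exactly what is under dispute, since every rejected key shrinks the keyspace an attacker must search):

```c
#include <stddef.h>

/* Reject the all-zero pad, which under an XOR combiner would transmit
   the plaintext unchanged. Returns 1 if the key is usable, 0 if it is
   the null key. */
int key_is_nonnull(const unsigned char *key, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (key[i] != 0)
            return 1;
    return 0;
}
```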
Neil Nelson
------------------------------
Date: Tue, 02 Mar 1999 00:14:28 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Define Randomness
Terry Ritter wrote:
> On Sun, 28 Feb 1999 09:41:44 -0500, in <[EMAIL PROTECTED]>,
> in sci.crypt "Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:
>
> >[...]
> >Proof is irrelevant to physical devices because proofs operate in the realm
> >of math and logic and physical devices do not exist in that space.
> >
> >Experiment is irrelevant to theoretical proofs. Note that experimental
> >evidence may lead us to have more or less confidence in a theory, but it is
> >not proof.
>
> I agree, with the addition that experimental evidence can *disprove* a
> theory, just not prove one. In particular, it is possible for many
> different theories to predict the very same values, in which case they
> cannot be distinguished experimentally. Nor can Occam's Razor resolve
> this, since there may be a conceptual breakthrough which shows the
> apparent complexity in some theory to be a consequence of a previously
> unnoticed underlying simplicity.
>
> >[...]
> >If we are not measuring in a meaningful way we are not
> >working in the realm of Science but in that of Art. In Art much weight is
> >given to style even at the expense of substance. Many of these
> >angels-dancing-on-a-pinhead arguments are stylistic rather than substantive.
> >
> >One essential definition is missing in this discussion: insecurity. We know
> >what 100% security is, but there appear to be violent contradictions in the
> >perceptions of insecurity. Is it a binary property in that anything less
> >than 100% security is equally weak? Hardly. Is it a singly dimensioned
> >axis or a space requiring multiple units?
>
> I would argue that "insecurity" is *indeed* anything less than 100%
> security, but I would define "security" as achieving the goal of
> forcing an Opponent to perform some amount of work. When The Opponent
> can penetrate security at *less* cost, then we have a "break" (a
> successful unanticipated attack) and a cipher which is "insecure" at
> the advertised work effort. This may or may not be important in
> practice.
OK, this definition of insecurity is derived from "The Labor Theory of
Security". It admits an easily defined metric for weakness in that the ratio of
designed or desirable work factor over the actual work factor is a measure of the
degree of failure. It may be more useful to use the logarithm of that ratio.
>
>
> *Any* *possible* attack which succeeds at less effort than we specify
> indicates a design failure. This definition does not help us find
> such attacks, or recognize them in a raw incomplete form, but simply
> defines what we demand from a cipher in a security sense. The other
> side of this is that these definitions also do not limit the ways in
> which a cipher design may achieve these goals.
An issue worth addressing is how to define insecurity for a cipher whose
virtue is not based on work factor, but on undecidability. The classic Vernam
cipher does not have a work factor as far as I understand the term. Yet it must
admit degrees of insecurity. For instance, if I use each key exactly twice, I
have theoretically compromised my security. But using each key exactly 1,000
times would be a more serious weakness. Yet by your standards you would consider
both systems insecure; the binariness implies they are *equally* insecure.
>
>
> >Rather than focus on pure security, consider the issue of weakness. For
> >instance, what is 100% insecurity (other than adversarial telepathy)? Even
> >pure plaintext may not be 100% insecure due to dilution by fake messages.
> >C.f., the blizzard of nonsense broadcast prior to D-Day in order to drown out
> >the actual messages transmitted en clair. Clearly the actual messages were
> >not 100% insecure.
>
> For the purpose of evaluating the *cipher*, whose whole role is to
> prevent access to plaintext for less than some cost, we can simply say
> that if it does not do so, it has failed. This is Boolean.
>
> >>[...]
> >> We could encipher the CD data, or similarly, keep it in a safe. But
> >> then, if the enciphered result is available, all we need to do to
> >> break the OTP is to break the protecting cipher (or the safe). Which
> >> means, of course, that all the claims about unbreakability are gone.
> >
> >This is an exaggeration in that communications security demands extreme
> >vigilance no matter what channel management technique is used.
>
> Sure, if somebody gets the key, they are in. But the issue in the
> above paragraph was: "why not use an ordinary cipher to protect the
> OTP key"? The answer is that we then depend upon the strength of the
> ordinary cipher for security, which reduces the hoped-for security to
> about that of the ordinary cipher. And that means we might as well
> use the ordinary cipher -- which is far easier to use -- and forget
> about the OTP.
Only in the case of trivial application mechanisms (e.g., straight XOR). This is
similar to the conclusion that PRNGs are useless for ciphers because a simple XOR
is easily reversible, and once the key stream is visible it can be analyzed and
then predicted.
Non-trivial application mechanisms eliminate these straw-man arguments against
PRNGs and, I believe, against the Vernam system.
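The reversibility point is concrete: under a straight XOR combiner, c = p ^ k, so a single known plaintext/ciphertext pair hands the attacker the keystream directly. A minimal demonstration (buffer names are mine):

```c
#include <stddef.h>

/* With ciphertext c = p ^ k, an attacker who knows the plaintext p
   recovers the keystream k byte for byte as k = p ^ c -- no
   cryptanalysis required. */
void recover_keystream(const unsigned char *plain,
                       const unsigned char *cipher,
                       unsigned char *keystream, size_t len)
{
    for (size_t i = 0; i < len; i++)
        keystream[i] = plain[i] ^ cipher[i];
}
```

Once the keystream is in hand, a predictable PRNG can be wound forward, which is the analysis-then-prediction attack described above.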
>
>
> >> And then we must never even send the same *plaintext* to more than one
> >> destination. For if we do, and plaintext becomes available from one
> >> site, and The Opponents intercept the message to other sites, they
> >> will know each key used, and can re-write the messages at will.
> >
> >This is not specific to the OTP. Any cipher vulnerable to a known plaintext
> >attack has the same problem.
>
> I think that is off the mark: Many ciphers which are "vulnerable" to
> known-plaintext still require considerable work to expose the key. In
> contrast, the usual OTP requires *no* *work* *at* *all*. These
> situations are not the same.
Your conclusion is only true of OTPs that have trivial application mechanisms.
>
>
> >In fact, the OTP offers the ability to make
> >each comm link distinct, eliminating the problem completely.
>
> Again, I think this has missed the point: If we send the same
> plaintext message to different targets -- even using *different*
> *pads* -- finding just one of those plaintexts (while stopping the
> others in transit) opens up the possibility of forging
> apparently-valid messages to *every* *other* target. This is a
> weakness virtually unheard of in conventional ciphers, and occurs even
> though those other pads have not been physically exposed on either
> end!
Again, you assume (classically I suppose) a trivial application mechanism. I
know that, by not providing an example, I am begging the obvious question a bit,
but I have not inspected my example enough to deliver it. I hope to shortly. At
that time perhaps we can revisit these critiques of OTP systems.
>
>
> This means that simply keeping the pads secret is not enough!!!! We
> must also somehow assure that whole messages (or whole paragraphs or
> whole sentences or even whole words) are not identifiably re-used *in*
> *plaintext* to different targets. This is a new level of required
> operational care which conventional ciphers do not need.
>
> >[...]
> >Note that a complete taxonomy of cipher attacks would be a complete taxonomy
> >of cipher weaknesses. Such a taxonomy would be useful because it would
> >permit an exhaustive analysis of a system and that analysis could be
> >considered a proof of security.
>
> Sure. The logical organization of attacks amounts to a taxonomy, and
> this has been developing in the past decade, especially in block
> ciphers. We can use this to fill in unnoticed "holes" between known
> attacks or for modest extensions. But we can't use it to anticipate
> or describe really innovative new attacks.
>
> >However it is unreasonable to expect that a complete taxonomy can ever be
> >defined (by Godel), so proofs of security are probably never going to be
> >possible. Personally I'd expect proofs of software correctness first (nevah
> >hoppen).
> >
> >The skeleton of a complete taxonomy of cipher attacks would still be a
> >useful tool in designing and analyzing ciphers, in that it would make
> >rigorous proofs of *IN*security much simpler.
>
> I have been doing this, to an extent, in my Crypto Glossary under
> "attacks." I note that naming terminology is inconsistent, in some
> cases referring to the information available (e.g., ciphertext only,
> known-plaintext, defined-plaintext, etc.), while others refer to the
> attack process (e.g., brute force, linear cryptanalysis, differential
> cryptanalysis, etc.), *assuming* some amount of exposed information.
> It is probably better to name the attack process, whose description
> would include the level of information exposure needed.
I suppose they are pretty tightly bound. Another property worth cataloging as an
index is the portion of the cipher at which the analyst "inserts his wedge", as
it were. There is a logical starting point for most attacks, often a particular
operation that is vulnerable to analysis.
>
>
> But a named attack is just an approach, and not an algorithm. The
> approach must still be innovatively applied before it becomes an
> algorithmic break. So even a cipher attack taxonomy will not be of
> much help in trying to find weakness experimentally.
>
> >>[...]
> >>What *you* want to do is to use strength data
> >> from particular cases to imply results about the whole, but -- even if
> >> we could measure strength -- it is useless unless we can reach
> >> probabilities like 1 in 2**56. This should require 2**112
> >> measurements, which is impossible -- and all this depends upon having
> >> such a measure which we do not.
> >
> >I do not follow the transform from 2^56 to 2^112. How is this derived?
>
> That value came from the idea that, as a general handwave, we can
> bound a value with a precision of about SQRT(samples).
>
> But after thinking about it, I would personally be happy with much
> less, say, 32 trials, each of which was of size to expose some
> reasonable number (say 8) weak messages, at the minimum strength we
> require. So if we want to check for 1 weak message in 2**56, that
> would be 2**(56+5+3) = 2**64 weakness tests, instead of 2**112.
>
> But note that the whole reason for requiring a strength of 1 in 2**56
> messages is to assume that The Opponent cannot brute force 2**55 keys,
> on average. But we *know* this is false since we ourselves must do
> 2**64 tests to provide evidence that we have the desired strong
> keyspace. This means that we simply cannot hope to perform enough
> tests to experimentally certify a cipher which cannot be broken by
> brute force.
Yes. Exhaustive testing is rarely useful. We need a set of tests whose results
can be extrapolated over larger regions than those tested. I believe a MITM
style of testing may be possible. Given a cipher a theoretician might be able to
define a representative set of tests whose results scale up to cover the entire
system. I speak here not of the keyspace, but of round counts, S-box sizes,
etc. Scalable ciphers will provide this capability, one we currently lack.
>
>
> And all this depends upon *having* a weakness test! In my opinion,
> such a test must find *every* *possible* weakness, and must do so with
> absolute perfection. I would be very, very dubious about trying to
> estimate the "strength" of a cipher on the basis of a "strength" test
> which could miss any number of innovative attacks.
Try to avoid making test perfection into a grail. I thought we had concluded
above that perfect testing was not a useful concept. Consider that cryptology is
a very new science (if it is a science; I tend to think not). There is great
value in a compendium of all known attacks even if the collection is provably
incomplete.
The reason I find merit in this apparently inadequate tool lies in the testing
"correlation of forces". A weak cipher is probably going to fail many
tests. A strong but flawed cipher may still fail multiple tests. A slightly
imperfect cipher (shall we use the gemologist's scale: very very slightly
imperfect?) may fail only one test.
A cipher that fails no test may still contain a flaw, but as the science matures
it will become harder and harder for an adversary to detect the flaws that the
assembly of tests fail to detect. In the limit we'll find that the analytic
effort required to find a flaw may have such a high work factor that the cipher
may have "security through subtlety".
Extrapolating from current, labor-intensive attacks may be like extrapolating
the growth of New York City in 1900. The prediction was that by 1950 everyone in the world would
have to be an NYC telephone operator, and the place would be covered by a layer
of horse manure 50 feet thick. Automated testing would permit automated cipher
mutation. Metaphorically, everyone in the world has to work for the NSA, and
every message sent has 6.023e23 BCC recipients.
>
>
> >[...]
> >Not only is the set of stream cipher attacks not
> >closed, it is not closable. It is infinite.
>
> Right.
>
> >Consider that an attack exploits a weakness. Given a set of attacks
> >purported to be complete, one can always synthesize a weakness (higher order
> >pattern) that is not used by any attack within the set. That weakness will
> >be the basis of a new attack.
>
> Much clearer than I put it, but that's the way I see it. Write on,
> brother!
>
> ---
> Terry Ritter [EMAIL PROTECTED] http://www.io.com/~ritter/
> Crypto Glossary http://www.io.com/~ritter/GLOSSARY.HTM
------------------------------
From: Somniac <[EMAIL PROTECTED]>
Crossposted-To: talk.politics.crypto,comp.sys.intel
Subject: Re: Hardware Random Numbers: Not an *explicit* feature
Date: Mon, 01 Mar 1999 21:17:56 -1000
John Savard wrote:
>
> At
>
> http://developer.intel.com/design/PentiumIII/prodbref/
>
> and
>
> http://developer.intel.com/procs/perf/PentiumIII/brief/summary.htm
>
> it is noted that the Pentium III chip contains a special diode, on the
> chip itself, which can be used to check that the chip is not getting
> too hot.
>
> That feature is probably the source of the claims - not echoed in the
> list of chip features - that the chip has a built-in random number
> generation capability.
>
> I haven't yet, however, located the instruction to access that diode,
> but hopefully, with this lead, someone will do so shortly.
>
> John Savard (teneerf is spelled backwards)
> http://members.xoom.com/quadibloc/index.html
This is a thermometer designed by my friend David Hoff when he worked
at Intel around 1986. It is an Intel patent under David's name. The
diode reverse leakage current increases with temperature. A small diode
and a large diode are sensed by a differential amplifier to detect
when the large diode has an easily measurable current. At normal
operating temperatures, diode currents are too small to be measured
easily. No software is needed to use the thermometer. It is a hardware
circuit that outputs a logic signal called "hot slash cold bar". This
logic signal can cause an interrupt or other state.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************