Bill Frantz wrote:
At 2:17 PM -0700 9/19/01, Theodore Tso wrote:
It turns out that with the Intel 810 RNG, it's even worse because
there's no way to bypass the hardware whitening which the 810 chip
uses.
Does anyone know what algorithm the whitening uses?
Just like von Neumann's unbiasing
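For reference, von Neumann's debiasing trick can be sketched in a few lines (illustrative Python only, not the 810's actual circuit):

```python
def von_neumann_unbias(bits):
    """von Neumann's debiasing: examine non-overlapping bit pairs;
    emit 0 for (0,1), 1 for (1,0), discard (0,0) and (1,1)."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # (0,1) -> 0, (1,0) -> 1
    return out

# A biased-but-independent stream yields unbiased output bits,
# at a reduced (and variable) rate.
print(von_neumann_unbias([0, 1, 1, 1, 1, 0, 0, 0]))  # -> [0, 1]
```

The trick assumes independent bits with a fixed bias; correlated input defeats it, which is one reason people distrust opaque hardware whitening.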
Mike Brodhead wrote:
Just about all of the private-sector conferences I have attended
require registration.
I think this is a poor example. I expect you'd be welcome to use the
name 'John Smith' and pay cash, if you like.
I think the real point is this: We see, all too often, cases where it
Will Rodger wrote:
It included all sorts of people traipsing up to
Capitol Hill to make sure that ordinary research and system maintenance,
among other things, would not be prosecuted.
I think our understanding of the DMCA has changed
significantly since it was first introduced, and it's
not
Very interesting. Thanks for the analysis.
Bernstein's analysis is based on space*time as your cost metric.
What happens if we assume that space comes for free, and we use simply
time as our cost metric? Do his techniques lead to an improvement in
this case?
It looks to me like there is no
Eugen Leitl wrote:
Is there any point in compressing the video before running it through a
cryptohash?
No. (assuming you're talking about lossless compression)
In general, any invertible transformation neither adds nor subtracts
entropy, and hence is extremely unlikely to make any difference
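A toy illustration of that invariance: relabeling symbols with any bijection (a stand-in for an ideal lossless compressor) leaves the Shannon entropy of the source distribution unchanged.

```python
import math

def shannon_entropy(dist):
    """Shannon entropy in bits of a distribution {symbol: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# A toy source over 4 symbols.
source = {'a': 0.5, 'b': 0.25, 'c': 0.125, 'd': 0.125}

# An invertible relabeling -- standing in for lossless compression --
# moves probability mass around without creating or destroying it.
relabel = {'a': 'w', 'b': 'x', 'c': 'y', 'd': 'z'}
transformed = {relabel[s]: p for s, p in source.items()}

print(shannon_entropy(source))       # 1.75
print(shannon_entropy(transformed))  # 1.75
```

Real compressors map variable-length strings, not single symbols, but the principle is the same: invertibility preserves entropy exactly.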
Amir Herzberg wrote:
So I ask: is there a definition of this `no wasted entropy` property, which
hash functions can be assumed to have (and tested for), and which ensures
the desired extraction of randomness?
None that I know of. I'm not aware of much work in the crypto literature
on this
John S. Denker wrote:
Amir Herzberg wrote:
So I ask: is there a definition of this `no wasted entropy` property, which
hash functions can be assumed to have (and tested for), and which ensures
the desired extraction of randomness?
That's the right question.
The answer I give in the paper is
An example: presume we take a simple first order statistical model. If our
input is an 8-bit sample value from a noise source, we will build a 256
bin histogram. When we see an input value, we look its probability up in
the model, and discard every 1/(p(x)-1/256)'th sample with value x. When
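One way to read that discard rule (my reconstruction, not necessarily the original scheme's exact arithmetic) is as rejection sampling that thins over-represented values toward the uniform rate:

```python
import random

def flatten_samples(samples, nbins=256):
    """Thin a stream of nbins-valued samples toward uniform: estimate
    p(x) from a first-order histogram, then keep each sample x with
    probability (1/nbins)/p(x) (effectively capped at 1 for rare values)."""
    hist = [0] * nbins
    for x in samples:
        hist[x] += 1
    total = len(samples)
    out = []
    for x in samples:
        p = hist[x] / total
        if random.random() < (1.0 / nbins) / p:
            out.append(x)
    return out

# A stream heavily biased toward 0 comes out far closer to uniform.
biased = [0] * 1000 + [x % 256 for x in range(1000)]
flat = flatten_samples(biased)
print(biased.count(0), flat.count(0))  # 1004 vs. only a handful kept
```

This only corrects first-order bias; higher-order correlations between samples pass straight through, which is why such models are at best a sanity check.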
Barney Wolff wrote:
This leads me to ask what may be a laughably naive question:
Do we even know that the popular hash functions can actually generate
all 2^N values of their outputs?
It seems very unlikely that they can generate all 2^N outputs
(under current knowledge). However, they satisfy
Oh dear. On re-reading your message I now suspect that what you asked
is not what I originally thought you asked. I see two questions here:
Q1: If we cycle through all N-bit messages, are all
2^N output values possible?
Q2: If we cycle through all messages (possibly very long
or
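Q1 is easy to probe experimentally at toy sizes. Treating the first byte of SHA-256 as an 8-bit hash of 8-bit messages, coverage falls well short of all 256 outputs, as one expects of a random function (which covers about 1 - 1/e, roughly 63%, of its range):

```python
import hashlib

# Toy version of Q1 with N = 8: hash every 8-bit message and keep only
# the first output byte, giving a random-looking function from a
# 256-element domain to a 256-element range.
outputs = set()
for m in range(256):
    outputs.add(hashlib.sha256(bytes([m])).digest()[0])

# A uniformly random function misses a constant fraction of its range,
# so we expect (and find) noticeably fewer than 256 distinct outputs.
print(len(outputs))
```

This says nothing about Q2, of course: with longer messages the domain dwarfs the range, and for a random function every output value is then overwhelmingly likely to be hit.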
3) For a one-way hash function, one should not expect a _constructive_
proof that it generates all possible codes; such a construction
would violate the one-way property.
Nitpick: the last statement does not seem quite right to me. I'm thinking
of the notion of a one-way permutation. For
To test a hash function h() whose range is S,
let F be the set of balanced functions from S to {0, 1}. (Balanced
meaning that each f in F maps exactly half of S to 0 and half to 1.)
If you can contrive to choose many members of F at random, and compose
them with h for many arguments of h,
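A rough sketch of that test at toy scale (first SHA-256 byte standing in for h, one randomly chosen balanced f, and parameters of my own choosing): if h behaves randomly, the composition f(h(m)) should come out 1 about half the time.

```python
import hashlib
import random

random.seed(1)

# S is the byte range {0..255}; f is a random balanced map S -> {0,1}.
ones_set = set(random.sample(range(256), 128))  # f sends these to 1

def f(y):
    return 1 if y in ones_set else 0

def h(m):
    """Toy hash: first byte of SHA-256 over a 4-byte encoding of m."""
    return hashlib.sha256(m.to_bytes(4, 'big')).digest()[0]

trials = 10000
count = sum(f(h(m)) for m in range(trials))
print(count / trials)  # should sit close to 0.5
```

A real version of the test would repeat this over many independently chosen f and check that the fraction of ones stays within sampling error of 1/2.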
The reason for batching entropy input is to prevent someone who has
broken your system once from discovering each small entropy input by
exhaustive search. (There was a nice paper pointing this out. If
someone has the reference...)
I believe you are referring to the state compromise
Amir Herzberg wrote:
But there's a big difference: the random oracle `assumption` is clearly not
valid for SHA-1 (or any other specific hash function).
Well, the random oracle model has problems, but I think those problems
are a bit more subtle than just an assumption that is true or false.
So
David Wagner writes:
I don't know of any good cryptographic hash function that comes with
a proof that all outputs are possible. However, it might not be too
hard to come up with plausible examples. For example, if we apply the
Luby-Rackoff construction (i.e., 3
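The point about Luby-Rackoff is that a Feistel network is a permutation no matter what round function you plug in, so every output value is reachable by construction. A toy check at 16 bits (the round function and keys below are placeholders, not a secure instantiation):

```python
import hashlib

def feistel3(x, keys, half_bits=8):
    """3-round Feistel network (the Luby-Rackoff shape) on
    2*half_bits-bit inputs.  Each round is invertible regardless of the
    round function F, so the whole map is a bijection."""
    mask = (1 << half_bits) - 1
    L, R = x >> half_bits, x & mask

    def F(k, r):  # placeholder round function: truncated SHA-256
        return hashlib.sha256(bytes([k, r])).digest()[0] & mask

    for k in keys:
        L, R = R, L ^ F(k, R)
    return (L << half_bits) | R

# Brute-force bijectivity check over all 16-bit inputs.
outs = {feistel3(x, [1, 2, 3]) for x in range(1 << 16)}
print(len(outs))  # 65536: every output occurs exactly once
```

So a hash built this way would come with a trivial proof that all outputs are possible; the interesting part of the construction is the pseudorandomness argument, not the surjectivity.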
R. A. Hettinga wrote:
Protecting Privacy with Translucent Databases
Last week, officials at Yale University (http://www.yale.edu/) complained to
the FBI that admissions officers from Princeton University
(http://www.princeton.edu/index.shtml) had broken into
a Yale Web site and downloaded admission
David Honig wrote:
At 08:56 PM 8/30/02 -0700, AARG!Anonymous wrote:
The problem is that you can't forcibly collapse the state vector into your
wished-for eigenstate, the one where the plaintext recognizer returns a 1.
Instead, it will collapse into a random state, associated with a random
key,
Ed Gerck wrote:
The original poster is correct, however, in that a metric function can
be defined
and used by a QC to calculate the distance between a random state and an
eigenstate with some desired properties, and thereby allow the QC to define
when that distance is zero -- which provides the
AARG!Anonymous wrote:
David Wagner writes:
Standard process separation, sandboxes, jails, virtual machines, or other
forms of restricted execution environments would suffice to solve this
problem.
Nothing done purely in software will be as effective as what can be done
when you have secure
Peter N. Biddle wrote:
[...] You can still extract everything in Pd via a HW attack. [...]
How is this BORE resistant? The Pd security model is BORE resistant for a
unique secret protected by a unique key on a given machine. Your hack on
your machine won't let you learn the secrets on my
Perry E. Metzger wrote:
But if you can't simulate the system, that implies that the challenger
has to have stored the challenge-response pairs because he can't just
generate them, right? That means that only finitely many are likely to
be stored. Or was this thought of too?
I believe the idea is
Barney Wolff wrote:
Actually, it can. The server can store challenge-responses in pairs,
then send N as the challenge and use the N+1 response (not returned)
as the key.
But why bother? What does this add over just using crypto
without their fancy physical token? The uncloneability of
their
Bill Frantz wrote:
If the challenger selects several of his stored challenges, and asks the
token reader to return a secure hash of the answers (in order), no
information will be leaked about the response to any individual challenge.
This procedure will allow the challenger to perform a large
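A sketch of that batching idea (the HMAC-based token model and all names here are hypothetical, just to make the flow concrete): the verifier stores (challenge, response) pairs, picks several stored challenges, and the reader returns a single hash over the responses in order, revealing nothing about any individual response.

```python
import hashlib
import hmac
import os

def reader_answer(challenges, respond):
    """What the token reader returns: one hash over all responses,
    in the order the challenges were given."""
    h = hashlib.sha256()
    for c in challenges:
        h.update(respond(c))
    return h.digest()

# Simulated physical token: responses derived from a device secret.
secret = os.urandom(16)
respond = lambda c: hmac.new(secret, c, hashlib.sha256).digest()

# Verifier's stored challenge/response pairs, gathered in advance.
stored = [(c, respond(c)) for c in (os.urandom(8) for _ in range(10))]

# Verification: pick a few stored challenges, compare batch hashes.
picked = stored[2:7]
expected = hashlib.sha256(b''.join(r for _, r in picked)).digest()
assert reader_answer([c for c, _ in picked], respond) == expected
print("batch verified")
```

Each verification consumes the unpredictability of the chosen subset rather than a whole stored pair, which is what lets the challenger stretch a finite table further.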
Ed Gerck wrote:
Wei Dai wrote:
No matter how good the MAC design is, its internal collision probability
is bounded by the inverse of the size of its internal state space.
Actually, for any two (different) messages the internal collision probability
is bounded by the inverse of the SQUARE of
There seems to be a question about whether:
1. the internal collision probability of a hash function is bounded by the
inverse of the size of its internal state space, or
2. the internal collision probability of a hash function is bounded by the
inverse of the square root of the size of its
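For an idealized MAC whose n-bit internal state behaves like a uniformly random value, both readings have a place:

```latex
\Pr[\text{two fixed distinct messages collide internally}] \approx \frac{1}{2^n},
\qquad
\Pr[\text{some collision among } q \text{ messages}] \approx \frac{q(q-1)}{2^{n+1}}
```

so a collision becomes likely once q is around 2^{n/2}. Per pair of messages the probability is the inverse of the state-space size; the number of messages an attacker needs is about the square root of that size. The two statements answer different questions.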
Arnold G. Reinhold wrote:
If I am right and WPA needlessly
introduces a significant denial of service vulnerability, then it
should be fixed. If I am wrong, no change is needed of course.
But TKIP (the part of WPA you're talking about) is only a
temporary measure, and will soon be replaced by
Ed Gerck wrote:
For example, in reply to my constraint #2, you say:
This is expected to be roughly counterbalanced by the
number of unlucky users who quite (sic) while behind.
but these events occur under different models. If there
is no prepayment (which is my point #2) then many users
can
Matt Crawford wrote:
No, it doesn't. It doesn't take unlimited time for lottery-based
payment schemes to average out; finite time suffices to get the
schemes to average out to within any desired error ratio.
Strictly speaking, the average will come within your error tolerance
of the expected
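The "finite time suffices" claim is just a concentration bound. If per-transaction payments X_i are i.i.d. with mean mu and bounded by M, Hoeffding's inequality gives

```latex
\Pr\!\left[\left|\frac{1}{N}\sum_{i=1}^{N} X_i - \mu\right| \ge \varepsilon\mu\right]
\;\le\; 2\exp\!\left(-\frac{2N\varepsilon^2\mu^2}{M^2}\right)
```

so for any error ratio epsilon and any confidence level, a finite N suffices. What finite N cannot give is certainty, which is the "strictly speaking" caveat above.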
Ben Laurie wrote:
William Knowles wrote:
Prime numbers (such as 1, 5, 11, 37...) are divisible only by
themselves or 1. While smaller prime numbers are easy to make out, for
very large numbers, there never had been a formula for primality
testing until August 2002.
Doh! This is so untrue.
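Indeed: 1 is not prime, and practical primality tests long predate 2002. AKS was merely the first deterministic polynomial-time test; probabilistic tests such as Miller-Rabin (Miller 1976, Rabin 1980) have been standard for decades. A sketch:

```python
import random

def miller_rabin(n, rounds=20):
    """Miller-Rabin probabilistic primality test.  A composite n passes
    each round with probability at most 1/4, so 'rounds' independent
    witnesses make a false positive astronomically unlikely."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:       # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False    # a is a witness: n is definitely composite
    return True             # n is prime with overwhelming probability

print(miller_rabin(2**89 - 1))  # True: 2^89 - 1 is a Mersenne prime
print(miller_rabin(2**89 + 1))  # False: divisible by 3
```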
Jeroen C. van Gelderen wrote:
Here is a scenario: Scott wants Alice to generate a key pair after
which he will receive Alice's public key. At the same time, Scott wants
to make sure that this key pair is newly generated (has not been used
before).
You might be able to have Scott specify a
Trei, Peter wrote:
The weird thing about WEP was its choice of cipher. It used RC4, a
stream cipher, and re-keyed for every block. RC4 is
not really intended for this application. Today we'd
have used a block cipher with varying IVs if necessary
I suspect that RC4 was chosen for other reasons
Matt Crawford wrote:
But here's the more interesting question. If S = Z/2^128 and F is the
set of all bijections S -> S, what is the probability that a set G of
2^128 randomly chosen members of F contains no two functions f1, f2
such that there exists x in S such that f1(x) = f2(x)?
Vanishingly
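To spell that out: for two independent uniformly random bijections f_1, f_2 of an n-element set, f_2^{-1} composed with f_1 is a uniformly random permutation, so

```latex
\Pr[\forall x:\; f_1(x) \ne f_2(x)] \;=\; \frac{D_n}{n!} \;\longrightarrow\; \frac{1}{e}
```

where D_n counts derangements. Heuristically treating the \binom{2^{128}}{2} pairs in G as independent, the probability that no pair ever agrees is about e^{-\binom{2^{128}}{2}}, which is vanishingly small indeed.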
Bill Frantz wrote:
I guess I'm dumb, but how do you verify a proof of Sophie Germain primeness
with less effort than to run the tests yourself?
There are ways to prove that p is prime so that the receiver
can verify the proof more easily than it would be to construct
a proof. The verification
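A Pratt certificate is the classic example: verifying it needs only modular exponentiations plus recursive checks on the supplied factors of p - 1, while producing it requires factoring p - 1 and finding a generator. For a Sophie Germain prime p, the certificate for 2p + 1 is especially cheap, since 2p factors as 2 * p. A sketch of the verifier (the certificate format here is my own):

```python
def verify_pratt(p, cert):
    """Verify a Pratt primality certificate.  cert maps each claimed
    prime p to (g, [prime factors of p - 1]).  g having full order
    p - 1 in (Z/p)* proves that group has p - 1 elements, i.e. p is prime."""
    if p == 2:
        return True
    g, factors = cert[p]
    n = p - 1
    for q in factors:          # the factors must fully account for p - 1
        while n % q == 0:
            n //= q
    if n != 1:
        return False
    if pow(g, p - 1, p) != 1:  # g must have order dividing p - 1 ...
        return False
    if any(pow(g, (p - 1) // q, p) == 1 for q in set(factors)):
        return False           # ... and not dividing any (p - 1)/q
    # Each factor must itself come with a valid certificate.
    return all(verify_pratt(q, cert) for q in set(factors))

# Certificate that 23 is prime: 5 generates (Z/23)*, and 22 = 2 * 11.
cert = {23: (5, [2, 11]), 11: (2, [2, 5]), 5: (2, [2])}
print(verify_pratt(23, cert))  # True
```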
Hermes Remailer wrote:
Hopefully this will shed light on the frequent claims that Palladium will
limit what programs people can run, [...]
That's a strawman argument. The problem is not that Palladium will
*itself* directly limit what I can run; the problem is what Palladium
enables. Why are
Ian Grigg wrote:
By common wisdom, SSL is designed to defeat
the so-called Man in the Middle attack, or
MITM for short.
The question arises, why?
One possible reason: Because DNS is insecure.
If you can spoof DNS, you can mount a MITM attack.
A second possible reason: It's hard to predict
what
Nomen Nescio wrote:
Regarding using blinding to defend against timing attacks, and supposing
that a crypto library is going to have support for blinding:
- Should it do blinding for RSA signatures as well as RSA decryption?
- How about for DSS signatures?
My guess is that it's not necessary,
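For RSA at least, blinding is cheap. A toy sketch (illustrative parameters only; real code should use a vetted library's built-in blinding, and this relies on Python 3.8+ for modular inverses via pow):

```python
import random
from math import gcd

# Toy RSA parameters -- far too small for real use.
p, q, e = 1000003, 1000033, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def blinded_decrypt(c):
    """RSA decryption with blinding: multiply the ciphertext by r^e for
    random r, exponentiate, then divide out r.  The secret-exponent
    operation runs on a value the attacker cannot predict, defeating
    timing analysis of the exponentiation itself."""
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    c_blind = (c * pow(r, e, n)) % n      # blind
    m_blind = pow(c_blind, d, n)          # private-key operation
    return (m_blind * pow(r, -1, n)) % n  # unblind

m = 123456789
c = pow(m, e, n)
print(blinded_decrypt(c) == m)  # True
```

The same trick applies verbatim to RSA signing (the exponentiation is identical); DSS is a different question, since its per-signature nonce already randomizes the exponent.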
Ian Grigg writes:
I don't think mere monetary costs are even germane to
something like this. The costs, publicly and personally,
are of a different kind than money expresses.
I'm sorry to disagree, but I'm sticking to my
cost-benefit analysis: monetary costs are totally
germane. You see, we
Richard Guy Briggs wrote:
If You Want To Win An Election, Just Control The Voting Machines
by Thom Hartmann
[...]
Six years later Hagel ran again, this time against Democrat Charlie Matulka
in 2002, and won in a landslide. As his hagel.senate.gov website says, Hagel
was re-elected to his second