On Sun, Jul 30, 2000 at 01:25:18AM -0400, Jeroen C. van Gelderen wrote:
Hmm, maybe the complainers should provide proof that they do
need more than 2^256 complexity. Makes it easier for us,
proponents ;-/
How about creating one-time pads?
That said, in Applied Cryptography, Schneier makes the
On Sun, 30 Jul 2000, Mark Murray wrote:
This is a reversion to the count-entropy-and-block model which I have
been fiercely resisting (and which argument I thought I had successfully
defended).
Actually, I was waiting for your reply to Jeroen's question about changing
the semantics of
Content-Disposition: attachment; filename="yarrow_blocking.patch"
Brian:
I want to take a different approach to this one.
Do not commit anything to the /dev/random device, please, without running
it by me.
M
--
Mark Murray
Join the anti-SPAM movement: http://www.cauce.org
On Sun, 30 Jul 2000, Mark Murray wrote:
Content-Disposition: attachment; filename="yarrow_blocking.patch"
Brian:
I want to take a different approach to this one.
Do not commit anything to the /dev/random device, please, without running
it by me.
I was not planning on it. You really
On Sun, 30 Jul 2000, Mark Murray wrote:
This is a reversion to the count-entropy-and-block model which I have
been fiercely resisting (and which argument I thought I had successfully
defended).
My solution is to get the entropy gathering at a high enough rate that
this is not necessary.
Do not commit anything to the /dev/random device, please, without running
it by me.
I was not planning on it. You really should take a look at the bugfixes,
though; reading with buffer sizes of 8 bytes, and with sizes that are not a
multiple of 8, should do it. There's also the ioctl handler, for which you need stubs
How does entropy gathering at a high enough rate solve this
particularly?
EG, by having it such that Yarrow state perturbations happen often
enough that each read is "guaranteed" to be associated with at least
one and preferably more.
M
On Sun, 30 Jul 2000, Mark Murray wrote:
How does entropy gathering at a high enough rate solve this
particularly?
EG, by having it such that Yarrow state perturbations happen often
enough that each read is "guaranteed" to be associated with at least
one and preferably more.
Can you
EG, by having it such that Yarrow state perturbations happen often
enough that each read is "guaranteed" to be associated with at least
one and preferably more.
Can you give me an idea how this would work, at least with e.g.
pseudocode annotation of the current code? I'm curious what
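Not the actual patch, but a minimal sketch (all names hypothetical) of the guarantee Mark describes: a read fails (a real driver would block) unless at least one state perturbation has happened since the previous read:

```python
import hashlib
import os

class YarrowSketch:
    """Toy model: each read must see at least one reseed since the last read."""

    def __init__(self):
        self.key = os.urandom(32)        # 256-bit internal state
        self.reseed_count = 0
        self.last_read_reseed = -1

    def add_entropy(self, sample: bytes):
        # Fold the sample into the key; count it as a state perturbation.
        self.key = hashlib.sha256(self.key + sample).digest()
        self.reseed_count += 1

    def read(self, nbytes: int) -> bytes:
        # Refuse (a driver would sleep) until a new perturbation has occurred.
        if self.reseed_count == self.last_read_reseed:
            raise BlockingIOError("no reseed since last read")
        self.last_read_reseed = self.reseed_count
        out, counter = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha256(self.key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:nbytes]
```

With frequent enough harvesting, `add_entropy()` fires between any two reads and the blocking branch is never taken, which is the "high enough rate" argument.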
On Sun, 30 Jul 2000, Mark Murray wrote:
This is a reversion to the count-entropy-and-block model which I have
been fiercely resisting (and which argument I thought I had successfully
defended).
Actually, I was waiting for your reply to Jeroen's question about changing
the semantics of the
On Mon, 24 Jul 2000, Jeroen C. van Gelderen wrote:
What I meant with that point is that the user may get, say an extra few
hundred bits out of it with no new entropy before the scheduled reseed
task kicks in.
How does he know which bits are which? His analysis task just got a whole
Brian Fundakowski Feldman wrote:
On Mon, 24 Jul 2000, Jeroen C. van Gelderen wrote:
What I meant with that point is that the user may get, say an extra few
hundred bits out of it with no new entropy before the scheduled reseed
task kicks in.
How does he know which bits are
How does OpenBSD handle this issue? Anyone know?
--
Ben
220 go.ahead.make.my.day ESMTP Postfix
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message
On Wed, 26 Jul 2000, void wrote:
How does OpenBSD handle this issue? Anyone know?
It looks like they have four different kernel-exported random-number
generators:
#define RND_RND 0 /* real randomness like nuclear chips */
#define RND_SRND 1 /* strong random source
http://www.counterpane.com/pseudorandom_number.html
Cryptlib is described here:
http://www.cs.auckland.ac.nz/~pgut001/cryptlib/
Thanks!
Asynchronous reseeding _improves_ the situation; the attacker cannot force
it to any degree of accuracy, and if he has the odds stacked heavily
On Sun, Jul 23, 2000 at 03:06:34PM +0200, Poul-Henning Kamp wrote:
In message [EMAIL PROTECTED], Stefan `Sec` Zehl writes:
With the current approach it has a 256-bit key. This is, in my eyes, not
good. Although Yarrow is nice, it's not suited for this kind of key
generation.
The first law of
On Mon, 24 Jul 2000, Jeroen C. van Gelderen wrote:
1. The overhead will probably be insignificant. One doesn't
use such vast amounts of random numbers.
True, but the effect on slow CPUs for a single read may be significant.
We'll have to see.
2. At least the generator gate can be
Mark Murray wrote:
[...]
Asynchronous reseeding _improves_ the situation; the attacker cannot force
it to any degree of accuracy, and if he has the odds stacked heavily against
him that each 256-bits of output will have an associated reseed, it makes
his job pretty damn difficult.
/dev/random should block if the system does not contain as much
real entropy
as the reader desires. Otherwise, the PRNG implementation will be the
weakest link for people who have deliberately selected higher levels of
protection from cryptographic attack.
I don't want to rehash this
On Sat, 22 Jul 2000, Mark Murray wrote:
So what if I want/need 257 bits? :-)
Read them. You'll get them. If you want higher quality randomness than
Yarrow gives, read more than once. Do other stuff; play. Don't get stuck
in the "I have exhausted the randomness pool" loop; Yarrow
The core of my complaint is that even though our old PRNG did crappy
entropy handling, we used to have such a method, which is now gone. I'd
like to see yarrow hang off /dev/urandom and have /dev/random tap directly
into the entropy pool (perhaps a third pool separate from Yarrow's
Okay, using RSA keys wasn't the best example to pick, but Yarrow also
seems easy to misuse in other cases: for example if you want to generate
multiple 256-bit symmetric keys (or other random data) at the same time,
each additional key after the first won't contain any additional entropy,
so
On Sun, 23 Jul 2000, Mark Murray wrote:
You are missing the point that it is not possible to get more than
the ${number-of-bits-of-randomness} from any accumulator or PRNG. You
have to draw the line somewhere; the current implementation has it
at 256.
Uhh... a PRNG which hashes entropy
On Sun, 23 Jul 2000, Mark Murray wrote:
By your own admission, the old system was bad; yet you still want
${it}? You'd like to see a programmer with less experience than
Schneier come up with a more secure algorithm than him?
The old implementation was bad. The class of algorithm is not, as
On Sun, 23 Jul 2000, Mark Murray wrote:
Okay, using RSA keys wasn't the best example to pick, but Yarrow also
seems easy to misuse in other cases: for example if you want to generate
multiple 256-bit symmetric keys (or other random data) at the same time,
each additional key after the
In message [EMAIL PROTECTED], Kris Kennaway writes:
On Sat, 22 Jul 2000, Jeroen C. van Gelderen wrote:
I agree that you need long RSA keys ... but the real
discussion isn't really about key length but rather about
the overall complexity of attacking the key:
Okay, using RSA keys wasn't
This is basically the model I am advocating for /dev/random. It's also the
alternative "basic design philosophy" described in the yarrow paper.
Erm, read 4.1 again :-). The paragraph that begins "One approach..." is
the old approach. It is also the approach that you are advocating.
The next
On Sun, 23 Jul 2000, Poul-Henning Kamp wrote:
Obviously, if you need more randomness than a stock FreeBSD system
can provide you with, you add hardware to give you more randomness.
This won't help if it's fed through Yarrow.
In other words, and more bluntly: Please shut up now, will you ?
On Sun, 23 Jul 2000, Mark Murray wrote:
Erm, read 4.1 again :-). The paragraph that begins "One approach..." is
the old approach. It is also the approach that you are advocating.
The next paragraph "Yarrow takes..." is Yarrow, and the current
implementation.
"The strength of the first
In message [EMAIL PROTECTED], Kris Kennaway writes:
On Sun, 23 Jul 2000, Poul-Henning Kamp wrote:
Obviously, if you need more randomness than a stock FreeBSD system
can provide you with, you add hardware to give you more randomness.
This won't help if it's fed through Yarrow.
Nobody has
Obviously, if you need more randomness than a stock FreeBSD system
can provide you with, you add hardware to give you more randomness.
This won't help if it's fed through Yarrow.
*BTTT!* Wrong. A good hardware RNG when fed at a high-enough rate
through Yarrow can easily produce
On Sun, 23 Jul 2000, Poul-Henning Kamp wrote:
Obviously, if you need more randomness than a stock FreeBSD system
can provide you with, you add hardware to give you more randomness.
This won't help if it's fed through Yarrow.
*BTTT!* Wrong. A good hardware RNG when fed at a
This design tradeoff is discussed in section 4.1 of the paper.
Tweakable.
Doing a reseed operation with every output is going to be *very*
computationally expensive.
Tradeoff. What do you want? Lightning fast? Excessive security? Balance
it out.
Well, I don't see a way to tune this
On Sun, 23 Jul 2000, Mark Murray wrote:
This design tradeoff is discussed in section 4.1 of the paper.
Tweakable.
Doing a reseed operation with every output is going to be *very*
computationally expensive.
Tradeoff. What do you want? Lightning fast? Excessive security? Balance
The acknowledgment that I am looking for is that the old, simple "gather
entropy, stir with hash, serve" model is inadequate IMO, and I have not
seen any alternatives.
There are two other models which rate "pretty well-designed" in the Yarrow
paper: the cryptlib and PGP PRNGs. I don't
In message [EMAIL PROTECTED], Stefan `Sec` Zehl writes:
Assume I want to encrypt a message by XOR'ing with randomness.
If I then exchange my keys securely, the message is uncrackable.
With the current approach it has a 256-bit key. This is, in my eyes, not
good. Although Yarrow is nice, it's
David Schwartz wrote:
/dev/random should block if the system does not contain as much
real entropy
as the reader desires. Otherwise, the PRNG implementation will be the
weakest link for people who have deliberately selected higher levels of
protection from cryptographic attack.
Kris Kennaway wrote:
On Sun, 23 Jul 2000, Mark Murray wrote:
This design tradeoff is discussed in section 4.1 of the paper.
Tweakable.
Doing a reseed operation with every output is going to be *very*
computationally expensive.
Tradeoff. What do you want? Lightning
5. Yarrow was designed as a better replacement for most any
PRNG by a couple of bright cryptographers. Can you do
better than that?
Nope, I agree. Ignore my previous objections.
DS
On Sun, 23 Jul 2000, Mark Murray wrote:
There are two other models which rate "pretty well-designed" in the Yarrow
paper: the cryptlib and PGP PRNGs. I don't know what their properties are
right now (the cryptlib one is described in the paper on PRNG
cryptanalysis).
Do you have copies
On Sun, 23 Jul 2000, Jeroen C. van Gelderen wrote:
Well, a simple scheme which doesn't seem to suffer from any of the
vulnerabilities discussed in the schneier papers is to accumulate entropy
in a pool, and only return output when the pool is full. i.e. the PRNG
would either block or
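A toy model of that block-until-full scheme, with hand-wavy entropy accounting (the 32-byte pool, SHA-256, and bit estimates are illustrative assumptions, not the actual proposal):

```python
import hashlib

class BlockingPool:
    """Toy count-entropy-and-block model: output only when the pool is 'full'."""

    POOL_BITS = 256

    def __init__(self):
        self.pool = b"\x00" * 32
        self.entropy_bits = 0

    def stir(self, sample: bytes, est_bits: int):
        # Hash the sample into the pool; credit its (estimated) entropy,
        # capped at the pool size.
        self.pool = hashlib.sha256(self.pool + sample).digest()
        self.entropy_bits = min(self.POOL_BITS, self.entropy_bits + est_bits)

    def read32(self) -> bytes:
        # A real /dev/random would sleep here instead of raising.
        if self.entropy_bits < self.POOL_BITS:
            raise BlockingIOError("pool not full; would block")
        self.entropy_bits = 0            # debit the whole pool per output
        out = hashlib.sha256(b"out" + self.pool).digest()
        self.pool = hashlib.sha256(b"next" + self.pool).digest()
        return out
```

The point of contention in the thread is exactly the `entropy_bits` bookkeeping: blocking is only as meaningful as those estimates are honest.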
I'm all for storing a sample at shutdown and using it to help seed the
PRNG at startup, but it shouldn't be the only seed used (for example, the
case where the system has never been shut down (cleanly) before and so has
no pre-existing seed file is a BIG corner case to consider since thats
After rereading the paper in more detail, Step 7 of the reseed algorithm
seems not entirely consistent with this: they explicitly refer to writing
out "the next 2k bits of output from the generator to the seed file"
(slightly different terminology, but I couldn't find any other references
to
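Read literally, that step saves generator *output*, not the raw key. A sketch (hypothetical names, illustrative primitives) of why that distinction matters: the generator steps its key after producing the seed bytes, so the file on disk does not reveal the state that produced earlier outputs:

```python
import hashlib

class Gen:
    """Toy counter-mode generator with a 'gate' step after each output."""

    def __init__(self, key: bytes):
        self.key = key

    def generate(self, nbytes: int) -> bytes:
        out, ctr = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha256(self.key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        # Gate: derive a fresh key so earlier outputs can't be reconstructed
        # from the current state (or from a seed file written later).
        self.key = hashlib.sha256(b"gate" + self.key).digest()
        return out[:nbytes]

def save_seed(gen: Gen) -> bytes:
    # "Write the next 2k bits of output" -- generator output, not the key.
    return gen.generate(64)

def restore_seed(seed: bytes) -> Gen:
    # On reboot the seed is just another entropy input, hashed into a new key.
    return Gen(hashlib.sha256(b"reseed" + seed).digest())
```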
On Sat, 22 Jul 2000, Mark Murray wrote:
Lots of references: Schneier's "Applied Cryptography" talks about
using Good Hashes for crypto and Good Crypto for hashes. Schneier's
site at www.counterpane.com will give you plenty.
I havent been able to get my hands on Applied Cryptography, but I
The difference between the old system and Yarrow is Yarrow's self-recovery
property; Yarrow screens its internal state from the outside world
very heavily, and provides enough perturbation of it from its
copious :-) entropy harvesting to keep the state safe from compromise.
Yeah, I know all
On Sat, 22 Jul 2000, Mark Murray wrote:
Because of Yarrow's cryptographic protection of its internal state, its
frequent reseeds and its clever generation mechanism, this paradigm is
less important - the output is 256-bit safe (Blowfish safe) for any size
of output[*]. When you read 1000
On Sat, 22 Jul 2000, Mark Murray wrote:
Because of Yarrow's cryptographic protection of its internal state, its
frequent reseeds and its clever generation mechanism, this paradigm is
less important - the output is 256-bit safe (Blowfish safe) for any size
of output[*]. When you read
Kris Kennaway wrote:
On Sat, 22 Jul 2000, Mark Murray wrote:
Lots of references: Schneier's "Applied Cryptography" talks about
using Good Hashes for crypto and Good Crypto for hashes. Schneier's
site at www.counterpane.com will give you plenty.
I havent been able to get my hands on
Kris Kennaway wrote:
On Fri, 21 Jul 2000, Mark Murray wrote:
Section 2.1, last paragraph:
"If a system is shut down, and restarted, it is desirable to store some
high-entropy data (such as the key) in non-volatile memory. This allows
the PRNG to be restarted in an unguessable state
On Fri, 21 Jul 2000, Mark Murray wrote:
Section 2.1, last paragraph:
"If a system is shut down, and restarted, it is desirable to store some
high-entropy data (such as the key) in non-volatile memory. This allows
the PRNG to be restarted in an unguessable state at the next restart. We
From the Yarrow paper:
``Yarrow's outputs are cryptographically derived. Systems that use Yarrow's
outputs are no more secure than the generation mechanism used.''
We currently have Yarrow-256(Blowfish); wanna make it Yarrow-1024? I could
make it so.
M
/dev/random should block if the system does not contain as much real entropy
as the reader desires. Otherwise, the PRNG implementation will be the
weakest link for people who have deliberately selected higher levels of
protection from cryptographic attack.
I don't want to rehash this thread
On Sat, 22 Jul 2000, Mark Murray wrote:
So what if I want/need 257 bits? :-)
Read them. You'll get them. If you want higher quality randomness than
Yarrow gives, read more than once. Do other stuff; play. Don't get stuck
in the "I have exhausted the randomness pool" loop; Yarrow does
On Sat, 22 Jul 2000, Jeroen C. van Gelderen wrote:
You don't care in practice, 256 bits are unguessable.
Actually, I do... that's the entire point of using long keys.
If you do care, you load a different random module :-)
The core of my complaint is that even though our old PRNG did crappy
Kris Kennaway wrote:
On Sat, 22 Jul 2000, Jeroen C. van Gelderen wrote:
You don't care in practice, 256 bits are unguessable.
Actually, I do... that's the entire point of using long keys.
I agree that you need long RSA keys ... but the real
discussion isn't really about key length but
On Sat, 22 Jul 2000, Jeroen C. van Gelderen wrote:
I agree that you need long RSA keys ... but the real
discussion isn't really about key length but rather about
the overall complexity of attacking the key:
Okay, using RSA keys wasn't the best example to pick, but Yarrow also
seems easy to
On Tue, 18 Jul 2000, Dan Moschuk wrote:
Well, how many other OSs out there allow /dev/random to be written to?
FreeBSD, OpenBSD, NetBSD, Linux...
Kris
--
In God we Trust -- all others must submit an X.509 certificate.
-- Charles Forsythe [EMAIL PROTECTED]
What about saving the state of the RNG and re-reading it on bootup? That
will allow Yarrow to continue right where it left off. :-)
That's a bad thing. You don't want someone to be able to examine the exact
PRNG state at next boot by looking at your hard disk after the machine has
shut
| | Gotcha - fix coming; I need to stash some randomness at shutdown time, and
| | use that to reseed the RNG at reboot time.
|
| What about saving the state of the RNG and re-reading it on bootup? That
| will allow Yarrow to continue right where it left off. :-)
|
| That's a bad thing.
Mark Murray wrote:
What about saving the state of the RNG and re-reading it on bootup? That
will allow Yarrow to continue right where it left off. :-)
That's a bad thing. You don't want someone to be able to examine the exact
PRNG state at next boot by looking at your hard disk
Dan Moschuk wrote:
| | Gotcha - fix coming; I need to stash some randomness at shutdown time, and
| | use that to reseed the RNG at reboot time.
|
| What about saving the state of the RNG and re-reading it on bootup? That
| will allow Yarrow to continue right where it left off. :-)
Jeroen C. van Gelderen wrote:
Dan Moschuk wrote:
I don't see how. If the attacker has physical access to the machine, there
are plenty of worse things to be done than just reading the state of a PRNG.
If the random device is initialized in single user mode, and the file is
then
It is a Yarrow-mandated procedure. Please read the Yarrow paper.
Actually, it's not. You do not want to save the exact
PRNG state to disk, ever. It's not a Yarrow-mandated
procedure but a big security hole.
Section 2.1, last paragraph:
"If a system is shut down, and restarted, it is
You generate a new PGP keypair and start using it. Your
co-worker reboots your machine afterwards and recovers
the PRNG state that happens to be stashed on disk. He
can then backtrack and potentially recover the exact same
random numbers that you used for your key.
Said state is rm'ed
On Fri, 21 Jul 2000, Mark Murray wrote:
:
:Sure; we need to be appropriately paranoid about that, but let's not
:get ridiculous. The seed file could certainly use some decent protection,
:but unfortunately, PC architectures don't come with SIMcards or the like.
:
Is it possible to combine the
:Sure; we need to be appropriately paranoid about that, but let's not
:get ridiculous. The seed file could certainly use some decent protection,
:but unfortunately, PC architectures don't come with SIMcards or the like.
:
Is it possible to combine the state of the disk based seed with some
You generate a new PGP keypair and start using it. Your
co-worker reboots your machine afterwards and recovers
the PRNG state that happens to be stashed on disk. He
can then backtrack and potentially recover the exact same
random numbers that you used for your key.
If that is
On Fri, 21 Jul 2000, Mark Murray wrote:
Section 2.1, last paragraph:
"If a system is shut down, and restarted, it is desirable to store some
high-entropy data (such as the key) in non-volatile memory. This allows
the PRNG to be restarted in an unguessable state at the next restart. We
call
On Fri, 21 Jul 2000, Mark Murray wrote:
If you are worried about someone reading the disk of a rebooting box,
then you need to be worried about console access; if your attacker has
console, you are screwed anyway.
For most people, yes. But it's like all of the buffer overflows in
non-setuid
On Fri, 21 Jul 2000, David Schwartz wrote:
You generate a new PGP keypair and start using it. Your
co-worker reboots your machine afterwards and recovers
the PRNG state that happens to be stashed on disk. He
can then backtrack and potentially recover the exact same
random numbers that
On Fri, 21 Jul 2000, Kris Kennaway wrote:
Section 2.1, last paragraph:
"If a system is shut down, and restarted, it is desirable to store some
high-entropy data (such as the key) in non-volatile memory. This allows
the PRNG to be restarted in an unguessable state at the next restart. We
The reason why ntp is interesting is that we compare the received data
with our unpredictable local clock. It is the result of this comparison
which yields the good entropy bits.
Is the resolution of thermal sensors on many new motherboards and
CPU high enough to get thermal randomness?
Peter
--
On 19-Jul-00 Peter Dufault wrote:
Is the resolution of thermal sensors on many new motherboards and
CPU high enough to get thermal randomness?
The voltage sensors have some noise too (maybe not enough).
--
Steve O'Hara-Smith [EMAIL PROTECTED]
http://sohara.webhop.net/ A
On Wed, 19 Jul 2000, Steve O'Hara-Smith wrote:
On 19-Jul-00 Peter Dufault wrote:
Is the resolution of thermal sensors on many new motherboards and
CPU high enough to get thermal randomness?
The voltage sensors have some noise too (maybe not enough).
Fan speed too.
Leif
In message [EMAIL PROTECTED] Alexander Leidinger writes:
: systems which have a more or less precise clock attached (e.g. GPS or
: atomic clocks which sync the system clock via ntpd)? And what are the
: numbers for this solution (for those people which are interested in
: numbers to be their own
In message [EMAIL PROTECTED], Warner Losh writes:
Another good source would be if you had a Cesium clock and a GPS
receiver. The delay due to atmospherics is another good source of
random data. This varies +- 25ns and is highly locale dependent. One
can measure this variance down to the
[ A whole bunch of sane stuff removed ]
It certainly would be better than nothing and would be a decent source
of randomness. It would be my expectation that if tests were run to
measure this randomness and the crypto random tests were applied,
we'd find a fairly good source.
The
In message [EMAIL PROTECTED] Poul-Henning Kamp writes:
: A geiger counter and a smoke-detector would be *so much* cheaper
: and give more bits per second :-)
Agreed. And a lot less hassle. A *LOT* less hassle. :-)
: It certainly would be better than nothing and would be a decent source
: of
In message [EMAIL PROTECTED] Mark Murray writes:
: The randomness is good, no doubt; I worry about how accessible that
: randomness is to an attacker?
That's a good thing to worry about.
: If the attacker is on your computer (he is a user, say), he might know
: a lot about the current frequency
In message [EMAIL PROTECTED], Mark Murray writes:
[ A whole bunch of sane stuff removed ]
It certainly would be better than nothing and would be a decent source
of randomness. It would be my expectation that if tests were run to
measure this randomness and the crypto random tests were
In message [EMAIL PROTECTED] Peter Dufault writes:
: The reason why ntp is interesting is that we compare the received data
: with our unpredictable local clock. It is the result of this comparison
: which yields the good entropy bits.
:
: Is the resolution of thermal sensors on many new
The trick here is to actually measure the quality of our entropy.
I have asked Markm to provide us with some kernel option which can
be used to get a copy of the entropy so we can study the quality
of it.
I have something that is _very_ crude, and definitely not
commitworthy, but it is up
: If the attacker is on your computer (he is a user, say), he might know
: a lot about the current frequency of your xtal. He can also get the same
: (remote) time offsets as you. What does that give him? Not much, but it
: could reduce the bits that he needs to guess. By how much? I don't
:
If the attacker is on your computer (he is a user, say), he might know
a lot about the current frequency of your xtal. He can also get the same
(remote) time offsets as you. What does that give him? Not much, but it
could reduce the bits that he needs to guess. By how much? I don't
know.
Actually, you could really use this in ntpd(8), rather than just ntpdate.
You could crank in the offset and delay samples for each packet
received from an NTP peer; this will have the effect of adding into
the entropy pool the "noise" in the latency of the path between you
and each of your
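A rough sketch of "cranking in" per-packet noise. Here a high-resolution clock stands in for the real per-peer offset/delay measurements an ntpd hook would provide (the function names and the 16-low-bits choice are assumptions for illustration):

```python
import hashlib
import time

def jitter_samples(n: int = 8) -> list[int]:
    """Stand-in for NTP offset/delay samples: measure tiny scheduling
    delays with a nanosecond clock. A real hook would use the measured
    offset and round-trip delay of each peer packet instead."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        time.sleep(0)                    # yield; scheduler noise perturbs timing
        samples.append(time.perf_counter_ns() - t0)
    return samples

def fold_into_pool(pool: bytes, samples: list[int]) -> bytes:
    """Hash each sample's low-order bits into the pool; only those bits
    carry the unpredictable path/scheduling noise."""
    for s in samples:
        pool = hashlib.sha256(pool + (s & 0xFFFF).to_bytes(2, "big")).digest()
    return pool
```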
In message [EMAIL PROTECTED], Mark Murray writes:
Actually, you could really use this in ntpd(8), rather than just ntpdate.
You could crank in the offset and delay samples for each packet
received from an NTP peer; this will have the effect of adding into
the entropy pool the "noise" in the
People have tried for 30+ years to predict what a quartz xtal
will do next. Nobody expects any chance of success. Add to this
the need to predict the difference between one or more NTP servers
and your local quartz xtal and I think we can safely say "impossible".
You can't predict this, but
In message [EMAIL PROTECTED], Mark Murray writes:
People have tried for 30+ years to predict what a quartz xtal
will do next. Nobody expects any chance of success. Add to this
the need to predict the difference between one or more NTP servers
and your local quartz xtal and I think we can
Poul-Henning Kamp wrote:
In message [EMAIL PROTECTED], "Jeroen C. van Gelderen" writes:
Predicting the clock's offset from reality and the two way path to
the server of choice is impossible, plus if people enable authentication
later on, the packets will be chock-full of high-quality
On Mon, 17 Jul 2000 16:27:17 MST, "Kurt D. Zeilenga" wrote:
Note that there should be no need to cron the job.
You're right. My suggestion to use cron's @reboot was as stupid as they
come. :-)
Sorry,
Sheldon.
Poul-Henning Kamp wrote:
In message [EMAIL PROTECTED], "Jeroen C. van Gelderen" writes:
People have tried for 30+ years to predict what a quartz xtal
will do next. Nobody expects any chance of success. Add to this
the need to predict the difference between one or more NTP servers
No, he doesn't have access to the offset from the machine's local clock.
I ran a quick dirty test here on some logfiles: that offset is
very close to white noise.
With what amplitude?
M
In message [EMAIL PROTECTED], Mark Murray writes:
No, he doesn't have access to the offset from the machine's local clock.
I ran a quick dirty test here on some logfiles: that offset is
very close to white noise.
With what amplitude?
Depends on the thermal environment of your xtal obviously
In message [EMAIL PROTECTED], "Jeroen C. van Gelderen" writes:
It's up to the user to decide what security level he needs.
Both ought to be possible but having an insecure box ought
to be an explicit decision.
Principle of POLA: The box doesn't come up in a stupid configuration
right after
Thus spake Louis A. Mamakos ([EMAIL PROTECTED]):
Actually, you could really use this in ntpd(8), rather than just ntpdate.
Hmm, as addition, I agree.
However, I think more people use ntpdate than ntpd, and thus ntpdate
is a good place :)
Alex
--
cat: /home/alex/.sig: No such file or
On Sun, 16 Jul 2000, Kris Kennaway wrote:
On the other hand, doing a dd if=/dev/random of=/dev/null gives me
infinite "randomness" at 10MB/sec - have the semantics of /dev/random
changed?
Yes. /dev/random is now just an alias for /dev/urandom (or vice versa).
You must have a fast machine
With microsecond timestamps and a 64-second NTP poll period, we are talking
about approx 10 bits of randomness in the received packet and about
3 bits of randomness in the clock difference.
FreeBSD uses nanosecond timestamping (actually, it could do nanoseconds
with 32-bit fractions), but that only
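The "10 bits" figure is consistent with simple counting: a sample can carry at most log2 of the number of distinguishable timestamp values that the jitter spans at the clock's resolution. A back-of-the-envelope sketch (the ~1 ms jitter magnitude is an assumption for illustration):

```python
import math

def max_entropy_bits(jitter_s: float, resolution_s: float) -> float:
    """Upper bound on entropy per sample: log2 of how many distinct
    timestamp values the jitter can land on at the given resolution."""
    return math.log2(jitter_s / resolution_s)

# ~1 ms of network jitter observed with microsecond timestamps:
# log2(1e-3 / 1e-6) = log2(1000), i.e. just under 10 bits per packet.
```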
On Tue, 18 Jul 2000, Bruce Evans wrote:
You must have a fast machine to get 10MB/sec. I see the following speeds
(using a better reading program than dd; dd gives up on EOF on the old
/dev/random):
Oops, I misread the rate by 2 orders of magnitude. I get about 100K/sec on
my PPro/233 :-)
I ran a quick dirty test here on some logfiles: that offset is
very close to white noise.
With what amplitude?
Depends on the thermal environment of your xtal obviously :-)
Help me here! :-)
In your observed sample, what was the white noise amplitude?
M