Re: [OT] best tape backup system?

2005-02-22 Thread Lajber Zoltan
On Tue, 22 Feb 2005, Louis-David Mitterrand wrote:

 Is there a better solution out there?

We are using a 59P6744 200/400GB LTO Full-High Tape Drive with great
results. It made a full 180G backup within 100 minutes. We use the
tob script for full and differential backups.

Bye,
-=Lajbi=
 LAJBER Zoltan   Szent Istvan Egyetem,  Informatika Hivatal
   engineer: a mechanism for converting caffeine into designs.



Can someone explain this?

2005-02-22 Thread Phantazm
Why is it reporting Raid Devices: 8 but Total Devices: 5?
Why Failed Devices: -1? (hehe, do I have a credit of one failed device?)  ;)

Is there any way to get the figures straight again?


mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Wed Jan 26 18:24:22 2005
     Raid Level : raid5
     Array Size : 1393991424 (1329.41 GiB 1427.45 GB)
    Device Size : 199141632 (189.92 GiB 203.92 GB)
   Raid Devices : 8
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Feb 21 20:52:24 2005
          State : dirty, degraded, recovering
 Active Devices : 7
Working Devices : 7
 Failed Devices : -1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 82% complete

           UUID : 129eed1b:e38f8cbf:dcb31e59:e6532935
         Events : 0.10797242

    Number   Major   Minor   RaidDevice State
       0      22        1        0      active sync   /dev/ide/host0/bus1/target0/lun0/part1
       1      22       65        1      active sync   /dev/ide/host0/bus1/target1/lun0/part1
       2      33        1        2      active sync   /dev/ide/host2/bus0/target0/lun0/part1
       3      33       65        3      active sync   /dev/ide/host2/bus0/target1/lun0/part1
       4      34        1        4      active sync   /dev/ide/host2/bus1/target0/lun0/part1
       5       0        0        5      faulty removed
       6      57        1        6      active sync   /dev/ide/host4/bus1/target0/lun0/part1
       7      57       65        7      active sync   /dev/ide/host4/bus1/target1/lun0/part1

       8      56        1        8      spare rebuilding   /dev/ide/host4/bus0/target0/lun0/part1




Re: *terrible* direct-write performance with raid5

2005-02-22 Thread Peter T. Breuer
Michael Tokarev [EMAIL PROTECTED] wrote:
 When debugging some other problem, I noticed that
 direct-io (O_DIRECT) write speed on a software raid5

And normal write speed (over 10 times the size of ram)?

 is terrible slow.  Here's a small table just to show
 the idea (not numbers by itself as they vary from system
 to system but how they relate to each other).  I measured
 plain single-drive performance (sdX below), performance
 of a raid5 array composed from 5 sdX drives, and ext3
 filesystem (the file on the filesystem was pre-created

And ext2? You will be enormously hampered by using a journalling file
system, especially with journal on the same system as the one you are
testing! At least put the journal elsewhere - and preferably leave it
off.

 during tests).  Speed measurements performed with 8Kbyte
 buffer aka write(fd, buf, 8192*1024), units a Mb/sec.
 
              write   read
  sdX          44.9   45.5
  md            1.7*  31.3
  fs on md      0.7*  26.3
  fs on sdX    44.7   45.3
 
 Absolute winner is a filesystem on top of a raid5 array:

I'm afraid there are too many influences to say much from it overall.
The legitimate (i.e.  controlled) experiment there is between sdX and
md (over sdx), with o_direct both times.  For reference I personally
would like to see the speed without o_direct on those two.  And the
size/ram of the transfer - you want to run over ten times size of ram
when you run without o_direct.

Then I would like to see a similar comparison made over hdX instead of
sdX.

You can forget the fs-based tests for the moment, in other words. You
already have plenty there to explain in the sdX/md comparison. And to
explain it I would like to see sdX replaced with hdX.

A time-wise graph of the instantaneous speed to disk would probably
also be instructive, but I guess you can't get that!

I would guess that you are seeing the results of one read and write to
two disks happening in sequence and not happening with any great
urgency.  Are the writes sent to each of the mirror targets from raid
without going through VMS too?  I'd suspect that - surely the requests
are just queued as normal by raid5 via the block device system. I don't
think the o_direct taint persists on the requests - surely it only
exists on the file/inode used for access.

Suppose the mirrored requests are NOT done directly - then I guess we
are seeing an interaction with the VMS, where priority inversion causes
the high-priority requests to the md device to wait on the fulfilment of
low priority requests to the sdX devices below them.  The sdX devices'
requests may not ever get treated until the buffers in question age
sufficiently, or until the kernel finds time for them. When is that?
Well, the kernel won't let your process run .. hmm. I'd suspect the
raid code should be deliberately signalling the kernel to run the
request_fn of the mirror devices more often.

 Comments anyone? ;)

Random guesses above. Purely without data, of course.

Peter



RE: [OT] best tape backup system?

2005-02-22 Thread Guy
In my case, I am sure I had the termination correct.  I normally don't set
the SCSI card to auto termination, since I don't trust it!  But I did try
every valid permutation.  On non-Linux systems, I have never had problems
mixing disks with tapes or anything else.  And since I swapped terminators,
cables and cards, the only hardware that could be at fault would be the tape
or disk drives.  But, they are still working to this day, just on separate
SCSI cards.

If I recall, I once had a Unix system where someone created about 128 Gig of
sparse files.  I backed up the files to a DDS-3 tape (12-24 Gig).  The
backup took more than 24 hours.  That is better than 10 to 1.  :)  So, yes,
given the correct data, 2 to 1 can be done, but not in the real world,
IMO.

Guy

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gordon Henderson
Sent: Tuesday, February 22, 2005 12:40 PM
To: Guy
Cc: linux-raid@vger.kernel.org
Subject: RE: [OT] best tape backup system?

On Tue, 22 Feb 2005, Guy wrote:

 I have NOT been able to share a SCSI cable/card with disks and a tape
 drive. Tried for days.  I would get disk errors, or timeouts.  I
 corrected the problem by putting the tape drive on a dedicated SCSI
 bus/card.  I don't recall the details of the failures, since it has been
 well over a year now. But I do know I only had problems while using the
 tape drive and the disks at the same time.  Like, during a backup.  The
 disks were happy while the tape drive was idle.  I also swapped parts,
 it had no effect.  I assumed Linux does not like to share SCSI.

Linux is fine with multiple devices on SCSI - scanners, drives, tapes. I
haven't had problems with these combinations. The usual culprit is
termination: either not enough or too much! You need termination at both
ends of the bus - one end is usually (but not always) the SCSI card, and
all cards I've used in the past few years have had the ability to turn
termination on or off (you turn it off if the card is not at one end of
the bus). Some tape drives I've used also have the ability to turn
termination on or off - sometimes you set the drive number to one range
for termination, another range for not (eg. 0-7 no termination, 8-15
active termination). It's a real PITA to get right at times.

 I have not had problems with 2 or more tape drives on the same SCSI
 bus/card.  I think I had 3 at one time.

 Amanda...  The last time I checked, incremental backups require a
 different tape each time.  You will need a lot of tapes.  If a full
 backup fits on a single tape, just do a full backup each night.  I need
 many tapes to do a full backup, but a single tape can hold many daily
 incremental backups.  I deemed Amanda too wasteful for me.  I use
 home-made scripts and cpio, and do a full backup about once per month
 and nightly incremental backups to one tape.  Once the daily tape is
 full I do another full backup.  Works very well for me.  Restoring is a
 pain, but that is very rare for me - only once in 1+ years, so far.  My
 scripts seek the tape as required, so I can eject the tape if needed, as
 long as I put it back in time for the nightly backup.

Yup. One tape per backup run with Amanda. Way back when tape capacity
exceeded disk capacity, I used Amanda to back up many machines to one tape
every night - that was its original strength, the ability to back up many
machines over the LAN to one tape drive. These days, with disk capacities
generally exceeding tapes, it's not like that anymore (although I still
have some smaller servers backed up to a non-local tape drive for reasons
of economy).

 If anyone gets 1.3 to 1 compression, that is real good  IMO.  The
 last time I checked, I got about 1.1 to 1.  What a marketing scam

Compression is always a weirdism - I'm guessing your data is music, video
or pictures - stuff that's already compressed, which won't compress twice.
Typical office-type data would be stuff that generally compresses well -
text documents, spreadsheets, program source code (and compiled binaries)
etc. Amanda can compress data, and in some cases better than the tape
drives (using gzip, etc), but it's slow and makes restoring more
interesting...

I think Sony are claiming 2.3:1 with their new tapes - well, they'll get
that with text-files, but nothing near that with anything else!

Gordon


Re: *terrible* direct-write performance with raid5

2005-02-22 Thread Michael Tokarev
Peter T. Breuer wrote:
 Michael Tokarev [EMAIL PROTECTED] wrote:
  When debugging some other problem, I noticed that
  direct-io (O_DIRECT) write speed on a software raid5

 And normal write speed (over 10 times the size of ram)?

There's no such term as "normal write speed" in this context
in my dictionary, because there are just too many factors
influencing the speed of non-direct I/O operations (the I/O
scheduler aka elevator is the main factor, I guess).  More, when
going over the buffer cache, cache thrashing plays a significant
role for the whole system (eg, when just copying a large amount
of data with cp, the system becomes quite unresponsive due to
cache thrashing, while the stuff being copied should not be
cached in the first place, for this task anyway).  Also, I don't
think the linux elevator is optimized for this task (accessing
large amounts of data).

I came across this issue when debugging a very slow database
(oracle10 in this case) which tries to do direct I/O where
possible, because it knows better when/how to cache data.
If I turn on vfs/block caching here, the system becomes much
slower (under normal conditions anyway, not counting this
md slowness) (ok ok, I know it isn't a good idea to place
database files on raid5... or wasn't some time ago, when
raid5 checksumming was the bottleneck anyway... but that's
a different story).

More to the point seems to be the same direct-io but in
larger chunks - eg 1Mb or more instead of an 8Kb buffer.  And
this indeed makes a lot of difference; the numbers look
much nicer.
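
(For reference, here is a minimal sketch of this kind of test - an
illustration only, not the actual program used for the numbers above.
It rewrites a preallocated file or a raw device with O_DIRECT, taking
the block size on the command line, so the 8Kb-vs-1Mb difference can be
reproduced.  Note that O_DIRECT needs a suitably aligned buffer.)

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <file-or-device> <block-bytes> <total-mb>\n",
                argv[0]);
        return 1;
    }
    size_t bs = strtoul(argv[2], NULL, 0);
    long long total = (long long)strtoul(argv[3], NULL, 0) * 1024 * 1024;
    void *buf;

    /* O_DIRECT requires an aligned buffer (and aligned transfer sizes);
     * 4096-byte alignment covers the common cases. */
    if (posix_memalign(&buf, 4096, bs)) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0x55, bs);

    int fd = open(argv[1], O_WRONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct timeval t0, t1;
    long long done = 0;
    gettimeofday(&t0, NULL);
    while (done < total) {
        ssize_t n = write(fd, buf, bs);     /* O_DIRECT: bypasses the page cache */
        if (n < 0) {
            perror("write");
            return 1;
        }
        done += n;
    }
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.1f Mb/sec (%lld bytes in %.1f sec, %zu-byte blocks)\n",
           done / secs / 1e6, done, secs, bs);
    close(fd);
    free(buf);
    return 0;
}

(Compile with gcc -O2 and compare, eg, 8192-byte against 1048576-byte
blocks.  The file/device argument is a placeholder - point it at a
preallocated test file, never at live data.)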

  is terrible slow.  Here's a small table just to show
  the idea (not numbers by itself as they vary from system
  to system but how they relate to each other).  I measured
  plain single-drive performance (sdX below), performance
  of a raid5 array composed from 5 sdX drives, and ext3
  filesystem (the file on the filesystem was pre-created

 And ext2? You will be enormously hampered by using a journalling file
 system, especially with journal on the same system as the one you are
 testing! At least put the journal elsewhere - and preferably leave it
 off.

This whole issue has exactly nothing to do with the journal.
I don't mount the fs with the data=journal option, and the
file I'm writing to is preallocated first (I create the
file of a given size when measuring re-write speed).  In
this case, the data never touches the ext3 journal.

  during tests).  Speed measurements performed with 8Kbyte
  buffer aka write(fd, buf, 8192*1024), units a Mb/sec.
[]
  Absolute winner is a filesystem on top of a raid5 array:

 I'm afraid there are too many influences to say much from it overall.
 The legitimate (i.e.  controlled) experiment there is between sdX and
 md (over sdx), with o_direct both times.  For reference I personally
 would like to see the speed without o_direct on those two.  And the
 size/ram of the transfer - you want to run over ten times size of ram
 when you run without o_direct.

I/O speed without O_DIRECT is very close to 44 Mb/sec for sdX (it's the
speed of the drives, it seems), and md performs at about 80 Mb/sec.  Those
numbers are very close to the case with O_DIRECT and a large block size
(eg 1Mb).

There's much more to the block size, really.  I just used an 8Kb block
because I have a real problem with the performance of our database server
(we're trying oracle10 and it performs very badly, and now I don't know
whether the machine has always been like this (the numbers above) and we
just never noticed it with the different usage pattern of previous oracle
releases, or whether something else changed...)

 Then I would like to see a similar comparison made over hdX instead of
 sdX.

Sorry, no IDE drives here, and I don't see the point in trying them anyway.

 You can forget the fs-based tests for the moment, in other words. You
 already have plenty there to explain in the sdX/md comparison. And to
 explain it I would like to see sdX replaced with hdX.

 A time-wise graph of the instantaneous speed to disk would probably
 also be instructive, but I guess you can't get that!

 I would guess that you are seeing the results of one read and write to
 two disks happening in sequence and not happening with any great
 urgency.  Are the writes sent to each of the mirror targets from raid

Hmm, point.

 without going through VMS too?  I'd suspect that - surely the requests
 are just queued as normal by raid5 via the block device system. I don't
 think the o_direct taint persists on the requests - surely it only
 exists on the file/inode used for access.

Well, O_DIRECT performs very, very similarly to O_SYNC here (in both cases --
with and without a filesystem involved) in terms of speed.

I don't care much now whether it really performs direct I/O (from the
userspace buffer directly to the controller), esp. since it can't work
exactly this way with raid5 implemented in software (checksums must
be written too).  I just don't want to see unnecessary cache thrashing,
and I do want to know about I/O errors immediately.

 Suppose the mirrored requests are NOT done directly - then I guess we
 are seeing an interaction with the 

Re: [OT] best tape backup system?

2005-02-22 Thread Michael Tokarev
Guy wrote:
 I have NOT been able to share a SCSI cable/card with disks and a tape drive.
 Tried for days.  I would get disk errors, or timeouts.  I corrected the
 problem by putting the tape drive on a dedicated SCSI bus/card.  I don't
 recall the details of the failures, since it has been well over a year now.
 But I do know I only had problems while using the tape drive and the disks
 at the same time.  Like, during a backup.  The disks were happy while the
 tape drive was idle.  I also swapped parts, it had no effect.  I assumed
 Linux does not like to share SCSI.
There may be various reasons for this, and I think all of them are related
to the hardware only, esp. to the SCSI bus.  Eg, some modern disk drives
don't understand SE mode anymore, so they can't be plugged into a SCSI bus
with at least one SE device on it.  On the other hand, it was quite common
for esp. old tapes to NOT understand LVD mode.  Obviously you can't mix that
sort of devices on the same bus.  I guess the bus where your drives are is
in LVD mode, while the tape works in SE mode...

BTW, I've seen a quite modern tape drive with an old SCSI connector (I don't
even remember anymore what it's called -- the one which looks pretty much
like an IDE connector but a bit wider and with more (80?) pins, with a max
speed of 20Mb/sec aka 10MHz or so) inside the case, with an adaptor for the
current 68-pin SCSI connector standard.  Obviously it works in SE mode only
(that bus worked in that mode only).  I don't remember which vendor that
drive was...
/mjt


Re: [OT] best tape backup system?

2005-02-22 Thread Jon Lewis
On Tue, 22 Feb 2005, Louis-David Mitterrand wrote:

 I am considering getting a Sony SAIT 3 with 500G/1TB tapes, which seems
 like a nice solution for backuping a whole server on a single tape.

 Has anyone used that hardware and can comment on its performance,
 linux-compatibility or otherwise?

 Is there a better solution out there?

"Better" depends on what you want/need/can afford.  Last time I was tape
shopping, I thought this would be a good compromise on the need/can
afford:
Exabyte VXA-2 Packetloader 1x10

Native tape capacity is 800GB.  The only downside is no magazine... it
stores the tapes in an internal carousel accessed from the front, one
position at a time.  For a bit more $, they have magazine-based tape
library systems with VXA-2 drives.

Anyone used these?  I'd still like one.

--
 Jon Lewis   |  I route
 Senior Network Engineer |  therefore you are
 Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_


Re: [OT] best tape backup system?

2005-02-22 Thread Alvin Oga


On Tue, 22 Feb 2005, Jon Lewis wrote:

 On Tue, 22 Feb 2005, Louis-David Mitterrand wrote:
 
  I am considering getting a Sony SAIT 3 with 500G/1TB tapes, which seems
  like a nice solution for backuping a whole server on a single tape.
 
  Has anyone used that hardware and can comment on its performance,
  linux-compatibility or otherwise?
 
  Is there a better solution out there?
 
 Better depends on what you want/need/can afford.  Last time I was tape
 shopping, I thought this would be a good compromise on the need/can
 afford:
 Exabyte VXA-2 Packetloader 1x10
 
 Native tape capacity is 800gb.  The only downside is, no magazine...it
 stores the tapes in an internal carosel accessed from the front, one
 position at a time.  For a bit more $, they have magazine based tape
 library systems with VXA-2 drives.

for 1TB of storage ... i'd put the data on 4 disks ( raided ) 
and take the disks and put in nice bubble wrap and nice cushion 

:-)

don't pop the bubbles 

and yup.. i want the fast restore of 1TB of data too which to me
is more important than the $50 or $100 tape costs (plus the tape drive)
vs $600 set of 1TB disk

c ya
alvin

i keep wondering why people pay $150K for 1TB brandname tape subsystems ..
:-)



Re: [OT] best tape backup system?

2005-02-22 Thread Jon Lewis
On Tue, 22 Feb 2005, Alvin Oga wrote:

  Better depends on what you want/need/can afford.  Last time I was tape
  shopping, I thought this would be a good compromise on the need/can
  afford:
  Exabyte VXA-2 Packetloader 1x10
 
  Native tape capacity is 800gb.  The only downside is, no magazine...it
  stores the tapes in an internal carosel accessed from the front, one
  position at a time.  For a bit more $, they have magazine based tape
  library systems with VXA-2 drives.

 for 1TB of storage ... i'd put the data on 4 disks ( raided )
 and take the disks and put in nice bubble wrap and nice cushion

I should clarify, that's 80GB per tape...so 800GB native assumes you have
10 tapes in the unit.

 i keep wondering why people pay $150K for 1TB brandname tape subsystems ..

I wouldn't pay that much...but I think the common wisdom is that tape is
more durable/portable than disks.  Once upon a time, it was cheaper than
disks too...but that's no longer the case.  It's part of why my plan to
buy a bunch of Exabyte stuff got shot down and instead we bought P4's with
1TB SATA-RAID5 arrays to use as backup servers.

--
 Jon Lewis   |  I route
 Senior Network Engineer |  therefore you are
 Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_


Re: [OT] best tape backup system?

2005-02-22 Thread David Dougall
Not sure if it is important to many people, but tapes take a lot less
electricity than online disks.
--David Dougall


On Tue, 22 Feb 2005, Jon Lewis wrote:

 On Tue, 22 Feb 2005, Alvin Oga wrote:

   Better depends on what you want/need/can afford.  Last time I was tape
   shopping, I thought this would be a good compromise on the need/can
   afford:
   Exabyte VXA-2 Packetloader 1x10
  
   Native tape capacity is 800gb.  The only downside is, no magazine...it
   stores the tapes in an internal carosel accessed from the front, one
   position at a time.  For a bit more $, they have magazine based tape
   library systems with VXA-2 drives.
 
  for 1TB of storage ... i'd put the data on 4 disks ( raided )
  and take the disks and put in nice bubble wrap and nice cushion

 I should clarify, that's 80GB per tape...so 800GB native assumes you have
 10 tapes in the unit.

  i keep wondering why people pay $150K for 1TB brandname tape subsystems ..

 I wouldn't pay that much...but I think the common wisdom is that tape is
 more durable/portable than disks.  Once upon a time, it was cheaper than
 disks too...but that's no longer the case.  It's part of why my plan to
 buy a bunch of Exabyte stuff got shot down and instead we bought P4's with
 1TB SATA-RAID5 arrays to use as backup servers.

 --
  Jon Lewis   |  I route
  Senior Network Engineer |  therefore you are
  Atlantic Net|
 _ http://www.lewis.org/~jlewis/pgp for PGP public key_


Re: [OT] best tape backup system?

2005-02-22 Thread Alvin Oga

hi ya

On Tue, 22 Feb 2005, Jon Lewis wrote:

 I should clarify, that's 80GB per tape...so 800GB native assumes you have
 10 tapes in the unit.

yup...seen those puppies too ... too much headache for me
 
  i keep wondering why people pay $150K for 1TB brandname tape subsystems ..
 
 I wouldn't pay that much...

the odd part is people do :-) just because its name branded enterprise
blah blah

 but I think the common wisdom is that tape is
 more durable/portable than disks.  Once upon a time, it was cheaper than
 disks too...but that's no longer the case.

yup ... and disks have become more reliable 

tapes still have the same problem they always had ... the heads need
to be cleaned, and if the drive unit goes in for repair, you have no
alternative options
- i usually get those calls, from panicky folks, for which there
is no solution other than to buy 2 tape drives, or to buy disk-based
backups in addition to tapes if they like tapes

both tape and disk backups have their purposes and reasons

  It's part of why my plan to
 buy a bunch of Exabyte stuff got shot down and instead we bought P4's with
 1TB SATA-RAID5 arrays to use as backup servers.

i'd buy 2 disk-based backups, because one has to back up the backup server
:-)

- disks are cheap, compared to the loss of 1TB of corp data;
or 10TB of corp data, which is more along the lines of fun, when things
become interesting  :-)

--

david Not sure if it is important to many people, but tapes take a lot
david less electricity than online disks.

i doubt that it'd be a factor in buying tapes vs disks, but one
never knows :-0

they should be more worried about magnets and phones next to the tape,
or tapes under the old-fashioned CRTs

- you'd be surprised what one finds when one goes into server rooms,
  which i guess is the fun part, fixing their problems

c ya
alvin 



Re: Forcing a more random uuid (random seed bug)

2005-02-22 Thread H. Peter Anvin
Followup to:  [EMAIL PROTECTED]
By author:Niccolo Rigacci [EMAIL PROTECTED]
In newsgroup: linux.dev.raid

  I get /dev/md5, /dev/md6, /dev/md7
  and /dev/md8 all with the same UUID!
 
 It seems that there is a bug in mdadm: when generating the UUID for a 
 volume, the random() function is called, but the random sequence is never 
 initialized.
 
 The result is that every volume created with mdadm has an uuid of:
 6b8b4567:327b23c6:643c9869:66334873
 
 See also Debian bug 292784 at
 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=292784
 
 I fixed the problem adding the following patch to mdadm.c, but please bear 
 in mind that I'm totally unaware of mdadm code and quite naive in C 
 programming:
 

Please don't use (s)random at all, except as a possible fallback to
/dev/(u)random.

-hpa


Re: Forcing a more random uuid (random seed bug)

2005-02-22 Thread H. Peter Anvin
Followup to:  [EMAIL PROTECTED]
By author:[EMAIL PROTECTED]
In newsgroup: linux.dev.raid

 +if ((my_fd = open("/dev/random", O_RDONLY)) != -1) {
 
 Please use /dev/urandom for such applications.  /dev/random is the
 highest-quality generator, but will block if entropy isn't available.
 /dev/urandom provides the best available, immediately, which is what
 this application wants.

Not 100% clear; the best would be to make it configurable.

Either way you must not use read() in the way described.  Short reads
happen, even with /dev/urandom.
 
 Also, this will only produce 2^32 possible UUIDs, since that's the
 size of the seed.  Meaning that after you've generated 2^16 of them,
 the chances are excellent that they're not UU any more.
 
 You might just want to get all 128 (minus epsilon) bits from /dev/urandom
 directly.

You *do* want to get all bits from /dev/urandom directly.
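
(For illustration, a minimal sketch of that approach - not the actual
mdadm code nor the patch from the bug report - which reads all 16 UUID
bytes from /dev/urandom, looping because read() may return fewer bytes
than requested:)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Fill uuid[] with 16 random bytes from /dev/urandom.  Returns 0 on
 * success, -1 on failure (the caller could then fall back to a seeded
 * PRNG, or simply give up). */
static int get_random_uuid(unsigned char uuid[16])
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;

    size_t have = 0;
    while (have < 16) {
        ssize_t n = read(fd, uuid + have, 16 - have);
        if (n <= 0) {                   /* error or EOF: treat as failure */
            close(fd);
            return -1;
        }
        have += (size_t)n;              /* short reads are normal; keep going */
    }
    close(fd);
    return 0;
}

int main(void)
{
    unsigned char u[16];
    int i;

    if (get_random_uuid(u) != 0)
        return 1;
    for (i = 0; i < 16; i++)            /* print in the aaaaaaaa:bbbbbbbb:... style */
        printf("%02x%s", u[i], (i % 4 == 3 && i != 15) ? ":" : "");
    printf("\n");
    return 0;
}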

-hpa