Re: suns raid-z / zfs

2008-02-21 Thread Mario 'BitKoenig' Holbe
Keld Jørn Simonsen [EMAIL PROTECTED] wrote: On Mon, Feb 18, 2008 at 09:51:15PM +1100, Neil Brown wrote: Recovery after a failed drive would not be an easy operation, and I cannot imagine it being even close to the raw speed of the device. I thought this was a problem with most raid types,

Re: direction criterium for synchronisation raid1

2007-10-16 Thread Mario 'BitKoenig' Holbe
Christina Braun [EMAIL PROTECTED] wrote: which is the criterion in raid1 for the direction of synchronisation? In The event counter :) How can I tell the system which mirrored partition is now the data-source without re-creating the raid1 or zeroing the superblock? Is the destination in
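The usual way to pick the sync source without re-creating the array is to assemble it degraded from the trusted half only, then add the stale half back. A minimal sketch; the device names /dev/md0, /dev/sda1 and /dev/sdb1 are placeholders, not taken from the original mail:

```shell
# Assemble the mirror degraded, using only the half whose data we trust.
mdadm --assemble --run /dev/md0 /dev/sda1
# Re-adding the other half makes it the sync *target*:
# its contents are overwritten from /dev/sda1.
mdadm /dev/md0 --add /dev/sdb1
```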

mdadm: use Preferred Minor for non-standard named arrays?

2007-09-03 Thread Mario 'BitKoenig' Holbe
Hello, using non-'standard' names for raid-arrays like /dev/md/root always leads to (more or less) dynamically assigned device minors 127+. Wouldn't it be possible to use the Preferred Minor hint from the array's superblock or the super-minor option from the mdadm.conf? This would lead to more

mdadm: different component-count in superblocks prevents assembly

2007-09-03 Thread Mario 'BitKoenig' Holbe
Hello Again :) Having a component device with slightly different superblock characteristics in the system prevents mdadm from assembling arrays. For example: mdadm --fail /dev/mdx /dev/xdx mdadm -G -n y-1 /dev/mdx would lead to a non-assemble-able /dev/mdx as long as /dev/xdx
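A hedged sketch of one way out of that state, assuming the stale member is no longer meant to be part of the shrunk array (device names follow the example in the mail):

```shell
# /dev/xdx still carries a superblock recorded before the "-G -n y-1"
# shrink, so its component count disagrees with the remaining members.
# Wiping that stale superblock lets the array assemble again.
mdadm --zero-superblock /dev/xdx
mdadm --assemble /dev/mdx
```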

Re: Mustn't be RAID 1 and 0 read-performance be similar?

2007-08-13 Thread Mario 'BitKoenig' Holbe
Rustedt, Florian [EMAIL PROTECTED] wrote: If the speed on RAID 0 is based on reading out in parallel, then it must be the same on RAID 1, mustn't it? On RAID 1, it is possible, to read two blocks in parallel to speed up, too. It's not that simple. On RAID0 you can read one single stream of

Re: PATA/SATA Disk Reliability paper

2007-02-26 Thread Mario 'BitKoenig' Holbe
Al Boldi [EMAIL PROTECTED] wrote: Interesting link. They seem to point out that smart not necessarily warns of pending failure. This is probably worse than not having smart at all, as it gives you the illusion of safety. If SMART gives you the illusion of safety, you didn't understand

Re: Propose of enhancement of raid1 driver

2006-10-30 Thread Mario 'BitKoenig' Holbe
Al Boldi [EMAIL PROTECTED] wrote: Don't underestimate the effects mere layout can have on multi-disk array performance, despite it being highly hw dependent. I can't see the difference between equal mirrors and somehow interleaved layout on RAID1. Since you have to seek anyways, there should

Re: Propose of enhancement of raid1 driver

2006-10-23 Thread Mario 'BitKoenig' Holbe
Neil Brown [EMAIL PROTECTED] wrote: Skipping over blocks within a track is no faster than reading blocks in the track, so you would need to make sure that your chunk size is Not just no faster, but probably even slower. For seek()+read() from userspace this gets smoothed by the kernel's implicit

Re: why partition arrays?

2006-10-23 Thread Mario 'BitKoenig' Holbe
martin f krafft [EMAIL PROTECTED] wrote: Why would anyone want to create a partitionable array and put partitions in it, rather than creating separate arrays for each filesystem? Intuitively, this makes way more sense as then the partitions are independent of each other; one array can fail and

Re: Superblock checksum problems

2006-09-06 Thread Mario 'BitKoenig' Holbe
Henrik Holst [EMAIL PROTECTED] wrote: It would be good to have an analog to memtest but for PATA and SATA ports. Anyone seen something like that out there on the web? Are you looking for `badblocks'? There is also a `memtest.sh' from Doug Ledford. Its main intention is, as the name suggests,
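For reference, a minimal sketch of the kind of surface test badblocks offers; the device name is a placeholder, and note that -n is the non-destructive read-write mode (-w would destroy data):

```shell
# Non-destructive read-write test: reads each block, writes test patterns,
# verifies them, then restores the original contents. This exercises the
# whole PATA/SATA path end-to-end, not just the medium.
badblocks -nsv /dev/sdX
```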

Re: checking md device parity (forced resync) - is it necessary?

2006-09-06 Thread Mario 'BitKoenig' Holbe
Tomasz Chmielewski [EMAIL PROTECTED] wrote:
# by default, run at 01:06 on the first Sunday of each month.
6 1 1-7 * 7 root [ -x /usr/share/mdadm/checkarray ]
You have a quite old version of mdadm. This issue has been fixed in mdadm (2.5.2-8) unstable from 27 Jul 2006. Current version is mdadm
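checkarray itself is a thin wrapper around the md sysfs interface; assuming a reasonably recent 2.6 kernel, the same check can be triggered by hand (md0 is a placeholder):

```shell
# Start a parity/mirror consistency check in the background ...
echo check > /sys/block/md0/md/sync_action
# ... and, once it has finished, see how many sectors disagreed.
cat /sys/block/md0/md/mismatch_cnt
```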

Re: Care and feeding of RAID?

2006-09-06 Thread Mario 'BitKoenig' Holbe
Benjamin Schieder [EMAIL PROTECTED] wrote: I have made the experience that long tests slow down the raid to a point where the system becomes unusable. Even though we're quite off-topic here with that since it's more SMART-related... this is at least unusual. I'm also running regular SMART

Re: Care and feeding of RAID?

2006-09-06 Thread Mario 'BitKoenig' Holbe
Gordon Henderson [EMAIL PROTECTED] wrote: I don't have any mdadm.conf files ... What am I missing? (I've always been under the impression that after needing the /etc/raidtab file with the old raidtools, you didn't need a config file as such under mdadm... However, You don't necessarily need
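If you do want a config file, mdadm can generate the ARRAY lines from the superblocks it finds; a minimal sketch:

```shell
# Scan all components and append matching ARRAY lines to the config.
mdadm --examine --scan >> /etc/mdadm.conf
```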

Re: Resize on dirty array?

2006-08-28 Thread Mario 'BitKoenig' Holbe
Neil Brown [EMAIL PROTECTED] wrote: I would be a lot happier about it if the block layer told me whether the fail was a Media error or some other sort of error. This wouldn't help you either. I've seen drives (mainly Samsung) that locked up the whole IDE bus after some simple (subsequent)

Re: remark and RFC

2006-08-17 Thread Mario 'BitKoenig' Holbe
Peter T. Breuer [EMAIL PROTECTED] wrote: 1) I would like raid request retries to be done with exponential delays, so that we get a chance to overcome network brownouts. Hmmm, IMHO this should be implemented in nbd/enbd, where it belongs, and errors should be masked within nbd/enbd then.

Re: Problem with 3xRAID1 to RAID 0

2006-07-12 Thread Mario 'BitKoenig' Holbe
Jim Klimov [EMAIL PROTECTED] wrote: Are there any actual bonuses to making RAIDs on whole raw disks? You win 63 sectors (i.e. 32k) usually. regards Mario -- *axiom* welcher sensorische input bewirkte die output-aktion, den irc-chatter mit dem nick dus des irc-servers
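The "32k" figure comes from the classic DOS partition layout, which starts the first partition at sector 63; a quick check of the arithmetic:

```shell
# 63 sectors of 512 bytes each sit before the first partition on a
# classic DOS-labelled disk; using the raw disk reclaims that gap.
offset_bytes=$((63 * 512))
echo "${offset_bytes} bytes"   # 32256 bytes, i.e. ~31.5 KiB ("32k" rounded)
```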

Re: [PATCH] enable auto=yes by default when using udev

2006-07-04 Thread Mario 'BitKoenig' Holbe
Michael Tokarev [EMAIL PROTECTED] wrote: Why to test for udev at all? If the device does not exist, regardless if udev is running or not, it might be a good idea to try to create it. Because IT IS NEEDED, period. Whether the operation fails or not, and Perhaps it was just a typo and you

Re: [PATCH] enable auto=yes by default when using udev

2006-07-03 Thread Mario 'BitKoenig' Holbe
Jason Lunz [EMAIL PROTECTED] wrote: there's a udevd you can check for. I don't know whether that's a better test or not. It's not. For example, the Debian package does also start this udevd on package upgrades, even when a 2.4 kernel is running which definitely has no udev support :) regards

Re: NVRAM support

2006-02-16 Thread Mario 'BitKoenig' Holbe
Neil Brown [EMAIL PROTECTED] wrote: On Wednesday February 15, [EMAIL PROTECTED] wrote: E.g. Data and parity write requests are issued in parallel but only one finishes. This will lead to inconsistent data. It will be undetected and can not be If the array is degraded, that the inconsistency

Re: RAID 16?

2006-02-02 Thread Mario 'BitKoenig' Holbe
Matthias Urlichs [EMAIL PROTECTED] wrote: Hi, David Liontooth wrote: * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card * the OS will Hmm. You'd have eight disks, five(!) may fail at any time, giving you Four, isn't it? RAID6 covers the failure of 2 of the underlying RAID1s, which,

Re: RAID 1 vs RAID 0

2006-01-18 Thread Mario 'BitKoenig' Holbe
Max Waterman [EMAIL PROTECTED] wrote: Still, it seems like it should be a solvable problem...if you order the data differently on each disk; for example, in the two disk case, putting odd and even numbered 'stripes' on different platters [or sides Well, unfortunately for today's hard disks

Re: paralellism of device use in md

2006-01-17 Thread Mario 'BitKoenig' Holbe
Tim Moore [EMAIL PROTECTED] wrote: Andy Smith wrote: Are reads from a 2 device RAID-1 twice as fast as from a single
md14 : active raid0 sdb13[1] sda13[0]
md13 : active raid1 sdb12[1] sda12[0]
/dev/md14: Timing buffered disk reads: 272 MB in 3.01 seconds = 90.37 MB/sec
/dev/md13:

Re: Journal-guided Resynchronization for Software RAID

2005-12-06 Thread Mario 'BitKoenig' Holbe
Jure Pečar [EMAIL PROTECTED] wrote: How to use gfx memory as nvram (no batteries tho :) http://hedera.linuxnews.pl/_news/2002/09/03/_long/1445.html Where's the 'n' in nvram there? You and the author mean vram, i.e. Video-RAM, which is quite different from nvram, i.e. Non-Volatile RAM. Anyways

Re: Adding Reed-Solomon Personality to MD, need help/advice

2005-11-21 Thread Mario 'BitKoenig' Holbe
Hello, Nathan Lewis [EMAIL PROTECTED] wrote: As part of my Master's thesis, I am working on adding a Reed-Solomon personality to the existing linux RAID structure and I would like some Is there any progress in implementing a generic Reed-Solomon personality in MD since this mail from 31 Jan

Re: Flappy hotswap disks?

2005-10-25 Thread Mario 'BitKoenig' Holbe
Molle Bestefich [EMAIL PROTECTED] wrote: Eg. will it increase the event counters on the other disks /even/ when no reboot or stop-start has been performed, so that when the flappy Event counters are increased immediately when an event occurs. A device failure is an event as well as start and
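The counters can be inspected per component; a hedged sketch, assuming a 0.90/1.x superblock and placeholder device names (the exact output layout varies between mdadm versions):

```shell
# Compare the Events field across members: the member(s) with the lowest
# count are the stale ones that will be resynced on assembly.
mdadm --examine /dev/sda1 /dev/sdb1 | grep -E '^(/dev| *Events)'
```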

Re: MD or MDADM bug?

2005-09-05 Thread Mario 'BitKoenig' Holbe
[EMAIL PROTECTED] wrote: On Sun, 4 Sep 2005, Neil Brown wrote: I've never liked kernel autodetect, and while I won't break it, I would like to migrate people away from it. How will we use MD for root filesystem? Will we *always* need a tricky initrd with mdadm added? If

Re: [PATCH] proactive raid5 disk replacement for 2.6.11

2005-08-15 Thread Mario 'BitKoenig' Holbe
Hi, Pallai Roland [EMAIL PROTECTED] wrote: this is a feature patch that implements 'proactive raid5 disk replacement' (http://www.arctic.org/~dean/raid-wishlist.html), that could help a lot on large raid5 arrays built from cheap sata ... linux software raid is very fragile by default, the