Re: + md-raid10-fix-use-after-free-of-bio.patch added to -mm tree

2007-07-30 Thread Neil Brown
On Saturday July 28, [EMAIL PROTECTED] wrote: The patch titled md: raid10: fix use-after-free of bio has been added to the -mm tree. Its filename is md-raid10-fix-use-after-free-of-bio.patch *** Remember to use Documentation/SubmitChecklist when testing your code *** See

Re: raid1 resync data direction defined?

2007-07-30 Thread Luca Berra
On Fri, Jul 27, 2007 at 03:07:13PM +0200, Frank van Maarseveen wrote: I'm experimenting with a live migration of /dev/sda1 using mdadm -B and a network block device, as in: mdadm -B -ayes -n2 -l1 /dev/md1 /dev/sda1 \ --write-mostly -b /tmp/bitm$$ --write-behind /dev/nbd1
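
For anyone wanting to reproduce the setup Frank describes, a rough sketch of the whole sequence follows; the target host name, the nbd port and the exported partition are illustrative assumptions, not details taken from the thread:

    # On the machine receiving the copy: export the destination partition over nbd
    nbd-server 2000 /dev/sdb1

    # On the machine being migrated: attach the remote partition as /dev/nbd1
    nbd-client target-host 2000 /dev/nbd1

    # Build (no superblocks) a RAID1 over the live disk and the nbd device,
    # marking the remote leg write-mostly and giving the array a write-behind bitmap
    mdadm -B -a yes -n 2 -l 1 /dev/md1 /dev/sda1 \
          --write-mostly -b /tmp/bitm$$ --write-behind /dev/nbd1

    # The initial resync is what copies sda1 out to the remote disk
    cat /proc/mdstat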

Re: md: raid10: fix use-after-free of bio

2007-07-30 Thread Maik Hampel
On Saturday, 28.07.2007, at 23:55 -0700, Andrew Morton wrote: On Fri, 27 Jul 2007 16:46:23 +0200 Maik Hampel [EMAIL PROTECTED] wrote: In case of read errors raid10d tries to print a nice error message, unfortunately using data from an already put bio. Signed-off-by: Maik Hampel [EMAIL

Re: Is it possible to grow a RAID-10 array with mdadm?

2007-07-30 Thread Neil Brown
On Sunday July 29, [EMAIL PROTECTED] wrote: Hi everyone, Is it possible to add drives to an active RAID-10 array, using the grow switch with mdadm, just like it is possible with a RAID-5 array? Or perhaps there is another way? I have been looking for this information for a long time but

Re: Is it possible to grow a RAID-10 array with mdadm?

2007-07-30 Thread Tomas France
Thanks for the answer Neil! The man page for mdadm does not mention it because it is not supported. It doesn't actually even mention the possibility of creating a RAID-10 array (without creating RAID-0 on top of RAID-1 pairs), yet from the info I found, a lot of people have been using it for

Re: Is it possible to grow a RAID-10 array with mdadm?

2007-07-30 Thread Neil Brown
On Monday July 30, [EMAIL PROTECTED] wrote: Thanks for the answer Neil! The man page for mdadm does not mention it because it is not supported. It doesn't actually even mention the possibility of creating a RAID-10 array (without creating RAID-0 on top of RAID-1 pairs), yet from the
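
As a point of comparison, the sketch below shows the RAID-5 style reshape the poster is asking about, which mdadm does support, next to native RAID-10 creation, which it also supports; growing the RAID-10 device count is the part that is missing. Device names are placeholders:

    # Supported: add a disk to a RAID-5 array, then reshape onto it
    mdadm --add /dev/md0 /dev/sde1
    mdadm --grow /dev/md0 --raid-devices=5

    # Also supported: create a native RAID-10 array directly,
    # no RAID-0 over RAID-1 pairs required
    mdadm --create /dev/md1 --level=10 --layout=n2 --raid-devices=4 /dev/sd[bcde]1

    # Not supported (as Neil says): mdadm --grow --raid-devices=... on the RAID-10 array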

bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
CONFIG: Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. Kernel was 2.6.21 or 2.6.22, did these awhile ago. Hardware was SATA with PCI-e only, nothing on the PCI bus. ZFS was userspace+fuse of course. Reiser was V3. EXT4 was created using the recommended options on its
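
The bonnie++ command line itself isn't quoted above; a run along these lines would match the described setup, with the mount point, test size and user being assumptions on my part:

    # Default mkfs parameters, as in the test description (repeat per filesystem)
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt/test

    # Test size should be well above RAM so the page cache can't hide the disks
    bonnie++ -d /mnt/test -s 16g -n 16 -u nobody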

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Dan Williams
[trimmed all but linux-raid from the cc] On 7/30/07, Justin Piszcz [EMAIL PROTECTED] wrote: CONFIG: Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. Kernel was 2.6.21 or 2.6.22, did these awhile ago. Can you give 2.6.22.1-iop1 a try to see what effect it has on

Re: Homehost suddenly changed on some components

2007-07-30 Thread Max Amanshauser
For the record: After reading in the archives about similar problems, which were probably caused by something else but still close enough, I recreated the array with the exact same parameters from the superblock and one missing disk. mdadm -C /dev/md0 -l 5 -n 10 -c 64 -p ls /dev/sdb1
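
Spelled out, re-creating with the same parameters and one slot left open looks roughly like this; only /dev/sdb1 and the option values appear in the message, so the rest of the device list and the position of 'missing' are hypothetical:

    # -l 5: RAID-5, -n 10: ten members, -c 64: 64K chunk, -p ls: left-symmetric layout
    # 'missing' keeps the array degraded so no initial parity resync runs
    mdadm -C /dev/md0 -l 5 -n 10 -c 64 -p ls \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 \
          /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 missing

    # Verify the data is readable before adding the last disk back
    mdadm --detail /dev/md0
    fsck -n /dev/md0    # if a filesystem sits directly on the array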

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Al Boldi
Justin Piszcz wrote: CONFIG: Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. Kernel was 2.6.21 or 2.6.22, did these awhile ago. Hardware was SATA with PCI-e only, nothing on the PCI bus. ZFS was userspace+fuse of course. Wow! Userspace and still that efficient.

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Miklos Szeredi
Extrapolating these %cpu numbers makes ZFS the fastest. Are you sure these numbers are correct? Note that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is not taken into account. So the numbers are not all that good, but
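
One way to make the hidden cost visible is to account for the filesystem daemon separately from the benchmark; a rough sketch, assuming the ZFS FUSE daemon shows up as a process named zfs-fuse:

    # CPU time charged to the benchmark itself (roughly what bonnie++'s %cpu reflects)
    /usr/bin/time -v bonnie++ -d /mnt/zfs -s 16g -u nobody

    # Cumulative CPU time of the FUSE daemon, which the benchmark never sees;
    # sample before and after the run and take the difference
    ps -o pid,etime,cputime,comm -C zfs-fuse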

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Dave Kleikamp
On Mon, 2007-07-30 at 10:29 -0400, Justin Piszcz wrote: Overall JFS seems the fastest, but reviewing the mailing list for JFS it seems like there are a lot of problems, especially for people who have used JFS for a year: their speed drops to 5 MiB/s over time, and the defragfs tool has been removed(?)

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
On Mon, 30 Jul 2007, Miklos Szeredi wrote: Extrapolating these %cpu numbers makes ZFS the fastest. Are you sure these numbers are correct? Note that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is not taken into account.

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
On Mon, 30 Jul 2007, Dan Williams wrote: [trimmed all but linux-raid from the cc] On 7/30/07, Justin Piszcz [EMAIL PROTECTED] wrote: CONFIG: Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. Kernel was 2.6.21 or 2.6.22, did these awhile ago. Can you give

[PATCHSET/RFC] Refactor block layer to improve support for stacked devices.

2007-07-30 Thread Neil Brown
Hi, I have just sent a patch-set to linux-kernel that touches quite a number of block device drivers, with particular relevance to md and dm. Rather than fill lots of people's mailboxes multiple times (35 patches in the set), I only sent the full set to linux-kernel, and am just sending this

[patch 07/26] md: Fix two raid10 bugs.

2007-07-30 Thread Greg KH
-stable review patch. If anyone has any objections, please let us know. -- 1/ When resyncing a degraded raid10 which has more than 2 copies of each block, garbage can get synced on top of good data. 2/ We round the wrong way in part of the device size calculation, which

[patch 08/26] md: Fix bug in error handling during raid1 repair.

2007-07-30 Thread Greg KH
-stable review patch. If anyone has any objections, please let us know. -- From: Mike Accetta [EMAIL PROTECTED] If raid1/repair (which reads all blocks and fixes any differences it finds) hits a read error, it doesn't reset the bio for writing before writing correct data back,