On Saturday July 28, [EMAIL PROTECTED] wrote:
The patch titled
md: raid10: fix use-after-free of bio
has been added to the -mm tree. Its filename is
md-raid10-fix-use-after-free-of-bio.patch
*** Remember to use Documentation/SubmitChecklist when testing your code ***
See
On Fri, Jul 27, 2007 at 03:07:13PM +0200, Frank van Maarseveen wrote:
I'm experimenting with a live migration of /dev/sda1 using mdadm -B
and a network block device, as in:
mdadm -B -ayes -n2 -l1 /dev/md1 /dev/sda1 \
--write-mostly -b /tmp/bitm$$ --write-behind /dev/nbd1
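As a rough sketch of the nbd plumbing such a migration assumes (the host name, port and exported device below are placeholders, not taken from the mail):

  # on the target machine: export the destination device over the network
  nbd-server 2000 /dev/sdX1
  # on the source machine: attach that export locally, then run the
  # mdadm -B command above against /dev/nbd1
  nbd-client target-host 2000 /dev/nbd1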
On Saturday, 28.07.2007, 23:55 -0700, Andrew Morton wrote:
On Fri, 27 Jul 2007 16:46:23 +0200 Maik Hampel [EMAIL PROTECTED] wrote:
In case of read errors raid10d tries to print a nice error message,
unfortunately using data from an already put bio.
Signed-off-by: Maik Hampel [EMAIL PROTECTED]
On Sunday July 29, [EMAIL PROTECTED] wrote:
Hi everyone,
Is it possible to add drives to an active RAID-10 array, using the grow
switch with mdadm, just like it is possible with a RAID-5 array? Or perhaps
there is another way?
I have been looking for this information for a long time but
Thanks for the answer Neil!
The man page for mdadm does not mention it because it is not supported.
It doesn't actually even mention the possibility to create a RAID-10 array
(without creating RAID-0 on top of RAID-1 pairs), yet from the info I found,
a lot of people have been using it for
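For reference, creating a native md RAID-10 array (rather than RAID-0 on top of RAID-1 pairs) looks roughly like this; the device names and the near-copies layout are illustrative, not from the thread:

  mdadm -C /dev/md0 -l 10 -n 4 -p n2 /dev/sd[b-e]1
  # -l 10 selects the md raid10 personality, -p n2 keeps two "near" copies

Reshaping such an array with --grow was not supported by md/mdadm at the time, which is what the answer above refers to.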
On Monday July 30, [EMAIL PROTECTED] wrote:
Thanks for the answer Neil!
The man page for mdadm does not mention it because it is not supported.
It doesn't actually even mention the possibility to create a RAID-10 array
(without creating RAID-0 on top of RAID-1 pairs), yet from the
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22, did these awhile ago.
Hardware was SATA with PCI-e only, nothing on the PCI bus.
ZFS was userspace+fuse of course.
Reiser was V3.
EXT4 was created using the recommended options on its
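As a rough reconstruction of that setup in commands (the device names and the choice of xfs as the example are assumptions; the post only states six 400GB drives and default mkfs parameters):

  mdadm -C /dev/md0 -l 5 -n 6 /dev/sd[b-g]1   # six 400GB drives in RAID-5
  mkfs.xfs /dev/md0                            # default mkfs options, per the post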
[trimmed all but linux-raid from the cc]
On 7/30/07, Justin Piszcz [EMAIL PROTECTED] wrote:
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22, did these awhile ago.
Can you give 2.6.22.1-iop1 a try to see what effect it has on
For the record:
After reading in the archives about similar problems, which were
probably caused by something else but still close enough, I recreated
the array with the exact same parameters from the superblock and one
missing disk.
mdadm -C /dev/md0 -l 5 -n 10 -c 64 -p ls /dev/sdb1
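Written out in full, that kind of recreate-with-a-missing-member command has roughly the shape below; only the -l/-n/-c/-p values come from the post, and the device list is illustrative since the original line is truncated:

  mdadm -C /dev/md0 -l 5 -n 10 -c 64 -p ls \
      /dev/sdb1 /dev/sdc1 ... /dev/sdj1 missing
  # "missing" stands in for the absent member, so the array comes up degraded
  # and no resync can run over data that might still be good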
Justin Piszcz wrote:
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22, did these awhile ago.
Hardware was SATA with PCI-e only, nothing on the PCI bus.
ZFS was userspace+fuse of course.
Wow! Userspace and still that efficient.
Extrapolating these %cpu numbers makes ZFS the fastest.
Are you sure these numbers are correct?
Note that %cpu numbers for fuse filesystems are inherently skewed,
because the CPU usage of the filesystem process itself is not taken
into account.
So the numbers are not all that good, but
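One rough way to account for that while benchmarking (the zfs-fuse process name and HZ=100 are assumptions):

  PID=$(pidof zfs-fuse)                     # the fuse daemon serving the filesystem
  awk '{print $14 + $15}' /proc/$PID/stat   # utime+stime in jiffies, before the run
  # ... run the benchmark ...
  awk '{print $14 + $15}' /proc/$PID/stat   # and again afterwards
  # the difference divided by HZ (usually 100) is the daemon's CPU time in seconds,
  # which the benchmark's own %cpu figure leaves out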
On Mon, 2007-07-30 at 10:29 -0400, Justin Piszcz wrote:
Overall JFS seems the fastest, but reviewing the JFS mailing list it seems like
there are a lot of problems: for people who have used JFS for a year, speed
drops to around 5 MiB/s over time, and the defragfs tool has apparently been
removed(?)
On Mon, 30 Jul 2007, Miklos Szeredi wrote:
Extrapolating these %cpu numbers makes ZFS the fastest.
Are you sure these numbers are correct?
Note that %cpu numbers for fuse filesystems are inherently skewed,
because the CPU usage of the filesystem process itself is not taken
into account.
On Mon, 30 Jul 2007, Dan Williams wrote:
[trimmed all but linux-raid from the cc]
On 7/30/07, Justin Piszcz [EMAIL PROTECTED] wrote:
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22, did these awhile ago.
Can you give
Hi,
I have just sent a patch-set to linux-kernel that touches quite a
number of block device drivers, with particular relevance to md and
dm.
Rather than fill lots of people's mailboxes multiple times (35 patches
in the set), I only sent the full set to linux-kernel, and am just
sending this
-stable review patch. If anyone has any objections, please let us know.
--
1/ When resyncing a degraded raid10 which has more than 2 copies of each block,
garbage can get synced on top of good data.
2/ We round the wrong way in part of the device size calculation, which
-stable review patch. If anyone has any objections, please let us know.
--
From: Mike Accetta [EMAIL PROTECTED]
If raid1/repair (which reads all blocks and fixes any differences
it finds) hits a read error, it doesn't reset the bio for writing
before writing correct data back,