raw I/O support for Fedora Core 4

2006-03-23 Thread Yogesh Pahilwan
Hi All, I want to do raw I/O on MD RAID and LVM for Fedora Core 4 (kernel 2.6.15.6). After some googling I learned that the raw command performs raw I/O by binding an MD device or LVM volume to a raw device, e.g. # raw /dev/raw/raw1 /dev/md0. But when I looked into this further I came to know that
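For illustration, a minimal sketch of the binding the poster describes, assuming the array is /dev/md0; device and raw node names are purely illustrative:

  raw /dev/raw/raw1 /dev/md0    # bind the MD array to raw device 1
  raw -qa                       # query all current raw bindings
  dd if=/dev/raw/raw1 of=/dev/null bs=512 count=8    # raw I/O must be sector (512-byte) aligned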

Re: raw I/O support for Fedora Core 4

2006-03-23 Thread Arjan van de Ven
On Thu, 2006-03-23 at 14:54 +0530, Yogesh Pahilwan wrote: Hi All, I want to do raw I/O on MD RAID and LVM for Fedora Core 4 (kernel 2.6.15.6). After some googling I learned that the raw command performs raw I/O by binding an MD device or LVM volume to a raw device, e.g. # raw

mdadm shutdown question

2006-03-23 Thread Frido Ferdinand
Hi, this question is mainly about the following error during shutdown on my Gentoo system: mdadm: fail to stop array /dev/md1: Device or resource busy (full raid1, root is on /dev/md1). After searching Bugzilla I found http://bugs.gentoo.org/show_bug.cgi?id=119380. However there's no
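For illustration, the failure mode can be reproduced by hand (device name assumed): an array that still backs a mounted root filesystem cannot be stopped, so the command below keeps failing until nothing is using /dev/md1:

  cat /proc/mdstat         # confirm md1 is still active
  mdadm --stop /dev/md1    # fails with "Device or resource busy" while / is mounted on it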

Re: Linux MD RAID5/6 bitmap patches

2006-03-23 Thread Paul Clements
Yogesh Pahilwan wrote: Where can we get documentation (design/implementation) about RAID6 and the bitmap for Linux kernel 2.6? For the bitmap code, I'm afraid you'll just have to read the code. Also, look back at the archives of this list. There are several discussions about the bitmap patches,

Re: raw I/O support for Fedora Core 4

2006-03-23 Thread Phillip Susi
The raw device driver is obsolete because it has been superseded by the O_DIRECT open flag. If you want dd to perform unbuffered I/O, pass the iflag=direct option for input or the oflag=direct option for output, and it will use O_DIRECT to bypass the buffer cache. This of course assumes
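As a rough sketch (the paths are assumptions, and bs should be a multiple of the device sector size, since O_DIRECT requires aligned I/O):

  dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct           # unbuffered read from the array
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=1024 oflag=direct  # unbuffered write to a test file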

Re: Does grub support sw raid1?

2006-03-23 Thread Herta Van den Eynde
Thanks to all who confirmed that this should work, and gave pointers to more reading material. Further investigation proved that the problem is caused by the Smart Array Controllers that HP uses. As these Smart controllers don't allow for JBOD configs, I had configured each disk as RAID 0

Re: Bug in md grow code

2006-03-23 Thread Neil Brown
On Thursday March 23, [EMAIL PROTECTED] wrote: The code that checks all the devices in an array and tries to fit a grow request to the largest possible value is broken and will only do this successfully if the first element of the array isn't >= all other elements in the array. Not

Re: mdadm shutdown question

2006-03-23 Thread Neil Brown
On Thursday March 23, [EMAIL PROTECTED] wrote: Hi, this question is mainly about the following error during shutdown on my Gentoo system: mdadm: fail to stop array /dev/md1: Device or resource busy (full raid1, root is on /dev/md1). After searching Bugzilla I found:

Re: raid5 that used parity for reads only when degraded

2006-03-23 Thread Alex Izvorski
Neil - Thank you very much for the response. In my tests with identically configured raid0 and raid5 arrays, raid5 initially had much lower throughput during reads. I had assumed that was because raid5 did parity-checking all the time. It turns out that raid5 throughput can get fairly close
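A rough sketch of such a read comparison, assuming /dev/md0 is the raid0 array and /dev/md1 the raid5 array, with O_DIRECT used so the page cache does not skew the numbers:

  dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct
  dd if=/dev/md1 of=/dev/null bs=1M count=2048 iflag=direct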

Re: raid5 that used parity for reads only when degraded

2006-03-23 Thread Neil Brown
On Thursday March 23, [EMAIL PROTECTED] wrote: Neil - Thank you very much for the response. In my tests with identically configured raid0 and raid5 arrays, raid5 initially had much lower throughput during reads. I had assumed that was because raid5 did parity-checking all the time. It

Re: raid5 performance question

2006-03-23 Thread Neil Brown
On Wednesday March 22, [EMAIL PROTECTED] wrote: Neil Brown wrote: On Tuesday March 7, [EMAIL PROTECTED] wrote: Neil, what is the stripe_cache exactly? In order to ensure correctness of data, all I/O operations on a raid5 pass through the 'stripe cache'. This is a cache of
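On kernels that expose the md sysfs attributes, the size of this cache can be inspected and tuned per array; a sketch, assuming the array is /dev/md0 and the kernel provides the stripe_cache_size attribute:

  cat /sys/block/md0/md/stripe_cache_size          # current number of cache entries
  echo 4096 > /sys/block/md0/md/stripe_cache_size  # larger cache, at the cost of more memory per member device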

[PATCH 000 of 3] md: Introduction - 3 assorted md fixes

2006-03-23 Thread NeilBrown
Three little fixes. The last is possibly the most interesting as it highlights how wrong I managed to get the BIO_BARRIER stuff in raid1, which I really thought I had tested. I'm happy for this and the previous collection of raid5-growth patches to be merged into 2.6.17-rc1. I had hoped the

[PATCH 002 of 3] md: Fix md grow/size code to correctly find the maximum available space.

2006-03-23 Thread NeilBrown
An md array can be asked to change the amount of each device that it is using, and in particular can be asked to use the maximum available space. This currently only works if the first device is not larger than the rest: as 'size' gets changed along the way, 'fit' becomes wrong. So check if a 'fit' is
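For context, the user-visible operation is growing the per-device size to the maximum the members allow, e.g. (array name assumed):

  mdadm --grow /dev/md0 --size=max    # use all available space on each member device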

[PATCH 001 of 3] md: Remove bi_end_io call out from under a spinlock.

2006-03-23 Thread NeilBrown
raid5 overloads bi_phys_segments to count the number of blocks that the request was broken into so that it knows when the bio is completely handled. Accessing this must always be done under a spinlock. In one case we also call bi_end_io under that spinlock, which probably isn't ideal as