Hi All,
I want to do raw I/O on MD RAID and LVM on Fedora Core 4 (kernel 2.6.15.6).
After some googling I learned that the raw command does raw I/O by binding
the MD device or LVM volume to a raw device, e.g.
# raw /dev/raw/raw1 /dev/md0
But when I searched further I came to know that
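As a rough sketch of that approach (assuming the raw(8) utility from util-linux is available), the binding and a check of the result would look something like:

# raw /dev/raw/raw1 /dev/md0
# raw -qa

The second command queries all current raw device bindings, so you can confirm that raw1 is bound to the underlying MD device.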
On Thu, 2006-03-23 at 14:54 +0530, Yogesh Pahilwan wrote:
Hi All,
I want to do raw I/O on MD RAID and LVM on Fedora Core 4 (kernel 2.6.15.6).
After some googling I learned that the raw command does raw I/O by binding
the MD device or LVM volume to a raw device, e.g.
# raw
Hi,
This question is mainly about the following error during shutdown on my
Gentoo system:
mdadm: fail to stop array /dev/md1: Device or resource busy
(full raid1, root is on /dev/md1)
After searching Bugzilla I found:
http://bugs.gentoo.org/show_bug.cgi?id=119380
However there's no
Yogesh Pahilwan wrote:
Where can we get documentation (design/implementation) about RAID6 and
bitmap support for Linux kernel 2.6?
For the bitmap code, I'm afraid you'll just have to read the code. Also,
look back at the archives of this list. There are several discussions
about the bitmap patches,
The raw device driver is obsolete because it has been superseded by the
O_DIRECT open flag. If you want to have dd perform unbuffered IO then
pass the iflag=direct option for input, or oflag=direct option for
output, and it will use O_DIRECT to bypass the buffer cache.
This of course assumes
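As a rough illustration of that suggestion (device and file names are just placeholders, and writing directly to an array will of course overwrite its contents):

# dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct
# dd if=backup.img of=/dev/md0 bs=1M oflag=direct

Note that O_DIRECT transfers generally need to be a multiple of the device's sector size, which a block size like 1M satisfies.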
Thanks to all who confirmed that this should work, and gave pointers to
more reading material.
Further investigation proved that the problem is caused by the Smart
Array Controllers that HP uses. As these Smart controllers don't
allow for JBOD configs, I had configured each disk as RAID 0
On Thursday March 23, [EMAIL PROTECTED] wrote:
The code that checks all the devices in an array and tries to fit a grow
request to the largest possible value is broken and will only do this
successfully if the first element of the array isn't larger than all the
other elements in the array. Not
On Thursday March 23, [EMAIL PROTECTED] wrote:
Hi,
This question is mainly about the following error during shutdown on my
Gentoo system:
mdadm: fail to stop array /dev/md1: Device or resource busy
(full raid1, root is on /dev/md1)
After searching Bugzilla I found:
Neil - Thank you very much for the response.
In my tests with identically configured raid0 and raid5 arrays, raid5
initially had much lower throughput during reads. I had assumed that
was because raid5 did parity-checking all the time. It turns out that
raid5 throughput can get fairly close
On Thursday March 23, [EMAIL PROTECTED] wrote:
Neil - Thank you very much for the response.
In my tests with identically configured raid0 and raid5 arrays, raid5
initially had much lower throughput during reads. I had assumed that
was because raid5 did parity-checking all the time. It
On Wednesday March 22, [EMAIL PROTECTED] wrote:
Neil Brown wrote:
On Tuesday March 7, [EMAIL PROTECTED] wrote:
Neil.
What is the stripe_cache exactly?
In order to ensure correctness of data, all IO operations on a raid5
pass through the 'stripe cache'. This is a cache of
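On kernels new enough to expose the md sysfs interface, the size of this cache can be read and changed per array; a minimal sketch, assuming the array is /dev/md0 and the attribute is present:

# cat /sys/block/md0/md/stripe_cache_size
# echo 1024 > /sys/block/md0/md/stripe_cache_size

Each cached stripe holds roughly one page per member device, so raising the value trades memory for more stripes in flight.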
Three little fixes.
The last is possibly the most interesting as it highlights how wrong I managed
to get the BIO_BARRIER stuff in raid1, which I really thought I had tested.
I'm happy for this and the previous collection of raid5-growth patches
to be merged into 2.6.17-rc1. I had hoped the
An md array can be asked to change the amount of each device that it
is using, and in particular can be asked to use the maximum available
space. This currently only works if the first device is not larger
than the rest, because 'size' gets changed along the way and so the
'fit' check becomes wrong. So check if a 'fit' is
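From userspace, the request described here is the one made by mdadm's grow mode, e.g. (device name is a placeholder, and this assumes an mdadm that accepts the 'max' keyword):

# mdadm --grow /dev/md0 --size=max

which asks the kernel to use the largest amount of space available on every member device.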
raid5 overloads bi_phys_segments to count the number of blocks that
the request was broken into, so that it knows when the bio is completely
handled.
Accessing this must always be done under a spinlock. In one case we
also call bi_end_io under that spinlock, which probably isn't ideal as