On Thu, 13 Dec 2007, Louis-David Mitterrand wrote:
Hi,
after reading some interesting suggestions on kernel tuning at:
http://hep.kbfi.ee/index.php/IT/KernelTuning
I am wondering whether 'deadline' is indeed the best IO scheduler (vs.
anticipatory and cfq) for a soft raid5/6
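Note that for md the elevator choice matters on the member disks rather than on /dev/mdX itself. A minimal sketch of checking and switching it at runtime, with sdc/sdd/sde standing in for the member disks:
for d in sdc sdd sde; do
    cat /sys/block/$d/queue/scheduler        # the active scheduler is shown in [brackets]
    echo deadline > /sys/block/$d/queue/scheduler
done
Alternatively, booting with elevator=deadline makes it the default for every disk.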
Tejun Heo wrote:
Bill Davidsen wrote:
Jan Engelhardt wrote:
On Dec 1 2007 06:26, Justin Piszcz wrote:
I ran the following:
dd if=/dev/zero of=/dev/sdc
dd if=/dev/zero of=/dev/sdd
dd if=/dev/zero of=/dev/sde
(as it is always a very good idea to do this with any new disk)
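As an aside, GNU dd will report progress if you send it SIGUSR1, which is handy during a whole-disk zeroing pass; roughly (sdc is just a placeholder):
dd if=/dev/zero of=/dev/sdc &
kill -USR1 $!        # dd prints bytes copied and throughput so far to stderr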
Brett Maton wrote:
Hi,
Question for you guys.
A brief history:
RHEL 4 AS
I have a partition with way too many small files on it (usually around a couple of million) that needs to be backed up; standard
methods mean that a restore is impossibly slow due to the sheer volume of files.
On Thursday December 13, [EMAIL PROTECTED] wrote:
Good morning to Neil and everyone on the list, hope your respective
days are going well.
Quick overview. We've isolated what appears to be a failure mode with
mdadm assembling RAID1 (and presumably other levels) volumes which
kernel based
On Thursday December 13, [EMAIL PROTECTED] wrote:
How do I create the internal bitmap? man mdadm didn't shed any
light and my brief excursion into google wasn't much more helpful.
mdadm --grow --bitmap=internal /dev/mdX
The version I have installed is mdadm-1.12.0-5.i386 from RedHat
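Write-intent bitmaps came in with mdadm 2.x and the 2.6.13 kernel, so that 1.12.0 package will need upgrading first. Once the bitmap has been added you can confirm it, roughly like this (md0 is just a placeholder):
mdadm --grow --bitmap=internal /dev/md0
cat /proc/mdstat         # the array should now show a "bitmap: ..." line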
Following are 7 md related patches suitable for the next -mm
and maybe for 2.6.25.
They move towards giving user-space programs more fine control of an
array so that we can add support for more complex metadata formats
(e.g. DDF) without bothering the kernel with such things.
The last patch
- Add a state flag 'external' to indicate that the metadata is managed
externally (by user-space) so important changes need to be
left to user-space to handle.
Alternatives are non-persistent ('none') where there is no stable metadata -
after the array is stopped there is no record of
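For illustration, this is how the flag surfaces in md's sysfs interface (the array name and metadata string are only examples):
cat /sys/block/md0/md/metadata_version
# "0.90", "1.2", ...  - superblocks managed by the kernel
# "none"              - no persistent metadata at all
# "external:ddf"      - metadata handled by a user-space manager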
When a device fails, we must not allow any further writes to the array
until the device failure has been recorded in array metadata.
When metadata is managed externally, this requires some synchronisation...
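Roughly, the synchronisation goes through the per-device state file in sysfs: the kernel marks the member 'blocked' and stalls writes until the metadata manager acknowledges the failure (paths and device names here are only illustrative):
cat /sys/block/md0/md/dev-sdb1/state       # e.g. "faulty,blocked"
# record the failure in the external metadata, then release the block:
echo -blocked > /sys/block/md0/md/dev-sdb1/state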
Allow/require userspace to explicitly remove failed devices
from active service in the
This allows userspace to control resync/reshape progress and
synchronise it with other activities, such as shared access in a SAN,
or backing up critical sections during a tricky reshape.
Writing a number of sectors (which must be a multiple of the chunk
size if such is meaningful) causes a
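Presumably this is the sync_min/sync_max window that md exposes under sysfs; a rough example of bounding a resync (the sector counts are made up):
echo 0        > /sys/block/md0/md/sync_min     # start of the window, in sectors
echo 16777216 > /sys/block/md0/md/sync_max     # resync pauses once it reaches this sector
cat /sys/block/md0/md/sync_completed           # progress as "done / total" sectors
echo max      > /sys/block/md0/md/sync_max     # then let it run to the end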
Currently, a given device is claimed by a particular array so
that it cannot be used by other arrays.
This is not ideal for DDF and other metadata schemes which have
their own partitioning concept.
So for externally managed metadata, just claim the device for
md in general, require that offset
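For comparison, the container model that mdadm 3.0 and later expose for DDF looks roughly like this (device names are placeholders):
mdadm --create /dev/md/ddf0 --metadata=ddf --raid-devices=4 /dev/sd[b-e]   # the container claims the disks
mdadm --create /dev/md/vol0 --level=5 --raid-devices=4 /dev/md/ddf0        # arrays are carved out of it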
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |    8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2007-12-14 16:09:01.000000000 +1100
+++ ./drivers/md/md.c 2007-12-14
Signed-off-by: Neil Brown [EMAIL PROTECTED]
### Diffstat output
./drivers/md/md.c |    7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2007-12-14 16:09:03.000000000 +1100
+++ ./drivers/md/md.c 2007-12-14
Given an fd on a block device, returns a string like
/block/sda/sda1
which can be used to find related information in /sys.
Ideally we should have an ioctl that works on char devices as well,
but that seems far from trivial, so it seems reasonable to have
this until the latter can be
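For what it's worth, on kernels with /sys/dev (2.6.27 onwards) user-space can already recover that mapping from the device's major:minor numbers; a rough sketch, not the proposed ioctl (the device name is a placeholder):
dev=/dev/sda1
mm=$(stat -c '%t:%T' "$dev")                                     # major:minor of the node, in hex
readlink -f /sys/dev/block/$((0x${mm%%:*})):$((0x${mm##*:}))     # -> .../block/sda/sda1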
What you could do is set the number of devices in the array to 3 so
that it always appears to be degraded, then rotate your backup drives
through the array. The number of dirty bits in the bitmap will
steadily grow and so resyncs will take longer. Once it crosses some
threshold you set the
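A rough sketch of that scheme with mdadm (all device names are only examples):
mdadm --create /dev/md0 --level=1 --raid-devices=3 --bitmap=internal \
      /dev/sda1 /dev/sdb1 missing              # built with a permanent 'missing' slot
mdadm /dev/md0 --add /dev/sdc1                 # first pass: full recovery onto the backup disk
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
mdadm /dev/md0 --re-add /dev/sdc1              # later re-adds only resync the bitmap's dirty blocks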