On Tuesday June 20, [EMAIL PROTECTED] wrote:
Nigel J. Terry wrote:
Well good news and bad news I'm afraid...
Well, I would like to be able to tell you that the time calculation now
works, but I can't. Here's why: when I rebooted with the newly built
kernel, it decided to hit the magic 21
Thanks to several guys on this list, I have solved my problem
and elaborated on it; could this be a new FAQ entry?
Q: Sometimes when a RAID volume is resyncing, the system seems to
lock up: all disk activity is blocked until the resync is done.
A: This is not strictly related to Linux RAID, this
Niccolo Rigacci wrote:
[]
From the command line you can see which schedulers are supported
and change it on the fly (remember to do it for each RAID disk):
# cat /sys/block/hda/queue/scheduler
noop [anticipatory] deadline cfq
# echo cfq > /sys/block/hda/queue/scheduler
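(As an aside, on kernels of this era the default can also be chosen at
boot time with the elevator= kernel parameter; the kernel image and
root device in this grub stanza are made up:)
kernel /vmlinuz-2.6.17 ro root=/dev/md0 elevator=cfq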
Otherwise
David Greaves wrote:
How do I interpret:
bitmap: 0/117 pages [0KB], 1024KB chunk
in the mdstat output? And what does it mean when it's, e.g., 23/117?
This refers to the in-memory bitmap (basically a cache of what's in the
on-disk bitmap -- it allows bitmap operations to be more efficient).
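So "23/117 pages" means 23 of the 117 in-memory bitmap pages are
currently allocated. A throwaway sketch for pulling the two numbers
apart with sed (the sample line below is invented):

```shell
# Sample mdstat bitmap line (invented for illustration).
line="bitmap: 23/117 pages [92KB], 1024KB chunk"
# Extract allocated and total in-memory bitmap page counts.
alloc=$(echo "$line" | sed -n 's|.*bitmap: \([0-9]*\)/\([0-9]*\) pages.*|\1|p')
total=$(echo "$line" | sed -n 's|.*bitmap: \([0-9]*\)/\([0-9]*\) pages.*|\2|p')
echo "$alloc of $total in-memory bitmap pages allocated"
```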
There's a much easier/simpler way to set the default scheduler.
Personally, I don't see any point in worrying about the default at
compile time or boot time:
for f in `find /sys/block/* -name scheduler`; do echo cfq > $f; done
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
Mark Hahn wrote:
There's a much easier/simpler way to set the default scheduler.
Personally, I don't see any point in worrying about the default at
compile time or boot time:
for f in `find /sys/block/* -name scheduler`; do echo cfq > $f; done
I agree -- if you're talking about changing the io
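For the cautious, the one-liner can be tried against a throwaway
directory tree shaped like /sys/block first, so nothing real is
touched; note the '>' redirection, which is easy to lose in mail:

```shell
# Build a mock sysfs-like tree with two fake disks.
mock=$(mktemp -d)
mkdir -p "$mock/sda/queue" "$mock/sdb/queue"
for d in sda sdb; do
    echo "noop [anticipatory] deadline cfq" > "$mock/$d/queue/scheduler"
done
# Same loop as above, pointed at the mock tree instead of /sys/block.
for f in `find $mock/* -name scheduler`; do echo cfq > $f; done
cat "$mock/sda/queue/scheduler"   # -> cfq
```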
Marc L. de Bruin wrote:
Situation: /dev/md0, type raid1, containing 2 active devices
(/dev/hda1 and /dev/hdc1) and 2 spare devices (/dev/hde1 and /dev/hdg1).
Those two spare 'partitions' are the only partitions on those disks
and therefore I'd like to spin down those disks using hdparm for
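(For reference, the usual hdparm knobs for this are -S, which sets the
standby timeout in units of 5 seconds, and -y, which drops the drive
into standby immediately; device names follow the poster's setup:)
# hdparm -S 120 /dev/hde     (spin down after 10 minutes idle)
# hdparm -y /dev/hde         (put the drive into standby now)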
On Thursday June 22, [EMAIL PROTECTED] wrote:
Marc L. de Bruin wrote:
Situation: /dev/md0, type raid1, containing 2 active devices
(/dev/hda1 and /dev/hdc1) and 2 spare devices (/dev/hde1 and /dev/hdg1).
Those two spare 'partitions' are the only partitions on those disks
and
Neil Brown wrote:
On Thursday June 22, [EMAIL PROTECTED] wrote:
Marc L. de Bruin wrote:
Situation: /dev/md0, type raid1, containing 2 active devices
(/dev/hda1 and /dev/hdc1) and 2 spare devices (/dev/hde1 and /dev/hdg1).
Those two spare 'partitions' are the only partitions on
On Thursday June 22, [EMAIL PROTECTED] wrote:
Thanks Neil for your quick reply. Would it be possible to elaborate a
bit on the problem and the solution? I guess I won't be on 2.6.18 for
some time...
When an array has been idle (no writes) for a short time (20 or 200
ms, depending on
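(On kernels that expose it, the idle delay Neil describes can be
inspected and tuned through sysfs; md0 here is assumed:)
# cat /sys/block/md0/md/safe_mode_delay
# echo 5 > /sys/block/md0/md/safe_mode_delay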