Mr. James W. Laferriere wrote:
Hello Gabor ,
On Tue, 20 Jun 2006, Gabor Gombas wrote:
On Tue, Jun 20, 2006 at 03:08:59PM +0200, Niccolo Rigacci wrote:
Do you know if it is possible to switch the scheduler at runtime?
echo cfq > /sys/block/<disk>/queue/scheduler
personally, I don't see any point in worrying about the default,
compile-time or boot-time:
for f in `find /sys/block/* -name scheduler`; do echo cfq > $f; done
I tested this case:
- reboot as per a power failure (the RAID goes dirty)
- the RAID starts resyncing as soon as the kernel assembles it
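(A minimal sketch for checking that the switch took and for watching
the resync; sda is an assumed member disk, substitute your own:)
# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
# watch -n 5 cat /proc/mdstat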
Niccolo Rigacci wrote:
Thanks to the several guys on this list, I have solved my problem
and written it up; could this be a new FAQ entry?
Q: Sometimes when a RAID volume is resyncing, the system seems to
lock up: all disk activity is blocked until the resync is done.
A: This is not strictly related to Linux RAID; this
Niccolo Rigacci wrote:
From the command line you can see which schedulers are supported
and change the scheduler on the fly (remember to do it for each RAID disk):
# cat /sys/block/hda/queue/scheduler
noop [anticipatory] deadline cfq
# echo cfq > /sys/block/hda/queue/scheduler
Otherwise
There's a much easier/simpler way to set the default scheduler.
Mark Hahn wrote:
I agree -- if you're talking about changing the io
On Mon, Jun 19, 2006 at 05:05:56PM +0200, Gabor Gombas wrote:
IMHO a much better fix is to use the cfq I/O scheduler during the
rebuild.
Yes, changing the default I/O scheduler to DEFAULT_CFQ solves the
problem very well: I get over 40 MB/s resync speed with no lock-up at
all!
Thank you very much, I
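(For reference, the DEFAULT_CFQ he mentions is the 2.6 block-layer
Kconfig choice; a sketch of the relevant .config fragment, option
names as in 2.6.16-era kernels:)
CONFIG_IOSCHED_CFQ=y
CONFIG_DEFAULT_CFQ=y
CONFIG_DEFAULT_IOSCHED="cfq"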
At least one can do an ls of the /sys/block area and then do an
automated echo cfq down the tree. Does anyone know of a method to set
a default scheduler?
Maybe I didn't understand the question...
You decide which schedulers are available at kernel compile
On Tue, Jun 20, 2006 at 08:00:13AM -0700, Mr. James W. Laferriere wrote:
RTFM:
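(The manual being pointed at is presumably
Documentation/kernel-parameters.txt, which lists the elevator= boot
option; a sketch of a GRUB kernel line using it, kernel version and
root device assumed:)
kernel /boot/vmlinuz-2.6.16 root=/dev/md0 ro elevator=cfq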
On Wed, Jun 14, 2006 at 10:46:09AM -0500, Bill Cizek wrote:
I was able to work around this by lowering
/proc/sys/dev/raid/speed_limit_max to a value below my disk
throughput (~50 MB/s) as follows:
IMHO a much better fix is to use the cfq I/O scheduler during the
rebuild. The default
On Thursday 15 June 2006 12:13, you wrote:
If this is causing a lockup, then there is something else wrong, just
as any single process should not - by writing constantly to disks - be
able to clog up the whole system.
Maybe if you could get the result of alt-sysrq-P
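(If the machine still takes input over ssh but the console is
unusable, the same dump can be triggered through procfs; a sketch,
assuming magic-sysrq was built into the kernel:)
# echo 1 > /proc/sys/kernel/sysrq
# echo p > /proc/sysrq-trigger
# dmesg | tail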
I tried some kernel
On Wed, Jun 14, 2006 at 10:46:09AM -0500, Bill Cizek wrote:
Niccolo Rigacci wrote:
When the sync is complete, the machine starts to respond again
perfectly.
I was able to work around this by lowering
/proc/sys/dev/raid/speed_limit_max to a value below my disk
throughput (~50 MB/s)
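(A sketch of that workaround; the values are in KB/s and the 40000
figure is an assumption, anything below the disk's throughput should do:)
# cat /proc/sys/dev/raid/speed_limit_max
# echo 40000 > /proc/sys/dev/raid/speed_limit_max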
Hi to all,
I have a new IBM xSeries 206m with two SATA drives; I installed
Debian Testing (Etch) and configured software RAID as shown:
Personalities : [raid1]
md1 : active raid1 sdb5[1] sda5[0]
1951744 blocks [2/2] [UU]
md2 : active raid1 sdb6[1] sda6[0]
2931712 blocks [2/2]
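(The same state can also be inspected per array; a sketch, assuming
mdadm is installed:)
# mdadm --detail /dev/md1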