On Fri, 5 Oct 2007 06:25:20 -0400 (EDT), Justin Piszcz wrote:

> So you have 3 SATA 1 disks:

Yeah, 3 of them in the array; there is a fourth standalone disk which
contains the root fs from which the system boots.

> http://digital-domain.net/kernel/sw-raid5-issue/mdadm-D
> 
> Do you compile your own kernel or use the distribution's kernel?

Compile my own.
 
> What does cat /proc/interrupts say? This is important to see if your
> disk controller(s) are sharing IRQs with other devices.

$ cat /proc/interrupts 
           CPU0       CPU1       
  0:     132052  249369403   IO-APIC-edge      timer
  1:        202         52   IO-APIC-edge      i8042
  8:          0          1   IO-APIC-edge      rtc
  9:          0          0   IO-APIC-fasteoi   acpi
 14:      11483        172   IO-APIC-edge      ide0
 16:   18041195    4798850   IO-APIC-fasteoi   sata_sil24
 18:   86068930         27   IO-APIC-fasteoi   eth0
 19:   16127662    2138177   IO-APIC-fasteoi   sata_sil, ohci_hcd:usb1, ohci_hcd:usb2
NMI:          0          0 
LOC:  249368914  249368949 
ERR:          0


sata_sil24 drives the raid array; sata_sil drives the root fs disk.

> 
> Also note with only 3 disks in a RAID-5 you will not get stellar 
> performance, but regardless, it should not be 'hanging' as you have 
> mentioned.  Just out of sheer curiosity have you tried the AS
> scheduler? CFQ is supposed to be better for multi-user performance
> but I would be highly interested if you used the AS scheduler-- would
> that change the 'hanging' problem you are noticing?  I would give it
> a shot, also try the deadline and noop.

I did try them briefly. I'll have another go.
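For reference when I retry them: the scheduler can be changed per disk at
runtime without a reboot. A sketch, assuming 2.6-era sysfs; "sdb" below is
a placeholder for one of the array's member disks, not necessarily the
real device name here:

```shell
# Show the available schedulers; the active one is shown in brackets.
cat /sys/block/sdb/queue/scheduler

# Switch at runtime (as root); takes effect immediately:
echo deadline > /sys/block/sdb/queue/scheduler
echo noop > /sys/block/sdb/queue/scheduler
echo anticipatory > /sys/block/sdb/queue/scheduler   # the AS scheduler
```

Each member disk of the md array has its own queue, so the change would
need to be applied to all three drives behind the sata_sil24.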

> You probably want to keep the nr_requests at 128, the
> stripe_cache_size at 8mb.  The stripe size of 256k is probably
> optimal.

OK.
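One note on units, since it's easy to trip over: stripe_cache_size is
counted in 4 KiB pages per member disk, not bytes, so the memory it pins
is value * 4096 * number_of_disks. A quick sketch of the arithmetic for a
hypothetical setting of 8192 on this 3-disk array (the sysfs paths in the
comments assume the md device is md0, as in this thread):

```shell
# stripe_cache_size is in 4 KiB pages per member disk; total memory
# pinned is value * page_size * number_of_disks.
STRIPE_CACHE=8192
DISKS=3
echo $(( STRIPE_CACHE * 4096 * DISKS / 1048576 ))MB
# Prints 96MB for these values.
#
# The knobs themselves would be set (as root) via:
#   echo 128  > /sys/block/sdb/queue/nr_requests
#   echo 8192 > /sys/block/md0/md/stripe_cache_size
```

So an "8mb" target for the cache setting itself costs considerably more
RAM than 8 MB once multiplied out across the disks.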

> Did you also re-mount the XFS partition with the default mount
> options (or just take the sunit and swidth)?

The /etc/fstab entry for the raid array is currently:

 /dev/md0                /home                   xfs    noatime,logbufs=8  1 2

and mount says

/dev/md0 on /home type xfs (rw,noatime,logbufs=8)

and /proc/mounts

/dev/md0 /home xfs rw,noatime,logbufs=8,sunit=512,swidth=1024 0 0

So I guess mount or the kernel is setting the sunit and swidth values.
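The numbers do check out against the array geometry, which suggests the
kernel derived them from md rather than from any stale mount option. The
sunit/swidth values in /proc/mounts are in 512-byte sectors, so with a
256 KiB chunk and 2 data disks (3-disk RAID5 = 2 data + 1 parity):

```shell
# Verify sunit/swidth (in 512-byte sectors) against the md geometry.
CHUNK_KB=256          # md chunk size used on this array
DATA_DISKS=2          # 3-disk RAID5: 2 data disks + 1 parity
SUNIT=$(( CHUNK_KB * 1024 / 512 ))
SWIDTH=$(( SUNIT * DATA_DISKS ))
echo "sunit=$SUNIT swidth=$SWIDTH"
# Prints sunit=512 swidth=1024, matching the /proc/mounts line above.
```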
 
> Justin.


Andrew
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
