Well, the weekend was the real test of this change.  The result
was over 1.1 TB written and 15.2 TB served in a three-day period, with a
peak rate of 117 MB/s over copper gigabit.  So it survived its first
beating.

Someone mentioned that in some cases NOOP would be a better choice for
external storage units.  I have 3 external storage units which all run
some sort of Linux kernel internally, using LVM to expose RAID volumes
to the host via SCSI.  Do you think these would be good candidates for
NOOP?  If so, is there a way to set the default I/O scheduler to CFQ
for everything except these units?  Or would I just have to do it with
bash in the local init.d script?  (A rough sketch of what I mean is below.)
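
For the init.d route, something like the following is roughly what I had
in mind.  It is only a sketch, assuming a 2.6 kernel that exposes the
per-device knob at /sys/block/<dev>/queue/scheduler, and the sdX names
are just placeholders for the external units (the global default can
also be set at boot with elevator=cfq):

#!/bin/sh
# Sketch: pick the I/O scheduler per device at boot.
# Assumes a 2.6 kernel with /sys/block/<dev>/queue/scheduler;
# the device names below are placeholders for the external units.

NOOP_DEVS="sdc sdd sde"     # hypothetical names for the external units

for dev in /sys/block/sd*; do
    name=$(basename "$dev")
    case " $NOOP_DEVS " in
        *" $name "*) sched=noop ;;
        *)           sched=cfq  ;;
    esac
    # Only write if the sysfs file exists and is writable
    [ -w "$dev/queue/scheduler" ] && echo "$sched" > "$dev/queue/scheduler"
done

Reading the file back (cat /sys/block/sda/queue/scheduler) lists the
available schedulers with the active one in brackets, so it's easy to
verify the change took.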

--David 

On 8/5/05, David Miller <[EMAIL PROTECTED]> wrote:
> I just had a breakthrough with my main file server here in its
> ability to handle a high load with lots of reads and writes going on
> simultaneously, so I figured I would pass this on in hopes that it will
> help someone with a similar problem.
> 
> The setup:
>   Dual Xeon 2.8 GHz, 2 GB RAM, LSI 320 PCI-X dual-channel SCSI
> controller with about 18 TB of SATA-over-SCSI attached storage, all in
> RAID 5.  All storage is managed using LVM2.
> 
> The workload:
>   This server does a lot more reads than writes and is almost always
> serving files over the network (copper Gigabit) at rates of 100 to 110
> MB/s.  This is done using Samba, as the client PCs are Windows based.
> 
> The problem:
>   When the server was serving files at high rates, any writes would drive
> up I/O waits and server load to the point that Samba would become
> unhappy.  In some cases the entire SCSI I/O subsystem in the kernel would
> become unhappy, causing SCSI bus resets, file systems unmounting, etc.
> Luckily this never resulted in any data loss or corruption, but it did
> stop some of the processing that we were doing on the files.
> 
> My solution:
>   The solution seems to be the CFQ I/O scheduler.  I had previously used
> the deadline and anticipatory schedulers, but neither of these solved
> the problem.  CFQ seems to keep everything happy even under a mixed
> I/O load of multiple writes (35 MB/s or so) and multiple reads
> (70 MB/s or so).  The server load is still high, in the 12 to 14 range,
> but the server and its services seem to be much more stable.
> 
> So what type of I/O scheduler have you found to handle your typical
> load the best?  Are there any other I/O schedulers that haven't made it
> to the mainstream source trees yet that are worth a look?  How long
> until we have an I/O scheduler that can adjust itself to deal with
> various loads dynamically?  I'm sure this is an ongoing area of
> research.
> 
> -David
>
