"Tony Sceats" <[EMAIL PROTECTED]> writes:
> On Sat, Sep 6, 2008 at 12:22 PM, Daniel Pittman <[EMAIL PROTECTED]> wrote:
>
>     Kyle <[EMAIL PROTECTED]> writes:
>
>     > The software I can tune myself. I was more looking for Linux specific 
> tuning.
>     >
>     > * Yes, I was/am concerned about I/O.
>     > * But also ensuring the OS itself (system processes) is not hindering 
> anything
>     >   otherwise.
>
>     Unless you are running other processes on the system you can be
>     reasonably confident that early performance measurement will tell you if
>     the OS is responsible for problems.

Oh, look, an assumption of mine.  How charmingly old fashioned of me.

> you should look out specifically for things like beagle and updatedb
> and what time they are run (i.e. if they run at the same time as
> your backups), as these are disk indexers.

This is sound advice.  I assumed that anyone running a server would have
started from an absolutely minimal system with no GUI, no extraneous
daemons, and would know to tune other disk load away from peak periods.

Which is a heck of an assumption to make, so thanks for catching it.

[...]

>     You will probably find performance here disappointing.  XFS with a
>     2.6.24 or more recent kernel will do better, perhaps significantly so.
>
> I would concur with this - XFS on RAID 10 on newer kernels gives you
> very good performance (RAID 0 gives you the write performance, RAID 1
> will give you the redundancy - and read performance)

I should probably add some notes on XFS tuning:

Set your agcount down to around 16 per terabyte, since more doesn't
really help much at most SMB scales.  Use attr2 format, and inode64
if you can -- and consider a 512 byte inode rather than the default
256 if your backup software makes use of xattrs or symlinks.
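Something along these lines, say -- the device name and the 2TB size
are hypothetical, so adjust agcount to your actual capacity:

```shell
# Hypothetical 2TB array: agcount=32 gives ~16 allocation groups per TB.
# attr2 format and 512-byte inodes help when the backup tool uses xattrs.
mkfs.xfs -d agcount=32 -i size=512,attr=2 /dev/md0

# inode64 is a mount option, not a mkfs option:
mount -o inode64 /dev/md0 /srv/backup
```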

Give the device an external log, if you can safely, and use an external
bitmap for any Linux software RAID devices as well, since both help with
performance.  (As long as they are on a distinct spindle, HBA port,
etc.)
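For example, with the log and bitmap each on their own spindle (all
device names and paths here are hypothetical):

```shell
# External XFS log on a separate device, sized at mkfs time:
mkfs.xfs -l logdev=/dev/sdb1,size=128m /dev/md0
mount -o logdev=/dev/sdb1 /dev/md0 /srv/backup

# External write-intent bitmap for an md array; the bitmap file must
# live on a filesystem that is NOT on the array itself:
mdadm --grow /dev/md0 --bitmap=/mnt/otherdisk/md0-bitmap
```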

Ensure that your log is large, and boost the size of your in-kernel
log buffers and the log buffer count, so that filesystem activity
spends less time waiting on log writeout.
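Concretely, via mount options -- the values here are the usual
maximums, not a recommendation tuned to your workload:

```shell
# Eight in-kernel log buffers of 256KB each (256k needs version 2 logs):
mount -o logbufs=8,logbsize=256k /dev/md0 /srv/backup
```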


Test with various disk schedulers -- XFS and CFQ have interacted quite
badly in some kernel versions, and AS or deadline might boost
performance.  This is also workload dependent, so test, test, test.
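Switching schedulers per-device is cheap enough to test live; sda here
is just a placeholder for your actual disk:

```shell
# Show the available schedulers (the active one is in brackets):
cat /sys/block/sda/queue/scheduler

# Switch that disk to deadline, taking effect immediately:
echo deadline > /sys/block/sda/queue/scheduler
```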


Finally, I am not kidding about the 2.6.24 or later bit: don't run XFS
in production without a serious UPS unless you are on the most recent
kernel.

[...]

>     The RAID layout and filesystem choices are the only real points to
>     consider tuning up front -- and, probably, enabling LVM for ease of
>     future management of volume space.[1]
>
> In my experience LVM is actually pretty slow, at least running it
> with ext3 over multiple PVs, but then when I was looking at this it
> was probably two years ago and things may have improved a lot since
> then.

From my experience there is no significant performance loss, but it
could be that the workloads I deal with don't stress it enough, or in
the right way, to show performance issues.

> In any case LVM is certainly *really really* handy if you need to
> grow a filesystem.  It's not the only way of doing such a thing, but
> it is by far the easiest and cleanest, and backup systems are
> principally concerned with storage space, so the cost might be worth
> it in the end.

That would be my feeling, unless the RAID HBA provides the same ability
to grow the storage media.
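The online-growth case is pleasantly short with LVM plus XFS; the
volume group and mount point names below are made up for illustration:

```shell
# Extend the logical volume by 100GB (assumes free space in the VG):
lvextend -L +100G /dev/vg0/backup

# XFS grows online, while mounted -- note it takes the mount point:
xfs_growfs /srv/backup

# An ext3 volume would use resize2fs /dev/vg0/backup instead.
```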

Regards,
        Daniel
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
