Re: [PERFORM] file system and raid performance

2008-08-05 Thread Gregory S. Youngblood
I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,
and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used
bonnie++, so the numbers are really only useful for my hardware. 

What parameters were used to create the XFS partition in these tests? And,
what options were used to mount the file system? Was the kernel 32-bit or
64-bit? Given what I've seen with some of the XFS options (like lazy-count),
I am wondering about the options used in these tests.
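To be concrete, the kind of non-default setup I have in mind is something
like this (device, mount point, and option values are only illustrative,
not what I expect was used in the tests):

  $ mkfs.xfs -f -l lazy-count=1 /dev/sdb1   # lazy superblock counters on
  $ mount -t xfs -o noatime,logbufs=8,logbsize=256k /dev/sdb1 /mnt/pgdata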

Thanks,
Greg






Re: [PERFORM] file system and raid performance

2008-08-05 Thread Mark Wong
On Mon, Aug 4, 2008 at 10:04 PM,  <[EMAIL PROTECTED]> wrote:
> On Mon, 4 Aug 2008, Mark Wong wrote:
>
>> Hi all,
>>
>> We've thrown together some results from simple i/o tests on Linux
>> comparing various file systems, hardware and software raid with a
>> little bit of volume management:
>>
>> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide
>>
>> What I'd like to ask of the folks on the list is how relevant is this
>> information in helping make decisions such as "What file system should
>> I use?"  "What performance can I expect from this RAID configuration?"
>> I know these kind of tests won't help answer questions like "Which
>> file system is most reliable?" but we would like to be as helpful as
>> we can.
>>
>> Any suggestions/comments/criticisms for what would be more relevant or
>> interesting also appreciated.  We've started with Linux but we'd also
>> like to hit some other OS's.  I'm assuming FreeBSD would be the other
>> popular choice for the DL-380 that we're using.
>>
>> I hope this is helpful.
>
> it's definitely timely for me (we were having a spirited 'discussion' on
> this topic at work today ;-)
>
> what happened with XFS?

Not exactly sure, I didn't attempt to debug much.  I only looked into
it enough to see that the fio processes were waiting for something.
In one case I let the test go for 24 hours to see if it would stop.
Note that I specified to fio not to run longer than an hour.
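That limit corresponds to fio's runtime setting; an illustrative excerpt
(not the exact job file used) would be:

  [seq-read]
  ; stop after one hour of wall-clock time rather than when size is reached
  rw=read
  bs=8k
  size=10g
  time_based
  runtime=3600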

> you show it as not completing half the tests in the single-disk table and
> it's completely missing from the other ones.
>
> what OS/kernel were you running?

This is a Gentoo system, running the 2.6.25-gentoo-r6 kernel.

> if it was linux, which software raid did you try (md or dm) did you use lvm
> or raw partitions?

We tried mdraid, not device-mapper.  So far we have only used raw
devices (whole disks, no partition tables).
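For illustration only, that setup looks roughly like this (device names
and RAID level are placeholders, not the exact arrays from the wiki page):

  $ mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
  $ mkfs.xfs /dev/md0   # file system goes directly on the md device, no partition table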

Regards,
Mark



Re: [PERFORM] file system and raid performance

2008-08-05 Thread Mark Wong
On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood <[EMAIL PROTECTED]> 
wrote:
> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,
> and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used
> bonnie++, so the numbers are really only useful for my hardware.
>
> What parameters were used to create the XFS partition in these tests? And,
> what options were used to mount the file system? Was the kernel 32-bit or
> 64-bit? Given what I've seen with some of the XFS options (like lazy-count),
> I am wondering about the options used in these tests.

The default (no arguments specified) parameters were used to create
the XFS partitions.  Mount options specified are described in the
table.  This was a 64-bit OS.

Regards,
Mark



Re: [PERFORM] file system and raid performance

2008-08-05 Thread Fernando Ike
On Tue, Aug 5, 2008 at 4:54 AM, Mark Wong <[EMAIL PROTECTED]> wrote:
> Hi all,
 Hi

> We've thrown together some results from simple i/o tests on Linux
> comparing various file systems, hardware and software raid with a
> little bit of volume management:
>
> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide
>
>
> Any suggestions/comments/criticisms for what would be more relevant or
> interesting also appreciated.  We've started with Linux but we'd also
> like to hit some other OS's.  I'm assuming FreeBSD would be the other
> popular choice for the DL-380 that we're using.
>

   It would also be interesting to see tests with Ext4. Although it is not
yet considered stable in the Linux kernel, it should be possible in this
case given the kernel version, assuming a version of e2fsprogs that
supports it.
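
Given such an e2fsprogs build, a sketch of the setup would be (device and
mount point are placeholders; on kernels where Ext4 is still marked
experimental the type may show up as ext4dev instead):

  $ mkfs.ext4 /dev/md0
  $ mount -t ext4 /dev/md0 /mnt/pgdata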



Regards,
-- 
Fernando Ike
http://www.midstorm.org/~fike/weblog



Re: [PERFORM] file system and raid performance

2008-08-05 Thread Gregory S. Youngblood
> From: Mark Kirkwood [mailto:[EMAIL PROTECTED]
> Mark Wong wrote:
> > On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood
> > <[EMAIL PROTECTED]> wrote:
> >
> >> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,
> >> and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used
> >> bonnie++, so the numbers are really only useful for my hardware.
> >>
> >> What parameters were used to create the XFS partition in these tests? And,
> >> what options were used to mount the file system? Was the kernel 32-bit or
> >> 64-bit? Given what I've seen with some of the XFS options (like lazy-count),
> >> I am wondering about the options used in these tests.
> >
> > The default (no arguments specified) parameters were used to create
> > the XFS partitions.  Mount options specified are described in the
> > table.  This was a 64-bit OS.
> >
> I think it is a good idea to match the raid stripe size and give some
> indication of how many disks are in the array. E.g.:
>
> For a 4 disk system with 256K stripe size I used:
>
>   $ mkfs.xfs -d su=256k,sw=2 /dev/mdx
>
> which performed about 2-3 times quicker than the default (I did try sw=4
> as well, but didn't notice any difference compared to sw=2).

[Greg says] 
I thought that xfs picked up those details when using md and a soft-raid
configuration. 
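
One way to check (a sketch only; /dev/md0 is a placeholder) is to compare
the array's chunk size against the geometry XFS recorded at mkfs time:

  $ mdadm --detail /dev/md0 | grep -i chunk       # md chunk (stripe) size
  $ xfs_info /dev/md0 | grep -Ei 'sunit|swidth'   # stripe unit/width XFS is using

If sunit/swidth come back as 0, mkfs.xfs didn't pick up the md geometry and
the su/sw options would need to be passed explicitly.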







Re: [PERFORM] file system and raid performance

2008-08-05 Thread Mark Kirkwood

Mark Wong wrote:
> On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood <[EMAIL PROTECTED]> wrote:
>
>> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,
>> and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used
>> bonnie++, so the numbers are really only useful for my hardware.
>>
>> What parameters were used to create the XFS partition in these tests? And,
>> what options were used to mount the file system? Was the kernel 32-bit or
>> 64-bit? Given what I've seen with some of the XFS options (like lazy-count),
>> I am wondering about the options used in these tests.
>
> The default (no arguments specified) parameters were used to create
> the XFS partitions.  Mount options specified are described in the
> table.  This was a 64-bit OS.
>
> Regards,
> Mark

I think it is a good idea to match the raid stripe size and give some 
indication of how many disks are in the array. E.g.:


For a 4 disk system with 256K stripe size I used:

$ mkfs.xfs -d su=256k,sw=2 /dev/mdx

which performed about 2-3 times quicker than the default (I did try sw=4 
as well, but didn't notice any difference compared to sw=2).


regards

Mark

