Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-04-24 Thread Kyle McDonald
On 3/9/2010 1:55 PM, Matt Cowger wrote:
> That's a very good point - in this particular case, there is no option to
> change the blocksize for the application.
>
>   
I have no way of guessing the effects it would have, but is there a
reason the filesystem block size can't be a multiple of the application
block size? I mean, four 4 KB app blocks to one 16 KB FS block sounds like
it might be a decent compromise to me; decent enough to make it worth
testing, anyway.
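
A minimal sketch of that test, reusing the pool name and iozone command
line from the original post (with the record size bumped to match):

  # only newly written files pick up the new recordsize
  zfs set recordsize=16k ram
  # rerun the benchmark with matching 16K records
  iozone -e -i 0 -i 1 -i 2 -n 5120 -O -q 16k -r 16k -s 5g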

  -Kyle

> On 3/9/10 10:42 AM, "Roch Bourbonnais"  wrote:
>
>   
>> I think this is highlighting that there is an extra CPU requirement to
>> manage small blocks in ZFS.
>> The table would probably turn over if you go to 16K zfs records and
>> 16K reads/writes from the application.
>>
>> Next step for you is to figure out how many read/write IOPS you
>> expect to take in the real workload, and whether or not the
>> filesystem portion will represent a significant drain on CPU resources.
>>
>> -r
>>
>>
>> On Mar 8, 2010, at 17:57, Matt Cowger wrote:
>>
>> 
>>> Hi Everyone,
>>>
>>> It looks like I've got something weird going with zfs performance on
>>> a ramdisk… ZFS is performing not even a 3rd of what UFS is doing.
>>>
>>> Short version:
>>>
>>> Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren't
>>> swapping
>>> Create zpool on it (zpool create ram…)
>>> Change zfs options to turn off checksumming (don't want it or need
>>> it), atime, compression, 4K block size (this is the application's
>>> native blocksize) etc.
>>> Run a simple iozone benchmark (seq. write, seq. read, rndm write,
>>> rndm read).
>>>
>>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and
>>> mounting the UFS forcedirectio (no point in using buffer cache
>>> memory for something that's already in memory)
>>>
>>> Measure IOPs performance using iozone:
>>>
>>> iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
>>>
>>> With the ZFS filesystem I get around:
>>>
>>> ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525
>>>
>>> Not SOO bad, but here's UFS:
>>>
>>> UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141
>>>
>>> For all tests besides the seq write, UFS utterly destroys ZFS.
>>>
>>> I'm curious if anyone has any clever ideas on why this huge
>>> disparity in performance exists.  At the end of the day, my
>>> application will run on either filesystem, it just surprises me how
>>> much worse ZFS performs in this (admittedly edge case) scenario.
>>>
>>> --M



Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Matt Cowger
This is a good point, and something that I tried. I limited the ARC to 1GB and
to 4GB (both well within the memory footprint of the system even with the
ramdisk)… equally poor results… this doesn't feel like the ARC fighting with
locked memory pages.

--M

-----Original Message-----
From: Ross Walker [mailto:rswwal...@gmail.com] 
Sent: Tuesday, March 09, 2010 3:53 PM
To: Roch Bourbonnais
Cc: Matt Cowger; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk 
(70% drop)

On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais wrote:

>
> I think this is highlighting that there is an extra CPU requirement to
> manage small blocks in ZFS.
> The table would probably turn over if you go to 16K zfs records and
> 16K reads/writes from the application.
>
> Next step for you is to figure out how many read/write IOPS you
> expect to take in the real workload, and whether or not the
> filesystem portion will represent a significant drain on CPU resources.

I think it highlights more the problem of ARC vs ramdisk, or  
specifically ZFS on ramdisk while ARC is fighting with ramdisk for  
memory.

It is a wonder it didn't deadlock.

If I were to put a ZFS file system on a ramdisk, I would limit the
size of the ramdisk and the ARC so that both, plus the kernel, fit nicely
in memory with room to spare for user apps.
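
For reference, a sketch of the /etc/system tunable usually used to cap
the ARC (4 GB shown, one of the values tried elsewhere in this thread;
takes effect at the next boot):

  * /etc/system: cap the ZFS ARC at 4 GB
  set zfs:zfs_arc_max = 0x100000000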

-Ross

  


Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread ольга крыжановская
Could you retest it with mmap() used?
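
If memory serves, iozone maps files with mmap() when given its -B flag;
a sketch of the retest, reusing the command line from this thread:

  iozone -B -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g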

Olga

2010/3/9 Matt Cowger :
> It can, but doesn't in the command line shown below.
>
> M
>
>
>
> On Mar 8, 2010, at 6:04 PM, "ольга крыжановская"  wrote:
>
>> Does iozone use mmap() for IO?
>>
>> Olga
>>
>> On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger wrote:
>>> Hi Everyone,
>>>
>>>
>>>
>>> It looks like I've got something weird going with zfs performance
>>> on a
>>> ramdisk… ZFS is performing not even a 3rd of what UFS is doing.
>>>
>>>
>>>
>>> Short version:
>>>
>>>
>>>
>>> Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren't
>>> swapping
>>>
>>> Create zpool on it (zpool create ram)
>>>
>>> Change zfs options to turn off checksumming (don't want it or need
>>> it),
>>> atime, compression, 4K block size (this is the application's native
>>> blocksize) etc.
>>>
>>> Run a simple iozone benchmark (seq. write, seq. read, rndm write,
>>> rndm
>>> read).
>>>
>>>
>>>
>>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and
>>> mounting the UFS forcedirectio (no point in using buffer cache
>>> memory for something that's already in memory)
>>>
>>>
>>>
>>> Measure IOPs performance using iozone:
>>>
>>>
>>>
>>> iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
>>>
>>>
>>>
>>> With the ZFS filesystem I get around:
>>>
>>> ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525
>>>
>>> Not SOO bad, but here's UFS:
>>>
>>> UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141
>>>
>>>
>>>
>>> For all tests besides the seq write, UFS utterly destroys ZFS.
>>>
>>>
>>>
>>> I'm curious if anyone has any clever ideas on why this huge disparity
>>> in performance exists.  At the end of the day, my application will run
>>> on either filesystem, it just surprises me how much worse ZFS performs
>>> in this (admittedly edge case) scenario.
>>>
>>>
>>>
>>> --M
>>>
>>
>>
>>
>> --
>>  ,   __   ,
>> { \/`o;-Olga Kryzhanovska   -;o`\/ }
>> .'-/`-/ olga.kryzhanov...@gmail.com   \-`\-'.
>> `'-..-| / Solaris/BSD//C/C++ programmer   \ |-..-'`
>>  /\/\ /\/\
>>  `--`  `--`
>



-- 
  ,   __   ,
 { \/`o;-Olga Kryzhanovska   -;o`\/ }
.'-/`-/ olga.kryzhanov...@gmail.com   \-`\-'.
 `'-..-| / Solaris/BSD//C/C++ programmer   \ |-..-'`
  /\/\ /\/\
  `--`  `--`


Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread ольга крыжановская
Which IO library do you use? If you use stdio, you could use the libast
stdio implementation, which allows setting the block size via
environment variables.

Olga

On Tue, Mar 9, 2010 at 7:55 PM, Matt Cowger  wrote:
> That's a very good point - in this particular case, there is no option to
> change the blocksize for the application.
>
>
> On 3/9/10 10:42 AM, "Roch Bourbonnais"  wrote:
>
>>
>> I think this is highlighting that there is an extra CPU requirement to
>> manage small blocks in ZFS.
>> The table would probably turn over if you go to 16K zfs records and
>> 16K reads/writes from the application.
>>
>> Next step for you is to figure out how many read/write IOPS you
>> expect to take in the real workload, and whether or not the
>> filesystem portion will represent a significant drain on CPU resources.
>>
>> -r
>>
>>
>> On Mar 8, 2010, at 17:57, Matt Cowger wrote:
>>
>>> Hi Everyone,
>>>
>>> It looks like I've got something weird going with zfs performance on
>>> a ramdisk… ZFS is performing not even a 3rd of what UFS is doing.
>>>
>>> Short version:
>>>
>>> Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren't
>>> swapping
>>> Create zpool on it (zpool create ram…)
>>> Change zfs options to turn off checksumming (don't want it or need
>>> it), atime, compression, 4K block size (this is the application's
>>> native blocksize) etc.
>>> Run a simple iozone benchmark (seq. write, seq. read, rndm write,
>>> rndm read).
>>>
>>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and
>>> mounting the UFS forcedirectio (no point in using buffer cache
>>> memory for something that's already in memory)
>>>
>>> Measure IOPs performance using iozone:
>>>
>>> iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
>>>
>>> With the ZFS filesystem I get around:
>>>
>>> ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525
>>>
>>> Not SOO bad, but here's UFS:
>>>
>>> UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141
>>>
>>> For all tests besides the seq write, UFS utterly destroys ZFS.
>>>
>>> I'm curious if anyone has any clever ideas on why this huge
>>> disparity in performance exists.  At the end of the day, my
>>> application will run on either filesystem, it just surprises me how
>>> much worse ZFS performs in this (admittedly edge case) scenario.
>>>
>>> --M
>>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
  ,   __   ,
 { \/`o;-Olga Kryzhanovska   -;o`\/ }
.'-/`-/ olga.kryzhanov...@gmail.com   \-`\-'.
 `'-..-| / Solaris/BSD//C/C++ programmer   \ |-..-'`
  /\/\ /\/\
  `--`  `--`


Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais wrote:




> I think this is highlighting that there is an extra CPU requirement to
> manage small blocks in ZFS.
> The table would probably turn over if you go to 16K zfs records and
> 16K reads/writes from the application.
>
> Next step for you is to figure out how many read/write IOPS you
> expect to take in the real workload, and whether or not the
> filesystem portion will represent a significant drain on CPU resources.


I think it highlights more the problem of ARC vs ramdisk, or  
specifically ZFS on ramdisk while ARC is fighting with ramdisk for  
memory.


It is a wonder it didn't deadlock.

If I were to put a ZFS file system on a ramdisk, I would limit the
size of the ramdisk and the ARC so that both, plus the kernel, fit nicely
in memory with room to spare for user apps.


-Ross

 


Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Matt Cowger
That's a very good point - in this particular case, there is no option to
change the blocksize for the application.


On 3/9/10 10:42 AM, "Roch Bourbonnais"  wrote:

> 
> I think this is highlighting that there is an extra CPU requirement to
> manage small blocks in ZFS.
> The table would probably turn over if you go to 16K zfs records and
> 16K reads/writes from the application.
>
> Next step for you is to figure out how many read/write IOPS you
> expect to take in the real workload, and whether or not the
> filesystem portion will represent a significant drain on CPU resources.
> 
> -r
> 
> 
> On Mar 8, 2010, at 17:57, Matt Cowger wrote:
> 
>> Hi Everyone,
>> 
>> It looks like I've got something weird going with zfs performance on
>> a ramdisk… ZFS is performing not even a 3rd of what UFS is doing.
>> 
>> Short version:
>> 
>> Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren't
>> swapping
>> Create zpool on it (zpool create ram…)
>> Change zfs options to turn off checksumming (don't want it or need
>> it), atime, compression, 4K block size (this is the application's
>> native blocksize) etc.
>> Run a simple iozone benchmark (seq. write, seq. read, rndm write,
>> rndm read).
>> 
>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and
>> mounting the UFS forcedirectio (no point in using buffer cache
>> memory for something that's already in memory)
>> 
>> Measure IOPs performance using iozone:
>> 
>> iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
>> 
>> With the ZFS filesystem I get around:
>>
>> ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525
>>
>> Not SOO bad, but here's UFS:
>>
>> UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141
>> 
>> For all tests besides the seq write, UFS utterly destroys ZFS.
>> 
>> I'm curious if anyone has any clever ideas on why this huge
>> disparity in performance exists.  At the end of the day, my
>> application will run on either filesystem, it just surprises me how
>> much worse ZFS performs in this (admittedly edge case) scenario.
>> 
>> --M
> 



Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Roch Bourbonnais


I think this is highlighting that there is an extra CPU requirement to
manage small blocks in ZFS.
The table would probably turn over if you go to 16K zfs records and
16K reads/writes from the application.

Next step for you is to figure out how many read/write IOPS you
expect to take in the real workload, and whether or not the filesystem
portion will represent a significant drain on CPU resources.

-r


On Mar 8, 2010, at 17:57, Matt Cowger wrote:


Hi Everyone,

It looks like I’ve got something weird going with zfs performance on
a ramdisk… ZFS is performing not even a 3rd of what UFS is doing.


Short version:

Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren’t  
swapping

Create zpool on it (zpool create ram….)
Change zfs options to turn off checksumming (don’t want it or need
it), atime, compression, 4K block size (this is the application’s
native blocksize) etc.
Run a simple iozone benchmark (seq. write, seq. read, rndm write,  
rndm read).


Same deal for UFS, replacing the ZFS stuff with newfs stuff and
mounting the UFS forcedirectio (no point in using buffer cache
memory for something that’s already in memory)
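
For concreteness, a sketch of the two setups as described (device paths
and mount point are illustrative; the pool name "ram" is from above):

  # ZFS side: ramdisk, pool, and the tuning described above
  ramdiskadm -a rd 80g
  zpool create ram /dev/ramdisk/rd
  zfs set checksum=off ram
  zfs set atime=off ram
  zfs set compression=off ram
  zfs set recordsize=4k ram

  # UFS side (after zpool destroy): newfs the raw device, mount forcedirectio
  newfs /dev/rramdisk/rd
  mount -F ufs -o forcedirectio /dev/ramdisk/rd /mnt/ufstest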


Measure IOPs performance using iozone:

iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
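
For reference, what those flags select (as I read the iozone docs;
worth double-checking against your iozone version):

  iozone -e \       # include flush (fsync/fflush) in the timings
         -i 0 \     # test 0: write/rewrite
         -i 1 \     # test 1: read/reread
         -i 2 \     # test 2: random read/write
         -n 5120 \  # minimum file size (KB) for auto mode
         -O \       # report results in operations per second
         -q 4k \    # maximum record size
         -r 4k \    # record size
         -s 5g      # file size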

With the ZFS filesystem I get around:

ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525

Not SOO bad, but here’s UFS:

UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141


For all tests besides the seq write, UFS utterly destroys ZFS.

I’m curious if anyone has any clever ideas on why this huge  
disparity in performance exists.  At the end of the day, my  
application will run on either filesystem, it just surprises me how  
much worse ZFS performs in this (admittedly edge case) scenario.


--M






Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Richard Elling
On Mar 9, 2010, at 9:40 AM, Matt Cowger wrote:
> Ross is correct - advanced OS features are not required here - just the 
> ability to store a file - don’t even need unix style permissions

KISS.  Just use tmpfs, though you might also consider limiting its size.
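
A sketch of that (mount point is illustrative; the size= option caps the
tmpfs so it cannot eat all of memory):

  mkdir -p /mnt/ramtest
  # 80 GB cap on the 96 GB box from this thread, leaving headroom
  mount -F tmpfs -o size=81920m swap /mnt/ramtest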
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)






Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Matt Cowger
Ross is correct - advanced OS features are not required here - just the ability 
to store a file - don’t even need unix style permissions

-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ross Walker
Sent: Tuesday, March 09, 2010 6:23 AM
To: ольга крыжановская
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk 
(70% drop)

On Mar 8, 2010, at 11:46 PM, ольга крыжановская  wrote:

> tmpfs lacks features like quota and NFSv4 ACL support. May not be the
> best choice if such features are required.

True, but if the OP were looking for those features, they would most
likely not be looking for an in-memory file system.

This would be more for something like temp databases in an RDBMS or a
cache of some sort.

-Ross



Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 8, 2010, at 11:46 PM, ольга крыжановская wrote:



tmpfs lacks features like quota and NFSv4 ACL support. May not be the
best choice if such features are required.


True, but if the OP were looking for those features, they would most
likely not be looking for an in-memory file system.


This would be more for something like temp databases in an RDBMS or a
cache of some sort.


-Ross



Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread ольга крыжановская
tmpfs lacks features like quota and NFSv4 ACL support. May not be the
best choice if such features are required.

Olga

On Tue, Mar 9, 2010 at 3:31 AM, Bill Sommerfeld  wrote:
> On 03/08/10 17:57, Matt Cowger wrote:
>>
>> Change zfs options to turn off checksumming (don't want it or need it),
>> atime, compression, 4K block size (this is the application's native
>> blocksize) etc.
>
> even when you disable checksums and compression through the zfs command, zfs
> will still compress and checksum metadata.
>
> the evil tuning guide describes an unstable interface to turn off metadata
> compression, but I don't see anything in there for metadata checksums.
>
> if you have an actual need for an in-memory filesystem, will tmpfs fit the
> bill?
>
>- Bill



-- 
  ,   __   ,
 { \/`o;-Olga Kryzhanovska   -;o`\/ }
.'-/`-/ olga.kryzhanov...@gmail.com   \-`\-'.
 `'-..-| / Solaris/BSD//C/C++ programmer   \ |-..-'`
  /\/\ /\/\
  `--`  `--`


Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Matt Cowger

On Mar 8, 2010, at 6:31 PM, Bill Sommerfeld wrote:

> 
> if you have an actual need for an in-memory filesystem, will tmpfs fit 
> the bill?
> 
>   - Bill


Very good point, Bill - just ran this test and started to get the numbers I was
expecting (1.3 GB/s throughput, 250K+ IOPs).

If we do go this way, this is an excellent option.


Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Edward Ned Harvey
I don't have an answer to this question, but I can say I've seen a similarly
surprising result.  I ran iozone on various RAID configurations of spindle
disks… and on a ramdisk.  I was surprised to see the ramdisk is only about
50% to 200% faster than the next best competitor in each category… I don't
have any good explanation for that, but I didn't question it too hard.  I
accepted the results for what they are… the ramdisk performs surprisingly
poorly for some unknown reason.

 

 

 

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Matt Cowger
Sent: Monday, March 08, 2010 8:58 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk
(70% drop)

 

Hi Everyone,

 

It looks like I've got something weird going with zfs performance on a
ramdisk… ZFS is performing not even a 3rd of what UFS is doing.

 

Short version:

 

Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren't swapping

Create zpool on it (zpool create ram…)

Change zfs options to turn off checksumming (don't want it or need it),
atime, compression, 4K block size (this is the application's native
blocksize) etc.

Run a simple iozone benchmark (seq. write, seq. read, rndm write, rndm
read).

 

Same deal for UFS, replacing the ZFS stuff with newfs stuff and mounting the
UFS forcedirectio (no point in using buffer cache memory for something
that's already in memory)

 

Measure IOPs performance using iozone:

 

iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g

 

With the ZFS filesystem I get around:

ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525

Not SOO bad, but here's UFS:

UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141

 

For all tests besides the seq write, UFS utterly destroys ZFS.

 

I'm curious if anyone has any clever ideas on why this huge disparity in
performance exists.  At the end of the day, my application will run on
either filesystem, it just surprises me how much worse ZFS performs in this
(admittedly edge case) scenario.

 

--M



Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Matt Cowger

On Mar 8, 2010, at 6:31 PM, Richard Elling wrote:

>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and mounting the
>> UFS forcedirectio (no point in using buffer cache memory for something
>> that’s already in memory)
> 
> Did you also set primarycache=none?
> -- richard

Good suggestion - that actually made it significantly worse - down to less than 
5000 IOPs (or 5% of the performance of UFS)



Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Richard Elling
On Mar 8, 2010, at 5:57 PM, Matt Cowger wrote:
> Hi Everyone,
>  
> It looks like I’ve got something weird going with zfs performance on a
> ramdisk… ZFS is performing not even a 3rd of what UFS is doing.
>  
> Short version:
>  
> Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren’t swapping
> Create zpool on it (zpool create ram….)
> Change zfs options to turn off checksumming (don’t want it or need it),
> atime, compression, 4K block size (this is the application’s native
> blocksize) etc.
> Run a simple iozone benchmark (seq. write, seq. read, rndm write, rndm read).
>  
> Same deal for UFS, replacing the ZFS stuff with newfs stuff and mounting the
> UFS forcedirectio (no point in using buffer cache memory for something
> that’s already in memory)

Did you also set primarycache=none?
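
That is a one-liner against the pool dataset, assuming the pool name
"ram" from the original post:

  zfs set primarycache=none ram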
 -- richard

>  
> Measure IOPs performance using iozone:
>  
> iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
>  
> With the ZFS filesystem I get around:
>
> ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525
>
> Not SOO bad, but here’s UFS:
>
> UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141
>  
> For all tests besides the seq write, UFS utterly destroys ZFS.
>  
> I’m curious if anyone has any clever ideas on why this huge disparity in 
> performance exists.  At the end of the day, my application will run on either 
> filesystem, it just surprises me how much worse ZFS performs in this 
> (admittedly edge case) scenario.
>  
> --M

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)






Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Matt Cowger
It can, but doesn't in the command line shown below.

M



On Mar 8, 2010, at 6:04 PM, "ольга крыжановская"  wrote:

> Does iozone use mmap() for IO?
>
> Olga
>
> On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger wrote:
>> Hi Everyone,
>>
>>
>>
>> It looks like I’ve got something weird going with zfs performance
>> on a
>> ramdisk… ZFS is performing not even a 3rd of what UFS is doing.
>>
>>
>>
>> Short version:
>>
>>
>>
>> Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren’t  
>> swapping
>>
>> Create zpool on it (zpool create ram….)
>>
>> Change zfs options to turn off checksumming (don’t want it or need
>> it),
>> atime, compression, 4K block size (this is the application’s native
>> blocksize) etc.
>>
>> Run a simple iozone benchmark (seq. write, seq. read, rndm write,  
>> rndm
>> read).
>>
>>
>>
>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and
>> mounting the UFS forcedirectio (no point in using buffer cache
>> memory for something that’s already in memory)
>>
>>
>>
>> Measure IOPs performance using iozone:
>>
>>
>>
>> iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
>>
>>
>>
>> With the ZFS filesystem I get around:
>>
>> ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525
>>
>> Not SOO bad, but here’s UFS:
>>
>> UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141
>>
>>
>>
>> For all tests besides the seq write, UFS utterly destroys ZFS.
>>
>>
>>
>> I’m curious if anyone has any clever ideas on why this huge disparity
>> in performance exists.  At the end of the day, my application will run
>> on either filesystem, it just surprises me how much worse ZFS performs
>> in this (admittedly edge case) scenario.
>>
>>
>>
>> --M
>>
>
>
>
> -- 
>  ,   __   ,
> { \/`o;-Olga Kryzhanovska   -;o`\/ }
> .'-/`-/ olga.kryzhanov...@gmail.com   \-`\-'.
> `'-..-| / Solaris/BSD//C/C++ programmer   \ |-..-'`
>  /\/\ /\/\
>  `--`  `--`


Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Bill Sommerfeld

On 03/08/10 17:57, Matt Cowger wrote:

Change zfs options to turn off checksumming (don't want it or need it), atime,
compression, 4K block size (this is the application's native blocksize) etc.


even when you disable checksums and compression through the zfs command, 
zfs will still compress and checksum metadata.


the evil tuning guide describes an unstable interface to turn off 
metadata compression, but I don't see anything in there for metadata 
checksums.
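
If I recall correctly, that unstable interface is an /etc/system
tunable; verify it against your build before relying on it:

  * /etc/system: disable ZFS metadata compression
  set zfs:zfs_mdcomp_disable = 1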


if you have an actual need for an in-memory filesystem, will tmpfs fit 
the bill?


- Bill


Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread ольга крыжановская
Does iozone use mmap() for IO?

Olga

On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger  wrote:
> Hi Everyone,
>
>
>
> It looks like I’ve got something weird going with zfs performance on a
> ramdisk… ZFS is performing not even a 3rd of what UFS is doing.
>
>
>
> Short version:
>
>
>
> Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren’t swapping
>
> Create zpool on it (zpool create ram….)
>
> Change zfs options to turn off checksumming (don’t want it or need it),
> atime, compression, 4K block size (this is the application’s native
> blocksize) etc.
>
> Run a simple iozone benchmark (seq. write, seq. read, rndm write, rndm
> read).
>
>
>
> Same deal for UFS, replacing the ZFS stuff with newfs stuff and mounting the
> UFS forcedirectio (no point in using buffer cache memory for something
> that’s already in memory)
>
>
>
> Measure IOPs performance using iozone:
>
>
>
> iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
>
>
>
> With the ZFS filesystem I get around:
>
> ZFS: (seq write) 42360   (seq read) 31010   (random read) 20953   (random write) 32525
>
> Not SOO bad, but here’s UFS:
>
> UFS: (seq write) 42853   (seq read) 100761   (random read) 100471   (random write) 101141
>
>
>
> For all tests besides the seq write, UFS utterly destroys ZFS.
>
>
>
> I’m curious if anyone has any clever ideas on why this huge disparity in
> performance exists.  At the end of the day, my application will run on
> either filesystem, it just surprises me how much worse ZFS performs in this
> (admittedly edge case) scenario.
>
>
>
> --M
>



-- 
  ,   __   ,
 { \/`o;-Olga Kryzhanovska   -;o`\/ }
.'-/`-/ olga.kryzhanov...@gmail.com   \-`\-'.
 `'-..-| / Solaris/BSD//C/C++ programmer   \ |-..-'`
  /\/\ /\/\
  `--`  `--`