question below...

On May 28, 2012, at 1:25 PM, Iwan Aucamp wrote:

> On 05/28/2012 10:12 PM, Andrew Gabriel wrote:
>>  On 05/28/12 20:06, Iwan Aucamp wrote:
>>> I'm thinking of doing the following:
>>>  - relocating mmaped (mongo) data to a zfs filesystem with only
>>> metadata cache
>>>  - reducing zfs arc cache to 16 GB
>>> Is there any other recommendations - and is above likely to improve
>>> performance.
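In concrete terms, the two proposed changes boil down to one dataset property and one kernel tunable (pool and dataset names here are hypothetical):

```shell
# Serve only metadata from the ARC for the mongo dataset, so mmap'd
# pages are cached once (in the page cache) rather than twice:
zfs set primarycache=metadata tank/mongo

# Cap the ARC at 16 GB on Solaris 10: add to /etc/system and reboot.
# 0x400000000 bytes = 16 GiB.
# set zfs:zfs_arc_max = 0x400000000
```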
>> 1. Upgrade to S10 Update 10 - this has various performance improvements,
>> in particular related to database type loads (but I don't know anything
>> about mongodb).
>> 2. Reduce the ARC size so RSS + ARC + other memory users < RAM size.
>> I assume the RSS includes whatever caching the database does. In

>> theory, a database should be able to work out what's worth caching
>> better than any filesystem can guess from underneath it, so you want to
>> configure more memory in the DB's cache than in the ARC. (The default
>> ARC tuning is unsuitable for a database server.)
>> 3. If the database has some concept of blocksize or recordsize that it
>> uses to perform i/o, make sure the filesystems it is using are configured to
>> be the same recordsize. The ZFS default recordsize (128kB) is usually
>> much bigger than database blocksizes. This is probably going to have
>> less impact with an mmaped database than a read(2)/write(2) database,
>> where it may prove better to match the filesystem's record size to the
>> system's page size (4kB, unless it's using some type of large pages). I
>> haven't tried playing with recordsize for memory mapped i/o, so I'm
>> speculating here.
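Recordsize is a per-dataset property and only affects files written after it is set, so it should go in before the data is loaded. A sketch, with a hypothetical dataset name:

```shell
# Match recordsize to the i/o (or page) size before loading data;
# existing files keep the recordsize they were written with:
zfs set recordsize=4k tank/mongo
zfs get recordsize tank/mongo
```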
>> Blocksize or recordsize may apply to the log file writer too, and it may
>> be that this needs a different recordsize and therefore has to be in a
>> different filesystem. If it uses write(2) or some variant rather than
>> mmap(2) and doesn't document this in detail, Dtrace is your friend.
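For example, a DTrace one-liner along these lines would show the distribution of write(2) sizes the database actually issues (the process name is an assumption):

```shell
# Histogram of write(2) sizes issued by mongod:
dtrace -n 'syscall::write:entry /execname == "mongod"/ { @sizes = quantize(arg2); }'
```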
>> 4. Keep plenty of free space in the zpool if you want good database
>> performance. If you're more than 60% full (S10U9) or 80% full (S10U10),
>> that could be a factor.
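Checking pool occupancy is a single command (pool name hypothetical); the CAP column shows percent full:

```shell
zpool list tank
```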
>> Anyway, there are a few things to think about.
> Thanks for the feedback. I cannot really do 1, but will look into points 3
> and 4 - in addition to 2, which is what I hope to achieve with my second
> point. I would still like to know whether it is recommended to do only
> metadata caching for mmaped files (mongodb data files) - the way I see it,
> this should get rid of the double caching which is being done for mmaped
> files.

I'd be interested in the results of such tests. You can change the primarycache
parameter on the fly, so you could test it in less time than it takes me to
write this email :-)
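A quick A/B test could look like this (dataset name hypothetical):

```shell
zfs set primarycache=metadata tank/mongo   # metadata-only ARC caching
# ... run the workload, measure ...
zfs set primarycache=all tank/mongo        # revert to the default
```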
 -- richard

ZFS Performance and Training
