On 2012-12-01 15:05, Fung Zheng wrote:
Hello,

I'm about to migrate a 6TB database from Veritas Volume Manager to ZFS. I
want to set the arc_max parameter so ZFS can't use all of my system's
memory, but I don't know how much I should set. Do you think 24GB will be
enough for a 6TB database? Obviously the more the better, but I can't
dedicate too much memory. Has anyone implemented something similar successfully?

Not claiming to be an expert fully ready to (mis)lead you (and I
haven't done similar migrations for databases myself), I might suggest
setting the ZFS dataset option "primarycache=metadata" on the
dataset which holds the database. (PS: what OS version are you on?)
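
For example (a sketch - "dbpool/oradata" is a hypothetical pool/dataset
name, substitute your own):

    # keep only ZFS metadata in the ARC for the database dataset
    zfs set primarycache=metadata dbpool/oradata

It can be flipped back to "all" on the fly if testing shows the DB
actually benefits from ARC-cached userdata.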

The general consensus is that serious apps like databases are better
than generic OS/FS caches at caching what the DBMS deems fit (and
otherwise the data blocks might get cached twice - in the ARC and in
the application's cache). However, having ZFS *metadata* cached should
speed up your HDD I/O - the server can keep much of the needed block
map in RAM and not have to start by fetching it from disk every time.
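
As for capping the ARC itself: on Solaris-derived systems that is
usually done with the zfs_arc_max tunable in /etc/system (a sketch
assuming your 24GB figure; it takes effect after a reboot):

    * limit the ZFS ARC to 24GB (24 * 2^30 = 25769803776 bytes)
    set zfs:zfs_arc_max = 0x600000000

Whether 24GB is "enough" depends on your working set and on how much
RAM the DBMS itself gets; with primarycache=metadata the ARC mostly
needs to fit the metadata for the busy parts of those 6TB.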

Also make sure to set the "recordsize" attribute as appropriate for
your DB software - to match the DB block size. Usually this is
around 4, 8 or 16KB (the ZFS default being 128KB for filesystem
datasets). You might also want to put non-tablespace files (logs,
indexes, etc.) into separate datasets with their own appropriate record
sizes - this lets you play with different caching and compression
settings where applicable (you might save some IOPS by reading and
writing less data to the mechanical disks, at a small cost in CPU
horsepower, by using LZJB).
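
A sketch of what such a layout might look like (the dataset names and
the 8KB record size are assumptions - match them to your actual DB
block and write sizes):

    # tablespaces: recordsize matched to the DB block size, metadata-only ARC caching
    zfs create -o recordsize=8k -o primarycache=metadata dbpool/tablespaces
    # logs and indexes in their own datasets, with lightweight compression
    zfs create -o recordsize=8k -o compression=lzjb dbpool/logs
    zfs create -o recordsize=8k dbpool/indexes

Note that recordsize only applies to files written after it is set, so
it is best configured before you copy the database over.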

Also, such systems tend to benefit from SSD L2ARC read caches and an
SSD SLOG (separate ZIL) write cache. These are different pieces of
equipment with distinct characteristics (a SLOG should be mirrored,
small and write-mostly, and must endure write wear and survive sudden
power-offs; an L2ARC device should be big and fast at small random
reads, and only needs to be moderately reliable).
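
Adding those to an existing pool would look roughly like this (a
sketch - the device names are placeholders for your actual SSDs):

    # mirrored SLOG (dedicated ZIL device) - small, write-optimized SSDs
    zpool add dbpool log mirror c4t1d0 c4t2d0
    # L2ARC read cache - a large SSD, no redundancy needed
    zpool add dbpool cache c4t3d0

Losing an L2ARC device is harmless (reads just fall back to the pool
disks), which is why it needn't be mirrored; losing an unmirrored SLOG
across a power failure can cost you recent synchronous writes.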

If you do use a big L2ARC, you might indeed want frequently
accessed datasets (e.g. indexes) to have both userdata and metadata
held in both ZFS caches (as is the default), while the randomly
accessed tablespaces may or may not be good candidates for such
caching - fortunately, you can test this setting change on the fly.
I believe you must allow a dataset's userdata to be cached in RAM
if you want it to be able to spill over onto the L2ARC.
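
The knobs for that are the primarycache and secondarycache properties,
both settable per dataset on the fly (dataset names again hypothetical):

    # indexes: cache userdata+metadata in ARC and let it spill to L2ARC
    zfs set primarycache=all dbpool/indexes
    zfs set secondarycache=all dbpool/indexes
    # tablespaces: start with metadata-only caching, watch the IO stats, adjust
    zfs set primarycache=metadata dbpool/tablespaces
    zfs set secondarycache=metadata dbpool/tablespaces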

HTH,
//Jim Klimov



