Re: [zfs-discuss] 6Tb Database with ZFS

2012-12-02 Thread Bob Friesenhahn

On Sat, 1 Dec 2012, Fung Zheng wrote:


Hello,

Thanks for your reply. I forgot to mention that the document "Configuring ZFS
for an Oracle Database" was followed, including the primarycache, logbias and
recordsize properties. All of the best practices were applied, and my only
remaining doubt is the zfs_arc_max parameter: I want to know whether 24 GB is
good enough for a 6 TB database. Has anyone implemented something similar, and
what value did you use for zfs_arc_max?


As I recall, zfs_arc_max can be tuned while the system is running, so 
you can easily adjust it while your database is running and observe the 
behavior without rebooting.  It is possible that my recollection is 
wrong, though.  If it is correct, then it is not so important to know 
what is "good enough" before putting your database into service.
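
For reference, a minimal sketch of how that experiment might look on
Solaris 10 (the 24 GB value is only an example, and the live-tuning step
should be verified against your release first):

    # Watch the current ARC size against its ceiling while the DB runs:
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max

    # Persistent limit, applied at boot: add to /etc/system
    #   set zfs:zfs_arc_max = 0x600000000    (24 GB in bytes)

    # For a live change, the usual approach is to inspect the ARC with
    # "echo ::arc | mdb -k" and patch the maximum with mdb -kw; the exact
    # variable names differ between Solaris updates, so check first.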


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] 6Tb Database with ZFS

2012-12-01 Thread Fung Zheng
Hello,

Thanks for your reply. I forgot to mention that the document "Configuring ZFS
for an Oracle Database" was followed, including the primarycache, logbias and
recordsize properties. All of the best practices were applied, and my only
remaining doubt is the zfs_arc_max parameter: I want to know whether 24 GB is
good enough for a 6 TB database. Has anyone implemented something similar, and
what value did you use for zfs_arc_max?

OS = Solaris 10
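
For what it's worth, the values that guide recommends are easy to
double-check on the live datasets (the dataset names below are just
placeholders for our layout):

    # confirm the tuning recommended by the Oracle-on-ZFS guide
    zfs get primarycache,logbias,recordsize tank/oradata tank/oralog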

Regards


On Sat, Dec 1, 2012 at 5:28 PM, Jim Klimov  wrote:

> On 2012-12-01 15:05, Fung Zheng wrote:
>
>> Hello,
>>
>> I'm about to migrate a 6 TB database from Veritas Volume Manager to ZFS.
>> I want to set the zfs_arc_max parameter so that ZFS can't use all of my
>> system's memory, but I don't know how high to set it. Do you think 24 GB
>> will be enough for a 6 TB database? Obviously the more the better, but I
>> can't set aside too much memory. Has anyone successfully implemented
>> something similar?
>>
>
> Not claiming to be an expert fully ready to (mis)lead you (and I
> haven't done similar quests for databases), I might suggest that
> you set the ZFS dataset option "primarycache=metadata" on your
> dataset which holds the database. (PS: what OS version are you on?)
>
> The general consensus is that serious apps like databases are better
> than generic OS/FS caches at caching what the DBMS deems fit (and the
> data blocks might otherwise get cached twice - in the ARC and in the
> app cache); however, having ZFS *metadata* cached should speed up your
> HDD I/O - the server can keep much of the needed block map in RAM and
> not have to start by fetching it from disk every time.
>
> Also make sure to set the "recordsize" attribute appropriately for
> your DB software - to match the DB block size. Usually this is 4, 8
> or 16 KB (the ZFS default for filesystem datasets is 128 KB). You
> might also want to put non-tablespace files (logs, indexes, etc.)
> into separate datasets with their own appropriate record sizes -
> this would let you play with different caching and compression
> settings where applicable (you might save some IOPS by reading and
> writing less data to the disks, at a small cost in CPU, by using
> LZJB).
>
> Also, such systems tend to benefit from an SSD L2ARC read cache and
> an SSD SLOG (ZIL) write log. These are different pieces of equipment
> with distinct characteristics (the SLOG is mirrored, small and
> write-mostly, and should endure write wear and survive sudden
> power-offs; the L2ARC is big, fast for small random reads, and need
> only be moderately reliable).
>
> If you do use a big L2ARC, you might indeed want the ZFS caches for
> frequently accessed datasets (e.g. indexes) to hold both userdata and
> metadata (as is the default), while the randomly accessed tablespaces
> might or might not be good candidates for such caching - you can,
> however, test this setting change on the fly. I believe you must allow
> userdata caching in RAM for a dataset if you want its data to be able
> to spill over onto the L2ARC.
>
> HTH,
> //Jim Klimov
>
>
>
>
>


Re: [zfs-discuss] 6Tb Database with ZFS

2012-12-01 Thread Jim Klimov

On 2012-12-01 15:05, Fung Zheng wrote:

Hello,

I'm about to migrate a 6 TB database from Veritas Volume Manager to ZFS.
I want to set the zfs_arc_max parameter so that ZFS can't use all of my
system's memory, but I don't know how high to set it. Do you think 24 GB
will be enough for a 6 TB database? Obviously the more the better, but I
can't set aside too much memory. Has anyone successfully implemented
something similar?


Not claiming to be an expert fully ready to (mis)lead you (and I
haven't done similar quests for databases), I might suggest that
you set the ZFS dataset option "primarycache=metadata" on your
dataset which holds the database. (PS: what OS version are you on?)

The general consensus is that serious apps like databases are better
than generic OS/FS caches at caching what the DBMS deems fit (and the
data blocks might otherwise get cached twice - in the ARC and in the
app cache); however, having ZFS *metadata* cached should speed up your
HDD I/O - the server can keep much of the needed block map in RAM and
not have to start by fetching it from disk every time.
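
A minimal sketch of that suggestion, with a hypothetical tank/oradata
dataset standing in for the one holding the tablespaces (the change can
be reverted on the fly):

    # cache only ZFS metadata in the ARC for the tablespace dataset,
    # leaving data caching to the database's own buffer cache (SGA)
    zfs set primarycache=metadata tank/oradata

    # revert to the default if it hurts more than it helps
    zfs set primarycache=all tank/oradata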

Also make sure to set the "recordsize" attribute appropriately for
your DB software - to match the DB block size. Usually this is 4, 8
or 16 KB (the ZFS default for filesystem datasets is 128 KB). You
might also want to put non-tablespace files (logs, indexes, etc.)
into separate datasets with their own appropriate record sizes -
this would let you play with different caching and compression
settings where applicable (you might save some IOPS by reading and
writing less data to the disks, at a small cost in CPU, by using
LZJB).
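
For example, a possible layout (the dataset names, the 8 KB block size
and the logbias choices are only illustrative assumptions):

    # data files: record size matched to the DB block size
    zfs create -o recordsize=8k -o logbias=throughput tank/oradata

    # redo logs: default record size, latency-biased ZIL usage
    zfs create -o logbias=latency tank/oralog

    # archive logs / exports: lightweight compression trades a little
    # CPU for fewer bytes read and written
    zfs create -o compression=lzjb tank/oraarch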

Also, such systems tend to benefit from an SSD L2ARC read cache and
an SSD SLOG (ZIL) write log. These are different pieces of equipment
with distinct characteristics (the SLOG is mirrored, small and
write-mostly, and should endure write wear and survive sudden
power-offs; the L2ARC is big, fast for small random reads, and need
only be moderately reliable).
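
If SSDs are available, attaching them is a one-liner each (the pool and
device names below are placeholders):

    # mirrored SLOG to absorb synchronous (redo-log) writes
    zpool add tank log mirror c0t5d0 c0t6d0

    # a single large SSD as the L2ARC read cache
    zpool add tank cache c0t7d0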

If you do use a big L2ARC, you might indeed want the ZFS caches for
frequently accessed datasets (e.g. indexes) to hold both userdata and
metadata (as is the default), while the randomly accessed tablespaces
might or might not be good candidates for such caching - you can,
however, test this setting change on the fly. I believe you must allow
userdata caching in RAM for a dataset if you want its data to be able
to spill over onto the L2ARC.
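
Per-dataset control over the L2ARC mirrors the primarycache property
via "secondarycache", so the combinations can be tested live; a
possible starting point, again with hypothetical dataset names:

    # index dataset: cache userdata and metadata in RAM and on L2ARC
    zfs set primarycache=all tank/oraindex
    zfs set secondarycache=all tank/oraindex

    # tablespace dataset: metadata only; blocks that never enter the
    # ARC cannot later be evicted into the L2ARC
    zfs set primarycache=metadata tank/oradata
    zfs set secondarycache=metadata tank/oradata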

HTH,
//Jim Klimov






[zfs-discuss] 6Tb Database with ZFS

2012-12-01 Thread Fung Zheng
Hello,

I'm about to migrate a 6 TB database from Veritas Volume Manager to ZFS.
I want to set the zfs_arc_max parameter so that ZFS can't use all of my
system's memory, but I don't know how high to set it. Do you think 24 GB
will be enough for a 6 TB database? Obviously the more the better, but I
can't set aside too much memory. Has anyone successfully implemented
something similar?

We ran some tests, and the memory usage was as follows:

(with zfs_arc_max at 30 GB)
Kernel     = 18 GB
ZFS data   = 55 GB
Anon       = 90 GB
Page cache = 10 GB
Free       = 25 GB

My system has 192 GB of RAM and the database SGA is 85 GB.
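
For reference, a breakdown like the one above is typically gathered with
mdb's ::memstat dcmd, where the "ZFS File Data" line corresponds roughly
to the data cached by the ARC:

    # system-wide memory usage by consumer
    echo "::memstat" | mdb -k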

I would appreciate if someone could tell me about their experience.

Best Regards