Hello Rainer,
Friday, March 16, 2007, 11:11:16 PM, you wrote:
RH Thanks for the links, but this is not really the kind of data I'm
RH looking for. These focus more on I/O. I need information on the
RH memory caching, and so on. Specifically, I need data that shows how
RH starting up a 10GB SGA database on a 16GB machine will not be able
RH to flush the ZFS cache as quickly as the DBAs are assuming, and
RH how to point them to more realistic tests/metrics, and get them
RH away from top's simplistic viewpoint of memory under S10.
ZFS should give memory used for its cache back to the system when
applications demand it. In principle it should, but in practice it
sometimes won't.
However, with databases there's a simple workaround - since you know at
least how much RAM all the databases will consume, you can limit ZFS's
ARC cache to the remaining free memory (and possibly reduce it further,
by a factor of 2-3x). For details on how to do it, see the
'C'mon ARC, stay small...' thread here.
So if you have 16GB RAM in a system and want 10GB for the SGA + another
2GB for Oracle + 1GB for other kernel resources, you are left with 3GB.
So I would limit the ARC's c_max to 3GB, or even to 1GB.
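As a sketch of what that limit could look like: on builds where the
zfs_arc_max tunable is available, a line in /etc/system caps the ARC at
boot; on older builds, people have patched arc.c_max live with mdb. The
exact values below (3GB / 1GB) just follow the arithmetic above - adjust
them for your own machine, and double-check that your Solaris build
actually supports the tunable before relying on it.

```
* /etc/system - cap the ZFS ARC at 3GB (0xC0000000 bytes).
* Takes effect at next boot; needs a build that supports zfs_arc_max.
set zfs:zfs_arc_max = 0xC0000000

# Alternatively, inspect the current c_max on a running system:
#   echo "arc::print -a c_max" | mdb -k
# and then write a new value at the printed address with mdb -kw
# (risky on a live box - test on a non-production system first).
```

Either way, verify afterwards (e.g. via the same mdb print) that c_max
actually ended up where you expect, rather than trusting top's numbers.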
--
Best regards,
Robert    mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss