Jason J. W. Williams writes:

> Hi Guys,
>
> Rather than starting a new thread I thought I'd continue this one.
> I've been running Build 54 on a Thumper since mid-January and wanted
> to ask a question about the zfs_arc_max setting. We set it to
> "0x100000000 # 4GB", but it's creeping over that until our kernel
> memory usage is nearly 7GB (::memstat output inserted below).
>
> This is a database server, so I was curious whether the DNLC would
> have this effect over time, as it does quite quickly when dealing
> with small files. Would it be worth upgrading to Build 59?
>
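[Editor's note: for reference, the tunable Jason mentions goes in /etc/system; a sketch using the 4GB value from his post. Note the cap applies only to the ARC's target size -- the "Kernel" figure in ::memstat also includes other kmem allocations, which is one reason total kernel memory can sit above the cap.]

```
* /etc/system -- cap the ZFS ARC at 4GB (value from the post above).
* Takes effect at the next boot; on older builds this tunable may not
* be honored, as discussed later in this thread.
set zfs:zfs_arc_max = 0x100000000
```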
Another possibility is that there is a portion of memory sitting in the
kmem caches, ready to be reclaimed and returned to the OS free space.
Such reclaims currently occur only under memory shortage. I think we
should do it under some more conditions... This might fall under:

    CR Number: 6416757
    Synopsis:  zfs should return memory eventually
    http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6416757

If you induce some temporary memory pressure, it would be nice to see
whether your kernel shrinks down to ~4GB.

-r

> Thank you in advance!
>
> Best Regards,
> Jason
>
> Page Summary                Pages                MB  %Tot
> ------------     ----------------  ----------------  ----
> Kernel                    1750044              6836   42%
> Anon                      1211203              4731   29%
> Exec and libs                7648                29    0%
> Page cache                 220434               861    5%
> Free (cachelist)           318625              1244    8%
> Free (freelist)            659607              2576   16%
>
> Total                     4167561             16279
> Physical                  4078747             15932
>
> On 3/23/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
> >
> > With latest Nevada, setting zfs_arc_max in /etc/system is
> > sufficient. Playing with mdb on a live system is more
> > tricky and is what caused the problem here.
> >
> > -r
> >
> > [EMAIL PROTECTED] writes:
> > > Jim Mauro wrote:
> > > >
> > > > All righty... I set c_max to 512MB, c to 512MB, and p to 256MB...
> > > >
> > > > > arc::print -tad
> > > > {
> > > >     ...
> > > >     ffffffffc02e29e8 uint64_t size  = 0t299008
> > > >     ffffffffc02e29f0 uint64_t p     = 0t16588228608
> > > >     ffffffffc02e29f8 uint64_t c     = 0t33176457216
> > > >     ffffffffc02e2a00 uint64_t c_min = 0t1070318720
> > > >     ffffffffc02e2a08 uint64_t c_max = 0t33176457216
> > > >     ...
> > > > }
> > > > > ffffffffc02e2a08 /Z 0x20000000
> > > > arc+0x48: 0x7b9789000 = 0x20000000
> > > > > ffffffffc02e29f8 /Z 0x20000000
> > > > arc+0x38: 0x7b9789000 = 0x20000000
> > > > > ffffffffc02e29f0 /Z 0x10000000
> > > > arc+0x30: 0x3dcbc4800 = 0x10000000
> > > > > arc::print -tad
> > > > {
> > > >     ...
> > > >     ffffffffc02e29e8 uint64_t size  = 0t299008
> > > >     ffffffffc02e29f0 uint64_t p     = 0t268435456   <------ p is 256MB
> > > >     ffffffffc02e29f8 uint64_t c     = 0t536870912   <------ c is 512MB
> > > >     ffffffffc02e2a00 uint64_t c_min = 0t1070318720
> > > >     ffffffffc02e2a08 uint64_t c_max = 0t536870912   <------ c_max is 512MB
> > > >     ...
> > > > }
> > > >
> > > > After a few runs of the workload ...
> > > >
> > > > > arc::print -d size
> > > > size = 0t536788992
> > > >
> > > > Ah - looks like we're out of the woods. The ARC remains clamped at 512MB.
> > >
> > > Is there a way to set these fields using /etc/system?
> > > Or does this require a new or modified init script to
> > > run and do the above with each boot?
> > >
> > > Darren

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
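[Editor's note: Roch's suggestion above of inducing some temporary memory pressure can be sketched as follows. On Solaris, /tmp is swap-backed tmpfs, so a large file there consumes physical memory; SIZE_MB is a placeholder -- on the 16GB Thumper from the original post you would want several GB to actually trigger reclaim.]

```shell
# Sketch: induce temporary memory pressure via tmpfs, then release it.
# SIZE_MB is a placeholder value; scale it toward the machine's RAM.
SIZE_MB=16
dd if=/dev/zero of=/tmp/pressure bs=1048576 count=$SIZE_MB 2>/dev/null
ls -l /tmp/pressure        # the file (and the memory pressure) now exists
rm /tmp/pressure           # release the memory again
```

Afterwards, re-running `::memstat` (e.g. `echo "::memstat" | mdb -k`) would show whether the Kernel figure shrinks back toward ~4GB.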