FYI - After a few more runs, ARC size hit 10GB, which is now 10X c_max:


> arc::print -tad
{
. . .
   ffffffffc02e29e8 uint64_t size = 0t10527883264
   ffffffffc02e29f0 uint64_t p = 0t16381819904
   ffffffffc02e29f8 uint64_t c = 0t1070318720
   ffffffffc02e2a00 uint64_t c_min = 0t1070318720
   ffffffffc02e2a08 uint64_t c_max = 0t1070318720
. . .

Perhaps c_max does not do what I think it does?
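A quick way to watch this between runs (assuming the same arc symbol and
live mdb -k access used for the dump above) would be something like:

   # echo "arc::print -d size c c_max" | mdb -k

which prints just those three counters in decimal, so the size-versus-c_max
drift is easy to track after each iteration.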

Thanks,
/jim


Jim Mauro wrote:
Running an mmap-intensive workload on ZFS on an X4500, Solaris 10 11/06
(update 3). All file IO is mmap(file), read the mapped segment, unmap, close.

I tweaked the ARC size down to 1GB via mdb. I used that value because
c_min was also 1GB, and I was not sure whether c_max could be set smaller
than c_min... Anyway, I set c_max to 1GB.
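(A rough sketch of what that kind of live tweak looks like with mdb -kw;
this is not necessarily my exact session, and the address below is simply
the one ::print -a reports for c_max on this kernel, not a constant:

   # mdb -kw
   > arc::print -a c_max
   ffffffffc02e2a08 c_max = ...
   > ffffffffc02e2a08/Z 0t1070318720

The /Z write stores the full 64-bit value, and 0t1070318720 is the roughly
1GB figure c_min was already sitting at.)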

After a workload run:
> arc::print -tad
{
. . .
  ffffffffc02e29e8 uint64_t size = 0t3099832832
  ffffffffc02e29f0 uint64_t p = 0t16540761088
  ffffffffc02e29f8 uint64_t c = 0t1070318720
  ffffffffc02e2a00 uint64_t c_min = 0t1070318720
  ffffffffc02e2a08 uint64_t c_max = 0t1070318720
. . .

"size" is at 3GB, with c_max at 1GB.

What gives? I'm looking at the code now, but I was under the impression
that c_max would limit ARC growth. Granted, it's not a factor of 10, and
it's certainly much better than the out-of-the-box growth to 24GB
(this is a 32GB x4500), so ARC growth is clearly being limited, but it
still grew to 3X c_max.
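(Sanity-checking that ratio against the raw numbers from the ::print
output above:

   $ echo 'scale=2; 3099832832 / 1070318720' | bc
   2.89

so the "3X" is really about 2.9x c_max.)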

Thanks,
/jim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss