On Thu, Jun 02, 2011 at 09:59:39PM -0400, Edward Ned Harvey wrote:
> > From: Daniel Carosone [mailto:d...@geek.com.au]
> > Sent: Thursday, June 02, 2011 9:03 PM
> > 
> > Separately, with only 4G of RAM, i think an L2ARC is likely about a
> > wash, since L2ARC entries also consume RAM.
> 
> True the L2ARC requires some ARC consumption to support it, but for typical
> user data, it's a huge multiplier... The ARC consumption is static per entry
> (say, 176 bytes, depending on your platform) but a typical payload for user
> data would be whatever your average blocksize is ... 40K, 127K, or something
> similar probably.
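To make the multiplier concrete, here is a back-of-envelope sketch using the 176-byte per-entry figure quoted above (the actual header size varies by platform and ZFS version, so treat the constant as an assumption):

```python
# RAM consumed by ARC headers for L2ARC entries, per the quoted
# 176-bytes-per-entry figure (platform-dependent assumption).
HEADER_BYTES = 176

def arc_overhead_bytes(l2arc_bytes, avg_block_bytes):
    """ARC header RAM needed to index an L2ARC of the given size."""
    entries = l2arc_bytes // avg_block_bytes
    return entries * HEADER_BYTES

GiB = 1024 ** 3
for bs in (8 * 1024, 40 * 1024, 128 * 1024):
    ov = arc_overhead_bytes(100 * GiB, bs)
    print(f"100 GiB L2ARC @ {bs // 1024}K blocks -> "
          f"{ov / GiB:.2f} GiB of ARC headers")
```

With large (128K) blocks the headers for a 100 GiB L2ARC cost ~0.13 GiB of ARC, but with small (8K) blocks the same device costs ~2.15 GiB of headers, which is a big slice of a 4 GB machine; the multiplier collapses exactly when the working set is small-block.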

Yes, but that's not the whole story.  For the L2ARC to be an
effective performance boost, it itself needs to be large enough to
absorb enough of the reads that would otherwise hit the disks.
Further, the penalty of those disk hits is paid more in IOPS than in
bytes.  Both factors tend to reduce or nullify the (space) scaling
factor, beyond getting the very largest blocks out of primary cache.

Adding read IOPS with a third submirror, at no cost, is the way to go
(or at least the way to start) in this case.
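For reference, attaching a third leg to an existing two-way mirror is a one-liner; the pool and device names below are hypothetical placeholders, so substitute your own (and note `zpool attach` takes an existing member device as its second argument):

```shell
# Attach a third submirror to an existing mirror vdev.
# "tank" and the cXtYdZ device names are placeholders.
zpool attach tank c0t1d0 c0t3d0   # c0t1d0 is already in the mirror
zpool status tank                 # watch the new leg resilver
```

Once the resilver completes, reads are spread across three devices while write cost stays roughly the same.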

--
Dan.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
