> No, they're not; here's l2arc_buf_hdr_t, a per-buffer structure
> held for buffers which were moved to L2ARC:
> 
> typedef struct l2arc_buf_hdr {
>         l2arc_dev_t     *b_dev;         /* L2ARC device the buffer lives on */
>         uint64_t        b_daddr;        /* disk address on that device */
> } l2arc_buf_hdr_t;
> 
> That's about 16 bytes of overhead per block, or 3.125% if the
> block's data is 512 bytes long.
> 
> The main overhead comes from an arc_buf_hdr_t, which is pretty fat,
> around 180 bytes to a first approximation, so in all around 200
> bytes per ARC + L2ARC entry. At 512 bytes per block, this is painfully
> inefficient (around 39% overhead); however, at a 4k average block size
> this drops to ~5%, and at a 64k average block size (which is entirely
> possible on typical untuned storage pools) this drops down to ~0.3%
> overhead.
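
A quick sanity check of those percentages - a minimal C sketch, assuming
the ~200-byte combined ARC + L2ARC header figure quoted above (a rough
number from this thread, not a sizeof() taken from any actual ZFS build):

    #include <stdio.h>

    int
    main(void)
    {
        /* ~180 bytes arc_buf_hdr_t + ~16 bytes l2arc_buf_hdr_t */
        const double hdr = 200.0;
        const int blksz[] = { 512, 4096, 65536 };

        for (int i = 0; i < 3; i++) {
            /* header bytes as a fraction of the cached block size */
            printf("%6d-byte blocks: %.2f%% overhead\n",
                blksz[i], hdr / blksz[i] * 100.0);
        }
        return (0);
    }

This prints 39.06%, 4.88% and 0.31% - matching the ~39%, ~5% and
~0.3% figures above.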

So... unless I miscalculated before my morning coffee, for a 512-byte block
quickly fetchable from SSD in both the L2ARC and METAXEL cases, do we have
roughly these numbers (tallied side by side in the sketch after the list)?

1) When the block is in RAM, we consume 512+180 bytes (though some ZFS
slides said that for 1 byte stored we spend 1 byte - I took this to mean zero
overhead, though I couldn't imagine how... or 100% overhead, also quite
unimaginable =) )
 
2L) When the block is on an L2ARC SSD, we spend 180+16 bytes of RAM (though
discussions about DDT on L2ARC, at least, settled on 176 bytes of cache
meta-information per entry moved off to L2ARC, the DDT entry itself
being around 350 bytes, IIRC).
 
2M) When the block has expired from the ARC and is stored only on the pool,
including the SSD-based copy on a METAXEL, we spend zero RAM to
reference this block from the ARC - because we no longer remember it.
And when needed, we can access it just as fast (right?) as from an L2ARC
on the same media type.
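
To put the three cases side by side, here's a hypothetical tally in C,
using the same approximate header sizes as above (and taking the METAXEL
case on faith, since that's the proposal being debated, not existing code):

    #include <stdio.h>

    int
    main(void)
    {
        const int data = 512;   /* example block size */
        const int archdr = 180; /* arc_buf_hdr_t, approx. */
        const int l2hdr = 16;   /* l2arc_buf_hdr_t, approx. */

        /* 1) block cached in RAM: data plus its ARC header */
        printf("1)  in ARC:    %d bytes of RAM\n", data + archdr);
        /* 2L) data on L2ARC SSD: both headers stay resident in RAM */
        printf("2L) on L2ARC:  %d bytes of RAM\n", archdr + l2hdr);
        /* 2M) evicted, only on pool/METAXEL: no ARC state kept at all */
        printf("2M) evicted:   %d bytes of RAM\n", 0);
        return (0);
    }

That is, 692 bytes, 196 bytes and 0 bytes of RAM respectively for the
three scenarios above.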
 
Where am I wrong? We seem to have been disputing THIS point over
several emails, and I'm ready to accept that you've seen the code and
I'm the clueless one. So I want to learn, then ;)
 
//Jim
