On Feb 12, 2010, at 9:36 AM, Felix Buenemann wrote:
> Am 12.02.10 18:17, schrieb Richard Elling:
>> On Feb 12, 2010, at 8:20 AM, Felix Buenemann wrote:
>>
>>> Hi Mickaël,
>>>
>>> Am 12.02.10 13:49, schrieb Mickaël Maillot:
>>>> Intel X25-M are MLC, not SLC; they are very good for L2ARC.
>>>
>>> Yes, I'm only using those for L2ARC; I'm planning on getting two Mtron Pro
>>> 7500 16GB SLC SSDs for the ZIL.
>>>
>>>> and next, you need more RAM:
>>>> ZFS can't handle 4x 80 GB of L2ARC with only 4 GB of RAM, because ZFS
>>>> uses memory to allocate and manage the L2ARC.
>>>
>>> Is there a guideline in which relation L2ARC size should be to RAM?
>>
>> Approximately 200 bytes per record. I use the following example:
>> Suppose we use a Seagate LP 2 TByte disk for the L2ARC
>> + Disk has 3,907,029,168 512 byte sectors, guaranteed
>> + Workload uses 8 kByte fixed record size
>> RAM needed for arc_buf_hdr entries
>> + Need = ~(3,907,029,168 - 9,232) * 200 / 16 = ~48 GBytes
>>
>> Don't underestimate the RAM needed for large L2ARCs
>
> I'm not sure how your workload record size plays into the above formula (where
> does the - 9,232 come from?), but given I've got ~300GB of L2ARC, I'd need about
> 7.2GB RAM, so upgrading to 8GB would be enough to satisfy the L2ARC.
recordsize = 8 kBytes = 16 sectors @ 512 bytes/sector, which is where the 16
in the formula comes from.
9,232 is the number of sectors reserved for labels, around 4.7 MBytes.
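
To make that concrete, here's a minimal sketch of the calculation in Python.
The 200 bytes/header and 9,232 label sectors are the figures from this thread,
not values pulled from the ZFS source, and the function name is just
illustrative:

  # Estimate RAM needed for L2ARC headers, per the numbers in this thread.
  SECTOR = 512     # bytes per sector
  HEADER = 200     # approx. bytes of RAM per arc_buf_hdr entry
  LABELS = 9232    # sectors reserved for labels (~4.7 MBytes)

  def l2arc_header_ram(device_sectors, recordsize_bytes):
      """RAM (bytes) needed to track a completely full L2ARC device."""
      sectors_per_record = recordsize_bytes // SECTOR
      records = (device_sectors - LABELS) // sectors_per_record
      return records * HEADER

  # The 2 TByte Seagate LP example with an 8 kByte recordsize:
  print(l2arc_header_ram(3907029168, 8192) / 1e9)  # ~48.8, i.e. ~48 GBytes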
Mathing around a bit, for a 300 GB L2ARC:

size (GB)            300
size (sectors)       585,937,500
labels (sectors)     9,232
available sectors    585,928,268
bytes/L2ARC header   200

recordsize  recordsize  L2ARC capacity  Header size
(sectors)   (kBytes)    (records)       (MBytes)
      1        0.5       585,928,268     111,760
      2        1         292,964,134      55,880
      4        2         146,482,067      27,940
      8        4          73,241,033      13,970
     16        8          36,620,516       6,980
     32       16          18,310,258       3,490
     64       32           9,155,129       1,750
    128       64           4,577,564         870
    256      128           2,288,782         440
So, depending on the data, you need somewhere between 440 MBytes and 111 GBytes
to hold the L2ARC headers. For a rule of thumb, somewhere between 0.15% and 40%
of the total used size. Ok, that rule really isn't very useful...
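
The table is easy to reproduce with a short loop, under the same assumptions
as the sketch above (the printed sizes match the table to within rounding):

  # Reproduce the header-size table for a 300 GB L2ARC.
  SECTOR, HEADER, LABELS = 512, 200, 9232
  avail = 300 * 10**9 // SECTOR - LABELS   # 585,928,268 available sectors

  for rs in (1, 2, 4, 8, 16, 32, 64, 128, 256):
      records = avail // rs                 # records the device can hold
      mbytes = records * HEADER / 2**20     # RAM for their headers
      print(f"{rs:>4} sectors  {rs/2:>6} kBytes  "
            f"{records:>11} records  {mbytes:>9,.0f} MBytes")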
The next question is, what does my data look like? The answer is that there
will most likely be a distribution of various sized records. But the
distribution isn't as interesting for this calculation as the actual number
of records. I'm not sure there is an easy way to get that information, but
I'll look around...
-- richard