On 08/01/2012 04:14 PM, Jim Klimov wrote:
> 2012-08-01 17:55, Sašo Kiselkov wrote:
>> On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
>>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>>> Availability of the DDT is IMHO crucial to a deduped pool, so
>>>> I won't be surprised to see it forced to triple copies.
>>> IMHO, the more important thing for dedup moving forward is to create
>>> an option to dedicate a fast device (SSD or whatever) to the DDT.  So
>>> all those little random IO operations never hit the rusty side of the
>>> pool.
>> That's something you can already do with an L2ARC. In the future I plan
>> to investigate a set of more fine-grained ARC and L2ARC policy tuning
>> parameters that would give admins more control over how the ARC/L2ARC
>> cache is used.
> Unfortunately, as of current implementations, L2ARC starts up cold.

Yes, that's by design: the L2ARC is simply a secondary backing store for
ARC blocks. If the in-RAM copy of a block has been evicted, chances are
you'll still be able to find the block on the L2ARC devices. But you
can't scan an L2ARC device and discover any usable structures, because
there aren't any. It's literally just a big pile of disk blocks, and
their associated ARC headers live only in RAM.
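To illustrate the point above with a toy sketch (this is NOT the actual ZFS implementation; the class and method names here are invented for illustration): the device holds raw blocks in write order with no on-disk index, and the only way to find a block is the header table in RAM, so a reboot leaves the device full but unreachable.

```python
# Toy model of why L2ARC starts cold: the device stores raw blocks,
# but the only index (the ARC headers) lives in RAM.
# All names here are invented; this mirrors the concept, not the code.

class ToyL2Arc:
    def __init__(self):
        self.device = []    # raw blocks, append-only, no on-disk structure
        self.headers = {}   # RAM only: block id -> offset on the device

    def write(self, block_id, data):
        self.headers[block_id] = len(self.device)
        self.device.append(data)

    def read(self, block_id):
        off = self.headers.get(block_id)
        # A miss means the block must be re-read from the main pool.
        return self.device[off] if off is not None else None

    def reboot(self):
        # RAM is lost; the device contents survive but are unreachable.
        self.headers = {}

cache = ToyL2Arc()
cache.write("ddt-block-1", b"dedup table entry")
assert cache.read("ddt-block-1") == b"dedup table entry"
cache.reboot()
# The data is still physically on the device, but with no header
# pointing at it, the cache effectively starts cold.
assert cache.read("ddt-block-1") is None
assert len(cache.device) == 1
```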

> chances are that
> some blocks of userdata might be more popular than a DDT block and
> would push it out of L2ARC as well...

Which is why I plan to investigate a tunable policy module that would
let the administrator work around this problem. E.g. the administrator
dedicates 50G of ARC space to metadata (which includes the DDT), or to
the DDT specifically. My idea is still a bit fuzzy, but it revolves
primarily around allocating and enforcing min and max quotas for a
given ARC entry type. I'll start a separate discussion thread for this
later on, once I have everything organized in my mind about where I
plan to take this.
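A rough sketch of what such per-type quotas might look like (purely hypothetical; none of this reflects actual ARC internals, and every name below is invented): each entry type gets a guaranteed minimum and a hard maximum share of the cache, and eviction walks entries in LRU order but refuses to evict an entry whose type would drop below its minimum. Under such a policy, popular userdata could no longer push the DDT out entirely.

```python
from collections import OrderedDict

# Hypothetical per-type cache quota policy, sketched as a simple LRU.
# quotas maps an entry type, e.g. "metadata" or "userdata", to
# (min_bytes, max_bytes). Eviction skips entries whose type is already
# at its guaranteed minimum.

class QuotaCache:
    def __init__(self, capacity, quotas):
        self.capacity = capacity
        self.quotas = quotas            # type -> (min_bytes, max_bytes)
        self.entries = OrderedDict()    # key -> (type, size), oldest first
        self.used = {t: 0 for t in quotas}

    def insert(self, key, etype, size):
        _, max_b = self.quotas[etype]
        if self.used[etype] + size > max_b:
            return False                # type has hit its hard cap
        while sum(self.used.values()) + size > self.capacity:
            if not self._evict_one():
                return False            # everything left is protected
        self.entries[key] = (etype, size)
        self.used[etype] += size
        return True

    def _evict_one(self):
        for key, (etype, size) in self.entries.items():  # LRU order
            min_b, _ = self.quotas[etype]
            if self.used[etype] - size >= min_b:         # stays above min
                del self.entries[key]
                self.used[etype] -= size
                return True
        return False

# Metadata is guaranteed 40 of 100 bytes; userdata is capped at 60.
cache = QuotaCache(100, {"metadata": (40, 100), "userdata": (0, 60)})
assert cache.insert("ddt1", "metadata", 40)
for i in range(3):
    assert cache.insert("u%d" % i, "userdata", 20)
# Cache is now full. Inserting more metadata evicts the oldest
# *userdata* entry, because evicting ddt1 would drop metadata below
# its 40-byte minimum.
assert cache.insert("meta2", "metadata", 20)
assert "ddt1" in cache.entries
assert "u0" not in cache.entries
```

The key design choice is that the minimum quota acts as an eviction shield rather than a reservation: space is only held for a type once it is actually in use, but once used, it cannot be reclaimed by other types.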
