On 2019-07-05 04:21, [email protected] wrote:
> On Friday, July 05, 2019, at 12:34 AM, Allan Jude wrote:
>> And now see which values are growing. This breaks down each UMA cache
>> and how much slack it contains.
> I had one, but lost it with machine reboots (my fault).
> 
> What I have now is from a quick import, which didn't eat all of the memory:
> https://gist.github.com/bra-fsn/66a5749d30b30eb86f3f83f5ca3f7704
> 

One thing I notice is that range_seg_cache grows to 117,954,859 allocations
at only 64 bytes each, but that adds up to about 7.5GB of RAM right there.
Is the 46 printed before the summary the number of zpools imported?
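As a quick way to see where the memory is going, something like this
(untested, and it assumes vmstat -z's usual "name: SIZE, LIMIT, USED, FREE,
..." layout) should rank the UMA zones by how much RAM they are pinning,
counting both live items and cached-but-free ones:

#!/usr/bin/env python3
# Untested sketch: rank UMA zones by resident memory, from `vmstat -z`.
# Assumes the "name: SIZE, LIMIT, USED, FREE, REQ, FAIL, SLEEP" layout.
import subprocess

def zone_bytes():
    out = subprocess.check_output(["vmstat", "-z"], text=True)
    usage = {}
    for line in out.splitlines():
        name, sep, rest = line.partition(":")
        if not sep:
            continue  # header or blank line
        fields = [f.strip() for f in rest.split(",")]
        if len(fields) < 4:
            continue
        try:
            size, used, free = int(fields[0]), int(fields[2]), int(fields[3])
        except ValueError:
            continue
        # Free items are still held by the zone until it is drained.
        usage[name.strip()] = (used + free) * size
    return usage

for name, nbytes in sorted(zone_bytes().items(), key=lambda kv: -kv[1])[:10]:
    print(f"{name:32s} {nbytes / 2**30:7.2f} GiB")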

abd_chunk is up from 86,000 to 434,000 allocations (4k each, so roughly
1.7GB), but that is expected, as those chunks are the contents of your ARC.
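If you want to sanity-check that, the abd_chunk footprint should track the
reported ARC size fairly closely. A rough sketch (untested; the arcstats
sysctl name is the FreeBSD one):

#!/usr/bin/env python3
# Untested sketch: compare abd_chunk's footprint against the reported ARC size.
import subprocess

arc_size = int(subprocess.check_output(
    ["sysctl", "-n", "kstat.zfs.misc.arcstats.size"], text=True))
abd_bytes = 434_000 * 4096  # allocation count x chunk size from vmstat -z
print(f"ARC size:  {arc_size / 2**30:.2f} GiB")
print(f"abd_chunk: {abd_bytes / 2**30:.2f} GiB")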

How much memory was actually in use after importing the 46 pools?

The first place to look would seem to be which range_tree_t's get allocated
for each pool.
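One cheap way to measure that is to diff the range_seg_cache counter around
a single import. An untested sketch (the pool name is a placeholder):

#!/usr/bin/env python3
# Untested sketch: count how many range_seg_cache entries one `zpool import`
# adds, by diffing the USED column of `vmstat -z` around the import.
import subprocess

def zone_used(zone):
    out = subprocess.check_output(["vmstat", "-z"], text=True)
    for line in out.splitlines():
        name, sep, rest = line.partition(":")
        if sep and name.strip() == zone:
            return int(rest.split(",")[2])  # USED column
    return 0

before = zone_used("range_seg_cache")
subprocess.check_call(["zpool", "import", "tank"])  # "tank" is a placeholder
after = zone_used("range_seg_cache")
print(f"range_seg_cache grew by {after - before} entries "
      f"(~{(after - before) * 64 / 2**20:.1f} MiB at 64 bytes each)")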

-- 
Allan Jude

