On Fri, Jul 5, 2019 at 7:37 AM Allan Jude <[email protected]> wrote:
> On 2019-07-05 04:21, [email protected] wrote:
> > On Friday, July 05, 2019, at 12:34 AM, Allan Jude wrote:
> >> And now which values are growing. This breaks down each UMA cache and
> >> how much slack it contains.
> > I had one, but lost it with machine reboots (my fault).
> >
> > What I have now is from a quick import, which didn't eat all of the memory:
> > https://gist.github.com/bra-fsn/66a5749d30b30eb86f3f83f5ca3f7704
>
> One thing I notice is that range_seg_cache grows to 117,954,859 allocations, at
> only 64 bytes each, but that is 7.5 GB of RAM there. Is that 46 printed
> before the summary the number of zpools imported?

range_seg_cache is primarily for loaded metaslabs' ms_allocatable. Given
that Nagy is using recordsize=128K or 1M, writing large files, and not
deleting any files, fragmentation should be low, and the amount of memory
used by this should be low. So one of those understandings may be off.
These 117 million segments work out to one segment per ~1.6 MB of storage
(assuming this is with all 44x 4TB pools imported).

What is the %frag (from "zpool list -v")? How many metaslabs are there in
total, and how many are loaded? (I'm not sure how to do this on FreeBSD;
on illumos you can run "::walk spa|::walk metaslab|::print metaslab_t
ms_loaded !sort|uniq -c" in "mdb -k".)

When you imported the storage pools, I'm assuming that you had exported
(or rebooted) cleanly, so there aren't any active ZILs (intent logs) that
need to be replayed.
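The arithmetic behind those numbers can be sketched as follows. The
allocation count and 64-byte segment size are from the figures above; the
44x 4TB total raw capacity is an assumption from this thread:

```python
# Sanity check of the range_seg_cache figures discussed in this thread.
# Assumptions (from the messages above): 117,954,859 live allocations of
# 64 bytes each, across 44 pools of 4 TiB apiece.

allocs = 117_954_859           # range_seg_cache allocations
seg_size = 64                  # bytes per range segment
total_bytes = allocs * seg_size
print(f"range_seg_cache footprint: {total_bytes / 10**9:.1f} GB")  # ~7.5 GB

storage = 44 * 4 * 2**40       # 44 x 4 TiB pools, in bytes (assumed)
per_seg = storage / allocs
print(f"storage per segment: {per_seg / 10**6:.1f} MB")            # ~1.6 MB
```

On a healthy, unfragmented pool writing large records one would expect
far fewer, larger segments per metaslab, which is why this ratio is
suspicious.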
--matt

> abd_chunk is up from 86,000 to 434,000 allocations (4k each), but that
> is expected, as that is the contents of your ARC.
>
> How much memory was actually in use after importing the 46 pools?
>
> The first place to look would seem to be at what range_tree_t's get
> allocated for each pool.
>
> --
> Allan Jude

------------------------------------------
openzfs: openzfs-developer
Permalink: https://openzfs.topicbox.com/groups/developer/T10533b84f9e1cfc5-M2aa511b991308571ba5d1a4d
Delivery options: https://openzfs.topicbox.com/groups/developer/subscription
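The abd_chunk growth quoted above can be checked the same way. A minimal
sketch, assuming "4k" means 4 KiB chunks and using the 86,000 -> 434,000
allocation counts from Allan's message:

```python
# Rough check of the abd_chunk growth: each abd_chunk holds one 4 KiB
# buffer of ARC data, so the growth in allocations approximates the
# amount of data newly cached in the ARC.
# Figures (86,000 -> 434,000 allocations) are from the message above.

before, after = 86_000, 434_000
chunk = 4 * 1024               # 4 KiB per abd_chunk (assumed)
growth_bytes = (after - before) * chunk
print(f"ARC data growth: {growth_bytes / 2**30:.2f} GiB")  # ~1.33 GiB
```

That is small next to the ~7.5 GB in range_seg_cache, which supports
looking at the range trees first.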
