On Tuesday, July 09, 2019, at 8:10 PM, Matthew Ahrens wrote:

> This behavior is not really specific to having a lot of pools. If you had
> one big pool with all the disks in it, ZFS would still try to allocate from
> each disk, causing most of that disk's metaslabs to be loaded (ZFS selects
> first disk, then metaslab, then offset within that metaslab).

But wouldn't it be 1/46th of the current situation? Or does the memory requirement really scale with the amount of data stored (the number of blocks) in them, so it doesn't matter whether I have one pool or many? I have to read up on how these work in depth.
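To check my own understanding, here is a toy model of that disk -> metaslab -> offset order. This is not real ZFS code: the 46 disks, the 200 metaslabs per disk, and toy_alloc() are all made-up, and I'm only trying to illustrate that because allocation rotates across disks first, most disks end up with loaded metaslabs no matter how the disks are grouped into pools.

    /*
     * Toy model (not real ZFS code) of the selection order described above:
     * pick a disk, then a metaslab on that disk, then an offset inside that
     * metaslab.  The first use of a metaslab "loads" it, standing in for its
     * range tree being read into RAM.  All numbers are made up.
     */
    #include <stdio.h>

    #define NDISKS      46      /* hypothetical: one disk per current pool */
    #define MS_PER_DISK 200     /* metaslabs per disk, made-up figure      */

    struct metaslab {
        int  loaded;            /* has this metaslab been loaded into RAM? */
        long next_offset;       /* toy allocator cursor inside the slab    */
    };

    static struct metaslab ms[NDISKS][MS_PER_DISK];

    /* Allocate one block: disk -> metaslab -> offset, rotating over disks. */
    static void toy_alloc(int blkno)
    {
        int d = blkno % NDISKS;                 /* 1. choose a disk         */
        int m = (blkno / NDISKS) % MS_PER_DISK; /* 2. choose a metaslab     */
        if (!ms[d][m].loaded)
            ms[d][m].loaded = 1;                /* first use: load into RAM */
        ms[d][m].next_offset += 4096;           /* 3. choose an offset      */
    }

    int main(void)
    {
        long loaded = 0;

        for (int blk = 0; blk < 100000; blk++)
            toy_alloc(blk);

        for (int d = 0; d < NDISKS; d++)
            for (int m = 0; m < MS_PER_DISK; m++)
                loaded += ms[d][m].loaded;

        printf("loaded metaslabs: %ld of %d\n", loaded, NDISKS * MS_PER_DISK);
        return 0;
    }

If that picture is roughly right, splitting the same disks into separate pools doesn't change how many metaslabs end up loaded; it only changes which pool they belong to.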
> You probably have a workload with lots of small files

You're right, there are a lot of smallish files now. I didn't understand the output of zdb at first, but now I get that the first number is the record size as a power of two (2^x); I put a tiny decoding example at the end of this message. Just by that, it's a whole lot clearer. :) Anyway, it all makes sense, thank you very much for the detailed description (and for the hope that things will improve)!
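For my own future reference, the arithmetic for reading that power-of-two column; the exponents 9, 12, 14 and 17 below are just sample values, not my actual histogram:

    /* Decode power-of-two size exponents (e.g. 17 -> 131072 bytes = 128 KiB). */
    #include <stdio.h>

    int main(void)
    {
        int exponents[] = { 9, 12, 14, 17 };    /* sample exponents, made up */

        for (int i = 0; i < 4; i++) {
            int e = exponents[i];
            printf("2^%d = %ld bytes (%ld KiB)\n",
                e, 1L << e, (1L << e) >> 10);
        }
        return 0;
    }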
