Glad to hear that! :)
I'll try to be more verbose then.
For example, I have a machine with 44 * 4T SATA disks. Each of these disks has 
its own zpool, so I have 44 zpools on the machine (with one ZFS filesystem on 
each zpool). Files are stored on these filesystems in hashed directories.
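Roughly, the layout looks like this (the pool, device, and path names here are 
made up for illustration):

    # one single-disk zpool per 4T SATA disk, 44 in total
    zpool create disk01 /dev/disk/by-id/ata-...   # repeated for disk02..disk44

    # files land in hashed directories on each filesystem, e.g.:
    #   /disk01/ab/cd/abcdef0123456789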
On file numbers/sizes: one of the zpools currently has 75,504,450 files, and df 
says 2.2 TiB is used, so the average file size here is about 31 KiB.
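(2.2 TiB ≈ 2.42e12 bytes; 2.42e12 / 75,504,450 files ≈ 32,000 bytes ≈ 31 KiB 
per file.)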
Files are served over HTTP.
Nothing special happens on the ZFS side; the only difference is that these are 
single-disk zpools.

And that's why I have these questions: I couldn't really find any literature on 
this topic.

My biggest issue right now is that merely importing these zpools consumes 50+ 
GiB of kernel memory (out of 64 GiB on this machine), before anything has 
touched the disks (so it's not ARC). And this memory does not seem to be 
something the kernel can shrink (unlike the ARC, which can dynamically change 
its size).
Therefore, if the ARC or other kernel memory users need more memory, the 
machine quickly ends up in a deadlock-like state: the kernel starts killing 
userspace processes, to the point where it becomes unusable.
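For reference, this is roughly how I measure it (a sketch assuming Linux with 
OpenZFS; kstat paths and column layouts can vary between versions):

    # unreclaimable kernel slab memory, which is where the 50+ GiB shows up
    grep SUnreclaim /proc/meminfo

    # current ARC size in bytes, to confirm the ARC itself stays small
    awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats

    # largest kernel slab caches, to see which structures hold the memory
    slabtop -o -s c | head -20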

And here come the questions in the original post, from which I'm trying to 
understand what happens here: why does importing a 4T zpool (with the above 
fill ratios) take 1-1.5 GiB of kernel memory?
Is it because of the files on it (so a single zpool 44x that size would need 
the same total amount, which is not what I observe on other machines that have 
only one zpool), or is it some kind of per-zpool overhead (and if so, what 
affects it, and what could I do to lower that need)?
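To separate the two, I can import the pools one by one and watch the growth; 
something like this (the pool names are hypothetical) shows the kernel memory 
cost per import:

    for p in disk01 disk02 disk03; do
        before=$(awk '$1 == "SUnreclaim:" {print $2}' /proc/meminfo)
        zpool import "$p"
        after=$(awk '$1 == "SUnreclaim:" {print $2}' /proc/meminfo)
        echo "$p: $((after - before)) kB of unreclaimable kernel memory"
    done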
