On your main question: with ZoL 0.7.13 on Debian, 1-2 pools under 1 TB in size definitely DON'T eat 1-1.5 GB of RAM per pool just on import for me.
 
IIRC the ARC will grow only when you access (meta)data.
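A quick way to see that is to read the ARC kstats before and after touching data (a minimal sketch; standard ZoL proc paths, and the directory is just a placeholder):

    # ARC size before touching anything
    grep -w size /proc/spl/kstat/zfs/arcstats
    # read some (meta)data, e.g. walk a directory tree on an imported pool
    find /tank/somedir -type f > /dev/null
    # ARC size again - it should have grown by the cached (meta)data
    grep -w size /proc/spl/kstat/zfs/arcstats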
 
02.07.2019, 15:49, "[email protected]" <[email protected]>:
Glad to hear that! :)
I'll try to be more verbose then.
For example, I have a machine with 44*4T SATA disks. Each of these disks has a zpool on it, so I have 44 zpools on the machine (with one zfs on each zpool).
I put files into hashed directories on these zfs/zpools.
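Roughly like this (illustrative only; the hash depth and pool name are placeholders, not my exact layout):

    # hashed-directory placement, sketched
    name="somefile.bin"
    h=$(printf '%s' "$name" | md5sum | cut -c1-4)
    dir="/poolNN/${h:0:2}/${h:2:2}"      # two directory levels derived from the hash
    mkdir -p "$dir" && cp "$name" "$dir/"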
On file numbers/sizes: one of the zpools currently has
75,504,450 files and df says it has 2.2 TiB used, so the average file size here is about 31 KiB.
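Back-of-the-envelope:

    # average file size: 2.2 TiB over 75,504,450 files
    echo $(( 22 * 1024*1024*1024*1024 / 10 / 75504450 ))   # ~32000 bytes, i.e. roughly 31 KiB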
File serving is done over HTTP.
Nothing special happens here on the ZFS side; the only difference is that I have single-disk zpools.
 
And that's why I have some questions about this; I couldn't really find any literature on the topic.
 
My biggest issue right now is that just importing these zpools consumes 50+ GiB of kernel memory (out of 64 on this machine), before anything touches the disks (so it's not ARC). And this memory doesn't seem to be something the kernel can shrink (unlike the ARC, which can dynamically change its size).
Therefore, if the ARC or other kernel memory users need more memory, it quickly turns into a deadlock: the kernel starts killing userspace processes to the point where the machine becomes unusable.
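For reference, this is roughly how I'm looking at the memory (standard ZoL kstat/proc paths; only a rough sketch):

    # where the kernel memory sits
    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats   # the ARC itself
    cat /proc/spl/kmem/slab                                # SPL slab caches (dnodes, dbufs, ...)
    grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo   # overall kernel slab usage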
 
And this is where the questions in the original post come from: I'm trying to understand what happens here, and why importing a 4T zpool (with the above fill ratios) takes 1-1.5 GiB of kernel space.
Is it because of the files on it (so a single zpool 44x that size would need the same total amount, which I couldn't observe on other machines that have only one zpool), or is it some kind of per-zpool overhead (and if so, what affects it, and what could I do to lower that need)?
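One way to narrow this down would be to import the pools one at a time and watch the kernel slab delta per import (a sketch; pool names are placeholders):

    # import pools one by one and record the slab growth
    for p in pool01 pool02 pool03; do
        before=$(awk '/^Slab:/ {print $2}' /proc/meminfo)   # kB
        zpool import "$p"
        after=$(awk '/^Slab:/ {print $2}' /proc/meminfo)
        echo "$p: $(( (after - before) / 1024 )) MiB of kernel slab after import"
    done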
 
 
 
____________________________________
Sincerely,
George Melikov
 
