Hi,

I use ZFS on FreeBSD, but I have some theoretical questions which may be 
independent of the OS.
ZFS has a lot of useful features, which makes it appealing for distributed 
storage as well.
But building distributed storage on top of ZFS makes local redundancy, well, 
redundant. :)
If you run something on top of it that takes care of multi-host object 
redundancy, building a large, redundant zpool out of local disks is pointless, 
while building a non-redundant pool locally is even worse: a failure of a 
single disk means you have to rebuild the whole machine over the network, 
which is unnecessary.

So it seems very logical to use a separate zpool on each disk, but I'm not 
sure of the consequences.
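
For clarity, the layout is roughly the following (pool and device names here 
are just illustrative):

  # one single-vdev, non-redundant pool per physical disk
  zpool create -O compression=gzip disk01 /dev/da1
  zpool create -O compression=gzip disk02 /dev/da2
  # ...and so on, one pool per disk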
We use a system built on this scheme in production, but we've started to 
observe OOM-like situations on heavy writes, and even just mounting the pools 
(there are 20-60 of them per machine) makes the machine eat all of its RAM, 
in FreeBSD's "wired" bucket, which covers kernel memory, so ZFS's ARC is 
accounted there as well.
On a given machine this means that importing 44 zpools (just the import, 
nothing else running yet) raises the wired memory usage shown in top above 
50 GiB, while the ARC size remains low, at a few GiB.
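
For reference, this is roughly how we observe it (a sketch; "zpool import -a" 
stands in for our actual import sequence):

  zpool import -a                       # import all pools, nothing else running
  sysctl vm.stats.vm.v_wire_count       # wired pages; multiply by the page size
  sysctl kstat.zfs.misc.arcstats.size   # ARC size in bytes -- stays at a few GiB
  vmstat -z                             # check which UMA zones hold the wired memory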

The questions:
- what are the general consequences of having one zpool per vdev?
- are there any fundamental problems with having around 20-60 zpools per 
machine?
- does ZFS have any memory requirement that scales with the amount of used 
storage / stored blocks/objects (per zpool)? I don't understand why 50+ GiB 
gets allocated just by importing the pools, when I haven't read a single byte 
from them. Dedup is not enabled, but compression (gzip) is; we also had a 
period with a 1 MiB record size, but that turned out to be problematic, so we 
reverted to the default 128 KiB.
- are there any cache tunables which are per zpool (or per filesystem, since 
each zpool carries its own ZFS filesystem as well)? If so, that could be our 
problem, because a 4 GiB cache on a single zpool may be fine, but 44 * 4 GiB 
is just too much (see the sketch after this list).
- and while this part is FreeBSD-specific, I wonder whether you have any 
ideas on why these OOMs may have appeared.
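
Regarding the cache question above: the only ARC bounds I'm aware of are the 
global ones, so this is just to illustrate the worst case I'm worried about 
(the 4 GiB per-pool figure is hypothetical):

  sysctl vfs.zfs.arc_max vfs.zfs.arc_min  # global ARC bounds; I know of no per-pool equivalent
  # if some cache were sized per pool, the worst case here would be
  # 44 pools * 4 GiB = 176 GiB, well beyond the machine's RAM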

Thanks,