[EMAIL PROTECTED] wrote on 04/08/2008 11:22:53 AM:

>  In our environment, the politically and administratively simplest
> approach to managing our storage is to give each separate group at
> least one ZFS pool of their own (into which they will put their various
> filesystems). This could lead to a proliferation of ZFS pools on our
> fileservers (my current guess is at least 50 pools and perhaps up to
> several hundred), which leaves us wondering how well ZFS handles this
> many pools.
>
>  So: is ZFS happy with, say, 200 pools on a single server? Are there any
> issues (slow startup, say, or peculiar IO performance) that we'll run
> into? Has anyone done this in production? If there are issues, is there
> any sense of what the recommended largest number of pools per server is?
>

Chris,

      Well, I have done testing with filesystems and not as much with
pools -- I believe the core design premise for ZFS is that administrators
would use few pools and many filesystems.  I would expect Sun to recommend
that you create one large pool (or a few) and divvy out filesystems with
reservations to the groups (under which they can create their own
sub-filesystems).
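
Purely as an illustration, that layout is easy to script against the zfs
CLI.  The pool name "tank", the group names, and the 500G reservation below
are placeholders for the example, not a recommendation:

    #!/usr/bin/env python
    # Sketch: one big pool, one filesystem per group, each with a reservation.
    # Assumes the pool "tank" already exists (e.g. "zpool create tank ...").
    import subprocess

    POOL = "tank"
    GROUPS = ["physics", "chemistry", "biology"]   # placeholder group names
    RESERVATION = "500G"                           # placeholder per-group size

    for group in GROUPS:
        fs = "%s/%s" % (POOL, group)
        # Create the group's top-level filesystem and reserve space for it.
        subprocess.check_call(["zfs", "create",
                               "-o", "reservation=" + RESERVATION, fs])
        # Optionally delegate so members of the (Unix) group can create and
        # mount their own sub-filesystems underneath it.
        subprocess.check_call(["zfs", "allow", "-g", group,
                               "create,mount,snapshot", fs])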
As far as ZFS filesystems are concerned, my testing has shown that the
mount time and I/O overhead for multiple filesystems scale roughly
linearly -- timing 10 mounts extrapolates pretty well to 100 and 1000.
Past some threshold (depending on processor and memory), the mount time,
I/O, and write/read batching spike up pretty heavily.  This is one of the
reasons I take a strong stance against the recommendation that people use
reservations and filesystems as a substitute for user/group quotas
(ignoring the fact that the two features are nowhere near functional
parity anyway).
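
For what it's worth, the kind of timing I mean can be reproduced with a
throwaway script along these lines.  It assumes a scratch pool called
"tank" that you can fill with test datasets, and that nothing on the box
minds a global unmount/remount -- test machine only:

    #!/usr/bin/env python
    # Sketch: rough measurement of how "zfs mount -a" time grows with the
    # number of filesystems in a pool.  Cleanup ("zfs destroy") is omitted.
    import subprocess, time

    POOL = "tank"

    def run(*args):
        subprocess.check_call(list(args))

    created = 0
    for n in (10, 100, 1000):
        # Create additional empty filesystems until the pool holds n of them.
        while created < n:
            run("zfs", "create", "%s/mounttest%d" % (POOL, created))
            created += 1
        # Unmount every ZFS filesystem, then time remounting all of them.
        run("zfs", "unmount", "-a")
        start = time.time()
        run("zfs", "mount", "-a")
        print("%d filesystems: %.2f seconds for mount -a" % (n, time.time() - start))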

-Wade



