| I don't think that's the case.  What's wrong with setting both a quota
| and a reservation on your user filesystems?

 In a shared ZFS pool situation I don't think we'd get anything from
using both. We have to use something to limit people to the storage
that they bought, and in S10 U4 at least, quotas work better for this
(we tested).
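
 As a concrete sketch of what that limiting looks like (the pool and
filesystem names and the size here are invented for illustration),
it's a single property per group:

    # hypothetical names and size
    zfs set quota=70G tank/homes/groupA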

| What advantage will multiple zpools present over a single one with
| filesystems carved out of it?  With a single pool, you can "expand"
| filesystems if the user requests it just by changing the quota and
| reservation for that filesystem, and add more capacity if necessary
| by adding more disks to the pool.  If your policy is to use, say, a
| single pair of 35GB mirrors per zpool and the user wants more space,
| they need to split their files into categories somehow.

 Pools can/will have more than one vdev. The plan is that we will
have a set of unallocated fixed-size chunks of disk space (as LUNs or
slices). When someone buys more space, we pair up two such chunks and
add them to the person's pool as a mirrored vdev.
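
 For illustration (the pool and device names are invented), buying
more space then comes down to one step:

    # add a mirrored pair of unallocated chunks to the group's pool
    zpool add groupApool mirror c4t1d0 c4t2d0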

 With the single pool approach, you have a number of issues:
- if you keep pools at their currently purchased space, you have to both
  add a new vdev *and* bump someone's quota by the appropriate amount.
  This is somewhat more work and opens you up to the possibility of
  stupid mistakes when you change the quotas (see the sketch after this
  list).

- if you preallocate space to pools before it is purchased by anyone,
  you have to statically split your space between fileservers in advance.
  You may also need to statically split the space between multiple pools
  on a single fileserver, if a single pool would otherwise have too many
  disks to make you comfortable; this limits how much space a person can
  add to their existing allocation in an artificial way.

- if a disaster happens and you lose both sides of a mirrored vdev, you
  will have lost a *lot* more data (and a lot more people will be affected)
  than if you had things split up into separate pools. (Of course, this
  depends on how many of your separate pools had vdevs involving the
  pair of disks that you just lost; you could lose nearly as much data,
  if most of your pools were using chunks of the disk.)

  This argues for having multiple pools on a fileserver, which runs you
  into the 'people can only grow so far' problem.
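
 To make the first point concrete, a purchase in the single-pool world
needs something like this (all names and sizes are invented):

    # both steps are required; the quota arithmetic is where
    # mistakes can creep in
    zpool add tank mirror c4t3d0 c4t4d0
    zfs set quota=105G tank/homes/groupA   # was 70G, bought 35G more

 With separate per-group pools, adding the vdev is the whole job; the
pool's own size is the limit.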

We plan to use snapshots only while we take backups, partly because of
their effects on quotas and so on. Any additional usage of snapshots
would probably be under user control, so that the people who own the
space can make decisions like 'we will accept losing some space so that
we can instantly go back to yesterday'.
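
 A rough sketch of that backup-time-only usage (the filesystem name is
invented):

    zfs snapshot tank/homes/groupA@backup
    # back up from /tank/homes/groupA/.zfs/snapshot/backup, then:
    zfs destroy tank/homes/groupA@backup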

(There are groups that would probably take that, and groups that never
would.)

| You might want to use "refquota" and "refreservation" if you're
| running a Solaris that supports them---that precludes Solaris 10u4,
| unfortunately.  If you're running Nevada, though, they're definitely
| the way to go.

 This is going to be a production environment, so we're pretty much
stuck with Solaris 10 U<whatever is current>.
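
 (For the record, on a build that has them the commands would
presumably be something like this, with invented names; refquota
limits only the active data, so snapshot space doesn't count against
the group:

    zfs set refquota=70G tank/homes/groupA
    zfs set refreservation=70G tank/homes/groupA

 That's exactly what would make snapshots less painful for us.)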

        - cks