On Wed, Feb 06, 2013 at 08:03:13PM -0700, Jan Owoc wrote:
> On Wed, Feb 6, 2013 at 4:26 PM, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris)
> <opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> >
> > When I used "zpool list" after the system crashed, I saw this:
> > NAME      SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
> > storage   928G   568G   360G         -    61%  1.00x  ONLINE  -
> >
> > I did some cleanup, so I could turn things back on ... Freed up about 4G.
> >
> > Now, when I use "zpool list" I see this:
> > NAME      SIZE  ALLOC   FREE  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
> > storage   928G   564G   364G         -    60%  1.00x  ONLINE  -
> >
> > When I use "zfs list storage" I see this:
> > NAME      USED  AVAIL  REFER  MOUNTPOINT
> > storage   909G  4.01G  32.5K  /storage
> >
> > So I guess the lesson is (a) refreservation and zvol alone aren't enough to
> > ensure your VM's will stay up.  and (b) if you want to know how much room is
> > *actually* available, as in "usable," as in, "how much can I write before I
> > run out of space," you should use "zfs list" and not "zpool list"
> 
> Could you run "zfs list -o space storage"? It will show how much is
> used by the data, the snapshots, refreservation, and children (if
> any). I read somewhere that one should always use "zfs list" to
> determine how much space is actually available to be written on a
> given filesystem.
> 
> I have an idea, but it's a long shot. If you created more than one zfs
> on that pool, and added a reservation to each one, then that space is
> still technically unallocated as far as "zpool list" is concerned, but
> is not available to writing when you do "zfs list". I would imagine
> you have one or more of your VMs that grew outside of their
> "refreservation" and now crashed for lack of free space on their zfs.
> Some of the other VMs aren't using their refreservation (yet), so they
> could, between them, still write 360GB of stuff to the drive.
> 

I'm seeing weird output as well:

# zpool list foo
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
foo      5.44T  4.44T  1023G    81%  14.49x  ONLINE  -

# zfs list | grep foo
foo                          62.9T      0   250G  /volumes/foo
foo/.nza-reserve               31K   100M    31K  none
foo/foo                      62.6T      0  62.6T  /volumes/foo/foo

# zfs list -o space foo
NAME     AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
foo          0  62.9T         0    250G              0      62.7T

# zfs list -o space foo/foo
NAME             AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
foo/foo              0  62.6T         0   62.6T              0          0


What's the correct way of finding out what actually uses/reserves that 1023G of 
FREE in the zpool? 

At this point the filesystems are full, and it's not possible to write to them 
anymore.
Creating new filesystems in the pool also fails:

"Operation completed with error: cannot create 'foo/Test': out of space"

So the zpool is full for real.
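For reference, these are the places I'd look next (a sketch only; the zpool
property names assume a reasonably recent pool version, and zdb output takes
some interpreting):

```shell
# Where "missing" pool space usually hides (run as root on the pool host):
zfs list -o space -r foo                    # per-dataset breakdown of avail/used
zfs get -r reservation,refreservation foo   # reservations holding space back
zpool get free,allocated,dedupratio foo     # pool-level (raw, pre-dedup) view
zdb -bb foo                                 # block-level stats, incl. metadata/DDT
```

Note that "zpool list" reports raw space before reservations, while "zfs list"
AVAIL subtracts them, so the two are expected to disagree to some degree.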

I'd like to better understand what actually uses that 1023G of FREE space 
reported by zpool.
1023G out of 4.32T is around 23% overhead.
zpool "foo" consists of 3x mirror vdevs, so there's no raidz involved.

62.6T / 14.49x dedup ratio = 4.32T, 
which is pretty close to the ALLOC value reported by zpool.
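That arithmetic can be sanity-checked in plain shell (values hard-coded from 
the zpool/zfs output earlier in this mail; GiB units, integer math):

```shell
# Sanity check of the numbers above, all taken from the "zpool list" /
# "zfs list" output in this mail.
logical_gib=$((626 * 1024 / 10))            # 62.6T referenced by foo/foo
dedup_x100=1449                             # dedup ratio 14.49x, scaled by 100
physical_gib=$((logical_gib * 100 / dedup_x100))
free_gib=1023                               # FREE column from "zpool list"
echo "deduped data: ${physical_gib} GiB (~4.32T)"
echo "FREE is $((free_gib * 100 / physical_gib))% of the deduped data"
```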

Data on the filesystem is VM images written over NFS.


Thanks,

-- Pasi

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss