Forgive me, but isn't this incorrect:

---
mv   /pool1/000   /pool1/000d
---
rm   -rf   /pool1/000

Shouldn't that last line be
rm   -rf   /pool1/000d
??
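
For what it's worth, here is the sequence I think was intended (a sketch only; it assumes the copy to /pool2 completed cleanly and was verified before deleting anything):

cp   -r    /pool1/000   /pool2/       # copy the data to the pool with free space
mv   /pool1/000   /pool1/000d         # set the original aside under a new name
ln   -s    /pool2/000   /pool1/000    # point the old path at the new copy
rm   -rf   /pool1/000d                # remove the renamed original, not the symlink

With the rm as originally written, /pool1/000 is only the symlink at that point, so the rm removes the link while /pool1/000d keeps all of the data referenced.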

On 8 October 2010 04:32, Remco Lengers <re...@lengers.com> wrote:

>  any snapshots?
>
> *zfs list -t snapshot*
>
> ..Remco
>
>
>
> On 10/7/10 7:24 PM, Jim Sloey wrote:
>
> I have a 20 TB pool on a mount point made up of 42 disks from an EMC
> SAN. We were running out of space, down to 40 GB left (loading 8 GB/day),
> and have not yet received disks for our SAN. Using df -h results in:
> Filesystem             size   used  avail capacity  Mounted on
> pool1                    20T    20T    55G   100%    /pool1
> pool2                   9.1T   8.0T   497G    95%    /pool2
> The idea was to temporarily move a group of big directories to another zfs 
> pool that had space available and link from the old location to the new.
> cp   -r   /pool1/000    /pool2/
> mv   /pool1/000   /pool1/000d
> ln   -s   /pool2/000    /pool1/000
> rm   -rf   /pool1/000
> Using df -h after the relocation results in:
> Filesystem             size   used  avail capacity  Mounted on
> pool1                    20T    19T    15G   100%    /pool1
> pool2                   9.1T   8.3T   221G    98%    /pool2
> Using zpool list says:
> NAME    SIZE       USED    AVAIL   CAP
> pool1     19.9T    19.6T  333G     98%
> pool2     9.25T    8.89T  369G     96%
> Using zfs get all pool1 produces:
> NAME  PROPERTY            VALUE                  SOURCE
> pool1  type                filesystem             -
> pool1  creation            Tue Dec 18 11:37 2007  -
> pool1  used                19.6T                  -
> pool1  available           15.3G                  -
> pool1  referenced          19.5T                  -
> pool1  compressratio       1.00x                  -
> pool1  mounted             yes                    -
> pool1  quota               none                   default
> pool1  reservation         none                   default
> pool1  recordsize          128K                   default
> pool1  mountpoint          /pool1                  default
> pool1  sharenfs            on                     local
> pool1  checksum            on                     default
> pool1  compression         off                    default
> pool1  atime               on                     default
> pool1  devices             on                     default
> pool1  exec                on                     default
> pool1  setuid              on                     default
> pool1  readonly            off                    default
> pool1  zoned               off                    default
> pool1  snapdir             hidden                 default
> pool1  aclmode             groupmask              default
> pool1  aclinherit          secure                 default
> pool1  canmount            on                     default
> pool1  shareiscsi          off                    default
> pool1  xattr               on                     default
> pool1  replication:locked  true                   local
>
> Has anyone experienced this, or does anyone know where to look for a way to
> recover the space?
>
>
>
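
P.S. On Remco's snapshot question: if pool1 has snapshots, they keep the deleted blocks referenced, so removing files will not return space until those snapshots are gone. A rough way to see how much each one is holding (a sketch only; the snapshot name below is a placeholder, substitute your real dataset and snapshot names before destroying anything):

zfs list -t snapshot -r pool1 -o name,used,referenced   # USED is the space unique to that snapshot
zfs destroy pool1/somefs@somesnap                        # placeholder name; frees that snapshot's unique space, irreversible

The replication:locked property in your zfs get output also makes me wonder whether a replication tool is creating snapshots automatically.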
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
