[zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
I have a 20TB pool on a mount point, made up of 42 disks from an EMC SAN. We were running out of space, down to 40GB left (loading 8GB/day), and have not yet received disks for our SAN. Using df -h results in:

  Filesystem  size  used  avail  capacity  Mounted on
  pool1
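
A minimal first diagnostic for this situation, assuming the pool is named pool1 as the df output above suggests: compare what df reports against ZFS's own accounting, since a large gap between the two usually points at snapshots holding the space.

  # df sees only the live filesystem's view of used/avail
  df -h /pool1
  # ZFS's own accounting; USED here includes space pinned by snapshots
  zfs list pool1
  zpool list pool1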

Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
Yes, you're correct. There was a typo when I copied to the forum.

Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
Yes. We run a snapshot in cron to a disaster recovery site.

  NAME                      USED  AVAIL  REFER  MOUNTPOINT
  po...@20100930-22:20:00  13.2M      -  19.5T  -
  po...@20101001-01:20:00  4.35M      -  19.5T  -
  po...@20101001-04:20:00      0      -  19.5T  -
  po...@20101001-07:20:00
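
A minimal sketch of the kind of cron-driven snapshot job described above; the pool name, schedule, script path, and DR host are all assumptions, not from the thread:

  # crontab entry: snapshot every three hours at :20 (matches the names above)
  20 1,4,7,10,13,16,19,22 * * * /usr/local/bin/zfs-snap.sh

  # /usr/local/bin/zfs-snap.sh
  #!/bin/sh
  POOL=pool1                    # assumed pool name
  DRHOST=dr-site                # assumed disaster recovery host
  NOW=`date +%Y%m%d-%H:%M:%S`
  zfs snapshot ${POOL}@${NOW}
  # replication to the DR site would be an incremental send, e.g.
  #   zfs send -i ${POOL}@${PREV} ${POOL}@${NOW} | ssh ${DRHOST} zfs recv -F ${POOL}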

Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
One of us found the following: The presence of snapshots can cause some unexpected behavior when you attempt to free space. Typically, given appropriate permissions, you can remove a file from a full file system, and this action results in more space becoming available in the file system. However, if the file to be removed exists in a snapshot of the file system, then no space is gained from the deletion; the blocks used by the file continue to be referenced from the snapshot, and space is only reclaimed when the snapshot itself is destroyed.
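
That is the crux of the original problem. A minimal sketch of checking and reclaiming the space, using a snapshot timestamp from the listing earlier in the thread and assuming the pool is pool1 per the df output; treat the destroy as illustrative, not a prescription:

  # USEDSNAP shows how much space the snapshots are pinning
  zfs list -o space pool1
  zfs list -t snapshot -r pool1
  # destroying the oldest snapshot releases the blocks only it references
  zfs destroy pool1@20100930-22:20:00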

Re: [zfs-discuss] Recovering a broken mirror

2010-01-15 Thread Jim Sloey
Never mind. It looks like the controller is flaky. Neither disk in the mirror is clean. Attempts to back up and recover the remaining disk produced I/O errors that were traced to the controller. Thanks for your help, Victor.

[zfs-discuss] Recovering a broken mirror

2010-01-13 Thread Jim Sloey
We have a production Sun Fire V240 that had a ZFS mirror until this week. One of the drives (c1t3d0) in the mirror failed. The system was shut down and the bad disk replaced without an export. I don't know what happened next, but by the time I got involved there was no evidence that the remaining
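
A sketch of the usual first steps for this failure mode, assuming the pool name is oraprod (the reply below says it was mounted on /oraprod):

  # does the system still see the pool at all?
  zpool status
  # scan attached devices for importable pools
  zpool import
  zpool import -d /dev/dsk
  # if the pool is found, import it by name
  zpool import oraprod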

Re: [zfs-discuss] Recovering a broken mirror

2010-01-13 Thread Jim Sloey
No. Only slice 6, from what I understand. I didn't create this (the person who did has left the company), and all I know is that the pool was mounted on /oraprod before it faulted.

[zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-08 Thread Jim Sloey
Roch - PAE wrote: The hard part is getting a set of simple requirements. As you go into more complex data center environments, you get hit with older Solaris revs, other OSs, SOX compliance issues, etc. etc. etc. The world where most of us seem to be playing with ZFS is on the lower end of