Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-12 Thread Ross
Yeah, it's a *very* old bug. The main reason we put our ZFS rollout on hold was concerns over reliability with such an old (and imo critical) bug still present in the system.

[zfs-discuss] question regarding gzip compression in S10

2008-05-12 Thread Krzys
I just upgraded to Sol 10 U5 and was hoping that gzip compression would be there, but after the upgrade it still only shows v4: [10:05:36] [EMAIL PROTECTED]: /export/home zpool upgrade This system is currently running ZFS version 4. Do you know when version 5 will be included in Solaris 10?
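For reference, a rough sketch of how to check this (the pool name tank below is just a placeholder): zpool upgrade with no arguments reports the on-disk version of each pool, zpool upgrade -v lists the versions the running release supports, and gzip compression arrived with ZFS version 5, so it can only be enabled once the system reports at least that:
  zpool upgrade                          # show the version each imported pool is running
  zpool upgrade -v                       # list the ZFS versions this release supports
  zfs set compression=gzip tank/data     # only accepted once version 5+ is available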

Re: [zfs-discuss] ZFS cli for REMOTE Administration

2008-05-12 Thread Mark Shellenbaum
Andy Lubel wrote: Paul B. Henson wrote: On Thu, 8 May 2008, Mark Shellenbaum wrote: we already have the ability to allow users to create/destroy snapshots over NFS. Look at the ZFS delegated administration model. If all you want is snapshot creation/destruction then you will need to grant
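For anyone finding this in the archives, a minimal sketch of the delegation being referred to (the user name and dataset are invented for the example):
  zfs allow -u webadmin snapshot,destroy,mount tank/export/www   # let webadmin create and destroy snapshots
  zfs allow tank/export/www                                      # display what has been delegated
  zfs unallow -u webadmin tank/export/www                        # revoke it again
The destroy permission generally needs the mount ability as well, which is why it is granted alongside it here.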

Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-12 Thread Ralf Bertling
Hi all, until the scrub problem (http://bugs.opensolaris.org/view_bug.do?bug_id=6343667 ) is fixed, you should be able to simulate a scrub on the latest data by using zfs send snapshot > /dev/null. Since the primary purpose is to verify latent bugs and to have zfs auto-correct them, simply
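Spelled out as commands, the idea is roughly the following (dataset and snapshot names are arbitrary):
  zfs snapshot tank/data@scrubcheck            # snapshot the data to be checked
  zfs send tank/data@scrubcheck > /dev/null    # read it back, letting ZFS verify checksums, and discard the stream
  zfs destroy tank/data@scrubcheck             # clean up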

Re: [zfs-discuss] sharing UFS root and ZFS pool

2008-05-12 Thread sean walmsley
Some additional information: I should have noted that the client could not see the thumper1 shares via the automounter. I've played around with this setup a bit more and it appears that I can manually mount both filesystems (e.g. on /tmp/troot and /tmp/tpool), so the ZFS and UFS volumes are
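(For the record, the manual mounts described would look roughly like the following; the share paths under /export are a guess based on the description:
  mkdir -p /tmp/troot /tmp/tpool
  mount -F nfs thumper1:/export/root /tmp/troot
  mount -F nfs thumper1:/export/pool /tmp/tpool
)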

Re: [zfs-discuss] sharing UFS root and ZFS pool

2008-05-12 Thread Richard Elling
sean walmsley wrote: Some additional information: I should have noted that the client could not see the thumper1 shares via the automounter. I've played around with this setup a bit more and it appears that I can manually mount both filesystems (e.g. on /tmp/troot and /tmp/tpool), so the

Re: [zfs-discuss] Where is zfs attributes kept?

2008-05-12 Thread Richard Elling
Christine Tran wrote: Hi, If I delegate a dataset to a zone, and inside the zone the zone admin sets an attribute on that dataset, where is that data kept? More to the point, at what level is that data kept? In the zone? Or on the pool, with the zone having privilege to modify that info
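For context, a rough sketch of the setup being asked about (zone and dataset names are invented):
  # in the global zone: delegate the dataset to the zone
  zonecfg -z webzone 'add dataset; set name=tank/webzone-data; end'
  # inside the zone, the zone admin can then set properties on it
  zfs set compression=on tank/webzone-data
  zfs get -s local all tank/webzone-data    # show which properties were set locally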

[zfs-discuss] Deletion of file from ZFS Disk and Snapshots

2008-05-12 Thread Aaron Epps
This is a common problem that we run into and perhaps there's a good explanation of why it can't be done. Often, there will be a large set of data, say 200GB or so that gets written to a ZFS share, snapshotted and then deleted a few days later. As I'm sure you know, none of the space is

Re: [zfs-discuss] Deletion of file from ZFS Disk and Snapshots

2008-05-12 Thread Simon Breden
From my understanding, when you delete all the snapshots that reference the files that have already been deleted from the file system(s), then all the space will be returned to the pool. So try deleting the snapshots that you no longer need. Obviously, be sure that you don't need any files
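A minimal sketch of that (dataset and snapshot names are examples only):
  zfs list -t snapshot -r tank/share     # see which snapshots still reference the deleted files
  zfs destroy tank/share@2008-05-01      # destroy the ones no longer needed; the space returns to the pool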

[zfs-discuss] ZFS Problems under vmware

2008-05-12 Thread Paul B. Henson
I have a test bed S10U5 system running under vmware ESX that has a weird problem. I have a single virtual disk, with some slices allocated as UFS filesystem for the operating system, and s7 as a ZFS pool. Whenever I reboot, the pool fails to open: May 8 17:32:30 niblet fmd: [ID 441519
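A few generic commands that are usually helpful for digging into this kind of failure (nothing here is specific to the vmware case):
  zpool status -xv        # show pools with problems and the affected devices
  fmdump -eV | tail -40   # dump the FMA error telemetry behind the fmd messages
  zpool import            # check whether the pool on s7 is still discoverable after the failed open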

Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-12 Thread A Darren Dunham
On Mon, May 12, 2008 at 06:44:39PM +0200, Ralf Bertling wrote: ...you should be able to simulate a scrub on the latest data by using zfs send snapshot > /dev/null. Since the primary purpose is to verify latent bugs and to have zfs auto-correct them, simply reading all data would be sufficient