[zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Ian Brown
Hello, I am trying to create a zfs file system following the Creating a Basic ZFS File System section of Sun's ZFS documentation. The problem is that the device has a ufs filesystem on the partition I am trying to work with; it is in fact empty and does not contain any file

Re: [zfs-discuss] Shared ZFS pools

2006-12-06 Thread Albert Shih
On 06/12/2006 at 05:05:55+0100, Flemming Danielsen wrote: Hi, I have 2 questions on the use of ZFS. How do I ensure I have site redundancy using zfs pools? As I see it, we can only ensure mirrors between 2 disks. I have 2 HDS arrays, one on each site, and I want to be able to lose one of them and my
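A minimal sketch of one way to get site redundancy, assuming one LUN from the HDS array on each site (the device names below are hypothetical): each mirror vdev pairs a site-A LUN with a site-B LUN, so the pool survives the loss of either array.

  # c2* = site A LUNs, c3* = site B LUNs (hypothetical names)
  zpool create sitepool \
      mirror c2t0d0 c3t0d0 \
      mirror c2t1d0 c3t1d0
  zpool status sitepool    # both halves of each mirror should show ONLINE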

Re: [zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Wee Yeh Tan
Ian, The first error is correct in that zpool create will not, unless forced, create a file system if it knows that another filesystem resides in the target vdev. The second error was caused by your removal of the slice. What I find disconcerting is that the zpool was created. Can you provide the
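A minimal sketch of the behaviour Wee Yeh describes, using the slice from Ian's post: zpool create refuses a vdev that appears to hold another filesystem, and -f overrides the check, destroying whatever was on the slice.

  zpool create tank c1d0s6       # refused while ZFS detects the old ufs filesystem
  zpool create -f tank c1d0s6    # forces creation, overwriting the slice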

[zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Jim Davis
We have two aging Netapp filers and can't afford to buy new Netapp gear, so we've been looking with a lot of interest at building NFS fileservers running ZFS as a possible future approach. Two issues have come up in the discussion - Adding new disks to a RAID-Z pool (Netapps handle adding

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Al Hopper
On Wed, 6 Dec 2006, Jim Davis wrote: We have two aging Netapp filers and can't afford to buy new Netapp gear, so we've been looking with a lot of interest at building NFS fileservers running ZFS as a possible future approach. Two issues have come up in the discussion - Adding new disks to

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Darren J Moffat
On Wed, 6 Dec 2006, Jim Davis wrote: We have two aging Netapp filers and can't afford to buy new Netapp gear, so we've been looking with a lot of interest at building NFS fileservers running ZFS as a possible future approach. Two issues have come up in the discussion - Adding new disks to a

Re: [zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Torrey McMahon
Still ... I don't think a core file is appropriate. Sounds like a bug report is in order if one doesn't already exist. (zpool dumps core when missing devices are used, perhaps?) Wee Yeh Tan wrote: Ian, The first error is correct in that zpool create will not, unless forced, create a file system if

Re: [zfs-discuss] weird thing with zfs

2006-12-06 Thread Krzys
Thanks so much.. anyway resilvering worked its way, I got everything resolved:
  zpool status -v
    pool: mypool
   state: ONLINE
   scrub: none requested
  config:
          NAME      STATE     READ WRITE CKSUM
          mypool    ONLINE       0     0     0
            mirror  ONLINE       0     0     0

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Rob
You can add more disks to a pool that is raid-z; you just can't add disks to the existing raid-z vdev.
  cd /usr/tmp
  mkfile -n 100m 1 2 3 4 5 6 7 8 9 10
  zpool create t raidz /usr/tmp/1 /usr/tmp/2 /usr/tmp/3
  zpool status t
  zfs list t
  zpool add -f t raidz2 /usr/tmp/4 /usr/tmp/5 /usr/tmp/6
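The -f on the zpool add is needed because the new vdev's replication level (raidz2) differs from the existing raidz vdev, and zpool add refuses mismatched replication levels unless forced. To clean up the scratch pool afterwards (a sketch, reusing the file-backed vdevs above):

  zpool destroy t
  rm /usr/tmp/[1-9] /usr/tmp/10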

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread eric kustarz
Jim Davis wrote: We have two aging Netapp filers and can't afford to buy new Netapp gear, so we've been looking with a lot of interest at building NFS fileservers running ZFS as a possible future approach. Two issues have come up in the discussion - Adding new disks to a RAID-Z pool

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams
Hi Luke, We've been using MPXIO (STMS) with ZFS quite solidly for the past few months. Failover is instantaneous when a write operation occurs after a path is pulled. Our environment is similar to yours, dual FC ports on the host, and 4 FC ports on the storage (2 per controller). Depending on
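For reference, a sketch of turning on MPxIO (STMS) for FC HBAs with stmsboot on Solaris 10; this is the stock mechanism, not anything specific to Jason's setup:

  stmsboot -e    # enable STMS/MPxIO multipathing (prompts for a reboot)
  stmsboot -L    # after the reboot, list old device names against the new scsi_vhci names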

[zfs-discuss] Re: Re: Re: Snapshots impact on performance

2006-12-06 Thread Chris Gerhard
One of our file servers internal to Sun reproduces this. Running nv53, here is the dtrace output:
  unix`mutex_vector_enter+0x120
  zfs`metaslab_group_alloc+0x1a0
  zfs`metaslab_alloc_dva+0x10c
  zfs`metaslab_alloc+0x3c
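That stack points at mutex contention in the metaslab allocator. A sketch of one generic way to gather this kind of data (not necessarily the script used here), using the DTrace lockstat provider to aggregate kernel stacks by time spent blocked on adaptive mutexes:

  # arg1 of adaptive-block is the time spent blocked, in nanoseconds
  dtrace -n 'lockstat:::adaptive-block { @[stack()] = sum(arg1); }'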

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Douglas Denny
On 12/6/06, Jason J. W. Williams [EMAIL PROTECTED] wrote: We've been using MPXIO (STMS) with ZFS quite solidly for the past few months. Failover is instantaneous when a write operation occurs after a path is pulled. Our environment is similar to yours, dual-FC ports on the host, and 4 FC ports

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams
Hi Doug, The configuration is a T2000 connected to a StorageTek FLX210 array via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z the LUNs across 3 array volume groups. For performance reasons we're in the process of changing to striped zpools across RAID-1 volume groups. The
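A sketch of the striped layout Jason describes, assuming one LUN per array RAID-1 volume group (hypothetical device names): with no raidz or mirror keyword, zpool create builds a dynamic stripe across the listed vdevs, leaving redundancy to the array.

  zpool create dbpool c4t0d0 c4t1d0 c4t2d0    # one LUN per RAID-1 volume group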

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Edward Pilatowicz
On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote: We have two aging Netapp filers and can't afford to buy new Netapp gear, so we've been looking with a lot of interest at building NFS fileservers running ZFS as a possible future approach. Two issues have come up in the discussion -

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Douglas Denny
On 12/6/06, Jason J. W. Williams [EMAIL PROTECTED] wrote: The configuration is a T2000 connected to a StorageTek FLX210 array via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z the LUNs across 3 array volume groups. For performance reasons we're in the process of changing to

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Darren J Moffat
Edward Pilatowicz wrote: On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote: We have two aging Netapp filers and can't afford to buy new Netapp gear, so we've been looking with a lot of interest at building NFS fileservers running ZFS as a possible future approach. Two issues have come

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-06 Thread Eric Kustarz
Jim Davis wrote: eric kustarz wrote: What about adding a whole new RAID-Z vdev and dynamicly stripe across the RAID-Zs? Your capacity and performance will go up with each RAID-Z vdev you add. Thanks, that's an interesting suggestion. Have you tried using the automounter as suggested
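A sketch of the expansion eric describes, assuming an existing pool named tank with one raidz vdev and a new set of disks (hypothetical device names): adding a second raidz vdev makes ZFS dynamically stripe across both.

  zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0
  zpool status tank    # now shows two top-level raidz vdevs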

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams
Hi Luke, That's really strange. We did the exact same thing moving between two hosts (export/import) and it took maybe 10 secs. How big is your zpool? Best Regards, Jason On 12/6/06, Luke Schwab [EMAIL PROTECTED] wrote: Doug, I should have posted the reason behind this posting. I have 2

[zfs-discuss] Re: zpool import takes to long with large numbers of file systems

2006-12-06 Thread Luke Schwab
I, too, experienced a long delay while importing a zpool on a second machine. I do not have any filesystems in the pool, just the Solaris 10 operating system, an Emulex 10002DC HBA, and a 4884 LSI array (dual attached). I don't have any file systems created, but when STMS (mpxio) is enabled I see

[zfs-discuss] Re: ZFS failover without multipathing

2006-12-06 Thread Luke Schwab
I simply created a zpool with an array disk, like:
  hosta# zpool create testpool c6tnumd0    # runs within a second
  hosta# zpool export testpool             # runs within a second
  hostb# zpool import testpool             # takes 5-7 minutes
If STMS (mpxio) is disabled, it takes 45-60 seconds. I tested this with
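One way to see where the import time goes (a sketch; both forms are standard zpool usage):

  hostb# time zpool import             # only scans devices and lists importable pools
  hostb# time zpool import testpool    # performs the import; compare the two times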

Re: [zfs-discuss] Re: Re: Managed to corrupt my pool

2006-12-06 Thread Eric Schrock
On Wed, Dec 06, 2006 at 12:35:58PM -0800, Jim Hranicky wrote: If those are the original path ids, and you didn't move the disks on the bus? Why is the is_spare flag Well, I'm not sure, but these drives were set as spares in another pool I deleted -- should I have done something to the

Re: [zfs-discuss] Re: ZFS failover without multipathing

2006-12-06 Thread James C. McPherson
Luke Schwab wrote: I simply created a zpool with an array disk, like:
  hosta# zpool create testpool c6tnumd0    # runs within a second
  hosta# zpool export testpool             # runs within a second
  hostb# zpool import testpool             # takes 5-7 minutes
If STMS (mpxio) is disabled, it takes from 45-60

[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2006-12-06 Thread Eric Boutilier
For background on what this is, see: http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416 http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200 = zfs-discuss 11/16 - 11/30 = Size of all threads during

[zfs-discuss] Re: Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Ian Brown
Hello, Thanks. Here is the needed info:
  zpool status
    pool: tank
   state: ONLINE
   scrub: none requested
  config:
          NAME      STATE     READ WRITE CKSUM
          tank      ONLINE       0     0     0
          c1d0s6    ONLINE       0     0     0
  errors: No known data errors
  df -h
