Re: [zfs-discuss] Intel M-series SSD

2008-09-11 Thread Al Hopper
On Wed, Sep 10, 2008 at 1:46 PM, Bob Friesenhahn [EMAIL PROTECTED] wrote: On Wed, 10 Sep 2008, Keith Bierman wrote: ... That is reasonable. It adds to product cost and size, though. Super-capacitors are not super-small. True, but for enterprise-class devices they are sufficiently small.

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham
Ralf, Jim, first of all: I never said that AVS is a bad product, and I never will. I wonder why you act as if you had been attacked personally. To be honest, if I were a customer with the original question, such a reaction wouldn't make me feel safer. I am sorry that my response came across

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham
Matt, Just to clarify a few items... consider a setup where we want to use AVS to replicate the ZFS pool on a 4-drive server to like hardware. The 4 drives are set up as RaidZ. If we lose a drive (say #2) in the primary server, RaidZ will take over, and our data will still be

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread A Darren Dunham
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote: The issue with any form of RAID 1, is that the instant a disk fails out of the RAID set, with the next write I/O to the remaining members of the RAID set, the failed disk (and its replica) are instantly out of sync. Does raidz

Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-11 Thread Richard Elling
Carson Gaspar wrote: Let me drag this thread kicking and screaming back to ZFS...
Use case:
- We need an NFS server that can be replicated to another building to handle both scheduled powerdowns and unplanned outages. For scheduled powerdowns we'd want to fail over a week in advance, and

[zfs-discuss] Will ZFS stay consistent with AVS/ZFS and async replication

2008-09-11 Thread Matt Beebe
When using AVS's Async replication with memory queue, am I guaranteed a consistent ZFS on the distant end? The assumed failure case is that the replication broke, and now I'm trying to promote the secondary replicate with what might be stale data. Recognizing in advance that some of the data

Re: [zfs-discuss] Apache module for ZFS ACL based authorization

2008-09-11 Thread Nicolas Williams
On Wed, Sep 10, 2008 at 06:35:49PM -0700, Paul B. Henson wrote: I'd appreciate any feedback, particularly about things that don't work right :). I bet you think it'd be nice if we had a public equivalent of _getgroupsbymember()... Even better if we just had utility functions to do ACL

Re: [zfs-discuss] Apache module for ZFS ACL based authorization

2008-09-11 Thread Paul B. Henson
On Thu, 11 Sep 2008, Nicolas Williams wrote: I bet you think it'd be nice if we had a public equivalent of _getgroupsbymember()... Indeed, that would be useful in numerous contexts. It would be even nicer if the appropriate standards body added it alongside the current getgr* functions to

Re: [zfs-discuss] Apache module for ZFS ACL based authorization

2008-09-11 Thread Nicolas Williams
On Thu, Sep 11, 2008 at 10:36:38AM -0700, Paul B. Henson wrote: On Thu, 11 Sep 2008, Nicolas Williams wrote: I bet you think it'd be nice if we had a public equivalent of _getgroupsbymember()... Indeed, that would be useful in numerous contexts. It would be even nicer if the appropriate
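For readers following this thread from the archive: the ACLs the Apache module consults are ZFS/NFSv4 ACLs, which Solaris exposes through chmod's A syntax and ls -v. A minimal dry-run sketch of granting and inspecting the read access an authorization check would test for (the file path and web-server account are hypothetical; the commands are echoed rather than executed, since they assume a Solaris host with a ZFS-backed path):

```shell
#!/bin/sh
# Hypothetical document-root file and web-server account.
FILE=/export/web/index.html
WEBUSER=webservd

# Grant the web server read access via an NFSv4 "allow" ACE:
CMD_GRANT="chmod A+user:${WEBUSER}:read_data:allow ${FILE}"

# Inspect the full ACL (Solaris 'ls -v' prints each ACE):
CMD_SHOW="ls -v ${FILE}"

# Dry run: print the commands so they can be reviewed first.
echo "$CMD_GRANT"
echo "$CMD_SHOW"
```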

Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-11 Thread Carson Gaspar
Richard Elling wrote: For campus or metro sized systems, many people just use HA clusters. The complexity level is similar and you automatically avoid the NFS file handle problem. There is a lot of expertise in this area as NFS is one of the most popular clustered services.

Re: [zfs-discuss] Any commands to dump all zfs snapshots like NetApp snapmirror

2008-09-11 Thread Haiou Fu (Kevin)
Excuse me, but could you please copy and paste the part about zfs send -l? I couldn't find it in the link you sent me: http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?a=view What release is this send -l option available in? -- This message posted from opensolaris.org

Re: [zfs-discuss] Will ZFS stay consistent with AVS/ZFS and async replication

2008-09-11 Thread Miles Nordin
mb == Matt Beebe [EMAIL PROTECTED] writes:
 mb When using AVS's Async replication with memory queue, am I
 mb guaranteed a consistent ZFS on the distant end? The assumed
 mb failure case is that the replication broke, and now I'm trying
 mb to promote the secondary replicate with

Re: [zfs-discuss] zfs import -f not working!

2008-09-11 Thread Miles Nordin
Did you guys ever fix this, or get a bug number, or anything? Should I avoid that release? I was about to install b96 for ZFS fixes but this 'zpool import -f' problem looks bad. Corey
-8-
pr1# zpool offline tank c5t0d0s0
pr1# zpool status
  pool: rpool
 state: ONLINE
 scrub: none

Re: [zfs-discuss] Nexenta/ZFS vs Heartbeat/DRBD

2008-09-11 Thread Richard Elling
Carson Gaspar wrote: Richard Elling wrote: For campus or metro sized systems, many people just use HA clusters. The complexity level is similar and you automatically avoid the NFS file handle problem. There is a lot of expertise in this area as NFS is one of the most popular clustered

Re: [zfs-discuss] Any commands to dump all zfs snapshots like NetApp snapmirror

2008-09-11 Thread Richard Elling
Haiou Fu (Kevin) wrote: Excuse me, but could you please copy and paste the part about zfs send -l? I couldn't find it in the link you sent me: http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?a=view Not ell 'l'; try capital-I 'I'. What release is this send -l option available in? The
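The distinction matters for anyone trying to mirror a whole snapshot chain, SnapMirror-style: lowercase -i sends only the delta between two snapshots, while capital -I also ships every intermediate snapshot. A minimal dry-run sketch (pool, snapshot, and destination names are hypothetical; the commands are echoed so the pipeline can be inspected without a real pool):

```shell
#!/bin/sh
# Hypothetical dataset and destination -- adjust for your setup.
POOL=tank/home
DEST="ssh backuphost zfs receive -d backup"

# Initial full transfer of the earliest snapshot of interest:
CMD_FULL="zfs send ${POOL}@monday"

# Capital -I ships @monday..@friday INCLUDING every snapshot
# in between (lowercase -i would send only the endpoint delta):
CMD_INCR="zfs send -I ${POOL}@monday ${POOL}@friday"

# Dry run: print the pipelines rather than execute them.
echo "${CMD_FULL} | ${DEST}"
echo "${CMD_INCR} | ${DEST}"
```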

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread Jim Dunham
On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote: On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote: The issue with any form of RAID 1, is that the instant a disk fails out of the RAID set, with the next write I/O to the remaining members of the RAID set, the failed disk (and its

Re: [zfs-discuss] zfs import -f not working!

2008-09-11 Thread Miles Nordin
c == Miles Nordin [EMAIL PROTECTED] writes:
 c Did you guys ever fix this, or get a bug number, or
 c anything?
I found two bugs about this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6736213
http://bugs.opensolaris.org/view_bug.do?bug_id=6739532
I don't think either one fits

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-11 Thread A Darren Dunham
On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote: On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote: On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote: The issue with any form of RAID 1, is that the instant a disk fails out of the RAID set, with the next write I/O to the

Re: [zfs-discuss] zfs import -f not working!

2008-09-11 Thread Richard Elling
Miles Nordin wrote:
 c == Miles Nordin [EMAIL PROTECTED] writes:
 c Did you guys ever fix this, or get a bug number, or
 c anything?
I think it is a bug. I haven't been able to reproduce it myself, so I won't file a bug on it, but recommend that anyone who encounters

[zfs-discuss] ZFS Panicing System Cluster Crash effect

2008-09-11 Thread Jack Dumson
Issues with ZFS and Sun Cluster: if a cluster node crashes and an HAStoragePlus resource group containing ZFS structures (i.e., a zpool) is transitioned to a surviving node, the zpool import can cause the surviving node to panic. The zpool was obviously not exported in a controlled fashion because of hard

Re: [zfs-discuss] ZFS Panicing System Cluster Crash effect

2008-09-11 Thread James C. McPherson
Jack Dumson wrote: Issues with ZFS and Sun Cluster: if a cluster node crashes and an HAStoragePlus resource group containing ZFS structures (i.e., a zpool) is transitioned to a surviving node, the zpool import can cause the surviving node to panic. The zpool was obviously not exported in a controlled

Re: [zfs-discuss] ZFS Panicing System Cluster Crash effect

2008-09-11 Thread Ricardo M. Correia
Hi Jack, On Thu, 2008-09-11 at 15:37 -0700, Jack Dumson wrote: Issues with ZFS and Sun Cluster: if a cluster node crashes and an HAStoragePlus resource group containing ZFS structures (i.e., a zpool) is transitioned to a surviving node, the zpool import can cause the surviving node to panic.
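For context on the failure mode in this thread: a crashed node never runs zpool export, so the pool still looks in use and a plain import on the surviving node is refused; the takeover therefore needs a forced import, and that forced import is the step reported to panic. A dry-run sketch of the takeover sequence an HA agent or operator would run (pool name is hypothetical; the commands are echoed so nothing is actually imported):

```shell
#!/bin/sh
# Hypothetical HAStoragePlus-managed pool on shared storage.
POOL=hastorage

# First list the pools visible on the shared storage:
CMD_LIST="zpool import"

# The pool was never cleanly exported by the crashed node, so a
# plain 'zpool import' refuses it as potentially in use; -f
# overrides that check. This is the forced-import step the
# thread reports can panic the surviving node.
CMD_TAKEOVER="zpool import -f ${POOL}"

# Dry run: print the commands for review.
echo "$CMD_LIST"
echo "$CMD_TAKEOVER"
```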