Re: [zfs-discuss] x4500 vs AVS ?

2008-09-17 Thread Ralf Ramge
Jorgen Lundman wrote: If we were interested in finding a method to replicate data to a 2nd x4500, what other options are there for us? If you already have an X4500, I think the best option for you is a cron job with incremental 'zfs send'. Or rsync. -- Ralf Ramge Senior Solaris
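Ralf's suggestion can be sketched as a small cron-driven script. The pool name, snapshot labels, and standby hostname below are all hypothetical, and DRYRUN=echo only prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of incremental 'zfs send' replication to a second X4500.
# POOL, DESTHOST, and the snapshot naming scheme are made up for
# illustration; set DRYRUN= (empty) to run for real.
DRYRUN=echo
POOL=tank
DESTHOST=standby-x4500
PREV="${POOL}@repl-prev"                       # last snapshot already sent
SNAP="${POOL}@repl-$(date -u +%Y%m%d%H%M%S)"   # new snapshot to send

$DRYRUN zfs snapshot -r "$SNAP"
$DRYRUN sh -c "zfs send -i $PREV $SNAP | ssh $DESTHOST zfs receive -d $POOL"
```

A crontab entry such as `0 * * * * /path/to/replicate.sh` would run this hourly; after a successful send the previous snapshot would be destroyed and the new one renamed to take its place.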

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-17 Thread gm_sjo
Am I right in thinking though that for every raidz1/2 vdev, you're effectively losing the storage of one/two disks in that vdev? ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-17 Thread Peter Tribble
On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote: Am I right in thinking though that for every raidz1/2 vdev, you're effectively losing the storage of one/two disks in that vdev? Well yeah - you've got to have some allowance for redundancy. -- -Peter Tribble
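The arithmetic behind the question is straightforward; the 8 x 1 TB configuration below is just an illustrative example:

```shell
# A raidz1/raidz2 vdev gives up one/two disks' worth of space to parity.
disks=8       # disks per raidz vdev (example value)
size_tb=1     # capacity of each disk, in TB (example value)
raw=$(( disks * size_tb ))
raidz1_usable=$(( (disks - 1) * size_tb ))
raidz2_usable=$(( (disks - 2) * size_tb ))
echo "raw ${raw} TB, raidz1 ${raidz1_usable} TB usable, raidz2 ${raidz2_usable} TB usable"
# prints: raw 8 TB, raidz1 7 TB usable, raidz2 6 TB usable
```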

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-17 Thread gm_sjo
2008/9/17 Peter Tribble: On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote: Am I right in thinking though that for every raidz1/2 vdev, you're effectively losing the storage of one/two disks in that vdev? Well yeah - you've got to have some allowance for redundancy. This is

[zfs-discuss] zpool with multiple mirrors question

2008-09-17 Thread Francois
If 2 disks of a mirror fail, will the pool be faulted?

    NAME        STATE   READ WRITE CKSUM
    homez       ONLINE     0     0     0
      mirror    ONLINE     0     0     0
        c0t2d0  ONLINE     0     0     0
        c0t3d0  ONLINE     0     0     0

Re: [zfs-discuss] zpool with multiple mirrors question

2008-09-17 Thread Darren J Moffat
Francois wrote: If 2 disks of a mirror fail, will the pool be faulted?

    NAME        STATE   READ WRITE CKSUM
    homez       ONLINE     0     0     0
      mirror    ONLINE     0     0     0
        c0t2d0  ONLINE     0     0     0
        c0t3d0

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-17 Thread Peter Tribble
On Wed, Sep 17, 2008 at 10:11 AM, gm_sjo [EMAIL PROTECTED] wrote: 2008/9/17 Peter Tribble: On Wed, Sep 17, 2008 at 8:40 AM, gm_sjo [EMAIL PROTECTED] wrote: Am I right in thinking though that for every raidz1/2 vdev, you're effectively losing the storage of one/two disks in that vdev? Well

Re: [zfs-discuss] [storage-discuss] A few questions

2008-09-17 Thread Ralf Ramge
gm_sjo wrote: Are you not in fact losing performance by reducing the number of spindles used for a given pool? This depends. Usually, RAIDZ1/2 isn't a good performer when it comes to random-access read I/O, for instance. If I wanted to scale performance by adding spindles, I would use

Re: [zfs-discuss] zpool with multiple mirrors question

2008-09-17 Thread Francois
Darren J Moffat wrote: If c0t6d0 and c0t7d0 both fail (ie both sides of the same mirror vdev) then the pool will be unable to retrieve all the data stored in it. If c0t6d0 and c0t3d0 both fail then there are sufficient replicas of data available in that case because it was disks from
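The rule being described here -- a pool of two-way mirrors stays available as long as every mirror vdev keeps at least one working disk -- can be modeled with a toy script. Disk names follow the thread; the "failed" set is an arbitrary example:

```shell
# Each mirror vdev needs >= 1 surviving disk for the pool to stay online.
mirror1="c0t2d0 c0t3d0"
mirror2="c0t6d0 c0t7d0"
failed="c0t6d0 c0t3d0"   # one disk from each mirror: pool should survive

pool_ok=yes
for vdev in "$mirror1" "$mirror2"; do
  alive=0
  for disk in $vdev; do
    case " $failed " in
      *" $disk "*) ;;                  # this disk failed
      *) alive=$(( alive + 1 )) ;;     # this disk is still healthy
    esac
  done
  [ "$alive" -gt 0 ] || pool_ok=no     # an entire mirror vdev is gone
done
echo "pool ok: $pool_ok"   # prints: pool ok: yes
```

Setting failed="c0t6d0 c0t7d0" (both sides of one mirror) flips the answer to no, matching Darren's description of losing both disks of the same vdev.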

Re: [zfs-discuss] [storage-discuss] iscsi target problems on snv_97

2008-09-17 Thread Moore, Joe
I believe the problem you're seeing might be related to a deadlock condition (CR 6745310); if you run pstack on the iscsi target daemon you might find a bunch of zombie threads. The fix was putback into snv_99; give snv_99 a try. Yes, a pstack of the core I've generated from iscsitgtd does have

Re: [zfs-discuss] [storage-discuss] iscsi target problems on snv_97

2008-09-17 Thread tim szeto
Moore, Joe wrote: I believe the problem you're seeing might be related to a deadlock condition (CR 6745310); if you run pstack on the iscsi target daemon you might find a bunch of zombie threads. The fix was putback into snv_99; give snv_99 a try. Yes, a pstack of the core I've generated

Re: [zfs-discuss] zpool with multiple mirrors question

2008-09-17 Thread Miles Nordin
djm == Darren J Moffat [EMAIL PROTECTED] writes: djm If c0t6d0 and c0t7d0 both fail (ie both sides of the same djm mirror vdev) then the pool will be unable to retrieve all the djm data stored in it. won't be able to retrieve ANY of the data stored on it. It's correct as you wrote it,

[zfs-discuss] resilver keeps starting over? snv_95

2008-09-17 Thread Neal Pollack
Running Nevada build 95 on an ultra 40. Had to replace a drive. Resilver in progress, but it looks like each time I do a zpool status, the resilver starts over. Is this a known issue?

Re: [zfs-discuss] resilver keeps starting over? snv_95

2008-09-17 Thread Tomas Ögren
On 17 September, 2008 - Neal Pollack sent me these 0,3K bytes: Running Nevada build 95 on an ultra 40. Had to replace a drive. Resilver in progress, but it looks like each time I do a zpool status, the resilver starts over. Is this a known issue? I recall some issue with 'zpool status' as

Re: [zfs-discuss] resilver keeps starting over? snv_95

2008-09-17 Thread Miles Nordin
t == Tomas Ögren [EMAIL PROTECTED] writes: t I recall some issue with 'zpool status' as root restarting t resilvering.. Doing it as a regular user will not.. is there an mdb command similar to zpool status? maybe it's safer.

Re: [zfs-discuss] ZFS system requirements

2008-09-17 Thread Erik Trimble
Cyril Plisko wrote: On Wed, Sep 17, 2008 at 6:06 AM, Erik Trimble [EMAIL PROTECTED] wrote: Just one more thing on this: run with a 64-bit processor. Don't even think of using a 32-bit one - there are known issues with ZFS not quite properly using 32-bit-only structures. That is, ZFS is

Re: [zfs-discuss] resilver keeps starting over? snv_95

2008-09-17 Thread Wade . Stuart
Are you doing snaps? If so, unless you have the new bits to handle the issue, each snap restarts a scrub or resilver. Thanks! Wade Stuart

Re: [zfs-discuss] resilver keeps starting over? snv_95

2008-09-17 Thread Neal Pollack
On 09/17/08 02:29 PM, [EMAIL PROTECTED] wrote: Are you doing snaps? No, no snapshots ever. Logged in as root to do: zpool replace poolname deaddisk, and then did a few zpool status as root. It restarted each time. If so, unless you have the new bits to handle the issue, each snap restarts

Re: [zfs-discuss] ZPOOL Import Problem

2008-09-17 Thread Jim Dunham
On Sep 16, 2008, at 5:39 PM, Miles Nordin wrote: jd == Jim Dunham [EMAIL PROTECTED] writes: jd If at the time the SNDR replica is deleted the set was jd actively replicating, along with ZFS actively writing to the jd ZFS storage pool, I/O consistency will be lost, leaving ZFS