Re: [zfs-discuss] Per filesystem scrub

2008-04-05 Thread Jeff Bonwick
Aye, or better yet -- give the scrub/resilver/snap reset issue fix very high priority. As it stands, snapshots are impossible when you need to resilver and scrub (even on supposedly Sun-supported Thumper configs). No argument. One of our top engineers is working on this as we speak. I say

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread Will Murnane
On Sat, Apr 5, 2008 at 5:25 AM, Jonathan Loran [EMAIL PROTECTED] wrote: This is scaring the heck out of me. I have a project to create a zpool mirror out of two iSCSI targets, and if the failure of one of them will panic my system, that will be totally unacceptable. I haven't tried this

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread kristof
If you have a mirrored iSCSI zpool, it will NOT panic when one of the submirrors is unavailable. zpool status will hang for some time, but after (I think) 300 seconds it will mark the device as unavailable. The panic was the default in the past, and it only occurs if all devices are unavailable.
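For reference, a mirrored pool over two iSCSI targets can be sketched roughly as follows. This is a sketch under assumptions: the discovery addresses and the c#t#d# device names are placeholders (the real names show up in `format` or `iscsiadm list target` after discovery), and behavior on target loss depends on the driver timeout discussed elsewhere in this thread.

```shell
# Discover the two iSCSI targets (addresses are placeholders)
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm add discovery-address 192.168.1.11:3260
iscsiadm modify discovery --sendtargets enable

# Create a mirrored pool from the two iSCSI-backed devices
# (device names are hypothetical)
zpool create tank mirror c2t600A0B80001234d0 c3t600A0B80005678d0

# If one target drops, the pool should degrade rather than panic;
# once the driver times the device out it shows as UNAVAIL/FAULTED
zpool status -x tank
```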

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-05 Thread Vincent Fox
You DO mean IPMP then. That's what I was trying to sort out: to make sure that you were talking about the IP part of things, the iSCSI layer, and not the paths from the target system to its local storage. You say non-Ethernet for your network transport; what ARE you using? This message posted from opensolaris.org

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-05 Thread Vincent Fox
Oh sure, pick nits. Yeah, I should have said network multipath instead of Ethernet multipath, but really, how often do I encounter non-Ethernet networks? I can't recall the last time I saw a Token Ring or anything else.

Re: [zfs-discuss] Max_Payload_Size

2008-04-05 Thread Brandon High
On Fri, Apr 4, 2008 at 10:53 PM, Marc Bevand [EMAIL PROTECTED] wrote: with him, and I noticed that there are BIOS settings for the PCIe max payload size. The default value is 4096 bytes. I noticed. But it looks like this setting has no effect on anything whatsoever. My guess is that

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread Vincent Fox
I don't think ANY situation in which you are mirrored and one half of the mirror pair becomes unavailable will panic the system. At least this has been the case when I've tested with local storage; I haven't tried with iSCSI yet but will give it a whirl. I had a simple single ZVOL shared over

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-05 Thread Richard Elling
Vincent Fox wrote: You DO mean IPMP then. That's what I was trying to sort out, to make sure that you were talking about the IP part of things, the iSCSI layer, and not the paths from the target system to its local storage. There is more than one way to skin this cat. Fortunately

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-05 Thread Chris Siebenmann
| You DO mean IPMP then. That's what I was trying to sort out, to make
| sure that you were talking about the IP part of things, the iSCSI
| layer. My apologies for my lack of clarity. We are not looking at IPMP multipathing; we are using MPxIO multipathing (mpathadm et al), which operates at
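For readers sorting out the same distinction: MPxIO is enabled and inspected with commands along these lines. A sketch only; the logical-unit name below is hypothetical, and enabling MPxIO renames devices, so check your release's documentation first.

```shell
# Enable MPxIO for all supported controllers
# (updates vfstab and reboots into the multipathed device names)
stmsboot -e

# List multipathed logical units and show the path states for one
# (the LU device path is a made-up example)
mpathadm list lu
mpathadm show lu /dev/rdsk/c0t600A0B8000267CACd0s2
```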

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread Vincent Fox
Follow-up: my initiator did eventually panic. I will have to do some setup to get a ZVOL from another system to mirror with, and see what happens when one of them goes away. Will post in a day or two on that.

Re: [zfs-discuss] ZFS Device fail timeout?

2008-04-05 Thread Ross
To my mind it's a big limitation of ZFS that it relies on the driver timeouts. The driver has no knowledge of what kind of configuration the disks are in, and generally any kind of data loss is bad, so it's not unexpected to see that long timeouts are the norm as the driver does its very best
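On Solaris the usual knob for those driver timeouts is the sd driver's per-command timeout, set in /etc/system. Treat this as an assumption to verify on your release: `sd_io_time` (default 60 seconds) applies only to sd-managed devices, other drivers (including the iSCSI initiator) have their own timeouts, and shortening it affects every sd device on the system.

```shell
# /etc/system fragment: shorten the sd per-command timeout from the
# default 60s (value in seconds; takes effect after a reboot)
set sd:sd_io_time = 20
```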