Re: [zfs-discuss] Per filesystem scrub

2008-04-04 Thread Jeff Bonwick
> Aye, or better yet -- give the scrub/resilver/snap reset issue fix very
> high priority. As it stands snapshots are impossible when you need to
> resilver and scrub (even on supposedly Sun-supported Thumper configs).

No argument. One of our top engineers is working on this as we speak. I say

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-04 Thread Tim
On Sat, Apr 5, 2008 at 12:25 AM, Jonathan Loran <[EMAIL PROTECTED]> wrote:
> This guy seems to have had lots of fun with iSCSI :)
> http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html

This is scaring the h

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-04 Thread Jonathan Loran
> This guy seems to have had lots of fun with iSCSI :)
> http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html

This is scaring the heck out of me. I have a project to create a zpool mirror out of two iSCSI targets, and if the failure of one of them will panic my system, that wil

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-04 Thread Chris Siebenmann
| I assume you mean IPMP here, which refers to ethernet multipath.
|
| There is also the other meaning of multipath referring to multiple
| paths to the storage array typically enabled by stmsboot command.

We are currently looking at (and testing) the non-ethernet sort of multipathing, partly as

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-04 Thread Neil Perrin
ZFS will handle out-of-order writes due to its transactional nature. Individual writes can be re-ordered safely. When the transaction commits, it will wait for all writes and flush them; then write a new uberblock with the new transaction group number and flush that.

Chris Siebenmann wrote:
> We're
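Neil's ordering argument can be sketched in a few lines. The names below (`Disk`, `commit_txg`) are hypothetical illustrations, not ZFS source code; the point is only that the uberblock is written and flushed strictly after every write in the group is durable, so reordering within the group cannot matter:

```python
# Hypothetical sketch of ZFS-style transaction-group commit ordering.
# `Disk`, `commit_txg`, and the block names are made up for illustration.

class Disk:
    def __init__(self):
        self.stable = {}  # what would survive a crash (on-media state)
        self.cache = {}   # issued writes the device may still reorder/delay

    def write(self, block, data):
        self.cache[block] = data        # order within the cache is irrelevant

    def flush(self):
        self.stable.update(self.cache)  # cache flush: make all writes durable
        self.cache.clear()

def commit_txg(disk, txg, blocks):
    # 1. Issue all data/metadata writes for the group; order doesn't matter.
    for block, data in blocks.items():
        disk.write(block, data)
    # 2. Flush so every write in the group is durable.
    disk.flush()
    # 3. Only now point the uberblock at the new transaction group number...
    disk.write('uberblock', txg)
    # 4. ...and flush again so the uberblock itself is durable.
    disk.flush()

disk = Disk()
commit_txg(disk, txg=1, blocks={'a': 'old'})
commit_txg(disk, txg=2, blocks={'a': 'new'})
```

A crash at any point leaves the uberblock referring to a transaction group whose writes were already flushed, which is the safety property Neil describes.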

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-04 Thread Vincent Fox
I assume you mean IPMP here, which refers to Ethernet multipath. There is also the other meaning of multipath, referring to multiple paths to the storage array, typically enabled by the stmsboot command. We run active-passive (failover) IPMP as it keeps things simple for us, and I have run into some w
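For reference, an active-passive IPMP setup of the kind Vincent describes is usually expressed on Solaris 10 in `/etc/hostname.<interface>` files. The interface names and address below are made-up examples, not from the thread:

```
# /etc/hostname.bge0 -- active interface (example data address)
10.0.0.5 netmask + broadcast + group ipmp0 up

# /etc/hostname.bge1 -- standby interface, no data address
group ipmp0 standby up
```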

Re: [zfs-discuss] Simple monitoring of ZFS pools, email alerts?

2008-04-04 Thread Aaron Epps
Come on, there's got to be a simple way to do this...

Re: [zfs-discuss] ZFS and multipath with iSCSI

2008-04-04 Thread Victor Engle
In /kernel/drv/scsi_vhci.conf you could do this:

load-balance="none";

That way mpxio would use only one device. I imagine you need a vid/pid entry also in scsi_vhci.conf for your target.

Regards,
Vic

On Fri, Apr 4, 2008 at 3:36 PM, Chris Siebenmann <[EMAIL PROTECTED]> wrote:
> We're currently

[zfs-discuss] ZFS and multipath with iSCSI

2008-04-04 Thread Chris Siebenmann
We're currently designing a ZFS fileserver environment with iSCSI-based storage (for failover, cost, ease of expansion, and so on). As part of this we would like to use multipathing for extra reliability, and I am not sure how we want to configure it. Our iSCSI backend only supports multiple ses

Re: [zfs-discuss] Can not add ZFS LOG devices

2008-04-04 Thread Cindy . Swearingen
Hi Mertol,

Log devices aren't supported in the Solaris 10 release yet. You would have to run a Solaris Express version to configure log devices, such as SXDE 9/07 or SXDE 1/08, described here:

http://docs.sun.com/app/docs/doc/817-2271/gfgaa?a=view

cs

Mertol Ozyoney wrote:
> Hi All ;

[zfs-discuss] Can not add ZFS LOG devices

2008-04-04 Thread Mertol Ozyoney
Hi All;

I am a newbie on Solaris and ZFS. I am setting up a zpool on a Thumper (Sun Fire X4500 with 48 internal SATA drives). Our engineers have set up the latest Solaris 10. When I try to create a zpool with log disks (mirrored) included, I get an error message from where I unders
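For context, the create-time syntax for a pool with a mirrored log device looks like the sketch below. Device names are placeholders, and as Cindy's reply explains, this only works on Solaris Express releases, not the Solaris 10 of that era:

```
# Mirrored data disks plus a mirrored ZIL log device (placeholder names):
zpool create tank mirror c0t0d0 c1t0d0 log mirror c4t0d0 c5t0d0
```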

Re: [zfs-discuss] Simple monitoring of ZFS pools, email alerts?

2008-04-04 Thread John.Stewart
Thanks for the responses... it does sound like there is nothing built in, so it's a situation where you have to roll your own. A colleague pointed me at this:

http://prefetch.net/code/fmadmnotifier

I'm testing it now; seems like it might do the trick nicely.

johnS
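A minimal roll-your-own alternative in the spirit of the thread: poll `zpool status -x` from cron and send mail only when its output differs from the healthy one-liner. The sketch below is hypothetical (the function names are made up, and the actual mailing step is left as a comment); only the check logic is shown:

```python
# Hypothetical cron-driven pool health check (not fmadmnotifier itself).
import subprocess

HEALTHY = "all pools are healthy"

def pools_unhealthy(status_output):
    # `zpool status -x` prints only the one-liner when everything is fine.
    return status_output.strip() != HEALTHY

def check_and_report(run=subprocess.run):
    out = run(["zpool", "status", "-x"],
              capture_output=True, text=True).stdout
    if pools_unhealthy(out):
        # Here you would pipe `out` to mailx/sendmail; elided in this sketch.
        return out
    return None
```

Run from cron every few minutes; only degraded or faulted pools generate a report.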

Re: [zfs-discuss] zfs device busy

2008-04-04 Thread Roch Bourbonnais
On 30 Mar 08 at 15:57, Kyle McDonald wrote:
> Fred Oliver wrote:
>> Marion Hakanson wrote:
>>> [EMAIL PROTECTED] said:
>>>> I am having trouble destroying a zfs file system (device busy) and fuser isn't telling me who has the file open: . . . This situation appears to occur e

[zfs-discuss] ZFS volumes in non-global zones

2008-04-04 Thread Ralph Bogendoerfer - Sun Solution Center - Benchmarking Group
Hi all,

just a quick question about ZFS volumes inside non-global zones. The zfs manpage seems a bit paradoxical about it -- can it be used or not? The manpage zfs(1M) says for volumes:

    volume    A logical volume exported as a block device. This type of dataset
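If the goal is to make a zvol visible inside a non-global zone, the usual route on Solaris 10 is to delegate its device node with a zonecfg device match. This is a hedged sketch; the pool name, volume name, and zone name are made up:

```
# In the global zone -- expose the volume's device node to the zone:
zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set match=/dev/zvol/rdsk/tank/myvol
zonecfg:myzone:device> end
zonecfg:myzone> exit
```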