[zfs-discuss] Physical Clone of zpool

2006-09-18 Thread Mika Borner
Hi, we have the following scenario/problem: our zpool resides on a single LUN on a Hitachi Storage Array. We are thinking about making a physical clone of the zpool with the ShadowImage functionality. ShadowImage takes a snapshot of the LUN and copies all the blocks to a new LUN (physical copy). In
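
A minimal sketch of one conservative way to sequence this, assuming the pool can briefly be taken offline around the copy; the pool name tank is hypothetical and nothing here is Hitachi-specific:

  # quiesce ZFS so the LUN is in a stable, exported on-disk state
  zpool export tank
  # ... take the ShadowImage physical copy of the LUN on the array ...
  # then bring the original pool back online
  zpool import tank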

[zfs-discuss] Re: Re: low disk performance

2006-09-18 Thread Gino Ruopolo
Hi Chris, both servers have the same setup: OS on a local hw RAID mirror, other filesystems on a SAN. We found really bad performance, but also that under that heavy I/O the zfs pool was effectively frozen. I mean, a zone living on the same zpool was completely unusable because of the I/O load. We use FSS, but
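
A hedged sketch of standard commands that can help characterize this kind of stall; the pool name zpool1 is taken from the later post in this thread, and the 5-second interval is arbitrary:

  # per-vdev bandwidth and operations, sampled every 5 seconds
  zpool iostat -v zpool1 5
  # device-level service times on the SAN LUNs for comparison
  iostat -xn 5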

[zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Hans-Joerg Haederli - Sun Switzerland Zurich - Sun Support Services
Hi colleagues, IHAC who wants to use ZFS with his HDS box. He asks now how he can do the following:
- Create ZFS pool/fs on HDS LUNs
- Create copy with ShadowImage inside HDS
- Disconnect ShadowImage
- Import ShadowImage with ZFS in addition to the existing ZFS pool/fs
I wonder how ZFS is
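
A minimal sketch of the import step, assuming the ShadowImage copy is presented to a different host; importing the copy alongside the original on the same host is exactly the contested case in this thread. The pool name tank is hypothetical:

  # on the second host: scan attached devices for importable pools
  zpool import
  # -f is typically needed, since the copy still records the
  # original host as its last user
  zpool import -f tank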

[zfs-discuss] Re: Re: low disk performance

2006-09-18 Thread Gino Ruopolo
Hi Gino, Can you post the 'zpool status' for each pool and 'zfs get all' for each fs? Any interesting data in the dmesg output? Sure. 1) Nothing in dmesg (are you thinking about shared IRQ?) 2) Only using one pool for the tests:
  # zpool status
    pool: zpool1
   state: ONLINE
   scrub: none

[zfs-discuss] Re: Re: low disk performance

2006-09-18 Thread Gino Ruopolo
"We use FSS, but CPU load was really load under the tests." Errata: that should read "We use FSS, but CPU load was really LOW under the tests."

[zfs-discuss] Re: Re: Bizarre problem with ZFS filesystem

2006-09-18 Thread Anantha N. Srirama
I don't see a patch for this on the SunSolve website. I've opened a service request to get this patch for Sol10 06/06. Stay tuned.

Re: [zfs-discuss] ZFS layout on hardware RAID-5?

2006-09-18 Thread Bill Sommerfeld
I would go with: (3) Three 4D+1P h/w RAID-5 groups, no hot spare, mapped to one LUN each. Set up a ZFS pool of one RAID-Z group consisting of those three LUNs. Only ~3200GB of available space, but what looks like very good resiliency in the face of multiple disk failures. IMHO building
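
A hedged sketch of the layout being recommended here; the pool name and the three device names (one per hardware RAID-5 LUN) are hypothetical:

  # one RAID-Z group striped across the three hardware RAID-5 LUNs
  zpool create tank raidz c2t0d0 c2t1d0 c2t2d0
  zpool status tank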

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-18 Thread Richard Elling - PAE
Robert Milkowski wrote: Hello James, I believe that storing hostid, etc. in a label and checking if it matches on auto-import is the right solution. Before it's implemented you can use -R right now with home-clusters and not worry about auto-import. hostid isn't sufficient (got a scar), so
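
A minimal sketch of the -R (alternate root) workaround mentioned; the pool and path names are hypothetical. A pool imported with an alternate root is treated as temporary and is not automatically re-imported at the next boot, which is what makes it useful for home-grown clusters:

  # import under an alternate root; this import is not persisted,
  # so the other cluster node will not auto-import the pool at boot
  zpool import -R /failover tank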

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 02:20:24PM -0400, Torrey McMahon wrote: 1 - ZFS is self-consistent, but if you take a LUN snapshot then any transactions in flight might not be completed, and the pool (which you need to snap in its entirety) might not be consistent. The more LUNs you have in the

RE: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Ellis, Mike
It's a valid use case in the high-end enterprise space. While it probably makes good sense to use ZFS for snapshot creation, there are still cases where array-based snapshots/clones/BCVs make sense. (DR/Array-based replication, data-verification, separate spindle-pool, legacy/migration reasons,

[zfs-discuss] drbd using zfs send/receive?

2006-09-18 Thread Jakob Praher
hi everyone, I am planning on creating a local SAN via NFS(v4) and several redundant nodes. I have been using DRBD on Linux before and now am asking whether some of you have experience with on-demand network filesystem mirrors. I have little Solaris sysadmin know-how yet, but I am
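
A hedged sketch of a DRBD-like replication loop built from zfs send/receive; the hostnames, dataset names, and snapshot names are all hypothetical:

  # initial full copy to the standby node
  zfs snapshot tank/data@rep1
  zfs send tank/data@rep1 | ssh standby zfs receive tank/data
  # later: ship only the changes since the last replicated snapshot
  zfs snapshot tank/data@rep2
  zfs send -i tank/data@rep1 tank/data@rep2 | ssh standby zfs receive tank/data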

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Joerg Haederli
I'm really not an expert on ZFS, but at least from my point of view, to handle such cases ZFS has to handle at least the following points:
- GUID: a new/different GUID has to be assigned
- LUNs: ZFS has to be aware that device trees are different, if these are part of some kind of metadata stored

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 03:29:49PM -0400, Jonathan Edwards wrote: err... I believe the point is that you will have multiple disks claiming to be the same disk, which can wreak havoc on a system (e.g., I've got a 4 disk pool with a unique GUID and 8 disks claiming to be part of that same

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote: I'm really not an expert on ZFS, but at least from my point of view, to handle such cases ZFS has to handle at least the following points - GUID: a new/different GUID has to be assigned. As I mentioned previously, ZFS handles this

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Torrey McMahon
Eric Schrock wrote: On Mon, Sep 18, 2006 at 10:06:21PM +0200, Joerg Haederli wrote: It looks as if this has not been implemented yet nor even tested. What hasn't been implemented? As far as I can tell, this is a request for the previously mentioned RFE (ability to change GUIDs on

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Torrey McMahon
Torrey McMahon wrote: A day later I turn the host off. I go to the array and offer all six LUNs, the pool that was in use as well as the snapshot that I took a day previously, and offer all three LUNs to the host. Errr... that should be: A day later I turn the host off. I go to the

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Richard Elling - PAE
Joerg Haederli wrote: I'm really not an expert on ZFS, but at least from my point of view, to handle such cases ZFS has to handle at least the following points:
- GUID: a new/different GUID has to be assigned
- LUNs: ZFS has to be aware that device trees are different, if these are part of some

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Darren Dunham
In my experience, we would not normally try to mount two different copies of the same data at the same time on a single host. To avoid confusion, we would especially not want to do this if the data represents two different points of time. I would encourage you to stick with more

Re: [zfs-discuss] drbd using zfs send/receive?

2006-09-18 Thread Frank Cusack
On September 18, 2006 5:45:08 PM +0200 Jakob Praher [EMAIL PROTECTED] wrote: hi everyone, I am planning on creating a local SAN via NFS(v4) and several redundant nodes. huh. How do you create a SAN with NFS? I have been using DRBD on linux before and now am asking whether some of you

[zfs-discuss] Re: Re: zfs clones

2006-09-18 Thread Jan Hendrik Mangold
The initial idea was to make a dataset/snapshot and clone (fast) and then separate the clone from its snapshot. The clone could then be used as a new independent dataset. The send/receive subcommands are probably the only way to duplicate a dataset. I'm still not sure I understand
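
A hedged sketch of the send/receive approach being referred to, which yields a copy with no dependency on the source snapshot (all names hypothetical):

  zfs snapshot tank/A@dup
  # a clone would stay tied to tank/A@dup; a send/receive copy does not
  zfs send tank/A@dup | zfs receive tank/B
  # the copy gets its own @dup snapshot, which can simply be dropped
  zfs destroy tank/B@dup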

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-18 Thread Richard Elling - PAE
[apologies for being away from my data last week] David Dyer-Bennet wrote: The more I look at it, the more I think that a second copy on the same disk doesn't protect against very much real-world risk. Am I wrong here? Are partial (small) disk corruptions more common than I think? I don't have
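
For reference, the proposal under discussion exposes the extra copies as a per-dataset property; this is the interface as proposed at the time, not necessarily what eventually shipped, and the dataset name is hypothetical:

  # request two ditto-style copies of user data for this filesystem,
  # regardless of the pool's vdev redundancy (proposed interface)
  zfs set copies=2 tank/home
  zfs get copies tank/home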

Re: [zfs-discuss] Re: zfs clones

2006-09-18 Thread Mike Gerdts
On 9/1/06, Matthew Ahrens [EMAIL PROTECTED] wrote: Marlanne DeLaSource wrote: Thanks for all your answers. The initial idea was to make a dataset/snapshot and clone (fast) and then separate the clone from its snapshot. The clone could then be used as a new independent dataset. The

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-18 Thread David Dyer-Bennet
On 9/18/06, Richard Elling - PAE [EMAIL PROTECTED] wrote: [apologies for being away from my data last week] David Dyer-Bennet wrote: The more I look at it, the more I think that a second copy on the same disk doesn't protect against very much real-world risk. Am I wrong here? Are

Re: [zfs-discuss] ZFS and HDS ShadowImage

2006-09-18 Thread Eric Schrock
On Mon, Sep 18, 2006 at 06:03:47PM -0400, Torrey McMahon wrote: It's not the transport layer. It works fine as the LUN IDs are different and the devices will come up with different /dev/dsk entries. (And if not, then you can fix that on the array in most cases.) The problem is that devices

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-18 Thread Richard Elling - PAE
more below... David Dyer-Bennet wrote: On 9/18/06, Richard Elling - PAE [EMAIL PROTECTED] wrote: [apologies for being away from my data last week] David Dyer-Bennet wrote: The more I look at it, the more I think that a second copy on the same disk doesn't protect against very much

Re: [zfs-discuss] Re: Re: zfs clones

2006-09-18 Thread Matthew Ahrens
Jan Hendrik Mangold wrote: I didn't ask the original question, but I have a scenario where I want to use clone as well and encounter a (designed?) behaviour I am trying to understand. I create a filesystem A with ZFS and modify it to a point where I create a snapshot [EMAIL PROTECTED]. Then I
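
A hedged sketch of the clone/snapshot dependency that likely drives the behaviour being asked about; the snapshot name @snap is hypothetical, standing in for the redacted name above:

  zfs snapshot tank/A@snap
  zfs clone tank/A@snap tank/B
  # the origin snapshot cannot be destroyed while the clone exists;
  # zfs refuses with a "dependent clones" style error here
  zfs destroy tank/A@snap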

Re: [zfs-discuss] Re: zfs clones

2006-09-18 Thread Matthew Ahrens
Mike Gerdts wrote: A couple of scenarios from environments that I work in, using legacy file systems and volume managers: 1) Various test copies need to be on different spindles to remove any perceived or real performance impact imposed by one or the other. Arguably, by having the I/O activity