Re: [zfs-discuss] Help replacing dual identity disk in ZFS raidz and SVM mirror

2007-12-06 Thread Matt B
Anyone? I really need some help here.

This message posted from opensolaris.org
zfs-discuss mailing list: zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] Help replacing dual identity disk in ZFS raidz and SVM mirror

2007-12-03 Thread Matt B
Any help, or a pointer to good documentation, would be much appreciated. Thanks, Matt B. Below I included a metastat dump:

d3: Mirror
    Submirror 0: d13
      State: Okay
    Submirror 1: d23
      State: Needs maintenance
    Submirror 2: d33
      State: Okay
    Submirror 3: d43
      State: Okay
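For a disk that serves double duty as an SVM submirror component and a ZFS raidz/pool device, the replacement has to be done on both sides. A rough sketch of the usual sequence follows; the device names (c1t0d0, c1t1d0), the slice assignments, and the pool name (tank) are assumptions for illustration, not taken from the thread.

```shell
# 1. Detach the failed submirror (d23) from the SVM mirror:
metadetach -f d3 d23

# 2. Physically replace the disk, then copy the partition table
#    from the surviving disk onto the new one:
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

# 3. Recreate the submirror metadevice and reattach it so SVM resyncs:
metainit d23 1 1 c1t1d0s3
metattach d3 d23

# 4. Tell ZFS to resilver its slice of the new disk:
zpool replace tank c1t1d0s5
```

Progress of the two resyncs can then be watched with `metastat d3` and `zpool status tank` respectively.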

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-25 Thread Matt B
The 4 database servers are part of an Oracle RAC configuration. Three databases are hosted on these servers: BIGDB1 on all 4, littledb1 on the first 2, and littledb2 on the last 2. The Oracle backup system spawns DB backup jobs that could occur on any node based on traffic and load. All nodes are

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-25 Thread Matt B
I'm not sure what I mean.

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-25 Thread Matt B
Here is what seems to be the best course of action, assuming IP over FC is supported by the HBAs (which I am pretty sure it is, since this is all brand-new equipment): mount the shared backup LUN on node 1 via the FC link to the SAN as a non-redundant ZFS volume. On node 1, RMAN (Oracle

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Matt B
Can't use the network, because these 4 hosts are database servers that will be dumping close to a terabyte every night. If we put that over the network, all the other servers would be starved.

[zfs-discuss] Re: /tmp on ZFS?

2007-03-23 Thread Matt B
Well, I am aware that /tmp can be mounted on swap as tmpfs, and that this is really fast since almost all writes go straight to memory, but this is of little to no value to the server in question. The server in question is running 2 enterprise third-party applications. No compilers are

[zfs-discuss] Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
Ok, so you are suggesting that I simply mount /tmp as tmpfs on my existing 8GB swap slice and then put the VM limit on /tmp? Will that limit only affect users writing data to /tmp, or will it also affect the system's use of swap?

[zfs-discuss] Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
For reference, here is my disk layout currently (one disk of two, but both are identical). s4 is for the metadb; s5 is dedicated to ZFS.

partition> print
Current partition table (original):
Total disk cylinders available: 8921 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders

[zfs-discuss] Re: Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
Ok, since I already have an 8GB swap slice I'd like to use, what would be the best way of setting up /tmp on this existing swap slice as tmpfs and then applying the 1GB quota limit? I know how to get rid of the zpool/tmp filesystem in ZFS, but I'm not sure how to actually get to the above in a

[zfs-discuss] Re: Re: Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
And just doing this will automatically target my /tmp at my 8GB swap slice on s1, as well as putting the quota in place?
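For reference, the standard Solaris way to get a size-capped tmpfs /tmp is a single /etc/vfstab entry. The 1024m figure below assumes the 1GB quota discussed earlier in the thread:

```shell
# /etc/vfstab entry: /tmp as tmpfs, capped at 1GB
# device  device-to-fsck  mount-point  fstype  pass  mount-at-boot  options
swap      -               /tmp         tmpfs   -     yes            size=1024m
```

Note that tmpfs draws from the system's pool of anonymous memory, which includes the swap slice, so no particular device (such as s1) needs to be named in the entry; the size option only caps how much of that pool /tmp may consume.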

[zfs-discuss] Re: Re: Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
Oh, one other thing: s1 (8GB swap) is part of an SVM mirror (on d1).

[zfs-discuss] Re: Re: Re: Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
Worked great. Thanks

[zfs-discuss] ZFS mount fails at boot

2007-03-22 Thread Matt B
I have about a dozen two-disk systems that were all set up the same, using a combination of SVM and ZFS:

s0 = / (SVM mirror)
s1 = swap
s3 = /tmp
s4 = metadb
s5 = ZFS mirror

The system does boot, but once it gets to ZFS, ZFS fails, and all subsequent services fail as well (including ssh)

[zfs-discuss] /tmp on ZFS?

2007-03-22 Thread Matt B
Is this something that should work? The assumption is that there is a dedicated raw swap slice, and that after install /tmp (which will be on /) will be unmounted and mounted to zpool/tmp (just like zpool/home). Thoughts on this?

[zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Matt B
Thanks for the responses. There is a lot there I am looking forward to digesting. Right off the bat, though, I wanted to bring up something I found just before reading this reply, as the answer to this question would automatically answer some other questions. There is a ZFS best practices wiki at

[zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Matt B
So it sounds like the consensus is that I should not worry about using slices with ZFS, and that the swap best practice doesn't really apply to my situation of a 4-disk x4200. So in summary (please confirm), this is what we are saying is a safe bet for use in a highly available production
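One common way to carve up a 4-disk box along those lines is an SVM-mirrored root on s0 of the first two disks, with ZFS getting the remaining space as two mirrored pairs. The sketch below uses hypothetical device names (c0t0d0 through c0t3d0), slice numbers, and pool name; they are illustrations, not the layout agreed in the thread.

```shell
# s0 of disks 0 and 1 holds the SVM root mirror (d0);
# s5 of disks 0 and 1, plus the whole of disks 2 and 3, go to ZFS
# as a pool of two mirrored pairs:
zpool create datapool \
    mirror c0t0d0s5 c0t1d0s5 \
    mirror c0t2d0 c0t3d0
```

With two mirrored vdevs the pool survives the loss of one disk from each pair, which fits the high-availability requirement better than a single wide stripe would.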

[zfs-discuss] ZFS/UFS layout for 4 disk servers

2007-03-06 Thread Matt B
I am trying to determine the best way to move forward with about 35 x86 X4200s. Each box has 4x 73GB internal drives. All the boxes will be built using Solaris 10 11/06. Additionally, these boxes are part of a highly available production environment with an uptime expectation of six 9's (just a