Re: [zfs-discuss] ZFS and zpool for NetApp FC LUNs

2012-05-16 Thread Brian Wilson

Hi Bruce,

My opinions and two cents are inline.  Take them with appropriate 
amounts of salt ;)


On 05/16/12 04:20 AM, Bruce McGill wrote:

Hi All,

I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered
nodes running Veritas Cluster Server software. For now, the
configuration on NetApp is as follows:

/vol/EBUSApp/EBUSApp            100G    online   MBSUN04 : 0   MBSUN05 : 0
/vol/EBUSBinry/EBUSBinry        200G    online   MBSUN04 : 1   MBSUN05 : 1
/vol/EBUSCtrlog/EBUSCtrlog        5G    online   MBSUN04 : 2   MBSUN05 : 2
/vol/EBUSDB/EBUSDB            300.0G    online   MBSUN04 : 3   MBSUN05 : 3
/vol/EBUSIO_FENC1/EBUSIO_FENC1    5G    online   MBSUN04 : 4   MBSUN05 : 4
/vol/EBUSIO_FENC2/EBUSIO_FENC2    5G    online   MBSUN04 : 5   MBSUN05 : 5
/vol/EBUSIO_FENC3/EBUSIO_FENC3    5G    online   MBSUN04 : 6   MBSUN05 : 6
/vol/EBUSRDlog/EBUSRDlog          5G    online   MBSUN04 : 7   MBSUN05 : 7

I will be mapping the above volumes on the hosts and creating zpools and ZFS
file systems. More volumes will be created in the future.

Is it certified by Symantec to use ZFS and Zpool for SnapMirror?
You have to ask Symantec, and/or read their docs on what they certify as 
supported with Veritas Cluster Server.  If you've got a support contract 
and no time to search their documentation, open a case with them.




At the moment, Veritas Cluster Server software is installed on 2
servers at the primary site and there is no cluster software at the DR
site. The NetApp FAS 3240 provides FC LUNs to the 2 clustered nodes at the
primary site, and the same filer also provides different LUNs to the
server at the DR site. The two clustered nodes at the primary site are
“mbsun4” and “mbsun5”. The server at the DR site is “mbsun6”.

These are the steps that were carried out:

On mbsun4:

We create a ZFS pool from a LUN carved out of the NetApp filer:

# zpool create -f orapool c0t60A98000646E2F6F67346A79642D6570d0

We then create the file system:

# zfs create orapool/vol01

We bring the mount under Veritas Cluster Server control:

# zfs set mountpoint=legacy orapool/vol01

# zpool export orapool
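
With the mountpoint set to legacy, ZFS no longer mounts the dataset itself; it
gets mounted and unmounted explicitly, the way the cluster software (or an
administrator) would do it. A minimal sketch of a manual legacy mount, assuming
a hypothetical mount point of /vol01:

# mkdir -p /vol01
# mount -F zfs orapool/vol01 /vol01
# umount /vol01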

We then execute the following commands on the other cluster node:

On mbsun5:

# zpool import orapool
# zfs create orapool/vol01
# zfs set mountpoint=legacy orapool/vol01

Once configured under Veritas Cluster Server, the ZFS mount and Zpool
will failover among clustered nodes.
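
How the pool and mount are actually represented in VCS is something to take
from Symantec's bundled-agent documentation; as a rough sketch only (the
service group, resource, and attribute names here are assumptions to be
checked against the VCS Bundled Agents Reference Guide), it would look
something like a Zpool resource plus a Mount resource using the legacy ZFS
mount:

# hares -add ora_zpool Zpool oragrp
# hares -modify ora_zpool PoolName orapool
# hares -add ora_mnt Mount oragrp
# hares -modify ora_mnt BlockDevice orapool/vol01
# hares -modify ora_mnt MountPoint /vol01
# hares -modify ora_mnt FSType zfs
# hares -link ora_mnt ora_zpool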

At the DR site (where there is no cluster software), the server is called
“mbsun6”, and we execute the following commands to create a different
zpool and ZFS file system:

# zpool create -f orapool c0t60A98000646E2F6F67346B3145653874d0
# zfs create orapool/vol01
# zfs set mountpoint=/vol01 orapool/vol01

NetApp SnapMirror will be used to replicate the volumes from the
primary site to the DR site, and we want to know if we can use zpools and
ZFS instead of the old UFS file system.

My questions are:

Is it a good idea to use a zpool for these devices and then create a ZFS
file system, or just use a ZFS file system?
Again, if you're using Symantec's clustering software, you have to get 
the answer from them and/or their documentation.



Will replication through NetApp SnapMirror work when we use zpools and ZFS?
I don't use VCS.  Speaking just to replication: I currently virtualize 
FC NetApp LUNs through HDS, and use SnapMirror on the back end to mirror 
to a remote location.  If you're using ZFS, it is *absolutely* essential 
that you're on a recent enough version of ZFS/Solaris to support forcible 
zpool imports (Solaris kernel patches and zpool upgrade are how you get 
there).  SnapMirror, like all storage-based mirroring, ends up being a 
'crash consistent' mirror, i.e. it's like pulling the plug on the disk at 
the moment the snapshot is taken.  On older ZFS versions, if you get a 
certain kind of corruption in your pool and don't have the forcible import 
option, your mirror ends up being useless.
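
As a minimal sketch of what that recovery looks like at the DR site (the pool
name is taken from the steps above; the exact flags available depend on your
ZFS/Solaris release): list the pools visible on the SnapMirrored LUNs, force
the import even though the pool was never cleanly exported, and, if the pool
is damaged, attempt recovery by rolling back the last few transactions:

# zpool import
# zpool import -f orapool
# zpool import -F -f orapool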
Also, the Oracle documentation for databases on ZFS mentions that ZFS on 
dynamically allocated LUNs is not recommended (i.e. thin-provisioned 
NetApp LUNs), because ZFS's algorithms, at the time that document was 
written, were unfriendly to dynamically provisioned LUNs.

Is it certified by Symantec to use ZFS and Zpool for SnapMirror?
Again, not meaning to be rude, but you need to ask Symantec and/or read 
their documentation on VCS.

If zpools can be used, is it a good idea to create a single zpool
and add all the devices, or to create multiple zpools and map each
device to an individual zpool?
This depends on your application and IO load, and on your application 
vendor's recommendations.  For example, there are specific 
recommendations from Oracle for Oracle databases covering how you lay 
out ZFS pools and datasets. 
(http://developers.sun.com/solaris/docs/wp-oraclezfsconfig-0510_ds_ac2.pdf)
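
For what it's worth, the broad shape of those Oracle recommendations is to
separate datafiles from redo logs and to match the dataset recordsize to the
database block size. A hedged sketch only (pool names and device names are
placeholders, and the 8K record size is an assumption; follow the whitepaper
and your DBA's guidance):

# zpool create -f oradatapool c0tDATALUNd0
# zfs create -o recordsize=8k -o mountpoint=/oradata oradatapool/data
# zpool create -f oralogpool c0tREDOLUNd0
# zfs create -o mountpoint=/oralog oralogpool/redo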



Regards,
Bruce

Re: [zfs-discuss] ZFS and zpool for NetApp FC LUNs

2012-05-16 Thread Hung-sheng Tsao
IMHO, just use the whole Veritas stack: VCS with VxVM and VxFS.
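
In rough outline, that all-Veritas alternative looks something like the
following (the disk group and volume names are hypothetical, and the exact
commands vary by Storage Foundation version):

# /etc/vx/bin/vxdisksetup -i c0t60A98000646E2F6F67346A79642D6570d0
# vxdg init oradg oradg01=c0t60A98000646E2F6F67346A79642D6570d0
# vxassist -g oradg make oravol 100g
# mkfs -F vxfs /dev/vx/rdsk/oradg/oravol
# mount -F vxfs /dev/vx/dsk/oradg/oravol /vol01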



Sent from my iPhone

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and zpool for NetApp FC LUNs

2012-05-16 Thread Bruce McGill
Hi All,

I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered
nodes running Veritas Cluster Server software. For now, the
configuration on NetApp is as follows:

/vol/EBUSApp/EBUSApp            100G    online   MBSUN04 : 0   MBSUN05 : 0
/vol/EBUSBinry/EBUSBinry        200G    online   MBSUN04 : 1   MBSUN05 : 1
/vol/EBUSCtrlog/EBUSCtrlog        5G    online   MBSUN04 : 2   MBSUN05 : 2
/vol/EBUSDB/EBUSDB            300.0G    online   MBSUN04 : 3   MBSUN05 : 3
/vol/EBUSIO_FENC1/EBUSIO_FENC1    5G    online   MBSUN04 : 4   MBSUN05 : 4
/vol/EBUSIO_FENC2/EBUSIO_FENC2    5G    online   MBSUN04 : 5   MBSUN05 : 5
/vol/EBUSIO_FENC3/EBUSIO_FENC3    5G    online   MBSUN04 : 6   MBSUN05 : 6
/vol/EBUSRDlog/EBUSRDlog          5G    online   MBSUN04 : 7   MBSUN05 : 7

I will be mapping the above volumes on the hosts and creating zpools and ZFS
file systems. More volumes will be created in the future.

Is it certified by Symantec to use ZFS and Zpool for SnapMirror?

At the moment, Veritas Cluster Server software is installed on 2
servers at the primary site and there is no cluster software at the DR
site. The NetApp FAS 3240 provides FC LUNs to the 2 clustered nodes at the
primary site, and the same filer also provides different LUNs to the
server at the DR site. The two clustered nodes at the primary site are
“mbsun4” and “mbsun5”. The server at the DR site is “mbsun6”.

These are the steps that were carried out:

On mbsun4:

We create a ZFS pool from a LUN carved out of the NetApp filer:

# zpool create -f orapool c0t60A98000646E2F6F67346A79642D6570d0

We then create the file system:

# zfs create orapool/vol01

We bring the mount under Veritas Cluster Server control:

# zfs set mountpoint=legacy orapool/vol01

# zpool export orapool

We then execute the following commands on the other cluster node:

On mbsun5:

# zpool import orapool
# zfs create orapool/vol01
# zfs set mountpoint=legacy orapool/vol01

Once configured under Veritas Cluster Server, the ZFS mount and Zpool
will failover among clustered nodes.

At the DR site (where there is no cluster software), the server is called
“mbsun6”, and we execute the following commands to create a different
zpool and ZFS file system:

# zpool create -f orapool c0t60A98000646E2F6F67346B3145653874d0
# zfs create orapool/vol01
# zfs set mountpoint=/vol01 orapool/vol01

NetApp SnapMirror will be used to replicate the volumes from the
primary site to the DR site, and we want to know if we can use zpools and
ZFS instead of the old UFS file system.

My questions are:

Is it a good idea to use a zpool for these devices and then create a ZFS
file system, or just use a ZFS file system? Will replication through
NetApp SnapMirror work when we use zpools and ZFS?
Is it certified by Symantec to use ZFS and Zpool for SnapMirror?
If zpools can be used, is it a good idea to create a single zpool
and add all the devices, or to create multiple zpools and map each
device to an individual zpool?


Regards,
Bruce
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss