Hi msl,

I took your procedure to one of our AVS specialists, and here is his answer.

Detlef
> Hi Detlef,
>
> I've looked through this, and I think he may misunderstand what
> a reverse sync does. I've CCed Jim(AVS) and Elaine(Odyssey) for
> their thoughts.
>
>
>> here is what i did:
>> 1- primary: The filesystem is on primary node (replicating).
>
> So the secondary will be a copy of the primary.
>
>> 2- primary: unmount, export, put sndr on loggin mode
> Not sure what 'export' here is. 'export' in NFS sense?

The file system in use is ZFS. "export" is "zpool export <storage pool
name>"
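To make the terminology concrete, the primary-side teardown in step 2 would look roughly like this (pool name "tank" and the unqualified sndradm invocation are placeholders, not taken from msl's actual setup):

```shell
# Step 2 on the primary: stop using the pool, then stop replicating.
zpool export tank        # unmounts the pool's datasets and releases the pool
sndradm -l               # put the SNDR set into logging mode
```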

>
>> 3- secondary: import, mount, create two directories
> Why create the directories? If it was a replica of the primary
> the directories would already exist.

He was creating directories on the secondary so that, when the replica makes
it back to the primary, he will have something to see.
In another email I suggested using "mkfile ..." instead of "mkdir ...".
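A sketch of that suggestion, assuming the pool is mounted at /tank (paths are placeholders): mkfile writes actual data blocks, so the change is more likely to show up as replicated I/O than an empty mkdir, which may only touch metadata.

```shell
# On the secondary, after import/mount: create test data that
# generates real block writes.
mkdir /tank/dir1 /tank/dir2
mkfile 10m /tank/dir1/testfile   # allocates a 10 MB file of zeroed blocks
```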

>
>> 4- secondary: unmount, export
> Still unsure of what this is intended to do.

This is a means to stop using the ZFS storage pool on the SNDR secondary.

>
>> 5- primary: reverse sync, import, mount
> Reverse sync means "update the primary with the contents
> of the secondary" and I really doubt if he wants to do this.

So he now wants the "mkdir", or "mkfile", invoked on the secondary to
appear on the primary.
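Step 5 on the primary might look like the following sketch (pool name and the choice between full and update sync are assumptions, not from msl's report):

```shell
# Pull the secondary's contents back onto the primary volume.
sndradm -m -r            # reverse full sync: secondary -> primary
# (or "sndradm -u -r" for a reverse update of changed blocks only)
sndradm -w               # wait for the sync to complete
zpool import tank        # then bring the pool back up on the primary
```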

>
>> 6- primary: sndradm -w...., /usr/bin/ls, and there is no directory..
> Well, if he created two empty directories on the secondary, and then
> did a reverse sync, this is exactly what I'd expect. The primary
> has been overwritten by the empty secondary.

The new directories (or files) should now be present in the ZFS file
system.

>
> Jim can comment more on this, since he also oversees the open AVS stuff.

So the concern here is that he is using ZFS in an environment where it
has not been tested, and without using HAStoragePlus as it was intended.

The concern being that if, on a Sun Cluster node, one configures DID
devices for ZFS, exports the ZFS storage pool, and then imports the
pool on another node, ZFS will import the vdevs on
/dev/rdsk/c?t?d?s?, but SNDR will be looking for I/Os on /dev/did/rdsk/d?s?.

This problem is known to Sun Cluster; the fix is to ensure that if a
"zpool export ..." was done on DID devices, the subsequent "zpool import ..."
is forced onto DID devices, not the default /dev/dsk devices.
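Until that fix is in place, the import can be forced by hand by pointing zpool at the DID namespace (pool name "tank" is a placeholder):

```shell
# Search the DID device directory instead of the default /dev/dsk,
# so the imported vdevs match the devices SNDR is replicating.
zpool import -d /dev/did/dsk tank
```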

Jim




msl wrote:
> Hello,
>  here is what i did:
>  1- primary: The filesystem is on primary node (replicating).
>  2- primary: unmount, export, put sndr on loggin mode
>  3- secondary: import, mount, create two directories
>  4- secondary: unmount, export
>  5- primary: reverse sync, import, mount
>  6- primary: sndradm -w...., /usr/bin/ls, and there is no directory..
>
>  Something wrong in the above procedure?
>
>  Thanks a lot.
> --
>
> This message posted from opensolaris.org
>
> _______________________________________________
> ha-clusters-discuss mailing list
> ha-clusters-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ha-clusters-discuss
>   
