Thanks very much for your interest, but I don't think I fully understand
the response (if I found one)...
My comments are below:
2007/8/30, Detlef Ulherr <Detlef.Ulherr at sun.com>:
Hi msl,
I took your procedure to one of our AVS specialists and here is his answer.
Detlef
> Hi Detlef,
>
> I've looked through this, and I think he may misunderstand what
> a reverse sync does. I've CCed Jim (AVS) and Elaine (Odyssey) for
> their thoughts.
>
>
>> Here is what I did:
>> 1- primary: The filesystem is on the primary node (replicating).
>
> So the secondary will be a copy of the primary.
>
>> 2- primary: unmount, export, put SNDR into logging mode
> Not sure what 'export' means here. 'export' in the NFS sense?
The file system in use is ZFS. "export" means "zpool export <storage pool
name>".
Correct.
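For reference, step 2 on the primary looked roughly like this ("tank" is just
an example pool name, and "-n" only skips the confirmation prompt):

   primary# zpool export tank   (the export itself unmounts the pool's filesystems)
   primary# sndradm -n -l       (put the SNDR set(s) into logging mode)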
>
>> 3- secondary: import, mount, create two directories
> Why create the directories? If it was a replica of the primary
> the directories would already exist.
He was creating directories on the secondary, so when the replica makes
it back to the primary, he will have something to see.
In another email I suggested using "mkfile ..." instead of "mkdir ...".
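Roughly, step 3 on the secondary would then look like this (again, "tank" is
just an example pool name):

   secondary# zpool import tank            (the import also mounts the filesystems)
   secondary# mkfile 100m /tank/testfile   (something to look for after the reverse sync)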
Ok, that is the point... I want to use the AVS software to implement an HA
solution... something like:
http://blogs.sun.com/AVS/entry/avs_and_zfs_seamless
So, if I'm replicating a ZFS pool to another "host" and the primary host goes
down, I want to be able to continue the service on the secondary one... right?
So, the "mkfile" is not the point; I just want to use the filesystem
(applications).
>
>> 4- secondary: unmount, export
> Still unsure of what this is intended to do.
This is a means to stop using the ZFS storage pool on the SNDR secondary.
Correct.
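So step 4 on the secondary was simply (same example pool name):

   secondary# zpool export tank   (stop all I/O to the pool before the reverse sync)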
>
>> 5- primary: reverse sync, import, mount
> Reverse sync means "update the primary with the contents
> of the secondary" and I really doubt if he wants to do this.
Yes, I want to do this. After the "problem" with the primary node is fixed, I
need to switch the filesystem back to the primary node, keeping all the
changes. So the secondary will be there for the next "problem time". :)
So he now wants the "mkdir", or "mkfile", invoked on the secondary, to
appear on the primary.
Let me make it clearer... I just want the primary volume updated. The system
may have been running on the secondary volumes for a day, a month, a year...
I think that is the AVS intent... am I wrong?
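For reference, step 5 on the primary was along these lines ("-m -r" requests a
full reverse sync; an update reverse sync, "sndradm -u -r", should also work if
the bitmaps are intact):

   primary# sndradm -n -m -r   (reverse sync: copy the secondary back to the primary)
   primary# zpool import tank  (mirrors the import/mount in my step 5)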
>
>> 6- primary: sndradm -w...., /usr/bin/ls, and there is no directory...
> Well, if he created two empty directories on the secondary, and then
> did a reverse sync, this is exactly what I'd expect. The primary
> has been overwritten by the empty secondary.
I think you can see above; please think about it again...
The new directories (or files) should not be present in the ZFS file
system.
I guess that is wrong... I want my files! :))
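Just to be complete, step 6 was basically (with the same example pool name):

   primary# sndradm -n -w   (wait for the reverse sync to finish)
   primary# ls /tank        (the new directories/files should be here... but were not)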
>
> Jim can comment more on this, since he also oversees the open AVS stuff.
So the concern here is that he is using ZFS in an environment where it
has not been tested, and without using HAStoragePlus as it was intended.
Here I don't understand anything... what does the filesystem have to do with
the AVS solution??
I'm trying to "assemble" components to create a solution (sorry for my
English). With AVS doing the replication "thing", I don't see the "conflict"
with ZFS. And why are you talking about HAStoragePlus????
The concern is that if, on a Sun Cluster node, one configures DID
devices for ZFS, exports the ZFS storage pool, and then on another node
imports the ZFS storage pool, ZFS will import the vdevs on
/dev/rdsk/c?t?d?s?, but SNDR will be looking for I/Os on /dev/did/rdsk/d?s?.
Ok, I think this is hacker stuff... but what I can understand from it is:
the ZFS pool is not on a DID device... I mean, not global. Maybe the problem
is that I can't configure the AVS software (in cluster mode) if it is not. I
think AVS knows that it is in cluster mode, but I just want a simple
replication... like between two independent hosts. That "simple" behavior
would do the trick, I guess.
Sorry if I misunderstood this point.
This problem is known to Sun Cluster; there is a fix so that if a
"zpool export ..." was done on DID devices, the "zpool import ..." will be
forced on DID devices, not the default /dev/dsk devices.
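In command form, that forced import would be something like ("tank" again
being an example pool name):

   # zpool import -d /dev/did/dsk tank   (search for the vdevs under the DID paths instead of /dev/dsk)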
Here I miss the point.
If you want to understand the whole solution, here it is:
http://www.posix.brte.com.br/blog/?p=73
http://www.posix.brte.com.br/blog/?p=75
Thanks a lot!!
Leal.
--
This message posted from opensolaris.org