Hi Evaristo,

I spoke with another committer, Anil, and from what we understand, the
process you describe would work.  I am not sure it is the recommended way
to do a restart, but we believe the steps outlined would achieve the
intended outcome.

To clear a serial gateway sender, I believe stopping the sender will clear
its queue. For a parallel gateway sender, however, I think the queue gets
cleared once the sender is restarted (a stop followed by a start).  There
may be other ways, such as destroying the gateway sender, but you'd
probably have to detach it from the region first. Rough gfsh commands are
sketched below.
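
The sender and region ids here are placeholders, so please double-check
the options against your setup:

  # Serial sender: stopping it should clear its queue
  gfsh> stop gateway-sender --id=serial-sender

  # Parallel sender: stop and then start to clear the queue
  gfsh> stop gateway-sender --id=parallel-sender
  gfsh> start gateway-sender --id=parallel-sender

  # To destroy a sender, detach it from its region first (I believe
  # alter region with an empty gateway-sender-id removes the attachment)
  gfsh> alter region --name=/exampleRegion --gateway-sender-id=""
  gfsh> destroy gateway-sender --id=parallel-sender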

This sounds like a use case where a WAN GII feature would be useful; it
would help reduce the number of steps involved.
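
In the meantime, the "make a copy" and "import the copy" steps could be
done per region with gfsh export/import.  The region, file, and member
names below are just examples:

  # On the surviving cluster: snapshot the region to a file
  gfsh> export data --region=/exampleRegion --file=exampleRegion.gfd --member=server1

  # Transfer the .gfd file, then on the recovered cluster:
  gfsh> import data --region=/exampleRegion --file=exampleRegion.gfd --member=server1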

Please chime in if this response is wrong or can be improved.

Thanks,
-Jason

On Tue, Oct 22, 2019 at 1:26 PM evaristo.camar...@yahoo.es <
evaristo.camar...@yahoo.es> wrote:

> Hi there,
>
>
>
> We are planning to use an installation with two Geode clusters connected
> via WAN, using gateway senders/receivers to keep them updated. The main
> reason is resiliency against disasters in a data center.
>
>
>
> It is not clear to us how to recover a data center in case of disaster.
> This is the use case:
>
> - One of the data centers has a problem (natural catastrophe)
>
> - The other data center keeps serving traffic and filling the gateway
> sender queues, which need to be stopped at some point to avoid exhausting
> the disk resources.
>
>
>
> At some point in time, the data center is ready to start recovery, which
> will require synchronizing the Geode copy. The procedure should be
> something like:
>
> - Drain the gateway sender queues in the copy providing service
>
> - Start gateway senders
>
> - Make a copy
>
> - Transfer copy to data center that will be recovered
>
> - Import the copy
>
> - Allow the data center to catch up via replication
>
> - Start the copy again.
>
>
>
> Does it make sense, or is there a better way to do it? If so, is there
> any way to drain gateway senders' queues (both for parallel and serial
> gateway senders)?
>
>
>
> Thanks in advance,
>
>
>
> /Evaristo
>
