Excerpts from joehuang's message of 2017-02-25 04:09:45 +0000:
> Hello, Matt,
> 
> Thank you for your reply. As you mentioned, async replication should work for 
> slowly changing data. My concern is the impact of replication delay, for 
> example (though the chance of it happening is quite low):
> 
> 1) A new user/group/role is added in RegionOne. Before it is replicated to 
> RegionTwo, the new user begins to access a RegionTwo service. Because the data 
> has not arrived yet, the user's request to RegionTwo may be rejected when 
> token validation fails in the local Keystone.
> 

I think this is entirely acceptable. You can even check with your
monitoring system to find out what the current replication lag is to
each region, and notify the user of how long it may take.
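As a sketch of that idea, here is how you might turn a monitored lag figure into a user-facing hint. The region-to-lag mapping is hypothetical; in practice it would come from your monitoring system (e.g. Seconds_Behind_Master sampled on each region's replica):

```python
# Sketch: turn monitored replication lag into a user-facing hint.
# The region->lag mapping is an assumption here; a real deployment
# would populate it from monitoring (e.g. Seconds_Behind_Master).

def propagation_hint(region_lag_seconds, target_region):
    """Tell the user how long new users/roles may take to appear
    in the target region, based on its current replication lag."""
    lag = region_lag_seconds.get(target_region)
    if lag is None:
        return "Replication lag for %s is unknown." % target_region
    if lag == 0:
        return "%s is fully caught up." % target_region
    return ("New users/roles may take up to %d seconds to appear in %s."
            % (lag, target_region))

print(propagation_hint({"RegionTwo": 5}, "RegionTwo"))
```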

> 2) In the token revocation case: if we remove the user's role in RegionOne, 
> the token becomes invalid in RegionOne immediately, but until the removal is 
> replicated to RegionTwo, the user can still use the token to access services 
> in RegionTwo, though the window may be very short.
> 
> Can someone evaluate whether this security risk is acceptable?
> 

The simple answer is that the window between a revocation event being
created, and being ubiquitous, is whatever the maximum replication lag
is between regions. So if you usually have 5 seconds of replication lag,
it will be 5 seconds. If you have a really write-heavy day, and you
suddenly have 5 minutes of replication lag, it will be 5 minutes.

The complicated component is that in async replication, reducing
replication lag is expensive. You don't have many options here. Reducing
writes on the master is one of them, but that isn't easy! Another is
filtering out tables on slaves so that you only replicate the tables
that you will be reading. But if there are lots of replication events,
that doesn't help.
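For the table-filtering option, a replica-side my.cnf fragment would look something like this. This is only a sketch: it assumes Keystone's tables live in the conventional `keystone` schema, so adjust the pattern to your actual layout:

```ini
# Sketch of a replica-side filter, assuming Keystone's tables live
# in the usual `keystone` schema -- adjust to your deployment.
[mysqld]
# Only apply replication events for the Keystone database:
replicate-wild-do-table = keystone.%
```

Note that replicate-wild-do-table filters at apply time on the slave; the full binlog stream is still transferred, which is why this stops helping once the event volume itself is the bottleneck.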

One decent option is to switch to semi-sync replication:

https://dev.mysql.com/doc/refman/5.7/en/replication-semisync.html

That will at least make sure your writes aren't acknowledged until the
binlogs have been transferred everywhere. But if your master can take
writes a lot faster than your slaves, you may never catch up applying
them, no matter how fast the binlogs are transferred.
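Per the MySQL 5.7 manual, enabling semi-sync looks roughly like this (a sketch, not a full rollout procedure -- plugin file names vary by platform, e.g. .dll on Windows):

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
-- How long (ms) the master waits for a slave ack before falling
-- back to plain async replication:
SET GLOBAL rpl_semi_sync_master_timeout = 1000;

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```

One caveat: by default the master only waits for an ack from one slave (tunable in 5.7 via rpl_semi_sync_master_wait_for_slave_count), and an ack means the event was received, not that it has been applied.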

The key is to evaluate your requirements and think through these
solutions. Good luck! :)

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev