On Fri, Feb 17, 2017 at 11:22 AM, Clint Byrum <cl...@fewbar.com> wrote:

> Excerpts from 王玺源's message of 2017-02-17 14:08:30 +0000:
> > Hi David:
> >
> > We have not found a perfect solution to the fernet performance
> > issue; we will try different crypt strength settings with fernet in
> > the future.
> >
>
> One important thing: did you try throwing more hardware at Keystone?
> Keystone instances are almost entirely immutable (the fernet keys
> are the only mutable part), which makes it pretty easy to scale them
> horizontally as-needed. Your test has a static 3 nodes, but you didn't
> include system status, so we don't know if the CPUs were overwhelmed,
> or how many database nodes you had, what its level of activity was, etc.
>

+1

Several folks in the community have tested token performance using a
variety of hardware and configurations. Sharing your specific setup might
reveal similarities to environments others have used. If not, we at least
have an environment description that we can use to reproduce the issues
you're seeing first-hand.
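To make Clint's point about immutability concrete: fernet-style tokens carry their payload inside the token and are validated purely with the shared repository of signing keys, so those keys are the only state that must be replicated between Keystone instances. Below is a deliberately simplified, HMAC-only sketch of that idea; keystone's real fernet provider uses authenticated encryption via the cryptography library, and all names here are illustrative:

```python
import base64
import hashlib
import hmac
import json

# In real keystone this is the fernet key repository
# (/etc/keystone/fernet-keys) -- the only mutable state to sync.
SECRET = b"shared-fernet-like-key"

def issue(payload: dict) -> bytes:
    """Encode the payload into the token itself, then sign it."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + sig

def validate(token: bytes) -> dict:
    """Validation needs only SECRET -- no database lookup."""
    body, sig = token.rsplit(b".", 1)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(body))
```

Any node holding the same keys can validate a token minted elsewhere, which is why horizontal scaling of Keystone is mostly a matter of adding nodes and syncing the key repository.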


>
> >
> >
> > Multiple customers have cascades of more than six regions, and how to
> > synchronize keystone data between these regions has troubled us a
> > lot. There is no need to synchronize this data when using PKI tokens,
> > because the PKI token includes the role information.
> >
>
> The amount of mutable data to synchronize between datacenters with Fernet
> is the fernet keys. If you set up region-local caches, you should be
> able to ship queries back to a central database cluster and not have to
> worry about a painful global database cluster, since you'll only feel
> the latency of those cross-region queries when your caches are cold.
>
> However, I believe work was done to allow local read queries to be sent
> to local slaves, so you can use traditional MySQL replication if the
> cold-cache latency is too painful.
>
> Replication lag becomes a problem if you get a ton of revocation events,
> but this lag's consequences are pretty low, with the worst effect being a
> larger window for stolen, revoked tokens to be used. Caching also keeps
> that window open longer, so it becomes a game of tuning that window
> against desired API latency.
>
>
Good point, Clint. We also merged a patch in Ocata that helped improve
token validation performance, which was not proposed as a stable backport:

https://github.com/openstack/keystone/commit/9e84371461831880ce5736e9888c7d9648e3a77b
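For the region-local caching Clint described, the shape of the configuration is roughly the following. This is an illustrative keystone.conf fragment only; the option names come from keystone's oslo.cache integration, but verify them against your release before relying on them:

```ini
# Region-local cache in front of a central database cluster.
# Cold-cache requests pay the cross-region round trip; warm ones don't.
[cache]
enabled = true
backend = dogpile.cache.memcached
# Hypothetical region-local memcached endpoints:
memcache_servers = cache-local-1:11211,cache-local-2:11211
```

The trade-off is the one Clint notes below: longer cache TTLs shrink API latency but widen the window during which a revoked token is still honored.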


> >
> >
> > The PKI token has been verified to support such large-scale
> > production environments, in which even the UUID token has performance
> > issues.
> >
>
> As others have said, the other problems stacked on top of the critical
> security problems in PKI made it very undesirable for the community to
> support. There is, however, nothing preventing you from maintaining it
> out of tree, though I'd hope you would instead collaborate with the
> community to perhaps address those problems and come up with a "PKIv2"
> provider that has the qualities you want for your scale.
>

+1

Having personally maintained a token provider out-of-tree prior to the
refactoring done last release [0], I think the improvements made are
extremely beneficial for cases like this. But, to reiterate what Clint
said, I would only suggest that route if for some reason we couldn't find
a way to get a supported token provider to suit your needs.

We typically have a session dedicated to performance at the PTG, and I have
that tentatively scheduled for Friday morning (11:30 - 12:00) [1].
Otherwise it's usually a topic that comes up during our operator feedback
session, which is scheduled for Wednesday afternoon (1:30 - 2:20). Both are
going to be in the dedicated keystone room (which I'll be advertising when
I know exactly which room that is).


[0]
https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
[1] https://etherpad.openstack.org/p/keystone-pike-ptg

>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>