Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-26 Thread joehuang
Thank you all very much; the less data to be replicated, the better. 

Best Regards
Chaoyi Huang (joehuang)


From: Clint Byrum [cl...@fewbar.com]
Sent: 26 February 2017 12:06
To: openstack-dev
Subject: Re: [openstack-dev] [keystone]PKI token VS Fernet token

Excerpts from Lance Bragstad's message of 2017-02-25 13:07:58 -0600:
> Since both token formats rebuild the authorization context at validation
> time, we can remove some revocation events that are no longer needed. This
> means we won't be storing as many revocation events on role removal from
> domains and projects. Instead we will only rely on the revocation API to
> invalidate tokens for cases like specific token revocation or password
> changes (the new design of validation does role assignment enforcement for
> us automatically). This should reduce the amount of data being replicated
> due to massive amounts of revocation events.
>

I didn't know that the work to make role removal non-event-based was
even started, much less done. Cool.

> We do still have some more work to do on this front, but I can dig into it
> and see what's left.
>

Indeed, the fewer revocation events, the better the Fernet story is
for scalability.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-25 Thread Clint Byrum
Excerpts from Lance Bragstad's message of 2017-02-25 13:07:58 -0600:
> Since both token formats rebuild the authorization context at validation
> time, we can remove some revocation events that are no longer needed. This
> means we won't be storing as many revocation events on role removal from
> domains and projects. Instead we will only rely on the revocation API to
> invalidate tokens for cases like specific token revocation or password
> changes (the new design of validation does role assignment enforcement for
> us automatically). This should reduce the amount of data being replicated
> due to massive amounts of revocation events.
> 

I didn't know that the work to make role removal non-event-based was
even started, much less done. Cool.

> We do still have some more work to do on this front, but I can dig into it
> and see what's left.
> 

Indeed, the fewer revocation events, the better the Fernet story is
for scalability.



Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-25 Thread Lance Bragstad
On Sat, Feb 25, 2017 at 12:47 AM, Clint Byrum  wrote:

> Excerpts from joehuang's message of 2017-02-25 04:09:45 +:
> > Hello, Matt,
> >
> > Thank you for your reply. As you mentioned, async replication should
> > work for slowly changing data. My concern is the impact of replication
> > delay, for example (though it is quite unlikely to happen):
> >
> > 1) A new user/group/role is added in RegionOne; before it is replicated
> > to RegionTwo, the new user begins to access RegionTwo services. Because
> > the data has not arrived yet, the user's request to RegionTwo may be
> > rejected since token validation fails in the local Keystone.
> >
>
> I think this is entirely acceptable. You can even check with your
> monitoring system to find out what the current replication lag is to
> each region, and notify the user of how long it may take.
>
> > 2) In the token revocation case: if we remove the user's role in
> > RegionOne, the token becomes invalid there immediately, but before the
> > removal is replicated to RegionTwo, the user can still use the token to
> > access services in RegionTwo, although only for a very short interval.
> >
> > Can someone evaluate whether this security risk is acceptable?
> >
>
> The simple answer is that the window between a revocation event being
> created, and being ubiquitous, is whatever the maximum replication lag
> is between regions. So if you usually have 5 seconds of replication lag,
> it will be 5 seconds. If you have a really write-heavy day, and you
> suddenly have 5 minutes of replication lag, it will be 5 minutes.
>
> The complicated component is that in async replication, reducing
> replication lag is expensive. You don't have many options here. Reducing
> writes on the master is one of them, but that isn't easy! Another is
> filtering out tables on slaves so that you only replicate the tables
> that you will be reading. But if there are lots of replication events,
> that doesn't help.
>

This is a good point and something that was much more prevalent with UUID
tokens. We still write *all* the data from a UUID token to the database,
which includes the user, project, scope, possibly the service catalog,
etc... When validating a UUID token, it would be pulled from the database
and returned to the user. The information in the UUID token wasn't
confirmed at validation time. For example, if you authenticated for a UUID
token scoped to a project with the `admin` role, the role and project
information persisted in the database would reflect that. If your `admin`
role assignment was removed from the project and you validated the token,
the token reference in the database would still contain `admin` scope on
the project. At the time the approach to fixing this was to create a
revocation event that would match specific attributes of that token (i.e.
the `admin` role on that specific project). As a result, the token
validation process would pull the token from the backend, then pass it to
the revocation API and ask if the token was revoked based on any
pre-existing revocation events.
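That attribute-matching check can be pictured roughly like this (a toy sketch I'm improvising, not the real revocation API; `REVOCATION_EVENTS` is a stand-in for keystone's stored events):

```python
# Hypothetical sketch of attribute-based revocation matching: an event
# revokes any token whose attributes are a superset of the event's.
REVOCATION_EVENTS = [
    {"role": "admin", "project": "proj-a"},  # 'admin' removed from proj-a
]

def is_revoked(token_attrs):
    # A token is revoked if every attribute of some event matches it.
    return any(all(token_attrs.get(k) == v for k, v in event.items())
               for event in REVOCATION_EVENTS)

print(is_revoked({"user": "alice", "role": "admin", "project": "proj-a"}))   # True
print(is_revoked({"user": "alice", "role": "member", "project": "proj-a"}))  # False
```

Note that every such event is a row that has to be stored and replicated, which is exactly the overhead being discussed.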

The fernet approach to solving this was fundamentally different because we
didn't have a token reference to pull from the backend that represented the
authorization context at authentication time (which we did have with UUID).
Instead, what we can do at validation time is decrypt the token and ask the
assignment API for role assignments given a user and project [0] and raise
a 401 if that user has no roles on the project [1]. So, by rebuilding the
authorization context at validation time, we no longer need to rely on
revocation events to enforce role revocation (but we do need them to
enforce revocation for other things with fernet). The tradeoff is that
performance degrades if you're using fernet without caching because we have
to rebuild all of that information, instead of just returning a reference
from the database. This led to us making significant improvements to our
caching implementation in keystone so that we can improve token validation
time overall, especially for fernet. As of the last release, UUID tokens are
now validated in exactly the same way as fernet tokens. Our team also made
some improvements to listing and comparing token references in the revocation API
[2] [3] (thanks to Richard, Clint, and Ron for driving a lot of that work!).

Since both token formats rebuild the authorization context at validation
time, we can remove some revocation events that are no longer needed. This
means we won't be storing as many revocation events on role removal from
domains and projects. Instead we will only rely on the revocation API to
invalidate tokens for cases like specific token revocation or password
changes (the new design of validation does role assignment enforcement for
us automatically). This should reduce the amount of data being replicated
due to massive amounts of revocation events.
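The validation flow described above can be sketched roughly as follows (a toy illustration, not keystone's actual code; the in-memory `ASSIGNMENTS` dict stands in for the assignment API):

```python
# Minimal sketch of validation-time authorization rebuilding: instead of
# trusting roles persisted at authentication time, validation re-queries
# the assignment backend on every check.

ASSIGNMENTS = {("alice", "proj-a"): {"admin"}}  # hypothetical backend

class Unauthorized(Exception):  # stands in for an HTTP 401
    pass

def validate(token):
    # token carries only user and project IDs; roles are NOT stored in it.
    roles = ASSIGNMENTS.get((token["user"], token["project"]), set())
    if not roles:
        # No roles on the project -> token is effectively revoked,
        # with no revocation event to store or replicate.
        raise Unauthorized("user has no roles on project")
    return {"user": token["user"], "project": token["project"],
            "roles": sorted(roles)}

print(validate({"user": "alice", "project": "proj-a"}))
ASSIGNMENTS.pop(("alice", "proj-a"))  # remove the role assignment
try:
    validate({"user": "alice", "project": "proj-a"})
except Unauthorized:
    print("401: token no longer valid")
```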

We do still have some more work to do 

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-24 Thread Clint Byrum
Excerpts from joehuang's message of 2017-02-25 04:09:45 +:
> Hello, Matt,
> 
> Thank you for your reply. As you mentioned, async replication should work
> for slowly changing data. My concern is the impact of replication delay,
> for example (though it is quite unlikely to happen):
> 
> 1) A new user/group/role is added in RegionOne; before it is replicated to
> RegionTwo, the new user begins to access RegionTwo services. Because the
> data has not arrived yet, the user's request to RegionTwo may be rejected
> since token validation fails in the local Keystone.
> 

I think this is entirely acceptable. You can even check with your
monitoring system to find out what the current replication lag is to
each region, and notify the user of how long it may take.

> 2) In the token revocation case: if we remove the user's role in RegionOne,
> the token becomes invalid there immediately, but before the removal is
> replicated to RegionTwo, the user can still use the token to access
> services in RegionTwo, although only for a very short interval.
> 
> Can someone evaluate whether this security risk is acceptable?
> 

The simple answer is that the window between a revocation event being
created, and being ubiquitous, is whatever the maximum replication lag
is between regions. So if you usually have 5 seconds of replication lag,
it will be 5 seconds. If you have a really write-heavy day, and you
suddenly have 5 minutes of replication lag, it will be 5 minutes.

The complicated component is that in async replication, reducing
replication lag is expensive. You don't have many options here. Reducing
writes on the master is one of them, but that isn't easy! Another is
filtering out tables on slaves so that you only replicate the tables
that you will be reading. But if there are lots of replication events,
that doesn't help.

One decent option is to switch to semi-sync replication:

https://dev.mysql.com/doc/refman/5.7/en/replication-semisync.html

That will at least make sure your writes aren't acknowledged until the
binlogs have been transferred everywhere. But if your master can take
writes a lot faster than your slaves can apply them, you may never catch
up, no matter how fast the binlogs are transferred.
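For reference, enabling semi-sync on MySQL 5.7 looks roughly like this (plugin and variable names per the manual linked above; the timeout value is only illustrative):

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms before falling back to async

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
-- Restart the I/O thread so the slave reconnects as a semi-sync replica:
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```

Note the timeout means semi-sync degrades back to async under sustained lag, so it bounds acknowledgement latency rather than guaranteeing zero replication lag.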

The key is to evaluate your requirements and think through these
solutions. Good luck! :)



Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-24 Thread Matt Fischer
On Fri, Feb 24, 2017 at 9:09 PM, joehuang  wrote:

> Hello, Matt,
>
> Thank you for your reply. As you mentioned, async replication should work
> for slowly changing data. My concern is the impact of replication delay,
> for example (though it is quite unlikely to happen):
>
> 1) A new user/group/role is added in RegionOne; before it is replicated to
> RegionTwo, the new user begins to access RegionTwo services. Because the
> data has not arrived yet, the user's request to RegionTwo may be rejected
> since token validation fails in the local Keystone.
>
> 2) In the token revocation case: if we remove the user's role in RegionOne,
> the token becomes invalid there immediately, but before the removal is
> replicated to RegionTwo, the user can still use the token to access
> services in RegionTwo, although only for a very short interval.
>
> Can someone evaluate whether this security risk is acceptable?
>
> Best Regards
> Chaoyi Huang (joehuang)
>
>

We actually had this happen for services like neutron even within a region,
where a network was created on one node and then immediately used from a
second node. We solved it by forcing haproxy to send transactions to one node
(with the others as backups). I only mention this because the scenario you
describe can occur. If you are not dealing with a lot of data you could look
into enabling causal reads (assuming you are using MySQL Galera), but this
will probably cause a perf hit (I did not test the impact).

For scenario 2: I suppose you need to ask yourself, if I remove a user or
role, can I live with 2-5 seconds for that token to be revoked in all
regions? In our case it was not a major concern, but I worked on a private
cloud.

For scenario 1: If I were you, I'd figure out whether it's ever likely to
really happen before you invest a bunch of time in solving it. That will
depend a lot on your sync time. We only had two regions and we owned the
pipes, so it was not a major concern.

Sorry I don't have more definite answers for you.


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-24 Thread joehuang
Hello, Matt,

Thank you for your reply. As you mentioned, async replication should work
for slowly changing data. My concern is the impact of replication delay,
for example (though it is quite unlikely to happen):

1) A new user/group/role is added in RegionOne; before it is replicated to
RegionTwo, the new user begins to access RegionTwo services. Because the
data has not arrived yet, the user's request to RegionTwo may be rejected
since token validation fails in the local Keystone.

2) In the token revocation case: if we remove the user's role in RegionOne,
the token becomes invalid there immediately, but before the removal is
replicated to RegionTwo, the user can still use the token to access
services in RegionTwo, although only for a very short interval.

Can someone evaluate whether this security risk is acceptable?

Best Regards
Chaoyi Huang (joehuang)

From: Matt Fischer [m...@mattfischer.com]
Sent: 25 February 2017 11:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone]PKI token VS Fernet token


At last, we still have one question:
For a public cloud it is very common to deploy multiple regions, and the
distance between regions is usually very large, so transport delay is a real
problem. Fernet tokens require the data to be the same everywhere. Because of
the slow connection and high latency, in our opinion it is unrealistic for
the Keystones in different regions to use the same Keystone datacenter.
Any ideas about this problem? Thanks.



There's nothing in Fernet tokens that would cause an issue with the 
transportation delay. You could mail the Fernet keys to each region and you're 
still fine, why? Because key rotation means that the "next key" is already in 
place on every box when you rotate keys. There is a widely held misconception 
that all keystone nodes must instantaneously sync keys in every region or it 
won't work, that is simply not true. In fact the main reason we switched to 
Fernet was to REDUCE the load on our cross-region replication. Without a 
database full of tokens to deal with, there's basically nothing to replicate as 
joe says below. User/group/role changes for us were a few-times-a-day
operation, rather than getting a token, which happens thousands of times per second.





Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-24 Thread Matt Fischer
>
>
> At last, we still have one question:
> For a public cloud it is very common to deploy multiple regions, and the
> distance between regions is usually very large, so transport delay is a
> real problem. Fernet tokens require the data to be the same everywhere.
> Because of the slow connection and high latency, in our opinion it is
> unrealistic for the Keystones in different regions to use the same
> Keystone datacenter. Any ideas about this problem? Thanks.
>
>
>

There's nothing in Fernet tokens that would cause an issue with the
transportation delay. You could mail the Fernet keys to each region and
you're still fine, why? Because key rotation means that the "next key" is
already in place on every box when you rotate keys. There is a widely held
misconception that all keystone nodes must instantaneously sync keys in
every region or it won't work, that is simply not true. In fact the main
reason we switched to Fernet was to REDUCE the load on our cross-region
replication. Without a database full of tokens to deal with, there's
basically nothing to replicate as joe says below. User/group/role changes
for us were a few-times-a-day operation, rather than getting a token, which
happens thousands of times per second.
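The rotation property Matt describes can be illustrated with the `cryptography` library that keystone's Fernet implementation builds on (a simplified sketch; keystone's actual key repository lives on disk under `key_repository`):

```python
from cryptography.fernet import Fernet, MultiFernet

# Simplified model of a fernet key repository: keystone keeps a staged
# key (the next primary), a primary key (used to encrypt new tokens),
# and older secondary keys (still valid for decryption).
staged_key = Fernet.generate_key()
primary_key = Fernet.generate_key()

# A token issued in RegionOne with the current primary key.
token = Fernet(primary_key).encrypt(b"payload")

# RegionTwo has already rotated: the staged key was promoted to primary
# and the old primary demoted to secondary. Because the old key is still
# in RegionTwo's decryption set, the token remains valid there.
region_two = MultiFernet([Fernet(staged_key), Fernet(primary_key)])
assert region_two.decrypt(token) == b"payload"
```

This is why key distribution only has to beat the rotation interval, not happen instantaneously.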


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-23 Thread joehuang
Database async replication across data centers should work, considering that
the data in Keystone is not updated frequently. There is a small replication
delay, which may lead to a few request rejections in a data center where the
data has not yet arrived, but I think it is quite short and acceptable. For
public clouds, I would like to hear others' thoughts on whether the security
risk is also acceptable.

Best Regards
Chaoyi Huang (joehuang)

From: 王玺源 [wangxiyuan1...@gmail.com]
Sent: 23 February 2017 21:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone]PKI token VS Fernet token

Thanks for all your advices and opinions.

We'll try to solve the PKI issue and hope it can come back in the future.

About the fernet token, we'll test it with the new config options and show you 
the result later. Hope it performs well.

At last, we still have one question:
For a public cloud it is very common to deploy multiple regions, and the
distance between regions is usually very large, so transport delay is a real
problem. Fernet tokens require the data to be the same everywhere. Because of
the slow connection and high latency, in our opinion it is unrealistic for
the Keystones in different regions to use the same Keystone datacenter.
Any ideas about this problem? Thanks.



2017-02-21 8:58 GMT-05:00 Dolph Mathews <dolph.math...@gmail.com>:
It appears that you don't have caching enabled, then. Without enabling caching, 
Fernet performance is well known to be terrible, which would explain your 
benchmark results. If you run a similar benchmark with caching, I'd be eager to 
see the new configuration and benchmark results.


On Fri, Feb 17, 2017 at 8:16 AM 王玺源 <wangxiyuan1...@gmail.com> wrote:
Hi Dolph:

We made our keystone.conf the same as the example:

[token]

provider = fernet



[fernet_tokens]   //all configuration is default

#

# From keystone

#



# Directory containing Fernet token keys. (string value)

#key_repository = /etc/keystone/fernet-keys/



# This controls how many keys are held in rotation by keystone-manage

# fernet_rotate before they are discarded. The default value of 3 means that

# keystone will maintain one staged key, one primary key, and one secondary

# key. Increasing this value means that additional secondary keys will be kept

# in the rotation. (integer value)

# max_active_keys = 3

Dolph Mathews <dolph.math...@gmail.com> wrote on Friday, 17 February 2017 at 7:22 AM:
Thank you for the data and your test scripts! As Lance and Stanek already 
alluded, Fernet performance is very sensitive to keystone's configuration. Can 
you share your keystone.conf as well?

I'll also be in Atlanta and would love to talk Fernet performance, even if we 
don't have a formal time slot on the schedule.

On Wed, Feb 15, 2017 at 9:08 AM Lance Bragstad <lbrags...@gmail.com> wrote:
In addition to what David said, have you played around with caching in keystone 
[0]? After the initial implementation of fernet landed, we attempted to make it 
the default token provider. We ended up reverting the default back to uuid 
because we hit several issues. Around the Liberty and Mitaka timeframe, we 
reworked the caching implementation to fix those issues and improve overall 
performance of all token formats, especially fernet.

We have a few different performance perspectives available, too. Some were run 
nearly 2 years ago [1] and some are run today [2]. Since the Newton release, 
we've made drastic improvements to the overall structure of the token provider 
[3] [4] [5]. At the very least, it should make understanding keystone's 
approach to tokens easier. Maintaining out-of-tree token providers should also 
be easier since we cleaned up a lot of the interfaces that affect developers 
maintaining their own providers.

We can try and set something up at the PTG. We are getting pretty tight for 
time slots, but I'm sure we can find some time to work through the issues 
you're seeing (also, feel free to hop into #openstack-keystone on freenode if 
you want to visit prior to the PTG).


[0] 
https://docs.openstack.org/developer/keystone/configuration.html#caching-layer
[1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
[2] https://github.com/lbragstad/keystone-performance
[3] 
https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:make-fernet-default
[4] 
https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
[5] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/token-provider-cleanup.html

On Wed, Feb 15, 2017 at 8:44 AM, David Stanek <dsta...@dstanek.com> wrote

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-23 Thread 王玺源
Thanks for all your advices and opinions.

We'll try to solve the PKI issue and hope it can come back in the future.

About the fernet token, we'll test it with the new config options and show
you the result later. Hope it performs well.

At last, we still have one question:
For a public cloud it is very common to deploy multiple regions, and the
distance between regions is usually very large, so transport delay is a real
problem. Fernet tokens require the data to be the same everywhere. Because of
the slow connection and high latency, in our opinion it is unrealistic for
the Keystones in different regions to use the same Keystone datacenter.
Any ideas about this problem? Thanks.



2017-02-21 8:58 GMT-05:00 Dolph Mathews :

> It appears that you don't have caching enabled, then. Without enabling
> caching, Fernet performance is well known to be terrible, which would
> explain your benchmark results. If you run a similar benchmark with
> caching, I'd be eager to see the new configuration and benchmark results.
>
>
> On Fri, Feb 17, 2017 at 8:16 AM 王玺源  wrote:
>
>> Hi Dolph:
>>
>> We made our keystone.conf the same as the example:
>>
>> [token]
>>
>> provider = fernet
>>
>>
>>
>> [fernet_tokens]   //all configuration is default
>>
>> #
>>
>> # From keystone
>>
>> #
>>
>>
>>
>> # Directory containing Fernet token keys. (string value)
>>
>> #key_repository = /etc/keystone/fernet-keys/
>>
>>
>>
>> # This controls how many keys are held in rotation by keystone-manage
>>
>> # fernet_rotate before they are discarded. The default value of 3 means
>> that
>>
>> # keystone will maintain one staged key, one primary key, and one
>> secondary
>>
>> # key. Increasing this value means that additional secondary keys will be
>> kept
>>
>> # in the rotation. (integer value)
>>
>> # max_active_keys = 3
>> Dolph Mathews wrote on Friday, 17 February 2017 at 7:22 AM:
>>
>> Thank you for the data and your test scripts! As Lance and Stanek already
>> alluded, Fernet performance is very sensitive to keystone's configuration.
>> Can you share your keystone.conf as well?
>>
>> I'll also be in Atlanta and would love to talk Fernet performance, even
>> if we don't have a formal time slot on the schedule.
>>
>> On Wed, Feb 15, 2017 at 9:08 AM Lance Bragstad 
>> wrote:
>>
>> In addition to what David said, have you played around with caching in
>> keystone [0]? After the initial implementation of fernet landed, we
>> attempted to make it the default token provider. We ended up reverting the
>> default back to uuid because we hit several issues. Around the Liberty and
>> Mitaka timeframe, we reworked the caching implementation to fix those
>> issues and improve overall performance of all token formats, especially
>> fernet.
>>
>> We have a few different performance perspectives available, too. Some
>> were run nearly 2 years ago [1] and some are run today [2]. Since the
>> Newton release, we've made drastic improvements to the overall structure of
>> the token provider [3] [4] [5]. At the very least, it should make
>> understanding keystone's approach to tokens easier. Maintaining out-of-tree
>> token providers should also be easier since we cleaned up a lot of the
>> interfaces that affect developers maintaining their own providers.
>>
>> We can try and set something up at the PTG. We are getting pretty tight
>> for time slots, but I'm sure we can find some time to work through the
>> issues you're seeing (also, feel free to hop into #openstack-keystone on
>> freenode if you want to visit prior to the PTG).
>>
>>
>> [0] https://docs.openstack.org/developer/keystone/
>> configuration.html#caching-layer
>> [1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
>> [2] https://github.com/lbragstad/keystone-performance
>> [3] https://review.openstack.org/#/q/status:merged+project:
>> openstack/keystone+branch:master+topic:make-fernet-default
>> [4] https://review.openstack.org/#/q/status:merged+project:
>> openstack/keystone+branch:master+topic:cleanup-token-provider
>> [5] http://specs.openstack.org/openstack/keystone-specs/
>> specs/keystone/ocata/token-provider-cleanup.html
>>
>> On Wed, Feb 15, 2017 at 8:44 AM, David Stanek 
>> wrote:
>>
>> On 15-Feb 18:16, 王玺源 wrote:
>> > Hello everyone,
>> >   PKI/PKIZ tokens have been removed from keystone in Ocata. But recently
>> > our production team did some tests on PKI and Fernet tokens (with Keystone
>> > Mitaka). They found that in a large-scale production environment, Fernet
>> > token performance is not as good as PKI. Here is the test data:
>> >
>> > https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM
>> 25NzZcdzPE0fvY/edit?usp=sharing
>>
>> This is nice to see. Thanks.
>>
>>
>> >
>> > From the data, we can see that:
>> > 1. In large-scale concurrency tests, PKI is much faster than Fernet.
>> > 2. PKI token revocation can't immediately make the token invalid. 

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-21 Thread Dolph Mathews
It appears that you don't have caching enabled, then. Without enabling
caching, Fernet performance is well known to be terrible, which would
explain your benchmark results. If you run a similar benchmark with
caching, I'd be eager to see the new configuration and benchmark results.
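For anyone following along, a minimal caching setup in keystone.conf looks roughly like this (a sketch based on the caching-layer docs linked earlier in the thread; the memcached address is a placeholder for your own endpoint):

```ini
[cache]
# Enable keystone's dogpile.cache-based caching layer.
enabled = true
backend = dogpile.cache.memcached
backend_argument = url:127.0.0.1:11211

[token]
# Cache validated tokens so repeated validations skip the rebuild.
caching = true
```

Without something like this, every fernet validation rebuilds the full authorization context from the database, which matches the benchmark results described above.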

On Fri, Feb 17, 2017 at 8:16 AM 王玺源  wrote:

> Hi Dolph:
>
> We made our keystone.conf the same as the example:
>
> [token]
>
> provider = fernet
>
>
>
> [fernet_tokens]   //all configuration is default
>
> #
>
> # From keystone
>
> #
>
>
>
> # Directory containing Fernet token keys. (string value)
>
> #key_repository = /etc/keystone/fernet-keys/
>
>
>
> # This controls how many keys are held in rotation by keystone-manage
>
> # fernet_rotate before they are discarded. The default value of 3 means
> that
>
> # keystone will maintain one staged key, one primary key, and one secondary
>
> # key. Increasing this value means that additional secondary keys will be
> kept
>
> # in the rotation. (integer value)
>
> # max_active_keys = 3
> Dolph Mathews wrote on Friday, 17 February 2017 at 7:22 AM:
>
> Thank you for the data and your test scripts! As Lance and Stanek already
> alluded, Fernet performance is very sensitive to keystone's configuration.
> Can you share your keystone.conf as well?
>
> I'll also be in Atlanta and would love to talk Fernet performance, even if
> we don't have a formal time slot on the schedule.
>
> On Wed, Feb 15, 2017 at 9:08 AM Lance Bragstad 
> wrote:
>
> In addition to what David said, have you played around with caching in
> keystone [0]? After the initial implementation of fernet landed, we
> attempted to make it the default token provider. We ended up reverting the
> default back to uuid because we hit several issues. Around the Liberty and
> Mitaka timeframe, we reworked the caching implementation to fix those
> issues and improve overall performance of all token formats, especially
> fernet.
>
> We have a few different performance perspectives available, too. Some were
> run nearly 2 years ago [1] and some are run today [2]. Since the Newton
> release, we've made drastic improvements to the overall structure of the
> token provider [3] [4] [5]. At the very least, it should make understanding
> keystone's approach to tokens easier. Maintaining out-of-tree token
> providers should also be easier since we cleaned up a lot of the interfaces
> that affect developers maintaining their own providers.
>
> We can try and set something up at the PTG. We are getting pretty tight
> for time slots, but I'm sure we can find some time to work through the
> issues you're seeing (also, feel free to hop into #openstack-keystone on
> freenode if you want to visit prior to the PTG).
>
>
> [0]
> https://docs.openstack.org/developer/keystone/configuration.html#caching-layer
> [1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
> [2] https://github.com/lbragstad/keystone-performance
> [3]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:make-fernet-default
> [4]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
> [5]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/token-provider-cleanup.html
>
> On Wed, Feb 15, 2017 at 8:44 AM, David Stanek  wrote:
>
> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ tokens have been removed from keystone in Ocata. But recently
> > our production team did some tests on PKI and Fernet tokens (with Keystone
> > Mitaka). They found that in a large-scale production environment, Fernet
> > token performance is not as good as PKI. Here is the test data:
> >
> >
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In large-scale concurrency tests, PKI is much faster than Fernet.
> > 2. PKI token revocation can't immediately make the token invalid, so it
> > has the revocation issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revoke issue is a small
> > problem, and can be avoided by some peripheral workarounds. (A more
> > detailed solution can be explained by them in a follow-up email.)
> > They think that the performance issue is the most important thing. Maybe
> > you can see that in some production environment, performance is the first
> > thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the 

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-21 Thread 王玺源
OK, the advice of a central database seems feasible, and we have discussed
it, but the replication delay between regions is a drawback for this
approach, because the regions are geographically far apart. The slow
synchronization means that only relatively static workloads can span
different regions.
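For reference, the region-local-cache approach Clint suggests can be sketched in a few lines of Python. Everything here is illustrative: `RegionCache` and the in-memory `CENTRAL_DB` dict stand in for memcached and the remote keystone database cluster, and are not real keystone interfaces.

```python
import time

# Illustrative-only sketch of a region-local cache in front of a central
# keystone database. RegionCache and CENTRAL_DB are stand-ins for memcached
# and the remote database cluster, not real keystone interfaces.

CENTRAL_DB = {"token-abc": {"user": "demo", "roles": ["admin"]}}

class RegionCache:
    """Cache-aside lookup: warm hits stay local; only cold misses pay the
    cross-region round-trip to the central cluster."""

    def __init__(self, ttl=300):
        self.ttl = ttl
        self._store = {}

    def validate_token(self, token_id):
        entry = self._store.get(token_id)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]                 # warm cache: local latency only
        data = CENTRAL_DB.get(token_id)     # cold cache: cross-region query
        if data is not None:
            self._store[token_id] = (data, time.monotonic())
        return data

cache = RegionCache(ttl=300)
print(cache.validate_token("token-abc"))    # first call goes remote, then caches
```

With this shape, only relatively static data lives remotely and the cross-region delay is paid once per cache TTL rather than on every validation.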

Lance Bragstad wrote on Friday, February 17, 2017 at 1:10 PM:

> On Fri, Feb 17, 2017 at 11:22 AM, Clint Byrum  wrote:
>
> Excerpts from 王玺源's message of 2017-02-17 14:08:30 +:
> > Hi David:
> >
> > We have not found a perfect solution to the Fernet performance issue; we
> > will try different crypt strength settings with Fernet in the future.
> >
>
> One important thing: did you try throwing more hardware at Keystone?
> Keystone instances are almost entirely immutable (the fernet keys
> are the only mutable part), which makes it pretty easy to scale them
> horizontally as-needed. Your test has a static 3 nodes, but you didn't
> include system status, so we don't know if the CPUs were overwhelmed,
> or how many database nodes you had, what its level of activity was, etc.
>
>
> +1
>
> Several folks in the community have tested token performance using a
> variety of hardware and configurations. Sharing your specific setup might
> draw similarities to other environments people have used. If not, then we
> at least have an environment description that we can use to experience the
> issues you're seeing first-hand.
>
>
>
> >
> >
> > Multiple customers have cascades of more than six regions, and how to
> > synchronize keystone data between these regions has troubled us a lot.
> > With PKI tokens there is no need to synchronize this data, because the
> > PKI token includes the role information.
> >
>
> The amount of mutable data to synchronize between datacenters with Fernet
> is the fernet keys. If you set up region-local caches, you should be
> able to ship queries back to a central database cluster and not have to
> worry about a painful global database cluster, since you'll only feel
> the latency of those cross-region queries when your caches are cold.
>
> However, I believe work was done to allow local read queries to be sent
> to local slaves, so you can use traditional MySQL replication if the
> cold-cache latency is too painful.
>
> Replication lag becomes a problem if you get a ton of revocation events,
> but this lag's consequences are pretty low, with the worst effect being a
> larger window for stolen, revoked tokens to be used. Caching also keeps
> that window open longer, so it becomes a game of tuning that window
> against desired API latency.
>
>
> Good point, Clint. We also merged a patch in Ocata that helped improve
> token validation performance, which was not proposed as a stable backport:
>
>
> https://github.com/openstack/keystone/commit/9e84371461831880ce5736e9888c7d9648e3a77b
>
>
> >
> >
> > The PKI token has been verified to support such large-scale production
> > environments, where even the UUID token has performance issues.
> >
>
> As others have said, the other problems stacked on top of the critical
> security problems in PKI made it very undesirable for the community to
> support. There is, however, nothing preventing you from maintaining it
> out of tree, though I'd hope you would instead collaborate with the
> community to perhaps address those problems and come up with a "PKIv2"
> provider that has the qualities you want for your scale.
>
>
> +1
>
> Having personally maintained a token provider out-of-tree prior to the
> refactoring done last release [0], I think the improvements made are
> extremely beneficial for cases like this. But, again re-iterating what
> Clint said, I would only suggest that if for some reason we couldn't find a
> way to get a supported token provider to suit your needs.
>
> We typically have a session dedicated to performance at the PTG, and I
> have that tentatively scheduled for Friday morning (11:30 - 12:00) [1].
> Otherwise it's usually a topic that comes up during our operator feedback
> session, which is scheduled for Wednesday afternoon (1:30 - 2:20). Both are
> going to be in the dedicated keystone room (which I'll be advertising when
> I know exactly which room that is).
>
>
> [0]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
> [1] https://etherpad.openstack.org/p/keystone-pike-ptg
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread Lance Bragstad
On Fri, Feb 17, 2017 at 11:22 AM, Clint Byrum  wrote:

> Excerpts from 王玺源's message of 2017-02-17 14:08:30 +:
> > Hi David:
> >
> > We have not found a perfect solution to the Fernet performance issue; we
> > will try different crypt strength settings with Fernet in the future.
> >
>
> One important thing: did you try throwing more hardware at Keystone?
> Keystone instances are almost entirely immutable (the fernet keys
> are the only mutable part), which makes it pretty easy to scale them
> horizontally as-needed. Your test has a static 3 nodes, but you didn't
> include system status, so we don't know if the CPUs were overwhelmed,
> or how many database nodes you had, what its level of activity was, etc.
>

+1

Several folks in the community have tested token performance using a
variety of hardware and configurations. Sharing your specific setup might
draw similarities to other environments people have used. If not, then we
at least have an environment description that we can use to experience the
issues you're seeing first-hand.


>
> >
> >
> > Multiple customers have cascades of more than six regions, and how to
> > synchronize keystone data between these regions has troubled us a lot.
> > With PKI tokens there is no need to synchronize this data, because the
> > PKI token includes the role information.
> >
>
> The amount of mutable data to synchronize between datacenters with Fernet
> is the fernet keys. If you set up region-local caches, you should be
> able to ship queries back to a central database cluster and not have to
> worry about a painful global database cluster, since you'll only feel
> the latency of those cross-region queries when your caches are cold.
>
> However, I believe work was done to allow local read queries to be sent
> to local slaves, so you can use traditional MySQL replication if the
> cold-cache latency is too painful.
>
> Replication lag becomes a problem if you get a ton of revocation events,
> but this lag's consequences are pretty low, with the worst effect being a
> larger window for stolen, revoked tokens to be used. Caching also keeps
> that window open longer, so it becomes a game of tuning that window
> against desired API latency.
>
>
Good point, Clint. We also merged a patch in Ocata that helped improve
token validation performance, which was not proposed as a stable backport:

https://github.com/openstack/keystone/commit/9e84371461831880ce5736e9888c7d9648e3a77b


> >
> >
> > The PKI token has been verified to support such large-scale production
> > environments, where even the UUID token has performance issues.
> >
>
> As others have said, the other problems stacked on top of the critical
> security problems in PKI made it very undesirable for the community to
> support. There is, however, nothing preventing you from maintaining it
> out of tree, though I'd hope you would instead collaborate with the
> community to perhaps address those problems and come up with a "PKIv2"
> provider that has the qualities you want for your scale.
>

+1

Having personally maintained a token provider out-of-tree prior to the
refactoring done last release [0], I think the improvements made are
extremely beneficial for cases like this. But, again re-iterating what
Clint said, I would only suggest that if for some reason we couldn't find a
way to get a supported token provider to suit your needs.

We typically have a session dedicated to performance at the PTG, and I have
that tentatively scheduled for Friday morning (11:30 - 12:00) [1].
Otherwise it's usually a topic that comes up during our operator feedback
session, which is scheduled for Wednesday afternoon (1:30 - 2:20). Both are
going to be in the dedicated keystone room (which I'll be advertising when
I know exactly which room that is).


[0]
https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
[1] https://etherpad.openstack.org/p/keystone-pike-ptg

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread Clint Byrum
Excerpts from 王玺源's message of 2017-02-17 14:08:30 +:
> Hi David:
> 
> We have not found a perfect solution to the Fernet performance issue; we
> will try different crypt strength settings with Fernet in the future.
> 

One important thing: did you try throwing more hardware at Keystone?
Keystone instances are almost entirely immutable (the fernet keys
are the only mutable part), which makes it pretty easy to scale them
horizontally as-needed. Your test has a static 3 nodes, but you didn't
include system status, so we don't know if the CPUs were overwhelmed,
or how many database nodes you had, what its level of activity was, etc.

> 
> 
> Multiple customers have cascades of more than six regions, and how to
> synchronize keystone data between these regions has troubled us a lot.
> With PKI tokens there is no need to synchronize this data, because the
> PKI token includes the role information.
> 

The amount of mutable data to synchronize between datacenters with Fernet
is the fernet keys. If you set up region-local caches, you should be
able to ship queries back to a central database cluster and not have to
worry about a painful global database cluster, since you'll only feel
the latency of those cross-region queries when your caches are cold.

However, I believe work was done to allow local read queries to be sent
to local slaves, so you can use traditional MySQL replication if the
cold-cache latency is too painful.

Replication lag becomes a problem if you get a ton of revocation events,
but this lag's consequences are pretty low, with the worst effect being a
larger window for stolen, revoked tokens to be used. Caching also keeps
that window open longer, so it becomes a game of tuning that window
against desired API latency.
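The tuning game described here can be made concrete with back-of-the-envelope arithmetic. The numbers below are assumed values, not measurements from any deployment:

```python
# A revoked token can still validate in a remote region until the revocation
# event replicates there AND any cached positive validation expires. The
# worst-case exposure window is therefore the sum of the two.

def revocation_window(replication_lag_s, cache_ttl_s):
    """Worst-case seconds a stolen, already-revoked token keeps working."""
    return replication_lag_s + cache_ttl_s

print(revocation_window(replication_lag_s=5, cache_ttl_s=300))  # prints 305
print(revocation_window(replication_lag_s=5, cache_ttl_s=30))   # prints 35
```

Lowering the cache TTL shrinks the window but sends more validations back across the WAN, which is exactly the latency trade-off being described.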

> 
> 
> The PKI token has been verified to support such large-scale production
> environments, where even the UUID token has performance issues.
> 

As others have said, the other problems stacked on top of the critical
security problems in PKI made it very undesirable for the community to
support. There is, however, nothing preventing you from maintaining it
out of tree, though I'd hope you would instead collaborate with the
community to perhaps address those problems and come up with a "PKIv2"
provider that has the qualities you want for your scale.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread 王玺源
Hi Dolph:

We configured keystone.conf the same as the sample:

[token]
provider = fernet


[fernet_tokens]   # all configuration is default
#
# From keystone
#

# Directory containing Fernet token keys. (string value)
#key_repository = /etc/keystone/fernet-keys/

# This controls how many keys are held in rotation by keystone-manage
# fernet_rotate before they are discarded. The default value of 3 means that
# keystone will maintain one staged key, one primary key, and one secondary
# key. Increasing this value means that additional secondary keys will be
# kept in the rotation. (integer value)
#max_active_keys = 3
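As a side note, `max_active_keys` is what lets tokens survive key rotation: validation is attempted against every active key, not just the primary. The sketch below illustrates only that rotation behavior using stdlib HMAC. Real Fernet keys are used for AES encryption per the Fernet spec, and keystone's repository also keeps a staged key, which this simplification omits.

```python
import hashlib
import hmac
import os

# Stdlib illustration of max_active_keys semantics: tokens issued under an
# older key keep validating until enough rotations discard that key. Plain
# HMAC stands in for real Fernet encryption; this is not keystone code.

class KeyRepository:
    def __init__(self, max_active_keys=3):
        self.max_active_keys = max_active_keys
        self.keys = [os.urandom(32)]  # index 0 is the current primary key

    def rotate(self):
        # New primary goes to the front; the oldest secondary falls off.
        self.keys.insert(0, os.urandom(32))
        del self.keys[self.max_active_keys:]

    def sign(self, payload: bytes) -> bytes:
        return hmac.new(self.keys[0], payload, hashlib.sha256).digest()

    def validate(self, payload: bytes, sig: bytes) -> bool:
        # Try every active key, so pre-rotation tokens still pass.
        return any(
            hmac.compare_digest(
                hmac.new(k, payload, hashlib.sha256).digest(), sig)
            for k in self.keys)

repo = KeyRepository(max_active_keys=3)
tok, sig = b"token-payload", None
sig = repo.sign(tok)
repo.rotate()                       # issuing key becomes a secondary
assert repo.validate(tok, sig)      # still valid
repo.rotate(); repo.rotate()        # two more rotations discard it
assert not repo.validate(tok, sig)  # token now invalid
```

This is also why all keystone nodes in all regions need the same key repository contents, and why rotation frequency times `max_active_keys` bounds the effective token lifetime.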
Dolph Mathews wrote on Friday, February 17, 2017 at 7:22 AM:

> Thank you for the data and your test scripts! As Lance and Stanek already
> alluded, Fernet performance is very sensitive to keystone's configuration.
> Can your share your keystone.conf as well?
>
> I'll also be in Atlanta and would love to talk Fernet performance, even if
> we don't have a formal time slot on the schedule.
>
> On Wed, Feb 15, 2017 at 9:08 AM Lance Bragstad 
> wrote:
>
> In addition to what David said, have you played around with caching in
> keystone [0]? After the initial implementation of fernet landed, we
> attempted to make it the default token provider. We ended up reverting the
> default back to uuid because we hit several issues. Around the Liberty and
> Mitaka timeframe, we reworked the caching implementation to fix those
> issues and improve overall performance of all token formats, especially
> fernet.
>
> We have a few different performance perspectives available, too. Some were
> run nearly 2 years ago [1] and some are run today [2]. Since the Newton
> release, we've made drastic improvements to the overall structure of the
> token provider [3] [4] [5]. At the very least, it should make understanding
> keystone's approach to tokens easier. Maintaining out-of-tree token
> providers should also be easier since we cleaned up a lot of the interfaces
> that affect developers maintaining their own providers.
>
> We can try and set something up at the PTG. We are getting pretty tight
> for time slots, but I'm sure we can find some time to work through the
> issues you're seeing (also, feel free to hop into #openstack-keystone on
> freenode if you want to visit prior to the PTG).
>
>
> [0]
> https://docs.openstack.org/developer/keystone/configuration.html#caching-layer
> [1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
> [2] https://github.com/lbragstad/keystone-performance
> [3]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:make-fernet-default
> [4]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
> [5]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/token-provider-cleanup.html
>
> On Wed, Feb 15, 2017 at 8:44 AM, David Stanek  wrote:
>
> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ token has been removed from keystone in Ocata. But recently
> our
> > production team did some test about PKI and Fernet token (With Keystone
> > Mitaka). They found that in large-scale production environment, Fernet
> > token's performance is not as good as PKI. Here is the test data:
> >
> >
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In large-scale concurrency test, PKI is much faster than Fernet.
> > 2. PKI token revoke can't immediately make the token invalid. So it has
> the
> > revoke issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revoke issue is a small
> problem,
> > and can be avoided by some periphery ways. (More detail solution could be
> > explained by them in the follow email).
> > They think that the performance issue is the most important thing. Maybe
> > you can see that in some production environment, performance is the first
> > thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the issues. Also
> ideally running the token provider in production.
>
>
> > 2. Has Fernet token improved the performance during these releases? Or
> any
> > road map so that we can make sure Fernet is better than PKI in all side.
> > Otherwise, I don't think that remove PKI in Ocata is the right way. Or
> > even, we can keep the PKI token in Keystone for more one or two cycles,
> > then remove it once Fernet is stable 

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread 王玺源
Hi Lance:

We may try caching or other settings to test the Fernet token in the future.

Regarding uuid as the default token: it is worth remembering that a major
reason for implementing the PKI token in the first place was to solve the
uuid token's performance problems.

The OpenStack APIs are RESTful, built on top of HTTP, but they can be
re-encapsulated by a web console so that the token is never exposed to the
internet. Such a deployment is more secure than a web application exposed
directly on the internet.

Therefore the risk of token leakage is lower than for a typical web
application, and the risk created by the PKI token's revocation delay can be
reduced by corresponding security measures.
Lance Bragstad wrote on Wednesday, February 15, 2017 at 11:08 PM:

> In addition to what David said, have you played around with caching in
> keystone [0]? After the initial implementation of fernet landed, we
> attempted to make it the default token provider. We ended up reverting the
> default back to uuid because we hit several issues. Around the Liberty and
> Mitaka timeframe, we reworked the caching implementation to fix those
> issues and improve overall performance of all token formats, especially
> fernet.
>
> We have a few different performance perspectives available, too. Some were
> run nearly 2 years ago [1] and some are run today [2]. Since the Newton
> release, we've made drastic improvements to the overall structure of the
> token provider [3] [4] [5]. At the very least, it should make understanding
> keystone's approach to tokens easier. Maintaining out-of-tree token
> providers should also be easier since we cleaned up a lot of the interfaces
> that affect developers maintaining their own providers.
>
> We can try and set something up at the PTG. We are getting pretty tight
> for time slots, but I'm sure we can find some time to work through the
> issues you're seeing (also, feel free to hop into #openstack-keystone on
> freenode if you want to visit prior to the PTG).
>
>
> [0]
> https://docs.openstack.org/developer/keystone/configuration.html#caching-layer
> [1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
> [2] https://github.com/lbragstad/keystone-performance
> [3]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:make-fernet-default
> [4]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
> [5]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/token-provider-cleanup.html
>
> On Wed, Feb 15, 2017 at 8:44 AM, David Stanek  wrote:
>
> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ token has been removed from keystone in Ocata. But recently
> our
> > production team did some test about PKI and Fernet token (With Keystone
> > Mitaka). They found that in large-scale production environment, Fernet
> > token's performance is not as good as PKI. Here is the test data:
> >
> >
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In large-scale concurrency test, PKI is much faster than Fernet.
> > 2. PKI token revoke can't immediately make the token invalid. So it has
> the
> > revoke issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revoke issue is a small
> problem,
> > and can be avoided by some periphery ways. (More detail solution could be
> > explained by them in the follow email).
> > They think that the performance issue is the most important thing. Maybe
> > you can see that in some production environment, performance is the first
> > thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the issues. Also
> ideally running the token provider in production.
>
>
> > 2. Has Fernet token improved the performance during these releases? Or
> any
> > road map so that we can make sure Fernet is better than PKI in all side.
> > Otherwise, I don't think that remove PKI in Ocata is the right way. Or
> > even, we can keep the PKI token in Keystone for more one or two cycles,
> > then remove it once Fernet is stable enough.
> > 3. Since I'll be in Atlanta next week, if it is possible, I'd like to
> > bring this topic to Keystone PTG. can I?
>
> Sure. We have a pretty packed calendar, but I'm sure you could steal a
> few minutes somewhere.
>
>
> >
> > It is a real production problem and I really need your feedback.
> >
>
> Have you tried playing with the crypt_strength[1]? If the 

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-17 Thread 王玺源
Hi David:

We have not found a perfect solution to the Fernet performance issue; we
will try different crypt strength settings with Fernet in the future.

Multiple customers have cascades of more than six regions, and how to
synchronize keystone data between these regions has troubled us a lot. With
PKI tokens there is no need to synchronize this data, because the PKI token
includes the role information.

The PKI token has been verified to support such large-scale production
environments, where even the UUID token has performance issues.

Though the PKI token is large, its size also makes it harder to guess or
steal.
David Stanek wrote on Wednesday, February 15, 2017 at 10:45 PM:

> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ token has been removed from keystone in Ocata. But recently
> our
> > production team did some test about PKI and Fernet token (With Keystone
> > Mitaka). They found that in large-scale production environment, Fernet
> > token's performance is not as good as PKI. Here is the test data:
> >
> >
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In large-scale concurrency test, PKI is much faster than Fernet.
> > 2. PKI token revoke can't immediately make the token invalid. So it has
> the
> > revoke issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revoke issue is a small
> problem,
> > and can be avoided by some periphery ways. (More detail solution could be
> > explained by them in the follow email).
> > They think that the performance issue is the most important thing. Maybe
> > you can see that in some production environment, performance is the first
> > thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the issues. Also
> ideally running the token provider in production.
>
>
> > 2. Has Fernet token improved the performance during these releases? Or
> any
> > road map so that we can make sure Fernet is better than PKI in all side.
> > Otherwise, I don't think that remove PKI in Ocata is the right way. Or
> > even, we can keep the PKI token in Keystone for more one or two cycles,
> > then remove it once Fernet is stable enough.
> > 3. Since I'll be in Atlanta next week, if it is possible, I'd like to
> > bring this topic to Keystone PTG. can I?
>
> Sure. We have a pretty packed calendar, but I'm sure you could steal a
> few minutes somewhere.
>
>
> >
> > It is a real production problem and I really need your feedback.
> >
>
> Have you tried playing with the crypt_strength[1]? If the slowness is
> the crypto (which it was in the past) then you can tune it a little bit.
> Another option might be to keep the same token flow and find a faster
> method for hashing a token.
>
> 1.
> http://git.openstack.org/cgit/openstack/keystone/tree/etc/keystone.conf.sample#n67
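The cost trade-off behind the crypt_strength suggestion above can be demonstrated with the stdlib. keystone's option tunes sha512_crypt rounds for password hashing; PBKDF2 stands in here purely to show how the iteration count scales per-request hashing cost. Timings depend entirely on the machine.

```python
import hashlib
import os
import time

# Stronger password hashing costs latency on every authentication: iteration
# count scales the work roughly linearly. PBKDF2 is used only to illustrate
# the knob; it is not keystone's actual hashing scheme.

salt = os.urandom(16)
for rounds in (1_000, 10_000, 100_000):
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha512", b"password", salt, rounds)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{rounds:>7} rounds: {elapsed_ms:.1f} ms")  # cost grows with rounds
```

Tuning this down buys throughput at the price of weaker offline-cracking resistance, which is why it is a deployment decision rather than a fixed default.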
>
>
> --
> david stanek
> web: https://dstanek.com
> twitter: https://twitter.com/dstanek
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-16 Thread Lance Bragstad
On Thu, Feb 16, 2017 at 5:20 PM, Dolph Mathews 
wrote:

> Thank you for the data and your test scripts! As Lance and Stanek already
> alluded, Fernet performance is very sensitive to keystone's configuration.
> Can your share your keystone.conf as well?
>
> I'll also be in Atlanta and would love to talk Fernet performance, even if
> we don't have a formal time slot on the schedule.
>

++ Our schedule is coming together and this is what we have so far [0]. If
there is an open time slot that works for your schedule, don't hesitate to
let me know.


[0] https://etherpad.openstack.org/p/keystone-pike-ptg


>
> On Wed, Feb 15, 2017 at 9:08 AM Lance Bragstad 
> wrote:
>
>> In addition to what David said, have you played around with caching in
>> keystone [0]? After the initial implementation of fernet landed, we
>> attempted to make it the default token provider. We ended up reverting the
>> default back to uuid because we hit several issues. Around the Liberty and
>> Mitaka timeframe, we reworked the caching implementation to fix those
>> issues and improve overall performance of all token formats, especially
>> fernet.
>>
>> We have a few different performance perspectives available, too. Some
>> were run nearly 2 years ago [1] and some are run today [2]. Since the
>> Newton release, we've made drastic improvements to the overall structure of
>> the token provider [3] [4] [5]. At the very least, it should make
>> understanding keystone's approach to tokens easier. Maintaining out-of-tree
>> token providers should also be easier since we cleaned up a lot of the
>> interfaces that affect developers maintaining their own providers.
>>
>> We can try and set something up at the PTG. We are getting pretty tight
>> for time slots, but I'm sure we can find some time to work through the
>> issues you're seeing (also, feel free to hop into #openstack-keystone on
>> freenode if you want to visit prior to the PTG).
>>
>>
>> [0] https://docs.openstack.org/developer/keystone/
>> configuration.html#caching-layer
>> [1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
>> [2] https://github.com/lbragstad/keystone-performance
>> [3] https://review.openstack.org/#/q/status:merged+project:
>> openstack/keystone+branch:master+topic:make-fernet-default
>> [4] https://review.openstack.org/#/q/status:merged+project:
>> openstack/keystone+branch:master+topic:cleanup-token-provider
>> [5] http://specs.openstack.org/openstack/keystone-specs/
>> specs/keystone/ocata/token-provider-cleanup.html
>>
>> On Wed, Feb 15, 2017 at 8:44 AM, David Stanek 
>> wrote:
>>
>> On 15-Feb 18:16, 王玺源 wrote:
>> > Hello everyone,
>> >   PKI/PKIZ token has been removed from keystone in Ocata. But recently
>> our
>> > production team did some test about PKI and Fernet token (With Keystone
>> > Mitaka). They found that in large-scale production environment, Fernet
>> > token's performance is not as good as PKI. Here is the test data:
>> >
>> > https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM
>> 25NzZcdzPE0fvY/edit?usp=sharing
>>
>> This is nice to see. Thanks.
>>
>>
>> >
>> > From the data, we can see that:
>> > 1. In large-scale concurrency test, PKI is much faster than Fernet.
>> > 2. PKI token revoke can't immediately make the token invalid. So it has
>> the
>> > revoke issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
>> >
>> > But in our production team's opinion, the revoke issue is a small
>> problem,
>> > and can be avoided by some periphery ways. (More detail solution could
>> be
>> > explained by them in the follow email).
>> > They think that the performance issue is the most important thing. Maybe
>> > you can see that in some production environment, performance is the
>> first
>> > thing to be considered.
>>
>> I'd like to hear solutions to this if you have already come up with
>> them. This issue, however, isn't the only one that led us to remove PKI
>> tokens.
>>
>> >
>> > So here I'd like to ask you, especially the keystone experts:
>> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>>
>> I would guess that, at least in the immediate future, we would not want
>> to put it back into keystone until someone can fix the issues. Also
>> ideally running the token provider in production.
>>
>>
>> > 2. Has Fernet token improved the performance during these releases? Or
>> any
>> > road map so that we can make sure Fernet is better than PKI in all side.
>> > Otherwise, I don't think that remove PKI in Ocata is the right way. Or
>> > even, we can keep the PKI token in Keystone for more one or two cycles,
>> > then remove it once Fernet is stable enough.
>> > 3. Since I'll be in Atlanta next week, if it is possible, I'd like to
>> > bring this topic to Keystone PTG. can I?
>>
>> Sure. We have a pretty packed calendar, but I'm sure you could steal a
>> few minutes somewhere.
>>
>>
>> >
>> > It is a real production problem and I really need 

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-16 Thread Dolph Mathews
Thank you for the data and your test scripts! As Lance and Stanek already
alluded, Fernet performance is very sensitive to keystone's configuration.
Can your share your keystone.conf as well?

I'll also be in Atlanta and would love to talk Fernet performance, even if
we don't have a formal time slot on the schedule.

On Wed, Feb 15, 2017 at 9:08 AM Lance Bragstad  wrote:

> In addition to what David said, have you played around with caching in
> keystone [0]? After the initial implementation of fernet landed, we
> attempted to make it the default token provider. We ended up reverting the
> default back to uuid because we hit several issues. Around the Liberty and
> Mitaka timeframe, we reworked the caching implementation to fix those
> issues and improve overall performance of all token formats, especially
> fernet.
>
> We have a few different performance perspectives available, too. Some were
> run nearly 2 years ago [1] and some are run today [2]. Since the Newton
> release, we've made drastic improvements to the overall structure of the
> token provider [3] [4] [5]. At the very least, it should make understanding
> keystone's approach to tokens easier. Maintaining out-of-tree token
> providers should also be easier since we cleaned up a lot of the interfaces
> that affect developers maintaining their own providers.
>
> We can try and set something up at the PTG. We are getting pretty tight
> for time slots, but I'm sure we can find some time to work through the
> issues you're seeing (also, feel free to hop into #openstack-keystone on
> freenode if you want to visit prior to the PTG).
>
>
> [0]
> https://docs.openstack.org/developer/keystone/configuration.html#caching-layer
> [1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
> [2] https://github.com/lbragstad/keystone-performance
> [3]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:make-fernet-default
> [4]
> https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
> [5]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/token-provider-cleanup.html
>
> On Wed, Feb 15, 2017 at 8:44 AM, David Stanek  wrote:
>
> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ tokens were removed from keystone in Ocata. But recently our
> > production team ran some tests comparing PKI and Fernet tokens (with
> > Keystone Mitaka). They found that in a large-scale production environment,
> > Fernet tokens' performance is not as good as PKI's. Here is the test data:
> >
> >
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In the large-scale concurrency test, PKI is much faster than Fernet.
> > 2. Revoking a PKI token can't immediately make the token invalid, so PKI
> > has the revocation issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revocation issue is a small
> > problem and can be worked around in peripheral ways. (They can explain
> > the solution in more detail in a follow-up email.)
> > They think the performance issue is the most important thing. In some
> > production environments, performance is the first thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the issues. Ideally,
> that would also include running the token provider in production.
>
>
> > 2. Has Fernet token performance improved over these releases? Is there
> > any road map so that we can make sure Fernet is better than PKI in all
> > respects? Otherwise, I don't think removing PKI in Ocata is the right
> > approach. Or even, we could keep the PKI token in Keystone for one or
> > two more cycles, then remove it once Fernet is stable enough.
> > 3. Since I'll be in Atlanta next week, if it is possible, I'd like to
> > bring this topic to the Keystone PTG. Can I?
>
> Sure. We have a pretty packed calendar, but I'm sure you could steal a
> few minutes somewhere.
>
>
> >
> > It is a real production problem and I really need your feedback.
> >
>
> Have you tried playing with the crypt_strength[1]? If the slowness is
> the crypto (which it was in the past) then you can tune it a little bit.
> Another option might be to keep the same token flow and find a faster
> method for hashing a token.
>
> 1.
> http://git.openstack.org/cgit/openstack/keystone/tree/etc/keystone.conf.sample#n67
>
>
> --
> david stanek
> web: https://dstanek.com
> twitter: https://twitter.com/dstanek

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-15 Thread Lance Bragstad
In addition to what David said, have you played around with caching in
keystone [0]? After the initial implementation of fernet landed, we
attempted to make it the default token provider. We ended up reverting the
default back to uuid because we hit several issues. Around the Liberty and
Mitaka timeframe, we reworked the caching implementation to fix those
issues and improve overall performance of all token formats, especially
fernet.

We have a few different performance perspectives available, too. Some were
run nearly 2 years ago [1] and some are run today [2]. Since the Newton
release, we've made drastic improvements to the overall structure of the
token provider [3] [4] [5]. At the very least, it should make understanding
keystone's approach to tokens easier. Maintaining out-of-tree token
providers should also be easier since we cleaned up a lot of the interfaces
that affect developers maintaining their own providers.

We can try and set something up at the PTG. We are getting pretty tight for
time slots, but I'm sure we can find some time to work through the issues
you're seeing (also, feel free to hop into #openstack-keystone on freenode
if you want to visit prior to the PTG).


[0]
https://docs.openstack.org/developer/keystone/configuration.html#caching-layer
[1] http://dolphm.com/benchmarking-openstack-keystone-token-formats/
[2] https://github.com/lbragstad/keystone-performance
[3]
https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:make-fernet-default
[4]
https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:cleanup-token-provider
[5]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/ocata/token-provider-cleanup.html
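For anyone trying to reproduce these numbers: token caching is off unless it is configured, and the settings involved look roughly like the sketch below. This is a hedged example, not a tuned production config -- the memcached address is an assumption, and exact option names vary by release, so check the caching docs linked above ([0]) for your version.

```ini
[cache]
# Turn on keystone's global dogpile.cache-based caching layer.
enabled = true
# Assumed backend and server; any dogpile.cache backend works here.
backend = dogpile.cache.memcached
memcache_servers = localhost:11211

[token]
# Serve repeated token validations from cache instead of the provider.
caching = true
# Seconds a validated token may be answered from cache.
cache_time = 300
```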

On Wed, Feb 15, 2017 at 8:44 AM, David Stanek  wrote:

> On 15-Feb 18:16, 王玺源 wrote:
> > Hello everyone,
> >   PKI/PKIZ tokens were removed from keystone in Ocata. But recently our
> > production team ran some tests comparing PKI and Fernet tokens (with
> > Keystone Mitaka). They found that in a large-scale production environment,
> > Fernet tokens' performance is not as good as PKI's. Here is the test data:
> >
> > https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing
>
> This is nice to see. Thanks.
>
>
> >
> > From the data, we can see that:
> > 1. In the large-scale concurrency test, PKI is much faster than Fernet.
> > 2. Revoking a PKI token can't immediately make the token invalid, so PKI
> > has the revocation issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> >
> > But in our production team's opinion, the revocation issue is a small
> > problem and can be worked around in peripheral ways. (They can explain
> > the solution in more detail in a follow-up email.)
> > They think the performance issue is the most important thing. In some
> > production environments, performance is the first thing to be considered.
>
> I'd like to hear solutions to this if you have already come up with
> them. This issue, however, isn't the only one that led us to remove PKI
> tokens.
>
> >
> > So here I'd like to ask you, especially the keystone experts:
> > 1. Is there any chance to bring PKI/PKIZ back to Keystone?
>
> I would guess that, at least in the immediate future, we would not want
> to put it back into keystone until someone can fix the issues. Ideally,
> that would also include running the token provider in production.
>
>
> > 2. Has Fernet token performance improved over these releases? Is there
> > any road map so that we can make sure Fernet is better than PKI in all
> > respects? Otherwise, I don't think removing PKI in Ocata is the right
> > approach. Or even, we could keep the PKI token in Keystone for one or
> > two more cycles, then remove it once Fernet is stable enough.
> > 3. Since I'll be in Atlanta next week, if it is possible, I'd like to
> > bring this topic to the Keystone PTG. Can I?
>
> Sure. We have a pretty packed calendar, but I'm sure you could steal a
> few minutes somewhere.
>
>
> >
> > It is a real production problem and I really need your feedback.
> >
>
> Have you tried playing with the crypt_strength[1]? If the slowness is
> the crypto (which it was in the past) then you can tune it a little bit.
> Another option might be to keep the same token flow and find a faster
> method for hashing a token.
>
> 1. http://git.openstack.org/cgit/openstack/keystone/tree/etc/keystone.conf.sample#n67
>
>
> --
> david stanek
> web: https://dstanek.com
> twitter: https://twitter.com/dstanek
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [keystone]PKI token VS Fernet token

2017-02-15 Thread David Stanek
On 15-Feb 18:16, 王玺源 wrote:
> Hello everyone,
>   PKI/PKIZ tokens were removed from keystone in Ocata. But recently our
> production team ran some tests comparing PKI and Fernet tokens (with
> Keystone Mitaka). They found that in a large-scale production environment,
> Fernet tokens' performance is not as good as PKI's. Here is the test data:
> 
> https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing

This is nice to see. Thanks.


> 
> From the data, we can see that:
> 1. In the large-scale concurrency test, PKI is much faster than Fernet.
> 2. Revoking a PKI token can't immediately make the token invalid, so PKI
> has the revocation issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062
> 
> But in our production team's opinion, the revocation issue is a small
> problem and can be worked around in peripheral ways. (They can explain
> the solution in more detail in a follow-up email.)
> They think the performance issue is the most important thing. In some
> production environments, performance is the first thing to be considered.

I'd like to hear solutions to this if you have already come up with
them. This issue, however, isn't the only one that led us to remove PKI
tokens.

> 
> So here I'd like to ask you, especially the keystone experts:
> 1. Is there any chance to bring PKI/PKIZ back to Keystone?

I would guess that, at least in the immediate future, we would not want
to put it back into keystone until someone can fix the issues. Ideally,
that would also include running the token provider in production.


> 2. Has Fernet token performance improved over these releases? Is there any
> road map so that we can make sure Fernet is better than PKI in all
> respects? Otherwise, I don't think removing PKI in Ocata is the right
> approach. Or even, we could keep the PKI token in Keystone for one or
> two more cycles, then remove it once Fernet is stable enough.
> 3. Since I'll be in Atlanta next week, if it is possible, I'd like to
> bring this topic to the Keystone PTG. Can I?

Sure. We have a pretty packed calendar, but I'm sure you could steal a
few minutes somewhere.


> 
> It is a real production problem and I really need your feedback.
> 

Have you tried playing with the crypt_strength[1]? If the slowness is
the crypto (which it was in the past) then you can tune it a little bit.
Another option might be to keep the same token flow and find a faster
method for hashing a token.

1. 
http://git.openstack.org/cgit/openstack/keystone/tree/etc/keystone.conf.sample#n67
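To make the tuning suggestion concrete: the cost of password-style hashing grows roughly linearly with the work factor, which a short stdlib sketch can show. PBKDF2 here is purely a stand-in work-factor hash -- it is not what keystone's crypt_strength option maps to internally -- so treat this as an illustration of why lowering the strength knob trades brute-force resistance for CPU time, not as keystone code.

```python
import hashlib
import time


def timed_hash(rounds: int, secret: bytes = b"example-password") -> float:
    """Return seconds spent on one key derivation at the given work factor."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha512", secret, b"fixed-salt", rounds)
    return time.perf_counter() - start


# Each 10x increase in rounds costs roughly 10x the CPU time per hash.
for rounds in (1_000, 10_000, 100_000):
    print(f"{rounds:>7} rounds: {timed_hash(rounds) * 1000:.2f} ms")
```

Under heavy concurrency that per-request cost multiplies across every validation, which is why a too-aggressive setting shows up directly in benchmarks like the one linked above.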


-- 
david stanek
web: https://dstanek.com
twitter: https://twitter.com/dstanek

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone]PKI token VS Fernet token

2017-02-15 Thread 王玺源
Hello everyone,
  PKI/PKIZ tokens were removed from keystone in Ocata. But recently our
production team ran some tests comparing PKI and Fernet tokens (with Keystone
Mitaka). They found that in a large-scale production environment, Fernet
tokens' performance is not as good as PKI's. Here is the test data:

https://docs.google.com/document/d/12cL9bq9EARjZw9IS3YxVmYsGfdauM25NzZcdzPE0fvY/edit?usp=sharing

From the data, we can see that:
1. In the large-scale concurrency test, PKI is much faster than Fernet.
2. Revoking a PKI token can't immediately make the token invalid, so PKI has
the revocation issue.  https://wiki.openstack.org/wiki/OSSN/OSSN-0062

But in our production team's opinion, the revocation issue is a small problem
and can be worked around in peripheral ways. (They can explain the solution in
more detail in a follow-up email.)
They think the performance issue is the most important thing. In some
production environments, performance is the first thing to be considered.
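For readers following the revocation point: a non-persistent token can be validated with nothing but the server-side key material, and that statelessness is exactly what makes single-token revocation need extra shared state. The stdlib sketch below is an HMAC-signed lookalike -- keystone's real Fernet tokens are AES-encrypted and formatted differently -- so the key name and payload fields are invented for illustration only.

```python
import base64
import hashlib
import hmac
import json
import time

# Assumed server-side key, shared by all keystone nodes (like a fernet key).
KEY = b"server-side-signing-key"


def issue(user_id: str, ttl: float = 3600.0) -> str:
    """Mint a self-describing token; note that no database write is needed."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"user": user_id, "exp": time.time() + ttl}).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(KEY, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()


def validate(token: str, revoked: frozenset = frozenset()) -> dict:
    """Validate with the key alone -- fast and stateless, which is exactly
    why revoking one token requires extra shared state (the revoked set)."""
    payload, _, sig = token.encode().partition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(KEY, payload, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    data = json.loads(base64.urlsafe_b64decode(payload))
    if data["exp"] < time.time() or token in revoked:
        raise ValueError("expired or revoked")
    return data


tok = issue("alice")
print(validate(tok)["user"])  # alice
```

The "periphery ways" mentioned above amount to maintaining something like that revoked set (or short TTLs) alongside the stateless check.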

So here I'd like to ask you, especially the keystone experts:
1. Is there any chance to bring PKI/PKIZ back to Keystone?
2. Has Fernet token performance improved over these releases? Is there any
road map so that we can make sure Fernet is better than PKI in all respects?
Otherwise, I don't think removing PKI in Ocata is the right approach. Or
even, we could keep the PKI token in Keystone for one or two more cycles,
then remove it once Fernet is stable enough.
3. Since I'll be in Atlanta next week, if it is possible, I'd like to bring
this topic to the Keystone PTG. Can I?

It is a real production problem and I really need your feedback.

Thanks!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev