[openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Chuck Thier
There is a review for swift [1] that is requesting to set the max header
size to 16k to be able to support v3 keystone tokens.  That might be fine
if you measure your request rate in requests per minute, but this is
continuing to add significant overhead to swift.  Even if you *only* have
10,000 requests/sec to your swift cluster, an 8k token is adding almost
80MB/sec of bandwidth.  This would be equally bad (if not worse) for
services like marconi.
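
As a rough back-of-the-envelope check (the 10,000 requests/sec and 8k token
figures are the illustrative numbers above, not measurements):

    # Extra bandwidth from carrying an ~8 kB token on every request.
    requests_per_sec = 10000       # illustrative rate from the paragraph above
    token_bytes = 8 * 1024         # assumed size of the X-Auth-Token header value

    extra = requests_per_sec * token_bytes
    print("%.1f MiB/s of header overhead" % (extra / 2.0 ** 20))
    # prints roughly 78.1 MiB/s, in line with the "almost 80MB/sec" above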

When PKI tokens were first introduced, we raised concerns about the
unbounded size of the token in the header, and were told that uuid style
tokens would still be usable, but all I heard at the summit was to not use
them and that PKI was the future of all things.

At what point do we re-evaluate the decision to go with PKI tokens, and
acknowledge that they may not be the best idea for APIs like swift and marconi?

Thanks,

--
Chuck

[1] https://review.openstack.org/#/c/93356/


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Adam Young

On 05/21/2014 11:09 AM, Chuck Thier wrote:
There is a review for swift [1] that is requesting to set the max 
header size to 16k to be able to support v3 keystone tokens.  That 
might be fine if you measure your request rate in requests per minute, 
but this is continuing to add significant overhead to swift.  Even if 
you *only* have 10,000 requests/sec to your swift cluster, an 8k token 
is adding almost 80MB/sec of bandwidth.  This will seem to be equally 
bad (if not worse) for services like marconi.


When PKI tokens were first introduced, we raised concerns about the 
unbounded size of the token in the header, and were told that uuid 
style tokens would still be usable, but all I heard at the summit, was 
to not use them and PKI was the future of all things.


At what point do we re-evaluate the decision to go with pki tokens, 
and that they may not be the best idea for apis like swift and marconi?


Keystone tokens were slightly shrunk at the end of the last release 
cycle by removing unnecessary data from each endpoint entry.


Compressed PKI tokens are en route and will be much smaller.



Thanks,

--
Chuck

[1] https://review.openstack.org/#/c/93356/




Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Morgan Fainberg
The keystone team is also looking at ways to reduce the data contained in
the token. Coupled with the compression, this should get the tokens back
down to a reasonable size.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:

  On 05/21/2014 11:09 AM, Chuck Thier wrote:

 There is a review for swift [1] that is requesting to set the max header
 size to 16k to be able to support v3 keystone tokens.  That might be fine
 if you measure your request rate in requests per minute, but this is
 continuing to add significant overhead to swift.  Even if you *only* have
 10,000 requests/sec to your swift cluster, an 8k token is adding almost
 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
 services like marconi.

  When PKI tokens were first introduced, we raised concerns about the
 unbounded size of the token in the header, and were told that uuid style
 tokens would still be usable, but all I heard at the summit, was to not use
 them and PKI was the future of all things.

  At what point do we re-evaluate the decision to go with pki tokens, and
 that they may not be the best idea for apis like swift and marconi?


 Keystone tokens were slightly shrunk at the end of the last release cycle
 by removing unnecessary data from each endpoint entry.

 Compressed PKI tokens are en route and will be much smaller.


  Thanks,

  --
 Chuck

  [1] https://review.openstack.org/#/c/93356/




Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread John Dickinson
Can you explain how PKI info is compressible? I thought it was encrypted, which 
should mean you can't compress it right?


--John





On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote:

 The keystone team is also looking at ways to reduce the data contained in the 
 token. Coupled with the compression, this should get the tokens back down to 
 a reasonable size. 
 
 Cheers,
 Morgan
 
 Sent via mobile
 
 On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
 On 05/21/2014 11:09 AM, Chuck Thier wrote:
 There is a review for swift [1] that is requesting to set the max header 
 size to 16k to be able to support v3 keystone tokens.  That might be fine if 
 you measure your request rate in requests per minute, but this is continuing 
 to add significant overhead to swift.  Even if you *only* have 10,000 
 requests/sec to your swift cluster, an 8k token is adding almost 80MB/sec of 
 bandwidth.  This will seem to be equally bad (if not worse) for services 
 like marconi.
 
 When PKI tokens were first introduced, we raised concerns about the 
 unbounded size of the token in the header, and were told that uuid style 
 tokens would still be usable, but all I heard at the summit, was to not use 
 them and PKI was the future of all things.
 
 At what point do we re-evaluate the decision to go with pki tokens, and that 
 they may not be the best idea for apis like swift and marconi?
 
 Keystone tokens were slightly shrunk at the end of the last release cycle by 
 removing unnecessary data from each endpoint entry.
 
 Compressed PKI tokens are en route and will be much smaller.
 
 
 Thanks,
 
 --
 Chuck
 
 [1] https://review.openstack.org/#/c/93356/
 
 


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Lance Bragstad
John,

Adam had a blog post on Compressed Tokens that might help shed a little
light on them in general[1]. We also have a blueprint for tracking the work
as it gets done[2].


[1] http://adam.younglogic.com/2014/02/compressed-tokens/
[2] https://blueprints.launchpad.net/keystone/+spec/compress-tokens


On Wed, May 21, 2014 at 10:41 AM, John Dickinson m...@not.mn wrote:

 Can you explain how PKI info is compressible? I thought it was encrypted,
 which should mean you can't compress it right?


 --John





 On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

  The keystone team is also looking at ways to reduce the data contained
 in the token. Coupled with the compression, this should get the tokens back
 down to a reasonable size.
 
  Cheers,
  Morgan
 
  Sent via mobile
 
  On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
  On 05/21/2014 11:09 AM, Chuck Thier wrote:
  There is a review for swift [1] that is requesting to set the max
 header size to 16k to be able to support v3 keystone tokens.  That might be
  fine if you measure your request rate in requests per minute, but this is
 continuing to add significant overhead to swift.  Even if you *only* have
 10,000 requests/sec to your swift cluster, an 8k token is adding almost
 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
 services like marconi.
 
  When PKI tokens were first introduced, we raised concerns about the
  unbounded size of the token in the header, and were told that uuid style
 tokens would still be usable, but all I heard at the summit, was to not use
 them and PKI was the future of all things.
 
  At what point do we re-evaluate the decision to go with pki tokens, and
 that they may not be the best idea for apis like swift and marconi?
 
  Keystone tokens were slightly shrunk at the end of the last release
 cycle by removing unnecessary data from each endpoint entry.
 
  Compressed PKI tokens are en route and will be much smaller.
 
 
  Thanks,
 
  --
  Chuck
 
  [1] https://review.openstack.org/#/c/93356/
 
 


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Dolph Mathews
On Wed, May 21, 2014 at 10:41 AM, John Dickinson m...@not.mn wrote:

 Can you explain how PKI info is compressible? I thought it was encrypted,
 which should mean you can't compress it right?


They're not encrypted - just signed and then base64 encoded. The JSON (and
especially service catalog) is compressible prior to encoding.
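
As a minimal sketch of why that helps, here is a hypothetical catalog-style
JSON payload run through zlib before base64 (the real keystone token format
and framing differ; this only illustrates the compressibility claim):

    import base64
    import json
    import zlib

    # Hypothetical, repetitive catalog-style payload; real tokens differ.
    payload = json.dumps({
        "token": {
            "catalog": [
                {
                    "type": svc,
                    "endpoints": [
                        {"interface": iface,
                         "url": "http://example.com:5000/" + svc}
                        for iface in ("public", "internal", "admin")
                    ],
                }
                for svc in ("identity", "compute", "volume",
                            "image", "object-store")
            ]
        }
    }).encode("utf-8")

    plain_b64 = base64.b64encode(payload)
    zipped_b64 = base64.b64encode(zlib.compress(payload, 9))
    print(len(plain_b64), len(zipped_b64))   # the compressed form is much smaller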



 --John





 On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:

  The keystone team is also looking at ways to reduce the data contained
 in the token. Coupled with the compression, this should get the tokens back
 down to a reasonable size.
 
  Cheers,
  Morgan
 
  Sent via mobile
 
  On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
  On 05/21/2014 11:09 AM, Chuck Thier wrote:
  There is a review for swift [1] that is requesting to set the max
 header size to 16k to be able to support v3 keystone tokens.  That might be
  fine if you measure your request rate in requests per minute, but this is
 continuing to add significant overhead to swift.  Even if you *only* have
 10,000 requests/sec to your swift cluster, an 8k token is adding almost
 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
 services like marconi.
 
  When PKI tokens were first introduced, we raised concerns about the
  unbounded size of the token in the header, and were told that uuid style
 tokens would still be usable, but all I heard at the summit, was to not use
 them and PKI was the future of all things.
 
  At what point do we re-evaluate the decision to go with pki tokens, and
 that they may not be the best idea for apis like swift and marconi?
 
  Keystone tokens were slightly shrunk at the end of the last release
 cycle by removing unnecessary data from each endpoint entry.
 
  Compressed PKI tokens are en route and will be much smaller.
 
 
  Thanks,
 
  --
  Chuck
 
  [1] https://review.openstack.org/#/c/93356/
 
 


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread John Dickinson
Thanks Dolph and Lance for the info and links.


What concerns me, in general, about the current length of keystone tokens is 
that they are unbounded. And the proposed solutions don't change that pattern.

My understanding of why PKI tokens are used is so that the system doesn't have 
to call to Keystone to authorize the request. This reduces the load on 
Keystone, but it adds significant overhead for every API request.

Keystone's first system was to use UUID bearer tokens. These are fixed length, 
small, cacheable, and require a call to Keystone once per cache period.

Moving to PKI tokens, we now have multi-kB headers that significantly increase 
the size of each request. Swift deployers commonly have small objects on the 
order of 50kB, so adding another ~10kB to each request, just to save a 
once-a-day call to Keystone (ie uuid tokens) seems to be a really high price to 
pay for not much benefit.
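
Rough math on that trade-off, reusing the illustrative 50kB object and ~10kB
token figures from above:

    # Relative overhead of a large token on a small-object workload.
    object_bytes = 50 * 1024     # typical small swift object, per the example above
    token_bytes = 10 * 1024      # assumed ~10 kB token on every request

    overhead = token_bytes / (object_bytes + token_bytes)
    print("token is %.0f%% of each request" % (overhead * 100))   # ~17%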

The other benefit to PKI tokens is that services can make calls to other 
systems on behalf of the user (eg nova can call cinder for the user). This is 
great, but it's not the only usage pattern in OpenStack projects, and therefore 
I don't like optimizing for it at the expense of other patterns.

In addition to PKI tokens (ie signed+encoded service catalogs), I'd like to see 
Keystone support and remain committed to fixed-length bearer tokens or a 
signed-with-shared-secret auth mechanism (a la AWS).
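
For illustration only, a generic shared-secret signing sketch (this is not
the actual AWS signature format and not a proposed keystone wire format; the
names and signed fields are made up for the example):

    import hashlib
    import hmac

    def sign_request(secret, method, path, date):
        # The client and the service share a per-account secret; each request
        # carries a fixed-length signature instead of a multi-kB token.
        msg = "\n".join([method, path, date]).encode("utf-8")
        return hmac.new(secret, msg, hashlib.sha256).hexdigest()

    sig = sign_request(b"per-account-secret", "PUT",
                       "/v1/AUTH_test/container/object",
                       "Wed, 21 May 2014 16:00:00 GMT")
    print(len(sig))   # always 64 hex characters, regardless of catalog size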

--John




On May 21, 2014, at 9:09 AM, Dolph Mathews dolph.math...@gmail.com wrote:

 
 On Wed, May 21, 2014 at 10:41 AM, John Dickinson m...@not.mn wrote:
 Can you explain how PKI info is compressible? I thought it was encrypted, 
 which should mean you can't compress it right?
 
 They're not encrypted - just signed and then base64 encoded. The JSON (and 
 especially service catalog) is compressible prior to encoding.
 
 
 
 --John
 
 
 
 
 
 On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com 
 wrote:
 
  The keystone team is also looking at ways to reduce the data contained in 
  the token. Coupled with the compression, this should get the tokens back 
  down to a reasonable size.
 
  Cheers,
  Morgan
 
  Sent via mobile
 
  On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
  On 05/21/2014 11:09 AM, Chuck Thier wrote:
  There is a review for swift [1] that is requesting to set the max header 
  size to 16k to be able to support v3 keystone tokens.  That might be fine 
  if you measure your request rate in requests per minute, but this is 
  continuing to add significant overhead to swift.  Even if you *only* have 
  10,000 requests/sec to your swift cluster, an 8k token is adding almost 
  80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) 
  for services like marconi.
 
  When PKI tokens were first introduced, we raised concerns about the 
  unbounded size of the token in the header, and were told that uuid 
  style tokens would still be usable, but all I heard at the summit, was to 
  not use them and PKI was the future of all things.
 
  At what point do we re-evaluate the decision to go with pki tokens, and 
  that they may not be the best idea for apis like swift and marconi?
 
  Keystone tokens were slightly shrunk at the end of the last release cycle 
  by removing unnecessary data from each endpoint entry.
 
  Compressed PKI tokens are en route and will be much smaller.
 
 
  Thanks,
 
  --
  Chuck
 
  [1] https://review.openstack.org/#/c/93356/
 
 


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Kurt Griffiths
 adding another ~10kB to each request, just to save a once-a-day call to
Keystone (ie uuid tokens) seems to be a really high price to pay for not
much benefit.

I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but don’t work so well for high-volume
data APIs where every KB counts.

Just my $0.02...

--Kurt



Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Morgan Fainberg
This is part of what I was referencing in regards to lightening the data
stored in the token. Ideally, we would like to see an ID only token that
only contains the basic information to act. Some initial tests show these
tokens should be able to clock in under 1k in size. However all the details
are not fully defined yet. Coupled with this data reduction there will be
explicit definitions of the data that is meant to go into the tokens. Some
of the data we have now is a result of convenience of accessing the data.

I hope to have this token change available during Juno development cycle.

There is a lot of work to be done to ensure this type of change goes
smoothly. But this is absolutely on the list of things we would like to
address.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths kurt.griffi...@rackspace.com
wrote:

  adding another ~10kB to each request, just to save a once-a-day call to
 Keystone (ie uuid tokens) seems to be a really high price to pay for not
 much benefit.

 I have the same concern with respect to Marconi. I feel like PKI tokens
 are fine for control plane APIs, but don’t work so well for high-volume
 data APIs where every KB counts.

 Just my $0.02...

 --Kurt



Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Kurt Griffiths
Good to know, thanks for clarifying. One thing I’m still fuzzy on, however, is 
why we want to deprecate use of UUID tokens in the first place? I’m just trying 
to understand the history here...

From: Morgan Fainberg morgan.fainb...@gmail.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Wednesday, May 21, 2014 at 1:23 PM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Concerns about the ballooning size of keystone 
tokens

This is part of what I was referencing in regards to lightening the data stored 
in the token. Ideally, we would like to see an ID only token that only 
contains the basic information to act. Some initial tests show these tokens 
should be able to clock in under 1k in size. However all the details are not 
fully defined yet. Coupled with this data reduction there will be explicit 
definitions of the data that is meant to go into the tokens. Some of the data 
we have now is a result of convenience of accessing the data.

I hope to have this token change available during Juno development cycle.

There is a lot of work to be done to ensure this type of change goes smoothly. 
But this is absolutely on the list of things we would like to address.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths kurt.griffi...@rackspace.com wrote:
 adding another ~10kB to each request, just to save a once-a-day call to
Keystone (ie uuid tokens) seems to be a really high price to pay for not
much benefit.

I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but don’t work so well for high-volume
data APIs where every KB counts.

Just my $0.02...

--Kurt



Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Dolph Mathews
On Wed, May 21, 2014 at 2:36 PM, Kurt Griffiths 
kurt.griffi...@rackspace.com wrote:

  Good to know, thanks for clarifying. One thing I’m still fuzzy on,
 however, is why we want to deprecate use of UUID tokens in the first place?
 I’m just trying to understand the history here...


I don't think anyone has seriously discussed deprecating UUID tokens, only
that the number of benefits UUID has over PKI is rapidly diminishing as our
PKI implementation improves.



   From: Morgan Fainberg morgan.fainb...@gmail.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Wednesday, May 21, 2014 at 1:23 PM
 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Concerns about the ballooning size of
 keystone tokens

  This is part of what I was referencing in regards to lightening the data
 stored in the token. Ideally, we would like to see an ID only token that
 only contains the basic information to act. Some initial tests show these
 tokens should be able to clock in under 1k in size. However all the details
 are not fully defined yet. Coupled with this data reduction there will be
 explicit definitions of the data that is meant to go into the tokens. Some
 of the data we have now is a result of convenience of accessing the data.

  I hope to have this token change available during Juno development
 cycle.

  There is a lot of work to be done to ensure this type of change goes
 smoothly. But this is absolutely on the list of things we would like to
 address.

  Cheers,
 Morgan

  Sent via mobile

 On Wednesday, May 21, 2014, Kurt Griffiths kurt.griffi...@rackspace.com
 wrote:

  adding another ~10kB to each request, just to save a once-a-day call to
 Keystone (ie uuid tokens) seems to be a really high price to pay for not
 much benefit.

 I have the same concern with respect to Marconi. I feel like PKI tokens
 are fine for control plane APIs, but don’t work so well for high-volume
 data APIs where every KB counts.

 Just my $0.02...

 --Kurt



Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Dolph Mathews
On Wed, May 21, 2014 at 11:32 AM, John Dickinson m...@not.mn wrote:

 Thanks Dolph and Lance for the info and links.


 What concerns me, in general, about the current length of keystone tokens
 is that they are unbounded. And the proposed solutions don't change that
 pattern.

 My understanding of why PKI tokens are used is so that the system doesn't
 have to call to Keystone to authorize the request. This reduces the load on
 Keystone, but it adds significant overhead for every API request.

 Keystone's first system was to use UUID bearer tokens. These are fixed
 length, small, cacheable, and require a call to Keystone once per cache
 period.

 Moving to PKI tokens, we now have multi-kB headers that significantly
 increase the size of each request. Swift deployers commonly have small
 objects on the order of 50kB, so adding another ~10kB to each request,
 just to save a once-a-day call to Keystone (ie uuid tokens) seems to be a
 really high price to pay for not much benefit.

 The other benefit to PKI tokens is that services can make calls to other
 systems on behalf of the user (eg nova can call cinder for the user). This
 is great, but it's not the only usage pattern in OpenStack projects, and
 therefore I don't like optimizing for it at the expense of other patterns.

 In addition to PKI tokens (ie signed+encoded service catalogs), I'd like
 to see Keystone support and remain committed to fixed-length bearer tokens
 or a signed-with-shared-secret auth mechanism (a la AWS).


This is a fantastic argument in favor of UUID today. PKI will likely never
be fixed-length, but hopefully we can continue making them smaller such
that this argument might carry substantially less weight someday.



 --John




 On May 21, 2014, at 9:09 AM, Dolph Mathews dolph.math...@gmail.com
 wrote:

 
  On Wed, May 21, 2014 at 10:41 AM, John Dickinson m...@not.mn wrote:
  Can you explain how PKI info is compressible? I thought it was
 encrypted, which should mean you can't compress it right?
 
  They're not encrypted - just signed and then base64 encoded. The JSON
 (and especially service catalog) is compressible prior to encoding.
 
 
 
  --John
 
 
 
 
 
  On May 21, 2014, at 8:32 AM, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:
 
   The keystone team is also looking at ways to reduce the data contained
 in the token. Coupled with the compression, this should get the tokens back
 down to a reasonable size.
  
   Cheers,
   Morgan
  
   Sent via mobile
  
   On Wednesday, May 21, 2014, Adam Young ayo...@redhat.com wrote:
   On 05/21/2014 11:09 AM, Chuck Thier wrote:
   There is a review for swift [1] that is requesting to set the max
 header size to 16k to be able to support v3 keystone tokens.  That might be
  fine if you measure your request rate in requests per minute, but this is
 continuing to add significant overhead to swift.  Even if you *only* have
 10,000 requests/sec to your swift cluster, an 8k token is adding almost
 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
 services like marconi.
  
   When PKI tokens were first introduced, we raised concerns about the
  unbounded size of the token in the header, and were told that uuid style
 tokens would still be usable, but all I heard at the summit, was to not use
 them and PKI was the future of all things.
  
   At what point do we re-evaluate the decision to go with pki tokens,
 and that they may not be the best idea for apis like swift and marconi?
  
   Keystone tokens were slightly shrunk at the end of the last release
 cycle by removing unnecessary data from each endpoint entry.
  
   Compressed PKI tokens are en route and will be much smaller.
  
  
   Thanks,
  
   --
   Chuck
  
   [1] https://review.openstack.org/#/c/93356/
  
  


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Adam Young

On 05/21/2014 02:00 PM, Kurt Griffiths wrote:

adding another ~10kB to each request, just to save a once-a-day call to
Keystone (ie uuid tokens) seems to be a really high price to pay for not
much benefit.

I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but don’t work so well for high-volume
data APIs where every KB counts.

For those you should use symmetric MACs, in accordance with Kite.

For low-volume authentication you should use PKI.

You don't save the data; it just gets transferred at a different point.  
It is the service catalog that makes the token variable in size, and we 
have an option to turn off the service catalog in a token.
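
For example, a rough sketch of asking keystone v3 for a token without the
catalog via the ?nocatalog query parameter (endpoint URL and credentials are
placeholders; check that your keystone version supports this):

    import json
    import requests

    KEYSTONE = "http://keystone.example.com:5000"   # placeholder endpoint

    body = {"auth": {"identity": {"methods": ["password"], "password": {
        "user": {"name": "demo", "domain": {"id": "default"},
                 "password": "secret"}}}}}

    # ?nocatalog asks keystone to omit the service catalog from the token.
    resp = requests.post(KEYSTONE + "/v3/auth/tokens?nocatalog",
                         headers={"Content-Type": "application/json"},
                         data=json.dumps(body))
    token = resp.headers["X-Subject-Token"]
    print(len(token))   # noticeably smaller than a token carrying the catalog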





Just my $0.02...

--Kurt



Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Adam Young

On 05/21/2014 03:36 PM, Kurt Griffiths wrote:
Good to know, thanks for clarifying. One thing I'm still fuzzy on, 
however, is why we want to deprecate use of UUID tokens in the first 
place? I'm just trying to understand the history here...
Because they are wasteful, and because they are the chattiest part of 
OpenStack.  I can go into it in nauseating detail if you really want, 
including the plans for future enhancements and the weaknesses of bearer 
tokens.



A token is nothing more than a snapshot of the data you get from 
Keystone, distributed.  It is stored in Memcached, and the Horizon 
session uses the hash of it for a key.


You can do the same thing.  Once you know the token has been transferred 
once to a service, assuming that service has caching on, you can pass 
the hash of the key instead of the whole thing.


Actually, you can do that up front, as auth_token middleware will just 
default to an online lookup. However, we are planning on moving to 
ephemeral tokens (not saved in the database) and an online lookup won't 
be possible with those.  The people that manage Keystone will be happy 
with that, and forcing an online lookup will make them sad.


Hash is MD5 up through what is released in Icehouse.  The next version 
of auth_token middleware will support a configurable algorithm.  The 
default should be updated to sha256 in the near future.
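
A minimal sketch of that hashing trick (digest names as above; the exact
cache-key format used by auth_token middleware may differ):

    import hashlib

    def short_token_id(pki_token, algorithm="md5"):
        # Collapse a multi-kB PKI token to a fixed-length digest usable as a
        # cache key (md5 is the historical default; sha256 is the safer choice).
        return hashlib.new(algorithm, pki_token.encode("utf-8")).hexdigest()

    big_token = "MIIC" * 2000     # stand-in for a large PKI token blob
    print(len(short_token_id(big_token)))              # 32 hex chars (md5)
    print(len(short_token_id(big_token, "sha256")))    # 64 hex chars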









From: Morgan Fainberg morgan.fainb...@gmail.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Wednesday, May 21, 2014 at 1:23 PM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Concerns about the ballooning size of 
keystone tokens


This is part of what I was referencing in regards to lightening the 
data stored in the token. Ideally, we would like to see an ID only 
token that only contains the basic information to act. Some initial 
tests show these tokens should be able to clock in under 1k in size. 
However all the details are not fully defined yet. Coupled with this 
data reduction there will be explicit definitions of the data that is 
meant to go into the tokens. Some of the data we have now is a result 
of convenience of accessing the data.


I hope to have this token change available during Juno development cycle.

There is a lot of work to be done to ensure this type of change goes 
smoothly. But this is absolutely on the list of things we would like 
to address.


Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths kurt.griffi...@rackspace.com wrote:


 adding another ~10kB to each request, just to save a once-a-day
call to
Keystone (ie uuid tokens) seems to be a really high price to pay
for not
much benefit.

I have the same concern with respect to Marconi. I feel like PKI
tokens
are fine for control plane APIs, but don't work so well for
high-volume
data APIs where every KB counts.

Just my $0.02...

--Kurt



Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread John Dickinson

On May 21, 2014, at 4:26 PM, Adam Young ayo...@redhat.com wrote:

 On 05/21/2014 03:36 PM, Kurt Griffiths wrote:
 Good to know, thanks for clarifying. One thing I’m still fuzzy on, however, 
 is why we want to deprecate use of UUID tokens in the first place? I’m just 
 trying to understand the history here...
 Because they are wasteful, and because they are the chattiest part of 
 OpenStack.  I can go into it in nauseating detail if you really want, 
 including the plans for future enhancements and the weaknesses of bearer 
 tokens.
 
 
 A token is nothing more than a snapshot of the data you get from Keystone 
 distributed.  It is stored in Memcached and in the Horizon session uses the 
 hash of it for a key.
 
 You can do the same thing.  Once you know the token has been transferred once 
 to a service, assuming that service has caching on, you can pass the hash of 
 the key instead of the whole thing.  

So this would mean that a Swift client would auth against Keystone to get the 
PKI token, send that to Swift, and then get back from Swift a short token 
that can be used for subsequent requests? It's an interesting idea to consider, 
but it is a new sort of protocol for clients to implement.


 
 Actually, you can do that up front, as auth_token middleware will just 
 default to an online lookup. However, we are planning on moving to ephemeral 
 tokens (not saved in the database) and an online lookup won't be possible 
 with those.  The people that manage Keystone will be happy with that, and 
 forcing an online lookup will make them sad.

An online lookup is one that calls the Keystone service to validate a token? 
Which implies that by disabling online lookup there is enough info in the token 
to validate it without any call to Keystone?

I understand how it's advantageous to offload token validation away from 
Keystone itself (helps with scaling), but the current solution here seems to 
be pushing a lot of pain to consumers and deployers of data APIs (eg Marconi 
and Swift and others).


 
 Hash is MD5 up through what is released in Icehouse.  The next version of 
 auth_token middleware will support a configurable algorithm.  The default 
 should be updated to sha256 in the near future.

If a service (like Horizon) is hashing the token and using that as a session 
key, then why does it matter what the auth_token middleware supports? Isn't the 
hashing handled in the service itself? I'm thinking in the context of how we 
would implement this idea in Swift (exploring possibilities, not committing to 
a patch).

 
 
 
 
 
 
 
 From: Morgan Fainberg morgan.fainb...@gmail.com
 Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
 Date: Wednesday, May 21, 2014 at 1:23 PM
 To: OpenStack Dev openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Concerns about the ballooning size of keystone 
 tokens
 
 This is part of what I was referencing in regards to lightening the data 
 stored in the token. Ideally, we would like to see an ID only token that 
 only contains the basic information to act. Some initial tests show these 
 tokens should be able to clock in under 1k in size. However all the details 
 are not fully defined yet. Coupled with this data reduction there will be 
 explicit definitions of the data that is meant to go into the tokens. Some 
 of the data we have now is a result of convenience of accessing the data. 
 
 I hope to have this token change available during Juno development cycle. 
 
 There is a lot of work to be done to ensure this type of change goes 
 smoothly. But this is absolutely on the list of things we would like to 
 address. 
 
 Cheers,
 Morgan
 
 Sent via mobile 
 
 On Wednesday, May 21, 2014, Kurt Griffiths kurt.griffi...@rackspace.com 
 wrote:
  adding another ~10kB to each request, just to save a once-a-day call to
 Keystone (ie uuid tokens) seems to be a really high price to pay for not
 much benefit.
 
 I have the same concern with respect to Marconi. I feel like PKI tokens
 are fine for control plane APIs, but don’t work so well for high-volume
 data APIs where every KB counts.
 
 Just my $0.02...
 
 --Kurt
 


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Clint Byrum
Excerpts from John Dickinson's message of 2014-05-21 17:23:02 -0700:
 
 On May 21, 2014, at 4:26 PM, Adam Young ayo...@redhat.com wrote:
 
  On 05/21/2014 03:36 PM, Kurt Griffiths wrote:
  Good to know, thanks for clarifying. One thing I’m still fuzzy on, 
  however, is why we want to deprecate use of UUID tokens in the first 
  place? I’m just trying to understand the history here...
  Because they are wasteful, and because they are the chattiest part of 
  OpenStack.  I can go into it in nauseating detail if you really want, 
  including the plans for future enhancements and the weaknesses of bearer 
  tokens.
  
  
  A token is nothing more than a snapshot of the data you get from Keystone 
  distributed.  It is stored in Memcached and in the Horizon session uses the 
  hash of it for a key.
  
  You can do the same thing.  Once you know the token has been transferred 
  once to a service, assuming that service has caching on, you can pass the 
  hash of the key instead of the whole thing.  
 
 So this would mean that a Swift client would auth against Keystone to get the 
 PKI token, send that to Swift, and then get back from Swift a short token 
 that can be used for subsequent requests? It's an interesting idea to 
 consider, but it is a new sort of protocol for clients to implement.
 

Doesn't this mean that Swift would have to store the token it first
received, so that it can verify that the hash matches the token and to
extract the session information contained within?

It seems like the keystone auth middleware should be able to help with
this quite a bit, and I think it already does, but a pointer to the
documentation on how to make use of it would help close the loop here.

  
  Actually, you can do that up front, as auth_token middleware will just 
  default to an online lookup. However, we are planning on moving to 
  ephemeral tokens (not saved in the database) and an online lookup won't be 
  possible with those.  The people that manage Keystone will be happy with 
  that, and forcing an online lookup will make them sad.
 
 An online lookup is one that calls the Keystone service to validate a 
 token? Which implies that by disabling online lookup there is enough info in 
 the token to validate it without any call to Keystone?
 

Yes, PKI tokens can be validated by the service without phoning back
home to Keystone. However, currently the service must still ask Keystone
for a list of revoked tokens periodically. In the near future that will
morph into a list of token revocation events, which should make the
backend simpler for Keystone to implement. I assume the middleware will
also do most of the heavy lifting there too.

 I understand how it's advantageous to offload token validation away from 
 Keystone itself (helps with scaling), but the current solution here seems 
 to be pushing a lot of pain to consumers and deployers of data APIs (eg 
 Marconi and Swift and others).
 

I tend to agree, though if the middleware implements the caching/hashing
that Adam describes, then it may only be a few changes to the way that
is configured.

  
  Hash is MD5 up through what is released in Icehouse.  The next version of 
  auth_token middleware will support a configurable algorithm.  The default 
  should be updated to sha256 in the near future.
 
 If a service (like Horizon) is hashing the token and using that as a session 
 key, then why does it matter what the auth_token middleware supports? Isn't 
 the hashing handled in the service itself? I'm thinking in the context of how 
 we would implement this idea in Swift (exploring possibilities, not 
 committing to a patch).
 

The impression I got is that Horizon is a special case, and that most
services would just use the keystone auth middleware directly.



Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Adam Young

On 05/21/2014 08:23 PM, John Dickinson wrote:

On May 21, 2014, at 4:26 PM, Adam Young ayo...@redhat.com wrote:


On 05/21/2014 03:36 PM, Kurt Griffiths wrote:

Good to know, thanks for clarifying. One thing I'm still fuzzy on, however, is 
why we want to deprecate use of UUID tokens in the first place? I'm just trying 
to understand the history here...

Because they are wasteful, and because they are the chattiest part of 
OpenStack.  I can go into it in nauseating detail if you really want, including 
the plans for future enhancements and the weaknesses of bearer tokens.


A token is nothing more than a snapshot of the data you get from Keystone 
distributed.  It is stored in Memcached and in the Horizon session uses the 
hash of it for a key.

You can do the same thing.  Once you know the token has been transferred once 
to a service, assuming that service has caching on, you can pass the hash of 
the key instead of the whole thing.

So this would mean that a Swift client would auth against Keystone to get the PKI token, 
send that to Swift, and then get back from Swift a short token that can be 
used for subsequent requests? It's an interesting idea to consider, but it is a new sort 
of protocol for clients to implement.
It would probably be more correct for Swift to calculate that, yes, but 
the client could also just calculate the hash and send it on subsequent 
requests.  As you pointed out, it is a matter of performance.







Actually, you can do that up front, as auth_token middleware will just default 
to an online lookup. However, we are planning on moving to ephemeral tokens 
(not saved in the database) and an online lookup won't be possible with those.  
The people that manage Keystone will be happy with that, and forcing an online 
lookup will make them sad.

An online lookup is one that calls the Keystone service to validate a token? 
Which implies that by disabling online lookup there is enough info in the token to 
validate it without any call to Keystone?

Yes.  The whole thing is a popen call to openssl to verify the messages.


I understand how it's advantageous to offload token validation away from Keystone itself 
(helps with scaling), but the current solution here seems to be pushing a lot 
of pain to consumers and deployers of data APIs (eg Marconi and Swift and others).
We try to encapsulate it all within auth_token middleware, but the 
helper functions are in python-keystoneclient if you need more specific 
handling.






Hash is MD5 up through what is released in Icehouse.  The next version of 
auth_token middleware will support a configurable algorithm.  The default 
should be updated to sha256 in the near future.

If a service (like Horizon) is hashing the token and using that as a session 
key, then why does it matter what the auth_token middleware supports? Isn't the 
hashing handled in the service itself? I'm thinking in the context of how we 
would implement this idea in Swift (exploring possibilities, not committing to 
a patch).
That is after it has received the token.  So, Horizon could send the 
hash to Nova, and Nova would then be required to make the call to 
Keystone, just like UUID tokens.  That would break on the ephemeral 
approach.


I'm exploring the Horizon side of the equation for some other reasons, 
primarily in the context of Kerberos support, but also for better 
revocation rules.  If the onus is on the client (in this case Horizon) 
to remember whether it has sent a particular token in full form, it might 
be a little hard to keep track.


What communication is most impacted by the large token size?  Is it 
fetching images for a web page, or something like that?












From: Morgan Fainberg morgan.fainb...@gmail.com
Reply-To: OpenStack Dev openstack-dev@lists.openstack.org
Date: Wednesday, May 21, 2014 at 1:23 PM
To: OpenStack Dev openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Concerns about the ballooning size of keystone 
tokens

This is part of what I was referencing in regards to lightening the data stored in the 
token. Ideally, we would like to see an ID only token that only contains the 
basic information to act. Some initial tests show these tokens should be able to clock in 
under 1k in size. However all the details are not fully defined yet. Coupled with this 
data reduction there will be explicit definitions of the data that is meant to go into 
the tokens. Some of the data we have now is a result of convenience of accessing the data.

I hope to have this token change available during Juno development cycle.

There is a lot of work to be done to ensure this type of change goes smoothly. 
But this is absolutely on the list of things we would like to address.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths kurt.griffi...@rackspace.com wrote:

adding another ~10kB to each request, just to save a once-a-day call to
Keystone (ie uuid tokens) seems to be a really high price to pay for not
much benefit.

I