Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-18 Thread joehuang
[Joe]: For reliability purposes, I suggest that the KeyStone client provide a 
fail-safe design: a primary KeyStone server and a secondary KeyStone server 
(or even a tertiary one). If the primary KeyStone server is out of service, 
the KeyStone client falls back to the secondary. Different KeyStone clients 
may be configured with different primary and secondary KeyStone servers.

[Adam]: Makes sense, but that can be handled outside of Keystone using HA and 
Heartbeat and a whole slew of technologies.  Each Keystone server can validate 
each other's tokens.
For cross-site KeyStone HA, the backend can leverage a MySQL Galera cluster 
for synchronous multisite database replication to provide high availability. 
The KeyStone front end (the API server), however, is a web service accessed 
through an endpoint address (a hostname, domain name, or IP address), i.e. an 
http:// URL.

AFAIK, HA for a web service in a multi-site scenario is usually done through a 
DNS-based geo load balancer. The shortcoming of this approach is that fault 
recovery (forwarding requests to a healthy web service) takes longer, 
depending on how the DNS system is configured. The other way is to put a load 
balancer such as LVS in front of the KeyStone web services. Then either the 
LVS sits in one site (so the KeyStone client is configured with a single 
IP-based endpoint, but the LVS itself lacks cross-site HA), or LVS instances 
run in multiple sites with their IPs registered in the DNS or name server (so 
the client is configured with a single name-based endpoint, which brings back 
the same issue just mentioned).

Therefore, I still think a KeyStone client with a fail-safe design (primary 
KeyStone server, secondary KeyStone server) would be a very high-gain, 
low-investment multisite high-availability solution. It is just like MySQL 
itself: there are external high-availability solutions (for example, 
Pacemaker+Corosync+DRBD), but there is also built-in clusterware like Galera.
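The fail-safe client behaviour proposed above can be sketched in a few lines. This is a hypothetical illustration of the idea, not actual python-keystoneclient code; the endpoint URLs and the `request_with_failover` helper are invented for the example.

```python
# Hypothetical sketch of the proposed fail-safe KeyStone client: try the
# primary endpoint first, then fall back to the secondary (and tertiary)
# when the connection fails. Not actual python-keystoneclient code.
import urllib.error
import urllib.request


def request_with_failover(path, endpoints, opener=urllib.request.urlopen):
    """Try each configured Keystone endpoint in order; return the first
    successful response."""
    last_error = None
    for base in endpoints:
        try:
            return opener(base + path, timeout=5)
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this endpoint is down: try the next one
    raise RuntimeError("all Keystone endpoints failed") from last_error
```

Different clients could list the endpoints in different orders, which would also spread validation load across sites, as the paragraph above suggests.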

Best Regards
Chaoyi Huang ( Joe Huang )


From: Adam Young [mailto:ayo...@redhat.com]
Sent: Tuesday, March 17, 2015 10:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge 
token size

On 03/17/2015 02:51 AM, joehuang wrote:
It's not realistic to deploy the KeyStone service (including its backend 
store) in each site if the number of sites is, for example, more than 10.  The 
reason is that the stored data, including revocation data, would need to be 
replicated to all sites synchronously. Otherwise, the API server might attempt 
to use a token before it can be validated in the target site.

Replicating revocation data across 10 sites will be tricky, but far better 
than replicating all of the token data.  Revocations should be relatively rare.


When Fernet tokens are used in a multisite scenario, each API request will ask 
KeyStone for token validation. The cloud will be out of service if KeyStone 
stops working, so the KeyStone service needs to run in several sites.

There will be multiple Keystone servers, so each should talk to their local 
instance.


For reliability purposes, I suggest that the KeyStone client provide a 
fail-safe design: a primary KeyStone server and a secondary KeyStone server 
(or even a tertiary one). If the primary KeyStone server is out of service, 
the KeyStone client falls back to the secondary. Different KeyStone clients 
may be configured with different primary and secondary KeyStone servers.

Makes sense, but that can be handled outside of Keystone using HA and Heartbeat 
and a whole slew of technologies.  Each Keystone server can validate each 
other's tokens.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-18 Thread Adam Young

On 03/18/2015 08:59 PM, joehuang wrote:


[Joe]: For reliability purpose, I suggest that the keystone client 
should provide a fail-safe design: primary KeyStone server, the second 
KeyStone server (or even the third KeyStone server). If the primary 
KeyStone server is out of service, then the KeyStone client will try 
the second KeyStone server. Different KeyStone client may be 
configured with different primary KeyStone server and the second 
KeyStone server.



[Adam]: Makes sense, but that can be handled outside of Keystone using 
HA and Heartbeat and a whole slew of technologies.  Each Keystone 
server can validate each other's tokens.


For cross-site KeyStone HA, the backend of HA can leverage MySQL 
Galera cluster for multisite database synchronous replication to 
provide high availability, but for the KeyStone front-end the API 
server, it’s web service and accessed through the endpoint address ( 
name, or domain name, or ip address ) , like http:// or ip address.


AFAIK, the HA for web service will usually be done through DNS based 
geo-load balancer in multi-site scenario. The shortcoming for this HA 
is that the fault recovery ( forward request to the health web 
service) will take longer time, it's up to the configuration in the 
DNS system. The other way is to put a load balancer like LVS ahead of 
KeyStone web services in multi-site. Then either the LVS is put in one 
site(so that KeyStone client only configured with one IP address based 
endpoint item, but LVS cross-site HA is lack), or in multisite site, 
and register the multi-LVS’s IP to the DNS or Name server(so that 
KeyStone client only configured with one Domain name or name based 
endpoint item, same issue just mentioned).


Therefore, I still think that keystone client with a fail-safe design( 
primary KeyStone server, the second KeyStone server ) will be a “very 
high gain but low invest” multisite high availability solution. Just 
like MySQL itself, we know there is some outbound high availability 
solution (for example, Pacemaker+Corosync+DRBD), but also there is 
 Galera like inbound cluster ware.




Write it up as a full spec, and we will discuss at the summit.


Best Regards

Chaoyi Huang ( Joe Huang )

*From:*Adam Young [mailto:ayo...@redhat.com]
*Sent:* Tuesday, March 17, 2015 10:00 PM
*To:* openstack-dev@lists.openstack.org
*Subject:* Re: [openstack-dev] [opnfv-tech-discuss] 
[Keystone][Multisite] Huge token size


On 03/17/2015 02:51 AM, joehuang wrote:

It’s not realistic to deploy the KeyStone service ( including backend
store ) in each site if the number, for example, is more than 10.
 The reason is that the stored data including data related to
revocation need to be replicated to all sites in synchronization
manner. Otherwise, the API server might attempt to use the token
before it's able to be validated in the target site.


Replicating revocation data across 10 sites will be tricky, but far 
better than replicating all of the token data. Revocations should be 
relatively rare.


When Fernet token is used in multisite scenario, each API request will 
ask for token validation from KeyStone. The cloud will be out of 
service if KeyStone stop working, therefore KeyStone service need to 
run in several sites.



There will be multiple Keystone servers, so each should talk to their 
local instance.


For reliability purpose, I suggest that the keystone client should 
provide a fail-safe design: primary KeyStone server, the second 
KeyStone server (or even the third KeyStone server). If the primary 
KeyStone server is out of service, then the KeyStone client will try 
the second KeyStone server. Different KeyStone client may be 
configured with different primary KeyStone server and the second 
KeyStone server.



Makes sense, but that can be handled outside of Keystone using HA and 
Heartbeat and a whole slew of technologies. Each Keystone server can 
validate each other's tokens.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-18 Thread Zhipeng Huang
BP is at
https://blueprints.launchpad.net/keystone/+spec/keystone-ha-multisite ,
spec will come later :)

On Thu, Mar 19, 2015 at 11:21 AM, Adam Young ayo...@redhat.com wrote:

  On 03/18/2015 08:59 PM, joehuang wrote:

  [Joe]: For reliability purpose, I suggest that the keystone client
 should provide a fail-safe design: primary KeyStone server, the second
 KeyStone server (or even the third KeyStone server). If the primary
 KeyStone server is out of service, then the KeyStone client will try the
 second KeyStone server. Different KeyStone client may be configured with
 different primary KeyStone server and the second KeyStone server.


 [Adam]: Makes sense, but that can be handled outside of Keystone using HA
 and Heartbeat and a whole slew of technologies.  Each Keystone server can
 validate each other's tokens.

 For cross-site KeyStone HA, the backend of HA can leverage MySQL Galera
 cluster for multisite database synchronous replication to provide high
 availability, but for the KeyStone front-end the API server, it’s web
 service and accessed through the endpoint address ( name, or domain name,
 or ip address ) , like http:// or ip address.



 AFAIK, the HA for web service will usually be done through DNS based
 geo-load balancer in multi-site scenario. The shortcoming for this HA is
 that the fault recovery ( forward request to the health web service) will
 take longer time, it's up to the configuration in the DNS system. The other
 way is to put a load balancer like LVS ahead of KeyStone web services in
 multi-site. Then either the LVS is put in one site(so that KeyStone client
 only configured with one IP address based endpoint item, but LVS cross-site
 HA is lack), or in multisite site, and register the multi-LVS’s IP to the
 DNS or Name server(so that KeyStone client only configured with one Domain
 name or name based endpoint item, same issue just mentioned).



 Therefore, I still think that keystone client with a fail-safe design(
 primary KeyStone server, the second KeyStone server ) will be a “very high
 gain but low invest” multisite high availability solution. Just like MySQL
 itself, we know there is some outbound high availability solution (for
 example, Pacemaker+Corosync+DRBD), but also there is Galera-like inbound
 cluster ware.


 Write it up as a full spec, and we will discuss at the summit.



 Best Regards

 Chaoyi Huang ( Joe Huang )





 *From:* Adam Young [mailto:ayo...@redhat.com]
 *Sent:* Tuesday, March 17, 2015 10:00 PM
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite]
 Huge token size



 On 03/17/2015 02:51 AM, joehuang wrote:

 It’s not realistic to deploy the KeyStone service ( including backend store ) in
 each site if the number, for example, is more than 10.  The reason is that
 the stored data including data related to revocation need to be replicated
 to all sites in synchronization manner. Otherwise, the API server might
 attempt to use the token before it's able to be validated in the target
 site.


 Replicating revocation data across 10 sites will be tricky, but far
 better than replicating all of the token data.  Revocations should be
 relatively rare.



 When Fernet token is used in multisite scenario, each API request will ask
 for token validation from KeyStone. The cloud will be out of service if
 KeyStone stop working, therefore KeyStone service need to run in several
 sites.


 There will be multiple Keystone servers, so each should talk to their
 local instance.



 For reliability purpose, I suggest that the keystone client should provide
 a fail-safe design: primary KeyStone server, the second KeyStone server (or
 even the third KeyStone server). If the primary KeyStone server is out of
 service, then the KeyStone client will try the second KeyStone server.
 Different KeyStone client may be configured with different primary KeyStone
 server and the second KeyStone server.


 Makes sense, but that can be handled outside of Keystone using HA and
 Heartbeat and a whole slew of technologies.  Each Keystone server can
 validate each other's tokens.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent / IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2

Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread joehuang
Hi, Adam,

Good to know Fernet token is on the way to reduce the token size and token 
persistence issues.

It's not realistic to deploy the KeyStone service (including its backend 
store) in each site if the number of sites is, for example, more than 10.  The 
reason is that the stored data, including revocation data, would need to be 
replicated to all sites synchronously. Otherwise, the API server might attempt 
to use a token before it can be validated in the target site.

When Fernet tokens are used in a multisite scenario, each API request will ask 
KeyStone for token validation. The cloud will be out of service if KeyStone 
stops working, so the KeyStone service needs to run in several sites.

For reliability purposes, I suggest that the KeyStone client provide a 
fail-safe design: a primary KeyStone server and a secondary KeyStone server 
(or even a tertiary one). If the primary KeyStone server is out of service, 
the KeyStone client falls back to the secondary. Different KeyStone clients 
may be configured with different primary and secondary KeyStone servers.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Adam Young [mailto:ayo...@redhat.com]
Sent: Monday, March 16, 2015 10:52 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge 
token size

On 03/16/2015 05:33 AM, joehuang wrote:
[Topic]: Huge token size

Hello,

As you may or may not be aware, a requirements project proposal, Multisite [1], 
was started in OPNFV in order to identify gaps in implementing OpenStack 
across multiple sites.

Although the proposal has not been approved yet, we've started running some 
experiments to try out different methods. One problem we identified in those 
experiments is the token size when we use a shared KeyStone for 101 regions 
(including ~500 endpoints). The token is huge (the token format is PKI); 
please see the details in the attachments:

token_catalog.txt, 162KB: catalog list included in the token
token_pki.txt, 536KB: non-compressed token size
token_pkiz.txt, 40KB: compressed token size

I understand that KeyStone has mechanisms like endpoint_filter to reduce the 
size of the token; however, this requires managing which of the many endpoints 
(it is hard to pin down the exact number) are visible to a project, and the 
resulting size is not easy to control precisely.

Do you have any insights into how to reduce the token size when PKI tokens are 
used? Is there any BP related to this issue, or should we file one to tackle 
it?


Right now there is an effort for non-multisite to get a handle on the problem.  
The Fernet token format will make it possible for a token to be ephemeral.  The 
scheme is this:

Encode the minimal amount of Data into the token possible.

Always validate the token on the Keystone server.

On the Keystone server, the token validation is performed by checking the 
message HMAC, and then expanding out the data.
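The scheme described above (pack a minimal payload, authenticate it, validate server-side by checking the MAC and then expanding the data) can be illustrated with standard-library primitives. This is only a sketch of the idea: the real Fernet format also encrypts the payload and handles key rotation, and the key, field names, and helpers below are invented for the example.

```python
# Illustrative sketch of the scheme described in this thread: encode a
# minimal payload, authenticate it with an HMAC, and validate on the
# server by recomputing the MAC and expanding the data. The real Fernet
# format differs (encryption, versioning, key rotation); this only shows
# why such tokens stay small and need no per-token persistence.
import base64
import hashlib
import hmac
import json

KEY = b"shared-across-trusted-keystone-sites"  # assumed symmetric key


def issue(user_id, project_id, expires_at):
    """Pack the minimal identity data and append a SHA-256 HMAC."""
    payload = json.dumps({"u": user_id, "p": project_id, "e": expires_at},
                         separators=(",", ":")).encode()
    mac = hmac.new(KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + mac).decode()


def validate(token):
    """Check the MAC, then expand the payload (roles, catalog, etc. would
    be looked up from the local database at this point)."""
    raw = base64.urlsafe_b64decode(token)
    payload, mac = raw[:-32], raw[-32:]  # SHA-256 MAC is 32 bytes
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("invalid token")
    return json.loads(payload)
```

Because nothing but the key has to be shared, any Keystone server holding `KEY` can validate tokens issued by any other, which is the "completely trusted and symmetric" multisite case described below.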

This concept is expandable to multi site in two ways.

For a completely trusted and symmetric multisite deployment, the keystone 
servers can share keys.  The Kite project 
(http://git.openstack.org/cgit/openstack/kite) was originally spun up to 
manage this sort of symmetric key sharing, and is a natural extension.

If two keystone servers need to sign for and validate separate sets of data 
(future work), the signing could return to asymmetric crypto.  This would lead 
to a minimal token size of about 800 bytes (I haven't tested exactly).  It 
would mean that any service responsible for validating tokens would need to 
fetch and cache the responses for things like the catalog and role 
assignments.

The ephemeral nature of the Fernet specification means that revocation data 
needs to be persisted separately from the token, so it is not 100% ephemeral, 
but the amount of stored data should be (I estimate) two orders of magnitude 
smaller, maybe three.  Password changes, project deactivations, and role 
revocations will still cause some traffic there.  These will need to be 
synchronized across token validation servers.

Great topic for discussion in Vancouver.







[1]https://wiki.opnfv.org/requirements_projects/multisite

Best Regards
Chaoyi Huang ( Joe Huang )






__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread David Chadwick
Encryption per se does not decrease token size; the best it can do is
keep the token the same size. So using Fernet tokens will not on
its own alter the token size. Reducing the size must come from putting
less information in the token. If the token recipient has to always go
back to Keystone to get the token validated, then all the token needs to
be is a large random number that Keystone can look up in its database to
retrieve the user's permissions. In this case no encryption is needed at
all.
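The opaque-token alternative David describes can be sketched in a few lines. The store and helper names below are invented for illustration; in Keystone this role is played by the token persistence backend, and the store would have to be replicated to every site.

```python
# Sketch of the opaque-token alternative: the token is just a large
# random number, and validation is a server-side lookup. No cryptography
# is needed, but every issued token must be persisted and replicated.
import secrets

_token_store = {}  # stands in for Keystone's token backend (hypothetical)


def issue_opaque(permissions):
    token = secrets.token_hex(16)      # 128-bit random identifier
    _token_store[token] = permissions  # must be replicated to every site
    return token


def validate_opaque(token):
    try:
        return _token_store[token]
    except KeyError:
        raise ValueError("unknown or revoked token") from None
```

The trade-off against the HMAC scheme is exactly the one debated in this thread: the token itself is tiny and carries nothing, but the lookup table grows with every token issued and must be kept in sync across sites.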

regards

David

On 17/03/2015 06:51, joehuang wrote:
 Hi, Adam,
 
  
 
 Good to know Fernet token is on the way to reduce the token size and
 token persistence issues.
 
  
 
 It’s not realistic to deploy the KeyStone service ( including backend store )
 in each site if the number, for example, is more than 10.  The reason is
 that the stored data including data related to revocation need to be
 replicated to all sites in synchronization manner. Otherwise, the API
 server might attempt to use the token before it's able to be validated
 in the target site.
 
  
 
 When Fernet token is used in multisite scenario, each API request will
 ask for token validation from KeyStone. The cloud will be out of service
 if KeyStone stop working, therefore KeyStone service need to run in
 several sites.
 
  
 
 For reliability purpose, I suggest that the keystone client should
 provide a fail-safe design: primary KeyStone server, the second KeyStone
 server (or even the third KeyStone server). If the primary KeyStone
 server is out of service, then the KeyStone client will try the second
 KeyStone server. Different KeyStone client may be configured with
 different primary KeyStone server and the second KeyStone server.
 
  
 
 Best Regards
 
 Chaoyi Huang ( Joe Huang )
 
  
 
 *From:*Adam Young [mailto:ayo...@redhat.com]
 *Sent:* Monday, March 16, 2015 10:52 PM
 *To:* openstack-dev@lists.openstack.org
 *Subject:* Re: [openstack-dev] [opnfv-tech-discuss]
 [Keystone][Multisite] Huge token size
 
  
 
 On 03/16/2015 05:33 AM, joehuang wrote:
 
 [Topic]: Huge token size
 
  
 
 Hello,
 
  
 
 As you may or may not be aware of, a requirement project proposal
 Multisite[1] was started in OPNFV in order to identify gaps in
 implementing OpenStack across multiple sites.
 
  
 
 Although the proposal has not been approved yet, we’ve started to
 run some experiments to try out different methods. One of the
 problem we identify in those experiments is that, when we want  to
 use a shared KeyStone for 101 Regions ( including ~500 endpoints ).
 The token size is huge (The token format is PKI), please see details
 in the attachments:
 
  
 
 token_catalog.txt, 162KB: catalog list included in the token
 
 token_pki.txt, 536KB: non-compressed token size
 
 token_pkiz.txt, 40KB: compressed token size
 
  
 
 I understand that KeyStone has a way like endpoint_filter to reduce
 the size of token, however this requires to manage many (hard to id
 the exact number) endpoints can be seen by a project, and the size
 is not easy to exactly controlled.
 
  
 
 Do you guys have any insights in how to reduce the token size if PKI
 token used? Is there any BP relates to this issue? Or should we fire
 one to tackle this?
 
 
 
 Right now there is an effort for non-multisite to get a handle on the
 problem.  The Fernet token format will make it possible for a token to
 be ephemeral.  The scheme is this:
 
 Encode the minimal amount of Data into the token possible.
 
 Always validate the token on the Keystone server.
 
 On the Keystone server, the token validation is performed by checking
 the message HMAC, and then expanding out the data.
 
 This concept is expandable to multi site in two ways.
 
 For a completely trusted and symmetric multisite deployment, the
 keystone servers can share keys.  The Kite project was
 http://git.openstack.org/cgit/openstack/kite originally spun up to
 manage this sort of symmetric key sharing, and is a natural extension.
 
 If two keystone servers need to sign for and validate separate sets of
 data (future work)  the form of signing could be returned to Asymmetric
 Crypto.  This would lead to a minimal token size of about 800 Bytes (I
 haven't tested exactly).  It would mean that any service responsible for
 validating tokens would need to fetch and cache the responses for things
 like catalog and role assignments. 
 
 The ephemeral nature of the Fernet specification means that revocation
 data needs to be persisted separately from the token, so it is not 100%
 ephemeral, but the amount of stored data should be (I estimate) two
 orders of magnitude smaller, maybe three.  Password changes, project
 deactivations,  and role revocations will still cause some traffic
 there.  These will need to be synchronized across token validation servers.
 
 Great topic for discussion in Vancouver.
 
 
 
 
 
 
  
 
 [1]https

Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread Dolph Mathews
On Tuesday, March 17, 2015, David Chadwick d.w.chadw...@kent.ac.uk wrote:

 Encryption per se does not decrease token size, the best it can do is
 keep the token size the same size.


Correct.


 So using Fernet tokens will not on
 its own alter the token size. Reducing the size must come from putting
 less information in the token.


Fernet tokens carry far less information than PKI tokens, and thus have a
smaller relative size.


 If the token recipient has to always go
 back to Keystone to get the token validated, then all the token needs to
 be is a large random number that Keystone can look up in its database to
 retrieve the user's permissions.


Correct, but then those large random numbers must be persisted and
distributed, as is the case with UUID tokens. However, Fernet tokens carry
just enough information to indicate which permissions apply, and keystone
can build a validation response from there, without persisting anything for
every token issued.
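Dolph's point about relative size can be made concrete with a rough comparison mirroring the numbers from this thread: a PKI-style token embedding a ~500-endpoint catalog versus a token carrying only identity references. The payloads below are invented purely to show the scale, not real token formats.

```python
# Rough size comparison using invented payloads that mirror the numbers
# in this thread: a PKI-style body embedding a ~500-endpoint catalog
# versus a Fernet-style body that carries only identity references.
import json
import zlib

catalog = [{"region": f"Region{i}", "service": "compute",
            "url": f"http://r{i}.example.com:8774/v2"}
           for i in range(500)]

pki_like = json.dumps({"user": "alice", "project": "demo",
                       "catalog": catalog}).encode()
fernet_like = json.dumps({"user_id": "alice", "project_id": "demo",
                          "expires": "2015-03-17T12:00:00Z"}).encode()

print(len(pki_like))                 # tens of kilobytes with the catalog
print(len(zlib.compress(pki_like)))  # compression helps but only so much
print(len(fernet_like))              # under a hundred bytes
```

This mirrors the attachment sizes quoted earlier in the thread (536 KB raw PKI, 40 KB compressed): compression shrinks the catalog-heavy token, but only dropping the catalog from the token body gets it down to Fernet scale.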


 In this case no encryption is needed at
 all.


Fernet tokens encrypt everything but the token's creation timestamp, but
that's just a perk that some deployers will find attractive, not a critical
design feature that we're utilizing today.



 regards

 David

 On 17/03/2015 06:51, joehuang wrote:
  Hi, Adam,
 
 
 
  Good to know Fernet token is on the way to reduce the token size and
  token persistence issues.
 
 
 
  It’s not realistic to deploy the KeyStone service ( including backend store )
  in each site if the number, for example, is more than 10.  The reason is
  that the stored data including data related to revocation need to be
  replicated to all sites in synchronization manner. Otherwise, the API
  server might attempt to use the token before it's able to be validated
  in the target site.
 
 
 
  When Fernet token is used in multisite scenario, each API request will
  ask for token validation from KeyStone. The cloud will be out of service
  if KeyStone stop working, therefore KeyStone service need to run in
  several sites.
 
 
 
  For reliability purpose, I suggest that the keystone client should
  provide a fail-safe design: primary KeyStone server, the second KeyStone
  server (or even the third KeyStone server). If the primary KeyStone
  server is out of service, then the KeyStone client will try the second
  KeyStone server. Different KeyStone client may be configured with
  different primary KeyStone server and the second KeyStone server.
 
 
 
  Best Regards
 
  Chaoyi Huang ( Joe Huang )
 
 
 
  *From:*Adam Young [mailto:ayo...@redhat.com]
  *Sent:* Monday, March 16, 2015 10:52 PM
  *To:* openstack-dev@lists.openstack.org
  *Subject:* Re: [openstack-dev] [opnfv-tech-discuss]
  [Keystone][Multisite] Huge token size
 
 
 
  On 03/16/2015 05:33 AM, joehuang wrote:
 
  [Topic]: Huge token size
 
 
 
  Hello,
 
 
 
  As you may or may not be aware of, a requirement project proposal
  Multisite[1] was started in OPNFV in order to identify gaps in
  implementing OpenStack across multiple sites.
 
 
 
  Although the proposal has not been approved yet, we’ve started to
  run some experiments to try out different methods. One of the
  problem we identify in those experiments is that, when we want  to
  use a shared KeyStone for 101 Regions ( including ~500 endpoints ).
  The token size is huge (The token format is PKI), please see details
  in the attachments:
 
 
 
  token_catalog.txt, 162KB: catalog list included in the token
 
  token_pki.txt, 536KB: non-compressed token size
 
  token_pkiz.txt, 40KB: compressed token size
 
 
 
  I understand that KeyStone has a way like endpoint_filter to reduce
  the size of token, however this requires to manage many (hard to id
  the exact number) endpoints can be seen by a project, and the size
  is not easy to exactly controlled.
 
 
 
  Do you guys have any insights in how to reduce the token size if PKI
  token used? Is there any BP relates to this issue? Or should we fire
  one to tackle this?
 
 
 
  Right now there is an effort for non-multisite to get a handle on the
  problem.  The Fernet token format will make it possible for a token to
  be ephemeral.  The scheme is this:
 
  Encode the minimal amount of Data into the token possible.
 
  Always validate the token on the Keystone server.
 
  On the Keystone server, the token validation is performed by checking
  the message HMAC, and then expanding out the data.
 
  This concept is expandable to multi site in two ways.
 
  For a completely trusted and symmetric multisite deployment, the
  Keystone servers can share keys.  The Kite project
  (http://git.openstack.org/cgit/openstack/kite) was originally spun up to
  manage this sort of symmetric key sharing, and is a natural extension.

  If two Keystone servers need to sign for and validate separate sets of
  data (future work), the form of signing could be returned to asymmetric

Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread Zhipeng Huang
Hi Adam,

The Fernet token and the Kite project look very interesting. I think it might
be helpful to work together to tackle the problem; shall we put this issue
on the Keystone design summit agenda for further follow-up discussion?

On Tue, Mar 17, 2015 at 3:30 PM, David Chadwick d.w.chadw...@kent.ac.uk
wrote:

 Encryption per se does not decrease token size; the best it can do is
 keep the token the same size. So using Fernet tokens will not on
 its own alter the token size. Reducing the size must come from putting
 less information in the token. If the token recipient has to always go
 back to Keystone to get the token validated, then all the token needs to
 be is a large random number that Keystone can look up in its database to
 retrieve the user's permissions. In this case no encryption is needed at
 all.
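
 David's alternative can be sketched in a few lines (a hypothetical
 illustration in Python, not Keystone code -- the store and function names
 are made up): the token is nothing but a large random number, and
 validation is simply a lookup on the Keystone side.

 ```python
 import secrets

 # Stand-in for Keystone's server-side token table.
 _TOKEN_STORE = {}

 def issue(user, roles):
     # A 128-bit random value: unguessable, and carrying no data itself.
     token = secrets.token_hex(16)
     _TOKEN_STORE[token] = {"user": user, "roles": roles}
     return token

 def validate(token):
     # No encryption needed: the recipient always goes back to Keystone,
     # which looks the user's permissions up in its own store.
     try:
         return _TOKEN_STORE[token]
     except KeyError:
         raise ValueError("unknown or revoked token")

 t = issue("alice", ["member"])
 print(len(t))  # 32 hex characters, regardless of catalog size
 ```

 The token size stays constant no matter how many regions or endpoints the
 catalog contains, which is exactly the property David is pointing at.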

 regards

 David

 On 17/03/2015 06:51, joehuang wrote:
  Hi, Adam,
 
 
 
  Good to know Fernet token is on the way to reduce the token size and
  token persistence issues.
 
 
 
  It's not realistic to deploy the Keystone service (including the backend
  store) in each site if the number of sites is, for example, more than 10.
  The reason is that the stored data, including revocation data, needs to
  be replicated to all sites synchronously. Otherwise, the API server might
  attempt to use a token before it can be validated in the target site.
 
 
 
  When Fernet tokens are used in a multisite scenario, each API request
  will ask Keystone for token validation. The cloud will be out of service
  if Keystone stops working; therefore, the Keystone service needs to run
  in several sites.
 
 
 
  For reliability purposes, I suggest that the Keystone client provide a
  fail-safe design: a primary Keystone server and a secondary (or even a
  tertiary) Keystone server. If the primary Keystone server is out of
  service, the Keystone client will try the secondary. Different Keystone
  clients may be configured with different primary and secondary Keystone
  servers.
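
  A sketch of what that client-side fail-over could look like (the function
  and endpoint URLs here are hypothetical stand-ins, not keystoneclient API):

  ```python
  def authenticate_with_failover(endpoints, authenticate):
      """Try each Keystone endpoint in order; return the first success."""
      last_error = None
      for url in endpoints:
          try:
              return authenticate(url)
          except ConnectionError as err:
              last_error = err  # this server is down; fall through to the next
      raise RuntimeError("no Keystone server reachable") from last_error

  # Demonstration with a fake authenticate callable standing in for a real
  # client session; the primary is simulated as unreachable.
  def fake_authenticate(url):
      if "primary" in url:
          raise ConnectionError("connection refused")
      return "token-from-" + url

  token = authenticate_with_failover(
      ["https://ks-primary.example.com:5000",
       "https://ks-secondary.example.com:5000"],
      fake_authenticate,
  )
  print(token)  # token-from-https://ks-secondary.example.com:5000
  ```

  The same effect can of course be achieved outside the client with a load
  balancer or DNS, which is the trade-off discussed later in the thread.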
 
 
 
  Best Regards
 
  Chaoyi Huang ( Joe Huang )
 
 
 
  *From:*Adam Young [mailto:ayo...@redhat.com]
  *Sent:* Monday, March 16, 2015 10:52 PM
  *To:* openstack-dev@lists.openstack.org
  *Subject:* Re: [openstack-dev] [opnfv-tech-discuss]
  [Keystone][Multisite] Huge token size
 
 
 
  On 03/16/2015 05:33 AM, joehuang wrote:
 
  [Topic]: Huge token size
 
 
 
  Hello,
 
 
 
  As you may or may not be aware, a requirements project proposal,
  Multisite [1], was started in OPNFV in order to identify gaps in
  implementing OpenStack across multiple sites.
 
 
 
  Although the proposal has not been approved yet, we've started to
  run some experiments to try out different methods. One of the
  problems we identified in those experiments is that, when we use a
  shared Keystone for 101 regions (including ~500 endpoints), the
  token size is huge (the token format is PKI); please see details
  in the attachments:
 
 
 
  token_catalog.txt, 162KB: catalog list included in the token
 
  token_pki.txt, 536KB: non-compressed token size
 
  token_pkiz.txt, 40KB: compressed token size
 
 
 
  I understand that Keystone has a mechanism like endpoint_filter to
  reduce the size of the token; however, this requires managing which
  of the many endpoints (the exact number is hard to determine) can be
  seen by a project, and the resulting size is not easy to control exactly.
 
 
 
  Do you have any insight into how to reduce the token size when PKI
  tokens are used? Is there any blueprint related to this issue, or
  should we file one to tackle it?
 
 
 
  Right now there is an effort, for the non-multisite case, to get a handle
  on the problem.  The Fernet token format will make it possible for a token
  to be ephemeral.  The scheme is this:
 
  Encode the minimal amount of data possible into the token.
 
  Always validate the token on the Keystone server.
 
  On the Keystone server, the token validation is performed by checking
  the message HMAC, and then expanding out the data.
 
  This concept is expandable to multi site in two ways.
 
  For a completely trusted and symmetric multisite deployment, the
  Keystone servers can share keys.  The Kite project
  (http://git.openstack.org/cgit/openstack/kite) was originally spun up to
  manage this sort of symmetric key sharing, and is a natural extension.

  If two Keystone servers need to sign for and validate separate sets of
  data (future work), the form of signing could be returned to asymmetric
  crypto.  This would lead to a minimal token size of about 800 bytes (I
  haven't tested exactly).  It would mean that any service responsible for
  validating tokens would need to fetch and cache the responses for things
  like catalog and role assignments.
 
  The ephemeral nature of the Fernet specification means that revocation
  data needs to be persisted separately from the token, so it is not 100%
  ephemeral

Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread Adam Young

On 03/17/2015 03:30 AM, David Chadwick wrote:

Encryption per se does not decrease token size; the best it can do is
keep the token the same size. So using Fernet tokens will not on
its own alter the token size.


Fernet is striking a balance.  It is encrypting a subset of the data, 
not the whole payload of the PKI tokens.  They are under 500 bytes, with 
a target of getting them under 255 bytes.  Only federation tokens should 
be larger than 255 bytes.



  Reducing the size must come from putting
less information in the token. If the token recipient has to always go
back to Keystone to get the token validated, then all the token needs to
be is a large random number that Keystone can look up in its database to
retrieve the user's permissions. In this case no encryption is needed at
all.
The Fernet goal is to remove that database.  Instead, the data 
associated with the token will be assembled at verification time from 
the small subset in the Fernet token body and the data stored in the 
Keystone server.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-17 Thread Adam Young

On 03/17/2015 02:51 AM, joehuang wrote:


It's not realistic to deploy the Keystone service (including the backend 
store) in each site if the number of sites is, for example, more than 10. 
The reason is that the stored data, including revocation data, needs to 
be replicated to all sites synchronously. Otherwise, the API server might 
attempt to use a token before it can be validated in the target site.




Replicating revocation data across 10 sites will be tricky, but far 
better than replicating all of the token data.  Revocations should be 
relatively rare.


When Fernet tokens are used in a multisite scenario, each API request 
will ask Keystone for token validation. The cloud will be out of service 
if Keystone stops working; therefore, the Keystone service needs to run 
in several sites.




There will be multiple Keystone servers, so each service should talk to 
its local instance.


For reliability purposes, I suggest that the Keystone client provide a 
fail-safe design: a primary Keystone server and a secondary (or even a 
tertiary) Keystone server. If the primary Keystone server is out of 
service, the Keystone client will try the secondary. Different Keystone 
clients may be configured with different primary and secondary Keystone 
servers.




Makes sense, but that can be handled outside of Keystone using HA, 
Heartbeat, and a whole slew of technologies.  Each Keystone server can 
validate the others' tokens.




Re: [openstack-dev] [opnfv-tech-discuss] [Keystone][Multisite] Huge token size

2015-03-16 Thread Adam Young

On 03/16/2015 05:33 AM, joehuang wrote:


[Topic]: Huge token size

Hello,

As you may or may not be aware, a requirements project proposal, 
Multisite [1], was started in OPNFV in order to identify gaps in 
implementing OpenStack across multiple sites.


Although the proposal has not been approved yet, we've started to run 
some experiments to try out different methods. One of the problems we 
identified in those experiments is that, when we use a shared Keystone 
for 101 regions (including ~500 endpoints), the token size is huge (the 
token format is PKI); please see details in the attachments:


token_catalog.txt, 162KB: catalog list included in the token

token_pki.txt, 536KB: non-compressed token size

token_pkiz.txt, 40KB: compressed token size

I understand that Keystone has a mechanism like endpoint_filter to 
reduce the size of the token; however, this requires managing which of 
the many endpoints (the exact number is hard to determine) can be seen 
by a project, and the resulting size is not easy to control exactly.


Do you have any insight into how to reduce the token size when PKI 
tokens are used? Is there any blueprint related to this issue, or should 
we file one to tackle it?





Right now there is an effort, for the non-multisite case, to get a handle 
on the problem.  The Fernet token format will make it possible for a token 
to be ephemeral.  The scheme is this:


Encode the minimal amount of data possible into the token.

Always validate the token on the Keystone server.

On the Keystone server, the token validation is performed by checking 
the message HMAC, and then expanding out the data.


This concept is expandable to multi site in two ways.

For a completely trusted and symmetric multisite deployment, the 
Keystone servers can share keys.  The Kite project 
(http://git.openstack.org/cgit/openstack/kite) was originally spun up to 
manage this sort of symmetric key sharing, and is a natural extension.


If two Keystone servers need to sign for and validate separate sets of 
data (future work), the form of signing could be returned to asymmetric 
crypto.  This would lead to a minimal token size of about 800 bytes (I 
haven't tested exactly).  It would mean that any service responsible for 
validating tokens would need to fetch and cache the responses for things 
like catalog and role assignments.


The ephemeral nature of the Fernet specification means that revocation 
data needs to be persisted separately from the token, so it is not 100% 
ephemeral, but the amount of stored data should be (I estimate) two 
orders of magnitude smaller, maybe three.  Password changes, project 
deactivations, and role revocations will still cause some traffic 
there.  These will need to be synchronized across token validation servers.


Great topic for discussion in Vancouver.






[1]https://wiki.opnfv.org/requirements_projects/multisite

Best Regards

Chaoyi Huang ( Joe Huang )


