Re: [openstack-dev] [all] OpenStack Bootstrapping Hour - Keystone - Friday Dec 5th 20:00 UTC (15:00 Americas/New_York)

2014-12-05 Thread Boris Bobrov
On Thursday 04 December 2014 19:08:06 Sean Dague wrote:
> Sorry for the late announce, too much turkey and pie
> 
> This Friday, Dec 5th, we'll be talking with Steve Martinelli and David
> Stanek about Keystone Authentication in OpenStack.

The wiki page says that the event will be on Friday, Dec 5th at 19:00 UTC
(15:00 Americas/New_York), while the subject of your mail says 20:00 UTC.
Could you please clarify which is correct?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone] Removing functionality that was deprecated in Kilo and upcoming deprecated functionality in Mitaka

2015-12-01 Thread Boris Bobrov
On Tuesday 01 December 2015 08:50:14 Lance Bragstad wrote:
> On Tue, Dec 1, 2015 at 6:05 AM, Sean Dague  wrote:
> > 
> > From an interop perspective, this concerns me a bit. My
> > understanding is that Apache is specifically needed for
> > Federation. Federation is the norm that we want for environments
> > in the future.
> 
> (On a side note from removing eventlet, but related to what Sean
> said)
> 
> A spec has been proposed to make keystone a fully fledged saml2
> provider [0]. Depending on how we feel about implementing and
> maintaining something like this, we'd be able to use federation
> within uWSGI (we would no longer *require* Apache for federation).
> Only bringing this up because it would also solve the
> two-reference-architectures problem. A uWSGI reference architecture
> could be used for deploying keystone, regardless if you want
> federation or not.
> 
> We probably wouldn't get a uWSGI reference architecture until after
> that is all fleshed out. This is assuming the spec is accepted and
> implemented in Mitaka.
> 
> [0] https://review.openstack.org/#/c/244694/5

I don't get why we talk about uwsgi in the context of federation. uwsgi is
an application server. Apache is a web server. We can still use uwsgi
with Apache; there are several modules for that:
https://uwsgi-docs.readthedocs.org/en/latest/Apache.html

Right now we require Apache for federation and support mod_wsgi (which is
tightly integrated with Apache) as the app server. We could keep requiring
Apache and support uwsgi as the app server, without any changes to
federation.
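
For illustration, a minimal Apache fragment that proxies keystone running
under uwsgi could look roughly like this (an untested sketch; the socket
address, port and module paths are made up, and it assumes mod_proxy_uwsgi
is installed -- federation modules such as mod_shib or mod_auth_mellon keep
running in Apache, in front of the proxy):

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_uwsgi_module modules/mod_proxy_uwsgi.so

    <VirtualHost *:5000>
        # everything is forwarded to the uwsgi app server
        ProxyPass / uwsgi://127.0.0.1:8051/
    </VirtualHost>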

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][keystone] is there chance the keystone cached the catalog and can not get the latest endpoints?

2016-06-29 Thread Boris Bobrov
On Wednesday 29 June 2016 15:53:15 Kairat Kushaev wrote:
> Hi,
> Looks like this bug is duplicate of
> https://bugs.launchpad.net/oslo.cache/+bug/1590779

Looks like it.

> HTH
> 
> Best regards,
> Kairat Kushaev
> 
> On Wed, Jun 29, 2016 at 3:32 PM, Jeffrey Zhang
> 
> wrote:
> > In Kolla CI, we see[1]
> > 
> > publicURL endpoint for compute service not found
> > 
> > this error for many times. The bug is here[0]
> > 
> > 
> > After some debugging, I found the endpoint does exist in the DB. But
> > when running `nova service-list` it says `publicURL endpoint for
> > compute service not found`. After a few seconds, when you run
> > `nova service-list` again, it works as expected.
> > 
> > I think the root cause is keystone. It seems that keystone caches the
> > catalog and returns the cached version without querying the DB. Can
> > anyone explain why this happens and how to avoid it (any workaround?)
> > 
> > The env:
> > OS/Docker image: no matter, this happen on both CentOS and Ubuntu
> > OpenStack: master branch
> > 
> > [0] https://bugs.launchpad.net/kolla/+bug/1587226
> > [1]
> > http://logs.openstack.org/91/328891/5/check/gate-kolla-dsvm-deploy-oraclelinux-source/3af433c/console.html#_2016-06-29_10_10_33_298298
> > 
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me
> > 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] LDAP identity driver with groups from local DB

2015-07-24 Thread Boris Bobrov
On Friday 24 July 2015 09:29:32 Dave Walker wrote:
> On 24 July 2015 at 05:00, Julian Edwards  wrote:
> Tl;DR is that the *User* management can come from LDAP via the
> Identity driver, but the Project/Tenants and Roles on these come from
> the *Assignment* driver via SQL - almost as an overlay.
> 
> This would seem to solve the issue you outline?

As far as I understand, the issue is that corporations want to have users in
read-only LDAP and the ability to create groups outside of LDAP, in SQL.

Am I right?

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] FF Exception request for Fernet tokens support.

2015-07-27 Thread Boris Bobrov
I agree. The memcache configuration produced by Fuel now has issues which
badly affect the overall OpenStack experience.

On Monday 27 July 2015 14:34:59 Vladimir Kuklin wrote:
> Folks
> 
> We saw several High issues with how keystone manages regular memcached
> tokens. I know this is not the perfect time as you already decided to push
> it from 7.0, but I would reconsider declaring it an FFE as it affects HA
> and UX badly. If we can enable the tokens simply by altering configuration,
> let's do it. I see the commit for this feature is pretty trivial.
> 
> On Fri, Jul 24, 2015 at 9:27 AM, Mike Scherbakov 
> 
> wrote:
> > Fuel Library team, I expect your immediate reply here.
> > 
> > I'd like upgrades team to take a look at this one, as well as at the one
> > which moves Keystone under Apache, in order to check that there are no
> > issues here.
> > 
> > -1 from me for this time in the cycle. I'm concerned about:
> >    1. I don't see any reference to blueprint or bug which explains (with
> >    measurements) why we need this change in reference architecture, and
> >    what are the thoughts about it in puppet-openstack, and OpenStack
> >    Keystone. We need to get datapoints, and point to them. Just knowing
> >    that Keystone team implemented support for it doesn't yet mean that we
> >    need to rush in enabling this.
> >    2. It is quite noticeable change, not a simple enhancement. I reviewed
> >    the patch, there are questions raised.
> >    3. It doesn't pass CI, and I don't have information on risks
> >    associated, and additional effort required to get this done (how long
> >    would it take to get it done)
> >    4. This feature increases complexity of reference architecture. Now
> >    I'd like every complexity increase to be optional. I have feedback
> >    from the field, that our prescriptive architecture just doesn't fit
> >    users' needs, and it is so painful to decouple then what is needed vs
> >    what is not. Let's start extending stuff with an easy switch, being
> >    propagated from Fuel Settings. Is it possible to do? How complex would
> >    it be?
> > 
> > If we get answers for all of this, and decide that we still want the
> > feature, then it would be great to have it. I just don't feel that it's
> > right timing anymore - we entered FF.
> > 
> > Thanks,
> > 
> > On Thu, Jul 23, 2015 at 11:53 AM Alexander Makarov 
> > 
> > wrote:
> >> Colleagues,
> >> 
> >> I would like to request an exception from the Feature Freeze for Fernet
> >> tokens support added to the fuel-library in the following CR:
> >> https://review.openstack.org/#/c/201029/
> >> 
> >> Keystone part of the feature is implemented in the upstream and the
> >> change impacts setup configuration only.
> >> 
> >> Please, respond if you have any questions or concerns related to this
> >> request.
> >> 
> >> Thanks in advance.
> >> 
> >> --
> >> Kind Regards,
> >> Alexander Makarov,
> >> Senior Software Developer,
> >> 
> >> Mirantis, Inc.
> >> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
> >> 
> >> Tel.: +7 (495) 640-49-04
> >> Tel.: +7 (926) 204-50-60
> >> 
> >> Skype: MAKAPOB.AJIEKCAHDP
> >> 
> >> _
> >> _
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > --
> > Mike Scherbakov
> > #mihgen
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-01 Thread Boris Bobrov
On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
> I suggest to use pacemaker multistate clone resource to rotate and
> rsync fernet tokens from local directories across cluster nodes. The
> resource prototype is described here:
> https://etherpad.openstack.org/p/fernet_tokens_pacemaker
> Pros: Pacemaker will care about CAP/split-brain stuff for us, we just
> design rotate and rsync logic. Also no shared FS/DB involved but only
> Corosync CIB - to store few internal resource state related params,
> not tokens. Cons: Keystone nodes hosting fernet tokens directories
> must be members of pacemaker cluster. Also custom OCF script should
> be created to implement this.
> 
> Regards,
> Bogdan Dobrelya.
> IRC: bogdando

Looks complex.

I suggest this kind of bash or python script, running on the Fuel master node:

0. Check that all controllers are online;
1. Go to one of the controllers and rotate keys there;
2. Fetch key 0 from there;
3. For each other controller, rotate keys there and replace its newly
generated 0-key with the fetched 0-key;
4. If any of the nodes fails to get the new keys (because it went offline or
for some other reason), revert the rotation (move the key with the biggest
index back to 0).

The script can be launched by cron or by a button in Fuel.

I don't see anything critically bad if one rotation/sync event fails.
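
A very rough illustration of those steps in python (untested; it assumes
passwordless ssh/scp from the Fuel master node to the controllers and that
keystone-manage is available there; hostnames and paths are made up):

    #!/usr/bin/env python
    import subprocess

    CONTROLLERS = ["node-1", "node-2", "node-3"]
    KEY_REPO = "/etc/keystone/fernet-keys"


    def ssh(node, command):
        return subprocess.check_output(["ssh", node, command])


    # 0. Check that all controllers are online.
    for node in CONTROLLERS:
        subprocess.check_call(["ping", "-c", "1", node])

    # 1. Rotate keys on one of the controllers.
    primary = CONTROLLERS[0]
    ssh(primary, "keystone-manage fernet_rotate")

    # 2. Fetch the freshly staged key 0 from it.
    with open("/tmp/staged-key-0", "wb") as f:
        f.write(ssh(primary, "cat %s/0" % KEY_REPO))

    # 3. Rotate on every other controller and overwrite its new 0-key
    #    with the fetched one.
    for node in CONTROLLERS[1:]:
        ssh(node, "keystone-manage fernet_rotate")
        subprocess.check_call(
            ["scp", "/tmp/staged-key-0", "%s:%s/0" % (node, KEY_REPO)])

    # 4. Reverting a failed rotation (moving the key with the biggest index
    #    back to 0) is left out of this sketch.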

> Matt Fischer also discusses key rotation here:
> 
>   http://www.mattfischer.com/blog/?p=648
> 
> And here:
> 
>   http://www.mattfischer.com/blog/?p=665
> 
> On Mon, Jul 27, 2015 at 2:30 PM, Dolph Mathews 
> wrote:
> …

-- 
Best regards,
Boris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-01 Thread Boris Bobrov
On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum  wrote:
> This too is overly complex and will cause failures. If you replace key 0,
> you will stop validating tokens that were encrypted with the old key 0.

No. Key 0 is replaced after rotation.

Also, come on, does http://paste.openstack.org/show/406674/ look overly 
complex? (it should be launched from Fuel master node).

> You simply need to run rotate on one, and then rsync that key repository
> to all of the others. You _must not_ run rotate again until you rsync to
> all of the others, since the key 0 from one rotation becomes the primary
> token encrypting key going forward, so you need it to get pushed out to
> all nodes as 0 first.

I agree. Which step in my logic misses that part?

On Saturday 01 August 2015 15:50:13 Matt Fischer wrote:
> Agree that you guys are way over thinking this. You don't need to rotate
> keys at exactly the same time, we do it within one or two hours
> typically based on how our regions are setup. We do it with puppet;
> puppet runs on one keystone node at a time and drops the keys into place.

There is a constraint: sometimes you cannot connect from one keystone
node to another. For example, in a cloud deployed by Fuel you cannot ssh
from one controller to another, AFAIK.

> The actual rotation and generation we handle with a script that then
> proposes the new key structure as a review which is then approved and
> deployed via the normal process. For this process I always drop keys 0, 1,
> 2 into place, I'm not bumping the numbers like the normal rotations do.

I dislike this solution because there is more than one point of configuration.
If your cloud administrator decides to use not 3 keys but 5, they will have to
change not only the option in keystone.conf but also your script. Yes,
keystone will still work, but there will be some inconsistency.

I also dislike it because keys should be generated by only a single tool. If
it turns out that the keys used for fernet tokens are too weak and the
developers decide to change the key length from 32 bytes to 64, that will have
to be fixed outside of that tool too, which is not good. Right now this tool
is keystone-manage.
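
That interface being, roughly (a sketch; exact subcommands and options may
vary between releases):

    # create the key repository once, then rotate it on a schedule
    keystone-manage fernet_setup
    keystone-manage fernet_rotate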

> We had also considered ansible which would be perfect for this, but that
> makes our ability to setup throw away environments with a single click a
> bit more complicated. If you don't have that requirement, a simple
> ansible script is what you should do.
> 
> On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum wrote:
> > Excerpts from Boris Bobrov's message of 2015-08-01 14:18:21 -0700:
> > > On Saturday 01 August 2015 16:27:17 bdobre...@mirantis.com wrote:
> > > > I suggest to use pacemaker multistate clone resource to rotate and
> > > > rsync fernet tokens from local directories across cluster nodes. The
> > > > resource prototype is described here:
> > > > https://etherpad.openstack.org/p/fernet_tokens_pacemaker
> > > > Pros: Pacemaker will care about CAP/split-brain stuff for us, we just
> > > > design rotate and rsync logic. Also no shared FS/DB involved but only
> > > > Corosync CIB - to store few internal resource state related params,
> > > > not tokens. Cons: Keystone nodes hosting fernet tokens directories
> > > > must be members of pacemaker cluster. Also custom OCF script should be
> > > > created to implement this.
> > > > 
> > > > Regards,
> > > > Bogdan Dobrelya.
> > > > IRC: bogdando
> > > 
> > > Looks complex.
> > > 
> > > I suggest this kind of bash or python script, running on Fuel master
> > > node:
> > > 
> > > 0. Check that all controllers are online;
> > > 1. Go to one of the controllers, rotate keys there;
> > > 2. Fetch key 0 from there;
> > > 3. For each other controller rotate keys there and put the 0-key
> > > instead of their new 0-key.
> > > 4. If any of the nodes fail to get new keys (because they went offline
> > > or for some other reason) revert the rotate (move the key with the
> > > biggest index back to 0).
> > > 
> > > The script can be launched by cron or by button in Fuel.
> > > 
> > > I don't see anything critically bad if one rotation/sync event fails.
> > 
> > This too is overly complex and will cause failures. If you replace key 0,
> > you will stop validating tokens that were encrypted with the old key 0.
> > 
> > You simply need to run rotate on one, and then rsync that key repository
> > to all of the others. You _must not_ run rotate again until you rsync to
> > all of the others, since the key 0 from one rotation becomes the primary
> > token encrypting key going forward, so you need it to get pushed out to
> > all nodes as 0 first.
> > 
> > Don't over think it. Just read http://lbragstad.com/?p=133 and it will
> > remain simple.

Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-03 Thread Boris Bobrov
On Monday 03 August 2015 21:05:00 David Stanek wrote:
> On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov wrote:
> > On Sat, Aug 1, 2015 at 3:41 PM, Clint Byrum wrote:
> > > This too is overly complex and will cause failures. If you replace key
> > > 0, you will stop validating tokens that were encrypted with the old
> > > key 0.
> > 
> > No. Key 0 is replaced after rotation.
> > 
> > Also, come on, does http://paste.openstack.org/show/406674/ look overly
> > complex? (it should be launched from Fuel master node).
> 
> I'm reading this on a small phone, so I may have it wrong, but the script
> appears to be broken.
> 
> It will ssh to node-1 and rotate. In the simplest case this takes key 0 and
> moves it to the next highest key number. Then a new key 0 is generated.
> 
> Later there is a loop that will again ssh into node-1 and run the rotation
> script. If there is a limit set on the number of keys and you are at that
> limit a key will be deleted. This extra rotation on node-1 means that it's
> possible that it has a different set of keys than are on node-2 and node-3.

You are absolutely right. Node-1 should be excluded from the loop.

ping also lacks "-c 1".

I am sure that other issues can be found.

In my defence I want to say that I never ran the script and wrote it just to
show how simple it should be. Thanks for the review though!

I also hope that no one is going to use a script from a mailing list.

> What's the issue with just a simple rsync of the directory?

None, I think. I just want to reuse the interface provided by
keystone-manage.

-- 
Best regards,
Boris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

2015-08-04 Thread Boris Bobrov
On Tuesday 04 August 2015 08:06:21 Lance Bragstad wrote:
> On Tue, Aug 4, 2015 at 1:37 AM, Boris Bobrov wrote:
> > On Monday 03 August 2015 21:05:00 David Stanek wrote:
> > > On Sat, Aug 1, 2015 at 8:03 PM, Boris Bobrov wrote:
> > > > Also, come on, does http://paste.openstack.org/show/406674/ look
> > > > overly complex? (it should be launched from Fuel master node).
> > > 
> > > I'm reading this on a small phone, so I may have it wrong, but the
> > > script appears to be broken.
> > > 
> > > It will ssh to node-1 and rotate. In the simplest case this takes key 0
> > > and moves it to the next highest key number. Then a new key 0 is
> > > generated.
> > > 
> > > Later there is a loop that will again ssh into node-1 and run the
> > > rotation script. If there is a limit set on the number of keys and you
> > > are at that limit a key will be deleted. This extra rotation on node-1
> > > means that it's possible that it has a different set of keys than are
> > > on node-2 and node-3.
> > 
> > You are absolutely right. Node-1 should be excluded from the loop.
> > 
> > ping also lacks "-c 1".
> > 
> > I am sure that other issues can be found.
> > 
> > In my defence I want to say that I never ran the script and wrote it
> > just to show how simple it should be. Thanks for the review though!
> > 
> > I also hope that no one is going to use a script from a mailing list.
> > 
> > > What's the issue with just a simple rsync of the directory?
> > 
> > None, I think. I just want to reuse the interface provided by
> > keystone-manage.
> 
> You wanted to use the interface from keystone-manage to handle the actual
> promotion of the staged key, right? This is why there were two
> fernet_rotate commands issued?

Right. Here is the fixed version (please don't use it anyway): 
http://paste.openstack.org/show/406862/

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Apache2 vs uWSGI vs ...

2015-09-18 Thread Boris Bobrov
There are two dimensions this discussion should happen in: the web server and
the application server. Right now we use apache2 as the web server and
mod_wsgi as the app server.

I don't have a specific opinion on the app server (mod_wsgi vs uwsgi) and I 
don't really care.

Regarding apache2 vs nginx: I don't see any reason for the switch. Apache2 is
well known to deployers and sysadmins. It is very rich in modules. I wonder
if there are customer-written modules.

On Friday 18 September 2015 16:54:02 Vladimir Kuklin wrote:
> Folks
> 
> I think we do not need to switch to nginx-only or consider any kind of war
> between nginx and apache adherents. Everyone should be able to use
> web-server he or she needs without being pinned to the unwanted one. It is
> like Postgres vs MySQL war. Why not support both?

Why nginx? Why not lighttpd? OpenLitespeed? Litespeed?

What do you understand by "support both"? I understand it as "both are tested 
in devstack". Apache2 is supported because you can set up devstack and 
everything works.

There are things in keystone that work under apache. They are not tested. They
were written to work under apache because it's the simplest and the most
standard way to do it. Making them work in nginx means forcing developers to
write some code. Are you ready to do that?

> May be someone does not need something that apache supports and nginx not
> and needs nginx features which apache does not support. Let's let our users
> decide what they want.
> 
> And the first step should be simple here - support for uwsgi.

Why uwsgi? Why not gunicorn? Cherrypy? Twisted?

> It will allow
> for usage of any web-server that can work with uwsgi. It will allow also us
> to check for the support of all apache-like bindings like SPNEGO or
> whatever and provide our users with enough info on making decisions. I did
> not personally test nginx modules for SAML and SPNEGO, but I am pretty
> confident about TLS/SSL parts of nginx.
> 
> Moreover, nginx will allow you to do things you cannot do with apache, e.g.
> do smart load balancing, which may be crucial for high-loaded installations.
> On Fri, Sep 18, 2015 at 4:12 PM, Adam Young  wrote:
> > On 09/17/2015 10:04 PM, Jim Rollenhagen wrote:
> > 
> > On Thu, Sep 17, 2015 at 06:48:50PM -0400, Davanum Srinivas wrote:
> > 
> > In the fuel project, we recently ran into a couple of issues with Apache2
> > +
> > mod_wsgi as we switched Keystone to run . Please see [1] and [2].
> > 
> > Looking deep into Apache2 issues specifically around "apache2ctl graceful"
> > and module loading/unloading and the hooks used by mod_wsgi [3]. I started
> > wondering if Apache2 + mod_wsgi is the "right" solution and if there was
> > something else better that people are already using.
> > 
> > One data point that keeps coming up is, all the CI jobs use Apache2 +
> > mod_wsgi so it must be the best solution. Is it? If not, what is?
> > 
> > Disclaimer: it's been a while since I've cared about performance with a
> > web server in front of a Python app.
> > 
> > IIRC, mod_wsgi was abandoned for a while, but I think it's being worked
> > on again. In general, I seem to remember it being thought of as a bit
> > old and crusty, but mostly working.
> > 
> > 
> > I am not aware of that.  It has been the workhorse of the Python/wsgi
> > world for a while, and we use it heavily.
> > 
> > 
> > At a previous job, we switched from Apache2 + mod_wsgi to nginx + uwsgi[0]
> > and saw a significant performance increase. This was a Django app. uwsgi
> > is fairly straightforward to operate and comes loaded with a myriad of
> > options[1] to help folks make the most of it. I've played with Ironic
> > behind uwsgi and it seemed to work fine, though I haven't done any sort
> > of load testing. I'd encourage folks to give it a shot. :)
> > 
> > 
> > Again, switching web servers is as likely to introduce as to solve
> > problems.  If there are performance issues:
> > 
> > 1.  Idenitfy what causes them
> > 2.  Change configuration settings to deal with them
> > 3.  Fix upstream bugs in the underlying system.
> > 
> > 
> > Keystone is not about performance.  Keystone is about security.  The cloud
> > is designed to scale horizontally first.  Before advocating switching to a
> > difference web server, make sure it supports the technologies required.
> > 
> > 
> > 1. TLS at the latest level
> > 2. Kerberos/GSSAPI/SPNEGO
> > 3. X509 Client cert validation
> > 4. SAML
> > 
> > OpenID connect would be a good one to add to the list;  Its been requested
> > for a while.
> > 
> > If Keystone is having performance issues, it is most likely at the
> > database layer, not the web server.
> > 
> > 
> > 
> > "Programmers waste enormous amounts of time thinking about, or worrying
> > about, the speed of noncritical parts of their programs, and these
> > attempts
> > at efficiency actually have a strong negative impact when debugging and
> > maintenance are considered. We *should* forget about small efficiencies,
> > say about 97% of 

Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-10 Thread Boris Bobrov
On Saturday 10 October 2015 08:42:10 Shinobu Kinjo wrote:
> So what's the procedure?

You go to #openstack-keystone on Friday, choose a bug and talk to one of the
core reviewers. After talking to them, fix the bug.

> Shinobu
> 
> - Original Message -
> From: "Adam Young" 
> To: openstack-dev@lists.openstack.org
> Sent: Saturday, October 10, 2015 12:11:35 PM
> Subject: Re: [openstack-dev] [keystone] Let's get together and fix all the
> bugs
> 
> On 10/09/2015 11:04 PM, Chen, Wei D wrote:
> 
> 
> 
> 
> 
> Great idea! core reviewer’s advice is definitely much important and valuable
> before proposing a fixing. I was always thinking it will help save us if we
> can get some agreement at some point.
> 
> 
> 
> 
> 
> Best Regards,
> 
> Dave Chen
> 
> 
> 
> 
> From: David Stanek [ mailto:dsta...@dstanek.com ]
> Sent: Saturday, October 10, 2015 3:54 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [keystone] Let's get together and fix all the bugs
> 
> 
> 
> 
> 
> I would like to start running a recurring bug squashing day. The general
> idea is to get more focus on bugs and stability. You can find the details
> here: https://etherpad.openstack.org/p/keystone-office-hours Can we start
> with Bug 968696?

-- 
Best regards,
Boris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Pagination support for Identity dashboard entities

2015-10-15 Thread Boris Bobrov
Hey,

As I see it, we decided to go with limiting and filtering of the
list. Bug report https://bugs.launchpad.net/keystone/+bug/1501698 was
opened for this issue. I've written a chain of patches to fix the bug,
please review: https://review.openstack.org/#/c/234849/

On Friday 14 August 2015 12:46:40 Timur Sufiev wrote:
> Hello, Keystone folks!
> 
> I've just discovered an unfortunate fact that Horizon pagination for
> Tenants/Projects list that worked with Keystone v2 doesn't work
> with Keysone v3 anymore - its API call simply lacks the 'marker'
> and 'limit' parameters [1] that Horizon is relying upon. Meanwhile
> having looked through [2] and [3] I'm a bit confused: while
> Keystone v3 API states it supports [2] pagination for every kind of
> entities (by using 'page' and 'per_page' parameters), the related
> blueprint [3] is not yet approved, meaning that most likely the API
> implementation did not make it into existing Keystone codebase. So
> I wonder whether there are some plans to implement pagination for
> Keystone API calls that list its entities?
> 
> P.S. I'm aware of SearchLight project that tries to help Horizon
> with other OpenStack APIs (part of its mission), what I'm trying to
> understand here is are we going to have some fallback pagination
> mechanism for Horizon's Identity in a short-term/mid-term
> perspective.
> 
> [1] http://developer.openstack.org/api-ref-identity-admin-v2.html
> [2] http://developer.openstack.org/api-ref-identity-v3.html
> [3] https://blueprints.launchpad.net/keystone/+spec/pagination

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]Why not common definition about normal HTTP status code like 2xx and 3xx?

2015-06-02 Thread Boris Bobrov
On Tuesday 02 June 2015 09:32:45 Chenhong Liu wrote:
> There is keystone/exception.py, which contains exceptions defined and used
> inside keystone that provide 4xx and 5xx status codes. And we can use them
> like:
> exception.Forbidden.code, exception.Forbidden.title
> exception.NotFound.code, exception.NotFound.title
> 
> This makes the code look pretty and avoids errors. But I can't find
> definitions for other status codes, like 200, 201, 204, 302, and so on. The
> code in keystone, especially the unit test cases, just writes these status
> codes and titles explicitly.
> 
> How about add those definitions?

These are standard HTTP codes:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

A description is given in the exceptions because one error code can be used
for several different errors. Success codes always have a single meaning.
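
(For what it's worth, the standard library already has named constants for
the success codes, so tests could refer to them without keystone defining
anything; this is just an illustration, not keystone code:)

    from six.moves import http_client

    # one code, one meaning -- no custom titles needed
    assert http_client.OK == 200
    assert http_client.CREATED == 201
    assert http_client.NO_CONTENT == 204
    assert http_client.FOUND == 302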

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-29 Thread Boris Bobrov
On Thursday 29 January 2015 22:06:25 Morgan Fainberg wrote:
> I’d like to propose we stop setting the expectation that a downwards
> migration is a “good idea” or even something we should really support.
> Offering upwards-only migrations would also simplify the migrations in
> general. This downward migration path is also somewhat broken by the
> migration collapses performed in a number of projects (to limit the number
> of migrations that need to be updated when we change a key component such
> as oslo.db or SQL-Alchemy Migrate to Alembic).
> 
> Are downward migrations really a good idea for us to support? Is this
> downward migration path a sane expectation? In the real world, would any
> one really trust the data after migrating downwards?

Frankly, I don't see a case where a downgrade from n to (n - 1) in development
cannot be replaced with a set of fixtures and an upgrade from 0 to (n - 1).

If we assume that an upgrade can possibly break something in production, we
should not rely on fixing it by downgrading the schema, because a) the code
depends on the latest schema and b) the "break" can be of different kinds and
unrecoverable.

IMO downward migrations should be disabled. We could make a survey though;
maybe someone has a story of using them in the field.

-- 
Best regards,
Boris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] SQL Schema Downgrades and Related Issues

2015-01-30 Thread Boris Bobrov
On Friday 30 January 2015 01:01:00 Boris Bobrov wrote:
> On Thursday 29 January 2015 22:06:25 Morgan Fainberg wrote:
> > I’d like to propose we stop setting the expectation that a downwards
> > migration is a “good idea” or even something we should really support.
> > Offering upwards-only migrations would also simplify the migrations in
> > general. This downward migration path is also somewhat broken by the
> > migration collapses performed in a number of projects (to limit the
> > number of migrations that need to be updated when we change a key
> > component such as oslo.db or SQL-Alchemy Migrate to Alembic).
> > 
> > Are downward migrations really a good idea for us to support? Is this
> > downward migration path a sane expectation? In the real world, would any
> > one really trust the data after migrating downwards?
> 
> Frankly, I don't see a case when a downgrade from n to (n - 1) in
> development cannot be replaced with a set of fixtures and upgrade from 0
> to (n - 1).
> 
> If we assume that upgrade can possible break something in production, we
> should not rely on fixing by downgrading the schema, because a) the code
> depends on the latest schema and b) "break" can be different and
> unrecoverable.
> 
> IMO downward migrations should be disabled. We could make a survey though,
> maybe someone has a story of using them in the fields.

I've made a little survey and there are people who used downgrades for 
debugging of different OpenStack releases.

So, I think I'm +1 on Mike Bayer's opinion.

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Testing DB migrations

2015-03-06 Thread Boris Bobrov
On Friday 06 March 2015 16:57:19 Nikolay Markov wrote:
> Hi everybody,
> 
> From time to time some bugs appear regarding failed database migrations
> during upgrade and we have High-priority bug for 6.1 (
> https://bugs.launchpad.net/fuel/+bug/1391553) on testing this migration
> process. I want to start a thread for discussing how we're going to do it.
> 
> I don't see any obvious solution, but we can at least start adding tests
> together with any changes in migrations, which will use a number of various
> fake environments upgrading and downgrading DB.
> 
> Any thoughts?

In Keystone, adding unit tests and running them in in-memory sqlite was proven
ineffective. The only solution we've come to is to run all db-related tests
against real RDBMSes.

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Freeze exception for "Correct initialization order for logging to use eventlet locks"

2015-03-11 Thread Boris Bobrov
Hello,

I would like to get a freeze exception for the patch "Correct initialization
order for logging to use eventlet locks" [0].

[0] https://review.openstack.org/#/c/163387/

The bug fixed by the changeset is known to affect Keystone deployed with
eventlet. I am aware of a customer who hit this bug in their cloud.

There is no known workaround for the bug.

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Boris Bobrov
On Thursday 12 March 2015 12:24:57 Duncan Thomas wrote:
> ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
> /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
> DeprecationWarning: The oslo namespace package is deprecated. Please use
> oslo_i18n instead.
>   from oslo import i18n
> /opt/stack/cinder/cinder/openstack/common/policy.py:98: DeprecationWarning:
> The oslo namespace package is deprecated. Please use oslo_config instead.
>   from oslo.config import cfg
> /opt/stack/cinder/cinder/openstack/common/policy.py:99: DeprecationWarning:
> The oslo namespace package is deprecated. Please use oslo_serialization
> instead.
>   from oslo.serialization import jsonutils
> /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The oslo
> namespace package is deprecated. Please use oslo_messaging instead.
>   from oslo import messaging
> /usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fileutils.py:22: DeprecationWarning: The oslo namespace package is
> deprecated. Please use oslo_utils instead.
>   from oslo.utils import excutils
> 
> 
> What are normal, non-developer users supposed to do with such warnings,
> other than:
> a) panic or b) Assume openstack is beta quality and then panic

Non-developer users are supposed to file a bug, leave installation and usage
to professional devops who know how to handle logs, and/or stop using a
non-stable branch.

-- 
Best regards,
Boris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Deprecation warnings considered harmful?

2015-03-12 Thread Boris Bobrov
On Thursday 12 March 2015 12:59:10 Duncan Thomas wrote:
> So, assuming that all of the oslo deprecations aren't going to be fixed
> before release

What makes you think that?

In my opinion it's just one component's problem. These particular deprecation
warnings are a result of the still ongoing migration from the oslo.<package>
namespace to oslo_<package>. Ironically, all components except oslo have
already moved to the new naming scheme.

I think these warnings are a single, not a systemic, problem.
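
For reference, the change the warnings ask for is only in the import path,
taken straight from the warning text above:

    # deprecated namespace package:
    from oslo.config import cfg

    # new-style package, no warning:
    from oslo_config import cfg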

> for something we know about at release time?

For bugs we know about at release time we have bug reports. Filing a bug is
pretty easy ;) https://bugs.launchpad.net/oslo.db/+bug/1431268
 
> On 12 March 2015 at 11:41, Boris Bobrov  wrote:
> > On Thursday 12 March 2015 12:24:57 Duncan Thomas wrote:
> > > ubuntu@devstack-multiattach:~/devstack$ cinder-manage db sync
> > > /usr/local/lib/python2.7/dist-packages/oslo_db/_i18n.py:19:
> > > DeprecationWarning: The oslo namespace package is deprecated. Please
> > > use oslo_i18n instead.
> > > 
> > >   from oslo import i18n
> > > 
> > > /opt/stack/cinder/cinder/openstack/common/policy.py:98: DeprecationWarning:
> > > The oslo namespace package is deprecated. Please use oslo_config
> > > instead.
> > > 
> > >   from oslo.config import cfg
> > > 
> > > /opt/stack/cinder/cinder/openstack/common/policy.py:99: DeprecationWarning:
> > > The oslo namespace package is deprecated. Please use oslo_serialization
> > > instead.
> > > 
> > >   from oslo.serialization import jsonutils
> > > 
> > > /opt/stack/cinder/cinder/objects/base.py:25: DeprecationWarning: The
> > > oslo namespace package is deprecated. Please use oslo_messaging
> > > instead.
> > > 
> > >   from oslo import messaging
> > 
> > > /usr/local/lib/python2.7/dist-packages/oslo_concurrency/openstack/common/fileutils.py:22: DeprecationWarning: The oslo namespace package is
> > > deprecated. Please use oslo_utils instead.
> > > 
> > >   from oslo.utils import excutils
> > > 
> > > What are normal, non-developer users supposed to do with such
> > > warnings, other than:
> > > a) panic or b) Assume openstack is beta quality and then panic
> > 
> > Non-developer users are supposed to file a bug, leave installation and
> > usage to professional devops who know how to handle logs, and/or stop
> > using a non-stable branch.
> > 
> > --
> > Best regards,
> > Boris
> > 
> > _
> > _ OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Best regards,
Boris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][fernet] Fernet tokens sync

2015-03-27 Thread Boris Bobrov
Hello,

As you know, keystone introduced non-persistent tokens in kilo -- Fernet
tokens. These tokens use Fernet keys, which are rotated from time to time. A
great description of key rotation and replication can be found at [0] and [1]
(thanks, lbragstad). In an HA setup there are multiple nodes with Keystone,
and that requires key replication. How do we do that with the new Fernet
tokens?

Please keep in mind that the solution should be HA -- there should not be any
"master" server pushing keys to slave servers, because the master server
might go down.

I can see some ways to do that.

1. Mount some distributed network file system at /etc/keystone/fernet-keys/
(the directory where the keys are) and leave synchronization and dealing with
race conditions to it. This solution will not require any changes to existing
code.

Are there any mature filesystems for that?

2. Use a queue of staged keys. It would mean that a new staging key is only
generated if there are no other staging keys in the queue. Example:

Suppose we have keystone set up on 2 servers.

I. In the beginning they have keys 0, 1, 2.

II. Rotation happens on keystone-1. 0 becomes 3, 1 is removed. Before
generating 0, check that there are no keys in the queue. There are no keys in
the queue, so generate a new key 0 and push it to keystone-2's queue.

III. Rotation happens on keystone-2. 0 becomes 3, 1 is removed. Before
generating 0, check that there are no keys in the queue. There is a key from
keystone-1, so use it as the new 0.

Thanks to Alexander Makarov for the idea.

How do we store this queue? Should we use some backend, rely on creation time 
or something else?

This way requires changes to keystone code (a rough sketch follows below).

3. Store the keys entirely in a backend and use well-known sync mechanisms.
This would require some changes to keystone code too.
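
To illustrate option 2, here is a rough local sketch of the staging-queue
check (the directory layout, names and the transport between nodes are made
up, and the cryptography library stands in for however the keys are actually
generated; this is not keystone code):

    import os
    import uuid

    from cryptography.fernet import Fernet


    def pop_staged_key(queue_dir):
        """Return and remove the oldest key staged by another node, if any."""
        names = sorted(os.listdir(queue_dir))
        if not names:
            return None
        path = os.path.join(queue_dir, names[0])
        with open(path, "rb") as f:
            key = f.read()
        os.unlink(path)
        return key


    def push_staged_key(queue_dir, key):
        """Stage a key for another node (how to ship it is the open question)."""
        with open(os.path.join(queue_dir, uuid.uuid4().hex), "wb") as f:
            f.write(key)


    def rotate(key_repo, own_queue, peer_queues):
        # Promote the current key 0 to the next highest index (removing the
        # oldest keys when over the limit is omitted here).
        indexes = sorted(int(name) for name in os.listdir(key_repo))
        os.rename(os.path.join(key_repo, "0"),
                  os.path.join(key_repo, str(indexes[-1] + 1)))
        # Reuse a key staged by a peer if there is one; otherwise generate a
        # new key 0 and push it to all peers' queues.
        key = pop_staged_key(own_queue)
        if key is None:
            key = Fernet.generate_key()
            for queue_dir in peer_queues:
                push_staged_key(queue_dir, key)
        with open(os.path.join(key_repo, "0"), "wb") as f:
            f.write(key)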

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][fernet] Fernet tokens sync

2015-03-27 Thread Boris Bobrov
On Friday 27 March 2015 17:14:28 Boris Bobrov wrote:
> Hello,
> 
> As you know, keystone introduced non-persistent tokens in kilo -- Fernet
> tokens. These tokens use Fernet keys, that are rotated from time to time. A
> great description of key rotation and replication can be found on [0] and
> [1] (thanks, lbragstad). In HA setup there are multiple nodes with
> Keystone and that requires key replication. How do we do that with new
> Fernet tokens?
> 
> Please keep in mind that the solution should be HA -- there should not be
> any "master" server, pushing keys to slave servers, because master server
> might go down.
>
> [...]

[0] and [1] in the mail are:

[0]: http://lbragstad.com/?p=133
[1]: http://lbragstad.com/?p=156

After some discussion in #openstack-keystone it seems that key rotation
should not be a frequent procedure and that the 15 minutes in the blog post
was just an example for the sake of simple math.


-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][keystone] FFE for "Speed up memcache lock"

2015-04-03 Thread Boris Bobrov
Hello,

I would like to get an FFE for the patch https://review.openstack.org/#/c/166496/ .
The patch removes the progressive calculation of sleep timeouts in memcache.
The progressive calculation led to issues under load, causing timeouts and
slowness incommensurate with the load.

The patch in question has already been accepted in the "master" branch.

-- 
Best regards,
Boris Bobrov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] SQLite support (migrations, work-arounds, and more), is it worth it?

2015-04-03 Thread Boris Bobrov
On Saturday 04 April 2015 03:55:59 Morgan Fainberg wrote:
> I am looking forward to the Liberty cycle and seeing the special casing we
> do for SQLite in our migrations (and elsewhere). My inclination is that we
> should (similar to the deprecation of eventlet) deprecate support for
> SQLite in Keystone. In Liberty we will have a full functional test suite
> that can (and will) be used to validate everything against much more real
> environments instead of in-process “eventlet-like” test-keystone-services;
> the “Restful test cases” will no longer be part of the standard unit tests
> (as they are functional testing). With this change I’m inclined to say
> SQLite (being the non-production usable DB) what it is we should look at
> dropping migration support for SQLite and the custom work-arounds.
> 
> Most deployers and developers (as far as I know) use devstack and MySQL or
> Postgres to really suss out DB interactions.
> 
> I am looking for feedback from the community on the general stance for
> SQLite, and more specifically the benefit (if any) of supporting it in
> Keystone.

+1. Drop it and clean up the tons of code used only to support sqlite.

Running tests with mysql is as easy as with sqlite ("mysqladmin drop -f;
mysqladmin create" for a "reset"), and using it by default will finally make
people test their code on real RDBMSes.

-- 
Best regards,
Boris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [keystone] [federated auth] [ocata] federated users with "admin" role not authorized for nova, cinder, neutron admin panels

2017-03-09 Thread Boris Bobrov
Hi,

Please paste your mapping to paste.openstack.org

On 03/09/2017 02:07 AM, Evan Bollig PhD wrote:
> I am on Ocata with Shibboleth auth enabled. I noticed that Federated
> users with the admin role no longer have authorization to use the
> Admin** panels in Horizon related to Nova, Cinder and Neutron. All
> regular Identity and Project tabs function, and there are no problems
> with authorization for local admin users.
> 
> -
> These Admin tabs work: Hypervisors, Host Aggregates, Flavors, Images,
> Defaults, Metadata, System Information
> 
> These result in logout: Instances, Volumes, Networks, Routers, Floating IPs
> 
> This is not present: Overview
> -
> 
> The policies are vanilla from the CentOS/RDO openstack-dashboard RPMs:
> openstack-dashboard-11.0.0-1.el7.noarch
> python-django-horizon-11.0.0-1.el7.noarch
> python2-keystonemiddleware-4.14.0-1.el7.noarch
> python2-keystoneclient-3.10.0-1.el7.noarch
> openstack-keystone-11.0.0-1.el7.noarch
> python2-keystoneauth1-2.18.0-1.el7.noarch
> python-keystone-11.0.0-1.el7.noarch
> 
> The errors I see in logs are similar to:
> 
> ==> /var/log/horizon/horizon.log <==
> 2017-03-07 18:24:54,961 13745 ERROR horizon.exceptions Unauthorized:
> Traceback (most recent call last):
>   File 
> "/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/floating_ips/views.py",
> line 53, in get_tenant_list
> tenants, has_more = api.keystone.tenant_list(request)
>   File "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
> line 351, in tenant_list
> manager = VERSIONS.get_project_manager(request, admin=admin)
>   File "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
> line 61, in get_project_manager
> manager = keystoneclient(*args, **kwargs).projects
>   File "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
> line 170, in keystoneclient
> raise exceptions.NotAuthorized
> NotAuthorized
> 
> Cheers,
> -E
> --
> Evan F. Bollig, PhD
> Scientific Computing Consultant, Application Developer | Scientific
> Computing Solutions (SCS)
> Minnesota Supercomputing Institute | msi.umn.edu
> University of Minnesota | umn.edu
> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-15 Thread Boris Bobrov
On 03/15/2017 10:06 PM, Jay Pipes wrote:
> +Boris B
> 
> On 03/15/2017 02:55 PM, Fox, Kevin M wrote:
>> I think they are. If they are not, things will break if federation is
>> used for sure. If you know that it is please let me know. I want to
>> deploy federation at some point but was waiting for dashboard support.
>> Now that the dashboard supports it, I may try it soon. Its a no-go
>> still though if heat doesn't work with it.
> 
> We had a customer engagement recently that had issues with Heat not
> being able to execute certain actions in a federated Keystone
> environment. I believe we learned that Keystone trusts and federation
> were not compatible during this engagement.
> 
> Boris, would you mind refreshing memories on this?

They are still broken when the user gets roles from group membership.
At the PTG session the decision was to document that this is fine and that
the user should get concrete role assignments before using heat via
federation. There are now 2 ways to do that.

1. The new auto-provisioning capabilities, which make role assignments
persistent [0]. Which is funny, because group membership is not
persistent.

2. Ask the project admin to assign the roles.

[0]https://docs.openstack.org/developer/keystone/federation/mapping_combinations.html#auto-provisioning
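
Roughly, the mapping described in [0] attaches projects and their roles
directly to the "local" part of a rule, along these lines (names and the
remote attribute are illustrative only, see the document above for the exact
format):

    {
        "rules": [
            {
                "local": [
                    {
                        "user": {"name": "{0}"},
                        "projects": [
                            {
                                "name": "Production",
                                "roles": [{"name": "Member"}]
                            }
                        ]
                    }
                ],
                "remote": [
                    {"type": "REMOTE_USER"}
                ]
            }
        ]
    }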

I don't like it though, and wanted to talk about it at the keystone
meeting. But we didn't make it in time, so it will be discussed next
Tuesday. I want this: https://review.openstack.org/#/c/437533/

> Best,
> -jay
> 
>> 
>> From: Jay Pipes [jaypi...@gmail.com]
>> Sent: Wednesday, March 15, 2017 11:41 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [tc][appcat] The future of the App Catalog
>>
>> On 03/15/2017 01:21 PM, Fox, Kevin M wrote:
>>> Other OpenStack subsystems (such as Heat) handle this with Trusts. A
>>> service account is made in a different, usually SQL backed Keystone
>>> Domain and a trust is created associating the service account with
>>> the User.
>>>
>>> This mostly works but does give the trusted account a lot of power,
>>> as the roles by default in OpenStack are pretty coarse grained. That
>>> should be solvable though.
>>
>> I didn't think Keystone trusts and Keystone federation were compatible
>> with each other, though? Did that change recently?
>>
>> Best,
>> -jay
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [keystone] [federated auth] [ocata] federated users with "admin" role not authorized for nova, cinder, neutron admin panels

2017-03-21 Thread Boris Bobrov
Hi,

Oh wow, for some reason my message was not sent to the list.

On 03/20/2017 09:03 PM, Evan Bollig PhD wrote:
> Hey Boris,
> 
> Any updates on this?
> 
> Cheers,
> -E
> --
> Evan F. Bollig, PhD
> Scientific Computing Consultant, Application Developer | Scientific
> Computing Solutions (SCS)
> Minnesota Supercomputing Institute | msi.umn.edu
> University of Minnesota | umn.edu
> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
> 
> 
> On Thu, Mar 9, 2017 at 4:08 PM, Evan Bollig PhD  wrote:
>> Hey Boris,
>>
>> Which mapping? Hope you were looking for the shibboleth user
>> mapping. Also, hope this is the right way to share the paste (first
>> time using this):
>> http://paste.openstack.org/show/3snCb31GRZfAuQxdRouy/

This is probably part of bug
https://bugs.launchpad.net/keystone/+bug/1589993 . I am not 100% sure
though. Could you please file a new bug report?

For now, you could try doing auto-provisioning using the new capabilities
from Ocata:
https://docs.openstack.org/developer/keystone/federation/mapping_combinations.html#auto-provisioning

>> Cheers,
>> -E
>> --
>> Evan F. Bollig, PhD
>> Scientific Computing Consultant, Application Developer | Scientific
>> Computing Solutions (SCS)
>> Minnesota Supercomputing Institute | msi.umn.edu
>> University of Minnesota | umn.edu
>> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
>>
>>
>> On Thu, Mar 9, 2017 at 7:50 AM, Boris Bobrov  wrote:
>>> Hi,
>>>
>>> Please paste your mapping to paste.openstack.org
>>>
>>> On 03/09/2017 02:07 AM, Evan Bollig PhD wrote:
>>>> I am on Ocata with Shibboleth auth enabled. I noticed that Federated
>>>> users with the admin role no longer have authorization to use the
>>>> Admin** panels in Horizon related to Nova, Cinder and Neutron. All
>>>> regular Identity and Project tabs function, and there are no problems
>>>> with authorization for local admin users.
>>>>
>>>> -
>>>> These Admin tabs work: Hypervisors, Host Aggregates, Flavors, Images,
>>>> Defaults, Metadata, System Information
>>>>
>>>> These result in logout: Instances, Volumes, Networks, Routers, Floating IPs
>>>>
>>>> This is not present: Overview
>>>> -
>>>>
>>>> The policies are vanilla from the CentOS/RDO openstack-dashboard RPMs:
>>>> openstack-dashboard-11.0.0-1.el7.noarch
>>>> python-django-horizon-11.0.0-1.el7.noarch
>>>> python2-keystonemiddleware-4.14.0-1.el7.noarch
>>>> python2-keystoneclient-3.10.0-1.el7.noarch
>>>> openstack-keystone-11.0.0-1.el7.noarch
>>>> python2-keystoneauth1-2.18.0-1.el7.noarch
>>>> python-keystone-11.0.0-1.el7.noarch
>>>>
>>>> The errors I see in logs are similar to:
>>>>
>>>> ==> /var/log/horizon/horizon.log <==
>>>> 2017-03-07 18:24:54,961 13745 ERROR horizon.exceptions Unauthorized:
>>>> Traceback (most recent call last):
>>>>   File 
>>>> "/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/floating_ips/views.py",
>>>> line 53, in get_tenant_list
>>>> tenants, has_more = api.keystone.tenant_list(request)
>>>>   File 
>>>> "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
>>>> line 351, in tenant_list
>>>> manager = VERSIONS.get_project_manager(request, admin=admin)
>>>>   File 
>>>> "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
>>>> line 61, in get_project_manager
>>>> manager = keystoneclient(*args, **kwargs).projects
>>>>   File 
>>>> "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
>>>> line 170, in keystoneclient
>>>> raise exceptions.NotAuthorized
>>>> NotAuthorized
>>>>
>>>> Cheers,
>>>> -E
>>>> --
>>>> Evan F. Bollig, PhD
>>>> Scientific Computing Consultant, Application Developer | Scientific
>>>> Computing Solutions (SCS)
>>>> Minnesota Supercomputing Institute | msi.umn.edu
>>>> University of Minnesota | umn.edu
>>>> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
>>>>
>>>> __
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [keystone] [federated auth] [ocata] federated users with "admin" role not authorized for nova, cinder, neutron admin panels

2017-03-21 Thread Boris Bobrov
Hi,

Oh wow, for some reason my message was not sent to the list.

On 03/20/2017 09:03 PM, Evan Bollig PhD wrote:
> Hey Boris,
> 
> Any updates on this?
> 
> Cheers,
> -E
> --
> Evan F. Bollig, PhD
> Scientific Computing Consultant, Application Developer | Scientific
> Computing Solutions (SCS)
> Minnesota Supercomputing Institute | msi.umn.edu
> University of Minnesota | umn.edu
> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
> 
> 
> On Thu, Mar 9, 2017 at 4:08 PM, Evan Bollig PhD  wrote:
>> Hey Boris,
>>
>> Which mapping? Hope you were looking for the shibboleth user
>> mapping. Also, hope this is the right way to share the paste (first
>> time using this):
>> http://paste.openstack.org/show/3snCb31GRZfAuQxdRouy/

This is probably part of bug
https://bugs.launchpad.net/keystone/+bug/1589993 . I am not 100% sure
though. Could you please file a new bug report?

For now, you could try auto-provisioning using the new mapping
capabilities in Ocata:
https://docs.openstack.org/developer/keystone/federation/mapping_combinations.html#auto-provisioning
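
For illustration, an auto-provisioning rule in the mapping looks roughly
like this (the project and role names below are placeholders, not
something keystone requires -- see the doc above for the exact semantics):

[
    {
        "local": [
            {
                "user": {"name": "{0}"},
                "projects": [
                    {
                        "name": "Federated Admins Project",
                        "roles": [{"name": "admin"}]
                    }
                ]
            }
        ],
        "remote": [
            {"type": "REMOTE_USER"}
        ]
    }
]

With a rule like that keystone creates the project if it does not exist
yet and grants the listed roles on it during federated login, so you do
not have to pre-create the assignments by hand.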

>> Cheers,
>> -E
>> --
>> Evan F. Bollig, PhD
>> Scientific Computing Consultant, Application Developer | Scientific
>> Computing Solutions (SCS)
>> Minnesota Supercomputing Institute | msi.umn.edu
>> University of Minnesota | umn.edu
>> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Does the policy.json for trusts works?

2017-09-15 Thread Boris Bobrov
Hi,

On 13.09.2017 18:54, Adrian Turjak wrote:
> Hello Keystone devs!
> 
> I've been playing with some policy changes and realised that the trust
> policy rules were mostly blank. Which, based on how the policy logic
> works means that any authed user can list trusts:
> https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json#L137-L142
> 
> But... in practice that doesn't appear to be the case, only admin can
> list/create/etc trusts. Which is good since it doesn't really make sense
> for any authed user to see all trusts (or does it?). What it does raise
> is, does the policy actually work for trusts, or is an admin requirement
> policy hardcoded somewhere for them?
> 
> Now I've played with the keystone policy: setting an admin-only policy
> blank, let's say project list, does let any authed user use that API.
> So from that we know that a blank policy has that logic. The 'default'
> rule only comes into play when a rule isn't present, such as me setting
> a policy to "rule:rule_that_doesnt_exist", which would invoke the default
> rule, so we know that isn't what's happening here either.
> 
> So... back to how I got here. The policy for trusts doesn't appear to
> work as written. They are blank (and they probably shouldn't be), and
> based on that policy, they should be visible to all authed users. Even
> if I do put an explicit rule in them, they don't seem to take effect.
> Can someone else confirm I'm not going mad? Or that potentially I'm
> missing the point entirely (which for my sanity is also welcome :P).

Trusts are not controlled by policy.json. Checks for them are
performed in code. So you're not mad ;)
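
To illustrate what "in code" means, here is a simplified sketch (not
keystone's actual source) of the kind of check the trust API performs no
matter what policy.json says:

# Illustration only -- not keystone's real controller code.
class Forbidden(Exception):
    pass

def check_trust_visible(context, trust):
    """A trust is only visible to its trustor or its trustee."""
    user_id = context['user_id']
    if user_id not in (trust['trustor_user_id'],
                       trust['trustee_user_id']):
        # The decision is made right here in Python, so a blank
        # policy.json rule never gets a chance to open the API up.
        raise Forbidden('not the trustor or trustee of this trust')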

> I even checked if it was maybe extension specific, but the consumer
> policy for the oauth extension does appear to work. If I blank it, any
> authed user can list consumers.
> 
> If I'm not mad, we should probably work out why this doesn't work, but
> before we fix it, we should also add a better default rule since we
> probably don't want all authed users seeing ALL trusts.

I agree. Could you please create a bug report?

> Cheers,
> Adrian
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] multiple federated keystones with single Identity Provider

2017-12-07 Thread Boris Bobrov
Hi,

> On 12/07/2017 12:27 PM, Colleen Murphy wrote:
>> On Thu, Dec 7, 2017 at 5:37 PM, Pavlo Shchelokovskyy
>>  wrote:
>>> Hi all,
>>>
>>> We have a following use case - several independent keystones (say KeyA and
>>> KeyB), using fernet tokens and synchronized fernet keys, and single external
>>> IdP for federated auth.
>>>
>>> Is it generally possible to configure both KeyA and KeyB such that scoped
>>> token issued by KeyA for a federated user is valid on KeyB?
>>>
>>> Currently we have the next problem - although domains/projects where
>>> keystone's mapping engine assigns federated users are equal by name between
>>> KeyA and KeyB, the UUIDs of projects/domains in KeyA and KeyB  are
>>> different, which seems to invalidate the scoped token issued by KeyA when
>>> trying to use it for KeyB. And it is not possible to create projects/domains
>>> with specific UUIDs via keystone API (which would probably solve this
>>> problem for non-autoprovisioned projects).
>>>
>>> Is such a usage scenario supported? Or should one always use the unscoped
>>> token first to list projects/domains available on a specific keystone
>>> instance and then get a scoped token for usage on this instance only?
>> No, it is not currently possible to use the same token on projects in
>> different keystones, for the reasons you gave. You might be interested
>> in following https://review.openstack.org/#/c/323499/ if you're not
>> already aware of it, which has the goal of solving that problem.
>>
>> It's also been brought up before:
>>
>> https://review.openstack.org/#/c/403866/
>> http://lists.openstack.org/pipermail/openstack-dev/2016-December/108466.html
>>
>> And we talked about it a lot at the last Forum (sorry my brief summary
>> does not really do the discussion justice):
>>
>> http://www.gazlene.net/sydney-summit.html#keystone-operator-and-user-feedback
> I had a snippet about it in my recap under the "Other Feedback" section
> [0]. The TL;DR in my opinion is that we originally thought we could
> solve the problem with federation 100%, and if we couldn't we wanted to
> try and improve the parts of federation that would make that possible.
> 
> The interesting bit we came up with during the feedback session in
> Sydney is what happens if a user no longer has a role on a project. For
> example;
> 
> - A user has a role on Project A in both the us-east region and the
> us-west region; each region has its own keystone deployment, but let's
> assume the ID for Project A is the same in each region
> - A user authenticates for a token scoped to Project A and starts
> creating instances in both regions
> - The user has their role from Project A removed in us-east, but not in
> us-west
> - The user isn't able to do anything within us-east since they no longer
> have a role assignment on Project A in that region, but they can still
> take the invalid token from the us-east region and effectively use it in
> the us-west region
> 
> Without replicating revocation events, or syncing the assignment table,
> this will lead to security concerns.

There is also a cache invalidation issue. And it would make tokens of
various scopes behave in different ways. A year ago I was -2 on this,
and I still don't think this is a good idea.

If there is a demand to control several clouds from a single place,
K2K (keystone-to-keystone federation) support should be added where it
is needed.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] keystone client service_catalog is None

2017-12-09 Thread Boris Bobrov
Hi,

Have a look at how openstackclient does this. Read this code:
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/identity/v3/catalog.py#L43
and then this code:
https://github.com/openstack/osc-lib/blob/master/osc_lib/clientmanager.py#L239
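
The short version, as far as I can tell: when the client is built from a
session, the client object never authenticates by itself, so
keystone.service_catalog is never populated -- the catalog lives on the
auth plugin. Something like this should work (untested sketch, the
service type is just an example):

from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url='http://127.0.0.1/identity/v3',
                   username='admin', password='password',
                   project_name='admin',
                   user_domain_id='default',
                   project_domain_id='default')
sess = session.Session(auth=auth)

# Let the session and its auth plugin resolve an endpoint from the catalog.
compute_url = sess.get_endpoint(service_type='compute', interface='public')

# Or take the whole catalog from the AccessInfo obtained at auth time.
access = sess.auth.get_access(sess)
catalog = access.service_catalog
print(catalog.url_for(service_type='compute', interface='public'))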


On 09.12.2017 04:15, Eric K wrote:
> Hi all,
> 
> I'm working on some code [1] that attempts to retrieve an endpoint from the
> service_catalog, but the service_catalog comes up None. Any suggestions on
> what I need to do differently to get a working service_catalog? Thanks
> very much!
> 
> Python 2.7.12 (default, Nov 20 2017, 18:23:56)
> [GCC 5.4.0 20160609] on linux2
> >>> from keystoneauth1 import session  # Version 2.12.1
> >>> from keystoneauth1.identity import v2
> >>> import keystoneclient.v3.client as ksclient
> >>> auth = v2.Password(auth_url='http://127.0.0.1/identity',
> ... username='admin', password='password', tenant_name='admin')
> >>> sess = session.Session(auth=auth)
> >>> keystone = ksclient.Client(session=sess)
> >>> print(keystone.service_catalog)
> None
> 
> 
> 
> [1] 
> https://review.openstack.org/#/c/526813/1/congress/datasources/monasca_driv
> er.py@94
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Stepping down from Keystone core

2018-01-15 Thread Boris Bobrov
Hey!

I don't work on Keystone as much as I used to anymore, so I'm
stepping down from the core reviewer team.

Don't expect to get rid of me though. I still work on OpenStack-related
stuff and I will annoy you all in #openstack-keystone and in other IRC
channels.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help still needed at FOSDEM!

2018-01-24 Thread Boris Bobrov
Hi,

What is expected from people at the booth?

On 24.01.2018 15:55, Rich Bowen wrote:
> We have a table at FOSDEM, and we desperately need people to sign up to
> staff it.
> 
> https://etherpad.openstack.org/p/fosdem-2018
> 
> If you have an hour free at FOSDEM, please join us. Ideally, we need 2
> people per slot, and at the moment we don't even have 1 for most of the
> slots.
> 
> Thanks!
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone]: Help needed with RBAC policies

2016-07-19 Thread Boris Bobrov

Hi,

Try passing --os-identity-api-version=3 to `openstack`. Or set env 
variable OS_IDENTITY_API_VERSION=3.


On 07/19/2016 09:56 PM, Nasim, Kam wrote:

Hi  folks,

I have been trying to modify the default RBAC policies in keystone/policy.json 
however my policy changes don't seem to be enforced.

As a quick test, I modified the identity:list_users policy to:

"identity:list_users": "role:kam",

There is no role called "kam" defined in my deployment so I would have expected 
this operation to fail.

However:

$ openstack --debug user list

+--++
| ID   | Name   |
+--++
| 3c1bd8c0f6324dcc938900d8eb801aa5 | admin  |
| 4b76763e375946998445b65b11c8db73 | ceilometer |
| 15c8e1e463cc4370ad369eaf8504b727 | cinder |
| 951068b3372f47ac827ade8f67cc19b4 | glance |
| 2b62ced877244e74ba90b546225740d0 | heat   |
| 438a24497bc8448d9ac63bf05a005796 | kam|
| 0b7af941da9b4896959f9258c6b498a0 | kam2   |
| d1c4f7a244f74892b612b9b2ded6d602 | neutron|
| 5c3ea23eb8e14070bc562951bb266073 | sysinv |
+--++

$ cat myrc
unset OS_SERVICE_TOKEN
export OS_AUTH_URL=http://192.168.204.2:5000/v2.0
export OS_ENDPOINT_TYPE=publicURL
export CINDER_ENDPOINT_TYPE=publicURL

export OS_USERNAME=admin
export OS_PASSWORD=admin
export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne


After getting the auth token, the client uses the adminURL endpoint to get the 
user list:
curl -g -i -X GET http://192.168.204.2:35357/v2.0/users -H "User-Agent: python-keystoneclient" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}75002edfff1eb6751b3425d9d247ac3212e750f9"


Is there something I am missing here? Some specific configuration to enable 
RBAC? Do admin URL ops bypass RBAC?


Thanks,
Kam




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone]: Help needed with RBAC policies

2016-07-19 Thread Boris Bobrov

Also, you might need to change OS_AUTH_URL to /v3/ or to unversioned.

Policy works only with v3 api. In v2 you are either admin or user, and 
there are no policies or roles.
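
For example, a v3 version of your myrc would look roughly like this (the
two domain variables assume everything lives in the default domain --
adjust them if it does not):

export OS_AUTH_URL=http://192.168.204.2:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default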


On 07/19/2016 10:08 PM, Boris Bobrov wrote:

Hi,

Try passing --os-identity-api-version=3 to `openstack`. Or set env
variable OS_IDENTITY_API_VERSION=3.

On 07/19/2016 09:56 PM, Nasim, Kam wrote:

Hi  folks,

I have been trying to modify the default RBAC policies in
keystone/policy.json however my policy changes don't seem to be enforced.

As a quick test, I modified the identity:list_users policy to:

"identity:list_users": "role:kam",

There is no role called "kam" defined in my deployment so I would have
expected this operation to fail.

However:

$ openstack --debug user list

+--++
| ID   | Name   |
+--++
| 3c1bd8c0f6324dcc938900d8eb801aa5 | admin  |
| 4b76763e375946998445b65b11c8db73 | ceilometer |
| 15c8e1e463cc4370ad369eaf8504b727 | cinder |
| 951068b3372f47ac827ade8f67cc19b4 | glance |
| 2b62ced877244e74ba90b546225740d0 | heat   |
| 438a24497bc8448d9ac63bf05a005796 | kam|
| 0b7af941da9b4896959f9258c6b498a0 | kam2   |
| d1c4f7a244f74892b612b9b2ded6d602 | neutron|
| 5c3ea23eb8e14070bc562951bb266073 | sysinv |
+--++

$ cat myrc
unset OS_SERVICE_TOKEN
export OS_AUTH_URL=http://192.168.204.2:5000/v2.0
export OS_ENDPOINT_TYPE=publicURL
export CINDER_ENDPOINT_TYPE=publicURL

export OS_USERNAME=admin
export OS_PASSWORD=admin
export PS1='[\u@\h \W(keystone_admin)]\$ '

export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne


After getting the auth token, the client uses the adminURL endpoint to
get the user list:
curl -g -i -X GET http://192.168.204.2:35357/v2.0/users -H
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H
"X-Auth-Token: {SHA1}75002edfff1eb6751b3425d9d247ac3212e750f9"


Is there something I am missing here? Some specific configuration to
enable RBAC? Do admin URL ops bypass RBAC


Thanks,
Kam







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSC] Tenant Resource Cleanup

2016-09-07 Thread Boris Bobrov

Hello,

I wonder if it would be worth integrating ospurge into openstackclient.

Are there any osc sessions planned at the summit?

On 09/07/2016 04:05 PM, John Davidge wrote:

Hello,

During the Mitaka cycle we merged a new feature into the
python-neutronclient called ’neutron purge’. This enables a simple CLI
command that deletes all of the neutron resources owned by a given
tenant. It’s documented in the networking guide[1].

We did this in response to feedback from operators that they needed a
better way to remove orphaned resources after a tenant had been deleted.
So far this feature has been well received, and we already have a couple
of enhancement requests. Given that we’re moving to OSC I’m hesitant to
continue iterating on this in the neutron client, and so I’m reaching
out to propose that we look into making this a part of OSC.

Earlier this week I was about to file a BP, when I noticed one covering
this subject was already filed last month[2]. I’ve spoken to Roman, who
says that they’ve been thinking about implementing this in nova, and
have come to the same conclusion that it would fit better in OSC.

I would propose that we work together to establish how this command will
behave in OSC, and build a framework that implements the cleanup of a
small set of core resources. This should be achievable during the Ocata
cycle. After that, we can reach out to the wider community to encourage
a cross-project effort to incrementally support more projects/resources
over time.

If you already have an etherpad for planning summit sessions then please
let me know, I’d love to get involved.

Thanks,

John

[1] http://docs.openstack.org/mitaka/networking-guide/ops-resource-purge.html
[2] 
https://blueprints.launchpad.net/python-openstackclient/+spec/tenant-data-scrub


Rackspace Limited is a company registered in England & Wales (company
registered number 03897010) whose registered office is at 5 Millington
Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy
policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This
e-mail message may contain confidential or privileged information
intended for the recipient. Any dissemination, distribution or copying
of the enclosed material is prohibited. If you receive this transmission
in error, please notify us immediately by e-mail at ab...@rackspace.com
and delete the original message. Your cooperation is appreciated.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Boris Bobrov
Hi,

On 12/05/2016 09:20 PM, Andrey Grebennikov wrote:
> Hi keystoners,
> I'd like to open the discussion about the little feature which I'm
> trying to push forward for a while but I need some
> feedbacks/opinions/concerns regarding this.
> Here is the review I'm talking
> about https://review.openstack.org/#/c/403866/
> 
> 
> What I'm trying to cover is multi-region deployment, which includes
> geo-distributed cloud with independent Keystone in every region.
> 
> There is a number of use cases for the change:
> 1. Allow users to re-use their tokens in all regions across the
> distributed cloud. With global authentication (LDAP backed) and same
> role names this is the only missing piece, which prevents the user from
> switching between regions even within a single Horizon session.
> 2. Automated tools responsible for statistics collection may access all
> regions using one token (real customer's usecase)

What do you understand by "region"?

> 3. Glance replication may happen because the images' parameter "owner"
> (which is a project) should be consistent across the regions.
> 
> What I hear all time - "you have to replicate your database" which from
> the devops/deployment/operations perspective is totally wrong approach.
> If it is possible to avoid Galera replication over geographically
> distributed regions - then API calls should be used. Moreover, in case
> of 2 DCs there will be an issue to decide which region has to take over
> when they are isolated from each other.

My comment in the review still stands. With the change we are getting
ourselves into a situation where some tokens *are* verifiable in 2
regions (project-scoped with identical project ids in both regions),
some *might be* verifiable in 2 regions (project-scoped with ids about
which we can't tell anything), and some *are not* verifiable, because
they are federation- or trust-scoped. A user (human or script) won't be
able to tell what functionality the token brings without complex
inspection.

The current design is that there is a single issuer of tokens and a
single consumer. With the patch there will be a single issuer and
multiple consumers, which is basically SSO, but done without an
explicit design decision.

Here is what i suggest:

1. Stop thinking about 2 keystone installations with a non-shared database
as "one single keystone". If there are 2 non-replicated databases,
there are 2 separate keystones. 2 separate keystones have completely
different sets of tokens. Do not try to share fernet keys between
separate keystones.

2. Instead of implementing poor man's federation, use real federation.
Create appropriate projects and create group-based assignments, one
for each keystone instance. Use these group-based assignments for
federated users.
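
Roughly like this, run against each keystone's own endpoint (the group
and project names are made up for the example):

openstack group create federated-admins
openstack project create shared-project
openstack role add --group federated-admins --project shared-project admin

and then have the mapping in each keystone put the federated users into
that group.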

> There is a long conversation in the comments of the review, mainly with
> concerns from cores (purely developer's opinions).
> 
> Please help me to bring it to life ;)
> 
> PS I'm so sorry, forgot to create a topic in the original message
> 
> -- 
> Andrey Grebennikov
> Principal Deployment Engineer
> Mirantis Inc, Austin TX
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Custom ProjectID upon creation

2016-12-05 Thread Boris Bobrov
On 12/06/2016 01:46 AM, Andrey Grebennikov wrote:
> Hi,
> 
> On 12/05/2016 09:20 PM, Andrey Grebennikov wrote:
> > Hi keystoners,
> > I'd like to open the discussion about the little feature which I'm
> > trying to push forward for a while but I need some
> > feedbacks/opinions/concerns regarding this.
> > Here is the review I'm talking
> > about https://review.openstack.org/#/c/403866/
> 
> >
> > What I'm trying to cover is multi-region deployment, which includes
> > geo-distributed cloud with independent Keystone in every region.
> >
> > There is a number of use cases for the change:
> > 1. Allow users to re-use their tokens in all regions across the
> > distributed cloud. With global authentication (LDAP backed) and same
> > roles names this is only one missing piece which prevents the user to
> > switch between regions even withing single Horizon session.
> > 2. Automated tools responsible for statistics collection may access all
> > regions using one token (real customer's usecase)
> 
> What do you understand by "region"?
> 
> I believe I explained what the "region" is in the beginning. In here it
> is actually a generic "keystone region" with all stuff, but Keystone is
> independent in it. Literally all Keystones in all "my" regions are aware
> about each other since they all have common catalog. 
> 
> 
> > 3. Glance replication may happen because the images' parameter "owner"
> > (which is a project) should be consistent across the regions.
> >
> > What I hear all time - "you have to replicate your database" which from
> > the devops/deployment/operations perspective is totally wrong approach.
> > If it is possible to avoid Galera replication over geographically
> > distributed regions - then API calls should be used. Moreover, in case
> > of 2 DCs there will be an issue to decide which region has to take over
> > when they are isolated from each other.
> 
> My comment in the review still stands. With the change we are getting
> ourselves into situation when some tokens *are* verifiable in 2
> regions (project-scoped with identical project ids in both regions),
> some *might be* verifiable in 2 regions (project-scoped with ids about
> which we can't tell anything), and some *are not* verifiable, because
> they are federation- or trust-scoped. A user (human or script) won't be
> able to tell what functionality the token brings without complex
> inspection.
> 
> I commented it in the IRC and repeat over here - it is Always the
> responsibility of the administrator to realize how things work and
> implement it in the way he/she wants it. You don't need it - you don't
> set it. The IDs will be still generated.

It is not a general use case that an administrator creates projects.
There are policies that define who can do that.

> 
> Current design is there is single issuer of tokens and single
> consumer. With the patch there will be single issuer and multiple
> consumers. Which is basically SSO, but done without explicit
> design decision.
> 
> Not true. SSO assumes Central point of authorization. Here it is highly
> distributed.
>  
> 
> Here is what i suggest:
> 
> 1. Stop thinking about 2 keystone installations with non-shared database
> as about "one single keystone". If there are 2 non-replicated databases,
> there are 2 separate keystones. 2 separate keystones have completely
> different sets of tokens. Do not try to share fernet keys between
> separate keystones.
> 
> Even if you replicate the DB it is not going to work with no key
> replication. I repeat my statement once again - if the admin doesn't
> need it - leave the --id field blank, that's it. Nothing is broken.
>  
> 
> 2. Instead of implementing poor man's federation, use real federation.
> Create appropriate projects and create group-based assignments, one
> for each keystone instance. Use these group-based assignments for
> federated users.
> 
> Does federation currently allow me to use remote groups?

What do you mean by remote groups? Group name can be specified in
SAML assertion and then used, yes.
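
For example, a mapping rule can take group names straight from the
assertion, roughly like this (the remote attribute name is just an
example -- it depends on what your IdP sends):

{
    "local": [
        {
            "groups": "{0}",
            "domain": {"name": "Default"}
        }
    ],
    "remote": [
        {"type": "MEMBER_OF"}
    ]
}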

> Does it allow me to replicate my projects?

No. Create projects in each keystone:

for ip in $list_of_ips; do
openstack project create --os-url=$ip ...;
done

> Does it allow me to work when there is no connectivity between keystones?

Yes

> Does it allow me to know whether user
> exists in the federated provider before I create shadow user?

I don't understand the question.

Shadow users don't need to be created. Shadow users are an internal
keystone entity that the operator doesn't deal with at all.

> Does it
> delete assignments/shadow user records when the user is deleted from the
> remote 

Re: [openstack-dev] [keystone] Feedback for upcoming user survey questionnaire

2017-01-03 Thread Boris Bobrov
"What were you trying to accomplish with keystone but failed"
"What functionality in keystone did you try to use but it wasn't good
enough"
"In your opinion, what in keystone requires most attention"
with choices "federation", "performance", "policy", "backend support"
and some other options.

On 12/30/2016 11:54 PM, Steve Martinelli wrote:
> We have the opportunity to ask one question on the
> upcoming user survey and we get to decide the audience to which we serve
> the question.
> 
> 
> Our audience options are: USING, TESTING, or INTERESTED in Keystone (I
> think we should aim for USING or TESTING)
> 
> 
> The question can take one of several forms; multiple choice (select one
> or more), or short answer.
> 
> 
> Post your suggestions here or email them to me privately ASAP so I can
> respond to the team assembling the survey in sufficient time.
> 
> 
> 
> stevemar
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] IRC Mishaps

2017-02-09 Thread Boris Bobrov
Hi,

This:
http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-01-24-18.01.log.html#l-304

On 02/08/2017 11:36 PM, Kendall Nelson wrote:
> Hello All!
> 
> So I am sure we've all seen it: people writing terminal commands into our
> project channels, misusing '/' commands, etc. But have any of you actually
> done it?
> 
> If any of you cores, ptls or other upstanding members of our wonderful
> community have had one of these embarrassing experiences please reply! I am
> writing an article for the SuperUser trying to make us all seem a little
> more human to people new to the community and new to using IRC. It can be
> scary asking questions to such a large group of smart people and its even
> more off putting when we make mistakes in front of them.
> 
> So please share your stories!
> 
> -Kendall Nelson (diablo_rojo)
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical quotas at the PTG?

2017-02-12 Thread Boris Bobrov
I would like to talk about it too.

On 02/10/2017 11:56 PM, Matt Riedemann wrote:
> Operators want hierarchical quotas [1]. Nova doesn't have them yet and
> we've been hesitant to invest scarce developer resources in them since
> we've heard that the implementation for hierarchical quotas in Cinder
> has some issues. But it's unclear to some (at least me) what those
> issues are.

I don't know what the actual issue is, but from the keystone POV
the issue is that it basically replicates the project tree that is stored
in keystone. On top of the usual replication issues, there is another
one -- it requires too many permissions. Basically, it requires the
service user to be a cloud admin.

> Has anyone already planned on talking about hierarchical quotas at the
> PTG, like the architecture work group?
> 
> I know there was a bunch of razzle dazzle before the Austin summit about
> quotas, but I have no idea what any of that led to. Is there still a
> group working on that and can provide some guidance here?

In my opinion, projects should not re-implement quotas every time.
I would like to have a common library for enforcing quotas (usages)
and a service for storing quotas (limits). We should also think of a
way to transfer the necessary project subtree from keystone to the
quota enforcer.

We could store quota limits in keystone and distribute them in the token
body, for example. Here is a POC that we did some time ago --
https://review.openstack.org/#/c/403588/ and
https://review.openstack.org/#/c/391072/
But it still has the issue with permissions.

> [1]
> http://lists.openstack.org/pipermail/openstack-operators/2017-January/012450.html
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-21 Thread Boris Bobrov

Hello,


in addition to this, please, PLEASE stop creating 'all project bugs'. i
don't want to get emails on updates to projects unrelated to the ones i
care about. also, it makes updating the bug impossible because it times
out. i'm too lazy to search ML but this has been raise before, please stop.

let's all unite together and block these patches to bring an end to it. :)


People who contribute to OpenStack long enough already know this.
Usually new contributors do it. And we cannot reach out to them
in this mailing list. There should be a way to limit this somewhere
in Launchpad.


On 21/09/16 07:56 AM, Amrith Kumar wrote:

Of late I've been seeing a lot of rather questionable changes that
appear to be getting blasted out across multiple projects; changes that
cause considerable code churn, and don't (IMHO) materially improve the
quality of OpenStack.

I’d love to provide a list of the changes that triggered this email but
I know that this will result in a rat hole where we end up discussing
the merits of the individual items on the list and lose sight of the
bigger picture. That won’t help address the question I have below in any
way, so I’m at a disadvantage of having to describe my issue in abstract
terms.



Here’s how I characterize these changes (changes that meet one or more
of these criteria):



-Contains little of no information in the commit message (often just
a single line)

-Makes some generic statement like “Do X not Y”, “Don’t use Z”,
“Make ABC better” with no further supporting information

-Fail (literally) every single CI job, clearly never tested by the
developer

-Gets blasted across many projects, literally tens with often the
same kind of questionable (often wrong) change

-Makes a stylistic python improvement that is not enforced by any
check (causes a cottage industry of changes making the same correction
every couple of months)

-Reverses some previous python stylistic improvement with no clear
reason (another cottage industry)



I’ve tried to explain it to myself as enthusiasm, and a desire to
contribute aggressively; I’ve lapsed into cynicism at times and tried to
explain it as gaming the numbers system, but all that is merely
rationalization and doesn’t help.



Over time, the result generally is that these developers’ changes get
ignored. And that’s not a good thing for the community as a whole. We
want to be a welcoming community and one which values all contributions
so I’m looking for some suggestions and guidance on how one can work
with contributors to try and improve the quality of these changes, and
help the contributor feel that their changes are valued by the project?
Other more experienced PTL’s, ex-PTL’s, long time open-source-community
folks, I’m seriously looking for suggestions and ideas.



Any and all input is welcome, do other projects see this, how do you
handle it, is this normal, …



Thanks!



-amrith



cheers,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-22 Thread Boris Bobrov

I agree.

I am not saying new contributors are not welcome. They are. But there
are also things that we are not comfortable with, and our leadership
cannot prevent newcomers from making such errors. There should be a way
to steer them towards mentors before they do things that we consider bad.

On 09/22/2016 09:26 AM, Steven Dake (stdake) wrote:

Folks,

We want to be inviting to new contributors even if they are green.  New 
contributors reflect on OpenStack’s growth in a positive way.  The fact that a 
new-to-openstack contributor would make such and error doesn’t warrant such a 
negative response even if it a hassle for the various PTLs and core reviewer 
teams to deal with.  This is one of the many aspects of OpenStack projects a 
PTL is elected to manage (mentorship).  If mentorship isn’t in a leader’s 
personal mission, I’m not sure they should be leading anything.

Regards
-steve


On 9/21/16, 7:35 AM, "Boris Bobrov"  wrote:

Hello,

> in addition to this, please, PLEASE stop creating 'all project bugs'. i
> don't want to get emails on updates to projects unrelated to the ones i
> care about. also, it makes updating the bug impossible because it times
> out. i'm too lazy to search ML but this has been raise before, please 
stop.
>
> let's all unite together and block these patches to bring an end to it. :)

People who contribute to OpenStack long enough already know this.
Usually new contributors do it. And we cannot reach out to them
in this mailing list. There should be a way to limit this somewhere
in Launchpad.

> On 21/09/16 07:56 AM, Amrith Kumar wrote:
>> Of late I've been seeing a lot of rather questionable changes that
>> appear to be getting blasted out across multiple projects; changes that
>> cause considerable code churn, and don't (IMHO) materially improve the
>> quality of OpenStack.
>>
>> I’d love to provide a list of the changes that triggered this email but
>> I know that this will result in a rat hole where we end up discussing
>> the merits of the individual items on the list and lose sight of the
>> bigger picture. That won’t help address the question I have below in any
>> way, so I’m at a disadvantage of having to describe my issue in abstract
>> terms.
>>
>>
>>
>> Here’s how I characterize these changes (changes that meet one or more
>> of these criteria):
>>
>>
>>
>> -Contains little of no information in the commit message (often just
>> a single line)
>>
>> -Makes some generic statement like “Do X not Y”, “Don’t use Z”,
>> “Make ABC better” with no further supporting information
>>
>> -Fail (literally) every single CI job, clearly never tested by the
>> developer
>>
>> -Gets blasted across many projects, literally tens with often the
>> same kind of questionable (often wrong) change
>>
>> -Makes a stylistic python improvement that is not enforced by any
>> check (causes a cottage industry of changes making the same correction
>> every couple of months)
>>
>> -Reverses some previous python stylistic improvement with no clear
>> reason (another cottage industry)
>>
>>
>>
>> I’ve tried to explain it to myself as enthusiasm, and a desire to
>> contribute aggressively; I’ve lapsed into cynicism at times and tried to
>> explain it as gaming the numbers system, but all that is merely
>> rationalization and doesn’t help.
>>
>>
>>
>> Over time, the result generally is that these developers’ changes get
>> ignored. And that’s not a good thing for the community as a whole. We
>> want to be a welcoming community and one which values all contributions
>> so I’m looking for some suggestions and guidance on how one can work
>> with contributors to try and improve the quality of these changes, and
>> help the contributor feel that their changes are valued by the project?
>> Other more experienced PTL’s, ex-PTL’s, long time open-source-community
>> folks, I’m seriously looking for suggestions and ideas.
>>
>>
>>
>> Any and all input is welcome, do other projects see this, how do you
>> handle it, is this normal, …
>>
>>
>>
>> Thanks!
>>
>>
>>
>> -amrith
>>
>
> cheers,
>

__

Re: [openstack-dev] [Keystone] Project name DB length

2016-09-29 Thread Boris Bobrov

Hi,


At any rate, would be great to know, and if there isn't a strong reason
against it we can make project name 255 for some more flexibility.

Plus although there is no true official standard, most projects in
OpenStack seem to use 255 as the default for a lot of string fields.
Weirdly enough, a lot of projects seem to use 255 even for project.id,
which seeing as it's 64 in keystone, and a uuid4 anyway, seems like a
bit of a waste.


Feel free to file some bug reports!


On 29/09/16 16:19, Steve Martinelli wrote:

We may have to ask Adam or Dolph, or pull out the history textbook for
this one. I imagine that trying to not bloat the token was definitely
a concern. IIRC User name was 64 also, but we had to increase to 255
because we're not in control of name that comes from external sources
(like LDAP).

On Wed, Sep 28, 2016 at 11:06 PM, Adrian Turjak
<adri...@catalyst.net.nz> wrote:

Hello Keystone Devs,

Just curious as to the choice to have the project name be only 64
characters:

https://github.com/openstack/keystone/blob/master/keystone/resource/backends/sql.py#L241



Seems short, and an odd choice when the user.name field is 255
characters:

https://github.com/openstack/keystone/blob/master/keystone/identity/backends/sql_model.py#L216



Is there a good reason for it only being 64 characters, or is this
just
something that was done a long time ago and no one thought about it?

Not hugely important, just seemed odd and may prove limiting for
something I'm playing with.

Cheers,
Adrian Turjak


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev