Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-09-12 Thread Morgan Fainberg

-Original Message-
From: Brant Knudson 
Reply: OpenStack Development Mailing List (not for usage questions)
Date: September 12, 2014 at 14:32:20
To: OpenStack Development Mailing List (not for usage questions)
Subject:  Re: [openstack-dev] masking X-Auth-Token in debug output - proposed 
consistency

> On Fri, Sep 12, 2014 at 12:02 PM, Tripp, Travis S  
> wrote:
>  
> >
> > From Jamie Lennox:
> > >> We handle this in the keystoneclient Session object by just printing
> > REDACTED or something similar.
> > >> The problem with using a SHA1 is that for backwards compatibility we
> > often use the SHA1 of a PKI token
> > >> as if it were a UUID token and so this is still sensitive data. There
> > is work in keystone by morganfainberg
> > >> (which I think was merged) to add a new audit_id which will be able to
> > identify a token across calls without
> > >> exposing any sensitive information. We will support this in session
> > when available.
> >
> > From Sean Dague
> > > So the problem is that means we are currently leaking secrets and making
> > the logs unreadable.
> >
> > > It seems like we should move forward with the {SHA1} ... and if that is
> > still sensitive, address that later.
> > > Not addressing it basically keeps the exposure and destroys usability of
> > the code because there is so much garbage printed out.
> >
> > I understand Sean's point about debugging. Right now the glanceclient is
> > just printing ***. So it isn't printing a lot of excess and isn't leaking
> > anything sensitive. The other usability concern with the *** that Sean
> > previously mentioned was having a short usable string might be useful for
> > debugging.
> >
> > Morgan and Jamie, you think switching to SHA1 actually adds a potential
> > security vulnerability to glanceclient that doesn't exist now. If that is
> > true, I think it would override the additional debugging concern of using
> > SHA1 for now. Can you please confirm?
> >
> > If only for consistency's sake, I could switch to "TOKEN_REDACTED" like the
> > code sample Morgan sent. [1]
> >
> > [1]
> > https://github.com/openstack/python-keystoneclient/blob/01cabf6bbbee8b5340295f3be5e1fa7111387e7d/keystoneclient/session.py#L126-L131
> >   
> >
>  
> As the person who proposed the change to print TOKEN_REDACTED, I'd be happy
> to see it printed as {SHA1} instead. I only had it print
> TOKEN_REDACTED because I was concerned that we were still logging tokens
> and wanted to get something merged that didn't do that rather than waiting
> for the perfect solution to come along.
>  
> Since we have configurable token hashing algorithm support in keystone and
> auth_token middleware, it's possible that someone could lose their sanity
> and decide to use sha1 as the hash algorithm (it defaults to MD5, which
> some security standards say is inadequate), and now your logs have usable
> token IDs instead of an unusable hash, ***, TOKEN_REDACTED, or whatever. We
> could accept this as a risk, and we could mitigate the risk some by
> changing keystone to reject sha1 as a hashing algorithm.
>  
> - Brant

Ideally, I want to see the use of the audit_ids (present in each token as of 
Juno) as the end goal. If we can get there as fast as changing to {SHA1}, I’d 
advocate for that. Brant nicely outlined why we didn’t go with SHA1 earlier in 
the cycle.

I think we are close to solving this in a better way than using sha1, but if we 
need a stop-gap we can go towards that for the short term (and disallow sha1 as 
a hash for Keystone).
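
For illustration, here is a minimal sketch of the two redaction styles under
discussion (the helper name is made up; this is not the actual keystoneclient
Session code):

    import hashlib

    def log_safe_token(token_id, use_sha1=True):
        # Hypothetical helper, not the keystoneclient implementation.
        if not token_id:
            return token_id
        if use_sha1:
            # Prefix the digest so log readers know it is a hash, not a
            # raw token. Note Jamie's caveat above: the SHA1 of a PKI
            # token may itself be usable as a UUID-style token, so this
            # is not always safe either.
            return '{SHA1}' + hashlib.sha1(
                token_id.encode('utf-8')).hexdigest()
        return 'TOKEN_REDACTED'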

—Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive dep

2014-09-12 Thread Morgan Fainberg
I do not see the python-memcache library in either keystoneclient’s 
requirements.txt [0] or test-requirements.txt [1]. For purposes of ensuring 
that we do not break people deploying auth_token from keystoneclient (for 
older releases), I don’t see the soft dependency on python-memcache going away.

Even for keystonemiddleware we do not have a hard-dependency on python-memcache 
in requirements.txt[2] or test-requirements.txt[3] as we gate on py33.

—Morgan 

[0] 
http://git.openstack.org/cgit/openstack/python-keystoneclient/tree/requirements.txt?id=0.10.1
[1] 
http://git.openstack.org/cgit/openstack/python-keystoneclient/tree/test-requirements.txt?id=0.10.1
[2] 
http://git.openstack.org/cgit/openstack/keystonemiddleware/tree/requirements.txt?id=1.1.1
[3] 
http://git.openstack.org/cgit/openstack/keystonemiddleware/tree/test-requirements.txt?id=1.1.1
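
To make "soft dependency" concrete, the usual pattern is a lazy import that
only fails if the operator actually enables memcached caching; a minimal
sketch (the helper name is made up, and this is not the actual auth_token
code):

    def _load_cache_client(servers):
        # python-memcache stays out of requirements.txt; py33
        # environments work fine as long as caching is never configured.
        if not servers:
            return None
        try:
            import memcache
        except ImportError:
            raise RuntimeError('memcached caching is configured but the '
                               'python-memcache library is not installed')
        return memcache.Client(servers)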

—
Morgan Fainberg


-Original Message-
From: Brant Knudson 
Reply: OpenStack Development Mailing List (not for usage questions)
Date: September 12, 2014 at 08:33:15
To: OpenStack Development Mailing List (not for usage questions)
Subject:  Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive 
dep

> On Thu, Sep 11, 2014 at 2:17 AM, Thomas Goirand wrote:
>  
> >
> > On my side (as the Debian package maintainer of OpenStack), I was more
> > than happy to see that Ceilometer made the choice to use a Python module
> > for memcache which supports Python 3. Currently python-memcache does
> > *not* support Python 3. It's in fact standing in the way of adding Python
> > 3 compatibility to *a lot* of the OpenStack packages, because it directly
> > impacts python-keystoneclient, which is a (build-)dependency of
> > almost everything.
> >
> >
> Thomas -
>  
> python-keystoneclient should no longer have a hard dependency on
> python-memcache(d). The auth_token middleware which can use memcache has
> been moved into the keystonemiddleware repo (a copy is left in
> keystoneclient only for backwards-compatibility). If python-keystoneclient
> still has a dependency on python-memcache then we're doing something wrong
> and should be able to fix it.
>  
> - Brant
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-09-11 Thread Morgan Fainberg
Hi Travis,

By and large we have addressed this in the Session code within Keystoneclient 
via the function here (and other similar cases): 
https://github.com/openstack/python-keystoneclient/blob/01cabf6bbbee8b5340295f3be5e1fa7111387e7d/keystoneclient/session.py#L126-L131

If/when Glanceclient is moved to consuming the session code, it should help 
alleviate the issues with printing the token IDs in the logs themselves.

Along with the changes for the session code, all tokens issued from Keystone 
(Juno and beyond) will also include audit_id fields that are safe to use in 
logging (they are part of the token data). The audit_ids field has two 
elements: the first (which will always exist) is the local token’s audit_id 
(audit IDs are randomly generated and should be considered as globally unique 
as a UUID). The second element exists only if the token has ever been part of 
a rescope (the exchange of a token for another token of a different scope, 
e.g. changing to a new project/tenant); it is the audit_id of the first token 
in the chain, and is thus unique for the entire chain of tokens.
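
To make the shape of the field concrete, here is an illustrative sketch (the
ID values are made up):

    # A token that has never been rescoped: the local audit_id only.
    fresh_token = {'audit_ids': ['VcxU2JYqT8OzfUVvrjEITQ']}

    # The same identity after a rescope: its own audit_id first, then
    # the audit_id of the first token in the chain.
    rescoped_token = {'audit_ids': ['qNUTIJntTzO1-XUk5F5qKQ',
                                    'VcxU2JYqT8OzfUVvrjEITQ']}

    def audit_chain_id(token):
        # The last element is stable across the whole chain, so it can
        # correlate log entries without exposing anything sensitive.
        return token['audit_ids'][-1]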

I don’t believe we’re exposing the audit_ids yet to the services behind the 
auth_token middleware nor using them for logging in cases such as the above 
linked logging function. I would like to eventually see the audit_ids used 
(where they exist) for logging cases like this.

I’m sure Jamie Lennox can chime in and provide a bit more insight as to the 
status of converting Glanceclient to using session as I know he’s been working 
on the client front in this regard. I hope that sometime within the K 
development cycle timeline we will be converting the logging over to audit_ids 
where possible (but that has not been 100% decided on).

Cheers,
Morgan

—
Morgan Fainberg


-Original Message-
From: Tripp, Travis S 
Reply: OpenStack Development Mailing List (not for usage questions)
Date: September 11, 2014 at 17:35:30
To: OpenStack Development Mailing List (not for usage questions)
Subject:  [openstack-dev] masking X-Auth-Token in debug output - proposed 
consistency

> Hi All,
>  
> I'm just helping with bug triage in Glance and we've got a bug to update how 
> tokens are redacted  
> in the glanceclient [1]. It says to update to whatever cross-project approach 
> is agreed  
> upon and references this thread:
>  
> http://lists.openstack.org/pipermail/openstack-dev/2014-June/037345.html  
>  
> I just went through the thread and as best as I can tell there wasn't a 
> conclusion in the  
> ML. However, if we are going to do anything, IMO the thread leans toward 
> {SHA1},  
> with Morgan Fainberg dissenting. However, he references a patch that was 
> ultimately  
> abandoned.
>  
> If there was a conclusion to this, please let me know so I can update and 
> work on closing  
> this bug.
>  
> [1] https://bugs.launchpad.net/python-glanceclient/+bug/1329301
>  
> Thanks,
> Travis
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] doubling our core review bandwidth

2014-09-07 Thread Morgan Fainberg
Responses in-line.


-Original Message-
From: Robert Collins 
Reply: OpenStack Development Mailing List (not for usage questions)
Date: September 7, 2014 at 20:16:32
To: OpenStack Development Mailing List
Subject:  [openstack-dev] doubling our core review bandwidth

> I hope the subject got your attention :).
>  
> This might be a side effect of my having too many cosmic rays, but it's
> been percolating for a bit.
>  
> tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
> use 'needs 1x+2'. We can ease up a large chunk of pressure on our
> review bottleneck, with the only significant negative being that core
> reviewers may see less of the code going into the system - but they
> can always read more to stay in shape if that's an issue :)
>  
> That's it really - below I've some arguments to support this suggestion.
>  
> -Rob

I think that this is something that can be done on a project-by-project basis. 
However, I don’t disagree that the mandate could be relaxed to “must have 
1x+2”, leaving it to the individual projects to specify how that is 
implemented.

> # Costs of the current system
>  
> Perfectly good code that has been +2'd sits waiting for a second +2.
> This is a common complaint from folk suffering from review latency.
>  
> Reviewers spend time reviewing code that has already been reviewed,
> rather than reviewing code that hasn't been reviewed.

This is absolutely true. There are many times things linger with a single +2 
and then become painful due to rebase
needs. This issue can be extremely frustrating (especially to newer 
contributors).

> # Benefits of the current system
>  
> I don't think we gain a lot from the second +2 today. There are lots
> of things we might get from it:
>  
> - we keep -core in sync with each other
> - better mentoring of non-core
> - we avoid collaboration between bad actors
> - we ensure -core see a large chunk of the changes going through the system
> - we catch more issues on the code going through by having more eyeballs
>  
> I don't think any of these are necessarily false, but equally I don't
> think they are necessarily true.
>
>
> ## keeping core in sync
>  
> For a team of (say) 8 cores, if 2 see each other's comments on a
> review, a minimum of 7 reviews are needed for a reviewer R's thoughts
> on something to be disseminated across the team via osmosis. Since
> such thoughts probably don't turn up on every review, the reality is
> that it may take many more reviews than that: it is a thing, but it's
> not very effective vs direct discussion.

I wouldn’t discount how much benefit is added by forcing the cores to see more 
of the code going into the repo. I personally feel like (as a core on a 
project) I would be lacking a lot of insight as to the code base without the 
extra reviews. It might take me longer to get up to speed when reviewing or 
implementing something new simply because I would be less likely to have seen 
the recently merged code.

Losing this isn’t the end of the world by any means.

> ## mentoring of non-core
>  
> This really is the same as the keeping core in sync debate, except
> we're assuming that the person learning has nothing in common to start
> with.

From my experience, this isn’t really a benefit of having multiple core 
reviewers look over a patch set. Most of the mentoring I see happens either in 
IRC or simply because reviews (even non-core ones) occur. I agree with your 
assessment.

> ## avoiding collaboration between bad actors
>  
> The two core requirement means that it takes three people (proposer +
> 2 core) to collaborate on landing something inappropriate (whether it's
> half-baked, a misfeature, whatever). That's only 50% harder than 2
> people (proposer + 1 core) and it's still not really a high bar to
> meet. Further, we can revert things.

Solid assessment. I tend to agree with this point. If you are going to have bad 
actors try and get code in you will have bad actors trying to get code in. The 
real question is: how many (if any) extra reverts will be needed in the case of 
bad actors? My guess is 1 per bad actor (after which that actor is likely no 
longer going to be core), if there are even any bad actors out there.

> ## Seeing a high % of changes
>  
> Consider nova - 
> http://russellbryant.net/openstack-stats/nova-reviewers-90.txt  
> Core team size: 21 (avg 3.8 reviews/day) [79.8/day for the whole team]
> Changes merged in the last 90 days: 1139 (12.7/day)
>  
> Each reviewer can only be seeing 30% (3.8/12.7) of the changes to nova
> on average (to keep up with 12/day landing). So they're seeing a lot,
> but there's more that they aren't seeing already. Dropping 30% to 15%
> might be significant. OTOH seeing 30% is probably not enough to keep
> up with everything on its own anyway - reviewers are going to be
> hitting new code regularly.
>
> ## Catching more issues through more eyeballs
>  
> I'm absolutely sure we do catch more issues through more eyeballs -
> but what eyeballs look a

Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-07 Thread Morgan Fainberg
Comments inline (I added my thoughts on a couple of the targets Sean
outlined).

On Thursday, September 4, 2014, Sean Dague  wrote:
>
>
> Here is my top 5 list:
>
> 1. Functional Testing in Integrated projects
>
> The justification for this is here -
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html.
> We
> need projects to take more ownership of their functional testing so that
> by the time we get to integration testing we're not exposing really
> fundamental bugs like being unable to handle 2 requests at the same time.
>
> For Kilo: I think we can and should be able to make progress on this on
> all integrated projects, as well as the python clients (which are
> basically untested and often very broken).
>
>
Big +1 from me on this.


> 2. Consistency in southbound interfaces (Logging first)
>
> Logging and notifications are south bound interfaces from OpenStack
> providing information to people, or machines, about what is going on.
> There is also a 3rd proposed south bound with osprofiler.
>
> For Kilo: I think it's reasonable to complete the logging standards and
> implement them. I expect notifications (which haven't quite kicked off)
> are going to take 2 cycles.
>
> I'd honestly *really* love to see a unification path for all the
> southbound parts, logging, osprofiler, notifications, because there is
> quite a bit of overlap in the instrumentation/annotation inside the main
> code for all of these.
>
>
I agree here as well. We should prioritize logging and use that success as
the template for the other southbound parts. If we get profiler,
notifications, etc. it is a win, but hitting logging hard and getting it
right is a huge step in the right direction.


> 3. API micro version path forward
>
> We have Cinder v2, Glance v2, Keystone v3. We've had them for a long
> time. When we started Juno cycle Nova used *none* of them. And with good
> reason, as the path forward was actually pretty bumpy. Nova has been
> trying to create a v3 for 3 cycles, and that effort collapsed under its
> own weight. I think major API revisions in OpenStack are not actually
> possible any more, as there is too much inertia on existing interfaces.
>
> How to sanely and gradually evolve the OpenStack API is tremendously
> important, especially as a bunch of new projects are popping up that
> implement parts of it. We have the beginnings of a plan here in Nova,
> which now just needs a bunch of heavy lifting.
>
> For Kilo: A working microversion stack in at least one OpenStack
> service. Nova is probably closest, though Mark McClain wants to also
> take a spin on this in Neutron. I think if we could come up with a model
> that worked in both of those projects, we'd pick up some steam in making
> this long term approach across all of OpenStack.
>
I like the concept, but I absolutely want a definition of what micro
versioning should look like. That way we don't end up with 10 different
implementations of micro versioning. I am very concerned that we will see
nova do this in one way, neutron in a different way, and then other
projects taking bits and pieces and ending up with something highly
inconsistent. I am unsure how to resolve this consistency issue if multiple
projects are implementing during the same cycle, since retrofitting a
different implementation could break the API contract.

Generally speaking, microversioning will be much more maintainable than
the current major-API-version approach.
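
As a sketch of the general idea (not any project's actual implementation;
the version bounds and error handling here are made up):

    MIN_VERSION = (2, 1)   # illustrative deployment bounds
    MAX_VERSION = (2, 7)

    def negotiate_microversion(header_value):
        # No header: behave like the oldest supported version, so
        # existing clients keep their current behavior.
        if header_value is None:
            return MIN_VERSION
        try:
            major, minor = (int(p) for p in header_value.split('.'))
        except ValueError:
            raise ValueError('400: malformed version %r' % header_value)
        if not MIN_VERSION <= (major, minor) <= MAX_VERSION:
            raise ValueError('406: version %s not supported' % header_value)
        return (major, minor)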


> 4. Post merge testing
>
> As explained here -
> http://lists.openstack.org/pipermail/openstack-dev/2014-July/041057.html
> we could probably get a lot more bang for our buck if we had a smaller #
> of integration configurations in the pre merge gate, and a much more
> expansive set of post merge jobs.
>
> For Kilo: I think this could be implemented, it probably needs more
> hands than it has right now.
>
> 5. Consistent OpenStack python SDK / clients
>
> I think the client projects being inside the server programs has not
> served us well, especially as the # of servers has expanded. We as a
> project need to figure out how to get the SDK / unified client effort
> moving forward faster.
>
> For Kilo: I'm not sure how close to "done" we could take this, but this
> needs to become a larger overall push for the project as a whole, as I
> think our use exposed interface here is inhibiting adoption.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>

Cheers,
--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] "heat.conf.sample is not up to date"

2014-08-24 Thread Morgan Fainberg
On Sunday, August 24, 2014, Anne Gentle  wrote:

> I'm following this as well since I have the exact same problem in a
> docstring patch for heat.
>
>
Keystone saw an oddity with the new sample config generator (changing how
options are sorted and therefore changing the way the sample config is
rendered). This could be a similar / related issue.

Most of the projects stopped gating on an up-to-date sample config for a few
reasons. The first is that with external library dependencies you never
know when/if something upstream will break the test run (e.g. a new
oslo.config or a new keystonemiddleware).

Now imagine that issue occurring while it blocks a gate-fixing bug (this has
happened at least a couple of times).

In short, requiring the sample config to be up to date in order to merge code
causes a lot of headaches.

Different projects handle this differently. Nova doesn't have a sample
config in tree; keystone updates on a semi-regular basis (sometimes as part
of a patch, sometimes as a separate patch). The Keystone team is looking at
adding a simple non-voting gate job (if infra doesn't mind) that will tell
us when the config is out of date.
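
A rough sketch of what such a check could look like (paths and namespaces
are illustrative, and it assumes the oslo-config-generator console script is
on the path):

    import subprocess
    import sys

    def sample_config_is_current(sample_path, namespaces):
        # Regenerate the sample and compare it with the in-tree copy.
        generated = '/tmp/config.sample.generated'
        cmd = ['oslo-config-generator', '--output-file', generated]
        for namespace in namespaces:
            cmd.extend(['--namespace', namespace])
        subprocess.check_call(cmd)
        with open(sample_path) as current, open(generated) as fresh:
            return current.read() == fresh.read()

    if __name__ == '__main__':
        ok = sample_config_is_current('etc/keystone.conf.sample',
                                      ['keystone'])
        print('sample config %s' % ('up to date' if ok else 'OUT OF DATE'))
        sys.exit(0 if ok else 1)

Because the job would be non-voting, an upstream library change that shifts
the rendered sample would report the drift without blocking the gate.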

While it is nice to always have an updated sample config, I think it is not
worth the breakage / issues it adds to the gate.

It might make sense to standardize how we handle sample config files across
the projects or at least standardize on removing the gate block if the
config is out of date. I know it was floated earlier that there would be a
proposal bot job (like translations) for sample config files, but I don't
remember the specifics of why it wasn't well liked.

--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][db] Nominating Mike Bayer for the oslo.db core reviewers team

2014-08-15 Thread Morgan Fainberg

-Original Message-
From: Doug Hellmann 
Reply: OpenStack Development Mailing List (not for usage questions)
Date: August 15, 2014 at 13:29:15
To: Ben Nemec, OpenStack Development Mailing List (not for usage questions)
Subject:  Re: [openstack-dev] [oslo][db] Nominating Mike Bayer for the oslo.db 
core reviewers team

>  
> On Aug 15, 2014, at 10:00 AM, Ben Nemec wrote:
>  
> > On 08/15/2014 08:20 AM, Russell Bryant wrote:
> >> On 08/15/2014 09:13 AM, Jay Pipes wrote:
> >>> On 08/15/2014 04:21 AM, Roman Podoliaka wrote:
>  Hi Oslo team,
> 
>  I propose that we add Mike Bayer (zzzeek) to the oslo.db core
>  reviewers team.
> 
>  Mike is an author of SQLAlchemy, Alembic, Mako Templates and some
>  other stuff we use in OpenStack. Mike has been working on OpenStack
>  for a few months contributing a lot of good patches and code reviews
>  to oslo.db [1]. He has also been revising the db patterns in our
>  projects and prepared a plan how to solve some of the problems we have
>  [2].
> 
>  I think, Mike would be a good addition to the team.
> >>>
> >>> Uhm, yeah... +10 :)
> >>
> >> ^2 :-)
> >>
> >
> > What took us so long to do this? :-)
> >
> > +1 obviously.
>  
> I did think it would be a good idea to wait a *little* while and make sure we 
> weren’t going  
> to scare him off. ;-)
>  
> Seriously, Mike has been doing a great job collaborating with the existing 
> team and helping  
> us make oslo.db sane.
>  
> +1
>  
> Doug


Big +1 from me. Mike has been great across the board (I know I’m not an oslo core).

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Survey on Token Provider Usage

2014-07-31 Thread Morgan Fainberg

-Original Message-
From: Thierry Carrez 
Reply: OpenStack Development Mailing List (not for usage questions)
Date: July 30, 2014 at 05:20:28
To: openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Keystone] Survey on Token Provider Usage

> Morgan Fainberg wrote:
> > The Keystone team is looking for feedback from the community on what type 
> > of Keystone  
> Token is being used in your OpenStack deployments. This is to help us 
> understand the use  
> of the different providers and get information on the reasoning (if possible) 
> why that
> token provider is being used.
> >
> > Please use the survey link and let us know which release of OpenStack and 
> > which Keystone  
> Token type (UUID, PKI, PKIZ, something custom) you are using. The results of 
> this survey  
> will have no impact on future support of any of these types of Tokens, we 
> plan to continue  
> to support all of the current token formats and the ability to use a custom 
> token provider.  
> >
> > https://www.surveymonkey.com/s/NZNDH3M
>  
> Great!
>  
> I see you posted this on -dev and -operators... You should probably also
> post this (or make sure it gets forwarded) on the openstack general ML.
> I'd expect you'd get extra data points from users from there.
>  
> Cheers,
>  
> --
> Thierry Carrez (ttx)

Thanks for the advice Thierry! I was feeling like I had missed a mailing list. 
It has been sent over to the main list now.

Cheers,
Morgan
—
Morgan Fainberg 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Barbican] Keystone PKI token too much long

2014-07-31 Thread Morgan Fainberg
On Thursday, July 31, 2014, Russell Bryant  wrote:

> On 07/30/2014 10:57 AM, Dolph Mathews wrote:
> > We recently merged an implementation for GET /v3/catalog which finally
> > enables POST /v3/auth/tokens?nocatalog to be a reasonable default
> behavior, at the cost of an extra HTTP call from a remote service back to
> > keystone where necessary.
>
> Is that really a safe default change to make?  It looks like v3 has
> already been marked as stable, and this would be a
> non-backwards-compatible change to the API.
>
>
This default could be made in keystoneclient, and the catalog could be
fetched separately (the session object can handle it). New clients would get
the same data without a massive token size, while old clients would remain
compatible. The API remains compatible and stable.
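
As a sketch of that flow against the v3 API (endpoint, credentials, and the
exact catalog path are illustrative):

    import requests

    KEYSTONE = 'http://keystone.example.com:5000'  # illustrative endpoint

    # Request a token without the (potentially huge) service catalog.
    auth_body = {'auth': {
        'identity': {'methods': ['password'],
                     'password': {'user': {'name': 'demo',
                                           'domain': {'id': 'default'},
                                           'password': 'secret'}}},
        'scope': {'project': {'name': 'demo',
                              'domain': {'id': 'default'}}}}}
    resp = requests.post(KEYSTONE + '/v3/auth/tokens?nocatalog',
                         json=auth_body)
    token = resp.headers['X-Subject-Token']

    # Fetch the catalog separately, and only when it is actually needed.
    catalog = requests.get(KEYSTONE + '/v3/auth/catalog',
                           headers={'X-Auth-Token': token}).json()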

Cheers,
Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cross-server locking for neutron server

2014-07-30 Thread Morgan Fainberg

-Original Message-
From: Jay Pipes 
Reply: OpenStack Development Mailing List (not for usage questions)
Date: July 30, 2014 at 09:59:15
To: openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [neutron] Cross-server locking for neutron server

> On 07/30/2014 09:48 AM, Doug Wiegley wrote:
> >> I'd have to look at the Neutron code, but I suspect that a simple
> >> strategy of issuing the UPDATE SQL statement with a WHERE condition that
> >
> > I'm assuming the locking is for serializing code, whereas for what you
> > describe above, is there some reason we wouldn't just use a transaction?
>  
> Because you can't do a transaction from two different threads...
>  
> The compare and update strategy is for avoiding the use of SELECT FOR
> UPDATE.
>  
> Best,
> -jay


As a quick example of the optimistic locking you describe (UPDATE with WHERE 
clause) you can take a look at the Keystone “consume trust” logic:

https://review.openstack.org/#/c/97059/14/keystone/trust/backends/sql.py

Line 93 does the initial query; then on lines 108 and 115 we perform the 
update and check to see how many rows were affected.
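
For reference, a minimal self-contained sketch of the same compare-and-update
pattern (the model and column names are illustrative, not the actual Keystone
schema):

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Trust(Base):
        __tablename__ = 'trust'   # illustrative model
        id = Column(String(64), primary_key=True)
        remaining_uses = Column(Integer, nullable=True)

    def consume_use(session, trust_id):
        trust = session.query(Trust).get(trust_id)
        if trust.remaining_uses is None:
            return  # unlimited uses; nothing to decrement
        # Emits UPDATE ... WHERE id = :id AND remaining_uses = :old;
        # zero rows affected means another thread won the race.
        rows = (session.query(Trust)
                .filter_by(id=trust_id,
                           remaining_uses=trust.remaining_uses)
                .update({'remaining_uses': trust.remaining_uses - 1}))
        if rows != 1:
            raise RuntimeError('trust consumed concurrently; retry or abort')
        session.commit()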

Feel free to hit me up if I can help in any way on this.

Cheers,
Morgan 


—
Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Survey on Token Provider Usage

2014-07-29 Thread Morgan Fainberg
Hi!

The Keystone team is looking for feedback from the community on what type of 
Keystone Token is being used in your OpenStack deployments. This is to help us 
understand the use of the different providers and get information on the 
reasoning (if possible) why that token provider is being used.

Please use the survey link and let us know which release of OpenStack and which 
Keystone Token type (UUID, PKI, PKIZ, something custom) you are using. The 
results of this survey will have no impact on future support of any of these 
types of Tokens, we plan to continue to support all of the current token 
formats and the ability to use a custom token provider.


https://www.surveymonkey.com/s/NZNDH3M 


Thanks!
The Keystone Team



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Feasibility of adding global restrictions at trust creation time

2014-07-23 Thread Morgan Fainberg
On Wednesday, July 23, 2014, Russell Bryant  wrote:

> On 07/22/2014 11:00 PM, Nathan Kinder wrote:
> >
> >
> > On 07/22/2014 06:55 PM, Steven Hardy wrote:
> >> On Tue, Jul 22, 2014 at 05:20:44PM -0700, Nathan Kinder wrote:
> >>> Hi,
> >>>
> >>> I've had a few discussions recently related to Keystone trusts with
> >>> regards to imposing restrictions on trusts at a deployment level.
> >>> Currently, the creator of a trust is able to specify the following
> >>> restrictions on the trust at creation time:
> >>>
> >>>   - an expiration time for the trust
> >>>   - the number of times that the trust can be used to issue trust
> tokens
> >>>
> >>> If an expiration time (expires_at) is not specified by the creator of
> >>> the trust, then it never expires.  Similarly, if the number of uses
> >>> (remaining_uses) is not specified by the creator of the trust, it has
> an
> >>> unlimited number of uses.  The important thing to note is that the
> >>> restrictions are entirely in the control of the trust creator.
> >>>
> >>> There may be cases where a particular deployment wants to specify
> global
> >>> maximum values for these restrictions to prevent a trust from being
> >>> granted indefinitely.  For example, Keystone configuration could
> specify
> >>> that a trust can't be created that has >100 remaining uses or is valid
> >>> for more than 6 months.  This would certainly cause problems for some
> >>> deployments that may be relying on indefinite trusts, but it is also a
> >>> nice security control for deployments that don't want to allow
> something
> >>> so open-ended.
> >>>
> >>> I'm wondering about the feasibility of this sort of change,
> particularly
> >>> from an API compatibility perspective.  An attempt to create a trust
> >>> without an expires_at value should still be considered as an attempt to
> >>> create a trust that never expires, but Keystone could return a '403
> >>> Forbidden' response if this request violates the maximum specified in
> >>> configuration (this would be similar for remaining_uses).  The
> semantics
> >>> of the API remain the same, but the response has the potential to be
> >>> rejected for new reasons.  Is this considered as an API change, or
> would
> >>> this be considered to be OK to implement in the v3 API?  The existing
> >>> API docs [1][2] don't really go to this level of detail with regards to
> >>> when exactly a 403 will be returned for trust creation, though I know
> of
> >>> specific cases where this response is returned for the create-trust
> request.
> >>
> >> FWIW if you start enforcing either of these restrictions by default, you
> >> will break heat, and every other delegation-to-a-service use case I'm
> aware
> >> of, where you simply don't have any idea how long the lifetime of the
> thing
> >> created by the service (e.g heat stack, Solum application definition,
> >> Mistral workflow or whatever) will be.
> >>
> >> So while I can understand the desire to make this configurable for some
> >> environments, please leave the defaults as the current behavior and be
> >> aware that adding these kind of restrictions won't work for many
> existing
> >> trusts use-cases.
> >
> > I fully agree.  In no way should the default behavior change.
> >
> >>
> >> Maybe the solution would be some sort of policy defined exception to
> these
> >> limits?  E.g when delegating to a user in the service project, they do
> not
> >> apply?
> >
> > Role-based limits seem to be a natural progression of the idea, though I
> > didn't want to throw that out there from the get-go.
>
> I was concerned about this idea from an API compatibility perspective,
> but I think the way you have laid it out here makes sense.  Like both
> you and Steven said, the behavior of the API when the parameter is not
> specified should *not* change.  However, allowing deployment-specific
> policy that would reject the request seems fine.
>
> Thanks,
>
> --
> Russell Bryant
>
>
This all seems quite reasonable. As long as the default behavior doesn't
change, I see this as quite doable, and it should not have any negative
impact on the API.

I can see a benefit to having this type of enforcement in some deployments.
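
As a sketch of what such enforcement might look like (the option names,
values, and exception type are hypothetical; with the caps left unset, the
current open-ended behavior is preserved):

    import datetime

    # Hypothetical deployment configuration; None means "no cap".
    MAX_REMAINING_USES = 100
    MAX_LIFETIME = datetime.timedelta(days=180)

    class Forbidden(Exception):
        """Would map to an HTTP 403 in the API layer."""

    def check_trust_limits(expires_at, remaining_uses, exempt=False):
        if exempt:
            return  # e.g. a policy-defined exception for service users
        now = datetime.datetime.utcnow()
        if MAX_LIFETIME is not None:
            if expires_at is None or expires_at - now > MAX_LIFETIME:
                raise Forbidden('trust lifetime exceeds deployment maximum')
        if MAX_REMAINING_USES is not None:
            if remaining_uses is None or remaining_uses > MAX_REMAINING_USES:
                raise Forbidden('remaining_uses exceeds deployment maximum')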

--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-17 Thread Morgan Fainberg


> > I wasn't aware that PKI tokens had domains in them. What happens to nova
> > in this case? It just works?
> >
>  
> Both PKI and UUID responses from v3 contain:
>  
> 1. the user's domain
>  
> And if it's a project scoped token:
>  
> 2. the project's domain
>  
> Or if it's a domain-scoped token:
>  
> 3. a domain scope
>  
> The answer to your question is that if nova receives a project-scoped token
> (1 & 2), it doesn't need to be domain-aware: project IDs are globally
> unique and nova doesn't need to know about project-domain relationships.
>  
> If nova receives a domain-scoped token (1 & 3), the policy layer can balk
> with an HTTP 401 because there's no project in scope, and it's not
> domain-aware. From nova's perspective, this is identical to the scenario
> where the policy layer returns an HTTP 401 because nova was presented with
> an unscoped token (1 only) from keystone.

Let me add some specifics based upon the IRC discussion I had with Joe Gordon.

In addition to what Dolph has outlined here we have this document 
http://docs.openstack.org/developer/keystone/http-api.html#how-do-i-migrate-from-v2-0-to-v3
 that should help with what is needed to do the conversion. The change to use 
v3 largely relies on a deployer enabling the V3 API in Keystone.

By and large, the change is all in the middleware. The middleware will handle 
either token, so it really comes down to when a V3 token is requested by the 
end user and subsequently used to interact with the various OpenStack services. 
This part requires no change on Nova's (or any other service's) part (with 
the exception of the domain-scoped tokens outlined above and the needed 
changes to policy if those are to be supported).

Each of the client libraries will need to be updated to utilize the V3 API. 
This has been in process for a while (you’ve seen the code from Jamie Lennox 
and Guang Yee) and mostly consists of converting each of the libraries to 
utilize the Session object from keystoneclient instead of the many varied 
implementations for talking to auth.
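
For example, a client built on the Session object might authenticate against
v3 roughly like this (hostname and credentials are illustrative):

    from keystoneclient import session
    from keystoneclient.auth.identity import v3

    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='demo',
                       password='secret',
                       user_domain_name='Default',
                       project_name='demo',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)

    # Any client built on this session reuses the same v3 auth; the
    # session fetches (and refreshes) the token as needed.
    token = sess.get_token()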

Last but not least, here are a couple of bullet points that make V3 much 
better than the V2 Keystone API (all the details of what V3 brings to the 
table can be found here: 
https://github.com/openstack/identity-api/tree/master/v3/src/markdown ). A 
lot of these benefits are operator-specific.

* Federated Identity. V3 Keystone supports the use of SAML (via shibboleth) 
from a number of sources as a form of Identity (instead of having to keep the 
users all within Keystone’s Identity backend). The federation support relies 
heavily upon the domain constructs in Keystone (which are part of V3). There is 
work to expand the support beyond SAML (including a proposal to support 
keystone-to-keystone federation).

* Pluggable Auth. V3 Keystone supports pluggable authentication mechanisms (a 
lightweight module that can authenticate the user); this is a bit friendlier 
than needing to subclass the entire Identity backend with a bunch of 
conditional logic. Plugins are configured via the Keystone configuration file.

* Better admin-scoping support. Domains allow us to better handle “admin” vs 
“non-admin” and limit bleeding those roles across projects (a big complaint in 
v2: you were either an admin or not an admin globally). Due to backwards 
compatibility requirements, we have largely left this as it was, but the 
support is there and can be seen via the policy.v3cloudsample.json file 
provided in the Keystone tree.

* The hierarchical multitenancy work is being done against the V3 Keystone 
API. This is again related to the domain construct and support. This will 
likely require changes to more than just Keystone to make full use of the new 
functionality, but specifics are still up in the air as this is under active 
development.

These are just some of the benefits of V3; there are a lot of improvements 
over V2 that are not on this list (or are truly transparent to the end user 
and deployer).


Cheers,
Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] (98)Address already in use: make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000

2014-07-16 Thread Morgan Fainberg

--
From: Rich Megginson rmegg...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 08:08:00
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] (98)Address already in use: 
make_sock: could not bind to address [::]:5000 & 0.0.0.0:5000



> Another problem with port 5000 in Fedora, and probably more recent
> versions of RHEL, is the selinux policy:
>  
> # sudo semanage port -l|grep 5000
> ...
> commplex_main_port_t tcp 5000
> commplex_main_port_t udp 5000
>  
> There is some service called "commplex" that has already "claimed" port
> 5000 for its use, at least as far as selinux goes.
> 

Wouldn’t this also affect the eventlet-based Keystone using port 5000? This 
is not an Apache-specific issue, is it?

—Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg
Reposted now with far fewer mangled quotes. Thanks for being patient with 
the re-send!

--
From: Joe Gordon joe.gord...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 16, 2014 at 02:27:42
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and 
keystone v3

> On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg  
> wrote:
>  
> >
> >
> > On Tuesday, July 15, 2014, Steven Hardy wrote:
> >
> >> On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> >> > On 07/14/2014 11:47 AM, Steven Hardy wrote:
> >> > >Hi all,
> >> > >
> >> > >I'm probably missing something, but can anyone please tell me when
> >> devstack
> >> > >will be moving to keystone v3, and in particular when API auth_token
> >> will
> >> > >be configured such that auth_version is v3.0 by default?
> >> > >
> >> > >Some months ago, I posted this patch, which switched auth_version to
> >> v3.0
> >> > >for Heat:
> >> > >
> >> > >https://review.openstack.org/#/c/80341/
> >> > >
> >> > >That patch was nack'd because there was apparently some version
> >> discovery
> >> > >code coming which would handle it, but AFAICS I still have to manually
> >> > >configure auth_version to v3.0 in the heat.conf for our API to work
> >> > >properly with requests from domains other than the default.
> >> > >
> >> > >The same issue is observed if you try to use non-default-domains via
> >> > >python-heatclient using this soon-to-be-merged patch:
> >> > >
> >> > >https://review.openstack.org/#/c/92728/
> >> > >
> >> > >Can anyone enlighten me here, are we making a global devstack move to
> >> the
> >> > >non-deprecated v3 keystone API, or do I need to revive this devstack
> >> patch?
> >> > >
> >> > >The issue for Heat is we support notifications from "stack domain
> >> users",
> >> > >who are created in a heat-specific domain, thus won't work if the
> >> > >auth_token middleware is configured to use the v2 keystone API.
> >> > >
> >> > >Thanks for any information :)
> >> > >
> >> > >Steve
> >> > There are reviews out there in client land now that should work. I was
> >> > testing discover just now and it seems to be doing the right thing. If
> >> the
> >> > AUTH_URL is chopped of the V2.0 or V3 the client should be able to
> >> handle
> >> > everything from there on forward.
> >>
> >> Perhaps I should restate my problem, as I think perhaps we still have
> >> crossed wires:
> >>
> >> - Certain configurations of Heat *only* work with v3 tokens, because we
> >> create users in a non-default domain
> >> - Current devstack still configures versioned endpoints, with v2.0
> >> keystone
> >> - Heat breaks in some circumstances on current devstack because of this.
> >> - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
> >> the problem.
> >>
> >> So, back in March, client changes were promised to fix this problem, and
> >> now, in July, they still have not - do I revive my patch, or are fixes for
> >> this really imminent this time?
> >>
> >> Basically I need the auth_token middleware to accept a v3 token for a user
> >> in a non-default domain, e.g validate it *always* with the v3 API not
> >> v2.0,
> >> even if the endpoint is still configured versioned to v2.0.
> >>
> >> Sorry to labour the point, but it's frustrating to see this still broken
> >> so long after I proposed a fix and it was rejected.
> >>
> >>
> > We just did a test converting over the default to v3 (and falling back to
> > v2 as needed, yes fallback will still be needed) yesterday (Dolph posted a
> > couple of test patches and they seemed to succeed - yay!!) It looks like it
> will just work. Now there is a big caveat: this default will only change
> > in the keystone middleware project, and it needs to have a patch or three
> > get through gate converting projects over to use it before we accept 

Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg
I apologize for the very mixed-up/missing quoting in that response; it looks 
like my client ate a bunch of the quotes when writing up the email.

—
Morgan Fainberg


--
From: Morgan Fainberg morgan.fainb...@gmail.com
Reply: Morgan Fainberg morgan.fainb...@gmail.com
Date: July 16, 2014 at 07:34:57
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [devstack][keystone] Devstack, auth_token and 
keystone v3

>  
>  
> On Wednesday, July 16, 2014, Joe Gordon wrote:
>  
>  
>  
> On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg wrote:  
>  
>  
> On Tuesday, July 15, 2014, Steven Hardy wrote:
> On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> > On 07/14/2014 11:47 AM, Steven Hardy wrote:
> > >Hi all,
> > >
> > >I'm probably missing something, but can anyone please tell me when devstack
> > >will be moving to keystone v3, and in particular when API auth_token will
> > >be configured such that auth_version is v3.0 by default?
> > >
> > >Some months ago, I posted this patch, which switched auth_version to v3.0
> > >for Heat:
> > >
> > >https://review.openstack.org/#/c/80341/
> > >
> > >That patch was nack'd because there was apparently some version discovery
> > >code coming which would handle it, but AFAICS I still have to manually
> > >configure auth_version to v3.0 in the heat.conf for our API to work
> > >properly with requests from domains other than the default.
> > >
> > >The same issue is observed if you try to use non-default-domains via
> > >python-heatclient using this soon-to-be-merged patch:
> > >
> > >https://review.openstack.org/#/c/92728/
> > >
> > >Can anyone enlighten me here, are we making a global devstack move to the
> > >non-deprecated v3 keystone API, or do I need to revive this devstack patch?
> > >
> > >The issue for Heat is we support notifications from "stack domain users",
> > >who are created in a heat-specific domain, thus won't work if the
> > >auth_token middleware is configured to use the v2 keystone API.
> > >
> > >Thanks for any information :)
> > >
> > >Steve
> > There are reviews out there in client land now that should work. I was
> > testing discover just now and it seems to be doing the right thing. If the
> > AUTH_URL is chopped of the V2.0 or V3 the client should be able to handle
> > everything from there on forward.
>  
> Perhaps I should restate my problem, as I think perhaps we still have
> crossed wires:
>  
> - Certain configurations of Heat *only* work with v3 tokens, because we
> create users in a non-default domain
> - Current devstack still configures versioned endpoints, with v2.0 keystone
> - Heat breaks in some circumstances on current devstack because of this.
> - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
> the problem.
>  
> So, back in March, client changes were promised to fix this problem, and
> now, in July, they still have not - do I revive my patch, or are fixes for
> this really imminent this time?
>  
> Basically I need the auth_token middleware to accept a v3 token for a user
> in a non-default domain, e.g validate it *always* with the v3 API not v2.0,
> even if the endpoint is still configured versioned to v2.0.
>  
> Sorry to labour the point, but it's frustrating to see this still broken
> so long after I proposed a fix and it was rejected.
>  
>  
> We just did a test converting over the default to v3 (and falling back to v2 
> as needed, yes  
> fallback will still be needed) yesterday (Dolph posted a couple of test 
> patches and they  
> seemed to succeed - yay!!) It looks like it will just work. Now there is a 
> big caveat: this
> default will only change in the keystone middleware project, and it needs to 
> have a patch  
> or three get through gate converting projects over to use it before we accept 
> the code.  
>  
> Nova has approved the patch to switch over, it is just fighting with Gate. 
> Other patches  
> are proposed for other projects and are in various states of approval.
>  
> I assume you mean switch over to keystone middleware project [0], not switch 
> over to keystone  
> v3. Based on [1] my understanding is no changes to nova are needed to use the 
> v2 compatible  
> parts of the v3 API, But are changes needed to support domains or is this not 
> a problem because  
> the auth middleware uses uuids for user_id and project_id, so nova doesn

[openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-16 Thread Morgan Fainberg


On Wednesday, July 16, 2014, Joe Gordon  wrote:



On Tue, Jul 15, 2014 at 7:20 AM, Morgan Fainberg  
wrote:


On Tuesday, July 15, 2014, Steven Hardy  wrote:
On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> On 07/14/2014 11:47 AM, Steven Hardy wrote:
> >Hi all,
> >
> >I'm probably missing something, but can anyone please tell me when devstack
> >will be moving to keystone v3, and in particular when API auth_token will
> >be configured such that auth_version is v3.0 by default?
> >
> >Some months ago, I posted this patch, which switched auth_version to v3.0
> >for Heat:
> >
> >https://review.openstack.org/#/c/80341/
> >
> >That patch was nack'd because there was apparently some version discovery
> >code coming which would handle it, but AFAICS I still have to manually
> >configure auth_version to v3.0 in the heat.conf for our API to work
> >properly with requests from domains other than the default.
> >
> >The same issue is observed if you try to use non-default-domains via
> >python-heatclient using this soon-to-be-merged patch:
> >
> >https://review.openstack.org/#/c/92728/
> >
> >Can anyone enlighten me here, are we making a global devstack move to the
> >non-deprecated v3 keystone API, or do I need to revive this devstack patch?
> >
> >The issue for Heat is we support notifications from "stack domain users",
> >who are created in a heat-specific domain, thus won't work if the
> >auth_token middleware is configured to use the v2 keystone API.
> >
> >Thanks for any information :)
> >
> >Steve
> There are reviews out there in client land now that should work.  I was
> testing discover just now and it seems to be doing the right thing.  If the
> AUTH_URL is chopped of the V2.0 or V3 the client should be able to handle
> everything from there on forward.

Perhaps I should restate my problem, as I think perhaps we still have
crossed wires:

- Certain configurations of Heat *only* work with v3 tokens, because we
  create users in a non-default domain
- Current devstack still configures versioned endpoints, with v2.0 keystone
- Heat breaks in some circumstances on current devstack because of this.
- Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
  the problem.

So, back in March, client changes were promised to fix this problem, and
now, in July, they still have not - do I revive my patch, or are fixes for
this really imminent this time?

Basically I need the auth_token middleware to accept a v3 token for a user
in a non-default domain, e.g validate it *always* with the v3 API not v2.0,
even if the endpoint is still configured versioned to v2.0.

Sorry to labour the point, but it's frustrating to see this still broken
so long after I proposed a fix and it was rejected.


We just did a test converting over the default to v3 (and falling back to v2 as 
needed, yes fallback will still be needed) yesterday (Dolph posted a couple of 
test patches and they seemed to succeed - yay!!) It looks like it will just 
work. Now there is a big caveat: this default will only change in the keystone 
middleware project, and it needs to have a patch or three get through gate 
converting projects over to use it before we accept the code.

Nova has approved the patch to switch over, it is just fighting with Gate. 
Other patches are proposed for other projects and are in various states of 
approval.

I assume you mean switch over to keystone middleware project [0], not switch 
over to keystone v3. Based on [1] my understanding is no changes to nova are 
needed to use the v2 compatible parts of the v3 API, But are changes needed to 
support domains or is this not a problem because the auth middleware uses uuids 
for user_id and project_id, so nova doesn't need to have any concept of 
domains? Are any nova changes needed to support the v3 API?


 
This change simply makes it so the middleware will prefer v3 over v2 if both 
are available for validating UUID tokens and fetching certs. It still falls 
back to v2 as needed. It is transparent to all services (it was blocking on 
Nova and some uniform catalog related issues a while back, but Jamie Lennox 
resolved those, see below for more details).

It does not mean Nova (or anyone else) is magically using features it wasn't 
already using. It just means Heat doesn't need to do a bunch of conditional 
stuff to get the V3 information out of the middleware. This change 
is only used in the case that V2 and V3 are available when auth_token 
middleware looks at the auth_url (limited discovery). It is still possible to 
force V2 by setting the ‘identity_uri' to the V2.0 specific root (no discovery 
performed).

Switching over the default to v3 in the middleware doesn't test nova + v

Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-15 Thread Morgan Fainberg

>  
> > Thanks for the info - any chance you can provide links to the relevant
> > reviews here? If so I'll be happy to pull them and locally test to ensure
> > our issues will be addressed :)
> >
> Sure!
>  
> https://review.openstack.org/#/c/106819/ is the change for the 
> keystonemiddleware  
> package (where the change will actually land), and 
> https://review.openstack.org/#/c/106833/  
> is the change to keystoneclient to show that the change will succeed (this 
> will not merge  
> to keystoneclient, if you want the v3-preferred by default behavior, the 
> project must  
> use keystonemiddleware).
>  
> Cheers,
> Morgan
>  

And just to be clear, the reason for the keystoneclient “test” change is that 
projects have not all converted over to keystonemiddleware yet (this is in 
process). We don’t want projects to be split between keystoneclient and the 
middleware going forward, but we cannot remove the client version for 
compatibility reasons (previous releases of OpenStack, etc.). The version in 
the client is “frozen” and will only receive security updates (based on the 
specification to split the middleware into its own package).

—Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-15 Thread Morgan Fainberg

> Thanks for the info - any chance you can provide links to the relevant
> reviews here? If so I'll be happy to pull them and locally test to ensure
> our issues will be addressed :)
>  
Sure!

https://review.openstack.org/#/c/106819/ is the change for the 
keystonemiddleware package (where the change will actually land), and 
https://review.openstack.org/#/c/106833/ is the change to keystoneclient to 
show that the change will succeed (this will not merge to keystoneclient, if 
you want the v3-preferred by default behavior, the project must use 
keystonemiddleware).

Cheers,
Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][keystone] Devstack, auth_token and keystone v3

2014-07-15 Thread Morgan Fainberg
On Tuesday, July 15, 2014, Steven Hardy  wrote:

> On Mon, Jul 14, 2014 at 02:43:19PM -0400, Adam Young wrote:
> > On 07/14/2014 11:47 AM, Steven Hardy wrote:
> > >Hi all,
> > >
> > >I'm probably missing something, but can anyone please tell me when
> devstack
> > >will be moving to keystone v3, and in particular when API auth_token
> will
> > >be configured such that auth_version is v3.0 by default?
> > >
> > >Some months ago, I posted this patch, which switched auth_version to
> v3.0
> > >for Heat:
> > >
> > >https://review.openstack.org/#/c/80341/
> > >
> > >That patch was nack'd because there was apparently some version
> discovery
> > >code coming which would handle it, but AFAICS I still have to manually
> > >configure auth_version to v3.0 in the heat.conf for our API to work
> > >properly with requests from domains other than the default.
> > >
> > >The same issue is observed if you try to use non-default-domains via
> > >python-heatclient using this soon-to-be-merged patch:
> > >
> > >https://review.openstack.org/#/c/92728/
> > >
> > >Can anyone enlighten me here, are we making a global devstack move to
> the
> > >non-deprecated v3 keystone API, or do I need to revive this devstack
> patch?
> > >
> > >The issue for Heat is we support notifications from "stack domain
> users",
> > >who are created in a heat-specific domain, thus won't work if the
> > >auth_token middleware is configured to use the v2 keystone API.
> > >
> > >Thanks for any information :)
> > >
> > >Steve
> > There are reviews out there in client land now that should work.  I was
> > testing discover just now and it seems to be doing the right thing.  If
> the
> > AUTH_URL is chopped of the V2.0 or V3 the client should be able to handle
> > everything from there on forward.
>
> Perhaps I should restate my problem, as I think perhaps we still have
> crossed wires:
>
> - Certain configurations of Heat *only* work with v3 tokens, because we
>   create users in a non-default domain
> - Current devstack still configures versioned endpoints, with v2.0 keystone
> - Heat breaks in some circumstances on current devstack because of this.
> - Adding auth_version='v3.0' to the auth_token section of heat.conf fixes
>   the problem.
>
> So, back in March, client changes were promised to fix this problem, and
> now, in July, they still have not - do I revive my patch, or are fixes for
> this really imminent this time?
>
> Basically I need the auth_token middleware to accept a v3 token for a user
> in a non-default domain, e.g validate it *always* with the v3 API not v2.0,
> even if the endpoint is still configured versioned to v2.0.
>
> Sorry to labour the point, but it's frustrating to see this still broken
> so long after I proposed a fix and it was rejected.
>
>
We just did a test yesterday converting the default over to v3 (and falling
back to v2 as needed; yes, fallback will still be needed). Dolph posted a
couple of test patches and they seemed to succeed - yay!! It looks like it
will just work. Now there is a big caveat: this default will only change in
the keystone middleware project, and it needs a patch or three to get through
the gate converting projects over to use it before we accept the code.

Nova has approved the patch to switch over; it is just fighting with the gate.
Other patches are proposed for other projects and are in various states of
approval.

So, in short: this is happening, and soon. There are some things that need
to get through gate and then we will do the release of keystonemiddleware
that should address your problem here. At least my reading of the issue and
the fixes that are pending indicates as much. (Please let me know if I am
misreading anything here).
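
For reference, the stop-gap Steve mentions looks roughly like this in the
[keystone_authtoken] section of heat.conf (a minimal sketch; all other
options stay as they are):

    [keystone_authtoken]
    # ... existing identity endpoint / credential options unchanged ...
    # Force auth_token to validate with the v3 API even if the configured
    # endpoint is versioned to v2.0:
    auth_version = v3.0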

Cheers,
Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][devstack] Keystone is now gating (Juno and beyond) on Apache + mod_wsgi deployed Keystone

2014-07-11 Thread Morgan Fainberg
The Keystone team is happy to announce that as of yesterday (July 10th, 2014), 
with the merge of https://review.openstack.org/#/c/100747/, Keystone is now 
gating on Apache + mod_wsgi based deployment. This has also moved the default 
for devstack to deploy Keystone under Apache. This is in line with the 
statement that Apache + mod_wsgi is the recommended deployment for Keystone, as 
opposed to using “keystone-all”.
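
For anyone curious what this looks like in practice, such a deployment uses a
vhost roughly like the following (a minimal sketch; the port, paths, and
process/thread counts are illustrative, and devstack generates its own
equivalent):

    Listen 5000
    <VirtualHost *:5000>
        WSGIDaemonProcess keystone-public processes=2 threads=2 user=keystone
        WSGIProcessGroup keystone-public
        # WSGI script that loads the Keystone "main" (public) application;
        # the path here is illustrative.
        WSGIScriptAlias / /var/www/keystone/main
    </VirtualHost>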

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on resulting HTTP status (proposed REST API Response Status Changes)

2014-07-03 Thread Morgan Fainberg
Here is the list of patches pending to resolve this issue (Keystone Master, 
Keystone Stable/Icehouse, and Tempest)

https://review.openstack.org/#/q/status:open+topic:bug/1334368,n,z 


—
Morgan Fainberg


--
From: Nathan Kinder nkin...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 1, 2014 at 20:02:45
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on 
resulting HTTP status (proposed REST API Response Status Changes)

>  
>  
> On 07/01/2014 07:48 PM, Robert Collins wrote:
> > Wearing my HTTP fanatic hat - I think this is actually an important
> > change to do. Skew like this can cause all sorts of odd behaviours in
> > client libraries.
>  
> +1. The current behavior of inconsistent response codes between the two
> recommended methods of deploying Keystone should definitely be
> considered as a bug IMHO. Consistency in responses is important
> regardless of how Keystone is deployed, and it seems obvious that we
> should modify the responses that are out of spec to achieve consistency.
>  
> -NGK
> >
> > -Rob

[openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on resulting HTTP status (proposed REST API Response Status Changes)

2014-07-01 Thread Morgan Fainberg
In the endeavor to move from the default deployment of Keystone being eventlet 
(in devstack) to Apache + mod_wsgi, I noticed that there was an odd mismatch 
on a single set of tempest tests relating to trusts. Under eventlet an HTTP 204 
No Content was being returned, but under mod_wsgi an HTTP 200 OK was being 
returned. After some investigation it turned out that in some cases mod_wsgi 
will rewrite HEAD requests to GET requests under the hood; this is to ensure 
that the response from Apache is properly built when dealing with filtering. A 
number of wsgi applications just return nothing on a HEAD request, which is 
incorrect, so mod_wsgi tries to compensate.

The HTTP spec states: "The HEAD method is identical to GET except that the 
server must not return any Entity-Body in the response. The metainformation 
contained in the HTTP headers in response to a HEAD request should be identical 
to the information sent in response to a GET request. This method can be used 
for obtaining metainformation about the resource identified by the Request-URI 
without transferring the Entity-Body itself. This method is often used for 
testing hypertext links for validity, accessibility, and recent modification."

Keystone has 3 Routes where HEAD will result in a 204 and GET will result in a 
200.

* /v3/auth/tokens
* /v2.0/tokens/{token_id}
* /OS-TRUST/trusts/{trust_id}/roles/{role_id} <--- This is the only one tested 
by Tempest.

The easiest solution is to correct the case where we are out of line with the 
HTTP spec and ensure these cases return the same status code for GET and HEAD 
methods. This, however, changes the response status of a public REST API. Before 
we do this, I wanted to ensure the community, developers, and TC did not have 
an issue with this correction. We are not changing the class of status (e.g. 
2xx to 4xx or vice-versa). This would simply be returning the same response 
between GET and HEAD requests. The fix for this would be to modify the 3 
tempest tests in question to expect HTTP 200 instead of 204.

There are a couple of other cases where Keystone registers a HEAD route but no 
GET route (these would be corrected at the same time, to ensure compatibility). 
The final correction is to enforce that Keystone would not send any data on 
HEAD requests (it is possible to do so, but we have not had it happen).
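
To make the intended end state concrete, here is a minimal WSGI sketch (not 
Keystone's actual routing code) of the behavior described above: HEAD runs 
through the same path as GET, returns the identical status and headers, and 
simply omits the entity body.

    def application(environ, start_response):
        body = b'{"role": {"id": "example"}}'
        start_response('200 OK', [('Content-Type', 'application/json'),
                                  ('Content-Length', str(len(body)))])
        if environ['REQUEST_METHOD'] == 'HEAD':
            # Identical status and headers to GET; no entity body.
            return []
        return [body]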

You can see a proof-of-concept review that shows the tempest failures here: 
https://review.openstack.org/#/c/104026

If this change (even though it is in violation of 
https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Not_Acceptable) is 
acceptable, it will unblock the last of a very few things to have Keystone 
default deploy via devstack under Apache (and gate upon it). Please let me know 
if anyone has significant issues with this change / concerns as I would like to 
finish up this road to mod_wsgi based Keystone as early in the Juno cycle as 
possible.

Cheers,
Morgan Fainberg


—
Morgan Fainberg



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Announcing Keystone Middleware Project

2014-06-24 Thread Morgan Fainberg
I expect that we will be releasing 1.0.0 shortly (or at the very least an 
alpha so we can move forward) to make sure we have time to get the new package 
in use during Juno. As soon as we have something released (should be very 
soon), I’ll make sure we give a heads up to all the packagers.

Cheers,
Morgan
—
Morgan Fainberg


From: Tom Fifield t...@openstack.org
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: June 24, 2014 at 20:23:42
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Keystone] Announcing Keystone Middleware Project 
 

On 25/06/14 07:24, Morgan Fainberg wrote:  
> The Keystone team would like to announce the official split of  
> python-keystoneclient and the Keystone middleware code.  
> Over time the middleware (auth_token, s3_token, ec2_token) has developed  
> into a fairly expansive code base and  
> includes dependencies that are not necessarily appropriate for the  
> python-keystoneclient library and CLI tools. Combined  
> with the desire to be able to release updates of the middleware code  
> without requiring an update of the CLI and  
> python-keystoneclient library itself, we have opted to split the  
> packaging of the middleware.  

Seems sane :) If you haven't already, please consider giving a heads up  
to the debian/redhat/suse/ubuntu packagers so they're prepped as early  
as possible.  


Regards,  


Tom  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Announcing Keystone Middleware Project

2014-06-24 Thread Morgan Fainberg
The Keystone team would like to announce the official split of 
python-keystoneclient and the Keystone middleware code.
Over time the middleware (auth_token, s3_token, ec2_token) has developed into a 
fairly expansive code base and
includes dependencies that are not necessarily appropriate for the 
python-keystoneclient library and CLI tools. Combined
with the desire to be able to release updates of the middleware code without 
requiring an update of the CLI and
 python-keystoneclient library itself, we have opted to split the packaging of 
the middleware.

Launchpad Project (bug/bp tracker): https://launchpad.net/keystonemiddleware
Repository: git://git.openstack.org/openstack/keystonemiddleware
Repository (Browsable): 
https://git.openstack.org/cgit/openstack/keystonemiddleware
PyPI location: https://pypi.python.org/pypi/keystonemiddleware
Open Reviews in Gerrit: 
https://review.openstack.org/#/q/status:open+project:openstack/keystonemiddleware,n,z

Detailed information on the approved specification for this split: 
https://review.openstack.org/#/c/95987/

Middleware code that has been included in the new repository:
    * auth_token middleware
    * ec2_token middleware
    * s3_token middleware
    * memcache_crypt (utility code)

Impact for deployers:
    * New keystonemiddleware package will need to be installed (once released)
    * Paste pipelines will need to be updated to reference the 
keystonemiddleware package instead of keystoneclient (see the example below)
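
For deployers, that paste change is a one-line edit to the filter factory
reference (module paths shown as they existed at the time of this split;
verify against your release):

    [filter:authtoken]
    # old location (python-keystoneclient):
    # paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
    # new location (keystonemiddleware):
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory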

Impact for Projects and Infra:
    * Keystonemiddleware is in process of being added to devstack and 
devstack-gate
    * Global requirements update (once the 1.0.0 release occurs) will be 
updated to include keystonemiddleware
    * Updates to the example paste pipelines to reference the new 
keystonemiddleware package will be proposed

Impact for packagers (once released):
    * Keystonemiddleware will need to be packaged and made available via your 
distribution's repositories (apt, yum, etc)

For the time being, we will be maintaining the current state of the middleware 
in the python-keystoneclient library. This
will allow for a transition period and ensure that production deployments 
relying on the current location of the
middleware will continue to work. However, the code located in the 
keystoneclient.middleware module will 
only receive security related fixes going forward. All new code development 
should be proposed to the new
keystonemiddleware repository. 

We are targeting a 1.0.0 (stable) release of the new keystonemiddleware in the 
near term. The Keystone team will work with
the OpenStack projects that consume Keystone middlewares to convert over to the 
new keystonemiddleware package.

Feel free to join us in #openstack-keystone (on Freenode) to discuss the 
middleware, these changes, or any other
OpenStack Identity related topics.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] horizon failing on icehouse 100%, currently blocking all patches

2014-06-21 Thread Morgan Fainberg
I don't think we can revert the change without David's fix. The blocked-up
gate isn't because of PKIZ in this case (it isn't tested from the horizon
side); the gate is blocked because of the broken django_openstack_auth
module.

Could we just tag a new release based upon the commit of the old release?
Then once everything is fixed we tag the real fixed release of
django_openstack_auth. It is a little crummy, but would solve it.

Alternatively, we temporarily pin the version of django_openstack_auth to
the lower version.
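
That pin would be a one-line requirements change along these lines (version
numbers illustrative, assuming 1.1.5 was the last known-good release):

    django_openstack_auth<1.1.6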

--Morgan

On Saturday, June 21, 2014, Zhenguo Niu  wrote:

> I'm afraid we have to revert the PKIZ change since devstack is not support
> django_openstack_auth now and all patches blocked including David's fix.
>
> Sent from my iPhone
>
> > On Jun 22, 2014, at 5:39, Morgan Fainberg wrote:
> >
> > Great. This looks like your fix will not require reverting the PKIZ
> change.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] horizon failing on icehouse 100%, currently blocking all patches

2014-06-21 Thread Morgan Fainberg
Great. This looks like your fix will not require reverting the PKIZ change.

Thanks!
--Morgan

On Saturday, June 21, 2014, Lyle, David  wrote:

> I released django_openstack_auth 1.1.6 on Friday to fix the login issue
> with PKIZ.  Part of that release contained a pep8 cleanup that broke
> Horizon, ultimately because we were doing something stupid in Horizon.  We
> added a fix to Horizon to correct the issue on trunk
> https://github.com/openstack/horizon/commit/0bd4350cb308d57b6afc69daee4a7823055be5a9.
> However, to allow older versions of Horizon to work with newer
> django_openstack_auth versions, I currently have a patch up to restore the
> strange import in openstack_auth  https://review.openstack.org/#/c/101715/
> once that merges I will release django_openstack_auth 1.1.7 and all gates
> should work again.
>
> David

Re: [openstack-dev] horizon failing on icehouse 100%, currently blocking all patches

2014-06-21 Thread Morgan Fainberg
The issue with the login page simply refreshing was due to a change in
Keystone that updated the type of Token issued by default from PKI to PKIZ
(compressed PKI/ASN1). The update to the django auth module was intended to
correct that specific issue with Keystone and Horizon (Juno).

The bug fix that addressed your specific issue (though I'm not sure if another
django_openstack_auth review that is currently in flight is to blame) is:
https://bugs.launchpad.net/horizon/+bug/1331406

I have proposed a fix for Keystone that would revert the PKIZ default:
https://review.openstack.org/#/c/101714/

Depending on the fixes upcoming for the django_openstack_auth module, it
may make sense to temporarily revert the PKIZ provider default until we can
solve the issues with the django auth module and horizon when PKIZ is
enabled. If this review is not needed based on how the horizon issues are
corrected, it will be abandoned.

I think this is also showing some gaps in our testing, notably that the
django_openstack_auth module isn't being exercised in the integration
tests. I'll aim to hit up the Horizon team and work with them and the QA folks
to make sure we cover this gap in the future.

--Morgan

On Saturday, June 21, 2014, Mike Spreitzer  wrote:

>  Since Friday I have been getting this misbehavior: enter username and
> password, hit login, and it shows you the login page again.
>
>
> Sean Dague --- [openstack-dev] horizon failing on icehouse 100%, currently
> blocking all patches ---
>
>
>  From:"Sean Dague"  To"openstack-dev" <
> openstack-dev@lists.openstack.org> Date:Sat, Jun 21, 2014 12:54 
> Subject[openstack-dev]
> horizon failing on icehouse 100%, currently blocking all patches
> --
>
> Horizon in icehouse is now 100% failing
>
>  [Sat Jun 21 16:17:35 2014] [error] Internal Server Error: /
> [Sat Jun 21 16:17:35 2014] [error] Traceback (most recent call last):
> [Sat Jun 21 16:17:35 2014] [error]   File
> "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py",
> line 112, in get_response
> [Sat Jun 21 16:17:35 2014] [error] response =
> wrapped_callback(request, *callback_args, **callback_kwargs)
> [Sat Jun 21 16:17:35 2014] [error]   File
> "/usr/local/lib/python2.7/dist-packages/django/views/decorators/vary.py",
> line
> 36, in inner_func
> [Sat Jun 21 16:17:35 2014] [error] response = func(*args, **kwargs)
> [Sat Jun 21 16:17:35 2014] [error]   File
>
> "/opt/stack/old/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/views.py",
> line 35, in splash
> [Sat Jun 21 16:17:35 2014] [error] form = views.Login(request)
> [Sat Jun 21 16:17:35 2014] [error] AttributeError: 'module' object has
> no attribute 'Login'
>
> This suspiciously times with django_openstack_auth 1.1.6 being released.
>
> http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkF0dHJpYnV0ZUVycm9yOiAnbW9kdWxlJyBvYmplY3QgaGFzIG5vIGF0dHJpYnV0ZSAnTG9naW4nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDMzNjk0MjQ4NjR9
>
> Because this breaks smoke tests on icehouse, it means that any project
> with upgrade testing fails.
>
> Would be great if horizon folks could make this a top priority. Also, in
> future, releasing new library versions on a Saturday is maybe best avoided. :)
>
>  -Sean
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] rules for removal

2014-06-20 Thread Morgan Fainberg
+1 across the board for this change.

H803 is ignored by a large number of projects after a rather extensive 
conversation on the ML last year (as I recall). The other two changes seem 
quite reasonable.


Cheers,
Morgan
—
Morgan Fainberg


From: Sean Dague s...@dague.net
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: June 20, 2014 at 11:09:41
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [hacking] rules for removal  

After seeing a bunch of code changes to enforce new hacking rules, I'd  
like to propose dropping some of the rules we have. The overall patch  
series is here -  
https://review.openstack.org/#/q/status:open+project:openstack-dev/hacking+branch:master+topic:be_less_silly,n,z
  

H402 - 1 line doc strings should end in punctuation. The real statement  
is this should be a summary sentence. A sentence is not just a set of  
words that end in a period. Squirel fast bob. It's something deeper.  
This rule thus isn't really semantically useful, especially when you are  
talking about at 69 character maximum (79 - 4 space indent - 6 quote  
characters).  

H803 - First line of a commit message must *not* end in a period. This  
was mostly a response to an unreasonable core reviewer that was -1ing  
people for not having periods. I think any core reviewer that -1s for  
this either way should be thrown off the island, or at least made fun  
of, a lot. Again, the clarity of a commit message is not made or lost by  
the lack or existence of a period at the end of the first line.  

H305 - Enforcement of libraries fitting correctly into stdlib, 3rdparty,  
our tree. This biggest issue here is it's built in a world where there  
was only 1 viable python version, 2.7. Python's stdlib is actually  
pretty dynamic and grows over time. As we embrace more python 3, and as  
distros start to make python3 be front and center, what does this even  
mean? The current enforcement can't pass on both python2 and python3 at  
the same time in many cases because of that.  

We have to remember we're all humans, and it's ok to have grey space.  
Like in 305, you *should* group the libraries if you can, but stuff like  
that should be labeled as 'nit' in the review, and only ask the author  
to respin it if there are other more serious issues to be handled.  

Let's optimize a little more for fun, and stop throwing -1s for silly  
things. :)  

-Sean  

--  
Sean Dague  
http://dague.net  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] revert hacking to 0.8 series

2014-06-16 Thread Morgan Fainberg
This sounds totally reasonable. +1 to keeping style-specific changes consistent 
across a release.
—
Morgan Fainberg


From: Clint Byrum cl...@fewbar.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: June 16, 2014 at 10:51:31
To: openstack-dev openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] revert hacking to 0.8 series  

Excerpts from Sean Dague's message of 2014-06-16 05:15:54 -0700:  
> Hacking 0.9 series was released pretty late for Juno. The entire check  
> queue was flooded this morning with requirements proposals failing pep8  
> because of it (so at 6am EST we were waiting 1.5 hrs for a check node).  
>  
> The previous soft policy with pep8 updates was that we set a pep8  
> version basically release week, and changes stopped being done for style  
> after first milestone.  
>  
> I think in the spirit of that we should revert the hacking requirements  
> update back to the 0.8 series for Juno. We're past milestone 1, so  
> shouldn't be working on style only fixes at this point.  
>  
> Proposed review here - https://review.openstack.org/#/c/100231/  
>  
> I also think in future hacking major releases need to happen within one  
> week of release, or not at all for that series.  
>  

+1. Hacking is supposed to help us avoid redundant nit-picking in  
reviews. If it places any large burden on developers, whether by merge  
conflicting or backing up CI, it is a failure IMO.  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-12 Thread Morgan Fainberg
Hi Thomas,

I felt a couple sentences here were reasonable to add (more than “don’t care” 
from before). 

I understand your concerns here, and I totally get what you’re driving at, but 
in the packaging world wouldn’t it make sense to call this "python-bash8"? Now, 
the binary, I can agree (for reasons outlined), should probably not be named 
‘bash8’, but the name of the “command” could be separate from the packaging / 
project name.

Beyond a relatively minor change to the resulting “binary” name [sure, 
bash-tidy, or whatever we come up with], is there something more that is really 
awful (rather than just silly) about the naming? I just don’t see, if we don’t 
collide on the executable name, how there can be any real confusion 
(python-bash8; sure, PyPI is a little different) over what is being installed.

Cheers,
Morgan
—
Morgan Fainberg


From: Thomas Goirand z...@debian.org
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: June 12, 2014 at 15:19:00
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 748...@bugs.debian.org 748...@bugs.debian.org
Subject:  Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a 
project name (Bug#748383: ITP: bash8 -- bash script style guide checker)  

On 06/12/2014 02:01 AM, Sean Dague wrote:  
> I'd hate to think what these guys think of firefox, grub, thunderbird,  
> pidgin, zope, git, mercurial, etc, etc.  

I don't see what point you're trying to make here. Firefox & Thunderbird  
were renamed because of trademark issues with the Mozilla foundation,  
and Debian (and the maintainers of these packages) still don't like  
renaming these packages. For Git, for a while, the package name was  
already taken by something else. As for Pidgin, Zope & Mercurial, I'm  
not aware of any issues we had...  

All this being said, I don't care so much myself. Yet this is a nuisance  
for me because I may have to deal with such bad naming. Either I ignore  
recommendations from other DDs, and it makes me feel bad in front of all  
the Debian project, or I get to deal with all sorts of renaming issues.  
So it'd be nice if upstream cared...  

Also, it's true that bash8 is a very poor name, which can be very confusing  
for our users. Something like bash-tidy or bashtidy would have been much  
better (even with an 8 at the end...).  

Thomas Goirand (zigo)  


___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-12 Thread Morgan Fainberg


On Thu, Jun 12, 2014 at 1:59 PM, Sean Dague  wrote:
The only thing it makes harder is you have to generate your own token to
run the curl command. The rest is there.

Well, I would have imagined that the curl command debug output is there so 
people can easily copy, paste, and/or tweak the commands, but sure, it would 
just make it a bit harder.
 
Because everyone is running our
servers at debug levels, it means the clients are going to be running
debug level as well (yay python logging!), so this is something I don't
think people realized was a huge issue.

so maybe the issue is that those curl commands show up in the server log when 
they should only be output when running swift/nova/etc client --debug, right?
This would be much better. However, a good amount of the data provided in these 
curl commands is useful for debugging the services as well. Long term, I think 
the data provided in these debug lines should be emitted in a log-friendly 
format instead of “this would be the curl command to run to do this” when not 
used as a CLI. This type of change could be made via the move to 
OpenStackClient. However, in the short term, we should be masking this data 
that is being generated from our Session objects.

Most of the clients are using (or will be using in the nearish term) the 
keystoneclient Session object. I expect to have a fix to address this in the 
Session object (and after some back-and-forth, we’ve resolved the SHA1 issues, 
so we are back to the original “{SHA1}” concept). As we progress and can offer 
more distinction between CLI and non-CLI client use (when logging), moving 
towards this delineation, where the ‘curl’ output would only be emitted when 
running in CLI mode, is a great option.
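
For illustration, a masked request line in the debug output would look
something like this (a made-up example, not actual client output; the hash
value is invented):

    REQ: curl -i 'http://192.0.2.10:8774/v2/servers' -X GET \
         -H "Accept: application/json" -H "X-Auth-Token: {SHA1}4e5f1c9a..."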

—Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-11 Thread Morgan Fainberg
I’ve been looking over the code for this and it turns out plain old SHA1 is a 
bad idea.  We recently had a patch land in keystoneclient and keystone to let 
us configure the hashing algorithm used for the token revocation list and the 
short token IDs. 

I’ve updated my patch set to use ‘{OBSCURED}%(token)s’ instead of naming a 
specific obscuring algorithm. This means that if we ever update the way we 
obscure the data in the future, we’re not lying about what was done in the log. 
The proposed approach can be found here: https://review.openstack.org/#/c/99432
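
In spirit, the change amounts to something like the following (a simplified
sketch, not the patch itself; the helper name is hypothetical):

    import hashlib

    def obscure_token(token):
        # Hash with whatever algorithm we currently use; the generic
        # {OBSCURED} label never claims a specific algorithm, so the
        # algorithm can change later without falsifying old logs.
        digest = hashlib.sha1(token.encode('utf-8')).hexdigest()
        return '{OBSCURED}%s' % digest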

Cheers,
Morgan
—
Morgan Fainberg


From: Jay Pipes jaypi...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: June 11, 2014 at 12:49:35
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] masking X-Auth-Token in debug output - proposed 
consistency  

On 06/11/2014 03:01 PM, Sean Dague wrote:  
> We've had a few reviews recently going around to mask out X-Auth-Token  
> from the python clients in the debug output. Currently there are a mix  
> of ways this is done.  
>  
> In glanceclient (straight stricken)  
>  
> X-Auth-Token: ***  
>  
> The neutronclient proposal -  
> https://review.openstack.org/#/c/93866/9/neutronclient/client.py is to  
> use 'REDACTED'  
>  
> There is a novaclient patch in the gate that uses SHA1() -  
> https://review.openstack.org/#/c/98443/  
>  
> Morgan was working on keystone.session patch -  
> https://review.openstack.org/#/c/98443/  
>  
> after some back and forth we landed on {SHA1} because  
> that's actually LDAP standard for such things, and SHA1(...) looks too  
> much like a function. I think that should probably be our final solution  
> here.  
>  
> Why SHA1?  
>  
> While we want to get rid of the token from the logs, for both security  
> and size reasons (5 - 10% of the logs in a gate run by bytes are  
> actually keystone tokens), it's actually sometimes important to  
> understand that *the same* token was used between 2 requests, or that 2  
> different tokens were used. This is especially try with expiration times  
> defaulting to 1 hr, and the fact that sometimes we have tests take  
> longer than that (so we need to debug that we didn't rotate tokens when  
> we should have).  
>  
> Because the keystone token is long (going north of 4k), and variable  
> data length, and with different site data, these values should not be  
> susceptible to a generic rainbow attack, so a single SHA1 seems  
> sufficient. If there are objections to that, we can field something else  
> there. It also has the advantage of being "batteries included" with all  
> our supported versions of python.  
>  
> I'm hoping we can just ACK this approach, and get folks to start moving  
> patches through the clients to clean this all up.  
>  
> If you have concerns, please bring them up now.  

Sounds like an excellent plan, thx for the update, Sean.  

Best,  
-jay  


___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-06-11 Thread Morgan Fainberg
This goes a bit further than just reducing noise in the logs. Think of 
this from a security standpoint (with centralized logging, or lower-privileged 
users able to read log files). If we aren’t putting passwords into these log 
files, we shouldn’t be putting tokens in. The major functional difference 
between a token and a password is that the token has a fixed life span. Barring 
running past the TTL of the token, the token grants all rights and privileges 
that user has (with some exceptions, such as trusts), even allowing a rescope of 
the token to another project/tenant. In this light, a token in a log file is 
only marginally less of an exposure than a password.

I firmly believe that we should avoid putting information that conveys 
authorization (e.g. username/password or bearer token id) in the logs.
—
Morgan Fainberg


From: Sean Dague s...@dague.net
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: June 11, 2014 at 12:02:20
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] masking X-Auth-Token in debug output - proposed 
consistency  

We've had a few reviews recently going around to mask out X-Auth-Token  
from the python clients in the debug output. Currently there are a mix  
of ways this is done.  

In glanceclient (straight stricken)  

X-Auth-Token: ***  

The neutronclient proposal -  
https://review.openstack.org/#/c/93866/9/neutronclient/client.py is to  
use 'REDACTED'  

There is a novaclient patch in the gate that uses SHA1() -  
https://review.openstack.org/#/c/98443/  

Morgan was working on keystone.session patch -  
https://review.openstack.org/#/c/98443/  

after some back and forth we landed on {SHA1} because  
that's actually LDAP standard for such things, and SHA1(...) looks too  
much like a function. I think that should probably be our final solution  
here.  

Why SHA1?  

While we want to get rid of the token from the logs, for both security  
and size reasons (5 - 10% of the logs in a gate run by bytes are  
actually keystone tokens), it's actually sometimes important to  
understand that *the same* token was used between 2 requests, or that 2  
different tokens were used. This is especially try with expiration times  
defaulting to 1 hr, and the fact that sometimes we have tests take  
longer than that (so we need to debug that we didn't rotate tokens when  
we should have).  

Because the keystone token is long (going north of 4k), and variable  
data length, and with different site data, these values should not be  
susceptible to a generic rainbow attack, so a single SHA1 seems  
sufficient. If there are objections to that, we can field something else  
there. It also has the advantage of being "batteries included" with all  
our supported versions of python.  

I'm hoping we can just ACK this approach, and get folks to start moving  
patches through the clients to clean this all up.  

If you have concerns, please bring them up now.  

-Sean  

--  
Sean Dague  
http://dague.net  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-11 Thread Morgan Fainberg


On 06/11/2014 02:01 PM, Sean Dague wrote: 
> Honestly, I kind of don't care. :) 

+1 :-) 

+1, yep. That about covers it.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Disabling Pushes of new Gerrit Draft Patchsets

2014-05-31 Thread Morgan Fainberg

> 2. Since a patch is very much WIP, there is concern about consuming CI 
> resources with needless testing. 
> 3. The code is “example”, “toy”, or “exploratory” (not planning to 
> submit to the project, but not private/proprietary) 

I think there is something in point #2. If we could make WIP sticky or 
initially settable, I'd be happy if WIP cleared the CI bits and didn't 
trigger running of CI. I think if it's not ready for human review, it's 
probably not ready for robot review either. 

-Sean 
+1 to not automatically running CI against a WIP patch, though I would still 
like to be able to issue a “recheck”-like comment and get CI to run against a 
WIP patch.

The only concern with making WIP “sticky” is that the current “workflow” 
method of WIP would not be sufficient, as only the person marking the review as 
“WIP” could clear it (same as the -2 on Code Review). This would be an argument 
to go to the WIP plugin that works (ish) like our own WIP system.

—Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Disabling Pushes of new Gerrit Draft Patchsets

2014-05-31 Thread Morgan Fainberg
I’ve had this question asked numerous times (by previous coworkers, people 
interested in contributing to OpenStack, etc). The general feeling has always 
been that the individual is concerned about 3 things when considering drafts in 
gerrit.

1. Patch is very much WIP and doesn’t need to be reviewed yet or is meant to be 
a collaboration between a couple individuals first
2. Since a patch is very much WIP, there is concern about consuming CI 
resources with needless testing.
3. The code is “example”, “toy”, or “exploratory” (not planning to submit to 
the project, but not private/proprietary)

The general advice I give to people is to post the patches (especially at 
checkpoints, e.g. taking a break for the night) and ensure that they are 
running what tests they can locally. I also explain the WIP process for a 
patch. Usually the combination is good enough to convince them that a “Draft” 
isn’t really needed. If there is still concern about posting the patch to 
gerrit prematurely (option 3 above), I recommend using another system to 
collaborate on the initial patch such as what I use my GitHub account for 
(out-of-tree development / examples / playing with code that won’t ever be 
submitted to the main repositories).

I, for one, am very pleased that Drafts are now disabled. I never liked the 
feature (it felt like it was missing a chunk of functionality to be really 
useful).

Cheers,
Morgan
—
Morgan Fainberg


From: Clark Boylan clark.boy...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 31, 2014 at 12:33:33
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [All] Disabling Pushes of new Gerrit Draft 
Patchsets  

There isn't an option to push non public patches (and there really  
wasn't before either, drafts are not properly private and this false  
expectation is one of the reasons we have disabled them). Currently  
the recommended alternative is work in progress. The code cannot merge  
with a work in progress vote but it can otherwise be reviewed and  
tested. I think we would like to have changes tested even if they are  
marked as work in progress because it is useful feedback.  

Is there a specific use case where not running tests is desirable?  

Clark  

> On Sat, May 31, 2014 at 6:45 AM, Eugene Nikanorov wrote:  
> Hi,  
>  
> I might be posting a question to a wrong thread, but what would be the  
> option to push a patch that I would like to share only with certain group of  
> people. In other words, is there still an option to push non-public patches?  
> I wouldn't like such patches to affect gerrit stream or trigger CIs, but  
> gerrit could still be used for regular reviewing process.  
>  
> Thanks,  
> Eugene.  
>  
>  
> On Sat, May 31, 2014 at 12:51 AM, Sergey Lukjanov   
> wrote:  
>>  
>> Yay!  
>>  
>> No more weird CR chains.  
>>  
>> On Fri, May 30, 2014 at 9:32 PM, Clark Boylan   
>> wrote:  
>> > On Wed, May 21, 2014 at 4:24 PM, Clark Boylan   
>> > wrote:  
>> >> Hello everyone,  
>> >>  
>> >> Gerrit has long supported "Draft" patchsets, and the infra team has  
>> >> long  
>> >> recommended against using them as they are a source of bugs and  
>> >> confusion (see below for specific details if you are curious). The  
>> >> newer  
>> >> version of Gerrit that we recently upgraded to allows us to prevent  
>> >> people from pushing new Draft patchsets. We will take advantage of this  
>> >> and disable pushes of new Drafts on Friday May 30, 2014.  
>> >>  
>> >> The impact of this change should be small. You can use the Work in  
>> >> Progress state instead of Drafts for new patchsets. Any existing  
>> >> Draft patchsets will remain in a Draft state until it is published.  
>> >>  
>> >> Now for the fun details on why drafts are broken.  
>> >>  
>> >> * Drafts appear to be "secure" but they offer no security. This is bad  
>> >> for user expectations and may expose data that shouldn't be exposed.  
>> >> * Draft patchsets pushed after published patchsets confuse reviewers as  
>> >> they cannot vote with a value because the latest patchset is hidden.  
>> >> * Draft patchsets confuse the Gerrit event stream output making it  
>> >> difficult for automated tooling to do the correct thing with Drafts.  
>> >> * Child changes of Drafts will fail to merge without explanation.  
>> >>  
>> >> Let us know if you have any questions,  
>> >>  

Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates failing because of keystoneclient-0.9.0

2014-05-30 Thread Morgan Fainberg
+1 to a fixture for the middleware if unit testing in this manner is common 
practice. The main issue here was mocking out the cache and using a 
hand-crafted “valid” token.

We have a mechanism provided in the keystone client library that allows for 
creating a valid token (all the required fields, etc): 
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/fixture/v3.py#L48
 for example.
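
Usage is roughly as follows (a sketch against the fixture API linked above;
exact constructor arguments may differ by release):

    from keystoneclient import fixture

    # Build a structurally valid v3 token (expiration time and other
    # required fields included) instead of hand-crafting a dict.
    token = fixture.V3Token(user_id='user-id', user_name='test-user')
    token.set_project_scope(id='project-id', name='test-project')
    token.add_role(name='member')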
—
Morgan Fainberg


From: Doug Hellmann doug.hellm...@dreamhost.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 30, 2014 at 14:08:16
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Keystone] [Blazar] [Ironic] Py26/27 gates 
failing because of keystoneclient-0.9.0  

Would it make sense to provide a test fixture in the middleware
library for projects who want or need to test with token management?

Doug

On Fri, May 30, 2014 at 12:49 PM, Brant Knudson  wrote:
>
> The auth_token middleware changed recently[1] to check if tokens retrieved
> from the cache are expired based on the expiration time in the token. The
> unit tests for Blazar, Ceilometer, and Ironic are all using a copy-pasted
> fake memcache implementation that's supposed to simulate what auth_token
> stores in the cache, but the tokens that it had stored weren't valid. Tokens
> have an expiration time in them and these ones didn't. I don't think that
> it's safe for test code to make assumptions about how the auth_token
> middleware is going to store data in its cache. The format of the cached
> data isn't part of the public interface. It's changed before, when
> expiration times changed from *nix timestamps to iso 8601 formatted dates.
>
> After looking at this, I proposed a couple of changes to the auth_token
> middleware. One is to have auth_token use the expiration time it has cached
> and fail the auth request if the token is expired according to the cache. It
> doesn't have to check the token's expiration time because it was stored as
> part of the cache data. The other is to make cached token handling more
> efficient by not checking the token expiration time if the token was cached.
>
> [1]
> http://git.openstack.org/cgit/openstack/python-keystoneclient/commit/keystoneclient/middleware/auth_token.py?id=8574256f9342faeba2ce64080ab5190023524e0a
> [2] https://review.openstack.org/#/c/96786/
>
> - Brant
>
>
>
> On Fri, May 30, 2014 at 7:11 AM, Sylvain Bauza  wrote:
>>
>> Le 30/05/2014 14:07, Dina Belova a écrit :
>>
>> I did not look close to this concrete issue, but in the ceilometer there
>> is almost the same thing: https://bugs.launchpad.net/ceilometer/+bug/1324885
>> and fixes were already provided.
>>
>> Will this help Blazar?
>>
>>
>> Got the Ironic patch as well :
>>
>> https://review.openstack.org/#/c/96576/1/ironic/tests/api/utils.py
>>
>> Will provide a patch against Blazar.
>>
>> Btw, I'll close the bug.
>>
>>
>> -- Dina
>>
>>
>> On Fri, May 30, 2014 at 4:00 PM, Sylvain Bauza  wrote:
>>>
>>> Hi Keystone developers,
>>>
>>> I just opened a bug [1] because Ironic and Blazar (ex. Climate) patches
>>> are failing due to a new release in Keystone client which seems to
>>> regress on midleware auth.
>>>
>>> Do you have any ideas on if it's quick to fix, or shall I provide a
>>> patch to openstack/global-requirements.txt to only accept keystoneclient
>>> < 0.9.0 ?
>>>
>>> Thanks,
>>> -Sylvain
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>>
>> Best regards,
>>
>> Dina Belova
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Redesign of Keystone Federation

2014-05-29 Thread Morgan Fainberg
I agree that there is room for improvement on the Federation design within 
Keystone. I would like to reiterate what Adam said: we are already seeing 
efforts to fully integrate further protocol support (OpenID Connect, etc.) 
within the current system. Let's be sure that whatever redesign work is 
proposed and accepted takes into account the current stakeholders (those 
really using Federation) and ensures full backwards compatibility.

I firmly believe we can work within the Apache module framework for Juno. 
Moving beyond Juno we may need to start implementing the more native modules 
(proposal #2). Let's be sure whatever redesign work we perform this cycle 
doesn’t lock us exclusively into one path or another. It shouldn’t be too hard 
to continue making incremental improvements (agile methodology) and keeping the 
stakeholders engaged.

David and Kristy, the slides and summit session are a great starting place for 
this work. Now we need to get the proposal drafted up in the new Keystone-Specs 
repository ( https://git.openstack.org/cgit/openstack/keystone-specs ) so we 
can keep this conversation on track. Having the specification clearly outlined 
and targeted will help us address any concerns with the proposal/redesign as we 
move into implementation.

Cheers,
Morgan
—
Morgan Fainberg


From: Adam Young ayo...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 28, 2014 at 09:24:26
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [keystone] Redesign of Keystone Federation  

On 05/28/2014 11:59 AM, David Chadwick wrote:  
> Hi Everyone  
>  
> at the Atlanta meeting the following slides were presented during the  
> federation session  
>  
> http://www.slideshare.net/davidwchadwick/keystone-apach-authn  
>  
> It was acknowledged that the current design is sub-optimal, but was a  
> best first efforts to get something working in time for the IceHouse  
> release, which it did successfully.  
>  
> Now is the time to redesign federated access in Keystone in order to  
> allow for:  
> i) the inclusion of more federation protocols such as OpenID and OpenID  
> Connect via Apache plugins  

These are underway: Steve Mar just posted a review for OpenID Connect.  
> ii) federating together multiple Keystone installations  
I think Keystone should be dealt with separately. Keystone is not a good  
stand-alone authentication mechanism.  

> iii) the inclusion of federation protocols directly into Keystone where  
> good Apache plugins don't yet exist, e.g. IETF ABFAB  
I thought this was mostly pulling together other protocols such as RADIUS?  
http://freeradius.org/mod_auth_radius/  

>  
> The Proposed Design (1) in the slide show is the simplest change to  
> make, in which the Authn module has different plugins for different  
> federation protocols, whether via Apache or not.  

I'd like to avoid doing non-HTTPD modules for as long as possible.  

>  
> The Proposed Design (2) is cleaner since the plugins are directly into  
> Keystone and not via the Authn module, but it requires more  
> re-engineering work, and it was questioned in Atlanta whether that  
> effort exists or not.  

The "method" parameter is all that is going to vary for most of the Auth  
mechanisms. X509 and Kerberos both require special set up of the HTTP  
connection to work, which means client and server sides need to be in  
sync: SAML, OpenID and the rest have no such requirements.  

>  
> Kent therefore proposes that we go with Proposed Design (1). Kent will  
> provide drafts of the revised APIs and the re-engineered code for  
> inspection and approval by the group, if the group agrees to go with  
> this revised design.  
>  
> If you have any questions about the proposed re-design, please don't  
> hesitate to ask  
>  
> regards  
>  
> David and Kristy  
>  
> ___  
> OpenStack-dev mailing list  
> OpenStack-dev@lists.openstack.org  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  


___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Oslo logging eats system level tracebacks by default

2014-05-28 Thread Morgan Fainberg
+1. Providing crash information for services is very valuable. In general we 
need to provide our operators with as much information as possible about why a 
service exited (critically/traceback/unexpectedly).

—Morgan
—
Morgan Fainberg


From: Jay Pipes jaypi...@gmail.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: May 28, 2014 at 08:50:25
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] Oslo logging eats system level tracebacks by 
default  

On 05/28/2014 11:39 AM, Doug Hellmann wrote:  
> On Wed, May 28, 2014 at 10:38 AM, Sean Dague  wrote:  
>> When attempting to build a new tool for Tempest, I found that my python  
>> syntax errors were being completely eaten. After 2 days of debugging I  
>> found that oslo log.py does the following *very unexpected* thing.  
>>  
>> - replaces the sys.excepthook with its own function  
>> - eats the exception traceback unless debug or verbose are set to True  
>> - sets debug and verbose to False by default  
>> - prints out a completely useless summary log message at Critical  
>> ([CRITICAL] [-] 'id' was my favorite of these)  
>>  
>> This is basically for an exit level event. Something so breaking that  
>> your program just crashed.  
>>  
>> Note this has nothing to do with preventing stack traces that are  
>> currently littering up the logs that happen at many logging levels, it's  
>> only about removing the stack trace of a CRITICAL level event that's  
>> going to very possibly result in a crashed daemon with no information as  
>> to why.  
>>  
>> So the process of including oslo log makes the code immediately  
>> undebuggable unless you change your config file away from the default.  
>>  
>> Whether or not there was justification for this before, one of the  
>> things we heard loud and clear from the operator's meetup was:  
>>  
>> - Most operators are running at DEBUG level for all their OpenStack  
>> services because you can't actually do problem determination in  
>> OpenStack for anything < that.  
>> - Operators reacted negatively to the idea of removing stack traces  
>> from logs, as that's typically the only way to figure out what's going  
>> on. It took a while of back and forth to explain that our initiative to  
>> do that wasn't about removing them per se, but having the code  
>> correctly recover.  
>>  
>> So the current oslo logging behavior seems inconsistent (we spew  
>> exceptions at INFO and WARN levels, and hide all the important stuff  
>> with a legitimately uncaught system level crash), undebuggable, and  
>> completely against the prevailing wishes of the operator community.  
>>  
>> I'd like to change that here - https://review.openstack.org/#/c/95860/  
>>  
>> -Sean  
>  
> I agree, we should dump as much detail as we can when we encounter an  
> unhandled exception that causes an app to die.  

+1  

-jay  
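
For illustration, a minimal sketch (not the actual oslo code) of the
excepthook behavior described above:

    import sys
    import traceback

    DEBUG = False  # mirrors the debug/verbose defaults

    def _quiet_excepthook(exc_type, value, tb):
        if DEBUG:
            traceback.print_exception(exc_type, value, tb)
        else:
            # Only a one-line summary survives; the traceback needed to
            # debug the crash is discarded.
            sys.stderr.write("CRITICAL [-] %s\n" % value)

    sys.excepthook = _quiet_excepthook

    raise KeyError('id')  # prints just: CRITICAL [-] 'id'

With DEBUG left False, the one-line summary is all a crashed daemon leaves
behind, which is exactly the problem being described.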

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Morgan Fainberg
This is part of what I was referencing in regards to lightening the data
stored in the token. Ideally, we would like to see an "ID only" token that
only contains the basic information to act. Some initial tests show these
tokens should be able to clock in under 1k in size. However all the details
are not fully defined yet. Coupled with this data reduction there will be
explicit definitions of the data that is meant to go into the tokens. Some
of the data we have now is there only as a convenience for accessing it.

I hope to have this token change available during the Juno development cycle.

There is a lot of work to be done to ensure this type of change goes
smoothly. But this is absolutely on the list of things we would like to
address.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths 
wrote:

> > adding another ~10kB to each request, just to save a once-a-day call to
> >Keystone (ie uuid tokens) seems to be a really high price to pay for not
> >much benefit.
>
> I have the same concern with respect to Marconi. I feel like PKI tokens
> are fine for control plane APIs, but don’t work so well for high-volume
> data APIs where every KB counts.
>
> Just my $0.02...
>
> --Kurt
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Concerns about the ballooning size of keystone tokens

2014-05-21 Thread Morgan Fainberg
The keystone team is also looking at ways to reduce the data contained in
the token. Coupled with the compression, this should get the tokens back
down to a reasonable size.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Adam Young  wrote:

>  On 05/21/2014 11:09 AM, Chuck Thier wrote:
>
> There is a review for swift [1] that is requesting to set the max header
> size to 16k to be able to support v3 keystone tokens.  That might be fine
> if you measure your request rate in requests per minute, but this is
> continuing to add significant overhead to swift.  Even if you *only* have
> 10,000 requests/sec to your swift cluster, an 8k token is adding almost
> 80MB/sec of bandwidth.  This will seem to be equally bad (if not worse) for
> services like marconi.
>
>  When PKI tokens were first introduced, we raised concerns about the
> unbounded size of the token in the header, and were told that uuid style
> tokens would still be usable, but all I heard at the summit, was to not use
> them and PKI was the future of all things.
>
>  At what point do we re-evaluate the decision to go with pki tokens, and
> accept that they may not be the best idea for apis like swift and marconi?
>
>
> Keystone tokens were slightly shrunk at the end of the last release cycle
> by removing unnecessary data from each endpoint entry.
>
> Compressed PKI tokens are en route and will be much smaller.
>
>
>  Thanks,
>
>  --
> Chuck
>
>  [1] https://review.openstack.org/#/c/93356/
>
>
> ___
> OpenStack-dev mailing list
> openstack-...@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
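
As a quick sanity check of the numbers above (10,000 requests/sec, each
carrying an 8k token), a minimal sketch:

    requests_per_sec = 10000
    token_bytes = 8 * 1024              # an 8k token on every request

    overhead = requests_per_sec * token_bytes
    print("%.1f MB/sec" % (overhead / (1024.0 * 1024.0)))
    # -> 78.1 MB/sec of header bandwidth, i.e. "almost 80MB/sec"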
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Catalog Backend in Deployments (Templated, SQL, etc)

2014-04-22 Thread Morgan Fainberg
During the weekly Keystone meeting, the topic of improving the Catalog was 
brought up. This topic is in the context of preparing for the design summit 
session on the Service Catalog. There are currently limitations in the 
templated catalog that do not exist in the SQL backed catalog. In an effort to 
provide the best support for the catalog going forward, the Keystone team would 
like to get feedback on the use of the various catalog backends.  

What we are looking for:
In your OpenStack deployments, which catalog backend are you using?
Which Keystone API version are you using?
This information will help us to prioritize updates to the catalog over the 
next development cycle (Juno) as well as identify if any changes need to be 
back-ported. In the long term, there is a desire to target new features and 
functionality for the Service Catalog to the SQL (and, for testing, KVS) 
backends, limiting enhancements and new development on the templated 
catalog backend.

Please feel free to respond via the survey below or via email to the mailing 
list.

Keystone Catalog Backend Usage Survey: https://www.surveymonkey.com/s/3DL7FTY

Cheers,
Morgan

—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2014-04-14 Thread Morgan Fainberg
Hi everyone!

I would like to announce my candidacy for the Technical Committee election.

About Me
========

I’m Morgan Fainberg. I am a software engineer working for a startup focused on 
deploying OpenStack in a private cloud configuration for a number of 
organizations. I’ve been an active member of this community since the Essex 
release. During my involvement in OpenStack, I’ve enjoyed collaborating on many 
OpenStack projects via code contributions, active communication and 
collaboration. Currently I am a member of the OpenStack Identity (Keystone) 
Core team.

I am always in many of the OpenStack IRC channels to keep a pulse on the 
community and be available to provide assistance with development, QA, and 
questions on OpenStack deployments.

Platform
========
As a developer, operator, deployer, and consumer of OpenStack, I have unique 
insight on many different facets of OpenStack. This insight and experience with 
our strengths and weaknesses in OpenStack allows me to work with the community 
to improve user experience for developers and deployers. This improvement of 
user experience includes on-boarding new contributors, making deployments of 
OpenStack easier to manage, and helping to set direction across project 
boundaries so that OpenStack as a whole can improve and become more complete.

There are some key points that I would like to see the TC focus on in the 
immediate timeline.

Performance and Concurrency
---------------------------
A strong viable direction must be set forth on improvements and mitigating 
regressions in performance across all OpenStack projects. This includes 
trending, consistent testing, and even gating. There are specific projects 
around tracking these metrics (notably Rally) which should be better leveraged. 
The process, use of metrics, and availability of this data must be formalized 
so that current, pre-incubated, and incubated projects will perform at an 
acceptable standard in deployments of OpenStack.

Inclusion of Operators in Design and Development
------------------------------------------------
We need to be soliciting more feedback from the deployers and operators of 
OpenStack. I view the Technical Committee as being ultimately responsible for 
increasing the visibility of the demands of the users and operators so that we 
(the community) can improve the usability of OpenStack. Overall, there has been 
improvement, but in light of some of the discussions on the mailing list (one 
example would be API major versions) we need to better include the deployers, 
gaining a solid understanding of the impacts of a given change. This concept 
extends further and includes improvement on the pre-incubation, incubation, and 
integration requirements of new projects and programs. We’ve seen improvement 
here but there is room for further discussion which should absolutely involve 
operators to help make the best choices benefitting our development community 
and the OpenStack deployment operators. The new operator-focused sessions at 
the Summit are a start, but this effort will require continued direction from 
the Technical Committee over the life of the development cycles.

Cross Project Initiatives and Communication
-------------------------------------------
There has been an overall improvement of cross-project communication and 
facilitation of cross-project initiatives. However, I believe that the 
Technical Committee can continue improvement on this front. This should include 
encouragement of mid-cycle meet-ups, with potential assistance from the OpenStack 
Foundation for facilitation and funding. This cross-project communication is 
increasing in importance as we add more OpenStack programs and projects. Some 
of these challenges are already addressed during the cross-project (and 
release) meetings, however, the Technical Committee has a unique role to 
enhance and formalize these channels.

Ease of Deployment and Usability
--------------------------------
Each release of OpenStack has improved on deployment and usability of the 
previous release. OpenStack as a whole must continue to drive toward better 
user experience on all fronts and the Technical Committee is ultimately 
responsible for defining goals and means to measure improvement for each 
release. Each incremental improvement in usability encourages wider adoption of 
OpenStack as a valuable solution to the Virtualization and Orchestration 
problem spaces.


It has been a privilege to be part of the OpenStack community, and I can’t wait 
to see where the Juno cycle will take us. I look forward to continuing to serve 
this community as a contributor, core member of the Identity Program, and an 
advocate of improved usability and ease of deployment.

Cheers,
Morgan Fainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] Using oslo.cache in keystoneclient.middleware.auth_token

2014-03-31 Thread Morgan Fainberg
I’ve been working on (albeit slowly) getting the keystone implementation of 
dogpile.cache into oslo.cache. It’s been slow due to other demands, but I’m 
hoping to get back to it in the near future here so we can make moves like this 
more easily.
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On March 31, 2014 at 10:11:21, Dolph Mathews (dolph.math...@gmail.com) wrote:

dogpile.cache would be substantially lighter on the client-side as it only has 
a hard dependency on dogpile.core. It supports plenty of backends beyond 
memcached and we already use it in keystone quite heavily.

  http://dogpilecache.readthedocs.org/en/latest/


On Mon, Mar 31, 2014 at 11:35 AM, Doug Hellmann  
wrote:



On Mon, Mar 31, 2014 at 12:18 PM, Kurt Griffiths  
wrote:
Hi folks, has there been any discussion on using oslo.cache within the 
auth_token middleware to allow for using other cache backends besides 
memcached? I didn’t find a Keystone blueprint for it, and was considering 
registering one for Juno if the team thinks this feature makes sense. I’d be 
happy to put some time into the implementation.

That does make sense. We need to look at the dependency graph between the 
keystoneclient and oslo.cache, though. It appears the current version of 
oslo.cache is going to bring in quite a few oslo libraries that we would not 
want keystoneclient to depend on [1]. Moving the middleware to a separate 
library would solve that.

[1] https://wiki.openstack.org/wiki/Oslo/Dependencies

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-04 Thread Morgan Fainberg
On March 4, 2014 at 16:13:45, Dan Smith (d...@danplanet.com) wrote:
> What I'd like to do next is work through a new proposal that includes 
> keeping both v2 and v3, but with a new added focus of minimizing the 
> cost. This should include a path away from the dual code bases and to 
> something like the "v2.1" proposal. 

I think that the most we can hope for is consensus on _something_. So, 
the thing that I'm hoping would mostly satisfy the largest number of 
people is: 

- Leaving v2 and v3 as they are today in the tree, and with v3 still 
marked experimental for the moment 
- We start on a v2 proxy to v3, with the first goal of fully 
implementing the v2 API on top of v3, as judged by tempest 
- We define the criteria for removing the current v2 code and marking 
the v3 code supported as: 
- The v2 proxy passes tempest 
- The v2 proxy has sign-off from some major deployers as something 
they would be comfortable using in place of the existing v2 code 
- The v2 proxy seems to us to be lower maintenance and otherwise 
preferable to either keeping both, breaking all our users, deleting 
v3 entirely, etc 
- We keep this until we either come up with a proxy that works, or 
decide that it's not worth the cost, etc. 
This seems reasonable.


I think the list of benefits here are: 

- Gives the v3 code a chance to address some of the things we have 
identified as lacking in both trees 
- Gives us a chance to determine if the proxy approach is reasonable or 
a nightmare 
- Gives a clear go/no-go line in the sand that we can ask deployers to 
critique or approve 

+1 on this. As a deployer this is a good stance, and I especially like the clear 
go/no-go line above the other “benefits”, with the assumption that we are keeping 
V2 as is (e.g. not planning to deprecate sections, change interfaces, or 
evolve the API to be more V3-like).


It doesn't address all of my concerns, but at the risk of just having 
the whole community split over this discussion, I think this is probably 
(hopefully?) something we can all get behind. 
I agree this doesn’t solve all the concerns, but it’s a good middle ground to 
stand on. I obviously have a personal preference as a deployer/supporter of 
OpenStack environments. I have concerns over the V2 proxy, but as long as we 
keep V2 as is and this moves us towards a larger change to V3 with solid 
tempest coverage, I don't see a reason to say “this is a bad 
approach”.

Cheers,

Morgan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-04 Thread Morgan Fainberg
On March 4, 2014 at 10:45:02, Vishvananda Ishaya (vishvana...@gmail.com) wrote:

On Mar 3, 2014, at 11:32 AM, Jay Pipes  wrote: 

> On Mon, 2014-03-03 at 11:09 -0800, Vishvananda Ishaya wrote: 
>> On Mar 3, 2014, at 6:48 AM, Jay Pipes  wrote: 
>> 
>>> On Sun, 2014-03-02 at 12:05 -0800, Morgan Fainberg wrote: 
>>>> Having done some work with MySQL (specifically around similar data 
>>>> sets) and discussing the changes with some former coworkers (MySQL 
>>>> experts) I am inclined to believe the move from varchar to binary 
>>>> absolutely would increase performance like this. 
>>>> 
>>>> 
>>>> However, I would like to get some real benchmarks around it and if it 
>>>> really makes a difference we should get a smart "UUID" type into the 
>>>> common SQL libs (can pgsql see a similar benefit? DB2?) I think this 
>>>> conversation should be split off from the keystone one at hand - I 
>>>> don't want valuable information / discussions to get lost. 
>>> 
>>> No disagreement on either point. However, this should be done after the 
>>> standardization to a UUID user_id in Keystone, as a separate performance 
>>> improvement patch. Agree? 
>>> 
>>> Best, 
>>> -jay 
>> 
>> -1 
>> 
>> The expectation in other projects has been that project_ids and user_ids are 
>> opaque strings. I need to see more clear benefit to enforcing stricter 
>> typing on these, because I think it might break a lot of things. 
> 
> What does Nova lose here? The proposal is to have Keystone's user_id 
> values be UUIDs all the time. There would be a migration or helper 
> script against Nova's database that would change all non-UUID user_id 
> values to the Keystone UUID values. 

So I don’t have a problem with keystone internally using uuids, but forcing 
a migration of user identifiers isn’t something that should be taken lightly. 
One example is external logging and billing systems which now have to be 
migrated. 

I’m not opposed to the migration in principle. We may have to do a similar 
thing for project_ids with hierarchical multitenancy, for example. I just 
think we need a really good reason to do it, and the performance arguments 
just don’t seem good enough without a little empirical data. 

Vish 
None of the proposals we’re planning on using will affect any current 
IDs.  Since the user_id is a blob, if we start issuing a new “id” format, 
ideally it shouldn’t matter as long as old IDs continue to work. If we do make 
any kind of migration for issued IDs I would expect that to be very deliberate 
and outside of this change set. Specifically this change set would enable 
multiple LDAP backends (among other user_id uniqueness benefits for federation, 
etc). 

I am very concerned about the external tools that reference IDs we currently 
have.

—Morgan





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-04 Thread Morgan Fainberg
I think I missed the emphasis on the efforts towards superseding and/or making 
the call obsolete in my previous email (I was aiming to communicate far more 
than the few lines managed to convey). I honestly think that if we are 
sticking with V2 because Nova has such a large surface area, 
the only option is to treat each extension as its own (almost) isolated API in 
the context of deprecation, obsolescence, etc. I think the issue here is that 
we may be looking at the version of the API as something set in stone. 
Clearly, with the scope and surface area of Nova (this point has been made 
clear), there is no possible way we can handle a whole-sale change.

As a deployer, and supporter of a number of OpenStack based clouds, as long as 
the API is stable (yes, I’ll give it a full year from when it is determined a 
change is needed), I don’t see my customers complaining excessively; maybe we 
make it a 4 cycle deprecation? It is silly to say “because we called it X we 
can’t ever take anything away”. I am all for not breaking the contract, but 
define the contract beyond “the spec”. This holds true especially when it comes 
down to continued growth and possibly moving the data from one place to 
somewhere better / more suited. Perhaps part of the real issue is the whole 
extension model. A well documented, interoperable (across deployments) API 
doesn’t have huge swaths of functionality that is optional (or more to the 
point what is OpenStack Compute’s API?). If you are adding a core feature, 
should it be an “Extension”? 

Let’s add one more step: ask deployments if they are using the “extensions”. 
Make it part of the summit / development cycle. Get real information (send 
surveys?) and get to know the community running the software. That in itself 
will help to direct if an extension is used. I think the crux is that we do not 
have a clear (and have no way of getting the information) understanding of what 
is being used and what isn’t. Perhaps the best thing we can do is make this an 
exercise on understanding what people are using and how we can quantify that 
information before we worry about “how do we remove functionality”.

Complete removal of functionality is probably going to be rare. It is far more 
likely that locations will shift and / or things will be rolled into more 
logical areas.  At the speed that we are moving (not slow, but not as fast as 
other things), it should be totally possible to support a 2+ cycle deprecation 
if something is being moved / shuffled / absorbed. But more importantly, 
knowing use is far more important than knowing how to remove; the former will 
direct the latter.


So I propose changing the topic of this thread slightly: In a V2 only world, 
how do we know if something is used? How do we understand how it is used and 
when? Let’s answer that instead.


—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On March 4, 2014 at 02:09:51, Christopher Yeoh (cbky...@gmail.com) wrote:

On Mon, 3 Mar 2014 21:31:23 -0800
Morgan Fainberg  wrote:
>  
> I think that in a V2 only world a 2 cycle deprecation model would be
> sufficient for any extension. I don’t foresee any complaints on that
> front, especially if there is work to supersede or obsolete the need
> for the extension.

Given the context of feedback saying we're not able to deprecate
the V2 API as-it-is for a very long time I don't see how a 2 cycle
deprecation model for an extension is sufficient. Perhaps it comes down
to how do we really know its unused? If it hasn't ever worked (we had
one of those!) or accidentally hadn't worked for a couple of cycles and
no one noticed then perhaps deprecating it then is ok. But otherwise
whilst we can get input from public cloud providers fairly easily
there's going to be a lot of small private deployments as well with
custom bits of API using code which we won't hear from. And we'll be
forcing them off the API which people say is exactly what we don't want
to do.

Deprecating functionality and still calling it V2 is I think nearly
always a bad thing. Because it is very different from what people
consider to be major version API stability - eg you may get new
features, but old ones stay.

Its for similar reasons I'm pretty hesitant about using microversioning
for backwards incompatible changes in addition to backwards compatible
ones. Because we end up with a concept of major version stability which
is quite different from what people expect. I don't think we should be
seeing versioning as a magic bullet to get out of API mistakes
(except perhaps under really exceptional circumstances) we've made
because it really just shifts the pain to the users. Do it enough and
people lose an understanding of what it means to have version X.Y
of an API available versus X.(Y+n) and whether they can expect the
software to still work.





Re: [openstack-dev] [nova] Thought exercise for a V2 only world

2014-03-03 Thread Morgan Fainberg
Hi Joe,

I think that in a V2 only world a 2 cycle deprecation model would be sufficient 
for any extension. I don’t foresee any complaints on that front, especially if 
there is work to supersede or obsolete the need for the extension.

Cheers,
Morgan 
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On March 3, 2014 at 21:29:43, Joe Gordon (joe.gord...@gmail.com) wrote:

Hi All,  

here's a case worth exploring in a v2 only world ... what about some  
extension we really think is dead and should go away? can we ever  
remove it? In the past we have said backwards compatibility means no  
we cannot remove any extensions, if we adopt the v2 only notion of  
backwards compatibility is this still true?  

best,  
Joe  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-03-02 Thread Morgan Fainberg
Having done some work with MySQL (specifically around similar data sets)
and discussing the changes with some former coworkers (MySQL experts) I am
inclined to believe the move from varchar to binary absolutely would
increase performance like this.

However, I would like to get some real benchmarks around it and if it
really makes a difference we should get a smart "UUID" type into the common
SQL libs (can pgsql see a similar benefit? DB2?) I think this conversation
should be split off from the keystone one at hand - I don't want valuable
information / discussions to get lost.

Cheers,
Morgan.

Sent via mobile


On Sunday, March 2, 2014, Sean Dague  wrote:

> On 03/01/2014 08:00 PM, Clint Byrum wrote:
> > Excerpts from Robert Collins's message of 2014-03-01 14:26:57 -0800:
> >> On 1 March 2014 13:28, Clint Byrum >
> wrote:
> >>
> >>> +1. A Keystone record belongs to Keystone, and it should have a
> Keystone
> >>> ID. External records that are linked should be linked separately.
> >>>
> >>> It may not be obvious to everyone, but MySQL uses B-trees for indexes.
> >>> B-trees cannot have variable-length keys.
> >>
> >> Hmm, B-Trees and B+-Trees both can have variable length keys. I'll
> >> accept an assertion that MySQL index B-trees cannot - but we should be
> >> precise here, because its not a global limitation.
> >>
> >
> > Sorry, I misspoke, _InnoDB's_ b-trees cannot have variable length keys.
> > :-P
>
> On a previous project we did a transition from varchar based UUID to
> binary based UUID in MySQL. The micro benchmarks on joins got faster by
> a factor of 10,000 (yes 10k). Granted, MySQL has evolved since then, and
> this was a micro benchmark; however, this is definitely worth considering.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net  / sean.da...@samsung.com 
> http://dague.net
>
>
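
For reference, a hedged sketch of the varchar-to-binary idea being discussed:
store UUIDs as 16 raw bytes instead of hex characters. The type name is
illustrative, not an existing OpenStack helper:

    import uuid

    from sqlalchemy.types import BINARY, TypeDecorator

    class BinaryUUID(TypeDecorator):
        impl = BINARY(16)

        def process_bind_param(self, value, dialect):
            # accept the familiar string form, store raw bytes
            return uuid.UUID(value).bytes if value else None

        def process_result_value(self, value, dialect):
            # hand the hex form back to callers
            return uuid.UUID(bytes=value).hex if value else None

Half the storage per key and fixed-length comparisons are where the join
speedups come from.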
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Toward SQLAlchemy 0.9.x compatibility everywhere for Icehouse

2014-03-01 Thread Morgan Fainberg
Hi Thomas,

I’ll take a look at that tonight and see if it’s an easy solve. Hopefully I can 
have something posted by Monday for you.

—Morgan
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On March 1, 2014 at 20:06:54, Thomas Goirand (z...@debian.org) wrote:

Hi,  

About a week ago, the maintainer of SQLAlchemy uploaded version 0.9.3 in  
Debian Sid. This of course broke a lot of OpenStack packages, including  
python-migrate.  

I do not intend to let this continue on for 7 months like it happened  
for SQLA 0.8.x.  

Over the last week, I worked, together with my colleagues from eNovance,  
on fixing python-migrate. Today, I can proudly say that migrate seems to  
have been fixed and works fully with SQLAlchemy 0.9.3. All unit tests  
are passing, including the ones for MySQL, which I also do in my Debian  
package.  

I haven't uploaded python-migrate to Debian Sid yet, because I'm waiting  
for python-ibm-db-sa to be approved by FTP masters. However, I have good  
hopes that it will happen soon.  

The Debian package I've prepared already includes all of SQLA-Migrate  
commits currently available in the Git, plus 4 patches which I intend to  
push upstream. The resulting package works with both SQLA 0.8.x and SQLA  
0.9.x.  

Next up is Keystone. Only one unit test is failing, as far as I can tell:  
keystone.tests.test_sql_upgrade.SqlUpgradeTests.test_upgrade_14_to_16  

Once Keystone is fixed, we'll move to the next packages.  

I hope to get support from core reviewers here, so that we can fix-up  
the SQLA 0.9.x compat ASAP, preferably before b3. Of course, I'll make  
sure all we do works with both the 0.8 and 0.9 versions of SQLA. Is there  
anyone still running with the old 0.7? If yes, then we can try to  
continue validating OpenStack against it as well.  

Thoughts welcome,  

Cheers,  

Thomas Goirand (zigo)  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-27 Thread Morgan Fainberg
On Thursday, February 27, 2014, Dolph Mathews 
wrote:

>
> On Thu, Feb 27, 2014 at 11:52 AM, Jay Pipes 
> 
> > wrote:
>
>> On Thu, 2014-02-27 at 16:13 +, Henry Nash wrote:
>> > So a couple of things about this:
>> >
>> >
>> > 1) Today (and also true for Grizzly and Havana), the user can choose
>> > what LDAP attribute should be returned as the user or group ID.  So it
>> > is NOT a safe assumption today (ignoring any support for
>> > domain-specific LDAP support) that the format of a user or group ID is
>> > a 32 char UUID.  Quite often, I would think, an email address would
>> > be chosen by a cloud provider as the LDAP id field; by default we use
>> > the CN.  Since we really don't want to ever change the user or group
>> > ID we have given out from keystone for a particular entity, this means
>> > we need to update nova (or anything else) that has made a 32 char
>> > assumption.
>>
>> I don't believe this is correct. Keystone is the service that deals with
>> authentication. As such, Keystone should be the one and only one service
>> that should have any need whatsoever to need to understand a non-UUID
>> value for a user ID. The only value that should ever be communicated
>> *from* Keystone should be the UUID value of the user.
>>
>
> +++
>
>
>>
>> If the Keystone service uses LDAP or federation for alternative
>> authentication schemes, then Keystone should have a mapping table that
>> translates those elongated and non-UUID identifiers values (email
>> addresses, LDAP CNs, etc) into the UUID value that is then communicated
>> to all other OpenStack services.
>>
>
> I'd take it one step further and say that at some point, keystone should
> stop leaking identity details such as user name / ID into OpenStack (they
> shouldn't appear in tokens, and shouldn't be expected output of
> auth_token). The use cases that "require" them are weak and would be better
> served by pure multitenant RBAC, ABAC, etc. There are a lot of blockers to
> making this happen (including a few in keystone's own API), but still food
> for thought.
>
>
++ this would be a great change!

I am on the same page as Dolph about the UUID being
the only value communicated outside of keystone. There is just no good
reason to send out extra identity information (it isn't needed, and omitting
it helps reduce token bloat, etc.).

--Morgan

Sent via mobile
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][keystone] Increase of USER_ID length maximum from 64 to 255

2014-02-25 Thread Morgan Fainberg
For purposes of supporting multiple backends for Identity (multiple LDAP, mix 
of LDAP and SQL, federation, etc) Keystone is planning to increase the maximum 
size of the USER_ID field from an upper limit of 64 to an upper limit of 255. 
This change would not impact any currently assigned USER_IDs (they would remain 
in the old simple UUID format), however, new USER_IDs would be increased to 
include the IDP identifier (e.g. USER_ID@@IDP_IDENTIFIER). 

There is the obvious concern that projects are utilizing (and storing) the 
user_id in a field that cannot accommodate the increased upper limit. Before 
this change is merged in, it is important for the Keystone team to understand 
if there are any places that would be overflowed by the increased size.

The review that would implement this change in size is 
https://review.openstack.org/#/c/74214 and is actively being worked on/reviewed.

I have already spoken with the Nova team, and a single instance has been 
identified that would require a migration (that will have a fix proposed for 
the I3 timeline). 

If there are any other known locations that would have issues with an increased 
USER_ID size, or any concerns with this change to USER_ID format, please 
respond so that the issues/concerns can be addressed.  Again, the plan is not 
to change current USER_IDs but that new ones could be up to 255 characters in 
length.
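
For illustration only (the identifiers below are hypothetical), the composite
format and why the old 64-character limit can overflow:

    import uuid

    local_id = uuid.uuid4().hex                 # 32 chars, as issued today
    idp_id = uuid.uuid4().hex                   # an IdP identifier may be
                                                # just as long
    composite = '%s@@%s' % (local_id, idp_id)   # i.e. USER_ID@@IDP_IDENTIFIER

    assert len(composite) > 64      # would overflow the old limit
    assert len(composite) <= 255    # fits the proposed new limit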

Cheers,
Morgan Fainberg
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Morgan Fainberg
Yes, micro-versioning is most likely a better approach, and I’m a fan of using 
that to gain the benefits of V3 without changing for the sake of change. 
Ideally in a versioned API we should be versioning a smaller surface area than 
“THE WHOLE API” if at all possible. If we kept the old “version” around and 
deprecated it (keep it for 2 cycles, when it goes away the non-versioned call 
says “sorry, version unsupported”?, and it can continue to be versioned as 
needed) and continue to increment the versions as appropriate with changes, we 
will be holding true to our contract. The benefits of V3 can still be reaped, 
knowing where the API should move towards.

Don’t try and take on a giant task to make a “new API version” at once. 

We can maintain the contract and still progress the APIs forward. And to Sean’s 
comment that the V2 API hasn’t been as “stable in the traditional sense” in the 
past, I think we can forgive past issues since we now have the framework to 
show us when/if things end up being incompatible (and I agree with the fact 
that big-bang changes don’t work for large surface area projects). I still 
stand by my statement that we can’t (shouldn’t knowingly) break the contract, 
we also can’t assume people will move to V3 (if we launch it) in a reasonable 
timeframe if the new API doesn’t really justify a massive re-write. Maintaining 
2, nearly identical, APIs is going to be problematic for both the developers 
and deployers. In my view (as a deployer, consumer, and developer) this means 
we should keep V2, and work on benefiting from the lessons learned in 
developing V3 while moving to correct the issues we have in a maintainable / 
friendly way (to developers, deployers, and consumers).
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On February 24, 2014 at 15:22:01, Sean Dague (s...@dague.net) wrote:

On 02/24/2014 06:13 PM, Chris Friesen wrote:  
> On 02/24/2014 04:59 PM, Sean Dague wrote:  
>  
>> So, that begs a new approach. Because I think at this point even if we  
>> did put out Nova v3, there can never be a v4. It's too much, too big,  
>> and doesn't fit in the incremental nature of the project.  
>  
> Does it necessarily need to be that way though? Maybe we bump the  
> version number every time we make a non-backwards-compatible change,  
> even if it's just removing an API call that has been deprecated for a  
> while.  

So I'm not sure how this is different than the keep v2 and use  
microversioning suggestion that is already in this thread.  

-Sean  

--  
Sean Dague  
Samsung Research America  
s...@dague.net / sean.da...@samsung.com  
http://dague.net  

___  
OpenStack-dev mailing list  
OpenStack-dev@lists.openstack.org  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Future of the Nova API

2014-02-24 Thread Morgan Fainberg
On the topic of backwards incompatible changes:

I strongly believe that breaking current clients that use the APIs directly is 
the worst option possible. All the arguments about needing to know which APIs 
work based upon which backend drivers are used are all valid, but making an API 
incompatible change when we’ve made the contract that the current API will be 
stable is a very bad approach. Breaking current clients isn’t just breaking 
“novaclient", it would also break any customers that are developing directly 
against the API. In the case of cloud deployments with real-world production 
loads on them (and custom development around the APIs) upgrading between major 
versions is already difficult to orchestrate (timing, approvals, etc), if we 
add in the need to re-work large swaths of code due to API changes, it will 
become even more onerous and perhaps drive deployers to forego the upgrades in 
lieu of stability.

If the perception is that we don’t have stable APIs (especially when we are 
ostensibly versioning them), driving adoption of OpenStack becomes 
significantly more difficult. Difficulty in driving further adoption would be a 
big negative to both the project and the community.

TL;DR, “don’t break the contract”. If we are seriously making incompatible 
changes (and we will be regardless of the direction) the only reasonable option 
is a new major version.
—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com


On February 24, 2014 at 10:16:31, Matt Riedemann (mrie...@linux.vnet.ibm.com) 
wrote:



On 2/24/2014 10:13 AM, Russell Bryant wrote:  
> On 02/24/2014 01:50 AM, Christopher Yeoh wrote:  
>> Hi,  
>>  
>> There has recently been some speculation around the V3 API and whether  
>> we should go forward with it or instead backport many of the changes  
>> to the V2 API. I believe that the core of the concern is the extra  
>> maintenance and test burden that supporting two APIs means and the  
>> length of time before we are able to deprecate the V2 API and return  
>> to maintaining only one (well two including EC2) API again.  
>  
> Yes, this is a major concern. It has taken an enormous amount of work  
> to get to where we are, and v3 isn't done. It's a good time to  
> re-evaluate whether we are on the right path.  
>  
> The more I think about it, the more I think that our absolute top goal  
> should be to maintain a stable API for as long as we can reasonably do  
> so. I believe that's what is best for our users. I think if you gave  
> people a choice, they would prefer an inconsistent API that works for  
> years over dealing with non-backwards compatible jumps to get a nicer  
> looking one.  
>  
> The v3 API and its unit tests are roughly 25k lines of code. This also  
> doesn't include the changes necessary in novaclient or tempest. That's  
> just *our* code. It explodes out from there into every SDK, and then  
> end user apps. This should not be taken lightly.  
>  
>> This email is rather long so here's the TL;DR version:  
>>  
>> - We want to make backwards incompatible changes to the API  
>> and whether we do it in-place with V2 or by releasing V3  
>> we'll have some form of dual API support burden.  
>> - Not making backwards incompatible changes means:  
>> - retaining an inconsistent API  
>  
> I actually think this isn't so bad, as discussed above.  
>  
>> - not being able to fix numerous input validation issues  
>  
> I'm not convinced, actually. Surely we can do a lot of cleanup here.  
> Perhaps you have some examples of what we couldn't do in the existing API?  
>  
> If it's a case of wanting to be more strict, some would argue that the  
> current behavior isn't so bad (see robustness principle [1]):  
>  
> "Be conservative in what you do, be liberal in what you accept from  
> others (often reworded as "Be conservative in what you send, be  
> liberal in what you accept")."  
>  
> There's a decent counter argument to this, too. However, I still fall  
> back on it being best to just not break existing clients above all else.  
>  
>> - have to forever proxy for glance/cinder/neutron with all  
>> the problems that entails.  
>  
> I don't think I'm as bothered by the proxying as others are. Perhaps  
> it's not architecturally pretty, but it's worth it to maintain  
> compatibility for our users.  

+1 to this, I think this is also related to what Jay Pipes is saying in  
his reply:  

"Whether a provider chooses to, for example,  
deploy with nova-network or Neutron, or Xen vs. KVM, or support block  
migration for that matter *should have no effect on the public API*.

[openstack-dev] [Keystone] IRC Channel Venue Change for Keystone Development topics

2014-02-18 Thread Morgan Fainberg
So that the rest of the community is aware, the majority of keystone specific 
topics are moving from the #openstack-dev channel to the #openstack-keystone 
channel on Freenode. This has been done to help free up #openstack-dev for 
more cross-project discussion.  Expect that the Keystone Core team will still 
be in #openstack-dev so that we can catch Keystone and Identity related 
questions (just as before).

Cheers,
Morgan Fainberg

—
Morgan Fainberg
Principal Software Engineer
Core Developer, Keystone
m...@metacloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] memoizer aka cache

2014-01-23 Thread Morgan Fainberg
Yes! There is a reason Keystone has only a very small footprint of
caching/invalidation so far.  It really needs to be correct when it
comes to proper invalidation logic.  I am happy to offer some help in
determining logic for caching/invalidation with Dogpile.cache in mind as we
get it into oslo and available for all to use.

--Morgan



On Thu, Jan 23, 2014 at 2:54 PM, Joshua Harlow wrote:

> Sure, no cancelling cases of conscious usage, but we need to be careful
> here and make sure its really appropriate. Caching and invalidation
> techniques are right up there in terms of problems that appear easy and
> simple to initially do/use, but doing it correctly is really really hard
> (especially at any type of scale).
>
> -Josh
>
> On 1/23/14, 1:35 PM, "Renat Akhmerov"  wrote:
>
> >
> >On 23 Jan 2014, at 08:41, Joshua Harlow  wrote:
> >
> >> So to me memoizing is typically a premature optimization in a lot of
> >>cases. And doing it incorrectly leads to overfilling the python
> >>processes memory (your global dict will have objects in it that can't be
> >>garbage collected, and with enough keys+values being stored will act
> >>just like a memory leak; basically it acts as a new GC root object in a
> >>way) or more cache invalidation races/inconsistencies than just
> >>recomputing the initial value…
> >
> >I agree with your concerns here. At the same time, I think this thinking
> >shouldn’t cancel cases of conscious usage of caching techniques. A decent
> >cache implementation would help to solve lots of performance problems
> >(which eventually becomes a concern for any project).
> >
> >> Overall though there are a few caching libraries I've seen being used,
> >>any of which could be used for memoization.
> >>
> >> -
> >>
> https://github.com/openstack/oslo-incubator/tree/master/openstack/common/
> >>cache
> >> -
> >>
> https://github.com/openstack/oslo-incubator/blob/master/openstack/common/
> >>memorycache.py
> >
> >I looked at the code. I have lots of questions about the implementation (like
> >cache eviction policies, whether or not it works well with green threads,
> >but I think it’s a subject for a separate discussion though). Could you
> >please share your experience of using it? Were there specific problems
> >that you could point to? Maybe they are already described somewhere?
> >
> >> - dogpile cache @ https://pypi.python.org/pypi/dogpile.cache
> >
> >This one looks really interesting in terms of claimed feature set. It
> >seems to be compatible with Python 2.7, not sure about 2.6. As above, it
> >would be cool you told about your experience with it.
> >
> >
> >> I am personally wary of using them for memoization, what expensive
> >>method calls do u see the complexity of this being useful? I didn't
> >>think that many method calls being done in openstack warranted the
> >>complexity added by doing this (premature optimization is the root of
> >>all evil...). Do u have data showing where it would be
> >>applicable/beneficial?
> >
> >I believe there¹s a great deal of use cases like caching db objects or
> >more generally caching any heavy objects involving interprocess
> >communication. For instance, API clients may be caching objects that are
> >known to be immutable on the server side.
> >
> >
> >>
> >> Sent from my really tiny device...
> >>
> >>> On Jan 23, 2014, at 8:19 AM, "Shawn Hartsock" 
> wrote:
> >>>
> >>> I would like to have us adopt a memoizing caching library of some kind
> >>> for use with OpenStack projects. I have no strong preference at this
> >>> time and I would like suggestions on what to use.
> >>>
> >>> I have seen a number of patches where people have begun to implement
> >>> their own caches in dictionaries. This typically confuses the code and
> >>> mixes issues of correctness and performance in code.
> >>>
> >>> Here's an example:
> >>>
> >>> We start with:
> >>>
> >>> def my_thing_method(some_args):
> >>>   # do expensive work
> >>>   return value
> >>>
> >>> ... but a performance problem is detected... maybe the method is
> >>> called 15 times in 10 seconds but then not again for 5 minutes and the
> >>> return value can only logically change every minute or two... so we
> >>> end up with ...
> >>>
> >>> _GLOBAL_THING_CACHE = {}
> >>>
> >>> def my_thing_method(some_args):
> >>>     key = key_from(some_args)
> >>>     if key in _GLOBAL_THING_CACHE:
> >>>         return _GLOBAL_THING_CACHE[key]
> >>>     else:
> >>>         # do expensive work
> >>>         _GLOBAL_THING_CACHE[key] = value
> >>>         return value
> >>>
> >>> ... which is all well and good... but now as a maintenance programmer
> >>> I need to comprehend the cache mechanism, when cached values are
> >>> invalidated, and if I need to debug the "do expensive work" part I
> >>> need to tease out some test that prevents the cache from being hit.
> >>> Plus I've introduced a new global variable. We love globals right?
> >>>
> >>> I would like us to be able to say:
> >>>
> >>> @memoize(seconds=10)
> >>> def my_thing_method(some_args):
> >>>     # do expensive work
> >>>     return value

Re: [openstack-dev] [oslo] memoizer aka cache

2014-01-23 Thread Morgan Fainberg
Keystone uses dogpile.cache and I am making an effort to add it into the
oslo incubator cache library that was recently merged.

Cheers,
--Morgan
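
For context, a minimal sketch of what that looks like; the backend and
expiration below are illustrative, not Keystone's actual configuration:

    from dogpile.cache import make_region

    region = make_region().configure(
        'dogpile.cache.memory',    # swap for memcached et al. in real use
        expiration_time=10,        # seconds
    )

    @region.cache_on_arguments()
    def my_thing_method(some_args):
        # do expensive work
        return some_args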


On Thu, Jan 23, 2014 at 1:35 PM, Renat Akhmerov wrote:

>
> On 23 Jan 2014, at 08:41, Joshua Harlow  wrote:
>
> > So to me memoizing is typically a premature optimization in a lot of
> cases. And doing it incorrectly leads to overfilling the python processes
> memory (your global dict will have objects in it that can't be garbage
> collected, and with enough keys+values being stored will act just like a
> memory leak; basically it acts as a new GC root object in a way) or more
> cache invalidation races/inconsistencies than just recomputing the initial
> value…
>
> I agree with your concerns here. At the same time, I think this thinking
> shouldn’t cancel cases of conscious usage of caching techniques. A decent
> cache implementation would help to solve lots of performance problems
> (which eventually becomes a concern for any project).
>
> > Overall though there are a few caching libraries I've seen being used,
> any of which could be used for memoization.
> >
> > -
> https://github.com/openstack/oslo-incubator/tree/master/openstack/common/cache
> > -
> https://github.com/openstack/oslo-incubator/blob/master/openstack/common/memorycache.py
>
> I looked at the code. I have lots of questions about the implementation (like
> cache eviction policies, whether or not it works well with green threads,
> but I think it’s a subject for a separate discussion though). Could you
> please share your experience of using it? Were there specific problems that
> you could point to? Maybe they are already described somewhere?
>
> > - dogpile cache @ https://pypi.python.org/pypi/dogpile.cache
>
> This one looks really interesting in terms of claimed feature set. It
> seems to be compatible with Python 2.7, not sure about 2.6. As above, it
> would be cool you told about your experience with it.
>
>
> > I am personally wary of using them for memoization, what expensive
> method calls do u see the complexity of this being useful? I didn't think
> that many method calls being done in openstack warranted the complexity
> added by doing this (premature optimization is the root of all evil...). Do
> u have data showing where it would be applicable/beneficial?
>
> I believe there’s a great deal of use cases like caching db objects or
> more generally caching any heavy objects involving interprocess
> communication. For instance, API clients may be caching objects that are
> known to be immutable on the server side.
>
>
> >
> > Sent from my really tiny device...
> >
> >> On Jan 23, 2014, at 8:19 AM, "Shawn Hartsock"  wrote:
> >>
> >> I would like to have us adopt a memoizing caching library of some kind
> >> for use with OpenStack projects. I have no strong preference at this
> >> time and I would like suggestions on what to use.
> >>
> >> I have seen a number of patches where people have begun to implement
> >> their own caches in dictionaries. This typically confuses the code and
> >> mixes issues of correctness and performance in code.
> >>
> >> Here's an example:
> >>
> >> We start with:
> >>
> >> def my_thing_method(some_args):
> >>   # do expensive work
> >>   return value
> >>
> >> ... but a performance problem is detected... maybe the method is
> >> called 15 times in 10 seconds but then not again for 5 minutes and the
> >> return value can only logically change every minute or two... so we
> >> end up with ...
> >>
> >> _GLOBAL_THING_CACHE = {}
> >>
> >> def my_thing_method(some_args):
> >>     key = key_from(some_args)
> >>     if key in _GLOBAL_THING_CACHE:
> >>         return _GLOBAL_THING_CACHE[key]
> >>     else:
> >>         # do expensive work
> >>         _GLOBAL_THING_CACHE[key] = value
> >>         return value
> >>
> >> ... which is all well and good... but now as a maintenance programmer
> >> I need to comprehend the cache mechanism, when cached values are
> >> invalidated, and if I need to debug the "do expensive work" part I
> >> need to tease out some test that prevents the cache from being hit.
> >> Plus I've introduced a new global variable. We love globals right?
> >>
> >> I would like us to be able to say:
> >>
> >> @memoize(seconds=10)
> >> def my_thing_method(some_args):
> >>   # do expensive work
> >>   return value
> >>
> >> ... where we're clearly addressing the performance issue by
> >> introducing a cache and limiting it's possible impact to 10 seconds
> >> which allows for the idea that "do expensive work" has network calls
> >> to systems that may change state outside of this Python process.
> >>
> >> I'd like to see this done because I would like to have a place to
> >> point developers to during reviews... to say: use "common/memoizer" or
> >> use "Bob's awesome memoizer" because Bob has worked out all the cache
> >> problems already and you can just use it instead of worrying about
> >> introducing new bugs by building your own cache.
> >>
> >> Does this make sense?

[openstack-dev] [wsme] Undefined attributes in WSME

2014-01-18 Thread Morgan Fainberg
Yes, this feature is used in real deployments just as Yuriy described. I
really want to avoid a new API version since we're just now getting solidly
into V3 being used more extensively.  Is it unreasonable to have wsme allow
"extra values" in some manner? (I think that is the crux, is it something
that can even be expected)

--Morgan

On Saturday, January 18, 2014, Yuriy Taraday wrote:

>
> On Tue, Jan 14, 2014 at 6:09 PM, Doug Hellmann <
> doug.hellm...@dreamhost.com> wrote:
>>
>> On Mon, Jan 13, 2014 at 9:36 PM, Jamie Lennox wrote:
>>
>>> On Mon, 2014-01-13 at 10:05 -0500, Doug Hellmann wrote:
>>> > What requirement(s) led to keystone supporting this feature?
>>>
>>> I've got no idea where the requirement came from however it is something
>>> that is
>>> supported now and so not something we can back out of.
>>>
>>
>> If it's truly a requirement, we can look into how to make that work. The
>> data is obviously present in the request, so we would just need to preserve
>> it.
>>
>
> We've seen a use case for arbitrary attributes in Keystone objects. Cloud
> administrators might want to store some metadata along with a user object.
> For example, a customer name/id and a couple of additional fields for contact
> information. The same might apply to projects and domains.
>
> So this is a very nice feature that should be kept around. It might be
> wrapped in some way (like in explicit unchecked "metadata" attribute) in a
> new API version though.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Centralized config management

2014-01-09 Thread Morgan Fainberg
I agree with Doug’s question, but would also extend the train of thought to ask 
why not help make Chef or Puppet better and cover more OpenStack 
use cases, rather than add yet another competing system?

Cheers,
Morgan
On January 9, 2014 at 10:24:06, Doug Hellmann (doug.hellm...@dreamhost.com) 
wrote:

What capabilities would this new service give us that existing, proven, 
configuration management tools like chef and puppet don't have?


On Thu, Jan 9, 2014 at 12:52 PM, Nachi Ueno  wrote:
Hi Flavio

Thank you for your input.
I agree with you. oslo.config isn't the right place to have server-side code.

How about oslo.configserver ?
For authentication, we can reuse keystone auth and oslo.rpc.

Best
Nachi


2014/1/9 Flavio Percoco :
> On 08/01/14 17:13 -0800, Nachi Ueno wrote:
>>
>> Hi folks
>>
>> OpenStack processes tend to have many config options, and many hosts.
>> It is a pain to manage these tons of config options.
>> Centralizing this management helps operation.
>>
>> We can use chef- or puppet-like tools; however,
>> sometimes each process depends on another process's configuration.
>> For example, nova depends on neutron configuration, etc.
>>
>> My idea is to have config server in oslo.config, and let cfg.CONF get
>> config from the server.
>> This way has several benefits.
>>
>> - We can get centralized management without modification on each
>> projects ( nova, neutron, etc)
>> - We can provide horizon for configuration
>>
>> This is bp for this proposal.
>> https://blueprints.launchpad.net/oslo/+spec/oslo-config-centralized
>>
>> I would very much appreciate any comments on this.
>
>
>
> I've thought about this as well. I like the overall idea of having a
> config server. However, I don't like the idea of having it within
> oslo.config. I'd prefer oslo.config to remain a library.
>
> Also, I think it would be more complex than just having a server that
> provides the configs. It'll need authentication like all other
> services in OpenStack and perhaps even support of encryption.
>
> I like the idea of a config registry, but as mentioned above, IMHO it ought
> to live under its own project.
>
> That's all I've got for now,
> FF
>
> --
> @flaper87
> Flavio Percoco
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
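
For concreteness, the "let cfg.CONF get config from the server" idea above
could look something like this on the client side (purely a sketch; the
endpoint layout, payload shape, and helper name are invented here, and the
groups/options would need to be registered locally before overriding):

    import json
    import urllib2

    from oslo.config import cfg

    CONF = cfg.CONF

    def pull_central_config(endpoint, service_name):
        # Fetch overrides from a hypothetical central config server,
        # e.g. {"DEFAULT": {"debug": true}, "database": {...}}.
        url = '%s/v1/configs/%s' % (endpoint, service_name)
        payload = json.loads(urllib2.urlopen(url).read())
        for group, options in payload.items():
            group_name = None if group == 'DEFAULT' else group
            for name, value in options.items():
                # set_override() makes the fetched value take precedence
                # over anything loaded from local config files.
                CONF.set_override(name, value, group=group_name)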

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [openstack][keystone] Is the user password too simple?

2014-01-01 Thread Morgan Fainberg
Brant,

That is fine for some cases, but we provide non-LDAP backends, and a
read/write backend. If we continue to provide a keystone-specific IdP
(likely we need to), these features are a must-have in the long run.  That is
just my view (and requests from real customers). It's all well and good to
recommend LDAP and handle all that logic in the IdP, but many use-cases
don't allow for that configuration.  I think providing partial or "toy"
implementations is suboptimal from a product completeness standpoint and for
the user and deployer experience.

--Morgan

On Wednesday, January 1, 2014, li-zheming wrote:

> hi Thomas:
>  Thank you for your suggestion. I agree with you: cracklib is useful for
> checking passwords. I only gave an example of setting a password, not a rule
> we would force on everyone. I think the password scheme needs more discussion.
>  I referred to the Linux password policy. The Linux password rule is
> configurable, like this:
>   PASS_MAX_DAYS   9
>   PASS_MIN_DAYS   0
>   PASS_MIN_LEN 5
>   PASS_WARN_AGE   7
> This is the general rule. If you want to enforce stronger passwords, you can
> use the pam_cracklib module.
>    So we could also make the password policy configurable. Someone who
> doesn't need strong passwords can set the general rule in keystone.conf;
> someone who does need strong passwords can load cracklib (or another library)
> to check passwords, with the rule set by the administrator.
> This is only my idea; can you give me more suggestions? Thanks!
> --lizheming
>
>
>
>  On December 30, 2013 at 23:15, "Thomas Goirand" (z...@debian.org) wrote:
>
>
> On 12/30/2013 02:55 PM, li-zheming wrote:
> > hi all:
> >   When creating a user, you can set the user's password. You can set the
> > password to a simple word like 'a'; the password is too simple, but there
> > is no limit. If someone wants to steal your password, it is easy (e.g. by
> > exhaustive search).
> > I consider that passwords must be restricted when set, like this:
> >   1. include upper- and lower-case letters
> >   2. include numbers
> >   3. include particular symbols, such as '_', '&'
> >   4. length > 8
> > The administrator can set the password rule.
>
> Hi,
>
> If you want to check for password complexity, do it the correct way. I'm
> used to *always* using a password generator that uses only lower case, and
> removes chars that can be confused with one another, so that you don't
> have l and 1, or O and 0, in my passwords. Yet, they are high entropy and
> long. If you just force me to add upper+lower case and add symbols, then
> you are just annoying me even with my very good passwords.
>
> > I want to propose a BP about this issue. Can you give me some advice
> > or ideas?
>
> Please use a password entropy function. Something like this:
> https://pypi.python.org/pypi/cracklib
>
> Thomas
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
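
A minimal sketch of the cracklib suggestion above (assuming the Python
bindings from the linked package are installed; the helper name is made up):

    import cracklib

    def validate_password(password):
        # FascistCheck() raises ValueError for dictionary words, short
        # passwords, and other low-entropy choices.
        try:
            cracklib.FascistCheck(password)
        except ValueError as err:
            raise ValueError('Password rejected: %s' % err)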
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request for Barbican

2013-12-12 Thread Morgan Fainberg
On December 12, 2013 at 14:32:36, Dolph Mathews (dolph.math...@gmail.com) wrote:

On Thu, Dec 12, 2013 at 2:58 PM, Adam Young  wrote:
On 12/04/2013 08:58 AM, Jarret Raim wrote:
While I am all for adding a new program, I think we should only add one
if we  
rule out all existing programs as a home. With that in mind, why not add this
to the keystone program? Perhaps that may require a tweak to keystone's mission
statement, but that is doable. I saw a partial answer to this somewhere
but not a full one.
From our point of view, Barbican can certainly help solve some problems
related to identity like SSH key management and client certs. However,
there is a wide array of functionality that Barbican will handle that is
not related to identity.


Some examples, there is some additional detail in our application if you
want to dig deeper [1].


* Symmetric key management - These keys are used for encryption of data at
rest in various places including Swift, Nova, Cinder, etc. Keys are
resources that roll up to a project, much like servers or load balancers,
but they have no direct relationship to an identity.

* SSL / TLS certificates - The management of certificate authorities and
the issuance of keys for SSL / TLS. Again, these are resources rather than
anything attached to identity.

* SSH Key Management - These could certainly be managed through keystone
if we think that's the right way to go about it, but from Barbican's point
of view, these are just another type of a key to be generated and tracked
that roll up to an identity.


* Client certificates - These are most likely tied to an identity, but
again, just managed as resources from a Barbican point of view.

* Raw Secret Storage - This functionality is usually used by applications
residing in a cloud. An app can use Barbican to store secrets such as
sensitive configuration files, encryption keys and the like. This data
belongs to the application rather than any particular user in Keystone.
For example, some Rackspace customers don't allow their application dev /
maintenance teams direct access to the Rackspace APIs.

* Boot Verification - This functionality is used as part of the trusted
boot functionality for transparent disk encryption on Nova.

* Randomness Source - Barbican manages HSMs which allow us to offer a
source of true randomness.



In short (ha), I would encourage everyone to think of keys / certificates
as resources managed by an API in much the same way we think of VMs being
managed by the Nova API. A consumer of Barbican (either as an OpenStack
service or a consumer of an OpenStack cloud) will have an API to create
and manage various types of secrets that are owned by their project.

My reason for keeping them separate is more practical:  the Keystone team is 
already somewhat overloaded.  I know that a couple of us have interest in 
contributing to Barbican, the question is time and prioritization. 

Unless there is some benefit to having both projects in the same program with 
essentially different teams, I think Barbican should proceed as is.  I 
personally plan on contributing to Barbican.

/me puts PTL hat on

++ I don't want Russell's job.

Keystone has a fairly narrow mission statement in my mind (come to think of it, 
I need to propose it to governance..), and that's basically to abstract away 
the problem of authenticating and authorizing the API users of other openstack 
services. Everything else, including identity management, key management, key 
distribution, quotas, etc, is just secondary fodder that we tend to help with 
along the way... but they should be first class problems in someone else's mind.

If we rolled everything together that kind of looks related to keystone under a 
big keystone program for the sake of organizational tidiness, I know I would be 
less effective as a "PTL" and that's a bit disheartening. That said, I'm always 
happy to help where I can.
 
The long and the short of it is that I can't argue that Barbican couldn't be 
considered a mechanism of "Identity" (in most contexts keys end up being a 
form of identity, and the management of that would fit nicely under the 
"Identity Program").  That being said, I also can't argue that Barbican 
shouldn't be its own top-level program.  It comes down to the best fit for 
OpenStack as a whole.

From a deployer standpoint, I don't think it will make any real difference if 
Barbican is in Identity or its own program.  Basically, it'll be a separate 
process to run in either case.  It will have its own rules and quirks.

From a developer standpoint, I don’t think it will make a significant 
difference (besides, perhaps where documentation lies).  The contributors to 
Keystone will contribute (or not) to Barbican and vice-versa based upon 
interest/time/needs.

From a community and communication standpoint (which is the important part 
here), I think it comes down to messaging and what Barbican is meant to be.  If 
we are happy messaging that it is a sepa

Re: [openstack-dev] [Keystone] policy has no effect because of hard coded assert_admin?

2013-12-12 Thread Morgan Fainberg
As Dolph stated, V3 is where the policy file provides protection.  This is one of the many 
reasons why I would encourage movement to using V3 Keystone over V2.

The V2 API is officially deprecated in the Icehouse cycle, and, as stated, I 
think that moving the decorator could potentially cause more compatibility 
issues than it solves.  I would be very concerned about breaking compatibility 
with deployments while maintaining the security behavior and the encouragement 
to move from V2 to V3.  I am also not convinced that passing the context down 
to the manager level is the right approach.  Changing where the protection 
occurs likely warrants a deeper discussion (perhaps in Atlanta?).

Cheers,
Morgan Fainberg

On December 12, 2013 at 10:32:40, Dolph Mathews (dolph.math...@gmail.com) wrote:

The policy file is protecting v3 API calls at the controller layer, but you're 
calling the v2 API. The policy decorators should be moved to the manager layer 
to protect both APIs equally... but we'd have to be very careful not to break 
deployments depending on the trivial "assert_admin" behavior (hence the reason 
we only wrapped v3 with the new policy decorators).


On Thu, Dec 12, 2013 at 1:41 AM, Qiu Yu  wrote:
Hi,

I was trying to fine-tune some keystone policy rules. Basically, I want to grant 
the "create_project" action to a user in the "ops" role. The following are my steps.

1. Adding a new user "usr1"
2. Creating new role "ops"
3. Granting this user a "ops" role in "service" tenant
4. Adding new lines to keystone policy file

        "ops_required": [["role:ops"]],
        "admin_or_ops": [["rule:admin_required"], ["rule:ops_required"]],

5. Change

        "identity:create_project": [["rule:admin_required"]],
    to
        "identity:create_project": [["rule:admin_or_ops"]],

6. Restart keystone service

keystone tenant-create with the credentials of user "usr1" still returns a 403 
Forbidden error.
“You are not authorized to perform the requested action, admin_required. (HTTP 
403)”

After a quick scan, it seems that the create_project function has a hard-coded 
assert_admin call [1], which does not respect settings in the policy file.

Any ideas why? Is it a bug to fix? Thanks!
BTW, I'm running keystone havana release with V2 API.

[1] 
https://github.com/openstack/keystone/blob/master/keystone/identity/controllers.py#L105
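
For reference, the pattern at [1] looks roughly like this (paraphrased from
the v2 controller, not an exact copy of the keystone source):

    from keystone.common import controller

    class Tenant(controller.V2Controller):
        def create_project(self, context, tenant_ref):
            # Hard-coded admin check: this never consults policy.json,
            # unlike the @controller.protected() decorators on the v3 API.
            self.assert_admin(context)
            # ... proceed to create the tenant ...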

Thanks,
--
Qiu Yu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystoneclient] [Keystone] [Solum] Last released version of keystoneclient does not work with python33

2013-12-10 Thread Morgan Fainberg
Hi Chmouel,

I think the correct way to sync is to run the update.py script and submit a 
review (I don’t think it’s changed recently).  

Cheers,
Morgan

On December 9, 2013 at 23:58:42, Chmouel Boudjnah (chmo...@enovance.com) wrote:


On Fri, Dec 6, 2013 at 4:30 PM, Dolph Mathews  wrote:

++ and the other errors I was hitting all have open patches in gerrit to see 
them fixed. It didn't seem like we were far off, but I haven't tested all these 
patches together yet to find out if they're just hiding even more problems. 
Either way, a py33 test run for keystoneclient will look very different very 
soon.


It seems that gettextutils needs to be updated from oslo-incubator to get 
keystoneclient to py3 (minus the tests/httpretty):

https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/openstack/common/gettextutils.py#L32

I would not mind doing it but I am not sure what's the process these days for 
syncing those modules (aside from running update.py).

Cheers,
Chmouel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystoneclient] [Keystone] [Solum] Last released version of keystoneclient does not work with python33

2013-12-05 Thread Morgan Fainberg
Hi Adrian,

I was going to say exactly what Jamie said (and that Jamie and Dolph had some 
conversations with the guy about the py3k support and the pending pull request 
for it).  Let me know if you want/need any other voices in communicating with 
the guy.

Cheers,
Morgan
On December 5, 2013 at 02:21:29, Jamie Lennox (jamielen...@redhat.com) wrote:

On Thu, 2013-12-05 at 05:08 +, Adrian Otto wrote:
> Hi Morgan!
> 
> 
> Stackforge projects can be configured to make the CLA optional. I am
> willing to speak to the HTTPretty maintainers about the benefits of
> stackforge. Do we happen to know any of them? If not I can track them
> down through email.
> 
> 
> Adrian
> 

The main guy's email is fairly easy to find from his github page. But
I'm pretty sure he won't be interested. He's not involved in OpenStack
at all. 

Feel free to try though. 

Jamie
> 
> 
> --
> Adrian
> 
> 
>  Original message 
> From: Morgan Fainberg 
> Date:12/04/2013 6:17 PM (GMT-08:00) 
> To: Jamie Lennox ,"OpenStack Development Mailing List (not for usage
> questions)" 
> Subject: Re: [openstack-dev] [Keystoneclient] [Keystone] [Solum] Last
> released version of keystoneclient does not work with python33 
> 
> 
> 
> On December 4, 2013 at 18:05:07, Jamie Lennox (jamielen...@redhat.com)
> wrote:
> 
> > 
> > On Wed, 2013-12-04 at 20:48 -0500, David Stanek wrote: 
> > > On Wed, Dec 4, 2013 at 6:44 PM, Adrian Otto 
> > >  wrote: 
> > > Jamie, 
> > > 
> > > Thanks for the guidance here. I am checking to see if any of 
> > > our developers might take an interest in helping with the 
> > > upstream work. At the very least, it might be nice to have 
> > > some understanding of how much work there is to be done in 
> > > HTTPretty. 
> > > 
> > > 
> > > (Dolph correct me if I am wrong, but...) 
> > > 
> > > 
> > > I don't think that there is much work to be done beyond getting
> > that 
> > > pull request merged upstream. Dolph ran the tests using the code
> > from 
> > > the pull request somewhat successfully. The errors that we saw
> > were 
> > > just in keystoneclient code. 
> > 
> > But I don't think that their own test suite runs under py33 with
> > that 
> > branch. So they've hit the main issues, but we won't get a release
> > in 
> > that state. 
> Should we offer to bring HTTPretty under something like stackforge and
> leverage our CI infrastructure? Not sure how open the
> owner/maintainers would be to this, but it would help to solve that
> issue… the downside is that pull requests would no longer be used (gerrit
> instead) and IIRC the CLA is still required for stackforge projects (might be
> a detractor). Just a passing thought (that might be irrelevant
> depending on the owner/maintainer’s point of view).
> 
> 
> 
> 
> —Morgan
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystoneclient] [Keystone] [Solum] Last released version of keystoneclient does not work with python33

2013-12-04 Thread Morgan Fainberg

On December 4, 2013 at 18:05:07, Jamie Lennox (jamielen...@redhat.com) wrote:


On Wed, 2013-12-04 at 20:48 -0500, David Stanek wrote: 
> On Wed, Dec 4, 2013 at 6:44 PM, Adrian Otto 
>  wrote: 
> Jamie, 
> 
> Thanks for the guidance here. I am checking to see if any of 
> our developers might take an interest in helping with the 
> upstream work. At the very least, it might be nice to have 
> some understanding of how much work there is to be done in 
> HTTPretty. 
> 
> 
> (Dolph correct me if I am wrong, but...) 
> 
> 
> I don't think that there is much work to be done beyond getting that 
> pull request merged upstream. Dolph ran the tests using the code from 
> the pull request somewhat successfully. The errors that we saw were 
> just in keystoneclient code. 

But I don't think that their own test suite runs under py33 with that 
branch. So they've hit the main issues, but we won't get a release in 
that state. 
Should we offer to bring HTTPretty under something like stackforge and leverage 
our CI infrastructure?  Not sure how open the owner/maintainers would be to 
this, but it would help to solve that issue… the downside is that pull requests 
would no longer be used (gerrit instead) and IIRC the CLA is still required for stackforge 
projects (might be a detractor).  Just a passing thought (that might be 
irrelevant depending on the owner/maintainer’s point of view).


—Morgan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] python-novaclient: uses deprecated keyring.backend.$keyring

2013-11-24 Thread Morgan Fainberg
Hi Thomas,

How pressing is this issue?  I know there is work being done to unify
token/auth implementation across the clients.  I want to have an idea of
the heat here so we can look at addressing this directly in novaclient if
it can't wait for the unification work to come down the line.

(Sent from mobile so I haven't been able to look up more specifics)

Cheers,
--Morgan

On Sunday, November 24, 2013, Thomas Goirand wrote:

> Hi,
>
> Someone sent a bug report against the python-novaclient package:
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=728470
>
> Could someone take care of this? FYI, the patch attached to the bug
> report seems wrong, according to "mitya57" in #debian-python (in OFTC),
> though the problem is real and needs to be addressed, and I don't have
> the time to investigate it myself right now.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-11-24 Thread Morgan Fainberg
> I hear a concerted effort to get this bootstrapped in Keystone.  We can do
> this if it is the voice of the majority.
>
>
> If we do:
>
> Keep KDS configuration separate from the Keystone configuration: the fact
> that they both point to the same host and port is temporary.  In fact, we
> should probably spin up a separate wsgi service/port inside Keystone for
> just the KDS.  This is not hard to do, and will support splitting it off
> into its own service.
>
> +1 on spinning up a new service/wsgi


> KDS should not show up in the Service catalog.  It is not an end user
> visible service and should not look like one to the rest of the world.
>
> I believe that KDS should be discoverable, but I agree that it is not an
end user service, so I am unsure of the best approach wrt the catalog.

The other concern is the library interfacing with KDS (I would assume this
goes into keystoneclient? At least for the time being).


> Once we have it up and running, we can move it to its own service or hand
> off to Barbican when appropriate.
>
> Are people OK with the current API implementation?  I didn;t see a lot of
> outside comment on the code review, and there were certainly some aspects
> of it that were unclear.
>
> I think the API is, if not ready to go, very close (maybe a single cleanup
revision).  If we are going to do this, let's get the spec done ASAP and get
the code in right away so we can get traction on it.  Icehouse milestones
will be coming through fast.  I think it is eminently possible to have
this in the repo and running fairly quickly with concerted effort.

The code might need minor tweaking to conform to the spec if it changes.
But as I recall, almost 100% of the back and forth at this point was about
whether it belongs in keystone.


> https://review.openstack.org/#/c/40692/
>
>

--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] tenant or project

2013-11-23 Thread Morgan Fainberg
In all honesty it doesn't matter which term we go with, as long as we are
consistent and define the meaning.  I think we can argue intuitive vs.
non-intuitive in this case into the ground.  I prefer "project" to "tenant",
but beyond it being a bit of an "overloaded" term, I really don't think anyone
will notice one way or another as long as everything is using the
same terminology.  We could call it "grouping-of-openstack-things" if we
wanted to (though I might have to pull some hair out if we go to that
terminology).

However, with all that in mind, we have made the choice to move toward
project (horizon, keystone, OSC, keystoneclient) and have some momentum
behind that push (plus newer projects already use the project
nomenclature).  Making a change back to tenant might prove a worse UX than
moving everything else into line (nova, I think, is the one real major hurdle
to get converted over, along with deprecation of the keystone v2 API).

Cheers,
--Morgan Fainberg


On Saturday, November 23, 2013, Caitlin Bestler wrote:

>  I have seen several people request that their users be members of two
> "projects" and that they be allow to publish objects that are "Shared" by
> multiple "projects".
>
> For some reason the people who request these complex data constructions
> always prefer to call the enclosing entity a "project". I have not heard
> such a request for multi-tenant objects and/or users.
>
> The important point is that the "mix and match" approach actually creates
> complex objects where it is difficult to determine who has the right to
> delete them, modify them, change who has access to them, etc. The far
> simpler rule
> is that all objects/resources have a single owner, whether that owner is
> called a "project" or a "tenant".
>
> The term "project", in common english usage, does not have any semantics
> implying exclusivity. Indeed we have "Cross project teams" and resources
> are commonly shared by multiple projects within one company.
>
> The fact that "projects" are typically things *within* a company is
> exactly why it is a poor term for the outermost enclosure of resources.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][py3] Usage of httpretty

2013-11-20 Thread Morgan Fainberg
I'd be more willing to toss in and help to maintain/fix appropriately
on StackForge if that is needed.  Though I am very much hoping
upstream can be used.

Cheers,
Morgan Fainberg

On Wed, Nov 20, 2013 at 7:21 PM, Chuck Short  wrote:
> Hi,
>
> So maybe if it gets to the point where it gets too be much of a porblem we
> should just put it on stackforge.
>
> Regards
> chuck
>
>
> On Wed, Nov 20, 2013 at 9:08 PM, Jamie Lennox 
> wrote:
>>
>> Chuck,
>>
>> So it is being used to handle stubbing returns from requests and httplib
>> rather than having to have fake handlers in place in our testing code,
>> or stubbing out the request library and continually having to update the
>> arguments being passed to keep the mocks working. From my looking around
>> it is the best library for this sort of job.
>>
>> When I evaluated it for keystoneclient, upstream
>> (https://github.com/gabrielfalcao/HTTPretty/ ) was quickly responsive
>> and had CI tests that seemed to be checking python 3 support. I haven't
>> seen as much happening recently as there are pull requests upstream for
>> python 3 fixes that just don't seem to be moving anywhere. The CI for
>> python 3 was also commented out at some point.
>>
>> It also turns out to be a PITA to package correctly. I attempted this
>> for fedora, and i know there was someone attempting the same for gentoo.
>> I have a pull request upstream that would at least get the dependencies
>> under control.
>>
>> I do not want to go back to stubbing the request library, or having a
>> fake client path that is only used in testing. However I have also
>> noticed it is the cause of at least some of our python 3 problems.
>>
>> If there are other libraries out there that can do the same job we
>> should consider them though i am holding some hope for upstream.
>>
>>
>> Jamie
>>
>>
>> On Wed, 2013-11-20 at 14:27 -0800, Morgan Fainberg wrote:
>> > Chuck,
>> >
>> > The reason to use httpretty is that it handles everything at the
>> > socket layer; this means if we change out urllib for requests or some
>> > other transport to make HTTP requests, we don't need to refactor
>> > every one of the mock/mox stub-outs to match the exact set of parameters
>> > being passed.  Httpretty makes managing this significantly easier
>> > (hence was the reasoning to move towards it).  Though, I'm sure Jamie
>> > Lennox can provide more insight into deeper specifics as he did most
>> > of the work to convert it.
>> >
>> > At least the above is my understanding of the reasoning.
>> >
>> > --Morgan
>> >
>> > On Wed, Nov 20, 2013 at 2:08 PM, Dolph Mathews 
>> > wrote:
>> > > I don't have a great answer -- do any projects depend on it other than
>> > > python-keystoneclient? I'm happy to see it removed -- I see the
>> > > immediate
>> > > benefit but it's obviously not significant relative to python 3
>> > > support.
>> > >
>> > > BTW, this exact issue is being tracked here-
>> > > https://bugs.launchpad.net/python-keystoneclient/+bug/1249165
>> > >
>> > >
>> > >
>> > >
>> > > On Wed, Nov 20, 2013 at 3:28 PM, Chuck Short
>> > > 
>> > > wrote:
>> > >>
>> > >> Hi,
>> > >>
>> > >> I was wondering for the reason behind the usage httpretty in
>> > >> python-keystoneclient. It seems to me like a total overkill for a
>> > >> test. It
>> > >> also has some problems with python3 support that is currently
>> > >> blocking
>> > >> python3 porting as well.
>> > >>
>> > >> Regards
>> > >> chuck
>> > >>
>> > >> ___
>> > >> OpenStack-dev mailing list
>> > >> OpenStack-dev@lists.openstack.org
>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >>
>> > >
>> > >
>> > >
>> > > --
>> > >
>> > > -Dolph
>> > >
>> > > ___
>> > > OpenStack-dev mailing list
>> > > OpenStack-dev@lists.openstack.org
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][py3] Usage of httpretty

2013-11-20 Thread Morgan Fainberg
Chuck,

The reason to use httpretty is that it handles everything at the
socket layer; this means if we change out urllib for requests or some
other transport to make HTTP requests, we don't need to refactor
every one of the mock/mox stub-outs to match the exact set of parameters
being passed.  Httpretty makes managing this significantly easier
(hence was the reasoning to move towards it).  Though, I'm sure Jamie
Lennox can provide more insight into deeper specifics as he did most
of the work to convert it.

At least the above is my understanding of the reasoning.
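
For instance, a test stubbed at the socket layer looks roughly like this (an
illustrative sketch; the URL and token payload are made up):

    import httpretty
    import requests

    @httpretty.activate
    def test_get_token():
        # Register a canned response at the socket level; any HTTP library
        # (requests, httplib, urllib) hitting this URL gets the fake reply.
        httpretty.register_uri(
            httpretty.POST, 'http://keystone.example.com/v2.0/tokens',
            body='{"access": {"token": {"id": "abc123"}}}',
            content_type='application/json')

        resp = requests.post('http://keystone.example.com/v2.0/tokens',
                             data='{}')
        assert resp.json()['access']['token']['id'] == 'abc123'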

--Morgan

On Wed, Nov 20, 2013 at 2:08 PM, Dolph Mathews  wrote:
> I don't have a great answer -- do any projects depend on it other than
> python-keystoneclient? I'm happy to see it removed -- I see the immediate
> benefit but it's obviously not significant relative to python 3 support.
>
> BTW, this exact issue is being tracked here-
> https://bugs.launchpad.net/python-keystoneclient/+bug/1249165
>
>
>
>
> On Wed, Nov 20, 2013 at 3:28 PM, Chuck Short 
> wrote:
>>
>> Hi,
>>
>> I was wondering for the reason behind the usage httpretty in
>> python-keystoneclient. It seems to me like a total overkill for a test. It
>> also has some problems with python3 support that is currently blocking
>> python3 porting as well.
>>
>> Regards
>> chuck
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
>
> -Dolph
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-18 Thread Morgan Fainberg
On Mon, Nov 18, 2013 at 1:58 PM, Christopher Armstrong
 wrote:
> On Mon, Nov 18, 2013 at 3:00 PM, Dan Smith  wrote:
>>
>> Sorry for the delay in responding to this...
>>
>> >   * Moved the _obj_classes registry magic out of ObjectMetaClass and
>> > into
>> > its own method for easier use.  Since this is a subclass based
>> > implementation,
>> > having a separate method feels more appropriate for a
>> > factory/registry
>> > pattern.
>>
>> This is actually how I had it in my initial design because I like
>> explicit registration. We went off on this MetaClass tangent, which buys
>> us certain things, but which also makes certain things quite difficult.
>>
>> Pros for metaclass approach:
>>  - Avoids having to decorate things (meh)
>>  - Automatic to the point of not being able to create an object type
>>without registering it even if you wanted to
>>
>> Cons for metaclass approach:
>>  - Maybe a bit too magical
>>  - Can make testing hard (see where we save/restore the registry
>>between each test)
>>  - I think it might make subclass implementations harder
>>  - Definitely more complicated to understand
>>
>> Chris much preferred the metaclass approach, so I'm including him here.
>> He had some reasoning that won out in the original discussion, although
>> I don't really remember what that was.
>>
>
> It's almost always possible to go without metaclasses without losing much
> relevant brevity, and improving clarity. I strongly recommend against their
> use.
>

I think this is simple and to the point.  ++  Metaclasses have their
places, but they really make it hard to see what is going on
in a straightforward manner.  I would prefer to keep metaclass use
limited (wherever possible), with the exception of abstract base
classes (which are straightforward enough to understand).  I think the
plus of "avoiding decorating things" isn't really a huge win, and
actually I think it takes clarity away.
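
For contrast, a sketch of the explicit-registration alternative being
advocated here (names are illustrative, not Solum's actual code):

    _obj_classes = {}

    def register_object(cls):
        # Explicit, decorator-based registration: the registry write is
        # visible right at the class definition, and tests can build
        # unregistered classes freely.
        _obj_classes.setdefault(cls.__name__, []).append(cls)
        return cls

    @register_object
    class Thing(object):
        pass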

--Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Split of the openstack-dev list (summary so far)

2013-11-17 Thread Morgan Fainberg
A couple of quick points.
1) I think that splitting the list is the wrong approach.
2) Perhaps we need to look at adding a mechanism that enforces the use
of tags in the subject line (send a nice "sorry, but you need to
indicate the topic(s) you are mailing about" error back if it doesn't
exist, keep an active list of these via infra?).
3) It might also make sense to have all stackforge projects include
[stackforge] in the topic.  That will help make filtering easier.

Finally, I notice the difference in a threaded client from a flat
client.  I don't think I could subscribe to this list without a
threaded client.

TL;DR Don't split the community, work to improve the tools for those
who are overwhelmed. (Email clients, enforcing use of subject tags,
etc)

On Sat, Nov 16, 2013 at 8:01 AM, Nick Chase  wrote:
> I am one of those horizontal people (working on docs and basically one of
> the people responsible at my organization for keeping a handle on what's
> going on) and I'm totally against a split.
>
> Of COURSE we need to maintain the integrated/incubated/proposed spectrum.
> Saying that we need to keep all traffic on one list isn't suggesting we do
> away with that. But it IS a spectrum, and we should maintain that. Splitting
> the list is definitely splitting the community and I agree that it's a
> poison pill.
>
> Integrating new projects into the community is just as important as
> integrating them into the codebase.  Without one the other won't happen
> nearly as effectively, and we do lose one of the strengths of the community
> as a whole.
>
> Part of this is psychology. Many of us are familiar with broken windows
> theory[1] in terms of code.  For those of you who aren't, the idea is based
> on an experiment where they left an expensive car in a crime-ridden
> neighborhood and nothing happened to it -- until they broke a window.  In
> coding it means you're less likely to kludge a patch to pristine code, but
> once you do you are more likely to do it again.
>
> Projects work hard to do things "the OpenStack way" because they feel from
> the start that they are already part of OpenStack, even if they aren't
> integrated.
>
> It also leads to another side effect, which I'll leave to you to decide
> whether it's good or bad.  We do have a culture of "there can be only one".
> Once a project is proposed in a space, that's it (mostly).  We typically
> don't have multiple projects in that space.  That's bad because it reduces
> innovation through competition, but it's good because we get focused
> development from the finite number of developers we have available. As I
> said, YMMV.
>
> Look, Monty is right: a good threaded client solves a multitude of problems.
> Definitely try that for a week before you set your mind on a decision.
>
> TL; DR Splitting the list is splitting the community, and that will lead to
> a decline in overall quality.
>
> [1] http://en.wikipedia.org/wiki/Broken_windows_theory
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Congress: an open policy framework

2013-11-14 Thread Morgan Fainberg
cepts being
described (there was a recent issue with two concepts in keystone
being called the same thing, and it has been a challenge to unwind
that).

There might be some value  to seeing some work being done to provide
more information to Keystone, but I think this will become more
apparent as Congress develops.

> Thoughts?
> Tim
>

Cheers,
Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: reverse the default Gerrit sort order

2013-11-10 Thread Morgan Fainberg
The point of my minus 1 was to allow me to toggle it as needed if it really
disrupted my workflow for hitting the important reviews.  Regardless of the
technical issues (rebuilding gerrit), I would support the default being
changed, given just a way to swap it back for an adjustment period.

My last email was a little short to convey that.
--Morgan Fainberg

On Sunday, November 10, 2013, Robert Collins wrote:

> On 9 November 2013 01:51, Morgan Fainberg wrote:
>
> >  I agree this would be interesting but if it isn't configurable (e.g. A
> > mechanism to support current behavior) I think this would very much
> disrupt
> > the workflow I use.  If configurable, I'd be open to it, if not
> configurable
> > I have to say -1 to this idea.
>
> Sadly it sounds like it's impossible (modulo rebuilding Gerrit)
> anyhow, so its moot. That said, the /point/ of such an experiment is
> to change peoples easy-path behaviour, so I take your -1 as evidence
> that this would change your behaviour, and thus something we should
> try :)
>
> -Rob
>
> --
> Robert Collins
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: reverse the default Gerrit sort order

2013-11-08 Thread Morgan Fainberg
On Friday, November 8, 2013, Joe Gordon wrote:

>
>
>
> On Thu, Nov 7, 2013 at 7:36 AM, Robert Collins wrote:
>
>> I've been thinking about review queues recently (since here at the
>> summit everyone is talking about reviews! :)).
>>
>> One thing that struck me today was that Gerrit makes it easier to
>> review the newest changes first, rather than the changes that have
>> been in the queue longest, or the changes that started going through
>> the review process first.
>>
>> So... is it possible to change the default sort order for Gerrit? How
>> hard is it to do - could we do an experiment on that and see if it
>> nudges the dial for reviewstats (just make the change, not ask anyone
>> to change their behaviour)?
>>
>> As for what sort order to choose, I'd be happy just getting data on a
>> different default sort order - it seems like the easiest thing would
>> be to reverse the current order, vs doing something more
>> sophisticated.
>>
>>
> ++, This sounds like an interesting idea. and we could try it out for a
> few weeks to see what happen.
>
>
 I agree this would be interesting, but if it isn't configurable (e.g. a
mechanism to support the current behavior) I think it would very much disrupt
the workflow I use.  If configurable, I'd be open to it; if not
configurable, I have to say -1 to this idea.

Cheers,
Morgan Fainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-10-30 Thread Morgan Fainberg
On Wed, Oct 30, 2013 at 7:37 PM, Robert Collins
 wrote:
> This is a bit of a social norms thread
>
> I've been consistently asking for tests in reviews for a while now,
> and I get the occasional push-back. I think this falls into a few
> broad camps:
>
> A - there is no test suite at all, adding one in unreasonable
> B - this thing cannot be tested in this context (e.g. functional tests
> are defined in a different tree)
> C - this particular thing is very hard to test
> D - testing this won't offer benefit
> E - other things like this in the project don't have tests
> F - submitter doesn't know how to write tests
> G - submitter doesn't have time to write tests
>
> Now, of these, I think it's fine not add tests in cases A, B, C in
> combination with D, and D.
>

I think that C is not really a valid case for allowing no tests (there are
always exceptions). I strongly believe that we can collaborate
(perhaps including some people who are better at test writing) to build
the tests and add them as co-authors in the commit message (or, at the
very least, have the test patchset depend on the main patchset).
"Hard to test" is too similar to F.

Just my general feelings on the topic.

Cheers,
Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] distibuted caching system in front of mysql server for openstack transactions

2013-10-28 Thread Morgan Fainberg
In light of what Dolph said with regard to Keystone, we are using
dogpile.cache to implement memoization in front of our driver calls.
It has the ability to cache directly as well, but memoization has been
effective (so far) for our use-case.
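
As a concrete illustration of that pattern (a minimal sketch using
dogpile.cache; the in-memory backend and the function being memoized are
examples, not Keystone's actual configuration):

    from dogpile.cache import make_region

    # In-memory backend for illustration; this can be pointed at
    # memcached, redis, etc. via configuration instead.
    region = make_region().configure('dogpile.cache.memory',
                                     expiration_time=60)

    @region.cache_on_arguments()
    def get_user(user_id):
        # Stand-in for an expensive driver call.
        return {'id': user_id}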

That being said, I am unsure if caching in front of MySQL is really
what we want.  I believe that we should be caching after processing
work (hence the memoization mechanism) instead of at the SQL layer.  This
also means we can be measured in what we cache (oh hey, it makes no
sense to cache X because it needs to be "real time" or there isn't a
performance issue with that query / call, but Y does a ton of
processing and is an expensive join/temptable query).  In my
experience, unless the whole application is designed with caching in
mind, caching something as broad as MySQL calls (or any SQL store) is
likely going to net exactly what Shawn Hartsock stated, adding a
second performance issue.

--Morgan

On Mon, Oct 28, 2013 at 10:05 AM, Shawn Hartsock  wrote:
>>
>> I once heard a quote.. "I had a performance problem, so I added caching.
>> now I have two performance problems."
>>
> this. 1,000 times this.
>
> Just to float this thought ... make sure it's considered...
>
> I've seen a *lot* of people misuse caching when what they really want is 
> memoization.
>
> * 
> http://stackoverflow.com/questions/1988804/what-is-memoization-and-how-can-i-use-it-in-python
> * 
> http://stackoverflow.com/questions/10879137/how-can-i-memoize-a-class-instantiation-in-python
>
> ... I'm not sure what you're trying to do. So YMMV, TTFN, BBQ.
>
> # Shawn Hartsock
>
> - Original Message -
>> From: "Clint Byrum" 
>> To: "openstack-dev" 
>> Sent: Monday, October 28, 2013 12:12:49 PM
>> Subject: Re: [openstack-dev] distibuted caching system in front of mysql 
>>  server for openstack transactions
>>
>> Excerpts from Dolph Mathews's message of 2013-10-28 08:40:19 -0700:
>> > It's not specific to mysql (or sql at all), but keystone is using
>> > dogpile.cache around driver calls to a similar effect.
>> >
>> >   http://dogpilecache.readthedocs.org/en/latest/
>> >
>> > It can persist to memcache, redis, etc.
>> >
>>
>> I once heard a quote.. "I had a performance problem, so I added caching.
>> now I have two performance problems."
>>
>> Caching is unbelievably awesome in the jobs it can do well. When the
>> problem is straight forward and the requirements are few, it is just the
>> right thing to relieve engineering pressure to make an application more
>> scalable.
>>
>> However, IMO, more than narrow, well defined cache usage is a sign
>> that the application needs some reworking to scale.
>>
>> I like the principle of "let's use dogpile so we don't have to reinvent
>> multi-level caching". However, let's make sure we look at each
>> performance issue individually, rather than just throwing them all in
>> a cache box and waving the memcache wand.
>>
>> >
>> > https://github.com/openstack/keystone/blob/master/keystone/common/cache/core.py
>> >
>> > On Fri, Oct 25, 2013 at 6:53 PM, Qing He  wrote:
>> >
>> > >  All,
>> > >
>> > > Has anyone looked at the options of putting a distributed caching system
>> > > in front of mysql server to improve performance? This should be similar
>> > > to
>> > > Oracle Coherence, or VMware VFabric SQLFire.
>> > >
>> > > ** **
>> > >
>> > > Thanks,
>> > >
>> > > ** **
>> > >
>> > > Qing
>> > >
>> > > ___
>> > > OpenStack-dev mailing list
>> > > OpenStack-dev@lists.openstack.org
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>> > >
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Remove vim modelines?

2013-10-24 Thread Morgan Fainberg
+1, and likely this should be added to hacking so they don't sneak back in
by accident / reviewers missing the line, since we've had them for so long.
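
A sketch of what such a hacking check might look like (illustrative; the
check number and the exact hacking registration plumbing are assumptions):

    import re

    VIM_MODELINE = re.compile(r'#\s*vim:')

    def no_vim_modelines(physical_line):
        """Hxxx: vim modelines should not be (re)introduced."""
        if VIM_MODELINE.match(physical_line.strip()):
            return 0, ('Hxxx: vim modeline found; editor configuration '
                       'does not belong in source files')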

On Thursday, October 24, 2013, Flavio Percoco wrote:

> On 24/10/13 13:38 +0100, Joe Gordon wrote:
>
>> Since the beginning of OpenStack we have had vim modelines all over the
>> codebase, but after seeing this patch https://review.openstack.org/#/c/50891/
>> I took a further look into vim modelines and think we should remove them.
>> Before going any further, I should point out these lines don't bother me
>> too
>> much but I figured if we could get consensus, then we could shrink our
>> codebase
>> by a little bit.
>>
>> Sidenote: This discussion is being moved to the mailing list because it
>> 'would
>> be better to have a mailing list thread about this rather than bits and
>> pieces
>> of discussion in gerrit' as this change requires multiple patches.
>> https://review.openstack.org/#/c/51295/.
>>
>>
>> Why remove them?
>>
>> * Modelines aren't supported by default in debian or ubuntu due to
>> security
>> reasons: https://wiki.python.org/moin/Vim
>> * Having modelines for vim means if someone wants we should support
>> modelines
>> for emacs (http://www.gnu.org/software/emacs/manual/html_mono/emacs.html#Specifying-File-Variables)
>> etc. as well.  And having a bunch of headers
>> for
>> different editors in each file seems like extra overhead.
>> * There are other ways of making sure tabstop is set correctly for python
>> files, see https://wiki.python.org/moin/Vim.  I am a Vim user myself and have
>> never used modelines.
>> * We have vim modelines in only 828 out of 1213 python files in nova
>> (68%), so
>> if anyone is using modelines today, then it only works 68% of the time in
>> nova
>> * Why have the same config 828 times for one repo alone?  This violates
>> the DRY
>> principle (Don't Repeat Yourself).
>>
>>
>> Related Patches:
>> https://review.openstack.org/#/c/51295/
>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:noboilerplate,n,z
>>
>>
>
> /me is a vim user!
>
> +1 on removing those lines!
>
>  best,
>> Joe
>>
>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> --
> @flaper87
> Flavio Percoco
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] excessively difficult to support both iso8601 0.1.4 and 0.1.8 as deps

2013-10-24 Thread Morgan Fainberg
It seems like adopting 0.1.8 is the right approach. If it doesn't work with
other projects, we should work to help those projects get updated to work
with it.

--Morgan

On Thursday, October 24, 2013, Zhi Yan Liu wrote:

> Hi all,
>
> Adopt 0.1.8 as iso8601 minimum version:
> https://review.openstack.org/#/c/53567/
>
> zhiyan
>
> On Thu, Oct 24, 2013 at 4:09 AM, Dolph Mathews wrote:
> >
> > On Wed, Oct 23, 2013 at 2:30 PM, Robert Collins wrote:
> >>
> >> On 24 October 2013 07:34, Mark Washenberger wrote:
> >> > Hi folks!
> >> >
> >> > 1) Adopt 0.1.8 as the minimum version in openstack-requirements.
> >> > 2) Do nothing (i.e. let Glance behavior depend on iso8601 in this way,
> >> > and
> >> > just fix the tests so they don't care about these extra formats)
> >> > 3) Make Glance work with the added formats even if 0.1.4 is installed.
> >>
> >> I think we should do (1) because both (2) will permit surprising,
> >> nonobvious changes in behaviour and (3) is just nasty engineering.
> >> Alternatively, add a (4) which is (2) with "whinge on startup if 0.1.4
> >> is installed" to make identifying this situation easy.
> >
> >
> I'm in favor of (1), unless there's a reason why 0.1.8 is not viable for
> > another project or packager, in which case, I've never heard the term
> > "whinge" before so there should definitely be some of that.
> >
> >>
> >>
> >> The last thing a new / upgraded deployment wants is something like
> >> nova, or a third party API script failing in nonobvious ways with no
> >> breadcrumbs to lead them to 'upgrade iso8601' as an answer.
> >>
> >> -Rob
> >>
> >> --
> >> Robert Collins
> >> Distinguished Technologist
> >> HP Converged Cloud
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org 
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > --
> >
> > -Dolph
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] design summit session proposal deadline

2013-10-16 Thread Morgan Fainberg
I agree that likely extensions and internal API discussions can become one
session. I think the API side of that won't need to fill a whole session.

On Wednesday, October 16, 2013, Adam Young wrote:

> On 10/16/2013 12:23 PM, Dolph Mathews wrote:
>
>> I'll be finalizing the design summit schedule [1] for keystone
>> following the weekly meeting [2] on Tuesday, October 22nd 18:00 UTC.
>> Please have your proposals submitted before then.
>>
>> So far I think everyone has done a GREAT job self-organizing the
>> proposed sessions to avoid overlap, but we currently have two more
>> proposals than we do slots. During the meeting, we'll review which
>> sessions should be split, combined or cut.
>>
>> Lastly, if you have comments on a particular session regarding scope
>> or scheduling, *please* take advantage of the new comments section at
>> the bottom of each session proposal. Such feedback is highly
>> appreciated!
>>
>> [1]: http://summit.openstack.org/cfp/topic/10
>> [2]: https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting
>>
>> Thanks!
>>
>> -Dolph
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> Some suggestions:
>
> V3 API Domain scoped tokens, and Henry Nash's purification of Assignments
> proposal are both dealing with the scoped and binding of authorization
> decisions.
>
>
> Internal API stabilization and Extensions are both about code management,
> and can be combined.  I think
>
> Auditing is going to be bigger than just Keystone, as it happens based on
> Policy enforcement.  I suspect that this session should be where we discuss
> the Keystone side of Policy.
>
> Token Revocation and the client and auth_token middleware are all related
> topics.
>
> We discussed Quota storage in Keystone last summit.  We have pretty good
> progress on the blueprint.  Do we really need to discuss this again, or do
> we just need to implement it?
>
> The HTML talk should probably pull in members from the Horizon team.  I
> would almost want to merge it with http://summit.openstack.org/**
> cfp/details/3  "UX and Future
> Direction of OpenStack Dashboard"  or
> http://summit.openstack.org/**cfp/details/161"Separate
>  Horizon and OpenStack Dashboard"  as we can discuss how we will
> split responsibility for managing administration and customization.  If
> they have an open slot, we might be able to move this to a Horizon talk.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] stabilizing internal APIs

2013-10-15 Thread Morgan Fainberg
On Tue, Oct 15, 2013 at 5:05 PM, Doug Hellmann
wrote:
>
>
> Making updates easier would be nice, and the abstract base class work
> should help with that. On the other hand, as a deployer who has had to
> rewrite our custom integration a few times in the past 6 months or so, I
> would also welcome some stability in the plugin APIs. I understand the need
> to provide flexibility and updated features for new REST APIs, but I hope
> we can find a way to migrate more smoothly or make newer features optional
> in the plugins themselves.
>
> Agreed, the ABC changes that are slowly making their way in will most
assuredly help some.


> DreamHost will have several developers at the summit; is there a session
> to talk about approaches for this that we should make sure to attend?
>
> I do not believe there is currently a session slated for anything like
this.  You could propose a session for this (or I could); obviously we
would need enough interest to make it worth committing a whole session to.
 Maybe piggy-back this one onto another session already proposed?  Maybe
this should be a broader-than-keystone-only topic?
--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] stabilizing internal APIs

2013-10-15 Thread Morgan Fainberg
Hi Fellow Developers (and those who interface with keystone at the code
level),

I wanted to reach out to the ML and see what the opinions were about
starting to stabilize the internal APIs (wherever possible) for the
keystone project (I would also like to see similar work done in the other
fully integrated projects, but I am starting with a smaller audience).

I believe that (especially in the case of things like identity_api ->
assignment_api) we should start supporting the concept of Release -1
(current release and previous release) for a given internal API.  While
this isn't feasible everywhere, where we can't, maintaining at least an
exception that indicates what "should" be called instead would be ideal for
the release in which the change occurred.

This will significantly help any developers who have custom code that
relies on these APIs to find the locations of our new internal APIs.
Perhaps a "stub" function/method replacement that simply raises a "go
use this new method/function" type exception would be sufficient and make
porting code easier.
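
To make this concrete, a stub might look roughly like this (a sketch only;
all of these names are hypothetical, not actual Keystone internals):

class MovedAPIError(Exception):
    """Raised by a stub whose real implementation has moved elsewhere."""


class IdentityDriver(object):

    def get_project(self, project_id):
        # Hypothetical stub kept around for one release after the method
        # moved to the assignment driver; it tells callers exactly what
        # to use now instead of failing with an unhelpful error.
        raise MovedAPIError(
            'IdentityDriver.get_project() has moved; '
            'call AssignmentDriver.get_project() instead.')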

This would require, at the start of each release, a "cleanup" patchset that
removes the stubs or old methods/functions that are by then fully deprecated.

So with that, let's talk about this more in depth and see where it lands.  I
want to weed out any potential pitfalls before a concept like this makes it
anywhere beyond some neural misfires that came up in a passing discussion.
It may just not be feasible/worth the effort in the grand scheme of things.

Cheers,
Morgan Fainberg

IRC: morganfainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] UpgradeImpact commit message tag

2013-10-14 Thread Morgan Fainberg
I think this could be a significant win for all projects (I'd like to see
it adopted beyond nova).  This should help ferret out the upgrade impacts
that sometimes sneak up on us (and cause issues later on).

+1 from me.

Cheers,
Morgan Fainberg

IRC: morganfainberg


On Mon, Oct 14, 2013 at 4:51 PM, Robert Collins
wrote:

> I think it's good to call that out, but also perhaps there should be
> some discussion with such deployers about such changes?
>
> -Rob
>
> On 15 October 2013 12:32, Russell Bryant  wrote:
> > I was talking to Dan Smith today about a patch series I was starting to
> > work on.  These changes affect people doing continuous deployment, so we
> > came up with the idea of tagging the commits with "UpgradeImpact",
> > similar to how we use DocImpact for changes that affect docs.
> >
> > This seems like a good convention to start using for all changes that
> > affect upgrades in some way.
> >
> > Any comments/suggestions/objections?  If not, I'll get this documented
> on:
> >
> >
> https://wiki.openstack.org/wiki/GitCommitMessages#Including_external_references
> >
> > --
> > Russell Bryant
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2013-10-05 Thread Morgan Fainberg
Hello everyone!  I would like propose my candidacy for the OpenStack
Technical Committee.

A bit about me
==============

I have been working with OpenStack in a professional capacity since before
the Essex release and have participated in each of the summits since that
time.  Throughout that timeframe I've been an active participant in most of
the projects that make up OpenStack through active communication and
collaboration. I am currently a member of the Keystone Core team and almost
always available / active on IRC (ensuring I have a good pulse on most of
the projects at any given time).

During the Havana cycle I have been mostly focused on Keystone, however, I
have also contributed to a number of other OpenStack projects.  Over the
last cycle, I have made sure to attend the TC meetings and keep apprised of
the content discussed and decisions made.

Platform
========

As a developer, operator, and consumer of OpenStack, I believe that
OpenStack (and, more generally speaking, IaaS) is the "way of the future"
when it comes to managing and maintaining computational resources.  I think
that OpenStack in particular is a defining force within this space, and as
such has a responsibility to help the industry grow in a positive way.
 This, in no small part, includes bringing new projects online (from
incubation to full integration) to fill the gaps in functional coverage and
ensuring that both new and already integrated projects are ready for
prime-time use.

For the next couple of cycles (and beyond), I see the TC's role as not only
growing the platform in a stable and maintainable way, but also being an
active advocate for the diverse set of consumers of OpenStack (public cloud
providers, private cloud providers, enterprise integration, development
environments and platforms, QA environments, end consumers of the various
cloud installations, etc).  The TC should continue to take an active role
(in some cases a more active one) in setting project-wide scope and direction.  This
active participation will include continued direct collaboration with the
Board of Directors as well as the many contributors (both corporate and
individual) of OpenStack.

The TC needs to have diverse representation from all corners of the
community to ensure that we keep the project moving forward in a way that
meets the real needs of all OpenStack users.  As part of the TC, I would
bring the experience of running multiple diverse private clouds for large
enterprises as well as an understanding of the need to maintain and grow
the accessibility of OpenStack for all use cases.

I look forward to continuing to contribute as a member of the Keystone Core
team and to being a vocal advocate for increased usability and ease of
deployment for all of OpenStack.  I would be honored to also serve as a
member of the TC, collaborating and helping to guide the technical
direction of the project as a whole.

Cheers,
Morgan Fainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Client and Policy

2013-09-20 Thread Morgan Fainberg
On Fri, Sep 20, 2013 at 3:20 PM, Monty Taylor  wrote:

>
> What if we rethought the organization just a little bit. Instead of
> having oslo-incubator from which we copy code, and then oslo.* that we
> consume as libraries, what if:
>
> - we split all oslo modules into their own repos from the start
> - we make update.py a utility that groks copying from a directory that
> contains a bunch of repos - so that a person wanting to use it might have:
>   ~/src
>   ~/src/oslo
>   ~/src/oslo/oslo.db
>   ~/src/oslo/oslo.policy
>   and then when they run update.py ~/src/oslo ~/src/nova they get the
> same results (the copying and name changing and whatnot)
>
>
I like this structure a little more than the current structure.  It feels
more like standard Python modules.

If the bonus is to also allow more granularity on reviewing (e.g.
per-module cores), I think that there is another win to be had there.


> That way, we can add per-module additional core easily like we can for
> released oslo modules (like hacking and pbr have now)
>
> Also, that would mean that moving from copying to releasing is more a
> matter of just making a release than it is of doing the git magic to
> split the repo out into a separate one and then adding the new repo to
> gerrit.
>
>
I like this approach.  It does make the barrier to go from copy->release a
bit lower.  A lower barrier is better (not that everything will go that
route immediately).

> Thoughts?
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--Morgan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Review request for adding ordereddict

2013-09-19 Thread Morgan Fainberg
I would venture that this easily falls into the same category as any FFE
(and likewise should be treated as such).  From a cursory glance, it
seems that this is likely packaged up already (at least for RHEL, in the
EPEL repository), but I don't have a completely inclusive view of what is
out there.  Also, is there a minimum version that you require?  I think
getting the packagers involved (and knowing if there is a minimum version
requirement) is the best way to know if it should be accepted.
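
As an aside, the conditional-import idea Mark raises below would look
something like this (a sketch; it assumes the PyPI backport is importable
as "ordereddict"):

try:
    # Python 2.7+: OrderedDict is in the standard library.
    from collections import OrderedDict
except ImportError:
    # Python 2.6: fall back to the ordereddict backport from PyPI.
    from ordereddict import OrderedDict

import ConfigParser

# ConfigParser only preserves section order on 2.6 if told to.
parser = ConfigParser.RawConfigParser(dict_type=OrderedDict)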

Cheers,
Morgan Fainberg

IRC: morganfainberg


On Thu, Sep 19, 2013 at 2:08 PM, Mark Washenberger <
mark.washenber...@markwash.net> wrote:

> I respect the desires of packagers to have a stable environment, but I'm
> also very sad about having to copy the OrderedDict code directly into
> Glance. Can we actually verify that this is a problem for packagers? (I.e.
> not already in their repos?)
>
> It also may be possible that packagers who do not support python2.6 could
> completely avoid this problem if we change how the code is written. Does it
> seem possible to only depend on ordereddict if collections.ordereddict does
> not exist?
>
>
> On Mon, Sep 16, 2013 at 11:27 AM, Dolph Mathews 
> wrote:
>
>>
>> On Mon, Sep 16, 2013 at 11:34 AM, Paul Bourke wrote:
>>
>>> Hi all,
>>>
>>> I've submitted https://review.openstack.org/#/c/46474/ to add
>>> ordereddict to openstack/requirements.
>>>
>>>
>> Related thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2013-September/015121.html
>>
>>
>>> The reasoning behind the change is that we want ConfigParser to store
>>> sections in the order they're read, which is the default behavior in
>>> py2.7[1], but it must be specified in py2.6.
>>>
>>> The following two Glance features depend on this:
>>>
>>> https://review.openstack.org/#/c/46268/
>>> https://review.openstack.org/#/c/46283/
>>>
>>> Can someone take a look at this change?
>>>
>>> Thanks,
>>> -Paul
>>>
>>> [1]
>>> http://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>>
>> -Dolph
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When will we stop adding new Python modules to requirements

2013-09-15 Thread Morgan Fainberg
Thomas,

A couple of those appear to be managed by the OpenStack community (e.g.
diskimage-builder), which likely should be included in either case.  I
would say if it is covered under the OpenStack proper list of git repos
(e.g. https://github.com/openstack ) it should likely be included for
packaging (if it requires packaging).  With that being said, I agree that
it makes sense for other (non-OpenStack) libraries to be added carefully
late in the cycle.  Perhaps the best approach would be to limit additions
to before the release Feature Freeze.

Cheers,
Morgan Fainberg

On Sunday, September 15, 2013, Thomas Goirand wrote:

> Hi,
>
> Short version: the global-requirements.txt should be frozen asap because
> otherwise, packages won't be ready.
>
> Longer version:
>
> I'm getting worried that, even after Havana b3 is released, we are still
> getting some new Python modules added to the requirements repository.
>
> In Debian, every new package has to go through a review process, called
> the NEW queue. FTP masters review both the freeness of a package, the
> exactitude of debian/copyright, and the general packaging quality.
> Unfortunately, this review process can take a lot of time. At best, it
> is processed within a week (which is what happened for more than a year
> before November 2012), but in the worse case, it could take up to a
> month or 2 (this was the case up to the end of last summer, thanks to
> new manpower in the FTP team).
>
> So I need to point it out: adding new Python modules at the end of a
> release adds more risk that I will be missing some Python modules within
> the Debian archive when Havana is released.
>
> I wouldn't have to write this mail if this was only a single module or
> something. Though that's not the case, we have 4 packages added this
> last week:
> - falcon
> - diskimage-builder
> - tripleo-image-elements
> - sphinxcontrib-programoutput
>
> I do understand that they might be absolutely needed, though it would be
> nice if additions to the global-requirements.txt file stopped at some
> point. And as far as I am concerned, the sooner the better, so that
> there's enough time to get the packages packaged, checked and tested,
> uploaded, approved by the FTP masters, and ready in time in Sid.
>
> Cheers,
>
> Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-12 Thread Morgan Fainberg
Another issue to consider with regards to backup tables is the length of
time that can elapse between an upgrade and a downgrade.  What if you
upgrade, then see an issue and downgrade an hour later?  Is the backup
table data still relevant?  Would you end up putting stale/broken data back
in place because other things changed in the meantime?  At a certain point,
restoring from backup is really the only sane option, and that threshold
isn't an especially long period of time.
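
For illustration, the backup-table pattern being discussed might look
roughly like this in a sqlalchemy-migrate script (a sketch only: the
table and column names are made up, and it relies on migrate's changeset
extensions for the column create/drop calls):

import migrate.changeset  # noqa: adds create()/drop() to SQLAlchemy columns
from sqlalchemy import Column, MetaData, Table, Text


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    # Preserve the user-provided data we cannot reconstruct before
    # dropping it, so downgrade() has something to restore.
    migrate_engine.execute(
        'CREATE TABLE backup_instances_migration_214 AS '
        'SELECT id, user_data FROM instances')
    instances = Table('instances', meta, autoload=True)
    instances.c.user_data.drop()


def downgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    instances = Table('instances', meta, autoload=True)
    # Re-create the column, copy the preserved values back (stale or
    # not, per the caveat above), and discard the backup table.
    Column('user_data', Text).create(instances)
    migrate_engine.execute(
        'UPDATE instances SET user_data = '
        '(SELECT b.user_data FROM backup_instances_migration_214 b '
        'WHERE b.id = instances.id)')
    migrate_engine.execute('DROP TABLE backup_instances_migration_214')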

Cheers,
Morgan Fainberg

IRC: morganfainberg


On Wed, Sep 11, 2013 at 10:30 PM, Robert Collins
wrote:

> I think having backup tables adds substantial systematic complexity,
> for a small use case.
>
> Perhaps a better answer is to document 'take a backup here' as part
> of the upgrade documentation and let sysadmins make a risk assessment.
> We can note that downgrades are not possible.
>
> Even in a public cloud doing trunk deploys, taking a backup shouldn't
> be a big deal: *those* situations are where you expect backups to be
> well understood; and small clouds don't have data scale issues to
> worry about.
>
> -Rob
>
> -Rob
>
> On 12 September 2013 17:09, Joshua Hesketh 
> wrote:
> > On 9/4/13 6:47 AM, Michael Still wrote:
> >>
> >> On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
> >>  wrote:
> >>>
> >>> +1 I think we should be reconstructing data where we can, but keeping
> >>> track of
> >>> deleted data in a backup table so that we can restore it on a downgrade
> >>> seems
> >>> like overkill.
> >>
> >> I guess it comes down to use case... Do we honestly expect admins to
> >> regret an upgrade and downgrade instead of just restoring from
> >> backup? If so, then we need to have backup tables for the cases where
> >> we can't reconstruct the data (i.e. it was provided by users and
> >> therefore not something we can calculate).
> >
> >
> > So assuming we don't keep the data in some kind of backup state is there
> a
> > way we should be documenting which migrations are backwards incompatible?
> > Perhaps there should be different classifications for data-backwards
> > incompatible and schema incompatibilities.
> >
> > Having given it some more thought, I think I would like to see migrations
> > keep backups of obsolete data. I don't think it is unforeseeable that an
> > administrator would upgrade a test instance (or less likely, a
> production)
> > by accident or not realising their backups are corrupted, outdated or
> > invalid. Being able to roll back from this point could be quite useful. I
> > think potentially more useful than that though is that if somebody ever
> > needs to go back and look at some data that would otherwise be lost it is
> > still in the backup table.
> >
> > As such I think it might be good to see all migrations be downgradable
> > through the use of backup tables where necessary. To couple this I think
> it
> > would be good to have a standard for backup table naming and maybe schema
> > (similar to shadow tables) as well as an official list of backup tables
> in
> > the documentation stating which migration they were introduced and how to
> > expire them.
> >
> > In regards to the backup schema, it could be exactly the same as the
> table
> > being backed up (my preference) or the backup schema could contain just
> the
> > lost columns/changes.
> >
> > In regards to the name, I quite like "backup_table-name_migration_214".
> The
> > backup table name could also contain a description of what is backed up
> (for
> > example, 'uuid_column').
> >
> > In terms of expiry they could be dropped after a certain release/version
> or
> > left to the administrator to clear out similar to shadow tables.
> >
> > Thoughts?
> >
> > Cheers,
> > Josh
> >
> > --
> > Rackspace Australia
> >
> >>
> >> Michael
> >>
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] upgrade tox - now with less slowness!

2013-09-04 Thread Morgan Fainberg
NICE!!


On Wed, Sep 4, 2013 at 11:05 AM, Dan Smith  wrote:

> > Because we landed a patch to tox upstream to use setup.py develop
> > instead of sdist+install like our run_tests.sh scripts do - this means
> > that with the new tox config changes, tox runs should be just as quick
> > as run_tests.sh runs.
>
> So. Freaking. Awesome.
>
> --Dan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Devstack] is dogpile.cache a requirement?

2013-09-04 Thread Morgan Fainberg
Depending on dogpile.cache was an intentional design choice, driven by the
decorator-based mechanism of the caching implementation.  It should have
been installed in devstack, since it is listed in requirements.txt.  Making
it an optional import is likely going to take a significant amount of
refactoring.  My guess is that it's a bit late in the cycle for that,
but we will see what comes up.
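
For anyone who hasn't looked at dogpile.cache, the decorator pattern goes
roughly like this (a generic sketch, not Keystone's actual code):

from dogpile.cache import make_region

# Configure a region once; keystone wires this up from its [cache]
# config options rather than hard-coding a backend like this.
region = make_region().configure('dogpile.cache.memory')


def expensive_backend_lookup(user_id):
    # Stand-in for a real driver call.
    return {'id': user_id}


@region.cache_on_arguments()
def get_user(user_id):
    # The decorator caches the return value keyed on the arguments;
    # the body only runs on a cache miss.
    return expensive_backend_lookup(user_id)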

--Morgan Fainberg
IRC: morganfainberg


On Wed, Sep 4, 2013 at 8:21 AM, Dolph Mathews wrote:

>
> On Wed, Sep 4, 2013 at 9:58 AM, David Stanek  wrote:
>
>>
>>
>> On Wed, Sep 4, 2013 at 10:23 AM, Dolph Mathews 
>> wrote:
>>
>>>
>>> On Wed, Sep 4, 2013 at 9:14 AM, Salvatore Orlando 
>>> wrote:
>>>
>>>> whenever I run devstack, keystone fails to start because dogpile.cache
>>>> is not installed; this is easily solved by installing it, but I wonder if
>>>> it should be in requirements.txt
>>>> Also, since the cache appears to be disabled by default (and I'm not
>>>> enabling it in my localrc), I'm not sure why I am hitting this error, as I
>>>> would expect the caching module to not be loaded at all.
>>>>
>>>>
>>> That sounds like a bug! It should only be a hard requirement if
>>> keystone.conf [cache] enabled=True
>>>
>>>
>>>
>> Currently keystone.assignment.core imports keystone.common.cache which
>> ends up depending on dogpile.  The current implementation does depend on
>> dogpile even if caching isn't being used.
>>
>
> ++ I just poked around with making it an optional dependency and it looks
> like it would require quite a bit of refactoring... probably too much this
> late in the cycle.
>
>
>>
>>
>>
>> --
>> David
>> blog: http://www.traceback.org
>> twitter: http://twitter.com/dstanek
>> www: http://dstanek.com
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> -Dolph
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Why does nova.network.neutronv2.get_client(context, admin=True) drop auth_token?

2013-08-28 Thread Morgan Fainberg
On Wed, Aug 28, 2013 at 5:22 PM, Yongsheng Gong wrote:

> For admin, we must use admin token.  In general, the token from API
> context is not of role admin.
>
>
If this functionality is supposed to be available to non-admin users,
wouldn't it be easier to grant them access directly (maybe via RBAC),
instead of escalating permissions?  I'll admit I don't know why this
needs escalation, but it stands out as an odd approach in my mind.


> I think the BP can help
> https://blueprints.launchpad.net/keystone/+spec/reuse-token
>

This isn't likely what you are looking for.  It would still require lookups
to the backend for a number of reasons (not listed, as I don't think it is
relevant for this conversation).
--
Morgan Fainberg

IRC: morganfainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] driver/pluggable base classes and ABCMeta

2013-08-21 Thread Morgan Fainberg
I've been doing some pondering on how Keystone handles the various
pluggable systems with its Manager / Driver architecture.

Currently we implement the base driver class as follows:

There is a driver object that has a number of reference functions defined
on it; each of these functions typically raises NotImplemented() and has a
docstring describing the expected behavior.  A simple example of this would
be the Token Driver base class.  A complex example would be the Identity
Driver base class.

Example:

class AbstractBaseClassTest(object):

    def abstract_method1(self, *args, **kwargs):
        # Note: NotImplementedError is the raisable exception; the
        # NotImplemented singleton itself is not.
        raise NotImplementedError()

    def abstract_method2(self, *args, **kwargs):
        ...


This type of system is not inherently bad, nor flawed, but there are some
cases where I've run into situations that raise a NotImplemented() error
unexpectedly (usually in custom code I've had to write around a driver, or
when replacing a driver with an "internal to my company" implementation).


For those not familiar with ABCMeta, abstract base classes, and the abc
module:

In a model that uses an abstract metaclass, the base class methods that
need to be overridden are decorated with the "abstractmethod" decorator
from the "abc" module.  This decorator when coupled with the ABCMeta
metaclass, requires all abstract methods to be overridden by the class that
inherits from the base class before the class can be instantiated.
 Similarly there is an "abstractproperty" decorator for @property methods.

The exception raised looks like:

TypeError: Can't instantiate abstract class TestClass with abstract methods
AbsTest1, AbsTest2


The benefits are twofold.  First, this means that a new driver could not
be implemented without overriding the expected methods (no raising
"NotImplemented()" unintentionally), and it guarantees that even seldom-used
expected functions would be defined on the driver implementations.  The
second benefit is that these abstract methods can be called via the standard
super() call; therefore, if there is some base functionality that should be
used (or you legitimately want to raise NotImplemented()), it is possible
to have that defined on the parent class.

Example abstract base class (using the six module instead of directly
setting __metaclass__):

import abc

import six


class AbstractBaseClassTest(six.with_metaclass(abc.ABCMeta)):

    @abc.abstractmethod
    def abstract_method1(self, int_arg):
        # We can do something here instead of raising
        # NotImplementedError(); subclasses can reach this via super().
        return int_arg + 1
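
And, for completeness, a subclass can still reach that base behavior
through the normal super() call (hypothetical subclass):

class ConcreteDriver(AbstractBaseClassTest):

    def abstract_method1(self, int_arg):
        # Reuse the base class behavior, then extend it.
        base_result = super(ConcreteDriver, self).abstract_method1(int_arg)
        return base_result * 2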


On to the downsides: the compatibility shim between Python 2.7 and Python 3
(using the six library) is not the most pleasing thing to look at.  It
requires the class to inherit from the result of a function call,
six.with_metaclass().  I also have not looked at the performance
implications of this; however, at first blush, it doesn't look like it
should be significant.  There are possibly other pitfalls that haven't
come to mind as of yet.

In short, the drivers should probably be actual abstract classes, since
that is what they effectively are.  I've seen this functionality used in
both Neutron and Ironic.  I could see it providing some benefits from it in
Keystone.  I wanted to get the general feeling from the Mailing List on
using abstracted base classes in lieu of our current implementation prior
to proposing a Blueprint, etc.  I don't see this as a Havana implementation
but possibly something to consider for Icehouse.

I don't know if this would really bring us a huge win in Keystone,
but I think it would make understanding what should be implemented when
subclassing for driver development (and similar pluggable systems) a bit
more clear.

Cheers!
--
Morgan Fainberg
Sr. Software Architect | Metacloud, Inc
Email: m...@metacloud.com
IRC: morganfainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Motion to start using Gerrit for TC votes

2013-08-06 Thread Morgan Fainberg
On Tue, Aug 6, 2013 at 1:32 PM, Monty Taylor  wrote:

> Hi!
>
> Currently, we make motions by email, then we discuss them by mailing
> list, then we discuss them more in IRC, then we vote on them - at which
> point the actual thing voted on may or may not get recorded somewhere
> easy to find.
>
> What if instead we had a repo with a bunch of ReStructureText in it -
> perhaps a copy of the TC charter and then a dir for additional things
> the TC has decided. That repo would be autopublished to a non-wiki
> website ... and the core team for the repo was the TC. EXCEPT, we didn't
> do a 2 core +2 thing ... we'd have some slightly different rules. Such
> as - certain number of days it has to be open, certain number of +1
> votes, etc. And then only the TC chair has the actual APRV vote, which
> is used to codify the vote tallies.
>
> This would allow for clear voting by both the TC and others - is
> consistent with tooling we ALL know how to use, and has the benefit of
> producing a clear published record of the results when it's done. That
> way also, TC members and others can do a decent amount of the actual
> discussion outside of TC meeting, and we can save meeting time for
> resolution of issues/questions that simply cannot be sorted out via
> gerrit process.
>
> Monty
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I really like this concept.  This would be a very nice level of
transparency (not that the TC hides anything) and makes it much easier to
see what is/has been going on and how we got there.  It also ensures we
have the comments recorded along the way.

Cheers,
Morgan Fainberg

IRC: morganfainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dropping or weakening the 'only import modules' style guideline - H302

2013-08-06 Thread Morgan Fainberg
While I'm torn on this as a developer, it comes down to ease of
understanding the code.  In all cases, it is easier to understand where
something comes from if you only import modules.  Enforcing the import of
modules also tends to ensure namespace conflicts don't occur as often.
When it comes to review, I am going to agree with Sean here: it is a boon
on large changes.  I am against lessening/removing H302, but I understand
why people want it eased up.
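
For anyone who hasn't internalized H302, it boils down to this (an
illustrative sketch; the keystone names are real, the function is not):

# H302-compliant: import the module and qualify the name at the call
# site, so a reviewer always sees where a name comes from.
from keystone import exception


def lookup(user_id):
    raise exception.NotFound()


# H302 violation: importing the name directly hides its origin once
# you are hundreds of lines away from the import block.
# from keystone.exception import NotFound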

Cheers,
Morgan Fainberg

IRC: morganfainberg


On Tue, Aug 6, 2013 at 1:18 PM, Christopher Armstrong <
chris.armstr...@rackspace.com> wrote:

> On Tue, Aug 6, 2013 at 6:32 AM, Sean Dague  wrote:
>
>>
>> The reason we go hard and fast on certain rules is to reduce review time
>> by people. If something is up for debate we get bikeshedding in reviews
>> where one reviewer tells someone to do it one way, 2 days later they update
>> their review, another reviewer comes in and tells them to do it the
>> otherway. (This is not theoretical, it happens quite often, if you do a lot
>> of reviews you see it all the time.) It also ends up being something
>> reviewers can stop caring about, because the machine will pick it up.
>> Giving them the ability to focus on higher order issues, and still keeping
>> the code from natural entropy.
>>
>> MUST == computer can do it, less work for core review time (which is
>> realistically one of our most constrained resources in OpenStack)
>> MAY == humans have to make a judgement call, which means more work for
>> our already constrained review teams
>>
>> I've found H302 to really be useful on reviewing large chunks of code
>> I've not been in much before. And get seriously annoyed being in projects
>> that don't have it enforced yet (tempest is guilty of that). Being able to
>> quickly know what namespace things are out of saves time.
>>
>
>
> I think it's really unfortunate that people will block patches based on
> stylistic concerns. The answer, IMO, is to codify in policy that stylistic
> issues *cannot* block a patch from landing.
>
> I recommend having humility in our reviews. Instead of
>
> "This bike shed needs to be painted red. -1"
>
> One should say
>
> "I prefer red for the color of bike sheds. You can do that if you want,
> but go ahead and merge anyway if you don't want to. +0"
>
> and don't mark a review as -1 if it *only* has bikeshedding in it. I would
> love to see a culture of reviewing that emphasizes functional correctness,
> politeness, and mutual education.
>
> And given the rationale from Robert Collins, I agree that the
> module-import thing should be one of the flakes that allows exceptions.
>
> --
> IRC: radix
> Christopher Armstrong
> Rackspace
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][API Doc] Easy way to generate JSON output for doc?

2013-07-31 Thread Morgan Fainberg
Paul,

Depending on what version of keystone you are running, there are a couple of
options to use the UUID token format (instead of PKI).

Very recent (current master) versions of keystone use a pluggable provider
system.  To set the provider to UUID, set this option in the [token] section:
provider=keystone.token.providers.uuid.Provider

In older versions I believe the option (still in the [token] section) is:

token_format=UUID
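
As keystone.conf snippets, the two alternatives look like this (use
whichever matches your version):

# Current master (pluggable providers):
[token]
provider = keystone.token.providers.uuid.Provider

# Older releases:
# [token]
# token_format = UUID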

I hope this info helps you out some.

Cheers,
Morgan Fainberg

Sent from my iPhone (please excuse the brevity)

On 31/07/2013, Paul Michali wrote:

> Yeah I was playing with that, however I'm having an issue with
> authentication…
>
> If I do a request to keystone to get the auth ID, I get a huge key, which
> I have to try to paste into a subsequent request to neutron.
>
> Is there a way to force this to use the old style (small) auth ID, no
> auth, or username/password for auth on the requests?
>
> I've been playing with using the CLI and --verbose, and then trying to
> extract the RESP:… output and run through JSON. A bit hacky, but it sorta
> works.
>
>
>  PCM (Paul Michali)
>
> Contact info for Cisco users http://twiki.cisco.com/Main/pcm
>
>
> On Jul 31, 2013, at 7:00 PM, "Mellquist, Peter" 
> wrote:
>
>  Paul,
> There are a few ways of doing this, but I have used curl and then piped the
> results through a python JSON pretty print tool. This formats the JSON for
> easily dropping into API docs.
>
> curl ( some OpenStack request with JSON output ) | python -m json.tool
>
> Hope this helps,
> Peter.
>
>
>
> From: Paul Michali [mailto:p...@cisco.com]
> Sent: Wednesday, July 31, 2013 3:41 PM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [Neutron][API Doc] Easy way to generate JSON
> output for doc?
>
> Hi!
>
> I'm writing API doc pages for VPNaaS and was wondering if there are any
> tools/scripts to make it easy to generate the needed JSON result output for
> various operations?
>
> It looks like I can do the neutron command with --verbose to get
> unformatted JSON output. Should I do that and then reformat the output (or
> is there an easier way to do that)?
>
> I did try to use json.loads() on the RESP: output, but it threw a
> ValueError "Expecting property name: line 1 column 1 (char 1)"
>
> Ideas?
>
> PCM (Paul Michali)
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>

-- 
Sent from my iPhone (please excuse the brevity and any typos)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Alembic support

2013-07-25 Thread Morgan Fainberg
+1 to getting the multiple repos in place.  Moving to Alembic later in
H, or even as the first commit of I, should meet our goals to be on Alembic
in a reasonable timeframe.  This also allows us to ensure we aren't rushing
the work to get our migration repos over to Alembic.

I think that allowing the extensions to have their own repos sooner is
better, and if we end up with an extension that has more than 1 or 2
migrations, we have probably accepted code that is far from fully baked
(and we should evaluate how that code made it in).

I am personally in favor of making the first commit of Icehouse (barring
any major issue) the point at which we move to Alembic.  We can be
selective in taking extension modifications that add migration repos if it
is a major concern that moving to Alembic is going to be really painful.

Cheers,
Morgan Fainberg

On Thu, Jul 25, 2013 at 7:35 PM, Adam Young  wrote:

> I've been looking into Alembic support.  It seems that there is one thing
> missing that I was counting on:  multiple migration repos. It might be
> supported, but the docs are thin, and reports vary.
>
> In the current Keystone implementation, we have a table like this:
> mysql> desc migrate_version;
> +-----------------+--------------+------+-----+---------+-------+
> | Field           | Type         | Null | Key | Default | Extra |
> +-----------------+--------------+------+-----+---------+-------+
> | repository_id   | varchar(250) | NO   | PRI | NULL    |       |
> | repository_path | text         | YES  |     | NULL    |       |
> | version         | int(11)      | YES  |     | NULL    |       |
> +-----------------+--------------+------+-----+---------+-------+
>
>
> Right now we only have one row in there:
>
>  keystone | /opt/stack/keystone/keystone/common/sql/migrate_repo | 0
>
>
> However, we have been lumping all of our migrations together into a single
> repo, and we are just now looking to sort them out.  For example, Policy,
> Tokens, and Identity do not really need to share a database.  As such, they
> could go into separate migration repos, and it would keep changes to one
> from stepping on changes to another, and avoiding the continuous rebasing
> problem we currently have.
>
> In addition, we want to put each of the extensions into their own repos.
>  This happens to be an important time for that, as we have three extensions
> coming in that need SQL repos:  OAuth, KDS, and Attribute Mapping.
>
> I think we should delay moving Keystone to Alembic until the end of
> Havana, or as the first commit in Icehouse.  That way, we have a clean
> cut-over point. We can decide then whether to backport the Extension
> migrations or leave them under sqlalchemy-migrate. Mixing the two technologies
> side by side for a short period of time is going to be required, and I
> think we need to have a clear approach in place to avoid a mess.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Opinions needed: Changing method signature in RPC callback ...

2013-07-18 Thread Morgan Fainberg
On Thu, Jul 18, 2013 at 9:44 AM, Jay Pipes  wrote:

> On 07/18/2013 10:09 AM, Sandy Walsh wrote:
>
>> Hey y'all!
>>
>> Running into an interesting little dilemma with a branch I'm working on.
>> Recently, I introduced a branch in oslo-common to optionally .reject() a
>> kombu message on an exception. Currently, we always .ack() all messages
>> even if the processing callback fails. For Ceilometer, this is a problem
>> ... we have to guarantee we get all notifications.
>>
>> The patch itself was pretty simple, but didn't work :) The spawn_n()
>> call was eating the exceptions coming from the callback. So, in order to
>> get the exceptions it's simple enough to re-wrap the callback, but I
>> need to pool.waitall() after the spawn_n() to ensure none of the
>> consumers failed. Sad, but a necessary evil. And remember, it's only
>> used in a special case, normal openstack rpc is unaffected and remains
>> async.
>>
>> But it does introduce a larger problem ... I have to change the rpc
>> callback signature.
>>
>> Old: callback(message)
>> New: callback(message, delivery_info=None, wait_for_consumers=False)
>>
>> (The delivery_info is another thing, we were dumping the message info on
>> the floor, but this has important info in it)
>>
>> My worry is busting all the other callbacks out there that use
>> olso-common.rpc
>>
>> Some options:
>> 1. embed all these flags and extra data in the message structure
>>
>> message = {'_context_stuff': "...",
>>            'payload': {...},
>>            '_extra_magic': {...}}
>>
>
> This would be my preference. #2 below is essentially the same thing but
> with some object-orientation for sugar -- and it breaks the existing
> structure, which #1 doesn't.
>
> best,
> -jay
>
>
>  2. make a generic CallContext() object to include with message that has
>> anything else we need (a one-time signature break)
>>
>> call_context = CallContext({"delivery_info": {...}, "wait": False})
>> callback(message, call_context)
>>
>> 3. some other ugly python hack that I haven't thought of yet.
>>
>> Look forward to your thoughts on a solution!
>>
>> Thanks
>> -S
>>
>>
>> My work-in-progress is here:
>> https://github.com/SandyWalsh/openstack-common/blob/callback_exceptions/openstack/common/rpc/amqp.py#L373
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

+1 to what Jay said.  I was actually getting ready to write a very similar
email.

Cheers,
Morgan Fainberg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] sqlite doesn't support migrations

2013-07-16 Thread Morgan Fainberg
On Tue, Jul 16, 2013 at 12:55 AM, Michael Still  wrote:
> On Tue, Jul 16, 2013 at 4:17 PM, Thomas Goirand  wrote:
>
>> Could you explain a bit more what could be done to fix it in an easy
>> way, even if it's not efficient? I understand that ALTER doesn't work
>> well. Though would we have the possibility to just create a new
>> temporary table with the correct fields, and copy the existing content
>> in it, then rename the temp table so that it replaces the original one?
>
> There are a bunch of nova migrations that already work that way...
> Checkout the *sqlite* files in
> nova/db/sqlalchemy/migrate_repo/versions/
>
> Cheers,
> Michael
>
> --
> Rackspace Australia
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

If we really want to support the concept of SQLite migrations, the way
nova does it seems to be the most sane.  I'm not 100% convinced that
SQLite migrations are worth supporting, but then again, I am not the
target audience for them (beyond a simple development capacity, and I
still validate against MySQL/Postgres mostly).  If there is a demand
for SQLite, I'd say Michael has hit it on the head: the way Nova
handles this is a fairly clean mechanism and far more supportable over
the short and medium term(s) than working around migrate issues
with SQLite and its limited ALTER support.
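
For reference, the rebuild-and-rename approach Thomas describes (and that
those nova *sqlite* migration files implement in raw SQL) looks roughly
like this; a sketch with a made-up table:

def upgrade(migrate_engine):
    if migrate_engine.name != 'sqlite':
        return  # let the normal ALTER-based migration handle other engines

    # SQLite can't ALTER a column, so rebuild the table: create the new
    # shape, copy the rows across, then swap the names.
    migrate_engine.execute(
        'CREATE TABLE tokens_new ('
        ' id VARCHAR(64) NOT NULL PRIMARY KEY,'
        ' expires DATETIME)')
    migrate_engine.execute(
        'INSERT INTO tokens_new (id, expires) '
        'SELECT id, expires FROM tokens')
    migrate_engine.execute('DROP TABLE tokens')
    migrate_engine.execute('ALTER TABLE tokens_new RENAME TO tokens')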

In one of the discussions in IRC I had offered to help with the effort
of moving away from SQLite migration testing; if the nova-way is the
way we want to go, I'll be happy to help contribute to that.

Cheers,
Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Memcache user token index and PKI tokens

2013-07-16 Thread Morgan Fainberg
On Tue, Jul 16, 2013 at 4:01 AM, Kieran Spear  wrote:
>
> On 16/07/2013, at 1:10 AM, Adam Young  wrote:
>> On 07/15/2013 04:06 AM, Kieran Spear wrote:
>>> Hi all,
>>>
>>> I want to backport the fix for the "Token List in Memcache can consume
>>> an entire memcache page" bug[1] to Grizzly, but I had a couple of
>>> questions:
>>>
>>> 1. Why do we need to store the entire token data in the
>>> usertoken- key? This data always seems to be hashed before
>>> indexing into the 'token-' keys anyway. The size of the
>>> memcache data for a user's token list currently grows by 4k every time
>>> a new PKI token is created. It doesn't take long to hit 1MB at this
>>> rate even with the above fix.
>> Yep. The reason, though, is that we either take a memory/storage hit (store 
>> the whole token) or a performance hit (reproduce the token data) and we've 
>> gone for the storage hit.
>
> In this case it looks like we're taking a hit from both, since the PKI token 
> "id" from the user token index is retrieved, then hashed and then that key is 
> used to retrieve the token from the "tokens-%s" page anyway.
>
>>
>>
>>>
>>> 2. Every time it creates a new token, Keystone loads each token from
>>> the user's token list with a separate memcache call so it can throw it
>>> away if it's expired. This seems excessive. Is it anything to worry
>>> about? If it just checked the first two tokens you'd get the same
>>> effect on a longer time scale.
>>>
>>> I guess part of the answer is to decrease our token expiry time, which
>>> should mitigate both issues. Failing that we'd consider moving to the
>>> SQL backend.
>> How about doing both?  But if you move to the sql backend, remember to
>> periodically clean up the token table, or you will have storage issues there 
>> as well.  No silver bullet, I am afraid.
>
> I think we're going to stick with memcache for now (the devil we know :)). 
> With (1) and (2) fixed and the token expiration time tweaked I think memcache 
> will do okay.
>
> Kieran
>
>>
>>>
>>> Cheers,
>>> Kieran
>>>
>>> [1] https://bugs.launchpad.net/keystone/+bug/1171985
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi Kieran,

I've looked into the potential bug you described and it appears that
there has been a change in the master branch to support the idea of
pluggable token providers (much better implementation than the driver
being responsible for the token itself).  This change modified how the
memcache driver stores the IDs: the CMS hashing function now runs as the
manager hands the token_id to the driver, instead of in-line within
the driver.  The original fix should have been correct
in hashing the PKI token to the short-form "ID".  Your fix to simply
hash the tokens is the correct one and more closely mirrors how the
original fix was implemented.
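
For illustration, the short-form ID idea amounts to something like this
(a sketch only; the real logic lives in the token provider, and the
length check here is just a heuristic):

import hashlib


def short_token_id(token_id):
    # UUID tokens are already short; PKI tokens are multi-kilobyte CMS
    # blobs, so index them by a fixed-length hash instead.
    if len(token_id) > 64:
        return hashlib.md5(token_id).hexdigest()
    return token_id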

If you are interested in the reviews that implement the new pluggable
provider(s): https://review.openstack.org/#/c/33858/ (V3) and
https://review.openstack.org/#/c/34421/ (V2.0).

Going with a shorter TTL on the tokens is a good idea for various
reasons, depending on the token driver.  I know that the SQL driver
(provided you clean up expired tokens) has worked well for my company,
but I want to move to the memcache driver soon.

Cheers,
Morgan Fainberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

