Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-12 Thread Angus Lees
On Thu, 11 Sep 2014 03:00:02 PM Duncan Thomas wrote:
 On 11 September 2014 03:17, Angus Lees g...@inodes.org wrote:
  (As inspired by eg kerberos)
  2. Ensure at some environmental/top layer that the advertised token
  lifetime exceeds the timeout set on the request, before making the
  request.  This implies (since there's no special handling in place)
  failing if the token was expired earlier than expected.
 
 We've got a related problem in cinder (cinder-backup uses the user's token
 to talk to swift, and the backup can easily take longer than the token
 expiry time) which could not be solved by this, since the time the backup
 takes is unknown (compression, service and resource contention, etc. can
 alter the time by multiple orders of magnitude)

Yes, this sounds like another example of the cross-service problem I was 
describing with refreshing the token at the bottom layer - but I disagree that 
this is handled any better by refreshing tokens on-demand at the bottom layer.

In order to have cinder refresh the token while talking to swift, it needs to 
know the user's password (ouch - why even have the token) or have magic token 
creating powers (in which case none of this matters, because cinder can just 
create tokens any time it wants).

As far as I can see, we either need to be able to 1) generate tokens that _do_ 
last long enough, 2) pass user+password to cinder so it is capable of 
creating new tokens as necessary, or 3) only perform token-based auth once at 
the start of a long cinder-swift workflow like this, and then use some sort 
of limited-scope-but-unlimited-time session token for follow-on requests.

I think I'm advocating for (1) or (3), and (2) as a distant third.
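To illustrate the kind of top-layer guard I meant in point (2) of my quoted mail - failing fast when the token's advertised lifetime can't cover the request timeout - here's a minimal sketch (all names are illustrative, not a real client API):

```python
from datetime import datetime, timedelta, timezone

class TokenExpiredEarly(Exception):
    """The token will not outlive the request we are about to make."""

def check_token_lifetime(token_expires_at, request_timeout):
    """Fail fast if the token's advertised lifetime is shorter than the
    timeout set on the request, instead of failing mid-operation."""
    remaining = token_expires_at - datetime.now(timezone.utc)
    if remaining < timedelta(seconds=request_timeout):
        raise TokenExpiredEarly(
            "token expires in %ds but request timeout is %ds"
            % (remaining.total_seconds(), request_timeout))

# Guard a long-running call, e.g. a one-hour backup upload:
expires_at = datetime.now(timezone.utc) + timedelta(hours=2)
check_token_lifetime(expires_at, request_timeout=3600)  # fine, proceed
```

The point is that the failure happens before any work is done, so the caller can fetch a fresh token up front rather than discovering the expiry halfway through a backup.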


... Unless there's some other option here?  Your dismissal above sounded like 
there was already a solution for this - what's the current solution?

-- 
 - Gus

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Flavio Percoco
On 09/12/2014 03:29 AM, Clint Byrum wrote:
 Excerpts from Zane Bitter's message of 2014-09-11 15:21:26 -0700:
 On 09/09/14 19:56, Clint Byrum wrote:
 Excerpts from Samuel Merritt's message of 2014-09-09 16:12:09 -0700:
 On 9/9/14, 12:03 PM, Monty Taylor wrote:
 On 09/04/2014 01:30 AM, Clint Byrum wrote:
 Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:
 Greetings,

 Last Tuesday the TC held the first graduation review for Zaqar. During
 the meeting some concerns arose. I've listed those concerns below with
 some comments hoping that it will help starting a discussion before the
 next meeting. In addition, I've added some comments about the project
 stability at the bottom and an etherpad link pointing to a list of use
 cases for Zaqar.


 Hi Flavio. This was an interesting read. As somebody whose attention has
 recently been drawn to Zaqar, I am quite interested in seeing it
 graduate.

 # Concerns

 - Concern on operational burden of requiring NoSQL deploy expertise to
 the mix of openstack operational skills

 For those of you not familiar with Zaqar, it currently supports 2 nosql
 drivers - MongoDB and Redis - and those are the only 2 drivers it
 supports for now. This will require operators willing to use Zaqar to
 maintain a new (?) NoSQL technology in their system. Before expressing
 our thoughts on this matter, let me say that:

   1. By removing the SQLAlchemy driver, we basically removed the chance
 for operators to use an already deployed OpenStack technology.
   2. Zaqar won't be backed by any AMQP-based messaging technology for
 now. Here's[0] a summary of the research the team (mostly done by
 Victoria) did during Juno.
   3. We (OpenStack) used to require Redis for the zmq matchmaker.
   4. We (OpenStack) also use memcached for caching and, as the oslo
 caching lib becomes available - or a wrapper on top of dogpile.cache -
 Redis may be used in place of memcached in more and more deployments.
   5. Ceilometer's recommended storage driver is still MongoDB, although
 Ceilometer now has support for sqlalchemy. (Please correct me if I'm
 wrong.)

 That being said, it's obvious we already, to some extent, promote some
 NoSQL technologies. However, for the sake of the discussion, let's assume
 we don't.

 I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
 keep avoiding these technologies. NoSQL technologies have been around
 for years and we should be prepared - including OpenStack operators - to
 support these technologies. Not every tool is good for all tasks - one
 of the reasons we removed the sqlalchemy driver in the first place -
 therefore it's impossible to keep a homogeneous environment for all
 services.


 I wholeheartedly agree that non-traditional storage technologies that
 are becoming mainstream are good candidates for use cases where
 SQL-based storage gets in the way. I wish there wasn't so much FUD
 (warranted or not) about MongoDB, but that is the reality we live in.

 With this, I'm not suggesting we ignore the risks and the extra burden
 this adds but, instead of attempting to avoid it completely by not
 evolving the stack of services we provide, we should probably work on
 defining a reasonable subset of NoSQL services we are OK with
 supporting. This will help make the burden smaller and it'll give
 operators the option to choose.

 [0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/


 - Concern on should we really reinvent a queue system rather than
 piggyback on one

 As mentioned in the meeting on Tuesday, Zaqar is not reinventing message
 brokers. Zaqar provides a service akin to SQS from AWS with an OpenStack
 flavor on top. [0]


 I think Zaqar is more like SMTP and IMAP than AMQP. You're not really
 trying to connect two processes in real time. You're trying to do fully
 asynchronous messaging with fully randomized access to any message.

 Perhaps somebody should explore whether the approaches taken by large
 scale IMAP providers could be applied to Zaqar.

 Anyway, I can't imagine writing a system to intentionally use the
 semantics of IMAP and SMTP. I'd be very interested in seeing actual use
 cases for it, apologies if those have been posted before.

 It seems like you're EITHER describing something called XMPP that has at
 least one open source scalable backend called ejabberd. OR, you've
 actually hit the nail on the head with bringing up SMTP and IMAP but for
 some reason that feels strange.

 SMTP and IMAP already implement every feature you've described, as well
 as retries/failover/HA and a fully end-to-end secure transport (if
 installed properly). If you don't actually set them up to run as a public
 messaging interface but just as a cloud-local exchange, then you could
 get by with very low overhead for a massive throughput - it can very
 easily be run on a single machine for Sean's simplicity, and could just
 as easily be scaled out using well known techniques for public cloud
 

Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-12 Thread mar...@redhat.com
On 09/09/14 21:32, Gregory Haynes wrote:
 Hello everyone!
 
 I have been working on a meta-review of StevenK's reviews and I would
 like to propose him as a new member of our core team.
 
 As I'm sure many have noticed, he has been above our stats requirements
 for several months now. More importantly, he has been reviewing a wide
 breadth of topics and seems to have a strong understanding of our code
 base. He also seems to be doing a great job at providing valuable
 feedback and being attentive to responses on his reviews.
 
 As such, I think he would make a great addition to our core team. Can
 the other core team members please reply with your votes if you agree or
 disagree.
 

+1

 Thanks!
 Greg
 
 




Re: [openstack-dev] [Ceilometer] Complex resource_metadata could fail to store in MongoDB

2014-09-12 Thread Igor Degtiarov
Hi!

After some local discussions with Dmitiy Uklov we have found a solution.

To solve the problem with dots inside keys in the metadata dictionary, we
propose to reconstruct the dictionary before storing it in MongoDB.

Ex.

If we get metadata: {'a.s.d': 'v'}

it could be stored in MongoDB after a reconstruction

{'a': {'s': {'d': 'v'}}}

After storing in MongoDB, the value 'v' can easily be found with the standard
query 'metadata.a.s.d'='v'.
Keys that start with '$' are quoted with the quote function from
urllib.parse.
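A rough sketch of that reconstruction (illustrative only, not the actual code from the review):

```python
from urllib.parse import quote

def encode_metadata(data):
    """Make a metadata dict storable in MongoDB: split dotted keys into
    nested dicts and percent-quote keys that start with '$'."""
    if not isinstance(data, dict):
        return data
    out = {}
    for key, value in data.items():
        if key.startswith('$'):
            key = quote(key, safe='')       # '$set' -> '%24set'
        if '.' in key:
            key, rest = key.split('.', 1)   # peel off one level of nesting
            value = {rest: value}
        node = encode_metadata(value)
        if isinstance(out.get(key), dict) and isinstance(node, dict):
            out[key].update(node)           # merge siblings like a.b / a.c
        else:
            out[key] = node
    return out

assert encode_metadata({'a.s.d': 'v'}) == {'a': {'s': {'d': 'v'}}}
```

The stored document can then be matched by the usual 'metadata.a.s.d'='v' query, since MongoDB itself interprets the dots in the query path as nesting.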

I have proposed a change request with the fix:
https://review.openstack.org/121003

Cheers,
-- Igor D.



On Mon, Sep 8, 2014 at 5:02 PM, Igor Degtiarov idegtia...@mirantis.com
wrote:

 On Thu, Sep 4, 2014 at 1:16 PM, Nadya Privalova nprival...@mirantis.com
 wrote:

 IMHO it's ok and even very natural to expect escaped query from users.
 e.g, we store the following structure

 {metadata:
 { Zoo:
{Foo.Boo: 'value'}}}


  Yep, but such a structure couldn't be stored in MongoDB without replacing
 the dot in Foo.Boo




 and query should be metadata.Zoo.Foo\.Boo .


 That could be a good solution, but it is needed only if MongoDB is chosen
 as a backend. So the question is:
 should we specify the query only for MongoDB, or change queries for all
 backends?


 In this case it's not necessary to know the depth of the tree.

 Thanks,
 Nadya



 Cheers,
 Igor D.





 On Fri, Aug 29, 2014 at 3:21 PM, Igor Degtiarov idegtia...@mirantis.com
 wrote:

 Hi, folks.

 I was interested in the problem with storing samples that contain
 complex resource_metadata in the MongoDB database [1].

 If data is a dict that has a key (or keys) with dots (.), dollar signs
 ($), or null characters, it won't be stored. This happens because these
 characters are restricted in field names in MongoDB [2], but so far there
 is no verification of the metadata in Ceilometer's MongoDB driver; as a
 result we will lose data.

 The solution to this problem seemed rather simple: before storing data,
 we check the keys in resource_metadata (if it is a dict) and quote keys
 with restricted characters, in a similar way as was done in the change
 request redesigning the separators in columns in HBase [3]. After that,
 we store the metering data.
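That key quoting might have looked roughly like this (an illustration, not the actual patch - note that urllib's quote leaves '.' alone, so the dot has to be escaped explicitly):

```python
from urllib.parse import unquote

def quote_key(key):
    """Percent-encode the characters MongoDB forbids in field names."""
    return (key.replace('%', '%25')   # escape the escape character first
               .replace('.', '%2E')
               .replace('$', '%24')
               .replace('\x00', '%00'))

def quote_metadata(data):
    """Recursively quote restricted characters in every dict key."""
    if not isinstance(data, dict):
        return data
    return {quote_key(k): quote_metadata(v) for k, v in data.items()}

stored = quote_metadata({'Zoo': {'Foo.Boo': 'value'}})
# stored == {'Zoo': {'Foo%2EBoo': 'value'}}, and unquote() restores the
# original key losslessly when reading the sample back.
assert unquote('Foo%2EBoo') == 'Foo.Boo'
```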

 But other unexpected difficulties appear at the step of getting data back.
 To get stored data we construct a meta query, and the structure of that
 query was chosen to be identical to a native MongoDB query. So dots are
 used as separators for the tree nodes of the stored data.

 Ex. If we need to check the value in field Foo

 {metadata:
 { Zoo:
{Foo: 'value'}}}

 query would be: metadata.Zoo.Foo

 We don't know how deep the dict in metadata is, so it is impossible to
 propose any correct parsing of the query to quote field names containing dots.

 I see two ways to improve this. The first is rather complex and based on
 redesigning the structure of the metadata query in Ceilometer. I don't know
 if that is even possible.

 The second is based on removing the bad resource_metadata from the
 samples. In this case we also lose the metadata, but save the other
 metering data. Of course, queries for metadata that was not saved will
 return nothing, so it is not a complete solution, but rather a workaround.

 What do you think about that?
 Any thoughts and propositions are kindly welcome.

 [1] https://bugs.launchpad.net/mos/+bug/1360240
 [2] http://docs.mongodb.org/manual/reference/limits/
 [3] https://review.openstack.org/#/c/106376/

 -- Igor Degtiarov









Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-12 Thread Angus Lees
On Thu, 11 Sep 2014 03:21:52 PM Steven Hardy wrote:
 On Wed, Sep 10, 2014 at 08:46:45PM -0400, Jamie Lennox wrote:
  For service to service communication there are two types.
  1) Using the user's token, like nova-cinder. If this token expires there
  is really nothing that nova can do except raise 401 and make the client
  do it again. 2) Using a service user, like nova-neutron. This should
  allow automatic reauthentication and will be fixed/standardised by
  sessions.
 (1) is the problem I'm trying to solve in bug #1306294, and (for Heat at
 least) there seem to be two solutions, neither of which I particularly
 like:
 
 - Require username/password to be passed into the service (something we've
   been trying to banish via migrating to trusts for deferred
   authentication)
 - Create a trust, and impersonate the user for the duration of the request,
   or after the token expires until it is completed, using the service user
   credentials and the trust_id.
 
 It's the second one which I'm deliberating over - technically it will work,
 and we create the trust anyway (e.g for later use to do autoscaling etc),
 but can anyone from the keystone team comment on the legitimacy of the
 approach?
 
 Intuitively it seems wrong, but I can't see any other way if we want to
 support token-only auth and cope with folks doing stuff which takes 2 hours
 with a 1 hour token expiry?

A possible 3rd option is some sort of longer-lived but limited-scope 
capability token.

The user would create a capability token representing that anyone possessing 
this token is (e.g.) allowed to write to swift as $user.  The token could be 
created by keystone as a trusted 3rd party or by swift (doesn't matter which), 
in response to a request authenticated as $user.  The client then includes 
that token in the request *to cinder*, so cinder can pass it back to swift 
when doing the writes.
This capability token would be of much longer duration (long enough to 
complete the cinder-swift task), which is ok because it is of a much more 
limited scope (ideally as fine grained as we can bother implementing).

(I like this option)
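Purely as a sketch of the idea (nothing like this exists in keystone today; the secret, names and token format are invented for illustration), such a capability token could be as small as a signed (user, scope, expiry) claim that swift can verify without calling back to the issuer:

```python
import hashlib
import hmac
import json
import time

SECRET = b'shared-between-issuer-and-swift'  # invented for the sketch

def issue_capability(user, scope, lifetime):
    """Sign a claim saying: whoever holds this may act within 'scope'
    as 'user' until 'expires' - independent of the keystone token TTL."""
    claim = json.dumps({'user': user, 'scope': scope,
                        'expires': int(time.time()) + lifetime},
                       sort_keys=True)
    sig = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    return claim + '.' + sig

def verify_capability(token, required_scope):
    claim, sig = token.rsplit('.', 1)
    good = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None                          # tampered or forged
    body = json.loads(claim)
    if body['expires'] < time.time() or body['scope'] != required_scope:
        return None                          # expired or wrong scope
    return body['user']

# cinder never inspects the token; it just forwards it to swift.
token = issue_capability('alice', 'swift:write', lifetime=8 * 3600)
assert verify_capability(token, 'swift:write') == 'alice'
assert verify_capability(token, 'swift:delete') is None
```

The long lifetime is tolerable precisely because the scope is narrow: possession only grants one action against one service, not the user's full keystone powers.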


A 4th option is to have much longer lived tokens everywhere (long enough for 
this backup), but the user is able to expire it early via keystone whenever 
they feel it might be compromised (aiui this is exactly how things work now - 
we just need to increase the timeout).  Greater exposure to replay attacks, 
but if detected they can still be invalidated quickly.

(This is the easiest option, it's basically just formalising what the 
operators are already doing)


A 5th option (wow) is to have the end user/client repeatedly push in fresh 
tokens during long-running operations (and heat is the uber-example since it 
basically wants to impersonate the user forever).  Those tokens would then 
need to be refreshed all the way down the stack for any outstanding operations 
that might need the new token.

(This or the 4th option seems ugly but unavoidable for forever services like 
heat.  There has to be some way to invalidate their access if they go rogue, 
either by time (and thus needs a refresh mechanism) or by invalidation-via-
keystone (which implies the token lasts forever unless invalidated))


However we do it:  the permission to do the action should come from the 
original user - and this is expressed as tokens coming from the original 
client/user in some form.   By allowing services to create something without 
the original client/user being involved, we're really just bypassing the token 
authentication mechanism (and there are easier ways to ignore the token ;)

-- 
 - Gus



Re: [openstack-dev] [all] i need some help on this bug Bug #1365892

2014-09-12 Thread Angus Lees
On Wed, 10 Sep 2014 01:04:02 PM Mike Bayer wrote:
 In this case it appears to be a “safety” in case someone uses the
 ConnectionContext object outside of being a context manager.  I’d fix that
 and require that it be used as a context manager only.

Oh look, I have a new pylint hammer that is designed for exactly this nail:

https://review.openstack.org/#/c/120320/

-- 
 - Gus



Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-12 Thread Derek Higgins
+1
On 09/09/14 19:32, Gregory Haynes wrote:
 Hello everyone!
 
 I have been working on a meta-review of StevenK's reviews and I would
 like to propose him as a new member of our core team.
 
 As I'm sure many have noticed, he has been above our stats requirements
 for several months now. More importantly, he has been reviewing a wide
 breadth of topics and seems to have a strong understanding of our code
 base. He also seems to be doing a great job at providing valuable
 feedback and being attentive to responses on his reviews.
 
 As such, I think he would make a great addition to our core team. Can
 the other core team members please reply with your votes if you agree or
 disagree.
 
 Thanks!
 Greg
 
 




Re: [openstack-dev] [neutron][db] Need help resolving a strange error with db connections in tests

2014-09-12 Thread Kevin Benton
Can you explain a bit about that test? I'm having trouble reproducing it.
On the system (upstream Jenkins) that it's failing on, is postgres
available with that database?

On Thu, Sep 11, 2014 at 7:07 AM, Anna Kamyshnikova 
akamyshnik...@mirantis.com wrote:

 Hello everyone!

 I'm working on implementing a test in Neutron that checks that models are
 synchronized with the database state [1] [2]. This is a very important
 change, as big changes to the database structure were made during the Juno
 cycle.

 I had been working on it for a rather long time, but about three weeks ago
 a strange error appeared [3]; using AssertionPool shows [4]. The problem is
 that somehow there is more than one connection to the database from each
 test. I tried to use locks from lockutils, but it didn't help. At the db
 meeting we decided to add a TestCase just for the Ml2 plugin for starters,
 and then continue working on this strange error; that is why there are two
 change requests, [1] and [2]. But I found out that somehow even one
 testcase fails with the same error [5] from time to time.

 I'm asking for any suggestions on what could be done in this case. It is
 very important to get at least [1] merged in Juno.

 [1] - https://review.openstack.org/76520

 [2] - https://review.openstack.org/120040

 [3] - http://paste.openstack.org/show/110158/

 [4] - http://paste.openstack.org/show/110159/

 [5] -
 http://logs.openstack.org/20/76520/68/check/gate-neutron-python27/63938f9/testr_results.html.gz

 Regards,

 Ann
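For readers who haven't used it: AssertionPool permits at most one checked-out connection at a time and raises on the second checkout, which is what makes it handy for flushing out the stray connection here. Conceptually it behaves like this pure-Python toy (not SQLAlchemy's implementation):

```python
class AssertionPool(object):
    """Toy pool that permits at most one checked-out connection,
    mimicking the behaviour of sqlalchemy.pool.AssertionPool."""

    def __init__(self, creator):
        self._creator = creator     # factory for raw connections
        self._checked_out = False

    def connect(self):
        if self._checked_out:
            raise AssertionError("connection is already checked out")
        self._checked_out = True
        return _PooledConn(self, self._creator())

    def _checkin(self):
        self._checked_out = False

class _PooledConn(object):
    def __init__(self, pool, raw):
        self._pool = pool
        self.raw = raw

    def close(self):
        self._pool._checkin()

pool = AssertionPool(creator=lambda: object())
c1 = pool.connect()
try:
    pool.connect()        # the second, unexpected checkout the tests hit
except AssertionError as exc:
    print(exc)            # -> connection is already checked out
c1.close()
c2 = pool.connect()       # fine again once the first one is returned
```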







-- 
Kevin Benton


Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-12 Thread Germy Lure
On Fri, Sep 12, 2014 at 11:11 AM, Kevin Benton blak...@gmail.com wrote:


  Maybe I missed something, but what's the solution?

 There isn't one yet. That's why it's going to be discussed at the summit.

So my suggestion is to remove all vendors' plugins and drivers except the
open source ones, which stay as built-in. By leaving the open source plugins
and drivers in the tree, we can address several problems:
  1) releasing a workable and COMPLETE version
  2) user experience (especially for beginners)
  3) providing code examples for new contributors and vendors to learn from
  4) developing and verifying new features



  I think we should release a workable version.

 Definitely. But that doesn't have anything to do with it living in the
 same repository. By putting it in a different repo, it provides smaller
 code bases to learn for new contributors wanting to become a core developer
 in addition to a clear separation between plugins and core code.

Why do we need a different repo to store vendors' code? That's not the
community's business. I think only a proper architecture and a normal NBSB
API can bring a clear separation between plugins (or drivers) and core code,
not a different repo. Of course, if the community provides a wiki page where
vendors can add hyperlinks to their code, I think that's perfect.


  Besides of user experience, the open source drivers are also used for
 developing and verifying new features, even small-scale case.

 Sure, but this also isn't affected by the code being in a separate repo.

See comments above.


  The community should and just need focus on the Neutron core and provide
 framework for vendors' devices.

 I agree, but without the open source drivers being separated as well, it's
 very difficult for the framework for external drivers to be stable enough
 to be useful.

Architecture and API. The community should ensure the core and the API are
stable enough and of high quality; vendors are responsible for external
drivers. Whoever provides the code maintains it (including development,
storage, distribution, quality, etc.).


 On Thu, Sep 11, 2014 at 7:24 PM, Germy Lure germy.l...@gmail.com wrote:

 Some comments inline.

 BR,
 Germy

 On Thu, Sep 11, 2014 at 5:47 PM, Kevin Benton blak...@gmail.com wrote:

 This has been brought up several times already and I believe is going to
 be discussed at the Kilo summit.

 Maybe I missed something, but what's the solution?


 I agree that reviewing third party patches eats community time. However,
 claiming that the community pays 46% of it's energy to maintain
 vendor-specific code doesn't make any sense. LOC in the repo has very
 little to do with ongoing required maintenance. Assuming the APIs for the
 plugins stay consistent, there should be few 'maintenance' changes required
 to a plugin once it's in the tree. If there are that many changes to
 plugins just to keep them operational, that means Neutron is far too
 unstable to support drivers living outside of the tree anyway.

 Yes, you are right. Neutron is far too unstable to support drivers
 living outside of the tree anyway. So I think this is really the important
 point: the community should focus on standardizing the NBSB API and on
 introducing and improving new features, NOT on wasting energy introducing
 and maintaining vendor-specific code.


 On a related note, if we are going to pull plugins/drivers out of
 Neutron, I think all of them should be removed, including the OVS and
 LinuxBridge ones. There is no reason for them to be there if Neutron has
 stable enough internal APIs to eject the 3rd party plugins from the repo.
 They should be able to live in a separate neutron-opensource-drivers repo
 or something along those lines. This will free up significant amounts of
 developer/reviewer cycles for neutron to work on the API refactor, task
 based workflows, performance improvements for the DB operations, etc.

 I think we should release a workable version. Users can experience the
 functions powered by the built-in components, and can replace them with
 the releases of those vendors who cooperate with them. The community
 should not work on vendors' code.


 If the open source drivers stay in the tree and the others are removed,
 there is little incentive to keep the internal APIs stable and 3rd party
 drivers sitting outside of the tree will break on every refactor or data
 structure change. If that's the way we want to treat external driver
 developers, let's be explicit about it and just post warnings that 3rd
 party drivers can break at any point and that the onus is on the external
 developers to learn what changed an react to it. At some point they will
 stop bothering with Neutron completely in their deployments and mimic its
 public API.

 Besides of user experience, the open source drivers are also used for
 developing and verifying new features, even small-scale case.


 A clear separation of the open source drivers/plugins and core Neutron
 would give a much better model for 3rd party driver developers to follow
 and would enforce a stable internal API in the 

Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Flavio Percoco
On 09/12/2014 12:14 AM, Zane Bitter wrote:
 On 09/09/14 15:03, Monty Taylor wrote:
 On 09/04/2014 01:30 AM, Clint Byrum wrote:
 Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:
 Greetings,

 Last Tuesday the TC held the first graduation review for Zaqar. During
 the meeting some concerns arose. I've listed those concerns below with
 some comments hoping that it will help starting a discussion before the
 next meeting. In addition, I've added some comments about the project
 stability at the bottom and an etherpad link pointing to a list of use
 cases for Zaqar.


 Hi Flavio. This was an interesting read. As somebody whose attention has
 recently been drawn to Zaqar, I am quite interested in seeing it
 graduate.

 # Concerns

 - Concern on operational burden of requiring NoSQL deploy expertise to
 the mix of openstack operational skills

 For those of you not familiar with Zaqar, it currently supports 2 nosql
 drivers - MongoDB and Redis - and those are the only 2 drivers it
 supports for now. This will require operators willing to use Zaqar to
 maintain a new (?) NoSQL technology in their system. Before expressing
 our thoughts on this matter, let me say that:

  1. By removing the SQLAlchemy driver, we basically removed the chance
 for operators to use an already deployed OpenStack technology.
  2. Zaqar won't be backed by any AMQP-based messaging technology for
 now. Here's[0] a summary of the research the team (mostly done by
 Victoria) did during Juno.
  3. We (OpenStack) used to require Redis for the zmq matchmaker.
  4. We (OpenStack) also use memcached for caching and, as the oslo
 caching lib becomes available - or a wrapper on top of dogpile.cache -
 Redis may be used in place of memcached in more and more deployments.
  5. Ceilometer's recommended storage driver is still MongoDB, although
 Ceilometer now has support for sqlalchemy. (Please correct me if I'm
 wrong.)

 That being said, it's obvious we already, to some extent, promote some
 NoSQL technologies. However, for the sake of the discussion, let's
 assume we don't.

 I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
 keep avoiding these technologies. NoSQL technologies have been around
 for years and we should be prepared - including OpenStack operators -
 to support these technologies. Not every tool is good for all tasks -
 one of the reasons we removed the sqlalchemy driver in the first place -
 therefore it's impossible to keep a homogeneous environment for all
 services.


 I wholeheartedly agree that non-traditional storage technologies that
 are becoming mainstream are good candidates for use cases where SQL-based
 storage gets in the way. I wish there wasn't so much FUD
 (warranted or not) about MongoDB, but that is the reality we live in.

 With this, I'm not suggesting we ignore the risks and the extra burden
 this adds but, instead of attempting to avoid it completely by not
 evolving the stack of services we provide, we should probably work on
 defining a reasonable subset of NoSQL services we are OK with
 supporting. This will help make the burden smaller and it'll give
 operators the option to choose.

 [0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/


 - Concern on should we really reinvent a queue system rather than
 piggyback on one

 As mentioned in the meeting on Tuesday, Zaqar is not reinventing
 message
 brokers. Zaqar provides a service akin to SQS from AWS with an
 OpenStack
 flavor on top. [0]


 I think Zaqar is more like SMTP and IMAP than AMQP. You're not really
 trying to connect two processes in real time. You're trying to do fully
 asynchronous messaging with fully randomized access to any message.

 Perhaps somebody should explore whether the approaches taken by large
 scale IMAP providers could be applied to Zaqar.

 Anyway, I can't imagine writing a system to intentionally use the
 semantics of IMAP and SMTP. I'd be very interested in seeing actual use
 cases for it, apologies if those have been posted before.

 It seems like you're EITHER describing something called XMPP that has at
 least one open source scalable backend called ejabberd. OR, you've
 actually hit the nail on the head with bringing up SMTP and IMAP but for
 some reason that feels strange.

 SMTP and IMAP already implement every feature you've described, as well
 as retries/failover/HA and a fully end-to-end secure transport (if
 installed properly). If you don't actually set them up to run as a public
 messaging interface but just as a cloud-local exchange, then you could
 get by with very low overhead for a massive throughput - it can very
 easily be run on a single machine for Sean's simplicity, and could just
 as easily be scaled out using well known techniques for public cloud
 sized deployments.

 So why not use existing daemons that do this? You could still use the
 REST API you've got, but instead of writing it to a mongo backend and
 trying to implement 

Re: [openstack-dev] [Cinder] Request for J3 FFE - add reset-state function for backups

2014-09-12 Thread Thierry Carrez
Jay Bryant wrote:
 It isn't a huge change. I am OK with it if we can get the issues
 addressed, especially Duncan's concern.

Given the gate backlog, if it's not already in-flight, I fear that it
would push too much down into the stabilization period and delay RC1.

At this point, unless it's critical to the success of the release (like,
it completes a feature that is 99% there, or it increases consistency by
plugging a feature gap, or it fixes a potential security vulnerability),
I would rather avoid adding exceptions. Could you explain why adding a
reset-state function for backups absolutely needs to be in Juno? It feels
like a nice-to-have to me, and I fear we are past that point now.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [zaqar] Juno Performance Testing (Round 2)

2014-09-12 Thread Flavio Percoco
On 09/12/2014 01:36 AM, Boris Pavlovic wrote:
 Kurt,

 Speaking generally, I’d like to see the project bake this in over time
 as part of the CI process. It’s definitely useful information not just
 for the developers but also for operators in terms of capacity planning.
 We’ve talked as a team about doing this with Rally (and in fact, some
 work has been started there), but it may be useful to also run a
 large-scale test on a regular basis (at least per milestone).
 
 
 I believe we will be able to generate distributed load of at least
 20k rps in the K cycle. We've done a lot of work during J in this
 direction, but there is still a lot to do.

 So you'll be able to use the same tool for gates, local usage and
 large-scale tests.


Let's talk about it :)

Would it be possible to get an update from you at the summit (or mailing
list)? I'm interested to know where you guys are with this, what is
missing and most importantly, how we can help.

Thanks Boris,
Flavio


-- 
@flaper87
Flavio Percoco



[openstack-dev] [Fuel] Hard Code Freeze for milestones 5.1, 5.0.2 reached

2014-09-12 Thread Mike Scherbakov
Hi Fuelers,
I'm glad to announce that we've reached formal Hard Code Freeze (HCF) [1]
criteria for milestones 5.1 and 5.0.2. As the stable/5.1 branch is already
open, we don't need to do anything in particular.

All further bugs found in 5.1 / 5.0.2 with priority lower than Critical
should be closed as Won't fix in 5.1, 5.0.2, and marked with the
release-notes tag so we can find and document them as known issues. Only
fixes for Critical bugs are accepted into stable/5.0 & stable/5.1 from now
until we call for the release.

Bug reporters, please do not forget to target both 6.0 (master) and 5.1
(stable/5.1) milestones. If the fix is merged to master, it has to be
backported to stable/5.1 to make it available in 5.1.

Fuel Release Candidate (RC1) was built and is available at [2].

[1] https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
[2] https://fuel-jenkins.mirantis.com/view/ISO/, build from 12-Sep 5.1-24
(RC1)
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-12 Thread Flavio Percoco
On 09/11/2014 01:44 PM, Sean Dague wrote:
 On 09/10/2014 08:46 PM, Jamie Lennox wrote:

 - Original Message -
 From: Steven Hardy sha...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Thursday, September 11, 2014 1:55:49 AM
 Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
 tokens leads to overall OpenStack fragility

 On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
 Going through the untriaged Nova bugs, and there are a few on a similar
 pattern:

 Nova operation in progress takes a while
 Crosses keystone token expiration time
 Timeout thrown
 Operation fails
 Terrible 500 error sent back to user

 We actually have this exact problem in Heat, which I'm currently trying to
 solve:

 https://bugs.launchpad.net/heat/+bug/1306294

 Can you clarify, is the issue either:

 1. Create novaclient object with username/password
 2. Do series of operations via the client object which eventually fail
 after $n operations due to token expiry

 or:

 1. Create novaclient object with username/password
 2. Some really long operation which means the token expires in the course of
 the service handling the request, blowing up and 500-ing

 If the former, then it does sound like a client, or usage-of-client bug,
 although note if you pass a *token* vs username/password (as is currently
 done for glance and heat in tempest, because we lack the code to get the
 token outside of the shell.py code..), there's nothing the client can do,
 because you can't request a new token with longer expiry with a token...

 However if the latter, then it seems like not really a client problem to
 solve, as it's hard to know what action to take if a request failed
 part-way through and thus things are in an unknown state.

 This issue is a hard problem, which can possibly be solved by
 switching to a trust-scoped token (service impersonates the user), but then
 you're effectively bypassing token expiry via delegation, which sits
 uncomfortably with me (despite the fact that we may have to do this in heat
 to solve the aforementioned bug)

 It seems like we should have a standard pattern that on token expiration
 the underlying code at least gives one retry to try to establish a new
 token to complete the flow, however as far as I can tell *no* clients do
 this.

 As has been mentioned, using sessions may be one solution to this, and
 AFAIK session support (where it doesn't already exist) is getting into
 various clients via the work being carried out to add support for v3
 keystone by David Hu:

 https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z

 I see patches for Heat (currently gating), Nova and Ironic.

 I know we had to add that into Tempest because tempest runs can exceed 1
 hr, and we want to avoid random fails just because we cross a token
 expiration boundary.

 I can't claim great experience with sessions yet, but AIUI you could do
 something like:

 from keystoneclient.auth.identity import v3
 from keystoneclient import session
 from keystoneclient.v3 import client

 auth = v3.Password(auth_url=OS_AUTH_URL,
username=USERNAME,
password=PASSWORD,
project_id=PROJECT,
user_domain_name='default')
 sess = session.Session(auth=auth)
 ks = client.Client(session=sess)

 And if you can pass the same session into the various clients tempest
 creates then the Password auth-plugin code takes care of reauthenticating
 if the token cached in the auth plugin object is expired, or nearly
 expired:

 https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120
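The linked keystoneclient behaviour — treating a cached token as unusable when it is expired *or nearly expired* — can be sketched roughly like this (an illustrative stand-in, not keystoneclient's actual code; the 30-second safety margin is an assumed value):

```python
# Rough sketch of "expired or nearly expired" checking, in the spirit of the
# keystoneclient code linked above (not its actual implementation).
import datetime

STALE_DURATION = datetime.timedelta(seconds=30)  # assumed safety margin

def will_expire_soon(expires_at, now=None):
    """True if the token expires within STALE_DURATION of now."""
    now = now or datetime.datetime.utcnow()
    return expires_at - STALE_DURATION <= now

now = datetime.datetime(2014, 9, 12, 12, 0, 0)
fresh = now + datetime.timedelta(hours=1)     # plenty of lifetime left
stale = now + datetime.timedelta(seconds=10)  # inside the safety margin

print(will_expire_soon(fresh, now))  # -> False: keep using the cached token
print(will_expire_soon(stale, now))  # -> True: reauthenticate first
```

The point of the margin is that a token which is technically still valid when the request leaves the client may expire before the service finishes handling it.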

 So in the tempest case, it seems like it may be a case of migrating the
 code creating the clients to use sessions instead of passing a token or
 username/password into the client object?

 That's my understanding of it atm anyway, hopefully jamielennox will be 
 along
 soon with more details :)

 Steve


 By clients here are you referring to the CLIs or the python libraries? 
 Implementation is at different points with each. 

 Sessions will handle automatically reauthenticating and retrying a request, 
 however it relies on the service throwing a 401 Unauthorized error. If a 
 service is returning a 500 (or a timeout?) then there isn't much that a 
 client can/should do for that because we can't assume that trying again with 
 a new token will solve anything. 
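The pattern described above — reauthenticate and retry exactly once on a 401, but leave 500s and timeouts alone — can be sketched with a toy stand-in (this is NOT keystoneclient's session code, just a minimal illustration of the flow):

```python
# Minimal stand-alone sketch of "retry once with a fresh token on 401":
# any other failure propagates, since a new token is unlikely to help there.

class Unauthorized(Exception):
    """Stands in for an HTTP 401 response."""

class FakeService:
    """Toy service that only accepts tokens it has issued and not expired."""
    def __init__(self):
        self._counter = 0
        self.valid_tokens = set()

    def issue_token(self):
        self._counter += 1
        token = "token-%d" % self._counter
        self.valid_tokens.add(token)
        return token

    def expire(self, token):
        self.valid_tokens.discard(token)

    def request(self, token):
        if token not in self.valid_tokens:
            raise Unauthorized()
        return 200

def request_with_reauth(service, token):
    """Retry once with a fresh token on 401; anything else propagates."""
    try:
        return service.request(token), token
    except Unauthorized:
        token = service.issue_token()  # reauthenticate, as an auth plugin would
        return service.request(token), token

svc = FakeService()
tok = svc.issue_token()
svc.expire(tok)  # simulate the token expiring mid-run
status, tok = request_with_reauth(svc, tok)
print(status)  # -> 200
```

Note the retry happens at most once: if the freshly issued token is also rejected, the exception reaches the caller, which matches the "one retry" behaviour discussed in this thread.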

 At the moment we have keystoneclient, novaclient, cinderclient neutronclient 
 and then a number of the smaller projects with support for sessions. That 
 obviously doesn't mean that existing users of that code have transitioned to 
 the newer way though. David Hu has been working on using this code within 
 the existing CLIs. I have prototypes for at least nova to talk to neutron 
 and cinder which I'm waiting for Kilo to push. From there it should be 
 easier to do this for 

Re: [openstack-dev] [Cinder] Request for J3 FFE - add reset-state function for backups

2014-09-12 Thread Duncan Thomas
On 12 September 2014 09:54, Thierry Carrez thie...@openstack.org wrote:
 At this point, unless it's critical to the success of the release (like,
 it completes a feature that is 99% there, or it increases consistency by
 plugging a feature gap, or it fixes a potential security vulnerability),
 I would rather avoid adding exceptions. Could you explain why adding
 reset-state function for backups absolutely needs to be in Juno ? Feels
 like a nice-to-have to me, and I fear we are past that point now.

1. It is 99% done, we've been reviewing the patch and fixing niggles
for a while now

2. We have equivalent features for volumes and snapshots (the other
two entities in cinder with state) and they are heavily used in
production

3. The alternative is getting admins to go editing the DB directly
(which is what we do now) and the logic for doing so is extremely hard
to get right

I'm a strong supporter of this feature, and I just gave the patch its first +2



Re: [openstack-dev] [qa] Tempest Bug triage

2014-09-12 Thread Kashyap Chamarthy
On Thu, Sep 11, 2014 at 03:52:56PM -0400, David Kranz wrote:
 So we had a Bug Day this week and the results were a bit disappointing due
 to lack of participation. We went from 124 New bugs to 75. 

 There were also many cases where bugs referred to logs that no longer
 existed. This suggests that we really need to keep up with bug triage
 in real time.

Alternatively, strongly recommend that people post *contextual* logs to
the bug, so they're there for reference forever and make life less
painful while triaging bugs. Many times bugs are just filed in a hurry,
posting a quick bunch of logstash URLs which expire sooner or later.

Sure, posting contextual logs takes time, but as you can well imagine,
it results in higher quality reports (hopefully), and saves time for
others who take a fresh look at the bug and would otherwise have to
begin with the maze of logs.

--
/kashyap



Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-12 Thread Thierry Carrez
Clint Byrum wrote:
 Excerpts from Flavio Percoco's message of 2014-09-11 04:14:30 -0700:
 Is Zaqar being optimized as a *queuing* service? I'd say no. Our goal is
 to optimize Zaqar for delivering messages and supporting different
 messaging patterns.
 
 Awesome! Just please don't expect people to get excited about it for
 the lighter weight queueing workloads that you've claimed as use cases.
 
 I totally see Horizon using it to keep events for users. I see Heat
 using it for stack events as well. I would bet that Trove would benefit
 from being able to communicate messages to users.
 
 But I think in between Zaqar and the backends will likely be a lighter
 weight queue-only service that the users can just subscribe to when they
 don't want an inbox. And I think that lighter weight queue service is
 far more important for OpenStack than the full blown random access
 inbox.
 
 I think the reason such a thing has not appeared is because we were all
 sort of running into "but Zaqar is already incubated". Now that we've
 fleshed out the difference, I think those of us that need a lightweight
 multi-tenant queue service should add it to OpenStack.  Separately. I hope
 that doesn't offend you and the rest of the excellent Zaqar developers. It
 is just a different thing.
 
 Should we remove all the semantics that allow people to use Zaqar as a
 queue service? I don't think so either. Again, the semantics are there
 because Zaqar is using them to do its job. Whether other folks may/may
 not use Zaqar as a queue service is out of our control.

 This doesn't mean the project is broken.
 
 No, definitely not broken. It just isn't actually necessary for many of
 the stated use cases.

Clint,

If I read you correctly, you're basically saying that Zaqar is overkill
for a lot of people who only want a multi-tenant queue service. It's
doing A+B. Why does that prevent people who only need A from using it ?

Is it that it's actually not doing A well, from a user perspective ?
Like the performance sucks, or it's missing a key primitive ?

Is it that it's unnecessarily complex to deploy, from a deployer
perspective, and that something only doing A would be simpler, while
covering most of the use cases?

Is it something else ?

I want to make sure I understand your objection. From the user
perspective it might make sense to pursue both options as separate
projects. From the deployer perspective, having a project doing A+B
and a project doing A doesn't solve anything. So this affects the
decision we have to take next Tuesday...

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Neutron][Architecture]Suggestions for the third vendors' plugin and driver

2014-09-12 Thread Kevin Benton
 So my suggestion is to remove all vendors' plugins and drivers except
the open source ones as built-in.

Yes, I think this is currently the view held by the PTL (Kyle) and some of
the other cores, so what you're suggesting will definitely come up at the
summit.


 Why do we need a different repo to store vendors' code? That's not the
community's business.
 I think only a proper architecture and a normal NBSB API can bring a
clear separation between plugins (or drivers) and core code, not a
different repo.

The problem is that that architecture won't stay stable if there is no
shared community plugin depending on its stability. Let me ask you the
inverse question. Why do you think the reference driver should stay in the
core repo?

A separate repo won't have an impact on what is packaged and released, so it
should have no impact on user experience, complete versions, providing
code examples, or developing new features. In fact, it will likely help
with the last two because it will provide a clear delineation between what
a plugin is responsible for vs. what the core API is responsible for. And,
because new cores can be added faster to the open source plugins repo due
to a smaller code base to learn, it will help with developing new features
by reducing reviewer load.

On Fri, Sep 12, 2014 at 1:50 AM, Germy Lure germy.l...@gmail.com wrote:



 On Fri, Sep 12, 2014 at 11:11 AM, Kevin Benton blak...@gmail.com wrote:


  Maybe I missed something, but what's the solution?

 There isn't one yet. That's why it's going to be discussed at the summit.

 So my suggestion is remove all vendors' plugins and drivers except
 opensource as built-in.
 By leaving open source plugins and drivers in the tree , we can resolve
 such problems:
   1)release a workable and COMPLETE version
   2)user experience(especially for beginners)
   3)provide code example to learn for new contributors and vendors
   4)develop and verify new features



  I think we should release a workable version.

 Definitely. But that doesn't have anything to do with it living in the
 same repository. By putting it in a different repo, it provides smaller
 code bases to learn for new contributors wanting to become a core developer
 in addition to a clear separation between plugins and core code.

 Why do we need a different repo to store vendors' code? That's not the
 community's business.
 I think only a proper architecture and a normal NBSB API can bring a clear
 separation between plugins (or drivers) and core code, not a different repo.
 Of course, if the community provides a wiki page for vendors to add
 hyperlinks to their code, I think it's perfect.


  Besides of user experience, the open source drivers are also used for
 developing and verifying new features, even small-scale case.

 Sure, but this also isn't affected by the code being in a separate repo.

 See comments above.


  The community should and just need focus on the Neutron core and
 provide framework for vendors' devices.

 I agree, but without the open source drivers being separated as well,
 it's very difficult for the framework for external drivers to be stable
 enough to be useful.

 Architecture and API. The community should ensure the core and API are
 stable enough and of high quality; vendors are responsible for external
 drivers. Whoever provides, maintains (including development, storage,
 distribution, quality, etc.).


 On Thu, Sep 11, 2014 at 7:24 PM, Germy Lure germy.l...@gmail.com wrote:

 Some comments inline.

 BR,
 Germy

 On Thu, Sep 11, 2014 at 5:47 PM, Kevin Benton blak...@gmail.com wrote:

 This has been brought up several times already and I believe is going
 to be discussed at the Kilo summit.

 Maybe I missed something, but what's the solution?


 I agree that reviewing third party patches eats community time.
 However, claiming that the community pays 46% of it's energy to maintain
 vendor-specific code doesn't make any sense. LOC in the repo has very
 little to do with ongoing required maintenance. Assuming the APIs for the
 plugins stay consistent, there should be few 'maintenance' changes required
 to a plugin once it's in the tree. If there are that many changes to
 plugins just to keep them operational, that means Neutron is far too
 unstable to support drivers living outside of the tree anyway.

 Yes, you are right. Neutron is far too unstable to support drivers
 living outside of the tree anyway. So I think this is really the important
 point.
 The community should focus on standardizing the NBSB API and on
 introducing and improving new features, NOT on wasting energy introducing
 and maintaining vendor-specific code.


 On a related note, if we are going to pull plugins/drivers out of
 Neutron, I think all of them should be removed, including the OVS and
 LinuxBridge ones. There is no reason for them to be there if Neutron has
 stable enough internal APIs to eject the 3rd party plugins from the repo.
 They should be able to live in a separate neutron-opensource-drivers repo
 or something along those lines. This 

Re: [openstack-dev] [neutron][db] Need help resolving a strange error with db connections in tests

2014-09-12 Thread Anna Kamyshnikova
This implements the ModelsMigrationsSync test from oslo [1]. To run
it locally on Postgres you have to do the following (it is mentioned
in the comments to the test):

For the opportunistic testing you need to set up a db named
'openstack_citest' with user 'openstack_citest' and password
'openstack_citest' on localhost.
The test will then use that db and user/password combo to run the tests.

For PostgreSQL on Ubuntu this can be done with the following commands::

sudo -u postgres psql
postgres=# create user openstack_citest with createdb login password
  'openstack_citest';
postgres=# create database openstack_citest with owner
   openstack_citest;

For MySQL on Ubuntu this can be done with the following commands::

mysql -u root
create database openstack_citest;
grant all privileges on openstack_citest.* to
 openstack_citest@localhost identified by 'openstack_citest';

As I said, this error appeared only three weeks ago. Although I have been
working on this test since the 29th of April, it passed Jenkins in August
without any problems. Postgres is available there.

[1] -
https://github.com/openstack/oslo.db/blob/master/oslo/db/sqlalchemy/test_migrations.py#L277
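For anyone unfamiliar with the AssertionPool output in [4]: its job is simply to fail fast when more than one connection is checked out at a time. A stdlib-only imitation of that check (illustrative only, not SQLAlchemy's code) shows how the extra connection becomes visible:

```python
# Stdlib-only imitation of the check SQLAlchemy's AssertionPool performs:
# allow at most one checked-out connection at a time, and fail loudly the
# moment a second one is requested -- which is how the "more than one
# connection per test" problem surfaces.

class AssertingPool:
    def __init__(self, connect):
        self._connect = connect
        self._checked_out = False

    def checkout(self):
        # Raise here instead of silently opening a second connection,
        # making the leak visible at its source.
        if self._checked_out:
            raise AssertionError("connection already checked out")
        self._checked_out = True
        return self._connect()

    def checkin(self):
        self._checked_out = False

pool = AssertingPool(connect=lambda: object())

conn = pool.checkout()      # the single allowed connection
try:
    pool.checkout()         # a second concurrent checkout...
    leaked = False
except AssertionError:
    leaked = True           # ...fails immediately, as in the paste
print(leaked)  # -> True
```

That is why switching the test to AssertionPool turns a mysterious hang or flake into an immediate traceback pointing at whichever code path opened the second connection.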

On Fri, Sep 12, 2014 at 12:28 PM, Kevin Benton blak...@gmail.com wrote:

 Can you explain a bit about that test? I'm having trouble reproducing it.
 On the system (upstream Jenkins) that it's failing on, is postgres
 available with that database?

 On Thu, Sep 11, 2014 at 7:07 AM, Anna Kamyshnikova 
 akamyshnik...@mirantis.com wrote:

 Hello everyone!

 I'm working on implementing test in Neutron that checks that models are
 synchronized with database state [1] [2]. This is very important change as
 during Juno cycle big changes of database structure were done.

 I had been working on it for a rather long time, but about three weeks ago
 a strange error appeared [3]; using AssertionPool shows [4]. The problem is
 that somehow there is more than one connection to the database from each test.
 I tried to use locks from lockutils, but it didn't help. At the db meeting we
 decided to add TestCase just for one Ml2 plugin for starters, and then
 continue working on this strange error, that is why there are two change
 requests [1] and [2]. But I found out that somehow even one testcase fails
 with the same error [5] from time to time.

 I'm asking for any suggestions about what could be done in this case. It is
 very important to get at least [1] merged in Juno.

 [1] - https://review.openstack.org/76520

 [2] - https://review.openstack.org/120040

 [3] - http://paste.openstack.org/show/110158/

 [4] - http://paste.openstack.org/show/110159/

 [5] -
 http://logs.openstack.org/20/76520/68/check/gate-neutron-python27/63938f9/testr_results.html.gz

 Regards,

 Ann







 --
 Kevin Benton





Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Thierry Carrez
Flavio Percoco wrote:
 On 09/12/2014 12:14 AM, Zane Bitter wrote:
 The final question is the one of arbitrary access to messages in the
 queue (or queue if you prefer). Flavio indicated that this effectively
 came for free with their implementation of Pub-Sub. IMHO it is
 unnecessary and limits the choice of potential back ends in the future.
 I would personally be +1 on removing it from the v2 API, and also +1 on
 the v2 API shipping in Kilo so that as few new adopters as possible get
 stuck with the limited choices of back-end. I hope that would resolve
 Clint's concerns that we need a separate, light-weight queue system; I
 personally don't believe we need two projects, even though I agree that
 all of the use cases I personally care about could probably be satisfied
 without Pub-Sub.
 
 Right, being able to support other backends is one of the reasons we're
 looking forward to removing the support for arbitrary access to messages.
 As of now, the plan is to remove that endpoint unless a very good use
 case comes up that makes supporting other backends not worth it, which I
 doubt. The feedback from Zaqar's early adopters is that the endpoint is
 indeed not useful.

Thanks Zane, that was indeed useful. I agree with you it would be better
to avoid needing 2 separate projects for such close use cases.

Let's assume we remove arbitrary access to messages in v2. When you say
it would remove limits on the choice of potential backends, does that
mean we could have a pure queue backend (like RabbitMQ), at least in
theory ? Would a ZaqarV2 address all of Clint and Devananda's concerns
about queue semantics ? If yes, then the graduation question becomes,
how likely is that work to be completed early enough in Kilo.

If it's a no-brainer and takes a week to sort out, I think we could
approve Zaqar's Kilo graduation, even if that stretches the "no major
API rewrite planned" requirement.

But if we think this needs careful discussion so that the v2 API design
(and backend support) satisfies the widest set of users, then incubating
for another cycle while v2 is implemented seems like the right course of
action. We shouldn't graduate if there is any risk we would end up with
ZaqarV1 in Kilo, and then have to deprecate it for n cycles just because
it was shipped in the official release and therefore inherits its API
deprecation rules.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-12 Thread Daniel P. Berrange
On Thu, Sep 11, 2014 at 02:02:00PM -0400, Dan Prince wrote:
 I've always referred to the virt/driver.py API as an internal API
 meaning there are no guarantees about it being preserved across
 releases. I'm not saying this is correct... just that it is what we've
 got. While OpenStack attempts to do a good job of stabilizing its
 public APIs, we haven't done the same for internal APIs. It is actually
 quite painful to be out of tree at this point as I've seen with the
 Ironic driver being out of the Nova tree. (really glad that is back in
 now!)

Oh absolutely, I've always insisted that virt/driver.py is unstable
and that as a result out of tree drivers get to keep both pieces when
it breaks.

 So because we haven't designed things to be split out in this regard we
 can't just go and do it. 

I don't think that conclusion follows directly. We certainly need to
do some prep work to firm up our virt driver interface, as outlined
in my original mail, but if we agreed to push forward on this I think
it is practical to get that done in Kilo and split in L. It is
mostly a matter of having the will to do it IMHO.

 I tinkered with some numbers... not sure if this helps or hurts my
 stance but here goes. By my calculation this is the number of commits
 we've made that touched each virt driver tree for the last 3 releases
 plus stuff done to-date in Juno.
 
 Created using a command like this in each virt directory for each
 release: git log origin/stable/havana..origin/stable/icehouse
 --no-merges --pretty=oneline . | wc -l
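As a sanity check of the counting method above, the same `git log ... | wc -l` pattern can be exercised against a throwaway scratch repository (a sketch only — not Nova's actual repo or stable/* branches, so no branch range is used):

```shell
# Throwaway demonstration of per-directory commit counting with git log.
set -e
tmpdir=$(mktemp -d)
git init -q "$tmpdir/repo"
cd "$tmpdir/repo"
git config user.email test@example.com
git config user.name test

# Two commits touching virt/libvirt, one touching virt/xenapi.
mkdir -p virt/libvirt virt/xenapi
echo one   > virt/libvirt/driver.py; git add -A; git commit -qm 'libvirt: c1'
echo two   > virt/libvirt/driver.py; git commit -qam 'libvirt: c2'
echo three > virt/xenapi/driver.py;  git add -A; git commit -qm 'xenapi: c1'

# Same shape as the command in the mail, minus the stable-branch range:
n_libvirt=$(git log --no-merges --pretty=oneline -- virt/libvirt | wc -l)
n_xenapi=$(git log --no-merges --pretty=oneline -- virt/xenapi | wc -l)
echo "libvirt: $n_libvirt"
echo "xenapi: $n_xenapi"
```

Note that a commit touching both directories would be counted once per directory, which is why the per-driver totals below can exceed a simple partition of the overall commit count.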
 
 essex -> folsom:
 
  baremetal: 26
  hyperv: 9
  libvirt: 222
  vmwareapi: 18
  xenapi: 164
 * total for above: 439
 
 folsom -> grizzly:
 
  baremetal: 83
  hyperv: 58
  libvirt: 254
  vmwareapi: 59
  xenapi: 126
 * total for above: 580
 
 grizzly -> havana:
 
  baremetal: 48
  hyperv: 55
  libvirt: 157
  vmwareapi: 105
  xenapi: 123
 * total for above: 488
 
 havana -> icehouse:
 
  baremetal: 45
  hyperv: 42
  libvirt: 212
  vmwareapi: 121
  xenapi: 100
 * total for above: 520
 
 icehouse -> master:
 
  baremetal: 26
  hyperv: 32
  libvirt: 188
  vmwareapi: 121
  xenapi: 71
 * total for above: 438
 
 ---
 
 A couple of things jump out at me from the numbers:
 
  -drivers that are being deprecated (baremetal) still have lots of
 changes. Some of these changes are valid bug fixes for the driver but a
 majority of them are actually related to internal cleanups and interface
 changes. This goes towards the fact that Nova isn't mature enough to do
 a split like this yet.

Our position that the virt driver is internal only has permitted us
to make backwards incompatible changes to it at will. Given that freedom
people inevitably take that route since it is the least effort option.
If our position had been that the virt driver needed to be forwards
compatible, people would have been forced to make the same changes without
breaking existing drivers. IOW, the fact that we've made lots of changes
to baremetal historically doesn't imply that we can't decide to make the
virt driver API stable henceforth and thus avoid further changes of that
kind.

  -the number of commits landed isn't growing *that* much across releases
 in the virt driver trees. Presumably we think we were doing a better job
 2 years ago? But the number of changes in the virt trees is largely the
 same... perhaps this is because people aren't submitting stuff because
 they are frustrated though?

Our core team size and thus review bandwidth has been fairly static over
that time, so the only way virt driver commits could have risen is if
core reviewers increased their focus on virt drivers at the expense of
other parts of nova. I actually read those numbers as showing that as
we've put more effort into reviewing vmware contributions, we've lost
resources going into libvirt contributions.

In addition we're of course missing out on capturing the changes that
were never submitted, were submitted but abandoned, or slipped across
multiple releases waiting for merge. Overall I think
the figures paint a pretty depressing picture of no overall growth,
perhaps even a decline.


 
 For comparison here are the total number of commits for each Nova
 release (includes the above commits):
 
 essex -> folsom: 1708
 folsom -> grizzly: 2131
 grizzly -> havana: 2188
 havana -> icehouse: 1696
 icehouse -> master: 1493
 
 ---

So we've still a way to go in the Juno cycle, but I'd be surprised if we
got beyond the Havana numbers given where we are today. Again I think
those numbers show a plateau or even decline, which just reinforces
my point that our model is not scaling today.

 So say around 30% of the commits for a given release touch the virt
 drivers themselves... many of them aren't specifically related to the
 virt drivers, rather just general Nova internal cleanups because the
 interfaces aren't stable.
 
 And while splitting Nova virt drivers might help out some I'm not sure
 it helps the general Nova issue in that we 

Re: [openstack-dev] [Cinder] Request for J3 FFE - add reset-state function for backups

2014-09-12 Thread Thierry Carrez
Duncan Thomas wrote:
 On 12 September 2014 09:54, Thierry Carrez thie...@openstack.org wrote:
 At this point, unless it's critical to the success of the release (like,
 it completes a feature that is 99% there, or it increases consistency by
 plugging a feature gap, or it fixes a potential security vulnerability),
 I would rather avoid adding exceptions. Could you explain why adding
 reset-state function for backups absolutely needs to be in Juno ? Feels
 like a nice-to-have to me, and I fear we are past that point now.
 
 1. It is 99% done, we've been reviewing the patch and fixing niggles
 for a while now
 
 2. We have equivalent features for volumes and snapshots (the other
 two entities in cinder with state) and they are heavily used in
 production
 
 3. The alternative is getting admins to go editing the DB directly
 (which is what we do now) and the logic for doing so is extremely hard
 to get right
 
 I'm a strong supporter of this feature, and I just gave the patch its first +2

OK, it feels like a good consistency/usability thing to get in-release
rather than past-release. If it can get all the +2s required today (and
John's approval), I won't object to it.

-- 
Thierry Carrez (ttx)



[openstack-dev] [metrics] Old reviews (2011) with strange uploaded dates in review.openstack.org

2014-09-12 Thread Daniel Izquierdo

Hi there,

I was checking some datasets [1] from the Activity Board [2] and realized
that there are some inconsistencies in the old reviews, around July
2011, at the very beginning.


An example of this [3] shows that the uploaded date is Dec 16, 2012, 
while the review was opened on the 25th of July, 2011.


I know that this is not a big issue nowadays, even less so for the day
to day work of the developers, but for the Activity Board it was
producing some negative numbers in the review process, which seemed a bit
strange.


So, just to point the focus to those dates for those working on metrics
:). And a question: was there a migration of the review system around
2012-12-16, or some other noticeable event? On that date there were
around 1,200 submitted reviews, while on the surrounding days there are,
on average, only some dozens of them.


Cheers,
Daniel.


[1] http://activity.openstack.org/dash/browser/data/db/reviews.mysql.7z
[2] http://activity.openstack.org/dash/browser/
[3] https://review.openstack.org/#/c/44/

--
Daniel Izquierdo Cortazar, PhD
Chief Data Officer
-
Software Analytics for your peace of mind
www.bitergia.com
@bitergia




[openstack-dev] [openstack][neutron] tox -e py27 is not working in the latest neutron code

2014-09-12 Thread Kelam, Koteswara Rao
Hi All,

I am trying to run unit test cases in the neutron code using tox -e py27, but
it is not working.
I commented out pbr in requirements.txt to avoid issues related to pbr, but I
am still getting the following errors.
I removed the .tox folder and tried again, but it is still the same issue.
Please help me here.

Console logs:
---
ERROR: invocation failed, logfile: /root/mirji/neutron/.tox/py27/log/py27-1.log
ERROR: actionid=py27
msg=getenv
cmdargs=[local('/root/mirji/neutron/.tox/py27/bin/pip'), 'install', '-U', 
'-r/root/mirji/neutron/requirements.txt', 
'-r/root/mirji/neutron/test-requirements.txt']
env={'PYTHONIOENCODING': 'utf_8', 'http_proxy': 
'http://web-proxy.rose.hp.com:8088/', 'LESSOPEN': '| /usr/bin/lesspipe %s', 
'LOGNAME': 'root', 'USER': 'root', 'PATH': 
'/root/mirji/neutron/.tox/py27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
 'HOME': '/root/mirji/neutron/.tox/py27/tmp/pseudo-home', 'LANG': 
'en_US.UTF-8', 'TERM': 'screen', 'SHELL': '/bin/bash', 'no_proxy': 
'localhost,127.0.0.1/8,15.*,16.*,*.hp.com', 'https_proxy': 
'http://web-proxy.rose.hp.com:8088/', 'PYTHONHASHSEED': '0', 'SUDO_USER': 
'sdn', 'TOX_INDEX_URL': 'http://pypi.openstack.org/openstack', 'USERNAME': 
'root', 'PIP_INDEX_URL': 'http://pypi.openstack.org/openstack', 'SUDO_UID': 
'1000', 'VIRTUAL_ENV': '/root/mirji/neutron/.tox/py27', '_': 
'/usr/local/bin/tox', 'SUDO_COMMAND': '/bin/bash', 'SUDO_GID': '1000', 
'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'OLDPWD': '/root/mirji', 'SHLVL': '1', 
'PWD': '/root/mirji/neutron/neutron/tests/unit', 'MAIL': '/var/mail/root', 
'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}
Downloading/unpacking Paste (from -r /root/mirji/neutron/requirements.txt (line 
3))
  Could not find any downloads that satisfy the requirement Paste (from -r 
/root/mirji/neutron/requirements.txt (line 3))
Cleaning up...
No distributions at all found for Paste (from -r 
/root/mirji/neutron/requirements.txt (line 3))
Storing complete log in 
/root/mirji/neutron/.tox/py27/tmp/pseudo-home/.pip/pip.log

ERROR: could not install deps [-r/root/mirji/neutron/requirements.txt, 
-r/root/mirji/neutron/test-requirements.txt]
_______________ summary _______________
ERROR:   py27: could not install deps [-r/root/mirji/neutron/requirements.txt, 
-r/root/mirji/neutron/test-requirements.txt]

Thanks in advance,
Koteswar


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-12 Thread Mark McLoughlin
On Wed, 2014-09-10 at 14:51 +0200, Thierry Carrez wrote:
 Flavio Percoco wrote:
  [...]
  Based on the feedback from the meeting[3], the current main concern is:
  
  - Do we need a messaging service with a feature-set akin to SQS+SNS?
  [...]
 
 I think we do need, as Samuel puts it, 'some sort of durable
 message-broker/queue-server thing'. It's a basic application building
 block. Some claim it's THE basic application building block, more useful
 than database provisioning. It's definitely a layer above pure IaaS, so
 if we end up splitting OpenStack into layers this clearly won't be in
 the inner one. But I think IaaS+ basic application building blocks
 belong in OpenStack one way or another. That's the reason I supported
 Designate (everyone needs DNS) and Trove (everyone needs DBs).
 
 With that said, I think yesterday there was a concern that Zaqar might
 not fill the 'some sort of durable message-broker/queue-server thing'
 role well. The argument goes something like: if it was a queue-server
 then it should actually be built on top of Rabbit; if it was a
 message-broker it should be built on top of postfix/dovecot; the current
 architecture is only justified because it's something in between, so
 it's broken.
 
 I guess I don't mind that much zaqar being something in between:
 unless I misunderstood, exposing extra primitives doesn't prevent the
 queue-server use case from being filled. Even considering the
 message-broker case, I'm also not convinced building it on top of
 postfix/dovecot would be a net win compared to building it on top of
 Redis, to be honest.

AFAICT, this part of the debate boils down to the following argument:

  If Zaqar implemented messaging-as-a-service with only queuing 
  semantics (and no random access semantics), its design would 
  naturally be dramatically different and would simply implement a 
  multi-tenant REST API in front of AMQP queues like this:

https://www.dropbox.com/s/yonloa9ytlf8fdh/ZaqarQueueOnly.png?dl=0

  and that this architecture would allow for dramatically improved 
  throughput for end-users while not making the cost of providing the 
  service prohibitive to operators.

You can't dismiss that argument out-of-hand, but I wonder (a) whether
the claimed performance improvement is going to make a dramatic
difference to the SQS-like use case and (b) whether backing this thing
with an RDBMS and multiple highly available, durable AMQP broker
clusters is going to be too much of a burden on operators for whatever
performance improvements it does gain.

But the troubling part of this debate is where we repeatedly batter the
Zaqar team with hypotheses like these and appear to only barely
entertain their carefully considered justification for their design
decisions like:

  
https://wiki.openstack.org/wiki/Frequently_asked_questions_%28Zaqar%29#Is_Zaqar_a_provisioning_service_or_a_data_API.3F
  
https://wiki.openstack.org/wiki/Frequently_asked_questions_%28Zaqar%29#What_messaging_patterns_does_Zaqar_support.3F

I would like to see an SQS-like API provided by OpenStack; I accept the
reasons for Zaqar's design decisions to date; I respect that those
decisions were made carefully by highly competent members of our
community; and I expect Zaqar to evolve (like all projects) in the years
ahead based on more real-world feedback, new hypotheses or ideas, and
lessons learned from trying things out.

Mark.




Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Mark McLoughlin
On Wed, 2014-09-10 at 12:46 -0700, Monty Taylor wrote:
 On 09/09/2014 07:04 PM, Samuel Merritt wrote:
  On 9/9/14, 4:47 PM, Devananda van der Veen wrote:

  The questions now before us are:
  - should OpenStack include, in the integrated release, a
  messaging-as-a-service component?
 
  I certainly think so. I've worked on a few reasonable-scale web
  applications, and they all followed the same pattern: HTTP app servers
  serving requests quickly, background workers for long-running tasks, and
  some sort of durable message-broker/queue-server thing for conveying
  work from the first to the second.
 
  A quick straw poll of my nearby coworkers shows that every non-trivial
  web application that they've worked on in the last decade follows the
  same pattern.
 
  While not *every* application needs such a thing, web apps are quite
  common these days, and Zaqar satisfies one of their big requirements.
  Not only that, it does so in a way that requires much less babysitting
  than run-your-own-broker does.
 
 Right. But here's the thing.
 
 What you just described is what we all thought zaqar was aiming to be in 
 the beginning. We did not think it was a GOOD implementation of that, so 
 while we agreed that it would be useful to have one of those, we were 
 not crazy about the implementation.

Those generalizations are uncomfortably sweeping.

What Samuel just described is one of the messaging patterns that Zaqar
implements and some (members of the TC?) believed that this messaging
pattern was the only pattern that Zaqar aimed to implement.

Some (members of the TC?) formed strong, negative opinions about how
this messaging pattern was implemented, but some/all of those same
people agreed a messaging API implementing those semantics would be a
useful thing to have.

Mark.




Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Ihar Hrachyshka

Some updates/concerns/questions.

The status of introducing a new driver to gate is:

- all the patches for mysql-connector are merged in all projects;
- all devstack patches to support switching the driver are merged;
- new sqlalchemy-migrate library is released;

- version bump is *not* yet done;
- package is still *not* yet published on pypi;
- new gate job is *not* yet introduced.

The new sqlalchemy-migrate release introduced unit test failures in
those three projects: nova, cinder, glance.

On the technical side of the failure: my understanding is that the
projects that started to fail assume too much about how those SQL
scripts are executed. They assume the scripts are executed in one go,
and they also assume they need to open and commit transactions on their
own. I don't think this is something to be fixed in sqlalchemy-migrate
itself. Instead, simple removal of those 'BEGIN TRANSACTION; ...
COMMIT;' statements should just work and looks like a sane thing to do
anyway. I've proposed the following patches for all three projects to
handle it [1].
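To make the failure mode concrete, here is a minimal sketch of what happens when a migration script issues its own BEGIN while the runner already holds a transaction open. It uses Python's built-in sqlite3 rather than the MySQL setup from the gate, so the exact error text differs from what nova/cinder/glance saw, but the collision is the same shape:

```python
import sqlite3

# The runner side: with the default isolation level, the first DML
# statement implicitly opens a transaction on the script's behalf.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (1)")  # a transaction is now open

# The script side: a migration that insists on managing its own
# 'BEGIN TRANSACTION; ... COMMIT;' collides with the open transaction.
try:
    cur.execute("BEGIN TRANSACTION")
except sqlite3.OperationalError as exc:
    print("script's own BEGIN failed:", exc)

conn.rollback()
conn.close()
```

Dropping the explicit BEGIN/COMMIT from the scripts sidesteps exactly this class of collision, which is why that removal looks like the sane fix.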

That said, those failures were solved by pinning the version of the
library in openstack/requirements and those projects. This is in major
contrast to how we handled the new testtools release just several
weeks ago, when the problem was solved by fixing three affected
projects because of their incorrect usage of tearDown/setUp methods.

Even more so, those failures seem to have triggered the resolution to
move the enable-mysql-connector oslo spec to Kilo, while the library
version bump is the *only* change missing code-wise (we will also need
a gate job description, but that doesn't touch the codebase at all). The
resolution looks too prompt and ungrounded to me. Is it really the
gate failure for three projects that resulted in it, or are there some
other hidden reasons behind it? Was it discussed anywhere? If so, I
wasn't given a chance to participate in that discussion; I suspect
another supporter of the spec (Angus Lees) was not involved either.

By not allowing those last pieces of the spec in this cycle, we just
postpone the start of any realistic testing of the feature for another
half a year.

Why do we block new sqlalchemy-migrate and the spec for another cycle
instead of fixing the affected projects with *primitive* patches like
we did for new testtools?

[1]:
https://review.openstack.org/#/q/I10c58b3af75d3ab9153a8bbd2a539bf1577de328,n,z

/Ihar

On 09/07/14 13:17, Ihar Hrachyshka wrote:
 Hi all,
 
 Multiple projects are suffering from db lock timeouts due to
 deadlocks deep in mysqldb library that we use to interact with
 mysql servers. In essence, the problem is due to missing eventlet
 support in mysqldb module, meaning when a db lock is encountered,
 the library does not yield to the next green thread, allowing other
 threads to eventually unlock the grabbed lock, and instead it just
 blocks the main thread, that eventually raises timeout exception
 (OperationalError).
 
 The failed operation is not retried, leaving the failing request 
 unserved. In Nova, there is a special retry mechanism for deadlocks, 
 though I think it's more a hack than a proper fix.
 
 Neutron is one of the projects that suffer from those timeout
 errors a lot. Partly it's due to lack of discipline in how we do
 nested calls in l3_db and ml2_plugin code, but that's not something
 to change in foreseeable future, so we need to find another
 solution that is applicable for Juno. Ideally, the solution should
 be applicable for Icehouse too to allow distributors to resolve
 existing deadlocks without waiting for Juno.
 
 We've had several discussions and attempts to introduce a solution
 to the problem. Thanks to oslo.db guys, we now have more or less
 clear view on the cause of the failures and how to easily fix them.
 The solution is to switch mysqldb to something eventlet aware. The
 best candidate is probably MySQL Connector module that is an
 official MySQL client for Python and that shows some (preliminary)
 good results in terms of performance.
 
 I've posted a Neutron spec for the switch to the new client in Juno
 at [1]. Ideally, switch is just a matter of several fixes to
 oslo.db that would enable full support for the new driver already
 supported by SQLAlchemy, plus 'connection' string modified in
 service configuration files, plus documentation updates to refer to
 the new official way to configure services for MySQL. The database
 code won't, ideally, require any major changes, though some
 adaptation for the new client library may be needed. That said,
 Neutron does not seem to require any changes, though it was
 revealed that there are some alembic migration rules in Keystone or
 Glance that need (trivial) modifications.
 
 You can see how trivial the switch can be achieved for a service
 based on example for Neutron [2].
 
 While this is a Neutron specific proposal, there is an obvious wish
 to switch to the new library globally 
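For reference, the Nova-style deadlock retry mechanism mentioned above amounts to re-running the DB operation when the driver reports a deadlock. A self-contained sketch follows; DBDeadlock and the failing-twice behavior are made up for illustration, and Nova's real helper lives in its db layer and differs in detail:

```python
import functools
import time


class DBDeadlock(Exception):
    """Stand-in for the deadlock error a real DB driver would raise."""


def retry_on_deadlock(max_retries=5, delay=0.01):
    # Re-run the decorated DB operation when a deadlock is reported,
    # instead of failing the whole API request on the first attempt.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator


calls = {"n": 0}

@retry_on_deadlock()
def update_row():
    # Simulate an operation that deadlocks twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise DBDeadlock()
    return "updated"

print(update_row())  # prints: updated
```

As the thread says, this masks the symptom rather than fixing the blocking driver, which is the argument for switching to an eventlet-aware client instead.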

Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-09-12 Thread Sean Dague
On 09/11/2014 08:49 PM, Jamie Lennox wrote:
 
 
 - Original Message -
 From: Travis S Tripp travis.tr...@hp.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, 12 September, 2014 10:30:53 AM
 Subject: [openstack-dev] masking X-Auth-Token in debug output - proposed 
 consistency



 Hi All,



 I’m just helping with bug triage in Glance and we’ve got a bug to update how
 tokens are redacted in the glanceclient [1]. It says to update to whatever
 cross-project approach is agreed upon and references this thread:



 http://lists.openstack.org/pipermail/openstack-dev/2014-June/037345.html



 I just went through the thread and as best as I can tell there wasn’t a
 conclusion in the ML. However, if we are going to do anything, IMO the
 thread leans toward {SHA1}, with Morgan Fainberg dissenting.
 That said, he references a patch that was ultimately abandoned.



 If there was a conclusion to this, please let me know so I can update and
 work on closing this bug.
 
 We handle this in the keystoneclient Session object by just printing REDACTED 
 or something similar. The problem with using a SHA1 is that for backwards 
 compatibility we often use the SHA1 of a PKI token as if it were a UUID token 
 and so this is still sensitive data. There is work in keystone by 
 morganfainberg (which I think was merged) to add a new audit_id which will be 
 able to identify a token across calls without exposing any sensitive 
 information. We will support this in session when available. 

So the problem is that means we are currently leaking secrets and making
the logs unreadable.

It seems like we should move forward with the {SHA1} ... and if that is
still sensitive, address that later. Not addressing it basically keeps
the exposure and destroys usability of the code because there is so much
garbage printed out.
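A minimal sketch of the {SHA1} approach being discussed (the function and dict handling here are illustrative, not the actual client code, and Jamie's caveat still applies where the SHA1 of a PKI token can itself act as a credential):

```python
import hashlib

def redact_token(token):
    # Log a prefix-tagged SHA1 digest instead of the raw token: the
    # same token still produces the same log line, so requests remain
    # correlatable across a debug log without printing the credential.
    return "{SHA1}" + hashlib.sha1(token.encode("utf-8")).hexdigest()

headers = {"X-Auth-Token": "super-secret-token",
           "Accept": "application/json"}
loggable = dict(headers)
loggable["X-Auth-Token"] = redact_token(headers["X-Auth-Token"])
print(loggable["X-Auth-Token"])  # {SHA1} followed by 40 hex chars
```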

 The best i can say for standardization is that when glanceclient adopts the 
 session it will be handled the same way as all the other clients and 
 improvements can happen there without you having to worry about it. 

Please don't let this be the perfect is the enemy of the good here.
Debugging OpenStack is hard. Not fixing this keeps it harder.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Sean Dague
On 09/12/2014 06:41 AM, Ihar Hrachyshka wrote:
 Some updates/concerns/questions.
 
 The status of introducing a new driver to gate is:
 
 - all the patches for mysql-connector are merged in all projects;
 - all devstack patches to support switching the driver are merged;
 - new sqlalchemy-migrate library is released;
 
 - version bump is *not* yet done;
 - package is still *not* yet published on pypi;
 - new gate job is *not* yet introduced.
 
 The new sqlalchemy-migrate release introduced unit test failures in
 those three projects: nova, cinder, glance.
 
 On technical side of the failure: my understanding is that those
 projects that started to fail assume too much about how those SQL
 scripts are executed. They assume they are executed in one go, they
 also assume they need to open and commit transaction on their own. I
 don't think this is something to be fixed in sqlalchemy-migrate
 itself. Instead, simple removal of those 'BEGIN TRANSACTION; ...
 COMMIT;' statements should just work and looks like a sane thing to do
 anyway. I've proposed the following patches for all three projects to
 handle it [1].
 
 That said, those failures were solved by pinning the version of the
 library in openstack/requirements and those projects. This is in major
 contrast to how we handled the new testtools release just several
 weeks ago, when the problem was solved by fixing three affected
 projects because of their incorrect usage of tearDown/setUp methods.
 
 Even more so, those failures seem to trigger the resolution to move
 the enable-mysql-connector oslo spec to Kilo, while the library
 version bump is the *only* change missing codewise (we will also need
 a gate job description, but that doesn't touch codebase at all). The
 resolution looks too prompt and ungrounded to me. Is it really that
 gate failure for three projects that resulted in it, or there are some
 other hidden reasons behind it? Was it discussed anywhere? If so, I
 wasn't given a chance to participate in that discussion; I suspect
 another supporter of the spec (Angus Lees) was not involved either.
 
 Not allowing those last pieces of the spec in this cycle, we just
 postpone start of any realistic testing of the feature for another
 half a year.
 
 Why do we block new sqlalchemy-migrate and the spec for another cycle
 instead of fixing the affected projects with *primitive* patches like
 we did for new testtools?

Because we are in Feature Freeze. Now is the time for critical bug fixes
only, as we start to stabilize the tree. Releasing dependent libraries
that can cause breaks, for whatever reason, should be soundly avoided.

If this was August, fine. But it's feature freeze.

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] [Openstack-dev][Cinder] FFE request for adding Huawei SDSHypervisor driver and connector

2014-09-12 Thread Zhangni
I'd like to request a Juno feature freeze exception for this blueprint and 
Spec:

https://blueprints.launchpad.net/cinder/+spec/huawei-sdshypervisor-driver

https://review.openstack.org/#/c/101688/

as implemented by the following patch:

https://review.openstack.org/#/c/108609


[openstack-dev] [all] Design Summit planning

2014-09-12 Thread Thierry Carrez
Hi everyone,

I visited the Paris Design Summit space on Monday and confirmed that it
should be possible to split it in a way that would allow us to have
per-program contributor meetups on the Friday. The schedule would go
as follows:

Tuesday: cross-project workshops
Wednesday, Thursday: traditional scheduled slots
Friday: contributors meetups

We'll also have pods available all 4 days for more ad-hoc small meetings.

In the mean time, we need to discuss how we want to handle the selection
of session topics.

In past summits we used a Design-Summit-specific session suggestion
website, and PTLs would approve/deny them. This setup grew less and less
useful: session topics were selected collaboratively on etherpads,
discussed in meetings, and finally filed/reorganized/merged on the
website just before scheduling. Furthermore, with even fewer scheduled
slots, we would have to reject most of the suggestions, which is more
frustrating for submitters than the positive experience of joining team
meetings to discuss which topics are the most important. Finally, topics
will need to be split between scheduled sessions and the contributors
meetup agenda, and that's easier to do on an Etherpad anyway.

This is why I'd like to suggest that all programs use etherpads to
collect important topics, select which ones would get in the very few
scheduled slots we'll have left, which will get discussed in the
contributors meetup, and which are better left for a pod discussion.
I suggest we all use IRC team meetings to collaboratively discuss that
content between interested contributors.

To simplify the communication around this, I tried to collect the
already-announced etherpads on a single page at:

https://wiki.openstack.org/wiki/Summit/Planning

Please add any that I missed !

If you think this is wrong and think the design summit suggestion
website is a better way to do it, let me know why! If some programs
really can't stand the 'etherpad/IRC' approach I'll see how we can spin
up a limited instance.

Regards,

-- 
Thierry Carrez (ttx)



[openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Sean Dague
I assume you, gentle OpenStack developers, often find yourselves in a
hair-tearing-out moment of frustration about why local unit tests are doing
completely insane things. The code that the stack trace points at is nowhere
to be found, and yet it fails.

And then you realize that part of oslo doesn't exist anymore
except there are still pyc files lying around. Gah!

I've proposed the following to Nova and Python novaclient -
https://review.openstack.org/#/c/121044/

Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.

This prevents pyc files from being written in your git tree (win!). It
doesn't seem to impact what pip installs... and if anyone knows how to
prevent those pyc files from getting created, that would be great.
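The effect is easy to check from a scratch directory; this sketch (not part of the proposed patch, just a demonstration) runs a child interpreter with and without the variable and looks for a __pycache__ directory afterwards:

```python
import os
import subprocess
import sys
import tempfile

def pyc_written(dont_write):
    # Import a throwaway module in a child interpreter and report
    # whether a __pycache__ directory was created next to it.
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "mod.py"), "w") as f:
            f.write("x = 1\n")
        env = dict(os.environ)
        # Start from a clean slate for the relevant variables.
        env.pop("PYTHONDONTWRITEBYTECODE", None)
        env.pop("PYTHONPYCACHEPREFIX", None)
        env.pop("PYTHONSAFEPATH", None)
        if dont_write:
            env["PYTHONDONTWRITEBYTECODE"] = "1"
        subprocess.run([sys.executable, "-c", "import mod"],
                       cwd=tmp, env=env, check=True)
        return os.path.isdir(os.path.join(tmp, "__pycache__"))

print(pyc_written(dont_write=False))  # True
print(pyc_written(dont_write=True))   # False
```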

But it's something which hopefully makes the system feel less fragile
to developers.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Flavio Percoco
On 09/12/2014 11:36 AM, Thierry Carrez wrote:
 Flavio Percoco wrote:
 On 09/12/2014 12:14 AM, Zane Bitter wrote:
 The final question is the one of arbitrary access to messages in the
 queue (or queue if you prefer). Flavio indicated that this effectively
 came for free with their implementation of Pub-Sub. IMHO it is
 unnecessary and limits the choice of potential back ends in the future.
 I would personally be +1 on removing it from the v2 API, and also +1 on
 the v2 API shipping in Kilo so that as few new adopters as possible get
 stuck with the limited choices of back-end. I hope that would resolve
 Clint's concerns that we need a separate, light-weight queue system; I
 personally don't believe we need two projects, even though I agree that
 all of the use cases I personally care about could probably be satisfied
 without Pub-Sub.

 Right, being able to support other backends is one of the reasons we're
 looking to remove the support for arbitrary access to messages.
 As of now, the plan is to remove that endpoint unless a very good use
 case comes up that makes supporting other backends not worth it, which I
 doubt. The feedback from Zaqar's early adopters is that the endpoint is
 indeed not useful.
 
 Thanks Zane, that was indeed useful. I agree with you it would be better
 to avoid needing 2 separate projects for such close use cases.

+1

 Let's assume we remove arbitrary access to messages in v2. When you say
 it would remove limits on the choice of potential backends, does that
 mean we could have a pure queue backend (like RabbitMQ), at least in
 theory ? Would a ZaqarV2 address all of Clint and Devananda's concerns
 about queue semantics ? If yes, then the graduation question becomes,
 how likely is that work to be completed early enough in Kilo.
 
 If it's a no-brainer and takes a week to sort out, I think we could
 approve Zaqar's Kilo graduation, even if that stretches the no major
 API rewrite planned requirement.

Let me break the above down into several points so we can discuss them
separately:

- Removing that endpoint won't take more than a week. It's an API change
and it won't affect the existing storage drivers.

- Removing that endpoint will certainly make the adoption of other
messaging technologies easier but there are other things to consider
besides that specific endpoint (some of them were stated here[0]). In
any case, removing the endpoint definitely makes it easier.

- Besides the random access to messages, I'm not clear on what other
concerns there are with regard to the current semantics. It'd be nice if
we could collect them in this section and discuss them. I took a look
at the other emails in this thread and it seems to me that the concerns
that have been raised are more oriented toward the project's scope and
use cases. I also looked at the meeting logs again[1] and the only
concern related to the semantics I found is about the
`get-message-by-id` endpoint. Please, correct me if I'm wrong.


[0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/
[1]
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-09-20.01.log.html
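To make the distinction concrete, here is a toy sketch (nothing like Zaqar's real API, just an illustration of the semantics under discussion) of claim-style queuing, where dropping the get-message-by-id endpoint means consumers can only take messages off the front:

```python
import uuid

class ToyQueue:
    """Toy queue with claim semantics only: message ids exist
    internally for storage, but no get-message-by-id is exposed."""

    def __init__(self):
        self._messages = []

    def post(self, body):
        self._messages.append({"id": uuid.uuid4().hex, "body": body})

    def claim(self, limit=1):
        # Hand out the next `limit` messages in FIFO order and drop
        # them from the queue; no random access by id is offered.
        claimed = self._messages[:limit]
        del self._messages[:limit]
        return [m["body"] for m in claimed]

q = ToyQueue()
q.post("create-backup")
q.post("send-email")
print(q.claim())   # ['create-backup']
print(q.claim(2))  # ['send-email']
```

Because nothing in this interface ever looks a message up by id, any FIFO-capable backend could sit behind it, which is the backend-flexibility argument for removing the endpoint.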

Flavio

 But if we think this needs careful discussion so that the v2 API design
 (and backend support) satisfies the widest set of users, then incubating
 for another cycle while v2 is implemented seems like the right course of
 action. We shouldn't graduate if there is any risk we would end up with
 ZaqarV1 in Kilo, and then have to deprecate it for n cycles just because
 it was shipped in the official release and therefore inherits its API
 deprecation rules.
 
 Regards,
 


-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [Infra][Cinder] Coraid CI system

2014-09-12 Thread Roman Bogorodskiy
Hi,

Mykola has some problems sending emails to the list, so he asked me to post
a response on his behalf; here it goes:

---
Remy, I have improved the Coraid CI system and added logs of all components of
devstack. Please have a look:

http://38.111.159.9:8080/job/Coraid_CI/164/

According to the requirements at
http://ci.openstack.org/third_party.html#requesting-a-service-account ,
the Gerrit plugin for Jenkins should be given the following options:

Successful: gerrit approve CHANGE,PATCHSET --message 'Build
Successful BUILDS_STATS' --verified VERIFIED --code-review
CODE_REVIEW
Failed: gerrit approve CHANGE,PATCHSET --message 'Build Failed
BUILDS_STATS' --verified VERIFIED --code-review CODE_REVIEW
Unstable: gerrit approve CHANGE,PATCHSET --message 'Build Unstable
BUILDS_STATS' --verified VERIFIED --code-review CODE_REVIEW

I configured the Gerrit plugin this way, so it sends the following comment
after checking a patch set or a recheck comment. For example,
https://review.openstack.org/#/c/120907/

Patch Set 1:

Build Successful

http://38.111.159.9:8080/job/Coraid_CI/164/ : SUCCESS


All logs are on that page, available as artifacts.

 I took a quick look and I don’t see which test cases are being run?
We test the Coraid Cinder driver with the standard tempest tests using the
./driver_certs/cinder_driver_cert.sh script. The test cases are listed in the
job log.

Please look at the Coraid third-party system one more time and show us
what we have to add or improve in order to get voting rights for the gerrit
user coraid-ci.

Also, I have set the Gerrit plugin on our Jenkins to silent mode, as you
suggested.

Thank you in advance.


On Fri, Sep 5, 2014 at 7:34 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

 -1 from me (non-cinder core)

 It's very nice to see you're making progress. I, personally, was very
 confused about voting.
 Here's my understanding: voting is the ability to provide an
 official +1/-1 vote in the Gerrit system.

 I don't see a stable history [1]. Before requesting voting, you should
 enable your system on the cinder project itself.
 Initially, you should disable ALL gerrit comments, i.e. run in silent
 mode, per request from cinder PTL [2]. Once stable there, you can enable
 gerrit comments. At this point, everyone can see pass/fail comments with a
 vote=0.
 Once stable there on real patches, you can request voting again, where the
 pass/fail would vote +1/-1.

 Ramy
 [1] http://38.111.159.9:8080/job/Coraid_CI/35/console
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/043876.html


 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Friday, September 05, 2014 7:55 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Infra][Cinder] Coraid CI system

 +1 from me (Cinder core)

 On 5 September 2014 15:09, Mykola Grygoriev mgrygor...@mirantis.com
 wrote:
  Hi,
 
  My name is Mykola Grygoriev and I'm an engineer currently working on
  deploying third-party CI for the Coraid Cinder driver.
 
  Following instructions on
 
  http://ci.openstack.org/third_party.html#requesting-a-service-account
 
  I am asking for the gerrit CI account (coraid-ci) to be added to the Voting
  Third-Party CI Gerrit group.
 
 
 
  We have already added description of Coraid CI system to wiki page -
  https://wiki.openstack.org/wiki/ThirdPartySystems/Coraid_CI
 
  We used openstack-dev/sandbox project to test current CI
  infrastructure with OpenStack Gerrit system. Please find our history
 there.
 
  Please have a look to results of Coraid CI system. it currently takes
  updates from openstack/cinder project:
  http://38.111.159.9:8080/job/Coraid_CI/32/
  http://38.111.159.9:8080/job/Coraid_CI/33/
 
  Thank you in advance.
 
  --
  Best regards,
  Mykola Grygoriev
 
 



 --
 Duncan Thomas




Re: [openstack-dev] [nova] Server Groups - remove VM from group?

2014-09-12 Thread Russell Bryant
On 09/11/2014 05:01 PM, Jay Pipes wrote:
 On 09/11/2014 04:51 PM, Matt Riedemann wrote:
 On 9/10/2014 6:00 PM, Russell Bryant wrote:
 On 09/10/2014 06:46 PM, Joe Cropper wrote:
 Hmm, not sure I follow the concern, Russell.  How is that any different
 from putting a VM into the group when it’s booted, as is done today?
 This simply defers the ‘group insertion time’ to some time after
 the VM’s been initially spawned, so I’m not sure this creates any more
 race conditions than what’s already there [1].

 [1] Sure, the to-be-added VM could be in the midst of a migration or
 something, but that would be pretty simple to check for: make sure its
 task state is None or some such.

 The way this works at boot is already a nasty hack.  It does policy
 checking in the scheduler, and then has to re-do some policy checking at
 launch time on the compute node.  I'm afraid of making this any worse.
 In any case, it's probably better to discuss this in the context of a
 more detailed design proposal.


 This [1] is the hack you're referring to right?

 [1]
 http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2.b3#n1297

 
 That's the hack *I* had in the back of my mind.

Yep.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Zane Bitter

On 11/09/14 19:05, Jay Pipes wrote:

On 09/11/2014 04:09 PM, Zane Bitter wrote:

Swift is the current exception here, but one could argue, and people
have[2], that Swift is also the only project that actually conforms to
our stated design tenets for OpenStack. I'd struggle to tell the Zaqar
folks they've done the Wrong Thing... especially when abandoning the
RDBMS driver was done largely at the direction of the TC iirc.


snip

[2] http://blog.linux2go.dk/2013/08/30/openstack-design-tenets-part-2/


No offense to Soren, who wrote some interesting and poignant things, nor
to the Swift developers, who continue to produce excellent work, but
Swift is object storage. It is a data plane system with a small API
surface, a very limited functional domain, and a small, inflexible
storage schema (which is perfectly fine for its use cases). Its needs
for a relational database are nearly non-existent. It replicates a
SQLite database around using rsync [1]. Try doing that with a schema of
any complexity and you will quickly find the limitations of such a
strategy.

If Nova were to take Soren's advice and implement its data-access layer
on top of Cassandra or Riak, we would just end up re-inventing SQL Joins
in Python-land. I've said it before, and I'll say it again. In Nova at
least, the SQL schema is complex because the problem domain is complex.
That means lots of relations, lots of JOINs, and that means the best way
to query for that data is via an RDBMS.

And I say that knowing just how *poor* some of the queries are in Nova!


I wasn't trying to suggest that Nova should change (if there was another 
project I had in mind while reading that it would have been Heat, not 
Nova). My point was that it's understandable that Zaqar, which is *also* 
a data-plane service with a small API surface and a limited functional 
domain, doesn't have the same architecture as Nova (just as Swift 
doesn't) and that it's probably counter-productive to force it into that 
architecture purely because a bunch of other things use it.



For projects like Swift, Zaqar, even Keystone, Glance and Cinder, a
non-RDBMS solution might be a perfectly reasonable solution for the
underlying data storage and access layer (and for the record, I never
said that Zaqar should or should not use an RDBMS for its storage). For
complex control plane software like Nova, though, an RDBMS is the best
tool for the job given the current lay of the land in open source data
storage solutions matched with Nova's complex query and transactional
requirements.


+1


Folks in these other programs have actually, you know, thought about
these kinds of things and had serious discussions about alternatives. It
would be nice to have someone acknowledge that instead of snarky
comments implying everyone else has it wrong.


I didn't mean to imply that anybody else has it wrong (although FWIW I 
do think that Heat probably has it wrong), and I apologise to anyone who 
interpreted it that way.



Going back in my hole,
-jay


No! Let's talk about Zaqar :)

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Zane Bitter

On 12/09/14 04:50, Flavio Percoco wrote:

On 09/12/2014 12:14 AM, Zane Bitter wrote:

However, Zaqar also supports the Pub-Sub model of messaging. I believe,
but would like Flavio to confirm, that this is what is meant when the
Zaqar team say that Zaqar is about messaging in general and not just
queuing. That is to say, it is possible for multiple consumers to
intentionally consume the same message, with each maintaining its own
pointer in the queue. (Another way to think of this is that messages can
be multicast to multiple virtual queues, with data de-duplication
between them.) To a relative novice in the field like me, the difference
between this and queuing sounds pretty academic :P. Call it what you
will, it seems like a reasonable thing to implement to me.
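
The per-consumer-pointer model described above can be sketched in a few
lines (a toy illustration only, not Zaqar's actual implementation or API;
all names below are invented):

```python
# Toy sketch of pub-sub via per-consumer pointers: one shared, append-only
# message log, with each consumer tracking its own read position, so every
# consumer sees every message exactly once.
class PubSubQueue:
    def __init__(self):
        self._messages = []   # shared, append-only message log
        self._pointers = {}   # consumer name -> index of next unread message

    def post(self, body):
        self._messages.append(body)

    def consume(self, consumer):
        idx = self._pointers.get(consumer, 0)
        batch = self._messages[idx:]           # everything this consumer hasn't seen
        self._pointers[consumer] = len(self._messages)
        return batch

q = PubSubQueue()
q.post("event-1")
q.post("event-2")
print(q.consume("worker-a"))  # ['event-1', 'event-2']
q.post("event-3")
print(q.consume("worker-a"))  # ['event-3']
print(q.consume("worker-b"))  # ['event-1', 'event-2', 'event-3']
```

Each consumer effectively gets its own virtual queue over the same
de-duplicated data, which is the multicast view described above.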


Correct, this and other messaging patterns supported by Zaqar make it a
messaging service, which as Gordon mentioned in another email is just a
more generic term. Messages are the most important resource in Zaqar and
providing good, common and scalable patterns to access those messages is
what we strive for in Zaqar's API.


Thanks Flavio! I think we are more or less on the same page :)

Maybe you could clarify what the other messaging patterns are exactly, 
since that seems to be one of the points of confusion/contention.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][db] Need help resolving a strange error with db connections in tests

2014-09-12 Thread Kevin Benton
This one was tricky. :-)

This failure can be produced consistently by running the following two
tests:
neutron.tests.unit.brocade.test_brocade_plugin.TestBrocadePortsV2.test_delete_network_port_exists_owned_by_network
neutron.tests.unit.db.test_migration.TestModelsMigrationsSyncMl2Psql.test_models_sync

This failure behavior started after
change I6f67bb430c50ddacb2d53398de75fb5f494964a0 to use oslo.db for all of
neutron connection handling instead of SQLAlchemy.[1] The failure message
returned from the DB layer is very misleading. If you remove the catch that
converts to that generic error about username/pass and DB being
wrong/missing, you will get the real following error:

OperationalError: (OperationalError) database dzmhwmgrou is being
accessed by other users
DETAIL:  There are 1 other session(s) using the database.
 'drop database dzmhwmgrou;' {}


What is happening is that the oslo.db test case cleanup code is trying to
destroy the db while a separate sqlalchemy engine (created from the alembic
migration code) still has a connection to the db. The first test is
required to help trigger the failure, either because of imports or because
its run sets up database connections that cause things to be cached in a
module-level variable somewhere. I haven't looked into the exact source.

Here is the diff to fix your patch to pass the same session into the
alembic migration code that is setup and torn down by the test case. This
should allow you to proceed forward with your work.


~/code/neutron$ git diff
diff --git a/neutron/tests/unit/db/test_migration.py b/neutron/tests/unit/db/test_migration.py
index 6db8ae0..c29ab67 100644
--- a/neutron/tests/unit/db/test_migration.py
+++ b/neutron/tests/unit/db/test_migration.py
@@ -136,9 +136,12 @@ class ModelsMigrationsSyncMixin(object):
         self.alembic_config.neutron_config = cfg.CONF
 
     def db_sync(self, engine):
-        cfg.CONF.set_override('connection', engine.url, group='database')
-        migration.do_alembic_command(self.alembic_config, 'upgrade', 'head')
-        cfg.CONF.clear_override('connection', group='database')
+        with mock.patch(
+            'oslo.db.sqlalchemy.session.create_engine',
+            return_value=self.get_engine()
+        ):
+            migration.do_alembic_command(self.alembic_config,
+                                         'upgrade', 'head')
 
     def get_engine(self):
         return self.engine




1. https://review.openstack.org/#/c/110016/

--
Kevin Benton

On Fri, Sep 12, 2014 at 2:15 AM, Anna Kamyshnikova 
akamyshnik...@mirantis.com wrote:

 This is implementing ModelsMigrationsSync test from oslo [1]. For running
 it locally on Postgres you have to do the following things (it is mentioned
 in comments to test):

 For the opportunistic testing you need to set up a db named
 'openstack_citest' with user 'openstack_citest' and password
 'openstack_citest' on localhost.
 The test will then use that db and user/password combo to run the
 tests.

 For PostgreSQL on Ubuntu this can be done with the following commands::

  sudo -u postgres psql
  postgres=# create user openstack_citest with createdb login password
    'openstack_citest';
  postgres=# create database openstack_citest with owner openstack_citest;

 For MySQL on Ubuntu this can be done with the following commands::

 mysql -u root
 create database openstack_citest;
  grant all privileges on openstack_citest.* to
   openstack_citest@localhost identified by 'openstack_citest';

 As I said this error appeared only three weeks ago, although I'm working
 on this test since 29 of April, it passed Jenkins in August without any
 problems. Postgres is available there.

 [1] -
 https://github.com/openstack/oslo.db/blob/master/oslo/db/sqlalchemy/test_migrations.py#L277

 On Fri, Sep 12, 2014 at 12:28 PM, Kevin Benton blak...@gmail.com wrote:

 Can you explain a bit about that test? I'm having trouble reproducing it.
 On the system (upstream Jenkins) that it's failing on, is postgres
 available with that database?

 On Thu, Sep 11, 2014 at 7:07 AM, Anna Kamyshnikova 
 akamyshnik...@mirantis.com wrote:

 Hello everyone!

 I'm working on implementing test in Neutron that checks that models are
 synchronized with database state [1] [2]. This is very important change as
 during Juno cycle big changes of database structure were done.

 I was working on it for rather long time but about three weeks ago
 strange error appeared [3], using AssertionPool shows [4]. The problem is
 that somehow there are more than one connection to database from each test.
 I tried to use locks from lockutils, but it didn’t help. On db meeting we
 decided to add TestCase just for one Ml2 plugin for starters, and then
 continue working on this strange error, that is why there are two change
 requests [1] and [2]. But I found out that somehow even one testcase fails
 with the same error [5] from time to time.


Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Zane Bitter

On 12/09/14 07:37, Thierry Carrez wrote:

Hi everyone,

I visited the Paris Design Summit space on Monday and confirmed that it
should be possible to split it in a way that would allow to have
per-program contributors meetups on the Friday. The schedule would go
as follows:

Tuesday: cross-project workshops
Wednesday, Thursday: traditional scheduled slots
Friday: contributors meetups

We'll also have pods available all 4 days for more ad-hoc small meetings.

In the mean time, we need to discuss how we want to handle the selection
of session topics.

In past summits we used a Design-Summit-specific session suggestion
website, and PTLs would approve/deny them. This setup grew less and less
useful: session topics were selected collaboratively on etherpads,
discussed in meetings, and finally filed/reorganized/merged on the
website just before scheduling. Furthermore, with even less scheduled
slots, we would have to reject most of the suggestions, which is more
frustrating for submitters than the positive experience of joining team
meetings to discuss which topics are the most important. Finally, topics
will need to be split between scheduled sessions and the contributors
meetup agenda, and that's easier to do on an Etherpad anyway.

This is why I'd like to suggest that all programs use etherpads to
collect important topics, select which ones would get in the very few
scheduled slots we'll have left, which will get discussed in the
contributors meetup, and which are better left for a pod discussion.
I suggest we all use IRC team meetings to collaboratively discuss that
content between interested contributors.


+1, this was #1 or #2 on my list of things where the PTL becomes a 
single point of failure.


- ZB


To simplify the communication around this, I tried to collect the
already-announced etherpads on a single page at:

https://wiki.openstack.org/wiki/Summit/Planning

Please add any that I missed !

If you think this is wrong and think the design summit suggestion
website is a better way to do it, let me know why! If some programs
really can't stand the 'etherpad/IRC' approach I'll see how we can spin
up a limited instance.

Regards,




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

On 12/09/14 13:20, Sean Dague wrote:
 On 09/12/2014 06:41 AM, Ihar Hrachyshka wrote:
 Some updates/concerns/questions.
 
 The status of introducing a new driver to gate is:
 
 - all the patches for mysql-connector are merged in all
 projects; - all devstack patches to support switching the driver
 are merged; - new sqlalchemy-migrate library is released;
 
 - version bump is *not* yet done; - package is still *not* yet
 published on pypi; - new gate job is *not* yet introduced.
 
 The new sqlalchemy-migrate release introduced unit test failures
 in those three projects: nova, cinder, glance.
 
  On the technical side of the failure: my understanding is that those
 projects that started to fail assume too much about how those
 SQL scripts are executed. They assume they are executed in one
 go, they also assume they need to open and commit transaction on
 their own. I don't think this is something to be fixed in
 sqlalchemy-migrate itself. Instead, simple removal of those
 'BEGIN TRANSACTION; ... COMMIT;' statements should just work and
 looks like a sane thing to do anyway. I've proposed the following
 patches for all three projects to handle it [1].
 
 That said, those failures were solved by pinning the version of
 the library in openstack/requirements and those projects. This is
 in major contrast to how we handled the new testtools release
 just several weeks ago, when the problem was solved by fixing
 three affected projects because of their incorrect usage of
 tearDown/setUp methods.
 
 Even more so, those failures seem to trigger the resolution to
 move the enable-mysql-connector oslo spec to Kilo, while the
 library version bump is the *only* change missing codewise (we
 will also need a gate job description, but that doesn't touch
 codebase at all). The resolution looks too prompt and ungrounded
 to me. Is it really that gate failure for three projects that
 resulted in it, or there are some other hidden reasons behind it?
 Was it discussed anywhere? If so, I wasn't given a chance to
 participate in that discussion; I suspect another supporter of
  the spec (Angus Lees) was not involved either.
 
 Not allowing those last pieces of the spec in this cycle, we
 just postpone start of any realistic testing of the feature for
 another half a year.
 
 Why do we block new sqlalchemy-migrate and the spec for another
 cycle instead of fixing the affected projects with *primitive*
 patches like we did for new testtools?
 
 Because we are in Feature Freeze. Now is the time for critical bug
  fixes only, as we start to stabilize the tree. Releasing dependent
 libraries that can cause breaks, for whatever reason, should be
 soundly avoided.
 
 If this was August, fine. But it's feature freeze.

I probably missed the fact that we are so strict now that we don't
allow tiny missing bits to go in. In my defense, I was offline for
around the last three weeks. I was a bit misled by the fact that I was
approached by an oslo core very recently on which remaining bits we
need to push before claiming the spec to be complete, and I assumed it
means that we are free to complete the work this cycle. Otherwise, I
wouldn't push for the new library version in the first place.

Anyway, I guess there is no way now to get remaining bits in Juno,
even if small, and we're doomed to postpone them to Kilo.

Thanks for the explanation,
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUEvPjAAoJEC5aWaUY1u57kPYIAMuTz5w8cmNLeXHSGpb0s0BT
4GPbTvLIvoTRXf2froozSxVo6B4oKgUFe7IkSI8nsBHP+dcDPotKwJEMgAKpLL1n
37ccFR+RuMCVMa6ZYHgz88o4dbTgv5XC5tBTnY78mX7WOoQHQ0ByRcBUZkIc9aoI
KF+SNRvHwVRT9qNPElcrfHKNPwROIe1Eml3aVaqnHWPWip5J7+E+/BU+YSxtDKIV
whrJzUpHgwph4NJ1lHddrzVCAjf8mWKj8EX1WWU2zTgUtfLi+xqvOBCnQ+1rBXA8
brIBpbUOObMjBqbemlymKuFvcuy6yHTXXvAfLcgGcRXSmvdjtfAIZCr5d9AjKhU=
=zPHu
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [metrics] Old reviews (2011) with strange uploaded dates in review.openstack.org

2014-09-12 Thread Jeremy Stanley
On 2014-09-12 11:54:19 +0200 (+0200), Daniel Izquierdo wrote:
[...]
 And a question: was there a migration around the 2012-12-16 of the
 review system or some other noticeable event?. On such date there
 was around 1,200 submitted reviews, while around those days, in
 mean, there are some dozens of them.

We discovered that Gerrit configures datestamps in most of its
tables to reset on row updates (a particularly insane design choice
for things like a created_on column). Before we realized this,
intrusive maintenance activities--most notably project renames--were
mass-resetting the creation dates of changes and comments to the
date and time we ran the necessary update queries. Now we
special-case those fields in our update queries to forcibly reset
them to themselves so that they retain their original values, but at
this point there's no easy way to go back and fix the ones we did
before we noticed this unfortunate loss of date/time information.
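
The "reset them to themselves" workaround takes roughly this shape; the
table and column names below are invented for illustration, and sqlite is
used here only to show the query form, since it lacks MySQL's auto-updating
TIMESTAMP columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE changes (id INTEGER, project TEXT, created_on TEXT)")
conn.execute("INSERT INTO changes VALUES (1, 'old/name', '2011-07-01 12:00:00')")

# The rename update re-asserts created_on = created_on. On MySQL, a
# TIMESTAMP column declared ON UPDATE CURRENT_TIMESTAMP is only auto-reset
# when the UPDATE does *not* assign it explicitly, so this self-assignment
# is what preserves the original value there.
conn.execute(
    "UPDATE changes SET project = ?, created_on = created_on WHERE id = ?",
    ("new/name", 1),
)

row = conn.execute("SELECT project, created_on FROM changes").fetchone()
print(row)  # ('new/name', '2011-07-01 12:00:00')
```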

This maintenance notification looks relevant...

http://lists.openstack.org/pipermail/openstack-dev/2012-December/003934.html

-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest Bug triage

2014-09-12 Thread David Kranz

On 09/12/2014 05:11 AM, Kashyap Chamarthy wrote:

On Thu, Sep 11, 2014 at 03:52:56PM -0400, David Kranz wrote:

So we had a Bug Day this week and the results were a bit disappointing due
to lack of participation. We went from 124 New bugs to 75.

There were also many cases where bugs referred to logs that no longer
existed. This suggests that we really need to keep up with bug triage
in real time.

Alternatively, strongly recommend that people post *contextual* logs to
the bug, so they're there for reference forever, which makes life less
painful while triaging bugs. Many times bugs are just filed in a hurry,
posting a quick bunch of logstash URLs which expire sooner or later.

Sure, posting contextual logs takes time, but as you can well imagine,
it results in higher quality reports (hopefully), and saves time for
others who have to take a fresh look at the bug and have to begin with
the maze of logs.

This would be in addition to, not alternatively. Of course better bug 
reports with as much information as possible, with understanding of how 
long log files will be retained, etc. would always be better. But due to 
the sorry state we are now in, it is simply unrealistic to expect people 
to start investigating failures in code they do not understand that are 
obviously unrelated to the code they are trying to babysit through the 
gate. I wish it were otherwise, and believe this may change as we 
achieve the goal of focusing our test time on tests that are related to 
the code being tested (in-project functional testing).


The purpose of rotating bug triage is that it was not happening at all. 
When there is a not-so-much-fun task for which everyone is responsible, 
no one is responsible. It is better to share the load in a well 
understood way and know who has taken on responsibility at any point in 
time.


 -David


--
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][neutron] tox -e py27 is not working in the latest neutron code

2014-09-12 Thread Jeremy Stanley
On 2014-09-12 10:02:49 + (+), Kelam, Koteswara Rao wrote:
 I am trying to run unit test cases in neutron code using “tox -e
 py27” but it is not working.
[...]
 'TOX_INDEX_URL': 'http://pypi.openstack.org/openstack',
[...]
 'PIP_INDEX_URL': 'http://pypi.openstack.org/openstack',
[...]
 Could not find any downloads that satisfy the requirement Paste
[...]

You have your environment misconfigured to use a mirror of PyPI
which is no longer maintained. Please use pypi.python.org or a
mirror you maintain for your own development work.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Gordon Sim

On 09/12/2014 09:50 AM, Flavio Percoco wrote:

Zaqar supports once and only once delivery.


For the transfer from Zaqar to consumers it does (providing the claim id 
can be recovered). For transfer from producers to Zaqar I believe it is 
more limited.


If the connection to Zaqar fails during a post, the sender can't tell 
whether the message was successfully enqueued or not.


It could try to determine this by browsing the entire queue looking 
for a matching body. However, that would be awkward, and in any case the 
absence of the message could mean either that it wasn't enqueued or that 
it was already consumed and deleted.


One way of handling this is to have the post return a unique url to 
which the message(s) are put or posted. The sender can then repost 
(re-put) to this in the event of failure and the server can determine 
whether it already processed the publications. Alternatively the client 
can be required to generate a unique id on which the server can 
de-duplicate.
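
The second option, server-side de-duplication on a client-generated unique
id, might look roughly like this (a sketch; `IdempotentQueue` and its
methods are invented for illustration and are not Zaqar's or ActiveMQ's
actual API):

```python
import uuid

# Sketch of de-duplication on a client-supplied id: a retried post after a
# lost response is detected and ignored, rather than enqueuing a second
# copy of the message.
class IdempotentQueue:
    def __init__(self):
        self._seen = set()    # ids of posts already processed
        self.messages = []

    def post(self, client_id, body):
        if client_id in self._seen:
            return False      # duplicate: already enqueued, safe to ignore
        self._seen.add(client_id)
        self.messages.append(body)
        return True

q = IdempotentQueue()
mid = str(uuid.uuid4())       # the client picks the id before sending
assert q.post(mid, "backup-complete") is True
# The connection dropped before the sender saw the response, so it retries:
assert q.post(mid, "backup-complete") is False
assert q.messages == ["backup-complete"]
```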


The ActiveMQ REST interface supports both of these approaches: 
http://activemq.apache.org/restful-queue.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-12 Thread Gordon Sim

On 09/11/2014 07:46 AM, Flavio Percoco wrote:

On 09/10/2014 03:18 PM, Gordon Sim wrote:

On 09/10/2014 09:58 AM, Flavio Percoco wrote:

 Other OpenStack components can integrate with Zaqar to surface events
to end users and to communicate with guest agents that run in the
over-cloud layer.


I may be misunderstanding the last sentence, but I think *direct*
integration of other OpenStack services with Zaqar would be a bad idea.

Wouldn't this be better done through oslo.messaging's notifications in
some way? and/or through some standard protocol (and there's more than
one to choose from)?

Communicating through a specific, fixed messaging system, with its own
unique protocol is actually a step backwards in my opinion, especially
for things that you want to keep as loosely coupled as possible. This is
exactly why various standard protocols emerged.



Yes and no. The answer is yes most of the time but there are use cases,
like the ones mentioned here[0], that make zaqar a good tool for the job.


I certainly wasn't saying that Zaqar is not a good tool. I was merely 
stating that - in my opinion - wiring it in as the only tool would be a 
mistake.



[0] https://etherpad.openstack.org/p/zaqar-integrated-projects-use-cases


Again, Zaqar might be great for those cases, but none of them describe 
features that are unique to Zaqar, so other solutions could also fit.


All I'm saying is that if the channel between openstack services and 
users is configurable, that will give users more choice (as well as 
operators) and that - in my opinion - would be a good thing.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Jeremy Stanley
On 2014-09-12 12:41:42 +0200 (+0200), Ihar Hrachyshka wrote:
[...]
 That said, those failures were solved by pinning the version of the
 library in openstack/requirements and those projects. This is in major
 contrast to how we handled the new testtools release just several
 weeks ago, when the problem was solved by fixing three affected
 projects because of their incorrect usage of tearDown/setUp methods.
[...]

This was of course different because it came during a period where
integrated projects are supposed to be focusing on stabilizing what
they have toward release, but also our behavior was somewhat altered
because we needed to perform some immediate damage control.

One of the side-effects of the failure mode this sqlalchemy-migrate
release induced was that each nova unit test run was generating ~0.5GiB
of log data, instantly overwhelming our test log analysis systems
and flooding our artifact archive (both in terms of bandwidth and
disk). The fastest way to stop this was to roll back what changed,
for which the options were either to introduce an exclusionary
version pin or convince the library authors to release an even newer
version tagged to the old one. We chose the first solution as it was
more directly under the control of the infrastructure and nova core
teams involved at that moment.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] Consistency of development environment

2014-09-12 Thread Alexander Vollschwitz
Hello Adrian,

thanks for the quick reply, and sorry for my delayed response.

On 09/03/2014 04:42 PM, Adrian Otto wrote:
 We have noticed lately that our devstack setup does not always work.
  [...] We have discussed ways to mitigate this. One idea was to
 select a particular devstack from a prior OpenStack release to help
 cut down on the rate of change. [...] We also considered additional
 functional tests for devstack to run when new code is submitted. I
 suppose we could run testing continuously in loops in attempts to
 detect non-determinism. [...] All of the above are opportunities for
 us to improve matters going forward. There are probably even better
 ideas we should consider as well.

I'm currently toying with the following approach: Clone all necessary
repos locally (staging repos), and configure devstack to use them (via
GIT_BASE). At fixed intervals, automatically update the staging repos,
and start the provisioning of a dev env. If that goes well, run the
Solum tests with it. If that is also successful, the state in the
staging repos gets pulled into a second set of local repos (setup
repos), which I can use for actual dev env provisioning. If things fail,
we could give it a couple of retries (for the non-determinism you
mentioned), or just wait for the next interval.

So there should (hopefully) always be a fairly recent and usable state
in the setup repos. How well this works will mostly depend on the
interval, I guess. I.e., the shorter the interval, the higher the chance
for filtering out a usable state. Of course, if things get broken
permanently, the state in the setup repos would no longer advance and
we'd need to look into the cause and take actions (see example below).

This approach doesn't really solve the root cause, but could make
setting up a dev env a bit more reliable. I got the first part in place,
i.e. provisioning from staging repos. I'll now experiment with the
second part, and let you know.
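
For what it's worth, the control flow of that promotion loop is roughly the
following sketch. The step callables and the retry count are placeholders,
not actual Solum or devstack tooling; a real deployment would shell out to
git, the devstack provisioning script, and the Solum test suite:

```python
# Minimal sketch of the staging -> setup promotion pipeline: only a state
# that both provisions and passes tests is copied into the setup repos.
def promote_if_green(update, provision, run_tests, promote, retries=2):
    update()                      # refresh the staging repos
    for _ in range(retries + 1):  # a few retries for non-deterministic failures
        if provision() and run_tests():
            promote()             # publish the green state to the setup repos
            return True
    return False                  # setup repos stay at the last good state

# Stub wiring just to show the control flow:
log = []
ok = promote_if_green(
    update=lambda: log.append("fetch staging"),
    provision=lambda: True,
    run_tests=lambda: True,
    promote=lambda: log.append("promote to setup"),
)
print(ok, log)  # True ['fetch staging', 'promote to setup']
```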


 For now, we would like to help you get past the friction you are
 experiencing so you can get a working environment up.

I could resolve the two problems I mentioned manually and get the dev
env working. However, this brings up a question:

 after devstack provisioned OS, q-dhcp and q-l3 were not running.
 The former refused to start due to an updated version requirement
 for dnsmasq (see
 https://bugs.launchpad.net/openstack-manuals/+bug/1347153) that
 was not met

This problem is also described here:
http://netapp.github.io/openstack/2014/08/15/manila-devstack/

While I installed dnsmasq 2.63 manually, they used Ubuntu 14.04 to get
around the problem. Is it maybe time to upgrade the base for the dev env
to 14.04, or would that cause many other problems? Have you tried that
out? If you have a pointer to an appropriate 14.04 image I can configure
in Vagrant, I'd like to play with that a bit. Maybe that would also
solve the problem with openvswitch.

Kind regards,

Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Eoghan Glynn


 I visited the Paris Design Summit space on Monday and confirmed that it
 should be possible to split it in a way that would allow to have
 per-program contributors meetups on the Friday. The schedule would go
 as follows:
 
 Tuesday: cross-project workshops
 Wednesday, Thursday: traditional scheduled slots
 Friday: contributors meetups
 
 We'll also have pods available all 4 days for more ad-hoc small meetings.

Excellent :)

 In the mean time, we need to discuss how we want to handle the selection
 of session topics.
 
 In past summits we used a Design-Summit-specific session suggestion
 website, and PTLs would approve/deny them. This setup grew less and less
 useful: session topics were selected collaboratively on etherpads,
 discussed in meetings, and finally filed/reorganized/merged on the
 website just before scheduling. Furthermore, with even less scheduled
 slots, we would have to reject most of the suggestions, which is more
 frustrating for submitters than the positive experience of joining team
 meetings to discuss which topics are the most important. Finally, topics
 will need to be split between scheduled sessions and the contributors
 meetup agenda, and that's easier to do on an Etherpad anyway.
 
 This is why I'd like to suggest that all programs use etherpads to
 collect important topics, select which ones would get in the very few
 scheduled slots we'll have left, which will get discussed in the
 contributors meetup, and which are better left for a pod discussion.
 I suggest we all use IRC team meetings to collaboratively discuss that
 content between interested contributors.
 
 To simplify the communication around this, I tried to collect the
 already-announced etherpads on a single page at:
 
 https://wiki.openstack.org/wiki/Summit/Planning
 
 Please add any that I missed !
 
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.

+1 on a collaborative scheduling process within each project.

That's pretty much what we did within the ceilometer core group for
the Juno summit, except that we used a googledocs spreadsheet instead
of an etherpad.

So I don't think we need to necessarily mandate usage of an etherpad,
just let every project decide whatever shared document format they
want to use.

FTR the benefit of a googledocs spreadsheet in my view would include
the ease of totalling votes  sessions slots, color-coding candidate
sessions for merging etc.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Brad Topol
+1!!! This is awesome.  I *always* ran into this and was about to get
"find . -name '*.pyc' -delete" tattooed on the inside of my forearm. Now I don't
have to.  Thanks!!!

--Brad



Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Sean Dague s...@dague.net
To: openstack-dev@lists.openstack.org 
openstack-dev@lists.openstack.org, 
Date:   09/12/2014 07:40 AM
Subject: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in
tox.ini



I assume you, gentle OpenStack developers, often find yourself in a hair
tearing out moment of frustration about why local unit tests are doing
completely insane things. The code that it is stack tracing on is
nowhere to be found, and yet it fails.

And then you realize that part of oslo doesn't exist any more
except there are still pyc files laying around. Gah!

I've proposed the following to Nova and Python novaclient -
https://review.openstack.org/#/c/121044/

Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.

This prevents pyc files from being written in your git tree (win!). It
doesn't seem to impact what pip installs... and if anyone knows how to
prevent those pyc files from getting created, that would be great.
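For anyone wanting to try this locally, the change amounts to a one-line
setenv entry. A minimal sketch follows; the surrounding [testenv] values
are illustrative placeholders, not Nova's actual tox config, and note that
Python treats any non-empty value ("true" or "1") as enabling the flag:

```ini
# Illustrative tox.ini fragment. "setenv" makes tox export the variable
# into each test environment, so the interpreter stops writing .pyc
# files for modules imported from the source tree during test runs.
[testenv]
setenv =
    PYTHONDONTWRITEBYTECODE=1
commands =
    python -m testtools.run discover {posargs}
```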

But it's something which will hopefully reduce the perceived
fragility of the system for developers.

 -Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [solum] Consistency of development environment

2014-09-12 Thread Paul Czarkowski


While I installed dnsmasq 2.63 manually, they used Ubuntu 14.04 to get
around the problem. Is it maybe time to upgrade the base for the dev env
to 14.04, or would that cause many other problems? Have you tried that
out? If you have a pointer to an appropriate 14.04 image I can configure
in Vagrant, I'd like to play with that a bit. Maybe that would also
solve the problem with openvswitch.


We have actually recently upgraded our Vagrant environment to 14.04 so if
you pull master from https://github.com/rackerlabs/vagrant-solum-dev
you should get a working 14.04 instance.

We couldn't upgrade to 14.04 as quickly as we hoped as there is a bug that
we had to resolve with ubuntu system packages -
https://bugs.launchpad.net/solum/+bug/1365679




Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 7:20 AM, Sean Dague s...@dague.net wrote:

 
 Because we are in Feature Freeze. Now is the time for critical bug fixes
 only, as we start to stabilize the tree. Releasing dependent libraries
 that can cause breaks, for whatever reason, should be soundly avoided.
 
 If this was August, fine. But it's feature freeze.

I agree with this; changing the MySQL driver now is not an option. That
train has left the station. I think it’s better we all take the whole Kilo
cycle to get used to mysql-connector and its quirks before launching it on the
world, as there will be many more.

However, for Kilo, I think those “COMMIT” phrases should be removed, and overall
we need to make a very hard and fast rule that we *do not put multiple
statements in an execute*. I’ve seen a bunch of these come through so far,
and for some of them (mostly the in-Python ones) it seems like the underlying
reason is a lack of understanding of what exactly a SQLAlchemy “Engine” is and
what features it supports.

So first, let me point folks to the documentation for this, which anyone 
writing code involving Engine objects should read first:

http://docs.sqlalchemy.org/en/rel_0_9/core/connections.html

Key to this is that while engine supports an “.execute()” method, in order to 
do anything that intends to work on a single connection and typically a single 
transaction, you procure a Connection and usually a Transaction from the 
Engine, most easily like this:

with engine.begin() as conn:
    conn.execute(statement_1)
    conn.execute(statement_2)
    conn.execute(statement_3)
    # ... etc.
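To make that pattern concrete, here is a self-contained version of the sketch
above. It uses an in-memory SQLite engine and text() wrappers purely for
illustration; the point is the transactional scope of engine.begin(), not the
particular statements:

```python
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for MySQL; the pattern is identical.
engine = create_engine("sqlite://")

# engine.begin() procures a Connection and opens a transaction:
# all three statements run on the same connection, and the
# transaction commits when the block exits (or rolls back on error).
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO t (name) VALUES ('a')"))
    conn.execute(text("INSERT INTO t (name) VALUES ('b')"))

with engine.begin() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM t")).scalar()

print(count)  # prints 2
```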


Now let me apologize for the reason this misunderstanding exists in the first
place: it’s because in 2005 I put the “.execute()” convenience method on the
Engine itself (well, in fact we didn’t have the Engine/Connection dichotomy back
then), and I also thought that “implicit execution”, e.g. statement.execute(),
would be a great idea. Tons of other people still think it’s a great idea,
and even though I’ve buried this whole thing in the docs, they still use it
like candy… until they have the need to control the scope of connectivity.

*Huge* mistake, it’s my fault, but not something that can really be changed
now. Also, in 2005, Python didn’t have context managers, so we have all
kinds of klunky patterns like “trans = conn.begin()”, kind of J2EE style, etc.,
but these days, the above pattern is your best bet when you want to invoke
multiple statements. engine.execute() overall should just be avoided, as it
only leads to misunderstanding. When we all move all of our migrate stuff to
Alembic, there won’t be an Engine provided to a migration script; it will be a
Connection to start with.






Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Ihar Hrachyshka

On 12/09/14 16:33, Mike Bayer wrote:
 I agree with this, changing the MySQL driver now is not an option.

That was not the proposal. The proposal was to introduce support to
run against something different from MySQLdb + a gate job for that
alternative. The next cycle was supposed to do thorough regression
testing, benchmarking, etc. to decide whether we're ok to recommend
that alternative to users.

/Ihar



Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-12 Thread Jiří Stránský

On 9.9.2014 20:32, Gregory Haynes wrote:

Hello everyone!

I have been working on a meta-review of StevenK's reviews and I would
like to propose him as a new member of our core team.


+1



As I'm sure many have noticed, he has been above our stats requirements
for several months now. More importantly, he has been reviewing a wide
breadth of topics and seems to have a strong understanding of our code
base. He also seems to be doing a great job at providing valuable
feedback and being attentive to responses on his reviews.

As such, I think he would make a great addition to our core team. Can
the other core team members please reply with your votes if you agree or
disagree.

Thanks!
Greg





Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-12 Thread Victoria Martínez de la Cruz
2014-09-12 10:44 GMT-03:00 Gordon Sim g...@redhat.com:

 On 09/11/2014 07:46 AM, Flavio Percoco wrote:

 On 09/10/2014 03:18 PM, Gordon Sim wrote:

 On 09/10/2014 09:58 AM, Flavio Percoco wrote:

  Other OpenStack components can integrate with Zaqar to surface
 events
 to end users and to communicate with guest agents that run in the
 over-cloud layer.


 I may be misunderstanding the last sentence, but I think *direct*
 integration of other OpenStack services with Zaqar would be a bad idea.

 Wouldn't this be better done through olso.messaging's notifications in
 some way? and/or through some standard protocol (and there's more than
 one to choose from)?

 Communicating through a specific, fixed messaging system, with its own
 unique protocol is actually a step backwards in my opinion, especially
 for things that you want to keep as loosely coupled as possible. This is
 exactly why various standard protocols emerged.


 Yes and no. The answer is yes most of the time but there are use cases,
 like the ones mentioned here[0], that make zaqar a good tool for the job.


 I certainly wasn't saying that Zaqar is not a good tool. I was merely
 stating that - in my opinion - wiring it in as the only tool would be a
 mistake.


Fair enough. Zaqar is just one of the possibilities and it's crafted to
work with OpenStack. If users prefer to use a different tool, it's totally
fine. I guess that operators will choose what best fits their needs.



  [0] https://etherpad.openstack.org/p/zaqar-integrated-projects-use-cases


 Again, Zaqar might be great for those cases, but none of them describe
 features that are unique to Zaqar, so other solutions could also fit.


 All I'm saying is that if the channel between openstack services and users
 is configurable, that will give users more choice (as well as operators)
 and that - in my opinion - would be a good thing.


Re: [openstack-dev] [metrics] Old reviews (2011) with strange uploaded dates in review.openstack.org

2014-09-12 Thread Daniel Izquierdo

On 12/09/14 15:38, Jeremy Stanley wrote:

On 2014-09-12 11:54:19 +0200 (+0200), Daniel Izquierdo wrote:
[...]

And a question: was there a migration around the 2012-12-16 of the
review system or some other noticeable event?. On such date there
was around 1,200 submitted reviews, while around those days, in
mean, there are some dozens of them.

We discovered that Gerrit configures datestamps in most of its
tables to reset on row updates (a particularly insane design choice
for things like a created_on column). Before we realized this,
intrusive maintenance activities--most notably project renames--were
mass-resetting the creation dates of changes and comments to the
date and time we ran the necessary update queries. Now we
special-case those fields in our update queries to forcibly reset
them to themselves so that they retain their original values, but at
this point there's no easy way to go back and fix the ones we did
before we noticed this unfortunate loss of date/time information.
That makes total sense, thanks a lot for the info! Then we should try
to avoid those reviews when calculating time to review and other
time-based metrics.




This maintenance notification looks relevant...

http://lists.openstack.org/pipermail/openstack-dev/2012-December/003934.html
Oops, thanks for the pointer. It's exactly that date (I didn't check the
infra mailing list for that exact date, my fault u_u).


Thanks a lot!

Regards,
Daniel.






--
Daniel Izquierdo Cortazar, PhD
Chief Data Officer
-
Software Analytics for your peace of mind
www.bitergia.com
@bitergia




Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Russell Bryant
On 09/12/2014 07:37 AM, Thierry Carrez wrote:
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.

I think this is fine, especially if it's a better reflection of reality
and lets the teams work more efficiently.

However, one of the benefits of the old submission system was the
clarity of the process and openness to submissions from anyone.  We
don't want to be in a situation where non-core folks feel like they have
a harder time submitting a session.

Once this is settled, as long as the wiki pages [1][2] reflect the
process and are publicized, it should be fine.

[1] https://wiki.openstack.org/wiki/Summit
[2] https://wiki.openstack.org/wiki/Summit/Planning

-- 
Russell Bryant



Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Flavio Percoco
On 09/12/2014 01:56 PM, Flavio Percoco wrote:
 On 09/12/2014 11:36 AM, Thierry Carrez wrote:
 Flavio Percoco wrote:
 On 09/12/2014 12:14 AM, Zane Bitter wrote:
 The final question is the one of arbitrary access to messages in the
 queue (or queue if you prefer). Flavio indicated that this effectively
 came for free with their implementation of Pub-Sub. IMHO it is
 unnecessary and limits the choice of potential back ends in the future.
 I would personally be +1 on removing it from the v2 API, and also +1 on
 the v2 API shipping in Kilo so that as few new adopters as possible get
 stuck with the limited choices of back-end. I hope that would resolve
 Clint's concerns that we need a separate, light-weight queue system; I
 personally don't believe we need two projects, even though I agree that
 all of the use cases I personally care about could probably be satisfied
 without Pub-Sub.

 Right, being able to support other backends is one of the reasons we're
 looking forward to remove the support for arbitrary access to messages.
 As of now, the plan is to remove that endpoint unless a very good use
 case comes up that makes supporting other backends not worth it, which I
 doubt. The feedback from Zaqar's early adopters is that the endpoint is
 indeed not useful.

 Thanks Zane, that was indeed useful. I agree with you it would be better
 to avoid needing 2 separate projects for such close use cases.
 
 +1
 
 Let's assume we remove arbitrary access to messages in v2. When you say
 it would remove limits on the choice of potential backends, does that
 mean we could have a pure queue backend (like RabbitMQ), at least in
 theory ? Would a ZaqarV2 address all of Clint and Devananda's concerns
 about queue semantics ? If yes, then the graduation question becomes,
 how likely is that work to be completed early enough in Kilo.

 If it's a no-brainer and takes a week to sort out, I think we could
 approve Zaqar's Kilo graduation, even if that stretches the no major
 API rewrite planned requirement.
 
 Let me break the above down into several points so we can discuss them
 separately:
 
 - Removing that endpoint won't take more than a week. It's an API change
 and it won't affect the existing storage drivers.

For the sake of discussion and to provide more info on this point, I've
done this (there are still some tests to clean-up but that's basically
all that's required):

https://review.openstack.org/#/c/121141/

Flavio

 
 - Removing that endpoint will certainly make the adoption of other
 messaging technologies easier but there are other things to consider
 besides that specific endpoint (some of them were stated here[0]). In
 any case, removing the endpoint definitely makes it easier.
 
 - Besides the random access to messages, I'm not clear what other
 concerns there are with regards the current semantics. It'd be nice if
 we could recollect them in this section and discuss them. I took a look
 at the other emails in this thread and it seems to me that the concerns
 that have been raised are more oriented to the project scope and
 use-cases. I also looked at the meeting logs again[1] and the only
 concern related to the semantics I found is about the
 `get-message-by-id` endpoint. Please, correct me if I'm wrong.
 
 
 [0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/
 [1]
 http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-09-20.01.log.html
 
 Flavio
 
 But if we think this needs careful discussion so that the v2 API design
 (and backend support) satisfies the widest set of users, then incubating
 for another cycle while v2 is implemented seems like the right course of
 action. We shouldn't graduate if there is any risk we would end up with
 ZaqarV1 in Kilo, and then have to deprecate it for n cycles just because
 it was shipped in the official release and therefore inherits its API
 deprecation rules.

 Regards,

 
 


-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Doug Hellmann

On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:

 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.
 
 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!
 
 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/
 
 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.
 
 This prevents pyc files from being written in your git tree (win!). It
 doesn't seem to impact what pip installs... and if anyone knows how to
 prevent those pyc files from getting created, that would be great.
 
 But it's something which will hopefully reduce the perceived
 fragility of the system for developers.
 
   -Sean

I also use git-hooks with a post-checkout script to remove pyc files any time I 
change between branches, which is especially helpful if the different branches 
have code being moved around:

git-hooks: https://github.com/icefox/git-hooks

The script:

$ cat ~/.git_hooks/post-checkout/remove_pyc
#!/bin/sh
echo Removing pyc files from `pwd`
find . -name '*.pyc' | xargs rm -f
exit 0

 
 -- 
 Sean Dague
 http://dague.net
 




[openstack-dev] battling stale .pyc files

2014-09-12 Thread Mike Bayer
I’ve just found https://bugs.launchpad.net/nova/+bug/1368661, “Unit tests
sometimes fail because of stale pyc files”.

The issue as stated in the report refers to the phenomenon of .pyc files that 
remain inappropriately, when switching branches or deleting files.

Specifically, the kind of scenario that in my experience causes this looks like 
this.  One version of the code has a setup like this:

   mylibrary/mypackage/somemodule/__init__.py

Then, a different version we switch to changes it to this:

   mylibrary/mypackage/somemodule.py

But somemodule/__init__.pyc will still be sitting around, and then things break 
- the Python interpreter skips the module (or perhaps the other way around. I 
just ran a test by hand and it seems like packages trump modules in Python 2.7).

This is an issue for sure, however the fix that is proposed I find alarming, 
which is to use the PYTHONDONTWRITEBYTECODE=1 flag written directly into the 
tox.ini file to disable *all* .pyc file writing, for all environments 
unconditionally, both human and automated.

I think that approach is a mistake.  .pyc files have a definite effect on the 
behavior of the interpreter.   They can, for example, be the factor that causes 
a dictionary to order its elements in one way versus another;  I’ve had many 
relying-on-dictionary-ordering issues (which, make no mistake, are bugs) smoked
out by the fact that a .pyc file would reveal the issue. .pyc files also
naturally have a profound effect on performance.   I’d hate for the Openstack 
community to just forget that .pyc files ever existed, our tox.ini’s safely 
protecting us from them, and then we start seeing profiling results getting 
published that forgot to run the Python interpreter in its normal state of
operation.  If we put this flag into every tox.ini, it means the totality of 
openstack testing will not only run more slowly, it also means our code will 
never be run within the Python runtime environment that will actually be used 
when code is shipped.   The Python interpreter is incredibly stable and 
predictable and a small change like this is hardly something that we’d usually 
notice…until something worth noticing actually goes wrong, and automated 
testing is where that should be found, not after shipment.

The issue of the occasional unmatched .pyc file whose name happens to still be 
imported by the application is not that frequent, and can be solved by just 
making sure unmatched .pyc files are deleted ahead of time. I’d favor a
utility such as in oslo.utils which performs this simple step of finding all 
unmatched .pyc files and deleting (taking care to be aware of __pycache__ / 
pep3147), and can be invoked from tox.ini as a startup command.
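As a rough sketch of what such a helper could look like (a hypothetical
function, not an existing oslo.utils API; for brevity it ignores the PEP 3147
__pycache__ layout, which maps names differently):

```python
import os

def remove_orphaned_pyc(root):
    """Delete .pyc files whose corresponding .py source is gone.

    Hypothetical helper in the spirit of the proposal above; it only
    handles the classic foo.pyc-next-to-foo.py layout, not PEP 3147
    __pycache__ directories.
    """
    removed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".pyc"):
                continue
            pyc_path = os.path.join(dirpath, name)
            py_path = pyc_path[:-1]  # strip the trailing "c"
            if not os.path.exists(py_path):
                os.remove(pyc_path)
                removed.append(pyc_path)
    return removed
```

Invoked from tox.ini as a startup command, this would clear only the stale
files while leaving valid bytecode caches in place.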

But guess what - suppose you totally disagree and you really want to not have 
any .pyc files in your dev environment.   Simple!  Put 
PYTHONDONTWRITEBYTECODE=1 into *your* environment - it doesn’t need to be in 
tox.ini, just stick it in your .profile.   Let’s put it up on the wikis, let’s 
put it into the dev guides, let’s go nuts.   Banish .pyc files from your 
machine all you like.   But let’s *not* do this on our automated test 
environments, and not force it to happen in *my* environment. 

I also want to note that the issue of stale .pyc files should only apply to 
within the library subject to testing as it lives in its source directory.  
This has nothing to do with the packages that are installed under .tox as those 
are full packages, unless there’s some use case I’m not aware of (possible), we 
don’t checkout code into .tox nor do we manipulate files there as a matter of 
course.

Just my 2.5c on this issue as to the approach I think is best.   Leave the 
Python interpreter’s behavior as close to “normal” as possible in our default
test environment.


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Doug Hellmann

On Sep 12, 2014, at 9:23 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Signed PGP part
 On 12/09/14 13:20, Sean Dague wrote:
  On 09/12/2014 06:41 AM, Ihar Hrachyshka wrote:
  Some updates/concerns/questions.
 
  The status of introducing a new driver to gate is:
 
  - all the patches for mysql-connector are merged in all
  projects; - all devstack patches to support switching the driver
  are merged; - new sqlalchemy-migrate library is released;
 
  - version bump is *not* yet done; - package is still *not* yet
  published on pypi; - new gate job is *not* yet introduced.
 
  The new sqlalchemy-migrate release introduced unit test failures
  in those three projects: nova, cinder, glance.
 
  On technical side of the failure: my understanding is that those
  projects that started to fail assume too much about how those
  SQL scripts are executed. They assume they are executed in one
  go, they also assume they need to open and commit transaction on
  their own. I don't think this is something to be fixed in
  sqlalchemy-migrate itself. Instead, simple removal of those
  'BEGIN TRANSACTION; ... COMMIT;' statements should just work and
  looks like a sane thing to do anyway. I've proposed the following
  patches for all three projects to handle it [1].
 
  That said, those failures were solved by pinning the version of
  the library in openstack/requirements and those projects. This is
  in major contrast to how we handled the new testtools release
  just several weeks ago, when the problem was solved by fixing
  three affected projects because of their incorrect usage of
  tearDown/setUp methods.
 
  Even more so, those failures seem to trigger the resolution to
  move the enable-mysql-connector oslo spec to Kilo, while the
  library version bump is the *only* change missing codewise (we
  will also need a gate job description, but that doesn't touch
  codebase at all). The resolution looks too prompt and ungrounded
  to me. Is it really that gate failure for three projects that
  resulted in it, or there are some other hidden reasons behind it?
  Was it discussed anywhere? If so, I wasn't given a chance to
  participate in that discussion; I suspect another supporter of
  the spec (Angus Lees) was not involved either.
 
  Not allowing those last pieces of the spec in this cycle, we
  just postpone start of any realistic testing of the feature for
  another half a year.
 
  Why do we block new sqlalchemy-migrate and the spec for another
  cycle instead of fixing the affected projects with *primitive*
  patches like we did for new testtools?
 
  Because we are in Feature Freeze. Now is the time for critical bug
  fixes only, as we start to stabilize the tree. Releasing dependent
  libraries that can cause breaks, for whatever reason, should be
  soundly avoided.
 
  If this was August, fine. But it's feature freeze.
 
 I probably missed the fact that we are so strict now that we don't
 allow tiny missing bits to go in. In my defense, I was offline for
 the last three weeks or so. I was a bit misled by the fact that I was
 approached by an oslo core very recently on which remaining bits we
 need to push before claiming the spec to be complete, and I assumed it
 means that we are free to complete the work this cycle. Otherwise, I
 wouldn't push for the new library version in the first place.

I suspect you’re referring to me, there. I believed the work was ready to be 
wrapped up. I’m sorry my misunderstanding led to the issues.

 
 Anyway, I guess there is no way now to get remaining bits in Juno,
 even if small, and we're doomed to postpone them to Kilo.

I think we’re only looking at a couple of weeks delay. During that time we can 
work on fixing the problem. I don’t think we will want to retroactively change 
the migration scripts (that’s not something we generally like to do), so we 
should look at changes needed to make sqlalchemy-migrate deal with them (by 
ignoring them, or working around the errors, or whatever).
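To illustrate one of those options, here is a hypothetical pre-processing
step (not an actual sqlalchemy-migrate API) that drops bare BEGIN/COMMIT
statements from a migration script before the remaining statements are
executed one by one:

```python
import re

# Matches a statement that is nothing but BEGIN [TRANSACTION] or COMMIT.
_TXN_RE = re.compile(r"^\s*(BEGIN(\s+TRANSACTION)?|COMMIT)\s*$", re.IGNORECASE)

def strip_txn_statements(sql):
    """Remove bare BEGIN/COMMIT from a migration script.

    Hypothetical sketch: it naively splits on ";" (so it assumes no
    literal semicolons inside string constants), since the point is
    only to let the migration library manage transactions itself.
    """
    cleaned = [s.strip() for s in sql.split(";")
               if s.strip() and not _TXN_RE.match(s)]
    return ";\n".join(cleaned) + (";" if cleaned else "")
```

Applied to a script like "BEGIN TRANSACTION; CREATE TABLE ...; COMMIT;",
only the CREATE TABLE statement would survive, leaving transaction scope
to the library.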

Doug

 
 Thanks for the explanation,
 /Ihar
 
 




Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:

 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.
 
 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!
 
 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/
 
 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.

my VPN was down and I didn’t see this thread until just now, but I am strongly -1 on
adding this to tox.ini; my response is
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.

Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into *your* 
environment.  Don’t force it on our automated tests or on my environment.   
.pyc files make a difference in behavior, and if we banish them from all 
testing, then our code is never tested within the environment that it will 
normally be run in after shipment.

I’d far prefer a simple script added to tox.ini which deletes orphaned .pyc 
files only, if a change to tox.ini must be made.






Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Sean Dague
On 09/12/2014 11:21 AM, Mike Bayer wrote:
 
 On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:
 
 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.

 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!

 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/

 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.
 
 my VPN was down and I didn’t get this thread just now, but I am strongly -1 
 on this as added to tox.ini, my response is 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.
 
 Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into 
 *your* environment.  Don’t force it on our automated tests or on my 
 environment.   .pyc files make a difference in behavior, and if we banish 
 them from all testing, then our code is never tested within the environment 
 that it will normally be run in after shipment.
 
 I’d far prefer a simple script added to tox.ini which deletes orphaned .pyc 
 files only, if a change to tox.ini must be made.

Your example in the other thread includes the random seed behavior,
which is already addressed in new tox. So I don't see that as an issue.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Sean Dague
On 09/12/2014 11:19 AM, Doug Hellmann wrote:
 
 On Sep 12, 2014, at 9:23 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 Signed PGP part
 On 12/09/14 13:20, Sean Dague wrote:
 On 09/12/2014 06:41 AM, Ihar Hrachyshka wrote:
 Some updates/concerns/questions.

 The status of introducing a new driver to gate is:

 - all the patches for mysql-connector are merged in all
 projects; - all devstack patches to support switching the driver
 are merged; - new sqlalchemy-migrate library is released;

 - version bump is *not* yet done; - package is still *not* yet
 published on pypi; - new gate job is *not* yet introduced.

 The new sqlalchemy-migrate release introduced unit test failures
 in those three projects: nova, cinder, glance.

 On technical side of the failure: my understanding is that those
 projects that started to fail assume too much about how those
 SQL scripts are executed. They assume they are executed in one
 go, they also assume they need to open and commit transaction on
 their own. I don't think this is something to be fixed in
 sqlalchemy-migrate itself. Instead, simple removal of those
 'BEGIN TRANSACTION; ... COMMIT;' statements should just work and
 looks like a sane thing to do anyway. I've proposed the following
 patches for all three projects to handle it [1].

 That said, those failures were solved by pinning the version of
 the library in openstack/requirements and those projects. This is
 in major contrast to how we handled the new testtools release
 just several weeks ago, when the problem was solved by fixing
 three affected projects because of their incorrect usage of
 tearDown/setUp methods.

 Even more so, those failures seem to trigger the resolution to
 move the enable-mysql-connector oslo spec to Kilo, while the
 library version bump is the *only* change missing codewise (we
 will also need a gate job description, but that doesn't touch
 codebase at all). The resolution looks too prompt and ungrounded
 to me. Is it really that gate failure for three projects that
 resulted in it, or are there some other hidden reasons behind it?
 Was it discussed anywhere? If so, I wasn't given a chance to
 participate in that discussion; I suspect another supporter of
 the spec (Angus Lees) was not involved either.

 Not allowing those last pieces of the spec in this cycle, we
 just postpone start of any realistic testing of the feature for
 another half a year.

 Why do we block new sqlalchemy-migrate and the spec for another
 cycle instead of fixing the affected projects with *primitive*
 patches like we did for new testtools?

 Because we are in Feature Freeze. Now is the time for critical bug
 fixes only, as we start to stabilize the tree. Releasing dependent
 libraries that can cause breaks, for whatever reason, should be
 soundly avoided.

 If this was August, fine. But it's feature freeze.

 I probably missed the fact that we are so strict now that we don't
 allow tiny missing bits to go in. In my defense, I was offline for
 about the last three weeks. I was a bit misled by the fact that I was
 approached by an oslo core very recently on which remaining bits we
 need to push before claiming the spec to be complete, and I assumed it
 means that we are free to complete the work this cycle. Otherwise, I
 wouldn't push for the new library version in the first place.
 
 I suspect you’re referring to me, there. I believed the work was ready to be 
 wrapped up. I’m sorry my misunderstanding led to the issues.
 

 Anyway, I guess there is no way now to get remaining bits in Juno,
 even if small, and we're doomed to postpone them to Kilo.
 
 I think we’re only looking at a couple of weeks delay. During that time we 
 can work on fixing the problem. I don’t think we will want to retroactively 
 change the migration scripts (that’s not something we generally like to do), 
 so we should look at changes needed to make sqlalchemy-migrate deal with them 
 (by ignoring them, or working around the errors, or whatever).

Yes, please, that would be highly appreciated. That kind of backwards-compat
guarantee is a big part of why we took over migrate as a project in the
first place.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 10:40 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Signed PGP part
 On 12/09/14 16:33, Mike Bayer wrote:
 I agree with this, changing the MySQL driver now is not an option.
 
 That was not the proposal. The proposal was to introduce support to
 run against something different from MySQLdb + a gate job for that
 alternative. The next cycle was supposed to do thorough regression
 testing, benchmarking, etc. to decide whether we're ok to recommend
 that alternative to users.

ah, well that is a great idea.  But we can have that throughout Kilo anyway, 
why not?







Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 11:24 AM, Sean Dague s...@dague.net wrote:

 On 09/12/2014 11:21 AM, Mike Bayer wrote:
 
 On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:
 
 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.
 
 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!
 
 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/
 
 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.
 
 my VPN was down and I didn’t get this thread just now, but I am strongly -1 
 on this as added to tox.ini, my response is 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.
 
 Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into 
 *your* environment.  Don’t force it on our automated tests or on my 
 environment.   .pyc files make a difference in behavior, and if we banish 
 them from all testing, then our code is never tested within the environment 
 that it will normally be run in after shipment.
 
 I’d far prefer a simple script added to tox.ini which deletes orphaned .pyc 
 files only, if a change to tox.ini must be made.
 
 Your example in the other thread includes the random seed behavior,
 which is already addressed in new tox. So I don't see that as an issue.

Will these patches all be accompanied by corresponding PYTHONHASHSEED settings? 
  Also why don’t you want to place PYTHONDONTWRITEBYTECODE into your own 
environment?  I don’t want this flag on my machine.
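[For readers following the thread: the tox behavior Sean refers to is the random PYTHONHASHSEED handling added in tox 1.7, which exports a fresh random seed to each test run. A project that wants a fixed seed instead can pin it in tox.ini — a minimal sketch (section layout assumed, not taken from any particular project):]

```ini
[testenv]
# tox >= 1.7 passes a random PYTHONHASHSEED to every test run by default.
# Pinning it restores reproducible hash ordering while debugging:
setenv =
    PYTHONHASHSEED = 0
```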







Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive dep

2014-09-12 Thread Brant Knudson
On Thu, Sep 11, 2014 at 2:17 AM, Thomas Goirand z...@debian.org wrote:


 On my side (as the Debian package maintainer of OpenStack), I was more
 than happy to see that Ceilometer made the choice to use a Python module
 for memcache which supports Python 3. Currently python-memcache does
 *not* support Python 3. It's in fact standing in the way to add Python 3
 compatibility to *a lot* of the OpenStack packages, because this
 directly impact python-keystoneclient, which is a (build-)dependency of
 almost everything.


Thomas -

python-keystoneclient should no longer have a hard dependency on
python-memcache(d). The auth_token middleware which can use memcache has
been moved into the keystonemiddleware repo (a copy is left in
keystoneclient only for backwards-compatibility). If python-keystoneclient
still has a dependency on python-memcache then we're doing something wrong
and should be able to fix it.

- Brant


Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 11:33 AM, Mike Bayer mba...@redhat.com wrote:

 
 On Sep 12, 2014, at 11:24 AM, Sean Dague s...@dague.net wrote:
 
 On 09/12/2014 11:21 AM, Mike Bayer wrote:
 
 On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:
 
 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.
 
 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!
 
 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/
 
 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.
 
 my VPN was down and I didn’t get this thread just now, but I am strongly -1 
 on this as added to tox.ini, my response is 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.
 
 Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into 
 *your* environment.  Don’t force it on our automated tests or on my 
 environment.   .pyc files make a difference in behavior, and if we banish 
 them from all testing, then our code is never tested within the environment 
 that it will normally be run in after shipment.
 
 I’d far prefer a simple script added to tox.ini which deletes orphaned .pyc 
 files only, if a change to tox.ini must be made.
 
 Your example in the other thread includes the random seed behavior,
 which is already addressed in new tox. So I don't see that as an issue.
 
 Will these patches all be accompanied by corresponding PYTHONHASHSEED 
 settings?   Also why don’t you want to place PYTHONDONTWRITEBYTECODE into 
 your own environment?  I don’t want this flag on my machine.

not to mention PYTHONHASHSEED only works on Python 3.  What is the issue in tox 
you’re referring to?






Re: [openstack-dev] [Openstack-dev][Cinder] FFE request for adding Huawei SDSHypervisor driver and connector

2014-09-12 Thread Thierry Carrez
Zhangni wrote:
 I'd like to request an Juno feature freeze exception for this blueprint
 and Spec:
 
 https://blueprints.launchpad.net/cinder/+spec/huawei-sdshypervisor-driver
 
 https://review.openstack.org/#/c/101688/
 
 as implemented by the following patch:
 
 https://review.openstack.org/#/c/108609

I would say it's way too late at this point for a new driver in Juno. At
this point we should be focused on fixing what we already have, not add
more surface.

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Anita Kuno
On 09/12/2014 07:37 AM, Thierry Carrez wrote:
 Hi everyone,
 
 I visited the Paris Design Summit space on Monday and confirmed that it
 should be possible to split it in a way that would allow us to have
 per-program contributors meetups on the Friday. The schedule would go
 as follows:
 
 Tuesday: cross-project workshops
 Wednesday, Thursday: traditional scheduled slots
 Friday: contributors meetups
 
 We'll also have pods available all 4 days for more ad-hoc small meetings.
 
 In the mean time, we need to discuss how we want to handle the selection
 of session topics.
 
 In past summits we used a Design-Summit-specific session suggestion
 website, and PTLs would approve/deny them. This setup grew less and less
 useful: session topics were selected collaboratively on etherpads,
 discussed in meetings, and finally filed/reorganized/merged on the
 website just before scheduling. Furthermore, with even less scheduled
 slots, we would have to reject most of the suggestions, which is more
 frustrating for submitters than the positive experience of joining team
 meetings to discuss which topics are the most important. Finally, topics
 will need to be split between scheduled sessions and the contributors
 meetup agenda, and that's easier to do on an Etherpad anyway.
 
 This is why I'd like to suggest that all programs use etherpads to
 collect important topics, select which ones would get in the very few
 scheduled slots we'll have left, which will get discussed in the
 contributors meetup, and which are better left for a pod discussion.
 I suggest we all use IRC team meetings to collaboratively discuss that
 content between interested contributors.
 
 To simplify the communication around this, I tried to collect the
 already-announced etherpads on a single page at:
 
 https://wiki.openstack.org/wiki/Summit/Planning
 
 Please add any that I missed !
 
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.
 
 Regards,
 
Thanks Thierry,

This looks like it should shape up to be a nice buffet of formats for us
to evaluate and then provide feedback on what works best for whom at the
wrap-up (which I believe will now be on the mailing list after the summit).

My question involves third party discussions. Now I know at least
Neutron is going to have a chat about drivers which involves third party
ci accounts as a supportive aspect of that discussion, but I am
wondering about the framework for a discussion which I can recommend
attendees of the third party meetings to attend. They are shaping up to
be an attentive, forward thinking group and are supporting each other
which I was hoping for from the beginning so I am very heartened by our
progress. I am feeling that as a group folks have questions and concerns
they would appreciate the opportunity to air in a mutually constructive
venue.

What day and where would be the mutually constructive venue?

I held off on Joe's thread since third party ci affects 4 or 5 programs,
not enough to qualify in my mind as a topic that is OpenStack wide, but
the programs it affects are quite affected, so I do feel it is time to
mention it.

Thanks,
Anita.



Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Thierry Carrez
Eoghan Glynn wrote:
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.
 
 +1 on a collaborative scheduling process within each project.
 
 That's pretty much what we did within the ceilometer core group for
 the Juno summit, except that we used a googledocs spreadsheet instead
 of an etherpad.
 
 So I don't think we need to necessarily mandate usage of an etherpad,
 just let every project decide whatever shared document format they
 want to use.
 
 FTR the benefit of a googledocs spreadsheet in my view would include
 the ease of totalling votes  sessions slots, color-coding candidate
 sessions for merging etc.

Good point. I've replaced the wording in the wiki page -- just use
whatever suits you best, as long as it's a public document and you can
link to it.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Thierry Carrez
Russell Bryant wrote:
 On 09/12/2014 07:37 AM, Thierry Carrez wrote:
 If you think this is wrong and think the design summit suggestion
 website is a better way to do it, let me know why! If some programs
 really can't stand the 'etherpad/IRC' approach I'll see how we can spin
 up a limited instance.
 
 I think this is fine, especially if it's a better reflection of reality
 and lets the teams work more efficiently.
 
 However, one of the benefits of the old submission system was the
 clarity of the process and openness to submissions from anyone.  We
 don't want to be in a situation where non-core folks feel like they have
 a harder time submitting a session.
 
 Once this is settled, as long as the wiki pages [1][2] reflect the
 process and is publicized, it should be fine.
 
 [1] https://wiki.openstack.org/wiki/Summit
 [2] https://wiki.openstack.org/wiki/Summit/Planning

Yes, I'll document the new process and heavily publicize it, once I'm
sure that's the way forward :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Doug Hellmann

On Sep 12, 2014, at 11:21 AM, Mike Bayer mba...@redhat.com wrote:

 
 On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:
 
 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.
 
 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!
 
 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/
 
 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.
 
 my VPN was down and I didn’t get this thread just now, but I am strongly -1 
 on this as added to tox.ini, my response is 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.
 
 Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into 
 *your* environment.  Don’t force it on our automated tests or on my 
 environment.   .pyc files make a difference in behavior, and if we banish 
 them from all testing, then our code is never tested within the environment 
 that it will normally be run in after shipment.
 
 I’d far prefer a simple script added to tox.ini which deletes orphaned .pyc 
 files only, if a change to tox.ini must be made.

I have to agree with Mike here. Cleaning up our dev environments using a little 
automation is better than disabling a feature of the interpreter that may have 
unforeseen consequences in behavior or performance. The more we introduce 
unusual settings like this into our environments and tools, the more edge cases 
and weirdness we’re going to find in those tools that keep us from doing the 
work we really want to be doing.

We could use a git hook (see my earlier message in this thread) or we could add 
a command to tox to remove them before starting the tests. Neither of those 
solutions would affect the runtime behavior in a way that makes our dev 
environments fundamentally different from a devstack or production deployment.

Doug
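[One possible shape for the cleanup automation Doug and Mike describe — a short script deleting only the *orphaned* .pyc files, i.e. those whose source .py no longer exists. This is a sketch (function name is hypothetical, and it assumes the Python 2 layout where .pyc files sit next to their .py source, not a Python 3 __pycache__ layout); it could be wired into tox.ini as a pre-test command:]

```python
import os


def remove_orphaned_pyc(root):
    """Delete .pyc files whose corresponding .py source no longer exists.

    Leaves .pyc files with a live .py alongside them untouched, so normal
    bytecode caching still works; returns the list of paths removed.
    """
    removed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".pyc"):
                continue
            pyc_path = os.path.join(dirpath, name)
            py_path = pyc_path[:-1]  # foo.pyc -> foo.py
            if not os.path.exists(py_path):
                os.remove(pyc_path)
                removed.append(pyc_path)
    return removed
```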




Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Thierry Carrez
Anita Kuno wrote:
 My question involves third party discussions. Now I know at least
 Neutron is going to have a chat about drivers which involves third party
 ci accounts as a supportive aspect of that discussion, but I am
 wondering about the framework for a discussion which I can recommend
 attendees of the third party meetings to attend. They are shaping up to
 be an attentive, forward thinking group and are supporting each other
 which I was hoping for from the beginning so I am very heartened by our
 progress. I am feeling that as a group folks have questions and concerns
 they would appreciate the opportunity to air in a mutually constructive
 venue.
 
 What day and where would be the mutually constructive venue?
 
 I held off on Joe's thread since third party ci affects 4 or 5 programs,
 not enough to qualify in my mind as a topic that is OpenStack wide, but
 the programs it affects are quite affected, so I do feel it is time to
 mention it.

I think those discussions could happen in a cross-project workshop.
We'll run 2 or 3 of those in parallel all day Tuesday, so there is
definitely room there.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Julien Danjou
On Fri, Sep 12 2014, Sean Dague wrote:

 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.

 This prevents pyc files from being writen in your git tree (win!). It
 doesn't seem to impact what pip installs... and if anyone knows how to
 prevent those pyc files from getting created, that would be great.

 But it's something which hopefully causes less perceived developer
 fragility of the system.

I understand that the generated .pyc files can be a real problem, but I don't
really like that patch.

I guess the problem is more likely that testrepository loads the tests
from the source directory, whereas maybe we could make it load them from
what's installed into the venv?

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Johannes Erdfelt
On Fri, Sep 12, 2014, Doug Hellmann d...@doughellmann.com wrote:
 I don’t think we will want to retroactively change the migration scripts
 (that’s not something we generally like to do),

We don't allow semantic changes to migration scripts since people who
have already run it won't get those changes. However, we haven't been
shy about fixing bugs that prevent the migration script from running
(which this change would probably fall into).

 so we should look at changes needed to make sqlalchemy-migrate deal with
 them (by ignoring them, or working around the errors, or whatever).

That said, I agree that sqlalchemy-migrate shouldn't be changing in a
non-backwards compatible way.

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] battling stale .pyc files

2014-09-12 Thread Julien Danjou
On Fri, Sep 12 2014, Mike Bayer wrote:

 Just my 2.5c on this issue as to the approach I think is best. Leave
 the Python interpreter’s behavior as much as “normal” as possible in
 our default test environment.

I definitely agree with all of that. :)

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info




Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 11:56 AM, Johannes Erdfelt johan...@erdfelt.com wrote:

 On Fri, Sep 12, 2014, Doug Hellmann d...@doughellmann.com wrote:
 I don’t think we will want to retroactively change the migration scripts
 (that’s not something we generally like to do),
 
 We don't allow semantic changes to migration scripts since people who
 have already run it won't get those changes. However, we haven't been
 shy about fixing bugs that prevent the migration script from running
 (which this change would probably fall into).

Fortunately BEGIN/COMMIT are not semantic directives. The migrations 
semantically indicated by the script are unaffected in any way by these 
run-environment settings.


 
 so we should look at changes needed to make sqlalchemy-migrate deal with
 them (by ignoring them, or working around the errors, or whatever).
 
 That said, I agree that sqlalchemy-migrate shouldn't be changing in a
 non-backwards compatible way.

On the sqlalchemy-migrate side, the handling of its ill-conceived “sql script” 
feature can be further mitigated here by parsing for the “COMMIT” line when it 
breaks out the SQL and ignoring it; I’d also favor having it emit a warning.
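[A rough sketch of the mitigation Mike describes — the function name is hypothetical and this is not the actual sqlalchemy-migrate code: split the script into statements, drop any bare transaction-control statements (since the migration runner manages transactions itself), and warn about each one dropped:]

```python
import warnings


def strip_transaction_statements(sql_script):
    """Remove bare BEGIN [TRANSACTION] / COMMIT statements from an SQL
    migration script, warning about each statement that is dropped."""
    kept = []
    for statement in sql_script.split(";"):
        text = statement.strip()
        if text.upper() in ("BEGIN", "BEGIN TRANSACTION", "COMMIT"):
            # The runner opens and commits its own transaction, so these
            # directives are redundant at best and break some drivers.
            warnings.warn(
                "Ignoring transaction control statement %r in SQL script"
                % text)
            continue
        if text:
            kept.append(text)
    return ";\n".join(kept) + (";" if kept else "")
```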


Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive dep

2014-09-12 Thread Morgan Fainberg
I do not see the python-memcache library in either keystoneclient’s 
requirements.txt[0] or test-requirements.txt[1]. For purposes of ensuring that 
we do not break people deploying auth_token from keystoneclient (for older 
releases), I don’t see the soft dependency on python-memcache going away.

Even for keystonemiddleware we do not have a hard-dependency on python-memcache 
in requirements.txt[2] or test-requirements.txt[3] as we gate on py33.

—Morgan 

[0] 
http://git.openstack.org/cgit/openstack/python-keystoneclient/tree/requirements.txt?id=0.10.1
[1] 
http://git.openstack.org/cgit/openstack/python-keystoneclient/tree/test-requirements.txt?id=0.10.1
[2] 
http://git.openstack.org/cgit/openstack/keystonemiddleware/tree/requirements.txt?id=1.1.1
[3] 
http://git.openstack.org/cgit/openstack/keystonemiddleware/tree/test-requirements.txt?id=1.1.1

—
Morgan Fainberg


-Original Message-
From: Brant Knudson b...@acm.org
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: September 12, 2014 at 08:33:15
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] global-reqs on tooz pulls in worrisome transitive 
dep

 On Thu, Sep 11, 2014 at 2:17 AM, Thomas Goirand wrote:
  
 
  On my side (as the Debian package maintainer of OpenStack), I was more
  than happy to see that Ceilometer made the choice to use a Python module
  for memcache which supports Python 3. Currently python-memcache does
  *not* support Python 3. It's in fact standing in the way to add Python 3
  compatibility to *a lot* of the OpenStack packages, because this
  directly impact python-keystoneclient, which is a (build-)dependency of
  almost everything.
 
 
 Thomas -
  
 python-keystoneclient should no longer have a hard dependency on
 python-memcache(d). The auth_token middleware which can use memcache has
 been moved into the keystonemiddleware repo (a copy is left in
 keystoneclient only for backwards-compatibility). If python-keystoneclient
 still has a dependency on python-memcache then we're doing something wrong
 and should be able to fix it.
  
 - Brant
  




Re: [openstack-dev] [all] [clients] [keystone] lack of retrying tokens leads to overall OpenStack fragility

2014-09-12 Thread Steven Hardy
On Thu, Sep 11, 2014 at 08:43:22PM -0400, Jamie Lennox wrote:
 
 
 - Original Message -
  From: Steven Hardy sha...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Sent: Friday, 12 September, 2014 12:21:52 AM
  Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying 
  tokens leads to overall OpenStack fragility
  
  On Wed, Sep 10, 2014 at 08:46:45PM -0400, Jamie Lennox wrote:
   
   - Original Message -
From: Steven Hardy sha...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Thursday, September 11, 2014 1:55:49 AM
Subject: Re: [openstack-dev] [all] [clients] [keystone] lack of retrying
tokens leads to overall OpenStack fragility

On Wed, Sep 10, 2014 at 10:14:32AM -0400, Sean Dague wrote:
 Going through the untriaged Nova bugs, and there are a few on a 
 similar
 pattern:
 
 Nova operation in progress takes a while
 Crosses keystone token expiration time
 Timeout thrown
 Operation fails
 Terrible 500 error sent back to user

We actually have this exact problem in Heat, which I'm currently trying
to
solve:

https://bugs.launchpad.net/heat/+bug/1306294

Can you clarify, is the issue either:

1. Create novaclient object with username/password
2. Do series of operations via the client object which eventually fail
after $n operations due to token expiry

or:

1. Create novaclient object with username/password
2. Some really long operation which means token expires in the course of
the service handling the request, blowing up and 500-ing

If the former, then it does sound like a client, or usage-of-client bug,
although note if you pass a *token* vs username/password (as is 
currently
done for glance and heat in tempest, because we lack the code to get the
token outside of the shell.py code..), there's nothing the client can 
do,
because you can't request a new token with longer expiry with a token...

However if the latter, then it seems like not really a client problem to
solve, as it's hard to know what action to take if a request failed
part-way through and thus things are in an unknown state.

This issue is a hard problem, which can possibly be solved by
switching to a trust scoped token (service impersonates the user), but
then
you're effectively bypassing token expiry via delegation which sits
uncomfortably with me (despite the fact that we may have to do this in
heat
to solve the afforementioned bug)

 It seems like we should have a standard pattern that on token
 expiration
 the underlying code at least gives one retry to try to establish a new
 token to complete the flow, however as far as I can tell *no* clients
 do
 this.

As has been mentioned, using sessions may be one solution to this, and
AFAIK session support (where it doesn't already exist) is getting into
various clients via the work being carried out to add support for v3
keystone by David Hu:

https://review.openstack.org/#/q/owner:david.hu%2540hp.com,n,z

I see patches for Heat (currently gating), Nova and Ironic.

 I know we had to add that into Tempest because tempest runs can exceed
 1
 hr, and we want to avoid random fails just because we cross a token
 expiration boundary.

I can't claim great experience with sessions yet, but AIUI you could do
something like:

from keystoneclient.auth.identity import v3
from keystoneclient import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url=OS_AUTH_URL,
   username=USERNAME,
   password=PASSWORD,
   project_id=PROJECT,
   user_domain_name='default')
sess = session.Session(auth=auth)
ks = client.Client(session=sess)

And if you can pass the same session into the various clients tempest
creates then the Password auth-plugin code takes care of 
reauthenticating
if the token cached in the auth plugin object is expired, or nearly
expired:

https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/auth/identity/base.py#L120

So in the tempest case, it seems like it may be a case of migrating the
code creating the clients to use sessions instead of passing a token or
username/password into the client object?

That's my understanding of it atm anyway, hopefully jamielennox will be
along
soon with more details :)

Steve
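[Independent of sessions, the "retry once on token expiration" pattern Sean raised earlier in the thread can be sketched generically. The exception type and refresh hook below are hypothetical stand-ins, not any real client's API — the point is only the control flow: one refresh, one retry, and a second failure propagates:]

```python
class TokenExpired(Exception):
    """Stand-in for a client's 401 / token-expired error."""


def call_with_token_retry(request, refresh_token):
    """Invoke request(); on token expiry, re-authenticate once and retry.

    `request` and `refresh_token` are callables standing in for a client
    method and a re-authentication hook.  If the retried request expires
    again, the exception propagates to the caller.
    """
    try:
        return request()
    except TokenExpired:
        refresh_token()
        return request()
```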
   
   
   By clients here are you referring to the CLIs or the python libraries?
   Implementation is at different points with each.
  
  I think for both heat and tempest we're 

Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Sean Dague
On 09/12/2014 11:33 AM, Mike Bayer wrote:
 
 On Sep 12, 2014, at 11:24 AM, Sean Dague s...@dague.net wrote:
 
 On 09/12/2014 11:21 AM, Mike Bayer wrote:

 On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:

 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.

 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!

 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/

 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.

 my VPN was down and I didn’t get this thread just now, but I am strongly -1 
 on this as added to tox.ini, my response is 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.

 Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into 
 *your* environment.  Don’t force it on our automated tests or on my 
 environment.   .pyc files make a difference in behavior, and if we banish 
 them from all testing, then our code is never tested within the environment 
 that it will normally be run in after shipment.

 I’d far prefer a simple script added to tox.ini which deletes orphaned .pyc 
 files only, if a change to tox.ini must be made.
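The kind of cleanup step Mike is suggesting is small; a sketch in Python (assuming the Python 2 layout of the era, where foo.pyc sits next to foo.py rather than in __pycache__, and not the actual change under review):

```python
import os

def clean_orphaned_pyc(root):
    """Delete .pyc files whose corresponding .py no longer exists.

    A sketch of the tox.ini hook suggested above as an alternative to
    PYTHONDONTWRITEBYTECODE: only bytecode whose source has been deleted
    or moved is removed, so normal caching behaviour is untouched.
    """
    removed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".pyc"):
                continue
            pyc = os.path.join(dirpath, name)
            if not os.path.exists(pyc[:-1]):  # foo.pyc -> foo.py
                os.remove(pyc)
                removed.append(pyc)
    return removed
```

Wired in as a tox pre-test command, this would fix the stale-module symptom without changing how tests themselves run.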

 Your example in the other thread includes the random seed behavior,
 which is already addressed in new tox. So I don't see that as an issue.
 
 Will these patches all be accompanied by corresponding PYTHONHASHSEED 
 settings?   Also why don’t you want to place PYTHONDONTWRITEBYTECODE into
 your own environment? I don’t want this flag on my machine.

This was the set of tox changes that went in in August.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Sean Dague
On 09/12/2014 11:52 AM, Doug Hellmann wrote:
 
 On Sep 12, 2014, at 11:21 AM, Mike Bayer mba...@redhat.com wrote:
 

 On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:

 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.

 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!

 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/

 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.

 my VPN was down and I didn’t get this thread just now, but I am strongly -1 
 on this as added to tox.ini, my response is 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.

 Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into 
 *your* environment.  Don’t force it on our automated tests or on my 
 environment.   .pyc files make a difference in behavior, and if we banish 
 them from all testing, then our code is never tested within the environment 
 that it will normally be run in after shipment.

 I’d far prefer a simple script added to tox.ini which deletes orphaned .pyc 
 files only, if a change to tox.ini must be made.
 
 I have to agree with Mike here. Cleaning up our dev environments using a 
 little automation is better than disabling a feature of the interpreter that 
 may have unforeseen consequences in behavior or performance. The more we 
 introduce unusual settings like this into our environments and tools, the 
 more edge cases and weirdness we’re going to find in those tools that keep us 
 from doing the work we really want to be doing.
 
 We could use a git hook (see my earlier message in this thread) or we could 
 add a command to tox to remove them before starting the tests. Neither of 
 those solutions would affect the runtime behavior in a way that makes our dev 
 environments fundamentally different from a devstack or production deployment.

You believe that unit tests are going to change in the way they run so
dramatically with this change that it invalidates their use?

Do we have examples of what changes if you do and don't have pyc files
there?

Remember, we're not changing integration testing with this. This is
solely unit testing.

The reason I don't like "just fix it in your local env" is you are then
exporting the complexity to developers. For something that they should
really not have to get bitten by... a lot.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 12:03 PM, Sean Dague s...@dague.net wrote:

 On 09/12/2014 11:33 AM, Mike Bayer wrote:
 
 On Sep 12, 2014, at 11:24 AM, Sean Dague s...@dague.net wrote:
 
 On 09/12/2014 11:21 AM, Mike Bayer wrote:
 
 On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:
 
 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.
 
 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!
 
 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/
 
 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.
 
 my VPN was down and I didn’t get this thread just now, but I am strongly 
 -1 on this as added to tox.ini, my response is 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.
 
 Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into 
 *your* environment.  Don’t force it on our automated tests or on my 
 environment.   .pyc files make a difference in behavior, and if we banish 
 them from all testing, then our code is never tested within the 
 environment that it will normally be run in after shipment.
 
 I’d far prefer a simple script added to tox.ini which deletes orphaned 
 .pyc files only, if a change to tox.ini must be made.
 
 Your example in the other thread includes the random seed behavior,
 which is already addressed in new tox. So I don't see that as an issue.
 
 Will these patches all be accompanied by corresponding PYTHONHASHSEED 
 settings?   Also why don’t you want to place PYTHONDONTWRITEBYTECODE into
 your own environment? I don’t want this flag on my machine.
 
 This was the set of tox changes that went in in August.

corresponding to PYTHONHASHSEED, right?  That whole thing is Python 3 only.





Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Sean Dague
On 09/12/2014 12:07 PM, Mike Bayer wrote:
 
 On Sep 12, 2014, at 12:03 PM, Sean Dague s...@dague.net wrote:
 
 On 09/12/2014 11:33 AM, Mike Bayer wrote:

 On Sep 12, 2014, at 11:24 AM, Sean Dague s...@dague.net wrote:

 On 09/12/2014 11:21 AM, Mike Bayer wrote:

 On Sep 12, 2014, at 7:39 AM, Sean Dague s...@dague.net wrote:

 I assume you, gentle OpenStack developers, often find yourself in a hair
 tearing out moment of frustration about why local unit tests are doing
 completely insane things. The code that it is stack tracing on is
 nowhere to be found, and yet it fails.

 And then you realize that part of oslo doesn't exist any more
 except there are still pyc files laying around. Gah!

 I've proposed the following to Nova and Python novaclient -
 https://review.openstack.org/#/c/121044/

 Which sets PYTHONDONTWRITEBYTECODE=true in the unit tests.

 my VPN was down and I didn’t get this thread just now, but I am strongly 
 -1 on this as added to tox.ini, my response is 
 http://lists.openstack.org/pipermail/openstack-dev/2014-September/045873.html.

 Short answer: if you want this feature, put PYTHONDONTWRITEBYTECODE into 
 *your* environment.  Don’t force it on our automated tests or on my 
 environment.   .pyc files make a difference in behavior, and if we banish 
 them from all testing, then our code is never tested within the 
 environment that it will normally be run in after shipment.

 I’d far prefer a simple script added to tox.ini which deletes orphaned 
 .pyc files only, if a change to tox.ini must be made.

 Your example in the other thread includes the random seed behavior,
 which is already addressed in new tox. So I don't see that as an issue.

 Will these patches all be accompanied by corresponding PYTHONHASHSEED 
 settings?   Also why don’t you want to place PYTHONDONTWRITEBYTECODE into
 your own environment? I don’t want this flag on my machine.

 This was the set of tox changes that went in in August.
 
 corresponding to PYTHONHASHSEED, right?  That whole thing is Python 3 only.

It very much is not only Python 3. We have to pin it on a bunch of our
Python 2 tests now until we clean them up. There was a giant cross-project
effort on this all through July & August to bring this in.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Jeremy Stanley
On 2014-09-12 11:36:20 -0400 (-0400), Mike Bayer wrote:
[...]
 not to mention PYTHONHASHSEED only works on Python 3.  What is the
 issue in tox you're referring to?

Huh? The overrides we added to numerous projects' tox.ini files to
stem the breakage in Python 2.x unit tests from hash seed
randomization in newer tox releases would seem to contradict your
assertion. Also documentation...

https://docs.python.org/2.7/using/cmdline.html#envvar-PYTHONHASHSEED

(New in version 2.6.8.)
-- 
Jeremy Stanley



Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Jeremy Stanley
On 2014-09-12 12:07:41 -0400 (-0400), Mike Bayer wrote:
[...]
 corresponding to PYTHONHASHSEED, right?  That whole thing is
 Python 3 only.

See other reply, but I really don't understand where you got that
idea. Yes Python 2.x does not randomize the hash seed by default
like Py3K (you have to pass -R to get that behavior) but you can
still totally override the hash seed from the environment in 2.x
(and more recent versions of tox happily do this for you and print
out the hash seed which was chosen for a given test run).
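Jeremy's point is easy to verify directly: with PYTHONHASHSEED pinned in the environment, two fresh interpreters produce identical hashes (a stdlib-only check; on interpreters older than 2.6.8 the variable is simply ignored):

```python
import os
import subprocess
import sys

def hash_in_fresh_interpreter(value, seed):
    # Spawn a new interpreter so each call starts with its own hash seed.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash(%r))" % value], env=env)
    return out.strip()

# With the seed pinned in the environment, the hash of a string is
# reproducible across interpreter runs, which is what the tox.ini
# overrides rely on for keeping Python 2.x unit tests deterministic.
first = hash_in_fresh_interpreter("openstack", "42")
second = hash_in_fresh_interpreter("openstack", "42")
```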
-- 
Jeremy Stanley



Re: [openstack-dev] [qa] Tempest Bug triage

2014-09-12 Thread Mauro S M Rodrigues

On 09/11/2014 04:52 PM, David Kranz wrote:
So we had a Bug Day this week and the results were a bit disappointing 
due to lack of participation. We went from 124 New bugs to 75. There 
were also many cases where bugs referred to logs that no longer 
existed. This suggests that we really need to keep up with bug triage 
in real time. Since bug triage should involve the Core review team, we 
propose to rotate the responsibility of triaging bugs weekly. I put up 
an etherpad here 
https://etherpad.openstack.org/p/qa-bug-triage-rotation and I hope the 
tempest core review team will sign up. Given our size, this should 
involve signing up once every two months or so. I took next week.


 -David
+1, I'm not core team but I just assigned myself to the last week of 
September and first of December.


Also, given the bad quality of some reports, we may want to come up with
a template of need-to-have data for bug reports. I haven't looked at it
lately, but we used to get several reports with just a bunch of traces,
or just a link.


  --  mauro(sr)




Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Jeremy Stanley
On 2014-09-12 17:16:11 +0100 (+0100), Daniel P. Berrange wrote:
[...]
 Agreed, the problem with stale .pyc files is that it never occurs to
 developers that .pyc files are causing the problem until after you've
 wasted (potentially hours of) time debugging the problem. Avoiding
 this pain for all developers out of the box is a clear win overall
 and makes openstack development less painful.

I've been bitten by similar issues often enough that I regularly git
clean -dfx my checkouts or at least pass -r to tox so that it will
recreate its virtualenvs from scratch. Yes it does add some extra
time to the next test run, but you can iterate fairly tightly after
that as long as you're not actively moving stuff around while you
troubleshoot (and coupled with a git hook like Doug described for
cleaning on topic branch changes would be a huge boon as well).
-- 
Jeremy Stanley



Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Chris Dent

On Fri, 12 Sep 2014, Julien Danjou wrote:


I guess the problem is more likely that testrepository loads the tests
from the source directory, whereas maybe we could make it load them from
what's installed into the venv?


This rather ruins TDD doesn't it?

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Daniel P. Berrange
On Fri, Sep 12, 2014 at 04:23:09PM +, Jeremy Stanley wrote:
 On 2014-09-12 17:16:11 +0100 (+0100), Daniel P. Berrange wrote:
 [...]
  Agreed, the problem with stale .pyc files is that it never occurs to
  developers that .pyc files are causing the problem until after you've
  wasted (potentially hours of) time debugging the problem. Avoiding
  this pain for all developers out of the box is a clear win overall
  and makes openstack development less painful.
 
 I've been bitten by similar issues often enough that I regularly git
 clean -dfx my checkouts or at least pass -r to tox so that it will
 recreate its virtualenvs from scratch. Yes it does add some extra
 time to the next test run, but you can iterate fairly tightly after
 that as long as you're not actively moving stuff around while you
 troubleshoot (and coupled with a git hook like Doug described for
 cleaning on topic branch changes would be a huge boon as well).

I'm not debating whether there are ways to clean up your env to avoid
the problem /after/ it occurs. The point is to stop the problem occurring
in the first place, to avoid placing this unnecessary clean-up burden
on devs.  Intentionally leaving things set up so that contributors hit
bugs like stale .pyc files is just user hostile.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [all][oslo] official recommendations to handle oslo-incubator sync requests

2014-09-12 Thread Ihar Hrachyshka

There seems to be no objections to that wording, so I went forward and
added it to [1], plus added the note about those rules to [2].

[1]: https://wiki.openstack.org/wiki/Oslo#Syncing_Code_from_Incubator
[2]: https://wiki.openstack.org/wiki/StableBranch#Proposing_Fixes

On 19/08/14 15:52, Ihar Hrachyshka wrote:
 Hi all,
 
 I've found out there are no clear public instructions on how to
 handle oslo-incubator synchronizations in master and stable
 branches neither at [1] nor at [2]. Though my observations show
 that there is some oral tradition around community on how we handle
 those review, as follows:
 
 1. For master oslo-incubator sync requests, we tend to sync the
 whole modules or even all the modules that a project uses (unless
 some specific obstacles to do so). This is to use the latest and
 greatest code from Oslo subproject, fetch all possible bug fixes
 and goodies, and make the synchronized copy of it as similar to
 upstream (=oslo-incubator) as possible.
 
 2. For stable branches, the process is a bit different. For those 
 branches, we don't generally want to introduce changes that are
 not related to specific issues in a project. So in case of
 backports, we tend to do per-patch consideration when synchronizing
 from incubator.
 
 3. Backporting for stable branches is a somewhat complicated process.
 When reviewing synchronization requests for those branches, we
 should not only check that the code is present in all consequent
 branches of the appropriate project (f.e. for Havana, in both Juno
 and Icehouse), but also that the patches being synchronized were
 successfully backported to corresponding stable branches of
 oslo-incubator. So the usual way of oslo-incubator bug fix is (in
 case of e.g. Neutron):
 
 oslo-incubator (master) -> neutron (master) -> oslo-incubator
 (stable/icehouse) -> neutron (stable/icehouse).
 
 [For Havana, it's even more complicated, introducing more elements
 in the backporting chain.]
 
 I hope I've described the existing oral tradition correctly.
 Please comment on that, and if we're ok with the way it's written
 above, I'd like to update our wiki pages ([1] and [2]) with that.
 
 [1]: 
 https://wiki.openstack.org/wiki/ReviewChecklist#Oslo_Syncing_Checklist

  [2]: https://wiki.openstack.org/wiki/StableBranch
 
 Cheers, /Ihar
 



Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Ihar Hrachyshka

On 12/09/14 17:30, Mike Bayer wrote:
 
 On Sep 12, 2014, at 10:40 AM, Ihar Hrachyshka ihrac...@redhat.com
 wrote:
 
 On 12/09/14 16:33, Mike Bayer wrote:
 I agree with this, changing the MySQL driver now is not an
 option.
 
 That was not the proposal. The proposal was to introduce support
 to run against something different from MySQLdb + a gate job for
 that alternative. The next cycle was supposed to do thorough
 regression testing, benchmarking, etc. to decide whether we're ok
 to recommend that alternative to users.
 
 ah, well that is a great idea.  But we can have that throughout
 Kilo anyway, why not ?

Sure, it's not the end of the world. We'll just need to postpone the work
till RC1 (= opening of master for new stuff) and pass spec bureaucracy
(reapplying for Kilo)... That's some burden, but not a tragedy.

The only thing that I'm really sad about is that Juno users won't be
able to try out that driver on their setup just to see how it works,
so it narrows testing base to gate while we could get some valuable
deployment feedback in Juno already.

 
 
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



Re: [openstack-dev] masking X-Auth-Token in debug output - proposed consistency

2014-09-12 Thread Tripp, Travis S

From Jamie Lennox:
 We handle this in the keystoneclient Session object by just printing
 REDACTED or something similar.
 The problem with using a SHA1 is that for backward compatibility we often
 use the SHA1 of a PKI token as if it were a UUID token, and so this is
 still sensitive data. There is work in keystone by morganfainberg (which
 I think was merged) to add a new audit_id which will be able to identify
 a token across calls without exposing any sensitive information. We will
 support this in session when available.

From Sean Dague
 So the problem is that means we are currently leaking secrets and making the 
 logs unreadable.

 It seems like we should move forward with the {SHA1} ... and if that is still 
 sensitive, address that later. 
 Not addressing it basically keeps the exposure and destroys usability of the 
 code because there is so much garbage printed out.

I understand Sean's point about debugging.  Right now the glanceclient is just
printing ***, so it isn't printing a lot of excess and isn't leaking anything
sensitive.  The other usability concern with *** that Sean previously raised
is that a short, usable string might be useful for debugging.

Morgan and Jamie, you think switching to SHA1 actually adds a potential
security vulnerability to glanceclient that doesn't exist now. If that is true,
I think it would override the additional debugging benefit of using SHA1 for
now.  Can you please confirm?

If only for consistency's sake, I could switch to TOKEN_REDACTED like the code
sample Morgan sent. [1]

[1] 
https://github.com/openstack/python-keystoneclient/blob/01cabf6bbbee8b5340295f3be5e1fa7111387e7d/keystoneclient/session.py#L126-L131
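The two options under discussion, sketched side by side (illustrative only; the real implementations are the linked keystoneclient code and glanceclient's current ***):

```python
import hashlib

TOKEN_REDACTED = "TOKEN_REDACTED"

def redact_sha1(token):
    # The {SHA1} proposal: a short handle that lets you correlate requests
    # in the logs.  Jamie's caveat applies: for PKI tokens the SHA1 can be
    # used as if it were a UUID token, so it may itself be sensitive.
    return "{SHA1}" + hashlib.sha1(token.encode("utf-8")).hexdigest()

def redact_fixed(token):
    # keystoneclient Session style: leak nothing, correlate nothing.
    return TOKEN_REDACTED

headers = {"X-Auth-Token": "super-secret-token", "Accept": "application/json"}
safe = dict(headers, **{"X-Auth-Token": redact_sha1(headers["X-Auth-Token"])})
```

Either way the raw token never reaches the debug output; the trade-off is purely correlatability versus zero information leakage.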


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-09-12 Thread Doug Hellmann

On Sep 12, 2014, at 1:03 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 On 12/09/14 17:30, Mike Bayer wrote:
 
  On Sep 12, 2014, at 10:40 AM, Ihar Hrachyshka ihrac...@redhat.com
  wrote:
 
  On 12/09/14 16:33, Mike Bayer wrote:
  I agree with this, changing the MySQL driver now is not an
  option.
 
  That was not the proposal. The proposal was to introduce support
  to run against something different from MySQLdb + a gate job for
  that alternative. The next cycle was supposed to do thorough
  regression testing, benchmarking, etc. to decide whether we're ok
  to recommend that alternative to users.
 
  ah, well that is a great idea.  But we can have that throughout
  Kilo anyway, why not ?
 
  Sure, it's not the end of the world. We'll just need to postpone the work
  till RC1 (= opening of master for new stuff) and pass spec bureaucracy
  (reapplying for Kilo)... That's some burden, but not a tragedy.
 
 The only thing that I'm really sad about is that Juno users won't be
 able to try out that driver on their setup just to see how it works,
 so it narrows testing base to gate while we could get some valuable
 deployment feedback in Juno already.

It’s all experimental, right? And implemented in libraries? So those users 
could update oslo.db and sqlalchemy-migrate and test the results under Juno.

Doug




Re: [openstack-dev] [Zaqar] Zaqar graduation (round 2) [was: Comments on the concerns arose during the TC meeting]

2014-09-12 Thread Clint Byrum
Excerpts from Mark McLoughlin's message of 2014-09-12 03:27:42 -0700:
 On Wed, 2014-09-10 at 14:51 +0200, Thierry Carrez wrote:
  Flavio Percoco wrote:
   [...]
   Based on the feedback from the meeting[3], the current main concern is:
   
   - Do we need a messaging service with a feature-set akin to SQS+SNS?
   [...]
  
  I think we do need, as Samuel puts it, some sort of durable
  message-broker/queue-server thing. It's a basic application building
  block. Some claim it's THE basic application building block, more useful
  than database provisioning. It's definitely a layer above pure IaaS, so
  if we end up splitting OpenStack into layers this clearly won't be in
  the inner one. But I think IaaS+ basic application building blocks
  belong in OpenStack one way or another. That's the reason I supported
  Designate (everyone needs DNS) and Trove (everyone needs DBs).
  
  With that said, I think yesterday there was a concern that Zaqar might
  not fill the some sort of durable message-broker/queue-server thing
  role well. The argument goes something like: if it was a queue-server
  then it should actually be built on top of Rabbit; if it was a
  message-broker it should be built on top of postfix/dovecot; the current
  architecture is only justified because it's something in between, so
  it's broken.
  
  I guess I don't mind that much zaqar being something in between:
  unless I misunderstood, exposing extra primitives doesn't prevent the
  queue-server use case from being filled. Even considering the
  message-broker case, I'm also not convinced building it on top of
  postfix/dovecot would be a net win compared to building it on top of
  Redis, to be honest.
 
 AFAICT, this part of the debate boils down to the following argument:
 
   If Zaqar implemented messaging-as-a-service with only queuing 
   semantics (and no random access semantics), its design would 
   naturally be dramatically different and simply implement a 
   multi-tenant REST API in front of AMQP queues like this:
 
 https://www.dropbox.com/s/yonloa9ytlf8fdh/ZaqarQueueOnly.png?dl=0
 
   and that this architecture would allow for dramatically improved 
   throughput for end-users while not making the cost of providing the 
   service prohibitive to operators.
 
 You can't dismiss that argument out-of-hand, but I wonder (a) whether
 the claimed performance improvement is going to make a dramatic
 difference to the SQS-like use case and (b) whether backing this thing
 with an RDBMS and multiple highly available, durable AMQP broker
 clusters is going to be too much of a burden on operators for whatever
 performance improvements it does gain.

Having had experience taking queue-only data out of RDBMS's and even SMTP
solutions, and putting them into queues, I can say that it was generally
quite a bit more reliable and cheaper to maintain.

However, as I've been thinking about this more, I am concerned about the
complexity of trying to use a stateless protocol like HTTP for reliable
delivery, given that these queues all use a session model that relies
on connection persistence. That may very well invalidate my hypothesis.

 
 But the troubling part of this debate is where we repeatedly batter the
 Zaqar team with hypotheses like these and appear to only barely
 entertain their carefully considered justification for their design
 decisions like:
 
   
 https://wiki.openstack.org/wiki/Frequently_asked_questions_%28Zaqar%29#Is_Zaqar_a_provisioning_service_or_a_data_API.3F
   
 https://wiki.openstack.org/wiki/Frequently_asked_questions_%28Zaqar%29#What_messaging_patterns_does_Zaqar_support.3F
 
 I would like to see an SQS-like API provided by OpenStack, I accept the
 reasons for Zaqar's design decisions to date, I respect that those
 decisions were made carefully by highly competent members of our
 community and I expect Zaqar to evolve (like all projects) in the years
 ahead based on more real-world feedback, new hypotheses or ideas, and
 lessons learned from trying things out.

I have read those and I truly believe that the Zaqar team, who I believe
are already a valuable part of the OpenStack family, are doing good work.
Seriously, I believe it is valuable as is and I trust them to do what
they have stated they will do.

Let me explain my position again. Heat is in dire need of a way to
communicate with instances that is efficient. It has no need for a full
messaging stack.. just a way for users to have things pushed from Heat
to their instances efficiently.

So, to reiterate why I keep going on about this: If a messaging service
is to become an integrated part of OpenStack's release, we should think
carefully about the ramifications for operators _and_ users of not
having a light weight queue-only option, when that seems to fit _most_
of the use cases.
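The distinction Clint is drawing, queue-only operations versus Zaqar's additional per-message random access, can be made concrete with a toy model (all names are hypothetical, not the Zaqar API):

```python
import itertools

class ToyQueue(object):
    """Toy contrast between the two semantics in the thread: a pure queue
    exposes only post/claim/delete, while Zaqar additionally allows
    addressing any message by id, which is hard to offer on a plain
    AMQP broker."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._messages = {}  # id -> body

    # --- queue-only semantics ---
    def post(self, body):
        msg_id = next(self._ids)
        self._messages[msg_id] = body
        return msg_id

    def claim(self):
        # hand out the oldest outstanding message
        oldest = min(self._messages)
        return oldest, self._messages[oldest]

    def delete(self, msg_id):
        del self._messages[msg_id]

    # --- the extra, random-access semantic ---
    def get(self, msg_id):
        return self._messages[msg_id]
```

A backend that only ever needs post/claim/delete can be a thin shim over an existing broker; supporting get() for arbitrary ids is what pushes the design toward a database-backed store.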



Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-09-12 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2014-09-12 00:22:35 -0700:
 On 09/12/2014 03:29 AM, Clint Byrum wrote:
  Excerpts from Zane Bitter's message of 2014-09-11 15:21:26 -0700:
  On 09/09/14 19:56, Clint Byrum wrote:
  Excerpts from Samuel Merritt's message of 2014-09-09 16:12:09 -0700:
  On 9/9/14, 12:03 PM, Monty Taylor wrote:
  On 09/04/2014 01:30 AM, Clint Byrum wrote:
  Excerpts from Flavio Percoco's message of 2014-09-04 00:08:47 -0700:
  Greetings,
 
  Last Tuesday the TC held the first graduation review for Zaqar. During
  the meeting some concerns arose. I've listed those concerns below with
  some comments hoping that it will help starting a discussion before the
  next meeting. In addition, I've added some comments about the project
  stability at the bottom and an etherpad link pointing to a list of use
  cases for Zaqar.
 
 
  Hi Flavio. This was an interesting read. As somebody whose attention has
  recently been drawn to Zaqar, I am quite interested in seeing it
  graduate.
 
  # Concerns
 
  - Concern on operational burden of requiring NoSQL deploy expertise to
  the mix of openstack operational skills
 
  For those of you not familiar with Zaqar, it currently supports 2 nosql
  drivers - MongoDB and Redis - and those are the only 2 drivers it
  supports for now. This will require operators willing to use Zaqar to
  maintain a new (?) NoSQL technology in their system. Before expressing
  our thoughts on this matter, let me say that:
 
    1. By removing the SQLAlchemy driver, we basically removed the chance
  for operators to use an already deployed OpenStack-technology
    2. Zaqar won't be backed by any AMQP based messaging technology for
  now. Here's[0] a summary of the research the team (mostly done by
  Victoria) did during Juno
3. We (OpenStack) used to require Redis for the zmq matchmaker
4. We (OpenStack) also use memcached for caching and as the oslo
  caching lib becomes available - or a wrapper on top of dogpile.cache -
  Redis may be used in place of memcached in more and more deployments.
    5. Ceilometer's recommended storage driver is still MongoDB, although
  Ceilometer has now support for sqlalchemy. (Please correct me if I'm
  wrong).
 
  That being said, it's obvious we already, to some extent, promote some
  NoSQL technologies. However, for the sake of the discussion, let's assume
  we don't.
 
  I truly believe, with my OpenStack (not Zaqar's) hat on, that we can't
  keep avoiding these technologies. NoSQL technologies have been around
  for years and we should be prepared - including OpenStack operators - to
  support these technologies. Not every tool is good for all tasks - one
  of the reasons we removed the sqlalchemy driver in the first place -
  therefore it's impossible to keep an homogeneous environment for all
  services.
 
 
  I whole heartedly agree that non traditional storage technologies that
  are becoming mainstream are good candidates for use cases where SQL
  based storage gets in the way. I wish there wasn't so much FUD
  (warranted or not) about MongoDB, but that is the reality we live in.
 
  With this, I'm not suggesting we ignore the risks and the extra burden
  this adds but, instead of attempting to avoid it completely by not
  evolving the stack of services we provide, we should probably work on
  defining a reasonable subset of NoSQL services we are OK with
  supporting. This will help make the burden smaller and it'll give
  operators the option to choose.
 
  [0] http://blog.flaper87.com/post/marconi-amqp-see-you-later/
 
 
  - Concern on should we really reinvent a queue system rather than
  piggyback on one
 
  As mentioned in the meeting on Tuesday, Zaqar is not reinventing
  message brokers. Zaqar provides a service akin to SQS from AWS with an
  OpenStack flavor on top. [0]
 
 
  I think Zaqar is more like SMTP and IMAP than AMQP. You're not really
  trying to connect two processes in real time. You're trying to do fully
  asynchronous messaging with fully randomized access to any message.
 
  Perhaps somebody should explore whether the approaches taken by large
  scale IMAP providers could be applied to Zaqar.
 
  Anyway, I can't imagine writing a system to intentionally use the
  semantics of IMAP and SMTP. I'd be very interested in seeing actual use
  cases for it, apologies if those have been posted before.
 
  It seems like you're either describing XMPP, which has at least one
  open-source scalable backend (ejabberd), or you've actually hit the
  nail on the head by bringing up SMTP and IMAP, though for some reason
  that feels strange.
 
  SMTP and IMAP already implement every feature you've described, as well
  as retries/failover/HA and a fully end-to-end secure transport (if
  installed properly). If you don't actually set them up to run as a public
  messaging interface but just as a cloud-local exchange, then you could
  get by with very low 

[openstack-dev] [nova] Expand resource name allowed characters

2014-09-12 Thread Chris St. Pierre
We have proposed that the allowed characters for all resource names in Nova
(flavors, aggregates, etc.) be expanded to all printable unicode characters
and horizontal spaces: https://review.openstack.org/#/c/119741

Currently, the only allowed characters in most resource names are
alphanumeric, space, and [.-_].
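
For illustration only (assumed patterns, not Nova's actual validation
code), the current rule and the proposed "printable unicode plus
horizontal whitespace" rule look roughly like this:

```python
import re

# Roughly the current restriction: alphanumerics, space, and [.-_]
# (an assumed pattern for illustration, not Nova's actual regex).
CURRENT_RE = re.compile(r'^[a-zA-Z0-9. _-]+$')

def proposed_valid(name):
    # The proposed rule: any printable unicode character, plus
    # horizontal whitespace (space already counts as printable;
    # tab does not, so it is allowed explicitly).
    return bool(name) and all(c.isprintable() or c == '\t' for c in name)

print(bool(CURRENT_RE.match('web-server_01')))   # True
print(bool(CURRENT_RE.match('caf\u00e9')))       # False under today's rule
print(proposed_valid('caf\u00e9 \U0001F4A9'))    # True under the proposal
print(proposed_valid('multi\nline'))             # False: vertical whitespace
```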

We have proposed this change for two principal reasons:

1. We have customers who have migrated data forward since Essex, when no
restrictions were in place, and thus have characters in resource names that
are disallowed in the current version of OpenStack. This is only likely to
be useful to people migrating from Essex or earlier, since the current
restrictions were added in Folsom.

2. It's pretty much always a bad idea to add restrictions without a good
reason. While we don't have an immediate need to use, for example, the
ever-useful http://codepoints.net/U+1F4A9 in a flavor name, it's hard to
come up with a reason people *shouldn't* be allowed to use it.

That said, apparently people have had a need to not be allowed to use some
characters, but it's not clear why:
https://bugs.launchpad.net/nova/+bug/977187

So I guess if anyone knows any reason why these printable characters should
not be joined in holy resource naming, speak now or forever hold your peace.

Thanks!

-- 
Chris St. Pierre
Senior Software Engineer
metacloud.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 12:13 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-09-12 11:36:20 -0400 (-0400), Mike Bayer wrote:
 [...]
 not to mention PYTHONHASHSEED only works on Python 3. What is the
 issue in tox you’re referring to?
 
 Huh? The overrides we added to numerous projects' tox.ini files to
 stem the breakage in Python 2.x unit tests from hash seed
 randomization in newer tox releases would seem to contradict your
 assertion. Also documentation...
 
 https://docs.python.org/2.7/using/cmdline.html#envvar-PYTHONHASHSEED
 
 (New in version 2.6.8.)

Python 3’s documentation says “new in version 3.2.3”, so it's confusing that 
they backported it to 2.6 at the same time, but Google searches tend to point 
you right here:

https://docs.python.org/3.3/using/cmdline.html#envvar-PYTHONHASHSEED
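
As a quick illustration of what the knob does (a sketch; assumes a
CPython new enough to honor PYTHONHASHSEED, i.e. 2.6.8+/3.2.3+):

```python
import os
import subprocess
import sys

# Demonstrate PYTHONHASHSEED: different seeds yield different str
# hashes across interpreter runs, while seed 0 disables randomization
# and is fully deterministic.
def hash_with_seed(seed):
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash('openstack'))"], env=env)
    return out.strip()

h1, h2 = hash_with_seed(1), hash_with_seed(2)
h0a, h0b = hash_with_seed(0), hash_with_seed(0)
print(h1 != h2)    # True: different seeds, different hashes
print(h0a == h0b)  # True: seed 0 is deterministic
```

This is exactly why dict/set iteration order can change between test
runs, which is what the tox.ini overrides paper over.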





Re: [openstack-dev] [all] PYTHONDONTWRITEBYTECODE=true in tox.ini

2014-09-12 Thread Mike Bayer

On Sep 12, 2014, at 12:29 PM, Daniel P. Berrange berra...@redhat.com wrote:

 On Fri, Sep 12, 2014 at 04:23:09PM +, Jeremy Stanley wrote:
 On 2014-09-12 17:16:11 +0100 (+0100), Daniel P. Berrange wrote:
 [...]
 Agreed, the problem with stale .pyc files is that it never occurs to
 developers that .pyc files are causing the problem until after you've
 wasted (potentially hours of) time debugging the problem. Avoiding
 this pain for all developers out of the box is a clear win overall
 and makes openstack development less painful.
 
 I've been bitten by similar issues often enough that I regularly git
 clean -dfx my checkouts or at least pass -r to tox so that it will
 recreate its virtualenvs from scratch. Yes it does add some extra
 time to the next test run, but you can iterate fairly tightly after
 that as long as you're not actively moving stuff around while you
 troubleshoot (and coupled with a git hook like Doug described for
 cleaning on topic branch changes would be a huge boon as well).
 
 I'm not debating whether there are ways to clean up your env to avoid
 the problem /after/ it occurs. The point is to stop the problem occurring
 in the first place to avoid placing this unnecessary clean-up burden
 on devs.  Intentionally leaving things set up so that contributors hit
 bugs like stale .pyc files is just user hostile.

If we’re going to start diluting the test environment to suit developer 
environments, then the CI builds should use a different tox target that does 
*not* specify this environment variable.
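
The hazard Daniel describes is easy to reproduce (a sketch with a
hypothetical module name): a leftover .pyc keeps an import working even
after the source file has been deleted, which is exactly how a refactor
silently goes stale.

```python
import os
import py_compile
import shutil
import subprocess
import sys
import tempfile

# Create a throwaway module, compile it, then delete the source.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "staledemo.py")
with open(src, "w") as f:
    f.write("VALUE = 1\n")

# Place a source-less .pyc where the .py used to live.
py_compile.compile(src, cfile=os.path.join(tmp, "staledemo.pyc"))
os.remove(src)  # the source is gone...

out = subprocess.check_output(
    [sys.executable, "-c", "import staledemo; print(staledemo.VALUE)"],
    cwd=tmp)
stale_value = out.strip().decode()
print(stale_value)  # ...but the import still succeeds from the .pyc
shutil.rmtree(tmp)
```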






Re: [openstack-dev] [all] Design Summit planning

2014-09-12 Thread Anita Kuno
On 09/12/2014 11:54 AM, Thierry Carrez wrote:
 Anita Kuno wrote:
  My question involves third-party discussions. I know that Neutron, at
  least, is going to have a chat about drivers in which third-party CI
  accounts are a supporting aspect of the discussion, but I am wondering
  what discussion framework I can recommend attendees of the third-party
  meetings attend. They are shaping up to be an attentive,
  forward-thinking group and are supporting each other, which I had hoped
  for from the beginning, so I am very heartened by our progress. My
  feeling is that, as a group, folks have questions and concerns they
  would appreciate the opportunity to air in a mutually constructive
  venue.

 What day and where would be the mutually constructive venue?

  I held off on Joe's thread since third-party CI affects 4 or 5 programs -
  not enough, in my mind, to qualify as an OpenStack-wide topic - but
  the programs it does affect are quite affected, so I do feel it is time
  to mention it.
 
 I think those discussions could happen in a cross-project workshop.
 We'll run 2 or 3 of those in parallel all day Tuesday, so there is
 definitely room there.
 
Thank you, I will co-ordinate with the group on an etherpad and start to
prioritize items that we want to discuss.

Thanks Thierry,
Anita.



[openstack-dev] [Octavia] Responsibilities for controller drivers

2014-09-12 Thread Brandon Logan
In IRC the topic came up of supporting many-to-many load balancers to
amphorae.  I believe a consensus was reached that allowing only one-to-many
load balancers to amphorae would be the first step forward, to be
re-evaluated later, since colocation and apolocation will need to work.
(Which brings up another topic: defining what it actually means to be
colocated - on the same amphora, on the same amphora host, on the same
cell/cluster, or in the same data center/availability zone. That should be
something we discuss later, but not right now.)

I am fine with that decision, but Doug brought up a good point that
this could very well be a decision for the controller driver, and
Octavia shouldn't mandate it for all drivers.  So I think we need to
clearly define which decisions are the responsibility of the controller
driver versus which are mandated by Octavia's constructs.

Items I can come up with off the top of my head:

1) LB:Amphora - M:N vs 1:N
2) VIPs:LB - M:N vs 1:N
3) Pool:HMs - 1:N vs 1:1

I'm sure there are others.  I'm sure each one will need to be evaluated
on a case-by-case basis.  We will be walking a fine line between
flexibility and complexity.  We just need to define how far over that
line and in which direction we are willing to go.
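
One way to draw that line - purely a sketch with assumed names, not
Octavia's actual interface - is to let each controller driver declare
the cardinalities it supports and have the core validate requests
against that declaration, rather than mandating one answer for all
drivers:

```python
# Hypothetical sketch: drivers advertise supported topology
# cardinalities; Octavia's core checks a requested topology against
# the driver instead of hard-coding a single choice.
class ControllerDriver(object):
    # Conservative defaults matching the first-step consensus above.
    lb_to_amphora = "1:N"
    vip_to_lb = "1:N"
    pool_to_health_monitors = "1:1"

def validate_topology(driver, relation, cardinality):
    """Return True if the driver supports `cardinality` for `relation`."""
    return getattr(driver, relation) == cardinality

driver = ControllerDriver()
print(validate_topology(driver, "lb_to_amphora", "1:N"))  # True
print(validate_topology(driver, "lb_to_amphora", "M:N"))  # False
```

A more capable driver would simply override the class attributes it
supports, and the core's validation logic stays unchanged.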

Thanks,
Brandon

