Re: [openstack-dev] Oslo final releases ready

2014-09-19 Thread Michael Still
I would like to do a python-novaclient release, but this requirements
commit hasn't yet turned into a requirements proposal for novaclient
(that I can find). Can someone poke that for me?

Michael

On Fri, Sep 19, 2014 at 12:04 AM, Doug Hellmann d...@doughellmann.com wrote:
 All of the final releases for the Oslo libraries for the Juno cycle are 
 available on PyPI. I’m working on a couple of patches to the global 
 requirements list to update the baseline in the applications. In all cases, 
 the final release is a second tag on a previously released version.

 - oslo.config - 1.4.0 (same as 1.4.0.0a5)
 - oslo.db - 1.0.0 (same as 0.5.0)
 - oslo.i18n - 1.0.0 (same as 0.4.0)
 - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
 - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
 - oslo.serialization - 1.0.0 (same as 0.3.0)
 - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
 - oslotest - 1.1.0 (same as 1.1.0.0a2)
 - oslo.utils - 1.0.0 (same as 0.3.0)
 - cliff - 1.7.0 (previously tagged, so not a new release)
 - stevedore - 1.0.0 (same as 1.0.0.0a2)

 Congratulations and *Thank You* to the Oslo team for doing an amazing job 
 with graduations this cycle!

 Doug





-- 
Rackspace Australia



Re: [openstack-dev] Oslo final releases ready

2014-09-19 Thread Joshua Hesketh
Howdy,

I did some digging and it appears the proposal bot failed and
skipped updating keystone-specs:
http://logs.openstack.org/c2/c2372ca3ef6c3ced31934429aecf830daafe583a/post/propose-requirements-updates/e9bbc05/console.html#_2014-09-18_22_56_42_181
(the exit comes from here:
http://git.openstack.org/cgit/openstack/requirements/tree/update.py#n171)

Basically the problem is that keystone-specs has changed
requirements.txt where it possibly shouldn't have. I'm not sure what
the policy is here, but I'm guessing the keystone-specs repository
shouldn't be in the global requirements project list. There is no need
for deployers to have requirements that are only needed for specs.

I've proposed a change to remove the keystone-specs repository from
projects.txt. If that merges, the post jobs for the requirements
project will be re-run and proposals for the missed projects
should come through.
https://review.openstack.org/#/c/122619/

As a side note, we should probably have failed post jobs reported
somewhere. I've also put up a proposal for this:
https://review.openstack.org/#/c/122620/

Cheers,
Josh

On Fri, Sep 19, 2014 at 4:02 PM, Michael Still mi...@stillhq.com wrote:
 I would like to do a python-novaclient release, but this requirements
 commit hasn't yet turned into a requirements proposal for novaclient
 (that I can find). Can someone poke that for me?

 Michael

 On Fri, Sep 19, 2014 at 12:04 AM, Doug Hellmann d...@doughellmann.com wrote:
 All of the final releases for the Oslo libraries for the Juno cycle are 
 available on PyPI. I’m working on a couple of patches to the global 
 requirements list to update the baseline in the applications. In all cases, 
 the final release is a second tag on a previously released version.

 - oslo.config - 1.4.0 (same as 1.4.0.0a5)
 - oslo.db - 1.0.0 (same as 0.5.0)
 - oslo.i18n - 1.0.0 (same as 0.4.0)
 - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
 - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
 - oslo.serialization - 1.0.0 (same as 0.3.0)
 - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
 - oslotest - 1.1.0 (same as 1.1.0.0a2)
 - oslo.utils - 1.0.0 (same as 0.3.0)
 - cliff - 1.7.0 (previously tagged, so not a new release)
 - stevedore - 1.0.0 (same as 1.0.0.0a2)

 Congratulations and *Thank You* to the Oslo team for doing an amazing job 
 with graduations this cycle!

 Doug





 --
 Rackspace Australia




[openstack-dev] [Nova] Some ideas for micro-version implementation

2014-09-19 Thread Alex Xu
As we get close to Kilo, it is time to think about what's next for the Nova
API. In Kilo, we will continue developing the important micro-version feature.

The previous v2-on-v3 proposal included some implementations that could be
used for micro-versions
(https://review.openstack.org/#/c/84695/19/specs/juno/v2-on-v3-api.rst),
but in the end those implementations were considered too complex.

So I'm trying to find a simpler implementation and solution for
micro-versions.


I wrote down some ideas in a blog post:
http://soulxu.github.io/blog/2014/09/12/one-option-for-nova-api/

I have also done some POC work for those ideas, which you can find in the
blog post; a rough sketch of the general idea is included below.
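
For those who don't want to click through, here is a rough, generic sketch of
the kind of per-version registration and dispatch a micro-version mechanism
needs. The decorator, names and version scheme below are hypothetical
illustrations only, not the actual proposal or Nova code:

    # Registry mapping a logical API action to (min, max, handler) entries.
    VERSIONED_METHODS = {}

    def api_version(name, minimum, maximum):
        """Register a handler as serving micro-versions [minimum, maximum]."""
        def decorator(func):
            VERSIONED_METHODS.setdefault(name, []).append(
                (minimum, maximum, func))
            return func
        return decorator

    @api_version("show_server", (2, 1), (2, 3))
    def show_server_v21(server_id):
        return {"server": {"id": server_id}}

    @api_version("show_server", (2, 4), (2, 99))
    def show_server_v24(server_id):
        # Newer micro-versions can add fields without breaking older clients.
        return {"server": {"id": server_id, "host_status": "UP"}}

    def dispatch(name, requested_version, *args, **kwargs):
        """Call the handler whose range covers the client's requested version."""
        for minimum, maximum, func in VERSIONED_METHODS.get(name, []):
            if minimum <= requested_version <= maximum:
                return func(*args, **kwargs)
        raise ValueError("no handler for %s at version %r"
                         % (name, requested_version))

A version parsed from a request header would then be routed with something
like dispatch("show_server", (2, 2), "some-uuid").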


As discussed in the Nova API meeting, we want to bring this up on the
mailing list for discussion. I hope we can get more ideas and options from
all developers.

We would appreciate any comments and suggestions!

Thanks
Alex




Re: [openstack-dev] [TripleO] Propose adding StevenK to core reviewers

2014-09-19 Thread Steve Kowalik
On 19/09/14 12:22, Robert Collins wrote:
 So we've clearly got a majority support :) - Steve, you up for the challenge?

I solemnly swear I am up to no go... Wait, that isn't the quote I'm
after. :-)

I'm delighted to say I'm up for and welcome the challenge.

Cheers,
-- 
Steve
Wrong is endian little that knows everyone but.
 - Sam Hocevar



Re: [openstack-dev] battling stale .pyc files

2014-09-19 Thread Vishvananda Ishaya

On Sep 15, 2014, at 4:34 AM, Lucas Alvares Gomes lucasago...@gmail.com wrote:

 So, although I like the fix proposed and I would +1 that idea, I'm
 also not very concerned if most of the people don't want that. Because
 as you just said we can fix it locally easily. I didn't set it to my
 .local but the way I do nowadays is to have a small bash function in
 my .bashrc to delete the pyc files from the current directory:
 
 function delpyc () {
   find . -name *.pyc -exec rm -rf {} \;
 }
 
 So I just invoke it when needed :)

fyi there is a -delete option to find which is probably a little safer than 
exec with a rm -rf. Also it is really convenient to do this as a git alias so 
it happens automatically when switching branches:

In ~/.gitconfig:

[alias]
cc = "!TOP=$(git rev-parse --show-toplevel) && find $TOP -name '*.pyc' -delete; git-checkout"

now you can git cc branch instead of git checkout branch.

Vish






Re: [openstack-dev] [requirements][horizon] Dependency freeze exceptions

2014-09-19 Thread Thierry Carrez
Radomir Dopieralski wrote:
 I would like to request dependency freeze exceptions for the following
 patches for Horizon:
 
 https://review.openstack.org/#/c/121509/
 https://review.openstack.org/#/c/122410/
 
 and
 
 https://review.openstack.org/#/c/113184/
 
 They are all fixing high priority bugs. The first two are related to
 bugs with parsing Bootstrap 3.2.0 scss files that have been fixed
 upstream. The third one makes the life of packagers a little eaiser,
 by using versions that both Debian and Fedora, and possibly many
 other distros, can ship.
 
 I am not sure what the formal process for such an exception should be,
 so I'm writing here. Please let me know if I should have done it
 differently.

No, that's the right way of doing it.

These look like valid requests, but this is mostly affecting the
distributions, so let's wait for the packagers to chime in on how much
of a disruption that would cause to them at this point.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Flavio Percoco
On 09/18/2014 09:25 PM, Gordon Sim wrote:
 On 09/18/2014 03:45 PM, Flavio Percoco wrote:
 On 09/18/2014 04:09 PM, Gordon Sim wrote:
 Is the replication synchronous or asynchronous with respect to client
 calls? E.g. will the response to a post of messages be returned only
 once the replication of those messages is confirmed? Likewise when
 deleting a message, is the response only returned when replicas of the
 message are deleted?

 It depends on the driver implementation and/or storage configuration.
 For example, in the mongodb driver, we use the default write concern
 called acknowledged. This means that as soon as the message gets to
 the master node (note it's not written on disk yet nor replicated) zaqar
 will receive a confirmation and then send the response back to the
 client.
 
 So in that mode it's unreliable. If there is failure right after the
 response is sent the message may be lost, but the client believes it has
 been confirmed so will not resend.
 
 This is also configurable by the deployer by changing the
 default write concern in the mongodb uri using
 `?w=SOME_WRITE_CONCERN`[0].

 [0] http://docs.mongodb.org/manual/reference/connection-string/#uri.w
 
 So you could change that to majority to get reliable publication
 (at-least-once).

Right, to help with the fight for a world with saner defaults, I think
it'd be better to use majority as the default write concern in the
mongodb driver.
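
For illustration, here is a minimal pymongo sketch (not Zaqar code; the host
names are placeholders) of how that choice is expressed through the
connection URI:

    from pymongo import MongoClient

    # Default "acknowledged" writes (w=1): the primary confirms receipt,
    # but the data may not yet be journaled or replicated.
    fast_client = MongoClient("mongodb://mongo1:27017/?w=1")

    # w=majority: the write is acknowledged only once a majority of the
    # replica set members have it -- higher latency, stronger durability.
    safe_client = MongoClient(
        "mongodb://mongo1,mongo2,mongo3/?replicaSet=rs0&w=majority")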


 What do you mean by 'streaming messages'?

 I'm sorry, that went out wrong. I had the browsability term in my head
 and went with something even worse. By streaming messages I meant
 polling messages without claiming them. In other words, at-least-once is
 guaranteed by default, whereas once-and-only-once is guaranteed just if
 claims are used.
 
 I don't see that the claim mechanism brings any stronger guarantee, it
 just offers a competing consumer behaviour where browsing is
 non-competing (non-destructive). In both cases you require the client to
 be able to remember which messages it had processed in order to ensure
 exactly once. The claim reduces the scope of any doubt, but the client
 still needs to be able to determine whether it has already processed any
 message in the claim already.

The client needs to remember which messages it had processed if it
doesn't delete them (ack) after it has processed them. It's true the
client could also fail after having processed the message which means it
won't be able to ack it.

That said, being able to prevent other consumers from consuming a specific
message can bring a stronger guarantee depending on how messages are
processed. I mean, claiming a message guarantees that throughout the
duration of that claim, no other client will be able to consume the
claimed messages, which means it allows messages to be consumed only once.
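
To make that concrete, here is a rough sketch of the claim-then-delete
(pop+ack) pattern over the Zaqar v1 HTTP API; the endpoint, paths and headers
below are assumptions for illustration rather than a definitive reference:

    import requests

    queue = "http://zaqar.example.com:8888/v1/queues/myqueue"
    headers = {"Client-ID": "3381af92-2b9e-11e3-b191-71861300734c"}

    # Claim at most one message for 60 seconds.
    resp = requests.post(queue + "/claims?limit=1",
                         json={"ttl": 60, "grace": 60}, headers=headers)

    if resp.status_code == 201:
        message = resp.json()[0]
        # ... process the message, then "ack" it by deleting it under the claim.
        requests.delete("http://zaqar.example.com:8888" + message["href"],
                        headers=headers)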

 
 [...]
 That marker is a sequence number of some kind that is used to provide
 ordering to queries? Is it generated by the database itself?

 It's a sequence number to provide ordering to queries, correct.
 Depending on the driver, it may be generated by Zaqar or the database.
 In mongodb's case it's generated by Zaqar[0].
 
 Zaqar increments a counter held within the database, am I reading that
 correctly? So mongodb is responsible for the ordering and atomicity of
 multiple concurrent requests for a marker?

Yes.

The message posting code is here[0] in case you'd like to take a look at
the logic used for mongodb (any feedback is obviously very welcome):

[0]
https://github.com/openstack/zaqar/blob/master/zaqar/queues/storage/mongodb/messages.py#L494
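
As a rough illustration of that counter pattern (the collection layout and
field names here are made up rather than Zaqar's actual schema), an atomically
incremented per-queue marker in MongoDB can look like this:

    from pymongo import MongoClient, ReturnDocument

    counters = MongoClient()["example"]["queue_markers"]

    def next_marker(queue_name):
        """Atomically increment and return the marker for one queue."""
        doc = counters.find_one_and_update(
            {"_id": queue_name},
            {"$inc": {"marker": 1}},
            upsert=True,
            return_document=ReturnDocument.AFTER)
        return doc["marker"]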

Cheers,
Flavio

-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] Oslo final releases ready

2014-09-19 Thread Thierry Carrez
Thomas Goirand wrote:
 However, I made a small mistake. I used 1.4.0.0~a5 instead of 1.4.0~a5.
 As a consequence, I may upload 1.4.0.0 instead of 1.4.0, so that it is
 greater than 1.4.0.0~a5 (to avoid adding an EPOC, which is ugly and
 annoying to maintain). It's going to be the same version though, I just
 need to add a new tag, which isn't much of a problem for me (since I use
 a git based workflow).
 
 The 1.4.0.0a5 is confusing... :(

According to Donald Stufft, PEP 440 now allows 1.4.0.a5, so we may not
need that extra 0 anymore.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Flavio Percoco
On 09/18/2014 07:19 PM, Joe Gordon wrote:
 
 
  On Thu, Sep 18, 2014 at 7:45 AM, Flavio Percoco fla...@redhat.com wrote:
 
 On 09/18/2014 04:09 PM, Gordon Sim wrote:
  On 09/18/2014 12:31 PM, Flavio Percoco wrote:
  On 09/17/2014 10:36 PM, Joe Gordon wrote:
  My understanding of Zaqar is that it's like SQS. SQS uses distributed
  queues, which have a few unusual properties [0]:
 
 
   Message Order
 
  Amazon SQS makes a best effort to preserve order in messages, but due 
 to
  the distributed nature of the queue, we cannot guarantee you will
  receive messages in the exact order you sent them. If your system
  requires that order be preserved, we recommend you place sequencing
  information in each message so you can reorder the messages upon
  receipt.
 
 
  Zaqar guarantees FIFO. To be more precise, it does that relying on the
  storage backend ability to do so as well. Depending on the storage 
 used,
  guaranteeing FIFO may have some performance penalties.
 
  Would it be accurate to say that at present Zaqar does not use
  distributed queues, but holds all queue data in a storage mechanism of
  some form which may internally distribute that data among servers but
  provides Zaqar with a consistent data model of some form?
 
 I think this is accurate. The queue's distribution depends on the
 storage ability to do so and deployers will be able to choose what
 storage works best for them based on this as well. I'm not sure how
 useful this separation is from a user perspective but I do see the
 relevance when it comes to implementation details and deployments.
 
 
  [...]
  As of now, Zaqar fully relies on the storage replication/clustering
  capabilities to provide data consistency, availability and fault
  tolerance.
 
  Is the replication synchronous or asynchronous with respect to client
  calls? E.g. will the response to a post of messages be returned only
  once the replication of those messages is confirmed? Likewise when
  deleting a message, is the response only returned when replicas of the
  message are deleted?
 
 It depends on the driver implementation and/or storage configuration.
 For example, in the mongodb driver, we use the default write concern
 called acknowledged. This means that as soon as the message gets to
 the master node (note it's not written on disk yet nor replicated) zaqar
 will receive a confirmation and then send the response back to the
 client. This is also configurable by the deployer by changing the
 default write concern in the mongodb uri using
 `?w=SOME_WRITE_CONCERN`[0].
 
 
 This means that by default Zaqar cannot guarantee a message will be
 delivered at all. A message can be acknowledged and then the 'master
 node' crashes and the message is lost. Zaqar's ability to guarantee
 delivery is limited by the reliability of a single node, while something
  like swift can only lose a piece of data if 3 machines crash at the
 same time.

Correct, as mentioned in my reply to Gordon, I also think `majority` is
a saner default for the write concern in this case.

I'm glad you mentioned Swift. We discussed having a storage driver for it
a while back. I thought we had a blueprint for that but we don't. Last time
we discussed it, swift seemed to cover everything we needed, IIRC. Anyway,
just a thought.

Flavio

 [0] http://docs.mongodb.org/manual/reference/connection-string/#uri.w
 
 
  However, as far as consuming messages is concerned, it can
  guarantee once-and-only-once and/or at-least-once delivery depending on
  the message pattern used to consume messages. Using pop or claims
  guarantees the former whereas streaming messages out of Zaqar 
 guarantees
  the later.
 
  From what I can see, pop provides unreliable delivery (i.e. its similar
  to no-ack). If the delete call using pop fails while sending back the
  response, the messages are removed but didn't get to the client.
 
 Correct, pop works like no-ack. If you want to have pop+ack, it is
 possible to claim just 1 message and then delete it.
 
 
  What do you mean by 'streaming messages'?
 
 I'm sorry, that went out wrong. I had the browsability term in my head
 and went with something even worse. By streaming messages I meant
 polling messages without claiming them. In other words, at-least-once is
 guaranteed by default, whereas once-and-only-once is guaranteed just if
 claims are used.
 
 
  [...]
  Based on our short conversation on IRC last night, I understand you're
  concerned that FIFO may result in performance issues. That's a valid
  concern and I think the right answer is that it depends on the storage.
  If the storage has a built-in FIFO guarantee then there's nothing 

Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Flavio Percoco
On 09/18/2014 07:16 PM, Devananda van der Veen wrote:
 On Thu, Sep 18, 2014 at 8:54 AM, Devananda van der Veen
 devananda@gmail.com wrote:
 On Thu, Sep 18, 2014 at 7:45 AM, Flavio Percoco fla...@redhat.com wrote:
 On 09/18/2014 04:09 PM, Gordon Sim wrote:
 On 09/18/2014 12:31 PM, Flavio Percoco wrote:
 Zaqar guarantees FIFO. To be more precise, it does that relying on the
 storage backend ability to do so as well. Depending on the storage used,
 guaranteeing FIFO may have some performance penalties.

 Would it be accurate to say that at present Zaqar does not use
 distributed queues, but holds all queue data in a storage mechanism of
 some form which may internally distribute that data among servers but
 provides Zaqar with a consistent data model of some form?

 I think this is accurate. The queue's distribution depends on the
 storage ability to do so and deployers will be able to choose what
 storage works best for them based on this as well. I'm not sure how
 useful this separation is from a user perspective but I do see the
 relevance when it comes to implementation details and deployments.

 Guaranteeing FIFO and not using a distributed queue architecture
 *above* the storage backend are both scale-limiting design choices.
 That Zaqar's scalability depends on the storage back end is not a
 desirable thing in a cloud-scale messaging system in my opinion,
 because this will prevent use at scales which can not be accommodated
 by a single storage back end.

 
 It may be worth qualifying this a bit more.
 
 While no single instance of any storage back-end is infinitely
 scalable, some of them are really darn fast. That may be enough for
 the majority of use cases. It's not outside the realm of possibility
 that the inflection point [0] where these design choices result in
 performance limitations is at the very high end of scale-out, eg.
 public cloud providers who have the resources to invest further in
 improving zaqar.
 
 As an example of what I mean, let me refer to the 99th percentile
 response time graphs in Kurt's benchmarks [1]... increasing the number
 of clients with write-heavy workloads was enough to drive latency from
 10ms to 200 ms with a single service. That latency significantly
 improved as storage and application instances were added, which is
 good, and what I would expect. These benchmarks do not (and were not
 intended to) show the maximal performance of a public-cloud-scale
 deployment -- but they do show that performance under different
 workloads improves as additional services are started.
 
 While I have no basis for comparing the configuration of the
 deployment he used in those tests to what a public cloud operator
 might choose to deploy, and presumably such an operator would put
 significant work into tuning storage and running more instances of
 each service and thus shift that inflection point to the right, my
 point is that, by depending on a single storage instance, Zaqar has
 pushed the *ability* to scale out down into the storage
 implementation. Given my experience scaling SQL and NoSQL data stores
 (in my past life, before working on OpenStack) I have a knee-jerk
 reaction to believing that this approach will result in a
 public-cloud-scale messaging system.

Thanks for the more detailed explanation of your concern, I appreciate it.

Let me start by saying that I agree that pushing message distribution down
to the storage may end up causing some scaling limitations in some
scenarios.

That said, Zaqar already has the knowledge of pools. Pools allow
operators to add more storage clusters to Zaqar and with that balance
the load between them. It is possible to distribute the data across
these pools in a per-queue basis. While the messages of a queue are not
distributed across multiple *pools* - all the messages for queue X will
live in a single pool - I do believe this per-queue distribution helps
to address the above concern and pushes that limitation farther away.

Let me explain how pools currently work a bit better. As of now, each
pool has a URI pointing to a storage cluster and a weight. This weight
is used to balance load between pools every time a queue is created.
Once it's created, Zaqar keeps the information of the queue-pool
association in a catalogue that is used to know where the queue lives.
We'll likely add new algorithms to have a better and more even
distribution of queues across the registered pools.
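
To make the weight-based balancing concrete, here is a tiny illustrative
sketch (not Zaqar's code) of picking a pool for a new queue with probability
proportional to its weight:

    import bisect
    import random

    def select_pool(pools):
        """pools: list of (uri, weight) tuples; returns the chosen pool URI."""
        cumulative, total = [], 0
        for _uri, weight in pools:
            total += weight
            cumulative.append(total)
        point = random.random() * total
        return pools[bisect.bisect(cumulative, point)][0]

    # A pool with twice the weight receives roughly twice as many new queues.
    pool_uri = select_pool([("mongodb://pool-a", 100), ("mongodb://pool-b", 200)])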

I'm sure message distribution could be implemented in Zaqar but I'm not
convinced we should do so right now. The reason being it would bring in
a whole lot of new issues to the project that I think we can and should
avoid for now.

Thanks for the feedback, Devananda.
Flavio


 
 -Devananda
 
 [0] http://en.wikipedia.org/wiki/Inflection_point -- in this context,
 I mean the point on the graph of throughput vs latency where the
 derivative goes from near-zero (linear growth) to non-zero
 (exponential growth)
 
 [1] 

Re: [openstack-dev] [heat][nova] VM restarting on host, failure in convergence

2014-09-19 Thread Jastrzebski, Michal
   All,
  
   Currently OpenStack does not have a built-in HA mechanism for tenant
   instances which could restore virtual machines in case of a host
   failure. Openstack assumes every app is designed for failure and can
   handle instance failure and will self-remediate, but that is rarely
   the case for the very large Enterprise application ecosystem.
   Many existing enterprise applications are stateful, and assume that
   the physical infrastructure is always on.
  
 
  There is a fundamental debate that OpenStack's vendors need to work out
  here. Existing applications are well served by existing virtualization
  platforms. Turning OpenStack into a work-alike to oVirt is not the end
  goal here. It's a happy accident that traditional apps can sometimes be
  bent onto the cloud without much modification.
 
  The thing that clouds do is they give development teams a _limited_
  infrastructure that lets IT do what they're good at (keep the
  infrastructure up) and lets development teams do what they're good at 
(run
  their app). By putting HA into the _app_, and not the _infrastructure_,
  the dev teams get agility and scalability. No more waiting weeks for
  allocationg specialized servers with hardware fencing setups and fibre
  channel controllers to house a shared disk system so the super reliable
  virtualization can hide HA from the user.
 
  Spin up vms. Spin up volumes.  Run some replication between regions,
  and be resilient.

I don't dispute that's the way to go, but reality is somewhat different.
In a world of early design failures, low budgets and deadlines, some good
practices might be omitted early and can be hard to implement later.

From a technical point of view, the cloud can still help such apps, and
I think OpenStack should approach that part of the market as well.

  So, as long as it is understood that whatever is being proposed should
  be an application centric feature, and not an infrastructure centric
  feature, this argument remains interesting in the cloud context.
  Otherwise, it is just an invitation for OpenStack to open up direct
  competition with behemoths like vCenter.
 
   Even the OpenStack controller services themselves do not gracefully
   handle failure.
  
 
  Which ones?

Heat has issues, horizon has issues, and neutron L3 only works in an 
active-passive setup.

   When these applications were virtualized, they were virtualized on
   platforms that enabled very high SLAs for each virtual machine,
   allowing the application to not be rewritten as the IT team moved them
   from physical to virtual. Now while these apps cannot benefit from
   methods like automatic scaleout, the application owners will greatly
   benefit from the self-service capabilities they will receive as they
   utilize the OpenStack control plane.
  
 
  These apps were virtualized for IT's benefit. But the application authors
  and users are now stuck in high-cost virtualization. The cloud is best
  utilized when IT can control that cost and shift the burden of uptime
  to the users by offering them more overall capacity and flexibility with
  the caveat that the individual resources will not be as reliable.
 
  So what I'm most interested in is helping authors change their apps to
  be reslient on their own, not in putting more burden on IT.

This can be very costly, therefore not always possible.

   I'd like to suggest to expand heat convergence mechanism to enable
   self-remediation of virtual machines and other heat resources.
  
 
  Convergence is still nascent. I don't know if I'd pile on to what might
  take another 12 - 18 months to get done anyway. We're just now figuring
  out how to get started where we thought we might already be 1/3 of the
  way through. Just something to consider.

We don't need to complete convergence to start working on that.
However long this might take, the sooner we start, the sooner we deliver.


Thanks,
Michał



Re: [openstack-dev] [Ceilometer] Adding Nejc Saje to ceilometer-core

2014-09-19 Thread Eoghan Glynn


 Hi,
 
 Nejc has been doing a great work and has been very helpful during the
 Juno cycle and his help is very valuable.
 
 I'd like to propose that we add Nejc Saje to the ceilometer-core group.
 
 Please, dear ceilometer-core members, reply with your votes!

With eight yeas and zero nays, I think we can call a result in this
vote.

Welcome to the ceilometer-core team Nejc!

Cheers,
Eoghan



Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-19 Thread Eoghan Glynn


 Hi,
 
 Dina has been doing a great work and has been very helpful during the
 Juno cycle and her help is very valuable. She's been doing a lot of
 reviews and has been very active in our community.
 
 I'd like to propose that we add Dina Belova to the ceilometer-core
 group, as I'm convinced it'll help the project.
 
 Please, dear ceilometer-core members, reply with your votes!

With seven yeas and zero nays, I think we can call a result in this
vote.

Welcome to the ceilometer-core team Dina!

Cheers,
Eoghan



Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Thierry Carrez
Joe Gordon wrote:
 On Thu, Sep 18, 2014 at 9:02 AM, Devananda van der Veen
 devananda@gmail.com wrote:
 - guaranteed message order
 - not distributing work across a configurable number of back ends
 
 These are scale-limiting design choices which are reflected in the
 API's characteristics.
 
 I agree with Clint and Devananda

The underlying question being... can Zaqar evolve to ultimately reach
the massive scale use case Joe, Clint and Devananda want it to reach, or
are those design choices so deeply rooted in the code and architecture
that Zaqar won't naturally mutate to support that use case.

The Zaqar team has shown great willingness to adapt in order to support
more use cases, but I guess there may be architectural design choices
that would just mean starting over ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ceilometer] Performance degradation (docs) for sampling rate less than 10 mins

2014-09-19 Thread Srikanta Patanjali
Hi Tyaptin,

Thanks for sharing the document, I appreciate it. It's well
documented and has all the info I was looking for, in detail.

Maybe it should be documented along with Ceilometer for reference purposes.

Cheers,
Srikanta
InIT ¦ ICCLab
ZHAW



Hi Patanjali!

We have investigated this question and have a document with the results of
this testing. The tests were run on a physical lab with 2000 VMs managed by
Nova and a 60-second polling interval.

Expanded information is in the doc:
https://docs.google.com/a/mirantis.com/document/d/1jvqIy6fWQBvTZEfdnk37FvF5xtA3l7Cx09VNoe6bBRU

If you have any questions I'll be happy to answer.

On Thu, Sep 18, 2014 at 7:56 PM, Srikanta Patanjali wrote:

 Hi Team,

 I was considering changing the default sampling rate of Ceilometer from 
 10 mins to less than that. I foresee an adverse impact on its performance due 
 to the increase in the data inside MongoDB.

 I was wondering if the QA team (or anyone else) has done any of the load 
 tests with these parameters ? It would be helpful to have access to these 
 results (if any).

 If not, I would like to know the views on increasing the sample rate value 
 (in the pipeline.yaml file)

 Cheers,
 Srikanta
 InIT ¦ ICCLab
 ZHAW





-- 

Best regards,

Tyaptin Ilia,

Software Engineer.


Re: [openstack-dev] [heat][nova] VM restarting on host failure in convergence

2014-09-19 Thread Jastrzebski, Michal
   In short, what we'll need from nova is to have a 100% reliable
   host-health monitor and an equally reliable rebuild/evacuate mechanism
   with fencing and a scheduler. In heat we need a scalable and reliable
   event listener and an engine to decide which action to perform in a given
   situation.
 
  Unfortunately, I don't think Nova can provide this alone.  Nova only
  knows about whether or not the nova-compute daemon is current
  communicating with the rest of the system.  Even if the nova-compute
  daemon drops out, the compute node may still be running all instances
  just fine.  We certainly don't want to impact those running workloads
  unless absolutely necessary.

But, on the other hand, if a host is really down, nova might want to know
that, if only to change the instances' status to ERROR or whatever. I don't
think a situation where an instance is down due to a host failure and nova
doesn't know about it is good for anyone.

  I understand that you're suggesting that we enhance Nova to be able to
  provide that level of knowledge and control.  I actually don't think
  Nova should have this knowledge of its underlying infrastructure.
 
  I would put the host monitoring infrastructure (to determine if a host
  is down) and fencing capability as out of scope for Nova and as a part
  of the supporting infrastructure.  Assuming those pieces can properly
  detect that a host is down and fence it, then all that's needed from
  Nova is the evacuate capability, which is already there.  There may be
  some enhancements that could be done to it, but surely it's quite close.

Why do you think nova shouldn't have information about the underlying infra?
Since the service group is plugin based, we could develop a new plugin for
enhancing the reliability of nova's information without any impact on the
current code. I'm a bit concerned about the dependency injection we'd have
to make. I'd love to be in a situation where people get some level (maybe
not the best they can get) of SLA in heat out of the box, without a bigger
investment in infrastructure configuration.

  There's also the part where a notification needs to go out saying that
  the instance has failed.  Some thing (which could be Heat in the case of
  this proposal) can react to that, either directly or via ceilometer, for
  example.  There is an API today to hard reset the state of an instance
  to ERROR.  After a host is fenced, you could use this API to mark all
  instances on that host as dead.  I'm not sure if there's an easy way to
  do that for all instances on a host today.  That's likely an enhancement
  we could make to python-novaclient, similar to the evacuate all
  instances on a host enhancement that was done in novaclient.

Why wouldn't nova itself do that? I mean, nova should know the real status
of its instances at all times, in my opinion.
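
For reference, the workflow described in the quoted paragraph looks roughly
like this with python-novaclient (auth values and the host name are
placeholders, and the exact client signatures should be checked against the
novaclient release in use):

    from novaclient import client

    # Credentials and endpoint here are placeholders.
    nova = client.Client("2", "admin", "secret", "admin",
                         "http://keystone.example.com:5000/v2.0")

    failed_host = "compute-01"

    # After the external monitor has fenced the failed host, mark each
    # instance on it as failed and evacuate it elsewhere.
    servers = nova.servers.list(search_opts={"host": failed_host,
                                             "all_tenants": 1})
    for server in servers:
        nova.servers.reset_state(server, "error")
        nova.servers.evacuate(server, on_shared_storage=True)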

Thanks,
Michał


[openstack-dev] Will glance return a deleted image?

2014-09-19 Thread Eli Qiao
hi all:
I've found some strange behavior: when I use nova image-show uuid, I can
get a deleted image (it is marked as deleted). Is this the correct behavior?

[tagett@stack-01 devstack]$ nova image-show bda62c0e-2b6c-4495-b06e-1fa1e591da80
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| OS-EXT-IMG-SIZE:size | 10682368                             |
| created              | 2014-09-19T04:41:47Z                 |
| id                   | bda62c0e-2b6c-4495-b06e-1fa1e591da80 |
| minDisk              | 0                                    |
| minRam               | 0                                    |
| name                 | t3-shelved                           |
| progress             | 0                                    |
| status               | DELETED                              |
| updated              | 2014-09-19T04:42:25Z                 |
+----------------------+--------------------------------------+

this is related an bug which I reported, https://launchpad.net/bugs/1371406

-- 
Thanks,
Eli (Li Yong) Qiao



Re: [openstack-dev] [Ceilometer] Adding Dina Belova to ceilometer-core

2014-09-19 Thread Dina Belova
Thank you folks, it's a big pleasure and responsibility, and I'm really
happy to join you :)

Thanks!
Dina

On Fri, Sep 19, 2014 at 12:54 PM, Eoghan Glynn egl...@redhat.com wrote:



  Hi,
 
  Dina has been doing a great work and has been very helpful during the
  Juno cycle and her help is very valuable. She's been doing a lot of
  reviews and has been very active in our community.
 
  I'd like to propose that we add Dina Belova to the ceilometer-core
  group, as I'm convinced it'll help the project.
 
  Please, dear ceilometer-core members, reply with your votes!

 With seven yeas and zero nays, I think we can call a result in this
 vote.

 Welcome to the ceilometer-core team Dina!

 Cheers,
 Eoghan





-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Flavio Percoco
On 09/19/2014 11:00 AM, Thierry Carrez wrote:
 Joe Gordon wrote:
 On Thu, Sep 18, 2014 at 9:02 AM, Devananda van der Veen
  devananda@gmail.com wrote:
 - guaranteed message order
 - not distributing work across a configurable number of back ends

 These are scale-limiting design choices which are reflected in the
 API's characteristics.

 I agree with Clint and Devananda
 
 The underlying question being... can Zaqar evolve to ultimately reach
 the massive scale use case Joe, Clint and Devananda want it to reach, or
 are those design choices so deeply rooted in the code and architecture
 that Zaqar won't naturally mutate to support that use case.
 
 The Zaqar team has shown great willingness to adapt in order to support
 more use cases, but I guess there may be architectural design choices
 that would just mean starting over ?


Zaqar has scaling capabilities that go beyond depending on a single
storage cluster. As I mentioned in my previous email, the support for
storage pools allows the operator to scale out the storage layer and
balance the load across them.

There's always space for improvement and I definitely wouldn't go that
far to say there's a need to start over.

-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [Heat] naming of provider template for docs

2014-09-19 Thread Thomas Spatzier
 From: Mike Spreitzer mspre...@us.ibm.com
  To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: 19/09/2014 07:15
 Subject: Re: [openstack-dev] [Heat] naming of provider template for docs

 Angus Salkeld asalk...@mirantis.com wrote on 09/18/2014 09:33:56 PM:

  Hi

  I am trying to add some docs to openstack-manuals hot_guide about
  using provider templates : https://review.openstack.org/#/c/121741/

  Mike has suggested we use a different term, he thinks provider is
  confusing.
  I agree that at the minimum, it is not very descriptive.

  Mike has suggested nested stack, I personally think this means
 something a
  bit more general to many of us (it includes the concept of aws
 stacks) and may
  I suggest template resource - note this is even the class name for
  this exact functionality.
 
  Thoughts?

  Option 1) stay as is provider templates
  Option 2) nested stack
  Option 3) template resource

Out of those 3 I like #3 the most, even though it's not perfect, as Mike
discussed below.


 Thanks for rising to the documentation challenge and trying to get
 good terminology.

 I think your intent is to describe a category of resources, so your
 option 3 is superior to option 1 --- the thing being described is
 not a template, it is a resource (made from a template).

 I think

 Option 4) custom resource

That one sounds too generic to me, since custom python-based resource
plugins are also custom resources.


 would be even better.  My problem with template resource is that,
 to someone who does not already know what it means, this looks like
 it might be a kind of resource that is a template (e.g., for
 consumption by some other resource that does something with a
 template), rather than itself being something made from a template.
 If you want to follow this direction to something perfectly clear,
 you might try templated resource (which is a little better) or
 template-based resource (which I think is pretty clear but a bit
 wordy) --- but an AWS::CloudFormation::Stack is also based on a
 template.  I think that if you try for a name that really says all
 of the critical parts of the idea, you will get something that is
 too wordy and/or awkward.  It is true that custom resource begs
 the question of how the user accomplishes her customization, but at
 least now we have the reader asking the right question instead of
 being misled.

I think template-based resource really captures the concept best. And it
is not too wordy IMO.
If it helps to explain the concept intuitively, I would be in favor of it.

Regards,
Thomas


 I agree that nested stack is a more general concept.  It describes
 the net effect, which the things we are naming have in common with
 AWS::CloudFormation::Stack.  I think it would make sense for our
 documentation to say something like both an
 AWS::CloudFormation::Stack and a custom resource are ways to specify
 a nested stack.

 Thanks,
 Mike




Re: [openstack-dev] Oslo final releases ready

2014-09-19 Thread Andrey Kurilin
Here you are!:)
https://review.openstack.org/#/c/122667

On Fri, Sep 19, 2014 at 9:02 AM, Michael Still mi...@stillhq.com wrote:

 I would like to do a python-novaclient release, but this requirements
 commit hasn't yet turned into a requirements proposal for novaclient
 (that I can find). Can someone poke that for me?

 Michael

 On Fri, Sep 19, 2014 at 12:04 AM, Doug Hellmann d...@doughellmann.com
 wrote:
  All of the final releases for the Oslo libraries for the Juno cycle are
 available on PyPI. I’m working on a couple of patches to the global
 requirements list to update the baseline in the applications. In all cases,
 the final release is a second tag on a previously released version.
 
  - oslo.config - 1.4.0 (same as 1.4.0.0a5)
  - oslo.db - 1.0.0 (same as 0.5.0)
  - oslo.i18n - 1.0.0 (same as 0.4.0)
  - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
  - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
  - oslo.serialization - 1.0.0 (same as 0.3.0)
  - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
  - oslotest - 1.1.0 (same as 1.1.0.0a2)
  - oslo.utils - 1.0.0 (same as 0.3.0)
  - cliff - 1.7.0 (previously tagged, so not a new release)
  - stevedore - 1.0.0 (same as 1.0.0.0a2)
 
  Congratulations and *Thank You* to the Oslo team for doing an amazing
 job with graduations this cycle!
 
  Doug
 
 



 --
 Rackspace Australia





-- 
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [Heat][Zaqar] Integration plan moving forward

2014-09-19 Thread Flavio Percoco
On 09/18/2014 11:51 AM, Angus Salkeld wrote:
 
  On 18/09/2014 7:11 PM, Flavio Percoco fla...@redhat.com wrote:

 Greetings,

 If I recall correctly, Heat was planning to adopt Zaqar regardless of
 the result of the graduation attempt (please correct me if I'm wrong).
 Based on this assumption, I'd like to start working on a plan forward to
 make this integration happen.

 So far, these are the use cases I've collected from past discussions:

 * Notify  heat user before an action is taken, and after - Heat may want
 to wait  for a response before proceeding - notifications not
 necessarily needed  and signed read-only queues might help, but not
 necessary
 * For integrating with user's tools
 * Monitoring
 * Control surface
 * Config management tools
 * Does not require notifications and/or read-only/signed queue
 endpoints
 *[These may be helpful, but were not brought up in the discussion]
 * Subscribe to an aggregate feed of interesting events from other
 open-stack components (such as Nova)
 * Heat is often deployed in a different place than other
 components and doesn't have access to the AMQP bus
 * Large  deployments consist of multiple AMQP brokers, and there
 doesn't seem to  be a nice way to aggregate all those events [need to
 confirm]
 * Push metadata updates to os-collect-config agent running in
 servers, instead of having them poll Heat


 Few questions that I think we should start from:

 - Does the above list cover Heat's needs?
 - Which of the use cases listed above should be addressed first?
 
 IMHO it would be great to simply replace the event store we have
 currently, so that the user can get a stream of progress messages during
 the deployment.

Could you point me to the right piece of code and/or documentation so I
can understand better what it does and where you want it to go?

Thanks for the feedback,
Flavio


-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [nova] are we going to remove the novaclient v3 shell or what?

2014-09-19 Thread Day, Phil

 
  DevStack doesn't register v2.1 endpoint to keytone now, but we can use
  it with calling it directly.
  It is true that it is difficult to use v2.1 API now and we can check
  its behavior via v3 API instead.
 
 I posted a patch[1] for registering v2.1 endpoint to keystone, and I confirmed
 --service-type option of current nova command works for it.

Ah - I'd misunderstood where we'd got to with the v2.1 endpoint, thanks for 
putting me straight.

So with this in place then yes I agree we could stop fixing the v3 client.   

Since it's actually broken even for operations like boot, do we merge in the 
changes I pushed this week so it can still do basic functions, or just go 
straight to removing v3 from the client?
 
Phil
 

 



Re: [openstack-dev] [nova] Expand resource name allowed characters

2014-09-19 Thread Sean Dague
On 09/18/2014 08:49 PM, Clint Byrum wrote:
 Excerpts from Christopher Yeoh's message of 2014-09-18 16:57:12 -0700:
 On Thu, 18 Sep 2014 12:12:28 -0400
 Sean Dague s...@dague.net wrote:
  When we can return the json-schema to the user in the future, can we say
  that means whether the API accepts utf8 or utf8mb4 is discoverable? If it
  is discoverable, then we needn't limit anything in our python code.

 Honestly, we should accept utf8 (no weird mysqlism not quite utf8). We
 should make the default scheme for our dbs support that on names (but
 only for the name columns). The failure of a backend to do utf8 for
 real should return an error to the user. Let's not make this more
 complicated than it needs to be.

  I agree that discoverability for this is not the way to go - I think it's
  too complicated for end users. I don't know enough about mysql to know
  if utf8mb4 is going to be a performance issue but if it's not then we
  should just support utf-8 properly. 

  We can catch the db errors. However, whilst converting db
  errors causing 500s is fairly straightforward, when an error occurs that
  deep in Nova it also means a lot of potential unwinding work in the db
  and compute layers which is complicated and error prone. So I'd prefer
  to avoid the situation with input validation in the first place. 
 
 Just to add a reference into the discussion:
 
 http://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html
 
 It does have the same limitation of making fixed width keys and CHAR()
 columns. It goes from 3 bytes per CHAR position, to 4, so it should not
 be a database wide default, but something we use sparingly.
 
 Note that the right answer for things that are not utf-8 (like UUID's)
 is not to set a charset of latin1, but use BINARY/VARBINARY. Last
 time I tried I had a difficult time coercing SQLAlchemy to model the
 difference.. but maybe I just didn't look in the right part of the manual.

Agreed, honestly if we could get the UUIDs to be actually BINARY in the
database I suspect it would have a pretty substantial performance
increase based on past projects that did the same transition. The join
time goes way down... and at least in Nova we do a ton of joins.
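
For what it's worth, one way to model that distinction in SQLAlchemy is a
TypeDecorator over BINARY(16); this is only a sketch of the general idea, not
Nova's actual schema:

    import uuid

    from sqlalchemy.types import BINARY, TypeDecorator

    class BinaryUUID(TypeDecorator):
        """Store UUID strings as 16-byte BINARY values."""

        impl = BINARY(16)

        def process_bind_param(self, value, dialect):
            # Python -> database: pack the canonical string into raw bytes.
            return None if value is None else uuid.UUID(value).bytes

        def process_result_value(self, value, dialect):
            # Database -> Python: unpack back into the canonical string form.
            return None if value is None else str(uuid.UUID(bytes=value))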

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Thierry Carrez
Monty Taylor wrote:
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.
 
 http://inaugust.com/post/108

Hey Monty,

As you can imagine, I read that post with great attention. I generally
like the concept of a tightly integrated, limited-by-design layer #1
(I'd personally call it Ring 0) and a large collection of OpenStack
things gravitating around it. That would at least solve the attraction
of the integrated release, suppress the need for incubation, foster
competition/innovation within our community, and generally address the
problem of community scaling. There are a few details on the
consequences though, and in those as always the devil lurks.

## The Technical Committee

The Technical Committee is defined in the OpenStack bylaws, and is the
representation of the contributors to the project. Teams work on code
repositories, and at some point ask their work to be recognized as part
of OpenStack. In doing so, they place their work under the oversight
of the Technical Committee. In return, team members get to participate
in electing the technical committee members (they become ATC). It's a
balanced system, where both parties need to agree: the TC can't force
itself as overseer of a random project, and a random project can't just
decide by itself it is OpenStack.

I don't see your proposal breaking that balanced system, but it changes
its dynamics a bit. The big tent would contain a lot more members. And
while the TC would arguably bring a significant share of its attention
to Ring 0, its voters constituency would mostly consist of developers
who do not participate in Ring 0 development. I don't really see it as
changing dramatically the membership of the TC, but it's a consequence
worth mentioning.

## Programs

Programs were created relatively recently as a way to describe which
teams are in OpenStack vs. which ones aren't. They directly tie into
the ATC system: if you contribute to code repositories under a blessed
program, then you're an ATC, you vote in TC elections and the TC has
some oversight over your code repositories. Previously, this was granted
at a code repository level, but that failed to give flexibility for
teams to organize their code in the most convenient manner for them. So
we started to bless teams rather than specific code repositories.

Now, that didn't work out so well. Some programs were a 'theme', like
Infrastructure, or Docs. For those, competing efforts do not really make
sense: there can only be one, and competition should happen inside those
efforts rather than outside. Some programs were a 'team', like
Orchestration/Heat or Deployment/TripleO. And that's where the model
started to break: some new orchestration things need space, but the
current Heat team is not really interested in maintaining them. What's
the point of being under the same program then ? And TripleO is not the
only way to deploy OpenStack, but its mere existence (and name)
prevented other flowers from blooming in our community.

You don't talk much about programs in your proposal. In particular, you
only mention layer 1, Cloud Native applications, User Interface
applications, and Operator applications. So I'm unsure of where, if
anywhere, would Infrastructure or Docs repositories live.

Here is how I see it could work. We could keep 'theme' programs (think
Infra, Release cycle management, Docs, QA) with their current structure
(collection of code repositories under a single team/PTL). We would get
rid of 'team' programs completely, and just have a registry of
OpenStack code repositories (openstack*/*, big tent). Each of those
could have a specific PTL, or explicitely inherit its PTL from another
code repository. Under that PTL, they could have separate or same core
review team, whatever maps reality and how they want to work (so we
could say that openstack/python-novaclient inherits its PTL from
openstack/nova and doesn't need a specific one). That would allow us to
map anything that may come in. Oslo falls a bit in between, could be
considered a 'theme' program or several repos sharing PTL.

## The release and the development cycle

You touch briefly on the consequences of your model for the common
release and our development cycle. Obviously we would release the ring
0 projects in an integrated manner on a time-based schedule.

For the other projects, we have a choice:

- the ring 0 model (development milestones, coordinated final release)
- the Swift model (release as-needed, coordinated final release)
- the Oslo model (release as-needed, try to have one final one before
end of cycle)
- the Client libraries model (release as-needed)

If possible, I would like to avoid the Swift model, which is the most
costly from a release management standpoint. All projects following the
ring 0 model are easy to keep track of, using common freezes etc. So
it's easy to make sure they will be ready in time for the coordinated

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Thierry Carrez
Vishvananda Ishaya wrote:
 Great writeup. I think there are some great concrete suggestions here.
 
 A couple more:
 
 1. I think we need a better name for Layer #1 that actually represents what 
 the goal of it is: Infrastructure Services?
 2. We need to be be open to having other Layer #1s within the community. We 
 should allow for similar collaborations and group focus to grow up as well. 
 Storage Services? Platform Services? Computation Services?

I think that would nullify most of the benefits of Monty's proposal. If
we keep on blessing themes or special groups, we'll soon be back at
step 0, with projects banging on the TC door to become special, and
companies not allocating resources to anything that's not special.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [metrics] Another issue when analyzing SSH Gerrit (as young as August 2014)

2014-09-19 Thread Daniel P. Berrange
On Fri, Sep 19, 2014 at 12:30:50PM +0200, Daniel Izquierdo wrote:
 Hi!
 
 After having a more in depth review of data coming from Gerrit, some other
 issues appear when using the SSH Gerrit API.
 
 Some of the dates when reviewing the code (Code-Review, Verified, etc) seem
 to be inconsistent with the data found in the comments section.
 
 Example: issue https://review.openstack.org/#/c/98688/  (Heat project).
 
 Let's take the PatchSet 44.
 
 $ ssh -p 29418 review.openstack.org gerrit query 98688 --format=JSON
 --files --comments --patch-sets --all-approvals --commit-message
 --submit-records
 
 After parsing a bit the JSON file...
 
  Revision (patchset) 44 (cf7819ccdbf8999fc963e9c8d400c932ad780674)
    Date: 2014-08-14 03:18:48
      Code-Review: 1    Date: 2014-07-23 07:01:31
      Code-Review: 1    Date: 2014-07-31 11:16:21
      Code-Review: 1    Date: 2014-08-01 10:53:28
      Code-Review: 2    Date: 2014-08-12 20:15:14
      Code-Review: -1   Date: 2014-08-14 10:23:55
      Workflow: -1      Date: 2014-08-18 00:36:32
      Verified: 1       Date: 2014-09-01 12:39:05
 
 
 
  As you can see, the dates of some of the reviews are older than the upload
  date of patchset 44 (Aug 14 3:18).

It looks like Patchset 44 is a trivial update of previous patchsets. When
gerrit detects this, the votes from previous patchsets are copied across
to the new patchset, and the time of those votes is preserved, since they
are not actual new votes. So in that way, you can get votes which are
older than the patchset itself.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Federation and Policy

2014-09-19 Thread David Chadwick


On 18/09/2014 22:14, Doug Hellmann wrote:
 
 On Sep 18, 2014, at 4:34 PM, David Chadwick d.w.chadw...@kent.ac.uk
 wrote:
 
 
 
 On 18/09/2014 21:04, Doug Hellmann wrote:
 
 On Sep 18, 2014, at 12:36 PM, David Chadwick 
 d.w.chadw...@kent.ac.uk wrote:
 
 Our recent work on federation suggests we need an improvement
 to the way the policy engine works. My understanding is that
 most functions are protected by the policy engine, but some are
 not. The latter functions are publicly accessible. But there is
 no way in the policy engine to specify public access to a
 function and there ought to be. This will allow an
 administrator to configure the policy for a function to range
 from very lax (publicly accessible) to very strict (admin
 only). A policy of "" means that any authenticated user can
 access the function. But there is no way in the policy to
 specify that an unauthenticated user (i.e. public) has access
 to a function.
 
 We have already identified one function (get trusted IdPs 
 identity:list_identity_providers) that needs to be publicly 
 accessible in order for users to choose which IdP to use for 
 federated login. However some organisations may not wish to
 make this API call publicly accessible, whilst others may wish
 to restrict it to Horizon only etc. This indicates that
 the policy needs to be set by the administrator, and not by
 changes to the code (i.e. to either call the policy engine or
 not, or to have two different API calls).
 
 I don’t know what list_identity_providers does.
 
 it lists the IDPs that Keystone trusts to authenticate users
 
 Can you give a little more detail about why some providers would
 want to make it not public
 
 I am not convinced that many cloud services will want to keep this
 list secret. Today if you do federated login, the public web page
 of the service provider typically lists the logos or names of its
 trusted IDPs (usually Facebook and Google). Also all academic
 federations publish their full lists of IdPs. But it has been
 suggested that some commercial cloud providers may not wish to
 publicise this list and instead require the end users to know which
 IDP they are going to use for federated login. In which case the
 list should not be public.
 
 
 if we plan to make it public by default? If we think there’s a 
 security issue, shouldn’t we just protect it?
 
 
 It's more a commercial-in-confidence issue (I don't want the world to
 know who I have agreements with) rather than a security issue,
 since the IDPs are typically already well known and already protect
 themselves against attacks from hackers on the Internet.
 
 OK. The weak “someone might want to” requirement aside, and again
 showing my ignorance of implementation details, do we truly have to
 add a new feature to disable the policy check? Is there no way to
 have an “always allow” policy using the current syntax?

You tell me. If there is, then problem solved. If not, then my request
still stands
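
For concreteness, the sort of thing I have in mind looks roughly like this
(policy.json-style rules written out as a Python dict; the "public" value
in the commented-out entry is the hypothetical new keyword, everything
else is existing syntax):

    rules = {
        # admin only
        "identity:create_identity_provider": "rule:admin_required",
        "identity:update_identity_provider": "rule:admin_required",
        # any *authenticated* user -- the laxest setting the current syntax allows
        "identity:list_identity_providers": "",
        # hypothetical: accessible to unauthenticated (public) callers as well
        # "identity:list_identity_providers": "public",
    }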

regards

David

 
 Doug
 
 
 regards
 
 David
 
 
 If we can invent some policy syntax that indicates public
 access, e.g. reserved keyword of public, then Keystone can
 always call the policy file for every function and there would
 be no need to differentiate between protected APIs and
 non-protected APIs as all would be protected to a greater or
 lesser extent according to the administrator's policy.
 
 Comments please
 
 It seems reasonable to have a way to mark a function as fully
 public, if we expect to really have those kinds of functions.
 
 Doug
 
 
 regards
 
 David
 
 
 
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Eoghan Glynn


 Hi All,
 
 My understanding of Zaqar is that it's like SQS. SQS uses distributed queues,
 which have a few unusual properties [0]:
 Message Order
 
 
 Amazon SQS makes a best effort to preserve order in messages, but due to the
 distributed nature of the queue, we cannot guarantee you will receive
 messages in the exact order you sent them. If your system requires that
 order be preserved, we recommend you place sequencing information in each
 message so you can reorder the messages upon receipt.
 At-Least-Once Delivery
 
 
 Amazon SQS stores copies of your messages on multiple servers for redundancy
 and high availability. On rare occasions, one of the servers storing a copy
 of a message might be unavailable when you receive or delete the message. If
 that occurs, the copy of the message will not be deleted on that unavailable
 server, and you might get that message copy again when you receive messages.
 Because of this, you must design your application to be idempotent (i.e., it
 must not be adversely affected if it processes the same message more than
 once).
 Message Sample
 
 
 The behavior of retrieving messages from the queue depends whether you are
 using short (standard) polling, the default behavior, or long polling. For
 more information about long polling, see Amazon SQS Long Polling .
 
 With short polling, when you retrieve messages from the queue, Amazon SQS
 samples a subset of the servers (based on a weighted random distribution)
 and returns messages from just those servers. This means that a particular
 receive request might not return all your messages. Or, if you have a small
 number of messages in your queue (less than 1000), it means a particular
 request might not return any of your messages, whereas a subsequent request
 will. If you keep retrieving from your queues, Amazon SQS will sample all of
 the servers, and you will receive all of your messages.
 
 The following figure shows short polling behavior of messages being returned
 after one of your system components makes a receive request. Amazon SQS
 samples several of the servers (in gray) and returns the messages from those
 servers (Message A, C, D, and B). Message E is not returned to this
 particular request, but it would be returned to a subsequent request.
 
 
 
 
 
 
 
 Presumably SQS has these properties because it makes the system scalable, if
 so does Zaqar have the same properties (not just making these same
 guarantees in the API, but actually having these properties in the
 backends)? And if not, why? I looked on the wiki [1] for information on
 this, but couldn't find anything.

The premise of this thread is flawed I think.

It seems to be predicated on a direct quote from the public
documentation of a closed-source system justifying some
assumptions about the internal architecture and design goals
of that closed-source system.

It then proceeds to hold zaqar to account for not making
the same choices as that closed-source system.

This puts the zaqar folks in a no-win situation, as it's hard
to refute such arguments when they have no visibility over
the innards of that closed-source system.

Sure, the assumption may well be correct that the designers
of SQS made the choice to expose applications to out-of-order
messages as this was the only practical way of achieving their
scalability goals.

But since the code isn't on github and the design discussions
aren't publicly archived, we have no way of validating that.

Would it be more reasonable to compare against a cloud-scale
messaging system that folks may have more direct knowledge
of?

For example, is HP Cloud Messaging[1] rolled out in full
production by now?

Is it still cloning the original Marconi API, or has it kept
up with the evolution of the API? Has the nature of this API
been seen as the root cause of any scalability issues?

Cheers,
Eoghan

[1] 
http://www.openstack.org/blog/2013/05/an-introductory-tour-of-openstack-cloud-messaging-as-a-service

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core team

2014-09-19 Thread Vipin Balachandran
+1

From: Vui Chiap Lam [mailto:vuich...@vmware.com]
Sent: Tuesday, September 16, 2014 10:52 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core team

+1. Rado has been very instrumental in helping with reviews and making 
significant fixes to oslo.vmware. He also contributed greatly to the effort to 
integrate the library with other projects.

Vui


From: Arnaud Legendre alegen...@vmware.com
Sent: Monday, September 15, 2014 10:18 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core team

+1

On Sep 15, 2014, at 9:37 AM, Gary Kotton gkot...@vmware.com wrote:


Hi,
I would like to propose Radoslav to be a core team member. Over the course of 
the J cycle he has been great with the reviews, bug fixes and updates to the 
project.
Can the other core team members please update with your votes if you agree or 
not.
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][horizon] Dependency freeze exceptions

2014-09-19 Thread Thierry Carrez
Thierry Carrez wrote:
 Radomir Dopieralski wrote:
 I would like to request dependency freeze exceptions for the following
 patches for Horizon:

 https://review.openstack.org/#/c/121509/
 https://review.openstack.org/#/c/122410/

 and

 https://review.openstack.org/#/c/113184/

 They are all fixing high priority bugs. The first two are related to
 bugs with parsing Bootstrap 3.2.0 scss files that have been fixed
 upstream. The third one makes the life of packagers a little easier,
 by using versions that both Debian and Fedora, and possibly many
 other distros, can ship.

 I am not sure what the formal process for such an exception should be,
 so I'm writing here. Please let me know if I should have done it
 differently.
 
 No, that's the right way of doing it.
 
 These look like valid requests, but this is mostly affecting the
 distributions, so let's wait for the packagers to chime in on how much
 of a disruption that would cause to them at this point.

https://review.openstack.org/#/c/122410/ is fixing significant gate
issues, so I approved it without waiting for more feedback.

Race is still on for the other two.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cells] Flavors and cells

2014-09-19 Thread Dheeraj Gupta
I was investigating this issue and this is what I found:

In the boot request, the compute layer passes the instance_type_id to
the driver layer (driver.spawn) and the driver layer then extracts
actual flavor information. However, different drivers extract/use this
information differently -
- libvirt - nova.virt.libvirt.driver.ComputeDriver._get_guest_config
uses the instance_type_id field of instance object and calls the
Flavor.get_by_id (which checks the DB via the conductor). It works on
the objects.flavor.Flavor object
- Ironic - Does the same as libvirt
- Hyper-V - nova.virt.hyperv.vmops.VMOps.create_instance - This uses
the information contained within the instance table itself (like
memory_mb, vcpus), which is passed on to
vmutils.VMUtils.create_vm. So it uses the passed instance dict.
- XenAPI - It uses the flavors.extract_flavor to extract flavor
related information (as a dict) from the instance's system_metadata.
It uses the flavor dict.
Because libvirt and ironic query the database, it causes a
FlavorNotFound exception on the child cell.

In a resize request, the compute API itself handles the DB access to
gather the instance_type (flavor) information needed. The
instance_type is then passed down to the driver. This means all the
drivers work on the same kind of object (In this case a flavor dict).
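
To make the two code paths concrete, they boil down to something like this
(a simplified sketch of the modules named above, not the exact code;
"context" and "instance" come from the surrounding compute/driver code):

    # Assumes a Juno-era nova tree.
    from nova.compute import flavors
    from nova.objects import flavor as flavor_obj

    def lookup_flavor_via_db(context, instance):
        # What libvirt/ironic effectively do at boot: a conductor/DB round
        # trip, which is what raises FlavorNotFound in a child cell.
        return flavor_obj.Flavor.get_by_id(context, instance.instance_type_id)

    def lookup_flavor_from_instance(instance):
        # What the xenapi driver (and the resize path in general) do: read
        # the flavor dict carried in the instance's system_metadata, so no
        # extra DB lookup is needed in the child cell.
        return flavors.extract_flavor(instance)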

I think we should be consistent in the way these similar cases are handled.

IMO the way resize is handled is better for a cell setup.

I'd like to hear your opinion about using the same approach for boot.

Already in the existing code, the method which initiates action on a
boot request in nova
(nova.api.openstack.compute.servers.Controller.create) queries the
local DB to extract flavor info as a dictionary and passes the
returned instance_type dict to the compute API in use. The compute
API makes use of the instance_type dict in `_provision_instances`
which returns a list of Instance objects but not in any subsequent
operations. All further compute API operations (including
build_instances) work on the Instance objects. By passing the
instance_type dict to subsequent compute API methods, we can avoid a
second redundant DB lookup. Since in a cell setup that is the step
that generates the FlavorNotFound exception, we can remove the bug.

Regards,
Dheeraj

On Wed, Sep 10, 2014 at 2:06 PM, Dheeraj Gupta dheeraj.gup...@gmail.com wrote:
 Hi,
 we are working on bug-1211011 (Flavors created in parent cell are not
 propagated to the child cell)..
 we need a few pointers from the community (and people working with cells).

 The problem is that if the instance_type tables are not in sync in
 all cells we can spawn a
 different flavor than the one selected by the user.
 Example: flavor_A is selected and in the child cell flavor_B is spawned.

 This is because what is checked at spawn time is the
 instance_type_id in the child cell database,
 and there is not guarantee that this corresponds to the same flavor in
 the parent cell.

 A possible solution to this problem can be propagating the flavors to
 all child cells when it is created/deleted/changed in the parent cell.
 But since instance_type_id is an autoincrement field, it will be
 difficult to sync it. Also there may be a problem when applying this
 to an existing cell setup.

 We believe that the flavors should only live in the parent cell (API cell).
 In this case every time that this information is needed the child cell
 needs to query the information
 from the parents.

 The problem is that at the moment the compute API in the child cells is
 not aware of cells.
 Should this be reconsidered?

 Also, having this mechanism to query the parent cell will allow to
 more easily add support for security groups and aggregates.

 Thoughts?

 Regards,
 Dheeraj

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread John Griffith
On Fri, Sep 19, 2014 at 4:33 AM, Thierry Carrez thie...@openstack.org
wrote:

 Vishvananda Ishaya wrote:
  Great writeup. I think there are some great concrete suggestions here.
 
  A couple more:
 
  1. I think we need a better name for Layer #1 that actually represents
 what the goal of it is: Infrastructure Services?
  2. We need to be open to having other Layer #1s within the community.
 We should allow for similar collaborations and group focus to grow up as
 well. Storage Services? Platform Services? Computation Services?

 I think that would nullify most of the benefits of Monty's proposal. If
 we keep on blessing themes or special groups, we'll soon be back at
 step 0, with projects banging on the TC door to become special, and
 companies not allocating resources to anything that's not special.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Great stuff. I'm mixed on point 2 raised by Vish, but honestly I think that's
something that could evolve over time; I looked at that differently: Cinder,
Swift and some day Manila live under a Storage Services umbrella, and
ideally at some point there's some convergence there.

Anyway, I don't want to start a rat-hole on that, it's kind of irrelevant
right now.  Bottom line is I think the direction and initial ideas in
Monty's post are what a lot of us have been thinking about and looking for.
 I'm in!!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Clint Byrum
Excerpts from Eoghan Glynn's message of 2014-09-19 04:23:55 -0700:
 
  Hi All,
  
  My understanding of Zaqar is that it's like SQS. SQS uses distributed 
  queues,
  which have a few unusual properties [0]:
  Message Order
  
  
  Amazon SQS makes a best effort to preserve order in messages, but due to the
  distributed nature of the queue, we cannot guarantee you will receive
  messages in the exact order you sent them. If your system requires that
  order be preserved, we recommend you place sequencing information in each
  message so you can reorder the messages upon receipt.
  At-Least-Once Delivery
  
  
  Amazon SQS stores copies of your messages on multiple servers for redundancy
  and high availability. On rare occasions, one of the servers storing a copy
  of a message might be unavailable when you receive or delete the message. If
  that occurs, the copy of the message will not be deleted on that unavailable
  server, and you might get that message copy again when you receive messages.
  Because of this, you must design your application to be idempotent (i.e., it
  must not be adversely affected if it processes the same message more than
  once).
  Message Sample
  
  
  The behavior of retrieving messages from the queue depends whether you are
  using short (standard) polling, the default behavior, or long polling. For
  more information about long polling, see Amazon SQS Long Polling .
  
  With short polling, when you retrieve messages from the queue, Amazon SQS
  samples a subset of the servers (based on a weighted random distribution)
  and returns messages from just those servers. This means that a particular
  receive request might not return all your messages. Or, if you have a small
  number of messages in your queue (less than 1000), it means a particular
  request might not return any of your messages, whereas a subsequent request
  will. If you keep retrieving from your queues, Amazon SQS will sample all of
  the servers, and you will receive all of your messages.
  
  The following figure shows short polling behavior of messages being returned
  after one of your system components makes a receive request. Amazon SQS
  samples several of the servers (in gray) and returns the messages from those
  servers (Message A, C, D, and B). Message E is not returned to this
  particular request, but it would be returned to a subsequent request.
  
  
  
  
  
  
  
  Presumably SQS has these properties because it makes the system scalable, if
  so does Zaqar have the same properties (not just making these same
  guarantees in the API, but actually having these properties in the
  backends)? And if not, why? I looked on the wiki [1] for information on
  this, but couldn't find anything.
 
 The premise of this thread is flawed I think.
 
 It seems to be predicated on a direct quote from the public
 documentation of a closed-source system justifying some
 assumptions about the internal architecture and design goals
 of that closed-source system.
 
 It then proceeds to hold zaqar to account for not making
 the same choices as that closed-source system.
 

I don't think we want Zaqar to make the same choices. OpenStack's
constraints are different from AWS's.

I want to highlight that our expectations are for the API to support
deploying at scale. SQS _clearly_ started with a point of extreme scale
for the deployer, and thus is a good example of an API that is limited
enough to scale like that.

What has always been the concern is that Zaqar would make it extremely
complicated and/or costly to get to that level.

 This puts the zaqar folks in a no-win situation, as it's hard
 to refute such arguments when they have no visibility over
 the innards of that closed-source system.
 

Nobody expects to know the insides. But the outsides, the parts that
are public, are brilliant because they are _limited_, and yet they still
support many many use cases.

 Sure, the assumption may well be correct that the designers
 of SQS made the choice to expose applications to out-of-order
 messages as this was the only practical way of achieving their
 scalability goals.
 
 But since the code isn't on github and the design discussions
 aren't publicly archived, we have no way of validating that.
 

We don't need to see the code. Not requiring ordering makes the whole
problem easier to reason about. You don't need explicit pools anymore.
Just throw messages wherever, and make sure that everywhere gets
polled on a reasonable enough frequency. This is the kind of thing
operations loves. No global state means no split brain to avoid, no
synchronization. Does it solve all problems? no. But it solves a single
one, REALLY well.

Frankly I don't understand why there would be this argument to hold on
to so many use cases and so much API surface area. Zaqar's life gets
easier without ordering guarantees or message browsing. And it still
retains _many_ of its potential users.
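
To illustrate what the SQS guidance quoted above actually asks of an
application -- and how little code it takes -- here is a minimal
client-side sketch (the message dicts with "id", "seq" and "body" keys are
hypothetical, not any particular API):

    seen_ids = set()   # dedupe: needed because delivery is at-least-once
    buffered = {}      # reorder: keyed by an app-supplied sequence number
    next_seq = 0

    def process(body):
        # the application's own (idempotent) work; just a stub here
        print(body)

    def handle_batch(messages):
        global next_seq
        for msg in messages:
            if msg["id"] in seen_ids:      # duplicate redelivery: drop it
                continue
            seen_ids.add(msg["id"])
            buffered[msg["seq"]] = msg["body"]
        while next_seq in buffered:        # release work in sender order
            process(buffered.pop(next_seq))
            next_seq += 1

    # out-of-order batch with a duplicate; prints "first" then "second"
    handle_batch([{"id": "m2", "seq": 1, "body": "second"},
                  {"id": "m1", "seq": 0, "body": "first"},
                  {"id": "m1", "seq": 0, "body": "first"}])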

___

[openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-19 Thread Sean Dague
I've spent the better part of the last 2 weeks in the Nova bug tracker
to try to turn it into something that doesn't cause people to run away
screaming. I don't remember exactly where we started at open bug count 2
weeks ago (it was north of 1400, with > 200 bugs in new, but it might
have been north of 1600), but as of this email we're at < 1000 open bugs
(I'm counting Fix Committed as closed, even though LP does not), and ~0
new bugs (depending on the time of the day).

== Philosophy in Triaging ==

I'm going to lay out the philosophy of triaging I've had, because this
may also set the tone going forward.

A bug tracker is a tool to help us make a better release. It does not
exist for its own good, it exists to help. Which means when evaluating
what stays in and what leaves we need to evaluate if any particular
artifact will help us make a better release. But also more importantly
realize that there is a cost for carrying every artifact in the tracker.
Resolving duplicates gets non-linearly harder as the number of artifacts
goes up. Triaging gets non-linearly harder as the number of artifacts goes up.

With this I was being somewhat pragmatic about closing bugs. An old bug
that is just a stacktrace is typically not useful. An old bug that is a
vague sentence that we should refactor a particular module (with no
specifics on the details) is not useful. A bug reported against a very
old version of OpenStack where the code has changed a lot in the
relevant area, and there aren't responses from the author, is not
useful. Not useful bugs just add debt, and we should get rid of them.
That makes the chance of pulling a random bug off the tracker something
that you could actually look at fixing, instead of mostly just stalling out.

So I closed a lot of stuff as Invalid / Opinion that fell into those camps.

== Keeping New Bugs at close to 0 ==

After driving the bugs in the New state down to zero last week, I found
it's actually pretty easy to keep it at 0.

We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
aren't actually a bug, and can be closed immediately. ~30% look like a
bug, but don't have anywhere near enough information in them, and
flipping them to incomplete with questions quickly means we have a real
chance of getting the right info. ~10% are fixable in < 30 minutes worth
of work. And the rest are real bugs, that seem to have enough to dive
into it, and can be triaged into Confirmed, set a priority, and add the
appropriate tags for the area.

But, more importantly, this means we can filter bug quality on the way
in. And we can also encourage bug reporters that are giving us good
stuff, or even easy stuff, as we respond quickly.

Recommendation #1: we adopt a 0 new bugs policy to keep this from
getting away from us in the future.

== Our worst bug reporters are often core reviewers ==

I'm going to pick on Dan Prince here, mostly because I have a recent
concrete example, however in triaging the bug queue much of the core
team is to blame (including myself).

https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
was set incomplete and no response. I'm almost 100% sure it's a dupe of
the multiprocess bug we've been tracking down but it's so terse that you
can't get to the bottom of it.

There were a ton of 2012 nova bugs that were basically post it notes.
Oh, we should refactor this function. Full stop. While those are fine
for personal tracking, their value goes to zero probably 3 months after
they are filed, especially if the reporter stops working on the issue at
hand. Nova has plenty of "wouldn't it be great if we..." ideas. I'm not
convinced using bugs for those is useful unless we go and close them out
aggressively if they stall.

Also, if Nova core can't file a good bug, it's hard to set the example
for others in our community.

Recommendation #2: hey, Nova core, lets be better about filing the kinds
of bugs we want to see! mkay!

Recommendation #3: Let's create a tag for personal work items or
something for these class of TODOs people are leaving themselves that
make them a ton easier to cull later when they stall and no one else has
enough context to pick them up.

== Tags ==

The aggressive tagging that Tracy brought into the project has been
awesome. It definitely helps slice out into better functional areas.
Here is the top of our current official tag list (and bug count):

95 compute
83 libvirt
74 api
68 vmware
67 network
41 db
40 testing
40 volumes
36 ec2
35 icehouse-backport-potential
32 low-hanging-fruit
31 xenserver
25 ironic
23 hyper-v
16 cells
14 scheduler
12 baremetal
9 ceph
9 security
8 oslo
...

So, good stuff. However I think we probably want to take a further step
and attempt to get champions for tags. So that tag owners would ensure
their bug list looks sane, and actually spend some time fixing them.
It's pretty clear, for instance, that the ec2 bugs are just piling up,
and very few fixes coming in. Cells seems like it's in the same camp (a
bunch of recent bugs have been cells related, it looks like a lot more
deployments are trying it).

Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Jeremy Stanley
On 2014-09-18 14:35:10 + (+), Ian Cordasco wrote:
 Except that even OpenStack doesn’t pin requests because of how
 extraordinarily stable our API is.
[...]

FWIW, I nearly capped it a few weeks ago with
https://review.openstack.org/117848 but since the affected projects
were able to rush in changes to their use of the library to work
around the ways it was breaking I ended up abandoning that. Also for
some months we capped requests in our global requirements because of
https://launchpad.net/bugs/1182271 but that was finally lifted about
a year ago with https://review.openstack.org/37461 (so I don't think
it's entirely fair to assert that OpenStack doesn’t pin requests
because...extraordinarily stable).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Enabling vlan trunking on neutron port.

2014-09-19 Thread Parikshit Manur
Hi All,
I have a setup which has a VM on a flat provider network, and I
want to reach a VM on a VLAN provider network. The packets are forwarded up to the
veth pair and are getting dropped by br-int.

Can a neutron port be configured to allow VLAN trunking?

Thanks,
Parikshit Manur
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Gordon Sim

On 09/19/2014 08:53 AM, Flavio Percoco wrote:

On 09/18/2014 09:25 PM, Gordon Sim wrote:

I don't see that the claim mechanism brings any stronger guarantee, it
just offers a competing consumer behaviour where browsing is
non-competing (non-destructive). In both cases you require the client to
be able to remember which messages it had processed in order to ensure
exactly once. The claim reduces the scope of any doubt, but the client
still needs to be able to determine whether it has already processed any
message in the claim already.


The client needs to remember which messages it had processed if it
doesn't delete them (ack) after it has processed them. It's true the
client could also fail after having processed the message which means it
won't be able to ack it.

That said, being able to prevent other consumers to consume a specific
message can bring a stronger guarantee depending on how messages are
processed. I mean, claiming a message guarantees that throughout the
duration of that claim, no other client will be able to consume the
claimed messages, which means it allows messages to be consumed only once.


I think 'exactly once' means different things when used for competing 
consumers and non-competing consumers. For the former it means the 
message is processed by only one consumer, and only once. For the latter 
it means every consumer processes the message exactly once.


Using a claim provides the competing consumer behaviour. To me this is a 
'different' guarantee from non-competing consumer rather than a 
'stronger' one, and it is orthogonal to the reliability of the delivery.
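
To spell out how I'm using the terms, here is a toy in-memory model of the
two read styles (purely illustrative; nothing to do with zaqar's actual
storage or API):

    import time

    class TinyQueue(object):
        def __init__(self):
            self._messages = []   # dicts: {"id", "body", "claimed_until"}

        def post(self, body):
            self._messages.append({"id": len(self._messages),
                                   "body": body,
                                   "claimed_until": 0})

        def browse(self):
            # non-competing: every reader sees every message, nothing is
            # hidden or removed; exactly-once is each reader's own problem
            return list(self._messages)

        def claim(self, ttl):
            # competing: hide currently unclaimed messages from other
            # consumers for ttl seconds; only the claimant should ack them
            now = time.time()
            won = [m for m in self._messages if m["claimed_until"] < now]
            for m in won:
                m["claimed_until"] = now + ttl
            return won

        def delete(self, msg_id):
            # ack after processing; if the consumer dies first, the claim
            # expires and the message becomes claimable (redelivered) again
            self._messages = [m for m in self._messages if m["id"] != msg_id]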


However we only differ on terminology used; I believe we are on the same 
page as far as the semantics go :-)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-19 Thread Sean Dague
On 09/19/2014 09:58 AM, Sylvain Bauza wrote:
snip
 == Keeping New Bugs at close to 0 ==

 After driving the bugs in the New state down to zero last week, I found
 it's actually pretty easy to keep it at 0.

 We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
 aren't actually a bug, and can be closed immediately. ~30% look like a
 bug, but don't have anywhere near enough information in them, and
 flipping them to incomplete with questions quickly means we have a real
 chance of getting the right info. ~10% are fixable in < 30 minutes worth
 of work. And the rest are real bugs, that seem to have enough to dive
 into it, and can be triaged into Confirmed, set a priority, and add the
 appropriate tags for the area.

 But, more importantly, this means we can filter bug quality on the way
 in. And we can also encourage bug reporters that are giving us good
 stuff, or even easy stuff, as we respond quickly.

 Recommendation #1: we adopt a 0 new bugs policy to keep this from
 getting away from us in the future.
 
 Agreed, provided we can review all the new bugs each week.

So I actually don't think this works if it's a weekly thing. Keeping new
bugs at 0 really has to be daily because the response to bug reports
sets up the expected cadence with the reporter. If you flip back new
bugs in < 6 or 8 hrs, there is a decent chance they are still on their
same work shift, and the context is still in their head (or even the
situation is still existing).

Once you pass 24hrs the chance of that goes way down. And,
realistically, I've found that when I open the bug tracker in the
morning and there are 5 bugs, that's totally doable over the first cup
of coffee. Poking the bug tracker a couple more times during the day is
all that's needed to keep it there.

 == Our worst bug reporters are often core reviewers ==

 I'm going to pick on Dan Prince here, mostly because I have a recent
 concrete example, however in triaging the bug queue much of the core
 team is to blame (including myself).

 https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
 was set incomplete and no response. I'm almost 100% sure it's a dupe of
 the multiprocess bug we've been tracking down but it's so terse that you
 can't get to the bottom of it.

 There were a ton of 2012 nova bugs that were basically post it notes.
 Oh, we should refactor this function. Full stop. While those are fine
 for personal tracking, their value goes to zero probably 3 months after
 they are filed, especially if the reporter stops working on the issue at
 hand. Nova has plenty of "wouldn't it be great if we..." ideas. I'm not
 convinced using bugs for those is useful unless we go and close them out
 aggressively if they stall.

 Also, if Nova core can't file a good bug, it's hard to set the example
 for others in our community.

 Recommendation #2: hey, Nova core, lets be better about filing the kinds
 of bugs we want to see! mkay!

 Recommendation #3: Let's create a tag for personal work items or
 something for these class of TODOs people are leaving themselves that
 make them a ton easier to cull later when they stall and no one else has
 enough context to pick them up.
 
 I would propose to set their importance as Wishlist then. I would
 leave the tags for setting which components are impacted.

Maybe. I honestly don't think core team members should file wishlist
bugs at all. That really means feature and means a spec. Or it means
just do it (for refactoring).

 == Tags ==

 The aggressive tagging that Tracy brought into the project has been
 awesome. It definitely helps slice out into better functional areas.
 Here is the top of our current official tag list (and bug count):

 95 compute
 83 libvirt
 74 api
 68 vmware
 67 network
 41 db
 40 testing
 40 volumes
 36 ec2
 35 icehouse-backport-potential
 32 low-hanging-fruit
 31 xenserver
 25 ironic
 23 hyper-v
 16 cells
 14 scheduler
 12 baremetal
 9 ceph
 9 security
 8 oslo
 ...

 So, good stuff. However I think we probably want to take a further step
 and attempt to get champions for tags. So that tag owners would ensure
 their bug list looks sane, and actually spend some time fixing them.
 It's pretty clear, for instance, that the ec2 bugs are just piling up,
 and very few fixes coming in. Cells seems like it's in the same camp (a
 bunch of recent bugs have been cells related, it looks like a lot more
 deployments are trying it).

 Probably the most important thing in tag owners would be cleaning up the
 bugs in the tag. Realizing that 2 bugs were actually the same bug.
 Cleaning up descriptions / titles / etc so that people can move forward
 on them.

 Recommendation #4: create tag champions
 
 +1. That said, some bugs can have more than one tag (for example,
 compute/conductor/scheduler), so it would mean the champions would have
 to discuss among themselves.
 I can volunteer for looking at the scheduler tag.

Sure. But more communication on issues is only a good thing. :)

 
 == Soft 

[openstack-dev] [requirements] [nova] requirements freeze exception for websockify

2014-09-19 Thread Sean Dague
I'd like to request a requirements freeze exception for websockify -
https://review.openstack.org/#/c/122702/

The rationale for this is that websockify version bump fixes a Nova bug
about zombie processes - https://bugs.launchpad.net/nova/+bug/1048703.
It also sets g-r to the value we've been testing against for the entire
last cycle.

I don't believe it has any impacts on other projects, so should be a
very safe change.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-19 Thread Sullivan, Jon Paul
 Thu Sep 18 12:18:49 UTC 2014
 On 2014-09-18 08:06:11 + (+), Sullivan, Jon Paul wrote:
 In my experience if the check results are not fresh enough the
 recheck is automatically run.  I am not on the infra team, so
 without looking up code I am just guessing, but my guess is that
 the workflow score change triggers the check on the presumption
 that it has been approved, not accounting for the recent(ish)
 update that moved wip to the workflow score.

 We turned off that behavior a couple months ago when the merge-check
 pseudo-job was implemented to automatically -1 any open changes with
 merge conflicts each time a new change merges to their target
 branch. This covered the majority of the problems identified by the
 freshness check, but without using any of our worker pool.

 This is not solely about finding reviews.  It is about pruning
 stale reviews.  I think the auto-abandon code was excellent at
 doing this, but alas, it is no more.

 I think it was excellent at arbitrarily abandoning open changes
 which happened to meet a poorly-thought-out set of criteria. I'm
 personally quite glad it broke and we didn't waste time
 reimplementing something similar for new Gerrit versions.

I think that this thread has already clearly stated that core reviewers saw a 
benefit from the auto-abandon code.

I think that the abandoning happening from an automated process is easier to 
accept than if it came from a person, and so less likely to create a poor and 
emotional response.

If your personal opinion was that it wasn't useful to your project, then 
perhaps what you are really saying is that the implementation of it was not 
configurable enough to allow individual projects to tailor it to their needs.

The auto-abandon code also produced other side effects that have already been 
detailed in this thread, such as reminding authors they need to take action 
upon a change.  This is done automatically without the need for core reviewers 
to spend extra time deliberately looking for patches that need a nudge.

So the removal of the auto-abandon, imho, has increased core reviewer workload, 
increased the chance that a good change may get ignored for extended periods of 
time, and has increased the possibility of code committers becoming frustrated 
with core reviewers adding a wip or abandon to their patches, so a decrease in 
productivity all around. :(


 --
 Jeremy Stanley

Thanks,
*: jonpaul.sulli...@hp.com :) Cloud Services - @hpcloud
*: +353 (91) 75 4169

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2.
Registered Number: 361933

The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender.

To any recipient of this message within HP, unless otherwise stated, you should 
consider this message and attachments as HP CONFIDENTIAL.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Ian Cordasco


On 9/19/14, 9:01 AM, Jeremy Stanley fu...@yuggoth.org wrote:

On 2014-09-18 14:35:10 + (+), Ian Cordasco wrote:
 Except that even OpenStack doesn’t pin requests because of how
 extraordinarily stable our API is.
[...]

FWIW, I nearly capped it a few weeks ago with
https://review.openstack.org/117848 but since the affected projects
were able to rush in changes to their use of the library to work
around the ways it was breaking I ended up abandoning that. Also for
some months we capped requests in our global requirements because of
https://launchpad.net/bugs/1182271 but that was finally lifted about
a year ago with https://review.openstack.org/37461 (so I don't think
it's entirely fair to assert that OpenStack doesn’t pin requests
because...extraordinarily stable).
-- 
Jeremy Stanley

A) Not the thread for this discussion.
B) I didn’t say that OpenStack *never* pinned it, I said it didn’t
currently
C) Did you read the whole thread? I mentioned 2.4.0 as an exception
because of ProtocolErrors and the redirect_cache that members of OpenStack
lobbied for.
D) Before someone else replies, I assumed the transition from 0.x - 1.0
was also the other obvious (and not worth mentioning) break in stability
given that since then we’ve had no API changes (with the exception of
2.4.0 not re-wrapping a single urllib3 exception).

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-19 Thread Ben Nemec
On 09/19/2014 08:38 AM, Ken Giusti wrote:
 On Thu, 18 Sep 2014 17:29:27 Eric Windisch wrote:


 That's great feedback, Eric, thank you. I know some of the other projects
 
 +1 - yes, excellent feedback - having just worked on the AMQP 1.0
 driver, I think you've nicely described some of my own experiences.
 
 are moving drivers out of the main core tree, and we could certainly
 consider doing that here as well, if we have teams willing to sign up for
 the work it means doing.

 In addition to the zmq driver, we have a fairly stable rabbit driver, a
 qpid driver whose quality I don?t know , and a new experimental AMQP 1.0
 driver. Are we talking about moving those out, too, or just zmq because we
 were already considering removing it entirely?


 I believe it makes good sense for all drivers, in the long term. However,
 the most immediate benefits would be in offloading any drivers that need
 substantial work or improvements, aka velocity. That would mean the AMQP
 and ZeroMQ drivers.

 
 I'm tentatively in favor of this - 'tentative' because, noob that I am,
 I'm not sure I understand the trade-offs, if any, that moving these
 drivers outside of oslo.messaging would bring.
 
 To be clear: I'm 100% for any change that makes it easier to have
 non-core developers that have the proper domain knowledge contribute
 to these drivers.  However, there's a few things I need to understand:
 
 Does this move make it harder for users to deploy these
 drivers?  How would we ensure that the proper, tested version of a
 driver is delivered along with oslo.messaging proper?

On the point of non-core developers being better able to contribute, in
oslo-incubator we have the concept of Maintainers, who have sort of a
super +1 that counts as +2 on the code they maintain (note that this is
a policy thing, not something enforced by Gerrit in any way).  What this
means is that in theory you could have two +1's from maintainers and
then all you need an oslo.messaging core for is the approval.  In
practice I think it's more common that you get a maintainer +1 and a
core +2A, but for oslo.messaging drivers where there is likely to be
more than one person interested in maintaining it the two +1 case might
be more common.

In any case, that might be an option besides splitting out the drivers
completely.

 
 With the Nova drivers, what's useful is that we have tempest and we can use
 that as an integration gate. I suppose that's technically possible with
 oslo.messaging and its drivers as well, although I prefer to see a
 separation of concerns where I presume there are messaging patterns you want
 to validate that aren't exercised by Tempest.
 
 This is critical IMHO - any proposed changes to oslo.messaging
 proper, or any particular driver for that matter, needs to be vetted
 against the other out-of-tree drivers automagically.  E.g. If a
 proposed change to oslo.messaging breaks the out of tree AMQP 1.0
 driver, that needs to be flagged by jenkins during the gerrit review
 of the proposed oslo.messaging patch.
 

 Another thing I'll note is that before pulling Ironic in, Nova had an API
 contract test. This can be useful for making sure that changes in the
 upstream project doesn't break drivers, or that breakages could at least
 invoke action by the driver team:
 https://github.com/openstack/nova/blob/4ce3f55d169290015063131134f93fca236807ed/nova/tests/virt/test_ironic_api_contracts.py

 --
 Regards,
 Eric Windisch
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Brant Knudson
I don't think anyone would be complaining if glanceclient didn't have the
need to reach into and monkeypatch requests's connection pool manager[1].
Is there a way to tell requests to build the https connections differently
without monkeypatching urllib3.poolmanager?

glanceclient's monkeypatching of the global variable here is dangerous
since it will mess with the application and every other library if the
application or another library uses glanceclient.

[1]
http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75
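
For what it's worth, the mechanism requests itself documents for
customizing how a client builds its connections is a transport adapter
mounted on a Session, which at least keeps the change scoped to one client
instead of patching module-level state. A rough, untested sketch (the
ssl_version knob is only an illustration; it does not by itself cover the
TLS-compression case glanceclient cares about on older Pythons):

    import ssl

    import requests
    from requests.adapters import HTTPAdapter
    # The requests documentation imports PoolManager this way for transport
    # adapters; whether touching the vendored namespace at all is acceptable
    # is, of course, exactly what this thread is arguing about.
    from requests.packages.urllib3.poolmanager import PoolManager

    class TweakedHTTPSAdapter(HTTPAdapter):
        """Build this Session's HTTPS pools with custom SSL options."""

        def __init__(self, ssl_version=ssl.PROTOCOL_TLSv1, **kwargs):
            self._ssl_version = ssl_version
            super(TweakedHTTPSAdapter, self).__init__(**kwargs)

        def init_poolmanager(self, connections, maxsize, block=False):
            self.poolmanager = PoolManager(num_pools=connections,
                                           maxsize=maxsize,
                                           block=block,
                                           ssl_version=self._ssl_version)

    session = requests.Session()
    session.mount('https://', TweakedHTTPSAdapter())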

- Brant


On Fri, Sep 19, 2014 at 10:33 AM, Ian Cordasco ian.corda...@rackspace.com
wrote:



 On 9/19/14, 9:01 AM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-09-18 14:35:10 + (+), Ian Cordasco wrote:
  Except that even OpenStack doesn’t pin requests because of how
  extraordinarily stable our API is.
 [...]
 
 FWIW, I nearly capped it a few weeks ago with
 https://review.openstack.org/117848 but since the affected projects
 were able to rush in changes to their use of the library to work
 around the ways it was breaking I ended up abandoning that. Also for
 some months we capped requests in our global requirements because of
 https://launchpad.net/bugs/1182271 but that was finally lifted about
 a year ago with https://review.openstack.org/37461 (so I don't think
 it's entirely fair to assert that OpenStack doesn’t pin requests
 because...extraordinarily stable).
 --
 Jeremy Stanley

 A) Not the thread for this discussion.
 B) I didn’t say that OpenStack *never* pinned it, I said it didn’t
 currently
 C) Did you read the whole thread? I mentioned 2.4.0 as an exception
 because of ProtocolErrors and the redirect_cache members of OpenStack
 lobbied for.
 D) Before someone else replies, I assumed the transition from 0.x - 1.0
 was also the other obvious (and not worth mentioning) break in stability
 given that since then we’ve had no API changes (with the exception of
 2.4.0 not re-wrapping a single urllib3 exception).

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Donald Stufft

 On Sep 19, 2014, at 11:54 AM, Brant Knudson b...@acm.org wrote:
 
 
 I don't think anyone would be complaining if glanceclient didn't have the 
 need to reach into and monkeypatch requests's connection pool manager[1]. Is 
 there a way to tell requests to build the https connections differently 
 without monkeypatching urllib3.poolmanager?
 
 glanceclient's monkeypatching of the global variable here is dangerous since 
 it will mess with the application and every other library if the application 
 or another library uses glanceclient.
 
 [1] 
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75
 

Why does it need to use it’s own VerifiedHTTPSConnection class? Ironically
reimplementing that is probably more dangerous for security than requests
bundling urllib3 ;)

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-19 Thread Mehdi Abaakouk


Hi,

For me, the lack of tests is why reviews are not done, even though some of
them are critical.


Another, less problematic, issue with this driver is that it relies on
eventlet directly,
but all the eventlet code should be localized only in
oslo.messaging._executor.impl_eventlet.


Otherwise, I have no real opinion on pushing the zmq driver to another
repo or not; if we really have people to maintain this driver,

I'm sure the code will be merged.

Having the testing issue fixed for K-1 sounds like a good milestone, to see
if we are able to maintain it, whether or not the repo is split.


Regards,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] naming of provider template for docs

2014-09-19 Thread Mike Spreitzer
On further thought, I noticed that template-based resource also 
describes an AWS::CloudFormation::Stack; and since those are 
template-based, you could well describe them as custom too.

Would you consider nested stack to also describe resources of other 
types that are implemented by Python code (e.g., for scaling groups) that 
creates another stack?

I think the name we are trying to settle upon means nested stack that is 
created by supplying, as the resource type seen after mapping through the 
effective environment, a URL reference to a template.  I really think 
there is no hope of coming up with a reasonable name that includes all of 
that idea.  And in my own work, I have not needed a name that has that 
specific meaning.  I find myself more interested in nested stacks, 
regardless of the surface details of how they are requested.  Yes, the 
surface details have to be handled in some cases (e.g., balancing a 
scaling group across AZs), but that is a relatively small and confined 
matter --- at least in my own work; YMMV.  So my new bottom line is this: 
(1) for the name of the surface syntax details pick any short name that is 
not too confusing/awful, (2) in the documentation closely bind that name 
to a explanation that lays out all the critical parts of the idea, and (3) 
everywhere that you care about nested stacks regardless of the surface 
details of how they are requested in a template, use the term nested 
stack.

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-19 Thread Jeremy Stanley
On 2014-09-19 15:27:36 + (+), Sullivan, Jon Paul wrote:
[...]
 I think that the abandoning happening from an automated process is
 easier to accept than if it came from a person, and so less likely
 to create a poor and emotional response.

Here we'll just have to agree to disagree. I think core reviewers
hiding behind an automated process so that they don't have to
confront contributors about stalled/inadequate changes is inherently
less friendly. Clearly you feel that contributors are less likely to
be offended if a machine tells them they need to revisit a change
because it's impersonal and therefore without perceived human
judgement.

 If your personal opinion was that it wasn’t useful to your
 project, then perhaps what you are really saying is that the
 implementation of it was not configurable enough to allow
 individual projects to tailor it to their needs.
[...]

Sure. For what it's worth, I haven't said I would push back on
someone writing a reasonable implementation of this feature, but it
definitely is something I wouldn't want imposed on everyone's
workflow just because the majority of core reviewers on some subset
of projects found it easier to have changes abandoned for them.

 So the removal of the auto-abandon, imho, has increased core
 reviewer workload, increased the chance that a good change may get
 ignored for extended periods of time, and has increased the
 possibility of code committers becoming frustrated with core
 reviewers adding a wip or abandon to their patches, so a decrease
 in productivity all around. :(

I think this is an inaccurate representation of the situation. It
wasn't explicitly removed. It was a buggy hack which was effectively
unmaintainable, didn't work with modern versions of Gerrit, and
nobody felt like investing time in a new implementation of it nor
was it deemed a critical feature for which we should hold back
progress and continue to fester on a years-old Gerrit installation.
It broke and has never been fixed.

This is not the same thing as being removed, but since it's been
gone I've come to wish we had removed it rather than just living
with it until it ceased working due to bitrot. In this case, an
automated process determined that feature was no longer suitable and
abandoned functional use of it. I think it would have been more
friendly to our community if the people who were no longer
interested in maintaining that feature had explicitly removed it
instead... but then it sounds like you would assert that having the
machine abandon this feature for us was less likely to offend
anyone. ;)
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Mark Washenberger
On Fri, Sep 19, 2014 at 8:59 AM, Donald Stufft don...@stufft.io wrote:


 On Sep 19, 2014, at 11:54 AM, Brant Knudson b...@acm.org wrote:


 I don't think anyone would be complaining if glanceclient didn't have the
 need to reach into and monkeypatch requests's connection pool manager[1].
 Is there a way to tell requests to build the https connections differently
 without monkeypatching urllib3.poolmanager?

 glanceclient's monkeypatching of the global variable here is dangerous
 since it will mess with the application and every other library if the
 application or another library uses glanceclient.

 [1]
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75


 Why does it need to use it’s own VerifiedHTTPSConnection class? Ironically
 reimplementing that is probably more dangerous for security than requests
 bundling urllib3 ;)


We have supported the option to skip SSL compression since before adopting
requests (see 556082cd6632dbce52ccb67ace57410d61057d66); it is useful when
uploading already compressed images.




 ---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Donald Stufft

 On Sep 19, 2014, at 12:42 PM, Mark Washenberger 
 mark.washenber...@markwash.net wrote:
 
 
 
 On Fri, Sep 19, 2014 at 8:59 AM, Donald Stufft don...@stufft.io wrote:
 
 On Sep 19, 2014, at 11:54 AM, Brant Knudson b...@acm.org wrote:
 
 
 I don't think anyone would be complaining if glanceclient didn't have the 
 need to reach into and monkeypatch requests's connection pool manager[1]. Is 
 there a way to tell requests to build the https connections differently 
 without monkeypatching urllib3.poolmanager?
 
 glanceclient's monkeypatching of the global variable here is dangerous since 
 it will mess with the application and every other library if the application 
 or another library uses glanceclient.
 
 [1] 
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75
  
 http://git.openstack.org/cgit/openstack/python-glanceclient/tree/glanceclient/common/https.py#n75
 
 
  Why does it need to use its own VerifiedHTTPSConnection class? Ironically
 reimplementing that is probably more dangerous for security than requests
 bundling urllib3 ;)
 
 We supported the option to skip SSL compression since before adopting 
 requests (see 556082cd6632dbce52ccb67ace57410d61057d66), useful when 
 uploading already compressed images.
 

Is that all it’s used for? Probably it’s sane to just delete it then.

On Python 3.2+ and 2.7.9+, the stdlib ssl module provides the APIs to do it, and 
urllib3 (and thus requests) will disable TLS compression by default.

Python 2.6 and 2.7.0-2.7.8 do not provide the APIs to do so; however, on those 
Python 2.x versions, if you install pyOpenSSL, ndg-httpsclient, and pyasn1, TLS 
compression will also be disabled (automatically if you use requests; with raw 
urllib3 you have to do an import plus a function call).

So you can remove all that code and just let requests/urllib3 handle it on 
3.2+ and 2.7.9+. For anything older, either use conditional dependencies so 
that glanceclient depends on pyOpenSSL, ndg-httpsclient, and pyasn1 on Python 
2.x, or leave them optional so that people who want TLS compression disabled 
on those versions can install them themselves.

By the way, everything above holds true for SNI as well.

This seems like the best of both worlds: glanceclient isn't importing anything 
from the vendored requests.packages.*, people get TLS compression disabled (by 
default or optionally, depending on the choice the project makes), and 
glanceclient no longer has to maintain its own copy of security-sensitive code.
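
For what it's worth, a rough sketch of that conditional approach (the helper
name here is made up, and it assumes urllib3's optional pyopenssl contrib
module; with requests itself the injection happens automatically once
pyOpenSSL, ndg-httpsclient, and pyasn1 are importable):

import ssl


def maybe_disable_tls_compression():
    # Sketch only, not glanceclient's actual code. On new enough Pythons
    # (2.7.9+ or recent 3.x) the stdlib already exposes the knobs and
    # urllib3/requests disable TLS compression by default, so do nothing.
    if getattr(ssl, 'OP_NO_COMPRESSION', None) is not None:
        return
    try:
        # Raw urllib3 on older 2.x: swap in pyOpenSSL, which does not
        # negotiate TLS compression. Requests performs this call itself
        # when the optional packages are installed.
        from urllib3.contrib import pyopenssl
        pyopenssl.inject_into_urllib3()
    except ImportError:
        # Optional extras not installed; leave the stock ssl module alone.
        pass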

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Adam Lawson
Can someone do a small little Visio or other visual to explain what's being
proposed here? My head sported a small crack at around the 5-6th page...

; ) But seriously, I couldn't understand the proposal. Maybe I'm not the
audience which is fine, just saying, the words got in the way. Sounds like
a song!


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072


On Fri, Sep 19, 2014 at 5:46 AM, John Griffith john.griff...@solidfire.com
wrote:



 On Fri, Sep 19, 2014 at 4:33 AM, Thierry Carrez thie...@openstack.org
 wrote:

 Vishvananda Ishaya wrote:
  Great writeup. I think there are some great concrete suggestions here.
 
  A couple more:
 
  1. I think we need a better name for Layer #1 that actually represents
 what the goal of it is: Infrastructure Services?
  2. We need to be open to having other Layer #1s within the
 community. We should allow for similar collaborations and group focus to
 grow up as well. Storage Services? Platform Services? Computation Services?

 I think that would nullify most of the benefits of Monty's proposal. If
 we keep on blessing themes or special groups, we'll soon be back at
 step 0, with projects banging on the TC door to become special, and
 companies not allocating resources to anything that's not special.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ​Great stuff, mixed on point 2 raised by Vish but honestly I think that's
 something that could evolve over time, but I looked at that differently as
 in Cinder, SWIFT and some day Manila live under a Storage Services
 umbrella, and ideally at some point there's some convergence there.

 Anyway, I don't want to start a rat-hole on that, it's kind of irrelevant
 right now.  Bottom line is I think the direction and initial ideas in
 Monty's post are what a lot of us have been thinking about and looking for.
  I'm in!!​


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread John Dickinson

On Sep 19, 2014, at 5:46 AM, John Griffith john.griff...@solidfire.com wrote:

 
 
 On Fri, Sep 19, 2014 at 4:33 AM, Thierry Carrez thie...@openstack.org wrote:
 Vishvananda Ishaya wrote:
  Great writeup. I think there are some great concrete suggestions here.
 
  A couple more:
 
  1. I think we need a better name for Layer #1 that actually represents what 
  the goal of it is: Infrastructure Services?
   2. We need to be open to having other Layer #1s within the community. We 
  should allow for similar collaborations and group focus to grow up as well. 
  Storage Services? Platform Services? Computation Services?
 
 I think that would nullify most of the benefits of Monty's proposal. If
 we keep on blessing themes or special groups, we'll soon be back at
 step 0, with projects banging on the TC door to become special, and
 companies not allocating resources to anything that's not special.
 
 --
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ​Great stuff, mixed on point 2 raised by Vish but honestly I think that's 
 something that could evolve over time, but I looked at that differently as in 
  Cinder, SWIFT and some day Manila live under a Storage Services umbrella, 
 and ideally at some point there's some convergence there.
 
 Anyway, I don't want to start a rat-hole on that, it's kind of irrelevant 
 right now.  Bottom line is I think the direction and initial ideas in Monty's 
 post are what a lot of us have been thinking about and looking for.  I'm in!!​


I too am generally supportive of the concept, but I do want to think about the 
vishy/tts/jgriffith points above.

It's interesting that the proposed layer #1 stuff is very very similar to 
what was originally in OpenStack at the very beginning as Nova. Over time, many 
of these pieces of functionality required for compute were split out (block, 
networking, image, etc), and I think that's why so many people look at these 
pieces and say (rightly), of course these are required all together and 
tightly coupled. That's how these projects started, and we still see evidence 
of their birth today.

For that reason, I do agree with Vish that there should be similar 
collaborations for other things. While the layer #1 (or compute) use case 
is very common, we can all see that it's not the only one that people are 
solving with OpenStack parts. And this is reflected in the products built and 
sold by companies, too. Some sell one subset of openstack stuff as product X 
and maybe a different subset as product Y. (The most common example here is 
compute vs object storage.) This reality has led to a lot of the angst 
around definitions since there is effort to define openstack all as one thing 
(or worse, as a base thing that others are defined as built upon).

I propose that we can get the benefits of Monty's proposal and implement all of 
his concrete suggestions (which are fantastic) by slightly adjusting our usage 
of the program/project concepts.

I had originally hoped that the program concept would have been a little 
higher-level instead of effectively spelling project as program. I'd love 
to see a hierarchy of openstack-program-project/team-repos. Right now, we 
have added the program layer but have effectively mapped it 1:1 to the 
project. For example, we used to have a few repos in the Swift project managed 
by the same group of people, and now we have a few repos in the object 
storage program, all managed by the same group of people. And every time 
something is added to OpenStack, its added as a new program, effectively 
putting us exactly where we were before we called it a program with the same 
governance and management scaling problems.

Today, I'd group existing OpenStack projects into programs as follows:

Compute: nova, sahara, ironic
Storage: swift, cinder, glance, trove
Network: neutron, designate, zaqar
Deployment/management: heat, triple-o, horizon, ceilometer
Identity: keystone, barbican
Support (not user facing): infra, docs, tempest, devstack, oslo
(potentially even) stackforge: lots

I like two things about this. First, it allows people to compose a solution. 
Second, it allows us as a community to think more about the strategic/product 
things. For example, it lets us as a community say, We think storage is 
important. How are we solving it today? What gaps do we have in that? How can 
the various storage things we have work together better?

Thierry makes the point that more themes will nullify the benefits of Monty's 
proposal. I agree, if we allow the explosion of projects/programs/themes to 
continue. The benefit of what Monty is proposing is 
that it identifies and focusses on a particular use case (deploy a VM, add a 
volume, get an IP, configure a domain) so that we know we have solved it well. 
I think that focus is vital, and by grouping 

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Vishvananda Ishaya

On Sep 19, 2014, at 3:33 AM, Thierry Carrez thie...@openstack.org wrote:

 Vishvananda Ishaya wrote:
 Great writeup. I think there are some great concrete suggestions here.
 
 A couple more:
 
 1. I think we need a better name for Layer #1 that actually represents what 
 the goal of it is: Infrastructure Services?
  2. We need to be open to having other Layer #1s within the community. We 
 should allow for similar collaborations and group focus to grow up as well. 
 Storage Services? Platform Services? Computation Services?
 
 I think that would nullify most of the benefits of Monty's proposal. If
 we keep on blessing themes or special groups, we'll soon be back at
 step 0, with projects banging on the TC door to become special, and
 companies not allocating resources to anything that's not special.

I was assuming that these would be self-organized rather than managed by the TC.

Vish



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] WARNING: Oslo team is cleaning up the incubator

2014-09-19 Thread Doug Hellmann
The Oslo team is starting to remove deprecated code from the incubator for 
libraries that graduated this cycle, in preparation for the work we will be 
doing in Kilo.

Any fixes for those modules needed before the final releases for the other 
projects should be submitted to the stable/juno branch of the incubator, rather 
than master. We will provide guidance for each patch about whether it should 
also be submitted to master and the new library.

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Dean Troyer
On Fri, Sep 19, 2014 at 12:02 PM, Adam Lawson alaw...@aqorn.com wrote:

 Can someone do a small little Visio or other visual to explain what's
 being


Sean's blog post included a nice diagram that is Monty's starting point:
https://dague.net/2014/08/26/openstack-as-layers/

AIUI Monty's Layer 1 is basically the original layers 1+2.  I had separated
them originally as layer 2 is optional from a purely technical perspective,
but not really from a cloud user perspective.

layers tl;dr: For my purposes, layers 1, 2 and a hand-wavey 'everything
else' are useful. Others find combining layers 1 and 2 more useful,
especially for organizational and other purposes.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-19 Thread Doug Hellmann

On Sep 19, 2014, at 10:34 AM, Ken Giusti kgiu...@gmail.com wrote:

 On Mon Sep 8 15:18:35 UTC 2014, Doug Hellmann wrote:
 On Sep 8, 2014, at 10:35 AM, Antonio Messina antonio.s.messina at 
 gmail.com wrote:
 
 Hi All,
 
 We tested briefly ZeroMQ with Havana last year, but we couldn't find
 any good documentation on how to implement it, and we were not able to
 get it working. We also got the impression that the support was not at
 all mature, so we decided to use RabbitMQ instead.
 
 However, I must say that the broker-less design of ZeroMQ is very
 appealing, and we would like to give it a try, assuming
 1) the documentation is improved
 2) there is some assurance that support for ZeroMQ is not going to be 
 dropped.
 
 I can help with 1) if there is someone that knows a bit of the
 internals and can bootstrap me, because I have no first hand
 experience on how message queues are used in OpenStack, and little
 experience with ZeroMQ.
 
 Unfortunately, the existing Oslo team doesn’t have a lot of
 experience with ZeroMQ either (hence this thread). It sounds like Li
 Ma’s team has made it work, though, so maybe you could work
 together. We should prioritize documentation and then functional
 testing, I think.
 
 About 2), well this is a decision for the developers, but IMHO there
 *should* be support for ZeroMQ in OpenStack: its broker-less
 architecture would eliminate a SPoF (the message broker), could ease
 the deployment (especially in HA setup) and grant very high
 performance.
 
 I agree, it would be useful to support it. This is purely a resource
 allocation problem for me. I don't have anyone willing to do the work
 needed to ensure the driver is functional and can be deployed sanely
 (although maybe I’ve found a couple of volunteers now :-).
 
 There is another effort going on to support AMQP 1.0, which (as I
 understand it) includes similar broker-less deployment options. Before
 we decide whether to invest in ZeroMQ for that reason alone, it would
 be useful to know if AMQP 1.0 support makes potential ZeroMQ support
 less interesting.
 
 
 While the AMQP 1.0 protocol permits it, the current implementation of
 the new driver does not support broker-less point-to-point - yet.
 
 I'm planning on adding that support to the AMQP 1.0 driver in the
 future.  I have yet to spend any time ramping up on the existing
 brokerless support implemented by the zmq driver, so forgive my
 ignorance, but I'm hoping to leverage what is there if it makes sense.
 
 If it doesn't make sense, and the existing code is zmq specific, then
 I'd be interested in working with the zmq folks to help develop a more
 generic implementation that functions across both drivers.

It would be great to have some common broker-less base class or something like 
that to share implementations, if that is possible.

Doug

 
 Doug
 
 
 -- 
 Ken Giusti  (kgiu...@gmail.com)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core team

2014-09-19 Thread Doug Hellmann
Done.

Welcome to the team, Radoslav!

Doug

On Sep 19, 2014, at 7:52 AM, Vipin Balachandran vbalachand...@vmware.com 
wrote:

 +1
  
 From: Vui Chiap Lam [mailto:vuich...@vmware.com] 
 Sent: Tuesday, September 16, 2014 10:52 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core 
 team
  
 +1. Rado has been very instrumental in helping with reviews and making 
 significant fixes to oslo.vmware. He also contributed greatly to the effort 
 to integrate the library with other projects.
  
 Vui
  
 From: Arnaud Legendre alegen...@vmware.com
 Sent: Monday, September 15, 2014 10:18 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [oslo-vmware] Propose adding Radoslav to core 
 team
  
 +1
  
 On Sep 15, 2014, at 9:37 AM, Gary Kotton gkot...@vmware.com wrote:
 
 
 Hi,
 I would like to propose Radoslav to be a core team member. Over the course of 
 the J cycle he has been great with the reviews, bug fixes and updates to the 
 project.
 Can the other core team members please update with your votes if you agree or 
 not.
 Thanks
 Gary
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Federation and Policy

2014-09-19 Thread Doug Hellmann

On Sep 19, 2014, at 6:56 AM, David Chadwick d.w.chadw...@kent.ac.uk wrote:

 
 
 On 18/09/2014 22:14, Doug Hellmann wrote:
 
 On Sep 18, 2014, at 4:34 PM, David Chadwick d.w.chadw...@kent.ac.uk
 wrote:
 
 
 
 On 18/09/2014 21:04, Doug Hellmann wrote:
 
 On Sep 18, 2014, at 12:36 PM, David Chadwick 
 d.w.chadw...@kent.ac.uk wrote:
 
 Our recent work on federation suggests we need an improvement
 to the way the policy engine works. My understanding is that
 most functions are protected by the policy engine, but some are
 not. The latter functions are publicly accessible. But there is
 no way in the policy engine to specify public access to a
 function and there ought to be. This will allow an
 administrator to configure the policy for a function to range
 from very lax (publicly accessible) to very strict (admin
  only). A policy of "" means that any authenticated user can
 access the function. But there is no way in the policy to
 specify that an unauthenticated user (i.e. public) has access
 to a function.
 
 We have already identified one function (get trusted IdPs 
 identity:list_identity_providers) that needs to be publicly 
 accessible in order for users to choose which IdP to use for 
 federated login. However some organisations may not wish to
 make this API call publicly accessible, whilst others may wish
 to restrict it to Horizon only etc. This indicates that that
 the policy needs to be set by the administrator, and not by
 changes to the code (i.e. to either call the policy engine or
 not, or to have two different API calls).
 
 I don’t know what list_identity_providers does.
 
 it lists the IDPs that Keystone trusts to authenticate users
 
 Can you give a little more detail about why some providers would
 want to make it not public
 
 I am not convinced that many cloud services will want to keep this
 list secret. Today if you do federated login, the public web page
 of the service provider typically lists the logos or names of its
 trusted IDPs (usually Facebook and Google). Also all academic
 federations publish their full lists of IdPs. But it has been
 suggested that some commercial cloud providers may not wish to
 publicise this list and instead require the end users to know which
 IDP they are going to use for federated login. In which case the
 list should not be public.
 
 
 if we plan to make it public by default? If we think there’s a 
 security issue, shouldn’t we just protect it?
 
 
 Its more a commercial in confidence issue (I dont want the world to
 know who I have agreements with) rather than a security issue,
 since the IDPs are typically already well known and already protect
 themselves against attacks from hackers on the Internet.
 
 OK. The weak “someone might want to” requirement aside, and again
 showing my ignorance of implementation details, do we truly have to
 add a new feature to disable the policy check? Is there no way to
 have an “always allow” policy using the current syntax?
 
 You tell me. If there is, then problem solved. If not, then my request
 still stands

From looking at the code, it appears that something like True:”True” (or 
possibly True:True) would always pass, but I haven’t tested that.
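
For completeness, a hypothetical policy fragment (written here as a Python
dict rather than raw policy.json, and assuming the oslo-incubator policy
syntax, where "@" is an always-true check and "!" always fails):

# Hypothetical example only; not Keystone's shipped policy.json.
example_policy = {
    # Wide open, so e.g. a login page can list the trusted IdPs.
    "identity:list_identity_providers": "@",

    # The stricter alternative an operator could choose instead:
    # "identity:list_identity_providers": "rule:admin_required",

    # Per the discussion above, an empty rule ("") also passes once the
    # caller has been authenticated by the middleware.
}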

Doug

 
 regards
 
 David
 
 
 Doug
 
 
 regards
 
 David
 
 
 If we can invent some policy syntax that indicates public
 access, e.g. reserved keyword of public, then Keystone can
 always call the policy file for every function and there would
 be no need to differentiate between protected APIs and
 non-protected APIs as all would be protected to a greater or
 lesser extent according to the administrator's policy.
 
 Comments please
 
 It seems reasonable to have a way to mark a function as fully
 public, if we expect to really have those kinds of functions.
 
 Doug
 
 
 regards
 
 David
 
 
 
 
 
 
 ___ OpenStack-dev 
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___ OpenStack-dev mailing
 list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___ OpenStack-dev mailing
 list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [LBaaS] Packet flow between instances using a load balancer

2014-09-19 Thread Eugene Nikanorov
If we're talking about the default haproxy driver for LBaaS, then I'd say the
diagram is not quite correct, because it suggests that LB_A and LB_B are
routing devices with networks behind them.

Since haproxy is a layer-4 load balancer, the packet received by RHB1 will
have LB_B as its source and RHB1 as its destination, and similarly for the
opposite direction.

In fact the packet is not modified; it's simply a different packet, because
haproxy does not forward packets, it opens its own connection to the backend
server. The client's IP is usually passed along via the X-Forwarded-For HTTP
header.
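
A tiny, purely hypothetical WSGI app running on RHB1 illustrates what the
member actually sees behind the haproxy VIP:

def app(environ, start_response):
    # Sketch only: behind the haproxy-based LB, REMOTE_ADDR is the load
    # balancer's address (LB_B); the original client (RHA1) only shows up
    # in X-Forwarded-For, and only if the LB inserts that header.
    lb_addr = environ.get('REMOTE_ADDR')
    client = environ.get('HTTP_X_FORWARDED_FOR', 'unknown')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['connection from %s, original client %s\n' % (lb_addr, client)]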

Thanks,
Eugene.

On Thu, Sep 11, 2014 at 2:33 PM, Maish Saidel-Keesing 
maishsk+openst...@maishsk.com wrote:

 I am trying to find out how traffic currently flows went sent to an
 instance through a LB.

 Say I have the following scenario:


  RHA1 ---> LB_A ------------- LB_B ---> RHB1
               |                 |
  RHA2 --------|                 |------ RHB2


 A packet is sent from RHA1 to LB_B (with a final destination of course
 being either RHB1 or RHB2)

 I have a few questions about the flow.

 1. When the packet is received by RHB1 - what is the source and
 destination address?
  Is the source RHA1 or LB_B?
  Is the destination LB_B or RHB_1?
 2. When is the packet modified (if it is)? And how?
  3. Traffic in the opposite direction: RHB1 -> RHA1. What is the path
 that will be taken?

 The catalyst of this question was how to control traffic that is coming
 into instances through a LoadBalancer with security groups. At the
 moment you can either define a source IP/range or a security group.
 There is no way to add a LB to a security group (at least not that I
 know of).

 If the source IP that the packet is identified with - is the Load
 balancer (and I suspect it is) then there is no way to enforce the
 traffic flow.

 How would you all deal with this scenario and controlling the traffic flow?

 Any help / thoughts is appreciated!

 --
 Maish Saidel-Keesing


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-19 Thread Jay Pipes
Sorry for top-posting. Just want to say thanks for writing this up and I 
agree with all the points and recommendations you made.


-jay

On 09/19/2014 09:13 AM, Sean Dague wrote:

I've spent the better part of the last 2 weeks in the Nova bug tracker
to try to turn it into something that doesn't cause people to run away
screaming. I don't remember exactly where we started at open bug count 2
weeks ago (it was north of 1400, with > 200 bugs in new, but it might
have been north of 1600), but as of this email we're at < 1000 open bugs
(I'm counting Fix Committed as closed, even though LP does not), and ~0
new bugs (depending on the time of the day).

== Philosophy in Triaging ==

I'm going to lay out the philosophy of triaging I've had, because this
may also set the tone going forward.

A bug tracker is a tool to help us make a better release. It does not
exist for its own good, it exists to help. Which means when evaluating
what stays in and what leaves we need to evaluate if any particular
artifact will help us make a better release. But also more importantly
realize that there is a cost for carrying every artifact in the tracker.
Resolving duplicates gets non-linearly harder as the number of artifacts
goes up. Triaging gets non-linearly harder as the number of artifacts goes up.

With this I was being somewhat pragmatic about closing bugs. An old bug
that is just a stacktrace is typically not useful. An old bug that is a
vague sentence that we should refactor a particular module (with no
specifics on the details) is not useful. A bug reported against a very
old version of OpenStack where the code has changed a lot in the
relevant area, and there aren't responses from the author, is not
useful. Not useful bugs just add debt, and we should get rid of them.
That makes the chance of pulling a random bug off the tracker something
that you could actually look at fixing, instead of mostly just stalling out.

So I closed a lot of stuff as Invalid / Opinion that fell into those camps.

== Keeping New Bugs at close to 0 ==

After driving the bugs in the New state down to zero last week, I found
it's actually pretty easy to keep it at 0.

We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
aren't actually a bug, and can be closed immediately. ~30% look like a
bug, but don't have anywhere near enough information in them, and
flipping them to incomplete with questions quickly means we have a real
chance of getting the right info. ~10% are fixable in < 30 minutes worth
of work. And the rest are real bugs, that seem to have enough to dive
into it, and can be triaged into Confirmed, set a priority, and add the
appropriate tags for the area.

But, more importantly, this means we can filter bug quality on the way
in. And we can also encourage bug reporters that are giving us good
stuff, or even easy stuff, as we respond quickly.

Recommendation #1: we adopt a 0 new bugs policy to keep this from
getting away from us in the future.

== Our worse bug reporters are often core reviewers ==

I'm going to pick on Dan Prince here, mostly because I have a recent
concrete example, however in triaging the bug queue much of the core
team is to blame (including myself).

https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
was set incomplete and no response. I'm almost 100% sure it's a dupe of
the multiprocess bug we've been tracking down but it's so terse that you
can't get to the bottom of it.

There were a ton of 2012 nova bugs that were basically post-it notes:
"Oh, we should refactor this function." Full stop. While those are fine
for personal tracking, their value goes to zero probably 3 months after
they are filed, especially if the reporter stops working on the issue at
hand. Nova has plenty of "wouldn't it be great if we..." ideas. I'm not
convinced using bugs for those is useful unless we go and close them out
aggressively if they stall.

Also, if Nova core can't file a good bug, it's hard to set the example
for others in our community.

Recommendation #2: hey, Nova core, lets be better about filing the kinds
of bugs we want to see! mkay!

Recommendation #3: Let's create a tag for personal work items or
something for these class of TODOs people are leaving themselves that
make them a ton easier to cull later when they stall and no one else has
enough context to pick them up.

== Tags ==

The aggressive tagging that Tracy brought into the project has been
awesome. It definitely helps slice out into better functional areas.
Here is the top of our current official tag list (and bug count):

95 compute
83 libvirt
74 api
68 vmware
67 network
41 db
40 testing
40 volumes
36 ec2
35 icehouse-backport-potential
32 low-hanging-fruit
31 xenserver
25 ironic
23 hyper-v
16 cells
14 scheduler
12 baremetal
9 ceph
9 security
8 oslo
...

So, good stuff. However I think we probably want to take a further step
and attempt to get champions for tags. So that tag owners would ensure
their bug list looks sane, and 

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Vishvananda Ishaya

On Sep 19, 2014, at 10:14 AM, John Dickinson m...@not.mn wrote:

 
 On Sep 19, 2014, at 5:46 AM, John Griffith john.griff...@solidfire.com 
 wrote:
 
 
 
 On Fri, Sep 19, 2014 at 4:33 AM, Thierry Carrez thie...@openstack.org 
 wrote:
 Vishvananda Ishaya wrote:
 Great writeup. I think there are some great concrete suggestions here.
 
 A couple more:
 
 1. I think we need a better name for Layer #1 that actually represents what 
 the goal of it is: Infrastructure Services?
 2. We need to be open to having other Layer #1s within the community. We 
 should allow for similar collaborations and group focus to grow up as well. 
 Storage Services? Platform Services? Computation Services?
 
 I think that would nullify most of the benefits of Monty's proposal. If
 we keep on blessing themes or special groups, we'll soon be back at
 step 0, with projects banging on the TC door to become special, and
 companies not allocating resources to anything that's not special.
 
 --
 Thierry Carrez (ttx)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ​Great stuff, mixed on point 2 raised by Vish but honestly I think that's 
 something that could evolve over time, but I looked at that differently as 
 in Cinder, SWIFT and some day Manila live under a Storage Services 
 umbrella, and ideally at some point there's some convergence there.
 
 Anyway, I don't want to start a rat-hole on that, it's kind of irrelevant 
 right now.  Bottom line is I think the direction and initial ideas in 
 Monty's post are what a lot of us have been thinking about and looking for.  
 I'm in!!​
 
 
 I too am generally supportive of the concept, but I do want to think about 
 the vishy/tts/jgriffith points above.
 
 Today, I'd group existing OpenStack projects into programs as follows:
 
 Compute: nova, sahara, ironic
 Storage: swift, cinder, glance, trove
 Network: neutron, designate, zaqar
 Deployment/management: heat, triple-o, horizon, ceilometer
 Identity: keystone, barbican
 Support (not user facing): infra, docs, tempest, devstack, oslo
 (potentially even) stackforge: lots

There is a pretty different division of things in this breakdown than in what 
monty was proposing. This divides things up by conceptual similarity which I 
think is actually less useful than breaking things down by use case. I really 
like the grouping and testing of things which are tightly coupled.

If we say launching a VM and using it is currently the primary use case of our 
community, then things group into Monty's layer #1. It seems fairly clear that a 
large section of our community is focused on this use case so this should be a 
primary focus of infrastructure resources.

There are other use cases in our community, for example:

Object Storage: Swift (depends on keystone)
Orchestrating Multiple VMs: Heat (depends on layer1)
DBaaS: Trove (depends on heat)

These are also important use cases for parts of our community, but swift has 
demonstrated that it isn't required to be a part of an integrated release 
schedule, so these could be managed by smaller groups in the community. Note 
that these are primarily individual projects today, but I could see a future 
where some of these projects decide to group and do an integrated release. In 
the future we might see (totally making this up):

Public Cloud Application services: Trove, Zaqar
Application Deployment services: Heat, Murano
Operations services: Ceilometer, Congress

As I mentioned previously though, I don’t think we need to define these groups 
in advance. These groups can organize as needed.

Vish


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Anita Kuno
On 09/19/2014 01:15 PM, Vishvananda Ishaya wrote:
 
 On Sep 19, 2014, at 3:33 AM, Thierry Carrez thie...@openstack.org
 wrote:
 
 Vishvananda Ishaya wrote:
 Great writeup. I think there are some great concrete
 suggestions here.
 
 A couple more:
 
 1. I think we need a better name for Layer #1 that actually
 represents what the goal of it is: Infrastructure Services? 2.
  We need to be open to having other Layer #1s within the
 community. We should allow for similar collaborations and group
 focus to grow up as well. Storage Services? Platform Services?
 Computation Services?
 
 I think that would nullify most of the benefits of Monty's
 proposal. If we keep on blessing themes or special groups,
 we'll soon be back at step 0, with projects banging on the TC
 door to become special, and companies not allocating resources to
 anything that's not special.
 
 I was assuming that these would be self-organized rather than
 managed by the TC.
 
 Vish
 
Some groups are able to self-organize better than others, I have been
hoping the third party theme would self-organize for the last 10
months and while some folks are trying, their numbers don't keep up
with the demand. This group insists that someone else tell them what
to do. It is not a scalable model, it also doesn't inspire anyone
outside of the theme to want to help out with the answering of
questions either.

Just as a data point.

Thanks,
Anita.
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Vipul Sabhaya
On Fri, Sep 19, 2014 at 4:23 AM, Eoghan Glynn egl...@redhat.com wrote:



  Hi All,
 
  My understanding of Zaqar is that it's like SQS. SQS uses distributed
 queues,
  which have a few unusual properties [0]:
  Message Order
 
 
  Amazon SQS makes a best effort to preserve order in messages, but due to
 the
  distributed nature of the queue, we cannot guarantee you will receive
  messages in the exact order you sent them. If your system requires that
  order be preserved, we recommend you place sequencing information in each
  message so you can reorder the messages upon receipt.
  At-Least-Once Delivery
 
 
  Amazon SQS stores copies of your messages on multiple servers for
 redundancy
  and high availability. On rare occasions, one of the servers storing a
 copy
  of a message might be unavailable when you receive or delete the
 message. If
  that occurs, the copy of the message will not be deleted on that
 unavailable
  server, and you might get that message copy again when you receive
 messages.
  Because of this, you must design your application to be idempotent
 (i.e., it
  must not be adversely affected if it processes the same message more than
  once).
  Message Sample
 
 
  The behavior of retrieving messages from the queue depends whether you
 are
  using short (standard) polling, the default behavior, or long polling.
 For
  more information about long polling, see Amazon SQS Long Polling .
 
  With short polling, when you retrieve messages from the queue, Amazon SQS
  samples a subset of the servers (based on a weighted random distribution)
  and returns messages from just those servers. This means that a
 particular
  receive request might not return all your messages. Or, if you have a
 small
  number of messages in your queue (less than 1000), it means a particular
  request might not return any of your messages, whereas a subsequent
 request
  will. If you keep retrieving from your queues, Amazon SQS will sample
 all of
  the servers, and you will receive all of your messages.
 
  The following figure shows short polling behavior of messages being
 returned
  after one of your system components makes a receive request. Amazon SQS
  samples several of the servers (in gray) and returns the messages from
 those
  servers (Message A, C, D, and B). Message E is not returned to this
  particular request, but it would be returned to a subsequent request.
 
 
 
 
 
 
 
  Presumably SQS has these properties because it makes the system
 scalable, if
  so does Zaqar have the same properties (not just making these same
  guarantees in the API, but actually having these properties in the
  backends)? And if not, why? I looked on the wiki [1] for information on
  this, but couldn't find anything.

 The premise of this thread is flawed I think.

 It seems to be predicated on a direct quote from the public
 documentation of a closed-source system justifying some
 assumptions about the internal architecture and design goals
 of that closed-source system.

 It then proceeds to hold zaqar to account for not making
 the same choices as that closed-source system.

 This puts the zaqar folks in a no-win situation, as it's hard
 to refute such arguments when they have no visibility over
 the innards of that closed-source system.

 Sure, the assumption may well be correct that the designers
 of SQS made the choice to expose applications to out-of-order
  messages as this was the only practical way of achieving their
 scalability goals.

 But since the code isn't on github and the design discussions
 aren't publicly archived, we have no way of validating that.

 Would it be more reasonable to compare against a cloud-scale
 messaging system that folks may have more direct knowledge
 of?

 For example, is HP Cloud Messaging[1] rolled out in full
 production by now?


Unfortunately the HP Cloud Messaging service was decommissioned.


 Is it still cloning the original Marconi API, or has it kept
 up with the evolution of the API? Has the nature of this API
 been seen as the root cause of any scalability issues?


We created a RabbitMQ backed implementation that aimed to be API compatible
with Marconi.  This proved difficult given some of the API issues that have
been discussed on this very thread.  Our implementation could never be fully
API compatible with Marconi (there really isn’t an easy way to map AMQP to
HTTP, without losing serious functionality).

We also worked closely with the Marconi team, trying to get upstream to
support AMQP — the Marconi team also came to the same conclusion that their
API was not a good fit for such a backend.

Now — we are looking at options.  One that intrigues us has also been
suggested on these threads, specifically building a ‘managed messaging
service’ that could provision various messaging technologies (rabbit,
kafka, etc), and at the end of the day hand off the protocol native to the
messaging technology to the end user.



 Cheers,
 Eoghan

 [1]
 

Re: [openstack-dev] [requirements] [nova] requirements freeze exception for websockify

2014-09-19 Thread Doug Hellmann
On Sep 19, 2014, at 11:22 AM, Sean Dague s...@dague.net wrote:

 I'd like to request a requirements freeze exception for websockify -
 https://review.openstack.org/#/c/122702/
 
 The rationale for this is that websockify version bump fixes a Nova bug
 about zombie processes - https://bugs.launchpad.net/nova/+bug/1048703.
 It also sets g-r to the value we've been testing against for the entire
 last cycle.
 
 I don't believe it has any impacts on other projects, so should be a
 very safe change.

Gantt, Ironic, and Nova all use websockify.

I’m +1 on updating the minimum based on the fact that our current version spec 
is causing us to test with this version anyway.

However, the proposed change also removes the upper bound. Do we know why that 
was bounded before? Have we had issues with API changes in that project? Is it 
safe to remove the cap?

Doug

 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Chmouel Boudjnah
On Fri, Sep 19, 2014 at 6:58 PM, Donald Stufft don...@stufft.io wrote:

 So you can remove all that code and just let requests/urllib3 handle it on
 3.2+, 2.7.9+ and for anything less than that either use conditional
 dependencies to have glance client depend on pyOpenSSL, ndg-httpsclient,
 and pyasn1 on Python 2.x, or let them be optional and if people want to
 disable TLS compression in those versions they can install those versions
 themselves.



we have that issue as well for swiftclient, see the great write-up from
stuart here :

https://answers.launchpad.net/swift/+question/196920

Just removing this and hoping that users run bleeding-edge Python (which
they don't) is not going to work for us, and the pyOpenSSL way is very
unfriendly to the end user as well.

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Zaqar] Integration plan moving forward

2014-09-19 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2014-09-18 02:07:09 -0700:
 Greetings,
 
 If I recall correctly, Heat was planning to adopt Zaqar regardless of
 the result of the graduation attempt (please correct me if I'm wrong).
 Based on this assumption, I'd like to start working on a plan forward to
 make this integration happen.
 
 So far, these are the use cases I've collected from past discussions:
 
 * Notify  heat user before an action is taken, and after - Heat may want
 to wait  for a response before proceeding - notifications not
 necessarily needed  and signed read-only queues might help, but not
 necessary
 * For integrating with user's tools
 * Monitoring
 * Control surface
 * Config management tools
 * Does not require notifications and/or read-only/signed queue endpoints
 *[These may be helpful, but were not brought up in the discussion]

This is perhaps the most important need. It would be fully satisfied by
out of order messages as long as we have guaranteed at least once
delivery.
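
A minimal consumer sketch of what that implies (the 'sequence' and 'id'
fields are made up; the producer is assumed to put them in the message body):

def process(messages, handle, seen_ids):
    # Tolerate at-least-once delivery and out-of-order arrival: re-order
    # on a producer-supplied sequence number and skip duplicates by a
    # producer-supplied id. handle() should itself be safe to re-run.
    for msg in sorted(messages, key=lambda m: m['body']['sequence']):
        msg_id = msg['body']['id']
        if msg_id in seen_ids:
            continue
        handle(msg['body'])
        seen_ids.add(msg_id)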

[for the rest, I've reduced the indent, as I don't think they were meant
to be underneath the one above]

 * Subscribe to an aggregate feed of interesting events from other
   open-stack components (such as Nova)
 * Heat is often deployed in a different place than other
   components and doesn't have access to the AMQP bus
 * Large  deployments consist of multiple AMQP brokers, and there
   doesn't seem to  be a nice way to aggregate all those events [need to
   confirm]

I've also heard tell that Ceilometer wants to be a sieve for these. I've
no idea why that makes sense, but I have heard it said.

 * Push metadata updates to os-collect-config agent running in
   servers, instead of having them poll Heat

This one is fine with an out of order durable queue.

 
 
 Few questions that I think we should start from:
 
 - Does the above list cover Heat's needs?
 - Which of the use cases listed above should be addressed first?
 - Can we split the above into milestones w/ due dates?
 
 
 Thanks,
 Flavio
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Zaqar] Integration plan moving forward

2014-09-19 Thread Clint Byrum
Excerpts from Flavio Percoco's message of 2014-09-19 02:37:08 -0700:
 On 09/18/2014 11:51 AM, Angus Salkeld wrote:
  
  On 18/09/2014 7:11 PM, Flavio Percoco fla...@redhat.com
  mailto:fla...@redhat.com wrote:
 
  Greetings,
 
  If I recall correctly, Heat was planning to adopt Zaqar regardless of
  the result of the graduation attempt (please correct me if I'm wrong).
  Based on this assumption, I'd like to start working on a plan forward to
  make this integration happen.
 
  So far, these are the use cases I've collected from past discussions:
 
  * Notify  heat user before an action is taken, and after - Heat may want
  to wait  for a response before proceeding - notifications not
  necessarily needed  and signed read-only queues might help, but not
  necessary
  * For integrating with user's tools
  * Monitoring
  * Control surface
  * Config management tools
  * Does not require notifications and/or read-only/signed queue
  endpoints
  *[These may be helpful, but were not brought up in the discussion]
  * Subscribe to an aggregate feed of interesting events from other
  open-stack components (such as Nova)
  * Heat is often deployed in a different place than other
  components and doesn't have access to the AMQP bus
  * Large  deployments consist of multiple AMQP brokers, and there
  doesn't seem to  be a nice way to aggregate all those events [need to
  confirm]
  * Push metadata updates to os-collect-config agent running in
  servers, instead of having them poll Heat
 
 
  Few questions that I think we should start from:
 
  - Does the above list cover Heat's needs?
  - Which of the use cases listed above should be addressed first?
  
  IMHO it would be great to simply replace the event store we have
  currently, so that the user can get a stream of progress messages during
  the deployment.
 
 Could you point me to the right piece of code and/or documentation so I
 can understand better what it does and where do you want it to go?

https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/event.py

We currently use db_api to store these in the database, which is costly.

Would be much better if we could just shove them into a message queue for
the user. It is problematic though, as we have event-list and event-show
in the Heat API which basically work the same as the things we've been
wanting removed from Zaqar's API: access by ID and pagination. ;)

I think ideally we'd deprecate those or populate them with nothing if
the user has opted to use messaging instead.
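
To make the "shove them into a message queue" option concrete, something
along these lines (assuming python-zaqarclient's v1 interface with Client,
queue() and post(), and hand-waving the auth conf; a sketch, not a design):

from zaqarclient.queues.v1 import client


def publish_event(zaqar_url, conf, stack_id, event):
    # Sketch only: push a Heat event into a per-stack queue instead of
    # (or in addition to) writing it to the database via db_api.
    cli = client.Client(zaqar_url, conf=conf)
    queue = cli.queue('heat-events-%s' % stack_id)
    queue.post({'body': event, 'ttl': 3600})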

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [metrics] New version of the Activity Board

2014-09-19 Thread Stefano Maffulli
Thank you Daniel, great job.

On 09/17/2014 09:03 AM, Daniel Izquierdo wrote:
 * Further work
 =
 - Add Juno release information

It's coming :)

 - Allow to have projects navigation per release

This is interesting

 - Add Askbot data per release

This is really not needed, don't spend time on it.

 - Add IRC data per release

As above, I don't think it makes sense to aggregate and display chat
data per release.

 - Improve navigation (feedback is more than welcome here).

One comment I collected: The data displayed is aggregated weekly but
that time unit is not displayed anywhere on the charts. Maybe the pop-up
balloons could say 'Week 1 May 2014', or 'week 1 2014' for the first week of
January 2014?

Cheers,
stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Donald Stufft

 On Sep 19, 2014, at 2:26 PM, Chmouel Boudjnah chmo...@enovance.com wrote:
 
 
 On Fri, Sep 19, 2014 at 6:58 PM, Donald Stufft don...@stufft.io 
 mailto:don...@stufft.io wrote:
 So you can remove all that code and just let requests/urllib3 handle it on 
 3.2+, 2.7.9+ and for anything less than that either use conditional 
 dependencies to have glance client depend on pyOpenSSL, ndg-httpsclient, and 
 pyasn1 on Python 2.x, or let them be optional and if people want to disable 
 TLS compression in those versions they can install those versions themselves.
 
 
 we have that issue as well for swiftclient, see the great write-up from 
 stuart here :
 
 https://answers.launchpad.net/swift/+question/196920 
 https://answers.launchpad.net/swift/+question/196920
 
  Just removing this and hoping that users run bleeding-edge Python (which 
  they don't) is not going to work for us, and the pyOpenSSL way is very 
  unfriendly to the end user as well.
 
 

Unfortunately those are the only options besides using a different TLS 
implementation besides pyOpenSSL all together.

The Python 2.x standard library did not include the requisite knobs for configuring 
this; it wasn't until Python 3.2+ that the ssl module in the standard library 
gained the ability to have these kinds of things applied to it. Python 2.7.9 
contains a backport of the 3.x ssl module to Python 2.7, so that's the first 
time in the 2.x line that the standard library has the knobs to change these 
things.

The alternative to 3.2+ or 2.7.9+ is using an alternative TLS implementation, 
of which pyOpenSSL is by far the most popular (and it’s what glanceclient is 
using right now).

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Clint Byrum
Excerpts from Ian Cordasco's message of 2014-09-18 10:33:04 -0700:
 
 On 9/18/14, 11:29 AM, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Donald Stufft's message of 2014-09-18 07:30:27 -0700:
  
   On Sep 18, 2014, at 10:18 AM, Clint Byrum cl...@fewbar.com wrote:
   
   Excerpts from Donald Stufft's message of 2014-09-18 04:58:06 -0700:
   
   On Sep 18, 2014, at 7:54 AM, Thomas Goirand z...@debian.org wrote:
   
   
   Linux distributions are not the end be all of distribution models
 and
   they don’t get to dictate to upstream.
   
   Well, distributions is where the final user is, and where software
 gets
   consumed. Our priority should be the end users.
   
   
   Distributions are not the only place that people get their software
 from,
   unless you think that the ~3 million downloads requests has received
   on PyPI in the last 30 days are distributions downloading requests to
   package in their OSs.
   
   
   Do pypi users not also need to be able to detect and fix any versions
   of libraries they might have? If one has some virtualenvs with various
   libraries and apps installed and no --system-site-packages, one would
   probably still want to run 'pip freeze' in all of them and find out
 what
   libraries are there and need to be fixed.
   
   Anyway, generally security updates require a comprehensive strategy.
   One common comprehensive strategy is version assertion.
   
   Vendoring complicates that immensely.
  
  It doesn’t really matter. PyPI doesn’t dictate to projects who host
 there what
  that project is allowed to do except in some very broad circumstances.
 Whether
  or not requests *should* do this doesn't really have any bearing on what
  Openstack should do to cope with it. The facts are that requests does
 it, and
  that people pulling things from PyPI is an actual platform that needs
 thought
  about.
  
  This leaves Openstack with a few reasonable/sane options:
  
  1) Decide that vendoring in requests is unacceptable to what Openstack
 as a
 project is willing to support, and cease the use of requests.
  2) Decide that what requests offers is good enough that it outweighs
 the fact
 that it vendors urllib3 and continue using it.
  
 
 There's also 3) fork requests, which is the democratic way to vote out
 an upstream that isn't supporting the needs of the masses.
 
 Given requests’ download count, I have to doubt that OpenStack users
 constitute the masses in this case.
 

This wasn't the masses from the requests stand point, but from the
OpenStack standpoint. Consider the case of a small island territory
of a much larger nation. At some point most of them have claimed their
independence from the larger nation unless the larger nation is willing
to step up and make them a full member with a real vote. This allows
them to act in their best interest. So even if it means a much more
difficult road, it is the road most advantageous to them.

Also upon reflection, it's a bit interesting that forking requests is
being dismissed so quickly, when in essence, requests maintains a fork
of urllib3 in tree (albeit, one that is just a fork from the _releases_,
not from the whole project).

 I don't think we're anywhere near there, but I wanted to make it clear
 there _is_ a more extreme option.
 
  If the 2nd option is chosen, then doing anything but supporting the
 fact that
  requests vendors urllib3 within the code that openstack writes is
 hurting the
  users who fetch these projects from PyPI because you don't agree with
 one of
  the choices that requests makes. By all means do conditional imports to
 lessen
  the impact that the choice requests has made (and the one that
 Openstack has
  made to use requests) on downstream distributors, but unconditionally
 importing
  from the top level urllib3 for use within requests is flat out wrong.
  
  Obviously neither of these options excludes the choice to lean on
 requests to
  reverse this decision as well. However that is best done elsewhere as
 the
  person making that decision isn't a member of these mailing lists as
 far as
  I am aware.
  
 
 To be clear, I think we should keep using requests. But we should lend
 our influence upstream and explain that our users are required to deal
 with this in a way that perhaps hasn't been considered or given the
 appropriate priority.
 
 It’s been considered several times. There have been multiple issues.
 There’s more than just the one you linked to. The decision is highly
 unlikely to change whether it’s coming from a group of people in OpenStack
 or another distribution package maintainer.
 

Indeed, hence my thinking that forking requests might be in order. Even
if that fork is just a long lived fork that stays mostly in sync, but
without urllib3 vendored. I think that has actually already happened in
the distros... so I wonder how painful it would be to do the same thing
on pypi, and let the distros just consume that.

Anyway, I'm not going to take that challenge on, 

[openstack-dev] [Elections] Nominations for OpenStack PTLs (Program Technical Leads) are now open

2014-09-19 Thread Anita Kuno
Nominations for OpenStack PTLs (Program Technical Leads) are now open
and will remain open until September 26, 2014 05:59 UTC.

To announce your candidacy, please start a new thread on the openstack-dev at
lists.openstack.org mailing list with the program name as a tag, for
example "[Glance] PTL Candidacy", with the body as your announcement of
intent. People who are not the candidate, please refrain from posting +1
to the candidate announcement posting.

I'm sure the electorate would appreciate a bit of information about why
you would make a great PTL and the direction you would like to take the
program, though it is not required for eligibility.

In order to be an eligible candidate (and be allowed to vote) in a given
PTL election, you need to have contributed an accepted patch to one of
the corresponding program's projects[0] during the Icehouse-Juno
timeframe (September 26, 2013 06:00 UTC to September 26, 2014 05:59 UTC).

We need to elect PTLs for 22 programs this round:
* Compute (Nova) - one position
* Object Storage (Swift) - one position
* Image Service (Glance) - one position
* Identity (Keystone) - one position
* Dashboard (Horizon) - one position
* Networking (Neutron) - one position
* Block Storage (Cinder) - one position
* Metering/Monitoring (Ceilometer) - one position
* Orchestration (Heat) - one position
* Database Service (Trove) - one position
* Bare metal (Ironic) - one position
* Common Libraries (Oslo) - one position
* Infrastructure - one position
* Documentation - one position
* Quality Assurance (QA) - one position
* Deployment (TripleO) - one position
* Release cycle management  - one position
* Data Processing Service (Sahara) - one position
* Message service (Zaqar) - one position
* Key Management Service (Barbican) - one position
* DNS Services (Designate) - one position
* Shared File Systems (Manila) - one position

Additional information about the nomination process can be found here:
https://wiki.openstack.org/wiki/PTL_Elections_September/October_2014

As Tristan and I confirm candidates, we will reply to each email thread
with confirmed and add each candidate's name to the list of confirmed
candidates on the above wiki page.

Elections will begin on September 26, 2014 after 06:00 UTC (as soon as
we get each election set up we will start it, it will probably be a
staggered start) and run until after 13:00 UTC October 3, 2014.

The electorate is requested to confirm their email address in gerrit,
review.openstack.org -> Settings -> Contact Information -> Preferred
Email, prior to September 26, 2014 05:59 UTC so that the emailed
ballots are mailed to the correct email address.

Happy running,
Anita Kuno (anteaya)

[0]
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Please do *NOT* use vendorized versions of anything (here: glanceclient using requests.packages.urllib3)

2014-09-19 Thread Mark Washenberger
On Fri, Sep 19, 2014 at 11:26 AM, Chmouel Boudjnah chmo...@enovance.com
wrote:


 On Fri, Sep 19, 2014 at 6:58 PM, Donald Stufft don...@stufft.io wrote:

 So you can remove all that code and just let requests/urllib3 handle it
 on 3.2+, 2.7.9+ and for anything less than that either use conditional
 dependencies to have glance client depend on pyOpenSSL, ndg-httpsclient,
 and pyasn1 on Python 2.x, or let them be optional and if people want to
 disable TLS compression in those versions they can install those versions
 themselves.



 we have that issue as well for swiftclient, see the great write-up from
 stuart here :

 https://answers.launchpad.net/swift/+question/196920

 just removing this and hoping that users run bleeding edge Python
 (which they don't) is not going to work for us. And the pyOpenSSL way is
 very unfriendly to the end-user as well.

 Chmouel

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I'm very sympathetic to Chmouel's assessment, but it seems like adding the
pyasn1 and ndg-httpsclient dependencies is relatively straightforward and
does not add substantial overhead to the install process. We already depend
on pyOpenSSL, which seems to be the main contributor to glanceclient's list
of unsavory dependencies. We would need to add ndg-httpsclient to
openstack/requirements as well, but I assume that is doable.
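
(For what it's worth, my understanding is that those dependencies get
consumed via urllib3's pyOpenSSL contrib module, roughly like the sketch
below; if I'm reading the requests source right, it does something very
similar itself at import time when the packages are importable.)

    # Sketch: assumes pyOpenSSL, ndg-httpsclient and pyasn1 are installed.
    try:
        from requests.packages.urllib3.contrib import pyopenssl
    except ImportError:
        from urllib3.contrib import pyopenssl

    # Swaps pyOpenSSL in for the stdlib ssl module, which is what gives
    # older Python 2.x interpreters SNI and saner TLS behaviour.
    pyopenssl.inject_into_urllib3()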

I'm a bit disappointed that even with the fix, the requests/urllib3 stack
is still trying to completely make this transport-level decision for me.
It's fine to have defaults, but it should be possible to override them.

For this release cycle, the best answer IMO is still just to switch to a
conditional import of requests.packages.urllib3 in our test module, which
was the original complaint.
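
Concretely, something along these lines is what I have in mind (just a
sketch):

    try:
        # Use the copy requests actually uses when it is vendored...
        from requests.packages.urllib3 import poolmanager
    except ImportError:
        # ...and fall back to the standalone package on de-vendored
        # installs (e.g. distro-patched requests).
        from urllib3 import poolmanager
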
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Set WIP for stale patches?

2014-09-19 Thread Jay Dobies

On 2014-09-18 15:21:20 -0400 (-0400), Jay Dobies wrote:

How many of the reviews that we WIP-1 will actually be revisited?

I'm sure there will be cases where a current developer forgetting
they had started on something, seeing the e-mail about the WIP-1,
and then abandoning the change.

But what about developers who have moved off the project entirely?
Is this only masking the problem of stale reviews from our review
stats and leaving the review queue to bloat?

[...]

What is review queue bloat in this scenario? How is a change
indefinitely left in Gerrit with workflow -1 set any different
from a change indefinitely left in Gerrit with abandoned set? It's
not like we go through and purge changes from Gerrit based on these,
and they take up just as much space and other resources in either
state.


Ah, ok. I assumed the abandoned ones were reaped over time. Perhaps it's 
just a matter of me writing different searches when I want to ignore the 
workflow -1s.
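
For what it's worth, a query along these lines (assuming the stock Gerrit
search operators) seems to do what I want, on top of whatever project:
filter I'm already using:

    status:open -label:Workflow=-1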


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Compute][Nova] Announcing my candidacy for PTL

2014-09-19 Thread Michael Still
I'd like another term as Compute PTL, if you'll have me.

We live in interesting times. OpenStack has clearly gained a large
amount of mind share in the open cloud marketplace, with Nova being a
very commonly deployed component. Yet, we don't have a fantastic
container solution, which is our biggest feature gap at this point.
Worse -- we have a code base with a huge number of bugs filed against
it, an unreliable gate because of subtle bugs in our code and
interactions with other OpenStack code, and have a continued need to
add features to stay relevant. These are hard problems to solve.

Interestingly, I think the solution to these problems calls for a
social approach, much like I argued for in my Juno PTL candidacy
email. The problems we face aren't purely technical -- we need to work
out how to pay down our technical debt without blocking all new
features. We also need to ask for understanding and patience from
those feature authors as we try and improve the foundation they are
building on.

The specifications process we used in Juno helped with these problems,
but one of the things we've learned from the experiment is that we
don't need to require specifications for all changes. Let's take an approach
where trivial changes (no API changes, only one review to implement)
don't require a specification. There will of course sometimes be
variations on that rule if we discover something, but it means that
many micro-features will be unblocked.

In terms of technical debt, I don't personally believe that pulling
all hypervisor drivers out of Nova fixes the problems we face, it just
moves the technical debt to a different repository. However, we
clearly need to discuss the way forward at the summit, and come up
with some sort of plan. If we do something like this, then I am not
sure that the hypervisor driver interface is the right place to do
that work -- I'd rather see something closer to the hypervisor itself
so that the Nova business logic stays with Nova.

Kilo is also the release where we need to get the v2.1 API work done
now that we finally have a shared vision for how to progress. It took
us a long time to get to a good shared vision there, so we need to
ensure that we see that work through to the end.

We live in interesting times, but they're exciting as well.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Zaqar and SQS Properties of Distributed Queues

2014-09-19 Thread Zane Bitter

On 18/09/14 10:55, Flavio Percoco wrote:

On 09/18/2014 04:24 PM, Clint Byrum wrote:

Great job highlighting what our friends over at Amazon are doing.

It's clear from these snippets, and a few other pieces of documentation
for SQS I've read, that the Amazon team approached SQS from a _massive_
scaling perspective. I think what may be forcing a lot of this frustration
with Zaqar is that it was designed with a much smaller scale in mind.

I think as long as that is the case, the design will remain in question.
I'd be comfortable saying that the use cases I've been thinking about
are entirely fine with the limitations SQS has.


I think these are pretty strong comments with not enough arguments to
defend them.


I actually more or less agree with Clint here. As Joe noted (and *thank 
you* Joe for starting this thread - the first one to compare Zaqar to 
something relevant!), SQS offers very, very limited guarantees, and it's 
clear that the reason for that is to make it massively, massively 
scalable in the way that e.g. S3 is scalable while also remaining 
comparably durable (S3 is supposedly designed for 11 nines, BTW).


Zaqar, meanwhile, seems to be promising the world in terms of 
guarantees. (And then taking it away in the fine print, where it says 
that the operator can disregard many of them, potentially without the 
user's knowledge.)


On the other hand, IIUC Zaqar does in fact have a sharding feature 
(Pools) which is its answer to the massive scaling question. I don't 
know enough details to comment further except to say that it evidently 
has been carefully thought about at some level, and it's really 
frustrating for the Zaqar folks when people just assume that it hasn't 
without doing any research. On the face of it sharding is a decent 
solution for this problem. Maybe we need to dig into the details and 
make sure folks are satisfied that there are no hidden dragons.



Saying that Zaqar was designed with a smaller scale in mind without
actually saying why you think so is not fair, besides not being true. So
please, do share why you think Zaqar was not designed for big scales and
provide comments that will help the project to grow and improve.

- Is it because of the storage technologies that have been chosen?
- Is it because of the API?
- Is it because of the programming language/framework?


I didn't read Clint and Devananda's comments as an attack on any of 
these things (although I agree that there have been far too many such 
attacks over the last 12 months from people who didn't bother to do 
their homework first). They're looking at Zaqar from first principles 
and finding that it promises too much, raising the concern the team may 
in future reach a point where they are unable to meet the needs of 
future users (perhaps for scaling reasons) without breaking existing 
users who have come to rely on those promises.



So far, we've just discussed the API semantics and not zaqar's
scalability, which makes your comments even more surprising.


What guarantees you offer can determine pretty much all of the design 
tradeoffs (e.g. latency vs. durability) that you have to make. Some of 
those (e.g. random access to messages) are baked in to the API, but 
others are not. It's actually a real concern to me to see elsewhere in 
this thread that you are punting to operators on many of the latter.


IMO the Zaqar team needs to articulate an opinionated vision of just 
what Zaqar actually is, and why it offers value. And by 'value' here I 
mean there should be $ signs attached.


For example, it makes no sense to me that Zaqar should ever be able to 
run in a mode that doesn't guarantee delivery of messages. There are a 
million and one easy, cheap ways to set up a system that _might_ deliver 
your message. One server running a message broker is sufficient. But if 
you want reliable delivery, then you'll probably need at least 3 (for 
still pretty low values of reliable). I did some back-of-the-envelope 
math with the AWS pricing and _conservatively_ for any application 
receiving 100k messages per hour (~30 per second) it's cheaper to use 
SQS than to spin up those servers yourself.


In other words, a service that *guarantees* delivery of messages *has* 
to be run by the cloud operator because for the overwhelming majority of 
applications, the user cannot do so economically.


(I'm assuming here that AWS's _relative_ pricing accurately reflects 
their _relative_ cost basis, which is almost certainly not strictly 
true, but I expect a close enough approximation for these purposes.)


What I would like to hear in this thread is:

Zaqar is We-Never-Ever-Ever-Ever-Lose-Your-Message as a Service 
(WNEEELYMaaS), and it has to be in OpenStack because only the cloud 
operator can provide that cost-effectively.


What I'm hearing instead is:

- We'll probably deliver your message.
- We can guarantee that we'll deliver your message, but only on clouds 
where the operator has chosen to configure Mongo with 

[openstack-dev] [neutron] Announcing my candidacy for PTL

2014-09-19 Thread Kyle Mestery
All:

I am writing to announce my candidacy for the OpenStack Networking
(Neutron) PTL role.

I am the current Neutron PTL, having led the project during the Juno
cycle. As a team, we have accomplished a lot during the Juno cycle. We
have achieved the goals outlined from the TC around nova-network
parity [1]. We have driven new community features such as Distributed
Virtual Router (DVR), L3 HA, and full IPV6 support, among many others.
In addition to new features, I have worked hard to promote openness
and collaboration across other OpenStack projects, and also across all
operators and vendors. Neutron has helped to lead the cross-project
third-party CI testing efforts by virtue of having this requirement
since the Icehouse release. I have made an attempt to clarify existing
Neutron policies [2] by documenting them with the rest of the Neutron
team. By making everything open and transparent, I’ve tried to make
Neutron as friendly and open a project as it can be.

Qualifications
--
I have been a core Neutron developer since the beginning of the Havana
cycle, actively working upstream since before that. I have previously
co-led the ML2 team during Havana and Icehouse, and during Juno I’ve
been working closely with the Infra and QA teams around third-party CI
testing for Neutron plugins and drivers. We have been trying to stay
ahead of the curve here, and
working closely with other projects having the same requirements. In
addition, I’ve remained
closely involved with the continuing evolution of the OpenDaylight
project and how it integrates
with OpenStack Neutron. Evolving the state of Open Source SDN
controllers and how they can
remain an integral part of Neutron’s future has been a priority for me
and will continue to be a
priority during the Kilo cycle.

Juno
--
I am proud of what our team accomplished in Juno. During Juno I worked
very closely with the
Neutron core team to ensure we had a solid release which addressed
stability, scalability, nova
network parity and added a sprinkling of new features as well. I was
able to organize and host the
Neutron mid-cycle sprint where we worked as a team to close the
remaining nova-network parity
gaps. I have also started to rotate the Neutron meeting every week
such that we encourage more
global participation. This was a big change as the Neutron meeting
hadn’t changed its time since
the start of the project, and it’s been great to see so many new faces
participating each week.

Looking Forward to Kilo

As Kilo approaches, I hope to lead us as a team into working on many
important enhancements
across both Neutron itself and the development aspects of Neutron. As
you can see from the
etherpad [3] we have created for Kilo Design Summit topics, there is a
long list of things there. As
PTL, I will help to sharpen the focus for Kilo around the items in
this list so we as a community
can come up with a consistent plan of what we hope to accomplish, and
drive as a team towards
achieving our goals.

I also want to continue evolving the way people contribute to Neutron.
There have been many
email threads on this on the openstack-dev mailing list, and
ultimately in Kilo we’ll need to make
some changes around the development process. It’s likely lots of these
changes will be cross
program and affect more than just Neutron, as the mailing list threads
have made obvious. As a
project, evolving how we work is an iterative process which should
lead to incremental gains.

Closing
--
Neutron has taken huge strides forward in Juno around community
measures such as nova
network parity, stability, and testing. In addition, we’ve added new
features which enhance
Neutron for operators. I hope to be given the chance to continue
leading Neutron forward during
Kilo by enhancing the development process to make it easier for
everyone to contribute, continue
to grow and evolve the core team, and produce a solid Kilo release
which meets all of the
community’s goals.

Thank you.
Kyle

[1] 
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
[2] https://wiki.openstack.org/wiki/NeutronPolicies
[3] https://etherpad.openstack.org/p/kilo-neutron-summit-topics

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota

2014-09-19 Thread Carlino, Chuck
I'm at HP, but not in the group that owns this effort, so I don't know what its
status is.  There is a Havana-based implementation floating around somewhere
inside HP.  I'll ask around to see what I can find out.

I'm pretty sure there's nothing going on in the community.

Chuck

On Sep 19, 2014, at 5:28 AM, Giuseppe Cossu giuseppe.co...@create-net.org 
wrote:

 Chuck,
 It seems quite interesting! Are HP or the community working to implement it?
 
 Giuseppe 
 
 -Original Message-
 From: Carlino, Chuck [mailto:chuck.carl...@hp.com]
 Sent: 19 September 2014 04:52
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota
 
 When you say 'associate to VMs', that would be associating to neutron
 ports,
 right?  If so, this is a subset of what is in:
 
 
 https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-framework
 
 https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-policy
 
 which also include things like bandwidth guarantees and security policy.
 I'm
 not sure if anyone is pursuing these right now, but there may be some
 useful
 ideas in there.
 
 Chuck
 
 
 On Sep 18, 2014, at 4:25 PM, Giuseppe Cossu giuseppe.co...@create-net.org wrote:
 
 Hello,
 I'm aware it's not so easy to define a solution, so I'll expose my idea.
 I was thinking about a network flavor that a tenant can associate to
 VMs.
 Basically the network flavour is a QoS policy.
 The admin can define the network flavors (Gold, Silver, ... call it as
 you
 want) with a set of parameters (some visible to user, some not).
 If we define this kind of flavour, a related quota should be defined to
 keep track of the network resources.
 
 Giuseppe
 
 From: Veiga, Anthony [mailto:anthony_ve...@cable.comcast.com]
 Sent: 10 September 2014 15:11
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota
 
 
 
 Using the quota system would be a nice option to have.
 
 Can you clarify what you mean by cumulative bandwidth for the tenant? It
 would
 be possible to rate limit at the tenant router, but having a cumulative
 limit
 enforced inside of a tenant would be difficult.
 
 On Wed, Sep 10, 2014 at 1:03 AM, Giuseppe Cossu giuseppe.co...@create-net.org wrote:
 
 Hello everyone,
 
 Looking at the QoS blueprint (proposed for incubation), I suggest
 considering adding some parameters to Neutron Quotas. Suppose we use
 rate limiting to manage QoS. The quota parameters could include
 rate_limit (per port) and max_bandwidth (per tenant). In this way it is
 possible to set/manage QoS quotas from the admin side, and for instance
 set the maximum bandwidth allowed per tenant (cumulative).
 
 What do you think about it?
 
 I'm cautious about this.  We'd either need to allow a number of DSCP
 settings and set them outside the quota, or leave it out altogether.
 Let's
 not forget that there's more than just rate limiting in QoS, and we need
 to
 make sure all the options are included.  Otherwise, there's going to be
 a lot
 of user and operator confusion as to what is and isn't considered part
 of the
 quota.
 -Anthony
 
 Regards,
 Giuseppe
 
 
 Giuseppe Cossu
 CREATE-NET
 
 
 ___
 OpenStack-dev mailing list
 
 OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 --
 Kevin Benton
 ___
 OpenStack-dev mailing list
 
 OpenStack-dev@lists.openstack.org
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-19 Thread Michael Still
On Fri, Sep 19, 2014 at 11:13 PM, Sean Dague s...@dague.net wrote:
 I've spent the better part of the last 2 weeks in the Nova bug tracker
 to try to turn it into something that doesn't cause people to run away
 screaming. I don't remember exactly where we started at open bug count 2
 weeks ago (it was north of 1400, with > 200 bugs in new, but it might
 have been north of 1600), but as of this email we're at < 1000 open bugs
 (I'm counting Fix Committed as closed, even though LP does not), and ~0
 new bugs (depending on the time of the day).

 == Philosophy in Triaging ==

 I'm going to lay out the philosophy of triaging I've had, because this
 may also set the tone going forward.

 A bug tracker is a tool to help us make a better release. It does not
 exist for its own good, it exists to help. Which means when evaluating
 what stays in and what leaves we need to evaluate if any particular
 artifact will help us make a better release. But also more importantly
 realize that there is a cost for carrying every artifact in the tracker.
 Resolving duplicates gets non-linearly harder as the number of artifacts
 go up. Triaging gets non-linearly hard as the number of artifacts go up.

 With this I was being somewhat pragmatic about closing bugs. An old bug
 that is just a stacktrace is typically not useful. An old bug that is a
 vague sentence that we should refactor a particular module (with no
 specifics on the details) is not useful. A bug reported against a very
 old version of OpenStack where the code has changed a lot in the
 relevant area, and there aren't responses from the author, is not
 useful. Not useful bugs just add debt, and we should get rid of them.
 That makes the chance of pulling a random bug off the tracker something
 that you could actually look at fixing, instead of mostly just stalling out.

 So I closed a lot of stuff as Invalid / Opinion that fell into those camps.

 == Keeping New Bugs at close to 0 ==

 After driving the bugs in the New state down to zero last week, I found
 it's actually pretty easy to keep it at 0.

 We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
 aren't actually a bug, and can be closed immediately. ~30% look like a
 bug, but don't have anywhere near enough information in them, and
 flipping them to incomplete with questions quickly means we have a real
 chance of getting the right info. ~10% are fixable in < 30 minutes worth
 of work. And the rest are real bugs, that seem to have enough to dive
 into it, and can be triaged into Confirmed, set a priority, and add the
 appropriate tags for the area.

On the bugs which would take less than 30 minutes, is that because
they're not bugs, or are they just trivial? It would be cool to be
adding the low-hanging-fruit tag to those bugs if you're not, because
we should just fix them.

 But, more importantly, this means we can filter bug quality on the way
 in. And we can also encourage bug reporters that are giving us good
 stuff, or even easy stuff, as we respond quickly.

 Recommendation #1: we adopt a 0 new bugs policy to keep this from
 getting away from us in the future.

Agreed, this was a goal we used to have back in the day and I'd like
to bring it back.

 == Our worst bug reporters are often core reviewers ==

 I'm going to pick on Dan Prince here, mostly because I have a recent
 concrete example, however in triaging the bug queue much of the core
 team is to blame (including myself).

 https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
 was set incomplete and no response. I'm almost 100% sure it's a dupe of
 the multiprocess bug we've been tracking down but it's so terse that you
 can't get to the bottom of it.

 There were a ton of 2012 nova bugs that were basically post it notes.
 Oh, we should refactor this function. Full stop. While those are fine
 for personal tracking, their value goes to zero probably 3 months after
 they are filed, especially if the reporter stops working on the issue at
 hand. Nova has plenty of wouldn't it be great if we...  ideas. I'm not
 convinced using bugs for those is useful unless we go and close them out
 aggressively if they stall.

 Also, if Nova core can't file a good bug, it's hard to set the example
 for others in our community.

 Recommendation #2: hey, Nova core, lets be better about filing the kinds
 of bugs we want to see! mkay!

 Recommendation #3: Let's create a tag for personal work items or
 something for these class of TODOs people are leaving themselves that
 make them a ton easier to cull later when they stall and no one else has
 enough context to pick them up.

I think we also get a lot of bugs filed almost immediately before a
fix. Sort of like a tracking mechanism for micro-features. Do we want
to continue doing that, or do we want to just let smallish things land
without a bug?

 == Tags ==

 The aggressive tagging that Tracy brought into the project has been
 awesome. It definitely helps slice out into better 

Re: [openstack-dev] Oslo final releases ready

2014-09-19 Thread Michael Still
Thanks. That's now approved and we're waiting for the merge.

Michael

On Fri, Sep 19, 2014 at 7:30 PM, Andrey Kurilin akuri...@mirantis.com wrote:
 Here you are!:)
 https://review.openstack.org/#/c/122667

 On Fri, Sep 19, 2014 at 9:02 AM, Michael Still mi...@stillhq.com wrote:

 I would like to do a python-novaclient release, but this requirements
 commit hasn't yet turned into a requirements proposal for novaclient
 (that I can find). Can someone poke that for me?

 Michael

 On Fri, Sep 19, 2014 at 12:04 AM, Doug Hellmann d...@doughellmann.com
 wrote:
  All of the final releases for the Oslo libraries for the Juno cycle are
  available on PyPI. I’m working on a couple of patches to the global
  requirements list to update the baseline in the applications. In all cases,
  the final release is a second tag on a previously released version.
 
  - oslo.config - 1.4.0 (same as 1.4.0.0a5)
  - oslo.db - 1.0.0 (same as 0.5.0)
  - oslo.i18n - 1.0.0 (same as 0.4.0)
  - oslo.messaging - 1.4.0 (same as 1.4.0.0a5)
  - oslo.rootwrap - 1.3.0 (same as 1.3.0.0a3)
  - oslo.serialization - 1.0.0 (same as 0.3.0)
  - oslosphinx - 2.2.0 (same as 2.2.0.0a3)
  - oslotest - 1.1.0 (same as 1.1.0.0a2)
  - oslo.utils - 1.0.0 (same as 0.3.0)
  - cliff - 1.7.0 (previously tagged, so not a new release)
  - stevedore - 1.0.0 (same as 1.0.0.0a2)
 
  Congratulations and *Thank You* to the Oslo team for doing an amazing
  job with graduations this cycle!
 
  Doug
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Best regards,
 Andrey Kurilin.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Compute][Nova] Announcing my candidacy for PTL

2014-09-19 Thread Anita Kuno
confirmed

On 09/19/2014 04:12 PM, Michael Still wrote:
 I'd like another term as Compute PTL, if you'll have me.
 
 We live in interesting times. OpenStack has clearly gained a large
 amount of mind share in the open cloud marketplace, with Nova being a
 very commonly deployed component. Yet, we don't have a fantastic
 container solution, which is our biggest feature gap at this point.
 Worse -- we have a code base with a huge number of bugs filed against
 it, an unreliable gate because of subtle bugs in our code and
 interactions with other OpenStack code, and have a continued need to
 add features to stay relevant. These are hard problems to solve.
 
 Interestingly, I think the solution to these problems calls for a
 social approach, much like I argued for in my Juno PTL candidacy
 email. The problems we face aren't purely technical -- we need to work
 out how to pay down our technical debt without blocking all new
 features. We also need to ask for understanding and patience from
 those feature authors as we try and improve the foundation they are
 building on.
 
 The specifications process we used in Juno helped with these problems,
 but one of the things we've learned from the experiment is that we
 don't require specifications for all changes. Let's take an approach
 where trivial changes (no API changes, only one review to implement)
 don't require a specification. There will of course sometimes be
 variations on that rule if we discover something, but it means that
 many micro-features will be unblocked.
 
 In terms of technical debt, I don't personally believe that pulling
 all hypervisor drivers out of Nova fixes the problems we face, it just
 moves the technical debt to a different repository. However, we
 clearly need to discuss the way forward at the summit, and come up
 with some sort of plan. If we do something like this, then I am not
 sure that the hypervisor driver interface is the right place to do
 that work -- I'd rather see something closer to the hypervisor itself
 so that the Nova business logic stays with Nova.
 
 Kilo is also the release where we need to get the v2.1 API work done
 now that we finally have a shared vision for how to progress. It took
 us a long time to get to a good shared vision there, so we need to
 ensure that we see that work through to the end.
 
 We live in interesting times, but they're exciting as well.
 
 Michael
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-19 Thread Ben Nemec
On 09/19/2014 08:13 AM, Sean Dague wrote:
 I've spent the better part of the last 2 weeks in the Nova bug tracker
 to try to turn it into something that doesn't cause people to run away
 screaming. I don't remember exactly where we started at open bug count 2
 weeks ago (it was north of 1400, with > 200 bugs in new, but it might
 have been north of 1600), but as of this email we're at < 1000 open bugs
 (I'm counting Fix Committed as closed, even though LP does not), and ~0
 new bugs (depending on the time of the day).
 
 == Philosophy in Triaging ==
 
 I'm going to lay out the philosophy of triaging I've had, because this
 may also set the tone going forward.
 
 A bug tracker is a tool to help us make a better release. It does not
 exist for its own good, it exists to help. Which means when evaluating
 what stays in and what leaves we need to evaluate if any particular
 artifact will help us make a better release. But also more importantly
 realize that there is a cost for carrying every artifact in the tracker.
 Resolving duplicates gets non-linearly harder as the number of artifacts
 go up. Triaging gets non-linearly hard as the number of artifacts go up.
 
 With this I was being somewhat pragmatic about closing bugs. An old bug
 that is just a stacktrace is typically not useful. An old bug that is a
 vague sentence that we should refactor a particular module (with no
 specifics on the details) is not useful. A bug reported against a very
 old version of OpenStack where the code has changed a lot in the
 relevant area, and there aren't responses from the author, is not
 useful. Not useful bugs just add debt, and we should get rid of them.
 That makes the chance of pulling a random bug off the tracker something
 that you could actually look at fixing, instead of mostly just stalling out.
 
 So I closed a lot of stuff as Invalid / Opinion that fell into those camps.
 
 == Keeping New Bugs at close to 0 ==
 
 After driving the bugs in the New state down to zero last week, I found
 it's actually pretty easy to keep it at 0.
 
 We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
 aren't actually a bug, and can be closed immediately. ~30% look like a
 bug, but don't have anywhere near enough information in them, and
 flipping them to incomplete with questions quickly means we have a real
 chance of getting the right info. ~10% are fixable in < 30 minutes worth
 of work. And the rest are real bugs, that seem to have enough to dive
 into it, and can be triaged into Confirmed, set a priority, and add the
 appropriate tags for the area.
 
 But, more importantly, this means we can filter bug quality on the way
 in. And we can also encourage bug reporters that are giving us good
 stuff, or even easy stuff, as we respond quickly.
 
 Recommendation #1: we adopt a 0 new bugs policy to keep this from
 getting away from us in the future.

We have this policy in TripleO, and to help keep it fresh in people's
minds Roman Podolyaka (IIRC) wrote an untriaged-bot for the IRC channel
that periodically posts a list of any New bugs.  I've found it very
helpful, so it's probably worth getting that into infra somewhere so
other people can use it too.
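
For anyone who wants to roll their own in the meantime, the core of it is
pretty small with launchpadlib (a rough sketch; the consumer name and
output format are just placeholders):

    from launchpadlib.launchpad import Launchpad

    # Anonymous, read-only access is enough for listing bugs.
    lp = Launchpad.login_anonymously('untriaged-check', 'production')
    nova = lp.projects['nova']
    for task in nova.searchTasks(status=['New']):
        print('%s  %s' % (task.web_link, task.bug.title))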

 
 == Our worst bug reporters are often core reviewers ==
 
 I'm going to pick on Dan Prince here, mostly because I have a recent
 concrete example, however in triaging the bug queue much of the core
 team is to blame (including myself).
 
 https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
 was set incomplete and no response. I'm almost 100% sure it's a dupe of
 the multiprocess bug we've been tracking down but it's so terse that you
 can't get to the bottom of it.
 
 There were a ton of 2012 nova bugs that were basically post it notes.
 Oh, we should refactor this function. Full stop. While those are fine
 for personal tracking, their value goes to zero probably 3 months after
 they are filed, especially if the reporter stops working on the issue at
 hand. Nova has plenty of wouldn't it be great if we...  ideas. I'm not
 convinced using bugs for those is useful unless we go and close them out
 aggressively if they stall.
 
 Also, if Nova core can't file a good bug, it's hard to set the example
 for others in our community.
 
 Recommendation #2: hey, Nova core, lets be better about filing the kinds
 of bugs we want to see! mkay!
 
 Recommendation #3: Let's create a tag for personal work items or
 something for these class of TODOs people are leaving themselves that
 make them a ton easier to cull later when they stall and no one else has
 enough context to pick them up.
 
 == Tags ==
 
 The aggressive tagging that Tracy brought into the project has been
 awesome. It definitely helps slice out into better functional areas.
 Here is the top of our current official tag list (and bug count):
 
 95 compute
 83 libvirt
 74 api
 68 vmware
 67 network
 41 db
 40 testing
 40 volumes
 36 ec2
 35 icehouse-backport-potential
 32 low-hanging-fruit
 31 

Re: [openstack-dev] [neutron] Announcing my candidacy for PTL

2014-09-19 Thread Anita Kuno
confirmed

On 09/19/2014 04:40 PM, Kyle Mestery wrote:
 All:
 
 I am writing to announce my candidacy for the OpenStack Networking
 (Neutron) PTL role.
 
 I am the current Neutron PTL, having led the project during the Juno
 cycle. As a team, we have accomplished a lot during the Juno cycle. We
 have achieved the goals outlined from the TC around nova-network
 parity [1]. We have driven new community features such as Distributed
 Virtual Router (DVR), L3 HA, and full IPV6 support, among many others.
 In addition to new features, I have worked hard to promote openness
 and collaboration across other OpenStack projects, and also across all
 operators and vendors. Neutron has helped to lead the cross-project
 third-party CI testing efforts by virtue of having this requirement
 since the Icehouse release. I have made an attempt to clarify existing
 Neutron policies [2] by documenting them with the rest of the Neutron
 team. By making everything open and transparent, I’ve tried to make
 Neutron as friendly and open a project as it can be.
 
 Qualifications
 --
 I have been a core Neutron developer since the beginning of the Havana
 cycle, actively working upstream since before that. I have previously
 co-led the ML2 team during Havana and Icehouse, and during Juno I’ve
 been working closely with the Infra and QA teams around third-party CI
 testing for Neutron plugins and drivers. We have been trying to stay
 ahead of the curve here, and
 working closely with other projects having the same requirements. In
 addition, I’ve remained
 closely involved with the continuing evolution of the OpenDaylight
 project and how it integrates
 with OpenStack Neutron. Evolving the state of Open Source SDN
 controllers and how they can
 remain an integral part of Neutron’s future has been a priority for me
 and will continue to be a
 priority during the Kilo cycle.
 
 Juno
 --
 I am proud of what our team accomplished in Juno. During Juno I worked
 very closely with the
 Neutron core team to ensure we had a solid release which addressed
 stability, scalability, nova
 network parity and added a sprinkling of new features as well. I was
 able to organize and host the
 Neutron mid-cycle sprint where we worked as a team to close the
 remaining nova-network parity
 gaps. I have also started to rotate the Neutron meeting every week
 such that we encourage more
 global participation. This was a big change as the Neutron meeting
 hadn’t changed its time since
 the start of the project, and it’s been great to see so many new faces
 participating each week.
 
 Looking Forward to Kilo
 
 As Kilo approaches, I hope to lead us as a team into working on many
 important enhancements
 across both Neutron itself and the development aspects of Neutron. As
 you can see from the
 etherpad [3] we have created for Kilo Design Summit topics, there is a
 long list of things there. As
 PTL, I will help to sharpen the focus for Kilo around the items in
 this list so we as a community
 can come up with a consistent plan of what we hope to accomplish, and
 drive as a team towards
 achieving our goals.
 
 I also want to continue evolving the way people contribute to Neutron.
 There have been many
 email threads on this on the openstack-dev mailing list, and
 ultimately in Kilo we’ll need to make
 some changes around the development process. It’s likely lots of these
 changes will be cross
 program and affect more than just Neutron, as the mailing list threads
 have made obvious. As a
 project, evolving how we work is an iterative process which should
 lead to incremental gains.
 
 Closing
 --
 Neutron has taken huge strides forward in Juno around community
 measures such as nova
 network parity, stability, and testing. In addition, we’ve added new
 features which enhance
 Neutron for operators. I hope to be given the chance to continue
 leading Neutron forward during
 Kilo by enhancing the development process to make it easier for
 everyone to contribute, continue
 to grow and evolve the core team, and produce a solid Kilo release
 which meets all of the
 community’s goals.
 
 Thank you.
 Kyle
 
 [1] 
 https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage
 [2] https://wiki.openstack.org/wiki/NeutronPolicies
 [3] https://etherpad.openstack.org/p/kilo-neutron-summit-topics
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] PTL Candidacy

2014-09-19 Thread Morgan Fainberg
Hello Everyone! 

After contributing consistently to OpenStack since the Grizzly cycle and more 
specifically to Keystone since Havana, I’d like to put my name into the hat for 
the Keystone PTL role during the Kilo release cycle. I’ve been a core developer 
on Keystone since the latter part of the Havana cycle and have largely been 
focused on the improvement of performance and consistency of the Keystone APIs, 
helping new developers contribute to OpenStack, and working cross-team to 
ensure the other projects have the support they need from Keystone to succeed.  

My primary interests for the project are the continued drive for stability and
improvement of the user experience. This direction involves finding a balance
between the desires for new features and improving upon what we’ve already 
developed. In the last two cycles I’ve seen an incredible move towards making 
Keystone a more full featured Authentication, Authorization, and Audit program. 
This in no small part is credited to the incredible team of contributors 
(whether they are operations-focused and providing feedback, developers working 
on cleaner enterprise integration such as federated identity, or anything in 
between).  

For the Kilo cycle I would like to see Keystone development focus on improving 
the experience for everyone interacting with the service. This continues to 
place a very heavy focus on improvement of the client and middleware 
(keystoneclient, keystonemiddleware, and the integration of the other OpenStack 
client libraries/cli tools with keystoneclient to use Sessions, pluggable auth, 
etc). This focus on client work will also be aimed at finishing the work needed 
to get all OpenStack projects fully utilizing and working with the Keystone V3 
API. 

In terms of the Keystone service itself, I would like to see a balance of 
somewhere about 25% new development (wholly new features) that are landed early 
in the release cycle and 75% of development efforts on improving the features 
we have as of the Juno release. This latter 75% would include continued 
enhancements to systems such as federation, expanded auth mechanisms, a heavy 
focus on overall performance (including a continued hard look at token 
performance), an improved focus on the tests to ensure we test and gate on
real-world deployment scenarios, and smoothing out the rough edges when 
interacting with Keystone’s APIs. 

In short, I think we’ve been largely heading the right direction with Keystone, 
but there are still a lot of things we can do to improve and in the process not 
only pay down some technical debt we may have accrued but make Keystone 
significantly better for our developers, deployers, and users. 

Last of all, I want to say that above and beyond everything else, as PTL, I am 
looking to support the outstanding community of developers so that we can 
continue Keystone’s success. Without the dedication and hard work of everyone 
who has contributed to Keystone we would not be where we are today. I am 
extremely pleased with how far we’ve come and look forward to seeing the 
continued success as we move into the Kilo release cycle and beyond not just 
for Keystone but all of OpenStack. 


Cheers, 
Morgan Fainberg


—
Morgan Fainberg



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] PTL Candidacy

2014-09-19 Thread Anita Kuno
confirmed

On 09/19/2014 05:14 PM, Morgan Fainberg wrote:
 Hello Everyone! 
 
 After contributing consistently to OpenStack since the Grizzly cycle and more 
 specifically to Keystone since Havana, I’d like to put my name into the hat 
 for the Keystone PTL role during the Kilo release cycle. I’ve been a core 
 developer on Keystone since the latter part of the Havana cycle and have 
 largely been focused on the improvement of performance and consistency of the 
 Keystone APIs, helping new developers contribute to OpenStack, and working 
 cross-team to ensure the other projects have the support they need from 
 Keystone to succeed.  
 
 My primary interests for the project are the continued drive for stability and
 improvement of the user experience. This direction involves finding a balance
 between the desires for new features and improving upon what we’ve already 
 developed. In the last two cycles I’ve seen an incredible move towards making 
 Keystone a more full featured Authentication, Authorization, and Audit 
 program. This in no small part is credited to the incredible team of 
 contributors (whether they are operations-focused and providing feedback, 
 developers working on cleaner enterprise integration such as federated 
 identity, or anything in between).  
 
 For the Kilo cycle I would like to see Keystone development focus on 
 improving the experience for everyone interacting with the service. This 
 continues to place a very heavy focus on improvement of the client and 
 middleware (keystoneclient, keystonemiddleware, and the integration of the 
 other OpenStack client libraries/cli tools with keystoneclient to use 
 Sessions, pluggable auth, etc). This focus on client work will also be aimed 
 at finishing the work needed to get all OpenStack projects fully utilizing 
 and working with the Keystone V3 API. 
 
 In terms of the Keystone service itself, I would like to see a balance of 
 somewhere about 25% new development (wholly new features) that are landed 
 early in the release cycle and 75% of development efforts on improving the 
 features we have as of the Juno release. This latter 75% would include 
 continued enhancements to systems such as federation, expanded auth 
 mechanisms, a heavy focus on overall performance (including a continued hard 
 look at token performance), an improved focus on the tests to ensure we
 test and gate on real-world deployment scenarios, and smoothing out the rough 
 edges when interacting with Keystone’s APIs. 
 
 In short, I think we’ve been largely heading the right direction with 
 Keystone, but there are still a lot of things we can do to improve and in the 
 process not only pay down some technical debt we may have accrued but make 
 Keystone significantly better for our developers, deployers, and users. 
 
 Last of all, I want to say that above and beyond everything else, as PTL, I 
 am looking to support the outstanding community of developers so that we can 
 continue Keystone’s success. Without the dedication and hard work of everyone 
 who has contributed to Keystone we would not be where we are today. I am 
 extremely pleased with how far we’ve come and look forward to seeing the 
 continued success as we move into the Kilo release cycle and beyond not just 
 for Keystone but all of OpenStack. 
 
 
 Cheers, 
 Morgan Fainberg
 
 
 —
 Morgan Fainberg
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-19 Thread Anita Kuno
On 09/19/2014 05:03 PM, Ben Nemec wrote:
 On 09/19/2014 08:13 AM, Sean Dague wrote:
 I've spent the better part of the last 2 weeks in the Nova bug tracker
 to try to turn it into something that doesn't cause people to run away
 screaming. I don't remember exactly where we started at open bug count 2
 weeks ago (it was north of 1400, with > 200 bugs in new, but it might
 have been north of 1600), but as of this email we're at < 1000 open bugs
 (I'm counting Fix Committed as closed, even though LP does not), and ~0
 new bugs (depending on the time of the day).

 == Philosophy in Triaging ==

 I'm going to lay out the philosophy of triaging I've had, because this
 may also set the tone going forward.

 A bug tracker is a tool to help us make a better release. It does not
 exist for its own good, it exists to help. Which means when evaluating
 what stays in and what leaves we need to evaluate if any particular
 artifact will help us make a better release. But also more importantly
 realize that there is a cost for carrying every artifact in the tracker.
 Resolving duplicates gets non-linearly harder as the number of artifacts
 go up. Triaging gets non-linearly hard as the number of artifacts go up.

 With this I was being somewhat pragmatic about closing bugs. An old bug
 that is just a stacktrace is typically not useful. An old bug that is a
 vague sentence that we should refactor a particular module (with no
 specifics on the details) is not useful. A bug reported against a very
 old version of OpenStack where the code has changed a lot in the
 relevant area, and there aren't responses from the author, is not
 useful. Not useful bugs just add debt, and we should get rid of them.
 That makes the chance of pulling a random bug off the tracker something
 that you could actually look at fixing, instead of mostly just stalling out.

 So I closed a lot of stuff as Invalid / Opinion that fell into those camps.

 == Keeping New Bugs at close to 0 ==

 After driving the bugs in the New state down to zero last week, I found
 it's actually pretty easy to keep it at 0.

 We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
 aren't actually a bug, and can be closed immediately. ~30% look like a
 bug, but don't have anywhere near enough information in them, and
 flipping them to incomplete with questions quickly means we have a real
 chance of getting the right info. ~10% are fixable in < 30 minutes worth
 of work. And the rest are real bugs, that seem to have enough to dive
 into it, and can be triaged into Confirmed, set a priority, and add the
 appropriate tags for the area.

 But, more importantly, this means we can filter bug quality on the way
 in. And we can also encourage bug reporters that are giving us good
 stuff, or even easy stuff, as we respond quickly.

 Recommendation #1: we adopt a 0 new bugs policy to keep this from
 getting away from us in the future.
 
 We have this policy in TripleO, and to help keep it fresh in people's
 minds Roman Podolyaka (IIRC) wrote an untriaged-bot for the IRC channel
 that periodically posts a list of any New bugs.  I've found it very
 helpful, so it's probably worth getting that into infra somewhere so
 other people can use it too.
Get me the url for the source code and the name you want the thing to be
called and I will cook up the patch to get it into stackforge.

Thanks Ben,
Anita.
 

 == Our worst bug reporters are often core reviewers ==

 I'm going to pick on Dan Prince here, mostly because I have a recent
 concrete example, however in triaging the bug queue much of the core
 team is to blame (including myself).

 https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
 was set incomplete and no response. I'm almost 100% sure it's a dupe of
 the multiprocess bug we've been tracking down but it's so terse that you
 can't get to the bottom of it.

 There were a ton of 2012 nova bugs that were basically post it notes.
 Oh, we should refactor this function. Full stop. While those are fine
 for personal tracking, their value goes to zero probably 3 months after
 they are filed, especially if the reporter stops working on the issue at
 hand. Nova has plenty of wouldn't it be great if we...  ideas. I'm not
 convinced using bugs for those is useful unless we go and close them out
 aggressively if they stall.

 Also, if Nova core can't file a good bug, it's hard to set the example
 for others in our community.

 Recommendation #2: hey, Nova core, lets be better about filing the kinds
 of bugs we want to see! mkay!

 Recommendation #3: Let's create a tag for personal work items or
 something for these class of TODOs people are leaving themselves that
 make them a ton easier to cull later when they stall and no one else has
 enough context to pick them up.

 == Tags ==

 The aggressive tagging that Tracy brought into the project has been
 awesome. It definitely helps slice out into better functional areas.
 Here is the top of our current 

Re: [openstack-dev] [Heat][Zaqar] Integration plan moving forward

2014-09-19 Thread Zane Bitter

On 19/09/14 14:34, Clint Byrum wrote:

Excerpts from Flavio Percoco's message of 2014-09-19 02:37:08 -0700:

On 09/18/2014 11:51 AM, Angus Salkeld wrote:


On 18/09/2014 7:11 PM, Flavio Percoco fla...@redhat.com wrote:


Greetings,

If I recall correctly, Heat was planning to adopt Zaqar regardless of
the result of the graduation attempt (please correct me if I'm wrong).
Based on this assumption, I'd like to start working on a plan forward to
make this integration happen.

So far, these are the use cases I've collected from past discussions:

* Notify Heat user before an action is taken, and after - Heat may want
to wait for a response before proceeding - notifications not
necessarily needed and signed read-only queues might help, but not
necessary
 * For integrating with user's tools
 * Monitoring
 * Control surface
 * Config management tools
 * Does not require notifications and/or read-only/signed queue

endpoints

 * [These may be helpful, but were not brought up in the discussion]
 * Subscribe to an aggregate feed of interesting events from other
OpenStack components (such as Nova)
 * Heat is often deployed in a different place than other
components and doesn't have access to the AMQP bus
 * Large deployments consist of multiple AMQP brokers, and there
doesn't seem to be a nice way to aggregate all those events [need to
confirm]
 * Push metadata updates to os-collect-config agent running in
servers, instead of having them poll Heat


Few questions that I think we should start from:

- Does the above list cover Heat's needs?
- Which of the use cases listed above should be addressed first?


IMHO it would be great to simply replace the event store we have
currently, so that the user can get a stream of progress messages during
the deployment.


Could you point me to the right piece of code and/or documentation so I
can understand better what it does and where you want it to go?


https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/event.py

We currently use db_api to store these in the database, which is costly.

Would be much better if we could just shove them into a message queue for
the user. It is problematic though, as we have event-list and event-show
in the Heat API which basically work the same as the things we've been
wanting removed from Zaqar's API: access by ID and pagination. ;)

I think ideally we'd deprecate those or populate them with nothing if
the user has opted to use messaging instead.


At least initially we'll need to do both, since not everybody has Zaqar. 
If we can not have events at all in v2 of the API that would be 
fantastic though :)
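
To make the idea concrete, the producer side could be as small as the
sketch below (written against what I understand the v1 client API to look
like; the queue name, endpoint and auth config are made up):

    from zaqarclient.queues.v1 import client

    # 'conf' would carry the auth options; contents depend on deployment.
    cli = client.Client('http://zaqar.example.com:8888', conf={})
    queue = cli.queue('heat-stack-events')
    queue.post({'body': {'resource': 'my_server',
                         'status': 'CREATE_COMPLETE'},
                'ttl': 3600})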


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-19 Thread Zane Bitter

On 09/09/14 05:52, Steven Hardy wrote:

Hi Sahdev,

On Tue, Sep 02, 2014 at 11:52:30AM -0400, Sahdev P Zala wrote:

Hello guys,

As you know, the heat-translator project was started early this year with
an aim to create a tool to translate non-Heat templates to HOT. It is a
StackForge project licensed under Apache 2. We have made good progress
with its development and a demo was given at the OpenStack 2014 Atlanta
summit during a half-a-day session that was dedicated to heat-translator
project and related TOSCA discussion. Currently the development and
testing is done with the TOSCA template format but the tool is designed to
be generic enough to work with templates other than TOSCA. There are five
developers actively contributing to the development. In addition, all
current Heat core members are already core members of the heat-translator
project.

Recently, I attended Heat Mid Cycle Meet Up for Juno in Raleigh and
updated the attendees on heat-translator project and ongoing progress. I
also requested everyone for a formal adoption of the project in the
python-heatclient and the consensus was that it is the right thing to do.
Also when the project was started, the initial plan was to make it
available in python-heatclient. Hereby, the heat-translator team would
like to make a request to have the heat-translator project to be adopted
by the python-heatclient/Heat program.


Obviously I wasn't at the meetup, so I may be missing some context here,
but can you answer some questions please?

- Is the scope for heat-translator only tosca simple-profile, or also the
   original more heavyweight tosca too?

- If it's only tosca simple-profile, has any thought been given to moving
   towards implementing support via a template parser plugin, rather than
   baking the translation into the client?


One idea we discussed at the meetup was to use the template-building 
code that we now have in Heat for building the HOT output from the 
translator - e.g. the translator would produce ResourceDefinition 
objects and add them to a HOTemplate object.


That would actually get us a long way toward an implementation of a 
template format plugin (which basically just has to spit out 
ResourceDefinition objects). So maybe that approach would allow us to 
start in python-heatclient and easily move it later into being a 
full-fledged template format plugin in Heat itself.
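
As a rough illustration of that idea (the parsed tosca_node object is
hypothetical, and the exact ResourceDefinition constructor arguments are an
assumption rather than the precise Heat API):

# Sketch: the translator emits ResourceDefinition objects instead of
# hand-assembling HOT YAML. Arguments are illustrative only.
from heat.engine import rsrc_defn

def translate_compute_node(tosca_node):
    # tosca_node stands in for a parsed TOSCA Compute node
    return rsrc_defn.ResourceDefinition(
        tosca_node.name,               # resource name in the stack
        'OS::Nova::Server',            # target HOT resource type
        properties={
            'flavor': tosca_node.properties.get('flavor'),
            'image': tosca_node.properties.get('image'),
        })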



While I see this effort as valuable, integrating the translator into the
client seems the worst of all worlds to me:

- Any users/services not interfacing with heat via python-heatclient can't use it


Yep, this is a big downside (although presumably we'd want to build in a 
way to just spit out the generated template that can be used by other 
clients).


On the other hand, there is a big downside to having it (only) in Heat 
also - you're dependent on the operator deciding to provide it.



- You preempt the decision about integration with any higher level services,
   e.g. Mistral, Murano, Solum, if you bake in the translator at the
   heat level.


Not sure I understand this one.


The scope question is probably key here - if you think the translator can
do (or will be able to do) a 100% non-lossy conversion to HOT using only
Heat, maybe it's time we considered discussing integration into Heat the
service rather than the client.


I'm open to that discussion too.


Conversely, if you're going to need other services to fully implement the
spec, it probably makes sense for the translator to remain layered over
heat (or integrated with another project which is layered over heat).

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] naming of provider template for docs

2014-09-19 Thread Zane Bitter

On 19/09/14 01:10, Mike Spreitzer wrote:

Angus Salkeld asalk...@mirantis.com wrote on 09/18/2014 09:33:56 PM:


Hi



I am trying to add some docs to openstack-manuals hot_guide about
using provider templates : https://review.openstack.org/#/c/121741/



Mike has suggested we use a different term, he thinks provider is
confusing.
I agree that at the minimum, it is not very descriptive.



Mike has suggested nested stack, I personally think this means something a
bit more general to many of us (it includes the concept of aws stacks) and may

I suggest template resource - note this is even the class name for
this exact functionality.

Thoughts?



Option 1) stay as is provider templates
Option 2) nested stack
Option 3) template resource


Thanks for rising to the documentation challenge and trying to get good
terminology.

I think your intent is to describe a category of resources, so your option
3 is superior to option 1 --- the thing being described is not a template,
it is a resource (made from a template).

I think

Option 4) custom resource


Custom resource has a specific and very different meaning in 
CloudFormation that is likely to come back to bite us if we overload it.


http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html


would be even better.  My problem with template resource is that, to
someone who does not already know what it means, this looks like it might
be a kind of resource that is a template (e.g., for consumption by some
other resource that does something with a template), rather than itself
being something made from a template.  If you want to follow this
direction to something perfectly clear, you might try templated resource
(which is a little better) or template-based resource (which I think is
pretty clear but a bit wordy) --- but an AWS::CloudFormation::Stack is
also based on a template.  I think that if you try for a name that really
says all of the critical parts of the idea, you will get something that is
too wordy and/or awkward.  It is true that custom resource begs the
question of how the user accomplishes her customization, but at least now
we have the reader asking the right question instead of being misled.

I agree that nested stack is a more general concept.  It describes the
net effect, which the things we are naming have in common with
AWS::CloudFormation::Stack.  I think it would make sense for our
documentation to say something like both an AWS::CloudFormation::Stack
and a custom resource are ways to specify a nested stack.


It more or less does, but you're welcome to propose a patch to clarify:

http://docs.openstack.org/developer/heat/glossary.html

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota

2014-09-19 Thread Kevin Benton
Well there is the QoS work that has been pushed to incubation or Kilo.

On Fri, Sep 19, 2014 at 1:40 PM, Carlino, Chuck chuck.carl...@hp.com wrote:
 I'm in HP, but not in the group that owns this effort, so I don't know what 
 its status is.  There is a havana-based implementation floating around 
 somewhere inside HP.  I'll ask around to see what I can find out.

 I'm pretty sure there's nothing going on in the community.

 Chuck

 On Sep 19, 2014, at 5:28 AM, Giuseppe Cossu giuseppe.co...@create-net.org 
 wrote:

 Chuck,
 It seems quite interesting! Is HP or the community working to implement it?

 Giuseppe

 -Original Message-
 From: Carlino, Chuck [mailto:chuck.carl...@hp.com]
 Sent: 19 September 2014 04:52
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota

 When you say 'associate to VMs', that would be associating to neutron
 ports,
 right?  If so, this is a subset of what is in:


 https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-framework

 https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-policy

 which also include things like bandwidth guarantees and security policy.
 I'm
 not sure if anyone is pursuing these right now, but there may be some
 useful
 ideas in there.

 Chuck


 On Sep 18, 2014, at 4:25 PM, Giuseppe Cossu giuseppe.co...@create-net.org wrote:

 Hello,
 I'm aware it's not so easy to define a solution, so I'll lay out my idea.
 I was thinking about a network flavor that a tenant can associate with
 VMs. Basically the network flavor is a QoS policy.
 The admin can define the network flavors (Gold, Silver, ... call them what
 you want) with a set of parameters (some visible to the user, some not).
 If we define this kind of flavor, a related quota should be defined to
 keep track of the network resources.

 Giuseppe

 From: Veiga, Anthony
 [mailto:anthony_ve...@cable.comcast.com]
 Sent: 10 September 2014 15:11
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota



 Using the quota system would be a nice option to have.

 Can you clarify what you mean by cumulative bandwidth for the tenant? It
 would
 be possible to rate limit at the tenant router, but having a cumulative
 limit
 enforced inside of a tenant would be difficult.

 On Wed, Sep 10, 2014 at 1:03 AM, Giuseppe Cossu giuseppe.co...@create-net.org wrote:

 Hello everyone,

 Looking at the QoS blueprint (proposed for incubation), I suggest to
 consider
 adding some parameters to Neutron Quotas. Let's suppose using rate-limit
 for
 managing QoS. The quota parameters could be such as rate_limit (per
 port) and
 max_bandwidth (per tenant). In this way it is possible to set/manage QoS
 quotas from the admin side, and for instance set the maximum bandwidth
 allowed
 per tenant (cumulative).

 What do you think about it?

 I'm cautious about this.  We'd either need to allow a Number of DSCP
 settings and set them outside the quota or leave it out altogether.
 Let's
 not forget that there's more than just rate limiting in QoS, and we need
 to
 make sure all the options are included.  Otherwise, there's going to be
 a lot
 of user and operator confusion as to what is and isn't considered part
 of the
 quota.
 -Anthony

 Regards,
 Giuseppe

 
 Giuseppe Cossu
 CREATE-NET
 

 ___
 OpenStack-dev mailing list

 OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Kevin Benton
 ___
 OpenStack-dev mailing list

 OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Kevin Benton

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota

2014-09-19 Thread Kevin Benton
Sorry, forgot to include a link.
https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api

On Fri, Sep 19, 2014 at 4:45 PM, Kevin Benton blak...@gmail.com wrote:
 Well there is the QoS work that has been pushed to incubation or Kilo.

 On Fri, Sep 19, 2014 at 1:40 PM, Carlino, Chuck chuck.carl...@hp.com wrote:
 I'm in HP, but not in the group that owns this effort, so I don't know what 
 its status is.  There is a havana-based implementation floating around 
 somewhere inside HP.  I'll ask around to see what I can find out.

 I'm pretty sure there's nothing going on in the community.

 Chuck

 On Sep 19, 2014, at 5:28 AM, Giuseppe Cossu giuseppe.co...@create-net.org 
 wrote:

 Chuck,
 It seems quite interesting! Is HP or the community working to implement it?

 Giuseppe

 -Original Message-
 From: Carlino, Chuck [mailto:chuck.carl...@hp.com]
 Sent: 19 September 2014 04:52
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota

 When you say 'associate to VMs', that would be associating to neutron
 ports,
 right?  If so, this is a subset of what is in:


 https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-framework

 https://blueprints.launchpad.net/neutron/+spec/neutron-port-template-policy

 which also include things like bandwidth guarantees and security policy.
 I'm
 not sure if anyone is pursuing these right now, but there may be some
 useful
 ideas in there.

 Chuck


 On Sep 18, 2014, at 4:25 PM, Giuseppe Cossu giuseppe.co...@create-net.org wrote:

 Hello,
 I'm aware it's not so easy to define a solution, so I'll lay out my idea.
 I was thinking about a network flavor that a tenant can associate with
 VMs. Basically the network flavor is a QoS policy.
 The admin can define the network flavors (Gold, Silver, ... call them what
 you want) with a set of parameters (some visible to the user, some not).
 If we define this kind of flavor, a related quota should be defined to
 keep track of the network resources.

 Giuseppe

 From: Veiga, Anthony
 [mailto:anthony_ve...@cable.comcast.com]
 Sent: 10 September 2014 15:11
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][QoS] Applying QoS as Quota



 Using the quota system would be a nice option to have.

 Can you clarify what you mean by cumulative bandwidth for the tenant? It
 would
 be possible to rate limit at the tenant router, but having a cumulative
 limit
 enforced inside of a tenant would be difficult.

 On Wed, Sep 10, 2014 at 1:03 AM, Giuseppe Cossu giuseppe.co...@create-net.org wrote:

 Hello everyone,

 Looking at the QoS blueprint (proposed for incubation), I suggest to
 consider
 adding some parameters to Neutron Quotas. Let's suppose using rate-limit
 for
 managing QoS. The quota parameters could be such as rate_limit (per
 port) and
 max_bandwidth (per tenant). In this way it is possible to set/manage QoS
 quotas from the admin side, and for instance set the maximum bandwidth
 allowed
 per tenant (cumulative).

 What do you think about it?

 I'm cautious about this.  We'd either need to allow a Number of DSCP
 settings and set them outside the quota or leave it out altogether.
 Let's
 not forget that there's more than just rate limiting in QoS, and we need
 to
 make sure all the options are included.  Otherwise, there's going to be
 a lot
 of user and operator confusion as to what is and isn't considered part
 of the
 quota.
 -Anthony

 Regards,
 Giuseppe

 
 Giuseppe Cossu
 CREATE-NET
 

 ___
 OpenStack-dev mailing list

 OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Kevin Benton
 ___
 OpenStack-dev mailing list

 OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Kevin Benton



-- 
Kevin Benton

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [neutron] vxlan port iptables configuration

2014-09-19 Thread Kevin Benton
This is the responsibility of the deployment tool. The iptables
firewall driver only handles firewall rules for the VM ports.

On Fri, Sep 19, 2014 at 6:28 AM, Andreas Scheuring
scheu...@linux.vnet.ibm.com wrote:
 Hi
 I just was playing around with various neutron-openvswitch-agent vxlan
 configurations. The default port for vxlan traffic is 4789. I had
 expected that when the neutron-openvswitch-agent reads the configured
 vxlan port (or gets the default) it also would add an iptables rule to
 allow incoming traffic via this port. But this did not happen.


 Is it because such an iptables setup is considered hypervisor setup that
 is not done by OpenStack? Or should this be the job of the firewall
 driver (in my case the OVS hybrid iptables firewall driver)?

 Any thoughts on this?

 Thanks


 --
 Andreas
 (irc: scheuran)




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Kevin Benton

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] naming of provider template for docs

2014-09-19 Thread Qiming Teng
On Fri, Sep 19, 2014 at 11:20:43AM +0200, Thomas Spatzier wrote:
  From: Mike Spreitzer mspre...@us.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 19/09/2014 07:15
  Subject: Re: [openstack-dev] [Heat] naming of provider template for docs
 
  Angus Salkeld asalk...@mirantis.com wrote on 09/18/2014 09:33:56 PM:
 
   Hi
 
   I am trying to add some docs to openstack-manuals hot_guide about
   using provider templates : https://review.openstack.org/#/c/121741/
 
   Mike has suggested we use a different term, he thinks provider is
   confusing.
   I agree that at the minimum, it is not very descriptive.
 
   Mike has suggested nested stack, I personally think this means
  something a
   bit more general to many of us (it includes the concept of aws
  stacks) and may
   I suggest template resource - note this is even the class name for
   this exact functionality.
  
   Thoughts?
 
   Option 1) stay as is provider templates
   Option 2) nested stack
   Option 3) template resource
 
 Out of those 3 I like #3 the most, even though not perfect as Mike
 discussed below.

 
  Thanks for rising to the documentation challenge and trying to get
  good terminology.
 
  I think your intent is to describe a category of resources, so your
  option 3 is superior to option 1 --- the thing being described is
  not a template, it is a resource (made from a template).
 
  I think
 
  Option 4) custom resource
 
 That one sounds too generic to me, since custom Python-based resource
 plugins are also custom resources.

+1.

'Custom resource' may cause more confusion.

 
  would be even better.  My problem with template resource is that,
  to someone who does not already know what it means, this looks like
  it might be a kind of resource that is a template (e.g., for
  consumption by some other resource that does something with a
  template), rather than itself being something made from a template.
  If you want to follow this direction to something perfectly clear,
  you might try templated resource (which is a little better) or
  template-based resource (which I think is pretty clear but a bit
  wordy) --- but an AWS::CloudFormation::Stack is also based on a
  template.  I think that if you try for a name that really says all
  of the critical parts of the idea, you will get something that is
  too wordy and/or awkward.  It is true that custom resource begs
  the question of how the user accomplishes her customization, but at
  least now we have the reader asking the right question instead of
  being misled.
 
 I think template-based resource really captures the concept best. And it
 is not too wordy IMO.
 If it helps to explain the concept intuitively, I would be in favor of it.

Agreed. If it sounds too wordy, just using 'template resource' would be
okay.

 Regards,
 Thomas
 
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] usability anti-pattern

2014-09-19 Thread Monty Taylor
Hey,

Not to name names, but some of our client libs do this:

  client.Client(API_VERSION, os_username, ... )

I'm pretty sure they got the idea from python-glanceclient, so I blame
Brian Waldon, since he left us for CoreOS.

PLEASE STOP DOING THIS - IT CAUSES BABIES TO CRY. MORE.

As a developer, I have no way of knowing what to put here. Also, imagine
I'm writing a script that wants to talk to more than one cloud to do
things - like, say, nodepool for Infra, or an ansible openstack
inventory module. NOW WHAT? What do I put??? How do I discover that?

Let me make a suggestion...

Default it to something. Make it an optional parameter for experts. THEN
- when the client lib talks to keystone, check the service catalog for
the API version.
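
Roughly something like this sketch (hypothetical helper, not any particular
client's API; the session object is assumed to expose get_endpoint() the way
keystoneclient's Session does):

import re

DEFAULT_VERSION = '2'  # assumption: whatever the project's sane default is

def discover_api_version(session, service_type, default=DEFAULT_VERSION):
    # With python-keystoneclient this would map onto Session.get_endpoint().
    endpoint = session.get_endpoint(service_type=service_type) or ''
    # Many catalogs embed the version in the endpoint URL, e.g. .../v2.0
    match = re.search(r'/v(\d+(?:\.\d+)?)/*$', endpoint)
    return match.group(1) if match else default

Then Client() can take version=None and only experts ever pass one explicitly.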

What's this you say? Sometimes your service doesn't expose a version in
the keystone catalog?

PLEASE STOP DOING THIS - IT CAUSES DOLPHINS TO WEEP

If you have versioned APIs, put the version in keystone. Because
otherwise I, as a developer, have absolutely zero way to figure it out.

Well, except for the algorithm jeblair suggested: "just start with 11
and count backwards until a number works".
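
Which, spelled out (purely tongue in cheek; try_version is a stand-in for
"make a request and see if it works"):

def guess_version(try_version, start=11):
    # The jeblair algorithm: count backwards until something answers.
    for major in range(start, 0, -1):
        if try_version('v%d' % major):
            return 'v%d' % major
    raise RuntimeError('no API version worked')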

This message brought to you by frustrated humans trying to use the cloud.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] usability anti-pattern, part 2

2014-09-19 Thread Monty Taylor
except exc.Unauthorized:
    raise exc.CommandError("Invalid OpenStack credentials.")
except exc.AuthorizationFailure:
    raise exc.CommandError("Unable to authorize user")

This is pervasive enough that both of those exceptions come from
openstack.common.

Anyone?

Please. Explain the difference. In words.

Here's what I'm hoping. I'm hoping that AuthorizationFailure is just an
old deprecated thing. I'm hoping the difference is NOT that one is "I
know you but you're not allowed" and the other is "I don't know you" -
because that's actually insecure.

I'm guessing that what it actually is is that randomly some things
return one, some things return the other, and there is absolutely no
rhyme nor reason. Or, more likely, that termie liked the spelling of one
of them better.
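
In the meantime callers pretty much have to treat the two identically.
Roughly (illustrative only; these exception names happen to exist in
python-keystoneclient, but the same pattern applies to whichever exc module
a given client ships):

from keystoneclient import exceptions as ks_exc

def authenticate(auth_func, *args, **kwargs):
    try:
        return auth_func(*args, **kwargs)
    except (ks_exc.Unauthorized, ks_exc.AuthorizationFailure):
        # Deliberately don't distinguish "bad credentials" from "auth
        # service refused us" - leaking that difference is the insecure
        # part called out above.
        raise SystemExit('Authentication failed; check your credentials '
                         'and auth URL.')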

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Monty Taylor
On 09/19/2014 03:29 AM, Thierry Carrez wrote:
 Monty Taylor wrote:
 I've recently been thinking a lot about Sean's Layers stuff. So I wrote
 a blog post which Jim Blair and Devananda were kind enough to help me edit.

 http://inaugust.com/post/108
 
 Hey Monty,
 
 As you can imagine, I read that post with great attention. I generally
 like the concept of a tightly integrated, limited-by-design layer #1
 (I'd personally call it Ring 0) and a large collection of OpenStack
 things gravitating around it. That would at least solve the attraction
 of the integrated release, suppress the need for incubation, foster
 competition/innovation within our community, and generally address the
 problem of community scaling. There are a few details on the
 consequences though, and in those as always the devil lurks.
 
 ## The Technical Committee
 
 The Technical Committee is defined in the OpenStack bylaws, and is the
 representation of the contributors to the project. Teams work on code
 repositories, and at some point ask their work to be recognized as part
 of OpenStack. In doing so, they place their work under the oversight
 of the Technical Committee. In return, team members get to participate
 in electing the technical committee members (they become ATC). It's a
 balanced system, where both parties need to agree: the TC can't force
 itself as overseer of a random project, and a random project can't just
 decide by itself it is OpenStack.
 
 I don't see your proposal breaking that balanced system, but it changes
 its dynamics a bit. The big tent would contain a lot more members. And
 while the TC would arguably bring a significant share of its attention
 to Ring 0, its voters constituency would mostly consist of developers
 who do not participate in Ring 0 development. I don't really see it as
 changing dramatically the membership of the TC, but it's a consequence
 worth mentioning.

Agree. I'm willing to bet it'll be better, not worse, to have a large
constituency - but it's also possible that it's a giant disaster. I'm
still on board with going for it.

 ## Programs
 
 Programs were created relatively recently as a way to describe which
 teams are in OpenStack vs. which ones aren't. They directly tie into
 the ATC system: if you contribute to code repositories under a blessed
 program, then you're an ATC, you vote in TC elections and the TC has
 some oversight over your code repositories. Previously, this was granted
 at a code repository level, but that failed to give flexibility for
 teams to organize their code in the most convenient manner for them. So
 we started to bless teams rather than specific code repositories.
 
 Now, that didn't work out so well. Some programs were a 'theme', like
 Infrastructure, or Docs. For those, competing efforts do not really make
 sense: there can only be one, and competition should happen inside those
 efforts rather than outside. Some programs were a 'team', like
 Orchestration/Heat or Deployment/TripleO. And that's where the model
 started to break: some new orchestration things need space, but the
 current Heat team is not really interested in maintaining them. What's
 the point of being under the same program then ? And TripleO is not the
 only way to deploy OpenStack, but its mere existence (and name)
 prevented other flowers to bloom in our community.
 
 You don't talk much about programs in your proposal. In particular, you
 only mention layer 1, Cloud Native applications, User Interface
 applications, and Operator applications. So I'm unsure of where, if
 anywhere, would Infrastructure or Docs repositories live.
 
 Here is how I see it could work. We could keep 'theme' programs (think
 Infra, Release cycle management, Docs, QA) with their current structure
 (collection of code repositories under a single team/PTL). We would get
 rid of 'team' programs completely, and just have a registry of
 OpenStack code repositories (openstack*/*, big tent). Each of those
 could have a specific PTL, or explicitely inherit its PTL from another
 code repository. Under that PTL, they could have separate or same core
 review team, whatever maps reality and how they want to work (so we
 could say that openstack/python-novaclient inherits its PTL from
 openstack/nova and doesn't need a specific one). That would allow us to
 map anything that may come in. Oslo falls a bit in between, could be
 considered a 'theme' program or several repos sharing PTL.

I think we can do what you're saying and generalize a little bit. What
if we declared programs, as needed, when we think there is a need to
pick a winner? (I think we can all agree that early winner picking is
an unintended but very real side effect of the current system.)

And when I say "need to" - I mean it in the same sense as "Production
Ready". The themes you listed are excellent ones - it makes no sense to
have two Infras, two QAs or two Docs teams. On the other hand, maybe
someone else wants to take a stab at the general problem space that

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Monty Taylor
On 09/19/2014 10:14 AM, John Dickinson wrote:
 
 On Sep 19, 2014, at 5:46 AM, John Griffith
 john.griff...@solidfire.com wrote:
 
 
 
 On Fri, Sep 19, 2014 at 4:33 AM, Thierry Carrez
 thie...@openstack.org wrote: Vishvananda Ishaya wrote:
 Great writeup. I think there are some great concrete suggestions
 here.
 
 A couple more:
 
 1. I think we need a better name for Layer #1 that actually
 represents what the goal of it is: Infrastructure Services? 2. We
 need to be be open to having other Layer #1s within the
 community. We should allow for similar collaborations and group
 focus to grow up as well. Storage Services? Platform Services?
 Computation Services?
 
 I think that would nullify most of the benefits of Monty's
 proposal. If we keep on blessing themes or special groups, we'll
 soon be back at step 0, with projects banging on the TC door to
 become special, and companies not allocating resources to anything
 that's not special.
 
 -- Thierry Carrez (ttx)
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ​Great stuff, mixed on point 2 raised by Vish but honestly I think
 that's something that could evolve over time, but I looked at that
 differently as in Cinder, SWIFT and some day Manilla live under a
 Storage Services umbrella, and ideally at some point there's some
 convergence there.
 
 Anyway, I don't want to start a rat-hole on that, it's kind of
 irrelevant right now.  Bottom line is I think the direction and
 initial ideas in Monty's post are what a lot of us have been
 thinking about and looking for.  I'm in!!​
 
 
 I too am generally supportive of the concept, but I do want to think
 about the vishy/tts/jgriffith points above.
 
 It's interesting that the proposed layer #1 stuff is very very
 similar to what was originally in OpenStack at the very beginning as
 Nova. Over time, many of these pieces of functionality required for
 compute were split out (block, networking, image, etc), and I think
 that's why so many people look at these pieces and say (rightly), of
 course these are required all together and tightly coupled. That's
 how these projects started, and we still see evidence of their birth
 today.
 
 For that reason, I do agree with Vish that there should be similar
 collaborations for other things. While the layer #1 (or compute)
 use case is very common, we can all see that it's not the only one
 that people are solving with OpenStack parts. And this is reflected
 in the products build and sold by companies, too. Some sell one
 subset of openstack stuff as product X and maybe a different subset
 as product Y. (The most common example here is compute vs object
 storage.) This reality has led to a lot of the angst around
 definitions since there is effort to define openstack all as one
 thing (or worse, as a base thing that others are defined as built
 upon).
 
 I propose that we can get the benefits of Monty's proposal and
 implement all of his concrete suggestions (which are fantastic) by
 slightly adjusting our usage of the program/project concepts.
 
 I had originally hoped that the program concept would have been a
 little higher-level instead of effectively spelling project as
 program. I'd love to see a hierarchy of
 openstack-program-project/team-repos. Right now, we have added the
 program layer but have effectively mapped it 1:1 to the project.
 For example, we used to have a few repos in the Swift project managed
 by the same group of people, and now we have a few repos in the
 object storage program, all managed by the same group of people.
 And every time something is added to OpenStack, its added as a new
 program, effectively putting us exactly where we were before we
 called it a program with the same governance and management scaling
 problems.
 
 Today, I'd group existing OpenStack projects into programs as
 follows:
 
 Compute: nova, sahara, ironic
 Storage: swift, cinder, glance, trove
 Network: neutron, designate, zaqar
 Deployment/management: heat, triple-o, horizon, ceilometer
 Identity: keystone, barbican Support
 (not user facing): infra, docs, tempest, devstack, oslo (potentially
 even) stackforge: lots
 
 I like two things about this. First, it allows people to compose a
  solution. Second, it allows us as a community to think more about the
  strategic/product things. For example, it lets us as a community say,
  "We think storage is important. How are we solving it today? What
  gaps do we have in that? How can the various storage things we have
  work together better?"
 
 Thierry makes the point that more themes will nullify the benefits
 of Monty's proposal. I agree, if we continue to allow the explosion
 of projects/programs/themes to continue. The benefit of what Monty is
 proposing is that it identifies and focusses on a particular use case
 (deploy a VM, add a volume, get an IP, configure a domain) so that we
 know we 

Re: [openstack-dev] Thoughts on OpenStack Layers and a Big Tent model

2014-09-19 Thread Monty Taylor
On 09/19/2014 10:50 AM, Vishvananda Ishaya wrote:
 
 On Sep 19, 2014, at 10:14 AM, John Dickinson m...@not.mn wrote:
 

 On Sep 19, 2014, at 5:46 AM, John Griffith john.griff...@solidfire.com 
 wrote:



 On Fri, Sep 19, 2014 at 4:33 AM, Thierry Carrez thie...@openstack.org 
 wrote:
 Vishvananda Ishaya wrote:
 Great writeup. I think there are some great concrete suggestions here.

 A couple more:

 1. I think we need a better name for Layer #1 that actually represents 
 what the goal of it is: Infrastructure Services?
 2. We need to be be open to having other Layer #1s within the community. 
 We should allow for similar collaborations and group focus to grow up as 
 well. Storage Services? Platform Services? Computation Services?

 I think that would nullify most of the benefits of Monty's proposal. If
 we keep on blessing themes or special groups, we'll soon be back at
 step 0, with projects banging on the TC door to become special, and
 companies not allocating resources to anything that's not special.

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ​Great stuff, mixed on point 2 raised by Vish but honestly I think that's 
 something that could evolve over time, but I looked at that differently as 
 in Cinder, SWIFT and some day Manilla live under a Storage Services 
 umbrella, and ideally at some point there's some convergence there.

 Anyway, I don't want to start a rat-hole on that, it's kind of irrelevant 
 right now.  Bottom line is I think the direction and initial ideas in 
 Monty's post are what a lot of us have been thinking about and looking for. 
  I'm in!!​


 I too am generally supportive of the concept, but I do want to think about 
 the vishy/tts/jgriffith points above.

 Today, I'd group existing OpenStack projects into programs as follows:

 Compute: nova, sahara, ironic
 Storage: swift, cinder, glance, trove
 Network: neutron, designate, zaqar
 Deployment/management: heat, triple-o, horizon, ceilometer
 Identity: keystone, barbican
 Support (not user facing): infra, docs, tempest, devstack, oslo
 (potentially even) stackforge: lots
 
 There is a pretty different division of things in this breakdown than in what 
 monty was proposing. This divides things up by conceptual similarity which I 
 think is actually less useful than breaking things down by use case. I really 
 like the grouping and testing of things which are tightly coupled.
 
 If we say launching a VM and using it is the primary use case of our 
  community currently, then things group into Monty's layer #1. It seems fairly
 clear that a large section of our community is focused on this use case so 
 this should be a primary focus of infrastructure resources.
 
 There are other use cases in our community, for example:
 
 Object Storage: Swift (depends on keystone)
 Orchestrating Multiple VMs: Heat (depends on layer1)
  DBaaS: Trove (depends on heat)
 
 These are also important use cases for parts of our community, but swift has 
  demonstrated that it isn't required to be a part of an integrated release
 schedule, so these could be managed by smaller groups in the community. Note 
 that these are primarily individual projects today, but I could see a future 
 where some of these projects decide to group and do an integrated release. In 
 the future we might see (totally making this up):
 
 Public Cloud Application services: Trove, Zaqar
 Application Deployment services: Heat, Murano
 Operations services: Ceilometer, Congress
 
 As I mentioned previously though, I don’t think we need to define these 
 groups in advance. These groups can organize as needed.

I'm kinda interested to see what self-organization happens...



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] usability anti-pattern, part 2

2014-09-19 Thread Dean Troyer
[OK, I'll bite...just once more...because dammit I want this crap fixed
too.]

I know you know this Monty, but for the benefit of the folks who don't, the
client library situation is a result of them belonging to the projects they
serve, each one[0] forked from a different one forked from jkm's original,
without having any sort of mechanism to stay in sync, like a cross-project
(BINGO!) effort to keep things consistent.

We may not want a BDFL, but we NEED someone to say NO when necessary for
the sake of the entire project.  Jeez, now I'm sounding all enterprisey.

On Fri, Sep 19, 2014 at 9:01 PM, Monty Taylor mord...@inaugust.com wrote:

 except exc.Unauthorized:
     raise exc.CommandError("Invalid OpenStack credentials.")
 except exc.AuthorizationFailure:
     raise exc.CommandError("Unable to authorize user")

 This is pervasive enough that both of those exceptions come from
 openstack.common.


If that's from apiclient, I have a guess.  apiclient was an attempt (by
someone who got frustrated and left us) to build a common core for the
clients.  However, in many ways it wound up being a UNION of them.  And scene.

I'm guessing that what it actually is is that randomly some things

return one, some things return the other, and there is absolutely no
 rhyme nor reason. Or, more likely, that termie liked the spelling of one
 of them better.


I like that explanation but this isn't from OCL.  Actually we'd have been
much farther down the road if we had used Termie's bits a year ago. Whether
that is a bug or a feature is left to the reader to decide.

Code speaks, sometimes, so I'm going back to writing some more client bits.
 Someone come help.

dt

[0] except swift and glance, both of which were originally in the server
repo.

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] usability anti-pattern, part 2

2014-09-19 Thread Monty Taylor
On 09/19/2014 10:05 PM, Dean Troyer wrote:
 [OK, I'll bite...just once more...because dammit I want this crap fixed
 too.]

It's tough to avoid when I toss you such juicy bones isn't it?

 I know you know this Monty, but for the benefit of the folks who don't, the
 client library situation is a result of them belonging to the projects they
 serve, each one[0] forked from a different one forked from jkm's original,
 without having any sort of mechanism to stay in sync, like a cross-project
 (BINGO!) effort to keep things consistent.
 
 We may not want a BDFL, but we NEED someone to say NO when necessary for
 the sake of the entire project.  Jeez, now I'm sounding all enterprisey.
 
 On Fri, Sep 19, 2014 at 9:01 PM, Monty Taylor mord...@inaugust.com wrote:
 
 except exc.Unauthorized:
     raise exc.CommandError("Invalid OpenStack credentials.")
 except exc.AuthorizationFailure:
     raise exc.CommandError("Unable to authorize user")

 This is pervasive enough that both of those exceptions come from
 openstack.common.

 
 If that's from apiclient, I have a guess.  apiclient was an attempt (by
 someone who got frustrated and left us) to build a common core for the
 clients.  However, in many ways it wound up being a UNION of them.  And scene.
 
 I'm guessing that what it actually is is that randomly some things

 return one, some things return the other, and there is absolutely no
 rhyme nor reason. Or, more likely, that termie liked the spelling of one
 of them better.

 
 I like that explanation but this isn't from OCL.  Actually we'd have been
 much farther down the road if we had used Termie's bits a year ago. Whether
 that is a bug or a feature is left to the reader to decide.
 
 Code speaks, sometimes, so I'm going back to writing some more client bits.
  Someone come help.

++

I'm looking forward very much to openstacksdk, btw...

 dt
 
 [0] except swift and glance, both of which were originally in the server
 repo.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Zaqar] Integration plan moving forward

2014-09-19 Thread Monty Taylor
On 09/19/2014 03:42 PM, Zane Bitter wrote:
 On 19/09/14 14:34, Clint Byrum wrote:
 Excerpts from Flavio Percoco's message of 2014-09-19 02:37:08 -0700:
 On 09/18/2014 11:51 AM, Angus Salkeld wrote:

 On 18/09/2014 7:11 PM, Flavio Percoco fla...@redhat.com
 mailto:fla...@redhat.com wrote:

 Greetings,

 If I recall correctly, Heat was planning to adopt Zaqar regardless of
 the result of the graduation attempt (please correct me if I'm wrong).
 Based on this assumption, I'd like to start working on a plan
 forward to
 make this integration happen.

 So far, these are the use cases I've collected from past discussions:

 * Notify  heat user before an action is taken, and after - Heat may
 want
 to wait  for a response before proceeding - notifications not
 necessarily needed  and signed read-only queues might help, but not
 necessary
  * For integrating with user's tools
  * Monitoring
  * Control surface
  * Config management tools
  * Does not require notifications and/or read-only/signed queue
 endpoints
  *[These may be helpful, but were not brought up in the
 discussion]
  * Subscribe to an aggregate feed of interesting events from other
 open-stack components (such as Nova)
  * Heat is often deployed in a different place than other
 components and doesn't have access to the AMQP bus
  * Large  deployments consist of multiple AMQP brokers, and
 there
 doesn't seem to  be a nice way to aggregate all those events [need to
 confirm]
  * Push metadata updates to os-collect-config agent running in
 servers, instead of having them poll Heat


 Few questions that I think we should start from:

 - Does the above list cover Heat's needs?
 - Which of the use cases listed above should be addressed first?

 IMHO it would be great to simply replace the event store we have
 currently, so that the user can get a stream of progress messages
 during
 the deployment.

 Could you point me to the right piece of code and/or documentation so I
 can understand better what it does and where do you want it to go?

 https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/event.py

 We currently use db_api to store these in the database, which is costly.

 Would be much better if we could just shove them into a message queue for
 the user. It is problematic though, as we have event-list and event-show
 in the Heat API which basically work the same as the things we've been
 wanting removed from Zaqar's API: access by ID and pagination. ;)

 I think ideally we'd deprecate those or populate them with nothing if
 the user has opted to use messaging instead.
 
 At least initially we'll need to do both, since not everybody has Zaqar.
 If we can not have events at all in v2 of the API that would be
 fantastic though :)

Yeah - don't forget the folks who are deploying  heat in standalone mode
who may not have (or want) an additional event stream service thing.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

