Re: [openstack-dev] [oslo] usage patterns for oslo.config

2014-08-27 Thread Mark McLoughlin
On Tue, 2014-08-26 at 10:00 -0400, Doug Hellmann wrote:
 On Aug 26, 2014, at 6:30 AM, Mark McLoughlin mar...@redhat.com wrote:
 
  On Mon, 2014-08-11 at 15:06 -0400, Doug Hellmann wrote:
  On Aug 8, 2014, at 7:22 PM, Devananda van der Veen 
  devananda@gmail.com wrote:
  
  On Fri, Aug 8, 2014 at 12:41 PM, Doug Hellmann d...@doughellmann.com 
  wrote:
  
  That’s right. The preferred approach is to put the register_opt() in
  *runtime* code somewhere before the option will be used. That might be in
  the constructor for a class that uses an option, for example, as 
  described
  in
  http://docs.openstack.org/developer/oslo.config/cfg.html#registering-options
  
  Doug
  
  Interesting.
  
  I've been following the prevailing example in Nova, which is to
  register opts at the top of a module, immediately after defining them.
  Is there a situation in which one approach is better than the other?
  
  The approach used in Nova is the “old” way of doing it. It works, but
  assumes that all of the application code is modifying a global
  configuration object. The runtime approach allows you to pass a
  configuration object to a library, which makes it easier to mock the
  configuration for testing and avoids having the configuration options
  bleed into the public API of the library. We’ve started using the
  runtime approach in new Oslo libraries that have configuration
  options, but changing the implementation in existing application code
  isn’t strictly necessary.
  
  I've been meaning to dig up some of the old threads and reviews to
  document how we got here.
  
  But briefly:
  
   * this global CONF variable originates from the gflags FLAGS variable 
 in Nova before oslo.config
  
   * I was initially determined to get rid of any global variable use 
  and did a lot of work to allow glance to use oslo.config without a 
  global variable
  
   * one example detail of this work - when you use paste.deploy to 
 load an app, you have no ability to pass a config object 
 through paste.deploy to the app. I wrote a little helper that 
 used a thread-local variable to mimic this pass-through.
  
   * with glance done, I moved on to making keystone use oslo.config and 
 initially didn't use the global variable. Then I ran into a veto 
 from termie who felt very strongly that a global variable should be 
 used.
  
   * in the end, I bought the argument that the use of a global variable 
 was pretty deeply ingrained (especially in Nova) and that we should 
 aim for consistent coding patterns across projects (i.e. Oslo 
 shouldn't be just about shared code, but also shared patterns). The 
 only realistic standard pattern we could hope for was the use of 
 the global variable.
  
   * with that agreed, we reverted glance back to using a global 
 variable and all projects followed suit
  
   * the case of libraries is different IMO - we'd be foolish to design 
 APIs which lock us into using the global object
  
  So ... I wouldn't quite agree that this is the new way vs the old
  way, but I think it would be reasonable to re-open the discussion about
  using the global object in our applications. Perhaps, at least, we could
  reduce our dependence on it.
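The thread-local paste.deploy helper Mark describes in the third bullet could be sketched roughly as follows. This is an illustrative reconstruction, not the actual glance/oslo code: paste.deploy app factories take no extra arguments, so the config object is stashed in a thread-local before loading the app and picked up inside the factory. All names here are hypothetical.

```python
# Hypothetical sketch of a thread-local pass-through for paste.deploy:
# the config object is stashed before the WSGI app is loaded and
# retrieved inside the app factory, avoiding a module-level global.
import threading

_local = threading.local()

def set_app_config(conf):
    # Called just before paste.deploy loads the WSGI app.
    _local.conf = conf

def get_app_config():
    # Called from inside the paste app factory.
    return _local.conf

def app_factory(global_conf, **local_conf):
    conf = get_app_config()   # instead of reading a global CONF
    return make_app(conf)

def make_app(conf):
    # Build a trivial WSGI app around the passed-in config.
    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok']
    return app

set_app_config({'bind_port': 9292})
app = app_factory({})
```

The thread-local makes the pass-through safe even if several apps are loaded with different configs in different threads.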
 
 The aspect I was calling “old” was the “register options at import
 time” pattern, not the use of a global. Whether we use a global or
 not, registering options at runtime in a code path that will be using
 them is better than relying on import ordering to ensure options are
 registered before they are used.

I don't think I've seen code (except for obscure cases) which uses the
CONF global directly (as opposed to being passed CONF as a parameter)
but doesn't register the options at import time.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Juno-3 BP meeting

2014-08-27 Thread Miguel Angel Ajo Pelayo
Works perfect for me.  I will join.  

Sent from my Android phone using TouchDown (www.nitrodesk.com)


-Original Message-
From: Carl Baldwin [c...@ecbaldwin.net]
Received: Wednesday, 27 Aug 2014, 5:07
To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
Subject: Re: [openstack-dev] [neutron] Juno-3 BP meeting

Kyle,

These are three good ones.  I've been reviewing the HA ones and have had an
eye on the other two.

1300 is a bit early but I'll plan to be there.

Carl
On Aug 26, 2014 4:04 PM, Kyle Mestery mest...@mestery.com wrote:

 I'd like to propose a meeting at 1300UTC on Thursday in
 #openstack-meeting-3 to discuss Neutron BPs remaining for Juno at this
 point. We're talking specifically about medium and high priority ones,
 with a focus on these three:

 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security

 https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor

 These three BPs will provide a final push for scalability in a few
 areas and are things we as a team need to work to merge this week. The
 meeting will allow for discussion of final issues on these patches
 with the goal of trying to merge them by Feature Freeze next week. If
 time permits, we can discuss other medium and high priority community
 BPs as well.

 Let me know if this works by responding on this thread and I hope to
 see people there Thursday!

 Thanks,
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Juno-3 BP meeting

2014-08-27 Thread Assaf Muller
Good for me.

- Original Message -
 Works perfect for me. I will join.
 
 Sent from my Android phone using TouchDown ( www.nitrodesk.com )
 
 
 -Original Message-
 From: Carl Baldwin [c...@ecbaldwin.net]
 Received: Wednesday, 27 Aug 2014, 5:07
 To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
 Subject: Re: [openstack-dev] [neutron] Juno-3 BP meeting
 
 
 
 
 Kyle,
 
 These are three good ones. I've been reviewing the HA ones and have had an
 eye on the other two.
 
 1300 is a bit early but I'll plan to be there.
 
 Carl
 On Aug 26, 2014 4:04 PM, Kyle Mestery  mest...@mestery.com  wrote:
 
 
 I'd like to propose a meeting at 1300UTC on Thursday in
 #openstack-meeting-3 to discuss Neutron BPs remaining for Juno at this
 point. We're talking specifically about medium and high priority ones,
 with a focus on these three:
 
 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
 https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
 
 These three BPs will provide a final push for scalability in a few
 areas and are things we as a team need to work to merge this week. The
 meeting will allow for discussion of final issues on these patches
 with the goal of trying to merge them by Feature Freeze next week. If
 time permits, we can discuss other medium and high priority community
 BPs as well.
 
 Let me know if this works by responding on this thread and I hope to
 see people there Thursday!
 
 Thanks,
 Kyle
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zaqar] [marconi] Juno Performance Testing (Round 1)

2014-08-27 Thread Flavio Percoco
On 08/26/2014 11:41 PM, Kurt Griffiths wrote:
 Hi folks,
 
 I ran some rough benchmarks to get an idea of where Zaqar currently stands
 re latency and throughput for Juno. These results are by no means
 conclusive, but I wanted to publish what I had so far for the sake of
 discussion.
 
 Note that these tests do not include results for our new Redis driver, but
 I hope to make those available soon.
 
 As always, the usual disclaimers apply (i.e., benchmarks mostly amount to
 lies; these numbers are only intended to provide a ballpark reference; you
 should perform your own tests, simulating your specific scenarios and
 using your own hardware; etc.).
 
 ## Setup ##
 
 Rather than VMs, I provisioned some Rackspace OnMetal[8] servers to
 mitigate noisy-neighbor effects when running the performance tests:
 
 * 1x Load Generator
 * Hardware 
 * 1x Intel Xeon E5-2680 v2 2.8GHz
 * 32 GB RAM
 * 10Gbps NIC
 * 32GB SATADOM
 * Software
 * Debian Wheezy
 * Python 2.7.3
 * zaqar-bench from trunk with some extra patches[1]
 * 1x Web Head
 * Hardware 
 * 1x Intel Xeon E5-2680 v2 2.8GHz
 * 32 GB RAM
 * 10Gbps NIC
 * 32GB SATADOM
 * Software
 * Debian Wheezy
 * Python 2.7.3
 * zaqar server from trunk @47e07cad
 * storage=mongodb
 * partitions=4
 * MongoDB URI configured with w=majority
 * uWSGI + gevent
 * config: http://paste.openstack.org/show/100592/
 * app.py: http://paste.openstack.org/show/100593/
 * 3x MongoDB Nodes
 * Hardware 
 * 2x Intel Xeon E5-2680 v2 2.8GHz
 * 128 GB RAM
 * 10Gbps NIC
 * 2x LSI Nytro WarpDrive BLP4-1600[2]
 * Software
 * Debian Wheezy
 * mongod 2.6.4
 * Default config, except setting replSet and enabling periodic
   logging of CPU and I/O
 * Journaling enabled
 * Profiling on message DBs enabled for requests over 10ms
 
 For generating the load, I used the zaqar-bench tool we created during
 Juno as a stepping stone toward integration with Rally. Although the tool
 is still fairly rough, I thought it good enough to provide some useful
 data[3]. The tool uses the python-zaqarclient library.
 
 Note that I didn’t push the servers particularly hard for these tests; web
 head CPUs averaged around 20%, while the mongod primary’s CPU usage peaked
 at around 10% with DB locking peaking at 5%.
 
 Several different messaging patterns were tested, taking inspiration
 from: https://wiki.openstack.org/wiki/Use_Cases_(Zaqar)
 
 Each test was executed three times and the best time recorded.
 
 A ~1K sample message (1398 bytes) was used for all tests.
 
 ## Results ##
 
 ### Event Broadcasting (Read-Heavy) ###
 
 OK, so let's say you have a somewhat low-volume source, but tons of event
 observers. In this case, the observers easily outpace the producer, making
 this a read-heavy workload.
 
 Options
 * 1 producer process with 5 gevent workers
 * 1 message posted per request
 * 2 observer processes with 25 gevent workers each
 * 5 messages listed per request by the observers
 * Load distributed across 4[7] queues
 * 10-second duration[4]
 
 Results
 * Producer: 2.2 ms/req,  454 req/sec
 * Observer: 1.5 ms/req, 1224 req/sec
 
 ### Event Broadcasting (Balanced) ###
 
 This test uses the same number of producers and consumers, but note that
 the observers are still listing (up to) 5 messages at a time[5], so they
 still outpace the producers, but not as quickly as before.
 
 Options
 * 2 producer processes with 10 gevent workers each
 * 1 message posted per request
 * 2 observer processes with 25 gevent workers each
 * 5 messages listed per request by the observers
 * Load distributed across 4 queues
 * 10-second duration
 
 Results
 * Producer: 2.2 ms/req, 883 req/sec
 * Observer: 2.8 ms/req, 348 req/sec
 
 ### Point-to-Point Messaging ###
 
 In this scenario I simulated one client sending messages directly to a
 different client. Only one queue is required in this case[6].
 
 Note the higher latency. While running the test there were 1-2 message
 posts that skewed the average by taking much longer (~100ms) than the
 others to complete. Such outliers are probably present in the other tests
 as well, and further investigation is needed to discover the root cause.
 
 Options
 * 1 producer process with 1 gevent worker
 * 1 message posted per request
 * 1 observer process with 1 gevent worker
 * 1 message listed per request
 * All load sent to a single queue
 * 10-second duration
 
 Results
 * Producer: 5.5 ms/req, 179 req/sec
 * Observer: 3.5 ms/req, 278 req/sec
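As a sanity check on the point-to-point numbers above: with a single worker only one request is in flight at a time, so throughput should track the inverse of mean latency, and the reported figures fall just slightly below that ideal (client overhead):

```python
# With one in-flight request, ideal throughput is 1000 / latency_ms
# requests per second; reported numbers sit slightly below this.
def ideal_rps(latency_ms):
    return 1000.0 / latency_ms

print(round(ideal_rps(5.5), 1))  # producer: ~181.8 ideal vs 179 reported
print(round(ideal_rps(3.5), 1))  # observer: ~285.7 ideal vs 278 reported
```

The multi-worker tests don't follow this simple inverse, since many requests overlap.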
 
 ### Task Distribution ###
 
 This test uses several producers and consumers in order to simulate
 distributing tasks to a worker pool. In contrast 

Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread Kevin Benton
Ports are bound in order of configured drivers so as long as the
OpenVswitch driver is put first in the list, it will bind the ports it can
and then ODL would bind the leftovers. [1][2] The only missing component is
that ODL doesn't look like it uses l2pop so establishing tunnels between
the OVS agents and the ODL-managed vswitches would be an issue that would
have to be handled via another process.

Regardless, my original point is that the driver keeps the neutron
semantics and DB intact. In my opinion, the lack of compatibility with
l2pop isn't an issue with the driver, but more of an issue with how l2pop
was designed. It's very tightly coupled to having agents managed by Neutron
via RPC, which shouldn't be necessary when its primary purpose is to
establish endpoints for overlay tunnels.


1.
https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
2.
https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326
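The ordering Kevin describes is just the `mechanism_drivers` list in the ML2 plugin configuration. A sketch, using the Juno-era driver aliases (the file path may differ per distro):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (path may vary by distro)
[ml2]
# Drivers attempt to bind each port in list order: openvswitch binds
# the ports its agents can handle, opendaylight gets the leftovers.
mechanism_drivers = openvswitch,opendaylight
```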


On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com wrote:

 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 built-in, but shim RESTful proxy for all kinds of sdn controller should be
 3rd, for they keep all virtual networking data model and service logic in
 their own places, using Neutron API just as the NB shell (they can't even
 co-work with built-in l2pop driver for vxlan/gre network type today).


 I understand the point you are trying to make, but this blanket statement
 about the data model of drivers/plugins with REST backends is wrong. Look
 at the ODL mechanism driver for a counter-example.[1] The data is still
 stored in Neutron and all of the semantics of the API are maintained. The
 l2pop driver is to deal with decentralized overlays, so I'm not sure how
 its interoperability with the ODL driver is relevant.


 If we create a vxlan network, can we bind some ports to the built-in ovs
 driver and other ports to the ODL driver? The linux bridge agent, ovs
 agent, and ofagent can co-exist in the same vxlan network under the common
 l2pop mechanism. In that scenario, I'm not sure whether ODL can simply
 join them in a heterogeneous multi-backend architecture, or must work
 exclusively and take over all the functionality.



 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

 Forwarded from another thread discussing the incubator:

 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 built-in, but shim RESTful proxy for all kinds of sdn controller should be
 3rd, for they keep all virtual networking data model and service logic in
 their own places, using Neutron API just as the NB shell (they can't even
 co-work with built-in l2pop driver for vxlan/gre network type today).

 As for Snabb or DPDK OVS (they also plan to support the official qemu
 vhost-user), or other similar contributions: if one or two of them win
 the war of high-performance userspace vswitches and attract large
 common interest, they may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks
 like a vendor plugin but is actually completely open source. The
 development is driven by end-user organisations who want to make the
 standard upstream Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community. In
 this cycle we have found kindred spirits in the NFV subteam, but we did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if
 there is a suitable process available for us to use in Kilo e.g. 
 incubation.

 Cheers,
 -Luke

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread loy wolfe
On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton blak...@gmail.com wrote:

 Ports are bound in order of configured drivers so as long as the
 OpenVswitch driver is put first in the list, it will bind the ports it can
 and then ODL would bind the leftovers. [1][2] The only missing component is
 that ODL doesn't look like it uses l2pop so establishing tunnels between
 the OVS agents and the ODL-managed vswitches would be an issue that would
 have to be handled via another process.

 Regardless, my original point is that the driver keeps the neutron
 semantics and DB intact. In my opinion, the lack of compatibility with
 l2pop isn't an issue with the driver, but more of an issue with how l2pop
 was designed. It's very tightly coupled to having agents managed by Neutron
 via RPC, which shouldn't be necessary when its primary purpose is to
 establish endpoints for overlay tunnels.


So why not agent based? Neutron shouldn't be treated as just a resource
store: built-in backends naturally need things like l2pop and dvr for
distributed dynamic topology control, and we can't call something "tightly
coupled" when it is an integral part.

On the contrary, 3rd-party backends should adapt themselves to integrate
into Neutron as thinly as they can, focusing on backend device control
rather than re-implementing core service logic that duplicates Neutron's.
BTW, ofagent is a good example of this style.




 1.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
 2.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326


 On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com wrote:

 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 built-in, but shim RESTful proxy for all kinds of sdn controller should be
 3rd, for they keep all virtual networking data model and service logic in
 their own places, using Neutron API just as the NB shell (they can't even
 co-work with built-in l2pop driver for vxlan/gre network type today).


 I understand the point you are trying to make, but this blanket
 statement about the data model of drivers/plugins with REST backends is
 wrong. Look at the ODL mechanism driver for a counter-example.[1] The data
 is still stored in Neutron and all of the semantics of the API are
 maintained. The l2pop driver is to deal with decentralized overlays, so I'm
 not sure how its interoperability with the ODL driver is relevant.


 If we create a vxlan network, can we bind some ports to the built-in ovs
 driver and other ports to the ODL driver? The linux bridge agent, ovs
 agent, and ofagent can co-exist in the same vxlan network under the common
 l2pop mechanism. In that scenario, I'm not sure whether ODL can simply
 join them in a heterogeneous multi-backend architecture, or must work
 exclusively and take over all the functionality.



 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

 Forwarded from another thread discussing the incubator:

 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 built-in, but shim RESTful proxy for all kinds of sdn controller should be
 3rd, for they keep all virtual networking data model and service logic in
 their own places, using Neutron API just as the NB shell (they can't even
 co-work with built-in l2pop driver for vxlan/gre network type today).

 As for Snabb or DPDK OVS (they also plan to support the official qemu
 vhost-user), or other similar contributions: if one or two of them win
 the war of high-performance userspace vswitches and attract large
 common interest, they may be accepted as built-in.



 The Snabb NFV (http://snabb.co/nfv.html) driver superficially looks
 like a vendor plugin but is actually completely open source. The
 development is driven by end-user organisations who want to make the
 standard upstream Neutron support their NFV use cases.

 We are looking for a good way to engage with the upstream community.
 In this cycle we have found kindred spirits in the NFV subteam, but we 
 did
 not find a good way to engage with Neutron upstream (see
 https://review.openstack.org/#/c/116476/). It would be wonderful if
 there is a suitable process available for us to use in Kilo e.g. 
 

Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-27 Thread loy wolfe
On Wed, Aug 27, 2014 at 2:44 PM, Kevin Benton blak...@gmail.com wrote:

 Incubator doesn't mean being kicked out of tree; it just means that the
 API and resource model need to be baked through fast iteration and can't
 be put in tree for now.

 That was exactly my point about developing a major feature like DVR. Even
 with a limited API change (the new distributed flag), it has an impact on
 the the router/agent DB resource model and currently isn't compatible
 with VLAN based ML2 deployments. It's not exactly a hidden optimization
 like an improvement to some RPC handling code.


The flag is only for admin use; tenants can't see it, and the default policy
for routers is set by the config file.

It COULD be compatible with DVR on vlan, but there were some bugs several
months ago where DVR MACs were not written successfully on the egress
packets; I'm not sure whether that is fixed in the merged code.


 A huge piece of the DVR development had to happen in Neutron forks and 40+
 revision chains of Gerrit patches. It was very difficult to follow without
 being heavily involved with the L3 team. This would have been a
 great candidate to develop in the incubator if it existed at the time. It
 would have been easier to try as it was developed and to explore the entire
 codebase. Also, more people could have been contributing bug fixes and
 improvements since an entire section of code wouldn't be 'owned' by one
 person like it is with the author of a Gerrit review.

 For DVR, as it has no influence on the tenant-facing API resource model, it
 works as the built-in backend, and this feature has attracted wide common
 interest,

 As was pointed out before, common interest has nothing to do with
 incubation. Incubation is to rapidly iterate on a new feature for Neutron.
 It shouldn't be restricted to API changes, it should be used for any major
 new features that are possible to develop outside of the Neutron core. If
 we are going to have this new incubator tool, we should use it to the
 fullest extent possible.



 On Tue, Aug 26, 2014 at 6:19 PM, loy wolfe loywo...@gmail.com wrote:

 Incubator doesn't mean being kicked out of tree; it just means that the
 API and resource model need to be baked through fast iteration and can't
 be put in tree for now. As kyle has said, incubator is not talking about
 moving 3rd drivers out of tree, which is in another thread.

 For DVR, as it has no influence on the tenant-facing API resource model, it
 works as the built-in backend, and this feature has attracted wide common
 interest; it's just an internal performance optimization tightly coupled
 with existing code, so it should be developed in tree.


 On Wed, Aug 27, 2014 at 8:08 AM, Kevin Benton blak...@gmail.com wrote:

 From what I understand, the intended projects for the incubator can't
 operate without neutron because they are just extensions/plugins/drivers.

 For example, if the DVR modifications to the reference L3 plugin
 weren't already being developed in the tree, DVR could have been
 developed in the incubator and then merged into Neutron once the bugs were
 ironed out so a huge string of Gerrit patches didn't need to be tracked. If
 that had happened, would it make sense to keep the L3 plugin as a
 completely separate project or merge it? I understand this is the approach
 the load balancer folks took by making Octavia a separate project, but I
 think it can still operate on its own, where the reference L3 plugin (and
 many of the other incubator projects) are just classes that expect to be
 able to make core Neutron calls.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread Kevin Benton
So why not agent based?

Maybe I have an experimental operating system that can't run python. Maybe
the RPC channel between compute nodes and Neutron doesn't satisfy certain
security criteria. Regardless of the reason, it doesn't matter because that
is an implementation detail that should be irrelevant to separate ML2
drivers.

l2pop should be concerned with tunnel endpoints and tunnel endpoints only.
Whether or not you're running a chunk of code responding to messages on an
RPC bus and sending heartbeats should not be Neutron's concern. It defeats
the purpose of ML2 if everything that can bind a port has to be running a
neutron RPC-compatible agent.

The l2pop functionality should become part of the tunnel type drivers, and
the mechanism drivers should be able to provide the termination endpoints
for the tunnels using whatever mechanism they choose. Agent-based drivers
can use the agent DB to do this, and the REST drivers can provide
whatever termination point they want. This solves the interoperability
problem and relaxes this tight coupling between vxlan and agents.
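Kevin's proposal, with l2pop living in the tunnel type drivers and each mechanism driver supplying its own endpoints, could look roughly like this. Every class and method name here is hypothetical; none of this exists in Neutron. It only illustrates the decoupling being argued for:

```python
# Hypothetical sketch: the tunnel type driver asks each mechanism
# driver for its termination endpoints instead of reading them from
# the agent DB via RPC. Names are illustrative, not Neutron code.
class MechanismDriver(object):
    def get_tunnel_endpoints(self, network_id):
        """Return [(ip, udp_port), ...] terminating this network."""
        raise NotImplementedError

class AgentBasedDriver(MechanismDriver):
    def __init__(self, agent_db):
        self.agent_db = agent_db     # populated by agent heartbeats

    def get_tunnel_endpoints(self, network_id):
        return self.agent_db.get(network_id, [])

class RestControllerDriver(MechanismDriver):
    def __init__(self, controller_endpoints):
        # e.g. fetched from an SDN controller's northbound API
        self.endpoints = controller_endpoints

    def get_tunnel_endpoints(self, network_id):
        return self.endpoints

class VxlanTypeDriver(object):
    """Builds the flood/forwarding list that l2pop would program."""
    def __init__(self, mech_drivers):
        self.mech_drivers = mech_drivers

    def endpoints_for(self, network_id):
        eps = []
        for driver in self.mech_drivers:
            eps.extend(driver.get_tunnel_endpoints(network_id))
        return eps

agents = AgentBasedDriver({'net1': [('10.0.0.1', 4789)]})
odl = RestControllerDriver([('10.0.0.9', 4789)])
vxlan = VxlanTypeDriver([agents, odl])
print(vxlan.endpoints_for('net1'))
```

Under this shape, an agent-based driver and a REST-based driver can populate the same vxlan flood list without the REST side running a Neutron RPC agent.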


On Wed, Aug 27, 2014 at 1:09 AM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton blak...@gmail.com wrote:

 Ports are bound in order of configured drivers so as long as the
 OpenVswitch driver is put first in the list, it will bind the ports it can
 and then ODL would bind the leftovers. [1][2] The only missing component is
 that ODL doesn't look like it uses l2pop so establishing tunnels between
 the OVS agents and the ODL-managed vswitches would be an issue that would
 have to be handled via another process.

 Regardless, my original point is that the driver keeps the neutron
 semantics and DB intact. In my opinion, the lack of compatibility with
 l2pop isn't an issue with the driver, but more of an issue with how l2pop
 was designed. It's very tightly coupled to having agents managed by Neutron
 via RPC, which shouldn't be necessary when its primary purpose is to
 establish endpoints for overlay tunnels.


 So why not agent based? Neutron shouldn't be treated as just a resource
 store: built-in backends naturally need things like l2pop and dvr for
 distributed dynamic topology control, and we can't call something "tightly
 coupled" when it is an integral part.

 On the contrary, 3rd-party backends should adapt themselves to integrate
 into Neutron as thinly as they can, focusing on backend device control
 rather than re-implementing core service logic that duplicates Neutron's.
 BTW, ofagent is a good example of this style.




 1.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
 2.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326


 On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com
 wrote:

 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 built-in, but shim RESTful proxy for all kinds of sdn controller should be
 3rd, for they keep all virtual networking data model and service logic in
 their own places, using Neutron API just as the NB shell (they can't even
 co-work with built-in l2pop driver for vxlan/gre network type today).


 I understand the point you are trying to make, but this blanket
 statement about the data model of drivers/plugins with REST backends is
 wrong. Look at the ODL mechanism driver for a counter-example.[1] The data
 is still stored in Neutron and all of the semantics of the API are
 maintained. The l2pop driver is to deal with decentralized overlays, so I'm
 not sure how its interoperability with the ODL driver is relevant.


 If we create a vxlan network, can we bind some ports to the built-in ovs
 driver and other ports to the ODL driver? The linux bridge agent, ovs
 agent, and ofagent can co-exist in the same vxlan network under the common
 l2pop mechanism. In that scenario, I'm not sure whether ODL can simply
 join them in a heterogeneous multi-backend architecture, or must work
 exclusively and take over all the functionality.



 1.
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/mechanism_odl.py



 On Tue, Aug 26, 2014 at 7:14 PM, loy wolfe loywo...@gmail.com wrote:

 Forwarded from another thread discussing the incubator:

 http://lists.openstack.org/pipermail/openstack-dev/2014-August/044135.html



 Completely agree with this sentiment. Is there a crisp distinction
 between a vendor plugin and an open source plugin though?


 I think that opensource is not the only factor, it's about built-in
 vs. 3rd backend. Built-in must be opensource, but opensource is not
 necessarily built-in. By my thought, current OVS and linuxbridge are
 

Re: [openstack-dev] [neutron] Juno-3 BP meeting

2014-08-27 Thread Oleg Bondarev
Works for me.


On Wed, Aug 27, 2014 at 10:54 AM, Assaf Muller amul...@redhat.com wrote:

 Good for me.

 - Original Message -
  Works perfect for me. I will join.
 
  Sent from my Android phone using TouchDown ( www.nitrodesk.com )
 
 
  -Original Message-
  From: Carl Baldwin [c...@ecbaldwin.net]
  Received: Wednesday, 27 Aug 2014, 5:07
  To: OpenStack Development Mailing List [
 openstack-dev@lists.openstack.org]
  Subject: Re: [openstack-dev] [neutron] Juno-3 BP meeting
 
 
 
 
  Kyle,
 
  These are three good ones. I've been reviewing the HA ones and have had
 an
  eye on the other two.
 
  1300 is a bit early but I'll plan to be there.
 
  Carl
  On Aug 26, 2014 4:04 PM, Kyle Mestery  mest...@mestery.com  wrote:
 
 
  I'd like to propose a meeting at 1300UTC on Thursday in
  #openstack-meeting-3 to discuss Neutron BPs remaining for Juno at this
  point. We're talking specifically about medium and high priority ones,
  with a focus on these three:
 
  https://blueprints.launchpad.net/neutron/+spec/l3-high-availability )
  https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security )
 
 https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
  )
 
  These three BPs will provide a final push for scalability in a few
  areas and are things we as a team need to work to merge this week. The
  meeting will allow for discussion of final issues on these patches
  with the goal of trying to merge them by Feature Freeze next week. If
  time permits, we can discuss other medium and high priority community
  BPs as well.
 
  Let me know if this works by responding on this thread and I hope to
  see people there Thursday!
 
  Thanks,
  Kyle
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-27 Thread Kevin Benton
Flag is for admin use only, tenants can't see it, and the default policy
for router is set up by the config file.

It's still a public API that will have to follow a deprecation cycle. If a
new API was going to be introduced for admins to control the distributed
nature of routers, it would have been nice to introduce a little more
control than distributed=True/False (e.g. how many SNAT IPs are consumed,
etc). At this point we are stuck because any new API entries would require
a blueprint and the feature proposal freeze deadline is long gone.

It COULD be compatible for DVR to work on vlan, but there were some bugs
several months ago where DVR MACs were not written successfully on the
egress packet; I'm not sure if that is fixed in the merged code.

The current DVR wiki[1] still shows that the tenant_network_type has to be
VXLAN. I didn't know about this issue until just recently and I'm not sure
if there are plans yet to fix it for Juno. If it were in an incubation tree
somewhere and not on Gerrit for the majority of the cycle, the bug could
have been worked on in parallel by other volunteers interested in VLAN
support. Now that DVR is solidifying, VLAN support may even require a
blueprint unless it's blessed by the right cores, which would mean people
with VLAN deployments would not be able to use DVR until Kilo is released
next May.

The whole point is that there is nowhere to work on big features like this
in a fast, iterative, and open manner. The incubator could be a perfect
place for this type of work.


1. https://wiki.openstack.org/wiki/Neutron/DVR/HowTo


On Wed, Aug 27, 2014 at 1:22 AM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 2:44 PM, Kevin Benton blak...@gmail.com wrote:

 Incubator doesn't mean being kicked out of the tree; it just means that the
 API and resource model need to bake through fast iteration, and can't be
 put in the tree for now.

 That was exactly my point about developing a major feature like DVR. Even
 with a limited API change (the new distributed flag), it has an impact on
 the router/agent DB resource model and currently isn't compatible
 with VLAN based ML2 deployments. It's not exactly a hidden optimization
 like an improvement to some RPC handling code.


 Flag is for admin use only, tenants can't see it, and the default policy
 for router is set up by the config file.

 It COULD be compatible for DVR to work on vlan, but there were some bugs
 several months ago where DVR MACs were not written successfully on the
 egress packet; I'm not sure if that is fixed in the merged code.


 A huge piece of the DVR development had to happen in Neutron forks and
 40+ revision chains of Gerrit patches. It was very difficult to follow
 without being heavily involved with the L3 team. This would have been a
 great candidate to develop in the incubator if it existed at the time. It
 would have been easier to try as it was developed and to explore the entire
 codebase. Also, more people could have been contributing bug fixes and
 improvements since an entire section of code wouldn't be 'owned' by one
 person like it is with the author of a Gerrit review.

 For DVR, as it has no influence on the tenant-facing API resource model, it
 works as the built-in backend, and this feature has attracted wide common
 interest,

 As was pointed out before, common interest has nothing to do with
 incubation. Incubation is to rapidly iterate on a new feature for Neutron.
 It shouldn't be restricted to API changes, it should be used for any major
 new features that are possible to develop outside of the Neutron core. If
 we are going to have this new incubator tool, we should use it to the
 fullest extent possible.



 On Tue, Aug 26, 2014 at 6:19 PM, loy wolfe loywo...@gmail.com wrote:

 Incubator doesn't mean being kicked out of the tree; it just means that the
 API and resource model need to bake through fast iteration, and can't be
 put in the tree for now. As Kyle has said, the incubator is not about
 moving 3rd-party drivers out of tree, which is in another thread.

 For DVR, as it has no influence on the tenant-facing API resource model, it
 works as the built-in backend, and this feature has attracted wide common
 interest; it's just an internal performance optimization tightly coupled
 with existing code, so it should be developed in tree.


 On Wed, Aug 27, 2014 at 8:08 AM, Kevin Benton blak...@gmail.com wrote:

 From what I understand, the intended projects for the incubator can't
 operate without neutron because they are just extensions/plugins/drivers.

 For example, if the DVR modifications to the reference L3
 plugin weren't already being developed in the tree, DVR could have been
 developed in the incubator and then merged into Neutron once the bugs were
 ironed out so a huge string of Gerrit patches didn't need to be tracked. If
 that had happened, would it make sense to keep the L3 plugin as a
 completely separate project or merge it? I understand this is the approach
 the 

Re: [openstack-dev] [Fuel] Issues with hardcoded versions of requirements in specs of packages

2014-08-27 Thread Mike Scherbakov
  if we want to build iso with custom packages, we have to add flexibility
to our dependencies lists.
yes please, if there is no other option. It should be easy for anyone to
build Fuel on custom packages, so let's aim for that.
I do not see issues with flexible deps while we are managing our upstream
packages in our own mirrors. Are there any possible issues?


On Tue, Aug 26, 2014 at 2:42 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:

 It is not enough, you need to review requirements in the code of nailgun,
 ostf and astute.

 I'll be happy to have our requirements files and specs as close to
 global-requirements as possible. But it will ruin our current solid
 structure, where we have the same versions of dependencies in production,
 development and test environments. And from time to time we will face issues
 with updates from pypi; development and test environments will be affected.
 We are somewhat protected from that by the maintainers of the
 global-requirements file, and changes in production environments are
 protected by the OSCI team.

 So, if we want to build iso with custom packages, we have to add
 flexibility to our dependencies lists. Any objections?


 On Mon, Aug 25, 2014 at 8:28 PM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

 A commit with a quick fix has been submitted:
 https://review.openstack.org/#/c/116667/
 Need review :)

 I will try to build an image with this commit and will send my comments
 with the results.


 On Mon, Aug 25, 2014 at 7:55 PM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

 When I started building the ISO from the master branch, I saw the
 following errors:
 https://bugs.launchpad.net/fuel/+bug/1361279

 I want to submit a patch set to remove all hardcoded requirements and
 change all '==' to '>=', but before that I want to discuss how we can
 organize specs to avoid problems with dependencies.

 Thank you.


 On Mon, Aug 25, 2014 at 6:21 PM, Timur Nurlygayanov 
 tnurlygaya...@mirantis.com wrote:

 Hi team,

 Today I started to build the Fuel ISO from the master branch, with packages
 built from the code on master branches, and found strange errors:

 http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/77/console

 Looks like we have hardcoded versions of all required packages in specs:

 https://github.com/stackforge/fuel-main/blob/master/packages/rpm/specs/nailgun.spec#L17-L44

 and this is the root of the problem. As a result we can't build the ISO
 from the master branch, because the master branches need different versions
 of their requirements.
 This looks like a common issue across several components.
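
For illustration only (the package name and versions here are made up, not the actual nailgun spec contents), this is the shape of the problem: a strictly pinned Requires line rejects any newer package built from master, while a relaxed comparison keeps the build working as master moves on.

```spec
# Pinned: the build breaks as soon as a newer python-fuelclient is packaged
Requires: python-fuelclient == 0.2.0

# Relaxed: any package at or above the known-good version satisfies the dep
Requires: python-fuelclient >= 0.2.0
```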

 Could we discuss how we can organize specs to avoid problems with
 dependencies?


 Thank you!


 --

 Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc

 [image: http://www.openstacksv.com/] http://www.openstacksv.com/




 --

 Timur,
  QA Engineer
 OpenStack Projects
 Mirantis Inc

 [image: http://www.openstacksv.com/] http://www.openstacksv.com/




 --

 Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc

 [image: http://www.openstacksv.com/] http://www.openstacksv.com/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Jyoti Ranjan
I am curious to know about Swift's role here. Can you elaborate a little bit,
please?


On Wed, Aug 27, 2014 at 4:14 AM, Steve Baker sba...@redhat.com wrote:

 On 23/08/14 07:39, Zane Bitter wrote:
  We held the inaugural Heat mid-cycle meetup in Raleigh, North Carolina
  this week. There were a dozen folks in attendance, and I think
  everyone agreed that it was a very successful event. Notes from the
  meetup are on the Etherpad here:
 
  https://etherpad.openstack.org/p/heat-juno-midcycle-meetup
 
  Here are a few of the conclusions:
 
 ...
  * Marconi is now called Zaqar.
  Who knew?
 
  * Marc^W Zaqar is critical to pretty much every major non-Convergence
  feature on the roadmap.
  We knew that we wanted to use it for notifications, but we also want
  to make those a replacement for events, and a conduit for warnings and
  debugging information to the user. This is becoming so important that
  we're going to push ahead with an implementation now without waiting
  to see when Zaqar will graduate. Zaqar would also be a good candidate
  for pushing metadata changes to servers, to resolve the performance
  issues currently caused by polling.
 
 Until Zaqar is generally available we can still remove the polling load
 from heat by pushing metadata to a swift TempURL. This is ready now for
 review:
 https://review.openstack.org/#/q/topic:bp/swift-deployment-transport,n,z
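
 For anyone unfamiliar with the TempURL approach: the idea is to pre-sign a
 Swift URL that an instance can then poll without any credentials. A rough
 sketch of how such a URL is produced (the host, key and object path below are
 invented; the METHOD\nEXPIRES\nPATH HMAC-SHA1 signing scheme is Swift's
 TempURL middleware behaviour):

```python
import hmac
import time
from hashlib import sha1

# Illustrative values: the key comes from the account's
# X-Account-Meta-Temp-URL-Key, and the path is whichever object the
# deployment metadata is written to.
method = 'GET'
expires = int(time.time()) + 3600          # link valid for one hour
path = '/v1/AUTH_demo/metadata/server-1234.json'
key = b'secret-temp-url-key'

# Swift's TempURL middleware signs "METHOD\nEXPIRES\nPATH" with HMAC-SHA1
hmac_body = '{0}\n{1}\n{2}'.format(method, expires, path)
sig = hmac.new(key, hmac_body.encode(), sha1).hexdigest()

url = 'https://swift.example.com{0}?temp_url_sig={1}&temp_url_expires={2}'.format(
    path, sig, expires)
```

 The server then just GETs that URL until it expires, and Swift, not Heat,
 absorbs the polling load.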


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pre-5.1 and master builds ISO are available for download

2014-08-27 Thread Evgeniy L
Hi guys, I have to say something about beta releases.

As far as I know our beta release has the same version
5.1 as our final release.

I think these versions should be different, because in case
of some problem it will be much easier to identify which
version we are trying to debug.

Also on the irc channel I've heard that somebody wanted
to upgrade their system to the stable version; right now that's impossible
because the upgrade system uses this version for the names of
containers/images/temporary directories, and we have
validation which prevents the user from running an upgrade to the
same version.

In the upgrade script we use a python module [1] to compare versions
for validation.
Let me give an example of how development versions can look:

5.1a1  # alpha
5.1b1  # beta 1
5.1b2  # beta 2
5.1b3  # beta 3
5.1    # final release

[1]
http://epydoc.sourceforge.net/stdlib/distutils.version.StrictVersion-class.html
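
To make the ordering concrete, here is a minimal sketch (the helper name is
mine, not part of the upgrade script) mirroring how StrictVersion ranks
pre-release tags below the final release:

```python
import re

def version_key(v):
    """Sort key mirroring StrictVersion: '5.1a1' < '5.1b1' < '5.1b2' < '5.1'."""
    m = re.match(r'^(\d+)\.(\d+)(?:\.(\d+))?(?:([ab])(\d+))?$', v)
    major, minor, patch, pre_tag, pre_num = m.groups()
    release = (int(major), int(minor), int(patch or 0))
    # a final release (sentinel 'z') sorts after any of its pre-releases
    pre = (pre_tag, int(pre_num)) if pre_tag else ('z', 0)
    return release + pre

versions = ['5.1', '5.1b2', '5.1a1', '5.1b1']
ordered = sorted(versions, key=version_key)
# ordered == ['5.1a1', '5.1b1', '5.1b2', '5.1']
```

With distinct beta numbers, the validation can tell that 5.1 really is newer
than the 5.1b3 a user is upgrading from.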

Thanks,


On Tue, Aug 26, 2014 at 11:15 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Igor,
 thanks a lot for improving UX over it - this table allows me to see which
 ISO passed verification tests.


 On Mon, Aug 25, 2014 at 7:54 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 I would also like to add that you can use our library called devops along
 with system tests we use for QA and CI. These tests use libvirt and kvm so
 that you can easily fire up an environment with specific configuration
 (Centos/Ubuntu Nova/Neutron Ceph/Swift and so on). All the documentation
 how to use this library is here:
 http://docs.mirantis.com/fuel-dev/devops.html. If you find any bugs or
 gaps in documentation, please feel free to file bugs to
 https://launchpad.net/fuel.


 On Mon, Aug 25, 2014 at 6:39 PM, Igor Shishkin ishish...@mirantis.com
 wrote:

 Hi all,
 along with building your own ISO following instructions [1], you can
 always download nightly build [2] and run it, by using virtualbox scripts
 [3], for example.

 For your convenience, you can see a build status table on CI [4]. The first
 tab now refers to pre-5.1 builds, and the second to master builds.
 The BVT column stands for Build Verification Test, which is essentially a
 full HA deployment test.

 Currently pre-5.1 and master builds are actually built from same master
 branch. As soon as we call for Hard Code Freeze, pre-5.1 builds will be
 reconfigured to use stable/5.1 branch.

 Thanks,

 [1]
 http://docs.mirantis.com/fuel-dev/develop/env.html#building-the-fuel-iso
 [2] https://wiki.openstack.org/wiki/Fuel#Nightly_builds
 [3] https://github.com/stackforge/fuel-main/tree/master/virtualbox
 [4] https://fuel-jenkins.mirantis.com/view/ISO/
 --
 Igor Shishkin
 DevOps




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][TripleO] What if undercloud machines down, can we reboot overcloud machines?

2014-08-27 Thread Jyoti Ranjan
I believe that a local boot option is available in Ironic. Would it not be a
good idea to boot from the local disk instead of always relying on PXE boot?
Curious to know why we are not going down this path?


On Wed, Aug 27, 2014 at 3:54 AM, 严超 yanchao...@gmail.com wrote:

 Thank you very much.
 And sorry for the cross-posting.

 *Best Regards!*


 *Chao Yan--**My twitter:Andy Yan @yanchao727
 https://twitter.com/yanchao727*


 *My Weibo:http://weibo.com/herewearenow
 http://weibo.com/herewearenow--*


 2014-08-26 23:17 GMT+08:00 Ben Nemec openst...@nemebean.com:

 Oh, after writing my response below I realized this is cross-posted
 between openstack and openstack-dev.  Please don't do that.

 I suppose this probably belongs on the users list, but since I've
 already written the response I guess I'm not going to argue too much. :-)

 On 08/26/2014 07:36 AM, 严超 wrote:
  Hi, All:
  I've deployed undercloud and overcloud on some baremetals. All
  overcloud machines are deployed by undercloud.
  Then I tried to shut down the undercloud machines. After that, if I
  reboot one overcloud machine, it will never boot from the net, AKA the PXE
  provided by the undercloud.

 Yes, that's normal.  With the way our baremetal deployments work today,
 the deployed systems always PXE boot.  After deployment they PXE boot a
 kernel and ramdisk that use the deployed hard disk image, but it's still
 a PXE boot.

  Is that what TripleO is designed to be? We can never shut down the
  undercloud machines for maintenance of the overcloud? Please help me
  clarify that.

 Yes, that's working as intended at the moment.  I recall hearing that
 there were plans to eliminate the PXE requirement after deployment, but
 you'd have to talk to the Ironic team about that.

 Also, I don't think it was ever the intent of TripleO that the
 undercloud would be shut down after deployment.  The idea is that you
 use the undercloud to manage the overcloud machines, so if you want to
 reboot one you do it via the undercloud nova, not directly on the system
 itself.

 
  *Best Regards!*
 
 
  *Chao Yan--**My twitter:Andy Yan @yanchao727
  https://twitter.com/yanchao727*
 
 
  *My Weibo:http://weibo.com/herewearenow
  http://weibo.com/herewearenow--*
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pre-5.1 and master builds ISO are available for download

2014-08-27 Thread Mike Scherbakov
I would not use the word "beta" anywhere at all. These are nightly builds,
pre-5.1. So it will become 5.1 eventually, but for the moment - it is just
master branch. We've not even reached HCF.

After we reach HCF, we will start calling builds Release Candidates
(RC1, RC2, etc.), and the QA team runs acceptance testing against them. This
can be considered another name instead of beta-1, etc.

Anyone can go to fuel-master-IP:8000/api/version to get the SHA commits of
the git repos a particular build was created from. Yes, these are development
builds, and there will be no upgrade path provided from a development build
to the 5.1 release or any other release. We might want to think about whether
we could do it in theory, but I confirm what Evgeniy says: we do
not support it now.



On Wed, Aug 27, 2014 at 1:11 PM, Evgeniy L e...@mirantis.com wrote:

 Hi guys, I have to say something about beta releases.

 As far as I know our beta release has the same version
 5.1 as our final release.

 I think these versions should be different, because in case
 of some problem it will be much easier to identify which
 version we are trying to debug.

 Also on the irc channel I've heard that somebody wanted
 to upgrade their system to the stable version; right now that's impossible
 because the upgrade system uses this version for the names of
 containers/images/temporary directories, and we have
 validation which prevents the user from running an upgrade to the
 same version.

 In the upgrade script we use a python module [1] to compare versions
 for validation.
 Let me give an example of how development versions can look:

 5.1a1  # alpha
 5.1b1  # beta 1
 5.1b2  # beta 2
 5.1b3  # beta 3
 5.1    # final release

 [1]
 http://epydoc.sourceforge.net/stdlib/distutils.version.StrictVersion-class.html

 Thanks,


 On Tue, Aug 26, 2014 at 11:15 AM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Igor,
 thanks a lot for improving UX over it - this table allows me to see which
 ISO passed verification tests.


 On Mon, Aug 25, 2014 at 7:54 PM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 I would also like to add that you can use our library called devops
 along with system tests we use for QA and CI. These tests use libvirt and
 kvm so that you can easily fire up an environment with specific
 configuration (Centos/Ubuntu Nova/Neutron Ceph/Swift and so on). All the
 documentation how to use this library is here:
 http://docs.mirantis.com/fuel-dev/devops.html. If you find any bugs or
 gaps in documentation, please feel free to file bugs to
 https://launchpad.net/fuel.


 On Mon, Aug 25, 2014 at 6:39 PM, Igor Shishkin ishish...@mirantis.com
 wrote:

 Hi all,
 along with building your own ISO following instructions [1], you can
 always download nightly build [2] and run it, by using virtualbox scripts
 [3], for example.

 For your convenience, you can see a build status table on CI [4]. The first
 tab now refers to pre-5.1 builds, and the second to master builds.
 The BVT column stands for Build Verification Test, which is essentially a
 full HA deployment test.

 Currently pre-5.1 and master builds are actually built from same master
 branch. As soon as we call for Hard Code Freeze, pre-5.1 builds will be
 reconfigured to use stable/5.1 branch.

 Thanks,

 [1]
 http://docs.mirantis.com/fuel-dev/develop/env.html#building-the-fuel-iso
 [2] https://wiki.openstack.org/wiki/Fuel#Nightly_builds
 [3] https://github.com/stackforge/fuel-main/tree/master/virtualbox
 [4] https://fuel-jenkins.mirantis.com/view/ISO/
 --
 Igor Shishkin
 DevOps




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Juno-3 BP meeting

2014-08-27 Thread Miguel Angel Ajo Pelayo


If we still have time at the end, as a lower priority, I'd like
to talk about:
 
https://blueprints.launchpad.net/neutron/+spec/agent-child-processes-status

Which I believe is in good shape (l3 & dhcp are implemented, and lay the basis
for doing the work for all the other agents).



- Original Message -
 Works for me.
 
 
 On Wed, Aug 27, 2014 at 10:54 AM, Assaf Muller  amul...@redhat.com  wrote:
 
 
 Good for me.
 
 - Original Message -
  Works perfect for me. I will join.
  
  Sent from my Android phone using TouchDown ( www.nitrodesk.com )
  
  
  -Original Message-
  From: Carl Baldwin [ c...@ecbaldwin.net ]
  Received: Wednesday, 27 Aug 2014, 5:07
  To: OpenStack Development Mailing List [ openstack-dev@lists.openstack.org
  ]
  Subject: Re: [openstack-dev] [neutron] Juno-3 BP meeting
  
  
  
  
  Kyle,
  
  These are three good ones. I've been reviewing the HA ones and have had an
  eye on the other two.
  
  1300 is a bit early but I'll plan to be there.
  
  Carl
  On Aug 26, 2014 4:04 PM, Kyle Mestery  mest...@mestery.com  wrote:
  
  
  I'd like to propose a meeting at 1300UTC on Thursday in
  #openstack-meeting-3 to discuss Neutron BPs remaining for Juno at this
  point. We're talking specifically about medium and high priority ones,
  with a focus on these three:
  
  https://blueprints.launchpad.net/neutron/+spec/l3-high-availability )
  https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security )
  https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor
  )
  
  These three BPs will provide a final push for scalability in a few
  areas and are things we as a team need to work to merge this week. The
  meeting will allow for discussion of final issues on these patches
  with the goal of trying to merge them by Feature Freeze next week. If
  time permits, we can discuss other medium and high priority community
  BPs as well.
  
  Let me know if this works by responding on this thread and I hope to
  see people there Thursday!
  
  Thanks,
  Kyle
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] refactoring of resize/migrate

2014-08-27 Thread Markus Zoeller
The review of the spec to blueprint hot-resize has several comments 
about the need of refactoring the existing code base of resize and 
migrate before the blueprint could be considered (see [1]).
I'm interested in the result of the blueprint therefore I want to offer 
my support. How can I participate?

[1] https://review.openstack.org/95054

Regards,
markus_z


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Steven Hardy
On Wed, Aug 27, 2014 at 02:39:09PM +0530, Jyoti Ranjan wrote:
I am curious to know about Swift role here. Can you elaborate little bit
please? 

I think Zane already covered it with "We just want people to stop polling
us, because it's killing our performance."

Basically, if we provide the option for folks using heat at large scale to
poll Swift instead of the Heat API, we can work around some performance
issues a subset of our users have been experiencing due to the load of many
resources polling Heat (and hence the database) frequently.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Migration from nova-network to Neutron for large production clouds

2014-08-27 Thread Tim Bell
 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 26 August 2014 22:20
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][neutron] Migration from nova-network to
 Neutron for large production clouds
...
 
 Mark and I finally got a chance to sit down and write out a basic proposal. It
 looks like this:
 

Thanks... I've put a few questions inline and I'll ask the experts to review 
the steps when they're back from holidays

 == neutron step 0 ==
 configure neutron to reverse proxy calls to Nova (part to be written)
 
 == nova-compute restart one ==
 Freeze nova's network state (probably by stopping nova-api, but we could be
 smarter than that if required).
 Update all nova-compute nodes to point at Neutron, and replace the nova-net
 agent with a Neutron nova-aware L2 agent.
 Enable the Neutron Layer 2 agent on each node; this might have the side
 effect of causing the network configuration to be rebuilt for some instances.
 The API can be unfrozen at this time until ready for step 2.
 

- Would it be possible to only update some of the compute nodes? We'd like to 
stage the upgrade if we can in view of scaling risks. Worst case, we'd look to 
do it cell by cell but those are quite large already (200+ hypervisors)

 == neutron restart two ==
 Freeze nova's network state (probably by stopping nova-api, but we could be
 smarter than that if required).
 Dump/translate/restore data from Nova-Net to Neutron.
 Configure Neutron to point to its own database.
 Unfreeze the Nova API.
 

- Linked with the point above, we'd like to do the nova-net to neutron in 
stages if we can

 *** Stopping point for linuxbridge to linuxbridge translation, or continue for
 rollout of new tech
 
 == nova-compute restart two ==
 Configure OVS or the new technology, and ensure that the proper ML2 driver
 is installed.
 Restart the Layer 2 agent on each hypervisor where next-gen networking
 should be enabled.
 
 
 So, I want to stop using the word "cold" to describe this. It's more of a
 rolling upgrade than a cold migration. So... would two shorter nova API
 outages be acceptable?
 

Two Nova API outages would be OK for us.

 Michael
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Chris Dent

On Wed, 27 Aug 2014, Angus Salkeld wrote:


I believe developers working on OpenStack work for companies that
really want this to happen. The developers also want their projects to
be well regarded. Just the way the problem is being framed, a bit
like you did above, is very daunting for any one person to
solve. If we can quantify the problem, break the work into doable
items of work (bugs), and prioritize them, it will be solved a lot faster.


Yes.

It's very easy when encountering organizational scaling issues to
start catastrophizing and then throwing all the extant problems under
the same umbrella. This thread (and the czar one) has grown to include
a huge number of problems. We could easily change the subject to just
"The Future".

I think two things need to happen:

* Be rational about the fact that at least in some areas we are trying
  to do too much with too little.

  Strategically that means we need:

  * to prioritize and decompose issues (of all sorts) better
  * get more resources (human and otherwise)

  That first is on us. The second I guess gets bumped up to the people
  with the money; one aspect of being rational is utilizing the fact
  that though OpenStack is open source, it is to a very large extent
  corporate open source. If the corps need to step up, we need to tell
  them.

* Do pretty much exactly what Angus says:

  10 identify bugs (not just in code)
  20 find groups who care about those bugs
  30 fix em
  40 GOTO 10 # FOR THE REST OF TIME

  We all know this, but I get the impression it can be hard to get
  traction. I think a lot of the slipping comes from too much emphasis
  on the different projects. It would be better to think I work on
  OpenStack rather than I work on Ceilometer (or whatever).

I'm not opposed to process and bureaucracy; they can be a very important
part of the puzzle of getting lots of different groups to work
together. However, an increase in both can be a bad smell indicating an
effort to hack around things that are perceived to be insurmountable
problems (e.g. getting more nodes for CI, having more documenters,
etc.).
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] [hacking] Avoid old style class declarations

2014-08-27 Thread Julien Danjou
Hi,

I've proposed a check to avoid having old style classes used in our
code.

  https://review.openstack.org/#/c/116846/

It seems like common sense to me, but if there needs to be any debate
about it, go ahead.
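
For readers unfamiliar with the distinction, a small sketch of what such a
check could look like (the real implementation lives in the review above and
may differ; the check code string and regex here are illustrative):

```python
import re

# Old-style (Python 2 classic) declaration the check would flag:
#     class LegacyDriver:
# New-style declaration it would require instead:
#     class Driver(object):
_OLD_STYLE = re.compile(r'^class\s+\w+\s*(\(\s*\))?\s*:')

def check_no_old_style_class(logical_line):
    # hacking checks are generators yielding (offset, message) tuples.
    if _OLD_STYLE.match(logical_line):
        yield (0, "H238: old style class declaration, "
                  "use class Foo(object):")

print(list(check_no_old_style_class('class LegacyDriver:')))
print(list(check_no_old_style_class('class Driver(object):')))  # []
```

Under Python 3 both forms produce new-style classes; the distinction only
matters on Python 2, where classic classes break super(), descriptors and
some isinstance checks in subtle ways.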

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */




Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promic mode adapters

2014-08-27 Thread Mathieu Rohon
You should probably consider using the upcoming extension manager in ML2:

https://review.openstack.org/#/c/89211/

On Mon, Aug 25, 2014 at 12:54 PM, Irena Berezovsky ire...@mellanox.com wrote:
 Hi Andreas,
 We can definitely set some time to discuss this.
 I am usually available from 5 to 14:00 UTC.
 Let's follow up on IRC (irenab).

 BR,
 Irena

 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Monday, August 25, 2014 11:00 AM
 To: Irena Berezovsky
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [neutron][ml2] Openvswitch agent support for non 
 promic mode adapters

 Hi Irena,
 thanks for your reply. Yes sure, collaboration would be great.
 Do you already have a blueprint out there? Maybe we can sync up this week to
 discuss more details? I would like to understand what exactly you're
 looking for. Normally I'm available from 7 UTC to 16 UTC (today only until 13
 UTC). My IRC name is scheuran. Maybe we can get in contact this week!

 You were also talking about SR-IOV. I saw some blueprints mentioning SR-IOV and
 macvtap. Do you have any insights into this one, too? What we would also like
 to do is introduce macvtap as a network virtualization option. Macvtap also
 registers mac addresses to network adapters...


 Thanks,
 Andreas


 On Sun, 2014-08-24 at 08:51 +, Irena Berezovsky wrote:
 Hi Andreas,
 Thank you for this initiative.
 We were looking on similar problem for mixing OVS and SR-IOV on same network 
 adapter, which also requires mac addresses registration of OVS ports.
 Please let me know if you would like to collaborate on this effort.

 BR,
 Irena

 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Friday, August 22, 2014 11:16 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support
 for non promic mode adapters

 Thanks for your feedback.

 No, I do not yet have code for it. I just wanted to get a feeling for whether
 such a feature would gain acceptance in the community.
 But if that helps I can sit down and start some prototyping while I'm 
 preparing a blueprint spec in parallel.

 The main part of the implementation I wanted to do on my own to get more 
 familiar with the code base and to get more in touch with the community.
 But of course advice and feedback of experienced neutron developers is 
 essential!

 So I will proceed like this:
 - Create a blueprint
 - Commit first pieces of code to get early feedback (e.g. ask via the
 mailing list or irc)
 - Upload a spec (as soon as the repo is available for K)

 Does that make sense for you?

 Thanks,
 Andreas



 On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
  I think this sounds reasonable. Do you have code for this already,
  or are you looking for a developer to help implement it?
 
 
  On Thu, Aug 21, 2014 at 8:45 AM, Andreas Scheuring
  scheu...@linux.vnet.ibm.com wrote:
  Hi,
  last week I started discussing an extension to the existing
  neutron
  openvswitch agent to support network adapters that are not in
   promiscuous mode. Now I would like to widen the round to get
  feedback
  from a broader audience via the mailing list.
 
 
  The Problem
   When driving vlan or flat networking, openvswitch requires a
   network
   adapter in promiscuous mode.
 
 
   Why not have promiscuous mode on your adapter?
  - Admins like to have full control over their environment and
  which
  network packets enter the system.
  - The network adapter just does not have support for it.
 
 
  What to do?
   Linux net-dev drivers offer an interface to manually register
  additional
  mac addresses (also called secondary unicast addresses).
  Exploiting this
  one can register additional mac addresses to the network
  adapter. This
  also works via a well known ip user space tool.
 
  `bridge fdb add aa:aa:aa:aa:aa:aa dev eth0`
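
For illustration, a rough sketch (helper names are invented, not the actual
agent code) of how the agent could drive those registrations on port
add/remove by shelling out to the `bridge` tool:

```python
import subprocess

def _fdb_cmd(action, mac, dev):
    # Build a `bridge fdb` command line; action is "add" or "del".
    return ["bridge", "fdb", action, mac, "dev", dev]

def register_mac(mac, dev, execute=subprocess.check_call):
    # Register a secondary unicast MAC so the adapter accepts frames
    # for it without being in promiscuous mode.
    execute(_fdb_cmd("add", mac, dev))

def unregister_mac(mac, dev, execute=subprocess.check_call):
    execute(_fdb_cmd("del", mac, dev))

# Demo: record the commands instead of running them (no root needed).
calls = []
register_mac("aa:aa:aa:aa:aa:aa", "eth0", execute=calls.append)
unregister_mac("aa:aa:aa:aa:aa:aa", "eth0", execute=calls.append)
print(calls[0])  # ['bridge', 'fdb', 'add', 'aa:aa:aa:aa:aa:aa', 'dev', 'eth0']
```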
 
 
  What to do in openstack?
  As neutron is aware of all the mac addresses that are in use
  it's the
  perfect candidate for doing the mac registrations. The idea is
  to modify
  the neutron openvswitch agent that it does the registration on
  port
  add and port remove via the bridge command.
  There would be a new optional configuration parameter,
  something like
  'non-promisc-mode' that is by default set to false. Only when
  set to
  true, macs get manually registered. Otherwise the agent
  behaves like it
  does today. So I guess only very little changes to the agent
  code are
  required. From my current point of view we do not need any
  changes to
  the ml2 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Sean Dague
On 08/26/2014 11:40 AM, Anne Gentle wrote:
 
 
 
 On Mon, Aug 25, 2014 at 8:36 AM, Sean Dague s...@dague.net wrote:
 
 On 08/20/2014 12:37 PM, Zane Bitter wrote:
  On 11/08/14 05:24, Thierry Carrez wrote:
  So the idea that being (and remaining) in the integrated release
 should
  also be judged on technical merit is a slightly different effort.
 It's
  always been a factor in our choices, but like Devananda says,
 it's more
  difficult than just checking a number of QA/integration
 checkboxes. In
  some cases, blessing one project in a problem space stifles
 competition,
  innovation and alternate approaches. In some other cases, we reinvent
  domain-specific solutions rather than standing on the shoulders of
  domain-specific giants in neighboring open source projects.
 
  I totally agree that these are the things we need to be vigilant
 about.
 
  Stifling competition is a big worry, but it appears to me that a
 lot of
  the stifling is happening even before incubation. Everyone's time is
  limited, so if you happen to notice a new project on the incubation
  trajectory doing things in what you think is the Wrong Way, you're
 most
  likely to either leave some drive-by feedback or to just ignore it and
  carry on with your life. What you're most likely *not* to do is to
 start
  a competing project to prove them wrong, or to jump in full time
 to the
  existing project and show them the light. It's really hard to argue
  against the domain experts too - when you're acutely aware of how
  shallow your knowledge is in a particular area it's very hard to know
  how hard to push. (Perhaps ironically, since becoming a PTL I feel I
  have to be much more cautious in what I say too, because people are
  inclined to read too much into my opinion - I wonder if TC members
 feel
  the same pressure.) I speak from first-hand instances of guilt here -
  for example, I gave some feedback to the Mistral folks just before the
  last design summit[1], but I haven't had time to follow it up at
 all. I
  wouldn't be a bit surprised if they showed up with an incubation
  request, a largely-unchanged user interface and an expectation that I
  would support it.
 
  The result is that projects often don't hear the feedback they need
  until far too late - often when they get to the incubation review
 (maybe
  not even their first incubation review). In the particularly
 unfortunate
  case of Marconi, it wasn't until the graduation review. (More
 about that
  in a second.) My best advice to new projects here is that you must be
  like a ferret up the pant-leg of any negative feedback. Grab hold
 of any
  criticism and don't let go until you have either converted the person
  giving it into your biggest supporter, been converted by them, or
  provoked them to start a competing project. (Any of those is a win as
  far as the community is concerned.)
 
  Perhaps we could consider a space like a separate mailing list
  (openstack-future?) reserved just for announcements of Related
 projects,
  their architectural principles, and discussions of the same?  They
  certainly tend to get drowned out amidst the noise of openstack-dev.
  (Project management, meeting announcements, and internal project
  discussion would all be out of scope for this list.)
 
  As for reinventing domain-specific solutions, I'm not sure that
 happens
  as often as is being made out. IMO the defining feature of IaaS that
  makes the cloud the cloud is on-demand (i.e. real-time) self-service.
  Everything else more or less falls out of that requirement, but
 the very
  first thing to fall out is multi-tenancy and there just aren't
 that many
  multi-tenant services floating around out there. There are a couple of
  obvious strategies to deal with that: one is to run existing software
  within a tenant-local resource provisioned by OpenStack (Trove and
  Sahara are examples of this), and the other is to wrap a multi-tenancy
  framework around an existing piece of software (Nova and Cinder are
  examples of this). (BTW the former is usually inherently less
  satisfying, because it scales at a much coarser granularity.) The
 answer
  to a question of the form:
 
  Why do we need OpenStack project $X, when open source project $Y
  already exists?
 
  is almost always:
 
  Because $Y is not multi-tenant aware; we need to wrap it with a
  multi-tenancy layer with OpenStack-native authentication, metering and
  quota management. That even allows us to set up an abstraction
 layer so
  that you can substitute $Z as the back end too.
 
  This is completely 

[openstack-dev] [neutron] [nova] Parity meeting cancelled going forward

2014-08-27 Thread Kyle Mestery
Due to low turnout and the fact we have a good handle on the parity work
for Juno, I'm canceling this meeting going forward. If items pop up at
the end of Juno which are parity related, please add them to the
weekly Neutron meeting agenda [1] and we'll cover them there.

Thanks!
Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings



[openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Thierry Carrez
Hi everyone,

I've been thinking about what changes we can bring to the Design Summit
format to make it more productive. I've heard the feedback from the
mid-cycle meetups and would like to apply some of those ideas for Paris,
within the constraints we have (already booked space and time). Here is
something we could do:

Day 1. Cross-project sessions / incubated projects / other projects

I think that worked well last time. 3 parallel rooms where we can
address top cross-project questions, discuss the results of the various
experiments we conducted during juno. Don't hesitate to schedule 2 slots
for discussions, so that we have time to come to the bottom of those
issues. Incubated projects (and maybe other projects, if space allows)
occupy the remaining space on day 1, and could occupy pods on the
other days.

Day 2 and Day 3. Scheduled sessions for various programs

That's our traditional scheduled space. We'll have 33% fewer slots
available. So, rather than trying to cover all the scope, the idea would
be to focus those sessions on specific issues which really require
face-to-face discussion (which can't be solved on the ML or using spec
discussion) *or* require a lot of user feedback. That way, appearing in
the general schedule is very helpful. This will require us to be a lot
stricter on what we accept there and what we don't -- we won't have
space for courtesy sessions anymore, and traditional/unnecessary
sessions (like my traditional release schedule one) should just move
to the mailing-list.

Day 4. Contributors meetups

On the last day, we could try to split the space so that we can conduct
parallel midcycle-meetup-like contributors gatherings, with no time
boundaries and an open agenda. Large projects could get a full day,
smaller projects would get half a day (but could continue the discussion
in a local bar). Ideally that meetup would end with some alignment on
release goals, but the idea is to make the best of that time together to
solve the issues you have. Friday would finish with the design summit
feedback session, for those who are still around.


I think this proposal makes the best use of our setup: discuss clear
cross-project issues, address key specific topics which need
face-to-face time and broader attendance, then try to replicate the
success of midcycle meetup-like open unscheduled time to discuss
whatever is hot at this point.

There are still details to work out (is it possible to split the space,
should we use the usual design summit CFP website to organize the
scheduled time...), but I would first like to have your feedback on
this format. Also if you have alternative proposals that would make a
better use of our 4 days, let me know.

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Russell Bryant
On 08/27/2014 08:51 AM, Thierry Carrez wrote:
 Hi everyone,
 
 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:
 
 Day 1. Cross-project sessions / incubated projects / other projects
 
 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.

I would add Don't hesitate to schedule 2 slots ... to the description
for days 2 and 3, as well.  I think the same point applies for
project-specific sessions.  I don't think I've seen that used for
project sessions much, but I think it would help in some cases.

 Day 2 and Day 3. Scheduled sessions for various programs
 
 That's our traditional scheduled space. We'll have 33% fewer slots
 available. So, rather than trying to cover all the scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.
 
 Day 4. Contributors meetups
 
 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.
 
 
 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.
 
 There are still details to work out (is it possible to split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.

+1 on the format.  I think it sounds like a nice iteration on our setup
to try some new ideas.

-- 
Russell Bryant



Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Sean Dague
On 08/27/2014 08:51 AM, Thierry Carrez wrote:
 Hi everyone,
 
 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:
 
 Day 1. Cross-project sessions / incubated projects / other projects
 
 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.
 
 Day 2 and Day 3. Scheduled sessions for various programs
 
 That's our traditional scheduled space. We'll have 33% fewer slots
 available. So, rather than trying to cover all the scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.
 
 Day 4. Contributors meetups
 
 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.
 
 
 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.
 
 There are still details to work out (is it possible to split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.

I definitely like this approach. I think it will be really interesting
to collect feedback from people about the value they got from days 2  3
vs. Day 4.

I also wonder if we should lose a slot from days 1 - 3 and expand the
hallway time. The hallway track is always pretty interesting, and honestly
a lot of interesting ideas spring up there. The 10 minute transitions often
seem to feel like you are rushing between places too quickly at times.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [Neutron] Author tags

2014-08-27 Thread Gary Kotton
Hi,
A few cycles ago the Nova group decided to remove @author from copyright 
statements, since this information is already stored in git.
Adding a similar hacking rule to Neutron has stirred up some debate.
Does anyone have any reason for us not to go ahead with
https://review.openstack.org/#/c/112329/?
Thanks
Gary


[openstack-dev] oslo.serialization 0.1.0 release

2014-08-27 Thread Doug Hellmann
The Oslo team is pleased to announce the release of oslo.serialization 0.1.0, 
the first release of the Oslo library containing tools for rendering objects in 
formats useful for storage or transmission.

This release moves the jsonutils module from the oslo-incubator to 
oslo.serialization.

Please report issues using the Oslo bug tracker: https://bugs.launchpad.net/oslo

Doug




Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-08-27 Thread Robert Li (baoli)
Hi Xuhan,

What I saw is that GARP is sent to the gateway port and also to the router 
ports, from a neutron router. I’m not sure why it’s sent to the router ports 
(internal network). My understanding of arping to the gateway port is that it 
is needed for proper NAT operation. Since we are not planning to support IPv6 
NAT, is this no longer required for IPv6?

There is an abandoned patch that disabled the arping for ipv6 gateway port:  
https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py

thanks,
Robert

On 8/27/14, 1:03 AM, Xuhan Peng pengxu...@gmail.com wrote:

As a follow-up action of yesterday's IPv6 sub-team meeting, I would like to 
start a discussion about how to support l3 agent HA when IP version is IPv6.

This problem is triggered by bug [1] where sending a gratuitous ARP packet for HA 
doesn't work for IPv6 subnet gateways. This is because neighbor discovery 
instead of ARP should be used for IPv6.

My thought to solve this problem turns into how to send out neighbor 
advertisement for IPv6 routers just like sending ARP reply for IPv4 routers 
after reading the comments on code review [2].

I searched for utilities which can do this and only find a utility called 
ndsend [3] as part of vzctl on ubuntu. I could not find similar tools on other 
linux distributions.
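
As a sketch (the helper and port names here are hypothetical; the ndsend
invocation follows its man page), the IPv6 counterpart of the l3 agent's
gratuitous-ARP call could look like:

```python
import subprocess

def send_unsolicited_na(ip, dev, execute=subprocess.check_call):
    # ndsend (shipped with vzctl) emits an unsolicited neighbor
    # advertisement -- the IPv6 analogue of the gratuitous ARP that
    # `arping -U` sends for IPv4 failover.
    execute(["ndsend", ip, dev])

# Demo: capture the command instead of executing it (needs root to run).
calls = []
send_unsolicited_na("2001:db8::1", "qg-1a2b3c", execute=calls.append)
print(calls[0])  # ['ndsend', '2001:db8::1', 'qg-1a2b3c']
```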

There are comments in yesterday's meeting that it's the new router's job to 
send out RA and there is no need for neighbor discovery. But we didn't get 
enough time to finish the discussion.

Can you comment your thoughts about how to solve this problem in this thread, 
please?

[1] https://bugs.launchpad.net/neutron/+bug/1357068

[2] https://review.openstack.org/#/c/114437/

[3] http://manpages.ubuntu.com/manpages/oneiric/man8/ndsend.8.html

Thanks,
Xu Han


[openstack-dev] [oslo.messaging] Request to include AMQP 1.0 support in Juno-3

2014-08-27 Thread Ken Giusti
Hi All,

I believe Juno-3 is our last chance to get this feature [1] included
into olso.messaging.

I honestly believe this patch is about as low risk as possible for a
change that introduces a whole new transport into oslo.messaging.  The
patch shouldn't affect the existing transports at all, and doesn't
come into play unless the application specifically turns on the new
'amqp' transport, which won't be the case for existing applications.
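
For context, a sketch of what that opt-in might look like in a service's
configuration, assuming the driver registers under the 'amqp' URL scheme as
the spec describes (host and credentials are placeholders):

```ini
[DEFAULT]
# Opt into the AMQP 1.0 driver; deployments that keep their
# transport pointed at rabbit:// or qpid:// are untouched.
transport_url = amqp://guest:guest@broker.example.com:5672/
```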

The patch includes a set of functional tests which exercise all the
messaging patterns, timeouts, and even broker failover. These tests do
not mock out any part of the driver - a simple test broker is included
which allows the full driver codepath to be executed and verified.

AFAIK, the only remaining technical block to adding this feature,
aside from core reviews [2], is sufficient infrastructure test coverage.
We discussed this a bit at the last design summit.  The root of the
issue is that this feature is dependent on a platform-specific library
(proton) that isn't in the base repos for most of the CI platforms.
But it is available via EPEL, and the Apache QPID team is actively
working towards getting the packages into Debian (a PPA is available
in the meantime).

In the interim I've proposed a non-voting CI check job that will
sanity check the new driver on EPEL based systems [3].  I'm also
working towards adding devstack support [4], which won't be done in
time for Juno but nevertheless I'm making it happen.

I fear that this feature's inclusion is stuck in a chicken/egg
deadlock: the driver won't get merged until there is CI support, but
the CI support won't run correctly (and probably won't get merged)
until the driver is available.  The driver really has to be merged
first, before I can continue with CI/devstack development.

[1] 
https://blueprints.launchpad.net/oslo.messaging/+spec/amqp10-driver-implementation
[2] https://review.openstack.org/#/c/75815/
[3] https://review.openstack.org/#/c/115752/
[4] https://review.openstack.org/#/c/109118/

-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [neutron][ml2] Mech driver as out-of-tree add-on

2014-08-27 Thread Mathieu Rohon
l2pop is about L2 network optimization through tunnel creation and ARP
responder population (so this is
not only an overlay network optimization; for example, ofagent now uses
l2pop info for flat and vlan optimization [1]).
This optimization is orthogonal to the several agent-based mechanism
drivers (lb, ovs, ofagent).
I agree that this optimization should be accessible to every MD, by
providing access to the fdb dict directly from the ML2 db.
A controller-based MD like ODL could use those fdb entries the same way
agents use them, by optimizing the datapath under its control.

[1]https://review.openstack.org/#/c/114119/
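
To make the fdb dict concrete, here is its approximate shape (simplified and
illustrative; the field names follow the l2pop driver, the UUIDs and
addresses are made up) and one way a controller-based MD could consume it:

```python
# Per network: the tunnel endpoints and the MAC/IP pairs reachable
# behind each endpoint. The flooding entry marks broadcast/unknown
# traffic destinations.
FLOODING_ENTRY = ('00:00:00:00:00:00', '0.0.0.0')

fdb_entries = {
    'net-uuid-1': {
        'network_type': 'vxlan',
        'segment_id': 1001,
        'ports': {
            '192.0.2.10': [FLOODING_ENTRY, ('fa:16:3e:aa:bb:cc', '10.0.0.3')],
            '192.0.2.11': [FLOODING_ENTRY, ('fa:16:3e:dd:ee:ff', '10.0.0.4')],
        },
    },
}

def arp_responder_rules(entries, network):
    # What an agent (or a controller-based MD) could derive for its
    # ARP responder: IP -> MAC for every remote port on the network.
    return {ip: mac
            for port_list in entries[network]['ports'].values()
            for mac, ip in port_list
            if mac != FLOODING_ENTRY[0]}

print(arp_responder_rules(fdb_entries, 'net-uuid-1'))
```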

On Wed, Aug 27, 2014 at 10:30 AM, Kevin Benton blak...@gmail.com wrote:
So why not agent based?

 Maybe I have an experimental operating system that can't run python. Maybe
 the RPC channel between compute nodes and Neutron doesn't satisfy certain
 security criteria. Regardless of the reason, it doesn't matter because that
 is an implementation detail that should be irrelevant to separate ML2
 drivers.

 l2pop should be concerned with tunnel endpoints and tunnel endpoints only.
 Whether or not you're running a chunk of code responding to messages on an
 RPC bus and sending heartbeats should not be Neutron's concern. It defeats
 the purpose of ML2 if everything that can bind a port has to be running a
 neutron RPC-compatible agent.

 The l2pop functionality should become part of the tunnel type drivers and
 the mechanism drivers should be able to provide the termination endpoints
 for the tunnels using whatever mechanism it chooses. Agent-based drivers can
 use the agent DB to do this and then the REST drivers can provide whatever
 termination point they want. This solves the interoperability problem and
 relaxes this tight coupling between vxlan and agents.


 On Wed, Aug 27, 2014 at 1:09 AM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 3:13 PM, Kevin Benton blak...@gmail.com wrote:

 Ports are bound in order of configured drivers so as long as the
 OpenVswitch driver is put first in the list, it will bind the ports it can
 and then ODL would bind the leftovers. [1][2] The only missing component is
 that ODL doesn't look like it uses l2pop so establishing tunnels between the
 OVS agents and the ODL-managed vswitches would be an issue that would have
 to be handled via another process.

 Regardless, my original point is that the driver keeps the neutron
 semantics and DB in tact. In my opinion, the lack of compatibility with
 l2pop isn't an issue with the driver, but more of an issue with how l2pop
 was designed. It's very tightly coupled to having agents managed by Neutron
 via RPC, which shouldn't be necessary when it's primary purpose is to
 establish endpoints for overlay tunnels.
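
Schematically (the driver names and predicates below are invented for
illustration), the first-match binding loop works like this:

```python
class MechanismDriver:
    def __init__(self, name, can_bind):
        self.name = name
        self.can_bind = can_bind  # predicate on a port dict

def bind_port(drivers, port):
    # ML2 tries drivers in configured order; the first one that can
    # bind the port wins, and later drivers see only the leftovers.
    for driver in drivers:
        if driver.can_bind(port):
            return driver.name
    return None

drivers = [
    MechanismDriver('openvswitch', lambda p: p['host'] in ('cn-1', 'cn-2')),
    MechanismDriver('opendaylight', lambda p: True),
]
print(bind_port(drivers, {'host': 'cn-1'}))  # openvswitch
print(bind_port(drivers, {'host': 'cn-9'}))  # opendaylight
```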


 So why not agent based? Neutron shouldn't be treated as just a resource
 storage, built-in backends naturally need things like l2pop and dvr for
 distributed dynamic topology control,  we couldn't say that something as a
 part was tightly coupled.

  On the contrary, 3rd-party backends should adapt themselves to be integrated
  into Neutron as thinly as they can, focusing on backend device control but
  not re-implementing core service logic duplicated with Neutron. BTW, ofagent
  is a good example of this style.




 1.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mech_agent.py#L53
 2.
 https://github.com/openstack/neutron/blob/7f466c8730cfca13f2fb374c80d810929bb8/neutron/plugins/ml2/drivers/mechanism_odl.py#L326


 On Tue, Aug 26, 2014 at 10:05 PM, loy wolfe loywo...@gmail.com wrote:




 On Wed, Aug 27, 2014 at 12:42 PM, Kevin Benton blak...@gmail.com
 wrote:

 I think that opensource is not the only factor, it's about built-in
  vs. 3rd backend. Built-in must be opensource, but opensource is not
  necessarily built-in. By my thought, current OVS and linuxbridge are
  built-in, but shim RESTful proxy for all kinds of sdn controller should 
  be
  3rd, for they keep all virtual networking data model and service logic 
  in
  their own places, using Neutron API just as the NB shell (they can't 
  even
  co-work with built-in l2pop driver for vxlan/gre network type today).


 I understand the point you are trying to make, but this blanket
 statement about the data model of drivers/plugins with REST backends is
 wrong. Look at the ODL mechanism driver for a counter-example.[1] The data
 is still stored in Neutron and all of the semantics of the API are
 maintained. The l2pop driver is to deal with decentralized overlays, so 
 I'm
 not sure how its interoperability with the ODL driver is relevant.


 If we create a vxlan network,  then can we bind some ports to built-in
 ovs driver, and other ports to ODL driver? linux bridge agent, ovs agent,
 ofagent can co-exist in the same vxlan network, under the common l2pop
 mechanism. By that scenery, I'm not sure whether ODL can just add to them 
 in
 a heterogeneous multi-backend architecture , or 

Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Daniel P. Berrange
On Wed, Aug 27, 2014 at 02:51:55PM +0200, Thierry Carrez wrote:
 Hi everyone,
 
 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:
 
 Day 1. Cross-project sessions / incubated projects / other projects
 
 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.
 
 Day 2 and Day 3. Scheduled sessions for various programs
 
 That's our traditional scheduled space. We'll have 33% fewer slots
 available. So, rather than trying to cover the full scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.
 
 Day 4. Contributors meetups
 
 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.
 
 
 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.
 
 There are still details to work out (is it possible to split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.

+1, I think what you've proposed is a pretty attractive evolution of
our previous design summit formats. I figure it is safer trying such
an evolutionary approach for Paris, rather than trying to make too
much of a big bang revolution at one time. 

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] static IP DHCP

2014-08-27 Thread Sanjivini Naikar
Hi,

I want to assign a static IP to my instance. However, when I try to do so, the 
IP doesn't get associated with the VM. My VM boot logs show:

Sending discover...
Sending discover...
Sending discover...
No lease, failing

WARN: /etc/rc3.d/S40-network failed

How do I assign a static IP to my VM?
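For reference, the usual approach (sketched here with the 2014-era CLIs; the network name, UUIDs and the address below are placeholders, not taken from this setup) is to pre-create a Neutron port with the desired fixed IP and boot the instance on that port:

```shell
# Create a Neutron port that pins the desired address on the target subnet
neutron port-create private \
    --fixed-ip subnet_id=<subnet-uuid>,ip_address=10.0.0.25

# Boot the instance attached to that pre-created port; the guest then
# receives 10.0.0.25 from Neutron's DHCP agent
nova boot --image cirros --flavor m1.tiny \
    --nic port-id=<port-uuid> static-ip-vm
```

Separately, the repeated "Sending discover... / No lease, failing" suggests the DHCP replies never reach the guest at all, so it is also worth checking that enable_dhcp is set on the subnet and that the dhcp-agent is running — a guess from the log, not a diagnosis.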


Regards,
Sanjivini Naikar
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][IPv6] Neighbor Discovery for HA

2014-08-27 Thread Veiga, Anthony

Hi Xuhan,

What I saw is that GARP is sent to the gateway port and also to the router 
ports, from a neutron router. I'm not sure why it's sent to the router ports 
(internal network). My understanding of arping to the gateway port is that it 
is needed for proper NAT operation. Since we are not planning to support IPv6 
NAT, is this no longer required/needed for IPv6?

I agree that this is no longer necessary.


There is an abandoned patch that disabled the arping for ipv6 gateway port:  
https://review.openstack.org/#/c/77471/3/neutron/agent/l3_agent.py

thanks,
Robert

On 8/27/14, 1:03 AM, Xuhan Peng 
pengxu...@gmail.com wrote:

As a follow-up action of yesterday's IPv6 sub-team meeting, I would like to 
start a discussion about how to support l3 agent HA when IP version is IPv6.

This problem is triggered by bug [1] where sending gratuitous arp packet for HA 
doesn't work for IPv6 subnet gateways. This is because neighbor discovery 
instead of ARP should be used for IPv6.

My thought to solve this problem turns into how to send out neighbor 
advertisement for IPv6 routers just like sending ARP reply for IPv4 routers 
after reading the comments on code review [2].

I searched for utilities which can do this and only find a utility called 
ndsend [3] as part of vzctl on ubuntu. I could not find similar tools on other 
linux distributions.

There are comments in yesterday's meeting that it's the new router's job to 
send out RA and there is no need for neighbor discovery. But we didn't get 
enough time to finish the discussion.

Because OpenStack runs the l3 agent, it is the router.  Instead of needing to 
do gratuitous ARP to alert all clients of the new MAC, a simple RA from the new 
router for the same prefix would accomplish the same, without having to resort 
to a special package to generate unsolicited NA packets.  RAs must be generated 
from the l3 agent anyway if it’s the gateway, and we’re doing that via radvd 
now.  The HA failover simply needs to start the proper radvd process on the 
secondary gateway and resume normal operation.
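For reference, the radvd config the l3 agent manages is roughly of this shape — a hand-written sketch with a made-up interface name and prefix, not the agent's literal output. Starting radvd on the new master makes it emit unsolicited RAs immediately:

```
interface qr-1234abcd-56
{
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```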


Can you comment your thoughts about how to solve this problem in this thread, 
please?

[1] https://bugs.launchpad.net/neutron/+bug/1357068

[2] https://review.openstack.org/#/c/114437/

[3] http://manpages.ubuntu.com/manpages/oneiric/man8/ndsend.8.html

Thanks,
Xu Han


-Anthony
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][NFV] VIF_VHOSTUSER

2014-08-27 Thread Luke Gorrie
Howdy!

I am writing to ask whether it will be possible to merge VIF_VHOSTUSER [1]
in Juno?

VIF_VHOSTUSER adds support for a new QEMU 2.1 feature called vhost-user
[2] that allows a guest to do Virtio-net I/O via a userspace vswitch. This
makes it convenient to deploy new vswitches that are optimized for NFV
workloads, of which there are now several, both open source and proprietary.
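For anyone unfamiliar with it, a vhost-user port on the libvirt side looks roughly like this guest XML — a sketch; the socket path and MAC address are arbitrary:

```xml
<interface type='vhostuser'>
  <mac address='52:54:00:3b:83:1a'/>
  <source type='unix' path='/var/run/vswitch/vm1.sock' mode='client'/>
  <model type='virtio'/>
</interface>
```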

The complication is that we have no CI coverage for this feature in Juno.
Originally we had anticipated merging a Neutron driver that would exercise
vhost-user but the Neutron core team requested that we develop that outside
of the Neutron tree for the time being instead [3].

We are hoping that the Nova team will be willing to merge the feature even
so. Within the NFV subgroup it would help us to share more code with each
other and also be good for our morale :) particularly as the QEMU work was
done especially for use with OpenStack.

Cheers,
-Luke

[1] https://review.openstack.org/#/c/96140/
[2] http://www.virtualopensystems.com/en/solutions/guides/snabbswitch-qemu/
[3] https://review.openstack.org/#/c/116476/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-08-27 Thread Daniel P. Berrange
On Tue, Aug 26, 2014 at 05:02:10PM +0300, Itzik Brown wrote:
 Hi,
 
 Following the conversation [1]:
 My understanding was that the way to use an out-of-tree vif_driver is to
 set the vif_driver option in nova.conf
 until there is a better way to support such cases, but the commit [2] removed
 this option.
 Can someone clarify the current status (i.e. what is the current way to do
 it) in Juno?

For Juno the vif_driver option still exists and can be used.

It is only deleted in the Kilo release.

The recommendation is that people working on new VIF drivers should do
so on a branch of the Nova repo, so that they can modify the libvirt VIF
driver code to support their needs, rather than try to write a completely
isolated out-of-tree impl. This is an approach that will need to be taken
regardless in order to submit the VIF driver for review and merge to Nova.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Zane Bitter

On 27/08/14 09:55, Thierry Carrez wrote:

Daniel P. Berrange wrote:

On Wed, Aug 27, 2014 at 02:51:55PM +0200, Thierry Carrez wrote:

[...]
I think this proposal makes the best use of our setup: discuss clear
cross-project issues, address key specific topics which need
face-to-face time and broader attendance, then try to replicate the
success of midcycle meetup-like open unscheduled time to discuss
whatever is hot at this point.

There are still details to work out (is it possible to split the space,
should we use the usual design summit CFP website to organize the
scheduled time...), but I would first like to have your feedback on
this format. Also if you have alternative proposals that would make a
better use of our 4 days, let me know.


+1, I think what you've proposed is a pretty attractive evolution of
our previous design summit formats. I figure it is safer trying such
an evolutionary approach for Paris, rather than trying to make too
much of a big bang revolution at one time.


We have too many fixed constraints at this time for a big bang anyway.

What I like in the format is that the nature of the 4th day can change
completely based on the result of the 3 previous days. If a major topic
emerged, you can address it. If a continuation discussion is needed, you
can have it. If you are completely drained of any energy, you can spend
a quiet time together with a lighter agenda, or no agenda at all.

It would still be open for everyone, but the placement (Friday) and
title in the schedule (X contributors gathering) should feel
unattractive enough so that attendance is generally smaller and more
on-topic. We'll likely have to split rooms, so people who have been
complaining about giant rooms being detrimental should be happy too.


+1 I have no idea if this will work, but it definitely seems worth 
trying and will help inform our planning for the L design summit at a 
point where we'll still have some flexibility.


I do hope that contributors will emphasize to their respective finance 
departments that the X contributors gathering is potentially the 
_most_ important part of the whole conference, not an opportunity to 
skip out early and avoid an extra night's hotel stay. If all of the key 
people are in the room it may even save a bunch of people a trip to a 
mid-cycle meetup.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non promisc mode adapters

2014-08-27 Thread Mathieu Rohon
Hi Irena,

In Andreas's proposal, do you want to enforce non-promisc mode
per l2-agent, so that every port managed by this agent will have to be in a
non-promisc state?
On a first read of the mail, I understood that you want to manage that
per port with an extension.
By using an extension, an agent could host both promisc and non-promisc
net-adapters, and other MDs could potentially leverage this info (at
least the LB MD).

On Wed, Aug 27, 2014 at 3:45 PM, Irena Berezovsky ire...@mellanox.com wrote:
 Hi Mathieu,
 We had a short discussion with Andreas about the use case stated below and 
 also considered the SR-IOV related use case.
 It seems that all required changes can be encapsulated in the L2 OVS agent, 
 since it requires adding FDB MAC registration on the adapted interface.
 What was your idea related to extension manager in ML2?

 Thanks,
 Irena

 -Original Message-
 From: Mathieu Rohon [mailto:]
 Sent: Wednesday, August 27, 2014 3:11 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support for non 
 promisc mode adapters

 you probably should consider using the future extension manager in ML2 :

 https://review.openstack.org/#/c/89211/

 On Mon, Aug 25, 2014 at 12:54 PM, Irena Berezovsky ire...@mellanox.com 
 wrote:
 Hi Andreas,
 We can definitely set some time to discuss this.
 I am usually available from 5 to 14:00 UTC.
 Let's follow up on IRC (irenab).

 BR,
 Irena

 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Monday, August 25, 2014 11:00 AM
 To: Irena Berezovsky
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [neutron][ml2] Openvswitch agent support
 for non promisc mode adapters

 Hi Irena,
 thanks for your reply. Yes sure, collaboration would be great.
 Do you already have a blueprint out there? Maybe we can sync up this week 
 to discuss more details? Because I would like to understand what exactly 
 you're looking for. Normally I'm available from 7 UTC to 16 UTC (today only 
 until 13 UTC). My irc name is scheuran. Maybe we can get in contact this 
 week!

 You were also talking about SR-IOV. I saw some blueprint mentioning sriov and 
 macvtap. Do you have any insights into this one, too? What we would also 
 like to do is to introduce macvtap as a network virtualization option. Macvtap 
 also registers mac addresses to network adapters...
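 (For illustration, this is the kind of registration macvlan/macvtap performs — a sketch with invented device names and MAC: creating a macvlan child on a NIC adds the child's MAC to the parent's unicast filter, so the parent does not need promiscuous mode.)

```shell
# Create a macvlan child on eth0; its MAC gets registered with eth0's
# hardware unicast filter, so no promiscuous mode is needed on eth0
ip link add link eth0 name macvlan0 type macvlan mode bridge
ip link set dev macvlan0 address 52:54:00:aa:bb:cc
ip link set macvlan0 up
```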


 Thanks,
 Andreas


 On Sun, 2014-08-24 at 08:51 +, Irena Berezovsky wrote:
 Hi Andreas,
 Thank you for this initiative.
 We were looking on similar problem for mixing OVS and SR-IOV on same 
 network adapter, which also requires mac addresses registration of OVS 
 ports.
 Please let me know if you would like to collaborate on this effort.

 BR,
 Irena

 -Original Message-
 From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com]
 Sent: Friday, August 22, 2014 11:16 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron][ml2] Openvswitch agent support
 for non promisc mode adapters

 Thanks for your feedback.

 No, I do not yet have code for it. Just wanted to get a feeling if such a 
 feature would get acceptance in the community.
 But if that helps I can sit down and start some prototyping while I'm 
 preparing a blueprint spec in parallel.

 The main part of the implementation I wanted to do on my own to get more 
 familiar with the code base and to get more in touch with the community.
 But of course advice and feedback of experienced neutron developers is 
 essential!

 So I will proceed like this
 - Create a blueprint
 - Commit first pieces of code to get early feedback (e.g. ask via the
 mailing list or irc)
 - Upload a spec (as soon as the repo is available for K)

 Does that make sense for you?

 Thanks,
 Andreas



 On Thu, 2014-08-21 at 13:44 -0700, Kevin Benton wrote:
  I think this sounds reasonable. Do you have code for this already,
  or are you looking for a developer to help implement it?
 
 
  On Thu, Aug 21, 2014 at 8:45 AM, Andreas Scheuring
  scheu...@linux.vnet.ibm.com wrote:
  Hi,
  last week I started discussing an extension to the existing
  neutron
  openvswitch agent to support network adapters that are not in
  promiscuous mode. Now I would like to enhance the round to get
  feedback
  from a broader audience via the mailing list.
 
 
  The Problem
   When driving vlan or flat networking, openvswitch requires a
   network
  adapter in promiscuous mode.
 
 
  Why not having promiscuous mode in your adapter?
  - Admins like to have full control over their environment and
  which
  network packets enter the system.
  - The network adapter just does not have support for it.
 
 
  What to do?
   Linux net-dev drivers offer an interface to manually register
  additional
   

[openstack-dev] test

2014-08-27 Thread Xingjun Chu


Xingjun Chu
IP Solutions
Huawei Technologies Canada Co.,Ltd.
303 Terry Fox Drive, Suite 400
Kanata, Ontario Canada K2K 3J1
T: 613-595-1900 Ext. 1618
F: 613-595-1901
Email: xingjun@huawei.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][all][Heat] Packaging of functional tests

2014-08-27 Thread Zane Bitter

On 26/08/14 18:59, Clint Byrum wrote:

Excerpts from Steve Baker's message of 2014-08-26 14:25:46 -0700:

On 27/08/14 03:18, David Kranz wrote:

On 08/26/2014 10:14 AM, Zane Bitter wrote:

Steve Baker has started the process of moving Heat tests out of the
Tempest repository and into the Heat repository, and we're looking
for some guidance on how they should be packaged in a consistent way.
Apparently there are a few projects already packaging functional
tests in the package projectname.tests.functional (alongside
projectname.tests.unit for the unit tests).

That strikes me as odd in our context, because while the unit tests
run against the code in the package in which they are embedded, the
functional tests run against some entirely different code - whatever
OpenStack cloud you give it the auth URL and credentials for. So
these tests run from the outside, just like their ancestors in
Tempest do.

There's all kinds of potential confusion here for users and
packagers. None of it is fatal and all of it can be worked around,
but if we refrain from doing the thing that makes zero conceptual
sense then there will be no problem to work around :)

Thanks, Zane. The point of moving functional tests to projects is to
be able to run more of them
in gate jobs for those projects, and allow tempest to survive being
stretched-to-breaking horizontally as we scale to more projects. At
the same time, there are benefits to the
tempest-as-all-in-one-functional-and-integration-suite that we should
try not to lose:

1. Strong integration testing without thinking too hard about the
actual dependencies
2. Protection from mistaken or unwise api changes (tempest two-step
required)
3. Exportability as a complete blackbox functional test suite that can
be used by Rally, RefStack, deployment validation, etc.

I think (1) may be the most challenging because tests that are moved
out of tempest might be testing some integration that is not being
covered
by a scenario. We will need to make sure that tempest actually has a
complete enough set of tests to validate integration. Even if this is
all implemented in a way where tempest can see in-project tests as
plugins, there will still not be time to run them all as part of
tempest on every commit to every project, so a selection will have to
be made.

(2) is quite difficult. In Atlanta we talked about taking a copy of
functional tests into tempest for stable apis. I don't know how
workable that is but don't see any other real options except vigilance
in reviews of patches that change functional tests.

(3) is what Zane was addressing. The in-project functional tests need
to be written in a way that they can, at least in some configuration,
run against a real cloud.
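A minimal sketch of what "configurable to run against a real cloud" can look like in practice (the class name and the env-var convention here are illustrative, not any project's actual harness):

```python
import os
import unittest


class HeatFunctionalTest(unittest.TestCase):
    """Sketch: target whatever cloud the environment points at."""

    def setUp(self):
        # Run against a real cloud only when credentials are provided;
        # otherwise skip rather than fail.
        self.auth_url = os.environ.get("OS_AUTH_URL")
        if not self.auth_url:
            self.skipTest("OS_AUTH_URL not set; no cloud to test against")

    def test_endpoint_configured(self):
        # Placeholder for a real functional check against the cloud
        self.assertTrue(self.auth_url.startswith("http"))
```

The point is only that the target cloud is injected from outside, so the same suite can run in a gate job against devstack or against any deployed cloud.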




I suspect from reading the previous thread about In-tree functional
test vision that we may actually be dealing with three categories of
test here rather than two:

* Unit tests that run against the package they are embedded in
* Functional tests that run against the package they are embedded in
* Integration tests that run against a specified cloud

i.e. the tests we are now trying to add to Heat might be
qualitatively different from the projectname.tests.functional
suites that already exist in a few projects. Perhaps someone from
Neutron and/or Swift can confirm?

That seems right, except that I would call the third functional
tests and not integration tests, because the purpose is not really
integration but deep testing of a particular service. Tempest would
continue to focus on integration testing. Is there some controversy
about that?
The second category could include whitebox tests.

I don't know about swift, but in neutron the intent was to have these
tests be configurable to run against a real cloud, or not. Maru Newby
would have details.


I'd like to propose that tests of the third type get their own
top-level package with a name of the form
projectname-integrationtests (second choice: projectname-tempest
on the principle that they're essentially plugins for Tempest). How
would people feel about standardising that across OpenStack?

+1 But I would not call it integrationtests for the reason given above.


Because all heat does is interact with other services, what we call
functional tests are actually integration tests. Sure, we could mock at
the REST API level, but integration coverage is what we need most. This


I'd call that faking, not mocking, but both could apply.


lets us verify things like:
- how heat handles races in other services leading to resources going
into ERROR


A fake that predictably fails (and thus tests failure handling) will
result in better coverage than a real service that only fails when that
real service is broken. What's frustrating is that _both_ are needed to
catch bugs.


Yeah, we discussed this in the project meeting yesterday[1] and 
basically came to the conclusion that we'll ultimately need both 
functional tests with fake services and integration tests with real 
services.



Re: [openstack-dev] [Openstack][TripleO] What if undercloud machines down, can we reboot overcloud machines?

2014-08-27 Thread Ben Nemec
We probably will at some point, but I don't know that it's a huge
priority right now.  The PXE booting method works fine, and as I
mentioned we don't intend you to reboot machines without using the
undercloud anyway, just like Nova doesn't expect you to reboot vms
directly via libvirt (or your driver of choice).

It's likely there would be other issues booting a deployed machine if
the undercloud is down anyway (nothing to respond to DHCP requests, for
one), so I don't see that as something we want to encourage anyway.

-Ben

On 08/27/2014 04:11 AM, Jyoti Ranjan wrote:
 I believe that a local boot option is available in Ironic. Would it not be a good
 idea to boot from the local disk instead of relying on PXE boot always? Curious
 to know why we are not going down this path?
 
 
 On Wed, Aug 27, 2014 at 3:54 AM, 严超 yanchao...@gmail.com wrote:
 
 Thank you very much.
 And sorry for the cross-posting.

 *Best Regards!*


 *Chao Yan--**My twitter:Andy Yan @yanchao727
 https://twitter.com/yanchao727*


 *My Weibo:http://weibo.com/herewearenow
 http://weibo.com/herewearenow--*


 2014-08-26 23:17 GMT+08:00 Ben Nemec openst...@nemebean.com:

 Oh, after writing my response below I realized this is cross-posted
 between openstack and openstack-dev.  Please don't do that.

 I suppose this probably belongs on the users list, but since I've
 already written the response I guess I'm not going to argue too much. :-)

 On 08/26/2014 07:36 AM, 严超 wrote:
 Hi, All:
 I've deployed undercloud and overcloud on some baremetals. All
 overcloud machines are deployed by undercloud.
  Then I tried to shut down the undercloud machines. After that, if I
  reboot one overcloud machine, it will never boot from the network, aka the
  PXE boot provided by the undercloud.

 Yes, that's normal.  With the way our baremetal deployments work today,
 the deployed systems always PXE boot.  After deployment they PXE boot a
 kernel and ramdisk that use the deployed hard disk image, but it's still
 a PXE boot.

  Is that what TripleO is designed to be? Can we never shut down the
  undercloud machines for maintenance of the overcloud? Please help me
  clarify that.

 Yes, that's working as intended at the moment.  I recall hearing that
 there were plans to eliminate the PXE requirement after deployment, but
 you'd have to talk to the Ironic team about that.

 Also, I don't think it was ever the intent of TripleO that the
 undercloud would be shut down after deployment.  The idea is that you
 use the undercloud to manage the overcloud machines, so if you want to
 reboot one you do it via the undercloud nova, not directly on the system
 itself.


 *Best Regards!*


 *Chao Yan--**My twitter:Andy Yan @yanchao727
 https://twitter.com/yanchao727*


 *My Weibo:http://weibo.com/herewearenow
 http://weibo.com/herewearenow--*



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Jyoti Ranjan
I am a little bit skeptical about using Swift for this use case because of
its eventual-consistency issue. I am not sure a Swift cluster is good to be
used for this kind of problem. Please note that a Swift cluster may give you
stale data at some point in time.


On Wed, Aug 27, 2014 at 4:12 PM, Steven Hardy sha...@redhat.com wrote:

 On Wed, Aug 27, 2014 at 02:39:09PM +0530, Jyoti Ranjan wrote:
 I am curious to know about Swift's role here. Can you elaborate a little
 bit, please?

 I think Zane already covered it with We just want people to stop polling
 us, because it's killing our performance.

 Basically, if we provide the option for folks using heat at large scale to
 poll Swift instead of the Heat API, we can work around some performance
 issues a subset of our users have been experiencing due to the load of many
 resources polling Heat (and hence the database) frequently.

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Author tags

2014-08-27 Thread Kyle Mestery
On Wed, Aug 27, 2014 at 8:24 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 A few cycles ago the Nova group decided to remove @author from copyright
 statements. This is due to the fact that this information is stored in git.
 After adding a similar hacking rule to Neutron it has stirred up some
 debate.
 Does anyone have any reason to for us not to go ahead with
 https://review.openstack.org/#/c/112329/.
 Thanks
 Gary

My main concern is around landing a change like this during feature
freeze week, I think at best this should land at the start of Kilo.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Docker] How to Dockerize your applications with OpenStack Heat in simple steps

2014-08-27 Thread Marouen Mechtri
Hi Thiago,

Yes, Docker could be used to replace the virtualization layer on the compute
node; that is the case if we configure Nova to use the Docker driver.

In our case, we are orchestrating Docker in OpenStack via Heat.  We used a
VM as a docker host for many reasons :

  - It is a way to conserve IP addresses:

If the docker host is a physical server, every container needs a floating
IP (to be reachable from outside). When the docker host is a VM, we
associate one floating IP with it and can reach the containers via port
bindings (@VM-floatingIP:container-binding-port).

  - To achieve stronger isolation and security

  - To easily integrate docker with existing VM-based management and
monitoring tools

  - We think that Docker and virtualization could also be complementary.


Thank you for the question
We will update the document to clarify these points.

https://github.com/MarouenMechtri/Docker-containers-deployment-with-OpenStack-Heat

Regards,
Marouen


2014-08-26 21:35 GMT+02:00 Martinx - ジェームズ thiagocmarti...@gmail.com:

 Hey Stackers! Wait!   =)

 Let me ask something...

 Why are you guys using Docker within a VM?!?! What is the point of doing
 such thing?!

 I thought Docker was here to entirely replace the virtualization layer,
 bringing a bare metal-cloud, am I right?!

 Tks!
 Thiago


 On 26 August 2014 05:45, Marouen Mechtri mechtri.mar...@gmail.com wrote:

 Hi Angus,

 We are not using nova-docker driver to deploy docker containers.

 In our manual, we are using Heat (thanks to the docker plugin) to deploy
 docker containers, and Nova is just used to deploy the VM. Inside this VM,
 Heat deploys the docker software. The figure below describes the
 interactions between the different components.

 Regards,
 Marouen


  [image: embedded architecture diagram]




 2014-08-26 0:13 GMT+02:00 Angus Salkeld asalk...@mirantis.com:

 This seems misleading as there is no description on setting up
 nova-docker or using the heat docker container.

 -Angus



 On Tue, Aug 26, 2014 at 5:56 AM, Marouen Mechtri 
 mechtri.mar...@gmail.com wrote:

  Hi all,

 I would like to present our guide to Docker container deployment with
 OpenStack Heat.
 In this guide we dockerize and deploy a LAMP application on two
 containers.


 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/Docker-containers-deployment-with-OpenStack-Heat.rst
 https://github.com/MarouenMechtri/OpenStack-Heat-Installation/blob/master/OpenStack-Heat-Installation.rst


 Hope it will be helpful for many people. Please let us know your
 opinion about it.

 Regards,
 Marouen Mechtri

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][NFV] VIF_VHOSTUSER

2014-08-27 Thread Daniel P. Berrange
On Wed, Aug 27, 2014 at 04:06:25PM +0200, Luke Gorrie wrote:
 Howdy!
 
 I am writing to ask whether it will be possible to merge VIF_VHOSTUSER [1]
 in Juno?
 
 VIF_VHOSTUSER adds support for a new QEMU 2.1 feature called vhost-user
 [2] that allows a guest to do Virtio-net I/O via a userspace vswitch. This
 makes it convenient to deploy new vswitches that are optimized for NFV
 workloads, of which there are now several, both open source and proprietary.
 
 The complication is that we have no CI coverage for this feature in Juno.
 Originally we had anticipated merging a Neutron driver that would exercise
 vhost-user but the Neutron core team requested that we develop that outside
 of the Neutron tree for the time being instead [3].

 We are hoping that the Nova team will be willing to merge the feature even
 so. Within the NFV subgroup it would help us to share more code with each
 other and also be good for our morale :) particularly as the QEMU work was
 done especially for use with OpenStack.

Our general rule for accepting new VIF drivers in Nova is that Neutron
should have accepted the corresponding other half of the VIF driver, since
Nova does not want to add support for things that are not in-tree for
Neutron.

In this case adding the new VIF driver involves defining a new VIF type
and corresponding metadata associated with it. This metadata is part of
the public API definition, to be passed from Neutron to Nova during VIF
plugging, and so IMHO this has to be agreed upon and defined in tree for
Neutron & Nova. So even if the VIF driver in Neutron were to live out
of tree, at a very minimum I'd expect the VIF_VHOSTUSER part to be
specified in-tree in Neutron, so that Nova has a defined interface it
can rely on.
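To make the in-tree interface question concrete, here is a hedged sketch of the kind of VIF record Neutron would hand Nova during plugging for a vhost-user port. The type string and detail keys below are illustrative assumptions, not the agreed-upon spec:

```python
# Illustrative only: a guess at the shape of the VIF data Neutron
# passes to Nova during plugging. Key names are assumptions.
VIF_TYPE_VHOSTUSER = "vhostuser"

vif = {
    "id": "vif-1234",
    "type": VIF_TYPE_VHOSTUSER,
    "details": {
        # Where QEMU should connect to the userspace vswitch.
        "vhostuser_socket": "/var/run/vswitch/vif-1234.sock",
        # Whether QEMU acts as the client or the server on that socket.
        "vhostuser_mode": "client",
    },
}

# Nova's generic plugging code would dispatch on the agreed type string:
assert vif["type"] == VIF_TYPE_VHOSTUSER
print(vif["details"]["vhostuser_socket"])
```

The point of the argument above is that both projects must agree on exactly this dictionary — the type string and the detail keys — before either side can merge its half.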

So based on this policy, my recommendation would be to keep the Nova VIF
support out of tree in your own branch of Nova codebase until Neutron team
are willing to accept their half of the driver.

In cases like this I think Nova & Neutron need to work together to agree
on acceptance/rejection of the proposed feature. Having one project accept
it and the other reject it, without them talking to each other, is not
a good position to be in.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-27 Thread Sean Dague
So this change came in with adding glance.store -
https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a
bad direction to be headed.

Here is the problem with working from git with Python code that uses
namespaces: the mechanism is kind of a hack, and it violates the
principle of least surprise.

For instance:

cd /opt/stack/oslo.vmware
pip install .
cd /opt/stack/oslo.config
pip install -e .
python -m oslo.vmware
/usr/bin/python: No module named oslo.vmware

In Python 2.7 (using pip) namespaces are a bolt-on because of the way
module importing works, and depending on how you install things, one
package in a namespace will overwrite the base __init__.py for the top
level of the namespace in such a way that you can't get access to the
submodules.
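The shadowing the shell transcript shows can be reproduced in a self-contained way. Here a made-up `demo_ns` package (all names invented for illustration) is split across two install locations, each shipping a plain `__init__.py` with no namespace-package machinery:

```python
import os
import sys
import tempfile

# Two fake install locations, each shipping part of a "demo_ns"
# package -- mimicking two distributions installed different ways.
root = tempfile.mkdtemp()
for site, mod in (("site_a", "config"), ("site_b", "vmware")):
    pkg = os.path.join(root, site, "demo_ns")
    os.makedirs(pkg)
    # A plain __init__.py, i.e. a regular (non-namespace) package.
    open(os.path.join(pkg, "__init__.py"), "w").close()
    open(os.path.join(pkg, "%s.py" % mod), "w").close()

sys.path[:0] = [os.path.join(root, "site_a"), os.path.join(root, "site_b")]

import demo_ns.config            # resolved via site_a's demo_ns/

try:
    import demo_ns.vmware        # site_b's copy is shadowed by site_a's
except ImportError as exc:
    print("shadowed: %s" % exc)
```

Whichever `demo_ns/__init__.py` appears first on `sys.path` wins, and the other distribution's submodules become unreachable — the same failure mode as the transcript above.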

It's well known, and every conversation with dstufft that I've had in the
past was don't use namespaces.

A big reason we see this a lot is due to the fact that devstack does
'editable' pip installs for most things, because the point is it's a
development environment: you should be able to change code and see it
live without having to go through the install step again.

If people remember the constant issues with oslo.config in unit tests 9
months ago, this was because of a mismatch of editable vs. non-editable
libraries in the system and virtualenvs. This took months to get to a
consistent workaround.

The *workaround* that was done is we just gave up on installing oslo
libs in a development-friendly way. I don't consider that a solution;
it's a workaround. But it has some big implications for the usefulness
of the development environment. It also definitely violates the
principle of least surprise, as changes to oslo.messaging in a devstack
env don't immediately apply; you have to reinstall oslo.messaging to get
them to take.

If this is just oslo, that's one thing (and still something I think
should be revisited, because when the maintainer of pip says don't do
this, I'm inclined to go by that). But this change aims to start bringing
this pattern into other projects. Realistically I'm quite concerned that
this will trigger more workarounds and confusion.

It also means, for instance, that once we are in a namespace we can
never decide to install some of the namespace from pypi and some of it
from git editable (because it's a part that's under more interesting
rapid development).

So I'd like us to revisit using a namespace for glance, and honestly,
for other places in OpenStack, because these kinds of violations of the
principle of least surprise are something that I'd like us to be actively
minimizing.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Henry Gessau
On 8/27/2014 8:51 AM, Thierry Carrez wrote:
 better use of our 4 days

Will the design space be available on the fifth day too?

No need to schedule anything on that day (Day 0), but having the space
available would be nice for ad hoc gatherings.




[openstack-dev] [nova] Brainstorming summit sessions

2014-08-27 Thread Michael Still
Hi,

I'd like to start the summit planning process for Paris by asking for
people to brain storm a list of topics we might want to cover. We can
then prioritize that list and make sure that we address the most
important issues. This is a process that worked well for us at the
mid-cycle meetup.

To that end, I've created:

https://etherpad.openstack.org/p/kilo-nova-summit-topics

I'd appreciate it if people could take a look and add anything they think
I've missed in my initial brain dump.

Thanks,
Michael

-- 
Rackspace Australia



Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 8:47 AM, Sean Dague s...@dague.net wrote:

 On 08/26/2014 11:40 AM, Anne Gentle wrote:
 
 
 
 On Mon, Aug 25, 2014 at 8:36 AM, Sean Dague s...@dague.net wrote:
 
 On 08/20/2014 12:37 PM, Zane Bitter wrote:
  On 11/08/14 05:24, Thierry Carrez wrote:
  So the idea that being (and remaining) in the integrated release
  should also be judged on technical merit is a slightly different
  effort. It's always been a factor in our choices, but like Devananda
  says, it's more difficult than just checking a number of
  QA/integration checkboxes. In some cases, blessing one project in a
  problem space stifles competition, innovation and alternate
  approaches. In some other cases, we reinvent domain-specific
  solutions rather than standing on the shoulders of domain-specific
  giants in neighboring open source projects.
  
  I totally agree that these are the things we need to be vigilant
  about.
  
  Stifling competition is a big worry, but it appears to me that a lot
  of the stifling is happening even before incubation. Everyone's time
  is limited, so if you happen to notice a new project on the
  incubation trajectory doing things in what you think is the Wrong
  Way, you're most likely to either leave some drive-by feedback or to
  just ignore it and carry on with your life. What you're most likely
  *not* to do is to start a competing project to prove them wrong, or
  to jump in full time to the existing project and show them the light.
  It's really hard to argue against the domain experts too - when
  you're acutely aware of how shallow your knowledge is in a particular
  area it's very hard to know how hard to push. (Perhaps ironically,
  since becoming a PTL I feel I have to be much more cautious in what I
  say too, because people are inclined to read too much into my opinion
  - I wonder if TC members feel the same pressure.) I speak from
  first-hand instances of guilt here - for example, I gave some
  feedback to the Mistral folks just before the last design summit[1],
  but I haven't had time to follow it up at all. I wouldn't be a bit
  surprised if they showed up with an incubation request, a
  largely-unchanged user interface and an expectation that I would
  support it.
  
  The result is that projects often don't hear the feedback they need
  until far too late - often when they get to the incubation review
  (maybe not even their first incubation review). In the particularly
  unfortunate case of Marconi, it wasn't until the graduation review.
  (More about that in a second.) My best advice to new projects here is
  that you must be like a ferret up the pant-leg of any negative
  feedback. Grab hold of any criticism and don't let go until you have
  either converted the person giving it into your biggest supporter,
  been converted by them, or provoked them to start a competing
  project. (Any of those is a win as far as the community is
  concerned.)
  
  Perhaps we could consider a space like a separate mailing list
  (openstack-future?) reserved just for announcements of Related
  projects, their architectural principles, and discussions of the
  same? They certainly tend to get drowned out amidst the noise of
  openstack-dev. (Project management, meeting announcements, and
  internal project discussion would all be out of scope for this list.)
  
  As for reinventing domain-specific solutions, I'm not sure that
  happens as often as is being made out. IMO the defining feature of
  IaaS that makes the cloud the cloud is on-demand (i.e. real-time)
  self-service. Everything else more or less falls out of that
  requirement, but the very first thing to fall out is multi-tenancy
  and there just aren't that many multi-tenant services floating around
  out there. There are a couple of obvious strategies to deal with
  that: one is to run existing software within a tenant-local resource
  provisioned by OpenStack (Trove and Sahara are examples of this), and
  the other is to wrap a multi-tenancy framework around an existing
  piece of software (Nova and Cinder are examples of this). (BTW the
  former is usually inherently less satisfying, because it scales at a
  much coarser granularity.) The answer to a question of the form:
  
  Why do we need OpenStack project $X, when open source project $Y
  already exists?
  
  is almost always:
  
  Because $Y is not multi-tenant aware; we need to wrap it with a
  multi-tenancy layer with OpenStack-native authentication, metering
  and quota management. That even allows us to set up an abstraction
  layer so that you can substitute $Z as the back end too.
  
  This is completely uncontroversial when you substitute X, Y, Z =
  Nova, libvirt, Xen. However, when you instead substitute X, Y, Z =
  Zaqar/Marconi, Qpid, MongoDB it suddenly becomes *highly*
  controversial. I'm all in favour of a healthy scepticism, but I think
  we've passed that point now. (How would *you* make an AMQP bus

Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Steven Hardy
On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
I am a little bit skeptical about using Swift for this use case because of
its eventual consistency issue. I am not sure a Swift cluster is good to be
used for this kind of problem. Please note that a Swift cluster may give you
old data at some point in time.

This is probably not a major problem, but it's certainly worth considering.

My assumption is that the latency of making the replicas consistent will be
small relative to the timeout for things like SoftwareDeployments, so all
we need is to ensure that instances eventually get the new data, act on
it, and send a signal back to Heat (again, Heat eventually getting it via
Swift will be OK provided the replication delay is small relative to the
stack timeout, which defaults to one hour)
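The tolerance being described amounts to a bounded retry loop: keep re-reading until a replica returns fresh data or the stack timeout expires. A minimal sketch, with every name and default invented here rather than taken from Heat or Swift:

```python
import time

def poll_until_fresh(fetch, is_fresh, timeout=3600, interval=5):
    """Re-read an eventually-consistent object until a replica returns
    the new data, or the (stack-style) timeout expires."""
    deadline = time.time() + timeout
    while True:
        data = fetch()
        if is_fresh(data):
            return data
        if time.time() >= deadline:
            raise RuntimeError("no consistent read before timeout")
        time.sleep(interval)

# Simulate a Swift replica that serves stale data for two reads.
reads = iter(["old", "old", "new"])
result = poll_until_fresh(lambda: next(reads), lambda d: d == "new",
                          timeout=10, interval=0)
print(result)  # -> new
```

This works as long as the replication delay is small relative to the timeout, which is exactly the assumption stated above.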

Steve



Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-27 Thread Ben Nemec
On 08/27/2014 09:31 AM, Sean Dague wrote:
 So this change came in with adding glance.store -
 https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a
 bad direction to be headed.
 
 [full message quoted in the previous post - snipped]
 

+1.  Just say no to namespaces.

The only reason we didn't rip them out of oslo completely is that
renaming libraries used by so many projects is painful in and of itself.
If you can avoid making the namespace mistake up front, that's
absolutely the way to go.

-Ben



[openstack-dev] [QA] Meeting Thursday August 28th at 17:00 UTC

2014-08-27 Thread Matthew Treinish
Hi Everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
this Thursday, August 28th at 17:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones, tomorrow's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT
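(Conversions like these are easy to double-check programmatically; the abbreviation-to-IANA-zone mapping below is my own assumption:)

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Thursday, August 28th at 17:00 UTC.
meeting_utc = datetime(2014, 8, 28, 17, 0, tzinfo=timezone.utc)

# Abbreviation -> IANA zone name (assumed mapping, not from the mail).
zones = {"EDT": "America/New_York", "JST": "Asia/Tokyo",
         "ACST": "Australia/Adelaide", "CEST": "Europe/Paris",
         "CDT": "America/Chicago", "PDT": "America/Los_Angeles"}

times = {abbr: meeting_utc.astimezone(ZoneInfo(tz)).strftime("%H:%M")
         for abbr, tz in zones.items()}

for abbr, local in times.items():
    print("%s %s" % (local, abbr))
```

Note the JST and ACST times fall on Friday local time, since those zones are ahead of UTC.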

-Matt Treinish




Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-27 Thread Flavio Percoco
On 08/27/2014 04:31 PM, Sean Dague wrote:
 So this change came in with adding glance.store -
 https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a
 bad direction to be headed.
 
 [full message quoted in the previous posts - snipped]


Sean,

Thanks for bringing this up.

To be honest, I became familiar with these namespace issues when I
started working on glance.store. That said, I realize how this can be an
issue even with the current workaround.

Unfortunately, it's already quite late in the release and starting the
rename process will delay Glance's migration to glance.store leaving us
with not enough time to test it and make sure things are working as
expected.

With full transparency, I don't have a counter-argument for what you're
saying/proposing. I talked to Doug on IRC and he mentioned this is
something that won't be fixed in py27, so there's not even hope of seeing
it fixed/working soon. Based on that, I'm happy to rename glance.store,
but I'd like us to think of a good way to make this rename happen
without blocking the glance.store work in Juno.

I see 2 ways to make this happen:

1. Do a partial rename and then complete it after the glance migration
is done. If I'm not missing anything, we should be able to do something
like:
- Rename the project internally
- Release a new version with the new name `glancestore`
- Switch glance over to `glancestore`
- Complete the rename process with support from infra

2. Let this patch land, complete Glance's switch-over using namespaces
and then do the rename all together.

Do you have any other suggestion that would help avoiding namespaces
without blocking glance.store?

Thanks,
Flavio


-- 
@flaper87
Flavio Percoco



Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Doug Hellmann

On Aug 26, 2014, at 2:01 PM, Joe Gordon joe.gord...@gmail.com wrote:

 
 
 
 On Wed, Aug 20, 2014 at 2:25 AM, Eoghan Glynn egl...@redhat.com wrote:
 
 
Additional cross-project resources can be ponied up by the large
contributor companies, and existing cross-project resources are not
necessarily divertable on command.
  
   Sure additional cross-project resources can and need to be ponied up, but I
   am doubtful that will be enough.
 
  OK, so what exactly do you suspect wouldn't be enough, for what
  exactly?
 
 
  I am not sure what would be enough to get OpenStack back in a position where
  more developers/users are happier with the current state of affairs. Which
  is why I think we may want to try several things.
 
 
 
   Is it the likely number of such new resources, or the level of domain
   expertise that they can realistically be expected to bring to the
   table, or the period of time to on-board them, or something else?
 
 
  Yes, all of the above.
 
 Hi Joe,
 
 In coming to that conclusion, have you thought about and explicitly
 rejected all of the approaches that have been mooted to mitigate
 those concerns? 
 
 Is there a strong reason why the following non-exhaustive list
 would all be doomed to failure:
 
  * encouraging projects to follow the successful Sahara model,
where one core contributor also made a large contribution to
a cross-project effort (in this case infra, but could be QA
or docs or release management or stable-maint ... etc)
 
[this could be seen as essentially offsetting the cost of
 that additional project drawing from the cross-project well]
 
  * assigning liaisons from each project to *each* of the cross-
project efforts
 
[this could be augmented/accelerated with one of the standard
 on-boarding approaches, such as a designated mentor for the
 liaison or even an immersive period of secondment]
 
  * applying back-pressure via the board representation to make
it more likely that the appropriate number of net-new
cross-project resources are forthcoming
 
[c.f. Stef's we're not amateurs or volunteers mail earlier
 on this thread]
 
 All of these are good ideas and I think we should try them. I am just afraid 
 this won't be enough.
 
 Imagine for a second that the gate is always stable, and none of the 
 existing cross-project efforts are short staffed. OpenStack would still have a 
 pretty poor user experience and return errors in production. Our 'official' 
 CLIs are poor, our logs are cryptic, we have scaling issues (by number of 
 nodes), people are concerned about operational readiness [0], upgrades are 
 very painful, etc. Solving the issue of scaling cross-project efforts is not 
 enough; we still have to solve a whole slew of usability issues. 

These are indeed problems, and AFAICT, we don’t really have a structure in 
place to solve some of them directly. There’s the unified CLI project, which is 
making good progress. The SDK project started this cycle as well. I don’t have 
the impression, though, that either of those has quite the traction we need to 
fully replace the in-project versions, yet. Sean has some notes for making 
logging better, but I don’t think there’s a team working on those changes yet 
either.

The challenge with most of these cross-project initiatives is that they need 
every project to contribute resources, at least in the form of reviews if not 
code, but every project also has its own priorities. Would the situation be 
improved if we had a more formal way for the TC to say to projects, “this cycle 
we need you to dedicate resources to work on X with this cross-project team, 
even if that means deprioritizing something else”, similar to what we’ve done 
recently with the gap analysis?

Doug

 
 [0] http://robhirschfeld.com/2014/08/04/oscon-report/
 
  
 
 I really think we need to do better than dismissing out-of-hand
 the idea of beefing up the cross-project efforts. If it won't
 work for specific reasons, let's get those reasons out onto
 the table and make a data-driven decision on this.
 
  And which cross-project concern do you think is most strained by the
  current set of projects in the integrated release? Is it:
 
  * QA
  * infra
  * release management
  * oslo
  * documentation
  * stable-maint
 
  or something else?
 
 
  Good question.
 
  IMHO QA, Infra and release management are probably the most strained.
 
 OK, well let's brain-storm on how some of those efforts could
 potentially be made more scalable.
 
 Should we for example start to look at release management as a
 program onto itself, with a PTL *and* a group of cores to divide
 and conquer the load?
 
 (the hands-on rel mgmt for the juno-2 milestone, for example, was
  delegated - is there a good reason why such delegation wouldn't
  work as a matter of course?)
 
 Should QA programs such as grenade be actively seeking new cores to
 spread the workload?
 
 (until recently, this had the 

Re: [openstack-dev] [Heat][Docker] How to Dockerize your applications with OpenStack Heat in simple steps

2014-08-27 Thread Eric Windisch
On Tue, Aug 26, 2014 at 3:35 PM, Martinx - ジェームズ thiagocmarti...@gmail.com
wrote:

 Hey Stackers! Wait!   =)

 Let me ask something...

 Why are you guys using Docker within a VM?!?! What is the point of doing
 such thing?!

 I thought Docker was here to entirely replace the virtualization layer,
 bringing a bare metal-cloud, am I right?!



Although this is getting somewhat off-topic, it's something the
containers service seeks to support... so perhaps it's a worthy discussion.
It's also a surprisingly common sentiment, so I'd like to address it:

The advantages of Docker are not simply as a lightweight alternative to
virtualization, but to provide portability, transport, and process-level
isolation across hosts (physical, or not). All of those advantages are seen
with virtualization just as well as without. Ostensibly, those that seek to
use Docker to replace virtualization are those that never needed
virtualization in the first place, but needed better systems management
tools. That's fine, because Docker seeks to be that better management tool.

Still, there are plenty of valid reasons for virtualization, including, but not
least, the need for multi-tenant isolation, where using virtual or physical
machine boundaries to provide isolation between tenants is highly
advisable. The combined use of Docker with VMs is an important part of the
Docker security story.

Years ago, I had founded an IaaS service. We had been running a PaaS-like
service and had been trying to move customers to IaaS. We were offloading
the problem of maintaining various application stacks. We had just gone
through the MVC framework hype-cycle and were tired of trying to pick
winners to provide support for. Instead, we wanted to simply provide the
hardware, the architecture, and let customers run their own software. It
was great in practice, but users didn't want to do Ops. They wanted to do
Dev. A very small minority ran Puppet or CFEngine. Ultimately, we found
there was a large gap between users that knew what to do with a server and
those that knew how to build applications. What we needed then was Docker.

Providing hardware, physical or virtual, isn't enough.  A barrier of entry
exists. DevOps works for some, but it's a culture and one that requires
tooling; tooling which often wedges the divide that DevOps seeks to bridge.
That might be fine for a San Francisco infrastructure startup or the
mega-corp, but it's not fine for the sort of users that go to Heroku or
Engineyard. As an industry, we cannot tell new Django developers they must
also learn Chef if they wish to deploy their applications to the cloud. We
also shouldn't teach them to build to some specific, proprietary PaaS.
Lowering the barrier of entry and leveling the field helps everyone, even
those that have always paid the previous price of admission.

What I'm really saying is that much of the value that Docker adds to the
ecosystem has little to do with performance, and performance is the primary
reason for moving away from virtualization. Deciding to trade security for
performance is a decision users might wish to make, but that's only
indicative of the flexibility that Docker offers, not a requirement.


[openstack-dev] [openstack-sdk-php] Canceled Meeting and Future of weekly meetings

2014-08-27 Thread Matthew Farina
First I'd like to note that the weekly PHP SDK meeting this week is
canceled.

For the time being, unless someone has a good argument to the contrary, the
meeting will be suspended. Those of us working on PHP can be found in the
#openstack-sdks room in IRC.

The meetings have stopped being useful at this point, with few or no regular
attendees. When we have a release and activity from others around the
project picks up I expect to restart the meetings.

If someone believes we should keep the meetings going I'm open to hearing
the case. Please respond and share your stance.

- Matt


[openstack-dev] [Tripleo] Release report

2014-08-27 Thread mar...@redhat.com
1. os-apply-config: no changes, 0.1.19

2. os-refresh-config:   no changes, 0.1.7

3. os-collect-config:   no changes, 0.1.27

4. os-cloud-config: release: 0.1.6 -- 0.1.7
-- https://pypi.python.org/pypi/os-cloud-config/0.1.7
--
http://tarballs.openstack.org/os-cloud-config/os-cloud-config-0.1.7.tar.gz

5. diskimage-builder:   release: 0.1.27 -- 0.1.28
-- https://pypi.python.org/pypi/diskimage-builder/0.1.28
--
http://tarballs.openstack.org/diskimage-builder/diskimage-builder-0.1.28.tar.gz

6. dib-utils:   release: 0.0.4 -- 0.0.5
-- https://pypi.python.org/pypi/dib-utils/0.0.5
-- http://tarballs.openstack.org/dib-utils/dib-utils-0.0.5.tar.gz

7. tripleo-heat-templates:  release: 0.7.3 -- 0.7.4
-- https://pypi.python.org/pypi/tripleo-heat-templates/0.7.4
--
http://tarballs.openstack.org/tripleo-heat-templates/tripleo-heat-templates-0.7.4.tar.gz

8: tripleo-image-elements:  release: 0.8.3 -- 0.8.4
-- https://pypi.python.org/pypi/tripleo-image-elements/0.8.4
--
http://tarballs.openstack.org/tripleo-image-elements/tripleo-image-elements-0.8.4.tar.gz

9: tuskar:  release 0.4.8 -- 0.4.9
-- https://pypi.python.org/pypi/tuskar/0.4.9
-- http://tarballs.openstack.org/tuskar/tuskar-0.4.9.tar.gz

10. python-tuskarclient:release 0.1.8 -- 0.1.9
-- https://pypi.python.org/pypi/python-tuskarclient/0.1.9
--
http://tarballs.openstack.org/python-tuskarclient/python-tuskarclient-0.1.9.tar.gz



Re: [openstack-dev] [oslo] usage patterns for oslo.config

2014-08-27 Thread Brant Knudson
Mark -


 I don't think I've seen code (except for obscure cases) which uses the
 CONF global directly (as opposed to being passed CONF as a parameter)
 but doesn't register the options at import time.

 Mark.


Keystone uses the CONF global directly and doesn't register the options at
import time. They're registered early when keystone is started, just before
the call to CONF().

Here's an example use of the CONF global:
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/identity/controllers.py#n48

Here's the function that registers the CONF options:
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n820

Here's the call to the function that registers the CONF options:
http://git.openstack.org/cgit/openstack/keystone/tree/bin/keystone-all#n115

This was done deliberately, so that any code that tries to read the value of a
config option at import time fails fast.
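The pattern Brant describes can be sketched with a tiny stand-in class. This is deliberately *not* the real oslo.config API -- just an illustration of why deferring register_opt() to startup makes import-time reads fail loudly:

```python
class FakeConfigOpts:
    """Minimal stand-in for oslo.config's ConfigOpts -- NOT the real
    API, just an illustration of the register-before-read pattern."""

    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        self._opts[name] = default

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails, i.e. for
        # config options; unregistered options raise immediately.
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError('option %r is not registered' % name)


CONF = FakeConfigOpts()

# An import-time read fails fast because nothing is registered yet.
try:
    CONF.admin_token
    failed = False
except AttributeError:
    failed = True

# Startup code (keystone-all, in Brant's links) registers the options
# just before CONF() parses config files; later reads then succeed.
CONF.register_opt('admin_token', default='ADMIN')
assert failed and CONF.admin_token == 'ADMIN'
```

The option name `admin_token` here is illustrative, not tied to Keystone's actual option list.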

- Brant


Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 8:51 AM, Thierry Carrez thie...@openstack.org wrote:

 Hi everyone,
 
 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:
 
 Day 1. Cross-project sessions / incubated projects / other projects
 
 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.

If anything, I’d like to have fewer cross-project tracks running 
simultaneously. Depending on which are proposed, maybe we can make that happen. 
On the other hand, cross-project issues are a big theme right now so maybe we 
should consider devoting more than a day to dealing with them.

 
 Day 2 and Day 3. Scheduled sessions for various programs
 
 That's our traditional scheduled space. We'll have 33% fewer slots
 available. So, rather than trying to cover the full scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.

The message I’m getting from this change in available space is that we need to 
start thinking about and writing up ideas early, so teams can figure out which 
upcoming specs need more discussion and which don’t.

 
 Day 4. Contributors meetups
 
 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.

This is a good compromise between needing to allow folks to move around between 
tracks (including speaking at the conference) and having a large block of 
unstructured time for deep dives.

 
 
 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.
 
 There are still details to work out (is it possible to split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.
 
 Cheers,
 
 -- 
 Thierry Carrez (ttx)
 




Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Zane Bitter

On 27/08/14 11:04, Steven Hardy wrote:

On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:

I am a little bit skeptical about using Swift for this use case because of
its eventual consistency issue. I am not sure a Swift cluster is good to be
used for this kind of problem. Please note that a Swift cluster may give you
old data at some point in time.


This is probably not a major problem, but it's certainly worth considering.

My assumption is that the latency of making the replicas consistent will be
small relative to the timeout for things like SoftwareDeployments, so all
we need is to ensure that instances eventually get the new data, act on


That part is fine, but if they get the new data and then later get the 
old data back again... that would not be so good.



it, and send a signal back to Heat (again, Heat eventually getting it via
Swift will be OK provided the replication delay is small relative to the
stack timeout, which defaults to one hour)

Steve
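Steve's timing argument -- that replication delay only needs to be small relative to the stack timeout -- amounts to a poll loop like this sketch (names, defaults, and the fetch callback are illustrative, not Heat code):

```python
import time


def wait_for_signal(fetch, timeout=3600, interval=30):
    # Poll an eventually-consistent store until the expected data shows
    # up or the (stack) timeout expires; fetch() returns None until the
    # replicas have converged on the new data.
    deadline = time.time() + timeout
    while time.time() < deadline:
        data = fetch()
        if data is not None:
            return data
        time.sleep(interval)
    raise TimeoutError('no signal before stack timeout')
```

As long as convergence takes seconds and the timeout is an hour, the loop succeeds; Zane's concern is the separate case where a later poll regresses to old data.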







Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 10:31 AM, Sean Dague s...@dague.net wrote:

 So this change came in with adding glance.store -
 https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a
 bad direction to be headed.
 
 Here is the problem when it comes to working with code from git, in
 python, that uses namespaces, it's kind of a hack that violates the
 principle of least surprise.
 
 For instance:
 
 cd /opt/stack/oslo.vmware
 pip install .
 cd /opt/stack/oslo.config
 pip install -e .
 python -m oslo.vmware
 /usr/bin/python: No module named oslo.vmware
 
 In python 2.7 (using pip) namespaces are a bolt on because of the way
 importing modules works. And depending on how you install things in a
 namespace will overwrite the base __init__.py for the top level part of
 the namespace in such a way that you can't get access to the submodules.
 
 It's well known, and every conversation with dstuft that I've had in the
 past was “don't use namespaces”.

I’ve been using namespace packages on and off for 10+ years, and OpenStack is 
the first project where I’ve encountered any issues. That doesn’t necessarily 
mean we shouldn’t change, but it’s also not fair to paint them as completely 
broken. Many projects continue to use them successfully.

 
 A big reason we see this a lot is due to the fact that devstack does
 'editable' pip installs for most things, because the point is it's a
 development environment, and you should be able to change code, and see
 it live without having to go through the install step again.
 
 If people remember the constant issues with oslo.config in unit tests 9
 months ago, this was because of mismatch of editable vs. non editable
 libraries in the system and virtualenvs. This took months to get to a
 consistent workaround.
 
 The *workaround* that was done is we just gave up on installing oslo
 libs in a development friendly way. I don't consider that a solution,
 it's a workaround. But it has some big implications for the usefulness
 of the development environment. It also definitely violates the
 principle of least surprise, as changes to oslo.messaging in a devstack
 env don't immediately apply, you have to reinstall oslo.messaging to get
 them to take.

We did make that change, but IIRC we also found and fixed some dependency 
management issues that were causing non-editable versions of libraries to be 
installed even though there was already another version present on the system. 
I thought at the time that not installing the oslo libs editable was a “belt 
and suspenders” change, rather than the final fix.

 
 If this is just oslo, that's one thing (and still something I think
 should be revisited, because when the maintainer of pip says don't do
 this I'm inclined to go by that). But this change aims to start bringing
 this pattern into other projects. Realistically I'm quite concerned that
 this will trigger more workarounds and confusion.

The reason we decided to leave oslo packages in the oslo namespace was due to 
the pain of renaming the existing libs and repackaging them in the distros. It 
might be possible to build a package that contains both oslo.config and 
oslo_config, though, so I’ll spend some time investigating that. If we can make 
that work, the Oslo team can look into applying that change to all of our libs 
in the next cycle. If not, we’ll need to get some feedback from our packagers 
about how much pain renaming things is going to cause for upgrades before we 
make any changes.

 
 It also means, for instance, that once we are in a namespace we can
 never decide to install some of the namespace from pypi and some of it
 from git editable (because it's a part that's under more interesting
 rapid development).
 
 So I'd like us to revisit using a namespace for glance, and honestly,
 for other places in OpenStack, because these kinds of violations of the
 principle of least surprise is something that I'd like us to be actively
 minimizing.
 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 




[openstack-dev] [zaqar] [marconi] Removing GET message by ID in v1.1 (Redux)

2014-08-27 Thread Kurt Griffiths
Crew, as we continue implementing v1.1 in anticipation of a “public preview” 
at the summit, I’ve started to wonder again about removing the ability to GET a 
message by ID from the API. Previously, I was concerned that it may be too 
disruptive a change and should wait for 2.0. But consider this: in order to GET 
a message by ID you already have to have either listed or claimed that message, 
in which case you already have the message. Therefore, this operation would 
appear to have no practical purpose, and so probably won’t be missed by users 
if we remove it.

Am I missing something? What does everyone think about removing getting 
messages by ID in v1.1?

--Kurt


Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Ryan Brown
Swift does have some guarantees around read-after-write consistency, but
for Heat I think the best bet would be the X-Newest[1] header which has
been in swift for a very, very long time. The downside here is that
(IIUC) it queries all storage nodes for that object. It does not provide
a hard guarantee[2] but does at least try *harder* to get the most
recent version.

We could also (assuming it was turned on) use object versioning to
ensure that the most up to date version of the metadata was used, but I
think X-Newest is the way to go.

[1]: https://lists.launchpad.net/openstack/msg06846.html
[2]:
https://ask.openstack.org/en/question/26403/does-x-newest-apply-to-getting-container-lists-and-object-lists-also-dlo/
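For illustration, setting the header on a plain HTTP GET might look like the sketch below. The URL and token are placeholders, and recent versions of python-swiftclient can pass the same per-request headers as well:

```python
from urllib.request import Request


def newest_object_request(object_url, token):
    # Build a GET for a Swift object with X-Newest set; the proxy then
    # queries all replicas and returns the most recently written copy,
    # at the cost of extra backend requests. Pass the Request to
    # urllib.request.urlopen() to execute it.
    req = Request(object_url)
    req.add_header('X-Auth-Token', token)
    req.add_header('X-Newest', 'true')
    return req
```
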

On 08/27/2014 11:41 AM, Zane Bitter wrote:
 On 27/08/14 11:04, Steven Hardy wrote:
 On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
 I am a little bit skeptical about using Swift for this use case
 because of
 its eventual consistency issue. I am not sure a Swift cluster is
 good to be
 used for this kind of problem. Please note that a Swift cluster may
 give you
 old data at some point in time.

 This is probably not a major problem, but it's certainly worth
 considering.

 My assumption is that the latency of making the replicas consistent
 will be
 small relative to the timeout for things like SoftwareDeployments, so all
 we need is to ensure that instances eventually get the new data, act on
 
 That part is fine, but if they get the new data and then later get the
 old data back again... that would not be so good.
 
 it, and send a signal back to Heat (again, Heat eventually getting it via
 Swift will be OK provided the replication delay is small relative to the
 stack timeout, which defaults to one hour)

 Steve


 
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 11:14 AM, Flavio Percoco fla...@redhat.com wrote:

 On 08/27/2014 04:31 PM, Sean Dague wrote:
 So this change came in with adding glance.store -
 https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a
 bad direction to be headed.
 
 Here is the problem when it comes to working with code from git, in
 python, that uses namespaces, it's kind of a hack that violates the
 principle of least surprise.
 
 For instance:
 
 cd /opt/stack/oslo.vmware
 pip install .
 cd /opt/stack/oslo.config
 pip install -e .
 python -m oslo.vmware
 /usr/bin/python: No module named oslo.vmware
 
 In python 2.7 (using pip) namespaces are a bolt on because of the way
 importing modules works. And depending on how you install things in a
 namespace will overwrite the base __init__.py for the top level part of
 the namespace in such a way that you can't get access to the submodules.
 
 It's well known, and every conversation with dstuft that I've had in the
 past was “don't use namespaces”.
 
 A big reason we see this a lot is due to the fact that devstack does
 'editable' pip installs for most things, because the point is it's a
 development environment, and you should be able to change code, and see
 it live without having to go through the install step again.
 
 If people remember the constant issues with oslo.config in unit tests 9
 months ago, this was because of mismatch of editable vs. non editable
 libraries in the system and virtualenvs. This took months to get to a
 consistent workaround.
 
 The *workaround* that was done is we just gave up on installing oslo
 libs in a development friendly way. I don't consider that a solution,
 it's a workaround. But it has some big implications for the usefulness
 of the development environment. It also definitely violates the
 principle of least surprise, as changes to oslo.messaging in a devstack
 env don't immediately apply, you have to reinstall oslo.messaging to get
 them to take.
 
 If this is just oslo, that's one thing (and still something I think
 should be revisited, because when the maintainer of pip says don't do
 this I'm inclined to go by that). But this change aims to start bringing
 this pattern into other projects. Realistically I'm quite concerned that
 this will trigger more workarounds and confusion.
 
 It also means, for instance, that once we are in a namespace we can
 never decide to install some of the namespace from pypi and some of it
 from git editable (because it's a part that's under more interesting
 rapid development).
 
 So I'd like us to revisit using a namespace for glance, and honestly,
 for other places in OpenStack, because these kinds of violations of the
 principle of least surprise is something that I'd like us to be actively
 minimizing.
 
 
 Sean,
 
 Thanks for bringing this up.
 
 To be honest, I became familiar with these namespace issues when I
 started working on glance.store. That said, I realize how this can be an
 issue even with the current workaround.
 
 Unfortunately, it's already quite late in the release and starting the
 rename process will delay Glance's migration to glance.store leaving us
 with not enough time to test it and make sure things are working as
 expected.
 
 With full transparency, I don't have a counter-argument for what you're
 saying/proposing. I talked to Doug on IRC and he mentioned this is
 something that won't be fixed in py27 so there's not even hope on seeing
 it fixed/working soon. Based on that, I'm happy to rename glance.store
 but, I'd like us to think in a good way to make this rename happen
 without blocking the glance.store work in Juno.

When you asked me about namespace packages, I thought using them was fine 
because we had worked out how to do it for Oslo and the same approach worked 
for you in glance. I didn’t realize that was still considered an unresolved 
issue, so I apologize that my mistake has ended up causing you more work, 
Flavio.

 
 I've 2 ways to make this happen:
 
 1. Do a partial rename and then complete it after the glance migration
 is done. If I'm not missing anything, we should be able to do something
 like:
   - Rename the project internally
   - Release a new version with the new name `glancestore`
   - Switch glance over to `glancestore`
   - Complete the rename process with support from infra

This seems like the best approach. If nothing is using the library now, all of 
the name changes would need to happen within the library. You can release 
“glancestore” from a git repo called “glance.store” and we can rename that repo 
somewhere down the line, so you shouldn’t be blocked on the infra team (who I 
expect are going to be really busy keeping an eye on the gate as we get close 
to feature freeze).

Was there anything else you were worried about being blocked on by the rename?

We also need to figure out how to avoid problems like this in the future. We 
can’t expect everyone to be intimately familiar with all of the work 

Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-27 Thread Donald Stufft

 On Aug 27, 2014, at 11:45 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 On Aug 27, 2014, at 10:31 AM, Sean Dague s...@dague.net 
 mailto:s...@dague.net wrote:
 
 So this change came in with adding glance.store -
 https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a
 bad direction to be headed.
 
 Here is the problem when it comes to working with code from git, in
 python, that uses namespaces, it's kind of a hack that violates the
 principle of least surprise.
 
 For instance:
 
 cd /opt/stack/oslo.vmware
 pip install .
  cd /opt/stack/oslo.config
  pip install -e .
  python -m oslo.vmware
  /usr/bin/python: No module named oslo.vmware
 
 In python 2.7 (using pip) namespaces are a bolt on because of the way
 importing modules works. And depending on how you install things in a
 namespace will overwrite the base __init__.py for the top level part of
 the namespace in such a way that you can't get access to the submodules.
 
 It's well known, and every conversation with dstuft that I've had in the
  past was “don't use namespaces”.
 
 I’ve been using namespace packages on and off for 10+ years, and OpenStack is 
 the first project where I’ve encountered any issues. That doesn’t necessarily 
 mean we shouldn’t change, but it’s also not fair to paint them as completely 
 broken. Many projects continue to use them successfully.

Just for the record, there are at least 3 different ways of installing a 
package using pip (under the covers), and there are two different ways for 
pip to tell setuptools to handle the namespace packages. Unfortunately both 
ways of namespace package handling only work on 2/3 of the ways to install 
things. Unfortunately there's not much that can be done about this; it’s a 
fundamental flaw in the way setuptools namespace packages work.

The changes in Python 3 to enable real namespace packages should work without 
those problems, but you know, Python 3 only.

Generally it’s my opinion that ``import foo_bar`` isn’t particularly better 
or worse than ``import foo.bar``. The only real benefit is being able to 
iterate over ``foo.*``, however I’d just recommend using entry points instead 
of trying to do magic based on the name.
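Donald's entry-points suggestion looks roughly like this in the setuptools API of the era. The group name is hypothetical, not an actual glance.store entry-point group:

```python
import pkg_resources


def load_drivers(group='glance.store.drivers'):
    # Discover plugins through setuptools entry points instead of
    # iterating over a namespace package; each matching distribution
    # advertises its drivers in its setup.py entry_points stanza.
    return {ep.name: ep.load()
            for ep in pkg_resources.iter_entry_points(group)}
```

An unknown group simply yields no entry points, so discovery degrades gracefully rather than depending on install order the way namespace packages do.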

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



[openstack-dev] [oslo] let's start preparing for the kilo summit

2014-08-27 Thread Doug Hellmann
Based on Thierry’s comment that we’ll have less space/time for summit sessions, 
I want to make sure we start thinking about topics that need to be addressed as 
a team earlier than we might usually.

I created an etherpad for us to start sketching out ideas: 
https://etherpad.openstack.org/p/kilo-oslo-summit-topics

There are some instructions at the top of the page, please read those.

You don’t need to write up a full spec for proposals, yet, but when the time 
comes to make the hard decisions any proposals with a written spec are likely 
to be given priority.

Doug




Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Flavio Percoco
On 08/27/2014 03:26 PM, Sean Dague wrote:
 On 08/27/2014 08:51 AM, Thierry Carrez wrote:
 Hi everyone,

 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:

 Day 1. Cross-project sessions / incubated projects / other projects

 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.

 Day 2 and Day 3. Scheduled sessions for various programs

 That's our traditional scheduled space. We'll have 33% fewer slots
 available. So, rather than trying to cover the full scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.

 Day 4. Contributors meetups

 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.


 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.

 There are still details to work out (is it possible to split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.
 
 I definitely like this approach. I think it will be really interesting
 to collect feedback from people about the value they got from days 2  3
 vs. Day 4.
 
 I also wonder if we should lose a slot from days 1 - 3 and expand the
 hallway time. The hallway track is always pretty interesting, and honestly
 a lot of interesting ideas spring up there. The 10-minute transitions often
 feel like you are rushing between places too quickly.

+1

Last summit, it was basically impossible to do any hallway talking and
even meet some folks face-to-face.

Other than that, I think the proposal is great and makes sense to me.

Flavio

-- 
@flaper87
Flavio Percoco



[openstack-dev] [oslo] review priorities

2014-08-27 Thread Doug Hellmann
I *believe* we have started work on all of the graduations we are likely to 
complete during Juno. I suggest we start focusing our efforts on reviews 
related to the other outstanding blueprints and critical bugs. I see a lot of 
open patches for taskflow, for example.

Doug




Re: [openstack-dev] [Keystone][Marconi][Heat] Creating accounts in Keystone

2014-08-27 Thread Kurt Griffiths
On 8/25/14, 9:50 AM, Ryan Brown rybr...@redhat.com wrote:

I'm actually quite partial to roles because, in my experience, service
accounts rarely have their credentials rotated more than once per eon.
Having the ability to let instances grab tokens would certainly help
Heat, especially if we start using Zaqar (the artist formerly known as
marconi).


According to the AWS docs, IAM Roles allow you to “Define which API actions
and resources the application can use after assuming the role.” What would
it take to implement this in OpenStack? Currently, Keystone roles seem to
be more oriented toward cloud operators, not end users. This quote from
the Keystone docs[1] is telling:

If you wish to restrict users from performing operations in, say,
the Compute service, you need to create a role in the Identity
Service and then modify /etc/nova/policy.json so that this role is
required for Compute operations.

On 8/25/14, 9:49 AM, Zane Bitter zbit...@redhat.com wrote:

In particular, even if a service like Zaqar or Heat implements their own
authorisation (e.g. the user creating a Zaqar queue supplies lists of
the accounts that are allowed to read or write to it, respectively), how
does the user ensure that the service accounts they create will not have
access to other OpenStack APIs? IIRC the default policy.json files
supplied by the various projects allow non-admin operations from any
account with a role in the project.


It seems like end users need to be able to define custom roles and
policies.

Some example use cases for the sake of discussion:

1. App developer sends a request to Zaqar to create a queue named
   “customer-orders”
2. Zaqar creates a queue named “customer-orders”
3. App developer sends a request to Keystone to create a role, role-x,
   for App Component X
4. Keystone creates role-x
5. App developer sends requests to Keystone to create a service user,
   “user-x” and associate it with role-x
6. Keystone creates user-x and gives it role-x
7. App developer sends a request to Zaqar to create a policy,
   “customer-orders-observer”, and associate that policy with role-x. The
   policy only allows GETing (listing) messages from the customer-orders
   queue
8. Zaqar creates customer-orders-observer and notes that it is associated
   with role-x

Later on...

1. App Component X sends a request to Zaqar, including an auth token
2. Zaqar sends a request to Keystone asking for roles associated with the
   given token
3. Keystone returns one or more roles, including role-x
4. Zaqar checks for any user-defined policies associated with the roles,
   including role-x, and finds customer-orders-observer
5. Zaqar verifies that the requested operation is allowed according to
   customer-orders-observer
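The check in steps 4-5 could be sketched as below. All names come from the hypothetical walkthrough above; this is not Zaqar or Keystone code:

```python
# User-defined policies keyed by name, each binding a set of roles to a
# set of (HTTP method, queue) operations they may perform.
POLICIES = {
    'customer-orders-observer': {
        'roles': {'role-x'},
        'allowed': {('GET', 'customer-orders')},
    },
}


def is_allowed(token_roles, method, queue):
    # Find user-defined policies attached to any of the caller's roles
    # and verify the requested operation against them (deny by default).
    for policy in POLICIES.values():
        if policy['roles'] & set(token_roles):
            if (method, queue) in policy['allowed']:
                return True
    return False
```
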

We should also compare and contrast this with signed URLs à la Swift’s
tempurl. For example, service accounts do not have to be created or
managed in the case of tempurl.
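For comparison, tempurl needs no service accounts or roles at all -- access is granted by an HMAC signature over the method, expiry, and path. A sketch of the well-known signing scheme (the path and key values are illustrative):

```python
import hmac
import time
from hashlib import sha1


def make_temp_url(path, key, method='GET', ttl=3600):
    # Swift tempurl signing: HMAC-SHA1 over "METHOD\nEXPIRES\nPATH",
    # keyed with the account's X-Account-Meta-Temp-URL-Key secret.
    expires = int(time.time()) + ttl
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)
```

The holder of the resulting URL can perform exactly one method on exactly one object until it expires, with no identity of its own.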

--Kurt

[1]: http://goo.gl/5UBMwR [http://docs.openstack.org]



Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 11:17 AM, Chris Dent chd...@redhat.com wrote:

 On Wed, 27 Aug 2014, Doug Hellmann wrote:
 
 For example, Matt helped me with an issue yesterday, and afterwards
 I asked him to write up a few details about how he reached his
 conclusion because he was moving fast enough that I wasn’t
 actually learning anything from what he was saying to me on IRC.
 Having an example with some logs and then even stream of
 consciousness notes like “I noticed the out of memory error, and
 then I found the first instance of that and looked at the oom-killer
 report in syslog to see which process was killed and it was X which
 might mean Y” would help.
 
 +many
 
 I'd _love_ to be more capable at gate debugging.
 
 That said, it does get easier just by doing it. The first many times
 is like beating my head against the wall, especially the constant
 sense of where am I and where do I need to go.

I definitely know the feeling. I don’t expect to become an expert, but given my 
focus on turning out libraries for Oslo it’s hard to find time to “practice” 
enough to get past the frustrated phase. If I had even some hints to look at, 
that would help me, and I’m sure others.

I have found it immensely helpful, for example, to have a written set of the 
steps involved in creating a new library, from importing the git repo all the 
way through to making it available to other projects. Without those 
instructions, it would have been much harder to split up the work. The team 
would have had to train each other by word of mouth, and we would have had 
constant issues with inconsistent approaches triggering different failures. The 
time we spent building and verifying the instructions has paid off to the 
extent that we even had one developer not on the core team handle a graduation 
for us.

Doug

 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent




Re: [openstack-dev] [all] [glance] python namespaces considered harmful to development, lets not introduce more of them

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 11:55 AM, Donald Stufft don...@stufft.io wrote:

 
 On Aug 27, 2014, at 11:45 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 
 On Aug 27, 2014, at 10:31 AM, Sean Dague s...@dague.net wrote:
 
 So this change came in with adding glance.store -
 https://review.openstack.org/#/c/115265/5/lib/glance, which I think is a
 bad direction to be headed.
 
 Here is the problem: when working with code from git, in python,
 namespaces are kind of a hack that violates the principle of least
 surprise.
 
 For instance:
 
 cd /opt/stack/oslo.vmware
 pip install .
 cd /opt/stack/oslo.config
 pip install -e .
 python -m oslo.vmware
 /usr/bin/python: No module named oslo.vmware
 
 In python 2.7 (using pip) namespaces are a bolt-on because of the way
 module importing works. Depending on how you install things in a
 namespace, pip will overwrite the base __init__.py for the top level part
 of the namespace in such a way that you can't get access to the submodules.
 
 It's well known, and every conversation with dstuft that I've had in the
 past was “don't use namespaces”.
 
 I’ve been using namespace packages on and off for 10+ years, and OpenStack 
 is the first project where I’ve encountered any issues. That doesn’t 
 necessarily mean we shouldn’t change, but it’s also not fair to paint them 
 as completely broken. Many projects continue to use them successfully.
 
 Just for the record, there are at least 3 different ways of installing a
 package using pip (under the covers), and there are two different ways
 for pip to tell setuptools to handle namespace packages. Unfortunately,
 each way of handling namespace packages only works with 2/3 of the ways
 to install things, and there's not much that can be done about this;
 it's a fundamental flaw in the way setuptools namespace packages work.
 
 The changes in Python 3 to enable real namespace packages should work without 
 those problems, but you know, Python 3 only.
 
 Generally it’s my opinion that ``import foo_bar`` isn’t particularly any 
 better or worse than ``import foo.bar``. The only real benefit is being able 
 to iterate over ``foo.*``, however I’d just recommend using entry points 
 instead of trying to do magic based on the name.
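Donald's entry-point suggestion can be sketched as follows. This is illustrative only: the `oslo.example.drivers` group name is made up, and `pkg_resources` (which ships with setuptools) is just one way to enumerate entry points.

```python
# Sketch of discovering plugins via setuptools entry points rather than
# iterating over foo.* module names. The 'oslo.example.drivers' group is a
# made-up illustration, not a real entry-point group.
import pkg_resources


def load_drivers(group='oslo.example.drivers'):
    """Map entry point names to their loaded objects for a given group."""
    return {ep.name: ep.load()
            for ep in pkg_resources.iter_entry_points(group=group)}
```

A package opts in by declaring entry points in its setup.cfg/setup.py; consumers then discover plugins by group name, with no dependency on the package's module layout.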

Yeah, we’re not doing anything like that AFAIK, so that shouldn’t be a problem. 
I’m worried about ensuring that upgrades work, ensuring new versions of the 
existing libs don’t break stable releases, and not having to handle a lot of 
back ports to separate libraries for things that could otherwise be semver 
bumps. I’ll spend some time testing to see if we can create a shim layer with 
the namespace package to avoid some of those issues.

Doug

 
 ---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [oslo] review priorities

2014-08-27 Thread Julien Danjou
On Wed, Aug 27 2014, Doug Hellmann wrote:

 I *believe* we have started work on all of the graduations we are likely to
 complete during Juno. I suggest we start focusing our efforts on reviews
 related to the other outstanding blueprints and critical bugs. I see a lot
 of open patches for taskflow, for example.

If there's an easy way to grab this list of reviews (magic link?) that'd
be awesome. :)

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info




Re: [openstack-dev] [neutron] Incubator concerns from packaging perspective

2014-08-27 Thread Stefano Maffulli
On 08/21/2014 03:12 AM, Ihar Hrachyshka wrote:
 I wonder where discussion around the proposal is running. Is it public?

Yes, it's public, and this thread is part of it. Look at the dates on
the wiki: this is a recent proposal (first appearance Aug 11). It came
out to address the GBP issue and was quickly iterated on over a couple
of neutron IRC meetings and quick phone calls to get early feedback from
the GBP team, Octavia, and a few others.

 Though the way incubator is currently described in that proposal on
 the wiki doesn't clearly imply similar benefits for the project, hence
 concerns.

The rationale for the separate repository is that the *existing* Neutron
codebase needs a lot of love before new features can be added (and before
the core team can accept more responsibility for it). But progress is
unstoppable: new features are proposed every cycle, and reviewers'
bandwidth is not infinite.

That's the gist of 'Mission' and 'Why a Seperate Repo?' on
https://wiki.openstack.org/wiki/Network/Incubator

 Of course, we should raise the bar for all the code - already merged,
 in review, and in incubator. I just think there is no reason to make
 those requirements different from general acceptance requirements (do
 we have those formally defined?).

Yes, there is a reason to request higher standards for any new code; why
wouldn't there be? If the legacy code is struggling to improve its test
coverage, that is a very good reason not to accept more debt.

I'm not sure whether (or where) it's spelled out, but I believe it's an
accepted and shared best practice among core reviewers not to merge code
without tests.

/stef

-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [zaqar] [marconi] Removing GET message by ID in v1.1 (Redux)

2014-08-27 Thread Nataliia Uvarova
I don't support the idea of removing this endpoint, although it
requires some effort to maintain.


First of all, because of the confusion it could bring among users. The
href to a message is returned in many cases, and has been seen as the
canonical way to work with it. (As far as I understand, we encourage
users to follow the links we provide rather than ids, etc.) If you could
only send DELETE requests to this URL and not GET, that would not look
good.


Second, we already have the ability to get a set of messages by id via
the /messages?ids=ids endpoint. By removing the ability to get a message
the normal way, we could unintentionally push users toward this hacky
approach for fetching a single message. The cost of supporting both
endpoints is not much higher than supporting only one of them.


As for me, the changes in v1.1 are mostly cosmetic in the queues/messages
part, and it would be better to make this decision in v2, where we will
have a better understanding of what is needed and what is not.


These are only my thoughts; I'm not very experienced with the Zaqar API
yet.

On 08/27/2014 05:48 PM, Kurt Griffiths wrote:
Crew, as we continue implementing v1.1 in anticipation for a “public 
preview” at the summit, I’ve started to wonder again about removing 
the ability to GET a message by ID from the API. Previously, I was 
concerned that it may be too disruptive a change and should wait for 
2.0. But consider this: in order to GET a message by ID you already 
have to have either listed or claimed that message, in which case you 
already have the message. Therefore, this operation would appear to 
have no practical purpose, and so probably won’t be missed by users if 
we remove it.


Am I missing something? What does everyone think about removing 
getting messages by ID in v1.1?


--Kurt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [oslo] review priorities

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 12:29 PM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Aug 27 2014, Doug Hellmann wrote:
 
 I *believe* we have started work on all of the graduations we are likely to
 complete during Juno. I suggest we start focusing our efforts on reviews
 related to the other outstanding blueprints and critical bugs. I see a lot
 of open patches for taskflow, for example.
 
 If there's an easy way to grab this list of reviews (magic link?) that'd
 be awesome. :)

I keep hearing that this is a problem storyboard will solve for us. :-)

In the mean time, I go to https://launchpad.net/oslo/+milestone/juno-3 and 
click through from each blueprint or bug listed there.

Bugs that aren’t yet targeted are visible on https://bugs.launchpad.net/oslo of 
course.

Doug

 
 -- 
 Julien Danjou
 // Free Software hacker
 // http://julien.danjou.info




Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-08-27 08:41:29 -0700:
 On 27/08/14 11:04, Steven Hardy wrote:
  On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
  I am a little bit skeptical about using Swift for this use case
  because of its eventual consistency issue. I am not sure a Swift
  cluster is good to be used for this kind of problem. Please note that
  a Swift cluster may give you old data at some point in time.
 
  This is probably not a major problem, but it's certainly worth considering.
 
  My assumption is that the latency of making the replicas consistent will be
  small relative to the timeout for things like SoftwareDeployments, so all
  we need is to ensure that instances  eventually get the new data, act on
 
 That part is fine, but if they get the new data and then later get the 
 old data back again... that would not be so good.
 

Agreed, and I had not considered that this can happen.

There is a not-so-simple answer though:

* Heat inserts this as initial metadata:

{"metadata": {}, "update-url": "xx", "version": 0}

* Polling goes to update-url and ignores metadata with version <= 0

* Polling finds new metadata in same format, and continues the loop
without talking to Heat
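The version check in the steps above can be sketched as a small helper; the fetch/apply callables are stand-ins, not real Heat or agent APIs:

```python
# Toy sketch of versioned polling against an eventually consistent store:
# a stale read carries an older (or equal) version number and is ignored.
def handle_poll(doc, last_version, apply_metadata):
    """Apply polled metadata only if it is newer than what we already acted on.

    doc is the polled document, e.g. {"metadata": {...}, "version": N};
    returns the highest version seen so far.
    """
    if doc and doc.get('version', -1) > last_version:
        apply_metadata(doc['metadata'])
        return doc['version']
    return last_version
```

The agent keeps `last_version` locally, so a later read that returns old data (the eventual-consistency case Zane raised) is simply dropped.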

However, this makes me rethink why we are having performance problems.
MOST of the performance problems have two root causes:

* We parse the entire stack to show metadata, because we have to see if
  there are custom access controls defined in any of the resources used.
  I actually worked on a patch set to deprecate this part of the resource
  plugin API because it is impossible to scale this way.
* We rely on the engine to respond because of the parsing issue.

If however we could just push metadata into the db fully resolved
whenever things in the stack change, and cache the response in the API
using Last-Modified/Etag headers, I think we'd be less inclined to care
so much about swift for polling. However we are still left with the many
thousands of keystone users being created vs. thousands of swift tempurls.
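The Last-Modified/ETag caching idea amounts to serving 304 Not Modified when the client already holds the current metadata. A minimal sketch (not real Heat API code; the MD5-of-body ETag is just one possible choice):

```python
# Sketch of a conditional GET for fully resolved metadata: hash the serialized
# body into an ETag and short-circuit with 304 when the client's copy matches.
import hashlib
import json


def metadata_response(metadata, if_none_match=None):
    """Return (status, headers, body) for a conditional GET of metadata."""
    body = json.dumps(metadata, sort_keys=True)
    etag = hashlib.md5(body.encode('utf-8')).hexdigest()
    if if_none_match == etag:
        return 304, {'ETag': etag}, ''   # client copy is current; send no body
    return 200, {'ETag': etag}, body
```

Pushing resolved metadata into the db on stack changes makes this cheap: the API layer never has to parse the stack to answer a poll.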

That would also set us up nicely for very easy integration with Zaqar,
as metadata changes would flow naturally into the message queue for the
server through the same mechanism as they flow into the database.



Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Steven Hardy
On Wed, Aug 27, 2014 at 11:41:29AM -0400, Zane Bitter wrote:
 On 27/08/14 11:04, Steven Hardy wrote:
 On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
 I am a little bit skeptical about using Swift for this use case because
 of its eventual consistency issue. I am not sure a Swift cluster is good
 to be used for this kind of problem. Please note that a Swift cluster
 may give you old data at some point in time.
 
 This is probably not a major problem, but it's certainly worth considering.
 
 My assumption is that the latency of making the replicas consistent will be
 small relative to the timeout for things like SoftwareDeployments, so all
 we need is to ensure that instances  eventually get the new data, act on
 
 That part is fine, but if they get the new data and then later get the old
 data back again... that would not be so good.

Right, my assumption is that we'd have a version, either directly in the
data being polled or via swift object versioning.  We persist the most
recent metadata inside the instance, so the agent doing the polling just
has to know to ignore any metadata with a version number lower than the
locally stored data.

This does all seem like a fairly convoluted way to work around what are
seemingly mostly database bandwidth issues, but the eventual consistency
thing doesn't seem to be a showstopper afaics, if we go the swift
direction.

Steve



Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Steven Hardy
On Wed, Aug 27, 2014 at 09:40:31AM -0700, Clint Byrum wrote:
 Excerpts from Zane Bitter's message of 2014-08-27 08:41:29 -0700:
  On 27/08/14 11:04, Steven Hardy wrote:
   On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
    I am a little bit skeptical about using Swift for this use case
    because of its eventual consistency issue. I am not sure a Swift
    cluster is good to be used for this kind of problem. Please note that
    a Swift cluster may give you old data at some point in time.
  
   This is probably not a major problem, but it's certainly worth 
   considering.
  
   My assumption is that the latency of making the replicas consistent will 
   be
   small relative to the timeout for things like SoftwareDeployments, so all
   we need is to ensure that instances  eventually get the new data, act on
  
  That part is fine, but if they get the new data and then later get the 
  old data back again... that would not be so good.
  
 
 Agreed, and I had not considered that this can happen.
 
 There is a not-so-simple answer though:
 
 * Heat inserts this as initial metadata:
 
 {"metadata": {}, "update-url": "xx", "version": 0}
 
 * Polling goes to update-url and ignores metadata with version <= 0
 
 * Polling finds new metadata in same format, and continues the loop
 without talking to Heat
 
 However, this makes me rethink why we are having performance problems.
 MOST of the performance problems have two root causes:
 
 * We parse the entire stack to show metadata, because we have to see if
   there are custom access controls defined in any of the resources used.
   I actually worked on a patch set to deprecate this part of the resource
   plugin API because it is impossible to scale this way.
 * We rely on the engine to respond because of the parsing issue.
 
 If however we could just push metadata into the db fully resolved
 whenever things in the stack change, and cache the response in the API
 using Last-Modified/Etag headers, I think we'd be less inclined to care
 so much about swift for polling. However we are still left with the many
 thousands of keystone users being created vs. thousands of swift tempurls.

There's probably a few relatively simple optimisations we can do if the
keystone user thing becomes the bottleneck:
- Make the user an attribute of the stack and only create one per
  stack/tree-of-stacks
- Make the user an attribute of each server resource (probably more secure,
  but less optimal if your goal is fewer keystone users).

I don't think the many keystone users thing is actually a problem right now
though, or is it?

Steve



Re: [openstack-dev] [Keystone][Marconi][Heat] Creating accounts in Keystone

2014-08-27 Thread Ryan Brown


On 08/27/2014 12:15 PM, Kurt Griffiths wrote:
 On 8/25/14, 9:50 AM, Ryan Brown rybr...@redhat.com wrote:
 
 I'm actually quite partial to roles because, in my experience, service
 accounts rarely have their credentials rotated more than once per eon.
 Having the ability to let instances grab tokens would certainly help
 Heat, especially if we start using Zaqar (the artist formerly known as
 marconi).

 
 According to AWS docs, IAM Roles allow you to “Define which API actions
 and resources the application can use after assuming the role.” What would

Optimally, you'd be able to (as a user) generate tokens with subsets of
your permissions (e.g. if you're admin, you can create non-admin
tokens/tempurls).

Implementing this seems (from where I'm sitting) like it would take a
lot of help from the Keystone team.

 it take to implement this in OpenStack? Currently, Keystone roles seem to
 be more oriented toward cloud operators, not end users. This quote from
 the Keystone docs[1] is telling:
 
 If you wish to restrict users from performing operations in, say,
 the Compute service, you need to create a role in the Identity
 Service and then modify /etc/nova/policy.json so that this role is
 required for Compute operations.
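For reference, the mechanism the quoted docs describe boils down to a policy.json rule; this fragment is purely illustrative (both the operation name and the role are examples, not a recommendation):

```json
{
    "compute:start": "role:role-x"
}
```

With a rule like this in /etc/nova/policy.json, only tokens carrying role-x would pass the policy check for that operation.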

I wasn't aware that this was how role permissions worked. Thank you for
including that info.

 
 On 8/25/14, 9:49 AM, Zane Bitter zbit...@redhat.com wrote:
 
 In particular, even if a service like Zaqar or Heat implements their own
 authorisation (e.g. the user creating a Zaqar queue supplies lists of
 the accounts that are allowed to read or write to it, respectively), how
 does the user ensure that the service accounts they create will not have
 access to other OpenStack APIs? IIRC the default policy.json files
 supplied by the various projects allow non-admin operations from any
 account with a role in the project.

 
 It seems like end users need to be able to define custom roles and
 policies.
 
 Some example use cases for the sake of discussion:
 
 1. App developer sends a request to Zaqar to create a queue named
“customer-orders”
 2. Zaqar creates a queue named customer-orders
 3. App developer sends a request to Keystone to create a role, role-x,
for App Component X
 4. Keystone creates role-x
 5. App developer sends requests to Keystone to create a service user,
“user-x” and associate it with role-x
 6. Keystone creates user-x and gives it role-x
 7. App developer sends a request to Zaqar to create a policy,
“customer-orders-observer”, and associate that policy with role-x. The
policy only allows GETing (listing) messages from the customer-orders
queue
 8. Zaqar creates customer-orders-observer and notes that it is associated
with role-x
 
 Later on...
 
 1. App Component X sends a request to Zaqar, including an auth token
 2. Zaqar sends a request to Keystone asking for roles associated with the
given token
 3. Keystone returns one or more roles, including role-x
 4. Zaqar checks for any user-defined policies associated with the roles,
including role-x, and finds customer-orders-observer
 5. Zaqar verifies that the requested operation is allowed according to
customer-orders-observer
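The runtime check in the second list can be modeled as a tiny function. Everything here is illustrative: neither Zaqar nor Keystone exposes these exact calls, and the policy representation is a toy.

```python
# Toy model of the later-on flow: resolve the token's roles (steps 2-3),
# look up user-defined policies per role (step 4), and verify the requested
# operation against them (step 5).
def is_allowed(token, operation, queue, get_roles, policies):
    """True if any role on the token has a policy permitting (queue, op)."""
    for role in get_roles(token):              # ask Keystone for the roles
        for policy in policies.get(role, ()):  # user-defined policies per role
            if (queue, operation) in policy:   # e.g. ('customer-orders', 'GET')
                return True
    return False
```

In this toy model, customer-orders-observer would be a policy set containing just ('customer-orders', 'GET'), attached to role-x.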
 
 We should also compare and contrast this with signed URLs ala Swift’s
 tempurl. For example, service accounts do not have to be created or
 managed in the case of tempurl.
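For comparison, Swift's tempurl scheme signs the method, expiry and path with HMAC-SHA1 (per the tempurl middleware docs); no service account is needed, only a shared key. The path and key below are placeholders:

```python
# Sketch of generating a Swift tempurl query string: HMAC-SHA1 over
# "METHOD\nEXPIRES\nPATH" with an account-level key, as in the tempurl
# middleware. The account/container/object path and key are placeholders.
import hmac
import time
from hashlib import sha1


def make_tempurl(method, path, key, ttl=3600):
    expires = int(time.time()) + ttl
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                   sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)
```

Anyone holding the resulting URL can perform exactly that method on that object until the expiry, without any credentials being created or managed.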

Perhaps there would be a way to have a more generic (Keystone-wide)
version of similar functionality. Even if there wasn't any scoping
support it would still be exceptionally useful.

This is starting to sound like it's worth drafting a blueprint for, or
at least looking through existing BP's to see if there's something that
fits.

 
 --Kurt
 
 [1]: http://goo.gl/5UBMwR [http://docs.openstack.org]
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.



Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Chris Dent

On Wed, 27 Aug 2014, Doug Hellmann wrote:


I have found it immensely helpful, for example, to have a written set
of the steps involved in creating a new library, from importing the
git repo all the way through to making it available to other projects.
Without those instructions, it would have been much harder to split up
the work. The team would have had to train each other by word of
mouth, and we would have had constant issues with inconsistent
approaches triggering different failures. The time we spent building
and verifying the instructions has paid off to the extent that we even
had one developer not on the core team handle a graduation for us.


+many more for the relatively simple act of just writing stuff down

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-27 Thread Huang Zhiteng
Definitely +1.

Xing has been very active in providing feedback for reviews.  Thanks
for the help and welcome to the team!

On Tue, Aug 19, 2014 at 5:24 PM, Avishay Traeger
avis...@stratoscale.com wrote:
 +1


 On Thu, Aug 14, 2014 at 9:55 AM, Boring, Walter walter.bor...@hp.com
 wrote:

 Hey guys,
I wanted to pose a nomination for Cinder core.

 Xing Yang.
 She has been active in the cinder community for many releases and has
 worked on several drivers as well as other features for cinder itself.   She
 has been doing an awesome job doing reviews and helping folks out in the
 #openstack-cinder irc channel for a long time.   I think she would be a good
 addition to the core team.


 Walt
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Regards
Huang Zhiteng



Re: [openstack-dev] [Heat] Heat Juno Mid-cycle Meetup report

2014-08-27 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-08-27 10:08:36 -0700:
 On Wed, Aug 27, 2014 at 09:40:31AM -0700, Clint Byrum wrote:
  Excerpts from Zane Bitter's message of 2014-08-27 08:41:29 -0700:
   On 27/08/14 11:04, Steven Hardy wrote:
On Wed, Aug 27, 2014 at 07:54:41PM +0530, Jyoti Ranjan wrote:
I am a little bit skeptical about using Swift for this use case because
of its eventual consistency issue. I am not sure a Swift cluster is good
to be used for this kind of problem. Please note that a Swift cluster
may give you old data at some point in time.
   
This is probably not a major problem, but it's certainly worth 
considering.
   
My assumption is that the latency of making the replicas consistent 
will be
small relative to the timeout for things like SoftwareDeployments, so 
all
we need is to ensure that instances  eventually get the new data, act on
   
   That part is fine, but if they get the new data and then later get the 
   old data back again... that would not be so good.
   
  
  Agreed, and I had not considered that this can happen.
  
  There is a not-so-simple answer though:
  
  * Heat inserts this as initial metadata:
  
  {"metadata": {}, "update-url": "xx", "version": 0}
  
  * Polling goes to update-url and ignores metadata with version <= 0
  
  * Polling finds new metadata in same format, and continues the loop
  without talking to Heat
  
  However, this makes me rethink why we are having performance problems.
  MOST of the performance problems have two root causes:
  
  * We parse the entire stack to show metadata, because we have to see if
there are custom access controls defined in any of the resources used.
I actually worked on a patch set to deprecate this part of the resource
plugin API because it is impossible to scale this way.
  * We rely on the engine to respond because of the parsing issue.
  
  If however we could just push metadata into the db fully resolved
  whenever things in the stack change, and cache the response in the API
  using Last-Modified/Etag headers, I think we'd be less inclined to care
  so much about swift for polling. However we are still left with the many
  thousands of keystone users being created vs. thousands of swift tempurls.
 
 There's probably a few relatively simple optimisations we can do if the
 keystone user thing becomes the bottleneck:
 - Make the user an attribute of the stack and only create one per
   stack/tree-of-stacks
 - Make the user an attribute of each server resource (probably more secure
   but less optimal if your optimal is less keystone users).
 
 I don't think the many keystone users thing is actually a problem right now
 though, or is it?

1000 servers means 1000 keystone users to manage, and all of the tokens
and backend churn that implies.

It's not a problem, but it is quite a bit heavier than tempurls.



Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-27 Thread John Griffith
On Wed, Aug 27, 2014 at 11:39 AM, Huang Zhiteng winsto...@gmail.com wrote:

 Definitely +1.

 Xing has been very active in providing feedback for reviews.  Thanks
 for the help and welcome to the team!

 On Tue, Aug 19, 2014 at 5:24 PM, Avishay Traeger
 avis...@stratoscale.com wrote:
  +1
 
 
  On Thu, Aug 14, 2014 at 9:55 AM, Boring, Walter walter.bor...@hp.com
  wrote:
 
  Hey guys,
 I wanted to pose a nomination for Cinder core.
 
  Xing Yang.
  She has been active in the cinder community for many releases and has
  worked on several drivers as well as other features for cinder itself.
  She
  has been doing an awesome job doing reviews and helping folks out in the
  #openstack-cinder irc channel for a long time.   I think she would be a
 good
  addition to the core team.
 
 
  Walt
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 



 --
 Regards
 Huang Zhiteng

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Confirming Xing as the newest member of the Cinder Core team.

This one was all around pretty easy, congratulations Xing!!  Keep up the
great work!


Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-27 Thread yang, xing
Thanks John!   Thanks everyone!

Xing


From: John Griffith [mailto:john.griff...@solidfire.com]
Sent: Wednesday, August 27, 2014 2:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination



On Wed, Aug 27, 2014 at 11:39 AM, Huang Zhiteng 
winsto...@gmail.commailto:winsto...@gmail.com wrote:
Definitely +1.

Xing has been very active in providing feedback for reviews.  Thanks
for the help and welcome to the team!

On Tue, Aug 19, 2014 at 5:24 PM, Avishay Traeger
avis...@stratoscale.commailto:avis...@stratoscale.com wrote:
 +1


 On Thu, Aug 14, 2014 at 9:55 AM, Boring, Walter 
 walter.bor...@hp.commailto:walter.bor...@hp.com
 wrote:

 Hey guys,
I wanted to pose a nomination for Cinder core.

 Xing Yang.
 She has been active in the cinder community for many releases and has
 worked on several drivers as well as other features for cinder itself.   She
 has been doing an awesome job doing reviews and helping folks out in the
 #openstack-cinder irc channel for a long time.   I think she would be a good
 addition to the core team.


 Walt
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Regards
Huang Zhiteng

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Confirming Xing as the newest member of the Cinder Core team.

This one was all around pretty easy, congratulations Xing!!  Keep up the
great work!



Re: [openstack-dev] [Neutron] Author tags

2014-08-27 Thread Mandeep Dhami
These should all be comment-only changes, so there should be no impact.
While I agree that it is late for J3, IMO this is the type of change
(minor/comment only) that should be OK in J4 rather than waiting for Kilo.


On Wed, Aug 27, 2014 at 7:25 AM, Kyle Mestery mest...@mestery.com wrote:

 On Wed, Aug 27, 2014 at 8:24 AM, Gary Kotton gkot...@vmware.com wrote:
  Hi,
  A few cycles ago the Nova group decided to remove @author from copyright
  statements. This is due to the fact that this information is stored in
 git.
  After adding a similar hacking rule to Neutron it has stirred up some
  debate.
  Does anyone have any reason to for us not to go ahead with
  https://review.openstack.org/#/c/112329/.
  Thanks
  Gary
 
 My main concern is around landing a change like this during feature
 freeze week, I think at best this should land at the start of Kilo.

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [all] The future of the integrated release

2014-08-27 Thread Doug Hellmann

On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote:

 On Wed, 27 Aug 2014, Doug Hellmann wrote:
 
 I have found it immensely helpful, for example, to have a written set
 of the steps involved in creating a new library, from importing the
 git repo all the way through to making it available to other projects.
 Without those instructions, it would have been much harder to split up
 the work. The team would have had to train each other by word of
 mouth, and we would have had constant issues with inconsistent
 approaches triggering different failures. The time we spent building
 and verifying the instructions has paid off to the extent that we even
 had one developer not on the core team handle a graduation for us.
 
 +many more for the relatively simple act of just writing stuff down

“Write it down.” is my theme for Kilo.

Doug




Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread John Griffith
On Wed, Aug 27, 2014 at 9:25 AM, Flavio Percoco fla...@redhat.com wrote:

 On 08/27/2014 03:26 PM, Sean Dague wrote:
  On 08/27/2014 08:51 AM, Thierry Carrez wrote:
  Hi everyone,
 
  I've been thinking about what changes we can bring to the Design Summit
  format to make it more productive. I've heard the feedback from the
  mid-cycle meetups and would like to apply some of those ideas for Paris,
  within the constraints we have (already booked space and time). Here is
  something we could do:
 
  Day 1. Cross-project sessions / incubated projects / other projects
 
  I think that worked well last time. 3 parallel rooms where we can
  address top cross-project questions, discuss the results of the various
  experiments we conducted during juno. Don't hesitate to schedule 2 slots
  for discussions, so that we have time to come to the bottom of those
  issues. Incubated projects (and maybe other projects, if space allows)
  occupy the remaining space on day 1, and could occupy pods on the
  other days.
 
  Day 2 and Day 3. Scheduled sessions for various programs
 
  That's our traditional scheduled space. We'll have 33% fewer slots
  available. So, rather than trying to cover all the scope, the idea would
  be to focus those sessions on specific issues which really require
  face-to-face discussion (which can't be solved on the ML or using spec
  discussion) *or* require a lot of user feedback. That way, appearing in
  the general schedule is very helpful. This will require us to be a lot
  stricter on what we accept there and what we don't -- we won't have
  space for courtesy sessions anymore, and traditional/unnecessary
  sessions (like my traditional release schedule one) should just move
  to the mailing-list.
 
  Day 4. Contributors meetups
 
  On the last day, we could try to split the space so that we can conduct
  parallel midcycle-meetup-like contributors gatherings, with no time
  boundaries and an open agenda. Large projects could get a full day,
  smaller projects would get half a day (but could continue the discussion
  in a local bar). Ideally that meetup would end with some alignment on
  release goals, but the idea is to make the best of that time together to
  solve the issues you have. Friday would finish with the design summit
  feedback session, for those who are still around.
 
 
  I think this proposal makes the best use of our setup: discuss clear
  cross-project issues, address key specific topics which need
  face-to-face time and broader attendance, then try to replicate the
  success of midcycle meetup-like open unscheduled time to discuss
  whatever is hot at this point.
 
  There are still details to work out (is it possible to split the space,
  should we use the usual design summit CFP website to organize the
  scheduled time...), but I would first like to have your feedback on
  this format. Also if you have alternative proposals that would make a
  better use of our 4 days, let me know.
 
  I definitely like this approach. I think it will be really interesting
  to collect feedback from people about the value they got from days 2 & 3
  vs. Day 4.
 
  I also wonder if we should lose a slot from days 1 - 3 and expand the
  hallway time. Hallway track is always pretty interesting, and honestly
  a lot of interesting ideas spring up there. The 10 minute transitions often
  make it feel like you are rushing between places too quickly at times.

 +1

 Last summit, it was basically impossible to do any hallway talking and
 even meet some folks face-2-face.

 Other than that, I think the proposal is great and makes sense to me.

 Flavio

 --
 @flaper87
 Flavio Percoco


Sounds like a great idea to me:
+1


Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-27 Thread Jay Pipes

On 08/26/2014 07:09 PM, James E. Blair wrote:

Hi,

After reading https://wiki.openstack.org/wiki/Network/Incubator I have
some thoughts about the proposed workflow.

We have quite a bit of experience and some good tools around splitting
code out of projects and into new projects.  But we don't generally do a
lot of importing code into projects.  We've done this once, to my
recollection, in a way that preserved history, and that was with the
switch to keystone-lite.

It wasn't easy; it's major git surgery and would require significant
infra-team involvement any time we wanted to do it.

However, reading the proposal, it occurred to me that it's pretty clear
that we expect these tools to be able to operate outside of the Neutron
project itself, to even be releasable on their own.  Why not just stick
with that?  In other words, the goal of this process should be to create
separate projects with their own development lifecycle that will
continue indefinitely, rather than expecting the code itself to merge
into the neutron repo.

This has advantages in simplifying workflow and making it more
consistent.  Plus it builds on known integration mechanisms like APIs
and python project versions.

But more importantly, it helps scale the neutron project itself.  I
think that a focused neutron core upon which projects like these can
build in a reliable fashion would be ideal.


Despite replies to you saying that certain branches of Neutron 
development work are special unicorns, I wanted to say I *fully* support 
your above statement.


Best,
-jay
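The integration mechanisms James mentions (APIs and python project versions) can be made concrete: a split-out project pins the range of host-project versions it supports. A hedged sketch with invented version numbers and helper names; in a real project this constraint would simply live in the requirements file, where pip enforces it:

```python
# Hedged illustration of integrating through "python project versions":
# a project built on top of a core project checks that the installed core
# falls inside a supported range. All version strings and bounds below are
# invented for the example.

def parse_version(version):
    """Turn a dotted version string such as "2014.2.1" into a comparable tuple."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())


def is_compatible(installed, minimum, below):
    """Return True when minimum <= installed < below, compared numerically."""
    return parse_version(minimum) <= parse_version(installed) < parse_version(below)
```

Expressed declaratively, the same constraint would be a single `neutron>=2014.2,<2015.1`-style line in requirements.txt (versions invented here).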




[openstack-dev] [all] gate debugging

2014-08-27 Thread Sean Dague
Note: thread intentionally broken, this is really a different topic.

On 08/27/2014 02:30 PM, Doug Hellmann wrote:
 On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote:

 On Wed, 27 Aug 2014, Doug Hellmann wrote:

 I have found it immensely helpful, for example, to have a written set
 of the steps involved in creating a new library, from importing the
 git repo all the way through to making it available to other projects.
 Without those instructions, it would have been much harder to split up
 the work. The team would have had to train each other by word of
 mouth, and we would have had constant issues with inconsistent
 approaches triggering different failures. The time we spent building
 and verifying the instructions has paid off to the extent that we even
 had one developer not on the core team handle a graduation for us.

 +many more for the relatively simple act of just writing stuff down

 “Write it down.” is my theme for Kilo.

I definitely get the sentiment. "Write it down" is also hard when you
are talking about things that do change around quite a bit. OpenStack as
a whole sees 250 - 500 changes a week, so the interaction pattern moves
around enough that it's really easy to have *very* stale information
written down. Stale information is even more dangerous than no
information sometimes, as it takes people down very wrong paths.

I think we break down on communication when we get into a conversation
of "I want to learn gate debugging" because I don't quite know what that
means, or where the starting point of understanding is. So those
intentions are well meaning, but tend to stall. The reality was there
was no road map for those of us that dive in, it's just understanding
how OpenStack holds together as a whole and where some of the high risk
parts are. And a lot of that comes with days staring at code and logs
until patterns emerge.

Maybe if we can get smaller more targeted questions, we can help folks
better? I'm personally a big fan of answering the targeted questions
because then I also know that the time spent exposing that information
was directly useful.

I'm more than happy to mentor folks. But I just end up finding the "I
want to learn" at the generic level something that's hard to grasp onto
or figure out how we turn it into action. I'd love to hear more ideas
from folks about ways we might do that better.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] refactoring of resize/migrate

2014-08-27 Thread Jay Pipes

On 08/27/2014 06:41 AM, Markus Zoeller wrote:

The review of the spec for blueprint hot-resize has several comments
about the need for refactoring the existing code base of resize and
migrate before the blueprint could be considered (see [1]).
I'm interested in the result of the blueprint; therefore I want to offer
my support. How can I participate?

[1] https://review.openstack.org/95054


Are you offering support to refactor resize/migrate, or are you offering 
support to work only on the hot-resize functionality?


I'm very much interested in refactoring the resize/migrate 
functionality, and would appreciate any help and insight you might have. 
Unfortunately, such a refactoring:


a) Must start in Kilo
b) Begins with un-crufting the simply horrible, inconsistent, and 
duplicative REST API and public behaviour of the resize and migrate actions


In any case, I'm happy to start the conversation about this going in 
about a month or so, or whenever Kilo blueprints open up. Until then, 
we're pretty much working on reviews for already-approved blueprints and 
bug fixing.


Best,
-jay




Re: [openstack-dev] [all] Design Summit reloaded

2014-08-27 Thread Kyle Mestery
On Wed, Aug 27, 2014 at 7:51 AM, Thierry Carrez thie...@openstack.org wrote:
 Hi everyone,

 I've been thinking about what changes we can bring to the Design Summit
 format to make it more productive. I've heard the feedback from the
 mid-cycle meetups and would like to apply some of those ideas for Paris,
 within the constraints we have (already booked space and time). Here is
 something we could do:

 Day 1. Cross-project sessions / incubated projects / other projects

 I think that worked well last time. 3 parallel rooms where we can
 address top cross-project questions, discuss the results of the various
 experiments we conducted during juno. Don't hesitate to schedule 2 slots
 for discussions, so that we have time to come to the bottom of those
 issues. Incubated projects (and maybe other projects, if space allows)
 occupy the remaining space on day 1, and could occupy pods on the
 other days.

 Day 2 and Day 3. Scheduled sessions for various programs

 That's our traditional scheduled space. We'll have 33% fewer slots
 available. So, rather than trying to cover all the scope, the idea would
 be to focus those sessions on specific issues which really require
 face-to-face discussion (which can't be solved on the ML or using spec
 discussion) *or* require a lot of user feedback. That way, appearing in
 the general schedule is very helpful. This will require us to be a lot
 stricter on what we accept there and what we don't -- we won't have
 space for courtesy sessions anymore, and traditional/unnecessary
 sessions (like my traditional release schedule one) should just move
 to the mailing-list.

 Day 4. Contributors meetups

 On the last day, we could try to split the space so that we can conduct
 parallel midcycle-meetup-like contributors gatherings, with no time
 boundaries and an open agenda. Large projects could get a full day,
 smaller projects would get half a day (but could continue the discussion
 in a local bar). Ideally that meetup would end with some alignment on
 release goals, but the idea is to make the best of that time together to
 solve the issues you have. Friday would finish with the design summit
 feedback session, for those who are still around.


 I think this proposal makes the best use of our setup: discuss clear
 cross-project issues, address key specific topics which need
 face-to-face time and broader attendance, then try to replicate the
 success of midcycle meetup-like open unscheduled time to discuss
 whatever is hot at this point.

 There are still details to work out (is it possible to split the space,
 should we use the usual design summit CFP website to organize the
 scheduled time...), but I would first like to have your feedback on
 this format. Also if you have alternative proposals that would make a
 better use of our 4 days, let me know.

 Cheers,

+1000

This is a great move in the right direction here. Evolving the design
summit in this direction feels natural and will greatly benefit the
projects by allowing for some flexibility and taking some of the good
points from mid-cycle meetings and incorporating them here.

Thanks,
Kyle

 --
 Thierry Carrez (ttx)




Re: [openstack-dev] [all] gate debugging

2014-08-27 Thread Jeremy Stanley
On 2014-08-27 14:54:55 -0400 (-0400), Sean Dague wrote:
[...]
 I think we break down on communication when we get into a
 conversation of "I want to learn gate debugging" because I don't
 quite know what that means, or where the starting point of
 understanding is. So those intentions are well meaning, but tend
 to stall. The reality was there was no road map for those of us
 that dive in, it's just understanding how OpenStack holds together
 as a whole and where some of the high risk parts are. And a lot of
 that comes with days staring at code and logs until patterns
 emerge.
[...]

One way to put this in perspective, I think, is to talk about
devstack-gate integration test jobs (which are only one of a variety
of kinds of jobs we gate on, but it's possibly the most nebulous
case).

Since devstack-gate mostly just sets up an OpenStack (for a variety
of definitions thereof) and then runs some defined suite of
transformations and tests against it, a failure really is quite
often "this cloud broke". You are really looking, post-mortem, at
what would in production probably be considered a catastrophic
cascade failure involving multiple moving parts, where all you have
left is (hopefully enough, sometimes not) logs of what the services
were doing when all hell broke loose. However, you're an ops team of
one trying to get to the bottom of why your environment went toes
up... and then you're a developer trying to work out what to patch
where to make it not happen again (if you're lucky).

That is "gate debugging" and, to support your point, is something
which can at best be only vaguely documented.
-- 
Jeremy Stanley
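To make that post-mortem concrete: the first pass over such a failure is usually mechanical, counting which service logged errors and traces to find where the cascade started. A minimal sketch, assuming a simplified "service level message" line format (the real per-service log files are far messier):

```python
# Minimal sketch of a first-pass log triage for a failed integration run:
# count ERROR and TRACE lines per service. The "service level message"
# format and the sample lines are simplified stand-ins for real logs.
from collections import Counter


def triage(log_lines):
    """Return a Counter keyed by (service, level) for ERROR/TRACE lines."""
    hits = Counter()
    for line in log_lines:
        parts = line.split(None, 2)  # -> [service, level, rest-of-message]
        if len(parts) >= 2 and parts[1] in ("ERROR", "TRACE"):
            hits[(parts[0], parts[1])] += 1
    return hits


sample = [
    "n-cpu INFO instance spawned",
    "n-cpu ERROR libvirt connection reset",
    "q-agt TRACE Traceback (most recent call last):",
    "q-agt ERROR ovs flow sync failed",
]
```

Sorting the resulting counts, and then the timestamps of the first offending lines, points at the service whose logs deserve a close read.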



Re: [openstack-dev] [all] gate debugging

2014-08-27 Thread David Kranz

On 08/27/2014 02:54 PM, Sean Dague wrote:

Note: thread intentionally broken, this is really a different topic.

On 08/27/2014 02:30 PM, Doug Hellmann wrote:

On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote:


On Wed, 27 Aug 2014, Doug Hellmann wrote:


I have found it immensely helpful, for example, to have a written set
of the steps involved in creating a new library, from importing the
git repo all the way through to making it available to other projects.
Without those instructions, it would have been much harder to split up
the work. The team would have had to train each other by word of
mouth, and we would have had constant issues with inconsistent
approaches triggering different failures. The time we spent building
and verifying the instructions has paid off to the extent that we even
had one developer not on the core team handle a graduation for us.

+many more for the relatively simple act of just writing stuff down

“Write it down.” is my theme for Kilo.

I definitely get the sentiment. "Write it down" is also hard when you
are talking about things that do change around quite a bit. OpenStack as
a whole sees 250 - 500 changes a week, so the interaction pattern moves
around enough that it's really easy to have *very* stale information
written down. Stale information is even more dangerous than no
information sometimes, as it takes people down very wrong paths.

I think we break down on communication when we get into a conversation
of "I want to learn gate debugging" because I don't quite know what that
means, or where the starting point of understanding is. So those
intentions are well meaning, but tend to stall. The reality was there
was no road map for those of us that dive in, it's just understanding
how OpenStack holds together as a whole and where some of the high risk
parts are. And a lot of that comes with days staring at code and logs
until patterns emerge.

Maybe if we can get smaller more targeted questions, we can help folks
better? I'm personally a big fan of answering the targeted questions
because then I also know that the time spent exposing that information
was directly useful.

I'm more than happy to mentor folks. But I just end up finding the "I
want to learn" at the generic level something that's hard to grasp onto
or figure out how we turn it into action. I'd love to hear more ideas
from folks about ways we might do that better.

-Sean

Race conditions are what makes debugging very hard. I think we are in 
the process of experimenting with such an idea: asymmetric gating by
moving functional tests to projects, making them deeper and more 
extensive, and gating against their own projects. The result should be 
that when a code change is made, we will spend much more time running 
tests of code that is most likely to be growing a race bug from the 
change. Of course there is a risk that we will impair integration 
testing and we will have to be vigilant about that. One mitigating 
factor is that if cross-project interaction uses apis (official or not) 
that are well tested by the functional tests, there is less risk that a 
bug will show up only when those apis are used by another project.


 -David



Re: [openstack-dev] [all] gate debugging

2014-08-27 Thread Sean Dague
On 08/27/2014 03:33 PM, David Kranz wrote:
 On 08/27/2014 02:54 PM, Sean Dague wrote:
 Note: thread intentionally broken, this is really a different topic.

 On 08/27/2014 02:30 PM, Doug Hellmann wrote:
 On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote:

 On Wed, 27 Aug 2014, Doug Hellmann wrote:

 I have found it immensely helpful, for example, to have a written set
 of the steps involved in creating a new library, from importing the
 git repo all the way through to making it available to other projects.
 Without those instructions, it would have been much harder to split up
 the work. The team would have had to train each other by word of
 mouth, and we would have had constant issues with inconsistent
 approaches triggering different failures. The time we spent building
 and verifying the instructions has paid off to the extent that we even
 had one developer not on the core team handle a graduation for us.
 +many more for the relatively simple act of just writing stuff down
 “Write it down.” is my theme for Kilo.
 I definitely get the sentiment. "Write it down" is also hard when you
 are talking about things that do change around quite a bit. OpenStack as
 a whole sees 250 - 500 changes a week, so the interaction pattern moves
 around enough that it's really easy to have *very* stale information
 written down. Stale information is even more dangerous than no
 information sometimes, as it takes people down very wrong paths.

 I think we break down on communication when we get into a conversation
 of "I want to learn gate debugging" because I don't quite know what that
 means, or where the starting point of understanding is. So those
 intentions are well meaning, but tend to stall. The reality was there
 was no road map for those of us that dive in, it's just understanding
 how OpenStack holds together as a whole and where some of the high risk
 parts are. And a lot of that comes with days staring at code and logs
 until patterns emerge.

 Maybe if we can get smaller more targeted questions, we can help folks
 better? I'm personally a big fan of answering the targeted questions
 because then I also know that the time spent exposing that information
 was directly useful.

 I'm more than happy to mentor folks. But I just end up finding the "I
 want to learn" at the generic level something that's hard to grasp onto
 or figure out how we turn it into action. I'd love to hear more ideas
 from folks about ways we might do that better.

 -Sean

 Race conditions are what makes debugging very hard. I think we are in
 the process of experimenting with such an idea: asymmetric gating by
 moving functional tests to projects, making them deeper and more
 extensive, and gating against their own projects. The result should be
 that when a code change is made, we will spend much more time running
 tests of code that is most likely to be growing a race bug from the
 change. Of course there is a risk that we will impair integration
 testing and we will have to be vigilant about that. One mitigating
 factor is that if cross-project interaction uses apis (official or not)
 that are well tested by the functional tests, there is less risk that a
 bug will show up only when those apis are used by another project.

So, sorry, this is really not about systemic changes (we're running
those in parallel), but more about skills transfer in people getting
engaged. Because we need both. I guess that's the danger of breaking the
thread: apparently I lost part of the context.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [all] gate debugging

2014-08-27 Thread David Kranz

On 08/27/2014 03:43 PM, Sean Dague wrote:

On 08/27/2014 03:33 PM, David Kranz wrote:

On 08/27/2014 02:54 PM, Sean Dague wrote:

Note: thread intentionally broken, this is really a different topic.

On 08/27/2014 02:30 PM, Doug Hellmann wrote:

On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote:


On Wed, 27 Aug 2014, Doug Hellmann wrote:


I have found it immensely helpful, for example, to have a written set
of the steps involved in creating a new library, from importing the
git repo all the way through to making it available to other projects.
Without those instructions, it would have been much harder to split up
the work. The team would have had to train each other by word of
mouth, and we would have had constant issues with inconsistent
approaches triggering different failures. The time we spent building
and verifying the instructions has paid off to the extent that we even
had one developer not on the core team handle a graduation for us.

+many more for the relatively simple act of just writing stuff down

“Write it down.” is my theme for Kilo.

I definitely get the sentiment. "Write it down" is also hard when you
are talking about things that do change around quite a bit. OpenStack as
a whole sees 250 - 500 changes a week, so the interaction pattern moves
around enough that it's really easy to have *very* stale information
written down. Stale information is even more dangerous than no
information sometimes, as it takes people down very wrong paths.

I think we break down on communication when we get into a conversation
of "I want to learn gate debugging" because I don't quite know what that
means, or where the starting point of understanding is. So those
intentions are well meaning, but tend to stall. The reality was there
was no road map for those of us that dive in, it's just understanding
how OpenStack holds together as a whole and where some of the high risk
parts are. And a lot of that comes with days staring at code and logs
until patterns emerge.

Maybe if we can get smaller more targeted questions, we can help folks
better? I'm personally a big fan of answering the targeted questions
because then I also know that the time spent exposing that information
was directly useful.

I'm more than happy to mentor folks. But I just end up finding the "I
want to learn" at the generic level something that's hard to grasp onto
or figure out how we turn it into action. I'd love to hear more ideas
from folks about ways we might do that better.

 -Sean


Race conditions are what makes debugging very hard. I think we are in
the process of experimenting with such an idea: asymmetric gating by
moving functional tests to projects, making them deeper and more
extensive, and gating against their own projects. The result should be
that when a code change is made, we will spend much more time running
tests of code that is most likely to be growing a race bug from the
change. Of course there is a risk that we will impair integration
testing and we will have to be vigilant about that. One mitigating
factor is that if cross-project interaction uses apis (official or not)
that are well tested by the functional tests, there is less risk that a
bug will show up only when those apis are used by another project.

So, sorry, this is really not about systemic changes (we're running
those in parallel), but more about skills transfer in people getting
engaged. Because we need both. I guess that's the danger of breaking the
thread: apparently I lost part of the context.

-Sean

I agree we need both. I made the comment because if we can make gate
debugging less daunting, then less skill will be needed, and I think that
is key. Honestly, I am not sure the full skill you have can be
transferred. It was gained partly through learning in simpler times.

 -David



Re: [openstack-dev] [all] gate debugging

2014-08-27 Thread Anita Kuno
On 08/27/2014 03:43 PM, Sean Dague wrote:
 On 08/27/2014 03:33 PM, David Kranz wrote:
 On 08/27/2014 02:54 PM, Sean Dague wrote:
 Note: thread intentionally broken, this is really a different topic.

 On 08/27/2014 02:30 PM, Doug Hellmann wrote:
 On Aug 27, 2014, at 1:30 PM, Chris Dent chd...@redhat.com wrote:

 On Wed, 27 Aug 2014, Doug Hellmann wrote:

 I have found it immensely helpful, for example, to have a written set
 of the steps involved in creating a new library, from importing the
 git repo all the way through to making it available to other projects.
 Without those instructions, it would have been much harder to split up
 the work. The team would have had to train each other by word of
 mouth, and we would have had constant issues with inconsistent
 approaches triggering different failures. The time we spent building
 and verifying the instructions has paid off to the extent that we even
 had one developer not on the core team handle a graduation for us.
 +many more for the relatively simple act of just writing stuff down
 “Write it down.” is my theme for Kilo.
 I definitely get the sentiment. "Write it down" is also hard when you
 are talking about things that do change around quite a bit. OpenStack as
 a whole sees 250 - 500 changes a week, so the interaction pattern moves
 around enough that it's really easy to have *very* stale information
 written down. Stale information is even more dangerous than no
 information sometimes, as it takes people down very wrong paths.

 I think we break down on communication when we get into a conversation
 of "I want to learn gate debugging" because I don't quite know what that
 means, or where the starting point of understanding is. So those
 intentions are well meaning, but tend to stall. The reality was there
 was no road map for those of us that dive in, it's just understanding
 how OpenStack holds together as a whole and where some of the high risk
 parts are. And a lot of that comes with days staring at code and logs
 until patterns emerge.

 Maybe if we can get smaller more targeted questions, we can help folks
 better? I'm personally a big fan of answering the targeted questions
 because then I also know that the time spent exposing that information
 was directly useful.

 I'm more than happy to mentor folks. But I just end up finding the "I
 want to learn" at the generic level something that's hard to grasp onto
 or figure out how we turn it into action. I'd love to hear more ideas
 from folks about ways we might do that better.

 -Sean

 Race conditions are what makes debugging very hard. I think we are in
 the process of experimenting with such an idea: asymmetric gating by
 moving functional tests to projects, making them deeper and more
 extensive, and gating against their own projects. The result should be
 that when a code change is made, we will spend much more time running
 tests of code that is most likely to be growing a race bug from the
 change. Of course there is a risk that we will impair integration
 testing and we will have to be vigilant about that. One mitigating
 factor is that if cross-project interaction uses apis (official or not)
 that are well tested by the functional tests, there is less risk that a
 bug will show up only when those apis are used by another project.
 
 So, sorry, this is really not about systemic changes (we're running
 those in parallel), but more about skills transfer in people getting
 engaged. Because we need both. I guess that's the danger of breaking the
 thread: apparently I lost part of the context.
 
   -Sean
 
I love mentoring; it is my favourite skills transfer pattern.

The optimal pattern is that I agree to mentor someone and they focus on
what I task them with; I evaluate it and we review it, and not only do
they learn a skill but they also gain their own personal experience as a
foundation for having that skill.

Here is the part that breaks down in OpenStack - then the person I
mentor agrees to mentor someone else. Now I am mentoring one person plus
another by proxy, which is great because now in addition to technical
skills like searching and finding and offering patches and reviewing,
the person I'm mentoring learns the mentoring skills to be able to pass
on what they learn. For some reason I don't seem to make much headway (a
little but not much) in getting any traction in the second layer of
mentoring. For whatever reason, it just doesn't work and I am having to
teach everything all over from scratch one at a time to people. This is
not what I am used to and is really exhausting.

I wish I had answers but I don't. I don't know why this structure
doesn't pick up and scale out, but it doesn't.

Perhaps you might figure it out, Sean. I don't know.

Thanks,
Anita.



Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-27 Thread Kevin Benton
What are you talking about? The only reply was from me clarifying that one
of the purposes of the incubator was for components of neutron that are
experimental but are intended to be merged. In that case it might not make
sense to have a life cycle of their own in another repo indefinitely.


On Wed, Aug 27, 2014 at 11:52 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 08/26/2014 07:09 PM, James E. Blair wrote:

 Hi,

 After reading https://wiki.openstack.org/wiki/Network/Incubator I have
 some thoughts about the proposed workflow.

 We have quite a bit of experience and some good tools around splitting
 code out of projects and into new projects.  But we don't generally do a
 lot of importing code into projects.  We've done this once, to my
 recollection, in a way that preserved history, and that was with the
 switch to keystone-lite.

 It wasn't easy; it's major git surgery and would require significant
 infra-team involvement any time we wanted to do it.
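The "major git surgery" described above is essentially a subtree-style merge that grafts an external repository in while preserving its commit history. A minimal sketch of one way to do it (the remote name, URL, and prefix here are purely illustrative; `--allow-unrelated-histories` is needed on Git 2.9+):

```shell
# Graft an external repository into the current one, keeping its history.
# (Remote name, URL, and target prefix are hypothetical.)
git remote add incoming https://example.org/keystone-lite.git
git fetch incoming

# Record incoming/master as a parent of the next commit without
# touching our working tree yet...
git merge -s ours --no-commit --allow-unrelated-histories incoming/master
# ...then place the imported tree under a subdirectory of ours.
git read-tree --prefix=import/ -u incoming/master
git commit -m "Import external project with full history"
```

After this, `git log` traverses both histories, which is exactly why the operation is delicate: every imported commit becomes part of the project's permanent history.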

 However, reading the proposal, it occurred to me that it's pretty clear
 that we expect these tools to be able to operate outside of the Neutron
 project itself, to even be releasable on their own.  Why not just stick
 with that?  In other words, the goal of this process should be to create
 separate projects with their own development lifecycle that will
 continue indefinitely, rather than expecting the code itself to merge
 into the neutron repo.

 This has advantages in simplifying workflow and making it more
 consistent.  Plus it builds on known integration mechanisms like APIs
 and python project versions.

 But more importantly, it helps scale the neutron project itself.  I
 think that a focused neutron core upon which projects like these can
 build in a reliable fashion would be ideal.


 Despite replies to you saying that certain branches of Neutron development
 work are special unicorns, I wanted to say I *fully* support your above
 statement.

 Best,
 -jay







-- 
Kevin Benton


Re: [openstack-dev] [Neutron] Author tags

2014-08-27 Thread Mark McClain

On Aug 27, 2014, at 2:27 PM, Mandeep Dhami dh...@noironetworks.com wrote:


These should all be comment-only changes, so there should be no functional 
impact. While I agree that it is late for J3, IMO this is the type of change 
(minor/comment only) that should be OK for J4 rather than waiting for Kilo.


We do not have a J4 milestone :)  That said, we’ve accumulated a bunch of 
rebase-generating, very low-risk code cleanup items.  I’ve been maintaining a 
list of these for team consideration as the final item during the freeze period.

mark


On Wed, Aug 27, 2014 at 7:25 AM, Kyle Mestery mest...@mestery.com wrote:
On Wed, Aug 27, 2014 at 8:24 AM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 A few cycles ago the Nova group decided to remove @author from copyright
 statements. This is due to the fact that this information is stored in git.
 After adding a similar hacking rule to Neutron it has stirred up some
 debate.
 Does anyone have any reason for us not to go ahead with
 https://review.openstack.org/#/c/112329/?
 Thanks
 Gary

My main concern is around landing a change like this during feature
freeze week; I think at best this should land at the start of Kilo.
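For what it's worth, the rationale for the rule is that the information the @author tags carry is exactly what git already records per file. A quick sketch of retrieving it (the path is illustrative):

```shell
# List the distinct authors who have touched a file -- the same data
# the in-code @author tags duplicated. (The path is an example only.)
git log --format='%aN <%aE>' -- neutron/plugins/ml2/plugin.py | sort -u
```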



Re: [openstack-dev] [infra] [neutron] [tc] Neutron Incubator workflow

2014-08-27 Thread Kevin Benton
I agree that for components isolated to one service, or anything along those
lines where there is a clear plugin point in Neutron, it might make sense
to separate them permanently. However, at that point, why even bother with
the Neutron incubator when a new project can be started?

The feature branch idea sounds interesting for the one-time big
experimental changes. Are there any other projects that do this right now?


On Wed, Aug 27, 2014 at 12:30 PM, James E. Blair cor...@inaugust.com
wrote:

 Kevin Benton blak...@gmail.com writes:

  From what I understand, the intended projects for the incubator can't
  operate without neutron because they are just extensions/plugins/drivers.

 I could have phrased that better.  What I meant was that they could
 operate without being actually in the Neutron repo, not that they could
 not operate without Neutron itself.

 The proposal for the incubator is that extensions be developed outside
 of the Neutron repo.  My proposed refinement is that they stay outside
 of the Neutron repo.  They live their entire lives as extension modules
 in separate projects.

  For example, if the DVR modifications to the reference L3 plugin
  weren't already being developed in the tree, DVR could have been
  developed in the incubator and then merged into Neutron once the bugs
  were ironed out, so a huge string of Gerrit patches didn't need to be
  tracked. If that had happened, would it make sense to keep the L3
  plugin as a completely separate project, or merge it? I understand
  this is the approach the load balancer folks took by making Octavia a
  separate project, but I think it can still operate on its own, whereas
  the reference L3 plugin (and many of the other incubator projects) are
  just classes that expect to be able to make core Neutron calls.

 The list of Juno/Kilo candidates doesn't seem to have projects that are
 quite so low-level.

 If a feature is going to become part of the neutron core, then it should
 be developed in the neutron repository.  If we need a place to land code
 that isn't master, it's actually far easier to just use a feature branch
 on the neutron repo.  Commits can land there as needed, master can be
 periodically merged into it, and when the feature is ready, the feature
 branch can be merged into master.

 I think that between those two options (spinning out components that
 are high-level enough not to need deep integration in the neutron
 core, and using feature branches for large experimental changes to the
 core itself) we can handle the problems the incubator repo is intended
 to address.

 -Jim
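A minimal local sketch of the feature-branch flow Jim describes (the branch name is hypothetical; on an actual project the branch would be created server-side by infra and commits would land through the normal review process):

```shell
# Create a long-lived feature branch off master (name is hypothetical)
git checkout -b feature/dvr master
# ... commits land on feature/dvr as needed ...

# Periodically keep the branch current with master
git merge master

# When the feature is ready, merge it back into master
git checkout master
git merge feature/dvr
```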





-- 
Kevin Benton

