[openstack-dev] [nova] Request non-priority feature freeze exception for VIF_TYPE_TAP

2015-02-25 Thread Neil Jerram
Although we are past the non-priority deadline, I have been encouraged
to request this late exception for Project Calico's spec and code adding
VIF_TYPE_TAP to Nova.

https://review.openstack.org/#/c/130732/ (spec)
https://review.openstack.org/#/c/146914/ (code)

Why might you consider this?

- It is a small, self-contained change, of similar (or smaller) scope than
  other non-priority VIF drivers that have been granted exceptions
  during the last two weeks.

- It is the basis for a useful class of future OpenStack networking
  experimentation that could be pursued without requiring any further
  Nova changes - hence I think it should reduce the future workload on
  Nova reviewers.  (This has been discussed in the openstack-dev thread
  starting at
  http://lists.openstack.org/pipermail/openstack-dev/2014-December/052129.html.)

- The spec was approved for Kilo (and is still merged AFAICS in the
  nova-specs repository), and the code was constructively and positively
  reviewed by several reviewers, if not by cores, leading in particular
  to improvements in its unit test coverage.

Please let me know what you think.  Apart from a rebase that I expect to
be trivial, this enhancement is good to go as soon as anyone gives the
word.

Regards,
Neil



Re: [openstack-dev] [nova] bp serial-ports *partly* implemented?

2015-02-25 Thread Sahid Orentino Ferdjaoui
On Tue, Feb 24, 2015 at 04:19:39PM +0100, Markus Zoeller wrote:
 Sahid Orentino Ferdjaoui sahid.ferdja...@redhat.com wrote on 02/23/2015 
 11:13:12 AM:
 
  From: Sahid Orentino Ferdjaoui sahid.ferdja...@redhat.com
  To: OpenStack Development Mailing List (not for usage questions) 
  openstack-dev@lists.openstack.org
  Date: 02/23/2015 11:17 AM
  Subject: Re: [openstack-dev] [nova] bp serial-ports *partly* 
 implemented?
  
  On Fri, Feb 20, 2015 at 06:03:46PM +0100, Markus Zoeller wrote:
   It seems to me that the blueprint serial-ports[1] didn't implement
   everything which was described in its spec. If one of you could have a 
 
   look at the following examples and help me to understand if these 
   observations are right/wrong that would be great.
   
   Example 1:
   The flavor provides the extra_spec hw:serial_port_count and the 
 image
   the property hw_serial_port_count. This is used to decide how many
   serial devices (with different ports) should be defined for an 
 instance.
   But the libvirt driver returns always only the *first* defined port 
   (see method get_serial_console [2]). I didn't find anything in the 
   code which uses the other defined ports.
  
  The method you are referencing [2] returns the first properly bound
  and not-yet-connected port in the domain.
 
 Is that the intention behind the code ``mode='bind'`` in said method?
 In my test I created an instance with 2 ports, using the default cirros
 image and a flavor which has the hw:serial_port_count=2 property. 
 The domain XML has this snippet:
 <serial type="tcp">
   <source host="127.0.0.1" mode="bind" service="1"/>
 </serial>
 <serial type="tcp">
   <source host="127.0.0.1" mode="bind" service="10001"/>
 </serial>
 My expectation was to be able to connect to the same instance via both 
 ports at the same time. But the second connection is blocked as long 
 as the first connection is established. A debug trace in the code shows 
 that both times the first port is returned. IOW I was not able to create
 a scenario where the *second* port was returned and that confuses me
 a little. Any thoughts about this?

So we probably have a bug here; can you at least report it in Launchpad?
We need to see whether the problem comes from the Nova code, from a
misinterpretation of libvirt's behavior, or from a bug in libvirt itself.

In the report, could you also paste the domain XML captured while a
session is connected on the first port?
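
For reference, the scan in question is roughly of the following shape. This is
only an illustrative sketch (not the actual Nova code), the helper name is
made up, and it omits the "not connected" check that Nova performs separately:

```
from xml.etree import ElementTree


def tcp_bind_serial_ports(domain_xml):
    """Yield (host, service) for each TCP serial device the domain binds.

    Illustrative helper only: nova's get_serial_console() additionally
    skips ports that already have a connected client before returning
    the first match.
    """
    root = ElementTree.fromstring(domain_xml)
    for serial in root.findall('./devices/serial'):
        if serial.get('type') != 'tcp':
            continue
        source = serial.find('source')
        if source is not None and source.get('mode') == 'bind':
            yield source.get('host'), source.get('service')
```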

  When defining the domain, '{hw_|hw:}serial_port_count' is taken into
  account, as you can see:

  https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L3702

  (The method looks to have been refactored and includes several parts
  not related to the serial console.)
 
   Example 2:
   If a user is already connected, then reject the attempt of a second
   user to access the console, but have an API to forcibly disconnect
   an existing session. This would be particularly important to cope
   with hung sessions where the client network went away before the
   console was cleanly closed. [1]
   I couldn't find the described API. If there is a hung session one
   cannot gracefully recover from it. This could lead to a bad UX in
   Horizon's serial console client implementation [3].
  
  This API is not implemented; I will see what I can do on that
  part. Thanks for this.
 
 Sounds great, thanks for that! Please keep me in the loop when 
 reviews or help with coding are needed.
 
   [1] nova bp serial-ports;
   https://github.com/openstack/nova-specs/blob/master/specs/juno/implemented/serial-ports.rst
   [2] libvirt driver; return only first port;
   https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2518
   [3] horizon bp serial-console;
   https://blueprints.launchpad.net/horizon/+spec/serial-console
   
   
   

[openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
I’m writing a plan/script to benchmark OVS+OF(CT) vs OVS+LB+iptables+ipsets,  
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.

The plan is to keep all of it on a single multicore host and take all the
measurements within it, to make sure we just measure the difference due to
the software layers.

Suggestions or ideas on what to measure are welcome; there’s an initial
draft here:

https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct  
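
As a very rough illustration of the kind of measurement involved, the toy
harness below drives iperf3 between two network namespaces; in the actual
experiment the interfaces would instead be plugged into OVS with either the
OpenFlow or the LB+iptables+ipsets filtering stack in the path. The tool,
interface names and addresses here are arbitrary choices, not part of the
draft linked above:

```
import json
import subprocess


def sh(cmd):
    """Run a shell command and return its stdout; raises on failure."""
    return subprocess.check_output(cmd, shell=True)


def setup():
    # Two namespaces joined by a veth pair. In the real experiment the
    # interfaces would be attached to OVS so that only the filtering
    # layer differs between runs.
    sh('ip netns add bench-a && ip netns add bench-b')
    sh('ip link add veth-a type veth peer name veth-b')
    sh('ip link set veth-a netns bench-a && ip link set veth-b netns bench-b')
    sh('ip netns exec bench-a ip addr add 10.250.0.1/24 dev veth-a')
    sh('ip netns exec bench-b ip addr add 10.250.0.2/24 dev veth-b')
    sh('ip netns exec bench-a ip link set veth-a up')
    sh('ip netns exec bench-b ip link set veth-b up')


def throughput_gbits(seconds=30):
    sh('ip netns exec bench-a iperf3 -s -D')  # server side, daemonized
    out = sh('ip netns exec bench-b iperf3 -c 10.250.0.1 -t %d -J' % seconds)
    return json.loads(out)['end']['sum_received']['bits_per_second'] / 1e9


if __name__ == '__main__':
    setup()
    print('TCP throughput: %.2f Gbit/s' % throughput_gbits())
```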

Miguel Ángel Ajo



[openstack-dev] [api][all] - Openstack.error common library

2015-02-25 Thread Eugeniya Kudryashova
Hi, stackers!

As was suggested in thread [1], using an HTTP header would be a good solution
for communicating common/standardized OpenStack API error codes.

So I’d like to begin working on a common library which will collect all
OpenStack HTTP API errors and assign them string error codes. My suggested
name for the library is openstack.error, but please feel free to propose
something different.
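
To make the idea a bit more tangible, here is a rough sketch of what such a
library could provide: a registry of string error codes plus a WSGI middleware
that surfaces them in a response header. The mapping, the environ key and the
header name are purely illustrative, not a proposed API:

```
ERROR_CODES = {
    # exception class name -> stable, documented string error code
    'InstanceNotFound': 'compute.instance.not_found',
    'OverQuota': 'common.over_quota',
    'PolicyNotAuthorized': 'common.policy_not_authorized',
}


class ErrorCodeMiddleware(object):
    """Attach a standardized error-code header to non-2xx/3xx responses."""

    def __init__(self, app, header='X-OpenStack-Error-Code'):
        self.app = app
        self.header = header

    def __call__(self, environ, start_response):
        def _start_response(status, headers, exc_info=None):
            if not status.startswith(('2', '3')):
                code = environ.get('openstack.error_code', 'common.unknown')
                headers = list(headers) + [(self.header, code)]
            return start_response(status, headers, exc_info)

        return self.app(environ, _start_response)
```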

The other question is where we should host such a project: in openstack
or stackforge, or maybe oslo-incubator? I think such a project would be too
massive (it has to deal with lots and lots of exceptions) to live as part of
oslo, so I propose developing the project on Stackforge and then eventually
having it moved into the openstack/ code namespace when the other projects
begin using the library.

Let me know your feedback, please!


[1] -
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055549.html


Re: [openstack-dev] [nova][vmware][ironic] Configuring active/passive HA Nova compute

2015-02-25 Thread Matthew Booth
On 25/02/15 11:51, Radoslav Gerganov wrote:
 On 02/23/2015 03:18 PM, Matthew Booth wrote:
 On 23/02/15 12:13, Gary Kotton wrote:


 On 2/23/15, 2:05 PM, Matthew Booth mbo...@redhat.com wrote:

 On 20/02/15 11:48, Matthew Booth wrote:
 Gary Kotton came across a doozy of a bug recently:

 https://bugs.launchpad.net/nova/+bug/1419785

 In short, when you start a Nova compute, it will query the driver for
  instances and compare that against the expected host of the instance
  according to the DB. If the driver is reporting an instance the DB
 thinks is on a different host, it assumes the instance was evacuated
 while Nova compute was down, and deletes it on the hypervisor.
 However,
 Gary found that you trigger this when starting up a backup HA node
 which
 has a different `host` config setting. i.e. You fail over, and the
 first
 thing it does is delete all your instances.

 Gary and I both agree on a couple of things:

 1. Deleting all your instances is bad
 2. HA nova compute is highly desirable for some drivers

 We disagree on the approach to fixing it, though. Gary posted this:

 https://review.openstack.org/#/c/154029/

 I've already outlined my objections to this approach elsewhere, but to
 summarise I think this fixes 1 symptom of a design problem, and leaves
 the rest untouched. If the value of nova compute's `host` changes,
 then
 the assumption that instances associated with that compute can be
 identified by the value of instance.host becomes invalid. This
 assumption is pervasive, so it breaks a lot of stuff. The worst one is
 _destroy_evacuated_instances(), which Gary found, but if you scan
 nova/compute/manager for the string 'self.host' you'll find lots of
 them. For example, all the periodic tasks are broken, including image
 cache management, and the state of ResourceTracker will be unusual.
 Worse, whenever a new instance is created it will have a different
 value
 of instance.host, so instances running on a single hypervisor will
 become partitioned based on which nova compute was used to create
 them.

 In short, the system may appear to function superficially, but it's
 unsupportable.

 I had an alternative idea. The current assumption is that the `host`
 managing a single hypervisor never changes. If we break that
 assumption,
 we break Nova, so we could assert it at startup and refuse to start if
 it's violated. I posted this VMware-specific POC:

 https://review.openstack.org/#/c/154907/

 However, I think I've had a better idea. Nova creates ComputeNode
 objects for its current configuration at startup which, amongst other
 things, are a map of host:hypervisor_hostname. We could assert when
 creating a ComputeNode that hypervisor_hostname is not already
 associated with a different host, and refuse to start if it is. We
 would
 give an appropriate error message explaining that this is a
 misconfiguration. This would prevent the user from hitting any of the
 associated problems, including the deletion of all their instances.

 I have posted a patch implementing the above for review here:

 https://review.openstack.org/#/c/158269/
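
For illustration only, the kind of start-up assertion being described is
roughly of this shape. This is a sketch against a hypothetical SQLAlchemy
ComputeNode model and a generic exception, not the code in the review above:

```
def assert_single_host_per_hypervisor(session, my_host, hypervisor_hostname):
    """Refuse to start nova-compute if another 'host' already registered
    this hypervisor_hostname.

    Hypothetical sketch: assumes a SQLAlchemy ComputeNode model with
    'host' and 'hypervisor_hostname' columns.
    """
    other = session.query(ComputeNode.host).filter(
        ComputeNode.hypervisor_hostname == hypervisor_hostname,
        ComputeNode.host != my_host).first()
    if other:
        raise RuntimeError(
            "Hypervisor %s is already managed by host %s (this host is %s); "
            "refusing to start to avoid deleting instances that were not "
            "really evacuated." % (hypervisor_hostname, other.host, my_host))
```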

 I have to look at what you have posted. I think that this topic is
 something that we should speak about at the summit and this should fall
 under some BP and well defined spec. I really would not like to see
 existing installations being broken if and when this patch lands. It may
 also affect Ironic as it works on the same model.

 This patch will only affect installations configured with multiple
 compute hosts for a single hypervisor. These are already broken, so this
 patch will at least let them know if they haven't already noticed.

 It won't affect Ironic, because they configure all compute hosts to have
 the same 'host' value. An Ironic user would only notice this patch if
 they accidentally misconfigured it, which is the intended behaviour.

 Incidentally, I also support more focus on the design here. Until we
 come up with a better design, though, we need to do our best to prevent
 non-trivial corruption from a trivial misconfiguration. I think we need
 to merge this, or something like it, now and still have a summit
 discussion.

 Matt

 
 Hi Matt,
 
 I already posted a comment on your patch but I'd like to reiterate here
 as well.  Currently the VMware driver is using the cluster name as
 hypervisor_hostname which is a problem because you can have different
 clusters with the same name.  We already have a critical bug filed for
 this:
 
 https://bugs.launchpad.net/nova/+bug/1329261
 
 There was an attempt to fix this by using a combination of vCenter UUID
 + cluster_name but it was rejected because this combination was not
 considered a 'real' hostname.  I think that if we go for a DB schema
 change we can fix both issues by renaming hypervisor_hostname to
 hypervisor_id and make it unique.  What do you think?

Well, I think hypervisor_id makes more sense than hypervisor_hostname.
The latter is confusing. However, I'd prefer not to 

Re: [openstack-dev] [Neutron] db-level locks, non-blocking algorithms, active/active DB clusters and IPAM

2015-02-25 Thread Eugene Nikanorov
Thanks for putting this all together, Salvatore.

I just want to comment on this suggestion:
 1) Move the allocation logic out of the driver, thus making IPAM an
independent service. The API workers will then communicate with the IPAM
service through a message bus, where IP allocation requests will be
naturally serialized

Right now port creation is already a distributed process involving several
parties.
Adding one more actor outside Neutron, communicated with over a message
bus just to serialize requests, makes me think of how terrible
troubleshooting could be under load, when communication over the MQ slows
down or is interrupted.
Not to mention that such a service would be a SPoF and a contention point.
So, this of course could be an option, but personally I'd not like to see
it as a default.

Thanks,
Eugene.

On Wed, Feb 25, 2015 at 4:35 AM, Robert Collins robe...@robertcollins.net
wrote:

 On 24 February 2015 at 01:07, Salvatore Orlando sorla...@nicira.com
 wrote:
  Lazy-Stacker summary:
 ...
  In the medium term, there are a few things we might consider for
 Neutron's
  built-in IPAM.
  1) Move the allocation logic out of the driver, thus making IPAM an
  independent service. The API workers will then communicate with the IPAM
  service through a message bus, where IP allocation requests will be
  naturally serialized
  2) Use 3-party software as dogpile, zookeeper but even memcached to
  implement distributed coordination. I have nothing against it, and I
 reckon
  Neutron can only benefit for it (in case you're considering of arguing
 that
  it does not scale, please also provide solid arguments to support your
  claim!). Nevertheless, I do believe API request processing should proceed
  undisturbed as much as possible. If processing an API requests requires
  distributed coordination among several components then it probably means
  that an asynchronous paradigm is more suitable for that API request.

 So data is great. It sounds like as long as we have an appropriate
 retry decorator in place, that write locks are better here, at least
 for up to 30 threads. But can we trust the data?

 One thing I'm not clear on is the SQL statement count.  You say 100
 queries for A-1 with a time on Galera of 0.06*1.2=0.072 seconds per
 allocation? So is that 2 queries over 50 allocations over 20 threads?

 I'm not clear on what the request parameter in the test json files
 does, and AFAICT your threads each do one request. As such I
 suspect that you may be seeing less concurrency - and thus contention
 - than real-world setups where APIs are deployed to run worker
 processes in separate processes and requests are coming in
 willy-nilly. The size of each algorithm's workload is so small that it's
 feasible to imagine the thread completing before the GIL bytecode
 check interval triggers (see
 https://docs.python.org/2/library/sys.html#sys.setcheckinterval) and
 the GIL's lack of fairness would exacerbate that.

 If I may suggest:
  - use multiprocessing or some other worker-pool approach rather than
 threads
  - or set setcheckinterval down low (e.g. to 20 or something)
  - do multiple units of work (in separate transactions) within each
 worker, aim for e.g. 10 seconds of work or some such.
  - log with enough detail that we can report on the actual concurrency
 achieved. E.g. log the time in us when each transaction starts and
 finishes, then we can assess how many concurrent requests were
 actually running.

 If the results are still the same - great, full steam ahead. If not,
 well let's revisit :)
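
A minimal sketch of such a worker-pool harness, with per-transaction
start/finish timestamps so the concurrency actually achieved can be reported
afterwards; run_one_allocation is a placeholder for the A-1/A-2/A-3 unit of
work, not code from the tests above:

```
import multiprocessing
import time


def run_one_allocation():
    """Placeholder for a single IP-allocation transaction (A-1/A-2/A-3)."""
    time.sleep(0.01)


def worker(worker_id, iterations, results):
    for _ in range(iterations):
        start = time.time()
        run_one_allocation()
        results.put((worker_id, start, time.time()))


def run(workers=20, iterations=100):
    results = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker,
                                     args=(w, iterations, results))
             for w in range(workers)]
    for p in procs:
        p.start()
    # Drain before join so the queue's feeder threads can flush.
    samples = [results.get() for _ in range(workers * iterations)]
    for p in procs:
        p.join()
    # Overlapping (start, end) intervals show how many transactions were
    # actually in flight at once, i.e. the concurrency really achieved.
    return samples


if __name__ == '__main__':
    print(len(run()), 'samples collected')
```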

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud



Re: [openstack-dev] [all] Replace eventlet with asyncio

2015-02-25 Thread Clint Byrum
Excerpts from Victor Stinner's message of 2015-02-25 02:12:05 -0800:
 Hi,
 
  I also just put up another proposal to consider:
  https://review.openstack.org/#/c/156711/
  Sew over eventlet + patching with threads
 
 My asyncio spec is unclear about WSGI, I just wrote
 
 The spec doesn't change OpenStack components running WSGI servers
 like nova-api. The specific problem of using asyncio with WSGI will
 need a separated spec.
 
 Joshua's threads spec proposes:
 
 I would prefer to let applications such as apache or others handle
 the request as they see fit and just make sure that our applications
 provide wsgi entrypoints that are stateless and can be horizontally
 scaled as needed (aka remove all eventlet and thread ... semantics
 and usage from these entrypoints entirely).
 
 Keystone wants to do the same:
 https://review.openstack.org/#/c/157495/
 Deprecate Eventlet Deployment in favor of wsgi containers
 
 This deprecates Eventlet support in documentation and on invocation
 of keystone-all.
 
 I agree: we don't need concurrency in the code handling a single HTTP 
 request: use blocking function calls. You should rely on highly efficient 
 HTTP servers like Apache, nginx, werkzeug, etc. There is a lot of choice, 
 just pick your favorite server ;-) Each HTTP request is handled in a thread. 
 You can use N processes with each process running M threads. It's a common 
 architecture design which is efficient.
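
As a toy illustration of that "N processes, each running M threads" model
(Python 3 module names; a real deployment would of course sit behind Apache
mod_wsgi, uwsgi or similar, as suggested above), a pre-fork threaded WSGI
server might be sketched like this on Unix:

```
import os
from socketserver import ThreadingMixIn
from wsgiref.simple_server import make_server, WSGIServer


class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    """One thread per request within each worker process."""
    daemon_threads = True


def app(environ, start_response):
    # Stand-in for a blocking, stateless API endpoint.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']


def main(workers=4):
    server = make_server('0.0.0.0', 8080, app,
                         server_class=ThreadingWSGIServer)
    # Pre-fork: children inherit the listening socket and the kernel
    # spreads incoming connections across them (Unix only).
    for _ in range(workers - 1):
        if os.fork() == 0:
            break
    server.serve_forever()


if __name__ == '__main__':
    main()
```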
 
 For database accesses, just use regular blocking calls (no need to modify 
 SQLAlchemy). According to Mike Bayer's benchmark (*), it's even the fastest 
 method if your code is database intensive. You may share a pool of database 
 connections between the threads, but a connection should only be used by a 
 single thread.
 
 (*) http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/
 
 I don't think that we need a spec if everybody already agrees on the design :-)
 

+1

This leaves a few pieces of python which don't operate via HTTP
requests. There are likely more, but these come to mind:

* Nova conductor
* Nova scheduler/Gantt
* Nova compute
* Neutron agents
* Heat engine

I don't have a good answer for them, but my gut says none of these
gets as crazy with concurrency as the API services which have to talk
to all the clients with their terrible TCP stacks, and awful network
connectivity. The list above is always just talking on local buses, and
thus can likely just stay on eventlet, or use a multiprocessing model to
take advantage of local CPUs too. I know that for Heat's engine we saw quite
an improvement in performance just by running multiple engines.



Re: [openstack-dev] [nova] Request non-priority feature freeze exception for VIF_TYPE_TAP

2015-02-25 Thread Daniel P. Berrange
On Wed, Feb 25, 2015 at 11:46:05AM +, Neil Jerram wrote:
 Although we are past the non-priority deadline, I have been encouraged
 to request this late exception for Project Calico's spec and code adding
 VIF_TYPE_TAP to Nova.

I'm afraid you're also past the freeze exception request deadline. The
meeting to decide upon exception requests took place a week and a half
ago now.

So while I'd probably support inclusion of your new VIF driver to libvirt,
you are out of luck for Kilo in terms of the process currently being
applied to Nova.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-25 Thread Jeremy Stanley
On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
[...]
 Run 2) We removed glusterfs backend, so Cinder was configured with
 the default storage backend i.e. LVM. We re-created the OOM here
 too
 
 So that proves that glusterfs doesn't cause it, as its happening
 without glusterfs too.

Well, if you re-ran the job on the same VM then the second result is
potentially contaminated. Luckily this hypothesis can be confirmed
by running the second test on a fresh VM in Rackspace.

 The VM (104.239.136.99) is now in such a bad shape that existing
 ssh sessions are no longer responding for a long long time now,
 tho' ping works. So need someone to help reboot/restart the VM so
 that we can collect the logs for records. Couldn't find anyone
 during apac TZ to get it reboot.
[...]

According to novaclient that instance was in a shutoff state, and
so I had to nova reboot --hard to get it running. Looks like it's
back up and reachable again now.

 So from the above we can conclude that the tests are running fine
 on hpcloud and not on rax provider. Since the OS (centos7) inside
 the VM across provider is same, this now boils down to some issue
 with rax provider VM + centos7 combination.

This certainly seems possible.

 Another data point I could gather is:
     The only other centos7 job we have is
 check-tempest-dsvm-centos7 and it does not run full tempest
 looking at the job's config it only runs smoke tests (also
 confirmed the same with Ian W) which i believe is a subset of
 tests only.

Correct, so if we confirm that we can't successfully run tempest
full on CentOS 7 in both of our providers yet, we should probably
think hard about the implications on yesterday's discussion as to
whether to set the smoke version gating on devstack and
devstack-gate changes.

 So that brings to the conclusion that probably cinder-glusterfs CI
 job (check-tempest-dsvm-full-glusterfs-centos7) is the first
 centos7 based job running full tempest tests in upstream CI and
 hence is the first to hit the issue, but on rax provider only

Entirely likely. As I mentioned last week, we don't yet have any
voting/gating jobs running on the platform as far as I can tell, so
it's still very much in an experimental stage.
-- 
Jeremy Stanley



Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Kyle Mestery
On Wed, Feb 25, 2015 at 7:52 AM, Miguel Ángel Ajo majop...@redhat.com
wrote:

  I’m writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.

 The plan is to keep all of it in a single multicore host, and make all the
 measures
 within it, to make sure we just measure the difference due to the software
 layers.

 Suggestions or ideas on what to measure are welcome, there’s an initial
 draft here:

 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

 This is a good idea Miguel, thanks for taking this on! Might I suggest we
add this document into the neutron tree once you feel it's ready? Having it
exist there may make a lot of sense for people who want to understand your
performance test setup and who may want to run it in the future.

Thanks,
Kyle


 Miguel Ángel Ajo




Re: [openstack-dev] [cinder] Resuming of workflows/tasks

2015-02-25 Thread Kekane, Abhishek
Hi Michal,

 1. Need of distributed lock to avoid same task being resumed by two instances 
 of a service. Do we need tooz to do that or is there any other solution?

The Tooz library is already being used in Ceilometer to manage group membership 
(https://github.com/openstack/ceilometer/blob/master/ceilometer/coordination.py).
Tooz also supports a MySQL driver for distributed locking, so we can use this as 
we are already storing TaskFlow details in a MySQL database.
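
For illustration, taking such a lock with Tooz might look roughly like the
sketch below; the backend URL, member id and lock name are placeholders, not
a proposed configuration:

```
from tooz import coordination

flow_uuid = 'example-flow-uuid'  # placeholder: the flow we want to resume

coordinator = coordination.get_coordinator(
    'mysql://cinder:secret@127.0.0.1/cinder', b'cinder-volume-host-1')
coordinator.start()

lock = coordinator.get_lock(('resume-flow-' + flow_uuid).encode())
if lock.acquire(blocking=False):
    try:
        # Resume the persisted TaskFlow flow here; any other service
        # instance racing for the same flow fails to get the lock.
        pass
    finally:
        lock.release()

coordinator.stop()
```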

Thank You,

Abhishek Kekane


-Original Message-
From: Dulko, Michal [mailto:michal.du...@intel.com] 
Sent: 24 February 2015 18:12
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [cinder] Resuming of workflows/tasks

Hi all,

I was working on spec [1] and prototype [2] to make Cinder able to resume 
workflows in case of server or service failure. The problem of lost requests and 
resources left in unresolved states in case of failure was raised at the 
Paris Summit [3].

What I was able to prototype was to resume running tasks locally after service 
restart using persistence API provided by TaskFlow. However core team agreed 
that we should aim at resuming workflows globally even by other service 
instances (which I think is a good decision).

There are few major problems blocking this approach:

1. Need of distributed lock to avoid same task being resumed by two instances 
of a service. Do we need tooz to do that or is there any other solution?
2. Are we going to step out from using TaskFlow? Such idea came up at the 
mid-cycle meetup, what's the status of it? Without TaskFlow's persistence 
implementing task resumptions would be a lot more difficult.
3. In the case of the cinder-api service we're unable to monitor its state using 
the servicegroup API. Do we have alternatives here for deciding whether a particular 
workflow being processed by cinder-api has been abandoned?

As this topic is deferred to Liberty release I want to start discussion here to 
be continued at the summit.

[1] https://review.openstack.org/#/c/147879/
[2] https://review.openstack.org/#/c/152200/
[3] https://etherpad.openstack.org/p/kilo-crossproject-ha-integration



Re: [openstack-dev] [Neutron] db-level locks, non-blocking algorithms, active/active DB clusters and IPAM

2015-02-25 Thread Clint Byrum
Excerpts from Salvatore Orlando's message of 2015-02-23 04:07:38 -0800:
 Lazy-Stacker summary:
 I am doing some work on Neutron IPAM code for IP Allocation, and I need to
 found whether it's better to use db locking queries (SELECT ... FOR UPDATE)
 or some sort of non-blocking algorithm.
 Some measures suggest that for this specific problem db-level locking is
 more efficient even when using multi-master DB clusters, which kind of
 counters recent findings by other contributors [2]... but also backs those
 from others [7].
 

Thanks Salvatore, the story and data you produced is quite interesting.

 
 With the test on the Galera cluster I was expecting a terrible slowdown in
 A-1 because of deadlocks caused by certification failures. I was extremely
 disappointed that the slowdown I measured however does not make any of the
 other algorithms a viable alternative.
 On the Galera cluster I did not run extensive collections for A-2. Indeed
 primary key violations seem to triggers db deadlock because of failed write
 set certification too (but I have not yet tested this).
 I run tests with 10 threads on each node, for a total of 30 workers. Some
 results are available at [15]. There was indeed a slow down in A-1 (about
 20%), whereas A-3 performance stayed pretty much constant. Regardless, A-1
 was still at least 3 times faster than A-3.
 As A-3's queries are mostly select (about 75% of them) use of caches might
 make it a lot faster; also the algorithm is probably inefficient and can be
 optimised in several areas. Still, I suspect it can be made faster than
 A-1. At this stage I am leaning towards adoption db-level-locks with
 retries for Neutron's IPAM. However, since I never trust myself, I wonder
 if there is something important that I'm neglecting and will hit me down
 the road.
 

The thing is, nobody should actually be running blindly with writes
being sprayed out to all nodes in a Galera cluster. So A-1 won't slow
down _at all_ if you just use Galera as an ACTIVE/PASSIVE write master.
It won't scale any worse for writes, since all writes go to all nodes
anyway. For reads we can very easily start to identify hot-spot reads
that can be sent to all nodes and are tolerant of a few seconds latency.

 In the medium term, there are a few things we might consider for Neutron's
 built-in IPAM.
 1) Move the allocation logic out of the driver, thus making IPAM an
 independent service. The API workers will then communicate with the IPAM
 service through a message bus, where IP allocation requests will be
 naturally serialized

This would rely on said message bus guaranteeing ordered delivery. That
is going to scale far worse, and be more complicated to maintain, than
Galera with a few retries on failover.

 2) Use 3-party software as dogpile, zookeeper but even memcached to
 implement distributed coordination. I have nothing against it, and I reckon
 Neutron can only benefit for it (in case you're considering of arguing that
 it does not scale, please also provide solid arguments to support your
 claim!). Nevertheless, I do believe API request processing should proceed
 undisturbed as much as possible. If processing an API requests requires
 distributed coordination among several components then it probably means
 that an asynchronous paradigm is more suitable for that API request.
 

If we all decide that having a load balancer sending all writes and
reads to one Galera node is not acceptable for some reason, then we
should consider a distributed locking method that might scale better,
like ZK/etcd or the like. But I think just figuring out why we want to
send all writes and reads to all nodes is a better short/medium term
goal.



Re: [openstack-dev] [manuals] Training guide issue

2015-02-25 Thread Andreas Jaeger
On 02/25/2015 03:16 AM, Ajay Kalambur (akalambu) wrote:
 Hi
 I am trying to just get started with  openstack commits and wanted to
 start by fixing some documentation bugs. I assigned 3 bugs which seem to
 be in the same file/area

Welcome!

Let's discuss this on the openstack-docs mailing list, training-guides
is part of the documentation project.

 https://bugs.launchpad.net/openstack-training-guides/+bug/1380153
 https://bugs.launchpad.net/openstack-training-guides/+bug/1380155
 https://bugs.launchpad.net/openstack-training-guides/+bug/1380156
 
 
 The file seems to be located under the openstack-manuals branch since I
 found this xml file there

These files are in the training-guides repository.

 But the bug seems to be under Openstack Training guides which seems to
 be a different git repo with this file not present there
 
 Can someone help me understand whats going on here?

Let's ask the training guides folks on openstack-docs for further questions,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Jennifer Guild, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [nova] Shared storage support

2015-02-25 Thread Gary Kotton
Hi,
There is an issue with the statistics reported when a nova compute driver has 
shared storage attached. That is, there may be more than one compute node 
reporting on the shared storage. A patch has been posted - 
https://review.openstack.org/#/c/155184. The direction there was to add an extra 
parameter to the dictionary that the driver returns for resource 
utilization. The DB statistics calculation would take this into account and 
adjust its calculations accordingly.
I am not really in favor of the approach for a number of reasons:

  1.  Over the last few cycles we have been moving towards better-defined data 
structures and models. More specifically, we have been moving to object support.
  2.  A change in the DB layer may break this support.
  3.  We are trying to version the various blobs of data that are passed 
around.

My thinking is that the resource tracker should be aware that the compute node 
has shared storage, and that the changes should be made there. I do not think the 
compute node should rely on changes made in the DB layer - that layer may be on a 
different host and may even run a different version.

I understand that this is a high or critical bug, but I think that we need to 
discuss it further and try to arrive at a more robust model.
Thanks
Gary


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Kyle Mestery
On Wed, Feb 25, 2015 at 8:49 AM, Miguel Ángel Ajo majop...@redhat.com
wrote:

 On Wednesday, 25 de February de 2015 at 15:38, Kyle Mestery wrote:

 On Wed, Feb 25, 2015 at 7:52 AM, Miguel Ángel Ajo majop...@redhat.com
 wrote:

  I’m writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.

 The plan is to keep all of it in a single multicore host, and make all the
 measures
 within it, to make sure we just measure the difference due to the software
 layers.

 Suggestions or ideas on what to measure are welcome, there’s an initial
 draft here:

 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

 This is a good idea Miguel, thanks for taking this on! Might I suggest we
 add this document into the neutron tree once you feel it's ready? Having it
 exist there may make a lot of sense for people who want to understand your
 performance test setup and who may want to run it in the future.


 That’s actually a good idea, so we can use gerrit to review and gather
 comments.
 Where should I put the document?

 in /doc/source we have devref with its own index in rst.

 I think it makes sense to add a testing directory there perhaps. But
failing that, even having it in devref would be a good idea.


 Best,
 Miguel Ángel.



Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Brian Haley
On 02/25/2015 08:52 AM, Miguel Ángel Ajo wrote:
 I’m writing a plan/script to benchmark OVS+OF(CT) vs OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.
 
 The plan is to keep all of it in a single multicore host, and make all the 
 measures
 within it, to make sure we just measure the difference due to the software 
 layers.
 
 Suggestions or ideas on what to measure are welcome, there’s an initial draft 
 here:
 
 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

Thanks for writing this up Miguel.

I realize this focuses more on performance (how fast the packets flow), but
one of the orthogonal issues with Security Groups in general is the time it takes
for Neutron to apply or update them, for example in iptables_manager.apply(). I
would like to make sure that time doesn't grow any larger than it is today.
This can probably all be scraped from log files, so it wouldn't require any special
work, except for testing with a large SG set.
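
A first pass at that scraping could be as simple as the sketch below; the log
pattern is a placeholder and would need to match whatever timing/debug lines
the agent actually emits around iptables_manager.apply():

```
import re
import sys
from datetime import datetime

# Placeholder pattern: adjust it to the lines your L2 agent logs
# immediately before and after iptables_manager.apply().
LINE = re.compile(r'^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+).*'
                  r'iptables apply (?P<edge>start|end)')


def apply_durations(log_path):
    """Pair start/end markers in an agent log and yield apply() durations."""
    fmt = '%Y-%m-%d %H:%M:%S.%f'
    started = None
    with open(log_path) as logfile:
        for line in logfile:
            match = LINE.search(line)
            if not match:
                continue
            stamp = datetime.strptime(match.group('ts'), fmt)
            if match.group('edge') == 'start':
                started = stamp
            elif started is not None:
                yield (stamp - started).total_seconds()
                started = None


if __name__ == '__main__':
    durations = list(apply_durations(sys.argv[1]))
    if durations:
        print('n=%d max=%.3fs avg=%.3fs' % (
            len(durations), max(durations), sum(durations) / len(durations)))
```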

Thanks,

-Brian




Re: [openstack-dev] [devstack] [Cinder-GlusterFS CI] centos7 gate job abrupt failures

2015-02-25 Thread Deepak Shetty
On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
 [...]
  Run 2) We removed glusterfs backend, so Cinder was configured with
  the default storage backend i.e. LVM. We re-created the OOM here
  too
 
  So that proves that glusterfs doesn't cause it, as its happening
  without glusterfs too.

 Well, if you re-ran the job on the same VM then the second result is
 potentially contaminated. Luckily this hypothesis can be confirmed
 by running the second test on a fresh VM in Rackspace.


Maybe true, but we did the same on the hpcloud provider VM too, and both times
it ran successfully with glusterfs as the cinder backend. Also, before starting
the 2nd run we did unstack, saw that free memory went back to 5G+, and then
re-invoked your script. I believe the contamination could result in some
additional testcase failures (which we did see), but it shouldn't be related to
whether the system can OOM or not, since that's a runtime thing.

I see that the VM is up again. We will execute the 2nd run afresh now and
update here.



  The VM (104.239.136.99) is now in such a bad shape that existing
  ssh sessions are no longer responding for a long long time now,
  tho' ping works. So need someone to help reboot/restart the VM so
  that we can collect the logs for records. Couldn't find anyone
  during apac TZ to get it reboot.
 [...]

 According to novaclient that instance was in a shutoff state, and
 so I had to nova reboot --hard to get it running. Looks like it's
 back up and reachable again now.


Cool, thanks!



  So from the above we can conclude that the tests are running fine
  on hpcloud and not on rax provider. Since the OS (centos7) inside
  the VM across provider is same, this now boils down to some issue
  with rax provider VM + centos7 combination.

 This certainly seems possible.

  Another data point I could gather is:
  The only other centos7 job we have is
  check-tempest-dsvm-centos7 and it does not run full tempest
  looking at the job's config it only runs smoke tests (also
  confirmed the same with Ian W) which i believe is a subset of
  tests only.

 Correct, so if we confirm that we can't successfully run tempest
 full on CentOS 7 in both of our providers yet, we should probably
 think hard about the implications on yesterday's discussion as to
 whether to set the smoke version gating on devstack and
 devstack-gate changes.

  So that brings to the conclusion that probably cinder-glusterfs CI
  job (check-tempest-dsvm-full-glusterfs-centos7) is the first
  centos7 based job running full tempest tests in upstream CI and
  hence is the first to hit the issue, but on rax provider only

 Entirely likely. As I mentioned last week, we don't yet have any
 voting/gating jobs running on the platform as far as I can tell, so
 it's still very much in an experimental stage.


So is there a way for a job to ask for hpcloud affinity, since that's where our
job ran well (faster, and with only 2 failures, which were expected)? I am not sure
how easy and time-consuming it would be to root-cause why the centos7 + rax
provider combination is causing the OOM.

Alternatively, do you recommend using some other OS as the base for our job:
F20, F21 or Ubuntu? I assume that there are other jobs in the rax provider that
run on Fedora or Ubuntu with full tempest and don't OOM - would you know?

thanx,
deepak


Re: [openstack-dev] [Neutron] db-level locks, non-blocking algorithms, active/active DB clusters and IPAM

2015-02-25 Thread Salvatore Orlando
Thanks Clint.

I think you are bringing an interesting and disruptive perspective into
this discussion.
Disruptive because one thing that has not been considered so far in this
thread is that perhaps we don't need to leverage multi-master
capabilities for write operations at all.

More comments inline,
Salvatore

On 25 February 2015 at 14:40, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Salvatore Orlando's message of 2015-02-23 04:07:38 -0800:
  Lazy-Stacker summary:
  I am doing some work on Neutron IPAM code for IP Allocation, and I need
 to
  found whether it's better to use db locking queries (SELECT ... FOR
 UPDATE)
  or some sort of non-blocking algorithm.
  Some measures suggest that for this specific problem db-level locking is
  more efficient even when using multi-master DB clusters, which kind of
  counters recent findings by other contributors [2]... but also backs
 those
  from others [7].
 

 Thanks Salvatore, the story and data you produced is quite interesting.

 
  With the test on the Galera cluster I was expecting a terrible slowdown
 in
  A-1 because of deadlocks caused by certification failures. I was
 extremely
  disappointed that the slowdown I measured however does not make any of
 the
  other algorithms a viable alternative.
  On the Galera cluster I did not run extensive collections for A-2. Indeed
  primary key violations seem to triggers db deadlock because of failed
 write
  set certification too (but I have not yet tested this).
  I run tests with 10 threads on each node, for a total of 30 workers. Some
  results are available at [15]. There was indeed a slow down in A-1 (about
  20%), whereas A-3 performance stayed pretty much constant. Regardless,
 A-1
  was still at least 3 times faster than A-3.
  As A-3's queries are mostly select (about 75% of them) use of caches
 might
  make it a lot faster; also the algorithm is probably inefficient and can
 be
  optimised in several areas. Still, I suspect it can be made faster than
  A-1. At this stage I am leaning towards adoption db-level-locks with
  retries for Neutron's IPAM. However, since I never trust myself, I wonder
  if there is something important that I'm neglecting and will hit me down
  the road.
 

 The thing is, nobody should actually be running blindly with writes
 being sprayed out to all nodes in a Galera cluster. So A-1 won't slow
 down _at all_ if you just use Galera as an ACTIVE/PASSIVE write master.
 It won't scale any worse for writes, since all writes go to all nodes
 anyway.


If writes are sent to a single node, obviously there will be no contention
at all.
I think Mike in some other thread mentioned the intention of supporting
this scheme as part of the db facade work he's doing.
On the other hand, as you say, in this way you'll effectively be using the
DB cluster as ACTIVE/PASSIVE w.r.t. writes.
If it is proven that there won't be any performance or scalability penalty,
then I think I have no concern about adopting this model. Note that
I'm not saying this because I don't trust you, but merely because of my
ignorance - I understand that in principle what you say is correct, but
since galera replicates through write sets and simply by replaying
transaction logs, things might not be as obvious, and perhaps I'd look at
some data supporting this claim.


 For reads we can very easily start to identify hot-spot reads
 that can be sent to all nodes and are tolerant of a few seconds latency.


A few seconds of latency sounds rather bad, but obviously we need to put
everything into the right context.
Indeed it seems to me that by hot-spot you mean reads which are performed
fairly often?
In your opinion, what would be the caveats of always distributing reads to
all nodes?



  In the medium term, there are a few things we might consider for
 Neutron's
  built-in IPAM.
  1) Move the allocation logic out of the driver, thus making IPAM an
  independent service. The API workers will then communicate with the IPAM
  service through a message bus, where IP allocation requests will be
  naturally serialized

 This would rely on said message bus guaranteeing ordered delivery. That
 is going to scale far worse, and be more complicated to maintain, than
 Galera with a few retries on failover.


I suspect this as well, but on the other hand I believe some components
adopted by several OpenStack projects adopt exactly this pattern. I have
however no data point to make any sort of claim regarding their scalability.
From an architectural perspective you can make things better by scaling out
the RPC server with multiple workers, but then you'd be again at the
starting point!
When I propose stuff on the mailing list I typically do full brain dumps,
even the terrible ideas, which unfortunately occur to me very
frequently.



  2) Use 3-party software as dogpile, zookeeper but even memcached to
  implement distributed coordination. I have nothing against it, and I
 reckon
  Neutron can only benefit for it (in case 

Re: [openstack-dev] [Neutron] db-level locks, non-blocking algorithms, active/active DB clusters and IPAM

2015-02-25 Thread Salvatore Orlando
On 25 February 2015 at 13:50, Eugene Nikanorov enikano...@mirantis.com
wrote:

 Thanks for putting this all together, Salvatore.

 I just want to comment on this suggestion:
  1) Move the allocation logic out of the driver, thus making IPAM an
 independent service. The API workers will then communicate with the IPAM
 service through a message bus, where IP allocation requests will be
 naturally serialized

 Right now port creation is already a distributed process involving several
 parties.
 Adding one more actor outside Neutron which can be communicated with
 message bus just to serialize requests makes me think of how terrible
 troubleshooting could be in case of applied load, when communication over
 mq slows down or interrupts.
 Not to say such service would be SPoF and a contention point.
 So, this of course could be an option, but personally I'd not like to see
 it as a default.


Basically here I'm just braindumping. I have no idea on whether this could
be scalable, reliable or maintainable (see reply to Clint's post). I wish I
could prototype code for this, but I'm terribly slow. The days when I was
able to produce thousands of working LOCs per day are long gone.

Anyway it is right that port creation is already a fairly complex workflow.
However, IPAM will in any case be a synchronous operation within this workflow.
Indeed, if the IPAM process does not complete, port wiring and securing in
the agents cannot occur. So I do not expect it to add significant
difficulties in troubleshooting, for which I might add that the issue is
not really due to complex communication patterns, but to the fact that
Neutron still does not have a decent mechanism to correlate events
occurring on the server and in the agents, thus forcing developers and
operators to read logs as if they were hieroglyphics.




 Thanks,
 Eugene.

 On Wed, Feb 25, 2015 at 4:35 AM, Robert Collins robe...@robertcollins.net
  wrote:

 On 24 February 2015 at 01:07, Salvatore Orlando sorla...@nicira.com
 wrote:
  Lazy-Stacker summary:
 ...
  In the medium term, there are a few things we might consider for
 Neutron's
  built-in IPAM.
  1) Move the allocation logic out of the driver, thus making IPAM an
  independent service. The API workers will then communicate with the IPAM
  service through a message bus, where IP allocation requests will be
  naturally serialized
  2) Use 3-party software as dogpile, zookeeper but even memcached to
  implement distributed coordination. I have nothing against it, and I
 reckon
  Neutron can only benefit for it (in case you're considering of arguing
 that
  it does not scale, please also provide solid arguments to support your
  claim!). Nevertheless, I do believe API request processing should
 proceed
  undisturbed as much as possible. If processing an API requests requires
  distributed coordination among several components then it probably means
  that an asynchronous paradigm is more suitable for that API request.

 So data is great. It sounds like as long as we have an appropriate
 retry decorator in place, that write locks are better here, at least
 for up to 30 threads. But can we trust the data?

 One thing I'm not clear on is the SQL statement count.  You say 100
 queries for A-1 with a time on Galera of 0.06*1.2=0.072 seconds per
 allocation ? So is that 2 queries over 50 allocations over 20 threads?

 I'm not clear on what the request parameter in the test json files
 does, and AFAICT your threads each do one request each. As such I
 suspect that you may be seeing less concurrency - and thus contention
 - than real-world setups where APIs are deployed to run worker
 processes in separate processes and requests are coming in
 willy-nilly. The size of each algorithms workload is so small that its
 feasible to imagine the thread completing before the GIL bytecount
 code trigger (see
 https://docs.python.org/2/library/sys.html#sys.setcheckinterval) and
 the GIL's lack of fairness would exacerbate that.

 If I may suggest:
  - use multiprocessing or some other worker-pool approach rather than
 threads
  - or set setcheckinterval down low (e.g. to 20 or something)
  - do multiple units of work (in separate transactions) within each
 worker, aim for e.g. 10 seconds or work or some such.
  - log with enough detail that we can report on the actual concurrency
 achieved. E.g. log the time in us when each transaction starts and
 finishes, then we can assess how many concurrent requests were
 actually running.

 If the results are still the same - great, full steam ahead. If not,
 well lets revisit :)

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud




 

Re: [openstack-dev] [Neutron] db-level locks, non-blocking algorithms, active/active DB clusters and IPAM

2015-02-25 Thread Carl Baldwin
On Mon, Feb 23, 2015 at 5:07 AM, Salvatore Orlando sorla...@nicira.com wrote:
 Lazy-Stacker summary:
 I am doing some work on Neutron IPAM code for IP Allocation, and I need to
 found whether it's better to use db locking queries (SELECT ... FOR UPDATE)
 or some sort of non-blocking algorithm.
 Some measures suggest that for this specific problem db-level locking is
 more efficient even when using multi-master DB clusters, which kind of
 counters recent findings by other contributors [2]... but also backs those
 from others [7].

 The long and boring thing:

 In the past few months there's been a fair amount of discussion concerning
 the use of locking queries (ie: SELECT ... FOR UPDATE) in various OpenStack
 projects, especially in connection with the fact that this kind of queries
 is likely to trigger write set certification failures in synchronous db
 replication engines such as MySQL Galera - as pointed out in [1] and
 thoroughly analysed in [2].
 Neutron, in particular, uses this construct often - and the correctness of
 the whole IPAM logic depends on it. On this regard Neutron is also guilty of
 not implementing any sort of retry mechanism. Eugene, Rossella, and others
 have been working towards fixing this issue. Some examples of their work can
 be found at [3] and [4].

 However, techniques such as compare and swap described in [2] can be used
 for implementing non-blocking algorithms which avoid at all the write-set
 certification issue.
 A neutron example of such technique can be found in [5]. However, a bug in
 this process found by Attila [6], and the subsequent gerrit discussion [7],
 raised further questions on locking vs non-blocking solutions.
 At this stage, we know that using database-level locks triggers write-set
 certification failures in synchronous multi-master modes. These failures are
 reported as deadlock errors. Such errors can be dealt with by retrying the
 transaction. However, this is claimed to be inefficient in [2], whereas
 Attila in [7] argues the opposite.

 As I'm rewriting Neutron's IPAM logic as a part of [8], which heavily relies
 on this construct, I decided to spend some time to put everything to the
 test.
 To this aim I devised 3 algorithms.

 A-1) The classic one which locks the availability ranges for a subnet until
 allocation is complete, augmented with a simple and brutal retry mechanism
 [9].

So, similar to today's implementation.
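
The "simple and brutal retry mechanism" for A-1 is essentially a decorator of
the following shape (oslo.db provides a similar wrap_db_retry helper). This is
only an illustrative sketch, not the code referenced in [9]:

```
import functools
import random
import time

from oslo_db import exception as db_exc


def retry_on_deadlock(max_retries=10):
    """Re-run a DB transaction when Galera's write-set certification
    failure surfaces as a deadlock error.  Illustrative sketch only."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except db_exc.DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    # Small randomized backoff to de-synchronize workers.
                    time.sleep(random.uniform(0, 0.05 * (attempt + 1)))
        return wrapper
    return decorator
```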

 A-2) A wait-free 2 step process, in which the first one creates the IP
 allocation entry, and the second completes it by adjusting availability
 ranges [10]. Primary key violations will trigger retries.

My summary: it queries RESERVED IP requests and subtracts them from
availability, attempts to generate an IP from the resulting set, and
adjusts availability in an idempotent way as a post-step. Does that
sound about right?

Seems there are four flavors of this.  Did you compare them?  Which
one did you use to compare with the other two?

 A-3) A wait-free, lock-free 3 step process [11], which adopts
 compare-and-swap [2] and the same election process as the bully algorithm
 [12].

Also, 2 flavors of this one:  random and sequential.  I'll admit I did
not fully grok this implementation.  It looks to me like it would be
inefficient.

 Unfortunately neutron does not create IP records for every address in a
 subnet's CIDR - this would be fairly bad for IPv6 networks. Therefore we
 cannot simply apply the simple CAS technique introduced in [2].

What if it created records for only N addresses and did the same sort
of thing with those?  That would mitigate the issue with IPv6 and
large IPv4 subnets.  If those N are exhausted then we'd have to run a
routine to create more records.  Multiple workers could run that
routine at the same time but I imagine that they would all attempt to
insert the same set of new records and only one would win.  The others
would hit key violations and try to allocate again.
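(An illustration of that idea, hedged: a compare-and-swap claim on
pre-created candidate rows. The ipcandidates table, its columns, and the
LIMIT of 10 are assumptions made up for this sketch.)

import sqlalchemy as sa


def claim_ip(session, subnet_id, port_id):
    """Claim one of N pre-created, unallocated IP rows via compare-and-swap."""
    candidates = session.execute(
        sa.text("SELECT ip_address FROM ipcandidates "
                "WHERE subnet_id = :subnet AND port_id IS NULL LIMIT 10"),
        {"subnet": subnet_id}).fetchall()
    for row in candidates:
        # The WHERE clause re-checks that the row is still free, so only one
        # concurrent worker can win it; losers just move on to the next row.
        result = session.execute(
            sa.text("UPDATE ipcandidates SET port_id = :port "
                    "WHERE subnet_id = :subnet AND ip_address = :ip "
                    "AND port_id IS NULL"),
            {"port": port_id, "subnet": subnet_id, "ip": row.ip_address})
        if result.rowcount == 1:
            session.commit()
            return row.ip_address
    session.rollback()
    return None  # candidate pool exhausted; a refill routine would run here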

 I then tested them under arbitrary concurrent load from multiple workers.
 The tests were performed first on a single node backend, and then a 3-node
 mysql Galera cluster, installed and configured using the official
 documentation [13].
 The single-node tests showed, as might have been expected, that the
 algorithm leveraging db-level locks is way more efficient [14].
 While it's pointless discussing the full results, one important aspect here
 is that db-level locks are a lot more gentle with the database, and
 this results in consistently faster execution times.
 With 20 concurrent threads, every thread completed in about 0.06 seconds
 with db-level locks (A-1). With A-2 it took about 0.45 seconds, and with A-3
 0.32. With A-2 we saw over 500 queries, a little more than 200 with A-3, and
 just a bit more than 100 with A-1.
 It's worth noting that what makes A-3 faster than A-2 is a random allocation
 strategy which drastically reduces collisions. A-3's performance with
 sequential allocations is much worse (the avg. thread runtime 

Re: [openstack-dev] [Ironic] Stepping down from Ironic Core

2015-02-25 Thread Ruby Loo
Hi Robert,

I'm really glad I had a chance to interact with you. Thanks for everything;
I'm hopeful that we'll see you back.

Ironic still has lots of life, so I guess life goes on without lifeless,
but I'll miss you still. ;)

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] db-level locks, non-blocking algorithms, active/active DB clusters and IPAM

2015-02-25 Thread Salvatore Orlando
On 25 February 2015 at 16:52, Carl Baldwin c...@ecbaldwin.net wrote:

 On Mon, Feb 23, 2015 at 5:07 AM, Salvatore Orlando sorla...@nicira.com
 wrote:
  Lazy-Stacker summary:
  I am doing some work on Neutron IPAM code for IP Allocation, and I need
 to
  found whether it's better to use db locking queries (SELECT ... FOR
 UPDATE)
  or some sort of non-blocking algorithm.
  Some measures suggest that for this specific problem db-level locking is
  more efficient even when using multi-master DB clusters, which kind of
  counters recent findings by other contributors [2]... but also backs
 those
  from others [7].
 
  The long and boring thing:
 
  In the past few months there's been a fair amount of discussion
 concerning
  the use of locking queries (ie: SELECT ... FOR UPDATE) in various
 OpenStack
  projects, especially in connection with the fact that this kind of
 queries
  is likely to trigger write set certification failures in synchronous db
  replication engines such as MySQL Galera - as pointed out in [1] and
  thoroughly analysed in [2].
  Neutron, in particular, uses this construct often - and the correctness
 of
  the whole IPAM logic depends on it. On this regard Neutron is also
 guilty of
  not implementing any sort of retry mechanism. Eugene, Rossella, and
 others
  have been working towards fixing this issue. Some examples of their work
 can
  be found at [3] and [4].
 
  However, techniques such as compare and swap described in [2] can be
 used
  for implementing non-blocking algorithms which avoid at all the write-set
  certification issue.
  A neutron example of such technique can be found in [5]. However, a bug
 in
  this process found by Attila [6], and the subsequent gerrit discussion
 [7],
  raised further questions on locking vs non-blocking solutions.
  At this stage, we know that using database-level locks triggers write-set
  certification failures in synchronous multi master modes. This failures
 are
  reported as deadlock errors. Such errors can be dealt with retrying the
  transaction. However, this is claimed as not efficient in [2], whereas
  Attila in [7] argues the opposite.
 
  As I'm rewriting Neutron's IPAM logic as a part of [8], which heavily
 relies
  on this construct, I decided to spend some time to put everything to the
  test.
  To this aim I devised 3 algorithms.
 
  A-1) The classic one which locks the availability ranges for a subnet
 until
  allocation is complete, augmented with a simple and brutal retry
 mechanism
  [9].

 So, similar to today's implementation.

  A-2) A wait-free 2 step process, in which the first one creates the IP
  allocation entry, and the second completes it by adjusting availability
  ranges [10]. Primary key violations will trigger retries.

 My summary:  queries RESERVED ip requests and subtracts them from
 availability.  Attempts to generate an IP from the resulting set.
 Adjust availability in an idempotent way as a post-step.  Does that
 sound about right?


Correct.
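(Again for illustration only - a rough sketch of that two-step flow; the
table names and the to_candidate_set() and adjust_ranges() helpers are
placeholders, not the code in [10].)

import sqlalchemy as sa
from sqlalchemy import exc as sa_exc


def allocate_ip_wait_free(session, subnet_id, to_candidate_set, adjust_ranges,
                          max_retries=10):
    """A-2 style sketch: insert the allocation first, fix availability later."""
    for _ in range(max_retries):
        ranges = session.execute(
            sa.text("SELECT first_ip, last_ip FROM availability_ranges "
                    "WHERE subnet_id = :subnet"), {"subnet": subnet_id}).fetchall()
        reserved = {r.ip_address for r in session.execute(
            sa.text("SELECT ip_address FROM allocations "
                    "WHERE subnet_id = :subnet"), {"subnet": subnet_id})}
        free = to_candidate_set(ranges) - reserved
        if not free:
            raise RuntimeError("subnet exhausted")
        ip = free.pop()
        try:
            # Step 1: create the allocation entry; the primary key on
            # (subnet_id, ip_address) is what arbitrates between workers.
            session.execute(
                sa.text("INSERT INTO allocations (subnet_id, ip_address) "
                        "VALUES (:subnet, :ip)"), {"subnet": subnet_id, "ip": ip})
            session.commit()
        except sa_exc.IntegrityError:
            session.rollback()   # another worker picked the same IP; retry
            continue
        # Step 2: adjust availability ranges idempotently as a post-step.
        adjust_ranges(session, subnet_id, ip)
        return ip
    raise RuntimeError("gave up after %d retries" % max_retries)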


 Seems there are four flavors of this.  Did you compare them?  Which
 one did you use to compare with the other two?


There are indeed 4 flavors: random and sequential, each of them either
with or without a preliminary range check to verify whether the IP address
has already been removed by a concurrent thread, in which case the process
is skipped.

Random allocation works terribly with availability ranges. This is mainly
because adjusting them with random allocation in the great majority of
cases means doing 1 update + 1 insert query at least, whereas sequential
allocation usually results in a single update query.
The range check before retrying brings improvements in the majority of
cases tested in this experimental campaign. However, its effectiveness
decreases when the concurrency level is low, so it might not be the best
solution.


  A-3) A wait-free, lock-free 3 step process [11], which adopts
  compare-and-swap [2] and the same election process as the bully algorithm
  [12].

 Also, 2 flavors of this one:  random and sequential.  I'll admit I did
 not fully grok this implementation.  It looks to me like it would be
 inefficient.


It sounds inefficient because we are making very few assumptions about the
underlying database. I think its effectiveness can be improved;
nevertheless, it is its complexity that worries me.


  Unfortunately neutron does not create IP records for every address in a
  subnet's CIDR - this would be fairly bad for IPv6 networks. Therefore we
  cannot simply apply the simple CAS technique introduced in [2].

 What if it created records for only N addresses and did the same sort
 of thing with those?  That would mitigate the issue with IPv6 and
 large IPv4 subnets.  If those N are exhausted then we'd have to run a
 routine to create more records.  Multiple workers could run that
 routine at the same time but I imagine that they would all attempt to
 insert the same set of new records and only one would win.  The others
 would hit key 

Re: [openstack-dev] [api][all][log] - Openstack.error common library

2015-02-25 Thread Kuvaja, Erno
Hi Eugeniya,

Please have a look at the discussion under the [log] tag. We’ve been discussing
this topic (a bit wider, not limited to API errors) quite regularly since the
Paris Summit, and we should have X-project specs for review quite soon after the
Ops meetup. The workgroup meetings will start as well.

Obviously at this point the implementation is open for discussion, but so far
there has been a push to implement the tracking in the individual project trees
rather than consolidating it in one location.

Could you elaborate a bit on what you want to have in the headers and why? Our
plan so far has been to have the error code in the message payload so that it’s
easily available to users and operators. What would this library you’re
proposing actually do?
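(To make the discussion concrete, a tiny sketch of the kind of thing being
talked about - the header name, the error code format and the payload schema
below are purely illustrative assumptions, not anything that has been agreed:)

import json


def make_error_response(status, code, title, detail):
    """Build an error body plus a header carrying a string error code."""
    body = {"errors": [{
        "status": status,    # HTTP status, e.g. 409
        "code": code,        # e.g. "compute.quota-exceeded" (made up)
        "title": title,
        "detail": detail,
    }]}
    headers = {"X-OpenStack-Error-Code": code,   # hypothetical header name
               "Content-Type": "application/json"}
    return headers, json.dumps(body)


headers, body = make_error_response(
    409, "compute.quota-exceeded", "Quota exceeded",
    "Instance quota exceeded for project 1234.")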

We’re more than happy to take extra hands on this, so please follow the [log]
discussion and feel free to contact me (IRC: jokke_) or Rockyg (in cc as well)
about what has been done and planned in case you need more clarification.


-  Erno
From: Eugeniya Kudryashova [mailto:ekudryash...@mirantis.com]
Sent: 25 February 2015 14:33
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [api][all] - Openstack.error common library


Hi, stackers!


As was suggested in topic [1], using an HTTP header was a good solution for 
communicating common/standardized OpenStack API error codes.

So I’d like to begin working on a common library, which will collect all 
openstack HTTP API errors, and assign them string error codes. My suggested 
name for library is openstack.error, but please feel free to propose something 
different.


The other question is where we should allocate such project, in openstack or 
stackforge, or maybe oslo-incubator? I think such project will be too massive 
(due to dealing with lots and lots of exceptions)  to allocate it as a part of 
oslo, so I propose developing the project on Stackforge and then eventually 
have it moved into the openstack/ code namespace when the other projects begin 
using the library.


Let me know your feedback, please!


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055549.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] patches that only address grammatical/typos

2015-02-25 Thread Dmitry Tantsur

On 02/25/2015 05:26 PM, Ruby Loo wrote:

Hi,

I was wondering what people thought about patches that only fix
grammatical issues or misspellings in comments in our code.

I can't believe I'm sending out this email, but as a group, I'd like it
if we had  a similar understanding so that we treat all patches in a
similar (dare I say it, consistent) manner. I've seen negative votes and
positive (approved) votes for similar patches. Right now, I look at such
submitted patches and ignore them, because I don't know what the fairest
thing is. I don't feel right that a patch that was previously submitted
gets a -2, whereas another patch gets a +A.

/me too



To be clear, I think that anything that is user-facing like (log,
exception) messages or our documentation should be cleaned up. (And yes,
I am fine using British or American English or a mix here.)

What I'm wondering about are the fixes to docstrings and inline comments
that aren't externally visible.

On one hand, It is great that someone submits a patch so maybe we should
approve it, so as not to discourage the submitter. On the other hand,
how useful are such submissions. It has already been suggested (and
maybe discussed to death) that we should approve patches if there are
only nits. These grammatical and misspellings fall under nits. If we are
explicitly saying that it is OK to merge these nits, then why fix them
later, unless they are part of a patch that does more than only address
those nits?
The biggest problem is that these patches 1. take our time, 2. take gate 
resources, 3. may introduce merge conflicts.


So I would suggest we agree to -2 patches that fix _only_ user-invisible 
strings.




I realize that it would take me less time to approve the patches than to
write this email, but I wanted to know what the community thought. Some
rule-of-thumb would be helpful to me.

Thoughts?

--ruby


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress][Delegation] Initial workflow design

2015-02-25 Thread ruby.krishnaswamy
Hi Tim, All,


1) Step 3: The VM-placement engine is also a datalog engine. Right?

When are policies delegated:

when policies are inserted? Or, once the VM-placement engine has already
registered itself, are all policies given to it?




In our example, this would mean the domain-specific policy engine executes the 
following API call over the DSE

=>  domain-agnostic 



2) Step 4:



Ok

But finally: if Congress will likely delegate



3) Step 5:  Compilation of subpolicy to LP in VM-placement engine



For the PoC, it is likely that the LP program (in PuLP or some other modeling
language) is *not* completely generated by the compiler/translator.

=>  Right?

 You also indicate that there is some category of constraints (the LP solver 
doesn't know what the relationship between assign[i][j], hMemUse[j], and 
vMemUse[i] actually is, so the VM-placement engine must also include 
constraints).
 These constraints must be explicitly written? (e.g. max_ram_allocation 
etc. that are constraints used in the solver-scheduler's package).


 So what parts will be generated:
Cost function:
Constraint from policy: memory usage > 75%

 Then the rest should be filled in?

 Could we agree on an intermediary modeling language?
@Yathi: do you think we could use something like AMPL? Is this 
proprietary?


A detail: the example Y[host1] = hMemUse[host1] > 0.75 * hMemCap[host1]


=>  To be changed to a linear form (if mi - Mi > 0 then Yi = 1 else Yi = 0), so 
something like (mi - Mi) <= 100 yi (a small PuLP sketch of this linearization
is included at the end of this message).


4) Step 6: This is completely internal to the VM-placement engine (and we 
could say this is transparent to Congress)



We should allow configuration of a solver (this itself could be a policy :) )

How to invoke the solver API ?



The domain-specific placement engine could send out to DSE (action_handler: 
data)?


5) Step 7: Perform the migrations (according to the assignments computed in 
step 6)

 This part invokes the OpenStack API (to perform migrations).
 We may suppose that there are services implementing action handlers?
 They can listen on the DSE and execute the action.


6) Nova tables to use
Policy warning(id) :-
nova:host(id, name, service, zone, memory_capacity),
legacy:special_zone(zone),
ceilometer:statistics(id, memory, avg, count, duration,
 durstart, durend, max, min, period,
perstart, perend, sum, unit),
avg > 0.75 * memory_capacity



I believe that ceilometer gives usage of VMs and not hosts. The host table 
(ComputeNode table) should give current used capacity.



7) One of the issues highlighted in OpenStack (scheduler) and also 
elsewhere (e.g. the Omega scheduler by Google) is:

Reading host utilization state from the database, DB (nova:host table) 
updates, and the overhead of keeping in-memory state up to date.

=>  This is expensive, and the current nova-scheduler does face this issue (many 
blogs/discussions).

  While the first goal is a PoC, this will likely become a concern in terms 
of adoption.



8) While in this document you have changed the example policy, could we 
drill down into the set of policies for the PoC (the server under-utilization 
one?)


=>  As a reference
Ruby
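(The PuLP sketch referred to above - only an illustration of the big-M
linearization of the memory-usage indicator; the 0-100 bounds, the variable
names and the sample data are all made up:)

from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

hosts = ["host1", "host2"]
hMemUse = {"host1": 80.0, "host2": 40.0}    # % used, sample data
hMemCap = {"host1": 100.0, "host2": 100.0}  # capacity expressed as 100%

prob = LpProblem("overused_hosts", LpMinimize)
y = LpVariable.dicts("warn", hosts, cat=LpBinary)

BIG_M = 100.0  # big enough to bound (usage - threshold) when y = 1
for h in hosts:
    # Linearized indicator: if hMemUse[h] > 0.75 * hMemCap[h] then y[h] = 1.
    prob += hMemUse[h] - 0.75 * hMemCap[h] <= BIG_M * y[h]

prob += lpSum(y[h] for h in hosts)  # objective: as few warnings as possible
prob.solve()
for h in hosts:
    print(h, y[h].value())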

From: Yathiraj Udupi (yudupi) [mailto:yud...@cisco.com]
Sent: Tuesday, 24 February 2015 20:01
To: OpenStack Development Mailing List (not for usage questions); Tim Hinrichs
Cc: Debo Dutta (dedutta)
Subject: Re: [openstack-dev] [Congress][Delegation] Initial workflow design

Hi Tim,

Thanks for your updated doc on Delegation from Congress to a domain-specific 
policy engine, in this case, you are planning to build a LP-based VM-Placement 
engine to be the domain specific policy engine.
I agree your main goal is to first get the delegation interface sorted out. It 
will be good so that external services (like Solver-Scheduler) can also easily 
integrate with the delegation model.

From the Solver-Scheduler point of view, we would actually want to start 
working on a PoC effort to start integrating Congress and the Solver-Scheduler.
We believe that rather than pushing this effort out to the long term, it would 
add value to both the Solver-Scheduler effort and the Congress effort to try 
some early integration now, as most of the LP solver work for VM placements is 
readily available now in Solver-Scheduler, and we need to spend some time 
thinking about translating your domain-agnostic policy to constraints that the 
Solver-Scheduler can use.

I would definitely need your help with the Congress interfaces, and I hope you 
will share your early interfaces for the delegation, so I can start the effort 
from the Solver-Scheduler side for integration.
I will reach out to you to get some initial help for integration w.r.t. 
Congress, and also keep you posted about the progress from our side.

Thanks,
Yathi.



On 2/23/15, 11:28 AM, Tim Hinrichs 
thinri...@vmware.commailto:thinri...@vmware.com wrote:


Hi all,



I made a heavy editing pass of 

Re: [openstack-dev] [Neutron] (RE: Change in openstack/neutron-specs[master]: Introducing Tap-as-a-Service)

2015-02-25 Thread Salvatore Orlando
From previous discussion, it appeared the proposer really felt this needed
to be a core neutron aspect.
Where by "core" they meant both being part of the core API and part of
openstack/neutron.

On the other hand, we also agreed that the best way forward was to develop
a service plugin, which in a way contradicts the claim about the necessity
of being part of openstack/neutron.
When it comes to which repo the project should live in, I really think
we're splitting hairs and often talking nonsense. To be a fundamental
component of openstack networking one does not have to be in
openstack/neutron - load balancing docet. At some point I think I lost the
reasons brought up by the proposers for not being able to do this work
outside of openstack/neutron.

Personally, I feel we should collaborate as much as possible, and be open
to take the TAP service as part of openstack/neutron provided that there
are compelling reasons for doing so. I'm afraid however that misperceptions
around the suitability of stackforge/* projects won't be a valid reason (at
least for me), but I'm fairly sure this is not the case of this specific
project.

Salvatore


On 25 February 2015 at 02:32, Kyle Mestery mest...@mestery.com wrote:

 There is a -2 (from me). And this was done from the auto-abandon script
 which I try to run once a month.

 As Kevin said, the suggestion multiple times was to do a StackForge
 project for this work, that's the best way forward here.

 On Tue, Feb 24, 2015 at 5:01 PM, CARVER, PAUL pc2...@att.com wrote:

 Maybe I'm misreading review.o.o, but I don't see the -2. There was a -2
 from Salvatore Orlando with the comment The -2 on this patch is only to
 deter further comments and a link to 140292, but 140292 has a comment from
 Kyle saying it's been abandoned in favor of going back to 96149. Are we in
 a loop here?

 We're moving forward internally with proprietary mechanisms for attaching
 analyzers but it sure would be nice if there were a standard API. Anybody
 who thinks switches don't need SPAN/mirror ports has probably never worked
 in Operations on a real production network where SLAs were taken seriously
 and enforced.

 I know there's been a lot of heated discussion around this spec for a
 variety of reasons, but there isn't an enterprise class hardware switch on
 the market that doesn't support SPAN/mirror. Lack of this capability is a
 glaring omission in Neutron that keeps Operations type folks opposed to
 using it because it causes them to lose visibility that they've had for
 ages. We're getting a lot of pressure to continue deploying hardware
 analyzers and/or deploy non-OpenStack mechanisms for implementing
 tap/SPAN/mirror capability when I'd much rather integrate the analyzers
 into OpenStack.


 -Original Message-
 From: Kyle Mestery (Code Review) [mailto:rev...@openstack.org]
 Sent: Tuesday, February 24, 2015 17:37
 To: vinay yadhav
 Cc: CARVER, PAUL; Marios Andreou; Sumit Naiksatam; Anil Rao; Carlos
 Gonçalves; YAMAMOTO Takashi; Ryan Moats; Pino de Candia; Isaku Yamahata;
 Tomoe Sugihara; Stephen Wong; Kanzhe Jiang; Bao Wang; Bob Melander;
 Salvatore Orlando; Armando Migliaccio; Mohammad Banikazemi; mark mcclain;
 Henry Gessau; Adrian Hoban; Hareesh Puthalath; Subrahmanyam Ongole; Fawad
 Khaliq; Baohua Yang; Maruti Kamat; Stefano Maffulli 'reed'; Akihiro Motoki;
 ijw-ubuntu; Stephen Gordon; Rudrajit Tapadar; Alan Kavanagh; Zoltán Lajos
 Kis
 Subject: Change in openstack/neutron-specs[master]: Introducing
 Tap-as-a-Service

 Kyle Mestery has abandoned this change.

 Change subject: Introducing Tap-as-a-Service
 ..


 Abandoned

 This review is  4 weeks without comment and currently blocked by a core
 reviewer with a -2. We are abandoning this for now. Feel free to reactivate
 the review by pressing the restore button and contacting the reviewer with
 the -2 on this review to ensure you address their concerns.

 --
 To view, visit https://review.openstack.org/96149
 To unsubscribe, visit https://review.openstack.org/settings

 Gerrit-MessageType: abandon
 Gerrit-Change-Id: I087d9d2a802ea39c02259f17d2b8c4e2f6d8d714
 Gerrit-PatchSet: 8
 Gerrit-Project: openstack/neutron-specs
 Gerrit-Branch: master
 Gerrit-Owner: vinay yadhav vinay.yad...@ericsson.com
 Gerrit-Reviewer: Adrian Hoban adrian.ho...@intel.com
 Gerrit-Reviewer: Akihiro Motoki amot...@gmail.com
 Gerrit-Reviewer: Alan Kavanagh alan.kavan...@ericsson.com
 Gerrit-Reviewer: Anil Rao arao...@gmail.com
 Gerrit-Reviewer: Armando Migliaccio arma...@gmail.com
 Gerrit-Reviewer: Bao Wang baowan...@yahoo.com
 Gerrit-Reviewer: Baohua Yang bao...@linux.vnet.ibm.com
 Gerrit-Reviewer: Bob Melander bob.melan...@gmail.com
 Gerrit-Reviewer: Carlos Gonçalves m...@cgoncalves.pt
 Gerrit-Reviewer: Fawad Khaliq fa...@plumgrid.com
 Gerrit-Reviewer: Hareesh Puthalath hareesh.puthal...@gmail.com
 Gerrit-Reviewer: Henry Gessau ges...@cisco.com
 Gerrit-Reviewer: Isaku Yamahata 

Re: [openstack-dev] [api][all] - Openstack.error common library

2015-02-25 Thread Doug Hellmann


On Wed, Feb 25, 2015, at 09:33 AM, Eugeniya Kudryashova wrote:
 Hi, stackers!
 
 As was suggested in topic [1], using an HTTP header was a good solution
 for
 communicating common/standardized OpenStack API error codes.
 
 So I’d like to begin working on a common library, which will collect all
 openstack HTTP API errors, and assign them string error codes. My
 suggested
 name for library is openstack.error, but please feel free to propose
 something different.
 
 The other question is where we should allocate such project, in openstack
 or stackforge, or maybe oslo-incubator? I think such project will be too
 massive (due to dealing with lots and lots of exceptions)  to allocate it
 as a part of oslo, so I propose developing the project on Stackforge and
 then eventually have it moved into the openstack/ code namespace when the
 other projects begin using the library.
 
 Let me know your feedback, please!

I'm not sure a single library as a home to all of the various error
messages is the right approach. I thought, based on re-reading the
thread you link to, that the idea was to come up with a standard schema
for error payloads and then let the projects fill in the details. We
might need a library for utility functions, but that wouldn't actually
include the error messages.

Did I misunderstand?

Doug

 
 
 [1] -
 http://lists.openstack.org/pipermail/openstack-dev/2015-January/055549.html
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] patches that only address grammatical/typos

2015-02-25 Thread Alexis Lee
Ruby Loo said on Wed, Feb 25, 2015 at 11:26:56AM -0500:
 I was wondering what people thought about patches that only fix grammatical
 issues or misspellings in comments in our code.

For my money, a patch fixing nits has value, but only if it fixes a few.
If it's a follow-up patch it should fix all the nits; otherwise it
should cover a significant chunk, e.g. a file, class or large method.

 It has already been suggested (and maybe
 discussed to death) that we should approve patches if there are only nits.
 These grammatical and misspellings fall under nits. If we are explicitly
 saying that it is OK to merge these nits, then why fix them later, unless
 they are part of a patch that does more than only address those nits?

We'd rather they were fixed though, right? Letting patches with nits
land is a pragmatic response to long response times or social friction.
Likewise a follow-up patch to fix nits can be a very pragmatic way to
allow the original patch to land quickly without sacrificing code
readability.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] db-level locks, non-blocking algorithms, active/active DB clusters and IPAM

2015-02-25 Thread John Belamaric


On 2/25/15, 10:52 AM, Carl Baldwin c...@ecbaldwin.net wrote:

Wondering if Kilo should just focus on creating the interface which
will allow us to create multiple implementations and swap them out
during the Liberty development cycle.  Hopefully, this could include
even something like your option 2 below.

I think I would go with the locking solution first because it is most
like what we have today.

+1. The pluggable framework will make it easy to swap these in and out, to
get real-world experience with each.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.messaging 1.7.0 released

2015-02-25 Thread Mehdi Abaakouk

The Oslo team is thrilled to announce the release of:

oslo.messaging 1.7.0: Oslo Messaging API

For more details, please see the git log history below and:

http://launchpad.net/oslo.messaging/+milestone/1.7.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

Changes in oslo.messaging 1.6.0..1.7.0
--

097fb23 Add FAQ entry for notifier configuration
68cd8cf rabbit: Fix behavior of rabbit_use_ssl
b2f505e amqp1: fix functional tests deps
aef3a61 Skip functional tests that fail due to a qpidd bug
56fda65 Remove unnecessary log messages from amqp1 unit tests
a9d5dd1 Include missing parameter in call to listen_for_notifications
3d366c9 Fix the import of the driver by the unit test
d8e68c3 Add a new aioeventlet executor
4e182b2 Add missing unit test for a recent commit
1475246 Add the threading executor setup.cfg entrypoint
824313a Move each drivers options into its own group
16ee9a8 Refactor the replies waiter code
03a46fb Imported Translations from Transifex
8380ac6 Fix notifications broken with ZMQ driver
b6a1ea0 Gate functionnal testing improvements
bf4ab5a Treat sphinx warnings as errors
0bf90d1 Move gate hooks to the oslo.messaging tree
dc75773 Set the password used in gate
7a7ca5f Update README.rst format to match expectations
434b5c8 Declare DirectPublisher exchanges with passive=True
3c40cee kombu: fix driver loading with kombu+qpid scheme
e7e5506 Ensure kombu channels are closed
3d232a0 Implements notification-dispatcher-filter
8eed6bb Make sure zmq can work with redis

Diffstat (except docs and test files)
-

README.rst |   5 +-
amqp1-requirements.txt |   2 +-
.../locale/fr/LC_MESSAGES/oslo.messaging.po|  18 +-
oslo_messaging/_drivers/amqp.py|   6 +-
oslo_messaging/_drivers/amqpdriver.py  | 160 ++---
oslo_messaging/_drivers/common.py  |   5 +-
oslo_messaging/_drivers/impl_qpid.py   |  45 +++--
oslo_messaging/_drivers/impl_rabbit.py | 196 ++---
oslo_messaging/_drivers/impl_zmq.py|  24 +--
oslo_messaging/_drivers/matchmaker_redis.py|   2 +-
oslo_messaging/_executors/impl_aioeventlet.py  |  75 
oslo_messaging/_executors/impl_eventlet.py |   6 +-
oslo_messaging/conffixture.py  |  16 +-
oslo_messaging/notify/__init__.py  |   2 +
oslo_messaging/notify/dispatcher.py|   9 +-
oslo_messaging/notify/filter.py|  77 
oslo_messaging/notify/listener.py  |   8 +-
oslo_messaging/opts.py |   7 +-
requirements.txt   |   4 +
setup.cfg  |   5 +
tox.ini|  15 +-
42 files changed, 937 insertions(+), 327 deletions(-)


Requirements updates


diff --git a/amqp1-requirements.txt b/amqp1-requirements.txt
index bf8a37e..6303dc7 100644
--- a/amqp1-requirements.txt
+++ b/amqp1-requirements.txt
@@ -8 +8 @@ pyngus>=1.0.0,<2.0.0  # Apache-2.0
-python-qpid-proton>=0.7,<0.8  # Apache-2.0
+python-qpid-proton>=0.7,<0.9  # Apache-2.0
diff --git a/requirements.txt b/requirements.txt
index 352b14a..e6747b0 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -31,0 +32,4 @@ futures>=2.1.6
+
+# needed by the aioeventlet executor
+aioeventlet>=0.4
+trollius>=1.0


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] patches that only address grammatical/typos

2015-02-25 Thread Ruby Loo
Hi,

I was wondering what people thought about patches that only fix grammatical
issues or misspellings in comments in our code.

I can't believe I'm sending out this email, but as a group, I'd like it if
we had  a similar understanding so that we treat all patches in a similar
(dare I say it, consistent) manner. I've seen negative votes and positive
(approved) votes for similar patches. Right now, I look at such submitted
patches and ignore them, because I don't know what the fairest thing is. I
don't feel right that a patch that was previously submitted gets a -2,
whereas another patch gets a +A.

To be clear, I think that anything that is user-facing like (log,
exception) messages or our documentation should be cleaned up. (And yes, I
am fine using British or American English or a mix here.)

What I'm wondering about are the fixes to docstrings and inline comments
that aren't externally visible.

On one hand, it is great that someone submits a patch, so maybe we should
approve it, so as not to discourage the submitter. On the other hand, how
useful are such submissions? It has already been suggested (and maybe
discussed to death) that we should approve patches if there are only nits.
These grammatical errors and misspellings fall under nits. If we are explicitly
saying that it is OK to merge these nits, then why fix them later, unless
they are part of a patch that does more than only address those nits?

I realize that it would take me less time to approve the patches than to
write this email, but I wanted to know what the community thought. Some
rule-of-thumb would be helpful to me.

Thoughts?

--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Rick Jones

On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:

I’m writing a plan/script to benchmark OVS+OF(CT) vs
OVS+LB+iptables+ipsets,
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.

The plan is to keep all of it in a single multicore host, and make
all the measures within it, to make sure we just measure the
difference due to the software layers.

Suggestions or ideas on what to measure are welcome, there’s an initial
draft here:

https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct


Conditions to be benchmarked

Initial connection establishment time
Max throughput on the same CPU

Large MTUs and stateless offloads can mask a multitude of path-length 
sins.  And there is a great deal more to performance than Mbit/s. While 
some of that may be covered by the first item via the likes of say 
netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to a 
focus on Mbit/s (which I assume is the focus of the second item) there 
is something for packet per second performance.  Something like netperf 
TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.


Doesn't have to be netperf, that is simply the hammer I wield :)
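(If it helps, a throwaway sketch of scripting those request/response tests
from Python - it assumes netperf is installed, a netserver is already running
on TARGET, and it deliberately sticks to the basic -H/-t/-l options:)

import subprocess

TARGET = "127.0.0.1"            # assumption: netserver listening here
TESTS = ("TCP_RR", "TCP_CRR")   # transaction rate ~ packets/connections per sec
DURATION = 30                   # seconds per test

for test in TESTS:
    cmd = ["netperf", "-H", TARGET, "-t", test, "-l", str(DURATION)]
    print(" ".join(cmd))
    # Just print netperf's own report; the transactions/sec figure is the
    # last column of the results line.
    print(subprocess.check_output(cmd).decode())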

What follows may be a bit of perfect being the enemy of the good, or 
mission creep...


On the same CPU would certainly simplify things, but it will almost 
certainly exhibit different processor data cache behaviour than actually 
going through a physical network with a multi-core system.  Physical 
NICs will possibly (probably?) have RSS going, which may cause cache 
lines to be pulled around.  The way packets will be buffered will differ 
as well.  Etc etc.  How well the different solutions scale with cores is 
definitely a difference of interest between the two software layers.


rick

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStackClient] New meeting time

2015-02-25 Thread Dean Troyer
[Parse the subject as new-meeting time, as there is no old meeting time...]

OpenStackClient will start holding periodic development team meetings on
Thursday Feb 26 at 18:00 UTC in Freenode's #openstack-meeting channel.  One
of the first things I want to cover is timing and frequency of these
meetings.  I know this is short notice so I anticipate that discussion to
be summarized on the mailing list before decisions are made.

The agenda for tomorrow's meeting can be found at
https://wiki.openstack.org/wiki/Meetings/OpenStackClient.  Anyone is
welcome to add things to the agenda, please indicate your IRC handle and if
you are unable to attend the meeting sending background/summary to me or
someone who will attend is appreciated.

A timezone helper is at
http://www.timeanddate.com/worldclock/fixedtime.html?hour=18&min=0&sec=0:

03:00 JST
04:30 ACDT
19:00 CET
13:00 EST
12:00 CST
10:00 PST

dt


-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread Robert Collins
On 26 February 2015 at 08:54, melanie witt melwi...@gmail.com wrote:
 On Feb 25, 2015, at 10:51, Duncan Thomas duncan.tho...@gmail.com wrote:

 Is there anybody who'd like to step forward in defence of this rule and 
 explain why it is an improvement? I don't discount for a moment the 
 possibility I'm missing something, and welcome the education in that case

 A reason I can think of would be to preserve namespacing (no possibility of 
 function or class name collision upon import). Another reason could be 
 maintainability, scenario being: Person 1 imports ClassA from a module to 
 use, Person 2 comes along later and needs a different class from the module 
 so they import ClassB from the same module to use, and it continues. If only 
 the module had been imported, everybody can just do module.ClassA, 
 module.ClassB instead of potentially multiple imports from the same module of 
 different classes and functions. I've also read it doesn't cost more to 
 import the entire module rather than just a function or a class, as the whole 
 module has to be parsed either way.

I think the primary benefit is that when looking at the code you can
tell where a name came from. If the code is using libraries that one
is not familiar with, this makes finding the implementation a lot
easier (than e.g. googling it and hoping it's unique and not generic
like 'create' or something).

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread John Griffith
On Wed, Feb 25, 2015 at 12:59 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 26 February 2015 at 08:54, melanie witt melwi...@gmail.com wrote:
  On Feb 25, 2015, at 10:51, Duncan Thomas duncan.tho...@gmail.com
 wrote:
 
  Is there anybody who'd like to step forward in defence of this rule and
 explain why it is an improvement? I don't discount for a moment the
 possibility I'm missing something, and welcome the education in that case
 
  A reason I can think of would be to preserve namespacing (no possibility
 of function or class name collision upon import). Another reason could be
 maintainability, scenario being: Person 1 imports ClassA from a module to
 use, Person 2 comes along later and needs a different class from the module
 so they import ClassB from the same module to use, and it continues. If
 only the module had been imported, everybody can just do module.ClassA,
 module.ClassB instead of potentially multiple imports from the same module
 of different classes and functions. I've also read it doesn't cost more to
 import the entire module rather than just a function or a class, as the
 whole module has to be parsed either way.

 I think the primary benefit is that when looking at the code you can
 tell where a name came from. If the code is using libraries that one
 is not familiar with, this makes finding the implementation a lot
 easier (than e.g. googling it and hoping its unique and not generic
 like 'create' or something.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I'd echo Melanie's and Robert's points as being benefits.  I've actually
recently encountered the namespace collision problem pointed out in
Melanie's example, but most of all I agree with Robert's points about being
explicit.  Not to mention we can avoid what we have today where the same
module is imported under 5 different names (that's annoying).
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
The fact that a system doesn't use a neutron agent is not a good
justification for monolithic vs driver. The VLAN drivers co-exist with OVS
just fine when using VLAN encapsulation even though some are agent-less.

There is a missing way to coordinate connectivity with tunnel networks
across drivers, but that doesn't mean you can't run multiple drivers to
handle different types or just to provide additional features (auditing,
more access control, etc).
On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 has been designed for co-existing of multiple heterogeneous
 backends, it works well for all agent solutions: OVS, Linux Bridge, and
 even ofagent.

 However, when it comes to all kinds of agentless solutions,
 especially all kinds of SDN controllers (except for the Ryu-Lib style),
 the Mechanism Driver becomes the new monolithic place despite the benefits of
 code reduction: MDs can't inter-operate, either between themselves or
 with the ovs/bridge agent L2pop; each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is to keep those thin MDs (with agent) in the ML2 framework
 (also inter-operating with native Neutron L3/service plugins), while all
 other fat MDs (agentless) go with the old style of monolithic plugin, with
 all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide. Table 7.3
 Available networking plugi-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” :)



 Regards,

 Amit Saha

 Cisco, Bangalore






 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] creating a unified developer reference manual

2015-02-25 Thread Doug Hellmann
During yesterday’s cross-project meeting [1], we discussed the “Eventlet Best 
Practices” spec [2] started by bnemec. The discussion of that spec revolved 
around the question of whether our cross-project specs repository is the right 
place for this type of document that isn’t a “plan” for a change, and is more a 
reference guide. Eventually we came around to the idea of creating a 
cross-project developer guide to hold these sorts of materials.

That leads to two questions, then:

1. Should we have a unified developer guide for the project?
2. Where should it live and how should we manage it?

I believe we would benefit by having a more formal place to write down some of 
our experiences in ways that make them discoverable. We have been using the 
wiki for this, but it is often challenging to find information in the wiki if 
you don’t generally know where to look for it. That leads to an answer to 
question 2: create a new git repository to host a Sphinx-based manual similar 
to what many projects already have. We would then try to unify content from all 
sources where it applies to the whole project, and we could eventually start 
moving some of the wiki content into it as well.

Oslo has already started moving some of our reference materials from the wiki 
into a “policy” section of the oslo-specs repository, and one benefit we found 
to using git to manage those policy documents is that we have a record of the 
discussion of the changes to the pages, and we can collaborate on changes 
through our code review system — so everyone on the team has a voice and can 
comment on the changes. It can also be a bit easier to do things like include 
sample code [3].

Right now each project has its own reference guide, with project-specific 
information in it. Not all projects are going to agree to all of the 
guidelines, but it will be useful to have the conversation about those points 
where we are doing things differently so we can learn from each other.

If we decide to create the repository, we would also need to decide how it 
would be managed. The rules set up for the cross-project specs repository seems 
close to what we want (everyone can +1/-1; TC members can +2/-2; the TC chair 
tallies the votes and uses workflow+1) [4].

An alternative is to designate a subsection of the openstack-specs repository 
for the content, as we’ve done in Oslo. In this case, though, I think it makes 
more sense to create a new repository. If there is a general agreement to go 
ahead with the plan, I will set that up with a Sphinx project framework to get 
us started.

Comments?

Doug


[1] 
http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-02-24-21.03.log.html
[2] https://review.openstack.org/#/c/154642/
[3] http://docs.openstack.org/developer/oslo.log/api/fixtures.html
[4] http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-02-24-20.02.log.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [plugins] A simple network to interface mapping in astute.yaml

2015-02-25 Thread Andrey Danin
Hi, fuelers,

As you may know, we have a rich and complex network_transformations section
in astute.yaml. We use it to describe which OVS/Linux network primitives
should be created and how they should be connected together. This section
is used by l23network Puppet module during the deployment stage.

The problem is that from a plugin developer's standpoint it's hard to parse
a full transformation graph to find which interface/vlan is used for each
network (Public, Private, etc.). I see two ways to fix that.

1) Add a new structure to astute.yaml with a simple mapping of networks to
interfaces/vlans. Example: http://paste.openstack.org/show/181466/ (see
roles_meta section).
Pros: it's very easy for plugin developers.
Cons: there are two sources of truth (roles_meta and transformations). If
one plugin patches transformations but forgets to patch roles_meta, another
plugin, which relies on roles_meta, fails the deployment.

2) Provide some kind of SDK - functions/libraries for Puppet/Python/Bash,
which can be used in plugins' tasks to operate on the graph of
transformations (a rough sketch of such a helper follows below).
Pros: a single point of truth. One controlled way to do things right.
Cons: a new piece of software will be issued. It must be written, tested,
documented, and incorporated into CI/CD infrastructure.


I prefer the second way. Do you?
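(The rough sketch of option 2 mentioned above, purely to show the shape of
such a helper - the transformations structure assumed here, a list of dicts
with 'action', 'name' and 'bridge' keys, is illustrative and may not match
the real l23network schema:)

def ports_for_bridge(transformations, bridge):
    """Return the port names attached to a bridge in a transformations list."""
    return [t["name"] for t in transformations
            if t.get("action") == "add-port" and t.get("bridge") == bridge]


# Example with made-up data:
example = [
    {"action": "add-br", "name": "br-mgmt"},
    {"action": "add-port", "name": "eth0.101", "bridge": "br-mgmt"},
]
print(ports_for_bridge(example, "br-mgmt"))  # ['eth0.101']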

-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
Yeah, it seems ML2 at the least should save you a lot of boilerplate.
On Feb 25, 2015 2:32 AM, Russell Bryant rbry...@redhat.com wrote:

 On 02/24/2015 05:38 PM, Kevin Benton wrote:
  OVN implementing its own control plane isn't a good reason to make it a
  monolithic plugin. Many of the ML2 drivers are for technologies with
  their own control plane.
 
  Going with the monolithic plugin only makes sense if you are certain
  that you never want interoperability with other technologies at the
  Neutron level. Instead of ruling that out this early, why not make it as
  an ML2 driver and then change to a monolithic plugin if you run into
  some fundamental issue with ML2?

 That was my original thinking.  I figure the important code of the ML2
 driver could be reused if/when the switch is needed.  I'd really just
 take the quicker path to making something work unless it's obvious that
 ML2 isn't the right path.  As this thread is still ongoing, it certainly
 doesn't seem obvious.

 --
 Russell Bryant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread Joe Gordon
On Wed, Feb 25, 2015 at 10:51 AM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 Hi

 So a review [1] was recently submitted to cinder to fix up all of the H302
 violations, and turn on the automated check for them. This is certainly a
 reasonable suggestion given the number of manual reviews that -1 for this
 issue, however I'm far from convinced it actually makes the code more
 readable,

 Is there anybody who'd like to step forward in defence of this rule and
 explain why it is an improvement? I don't discount for a moment the
 possibility I'm missing something, and welcome the education in that case


H302 originally comes from
http://google-styleguide.googlecode.com/svn/trunk/pyguide.html?showone=Imports#Imports


 Thanks


 [1] https://review.openstack.org/#/c/145780/
 --
 Duncan Thomas

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread Duncan Thomas
Thanks for that, Joe. I'd say the cons miss 'It looks ugly in places'.

On 25 February 2015 at 20:54, Joe Gordon joe.gord...@gmail.com wrote:


 On Wed, Feb 25, 2015 at 10:51 AM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 Hi

 So a review [1] was recently submitted to cinder to fix up all of the
 H302 violations, and turn on the automated check for them. This is
 certainly a reasonable suggestion given the number of manual reviews that
 -1 for this issue, however I'm far from convinced it actually makes the
 code more readable,

 Is there anybody who'd like to step forward in defence of this rule and
 explain why it is an improvement? I don't discount for a moment the
 possibility I'm missing something, and welcome the education in that case


 H302 originally comes from
 http://google-styleguide.googlecode.com/svn/trunk/pyguide.html?showone=Imports#Imports


 Thanks


 [1] https://review.openstack.org/#/c/145780/
 --
 Duncan Thomas

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread melanie witt
On Feb 25, 2015, at 10:51, Duncan Thomas duncan.tho...@gmail.com wrote:

 Is there anybody who'd like to step forward in defence of this rule and 
 explain why it is an improvement? I don't discount for a moment the 
 possibility I'm missing something, and welcome the education in that case

A reason I can think of would be to preserve namespacing (no possibility of 
function or class name collision upon import). Another reason could be 
maintainability, scenario being: Person 1 imports ClassA from a module to use, 
Person 2 comes along later and needs a different class from the module so they 
import ClassB from the same module to use, and it continues. If only the module 
had been imported, everybody can just do module.ClassA, module.ClassB instead 
of potentially multiple imports from the same module of different classes and 
functions. I've also read it doesn't cost more to import the entire module 
rather than just a function or a class, as the whole module has to be parsed 
either way.
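(A tiny illustration of the difference, using a stdlib module so it stands on
its own:)

# Preferred under H302: import the module and keep the namespace.
import os.path

path = os.path.join("/var", "log", "cinder")   # obvious where join() lives

# Flagged by H302: importing the name directly drops the namespace.
from os.path import join

path = join("/var", "log", "cinder")   # join() of what? check the imports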

melanie (melwitt)






signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Tapio Tallgren
Hi,

RFC 2544 with near-zero packet loss is a pretty standard performance
benchmark. It is also used in the OPNFV project (
https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases
).

Does this mean that OpenStack will have stateful firewalls (or security
groups)? Any other ideas planned, like ebtables type filtering?

-Tapio

On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com wrote:

 On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:

 I’m writing a plan/script to benchmark OVS+OF(CT) vs
 OVS+LB+iptables+ipsets,
 so we can make sure there’s a real difference before jumping into any
 OpenFlow security group filters when we have connection tracking in OVS.

 The plan is to keep all of it in a single multicore host, and make
 all the measures within it, to make sure we just measure the
 difference due to the software layers.

 Suggestions or ideas on what to measure are welcome, there’s an initial
 draft here:

 https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct


 Conditions to be benchmarked

 Initial connection establishment time
 Max throughput on the same CPU

 Large MTUs and stateless offloads can mask a multitude of path-length
 sins.  And there is a great deal more to performance than Mbit/s. While
 some of that may be covered by the first item via the likes of say netperf
 TCP_CRR or TCP_CC testing, I would suggest that in addition to a focus on
 Mbit/s (which I assume is the focus of the second item) there is something
 for packet per second performance.  Something like netperf TCP_RR and
 perhaps aggregate TCP_RR or UDP_RR testing.

 Doesn't have to be netperf, that is simply the hammer I wield :)

 What follows may be a bit of perfect being the enemy of the good, or
 mission creep...

 On the same CPU would certainly simplify things, but it will almost
 certainly exhibit different processor data cache behaviour than actually
 going through a physical network with a multi-core system.  Physical NICs
 will possibly (probably?) have RSS going, which may cause cache lines to be
 pulled around.  The way packets will be buffered will differ as well.  Etc
 etc.  How well the different solutions scale with cores is definitely a
 difference of interest between the two sofware layers.

 rick


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
-Tapio
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-25 Thread Johannes Erdfelt
On Tue, Feb 24, 2015, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-02-24 10:00:51 -0800 (-0800), Johannes Erdfelt wrote:
 [...]
  Recently, I have spent a lot more time waiting on reviews than I
  have spent writing the actual code.
 
 That's awesome, assuming what you mean here is that you've spent
 more time reviewing submitted code than writing more. That's where
 we're all falling down as a project and should be doing better, so I
 applaud your efforts in this area.

I think I understand what you're trying to do here, but to be clear, are
you saying that I only have myself to blame for how long it takes to
get code merged nowadays?

JE


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] H302 considered harmful

2015-02-25 Thread Duncan Thomas
Hi

So a review [1] was recently submitted to cinder to fix up all of the H302
violations, and turn on the automated check for them. This is certainly a
reasonable suggestion given the number of manual reviews that -1 for this
issue; however, I'm far from convinced it actually makes the code more
readable.

Is there anybody who'd like to step forward in defence of this rule and
explain why it is an improvement? I don't discount for a moment the
possibility I'm missing something, and welcome the education in that case.

Thanks


[1] https://review.openstack.org/#/c/145780/
-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][docs] VPNaaS API reference missing...

2015-02-25 Thread Paul Michali
Not sure where it disappeared to, but none of the API pages for
VPNaaS exist. This content was added some time ago (I think it was review 41702
 back in 9/2013).

Mentioned to Edgar, but wanted to let the community know...


PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStackClient] New meeting time

2015-02-25 Thread Steve Martinelli
\o/

I'll be there! Thanks for organizing it all Dean.

Steve

Dean Troyer dtro...@gmail.com wrote on 02/25/2015 12:19:38 PM:

 From: Dean Troyer dtro...@gmail.com
 To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
 Date: 02/25/2015 12:32 PM
 Subject: [openstack-dev] [OpenStackClient] New meeting time
 
 [Parse the subject as new-meeting time, as there is no old meeting 
time...]
 
 OpenStackClient will start holding periodic development team 
 meetings on Thursday Feb 26 at 18:00 UTC in Freenode's #openstack-
 meeting channel.  One of the first things I want to cover is timing 
 and frequency of these meetings.  I know this is short notice so I 
 anticipate that discussion to be summarized on the mailing list 
 before decisions are made.
 
 The agenda for tomorrow's meeting can be found at https://
 wiki.openstack.org/wiki/Meetings/OpenStackClient.  Anyone is welcome
 to add things to the agenda; please indicate your IRC handle, and if 
 you are unable to attend the meeting, sending background/summary to 
 me or someone who will attend is appreciated.
 
 A timezone helper is at http://www.timeanddate.com/worldclock/
 fixedtime.html?hour=18&min=0&sec=0:
 
 03:00 JST
 04:30 ACDT
 19:00 CET
 13:00 EST
 12:00 CST
 10:00 PST
 
 dt
 
 -- 
 
 Dean Troyer
 dtro...@gmail.com
 
__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][docs] VPNaaS API reference missing...

2015-02-25 Thread Salvatore Orlando
As you can see, netconn-api has gone into the Openstack-attic.
A few months ago, all neutron API reference docs were moved into
neutron-specs (similar things happened to other projects).

The new home of the VPN API spec is [1]

Salvatore

[1]
http://specs.openstack.org/openstack/neutron-specs/specs/api/virtual_private_network_as_a_service__vpnaas_.html

On 25 February 2015 at 19:26, Paul Michali p...@michali.net wrote:

 Not sure where it disappeared, but there are none of the API pages for
 VPNaaS exist. This was added some time ago (I think it was review 41702
  back in 9/2013).

 Mentioned to Edgar, but wanted to let the community know...


 PCM (Paul Michali)

 IRC pc_m (irc.freenode.com)
 Twitter... @pmichali


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bp serial-ports *partly* implemented?

2015-02-25 Thread Markus Zoeller
Sahid Orentino Ferdjaoui sahid.ferdja...@redhat.com wrote on 02/25/2015 
12:58:30 PM:
 [...]
 
 So we probably have a bug here, can you at least refer it in launchpad
 ? We need to see if the problem comes from the code in Nova or a bad
 interpretation of the behavior of libvirt or a bug in libvirt.
 
 Please on the report can you also paste the XML when you have a well
 connected session on the first port?
 
 [...]

I opened bug 1425640: https://bugs.launchpad.net/nova/+bug/1425640


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] patches that only address grammatical/typos

2015-02-25 Thread Doug Hellmann


On Wed, Feb 25, 2015, at 12:36 PM, Jay Faulkner wrote:
 
  On Feb 25, 2015, at 10:26 AM, Ruby Loo rlooya...@gmail.com wrote:
  
  Hi,
  
  I was wondering what people thought about patches that only fix grammatical 
  issues or misspellings in comments in our code.
  
  I can't believe I'm sending out this email, but as a group, I'd like it if 
  we had  a similar understanding so that we treat all patches in a similar 
  (dare I say it, consistent) manner. I've seen negative votes and positive 
  (approved) votes for similar patches. Right now, I look at such submitted 
  patches and ignore them, because I don't know what the fairest thing is. I 
  don't feel right that a patch that was previously submitted gets a -2, 
  whereas another patch gets a +A.
  
  To be clear, I think that anything that is user-facing like (log, 
  exception) messages or our documentation should be cleaned up. (And yes, I 
  am fine using British or American English or a mix here.)
  
  What I'm wondering about are the fixes to docstrings and inline comments 
  that aren't externally visible.
  
  On one hand, it is great that someone submits a patch, so maybe we should 
  approve it, so as not to discourage the submitter. On the other hand, how 
  useful are such submissions? It has already been suggested (and maybe 
  discussed to death) that we should approve patches if there are only nits. 
  These grammatical issues and misspellings fall under nits. If we are explicitly 
  saying that it is OK to merge these nits, then why fix them later, unless 
  they are part of a patch that does more than only address those nits?
  
  I realize that it would take me less time to approve the patches than to 
  write this email, but I wanted to know what the community thought. Some 
  rule-of-thumb would be helpful to me.
  
 
 I personally always ask this question: does it make the software better?
 IMO fixing some of these grammatical issues can. I don’t think we should
 actively encourage such patches, but if someone already did the work, why
 should we run them away? Many folks use patches like this to help them
 learn the process for contributing to OpenStack and I’d hate to run them
 away.
 
 These changes tend to bubble up because they’re an easy way to get
 involved. The time it takes to review and merge them in is an investment
 in that person’s future interest in contributing to OpenStack, or
 possibly open source in general. 

+1

We need to keep this sort of thing in mind. If the patch is fairly
trivial, it's also fairly trivial to review. If it is going to cause a
more complex patch to need to be rebased, suggest that the proposer
rebase their patch on top of the complex patch to avoid problems later
-- that's teaching another lesson, so everyone benefits.

Doug

 
 -Jay
 
 
  Thoughts?
  
  --ruby
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStackClient] New meeting time

2015-02-25 Thread Doug Hellmann
Ditto!

On Wed, Feb 25, 2015, at 01:35 PM, Steve Martinelli wrote:
 \o/
 
 I'll be there! Thanks for organizing it all Dean.
 
 Steve
 
 Dean Troyer dtro...@gmail.com wrote on 02/25/2015 12:19:38 PM:
 
  From: Dean Troyer dtro...@gmail.com
  To: OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
  Date: 02/25/2015 12:32 PM
  Subject: [openstack-dev] [OpenStackClient] New meeting time
  
  [Parse the subject as new-meeting time, as there is no old meeting 
 time...]
  
  OpenStackClient will start holding periodic development team 
  meetings on Thursday Feb 26 at 18:00 UTC in Freenode's #openstack-
  meeting channel.  One of the first things I want to cover is timing 
  and frequency of these meetings.  I know this is short notice so I 
  anticipate that discussion to be summarized on the mailing list 
  before decisions are made.
  
  The agenda for tomorrow's meeting can be found at https://
  wiki.openstack.org/wiki/Meetings/OpenStackClient.  Anyone is welcome
  to add things to the agenda, please indicate your IRC handle and if 
  you are unable to attend the meeting sending background/summary to 
  me or someone who will attend is appreciated.
  
  A timezone helper is at http://www.timeanddate.com/worldclock/
  fixedtime.html?hour=18&min=0&sec=0:
  
  03:00 JST
  04:30 ACDT
  19:00 CET
  13:00 EST
  12:00 CST
  10:00 PST
  
  dt
  
  -- 
  
  Dean Troyer
  dtro...@gmail.com
  
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-25 Thread Alex Glikson
Tom Fifield t...@openstack.org wrote on 25/02/2015 06:46:13 AM:

 On 24/02/15 19:27, Daniel P. Berrange wrote:
  On Tue, Feb 24, 2015 at 12:05:17PM +0100, Thierry Carrez wrote:
  Daniel P. Berrange wrote:
  [...]
 
  I'm not familiar with how the translations works, but if they are
  waiting until the freeze before starting translation work I'd
  say that is a mistaken approach. Obviously during active dev part
  of the cycle, some translated strings are in flux, so if translation
  was taking place in parallel there could be some wasted effort, but
  I'd expect that to be the minority case. I think the majority of
  translation work can be done in parallel with dev work and the freeze
  time just needs to tie up the small remaining bits.
 
 
 So, two points:
 
 1) We wouldn't be talking about throwing just a couple of percent of
 their work away.
 
 As an example, even without looking at the introduction of new strings
 or deleting others, you may not be aware that changing a single word in
 a string in the code means that entire string needs to be re-translated.
 Even with the extensive translation memory systems we have making
 suggestions as best they can, we're talking about very, very significant
 amounts of wasted effort.

How difficult would it be to try quantifying this wasted effort? For 
example, someone could write a script that extracts the data for a 
histogram showing the number of strings (e.g., in Nova) that have been 
changed/overridden in subsequent patches up to 1 week apart, between 1 and 
2 weeks apart, and so on up to, say, 52 weeks.
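
A rough sketch of such a script (illustrative only): it walks git log -p
and counts removed lines that touch a gettext-style _( ) marker, bucketed by
commit date, which approximates how many already-translated strings were
invalidated over time. The marker regex and the day-level bucketing are
assumptions to be refined.

    import collections
    import re
    import subprocess

    MARKER = re.compile(r'_\(')   # crude match for translatable strings

    def string_churn(repo_path):
        # One parsable date line per commit, followed by its patch.
        log = subprocess.check_output(
            ['git', '-C', repo_path, 'log', '-p', '--format=COMMIT %ci'],
            universal_newlines=True)
        churn = collections.Counter()
        bucket = None
        for line in log.splitlines():
            if line.startswith('COMMIT '):
                bucket = line.split(' ', 2)[1]    # YYYY-MM-DD of the commit
            elif line.startswith('-') and not line.startswith('---'):
                if MARKER.search(line):
                    churn[bucket] += 1            # a marked string was touched
        return churn

    if __name__ == '__main__':
        for day, count in sorted(string_churn('.').items()):
            print('%s %d' % (day, count))

Coarsening the buckets to weeks, and matching removed strings against the
previous release's translation catalogue, would get closer to the histogram
described above.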

Regards,
Alex


 Regards,
 
 
 Tom
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware][ironic] Configuring active/passive HA Nova compute

2015-02-25 Thread Joe Gordon
On Fri, Feb 20, 2015 at 3:48 AM, Matthew Booth mbo...@redhat.com wrote:

 Gary Kotton came across a doozy of a bug recently:

 https://bugs.launchpad.net/nova/+bug/1419785

 In short, when you start a Nova compute, it will query the driver for
 instances and compare that against the expected host of the instance
 according to the DB. If the driver is reporting an instance the DB
 thinks is on a different host, it assumes the instance was evacuated
 while Nova compute was down, and deletes it on the hypervisor. However,
 Gary found that you trigger this when starting up a backup HA node which
 has a different `host` config setting. i.e. You fail over, and the first
 thing it does is delete all your instances.

 Gary and I both agree on a couple of things:

 1. Deleting all your instances is bad
 2. HA nova compute is highly desirable for some drivers


There is a deeper issue here, that we are trying to work around.  Nova was
never designed to have entire systems running behind a nova-compute. It was
designed to have one nova-compute per 'physical box that runs instances'.

There have been many discussions in the past on how to fix this issue (by
adding a new point in nova where clustered systems can plug in), but if I
remember correctly the gotcha was no one was willing to step up to do it.



 We disagree on the approach to fixing it, though. Gary posted this:

 https://review.openstack.org/#/c/154029/

 I've already outlined my objections to this approach elsewhere, but to
 summarise I think this fixes 1 symptom of a design problem, and leaves
 the rest untouched. If the value of nova compute's `host` changes, then
 the assumption that instances associated with that compute can be
 identified by the value of instance.host becomes invalid. This
 assumption is pervasive, so it breaks a lot of stuff. The worst one is
 _destroy_evacuated_instances(), which Gary found, but if you scan
 nova/compute/manager for the string 'self.host' you'll find lots of
 them. For example, all the periodic tasks are broken, including image
 cache management, and the state of ResourceTracker will be unusual.
 Worse, whenever a new instance is created it will have a different value
 of instance.host, so instances running on a single hypervisor will
 become partitioned based on which nova compute was used to create them.

 In short, the system may appear to function superficially, but it's
 unsupportable.

 I had an alternative idea. The current assumption is that the `host`
 managing a single hypervisor never changes. If we break that assumption,
 we break Nova, so we could assert it at startup and refuse to start if
 it's violated. I posted this VMware-specific POC:

 https://review.openstack.org/#/c/154907/

 However, I think I've had a better idea. Nova creates ComputeNode
 objects for its current configuration at startup which, amongst other
 things, are a map of host:hypervisor_hostname. We could assert when
 creating a ComputeNode that hypervisor_hostname is not already
 associated with a different host, and refuse to start if it is. We would
 give an appropriate error message explaining that this is a
 misconfiguration. This would prevent the user from hitting any of the
 associated problems, including the deletion of all their instances.
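
A minimal sketch of the guard described above; the exception and the lookup
callable are illustrative stand-ins, not Nova's actual object API:

    class HostMismatch(Exception):
        """Another `host` already claims this hypervisor."""


    def assert_hypervisor_not_claimed(my_host, hypervisor_hostname,
                                      lookup_host):
        """Refuse to start if the hypervisor already belongs to another host.

        lookup_host is a callable returning the host currently recorded for
        hypervisor_hostname, or None if no ComputeNode record exists yet.
        """
        existing = lookup_host(hypervisor_hostname)
        if existing is not None and existing != my_host:
            raise HostMismatch(
                'Hypervisor %s is already managed by host %s, but this '
                'service is configured with host=%s. Configure both HA '
                'nodes with the same host value instead.' %
                (hypervisor_hostname, existing, my_host))

A check along these lines could run wherever the ComputeNode record is
created at startup, which is where the misconfiguration first becomes visible.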

 We can still do active/passive HA!

 If we configure both nodes in the active/passive cluster identically,
 including with the same value of `host`, I don't see why this shouldn't
 work today. I don't even think the configuration is onerous. All we
 would be doing is preventing the user from accidentally running a
 misconfigured HA which leads to inconsistent state, and will eventually
 require manual cleanup.

 We would still have to be careful that we don't bring up both nova
 computes simultaneously. The VMware driver, at least, has hardcoded
 assumptions that it is the only writer in certain circumstances. That
 problem would have to be handled separately, perhaps at the messaging
 layer.

 Matt
 --
 Matthew Booth
 Red Hat Engineering, Virtualisation Team

 Phone: +442070094448 (UK)
 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-25 Thread Joe Gordon
On Wed, Feb 25, 2015 at 11:54 AM, Doug Hellmann d...@doughellmann.com
wrote:

 During yesterday’s cross-project meeting [1], we discussed the “Eventlet
 Best Practices” spec [2] started by bnemec. The discussion of that spec
 revolved around the question of whether our cross-project specs repository
 is the right place for this type of document that isn’t a “plan” for a
 change, and is more a reference guide. Eventually we came around to the
 idea of creating a cross-project developer guide to hold these sorts of
 materials.

 That leads to two questions, then:

 1. Should we have a unified developer guide for the project?
 2. Where should it live and how should we manage it?

 I believe we would benefit by having a more formal place to write down
 some of our experiences in ways that make them discoverable. We have been
 using the wiki for this, but it is often challenging to find information in
 the wiki if you don’t generally know where to look for it. That leads to an
 answer to question 2: create a new git repository to host a Sphinx-based
 manual similar to what many projects already have. We would then try to
 unify content from all sources where it applies to the whole project, and
 we could eventually start moving some of the wiki content into it as well.

 Oslo has already started moving some of our reference materials from the
 wiki into a “policy” section of the oslo-specs repository, and one benefit
 we found to using git to manage those policy documents is that we have a
 record of the discussion of the changes to the pages, and we can
 collaborate on changes through our code review system — so everyone on the
 team has a voice and can comment on the changes. It can also be a bit
 easier to do things like include sample code [3].

 Right now each project has its own reference guide, with project-specific
 information in it. Not all projects are going to agree to all of the
 guidelines, but it will be useful to have the conversation about those
 points where we are doing things differently so we can learn from each
 other.


I like the idea of a unified developer reference. There is a bunch of stuff
in the nova devref that isn't nova specific such as:

http://docs.openstack.org/developer/nova/devref/unit_tests.html

As for how to manage what is project specific and what isn't.  Something
along the lines of how we do it in hacking may be nice. Each project has
its own file that has project specific things and references the main
hacking doc (https://github.com/openstack/keystone/blob/master/HACKING.rst).





 If we decide to create the repository, we would also need to decide how it
 would be managed. The rules set up for the cross-project specs repository
 seems close to what we want (everyone can +1/-1; TC members can +2/-2; the
 TC chair tallies the votes and uses workflow+1) [4].

 An alternative is to designate a subsection of the openstack-specs
 repository for the content, as we’ve done in Oslo. In this case, though, I
 think it makes more sense to create a new repository. If there is a general
 agreement to go ahead with the plan, I will set that up with a Sphinx
 project framework to get us started.

 Comments?

 Doug


 [1]
 http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-02-24-21.03.log.html
 [2] https://review.openstack.org/#/c/154642/
 [3] http://docs.openstack.org/developer/oslo.log/api/fixtures.html
 [4]
 http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-02-24-20.02.log.html
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Joe Gordon
On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell tim.b...@cern.ch wrote:


 A few inline comments and a general point

 How do we handle scenarios like volumes when we have a per-component
 janitor rather than a single co-ordinator ?

 To be clean,

 1. nova should shutdown the instance
 2. nova should then ask the volume to be detached
 3. cinder could then perform the 'project deletion' action as configured
 by the operator (such as shelve or backup)
 4. nova could then perform the 'project deletion' action as configured by
 the operator (such as VM delete or shelve)

 If we have both cinder and nova responding to a single message, cinder
 would do 3. Immediately and nova would be doing the shutdown which is
 likely to lead to a volume which could not be shelved cleanly.

 The problem I see with messages is that co-ordination of the actions may
 require ordering between the components.  The disable/enable cases would
 show this in a worse scenario.


You raise two good points.

* How to clean something up may be different for different clouds
* Some cleanup operations have to happen in a specific order

Not sure what the best way to address those two points is.  Perhaps the
best way forward is an openstack-specs spec to hash out these details.
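
To make the ordering point concrete, here is a small illustrative sketch
(none of these names are existing OpenStack APIs) of a janitor that runs
operator-configured cleanup actions strictly in sequence:

    class ProjectJanitor(object):
        """Runs project-cleanup callbacks in an operator-defined order."""

        def __init__(self, ordered_actions):
            # e.g. [('nova', shutdown_instances), ('nova', detach_volumes),
            #       ('cinder', shelve_volumes), ('nova', delete_instances)]
            self._ordered_actions = ordered_actions

        def project_deleted(self, project_id):
            for service, action in self._ordered_actions:
                # Each step completes before the next starts, avoiding the
                # race Tim describes (cinder acting on a volume while nova
                # is still shutting the instance down).
                print('%s: %s' % (service, action.__name__))
                action(project_id)

    def shutdown_instances(project_id): pass    # placeholders for the real
    def detach_volumes(project_id): pass        # per-service actions
    def shelve_volumes(project_id): pass
    def delete_instances(project_id): pass

    janitor = ProjectJanitor([('nova', shutdown_instances),
                              ('nova', detach_volumes),
                              ('cinder', shelve_volumes),
                              ('nova', delete_instances)])
    janitor.project_deleted('demo-project')

Which actions appear in the list, and in what order, would be exactly the
per-cloud policy an operator configures.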



 Tim

  -Original Message-
  From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
  Sent: 19 February 2015 17:49
  To: OpenStack Development Mailing List (not for usage questions); Joe
 Gordon
  Cc: openstack-operat...@lists.openstack.org
  Subject: Re: [Openstack-operators] [openstack-dev] Resources owned by a
  project/tenant are not cleaned up after that project is deleted from
 keystone
 
 
 
  On 2/2/15, 15:41, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
  
  On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com)
  wrote:
  
  
  
  On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
  
  I think the simple answer is yes. We (keystone) should emit
  notifications. And yes other projects should listen.
  
  The only thing really in discussion should be:
  
  1: soft delete or hard delete? Does the service mark it as orphaned, or
  just delete (leave this to nova, cinder, etc to discuss)
  
  2: how to cleanup when an event is missed (e.g rabbit bus goes out to
  lunch).
  
  
  
  
  
  
  I disagree slightly; I don't think projects should directly listen to
  the Keystone notifications. I would rather have the API be something
  from a keystone-owned library, say keystonemiddleware. So something like
  this:
  
  
  from keystonemiddleware import janitor
  
  
  keystone_janitor = janitor.Janitor()
  keystone_janitor.register_callback(nova.tenant_cleanup)
  
  
  keystone_janitor.spawn_greenthread()
  
  
  That way each project doesn't have to include a lot of boilerplate
  code, and keystone can easily modify/improve/upgrade the notification
  mechanism.
  
  


 I assume janitor functions can be used for

 - enable/disable project
 - enable/disable user

  
  
  
  
  
  
  
  
  
  Sure. I’d place this into an implementation detail of where that
  actually lives. I’d be fine with that being a part of Keystone
  Middleware Package (probably something separate from auth_token).
  
  
  —Morgan
  
 
  I think my only concern is what should other projects do and how much do
 we
  want to allow operators to configure this? I can imagine it being
 preferable to
  have safe (without losing much data) policies for this as a default and
 to allow
  operators to configure more destructive policies as part of deploying
 certain
  services.
 

 Depending on the cloud, an operator could want different semantics for
 delete project's impact, between delete or 'shelve' style or maybe disable.

 
  
  
  
  
  
  --Morgan
  
  Sent via mobile
  
   On Feb 2, 2015, at 10:16, Matthew Treinish mtrein...@kortar.org
 wrote:
  
   On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
   This came up in the operators mailing list back in June [1] but
  given the  subject probably didn't get much attention.
  
   Basically there is a really old bug [2] from Grizzly that is still a
  problem  and affects multiple projects.  A tenant can be deleted in
  Keystone even  though other resources in other projects are under
  that project, and those  resources aren't cleaned up.
  
   I agree this probably can be a major pain point for users. We've had
  to work around it  in tempest by creating things like:
  
  
  http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
   and
  http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
  py
  
   to ensure we aren't dangling resources after a run. But, this doesn't
  work in  all cases either. (like with tenant isolation enabled)
  
   I 

Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-25 Thread Doug Hellmann


On Wed, Feb 25, 2015, at 12:35 PM, Johannes Erdfelt wrote:
 On Tue, Feb 24, 2015, Jeremy Stanley fu...@yuggoth.org wrote:
  On 2015-02-24 10:00:51 -0800 (-0800), Johannes Erdfelt wrote:
  [...]
   Recently, I have spent a lot more time waiting on reviews than I
   have spent writing the actual code.
  
  That's awesome, assuming what you mean here is that you've spent
  more time reviewing submitted code than writing more. That's where
  we're all falling down as a project and should be doing better, so I
  applaud your efforts in this area.
 
 I think I understand what you're trying to do here, but to be clear, are
 you saying that I only have myself to blame for how long it takes to
 get code merged nowadays?

I read that as a reminder that we are all collaborators, and that
working together is more effective and less frustrating than not working
together. So while you wait, look at some other contributions and
provide feedback. Others will do the same for your patches. Reviewed
patches improve and land faster. We all win.

Doug

 
 JE
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sqlalchemy-migrate 0.9.5 released

2015-02-25 Thread Matt Riedemann

https://pypi.python.org/pypi/sqlalchemy-migrate/0.9.5

Changes:

mriedem@ubuntu:~/git/sqlalchemy-migrate$ git log --oneline --no-merges 
0.9.4..0.9.5

5feeaba Don't run the test if _setup() fails
c8c5c4b Correcting minor typo
9d212e6 Fix .gitignore for .tox and .testrepository
ae64d82 allow dropping fkeys with sqlite
997855d Add pretty_tox setup
b9caaae script: strip comments in SQL statements
677f374 Replace assertNotEquals with assertNotEqual.
cee9136 Update requirements file matching global requ
c14a311 Work toward Python 3.4 support and testing
5542e1c Fixes the auto-generated manage.py

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2015-02-25 10:51:00 -0800:
 Hi
 
 So a review [1] was recently submitted to cinder to fix up all of the H302
 violations, and turn on the automated check for them. This is certainly a
 reasonable suggestion given the number of manual reviews that -1 for this
 issue, however I'm far from convinced it actually makes the code more
 readable,
 
 Is there anybody who'd like to step forward in defence of this rule and
 explain why it is an improvement? I don't discount for a moment the
 possibility I'm missing something, and welcome the education in that case

I think we've had this conclusion a few times before, but let me
resurrect it:

The reason we have hacking and flake8 and pep8 and etc. etc. is so that
code reviews don't descend into nit picking and style spraying.

I'd personally have a private conversation with anyone who mentioned
this, or any other rule that is in hacking/etc., in a review. I want to
know why people think it is a good idea to bombard users with rules that
are already called out explicitly in automation.

Let the robots do their job, and they will let you do yours (until the
singularity, at which point your job will be hiding from the robots).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread Doug Hellmann


On Wed, Feb 25, 2015, at 02:59 PM, Robert Collins wrote:
 On 26 February 2015 at 08:54, melanie witt melwi...@gmail.com wrote:
  On Feb 25, 2015, at 10:51, Duncan Thomas duncan.tho...@gmail.com wrote:
 
  Is there anybody who'd like to step forward in defence of this rule and 
  explain why it is an improvement? I don't discount for a moment the 
  possibility I'm missing something, and welcome the education in that case
 
  A reason I can think of would be to preserve namespacing (no possibility of 
  function or class name collision upon import). Another reason could be 
  maintainability, scenario being: Person 1 imports ClassA from a module to 
  use, Person 2 comes along later and needs a different class from the module 
  so they import ClassB from the same module to use, and it continues. If 
  only the module had been imported, everybody can just do module.ClassA, 
  module.ClassB instead of potentially multiple imports from the same module 
  of different classes and functions. I've also read it doesn't cost more to 
  import the entire module rather than just a function or a class, as the 
  whole module has to be parsed either way.
 
 I think the primary benefit is that when looking at the code you can
 tell where a name came from. If the code is using libraries that one
 is not familiar with, this makes finding the implementation a lot
 easier (than e.g. googling it and hoping it's unique and not generic
 like 'create' or something).

I think the rule originally came from the way mock works. If you import
a thing in your module and then a test tries to mock where it came from,
your module still uses the version it imported because the name lookup
isn't done again at the point when the test runs. If all external
symbols are accessed through the module that contains them, then the
lookup is done at runtime instead of import time and mocks can replace
the symbols. The same benefit would apply to monkey patching like what
eventlet does, though that's less likely to come up in our own code than
it is for third-party and stdlib modules.
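
A minimal single-file sketch of the effect, using unittest.mock from the
standard library (the external mock library behaves the same way); this is
not OpenStack code, just the two import styles side by side:

    import os.path
    from os.path import exists
    from unittest import mock

    def check_via_name(p):
        # 'exists' was bound to this module at import time, so a later
        # mock.patch('os.path.exists') does not affect this call.
        return exists(p)

    def check_via_module(p):
        # H302 style: the attribute is looked up on the module at call
        # time, so the patch is seen here.
        return os.path.exists(p)

    with mock.patch('os.path.exists', return_value='patched'):
        print(check_via_name('/no/such/path'))    # False  (original function)
        print(check_via_module('/no/such/path'))  # patched (the mock)

check_via_name keeps calling the function it imported, while check_via_module
picks up the patch because the lookup happens at call time.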

Doug

 
 -Rob
 
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread Duncan Thomas
Clint

This rule is not currently enabled in Cinder. This review fixes up all
cases and enables it, which is absolutely 100% the right thing to do if we
decide to implement this rule.

The purpose of this thread is to understand the value of the rule. We
should either enforce it, or else explicitly decide to ignore it, and
educate reviewers who manually comment on it.

I lean against the rule, but there are certainly enough comments coming in
that I'll look and think again, which is a good result for the thread.

On 25 February 2015 at 22:46, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Duncan Thomas's message of 2015-02-25 10:51:00 -0800:
  Hi
 
  So a review [1] was recently submitted to cinder to fix up all of the
 H302
  violations, and turn on the automated check for them. This is certainly a
  reasonable suggestion given the number of manual reviews that -1 for this
  issue, however I'm far from convinced it actually makes the code more
  readable,
 
  Is there anybody who'd like to step forward in defence of this rule and
  explain why it is an improvement? I don't discount for a moment the
  possibility I'm missing something, and welcome the education in that case

 I think we've had this conclusion a few times before, but let me
 resurrect it:

 The reason we have hacking and flake8 and pep8 and etc. etc. is so that
 code reviews don't descend into nit picking and style spraying.

 I'd personally have a private conversation with anyone who mentioned
 this, or any other rule that is in hacking/etc., in a review. I want to
 know why people think it is a good idea to bombard users with rules that
 are already called out explicitly in automation.

 Let the robots do their job, and they will let you do yours (until the
 singularity, at which point your job will be hiding from the robots).

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Matt Joyce
Wondering if heat should be performing this orchestration.

Would provide for a more pluggable front end to the action set.

-matt

On Feb 25, 2015 2:37 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell tim.b...@cern.ch wrote:


 A few inline comments and a general point

 How do we handle scenarios like volumes when we have a per-component janitor 
 rather than a single co-ordinator ?

 To be clean,

 1. nova should shutdown the instance
 2. nova should then ask the volume to be detached
 3. cinder could then perform the 'project deletion' action as configured by 
 the operator (such as shelve or backup)
 4. nova could then perform the 'project deletion' action as configured by 
 the operator (such as VM delete or shelve)

 If we have both cinder and nova responding to a single message, cinder would 
 do 3. Immediately and nova would be doing the shutdown which is likely to 
 lead to a volume which could not be shelved cleanly.

 The problem I see with messages is that co-ordination of the actions may 
 require ordering between the components.  The disable/enable cases would 
 show this in a worse scenario.


 You raise two good points. 

 * How to clean something up may be different for different clouds
 * Some cleanup operations have to happen in a specific order

 Not sure what the best way to address those two points is.  Perhaps the best 
 way forward is a openstack-specs spec to hash out these details.

  

 Tim

  -Original Message-
  From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
  Sent: 19 February 2015 17:49
  To: OpenStack Development Mailing List (not for usage questions); Joe 
  Gordon
  Cc: openstack-operat...@lists.openstack.org
  Subject: Re: [Openstack-operators] [openstack-dev] Resources owned by a
  project/tenant are not cleaned up after that project is deleted from 
  keystone
 
 
 
  On 2/2/15, 15:41, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
  
  On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com)
  wrote:
  
  
  
  On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
  
  I think the simple answer is yes. We (keystone) should emit
  notifications. And yes other projects should listen.
  
  The only thing really in discussion should be:
  
  1: soft delete or hard delete? Does the service mark it as orphaned, or
  just delete (leave this to nova, cinder, etc to discuss)
  
  2: how to cleanup when an event is missed (e.g rabbit bus goes out to
  lunch).
  
  
  
  
  
  
  I disagree slightly, I don't think projects should directly listen to
  the Keystone notifications I would rather have the API be something
  from a keystone owned library, say keystonemiddleware. So something like
  this:
  
  
  from keystonemiddleware import janitor
  
  
  keystone_janitor = janitor.Janitor()
  keystone_janitor.register_callback(nova.tenant_cleanup)
  
  
  keystone_janitor.spawn_greenthread()
  
  
  That way each project doesn't have to include a lot of boilerplate
  code, and keystone can easily modify/improve/upgrade the notification
  mechanism.
  
  


 I assume janitor functions can be used for

 - enable/disable project
 - enable/disable user

  
  
  
  
  
  
  
  
  
  Sure. I’d place this into an implementation detail of where that
  actually lives. I’d be fine with that being a part of Keystone
  Middleware Package (probably something separate from auth_token).
  
  
  —Morgan
  
 
  I think my only concern is what should other projects do and how much do we
  want to allow operators to configure this? I can imagine it being 
  preferable to
  have safe (without losing much data) policies for this as a default and to 
  allow
  operators to configure more destructive policies as part of deploying 
  certain
  services.
 

 Depending on the cloud, an operator could want different semantics for 
 delete project's impact, between delete or 'shelve' style or maybe disable.

 
  
  
  
  
  
  --Morgan
  
  Sent via mobile
  
   On Feb 2, 2015, at 10:16, Matthew Treinish mtrein...@kortar.org wrote:
  
   On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
   This came up in the operators mailing list back in June [1] but
  given the  subject probably didn't get much attention.
  
   Basically there is a really old bug [2] from Grizzly that is still a
  problem  and affects multiple projects.  A tenant can be deleted in
  Keystone even  though other resources in other projects are under
  that project, and those  resources aren't cleaned up.
  
   I agree this probably can be a major pain point for users. We've had
  to work around it  in tempest by creating things like:
  
  
  http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup_service.py
   and
  http://git.openstack.org/cgit/openstack/tempest/tree/tempest/cmd/cleanup.py
  

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-25 Thread Joe Gordon
On Tue, Feb 24, 2015 at 7:00 AM, Doug Hellmann d...@doughellmann.com
wrote:



 On Mon, Feb 23, 2015, at 06:31 PM, Joe Gordon wrote:
  On Mon, Feb 23, 2015 at 11:04 AM, Doug Hellmann d...@doughellmann.com
  wrote:
 
  
  
   On Mon, Feb 23, 2015, at 12:26 PM, Joe Gordon wrote:
On Mon, Feb 23, 2015 at 8:49 AM, Ihar Hrachyshka 
 ihrac...@redhat.com
wrote:
   
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 02/20/2015 07:16 PM, Joshua Harlow wrote:
  Sean Dague wrote:
  On 02/20/2015 12:26 AM, Adam Gandelman wrote:
  It's more than just the naming.  In the original proposal,
  requirements.txt is the compiled list of all pinned deps
  (direct and transitive), while
  requirements.in reflects what people
  will actually use.  Whatever is in requirements.txt affects the
  egg's requires.txt. Instead, we can keep requirements.txt
  unchanged and have it still be the canonical list of
  dependencies, while
  requirements.out/requirements.gate/requirements.whatever is an
  upstream utility we produce and use to keep things sane on our
  slaves.
 
  Maybe all we need is:
 
  * update the existing post-merge job on the requirements repo
  to produce a requirements.txt (as it does now) as well the
  compiled version.
 
  * modify devstack in some way with a toggle to have it process
  dependencies from the compiled version when necessary
 
  I'm not sure how the second bit jives with the existing
  devstack installation code, specifically with the libraries
  from git-or-master but we can probably add something to warm
  the system with dependencies from the compiled version prior to
  calling pip/setup.py/etc
 
  It sounds like you are suggesting we take the tool we use to
  ensure that all of OpenStack is installable together in a
 unified
  way, and change its installation so that it doesn't do that any
  more.
 
  Which I'm fine with.
 
  But if we are doing that we should just whole hog give up on the
  idea that OpenStack can be run all together in a single
  environment, and just double down on the devstack venv work
  instead.
 
  It'd be interesting to see what a distribution (canonical,
  redhat...) would think about this movement. I know yahoo! has
 been
  looking into it for similar reasons (but we are more flexible than
  I think a packager such as canonical/redhat/debian/... would/could
  be). With a move to venv's that seems like it would just offload
  be). With a move to venv's that seems like it would just offload
  the work to find the set of dependencies that work together (in a
  single-install) to packagers instead.
 
  Is that ok/desired at this point?
 

 Honestly, I failed to track all the different proposals. Just
 saying
 from packager perspective: we absolutely rely on requirements.txt
 not
 being a list of hardcoded values from pip freeze, but present us a
 reasonable freedom in choosing versions we want to run in packaged
 products.


in short the current proposal for stable branches is:
   
keep requirements.txt as is, except maybe put some upper bounds on
 the
requirements.
   
Add requirements.gate to specify the *exact* versions we are gating
against
(this would be a full list including all transitive dependencies).
  
   The gate syncs requirements into projects before installing them. Would
   we change the sync script for the gate to work from the
   requirements.gate file, or keep it pulling from requirements.txt?
  
 
  We would only add requirements.gate for stable branches (because we don't
  want to cap/pin  things on master). So I think the answer is sync script
  should work for both.  I am not sure on the exact mechanics of how this
  would work. Whoever ends up driving this bit of work (I think Adam G),
  will
  have to sort out the details.

 OK. I think it's probably worth a spec, then, so we can think it through
 before starting work. Maybe in the cross-project specs repo, to avoid
 having to create one just for requirements? Or we could modify the
 README or something, but the specs repo seems more visible.


Start of the cross project spec https://review.openstack.org/159249



 Doug

 
 
   Doug
  
   
   
 That's why I asked before we should have caps and not pins.

 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iQEcBAEBAgAGBQJU61oJAAoJEC5aWaUY1u57T7cIALySnlpLV0tjrsTH2gZxskH+
 zY+L6E/DukFNZsWxB2XSaOuVdVaP3Oj4eYCZ2iL8OoxLrBotiOYyRFH29f9vjNSX
 h++dErBr0SwIeUtcnEjbk9re6fNP6y5Hqhk1Ac+NSxwL75KlS3bgKnJAhLA08MVB
 5xkGRR7xl2cuYf9eylPlQaAy9rXPCyyRdxZs6mNjZ2vlY6hZx/w/Y7V28R/V4gO4
 qsvMg6Kv+3urDTRuJdEsV6HbN/cXr2+o543Unzq7gcPpDYXRFTLkpCRV2k8mnmA1
 pO9W10F1FCQZiBnLk0c6OypFz9rQmKxpwlNUN5MTMF15Et6DOxGBxMcfr7TpRaQ=
 =WHOH
 -END PGP SIGNATURE-
   

Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-25 Thread Doug Hellmann


On Wed, Feb 25, 2015, at 03:14 PM, Joe Gordon wrote:
 On Wed, Feb 25, 2015 at 11:54 AM, Doug Hellmann d...@doughellmann.com
 wrote:
 
  During yesterday’s cross-project meeting [1], we discussed the Eventlet
  Best Practices” spec [2] started by bnemec. The discussion of that spec
  revolved around the question of whether our cross-project specs repository
  is the right place for this type of document that isn’t a “plan” for a
  change, and is more a reference guide. Eventually we came around to the
  idea of creating a cross-project developer guide to hold these sorts of
  materials.
 
  That leads to two questions, then:
 
  1. Should we have a unified developer guide for the project?
  2. Where should it live and how should we manage it?
 
  I believe we would benefit by having a more formal place to write down
  some of our experiences in ways that make them discoverable. We have been
  using the wiki for this, but it is often challenging to find information in
  the wiki if you don’t generally know where to look for it. That leads to an
  answer to question 2: create a new git repository to host a Sphinx-based
  manual similar to what many projects already have. We would then try to
  unify content from all sources where it applies to the whole project, and
  we could eventually start moving some of the wiki content into it as well.
 
  Oslo has already started moving some of our reference materials from the
  wiki into a “policy” section of the oslo-specs repository, and one benefit
  we found to using git to manage those policy documents is that we have a
  record of the discussion of the changes to the pages, and we can
  collaborate on changes through our code review system — so everyone on the
  team has a voice and can comment on the changes. It can also be a bit
  easier to do things like include sample code [3].
 
  Right now each project has its own reference guide, with project-specific
  information in it. Not all projects are going to agree to all of the
  guidelines, but it will be useful to have the conversation about those
  points where we are doing things differently so we can learn from each
  other.
 
 
 I like the idea of a unified developer reference. There is a bunch of
 stuff
 in the nova devref that isn't nova specific such as:
 
 http://docs.openstack.org/developer/nova/devref/unit_tests.html
 
 As for how to manage what is project specific and what isn't.  Something
 along the lines of how we do it in hacking may be nice. Each project has
 its own file that has project specific things and references the main
 hacking doc
 (https://github.com/openstack/keystone/blob/master/HACKING.rst).

I was more interested in how we come to agree on what is global vs. what
isn't, but I definitely agree that anything deemed project-specific
should stay in the project's documentation somewhere.

Doug

 
 
 
 
 
  If we decide to create the repository, we would also need to decide how it
  would be managed. The rules set up for the cross-project specs repository
  seems close to what we want (everyone can +1/-1; TC members can +2/-2; the
  TC chair tallies the votes and uses workflow+1) [4].
 
  An alternative is to designate a subsection of the openstack-specs
  repository for the content, as we’ve done in Oslo. In this case, though, I
  think it makes more sense to create a new repository. If there is a general
  agreement to go ahead with the plan, I will set that up with a Sphinx
  project framework to get us started.
 
  Comments?
 
  Doug
 
 
  [1]
  http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-02-24-21.03.log.html
  [2] https://review.openstack.org/#/c/154642/
  [3] http://docs.openstack.org/developer/oslo.log/api/fixtures.html
  [4]
  http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-02-24-20.02.log.html
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-25 Thread Joe Gordon
On Wed, Feb 25, 2015 at 12:49 PM, Doug Hellmann d...@doughellmann.com
wrote:



 On Wed, Feb 25, 2015, at 03:14 PM, Joe Gordon wrote:
  On Wed, Feb 25, 2015 at 11:54 AM, Doug Hellmann d...@doughellmann.com
  wrote:
 
   During yesterday’s cross-project meeting [1], we discussed the
 Eventlet
   Best Practices” spec [2] started by bnemec. The discussion of that spec
   revolved around the question of whether our cross-project specs
 repository
   is the right place for this type of document that isn’t a “plan” for a
   change, and is more a reference guide. Eventually we came around to the
   idea of creating a cross-project developer guide to hold these sorts of
   materials.
  
   That leads to two questions, then:
  
   1. Should we have a unified developer guide for the project?
   2. Where should it live and how should we manage it?
  
   I believe we would benefit by having a more formal place to write down
   some of our experiences in ways that make them discoverable. We have
 been
   using the wiki for this, but it is often challenging to find
 information in
   the wiki if you don’t generally know where to look for it. That leads
 to an
   answer to question 2: create a new git repository to host a
 Sphinx-based
   manual similar to what many projects already have. We would then try to
   unify content from all sources where it applies to the whole project,
 and
   we could eventually start moving some of the wiki content into it as
 well.
  
   Oslo has already started moving some of our reference materials from
 the
   wiki into a “policy” section of the oslo-specs repository, and one
 benefit
   we found to using git to manage those policy documents is that we have
 a
   record of the discussion of the changes to the pages, and we can
   collaborate on changes through our code review system — so everyone on
 the
   team has a voice and can comment on the changes. It can also be a bit
   easier to do things like include sample code [3].
  
   Right now each project has its own reference guide, with
 project-specific
   information in it. Not all projects are going to agree to all of the
   guidelines, but it will be useful to have the conversation about those
   points where we are doing things differently so we can learn from each
   other.
  
 
  I like the idea of a unified developer reference. There is a bunch of
  stuff
  in the nova devref that isn't nova specific such as:
 
  http://docs.openstack.org/developer/nova/devref/unit_tests.html
 
  As for how to manage what is project specific and what isn't.  Something
  along the lines of how we do it in hacking may be nice. Each project has
  its own file that has project specific things and references the main
  hacking doc
  (https://github.com/openstack/keystone/blob/master/HACKING.rst).

 I was more interested in how we come to agree on what is global vs. what
 isn't, but I definitely agree that anything deemed project-specific
 should stay in the project's documentation somewhere.


As you know, doing this in Hacking has been difficult.  So I don't have any
good answers here, except I think there is a ton of low hanging fruit that
we can tackle first.



 Doug

 
 
 
 
  
   If we decide to create the repository, we would also need to decide
 how it
   would be managed. The rules set up for the cross-project specs
 repository
   seems close to what we want (everyone can +1/-1; TC members can +2/-2;
 the
   TC chair tallies the votes and uses workflow+1) [4].
  
   An alternative is to designate a subsection of the openstack-specs
   repository for the content, as we’ve done in Oslo. In this case,
 though, I
   think it makes more sense to create a new repository. If there is a
 general
   agreement to go ahead with the plan, I will set that up with a Sphinx
   project framework to get us started.
  
   Comments?
  
   Doug
  
  
   [1]
  
 http://eavesdrop.openstack.org/meetings/crossproject/2015/crossproject.2015-02-24-21.03.log.html
   [2] https://review.openstack.org/#/c/154642/
   [3] http://docs.openstack.org/developer/oslo.log/api/fixtures.html
   [4]
  
 http://eavesdrop.openstack.org/meetings/tc/2015/tc.2015-02-24-20.02.log.html
  
 __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 

Re: [openstack-dev] [neutron][docs] VPNaaS API reference missing...

2015-02-25 Thread Paul Michali
Cool! Glad to see it is not gone. Should it also be on the API reference
pages?

http://developer.openstack.org/api-ref-networking-v2.html

PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali


On Wed, Feb 25, 2015 at 1:42 PM, Salvatore Orlando sorla...@nicira.com
wrote:

 As you can see, netconn-api has gone into the Openstack-attic.
 A few months ago, all neutron API reference docs were moved into
 neutron-specs (similar things happened to other projects).

 The new home of the VPN API spec is [1]

 Salvatore

 [1]
 http://specs.openstack.org/openstack/neutron-specs/specs/api/virtual_private_network_as_a_service__vpnaas_.html

 On 25 February 2015 at 19:26, Paul Michali p...@michali.net wrote:

 Not sure where it disappeared, but there are none of the API pages for
 VPNaaS exist. This was added some time ago (I think it was review 41702
  back in 9/2013).

 Mentioned to Edgar, but wanted to let the community know...


 PCM (Paul Michali)

 IRC pc_m (irc.freenode.com)
 Twitter... @pmichali


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Revisiting the 6 month release cycle

2015-02-25 Thread Ian Cordasco
On 2/25/15, 14:41, Doug Hellmann d...@doughellmann.com wrote:



On Wed, Feb 25, 2015, at 12:35 PM, Johannes Erdfelt wrote:
 On Tue, Feb 24, 2015, Jeremy Stanley fu...@yuggoth.org wrote:
  On 2015-02-24 10:00:51 -0800 (-0800), Johannes Erdfelt wrote:
  [...]
   Recently, I have spent a lot more time waiting on reviews than I
   have spent writing the actual code.
  
  That's awesome, assuming what you mean here is that you've spent
  more time reviewing submitted code than writing more. That's where
  we're all falling down as a project and should be doing better, so I
  applaud your efforts in this area.
 
 I think I understand what you're trying to do here, but to be clear, are
 you saying that I only have myself to blame for how long it takes to
 get code merged nowadays?

I read that as a reminder that we are all collaborators, and that
working together is more effective and less frustrating than not working
together. So while you wait, look at some other contributions and
provide feedback. Others will do the same for your patches. Reviewed
patches improve and land faster. We all win.

Doug

I read it the same way as Doug. I don’t think Jeremy was trying to imply
your reviews would move through more quickly if you reviewed other
people’s work. Just that, as with most open source projects, there’s
always at least 2 distinct groups: people who push code more often and
people who review code more often. I think Jeremy took your comments to
mean that you were reviewing code more often than you were pushing it and
was thanking you for helping review outstanding changes. Reviews in
general are hard to come by on some projects, really good reviews even
harder. All reviews help make the project better.

Cheers,
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][docs] VPNaaS API reference missing...

2015-02-25 Thread Anne Gentle
On Wed, Feb 25, 2015 at 3:11 PM, Paul Michali p...@michali.net wrote:

 Cool! Glad to see it is not gone. Should it also be on the API reference
 pages?

 http://developer.openstack.org/api-ref-networking-v2.html


Yes, we've had a doc bug since last fall about it being missing from the
API Reference.

https://bugs.launchpad.net/openstack-api-site/+bug/1358179

Happy to help someone with that patch.
Thanks,
Anne


 PCM (Paul Michali)

 IRC pc_m (irc.freenode.com)
 Twitter... @pmichali


 On Wed, Feb 25, 2015 at 1:42 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

 As you can see, netconn-api has gone into the Openstack-attic.
 A few months ago, all neutron API reference docs were moved into
 neutron-specs (similar things happened to other projects).

 The new home of the VPN API spec is [1]

 Salvatore

 [1]
 http://specs.openstack.org/openstack/neutron-specs/specs/api/virtual_private_network_as_a_service__vpnaas_.html

 On 25 February 2015 at 19:26, Paul Michali p...@michali.net wrote:

  Not sure where it disappeared to, but none of the API pages for
  VPNaaS exist any more. They were added some time ago (I think it was review
  41702, back in 9/2013).

 Mentioned to Edgar, but wanted to let the community know...


 PCM (Paul Michali)

 IRC pc_m (irc.freenode.com)
 Twitter... @pmichali







-- 
Anne Gentle
annegen...@justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators][Rally][HA-testing][multi-scenarios-load-gen] Proposal to change Rally input task format

2015-02-25 Thread Boris Pavlovic
Hi stackers,


When we started Rally we had just a small idea: make a tool that generates
load and measures performance. Over almost two years a lot has changed in
Rally, and it is now a fairly general testing framework that covers various
topics: stress, load, volume, performance, negative and functional testing.
Since the beginning we have had a scenario-centric approach, where a single
scenario method is called multiple times simultaneously to generate load,
and its duration is collected.

This is a big limitation that doesn't let us easily generate real-life load
(e.g. loading several components simultaneously) or do HA testing (where,
during load generation, we need to disable/kill processes, reboot or power
off physical nodes). To make this possible we just need to run multiple
scenarios in parallel, but that requires a change to the input task format.

I made a proposal for a new Rally input task format in this patch:
https://review.openstack.org/#/c/159065/3/specs/new_rally_input_task_format.yaml

Please review it. Let's try to resolve all UX issues before we start working
on it.

P.S. I hope this will be the last big change to the Rally input task format.


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] patches that only address grammatical/typos

2015-02-25 Thread Sean Dague
We've unwound the gate quite a bit, so the cost of extra patches in the
merge queue fixing trivial things (like comment spelling) is pretty low.

Honestly, I'd much rather merge functional fixes faster and not go an
extra 2 rounds of typo fixing (assuming the English is decipherable),
and merge typo fixes later.

Also, if we are regularly merging typo fixes in projects, people will
actually fix the issues when they see them. If we -1 people and send
them away, they won't.

-Sean

On 02/25/2015 04:24 PM, Bernard Van De Walle wrote:
 Jay,
 I can only confirm your point of view.
 I personally landed such a patch yesterday and saw it as an easy way to
 get familiar with Gerrit.
 
 My goal being to land some more complex patches in the near future.
 
 Bernard
 
 On Wed, Feb 25, 2015 at 12:37 PM, Doug Hellmann d...@doughellmann.com
 mailto:d...@doughellmann.com wrote:
 
 
 
 On Wed, Feb 25, 2015, at 12:36 PM, Jay Faulkner wrote:
 
   On Feb 25, 2015, at 10:26 AM, Ruby Loo rlooya...@gmail.com 
 mailto:rlooya...@gmail.com wrote:
  
   Hi,
  
   I was wondering what people thought about patches that only fix 
 grammatical issues or misspellings in comments in our code.
  
   I can't believe I'm sending out this email, but as a group, I'd like 
 it if we had  a similar understanding so that we treat all patches in a 
 similar (dare I say it, consistent) manner. I've seen negative votes and 
 positive (approved) votes for similar patches. Right now, I look at such 
 submitted patches and ignore them, because I don't know what the fairest 
 thing is. I don't feel right that a patch that was previously submitted gets 
 a -2, whereas another patch gets a +A.
  
   To be clear, I think that anything that is user-facing like (log, 
 exception) messages or our documentation should be cleaned up. (And yes, I am 
 fine using British or American English or a mix here.)
  
   What I'm wondering about are the fixes to docstrings and inline 
 comments that aren't externally visible.
  
   On one hand, It is great that someone submits a patch so maybe we 
 should approve it, so as not to discourage the submitter. On the other hand, 
 how useful are such submissions. It has already been suggested (and maybe 
 discussed to death) that we should approve patches if there are only nits. 
 These grammatical and misspellings fall under nits. If we are explicitly 
 saying that it is OK to merge these nits, then why fix them later, unless 
 they are part of a patch that does more than only address those nits?
  
   I realize that it would take me less time to approve the patches than 
 to write this email, but I wanted to know what the community thought. Some 
 rule-of-thumb would be helpful to me.
  
 
  I personally always ask this question: does it make the software better?
  IMO fixing some of these grammatical issues can. I don’t think we should
  actively encourage such patches, but if someone already did the work, 
 why
  should we run them away? Many folks use patches like this to help them
  learn the process for contributing to OpenStack and I’d hate to run them
  away.
 
  These changes tend to bubble up because they’re an easy way to get
  involved. The time it takes to review and merge them in is an investment
  in that person’s future interest in contributing to OpenStack, or
  possibly open source in general.
 
 +1
 
 We need to keep this sort of thing in mind. If the patch is fairly
 trivial, it's also fairly trivial to review. If it is going to cause a
 more complex patch to need to be rebased, suggest that the proposer
 rebase their patch on top of the complex patch to avoid problems later
 -- that's teaching another lesson, so everyone benefits.
 
 Doug
 
 
  -Jay
 
 
   Thoughts?
  
   --ruby
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-25 Thread Doug Hellmann


On Wed, Feb 25, 2015, at 04:04 PM, Joe Gordon wrote:
 On Tue, Feb 24, 2015 at 7:00 AM, Doug Hellmann d...@doughellmann.com
 wrote:
 
 
 
  On Mon, Feb 23, 2015, at 06:31 PM, Joe Gordon wrote:
   On Mon, Feb 23, 2015 at 11:04 AM, Doug Hellmann d...@doughellmann.com
   wrote:
  
   
   
On Mon, Feb 23, 2015, at 12:26 PM, Joe Gordon wrote:
 On Mon, Feb 23, 2015 at 8:49 AM, Ihar Hrachyshka 
  ihrac...@redhat.com
 wrote:

 
  On 02/20/2015 07:16 PM, Joshua Harlow wrote:
   Sean Dague wrote:
   On 02/20/2015 12:26 AM, Adam Gandelman wrote:
   Its more than just the naming.  In the original proposal,
   requirements.txt is the compiled list of all pinned deps
   (direct and transitive), while
    requirements.in reflects what people
   will actually use.  Whatever is in requirements.txt affects the
   egg's requires.txt. Instead, we can keep requirements.txt
   unchanged and have it still be the canonical list of
   dependencies, while
   reqiurements.out/requirements.gate/requirements.whatever is an
   upstream utility we produce and use to keep things sane on our
   slaves.
  
   Maybe all we need is:
  
   * update the existing post-merge job on the requirements repo
   to produce a requirements.txt (as it does now) as well the
   compiled version.
  
   * modify devstack in some way with a toggle to have it process
   dependencies from the compiled version when necessary
  
   I'm not sure how the second bit jives with the existing
   devstack installation code, specifically with the libraries
   from git-or-master but we can probably add something to warm
   the system with dependencies from the compiled version prior to
    calling pip/setup.py/etc
  
   It sounds like you are suggesting we take the tool we use to
   ensure that all of OpenStack is installable together in a
  unified
   way, and change it's installation so that it doesn't do that any
   more.
  
   Which I'm fine with.
  
   But if we are doing that we should just whole hog give up on the
   idea that OpenStack can be run all together in a single
   environment, and just double down on the devstack venv work
   instead.
  
   It'd be interesting to see what a distribution (canonical,
   redhat...) would think about this movement. I know yahoo! has
  been
   looking into it for similar reasons (but we are more flexibly
  then
   I think a packager such as canonical/redhat/debian/... would/culd
   be). With a move to venv's that seems like it would just offload
   the work to find the set of dependencies that work together (in a
   single-install) to packagers instead.
  
   Is that ok/desired at this point?
  
 
  Honestly, I failed to track all the different proposals. Just
  saying
  from packager perspective: we absolutely rely on requirements.txt
  not
  being a list of hardcoded values from pip freeze, but present us a
  reasonable freedom in choosing versions we want to run in packaged
  products.
 
 
 in short the current proposal for stable branches is:

 keep requirements.txt as is, except maybe put some upper bounds on
  the
 requirements.

 Add requirements.gate to specify the *exact* versions we are gating
 against
 (this would be a full list including all transitive dependencies).
   
The gate syncs requirements into projects before installing them. Would
we change the sync script for the gate to work from the
requirements.gate file, or keep it pulling from requirements.txt?
   
  
   We would only add requirements.gate for stable branches (because we don't
   want to cap/pin  things on master). So I think the answer is sync script
   should work for both.  I am not sure on the exact mechanics of how this
   would work. Whoever ends up driving this bit of work (I think Adam G),
   will
   have to sort out the details.
 
  OK. I think it's probably worth a spec, then, so we can think it through
  before starting work. Maybe in the cross-project specs repo, to avoid
  having to create one just for requirements? Or we could modify the
  README or something, but the specs repo seems more visible.
 
 
 Start of the cross project spec https://review.openstack.org/159249

Thanks, Joe!
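
To make the proposed split concrete for anyone skimming the thread, it would
look roughly like the sketch below; the file contents and version numbers are
invented for illustration and are not taken from the spec:

    # requirements.txt on a stable branch: ranges, with upper bounds (caps)
    oslo.config>=1.4.0,<1.5.0
    requests>=2.2.0,!=2.4.0,<2.5.0

    # requirements.gate: exact pins of everything the gate installed,
    # including transitive dependencies
    oslo.config==1.4.3
    requests==2.4.3
    six==1.9.0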

 
 
 
  Doug
 
  
  
Doug
   


  That's why I asked before we should have caps and not pins.
 
  /Ihar
  

Re: [openstack-dev] [Ironic] patches that only address grammatical/typos

2015-02-25 Thread Bernard Van De Walle
Jay,
I can only confirm your point of view.
I personally landed such a patch yesterday and saw it as an easy way to get
familiar with Gerrit.

My goal being to land some more complex patches in the near future.

Bernard

On Wed, Feb 25, 2015 at 12:37 PM, Doug Hellmann d...@doughellmann.com
wrote:



 On Wed, Feb 25, 2015, at 12:36 PM, Jay Faulkner wrote:
 
   On Feb 25, 2015, at 10:26 AM, Ruby Loo rlooya...@gmail.com wrote:
  
   Hi,
  
   I was wondering what people thought about patches that only fix
 grammatical issues or misspellings in comments in our code.
  
   I can't believe I'm sending out this email, but as a group, I'd like
 it if we had  a similar understanding so that we treat all patches in a
 similar (dare I say it, consistent) manner. I've seen negative votes and
 positive (approved) votes for similar patches. Right now, I look at such
 submitted patches and ignore them, because I don't know what the fairest
 thing is. I don't feel right that a patch that was previously submitted
 gets a -2, whereas another patch gets a +A.
  
   To be clear, I think that anything that is user-facing like (log,
 exception) messages or our documentation should be cleaned up. (And yes, I
 am fine using British or American English or a mix here.)
  
   What I'm wondering about are the fixes to docstrings and inline
 comments that aren't externally visible.
  
   On one hand, It is great that someone submits a patch so maybe we
 should approve it, so as not to discourage the submitter. On the other
 hand, how useful are such submissions. It has already been suggested (and
 maybe discussed to death) that we should approve patches if there are only
 nits. These grammatical and misspellings fall under nits. If we are
 explicitly saying that it is OK to merge these nits, then why fix them
 later, unless they are part of a patch that does more than only address
 those nits?
  
   I realize that it would take me less time to approve the patches than
 to write this email, but I wanted to know what the community thought. Some
 rule-of-thumb would be helpful to me.
  
 
  I personally always ask this question: does it make the software better?
  IMO fixing some of these grammatical issues can. I don’t think we should
  actively encourage such patches, but if someone already did the work, why
  should we run them away? Many folks use patches like this to help them
  learn the process for contributing to OpenStack and I’d hate to run them
  away.
 
  These changes tend to bubble up because they’re an easy way to get
  involved. The time it takes to review and merge them in is an investment
  in that person’s future interest in contributing to OpenStack, or
  possibly open source in general.

 +1

 We need to keep this sort of thing in mind. If the patch is fairly
 trivial, it's also fairly trivial to review. If it is going to cause a
 more complex patch to need to be rebased, suggest that the proposer
 rebase their patch on top of the complex patch to avoid problems later
 -- that's teaching another lesson, so everyone benefits.

 Doug

 
  -Jay
 
 
   Thoughts?
  
   --ruby
  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-25 Thread michael mccune

On 02/25/2015 02:54 PM, Doug Hellmann wrote:

During yesterday’s cross-project meeting [1], we discussed the “Eventlet Best
Practices” spec [2] started by bnemec. The discussion of that spec revolved around
the question of whether our cross-project specs repository is the right place for 
this type of document that isn’t a “plan” for a change, and is more a reference 
guide. Eventually we came around to the idea of creating a cross-project developer 
guide to hold these sorts of materials.

That leads to two questions, then:

1. Should we have a unified developer guide for the project?


+1, this sounds like a fantastic idea.


2. Where should it live and how should we manage it?


i like the idea of creating a new repository, akin to the other 
OpenStack manuals. i think it would be great if there was an easy way 
for the individual projects to add their specific recommendations as well.


the main downside i see to creating a new repo/manual/infra is the 
project overhead. hopefully there will be enough interest that this 
won't be an issue though.


mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday February 26th at 17:00 UTC

2015-02-25 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, February 26th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones tomorrow's
meeting will be at:

12:00 EST
02:00 JST
03:30 ACDT
18:00 CET
11:00 CST
9:00 PST

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] H302 considered harmful

2015-02-25 Thread Clint Byrum
Excerpts from Duncan Thomas's message of 2015-02-25 12:51:35 -0800:
 Clint
 
 This rule is not currently enabled in Cinder. This review fixes up all
 cases and enables it, which is absolutely 100% the right thing to do if we
 decide to implement this rule.
 
 The purpose of this thread is to understand the value of the rule. We
 should either enforce it, or else explicitly decide to ignore it, and
 educate reviewers who manually comment on it.
 
 I lean against the rule, but there are certainly enough comments coming in
 that I'll look and think again, which is a good result for the thread.
 

Thanks for your thoughts Duncan, they are appreciated.

I believe that what's being missed here is that arguing for or against the
rule, or even taking the time to try to understand it, is far more costly
than simply following it if it is enabled or ignoring it if it is not
enabled.

I don't think any of us want to be project historians, so we should
just make sure to have a good commit message when we turn it on or off,
and otherwise move forward with the actual development of OpenStack.
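
For anyone reading the archive without the hacking docs to hand, H302 is the
'import only modules' check. A minimal illustration:

    import os.path             # fine: importing a module
    from os import path        # fine: path is still a module
    from os.path import join   # H302 flags this: join is a function, not a module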

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
You can horizontally split as well (if I understand what axis definitions
you are using). The Big Switch driver for example will bind ports that
belong to hypervisors running IVS while leaving the OVS driver to bind
ports attached to hypervisors running OVS.

I don't fully understand your comments about  the architecture of neutron.
Most work is delegated to either agents or a backend server. Basically
every ML2 driver pushes the work via an agent notification or an HTTP call
of some sort. If you do want to have a discussion about the architecture of
neutron, please start a new thread. This one is related to developing an
OVN plugin/driver and we have already diverged too far.
On Feb 25, 2015 6:15 PM, loy wolfe loywo...@gmail.com wrote:

 Oh, what you mean is vertical splitting, while I'm talking about
 horizontal splitting.

 I'm a little confused about why Neutron is designed so differently with
 Nova and Cinder. In fact MD could be very simple, delegating nearly all
 things out to agent. Remember Cinder volume manager? The real storage
 backend could also be deployed outside the server farm as the dedicated
 hardware, not necessary the local host based resource. The agent could act
 as the proxy to an outside module, instead of heavy burden on central
 plugin servers, and also, all backend can inter-operate and co-exist
 seamlessly (like a single vxlan across ovs and tor in hybrid deployment)


 On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so how about security group, and all other things which need
 coordination between vswitchs?


  There is a missing way to coordinate connectivity with tunnel
 networks across drivers, but that doesn't mean you can't run multiple
 drivers to handle different types or just to provide additional features
 (auditing,  more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 has been designed for co-existing of multiple heterogeneous
 backends, it works well for all agent solutions: OVS, Linux Bridge, and
 even ofagent.

 However, when things come with all kinds of agentless solutions,
 especially all kinds of SDN controller (except for Ryu-Lib style),
 Mechanism Driver became the new monolithic place despite the benefits of
 code reduction:  MDs can't inter-operate neither between themselves nor
 with ovs/bridge agent L2pop, each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is that keep those thin MD(with agent) in ML2
 framework (also inter-operate with native Neutron L3/service plugins),
 while all other fat MD(agentless) go with the old style of monolithic
 plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking).
 I am getting a bit confused by this discussion. Aren’t there already a 
 few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide. Table 7.3
 Available networking plugi-ins)? So how do we have interoperability 
 between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for
 “monolithic” J



 Regards,

 Amit Saha

 Cisco, Bangalore







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Kevin Benton
In the cases I'm referring to, OVS handles the security groups and
vswitch.  The other drivers handle fabric configuration for VLAN tagging to
the host and whatever other plumbing they want to do.
On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so how about security group, and all other things which need coordination
 between vswitchs?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 has been designed for co-existing of multiple heterogeneous
 backends, it works well for all agent solutions: OVS, Linux Bridge, and
 even ofagent.

 However, when things come with all kinds of agentless solutions,
 especially all kinds of SDN controller (except for Ryu-Lib style),
 Mechanism Driver became the new monolithic place despite the benefits of
 code reduction:  MDs can't inter-operate neither between themselves nor
 with ovs/bridge agent L2pop, each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is that keep those thin MD(with agent) in ML2
 framework (also inter-operate with native Neutron L3/service plugins),
 while all other fat MD(agentless) go with the old style of monolithic
 plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide. Table 7.3
 Available networking plugi-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic”
 J



 Regards,

 Amit Saha

 Cisco, Bangalore









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
Oh, what you mean is vertical splitting, while I'm talking about horizontal
splitting.

I'm a little confused about why Neutron is designed so differently from
Nova and Cinder. In fact an MD could be very simple, delegating nearly
everything out to an agent. Remember the Cinder volume manager? The real
storage backend can also be deployed outside the server farm as dedicated
hardware, not necessarily a local host-based resource. The agent could act
as a proxy to an outside module, instead of putting a heavy burden on the
central plugin servers, and all backends could then inter-operate and
co-exist seamlessly (like a single VXLAN spanning OVS and ToR switches in a
hybrid deployment).
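
To make the thin-MD-plus-agent idea concrete, here is a rough standalone
sketch (illustrative Python only -- these classes are simplified stand-ins,
not the real neutron.plugins.ml2.driver_api interface) contrasting it with a
fat, agentless driver that calls out to a controller:

    class ThinMechanismDriver(object):
        """Thin driver: the server only records state and notifies an agent."""
        def __init__(self, agent_rpc):
            self.agent_rpc = agent_rpc      # assumed RPC client to the L2 agent

        def create_port_postcommit(self, context):
            # The agent on the compute node programs the vswitch, security
            # groups, VXLAN/L2pop, etc.; the server side stays lightweight.
            self.agent_rpc.port_update(context.current)

    class FatMechanismDriver(object):
        """Fat, agentless driver: the whole job is pushed to a backend controller."""
        def __init__(self, session, controller_url):
            self.session = session          # e.g. a requests.Session()
            self.url = controller_url

        def create_port_postcommit(self, context):
            # All L2-L7 decisions happen in the controller, which is why such
            # drivers tend not to inter-operate with agent-based L2pop/VXLAN.
            self.session.post(self.url + '/ports', json=context.current)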


On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

 so how about security group, and all other things which need coordination
 between vswitchs?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 has been designed for co-existing of multiple heterogeneous
 backends, it works well for all agent solutions: OVS, Linux Bridge, and
 even ofagent.

 However, when things come with all kinds of agentless solutions,
 especially all kinds of SDN controller (except for Ryu-Lib style),
 Mechanism Driver became the new monolithic place despite the benefits of
 code reduction:  MDs can't inter-operate neither between themselves nor
 with ovs/bridge agent L2pop, each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is that keep those thin MD(with agent) in ML2
 framework (also inter-operate with native Neutron L3/service plugins),
 while all other fat MD(agentless) go with the old style of monolithic
 plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking).
 I am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide. Table 7.3
 Available networking plugi-ins)? So how do we have interoperability 
 between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic”
 J



 Regards,

 Amit Saha

 Cisco, Bangalore









__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Dolph Mathews
On Wed, Feb 25, 2015 at 5:42 PM, Zane Bitter zbit...@redhat.com wrote:

 On 25/02/15 15:37, Joe Gordon wrote:



 On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell tim.b...@cern.ch wrote:


 A few inline comments and a general point

 How do we handle scenarios like volumes when we have a per-component
 janitor rather than a single co-ordinator ?

 To be clean,

 1. nova should shutdown the instance
 2. nova should then ask the volume to be detached
 3. cinder could then perform the 'project deletion' action as
 configured by the operator (such as shelve or backup)
 4. nova could then perform the 'project deletion' action as
 configured by the operator (such as VM delete or shelve)

 If we have both cinder and nova responding to a single message,
 cinder would do 3. Immediately and nova would be doing the shutdown
 which is likely to lead to a volume which could not be shelved
 cleanly.

 The problem I see with messages is that co-ordination of the actions
 may require ordering between the components.  The disable/enable
 cases would show this in a worse scenario.


 You raise two good points.

 * How to clean something up may be different for different clouds
 * Some cleanup operations have to happen in a specific order

 Not sure what the best way to address those two points is.  Perhaps the
 best way forward is a openstack-specs spec to hash out these details.


 For completeness, if nothing else, it should be noted that another option
 is for Keystone to refuse to delete the project until all resources within
 it have been removed by a user.


Keystone has no knowledge of the tenant-owned resources in OpenStack (nor
is it a client of the other services), so that's not really feasible.
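
To make the notification-listening alternative concrete, a rough sketch of
what a consuming service would run (illustrative only: the oslo.messaging
calls are real, but the event_type string and the payload key are assumptions
from memory -- check keystone's notification documentation for the
authoritative names):

    from oslo_config import cfg
    import oslo_messaging

    def cleanup_project_resources(project_id):
        # placeholder: each service would plug in its own cleanup logic here
        pass

    class ProjectDeletedEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'identity.project.deleted':
                # 'resource_info' is assumed to carry the deleted project id
                cleanup_project_resources(payload.get('resource_info'))

    transport = oslo_messaging.get_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [ProjectDeletedEndpoint()], executor='blocking')
    listener.start()
    listener.wait()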



 It's hard to know at this point which would be more painful. Both sound
 horrific in their own way :D

 cheers,
 Zane.


 Tim

   -Original Message-
   From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
   Sent: 19 February 2015 17:49
   To: OpenStack Development Mailing List (not for usage questions);
 Joe Gordon
   Cc: openstack-operat...@lists.openstack.org
   Subject: Re: [Openstack-operators] [openstack-dev] Resources
 owned by a
   project/tenant are not cleaned up after that project is deleted
 from keystone
  
  
  
   On 2/2/15, 15:41, Morgan Fainberg morgan.fainb...@gmail.com wrote:
  
   
    On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com) wrote:
   
   
   
    On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg morgan.fainb...@gmail.com wrote:
   
   I think the simple answer is yes. We (keystone) should emit
   notifications. And yes other projects should listen.
   
   The only thing really in discussion should be:
   
   1: soft delete or hard delete? Does the service mark it as
 orphaned, or
   just delete (leave this to nova, cinder, etc to discuss)
   
   2: how to cleanup when an event is missed (e.g rabbit bus goes
 out to
   lunch).
   
   
   
   
   
   
   I disagree slightly, I don't think projects should directly
 listen to
   the Keystone notifications I would rather have the API be
 something
   from a keystone owned library, say keystonemiddleware. So
 something like
   this:
   
   
   from keystonemiddleware import janitor
   
   
   keystone_janitor = janitor.Janitor()
   keystone_janitor.register_callback(nova.tenant_cleanup)
   
   
   keystone_janitor.spawn_greenthread()
   
   
   That way each project doesn't have to include a lot of boilerplate
   code, and keystone can easily modify/improve/upgrade the
 notification
   mechanism.
   
   


 I assume janitor functions can be used for

 - enable/disable project
 - enable/disable user

  
  
  
  
  
  
  
  
  
  Sure. I’d place this into an implementation detail of where that
  actually lives. I’d be fine with that being a part of Keystone
  Middleware Package (probably something separate from auth_token).
  
  
  —Morgan
  
 
  I think my only concern is what should other projects do and how
 much do we
  want to allow operators to configure this? I can imagine it being
 preferable to
  have safe (without losing much data) policies for this as a default
 and to allow
  operators to configure more destructive policies as part of
 deploying certain
  services.
 

 Depending on the cloud, an operator could want different semantics
 for delete 

Re: [openstack-dev] [Ironic] Stepping down from Ironic Core

2015-02-25 Thread Devananda van der Veen
Robert,

Thank you for all your input and insight over the last two years. Our
architectural discussions have been invaluable to me and helped shape
Ironic into what it is today.

All the best,
Devananda

On Tue Feb 24 2015 at 4:26:54 PM Robert Collins robe...@robertcollins.net
wrote:

 Like with TripleO, I've not been pulling my weight as a core reviewer
 for a bit; I'd be very hesitant to +A in the project as a result.

 I'm still very interested in Ironic, but its not where my immediate focus
 is.

 Cheers,
 Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] (RE: Change in openstack/neutron-specs[master]: Introducing Tap-as-a-Service)

2015-02-25 Thread Paul Carver

On 2/24/2015 6:47 PM, Kevin Benton wrote:


More seriously, have you considered starting a tap-as-a-service project on
stackforge now that the services split has established a framework for
advanced services? Uploading the code you are using to do it is a great way
to get people motivated to try it, propose new features, critique it, etc.
If you can't upload it because your approach would be proprietary, then
would upstream support even be relevant?



Right now we haven't written any code, but my concern is really more 
about standardizing the API. We're currently weighing two categories of 
options. One is to evaluate a number of open and closed source SDN 
products as plugins to Neutron. I'm not going to list names, but the 
candidates are represented in the plugins and ml2 subdirectories of 
Neutron. Many of these provide tap/mirror functionality, but since 
there's no standard Neutron API we would be coding to a vendor-specific 
API, and we would have to call multiple different APIs to do the same 
thing if we deploy different products in different locations over time.


The other option that we've considered is to extend a piece of software 
we've written that currently has nothing to do with tap/mirror but does 
perform some OvS flow modifications. If we went with this route we 
certainly would consider open sourcing it, but right now this is the 
less likely plan B.


It actually doesn't matter to me very much whether Neutron implements 
the tap functionality, but I'd really like to see a standard API call 
that the various SDN vendors could get behind. Right now it's possible 
to make Neutron API calls to manipulate networks, ports, and subnets and 
expect that they will function essentially the same way regardless of 
the underlying implementation from a variety of hardware and software 
vendors. But if we want to mirror a vSwitch port to an analyzer we have 
a myriad of vendor specific API calls that are entirely dependent on the 
underlying software and/or hardware beneath Neutron.
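
Purely to make "a standard API call" concrete -- the resource below does not
exist today, and every name in it is hypothetical -- this is the kind of
single, vendor-neutral call we would like to be able to make against any
backend:

    # Hypothetical, vendor-neutral port-mirror request (illustration only).
    import requests

    body = {"tap": {"source_port_id": "<port-uuid>",
                    "destination_port_id": "<analyzer-port-uuid>",
                    "direction": "both"}}
    resp = requests.post("http://neutron.example.com:9696/v2.0/taps",
                         json=body,
                         headers={"X-Auth-Token": "<token>"})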



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] patches that only address grammatical/typos

2015-02-25 Thread Robert Collins
On 26 February 2015 at 05:26, Ruby Loo rlooya...@gmail.com wrote:
 Hi,

 I was wondering what people thought about patches that only fix grammatical
 issues or misspellings in comments in our code.

 I can't believe I'm sending out this email, but as a group, I'd like it if
 we had  a similar understanding so that we treat all patches in a similar
 (dare I say it, consistent) manner. I've seen negative votes and positive
 (approved) votes for similar patches. Right now, I look at such submitted
 patches and ignore them, because I don't know what the fairest thing is. I
 don't feel right that a patch that was previously submitted gets a -2,
 whereas another patch gets a +A.

 To be clear, I think that anything that is user-facing like (log, exception)
 messages or our documentation should be cleaned up. (And yes, I am fine
 using British or American English or a mix here.)

 What I'm wondering about are the fixes to docstrings and inline comments
 that aren't externally visible.

 On one hand, It is great that someone submits a patch so maybe we should
 approve it, so as not to discourage the submitter. On the other hand, how
 useful are such submissions. It has already been suggested (and maybe
 discussed to death) that we should approve patches if there are only nits.
 These grammatical and misspellings fall under nits. If we are explicitly
 saying that it is OK to merge these nits, then why fix them later, unless
 they are part of a patch that does more than only address those nits?

 I realize that it would take me less time to approve the patches than to
 write this email, but I wanted to know what the community thought. Some
 rule-of-thumb would be helpful to me.

 Thoughts?

I think improvements are improvements and we should welcome them - and
allow single +2/+A so long as they are not touching code.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-25 Thread Thomas Goirand
On 02/24/2015 12:27 PM, Daniel P. Berrange wrote:
 I'm actually trying to judge it from the POV of users, not just
 developers. I find it pretty untenable that in the fast moving
 world of cloud, users have to wait as long as 6 months for a
 feature to get into a openstack release, often much longer.

If you were trying to judge from the POV of users, then you would
consider that, basically, they don't really care about the brand new shiny
features which just appeared. They care about having long-term support for
whatever version of OpenStack they have installed, without the
headaches of upgrading, which is famously painful with OpenStack. This
shows clearly in our user surveys, which are presented at every summit:
users are lagging behind, with a majority still running OpenStack
releases which are already EOL.

In fact, if you want to judge from the POV of our users, we should *SLOW
DOWN* our release cycles, and probably move to something like one
release every year or two. We should also try to have longer periods of
support for our stable releases, which would (with my Debian package
maintainer hat on!) help distributions do that kind of security support.

Debian Jessie will be released a few months from now, just before
Icehouse (which it ships) goes EOL. RedHat, Canonical, IBM, and so
many more are also on the same (sinking) ship.

As for my employer's side of things, we've seen numerous cases of our
customers requesting LTS, which we have to provide by ourselves,
since it's not supported upstream.

 I think the majority of
 translation work can be done in parallel with dev work and the freeze
 time just needs to tie up the small remaining bits.

It'd be nice indeed, but I've never seen any project (open source or
not) working this way for translations.

 Documentation is again something I'd expect to be done more or less
 in parallel with dev.

Let's go back to reality: the Juno install-guide is still not finished,
and the doc team is lagging behind.

 It would be reasonable for the vulnerability team to take the decision
 that they'll support fixes for master, and any branches that the stable
 team decide to support. ie they would not neccessarily have to commit
 to shipping fixes for every single release made.

I've been crying out for this type of decision, i.e. drop Juno support early
and continue to maintain Icehouse for longer. I wish this would happen, but
the release team has always complained that nobody works on maintaining the
gate for the stable branches. Unless this changes, I don't see much hope... :(

 I really not trying to focus on the developers woes. I'm trying to focus on
 making OpenStack better serve our users. My main motiviation here is that I
 think we're doing a pretty terrible job at getting work done that is important
 to our users in a timely manner. This is caused by a workflow  release cycle
 that is negatively impacting the developers.

Our workflow and the release cycle are two separate things. From my POV,
it'd be a mistake to believe that switching to a different release cycle
will fix our workflow.

Also, I'd like to point out something: for the last two years I have
released (packaged) each and every beta release we have done. But either
they are bug free (you bet... :)), or nobody uses them (more likely),
because I've *never ever* received any bug reports about them.
Realistically, there are no consumers for the beta releases.

Hoping my point of view is helpful,

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Zane Bitter

On 25/02/15 19:15, Dolph Mathews wrote:


On Wed, Feb 25, 2015 at 5:42 PM, Zane Bitter zbit...@redhat.com wrote:

On 25/02/15 15:37, Joe Gordon wrote:



On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell tim.b...@cern.ch wrote:


 A few inline comments and a general point

 How do we handle scenarios like volumes when we have a
per-component
 janitor rather than a single co-ordinator ?

 To be clean,

 1. nova should shutdown the instance
 2. nova should then ask the volume to be detached
 3. cinder could then perform the 'project deletion' action as
 configured by the operator (such as shelve or backup)
 4. nova could then perform the 'project deletion' action as
 configured by the operator (such as VM delete or shelve)

 If we have both cinder and nova responding to a single message,
 cinder would do 3. Immediately and nova would be doing the
shutdown
 which is likely to lead to a volume which could not be
shelved cleanly.

 The problem I see with messages is that co-ordination of
the actions
 may require ordering between the components.  The
disable/enable
 cases would show this in a worse scenario.


You raise two good points.

* How to clean something up may be different for different clouds
* Some cleanup operations have to happen in a specific order

Not sure what the best way to address those two points is.
Perhaps the
best way forward is a openstack-specs spec to hash out these
details.


For completeness, if nothing else, it should be noted that another
option is for Keystone to refuse to delete the project until all
resources within it have been removed by a user.


Keystone has no knowledge of the tenant-owned resources in OpenStack
(nor is it a client of the other services), so that's not really feasible.


As pointed out above, Keystone doesn't have any knowledge of how to 
orchestrate the deletion of the tenant-owned resources either (and in 
large part neither do the other services - except Heat, and then only 
for the ones it created), so by that logic neither option is feasible.


Choose your poison ;)



It's hard to know at this point which would be more painful. Both
sound horrific in their own way :D

cheers,
Zane.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Devstack+Manila -- old method won't work anymore

2015-02-25 Thread Ben Swartzlander
Many of you may have gotten in the habit of copying the contrib/devstack 
folder in order to make Manila work with Devstack.


https://review.openstack.org/#/c/158054/

Once the above change merges, that method of installing Manila with 
Devstack won't work anymore. Valeriy was kind enough to include 
instructions on the new method in the commit message of his change:


https://github.com/openstack/manila/commit/b5f0ccabfaab837b6d7738786f63ddca6a1705ff

I have also updated the Wiki to outline the new method:

https://wiki.openstack.org/wiki/Manila/KiloDevstack
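
Assuming the new method is devstack's external plugin mechanism (an
assumption on my part -- the wiki page above is authoritative if that is
off), the local.conf change looks roughly like:

    [[local|localrc]]
    enable_plugin manila https://github.com/openstack/manila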

-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

So how about security groups, and all the other things which need
coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel networks
 across drivers, but that doesn't mean you can't run multiple drivers to
 handle different types or just to provide additional features (auditing,
 more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 has been designed for co-existing of multiple heterogeneous
 backends, it works well for all agent solutions: OVS, Linux Bridge, and
 even ofagent.

 However, when things come with all kinds of agentless solutions,
 especially all kinds of SDN controller (except for Ryu-Lib style),
 Mechanism Driver became the new monolithic place despite the benefits of
 code reduction:  MDs can't inter-operate neither between themselves nor
 with ovs/bridge agent L2pop, each MD has its own exclusive vxlan
 mapping/broadcasting solution.

 So my suggestion is that keep those thin MD(with agent) in ML2
 framework (also inter-operate with native Neutron L3/service plugins),
 while all other fat MD(agentless) go with the old style of monolithic
 plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I
 am getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide. Table 7.3
 Available networking plugi-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” J



 Regards,

 Amit Saha

 Cisco, Bangalore








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Dolph Mathews
On Wed, Feb 25, 2015 at 3:02 PM, Matt Joyce m...@nycresistor.com wrote:

 Wondering if heat should be performing this orchestration.


I wouldn't expect heat to have access to everything that needs to be
cleaned up.



 Would provide for a more pluggable front end to the action set.

 -matt

 On Feb 25, 2015 2:37 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
  On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell tim.b...@cern.ch wrote:
 
 
  A few inline comments and a general point
 
  How do we handle scenarios like volumes when we have a per-component
 janitor rather than a single co-ordinator ?
 
  To be clean,
 
  1. nova should shutdown the instance
  2. nova should then ask the volume to be detached
  3. cinder could then perform the 'project deletion' action as
 configured by the operator (such as shelve or backup)
  4. nova could then perform the 'project deletion' action as configured
 by the operator (such as VM delete or shelve)
 
  If we have both cinder and nova responding to a single message, cinder
 would do 3. Immediately and nova would be doing the shutdown which is
 likely to lead to a volume which could not be shelved cleanly.
 
  The problem I see with messages is that co-ordination of the actions
 may require ordering between the components.  The disable/enable cases
 would show this in a worse scenario.
 
 
  You raise two good points.
 
  * How to clean something up may be different for different clouds
  * Some cleanup operations have to happen in a specific order
 
  Not sure what the best way to address those two points is.  Perhaps the
 best way forward is a openstack-specs spec to hash out these details.
 
 
 
  Tim
 
   -Original Message-
   From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
   Sent: 19 February 2015 17:49
   To: OpenStack Development Mailing List (not for usage questions); Joe
 Gordon
   Cc: openstack-operat...@lists.openstack.org
   Subject: Re: [Openstack-operators] [openstack-dev] Resources owned by
 a
   project/tenant are not cleaned up after that project is deleted from
 keystone
  
  
  
   On 2/2/15, 15:41, Morgan Fainberg morgan.fainb...@gmail.com
 wrote:
  
   
   On February 2, 2015 at 1:31:14 PM, Joe Gordon (joe.gord...@gmail.com
 )
   wrote:
   
   
   
   On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg
   morgan.fainb...@gmail.com wrote:
   
   I think the simple answer is yes. We (keystone) should emit
   notifications. And yes other projects should listen.
   
   The only thing really in discussion should be:
   
   1: soft delete or hard delete? Does the service mark it as orphaned,
 or
   just delete (leave this to nova, cinder, etc to discuss)
   
   2: how to cleanup when an event is missed (e.g rabbit bus goes out to
   lunch).
   
   
   
   
   
   
   I disagree slightly, I don't think projects should directly listen to
   the Keystone notifications I would rather have the API be something
   from a keystone owned library, say keystonemiddleware. So something
 like
   this:
   
   
   from keystonemiddleware import janitor
   
   
   keystone_janitor = janitor.Janitor()
   keystone_janitor.register_callback(nova.tenant_cleanup)
   
   
   keystone_janitor.spawn_greenthread()
   
   
   That way each project doesn't have to include a lot of boilerplate
   code, and keystone can easily modify/improve/upgrade the notification
   mechanism.
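
For comparison, a rough sketch of what consuming keystone's notifications
directly with oslo.messaging could look like (the 'identity.project.deleted'
event type and the payload layout are assumptions to verify, and the cleanup
callback is just a placeholder):

    from oslo_config import cfg
    import oslo_messaging

    def tenant_cleanup(project_id):
        # placeholder for a project-specific janitor, e.g. nova's cleanup
        pass

    class ProjectCleanupEndpoint(object):
        """Invoke a cleanup callback when keystone reports a deleted project."""

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'identity.project.deleted':
                # keystone is assumed to put the project id in 'resource_info'
                tenant_cleanup(payload.get('resource_info'))

    transport = oslo_messaging.get_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [ProjectCleanupEndpoint()])
    listener.start()
    listener.wait()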
   
   
 
 
  I assume janitor functions can be used for
 
  - enable/disable project
  - enable/disable user
 
   
   
   
   
   
   
   
   
   
   Sure. I’d place this into an implementation detail of where that
   actually lives. I’d be fine with that being a part of Keystone
   Middleware Package (probably something separate from auth_token).
   
   
   —Morgan
   
  
   I think my only concern is what should other projects do and how much
 do we
   want to allow operators to configure this? I can imagine it being
 preferable to
   have safe (without losing much data) policies for this as a default
 and to allow
   operators to configure more destructive policies as part of deploying
 certain
   services.
  
 
   Depending on the cloud, an operator could want different semantics for
  the impact of deleting a project: a full delete, a 'shelve' style cleanup, or maybe just disable.
 
  
   
   
   
   
   
   --Morgan
   
   Sent via mobile
   
On Feb 2, 2015, at 10:16, Matthew Treinish mtrein...@kortar.org
 wrote:
   
On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann wrote:
This came up in the operators mailing list back in June [1] but
   given the  subject probably didn't get much attention.
   
Basically there is a really old bug [2] from Grizzly that is
 still a
   problem  and affects multiple projects.  A tenant can be deleted in
   Keystone even  though other resources in other projects are under
   that project, and those  resources aren't cleaned up.
   
I agree this probably can be a major pain point for users. We've
 had
   to work around it  in tempest by creating things 

Re: [openstack-dev] [stable][requirements] External dependency caps introduced in 499db6b

2015-02-25 Thread Robert Collins
I'll follow-up on the spec, but one thing Donald has been pointing out
for a while is that we don't use requirements.txt the way that pip
anticipates: the expected use is that a specific install (e.g. the
gate) will have a very specific list of requirements, caps etc, but
that the install_requires will be as minimal as possible to ensure the
project builds and self-tests ok.

I see the issues here as being related to that.

-Rob

On 26 February 2015 at 10:04, Joe Gordon joe.gord...@gmail.com wrote:


 On Tue, Feb 24, 2015 at 7:00 AM, Doug Hellmann d...@doughellmann.com
 wrote:



 On Mon, Feb 23, 2015, at 06:31 PM, Joe Gordon wrote:
  On Mon, Feb 23, 2015 at 11:04 AM, Doug Hellmann d...@doughellmann.com
  wrote:
 
  
  
   On Mon, Feb 23, 2015, at 12:26 PM, Joe Gordon wrote:
On Mon, Feb 23, 2015 at 8:49 AM, Ihar Hrachyshka
ihrac...@redhat.com
wrote:
   
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 02/20/2015 07:16 PM, Joshua Harlow wrote:
  Sean Dague wrote:
  On 02/20/2015 12:26 AM, Adam Gandelman wrote:
  Its more than just the naming.  In the original proposal,
  requirements.txt is the compiled list of all pinned deps
  (direct and transitive), while
   requirements.in reflects what people
  will actually use.  Whatever is in requirements.txt affects
  the
  egg's requires.txt. Instead, we can keep requirements.txt
  unchanged and have it still be the canonical list of
  dependencies, while
  reqiurements.out/requirements.gate/requirements.whatever is an
  upstream utility we produce and use to keep things sane on our
  slaves.
 
  Maybe all we need is:
 
  * update the existing post-merge job on the requirements repo
  to produce a requirements.txt (as it does now) as well the
  compiled version.
 
  * modify devstack in some way with a toggle to have it process
  dependencies from the compiled version when necessary
 
  I'm not sure how the second bit jives with the existing
  devstack installation code, specifically with the libraries
  from git-or-master but we can probably add something to warm
  the system with dependencies from the compiled version prior
  to
   calling pip/setup.py/etc
 
  It sounds like you are suggesting we take the tool we use to
  ensure that all of OpenStack is installable together in a
  unified
   way, and change its installation so that it doesn't do that
  any
  more.
 
  Which I'm fine with.
 
  But if we are doing that we should just whole hog give up on
  the
  idea that OpenStack can be run all together in a single
  environment, and just double down on the devstack venv work
  instead.
 
  It'd be interesting to see what a distribution (canonical,
  redhat...) would think about this movement. I know yahoo! has
  been
   looking into it for similar reasons (but we are more flexible than
   I think a packager such as canonical/redhat/debian/... would/could
   be). With a move to venv's that seems like it would just offload
  the work to find the set of dependencies that work together (in
  a
  single-install) to packagers instead.
 
  Is that ok/desired at this point?
 

 Honestly, I failed to track all the different proposals. Just
 saying
 from packager perspective: we absolutely rely on requirements.txt
 not
 being a list of hardcoded values from pip freeze, but present us a
 reasonable freedom in choosing versions we want to run in packaged
 products.


in short the current proposal for stable branches is:
   
keep requirements.txt as is, except maybe put some upper bounds on
the
requirements.
   
Add requirements.gate to specify the *exact* versions we are gating
against
(this would be a full list including all transitive dependencies).
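
To make the split concrete, it might look something like this (package names,
pins and versions are invented purely for illustration):

    # requirements.txt on a stable branch: ranges, possibly with upper bounds
    pbr>=0.6,!=0.7,<1.0
    oslo.config>=1.6.0,<1.7.0
    six>=1.9.0

    # requirements.gate: the exact versions the gate installs, including
    # transitive dependencies
    pbr==0.10.8
    oslo.config==1.6.1
    six==1.9.0
    stevedore==1.2.0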
  
   The gate syncs requirements into projects before installing them.
   Would
   we change the sync script for the gate to work from the
   requirements.gate file, or keep it pulling from requirements.txt?
  
 
  We would only add requirements.gate for stable branches (because we
  don't
  want to cap/pin  things on master). So I think the answer is sync script
  should work for both.  I am not sure on the exact mechanics of how this
  would work. Whoever ends up driving this bit of work (I think Adam G),
  will
  have to sort out the details.

 OK. I think it's probably worth a spec, then, so we can think it through
 before starting work. Maybe in the cross-project specs repo, to avoid
 having to create one just for requirements? Or we could modify the
 README or something, but the specs repo seems more visible.


 Start of the cross project spec https://review.openstack.org/159249



 Doug

 
 
   Doug
  
   
   
 That's why I asked before 

Re: [openstack-dev] [Openstack-operators] Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2015-02-25 Thread Zane Bitter

On 25/02/15 15:37, Joe Gordon wrote:



On Sat, Feb 21, 2015 at 5:03 AM, Tim Bell tim.b...@cern.ch wrote:


A few inline comments and a general point

How do we handle scenarios like volumes when we have a per-component
janitor rather than a single co-ordinator ?

To be clean,

1. nova should shutdown the instance
2. nova should then ask the volume to be detached
3. cinder could then perform the 'project deletion' action as
configured by the operator (such as shelve or backup)
4. nova could then perform the 'project deletion' action as
configured by the operator (such as VM delete or shelve)

If we have both cinder and nova responding to a single message,
cinder would do 3 immediately while nova is still doing the shutdown,
which is likely to lead to a volume which could not be shelved cleanly.

The problem I see with messages is that co-ordination of the actions
may require ordering between the components.  The disable/enable
cases would show this in a worse scenario.


You raise two good points.

* How to clean something up may be different for different clouds
* Some cleanup operations have to happen in a specific order

Not sure what the best way to address those two points is.  Perhaps the
best way forward is an openstack-specs spec to hash out these details.


For completeness, if nothing else, it should be noted that another 
option is for Keystone to refuse to delete the project until all 
resources within it have been removed by a user.


It's hard to know at this point which would be more painful. Both sound 
horrific in their own way :D


cheers,
Zane.



Tim

  -Original Message-
  From: Ian Cordasco [mailto:ian.corda...@rackspace.com]
  Sent: 19 February 2015 17:49
  To: OpenStack Development Mailing List (not for usage questions); Joe Gordon
  Cc: openstack-operat...@lists.openstack.org
  Subject: Re: [Openstack-operators] [openstack-dev] Resources
owned by a
  project/tenant are not cleaned up after that project is deleted
from keystone
 
 
 
  On 2/2/15, 15:41, Morgan Fainberg morgan.fainb...@gmail.com wrote:
 
  
  On February 2, 2015 at 1:31:14 PM, Joe Gordon
(joe.gord...@gmail.com)
  wrote:
  
  
  
  On Mon, Feb 2, 2015 at 10:28 AM, Morgan Fainberg
  morgan.fainb...@gmail.com wrote:
  
  I think the simple answer is yes. We (keystone) should emit
  notifications. And yes other projects should listen.
  
  The only thing really in discussion should be:
  
  1: soft delete or hard delete? Does the service mark it as
orphaned, or
  just delete (leave this to nova, cinder, etc to discuss)
  
  2: how to cleanup when an event is missed (e.g rabbit bus goes
out to
  lunch).
  
  
  
  
  
  
  I disagree slightly, I don't think projects should directly
listen to
  the Keystone notifications I would rather have the API be something
  from a keystone owned library, say keystonemiddleware. So
something like
  this:
  
  
  from keystonemiddleware import janitor
  
  
  keystone_janitor = janitor.Janitor()
  keystone_janitor.register_callback(nova.tenant_cleanup)
  
  
  keystone_janitor.spawn_greenthread()
  
  
  That way each project doesn't have to include a lot of boilerplate
  code, and keystone can easily modify/improve/upgrade the
notification
  mechanism.
  
  


I assume janitor functions can be used for

- enable/disable project
- enable/disable user

 
 
 
 
 
 
 
 
 
 Sure. I’d place this into an implementation detail of where that
 actually lives. I’d be fine with that being a part of Keystone
 Middleware Package (probably something separate from auth_token).
 
 
 —Morgan
 

 I think my only concern is what should other projects do and how much do 
we
 want to allow operators to configure this? I can imagine it being 
preferable to
 have safe (without losing much data) policies for this as a default and 
to allow
 operators to configure more destructive policies as part of deploying 
certain
 services.


Depending on the cloud, an operator could want different semantics
for the impact of deleting a project: a full delete, a 'shelve' style
cleanup, or maybe just disable.

 
  
  
  
  
  
  --Morgan
  
  Sent via mobile
  
   On Feb 2, 2015, at 10:16, Matthew Treinish
 mtrein...@kortar.org wrote:
  
   On Mon, Feb 02, 2015 at 11:46:53AM -0600, Matt Riedemann 

Re: [openstack-dev] [all] creating a unified developer reference manual

2015-02-25 Thread Miguel Ángel Ajo
Sounds like a very good idea. Cross project development shared knowledge.

Miguel Ángel Ajo


On Wednesday, 25 de February de 2015 at 22:32, michael mccune wrote:

 On 02/25/2015 02:54 PM, Doug Hellmann wrote:
  During yesterday’s cross-project meeting [1], we discussed the Eventlet 
  Best Practices” spec [2] started by bnemec. The discussion of that spec 
  revolved around the question of whether our cross-project specs repository 
  is the right place for this type of document that isn’t a “plan” for a 
  change, and is more a reference guide. Eventually we came around to the 
  idea of creating a cross-project developer guide to hold these sorts of 
  materials.
   
  That leads to two questions, then:
   
  1. Should we have a unified developer guide for the project?
  
 +1, this sounds like a fantastic idea.
  
  2. Where should it live and how should we manage it?
  
 i like the idea of creating a new repository, akin to the other  
 OpenStack manuals. i think it would be great if there was an easy way  
 for the individual projects to add their specific recommendations as well.
  
 the main downside i see to creating a new repo/manual/infra is the  
 project overhead. hopefully there will be enough interest that this  
 won't be an issue though.
  
 mike
  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
On Thursday, 26 de February de 2015 at 7:48, Miguel Ángel Ajo wrote:
 Inline comments follow after this, but I wanted to respond to Brian question
 which has been cut out:
  
 We’re talking here of doing a preliminary analysis of the networking 
 performance,
 before writing any real code at neutron level.
  
 If that looks right, then we should go into a preliminary (and orthogonal to 
 iptables/LB)
 implementation. At that moment we will be able to examine the scalability of 
 the solution
 in regards of switching openflow rules, which is going to be severely affected
 by the way we use to handle OF rules in the bridge:
  
   * via OpenFlow, making the agent a “real” OF controller, with the current
     effort to use the ryu framework plugin to do that.
* via cmdline (would be alleviated with the current rootwrap work, but the 
 former one
  would be preferred).
  
 Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben 
 Pfaff for the
 explanation, if you’re reading this ;-))
  
 Best,
 Miguel Ángel
  
  
  
 On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren wrote:
  Hi,
   
  The RFC2544 with near zero packet loss is a pretty standard performance 
  benchmark. It is also used in the OPNFV project 
  (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).
   
  Does this mean that OpenStack will have stateful firewalls (or security 
  groups)? Any other ideas planned, like ebtables type filtering?
   
 What I am proposing is to maintain the statefulness we have now as regards
 security groups (RELATED/ESTABLISHED connections are allowed back on open
 ports) while adding a new firewall driver working only with OVS+OF (no
 iptables or linux bridge).
  
 That will be possible (without auto-populating OF rules in opposite
 directions) due to the new connection tracker functionality to be eventually
 merged into ovs.
   
  -Tapio
   
   
  On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
  (mailto:rick.jon...@hp.com) wrote:
   On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
I’m writing a plan/script to benchmark OVS+OF(CT) vs
OVS+LB+iptables+ipsets,
so we can make sure there’s a real difference before jumping into any
OpenFlow security group filters when we have connection tracking in OVS.
 
The plan is to keep all of it in a single multicore host, and make
all the measures within it, to make sure we just measure the
difference due to the software layers.
 
Suggestions or ideas on what to measure are welcome, there’s an initial
draft here:
 
https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct

   Conditions to be benchmarked

   Initial connection establishment time
   Max throughput on the same CPU

   Large MTUs and stateless offloads can mask a multitude of path-length 
   sins.  And there is a great deal more to performance than Mbit/s. While 
   some of that may be covered by the first item via the likes of say 
   netperf TCP_CRR or TCP_CC testing, I would suggest that in addition to a 
   focus on Mbit/s (which I assume is the focus of the second item) there is 
   something for packet per second performance.  Something like netperf 
   TCP_RR and perhaps aggregate TCP_RR or UDP_RR testing.

   Doesn't have to be netperf, that is simply the hammer I wield :)

   What follows may be a bit of perfect being the enemy of the good, or 
   mission creep...

   On the same CPU would certainly simplify things, but it will almost 
   certainly exhibit different processor data cache behaviour than actually 
   going through a physical network with a multi-core system.  Physical NICs 
   will possibly (probably?) have RSS going, which may cause cache lines to 
   be pulled around.  The way packets will be buffered will differ as well.  
   Etc etc.  How well the different solutions scale with cores is definitely 
    a difference of interest between the two software layers.



Hi Rick, thanks for your feedback here, I’ll take it into consideration,
especially about the small packet pps measurements, and really using
physical hosts.

Although I may start with an AIO setup for simplicity, we should
get more conclusive results from at least two hosts and decent NICs.

I will put all this together in the document, and loop you in for review.  
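
A minimal sketch of how those netperf runs could be scripted into that
benchmark plan (the target address is a placeholder, and netserver is assumed
to be running on it):

    import subprocess

    # TCP_RR ~ request/response rate, TCP_CRR adds connection setup/teardown,
    # TCP_STREAM ~ bulk throughput
    TESTS = ('TCP_RR', 'TCP_CRR', 'TCP_STREAM')

    def run_netperf(host, test, seconds=30):
        """Run one netperf test (-H target, -t test name, -l duration in s)."""
        cmd = ['netperf', '-H', host, '-t', test, '-l', str(seconds)]
        return subprocess.check_output(cmd).decode('utf-8')

    if __name__ == '__main__':
        for test in TESTS:
            print(run_netperf('192.0.2.10', test))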
   rick


   __
   OpenStack Development Mailing List (not for usage questions)
   Unsubscribe: 
   openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
   (http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
   
   
   
  --  
  -Tapio  
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: 

Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Miguel Ángel Ajo
Inline comments follow after this, but I wanted to respond to Brian question
which has been cut out:

We’re talking here of doing a preliminary analysis of the networking 
performance,
before writing any real code at neutron level.

If that looks right, then we should go into a preliminary (and orthogonal to 
iptables/LB)
implementation. At that moment we will be able to examine the scalability of 
the solution
in regards of switching openflow rules, which is going to be severely affected
by the way we use to handle OF rules in the bridge:

   * via OpenFlow, making the agent a “real” OF controller, with the current
     effort to use the ryu framework plugin to do that.
   * via cmdline (would be alleviated with the current rootwrap work, but the 
former one
 would be preferred).

Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben Pfaff 
for the
explanation, if you’re reading this ;-))

Best,
Miguel Ángel



On Wednesday, 25 de February de 2015 at 20:34, Tapio Tallgren wrote:  
 Hi,
  
 The RFC2544 with near zero packet loss is a pretty standard performance 
 benchmark. It is also used in the OPNFV project 
 (https://wiki.opnfv.org/characterize_vswitch_performance_for_telco_nfv_use_cases).
  
 Does this mean that OpenStack will have stateful firewalls (or security 
 groups)? Any other ideas planned, like ebtables type filtering?
  
What I am proposing is to maintain the statefulness we have now as regards
security groups (RELATED/ESTABLISHED connections are allowed back on open
ports) while adding a new firewall driver working only with OVS+OF (no
iptables or linux bridge).

That will be possible (without auto-populating OF rules in opposite
directions) due to the new connection tracker functionality to be eventually
merged into ovs.
  
  
 -Tapio
  
  
 On Wed, Feb 25, 2015 at 5:07 PM, Rick Jones rick.jon...@hp.com 
 (mailto:rick.jon...@hp.com) wrote:
  On 02/25/2015 05:52 AM, Miguel Ángel Ajo wrote:
   I’m writing a plan/script to benchmark OVS+OF(CT) vs
   OVS+LB+iptables+ipsets,
   so we can make sure there’s a real difference before jumping into any
   OpenFlow security group filters when we have connection tracking in OVS.

   The plan is to keep all of it in a single multicore host, and make
   all the measures within it, to make sure we just measure the
   difference due to the software layers.

   Suggestions or ideas on what to measure are welcome, there’s an initial
   draft here:

   https://github.com/mangelajo/ovs-experiments/tree/master/ovs-ct
   
  Conditions to be benchmarked
   
  Initial connection establishment time
  Max throughput on the same CPU
   
  Large MTUs and stateless offloads can mask a multitude of path-length sins. 
   And there is a great deal more to performance than Mbit/s. While some of 
  that may be covered by the first item via the likes of say netperf TCP_CRR 
  or TCP_CC testing, I would suggest that in addition to a focus on Mbit/s 
  (which I assume is the focus of the second item) there is something for 
  packet per second performance.  Something like netperf TCP_RR and perhaps 
  aggregate TCP_RR or UDP_RR testing.
   
  Doesn't have to be netperf, that is simply the hammer I wield :)
   
  What follows may be a bit of perfect being the enemy of the good, or 
  mission creep...
   
  On the same CPU would certainly simplify things, but it will almost 
  certainly exhibit different processor data cache behaviour than actually 
  going through a physical network with a multi-core system.  Physical NICs 
  will possibly (probably?) have RSS going, which may cause cache lines to be 
  pulled around.  The way packets will be buffered will differ as well.  Etc 
  etc.  How well the different solutions scale with cores is definitely a 
   difference of interest between the two software layers.
   
  rick
   
   
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
  (http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  
 --  
 -Tapio  
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OpenFlow security groups (pre-benchmarking plan)

2015-02-25 Thread Ben Pfaff
On Thu, Feb 26, 2015 at 07:48:51AM +0100, Miguel Ángel Ajo wrote:
 Also, ipset groups can be moved into conjunctive groups in OF (thanks Ben 
 Pfaff for the
 explanation, if you’re reading this ;-))

You're welcome.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
On Thu, Feb 26, 2015 at 10:50 AM, Kevin Benton blak...@gmail.com wrote:

 You can horizontally split as well (if I understand what axis definitions
 you are using). The Big Switch driver for example will bind ports that
 belong to hypervisors running IVS while leaving the OVS driver to bind
 ports attached to hypervisors running OVS.


That's just what I mean about horizontal, which is limited for some
features. For example, ports belonging to the BSN driver and the OVS driver
can't communicate with each other in the same tunnel network, nor do
security groups work across both sides.


  I don't fully understand your comments about  the architecture of
 neutron. Most work is delegated to either agents or a backend server.
 Basically every ML2 driver pushes the work via an agent notification or
 an HTTP call of some sort


Here is the key difference: a thin MD such as ovs or bridge never pushes any
work to the agent itself; it only handles port binding, acting as a scheduler
that selects the backend vif type. The agent notifications are handled by
other common code in ML2, so thin MDs can seamlessly be integrated with each
other horizontally for all features, like tunnel l2pop. On the other hand, a
fat MD pushes all its work to the backend through HTTP calls, which partly
blocks horizontal inter-operation with other backends.

Then I'm thinking about this pattern: ML2 w/ thin MD - agent - HTTP call to
backend, which should be much easier for horizontal inter-operation.
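
As a rough illustration of where the work ends up in each case (this is not
the real ML2 driver interface, just a sketch with made-up method names):

    class ThinAgentMechanismDriver(object):
        """'Thin' MD: only decides how a port gets bound; the L2 agent and the
        common ML2 code (tunnel management, l2pop) do the dataplane work, so
        it inter-operates with other agent-based MDs."""

        def bind_port(self, port, segment):
            return {'vif_type': 'ovs', 'segment_id': segment['id']}

    class FatBackendMechanismDriver(object):
        """'Fat' MD: pushes every operation to an external controller over
        HTTP, so its dataplane state is invisible to the other MDs and agents
        (no shared l2pop, no common vxlan broadcast handling)."""

        def __init__(self, session, base_url):
            self.session = session      # e.g. a requests.Session()
            self.base_url = base_url    # controller endpoint (placeholder)

        def create_port_postcommit(self, port):
            self.session.post(self.base_url + '/ports', json=port)

        def bind_port(self, port, segment):
            return {'vif_type': 'vhostuser', 'segment_id': segment['id']}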


On Feb 25, 2015 6:15 PM, loy wolfe loywo...@gmail.com wrote:

 Oh, what you mean is vertical splitting, while I'm talking about
 horizontal splitting.

  I'm a little confused about why Neutron is designed so differently from
  Nova and Cinder. In fact an MD could be very simple, delegating nearly all
  work out to an agent. Remember the Cinder volume manager? The real storage
  backend could also be deployed outside the server farm as dedicated
  hardware, not necessarily a local host-based resource. The agent could act
  as the proxy to an outside module, instead of putting a heavy burden on
  central plugin servers, and also, all backends could inter-operate and
  co-exist seamlessly (like a single vxlan across ovs and tor in a hybrid
  deployment)


 On Thu, Feb 26, 2015 at 9:39 AM, Kevin Benton blak...@gmail.com wrote:

 In the cases I'm referring to, OVS handles the security groups and
 vswitch.  The other drivers handle fabric configuration for VLAN tagging to
 the host and whatever other plumbing they want to do.
 On Feb 25, 2015 5:30 PM, loy wolfe loywo...@gmail.com wrote:



 On Thu, Feb 26, 2015 at 3:51 AM, Kevin Benton blak...@gmail.com
 wrote:

 The fact that a system doesn't use a neutron agent is not a good
 justification for monolithic vs driver. The VLAN drivers co-exist with OVS
 just fine when using VLAN encapsulation even though some are agent-less.

  so how about security groups, and all other things which need
  coordination between vswitches?


  There is a missing way to coordinate connectivity with tunnel
 networks across drivers, but that doesn't mean you can't run multiple
 drivers to handle different types or just to provide additional features
 (auditing,  more access control, etc).
 On Feb 25, 2015 2:04 AM, loy wolfe loywo...@gmail.com wrote:

 +1 to separate monolithic OVN plugin

 The ML2 has been designed for co-existing of multiple heterogeneous
 backends, it works well for all agent solutions: OVS, Linux Bridge, and
 even ofagent.

  However, when things come to all kinds of agentless solutions,
  especially all kinds of SDN controllers (except for the Ryu-Lib style),
  the Mechanism Driver became the new monolithic place despite the benefits
  of code reduction: MDs can't inter-operate, either with each other or with
  the ovs/bridge agent L2pop; each MD has its own exclusive vxlan
  mapping/broadcasting solution.

  So my suggestion is to keep the thin MDs (with agent) in the ML2
  framework (also inter-operating with the native Neutron L3/service
  plugins), while all the other fat MDs (agentless) go with the old style of
  monolithic plugin, with all L2-L7 features tightly integrated.

 On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
 amis...@cisco.com wrote:

  Hi,



  I am new to OpenStack (and am particularly interested in networking).
  I am getting a bit confused by this discussion. Aren’t there already a
  few monolithic plugins (that is what I could understand from reading the
  Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
  Available networking plug-ins)? So how do we have interoperability
  between those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for
  “monolithic” :-)



 Regards,

 Amit Saha

 Cisco, Bangalore







 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 

Re: [openstack-dev] [nova] Shared storage support

2015-02-25 Thread Alex Xu
Actually I have a similar idea, and plan to work on it in L via a nova-spec (is
it worth a spec?).

But this idea doesn't come from this bug; it comes from other cases:

1. Currently the user needs to specify 'on_shared_storage' and 'block_migration'
for evacuate and live_migration. Once we track the shared storage, the user
needn't specify those parameters. Also the scheduler can give priority to
hosts that share storage with the previous host.

2. Currently nova compute won't release resources for a stopped instance, and
won't reschedule when starting the stopped instance. To implement this, it
needs to check whether the instance is on shared storage or not, which makes
the code very complex. Once the scheduler tracks the shared storage, we can
implement this more smartly. There could be an option specifying whether to
reschedule a stopped instance when it isn't on shared storage, because block
migration is wasteful.

3. Other intelligent scheduling.


The basic idea is to add a new column in the compute_node table storing an ID
that identifies a storage backend. If two compute nodes have the same storage
id, that means the two nodes are on the same shared storage. There will be
different ways to generate the ID for different storage types, like NFS,
ceph, lvm.
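
A rough sketch of how such an ID could be generated (illustrative only, not
existing Nova code; the backend identities shown are assumptions):

    import hashlib

    def shared_storage_id(backend, identity):
        """Return a stable ID for the storage a compute node uses.

        Two compute nodes reporting the same ID are assumed to share storage.
        `identity` is whatever uniquely names the backend, e.g.
        'filer01:/export/nova' for NFS, the cluster fsid for ceph, or the
        local hostname for a non-shared lvm/file backend.
        """
        raw = ('%s:%s' % (backend, identity)).encode('utf-8')
        return hashlib.sha1(raw).hexdigest()

    # hypothetical examples:
    # shared_storage_id('nfs', 'filer01:/export/nova')  -> same on every node
    #                                                       mounting that export
    # shared_storage_id('ceph', cluster_fsid)           -> same per cluster
    # shared_storage_id('lvm', 'compute-07')            -> unique per host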

Thanks
Alex
2015-02-25 22:08 GMT+08:00 Gary Kotton gkot...@vmware.com:

  Hi,
 There is an issue with the statistics reported when a nova compute driver
 has shared storage attached. That is, there may be more than one compute
 node reporting on the shared storage. A patch has been posted -
  https://review.openstack.org/#/c/155184. The direction here was to add an
  extra parameter to the dictionary that the driver returns for the resource
 utilization. The DB statistics calculation would take this into account and
 then do calculations accordingly.
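 For illustration only (the key name here is hypothetical; the real parameter
 is whatever the review above defines), the idea is roughly:

    resources = {
        'vcpus': 32,
        'memory_mb': 131072,
        'local_gb': 2048,
        # hypothetical extra key telling the DB layer this storage is shared
        # with other compute nodes, so it should not be double-counted
        'shared_storage': True,
    }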
 I am not really in favor of the approach for a number of reasons:

1. Over the last few cycles we have been making a move to trying to
better define data structures and models that we use. More specifically we
have been moving to object support
2. A change in the DB layer may break this support.
3. We are trying to have versioning of various blobs of data that are
passed around

 My thinking is that the resource tracker should be aware that the compute
  node has shared storage, and that the changes should be done there. I do not think that the
 compute node should rely on the changes being done in the DB layer – that
 may be on a different host and even run a different version.

  I understand that this is a high or critical bug but I think that we
 need to discuss more on it and try have a more robust model.
 Thanks
 Gary

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo] Dealing with database connection sharing issues

2015-02-25 Thread Duncan Thomas
The only thing I'd argue with here is the log level, Robert. Logstash on
the gate doesn't index trace/debug, so info or above would be far more
helpful, so that we can have a logstash query for the issue

On 25 February 2015 at 01:20, Robert Collins robe...@robertcollins.net
wrote:

 On 23 February 2015 at 13:54, Michael Bayer mba...@redhat.com wrote:

  Correct me if I'm wrong but the register_after_fork seems to apply only
 to
  the higher level Process abstraction.   If someone calls os.fork(), as is
  the case now, there's no hook to use.
 
  Hence the solution I have in place right now, which is that Oslo.db *can*
  detect a fork and adapt at the most basic level by checking for
 os.getpid()
  and recreating the connection, no need for anyone to call
 engine.dispose()
  anywhere. But that approach has been rejected.  Because the caller of the
  library should be aware they're doing this.
 
  If we can all read the whole thread here each time and be on the same
 page
  about what is acceptable and what's not, that would help.

 I've read the whole thread :).

 I don't agree with the rejection you received :(.

 Here are my principles in the design:
  - oslo.db is meant to be a good [but opinionated] general purpose db
 library: it is by and for OpenStack, but it can only assume as givens
 those things which are guaranteed the same for all OpenStack projects,
 and which we can guarantee we don't want to change in future.
 Everything else it needs to do the usual thing of offering interfaces
 and extension points where its behaviour can be modified.
  - failing closed is usually much much better than failing open. Other
 libraries and app code may do things oslo.db doesn't expect, and
 oslo.db failing in a hard to debug fashion is a huge timewaste for
 everyone involved.
  - faults should be trapped as close to the moment that it happened as
 possible. That is, at the first sign.
  - correctness is more important than aesthetics : ugly but doing the
 right thing is better than looking nice but breaking.
  - where we want to improve things in a program in a way thats
 incompatible, we should consider a deprecation period.


 Concretely, I think we should do the following:
  - in olso.db today, detect the fork and reopen the connection (so the
 users code works); and log a DEBUG/TRACE level message that this is a
 deprecated pattern and will be removed.
  - follow that up with patches to all the projects to prevent this
 happening at all
  - wait until we're no longer doing security fixes to any branch with
 the pre-fixed code
  - at the next major release of oslo.db, change it from deprecated to
 hard failure

 That gives a graceful migration path and ensures safety.
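
A minimal sketch of what that pid check amounts to (assuming SQLAlchemy;
where exactly it would live inside oslo.db is left open):

    import logging
    import os

    from sqlalchemy import create_engine

    LOG = logging.getLogger(__name__)

    class ForkAwareEngine(object):
        """Hand out a SQLAlchemy engine, recreating it if a fork is detected."""

        def __init__(self, url):
            self._url = url
            self._pid = os.getpid()
            self._engine = create_engine(url)

        @property
        def engine(self):
            if os.getpid() != self._pid:
                # We are in a forked child: the pooled connections are shared
                # with the parent and must not be reused. Drop the pool,
                # rebuild, and log that this pattern is deprecated rather than
                # silently papering over it (INFO so logstash indexes it).
                LOG.info("Fork detected; recreating database engine. "
                         "Reconnecting explicitly after fork is preferred.")
                self._engine.dispose()
                self._engine = create_engine(self._url)
                self._pid = os.getpid()
            return self._engine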

 As to the potential for someone to deliberately:
  - open an oslo.db connection
  - fork
  - expect it to work

 I say phoooey. Pre-forking patterns don't need this (it won't use the
 connect before work is handed off to the child). Privilege dropping
 patterns could potentially use this, but they are rare enough that
 they can explicitly close the connection and make a new one after the
 fork. In general anything related to fork is going to break and one
 should re-establish things after forking. The exceptions are
 sufficiently rare that I think we can defer adding apis to support
 them (e.g. a way to say 'ok, refresh your cache of the pid now') until
 someone actually wants that.

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread Fawad Khaliq
On Wed, Feb 25, 2015 at 5:34 AM, Sukhdev Kapur sukhdevka...@gmail.com
wrote:

 Folks,

 A great discussion. I am not expert at OVN, hence, want to ask a question.
 The answer may make a  case that it should probably be a ML2 driver as
 oppose to monolithic plugin.

  Say a customer wants to deploy an OVN based solution and use HW devices
 from one vendor for L2 and L3 (e.g. Arista or Cisco), and want to use
 another vendor for services (e.g. F5 or A10) - how can that be supported?

 If OVN goes in as ML2 driver, I can then run ML2 and Service plugin to
 achieve above solution. For a monolithic plugin, don't I have an issue?

On the specifics of service plugins: service plugins and standalone plugins
can co-exist to provide a solution with advanced services from different
vendors. Some existing monolithic plugins (e.g. PLUMgrid) have blueprints
deployed using this approach.


 regards..
 -Sukhdev


 On Tue, Feb 24, 2015 at 8:58 AM, Salvatore Orlando sorla...@nicira.com
 wrote:

 I think we're speculating a lot about what would be best for OVN whereas
  we should probably just expose the pros and cons of ML2 drivers vs standalone
 plugin (as I said earlier on indeed it does not necessarily imply
 monolithic *)

 I reckon the job of the Neutron community is to provide a full picture to
 OVN developers - so that they could make a call on the integration strategy
 that best suits them.
 On the other hand, if we're planning to commit to a model where ML2 is
 not anymore a plugin but the interface with the API layer, then any choice
 which is not a ML2 driver does not make any sense. Personally I'm not sure
 we ever want to do that, at least not in the near/medium term, but I'm one
 and hardly representative of the developer/operator communities.

 Salvatore


 * In particular with the advanced service split out the term monolithic
 simply does not mean anything anymore.

 On 24 February 2015 at 17:48, Robert Kukura kuk...@noironetworks.com
 wrote:

  Kyle, What happened to the long-term potential goal of ML2 driver APIs
 becoming neutron's core APIs? Do we really want to encourage new monolithic
 plugins?

 ML2 is not a control plane - its really just an integration point for
 control planes. Although co-existence of multiple mechanism drivers is
 possible, and sometimes very useful, the single-driver case is fully
 supported. Even with hierarchical bindings, its not really ML2 that
 controls what happens - its the drivers within the framework. I don't think
 ML2 really limits what drivers can do, as long as a virtual network can be
 described as a set of static and possibly dynamic network segments. ML2 is
 intended to impose as few constraints on drivers as possible.

 My recommendation would be to implement an ML2 mechanism driver for OVN,
 along with any needed new type drivers or extension drivers. I believe this
 will result in a lot less new code to write and maintain.

 Also, keep in mind that even if multiple driver co-existence doesn't
 sound immediately useful, there are several potential use cases to
 consider. One is that it allows new technology to be introduced into an
 existing cloud alongside what previously existed. Migration from one ML2
 driver to another may be a lot simpler (and/or flexible) than migration
 from one plugin to another. Another is that additional drivers can support
 special cases, such as bare metal, appliances, etc..

 -Bob


 On 2/24/15 11:11 AM, Kyle Mestery wrote:

  On Tue, Feb 24, 2015 at 3:19 AM, Salvatore Orlando sorla...@nicira.com
  wrote:

  On 24 February 2015 at 01:34, Kyle Mestery mest...@mestery.com
 wrote:

  Russel and I have already merged the initial ML2 skeleton driver [1].

   The thinking is that we can always revert to a non-ML2 driver if
 needed.


  If nothing else an authoritative decision on a design direction saves
 us the hassle of going through iterations and discussions.
 The integration through ML2 is definitely viable. My opinion however is
 that since OVN implements a full control plane, the control plane bits
 provided by ML2 are not necessary, and a plugin which provides only
 management layer capabilities might be the best solution. Note: this does
 not mean it has to be monolithic. We can still do L3 with a service plugin.
  However, since the same kind of approach has been adopted for ODL I
 guess this provides some sort of validation.


 To be honest, after thinking about this last night, I'm now leaning
 towards doing this as a full plugin. I don't really envision OVN running
 with other plugins, as OVN is implementing it's own control plane, as you
 say. So the value of using ML2 is quesitonable.


    I'm not sure how useful using OVN with other drivers will
 be, and that was my initial concern with doing ML2 vs. full plugin. With
 the HW VTEP support in OVN+OVS, you can tie in physical devices this way.
 Anyways, this is where we're at for now. Comments welcome, of course.


  That was also kind of my point regarding the control 

[openstack-dev] Module_six_moves_urllib_parse error

2015-02-25 Thread Manickam, Kanagaraj
Hi,

I see the below error in my devstack and is raised from the package 'six'

AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 
'SplitResult'

Currently my devstack setup has six version 1.9.0. Could anyone help here
to fix the issue? Thanks.
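
One quick check that may help narrow it down (assuming, as a guess, that a
stale copy of six could be shadowing the 1.9.0 install):

    import six
    from six.moves.urllib import parse

    print(six.__version__, six.__file__)
    # SplitResult should be re-exported by a recent six; if this prints False,
    # the interpreter is picking up an older copy somewhere on sys.path.
    print(hasattr(parse, 'SplitResult'))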

Regards
Kanagaraj M
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-25 Thread Kashyap Chamarthy
On Tue, Feb 24, 2015 at 10:02:36PM -0800, Mark Atwood wrote:
 On Tue, Feb 24, 2015, at 04:28, Kashyap Chamarthy wrote:
  
  Along with the below, if push comes to shove, OpenStack Foundation could
  probably try a milder variant (obviously, not all activities can be
  categorized as 'critical path') of Linux Foundation's Critical
  Infrastructure Protection Initiative[1] to fund certain project
  activities in need.
 
 Speaking as a person who sits on the LF CII board meetings,
 and helps turn the crank on that particular sausage mill,
 no, we really don't want to go down that path at this point in
 time.

I didn't imply to do an _exact_ version of LF's CII. Given so much of
interest in OpenStack, vendors/interested parties must (yes) maintain a
balance between giving vs taking from the community. And, they should be
provided ample information on _where_ they can target people/engineers
(Dan Berrange noted this too in one of his responses in this thread).

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Re-evaluating the suitability of the 6 month release cycle

2015-02-25 Thread Daniel P. Berrange
On Wed, Feb 25, 2015 at 10:31:58AM +0100, Thierry Carrez wrote:
 Robert Collins wrote:
  It's also worth noting that we were on a 3-month cycle at the start of
  OpenStack. That was dropped after a cataclysmic release that managed the
  feat of (a) not having anything significant done, and (b) have out of
  date documentation and translations.
  
  Oh! 
  https://wiki.openstack.org/w/index.php?title=Obsolete:3MonthReleaseCycledirection=prevoldid=14016
  appears to be the page about that, but my google-fu isn't turning up a
  post-mortem of the C release which prompted the change. Perhaps some
  old-hand like you can fill us in on the details :).
 
 After the Austin release (a code drop, really), we were on a three-month
 time-based cycle with no intermediary milestone. The first cycle (Bexar)
 worked relatively well, the second one (Cactus) not so well. There were
 multiple reasons for that:
 
 - Software was growing up in Bexar and we introduced more testing, but
 we were unable to detect and fix all the release blockers in time for
 the release date, resulting in a broken Bexar release. To address that
 in Cactus, we increased the duration of the frozen period, but then that
 didn't leave enough time for development, hence nothing getting done.

I wasn't involved in OpenStack back then, but is it not true that
OpenStack has changed almost beyond recognition since those early
days. The number of projects has certainly increased massively, as
has the number of contributors, both individual  corporate. There
is a hell of alot of testing now and people doing continuous deployment
of trunk, so I find it hard to believe that trunk is so unstable that
we can't do a release at any time we choose, nor that we have to have
such long freezes that we don't do get time for dev.

 - Not starting a release cycle with a global F2F gathering (no
 intermediary Design Summit) actually proved quite disastrous: no reset
 of community, no alignment on goal, no discussion on what to work on.
 That resulted in a limbo period after the Bexar release. That is why I'm
 so insistent on aligning our development cycles with our Design Summits.
 I've seen where we go when we don't.

I don't know about other projects so much, but I don't really see the
design summit as a positive thing wrt planning the next release. For
a start the design summit is about 5 weeks after the trunk opens for
development, so if people wait for the summit to do planning they have
thrown away half of the first milestones' development time. It is also
not inclusive as a decision making forum because we cannot assume every
one is able to make it to the summit in person, and even if they are
present, people often can't get involved in all sessions they wish to
due to conflicting schedules. If release planning were done via email
primarily, with IRC for cases where realtime planning is needed, it
would be more effective IMHO.  IOW i think the summit would be better
off if were explicitly not associated with releases, but rather be a
general forum for collaboration were we talk through ideas or do code
sprints, and more.

  [...]
  I may appear defensive on my answers, but it's not my goal to defend the
  current system: it's just that most of those proposals generally ignore
  the diversity of the needs of the teams that make OpenStack possible, to
  focus on a particular set of contributors' woes. I'm trying to bring
  that wider perspective in -- the current system is a trade-off and the
  result of years of evolution, not an arbitrary historic choice that we
  can just change at will.
  
  I agree that its not arbitrary and that changing it requires some
  appropriate wide-spread consultation; OTOH the benefits of rolling and
  higher frequency releases are really substantial: but we have to
  backup the change with the appropriate engineering (such as reducing
  our frictions that cause teams practicing CD to be weeks behind tip)
  to make it feasible. My greatest concern about the proposals happening
  now is that we may bite off more than we can chew. OTGH the reality is
  that all the negative things multiply out, so we probably need to just
  start somewhere and /do/ to give us the space to fix other things.
 
 As I said, I'd very much like us to release a lot more often. I just
 don't see how we can do it without:
 
 - abandoning the idea of translating the software
 - admit that docs will always lag code
 - dropping stable branch maintenance (and only fix vulnerabilities in
 master)

I don't really buy that. If we have 6 month cycles with 1 month freeze
for docs  i18n, vs 2 months cycles with 2 weeks freeze for docs  i18n
the latter is a win in terms of dev time vs stablization time, giving
docs  i18n 50% more time in aggregate.

 That would all make my life (and the life of most feature-focused
 developers) a *lot* easier. I'm not convinced that's the best solution
 for our downstream users, though. I think they like docs,
 translations(?), stable 

Re: [openstack-dev] [neutron] ML2 versus core plugin for OVN

2015-02-25 Thread loy wolfe
+1 to separate monolithic OVN plugin

The ML2 has been designed for co-existing of multiple heterogeneous
backends, it works well for all agent solutions: OVS, Linux Bridge, and
even ofagent.

However, when things come to all kinds of agentless solutions, especially
all kinds of SDN controllers (except for the Ryu-Lib style), the Mechanism
Driver became the new monolithic place despite the benefits of code
reduction: MDs can't inter-operate, either with each other or with the
ovs/bridge agent L2pop; each MD has its own exclusive vxlan
mapping/broadcasting solution.

So my suggestion is to keep the thin MDs (with agent) in the ML2 framework
(also inter-operating with the native Neutron L3/service plugins), while all
the other fat MDs (agentless) go with the old style of monolithic plugin,
with all L2-L7 features tightly integrated.

On Wed, Feb 25, 2015 at 9:25 AM, Amit Kumar Saha (amisaha) 
amis...@cisco.com wrote:

  Hi,



 I am new to OpenStack (and am particularly interested in networking). I am
 getting a bit confused by this discussion. Aren’t there already a few
 monolithic plugins (that is what I could understand from reading the
 Networking chapter of the OpenStack Cloud Administrator Guide, Table 7.3,
 Available networking plug-ins)? So how do we have interoperability between
 those (or do we not intend to)?



 BTW, it is funny that the acronym ML can also be used for “monolithic” :-)



 Regards,

 Amit Saha

 Cisco, Bangalore





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >