[openstack-dev] Problems with new Neutron service plugin

2014-07-29 Thread Maciej Nabożny

Hello!
This is my first mail on this mailing list, so - hello everybody :)

I'm trying to write an extension and a service plugin for Neutron that adds 
support for something like a floating port. This should be a DNAT/SNAT 
service for virtual machines. I was following this tutorial:


http://control-that-vm.blogspot.in/2014/05/writing-api-extensions-in-neutron.html?view=flipcard

I have created an extension with some resources (RESOURCE_ATTRIBUTE_MAP) 
and a service plugin for it. In the logs, Neutron says that the extension is 
working and that it has a backend (the service plugin). It is also listed in 
the extension list through the Neutron API. The only problem I have is how 
to access my functions. As far as I understand, it should work in this 
way:

- GET /v2.0/floatingports/ calls the function get_floatingports(...)
- GET /v2.0/floatingports/<id>/ calls the function get_floatingport(...)
and so on for all CRUD methods. Could you tell me how I should register 
these functions to make them available in Neutron's API? Each time I call 
GET /v2.0/floatingports/ I get a 404. The same happens through the 
neutronclient Python module with:

c.do_request('GET', '/floatingports')

I have also tried pasting all my methods into the ML2 plugin, but they 
are not accessible through the API either.
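
(For reference, the Neutron v2 convention being described maps the collection
URL to the plural plugin method and the member URL to the singular one; a
minimal sketch of the expected method signatures, with all names assumed for
illustration:)

class FloatingPortPluginBase(object):
    # GET /v2.0/floatingports  ->  list
    def get_floatingports(self, context, filters=None, fields=None):
        return []

    # GET /v2.0/floatingports/<id>  ->  show
    def get_floatingport(self, context, id, fields=None):
        return {}

    # POST /v2.0/floatingports  ->  create
    def create_floatingport(self, context, floatingport):
        return floatingport

    # DELETE /v2.0/floatingports/<id>  ->  delete
    def delete_floatingport(self, context, id):
        pass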


I will be grateful for your help
Regards!
Maciek



Re: [openstack-dev] [Fuel] Parallel deployment of secondary controllers

2014-07-29 Thread Sergii Golovatiuk
Hi,

That's awesome! Thanks everyone who made this happen! This is a huge
improvement!

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser


On Tue, Jul 29, 2014 at 2:04 AM, Vladimir Kuklin vkuk...@mirantis.com
wrote:

 Fuelers

 I am glad to announce that we have finally merged all the code that is
 needed for parallel deployment of secondary controllers. The final piece
 was merged today:
 https://review.openstack.org/#/c/104267/
 This introduces a significant performance boost for us, as controller
 deployment time is now O(1) instead of O(n). If you experience any issues with
 HA deployment with the code that contains this fix, please file a bug with
 critical priority and target it to the 5.1 milestone.


 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com





Re: [openstack-dev] [Fuel] Parallel deployment of secondary controllers

2014-07-29 Thread Mike Scherbakov
+1, this is great.

We do have a few issues with HA, though, which block us from further testing.
There is no evidence that this final patch enabling the feature caused
them, but let's focus on the investigation. Please, no code merges unless
they are related to the basic BuildVerificationTests (BVT) we have.

I'm aware of at least one issue which breaks CentOS HA:
https://bugs.launchpad.net/fuel/+bug/1348649.
The following issue, related to network verification, makes the build fail:
https://bugs.launchpad.net/fuel/+bug/1306705; it is under investigation by
dshulyak.

Ubuntu was also broken due to https://bugs.launchpad.net/fuel/+bug/1349684.
However, this issue seems to happen very rarely and is related either to high
load on the server or to time desync. It is under investigation by warpc.

The issues above, and any further BVT test failures found, are the highest
priority. Please help with investigation, resolution, code review, and landing
fixes into master. Remember, this blocks us from further testing all the
features in HA mode.

Thanks,


On Tue, Jul 29, 2014 at 10:48 AM, Sergii Golovatiuk 
sgolovat...@mirantis.com wrote:

 Hi,

 That's awesome! Thanks everyone who made this happen! This is a huge
 improvement!

 --
 Best regards,
 Sergii Golovatiuk,
 Skype #golserge
 IRC #holser


 On Tue, Jul 29, 2014 at 2:04 AM, Vladimir Kuklin vkuk...@mirantis.com
 wrote:

 Fuelers

  I am glad to announce that we have finally merged all the code that is
  needed for parallel deployment of secondary controllers. The final piece
  was merged today:
  https://review.openstack.org/#/c/104267/
  This introduces a significant performance boost for us, as controller
  deployment time is now O(1) instead of O(n). If you experience any issues
  with HA deployment with the code that contains this fix, please file a bug
  with critical priority and target it to the 5.1 milestone.


 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
  www.mirantis.com
 www.mirantis.ru
 vkuk...@mirantis.com








-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [Fuel] Authentication is turned on - Fuel API and UI

2014-07-29 Thread Lukasz Oles
In the latest version of python-keystoneclient, using admin_token in the
auth_token middleware was deprecated. So in the future we need to create a
configuration similar to OpenStack's, with a nailgun_service user. With that
configuration there should be no problem with upgrades.
We can do it after 5.1.
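
(As a rough illustration of the difference in the auth_token middleware
configuration; the nailgun_service user name is an assumption taken from the
proposal above, and the exact option names should be checked against the
python-keystoneclient documentation of the release in use:)

[keystone_authtoken]
# deprecated flow: a shared admin token
# admin_token = LONG_RANDOM_TOKEN
# preferred flow: a dedicated service user
auth_uri = http://127.0.0.1:5000/v2.0
identity_uri = http://127.0.0.1:35357
admin_user = nailgun_service
admin_password = SERVICE_PASSWORD
admin_tenant_name = services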


On Mon, Jul 28, 2014 at 5:28 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 1. yes, we can do it, if it's possible to create a new user with the
 admin_token. But it will complicate the upgrade process and will take some
 time to design/implement and test, because I see several new cases; for
 example, we need to create the new user in the previous version of the
 container (we use the nailgun api before upgrade too), then in the new
 container, and in case of rollback delete it from the previous container.

 2. afaik, this config is not in the container, it's on the host system,
 and it will be replaced by puppet on the host system


 On Mon, Jul 28, 2014 at 6:37 PM, Lukasz Oles lo...@mirantis.com wrote:

 As I said in another topic, storing the user password in plain text is not
 an option.

 Ad. 1.
 We can create a special upgrade_user with the same rights as the admin user
 and use it to authenticate in Nailgun. It can be done after the 5.1 release.

 Ad. 2.
 In a perfect world, /etc/fuel/client/config.yaml would be copied to the new
 container during an upgrade. If that's not possible, a warning in the
 documentation should be OK.

 Regards


 On Mon, Jul 28, 2014 at 3:59 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Lukasz,
 what do you think about this? Is someone addressing the issues mentioned by
 Evgeny?

 Thanks,


 On Fri, Jul 25, 2014 at 3:31 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 I have several concerns about password changing.

  The default password can be changed via the UI or via fuel-cli. When the
  password is changed via the UI or fuel-cli, it is not stored in any file,
  only in Keystone.

 It's important to change the password in /etc/fuel/astute.yaml as well;
 otherwise it will be impossible for the user to run an upgrade:

 1. the upgrade system uses credentials from /etc/fuel/astute.yaml
 to authenticate in nailgun
 2. the upgrade system runs puppet to upgrade dockerctl/fuelclient
 on the host system; puppet uses credentials from /etc/fuel/astute.yaml
 to update the config /etc/fuel/client/config.yaml [1], so even if the
 user changed the password in the fuelclient config, it will be
 overwritten after an upgrade

 If we don't want to change the credentials in /etc/fuel/astute.yaml,
 let's at least add a warning to the documentation.

 [1]
 https://github.com/stackforge/fuel-library/blob/705dc089037757ed8c5a25c4cf78df71f9bd33b0/deployment/puppet/nailgun/examples/host-only.pp#L51-L55



 On Thu, Jul 24, 2014 at 6:17 PM, Lukasz Oles lo...@mirantis.com
 wrote:

 Hi all,

 one more thing: you do not need to install Keystone in your
 development environment; by default it runs there in fake mode. Keystone
 mode is enabled only on the ISO. If you want to test it locally, you have to
 install Keystone and configure Nailgun as Kamil explained.

 Regards,


 On Thu, Jul 24, 2014 at 3:57 PM, Mike Scherbakov 
 mscherba...@mirantis.com wrote:

 Kamil,
 thank you for the detailed information.

 Meg, do we have anything documented about authx yet? I think Kamil's
 email can be used as a source to prepare user and operation guides for 
 Fuel
 5.1.

 Thanks,


 On Thu, Jul 24, 2014 at 5:45 PM, Kamil Sambor ksam...@mirantis.com
 wrote:

 Hi folks,

 All parts of the code related to stages I and II from the blueprint
 http://docs-draft.openstack.org/29/96429/11/gate/gate-fuel-specs-docs/2807f30/doc/build/html/specs/5.1/access-control-master-node.html
 are merged. As a result, Fuel (API and UI) now has authentication via
 Keystone, and it is required by default. Keystone is installed in a new
 container during master node installation. The password can be configured
 via fuelmenu during installation (the default user:password is admin:admin).
 The password is saved in astute.yaml, and the admin_token is stored there
 as well.
 Almost all endpoints in Fuel are protected and require an authentication
 token. We made an exception for a few endpoints, which are defined in
 nailgun/middleware/keystone.py in public_url.
 The default password can be changed via the UI or via fuel-cli. When the
 password is changed via the UI or fuel-cli, it is not stored in any file,
 only in Keystone. So if you forget the password, you can change it using the
 keystone client from the master node and the admin_token from astute.yaml,
 using the command:

 keystone --os-endpoint=http://10.20.0.2:35357/v2.0 --os-token=<admin_token> password-update
 The Fuel client now uses for authentication the user and password
 stored in /etc/fuel/client/config.yaml. The password in this file is not
 changed when it is changed via fuel-cli or the UI; the user must change this
 password manually. If the user doesn't want to use the config file, they can
 provide the user and password to 

Re: [openstack-dev] Problems with new Neutron service plugin

2014-07-29 Thread Jaume Devesa
Hello Maciej,

can I see your code somewhere? I have written an extension recently and I
might be able to help you. The blog post is quite similar to what I've done,
so you should be close to getting it to work.

Regards,
jaume



On 29 July 2014 08:28, Maciej Nabożny m...@mnabozny.pl wrote:

 Hello!
 This is my first mail on this mailing list, so - hello everybody :)

 I'm trying to write an extension and a service plugin for Neutron that adds
 support for something like a floating port. This should be a DNAT/SNAT
 service for virtual machines. I was following this tutorial:

 http://control-that-vm.blogspot.in/2014/05/writing-api-extensions-in-neutron.html?view=flipcard

 I have created an extension with some resources (RESOURCE_ATTRIBUTE_MAP)
 and a service plugin for it. In the logs, Neutron says that the extension is
 working and that it has a backend (the service plugin). It is also listed in
 the extension list through the Neutron API. The only problem I have is how
 to access my functions. As far as I understand, it should work this way:
 - GET /v2.0/floatingports/ calls the function get_floatingports(...)
 - GET /v2.0/floatingports/<id>/ calls the function get_floatingport(...)
 and so on for all CRUD methods. Could you tell me how I should register
 these functions to make them available in Neutron's API? Each time I call
 GET /v2.0/floatingports/ I get a 404. The same happens through the
 neutronclient Python module with:
 c.do_request('GET', '/floatingports')
 I have also tried pasting all my methods into the ML2 plugin, but they are
 not accessible through the API either.

 I will be grateful for your help
 Regards!
 Maciek





-- 
Jaume Devesa
Software Engineer at Midokura


[openstack-dev] [neutron] spec template to use

2014-07-29 Thread Andreas Scheuring
Hi all,
I found two blueprint templates for Neutron:

The .rst file on github
http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/template.rst

and the one on the openstack wiki page
https://wiki.openstack.org/wiki/Neutron/BlueprintTemplate

Are both templates still valid, or is the .rst one the right one to go with?
And if so, does it make sense to remove the content from the wiki page and
only place a link to the .rst file on GitHub there?

Thanks,

Andreas




Re: [openstack-dev] [PKG-Openstack-devel] Bug#755315: [Trove] Should we stop using wsgi-intercept, now that it imports from mechanize? this is really bad!

2014-07-29 Thread Thomas Goirand
On 07/28/2014 04:04 AM, Chris Dent wrote:
 On Mon, 28 Jul 2014, Thomas Goirand wrote:
 
 That's exactly the version I've been looking at. The thing is,
 when I run the unit tests with that version, it just bombs on me because
 mechanize isn't there.
 
 How would you feel about it being optionally available, with the tests
 for mechanize only running if someone has already preinstalled
 mechanize? That is, the tests would skip if importing mechanize raises an
 ImportError?
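
(That skip-on-ImportError pattern might look like the following sketch,
using only the standard library; the class and test names are made up:)

import unittest

try:
    import mechanize  # the import itself is the capability check
    HAS_MECHANIZE = True
except ImportError:
    HAS_MECHANIZE = False

@unittest.skipUnless(HAS_MECHANIZE, "mechanize is not installed")
class TestMechanizeIntercept(unittest.TestCase):

    def test_intercept(self):
        # the mechanize-specific wsgi-intercept tests would run here
        self.assertTrue(HAS_MECHANIZE)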
 
 While I'm not in love with mechanize, if it is a tool that _some_
 people use, then I don't want wsgi-intercept to not be useful to them.
 
 Please let me know if you can release a new version of wsgi-intercept
 cleaned of any trace of mechanize, or if you think this can't be done.
 
 Let me know if the above idea can't work. Depending on your answer
 I'll either release a version as described, or go ahead and flush it.
 If you get back to me by tomorrow morning (UTC) I can probably get the new
 version out tomorrow too.

Hi,

Sorry, I couldn't reply earlier.

Well, if mechanize really becomes optional, meaning no issue when running
the unit tests without it and no issue when using the module without it,
then it may be OK from my point of view (e.g., I wouldn't complain that
much about it).

However, from *your* perspective, I wouldn't advise that you keep using
such a dangerous, badly maintained Python module. Saying that it's
optional may make it look like you think mechanize is OK and you are
vouching for it, when that really shouldn't be the case. Having clean,
well-maintained dependencies is IMO very important for a given Python
module. It shows that you care that no bad module gets in.

Let me know whenever you have a new release without mechanize as a
dependency, or with it being optional.

Cheers,

Thomas Goirand (zigo)




[openstack-dev] [Neutron] [IPv6] Hide ipv6 subnet API attributes

2014-07-29 Thread Nir Yechiel
Now with the Juno efforts to provide IPv6 support and some features (provider 
networks SLAAC, RADVD) already merged, is there any plan/patch to revert this 
Icehouse change [1] and make the 'ra_mode' and 'ipv6_address_mode' consumable?

Thanks,
Nir

[1] https://review.openstack.org/#/c/85869/



[openstack-dev] [NFV][CI] The plan to bring up Snabb NFV CI for Juno-3

2014-07-29 Thread Luke Gorrie
Greetings fellow NFV'stas!

I would like to explain and solicit feedback on our plan to support a new
open source NFV system in Juno. This work is approved as
low-priority/best-effort for Juno-3. (Yes, we do understand that we are
fighting the odds in terms of the Juno schedule.)

We are developing a practical open source NFV implementation for OpenStack.
This is for people who want to run tens of millions of packets per second
through Virtio-net on each compute node. The work involves contributing
code upstream to a dependent chain of projects:

snabbswitch -> QEMU -> Libvirt -> Nova -> Neutron

Recently we had a breakthrough: QEMU upstream merged the vhost-user feature
that we developed and this convinced the kind maintainers of Libvirt, Nova,
and Neutron to let us target code to them in parallel. Now Libvirt has
accepted our code upstream too and the last pieces are Nova and Neutron.
(Then we can start work on Version 2.)

Previously our upstreaming effort has been obstructed: people
understandably wanted to see our QEMU code accepted before they would take
us seriously. So it is an exciting time for us and our upstreaming work.

Just now we have ramped up our OpenStack development effort in response to
getting approved for Juno-3. Michele Paolino has joined in: he is
experienced with Libvirt and is the one who upstreamed our code there.
Nikolay Nikolaev is joining in too: he did the bulk of the development on
vhost-user and the upstreaming of it into QEMU.

Here is what the three of us are working on for Juno-3:

* VIF_VHOSTUSER support in Nova.
https://blueprints.launchpad.net/nova/+spec/vif-vhostuser

* Snabb NFV mech driver for Neutron.
https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver

* NFV CI: OpenStack 3rd party CI that covers our entire software ecosystem
(snabbswitch + QEMU + Libvirt + Nova + Neutron).

We are already getting great support from the community. Thank you
everybody for that, and a meta-thankyou to the people who set up the NFV
subgroup, which has been a fantastic enabler. For the code changes, the ball
is in our court now to get them into shape in time. For the CI, I think
it's worth having a discussion to make sure we are on the same page and
have the same expectations.

Here is how I visualize our ideal NFV CI for Juno:

* Run Tempest tests for Nova and Neutron.
* Test with the relevant versions of Libvirt, QEMU, and snabbswitch.
* Test with NFV-oriented features that are upstream in OpenStack.
* Test with NFV-oriented changes that are not yet upstream e.g. Neutron QoS
API.
* Operate reliably with a strong track record.
* Be easy for other people to replicate if they want to run their own NFV
CI.

This CI should then provide assurance for us that our whole ecosystem is
running compatibly, for OpenStack that the code going upstream is
continuously tested, and for end users that the software they plan to
deploy works (either based on our tests, if they are deploying the same
software that we use, or based on their own tests if they want to operate a
customised CI).

How does this CI idea sound to the community and to others who are
interested in related NFV-oriented features?

That was quite a brain-dump... we have been working on this for quite some
time but mostly on the parts outside of the OpenStack tree until now.

For more information about our open source NFV project you can read the
humble home page: http://snabb.co/nfv.html

and if you want to talk nuts and bolts you can find us on Github:
https://github.com/SnabbCo/snabbswitch

and Google Groups:
https://groups.google.com/forum/#!forum/snabb-devel

We are independent open source developers and we are working to support
Deutsche Telekom's TeraStream NFV project.

Cheers!
-Luke


Re: [openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-29 Thread Alessandro Pilotti
Glad to see that the blueprint is hypervisor-independent.

We'll provide the Hyper-V implementation, based on this TODO comment:

https://review.openstack.org/#/c/99916/3/nova/virt/hyperv/driver.py

Thanks,

Alessandro


 On 25.07.2014, at 03:08, Michael Still mi...@stillhq.com wrote:
 
 Yep, I think this one has well and truly crossed the line. Exception granted.
 
 Michael
 
 On Fri, Jul 25, 2014 at 2:30 AM, Andrew Laski
 andrew.la...@rackspace.com wrote:
 
 
 From: Day, Phil [philip@hp.com]
 Sent: Thursday, July 24, 2014 9:20 AM
 To: OpenStack Development Mailing List (not for usage questions); Daniel P. 
 Berrange
 Subject: Re: [openstack-dev] [Nova][Spec freeze exception] Controlled 
 shutdown of GuestOS
 
 According to https://etherpad.openstack.org/p/nova-juno-spec-priorities,
 alaski has also signed up for this if I drop the point of contention, which
 I've done.
 
 
 Yes, I will sponsor this one as well.  This is more a bug fix than a feature 
 IMO and would be really nice to get into Juno.
 
 
 
 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 24 July 2014 00:50
 To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Nova][Spec freeze exception] Controlled
 shutdown of GuestOS
 
 Another core sponsor would be nice on this one. Any takers?
 
 Michael
 
 On Thu, Jul 24, 2014 at 4:14 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
 On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
 Hi Folks,
 
 I'd like to propose the following as an exception to the spec freeze, on the
 basis that it addresses a potential data corruption issue in the guest.
 
 https://review.openstack.org/#/c/89650
 
 We were pretty close to getting acceptance on this before, apart from a
 debate over whether one additional config value could be allowed to be set
 via image metadata - so I've given in for now on wanting that feature from a
 deployer perspective, and said that we'll hard code it as requested.
 
 Initial parts of the implementation are here:
 https://review.openstack.org/#/c/68942/
 https://review.openstack.org/#/c/99916/
 
 Per my comments already, I think this is important for Juno and will
 sponsor it.
 
 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 
 
 
 --
 Rackspace Australia
 
 
 
 
 
 
 -- 
 Rackspace Australia
 



Re: [openstack-dev] [Nova] Boot from ISO feature status

2014-07-29 Thread Daniel P. Berrange
On Mon, Jul 28, 2014 at 09:30:24AM -0700, Vishvananda Ishaya wrote:
 I think we should discuss adding/changing this functionality. I have had
 many new users assume that booting from an iso image would give them a
 root drive which they could snapshot. I was hoping that the new block
 device mapping code would allow something like this, but unfortunately
 there isn’t a way to do it there either. You can boot a flavor with an
 ephemeral drive, but there is no command to snapshot secondary drives.

The new block device mapping code is intended to ultimately allow any
disk configuration you can imagine, so if a desirable setup with CDROM
vs disks does not work, do file a bug about this because we should
definitely address it. 
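
(For readers wanting the snapshot-able root drive Vish mentions: a rough
sketch of requesting it through the v2 block device mapping API with
python-novaclient; the credentials and IDs are placeholders, and the accepted
fields depend on the Nova version in use:)

from novaclient.v1_1 import client

IMAGE_ID = 'REPLACE-WITH-IMAGE-UUID'   # placeholder
FLAVOR_ID = '1'                        # placeholder

nova = client.Client('admin', 'password', 'admin',
                     'http://127.0.0.1:5000/v2.0')

# Build the root disk from an image onto a new Cinder volume, which
# (unlike an ephemeral disk) supports snapshots.
bdm = [{'source_type': 'image',
        'destination_type': 'volume',
        'uuid': IMAGE_ID,
        'volume_size': 10,
        'boot_index': 0,
        'delete_on_termination': False}]

server = nova.servers.create('bfv-test', image=None, flavor=FLAVOR_ID,
                             block_device_mapping_v2=bdm)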

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [Ceilometer] Generate Event or Notification in Ceilometer

2014-07-29 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Hi Folks,

Are there any guides or examples showing how to produce a new event or 
notification and add a handler for this event in Ceilometer?

I have been asked to implement OpenStack service monitoring which will send an 
event and trigger the handler shortly after a service, say nova-compute, 
crashes. :(
The link (http://docs.openstack.org/developer/ceilometer/events.html) does a 
good job of explaining the concept, so I know that I need to emit a 
notification to the message queue and that ceilometer-collector will process 
it and generate events, but that is far from a real implementation.
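
(As an illustration of the emitting side only: a minimal sketch using
oslo.messaging; the publisher_id, event_type and payload are made-up values,
and the collector side still needs an event definition for this event_type:)

from oslo.config import cfg
from oslo import messaging

# Reuse the transport (e.g. rabbit) settings from the loaded configuration.
transport = messaging.get_transport(cfg.CONF)
notifier = messaging.Notifier(transport,
                              publisher_id='service-monitor.node-1',
                              driver='messaging',
                              topic='notifications')

# ceilometer-collector consumes the 'notifications' topic and can turn
# this into an event once the event_type is known to it.
notifier.info({}, 'service.crashed',
              {'service': 'nova-compute', 'host': 'node-1'})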

Regards,
Gary


Re: [openstack-dev] Problems with new Neutron service plugin

2014-07-29 Thread Maciej Nabożny

Yes, here is the extension code:
http://pastebin.com/btYQjwnr
the service plugin code:
http://pastebin.com/ikKf80Fr

and the script which makes requests to Neutron:
from neutronclient.neutron import client

c = client.Client('2.0',
  tenant_name='admin',
  username='admin',
  password='59d29892353346f0',
  auth_token='dc22bf0fc886484d813b9d168b99dae2',
  endpoint_url='http://127.0.0.1:9696/',
  auth_url='http://127.0.0.1:5000/v2.0')
c.do_request('GET', '/floatingports')

I was trying URLs like /floatingport, /floatingports, Floatingports, 
etc., but none of them works; likewise after pasting the plugin functions 
into the ML2 plugin.


Regards,
Maciej


On 29.07.2014 at 10:14, Jaume Devesa wrote:

Hello Maciej,

can I see your code somewhere? I have written an extension recently and
I might be able to help you. The blog post is quite similar to what I've
done, so you should be close to getting it to work.

Regards,
jaume



On 29 July 2014 08:28, Maciej Nabożny m...@mnabozny.pl wrote:

Hello!
This is my first mail on this mailing list, so - hello everybody :)

I'm trying to write an extension and a service plugin for Neutron that
adds support for something like a floating port. This should be a
DNAT/SNAT service for virtual machines. I was following this tutorial:


http://control-that-vm.blogspot.in/2014/05/writing-api-extensions-in-neutron.html?view=flipcard

I have created an extension with some resources
(RESOURCE_ATTRIBUTE_MAP) and a service plugin for it. In the logs,
Neutron says that the extension is working and that it has a backend
(the service plugin). It is also listed in the extension list through
the Neutron API. The only problem I have is how to access my
functions. As far as I understand, it should work this way:
- GET /v2.0/floatingports/ calls the function get_floatingports(...)
- GET /v2.0/floatingports/<id>/ calls the function get_floatingport(...)
and so on for all CRUD methods. Could you tell me how I should
register these functions to make them available in Neutron's API?
Each time I call GET /v2.0/floatingports/ I get a 404. The same
happens through the neutronclient Python module with:
 c.do_request('GET', '/floatingports')
I have also tried pasting all my methods into the ML2 plugin, but
they are not accessible through the API either.

I will be grateful for your help
Regards!
Maciek





--
Jaume Devesa
Software Engineer at Midokura







[openstack-dev] [nova] Questions on instance_system_metadata rows not being deleted

2014-07-29 Thread Chen CH Ji

Hi
   I am working on this bug [1], and [2] was submitted to try to
fix it; unfortunately, both approaches I could imagine failed to pass the
Jenkins tests.
   Does anyone have any idea how to handle this problem, or has any
mail thread reached a conclusion about it? Thanks for the support ~

[1] https://bugs.launchpad.net/nova/+bug/1226049
[2] https://review.openstack.org/#/c/109201/

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC


Re: [openstack-dev] Problems with new Neutron service plugin

2014-07-29 Thread Kashyap Chamarthy
On Tue, Jul 29, 2014 at 11:15:45AM +0200, Maciej Nabożny wrote:

[Just a generic comment, not related to the extension code in question.]

 Yes, here is extension code:
 http://pastebin.com/btYQjwnr
 service plugin code:
 http://pastebin.com/ikKf80Fr

Pastebins expire, it's useful to provide URLs that are accessible for
much longer times, so that anyone who refers to this email months later
will still be able to access the code in question.

-- 
/kashyap



Re: [openstack-dev] [Neutron][policy] Group-based Policy code sprint

2014-07-29 Thread Sumit Naiksatam
The code sprint was pretty productive and attended by current and new members:
https://wiki.openstack.org/wiki/Neutron/GroupPolicy/JunoCodeSprint

We were able to fix some of our DB migration issues, and also make
significant progress with the API intercept discussion.

The following is a status report:

1. Code for GBP API with reference implementation
Majority of the code has been in review for a while now, and we are
iterating with the reviewers. The code has been split up into multiple
patches and the series starts here:
https://review.openstack.org/#/c/95900

More information on the patch series can be found here:
https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy/Patches

A team of several people has worked, or is working, on this.

2. Client/CLI for GBP
The Client/CLI for the first series (EP, EPG, L2P, L3P) is available.
The CLI catering to the resources in the other two series will be
posted shortly.

Subra Ongole is working on this.

3. Tempest tests for GBP API
The API tests for the first series are posted. Those for the other two
series are in the works along with the scenario tests.

Miguel and Sumit are working on this.

4. Horizon
This also seems to be under control. Ronak and Abishek are working on this.

5. Vendor Driver implementation
Six different drivers are in the works. We spent a good part of the
code sprint discussing the vendor drivers. It seems that these
drivers will be able to leverage the Implicit and (parts of) the
Resource Mapping drivers. They will also use existing libraries from
their monolithic plugins or ML2 drivers for connectivity to their
backends. This will reduce the code that needs to be written for these
drivers.

6. API Intercept
This allows for better integration with Nova. Kevin Benton is working on this.

7. Docs for GBP API
Sumit will start soon.

8. Devstack
Most likely, no changes will be required here.

If you would like to participate in this work stream in the future, please
join the weekly IRC meetings:
https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy

and check out the team wiki:
https://wiki.openstack.org/wiki/Neutron/GroupPolicy

Thanks,
~Sumit (on behalf of GBP team).

On Tue, Jul 15, 2014 at 12:33 PM, Sumit Naiksatam
sumitnaiksa...@gmail.com wrote:
 Hi All,

 The Group Policy team is planning to meet on July 24th to focus on
 making progress with the pending items for Juno, and also to
 facilitate the vendor drivers. The specific agenda will be posted on
 the Group Policy wiki:
 https://wiki.openstack.org/wiki/Neutron/GroupPolicy

 Prasad Vellanki from One Convergence has graciously offered to host
 this for those planning to attend in person in the bay area:
 Address:
 2290 N First Street
 Suite # 304
 San Jose, CA 95131

 Time: 9.30 AM

 For those not being able to attend in person, we will post remote
 attendance details on the above Group Policy wiki.

 Thanks for your participation.

 ~Sumit.



Re: [openstack-dev] Problems with new Neutron service plugin

2014-07-29 Thread Jaume Devesa
Maciej: have you loaded the service plugin in the neutron.conf?

service_plugins =
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.floatingports.FloatingPort

Neutron needs to know which plugins to load at startup time to expose the
extensions...

Your RESOURCE_ATTRIBUTE_MAP key is called 'redirections'; it should be called
'floatingports'. And when you call resource_helper.build_resource_info you
are using constants.FIREWALL; it should be 'floatingport' (the plugin type or
the plugin name you defined in 'FloatingPortPluginBase', not sure which one;
in your case it does not matter because it is the same value).

You also have to add the value 'floatingport' to the
neutron.plugins.common.constants.COMMON_PREFIXES dictionary.

The definition of the method get_floatingports should use the 'filters'
parameter instead of the 'id' one:

def get_floatingports(self, context, filters=None, fields=None)

And the rest is up to you! Hope this helps.
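
(Pulling those points together, the wiring might look roughly like the sketch
below; 'FLOATINGPORT' and the attribute details are assumptions based on this
thread:)

from neutron.api.v2 import resource_helper
from neutron.plugins.common import constants

# Added service type (value assumed); an empty prefix keeps the
# resources directly under /v2.0/.
FLOATINGPORT = 'FLOATINGPORT'
constants.COMMON_PREFIXES[FLOATINGPORT] = ''

# The map key must be the collection name, 'floatingports'.
RESOURCE_ATTRIBUTE_MAP = {
    'floatingports': {
        'id': {'allow_post': False, 'allow_put': False, 'is_visible': True},
        'tenant_id': {'allow_post': True, 'allow_put': False,
                      'is_visible': True},
    },
}

plural_mappings = resource_helper.build_plural_mappings(
    {}, RESOURCE_ATTRIBUTE_MAP)
resources = resource_helper.build_resource_info(
    plural_mappings, RESOURCE_ATTRIBUTE_MAP, FLOATINGPORT)

# In the service plugin, returning an empty list is enough to verify
# the wiring end to end:
def get_floatingports(self, context, filters=None, fields=None):
    return []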


On 29 July 2014 11:51, Kashyap Chamarthy kcham...@redhat.com wrote:

 On Tue, Jul 29, 2014 at 11:15:45AM +0200, Maciej Nabożny wrote:

 [Just a generic comment, not related to the extension code in question.]

  Yes, here is extension code:
  http://pastebin.com/btYQjwnr
  service plugin code:
  http://pastebin.com/ikKf80Fr

 Pastebins expire, it's useful to provide URLs that are accessible for
 much longer times, so that anyone who refers to this email months later
 will still be able to access the code in question.

 --
 /kashyap





-- 
Jaume Devesa
Software Engineer at Midokura


Re: [openstack-dev] [PKG-Openstack-devel] Bug#755315: [Trove] Should we stop using wsgi-intercept, now that it imports from mechanize? this is really bad!

2014-07-29 Thread Chris Dent

On Tue, 29 Jul 2014, Thomas Goirand wrote:


Sorry, I couldn't reply earlier.


No problem.


However, from *your* perspective, I wouldn't advise that you keep using
such a dangerous, badly maintained Python module. Saying that it's
optional may make it look like you think mechanize is OK and you are
vouching for it, when that really shouldn't be the case. Having clean,
well-maintained dependencies is IMO very important for a given Python
module. It shows that you care that no bad module gets in.


I've pointed a couple of the other wsgi-intercept contributors to this
thread to get their opinions on which way is the best way forward;
I'd prefer not to make the decision solo.


Let me know whenever you have a new release without mechanize as a
dependency, or with it being optional.


It will be soon (a day or so).

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-29 Thread Daniel P. Berrange
Looking at the current review backlog, I think we have to
seriously question whether our stable branch review process in
Nova is working at an acceptable level.

On Havana

  - 43 patches pending
  - 19 patches with a single +2
  - 1 patch with a -1
  - 0 patches with a -2
  - Stalest waiting 111 days since most recent patch upload
  - Oldest waiting 250 days since first patch upload
  - 26 patches waiting more than 1 month since most recent upload
  - 40 patches waiting more than 1 month since first upload

On Icehouse:

  - 45 patches pending
  - 17 patches with a single +2
  - 4 patches with a -1
  - 1 patch with a -2
  - Stalest waiting 84 days since most recent patch upload
  - Oldest waiting 88 days since first patch upload
  - 10 patches waiting more than 1 month since most recent upload
  - 29 patches waiting more than 1 month since first upload

I think those stats paint a pretty poor picture of our stable branch
review process, particularly Havana.

It should not take us 250 days for our review team to figure out whether
a patch is suitable material for a stable branch, nor should we have
nearly all the patches waiting more than 1 month in Havana.

These branches are not getting sufficient reviewer attention and we need
to take steps to fix that.

If I had to set a benchmark, assuming CI passes, I'd expect us to either
approve or reject submissions for stable within a 2 week window in the
common case, 1 month at the worst case.

If we are trying to throttle down the rate of change in Havana, that
totally makes sense, but we should be more active at rejecting patches
if that is our current goal, not let them hang around in limbo for
many months.

I'm actually unclear on who even has permission to approve patches
on stable branches? Despite being in Nova core I don't have any permission
to approve patches on stable. I think it is pretty odd that we've got a
system where the supposed experts of the Nova team can't approve patches
for stable. I get that we've probably got people on the stable team who are
not in core, but IMHO the stable team should comprise a superset of core,
not a subset.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] tempest api volume test failed

2014-07-29 Thread Nikesh Kumar Mahalka
I deployed a single-node devstack on Ubuntu 14.04.
This devstack is based on Juno.

1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
#FLAT_INTERFACE = eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh

5)
I am running the test below:
cd /opt/stack/tempest
./run_tempest.sh tempest.api.volume


But some tests failed. Manually, I am able to perform all volume operations.
Can anyone tell me where I am wrong?
Below is a portion of the failed test output:

==
FAIL: 
tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTestXML.test_volume_from_snapshot[gate]
--
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-07-28 12:01:41,514 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 POST
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots
0.117s
2014-07-28 12:01:41,569 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 GET
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots/20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7
0.054s
2014-07-28 12:01:43,621 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 GET
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots/20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7
0.049s
}}}

Traceback (most recent call last):
  File /opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py,
line 181, in test_volume_from_snapshot
snapshot = self.create_snapshot(self.volume_origin['id'])
  File /opt/stack/tempest/tempest/api/volume/base.py, line 106, in
create_snapshot
'available')
  File /opt/stack/tempest/tempest/services/volume/xml/snapshots_client.py,
line 136, in wait_for_snapshot_status
value = self._get_snapshot_status(snapshot_id)
  File /opt/stack/tempest/tempest/services/volume/xml/snapshots_client.py,
line 109, in _get_snapshot_status
snapshot_id=snapshot_id)
SnapshotBuildErrorException: Snapshot
20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7 failed to build and is in ERROR
status


Ran 246 tests in 4149.523s

FAILED (failures=10)



Re: [openstack-dev] [neutron] [not-only-neutron] How to Contribute upstream in OpenStack Neutron

2014-07-29 Thread Luke Gorrie
On 28 July 2014 11:37, Salvatore Orlando sorla...@nicira.com wrote:

 Therefore the likelihood of your patch merging depends on the specific
 nature of the -1 you received.


This is really a key point.

Here is a pattern that's worth recognising:

If your code is in reasonable shape but there is no urgent need to complete
the merge, then the reviewer might praise with faint damnation. That is,
keep giving you -1 reviews for minor reasons until closer to the end of the
merge window.

If you are an overzealous newbie you might think you need to respond to
every such comment with an immediate revision, and that you might then be
rewarded with a +1; but then you would just be waving a dead chicken [0]
and would be better advised to slow down a little. (Says me, who went
through 19 patch sets on his first small contribution :-)).

I would hope that new contributors won't feel too much pressure to look
busy. This can be a tough call when your job is to take all reasonable
steps to have code accepted. It's one thing to be overzealous about
answering nitpick reviews but it would be really unfortunate if you felt
that you always needed to (extreme example) have an agenda item in all
relevant weekly meetings e.g. NFV + Nova + Neutron + ML2 + Third party.

In any case, the whole process probably goes much more smoothly once you
have had a chance to attend a Summit and made some friends. That might not
be a bad first step for new contributors, if they have that possibility.
(But then the Summit is very expensive until after your first commit is
merged and you are recognised as a contributor.)

[0]: http://zvon.org/comp/r/ref-Jargon_file.html#Terms~wave_a_dead_chicken


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-29 Thread Ihar Hrachyshka

On 29/07/14 12:15, Daniel P. Berrange wrote:
 Looking at the current review backlog, I think we have to
 seriously question whether our stable branch review process in
 Nova is working at an acceptable level.
 
 On Havana
 
   - 43 patches pending
   - 19 patches with a single +2
   - 1 patch with a -1
   - 0 patches with a -2
   - Stalest waiting 111 days since most recent patch upload
   - Oldest waiting 250 days since first patch upload
   - 26 patches waiting more than 1 month since most recent upload
   - 40 patches waiting more than 1 month since first upload
 
 On Icehouse:
 
   - 45 patches pending
   - 17 patches with a single +2
   - 4 patches with a -1
   - 1 patch with a -2
   - Stalest waiting 84 days since most recent patch upload
   - Oldest waiting 88 days since first patch upload
   - 10 patches waiting more than 1 month since most recent upload
   - 29 patches waiting more than 1 month since first upload
 
 I think those stats paint a pretty poor picture of our stable branch
 review process, particularly Havana.
 
 It should not take us 250 days for our review team to figure out whether
 a patch is suitable material for a stable branch, nor should we have
 nearly all the patches waiting more than 1 month in Havana.
 
 These branches are not getting sufficient reviewer attention and we need
 to take steps to fix that.
 
 If I had to set a benchmark, assuming CI passes, I'd expect us to either
 approve or reject submissions for stable within a 2 week window in the
 common case, 1 month at the worst case.

Totally agreed.

 
 If we are trying to throttle down the rate of change in Havana, that
 totally makes sense, but we should be more active at rejecting patches
 if that is our current goal, not let them hang around in limbo for
 many months.

Tip: to be notified in time about new backport requests, you may add the
branches you're interested in to your watched projects in Gerrit: go to
Settings -> Watched Projects and add whatever you like. Then you'll
receive emails for each backport request.

 
 I'm actually unclear on who even has permission to approve patches
 on stable branches ? Despite being in Nova core I don't have any perm
 to approve patches on stable. I think it is pretty odd that we've got a
 system where the supposed experts of the Nova team can't approve patches
 for stable. I get that we've probably got people on stable team who are
 not in core, but IMHO we should have the stable team comprising a superset
 of core, not a subset.

AFAIK the stable team consists of project PTLs plus people interested in
stable branches specifically (who got added to the team after requesting it).
Anyone can start reviewing the patches and ask to be added to the team.

I also think it's weird that project cores don't have +2 for the stable
branches of their projects. That would not require global +2 for all stable
branches, though.

 
 Regards,
 Daniel
 



Re: [openstack-dev] tempest api volume test failed

2014-07-29 Thread Duncan Thomas
You should be able to trace the failed request in the cinder-api and
cinder-volume logs to find out what caused the error. Grepping for
ERROR in both those logs is usually a good starting point.

On 29 July 2014 11:10, Nikesh Kumar Mahalka nikeshmaha...@vedams.com wrote:
 I deployed a single-node devstack on Ubuntu 14.04.
 This devstack is based on Juno.

 1) git clone https://github.com/openstack-dev/devstack.git
 2)cd devstack
 3)vi local.conf

 [[local|localrc]]

 ADMIN_PASSWORD=some_password
 DATABASE_PASSWORD=$ADMIN_PASSWORD
 RABBIT_PASSWORD=$ADMIN_PASSWORD
 SERVICE_PASSWORD=$ADMIN_PASSWORD
 SERVICE_TOKEN=ADMIN
 #FLAT_INTERFACE = eth0
 FIXED_RANGE=192.168.2.80/29
 #FLOATING_RANGE=192.168.20.0/25
 HOST_IP=192.168.2.64
 LOGFILE=$DEST/logs/stack.sh.log
 SCREEN_LOGDIR=$DEST/logs/screen
 SYSLOG=True
 SYSLOG_HOST=$HOST_IP
 SYSLOG_PORT=516
 RECLONE=yes
 CINDER_ENABLED_BACKENDS=client:client_driver

 [[post-config|$CINDER_CONF]]

 [client_driver]
 volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
 san_ip = 192.168.2.192
 san_login = some_name
 san_password =some_password
 client_iscsi_ips = 192.168.2.193

 4)./stack.sh

 5)
 I am running the test below:
 cd /opt/stack/tempest
 ./run_tempest.sh tempest.api.volume


 But some tests failed. Manually, I am able to perform all volume operations.
 Can anyone tell me where I am wrong?
 Below is a portion of the failed test output:

 ==
 FAIL: 
 tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTestXML.test_volume_from_snapshot[gate]
 --
 Traceback (most recent call last):
 _StringException: Empty attachments:
   stderr
   stdout

 pythonlogging:'': {{{
 2014-07-28 12:01:41,514 3278 INFO [tempest.common.rest_client]
 Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 POST
 http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots
 0.117s
 2014-07-28 12:01:41,569 3278 INFO [tempest.common.rest_client]
 Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 GET
 http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots/20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7
 0.054s
 2014-07-28 12:01:43,621 3278 INFO [tempest.common.rest_client]
 Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 GET
 http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots/20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7
 0.049s
 }}}

 Traceback (most recent call last):
   File /opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py,
 line 181, in test_volume_from_snapshot
 snapshot = self.create_snapshot(self.volume_origin['id'])
   File /opt/stack/tempest/tempest/api/volume/base.py, line 106, in
 create_snapshot
 'available')
   File /opt/stack/tempest/tempest/services/volume/xml/snapshots_client.py,
 line 136, in wait_for_snapshot_status
 value = self._get_snapshot_status(snapshot_id)
   File /opt/stack/tempest/tempest/services/volume/xml/snapshots_client.py,
 line 109, in _get_snapshot_status
 snapshot_id=snapshot_id)
 SnapshotBuildErrorException: Snapshot
 20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7 failed to build and is in ERROR
 status


 Ran 246 tests in 4149.523s

 FAILED (failures=10)




-- 
Duncan Thomas



Re: [openstack-dev] [Neutron] [IPv6] Hide ipv6 subnet API attributes

2014-07-29 Thread Henry Gessau
Nir Yechiel nyech...@redhat.com wrote:
 Now with the Juno efforts to provide IPv6 support and some features
 (provider networks SLAAC, RADVD) already merged, is there any plan/patch to
 revert this Icehouse change [1] and make the 'ra_mode' and
 'ipv6_address_mode' consumable?
 
 Thanks,
 Nir
 
 [1] https://review.openstack.org/#/c/85869/

It's already done.

https://review.openstack.org/86169





Re: [openstack-dev] [Manila] Incubation request

2014-07-29 Thread Thierry Carrez
Swartzlander, Ben wrote:
 Manila has come a long way since we proposed it for incubation last autumn. 
 Below are the formal requests.
 
 https://wiki.openstack.org/wiki/Manila/Incubation_Application
 https://wiki.openstack.org/wiki/Manila/Program_Application
 
 Anyone have anything to add before I forward these to the TC?

When ready, propose a governance change a bit like this one:

https://github.com/openstack/governance/commit/52d9b4cf2f3ba9d0b757e16dc040a1c174e1d27e

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Problems with new Neutron service plugin

2014-07-29 Thread Maciej Nabożny
Thank you very much! The constants list was the problem. I've also added an 
empty list as the result of get_floatingports and it works perfectly :)


Maciek

On 29.07.2014 at 12:09, Jaume Devesa wrote:

Maciej: have you loaded the service plugin in the neutron.conf?

service_plugins =
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.floatingports.FloatingPort

Neutron needs to know what plugins to load at start up time to expose
the extensions...

Your RESOURCE_ATTRIBUTE_MAP key is called 'redirections'; it should be called
'floatingports'. And when you call resource_helper.build_resource_info you
are using constants.FIREWALL; it should be 'floatingport' (the plugin type or
the plugin name you defined in 'FloatingPortPluginBase', not sure which one;
in your case it does not matter because it is the same value).

You also have to add the value 'floatingport' to the
neutron.plugins.common.constants.COMMON_PREFIXES dictionary.

The definition of the method get_floatingports should use the 'filters'
parameter instead of the 'id' one:

def get_floatingports(self, context, filters=None, fields=None)

And the rest is up to you! Hope this helps.


On 29 July 2014 11:51, Kashyap Chamarthy kcham...@redhat.com wrote:

On Tue, Jul 29, 2014 at 11:15:45AM +0200, Maciej Nabożny wrote:

[Just a generic comment, not related to the extension code in question.]

  Yes, here is extension code:
  http://pastebin.com/btYQjwnr
  service plugin code:
  http://pastebin.com/ikKf80Fr

Pastebins expire, it's useful to provide URLs that are accessible for
much longer times, so that anyone who refers to this email months later
will still be able to access the code in question.

--
/kashyap





--
Jaume Devesa
Software Engineer at Midokura







Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-29 Thread Thierry Carrez
Ihar Hrachyshka wrote:
 On 29/07/14 12:15, Daniel P. Berrange wrote:
  Looking at the current review backlog, I think we have to
  seriously question whether our stable branch review process in
  Nova is working at an acceptable level.
 
 On Havana
 
   - 43 patches pending
   - 19 patches with a single +2
   - 1 patch with a -1
   - 0 patches with a -2
   - Stalest waiting 111 days since most recent patch upload
   - Oldest waiting 250 days since first patch upload
   - 26 patches waiting more than 1 month since most recent upload
   - 40 patches waiting more than 1 month since first upload
 
 On Icehouse:
 
   - 45 patches pending
   - 17 patches with a single +2
   - 4 patches with a -1
   - 1 patch with a -2
   - Stalest waiting 84 days since most recent patch upload
   - Oldest waiting 88 days since first patch upload
   - 10 patches waiting more than 1 month since most recent upload
   - 29 patches waiting more than 1 month since first upload
 
 I think those stats paint a pretty poor picture of our stable branch
 review process, particularly Havana.
 
 It should not take us 250 days for our review team to figure out whether
 a patch is suitable material for a stable branch, nor should we have
 nearly all the patches waiting more than 1 month in Havana.
 
 These branches are not getting sufficient reviewer attention and we need
 to take steps to fix that.
 
 If I had to set a benchmark, assuming CI passes, I'd expect us to either
 approve or reject submissions for stable within a 2 week window in the
 common case, 1 month at the worst case.
 
 Totally agreed.

A bit of history.

At the dawn of time there were no OpenStack stable branches; each
distribution was maintaining its own stable branches, duplicating the
backporting work. At some point it was suggested (mostly by Red Hat and
Canonical folks) that there should be collaboration around that task,
and the OpenStack project decided to set up official stable branches
where all distributions could share the backporting work. The stable
team was seeded with package maintainers from all over the distro
world.

So these branches originally only exist as a convenient place to
collaborate on backporting work. This is completely separate from
development work, even if these days backports are often proposed by
developers themselves. The stable branch team is separate from the rest
of OpenStack teams. We have always been very clear that if the stable
branches are no longer maintained (i.e. if the distributions don't see
the value of those anymore), then we'll consider removing them. We, as a
project, only signed up to support those as long as the distros wanted them.

We have been adding new members to the stable branch teams recently, but
those tend to come from development teams rather than downstream
distributions, and that starts to bend the original landscape.
Basically, the stable branch needs to be very conservative to be a
source of safe updates -- downstream distributions understand the need
to weigh the benefit of the patch vs. the disruption it may cause.
Developers have another type of incentive, which is to get the fix they
worked on into stable releases, without necessarily being very
conservative. Adding more -core people to the stable team to compensate
the absence of distro maintainers will ultimately kill those branches.

 If we are trying to throttle down the rate of change in Havana, that
 totally makes sense, but we should be more active at rejecting patches
 if that is our current goal, not let them hang around in limbo for
 many months.
 
 Tip: to be notified in time about new backport requests, you may add
 those branches you're interested in to watched, in Gerrit, go to
 Settings -> Watched Projects, and add whatever you like. Then you'll
 receive emails for each backport request.
 
 
 I'm actually unclear on who even has permission to approve patches
 on stable branches ? Despite being in Nova core I don't have any perm
 to approve patches on stable. I think it is pretty odd that we've got a
 system where the supposed experts of the Nova team can't approve patches
 for stable. I get that we've probably got people on stable team who are
 not in core, but IMHO we should have the stable team comprising a superset
 of core, not a subset.
 
 AFAIK stable team consists of project PTLs + people interested in stable
 branches specifically (that got added to the team after their request).
 Anyone can start reviewing the patches and ask to be added to the team.
 
 I also think it's weird that project cores don't have +2 for stable
 branches of their projects. They do not require global +2 for all stable
 branches though.

The key reason why $PROJECT-core don't automatically get stable branch
+2 is that the rules for accepting a patch there are VERY different from
the rules for accepting a patch for master, and most -core people don't
know those.

We need to ensure those -core people know the stable branch acceptance
rules before we grant them +2 there.

Re: [openstack-dev] tempest api volume test failed

2014-07-29 Thread Giulio Fidente

On 07/29/2014 12:43 PM, Duncan Thomas wrote:

You should be able to trace the failed request in the cinder-api and
cinder-volume logs to find out what caused the error. Grepping for
ERROR in both those logs is usually a good starting point.


Also, from the log lines you posted it seems a snapshot failed to be 
built and went into ERROR state:


SnapshotBuildErrorException: Snapshot 
20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7 failed to build and is in ERROR
status

Are you able to create a volume snapshot using the cinder cli?
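
For example, something like this should exercise the same path (IDs and
names here are placeholders):

    cinder create --display-name test-vol 1
    cinder snapshot-create <volume-id> --display-name test-snap
    cinder snapshot-list

If the snapshot goes to ERROR here as well, the cinder-volume log should
show the driver-side reason.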


On 29 July 2014 11:10, Nikesh Kumar Mahalka nikeshmaha...@vedams.com wrote:

I deployed a single node devstack on Ubuntu 14.04.
This devstack tracks the Juno branch.

1) git clone https://github.com/openstack-dev/devstack.git
2)cd devstack
3)vi local.conf

[[local|localrc]]

ADMIN_PASSWORD=some_password
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=ADMIN
#FLAT_INTERFACE = eth0
FIXED_RANGE=192.168.2.80/29
#FLOATING_RANGE=192.168.20.0/25
HOST_IP=192.168.2.64
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes
CINDER_ENABLED_BACKENDS=client:client_driver

[[post-config|$CINDER_CONF]]

[client_driver]
volume_driver=cinder.volume.drivers.san.client.iscsi.client_iscsi.ClientISCSIDriver
san_ip = 192.168.2.192
san_login = some_name
san_password =some_password
client_iscsi_ips = 192.168.2.193

4)./stack.sh

5)
I am running below test:
cd /opt/stack/tempest
./run_tempest.sh tempest.api.volume


But some tests failed. Manually I am able to perform all volume operations.
Can anyone tell me where I am going wrong?
Below is a portion of a failed test:

==
FAIL: 
tempest.api.volume.test_volumes_snapshots.VolumesSnapshotTestXML.test_volume_from_snapshot[gate]
--
Traceback (most recent call last):
_StringException: Empty attachments:
   stderr
   stdout

pythonlogging:'': {{{
2014-07-28 12:01:41,514 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 POST
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots
0.117s
2014-07-28 12:01:41,569 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 GET
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots/20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7
0.054s
2014-07-28 12:01:43,621 3278 INFO [tempest.common.rest_client]
Request (VolumesSnapshotTestXML:test_volume_from_snapshot): 200 GET
http://192.168.2.64:8776/v1/eea01c797b0c4df7b1ead18038697a2e/snapshots/20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7
0.049s
}}}

Traceback (most recent call last):
   File /opt/stack/tempest/tempest/api/volume/test_volumes_snapshots.py,
line 181, in test_volume_from_snapshot
 snapshot = self.create_snapshot(self.volume_origin['id'])
   File /opt/stack/tempest/tempest/api/volume/base.py, line 106, in
create_snapshot
 'available')
   File /opt/stack/tempest/tempest/services/volume/xml/snapshots_client.py,
line 136, in wait_for_snapshot_status
 value = self._get_snapshot_status(snapshot_id)
   File /opt/stack/tempest/tempest/services/volume/xml/snapshots_client.py,
line 109, in _get_snapshot_status
 snapshot_id=snapshot_id)
SnapshotBuildErrorException: Snapshot
20d0f8ad-9b5b-49f4-a37e-34b762bd0ca7 failed to build and is in ERROR
status


Ran 246 tests in 4149.523s

FAILED (failures=10)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







--
Giulio Fidente
GPG KEY: 08D733BA

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] requesting python-neutronclient release for MacAddressInUseClient exception

2014-07-29 Thread Kyle Mestery
On Mon, Jul 28, 2014 at 6:45 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:
 Nova needs a python-neutronclient release to use the new
 MacAddressInUseClient exception type defined here [1].

I'll spin a new client release today Matt, and reply back on this
thread once that's complete.

Thanks,
Kyle

 [1] https://review.openstack.org/#/c/109052/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Update on third party CI in Neutron

2014-07-29 Thread Kyle Mestery
On Mon, Jul 28, 2014 at 1:42 PM, Hemanth Ravi hemanthrav...@gmail.com wrote:
 Kyle,

 One Convergence CI has been fixed (setup issue) and is running without the
 failures for ~10 days now. Updated the etherpad.

Thanks for the update Hemanth, much appreciated!

Kyle

 Thanks,
 -hemanth


 On Fri, Jul 11, 2014 at 4:50 PM, Fawad Khaliq fa...@plumgrid.com wrote:


 On Fri, Jul 11, 2014 at 8:56 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 PLUMgrid

 Not saving enough logs

 All Jenkins slaves were just updated to upload all required logs. PLUMgrid
 CI should be good now.


 Thanks,
 Fawad Khaliq


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status and Expectations for Juno

2014-07-29 Thread Kyle Mestery
This all looks good to me. My only concern is that we need to land a
driver in Juno as well. The HA-proxy based, agent-less driver which
runs on the API node is the only choice here, right? Otherwise, the
scalable work is being done in Octavia. Is that correct?

On Mon, Jul 28, 2014 at 2:46 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 That was essentially the point of my email.  To get across that not
 everything we want to go in Juno will make it in and because of this V2
 will not be in the state that many users will be able to use.  Also, to
 get people's opinions on what they think is high priority.

 On Mon, 2014-07-28 at 18:11 +, Doug Wiegley wrote:
 I don’t think the lbaas roadmap has changed (including octavia), just the
 delivery timeline.  Nor am I debating making the ref driver simpler (I’m
 on record as supporting that decision, and still do.)  And if that was the
 only wart, I’m sure we’d all ignore it and plow forward.  But it’s not,
 and add all the things that are likely to miss together, and I think we’d
 be doing the community a disservice by pushing v2 too soon.  Which means
 our moratorium on v1 is likely premature.

 Unless Brandon gives up sleeping altogether; then I’m sure we’ll make it.

 Anyway, all this is my long-winded way of agreeing that some things will
 likely need to be pushed to K, it happens, and let’s just be realistic
 about what that means for our end users.

 Doug



 On 7/28/14, 9:34 AM, Jorge Miramontes jorge.miramon...@rackspace.com
 wrote:

 Hey Doug,
 
 In terms of taking a step backward from a user perspective I'm fine with
 making v1 the default. I think there was always the notion of supporting
 what v1 currently offers by making a config change. Thus, Horizon should
 still have all the support it had in Icehouse. I am a little worried about
 the delivery of items we said we wanted to deliver however. The reason we
 are focusing on the current items is that Octavia is also part of the
 picture, albeit, behind the scenes right now. Thus, the argument that the
 new reference driver is less capable is actually a means to getting
 Octavia out. Eventually, we were hoping to get Octavia as the reference
 implementation which, from the user's perspective, will be much better
 since you can actually run it at operator scale. To be realistic, the v2
 implementation is a WIP and focusing on the control plane first seems to
 make the most sense. Having a complete end-to-end v2 implementation is
 large in scope and I don't think anyone expected it to be a full-fledged
 product by Juno, but we are getting closer!
 
 
 Cheers,
 --Jorge
 
 
 
 
 On 7/28/14 8:02 AM, Doug Wiegley do...@a10networks.com wrote:
 
 Hi Brandon,
 
 Thanks for bringing this up. If you're going to call me out by name, I
 guess I have to respond to the Horizon thing.  Yes, I don't like it, from
 a user perspective.  We promise a bunch of new features, new drivers… and
 none of them are visible.  Or the horizon support does land, and suddenly
 the user goes from a provider list of 5 to 2.  Sucks if you were using
 one
 of the others.  Anyway, back to a project status.  To summarize, listed
 by
 feature, priority, status:
 
 LBaaS V2 API,   high, reviews in gerrit
 Ref driver, high, removed agent, review in gerrit
 CLI V2, high, not yet in review
 Devstack,   high, not started
 +TLS,   medium, lots done in parallel
 +L7,medium, not started
 Shim V1 -> V2,  low, minimally complete
 Horizon V2, low, not started
 ref agent,  low, not started
 Drivers,low, one vendor driver in review, several in progress
 
 And with a review submission freeze of August 21st.  Let's work
 backwards:
 
 Dependent stuff will need at least two weeks to respond to the final
 changes and submit.  That'd be:
 
 Devstack,   high, not started
 +TLS,   medium, lots done in parallel
 +L7,medium, not started
 Shim V1 -> V2,  low, minimally complete
 Horizon V2, low, not started
 ref agent,  low, not started
 Drivers,low, one vendor driver in review, several in progress
 
 … I'm not including TLS, since its work has been very parallel so far,
 even though logically it should be there.  But that would mean the
 following should be “done” and merged by August 7th:
 
 LBaaS V2 API,   high, reviews in gerrit
 Ref driver, high, removed agent, review in gerrit
 CLI V2, high, not yet in review
 
 … that's a week and a half, for a big pile of new code.  At the current
 change velocity, I have my doubts.  And if that slips, the rest starts to
 look very very hazy.  Backing up, and focusing on the user, here's lbaas
 v1:
 
 
 
 
 - Current object model, basic http lb
 - Ref driver with agent, +3 vendors (with +3 more backends not submitting
 drivers because of v2)
 - UI
 
 … what we initially planned for Juno:
 
 - Shiny new object model (base for some new features)
 - TLS termination/offload
 - L7 routing
 - Ref driver with agent, 

Re: [openstack-dev] [neutron] spec template to use

2014-07-29 Thread Kyle Mestery
On Tue, Jul 29, 2014 at 3:12 AM, Andreas Scheuring
andreas.scheur...@de.ibm.com wrote:
 Hi together,
 I found two blueprint templates for neutron

 The .rst file on github
 http://git.openstack.org/cgit/openstack/neutron-specs/tree/specs/template.rst

 and the one on the openstack wiki page
 https://wiki.openstack.org/wiki/Neutron/BlueprintTemplate

 Are both templates still valid or is the .rst one the right one to go? And
 if so, does it make sense to remove the content from the wikipage and only
 place a link to the .rst file on github there?

Please use the .rst file, the one on the wiki is deprecated. I've
updated the wiki page to reflect the new template file and blueprint
process.

Thanks,
Kyle

 Thanks,

 Andreas


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Managing change in gerrit which depends on multiple other changes in review

2014-07-29 Thread Evgeny Fedoruk
Hi folks,

I'm working on a change for neutron LBaaS service.
Since there is a massive work done for LBaaS these days, my change depends on 
other changes being reviewed in parallel in gerrit.
I don't have deep git knowledge and I'm failing to figure out the right 
procedure for managing such a multi-dependent patch.
So, I am sending my question to you guys, in the hope of finding the right 
way to manage such patches in gerrit.

Here is the situation:
There are 4 patches on review in gerrit

1.   A - No dependencies

2.   B - Depends on A

3.   C - Depends on A

4.   D - No dependencies




My change, let's call it X, is already on review in gerrit.
It should depend on all four other changes, A, B, C and D.

I tried two ways of managing those dependencies: 1) by doing a cherry-pick 
for each one of them, and 2) by doing git review and git rebase for each one of 
them.
Neither works well for me; my change's commit message is replaced by other 
changes' commit messages, and when I commit my patch, it commits the other 
changes' patches too.

So, my question is:
Is this scenario supported by the gerrit system?
If it is - what is the right procedure to follow in order to manage those 
dependencies,
and how do I rebase my change when one of the dependencies is committed with a 
new patch, to keep the dependencies up-to-date?


Thank you!
Evg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-29 Thread Daniel P. Berrange
On Tue, Jul 29, 2014 at 02:04:42PM +0200, Thierry Carrez wrote:
 Ihar Hrachyshka a écrit :
 At the dawn of time there were no OpenStack stable branches, each
 distribution was maintaining its own stable branches, duplicating the
 backporting work. At some point it was suggested (mostly by RedHat and
 Canonical folks) that there should be collaboration around that task,
 and the OpenStack project decided to set up official stable branches
 where all distributions could share the backporting work. The stable
 team group was seeded with package maintainers from all over the distro
 world.
 
 So these branches originally only exist as a convenient place to
 collaborate on backporting work. This is completely separate from
 development work, even if those days backports are often proposed by
 developers themselves. The stable branch team is separate from the rest
 of OpenStack teams. We have always been very clear tht if the stable
 branches are no longer maintained (i.e. if the distributions don't see
 the value of those anymore), then we'll consider removing them. We, as a
 project, only signed up to support those as long as the distros wanted them.
 
 We have been adding new members to the stable branch teams recently, but
 those tend to come from development teams rather than downstream
 distributions, and that starts to bend the original landscape.
 Basically, the stable branch needs to be very conservative to be a
 source of safe updates -- downstream distributions understand the need
 to weigh the benefit of the patch vs. the disruption it may cause.
 Developers have another type of incentive, which is to get the fix they
 worked on into stable releases, without necessarily being very
 conservative. Adding more -core people to the stable team to compensate
 the absence of distro maintainers will ultimately kill those branches.

The situation I'm seeing is that the broader community believe that
the Nova core team is responsible for the nova stable branches. When
stuff sits in review for ages it is the core team that is getting
pinged about it and on the receiving end of the complaints about the
lack of review.

Adding more people to the stable team won't kill those branches. I'm
not suggesting we change the criteria for accepting patches, or that
we dramatically increase the number of patches we accept. There is
clearly a lot of stuff proposed to stable that the existing stable
team think is a good idea - as illustrated by the number of patches
with at least one +2 present. On the contrary, having a bigger stable
team comprising all of core + interested distro maintainers will ensure
that the stable branches are actually getting the patches people in
the field need to provide a stable cloud. 


 
  If we are trying to throttle down the rate of change in Havana, that
  totally makes sense, but we should be more active at rejecting patches
  if that is our current goal, not let them hang around in limbo for
  many months.
  
  Tip: to be notified in time about new backport requests, you may add
  those branches you're interested in to watched, in Gerrit, go to
  Settings -> Watched Projects, and add whatever you like. Then you'll
  receive emails for each backport request.
  
  
  I'm actually unclear on who even has permission to approve patches
  on stable branches ? Despite being in Nova core I don't have any perm
  to approve patches on stable. I think it is pretty odd that we've got a
  system where the supposed experts of the Nova team can't approve patches
  for stable. I get that we've probably got people on stable team who are
  not in core, but IMHO we should have the stable team comprising a superset
  of core, not a subset.
  
  AFAIK stable team consists of project PTLs + people interested in stable
  branches specifically (that got added to the team after their request).
  Anyone can start reviewing the patches and ask to be added to the team.
  
  I also think it's weird that project cores don't have +2 for stable
  branches of their projects. They do not require global +2 for all stable
  branches though.
 
 The key reason why $PROJECT-core don't automatically get stable branch
 +2 is that the rules for accepting a patch there are VERY different from
 the rules for accepting a patch for master, and most -core people don't
 know those.
 
 We need to ensure those -core people know the stable branch acceptance
 rules before we grant them +2 there.

I think that's a really weak argument against having the core team take
part in the stable branch approval. If the rules are outlined somewhere
on the wiki anyone I know on the core team is more than capable of
reading and following them.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   

Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-29 Thread Thierry Carrez
Daniel P. Berrange wrote:
 The situation I'm seeing is that the broader community believe that
 the Nova core team is responsible for the nova stable branches. When
 stuff sits in review for ages it is the core team that is getting
 pinged about it and on the receiving end of the complaints the
 inaction of review.

I would be interested to know who is actually complaining... This was
supposed to be a closed loop area: consumers of the branches are the
ones who maintain it. So they can't complain it's not up to date -- it's
their work to maintain it. I think that's a very self-healing situation,
and changing from that (by making it yet another core reviewers duty)
sounds very dangerous to me.

FWIW changes tend to get picked up by the stable point release manager
as they get closer to a point release. There is one due soon so I expect
a peak of activity soon. Also note that stable branch maintainers have
their own mailing-list:

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint

 We need to ensure those -core people know the stable branch acceptance
 rules before we grant them +2 there.
 
 I think that's a really weak argument against having the core team take
 part in the stable branch approval. If the rules are outlined somewhere
 on the wiki anyone I know on the core team is more than capable of
 reading and following them.

They are certainly capable (and generally make good stable-maint
candidates). But that's not work they signed up for. I prefer that work
to be opt-in rather than add to the plate of core reviewers by making
them all collectively responsible for stable branch maintenance too.

Rules are here, btw:
https://wiki.openstack.org/wiki/StableBranch

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Hide ipv6 subnet API attributes

2014-07-29 Thread Collins, Sean
On Tue, Jul 29, 2014 at 04:40:33AM EDT, Nir Yechiel wrote:
 Now with the Juno efforts to provide IPv6 support and some features (provider 
 networks SLAAC, RADVD) already merged, is there any plan/patch to revert this 
 Icehouse change [1] and make the 'ra_mode' and 'ipv6_address_mode' consumable?
 

There are currently no plans, since all of the changes that implement
IPv6 functionality have been landing in Juno. I suggest upgrading to
Juno when it is released.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Tracking unapproved specs in milestone plans

2014-07-29 Thread Sergey Lukjanov
+1, it sounds like the best approach for such situation.

On Sat, Jul 19, 2014 at 1:18 AM, Russell Bryant rbry...@redhat.com wrote:
 On 07/18/2014 11:38 AM, Thierry Carrez wrote:
 Hi everyone,

 At the last cross-project/release meeting we discussed the need to track
 yet-unapproved specs in milestone release plans.

 There are multiple cases where the spec is not approved yet, but the
 code is almost ready, and there is a high chance that the feature will
 be included in the milestone. Currently such specs are untracked and fly
 below the radar until the spec is approved at the last minute -- this
 creates a gap between what we know might be coming up and what we
 communicate outside the project might be coming up.

 The simplest way to track those is to add them (with a priority) to the
 milestone plan and set the implementation status to Blocked (and
 design status to Review if you want to be fancy). If there is general
 agreement around that, I'll update the reference wiki page at [1] to
 reflect that.

 I also have a new version of spec2bp.py[2] in review that supports both
 cases (--inreview and approved) and sets blueprint fields for you.
 Please feel free to try it and report back on the review if it works or
 fails for you.

 [1] https://wiki.openstack.org/wiki/Blueprints
 [2] https://review.openstack.org/108041


 Sounds like a very reasonable solution to the problem you discussed in
 the weekly project meeting.  +1

 --
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting this week.

2014-07-29 Thread Peter Pouliot
Hi All,
Due to key members of our team travelling this week we will have to postpone 
the meeting.  We'll reconvene next week at the usual time.


Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Ready to change the meeting time?

2014-07-29 Thread Steve Gordon
Hi all,

I have recently had a few people express concern to me that the current meeting 
time is preventing their attendance at the meeting. As we're still using the 
original meeting time that we agreed to trial immediately after the summit, it 
is probably time we reassess anyway.

I have been through the global iCal [1] and tried to identify times where at 
least one of the IRC meeting rooms is available and no other NFV related team 
or subteam (e.g. Nova, Neutron, DVR, L3, etc.) is meeting. The resultant times 
are available for voting on this whenisgood.net sheet - be sure to select your 
location to view in your local time:

http://whenisgood.net/exzzbi8

If you are a regular participant in the NFV meetings, or even more importantly 
if you would like to be but are prevented from doing so because of the current 
timing, then please record your preferences above. If you think there is an 
available time slot that I've missed, or I've made a time slot available that 
actually clashes with a meeting relevant to NFV participants, then please 
respond on list!

This week's meeting will proceed at the regular time on Wednesday, July 30 at 
1400 UTC in #openstack-meeting-alt.

Thanks,

Steve

[1] 
https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Boot from ISO feature status

2014-07-29 Thread Maksym Lobur
Hi Daniel,

Thanks for the feedback. I have two questions:
1. Is it available in Havana?
2. Is it possible to say that the glance image should be fetched to a
CDROM device (and disk file named 'iso'), and the root disk should be
created blank according to the flavor settings (disk file named 'disk')?
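
To make question 2 concrete, the layout I mean would be something like
this in block device mapping v2 terms (illustrative syntax only, with
placeholders in angle brackets):

    nova boot --flavor m1.small \
      --block-device source=image,id=<iso-image-id>,dest=local,type=cdrom,bootindex=0 \
      --block-device source=blank,dest=local,size=10,bootindex=1 \
      test-instance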

Best regards,
Max Lobur,
OpenStack Developer, Mirantis, Inc.

Mobile: +38 (093) 665 14 28
Skype: max_lobur

38, Lenina ave. Kharkov, Ukraine
www.mirantis.com
www.mirantis.ru


On Tue, Jul 29, 2014 at 2:03 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Mon, Jul 28, 2014 at 09:30:24AM -0700, Vishvananda Ishaya wrote:
  I think we should discuss adding/changing this functionality. I have had
  many new users assume that booting from an iso image would give them a
  root drive which they could snapshot. I was hoping that the new block
  device mapping code would allow something like this, but unfortunately
  there isn’t a way to do it there either. You can boot a flavor with an
  ephemeral drive, but there is no command to snapshot secondary drives.

 The new block device mapping code is intended to ultimately allow any
 disk configuration you can imagine, so if a desirable setup with CDROM
 vs disks does not work, do file a bug about this because we should
 definitely address it.

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
 |: http://libvirt.org  -o- http://virt-manager.org
 :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
 :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
 :|

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Generate Event or Notification in Ceilometer

2014-07-29 Thread Jay Pipes

On 07/29/2014 02:05 AM, Duan, Li-Gong (Gary@HPServers-Core-OE-PSC) wrote:

Hi Folks,

Are there any guides or examples to show how to produce a new event or
notification and add a handler for this event in ceilometer?

I am asked to implement OpenStack service monitoring which will send an
event and trigger the handler shortly after a service, say nova-compute,
crashes.

The link (http://docs.openstack.org/developer/ceilometer/events.html)
does a good job of explaining the concept, and hence I know that I
need to emit notifications to the message queue and ceilometer-collector will
process them and generate events, but it is far from a real implementation.


I would not use Ceilometer for this, as it is more tenant-facing than 
infrastructure service facing. Instead, I would use a tried-and-true 
solution like Nagios and NRPE checks. Here's an example of such a check 
for a keystone endpoint:


https://github.com/ghantoos/debian-nagios-plugins-openstack/blob/master/plugins/check_keystone
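
For illustration, a minimal check of that shape could be as simple as the 
following (a sketch, not the plugin linked above; the endpoint URL is an 
assumption to adjust for your deployment):

    #!/usr/bin/env python
    # Nagios plugin convention: exit 0 = OK, exit 2 = CRITICAL
    import sys
    import urllib2

    KEYSTONE_URL = 'http://127.0.0.1:5000/v2.0/'  # hypothetical endpoint

    try:
        resp = urllib2.urlopen(KEYSTONE_URL, timeout=5)
        if resp.getcode() == 200:
            print 'OK: keystone answered with HTTP 200'
            sys.exit(0)
        print 'CRITICAL: unexpected HTTP %d from keystone' % resp.getcode()
        sys.exit(2)
    except Exception as e:
        print 'CRITICAL: keystone unreachable: %s' % e
        sys.exit(2)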

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] nova-network as ML2 mechanism?

2014-07-29 Thread Jonathan Proulx
Hi All,

Would making a nova-network mechanism driver for the ml2 plugin be possible?

I'm an operator not a developer so apologies if this has been
discussed and is either planned or impossible, but a quick web search
didn't hit anything.

As an operator I would envision this a a transition mechanism, which
AFAIK is still lacking, between nova network and neutron.

If a DB transition script similar to the ovs-ml2 conversion could be
created, operators could transition their controller/network-nodes to
neutron while initially leaving the compute nodes with active
nova-network configs.  It's a much simpler matter for most
operators I think to then do rolling upgrades of compute hosts to
proper neutron agents either by live migrating existing VMs or simply
through attrition.  And this would preserve continuity of VMs through
the upgrade (these may be cattle but you still don't want to slaughter
all of them at once!)

This is no longer my use case as I jumped into neutron with Grizzly,
but having just transitioned to Icehouse and ML2, it got me to
thinking.  If this sounds feasible from a development standpoint I'd
recommend taking the discussion to the operators list to see if others
share my opinion before doing major work in that direction.

Just a thought,
-Jon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Boot from ISO feature status

2014-07-29 Thread Nikola Đipanov
On 07/29/2014 11:03 AM, Daniel P. Berrange wrote:
 On Mon, Jul 28, 2014 at 09:30:24AM -0700, Vishvananda Ishaya wrote:
 I think we should discuss adding/changing this functionality. I have had
 many new users assume that booting from an iso image would give them a
 root drive which they could snapshot. I was hoping that the new block
 device mapping code would allow something like this, but unfortunately
 there isn’t a way to do it there either. You can boot a flavor with an
 ephemeral drive, but there is no command to snapshot secondary drives.
 
 The new block device mapping code is intended to ultimately allow any
 disk configuration you can imagine, so if a desirable setup with CDROM
 vs disks does not work, do file a bug about this because we should
 definitely address it. 
 

There is a blueprint for this - and it would also provide what Vish
mentioned elsewhere on the thread (having a root drive that you can
snapshot) [1]. Sadly - I never found the time to start this work,
although there seem to be interested people from time to time - no one
has picked it up either (hint, hint).

If someone does - I'll be super happy to review it!

N.

[1] https://blueprints.launchpad.net/nova/+spec/libvirt-image-to-local-bdm

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][CI][third-party] The plan to bring up Snabb NFV CI for Juno-3

2014-07-29 Thread Steve Gordon
- Original Message -
 From: Luke Gorrie l...@snabb.co
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 
 Greetings fellow NFV'stas!
 
 I would like to explain and solicit feedback on our plan to support a new
 open source NFV system in Juno. This work is approved as
 low-priority/best-effort for Juno-3. (Yes, we do understand that we are
 fighting the odds in terms of the Juno schedule.)
 
 We are developing a practical open source NFV implementation for OpenStack.
 This is for people who want to run tens of millions of packets per second
 through Virtio-net on each compute node. The work involves contributing
 code upstream to a dependent chain of projects:
 
 snabbswitch - QEMU - Libvirt - Nova - Neutron

Hi Luke,

I've added the [third-party] tag as well to ensure this catches the broadest 
segment of relevant people. Probably a stupid question from me - are any 
modifications to upstream Open vSwitch required to support Snabb?

Thanks,

Steve

 Recently we had a breakthrough: QEMU upstream merged the vhost-user feature
 that we developed and this convinced the kind maintainers of Libvirt, Nova,
 and Neutron to let us target code to them in parallel. Now Libvirt has
 accepted our code upstream too and the last pieces are Nova and Neutron.
 (Then we can start work on Version 2.)
 
 Previously our upstreaming effort has been obstructed: people
 understandably wanted to see our QEMU code accepted before they would take
 us seriously. So it is an exciting time for us and our upstreaming work.
 
 Just now we have ramped up our OpenStack development effort in response to
 getting approved for Juno-3. Michele Paolino has joined in: he is
 experienced with Libvirt and is the one who upstreamed our code there.
 Nikolay Nikolaev is joining in too: he did the bulk of the development on
 vhost-user and the upstreaming of it into QEMU.
 
 Here is what the three of us are working on for Juno-3:
 
 * VIF_VHOSTUSER support in Nova.
 https://blueprints.launchpad.net/nova/+spec/vif-vhostuser
 
 * Snabb NFV mech driver for Neutron.
 https://blueprints.launchpad.net/neutron/+spec/snabb-nfv-mech-driver
 
 * NFV CI: OpenStack 3rd party CI that covers our entire software ecosystem
 (snabbswitch + QEMU + Libvirt + Nova + Neutron).
 
 We are already getting great support from the community. Thank you
 everybody for that, and meta-thankyou to the people who setup the NFV
 subgroup which has been a fantastic enabler. For the code changes, the ball
 is in our court now to get them into shape in time. For the CI, I think
 it's worth having a discussion to make sure we are on the same page and
 have the same expectations.

Have you already attempted to solicit some core reviewers in Nova and Neutron 
with an 

 Here is how I visualize our ideal NFV CI for Juno:
 
 * Run Tempest tests for Nova and Neutron.
 * Test with the relevant versions of Libvirt, QEMU, and snabbswitch.
 * Test with NFV-oriented features that are upstream in OpenStack.
 * Test with NFV-oriented changes that are not yet upstream e.g. Neutron QoS
 API.
 * Operate reliably with a strong track record.
 * Be easy for other people to replicate if they want to run their own NFV
 CI.
 
 This CI should then provide assurance for us that our whole ecosystem is
 running compatibly, for OpenStack that the code going upstream is
 continuously tested, and for end users that the software they plan to
 deploy works (either based on our tests, if they are deploying the same
 software that we use, or based on their own tests if they want to operate a
 customised CI).
 
 How does this CI idea sound to the community and to others who are
 interested in related NFV-oriented features?
 
 That was quite a brain-dump... we have been working on this for quite some
 time but mostly on the parts outside of the OpenStack tree until now.
 
 For more information about our open source NFV project you can read the
 humble home page: http://snabb.co/nfv.html
 
 and if you want to talk nuts and bolts you can find us on Github:
 https://github.com/SnabbCo/snabbswitch
 
 and Google Groups:
 https://groups.google.com/forum/#!forum/snabb-devel
 
 We are independent open source developers and we are working to support
 Deutsche Telekom's TeraStream NFV project.
 
 Cheers!
 -Luke
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-29 Thread Jay Pipes

On 07/29/2014 06:13 AM, Daniel P. Berrange wrote:

On Tue, Jul 29, 2014 at 02:04:42PM +0200, Thierry Carrez wrote:

Ihar Hrachyshka a écrit :
At the dawn of time there were no OpenStack stable branches, each
distribution was maintaining its own stable branches, duplicating the
backporting work. At some point it was suggested (mostly by RedHat and
Canonical folks) that there should be collaboration around that task,
and the OpenStack project decided to set up official stable branches
where all distributions could share the backporting work. The stable
team group was seeded with package maintainers from all over the distro
world.

So these branches originally only exist as a convenient place to
collaborate on backporting work. This is completely separate from
development work, even if these days backports are often proposed by
developers themselves. The stable branch team is separate from the rest
of OpenStack teams. We have always been very clear that if the stable
branches are no longer maintained (i.e. if the distributions don't see
the value of those anymore), then we'll consider removing them. We, as a
project, only signed up to support those as long as the distros wanted them.

We have been adding new members to the stable branch teams recently, but
those tend to come from development teams rather than downstream
distributions, and that starts to bend the original landscape.
Basically, the stable branch needs to be very conservative to be a
source of safe updates -- downstream distributions understand the need
to weigh the benefit of the patch vs. the disruption it may cause.
Developers have another type of incentive, which is to get the fix they
worked on into stable releases, without necessarily being very
conservative. Adding more -core people to the stable team to compensate
the absence of distro maintainers will ultimately kill those branches.


The situation I'm seeing is that the broader community believe that
the Nova core team is responsible for the nova stable branches. When
stuff sits in review for ages it is the core team that is getting
pinged about it and on the receiving end of the complaints about the
lack of review.

Adding more people to the stable team won't kill those branches. I'm
not suggesting we change the criteria for accepting patches, or that
we dramatically increase the number of patches we accept. There is
clearly a lot of stuff proposed to stable that the existing stable
team think is a good idea - as illustrated by the number of patches
with at least one +2 present. On the contrary, having a bigger stable
team comprising all of core + interested distro maintainers will ensure
that the stable branches are actually getting the patches people in
the field need to provide a stable cloud.


-1

In my experience, the distro maintainers who pioneered the stable branch 
teams had opposite viewpoints to core teams in regards to what was 
appropriate to put into a stable release. I think it's dangerous to 
populate the stable team with the core team members just because of long 
review and merge times.


Distros can and should have more people participating in the stable 
teams -- as should non-distro folks that deploy and care about 
non-master deployments.


If core team members are getting pinged about certain reviews on stable 
branches, they should direct the pinger to the stable team members.


Just my 2 cents,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bug discussion at mid cycle meet up

2014-07-29 Thread Tracy Jones

At the mid-cycle meet-up yesterday we spent some time looking at our bug 
dashboard (http://54.201.139.117/nova-bugs.html) and talking about things we 
can do to help focus on bugs.  We came up with the following ideas.  I’d like 
folks to weigh in on these if you have some ideas or concerns.


1.  Start auto-abandoning bugs that have not been touched (i.e. updated) in the 
last 60 days.    We would have something (a bot?) that would look at bugs that 
have not been updated (nor their review updated) in the last 60 days; a rough 
sketch of such a bot follows this list.  The bug would be set to the “new” 
state and the assignee would be removed.  This would cause the bug to be 
re-triaged and would be up for someone else to pick up.

2.  Also - when a bug has all abandoned reviews, we should automatically set 
the bug to new and remove the assignee.

3.  We have bugs that are really not bugs but features, or performance issues.  
They really should be a BP not a bug, but we don’t want these things to fall 
off the radar so they are bugs… But we don’t really know what to do with them.  
Should they be closed?  Should they have a different category – like feature 
request??  Perhaps they should just be wish list??

4.  We should have more frequent and focused bug days.  For example every 
Monday have a bug day where we focus on 1 area (like api or compute or 
networking for example) and work on moving new bugs to confirmed, or confirmed 
bugs to triaged.  I’ll talk with Michael about when to schedule the 1st 
“focused” bug day and what area to address.
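
A rough sketch of the stale-bug bot from idea 1, using launchpadlib (method 
and field names as I understand the Launchpad API; the abandoned-review check 
from idea 2 would need Gerrit queries on top of this):

    from datetime import datetime, timedelta
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('nova-bug-bot', 'production')
    cutoff = datetime.utcnow() - timedelta(days=60)

    for task in lp.projects['nova'].searchTasks(status=['In Progress']):
        # date_last_updated is timezone-aware UTC; drop tzinfo to compare
        if task.bug.date_last_updated.replace(tzinfo=None) < cutoff:
            task.status = 'New'  # back into the triage queue
            task.assignee = None
            task.lp_save()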


In general we need to tighten up the definition of triaged and confirmed.  
Bugs should move from New -> Confirmed -> Triaged -> In Progress.  JayPipes has 
updated the wiki to clarify this.

  *   Confirmed means someone has looked at the bug, saw there was enough info 
to start to diagnose, and agreed it sounds like a bug.
  *   Triaged means someone has analyzed the bug and can propose a solution 
(not necessarily a patch).  If the person is not going to fix it, they should 
update the bug with the proposal and move the bug into Triaged.

If we do implement 1 and 2 I’m hoping the infra team can help – I think dims 
volunteered ;-)

I made a note to add review stats to the bug page to make it easier to see how 
far along a bug is in review.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-29 Thread Daniel P. Berrange
On Tue, Jul 29, 2014 at 08:30:09AM -0700, Jay Pipes wrote:
 On 07/29/2014 06:13 AM, Daniel P. Berrange wrote:
 On Tue, Jul 29, 2014 at 02:04:42PM +0200, Thierry Carrez wrote:
 Ihar Hrachyshka a écrit :
 At the dawn of time there were no OpenStack stable branches, each
 distribution was maintaining its own stable branches, duplicating the
 backporting work. At some point it was suggested (mostly by RedHat and
 Canonical folks) that there should be collaboration around that task,
 and the OpenStack project decided to set up official stable branches
 where all distributions could share the backporting work. The stable
 team group was seeded with package maintainers from all over the distro
 world.
 
 So these branches originally only exist as a convenient place to
 collaborate on backporting work. This is completely separate from
 development work, even if these days backports are often proposed by
 developers themselves. The stable branch team is separate from the rest
 of OpenStack teams. We have always been very clear that if the stable
 branches are no longer maintained (i.e. if the distributions don't see
 the value of those anymore), then we'll consider removing them. We, as a
 project, only signed up to support those as long as the distros wanted them.
 
 We have been adding new members to the stable branch teams recently, but
 those tend to come from development teams rather than downstream
 distributions, and that starts to bend the original landscape.
 Basically, the stable branch needs to be very conservative to be a
 source of safe updates -- downstream distributions understand the need
 to weigh the benefit of the patch vs. the disruption it may cause.
 Developers have another type of incentive, which is to get the fix they
 worked on into stable releases, without necessarily being very
 conservative. Adding more -core people to the stable team to compensate
 the absence of distro maintainers will ultimately kill those branches.
 
 The situation I'm seeing is that the broader community believe that
 the Nova core team is responsible for the nova stable branches. When
 stuff sits in review for ages it is the core team that is getting
 pinged about it and on the receiving end of the complaints about the
 lack of review.
 
 Adding more people to the stable team won't kill those branches. I'm
 not suggesting we change the criteria for accepting patches, or that
 we dramatically increase the number of patches we accept. There is
 clearly a lot of stuff proposed to stable that the existing stable
 team think is a good idea - as illustrated by the number of patches
 with at least one +2 present. On the contrary, having a bigger stable
 team comprising all of core + interested distro maintainers will ensure
 that the stable branches are actually getting the patches people in
 the field need to provide a stable cloud.
 
 -1
 
 In my experience, the distro maintainers who pioneered the stable branch
 teams had opposite viewpoints to core teams in regards to what was
 appropriate to put into a stable release. I think it's dangerous to populate
 the stable team with the core team members just because of long review and
 merge times.

Sure, there was some debate about what the desired acceptance criteria
were when stable trees were started. Once the criteria are defined I don't
think it is credible to say that people are incapable of following the
rules. In the unlikely event that people were to willfully ignore the
agreed upon rules for stable tree, then I'd not trust them to be part
of a core team working on any branch at all. With responsibility comes
trust and an acceptance to follow the agreed upon processes.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] requesting python-neutronclient release for MacAddressInUseClient exception

2014-07-29 Thread Kyle Mestery
On Tue, Jul 29, 2014 at 7:46 AM, Kyle Mestery mest...@mestery.com wrote:
 On Mon, Jul 28, 2014 at 6:45 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 Nova needs a python-neutronclient release to use the new
 MacAddressInUseClient exception type defined here [1].

 I'll spin a new client release today Matt, and reply back on this
 thread once that's complete.

FYI, I just pushed this release out, see the email here:

http://lists.openstack.org/pipermail/openstack-dev/2014-July/041438.html

Thanks,
Kyle

 Thanks,
 Kyle

 [1] https://review.openstack.org/#/c/109052/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] New python-neutronclient release: 2.3.6

2014-07-29 Thread Kyle Mestery
Hi all:

I've just pushed a new release of python-neutronclient out. This was
mainly to address the issue of Nova being able to use the new
MacAddressInUseClient exception [1]. In addition, the following,
changes are also a part of this release:

b21cafa Remove strict checking of encryption type
f9dbbb4 Improve the method find_resourceid_by_name_or_id
5db54ed Add a new timeout option to set the HTTP Timeout
c46bf95 Revert "Fix CLI support for DVR"
c05eb29 Update theme for docs
8da544d Add a tox job for generating docs
5eeba0c Add MacAddressInUseClient exception handling
4927f74 Create new IPv6 attributes for Subnets by client
dae498e Python 3: use six.iteritems()
a2b03db Fix for CLI message of agent disassociation
bfec80a Fix CLI support for DVR
b289695 Warn on tiny subnet
708820e Some edits for help strings
c8b7734 Changed 'json' to 'JSON'
267d1dc Add CLI Support for DVR
8eeb578 Add CONTRIBUTING.rst
d7c5104 Pass timeout parameter to requests lib call
261588b Found a useless comment
a84b2be Suppress outputs in test_cli20_nsx_networkgateway
8a2349a Sync with oslo
2e6f238 Changed 'xml' to 'XML'
58e48df Switch over to mox3

The release can be downloaded from PyPI here:

https://pypi.python.org/pypi/python-neutronclient

Thanks!
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/041385.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Managing change in gerrit which depends on multiple other changes in review

2014-07-29 Thread Doug Wiegley
Hi Evgeny,

I’m not sure I’m doing it in the most efficient way, so I’d love to hear 
pointers, but what I’ve been doing:

First, to set up the dependent commit, the command is “git review -d”.   I've 
been using this guide: 
http://www.mediawiki.org/wiki/Gerrit/Advanced_usage#Create_a_dependency

Second, when the dependent review changes, there is a ‘rebase’ button on gerrit 
that’ll get things back in sync automatically.

Third, if you need to change your code after rebasing from gerrit, this is the 
only sequence I’ve tried that doesn’t result in something weird (rebasing 
overwrites the dependent commits, silently, so I’m clearly doing something 
wrong):

  1.  Re-clone vanilla neutron
  2.  Cd into new clone, setup for gerrit review
  3.  Redo dependent commit setup
  4.  Create your topic branch
  5.  Cherry-pick your commit from gerrit into your new topic branch
  6.  Use “git log -n5 --decorate --pretty=oneline”, and verify that your 
dependency commit hashes match what’s in gerrit.
  7.  Git review
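
In command form, that flow is roughly (change number and sha are placeholders):

    git review -d <change-number-of-dependency>   # fetch the dependency from gerrit
    git checkout -b bp/my-topic                   # topic branch on top of it
    git cherry-pick <sha-of-your-change>          # re-apply your own patch
    git log -n5 --decorate --pretty=oneline       # check the dependency hashes
    git review                                    # push with the dependency as parent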

Thanks,
doug


From: Evgeny Fedoruk evge...@radware.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Tuesday, July 29, 2014 at 7:12 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron] Managing change in gerrit which depends on 
multiple other changes in review

Hi folks,

I’m working on a change for neutron LBaaS service.
Since there is a massive work done for LBaaS these days, my change depends on 
other changes being reviewed in parallel in gerrit.
I don’t have deep git knowledge and I’m failing to figure out the right 
procedure for managing such a multi-dependent patch.
So, I am sending my question to you guys, in the hope of finding the right 
way to manage such patches in gerrit.

Here is the situation:
There are 4 patches on review in gerrit

1.   A – No dependencies

2.   B – Depends on A

3.   C – Depends on A

4.   D – No dependencies




My change, let’s call it “X”, is already on review in gerrit.
It should depend on all four other changes, A, B, C and D.

I tried two ways of managing those dependencies: 1) by doing a cherry-pick 
for each one of them, and 2) by doing git review and git rebase for each one of 
them.
Neither works well for me; my change’s commit message is replaced by other 
changes’ commit messages, and when I commit my patch, it commits the other 
changes’ patches too.

So, my question is:
Is this scenario supported by the gerrit system?
If it is – what is the right procedure to follow in order to manage those 
dependencies,
and how do I rebase my change when one of the dependencies is committed with a 
new patch, to keep the dependencies up-to-date?


Thank you!
Evg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status and Expectations for Juno

2014-07-29 Thread Doug Wiegley
Yes.  There is an outside chance that someone can re-add the agent after
we get the agent-less driver in, for Juno, but if v2 is not going to be
the default extension, I’m not sure it’s worth the effort, since some
version of Octavia should land in Kilo, during which I would also expect
v2 to become the default.

Doug


On 7/29/14, 6:05 AM, Kyle Mestery mest...@mestery.com wrote:

This all looks good to me. My only concern is that we need to land a
driver in Juno as well. The HA-proxy based, agent-less driver which
runs on the API node is the only choice here, right? Otherwise, the
scalable work is being done in Octavia. Is that correct?

On Mon, Jul 28, 2014 at 2:46 PM, Brandon Logan
brandon.lo...@rackspace.com wrote:
 That was essentially the point of my email.  To get across that not
 everything we want to go in Juno will make it in and because of this V2
 will not be in the state that many users will be able to use.  Also, to
 get people's opinions on what they think is high priority.

 On Mon, 2014-07-28 at 18:11 +, Doug Wiegley wrote:
 I don’t think the lbaas roadmap has changed (including octavia), just
the
 delivery timeline.  Nor am I debating making the ref driver simpler
(I’m
 on record as supporting that decision, and still do.)  And if that was
the
 only wart, I’m sure we’d all ignore it and plow forward.  But it’s not,
 and add all the things that are likely to miss together, and I think
we’d
 be doing the community a disservice by pushing v2 too soon.  Which
means
 our moratorium on v1 is likely premature.

 Unless Brandon gives up sleeping altogether; then I’m sure we’ll make
it.

 Anyway, all this is my long-winded way of agreeing that some things
will
 likely need to be pushed to K, it happens, and let’s just be realistic
 about what that means for our end users.

 Doug



 On 7/28/14, 9:34 AM, Jorge Miramontes
jorge.miramon...@rackspace.com
 wrote:

 Hey Doug,
 
 In terms of taking a step backward from a user perspective I'm fine
with
 making v1 the default. I think there was always the notion of
supporting
 what v1 currently offers by making a config change. Thus, Horizon
should
 still have all the support it had in Icehouse. I am a little worried
about
 the delivery of items we said we wanted to deliver however. The
reason we
 are focusing on the current items is that Octavia is also part of the
 picture, albeit, behind the scenes right now. Thus, the argument that
the
 new reference driver is less capable is actually a means to getting
 Octavia out. Eventually, we were hoping to get Octavia as the
reference
 implementation which, from the user's perspective, will be much better
 since you can actually run it at operator scale. To be realistic, the
v2
 implementation is a WIP and focusing on the control plane first seems
to
 make the most sense. Having a complete end-to-end v2 implementation is
 large in scope and I don't think anyone expected it to be a
full-fledged
 product by Juno, but we are getting closer!
 
 
 Cheers,
 --Jorge
 
 
 
 
 On 7/28/14 8:02 AM, Doug Wiegley do...@a10networks.com wrote:
 
 Hi Brandon,
 
  Thanks for bringing this up. If you're going to call me out by name, I
  guess I have to respond to the Horizon thing.  Yes, I don't like it, from
  a user perspective.  We promise a bunch of new features, new drivers… and
  none of them are visible.  Or the horizon support does land, and suddenly
  the user goes from a provider list of 5 to 2.  Sucks if you were using one
  of the others.  Anyway, back to a project status.  To summarize, listed by
  feature, priority, status:
 
 LBaaS V2 API,   high, reviews in gerrit
 Ref driver, high, removed agent, review in gerrit
 CLI V2, high, not yet in review
 Devstack,   high, not started
 +TLS,   medium, lots done in parallel
  +L7,    medium, not started
  Shim V1 -> V2,  low, minimally complete
  Horizon V2, low, not started
  ref agent,  low, not started
  Drivers,    low, one vendor driver in review, several in progress
 
  And with a review submission freeze of August 21st.  Let's work
 backwards:
 
 Dependent stuff will need at least two weeks to respond to the final
  changes and submit.  That'd be:
 
 Devstack,   high, not started
 +TLS,   medium, lots done in parallel
  +L7,    medium, not started
  Shim V1 -> V2,  low, minimally complete
  Horizon V2, low, not started
  ref agent,  low, not started
  Drivers,    low, one vendor driver in review, several in progress
 
  … I'm not including TLS, since its work has been very parallel so far,
 even though logically it should be there.  But that would mean the
  following should be "done" and merged by August 7th:
 
 LBaaS V2 API,   high, reviews in gerrit
 Ref driver, high, removed agent, review in gerrit
 CLI V2, high, not yet in review
 
  … that's a week and a half, for a big pile of new code.  At the current
  change velocity, I have my doubts.  And if that slips, the rest starts to
  look very 

Re: [openstack-dev] [neutron] requesting python-neutronclient release for MacAddressInUseClient exception

2014-07-29 Thread Matt Riedemann



On 7/29/2014 9:15 AM, Kyle Mestery wrote:

On Tue, Jul 29, 2014 at 7:46 AM, Kyle Mestery mest...@mestery.com wrote:

On Mon, Jul 28, 2014 at 6:45 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:

Nova needs a python-neutronclient release to use the new
MacAddressInUseClient exception type defined here [1].


I'll spin a new client release today Matt, and reply back on this
thread once that's complete.


FYI, I just pushed this release out, see the email here:

http://lists.openstack.org/pipermail/openstack-dev/2014-July/041438.html

Thanks,
Kyle


Thanks,
Kyle


[1] https://review.openstack.org/#/c/109052/

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks for the quick turnaround!

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][CI][third-party] The plan to bring up Snabb NFV CI for Juno-3

2014-07-29 Thread Luke Gorrie
Hi Steve,

On 29 July 2014 17:21, Steve Gordon sgor...@redhat.com wrote:

 I've added the [third-party] tag as well to ensure this catches the
 broadest segment of relevant people.


Thanks!


 are any modifications to upstream Open vSwitch required to support Snabb?


Good question. No, this uses a separate vswitch called Snabb Switch. Snabb
Switch is a small user-space program that you assign some network
interfaces to. It runs independent of any other networking you are doing on
other ports (OVS, DPDK-OVS, SR-IOV, etc).

Have you already attempted to solicit some core reviewers in Nova and
 Neutron


How does one normally do that? We are getting help but I am not exactly
sure how people have found us beyond chat in #openstack-nfv :-).

Two Neutron core reviewers are making the requirements there very clear to
us, both on the code and the CI.

One Nova core reviewer is helping us too. I would like to better understand
CI requirements on the Nova side (e.g. does the Neutron tempest testing
regime provide adequate coverage for Nova or do we need to do more?). This
is our first contribution to Nova so there is a risk that we overlook
something important.

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] objects notifications

2014-07-29 Thread Gary Kotton
Hi,
When reviewing https://review.openstack.org/#/c/107954/ it occurred to me that 
maybe we should consider having some kind of generic object wrapper that could 
do notifications for objects. Any thoughts on this?
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] oslo.utils 0.1.1 released

2014-07-29 Thread Davanum Srinivas
The Oslo team is pleased to announce the first release of oslo.utils,
the library that replaces several utils modules from oslo-incubator:
https://github.com/openstack/oslo.utils/tree/master/oslo/utils

The new library has been uploaded to PyPI, and there is a changeset in
the queue to update the global requirements list and our package mirror:
https://review.openstack.org/#/c/110380/

Documentation for the library is available on our developer docs site:
http://docs.openstack.org/developer/oslo.utils/

The spec for the graduation blueprint includes some advice for
migrating to the new library:
http://git.openstack.org/cgit/openstack/oslo-specs/tree/specs/juno/graduate-oslo-utils.rst
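
For instance (timeutils is one of the modules visible in the tree linked
above; the "before" path varies per project), the migration is typically
just an import change:

    # Before: per-project copy synced from oslo-incubator
    # (path varies, e.g. nova.openstack.common)
    from nova.openstack.common import timeutils

    # After: the released library
    from oslo.utils import timeutils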

Please report bugs using the Oslo bug tracker in launchpad:
http://bugs.launchpad.net/oslo

Thanks to everyone who helped with reviews and patches to make this
release possible!

Thanks,
dims

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Anyone using owner_is_tenant = False with image members?

2014-07-29 Thread Mark Washenberger
On Thu, Jul 24, 2014 at 9:48 AM, Scott Devoid dev...@anl.gov wrote:

 So it turns out that fixing this issue is not very simple. There are
 stubbed-out openstack.common.policy checks in the glance-api
 code, which are pretty much useless because they do not use the image as a
 target. [1] Then there's a chain of API / client calls where it's unclear
 who is responsible for validating ownership: python-glanceclient -
 glance-api - glance-registry-client - glance-registry-api -
 glance.db.sqlalchemy.api. Add to that the fact that request IDs are not
 consistently captured along the logging path [2] and it's a holy mess.

 I am wondering...
 1. Has anyone actually set owner_is_tenant to false? Has this ever been
 tested?


We haven't really been using or thinking about this as a feature, more a
potential backwards compatibility headache. I think it makes sense to just
go through the deprecation path so people aren't confused about whether
they should start using owner_is_tenant=False (they shouldn't).


 2. From glance developers, what kind of permissions / policy scenarios do
 you actually expect to work?


There is work going on now to support using images as targets. Of course,
the policy api wants enforce calls to only ever work with targets that are
dictionaries, which is a great way to race to the bottom in terms of
programming practices. But oh well.

Spec for supporting use of images as targets is here:
https://blueprints.launchpad.net/glance/+spec/restrict-downloading-images-protected-properties
https://github.com/openstack/glance-specs/blob/master/specs/juno/restrict-downloading-images.rst



 Right now we have one user who consistently gets an empty 404 back from
 nova image-list because glance-api barfs on a single image and gives up
 on the entire API request...and there are no non-INFO/DEBUG messages in
 glance logs for this. :-/

 ~ Scott

 [1] https://bugs.launchpad.net/glance/+bug/1346648
 [2] https://bugs.launchpad.net/glance/+bug/1336958


 On Fri, Jul 11, 2014 at 12:26 PM, Scott Devoid dev...@anl.gov wrote:

 Hi Alexander,

 I read through the artifact spec. Based on my reading it does not fix
 this issue at all. [1] Furthermore, I do not understand why the glance
 developers are focused on adding features like artifacts or signed images
 when there are significant usability problems with glance as it currently
 stands. This is echoing Sean Dague's comment that bugs are filed against
 glance but never addressed.

 [1] See the **Sharing Artifact** section, which indicates that sharing
 may only be done between projects and that the tenant owns the image.


 On Thu, Jul 3, 2014 at 4:55 AM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 Thanks Scott, that is a nice topic

 In theory, I would prefer to have both owner_tenant and owner_user to be
 persisted with an image, and to have a policy rule which allows to specify
 if the users of a tenant have access to images owned by or shared with
 other users of their tenant. But this will require too much changes to the
 current object model, and I am not sure if we need to introduce such
 changes now.

 However, this is the approach I would like to use in Artifacts. At least
 the current version of the spec assumes that both these fields to be
 maintained ([0])

 [0]
 https://review.openstack.org/#/c/100968/4/specs/juno/artifact-repository.rst

 --
 Regards,
 Alexander Tivelkov


 On Thu, Jul 3, 2014 at 3:44 AM, Scott Devoid dev...@anl.gov wrote:

  Hi folks,

 Background:

 Among all services, I think glance is unique in only having a single
 'owner' field for each image. Most other services include a 'user_id' and a
 'tenant_id' for things that are scoped this way. Glance provides a way to
 change this behavior by setting owner_is_tenant to false, which implies
 that owner is user_id. This works great: new images are owned by the user
 that created them.

 Why do we want this?

 We would like to make sure that the only person who can delete an image
 (besides admins) is the person who uploaded said image. This achieves that
 goal nicely. Images are private to the user, who may share them with other
 users using the image-member API.

 However, one problem is that we'd like to allow users to share with
 entire projects / tenants. Additionally, we have a number of images (~400)
 migrated over from a different OpenStack deployment, that are owned by the
 tenant and we would like to make sure that users in that tenant can see
 those images.

 Solution?

 I've implemented a small patch to the is_image_visible API call [1]
 which checks the image.owner and image.members against context.owner and
 context.tenant. This appears to work well, at least in my testing.

 I am wondering if this is something folks would like to see integrated?
 Also for glance developers, if there is a cleaner way to go about solving
 this problem? [2]

 ~ Scott

 [1]
 https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L209
 

Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in glance

2014-07-29 Thread Mark Washenberger
On Mon, Jul 28, 2014 at 8:12 AM, Tailor, Rajesh rajesh.tai...@nttdata.com
wrote:

 Hi All,

 I have submitted the patch Made provision for glance service to use
 Launcher to the community gerrit.
 Pl refer: https://review.openstack.org/#/c/110012/

 I have also set the workflow to 'work in progress'. I will start working
 on writing unit tests for the proposed
 changes, after positive feedback for the same.

 Could you please give your comments on this.

  Could you also please suggest whether I should file a launchpad bug or a
  blueprint to propose these changes in the glance project?


Submitting to github.com/openstack/glance-specs would be best. Thanks.



 Thanks,
 Rajesh Tailor

 -Original Message-
 From: Tailor, Rajesh [mailto:rajesh.tai...@nttdata.com]
 Sent: Wednesday, July 23, 2014 12:13 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in
 glance

 Hi Jay,
 Thank you for your response.
 I will soon submit patch for the same.

 Thanks,
 Rajesh Tailor

 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com]
 Sent: Tuesday, July 22, 2014 8:07 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [glance] Use Launcher/ProcessLauncher in
 glance

 On 07/17/2014 03:07 AM, Tailor, Rajesh wrote:
  Hi all,
 
  Why glance is not using Launcher/ProcessLauncher (oslo-incubator) for
  its wsgi service like it is used in other openstack projects i.e.
  nova, cinder, keystone etc.

 Glance uses the same WSGI service launch code as the other OpenStack
 project from which that code was copied: Swift.

  As of now when SIGHUP signal is sent to glance-api parent process, it
  calls the callback handler and then throws OSError.
 
  The OSError is thrown because os.wait system call was interrupted due
  to SIGHUP callback handler.
 
  As a result of this parent process closes the server socket.
 
  All the child processes also gets terminated without completing
  existing api requests because the server socket is already closed and
  the service doesn't restart.
 
  Ideally when SIGHUP signal is received by the glance-api process, it
  should process all the pending requests and then restart the
  glance-api service.
 
  If (oslo-incubator) Launcher/ProcessLauncher is used in glance then it
  will handle service restart on 'SIGHUP' signal properly.
 
  Can anyone please let me know what will be the positive/negative
  impact of using Launcher/ProcessLauncher (oslo-incubator) in glance?

 Sounds like you've identified at least one good reason to move to
 oslo-incubator's Launcher/ProcessLauncher. Feel free to propose patches
 which introduce that change to Glance. :)
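
For what it's worth, here is a minimal standalone sketch (not Glance or
oslo code, just an illustration of the pattern) of how a
ProcessLauncher-style parent survives SIGHUP: retry os.wait() on EINTR
and relaunch workers instead of closing the listening socket.

    import errno
    import os
    import signal

    restart_requested = []

    def _sighup(signum, frame):
        # The signal delivery itself interrupts a blocking os.wait()
        # with EINTR; just record the intent for the main loop.
        restart_requested.append(True)

    signal.signal(signal.SIGHUP, _sighup)

    def wait_for_children():
        """os.wait() that survives SIGHUP instead of raising OSError."""
        while True:
            try:
                return os.wait()
            except OSError as exc:
                if exc.errno != errno.EINTR:
                    raise
                if restart_requested:
                    del restart_requested[:]
                    # relaunch workers / re-read config here rather
                    # than tearing the whole service down
                    return None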

  Thank You,
 
  Rajesh Tailor
  __
  Disclaimer:This email and any attachments are sent in strictest
  confidence for the sole use of the addressee and may contain legally
  privileged, confidential, and proprietary data. If you are not the
  intended recipient, please advise the sender by replying promptly to
  this email and then delete and destroy this email and any attachments
  without any further use, copying or forwarding

 Please advise your corporate IT department that the above disclaimer on
 your emails is annoying, is entirely disregarded by 99.999% of the real
 world, has no legal standing or enforcement, and may be a source of
 problems with people's mailing list posts being sent into spam boxes.

 All the best,
 -jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 Disclaimer:This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data.  If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 __
 Disclaimer:This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data.  If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [nova] objects notifications

2014-07-29 Thread Lance Bragstad
Keystone has a notifications module that is based on this idea. When
implementing notification in Keystone, we wanted it to be easy to deliver
notifications on new resources and extensions [1], which is where the idea
of the wrapper came from. With that framework in place, we wrap our CRUD
methods in the Manager layer [2]. Beyond the base implementation, which
consisted of the resource, the operation type, and the UUID of the
resource, the framework has been leveraged for more detailed auditing
(pycadf). Regardless, we do it through a wrapper method.


[1]
https://github.com/openstack/keystone/blob/5017993c361a0ecfb7db6b2ebc18b7e9cf135d84/keystone/notifications.py#L50
[2]
https://github.com/openstack/keystone/blob/5017993c361a0ecfb7db6b2ebc18b7e9cf135d84/keystone/assignment/core.py#L73
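
For illustration, the wrapper idea boils down to something like this
minimal decorator sketch (the names and payload here are made up, not
Keystone's actual notification API):

    import functools

    def send(event_type, payload):
        # stand-in for a real notifier (e.g. an oslo messaging notifier)
        print(event_type, payload)

    def notify_event(resource, operation):
        """Emit a <resource>.<operation> event after a CRUD method succeeds."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(self, *args, **kwargs):
                result = fn(self, *args, **kwargs)
                send('%s.%s' % (resource, operation),
                     {'id': result.get('id')})
                return result
            return wrapper
        return decorator

    class UserManager(object):
        @notify_event('user', 'created')
        def create_user(self, user):
            # pretend the user was persisted here
            return user

    UserManager().create_user({'id': '123'})  # emits: user.created {'id': '123'}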


On Tue, Jul 29, 2014 at 11:43 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
  When reviewing https://review.openstack.org/#/c/107954/ it occurred to
 me that maybe we should consider having some kind of generic object wrapper
 that could do notifications for objects. Any thoughts on this?

Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][CI] The plan to bring up Snabb NFV CI for Juno-3

2014-07-29 Thread Luke Gorrie
On 29 July 2014 10:48, Luke Gorrie l...@snabb.co wrote:

 We are developing a practical open source NFV implementation for
 OpenStack. This is for people who want to run tens of millions of packets
 per second through Virtio-net on each compute node.


Incidentally, we do currently achieve ~ line rate with our target workload
of 6x10G with 256-byte packets and all traffic being looped through VMs
over Virtio-net. Here is a benchmark output from our testbed right now:

On :07:00.0 got 4.462
On :07:00.1 got 4.462
On :24:00.0 got 4.454
On :24:00.1 got 4.452
On :27:00.0 got 4.455
On :27:00.1 got 4.455

Rate(Mpps): 26.74

That is with each packet received off the wire by Snabb Switch, looped
through a QEMU guest (running Ubuntu w/ DPDK) over vhost-user, then
transmitted by Snabb Switch back onto the wire. That is one packet received
and transmitted on each port every 225 nanoseconds.


Surprisingly, the whole traffic plane is written in Lua and is only a small
amount of code. We are really proud of the work we are doing and hope it
will become a part of the open source networking landscape for many years
to come. People who like this sort of thing are advised to get in touch
with us and join in the fun :).


Cheers,

-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][CI][third-party] The plan to bring up Snabb NFV CI for Juno-3

2014-07-29 Thread Steve Gordon
- Original Message -
 From: Luke Gorrie l...@snabb.co
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Cc: Nikolay Nikolaev n.nikol...@virtualopensystems.com
 Sent: Tuesday, July 29, 2014 12:28:55 PM
 Subject: Re: [openstack-dev] [NFV][CI][third-party] The plan to bring up 
 Snabb NFV CI for Juno-3
 
 Hi Steve,
 
 On 29 July 2014 17:21, Steve Gordon sgor...@redhat.com wrote:
 
  I've added the [third-party] tag as well to ensure this catches the
  broadest segment of relevant people.
 
 
 Thanks!
 
 
  are any modifications to upstream Open vSwitch required to support Snabb?
 
 
 Good question. No, this uses a separate vswitch called Snabb Switch. Snabb
 Switch is a small user-space program that you assign some network
 interfaces to. It runs independent of any other networking you are doing on
 other ports (OVS, DPDK-OVS, SR-IOV, etc).

OK, that's what I thought - thanks for confirming.

 Have you already attempted to solicit some core reviewers in Nova and
  Neutron
 
 How does one normally do that? We are getting help but I am not exactly
 sure how people have found us beyond chat in #openstack-nfv :-).
 
 Two Neutron core reviewers are making the requirements there very clear to
 us, both on the code and the CI.

Ideally this is fairly organic, the NFV group certainly serves as a forum for 
coordinating what work is out there in this space but ultimately it is 
necessary to also liaise with and meet the expectations of the actual projects 
(e.g. Neutron and Nova in this case). This is an area where we as a group still 
have a lot of work to do, in my opinion.

I see that there is some discussion to this end in the code submission for the 
new mechanism driver:

https://review.openstack.org/#/c/95711/

It appears to me the expectation/desire from Mark and Maru here is to see a lot 
more justification of the use cases for this driver and the direction of the 
current implementation so while it is positive that it got the attention of 
some core reviewers early there are some hard questions that will need to be 
answered to continue making progress with this for Juno.

 One Nova core reviewer is helping us too. I would like to better understand
 CI requirements on the Nova side (e.g. does the Neutron tempest testing
 regime provide adequate coverage for Nova or do we need to do more?). This
 is our first contribution to Nova so there is a risk that we overlook
 something important.

Typically third party CI is only provided/required for Nova when 
adding/maintaining a new hypervisor driver - at least that seems to be the case 
so far. I know in your earlier email you mentioned also wanting to use this 
third party CI to also test a number of other scenarios, particularly:

 * Test with NFV-oriented features that are upstream in OpenStack.
 * Test with NFV-oriented changes that are not yet upstream e.g. Neutron QoS
API.

I am not sure how this would work - perhaps I misunderstand what you are 
proposing? As it stands the third-party CI jobs ideally run on each change 
*submitted* to gerrit so features that are not yet *merged* still receive this 
CI testing today both from the CI managed by infra and the existing third-party 
CI jobs? Or are you simply highlighting that you wish to test same with 
snabbswitch? Just not quite understanding why these were called out as separate 
cases.

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug discussion at mid cycle meet up

2014-07-29 Thread Russell Bryant
On 07/29/2014 11:43 AM, Tracy Jones wrote:
 3.  We have bugs that are really not bugs but features, or performance
 issues.  They really should be a BP not a bug, but we don’t want these
 things to fall off the radar so they are bugs… But we don’t really know
 what to do with them.  Should they be closed?  Should they have a
 different category – like feature request??  Perhaps they should just be
 wish list??

I don't think blueprint are appropriate for tracking requests.  They
should only be created when someone is proposing actually doing the work.

I think Wishlist is fine for keeping a list of requests.  That's what
I've been using it for.

 In general we need to tighten up the definition of triaged and
 confirmed.  Bugs should move from New -> Confirmed -> Triaged -> In
 Progress.  JayPipes has updated the wiki to clarify this.
 
   * Confirmed means someone has looked at the bug, saw there was enough
 info to start to diagnose, and agreed it sounds like a bug.  
   * Triaged means someone has analyzed the bug and can propose a
 solution (not necessarily a patch).  If the person is not going to
 fix it, they should update the bug with the proposal and move the
 bug into Triaged.  

We should be careful not to conflict with the guidelines set for all
OpenStack projects here:

   https://wiki.openstack.org/wiki/BugTriage

For example, that page says when a bug should be set to Confirmed or
Triaged.  In most cases, it's Confirmed.  Triage is when there is a
known solution.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] objects notifications

2014-07-29 Thread Mike Spreitzer
Gary Kotton gkot...@vmware.com wrote on 07/29/2014 12:43:08 PM:

 Hi,
 When reviewing https://review.openstack.org/#/c/107954/ it occurred 
 to me that maybe we should consider having some kind of generic 
 object wrapper that could do notifications for objects. Any thoughts on 
this?

I am not sure what that would look like, but I agree that we have a 
problem with too many things not offering notifications.  If there were 
some generic way to solve that problem, it would indeed be great.

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status and Expectations for Juno

2014-07-29 Thread Stephen Balukoff
Just to put my $0.02 in:  While it's a little disappointing that we won't
get everything into Juno that we'd like, I think the effort this team has
put into getting us to where we are is laudable. Although I would really
like to see L7 land as well, I have no problem with the prioritization as
laid out by Brandon and Doug, eh. We have to be realistic, after all, eh.

I do want to emphasize that while Octavia will certainly be taking off in a
big way once most of the Neutron LBaaS stuff is buttoned up, it's still
really important that those of y'all with cycles to devote to this should
still be working to either code or review code for Neutron LBaaS (at least
until the August 21st deadline, but probably several weeks afterward to get
bugfixes etc. written and through review).

I have been purposely de-emphasizing Octavia accordingly. But I see that
it's starting to become important to at least have the design and direction
documented (so that it's clear where this project is going to go, at least
in the short to medium term). I'll be spending time this week working on
this documentation (converting from google docs to something reviewable in
Gerrit). I'll let y'all know when that's going to be ready for comment.

Stephen



On Tue, Jul 29, 2014 at 9:18 AM, Doug Wiegley do...@a10networks.com wrote:

 Yes.  There is an outside chance that someone can re-add the agent after
 we get the agent-less driver in, for Juno, but if v2 is not going to be
 the default extension, I’m not sure it’s worth the effort, since some
 version of Octavia should land in Kilo, during which I would also expect
 v2 to become the default.

 Doug


 On 7/29/14, 6:05 AM, Kyle Mestery mest...@mestery.com wrote:

 This all looks good to me. My only concern is that we need to land a
 driver in Juno as well. The HA-proxy based, agent-less driver which
 runs on the API node is the only choice here, right? Otherwise, the
 scalable work is being done in Octavia. Is that correct?
 
 On Mon, Jul 28, 2014 at 2:46 PM, Brandon Logan
 brandon.lo...@rackspace.com wrote:
  That was essentially the point of my email.  To get across that not
  everything we want to go in Juno will make it in and because of this V2
  will not be in the state that many users will be able to use.  Also, to
  get people's opinions on what they think is high priority.
 
  On Mon, 2014-07-28 at 18:11 +, Doug Wiegley wrote:
  I don’t think the lbaas roadmap has changed (including octavia), just
 the
  delivery timeline.  Nor am I debating making the ref driver simpler
 (I’m
  on record as supporting that decision, and still do.)  And if that was
 the
  only wart, I’m sure we’d all ignore it and plow forward.  But it’s not,
  and add all the things that are likely to miss together, and I think
 we’d
  be doing the community a disservice by pushing v2 too soon.  Which
 means
  our moratorium on v1 is likely premature.
 
  Unless Brandon gives up sleeping altogether; then I’m sure we’ll make
 it.
 
  Anyway, all this is my long-winded way of agreeing that some things
 will
  likely need to be pushed to K, it happens, and let’s just be realistic
  about what that means for our end users.
 
  Doug
 
 
 
  On 7/28/14, 9:34 AM, Jorge Miramontes
 jorge.miramon...@rackspace.com
  wrote:
 
  Hey Doug,
  
  In terms of taking a step backward from a user perspective I'm fine
 with
  making v1 the default. I think there was always the notion of
 supporting
  what v1 currently offers by making a config change. Thus, Horizon
 should
  still have all the support it had in Icehouse. I am a little worried
 about
  the delivery of items we said we wanted to deliver however. The
 reason we
  are focusing on the current items is that Octavia is also part of the
  picture, albeit, behind the scenes right now. Thus, the argument that
 the
  new reference driver is less capable is actually a means to getting
  Octavia out. Eventually, we were hoping to get Octavia as the
 reference
  implementation which, from the user's perspective, will be much better
  since you can actually run it at operator scale. To be realistic, the
 v2
  implementation is a WIP and focusing on the control plane first seems
 to
  make the most sense. Having a complete end-to-end v2 implementation is
  large in scope and I don't think anyone expected it to be a
 full-fledged
  product by Juno, but we are getting closer!
  
  
  Cheers,
  --Jorge
  
  
  
  
  On 7/28/14 8:02 AM, Doug Wiegley do...@a10networks.com wrote:
  
  Hi Brandon,
  
   Thanks for bringing this up. If you're going to call me out by name, I
   guess I have to respond to the Horizon thing.  Yes, I don't like it, from
   a user perspective.  We promise a bunch of new features, new drivers… and
   none of them are visible.  Or the horizon support does land, and suddenly
   the user goes from a provider list of 5 to 2.  Sucks if you were using one
   of the others.  Anyway, back to a project status.  To summarize, listed by
   feature, priority, 

Re: [openstack-dev] [nova] bug discussion at mid cycle meet up

2014-07-29 Thread Jay Pipes

On 07/29/2014 11:48 AM, Russell Bryant wrote:

On 07/29/2014 11:43 AM, Tracy Jones wrote:

3.  We have bugs that are really not bugs but features, or performance
issues.  They really should be a BP not a bug, but we don’t want these
things to fall off the radar so they are bugs… But we don’t really know
what to do with them.  Should they be closed?  Should they have a
different category – like feature request??  Perhaps they should just be
wish list??


I don't think blueprint are appropriate for tracking requests.  They
should only be created when someone is proposing actually doing the work.

I think Wishlist is fine for keeping a list of requests.  That's what
I've been using it for.


There's a metric crap-ton of bugs that are *not* in Wishlist and are 
instead in High or Medium importance, but they are not necessarily bugs 
that have a specific solution -- see: performance-related bugs -- 
and some are things that frankly can never be fixed.


We don't want to keep these things as bugs, because they aren't really 
bugs, but the blueprint/spec stuff isn't appropriate for epics that 
are like super-specs to be used to track general themes. We think that a 
separate category of thing is needed for tracking these themes. In 
Agile, these things are called epics.



In general we need to tighten up the definition of triaged and
confirmed.  Bugs should move from New -> Confirmed -> Triaged -> In
Progress.  JayPipes has updated the wiki to clarify this.

   * Confirmed means someone has looked at the bug, saw there was enough
 info to start to diagnose, and agreed it sounds like a bug.
   * Triaged means someone has analyzed the bug and can propose a
 solution (not necessarily a patch).  If the person is not going to
 fix it, they should update the bug with the proposal and move the
 bug into Triaged.


We should be careful not to conflict with the guidelines set for all
OpenStack projects here:

https://wiki.openstack.org/wiki/BugTriage

For example, that page says when a bug should be set to Confirmed or
Triaged.  In most cases, it's Confirmed.  Triage is when there is a
known solution.


So, yeah, we went back and forth on this. One thing that was mentioned 
is that by setting something to Wishlist from, say, High, we downplay 
the importance of the particular bug, which, for performance and 
scalability epics tends to annoy both the bug submitter and the bug owner.


However, setting the bug to New, which triggers a re-verification and/or 
re-triaging of the issue, puts the onus and responsibility on the bug 
triaging team and allows the bug to be taken off of the "In progress but 
Abandoned" list and not lost into the general swamp of "In progress but 
not assigned".


Anyway, just food for thought.
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-07-29 Thread Russell Bryant
On 07/29/2014 12:12 PM, Daniel P. Berrange wrote:
 Sure there was some debate about what criteria were desired acceptance
 when stable trees were started. Once the criteria are defined I don't
 think it is credible to say that people are incapable of following the
 rules. In the unlikely event that people were to willfully ignore the
 agreed upon rules for stable tree, then I'd not trust them to be part
 of a core team working on any branch at all. With responsibility comes
 trust and an acceptance to follow the agreed upon processes.

I agree with this.  If we can't trust someone on *-core to follow the
stable criteria, then they shouldn't be on *-core in the first place.
Further, if we can't trust the combination of *two* people from *-core
to approve a stable backport, then we're really in trouble.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] python-neutronclient, launchpad, and milestones

2014-07-29 Thread Kyle Mestery
All:

I spent some time today cleaning up python-neutronclient in LP. I
created a 2.3 series, and created milestones for the 2.3.5 (June 26)
and 2.3.6 (today) releases. I also targeted bugs which were released
in those milestones to the appropriate places. My next step is to
remove the 3.0 series, as I don't believe this is necessary anymore.

One other note: I've tentatively created a 2.3.7 milestone in LP, so
we can start targeting client bugs which merge there for the next
client release.

If you have any questions, please let me know.

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][qa] cinder client versions and tempest

2014-07-29 Thread Mike Perez
On 16:19 Thu 24 Jul , David Kranz wrote:
 I noticed that the cinder list-extensions url suffix is underneath
 the v1/v2 in the GET url but the returned result is the same either
 way. Some of the
 returned items have v1 in the namespace, and others v2.

For XML, the namespace is different. JSON makes no difference.

 Also, in tempest, there is a single config section for cinder and
 only a single extensions client even though we run cinder
 tests for v1 and v2 through separate volume clients. I would have
 expected that listing extensions would be separate calls for v1
 and v2 and that the results might be different, implying that tempest
 conf should have a separate section (and service enabled) for volumes
 v2
 rather than treating the presence of v1 and v2 as flags in
 volume-feature-enabled. Am I missing something here?

The results of using extensions with v1 or v2 make no difference except what
I noted above. With that in mind, I recommend testing the extensions once with
v2 since it's the latest supported, and v1 is deprecated as of Juno.

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][CI] VMware mine sweeper for Neutron temporarily disabled

2014-07-29 Thread Salvatore Orlando
Minesweeper for Neutron is now running again.
We updated the image for our compute nodes to ensure it is compliant with
commit [1].

We are still observing occasional infrastructure-related issues manifesting
as request timeout failures. We will soon whitelist those failures so that
mine sweeper won't vote when they're hit.

Regards,
Salvatore

[1]
https://github.com/openstack/nova/commit/842b2abfe76dede55b3b61ebaad5a90c356c5ace




On 28 July 2014 13:07, Salvatore Orlando sorla...@nicira.com wrote:

 Hi,

 We have been witnessing some issues in our infrastructure which resulted
 in Mine Sweeper test run failures. Unfortunately these failures resulted in
 -1s being put on several patches.

 Mine sweeper is now temporarily disabled and our team is already working
 on solving the issue.
 In the meanwhile, you can trigger a recheck with recheck-vmware to
 remove the negative score from your patch.

 Salvatore

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] python-neutronclient, launchpad, and milestones

2014-07-29 Thread Nader Lahouti
Hi Kyle,

I have a BP listed in https://blueprints.launchpad.net/python-neutronclient
 and it looks like it is targeted for 3.0 (it is needed for juno-3). The code is
 ready and in review. Can it be included in the 2.3.7 release?

Thanks,
Nader.



On Tue, Jul 29, 2014 at 12:28 PM, Kyle Mestery mest...@mestery.com wrote:

 All:

 I spent some time today cleaning up python-neutronclient in LP. I
 created a 2.3 series, and created milestones for the 2.3.5 (June 26)
 and 2.3.6 (today) releases. I also targeted bugs which were released
 in those milestones to the appropriate places. My next step is to
 remove the 3.0 series, as I don't believe this is necessary anymore.

 One other note: I've tentatively created a 2.3.7 milestone in LP, so
 we can start targeting client bugs which merge there for the next
 client release.

 If you have any questions, please let me know.

 Thanks,
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday July 31 at 17:00UTC

2014-07-29 Thread David Kranz

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
this Thursday, July 31st at 17:00 UTC in the #openstack-meeting channel.

The agenda for Thursday's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 17:00 UTC is in other timezones, Thursday's
meeting will be at:

13:00 EDT
02:00 JST
02:30 ACST
19:00 CEST
12:00 CDT
10:00 PDT

-David Kranz


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Saving the original raw template in the DB

2014-07-29 Thread Ton Ngo

Hi everyone,
The raw template saved in the DB used to be the original template that
a user submits.  With the recent fix for stack update, it now reflects the
template that is actually deployed, so it may be different from the
original template because some resources may fail to deploy.  I would like
to solicit some feedback on saving the original template in the DB
separately from the deployed template.  I can think of two use cases for
retrieving the original template:
   - Debugging: running stack-update using the same template after fixing
     environmental problems.  The CLI and API can be extended to allow
     reusing the original template without having to provide it again.
   - Convergence or retry: some initial resource deployment may fail
     intermittently, but the user can retry later.

 Are there other potential use cases?  The cost would be an extra
copy of the template in the raw template table for each stack if there is
failure, and a new column in the stack table to hold the id.  We can argue
that the user should have the original template to resubmit, but it seems
useful and convenient to save it in the DB.
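
For concreteness, a minimal SQLAlchemy sketch of the proposed extra column
(model and column names here are illustrative, not Heat's actual schema):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class RawTemplate(Base):
        __tablename__ = 'raw_template'
        id = sa.Column(sa.Integer, primary_key=True)
        template = sa.Column(sa.Text)

    class Stack(Base):
        __tablename__ = 'stack'
        id = sa.Column(sa.String(36), primary_key=True)
        # template as actually deployed (current behaviour)
        raw_template_id = sa.Column(
            sa.Integer, sa.ForeignKey('raw_template.id'))
        # proposed: the template originally submitted by the user
        orig_raw_template_id = sa.Column(
            sa.Integer, sa.ForeignKey('raw_template.id'), nullable=True)
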
Ton Ngo,


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Spec process

2014-07-29 Thread James Slagle
Last week at the TripleO midcycle we discussed the spec process that
we've adopted. Overall, I think most folks are liking the specs
themselves. I heard general agreement that we're helping to tease out
issues and potential implementation disagreements earlier in the
process, and that's a good thing.  I don't think I heard any contrary
opinions to that anyway :-).

The point was raised that we have a lot of specs in review, and
relatively few approved specs. All agreed that the time to review a
spec can be consuming and requires more commitment. We proposed asking
core reviewers to commit to reviewing at least 1 spec a week. jdob
emailed the list about that:

http://lists.openstack.org/pipermail/openstack-dev/2014-July/040926.html

Please reply to that thread if you have an opinion on that point.
Everyone at the midcycle was in general agreement about that
commitment (hence probably why not a lot of people have replied), but
we wanted to be sure to poll those that couldn't attend the midcycle
as well.

The idea of opening up the spec approval process to other TripleO core
reviewers also came up. Personally, I felt this might reduce the
workload on one person a bit and increase bandwidth to get specs
approved. Since the team is new to this process, we talked about
defining what it means when a spec is ready to be approved. I
volunteered to pull together a wiki page on that topic and have done
so here:

https://wiki.openstack.org/wiki/TripleO/SpecReviews

Thoughts on modifications, additions, subtractions, etc., are all welcome.

Finally, the juno-2 milestone has passed. Many (if not all?)
integrated projects have already -2'd specs that have not been
approved, indicating they are not going to make Juno. There are many
valid reasons to do this: focus, stabilization, workload, etc.

Personally, I don't feel like TripleO agreed or had discussion on this
point as a community. I'm actually not sure right off (without digging
through archives) if the spec freeze is an OpenStack wide process or
for individual projects. And, if it is OpenStack wide, would that
apply just to projects that are part of the integrated release.

I admit some selfishness here...since I have some outstanding specs.
But, I think we need to come to a consensus if we are going to have a
spec freeze for TripleO around the time of other projects or not, and
at the very least, define what those dates are. Additionally, we
haven't defined or talked about if we'd have an exception process to
the freeze if someone wanted to propose an exception.

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] bp/pxe boot

2014-07-29 Thread Angelo Matarazzo
Hi folks,
I would like to add the PXE boot capability to Nova/libvirt and Horizon too.
Currently, compute instances must be booted from images (or snapshots)
stored in Glance or volumes stored in Cinder.
Our idea (as you can find below) is already described there [1] [2] and
aims to provide a design for booting compute instances from a PXE boot
server, i.e. bypassing the image/snapshot/volume requirement.
There is already an open blueprint, but I want to register a new one
because it has had no updates since 2013.
https://blueprints.launchpad.net/nova/+spec/libvirt-empty-vm-boot-pxe
https://wiki.openstack.org/wiki/Nova/Blueprints/pxe-boot-instance
What do you think?

Thanks beforehand

Angelo
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] python-neutronclient, launchpad, and milestones

2014-07-29 Thread Kyle Mestery
On Tue, Jul 29, 2014 at 3:50 PM, Nader Lahouti nader.laho...@gmail.com wrote:
 Hi Kyle,

 I have a BP listed in https://blueprints.launchpad.net/python-neutronclient
  and it looks like it is targeted for 3.0 (it is needed for juno-3). The code is
  ready and in review. Can it be included in the 2.3.7 release?

Yes, you can target it there. We'll see about including it in that
release, pending review.

Thanks!
Kyle

 Thanks,
 Nader.



 On Tue, Jul 29, 2014 at 12:28 PM, Kyle Mestery mest...@mestery.com wrote:

 All:

 I spent some time today cleaning up python-neutronclient in LP. I
 created a 2.3 series, and created milestones for the 2.3.5 (June 26)
 and 2.3.6 (today) releases. I also targeted bugs which were released
 in those milestones to the appropriate places. My next step is to
 remove the 3.0 series, as I don't believe this is necessary anymore.

 One other note: I've tentatively created a 2.3.7 milestone in LP, so
 we can start targeting client bugs which merge there for the next
 client release.

 If you have any questions, please let me know.

 Thanks,
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] vhost-scsi support in Nova

2014-07-29 Thread Mike Perez
On 11:08 Fri 25 Jul , Stefan Hajnoczi wrote:
 On Fri, Jul 25, 2014 at 10:47 AM, Nicholas A. Bellinger
 n...@linux-iscsi.org wrote:
  As mentioned, we'd like the Nova folks to consider vhost-scsi support as
  a experimental feature for the Juno release of Openstack, given the
  known caveats.

 I think OpenStack support for vhost-scsi makes sense if someone wants
 to do the work.

This was done in Nova [1], but we're waiting on the additional requirements
that Daniel requested for Libvirt support to be met. We'll revisit this for
the next release.

Support for vHost in Cinder was worked on [2][3], but is paused until the
requirements above are met.

[1] - https://review.openstack.org/#/c/107650/
[2] - 
https://github.com/openstack/cinder-specs/blob/master/specs/juno/vhost-support.rst
[3] - 
https://github.com/Thingee/cinder/commit/4da7c5aab817f021b3f39b7d56df7c7beace2ab8

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Saving the original raw template in the DB

2014-07-29 Thread Clint Byrum

Excerpts from Ton Ngo's message of 2014-07-29 13:53:12 -0700:
 
 Hi everyone,
 The raw template saved in the DB used to be the original template that
 a user submits.  With the recent fix for stack update, it now reflects the
 template that is actually deployed, so it may be different from the
 original template because some resources may fail to deploy.  I would like
 to solicit some feedback on saving the original template in the DB
 separately from the deployed template.  I can think of two use cases for
 retrieving the original template:
    - Debugging: running stack-update using the same template after fixing
      environmental problems.  The CLI and API can be extended to allow
      reusing the original template without having to provide it again.
    - Convergence or retry: some initial resource deployment may fail
      intermittently, but the user can retry later.
 

I believe this use case is far better handled via vcs. We need the
template to parse the current state of the stack. The user will have
their intended template and can have their intended parameter values
all included in a VCS.

   Are there other potential use cases?  The cost would be an extra
 copy of the template in the raw template table for each stack if there is
 failure, and a new column in the stack table to hold the id.  We can argue
 that the user should have the original template to resubmit, but it seems
 useful and convenient to save it in the DB.
 Ton Ngo,
 

Additional cost is the additional complexity of code to manage the data.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] data-source renovation

2014-07-29 Thread Tim Hinrichs
Hi all,

As I mentioned in a previous IRC, when writing our first few policies I had 
trouble using the tables we currently use to represent external data sources 
like Nova/Neutron.  

The main problem is that wide tables (those with many columns) are hard to use. 
 (a) it is hard to remember what all the columns are, (b) it is easy to 
mistakenly use the same variable in two different tables in the body of the 
rule, i.e. to create an accidental join, (c) changes to the datasource drivers 
can require tedious/error-prone modifications to policy.

I see several options.  Once we choose something, I’ll write up a spec and 
include the other options as alternatives.


1) Add a preprocessor to the policy engine that makes it easier to deal with 
large tables via named-argument references.

Instead of writing a rule like

p(port_id, name) :-
neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts, 
binding_cap, status, name, admin_state_up, network_id, tenant_id, binding_vif, 
device_owner, mac_address, fixed_ips, router_id, binding_host)

we would write

p(id, nme) :-
neutron:ports(port_id=id, name=nme)

The preprocessor would fill in all the missing variables and hand the original 
rule off to the Datalog engine.
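
A rough sketch of that preprocessing step (the schema dictionary and the
fresh-variable naming scheme are assumptions for illustration, not the
actual Congress implementation):

    import itertools

    # table name -> ordered column list (abbreviated here; the real
    # neutron:ports table has 15 columns)
    SCHEMA = {
        'neutron:ports': ['port_id', 'addr_pairs', 'security_groups',
                          'name', 'network_id', 'tenant_id'],
    }

    def expand(table, **named):
        """Rewrite a named-argument atom into the full positional atom."""
        fresh = ('_v%d' % i for i in itertools.count())
        args = [named[col] if col in named else next(fresh)
                for col in SCHEMA[table]]
        return '%s(%s)' % (table, ', '.join(args))

    print(expand('neutron:ports', port_id='id', name='nme'))
    # -> neutron:ports(id, _v0, _v1, nme, _v2, _v3)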

Pros: (i) leveraging vanilla database technology under the hood
  (ii) policy is robust to changes in the fields of the original data b/c 
the Congress data model is different than the Nova/Neutron data models
Cons: (i) we will need to invert the preprocessor when showing 
rules/traces/etc. to the user
  (ii) a layer of translation makes debugging difficult

2) Be disciplined about writing narrow tables and write 
tutorials/recommendations demonstrating how.

Instead of a table like...
neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts, 
binding_cap, status, name, admin_state_up, network_id, tenant_id, binding_vif, 
device_owner, mac_address, fixed_ips, router_id, binding_host)

we would have many tables...
neutron:ports(port_id)
neutron:ports.addr_pairs(port_id, addr_pairs)
neutron:ports.security_groups(port_id, security_groups)
neutron:ports.extra_dhcp_opts(port_id, extra_dhcp_opts)
neutron:ports.name(port_id, name)
...

People writing policy would write rules such as ...

p(x) :- neutron:ports.name(port, name), ...

[Here, the period e.g. in ports.name is not an operator--just a convenient way 
to spell the tablename.]

To do this, Congress would need to know which columns in a table are sufficient 
to uniquely identify a row, which in most cases is just the ID.

Pros: (i) this requires only changes in the datasource drivers; everything else 
remains the same
  (ii) still leveraging database technology under the hood
  (iii) policy is robust to changes in fields of original data
Cons: (i) datasource driver can force policy writer to use wide tables
  (ii) this data model is much different than the original data models
  (iii) we need primary-key information about tables
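
To make this concrete, a datasource driver could mechanically decompose
each wide row given the primary-key column -- a sketch (names illustrative):

    def narrow_tables(table, pk, row):
        """Split one wide row (a dict) into per-column tuples keyed on pk."""
        tuples = {table: [(row[pk],)]}
        for col, val in row.items():
            if col == pk:
                continue
            values = val if isinstance(val, list) else [val]
            tuples['%s.%s' % (table, col)] = [(row[pk], v) for v in values]
        return tuples

    # narrow_tables('neutron:ports', 'port_id',
    #               {'port_id': 'p1', 'name': 'web',
    #                'security_groups': ['sg1', 'sg2']})
    # -> {'neutron:ports': [('p1',)],
    #     'neutron:ports.name': [('p1', 'web')],
    #     'neutron:ports.security_groups': [('p1', 'sg1'), ('p1', 'sg2')]}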

3) Enhance the Congress policy language to handle objects natively.

Instead of writing a rule like the following ...

p(port_id, name, group) :-
neutron:ports(port_id, addr_pairs, security_groups, extra_dhcp_opts, 
binding_cap, status, name, admin_state_up, network_id, tenant_id, binding_vif, 
device_owner, mac_address, fixed_ips, router_id, binding_host),
neutron:ports.security_groups(security_group, group)

we would write a rule such as
p(port_id, name, group) :-
neutron:ports(port),
port.name(name),
port.id(port_id),
port.security_groups(group)

The big difference here is that the period (.) is an operator in the language, 
just as in C++/Java.

Pros:
(i) The data model we use in Congress is almost exactly the same as the data 
model we use in Neutron/Nova.

(ii) Policy is robust to changes in the Neutron/Nova data model as long as 
those changes only ADD fields.

(iii) Programmers may be slightly more comfortable with this language.

Cons:

(i) The obvious implementation (changing the engine to implement the (.) 
operator directly) is quite a change from traditional database technology.  At 
this point, that seems risky.

(ii) It is unclear how to implement this via a preprocessor (thereby leveraging 
database technology).  The key problem I see is that we would need to translate 
port.name(...) into something like option (2) above.  The difficulty is that 
TABLE could sometimes be a port, sometimes be a network, sometimes be a subnet, 
etc.

(iii) Requires some extra syntactic restrictions to ensure we don't lose 
decidability.

(iv) Because the Congress and Nova/Neutron models are the same, changes to the 
Nova/Neutron model can require rewriting policy.



Thoughts?
Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][policy] Bridging the 2-group gap in group policy

2014-07-29 Thread Ryan Moats


As promised in Monday's Neutron IRC minutes [1], this mail is a trip down
memory lane looking at the history of the
Neutron GP project.  The original GP google doc [2] included specifying
policy both via a produce/consume 1-group
approach and as a link between two groups.  There was an email thread [3]
that discussed the relationship between
these models early on, but that discussion petered out and during a later
IRC meeting [4] the concept of contracts
were added, but without changing the basic use case requirements from the
original document.  A followup meeting [5]
began the discussion of how to express the original model from the contract
data model but that discussion doesn't
appear to have been completed either.  The PoC in Atlanta raised a set of
issues [6],[7] around the complexity of the
resulting PoC code.

The good news is that having looked through the proposed GP code commits
(links to which can be found at [8]) I
believe that folks that want to be able to specify policies via the 2-group
approach (and yes, I'm one of them) can have
that without changing the model encoded in those commits. Rather, it can be
done via the WiP CLI code commit by
providing a profiled API - this is a technique used by the IETF, CCITT,
etc. to allow a rich API to be consumed in
common ways.  In this case, what I'm envisioning is something like

neutron policy-apply [policy rule] [src group] [destination group]

in this case, the CLI would perform the contract creation for the policy
rule and assign the proper produce/consume edits to the specified source
and destination groups.  Note: this is in addition to the CLI providing
direct access to the underlying data model.  I believe that this is the
simplest way to bridge the gap and provide support to folks who want to
specify policy as something between two groups.
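
As a rough illustration, the wrapper could look something like the sketch
below.  The client method names and payload shapes are invented for
illustration; the real group policy API bindings may differ:

  # Sketch of the profiled 2-group call layered on the contract model.
  def policy_apply(client, policy_rule_id, src_group_id, dst_group_id):
      """Wrap one 2-group policy in the produce/consume contract model."""
      # Create a contract containing just this policy rule...
      contract = client.create_contract(
          {'contract': {'policy_rules': [policy_rule_id]}})
      contract_id = contract['contract']['id']
      # ...then mark the source group as providing the contract and the
      # destination group as consuming it (which side provides is a
      # convention the profiled API would have to pick).
      client.update_group(src_group_id,
                          {'group': {'provided_contracts': [contract_id]}})
      client.update_group(dst_group_id,
                          {'group': {'consumed_contracts': [contract_id]}})
      return contract_id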

Ryan Moats (regXboi)

References:
[1]
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-07-28-21.02.log.txt
[2]
https://docs.google.com/document/d/1ZbOFxAoibZbJmDWx1oOrOsDcov6Cuom5aaBIrupCD9E/edit#
[3]
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022150.html
[4]
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-02-27-19.00.log.html
[5]
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-03-20-19.00.log.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2014-May/035661.html
[7]
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-05-22-18.01.log.html
[8] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ova support in glance

2014-07-29 Thread Bhandaru, Malini K
Hello Everyone!

We were discussing the following blueprint in Glance:
Enhanced-Platform-Awareness-OVF-Meta-Data-Import 
:https://review.openstack.org/#/c/104904/

The OVA format is very rich, and the proposal here, in its first incarnation, 
is to essentially untar the OVA package, import the first disk image therein, 
parse the OVF file, and attach the metadata to the disk image.
There is a Nova effort in a similar vein that supports OVA, limiting its 
availability to the VMware hypervisor. Our efforts will combine.
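
As a minimal sketch of what that first-incarnation import could look like
(illustrative only, not the blueprint's actual code; the file-name
heuristics and OVF property handling are simplified):

  import tarfile
  import xml.etree.ElementTree as ET

  OVF_NS = '{http://schemas.dmtf.org/ovf/envelope/1}'

  def extract_ova(ova_path, dest='/tmp'):
      """Untar an OVA; return (first disk image path, OVF metadata)."""
      disk_path = None
      metadata = {}
      with tarfile.open(ova_path) as ova:
          for member in ova.getmembers():
              if member.name.endswith(('.vmdk', '.qcow2', '.img')):
                  if disk_path is None:   # import only the first disk
                      ova.extract(member, path=dest)
                      disk_path = dest + '/' + member.name
              elif member.name.endswith('.ovf'):
                  tree = ET.parse(ova.extractfile(member))
                  # Collect simple key/value properties; real OVF parsing
                  # would also walk the VirtualHardwareSection, etc.
                  for elem in tree.iter():
                      if elem.tag.endswith('Property'):
                          key = elem.get(OVF_NS + 'key')
                          if key:
                              metadata[key] = elem.get(OVF_NS + 'value')
      return disk_path, metadata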

The issue that has been raised is how many OpenStack users and OpenStack cloud 
providers tackle OVA data with multiple disk images, using them as an 
application.
Do your users use OVAs with content other than 1 disk image + OVF?
That is, do they contain other files that are used? Do any of you use OVAs with 
snapshot chains?
Would this solution path break your system or result in unhappy users?


If the solution addresses at least 50% of the use cases (a low bar) and eases 
deploying NFV applications, it would be worthwhile.
If so, how would we message around this so as not to imply that OpenStack 
supports OVA in its full glory?

Down the road the Artefacts blueprint will provide a placeholder for OVA. 
Perhaps the OVA format may even be transformed into a Heat template to work in 
OpenStack.

Please do provide us your feedback.
Regards
Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa] turbo-hipster seems very unhappy

2014-07-29 Thread Matt Riedemann
I've seen t-h failing on many patches today, most of which aren't touching 
the database migrations, but it's primarily catching my attention 
because of the failure on this change:


https://review.openstack.org/#/c/109660/

It looks like a pretty simple issue of the decorator package not being 
in whatever PyPI mirror t-h is using.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Managing change in gerrit which depends on multiple other changes in review

2014-07-29 Thread Brandon Logan
Hi Evgeny and Doug,

So the thing to keep in mind is that Gerrit identifies a new review by
the Change-Id in the commit message.  It then identifies patch sets by
the commit hashes.  This is my understanding of it at least.  A commit's
hash gets changed by many actions, such as cherry-picks, rebases, and
commit --amend.

With this in mind, you can verify that your changes will
not cause an update to the ancestor changes in a gerrit dependency
chain.  Before you do a git review, just look at your git log and commit
hashes and see if the hash for each of those commits is the same as the
latest patch set in gerrit.

My workflow is this:
If I just need to rebase the change, I just hit the rebase button in
gerrit on my change only.  This will cause the commit to have a new
hash, thus a new patch set.

If I just need to make a change, then doing the normal git checkout from
the gerrit change page and git commit --amend works fine, because I am
only touching that commit.

If I need to make a change AND rebase, there are two ways to do this:
1. Hit Rebase Button on the gerrit change page then git checkout, make
change, git commit --amend, git review.
- The problem with this is that it creates two patch sets.
2. git checkout the gerrit change that your gerrit change is dependent
on.  Then cherry-pick your gerrit change on top of that.  This is
essentially a rebase, and now you can make changes to the code, commit
--amend and git review.  Gerrit will only see this commit hash changed
once, so only one patch set.

One other thing to keep in mind: since your change is dependent on
others, you have to rely on your change's dependencies being rebased with
master.  You shouldn't do a rebase against master until the change you
are dependent on has been merged.  So the only time you should rebase is
when gerrit shows the OUTDATED message on your dependency.

Hope that helps explain my methodology, which is still a work in
progress.  However, I think this is a decent methodology when dealing
with a massive dependency chain like this.

Thanks,
Brandon


On Tue, 2014-07-29 at 16:05 +, Doug Wiegley wrote:
 Hi Evgeny,
 
 
 I’m not sure I’m doing it in the most efficient way, so I’d love to
 hear pointers, but what I’ve been doing:
 
 
 First, to set up the dependent commit, the command is “git review -d”.
 I’ve been using this
 guide: http://www.mediawiki.org/wiki/Gerrit/Advanced_usage#Create_a_dependency
 
 
 Second, when the dependent review changes, there is a ‘rebase’ button
 on gerrit that’ll get things back in sync automatically.
 
 
 Third, if you need to change your code after rebasing from gerrit,
 this is the only sequence I’ve tried that doesn’t result in something
 weird (rebasing overwrites the dependent commits, silently, so I’m
 clearly doing something wrong):
  1. Re-clone vanilla neutron
  2. Cd into new clone, setup for gerrit review
  3. Redo dependent commit setup
  4. Create your topic branch
  5. Cherry-pick your commit from gerrit into your new topic branch
  6. Use “git log -n5 --decorate --pretty=oneline”, and verify that
 your dependency commit hashes match what’s in gerrit.
  7. Git review
 
 
 Thanks,
 doug
 
 
 
 
 From: Evgeny Fedoruk evge...@radware.com
 Reply-To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Date: Tuesday, July 29, 2014 at 7:12 AM
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [Neutron] Managing change in gerrit which
 depends on multiple other changes in review
 
 
 
 Hi folks,
 
  
 
 I’m working on a change for the neutron LBaaS service.
 
 Since there is massive work being done for LBaaS these days, my change
 depends on other changes being reviewed in parallel in gerrit.
 
 I don’t have deep git knowledge and I’m failing to figure out the
 right procedure that should be followed for managing such a
 multi-dependent patch.
 
 So, I’m sending my question to you guys, in hopes of finding the right
 way to manage such patches in gerrit.
 
  
 
 Here is the situation:
 
 There are 4 patches on review in gerrit
 
 1.  A – No dependencies
 
 2.  B – Depends on A
 
 3.  C – Depends on A
 
 4.  D – No dependencies
 
  
 
  
 
 My change, let’s call it “X”, is already on review in gerrit.
 
 It should depend on all four other changes, A, B, C and D.
 
  
 
 I tried two ways of managing those dependencies: 1) by doing a
 cherry-pick for each one of them, and 2) by doing git review and git
 rebase for each one of them.
 
 It does not work well for me; my change’s commit message is replaced by
 other changes’ commit messages, and when I commit my patch, it commits
 other changes’ patches too.
 
  
 
 So, my question is: 
 
 Is this scenario supported by the gerrit system?
 
 If it is – what is the right procedure to follow in order to manage
 those dependencies
 
 and how to rebase my change when some of 

[openstack-dev] [Neutron] Not support dnsmasq 2.63?

2014-07-29 Thread Xuhan Peng
We bumped the minimum version of dnsmasq to 2.63 a while ago by this code
change:

https://review.openstack.org/#/c/105378/

However, currently we still kind of support earlier versions of dnsmasq,
because we only give a warning and don't exit the program when we find the
dnsmasq version is less than the minimum version. This causes some
confusion and complicates the code, since we need to take care of the
different syntax of different dnsmasq versions in the dhcp code (note that
earlier versions don't support tags).

I wonder what's your opinion on NOT supporting dnsmasq versions less than
2.63 in Juno? I think we can print an error message and exit the program
when we detect an unsupported version, but I would like to gather more
thoughts on this one.
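
For illustration, the shape of the change would be something like the
sketch below (not the actual neutron agent code; the real check lives in
the dhcp driver and would use neutron's own logging and exit paths):

  import re
  import subprocess
  import sys

  MINIMUM_DNSMASQ_VERSION = (2, 63)

  def check_dnsmasq_version():
      out = subprocess.check_output(['dnsmasq', '--version'])
      match = re.search(r'version (\d+)\.(\d+)', out.decode())
      if not match:
          sys.exit('Unable to determine dnsmasq version')
      version = (int(match.group(1)), int(match.group(2)))
      if version < MINIMUM_DNSMASQ_VERSION:
          # Previously this was only a warning; the proposal is to
          # refuse to start instead of limping along.
          sys.exit('dnsmasq %s.%s is less than the minimum required '
                   'version %s.%s' % (version + MINIMUM_DNSMASQ_VERSION))
      return version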

Thanks,
Xu Han
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Not support dnsmasq 2.63?

2014-07-29 Thread Kyle Mestery
On Tue, Jul 29, 2014 at 8:51 PM, Xuhan Peng pengxu...@gmail.com wrote:
 We bumped the minimum version of dnsmasq to 2.63 a while ago by this code
 change:

 https://review.openstack.org/#/c/105378/

 However, currently we still kind of support earlier versions of dnsmasq,
 because we only give a warning and don't exit the program when we find the
 dnsmasq version is less than the minimum version. This causes some confusion
 and complicates the code, since we need to take care of the different syntax
 of different dnsmasq versions in the dhcp code (note that earlier versions
 don't support tags).

 I wonder what's your opinion on NOT supporting dnsmasq versions less than
 2.63 in Juno? I think we can print an error message and exit the program
 when we detect an unsupported version, but I would like to gather more
 thoughts on this one.

I'm personally ok with this hard limit, but I'd really like to hear
from distribution people here to understand their thoughts, including
what versions of dnsmasq ship with their products and how this would
affect them.

Thanks,
Kyle

 Thanks,
 Xu Han

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] objects notifications

2014-07-29 Thread Jay Lau
It's a good idea to have a generic way to handle object notifications.
Considering that different objects might have different payloads and
different logic for handling them, we may need a clear design for
this. It seems a bp is needed for this. Thanks.


2014-07-30 2:49 GMT+08:00 Mike Spreitzer mspre...@us.ibm.com:

 Gary Kotton gkot...@vmware.com wrote on 07/29/2014 12:43:08 PM:

  Hi,
  When reviewing https://review.openstack.org/#/c/107954/ it occurred
  to me that maybe we should consider having some kind of generic
  object wrapper that could do notifications for objects. Any thoughts on
 this?

 I am not sure what that would look like, but I agree that we have a
 problem with too many things not offering notifications.  If there were
 some generic way to solve that problem, it would indeed be great.

 Thanks,
 Mike


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Survey on Token Provider Usage

2014-07-29 Thread Morgan Fainberg
Hi!

The Keystone team is looking for feedback from the community on what type of 
Keystone token is being used in your OpenStack deployments. This is to help us 
understand the use of the different providers and get information on the 
reasoning (if possible) behind why that token provider is being used.

Please use the survey link and let us know which release of OpenStack and which 
Keystone token type (UUID, PKI, PKIZ, something custom) you are using. The 
results of this survey will have no impact on future support of any of these 
token types; we plan to continue to support all of the current token 
formats and the ability to use a custom token provider.


https://www.surveymonkey.com/s/NZNDH3M 


Thanks!
The Keystone Team



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Incubation request

2014-07-29 Thread Swartzlander, Ben
On Tue, 2014-07-29 at 13:38 +0200, Thierry Carrez wrote:
 Swartzlander, Ben wrote:
  Manila has come a long way since we proposed it for incubation last autumn. 
  Below are the formal requests.
  
  https://wiki.openstack.org/wiki/Manila/Incubation_Application
  https://wiki.openstack.org/wiki/Manila/Program_Application
  
  Anyone have anything to add before I forward these to the TC?
 
 When ready, propose a governance change a bit like this one:
 
 https://github.com/openstack/governance/commit/52d9b4cf2f3ba9d0b757e16dc040a1c174e1d27e

Thierry, does the governance change process replace the process of
sending an email to the openstack-tc ML?

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] objects notifications

2014-07-29 Thread Dan Smith
 When reviewing https://review.openstack.org/#/c/107954/ it occurred to
 me that maybe we should consider having some kind of generic object
 wrapper that could do notifications for objects. Any thoughts on this?

I think it might be good to do this in a repeatable, but perhaps not
totally automatic way. I can see that any time instance gets changed in
certain ways, that we'd want a notification about it. However, there are
probably some cases that don't fit that. For example,
instance.system_metadata is mostly private to nova I think, so I'm not
sure we'd want to emit a notification for that. Plus, we'd probably end
up with some serious duplication if we just do it implicitly.

What if we provided a way to declare the fields of an object that should
trigger a notification?  Something like:

  NOTIFICATION_FIELDS = ['host', 'metadata', ...]

  @notify_on_save(NOTIFICATION_FIELDS)
  @base.remotable
  def save(context):
      ...
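
One possible shape for such a decorator, sketched only for illustration
(the notifier plumbing is stubbed out here; a real implementation would
use nova's rpc notifier, while obj_what_changed() and obj_name() are the
existing NovaObject APIs):

  import functools

  class _StubNotifier(object):
      """Stand-in for nova's rpc notifier, just for this sketch."""
      def info(self, context, event_type, payload):
          print(event_type, payload)

  notifier = _StubNotifier()

  def notify_on_save(fields):
      """Emit a notification after save() if a watched field changed."""
      def decorator(save_func):
          @functools.wraps(save_func)
          def wrapper(self, context, *args, **kwargs):
              # Inspect dirty fields before they are persisted.
              changed = set(self.obj_what_changed()) & set(fields)
              result = save_func(self, context, *args, **kwargs)
              if changed:
                  payload = dict((f, getattr(self, f)) for f in changed)
                  notifier.info(context,
                                '%s.update' % self.obj_name().lower(),
                                payload)
              return result
          return wrapper
      return decorator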

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Ready to change the meeting time?

2014-07-29 Thread Isaku Yamahata
Hi Steve.

The timeslot of 5:00 AM UTC (Tuesday), 30 min, clashes with the ServiceVM
meeting:
https://wiki.openstack.org/wiki/Meetings/ServiceVM
Please disable this time slot.

thanks,

On Tue, Jul 29, 2014 at 10:53:21AM -0400,
Steve Gordon sgor...@redhat.com wrote:

 Hi all,
 
 I have recently had a few people express concern to me that the current 
 meeting time is preventing their attendance at the meeting. As we're still 
 using the original meeting time we discussed for a trial period 
 immediately after summit, it is probably time we reassess anyway.
 
 I have been through the global iCal [1] and tried to identify times where at 
 least one of the IRC meeting rooms is available and no other NFV-related team 
 or subteam (e.g. Nova, Neutron, DVR, L3, etc.) is meeting. The resultant 
 times are available for voting on this whenisgood.net sheet - be sure to 
 select your location to view it in your local time:
 
 http://whenisgood.net/exzzbi8
 
 If you are a regular participant in the NFV meetings, or even more 
 importantly if you would like to be but are prevented from doing so because 
 of the current timing, then please record your preferences above. If you think 
 there is an available time slot that I've missed, or I've made a time slot 
 available that actually clashes with a meeting relevant to NFV participants, 
 then please respond on list!
 
 This week's meeting will proceed at the regular time on Wednesday, July 30 at 
 1400 UTC in #openstack-meeting-alt.
 
 Thanks,
 
 Steve
 
 [1] 
 https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Isaku Yamahata isaku.yamah...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL] Removing the tox==1.6.1 pin

2014-07-29 Thread Matt Riedemann



On 7/25/2014 2:38 PM, Clark Boylan wrote:

Hello,

The recent release of tox 1.7.2 has fixed the {posargs} interpolation
issues we had with newer tox, which had forced us to pin to tox==1.6.1.
Before we can remove the pin and start telling people to use latest tox
we need to address a new default behavior in tox.

New tox sets a random PYTHONHASHSEED value by default. Arguably this is
a good thing as it forces you to write code that handles unknown hash
seeds, but unfortunately many projects' unittests don't currently deal
with this very well. A workaround is to hard-set a PYTHONHASHSEED of 0
in tox.ini files. I have begun to propose these changes to the projects
that I have tested and found to not handle random seeds. It would be
great if we could get these reviewed and merged so that infra can update
the version of tox used on our side.

I probably won't be able to test every single project and propose fixes
with backports to stable branches for everything. It would be a massive
help if individual projects tested and proposed fixes as necessary too
(these changes will need to be backported to stable branches). You can
test by running `tox -epy27` in your project with tox version 1.7.2. If
that fails add PYTHONHASHSEED=0 as in
https://review.openstack.org/#/c/109700/ and rerun `tox -epy27` to
confirm that succeeds.

This will get us over the immediate hump of the tox upgrade, but we
should also start work to make our tests run with random hashes. This
shouldn't be too hard to do, as it will be a self-gating change once
infra is able to update the version of tox used in the gate. Most of the
issues appear related to dict entry ordering. I have gone ahead and
created https://bugs.launchpad.net/cinder/+bug/1348818 to track this
work.
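
As a contrived example of the kind of test that breaks under a random
hash seed (dict iteration order depends on PYTHONHASHSEED in Python 2.x,
so comparing against one fixed serialization is fragile):

  import unittest

  def format_flags(flags):
      return ','.join('%s=%s' % kv for kv in flags.items())

  class TestFormatFlags(unittest.TestCase):
      def test_format(self):
          flags = {'debug': True, 'verbose': False}
          # Fragile: assumes 'debug' iterates before 'verbose'.  A
          # robust test would sort the items or compare parsed sets.
          self.assertEqual('debug=True,verbose=False',
                           format_flags(flags))

  if __name__ == '__main__':
      unittest.main()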

Thank you,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Is this in any way related to the fact that tox is unable to 
find/install the oslo alpha packages for me in nova right now (config, 
messaging, rootwrap) after I rebased on master?  I had to go into 
requirements.txt and remove the minimum versions on the alpha packages to 
get tox to install dependencies for nova unit tests.  I'm running with 
tox 1.6.1, but I'm not sure if that would be related anyhow.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev