Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-02 Thread Thomas Goirand
On 06/01/2015 07:16 PM, Jeremy Stanley wrote:
 On 2015-06-01 14:55:06 +0200 (+0200), Thomas Goirand wrote:
 [...]
 So, should I start writing a script to build an image for package
 building (i.e., an image with sbuild, git-buildpackage, and so on)?
 [...]
 
 Probably what we'd want to do is something like debootstrap/rpmstrap

FYI, I was the Debian package maintainer for rpmstrap, and I gave up on
it because it was not maintainable (i.e., it broke constantly).
Bootstrapping an RPM distribution can simply be done with yum (which is
why I maintain yum in Debian).

 a chroot for each platform we want to build, then in each of them
 iterate through the packaging git repos and --download-only the
 build-deps listed therein. That will prime a local cache in each
 chroot and then it will get baked into that image.

OK, got the idea. Though what should be filled is
/var/cache/pbuilder/aptarchive rather than the host OS's cache. I'll try
something along those lines.

 Later when a
 worker is booted from that image, the package build job just chroots
 into the appropriate filesystem subtree and has a warm cache
 available to it so it only needs to hopefully update at most the
 package list and maybe a handful of packages before starting to
 build whatever new package was requested.

OK. That's what sbuild will do if we fill /var/cache/pbuilder/aptarchive
with relevant content.
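
To make that concrete, here is a rough sketch of the priming step (hedged:
the repo layout, paths, and the control-file parsing are my assumptions,
not existing tooling):

```
#!/usr/bin/env python
# Sketch: prime /var/cache/pbuilder/aptarchive with the build-deps of
# every packaging repo, so package builds start with a warm cache.
import glob
import re
import subprocess

APTARCHIVE = '/var/cache/pbuilder/aptarchive'  # assumed target directory
CONTROLS = glob.glob('/opt/packaging/*/debian/control')  # assumed layout

def build_deps(control_path):
    # Very rough Build-Depends extraction from debian/control.
    text = open(control_path).read()
    m = re.search(r'^Build-Depends:\s*(.*?)(?=^\S|\Z)', text,
                  re.MULTILINE | re.DOTALL)
    if not m:
        return []
    deps = []
    for entry in m.group(1).split(','):
        # keep the first alternative, drop version constraints
        name = entry.split('|')[0].split('(')[0].strip()
        if name:
            deps.append(name)
    return deps

for control in CONTROLS:
    pkgs = build_deps(control)
    if pkgs:
        subprocess.check_call(
            ['apt-get', '--download-only', '-y',
             '-o', 'Dir::Cache::archives=%s' % APTARCHIVE,
             'install'] + pkgs)
```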

 The good thing about this approach is that it can be added later, it
 doesn't need to be implemented on day one.

Yeah. I'm currently working on a Jessie cloud image with an sbuild
configuration, so that when Paul is ready, we can use that.

 How would a job get the latest version of a Git repository then? This
 still needs network, right?
 [...]
 
 The way our jobs work right now is that the workers start with a
 recent (generally no more than a day old) clone of all the Git
 repositories we maintain. It still has to hit the network to
 retrieve more recent Git refs, but this at least minimizes network
 interaction and significantly reduces the failure rate.

That will be the slightly trickier part. Some libraries are very small,
and caching them probably will not be useful (too much work when
building the VM image). However, for big projects (nova, neutron,
cinder...), we'll have to do something about Git repository caching.
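
For the big repositories, something along these lines is what I have in
mind (a sketch; the cache path and repository list are illustrative):

```
# Sketch: bake Git mirrors into the image at build time, then refresh
# them at job start so only new refs cross the network.
import os
import subprocess

CACHE = '/opt/git-cache'  # assumed cache location inside the image
BIG_REPOS = ['openstack/nova', 'openstack/neutron', 'openstack/cinder']

def prime(repo):
    # run while building the VM image
    dest = os.path.join(CACHE, repo)
    subprocess.check_call(['git', 'clone', '--mirror',
                           'https://git.openstack.org/%s' % repo, dest])

def refresh(repo):
    # run at job start: cheap, only fetches what changed since the image
    dest = os.path.join(CACHE, repo)
    subprocess.check_call(['git', '-C', dest, 'fetch', '--all', '--prune'])

for repo in BIG_REPOS:
    if os.path.isdir(os.path.join(CACHE, repo)):
        refresh(repo)
    else:
        prime(repo)
```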

Thanks for your input, Jeremy; this is very valuable.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-02 Thread Miguel Ángel Ajo
The backport seems reasonable IMO.

Is this tested in a multi-host environment?

I ask because, given Ian's explanation (which I may have gotten wrong),
the issue is in the NET-NIC-VM path, while the patch fixes the path on
the network node (it runs in the DHCP agent): DHCP-NIC-NET.


Best,
Miguel Ángel Ajo


On Tuesday, 2 June 2015 at 9:32, Ian Wells wrote:

 The fix should work fine.  It is technically a workaround for the way
 checksums work in virtualised systems, and for the unfortunate fact that
 some DHCP clients check checksums on packets even when the hardware has
 checksum offload enabled.  (The checksum is missing due to an
 optimisation in the way QEMU treats packet checksums.  You'll see the
 problem if your machine is running the VM on the same host as its DHCP
 server and the VM has a vulnerable client.)
  
 I haven't tried it myself but I have confidence in it and would recommend a 
 backport.
 --  
 Ian.
  
 On 1 June 2015 at 21:32, Kevin Benton blak...@gmail.com wrote:
  I would propose a back-port of it and then continue the discussion on the 
  patch. I don't see any major blockers for back-porting it.
   
  On Mon, Jun 1, 2015 at 7:01 PM, Tidwell, Ryan ryan.tidw...@hp.com wrote:
    Not seeing this on Kilo; we're seeing this on Juno builds (that's 
    expected).  I'm interested in a Juno backport, but mainly wanted to see 
    if others had confidence in the fix.  The discussion in the bug 
    report also seemed to indicate there were alternative solutions 
    others might be looking into that didn't involve an iptables rule.

   -Ryan

   -Original Message-
   From: Mark McClain [mailto:m...@mcclain.xyz]
   Sent: Monday, June 01, 2015 6:47 PM
   To: OpenStack Development Mailing List (not for usage questions)
   Subject: Re: [openstack-dev] [Neutron] virtual machine can not get DHCP 
   lease due packet has no checksum


On Jun 1, 2015, at 7:26 PM, Tidwell, Ryan ryan.tidw...@hp.com wrote:
   
I see a fix for https://bugs.launchpad.net/neutron/+bug/1244589 merged 
during Kilo.  I'm wondering if we think we have identified a root cause 
and have merged an appropriate long-term fix, or if 
https://review.openstack.org/148718 was merged just so there's at least 
a fix available while we investigate other alternatives.  Does anyone 
have an update to provide?
   
-Ryan

   The fix works in environments we’ve tested in.  Are you still seeing 
   problems?

   mark
   
   
   
  --  
  Kevin Benton  
   
  
  




Re: [openstack-dev] [Magnum] Should the Bay/Baymodel name be a required option when creating a Bay/Baymodel?

2015-06-02 Thread Jay Lau
Thanks Adrian. IMHO, making the name required would be more convenient for
end users, because UUIDs are difficult to use. Without a name, the end user
needs to retrieve the UUID of the bay/baymodel before performing any
operation on it, which is really time consuming. We can discuss
more in this week's IRC meeting. Thanks.


2015-06-02 14:08 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  -1. I disagree.

  I am not convinced that requiring names is a good idea. I've asked
 several times why there is a desire to require names, and I'm not seeing
 any persuasive arguments that are not already addressed by UUIDs. We have
 UUID values to allow for acting upon an individual resource. Names are
 there as a convenience. Requiring names, especially unique names, would
 make Magnum harder to use for API users driving Magnum from other systems.
 I want to keep the friction as low as possible.

 I'm fine with replacing None with an empty string.

  Consistency with Nova would be a valid argument if we were being more
 restrictive, but that's not the case. We are more permissive. You can use
 Magnum in the same way you use Nova if you want, by adding names to all
 resources. I don't see the wisdom in forcing that style of use without a
 technical reason for it.

 Thanks,

 Adrian

 On May 31, 2015, at 4:43 PM, Jay Lau jay.lau@gmail.com wrote:


 Just want to use the ML to trigger more discussion here. There are
 bugs/patches tracking this, but it seems more discussion is needed before we
 come to a conclusion.

 https://bugs.launchpad.net/magnum/+bug/1453732
 https://review.openstack.org/#/c/181839/
 https://review.openstack.org/#/c/181837/
 https://review.openstack.org/#/c/181847/
 https://review.openstack.org/#/c/181843/

  IMHO, making the Bay/Baymodel name mandatory will bring more flexibility
 to end users, as Magnum also supports operating on Bays/Baymodels via names, and
 the name might be more meaningful to end users.

 Perhaps we can borrow some ideas from nova; the concepts in magnum can be
 mapped to nova as follows:

 1) instance = bay
 2) flavor = baymodel

 So I think a solution might be as follows:
 1) Make name mandatory for both bay and baymodel.
 2) Update the magnum client to use the following style for bay-create and
 baymodel-create: do NOT add a --name option.

 root@devstack007:/tmp# nova boot
 usage: nova boot [--flavor flavor] [--image image]
  [--image-with key=value] [--boot-volume volume_id]
  [--snapshot snapshot_id] [--min-count number]
  [--max-count number] [--meta key=value]
  [--file dst-path=src-path] [--key-name key-name]
  [--user-data user-data]
  [--availability-zone availability-zone]
  [--security-groups security-groups]
  [--block-device-mapping dev-name=mapping]
  [--block-device key1=value1[,key2=value2...]]
  [--swap swap_size]
  [--ephemeral size=size[,format=format]]
  [--hint key=value]
  [--nic
 net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid]
  [--config-drive value] [--poll]
  name
 error: too few arguments
 Try 'nova help boot' for more information.
 root@devstack007:/tmp# nova flavor-create
 usage: nova flavor-create [--ephemeral ephemeral] [--swap swap]
   [--rxtx-factor factor] [--is-public
 is-public]
   name id ram disk vcpus
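
A sketch of what that client-side change could look like (hypothetical,
nova-style positional name; not the current magnum client):

```
# Hypothetical argparse sketch: name becomes a mandatory positional
# argument for bay-create, mirroring 'nova boot <name>'.
import argparse

parser = argparse.ArgumentParser(prog='magnum bay-create')
parser.add_argument('name', help='name of the bay (mandatory)')
parser.add_argument('--baymodel', required=True,
                    help='ID or name of the baymodel')

args = parser.parse_args(['mybay', '--baymodel', 'k8sbay'])
print('%s %s' % (args.name, args.baymodel))   # mybay k8sbay
```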
 Please share your comments, if any.

 --
   Thanks,

  Jay Lau (Guangya Liu)








-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-02 Thread Ian Wells
The fix should work fine.  It is technically a workaround for the way
checksums work in virtualised systems, and for the unfortunate fact that
some DHCP clients check checksums on packets even when the hardware has
checksum offload enabled.  (The checksum is missing due to an
optimisation in the way QEMU treats packet checksums.  You'll see the
problem if your machine is running the VM on the same host as its DHCP
server and the VM has a vulnerable client.)
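
For reference, the merged change boils down to installing an iptables
rule of roughly this shape in the DHCP namespace (a sketch, not the
literal Neutron code; the namespace name is illustrative):

```
# Sketch of the CHECKSUM rule behind https://review.openstack.org/148718:
# fill in the UDP checksum on DHCP replies (dport 68) leaving the DHCP
# namespace, since QEMU's offload optimisation otherwise leaves it unset.
import subprocess

def add_checksum_fill(namespace):
    subprocess.check_call(
        ['ip', 'netns', 'exec', namespace,
         'iptables', '-t', 'mangle', '-A', 'POSTROUTING',
         '-p', 'udp', '--dport', '68',
         '-j', 'CHECKSUM', '--checksum-fill'])

# add_checksum_fill('qdhcp-...')  # one rule per network's dhcp namespace
```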

I haven't tried it myself but I have confidence in it and would recommend a
backport.
-- 
Ian.

On 1 June 2015 at 21:32, Kevin Benton blak...@gmail.com wrote:

 I would propose a back-port of it and then continue the discussion on the
 patch. I don't see any major blockers for back-porting it.

 On Mon, Jun 1, 2015 at 7:01 PM, Tidwell, Ryan ryan.tidw...@hp.com wrote:

 Not seeing this on Kilo; we're seeing this on Juno builds (that's
 expected).  I'm interested in a Juno backport, but mainly wanted to see
 if others had confidence in the fix.  The discussion in the bug report also
 seemed to indicate there were alternative solutions others might be
 looking into that didn't involve an iptables rule.

 -Ryan

 -Original Message-
 From: Mark McClain [mailto:m...@mcclain.xyz]
 Sent: Monday, June 01, 2015 6:47 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron] virtual machine can not get DHCP
 lease due packet has no checksum


  On Jun 1, 2015, at 7:26 PM, Tidwell, Ryan ryan.tidw...@hp.com wrote:
 
  I see a fix for https://bugs.launchpad.net/neutron/+bug/1244589 merged
 during Kilo.  I'm wondering if we think we have identified a root cause and
 have merged an appropriate long-term fix, or if
 https://review.openstack.org/148718 was merged just so there's at least
 a fix available while we investigate other alternatives.  Does anyone have
 an update to provide?
 
  -Ryan

 The fix works in environments we’ve tested in.  Are you still seeing
 problems?

 mark




 --
 Kevin Benton





[openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-02 Thread Serg Melikyan
I would like to ask community for the help to implement support for
TOSCA in Murano:
https://blueprints.launchpad.net/murano/+spec/support-tosca-format

I was driving this feature, and during the OpenStack Summit in Paris we
spent a good amount of time discussing how we can implement support for
TOSCA with folks from IBM working on the TOSCA specification; we came up
with the following etherpad:
https://etherpad.openstack.org/p/tosca-in-murano

But unfortunately I was not able to spend enough time on this
blueprint to move it forward to the actual implementation.

If you are interested in having support for TOSCA in Murano and ready
to work on the implementation of this feature, I would be happy to
help you to drive this further.
-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



[openstack-dev] [Cinder] [Taskflow]Review help to cinder bp:Implement function to manage/unmanage snapshots

2015-06-02 Thread hao wang
Hi, folks,

There is a cinder bp, Implement function to manage/unmanage snapshots
(https://review.openstack.org/#/c/144590/), which uses taskflow to
implement this feature.

So I need your help (cinder & taskflow) to push this forward.

Thanks.



-- 

Best Wishes For You!


Re: [openstack-dev] [Cinder] [Taskflow]Review help to cinder bp:Implement function to manage/unmanage snapshots

2015-06-02 Thread Dulko, Michal
Right now we're working on refactoring the current TaskFlow implementations in 
Cinder to make them more readable and clean. Then we'll be able to decide if we 
want to get more TaskFlow into Cinder or step back from using it. The deadline 
for the refactoring work is around the 1st of July.

Here’s related patch for scheduler’s create_volume workflow: 
https://review.openstack.org/#/c/186439/

Currently I'm working on a patch for the API's create_volume, and John Griffith 
agreed to work on the manager's one (I don't know the current status). If you 
want to help with these efforts, reviews are always welcome. You may also take a 
shot at refactoring the manage_existing flow in the manager. It seems simple 
enough, but maybe there are some improvements we can make to it for readability.
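
For anyone new to these flows, a minimal illustration of the TaskFlow
pattern under discussion (toy tasks, not cinder's actual create_volume
flow):

```
# Toy linear flow: each step is a Task with execute() and optional
# revert(), and the engine rolls back completed steps on failure.
import taskflow.engines
from taskflow.patterns import linear_flow
from taskflow import task

class Reserve(task.Task):
    def execute(self):
        print('reserve quota')

class Create(task.Task):
    def execute(self):
        print('create volume')

    def revert(self, **kwargs):
        print('roll back creation')

flow = linear_flow.Flow('create_volume').add(Reserve(), Create())
taskflow.engines.run(flow)
```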

From: hao wang [mailto:sxmatch1...@gmail.com]
Sent: Tuesday, June 2, 2015 11:30 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder] [Taskflow]Review help to cinder bp:Implement 
function to manage/unmanage snapshots

Hi, folks,

There is a cinder bp, Implement function to manage/unmanage snapshots 
(https://review.openstack.org/#/c/144590/), which uses taskflow to 
implement this feature.

So I need your help (cinder & taskflow) to push this forward.

Thanks.



--

Best Wishes For You!


Re: [openstack-dev] [keystone]Why not common definition about normal HTTP status code like 2xx and 3xx?

2015-06-02 Thread Boris Bobrov
On Tuesday 02 June 2015 09:32:45 Chenhong Liu wrote:
 There is keystone/exception.py, which contains exceptions defined and used
 inside keystone that provide 4xx and 5xx status codes. And we can use them
 like:
 exception.Forbidden.code, exception.Forbidden.title
 exception.NotFound.code, exception.NotFound.title
 
 This makes the code look pretty and avoids errors. But I can't find
 definitions for the other status codes, like 200, 201, 204, 302, and so on.
 The code in keystone, especially the unit test cases, just writes these
 status codes and titles explicitly.
 
 How about adding those definitions?

These are standard HTTP codes:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

Descriptions are given in the exceptions because one error code can be used for 
several errors; success codes always have a single meaning.

-- 
Best regards,
Boris Bobrov



Re: [openstack-dev] [Murano] Nominating Kirill Zaitsev for murano-core

2015-06-02 Thread Ekaterina Chernova
+1

Regards,
Kate.

On Tue, Jun 2, 2015 at 9:32 AM, Serg Melikyan smelik...@mirantis.com
wrote:

 I'd like to propose Kirill Zaitsev as a core member of the Murano team.

 Kirill Zaitsev is an active member of our community; he implemented
 several blueprints in Kilo
 (https://launchpad.net/murano/+milestone/2015.1.0) and fixed a number of
 bugs, and he maintains a really good score as a contributor:
 http://stackalytics.com/report/users/kzaitsev

 Existing Murano cores, please vote +1/-1 for the addition of Kirill to the
 murano-core.
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com





Re: [openstack-dev] [nova] Progressing/tracking work on libvirt / vif drivers

2015-06-02 Thread Ian Wells
VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this to
a hopefully interested audience.

At the summit, we wrote up a spec we were thinking of doing at [1].  It
actually proposes two things, which is a little naughty really, but hey.

Firstly, we propose that we turn binding into a negotiation, so that Nova
can offer the binding options it supports to Neutron and Neutron can pick the
one it likes most.  This is necessary if you happen to use vhostuser with
qemu, as it doesn't work in some circumstances, and desirable all around,
since it means you no longer have to configure Neutron to choose a binding
type that Nova likes, and Neutron can choose different binding types
depending on circumstances.  As a bonus, it should make inter-version
compatibility work better.

Secondly, we suggest that some of the information that Nova and Neutron
currently calculate independently should instead be passed from Neutron to
Nova, simplifying the Nova code since it no longer has to take an educated
guess at things like TAP device names.  That one is more contentious, since
in theory Neutron could pass an evil value, but if we can find some pattern
that works (and 'pattern' might be literally true, in that you could get
Nova to confirm that the TAP name begins with a magic string and is not
going to be a physical device or other interface on the box) I think that
would simplify the code there.
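
As a sketch of that magic-string check (hedged: the prefix and pattern
here are assumptions about how TAP devices get named, not settled
design):

```
# Hypothetical validation on the Nova side: accept a Neutron-supplied
# device name only if it matches the expected TAP naming pattern, so an
# evil value cannot name a physical NIC or another interface on the box.
import re

TAP_NAME_RE = re.compile(r'^tap[0-9a-f-]{11}$')  # assumed pattern

def validate_tap_name(name):
    if not TAP_NAME_RE.match(name):
        raise ValueError('suspicious device name from Neutron: %r' % name)
    return name

validate_tap_name('tap5d8aa01c-53')   # ok
```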

Read, digest, see what you think.  I haven't put it forward yet (actually
I've lost track of which projects take specs at this point) but I would
very much like to get it implemented and it's not a drastic change (in
fact, it's a no-op until we change Neutron to respect what Nova passes).

[1] https://etherpad.openstack.org/p/YVR-nova-neutron-binding-spec

On 1 June 2015 at 10:37, Neil Jerram neil.jer...@metaswitch.com wrote:

 On 01/06/15 17:45, Neil Jerram wrote:

  Many thanks, John & Dan.  I'll start by drafting a summary of the work
 that I'm aware of in this area, at
 https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work.


 OK, my first draft of this is now there at [1].  Please could folk with
 VIF-related work pending check that I haven't missed or misrepresented
 them?  Especially, please could owners of the 'Infiniband SR-IOV' and
 'mlnx_direct removal' changes confirm that those are really ready for core
 review?  It would be bad to ask for core review that wasn't in fact wanted.

 Thanks,
 Neil


 [1] https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work





Re: [openstack-dev] [Murano] Nominating Filip Blaha for murano-core

2015-06-02 Thread Ekaterina Chernova
+1

Welcome!

On Tue, Jun 2, 2015 at 9:25 AM, Serg Melikyan smelik...@mirantis.com
wrote:

 Folks, I'd like to propose Filip Blaha as a core member of the Murano team.

 Filip is an active member of our community and he maintains a good score
 as a contributor:
 http://stackalytics.com/report/users/filip-blaha

 Existing Murano cores, please vote +1/-1 for the addition of Filip to
 the murano-core.
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com




[openstack-dev] [keystone]Why not common definition about normal HTTP status code like 2xx and 3xx?

2015-06-02 Thread Chenhong Liu
There is keystone/exception.py, which contains exceptions defined and used
inside keystone that provide 4xx and 5xx status codes. And we can use them
like:
exception.Forbidden.code, exception.Forbidden.title
exception.NotFound.code, exception.NotFound.title

This makes the code look pretty and avoids errors. But I can't find
definitions for the other status codes, like 200, 201, 204, 302, and so on.
The code in keystone, especially the unit test cases, just writes these
status codes and titles explicitly.

How about adding those definitions?
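
One hedged option (a suggestion, not existing keystone code) is to reuse
the stdlib's symbolic names via six rather than defining new constants:

```
# six.moves.http_client maps to httplib on Python 2 and http.client on
# Python 3; both already define symbolic names for the success codes.
from six.moves import http_client

assert http_client.OK == 200
assert http_client.CREATED == 201
assert http_client.NO_CONTENT == 204
assert http_client.FOUND == 302

# e.g. in a unit test:
# self.assertEqual(http_client.NO_CONTENT, resp.status_code)
```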


[openstack-dev] [neutron] [fwaas] -IPv6 support in Kilo

2015-06-02 Thread Rukhsana Ansari
Hi,

I was browsing the code to understand IPv6 support for FWaaS in Kilo.

I don't see a restriction in the db code or in the reference fwaas_plugin.py.

However, from this:
https://github.com/openstack/neutron-fwaas/blob/stable/kilo/neutron_fwaas/services/firewall/drivers/vyatta/vyatta_fwaas.py#L126

I gather that at least Vyatta does not have IPv6 firewall support.

I would greatly appreciate it if someone could explain the reasons for this
restriction.

Thanks
-Rukhsana


Re: [openstack-dev] [neutron] [fwaas] -IPv6 support in Kilo

2015-06-02 Thread Mike Spreitzer
 From: Rukhsana Ansari rukhsana.ans...@oneconvergence.com
 To: openstack-dev@lists.openstack.org
 Date: 06/02/2015 01:59 PM
 Subject: [openstack-dev] [neutron] [fwaas] -IPv6 support in Kilo
 
 Hi,
 
 I was browsing the code to understand IPv6 support for FWaaS in Kilo.
 
 I don't see a restriction in the db code or in the reference fwaas_plugin.py.
 
 However, from this:
 https://github.com/openstack/neutron-fwaas/blob/stable/kilo/neutron_fwaas/services/firewall/drivers/vyatta/vyatta_fwaas.py#L126
 
 I gather that at least Vyatta does not have IPv6 firewall support.
 
 Would greatly appreciate it if someone could explain the reasons
 for this restriction.
 
 Thanks
 -Rukhsana

Indeed, this is a surprise to me.
http://www.brocade.com/downloads/documents/html_product_manuals/vyatta/vyatta_5400_manual/wwhelp/wwhimpl/js/html/wwhelp.htm#href=Firewall/Firewall_Overview.02.11.html#1749726
indicates that the Vyatta 5400, at least, definitely has firewall
functionality.


Re: [openstack-dev] [Fuel] Call for 7.0 feature design and review

2015-06-02 Thread Sean M. Collins
Can we update the vxlan bp to target it to 7.0? The series goal is still
set to 6.1.x

https://blueprints.launchpad.net/fuel/+spec/neutron-vxlan-support

Thanks

-- 
Sean M. Collins



Re: [openstack-dev] [Fuel] Call for 7.0 feature design and review

2015-06-02 Thread Andrew Woodward
Updated.

Sean, I see that there is no spec linked in Gerrit or the BP; do we have one?


On Tue, Jun 2, 2015 at 11:07 AM Sean M. Collins s...@coreitpro.com wrote:

 Can we update the vxlan bp to target it to 7.0? The series goal is still
 set to 6.1.x

 https://blueprints.launchpad.net/fuel/+spec/neutron-vxlan-support

 Thanks

 --
 Sean M. Collins


-- 
--
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community


[openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-02 Thread Adam Young
Since this is a cross-project concern, I'm sending it out to the wider mailing 
list:


We have a sub-effort in Keystone to do better access control policy (not 
the Neutron or Congress based policy efforts).


I presented on this at the summit, and the effort is in full swing.  
We are going to set up a subteam meeting for this, but would like to get 
some input from outside the Keystone developers working on it.  In 
particular, we'd like input from the Nova team that was thinking about 
hard-coding policy decisions in Python, and ask you, instead, to work 
with us to come up with a solution that works for all the services.


If you are interested in being part of this effort, there is a Trello 
board set up here:


https://trello.com/b/260v4Gs7/dynamic-policy

It should be world readable.  I will give you write access if you are 
interested in contributing.  In addition, let me know what your 
constraints are for setting up a weekly meeting and I will try to 
accommodate them.  Right now, the people involved are primarily on the 
East Coast of the Western Hemisphere and in Europe, and the meeting time 
will likely be driven by that.




[openstack-dev] [Tacker] Tacker (NFV MANO VNFM) team meeting June 4th

2015-06-02 Thread Stephen Wong
Please note the change in time (and day of the week) and channel:

Meeting on #openstack-meeting at 1600 UTC (9:00am PDT)

Agenda can be found here (feel free to add yours):
https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_June_4.2C_2015

Thanks,
- Stephen


[openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-02 Thread Vahid S Hashemian
Hi Serg,

As I mentioned in my earlier email to you, I am interested in participating 
in this effort.
I am a Heat-Translator contributor and have started looking at how the 
integration may work (at a higher level for now).
I'll send my thoughts on that shortly.

Thanks.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs



[openstack-dev] [puppet] Change abandonment policy

2015-06-02 Thread Colleen Murphy
In today's meeting we discussed implementing a policy for whether and when
core reviewers should abandon old patches whose authors are inactive.
(This doesn't apply to authors who want to abandon their own changes, only
to core reviewers abandoning other people's changes.) There are a few
things we could do here, with potential policy drafts for the wiki:

1) Never abandon

```
Our policy is to never abandon changes except for our own.
```

The sentiment here is that an old change in the queue isn't really hurting
anything by just sitting there, and it is more visible if someone else
wants to pick up the change.

2) Manually abandon after N months/weeks changes that have a -2 or were
fixed in a different patch

```
If a change is submitted and given a -1, and subsequently the author
becomes unresponsive for a few weeks, reviewers should leave reminder
comments on the review or attempt to contact the original author via IRC or
email. If the change is easy to fix, anyone should feel welcome to check
out the change and resubmit it using the same change ID to preserve
original authorship. Core reviewers will not abandon such a change.

If a change is submitted and given a -2, or it otherwise becomes clear that
the change can not make it in (for example, if an alternate change was
chosen to solve the problem), and the author has been unresponsive for at
least 3 months, a core reviewer should abandon the change.
```

Core reviewers can click the abandon button only on old patches that are
definitely never going to make it in. This approach has the advantage that
it is easier for contributors to find changes and fix them up, even if the
change is very old.

3) Manually abandon after N months/weeks changes that have a -1 that was
never responded to

```
If a change is submitted and given a -1, and subsequently the author
becomes unresponsive for a few weeks, reviewers should leave reminder
comments on the review or attempt to contact the original author via IRC or
email. If the change is easy to fix, anyone should feel welcome to check
out the change and resubmit it using the same change ID to preserve
original authorship. If the author is unresponsive for at least 3 months
and no one else takes over the patch, core reviewers can abandon the patch,
leaving a detailed note about how the change can be restored.

If a change is submitted and given a -2, or it otherwise becomes clear that
the change can not make it in (for example, if an alternate change was
chosen to solve the problem), and the author has been unresponsive for at
least 3 months, a core reviewer should abandon the change.
```

Core reviewers can click the abandon button on changes that no one has
shown an interest in in N months/weeks, leaving a message about how to
restore the change if the author wants to come back to it. Puppet Labs does
this for its module pull requests, setting N at 1 month.

4) Auto-abandon after N months/weeks if patch has a -1 or -2

```
If a change is given a -2 and the author has been unresponsive for at least
3 months, a script will automatically abandon the change, leaving a message
about how the author can restore the change and attempt to resolve the -2
with the reviewer who left it.
```

We would use a tool like this one[1] to automatically abandon changes
meeting a certain criteria. We would have to decide whether we want to only
auto-abandon changes with -2's or go as far as to auto-abandon those with
-1's. The policy proposal above assumes -2. The tool would leave a canned
message about how to restore the change.
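
For the record, a sketch of what such a tool does (the query, host, and
message are illustrative; see [1] for the real thing):

```
# Sketch: find open changes with a -2 untouched for ~3 months and
# abandon them through the Gerrit SSH CLI.
import json
import subprocess

GERRIT = ['ssh', '-p', '29418', 'review.openstack.org', 'gerrit']
QUERY = 'project:^openstack/puppet-.* status:open age:90d label:Code-Review<=-2'

out = subprocess.check_output(
    GERRIT + ['query', '--format=JSON', '--current-patch-set', QUERY])
for line in out.splitlines():
    change = json.loads(line)
    if 'id' not in change:   # skip the trailing stats row
        continue
    rev = change['currentPatchSet']['revision']
    # inner quotes needed: ssh re-joins the args with spaces remotely
    subprocess.check_call(
        GERRIT + ['review', '--abandon', '--message',
                  '"Abandoning after 3 months of inactivity; '
                  'restore and update if still needed."', rev])
```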


Option 1 has the problem of leaving clutter around, which the discussion
today seeks to solve.

Option 3 leaves the possibility that a change that is mostly good becomes
abandoned, making it harder for someone to find and restore it.

 I don't think option 4 is necessary because there are not an overwhelming
number of old changes (I count 9 that are currently over six months old).
In working through old changes a few months ago I found that many of them
are easy to fix up to remove a -1, and auto-abandoning removes the ability
for a human to make that call. Moreover, if a patch has a procedural -2
that ought to be lifted after some point, auto-abandonment has the
potential to accidentally throw out a change that was intended to be kept
(though presumably the core reviewer who left the -2 would notice the
abandonment and restore it if that was the case).

I am in favor of option 2. I think setting N to be 3 months or 6 months is
appropriate. I don't have very strong feelings about options 1 or 3. I'm
against option 4.

Colleen

[1]
https://github.com/openstack/nova/blob/master/tools/abandon_old_reviews.sh


[openstack-dev] Release naming for M open for nominations

2015-06-02 Thread Monty Taylor
Hey everyone!

It's time to pick a name for the M release.

If you have a name you'd like us to vote on, please add it here:

https://wiki.openstack.org/wiki/Release_Naming/M_Proposals

The nominations will be open until 2015-06-07 23:59:59 UTC.

If you don't remember the rules, they're here:

http://governance.openstack.org/reference/release-naming.html

But I'll paste in the text here:

The following rules are designed to provide some consistency in the
pattern used to select release names, provide a fun challenge in finding
names that meet the criteria, and prevent unwieldy names from being chosen.

  1. Each release name must start with the letter of the ISO basic Latin
alphabet following the initial letter of the previous release, starting
with the initial release of “Austin”. After “Z”, the next name should
start with “A” again.

  2. The name must be composed only of the 26 characters of the ISO
basic Latin alphabet. Names which can be transliterated into this
character set are also acceptable.

  3. The name must refer to the physical or human geography of the
region encompassing the location of the OpenStack design summit for the
corresponding release.

  4. The name must be a single word with a maximum of 10 characters.
Words that describe the feature should not be included, so “Foo City” or
“Foo Peak” would both be eligible as “Foo”.

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may
make an exception for one or more of them to be considered in the
Condorcet poll. The naming official is responsible for presenting the
list of exceptional names for consideration to the TC before the poll opens.

Monty



Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

2015-06-02 Thread Wanjing Xu
Thanks everybody.  While I am trying to digest all the responses, let me 
reply to why we are not considering a driver.  We already have an application 
which listens to neutron events to do some other stuff; it might just be easier 
for us to reuse this framework and program the LBaaS from there.  If we used a 
driver, there is the added effort of asking the user to install our driver, 
modify the conf file, start the agent, and restart neutron.  We might still go 
back to a driver/agent later because it seems to scale better.

Thanks Doug, Kevin, Brandon and Kunal!  You guys are so helpful.  I will have 
more questions later.

Wanjing
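
For reference, a sketch of the notification-listener approach described
here (hedged: the exchange/topic names are assumptions; they depend on
neutron's notification settings):

```
# Listen for neutron's pool/member/vip notifications and program the
# LBaaS box from them, instead of running an LBaaS agent.
import oslo_messaging as messaging
from oslo_config import cfg

class LBaaSHandler(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        if event_type.startswith(('pool.', 'member.', 'vip.')):
            print('program LBaaS box for %s: %s' % (event_type, payload))

transport = messaging.get_transport(cfg.CONF)  # uses the usual oslo config
targets = [messaging.Target(exchange='neutron', topic='notifications')]
server = messaging.get_notification_listener(transport, targets,
                                             [LBaaSHandler()])
server.start()
server.wait()
```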

From: doug...@parksidesoftware.com
Date: Mon, 1 Jun 2015 18:04:02 -0600
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

The reference implementation of lbaas (using haproxy with namespaces) requires 
the agent. There are vendor implementations that are agent-less, but not the 
reference.

There is a non-agent reference driver for lbaas v2, but there is no horizon 
support for v2, and that driver is unsupported beyond dev use.

If I may ask, why do you not want to run the agent?

Thanks,
doug

On Jun 1, 2015, at 5:52 PM, Wanjing Xu wanjing...@hotmail.com wrote:


Is there a way to add an LBaaS service without having to use the neutron 
plugin/agent framework?

I want to add an LBaaS service without an LBaaS agent running and still have 
the lb CLI and horizon.  When the user configures load balancing via the CLI or 
horizon, neutron will send the events (pool, member, vip create/delete events) 
on the notification info queue, and our application will listen to the queue 
and program the LBaaS box.  So far, I have tried to enable the built-in HAProxy 
LBaaS (set the service_plugin to LoadBalancerPlugin and the service provider 
to haproxy).  By doing that, horizon and the CLI are all enabled and our 
application can successfully program the LBaaS box using the notification 
events.  The problem with that is that there is an haproxy agent running in the 
background although we are not using its function.  But if I don't enable the 
agent, we cannot use horizon.  Currently we don't want to write an LBaaS agent 
of our own.  Is there a way to not use an LBaaS agent and still be able to use 
horizon/the CLI to configure load balancing?  During the OpenStack summit in 
Vancouver, I saw the PayPal load-balancing presentation; they use two 
providers, one an agent, the other an agentless controller.  Not sure how that 
controller works; I could not find it through googling.

Regards,
Wanjing Xu

  


Re: [openstack-dev] [puppet] Change abandonment policy

2015-06-02 Thread Andrew Woodward
Also in favor of #2; I thought that was how we were already operating. #4
sounds bad and may hide good code.

How do we want to account for drive-by authors who are going to be unable
to work on future revisions? We talked a while back about wanting to
account for this, as some operators are unable to do, or lack time for,
proper CR cycles to get patches landed.

On Tue, Jun 2, 2015 at 11:39 AM Colleen Murphy coll...@gazlene.net wrote:

 In today's meeting we discussed implementing a policy for whether and when
 core reviewers should abandon old patches whose authors are inactive.
 (This doesn't apply to authors who want to abandon their own changes, only
 to core reviewers abandoning other people's changes.) There are a few
 things we could do here, with potential policy drafts for the wiki:

 1) Never abandon

 ```
 Our policy is to never abandon changes except for our own.
 ```

 The sentiment here is that an old change in the queue isn't really hurting
 anything by just sitting there, and it is more visible if someone else
 wants to pick up the change.

 2) Manually abandon after N months/weeks changes that have a -2 or were
 fixed in a different patch

 ```
 If a change is submitted and given a -1, and subsequently the author
 becomes unresponsive for a few weeks, reviewers should leave reminder
 comments on the review or attempt to contact the original author via IRC or
 email. If the change is easy to fix, anyone should feel welcome to check
 out the change and resubmit it using the same change ID to preserve
 original authorship. Core reviewers will not abandon such a change.

 If a change is submitted and given a -2, or it otherwise becomes clear
 that the change can not make it in (for example, if an alternate change was
 chosen to solve the problem), and the author has been unresponsive for at
 least 3 months, a core reviewer should abandon the change.
 ```

 Core reviewers can click the abandon button only on old patches that are
 definitely never going to make it in. This approach has the advantage that
 it is easier for contributors to find changes and fix them up, even if the
 change is very old.

 3) Manually abandon after N months/weeks changes that have a -1 that was
 never responded to

 ```
 If a change is submitted and given a -1, and subsequently the author
 becomes unresponsive for a few weeks, reviewers should leave reminder
 comments on the review or attempt to contact the original author via IRC or
 email. If the change is easy to fix, anyone should feel welcome to check
 out the change and resubmit it using the same change ID to preserve
 original authorship. If the author is unresponsive for at least 3 months
 and no one else takes over the patch, core reviewers can abandon the patch,
 leaving a detailed note about how the change can be restored.

 If a change is submitted and given a -2, or it otherwise becomes clear
 that the change can not make it in (for example, if an alternate change was
 chosen to solve the problem), and the author has been unresponsive for at
 least 3 months, a core reviewer should abandon the change.
 ```

 Core reviewers can click the abandon button on changes that no one has
 shown an interest in in N months/weeks, leaving a message about how to
 restore the change if the author wants to come back to it. Puppet Labs does
 this for its module pull requests, setting N at 1 month.

 4) Auto-abandon after N months/weeks if patch has a -1 or -2

 ```
 If a change is given a -2 and the author has been unresponsive for at
 least 3 months, a script will automatically abandon the change, leaving a
 message about how the author can restore the change and attempt to resolve
 the -2 with the reviewer who left it.
 ```

 We would use a tool like this one[1] to automatically abandon changes
 meeting a certain criteria. We would have to decide whether we want to only
 auto-abandon changes with -2's or go as far as to auto-abandon those with
 -1's. The policy proposal above assumes -2. The tool would leave a canned
 message about how to restore the change.


 Option 1 has the problem of leaving clutter around, which the discussion
 today seeks to solve.

 Option 3 leaves the possibility that a change that is mostly good becomes
 abandoned, making it harder for someone to find and restore it.

  I don't think option 4 is necessary because there are not an overwhelming
 number of old changes (I count 9 that are currently over six months old).
 In working through old changes a few months ago I found that many of them
 are easy to fix up to remove a -1, and auto-abandoning removes the ability
 for a human to make that call. Moreover, if a patch has a procedural -2
 that ought to be lifted after some point, auto-abandonment has the
 potential to accidentally throw out a change that was intended to be kept
 (though presumably the core reviewer who left the -2 would notice the
 abandonment and restore it if that was the case).

 I am in favor of option 2. I think setting N 

Re: [openstack-dev] [Fuel] vxlan support

2015-06-02 Thread Andrew Woodward
Samuel,

VXLAN was moved to 7.0; as you noted, it won't make 6.1. Mirantis has
identified this as a high priority for 7.0, so it should get more attention
this time. However, any assistance with CR/testing is always appreciated.

On Thu, May 28, 2015 at 1:38 PM Samuel Bartel samuel.bartel@gmail.com
wrote:

 Hi Sean,

 I understand and share your point of view. The best and cleanest solution
 would be to have the vxlan support out of the box. Unfortunately it is not
 the case for 6.0, and I doubt this feature can be available before HCF for
 6.1 (planned in the upcoming days).
 That's why in my initial message I asked whether help is needed to ship this
 feature in 7.0.

 Having a vxlan plugin is a workaround, an ugly one, but the only
 workaround which does the job in 6.0 and 6.1.
 My current vxlan plugin modifies the ml2 neutron plugin configuration in order
 to switch the segmentation type to vxlan and restarts neutron services or
 neutron crm resources. The only issue I have at the moment is recreating the
 net04 network and corresponding subnet, as you can't redefine an already
 defined resource in puppet.


The plugin tasks run as separate puppet jobs so they are not bound by this
restriction. Going forward with granular tasks, this will only be a problem
within the specific task.


 About choosing GRE and getting vxlan: it will be the same with the
 contrail, nuage, cinder netapp, nova nfs and glance nfs plugins, for example.
 In the create-env form you choose a configuration, but in the settings tab you
 can activate a particular plugin to override the initial network or storage
 configuration. In every case, it will be done on purpose.

 --
 Regards,
 Samuel Bartel,
 IRC #samuelbartel


 2015-05-28 21:42 GMT+02:00 Sean M. Collins s...@coreitpro.com:

 On May 28, 2015 2:51:56 PM EDT, Andrey Danin ada...@mirantis.com wrote:

 Hi, Sean,

  A plugin cannot modify the Fuel UI, but it actually can change the
  segmentation type after deployment. In the UI it's still GRE, but in fact it
  will be VxLAN. I know it's ugly, but it should work.

 On Thu, May 28, 2015 at 7:47 PM, Sean M. Collins s...@coreitpro.com
 wrote:

 VxLAN support cannot be made as a plugin because plugins cannot modify
 the initial networking wizard (based on conversations I've had in
 #fuel-dev) where the choices between Neutron VLAN, Neutron GRE, and
 nova-network are shown to the user.

 I am currently working on this blueprint and have a WIP patch for
 fuel-web. Please contact me if you want to help contribute to the work.

 --
 Sean M. Collins





 I don't think that's a good way to go about it; we'd be giving someone a
 surprise if they actually wanted to deploy GRE, only to discover it deployed
 VXLAN.
 --
 Sent from my Android device with K-9 Mail. Please excuse my brevity.






[openstack-dev] [Neutron][Nova] DHCP sent by veth interface connected to Linux Bridge

2015-06-02 Thread Padmanabhan Krishnan
Hello,
I am seeing some weird behavior in my setup and would really appreciate it if 
someone could clarify a few things below.

Setup: simple two-node. One node with control+compute (Node1) and another as a 
compute (Node2). Since Node2 is running low on HW, I created an availability 
zone containing only Node2 so that I could also launch a VM on Node2. I use OVS 
as the mech_driver, and the type_driver is 'local', for test purposes.

When a VM is launched in Node2:

A DHCP Discover is sent out for the veth connected to the Linux Bridge side. 
Eventually, both the VM and the veth connected to the bridge get IP addresses 
(one a .2 and the other a .3). Of course, the external DHCP server that I use 
shouldn't have given out an IP address for the veth, which should be corrected. 
But why was a DHCP Discover sent out on behalf of the veth in the first place? 
I didn't see this behavior for VMs launched on Node1, or even in other setups 
(I don't use availability zones in other setups).

So, I assume Nova (or is it libvirt underneath?) creates the veth and attaches 
one side to br-int and the other side to the Linux bridge. Is it a veth or 
Linux bridge property that triggers this DHCP behavior?


Srvr23:~$ ifconfig qvb5d8aa01c-53
qvb5d8aa01c-53 Link encap:Ethernet  HWaddr 62:ab:98:e4:8a:7e  
  inet addr:145.189.82.2  Bcast:145.189.82.255  Mask:255.255.255.0
  inet6 addr: fe80::60ab:98ff:fee4:8a7e/64 Scope:Link
  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
….

Srvr23:~$ brctl show
bridge name bridge id   STP enabled interfaces
qbr5d8aa01c-53  8000.62ab98e48a7e   no  qvb5d8aa01c-53
    tap5d8aa01c-53

Srvr23:~$ sudo ovs-vsctl show
5dfbb68a-7d32-4efb-b9df-6e04d5c1b402
    Bridge br-int
    fail_mode: secure
    Port int-br-ethd
    Interface int-br-ethd
    type: patch
    options: {peer=phy-br-ethd}
    Port br-int
    Interface br-int
    type: internal
    Port qvo5d8aa01c-53
    tag: 10
    Interface qvo5d8aa01c-53

—

Now my compute server's (Node2) default gateway is messed up and connectivity 
is lost: the default GW points to whatever is sent out by my external DHCP 
server. BTW, I haven't created any external networks or Neutron routers.


Before VM was spawned:

Srvr23:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.28.12.1     0.0.0.0         UG    0      0        0 eth2   <- Original Default GW
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth2
172.28.12.0     0.0.0.0         255.255.255.0   U     0      0        0 eth2

After VM was spawned:

Srvr23:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         145.189.82.1    0.0.0.0         UG    0      0        0 qvb5d8aa01c-53   <- Default GW modified
145.189.82.0    0.0.0.0         255.255.255.0   U     1      0        0 qvb5d8aa01c-53
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth2
172.28.12.0     0.0.0.0         255.255.255.0   U     0      0        0 eth2


This is stable/Juno.
Thanks,
Paddu


[openstack-dev] [qa] SafeConfigParser.write duplicates defaults: bug or feature?

2015-06-02 Thread David Kranz
The verify_tempest_config script has an option to write a new conf file. 
I noticed that when you do this, the items in DEFAULT are duplicated in 
every section that is written. Looking at the source, I can see why this 
happens. I guess it is not harmful, but is this considered a bug in the 
write method?
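
For illustration, a minimal round-trip that reproduces the effect (a toy
sketch, not the script itself): items() folds the DEFAULT values into
every section, so a parser rebuilt from items() writes them out
everywhere.

```
# Demonstrates DEFAULT values bleeding into a rebuilt section.
from six import StringIO
from six.moves import configparser

src = configparser.SafeConfigParser()
src.readfp(StringIO(u"[DEFAULT]\ndebug = true\n\n[compute]\nflavor = m1.tiny\n"))

dst = configparser.SafeConfigParser()
dst.add_section('compute')
for key, value in src.items('compute'):   # includes debug from DEFAULT
    dst.set('compute', key, value)

out = StringIO()
dst.write(out)
print(out.getvalue())   # [compute] now lists debug = true as its own item
```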


 -David



Re: [openstack-dev] [Murano] Nominating Kirill Zaitsev for murano-core

2015-06-02 Thread Kirill Zaitsev
Thanks all!

Someone should now say «With great power comes great responsibility» 

And I should respond with something like «Night gathers, and now my watch 
begins», shouldn't I? 


I kind of like this kind of symbolism =) Isn’t there a Core Developer Oath or 
Vow? Shouldn’t we think of one?

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 2 Jun 2015 at 20:27:46, McLellan, Steven (steve.mclel...@hp.com) wrote:

+1  

From: Stan Lagun sla...@mirantis.com  
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org  
Date: Tuesday, June 2, 2015 at 9:32 AM  
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org  
Subject: Re: [openstack-dev] [Murano] Nominating Kirill Zaitsev for murano-core
 

+1 without any doubt  

Sincerely yours,  
Stan Lagun  
Principal Software Engineer @ Mirantis  

sla...@mirantis.com  

On Tue, Jun 2, 2015 at 10:43 AM, Ekaterina Chernova efedor...@mirantis.com 
wrote:  
+1  

Regards,  
Kate.  

On Tue, Jun 2, 2015 at 9:32 AM, Serg Melikyan smelik...@mirantis.com wrote:  
I'd like to propose Kirill Zaitsev as a core member of the Murano team.  

Kirill Zaitsev is an active member of our community; he implemented several 
blueprints in Kilo (https://launchpad.net/murano/+milestone/2015.1.0) and 
fixed a number of bugs, and he maintains a really good score as a contributor:  
http://stackalytics.com/report/users/kzaitsev  

Existing Murano cores, please vote +1/-1 for the addition of Kirill to the 
murano-core.  
--  
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.  
http://mirantis.com | smelik...@mirantis.com  



[openstack-dev] New meeting time for nova-scheduler - vote!

2015-06-02 Thread Ed Leafe

Hi,

The current meeting time of 1500 UTC on Tuesday is presenting some
conflicts, so we're trying to come up with an alternative. I've
created a Doodle for anyone who is interested in attending these
meetings to indicate what times work and which don't.

There is no way to say Tuesdays on Doodle, so I created it for next
Tuesday and Wednesday, but please reply with your general availability
for that time every week. The link is:

http://doodle.com/akuv4b4ftv68q3me

Once we have a few acceptable times, we'll see which IRC rooms are
available and update the calendar.

- -- 

- -- Ed Leafe
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: GPGTools - https://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCgAGBQJVbhuBAAoJEKMgtcocwZqLS3cP/iCP8d6piLUBcMNUCSZFehcD
3tUvZcqKdrYHjyikkjfqY8pViALXJH8buE8Wjshxj0guRbxn8gSbdmhw4sjLstdu
FuwY+k0arSf9OljfWuqAosbN4se2lYNP4eXOJ8/xQj96nxVrm/nCa4o7lpCt33yL
0xsQp+rN2LMTw6n3UfbOLgypRNZ9k6qfeHhTV+MP65sK3n73oDU5EPRZ7hhkD7bM
4tRn1N/vWN7zxhJ56LFAiqz98hB59G2GDb+KB8LSBNncyZEhOmom7KtbMD8I5YUh
exEuKp+re5DY/hmijd8p/UUY5s6jFc+JQjHMbKirTuE6ZcYs6U6xuHpHhdEqDa7i
tp/yREmSLm7xq6LdYOeGSJZjWnT8p+0sURplHsBSlU/DzHPkbSR0mx4Ri33FOLlA
0Fn2k+PcVZcu80xLkAUWEF1GCXDDmSZx1EgfXTph9QmI/ZS9uv12o1GYZ/GNrX+7
kSB+nJuUapI/LCDjmWkqIPxH7dUsk/rlSG9ZvxUBM+3nRFNj6jo4fOny4UISMSon
pWicgIIz5FYDMptRfdmHGY5+N23EF3eqO3IM1znsFbK7jIym7fV3cs56FPi1b5/Q
IWxStDCHMwh7klxmQ9lGy0iWOwZnUNHQGhH2MVb8PDmBRj3tO6uRadfoO/tuVc0f
tFswjsM3XtBpu0PldBFm
=HF1/
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-02 Thread Kevin Benton
Sorry about the long delay.

Even the LOG.error("KEVIN PID=%s network response: %s" % (os.getpid(),
r.text)) line?  Surely the server would have forked before that line was
executed - so what could prevent it from executing once in each forked
process, and hence generating multiple logs?

Yes, just once. I wasn't able to reproduce the behavior you ran into. Maybe
eventlet has some protection for this? Can you provide small sample code
for the logging driver that does reproduce the issue?
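For the archive, the instrumentation I added was roughly of this shape (a
from-memory sketch rather than the exact paste, and the module paths assume
the Kilo tree):

    import os

    import eventlet
    from oslo_log import log
    from neutron.plugins.ml2 import driver_api as api

    LOG = log.getLogger(__name__)

    class PidLoggingMechanismDriver(api.MechanismDriver):
        def initialize(self):
            LOG.error("init in PID=%s", os.getpid())
            eventlet.spawn(self._loop)

        def _loop(self):
            # If forking duplicated this green thread we'd see ticks from
            # several PIDs; I only ever saw the parent's.
            while True:
                LOG.error("loop tick in PID=%s", os.getpid())
                eventlet.sleep(10)

        def create_port_postcommit(self, context):
            # API requests, by contrast, do show up in the child PIDs.
            LOG.error("create_port in PID=%s", os.getpid())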

On Wed, May 13, 2015 at 5:19 AM, Neil Jerram neil.jer...@metaswitch.com
wrote:

 Hi Kevin,

 Thanks for your response...

 On 08/05/15 08:43, Kevin Benton wrote:

 I'm not sure I understand the behavior you are seeing. When your
 mechanism driver gets initialized and kicks off processing, all of that
 should be happening in the parent PID. I don't know why your child
 processes start executing code that wasn't invoked. Can you provide a
 pointer to the code or give a sample that reproduces the issue?


 https://github.com/Metaswitch/calico/tree/master/calico/openstack

 Basically, our driver's initialize method immediately kicks off a green
 thread to audit what is now in the Neutron DB, and to ensure that the other
 Calico components are consistent with that.

  I modified the linuxbridge mech driver to try to reproduce it:
 http://paste.openstack.org/show/216859/

 In the output, I never received any of the init code output I added more
 than once, including the function spawned using eventlet.


 Interesting.  Even the LOG.error("KEVIN PID=%s network response: %s" %
 (os.getpid(), r.text)) line?  Surely the server would have forked before
 that line was executed - so what could prevent it from executing once in
 each forked process, and hence generating multiple logs?

 Thanks,
 Neil

  The only time I ever saw anything executed by a child process was actual
 API requests (e.g. the create_port method).




  On Thu, May 7, 2015 at 6:08 AM, Neil Jerram neil.jer...@metaswitch.com wrote:

 Is there a design for how ML2 mechanism drivers are supposed to cope
 with the Neutron server forking?

 What I'm currently seeing, with api_workers = 2, is:

 - my mechanism driver gets instantiated and initialized, and
 immediately kicks off some processing that involves communicating
 over the network

 - the Neutron server process then forks into multiple copies

 - multiple copies of my driver's network processing then continue,
 and interfere badly with each other :-)

 I think what I should do is:

 - wait until any forking has happened

 - then decide (somehow) which mechanism driver is going to kick off
 that processing, and do that.

 But how can a mechanism driver know when the Neutron server forking
 has happened?

 Thanks,
  Neil


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] high priority patches, please review

2015-06-02 Thread Richard Jones
Hi Horizon devs,

The following test patches are a high priority, blocking further new work
in Liberty until they're landed. Please consider helping review them to get
them landed ASAP:

https://review.openstack.org/#/c/170554/
https://review.openstack.org/#/c/167738/
https://review.openstack.org/#/c/172057/
https://review.openstack.org/#/c/178227/
https://review.openstack.org/#/c/176532/
https://review.openstack.org/#/c/178434/
https://review.openstack.org/#/c/167326/


Cheers,

 Richard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

2015-06-02 Thread Wanjing Xu
Doug
Our current event consumer is listening to the queue with the topic specified 
in neutron.conf as notification_topics = x.  Neutron will generate all 
create/update/delete events (from the API) to this queue, including 
vip/member/pool events, so we don't need to write a driver to generate the 
events; the Neutron base API has taken care of it.
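For context, our consumer is essentially doing this (a simplified sketch;
the topic and the program_lbaas_box() helper are placeholders for our code):

    from oslo_config import cfg
    import oslo_messaging

    def program_lbaas_box(event_type, payload):
        # Stand-in for the code that actually programs our LBaaS box.
        print(event_type, payload)

    class Handler(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Neutron emits e.g. pool.create.end, member.create.end,
            # vip.update.end on the configured notification topic.
            if event_type.split('.')[0] in ('pool', 'member', 'vip'):
                program_lbaas_box(event_type, payload)

    transport = oslo_messaging.get_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='x')]  # notification_topics = x
    listener = oslo_messaging.get_notification_listener(transport, targets,
                                                        [Handler()])
    listener.start()
    listener.wait()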
Regards!
Wanjing

From: doug...@parksidesoftware.com
Date: Tue, 2 Jun 2015 16:57:12 -0600
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

Hi,
If you have an existing event consumer that you want to stick with, you could 
write a driver that just generates events. There are some error status warts 
that you’d either have to live with or handle, but you could do that later.
Thanks,doug
On Jun 2, 2015, at 1:05 PM, Wanjing Xu wanjing...@hotmail.com wrote:


Thanks everybody.  While I am trying to digest all the responses, I am here to 
reply why we are not considering a driver.  We already have an application 
which listens to neutron events to do some other stuff, so it might just be 
easier for us if we reuse this framework and program the LBaaS from there.  If 
we use a driver, there is the added effort of asking the user to install our 
driver, modify the conf file, start the agent and restart neutron.  We might 
still go back to driver/agent later because it seems that it helps scale 
better.  
Thanks Doug, Kevin, Brandon and Kunal!  You guys are so helpful.  Will have 
more questions later.
Wanjing

From: doug...@parksidesoftware.com
Date: Mon, 1 Jun 2015 18:04:02 -0600
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

The reference implementation for lbaas (using haproxy with namespaces) requires 
the agent. There are vendor implementations that are agent-less, but not the 
reference.
There is a non-agent ref driver for lbaas v2, but there is no horizon support 
for v2, and that driver is unsupported beyond dev use.
If I may ask, why do you not want to run the agent?
Thanks,doug

On Jun 1, 2015, at 5:52 PM, Wanjing Xu wanjing...@hotmail.com wrote:


Is there a way to add an LBaaS service without having to use the neutron 
plugin/agent framework?
I want to add an LBaaS service without an LBaaS agent running and still want to 
have the lb cli and horizon.  When the user configures a load balancer via cli 
or horizon, neutron will send the events (pool, member, vip create/delete) in 
the notification info queue and our application will listen to the queue and 
program the LBaaS box.  So far, I have tried to enable the built-in HAProxy 
LBaaS (set the service_plugin to LoadBalancerPlugin and the service provider 
to haproxy).  By doing that, horizon and cli are all enabled and our 
application can successfully program the LBaaS box using the notification 
events.  The problem with that is that there is a haproxy agent running in the 
background although we are not using its function.  But if I don't enable the 
agent, we can not use horizon.  Currently we don't want to write an LBaaS agent 
of our own.  Is there a way to not use the LBaaS agent and still be able to use 
horizon/cli to configure load balancing?  During the openstack summit at 
vancouver, I saw the paypal load balancer presentation; they use two providers, 
one is an agent, the other is an agentless controller; not sure how that 
controller works, could not find it through googling.
Regards,
Wanjing Xu

  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

2015-06-02 Thread Doug Wiegley
Hi,

Ok, so you just need a noop driver, which you can find in the review link I 
posted a few emails back.

Thanks,
doug


 On Jun 2, 2015, at 6:23 PM, Wanjing Xu wanjing...@hotmail.com wrote:
 
 Doug
 
 Our current event consumer is listening to the queue with the topic specified 
 in neutron.conf as notification_topics = x.  neutron will generate all 
 create/update/delete events(from api) to this queue including vip/member/pool 
 events.  So we don't need to write a driver to generate the events.  Neutron 
 base api has taken care of it.
 
 Regards!
 
 Wanjing
 
 From: doug...@parksidesoftware.com
 Date: Tue, 2 Jun 2015 16:57:12 -0600
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?
 
 Hi,
 
 If you have an existing event consumer that you want to stick with, you could 
 write a driver that just generates events. There are some error status warts 
 that you’d either have to live with or handle, but you could do that later.
 
 Thanks,
 doug
 
 On Jun 2, 2015, at 1:05 PM, Wanjing Xu wanjing...@hotmail.com wrote:
 
 Thanks everybody.  While I am trying to digest all the responses, I am here 
 to reply why we are not considering a driver.  We already have an application 
 which listens to neutron events to do some other stuff, it might just be 
 easier for us if we reuse this framework and program the LBaaS from there.  
 If we use a driver, there is this added effort where we need to ask the user 
 to install our driver, modify the conf file, start the agent and restart 
 neutron.   We might still go back to driver/agent later because it seemed 
 that it helps scale better.  
 
 Thanks Doug, Kevin, Brandon and Kunal!  You guys are so helpful.  Will   have 
 more questions later
 
 Wanjing
 
 From: doug...@parksidesoftware.com
 Date: Mon, 1 Jun 2015 18:04:02 -0600
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?
 
 The reference implementation for lbaas (using haproxy with namespaces) 
 requires the agent. There are vendor implementations that are agent-less, but 
 not the reference.
 
 There is a non-agent ref driver for lbaas v2, but there is no horizon support 
 for v2, and that driver is unsupported beyond dev use.
 
 If I may ask, why do you not want to run the agent?
 
 Thanks,
 doug
 
 
 On Jun 1, 2015, at 5:52 PM, Wanjing Xu wanjing...@hotmail.com wrote:
 
 Is there a way to add an LBaaS service without having to use neutron 
 plugin/agent framework?
 
 I want to add a LBaaS service without  an LBaaS agent running and still want 
 to have lb cli and horizon.  When the user configure loadbalance via cli or 
 horizon, neutron will send the events(pool, member, vip create/delete 
 event)in the notification info queue and our application will listen to the 
 queue and program  the LBaaS box.  So far, I have tried to enable the 
 built-in HAProxy LBaaS(enable the service_plugin to be LoadBalancerPlugin and 
 service provider to be haproxy).  By doing that , horizon and cli are all 
 enabled and our application can successfully program LBaaS box using the 
 notification events.  The problem with that is that there is a haproxy agent 
 running in the background although we are not using its function.  But if I 
 don't enable the agent, we can not use horizon.  Currently we don't want to 
 write a LBaaS agent of our own.  Is there a way to not to use LBaaS agent and 
 still  be able to use horizon/cli to configure loadbalance?  During openstack 
 summit at vancouver, I saw paypal loadbalance presentation, they use two 
 providers, one is agent , the other is agentless controller, not sure how 
 that controller works, could not find it through googling.
 
 Regards
 Wanjing Xu
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __ 
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

Re: [openstack-dev] [all] error codes design conclusion?

2015-06-02 Thread Gareth
Thanks Rochelle, looking forward to your next version

On Wed, Jun 3, 2015 at 6:27 AM, Rochelle Grober rochelle.gro...@huawei.com
wrote:

  Spec is in the works but needs to be reworked a bit more.  It’s under
 Openstack-specs.  I’m revamping it, but I’m taking vacation until Monday,
 so you won’t see the new patch until at least next week.  You are welcome
 to comment on the current version, though:
 https://review.openstack.org/#/c/172552/



 Need to clean up formatting, add some history, better examples, etc.  Any
 useful suggestions to address current or your issues extremely welcome.



 --Rocky



 *From:* Gareth [mailto:academicgar...@gmail.com]
 *Sent:* Monday, June 01, 2015 18:17
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [all] error codes design conclusion?



 Hey guys,



 I remember there was a session in design summit talking about openstack
 error codes. What's the current status? or is there any conclusion yet?



 Kun

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [all] Liberty summit: Updates in Glance

2015-06-02 Thread Fei Long Wang

On 02/06/15 01:51, Jay Pipes wrote:
 On 06/01/2015 08:30 AM, John Garbutt wrote:
 On 1 June 2015 at 13:10, Flavio Percoco fla...@redhat.com wrote:
 On 01/06/15 11:57 +0100, John Garbutt wrote:
 On 26/05/15 13:54 -0400, Nikhil Komawar wrote:
 FWIW, moving Nova from glance v1 to glance v2, without breaking Nova's
 public API, will require someone getting a big chunk of glance v1 on
 top of glance v2.

 AFAIK, the biggest issue right now is changed-since which is
 something Glance doesn't have in v2 but is exposed through Nova's
 image API.

I'm working on something in Glance related to this.
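(As a stopgap, changed-since can be approximated client-side on top of v2.
A rough sketch - the endpoint and token are placeholders:

    from datetime import datetime

    import glanceclient

    client = glanceclient.Client('2', 'http://127.0.0.1:9292', token='...')

    def images_changed_since(since):
        # v2 has no changed-since filter, so compare updated_at locally.
        for image in client.images.list():
            updated = datetime.strptime(image['updated_at'],
                                        '%Y-%m-%dT%H:%M:%SZ')
            if updated >= since:
                yield image

    for image in images_changed_since(datetime(2015, 6, 1)):
        print(image['name'], image['updated_at'])

Listing every image just to filter one field is obviously much heavier than
the server-side filter v1 does, which is why it only works as a stopgap.)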
 Thats the big unanswered question that needs fixing in any spec we
 would approve around this effort.

 I'm happy you brought this up. What are Nova's plans to adopt Glance's
 v2 ? I heard there was a discussion and something along the lines of
 creating a library that wraps both APIs came up.

 We don't have anyone who has stepped up to work on it at this point.

 I think the push you made around this effort in kilo is the latest
 updated on this:
 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/remove-glanceclient-wrapper.html


 It would be great if we could find a glance/nova CPL to drive this
 effort.

 I am happy to take the lead on this from the Nova side. I'm familiar
 with the code in Nova.

 I really think nova should put some more effort into helping this
 happen. The work I did[0] - all red now, I swear it wasn't - during
 Kilo didn't get enough attention even before we decided to push it
 back. Not a complaint, really. However, I'd love to see some
 cross-project efforts on making this happen.
 [0] https://review.openstack.org/#/c/144875/

 As there is no one to work on the effort, we haven't made it a
 priority for liberty.

 It's not a huge amount of work. I can do it.
Yes, given the L release is just starting. I think we have enough time
to make it happen.

 If someone is able to step up to help complete the work, I can do my
 best to help get that effort reviewed, by raising its priority, just
 as we did in Kilo.

 I suspect looking at how to slowly move towards v2, rather than going
 for a big bang approach, will make this easier to land. That and
 solving how we implement changed-since, if thats not available in
 the newer glance APIs. Honestly, part of me wonders about skipping v2,
 and going straight to v3.

 We actually already support Glance V2 in some things. It shouldn't be
 too difficult to complete the work to fully support V2.

 Please assign me as the CPL for Glance from Nova.
I'm happy  to work with Jay for Nova from Glance :)

 Best,
 -jay

 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers  Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-02 Thread Vahid S Hashemian
In other words, can a new component that is being added to an environment have 
a dependency on the existing ones?
If so, how is that defined?

For example, going back to your example of a multi-tier application, if I 
initially have PostgreDB in my environment, and later add Tomcat, how do I tell 
Tomcat the PostgreDB connection info? Would it be done manually through 
component parameters? Or are there other dynamic ways of discovering it?

Thanks.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud Labs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Joe Gordon
On Tue, Jun 2, 2015 at 4:12 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 3 June 2015 at 10:34, Jeremy Stanley fu...@yuggoth.org wrote:
  On 2015-06-02 21:59:34 + (+), Ian Cordasco wrote:
  I like this very much. I recall there was a session at the summit
  about this that Thierry and Kyle led. If I recall correctly, the
  discussion mentioned that it wasn't (at this point in time)
  possible to use gerrit the way you describe it, but perhaps people
  were mistaken?
  [...]
 
  It wasn't an option at the time. What's being conjectured now is
  that with custom Prolog rules it might be possible to base Gerrit
  label permissions on strict file subsets within repos. It's
  nontrivial, as of yet I've seen no working demonstration, and we'd
  still need the Infrastructure Team to feel comfortable supporting it
  even if it does turn out to be technically possible. But even before
  going down the path of automating/enforcing it anywhere in our
  toolchain, projects interested in this workflow need to try to
  mentally follow the proposed model and see if it makes social sense
  for them.
 
  It's also still not immediately apparent to me that this additional
  complication brings any substantial convenience over having distinct
  Git repositories under the control of separate but allied teams. For
  example, the Infrastructure Project is now past 120 repos with more
  than 70 core reviewers among those. In a hypothetical reality where
  those were separate directory trees within a single repository, I'm
  not coming up with any significant ways it would improve our current
  workflow. That said, I understand other projects may have different
  needs and challenges with their codebase we just don't face.

 We *really* don't need a technical solution to a social problem.

 If someone isn't trusted enough to know the difference between
 project/subsystemA and project/subsystemB, nor trusted enough to not
 commit changes to subsystemB, pushing stuff out to a new repo, or
 in-repo ACLs are not the solution. The solution is to work with them
 to learn to trust them.

 Further, there are plenty of cases where the 'subsystem' is
 cross-cutting, not vertical - and in those cases its much much much
 harder to just describe file boundaries where the thing is.

 So I'd like us to really get our heads around the idea that folk are
 able to make promises ('I will only commit changes relevant to the DB
 abstraction/transaction management') and honour them. And if they
 don't - well, remove their access. *even with* CD in the picture,
 thats a wholly acceptable risk IMO.


With gerrit's great REST APIs it would be very easy to generate a report to
detect if someone breaks their promise and commits something outside of a
given sub-directory.
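Something along these lines would do it (an untested sketch; the subtree
and owner are placeholders, and note Gerrit prefixes its JSON responses
with a )]}' line that has to be stripped):

    import json

    import requests

    GERRIT = 'https://review.openstack.org'
    SUBTREE = 'nova/db/'        # assumed subsystem boundary
    OWNER = 'subsystem-core'    # placeholder account

    def gerrit_get(path):
        resp = requests.get(GERRIT + path)
        resp.raise_for_status()
        # Drop the )]}' anti-XSSI prefix line before parsing.
        return json.loads(resp.text.split('\n', 1)[1])

    for change in gerrit_get('/changes/?q=owner:%s+status:merged' % OWNER):
        files = gerrit_get('/changes/%s/revisions/current/files'
                           % change['id'])
        outside = [f for f in files
                   if f != '/COMMIT_MSG' and not f.startswith(SUBTREE)]
        if outside:
            print('change %s touches files outside %s: %s'
                  % (change['_number'], SUBTREE, outside))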



 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-02 Thread Vahid S Hashemian
Hi Gosha,

Thank you very much for your message.

Before I can answer your question I would need to better understand how 
Murano handles life-cycle operations that you mentioned.
I hope you can bear with me with these questions or point me to documents 
that I need to read.

When I deploy an environment in Murano I see that a Heat stack is created 
and deployed.
If later I add a new component to the environment and redeploy, I see a 
second stack added to the stack list which seems to include only the resources 
associated with the new component, as if the old components in the 
environment are not touched.
Also, in the scenario you mentioned you referred to adding applications to 
a running stack. It seems to me that any such update to the stack would 
not require modifying the existing stack resources (except for delete).
Is this a correct observation?
Is there a scenario where an environment update would require updates to 
existing components? If so, how does Murano handle that case?

Thank you in advance for your insights.

Regards,

---
Vahid Hashemian
Advisory Software Engineer, IBM Cloud Labs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Ian Wienand

On 06/03/2015 07:24 AM, Boris Pavlovic wrote:

Really it's hard to find cores that understand whole project, but
it's quite simple to find people that can maintain subsystems of
project.


  We are made wise not by the recollection of our past, but by the
  responsibility for our future.
   - George Bernard Shaw

Fewer authorities, mini-kingdoms and
turing-complete-rule-based-gerrit-subtree-git-commit-enforcement; more 
empowerment of responsible developers and building of trust.


-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.messaging release 1.13.0 (liberty)

2015-06-02 Thread doug
We are content to announce the release of:

oslo.messaging 1.13.0: Oslo Messaging API

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

For more details, please see the git log history below and:

http://launchpad.net/oslo.messaging/+milestone/1.13.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

Changes in oslo.messaging 1.12.0..1.13.0


887e5a0 Ensure rpc_response_timeout is registered before using it
e93f623 Allow to remove second _send_reply() call
c1c0af2 Don't create a new channel in RabbitMQ Connection.reset()

Diffstat (except docs and test files)
-

oslo_messaging/_drivers/amqp.py  | 12 +++
oslo_messaging/_drivers/amqpdriver.py| 13 +---
oslo_messaging/_drivers/impl_qpid.py | 11 ++
oslo_messaging/_drivers/impl_rabbit.py   | 26 ++--
5 files changed, 49 insertions(+), 25 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [fwaas] -IPv6 support in Kilo

2015-06-02 Thread Sumit Naiksatam
Hi Rukhsana, when you say "IPv6 support for FWaaS in Kilo", can you
indicate exactly what you are looking for?

The FWaaS rules in the resource model support both formats (which I
recall has always been the case). A particular implementation/driver
may not support ipv6 (which is what you are seeing in the
referenced code).
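For example, with the reference iptables driver an IPv6 rule can be created
like any other (the address below is just illustrative):

    neutron firewall-rule-create --protocol tcp \
        --destination-ip-address 2001:db8::/64 \
        --destination-port 80 --action allow --ip-version 6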

Thanks,
~Sumit.

On Tue, Jun 2, 2015 at 10:57 AM, Rukhsana Ansari
rukhsana.ans...@oneconvergence.com wrote:
 Hi,

 I was browsing the code to understand IPv6 support for FWaaS in Kilo.

 I don't see a restriction in the db code or in reference fwaas_plugin.py

 However, from  this:
 https://github.com/openstack/neutron-fwaas/blob/stable/kilo/neutron_fwaas/services/firewall/drivers/vyatta/vyatta_fwaas.py#L126

 I gather that at least Vyatta does not have IPv6 firewall support.

 Would greatly appreciate it if someone could explain the reasons for this
 restriction.

 Thanks
 -Rukhsana

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-06-02 Thread Chris Friesen

On 06/02/2015 02:36 PM, Andrew Laski wrote:

There used to be a project that I think was looking for an API like this to
provide a reservation system, Climate or Blazar or something.  There was brief
talk of providing something like it for that use case, but the idea was put on
the backburner to wait for the scheduling rework that's occurring.
The question in my mind is should the claim requests be in the Nova API or come
from a scheduler API.  And I tend to think that they should come from a
scheduler API.


Who owns the resources, nova or the scheduler?

In many cases only nova-compute can resolve races (resource tracking of specific 
CPU cores, specific PCI devices, etc. in the face of parallel scheduling) so 
unless we're going to guarantee no races then I think claim requests should be a 
nova API call, and it should go all the way down to nova-compute to make sure 
that the resources are actually claimed.
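Just to make the shape concrete, a claim request might look something like
this (purely hypothetical - every name below is invented, nothing like this
exists today):

    POST /v2.1/{tenant_id}/os-claims
    {
        "claim": {
            "flavorRef": "42",
            "count": 10,
            "expires_in": 300
        }
    }

with nova-compute either confirming the claim (returning ids a later boot
request could reference) or rejecting it outright if the resources have
raced away.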


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Boris Pavlovic
Jeremy,


  the Infrastructure Project is now past 120 repos with more
 than 70 core reviewers among those.


I dislike the idea of having 120 repos for a single tool. It makes things
complicated for everybody:
documentation, installation, maintenance, work that touches multiple
repos and so on...

So I would prefer to have single repo with many subcores.


Robert,

We *really* don't need a technical solution to a social problem.


We really need one... It's like non-voting jobs in our CI (everybody just
ignores them).
Btw it will be hard for a large core team to know each other,
especially if we are speaking about various groups of cores that are
maintaining only parts of the system. Keeping all this in our heads will be
a hard task (it should be automated).


Best regards,
Boris Pavlovic



On Wed, Jun 3, 2015 at 2:12 AM, Robert Collins robe...@robertcollins.net
wrote:

 On 3 June 2015 at 10:34, Jeremy Stanley fu...@yuggoth.org wrote:
  On 2015-06-02 21:59:34 + (+), Ian Cordasco wrote:
  I like this very much. I recall there was a session at the summit
  about this that Thierry and Kyle led. If I recall correctly, the
  discussion mentioned that it wasn't (at this point in time)
  possible to use gerrit the way you describe it, but perhaps people
  were mistaken?
  [...]
 
  It wasn't an option at the time. What's being conjectured now is
  that with custom Prolog rules it might be possible to base Gerrit
  label permissions on strict file subsets within repos. It's
  nontrivial, as of yet I've seen no working demonstration, and we'd
  still need the Infrastructure Team to feel comfortable supporting it
  even if it does turn out to be technically possible. But even before
  going down the path of automating/enforcing it anywhere in our
  toolchain, projects interested in this workflow need to try to
  mentally follow the proposed model and see if it makes social sense
  for them.
 
  It's also still not immediately apparent to me that this additional
  complication brings any substantial convenience over having distinct
  Git repositories under the control of separate but allied teams. For
  example, the Infrastructure Project is now past 120 repos with more
  than 70 core reviewers among those. In a hypothetical reality where
  those were separate directory trees within a single repository, I'm
  not coming up with any significant ways it would improve our current
  workflow. That said, I understand other projects may have different
  needs and challenges with their codebase we just don't face.

 We *really* don't need a technical solution to a social problem.

 If someone isn't trusted enough to know the difference between
 project/subsystemA and project/subsystemB, nor trusted enough to not
 commit changes to subsystemB, pushing stuff out to a new repo, or
 in-repo ACLs are not the solution. The solution is to work with them
 to learn to trust them.

 Further, there are plenty of cases where the 'subsystem' is
 cross-cutting, not vertical - and in those cases its much much much
 harder to just describe file boundaries where the thing is.

 So I'd like us to really get our heads around the idea that folk are
 able to make promises ('I will only commit changes relevant to the DB
 abstraction/transaction management') and honour them. And if they
 don't - well, remove their access. *even with* CD in the picture,
 thats a wholly acceptable risk IMO.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Jeremy Stanley
On 2015-06-03 02:36:20 +0300 (+0300), Boris Pavlovic wrote:
 I dislike the idea of having 120 repos for single tool. It makes things
 complicated for everybody:
 documentation stuff, installation, maintaing, work that touches multiple
 repos and so on..
 
 So I would prefer to have single repo with many subcores.
[...]

Can you explain why having things in separate Git repositories is
more complicated than having them in separate directory hierarchies
in one Git repository plus using a turing-complete language to
identify who has permission to approve what across those?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Sean McGinnis
I like this idea. I agree, as things grow it is probably easier to find 
folks that know certain areas of a project rather than the full scope.


This could be a good way to handle the load and delegate some pieces 
(such as driver reviews) to a different set of people.


On 06/02/2015 04:24 PM, Boris Pavlovic wrote:

Hi stackers,

*Issue*
*---*

Projects are becoming bigger and bigger overtime.
More and more people would like to contribute code and usually core 
reviewers
team can't scale enough. It's very hard to find people that understand 
full project and have enough time to do code reviews. As a result team 
is very small under heavy load and many maintainers just get burned out.


We have to solve this issue to move forward.


*Idea*
*--*

Let's introduce subsystems cores.

Really it's hard to find cores that understand whole project, but it's 
quite simple to find people that can maintain subsystems of project.



*How To*
*---*
Gerrit is not so simple as it looks and it has really neat features ;)

For example we can write own rules about who can put +2 and merge 
patch based on changes files.


We can make special subdirectory core ACL group.
People from such ACL group will be able to merge changes that touch 
only files from some specific subdirs.


As a result with proper organization of directories in project we can 
scale up review process without losing quality.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 6.1 Hard Code Freeze - status update June 2nd

2015-06-02 Thread Eugene Bogdanov

Hello everyone,

We currently see 6 bugs as the main HCF dependencies [1]. We'll try to close 
all of them overnight. If we don't succeed, we'll need another day. 
We'll decide whether or not to declare HCF based on the 
results tomorrow morning PDT.


[1] Key HCF dependencies:
https://bugs.launchpad.net/fuel/+bug/1461036 - in progress
https://bugs.launchpad.net/fuel/+bug/1458806 - in progress
https://bugs.launchpad.net/fuel/+bug/1460972 - on review
https://bugs.launchpad.net/fuel/6.1.x/+bug/1458533 - on review
https://bugs.launchpad.net/fuel/+bug/1461206 - discovered just today
https://bugs.launchpad.net/fuel/+bug/1461126 - discovered just today

Automated testing results:
CentOS - 73% pass rate
Ubuntu - 77% pass rate

--
EugeneB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Salvatore Orlando
On 2 June 2015 at 23:59, Ian Cordasco ian.corda...@rackspace.com wrote:



 On 6/2/15, 16:24, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi stackers,
 
 
 Issue
 ---
 
 
 Projects are becoming bigger and bigger overtime.
 More and more people would like to contribute code and usually core
 reviewers
 team can't scale enough. It's very hard to find people that understand
 full project and have enough time to do code reviews. As a result team is
 very small under heavy load and many maintainers just get burned out.
 
 
 We have to solve this issue to move forward.
 
 
 
 
 Idea
 --
 
 
 Let's introduce subsystems cores.
 
 
 Really it's hard to find cores that understand whole project, but it's
 quite simple to find people that can maintain subsystems of project.
 
 
 
 
 How To
 ---
 
 
 Gerrit is not so simple as it looks and it has really neat features ;)
 
 
 For example we can write own rules about who can put +2 and merge patch
 based on changes files.
 
 
 We can make special subdirectory core ACL group.
 People from such ACL group will be able to merge changes that touch only
 files from some specific subdirs.
 
 
 As a result with proper organization of directories in project we can
 scale up review process without losing quality.
 
 
 
 
 Thoughts?
 
 
 
 
 Best regards,
 Boris Pavlovic

 I like this very much. I recall there was a session at the summit about
 this that Thierry and Kyle led.


Indeed, and Kyle has already transformed that into facts [1]


 If I recall correctly, the discussion
 mentioned that it wasn't (at this point in time) possible to use gerrit
 the way you describe it, but perhaps people were mistaken?


I recall that too, and I also recall fungi stating the same thing back in
Paris.
Gerrit doesn't really have a concept of subsystems, as far as I can
understand; in theory gerrit could be changed to support this, but that's
another discussion.
The networking community is currently adopting multiple repositories to
this aim. This has worked very well for 3rd party plugins, and quite well
for advanced services.
For the 'neutron' proper project, which is however large enough to identify
multiple subsystems in it, the lieutenant mode described in [1] will be
enforced with a bit of common sense - from what I gather. If you're a core
for subsystem X, nominated by its lieutenant, you're not supposed to +/-2
patches that only marginally affect your subsystem or do not affect it at
all.



 If we can do this exactly as you describe it, that would be awesome.

If
 there's a problem in limiting people to what files they can approve
 changes for, then an alteration might be that those people get +2 but not
 +W. This provides a signal to whomever has +W that the review is very much
 ready to be merged. Does that sound fair?


neutron-specs adopts this approach (all cores can +2 but only a handful can
+A).
I think it works under the assumption of a lieutenant system, but for
projects with a large patch turnaround it might constitute a bottleneck,
especially when there are gate-breaking issues that need to be approved
ASAP.
Generally speaking, I believe having 2 tiers of cores (those with +A rights
and those without) is an experiment worth doing. I don't think it creates
an elite among developers; on the other hand, it gives SMEs a chance to
have a greater impact.



 Cheers,
 Ian


Salvatore

[1]
http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/policies/core-reviewers.rst


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Jeremy Stanley
On 2015-06-02 21:59:34 + (+), Ian Cordasco wrote:
 I like this very much. I recall there was a session at the summit
 about this that Thierry and Kyle led. If I recall correctly, the
 discussion mentioned that it wasn't (at this point in time)
 possible to use gerrit the way you describe it, but perhaps people
 were mistaken?
[...]

It wasn't an option at the time. What's being conjectured now is
that with custom Prolog rules it might be possible to base Gerrit
label permissions on strict file subsets within repos. It's
nontrivial, as of yet I've seen no working demonstration, and we'd
still need the Infrastructure Team to feel comfortable supporting it
even if it does turn out to be technically possible. But even before
going down the path of automating/enforcing it anywhere in our
toolchain, projects interested in this workflow need to try to
mentally follow the proposed model and see if it makes social sense
for them.
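To give a flavor of what such a rule might look like (an untested sketch -
the subtree is invented, and this is exactly the sort of thing we'd need to
vet carefully before supporting):

    % Let a change become submittable only if every touched file is
    % under nova/db/; otherwise no submit rule applies.
    submit_rule(submit(CR, V)) :-
        \+ gerrit:commit_delta('^(?!nova/db/)'),
        gerrit:max_with_block(-2, 2, 'Code-Review', CR),
        gerrit:max_with_block(-1, 1, 'Verified', V).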

It's also still not immediately apparent to me that this additional
complication brings any substantial convenience over having distinct
Git repositories under the control of separate but allied teams. For
example, the Infrastructure Project is now past 120 repos with more
than 70 core reviewers among those. In a hypothetical reality where
those were separate directory trees within a single repository, I'm
not coming up with any significant ways it would improve our current
workflow. That said, I understand other projects may have different
needs and challenges with their codebase we just don't face.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-02 Thread Morgan Fainberg
Hi Henrique,

I don't think we need to specifically call out that we want a domain; we
should always reference the namespace as we do today. Basically, if we ask
for a project name we need to also provide its namespace (your option #1).
This clearly lines up with how we handle projects in domains today.

I would, however, focus on how to represent the namespace in a single
(usable) string. We've been delaying the work on this for a while since we
have historically not provided a clear way to delimit the hierarchy. If we
solve the issue with what is the delimiter between domain, project, and
subdomain/subproject, we end up solving the usability issues with proposal
#1, and not breaking the current behavior you'd expect with implementing
option #2 (which at face value feels to be API incompatible/break of
current behavior).
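Purely as an illustration - assuming we settled on '.' as the delimiter,
which we have not - the child project in your example below could then be
requested with one string:

    "scope": {
        "project": {
            "domain": {"name": "A"},
            "name": "B.A"
        }
    }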

Cheers,
--Morgan

On Tue, Jun 2, 2015 at 7:43 AM, Henrique Truta henriquecostatr...@gmail.com
 wrote:

 Hi folks,

 In Reseller[1], we’ll have the domains concept merged into projects, that
 means that we will have projects that will behave as domains. Therefore, it
 will be possible to have two projects with the same name in a hierarchy,
 one being a domain and another being a regular project. For instance, the
 following hierarchy will be valid:

 A - is_domain project, with domain A

 |

 B - project

 |

 A - project with domain A

 That hierarchy faces a problem when a user requests a project scoped token
 by name, once she’ll pass “domain = ‘A’” and project.name = “A”.
 Currently, we have no way to distinguish which project we are referring to.
 We have two proposals for this.


1. Specify the whole hierarchy in the token request body, which means
   that when requesting a token for the child project of that hierarchy,
   we’ll have in the scope field something like:

 "project": {
     "domain": {
         "name": "A"
     },
     "name": ["A", "B", "A"]
 }

 If the project name is unique inside the domain (project “B”, for
 example), the hierarchy is optional.


2. When a conflict happens, always provide a token to the child project.
   That means that, in case we have a name clash as described, it will only
   be possible to get a project scoped token to the is_domain project
   through its id.



 The former will give us more clarity and won't create any more
 restrictions than we already have. As a con, we currently are not able to
 get the names of the projects in the hierarchy above a given project. Although
 the latter seems to hurt fewer people, it has the disadvantage of creating
 another set of constraints that might make the UX more difficult in the future.

 What do you think about that? We want to hear your opinion, so we can
 discuss it at today’s Keystone Meeting.

 [1]
 https://github.com/openstack/keystone-specs/blob/master/specs/liberty/reseller.rst

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-02 Thread James E. Blair
Hi,

This came up at the TC meeting today, and I volunteered to provide an
update from the discussion.

In general, I think there is a lot of support for a packaging effort in
OpenStack.  The discussion here has been great; we need to answer a few
questions, get some decisions written down, and make sure we have
agreement.

Here's what we need to know:

1) Is this one or more than one horizontal effort?

In other words, do we think the idea of having a single packaging
project/team with collaboration among distros is going to work?  Or
should we look at it more like the deployment projects where we have
puppet and chef as top level OpenStack projects?

Either way is fine, and regardless, we need to answer the next
questions:

2) What's the collaboration plan?

How will different distros collaborate with each other, if at all?  What
things are important to standardize on, what aren't and how do we
support them all.

3) What are the plans for repositories and their contents?

What repos will be created, and what will be in them.  When will new
ones be created, and is there any process around that.

4) Who is on the team(s)?

Who is interested in the overall effort?  Who is signing up for
distro-specific work?  Who will be the initial PTL?

I think if the discussion here can answer those questions, you should
update the governance repo change with that information, we can get all
the participants to ack that, and the TC will be able to act.

Thanks again for driving this.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

2015-06-02 Thread Doug Wiegley
Hi,

If you have an existing event consumer that you want to stick with, you could 
write a driver that just generates events. There are some error status warts 
that you’d either have to live with or handle, but you could do that later.

Thanks,
doug

 On Jun 2, 2015, at 1:05 PM, Wanjing Xu wanjing...@hotmail.com wrote:
 
 Thanks everybody.  While I am trying to digest all the responses, I am here 
 to reply why we are not considering a driver.  We already have an application 
 which listens to neutron events to do some other stuff, it might just be 
 easier for us if we reuse this framework and program the LBaaS from there.  
 If we use a driver, there is this added effort where we need to ask the user 
 to install our driver, modify the conf file, start the agent and restart 
 neutron.   We might still go back to driver/agent later because it seemed 
 that it helps scale better.  
 
 Thanks Doug, Kevin, Brandon and Kunal!  You guys are so helpful.  Will   have 
 more questions later
 
 Wanjing
 
 From: doug...@parksidesoftware.com
 Date: Mon, 1 Jun 2015 18:04:02 -0600
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?
 
 The reference implementation for lbaas (using haproxy with namespaces) 
 requires the agent. There are vendor implementations that are agent-less, but 
 not the reference.
 
 There is a non-agent ref driver for lbaas v2, but there is no horizon support 
 for v2, and that driver is unsupported beyond dev use.
 
 If I may ask, why do you not want to run the agent?
 
 Thanks,
 doug
 
 
 On Jun 1, 2015, at 5:52 PM, Wanjing Xu wanjing...@hotmail.com wrote:
 
 Is there a way to add an LBaaS service without having to use neutron 
 plugin/agent framework?
 
 I want to add a LBaaS service without  an LBaaS agent running and still want 
 to have lb cli and horizon.  When the user configure loadbalance via cli or 
 horizon, neutron will send the events(pool, member, vip create/delete 
 event)in the notification info queue and our application will listen to the 
 queue and program  the LBaaS box.  So far, I have tried to enable the 
 built-in HAProxy LBaaS(enable the service_plugin to be LoadBalancerPlugin and 
 service provider to be haproxy).  By doing that , horizon and cli are all 
 enabled and our application can successfully program LBaaS box using the 
 notification events.  The problem with that is that there is a haproxy agent 
 running in the background although we are not using its function.  But if I 
 don't enable the agent, we can not use horizon.  Currently we don't want to 
 write a LBaaS agent of our own.  Is there a way to not to use LBaaS agent and 
 still  be able to use horizon/cli to configure loadbalance?  During openstack 
 summit at vancouver, I saw paypal loadbalance presentation, they use two 
 providers, one is agent , the other is agentless controller, not sure how 
 that controller works, could not find it through googling.
 
 Regards
 Wanjing Xu
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __ 
 OpenStack Development Mailing List (not for usage questions) Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Boris Pavlovic
Hi stackers,

*Issue*
*---*

Projects are becoming bigger and bigger over time.
More and more people would like to contribute code, and usually the core
reviewer team can't scale enough. It's very hard to find people who
understand the full project and have enough time to do code reviews. As a
result the team is very small under heavy load, and many maintainers just
get burned out.

We have to solve this issue to move forward.


*Idea*
*--*

Let's introduce subsystems cores.

Really it's hard to find cores that understand the whole project, but it's
quite simple to find people who can maintain subsystems of the project.


*How To*
*---*

Gerrit is not so simple as it looks and it has really neat features ;)

For example, we can write our own rules about who can put +2 and merge a patch
based on the changed files.

We can make a special subdirectory core ACL group.
People from such an ACL group will be able to merge changes that touch only
files from some specific subdirs.

As a result, with proper organization of directories in a project we can scale
up the review process without losing quality.


*Thoughts?*


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cors] Last Call for Comments

2015-06-02 Thread Michael Krotscheck
Hey everyone!

The CORS spec has been under review for about a month now, and the TC has
put it on the agenda next week for final approval. I plan on doing one
final revision of the document - if it is warranted - so get your comments
in now!

https://review.openstack.org/#/c/179866/

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-02 Thread Georgy Okrokvertskhov
Hi Vahid,

Thank you for sharing your thoughts.
I have a question about application life-cycle if we use the TOSCA translator.
In Murano the main advantage of using the HOT format is that we can update the
Heat stack with resources as soon as we need to deploy an additional
application. We can dynamically create multi-tier applications using other
apps as building blocks. Imagine a Java app on top of Tomcat (VM1) and
PostgreDB (VM2).  All three components are three different apps in the
catalog. Murano allows you to bring them together and deploy them.
Do you think it will be possible to use the TOSCA translator for Heat stack
updates? What will we do if we have two apps with two TOSCA templates, like
Tomcat and Postgre? How can we combine them together?

Thanks
Gosha

On Tue, Jun 2, 2015 at 12:14 PM, Vahid S Hashemian 
vahidhashem...@us.ibm.com wrote:

 This is what I have so far.



 Would love to hear feedback on it. Thanks.

 Regards,

 -
 Vahid Hashemian, Ph.D.
 Advisory Software Engineer, IBM Cloud Labs



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Ian Cordasco


On 6/2/15, 16:24, Boris Pavlovic bo...@pavlovic.me wrote:

Hi stackers, 


Issue
---


Projects are becoming bigger and bigger over time.
More and more people would like to contribute code, and the core reviewer
team usually can't scale to match. It's very hard to find people who
understand the full project and have enough time to do code reviews. As a
result the team is very small and under heavy load, and many maintainers
just get burned out.


We have to solve this issue to move forward.




Idea
--


Let's introduce subsystem cores.


Really it's hard to find cores who understand the whole project, but it's
quite simple to find people who can maintain subsystems of the project.




How To
---


Gerrit is not as simple as it looks and it has really neat features ;)


For example, we can write our own rules about who can put +2 on and merge a
patch, based on the files the change touches.


We can make a special subdirectory core ACL group.
People in such an ACL group will be able to merge changes that touch only
files from some specific subdirs.


As a result, with proper organization of directories in a project we can
scale up the review process without losing quality.




Thoughts?




Best regards,
Boris Pavlovic

I like this very much. I recall there was a session at the summit about
this that Thierry and Kyle led. If I recall correctly, the discussion
mentioned that it wasn't (at this point in time) possible to use gerrit
the way you describe it, but perhaps people were mistaken?

If we can do this exactly as you describe it, that would be awesome. If
there's a problem in limiting people to what files they can approve
changes for, then an alteration might be that those people get +2 but not
+W. This provides a signal to whomever has +W that the review is very much
ready to be merged. Does that sound fair?

Cheers,
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from the Nova team

2015-06-02 Thread David Lyle
The Horizon project also uses the nova policy.json file to do role based
access control (RBAC) on the actions a user can perform. If the defaults
are hidden in the code, that makes those checks a lot more difficult to
perform. Horizon will then get to duplicate all the hard coded defaults in
our code base. Fully understanding that UI is not everyone's primary concern,
I will just point out that it's a terrible user experience to have 10 actions
listed on an instance that only fail when actually attempted by making
the API call.

To accomplish this level of RBAC, Horizon has to maintain a sync'd copy of
the nova policy file. The move to centralized policy is something I am very
excited about. But this seems to be a move in the opposite direction.

I think simply documenting the default values in the policy.json file would
be a simpler and more straight-forward approach. I think the defcore
resolution is also a documentation issue.

David



On Tue, Jun 2, 2015 at 10:31 AM, Ihar Hrachyshka ihrac...@redhat.com
wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 On 06/02/2015 06:22 PM, Sean Dague wrote:
  Nova has a very large API, and during the last release cycle a lot
  of work was done to move all the API checking properly into policy,
  and not do admin context checks at the database level. The result
  is a very large policy file -
  https://github.com/openstack/nova/blob/master/etc/nova/policy.json
 
  This provides a couple of challenges. One of which is in recent
  defcore discussions some deployers have been arguing that the
  existence of policy files means that anything you can do with
  policy.json is valid and shouldn't impact trademark usage, because
  the knobs were given. Nova specifically states this is not ok -
  https://github.com/openstack/nova/blob/master/doc/source/devref/policy
 _enforcement.rst#existed-nova-api-being-restricted
 
 
 however, we'd like to go a step further here.
 
  What we'd really like is sane defaults for policy that come from
  code, not from etc files. So that a Nova deploy with an empty
  policy.json is completely valid, and does a reasonable thing.
 
  Policy.json would then be just a set
  ofhttp://docs.openstack.org/developer/oslo.policy/api.html#rule-check
  overrides for existing policy. That would make it a lot more clear
  what was changed from the existing policy.
 
  We'd also really like the policy system to be able to WARN when
  the server starts if the policy was changed in some way that could
  negatively impact compatibility of the system, i.e. if functions
  that we felt were essential were turned off. Because the default
  policy is in code, we could have a view of the old and new world
  and actually warn the Operator that they did a weird thing.
 
  Lastly, we'd actually really like to redo our policy to look more
  like resource urls instead of extension names, as this should be a
  lot more sensible to the administrators, and hopefully make it
  easier to think about policy. Which I think means an aliasing
  facility in oslo.policy to allow a graceful transition for users.
  (This may exist, I don't know).

 If I understand your aliasing need correctly, you may want to use
 RuleChecks:
 http://docs.openstack.org/developer/oslo.policy/api.html#rule-check
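
 As a concrete, purely illustrative sketch of the two pieces above --
 defaults registered in code plus a much smaller override file -- something
 like the following could work. The RuleDefault/register_defaults names are
 assumptions about what such an oslo.policy API might look like, not a
 description of its API at the time of writing:

     # Sketch only: hypothetical in-code policy defaults.
     from oslo_config import cfg
     from oslo_policy import policy

     enforcer = policy.Enforcer(cfg.CONF)

     # Sane defaults live in code, so an empty policy.json is valid.
     enforcer.register_defaults([
         policy.RuleDefault('admin_or_owner',
                            'is_admin:True or project_id:%(project_id)s'),
         policy.RuleDefault('compute:get', 'rule:admin_or_owner'),
     ])

     # policy.json then carries only operator overrides, e.g.:
     # {"compute:get": "role:support"}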

 
  I'm happy to write specs here, but mostly wanted to have the
  discussion on the list first to ensure we're all generally good
  with this direction.
 
  -Sean
 
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2

 iQEcBAEBCAAGBQJVbdp7AAoJEC5aWaUY1u57/x0H/0G2aGlfNVyUdcflC19sner6
 FobWh/ASS/fBLq2SjDGduieu/voCdvK8XKi4rTncSvcwuKGVkgmJ/G3YiO22ZPyn
 kPFWtQjiSadRdmP3WRmMYU4LeHw090Gxq32lBA7knpqon2f/MTHLPZUsnqdmX5R8
 J7zpGEj+nqe9RiWq4kJzwK8niwZTe4FP5+wvc3A+QYNbHNJB5feY5VnGMuUK/4O/
 svsmuNMyAz93GCZL36f+EJoXXQv7+tGtSuImANq505Ae6sXs+Bl7crZul9lkzHo7
 VB/UCbcxa208iw6tiWBh4qP1Y8vBljNjL8ifNbyXj6Y0z3gekEtoUcBQq3T0w5s=
 =lBtm
 -END PGP SIGNATURE-

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-02 Thread Morgan Fainberg
On Tue, Jun 2, 2015 at 12:09 PM, Adam Young ayo...@redhat.com wrote:

 Since this a cross project concern, sending it out to the wider mailing
 list:

 We have a sub-effort in Keystone to do better access control policy (not
 the  Neutron or  Congress based policy efforts).

 I presented on this at the summit, and the effort is in full swing.  We
 are going to set up a subteam meeting for this, but would like to get some
 input from outside the Keystone developers working on it.  In particular,
 we'd like input from the Nova team that was thinking about hard-coding
 policy decisions in Python, and ask you, instead, to work with us to come
 up with a solution that works for all the services.


I want to be sure we look at what Nova is presenting here. While building
policy into python may not (on the surface) look like a desirable approach,
because it restricts the flexibility that we've had with policy.json, I
don't want to exclude the concept without examination. If there is a set of
base-level functionality that is expected to work with Nova in all cases,
is that something that should be codified in the policy rules? This doesn't
preclude having a mix between the two approaches (allowing custom roles,
etc., but having a baseline for a project that is a known quantity and
could be overridden).

Is there real value (from a UX and interoperability standpoint) in having
everything 100% flexible in every way? If we are working to redesign how
policy works, we should be very careful about excluding the (more) radical
ideas without consideration. I'd argue that dynamic policy falls on the
opposite side of the spectrum from the Nova proposal. In truth I'm going to
guess we end up somewhere in the middle.




 If you are interested in being part of this effort, there is a Trello
 board set up here:

 https://trello.com/b/260v4Gs7/dynamic-policy

 It should be world readable.  I will provide you write access if you are
 interested in contributing.  In addition, let me know what your constraints
 are in setting up a weekly meeting and I will try to accommodate.  Right
 now, the people involved are primarily East-Coast of the Western Hemisphere
 and Europe, and the meeting time will likely be driven by that.


I definitely want to encourage this to be a cross-project / horizontal
effort as this will impact everything within OpenStack.

Cheers,
--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstackclient] Image create-or-update

2015-06-02 Thread Steve Martinelli
I'm thinking that the current approach is probably how we want to keep 
things. I can't imagine many other projects being okay with multiple 
create calls with the same name.
Though if you're really adamant about including that support, we could 
include a new flag (--or-update) that performs the update if the image is 
found, and otherwise continues with a new create.

Does that make sense?
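
A rough sketch of what that flag's logic might look like (the helper name
and client API here are illustrative, not actual openstackclient internals):

    # Sketch only: fall back to update just when --or-update was passed.
    def create_or_update_image(client, name, or_update=False, **kwargs):
        if or_update:
            matches = [img for img in client.images.list()
                       if img.name == name]
            if matches:
                # Name found: update the existing image in place.
                return client.images.update(matches[0].id, **kwargs)
        # Default: always create; glance itself allows duplicate names.
        return client.images.create(name=name, **kwargs)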

Thanks,

Steve Martinelli
OpenStack Keystone Core

Marek Aufart mauf...@redhat.com wrote on 06/02/2015 10:55:20 AM:

 From: Marek Aufart mauf...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 06/02/2015 10:55 AM
 Subject: [openstack-dev] [openstackclient] Image create-or-update
 
 Hi,
 
 I have a question related to openstack image create command v1 from 
 python-openstackclient.
 
 It behaves like create-or-update (if an image with the *name* specified
 for create already exists, it is updated). Actually it looks like it is in
 collision with glance, which allows creating multiple images with the same
 name instead of updating one.
 
 Is the create-or-update approach still wanted?
 
 Related code:
 https://github.com/openstack/python-openstackclient/blob/master/
 openstackclient/image/v1/image.py#L247-L269
 
 Thanks.
 
 -- 
 Marek Aufart
 
 Email: mauf...@redhat.com
 
 IRC: maufart / aufi on freenode
 
 
__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from the Nova team

2015-06-02 Thread Kevin L. Mitchell
On Tue, 2015-06-02 at 16:16 -0600, David Lyle wrote:
 The Horizon project also uses the nova policy.json file to do role
 based access control (RBAC) on the actions a user can perform. If the
 defaults are hidden in the code, that makes those checks a lot more
 difficult to perform. Horizon will then get to duplicate all the hard
 coded defaults in our code base. Fully understanding that UI is not
 everyone's primary concern, I will just point out that it's a terrible
 user experience to have 10 actions listed on an instance that only
 fail when actually attempted by making the API call.

For the record, the discussion at the summit also touched on the
discoverability of the policy affecting a given user/API.  I don't
believe we considered the ordering between that and the defaults feature
we suggested, but I believe we can code a defaults mechanism to
dynamically generate an output file in the interim (as is done for
configuration now), which may improve the situation from Horizon's
standpoint, until the discoverability piece is in place.
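
For illustration, a minimal sketch of that interim approach, assuming the
defaults are registered in code as objects carrying a name and a check
string (all names here are hypothetical):

    # Sketch: dump in-code policy defaults to a sample file that tools
    # like Horizon could consume, analogous to sample config generation.
    import json

    def generate_sample_policy(registered_defaults, path):
        sample = {rule.name: rule.check_str
                  for rule in registered_defaults}
        with open(path, 'w') as f:
            json.dump(sample, f, indent=4, sort_keys=True)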

-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] error codes design conclusion?

2015-06-02 Thread Rochelle Grober
The spec is in the works but needs to be reworked a bit more.  It’s under 
openstack-specs.  I’m revamping it, but I’m taking vacation until Monday, so 
you won’t see the new patch until at least next week.  You are welcome to 
comment on the current version, though:
https://review.openstack.org/#/c/172552/

It still needs cleaned-up formatting, some history, better examples, etc.  Any 
useful suggestions to address the current issues, or your own, are extremely 
welcome.

--Rocky

From: Gareth [mailto:academicgar...@gmail.com]
Sent: Monday, June 01, 2015 18:17
To: OpenStack Development Mailing List
Subject: [openstack-dev] [all] error codes design conclusion?

Hey guys,

I remember there was a session in design summit talking about openstack error 
codes. What's the current status? or is there any conclusion yet?

Kun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

2015-06-02 Thread Wanjing Xu

Hi Kunal,

Is it OK if you tell us how this agentless controller is done? Or at least
give me a pointer to the installation guide? I wonder if using a noop
driver is the way.

Regards,
Wanjing

From: kunalhgan...@gmail.com
Date: Mon, 1 Jun 2015 19:02:29 -0700
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] No LBaaS agent?

Hi Wanjing,

We at PayPal/eBay have integrated with a multi-vendor solution. One of the
vendor solutions is agentless and offloads all the business logic to a
vendor controller that manages the devices.

Regards,
Kunal

On Jun 1, 2015, at 4:52 PM, Wanjing Xu wanjing...@hotmail.com wrote:

Is there a way to add an LBaaS service without having to use the neutron
plugin/agent framework?

I want to add an LBaaS service without an LBaaS agent running and still want
to have the lb cli and horizon. When the user configures load balancing via
cli or horizon, neutron will send the events (pool, member, vip
create/delete events) on the notification info queue, and our application
will listen to the queue and program the LBaaS box. So far, I have tried to
enable the built-in HAProxy LBaaS (set the service_plugin to
LoadBalancerPlugin and the service provider to haproxy). By doing that,
horizon and cli are all enabled and our application can successfully program
the LBaaS box using the notification events. The problem with that is that
there is a haproxy agent running in the background although we are not using
its function. But if I don't enable the agent, we can not use horizon.
Currently we don't want to write an LBaaS agent of our own. Is there a way
to not use an LBaaS agent and still be able to use horizon/cli to configure
load balancing? During the OpenStack summit at Vancouver, I saw the PayPal
load balancing presentation; they use two providers, one is an agent, the
other is an agentless controller. Not sure how that controller works; could
not find it through googling.

Regards,
Wanjing Xu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Robert Collins
On 3 June 2015 at 10:34, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-06-02 21:59:34 + (+), Ian Cordasco wrote:
 I like this very much. I recall there was a session at the summit
 about this that Thierry and Kyle led. If I recall correctly, the
 discussion mentioned that it wasn't (at this point in time)
 possible to use gerrit the way you describe it, but perhaps people
 were mistaken?
 [...]

 It wasn't an option at the time. What's being conjectured now is
 that with custom Prolog rules it might be possible to base Gerrit
 label permissions on strict file subsets within repos. It's
 nontrivial, as of yet I've seen no working demonstration, and we'd
 still need the Infrastructure Team to feel comfortable supporting it
 even if it does turn out to be technically possible. But even before
 going down the path of automating/enforcing it anywhere in our
 toolchain, projects interested in this workflow need to try to
 mentally follow the proposed model and see if it makes social sense
 for them.

 It's also still not immediately apparent to me that this additional
 complication brings any substantial convenience over having distinct
 Git repositories under the control of separate but allied teams. For
 example, the Infrastructure Project is now past 120 repos with more
 than 70 core reviewers among those. In a hypothetical reality where
 those were separate directory trees within a single repository, I'm
 not coming up with any significant ways it would improve our current
 workflow. That said, I understand other projects may have different
 needs and challenges with their codebase we just don't face.

We *really* don't need a technical solution to a social problem.

If someone isn't trusted enough to know the difference between
project/subsystemA and project/subsystemB, nor trusted enough to not
commit changes to subsystemB, then pushing stuff out to a new repo or
in-repo ACLs are not the solution. The solution is to work with them
so we learn to trust them.

Further, there are plenty of cases where the 'subsystem' is
cross-cutting, not vertical - and in those cases its much much much
harder to just describe file boundaries where the thing is.

So I'd like us to really get our heads around the idea that folk are
able to make promises ('I will only commit changes relevant to the DB
abstraction/transaction management') and honour them. And if they
don't - well, remove their access. *Even with* CD in the picture,
that's a wholly acceptable risk IMO.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

2015-06-02 Thread Kai Qiang Wu
Hi All,

For the mesos bay, I think what we should implement depends on use cases.

If users use magnum to create a mesos bay, what would they do with mesos in
the following steps?

1. If they go to mesos (framework or anything) directly, we'd better not
introduce any new mesos objects, but use container if possible.
2. If they'd like to operate mesos through magnum, and it is easy to do
that, we could provide some object operations.

Ideally, it is good to reuse the containers API if possible. If not, we'd
better find ways to map to mesos (API passthrough, instead of adding
redundant objects on the magnum side).



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu hongbin...@huawei.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:   06/02/2015 06:15 AM
Subject:Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint



Hi Jay,

For your question “what is the mesos object that we want to manage”, the
short answer is it depends. There are two options I can think of:
  1. Don’t manage any object from Marathon directly. Instead, we
  can focus on the existing Magnum objects (i.e. container), and
  implement them using Marathon APIs where possible. Take the
  abstraction ‘container’ as an example: for a swarm bay, container
  will be implemented by calling docker APIs; for a mesos bay,
  container could be implemented using Marathon APIs (it looks like
  Marathon’s object ‘app’ can be leveraged to operate a docker
  container). The effect is that Magnum will have a set of common
  abstractions that is implemented differently by each bay type
  (see the sketch below).
  2. Do manage a few Marathon objects (i.e. app). The effect is
  that Magnum will have additional API object(s) from Marathon
  (like what we have for existing k8s objects: pod/service/rc).
Thoughts?
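
As a rough illustration of option 1, a mesos-bay conductor could translate
Magnum's generic container-create into a Marathon app. This is a sketch
under assumptions: the function name and bay endpoint are invented, though
POST /v2/apps is Marathon's documented app-creation call:

    # Sketch: back Magnum's 'container create' with a Marathon app that
    # wraps a single docker container.
    import requests

    MARATHON_URL = "http://marathon.example.com:8080"  # assumed endpoint

    def container_create(name, image, cpus=0.5, mem=128):
        app = {
            "id": "/" + name,
            "cpus": cpus,
            "mem": mem,
            "instances": 1,
            "container": {
                "type": "DOCKER",
                "docker": {"image": image},
            },
        }
        resp = requests.post(MARATHON_URL + "/v2/apps", json=app)
        resp.raise_for_status()
        return resp.json()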

Thanks
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: May-29-15 1:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

I want to mention that there is another mesos framework named as chronos:
https://github.com/mesos/chronos , it is used for job orchestration.

For others, please refer to my comments in line.

2015-05-29 7:45 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:
I’m moving this whiteboard to the ML so we can have some discussion to
refine it, and then go back and update the whiteboard.

Source: https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type

My comments in-line below.


Begin forwarded message:

From: hongbin hongbin...@huawei.com
Subject: COMMERCIAL:[Blueprint mesos-bay-type] Add support for mesos bay
type
Date: May 28, 2015 at 2:11:29 PM PDT
To: adrian.o...@rackspace.com
Reply-To: hongbin hongbin...@huawei.com

Blueprint changed by hongbin:

Whiteboard set to:

I did some preliminary research on possible implementations. I think this
BP can be implemented in two steps.
1. Develop a heat template for provisioning a mesos cluster.
2. Implement a magnum conductor for managing the mesos cluster.

Agreed, thanks for filing this blueprint!
For 2, the conductor is mainly used to manage objects for a CoE; k8s has
pod, service, and rc, so what is the mesos object that we want to manage?
IMHO, mesos is a resource manager and it needs to work with some frameworks
to provide services.


 First, I want to emphasize that mesos is not a service (it looks like a
 library). Therefore, mesos doesn't have a web API, and most users don't
 use mesos directly. Instead, they use a mesos framework that is on top
 of mesos. Therefore, a mesos bay needs to have a mesos framework pre-
 configured so that magnum can talk to the framework to manage the bay.
 There are several framework choices. Below is a list of frameworks that
 look like a fit (in my opinion). An exhaustive list of frameworks can be
 found here [1].

 1. Marathon [2]
 This is a framework controlled by a company (mesosphere [3]). It is open
 source though. It supports running apps on clusters of docker containers.
 It is probably the most widely-used mesos framework for long-running
 applications.

 Marathon offers a REST API, whereas Aurora does not (unless one has
 materialized in the last month). This was the one we discussed at our
 Vancouver design summit, and we agreed that those wanting to use Apache
 Mesos are probably expecting this framework.


 2. Aurora [4]
 This is a framework governed by Apache Software 

Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-02 Thread Matthew Thode
On 06/02/2015 05:41 PM, James E. Blair wrote:
 Hi,
 
 This came up at the TC meeting today, and I volunteered to provide an
 update from the discussion.
 
 In general, I think there is a lot of support for a packaging effort in
 OpenStack.  The discussion here has been great; we need to answer a few
 questions, get some decisions written down, and make sure we have
 agreement.
 
 Here's what we need to know:
 
 1) Is this one or more than one horizontal effort?
 
 In other words, do we think the idea of having a single packaging
 project/team with collaboration among distros is going to work?  Or
 should we look at it more like the deployment projects where we have
 puppet and chef as top level OpenStack projects?
 
 Either way is fine, and regardless, we need to answer the next
 questions:
 
 2) What's the collaboration plan?
 
 How will different distros collaborate with each other, if at all?  What
 things are important to standardize on, what aren't and how do we
 support them all.
 
 3) What are the plans for repositories and their contents?
 
 What repos will be created, and what will be in them.  When will new
 ones be created, and is there any process around that.
 
 4) Who is on the team(s)?
 
 Who is interested in the overall effort?  Who is signing up for
 distro-specific work?  Who will be the initial PTL?
 
 I think if the discussion here can answer those questions, you should
 update the governance repo change with that information, we can get all
 the participants to ack that, and the TC will be able to act.
 
 Thanks again for driving this.
 
 -Jim
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Gentoo packages from source on the client side, so I don't think this
affects us.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Help needed with TOSCA support in Murano

2015-06-02 Thread Georgy Okrokvertskhov
When you update an environment in Murano it will update the underlying
stack. You should not see a new stack for the same environment. If you have
a PostgresDB deployed and then add a Tomcat application, you will see that
the stack was updated with new resources. There is no dependency between
Tomcat and PostgresDB, so the deployment will just update the Heat stack
with independent resources (a new Nova:Server will be added). Murano will
never add a new stack for an existing environment when you update it. There
is no such logic in the code for sure; a single Heat stack is nailed down to
an environment and each deployment will make stack update calls.

Thanks
Gosha


On Tue, Jun 2, 2015 at 5:59 PM, Vahid S Hashemian vahidhashem...@us.ibm.com
 wrote:

 In other words, can a new component that is being added to an environment
 have a dependency on the existing ones?
 If so, how is that defined?

 For example, going back to your example of a multi-tier application, if I
 initially have PostgreDB in my environment, and later add Tomcat, how do I
 tell Tomcat the PostgreDB connection info? Would it be manually done
 through component parameters? Or there are other dynamic ways of
 discovering it?

 Thanks.

 Regards,

 -
 Vahid Hashemian, Ph.D.
 Advisory Software Engineer, IBM Cloud Labs


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-02 Thread Adrian Otto
Eric,

On Jun 2, 2015, at 10:07 PM, Eric Windisch e...@windisch.us wrote:



On Tue, Jun 2, 2015 at 10:29 PM, Adrian Otto adrian.o...@rackspace.com
wrote:
I have reflected on this further and offer this suggestion:

1) Add a feature to Magnum to auto-generate human readable names, like Docker 
does for un-named containers, and ElasticSearch does for naming cluster nodes. 
Use this feature if no name is specified upon the creation of a Bay or Baymodel.

For what it's worth, I also believe that requiring manual specification of
names, especially if they must be unique, is an anti-pattern.

If auto-generation of human readable names is performed and these must be 
unique, mind that you will be accepting a limit on the number of bays that may 
be created.

Good point. Keeping in mind that the effective limit would be per-tenant, a
simple mitigation (adding incrementing digits or hex to the end of the name
in the case of collisions) could make the effective maximum high enough that
it would be effectively unlimited. If 
someone actually reached the effective limit, the cloud provider could advise 
the user to specify a UUID they create as the name in order to avoid running 
out of auto-generated names. I could also imagine a Magnum feature that would 
allow a tenant to select an alternate name assignment strategy. For example:

bay_name_generation_strategy = random_readable | uuid
baymodel_name_generation_strategy = random_readable | uuid

Where uuid simply sets the name to the uuid of the resource, guaranteeing an 
unlimited number of bays at the cost of readability. If this were settable on a 
per-tenant basis, you’d only need to use it for tenants with ridiculous numbers 
of bays. I suggest that we not optimize for this until the problem actually 
surfaces somewhere.

I think this is perfectly fine, as long as it's reasonably large and the 
algorithm is sufficiently intelligent. The UUID algorithm is good at this, for 
instance, although it fails at readability. Docker's is not terribly great and 
could be limiting if you were looking to run several thousand containers on a 
single machine. Something better than Docker's algorithm but more readable than 
UUID could be explored.

Also, something to consider is if this should also mean a change to the UUIDs 
themselves. You could use UUID-5 to create a UUID from your tenant's UUID and 
your unique name. The tenant's UUID would be the namespace, with the bay's name 
being the name field. The benefit of this is that clients, by knowing their 
tenant ID could automatically determine their bay ID, while also guaranteeing 
uniqueness (or as unique as UUID gets, anyway).

Cool idea!

Adrian


Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

2015-06-02 Thread Jay Lau
In today's IRC meeting, the conclusion for now is to create a
Marathon+Mesos bay without exposing any new objects in Magnum, while
enabling end users to operate Marathon directly. Thanks.

2015-06-03 9:19 GMT+08:00 Kai Qiang Wu wk...@cn.ibm.com:

 Hi All,

 For the mesos bay, I think what we should implement depends on use cases.

 If users use magnum to create a mesos bay, what would they do with mesos in
 the following steps?

 1. If they go to mesos (framework or anything) directly, we'd better not
 introduce any new mesos objects, but use container if possible.
 2. If they'd like to operate mesos through magnum, and it is easy to
 do that, we could provide some object operations.

 Ideally, it is good to reuse the containers API if possible. If not, we'd
 better find ways to map to mesos (API passthrough, instead of adding
 redundant objects on the magnum side).



 Thanks

 Best Wishes,

 
 Kai Qiang Wu (吴开强  Kennan)
 IBM China System and Technology Lab, Beijing

 E-mail: wk...@cn.ibm.com
 Tel: 86-10-82451647
 Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
 100193

 
 Follow your heart. You are miracle!


 From: Hongbin Lu hongbin...@huawei.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 06/02/2015 06:15 AM
 Subject: Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint
 --



 Hi Jay,

 For your question “what is the mesos object that we want to manage”, the
 short answer is it depends. There are two options I can think of:

1. Don’t manage any object from Marathon directly. Instead, we can focus
on the existing Magnum objects (i.e. container), and implement them using
Marathon APIs where possible. Take the abstraction ‘container’ as an
example: for a swarm bay, container will be implemented by calling docker
APIs; for a mesos bay, container could be implemented using Marathon APIs
(it looks like Marathon’s object ‘app’ can be leveraged to operate a docker
container). The effect is that Magnum will have a set of common
abstractions that is implemented differently by each bay type.
2. Do manage a few Marathon objects (i.e. app). The effect is that Magnum
will have additional API object(s) from Marathon (like what we have for
existing k8s objects: pod/service/rc).

 Thoughts?

 Thanks
 Hongbin

 From: Jay Lau [mailto:jay.lau@gmail.com]
 Sent: May-29-15 1:35 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

 I want to mention that there is another mesos framework named chronos:
 https://github.com/mesos/chronos , it is used for job orchestration.

 For others, please refer to my comments in line.

 2015-05-29 7:45 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:
 I’m moving this whiteboard to the ML so we can have some discussion to
 refine it, and then go back and update the whiteboard.

 Source: https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type

 My comments in-line below.


 Begin forwarded message:

 From: hongbin hongbin...@huawei.com
 Subject: COMMERCIAL:[Blueprint mesos-bay-type] Add support for mesos bay
 type
 Date: May 28, 2015 at 2:11:29 PM PDT
 To: adrian.o...@rackspace.com
 Reply-To: hongbin hongbin...@huawei.com

 Blueprint changed by hongbin:

 Whiteboard set to:

 I did some preliminary research on possible implementations. I think this
 BP can be implemented in two steps.
 1. Develop a heat template for provisioning a mesos cluster.
 2. Implement a magnum conductor for managing the mesos cluster.

 Agreed, thanks for filing this blueprint!
 For 2, the conductor is mainly used to manage objects for a CoE; k8s has
 pod, service, and rc, so what is the mesos object that we want to manage?
 IMHO, mesos is a resource manager and it needs to work with some
 frameworks to provide services.



First, I want to emphasize that mesos is not a service (it looks like a
library). Therefore, mesos doesn't have a web API, and most users don't
use mesos directly. Instead, they use a mesos framework that is on top
of mesos. Therefore, a mesos bay needs to have a mesos framework 

Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-02 Thread Jay Lau
I think that we did not come to a conclusion in today's IRC meeting.

Adrian proposed that Magnum generate a unique name, just like what docker
does for docker run. The problem mentioned by Andrew Melton is that Magnum
supports multiple tenants, so we should support the case where bay/baymodel
resources under different tenants have the same name; a unique name is not
required.

Also, we may need to support name updates as well, in case the end user
specifies a name by mistake and wants to update it after the bay/baymodel
was created.

Hmm.., looking forward to more comments from you. Thanks.

2015-06-02 23:34 GMT+08:00 Fox, Kevin M kevin@pnnl.gov:

  Names can make writing generic orchestration templates that would go in
 the applications catalog easier. Humans are much better at inputting a name
 rather than a uuid. You can even default a name in the text box, and if they
 don't change any of the defaults, it will just work. You can't do that with
 a UUID since it is different on every cloud.

 Thanks,
 Kevin
  --
 *From:* Jay Lau [jay.lau@gmail.com]
 *Sent:* Tuesday, June 02, 2015 12:33 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be
 a required option when creating a Bay/Baymodel

   Thanks Adrian, imho making the name required can bring more convenience
 to end users because UUIDs are difficult to use. Without a name, the end
 user needs to retrieve the UUID of the bay/baymodel before performing any
 operations on it, which is really time consuming. We can discuss more in
 this week's IRC meeting. Thanks.


 2015-06-02 14:08 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  -1. I disagree.

  I am not convinced that requiring names is a good idea. I've asked
 several times why there is a desire to require names, and I'm not seeing
 any persuasive arguments that are not already addressed by UUIDs. We have
 UUID values to allow for acting upon an individual resource. Names are
 there as a convenience. Requiring names, especially unique names, would
 make Magnum harder to use for API users driving Magnum from other systems.
 I want to keep the friction as low as possible.

 I'm fine with replacing None with an empty string.

  Consistency with Nova would be a valid argument if we were being more
 restrictive, but that's not the case. We are more permissive. You can use
 Magnum in the same way you use Nova if you want, by adding names to all
 resources. I don't see the wisdom in forcing that style of use without a
 technical reason for it.

 Thanks,

 Adrian

 On May 31, 2015, at 4:43 PM, Jay Lau jay.lau@gmail.com wrote:


  Just want to use ML to trigger more discussion here. There are now
 bugs/patches tracing this, but seems more discussions are needed before we
 come to a conclusion.

 https://bugs.launchpad.net/magnum/+bug/1453732
 https://review.openstack.org/#/c/181839/
 https://review.openstack.org/#/c/181837/
 https://review.openstack.org/#/c/181847/
 https://review.openstack.org/#/c/181843/

  IMHO, making the Bay/Baymodel name a MUST will bring more flexibility to
 end users, as Magnum also supports operating on a Bay/Baymodel via its name,
 and the name might be more meaningful to end users.

 Perhaps we can borrow some ideas from nova; the concepts in magnum can be
 mapped to nova as follows:

 1) instance = bay
 2) flavor = baymodel

 So I think that a solution might be as following:
 1) Make name as a MUST for both bay/baymodel
 2) Update magnum client to use following style for bay-create and
 baymodel-create: DO NOT add --name option

 root@devstack007:/tmp# nova boot
 usage: nova boot [--flavor flavor] [--image image]
  [--image-with key=value] [--boot-volume volume_id]
  [--snapshot snapshot_id] [--min-count number]
  [--max-count number] [--meta key=value]
  [--file dst-path=src-path] [--key-name key-name]
  [--user-data user-data]
  [--availability-zone availability-zone]
  [--security-groups security-groups]
  [--block-device-mapping dev-name=mapping]
  [--block-device key1=value1[,key2=value2...]]
  [--swap swap_size]
  [--ephemeral size=size[,format=format]]
  [--hint key=value]
  [--nic
 net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid]
  [--config-drive value] [--poll]
  name
 error: too few arguments
 Try 'nova help boot' for more information.
 root@devstack007:/tmp# nova flavor-create
 usage: nova flavor-create [--ephemeral ephemeral] [--swap swap]
   [--rxtx-factor factor] [--is-public
 is-public]
   name id ram disk vcpus
 Please show your comments if any.

 --
   Thanks,

  Jay Lau (Guangya Liu)


 

Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-02 Thread Eric Windisch
On Tue, Jun 2, 2015 at 10:29 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

  I have reflected on this further and offer this suggestion:

  1) Add a feature to Magnum to auto-generate human readable names, like
 Docker does for un-named containers, and ElasticSearch does for naming
 cluster nodes. Use this feature if no name is specified upon the creation
 of a Bay or Baymodel.


For what it's worth, I also believe that requiring manual specification of
names, especially if they must be unique, is an anti-pattern.

If auto-generation of human readable names is performed and these must be
unique, mind that you will be accepting a limit on the number of bays that
may be created. I think this is perfectly fine, as long as it's reasonably
large and the algorithm is sufficiently intelligent. The UUID algorithm is
good at this, for instance, although it fails at readability. Docker's is
not terribly great and could be limiting if you were looking to run several
thousand containers on a single machine. Something better than Docker's
algorithm but more readable than UUID could be explored.

Also, something to consider is if this should also mean a change to the
UUIDs themselves. You could use UUID-5 to create a UUID from your tenant's
UUID and your unique name. The tenant's UUID would be the namespace, with
the bay's name being the name field. The benefit of this is that clients,
by knowing their tenant ID could automatically determine their bay ID,
while also guaranteeing uniqueness (or as unique as UUID gets, anyway).
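
For concreteness, a minimal sketch of that UUID-5 derivation (the tenant ID
and bay name here are invented for illustration):

    # Sketch: derive the bay ID deterministically from the tenant's UUID
    # (used as the UUID-5 namespace) and the bay's unique name.
    import uuid

    tenant_id = uuid.UUID("12345678-1234-5678-1234-567812345678")
    bay_name = "production-kube"

    bay_id = uuid.uuid5(tenant_id, bay_name)

    # A client that knows its tenant ID can recompute the bay ID locally;
    # the same tenant and name always yield the same UUID.
    assert bay_id == uuid.uuid5(tenant_id, bay_name)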

Regards,
Eric Windisch
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-06-02 Thread Jay Pipes

On 06/02/2015 07:25 PM, Chris Friesen wrote:

On 06/02/2015 02:36 PM, Andrew Laski wrote:

There used to be a project that I think was looking for an API like
this to
provide a reservation system, Climate or Blazar or something.  There
was brief
talk of providing something like it for that use case, but the idea
was put on
the backburner to wait for the scheduling rework that's occurring.
The question in my mind is should the claim requests be in the Nova
API or come
from a scheduler API.  And I tend to think that they should come from a
scheduler API.


Who owns the resources, nova or the scheduler?

In many cases only nova-compute can resolve races (resource tracking of
specific CPU cores, specific PCI devices, etc. in the face of parallel
scheduling) so unless we're going to guarantee no races then I think
claim requests should be a nova API call, and it should go all the way
down to nova-compute to make sure that the resources are actually claimed.


That's actually how the system works today. And, IMHO, it's inefficient. 
The nova-compute node should be the final arbiter of whether a request 
for resources can be properly fulfilled by the hypervisor, however, the 
scheduler should be the thing that owns resource usage records for the 
partition of resource providers that the scheduler process is 
responsible for.


I think the claim IDs should be returned from the scheduler API instead 
of created within the nova-compute manager.
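
To make the two positions concrete, a toy sketch (every name here is
invented, not nova code): the scheduler owns the usage records and issues
claim IDs, while the compute node remains the final arbiter at spawn time.

    # Sketch: scheduler-issued claims, compute-side confirmation.
    import uuid

    class Scheduler(object):
        def __init__(self):
            self.claims = {}   # claim_id -> (host, requested resources)

        def claim_resources(self, host, resources):
            # Record usage against the host and hand back a claim ID.
            claim_id = str(uuid.uuid4())
            self.claims[claim_id] = (host, resources)
            return claim_id

        def release(self, claim_id):
            self.claims.pop(claim_id, None)

    class ComputeNode(object):
        def spawn(self, scheduler, claim_id, fits_on_hypervisor):
            # Final arbiter: the hypervisor may still refuse the claim
            # (e.g. a race over specific CPU cores or PCI devices).
            if not fits_on_hypervisor:
                scheduler.release(claim_id)
                raise RuntimeError("claim %s rejected" % claim_id)
            return "spawned under claim %s" % claim_id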


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-06-02 Thread Chris Friesen

On 06/02/2015 07:48 PM, Jay Pipes wrote:

On 06/02/2015 07:25 PM, Chris Friesen wrote:



In many cases only nova-compute can resolve races (resource tracking of
specific CPU cores, specific PCI devices, etc. in the face of parallel
scheduling) so unless we're going to guarantee no races then I think
claim requests should be a nova API call, and it should go all the way
down to nova-compute to make sure that the resources are actually claimed.


That's actually how the system works today. And, IMHO, it's inefficient. The
nova-compute node should be the final arbiter of whether a request for resources
can be properly fulfilled by the hypervisor, however, the scheduler should be
the thing that owns resource usage records for the partition of resource
providers that the scheduler process is responsible for.


If the nova-compute node is still the final arbiter, what does it actually mean 
to say that the scheduler owns the records?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-02 Thread Adrian Otto
I have reflected on this further and offer this suggestion:

1) Add a feature to Magnum to auto-generate human readable names, like Docker 
does for un-named containers, and ElasticSearch does for naming cluster nodes. 
Use this feature if no name is specified upon the creation of a Bay or Baymodel.

-and-

2) Add configuration directives (default=FALSE) for allow_duplicate_bay_name 
and allow_duplicate_baymodel_name. If TRUE, duplicate-named Bay and BayModel 
resources will be allowed, as they are today.

This way, by default Magnum requires a unique name, and if none is specified, 
it will automatically generate a name. This way no additional burden is put on 
users who want to act on containers exclusively using UUIDs, and cloud 
operators can decide if they want to enforce name uniqueness or not.

In the case of clouds that want to allow sharing access to a BayModel between 
multiple tenants (example: a global BayModel named “kubernetes”) with 
allow_duplicate_baymodel_name set to FALSE, a user will still be allowed to 
create a BayModel with the name “kubernetes” and it will override the global 
one. If a user-supplied BayModel is present with the same name as a global one, 
we shall automatically select the one owned by the tenant.

About Sharing of BayModel Resources:

Similarly, if we add features to allow one tenant to share a BayModel with 
another tenant (pending acceptance of the offered share), and duplicate names 
are allowed, then prefer in this order: 1) Use the resource owned by the same 
tenant, 2) Use the resource shared by the other tenant (post acceptance only), 
3) Use the global resource. If duplicates exist in the same scope of ownership, 
then raise an exception requiring the use of a UUID in that case to resolve the 
ambiguity.
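
A sketch of that lookup order (illustrative Python; the model fields and
exception names are assumptions):

    # Sketch: resolve a BayModel name using the ownership precedence above.
    class AmbiguousName(Exception):
        pass

    class NotFound(Exception):
        pass

    def resolve_baymodel(name, tenant_id, baymodels):
        scopes = (
            lambda m: m.owner == tenant_id,            # 1) own resource
            lambda m: tenant_id in m.accepted_shares,  # 2) shared, accepted
            lambda m: m.is_global,                     # 3) global resource
        )
        for in_scope in scopes:
            matches = [m for m in baymodels
                       if m.name == name and in_scope(m)]
            if len(matches) > 1:
                # Duplicates within one scope: require a UUID instead.
                raise AmbiguousName(name)
            if matches:
                return matches[0]
        raise NotFound(name)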

One expected drawback of this approach is that tools designed to integrate with 
one Magnum may not work the same with another Magnum if the 
allow_duplicate_bay* settings are changed from the default values on one but 
not the other. This should be made clear in the comments above the 
configuration directive in the example config file.

Adrian

On Jun 2, 2015, at 8:44 PM, Jay Lau jay.lau@gmail.com wrote:

I think that we did not come to a conclusion in today's IRC meeting.

Adrian proposed that Magnum generate a unique name, just like what docker
does for docker run. The problem mentioned by Andrew Melton is that Magnum
supports multiple tenants, so we should support the case where bay/baymodel
resources under different tenants have the same name; a unique name is not
required.

Also, we may need to support name updates as well, in case the end user
specifies a name by mistake and wants to update it after the bay/baymodel
was created.

Hmm.., looking forward to more comments from you. Thanks.

2015-06-02 23:34 GMT+08:00 Fox, Kevin M kevin@pnnl.gov:
Names can make writing generic orchestration templates that would go in the
applications catalog easier. Humans are much better at inputting a name rather
than a uuid. You can even default a name in the text box, and if they don't
change any of the defaults, it will just work. You can't do that with a UUID
since it is different on every cloud.

Thanks,
Kevin

From: Jay Lau [jay.lau@gmail.com]
Sent: Tuesday, June 02, 2015 12:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a 
required option when creating a Bay/Baymodel

Thanks Adrian, imho making the name required can bring more convenience to end
users because UUIDs are difficult to use. Without a name, the end user needs to
retrieve the UUID of the bay/baymodel before performing any operations on it,
which is really time consuming. We can discuss more in this week's IRC
meeting. Thanks.


2015-06-02 14:08 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:
-1. I disagree.

I am not convinced that requiring names is a good idea. I've asked several 
times why there is a desire to require names, and I'm not seeing any persuasive 
arguments that are not already addressed by UUIDs. We have UUID values to allow 
for acting upon an individual resource. Names are there as a convenience. 
Requiring names, especially unique names, would make Magnum harder to use for 
API users driving Magnum from other systems. I want to keep the friction as low 
as possible.

I'm fine with replacing None with an empty string.

Consistency with Nova would be a valid argument if we were being more 
restrictive, but that's not the case. We are more permissive. You can use 
Magnum in the same way you use Nova if you want, by adding names to all 
resources. I don't see the wisdom in forcing that style of use without a 
technical reason for it.

Thanks,

Adrian

On May 31, 2015, at 4:43 PM, Jay Lau jay.lau@gmail.com 

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread John Griffith
On Tue, Jun 2, 2015 at 7:19 PM, Ian Wienand iwien...@redhat.com wrote:

 On 06/03/2015 07:24 AM, Boris Pavlovic wrote:

 Really it's hard to find cores that understand whole project, but
 it's quite simple to find people that can maintain subsystems of
 project.


   We are made wise not by the recollection of our past, but by the
   responsibility for our future.
- George Bernard Shaw

 Less authorities, mini-kingdoms and
 turing-complete-rule-based-gerrit-subtree-git-commit-enforcement; more
 empowerment of responsible developers and building trust.

 -i


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


All of the debate about technical feasibility and additional repos aside,
the one question I always raise when topics like this come up is how does
that really solve the problem.  In other words, there's still a finite
number of folks who dedicate the time to be subject matter experts and
do the reviews.

Maybe this will help, I don't know.  But I have the same argument as I made
in my spec to remove drivers from Cinder altogether: creating another
repo and moving things around just creates more overhead and does little
to address the lack of review resources.

I understand you're not proposing new repos Boris, although it was
mentioned in this thread.

I do think that we could probably try to do something like growing the
Lieutenant model that the Neutron team is hammering out.  Not sure... but
it seems like a good start; again assuming there are enough
qualified/interested Lieutenants.  That's kind of how I interpreted your
proposal, but with one additional step of ACLs; is that accurate?

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-02 Thread Salvatore Orlando
On 3 June 2015 at 07:12, John Griffith john.griffi...@gmail.com wrote:



 On Tue, Jun 2, 2015 at 7:19 PM, Ian Wienand iwien...@redhat.com wrote:

 On 06/03/2015 07:24 AM, Boris Pavlovic wrote:

 Really it's hard to find cores that understand whole project, but
 it's quite simple to find people that can maintain subsystems of
 project.


   We are made wise not by the recollection of our past, but by the
   responsibility for our future.
- George Bernard Shaw

 Less authorities, mini-kingdoms and
 turing-complete-rule-based-gerrit-subtree-git-commit-enforcement; more
 empowerment of responsible developers and building trust.

 -i


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  All of the debate about technical feasibility and additional repos
  aside, the one question I always raise when topics like this come up is
  how does that really solve the problem.  In other words, there's still a
  finite number of folks who dedicate the time to be subject matter
  experts and do the reviews.

  Maybe this will help, I don't know.  But I have the same argument as I
  made in my spec to remove drivers from Cinder altogether: creating another
  repo and moving things around just creates more overhead and does little
  to address the lack of review resources.


In the neutron project we do not yet have enough data points to assess the
impact of the driver/plugin split on review turnaround. On the one hand it
seems that there is no statistically significant improvement in review
times for the core part, but on the other hand average review times for
plugin/driver code have improved a lot. So I reckon that there's been a
clear advantage on this front. There is always a flip side of the coin, of
course: plugin maintainers have to do extra work to chase changes in
openstack/neutron.

However, this is a bit out of scope for this thread. I'd say that splitting
out a project in several repositories is an option, but not always the
right one. In the case of neutron plugins and drivers, it made sense
because there is a stable-ish interface between the core system and the
plugin, and because there's usually little overlap of responsibilities.


 I understand you're not proposing new repos Boris, although it was
 mentioned in this thread.

 I do think that we could probably try and do something like growing the
 Lieutenant model that the Neutron team is hammering out.  Not sure... but
 seems like a good start; again assuming there are enough
 qualified/interested Lieutenants.  I'm not sure, but that's kind of how I
 interpreted your proposal, plus one additional step of ACLs; is that
 accurate?


While I cannot answer for Boris, my opinion is that the lieutenant system
actually tries to provide a social solution to the problem, whereas ACLs
are a technical solution. I personally think that the belief that there's
always a tool to fix any problem is a giant unicorn - as Robert put it,
there's no technical solution to a social problem. A technical solution
would probably end up bringing more process, more bureaucracy, and
therefore more annoyance... but I'm digressing.

In my opinion the lieutenant system is an attempt to build networks of
trusted and responsible developers who share interest (more or less vested)
and knowledge on a specific subsystem of a project. If implemented
correctly, it will ensure those networks are small enough so that trust can
be achieved in a simple way.
I'd rather rely on trust and common sense than on a set of ACLs that
probably at some point will get in the way and be more a hindrance than a
help.

Salvatore



 Thanks,
 John


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Change abandonment policy

2015-06-02 Thread Cody Herriges
Colleen Murphy wrote:
snip
 
 3) Manually abandon after N months/weeks changes that have a -1 that was
 never responded to
 
 ```
 If a change is submitted and given a -1, and subsequently the author
 becomes unresponsive for a few weeks, reviewers should leave reminder
 comments on the review or attempt to contact the original author via IRC
 or email. If the change is easy to fix, anyone should feel welcome to
 check out the change and resubmit it using the same change ID to
 preserve original authorship. If the author is unresponsive for at least
 3 months and no one else takes over the patch, core reviewers can
 abandon the patch, leaving a detailed note about how the change can be
 restored.
 
 If a change is submitted and given a -2, or it otherwise becomes clear
 that the change can not make it in (for example, if an alternate change
 was chosen to solve the problem), and the author has been unresponsive
 for at least 3 months, a core reviewer should abandon the change.
 ```
 
 Core reviewers can click the abandon button on changes that no one has
 shown any interest in for N weeks or months, leaving a message about how to
 restore the change if the author wants to come back to it. Puppet Labs
 does this for its module pull requests, setting N at 1 month.
 

+1

 
 Option 3 leaves the possibility that a change that is mostly good
 becomes abandoned, making it harder for someone to find and restore it.
 

In my opinion this will happen very infrequently.

-- 
Cody



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] UPDATED: [new][mercador] Announcing Mercador, a project to federate OpenStack cloud services.

2015-06-02 Thread Geoff Arnold
[Updated with additional information.]

Hello,

I'm pleased to announce the development of a new project called Mercador 
(Portuguese for “merchant”).  Mercador will provide a mechanism for integrating 
OpenStack cloud services from one cloud service provider (CSP) into the set of 
services published by a second CSP. The mechanism is intended to be completely 
transparent to cloud service users, and to require minimal changes for 
participating CSPs. It is based on the concept of a virtual region, and builds 
on the hierarchical multitenant and Keystone-to-Keystone identity  federation 
work in Kilo. This project is hosted on StackForge, initialized as a set of 
empty cookiecutter[1] repos:

https://github.com/stackforge/mercador-pub
https://github.com/stackforge/mercador-sub
https://github.com/stackforge/python-mercadorclient

Planning information for the project will be captured on Trello:

https://trello.com/b/6tlmk3z4/mercador-stackforge-project

Please join us via IRC on #openstack-mercador on freenode.

I am holding a Doodle poll to select times for our first meeting. (I’ve 
reformatted the poll since the original announcement.)  This Doodle poll will 
close June 8th and meeting times will be announced on the mailing list at that 
time.  At our first IRC meeting, we will be selecting the core team members, so 
if you’re interested in participating in this project, please try to attend.  

http://doodle.com/fsdm6ry6aytqf7w8

The initial core team includes:

Geoff Arnold
David Cheperdak
Orran Krieger
Raildo Mascena

For more details, check out our Wiki:

 https://wiki.openstack.org/wiki/Mercador  

In anticipation of a lively debate, I’m appending an FAQ [2]

Regards,
Geoff Arnold
--
[1]

https://github.com/openstack-dev/cookiecutter

[2]
FAQ
Q. What exactly is this project going to build?
A. The first deliverable is a system which will allow resources from CSP A to 
be made available to users of an OpenStack cloud operated by CSP B. We plan to 
demonstrate this in Tokyo.

Q. Can’t we do that today? CERN already does this, and it was the theme of the 
Identity Federation demonstration at the Vancouver summit.
A. Those examples all require administrators to collaborate on the static 
configuration of the various clouds. This system will support automated dynamic 
configuration.

Q. How?
A. The administrator of CSP A defines a set of “Virtual Regions”, each mapped 
into a Keystone Domain within one of her Regions. Then the admin of CSP B can 
select an available Virtual Region and make it available to his users just as 
though it was a regular Region of cloud B. (It shows up in Keystone and Horizon 
like other regions.)
 
Q. How do the users of CSP B experience this?
A. Users shouldn’t be able to tell the difference between one of CSP B’s own 
regions and a virtual region sourced from CSP A. (It should pass RefStack.)

Q. How is this implemented?
A. CSP A deploys a “publisher” service to define and publish Virtual Regions. 
CSP B deploys a “subscriber” service which talks to “publishers” to bind 
virtual regions. And there’s a CLI tool.

Q. Is that all?
A. The “publisher” is straightforward. The “subscriber” needs to be able to 
dynamically reconfigure Keystone and Horizon. This may require some minor 
changes.

Q. How is resource allocation policy managed? How does CSP A control what’s 
available in a Virtual Region?
A. In Kilo, the Keystone team implemented Hierarchical Multitenancy (HMT), but 
the rest of OpenStack isn’t HMT-aware. We need quotas in Nova, Cinder, etc. to 
be extended to support HMT. 

Q. This doesn’t meet my expectations for Service Federation. To me, Federation 
implies [insert list of cool intercloud functionality].
A. We’re concentrating on this one mechanism, which we think will be a 
foundation for a lot of interesting innovations. We’re collaborating with some 
of those, like the team from the Massachusetts Open Cloud.

Q. There’s more to federation than simply wiring up the OpenStack services. 
What about operations and business integration – logging, metrics, billing, 
service assurance?
A. You’re right. However right now most of those things are out of scope for 
OpenStack. We expect that the functionality we’re going to build will wind up 
being embedded in various OSS and BSS workflows.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron] Mechanism drivers and Neutron server forking?

2015-06-02 Thread Salvatore Orlando
I'm not sure if you can test this behaviour on your own because it requires
the VMware plugin and the eventlet handling of backend response.

But the issue was manifesting and had to be fixed with this mega-hack [1].
The issue was not about several workers executing the same code - the
loopingcall was always started on a single thread. The issue I witnessed
was that the other API workers just hang.

There's probably something we need to understand about how eventlet can
work safely with os.fork (I just think they're not really made to work
together!).
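If it helps, the kind of guard I have in mind would defer the background
work until after any fork, keyed on the worker's PID, and start it lazily
from a request handler. This is only a sketch of the idea, not the actual
driver code:

import os

import eventlet


class ExampleMechanismDriver(object):
    """Illustrative only: the hook names match ML2, the logic is a sketch."""

    def initialize(self):
        # Runs in the parent before neutron-server forks its API
        # workers, so don't spawn background work here.
        self._worker_pid = None

    def _ensure_background_job(self):
        # Call this from a per-request handler (e.g. create_port), which
        # only ever runs in a forked worker.  Each worker gets its own
        # copy of this state after os.fork().
        if self._worker_pid != os.getpid():
            self._worker_pid = os.getpid()
            eventlet.spawn_n(self._audit_loop)

    def _audit_loop(self):
        # Long-running resync; with several workers you would still need
        # external coordination (e.g. a lock) to run it exactly once.
        pass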
Regardless, I did not spend too much time on it, because I thought that the
multiple workers code might have been rewritten anyway by the pecan switch
activities you're doing.

Salvatore


[1] https://review.openstack.org/#/c/180145/

On 3 June 2015 at 02:20, Kevin Benton blak...@gmail.com wrote:

 Sorry about the long delay.

  Even the LOG.error("KEVIN PID=%s network response: %s" % (os.getpid(),
  r.text)) line?  Surely the server would have forked before that line was
 executed - so what could prevent it from executing once in each forked
 process, and hence generating multiple logs?

 Yes, just once. I wasn't able to reproduce the behavior you ran into.
  Maybe eventlet has some protection for this? Can you provide a small sample
 code for the logging driver that does reproduce the issue?

 On Wed, May 13, 2015 at 5:19 AM, Neil Jerram neil.jer...@metaswitch.com
 wrote:

 Hi Kevin,

 Thanks for your response...

 On 08/05/15 08:43, Kevin Benton wrote:

 I'm not sure I understand the behavior you are seeing. When your
 mechanism driver gets initialized and kicks off processing, all of that
 should be happening in the parent PID. I don't know why your child
 processes start executing code that wasn't invoked. Can you provide a
 pointer to the code or give a sample that reproduces the issue?


 https://github.com/Metaswitch/calico/tree/master/calico/openstack

 Basically, our driver's initialize method immediately kicks off a green
 thread to audit what is now in the Neutron DB, and to ensure that the other
 Calico components are consistent with that.

  I modified the linuxbridge mech driver to try to reproduce it:
 http://paste.openstack.org/show/216859/

 In the output, I never received any of the init code output I added more
 than once, including the function spawned using eventlet.


  Interesting.  Even the LOG.error("KEVIN PID=%s network response: %s" %
  (os.getpid(), r.text)) line?  Surely the server would have forked before
 that line was executed - so what could prevent it from executing once in
 each forked process, and hence generating multiple logs?

 Thanks,
 Neil

  The only time I ever saw anything executed by a child process was actual
 API requests (e.g. the create_port method).




  On Thu, May 7, 2015 at 6:08 AM, Neil Jerram neil.jer...@metaswitch.com
 mailto:neil.jer...@metaswitch.com wrote:

 Is there a design for how ML2 mechanism drivers are supposed to cope
 with the Neutron server forking?

 What I'm currently seeing, with api_workers = 2, is:

 - my mechanism driver gets instantiated and initialized, and
 immediately kicks off some processing that involves communicating
 over the network

 - the Neutron server process then forks into multiple copies

 - multiple copies of my driver's network processing then continue,
 and interfere badly with each other :-)

 I think what I should do is:

 - wait until any forking has happened

 - then decide (somehow) which mechanism driver is going to kick off
 that processing, and do that.

 But how can a mechanism driver know when the Neutron server forking
 has happened?

 Thanks,
  Neil


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-06-02 Thread Alexis Lee
Alexis Lee said on Tue, Jun 02, 2015 at 11:28:03AM +0100:
 Paul Murray tells me there was a blueprint for this some time ago, but
 I can't find a spec for it. I'm interested in pushing this, I'll put up
 a spec at some point unless someone beats me to it.

Oops, found it, thanks Paul:
https://blueprints.launchpad.net/nova/+spec/persistent-resource-claim
https://review.openstack.org/#/c/84906/ (merged in Juno)


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]Why not common definition about normal HTTP status code like 2xx and 3xx?

2015-06-02 Thread Neo Liu
On Tue, Jun 2, 2015 at 5:46 PM Boris Bobrov bbob...@mirantis.com wrote:

 On Tuesday 02 June 2015 09:32:45 Chenhong Liu wrote:
  There is keystone/exception.py which contains Exceptions defined and used
  inside keystone that provide 4xx and 5xx status codes. And we can use it like:
  exception.Forbidden.code, exception.Forbidden.title
  exception.NotFound.code, exception.NotFound.title
 
  This makes the code look prettier and avoids errors. But I can't find
  definitions for other status codes, like 200, 201, 204, 302, and so on. The
  code in keystone, especially the unit test cases, just writes these status
  codes and titles explicitly.
 
  How about adding those definitions?

 These are standard HTTP codes:
 http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

 Description in exceptions is given because one error code can be used for
 several errors. Success codes always have one meaning.


I know the HTTP codes. I mean writing something like module.OK or
module.NoContent is better than writing '200 OK' or '204 No Content'.
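For what it's worth, the standard library already names these: httplib
(http.client on Python 3) defines the numeric constants and a responses
mapping, so tests could avoid the literals even without a new keystone
module. For example:

import httplib  # http.client on Python 3

assert httplib.OK == 200
assert httplib.NO_CONTENT == 204

# '204 No Content' without hard-coding either part:
status = '%d %s' % (httplib.NO_CONTENT,
                    httplib.responses[httplib.NO_CONTENT])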



 --
 Best regards,
 Boris Bobrov

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]Why not common definition about normal HTTP status code like 2xx and 3xx?

2015-06-02 Thread samuel

Hi Chenhong Liu,

Encapsulated into the WSGI application, Keystone is architecturally 
organized as follows:

Application -> Router -> Controller -> Manager -> Driver

The Router connects the called URLs with code in the Controller, which 
delegates actions to the Manager, which implements the business logic 
and in turn calls the configured Driver to access stored information.

The Controller level may catch exceptions raised from the Manager/Driver 
[1]; those are the exceptions defined in exception.py and represent the 
4xx and 5xx HTTP status codes.

When a Controller call to the Manager/Driver succeeds, it may set the 
HTTP status code itself [2], or the code will be set when rendering the 
WSGI response [3]; those are the 2xx HTTP status codes.

The 300 HTTP status code is used specifically in version discovery, and 
is set in that manager call in the base Controller class [4].
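As a schematic (purely illustrative, not actual Keystone code), the 
success path at the Controller level looks like:

from keystone.common import controller
from keystone.common import wsgi


class ThingController(controller.V3Controller):

    def create_thing(self, context, thing):
        # The Manager/Driver may raise exception.NotFound,
        # exception.Forbidden, etc.; the WSGI layer maps those to
        # their 4xx/5xx codes.
        ref = self.thing_api.create_thing(context, thing)
        # On success the controller may pick the 2xx code itself, as
        # in [2], or leave it to the response rendering in [3].
        return wsgi.render_response(body={'thing': ref},
                                    status=(201, 'Created'))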

Sincerely,
Samuel

[1] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/identity/backends/sql.py#n130
[2] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/contrib/federation/controllers.py#n100
[3] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/wsgi.py#n740
[4] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/controllers.py#n178


Em 02.06.2015 06:45, Boris Bobrov escreveu:

On Tuesday 02 June 2015 09:32:45 Chenhong Liu wrote:
There is keystone/exception.py which contains Exceptions defined and used 
inside keystone that provide 4xx and 5xx status codes. And we can use it 
like:

exception.Forbidden.code, exception.Forbidden.title
exception.NotFound.code, exception.NotFound.title

This makes the code look prettier and avoids errors. But I can't find 
definitions for other status codes, like 200, 201, 204, 302, and so on. 
The code in keystone, especially the unit test cases, just writes these 
status codes and titles explicitly.

How about adding those definitions?

These are standard HTTP codes:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

Description in exceptions is given because one error code can be used for 
several errors. Success codes always have one meaning.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-06-02 Thread Valeriy Ponomaryov
Deepak,

transfer-* is not suitable in this particular case. Usage of share
networks causes creation of resources, whereas transfer does not. Also, in
this topic we are discussing creation of a new share based on some snapshot.

Valeriy

On Sun, May 31, 2015 at 4:23 PM, Deepak Shetty dpkshe...@gmail.com wrote:


 On Thu, May 28, 2015 at 4:54 PM, Duncan Thomas duncan.tho...@gmail.com
 wrote:

 On 28 May 2015 at 13:03, Deepak Shetty dpkshe...@gmail.com wrote:

 Isn't this similar to what cinder transfer-* cmds are for ? Ability to
 transfer cinder volume across tenants
 So Manila should be implementing the transfer-* cmds, after which
 admin/user can create a clone
 then initiate a transfer to a diff tenant  ?


 Cinder doesn't seem to have any concept analogous to a share network from
 what I can see; the cinder transfer commands are for moving a volume
 between tenants, which is a different thing, I think.


  Yes, cinder doesn't have any equivalent of share networks. But my comment was
  from the functionality perspective. In cinder the transfer-* commands are used
  to transfer ownership of volumes across tenants. IIUC, the ability in Manila to
  create a share from a snapshot and have that share in a different share
  network is equivalent to creating a share from a snapshot for a different
  tenant, no? Share networks are typically 1-1 with tenant networks AFAIK;
  correct me if I am wrong




 --
 Duncan Thomas

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-02 Thread Steven Dake (stdake)
Kennan,

Agree on no requirement for unique name.

Regards
-steve

From: Kai Qiang Wu wk...@cn.ibm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, June 1, 2015 at 6:11 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a 
required option when creating a Bay/Baymodel


+1 about Jay option.

BTW, as nova and glance both allow duplicate names for instances and images, 
the name seems not to need to be unique; it is OK I think.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Jay Lau jay.lau@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: 06/01/2015 11:17 PM
Subject: Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a 
required option when creating a Bay/Baymodel







2015-06-01 21:54 GMT+08:00 Jay Pipes jaypi...@gmail.com:

On 05/31/2015 05:38 PM, Jay Lau wrote:
Just want to use the ML to trigger more discussion here. There are now
bugs/patches tracking this, but it seems more discussion is needed before
we come to a conclusion.

https://bugs.launchpad.net/magnum/+bug/1453732
https://review.openstack.org/#/c/181839/
https://review.openstack.org/#/c/181837/
https://review.openstack.org/#/c/181847/
https://review.openstack.org/#/c/181843/

IMHO, making the Bay/Baymodel name a MUST will bring more flexibility
to end users, as Magnum also supports operating on Bays/Baymodels via names,
and the name might be more meaningful to end users.

Perhaps we can borrow some ideas from nova; the concepts in magnum can be
mapped to nova as follows:

1) instance = bay
2) flavor = baymodel

So I think that a solution might be as follows:
1) Make name a MUST for both bay/baymodel
2) Update magnum client to use following style for bay-create and
baymodel-create: DO NOT add --name option

You should decide whether name would be unique -- either globally or within a 
tenant.

Note that Nova's instance names (the display_name model field) are *not* 
unique, neither globally nor within a tenant. I personally believe this was a 
mistake.

The decision affects your data model and constraints.

Yes, my thinking is to have Magnum behave the same as nova. The name can be 
managed by the end user, and the end user can specify the name as they want; 
it is the end user's responsibility to make sure there are no duplicate names. 
Actually, I think that the name does not need to be unique, only the UUID does.

Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,

Jay Lau (Guangya Liu)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]Why not common definition about normal HTTP status code like 2xx and 3xx?

2015-06-02 Thread Neo Liu
On Tue, Jun 2, 2015 at 7:24 PM samuel sam...@lsd.ufcg.edu.br wrote:

 Hi Chenhong Liu,

 In addition, I think creating a common file to place non-error HTTP
 status codes is a good idea and can be discussed with the Keystone cores.

 Feel free to add a point to our weekly meeting, Tuesdays 18:00 UTC. [1]

Thanks, samuel. I will be online tonight.


 Sincerely,
 Samuel

 [1] https://wiki.openstack.org/wiki/Meetings/KeystoneMeeting

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] About the volume status exposure during migration

2015-06-02 Thread Sheng Bo Hou
Hi Avishay,

I really appreciate your comments on the spec I submitted for volume 
migration improvement (https://review.openstack.org/#/c/186327/).
I truly understand your concerns about exposing the migrating status to 
the end user (not admin). However, there is something I am confused about: 
when we would like to migrate a volume from an LVM to a Storwize back-end, 
we need to use the retype command. During this retype, the volume status is 
already set to retyping, which is also exposed to the end user (not admin). 
Do you see any issues with that? Is this something we need to change 
as well?


Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Updating Our Concept of Resources

2015-06-02 Thread Alexis Lee
Ed Leafe said on Mon, Jun 01, 2015 at 07:40:17AM -0500:
 We need to update our concept of a resource internally in Nova, both
 in the DB and in code, and stop thinking that every request should
 have a flavor.

If you allocate all the memory of a box to high-mem instances, you may
not be billing for all the CPU and disk which are now unusable. That's
why flavors were introduced, afaik, and it's still a valid need.

I totally agree the scheduler doesn't have to know anything about
flavors though. We should push them out to request validation in the
Nova API. This can be considered part of cleaning up the scheduler API.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-02 Thread Miguel Ángel Ajo
Ooook, fully understood now. Thanks Ihar & Ian for the clarification :)


Miguel Ángel Ajo


On Tuesday, 2 de June de 2015 at 13:33, Ihar Hrachyshka wrote:

  
 On 06/02/2015 10:10 AM, Miguel Ángel Ajo wrote:
  The backport seems reasonable IMO.
   
  Is this tested in a multihost environment?.
   
  I ask, because given the Ian explanation (which probably I got
  wrong), the issue is in the NET-NIC-VM path while the patch fixes
  the path in the network node (this is ran in the dhcp agent).
  dhcp-NIC-NET.
   
  
  
 If a packet goes out of your real NIC, then it gets a proper checksum
 attached. So the issue is single host only.
  
 Ihar
  
  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] python versions

2015-06-02 Thread Kirill Zaitsev
It seems that python-muranoclient is the last project from the murano-official 
group that still supports python2.6. Other projects do not have a 2.6 testing 
job (correct me if I'm wrong).

Personally I think it's time to drop support for 2.6 completely, and to add (at 
least non-voting) python3.4 test jobs.
This seems to fit the whole process of moving OpenStack projects towards Python 3: 
https://etherpad.openstack.org/p/liberty-cross-project-python3
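For reference, the mechanical change per project is small; roughly this
(illustrative snippets, the exact contents vary per repo):

# tox.ini -- drop the py26 env, add py34 (illustrative)
[tox]
envlist = py27,py34,pep8

# setup.cfg -- matching trove classifiers (illustrative)
classifier =
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3.4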

What do you think? Does anyone have any objections?

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] virtual machine can not get DHCP lease due packet has no checksum

2015-06-02 Thread Ihar Hrachyshka

On 06/02/2015 10:10 AM, Miguel Ángel Ajo wrote:
 The backport seems reasonable IMO.
 
 Is this tested in a multihost environment?.
 
 I ask, because given the Ian explanation (which probably I got
 wrong), the issue is in the NET-NIC-VM path while the patch fixes
 the path in the network node (this is ran in the dhcp agent).
 dhcp-NIC-NET.
 

If a packet goes out of your real NIC, then it gets a proper checksum
attached. So the issue is single host only.
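For reference, the fix adds a mangle rule through the agent's iptables
manager so the kernel fills in the checksum that the offload path left
empty; roughly this, from memory (see the review for the exact code):

iptables_manager.ipv4['mangle'].add_rule(
    'POSTROUTING',
    '-p udp -m udp --dport 68 -j CHECKSUM --checksum-fill')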

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Global Requirements] Adding apscheduler to global requirements

2015-06-02 Thread BORTMAN, Limor (Limor)
Hi all,
As part of a BP in mistral (Add seconds granularity in cron-trigger execute [1])
I would like to add apscheduler (Advanced Python Scheduler [2]) to the openstack 
Global Requirements.

Any objections?

[1] 
https://blueprints.launchpad.net/mistral/+spec/cron-trigger-seconds-granularity
[2] https://apscheduler.readthedocs.org/en/latest/
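For a quick feel of the library: APScheduler's cron trigger accepts a second
field that classic cron lacks, which is exactly what the blueprint needs. A
minimal standalone example (not Mistral code):

from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()


# Fire every 30 seconds; classic cron can't express sub-minute periods.
@sched.scheduled_job('cron', second='*/30')
def tick():
    print('tick')


sched.start()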
 

Thanks Stotland Limor 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Progressing/tracking work on libvirt / vif drivers

2015-06-02 Thread Irena Berezovsky
Hi Ian,
I like your proposal. It sounds very reasonable and makes the separation of
concerns between neutron and nova very clear. I think the vif plug script
support [1] will help to decouple neutron from the nova dependency.
Thank you for sharing this,
Irena
[1] https://review.openstack.org/#/c/162468/

On Tue, Jun 2, 2015 at 10:45 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:

 VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this to
 a hopefully interested audience.

 At the summit, we wrote up a spec we were thinking of doing at [1].  It
 actually proposes two things, which is a little naughty really, but hey.

 Firstly we propose that we turn binding into a negotiation, so that Nova
 can offer binding options it supports to Neutron and Neutron can pick the
 one it likes most.  This is necessary if you happen to use vhostuser with
 qemu, as it doesn't work in some circumstances, and is desirable all around,
 since it means you no longer have to configure Neutron to choose a binding
 type that Nova likes and Neutron can choose different binding types
 depending on circumstances.  As a bonus, it should make inter-version
 compatibility work better.

 Secondly we suggest that some of the information that Nova and Neutron
 currently calculate independently should instead be passed from Neutron to
 Nova, simplifying the Nova code since it no longer has to take an educated
 guess at things like TAP device names.  That one is more contentious, since
 in theory Neutron could pass an evil value, but if we can find some pattern
 that works (and 'pattern' might be literally true, in that you could get
 Nova to confirm that the TAP name begins with a magic string and is not
 going to be a physical device or other interface on the box) I think that
 would simplify the code there.

 Read, digest, see what you think.  I haven't put it forward yet (actually
 I've lost track of which projects take specs at this point) but I would
 very much like to get it implemented and it's not a drastic change (in
 fact, it's a no-op until we change Neutron to respect what Nova passes).

 [1] https://etherpad.openstack.org/p/YVR-nova-neutron-binding-spec

 On 1 June 2015 at 10:37, Neil Jerram neil.jer...@metaswitch.com wrote:

 On 01/06/15 17:45, Neil Jerram wrote:

  Many thanks, John & Dan.  I'll start by drafting a summary of the work
 that I'm aware of in this area, at
 https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work.


 OK, my first draft of this is now there at [1].  Please could folk with
 VIF-related work pending check that I haven't missed or misrepresented
 them?  Especially, please could owners of the 'Infiniband SR-IOV' and
 'mlnx_direct removal' changes confirm that those are really ready for core
 review?  It would be bad to ask for core review that wasn't in fact wanted.

 Thanks,
 Neil


 [1] https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] RequestSpec object and Instance model

2015-06-02 Thread Sylvain Bauza

Hi,

Currently working on implementing the RequestSpec object BP [1], I had 
some cool comments on my change here :

https://review.openstack.org/#/c/145528/12/nova/objects/request_spec.py,cm

Since we didn't discuss how to persist that RequestSpec object, I 
think the comment is valuable.


For the moment, the only agreed spec for persisting the object that we 
have is [2], but there is also a corollary here which means that we would 
have to persist more than the current fields: 
https://review.openstack.org/#/c/169901/3/specs/liberty/approved/add-buildrequest-obj.rst,cm


So, there are 2 possibilities:
 #1, we only persist the RequestSpec for the sole use of the Scheduler, and in 
that case we can leave it as it is - only a few fields from Instance are stored
 #2, we consider that the RequestSpec can be used for more than just the 
Scheduler, and then we need to make sure that we will have all the 
Instance fields.
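To illustrate the shape of each option, here is a hypothetical sketch (the 
field list is made up for illustration; it is not the actual object 
definition):

from nova.objects import base
from nova.objects import fields


# Hypothetical sketch only -- not the actual object definition.
@base.NovaObjectRegistry.register
class RequestSpec(base.NovaObject):
    fields = {
        # Option #1: persist only what the scheduler consumes.
        'project_id': fields.StringField(nullable=True),
        'flavor': fields.ObjectField('Flavor'),
        'num_instances': fields.IntegerField(default=1),
        # Option #2: additionally carry (or reference) the full set of
        # Instance fields, so consumers other than the scheduler can
        # rebuild an Instance from the persisted spec.
    }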



I'm not strongly opinionated on that; I consider that #2 is probably 
the best option, but there is a tie in my mind. Help me figure out 
what's the best option.


-Sylvain

[1] : 
http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/request-spec-object.html
[2] : 
http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/persist-request-spec.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-02 Thread Henrique Truta
Hi folks,

In Reseller [1], we'll have the domains concept merged into projects, which
means that we will have projects that behave as domains. Therefore, it
will be possible to have two projects with the same name in a hierarchy,
one being a domain and another being a regular project. For instance, the
following hierarchy will be valid:

A - is_domain project, with domain A

|

B - project

|

A - project with domain A

That hierarchy faces a problem when a user requests a project scoped token
by name, once she’ll pass “domain = ‘A’” and project.name = “A”. Currently,
we have no way to distinguish which project we are referring to. We have
two proposals for this.


1. Specify the whole hierarchy in the token request body, which means that
   when requesting a token for the child project of that hierarchy, we'll
   have in the scope field something like:

"project": {
    "domain": {
        "name": "A"
    },
    "name": ["A", "B", "A"]
}

If the project name is unique inside the domain (project "B", for example),
the hierarchy is optional.
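Spelled out as a full request, proposal 1 would look roughly like this
(sketch only; the list-valued name is the proposed extension, not the
current v3 API):

POST /v3/auth/tokens

{
    "auth": {
        "identity": { ... },
        "scope": {
            "project": {
                "domain": {"name": "A"},
                "name": ["A", "B", "A"]
            }
        }
    }
}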


2. When a conflict happens, always provide a token to the child project.
   That means that, in case we have a name clash as described, it will only
   be possible to get a project scoped token for the is_domain project through
   its id.



The former will give us more clarity and won’t create any more restrictions
than we already have. As a con, we currently are not able to get the names
of projects in the hierarchy above a given project. Although the latter
seems to hurt fewer people, it has the disadvantage of creating another set
of constraints that might complicate the UX in the future.

What do you think about that? We want to hear your opinion, so we can
discuss it at today’s Keystone Meeting.

[1]
https://github.com/openstack/keystone-specs/blob/master/specs/liberty/reseller.rst
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Liberty mid-cycle meetup

2015-06-02 Thread Matt Riedemann



On 5/11/2015 9:48 AM, Michael Still wrote:

Ok, given we've had a whole bunch people sign up already and no
complaints here, I think this is a done deal. So, you can now assume
that the dates are final. I will email people currently registered to
let them know as well.

I have added the mid-cycle to the wiki as well.

Cheers,
Michael

On Fri, May 8, 2015 at 4:49 PM, Michael Still mi...@stillhq.com wrote:

I thought I should let people know that we've had 14 people sign up
for the mid-cycle so far.

Michael

On Fri, May 8, 2015 at 3:55 PM, Michael Still mi...@stillhq.com wrote:

As discussed at the Nova meeting this morning, we'd like to gauge
interest in a mid-cycle meetup for the Liberty release.

To that end, I've created the following eventbrite event like we have
had for previous meetups. If you sign up, you're expressing interest
in the event and if we decide there's enough interest to go ahead we
will email you and let you know it's safe to book travel and that
your ticket is now a real thing.

To save you a few clicks, the proposed details are 21 July to 23 July,
at IBM in Rochester, MN.

So, I'd appreciate it if people could take a look at:

 
https://www.eventbrite.com.au/e/openstack-nova-liberty-mid-cycle-developer-meetup-tickets-16908756546

Thanks,
Michael

PS: I haven't added this to the wiki list of sprints because it might
not happen. When the decision is final, I'll add it to the wiki if we
decide to go ahead.

--
Rackspace Australia




--
Rackspace Australia






The wiki page has the details:

https://wiki.openstack.org/wiki/Sprints/NovaLibertySprint

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cross-Project meeting, Tue Jun 2nd, 21:00 UTC

2015-06-02 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting today at 21:00 UTC, with the
following agenda:

* Horizontal teams announcements
* Update the API WG merge process for Liberty [1]
* Vertical teams announcements
* Open discussion

[1] https://review.openstack.org/#/c/186836/

If you're from a horizontal team (Release management, QA, Infra, Docs,
Security, I18n...) or a vertical team (Nova, Swift, Keystone...) and
have something to communicate to the other teams, feel free to abuse the
relevant sections of that meeting and make sure it gets #info-ed by the
meetbot in the meeting summary.

Doug Hellmann has agreed to chair it this week. From now on we'll
rotate chairs for this meeting regularly. If you're interested please
contact me.

See you there !

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel

2015-06-02 Thread Steven Dake (stdake)
I can see both points of view.  Principle of least surprise applies here.  A 
list of bays without names would be surprising for a tenant imo :)  I don't 
particularly have a strong opinion, but my inclination is to lean towards 
requiring (possibly non-unique) names when creating bays.

Again I am not strongly opinionated on this point so I’ll roll with whatever 
hits the code base ;-)

Regards
-steve

From: Adrian Otto adrian.o...@rackspace.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Monday, June 1, 2015 at 11:08 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Does Bay/Baymodel name should be a 
required option when creating a Bay/Baymodel

-1. I disagree.

I am not convinced that requiring names is a good idea. I've asked several 
times why there is a desire to require names, and I'm not seeing any persuasive 
arguments that are not already addressed by UUIDs. We have UUID values to allow 
for acting upon an individual resource. Names are there as a convenience. 
Requiring names, especially unique names, would make Magnum harder to use for 
API users driving Magnum from other systems. I want to keep the friction as low 
as possible.

I'm fine with replacing None with an empty string.

Consistency with Nova would be a valid argument if we were being more 
restrictive, but that's not the case. We are more permissive. You can use 
Magnum in the same way you use Nova if you want, by adding names to all 
resources. I don't see the wisdom in forcing that style of use without a 
technical reason for it.

Thanks,

Adrian

On May 31, 2015, at 4:43 PM, Jay Lau jay.lau@gmail.com wrote:


Just want to use the ML to trigger more discussion here. There are now bugs/patches 
tracking this, but it seems more discussion is needed before we come to a 
conclusion.

https://bugs.launchpad.net/magnum/+bug/1453732
https://review.openstack.org/#/c/181839/
https://review.openstack.org/#/c/181837/
https://review.openstack.org/#/c/181847/
https://review.openstack.org/#/c/181843/

IMHO, making the Bay/Baymodel name a MUST will bring more flexibility to end 
users, as Magnum also supports operating on Bays/Baymodels via names, and the name 
might be more meaningful to end users.

Perhaps we can borrow some ideas from nova; the concepts in magnum can be mapped 
to nova as follows:

1) instance = bay
2) flavor = baymodel

So I think that a solution might be as follows:
1) Make name a MUST for both bay/baymodel
2) Update magnum client to use following style for bay-create and 
baymodel-create: DO NOT add --name option

root@devstack007:/tmp# nova boot
usage: nova boot [--flavor <flavor>] [--image <image>]
                 [--image-with <key=value>] [--boot-volume <volume_id>]
                 [--snapshot <snapshot_id>] [--min-count <number>]
                 [--max-count <number>] [--meta <key=value>]
                 [--file <dst-path=src-path>] [--key-name <key-name>]
                 [--user-data <user-data>]
                 [--availability-zone <availability-zone>]
                 [--security-groups <security-groups>]
                 [--block-device-mapping <dev-name=mapping>]
                 [--block-device key1=value1[,key2=value2...]]
                 [--swap <swap_size>]
                 [--ephemeral size=<size>[,format=<format>]]
                 [--hint <key=value>]
                 [--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>]
                 [--config-drive <value>] [--poll]
                 <name>
error: too few arguments
Try 'nova help boot' for more information.
root@devstack007:/tmp# nova flavor-create
usage: nova flavor-create [--ephemeral <ephemeral>] [--swap <swap>]
                          [--rxtx-factor <factor>] [--is-public <is-public>]
                          <name> <id> <ram> <disk> <vcpus>

Please share your comments, if any.

--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.concurrency release 1.10.0 (liberty)

2015-06-02 Thread doug
We are stoked to announce the release of:

oslo.concurrency 1.10.0: Oslo Concurrency library

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

For more details, please see the git log history below and:

http://launchpad.net/oslo.concurrency/+milestone/1.10.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

Changes in oslo.concurrency 1.9.0..1.10.0
-

9a963a9 Imported Translations from Transifex
b01e2a9 Sync from oslo-incubator
24033d3 Updated from global requirements
9433f0d Advertise support for Python3.4 / Remove support for 3.3
a592069 Updated from global requirements
e342fc8 Imported Translations from Transifex
926ee3b Remove run_cross_tests.sh
c0280ba Updated from global requirements
ddba72f Updated from global requirements

Diffstat (except docs and test files)
-

openstack-common.conf  |   2 +-
.../LC_MESSAGES/oslo.concurrency-log-error.po  |  12 +--
.../en_GB/LC_MESSAGES/oslo.concurrency-log-info.po |   4 -
.../locale/en_GB/LC_MESSAGES/oslo.concurrency.po   | 102 +---
.../fr/LC_MESSAGES/oslo.concurrency-log-error.po   |  12 +--
.../fr/LC_MESSAGES/oslo.concurrency-log-info.po|   2 -
.../locale/fr/LC_MESSAGES/oslo.concurrency.po  | 105 +
oslo_concurrency/openstack/common/fileutils.py |   9 +-
requirements.txt   |   4 +-
setup.cfg  |   2 +-
test-requirements.txt  |   4 +-
tox.ini|   4 -
13 files changed, 109 insertions(+), 244 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index cf4758d..df20d90 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-pbr>=0.6,!=0.7,<1.0
+pbr>=0.11,<2.0
@@ -8 +8 @@ iso8601>=0.1.9
-oslo.config>=1.9.3  # Apache-2.0
+oslo.config>=1.11.0  # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 0fb38b7..5808fd5 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ coverage>=3.6
-futures>=2.1.6
+futures>=3.0
@@ -15 +15 @@ sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
-eventlet>=0.16.1,!=0.17.0
+eventlet>=0.17.3



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Nominating Kirill Zaitsev for murano-core

2015-06-02 Thread Stan Lagun
+1 without any doubt

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

sla...@mirantis.com

On Tue, Jun 2, 2015 at 10:43 AM, Ekaterina Chernova efedor...@mirantis.com
wrote:

 +1

 Regards,
 Kate.

 On Tue, Jun 2, 2015 at 9:32 AM, Serg Melikyan smelik...@mirantis.com
 wrote:

 I'd like to propose Kirill Zaitsev as a core member of the Murano team.

 Kirill Zaitsev is an active member of our community; he implemented
 several blueprints in Kilo (https://launchpad.net/murano/+milestone/2015.1.0)
 and fixed a number of bugs. He maintains a really good score as a
 contributor:
 http://stackalytics.com/report/users/kzaitsev

 Existing Murano cores, please vote +1/-1 for the addition
 of Kirill to the murano-core.
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Progressing/tracking work on libvirt / vif drivers

2015-06-02 Thread Gary Kotton
Hi,
At the summit this was discussed in the nova sessions and there were a number 
of concerns regarding security etc.
Thanks
Gary

From: Irena Berezovsky irenab@gmail.com
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Tuesday, June 2, 2015 at 1:44 PM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Progressing/tracking work on libvirt / vif 
drivers

Hi Ian,
I like your proposal. It sounds very reasonable and makes the separation of 
concerns between neutron and nova very clear. I think the vif plug script 
support [1] will help to decouple neutron from the nova dependency.
Thank you for sharing this,
Irena
[1] https://review.openstack.org/#/c/162468/

On Tue, Jun 2, 2015 at 10:45 AM, Ian Wells ijw.ubu...@cack.org.uk wrote:
VIF plugging, but not precisely libvirt VIF plugging, so I'll tout this to a 
hopefully interested audience.

At the summit, we wrote up a spec we were thinking of doing at [1].  It 
actually proposes two things, which is a little naughty really, but hey.

Firstly we propose that we turn binding into a negotiation, so that Nova can 
offer binding options it supports to Neutron and Neutron can pick the one it 
likes most.  This is necessary if you happen to use vhostuser with qemu, as it 
doesn't work in some circumstances, and is desirable all around, since it means
you no longer have to configure Neutron to choose a binding type that Nova 
likes and Neutron can choose different binding types depending on 
circumstances.  As a bonus, it should make inter-version compatibility work 
better.
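As a sketch (the attribute carrying Nova's preferences is illustrative and 
nothing is settled; the binding:* attributes themselves are the existing 
ones), the exchange could look like:

# Nova -> Neutron, when asking for the port to be bound:
{"port": {"binding:host_id": "compute-1",
          "binding:profile": {"supported_vif_types": ["vhostuser", "ovs", "tap"]}}}

# Neutron -> Nova, after picking the type it can deliver on that host:
{"port": {"binding:vif_type": "vhostuser",
          "binding:vif_details": {"vhostuser_socket": "/var/run/vhostuser/port1"}}}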

Secondly we suggest that some of the information that Nova and Neutron 
currently calculate independently should instead be passed from Neutron to 
Nova, simplifying the Nova code since it no longer has to take an educated 
guess at things like TAP device names.  That one is more contentious, since in 
theory Neutron could pass an evil value, but if we can find some pattern that 
works (and 'pattern' might be literally true, in that you could get Nova to 
confirm that the TAP name begins with a magic string and is not going to be a 
physical device or other interface on the box) I think that would simplify the 
code there.

Read, digest, see what you think.  I haven't put it forward yet (actually I've 
lost track of which projects take specs at this point) but I would very much 
like to get it implemented and it's not a drastic change (in fact, it's a no-op 
until we change Neutron to respect what Nova passes).

[1] https://etherpad.openstack.org/p/YVR-nova-neutron-binding-spec

On 1 June 2015 at 10:37, Neil Jerram neil.jer...@metaswitch.com wrote:
On 01/06/15 17:45, Neil Jerram wrote:

Many thanks, John & Dan.  I'll start by drafting a summary of the work
that I'm aware of in this area, at
https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work.

OK, my first draft of this is now there at [1].  Please could folk with 
VIF-related work pending check that I haven't missed or misrepresented them?  
Especially, please could owners of the 'Infiniband SR-IOV' and 'mlnx_direct 
removal' changes confirm that those are really ready for core review?  It would 
be bad to ask for core review that wasn't in fact wanted.

Thanks,
Neil


[1] https://etherpad.openstack.org/p/liberty-nova-libvirt-vif-work


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][mercador] Announcing Mercador, a project to federate OpenStack cloud services.

2015-06-02 Thread Geoff Arnold
Hello,

I'm pleased to announce the development of a new project called Mercador 
(Portuguese for “merchant”).  Mercador will provide a mechanism for integrating 
OpenStack cloud services from one cloud service provider (CSP) into the set of 
services published by a second CSP. The mechanism is intended to be completely 
transparent to cloud service users, and to require minimal changes for 
participating CSPs. It is based on the concept of a virtual region, and builds 
on the hierarchical multitenant and Keystone-to-Keystone identity  federation 
work in Kilo. This project will begin as a StackForge project based upon a set 
of empty cookiecutter[1] repos. 

Please join us via IRC on #openstack-mercador on freenode.

[Repository info to come.]

I am holding a Doodle poll to select times for our first meeting.  This Doodle 
poll will close June 8th and meeting times will be announced on the mailing 
list at that time.  At our first IRC meeting, we will be selecting the core 
team members, so if you’re  interested in participating in this project, please 
try to attend.  

http://doodle.com/fsdm6ry6aytqf7w8


The initial core team includes:

Geoff Arnold (Cisco)
David Cheperdak (Cisco)
Orran Krieger (MOC)

For more details, check out our Wiki:

 https://wiki.openstack.org/wiki/Mercador

However, in view of the lively debates which have followed recent project 
announcements, I’m appending an FAQ [2].

Regards,
Geoff Arnold
--
[1]
https://github.com/openstack-dev/cookiecutter

[2]
FAQ
Q. What exactly is this project going to build?
A. The first deliverable is a system which will allow resources from CSP A to 
be made available to users of an OpenStack cloud operated by CSP B. We plan to 
demonstrate this in Tokyo.

Q. Can’t we do that today? CERN already does this, and it was the theme of the 
Identity Federation demonstration at the Vancouver summit.
A. Those examples all require administrators to collaborate on the static 
configuration of the various clouds. This system will support automated dynamic 
configuration.

Q. How?
A. The administrator of CSP A defines a set of “Virtual Regions”, each mapped 
into a Keystone Domain within one of her Regions. Then the admin of CSP B can 
select an available Virtual Region and make it available to his users just as 
though it was a regular Region of cloud B. (It shows up in Keystone and Horizon 
like other regions.)
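
To make the shape concrete, a publisher-side mapping might look roughly like 
this (a sketch only; all names are invented, since Mercador's data model 
doesn't exist yet):

    # Hypothetical sketch, not Mercador code.
    virtual_regions = {
        'vr-gold': {
            'backing_region': 'RegionOne',         # real region at CSP A
            'backing_domain': 'mercador-vr-gold',  # Keystone domain for its projects
            'subscribers': ['csp-b'],              # CSPs allowed to bind it
        },
    }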
 
Q. How do the users of CSP B experience this?
A. Users shouldn’t be able to tell the difference between one of CSP B’s own 
regions and a virtual region sourced from CSP A. (It should pass RefStack.)

Q. How is this implemented?
A. CSP A deploys a “publisher” service to define and publish Virtual Regions. 
CSP B deploys a “subscriber” service which talks to “publishers” to bind 
virtual regions. And there’s a CLI tool.

Q. Is that all?
A. The “publisher” is straightforward. The “subscriber” needs to be able to 
dynamically reconfigure Keystone and Horizon. This may require some minor 
changes.
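
For example, registering a bound virtual region in CSP B's Keystone could be 
as small as this (a sketch assuming kilo-era python-keystoneclient; the 
credentials and region id are invented, and endpoint wiring is omitted):

    from keystoneclient import session
    from keystoneclient.auth.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone-b:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    # Make the virtual region show up alongside cloud B's real regions.
    keystone.regions.create(id='vr-gold',
                            description='Virtual region published by CSP A')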

Q. How is resource allocation policy managed? How does CSP A control what’s 
available in a Virtual Region?
A. In Kilo, the Keystone team implemented Hierarchical Multitenancy (HMT), but 
the rest of OpenStack isn’t HMT-aware. We need quotas in Nova, Cinder, etc. to 
be extended to support HMT. 
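
To illustrate why: with HMT, an allocation in a child project has to be 
charged against every ancestor's limit as well. A toy sketch (not Nova's 
quota code; names invented):

    # 'parents' maps project -> parent project (or None at the root).
    def can_allocate(project, cores, usage, limits, parents):
        p = project
        while p is not None:
            if usage.get(p, 0) + cores > limits.get(p, float('inf')):
                return False  # an ancestor's quota would be exceeded
            p = parents.get(p)
        return True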

Q. This doesn’t meet my expectations for Service Federation. To me, Federation 
implies [insert list of cool intercloud functionality].
A. We’re concentrating on this one mechanism, which we think will be a 
foundation for a lot of interesting innovations. We’re collaborating with some 
of those innovators, like the team from the Massachusetts Open Cloud.

Q. There’s more to federation than simply wiring up the OpenStack services. 
What about operations and business integration – logging, metrics, billing, 
service assurance?
A. You’re right. However, right now most of those things are out of scope for 
OpenStack. We expect that the functionality we’re going to build will wind up 
being embedded in various OSS and BSS workflows.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Nominating Filip Blaha for murano-core

2015-06-02 Thread Stan Lagun
+1

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

sla...@mirantis.com

On Tue, Jun 2, 2015 at 9:25 AM, Serg Melikyan smelik...@mirantis.com
wrote:

 Folks, I'd like to propose Filip Blaha as a core member of the Murano team.

 Filip is an active member of our community and maintains a good score
 as a contributor:
 http://stackalytics.com/report/users/filip-blaha

 Existing Murano cores, please vote +1/-1 for the addition of Filip to
 the murano-core.
 --
 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
 http://mirantis.com | smelik...@mirantis.com

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] I think nova behaves poorly when booting multiple instances

2015-06-02 Thread Alexis Lee
Andrew Laski said on Mon, Jun 01, 2015 at 09:26:33AM -0400:
 However what these parameters give users, versus orchestrating
 outside of Nova, is the ability to have the instances all scheduled
 as a single block.

We should seek to provide this via persistent claims, i.e. add to the
API something like:

claim([ResourceRequest]): [ResourceClaim]
boot(ResourceClaim, Image, ...): Instance
free_claim([ResourceClaim]): None
check_claim([ResourceRequest]): [Boolean]

(this is not a polished proposal!)

This allows you to claim() space for many instances, either in one API
call or across several, before beginning to boot instances. check_claim
is an obvious extension for probing availability.
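
As a usage sketch (method names follow the proposal above; the client object, 
types and error handling are all invented):

    requests = [ResourceRequest(vcpus=4, ram_mb=8192)] * 10

    claims = nova.claim(requests)       # schedule all 10 as a single block
    try:
        servers = [nova.boot(c, image) for c in claims]
    except Exception:
        nova.free_claim(claims)         # release whatever wasn't consumed
        raise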

Paul Murray tells me there was a blueprint for this some time ago, but
I can't find a spec for it. I'm interested in pushing this; I'll put up
a spec at some point unless someone beats me to it.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

