[openstack-dev] [nova][neutron] bridge name generator for vif plugging

2014-12-15 Thread Ryota Mibu
Hi all,


We are proposing a change to move the bridge name generator (which creates a bridge 
name from the net-id or reads the integration bridge name from nova.conf) from Nova to 
Neutron. The following are the BPs in Nova and Neutron.

https://blueprints.launchpad.net/nova/+spec/neutron-vif-bridge-details
https://blueprints.launchpad.net/neutron/+spec/vif-plugging-metadata

I'd like to get your comments on whether this change is a relevant direction. I 
found a related comment in the Nova code [3] and guess that discussion happened in 
the context of vif-plugging and port-binding, but I'm not sure there was consensus 
about the bridge name.


https://github.com/openstack/nova/blob/2014.2/nova/network/neutronv2/api.py#L1298-1299


Thanks,
Ryota




[openstack-dev] anyway to pep8 check on a specified file

2014-12-15 Thread Chen CH Ji

tox -e pep8 usually takes several minutes on my test server. Actually I
only changed one file and I know something might be wrong there.
Is there any way to only check that file? Thanks a lot

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC


Re: [openstack-dev] anyway to pep8 check on a specified file

2014-12-15 Thread Sylvain Bauza


Le 15/12/2014 10:04, Chen CH Ji a écrit :


tox -e pep8 usually takes several minutes on my test server. Actually 
I only changed one file and I know something might be wrong there.

Is there any way to only check that file? Thanks a lot

It's really not necessary to check all the files if you only modified 
a single one.

You can just take the files you modified and run a check like this:

git diff HEAD^ --name-only | xargs tools/with_venv.sh flake8


Hope it helps,
-Sylvain
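
For reference, an equivalent one-liner for a single file (a sketch assuming a
standard OpenStack checkout; the file path is only an example, and passing
posargs through tox requires the project's tox.ini to forward them to flake8):

  # check one file through the project's venv wrapper
  tools/with_venv.sh flake8 nova/compute/api.py

  # or reuse the tox pep8 environment, if it forwards posargs
  tox -e pep8 -- nova/compute/api.py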










Re: [openstack-dev] anyway to pep8 check on a specified file

2014-12-15 Thread Sylvain Bauza


Le 15/12/2014 10:27, Sylvain Bauza a écrit :


Le 15/12/2014 10:04, Chen CH Ji a écrit :


tox -e pep8 usually takes several minutes on my test server. Actually 
I only changed one file and I know something might be wrong there.

Is there any way to only check that file? Thanks a lot

It's really not necessary to check all the files if you only 
modified a single one.

You can just take the files you modified and run a check like this:

git diff HEAD^ --name-only | xargs tools/with_venv.sh flake8




Eh, just replying to myself: I just saw there is a recent commit which 
added a -8 flag to run_tests.sh for checking PEP8 only against HEAD.


https://review.openstack.org/#/c/110746/

That's worth it :-)


Hope it helps,
-Sylvain














Re: [openstack-dev] anyway to pep8 check on a specified file

2014-12-15 Thread Daniel P. Berrange
On Mon, Dec 15, 2014 at 05:04:59PM +0800, Chen CH Ji wrote:
 
 tox -e pep8 usually takes several minutes on my test server. Actually I
 only changed one file and I know something might be wrong there.
 Is there any way to only check that file? Thanks a lot

Use

  ./run_tests.sh -8


That will only check pep8 against the files listed in the current
commit. If you want to check an entire branch patch series then

  git rebase -i master -x './run_tests.sh -8'
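
For completeness, a sketch of a diff-based variant that checks everything a
branch changed relative to master in a single pass (the grep filter and GNU
xargs -r flag are assumptions about the environment; "master..." is git's
merge-base range syntax, not an elision):

  git diff --name-only master... | grep '\.py$' | xargs -r flake8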

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



[openstack-dev] [Mistral] For-each

2014-12-15 Thread Nikolay Makhotkin
Hi,

Here is the doc with suggestions on specification for for-each feature.

You are free to comment and ask questions.

https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing



-- 
Best Regards,
Nikolay


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-15 Thread Mathieu Rohon
Hi Ryan,

We have been working on similar Use cases to announce /32 with the
Bagpipe BGPSpeaker that supports EVPN.
Please have a look at use case B in [1][2].
Note also that the L2population mechanism driver for ML2, which is
compatible with OVS, Linuxbridge and the ryu ofagent, is inspired by EVPN,
and I'm sure it could help in your use case (a configuration sketch follows
the references below).

[1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
[2]https://www.youtube.com/watch?v=q5z0aPrUZYcsns
[3]https://blueprints.launchpad.net/neutron/+spec/l2-population
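
For reference, a minimal configuration sketch for enabling it (assuming the
ML2 plugin with the OVS agent; the file path follows the usual packaging):

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  mechanism_drivers = openvswitch,l2population

  # in the OVS agent's configuration
  [agent]
  l2_population = True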

Mathieu

On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger
ryan.cleven...@rackspace.com wrote:
 Hi,

 At Rackspace, we have a need to create a higher level networking service
 primarily for the purpose of creating a Floating IP solution in our
 environment. The current solutions for Floating IPs, being tied to plugin
 implementations, do not meet our needs at scale for the following reasons:

 1. Limited endpoint H/A mainly targeting failover only and not multi-active
 endpoints,
 2. Lack of noisy neighbor and DDOS mitigation,
 3. IP fragmentation (with cells, public connectivity is terminated inside
 each cell leading to fragmentation and IP stranding when cell CPU/Memory use
 doesn't line up with allocated IP blocks. Abstracting public connectivity
 away from nova installations allows for much more efficient use of those
 precious IPv4 blocks).
 4. Diversity in transit (multiple encapsulation and transit types on a per
 floating ip basis).

 We realize that network infrastructures are often unique and such a solution
 would likely diverge from provider to provider. However, we would love to
 collaborate with the community to see if such a project could be built that
 would meet the needs of providers at scale. We believe that, at its core,
 this solution would boil down to terminating north-south traffic
 temporarily at a massively horizontally scalable centralized core and then
 encapsulating traffic east-west to a specific host based on the
 association setup via the current L3 router's extension's 'floatingips'
 resource.

 Our current idea involves using Open vSwitch for header rewriting and
 tunnel encapsulation combined with a set of Ryu applications for management:

 https://i.imgur.com/bivSdcC.png

 The Ryu application uses Ryu's BGP support to announce up to the Public
 Routing layer individual floating ips (/32's or /128's) which are then
 summarized and announced to the rest of the datacenter. If a particular
 floating ip is experiencing unusually large traffic (DDOS, slashdot effect,
 etc.), the Ryu application could change the announcements up to the Public
 layer to shift that traffic to dedicated hosts setup for that purpose. It
 also announces a single /32 Tunnel Endpoint ip downstream to the TunnelNet
 Routing system which provides transit to and from the cells and their
 hypervisors. Since traffic from either direction can then end up on any of
 the FLIP hosts, a simple flow table to modify the MAC and IP in either the
 SRC or DST fields (depending on traffic direction) allows the system to be
 completely stateless. We have proven this out (with static routing and
 flows) to work reliably in a small lab setup.

 On the hypervisor side, we currently plumb networks into separate OVS
 bridges. Another Ryu application would control the bridge that handles
 overlay networking to selectively divert traffic destined for the default
 gateway up to the FLIP NAT systems, taking into account any configured
 logical routing and local L2 traffic to pass out into the existing overlay
 fabric undisturbed.

 Adding in support for L2VPN EVPN
 (https://tools.ietf.org/html/draft-ietf-l2vpn-evpn-11) and L2VPN EVPN
 Overlay (https://tools.ietf.org/html/draft-sd-l2vpn-evpn-overlay-03) to the
 Ryu BGP speaker will allow the hypervisor side Ryu application to advertise
 up to the FLIP system reachability information to take into account VM
 failover, live-migrate, and supported encapsulation types. We believe that
 decoupling the tunnel endpoint discovery from the control plane
 (Nova/Neutron) will provide for a more robust solution as well as allow for
 use outside of openstack if desired.
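
 To illustrate the stateless rewrite described above, a rough sketch of what
 one such pair of flows could look like on a FLIP host (the bridge name, port
 numbers, MACs and IPs are hypothetical, and the real system would program
 these via Ryu rather than ovs-ofctl):

   # inbound: rewrite the floating IP/MAC to the VM's fixed IP/MAC and forward
   ovs-ofctl add-flow br-flip priority=100,ip,nw_dst=203.0.113.10,actions=mod_dl_dst:fa:16:3e:00:00:01,mod_nw_dst:10.0.0.5,output:2

   # outbound: rewrite the VM's fixed IP/MAC back to the floating IP/MAC
   ovs-ofctl add-flow br-flip priority=100,ip,nw_src=10.0.0.5,actions=mod_dl_src:fa:16:3e:00:00:02,mod_nw_src:203.0.113.10,output:1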

 

 Ryan Clevenger
 Manager, Cloud Engineering - US
 m: 678.548.7261
 e: ryan.cleven...@rackspace.com






Re: [openstack-dev] [nova][neutron] bridge name generator for vif plugging

2014-12-15 Thread Ian Wells
Hey Ryota,

A better way of describing it would be that the bridge name is, at present,
generated in *both* Nova *and* Neutron, and the VIF type semantics define
how it's calculated.  I think you're right that in both cases it would make
more sense for Neutron to tell Nova what the connection endpoint was going
to be rather than have Nova calculate it independently.  I'm not sure that
that necessarily requires two blueprints, and you don't have a spec there
at the moment, which is a problem because the Neutron spec deadline is upon
us, but the idea's a good one.  (You might get away without a Neutron spec,
since the change to Neutron to add the information should be small and
backward compatible, but that's not something I can make judgement on.)

If we changed this, then your options are to make new plugging types where
the name is exchanged rather than calculated or use the old plugging types
and provide (from Neutron) and use if provided (in Nova) the name.  You'd
need to think carefully about upgrade scenarios to make sure that changing
version on either side is going to work.

VIF_TYPE_TAP, while somewhat different in its focus, is also moving in the
same direction of a more logical interface between Nova and Neutron.  That
plus this suggests that we should have VIF_TYPE_TAP handing over the TAP
device name to use, and similarly create a VIF_TYPE_BRIDGE (passing the
bridge name) and slightly modify VIF_TYPE_VHOSTUSER before it gets
established (to add the socket name).

Did you have any thoughts on how the metadata should be stored on the port?
-- 
Ian.


On 15 December 2014 at 10:01, Ryota Mibu r-m...@cq.jp.nec.com wrote:

 Hi all,


 We are proposing a change to move the bridge name generator (which creates a
 bridge name from the net-id or reads the integration bridge name from
 nova.conf) from Nova to Neutron. The following are the BPs in Nova and Neutron.

 https://blueprints.launchpad.net/nova/+spec/neutron-vif-bridge-details
 https://blueprints.launchpad.net/neutron/+spec/vif-plugging-metadata

 I'd like to get your comments on whether this change is a relevant
 direction. I found a related comment in the Nova code [3] and guess that
 discussion happened in the context of vif-plugging and port-binding, but I'm
 not sure there was consensus about the bridge name.


 https://github.com/openstack/nova/blob/2014.2/nova/network/neutronv2/api.py#L1298-1299


 Thanks,
 Ryota





Re: [openstack-dev] [nova][neutron] bridge name generator for vif plugging

2014-12-15 Thread Daniel P. Berrange
On Mon, Dec 15, 2014 at 11:15:56AM +0100, Ian Wells wrote:
 Hey Ryota,
 
 A better way of describing it would be that the bridge name is, at present,
 generated in *both* Nova *and* Neutron, and the VIF type semantics define
 how it's calculated.  I think you're right that in both cases it would make
 more sense for Neutron to tell Nova what the connection endpoint was going
 to be rather than have Nova calculate it independently.  I'm not sure that
 that necessarily requires two blueprints, and you don't have a spec there
 at the moment, which is a problem because the Neutron spec deadline is upon
 us, but the idea's a good one.  (You might get away without a Neutron spec,
 since the change to Neutron to add the information should be small and
 backward compatible, but that's not something I can make judgement on.)

Yep, the fact that both Nova & Neutron calculate the bridge name is a
historical accident. Originally Nova did it, because nova-network was
the only solution. Then Neutron did it too, so that it matched what Nova
was doing. Clearly, if we had had Neutron right from the start, it
would have been Neutron's responsibility to do this. Nothing in Nova
cares what the names are from a functional POV - it just needs to
be told what to use.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-15 Thread Neil Jerram
Daniel P. Berrange berra...@redhat.com writes:

 Failing that though, I could see a way to accomplish a similar thing
 without a Neutron launched agent. If one of the VIF type binding
 parameters were the name of a script, we could run that script on
 plug & unplug. So we'd have a finite number of VIF types, and each
 new Neutron mechanism would merely have to provide a script to invoke

 eg consider the existing midonet & iovisor VIF types as an example.
 Both of them use the libvirt ethernet config, but have different
 things running in their plug methods. If we had a mechanism for
 associating a plug script with a vif type, we could use a single
 VIF type for both.

 eg iovisor port binding info would contain

   vif_type=ethernet
   vif_plug_script=/usr/bin/neutron-iovisor-vif-plug

 while midonet would contain

   vif_type=ethernet
   vif_plug_script=/usr/bin/neutron-midonet-vif-plug


 And so you see implementing a new Neutron mechanism in this way would
 not require *any* changes in Nova whatsoever. The work would be entirely
 self-contained within the scope of Neutron. It is simply a packaging
 task to get the vif script installed on the compute hosts, so that Nova
 can execute it.

 This is essentially providing a flexible VIF plugin system for Nova,
 without having to have it plug directly into the Nova codebase with
 the API & RPC stability constraints that implies.

I agree that this is a very promising idea.  But... what about the
problem that it is libvirt-specific?  Does that matter?

Regards,
Neil
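
To make the proposal concrete, a minimal sketch of what one of these plug
scripts might look like (the script name, argument convention and bridge
handling are all hypothetical; no such interface is actually defined yet):

  #!/bin/sh
  # hypothetical /usr/bin/neutron-example-vif-plug
  # assumed calling convention: $1 = plug|unplug, $2 = tap device name
  action=$1
  dev=$2
  case $action in
    plug)
      ip link set "$dev" up
      # backend-specific wiring goes here, e.g. attaching $dev to the
      # mechanism's own bridge or notifying its agent
      ;;
    unplug)
      ip link set "$dev" down
      ;;
  esac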




[openstack-dev] [Fuel] Building Fuel plugins with UI part

2014-12-15 Thread Anton Zemlyanov
My experience with building Fuel plugins with a UI part is the following. To 
build a UI-less plugin, it takes 3 seconds and these commands:

git clone https://github.com/AlgoTrader/test-plugin.git
cd ./test-plugin
fpb --build ./

When UI is added, the build starts to look like this and takes many minutes:

git clone https://github.com/AlgoTrader/test-plugin.git
git clone https://github.com/stackforge/fuel-web.git
cd ./fuel-web
git fetch https://review.openstack.org/stackforge/fuel-web
refs/changes/00/112600/24 && git checkout FETCH_HEAD
cd ..
mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin
cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin
cd ./fuel-web/nailgun
npm install && npm update
grunt build --static-dir=static_compressed
cd ../..
rm -rf ./test-plugin/ui
mkdir ./test-plugin/ui
cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/*
./test-plugin/ui
cd ./test-plugin
fpb --build ./

I think we need something less complex and fragile.

Anton


Re: [openstack-dev] anyway to pep8 check on a specified file

2014-12-15 Thread Sahid Orentino Ferdjaoui
On Mon, Dec 15, 2014 at 09:37:23AM +, Daniel P. Berrange wrote:
 On Mon, Dec 15, 2014 at 05:04:59PM +0800, Chen CH Ji wrote:
  
  tox -e pep8 usually takes several minutes on my test server. Actually I
  only changed one file and I know something might be wrong there.
  Is there any way to only check that file? Thanks a lot
 
 Use
 
   ./run_tests.sh -8
 
 
 That will only check pep8 against the files listed in the current
 commit. If you want to check an entire branch patch series then
 
    git rebase -i master -x './run_tests.sh -8'

Really useful point!

s.

 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 



Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-15 Thread A, Keshava
Mathieu,

I have been thinking of starting MPLS right from the CN for the L2VPN/EVPN
scenario also.

Below are my queries w.r.t. supporting MPLS from OVS:

1. Will MPLS be used even for VM-to-VM traffic across CNs, generated by OVS?
2. Will MPLS be originated right from OVS and mapped at the gateway (it may
be a NN/hardware router) to the SP network? So will MPLS carry 2 labels
(one for hop-by-hop forwarding, and the other one to identify the end
network)?
3. Will MPLS run over even the physical network infrastructure also?
4. How will the labels be mapped between the virtual and physical worlds?
5. Who manages the label space? The virtual world, the physical world, or
both (OpenStack + ODL)?
6. Will the labels be nested (i.e. will L3-VPN-like end-to-end MPLS
connectivity be established)?
7. Or will it be label stitching between the virtual and physical networks?
How will the end-to-end path be set up?

Let me know your opinion on the same.

regards,
keshava


-Original Message-
From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com] 
Sent: Monday, December 15, 2014 3:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and 
collaboration

Hi Ryan,

We have been working on similar Use cases to announce /32 with the Bagpipe 
BGPSpeaker that supports EVPN.
Please have a look at use case B in [1][2].
Note also that the L2population Mechanism driver for ML2, that is compatible 
with OVS, Linuxbridge and ryu ofagent, is inspired by EVPN, and I'm sure it 
could help in your use case

[1]http://fr.slideshare.net/ThomasMorin1/neutron-and-bgp-vpns-with-bagpipe
[2]https://www.youtube.com/watch?v=q5z0aPrUZYcsns
[3]https://blueprints.launchpad.net/neutron/+spec/l2-population

Mathieu

On Thu, Dec 4, 2014 at 12:02 AM, Ryan Clevenger ryan.cleven...@rackspace.com 
wrote:
 Hi,

 At Rackspace, we have a need to create a higher level networking 
 service primarily for the purpose of creating a Floating IP solution 
 in our environment. The current solutions for Floating IPs, being tied 
 to plugin implementations, does not meet our needs at scale for the following 
 reasons:

 1. Limited endpoint H/A mainly targeting failover only and not 
 multi-active endpoints, 2. Lack of noisy neighbor and DDOS mitigation, 
 3. IP fragmentation (with cells, public connectivity is terminated 
 inside each cell leading to fragmentation and IP stranding when cell 
 CPU/Memory use doesn't line up with allocated IP blocks. Abstracting 
 public connectivity away from nova installations allows for much more 
 efficient use of those precious IPv4 blocks).
 4. Diversity in transit (multiple encapsulation and transit types on a 
 per floating ip basis).

 We realize that network infrastructures are often unique and such a 
 solution would likely diverge from provider to provider. However, we 
 would love to collaborate with the community to see if such a project 
 could be built that would meet the needs of providers at scale. We 
 believe that, at its core, this solution would boil down to 
 terminating north-south traffic temporarily at a massively 
 horizontally scalable centralized core and then encapsulating traffic 
 east-west to a specific host based on the association setup via the current 
 L3 router's extension's 'floatingips'
 resource.

 Our current idea, involves using Open vSwitch for header rewriting and 
 tunnel encapsulation combined with a set of Ryu applications for management:

 https://i.imgur.com/bivSdcC.png

 The Ryu application uses Ryu's BGP support to announce up to the 
 Public Routing layer individual floating ips (/32's or /128's) which 
 are then summarized and announced to the rest of the datacenter. If a 
 particular floating ip is experiencing unusually large traffic (DDOS, 
 slashdot effect, etc.), the Ryu application could change the 
 announcements up to the Public layer to shift that traffic to 
 dedicated hosts setup for that purpose. It also announces a single /32 
 Tunnel Endpoint ip downstream to the TunnelNet Routing system which 
 provides transit to and from the cells and their hypervisors. Since 
 traffic from either direction can then end up on any of the FLIP 
 hosts, a simple flow table to modify the MAC and IP in either the SRC 
 or DST fields (depending on traffic direction) allows the system to be 
 completely stateless. We have proven this out (with static routing and
 flows) to work reliably in a small lab setup.

 On the hypervisor side, we currently plumb networks into separate OVS 
 bridges. Another Ryu application would control the bridge that handles 
 overlay networking to selectively divert traffic destined for the 
 default gateway up to the FLIP NAT systems, taking into account any 
 configured logical routing and local L2 traffic to pass out 

Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-15 Thread Daniel P. Berrange
On Mon, Dec 15, 2014 at 10:36:59AM +, Neil Jerram wrote:
 Daniel P. Berrange berra...@redhat.com writes:
 
  Failing that though, I could see a way to accomplish a similar thing
  without a Neutron launched agent. If one of the VIF type binding
  parameters were the name of a script, we could run that script on
   plug & unplug. So we'd have a finite number of VIF types, and each
  new Neutron mechanism would merely have to provide a script to invoke
 
   eg consider the existing midonet & iovisor VIF types as an example.
  Both of them use the libvirt ethernet config, but have different
  things running in their plug methods. If we had a mechanism for
  associating a plug script with a vif type, we could use a single
  VIF type for both.
 
  eg iovisor port binding info would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-iovisor-vif-plug
 
  while midonet would contain
 
vif_type=ethernet
vif_plug_script=/usr/bin/neutron-midonet-vif-plug
 
 
  And so you see implementing a new Neutron mechanism in this way would
  not require *any* changes in Nova whatsoever. The work would be entirely
  self-contained within the scope of Neutron. It is simply a packaging
  task to get the vif script installed on the compute hosts, so that Nova
  can execute it.
 
  This is essentially providing a flexible VIF plugin system for Nova,
  without having to have it plug directly into the Nova codebase with
   the API & RPC stability constraints that implies.
 
 I agree that this is a very promising idea.  But... what about the
 problem that it is libvirt-specific?  Does that matter?

Libvirt defines terminology that is generally applicable to different
hypervisors. Of course not all hypervisors will be capable of supporting
all VIF types, but that's true no matter what terminology you choose,
so I don't see any problem here.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] Kilo specs review day

2014-12-15 Thread Neil Jerram
Hi Joe,

Joe Gordon joe.gord...@gmail.com writes:

 In preparation, I put together a nova-specs dashboard:

 https://review.openstack.org/141137

 https://review.openstack.org/#/dashboard/?foreach=project%3A%5Eopenstack%2Fnova-specs+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%252cjenkins+NOT+label%3ACode-Review%3E%3D-2%252cself+branch%3Amastertitle=Nova+SpecsYour+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3AselfNeeds+final+%2B2=label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-specs-core+label%3ACode-Review%3C%3D-1%29+limit%3A100Passed+Jenkins%2C+Positive+Nova-Core+Feedback=NOT+label%3ACode-Review%3E%3D2+%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3C%3D-1%29+limit%3A100Passed+Jenkins%2C+No+Positive+Nova-Core+Feedback%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3C%3D-1+NOT+label%3ACode-Review%3E%3D2+NOT%28reviewerin%3Anova-core+label%3ACode-Review%3E%3D1%29+limit%3A100Wayward+Changes+%28Changes+with+no+code+review+in+the+last+7+days%29=NOT+label%3ACode-Review%3C%3D2+age%3A7dSome+negative
 
+feedback%2C+might+still+be+worth+commenting=label%3ACode-Review%3D-1+NOT+label%3ACode-Review%3D-2+limit%3A100Dead+Specs=label%3ACode-Review%3C%3D-2

My Nova spec (https://review.openstack.org/#/c/130732/) does not appear
on this dashboard, even though I believe it's in good standing and - I
hope - close to approval.  Do you know why - does it mean that I've set
some metadata field somewhere wrongly?

Many thanks,
 Neil



Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-15 Thread Anna Kamyshnikova
Looking at all the comments, it seems that the existing change is reasonable. I will
update it with a link to this thread.

Thanks!

Regards,
Ann Kamyshnikova

On Sat, Dec 13, 2014 at 1:15 AM, Rochelle Grober rochelle.gro...@huawei.com
 wrote:





 Morgan Fainberg [mailto:morgan.fainb...@gmail.com] *on* Friday, December
 12, 2014 2:01 PM wrote:
 On Friday, December 12, 2014, Sean Dague s...@dague.net wrote:

 On 12/12/2014 01:05 PM, Maru Newby wrote:
 
  On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net wrote:
 
  On 12/11/2014 04:16 PM, Jay Pipes wrote:
  On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
  On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:
  On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
  On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com wrote:
 
  On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
  On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com
  mailto:jaypi...@gmail.com wrote:
 
  I'm generally in favor of making name attributes opaque, utf-8
  strings that
  are entirely user-defined and have no constraints on them. I
  consider the
  name to be just a tag that the user places on some resource. It
  is the
  resource's ID that is unique.
 
  I do realize that Nova takes a different approach to *some*
  resources,
  including the security group name.
 
  End of the day, it's probably just a personal preference whether
  names
  should be unique to a tenant/user or not.
 
  Maru had asked me my opinion on whether names should be unique
 and I
  answered my personal opinion that no, they should not be, and if
  Neutron
  needed to ensure that there was one and only one default security
  group for
  a tenant, that a way to accomplish such a thing in a race-free
  way, without
  use of SELECT FOR UPDATE, was to use the approach I put into the
  pastebin on
  the review above.
 
 
  I agree with Jay.  We should not care about how a user names the
  resource.
  There other ways to prevent this race and Jay’s suggestion is a
  good one.
 
  However we should open a bug against Horizon because the user
  experience there
  is terrible with duplicate security group names.
 
  The reason security group names are unique is that the ec2 api
  supports source
  rule specifications by tenant_id (user_id in amazon) and name, so
  not enforcing
  uniqueness means that invocation in the ec2 api will either fail or
 be
  non-deterministic in some way.
 
  So we should couple our API evolution to EC2 API then?
 
  -jay
 
  No I was just pointing out the historical reason for uniqueness, and
  hopefully
  encouraging someone to find the best behavior for the ec2 api if we
  are going
  to keep the incompatibility there. Also I personally feel the ux is
  better
  with unique names, but it is only a slight preference.
 
  Sorry for snapping, you made a fair point.
 
  Yeh, honestly, I agree with Vish. I do feel that the UX of that
  constraint is useful. Otherwise you get into having to show people UUIDs
  in a lot more places. While those are good for consistency, they are
  kind of terrible to show to people.
 
  While there is a good case for the UX of unique names - it also makes
 orchestration via tools like puppet a heck of a lot simpler - the fact is
 that most OpenStack resources do not require unique names.  That being the
 case, why would we want security groups to deviate from this convention?

 Maybe the other ones are the broken ones?

 Honestly, any sanely usable system makes names unique inside a
 container. Like files in a directory. In this case the tenant is the
 container, which makes sense.

 It is one of many places that OpenStack is not consistent. But I'd
 rather make things consistent and more usable than consistent and less.



 +1.



 More consistent and more usable is a good approach. The name uniqueness
 has prior art in OpenStack - keystone keeps project names unique within a
 domain (the domain is the container); similarly, usernames can't be
 duplicated in the same domain. It would be silly to auth with the user ID;
 likewise, unique names for security groups in the container (tenant) make
 a lot of sense from a UX perspective.



 *[Rockyg] +1*

 *Especially when dealing with domain data that are managed by humans,
 human-visible uniqueness is important for understanding *and* efficiency.
 Tenant security is expected to be managed by the tenant admin, not some
 automated “robot admin”, and as such needs to be clear, memorable and
 separable between instances.  Unique names are the most straightforward (and
 easiest to enforce) way to do this for humans. Humans read and differentiate
 alphanumerics, so that should be the standard differentiator when humans
 are expected to interact and reason about containers.*



 --Morgan




Re: [openstack-dev] [nova][neutron] bridge name generator for vif plugging

2014-12-15 Thread Ian Wells
Let me write a spec and see what you both think.  I have a couple of things
we could address here, and while it's a bit late it wouldn't be a dramatic
thing to fix and it might be acceptable.

On 15 December 2014 at 11:28, Daniel P. Berrange berra...@redhat.com
wrote:

 On Mon, Dec 15, 2014 at 11:15:56AM +0100, Ian Wells wrote:
  Hey Ryota,
 
  A better way of describing it would be that the bridge name is, at
 present,
  generated in *both* Nova *and* Neutron, and the VIF type semantics define
  how it's calculated.  I think you're right that in both cases it would
 make
  more sense for Neutron to tell Nova what the connection endpoint was
 going
  to be rather than have Nova calculate it independently.  I'm not sure
 that
  that necessarily requires two blueprints, and you don't have a spec there
  at the moment, which is a problem because the Neutron spec deadline is
 upon
  us, but the idea's a good one.  (You might get away without a Neutron
 spec,
  since the change to Neutron to add the information should be small and
  backward compatible, but that's not something I can make judgement on.)

 Yep, the fact that both Nova & Neutron calculate the bridge name is a
 historical accident. Originally Nova did it, because nova-network was
 the only solution. Then Neutron did it too, so that it matched what Nova
 was doing. Clearly, if we had had Neutron right from the start, it
 would have been Neutron's responsibility to do this. Nothing in Nova
 cares what the names are from a functional POV - it just needs to
 be told what to use.

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
 |: http://libvirt.org  -o- http://virt-manager.org
 :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
 :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
 :|




Re: [openstack-dev] [horizon] REST and Django

2014-12-15 Thread Tihomir Trifonov
Travis,

That said, I can see a few ways that we could use the same REST decorator
 code and provide direct access to the API.  We’d simply provide a class
 where the url_regex maps to the desired path and gives direct passthrough.
 Maybe that kind of passthrough could always be provided for ease of
 customization / extensibility and additional methods with wrappers provided
 when necessary.



I completely agree on this. We can use the REST decorator to handle either
some really specific cases, or to handle some features, like pagination, in
a general way for all entities, if possible. What I argued against was that
it is unnecessary to have the duplicated code path JS -> REST -> APIClient
if we can call JS -> (auth wrapper) -> APIClient directly. Also, there are
some examples where the middleware wrapper hides some functionality, or does
some unneeded processing, that we may completely skip.

In the given example, we don't need the (images, has_prev, has_more)
return value. What we really need is the list of images, sliced based on
the request.GET parameters and the config.API_RESULT_LIMIT. Then the
whole check for has_prev/has_more should be done in the client. Currently
this processing was done at the server, as it was needed by the Django
rendering engine. Now it is not. So I am basically talking about
simplifying the middleware layer as much as possible, and moving the
presentation logic into the JS client.


Also, to answer the comment of Thai - there is a lot of work that the
server will still do - like the translation - I guess we should load the
angular templates from the server with the translation applied rather than
putting them into plain js files. I'm not sure what the best options are
here. But still - there is a lot of unneeded code currently in the
openstack_dashboard/api/*.py files.

So I guess the current approach with Django-REST might fit our needs. We
just have to look over each /api/ file in greater detail and remove the
code that will work better on the client.

Let's move the discussion in Gerrit, and discuss the api wrapper proposed
by Richard. I believe we are on the same page now, I just needed to clarify
for myself that we are not going to just replace the Django with REST, but
we want to make Horizon a really flexible and powerful application.


On Sat, Dec 13, 2014 at 1:09 AM, Tripp, Travis S travis.tr...@hp.com
wrote:

 Tihomir,

 Today I added one glance call based on Richard’s decorator pattern[1] and
 started to play with incorporating some of your ideas. Please note, I only
 had limited time today.  That is passing the kwargs through to the glance
 client. This was an interesting first choice, because it immediately
 highlighted a concrete example of the horizon glance wrapper
 post-processing still being useful (rather than being a direct pass-through
 with no wrapper). See below. If you have some concrete code examples
 of your ideas, it would be helpful.

 [1]
 https://review.openstack.org/#/c/141273/2/openstack_dashboard/api/rest/glance.py

 With the patch, basically, you can call the following and all of the GET
 parameters get passed directly through to the horizon glance client and you
 get results back as expected.


 http://localhost:8002/api/glance/images/?sort_dir=desc&sort_key=created_at&paginate=True&marker=bb2cfb1c-2234-4f54-aec5-b4916fe2d747

 If you pass in an incorrect sort_key, the glance client returns the
 following error message which propagates back to the REST caller as an
 error with the message:

 sort_key must be one of the following: name, status, container_format,
 disk_format, size, id, created_at, updated_at.

 This is done by passing **request.GET.dict() through.

 Please note, that if you try this (with POSTMAN, for example), you need to
 set the header of X-Requested-With = XMLHttpRequest
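
 For example, a quick way to exercise this from the command line (a sketch;
 the session cookie value is elided and depends on your Horizon auth setup):

   curl -H 'X-Requested-With: XMLHttpRequest' -b 'sessionid=...' \
     'http://localhost:8002/api/glance/images/?sort_dir=desc&sort_key=created_at&paginate=True'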

 So, what issues did it immediately call out with directly invoking the
 client?

 The python-glanceclient internally handles pagination by returning a
 generator.  Each iteration on the generator will handle making a request
 for the next page of data. If you were to just do something like return
 list(image_generator) to serialize it back out to the caller, it would
 actually end up making a call back to the server X times to fetch all pages
 before serializing back (thereby not really paginating). The horizon glance
 client wrapper today handles this by using islice intelligently along with
 honoring the API_RESULT_LIMIT setting in Horizon. So, this gives a direct
 example of where the wrapper does something that a direct passthrough to
 the client would not allow.

 That said, I can see a few ways that we could use the same REST decorator
 code and provide direct access to the API.  We’d simply provide a class
 where the url_regex maps to the desired path and gives direct passthrough.
 Maybe that kind of passthrough could always be provided for ease of
 customization / extensibility and additional methods with wrappers provided
 when necessary.  I need to leave 

Re: [openstack-dev] [Fuel] Building Fuel plugins with UI part

2014-12-15 Thread Przemyslaw Kaminski
First of all, compiling statics shouldn't be a required step. No one 
does this during development.
For production-ready plugins, the compiled files should already be 
included in the GitHub repos and installation of a plugin should just be a 
matter of downloading it. The API should then take care of informing the 
UI what plugins are installed.

The npm install step is mostly one-time.
The grunt build step for the plugin should basically just compile the 
static files of the plugin and not the whole project. Besides, with one 
file this is not extensible -- for N plugins would we build 2^N files 
with all possible combinations of included plugins? :)
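
If the compiled statics were committed to the plugin repository as suggested,
the build would collapse back to the UI-less case (a sketch reusing Anton's
commands; fpb picking up a pre-built ui/ directory as-is is an assumption):

  git clone https://github.com/AlgoTrader/test-plugin.git
  cd ./test-plugin
  # ui/ already holds the compiled files, so no fuel-web checkout,
  # npm install or grunt build is needed at packaging time
  fpb --build ./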


P.

On 12/15/2014 11:35 AM, Anton Zemlyanov wrote:
My experience with building Fuel plugins with a UI part is the following. To 
build a UI-less plugin, it takes 3 seconds and these commands:


git clone https://github.com/AlgoTrader/test-plugin.git
cd ./test-plugin
fpb --build ./

When UI is added, the build starts to look like this and takes many minutes:

git clone https://github.com/AlgoTrader/test-plugin.git
git clone https://github.com/stackforge/fuel-web.git
cd ./fuel-web
git fetch https://review.openstack.org/stackforge/fuel-web 
refs/changes/00/112600/24 && git checkout FETCH_HEAD

cd ..
mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin
cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin
cd ./fuel-web/nailgun
npm install && npm update
grunt build --static-dir=static_compressed
cd ../..
rm -rf ./test-plugin/ui
mkdir ./test-plugin/ui
cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/* 
./test-plugin/ui

cd ./test-plugin
fpb --build ./

I think we need something less complex and fragile.

Anton








Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2014-12-15 Thread Tomasz Napierala
Also +1 here.
In huge envs we already have problems with parsing performance. In the long 
term we need to think about another log management solution.


 On 12 Dec 2014, at 23:17, Igor Kalnitsky ikalnit...@mirantis.com wrote:
 
 +1 to stop parsing logs on UI and show them as is. I think it's more
 than enough for all users.
 
 On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:
 We have a high priority bug in 6.0:
 https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.
 
 Our openstack services used to send logs in a strange format with an extra
 copy of the timestamp and log level:
 ==> ./neutron-metadata-agent.log <==
 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO
 neutron.common.config [-] Logging enabled!
 
 And we have a workaround for this. We hide the extra timestamp and use the
 second log level.
 
 In Juno some of the services have updated oslo.logging and now send logs in
 a simple format:
 ==> ./nova-api.log <==
 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
 /etc/nova/api-paste.ini
 
 In order to keep backward compatibility and deal with both formats we have a
 dirty workaround for our workaround:
 https://review.openstack.org/#/c/141450/
 
 As I see, our best choice here is to throw away all workarounds and show
 logs on UI as is. If service sends duplicated data - we should show
 duplicated data.
 
 Long term fix here is to update oslo.logging in all packages. We can do it
 in 6.1.
 
 
 

-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com









Re: [openstack-dev] [Mistral] Global Context and Execution Environment

2014-12-15 Thread Renat Akhmerov
Hi,

 It looked good and I began to write down the summary: 
 https://etherpad.openstack.org/p/mistral-global-context
Thanks, I left my comments in there.

 What problems are we trying to solve: 
 1) reduce passing the same parameters over and over from parent to child
 2) “automatically” make a parameter accessible to most actions without typing 
 it all over (like auth token) 

I agree that it’s one of the angles from which we’re looking at the problem. 
However, IMO, it’s wider than just these two points. My perspective is that we 
are, first of all, discussing workflow variables’ scoping (see my previous 
email in this thread). So I would rather focus on that. Let’s list all the 
scopes that would make sense, their semantics and the use cases where each of 
them could solve particular usability problems (I’m saying “usability problems” 
because it’s really all about usability only).

The reason I’m trying to discuss all this from this point of view is because I 
think we should try to be more formal on things like that. 

 Can #1 be solved by passing input to subworkflows automatically

No, it can’t. “input” is something that gets validated upon workflow execution 
(which happens now) and can’t be arbitrarily passed all the way through because 
of that. If we introduce something like “global” scope then we can always pass 
variables of this scope down to nested workflows using a separate mechanism 
(i.e. different parameter of start_workflow() method). 

 Can #2 be solved somehow else? Default passing of arbitrary parameters to 
 action seems like breaking abstraction

Yes, unless explicitly specified I would not give actions more than they need. 
Encapsulation has been proven to be a good thing.

 Thoughts? need to brainstorm further….

Just once again, I appeal to talking about scopes, their semantics and use cases 
purely from the workflow language (DSL) and API standpoint, because I’m afraid 
otherwise we could bury ourselves under a pile of minor technical details. 
Specification first, implementation second.

Thanks

Renat Akhmerov
@ Mirantis Inc.




Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-15 Thread Russell Bryant
On 12/11/2014 12:03 PM, Anita Kuno wrote:
 On 12/11/2014 09:36 AM, Jon Bernard wrote:
 Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
 was marked as skipped, only the revert_resize test was failing.  I have
 submitted a patch to nova for this [1], and that yields an all green
 ceph ci run [2].  So at the moment, and with my revert patch, we're in
 good shape.

 I will fix up that patch today so that it can be properly reviewed and
 hopefully merged.  From there I'll submit a patch to infra to move the
 job to the check queue as non-voting, and we can go from there.

 [1] https://review.openstack.org/#/c/139693/
 [2] 
 http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html

 Cheers,

 Please add the name of your CI account to this table:
 https://wiki.openstack.org/wiki/ThirdPartySystems
 
 As outlined in the third party CI requirements:
 http://ci.openstack.org/third_party.html#requirements
 
 Please post system status updates to your individual CI wikipage that is
 linked to this table.
 
 The mailing list is not the place to post status updates for third party
 CI systems.
 
 If you have questions about any of the above, please attend one of the
 two third party meetings and ask any and all questions until you are
 satisfied. https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting

This is not a third party CI system.  This is a job running in OpenStack
infra.  It was in the experimental pipeline while bugs were being fixed.
 This report is about those bugs being fixed and Jon giving a heads up
that he thinks it will be ready to move to the check queue very soon.

-- 
Russell Bryant



[openstack-dev] SRIOV-error

2014-12-15 Thread Murali B
Hi David,

Please add it as per Irena's suggestion.

FYI: refer the below configuration

http://pastebin.com/DGmW7ZEg


Thanks
-Murali


[openstack-dev] [mistral] Team meeting - 12/15/2014

2014-12-15 Thread Renat Akhmerov
Hi,

I’m just reminding you about another team meeting we have today at 16.00 UTC in 
the #openstack-meeting channel.

Agenda:
Review action items
Current status (progress, issues, roadblocks, further plans)
Release Kilo-1 progress
for-each spec
Discuss scoping (global, local etc.)
Open discussion

(see https://wiki.openstack.org/wiki/Meetings/MistralAgenda for the agenda and 
the meeting archive).

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [nova][neutron] bridge name generator for vif plugging

2014-12-15 Thread Ryota Mibu
Ian and Daniel,


Thanks for the comments.

I have a Neutron spec here and planned to start from the Neutron side, exposing 
the bridge name via the port-binding API.

https://review.openstack.org/#/c/131342/


Thanks,
Ryota
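
As a sketch of the intended outcome (the bridge_name key inside
binding:vif_details is hypothetical until the spec lands):

  neutron port-show <port-id> -c binding:vif_type -c binding:vif_details
  # binding:vif_type    | bridge
  # binding:vif_details | {"port_filter": true, "bridge_name": "qbrXXXXXXXX"}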

 -Original Message-
 From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
 Sent: Monday, December 15, 2014 8:08 PM
 To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [nova][neutron] bridge name generator for vif
 plugging
 
 Let me write a spec and see what you both think.  I have a couple of things
 we could address here and while it's a bit late it wouldn't be a dramatic
 thing to fix and it might be acceptable.
 
 
 On 15 December 2014 at 11:28, Daniel P. Berrange berra...@redhat.com
 wrote:
 
   On Mon, Dec 15, 2014 at 11:15:56AM +0100, Ian Wells wrote:
Hey Ryota,
   
A better way of describing it would be that the bridge name is,
 at present,
generated in *both* Nova *and* Neutron, and the VIF type semantics
 define
how it's calculated.  I think you're right that in both cases
 it would make
more sense for Neutron to tell Nova what the connection endpoint
 was going
to be rather than have Nova calculate it independently.  I'm not
 sure that
that necessarily requires two blueprints, and you don't have a
 spec there
at the moment, which is a problem because the Neutron spec deadline
 is upon
us, but the idea's a good one.  (You might get away without a
 Neutron spec,
since the change to Neutron to add the information should be small
 and
backward compatible, but that's not something I can make judgement
 on.)
 
   Yep, the fact that both Nova  Neutron calculat the bridge name
 is a
   historical accident. Originally Nova did it, because nova-network
 was
   the only solution. Then Neutron did it too, so it matched what Nova
   was doing. Clearly if we had Neutron right from the start, then
 it
   would have been Neutrons responsibility todo this. Nothing in Nova
   cares what the names are from a functional POV - it just needs to
   be told what to use.
 
   Regards,
   Daniel
   --
   |: http://berrange.com  -o-
 http://www.flickr.com/photos/dberrange/ :|
   |: http://libvirt.org  -o-
 http://virt-manager.org :|
   |: http://autobuild.org   -o-
 http://search.cpan.org/~danberr/ :|
   |: http://entangle-photo.org   -o-
 http://live.gnome.org/gtk-vnc :|
 
 
 



Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-15 Thread Murugan, Visnusaran
Hi Zane,

We have been going through this chain for quite some time now and we still feel 
a disconnect in our understanding.
Can you put up an etherpad where we can follow your approach? For example, for 
storing resource dependencies: are you storing the name/version tuple or just the 
ID? If I am correct, you are updating all resources on an update regardless of 
whether they changed, which will be inefficient if a stack contains a million 
resources. We have similar questions regarding other areas in your implementation, 
which we believe would be answered if we understood its outline. It is difficult 
to get a hold on your approach just by looking at the code. Docstrings / an 
etherpad will help.


About streams: yes, in a million-resource stack the data will be huge, but less 
than the template. Also, this stream is stored only in IN_PROGRESS resources. The 
reason to have the entire dependency list is to reduce DB queries during a stack 
update. When you have a singular dependency on each resource, similar to your 
implementation, we will end up loading dependencies one at a time and altering 
almost all resources' dependencies regardless of their change.

Regarding the 2-template approach for delete: it is not actually 2 different 
templates. It's just that we have a delete stream to be taken up post-update. 
(Any post operation will be handled as an update.) This approach applies when 
Rollback==True; we can always fall back to the regular (non-delete) stream if 
Rollback==False.

In our view we would like to have only one basic operation and that is UPDATE.

1. CREATE will be an update where the realized graph == empty.
2. UPDATE will be an update where the realized graph == fully/partially realized 
(possibly with a delete stream as a post operation if Rollback==True).
3. DELETE will be just another update with an empty to_be_realized_graph.

It would be great if we could freeze on a stable approach by mid-week, as 
Christmas vacations are around the corner.  :) :)

 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Saturday, December 13, 2014 5:43 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
 showdown
 
 On 12/12/14 05:29, Murugan, Visnusaran wrote:
 
 
  -Original Message-
  From: Zane Bitter [mailto:zbit...@redhat.com]
  Sent: Friday, December 12, 2014 6:37 AM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
  showdown
 
  On 11/12/14 08:26, Murugan, Visnusaran wrote:
  [Murugan, Visnusaran]
  In case of rollback where we have to cleanup earlier version of
  resources,
  we could get the order from old template. We'd prefer not to have a
  graph table.
 
  In theory you could get it by keeping old templates around. But
  that means keeping a lot of templates, and it will be hard to keep
  track of when you want to delete them. It also means that when
  starting an update you'll need to load every existing previous
  version of the template in order to calculate the dependencies. It
  also leaves the dependencies in an ambiguous state when a resource
  fails, and although that can be worked around it will be a giant pain to
 implement.
 
 
  Agree that looking to all templates for a delete is not good. But
  baring Complexity, we feel we could achieve it by way of having an
  update and a delete stream for a stack update operation. I will
  elaborate in detail in the etherpad sometime tomorrow :)
 
  I agree that I'd prefer not to have a graph table. After trying a
  couple of different things I decided to store the dependencies in
  the Resource table, where we can read or write them virtually for
  free because it turns out that we are always reading or updating
  the Resource itself at exactly the same time anyway.
 
 
  Not sure how this will work in an update scenario when a resource
  does not change and its dependencies do.
 
  We'll always update the requirements, even when the properties don't
  change.
 
 
  Can you elaborate a bit on rollback.
 
 I didn't do anything special to handle rollback. It's possible that we need 
 to -
 obviously the difference in the UpdateReplace + rollback case is that the
 replaced resource is now the one we want to keep, and yet the
 replaced_by/replaces dependency will force the newer (replacement)
 resource to be checked for deletion first, which is an inversion of the usual
 order.
 
 However, I tried to think of a scenario where that would cause problems and
 I couldn't come up with one. Provided we know the actual, real-world
 dependencies of each resource I don't think the ordering of those two
 checks matters.
 
 In fact, I currently can't think of a case where the dependency order
 between replacement and replaced resources matters at all. It matters in the
 current Heat implementation because resources are artificially segmented
 into the current and backup stacks, but with a holistic view of dependencies
that may well not be required.

Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2014-12-15 Thread Aleksey Kasatkin
+1 to show them as is. We don't get benefits from parsing now (like filtering
by the value of a particular parameter, or by date/time intervals). It only
adds complexity.



Aleksey Kasatkin


On Mon, Dec 15, 2014 at 1:40 PM, Tomasz Napierala tnapier...@mirantis.com
wrote:

 Also +1 here.
 In huge envs we already have problems with parsing performance. In long
 long term we need to think about other log management solution


  On 12 Dec 2014, at 23:17, Igor Kalnitsky ikalnit...@mirantis.com
 wrote:
 
  +1 to stop parsing logs on UI and show them as is. I think it's more
  than enough for all users.
 
  On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:
  We have a high priority bug in 6.0:
  https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.
 
  Our openstack services send logs in a strange format, with an extra
 copy of
 the timestamp and loglevel:
   ==> ./neutron-metadata-agent.log <==
  2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349
 INFO
  neutron.common.config [-] Logging enabled!
 
  And we have a workaround for this. We hide extra timestamp and use
 second
  loglevel.
 
  In Juno some of the services have updated oslo.logging and now send logs in
  a simple format:
   ==> ./nova-api.log <==
  2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
  /etc/nova/api-paste.ini
 
  In order to keep backward compatibility and deal with both formats we
 have a
  dirty workaround for our workaround:
  https://review.openstack.org/#/c/141450/
 
  As I see, our best choice here is to throw away all workarounds and show
  logs on UI as is. If service sends duplicated data - we should show
  duplicated data.
 
  Long term fix here is to update oslo.logging in all packages. We can do
 it
  in 6.1.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com







 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] RE: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit recap and move forward

2014-12-15 Thread joehuang
Hello, Morgan,



Keystone is a global service for the cascading OpenStack and the cascaded
OpenStacks, just as it works for multi-region. PKI/UUID tokens should be workable
for multi-region first; if there are security issues, we need to fix them, no
matter whether cascading is introduced or not.



Using a global Keystone gives the project ID/user/role/domain/group a
consistent view across the cloud. The token used in the request to the cascading
Nova/Cinder/Neutron will be transferred in the request to the cascaded
Nova/Cinder/Neutron too.



Best regards



Chaoyi Huang ( joehuang )




From: Morgan Fainberg [morgan.fainb...@gmail.com]
Sent: 13 December 2014 19:42
To: Henry; OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells - summit
recap and move forward

On December 13, 2014 at 3:26:34 AM, Henry 
(henry4...@gmail.com) wrote:
Hi Morgan,

A good question about keystone.

In fact, Keystone is naturally suitable for multi-region deployment. It has
only a REST service interface, and PKI-based tokens greatly reduce the central
service workload. So, unlike other OpenStack services, it would not be set to
cascade mode.


I agree that Keystone is suitable for multi-region in some cases, but I am still
concerned from a security standpoint. The cascade examples all assert a
*global* tenant_id / project_id in a lot of comments/documentation. The answer
you gave me doesn't quite address this issue, nor the issue of a disparate
deployment having a wildly different role set or security profile. A PKI token
is not (as of today) possible to use with a Keystone (or OpenStack deployment)
that it didn't come from. This is the case because Keystone needs to control
the AuthZ for its local deployment (the same design as the keystone-to-keystone
federation).

So I have two direct questions:

* Is there something specific you expect to happen with the cascading that
makes resolving a project_id to something globally unique possible, or am I
misreading this as part of the design?

* Does the cascade centralization just ask for Keystone tokens for each of the
deployments, or is there something else being done? Essentially, how does one
work with a Nova from cloud XXX and cloud YYY from an authorization standpoint?
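
For concreteness, a minimal sketch (the URLs and credentials are made-up
assumptions) of the first alternative, asking each deployment's Keystone for
its own locally-scoped token:

    from keystoneclient.v2_0 import client as ks_client

    CLOUDS = {
        'cloud_xxx': 'http://keystone.xxx.example.org:5000/v2.0',
        'cloud_yyy': 'http://keystone.yyy.example.org:5000/v2.0',
    }

    tokens = {}
    for name, auth_url in CLOUDS.items():
        ks = ks_client.Client(username='demo', password='secret',
                              tenant_name='demo', auth_url=auth_url)
        # each token is scoped to, and authoritative for, that cloud only
        tokens[name] = ks.auth_token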

You don’t need to answer these right away, but they are clarification points 
that need to be thought about as this design moves forward. There are a number 
of security / authorization questions I can expand on, but the above two are 
the really big ones to start with. As you scale up (or utilize deployments 
owned by different providers) it isn’t always possible to replicate the 
Keystone data around.

Cheers,
Morgan

Best regards
Henry

Sent from my iPad

On 2014-12-13, at 3:12 PM, Morgan Fainberg
morgan.fainb...@gmail.com wrote:



On Dec 12, 2014, at 10:30, Joe Gordon 
joe.gord...@gmail.com wrote:



On Fri, Dec 12, 2014 at 6:50 AM, Russell Bryant 
rbry...@redhat.com wrote:
On 12/11/2014 12:55 PM, Andrew Laski wrote:
 Cells can handle a single API on top of globally distributed DCs.  I
 have spoken with a group that is doing exactly that.  But it requires
 that the API is a trusted part of the OpenStack deployments in those
 distributed DCs.

And the way the rest of the components fit into that scenario is far
from clear to me.  Do you consider this more of a if you can make it
work, good for you, or something we should aim to be more generally
supported over time?  Personally, I see the globally distributed
OpenStack under a single API case much more complex, and worth
considering out of scope for the short to medium term, at least.

For me, this discussion boils down to ...

1) Do we consider these use cases in scope at all?

2) If we consider it in scope, is it enough of a priority to warrant a
cross-OpenStack push in the near term to work on it?

3) If yes to #2, how would we do it?  Cascading, or something built
around cells?

I haven't worried about #3 much, because I consider #2 or maybe even #1
to be a show stopper here.

Agreed

I agree with Russell as well. I am also curious how identity will work in
these cases. As it stands, identity provides authoritative information only for
the deployment it runs in. There is a lot of concern on my part, from a security
standpoint, when I start needing to address what the central API can do on the
other providers. We have had this discussion a number of times in Keystone,
specifically when designing the keystone-to-keystone identity federation, and
we came to the conclusion that we needed to ensure that the Keystone local to a
given cloud is the only source of authoritative authz information. While it
may, in some cases, accept authn from a source that is trusted, it still
controls the local

[openstack-dev] Have added mapping for huawei-storage-drivers

2014-12-15 Thread liuxinguo
Hi Mike,

Sorry for the long delay. We have now added a mapping in
cinder/volume/manager.py (https://review.openstack.org/#/c/133193/25/cinder/volume/manager.py)
and added a file named test_huawei_drivers_compatibility.py
(https://review.openstack.org/#/c/133193/25/cinder/tests/test_huawei_drivers_compatibility.py)
to test the compatibility. Please check it, thanks.
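
For reviewers unfamiliar with the mechanism, the compatibility mapping in
cinder/volume/manager.py follows this pattern (the class paths below are
illustrative, not the exact entries from the review):

    MAPPING = {
        'cinder.volume.drivers.huawei.OldHuaweiDriver':
            'cinder.volume.drivers.huawei.huawei_driver.NewHuaweiDriver',
    }

    volume_driver = 'cinder.volume.drivers.huawei.OldHuaweiDriver'
    if volume_driver in MAPPING:
        # deployments configured with the old driver path keep working
        volume_driver = MAPPING[volume_driver]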

Best regards,
liu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Building Fuel plugins with UI part

2014-12-15 Thread Anton Zemlyanov
The building of the UI plugin has several things I do not like:

1) I need to extract the UI part of the plugin and copy/symlink it to
fuel-web
2) I have to run grunt build on the whole fuel-web
3) I have to copy files back to the original location to pack them
4) I cannot easily switch between development/production versions (no way
to easily change the entry point)

The only way to install a plugin is `fuel plugins --install`, whether for
development or production, so even development plugins have to be packed into
a tar.gz.
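
For illustration, this is the kind of round trip I mean for development (the
package file name is a guess):

    fpb --build ./test-plugin
    fuel plugins --install ./test-plugin/test-plugin-1.0.0.fp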

Anton

On Mon, Dec 15, 2014 at 3:30 PM, Przemyslaw Kaminski pkamin...@mirantis.com
 wrote:

  First of all, compiling of statics shouldn't be a required step. No one
 does this during development.
 For production-ready plugins, the compiled files should already be
 included in the GitHub repos and installation of plugin should just be a
 matter of downloading it. The API should then take care of informing the UI
 what plugins are installed.
 The npm install step is mostly one-time.
 The grunt build step for the plugin should basically just compile the
 staticfiles of the plugin and not the whole project. Besides with one file
 this is not extendable -- for N plugins we would build 2^N files with all
 possible combinations of including the plugins? :)

 P.


 On 12/15/2014 11:35 AM, Anton Zemlyanov wrote:

 My experience with building Fuel plugins with a UI part is the following. To
 build a UI-less plugin, it takes 3 seconds and these commands:

  git clone https://github.com/AlgoTrader/test-plugin.git
  cd ./test-plugin
 fpb --build ./

  When UI is added, the build starts to look like this and takes many minutes:

  git clone https://github.com/AlgoTrader/test-plugin.git
 git clone https://github.com/stackforge/fuel-web.git
 cd ./fuel-web
 git fetch https://review.openstack.org/stackforge/fuel-web
 refs/changes/00/112600/24 && git checkout FETCH_HEAD
 cd ..
 mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin
 cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin
 cd ./fuel-web/nailgun
 npm install && npm update
 grunt build --static-dir=static_compressed
 cd ../..
 rm -rf ./test-plugin/ui
 mkdir ./test-plugin/ui
 cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/*
 ./test-plugin/ui
 cd ./test-plugin
 fpb --build ./

  I think we need something not so complex and fragile

  Anton




 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Building Fuel plugins with UI part

2014-12-15 Thread Przemyslaw Kaminski


On 12/15/2014 02:26 PM, Anton Zemlyanov wrote:

The building of the UI plugin has several things I do not like

1) I need to extract the UI part of the plugin and copy/symlink it to 
fuel-web


This is required; the UI part should live somewhere in statics/js. This
directory is served by nginx, and symlinking/copying is, I think, the best
way, far better than adding new directories to the nginx configuration.



2) I have to run grunt build on the whole fuel-web


This shouldn't at all be necessary.


3) I have to copy files back to original location to pack them


Shouldn't be necessary.

4) I cannot easily switch between development/production versions (no 
way to easily change entry point)


Development/production versions should only differ by serving 
raw/compressed files. The compressed files should be published by the 
plugin author.




The only way to install plugin is `fuel plugins --install`, no matter 
development or production, so even development plugins should be 
packed to tar.gz


The UI part should be working immediately after symlinking somewhere in
the statics/js directory, IMHO (and after the API is aware of the new plugin).


P.



Anton

On Mon, Dec 15, 2014 at 3:30 PM, Przemyslaw Kaminski 
pkamin...@mirantis.com mailto:pkamin...@mirantis.com wrote:


First of all, compiling of statics shouldn't be a required step.
No one does this during development.
For production-ready plugins, the compiled files should already be
included in the GitHub repos and installation of plugin should
just be a matter of downloading it. The API should then take care
of informing the UI what plugins are installed.
The npm install step is mostly one-time.
The grunt build step for the plugin should basically just compile
the staticfiles of the plugin and not the whole project. Besides
with one file this is not extendable -- for N plugins we would
build 2^N files with all possible combinations of including the
plugins? :)

P.


On 12/15/2014 11:35 AM, Anton Zemlyanov wrote:

My experience with building Fuel plugins with a UI part is
the following. To build a UI-less plugin, it takes 3 seconds and
these commands:

git clone https://github.com/AlgoTrader/test-plugin.git
cd ./test-plugin
fpb --build ./

When UI is added, the build starts to look like this and takes many minutes:

git clone https://github.com/AlgoTrader/test-plugin.git
git clone https://github.com/stackforge/fuel-web.git
cd ./fuel-web
git fetch https://review.openstack.org/stackforge/fuel-web
refs/changes/00/112600/24 && git checkout FETCH_HEAD
cd ..
mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin
cp -R ./test-plugin/ui/*
./fuel-web/nailgun/static/plugins/test-plugin
cd ./fuel-web/nailgun
npm install && npm update
grunt build --static-dir=static_compressed
cd ../..
rm -rf ./test-plugin/ui
mkdir ./test-plugin/ui
cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/*
./test-plugin/ui
cd ./test-plugin
fpb --build ./

I think we need something not so complex and fragile

Anton




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org  
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] #PERSONAL# : Facing problem in installing new python dependencies for Horizon- Pls help

2014-12-15 Thread Douglas Fish

Swati Shukla1 swati.shuk...@tcs.com wrote on 12/14/2014 11:29:19 PM:

 From: Swati Shukla1 swati.shuk...@tcs.com
 To: openstack-dev@lists.openstack.org
 Date: 12/14/2014 11:34 PM
 Subject: [openstack-dev] #PERSONAL# : Facing problem in installing
 new python dependencies for Horizon- Pls help

 Hi,

 I want to install 2 new modules in Horizon but have no clue how it
 installs in its virtualenv.

 I mentioned pisa >= 3.0.33 and reportlab >= 2.5 in requirements.txt
 file, ran ./unstack.sh and ./stack.sh, but still do not get these
 installed in its virtualenv.

 As a result, when I do ./run_tests.sh, I get "ImportError: No
 module named ho.pisa"

 Please suggest me if I am going wrong somewhere or how to proceed with
this.

 Thanks in advance.

 Regards,
 Swati Shukla
 Tata Consultancy Services
 Mailto: swati.shuk...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty. IT Services
 Business Solutions
 Consulting
 
 =-=-=
 Notice: The information contained in this e-mail
 message and/or attachments to it may contain
 confidential or privileged information. If you are
 not the intended recipient, any dissemination, use,
 review, distribution, printing or copying of the
 information contained in this e-mail message
 and/or attachments to it are strictly prohibited. If
 you have received this communication in error,
 please notify us by reply e-mail or telephone and
 immediately and permanently delete the message
 and any attachments. Thank you
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

run_tests.sh --force
will reinstall the virtual environment and will pick up the added modules.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-15 Thread Ihar Hrachyshka

Hi all,

the question arose recently in one of reviews for neutron-*aas repos
to remove all oslo-incubator code from those repos since it's
duplicated in neutron main repo. (You can find the link to the review
at the end of the email.)

Brief history: the neutron repo was recently split into 4 pieces (main,
neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted
in each repository keeping their own copy of
neutron/openstack/common/... tree (currently unused in all
neutron-*aas repos that are still bound to modules from main repo).

As an oslo liaison for the project, I wonder what's the best way to
manage oslo-incubator files. We have several options:

1. just kill all the neutron/openstack/common/ trees from neutron-*aas
repositories and continue using modules from main repo.

2. kill all duplicate modules from neutron-*aas repos and leave only
those that are used in those repos but not in main repo.

3. fully duplicate all those modules in each of four repos that use them.

I think option 1. is a straw man, since we should be able to introduce
new oslo-incubator modules into neutron-*aas repos even if they are
not used in main repo.

Option 2. is good when it comes to synching non-breaking bug fixes (or
security fixes) from oslo-incubator, in that it will require only one
sync patch instead of e.g. four. At the same time there may be
potential issues when synchronizing updates from oslo-incubator that
would break API and hence require changes to each of the modules that
use it. Since we don't support atomic merges for multiple projects in
gate, we will need to be cautious about those updates, and we will
still need to leave neutron-*aas repos broken for some time (though
the time may be mitigated with care).
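
For reference, a single sync under option 2. would follow the usual incubator
flow; the target path below is illustrative:

    # run from an oslo-incubator checkout; the destination repo's
    # openstack-common.conf declares which modules to copy
    python update.py ../neutron-lbaas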

Option 3. is vice versa - in theory, you get total decoupling, meaning
no oslo-incubator updates in main repo are expected to break
neutron-*aas repos, but bug fixing becomes a huge PITA.

I would vote for option 2., for several reasons:
- most oslo-incubator syncs are non-breaking, and we may effectively
apply care to updates that may result in potential breakage (e.g.
being able to trigger an integrated run for each of the neutron-*aas repos
with the main sync patch, if there are any concerns).
- it will make the oslo liaison's life a lot easier. OK, I'm probably too
selfish on that. ;)
- it will make stable maintainers' lives a lot easier. The main reason
why stable maintainers and distributions like the recent oslo graduation
movement is that we don't need to track each bug fix we need in every
project, and waste lots of cycles on it. Being able to fix a bug in
one place only is *highly* appreciated. [OK, I'm quite selfish on that
one too.]
- it's a delusion that there will be no neutron-main syncs that will
break neutron-*aas repos ever. There can still be problems due to
incompatibility between the neutron main and neutron-*aas code, arising
EXACTLY because multiple parts of the same process use different
versions of the same module.

That said, Doug Wiegley (lbaas core) seems to be in favour of option
3. due to lower coupling that is achieved in that way. I know that
lbaas team had a bad experience due to tight coupling to neutron
project in the past, so I appreciate their concerns.

All in all, we should come up with some standard solution for both
advanced services that are already split out, *and* upcoming vendor
plugin shrinking initiative.

The initial discussion is captured at:
https://review.openstack.org/#/c/141427/

Thanks,
/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-15 Thread Ihar Hrachyshka

On 15/12/14 15:15, Ihar Hrachyshka wrote:
 - it's a delusion that there will be no neutron-main syncs that
 will break neutron-*aas repos ever.

OK, I've just decided to check whether my (non-native speaker)
understanding of the meaning of the word 'delusion' is correct, and I
need to admit that what I've found out in dictionaries is not what I
really meant. :|

I only meant that it's a wrong assumption.

So in case anyone reads it as any kind of derogation, I'm very sorry
for the bad wording. Please blame my bad English. And Richard Dawkins.

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Building Fuel plugins with UI part

2014-12-15 Thread Vitaly Kramskikh
Hi,

The only thing I don't really like is that we need the fuel-web code to build
a plugin. But we can do nothing about it, as a typical UI plugin is by design
tightly coupled with the core. If a plugin wants to reuse core libraries,
utils or controls, then it has to declare them as dependencies, and there
would be a build error if these files weren't found by r.js.

I created the first version of the spec
https://review.openstack.org/#/c/141761/1 where I described my vision of the
build process. You can comment on it there.

Some responses inline:

2014-12-15 14:48 GMT+01:00 Przemyslaw Kaminski pkamin...@mirantis.com:


 On 12/15/2014 02:26 PM, Anton Zemlyanov wrote:

 The building of the UI plugin has several things I do not like

 1) I need to extract the UI part of the plugin and copy/symlink it to
 fuel-web


 This is required, the UI part should live somewhere in statics/js. This
 directory is served by nginx and symlinking/copying is I think the best
 way, far better than adding new directories to nginx configuration.


I think Anton is talking not about serving, but about building the plugin. Yes,
to build the UI part of a plugin you need to extract its UI part and
move/symlink it to static/plugins/plugin_name before you can run the
build.

   2) I have to run grunt build on the whole fuel-web


 This shouldn't at all be necessary.

 Yes, it is not necessary. Actually you don't have to, if you add another task
or option for grunt build so that it does not build the main project. It can be
achieved by removing these lines:
https://github.com/stackforge/fuel-web/blob/master/nailgun/Gruntfile.js#L45-L48.


  3) I have to copy files back to original location to pack them


 Shouldn't be necessary.

  4) I cannot easily switch between development/production versions (no
 way to easily change entry point)


 Development/production versions should only differ by serving
 raw/compressed files. The compressed files should be published by the
 plugin author.

 On my development machine I use different nginx ports to serve the original
and compressed versions of the UI. Its configuration is pretty straightforward.


  The only way to install plugin is `fuel plugins --install`, no matter
 development or production, so even development plugins should be packed to
 tar.gz


 The UI part should be working immediately after symlinking somewhere in
 the statics/js directory imho (and after API is aware of the new pugin but).

 P.



  Anton

 On Mon, Dec 15, 2014 at 3:30 PM, Przemyslaw Kaminski 
 pkamin...@mirantis.com wrote:

  First of all, compiling of statics shouldn't be a required step. No one
 does this during development.
 For production-ready plugins, the compiled files should already be
 included in the GitHub repos and installation of plugin should just be a
 matter of downloading it. The API should then take care of informing the UI
 what plugins are installed.
 The npm install step is mostly one-time.
 The grunt build step for the plugin should basically just compile the
 staticfiles of the plugin and not the whole project. Besides with one file
 this is not extendable -- for N plugins we would build 2^N files with all
 possible combinations of including the plugins? :)

 P.


 On 12/15/2014 11:35 AM, Anton Zemlyanov wrote:

  My experience with building Fuel plugins with a UI part is the following. To
 build a UI-less plugin, it takes 3 seconds and these commands:

  git clone https://github.com/AlgoTrader/test-plugin.git
  cd ./test-plugin
 fpb --build ./

  When UI is added, the build starts to look like this and takes many minutes:

  git clone https://github.com/AlgoTrader/test-plugin.git
 git clone https://github.com/stackforge/fuel-web.git
 cd ./fuel-web
 git fetch https://review.openstack.org/stackforge/fuel-web
 refs/changes/00/112600/24 && git checkout FETCH_HEAD
 cd ..
 mkdir -p ./fuel-web/nailgun/static/plugins/test-plugin
 cp -R ./test-plugin/ui/* ./fuel-web/nailgun/static/plugins/test-plugin
 cd ./fuel-web/nailgun
 npm install && npm update
 grunt build --static-dir=static_compressed
 cd ../..
 rm -rf ./test-plugin/ui
 mkdir ./test-plugin/ui
 cp -R ./fuel-web/nailgun/static_compressed/plugins/test-plugin/*
 ./test-plugin/ui
 cd ./test-plugin
 fpb --build ./

  I think we need something not so complex and fragile

  Anton




  ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing 
 listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Vitaly 

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-15 Thread Anant Patil
On 12-Dec-14 06:29, Zane Bitter wrote:
 On 11/12/14 01:14, Anant Patil wrote:
 On 04-Dec-14 10:49, Zane Bitter wrote:
 On 01/12/14 02:02, Anant Patil wrote:
 On GitHub:https://github.com/anantpatil/heat-convergence-poc

 I'm trying to review this code at the moment, and finding some stuff I
 don't understand:

 https://github.com/anantpatil/heat-convergence-poc/blob/master/heat/engine/stack.py#L911-L916

 This appears to loop through all of the resources *prior* to kicking off
 any actual updates to check if the resource will change. This is
 impossible to do in general, since a resource may obtain a property
 value from an attribute of another resource and there is no way to know
 whether an update to said other resource would cause a change in the
 attribute value.

 In addition, no attempt to catch UpdateReplace is made. Although that
 looks like a simple fix, I'm now worried about the level to which this
 code has been tested.

 We were working on new branch and as we discussed on Skype, we have
 handled all these cases. Please have a look at our current branch:
 https://github.com/anantpatil/heat-convergence-poc/tree/graph-version

 When a new resource is taken for convergence, its children are loaded
 and the resource definition is re-parsed. The frozen resource definition
 will have all the get_attr resolved.


 I'm also trying to wrap my head around how resources are cleaned up in
 dependency order. If I understand correctly, you store in the
 ResourceGraph table the dependencies between various resource names in
 the current template (presumably there could also be some left around
 from previous templates too?). For each resource name there may be a
 number of rows in the Resource table, each with an incrementing version.
 As far as I can tell though, there's nowhere that the dependency graph
 for _previous_ templates is persisted? So if the dependency order
 changes in the template we have no way of knowing the correct order to
 clean up in any more? (There's not even a mechanism to associate a
 resource version with a particular template, which might be one avenue
 by which to recover the dependencies.)

 I think this is an important case we need to be able to handle, so I
 added a scenario to my test framework to exercise it and discovered that
 my implementation was also buggy. Here's the fix:
 https://github.com/zaneb/heat-convergence-prototype/commit/786f367210ca0acf9eb22bea78fd9d51941b0e40


 Thanks for pointing this out Zane. We too had a buggy implementation for
 handling inverted dependency. I had a hard look at our algorithm where
 we were continuously merging the edges from new template into the edges
 from previous updates. It was an optimized way of traversing the graph
 in both forward and reverse order with out missing any resources. But,
 when the dependencies are inverted,  this wouldn't work.

 We have changed our algorithm. The changes in edges are noted down in
 DB, only the delta of edges from previous template is calculated and
 kept. At any given point of time, the graph table has all the edges from
 current template and delta from previous templates. Each edge has
 template ID associated with it.
 
 The thing is, the cleanup dependencies aren't really about the template. 
 The real resources really depend on other real resources. You can't 
 delete a Volume before its VolumeAttachment, not because it says so in 
 the template but because it will fail if you try. The template can give 
 us a rough guide in advance to what those dependencies will be, but if 
 that's all we keep then we are discarding information.
 
 There may be multiple versions of a resource corresponding to one 
 template version. Even worse, the actual dependencies of a resource 
 change on a smaller time scale than an entire stack update (this is the 
 reason the current implementation updates the template one resource at a 
 time as we go).
 

Absolutely! The edges from the template are kept only for reference
purposes. When a resource is present in the new template, its template ID is
also updated to the current template. At any point of time, a realized
resource belongs to the current template, even if it was found in previous
templates. The template ID moves forward for a resource whenever it is found.

 Given that our Resource entries in the DB are in 1:1 correspondence with 
 actual resources (we create a new one whenever we need to replace the 
 underlying resource), I found it makes the most conceptual and practical 
 sense to store the requirements in the resource itself, and update them 
 at the time they actually change in the real world (bonus: introduces no 
 new locking issues and no extra DB writes). I settled on this after a 
 legitimate attempt at trying other options, but they didn't work out: 
 https://github.com/zaneb/heat-convergence-prototype/commit/a62958342e8583f74e2aca90f6239ad457ba984d
 

I am okay with the notion of graph stored in resource table.

 For resource clean up, we start from the
 first 

Re: [openstack-dev] [Fuel] Logs format on UI (High/6.0)

2014-12-15 Thread Roman Prykhodchenko
Hi folks!

In most production environments I've seen, bare logs as they are shown now in
the Fuel web UI were pretty useless. If someone has an infrastructure that
consists of more than 5 servers and 5 services running on them, they are most
likely to use logstash, loggly or any other log management system. There are
options for forwarding these logs to a remote log server, and that's what is
likely to be used IRL.

Therefore, for production environments, formatting logs in the Fuel web UI or
even showing them is a cool but pretty useless feature. In addition to being
useless in production environments, it also creates additional load on the user
interface.

However, I can see that developers actually use it for debugging or 
troubleshooting, so my proposal is to introduce an option for disabling this 
feature completely.


- romcheg

 On 15 Dec 2014, at 12:40, Tomasz Napierala tnapier...@mirantis.com wrote:
 
 Also +1 here.
 In huge envs we already have problems with parsing performance. In long long 
 term we need to think about other log management solution
 
 
 On 12 Dec 2014, at 23:17, Igor Kalnitsky ikalnit...@mirantis.com wrote:
 
 +1 to stop parsing logs on UI and show them as is. I think it's more
 than enough for all users.
 
 On Fri, Dec 12, 2014 at 8:35 PM, Dmitry Pyzhov dpyz...@mirantis.com wrote:
 We have a high priority bug in 6.0:
 https://bugs.launchpad.net/fuel/+bug/1401852. Here is the story.
 
 Our openstack services send logs in a strange format, with an extra copy of
 the timestamp and loglevel:
 ==> ./neutron-metadata-agent.log <==
 2014-12-12T11:00:30.098105+00:00 info: 2014-12-12 11:00:30.003 14349 INFO
 neutron.common.config [-] Logging enabled!
 
 And we have a workaround for this. We hide extra timestamp and use second
 loglevel.
 
 In Juno some of the services have updated oslo.logging and now send logs in
 a simple format:
 ==> ./nova-api.log <==
 2014-12-12T10:57:15.437488+00:00 debug: Loading app ec2 from
 /etc/nova/api-paste.ini
 
 In order to keep backward compatibility and deal with both formats we have a
 dirty workaround for our workaround:
 https://review.openstack.org/#/c/141450/
 
 As I see, our best choice here is to throw away all workarounds and show
 logs on UI as is. If service sends duplicated data - we should show
 duplicated data.
 
 Long term fix here is to update oslo.logging in all packages. We can do it
 in 6.1.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 --
 Tomasz 'Zen' Napierala
 Sr. OpenStack Engineer
 tnapier...@mirantis.com
 
 
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] need help identifying missing fixtures and test APIs

2014-12-15 Thread Doug Hellmann
We talked about this last week at the Oslo meeting [1], but I also promised to 
send an email for a broader audience.

We have recently had a couple of issues when we released a library where we 
broke unit tests running in other projects. We test the source tree of the 
libraries against the applications using the integration test suite, but we do 
not run the unit tests. This isn’t a new situation — we had similar problems in 
icehouse, and juno. We discussed setting up gate jobs to run the consuming 
project's unit tests during Juno, but eventually dropped that idea because of 
the server requirements needed to actually run all of the required jobs. That 
means we still have a small risk of breaking things with a release if we don’t 
have an API test in place for something we change, or if a test suite mocks out 
an implementation detail of a library instead of mocking the public API.

As part of releasing each library, we have tried to create test APIs and test 
fixtures that can be used to control the library’s behavior within a unit test 
suite in a well-known, testable, and supportable way. We need the liaisons to 
help identify missing fixtures from existing and not-yet-graduated libraries.

There are two main ways application test suites interact with Oslo libraries 
that we want to address: Using configuration options directly to control 
library behavior and mocking. Learning more about both will help us understand 
how the library interacts with the test, and we can then either design a 
fixture or test API for the library or modify the test (in cases where it is 
mocking implementation details).
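
As an example of the direction we have in mind, oslo.config already ships a
fixture of this kind; a hedged sketch of using it (the option name is
illustrative):

    from oslo.config import cfg
    from oslo.config import fixture as config_fixture
    from oslotest import base

    OPTS = [cfg.BoolOpt('use_fancy_feature', default=False)]

    class MyTest(base.BaseTestCase):
        def setUp(self):
            super(MyTest, self).setUp()
            conf = cfg.ConfigOpts()
            conf.register_opts(OPTS)
            # the fixture reverts overrides on cleanup, unlike
            # direct writes to a shared cfg.CONF
            self.conf = self.useFixture(config_fixture.Config(conf)).conf
            self.conf.set_override('use_fancy_feature', True)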

A change of this scale is a long term project, but we need to start gathering 
data now if we want to start writing new fixtures in the next cycle. Please 
review your project’s test suite and make notes about how it uses mocks and 
configuration options, then add the information to the etherpad [2]. We will 
talk about it again at the Oslo meeting in a few weeks.

Thanks,
Doug

[1] http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-12-08-16.00.html
[2] https://etherpad.openstack.org/p/oslo-mocks-in-project-unit-tests


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-15 Thread Anant Patil
On 13-Dec-14 05:42, Zane Bitter wrote:
 On 12/12/14 05:29, Murugan, Visnusaran wrote:


 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Friday, December 12, 2014 6:37 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
 showdown

 On 11/12/14 08:26, Murugan, Visnusaran wrote:
 [Murugan, Visnusaran]
 In case of rollback where we have to cleanup earlier version of
 resources,
 we could get the order from old template. We'd prefer not to have a
 graph table.

 In theory you could get it by keeping old templates around. But that
 means keeping a lot of templates, and it will be hard to keep track
 of when you want to delete them. It also means that when starting an
 update you'll need to load every existing previous version of the
 template in order to calculate the dependencies. It also leaves the
 dependencies in an ambiguous state when a resource fails, and
 although that can be worked around it will be a giant pain to implement.


 Agree that looking to all templates for a delete is not good. But
 baring Complexity, we feel we could achieve it by way of having an
 update and a delete stream for a stack update operation. I will
 elaborate in detail in the etherpad sometime tomorrow :)

 I agree that I'd prefer not to have a graph table. After trying a
 couple of different things I decided to store the dependencies in the
 Resource table, where we can read or write them virtually for free
 because it turns out that we are always reading or updating the
 Resource itself at exactly the same time anyway.


 Not sure how this will work in an update scenario when a resource does
 not change and its dependencies do.

 We'll always update the requirements, even when the properties don't
 change.


 Can you elaborate a bit on rollback.
 
 I didn't do anything special to handle rollback. It's possible that we 
 need to - obviously the difference in the UpdateReplace + rollback case 
 is that the replaced resource is now the one we want to keep, and yet 
 the replaced_by/replaces dependency will force the newer (replacement) 
 resource to be checked for deletion first, which is an inversion of the 
 usual order.
 

This is where the version is so handy! For UpdateReplaced ones, there is
an older version to go back to. This version could just be the template ID,
as I mentioned in another e-mail. All resources are at the current
template ID if they are found in the current template, even if there is
no need to update them. Otherwise, they need to be cleaned up in the
order given in the previous templates.

I think the template ID is used as the version, as far as I can see in Zane's
PoC. If the resource's template key doesn't match the current template
key, the resource is deleted. The version is a misnomer here, but that
field (template id) is used as though we had versions of resources.
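
A tiny sketch of what I mean (field and function names are ours, not from
the PoC):

    def is_current(resource, current_template_id):
        # a resource found in the new template moves to the current
        # template ID, even when its properties did not change
        return resource.template_id == current_template_id

    def needs_cleanup(resource, current_template_id):
        # anything still pointing at an older template gets cleaned up,
        # in the order given by those older templates
        return not is_current(resource, current_template_id)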

 However, I tried to think of a scenario where that would cause problems 
 and I couldn't come up with one. Provided we know the actual, real-world 
 dependencies of each resource I don't think the ordering of those two 
 checks matters.
 
 In fact, I currently can't think of a case where the dependency order 
 between replacement and replaced resources matters at all. It matters in 
 the current Heat implementation because resources are artificially 
 segmented into the current and backup stacks, but with a holistic view 
 of dependencies that may well not be required. I tried taking that line 
 out of the simulator code and all the tests still passed. If anybody can 
 think of a scenario in which it would make a difference, I would be very 
 interested to hear it.
 
 In any event though, it should be no problem to reverse the direction of 
 that one edge in these particular circumstances if it does turn out to 
 be a problem.
 
 We had an approach with depends_on
 and needed_by columns in ResourceTable. But dropped it when we figured out
 we had too many DB operations for Update.
 
 Yeah, I initially ran into this problem too - you have a bunch of nodes 
 that are waiting on the current node, and now you have to go look them 
 all up in the database to see what else they're waiting on in order to 
 tell if they're ready to be triggered.
 
 It turns out the answer is to distribute the writes but centralise the 
 reads. So at the start of the update, we read all of the Resources, 
 obtain their dependencies and build one central graph[1]. We than make 
 that graph available to each resource (either by passing it as a 
 notification parameter, or storing it somewhere central in the DB that 
 they will all have to read anyway, i.e. the Stack). But when we update a 
 dependency we don't update the central graph, we update the individual 
 Resource so there's no global lock required.
 
 [1] 
 https://github.com/zaneb/heat-convergence-prototype/blob/distributed-graph/converge/stack.py#L166-L168
 

A centralized graph and decision making will make the 

Re: [openstack-dev] Do all OpenStack daemons support sd_notify?

2014-12-15 Thread Ihar Hrachyshka

On 14/12/14 09:45, Thomas Goirand wrote:
 Hi,
 
  As I am slowly fixing all systemd issues for the daemons of
 OpenStack in Debian (and hopefully, have this ready before the
 freeze of Jessie), I was wondering what kind of Type= directive to
 put on the systemd .service files. I have noticed that in Fedora,
 there's Type=notify. So my question is:
 
 Do all OpenStack daemons, as a rule, support the DBus sd_notify
 thing? Should I always use Type=notify for systemd .service files?
 Can this be called a general rule with no exception?

(I will talk about neutron only.)

I guess Type=notify is supposed to be used with daemons that use the
Service class from oslo-incubator, which provides the systemd notification
mechanism, or that otherwise call systemd.notify_once().
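
A minimal sketch of that pattern (the module path assumes the incubator copy
inside neutron; the setup step is left out):

    from neutron.openstack.common import systemd

    def main():
        # ... bring the service up here ...
        # for Type=notify units this reports readiness; it is a no-op
        # when NOTIFY_SOCKET is not set in the environment
        systemd.notify_once()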

In terms of Neutron, the neutron-server process is doing it, the metadata
agent also seems to do it, while the OVS agent seems not to. So it really
depends on each service and the way it's implemented. You cannot
just assume that every Neutron service reports back to systemd.

In terms of Fedora, we have Type=notify for neutron-server service only.

BTW now that more distributions are interested in shipping unit files
for services, should we upstream them and ship the same thing in all
interested distributions?

 
 Cheers,
 
 Thomas Goirand (zigo)
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time

2014-12-15 Thread Anita Kuno
On 12/09/2014 11:55 AM, Anita Kuno wrote:
 On 12/09/2014 08:32 AM, Kurt Taylor wrote:
 All of the feedback so far has supported moving the existing IRC
 Third-party CI meeting to better fit a worldwide audience.

 The consensus is that we will have only 1 meeting per week at alternating
 times. You can see examples of other teams with alternating meeting times
 at: https://wiki.openstack.org/wiki/Meetings

 This way, one week we are good for one part of the world, the next week for
 the other. You will not need to attend both meetings, just the meeting time
 every other week that fits your schedule.

 Proposed times in UTC are being voted on here:
 https://www.google.com/moderator/#16/e=21b93c

 Please vote on the time that is best for you. I would like to finalize the
 new times this week.

 Thanks!
 Kurt Taylor (krtaylor)



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Note that Kurt is welcome to do as he pleases with his own time.
 
 I will be having meetings in the irc channel for the times that I have
 booked.
 
 Thanks,
 Anita.
 
Just in case anyone remains confused, I am chairing third party meetings
Mondays at 1500 UTC and Tuesdays at 0800 UTC in #openstack-meeting.
There is a meeting currently in progress.

This is a great time for people who don't understand requirements to
show up and ask questions.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][infra] Ceph CI status update

2014-12-15 Thread Anita Kuno
On 12/15/2014 07:11 AM, Russell Bryant wrote:
 On 12/11/2014 12:03 PM, Anita Kuno wrote:
 On 12/11/2014 09:36 AM, Jon Bernard wrote:
 Heya, quick Ceph CI status update.  Once the test_volume_boot_pattern
 was marked as skipped, only the revert_resize test was failing.  I have
 submitted a patch to nova for this [1], and that yields an all green
 ceph ci run [2].  So at the moment, and with my revert patch, we're in
 good shape.

 I will fix up that patch today so that it can be properly reviewed and
 hopefully merged.  From there I'll submit a patch to infra to move the
 job to the check queue as non-voting, and we can go from there.

 [1] https://review.openstack.org/#/c/139693/
 [2] 
 http://logs.openstack.org/93/139693/1/experimental/check-tempest-dsvm-full-ceph/12397fd/console.html

 Cheers,

 Please add the name of your CI account to this table:
 https://wiki.openstack.org/wiki/ThirdPartySystems

 As outlined in the third party CI requirements:
 http://ci.openstack.org/third_party.html#requirements

 Please post system status updates to your individual CI wikipage that is
 linked to this table.

 The mailing list is not the place to post status updates for third party
 CI systems.

 If you have questions about any of the above, please attend one of the
 two third party meetings and ask any and all questions until you are
 satisfied. https://wiki.openstack.org/wiki/Meetings#Third_Party_Meeting
 
 This is not a third party CI system.  This is a job running in OpenStack
 infra.  It was in the experimental pipeline while bugs were being fixed.
  This report is about those bugs being fixed and Jon giving a heads up
 that he thinks it will be ready to move to the check queue very soon.
 
My mistake then, thank you for the explanation. Going forward, can we
avoid the use of 'CI' in the subject line for emails about jobs running in
infra and their status?

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time

2014-12-15 Thread Kurt Taylor
Anita, please, creating yet another meeting time without input from anyone
just confuses the issue.

The work group has agreed unanimously on alternating weekly meeting times,
and are currently voting on the best for everyone. (
https://www.google.com/moderator/#16/e=21b93c - 14 voters so far, thanks
everyone!) Once we finalize the voting, I will start up the new
meeting times in the new year. Until then, we will stay at our normal
time, Monday at 1800 UTC.

I am still confused as to why you would not want to go with the consensus on this.

And, thanks again for everything that you do for us!
Kurt Taylor (krtaylor)


On Mon, Dec 15, 2014 at 9:23 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/09/2014 11:55 AM, Anita Kuno wrote:
  On 12/09/2014 08:32 AM, Kurt Taylor wrote:
  All of the feedback so far has supported moving the existing IRC
  Third-party CI meeting to better fit a worldwide audience.
 
  The consensus is that we will have only 1 meeting per week at
 alternating
  times. You can see examples of other teams with alternating meeting
 times
  at: https://wiki.openstack.org/wiki/Meetings
 
  This way, one week we are good for one part of the world, the next week
 for
  the other. You will not need to attend both meetings, just the meeting
 time
  every other week that fits your schedule.
 
  Proposed times in UTC are being voted on here:
  https://www.google.com/moderator/#16/e=21b93c
 
  Please vote on the time that is best for you. I would like to finalize
 the
  new times this week.
 
  Thanks!
  Kurt Taylor (krtaylor)
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  Note that Kurt is welcome to do as he pleases with his own time.
 
  I will be having meetings in the irc channel for the times that I have
  booked.
 
  Thanks,
  Anita.
 
 Just in case anyone remains confused, I am chairing third party meetings
 Mondays at 1500 UTC and Tuesdays at 0800 UTC in #openstack-meeting.
 There is a meeting currently in progress.

 This is a great time for people who don't understand requirements to
 show up and ask questions.

 Thank you,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Do all OpenStack daemons support sd_notify?

2014-12-15 Thread Clint Byrum
Excerpts from Ihar Hrachyshka's message of 2014-12-15 07:21:04 -0800:
 
 On 14/12/14 09:45, Thomas Goirand wrote:
  Hi,
  
   As I am slowly fixing all systemd issues for the daemons of
  OpenStack in Debian (and hopefully, have this ready before the
  freeze of Jessie), I was wondering what kind of Type= directive to
  put on the systemd .service files. I have noticed that in Fedora,
  there's Type=notify. So my question is:
  
  Do all OpenStack daemons, as a rule, support the DBus sd_notify
  thing? Should I always use Type=notify for systemd .service files?
  Can this be called a general rule with no exception?
 
 (I will talk about neutron only.)
 
 I guess Type=notify is supposed to be used with daemons that use
 Service class from oslo-incubator that provides systemd notification
 mechanism, or call to systemd.notify_once() otherwise.
 
 In terms of Neutron, neutron-server process is doing it, metadata
 agent also seems to do it, while OVS agent seems to not. So it really
 should depend on each service and the way it's implemented. You cannot
 just assume that every Neutron service reports back to systemd.
 
 In terms of Fedora, we have Type=notify for neutron-server service only.
 
 BTW now that more distributions are interested in shipping unit files
 for services, should we upstream them and ship the same thing in all
 interested distributions?
 

Since we can expect the five currently implemented OS's in TripleO to all
have systemd by default soon (Debian, Fedora, openSUSE, RHEL, Ubuntu),
it would make a lot of sense for us to make the systemd unit files that
TripleO generates set Type=notify wherever possible. So hopefully we can
actually make such a guarantee upstream sometime in the not-so-distant
future, especially since our CI will run two of the more distinct forks,
Ubuntu and Fedora.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Job failures related to Setuptools 8.0 release

2014-12-15 Thread Jeremy Stanley
On Saturday, December 13, Setuptools 8.0 was released, implementing the
new PEP 440[1] version specification and dropping support for a
variety of previously somewhat-supported versioning syntaxes. This
is impacting us in several ways...

Multiple range expressions in requirements are no longer
interpreted the way we were relying on. The fix is fairly
straightforward, since the SQLAlchemy requirement in oslo.db (and
corresponding line in global requirements) is the only current
example. Fixes for this in multiple branches are in flight.
[2][3][4] (This last one is merged to oslo.db's master branch but
not released yet.)

Arbitrary alphanumeric version subcomponents such as PBR's Git SHA
suffixes now sort earlier than all major version numbers. The fix is
still in progress[5][6], and resulted in a couple of brown-bag
releases over the weekend. 0.10.1 generated PEP 440 compliant
versions which ended up unparseable when included in requirements
files, so 0.10.2 is a roll-back identical to 0.10.

The 1.2.3.rc1 we were using for release candidates is now
automatically normalized to 1.2.3c1 during sdist tarball and wheel
generation, causing tag text to no longer match the resulting file
names. This may simply require us to change our naming convention[7]
for these sorts of tags in the future.

In the interim we've pinned setuptools<8 in our infrastructure[8] to
help keep things moving while these various solutions crystalize,
but if you run into this problem locally check your version of
setuptools and try an older one.
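
To see the behavior change locally, a quick illustration (the version
strings are ours):

    from pkg_resources import parse_version

    # non-PEP-440 strings (e.g. PBR's Git SHA suffixes) now parse as
    # "legacy" versions, which sort before every valid release:
    print(parse_version('2014.2.dev7.g1234abc') < parse_version('0.0.1'))

    # and '.rc1' is normalized, so these compare equal:
    print(parse_version('1.2.3.rc1') == parse_version('1.2.3rc1'))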

[1] http://legacy.python.org/dev/peps/pep-0440/
[2] https://review.openstack.org/141584
[3] https://review.openstack.org/137583
[4] https://review.openstack.org/140948
[5] https://review.openstack.org/141666
[6] https://review.openstack.org/141667
[7] https://review.openstack.org/141831
[8] https://review.openstack.org/141577

-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][fwaas][oslo] Common code between VMWare neutron plugin and services plugins

2014-12-15 Thread Kobi Samoray
Hi,
Some files in neutron are common infrastructure to the VMWare neutron L2/L3 
plugin, and the services plugins.
These files wrap VMWare NSX and provide a python API to some NSX services.

This code is common to:
- VMWare L2/L3 plugin, which after the split should be held outside of 
openstack repo (e.g stackforge)
- neutron-lbaas, neutron-fwaas repos, which will hold the VMWare services 
plugins

With neutron split into multiple repos, in and out of openstack, we have the 
following options:
1. Duplicate the relevant code between the various repos - IMO a pretty bad 
choice for obvious reasons.

2. Keep the code in the VMWare L2/L3 plugin repo - which will add an import 
from the neutron-*aas repos to a repo which is outside of openstack.

3. Add these components to oslo.vmware project: oslo.vmware contains, as of 
now, a wrapper to vCenter API. The components in discussion wrap NSX API, which 
is out of vCenter scope. Therefore it’s not really a part of oslo.vmware scope 
as it is defined today, but is still a wrapper layer to a VMWare product.
We could extend the oslo.vmware scope to include wrappers to VMWare products, 
in general, and add the relevant components under oslo.vmware.network.nsx or 
similar.

Thanks,
Kobi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-15 Thread Doug Wiegley
Hi Ihar,

I’m actually in favor of option 2, but it implies a few things about your
time, and I wanted to chat with you before presuming.

Maintenance can not involve breaking changes. At this point, the co-gate
will block it.  Also, oslo graduation changes will have to be made in the
services repos first, and then Neutron.

Thanks,
doug


On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi all,

the question arose recently in one of reviews for neutron-*aas repos
to remove all oslo-incubator code from those repos since it's
duplicated in neutron main repo. (You can find the link to the review
at the end of the email.)

Brief history: neutron repo was recently split into 4 pieces (main,
neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted
in each repository keeping their own copy of
neutron/openstack/common/... tree (currently unused in all
neutron-*aas repos that are still bound to modules from main repo).

As an oslo liaison for the project, I wonder what's the best way to
manage oslo-incubator files. We have several options:

1. just kill all the neutron/openstack/common/ trees from neutron-*aas
repositories and continue using modules from main repo.

2. kill all duplicate modules from neutron-*aas repos and leave only
those that are used in those repos but not in main repo.

3. fully duplicate all those modules in each of four repos that use them.

I think option 1. is a straw man, since we should be able to introduce
new oslo-incubator modules into neutron-*aas repos even if they are
not used in main repo.

Option 2. is good when it comes to synching non-breaking bug fixes (or
security fixes) from oslo-incubator, in that it will require only one
sync patch instead of e.g. four. At the same time there may be
potential issues when synchronizing updates from oslo-incubator that
would break API and hence require changes to each of the modules that
use it. Since we don't support atomic merges for multiple projects in
gate, we will need to be cautious about those updates, and we will
still need to leave neutron-*aas repos broken for some time (though
the time may be mitigated with care).

Option 3. is vice versa - in theory, you get total decoupling, meaning
no oslo-incubator updates in main repo are expected to break
neutron-*aas repos, but bug fixing becomes a huge PITA.

I would vote for option 2., for two reasons:
- - most oslo-incubator syncs are non-breaking, and we may effectively
apply care to updates that may result in potential breakage (f.e.
being able to trigger an integrated run for each of neutron-*aas repos
with the main sync patch, if there are any concerns).
- - it will make oslo liaison life a lot easier. OK, I'm probably too
selfish on that. ;)
- - it will make stable maintainers life a lot easier. The main reason
why stable maintainers and distributions like recent oslo graduation
movement is that we don't need to track each bug fix we need in every
project, and waste lots of cycles on it. Being able to fix a bug in
one place only is *highly* anticipated. [OK, I'm quite selfish on that
one too.]
- - it's a delusion that there will be no neutron-main syncs that will
break neutron-*aas repos ever. There can still be problems due to
incompatibility between neutron main and neutron-*aas code, caused
EXACTLY by multiple parts of the same process using different
versions of the same module.

That said, Doug Wiegley (lbaas core) seems to be in favour of option
3. due to lower coupling that is achieved in that way. I know that
lbaas team had a bad experience due to tight coupling to neutron
project in the past, so I appreciate their concerns.

All in all, we should come up with some standard solution for both
advanced services that are already split out, *and* upcoming vendor
plugin shrinking initiative.

The initial discussion is captured at:
https://review.openstack.org/#/c/141427/

Thanks,
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUju0NAAoJEC5aWaUY1u57n5YH/jA4l5DsLgRpw9gYsoSWVGvh
apmJ4UlnAKhxzc787XImz1VA+ztSyIwAUdEdcfq3gkinP58q7o48oIXOGjFXaBNq
6qBePC1hflEqZ85Hm4/i5z51qutjW0dyi4y4C6FHgM5NsEkhbh0QIa/u8Hr4F1q6
tkr0kDbCbDkiZ8IX1l74VGWQ3QvCNeJkANUg79BqGq+qIVP3BeOHyWqRmpLZFQ6E
QiUwhiYv5l4HekCEQN8PWisJoqnhbTNjvLBnLD82IitLd5vXnsXfSkxKhv9XeOg/
czLUCyr/nJg4aw8Qm0DTjnZxS+BBe5De0Ke4zm2AGePgFYcai8YQPtuOfSJDbXk=
=D6Gn
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time

2014-12-15 Thread Anita Kuno
On 12/15/2014 10:55 AM, Kurt Taylor wrote:
 Anita, please, creating yet another meeting time without input from anyone
 just confuses the issue.
When I ask people to attend meetings to reduce noise on the mailing
list, there had better be some meetings.

I am grateful for the time you have spent chairing, thank you. It gave
me a huge break and allowed me to focus on other things (like reviews)
that I had to neglect due to the amount of time third party was taking
from my life.

I need there to be meetings to answer questions for people and will be
chairing meetings on the dates and times I have specified, like I said
that I would do.

Thank you,
Anita.

 
 The work group has agreed unanimously on alternating weekly meeting times,
 and are currently voting on the best for everyone. (
 https://www.google.com/moderator/#16/e=21b93c  14 voters so far, thanks
 everyone!) Once we finalize the voting, I was going to start up the new
 meeting times in the new year. Until then, we would stay at our normal
 time, Monday at 1800 UTC.
 
 I am still confused why you would not want to go with the consensus on this.
 
 And, thanks again for everything that you do for us!
 Kurt Taylor (krtaylor)
 
 
 On Mon, Dec 15, 2014 at 9:23 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/09/2014 11:55 AM, Anita Kuno wrote:
 On 12/09/2014 08:32 AM, Kurt Taylor wrote:
 All of the feedback so far has supported moving the existing IRC
 Third-party CI meeting to better fit a worldwide audience.

 The consensus is that we will have only 1 meeting per week at
 alternating
 times. You can see examples of other teams with alternating meeting
 times
 at: https://wiki.openstack.org/wiki/Meetings

 This way, one week we are good for one part of the world, the next week
 for
 the other. You will not need to attend both meetings, just the meeting
 time
 every other week that fits your schedule.

 Proposed times in UTC are being voted on here:
 https://www.google.com/moderator/#16/e=21b93c

 Please vote on the time that is best for you. I would like to finalize
 the
 new times this week.

 Thanks!
 Kurt Taylor (krtaylor)



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Note that Kurt is welcome to do as he pleases with his own time.

 I will be having meetings in the irc channel for the times that I have
 booked.

 Thanks,
 Anita.

 Just in case anyone remains confused, I am chairing third party meetings
 Mondays at 1500 UTC and Tuesdays at 0800 UTC in #openstack-meeting.
 There is a meeting currently in progress.

 This is a great time for people who don't understand requirements to
 show up and ask questions.

 Thank you,
 Anita.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][fwaas][oslo] Common code between VMWare neutron plugin and services plugins

2014-12-15 Thread Russell Bryant
On 12/15/2014 11:20 AM, Kobi Samoray wrote:
 3. Add these components to oslo.vmware project: oslo.vmware contains, as of 
 now, a wrapper to vCenter API. The components in discussion wrap NSX API, 
 which is out of vCenter scope. Therefore it’s not really a part of 
 oslo.vmware scope as it is defined today, but is still a wrapper layer to a 
 VMWare product.
 We could extend the oslo.vmware scope to include wrappers to VMWare products, 
 in general, and add the relevant components under oslo.vmware.network.nsx or 
 similar.

This option sounds best to me, unless the NSX support brings in some
additional dependencies to oslo.vmware that warrant keeping it separate
from the existing oslo.vmware.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][fwaas][oslo] Common code between VMWare neutron plugin and services plugins

2014-12-15 Thread Doug Wiegley


On 12/15/14, 8:20 AM, Kobi Samoray ksamo...@vmware.com wrote:

Hi,
Some files in neutron are common infrastructure to the VMWare neutron
L2/L3 plugin, and the services plugins.
These files wrap VMWare NSX and provide a python API to some NSX services.

This code is common to:
- VMWare L2/L3 plugin, which after the split should be held outside of
openstack repo (e.g stackforge)
- neutron-lbaas, neutron-fwaas repos, which will hold the VMWare services
plugins

With neutron split into multiple repos, in and out of openstack, we have
the following options:
1. Duplicate the relevant code between the various repos - IMO a pretty
bad choice for obvious reasons.

Yeah, yuck.


2. Keep the code in the VMWare L2/L3 plugin repo - which will add an
import from the neutron-*aas repos to a repo which is outside of
openstack.

Importing code from elsewhere, which is not in the requirements file, is
done in a few places for vendor libraries. As long as the mainline code
doesn’t require it, and unit tests similarly can run without that import
being present, I don’t see a big problem wit hit.

Doug



3. Add these components to oslo.vmware project: oslo.vmware contains, as
of now, a wrapper to vCenter API. The components in discussion wrap NSX
API, which is out of vCenter scope. Therefore it’s not really a part of
oslo.vmware scope as it is defined today, but is still a wrapper layer to
a VMWare product.
We could extend the oslo.vmware scope to include wrappers to VMWare
products, in general, and add the relevant components under
oslo.vmware.network.nsx or similar.

Thanks,
Kobi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas][fwaas][oslo] Common code between VMWare neutron plugin and services plugins

2014-12-15 Thread Kobi Samoray
It shouldn’t drag in any additional dependencies - it’s a REST API wrapper, so 
nothing beyond XML/JSON/HTTP/threads should be used in these components.

 On Dec 15, 2014, at 18:23, Russell Bryant rbry...@redhat.com wrote:
 
 On 12/15/2014 11:20 AM, Kobi Samoray wrote:
 3. Add these components to oslo.vmware project: oslo.vmware contains, as of 
 now, a wrapper to vCenter API. The components in discussion wrap NSX API, 
 which is out of vCenter scope. Therefore it’s not really a part of 
 oslo.vmware scope as it is defined today, but is still a wrapper layer to a 
 VMWare product.
 We could extend the oslo.vmware scope to include wrappers to VMWare 
 products, in general, and add the relevant components under 
 oslo.vmware.network.nsx or similar.
 
 This option sounds best to me, unless the NSX support brings in some
 additional dependencies to oslo.vmware that warrant keeping it separate
 from the existing oslo.vmware.
 
 -- 
 Russell Bryant
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo specs review day

2014-12-15 Thread Radoslav Gerganov

On 12/15/2014 12:54 PM, Neil Jerram wrote:

My Nova spec (https://review.openstack.org/#/c/130732/) does not appear
on this dashboard, even though I believe it's in good standing and - I
hope - close to approval.  Do you know why - does it mean that I've set
some metadata field somewhere wrongly?



The dashboard doesn't show your own specs.  You have to remove 
+NOT+owner%3Aself from the URL to see them.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Doug Hellmann
The Oslo team is pleased to announce the release of
oslo.db 1.3.0: oslo.db library

This release is primarily meant to update the SQLAlchemy dependency
to resolve the issue with the new version of setuptools changing
how it evaluates version range specifications.

For more details, please see the git log history below and
 http://launchpad.net/oslo.db/+milestone/1.3.0

Please report issues through launchpad:
 http://bugs.launchpad.net/oslo.db



Changes in openstack/oslo.db  1.2.0..1.3.0

0265aa4 Repair string-based disconnect filters for MySQL, DB2
b1af0f5 Fix python3.x scoping issues with removed 'uee' variable
c6b352e Updated from global requirements
9658b28 Fix test_migrate_cli for py3
4c939b3 Fix TestConnectionUtils to py3x compatibility
9c3477d Updated from global requirements
32e5c60 Upgrade exc_filters for 'engine' argument and connect behavior
161bbb2 Workflow documentation is now in infra-manual
86c136a Fix nested() for py3

  diffstat (except docs and test files):

 CONTRIBUTING.rst  |  7 ++---
 oslo/db/sqlalchemy/compat/__init__.py |  6 ++--
 oslo/db/sqlalchemy/compat/handle_error.py | 50 +++
 oslo/db/sqlalchemy/compat/utils.py|  1 +
 oslo/db/sqlalchemy/exc_filters.py | 34 -
 requirements.txt  |  4 +--
 tests/sqlalchemy/test_exc_filters.py  | 39 +++-
 tests/sqlalchemy/test_migrate_cli.py  |  6 ++--
 tests/sqlalchemy/test_utils.py| 12 
 tests/utils.py|  4 +--
 10 files changed, 111 insertions(+), 52 deletions(-)

  Requirements updates:

 diff --git a/requirements.txt b/requirements.txt
 index f8a0d8c..8ab53a0 100644
 --- a/requirements.txt
 +++ b/requirements.txt
 @@ -11,2 +11,2 @@ oslo.config>=1.4.0  # Apache-2.0
 -oslo.utils>=1.0.0   # Apache-2.0
 -SQLAlchemy>=0.8.4,<=0.8.99,>=0.9.7,<=0.9.99
 +oslo.utils>=1.1.0   # Apache-2.0
 +SQLAlchemy>=0.9.7,<=0.9.99
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-15 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

I was (rightfully) asked to share my comments on the matter that I
left in gerrit here. See below.

On 12/12/14 22:40, Sean Dague wrote:
 On 12/12/2014 01:05 PM, Maru Newby wrote:
 
 On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net wrote:
 
 On 12/11/2014 04:16 PM, Jay Pipes wrote:
 On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
 On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com
 wrote:
 On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
 On Dec 11, 2014, at 8:00 AM, Henry Gessau
 ges...@cisco.com wrote:
 
 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz
 wrote:
 
 On Dec 11, 2014, at 8:43 AM, Jay Pipes
 jaypi...@gmail.com mailto:jaypi...@gmail.com
 wrote:
 
 I'm generally in favor of making name attributes
 opaque, utf-8 strings that are entirely
 user-defined and have no constraints on them. I 
 consider the name to be just a tag that the user
 places on some resource. It is the resource's ID
 that is unique.
 
 I do realize that Nova takes a different approach
 to *some* resources, including the security group
 name.
 
 End of the day, it's probably just a personal
 preference whether names should be unique to a
 tenant/user or not.
 
 Maru had asked me my opinion on whether names
 should be unique and I answered my personal
 opinion that no, they should not be, and if 
 Neutron needed to ensure that there was one and
 only one default security group for a tenant,
 that a way to accomplish such a thing in a
 race-free way, without use of SELECT FOR UPDATE,
 was to use the approach I put into the pastebin
 on the review above.
 
 
 I agree with Jay.  We should not care about how a
 user names the resource. There other ways to
 prevent this race and Jay’s suggestion is a good
 one.
 
 However we should open a bug against Horizon because
 the user experience there is terrible with duplicate
 security group names.
 
 The reason security group names are unique is that the
 ec2 api supports source rule specifications by
 tenant_id (user_id in amazon) and name, so not
 enforcing uniqueness means that invocation in the ec2
 api will either fail or be non-deterministic in some
 way.
 
 So we should couple our API evolution to EC2 API then?
 
 -jay
 
 No I was just pointing out the historical reason for
 uniqueness, and hopefully encouraging someone to find the
 best behavior for the ec2 api if we are going to keep the
 incompatibility there. Also I personally feel the ux is 
 better with unique names, but it is only a slight
 preference.
 
 Sorry for snapping, you made a fair point.
 
 Yeh, honestly, I agree with Vish. I do feel that the UX of
 that constraint is useful. Otherwise you get into having to
 show people UUIDs in a lot more places. While those are good
 for consistency, they are kind of terrible to show to people.
 
 While there is a good case for the UX of unique names - it also
 makes orchestration via tools like puppet a heck of a lot simpler
 - the fact is that most OpenStack resources do not require unique
 names.  That being the case, why would we want security groups to
 deviate from this convention?
 
 Maybe the other ones are the broken ones?
 
 Honestly, any sanely usable system makes names unique inside a 
 container. Like files in a directory.

Correct. Or take git: it does not use hashes to identify objects, right?

 In this case the tenant is the container, which makes sense.
 
 It is one of many places that OpenStack is not consistent. But I'd 
 rather make things consistent and more usable than consistent and
 less.

Are we only proposing to make security group name unique? I assume
that, since that's what we currently have in review. The change would
make API *more* inconsistent, not less, since other objects still use
uuid for identification.

You may say that we should move *all* neutron objects to the new
identification system by name. But what's the real benefit?

If there are problems in UX (client, horizon, ...), we should fix the
view and not data models used. If we decide we want users to avoid
using objects with the same names, fine, let's add warnings in UI
(probably with an option to disable it so that we don't force the
validation down their throats).

Finally, I have concern about us changing user visible object
attributes like names during db migrations, as it's proposed in the
patch discussed here. I think such behaviour can be quite unexpected
for some users, if not breaking their workflow and/or scripts.

My belief is that responsible upstream does not apply ad-hoc changes
to API to fix a race condition that is easily solvable in other ways
(see Assaf's proposal to introduce a new DefaultSecurityGroups table
in patchset 12 comments).
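
(For readers who don't have the review open: the constraint being debated
is roughly the following, shown here as a hedged standalone SQLAlchemy
sketch rather than the actual patch under review.)

  from sqlalchemy import Column, String
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.schema import UniqueConstraint

  Base = declarative_base()

  class SecurityGroup(Base):
      __tablename__ = 'securitygroups'
      id = Column(String(36), primary_key=True)
      tenant_id = Column(String(255))
      name = Column(String(255))
      # The proposal: at most one security group with a given name per
      # tenant, enforced at the database level.
      __table_args__ = (UniqueConstraint('tenant_id', 'name'),)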

As for the whole object identification scheme change, for this to
work, it probably needs a spec and a long discussion on any possible
complications (and benefits) when applying a change like that.

For reference and convenience of readers, leaving the link to the

Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Jeremy Stanley
On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
[...]
 This release is primarily meant to update the SQLAlchemy dependency
 to resolve the issue with the new version of setuptools changing
 how it evaluates version range specifications.
[...]

However note that I'm in the middle of forcing a refresh on a couple
of our PyPI mirror servers, so it may be a couple hours before we
see the effects of this throughout all of our infrastructure.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Unrecognized Services and Install Guide

2014-12-15 Thread André Aranha
When I try to follow the installation guide I'm having some issues (
http://docs.openstack.org/developer/ironic/deploy/install-guide.html )
I installed devstack with Ironic and it worked. Now, having a single
machine running devstack, I want to deploy Ironic on it. So I'll have one
machine as the controller node and another machine that Ironic will use as
a VM.
I'm using Ubuntu 14.04 and when I download the Ironic services

# Available in Ubuntu 14.04 (trusty)
 apt-get install ironic-api ironic-conductor python-ironicclient


When I check the ironic-api version, I see that the version downloaded was 2014.1.rc1:

ironic-api --version
 2014.1.rc1


This version doesn't have the 'create_schema' capability, which is required
in the guide:

ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema


So, following some tips in #openstack-ironic, I downloaded the code
from the repository and installed it after removing the previously
downloaded ironic-api:

git clone https://github.com/openstack/ironic.git
 python setup.py install


Now the ironic-dbsync is working with the create_schema, and I have the
following ironic-api version:

ironic-api --version
 2015.1.dev206.g2db2659


But when I continue with the guide:
1) I get an error from the ironic-api service

sudo service ironic-api restart
 ironic-api: unrecognized service


2) The nova-scheduler service doesn't exist

sudo service nova-scheduler restart
 nova-scheduler: unrecognized service


3) Neither does nova-compute

sudo service nova-compute restart
 nova-compute: unrecognized service


Has anyone had this problem, and how can it be solved?

I don't know if this issue should be addressed to OpenStack-dev, so I'm
also addressing it to the OpenStack list.

Thank you,
Andre Aranha
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes - 15/12/2014

2014-12-15 Thread Renat Akhmerov
Thanks for joining our team meeting today!

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-15-16.01.html
Full log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-12-15-16.01.log.html

The next meeting is on Dec 22 at the same time.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-15 Thread Assaf Muller


- Original Message -
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 I was (rightfully) asked to share my comments on the matter that I
 left in gerrit here. See below.
 
 On 12/12/14 22:40, Sean Dague wrote:
  On 12/12/2014 01:05 PM, Maru Newby wrote:
  
  On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net wrote:
  
  On 12/11/2014 04:16 PM, Jay Pipes wrote:
  On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
  On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com
  wrote:
  On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
  
  On Dec 11, 2014, at 8:00 AM, Henry Gessau
  ges...@cisco.com wrote:
  
  On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz
  wrote:
  
  On Dec 11, 2014, at 8:43 AM, Jay Pipes
  jaypi...@gmail.com mailto:jaypi...@gmail.com
  wrote:
  
  I'm generally in favor of making name attributes
  opaque, utf-8 strings that are entirely
  user-defined and have no constraints on them. I
  consider the name to be just a tag that the user
  places on some resource. It is the resource's ID
  that is unique.
  
  I do realize that Nova takes a different approach
  to *some* resources, including the security group
  name.
  
  End of the day, it's probably just a personal
  preference whether names should be unique to a
  tenant/user or not.
  
  Maru had asked me my opinion on whether names
  should be unique and I answered my personal
  opinion that no, they should not be, and if
  Neutron needed to ensure that there was one and
  only one default security group for a tenant,
  that a way to accomplish such a thing in a
  race-free way, without use of SELECT FOR UPDATE,
  was to use the approach I put into the pastebin
  on the review above.
  
  
  I agree with Jay.  We should not care about how a
  user names the resource. There other ways to
  prevent this race and Jay’s suggestion is a good
  one.
  
  However we should open a bug against Horizon because
  the user experience there is terrible with duplicate
  security group names.
  
  The reason security group names are unique is that the
  ec2 api supports source rule specifications by
  tenant_id (user_id in amazon) and name, so not
  enforcing uniqueness means that invocation in the ec2
  api will either fail or be non-deterministic in some
  way.
  
  So we should couple our API evolution to EC2 API then?
  
  -jay
  
  No I was just pointing out the historical reason for
  uniqueness, and hopefully encouraging someone to find the
  best behavior for the ec2 api if we are going to keep the
  incompatibility there. Also I personally feel the ux is
  better with unique names, but it is only a slight
  preference.
  
  Sorry for snapping, you made a fair point.
  
  Yeh, honestly, I agree with Vish. I do feel that the UX of
  that constraint is useful. Otherwise you get into having to
  show people UUIDs in a lot more places. While those are good
  for consistency, they are kind of terrible to show to people.
  
  While there is a good case for the UX of unique names - it also
  makes orchestration via tools like puppet a heck of a lot simpler
  - the fact is that most OpenStack resources do not require unique
  names.  That being the case, why would we want security groups to
  deviate from this convention?
  
  Maybe the other ones are the broken ones?
  
  Honestly, any sanely usable system makes names unique inside a
  container. Like files in a directory.
 
 Correct. Or take git: it does not use hashes to identify objects, right?
 
  In this case the tenant is the container, which makes sense.
  
  It is one of many places that OpenStack is not consistent. But I'd
  rather make things consistent and more usable than consistent and
  less.
 
 Are we only proposing to make security group name unique? I assume
 that, since that's what we currently have in review. The change would
 make API *more* inconsistent, not less, since other objects still use
 uuid for identification.
 
 You may say that we should move *all* neutron objects to the new
 identification system by name. But what's the real benefit?
 
 If there are problems in UX (client, horizon, ...), we should fix the
 view and not data models used. If we decide we want users to avoid
 using objects with the same names, fine, let's add warnings in UI
 (probably with an option to disable it so that we don't force the
 validation down their throats).
 
 Finally, I have concern about us changing user visible object
 attributes like names during db migrations, as it's proposed in the
 patch discussed here. I think such behaviour can be quite unexpected
 for some users, if not breaking their workflow and/or scripts.
 
 My belief is that responsible upstream does not apply ad-hoc changes
 to API to fix a race condition that is easily solvable in other ways
 (see Assaf's proposal to introduce a new DefaultSecurityGroups table
 in patchset 12 comments).
 

As usual you explain yourself better than I can... I think my main
original objection to the patch is that it 

Re: [openstack-dev] [nova] Kilo specs review day

2014-12-15 Thread Joe Gordon
On Mon, Dec 15, 2014 at 8:46 AM, Radoslav Gerganov rgerga...@vmware.com
wrote:

 On 12/15/2014 12:54 PM, Neil Jerram wrote:

 My Nova spec (https://review.openstack.org/#/c/130732/) does not appear
 on this dashboard, even though I believe it's in good standing and - I
 hope - close to approval.  Do you know why - does it mean that I've set
 some metadata field somewhere wrongly?


 The dashboard doesn't show your own specs.  You have to remove
 +NOT+owner%3Aself from the URL to see them.


The latest iteration of the dashboard (
https://review.openstack.org/#/c/130732/) shows your own specs:

https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fnova%252Dspecs+status%3Aopen+NOT+label%3AWorkflow%3C%3D%252D1+branch%3Amaster+NOT+owner%3Aselftitle=Nova+SpecsYour+are+a+reviewer%252c+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsNot+blocked+by+%252D2s=NOT+label%3ACode%252DReview%3C%3D%252D2+NOT+label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsNo+votes+and+spec+is+%3E+1+week+old=NOT+label%3ACode%252DReview%3E%3D%252D2+age%3A7d+label%3AVerified%3E%3D1%252cjenkinsNeeds+final+%2B2=label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsBroken+Specs+%28doesn%27t+pass+Jenkins%29=label%3AVerified%3C%3D%252D1%252cjenkinsDead+Specs+%28blocked+by+a+%252D2%29=label%3ACode%252DReview%3C%3D%252D2




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-15 Thread Doug Wiegley
Hi all,

Ihar and I discussed this on IRC, and are going forward with option 2
unless someone has a big problem with it.

Thanks,
Doug


On 12/15/14, 8:22 AM, Doug Wiegley do...@a10networks.com wrote:

Hi Ihar,

I’m actually in favor of option 2, but it implies a few things about your
time, and I wanted to chat with you before presuming.

Maintenance can not involve breaking changes. At this point, the co-gate
will block it.  Also, oslo graduation changes will have to be made in the
services repos first, and then Neutron.

Thanks,
doug


On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi all,

the question arose recently in one of reviews for neutron-*aas repos
to remove all oslo-incubator code from those repos since it's
duplicated in neutron main repo. (You can find the link to the review
at the end of the email.)

Brief history: neutron repo was recently split into 4 pieces (main,
neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted
in each repository keeping their own copy of
neutron/openstack/common/... tree (currently unused in all
neutron-*aas repos that are still bound to modules from main repo).

As an oslo liaison for the project, I wonder what's the best way to
manage oslo-incubator files. We have several options:

1. just kill all the neutron/openstack/common/ trees from neutron-*aas
repositories and continue using modules from main repo.

2. kill all duplicate modules from neutron-*aas repos and leave only
those that are used in those repos but not in main repo.

3. fully duplicate all those modules in each of four repos that use them.

I think option 1. is a straw man, since we should be able to introduce
new oslo-incubator modules into neutron-*aas repos even if they are
not used in main repo.

Option 2. is good when it comes to synching non-breaking bug fixes (or
security fixes) from oslo-incubator, in that it will require only one
sync patch instead of e.g. four. At the same time there may be
potential issues when synchronizing updates from oslo-incubator that
would break API and hence require changes to each of the modules that
use it. Since we don't support atomic merges for multiple projects in
gate, we will need to be cautious about those updates, and we will
still need to leave neutron-*aas repos broken for some time (though
the time may be mitigated with care).

Option 3. is vice versa - in theory, you get total decoupling, meaning
no oslo-incubator updates in main repo are expected to break
neutron-*aas repos, but bug fixing becomes a huge PITA.

I would vote for option 2., for two reasons:
- - most oslo-incubator syncs are non-breaking, and we may effectively
apply care to updates that may result in potential breakage (f.e.
being able to trigger an integrated run for each of neutron-*aas repos
with the main sync patch, if there are any concerns).
- - it will make oslo liaison life a lot easier. OK, I'm probably too
selfish on that. ;)
- - it will make stable maintainers life a lot easier. The main reason
why stable maintainers and distributions like recent oslo graduation
movement is that we don't need to track each bug fix we need in every
project, and waste lots of cycles on it. Being able to fix a bug in
one place only is *highly* anticipated. [OK, I'm quite selfish on that
one too.]
- - it's a delusion that there will be no neutron-main syncs that will
break neutron-*aas repos ever. There can still be problems due to
incompatibility between neutron main and neutron-*aas code, caused
EXACTLY by multiple parts of the same process using different
versions of the same module.

That said, Doug Wiegley (lbaas core) seems to be in favour of option
3. due to lower coupling that is achieved in that way. I know that
lbaas team had a bad experience due to tight coupling to neutron
project in the past, so I appreciate their concerns.

All in all, we should come up with some standard solution for both
advanced services that are already split out, *and* upcoming vendor
plugin shrinking initiative.

The initial discussion is captured at:
https://review.openstack.org/#/c/141427/

Thanks,
/Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJUju0NAAoJEC5aWaUY1u57n5YH/jA4l5DsLgRpw9gYsoSWVGvh
apmJ4UlnAKhxzc787XImz1VA+ztSyIwAUdEdcfq3gkinP58q7o48oIXOGjFXaBNq
6qBePC1hflEqZ85Hm4/i5z51qutjW0dyi4y4C6FHgM5NsEkhbh0QIa/u8Hr4F1q6
tkr0kDbCbDkiZ8IX1l74VGWQ3QvCNeJkANUg79BqGq+qIVP3BeOHyWqRmpLZFQ6E
QiUwhiYv5l4HekCEQN8PWisJoqnhbTNjvLBnLD82IitLd5vXnsXfSkxKhv9XeOg/
czLUCyr/nJg4aw8Qm0DTjnZxS+BBe5De0Ke4zm2AGePgFYcai8YQPtuOfSJDbXk=
=D6Gn
-END PGP SIGNATURE-


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo specs review day

2014-12-15 Thread Joe Gordon
On Mon, Dec 15, 2014 at 9:14 AM, Joe Gordon joe.gord...@gmail.com wrote:



 On Mon, Dec 15, 2014 at 8:46 AM, Radoslav Gerganov rgerga...@vmware.com
 wrote:

 On 12/15/2014 12:54 PM, Neil Jerram wrote:

 My Nova spec (https://review.openstack.org/#/c/130732/) does not appear
 on this dashboard, even though I believe it's in good standing and - I
 hope - close to approval.  Do you know why - does it mean that I've set
 some metadata field somewhere wrongly?


 The dashboard doesn't show your own specs.  You have to remove
 +NOT+owner%3Aself from the URL to see them.


 The latest iteration of the dashboard (
 https://review.openstack.org/#/c/130732/) shows your own specs:


Looks like that section was removed in
https://review.openstack.org/#/c/141411/




 https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fnova%252Dspecs+status%3Aopen+NOT+label%3AWorkflow%3C%3D%252D1+branch%3Amaster+NOT+owner%3Aselftitle=Nova+SpecsYour+are+a+reviewer%252c+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsNot+blocked+by+%252D2s=NOT+label%3ACode%252DReview%3C%3D%252D2+NOT+label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsNo+votes+and+spec+is+%3E+1+week+old=NOT+label%3ACode%252DReview%3E%3D%252D2+age%3A7d+label%3AVerified%3E%3D1%252cjenkinsNeeds+final+%2B2=label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsBroken+Specs+%28doesn%27t+pass+Jenkins%29=label%3AVerified%3C%3D%252D1%252cjenkinsDead+Specs+%28blocked+by+a+%252D2%29=label%3ACode%252DReview%3C%3D%252D2




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Kilo specs review day

2014-12-15 Thread Neil Jerram
Joe Gordon joe.gord...@gmail.com writes:

 On Mon, Dec 15, 2014 at 8:46 AM, Radoslav Gerganov
 rgerga...@vmware.com wrote:

 On 12/15/2014 12:54 PM, Neil Jerram wrote:
 
 My Nova spec (https://review.openstack.org/#/c/130732/) does not
 appear
 on this dashboard, even though I believe it's in good standing
 and - I
 hope - close to approval. Do you know why - does it mean that
 I've set
 some metadata field somewhere wrongly?
 
 

 The dashboard doesn't show your own specs. You have to remove
 +NOT+owner%3Aself from the URL to see them.

Ah, as simple an explanation and solution as that.  Thanks!

 The latest iteration of the dashboard
 (https://review.openstack.org/#/c/130732/) shows your own specs:

 https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fnova%252Dspecs+status%3Aopen+NOT+label%3AWorkflow%3C%3D%252D1+branch%3Amaster+NOT+owner%3Aselftitle=Nova+SpecsYour+are+a+reviewer%252c+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsNot+blocked+by+%252D2s=NOT+label%3ACode%252DReview%3C%3D%252D2+NOT+label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsNo+votes+and+spec+is+%3E+1+week+old=NOT+label%3ACode%252DReview%3E%3D%252D2+age%3A7d+label%3AVerified%3E%3D1%252cjenkinsNeeds+final+%2B2=label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsBroken+Specs+%28doesn%27t+pass+Jenkins%29=label%3AVerified%3C%3D%252D1%252cjenkinsDead+Specs+%28blocked+by+a+%252D2%29=label%3ACode%252DReview%3C%3D%252D2

Actually that URL still doesn't.  But this one does:

https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fnova%252Dspecs+status%3Aopen+NOT+label%3AWorkflow%3C%3D%252D1+branch%3Amastertitle=Nova+SpecsYour+are+a+reviewer%252c+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsNot+blocked+by+%252D2s=NOT+label%3ACode%252DReview%3C%3D%252D2+NOT+label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsNo+votes+and+spec+is+%3E+1+week+old=NOT+label%3ACode%252DReview%3E%3D%252D2+age%3A7d+label%3AVerified%3E%3D1%252cjenkinsNeeds+final+%2B2=label%3ACode%252DReview%3E%3D2+NOT+label%3ACode%252DReview%3E%3D%252D2%252cself+label%3AVerified%3E%3D1%252cjenkinsBroken+Specs+%28doesn%27t+pass+Jenkins%29=label%3AVerified%3C%3D%252D1%252cjenkinsDead+Specs+%28blocked+by+a+%252D2%29=label%3ACode%252DReview%3C%3D%252D2

Thanks for your reply and for generating this dashboard!

   Neil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-15 Thread Kyle Mestery
Option 2 works for me, thanks for figuring this out Ihar and Doug!

On Mon, Dec 15, 2014 at 11:16 AM, Doug Wiegley do...@a10networks.com
wrote:

 Hi all,

 Ihar and I discussed this on IRC, and are going forward with option 2
 unless someone has a big problem with it.

 Thanks,
 Doug


 On 12/15/14, 8:22 AM, Doug Wiegley do...@a10networks.com wrote:

 Hi Ihar,
 
 I’m actually in favor of option 2, but it implies a few things about your
 time, and I wanted to chat with you before presuming.
 
 Maintenance can not involve breaking changes. At this point, the co-gate
 will block it.  Also, oslo graduation changes will have to be made in the
 services repos first, and then Neutron.
 
 Thanks,
 doug
 
 
 On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 Hi all,
 
 the question arose recently in one of reviews for neutron-*aas repos
 to remove all oslo-incubator code from those repos since it's
 duplicated in neutron main repo. (You can find the link to the review
 at the end of the email.)
 
 Brief history: neutron repo was recently split into 4 pieces (main,
 neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted
 in each repository keeping their own copy of
 neutron/openstack/common/... tree (currently unused in all
 neutron-*aas repos that are still bound to modules from main repo).
 
 As an oslo liaison for the project, I wonder what's the best way to
 manage oslo-incubator files. We have several options:
 
 1. just kill all the neutron/openstack/common/ trees from neutron-*aas
 repositories and continue using modules from main repo.
 
 2. kill all duplicate modules from neutron-*aas repos and leave only
 those that are used in those repos but not in main repo.
 
 3. fully duplicate all those modules in each of four repos that use them.
 
 I think option 1. is a straw man, since we should be able to introduce
 new oslo-incubator modules into neutron-*aas repos even if they are
 not used in main repo.
 
 Option 2. is good when it comes to synching non-breaking bug fixes (or
 security fixes) from oslo-incubator, in that it will require only one
 sync patch instead of e.g. four. At the same time there may be
 potential issues when synchronizing updates from oslo-incubator that
 would break API and hence require changes to each of the modules that
 use it. Since we don't support atomic merges for multiple projects in
 gate, we will need to be cautious about those updates, and we will
 still need to leave neutron-*aas repos broken for some time (though
 the time may be mitigated with care).
 
 Option 3. is vice versa - in theory, you get total decoupling, meaning
 no oslo-incubator updates in main repo are expected to break
 neutron-*aas repos, but bug fixing becomes a huge PITA.
 
 I would vote for option 2., for two reasons:
 - - most oslo-incubator syncs are non-breaking, and we may effectively
 apply care to updates that may result in potential breakage (f.e.
 being able to trigger an integrated run for each of neutron-*aas repos
 with the main sync patch, if there are any concerns).
 - - it will make oslo liaison life a lot easier. OK, I'm probably too
 selfish on that. ;)
 - - it will make stable maintainers life a lot easier. The main reason
 why stable maintainers and distributions like recent oslo graduation
 movement is that we don't need to track each bug fix we need in every
 project, and waste lots of cycles on it. Being able to fix a bug in
 one place only is *highly* anticipated. [OK, I'm quite selfish on that
 one too.]
 - - it's a delusion that there will be no neutron-main syncs that will
 break neutron-*aas repos ever. There can still be problems due to
 incompatibility between neutron main and neutron-*aas code, caused
 EXACTLY by multiple parts of the same process using different
 versions of the same module.
 
 That said, Doug Wiegley (lbaas core) seems to be in favour of option
 3. due to lower coupling that is achieved in that way. I know that
 lbaas team had a bad experience due to tight coupling to neutron
 project in the past, so I appreciate their concerns.
 
 All in all, we should come up with some standard solution for both
 advanced services that are already split out, *and* upcoming vendor
 plugin shrinking initiative.
 
 The initial discussion is captured at:
 https://review.openstack.org/#/c/141427/
 
 Thanks,
 /Ihar
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
 
 iQEcBAEBCgAGBQJUju0NAAoJEC5aWaUY1u57n5YH/jA4l5DsLgRpw9gYsoSWVGvh
 apmJ4UlnAKhxzc787XImz1VA+ztSyIwAUdEdcfq3gkinP58q7o48oIXOGjFXaBNq
 6qBePC1hflEqZ85Hm4/i5z51qutjW0dyi4y4C6FHgM5NsEkhbh0QIa/u8Hr4F1q6
 tkr0kDbCbDkiZ8IX1l74VGWQ3QvCNeJkANUg79BqGq+qIVP3BeOHyWqRmpLZFQ6E
 QiUwhiYv5l4HekCEQN8PWisJoqnhbTNjvLBnLD82IitLd5vXnsXfSkxKhv9XeOg/
 czLUCyr/nJg4aw8Qm0DTjnZxS+BBe5De0Ke4zm2AGePgFYcai8YQPtuOfSJDbXk=
 =D6Gn
 -END PGP SIGNATURE-
 

 ___
 OpenStack-dev mailing list

Re: [openstack-dev] [Third-party] Voting for new Third-party CI weekly IRC meeting time

2014-12-15 Thread Kurt Taylor
On Mon, Dec 15, 2014 at 10:24 AM, Anita Kuno ante...@anteaya.info wrote:

 On 12/15/2014 10:55 AM, Kurt Taylor wrote:
  Anita, please, creating yet another meeting time without input from
 anyone
  just confuses the issue.
 When I ask people to attend meetings to reduce noise on the mailing
 list, there had better be some meetings.


I don't think we have a problem with the volume of third-party email.  In
fact, I wish there were even more questions and discussion. I encourage
everyone to use whatever method (meetings or email) to get involved.



 I am grateful for the time you have spent chairing, thank you. It gave
 me a huge break and allowed me to focus on other things (like reviews)
 that I had to neglect due to the amount of time third party was taking
 from my life.


No problem at all. I'm just a CI operator running a meeting for CI
operators, I get just as much out of it as everyone else.


 I need there to be meetings to answer questions for people and will be
 chairing meetings on the dates and times I have specified, like I said
 that I would do.


I don't know how to move forward with this, except to follow what the group
has agreed on.

I will be happy to kick off the meetings we are voting on, but I hope to
bring other CI operators in the mix to help with chairing, leading
development work groups, and sharing their best practices. I think we are
on the right track to do some great things in Kilo!

Kurt Taylor (krtaylor)



 Thank you,
 Anita.

 
  The work group has agreed unanimously on alternating weekly meeting
 times,
  and are currently voting on the best for everyone. (
  https://www.google.com/moderator/#16/e=21b93c  14 voters so far, thanks
  everyone!) Once we finalize the voting, I was going to start up the new
  meeting times in the new year. Until then, we would stay at our normal
  time, Monday at 1800 UTC.
 
  I am still confused why you would not want to go with the consensus on
 this.
 
  And, thanks again for everything that you do for us!
  Kurt Taylor (krtaylor)
 
 
  On Mon, Dec 15, 2014 at 9:23 AM, Anita Kuno ante...@anteaya.info
 wrote:
 
  On 12/09/2014 11:55 AM, Anita Kuno wrote:
  On 12/09/2014 08:32 AM, Kurt Taylor wrote:
  All of the feedback so far has supported moving the existing IRC
  Third-party CI meeting to better fit a worldwide audience.
 
  The consensus is that we will have only 1 meeting per week at
  alternating
  times. You can see examples of other teams with alternating meeting
  times
  at: https://wiki.openstack.org/wiki/Meetings
 
  This way, one week we are good for one part of the world, the next
 week
  for
  the other. You will not need to attend both meetings, just the meeting
  time
  every other week that fits your schedule.
 
  Proposed times in UTC are being voted on here:
  https://www.google.com/moderator/#16/e=21b93c
 
  Please vote on the time that is best for you. I would like to finalize
  the
  new times this week.
 
  Thanks!
  Kurt Taylor (krtaylor)
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  Note that Kurt is welcome to do as he pleases with his own time.
 
  I will be having meetings in the irc channel for the times that I have
  booked.
 
  Thanks,
  Anita.
 
  Just in case anyone remains confused, I am chairing third party meetings
  Mondays at 1500 UTC and Tuesdays at 0800 UTC in #openstack-meeting.
  There is a meeting currently in progress.
 
  This is a great time for people who don't understand requirements to
  show up and ask questions.
 
  Thank you,
  Anita.
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-15 Thread Neil Jerram
Hi all,

Following the approval for Neutron vendor code decomposition
(https://review.openstack.org/#/c/134680/), I just wanted to comment
that it appears to work fine to have an ML2 mechanism driver _entirely_
out of tree, so long as the vendor repository that provides the ML2
mechanism driver does something like this to register their driver as a
neutron.ml2.mechanism_drivers entry point:

  setuptools.setup(
  ...,
  entry_points = {
  ...,
  'neutron.ml2.mechanism_drivers': [
  'calico = xyz.openstack.mech_xyz:XyzMechanismDriver',
  ],
  },
  )

(Please see
https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f03789001c2139b16de85c
for the complete change and detail, for the example that works for me.)

Then Neutron and the vendor package can be separately installed, and the
vendor's driver name configured in ml2_conf.ini, and everything works.
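
For completeness, the matching ml2_conf.ini stanza then looks something
like this (the driver name matches the entry point registered above;
treat the exact values as illustrative):

  [ml2]
  mechanism_drivers = calico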

Given that, I wonder:

- is that what the architects of the decomposition are expecting?

- other than for the reference OVS driver, are there any reasons in
  principle for keeping _any_ ML2 mechanism driver code in tree?

Many thanks,
 Neil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Swift 2.2.1 rc (err c) 1 is available

2014-12-15 Thread John Dickinson
All,

I'm happy to say that the Swift 2.2.1 release candidate is available.

http://tarballs.openstack.org/swift/swift-2.2.1c1.tar.gz

Please take a look, and if nothing is found, we'll release this as the final 
2.2.1 version at the end of the week.

This release includes a lot of great improvements for operators. You can see 
the change log at https://github.com/openstack/swift/blob/master/CHANGELOG.


One note about the tag name. The recent release of setuptools has started 
enforcing PEP440. According to that spec, 2.2.1rc1 (ie the old way we tagged 
things) is normalized to 2.2.1c1. See 
https://www.python.org/dev/peps/pep-0440/#pre-releases for the details. Since 
OpenStack infrastructure relies on setuptools parsing to determine the tarball 
name, the tags we use need to be already normalized so that the tag in the repo 
matches the tarball created. Therefore, the new tag name is 2.2.1c1.


--John




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][oslo][neutron] Managing oslo-incubator modules after project split

2014-12-15 Thread Doug Hellmann
There may be a similar problem managing dependencies on libraries that live 
outside of either tree. I assume you already decided how to handle that. Are 
you doing the same thing, and adding the requirements to neutron’s lists?

On Dec 15, 2014, at 12:16 PM, Doug Wiegley do...@a10networks.com wrote:

 Hi all,
 
 Ihar and I discussed this on IRC, and are going forward with option 2
 unless someone has a big problem with it.
 
 Thanks,
 Doug
 
 
 On 12/15/14, 8:22 AM, Doug Wiegley do...@a10networks.com wrote:
 
 Hi Ihar,
 
 I’m actually in favor of option 2, but it implies a few things about your
 time, and I wanted to chat with you before presuming.
 
 Maintenance can not involve breaking changes. At this point, the co-gate
 will block it.  Also, oslo graduation changes will have to be made in the
 services repos first, and then Neutron.
 
 Thanks,
 doug
 
 
 On 12/15/14, 6:15 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 
 Hi all,
 
 the question arose recently in one of reviews for neutron-*aas repos
 to remove all oslo-incubator code from those repos since it's
 duplicated in neutron main repo. (You can find the link to the review
 at the end of the email.)
 
 Brief history: neutron repo was recently split into 4 pieces (main,
 neutron-fwaas, neutron-lbaas, and neutron-vpnaas). The split resulted
 in each repository keeping their own copy of
 neutron/openstack/common/... tree (currently unused in all
 neutron-*aas repos that are still bound to modules from main repo).
 
 As an oslo liaison for the project, I wonder what's the best way to
 manage oslo-incubator files. We have several options:
 
 1. just kill all the neutron/openstack/common/ trees from neutron-*aas
 repositories and continue using modules from main repo.
 
 2. kill all duplicate modules from neutron-*aas repos and leave only
 those that are used in those repos but not in main repo.
 
 3. fully duplicate all those modules in each of four repos that use them.
 
 I think option 1. is a straw man, since we should be able to introduce
 new oslo-incubator modules into neutron-*aas repos even if they are
 not used in main repo.
 
 Option 2. is good when it comes to synching non-breaking bug fixes (or
 security fixes) from oslo-incubator, in that it will require only one
 sync patch instead of e.g. four. At the same time there may be
 potential issues when synchronizing updates from oslo-incubator that
 would break API and hence require changes to each of the modules that
 use it. Since we don't support atomic merges for multiple projects in
 gate, we will need to be cautious about those updates, and we will
 still need to leave neutron-*aas repos broken for some time (though
 the time may be mitigated with care).
 
 Option 3. is vice versa - in theory, you get total decoupling, meaning
 no oslo-incubator updates in main repo are expected to break
 neutron-*aas repos, but bug fixing becomes a huge PITA.
 
 I would vote for option 2., for several reasons:
 - most oslo-incubator syncs are non-breaking, and we may effectively
 apply care to updates that may result in potential breakage (f.e.
 being able to trigger an integrated run for each of the neutron-*aas repos
 with the main sync patch, if there are any concerns).
 - it will make the oslo liaison's life a lot easier. OK, I'm probably too
 selfish on that. ;)
 - it will make stable maintainers' lives a lot easier. The main reason
 why stable maintainers and distributions like the recent oslo graduation
 movement is that we don't need to track each bug fix we need in every
 project, and waste lots of cycles on it. Being able to fix a bug in
 one place only is *highly* anticipated. [OK, I'm quite selfish on that
 one too.]
 - it's a delusion that there will be no neutron-main syncs that will
 ever break neutron-*aas repos. There can still be problems due to
 incompatibility between neutron main and neutron-*aas code, caused
 exactly because multiple parts of the same process use different
 versions of the same module.
 
 That said, Doug Wiegley (lbaas core) seems to be in favour of option
 3. due to lower coupling that is achieved in that way. I know that
 lbaas team had a bad experience due to tight coupling to neutron
 project in the past, so I appreciate their concerns.
 
 All in all, we should come up with some standard solution for both
 advanced services that are already split out, *and* upcoming vendor
 plugin shrinking initiative.
 
 The initial discussion is captured at:
 https://review.openstack.org/#/c/141427/
 
 Thanks,
 /Ihar
 

[openstack-dev] Help improve the OpenStack Horizon user portal!

2014-12-15 Thread Kruithof, Piet
You are invited to participate in an online card sort activity sponsored by the 
Horizon team.  The purpose of this activity is to help us evaluate the current 
information architecture and find ways to improve it.   We are specifically 
interested in individuals who use cloud services as a consumer (IaaS, PaaS, 
SaaS, etc).

Activity Time:  15 minutes to complete the online activity by yourself  – no 
scheduling required!
Link to card sort: http://ows.io/os/0v46l867

**Participants will be automatically added to a drawing for one of three HP 7 
G2 tablets.**

Feel free to forward the link to anyone else who might be interested.  College 
students are welcome to participate.

As always, the results will be shared with the community.

Thanks,

Piet Kruithof
Sr. UX Architect – HP Helion Cloud



PS - This is different from the usability study that was posted earlier.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Job failures related to Setuptools 8.0 release

2014-12-15 Thread Matt Riedemann



On 12/15/2014 10:16 AM, Jeremy Stanley wrote:

On Saturday, December 13, Setuptools 8.0 was released, implementing the
new PEP 440[1] version specification and dropping support for a
variety of previously somewhat-supported versioning syntaxes. This
is impacting us in several ways...

Multiple range expressions in requirements are no longer
interpreted the way we were relying on. The fix is fairly
straightforward, since the SQLAlchemy requirement in oslo.db (and
corresponding line in global requirements) is the only current
example. Fixes for this in multiple branches are in flight.
[2][3][4] (This last one is merged to oslo.db's master branch but
not released yet.)

Arbitrary alphanumeric version subcomponents such as PBR's Git SHA
suffixes now sort earlier than all major version numbers. The fix is
still in progress[5][6], and resulted in a couple of brown-bag
releases over the weekend. 0.10.1 generated PEP 440 compliant
versions which ended up unparseable when included in requirements
files, so 0.10.2 is a roll-back identical to 0.10.

The 1.2.3.rc1 we were using for release candidates is now
automatically normalized to 1.2.3c1 during sdist tarball and wheel
generation, causing tag text to no longer match the resulting file
names. This may simply require us to change our naming convention[7]
for these sorts of tags in the future.

In the interim we've pinned setuptools<8 in our infrastructure[8] to
help keep things moving while these various solutions crystallize,
but if you run into this problem locally check your version of
setuptools and try an older one.

[1] http://legacy.python.org/dev/peps/pep-0440/
[2] https://review.openstack.org/141584
[3] https://review.openstack.org/137583
[4] https://review.openstack.org/140948
[5] https://review.openstack.org/141666
[6] https://review.openstack.org/141667
[7] https://review.openstack.org/141831
[8] https://review.openstack.org/141577
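
As an aside, the new ordering rules are easy to check locally with the
standalone 'packaging' library, which implements the same PEP 440 logic
(a quick illustration, assuming 'packaging' is installed):

    from packaging.version import Version

    # 'c1' is normalized to 'rc1', producing the tag/filename mismatch
    # described above, and pre-releases sort before the final release:
    print(Version("1.2.3c1"))                          # 1.2.3rc1
    print(Version("1.2.3c1") < Version("1.2.3"))       # True

    # Dev-style suffixes now sort before the base version as well:
    print(Version("0.10.1.dev7") < Version("0.10.1"))  # True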



I've opened three bugs for three elastic-recheck queries related to 
this, and am now waiting for logstash/e-r to catch up to see what's left to 
categorize.


nova: https://bugs.launchpad.net/nova/+bug/1402751
glance: https://bugs.launchpad.net/glance/+bug/1402747
pbr: https://bugs.launchpad.net/pbr/+bug/1402759

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday December 16th at 19:00 UTC

2014-12-15 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday December 16th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

Meeting log and minutes from the last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-09-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-09-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-09-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Sean Dague
On 12/15/2014 12:01 PM, Jeremy Stanley wrote:
 On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
 [...]
 This release is primarily meant to update the SQLAlchemy dependency
 to resolve the issue with the new version of setuptools changing
 how it evaluates version range specifications.
 [...]
 
 However note that I'm in the middle of forcing a refresh on a couple
 of our PyPI mirror servers, so it may be a couple hours before we
 see the effects of this throughout all of our infrastructure.
 

It looks like this change has broken the grenade jobs because now
oslo.db 1.3.0 ends up being installed in stable/juno environments, which
has incompatible requirements with the rest of stable juno.

http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz

pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but
SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db']

-Sean

-- 
Sean Dague
http://dague.net
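
The failure should be reproducible on a stable/juno node with a small
pkg_resources check (the requirement string below is taken from the log
above; treat the snippet as a sketch):

    import pkg_resources

    try:
        # The same kind of check that setuptools-generated console
        # scripts run at startup:
        pkg_resources.require("SQLAlchemy>=0.9.7,<=0.9.99")
    except pkg_resources.VersionConflict as exc:
        # With SQLAlchemy 0.8.4 installed, this prints the conflict
        # shown in the screen log.
        print(exc)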

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-15 Thread Stefano Maffulli
On 12/09/2014 04:11 PM, vad wrote:
[vad] how about the documentation in this case?... because it needs some
 place to document (a short desc and a link to the vendor page) or list these
 kinds of out-of-tree plugins/drivers... just to make the user aware of
 the availability of such plugins/drivers which are compatible with so and
 so openstack release.
 I checked with the documentation team and according to them, only the
 following plugins/drivers will get documented...
 1) in-tree plugins/drivers (full documentation)
 2) third-party plugins/drivers (ie, one implements and follows this new
 proposal, a.k.a partially-in-tree due to the integration module/code)...
 
 *** no listing/mention of such completely out-of-tree plugins/drivers ***

Discoverability of documentation is a serious issue. As I commented on
the docs spec [1], I think there are already too many places, mini-sites and
random pages holding information that is relevant to users. We should
make an effort to keep things discoverable, even if not necessarily
maintained by the same, single team.

I think the docs team means that they are not able to guarantee
documentation for third-party modules *themselves* (and have not been able
to, either). The docs team is already overworked as it is; they can't take on
more responsibilities.

So once Neutron's code is split, documentation for the users of all
third-party modules should find a good place to live, indexed and
searchable together with the rest of the docs. I'm hoping that we
can find a place (ideally under docs.openstack.org?) where third-party
documentation can live and be maintained by the teams responsible for
the code, too.

Thoughts?

/stef

[1] https://review.openstack.org/#/c/133372/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Donald Stufft

 On Dec 15, 2014, at 1:50 PM, Sean Dague s...@dague.net wrote:
 
 On 12/15/2014 12:01 PM, Jeremy Stanley wrote:
 On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
 [...]
 This release is primarily meant to update the SQLAlchemy dependency
 to resolve the issue with the new version of setuptools changing
 how it evaluates version range specifications.
 [...]
 
 However note that I'm in the middle of forcing a refresh on a couple
 of our PyPI mirror servers, so it may be a couple hours before we
 see the effects of this throughout all of our infrastructure.
 
 
 It looks like this change has broken the grenade jobs because now
 oslo.db 1.3.0 ends up being installed in stable/juno environments, which
 has incompatible requirements with the rest of stable juno.
 
 http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz
 
 pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but
 SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db']
 
   -Sean

It should probably use the specifier from Juno which matches the old
specifier in functionality.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Sean Dague
On 12/15/2014 01:53 PM, Donald Stufft wrote:
 
 On Dec 15, 2014, at 1:50 PM, Sean Dague s...@dague.net wrote:

 On 12/15/2014 12:01 PM, Jeremy Stanley wrote:
 On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
 [...]
 This release is primarily meant to update the SQLAlchemy dependency
 to resolve the issue with the new version of setuptools changing
 how it evaluates version range specifications.
 [...]

 However note that I'm in the middle of forcing a refresh on a couple
 of our PyPI mirror servers, so it may be a couple hours before we
 see the effects of this throughout all of our infrastructure.


 It looks like this change has broken the grenade jobs because now
 oslo.db 1.3.0 ends up being installed in stable/juno environments, which
 has incompatible requirements with the rest of stable juno.

 http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz

 pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but
 SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db']

  -Sean
 
 It should probably use the specifier from Juno which matches the old
 specifier in functionality.

Probably, but that was specifically reverted here -
https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Donald Stufft

 On Dec 15, 2014, at 1:57 PM, Sean Dague s...@dague.net wrote:
 
 On 12/15/2014 01:53 PM, Donald Stufft wrote:
 
 On Dec 15, 2014, at 1:50 PM, Sean Dague s...@dague.net wrote:
 
 On 12/15/2014 12:01 PM, Jeremy Stanley wrote:
 On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
 [...]
 This release is primarily meant to update the SQLAlchemy dependency
 to resolve the issue with the new version of setuptools changing
 how it evaluates version range specifications.
 [...]
 
 However note that I'm in the middle of forcing a refresh on a couple
 of our PyPI mirror servers, so it may be a couple hours before we
 see the effects of this throughout all of our infrastructure.
 
 
 It looks like this change has broken the grenade jobs because now
 oslo.db 1.3.0 ends up being installed in stable/juno environments, which
 has incompatible requirements with the rest of stable juno.
 
 http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz
 
 pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but
  SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db']
 
 -Sean
 
 It should probably use the specifier from Juno which matches the old
 specifier in functionality.
 
 Probably, but that was specifically reverted here -
 https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm
 

Not sure I follow, that doesn’t seem to contain any SQLAlchemy changes?

I mean stable/juno has this - 
SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6
and master has this - SQLAlchemy>=0.9.7,<=0.9.99

I forget who it was but someone suggested just dropping 0.8 in global
requirements over the weekend so that’s what I did.

It appears oslo.db used the SQLAlchemy specifier from master which means that
it won’t work with SQLAlchemy in the 0.8 series. So probably oslo.db should
instead use the one from stable/juno?

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Doug Hellmann

On Dec 15, 2014, at 2:02 PM, Donald Stufft don...@stufft.io wrote:

 
 On Dec 15, 2014, at 1:57 PM, Sean Dague s...@dague.net wrote:
 
 On 12/15/2014 01:53 PM, Donald Stufft wrote:
 
 On Dec 15, 2014, at 1:50 PM, Sean Dague s...@dague.net wrote:
 
 On 12/15/2014 12:01 PM, Jeremy Stanley wrote:
 On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
 [...]
 This release is primarily meant to update the SQLAlchemy dependency
 to resolve the issue with the new version of setuptools changing
 how it evaluates version range specifications.
 [...]
 
 However note that I'm in the middle of forcing a refresh on a couple
 of our PyPI mirror servers, so it may be a couple hours before we
 see the effects of this throughout all of our infrastructure.
 
 
 It looks like this change has broken the grenade jobs because now
 oslo.db 1.3.0 ends up being installed in stable/juno environments, which
 has incompatible requirements with the rest of stable juno.
 
 http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz
 
 pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but
  SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db']
 
-Sean
 
 It should probably use the specifier from Juno which matches the old
 specifier in functionality.
 
 Probably, but that was specifically reverted here -
 https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm
 
 
 Not sure I follow, that doesn’t seem to contain any SQLAlchemy changes?
 
 I mean stable/juno has this - 
  SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6
  and master has this - SQLAlchemy>=0.9.7,<=0.9.99
 
 I forget who it was but someone suggested just dropping 0.8 in global
 requirements over the weekend so that’s what I did.
 
 It appears oslo.db used the SQLAlchemy specifier from master which means that
 it won’t work with SQLAlchemy in the 0.8 series. So probably oslo.db should
 instead use the one from stable/juno?

Master oslo.db has to match the requirements list being used elsewhere in 
master, so it can’t use the requirements spec from a stable branch.

Can we cap oslo.db in juno to 1.2.0? That should work as a minimum version in 
the requirements list for master, which would let us maintain an overlapping 
requirements range to support rolling updates.

Doug

 
 ---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.3.0 released

2014-12-15 Thread Clark Boylan


On Mon, Dec 15, 2014, at 11:09 AM, Doug Hellmann wrote:
 
 On Dec 15, 2014, at 2:02 PM, Donald Stufft don...@stufft.io wrote:
 
  
  On Dec 15, 2014, at 1:57 PM, Sean Dague s...@dague.net wrote:
  
  On 12/15/2014 01:53 PM, Donald Stufft wrote:
  
  On Dec 15, 2014, at 1:50 PM, Sean Dague s...@dague.net wrote:
  
  On 12/15/2014 12:01 PM, Jeremy Stanley wrote:
  On 2014-12-15 11:53:07 -0500 (-0500), Doug Hellmann wrote:
  [...]
  This release is primarily meant to update the SQLAlchemy dependency
  to resolve the issue with the new version of setuptools changing
  how it evaluates version range specifications.
  [...]
  
  However note that I'm in the middle of forcing a refresh on a couple
  of our PyPI mirror servers, so it may be a couple hours before we
  see the effects of this throughout all of our infrastructure.
  
  
  It looks like this change has broken the grenade jobs because now
  oslo.db 1.3.0 ends up being installed in stable/juno environments, which
  has incompatible requirements with the rest of stable juno.
  
  http://logs.openstack.org/07/137307/1/gate//gate-grenade-dsvm/048ee63//logs/old/screen-s-proxy.txt.gz
  
  pkg_resources.VersionConflict: SQLAlchemy 0.8.4 is installed but
   SQLAlchemy>=0.9.7,<=0.9.99 is required by ['oslo.db']
  
   -Sean
  
  It should probably use the specifier from Juno which matches the old
  specifier in functionality.
  
  Probably, but that was specifically reverted here -
  https://review.openstack.org/#/c/138546/2/global-requirements.txt,cm
  
  
  Not sure I follow, that doesn’t seem to contain any SQLAlchemy changes?
  
  I mean stable/juno has this - 
   SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1,!=0.9.2,!=0.9.3,!=0.9.4,!=0.9.5,!=0.9.6
   and master has this - SQLAlchemy>=0.9.7,<=0.9.99
  
  I forget who it was but someone suggested just dropping 0.8 in global
  requirements over the weekend so that’s what I did.
  
  It appears oslo.db used the SQLAlchemy specifier from master which means 
  that
  it won’t work with SQLAlchemy in the 0.8 series. So probably oslo.db should
  instead use the one from stable/juno?
 
 Master oslo.db has to match the requirements list being used elsewhere in
 master, so it can’t use the requirements spec from a stable branch.
 
 Can we cap oslo.db in juno to 1.2.0? That should work as a minimum
 version in the requirements list for master, which would let us maintain
 an overlapping requirements range to support rolling updates.
 
 Doug
 
  
  ---
  Donald Stufft
  PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
  
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think you need a 1.2.1 release that doesn't have the broken
requirement for sqlalchemy, and then cap on that in stable/juno.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-15 Thread Clint Byrum
Excerpts from Anant Patil's message of 2014-12-15 07:15:30 -0800:
 On 13-Dec-14 05:42, Zane Bitter wrote:
  On 12/12/14 05:29, Murugan, Visnusaran wrote:
 
 
  -Original Message-
  From: Zane Bitter [mailto:zbit...@redhat.com]
  Sent: Friday, December 12, 2014 6:37 AM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
  showdown
 
  On 11/12/14 08:26, Murugan, Visnusaran wrote:
  [Murugan, Visnusaran]
  In case of rollback where we have to cleanup earlier version of
  resources,
  we could get the order from old template. We'd prefer not to have a
  graph table.
 
  In theory you could get it by keeping old templates around. But that
  means keeping a lot of templates, and it will be hard to keep track
  of when you want to delete them. It also means that when starting an
  update you'll need to load every existing previous version of the
  template in order to calculate the dependencies. It also leaves the
  dependencies in an ambiguous state when a resource fails, and
  although that can be worked around it will be a giant pain to implement.
 
 
  Agree that looking at all templates for a delete is not good. But
  barring complexity, we feel we could achieve it by way of having an
  update and a delete stream for a stack update operation. I will
  elaborate in detail in the etherpad sometime tomorrow :)
 
  I agree that I'd prefer not to have a graph table. After trying a
  couple of different things I decided to store the dependencies in the
  Resource table, where we can read or write them virtually for free
  because it turns out that we are always reading or updating the
  Resource itself at exactly the same time anyway.
 
 
  Not sure how this will work in an update scenario when a resource does
  not change and its dependencies do.
 
  We'll always update the requirements, even when the properties don't
  change.
 
 
  Can you elaborate a bit on rollback.
  
  I didn't do anything special to handle rollback. It's possible that we 
  need to - obviously the difference in the UpdateReplace + rollback case 
  is that the replaced resource is now the one we want to keep, and yet 
  the replaced_by/replaces dependency will force the newer (replacement) 
  resource to be checked for deletion first, which is an inversion of the 
  usual order.
  
 
 This is where the version is so handy! For UpdateReplaced ones, there is
 an older version to go back to. This version could just be the template ID,
 as I mentioned in another e-mail. All resources are at the current
 template ID if they are found in the current template, even if there is
 no need to update them. Otherwise, they need to be cleaned up in the
 order given in the previous templates.
 
 I think the template ID is used as the version as far as I can see in Zane's
 PoC. If the resource template key doesn't match the current template
 key, the resource is deleted. The version is a misnomer here, but that
 field (template id) is used as though we had versions of resources.
 
  However, I tried to think of a scenario where that would cause problems 
  and I couldn't come up with one. Provided we know the actual, real-world 
  dependencies of each resource I don't think the ordering of those two 
  checks matters.
  
  In fact, I currently can't think of a case where the dependency order 
  between replacement and replaced resources matters at all. It matters in 
  the current Heat implementation because resources are artificially 
  segmented into the current and backup stacks, but with a holistic view 
  of dependencies that may well not be required. I tried taking that line 
  out of the simulator code and all the tests still passed. If anybody can 
  think of a scenario in which it would make a difference, I would be very 
  interested to hear it.
  
  In any event though, it should be no problem to reverse the direction of 
  that one edge in these particular circumstances if it does turn out to 
  be a problem.
  
  We had an approach with depends_on
  and needed_by columns in ResourceTable. But dropped it when we figured out
  we had too many DB operations for Update.
  
  Yeah, I initially ran into this problem too - you have a bunch of nodes 
  that are waiting on the current node, and now you have to go look them 
  all up in the database to see what else they're waiting on in order to 
  tell if they're ready to be triggered.
  
  It turns out the answer is to distribute the writes but centralise the 
  reads. So at the start of the update, we read all of the Resources, 
  obtain their dependencies and build one central graph[1]. We than make 
  that graph available to each resource (either by passing it as a 
  notification parameter, or storing it somewhere central in the DB that 
  they will all have to read anyway, i.e. the Stack). But when we update a 
  dependency we don't update the central graph, we update the individual 
  Resource so there's no global lock required.
  
  [1] 
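
A toy sketch of that idea (illustrative names, not Heat's actual code):
build the reverse-dependency graph once up front, then let each node
check its readiness against its own requirements, with no shared
structure to lock:

    from collections import defaultdict

    def build_graph(requires):
        """requires: dict of resource name -> set of names it depends on."""
        needed_by = defaultdict(set)  # reverse edges; read-only after build
        for name, deps in requires.items():
            for dep in deps:
                needed_by[dep].add(name)
        return needed_by

    def ready(name, requires, done):
        # A node can be triggered once everything it requires is done.
        return requires[name] <= done

    requires = {"server": {"port"}, "port": {"net"}, "net": set()}
    graph = build_graph(requires)  # {'port': {'server'}, 'net': {'port'}}
    done = {"net"}
    print([r for r in requires if r not in done and ready(r, requires, done)])
    # -> ['port']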
  

[openstack-dev] [DevStack] A grenade for DevStack?

2014-12-15 Thread Collins, Sean
Hi,

I've been bitten by a couple of bugs lately on DevStack installs that have
been long lived. One is built by Vagrant, and the other is bare metal
hardware.

https://bugs.launchpad.net/devstack/+bug/1395776

In this instance, a commit was made to roll back the introduction of
MariaDB in Ubuntu, but it does not appear that there was anything done
to revert the change in existing deployments, so I had to go and fix it by
hand on the bare metal lab because I didn't want to waste a lot of time
rebuilding the whole lab from scratch just to fix it.

I also got bit by this on my Vagrant node, but I nuked and paved to fix.

https://bugs.launchpad.net/devstack/+bug/1402762

It'd be great to somehow make a long-lived dsvm node and job where
DevStack is continually deployed and restacked, to check for these
kinds of errors.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] A grenade for DevStack?

2014-12-15 Thread Sean Dague
On 12/15/2014 02:33 PM, Collins, Sean wrote:
 Hi,
 
 I've been bitten by a couple of bugs lately on DevStack installs that have
 been long lived. One is built by Vagrant, and the other is bare metal
 hardware.
 
 https://bugs.launchpad.net/devstack/+bug/1395776
 
 In this instance, a commit was made to roll back the introduction of
 MariaDB in Ubuntu, but it does not appear that there was anything done
 to revert the change in existing deployments, so I had to go and fix it by
 hand on the bare metal lab because I didn't want to waste a lot of time
 rebuilding the whole lab from scratch just to fix it.
 
 I also got bit by this on my Vagrant node, but I nuked and paved to fix.
 
 https://bugs.launchpad.net/devstack/+bug/1402762
 
 It'd be great to somehow make a long-lived dsvm node and job where
 DevStack is continually deployed and restacked, to check for these
 kinds of errors.

One of the things we don't test on the devstack side at all is that
clean.sh takes us back down to baseline, which I think was the real
issue here - https://review.openstack.org/#/c/141891/

I would not be opposed to adding cleanup testing at the end of any
devstack run that ensures everything is shut down correctly and cleaned
up to a base level.

-Sean

-- 
Sean Dague
http://dague.net
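
A rough sketch of such a post-clean check (the service name prefixes
here are illustrative):

    import subprocess

    SERVICE_PREFIXES = ("nova-", "neutron-", "glance-", "cinder-")

    def leftover_processes():
        out = subprocess.check_output(["ps", "-eo", "comm"]).decode()
        return [line for line in out.splitlines()
                if line.startswith(SERVICE_PREFIXES)]

    # After clean.sh this should print "clean"; anything else is a
    # process that survived teardown.
    print(leftover_processes() or "clean")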

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno

2014-12-15 Thread Doug Hellmann
The issue with stable/juno jobs failing because of the difference in the 
SQLAlchemy requirements between the older applications and the newer oslo.db is 
being addressed with a new release of the 1.2.x series. We will then cap the 
requirements for stable/juno to 1.2.1. We decided we did not need to raise the 
minimum version of oslo.db allowed in kilo, because the old versions of the 
library do work, if they are installed from packages and not through setuptools.

Jeremy created a feature/1.2 branch for us, and I have 2 patches up [1][2] to 
apply the requirements fix. The change to the oslo.db version in stable/juno is 
[3].

After the changes in oslo.db merge, I will tag 1.2.1.

Doug

[1] https://review.openstack.org/#/c/141893/
[2] https://review.openstack.org/#/c/141894/
[3] https://review.openstack.org/#/c/141896/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Unsafe Abandon

2014-12-15 Thread Ari Rubenstein
Hi there,
I'm new to the list, and trying to get more information about the following 
issue:

https://bugs.launchpad.net/heat/+bug/1353670
Is there anyone on the list who can explain under what conditions a user might 
hit this?  Workarounds?  ETA for a fix?
Thanks!
- Ari



Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-15 Thread Maru Newby

On Dec 12, 2014, at 1:40 PM, Sean Dague s...@dague.net wrote:

 On 12/12/2014 01:05 PM, Maru Newby wrote:
 
 On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net wrote:
 
 On 12/11/2014 04:16 PM, Jay Pipes wrote:
 On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
 On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
 On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com wrote:
 
 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
 On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com
 mailto:jaypi...@gmail.com wrote:
 
 I'm generally in favor of making name attributes opaque, utf-8
 strings that
 are entirely user-defined and have no constraints on them. I
 consider the
 name to be just a tag that the user places on some resource. It
 is the
 resource's ID that is unique.
 
 I do realize that Nova takes a different approach to *some*
 resources,
 including the security group name.
 
 End of the day, it's probably just a personal preference whether
 names
 should be unique to a tenant/user or not.
 
 Maru had asked me my opinion on whether names should be unique and I
 answered my personal opinion that no, they should not be, and if
 Neutron
 needed to ensure that there was one and only one default security
 group for
 a tenant, that a way to accomplish such a thing in a race-free
 way, without
 use of SELECT FOR UPDATE, was to use the approach I put into the
 pastebin on
 the review above.
 
 
 I agree with Jay.  We should not care about how a user names the
 resource.
  There are other ways to prevent this race, and Jay’s suggestion is a
  good one.
 
 However we should open a bug against Horizon because the user
 experience there
 is terrible with duplicate security group names.
 
 The reason security group names are unique is that the ec2 api
 supports source
 rule specifications by tenant_id (user_id in amazon) and name, so
 not enforcing
 uniqueness means that invocation in the ec2 api will either fail or be
 non-deterministic in some way.
 
 So we should couple our API evolution to EC2 API then?
 
 -jay
 
 No I was just pointing out the historical reason for uniqueness, and
 hopefully
 encouraging someone to find the best behavior for the ec2 api if we
 are going
 to keep the incompatibility there. Also I personally feel the ux is
 better
 with unique names, but it is only a slight preference.
 
 Sorry for snapping, you made a fair point.
 
 Yeh, honestly, I agree with Vish. I do feel that the UX of that
 constraint is useful. Otherwise you get into having to show people UUIDs
 in a lot more places. While those are good for consistency, they are
 kind of terrible to show to people.
 
 While there is a good case for the UX of unique names - it also makes 
 orchestration via tools like puppet a heck of a lot simpler - the fact is 
 that most OpenStack resources do not require unique names.  That being the 
 case, why would we want security groups to deviate from this convention?
 
 Maybe the other ones are the broken ones?
 
 Honestly, any sanely usable system makes names unique inside a
 container. Like files in a directory. In this case the tenant is the
 container, which makes sense.
 
 It is one of many places that OpenStack is not consistent. But I'd
 rather make things consistent and more usable than consistent and less.

You might prefer less consistency for the sake of usability, but for me, 
consistency is a large enough factor in usability that allowing seemingly 
arbitrary deviation doesn’t seem like a huge win.  Regardless, I’d like to see 
us come to decisions on API usability on an OpenStack-wide basis, so the API 
working group is probably where this discussion should continue.


Maru
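
For reference, the unique-constraint pattern mentioned above (insert
first, and treat a duplicate-key error as losing the race rather than
using SELECT FOR UPDATE) can be sketched roughly like this in
SQLAlchemy; the table and names are hypothetical, not Neutron's actual
schema:

    import sqlalchemy as sa
    from sqlalchemy.exc import IntegrityError

    meta = sa.MetaData()
    default_sg = sa.Table(
        "default_security_group", meta,
        sa.Column("tenant_id", sa.String(36), primary_key=True),
        sa.Column("security_group_id", sa.String(36), nullable=False),
    )

    def ensure_default(conn, tenant_id, sg_id):
        try:
            conn.execute(default_sg.insert().values(
                tenant_id=tenant_id, security_group_id=sg_id))
            return sg_id  # we won the race
        except IntegrityError:
            # Another request inserted first; the constraint, not a
            # lock, resolves the race. Read back the winner's row.
            row = conn.execute(sa.select([default_sg.c.security_group_id])
                               .where(default_sg.c.tenant_id == tenant_id)
                               ).fetchone()
            return row[0]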

  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-15 Thread Salvatore Orlando
I think the point made is that the behaviour is currently inconsistent and
not user friendly.
Regardless of that, I would like to point out that technically this kind of
change is backward incompatible, and so it should not be simply approved by
popular acclamation.

I will seek input from the API WG in the next meeting.

Salvatore

On 15 December 2014 at 20:39, Maru Newby ma...@redhat.com wrote:


 On Dec 12, 2014, at 1:40 PM, Sean Dague s...@dague.net wrote:

  On 12/12/2014 01:05 PM, Maru Newby wrote:
 
  On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net wrote:
 
  On 12/11/2014 04:16 PM, Jay Pipes wrote:
  On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
  On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com wrote:
  On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
  On Dec 11, 2014, at 8:00 AM, Henry Gessau ges...@cisco.com
 wrote:
 
  On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz wrote:
 
  On Dec 11, 2014, at 8:43 AM, Jay Pipes jaypi...@gmail.com
  mailto:jaypi...@gmail.com wrote:
 
  I'm generally in favor of making name attributes opaque, utf-8
  strings that
  are entirely user-defined and have no constraints on them. I
  consider the
  name to be just a tag that the user places on some resource. It
  is the
  resource's ID that is unique.
 
  I do realize that Nova takes a different approach to *some*
  resources,
  including the security group name.
 
  End of the day, it's probably just a personal preference whether
  names
  should be unique to a tenant/user or not.
 
  Maru had asked me my opinion on whether names should be unique
 and I
  answered my personal opinion that no, they should not be, and if
  Neutron
  needed to ensure that there was one and only one default
 security
  group for
  a tenant, that a way to accomplish such a thing in a race-free
  way, without
  use of SELECT FOR UPDATE, was to use the approach I put into the
  pastebin on
  the review above.
 
 
  I agree with Jay.  We should not care about how a user names the
  resource.
   There are other ways to prevent this race, and Jay’s suggestion is a
   good one.
 
  However we should open a bug against Horizon because the user
  experience there
  is terrible with duplicate security group names.
 
  The reason security group names are unique is that the ec2 api
  supports source
  rule specifications by tenant_id (user_id in amazon) and name, so
  not enforcing
  uniqueness means that invocation in the ec2 api will either fail
 or be
  non-deterministic in some way.
 
  So we should couple our API evolution to EC2 API then?
 
  -jay
 
  No I was just pointing out the historical reason for uniqueness, and
  hopefully
  encouraging someone to find the best behavior for the ec2 api if we
  are going
  to keep the incompatibility there. Also I personally feel the ux is
  better
  with unique names, but it is only a slight preference.
 
  Sorry for snapping, you made a fair point.
 
  Yeh, honestly, I agree with Vish. I do feel that the UX of that
  constraint is useful. Otherwise you get into having to show people
 UUIDs
  in a lot more places. While those are good for consistency, they are
  kind of terrible to show to people.
 
  While there is a good case for the UX of unique names - it also makes
 orchestration via tools like puppet a heck of a lot simpler - the fact is
 that most OpenStack resources do not require unique names.  That being the
 case, why would we want security groups to deviate from this convention?
 
  Maybe the other ones are the broken ones?
 
  Honestly, any sanely usable system makes names unique inside a
  container. Like files in a directory. In this case the tenant is the
  container, which makes sense.
 
  It is one of many places that OpenStack is not consistent. But I'd
  rather make things consistent and more usable than consistent and less.

 You might prefer less consistency for the sake of usability, but for me,
 consistency is a large enough factor in usability that allowing seemingly
 arbitrary deviation doesn’t seem like a huge win.  Regardless, I’d like to
  see us come to decisions on API usability on an OpenStack-wide basis, so
 the API working group is probably where this discussion should continue.


 Maru




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] UniqueConstraint for name and tenant_id in security group

2014-12-15 Thread Maru Newby

On Dec 15, 2014, at 9:13 AM, Assaf Muller amul...@redhat.com wrote:

 
 
 - Original Message -
 
 I was (rightfully) asked to share my comments on the matter that I
 left in gerrit here. See below.
 
 On 12/12/14 22:40, Sean Dague wrote:
 On 12/12/2014 01:05 PM, Maru Newby wrote:
 
 On Dec 11, 2014, at 2:27 PM, Sean Dague s...@dague.net wrote:
 
 On 12/11/2014 04:16 PM, Jay Pipes wrote:
 On 12/11/2014 04:07 PM, Vishvananda Ishaya wrote:
 On Dec 11, 2014, at 1:04 PM, Jay Pipes jaypi...@gmail.com
 wrote:
 On 12/11/2014 04:01 PM, Vishvananda Ishaya wrote:
 
 On Dec 11, 2014, at 8:00 AM, Henry Gessau
 ges...@cisco.com wrote:
 
 On Thu, Dec 11, 2014, Mark McClain m...@mcclain.xyz
 wrote:
 
 On Dec 11, 2014, at 8:43 AM, Jay Pipes
 jaypi...@gmail.com mailto:jaypi...@gmail.com
 wrote:
 
 I'm generally in favor of making name attributes
 opaque, utf-8 strings that are entirely
 user-defined and have no constraints on them. I
 consider the name to be just a tag that the user
 places on some resource. It is the resource's ID
 that is unique.
 
 I do realize that Nova takes a different approach
 to *some* resources, including the security group
 name.
 
 End of the day, it's probably just a personal
 preference whether names should be unique to a
 tenant/user or not.
 
 Maru had asked me my opinion on whether names
 should be unique and I answered my personal
 opinion that no, they should not be, and if
 Neutron needed to ensure that there was one and
 only one default security group for a tenant,
 that a way to accomplish such a thing in a
 race-free way, without use of SELECT FOR UPDATE,
 was to use the approach I put into the pastebin
 on the review above.
 
 
 I agree with Jay.  We should not care about how a
  user names the resource. There are other ways to
  prevent this race, and Jay’s suggestion is a good
 one.
 
 However we should open a bug against Horizon because
 the user experience there is terrible with duplicate
 security group names.
 
 The reason security group names are unique is that the
 ec2 api supports source rule specifications by
 tenant_id (user_id in amazon) and name, so not
 enforcing uniqueness means that invocation in the ec2
 api will either fail or be non-deterministic in some
 way.
 
 So we should couple our API evolution to EC2 API then?
 
 -jay
 
 No I was just pointing out the historical reason for
 uniqueness, and hopefully encouraging someone to find the
 best behavior for the ec2 api if we are going to keep the
 incompatibility there. Also I personally feel the ux is
 better with unique names, but it is only a slight
 preference.
 
 Sorry for snapping, you made a fair point.
 
 Yeh, honestly, I agree with Vish. I do feel that the UX of
 that constraint is useful. Otherwise you get into having to
 show people UUIDs in a lot more places. While those are good
 for consistency, they are kind of terrible to show to people.
 
 While there is a good case for the UX of unique names - it also
 makes orchestration via tools like puppet a heck of a lot simpler
 - the fact is that most OpenStack resources do not require unique
 names.  That being the case, why would we want security groups to
 deviate from this convention?
 
 Maybe the other ones are the broken ones?
 
 Honestly, any sanely usable system makes names unique inside a
 container. Like files in a directory.
 
 Correct. Or take git: it does not use hashes to identify objects, right?
 
 In this case the tenant is the container, which makes sense.
 
 It is one of many places that OpenStack is not consistent. But I'd
 rather make things consistent and more usable than consistent and
 less.
 
  Are we only proposing to make the security group name unique? I assume
  that, since that's what we currently have in review. The change would
  make the API *more* inconsistent, not less, since other objects still use
  uuid for identification.
 
 You may say that we should move *all* neutron objects to the new
 identification system by name. But what's the real benefit?
 
 If there are problems in UX (client, horizon, ...), we should fix the
 view and not data models used. If we decide we want users to avoid
 using objects with the same names, fine, let's add warnings in UI
 (probably with an option to disable it so that we don't push the
 validation into their throats).
 
  Finally, I have concerns about us changing user-visible object
  attributes like names during db migrations, as is proposed in the
  patch discussed here. I think such behaviour can be quite unexpected
  for some users, if not breaking their workflow and/or scripts.
 
 My belief is that responsible upstream does not apply ad-hoc changes
 to API to fix a race condition that is easily solvable in other ways
 (see Assaf's proposal to introduce a new DefaultSecurityGroups table
 in patchset 12 comments).
 
 
 As usual you explain yourself better than I can... I think my main
 original objection to the patch is that it feels like an 

Re: [openstack-dev] [Nova][Neutron] out-of-tree plugin for Mech driver/L2 and vif_driver

2014-12-15 Thread Maxime Leroy
On Fri, Dec 12, 2014 at 3:16 PM, Daniel P. Berrange berra...@redhat.com wrote:
 On Fri, Dec 12, 2014 at 03:05:28PM +0100, Maxime Leroy wrote:
 On Fri, Dec 12, 2014 at 10:46 AM, Daniel P. Berrange
 berra...@redhat.com wrote:
  On Fri, Dec 12, 2014 at 01:21:36PM +0900, Ryu Ishimoto wrote:
  On Thu, Dec 11, 2014 at 7:41 PM, Daniel P. Berrange berra...@redhat.com
  wrote:
 
 [..]
  Port binding mechanism could vary among different networking technologies,
  which is not nova's concern, so this proposal makes sense.  Note that some
  vendors already provide port binding scripts that are currently executed
  directly from nova's vif.py ('mm-ctl' of midonet and 'ifc_ctl' for iovisor
  are two such examples), and this proposal makes it unnecessary to have
  these hard-coded in nova.  The only question I have is, how would nova
  figure out the arguments for these scripts?  Should nova dictate what they
  are?
 
   We could define some standard set of arguments and environment variables
  to pass the information from the VIF to the script in a standard way.
 

 A lot of information is used by the plug/unplug methods: vif_id,
 vif_address, ovs_interfaceid, firewall, net_id, tenant_id, vnic_type,
 instance_uuid...

 Not sure we can define a set of standard arguments.

 That's really not a problem. There will be some set of common info
 needed for all. Then for any particular vif type we know what extra
 specific fields are defined in the port binding metadata. We'll just
 set env variables for each of those.

 Maybe instead to use a script we should load some plug/unplug
 functions from a python module with importlib. So a vif_plug_module
 option instead to have a vif_plug_script ?

 No, we explicitly do *not* want any usage of the Nova python modules.
 That is all private internal Nova implementation detail that nothing
 is permitted to rely on - this is why the VIF plugin feature was
 removed in the first place.

 There are several other problems to solve if we are going to use this
 vif_plug_script:

 - How to have the authorization to run this script (i.e. rootwrap)?

 Yes, rootwrap.

  - How to test the plug/unplug functions from these scripts?
    Now, we have unit tests in nova/tests/unit/virt/libvirt/test_vif.py
  for the plug/unplug methods.

 Integration and/or functional tests run for the VIF impl would
 exercise this code still.

  - How will this script be installed?
 - should it be included in the L2 agent package? Some L2 switches
  don't have an L2 agent.

 That's just a normal downstream packaging task which is easily
 handled by people doing that work. If there's no L2 agent package
 they can trivially just create a new package for the script(s)
 that need installing on the compute node. They would have to be
 doing exactly this anyway if you had the VIF plugin as a python
 module instead.


Ok, thank you for the details. I will look how to implement this feature.

Regards,

Maxime
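
As a rough illustration of the environment-variable convention discussed
above (the variable names and script path are assumptions, not a defined
interface):

    import os
    import subprocess

    def run_vif_script(script, action, vif):
        # Export the common VIF fields as environment variables; the
        # names here are placeholders, not an agreed-upon contract.
        env = dict(os.environ,
                   VIF_ID=vif["id"],
                   VIF_MAC=vif["address"],
                   VIF_VNIC_TYPE=vif.get("vnic_type", "normal"),
                   INSTANCE_UUID=vif["instance_uuid"])
        # In Nova this invocation would go through rootwrap rather
        # than being called directly.
        subprocess.check_call([script, action], env=env)

    # run_vif_script("/usr/local/bin/vif-plug", "plug", vif)  # hypothetical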

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][python-keystoneclient][pycadf] Abandoning of inactive reviews

2014-12-15 Thread Morgan Fainberg
The abandon sweep has been finished. Here is the list of the reviews that were 
abandoned.

Keystone:

https://review.openstack.org/#/c/73907/
https://review.openstack.org/#/c/111312/
https://review.openstack.org/#/c/93480/
https://review.openstack.org/#/c/123862/
https://review.openstack.org/#/c/75708/
https://review.openstack.org/#/c/92727/
https://review.openstack.org/#/c/95282/
https://review.openstack.org/#/c/108592/
https://review.openstack.org/#/c/117380/
https://review.openstack.org/#/c/103368/
https://review.openstack.org/#/c/113236/
https://review.openstack.org/#/c/116464/
https://review.openstack.org/#/c/65428/
https://review.openstack.org/#/c/98836/
https://review.openstack.org/#/c/120031/
https://review.openstack.org/#/c/126217/

python-keystoneclient:

https://review.openstack.org/#/c/114856/
https://review.openstack.org/#/c/107926/
https://review.openstack.org/#/c/107328/
https://review.openstack.org/#/c/122569/
https://review.openstack.org/#/c/122515/
https://review.openstack.org/#/c/95680/
https://review.openstack.org/#/c/118531/
https://review.openstack.org/#/c/120822/
https://review.openstack.org/#/c/112752/
https://review.openstack.org/#/c/111665/
https://review.openstack.org/#/c/113163/
https://review.openstack.org/#/c/112564/
https://review.openstack.org/#/c/66137/
https://review.openstack.org/#/c/92726/
https://review.openstack.org/#/c/93244/
https://review.openstack.org/#/c/91895/

Keystone middleware:

https://review.openstack.org/#/c/114261/

Cheers,
Morgan
-- 
Morgan Fainberg

On December 11, 2014 at 1:05:37 PM, Morgan Fainberg (morgan.fainb...@gmail.com) 
wrote:

This is a notification that at the start of next week, all projects under the 
Identity Program are going to see a cleanup of old/lingering open reviews. I 
will be reviewing all reviews. If there is a negative score (this would be -1 
or -2 from jenkins, -1 or -2 from a code reviewer, or -1 workflow) and the 
review has not seen an update in over 60 days (more than a “rebase”; 
commenting/responding to comments is an update), I will be administratively 
abandoning the change.

This will include reviews on:

Keystone
Keystone-specs
python-keystoneclient
keystonemiddleware
pycadf
python-keystoneclient-kerberos
python-keystoneclient-federation

Please take a look at your open reviews and get an update/response to negative 
scores to keep reviews active. You will always be able to un-abandon a review 
(as the author) or ask a Keystone-core member to unabandon a change. 

Cheers,
Morgan Fainberg

-- 
Morgan Fainberg
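
For anyone who wants to preview what such a sweep will catch, something
along these lines should work against Gerrit's REST API; the query
string is only my approximation of the criteria above, so treat it as a
sketch:

    import json
    import requests

    query = ("project:openstack/keystone status:open age:2mon "
             "(label:Code-Review<=-1 OR label:Verified<=-1 "
             "OR label:Workflow<=-1)")
    resp = requests.get("https://review.openstack.org/changes/",
                        params={"q": query, "n": 500})
    # Gerrit prefixes JSON responses with a )]}' anti-XSSI line:
    changes = json.loads(resp.text.split("\n", 1)[1])
    print(len(changes), "candidate reviews")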
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno

2014-12-15 Thread Doug Hellmann

On Dec 15, 2014, at 3:21 PM, Doug Hellmann d...@doughellmann.com wrote:

 The issue with stable/juno jobs failing because of the difference in the 
 SQLAlchemy requirements between the older applications and the newer oslo.db 
 is being addressed with a new release of the 1.2.x series. We will then cap 
 the requirements for stable/juno to 1.2.1. We decided we did not need to 
 raise the minimum version of oslo.db allowed in kilo, because the old 
 versions of the library do work, if they are installed from packages and not 
 through setuptools.
 
 Jeremy created a feature/1.2 branch for us, and I have 2 patches up [1][2] to 
 apply the requirements fix. The change to the oslo.db version in stable/juno 
 is [3].
 
 After the changes in oslo.db merge, I will tag 1.2.1.

After spending several hours exploring a bunch of options to make this actually 
work, some of which require making changes to test job definitions, grenade, or 
other long-term changes, I’m proposing a new approach:

1. Undo the change in master that broke the compatibility with versions of 
SQLAlchemy by making master match juno: https://review.openstack.org/141927
2. Update oslo.db after ^^ lands.
3. Tag oslo.db 1.4.0 with a set of requirements compatible with Juno.
4. Change the requirements in stable/juno to skip oslo.db 1.1, 1.2, and 1.3.

I’ll proceed with that plan tomorrow morning (~15 hours from now) unless 
someone points out why that won’t work in the mean time.

Doug
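
Step 4 boils down to a requirements specifier that skips the affected
releases. A quick pkg_resources sanity check of one possible form (the
exact specifier below is illustrative, not the final requirements line):

    import pkg_resources

    req = pkg_resources.Requirement.parse(
        "oslo.db>=1.0.0,!=1.1.0,!=1.2.0,!=1.3.0")
    for version in ("1.0.0", "1.2.0", "1.3.0", "1.4.0"):
        # Membership tests a version string against the specifier:
        print(version, version in req)  # 1.2.0 and 1.3.0 -> False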

 
 Doug
 
 [1] https://review.openstack.org/#/c/141893/
 [2] https://review.openstack.org/#/c/141894/
 [3] https://review.openstack.org/#/c/141896/
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Question about nova boot --min-count number

2014-12-15 Thread Vishvananda Ishaya
I suspect you are actually failing due to not having enough room in your cloud 
rather than not having enough quota.

You will need to use instance sizes with fewer CPUs and less RAM/disk, or 
change your allocation ratios in the scheduler.

Vish
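
A back-of-the-envelope way to see where 48 could come from; the totals
below are assumptions, so check the actual scheduler settings:

    def capacity(total, allocation_ratio, per_instance):
        # How many instances fit for a single resource dimension.
        return int(total * allocation_ratio) // per_instance

    # CPU looks ample with the default cpu_allocation_ratio of 16.0:
    print(capacity(64, 16.0, 1))       # 1024 one-VCPU instances

    # ...but RAM can be the binding limit. With a hypothetical 16384 MB
    # total and the default ram_allocation_ratio of 1.5, m1.tiny (512 MB)
    # instances cap out at exactly 48:
    print(capacity(16384, 1.5, 512))   # 48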

On Dec 13, 2014, at 8:43 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:

 Hi, 
 
 According to the help text, “--min-count number” boots at least number 
 servers (limited by quota):
 
 --min-count number  Boot at least number servers (limited by
 quota).
 
 I used devstack to deploy OpenStack (version Kilo) in a multi-node setup:
 1 Controller/Network + 2 Compute nodes
 
 I updated the tenant demo default quotas “instances” and “cores” from ’10’ 
 and ’20’ to ‘100’ and ‘200’:
 
 localadmin@qa4:~/devstack$ nova quota-show --tenant 
 62fe9a8a2d58407d8aee860095f11550 --user eacb7822ccf545eab9398b332829b476
 +-----------------------------+-------+
 | Quota                       | Limit |
 +-----------------------------+-------+
 | instances                   | 100   |
 | cores                       | 200   |
 | ram                         | 51200 |
 | floating_ips                | 10    |
 | fixed_ips                   | -1    |
 | metadata_items              | 128   |
 | injected_files              | 5     |
 | injected_file_content_bytes | 10240 |
 | injected_file_path_bytes    | 255   |
 | key_pairs                   | 100   |
 | security_groups             | 10    |
 | security_group_rules        | 20    |
 | server_groups               | 10    |
 | server_group_members        | 10    |
 +-----------------------------+-------+
 
 When I boot 50 VMs using “--min-count 50”, only 48 VMs come up.
 
 localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 
 1 --nic net-id=5b464333-bad0-4fc1-a2f0-310c47b77a17 --min-count 50 vm-
 
 There is no error in logs; and it happens consistently. 
 
 I also tried “--min-count 60” and only 48 VMs came up.
 
 In Horizon, left pane “Admin” - “System” - “Hypervisors”, it shows both 
 Compute hosts, each with 32 total VCPUs for a grand total of 64, but only 48 
 used.
 
 Is this normal behavior or is there any other setting to change in order to 
 use all 64 VCPUs?
 
 Thanks,
 Danny 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?

2014-12-15 Thread Vishvananda Ishaya
I have seen deadlocks in libvirt that could cause this. When you are in this 
state, check to see if you can do a virsh list on the node. If not, libvirt is 
deadlocked, and ubuntu may need to pull in a fix/newer version.
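
For example (illustrative; the service name matches the ubuntu 14.04 setup
mentioned later in this thread):

    # on the compute node; if this hangs, libvirtd is likely deadlocked
    timeout 10 virsh list --all
    # restarting clears the wedged state until a fixed libvirt lands
    sudo service libvirt-bin restart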

Vish

On Dec 12, 2014, at 2:12 PM, pcrews glee...@gmail.com wrote:

 On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote:
 Hi,
 
 This case is always tested by Tempest on the gate.
 
 https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152
 
 So I guess this problem wouldn't happen on the latest version at least.
 
 Thanks
 Ken'ichi Ohmichi
 
 ---
 
 2014-12-10 6:32 GMT+09:00 Joe Gordon joe.gord...@gmail.com:
 
 
 On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) dannc...@cisco.com
 wrote:
 
 Hi,
 
 I have a VM which is in ERROR state.
 
 
 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 | ID                                   | Name                                         | Status | Task State | Power State | Networks |
 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |          |
 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 
 
 I tried in both CLI “nova delete” and Horizon “terminate instance”.
 Both accepted the delete command without any error.
 However, the VM never got deleted.
 
 Is there a way to remove the VM?
 
 
 What version of nova are you using? This is definitely a serious bug; you
 should be able to delete an instance in ERROR state. Can you file a bug that
 includes steps to reproduce it along with all relevant logs?
 
 bugs.launchpad.net/nova
 
 
 
 Thanks,
 Danny
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 Hi,
 
 I've encountered this in my own testing and have found that it appears to be 
 tied to libvirt.
 
 When I hit this, reset-state as the admin user reports success (and state is 
 set), *but* things aren't really working as advertised and subsequent 
 attempts to do anything with the errant vm's will send them right back into 
 'FLAIL' / can't delete / endless DELETING mode.
 
 restarting libvirt-bin on my machine fixes this - after restart, the deleting 
 vm's are properly wiped without any further user input to nova/horizon and 
 all seems right in the world.
 
 using:
 devstack
 ubuntu 14.04
 libvirtd (libvirt) 1.2.2
 
 triggered via:
 lots of random create/reboot/resize/delete requests of varying validity and 
 sanity.
 
 Am in the process of cleaning up my test code so as not to hurt anyone's 
 brain with the ugliness, and will file a bug once done, but thought this worth 
 sharing.
 
 Thanks,
 Patrick
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Mid-Cycle Sprint

2014-12-15 Thread Douglas Mendizabal
Hi openstack-dev,

The Barbican team is planning to have a mid-cycle sprint in Austin, TX on 
February 16-18, 2015.  We’ll be meeting at Capital Factory, a co-working space 
in downtown Austin.

For more details and RSVP, please see:

https://wiki.openstack.org/wiki/Sprints/BarbicanKiloSprint

Thanks,
-Doug Mendizábal


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5 0CC9 AD14 1F30 2D58 923C
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Unsafe Abandon

2014-12-15 Thread Clint Byrum
Excerpts from Ari Rubenstein's message of 2014-12-15 12:32:08 -0800:
 Hi there,
 I'm new to the list, and trying to get more information about the following 
 issue:
 
 https://bugs.launchpad.net/heat/+bug/1353670
 Is there anyone on the list who can explain under what conditions a user 
 might hit this?  Workarounds?  ETA for a fix?

Hi Ari. Welcome, and thanks for your interest in OpenStack and Heat!

A bit of etiquette first: Please do not reply to existing threads to
start a new one. That is known as a hijack:

https://wiki.openstack.org/wiki/MailingListEtiquette#Changing_Subject

Also, for bugs, you'll find it's best to ask in the comments of the bug,
as those who are most interested and able to answer should already be
subscribed and can respond there.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Murano Agent

2014-12-15 Thread Stan Lagun
Murano agent is required as long as you deploy applications that use it.
You can write (or take) an application that uses Heat Software Configuration
instead of the Murano agent, and then use an image without the agent.
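
For example, a minimal HOT fragment along those lines (a sketch; note the
image then needs the standard heat agents, e.g. os-collect-config, instead of
murano-agent):

  resources:
    install_app:
      type: OS::Heat::SoftwareConfig
      properties:
        group: script
        config: |
          #!/bin/bash
          # whatever murano-agent would otherwise have done
          echo "installing the application..."
    deploy_app:
      type: OS::Heat::SoftwareDeployment
      properties:
        config: { get_resource: install_app }
        server: { get_resource: app_server }  # an OS::Nova::Server defined elsewhere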

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

sla...@mirantis.com

On Mon, Dec 15, 2014 at 7:54 AM, Ruslan Kamaldinov rkamaldi...@mirantis.com
 wrote:

 On Mon, Dec 15, 2014 at 7:10 AM,  raghavendra@accenture.com wrote:
  Please let me know why Murano-agent is required and which components need
  to be installed in it.

 You can find more details about murano agent at:
 https://wiki.openstack.org/wiki/Murano/UnifiedAgent

 It can be installed with diskimage-builder:
 http://git.openstack.org/cgit/stackforge/murano-agent/tree/README.rst#n34

 - Ruslan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party

2014-12-15 Thread Stefano Maffulli
On 12/05/2014 07:08 AM, Kurt Taylor wrote:
 1. Meeting content: Having 2 meetings per week is more than is needed at
 this stage of the working group. There just isn't enough meeting content
 to justify having two meetings every week.

I'd like to discuss this further: the stated objectives of the meetings
are very wide and may allow for more than one slot per week. In
particular I'm seeing the two below as good candidates for 'meet as many
times as possible':

   * to provide a forum for the curious and for OpenStack programs who
are not yet in this space but may be in the future
   * to encourage questions from third party folks and support the
sourcing of answers

https://wiki.openstack.org/wiki/Meetings/ThirdParty#Goals_for_Third_Party_meetings

 2. Decisions: Any decision made at one meeting will potentially be
 undone at the next, or at least not fully explained. It will be
 difficult to keep consistent direction with the overall work group.

I think this needs to be clarified for all teams, not just third-party:
I disagree that important decisions should be taken on IRC. IRC is the
place where discussions happen and agreements may form among a group of
people, but ultimately the communication and the actual *decision* need
to happen on the wider mailing list.

 My proposal was to have only 1 meeting per week at alternating times,
[...]

I'm not going to go into the specifics, as I'm sure you're already aware
of, and have weighed, the disadvantages of alternating times and multiple
dates. I'd only ask you to communicate the times and objectives clearly on
the team's and meetings' wiki pages.

There are now three slots listed:

Mondays at 1500 UTC, chair Anita
Mondays at 1800 UTC, chair (I'm assuming it's Kurt)
Tuesdays at 0800 UTC, chair Anita

I would suggest you to make an effort to split the agenda across the
three slots, if you think it's possible, or assign priority topics from
the stated goals of the meetings so that people will know what to expect
each time.

The objective of the meetings should be to engage more people in
different timezones, while avoiding confusion. Let me know if you need
help editing the pages.

As I mentioned above, probably one way to do this is to make some slots
more focused on engaging newcomers and answering questions, more like
serendipitous mentoring sessions with the less involved, while another
slot could be dedicated to more focused and long term efforts, with more
committed people?

Cheers,
stef

PS let's continue the conversation only on -infra. I'm replying with -dev
included only because I think point 2 (Decisions) needed to be clarified
for all.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Minimal ML2 mechanism driver after Neutron decomposition change

2014-12-15 Thread henry hly
On Tue, Dec 16, 2014 at 1:53 AM, Neil Jerram neil.jer...@metaswitch.com wrote:
 Hi all,

 Following the approval for Neutron vendor code decomposition
 (https://review.openstack.org/#/c/134680/), I just wanted to comment
 that it appears to work fine to have an ML2 mechanism driver _entirely_
 out of tree, so long as the vendor repository that provides the ML2
 mechanism driver does something like this to register their driver as a
 neutron.ml2.mechanism_drivers entry point:

   setuptools.setup(
       ...,
       entry_points = {
           ...,
           'neutron.ml2.mechanism_drivers': [
               'calico = xyz.openstack.mech_xyz:XyzMechanismDriver',
           ],
       },
   )

 (Please see
 https://github.com/Metaswitch/calico/commit/488dcd8a51d7c6a1a2f03789001c2139b16de85c
 for the complete change and the details of the example that works for me.)

 Then Neutron and the vendor package can be separately installed, and the
 vendor's driver name configured in ml2_conf.ini, and everything works.
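
 For illustration, once the vendor package is installed you can sanity-check
 that the entry point resolves, e.g. with stevedore (which ML2's driver
 manager is built on). A sketch, not part of either package:

   from stevedore import driver

   # 'calico' is the name set in ml2_conf.ini's mechanism_drivers option;
   # stevedore resolves it via the neutron.ml2.mechanism_drivers namespace
   mgr = driver.DriverManager(
       namespace='neutron.ml2.mechanism_drivers',
       name='calico',
       invoke_on_load=False)
   print(mgr.driver)   # should print the XyzMechanismDriver class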

 Given that, I wonder:

 - is that what the architects of the decomposition are expecting?

 - other than for the reference OVS driver, are there any reasons in
   principle for keeping _any_ ML2 mechanism driver code in tree?


Good questions. I'm also wondering about the linux bridge MD, the SRIOV MD...
Who will be responsible for those drivers?

The OVS driver is maintained by the Neutron community, vendor-specific
hardware drivers by their vendors, and SDN controller drivers by their own
communities or vendors. But there are also drivers like SRIOV, which are
generic across many vendor backends and can't be maintained by any single
vendor or community.

So, it would be better to keep some generic backend MDs such as SRIOV in
tree. There are also vif-type-tap, vif-type-vhostuser,
hierarchy-binding-external-VTEP ... We could implement a very thin in-tree
base MD that only handles vif binding, which is backend agnostic; each
backend provider is then free to implement its own service logic, either
with a backend agent or, for agentless scenarios, with a driver derived
from the base MD.

Regards

 Many thanks,
  Neil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] For-each

2014-12-15 Thread Dmitri Zimine
I had short user feedback sessions with Patrick and James; the short summary 
is:

1) simplify the syntax to optimize for the most common use case
2) 'concurrency' is the best word - but move it out of for-each to the task 
level, or task/policies
3) all-permutations is a relatively rare case - either implement it as a 
different construct, like ‘for-every’, or use a workaround

Another piece of feedback is that for-each as a term is confusing: people 
expect different behavior (e.g., running sequentially, modifying individual 
elements), while it is effectively a ‘map’ function. No good suggestion for a 
better name yet; we'll keep looking.

The details are added into the document.

DZ. 

On Dec 15, 2014, at 2:00 AM, Nikolay Makhotkin nmakhot...@mirantis.com wrote:

 Hi,
 
 Here is the doc with suggestions on specification for for-each feature.
 
 You are free to comment and ask questions.
 
 https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing
 
 
 
 -- 
 Best Regards,
 Nikolay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation

2014-12-15 Thread Rao Dingyuan
Yes Ryan, that's exactly what I'm thinking. Glad to know that we have the same 
opinion :)


BR
Kurt


-----Original Message-----
From: Ryan Brown [mailto:rybr...@redhat.com] 
Sent: December 12, 2014 23:30
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack] [Ceilometer] [API] Batch alarm creation

On 12/12/2014 03:37 AM, Rao Dingyuan wrote:
 Hi Eoghan and folks,
 
 I'm thinking of adding an API to create multiple alarms in a batch.
 
 I think adding an API to create multiple alarms is a good option to solve the 
 problem that once an *alarm target* (a vm or a new group of vms) is created, 
 multiple requests will be fired because multiple alarms are to be created.
 
 In our current project, this requirement is especially urgent since our 
 alarm target is a single VM, and 6 alarms are to be created when one VM is created.
 
 What do you guys think?
 
 
 Best Regards,
 Kurt Rao

Allowing batch operations is definitely a good idea, though it may not be a 
solution to all of the problems you outlined.

One way to batch object creations would be to give clients the option to POST a 
collection of alarms instead of a single alarm. Currently your API looks 
like[1]:

POST /v2/alarms

BODY:
{
  alarm_actions: ...
  ...
}

For batches you could modify your API to accept a body like:

{
  alarms: [
{alarm_actions: ...},
{alarm_actions: ...},
{alarm_actions: ...},
{alarm_actions: ...}
  ]
}

It could (pretty easily) be a backwards-compatible change since the schemata 
don't conflict, and you can even add some kind of a batch:true flag to make 
it explicit that the user wants to create a collection. The API-WG has a 
spec[2] out right now explaining the rationale behind collection 
representations.


[1]:
http://docs.openstack.org/developer/ceilometer/webapi/v2.html#post--v2-alarms
[2]:
https://review.openstack.org/#/c/133660/11/guidelines/representation_structure.rst,unified
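
For example, a client-side sketch of what a batch create could look like
against such an endpoint (the alarms/batch body is the proposed schema
above, not the current API; the endpoint URL and token are placeholders):

  import json
  import requests

  token = '<keystone token>'  # placeholder
  body = {
      'batch': True,  # proposed flag, not in the current API
      'alarms': [
          {'name': 'vm1-cpu-warning',
           'alarm_actions': ['http://handler.example.com/notify']},
          {'name': 'vm1-mem-warning',
           'alarm_actions': ['http://handler.example.com/notify']},
      ],
  }
  resp = requests.post('http://ceilometer-api:8777/v2/alarms',
                       headers={'X-Auth-Token': token,
                                'Content-Type': 'application/json'},
                       data=json.dumps(body))
  resp.raise_for_status()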
 
 
 
 - Original Message -
 From: Eoghan Glynn [mailto:egl...@redhat.com]
 Sent: December 3, 2014 20:34
 To: Rao Dingyuan
 Cc: openst...@lists.openstack.org
 Subject: Re: [Openstack] [Ceilometer] looking for alarm best practice - 
 please help
 
 
 
 Hi folks,



 I wonder if anyone could share some best practice regarding to the 
 usage of ceilometer alarm. We are using the alarm 
 evaluation/notification of ceilometer and we don’t feel very well of 
 the way we use it. Below is our
 problem:



 

 Scenario:

 When cpu usage or memory usage is above a certain threshold, alerts 
 should be displayed on the admin’s web page. There should be 3 alert 
 levels according to the meter value, namely notice, warning, and fatal. 
 Notice means the meter value is between 50% and 70%, warning means 
 between 70% and 85%, and fatal means above 85%.

 For example:

 * when one vm’s cpu usage is 72%, an alert message should be 
 displayed saying
 “Warning: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] cpu usage is above 70%”.

 * when one vm’s memory usage is 90%, another alert message should be 
 created saying “Fatal: vm[d9b7018b-06c4-4fba-8221-37f67f6c6b8c] 
 memory usage is above 85%”



 Our current Solution:

 We used ceilometer alarm evaluation/notification to implement this. 
 To distinguish which VM and which meter is above what value, we’ve 
 created one alarm per VM per condition. So, to monitor 1 VM,
 6 alarms will be created because there are 2 meters and for each meter there 
 are 3 levels.
 That means, if there are 100 VMs to be monitored, 600 alarms will be 
 created.



 Problems:

 * The first problem is, when the number of meters increases, the 
 number of alarms will be multiplied. For example, customer may want 
 alerts on disk and network IO rates, and if we do that, there will be
 4*3=12 alarms for each VM.

 * The second problem is, when one VM is created, multiple alarms will 
 be created, meaning multiple http requests will be fired. In the case 
 above, 6 HTTP requests will be needed once a VM is created. And this 
 number also increases as the number of meters goes up.
 
 One way of reducing both the number of alarms and the volume of notifications 
 would be to group related VMs, if such a concept exists in your use-case.
 
 This is effectively how Heat autoscaling uses ceilometer, alarming on the 
 average of some statistic over a set of instances (as opposed to triggering 
 on individual instances).
 
 The VMs could be grouped by setting user-metadata of form:
 
   nova boot ... --meta metering.my_server_group=foobar
 
 Any user-metadata prefixed with 'metering.' will be preserved by ceilometer 
 in the resource_metadata.user_metadata stored for each sample, so that it can 
 be used to select the statistics on which the alarm is based, e.g.
 
   ceilometer alarm-threshold-create --name cpu_high_foobar \
 --description 'warning: foobar instance group running hot' \
 --meter-name cpu_util --threshold 70.0 \
 --comparison-operator gt --statistic avg \
 ...
 --query 
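
 For illustration (with the field name assumed from the v2 alarm API), the
 query above would match on the preserved user metadata; in REST form it
 would be something like:

   query = [{'field': 'metadata.user_metadata.my_server_group',
             'op': 'eq',
             'value': 'foobar'}]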

Re: [openstack-dev] [DevStack] A grenade for DevStack?

2014-12-15 Thread Dean Troyer
On Mon, Dec 15, 2014 at 2:03 PM, Sean Dague s...@dague.net wrote:

 On 12/15/2014 02:33 PM, Collins, Sean wrote:
  It'd be great to somehow make a long lived dsvm node and job where
  DevStack is continually deployed to it and restacked, to check for these
  kinds of errors?


I want to be careful here not to let an expectation develop that DevStack
should be used in any long-running deployment. Expect to refresh the VM, or
reinstall the OS on bare metal, often. IMO the only bits you should expect
to refresh quickly are the git repos.


 One of the things we don't test on the devstack side at all is that
 clean.sh takes us back down to baseline, which I think was the real
 issue here - https://review.openstack.org/#/c/141891/

 I would not be opposed to adding cleanup testing at the end of any
 devstack run that ensures everything is shut down correctly and cleaned
 up to a base level.


clean.sh makes an attempt to remove enough of OpenStack and its
dependencies to be able to change the configuration, such as switching
databases or adding/removing services, and re-run stack.sh.  I'm not a fan
of tracking installed OS package dependencies and Python packages so it can
all be undone. But the above is a bug...
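
Something along the lines of Sean's cleanup testing could be as simple as
this sketch (not an existing job; the process patterns are illustrative):

    ./stack.sh
    ./unstack.sh
    ./clean.sh
    # baseline check: no OpenStack services should still be running
    if pgrep -f 'nova-|neutron-|glance-|cinder-' >/dev/null; then
        echo 'clean.sh left services running' >&2
        exit 1
    fi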

dt


-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Anyone knows whether there is freezing day of spec approval?

2014-12-15 Thread Chen, Wei D
Hi,

I know Nova has such a day around Dec. 18; is there a similar day for the 
Cinder project? Thanks!

Best Regards,
Dave Chen




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Anyone knows whether there is freezing day of spec approval?

2014-12-15 Thread Jay Bryant
Dave,

Yes,  we are not taking new specs/blueprints after 12/18.

Jay
On Dec 15, 2014 9:18 PM, Chen, Wei D wei.d.c...@intel.com wrote:

 Hi,

 I know nova has such day around Dec. 18, is there a similar day in Cinder
 project? thanks!

 Best Regards,
 Dave Chen



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

