Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread mar...@redhat.com
On 10/04/14 20:55, Jay Dobies wrote:
 On 04/10/2014 01:40 PM, Nachi Ueno wrote:
 Hi Jarda

 Congratulations!
 This release and the demo are super awesome!!
 Do you have any instructions to install this?
 
 I'd like to see this too. I asked a few times and never got an answer on
 whether or not there was a documented way of demoing this without a ton
 of baremetal lying around.

From what Jarda said in his other email (the un-narrated one), this is all
running from master, in which case all you need is an undercloud setup;
you can then install tuskar / tuskar-ui on that host. WRT the baremetal:
for dev I've only ever done this with poseur (fake) nodes - or I've
misunderstood the question.

marios

 



 2014-04-10 1:32 GMT-07:00 Jaromir Coufal jcou...@redhat.com:
 Dear Stackers,

 I am happy to announce that yesterday Tuskar UI (TripleO UI) has tagged
 branch 0.1.0 for Icehouse release [0].

 I put together a narrated demo of all included features [1].

 You can find one manual part in the whole workflow - cloud
 initialization. There is ongoing work on automatic os-cloud-config, but
 for the release we had to include a manual way. Automation should be
 added soon, though.

 I want to thank all contributors for their hard work to make this happen.
 It has been a pleasure to cooperate with all of you guys and I am looking
 forward to bringing new features [2] in.


 -- Jarda


 [0] 0.1.0 Icehouse Release of the UI:
 https://github.com/openstack/tuskar-ui/releases/tag/0.1.0

 [1] Narrated demo of TripleO UI 0.1.0:
 https://www.youtube.com/watch?v=-6whFIqCqLU

 [2] Juno Planning for Tuskar:
 https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-11 Thread Thomas Goirand
On 04/08/2014 03:10 AM, Martinx - ジェームズ wrote:
 Hi Thomas!
 
 It will be an honor for me to join the Debian OpenStack packaging team!
 I'm in!! :-D
 
 Listen, the neutron-ipv6.patch I have doesn't apply against
 neutron-2014.1.rc1; here it is:
 
 neutron-ipv6.patch: http://paste.openstack.org/show/74857/
 
 I generated it from the commands that Xuhan Peng told me to run, a few
 posts back, which are:
 
 --
 git fetch https://review.openstack.org/openstack/neutron
 refs/changes/49/70649/15
 git format-patch -1 --stdout FETCH_HEAD > neutron-ipv6.patch
 --
 
 But, as Collins said, even if the patch applies successfully against
 neutron-2014.1.rc1 (or newer), it will not pass the tests, so there is
 still a lot of work to do to enable Neutron with IPv6. But I think we
 can start working on these patches and start testing whatever is already
 there (related to IPv6).
 
 Best!
 Thiago

Hi Thiago,

It's my view that we'd better keep each patch separate, so that they
can evolve over time as they are accepted or fixed on
review.openstack.org. In the Debian packaging I do, each and every patch
has to comply with the DEP3 patch header specification [1].
Specifically, I insist that the Origin: field is set to the
correct gerrit review URL, so that we can easily find out which patch
comes from where. The Last-Update: field is also important, so we know
which version of the patch is included.
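
As a concrete illustration, the header for one of the patches tracked
below might look like this (a sketch only; the Last-Update date here is
illustrative):

  Description: Add support to DHCP agent for BP ipv6-two-attributes
  Origin: upstream, https://review.openstack.org/#/c/70649/
  Last-Update: 2014-04-11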

Also, at eNovance, we are in the process of selecting which patches
should get in and which shouldn't. Currently, we are tracking the
following patches:

1. Support IPv6 SLAAC mode in dnsmasq
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-slaac
   Patchset: Add support to DHCP agent for BP ipv6-two-attributes:
https://review.openstack.org/#/c/70649/

2. Bind dnsmasq in qrouter- namespace.
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
   Patchset: Add support to DHCP agent for BP ipv6-two-attributes:
https://review.openstack.org/#/c/70649/

3. IPv6 Feature Parity
https://blueprints.launchpad.net/neutron/+spec/ipv6-feature-parity
   Definition: Superseded.

4. Two Attributes Proposal to Control IPv6 RA Announcement and Address
Assignment
   https://blueprints.launchpad.net/neutron/+spec/ipv6-two-attributes
   Patchset: Create new IPv6 attributes for Subnets.
https://review.openstack.org/#/c/52983/
   Patchset: Add support to DHCP agent for BP ipv6-two-attributes.
https://review.openstack.org/70649
   Patchset: Calculate stateless IPv6 address.
https://review.openstack.org/56184
   Patchset: Permit ICMPv6 RAs only from known routers.
https://review.openstack.org/#/c/72252/

5. Support IPv6 DHCPv6 Stateless mode in dnsmasq
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-stateless
   Patchset: Add support to DHCP agent for BP ipv6-two-attributes:
https://review.openstack.org/#/c/70649/

6. Support IPv6 DHCPv6 Stateful mode in dnsmasq
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-stateful
   Patchset: Add support to DHCP agent for BP ipv6-two-attributes:
https://review.openstack.org/#/c/70649/

7. Support IPv6 DHCPv6 Relay Agent
https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-relay-agent
   Definition: Drafting.

8. Provider Networking - upstream SLAAC support
https://blueprints.launchpad.net/neutron/+spec/ipv6-provider-nets-slaac
   Patchset: Ensure that that all fixed ips for a port belong to a
subnet using DHCP. https://review.openstack.org/#/c/64578/

9. Store both LLA and GUA on router interface port

https://blueprints.launchpad.net/neutron/+spec/ipv6-lla-gua-router-interface
   Definition: New

10. Add RADVD to router namespaces to serve ipv6 RAs when required
https://blueprints.launchpad.net/neutron/+spec/neutron-ipv6-radvd-ra
Definition: New

11. Allow multiple subnets on gateway port for router

https://blueprints.launchpad.net/neutron/+spec/allow-multiple-subnets-on-gateway-port
Patchset: Add support for dual-stack (IPv4 and IPv6) on external
gateway. https://review.openstack.org/#/c/77471/
Definition: Review

It is my own personal view that we shouldn't care about DHCPv6, and that
only RADVD is important, though maybe my colleagues will have a
different view. Also, it is very important that we only use patches that
have a reasonable chance of being merged, or at least whose
functionality is sure to get into Juno.

The current plan is that my colleagues (Sridhar and Sylvain Afchain,
who are both reachable on IRC as SridharG and safchain in
#openstack-neutron) will work on grabbing each patch, reviewing and
evaluating it, then rewriting the patch header, and finally sending me a
patchset to apply on top of Icehouse. It's much better that they, rather
than I, work on that, as they are specialists in Neutron, while I'm a
specialist in Debian packaging.

Your input on this process is very much welcome here. Your thoughts?

Cheers,

Thomas Goirand (zigo)

[1] http://dep.debian.net/deps/dep3/

[openstack-dev] TripleO fully uploaded to Debian Experimental

2014-04-11 Thread Thomas Goirand
Hi,

It's with great joy that I can announce today that TripleO is now
fully in Debian [1]. It is currently only uploaded to Debian
experimental, as with all of Icehouse (I don't think I can upload to
plain Sid until Icehouse is released).

Feedback (and bug reports to the Debian BTS) would be more than welcome,
as I had no time to test it myself! Of course, the plan is to have it
updated again soon.
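
For those who want to try it: with Debian experimental enabled in
sources.list, the packages install in the usual way, e.g. (the package
name here is just one example from the set):

apt-get -t experimental install diskimage-builder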

Cheers,

Thomas Goirand (zigo)

[1]
http://qa.debian.org/developer.php?login=openstack-de...@lists.alioth.debian.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread Ladislav Smola

Hello,

we have used this list of steps for the demo on Fedora 20:
https://wiki.openstack.org/wiki/Tuskar/Devtest

The demo is running on one machine with 24GB RAM and a 120GB disk. We are
using virtualized baremetal nodes (bm_poseur) for development.

Kind Regards,
Ladislav


On 04/10/2014 07:40 PM, Nachi Ueno wrote:

Hi Jarda

Congratulations!
This release and the demo are super awesome!!
Do you have any instructions to install this?




2014-04-10 1:32 GMT-07:00 Jaromir Coufal jcou...@redhat.com:

Dear Stackers,

I am happy to announce that yesterday Tuskar UI (TripleO UI) has tagged
branch 0.1.0 for Icehouse release [0].

I put together a narrated demo of all included features [1].

You can find one manual part in the whole workflow - cloud initialization.
There is ongoing work on automatic os-cloud-config, but for the release we
had to include a manual way. Automation should be added soon, though.

I want to thank all contributors for their hard work to make this happen. It has
been a pleasure to cooperate with all of you guys and I am looking forward to
bringing new features [2] in.


-- Jarda


[0] 0.1.0 Icehouse Release of the UI:
https://github.com/openstack/tuskar-ui/releases/tag/0.1.0

[1] Narrated demo of TripleO UI 0.1.0:
https://www.youtube.com/watch?v=-6whFIqCqLU

[2] Juno Planning for Tuskar:
https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] using transport_url disables connection pooling?

2014-04-11 Thread Mehdi Abaakouk


Hi,

On 2014-04-10 22:02, Gordon Sim wrote:

If you use the transport_url config option (or pass a url directly to
get_transport()), then the amqp connection pooling appears to be
disabled[1]. That means a new connection is created for every request
send as well as for every response.

So I guess rpc_backend is the recommended approach to selecting the
transport at present(?).


To keep the same behavior as before, yes.

Also, there is a pending review to fix that (multiple amqp hosts and
connection pooling when using a transport url):


https://review.openstack.org/#/c/78948/
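
For reference, the two ways of selecting the transport look roughly like
this in a service's configuration (illustrative host name and
credentials):

[DEFAULT]
# pooled path, as recommended above
rpc_backend = rabbit
rabbit_host = broker.example.com

# url path, which currently bypasses the connection pool
transport_url = rabbit://guest:guest@broker.example.com:5672/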


Cheers,
---
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread mar...@redhat.com
On 11/04/14 10:35, Ladislav Smola wrote:
 Hello,
 
 we have used this list of steps for the demo on Fedora 20:
 https://wiki.openstack.org/wiki/Tuskar/Devtest

nice!

 
 The demo is running on one machine with 24GB RAM and a 120GB disk. We are
 using virtualized baremetal nodes (bm_poseur) for development.
 
 Kind Regards,
 Ladislav
 
 
 On 04/10/2014 07:40 PM, Nachi Ueno wrote:
 Hi Jarda

 Congratulations!
 This release and the demo are super awesome!!
 Do you have any instructions to install this?




 2014-04-10 1:32 GMT-07:00 Jaromir Coufal jcou...@redhat.com:
 Dear Stackers,

 I am happy to announce that yesterday Tuskar UI (TripleO UI) has tagged
 branch 0.1.0 for Icehouse release [0].

 I put together a narrated demo of all included features [1].

 You can find one manual part in the whole workflow - cloud
 initialization. There is ongoing work on automatic os-cloud-config, but
 for the release we had to include a manual way. Automation should be
 added soon, though.

 I want to thank all contributors for their hard work to make this happen.
 It has been a pleasure to cooperate with all of you guys and I am looking
 forward to bringing new features [2] in.


 -- Jarda


 [0] 0.1.0 Icehouse Release of the UI:
 https://github.com/openstack/tuskar-ui/releases/tag/0.1.0

 [1] Narrated demo of TripleO UI 0.1.0:
 https://www.youtube.com/watch?v=-6whFIqCqLU

 [2] Juno Planning for Tuskar:
 https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread Jaromir Coufal

On 2014/10/04 19:40, Nachi Ueno wrote:

Hi Jarda

Congratulations!
This release and the demo are super awesome!!
Do you have any instructions to install this?


Thank you, Nachi!

Look at Ladislav's response; he posted our installation guide. If
you have any problems, let us know on the #tuskar or #tripleo channel and
we will help you get through it.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Demo of current state of Tuskar-UI

2014-04-11 Thread Jaromir Coufal

On 2014/10/04 22:55, Robert Collins wrote:

On 10 April 2014 01:54, Jaromir Coufal jcou...@redhat.com wrote:

Hello OpenStackers,

I would like to share with you a non-narrated demo of the current version of
the 'Tuskar-UI' project, which is very close to the Icehouse release (one or
two more patches to come in).


Very very cool! It's thrilling to see all the effort folk have been
putting into this - I just showed it to one of our product folk here
and they really love it ;)

-Rob


Thank you, Rob!

I think this was a very important step, and from now on everything will go
smoothly :) I also want to thank you for your valuable feedback and
contributions. Getting through this milestone brings us a lot of work
for Juno, which is good news.


Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel-dev][Fuel] VXLAN tunnels support

2014-04-11 Thread Oleg Balakirev
Hello,

We have implemented support for VXLAN tunnels. Please review.

Blueprint: https://blueprints.launchpad.net/fuel/+spec/neutron-vxlan-support
Review requests:
https://review.openstack.org/#/c/86611/
https://review.openstack.org/#/c/83767/

-- 
___
Best Regards
Oleg Balakirev
Deployment Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] config options, defaults, oh my!

2014-04-11 Thread Alexis Lee
Clint Byrum said on Thu, Apr 10, 2014 at 09:45:17AM -0700:
  Now you've described it, you're right, I'm not interested in TripleO or
  TripleO milestones. I am interested in using os-*-config, Heat and
  tripleo-image-elements to produce pure OpenStack deployments from 3 to
  3000 nodes, for varying workloads.

 That is precisely what TripleO wants as that first milestone too. What
 is the difference between what I said we want (a default OpenStack
 deployment) and what you just said (a pure OpenStack deployment)? At
 what point did anybody suggest to you that TripleO doesn't want a highly
 scalable deployment of OpenStack?

 I'm not sure why you wouldn't just put the configuration you need
 in the upstream TripleO as the default configuration. Even more, why
 wouldn't you put these configurations in as the defaults for upstream
 Nova/Glance/Cinder/Neutron/etc?

The trouble is there can only be one default configuration, whereas
there are many potential kinds of deployment. If the default deployment
is highly scalable, it will need a minimum number of nodes to start up.
E.g. a highly scalable Logstash infrastructure requires at least 8 nodes
to itself before you add any compute nodes. This won't be useful to
those who want to start with a trial deployment and scale up from there.

I like your Heat idea very much, where the user can easily supply whole
files or filesets (see the sketch below). This allows the user to choose:
 1) What software is installed on which nodes, by element composition
 2) The contents of configuration files, by Heat passthrough of a fileset
 3) Per-instance variables plus a few tweak-prone knobs, via Heat
parameters

Thus we don't have to enshrine a single configuration or add a Heat
parameter for every single config option. And TripleO can publish at
first one, but later several, default OpenStack deployments, each
suitable for a different scale.
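
A rough sketch of what (2) could look like at the template level
(hypothetical parameter and resource names; this illustrates the idea,
not the actual TripleO templates):

heat_template_version: 2013-05-23

parameters:
  logstash_conf:
    type: string
    description: Verbatim contents for /etc/logstash/logstash.conf

resources:
  logstash_server:
    type: OS::Nova::Server
    properties:
      image: a-logstash-image
      flavor: m1.large
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            cat > /etc/logstash/logstash.conf <<'EOF'
            $CONTENTS
            EOF
          params:
            $CONTENTS: { get_param: logstash_conf }

The operator then passes the whole file in with -P or an environment
file, without TripleO having to model every option individually.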


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Neutron] Networking Discussions last week

2014-04-11 Thread Przemyslaw Grygiel
On Apr 11, 2014, at 12:35 AM, Andrey Danin ada...@mirantis.com wrote:

 
 
 
 On Wed, Apr 9, 2014 at 4:22 AM, Mike Scherbakov mscherba...@mirantis.com 
 wrote:
 Looks like it falls into two parts: Fuel & Neutron requirements.
 
 The use case, as far as I understand, is the following: the user doesn't have
 one large range of publicly routable IP addresses for the environment, and has
 multiple L3 ranges instead.
 
 So for Fuel it means:
 We should not waste public IPs on compute nodes if we don't need them
 there (Neutron doesn't need it; it is only required for nova-network in
 multi-host mode). I think it should be covered by
 https://blueprints.launchpad.net/fuel/+spec/advanced-networking
  We can do that when we introduce a role-based network assignment.
 If we then use the public network only for OpenStack REST API services, we
 should be fine with one single IP range, shouldn't we?
 Yes. 
 The floating network, which is external in Neutron terms, can be a large
 waste of IPs for VMs, so it's impossible that in large clusters a single L3
 is going to cover it. That means Fuel should allow having multiple L3
 external networks per OpenStack environment; in theory they can even be in
 different L2s.
 Our complex network setup (many bridges and many patches) allows us to
 concatenate multiple L3-networks into one L2-segment. And theoretically
 Neutron can manage multiple external networks in this setup. But we need to
 test it.

Also, we should be able to provide multiple floating networks (each in a
different L2 network), like in this diagram:
https://drive.google.com/file/d/0B_Tv5g8RQZt1R3NIUEZtYkhEVVk/edit?usp=sharing
It will be useful for hybrid (private/public) clouds.
 I had a short discussion with Maru & Mark on IRC; it looks like we need in
 Neutron:
 It should be possible to have multiple L3 subnets for an external network.
 It is unlikely that we will need more than one subnet served by a
 single Neutron server, but we might in theory...
 Alexander, please take a look at whether I treated your initial blueprint in
 the right way.
 
 Thanks,
 
 
 On Tue, Apr 8, 2014 at 7:53 PM, Salvatore Orlando sorla...@nicira.com wrote:
 Hi Mike,
 
 For all neutron-related fuel developments please feel free to reach out to
 the neutron team for any help you might need, either by using the ML or
 pinging people in #openstack-neutron.
 Regarding the fuel blueprints you linked in your first post, I am looking in 
 particular at 
 https://blueprints.launchpad.net/fuel/+spec/separate-public-floating
 
 I am not entirely sure what the semantics of 'public' and 'floating' are
 here, but I was wondering if this would be achievable at all with the current
 neutron API, since within a subnet CIDR there's no 'qualitative' distinction
 between allocation pools; so it would not be possible to have a 'public' IP
 pool and a 'floating' IP pool in the same L3 segment.
 
 Also, regarding nova gaps, it might be worth noting that Mark McClain 
 (markmcclain) and Brent Eagles (beagles) are keeping track of current 
 feature/testing/quality gaps and also covering progress for the relevant work 
 items.
 
 Regards,
 Salvatore
 
 
 On 8 April 2014 14:46, Mike Scherbakov mscherba...@mirantis.com wrote:
 Great, thanks Assaf.
 
 I will keep following it. I've added a link to this bp on this page: 
 https://wiki.openstack.org/wiki/NovaNeutronGapHighlights#Multi-Host, might 
 help people to get the status.
 
 
 On Mon, Apr 7, 2014 at 11:37 AM, Assaf Muller amul...@redhat.com wrote:
 
 
 - Original Message -
  Hi all,
  we had a number of discussions last week in Moscow, with participation of
  guys from Russia, Ukraine and Poland.
  That was a great time!! Thanks everyone who participated.
 
  Special thanks to Przemek for great preparations, including the following:
  https://docs.google.com/a/mirantis.com/presentation/d/115vCujjWoQ0cLKgVclV59_y1sLDhn2zwjxEDmLYsTzI/edit#slide=id.p
 
  I've searched over blueprints which require update after meetings:
  https://blueprints.launchpad.net/fuel/+spec/multiple-cluster-networks
  https://blueprints.launchpad.net/fuel/+spec/fuel-multiple-l3-agents
  https://blueprints.launchpad.net/fuel/+spec/fuel-storage-networks
  https://blueprints.launchpad.net/fuel/+spec/separate-public-floating
  https://blueprints.launchpad.net/fuel/+spec/advanced-networking
 
  We will need to create one for UI.
 
  Neutron blueprints which are in the interest of large and thus complex
  deployments, with the requirements of scalability and high availability:
  https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
  https://blueprints.launchpad.net/neutron/+spec/quantum-multihost
 
  The last one was rejected... might there be another way of achieving the same
  use cases? The use case, I think, was explained in great detail here:
  https://wiki.openstack.org/wiki/NovaNeutronGapHighlights
  Any thoughts on this?
 
 
 https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
This is the up-to-date blueprint, 

Re: [openstack-dev] [infra] Consolidating efforts around Fedora/Centos gate job

2014-04-11 Thread Sean Dague
On 04/11/2014 01:43 AM, Ian Wienand wrote:
 Hi,
 
 To summarize recent discussions, nobody is opposed in general to
 having Fedora / CentOS included in the gate.  However, it raises a
 number of big questions: which job(s) to run on Fedora, where does
 the quota for extra jobs come from, how do we get the job on multiple
 providers, how stable will it be, how will we handle new releases,
 CentOS vs Fedora, etc.
 
 I think we agreed in [1] that the best thing to do is to start small,
 get some experience with multiple platforms and grow from there.  Thus
 the decision to target a single job to test just incoming devstack
 changes on Fedora 20.  This is a very moderate number of changes, so
 adding a separate test will not have a huge impact on resources.
 
 Evidence points to this being a good place to start.  People
 submitting to devstack might have noticed comments from redhatci
 like [2] which reports runs of their change on a variety of rpm-based
 distros.  Fedora 20 has been very stable, so we should not have many
 issues.  Making sure it stays stable is very useful to build on for
 future gate jobs.
 
 I believe we decided that to make a non-voting job we could just focus
 on running on Rackspace and avoid the issues of older fedora images on
 hp cloud.  Longer term, either a new hp cloud version comes, or DIB
 builds the fedora images ... either way we have a path to upgrading it
 to a voting job in time.  Another proposal was to use the ooo cloud,
 but dprince feels that is probably better kept separate.
 
 Then we have the question of the nodepool setup scripts working on
 F20.  I just tested the setup scripts from [3] and it all seems to
 work on a fresh f20 cloud image.  I think this is due to kchamart,
 peila2 and others who've fixed parts of this before.
 
 So, is there:
 
  1) anything blocking having f20 in the nodepool?
  2) anything blocking a simple, non-voting job to test devstack
 changes on f20?
 
 Thanks,
 
 -i
 
 [1] 
 http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-08-19.01.log.html#l-89
 [2] http://people.redhat.com/~iwienand/86310/
 [3] 
 https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/nodepool/scripts

I can't speak to #1, however I'm +1 on this effort. Would love to have
devstack running on Fedora on changes in general, and especially on
devstack changes, as we accidentally break Fedora far too often.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread Jaromir Coufal

On 2014/11/04 10:27, Thomas Goirand wrote:

On 04/10/2014 04:32 PM, Jaromir Coufal wrote:

Dear Stackers,

I am happy to announce that yesterday Tuskar UI (TripleO UI) has tagged
branch 0.1.0 for Icehouse release [0].

I put together a narrated demo of all included features [1].

You can find one manual part in the whole workflow - cloud
initialization. There is ongoing work on automatic os-cloud-config, but
for the release we had to include a manual way. Automation should be added
soon, though.

I want to thank all contributors for their hard work to make this happen. It
has been a pleasure to cooperate with all of you guys and I am looking
forward to bringing new features [2] in.


-- Jarda


Are the latest tags of all needed components enough? In other words, if I
update all of TripleO in Debian, will I have a usable system?

Cheers,

Thomas


Hi Thomas,

I would say yes. The question is what you mean by a usable system. You
want to try the Tuskar UI? If yes, here is devtest, which will help you
to get the dev setup: https://wiki.openstack.org/wiki/Tuskar/Devtest and
here is the part for Tuskar UI:
https://github.com/openstack/tuskar-ui/blob/master/docs/install.rst


If you want more general info about Tuskar, here is the wiki page:
https://wiki.openstack.org/wiki/Tuskar.


We are also very happy to help on the #tuskar or #tripleo freenode channels
if you experience any trouble.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][heat] Unusable error messages in dashboard for Orchestration

2014-04-11 Thread Steven Hardy
Hi Tom,

On Fri, Apr 11, 2014 at 01:05:00PM +0800, Tom Fifield wrote:
 Hi,
 
 I lodged a bug the day after Havana came out, hoping to get this
 usability problem addressed.
 
 https://bugs.launchpad.net/horizon/+bug/1241395
 
 Essentially, if something goes wrong creating a stack through the
 dashboard, all the user ever sees is:
 
 Stack creation failed.
 
 
 ... which is a little less than useful in terms of enabling them to
 fix the problem.
 
 Testing using RC1 today, there is no improvement, and I was shocked
 to discover that the bug submitted was not even triaged during
 Icehouse!
 
 Any ideas? :)

Thanks for drawing attention to this problem; unfortunately it looks like
this bug has fallen through the cracks, so sorry about that.

I am a little confused, however, because when I create a stack which fails,
it does provide feedback regarding the reason for the failure in both the
topology and stack overview pages.

Here are the steps I went through to test:

1. login as demo user, select demo tenant
2. select Orchestration-Stacks page
3. Click Launch Stack, do direct input of this template:

heat_template_version: 2013-05-23

resources:
  wait_handle:
    type: OS::Heat::UpdateWaitConditionHandle

  wait_condition:
    type: AWS::CloudFormation::WaitCondition
    properties:
      Count: 1
      Handle: { get_resource: wait_handle }
      Timeout: 10

4. Click Next, enter stack name "test" and password "dummy", and click Launch
5. The UI returns to the stack list view, with the "test" stack status in
progress
6. The stack creation fails, and the status moves to failed
7. Click on the stack name, which takes me to the Stack Overview page
8. Observe the Status for the failure:

Status
  Create_Failed: Resource CREATE failed: WaitConditionTimeout: 0 of 1 received


So there evidently are some other error paths which do not provide feedback
(when we fail to attempt stack creation at all) - the only way I could
reproduce your issue was by attempting to create a stack with a duplicate
name, which as you say does result in an unhelpful popup saying "Error:
Stack creation failed."

Compare that with, for example, creating a stack with an invalid name
("123"), which gives you a nice explanation of the problem on the form. So
I guess what you're seeing is a default response to some error paths which
are not explicitly handled.
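
(As an aside, even when the dashboard only shows the generic message, the
CLI usually still has the details - e.g. running heat event-list test
against the stack above includes a resource_status_reason column for the
failed resource.)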

What I think may help is if you can update the bug with steps to reproduce
the problem. It may be that there are several failure scenarios we need to
resolve, so getting specific details to reproduce is the first step.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread Thomas Goirand
On 04/11/2014 06:05 PM, Jaromir Coufal wrote:
 On 2014/11/04 10:27, Thomas Goirand wrote:
 On 04/10/2014 04:32 PM, Jaromir Coufal wrote:
 Dear Stackers,

 I am happy to announce that yesterday Tuskar UI (TripleO UI) has tagged
 branch 0.1.0 for Icehouse release [0].

 I put together a narrated demo of all included features [1].

 You can find one manual part in the whole workflow - cloud
 initialization. There is ongoing work on automatic os-cloud-config, but
 for the release we had to include a manual way. Automation should be added
 soon, though.

 I want to thank all contributors for their hard work to make this happen. It
 has been a pleasure to cooperate with all of you guys and I am looking
 forward to bringing new features [2] in.


 -- Jarda

 Are the latest tags of all needed components enough? In other words, if I
 update all of TripleO in Debian, will I have a usable system?

 Cheers,

 Thomas
 
 Hi Thomas,
 
 I would say yes. The question is what you mean by a usable system. You
 want to try the Tuskar UI? If yes, here is devtest which will help you
 to get the dev setup: https://wiki.openstack.org/wiki/Tuskar/Devtest and
 here is the part for Tuskar UI:
 https://github.com/openstack/tuskar-ui/blob/master/docs/install.rst
 
 If you want more general info about Tuskar, here is wiki page:
 https://wiki.openstack.org/wiki/Tuskar.
 
 We are also very happy to help on #tuskar or #tripleo freenode channels
 if you experience some troubles.
 
 -- Jarda

Hi Jarda,

Thanks a lot for your reply.

Unfortunately, these instructions aren't very useful if you want to do
an installation based on packages. Something like:

git clone https://git.openstack.org/openstack/tripleo-incubator
$TRIPLEO_ROOT/tripleo-incubator/scripts/devtest.sh --trash-my-machine

is of course a no-go. Stuff like pip install, easy_install, or git
clone isn't what Debian users should read.

So, I guess the documentation for using packages has to be written
from scratch. I'm not sure where to start... :( Do you have time to help
with this?

Now, yes, I'd be very happy to chat about this on IRC. However, I have
to get my hands on a powerful enough server. I've read a post in this
list by Robert saying that I would need at least 16 GB of RAM. I hope to
get spare hardware for such tests soon.

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][TripleO] ubuntu deploy-ironic element ramdisk does not detect sata disks

2014-04-11 Thread Rohan Kanade
I am using Ironic's out-of-tree Nova driver and running Ironic with the
PXESeamicro driver.

I am using an ubuntu-based deployment ramdisk created with the below
command in diskimage-builder:

sudo bin/ramdisk-image-create -a amd64 ubuntu deploy-ironic -o
/tmp/deploy-ramdisk

I can see that the ramdisk is pxe-booted on the baremetal correctly. But
the ramdisk throws an error saying it cannot find the target disk device

https://github.com/openstack/diskimage-builder/blob/master/elements/deploy-ironic/init.d/80-deploy-ironic#L18

I then hard-coded /dev/sda as target_disk, yet the ramdisk does not
actually detect any disks during or after boot.


I have cross-checked using a SystemRescueCD linux image on the same
baremetal; it can see all the SATA disks attached to it fine.

Any pointers?

Regards,
Rohan Kanade
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Problem with kombu version.

2014-04-11 Thread Dmitry Pyzhov
We are going to move to kombu==2.5.14 today. Right now we have 2.1.8.

2.5.14 meets global-requirements.txt, and 3.0 does not, because >=3.0 is a
much stricter requirement than >=2.4.8. We can try to migrate to kombu
3.0, but it looks like we have no resources for this in 5.0. It needs to
be fully tested before the update.
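
For option 2 mentioned downthread (dropping the kombu.five dependency), a
small compatibility shim may be all that is needed - a sketch, assuming
murano only needs something like monotonic from kombu.five (its actual
imports may differ):

try:
    # kombu >= 3.0 ships the compat module
    from kombu.five import monotonic
except ImportError:
    # fall back to the stdlib on kombu 2.5.x
    from time import time as monotonic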


On Wed, Apr 9, 2014 at 3:38 PM, Matthew Mosesohn mmoses...@mirantis.comwrote:

 Dmitry, I don't think you should drop kombu.five so soon.
 We haven't heard directly from the Fuel python team, such as Dmitry
 Pyzhov, what reason we have to lock kombu at version 2.5.14.
 I wrote to him earlier today out of band, so hopefully he will get
 back to this message soon.

 On Wed, Apr 9, 2014 at 3:27 PM, Dmitry Teselkin dtesel...@mirantis.com
 wrote:
  Hi again,
 
  So there is a reply from Dmitry Burmistrov which for some reason was
  missed in this thread:
  Nailgun requires an exact version of kombu (== 2.5.14).
  This is the only reason why we can't update it.
  I think you should talk to Dmitry P. about this version conflict.
  I want to take this opportunity to remind everyone that we should
  adhere to the global-requirements.txt in order to avoid such
  conflicts.
 
  Hopefully our developers have decided to get rid of the kombu.five usage,
  which looks like an easy task.
 
  Thanks, everyone.
 
 
 
  On Mon, Apr 7, 2014 at 8:33 PM, Dmitry Teselkin dtesel...@mirantis.com
  wrote:
 
  Hello,
 
  I'm working on Murano integration into FUEL-5.0, and have faced the
  following problem: our current implementation depends on the 'kombu.five'
  module, but this module (actually a single file) is available only starting
  with kombu 3.0. So this means that the murano-api component depends on
  kombu >=3.0. This meets the OpenStack global requirements list, where kombu
  >=2.4.8 is declared. Unfortunately, this also means that a system-wide
  version upgrade is required.
 
  So the question is - what is the right way to solve the problem? I see
  the following options:
  1. change the kombu version requirement to >=3.0 for the entire FUEL
  installation - it doesn't break the global requirements constraint, but
  some other FUEL components could be affected.
  2. replace the calls to functions from 'kombu.five' and use the existing
  version - I'm not sure if it's possible; I'm awaiting an answer from our
  developers.
 
  Which is the most suitable variant, or are there any other solutions for
  the problem?
 
 
  --
  Thanks,
  Dmitry Teselkin
  Deployment Engineer
  Mirantis
  http://www.mirantis.com
 
 
 
 
  --
  Thanks,
  Dmitry Teselkin
  Deployment Engineer
  Mirantis
  http://www.mirantis.com
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread Ladislav Smola

On 04/11/2014 01:03 PM, Thomas Goirand wrote:

On 04/11/2014 06:05 PM, Jaromir Coufal wrote:

On 2014/11/04 10:27, Thomas Goirand wrote:

On 04/10/2014 04:32 PM, Jaromir Coufal wrote:

Dear Stackers,

I am happy to announce that yesterday Tuskar UI (TripleO UI) has tagged
branch 0.1.0 for Icehouse release [0].

I put together a narrated demo of all included features [1].

You can find one manual part in the whole workflow - cloud
initialization. There is ongoing work on automatic os-cloud-config, but
for the release we had to include a manual way. Automation should be added
soon, though.

I want to thank all contributors for their hard work to make this happen. It
has been a pleasure to cooperate with all of you guys and I am looking
forward to bringing new features [2] in.


-- Jarda

Are the latest tags of all needed components enough? In other words, if I
update all of TripleO in Debian, will I have a usable system?

Cheers,

Thomas

Hi Thomas,

I would say yes. The question is what you mean by a usable system. You
want to try the Tuskar UI? If yes, here is devtest which will help you
to get the dev setup: https://wiki.openstack.org/wiki/Tuskar/Devtest and
here is the part for Tuskar UI:
https://github.com/openstack/tuskar-ui/blob/master/docs/install.rst

If you want more general info about Tuskar, here is wiki page:
https://wiki.openstack.org/wiki/Tuskar.

We are also very happy to help on #tuskar or #tripleo freenode channels
if you experience some troubles.

-- Jarda

Hi Jarda,

Thanks a lot for your reply.

Unfortunately, these instructions aren't very useful if you want to do
an installation based on packages. Something like:

git clone https://git.openstack.org/openstack/tripleo-incubator
$TRIPLEO_ROOT/tripleo-incubator/scripts/devtest.sh --trash-my-machine

is of course a no-go. Stuff like pip install, easy_install, or git
clone isn't what Debian users should read.

So, I guess the documentation for using packages has to be written
from scratch. I'm not sure where to start... :( Do you have time to help
with this?

Now, yes, I'd be very happy to chat about this on IRC. However, I have
to get my hands on a powerful enough server. I've read a post in this
list by Robert saying that I would need at least 16 GB of RAM. I hope to
get spare hardware for such tests soon.

Thomas


Hi Thomas,

Yes, devtest is supposed to run on the bleeding edge. :-)

For Fedora:
I think you are looking for this 
https://github.com/agroup/instack-undercloud

It uses packages and pre-created images.

For Debian-like systems, I am afraid nobody has started to prepare a
package-based solution.


Kind Regards,
Ladislav



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Brocade NICs support

2014-04-11 Thread Mike Scherbakov
Folks,
is our bootstrap discovery image still unable to see Brocade NICs?
Do we have anyone to try it out?

https://bugs.launchpad.net/fuel/+bug/1260492 - this bug can't be resolved
unless we have hardware to test a fix for it. A fix was proposed when the
bootstrap was on CentOS 6.4, and now we have CentOS 6.5; I hope the issue
has gone away...

Thanks,
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-11 Thread Deepak Shetty
Hi,
   I am using the nfs and glusterfs drivers as references here.

I see that load_shares_config is called every time via
_ensure_shares_mounted, which I feel is incorrect, mainly because
_ensure_shares_mounted reloads the config file w/o restarting the service.

I think that the shares config file should only be loaded once (during
service startup) as part of do_setup and never again.

If someone changes something in the conf file, they need to restart the
service, which calls do_setup again, and the changes made in shares.conf
take effect.

Looking further, _ensure_shares_mounted ends up calling
remotefsclient.mount(), which does _nothing_ if the share is already
mounted, which is mostly the case. So even if someone changed something in
the shares file (like added -o options) it won't take effect, as the share
is already mounted & the service already running.

In fact, today, even if you restart the service, the changes in the shares
file won't take effect, as the mount is not unmounted; hence when the
service is next started, the mount already exists and
_ensure_shares_mounted just returns w/o doing anything.

The only advantage of calling load_shares_config in _ensure_shares_mounted
is if someone changed the share server IP while the service is running: it
loads the new share using the new server IP. That again is wrong, since
ideally the person should restart the service for any shares.conf changes
to take effect.

Hence I feel that calling load_shares_config in _ensure_shares_mounted is
incorrect and should be removed.
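
Roughly, the change I have in mind looks like this (a standalone sketch of
the intended behaviour, not the actual cinder driver code; names are
illustrative):

class ShareDriverSketch(object):
    def __init__(self, shares_config_path):
        self._shares_config_path = shares_config_path
        self.shares = {}

    def do_setup(self):
        # called once at service startup - the ONLY place the
        # shares config file is read
        with open(self._shares_config_path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith('#'):
                    address, _, options = line.partition(' ')
                    self.shares[address] = options or None

    def _ensure_shares_mounted(self):
        # periodic task: (re)mount known shares, but do NOT reload
        # the config, so shares.conf edits only take effect after a
        # service restart
        for share, options in self.shares.items():
            self._mount(share, options)

    def _mount(self, share, options):
        # stands in for remotefsclient.mount(), which is a no-op if
        # the share is already mounted
        pass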

Thoughts?

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-11 Thread Deepak Shetty
My argument was mostly from the perspective that unmanage should do its best
to revert the volume to its original state (mainly the name).

Like you said, once it's given to cinder, it's not an external volume
anymore; similarly, once it's taken out of cinder, it's an external volume
again, and it's just logical for the admin/user to expect the volume to
have its original name.

Thinking of scenarios... (I may be wrong here)

An admin submits a few storage array LUNs (for which he has set up a
mirroring relationship in the storage array) as volumes to cinder using
manage_existing, uses the volumes as part of OpenStack, and there
are 2 cases here:
1) cinder renames the volume, which causes his backend mirroring
relationship to be broken
2) he disconnects the mirror relationship while submitting the volume to
cinder, and when he unmanages it, expects the mirror to work

Will this break if cinder renames the volume?



On Thu, Apr 10, 2014 at 10:50 PM, Duncan Thomas duncan.tho...@gmail.comwrote:

 On 10 April 2014 09:02, Deepak Shetty dpkshe...@gmail.com wrote:

  Ok, agreed. But then when the admin unmanages it, we should rename it back
  to the name that it originally had before it was managed by cinder. At
  least that's what the admin can hope to expect; since he is undoing the
  manage_existing stuff, he expects his file name to be present as it was
  before he managed it w/ cinder.

 I'd question this assertion. Once you've given a volume to cinder, it
 is not an external volume any more, it is cinder's. Unmanage of any
 volume should be consistent, regardless of whether it got into cinder
 via a volume create or a 'cinder manage' command. It is far worse to
 have unmanage be inconsistent at some point in the distant future than
 it is for the storage admin to do some extra work in the short term if
 he is experimenting with managing / unmanaging volumes.

 As was discussed at the summit, manage / unmanage is *not* designed to
 be a routine operation. If you're unmanaging volumes regularly then
 you're not using the interface as intended, and we need to discuss
 your use-case, not bake weird and inconsistent behaviour into the
 current interface.

 So, under what circumstances do you expect that the current behaviour
 causes a significant problem?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] PTL Election Conclusion and Results

2014-04-11 Thread Anita Kuno
Thank you to the electorate, to all those who voted and to all
candidates who put their name forward for PTL for this election. A
healthy, open process breeds trust in our decision making capability -
thank you to all those who make this process possible.

Now for the results of the PTL election process, please join me in
extending congratulations to the following PTLs:

* Compute (Nova)
** Michael Still
* Object Storage (Swift)
** John Dickinson
* Image Service (Glance)
** Mark Washenberger
* Identity (Keystone)
** Dolph Mathews
* Dashboard (Horizon)
** David Lyle
* Networking (Neutron)
** Kyle Mestery
* Block Storage (Cinder)
** John Griffith
* Metering/Monitoring (Ceilometer)
** Eoghan Glynn
* Orchestration (Heat)
** Zane Bitter
* Database Service (Trove)
** Nikhil Manchanda
* Bare metal (Ironic)
** Devananda van der Veen
* Common Libraries (Oslo)
** Doug Hellmann
* Infrastructure
** Jim Blair
* Documentation
** Anne Gentle
* Quality Assurance (QA)
** Matt Treinish
* Deployment (TripleO)
** Robert Collins
* Devstack (DevStack)
** Dean Troyer
* Release cycle management
** Thierry Carrez
* Queue Service (Marconi)
** Kurt Griffiths
* Data Processing Service (Sahara)
** Sergey Lukjanov
* Key Management Service (Barbican)
** Jarret Raim

Election Results:
* Nova: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_e295a4530c19f691
* Neutron:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_230fb7953010b219
* Cinder:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_cdc16bb61aacc74a
* Ceilometer:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_18b9e8d39c82df0b
* Heat: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_e0f670cec964fef8
* TripleO:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_eeaa27c0de8c9a51

Shortly I will post the announcement opening TC nominations and then we
are into the TC election process.

Thank you to all involved in the PTL election process,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [Horizon] Icehouse Release of TripleO UI + Demo

2014-04-11 Thread Jaromir Coufal

Hi Jarda,

Thanks a lot for your reply.

Unfortunately, these instructions aren't very useful if you want to do
an installation based on packages. Something like:

git clone https://git.openstack.org/openstack/tripleo-incubator
$TRIPLEO_ROOT/tripleo-incubator/scripts/devtest.sh --trash-my-machine

is of course a no-go. Stuff like pip install, easy_install, or git
clone isn't what Debian users should read.

So, I guess the documentation for using packages has to be written
from scratch. I'm not sure where to start... :( Do you have time to help
with this?

Now, yes, I'd be very happy to chat about this on IRC. However, I have
to get my hands on a powerful enough server. I've read a post in this
list by Robert saying that I would need at least 16 GB of RAM. I hope to
get spare hardware for such tests soon.

Thomas


Right, sorry, I somehow missed the Debian note in your e-mail...

Robert was right, it is a bit of a demanding project :) So once you get your
server, just jump into the #tuskar channel and somebody will definitely
try to help you set it up. If you could take notes and help us update
the wiki page with installation instructions for Debian, it would be
awesome.


Thanks
-- Jarda



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-11 Thread Duncan Thomas
On 11 April 2014 14:21, Deepak Shetty dpkshe...@gmail.com wrote:
 My argument was mostly from the perspective that unmanage should do its best
 to revert the volume to its original state (mainly the name).

 Like you said, once it's given to cinder, it's not an external volume
 anymore; similarly, once it's taken out of cinder, it's an external volume
 again, and it's just logical for the admin/user to expect the volume to
 have its original name

 Thinking of scenarios... (I may be wrong here)

 An admin submits a few storage array LUNs (for which he has set up a
 mirroring relationship in the storage array) as volumes to cinder using
 manage_existing, uses the volumes as part of OpenStack, and there
 are 2 cases here:
 1) cinder renames the volume, which causes his backend mirroring
 relationship to be broken
 2) he disconnects the mirror relationship while submitting the volume to
 cinder, and when he unmanages it, expects the mirror to work

 Will this break if cinder renames the volume?


Both of those are unreasonable expectations, and I would entirely
expect both of them to break. Once you give cinder a volume, you no
longer have *any* control over what happens to that volume. Mirroring
relationships, volume names, etc *all* become completely under
cinder's control. Expecting *anything* to go back to the way it was
before cinder got hold of the volume is completely wrong.

The scenario I *don't* want to see is:
1) Admin imports a few hundred volumes into the cloud
2) Some significant time goes by
3) Cloud is being decommissioned / the storage is being transferred / etc.,
so the admin runs unmanage on all cinder volumes on that storage
4) The volumes get renamed or not, based on whether they happened to
come into cinder via manage or volume create

*That* I would consider broken.

I'll say it again, to make my position totally clear - once you've run
cinder manage, you can have no further expectations on a volume.
Cinder might rename it, migrate it, compress it, change the on disk
format of it, etc. Cinder will not, and should not, remember
*anything* about the volume before it was managed. You give the volume
to cinder, and it becomes just another cinder volume, nothing special
about it at all.

Anything else is *not* covered by the manage / unmanage commands, and
needs to be discussed, with clear, reasoned usecases. We do not want
people using this interface with any other expectations, because even
for things that happen to work now, they might get changed in future,
without warning.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-11 Thread Kerr, Andrew
Hi Deepak,

I know that there are plans to completely change how NFS uses (or more
accurately, will not use) the shares.conf file. My guess is that a lot of
this code will be changed in the near future during that rework.

Andrew Kerr
OpenStack QA
Cloud Solutions Group
NetApp


From:  Deepak Shetty dpkshe...@gmail.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Friday, April 11, 2014 at 7:54 AM
To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject:  [openstack-dev] [Cinder] XXXFSDriver: Query on usage
of  load_shares_config in ensure_shares_mounted


Hi,

   I am using the nfs and glusterfs drivers as references here.


I see that load_shares_config is called every time via
_ensure_shares_mounted, which I feel is incorrect, mainly because
_ensure_shares_mounted reloads the config file w/o restarting the
service.


I think that the shares config file should only be loaded once (during
service startup) as part of do_setup and never again.

If someone changes something in the conf file, they need to restart the
service, which calls do_setup again, and the changes made in shares.conf
take effect.


Looking further, _ensure_shares_mounted ends up calling
remotefsclient.mount(), which does _nothing_ if the share is already
mounted, which is mostly the case. So even if someone changed something
in the shares file (like added -o options) it won't take effect, as the
share is already mounted & the service already running.

In fact, today, even if you restart the service, the changes in the shares
file won't take effect, as the mount is not unmounted; hence when the
service is next started, the mount already exists and
_ensure_shares_mounted just returns w/o doing anything.


The only advantage of calling load_shares_config in _ensure_shares_mounted
is if someone changed the share server IP while the service is running: it
loads the new share using the new server IP. That again is wrong, since
ideally the person should restart the service for any shares.conf changes
to take effect.

Hence I feel that calling load_shares_config in _ensure_shares_mounted is
incorrect and should be removed.

Thoughts?

thanx,

deepak


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-11 Thread Duncan Thomas
Hi Deepak

Both of those (config being read repeatedly, and config not being
applied if the share is already mounted) sound like bugs. Please file
bug reports for both, with reproducers if possible.

Thanks

On 11 April 2014 12:54, Deepak Shetty dpkshe...@gmail.com wrote:
 Hi,
 I am using the nfs and glusterfs drivers as references here.

 I see that load_shares_config is called every time via
 _ensure_shares_mounted, which I feel is incorrect, mainly because
 _ensure_shares_mounted reloads the config file w/o restarting the service.

 I think that the shares config file should only be loaded once (during
 service startup) as part of do_setup and never again.

 If someone changes something in the conf file, they need to restart the
 service, which calls do_setup again, and the changes made in shares.conf
 take effect.

 Looking further, _ensure_shares_mounted ends up calling
 remotefsclient.mount(), which does _nothing_ if the share is already
 mounted, which is mostly the case. So even if someone changed something in
 the shares file (like added -o options) it won't take effect, as the share
 is already mounted & the service already running.

 In fact, today, even if you restart the service, the changes in the shares
 file won't take effect, as the mount is not unmounted; hence when the
 service is next started, the mount already exists and
 _ensure_shares_mounted just returns w/o doing anything.

 The only advantage of calling load_shares_config in _ensure_shares_mounted
 is if someone changed the share server IP while the service is running: it
 loads the new share using the new server IP. That again is wrong, since
 ideally the person should restart the service for any shares.conf changes
 to take effect.

 Hence I feel that calling load_shares_config in _ensure_shares_mounted is
 incorrect and should be removed.

 Thoughts?

 thanx,
 deepak

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][nova][docker]

2014-04-11 Thread Derek Higgins
Hi All,

I've been taking a look at devstack support for nova-docker[1], which
was recently taken out of nova. I stumbled across a similar effort
currently underway[2], so I have based my work off that.

What I hope to do is set up a check job doing CI on devstack-f20 nodes[3];
this will set up a devstack-based nova with the nova-docker driver and
can then run whatever tests make sense (currently only a minimal test;
Eric, I believe you were looking at tempest support - maybe it could be
hooked in here?).

I've taken this as far as I can locally, to make sure it all works on a
manually set up devstack-f20 node, and it does. Of course there may be
slight differences in what nodepool sets up, but we can't know that until
we try :-)

So I'd like to give this a go, I've left the various patches in WIP for
the moment so people can take a look etc...
https://review.openstack.org/86905
https://review.openstack.org/86910

Any thoughts?

thanks,
Derek.

[1] http://git.openstack.org/cgit/stackforge/nova-docker
[2] https://review.openstack.org/#/c/81097/
[3] https://review.openstack.org/#/c/86842/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Candidate proposals for TC (Technical Committee) positions are now open

2014-04-11 Thread Anita Kuno
Candidate proposals for the Technical Committee positions (7 positions)
are now open and will remain open until 05:59 utc April 18, 2014.

Candidates for the Technical Committee Positions:
Any Foundation individual member can propose their candidacy for an
available, directly-elected TC seat. [0]
(except the six TC members who were elected for a one-year seat last
October: Monty Taylor, Russell Bryant, Anne Gentle, Mark McLoughlin,
Doug Hellmann and Sean Dague)[1]

Propose your candidacy by sending an email to the openstack-dev at
lists.openstack.org mailing-list, with the subject: TC candidacy.
Please start your own thread so we have one thread per candidate. Since
there will be many people voting for folks with whom they might not have
worked, including a platform or statement to help voters make an
informed choice is recommended, though not required.

Tristan and I will confirm candidates with an email to the candidate
thread as well as create a link to the confirmed candidate's proposal
email on the wikipage for this election. [1]

The election will be held from April 18 through to 13:00 utc April 24,
2014. The electorate are the Foundation individual members that are also
committers for one of the official programs projects[2] over the
Havana-Icehouse timeframe (April 4, 2013 06:00 UTC to April 4, 2014
05:59 UTC), as well as the 3 non-code ATCs who are acknowledged by the
TC. [3]

Please see the wikipage for additional details about this election. [1]

If you have any questions please be sure to either voice them on the
mailing list or email Tristan or myself[4] or contact Tristan or myself
on IRC.

Thank you, and I look forward to reading your candidate proposals,
Anita Kuno (anteaya)

[0] https://wiki.openstack.org/wiki/Governance/Foundation/TechnicalCommittee
[1] https://wiki.openstack.org/wiki/TC_Elections_April_2014
[2]
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml?id=april-2014-elections
Note the tag for this repo, april-2014-elections.
[3]
http://git.openstack.org/cgit/openstack/governance/tree/reference/extra-atcs?id=april-2014-elections
[4] Anita: anteaya at anteaya dot info Tristan: tristan dot cacqueray at
enovance dot com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-11 Thread Collins, Sean
Hi,

I've noticed a proliferation of Vagrant projects that are popping up, is
there any interest from other authors in trying to consolidate?

https://github.com/bcwaldon/vagrant_devstack

https://github.com/sdague/devstack-vagrant

http://openstack.prov12n.com/how-to-make-a-lot-of-devstack-with-vagrant/

https://github.com/jogo/DevstackUp

https://github.com/search?q=vagrant+devstackref=cmdform

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-11 Thread Andrew Laski

On 04/10/14 at 11:33pm, Oleg Gelbukh wrote:

Andrew,

Thank you for clarification!


On Thu, Apr 10, 2014 at 3:47 PM, Andrew Laski andrew.la...@rackspace.comwrote:



The scheduler as it currently exists is a placement engine.  There is
sufficient complexity in the scheduler with just that responsibility so I
would prefer to see anything that's making runtime decisions separated out.
 Perhaps it could just be another service within the scheduler project once
it's broken out, but I think it will be beneficial to have a clear
distinction between placement decisions and runtime monitoring.



Do you think that auto-scaling could be considered another facet of this
'runtime monitoring' functionality? Now it is a combination of Heat and
Ceilometer. Does it worth moving to hypothetical runtime mobility service
as well?


Auto-scaling is certainly a facet of runtime monitoring.  But 
auto-scaling performs actions based on a set of user defined rules and 
is very visible while the enhancements proposed below are intended to 
benefit deployers and be very invisible to users.  So the set of 
allowable actions is very constrained compared to what auto-scaling can 
do.  

In my opinion what's being proposed doesn't seem to fit cleanly into 
any existing service, so perhaps it could start as a standalone entity.  
Then once there's something that can be used and demoed a proper place 
might suggest itself, or it might make sense to keep it separate.





--
Best regards,
Oleg Gelbukh









On Wed, Apr 9, 2014 at 7:47 PM, Jay Lau jay.lau@gmail.com wrote:

 @Oleg, Till now, I'm not sure the target of Gantt, is it for initial

placement policy or run time policy or both, can you help clarify?

@Henrique, not sure if you know IBM PRS (Platform Resource Scheduler)
[1],
we have finished the dynamic scheduler in our Icehouse version (PRS
2.2),
it has exactly the same feature as you described, we are planning a live
demo for this feature in Atlanta Summit. I'm also writing some document
for
run time policy which will cover more run time policies for OpenStack,
but
not finished yet. (My shame for the slow progress). The related blueprint
is [2], you can also get some discussion from [3]

[1]
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=
ANsubtype=CAhtmlfid=897/ENUS213-590appname=USN
[2]
https://blueprints.launchpad.net/nova/+spec/resource-
optimization-service
[3] http://markmail.org/~jaylau/OpenStack-DRS

Thanks.


2014-04-09 23:21 GMT+08:00 Oleg Gelbukh ogelb...@mirantis.com:

Henrique,



You should check out Gantt project [1], it could be exactly the place to
implement such features. It is a generic cross-project Scheduler as a
Service forked from Nova recently.

[1] https://github.com/openstack/gantt

--
Best regards,
Oleg Gelbukh
Mirantis Labs


On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta 
henriquecostatr...@gmail.com wrote:

 Hello, everyone!


I am currently a graduate student and member of a group of contributors
to OpenStack. We believe that a dynamic scheduler could improve the
efficiency of an OpenStack cloud, either by rebalancing nodes to
maximize
performance or to minimize the number of active hosts, in order to
minimize
energy costs. Therefore, we would like to propose a dynamic scheduling
mechanism to Nova. The main idea is using the Ceilometer information
(e.g.
RAM, CPU, disk usage) through the ceilometer-client and dynamically
decide
whether an instance should be live migrated.

This might be done as a Nova periodic task, which will be executed
every
once in a given period or as a new independent project. In both cases,
the
current Nova scheduler will not be affected, since this new scheduler
will
be pluggable. We have done a search and found no such initiative in the
OpenStack BPs. Outside the community, we found only a recent IBM
announcement for a similiar feature in one of its cloud products.

A possible flow is: In the new scheduler, we periodically make a call
to
Nova, get the instance list from a specific host and, for each
instance, we
make a call to the ceilometer-client (e.g. $ ceilometer statistics -m
cpu_util -q resource=$INSTANCE_ID) and then, according to some specific
parameters configured by the user, analyze the meters and do the proper
migrations.

Do you have any comments or suggestions?

--
Ítalo Henrique Costa Truta
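
For concreteness, a rough sketch of the polling flow quoted above. Client
construction, the threshold and letting the scheduler pick a target host are
placeholder assumptions; the statistics call mirrors the quoted CLI example
(ceilometer statistics -m cpu_util -q resource=$INSTANCE_ID):

# Rough sketch, not production code.
from ceilometerclient import client as ceilo_client
from novaclient.v1_1 import client as nova_client

def check_host(cclient, nova, hostname, cpu_threshold=80.0):
    # Look at each instance on the host and compare recent cpu_util
    # statistics against a (placeholder) threshold.
    for server in nova.servers.list(search_opts={'host': hostname}):
        query = [{'field': 'resource_id', 'op': 'eq', 'value': server.id}]
        stats = cclient.statistics.list(meter_name='cpu_util', q=query)
        if stats and stats[-1].avg > cpu_threshold:
            # host=None is assumed to let the scheduler pick a target.
            nova.servers.live_migrate(server, None, False, False)

# Assumed client setup, e.g.:
# nova = nova_client.Client(USER, PASSWORD, TENANT, AUTH_URL)
# cclient = ceilo_client.get_client(2, os_username=USER, os_password=PASS,
#                                   os_tenant_name=TENANT, os_auth_url=AUTH_URL)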



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Thanks,

Jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___

OpenStack-dev mailing 

Re: [openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-11 Thread Sean Dague
On 04/11/2014 10:34 AM, Collins, Sean wrote:
 Hi,
 
 I've noticed a proliferation of Vagrant projects that are popping up, is
 there any interest from other authors in trying to consolidate?
 
 https://github.com/bcwaldon/vagrant_devstack
 
 https://github.com/sdague/devstack-vagrant
 
 http://openstack.prov12n.com/how-to-make-a-lot-of-devstack-with-vagrant/
 
 https://github.com/jogo/DevstackUp
 
 https://github.com/search?q=vagrant+devstackref=cmdform

That's definitely an option. As I look at the differences across the
projects I mostly see that there are differences with the provisioning
engine.

My personal end game was to get to a point where it would be simple to
replicate the devstack gate, which means reusing that puppet policy
(which is why I started with puppet). I see
https://github.com/bcwaldon/vagrant_devstack is based on chef. And
https://github.com/jogo/DevstackUp is just shell.

Maybe it would be good to get an ad-hoc IRC meeting together to figure
out what the must have features are that inspired everyone to write
these. If we can come up with a way to overlap those all sanely, moving
to stackforge and doing this via gerrit would be something I'd be into.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Russell Bryant
On 04/11/2014 10:11 AM, Derek Higgins wrote:
 Hi All,
 
 I've been taking a look at devstack support for nova-docker[1] which
 was recently taken out of nova. I stumbled across a similar effort
 currently underway[2], so have based my work off that.
 
 What I hope to do is set up a check job doing CI on devstack-f20 nodes[3];
 this will set up a devstack-based nova with the nova-docker driver and
 can then run whatever tests make sense (currently only a minimal test;
 Eric, I believe you were looking at tempest support, maybe it could be
 hooked in here?).
 
 I've taken this as far as I can locally to make sure it all works on a
 manually setup devstack-f20 node and it works, of course there may be
 slight differences in what nodepool sets up but we can't know that until
 we try :-)
 
 So I'd like to give this a go, I've left the various patches in WIP for
 the moment so people can take a look etc...
 https://review.openstack.org/86905
 https://review.openstack.org/86910
 
 Any thoughts?
 
 thanks,
 Derek.
 
 [1] http://git.openstack.org/cgit/stackforge/nova-docker
 [2] https://review.openstack.org/#/c/81097/
 [3] https://review.openstack.org/#/c/86842/

Hugely appreciated!

It'd be nice to have one other person ACK the devstack integration and
then I think we should merge it.

I know last cycle we were looking at 3rd party CI, but that was because
of timing and infra review availability.  I think a re-focus on doing it
in openstack's infra is the right approach.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-11 Thread Collins, Sean
Many of those patches are stale - please join us in the subteam IRC
meeting if you wish to coordinate development of IPv6 features, so that
we can focus on updating them and getting them merged. At this point
simply applying them to the Icehouse tree is not enough.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] more oslo liaisons needed

2014-04-11 Thread Doug Hellmann
I see that several projects have their Oslo liaisons lined up
(https://wiki.openstack.org/wiki/Oslo/ProjectLiaisons). It would be
great if we had at least one volunteer from each project before the
summit, so we can get a head start on the coordination work.

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Consolidating efforts around Fedora/Centos gate job

2014-04-11 Thread Derek Higgins
On 11/04/14 06:43, Ian Wienand wrote:
 Hi,
 
 To summarize recent discussions, nobody is opposed in general to
 having Fedora / Centos included in the gate.  However, it raises a
 number of big questions : which job(s) to run on Fedora, where does
 the quota for extra jobs come from, how do we get the job on multiple
 providers, how stable will it be, how will we handle new releases,
 centos v fedora, etc.
 
 I think we agreed in [1] that the best thing to do is to start small,
 get some experience with multiple platforms and grow from there.  Thus
 the decision to target a single job to test just incoming devstack
 changes on Fedora 20.  This is a very moderate number of changes, so
 adding a separate test will not have a huge impact on resources.
 
 Evidence points to this being a good point to start.  People
 submitting to devstack might have noticed comments from redhatci
 like [2] which reports runs of their change on a variety of rpm-based
 distros.  Fedora 20 has been very stable, so we should not have many
 issues.  Making sure it stays stable is very useful to build on for
 future gate jobs.
 
 I believe we decided that to make a non-voting job we could just focus
 on running on Rackspace and avoid the issues of older fedora images on
 hp cloud.  Longer term, either a new hp cloud version comes, or DIB
 builds the fedora images ... either way we have a path to upgrading it
 to a voting job in time.  Another proposal was to use the ooo cloud,
 but dprince feels that is probably better kept separate.
 
 Then we have the question of the nodepool setup scripts working on
 F20.  I just tested the setup scripts from [3] and it all seems to
 work on a fresh f20 cloud image.  I think this is due to kchamart,
 peila2 and others who've fixed parts of this before.
 
 So, is there:
 
  1) anything blocking having f20 in the nodepool?
I believe it should work; I've also done manual runs and they went OK.
There may be minor differences when launched from nodepool, but I don't
think anything major. Anyway, I submitted this earlier for another job
I'm proposing on nova-docker:
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032493.html
https://review.openstack.org/#/c/86842/1

  2) anything blocking a simple, non-voting job to test devstack
 changes on f20?
 
 Thanks,
 
 -i
 
 [1] 
 http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-04-08-19.01.log.html#l-89
 [2] http://people.redhat.com/~iwienand/86310/
 [3] 
 https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/nodepool/scripts
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Security Group logging

2014-04-11 Thread Jay Pipes
On Wed, 2014-04-09 at 00:02 +0100, Salvatore Orlando wrote:
 Auditing has been discussed for the firewall extension.
 However, it is reasonable to expect some form of auditing for security
 group rules as well.
 
 
 To the best of my knowledge there has never been an explicit decision
 to not support logging.
 However, my guess here is that we might be better off with an auditing
 service plugin integrating with security group and firewall agents
 rather than baking the logging feature in the security group
 extension.
 Please note that I'm just thinking aloud here.

+1. A notification event should be sent across the typical notifier
mechanisms when a security group rule is changed or applied.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Security Group logging

2014-04-11 Thread Veiga, Anthony

On Wed, 2014-04-09 at 00:02 +0100, Salvatore Orlando wrote:
 Auditing has been discussed for the firewall extension.
 However, it is reasonable to expect some form of auditing for security
 group rules as well.
 
 
 To the best of my knowledge there has never been an explicit decision
 to not support logging.
 However, my guess here is that we might be better off with an auditing
 service plugin integrating with security group and firewall agents
 rather than baking the logging feature in the security group
 extension.
 Please note that I'm just thinking aloud here.

+1. A notification event should be sent across the typical notifier
mechanisms when a security group rule is changed or applied.

Throwing my hat in the ring for this as well.  Preferably the message
should include the UUID of the Group being changed, and also the UUID of
the Instance if it's being applied.


Best,
-jay
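
A minimal sketch of what such a notification could look like with
oslo.messaging. The event type, publisher_id and payload keys here are
illustrative assumptions, not an agreed format:

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)
notifier = messaging.Notifier(transport, publisher_id='network.host1')

def notify_rule_applied(context, group_id, instance_id=None):
    # Carry the UUIDs suggested above; instance_id only when the
    # rule is actually being applied to an instance.
    payload = {'security_group_id': group_id,
               'instance_id': instance_id}
    notifier.info(context, 'security_group_rule.apply', payload)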



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-11 Thread Collins, Sean
 Maybe it would be good to get an ad-hoc IRC meeting together to figure
 out what the must have features are that inspired everyone to write
 these. If we can come up with a way to overlap those all sanely, moving
 to stackforge and doing this via gerrit would be something I'd be into.
 
   -Sean

+1 to this idea. I found vagrant_devstack a while ago and started using
it and then started documenting it for our development team at Comcast,
it has been very helpful for getting new developers started. Having
something in Stackforge would be great, so that we can consolidate all
our knowledge and energy.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][heat] Unusable error messages in dashboard for Orchestration

2014-04-11 Thread Fox, Kevin M
I've seen unusable error messages out of heat as well. I've been telling users 
(our ops guys) to look at the heat-engine logs when it happens and usually its 
fairly apparent what is wrong with their templates.

In the future, Should I report each of these I see as a new bug or add each to 
the existing bug?

Thanks,
Kevin

From: Steven Hardy [sha...@redhat.com]
Sent: Friday, April 11, 2014 3:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [horizon][heat] Unusable error messages in 
dashboard for Orchestration

Hi Tom,

On Fri, Apr 11, 2014 at 01:05:00PM +0800, Tom Fifield wrote:
 Hi,

 Lodged a bug the day after Havana came out to hope to get this
 usability problem addressed.

 https://bugs.launchpad.net/horizon/+bug/1241395

 Essentially, if something goes wrong creating a stack through the
 dashboard, all the user ever sees is:

 Stack creation failed.


 ... which is, a little less than useful in terms of enabling them to
 fix the problem.

 Testing using RC1 today, there is no improvement, and I was shocked
 to discover the the bug submitted was not even triaged during
 icehouse!

 Any ideas? :)

Thanks for drawing attention to this problem, unfortunately it looks like
this bug has fallen through the cracks so sorry about that.

I am a little confused however, because when I create a stack which fails,
it does provide feedback regarding the reason for failure in both the
topology and stack overview pages.

Here are the steps I went through to test:

1. login as demo user, select demo tenant
2. select Orchestration-Stacks page
3. Click Launch Stack, do direct input of this template:

heat_template_version: 2013-05-23

resources:
  wait_handle:
    type: OS::Heat::UpdateWaitConditionHandle

  wait_condition:
    type: AWS::CloudFormation::WaitCondition
    properties:
      Count: 1
      Handle: { get_resource: wait_handle }
      Timeout: 10

4. Click next, enter stack name test, password dummy and Click Launch
5. The UI returns to the stack list view, with the test stack status in
progress
6. The stack creation fails, status moves to failed
7. Click on the stack name, takes me to Stack Overview page
8. Observe Status for failure:

Status
  Create_Failed: Resource CREATE failed: WaitConditionTimeout: 0 of 1 received


So there evidently are some other error paths which do not provide feedback
(when we fail to attempt stack creation at all) - the only way I could
reproduce your issue was by attempting to create a stack with a duplicate
name, which as you say does result in an unhelpful popup saying Error:
Stack creation failed.

Comparing that with, for example, creating a stack with an invalid name
123, which gives you a nice explanation of the problem on the form.  So I
guess what you're seeing is a default response to some error paths which
are not explicitly handled.

What I think may help is if you can update the bug with steps to reproduce
the problem.  It may be there are several failure scenarios we need to
resolve, so getting specific details to reproduce is the first step.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] more oslo liaisons needed

2014-04-11 Thread Thomas Herve
 I see that several projects have their Oslo liaisons lined up
 (https://wiki.openstack.org/wiki/Oslo/ProjectLiaisons). It would be
 great if we had at least one volunteer from each project before the
 summit, so we can get a head start on the coordination work.

Hi Doug,

I'd be happy to handle the role for Heat.

Cheers,

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-11 Thread Eric Harney
On 04/11/2014 10:55 AM, Eric Harney wrote:
 On 04/11/2014 07:54 AM, Deepak Shetty wrote:
 Hi,
I am using the nfs and glusterfs driver as reference here.

 I see that load_shares_config is called every time via
 _ensure_shares_mounted which I feel is incorrect mainly because
 ensure_shares_mounted loads the config file again w/o restarting the service

 I think that the shares config file should only be loaded once (during
 service startup) as part of do_setup and never again.

 
 Wouldn't this change the functionality that this provides now, though?
 
 Unless I'm missing something, since get_volume_stats calls
 _ensure_shares_mounted(), this means you can add a new share to the
 config file and have it become active in the driver.  (While I'm not
 sure this was the original intent, it could be nice to have and should
 at least be considered before ditching it.)
 
 If someone changes something in the conf file, one needs to restart service
 which calls do_setup again and the changes made in shares.conf is taken
 effect.

 
 I'm not sure this is correct given the above.
 
 In looking further.. the ensure_shares_mounted ends up calling
 remotefsclient.mount() which does _Nothing_ if the share is already
 mounted, which is mostly the case. So even if someone changed something in
 the shares file (like added -o options) it won't take effect as the share
 is already mounted and the service already running.

 In fact today, if you restart the service, even then the changes in share
 won't take effect as the mount is not un-mounted, hence when the service is
 started next, the mount is existing and ensures_shares_mounted just returns
 w/o doing anything.

 The only adv of calling load_shares_config in ensure_shares_mounted is if
 someone changed the shares server IP while the service is running ... it
 loads the new share using the new server IP, which again is wrong since
 ideally the person should restart service for any shares.conf changes to
 take effect.

 
 This won't work anyway because of how we track provider_location in the
 database.  This particular case is planned to be addressed via this
 blueprint with reworks configuration:
 
 https://blueprints.launchpad.net/cinder/+spec/remotefs-share-cfg-improvements
 

I suppose I should also note that if the plans in this blueprint are
implemented the way I've had in mind, the main issue here about only
loading shares at startup time would be in place, so we may want to
consider these questions under that direction.

 Hence I feel calling load_shares_config in ensure_shares_mounted is
 incorrect and should be removed

 Thoughts ?

 thanx,
 deepak

 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Paul Czarkowski
I would rather see the devstack support be integrated in devstack's repo
itself.  There's a review outstanding for devstack[1] that adds this
support in and also adds in some pieces to make it easier to utilize
external nova drivers.

If that fails, then I'm all for merging in your review to the
nova-docker driver.  We're currently using nova-docker for Solum and
having the devstack support would help us out a lot.

There is some ugliness in the nova-docker review[2] that we could avoid
by merging the integration directly into devstack.
 
[1] https://review.openstack.org/#/c/84839/
[2] https://review.openstack.org/#/c/81097/3/contrib/devstack/stackrc.diff


On 4/11/14 9:53 AM, Russell Bryant rbry...@redhat.com wrote:

On 04/11/2014 10:11 AM, Derek Higgins wrote:
 Hi All,
 
 I've been taking a look at devstack support for nova-docker[1] which
 was recently taken out of nova. I stumbled across a similar effort
 currently underway[2], so have based my work off that.
 
 What I hope to do is set up a check job doing CI on devstack-f20 nodes[3];
 this will set up a devstack-based nova with the nova-docker driver and
 can then run whatever tests make sense (currently only a minimal test;
 Eric, I believe you were looking at tempest support, maybe it could be
 hooked in here?).
 
 I've taken this as far as I can locally to make sure it all works on a
 manually setup devstack-f20 node and it works, of course there may be
 slight differences in what nodepool sets up but we can't know that until
 we try :-)
 
 So I'd like to give this a go, I've left the various patches in WIP for
 the moment so people can take a look etc...
 https://review.openstack.org/86905
 https://review.openstack.org/86910
 
 Any thoughts?
 
 thanks,
 Derek.
 
 [1] http://git.openstack.org/cgit/stackforge/nova-docker
 [2] https://review.openstack.org/#/c/81097/
 [3] https://review.openstack.org/#/c/86842/

Hugely appreciated!

It'd be nice to have one other person ACK the devstack integration and
then I think we should merge it.

I know last cycle we were looking at 3rd party CI, but that was because
of timing and infra review availability.  I think a re-focus on doing it
in openstack's infra is the right approach.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-11 Thread Anne Gentle
I'd love to see consolidation as I've tried to keep up nova dev docs for
example, and with all the options it's tough to test and maintain one for
docs. Go for it.


On Fri, Apr 11, 2014 at 9:34 AM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 Hi,

 I've noticed a proliferation of Vagrant projects that are popping up, is
 there any interest from other authors in trying to consolidate?

 https://github.com/bcwaldon/vagrant_devstack

 https://github.com/sdague/devstack-vagrant

 http://openstack.prov12n.com/how-to-make-a-lot-of-devstack-with-vagrant/

 https://github.com/jogo/DevstackUp

 https://github.com/search?q=vagrant+devstackref=cmdform

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-11 Thread Greg Lucas
Sean Dague wrote:
 Maybe it would be good to get an ad-hoc IRC meeting together to figure
 out what the must have features are that inspired everyone to write
 these. If we can come up with a way to overlap those all sanely, moving
 to stackforge and doing this via gerrit would be something I'd be into.

This is a good idea, I've definitely stumbled across lots of GitHub
projects, blog posts, etc that overlap here.

Folks seem to have strong preferences for their provisioner, so it may
make sense to support several. We can put together a Vagrantfile that
allows you to choose a provisioner while maintaining a common machine
configuration (using --provision-with, or using env variables and loading
in additional rb files, etc).

~Greg





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Russell Bryant
On 04/11/2014 11:28 AM, Paul Czarkowski wrote:
 I would rather see the devstack support be integrated in devstack's repo
 itself.   There's a review
 Outstanding for devstack[1] that adds this support in and also adds in
 some pieces to make
 it easier to utilize external nova drivers.
 
 
 If that fails out then I'm all for merging in your review to the
 nova-docker driver.  We're
 Currently using nova-docker for Solum and having the devstack support
 would help us out a lot.
 
 There is some ugliness in the nova-docker review[2] that we could avoid by
 merging the integration
 directly into devstack.
  
 [1] https://review.openstack.org/#/c/84839/
 [2] https://review.openstack.org/#/c/81097/3/contrib/devstack/stackrc.diff

Agree that directly in devstack is easier, but I just don't see that
happening.  They've been pretty clear on only including support for
things in official projects.

See Sean's review: "I feel like this should be done entirely as an
extras.d file which is actually in the docker driver tree. If there are
call points which are needed in extras.d, we should add those."

So, I think we should proceed with the support in the nova-docker repo
for now.  That will unblock CI progress against nova-docker, which is
the #1 blocker for eventually re-proposing the driver to nova itself.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Meeting minutes

2014-04-11 Thread Dina Belova
Hello, folks!

Our Climate meeting minutes are here :)

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-11-15.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-11-15.00.txt
Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-11-15.00.log.html


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-11 Thread Fox, Kevin M
Maybe an Intel AMT driver too:
http://www.intel.com/content/www/us/en/architecture-and-technology/intel-active-management-technology.html

You could use desktop class machines with ironic then.

Kevin

From: Devananda van der Veen [devananda@gmail.com]
Sent: Wednesday, April 09, 2014 4:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][Agent]

On Wed, Apr 9, 2014 at 9:01 AM, Stig Telfer stel...@cray.com wrote:
 -Original Message-
 From: Matt Wagner [mailto:matt.wag...@redhat.com]
 Sent: Tuesday, April 08, 2014 6:46 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Ironic][Agent]

 On 08/04/14 14:04 +0400, Vladimir Kozhukalov wrote:
 snip
 0) There are a plenty of old hardware which does not have IPMI/ILO at all.
 How Ironic is supposed to power them off and on? Ssh? But Ironic is not
 supposed to interact with host OS.

 I'm more accustomed to using PDUs for this type of thing. I.e., a
 power strip you can ssh into or hit via a web API to toggle power to
 individual ports.

 Machines are configured to power up on power restore, plus PXE boot.
 You have less control than with IPMI -- all you can do is toggle power
 to the outlet -- but it works well, even for some desktop machines I
 have in a lab.

 I don't have a compelling need, but I've often wondered if such a
 driver would be useful. I can imagine it also being useful if people
 want to power up non-compute stuff, though that's probably not a top
 priority right now.

We have developed a driver that might be of interest.  Ironic uses it to 
control the PDUs in our lab cluster through SNMP.  It appears the leading 
brands of PDU implement SNMP interfaces, albeit through vendor-specific 
enterprise MIBs.  As a mechanism for control, I'd suggest that SNMP is going to 
be a better bet than an automaton hitting the ssh or web interfaces.

Currently our power driver is a point solution for our PDUs, but why not make 
it generalised?  We'd be happy to contribute it.

Best wishes
Stig Telfer
Cray Inc.


A PDU-based power driver has come up in discussions in the past several times, 
and I think it's well within Ironic's scope to support this. An iBoot driver 
was proposed, but bit rotted. I'd rather see a generic one, honestly.

FWIW, there already is an SSH-based power driver, which is primarily used in 
test environments (we mock real hardware with VMs to cut down the cost of 
developer testing), but this could probably be extended to support connecting 
to PDUs.

Best,
Devananda
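
As a concrete illustration of the SNMP approach discussed above, a minimal
sketch using the net-snmp snmpset CLI. The OID and on/off integer codes are
hypothetical, since real PDUs expose outlet control through vendor-specific
enterprise MIBs:

import subprocess

# Hypothetical enterprise OID -- a real driver would take the outlet
# control OID and on/off integer codes from the vendor's MIB.
OUTLET_CTL_OID = '.1.3.6.1.4.1.99999.1.1'

def set_outlet_power(host, community, outlet, turn_on):
    # Toggle one outlet by setting its control object via SNMP v2c.
    value = '1' if turn_on else '2'
    subprocess.check_call([
        'snmpset', '-v2c', '-c', community, host,
        '%s.%d' % (OUTLET_CTL_OID, outlet), 'i', value,
    ])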
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO fully uploaded to Debian Experimental

2014-04-11 Thread Nachi Ueno
Hi Thomas

Great!  Do we have a doc on how to use these packages?

2014-04-11 0:00 GMT-07:00 Thomas Goirand z...@debian.org:
 Hi,

 it's with a great joy that I can announce today, that TripleO is now
 fully in Debian [1]. It is currently only uploaded to Debian
 experimental, like for all Icehouse (I don't think I can upload to
 normal Sid until Icehouse is released).

 Feedback (and bug reports to the Debian BTS) would be more than welcome,
 as I had no time to test it myself! Of course, the plan is to have it
 updated again soon.

 Cheers,

 Thomas Goirand (zigo)

 [1]
 http://qa.debian.org/developer.php?login=openstack-de...@lists.alioth.debian.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] more oslo liaisons needed

2014-04-11 Thread Doug Hellmann
Thanks, Thomas! If you're on IRC, you can join us in #openstack-oslo.

On Fri, Apr 11, 2014 at 11:19 AM, Thomas Herve
thomas.he...@enovance.com wrote:
 I see that several projects have their Oslo liaisons lined up
 (https://wiki.openstack.org/wiki/Oslo/ProjectLiaisons). It would be
 great if we had at least one volunteer from each project before the
 summit, so we can get a head start on the coordination work.

 Hi Doug,

 I'd be happy to handle the role for Heat.

 Cheers,

 --
 Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Derek Higgins
On 11/04/14 16:28, Paul Czarkowski wrote:
 I would rather see the devstack support be integrated in devstack's repo
 itself.  There's a review outstanding for devstack[1] that adds this
 support in and also adds in some pieces to make it easier to utilize
 external nova drivers.

Great, I hadn't spotted the review to put it back into devstack, I'll
give it a go and review to help move it along. I see there is a -1
suggesting it should be in nova-docker, the devstack approach certainly
looks cleaner.

Regardless of which approach is taken, I should be able to rework the
check job to use it for nova-docker CI.

 
 
 If that fails, then I'm all for merging in your review to the
 nova-docker driver.  We're currently using nova-docker for Solum and
 having the devstack support would help us out a lot.
 
 There is some ugliness in the nova-docker review[2] that we could avoid
 by merging the integration directly into devstack.
Yup, agreed.

  
 [1] https://review.openstack.org/#/c/84839/
 [2] https://review.openstack.org/#/c/81097/3/contrib/devstack/stackrc.diff
 
 
 On 4/11/14 9:53 AM, Russell Bryant rbry...@redhat.com wrote:
 
 On 04/11/2014 10:11 AM, Derek Higgins wrote:
 Hi All,

 I've been taking a look at devstack support for nova-docker[1] which
 was recently taken out of nova. I stumbled across a similar effort
 currently underway[2], so have based my work off that.

 What I hope to do is set up a check job doing CI on devstack-f20 nodes[3];
 this will set up a devstack-based nova with the nova-docker driver and
 can then run whatever tests make sense (currently only a minimal test;
 Eric, I believe you were looking at tempest support, maybe it could be
 hooked in here?).

 I've taken this as far as I can locally to make sure it all works on a
 manually setup devstack-f20 node and it works, of course there may be
 slight differences in what nodepool sets up but we can't know that until
 we try :-)

 So I'd like to give this a go, I've left the various patches in WIP for
 the moment so people can take a look etc...
 https://review.openstack.org/86905
 https://review.openstack.org/86910

 Any thoughts?

 thanks,
 Derek.

 [1] http://git.openstack.org/cgit/stackforge/nova-docker
 [2] https://review.openstack.org/#/c/81097/
 [3] https://review.openstack.org/#/c/86842/

 Hugely appreciated!

 It'd be nice to have one other person ACK the devstack integration and
 then I think we should merge it.

 I know last cycle we were looking at 3rd party CI, but that was because
 of timing and infra review availability.  I think a re-focus on doing it
 in openstack's infra is the right approach.

 -- 
 Russell Bryant

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Meeting minutes

2014-04-11 Thread Sylvain Bauza
Hi Dina et al.

I'm sorry, I was facing a particularly heavy workload this week due to
various concerns, and so was unable to attend the meeting.
As for the question of my participation, please all be sure I'll still
dedicate some of my time to Climate, including some BPs and bugs, so yes I
will handle the new resource framework proposal.

Starting next week, I'll also propose an events lifecycle service for
Climate, so that we can properly handle lease statuses (started, in
progress, done, error) based on the events.

-Sylvain (sorry again about my dotted participation, I understand this is
frustrating for both of us :-) )


2014-04-11 17:59 GMT+02:00 Dina Belova dbel...@mirantis.com:

 Hello, folks!

 Our Climate meeting minutes are here :)

 Minutes:
 http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-11-15.00.html
 Minutes (text):
 http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-11-15.00.txt
 Log:
 http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-11-15.00.log.html


 Best regards,

 Dina Belova

 Software Engineer

 Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [taskflow] Mistral TaskFlow integration summary

2014-04-11 Thread Joshua Harlow
Thanks for the write-up, Kirill.

Also some adjustments,

Both points are good, and I'm putting some of this on
https://etherpad.openstack.org/p/taskflow-mistral-details so that we can have
it actively noted (feel free to adjust it).

I think Ivan is working on some docs/code/… for the lazy engine idea, so
hopefully we can get back soon with that. Let's see what comes out of that
effort and iterate on it.

For (2), you are mostly correct about unconditional execution, although [1]
does now change this, and there are a few active reviews being worked on [3]
to fit this mistral use-case better. I believe [2] can help move in this
direction, and Ivan's ideas I think will push it a little farther too. Of
course let's work together to make sure they fit the best, so that taskflow
& mistral & openstack can be the best they can be (pigeons not included).

Can we also make sure the small issues are noted somewhere (maybe in the above 
etherpad??). Thanks!

[1] https://wiki.openstack.org/wiki/TaskFlow#Retries
[2] https://review.openstack.org/#/c/86470
[3] https://review.openstack.org/#/q/status:open+project:openstack/taskflow,n,z

From: Kirill Izotov enyk...@stackstorm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, April 10, 2014 at 9:20 PM
To: OpenStack-dev@lists.openstack.org
Subject: [openstack-dev] [mistral] [taskflow] Mistral TaskFlow integration 
summary

Hi everyone,

This is a summary of the prototype integration we did not too long ago:
http://github.com/enykeev/mistral/pull/1. Hopefully it sheds some light on the
aspects of the integration we are struggling with.

There is a possibility to build Mistral on top of TaskFlow as a library, but in 
order to meet the requirements dictated by Mistral users and use cases, both 
Mistral and TaskFlow should change.

There are two main sides of the story. One is engine. The other is flow control 
capabilities.

1) THE ENGINE
The current TaskFlow implementation of the engine doesn't fit Mistral's needs
because it is synchronous, it blocks the thread, it requires us to store the
reference to the particular engine to be able to get its status and suspend
the execution, and it lacks long-running task compatibility. To fix this
problem in a solid and maintainable way, we need to split the engine into its
synchronous and asynchronous counterparts.

Lazy engine should be async and atomic, it should not have its own state, 
instead it should rely on some kind of global state (db or in-memory, depending 
on a type of application). It should have at least two methods: run and 
task_complete. Run method should calculate the first batch of tasks and 
schedule them for executing (either put them in queue or spawn the threads). 
Task_complete should mark a certain task to be completed and then schedule the 
next batch of tasks that became available due to resolution of this one.

The desired use of lazy engine in Mistral is illustrated here: 
https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign#Big_Picture. 
It should support long running tasks and survive engine process restart without 
losing the state of the running actions. So it must be passive (lazy) and 
persistent.

On Mistral side we are using Lazy engine by patching async.run directly to the 
API (or engine queue) and async.task_complete to the worker queue result 
channel (and the API for long running tasks). We are still sharing the same 
graph_analyzer, but instead of relying on loop and Futures, we are handling the 
execution ourselves in a scalable and robust way.

Then, on top of it you can build a sync engine by introducing Futures. You are 
using async.run() to schedule tasks by transforming them to Futures and then 
starting a loop, checking Futures for completion and sending their results to 
async.task_complete() which would produce even more Futures to check over. Just 
the same way TaskFlow does it right now.

The reason I'm proposing to extract Futures from async engine is because they 
won't work if we have multiple engine processes that should handle the task 
results concurrently (and without that there will be no scalability).
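
To make the shape of that interface concrete, a minimal illustrative sketch
(not TaskFlow or Mistral code; the storage and queue objects are assumed
abstractions over a database and a worker queue):

class LazyEngine(object):
    def __init__(self, storage, queue):
        self.storage = storage  # persistent state (e.g. a database)
        self.queue = queue      # task queue feeding the workers

    def run(self, workflow_id):
        # Schedule the first batch of ready tasks and return
        # immediately -- no loop, no thread blocked on Futures.
        for task in self.storage.ready_tasks(workflow_id):
            self.queue.put(task)

    def task_complete(self, workflow_id, task_id, result):
        # Persist the result, then schedule whatever tasks became
        # runnable because of it.  Safe to call from any engine
        # process, since all state lives in storage.
        self.storage.save_result(workflow_id, task_id, result)
        for task in self.storage.ready_tasks(workflow_id):
            self.queue.put(task)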

2) THE FLOW CONTROL CAPABILITIES

Since we treat TaskFlow as a library we expect them to provide us with a number 
of primitives to build our workflow with them. Most important of them to us for 
the moment are Direct Transitions, and Conditional Transitions.

The current implementation of flow transitions in TaskFlow is built on top of 
data flow dependencies where each task provides some data to the flow and 
requires some data to be present prior being executed. In other words, you are 
starting to build your flow tree from the last task through the first one by 
adding their requirements to the 

Re: [openstack-dev] [swift] Enterprise Deployment

2014-04-11 Thread Ben Nemec
This sounds like a question for either the users list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


or the operators list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Thanks.

-Ben

On 04/10/2014 09:07 PM, Sumit Gaur wrote:

Hi
I understand that Swift needs a lot of configuration and manual steps.
I need to know if somebody knows about any existing open-source
tool/process that can help in deploying a multinode enterprise deployment
of OpenStack Swift. I tried Chef and some existing cookbooks, but using
them involves a steep learning curve for Ruby and Chef; also, I understand
that devstack does not provide enterprise-level scripts.

Thanks
sumit


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-11 Thread Henrique Truta
Is there anyone currently working on Neat/Gantt projects? I'd like to
contribute to them, as well.


2014-04-11 11:37 GMT-03:00 Andrew Laski andrew.la...@rackspace.com:

 On 04/10/14 at 11:33pm, Oleg Gelbukh wrote:

 Andrew,

 Thank you for clarification!


 On Thu, Apr 10, 2014 at 3:47 PM, Andrew Laski andrew.la...@rackspace.com
 wrote:



 The scheduler as it currently exists is a placement engine.  There is
 sufficient complexity in the scheduler with just that responsibility so I
 would prefer to see anything that's making runtime decisions separated
 out.
  Perhaps it could just be another service within the scheduler project
 once
 it's broken out, but I think it will be beneficial to have a clear
 distinction between placement decisions and runtime monitoring.



 Do you think that auto-scaling could be considered another facet of this
 'runtime monitoring' functionality? Now it is a combination of Heat and
 Ceilometer. Does it worth moving to hypothetical runtime mobility service
 as well?


 Auto-scaling is certainly a facet of runtime monitoring.  But auto-scaling
 performs actions based on a set of user defined rules and is very visible
 while the enhancements proposed below are intended to benefit deployers and
 be very invisible to users.  So the set of allowable actions is very
 constrained compared to what auto-scaling can do.
 In my opinion what's being proposed doesn't seem to fit cleanly into any
 existing service, so perhaps it could start as a standalone entity.  Then
 once there's something that can be used and demoed a proper place might
 suggest itself, or it might make sense to keep it separate.




 --
 Best regards,
 Oleg Gelbukh





  --
 Best regards,
 Oleg Gelbukh


 On Wed, Apr 9, 2014 at 7:47 PM, Jay Lau jay.lau@gmail.com wrote:

  @Oleg, Till now, I'm not sure the target of Gantt, is it for initial

 placement policy or run time policy or both, can you help clarify?

 @Henrique, not sure if you know IBM PRS (Platform Resource Scheduler)
 [1],
 we have finished the dynamic scheduler in our Icehouse version (PRS
 2.2),
 it has exactly the same feature as you described, we are planning a
 live
 demo for this feature in Atlanta Summit. I'm also writing some document
 for
 run time policy which will cover more run time policies for OpenStack,
 but
 not finished yet. (My shame for the slow progress). The related
 blueprint
 is [2], you can also get some discussion from [3]

 [1]
 http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=
 ANsubtype=CAhtmlfid=897/ENUS213-590appname=USN
 [2]
 https://blueprints.launchpad.net/nova/+spec/resource-
 optimization-service
 [3] http://markmail.org/~jaylau/OpenStack-DRS

 Thanks.


 2014-04-09 23:21 GMT+08:00 Oleg Gelbukh ogelb...@mirantis.com:

 Henrique,


 You should check out Gantt project [1], it could be exactly the place
 to
 implement such features. It is a generic cross-project Scheduler as a
 Service forked from Nova recently.

 [1] https://github.com/openstack/gantt

 --
 Best regards,
 Oleg Gelbukh
 Mirantis Labs


 On Wed, Apr 9, 2014 at 6:41 PM, Henrique Truta 
 henriquecostatr...@gmail.com wrote:

  Hello, everyone!


 I am currently a graduate student and member of a group of
 contributors
 to OpenStack. We believe that a dynamic scheduler could improve the
 efficiency of an OpenStack cloud, either by rebalancing nodes to
 maximize
 performance or to minimize the number of active hosts, in order to
 minimize
 energy costs. Therefore, we would like to propose a dynamic
 scheduling
 mechanism to Nova. The main idea is using the Ceilometer information
 (e.g.
 RAM, CPU, disk usage) through the ceilometer-client and dynamically
 decide
 whether an instance should be live migrated.

 This might be done as a Nova periodic task, which will be executed
 every
 once in a given period or as a new independent project. In both
 cases,
 the
 current Nova scheduler will not be affected, since this new scheduler
 will
 be pluggable. We have done a search and found no such initiative in
 the
 OpenStack BPs. Outside the community, we found only a recent IBM
 announcement for a similar feature in one of its cloud products.

 A possible flow is: In the new scheduler, we periodically make a call
 to
 Nova, get the instance list from a specific host and, for each
 instance, we
 make a call to the ceilometer-client (e.g. $ ceilometer statistics -m
 cpu_util -q resource=$INSTANCE_ID) and then, according to some
 specific
 parameters configured by the user, analyze the meters and do the
 proper
 migrations.

 Do you have any comments or suggestions?

 --
 Ítalo Henrique Costa Truta



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Dynamic scheduling

2014-04-11 Thread Tim Bell


 -Original Message-
 From: Andrew Laski [mailto:andrew.la...@rackspace.com]
 Sent: 11 April 2014 16:38
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] Dynamic scheduling
 
 On 04/10/14 at 11:33pm, Oleg Gelbukh wrote:
 Andrew,
 
...
 
 In my opinion what's being proposed doesn't seem to fit cleanly into any 
 existing service, so perhaps it could start as a standalone
 entity.
 Then once there's something that can be used and demoed a proper place might 
 suggest itself, or it might make sense to keep it
 separate.
 

I strongly support leaving auto-scaling out. Heat looks after this, and it is
a user-facing activity since it needs to know what to do when a new VM is
created and how to set it up.

A dynamic scheduling 'service' would work at an operator infrastructure layer,
performing VM relocation according to the service provider's needs (a balance
between optimisation, thrashing and acceptable downtime). It should operate
within the SLA expectations of the VM.

The dynamic scheduling is 'OpenStack Tetris', trying to ensure a consistent 
packing policy of VMs on resources based on the policy for the service class.

Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral][TaskFlow] Mistral-TaskFlow Summary

2014-04-11 Thread Dmitri Zimine
We prototyped Mistral / TaskFlow integration and had follow-up discussions.

SUMMARY: Mistral (Workflow Service) can embed TaskFlow as a workflow library,
with some required modifications to function resiliently as a service, and for
smooth integration. However, the TaskFlow flow controls are insufficient for
Mistral use cases.

Details are discussed in other threads.
The prototype scope - [0]; code and discussion - [1] and technical highlights -
[2].

DETAILS: 

1) Embedding TaskFlow inside Mistral:
* Required: make the engine lazy [3], [4]. This is required to support
long-running delegates and not lose tasks when the engine manager process restarts.

* Persistence: need clarity how to replace or mix-in TaskFlow persistence with 
Mistral persistence. Renat is taking a look.

* Declaring Flows in YAML DSL: done for simplest flow. Need to prototype for 
data flow. Rich flow controls are missing in TaskFlow for a representative 
prototype.

* ActionRunners vs Taskflow Workers - not prototyped. Not a risk: both Mistral 
and TaskFlow implementations work. But we shall resolve the overlap. 

* Ignored for now - unlikely any risks: Keystone integration, Mistral event 
scheduler, Mistral declarative services and action definition.

2) TaskFlow library features
* Must: flow control - conditional transitions, references, expression
evaluation, to express real-life workflows [5]. The required flow control
primitives are 1) repeater 2) flow in flow 3) direct transition 4) conditional
transition 5) multiple data. TaskFlow has 1) and 2); we need to add 3/4/5
(a small sketch of 3 and 4 follows this list).

* Other details and smaller requests are in the discussion [1]
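
As a plain-Python illustration (not TaskFlow API) of the direct and
conditional transition primitives listed above; task names and conditions
are made up for the example:

def next_tasks(transitions, current, context):
    # transitions: {task: [(condition, next_task), ...]} where a
    # condition of None means a direct (unconditional) transition.
    targets = []
    for condition, target in transitions.get(current, []):
        if condition is None or condition(context):
            targets.append(target)
    return targets

transitions = {
    'create_vm': [
        (lambda ctx: ctx['status'] == 'ACTIVE', 'notify_ok'),
        (lambda ctx: ctx['status'] == 'ERROR', 'retry'),
    ],
    'notify_ok': [(None, 'done')],  # direct transition
}
assert next_tasks(transitions, 'create_vm', {'status': 'ERROR'}) == ['retry']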

3) Next Steps proposed:
* Mistral team: summarize the requirements discussed and agreed on [2] and [3]
* Mistral team: code sample (tests?) on how Mistral would like to consume 
TaskFlow lazy engine 
* Taskflow team: Provide a design for alternative TaskExecutor approach 
(prototypes, boxes, arrows, crayons :)) 
* Decide on lazy engine
* Move the discussion on other elements on integration.

References:
[0] The scope of the prototype: 
https://etherpad.openstack.org/p/mistral-taskflow-prototype
[1] Prototype code and discussion https://github.com/enykeev/mistral/pull/1
[2] Techical summary 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/032461.html
[3] Email discussion on TaskFlow lazy engine 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031134.html
[4] IRC discussion Mistral/Taskflow http://paste.openstack.org/show/75389/
[5] Use cases https://github.com/dzimine/mistral-workflows/tree/add-usecases
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][TaskFlow] Mistral-TaskFlow Summary

2014-04-11 Thread Joshua Harlow
I'm confused, why is this 2 emails??

http://lists.openstack.org/pipermail/openstack-dev/2014-April/032461.html

Seems better to just have 1 chain, not 2.

From: Dmitri Zimine d...@stackstorm.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday, April 11, 2014 at 9:55 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Mistral][TaskFlow] Mistral-TaskFlow Summary

[snip - quoted message identical to the summary above]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiprovider API documentation

2014-04-11 Thread Robert Kukura


On 4/10/14, 6:35 AM, Salvatore Orlando wrote:
The bug for documenting the 'multi-provider' API extension is still 
open [1].
The bug report has a good deal of information, but perhaps it might be 
worth also documenting how ML2 uses the segment information, as this 
might be useful to understand when one should use the 'provider' 
extension and when instead the 'multi-provider' would be a better fit.


Unfortunately I do not understand enough how ML2 handles multi-segment 
networks, so I hope somebody from the ML2 team can chime in.
Here's a quick description of ML2 port binding, including how 
multi-segment networks are handled:


   Port binding is how the ML2 plugin determines the mechanism driver
   that handles the port, the network segment to which the port is
   attached, and the values of the binding:vif_type and
   binding:vif_details port attributes. Its inputs are the
   binding:host_id and binding:profile port attributes, as well as the
   segments of the port's network. When port binding is triggered, each
   registered mechanism driver's bind_port() function is called, in the
   order specified in the mechanism_drivers config variable, until one
   succeeds in binding, or all have been tried. If none succeed, the
   binding:vif_type attribute is set to 'binding_failed'. In
   bind_port(), each mechanism driver checks if it can bind the port on
   the binding:host_id host, using any of the network's segments,
   honoring any requirements it understands in binding:profile. If it
   can bind the port, the mechanism driver calls
   PortContext.set_binding() from within bind_port(), passing the
   chosen segment's ID, the values for binding:vif_type and
   binding:vif_details, and optionally, the port's status. A common
   base class for mechanism drivers supporting L2 agents implements
   bind_port() by iterating over the segments and calling a
   try_to_bind_segment_for_agent() function that decides whether the
   port can be bound based on the agents_db info periodically reported
   via RPC by that specific L2 agent. For network segment types of
   'flat' and 'vlan', the try_to_bind_segment_for_agent() function
   checks whether the L2 agent on the host has a mapping from the
   segment's physical_network value to a bridge or interface. For
   tunnel network segment types, try_to_bind_segment_for_agent() checks
   whether the L2 agent has that tunnel type enabled.


Note that, although ML2 can manage binding to multi-segment networks, 
neutron does not manage bridging between the segments of a multi-segment 
network. This is assumed to be done administratively.
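
To make the algorithm above concrete, here is a hedged sketch mirroring the
L2-agent base-class behaviour just described; treat it as pseudocode made
runnable rather than the actual neutron.plugins.ml2.driver_api interface
(the exact PortContext attribute names vary by release):

VIF_TYPE_OVS = 'ovs'


class SketchMechanismDriver(object):
    """Illustrative only; a real driver subclasses ML2's MechanismDriver."""

    def __init__(self, agents_db):
        # host -> {'physnets': [...], 'tunnel_types': [...]}, as reported
        # periodically over RPC by the L2 agents.
        self.agents_db = agents_db

    def bind_port(self, context):
        agent = self.agents_db.get(context.host)
        if not agent:
            return          # no binding made; ML2 tries the next driver
        for segment in context.segments:
            if self._can_bind_segment(agent, segment):
                context.set_binding(segment['id'], VIF_TYPE_OVS,
                                    {'port_filter': True})
                return

    def _can_bind_segment(self, agent, segment):
        if segment['network_type'] in ('flat', 'vlan'):
            # the agent must map this physical_network to a bridge/interface
            return segment['physical_network'] in agent['physnets']
        # tunnel segment types: the agent must have that tunnel type enabled
        return segment['network_type'] in agent['tunnel_types']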


Finally, at least in ML2, the providernet and multiprovidernet 
extensions are two different APIs to supply/view the same underlying 
information. The older providernet extension can only deal with 
single-segment networks, but is easier to use. The newer 
multiprovidernet extension handles multi-segment networks and 
potentially supports an extensible set of segment properties, but is 
more cumbersome to use, at least from the CLI. Either extension can be 
used to create single-segment networks with ML2. Currently, ML2 network 
operations return only the providernet attributes 
(provider:network_type, provider:physical_network, and 
provider:segmentation_id) for single-segment networks, and only the 
multiprovidernet attribute (segments) for multi-segment networks. It 
could be argued that all attributes should be returned from all 
operations, with a provider:network_type value of 'multi-segment' 
returned when the network has multiple segments. A blueprint in the 
works for Juno that lets each ML2 type driver define whatever segment 
properties make sense for that type may lead to eventual deprecation of 
the providernet extension.
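
As a concrete illustration of the two representations (attribute names are 
the ones listed above; the values are invented), the network bodies would 
look roughly like:

single_segment = {
    'name': 'net1',
    'provider:network_type': 'vlan',
    'provider:physical_network': 'physnet1',
    'provider:segmentation_id': 101,
}

multi_segment = {
    'name': 'net2',
    'segments': [
        {'provider:network_type': 'vlan',
         'provider:physical_network': 'physnet1',
         'provider:segmentation_id': 101},
        {'provider:network_type': 'gre',
         'provider:physical_network': None,
         'provider:segmentation_id': 32001},
    ],
}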


Hope this helps,

-Bob



Salvatore


[1] https://bugs.launchpad.net/openstack-api-site/+bug/1242019


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-11 Thread Sean Dague
On 04/11/2014 11:38 AM, Greg Lucas wrote:
 Sean Dague wrote:
 Maybe it would be good to get an ad-hoc IRC meeting together to figure
 out what the must have features are that inspired everyone to write
 these. If we can come up with a way to overlap those all sanely, moving
 to stackforge and doing this via gerrit would be something I'd be into.
 
 This is a good idea, I've definitely stumbled across lots of GitHub
 projects, blog posts, etc that overlap here.
 
 Folks seem to have a strong preference for a provisioner, so it may make
 sense to support several. We can put together a Vagrantfile that allows
 you to choose a provisioner while maintaining a common machine
 configuration (using --provision-with or using env variables and loading
 in additional rb files, etc).

Honestly, multi-provisioner support is something I think shouldn't be
done. That's realistically where I become uninterested in spending
effort here. Puppet is needed if we want to be able to replicate
devstack-gate locally (which is one of the reasons I started writing this).

Being opinionated is good when it comes to providing tools to make
things easy to onboard people. The provisioner in infra is puppet.
Learning puppet lets you contribute to the rest of the openstack infra,
and I expect to consume some piece of that in this process. I get that this
leaves other efforts out in the cold, but I don't think the tradeoff in the
other direction is worth it.

The place I think pluggability makes sense is in virt backends. I'd
honestly love to be able to do nested kvm for performance reasons, or an
openstack cloud for dogfooding reasons.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Sean Dague
On 04/11/2014 11:50 AM, Derek Higgins wrote:
 On 11/04/14 16:28, Paul Czarkowski wrote:
 I would rather see the devstack support be integrated in devstack's repo
 itself. There's a review outstanding for devstack[1] that adds this support
 in and also adds in some pieces to make it easier to utilize external nova
 drivers.
 
 Great, I hadn't spotted the review to put it back into devstack, I'll
 give it a go and review to help move it along. I see there is a -1
 suggesting it should be in nova-docker, the devstack approach certainly
 looks cleaner.
 
 regardless of which approach is taken I should be able to work the check
 job to use it for nova-docker CI.

Right, I remain -1 on adding this kind of function into devstack for
external repos. Devstack is an opinionated development setup tool for
OpenStack. If it's not in tree for an OpenStack project, then it's not
OpenStack (we already have way too much to support in tree).

So if and when docker comes back in the nova tree, we can integrate it
back to devstack. Until then, the extras.d approach is the right one.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra] Consolidating efforts around Fedora/Centos gate job

2014-04-11 Thread James E. Blair
Ian Wienand iwien...@redhat.com writes:

 Then we have the question of the nodepool setup scripts working on
 F20.  I just tested the setup scripts from [3] and it all seems to
 work on a fresh f20 cloud image.  I think this is due to kchamart,
 peila2 and others who've fixed parts of this before.

 So, is there:

  1) anything blocking having f20 in the nodepool?
  2) anything blocking a simple, non-voting job to test devstack
 changes on f20?

I'm not aware of blockers; I think this is great, so thanks to all the
people who have worked to make it happen!

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Paul Czarkowski
Based on this feedback I have removed the docker portions of the review to
devstack, but have kept the changes that make it easier for nova drivers to
add their own files to the nova rootwrap.d directory.

On 4/11/14 10:47 AM, Russell Bryant rbry...@redhat.com wrote:

On 04/11/2014 11:28 AM, Paul Czarkowski wrote:
 I would rather see the devstack support be integrated in devstack's repo
 itself. There's a review outstanding for devstack[1] that adds this support
 in and also adds in some pieces to make it easier to utilize external nova
 drivers.
 
 
 If that fails out then I'm all for merging in your review to the
 nova-docker driver. We're currently using nova-docker for Solum and having
 the devstack support would help us out a lot.
 
 There is some ugliness in the nova-docker review[2] that we could avoid by
 merging the integration directly into devstack.
  
 [1] https://review.openstack.org/#/c/84839/
 [2] 
https://review.openstack.org/#/c/81097/3/contrib/devstack/stackrc.diff

Agree that directly in devstack is easier, but I just don't see that
happening.  They've been pretty clear on only including support for
things in official projects.

See Sean's review comment: "I feel like this should be done entirely as an
extras.d file" which is actually in the docker driver tree. "If there are
call points which are needed in extras.d, we should add those."

So, I think we should proceed with the support in the nova-docker repo
for now.  That will unblock CI progress against nova-docker, which is
the #1 blocker for eventually re-proposing the driver to nova itself.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] tripleo-heat-templates migration to software-config: please rebase on top

2014-04-11 Thread Clint Byrum
Another massive change just landed in front of 81666.

https://review.openstack.org/#/c/84197/

So let me make my statement below a bit more clear:

Please do not land changes until this is fixed and merged. Every time
I have to merge in new changes it costs a few hours to a whole day of
work. If we keep landing changes in front of it, we will never get this
refactor done, which stands in the way of HA, rolling updates, and the
deprecation of merge.py.

Thanks!

Excerpts from Clint Byrum's message of 2014-04-10 13:39:35 -0700:
 Hi! For the last 2 weeks I've been trying to prepare and land a change
 in tripleo-heat-templates to migrate to using OS::Heat::StructuredConfig
 and OS::Heat::StructuredDeployment so we can start deprecating features
 of merge.py.
 
 However, this is quite a large refactoring, and it will be complicated
 to do in pieces, so there is just one patch:
 
 https://review.openstack.org/#/c/81666/
 
 Because this is such a large refactoring, merging new changes in is
 extremely time consuming and tedious.
 
 So, I am asking that if you plan on editing any of the yaml files in
 tripleo-heat-templates, that you rebase on top of that change, and
 please help test the migration.
 
 Hopefully we can get this working and merged soon. Thank you!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-11 Thread Jorge Miramontes
Hi Kevin,

We are trying to prioritize features based on actual data utilization. If you 
have some, by all means please add it to 
https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc#gid=0.
One reason we are focusing on HTTP(S) and not FTP is that only 0.27% of our lb 
instances leverage the FTP protocol. That being said, we are only one cloud 
provider so if you have an interesting use case please add it to the links that 
Sam added. Once it is in the docs then it will be easier for everyone to be 
aware of it and thus make for a more spirited discussion.

Cheers,
--Jorge

From: Fox, Kevin M kevin@pnnl.gov
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, April 9, 2014 7:21 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Eugene Nikanorov (enikano...@mirantis.com) enikano...@mirantis.com
Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web 
ui screen captures

I'm not seeing anything here about non http(s) related Load balancing.  We're 
interested in load balancing ssh, ftp, and other services too.

Thanks,
Kevin

From: Samuel Bercovici [samu...@radware.com]
Sent: Sunday, April 06, 2014 5:51 AM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org); 
Eugene Nikanorov (enikano...@mirantis.com)
Subject: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui 
screen captures

Per the last LBaaS meeting.


1.   Please find a list of use cases.
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing


a)  Please review and see if you have additional ones for the project-user

b)  We can then choose 2-3 use cases to play around with how the CLI, API, 
etc. would look


2.   Please find a document to place screen captures of web UI. I took the 
liberty to place a few links showing ELB.
https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uHerSq3pLQA/edit?usp=sharing


Regards,
-Sam.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vagrant Devstack projects - time to consolidate?

2014-04-11 Thread Collins, Sean
On Fri, Apr 11, 2014 at 02:24:10PM EDT, Sean Dague wrote:
 Honestly, multi provisioner support is something I think shouldn't be
 done. 

Agreed - in addition let's keep this in perspective - we're just adding
a little bit of glue to prep the VM for the running of DevStack, which
does the heavy lifting.

 Being opinionated is good when it comes to providing tools to make
 things easy to onboard people. The provisioner in infra is puppet.
 Learning puppet lets you contribute to the rest of the openstack infra,
 and I expect to consume some piece of that in this process. 

+1000 on this - if there's already a provisioner that is used for infra,
let's re-use some of that knowledge and expertise.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Eric Windisch


 What I hope to do is setup a check doing CI on devstack-f20 nodes[3],
 this will setup a devstack based nova with the nova-docker driver and
 can then run what ever tests make sense (currently only a minimal test,
 Eric I believe you were looking at tempest support maybe it could be
 hooked in here?).


I'm not sure how far you've gotten, but my approach had been not to use
devstack-gate, but to build upon dockenstack (
https://github.com/ewindisch/dockenstack) to hasten the tests.

Advantages to this over devstack-gate are that:
1) It is usable for developers as an alternative to devstack-vagrant so it
may be the same environment for developing as for CI.
2) All network-dependent resources are downloaded into the image -
completely eliminating the need for mirrors/caching infrastructure.
3) Most of the packages are installed and pre-configured inside the image
prior to running the tests such that there is little time spent
initializing the testing environment.

Disadvantages are:
1) It's currently tied to Ubuntu. It could be ported to Fedora, but hasn't
been.
2) Removal of apt/rpm or even pypi dependencies may allow for
false-positive testing results (if a dependency is removed from a
requirements.txt or devstack's package lists, it will still be installed
within the testing image); this is something that could easily be fixed
should it become essential.

If you're interested, I'd be willing to entertain adding Fedora support to
Dockenstack.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Russell Bryant
On 04/11/2014 04:29 PM, Eric Windisch wrote:
 
 What I hope to do is setup a check doing CI on devstack-f20 nodes[3],
 this will setup a devstack based nova with the nova-docker driver and
 can then run what ever tests make sense (currently only a minimal test,
 Eric I believe you were looking at tempest support maybe it could be
 hooked in here?).
 
 
 I'm not sure how far you've gotten, but my approach had been not to use
 devstack-gate, but to build upon dockenstack
 (https://github.com/ewindisch/dockenstack) to hasten the tests.
 
 Advantages to this over devstack-gate are that:
 1) It is usable for developers as an alternative to devstack-vagrant so
 it may be the same environment for developing as for CI.
 2) All network-dependent resources are downloaded into the image -
 completely eliminating the need for mirrors/caching infrastructure.
 3) Most of the packages are installed and pre-configured inside the
 image prior to running the tests such that there is little time spent
 initializing the testing environment.
 
 Disadvantages are:
 1) It's currently tied to Ubuntu. It could be ported to Fedora, but
 hasn't been.
 2) Removal of apt/rpm or even pypi dependencies may allow for
 false-positive testing results (if a dependency is removed from a
 requirements.txt or devstack's packages lists, it will still be
 installed within the testing image); This is something that could be
 easily fixed if should it be essential.
 
 If you're interested, I'd be willing to entertain adding Fedora support
 to Dockenstack.

I think part of the issue is how quickly we can get this working in
OpenStack infra.  devstack-gate and devstack are how most (all?)
functional test jobs work there today.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui screen captures

2014-04-11 Thread Adam Harwell
Most everything non-http(s) related can simply be load-balanced under the 
generic umbrella of the UDP or TCP protocol. MySQL often gets its own special 
protocol (Libra has MySQL/Galera), but of what you listed, SSH is the only 
real special case I can think of, wherein something more specialized like 
Ballast* would be useful.

* http://code.nasa.gov/project/balancing-load-across-systems-ballast/

--Adam

From: Fox, Kevin M kevin@pnnl.gov
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, April 9, 2014 7:21 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Eugene Nikanorov (enikano...@mirantis.com) enikano...@mirantis.com
Subject: Re: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web 
ui screen captures

I'm not seeing anything here about non http(s) related Load balancing.  We're 
interested in load balancing ssh, ftp, and other services too.

Thanks,
Kevin

From: Samuel Bercovici [samu...@radware.com]
Sent: Sunday, April 06, 2014 5:51 AM
To: OpenStack Development Mailing List (openstack-dev@lists.openstack.org); 
Eugene Nikanorov (enikano...@mirantis.com)
Subject: [openstack-dev] [Neutron][LBaaS] Load balancing use cases and web ui 
screen captures

Per the last LBaaS meeting.


1.   Please find a list of use cases.
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?usp=sharing


a)  Please review and see if you have additional ones for the project-user

b)  We can then choose 2-3 use cases to play around with how the CLI, API, 
etc. would look


2.   Please find a document to place screen captures of web UI. I took the 
liberty to place a few links showing ELB.
https://docs.google.com/document/d/10EOCTej5CvDfnusv_es0kFzv5SIYLNl0uHerSq3pLQA/edit?usp=sharing


Regards,
-Sam.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Sean Dague
On 04/11/2014 04:39 PM, Russell Bryant wrote:
 On 04/11/2014 04:29 PM, Eric Windisch wrote:

 What I hope to do is setup a check doing CI on devstack-f20 nodes[3],
 this will setup a devstack based nova with the nova-docker driver and
 can then run what ever tests make sense (currently only a minimal test,
 Eric I believe you were looking at tempest support maybe it could be
 hooked in here?).


 I'm not sure how far you've gotten, but my approach had been not to use
 devstack-gate, but to build upon dockenstack
 (https://github.com/ewindisch/dockenstack) to hasten the tests.

 Advantages to this over devstack-gate are that:
 1) It is usable for developers as an alternative to devstack-vagrant so
 it may be the same environment for developing as for CI.
 2) All network-dependent resources are downloaded into the image -
 completely eliminating the need for mirrors/caching infrastructure.
 3) Most of the packages are installed and pre-configured inside the
 image prior to running the tests such that there is little time spent
 initializing the testing environment.

 Disadvantages are:
 1) It's currently tied to Ubuntu. It could be ported to Fedora, but
 hasn't been.
 2) Removal of apt/rpm or even pypi dependencies may allow for
 false-positive testing results (if a dependency is removed from a
 requirements.txt or devstack's packages lists, it will still be
 installed within the testing image); This is something that could be
 easily fixed if should it be essential.

 If you're interested, I'd be willing to entertain adding Fedora support
 to Dockenstack.
 
 I think part of the issue is how quickly we can get this working in
 OpenStack infra.  devstack-gate and devstack are how most (all?)
 functional test jobs work there today.

Correct. If this is intended for infra, it has to use devstack-gate.
That has lots of levers that we need to set based on branches, how to do
the zuul ref calculations (needed for the speculative gating), how to do
branch overrides for stable and upgrade jobs, etc.

If we think it's staying in 3rd party, people are free to use whatever
they would like.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Russell Bryant
On 04/11/2014 04:58 PM, Sean Dague wrote:
 On 04/11/2014 04:39 PM, Russell Bryant wrote:
 On 04/11/2014 04:29 PM, Eric Windisch wrote:
 
 What I hope to do is setup a check doing CI on devstack-f20
 nodes[3], this will setup a devstack based nova with the
 nova-docker driver and can then run what ever tests make sense
 (currently only a minimal test, Eric I believe you were looking
 at tempest support maybe it could be hooked in here?).
 
 
 I'm not sure how far you've gotten, but my approach had been
 not to use devstack-gate, but to build upon dockenstack 
 (https://github.com/ewindisch/dockenstack) to hasten the
 tests.
 
 Advantages to this over devstack-gate are that: 1) It is usable
 for developers as an alternative to devstack-vagrant so it may
 be the same environment for developing as for CI. 2) All
 network-dependent resources are downloaded into the image - 
 completely eliminating the need for mirrors/caching
 infrastructure. 3) Most of the packages are installed and
 pre-configured inside the image prior to running the tests such
 that there is little time spent initializing the testing
 environment.
 
 Disadvantages are: 1) It's currently tied to Ubuntu. It could
 be ported to Fedora, but hasn't been. 2) Removal of apt/rpm or
 even pypi dependencies may allow for false-positive testing
 results (if a dependency is removed from a requirements.txt or
 devstack's packages lists, it will still be installed within
 the testing image); This is something that could be easily
 fixed if should it be essential.
 
 If you're interested, I'd be willing to entertain adding Fedora
 support to Dockenstack.
 
 I think part of the issue is how quickly we can get this working
 in OpenStack infra.  devstack-gate and devstack are how most
 (all?) functional test jobs work there today.
 
 Correct. If this is intended for infra, it has to use
 devstack-gate. That has lots of levers that we need to set based on
 branches, how to do the zuul ref calculations (needed for the
 speculative gating), how to do branch overrides for stable an
 upgrade jobs, etc.
 
 If we think it's staying in 3rd party, people are free to use
 whatever they would like.

I guess we should be clear on this point.

I *really* think the best way forward is to move back to trying to get
this working in openstack infra.  I really can't think of any reason
not to.

Any disagreements with that goal?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] oddness with sqlalchemy db().refresh(object)

2014-04-11 Thread Andrew Woodward
Recently in one of my changes [1] I was fighting with one of the unit
tests showing a failure for a test which should have been outside the
change's sphere of influence.

Traceback (most recent call last):
  File 
/home/andreww/.virtualenvs/fuel/local/lib/python2.7/site-packages/mock.py,
line 1190, in patched
return func(*args, **keywargs)
  File 
/home/andreww/git/fuel-web/nailgun/nailgun/test/integration/test_task_managers.py,
line 65, in test_deployment_task_managers
self.assertEquals(provision_task.weight, 0.4)
AssertionError: 1.0 != 0.4

After walking through it a number of times and finally playing with it, we
were able to find that the db().refresh(task_provision) call appeared
to be resetting the object. This caused the loss of the weight being
set to 0.4 (1.0 is the model default). db().commit(), db().flush(), and
making no db call at all each caused the test to pass again.

Does anyone have any input on why this would occur? The odd part
is that this test doesn't fail outside of the change set in [1].

[1] https://review.openstack.org/#/c/78406/8/nailgun/nailgun/task/manager.py
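
FWIW, a standalone sketch of one mechanism that would reproduce the symptom
(plain SQLAlchemy; whether nailgun's scoped session actually hits this path
is an assumption): Session.refresh() re-SELECTs the row and overwrites the
in-memory attribute values, so an unflushed change is silently lost, while
commit()/flush() write it out first. With autoflush on, the pending change
would normally be flushed before refresh()'s query, so a no-autoflush path
or a second session would be needed to see this:

from sqlalchemy import Column, Float, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Task(Base):
    __tablename__ = 'tasks'
    id = Column(Integer, primary_key=True)
    weight = Column(Float, default=1.0)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine, autoflush=False)()

task = Task()
session.add(task)
session.commit()        # row is in the DB with the default weight=1.0

task.weight = 0.4       # in-memory only, never flushed
session.refresh(task)   # re-SELECTs the row...
print(task.weight)      # ...prints 1.0: the 0.4 is gone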

-- 
Andrew
Mirantis
Ceph community

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TripleO fully uploaded to Debian Experimental

2014-04-11 Thread Chris Jones
Hi

On 11 April 2014 08:00, Thomas Goirand z...@debian.org wrote:

 it's with a great joy that I can announce today, that TripleO is now
 fully in Debian [1]. It is currently only uploaded to Debian


woo! Thanks very much Thomas :)

-- 
Cheers,

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Multiprovider API documentation

2014-04-11 Thread Salvatore Orlando
On 11 April 2014 19:11, Robert Kukura kuk...@noironetworks.com wrote:


 On 4/10/14, 6:35 AM, Salvatore Orlando wrote:

  The bug for documenting the 'multi-provider' API extension is still open
 [1].
  The bug report has a good deal of information, but perhaps it might be
 worth also documenting how ML2 uses the segment information, as this might
 be useful to understand when one should use the 'provider' extension and
 when instead the 'multi-provider' would be a better fit.

  Unfortunately I do not understand enough how ML2 handles multi-segment
 networks, so I hope somebody from the ML2 team can chime in.

 Here's a quick description of ML2 port binding, including how
 multi-segment networks are handled:

 Port binding is how the ML2 plugin determines the mechanism driver that
 handles the port, the network segment to which the port is attached, and
 the values of the binding:vif_type and binding:vif_details port attributes.
 Its inputs are the binding:host_id and binding:profile port attributes, as
 well as the segments of the port's network. When port binding is triggered,
 each registered mechanism driver’s bind_port() function is called, in the
 order specified in the mechanism_drivers config variable, until one
 succeeds in binding, or all have been tried. If none succeed, the
 binding:vif_type attribute is set to 'binding_failed'. In bind_port(), each
 mechanism driver checks if it can bind the port on the binding:host_id
 host, using any of the network’s segments, honoring any requirements it
 understands in binding:profile. If it can bind the port, the mechanism
 driver calls PortContext.set_binding() from within bind_port(), passing the
 chosen segment's ID, the values for binding:vif_type and
 binding:vif_details, and optionally, the port’s status. A common base class
 for mechanism drivers supporting L2 agents implements bind_port() by
 iterating over the segments and calling a try_to_bind_segment_for_agent()
 function that decides whether the port can be bound based on the agents_db
 info periodically reported via RPC by that specific L2 agent. For network
 segment types of 'flat' and 'vlan', the try_to_bind_segment_for_agent()
 function checks whether the L2 agent on the host has a mapping from the
 segment's physical_network value to a bridge or interface. For tunnel
 network segment types, try_to_bind_segment_for_agent() checks whether the
 L2 agent has that tunnel type enabled.


 Note that, although ML2 can manage binding to multi-segment networks,
 neutron does not manage bridging between the segments of a multi-segment
 network. This is assumed to be done administratively.


Thanks Bob. I think the above paragraph is the answer I was looking for.



 Finally, at least in ML2, the providernet and multiprovidernet extensions
 are two different APIs to supply/view the same underlying information. The
 older providernet extension can only deal with single-segment networks, but
 is easier to use. The newer multiprovidernet extension handles
 multi-segment networks and potentially supports an extensible set of
 segment properties, but is more cumbersome to use, at least from the CLI.
 Either extension can be used to create single-segment networks with ML2.
 Currently, ML2 network operations return only the providernet attributes
 (provider:network_type, provider:physical_network, and
 provider:segmentation_id) for single-segment networks, and only the
 multiprovidernet attribute (segments) for multi-segment networks. It could
 be argued that all attributes should be returned from all operations, with
 a provider:network_type value of 'multi-segment' returned when the network
 has multiple segments. A blueprint in the works for juno that lets each ML2
 type driver define whatever segment properties make sense for that type may
 lead to eventual deprecation of the providernet extension.

 Hope this helps,

 -Bob


  Salvatore


  [1] https://bugs.launchpad.net/openstack-api-site/+bug/1242019


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Michael Still
On Sat, Apr 12, 2014 at 7:37 AM, Russell Bryant rbry...@redhat.com wrote:
 On 04/11/2014 04:58 PM, Sean Dague wrote:

[snip]

 If we think it's staying in 3rd party, people are free to use
 whatever they would like.

 I guess we should be clear on this point.

 I *really* think the best way forward is to move back to trying to get
 this working in openstack infra.  I really can't think of any reason
 not to.

 Any disagreements with that goal?

Agreed, where it makes sense. In general we should be avoiding third
party CI unless we need to support something we can't in the gate -- a
proprietary virt driver, or weird network hardware for example. I
think we've now well and truly demonstrated that third party CI
implementations are hard to run well.

Docker doesn't meet either of those tests.

However, I can see third party CI being a stepping stone required by
the infra team to reduce their workload -- in other words that they'd
like to see things running consistently as a third party CI before
they move it into their world. However, I'll leave that call to the
infra team.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NOVA][Neutron][ML2][Tunnel] Error in nova-agent when launching VM in compute for Tunnel cases

2014-04-11 Thread Padmanabhan Krishnan
Hello,
I have two OpenStack nodes (a controller + compute, and a compute). VMs are 
getting launched fine on the node that also acts as the controller, but the 
VMs that are scheduled on the compute node seem to go to the error state. I 
am running Icehouse master and my ML2 type driver is GRE (VXLAN shows the 
same error). I used devstack for my installation. If I use non-tunnel mode 
and change it to VLAN, I don't see this error and VMs are launched fine on 
compute nodes as well.
My devstack configuration on controller/compute is:

Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=gre
ENABLE_TENANT_TUNNELS=True
TENANT_TUNNEL_RANGE=32000:33000

The error i see in compute nova screen is:

2014-04-11 09:38:57.042 ERROR nova.compute.manager [-] [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] An error occurred while refreshing the 
network cache.
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] Traceback (most recent call last):
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/nova/nova/compute/manager.py, line 4871, in 
_heal_instance_info_cache
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] self._get_instance_nw_info(context, 
instance, use_slave=True)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/nova/nova/compute/manager.py, line 1129, in _get_instance_nw_info
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] instance)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/nova/nova/network/api.py, line 48, in wrapper
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] res = f(self, context, *args, 
**kwargs)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 465, in 
get_instance_nw_info
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] port_ids)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 474, in 
_get_instance_nw_info
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] port_ids)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 1106, in 
_build_network_info_model
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] data = 
client.list_ports(**search_opts)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/python-neutronclient/neutronclient/v2_0/client.py, line 108, in 
with_params
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] ret = self.function(instance, *args, 
**kwargs)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/python-neutronclient/neutronclient/v2_0/client.py, line 310, in 
list_ports
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] **_params)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/python-neutronclient/neutronclient/v2_0/client.py, line 1302, in 
list
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] for r in self._pagination(collection, 
path, **params):
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/python-neutronclient/neutronclient/v2_0/client.py, line 1315, in 
_pagination
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] res = self.get(path, params=params)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/python-neutronclient/neutronclient/v2_0/client.py, line 1288, in 
get
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] headers=headers, params=params)
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516]   File 
/opt/stack/python-neutronclient/neutronclient/v2_0/client.py, line 1280, in 
retry_request
2014-04-11 09:38:57.042 TRACE nova.compute.manager [instance: 
b12490cc-a237-47e1-b2df-fe69d4a5e516] raise 

Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-11 Thread Thomas Goirand
On 04/11/2014 10:52 PM, Collins, Sean wrote:
 Many of those patches are stale - please join us in the subteam IRC
 meeting if you wish to coordinate development of IPv6 features, so that
 we can focus on updating them and getting them merged. At this point
 simply applying them to the Icehouse tree is not enough.

When and where is the next meeting?

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-11 Thread Martinx - ジェームズ
Hey Thomas!

That's an amazing list!   :-D

Okay, I'll drop by on IRC anytime soon to chat with you guys, thanks for the
invite!

About DHCPv6 support, yes, I agree with you, it can be postponed (in fact,
I don't think I'll ever use it). Radvd should be enough for me.

I think that we need to start with the simplest configuration possible,
which I think is this one:


8- Provider Networking - upstream SLAAC support:
https://blueprints.launchpad.net/neutron/+spec/ipv6-provider-nets-slaac


Please, Collins, can you confirm this for us: is the above blueprint the
easiest to achieve (or close to it)?! If not, which one is closest to being
ready for tests?!

Nevertheless, from what I'm seeing, this is, in fact, the simplest
blueprint / topology we can start testing, so I'm finishing a Quick
Guide for our tests (more commits to it coming - cleanup). It fits
perfectly into the proposed blueprint topology, which is an upstream router
(no L3, no gre/vxlan tunnels, no Floating IPs, no NAT even for IPv4)...


Here it is:

Ultimate OpenStack IceHouse Guide - IPv6-Friendly:
https://gist.github.com/tmartinx/9177697


My current plan is - if you guys can read this page (not all of it) for a
moment - at step 5.2.1, Creating the Flat Neutron Network, to do
basically this:

neutron subnet-create --ip-version 6 --tenant-id $ADMIN_TENANT_ID
sharednet1 2001:db8:1:1::/64 --dns_nameservers list=true
2001:4860:4860::8844 2001:4860:4860::

...into that simple flat network, right after applying the patch for
upstream SLAAC / ipv6-provider-nets-slaac at the IceHouse lab I have; it
is up and running now. You guys can easily replicate it with 3 KVM virtual
machines (1 gateway (dual-stacked) / 1 controller / 1 compute).
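
For reference, the same subnet-create through python-neutronclient might
look roughly like this (a hedged sketch: the credentials/endpoint and the
network UUID are placeholders, and dns_nameservers is trimmed to one entry):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

SHAREDNET1_ID = '00000000-0000-0000-0000-000000000000'  # uuid of sharednet1

body = {'subnet': {'network_id': SHAREDNET1_ID,
                   'ip_version': 6,
                   'cidr': '2001:db8:1:1::/64',
                   'dns_nameservers': ['2001:4860:4860::8844']}}
print(neutron.create_subnet(body))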

BTW, which extra options for subnet-create does the blueprint
ipv6-provider-nets-slaac cover, if any?

--ipv6_ra_mode XXX --ipv6_address_mode YYY ?

Here at my lab, my upstream SLAAC router already has radvd ready, IPv6
connectivity is okay, etc. I think I have everything ready to start testing
it.

Cheers!
Thiago


On 11 April 2014 03:26, Thomas Goirand z...@debian.org wrote:

 On 04/08/2014 03:10 AM, Martinx - ジェームズ wrote:
  Hi Thomas!
 
  It will be a honor for me to join Debian OpenStack packaging team! I'm
  in!! :-D
 
  Listen, that neutron-ipv6.patch I have, doesn't apply against
  neutron-2014.1.rc1, here is it:
 
  neutron-ipv6.patch: http://paste.openstack.org/show/74857/
 
  I generated it from the commands that Xuhan Peng told me to do, few
  posts back, which are:
 
  --
  git fetch https://review.openstack.org/openstack/neutron
  refs/changes/49/70649/15
  git format-patch -1 --stdout FETCH_HEAD  neutron-ipv6.patch
  --
 
  But, as Collins said, even if the patch applies successfully against
  neutron-2014.1.rc1 (or newer), it will not pass the tests, so, there is
  still a lot of work to do, to enable Neutron with IPv6 but, I think we
  can start working on this patches and start testing whatever is already
  there (related to IPv6).
 
  Best!
  Thiago

 Hi Thiago,

 It's my view that we'd better keep each patch separated, so that they
 can evolve over time, as they are accepted or fixed in
 review.openstack.org. On the Debian packaging I do, each and every patch
 has to comply with the DEP3 patch header specifications [1].
 Specifically, I do insist that the Origin: field is set with the
 correct gerrit review URL, so that we can easily find out which patch
 comes from where. The Last-Update field is also important, so we know
 which version of the patch is in.

 Also, at eNovance, we are currently in the process of selecting which
 patch should get in, and which patch shouldn't. Currently, we are
 tracking the below patches:

 1. Support IPv6 SLAAC mode in dnsmasq
 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-slaac
Patchset: Add support to DHCP agent for BP ipv6-two-attributes:
 https://review.openstack.org/#/c/70649/

 2. Bind dnsmasq in qrouter- namespace.

 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-bind-into-qrouter-namespace
Patchset: Add support to DHCP agent for BP ipv6-two-attributes:
 https://review.openstack.org/#/c/70649/

 3. IPv6 Feature Parity
 https://blueprints.launchpad.net/neutron/+spec/ipv6-feature-parity
Definition: Superseded.

 4. Two Attributes Proposal to Control IPv6 RA Announcement and Address
 Assignment
https://blueprints.launchpad.net/neutron/+spec/ipv6-two-attributes
Patchset: Create new IPv6 attributes for Subnets.
 https://review.openstack.org/#/c/52983/
Patchset: Add support to DHCP agent for BP ipv6-two-attributes.
 https://review.openstack.org/70649
Patchset: Calculate stateless IPv6 address.
 https://review.openstack.org/56184
Patchset: Permit ICMPv6 RAs only from known routers.
 https://review.openstack.org/#/c/72252/

 5. Support IPv6 DHCPv6 Stateless mode in dnsmasq

 https://blueprints.launchpad.net/neutron/+spec/dnsmasq-ipv6-dhcpv6-stateless
Patchset: Add support to DHCP agent for BP 

Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-11 Thread Martinx - ジェームズ
Hey guys!

My OpenStack instance has IPv6 connectivity! Using ML2 / a simple flat
network... For the first time ever! Look:

---
administrative@controller:~$ nova boot --image
70f335e3-798b-4031-9773-a640970a8bdf --key-name Key trusty-1

administrative@controller:~$ ssh -i ~/test.pem ubuntu@10.33.14.21

ubuntu@trusty-1:~$ sudo ip -6 a a 2001:1291:2bf:fffb::300/64 dev eth0

ubuntu@trusty-1:~$ sudo ip -6 r a default via 2001:1291:2bf:fffb::1

ubuntu@trusty-1:~$ ping6 -c 1 google.com

PING google.com(2800:3f0:4004:801::1000) 56 data bytes
64 bytes from 2800:3f0:4004:801::1000: icmp_seq=1 ttl=54 time=55.1 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 55.121/55.121/55.121/0.000 ms

-
# From my Laptop (and from another IPv6 block):
testuser@macbuntu:~$ telnet 2001:1291:2bf:fffb::300 22
Trying 2001:1291:2bf:fffb::300...
Connected to 2001:1291:2bf:fffb::300.
Escape character is '^]'.
SSH-2.0-OpenSSH_6.6p1 Ubuntu-2
---

But OpenStack / Neutron isn't aware of that fixed IPv6 (
2001:1291:2bf:fffb::300) I just configured within the trusty-1 instance,
so I think we just need:

- Blueprint ipv6-provider-nets-slaac ready;
- Start radvd on upstream router (2001:1291:2bf:fffb::1).

Am I right?!

In fact, apparently, Security Groups are also working! I can ssh into
trusty-1 through IPv6 right now, but can't access port 80 of it (it is
closed but 22 is open to the world)...

Maybe it will also work with VLANs...

BTW, I just realized that all the physical servers - controllers, network
and compute nodes, etc. - can be installed under a single IPv6 /64 subnet!
Since OpenStack randomly generates the MAC addresses (plus SLAAC), the IPv6
addresses will never conflict.

Best!
Thiago


On 12 April 2014 00:09, Thomas Goirand z...@debian.org wrote:

 On 04/11/2014 10:52 PM, Collins, Sean wrote:
  Many of those patches are stale - please join us in the subteam IRC
  meeting if you wish to coordinate development of IPv6 features, so that
  we can focus on updating them and getting them merged. At this point
  simply applying them to the Icehouse tree is not enough.

 When and where is the next meeting?

 Thomas


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker]

2014-04-11 Thread Eric Windisch


 Any disagreements with that goal?


No disagreement at all.

Not that we're talking yet about moving the driver back into Nova, but I'd
like to take this opportunity to remind anyone interested in contributing a
Cinder driver that it would be a lot easier if they do it while the driver
is still in Stackforge.

 Correct. If this is intended for infra, it has to use
 devstack-gate. That has lots of levers that we need to set based on
 branches, how to do the zuul ref calculations (needed for the
 speculative gating), how to do branch overrides for stable an
 upgrade jobs, etc.

I suppose I wasn't very clear and what I said may have been misinterpreted.
 I'm certainly not opposed to the integration being introduced into
devstack-gate or testing that way. I'm also happy that someone wants to
contribute on the '-infra' side of things (thank you Derek!). In part, my
response earlier was to point to work that has already been done, since
Derek pointedly asked me about those efforts.

Derek: for more clarification on the Tempest work, however, most of the
patches necessary for using Docker with Tempest have been merged into
Tempest itself. Some patches were rejected or expired. I can share these
with you. Primarily, these patches were to make tempest work with Cinder,
Neutron, suspend/unsuspend, pause/resume, and snapshots disabled. Snapshot
support exists in the driver, but has an open bug that prevents tempest
from passing. Neutron support is now integrated into the driver. Primarily,
the driver lacks support for suspend/unsuspend and pause/resume.

As for dockenstack, this might deserve a separate thread. What I've done
here is build something that may be useful to openstack-infra and might
necessitate further discussion. It's the fastest way to get the Docker
driver up and running, but that's aside to its more generic usefulness as a
potential tool for openstack-infra. Basically, I do not see dockenstack as
being in conflict with devstack-gate. If anything, it overlaps more with
'install_jenkins_slave.sh'.

What's nice about dockenstack is that improvements to that infrastructure can
be easily tested locally, and since the jobs are significantly less dependent
on that infrastructure, they may easily be run on a developer's
workstation. Have you noticed that jobs in the gate run significantly
faster than devstack on a laptop? Does that have to be the case? Can we not
consolidate these into a single solution that is always fast for everyone,
all the time? Something used in dev and gating? Something that might reduce
the costs for running openstack-infra?  That's what dockenstack is.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-11 Thread Martinx - ジェームズ
In fact, neutron accepted the following command:

---
root@controller:~# neutron subnet-create --ip-version 6 --disable-dhcp
--tenant-id 5e0106fa81104c5cbe21e1ccc9eb1a36 sharednet1
2001:1291:2bf:fffb::/64
Created a new subnet:
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "2001:1291:2bf:fffb::2",               |
|                  |  "end": "2001:1291:2bf:fffb:ffff:ffff:ffff:fffe"}|
| cidr             | 2001:1291:2bf:fffb::/64                          |
| dns_nameservers  |                                                  |
| enable_dhcp      | False                                            |
| gateway_ip       | 2001:1291:2bf:fffb::1                            |
| host_routes      |                                                  |
| id               | 8685c917-e8df-4741-987c-6a531dca9fcd             |
| ip_version       | 6                                                |
| name             |                                                  |
| network_id       | 17cda0fb-a59b-4a7e-9d96-76d0670bc95c             |
| tenant_id        | 5e0106fa81104c5cbe21e1ccc9eb1a36                 |
+------------------+--------------------------------------------------+
---

Where gateway_ip 2001:1291:2bf:fffb::1 is my upstream SLAAC router
(radvd stopped for now).

Diving deeper: I think I'll put my OVS bridge br-eth0 (bridge_mappings =
physnet1:br-eth0) on top of a VLAN, but I'll not tell OpenStack to use
vlan; I'll keep using flat, just on top of a hidden VLAN... eheh   :-P

I'll keep testing to see how far I can go...:-)

Cheers!


On 12 April 2014 00:42, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Hey guys!

 My OpenStack Instance have IPv6 connectivity! Using ML2 / Simple Flat
 Network... For the first time ever! Look:

 ---
 administrative@controller:~$ nova boot --image
 70f335e3-798b-4031-9773-a640970a8bdf --key-name Key trusty-1

 administrative@controller:~$ ssh -i ~/test.pem ubuntu@10.33.14.21

 ubuntu@trusty-1:~$ sudo ip -6 a a 2001:1291:2bf:fffb::300/64 dev eth0

 ubuntu@trusty-1:~$ sudo ip -6 r a default via 2001:1291:2bf:fffb::1

 ubuntu@trusty-1:~$ ping6 -c 1 google.com

 PING google.com(2800:3f0:4004:801::1000) 56 data bytes
 64 bytes from 2800:3f0:4004:801::1000: icmp_seq=1 ttl=54 time=55.1 ms

 --- google.com ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 55.121/55.121/55.121/0.000 ms

 -
 # From my Laptop (and from another IPv6 block):
 testuser@macbuntu:~$ telnet 2001:1291:2bf:fffb::300 22
 Trying 2001:1291:2bf:fffb::300...
 Connected to 2001:1291:2bf:fffb::300.
 Escape character is '^]'.
 SSH-2.0-OpenSSH_6.6p1 Ubuntu-2
 ---

 But OpenStack / Neutron isn't aware of the fixed IPv6 address
 (2001:1291:2bf:fffb::300) I just configured within the trusty-1 instance,
 so I think we just need:

 - Blueprint ipv6-provider-nets-slaac ready;
 - Start radvd on upstream router (2001:1291:2bf:fffb::1).

 Am I right?!

 In fact, Security Groups apparently work too! I can ssh into trusty-1
 through IPv6 right now, but can't access its port 80 (it is closed but
 22 is open to the world)...

 Maybe it will also work with VLANs...

 BTW, I just realized that all the physical servers (controllers, network
 and compute nodes, etc.) can be installed under a single IPv6 /64 subnet!
 Since OpenStack randomly generates the MAC addresses (which SLAAC then
 uses), the IPv6 addresses will never conflict.

 Best!
 Thiago


 On 12 April 2014 00:09, Thomas Goirand z...@debian.org wrote:

 On 04/11/2014 10:52 PM, Collins, Sean wrote:
  Many of those patches are stale - please join us in the subteam IRC
  meeting if you wish to coordinate development of IPv6 features, so that
  we can focus on updating them and getting them merged. At this point
  simply applying them to the Icehouse tree is not enough.

 When and where is the next meeting?

 Thomas





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-11 Thread Martinx - ジェームズ
Cool! The instance shows an IPv6 address, and it clearly isn't generated
via EUI-64 (SLAAC), but at least I can use static IPv6! YAY!

---
root@controller:~# nova list
+--------------------------------------+----------+--------+------------+-------------+-----------------------------------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks                                      |
+--------------------------------------+----------+--------+------------+-------------+-----------------------------------------------+
| 1654644d-6d52-4760-b147-4b88769a6fc2 | trusty-2 | ACTIVE | -          | Running     | sharednet1=10.33.14.23, 2001:1291:2bf:fffb::3 |
+--------------------------------------+----------+--------+------------+-------------+-----------------------------------------------+

root@controller:~# ssh -i ~/xxx.pem ubuntu@10.33.14.23

ubuntu@trusty-2:~$ sudo ip -6 a a 2001:1291:2bf:fffb::3/64 dev eth0

ubuntu@trusty-2:~$ sudo ip -6 r a default via 2001:1291:2bf:fffb::1

ubuntu@trusty-2:~$ ping6 -c 1 google.com
PING google.com(2800:3f0:4004:801::100e) 56 data bytes
64 bytes from 2800:3f0:4004:801::100e: icmp_seq=1 ttl=54 time=49.6 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 49.646/49.646/49.646/0.000 ms
---

IPv6 is up and running, and OpenStack is aware of both the instance's IPv4
and IPv6 addresses! Security Groups are also taking care of ip6tables.
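
(A quick way to double-check on the compute node, assuming the
iptables-based firewall driver; exact chain names vary by deployment:)

---
root@compute:~# ip6tables -S | grep 2001:1291:2bf:fffb::3
---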

I'm pretty sure that if I start radvd on the upstream router right now, all
instances will generate their own IPv6 addresses based on their respective
MAC addresses. But then those addresses will differ from what OpenStack
thinks each instance has.
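
For the curious, the EUI-64 derivation is easy to reproduce by hand; a
rough bash sketch, with an assumed example MAC (fa:16:3e is the default
OpenStack OUI):

---
root@controller:~# mac=fa:16:3e:11:22:33
root@controller:~# IFS=: read -r a b c d e f <<< "$mac"
root@controller:~# a=$(printf '%02x' $(( 0x$a ^ 0x02 )))   # flip the U/L bit: fa -> f8
root@controller:~# echo "2001:1291:2bf:fffb:${a}${b}:${c}ff:fe${d}:${e}${f}"
2001:1291:2bf:fffb:f816:3eff:fe11:2233
---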

So many e-mails, sorry BTW! :-P

Best,
Thiago
