[openstack-dev] [Horizon] core reviewers needed

2013-10-03 Thread fank
Dear Horizon core reviewers,

I filed a bug recently (https://bugs.launchpad.net/horizon/+bug/1231248) to add 
Horizon UI support for Neutron NVP advanced service router. The Neutron plugin 
has been merged and we would like to have the Horizon UI support for the Havana 
release if possible.

I have submitted the patch for review (https://review.openstack.org/#/c/48393/) 
as well. If you can spend some time reviewing the bug/patch I'd really 
appreciate it.

Thanks,
-Kaiwei

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stable/grizzly gating

2013-10-03 Thread Gary Kotton
Hi,
The gating is broken due to the Neutron/Quantum name change. The root cause was 
that the devstack code was accessing quantum directories for configuration 
files, while the repo has neutron directories. Please see the devstack patch 
https://review.openstack.org/#/c/49483/. I hope that this will cover it. Please 
note that the stable/grizzly feature freeze is on the 10th of October 
(http://www.youtube.com/watch?v=590ljQM08H0)
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Gary Kotton
Please see https://review.openstack.org/#/c/49483/

From: Matt Riedemann mrie...@us.ibm.com
Date: Wednesday, October 2, 2013 7:19 PM
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Cc: Administrator gkot...@vmware.com
Subject: Re: [openstack-dev] Gate issues - what you can do to help

I'm tracking that with this bug:

https://bugs.launchpad.net/openstack-ci/+bug/1234181

There are a lot of sys.exit(1) calls in the neutron code on stable/grizzly (and 
in master too for that matter) so I'm wondering if something is puking but the 
error doesn't get logged before the process exits.


Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development


Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com

3605 Hwy 52 N
Rochester, MN 55901-1407
United States






From: Alan Pevec ape...@gmail.com
To: Gary Kotton gkot...@vmware.com
Cc: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Date: 10/02/2013 10:45 AM
Subject: Re: [openstack-dev] Gate issues - what you can do to help




Hi,

quantumclient is now fixed for stable/grizzly but there are issues
with the check-tempest-devstack-vm-neutron job where the devstack install is
dying in the middle of create_quantum_initial_network() without a trace,
e.g. 
http://logs.openstack.org/71/49371/1/check/check-tempest-devstack-vm-neutron/6da159d/console.html

Any ideas?

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Autoscale and load balancers

2013-10-03 Thread Thomas Hervé
On Wed, Oct 2, 2013 at 11:07 PM, Thomas Spatzier thomas.spatz...@de.ibm.com
 wrote:

A way to achieve the same behavior as you suggest but less verbose would be
 to use relationships in HOT. We had some discussion about relationships
 earlier this year and in other contexts, but this would fit here very well.
 And I think you express a similar idea on the wiki page you linked above.
 The model could look like:

 resources:
   server1:
     type: OS::Nova::Server
   server2:
     type: OS::Nova::Server
   loadbalancer:
     type: OS::Neutron::LoadBalancer
     # properties etc.
     relationships:
       - member: server1
       - member: server2

 From an implementation perspective, a relationship would be implemented
 similar to a resource, i.e. there is python code that implements all the
 behavior like modifying or notifying source or target of the relationship.
 Only the look in the template is different. In the sample above, 'member'
 would be the type of relationship and there would be a corresponding
 implementation. I actually wrote up some thoughts about possible notation
 for relationship in HOT on this wiki page:
 https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider

 We have such concepts in TOSCA and I think it could make sense to apply
 this here. So when evolving HOT we should think about extending a template
 from just having resources to also having links (instead of association
 resources which are more verbose).


Thanks for the input, I like the way it's structured. In this particular
case, I think I'd like the relationships to be on the server instead of having
a list of relationships on the load balancer, but the idea remains the same.

I haven't completely grasped the idea of components just yet, but it seems it
would be a useful distinction from resources here, as we care more about
the actual application (the service running on port N on the server)
than about the bare server. It becomes the responsibility of the
application to register with the load balancer, which it can do in a more
informed manner (providing the weight in the pool, for example). We just
need a concise and explicit way to do that in a template.
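
As a rough illustration (this is only a sketch of the proposed notation, not
something Heat implements today; the resource types and the 'member'
relationship type are taken from the example above), moving the relationships
onto the servers could look like:

resources:
  loadbalancer:
    type: OS::Neutron::LoadBalancer
    # properties etc.
  server1:
    type: OS::Nova::Server
    relationships:
      # each server declares its own membership in the load balancer pool
      - member: loadbalancer
  server2:
    type: OS::Nova::Server
    relationships:
      - member: loadbalancer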

-- 
Thomas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Alan Pevec
2013/10/3 Gary Kotton gkot...@vmware.com:
 Please see https://review.openstack.org/#/c/49483/

That's s/quantum/neutron/ on stable - I'm confused why that is; it
should have been quantum everywhere in Grizzly.
Could you please expand your reasoning in the commit message? It also
doesn't help; the check-tempest-devstack-vm-neutron job still failed.

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Gary Kotton


On 10/3/13 11:59 AM, Alan Pevec ape...@gmail.com wrote:

2013/10/3 Gary Kotton gkot...@vmware.com:
 Please see https://review.openstack.org/#/c/49483/

That's s/quantum/neutron/ on stable - I'm confused why is that, it
should have been quantum everywhere in Grizzly.

Please see below:

nicira@os-devstack:/opt/stack/neutron/etc$ git branch
  master
* stable/grizzly
nicira@os-devstack:/opt/stack/neutron/etc$ pwd
/opt/stack/neutron/etc
nicira@os-devstack:/opt/stack/neutron/etc$



The stable/grizzly devstack code was trying to copy from quantum. The
destination directories are still quantum, just the source directories are
neutron.


Could you please expand your reasoning in the commit message? It also
doesn't help, check-tempest-devstack-vm-neutron job still failed.

I need to check this. Locally things work for me now where they did not
prior to the patch. It looks like I have missed some places with the
root-wrap:

ROOTWRAP_SUDOER_CMD='/usr/local/bin/quantum-rootwrap
/etc/neutron/rootwrap.conf *'


I'll post another version soon.

Thanks
Gary


Cheers,
Alan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova, Neutron, Heat and Horizon Havana RC1 available

2013-10-03 Thread Thierry Carrez
Hello everyone,

This morning we've got Nova, Neutron, Heat and Horizon all publishing
their first release candidate for the Havana release! You can download
those RC1 tarballs at:

https://launchpad.net/nova/havana/havana-rc1
https://launchpad.net/neutron/havana/havana-rc1
https://launchpad.net/heat/havana/havana-rc1
https://launchpad.net/horizon/havana/havana-rc1

Unless release-critical issues are found that warrant a release
candidate respin, those RC1s will be formally released as the 2013.2
final version on October 17. You are therefore strongly encouraged to
test and validate those tarballs.

Alternatively, you can directly test the milestone-proposed branches at:
https://github.com/openstack/nova/tree/milestone-proposed
https://github.com/openstack/neutron/tree/milestone-proposed
https://github.com/openstack/heat/tree/milestone-proposed
https://github.com/openstack/horizon/tree/milestone-proposed

If you find an issue that could be considered release-critical, please
file it against the corresponding project:

https://bugs.launchpad.net/nova/+filebug
https://bugs.launchpad.net/neutron/+filebug
https://bugs.launchpad.net/heat/+filebug
https://bugs.launchpad.net/horizon/+filebug

and tag it *havana-rc-potential* to bring it to the release crew's
attention.

Note that the master branches of Nova, Neutron, Heat and Horizon are
now open for Icehouse development, and feature freeze restrictions no
longer apply there.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-03 Thread Julien Danjou
On Wed, Oct 02 2013, Thomas Maddox wrote:

 I'm working to make the sample pipeline optional and I'm stuck at a
 decision point about whether I ought to use a collector config option
 (like 'enable_sample_pipelines'), or let it be driven by setup.cfg (i.e.
 the existence of sample plugin references). My favorite right now is the
 former, but I wanted to entertain the latter and learn in the process.

What about having an empty pipeline.yml?

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Gary Kotton
Hi,
This seems to be my bad. I have abandoned the patch and am still looking
into the problems.
Thanks
Gary

On 10/3/13 12:14 PM, Gary Kotton gkot...@vmware.com wrote:



On 10/3/13 11:59 AM, Alan Pevec ape...@gmail.com wrote:

2013/10/3 Gary Kotton gkot...@vmware.com:
 Please see https://review.openstack.org/#/c/49483/

That's s/quantum/neutron/ on stable - I'm confused why is that, it
should have been quantum everywhere in Grizzly.

Please see below:

nicira@os-devstack:/opt/stack/neutron/etc$ git branch
  master
* stable/grizzly
nicira@os-devstack:/opt/stack/neutron/etc$ pwd
/opt/stack/neutron/etc
nicira@os-devstack:/opt/stack/neutron/etc$



The stable/grizzly devstack code was trying to copy from quantum. The
destination directories are still quantum, just the source directories are
neutron.


Could you please expand your reasoning in the commit message? It also
doesn't help, check-tempest-devstack-vm-neutron job still failed.

I need to check this. Locally things work for me now where they did not
prior to the patch. It looks like I have missed some places with the
root-wrap:

ROOTWRAP_SUDOER_CMD='/usr/local/bin/quantum-rootwrap
/etc/neutron/rootwrap.conf *'


I'll post another version soon.

Thanks
Gary


Cheers,
Alan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] strange behaviour (possible bug) with nova evacuate

2013-10-03 Thread Pavel Kravchenco
Hi Chris,

You probably encountered this bug: 
https://bugs.launchpad.net/nova/+bug/1156269
It's been fixed here: https://review.openstack.org/#/c/24600/
Btw, what code are you using?

Thanks,
Pavel

 Date: Wed, 2 Oct 2013 15:30:11 -0600
 From: Chris Friesen chris.frie...@windriver.com
 
 Hi all,
 
 I posted this on the IRC channel but got no response, so I'll try here.
 
 Suppose I do the following:
 
 1) create an instance (instance files not on shared storage)
 2) kill its compute node and evacuate the instance to another node
 3) boot up the original compute node
 4) kill the second compute node and evacuate back to the first compute 
node
 
 In step 4 it seems to be failing a check in rebuild_instance() because 
 it finds the old instance file on the disk at /var/lib/nova/instances/. 
   Is this a bug?  If not, what's the intended behaviour in this case? 
 Surely the admin isn't supposed to manually wipe a compute node before 
 reconnecting it to the network...
 
 It seems to me that when the original compute node boots up it should 
 recognize that the instance has been evacuated and delete the instance 
 file on the disk.
 
 Or is the problem that it doesn't know whether the instances are on 
 shared storage (in which case we wouldn't want to delete the instance 
 file) or local storage (in which case we would)?  If this is the case, 
 then maybe we should embed the storage type in the instance itself--this 
 would also let us avoid having to manually specify --on-shared-storage 
 in the evacuate call.



 
 Thanks,
 Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Meeting agenda for Thu Oct 3rd at 1500 UTC

2013-10-03 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Thu Oct 3rd at 1500 UTC 

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Release python-ceilometerclient? 
* [lsmola] talking about Hardware Agent
  https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
  , should it merge into Central Agent? Also I am in the process of setting
  up the TripleO Undercloud where I want to test the hardware agent (right
  now only on a bm_poseur node, testing on a real baremetal node is planned
  in a few weeks) 
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-03 Thread Doug Hellmann
On Thu, Oct 3, 2013 at 5:54 AM, Julien Danjou jul...@danjou.info wrote:

 On Wed, Oct 02 2013, Thomas Maddox wrote:

  I'm working to make the sample pipeline optional and I'm stuck at a
  decision point about whether I ought to use a collector config option
  (like 'enable_sample_pipelines'), or let it be driven by setup.cfg (i.e.
  the existence of sample plugin references). My favorite right now is the
  former, but I wanted to entertain the latter and learn in the process.

 What about having an empty pipeline.yml?


Modifying the pipeline's configuration file is the right way to go. If an
empty file isn't valid, then a single pipeline that subscribes to no events
may work.
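
For illustration, a minimal sketch of such a pipeline.yaml is below. It is
untested and assumes the Havana-era pipeline layout; the selection key is
shown as 'counters' and may be named differently depending on the release,
and it assumes the loader accepts an empty selection:

---
-
    name: noop_pipeline
    interval: 600
    # assumed: an empty selection is accepted; this pipeline then matches no
    # meters, so nothing is published or stored
    counters: []
    transformers:
    publishers:
        - rpc://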

The other proposed solution isn't going to work. setup.cfg is not meant to
be a deployer-facing configuration file. It's a packaging file that tells
the system what files are part of the app or library.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-03 Thread Thomas Maddox
On 10/3/13 8:15 AM, Doug Hellmann doug.hellm...@dreamhost.com wrote:




On Thu, Oct 3, 2013 at 5:54 AM, Julien Danjou jul...@danjou.info wrote:
On Wed, Oct 02 2013, Thomas Maddox wrote:

 I'm working to make the sample pipeline optional and I'm stuck at a
 decision point about whether I ought to use a collector config option
 (like 'enable_sample_pipelines'), or let it be driven by setup.cfg (i.e.
 the existence of sample plugin references). My favorite right now is the
 former, but I wanted to entertain the latter and learn in the process.

What about having an empty pipeline.yml?

Modifying the pipeline's configuration file is the right way to go. If an empty 
file isn't valid, then a single pipeline that subscribes to no events may work.

Interesting point, Doug and Julien. I'm thinking out loud, but if we wanted to 
use pipeline.yaml, we could have an 'enabled' attribute for each pipeline? I'm 
curious, does the pipeline dictate whether its resulting sample is stored, or 
if no pipeline is configured, will it just store the sample according to the 
plugins in */notifications.py? I will test this out.

For additional context, the intent of the feature is to allow a deployer more 
flexibility. Like, say we wanted to only enable storing white-listed event 
traits and using trigger pipelines (to come) for notification based 
alerting/monitoring?


The other proposed solution isn't going to work. setup.cfg is not meant to be a 
deployer-facing configuration file. It's a packaging file that tells the system 
what files are part of the app or library.

Agreed. Thanks for the explanation! =]


Doug



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-03 Thread Julien Danjou
On Thu, Oct 03 2013, Thomas Maddox wrote:

 Interesting point, Doug and Julien. I'm thinking out loud, but if we wanted
 to use pipeline.yaml, we could have an 'enabled' attribute for each
 pipeline?

That would be an option, for sure. But just removing all of them should
also work.

 I'm curious, does the pipeline dictate whether its resulting
 sample is stored, or if no pipeline is configured, will it just store the
 sample according to the plugins in */notifications.py? I will test this out.

If there's no pipeline, there's no sample, so nothing's stored.

 For additional context, the intent of the feature is to allow a deployer
 more flexibility. Like, say we wanted to only enable storing white-listed
 event traits and using trigger pipelines (to come) for notification based
 alerting/monitoring?

This is already supported by the pipeline as you can list the meters you
want or not.
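
For example (again a sketch assuming the Havana-era pipeline.yaml layout, with
the selection key shown as 'counters' and the meter names purely illustrative),
a deployer could restrict what gets stored to a small whitelist:

---
-
    name: whitelist_pipeline
    interval: 600
    # only the meters listed here are processed and published; prefixing an
    # entry with "!" excludes that meter instead
    counters:
        - cpu
        - instance
    transformers:
    publishers:
        - rpc://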

-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Gary Kotton
Hi,
I think that I may have stumbled upon the problem, but need the help from
someone on the infra team. Then again I may just be completely mistaken.

Prior to the little hiccup we have at the moment, VMs that were used for
devstack on the infra side had one interface:

2013-09-10 06:32:22.208 | Triggered by: https://review.openstack.org/40359
patchset 4
2013-09-10 06:32:22.208 | Pipeline: gate
2013-09-10 06:32:22.208 | IP configuration of this host:
2013-09-10 06:32:22.209 | 1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc
noqueue state UNKNOWN
2013-09-10 06:32:22.209 | inet 127.0.0.1/8 scope host lo
2013-09-10 06:32:22.209 | 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu
1500 qdisc pfifo_fast state UP qlen 1000
2013-09-10 06:32:22.209 | inet 10.2.144.147/15 brd 10.3.255.255 scope
global eth0
2013-09-10 06:32:23.193 | Running devstack
2013-09-10 06:32:23.866 | Using mysql database backend



In the latest version they have 2:

2013-10-03 11:33:54.298 | 1: lo: LOOPBACK,UP,LOWER_UP mtu 16436 qdisc
noqueue state UNKNOWN
2013-10-03 11:33:54.299 | inet 127.0.0.1/8 scope host lo
2013-10-03 11:33:54.300 | 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu
1500 qdisc pfifo_fast state UP qlen 1000
2013-10-03 11:33:54.301 | inet 162.242.160.129/24 brd 162.242.160.255
scope global eth0
2013-10-03 11:33:54.302 | 3: eth1: BROADCAST,MULTICAST,UP,LOWER_UP mtu
1500 qdisc pfifo_fast state UP qlen 1000
2013-10-03 11:33:54.302 | inet 10.208.24.83/17 brd 10.208.127.255
scope global eth1
2013-10-03 11:33:57.539 | Running devstack
2013-10-03 11:33:58.780 | Using mysql database backend



The problems occur when the following line is invoked:

https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

Anyone able to clarify why an additional interface was added?

Thanks
Gary


On 10/3/13 12:58 PM, Gary Kotton gkot...@vmware.com wrote:

Hi,
This seems to be my bad. I have abandoned the patch and am still looking
into the problems.
Thanks
Gary

On 10/3/13 12:14 PM, Gary Kotton gkot...@vmware.com wrote:



On 10/3/13 11:59 AM, Alan Pevec ape...@gmail.com wrote:

2013/10/3 Gary Kotton gkot...@vmware.com:
 Please see https://review.openstack.org/#/c/49483/

That's s/quantum/neutron/ on stable - I'm confused why is that, it
should have been quantum everywhere in Grizzly.

Please see below:

nicira@os-devstack:/opt/stack/neutron/etc$ git branch
  master
* stable/grizzly
nicira@os-devstack:/opt/stack/neutron/etc$ pwd
/opt/stack/neutron/etc
nicira@os-devstack:/opt/stack/neutron/etc$



The stable/grizzly devstack code was trying to copy from quantum. The
destination directories are still quantum, just the source directories
are
neutron.


Could you please expand your reasoning in the commit message? It also
doesn't help, check-tempest-devstack-vm-neutron job still failed.

I need to check this. Locally things work for me now where they did not
prior to the patch. It looks like I have missed some places with the
root-wrap:

ROOTWRAP_SUDOER_CMD='/usr/local/bin/quantum-rootwrap
/etc/neutron/rootwrap.conf *'


I'll post another version soon.

Thanks
Gary


Cheers,
Alan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-10-03 Thread Simon Pasquier

Hi Christopher,

Thanks for replying! I was out last week, hence this late email.

Le 20/09/2013 21:22, Christopher Armstrong a écrit :

Hello Simon! I've put responses below.

I'm kind of confused about your examples though, because you don't show
anything that depends on ComputeReady in your template. I guess I can
imagine some scenarios, but it's not very clear to me how this works.
It'd be nice to make sure the new autoscaling solution that we're
working on will support your case in a nice way, but I think we need
some more information about what you're doing. The only time this would
have an effect is if there's another resource depending on the
ComputeReady /that's also being updated at the same time/, because the
only effect that a dependency has is to wait until it is met before
performing create, update, or delete operations on other resources. So I
think it would be nice to understand your use case a little bit more
before continuing discussion.


I'm not sure I understand which template you're talking about: is it [1] 
or [2]?
In both cases, nothing depends on ComputeReady: this is the guard 
condition and it is the last resource being created. And since it 
depends on the NumberOfComputes or NumberOfWaitConditions parameter, it 
gets updated when I update one of these.


[1] http://paste.openstack.org/show/47142/
[2] http://paste.openstack.org/show/47148/

--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] question about stack updates, instance groups and wait conditions

2013-10-03 Thread Simon Pasquier

Hi Clint,

Thanks for the reply! I'll update the bug you raised with more 
information. In the meantime, I agree with you that cfn-hup is enough 
for now.


BTW, is there any bug or missing feature that would prevent me from 
replacing cfn-hup by os-collect-config?


Simon

Le 20/09/2013 22:12, Clint Byrum a écrit :

Excerpts from Simon Pasquier's message of 2013-09-17 05:57:58 -0700:

Hello,

I'm testing stack updates with instance group and wait conditions and
I'd like to get feedback from the Heat community.

My template declares an instance group resource with size = N and a wait
condition resource with count = N (N being passed as a parameter of the
template). Each group's instance is calling cfn-signal (with a different
id!) at the end of the user data script and my stack creates with no error.

Now when I update my stack to run N+X instances, the instance group gets
updated with size=N+X but since the wait condition is deleted and
recreated, the count value should either be updated to X or my existing
instances should re-execute cfn-signal.


That is a bug, the count should be something that can be updated in-place.

https://bugs.launchpad.net/heat/+bug/1228362

Once that is fixed, there will be an odd interaction between the groups
though. Any new instances will add to the count, but removed instances
will not decrease it. I'm not sure how to deal with that particular quirk.

That said, rolling updates will likely produce some changes to the way
updates interact with wait conditions so that we can let instances and/or
monitoring systems feed back when an instance is ready. That will also
help deal with the problem you are seeing.

In the mean time, cfn-hup is exactly what you want, and I see no problem
with re-running cfn-signal after an update to signal that the update
has applied.
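
For reference, the pattern being discussed (an instance group of size N plus a
wait condition whose count tracks the same parameter) could be sketched roughly
as below. The resource and property names follow the AWS-compatible resources
Heat shipped at the time, but treat this as an untested illustration rather
than Simon's actual template:

Parameters:
  NumberOfComputes:
    Type: Number
    Default: 2

Resources:
  ComputeConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: compute-image        # placeholder values
      InstanceType: m1.small
      # the user data script ends by calling cfn-signal with a unique id

  ComputeGroup:
    Type: OS::Heat::InstanceGroup
    Properties:
      AvailabilityZones: {"Fn::GetAZs": ""}
      LaunchConfigurationName: {Ref: ComputeConfig}
      Size: {Ref: NumberOfComputes}

  ComputeReadyHandle:
    Type: AWS::CloudFormation::WaitConditionHandle

  ComputeReady:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: ComputeGroup
    Properties:
      Handle: {Ref: ComputeReadyHandle}
      Timeout: "600"
      # with bug 1228362 fixed, updating NumberOfComputes would update this
      # count in place instead of recreating the wait condition
      Count: {Ref: NumberOfComputes}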

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Simon Pasquier
Software Engineer
Bull, Architect of an Open World
Phone: + 33 4 76 29 71 49
http://www.bull.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Proposed Agenda for Today's QA Meeting - 1700UTC - #openstack-meeting

2013-10-03 Thread Sean Dague

https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_October_3_2013

Blueprints (sdague)
* started cleaning up blueprints, need to start a purge of all the 
Unknown items


Neutron job status (sdague)

Elastic Recheck Top issues found (mtreinish)

Stable branch timing (sdague)

Design Summit Initial Planning (sdague)
 * QA sessions needed at DS - 
https://etherpad.openstack.org/icehouse-qa-session-planning


As always, if you have additional topics you'd like to see, please add 
them to the agenda with your irc nick so we know who will lead the 
discussion.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Jeremy Stanley
On 2013-10-03 06:54:10 -0700 (-0700), Gary Kotton wrote:
 I think that I may have stumbled upon the problem, but need the help from
 someone on the infra team.
[...]

I'm manually launching the test script on a fresh VM now and should
have something shortly.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Alan Pevec
 The problems occur when the when the the following line is invoked:

 https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

But that line is reached only in case baremetal is enabled which isn't
the case in gate, is it?

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] neutron and private networks

2013-10-03 Thread Jon Maron
Hi,

  I'd like to raise an issue in the hopes of opening some discussion on the IRC 
chat later today:

We see a critical requirement to support the creation of a savanna cluster 
with neutron networking while leveraging a private network (i.e. without the 
assignment of public IPs) - at least during the provisioning phase.  So the 
current neutron solution coded in the master branch appears to be insufficient 
(it is dependent on the assignment of public IPs to launched instances), at 
least in the context of discussions we've had with users.

  We've been experimenting and trying to understand the viability of such an 
approach and have had some success establishing SSH connections over a private 
network using paramiko etc.  So as long as there is a mechanism to ascertain 
the namespace associated with the given cluster/tenant (configuration?  neutron 
client?) it appears that the modifications to the actual savanna code for the 
instance remote interface (the SSH client code etc) will be fairly small.  The 
namespace selection could potentially be another field made available in the 
dashboard's cluster creation interface.

-- Jon


  
-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] neutron and private networks

2013-10-03 Thread Matthew Farrellee

On 10/03/2013 11:21 AM, Jon Maron wrote:

Hi,

I'd like to raise an issue in the hopes of opening some discussion on
the IRC chat later today:

We see a critical requirement to support the creation of a savanna
cluster with neutron networking while leveraging a private network
(i.e. without the assignment of public IPs) - at least during the
provisioning phase.  So the current neutron solution coded in the
master branch appears to be insufficient (it is dependent on the
assignment of public IPs to launched instances), at least in the
context of discussions we've had with users.

We've been experimenting and trying to understand the viability of
such an approach and have had some success establishing SSH
connections over a private network using paramiko etc.  So as long as
there is a mechanism to ascertain the namespace associated with the
given cluster/tenant (configuration?  neutron client?) it appears
that the modifications to the actual savanna code for the instance
remote interface (the SSH client code etc) will be fairly small.  The
namespace selection could potentially be another field made available
in the dashboard's cluster creation interface.

-- Jon


Last week there was an IRC discussion about this, which is by its very 
nature rather ephemeral. So thanks for taking this to the list.


The outcome of the IRC meeting was that -

 0) we don't cover the use case where only the cluster's head node has 
a public IP (all worker nodes have private IPs)


 1) we think it's an important use case

 2) there are two ways we see to address it
  a) do some architectural changes so that the responsibility of 
configuring the cluster can be delegated to the head node (from savanna-api)
  b) make savanna-api netns aware (e.g. ip netns exec) so that it can 
contact all nodes no matter the visibility of their network


This is a good item for the roadmap and for a design session in Hong Kong.

Best,


matt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] strange behaviour (possible bug) with nova evacuate

2013-10-03 Thread Chris Friesen

On 10/03/2013 05:45 AM, Pavel Kravchenco wrote:

Hi Chris,

You probably encountered this bug:
*https://bugs.launchpad.net/nova/+bug/1156269*
It been fixed here: https://review.openstack.org/#/c/24600/


Yes, that looks like what I'm seeing.  Thanks for the pointer.


Btw, what code are you using?


I'm currently using Grizzly.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] neutron and private networks

2013-10-03 Thread Ruslan Kamaldinov
This approach looks good.

But I'd like to outline an issue people might hit in real-world production
installations:
Let's imagine OpenStack is installed in HA mode. There are several
controllers; they run Savanna and Q3 services in HA mode. The problem is that
the active Savanna service should run on the same host as the active Q3
service. Otherwise the proposed approach will not work. It will require some
advanced configuration in the HA software - Savanna should always run on the
same host as Q3.


Ruslan


On Thu, Oct 3, 2013 at 7:21 PM, Jon Maron jma...@hortonworks.com wrote:

 Hi,

   I'd like to raise an issue in the hopes of opening some discussion on
 the IRC chat later today:

 We see a critical requirement to support the creation of a savanna
 cluster with neutron networking while leveraging a private network (i.e.
 without the assignment of public IPs) - at least during the provisioning
 phase.  So the current neutron solution coded in the master branch appears
 to be insufficient (it is dependent on the assignment of public IPs to
 launched instances), at least in the context of discussions we've had with
 users.

   We've been experimenting and trying to understand the viability of such
 an approach and have had some success establishing SSH connections over a
 private network using paramiko etc.  So as long as there is a mechanism to
 ascertain the namespace associated with the given cluster/tenant
 (configuration?  neutron client?) it appears that the modifications to the
 actual savanna code for the instance remote interface (the SSH client code
 etc) will be fairly small.  The namespace selection could potentially be
 another field made available in the dashboard's cluster creation interface.

 -- Jon



 --
 CONFIDENTIALITY NOTICE
 NOTICE: This message is intended for the use of the individual or entity to
 which it is addressed and may contain information that is confidential,
 privileged and exempt from disclosure under applicable law. If the reader
 of this message is not the intended recipient, you are hereby notified that
 any printing, copying, dissemination, distribution, disclosure or
 forwarding of this communication is strictly prohibited. If you have
 received this communication in error, please contact the sender immediately
 and delete it from your system. Thank You.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] strange behaviour (possible bug) with nova evacuate

2013-10-03 Thread Chris Friesen

On 10/02/2013 11:42 PM, Lingxian Kong wrote:

Hi Chris:

After exploring the code, I think there is already a clean-up on the
original compute node: it will check that the instances reported by the
driver are still associated with this host. If they are not, they will
be destroyed. Please refer to:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L784


Looks like the problem is that I'm using Grizzly.  This was fixed in 
Havana back in April but the fix was never backported.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Release notes

2013-10-03 Thread Nicholas Chase
Is there a target date for updating the release notes at 
https://wiki.openstack.org/wiki/ReleaseNotes/Havana (other than 
Keystone, which is already there)?  I'm trying to get a handle on what 
new features actually made it into Havana.  Or is there a better way to 
figure that out, other than wading through hundreds of bugs?


Thanks...

  Nick

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Release notes

2013-10-03 Thread Thierry Carrez
Nicholas Chase wrote:
 Is there a target date for updating the release notes at
 https://wiki.openstack.org/wiki/ReleaseNotes/Havana (other than
 Keystone, which is already there)?  I'm trying to get a handle on what
 new features actually made it into Havana.  Or is there a better way to
 figure that out, other than wading through hundreds of bugs?

The sooner the better, but they should be considered work in progress
until release time (Oct 17).

You can get a glimpse at:
http://status.openstack.org/release/

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gate issues - what you can do to help

2013-10-03 Thread Akihiro Motoki
Hi,

I would like to share what Gary and I investigated, although it is not
addressed yet.

The cause is the failure of the quantum-debug command in setup_quantum_debug
(https://github.com/openstack-dev/devstack/blob/stable/grizzly/stack.sh#L996).
We can reproduce the issue in a local environment by setting
Q_USE_DEBUG_COMMAND=True in localrc.

Mark proposed a patch https://review.openstack.org/#/c/49584/ but it
does not address the issue.
We need another way to proxy quantumclient to neutronclient.

Note that in some cases the devstack log in the gate does not contain
the end of the console log.
In 
http://logs.openstack.org/39/48539/5/check/check-tempest-devstack-vm-neutron/b9e6559/,
the last command logged is quantum subnet-create, but the quantum-debug
command was actually executed and it failed.

Thanks,
Akihiro

On Fri, Oct 4, 2013 at 12:05 AM, Alan Pevec ape...@gmail.com wrote:
 The problems occur when the when the the following line is invoked:

 https://github.com/openstack-dev/devstack/blob/stable/grizzly/lib/quantum#L302

 But that line is reached only in case baremetal is enabled which isn't
 the case in gate, is it?

 Cheers,
 Alan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Akihiro MOTOKI amot...@gmail.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] SmokeStack welcomes Heat and Ceilometer

2013-10-03 Thread Dan Prince
Hi all,

A quick update on some recent SmokeStack additions in the Havana dev cycle:

Several weeks back we added support for Ceilometer. Test configurations using 
both MongoDB (Fedora 19) and MySQL (CentOS 6.4) are currently being used to 
test Ceilometer. Currently only the compute agent is being used and verified. 
We hope to expand coverage to include using and testing the central agent as 
well in the future.

This week we put the finishing touches on Heat support for SmokeStack. Similar 
to Ceilometer above this includes configuration which test things on Fedora 19 
and Centos 6.4.

You should now see SmokeStack results for these projects. It has already found 
a couple of regressions and reported -1 results back into Gerrit for 
these projects.

Also, because SmokeStack uses (and helps maintain) upstream packages, we are 
able to provide review feedback on the associated puppet-ceilometer and 
puppet-heat projects, which are currently being used to install packages for 
testing.

Smokin' since Cactus...

Dan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for today meeting at 2000 UTC

2013-10-03 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on thursdays, 2000 UTC. The next meeting is today, 
2013-10-03!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Discuss ongoing status of the overall effort and any needed coordination.
- Discuss celery/distributed integration/review (and talk about current issues).
- Discuss resumption/migration @ 
https://blueprints.launchpad.net/taskflow/+spec/resumption-migrations
-   Also see reviews (49380, 48773) for a couple different approaches
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, problems, open-reviews, issues, solutions, 
questions (and more).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Autoscale and load balancers

2013-10-03 Thread Thomas Spatzier
Thomas Hervé the...@gmail.com wrote on 03.10.2013 09:59:02:

 From: Thomas Hervé the...@gmail.com
 To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
 Date: 03.10.2013 10:01
 Subject: Re: [openstack-dev] [Heat] Autoscale and load balancers


 On Wed, Oct 2, 2013 at 11:07 PM, Thomas
Spatzier thomas.spatz...@de.ibm.com
  wrote:

 A way to achieve the same behavior as you suggest but less verbose would
be
 to use relationships in HOT. We had some discussion about relationships
 earlier this year and in other contexts, but this would fit here very
well.
 And I think you express a similar idea on the wiki page you linked above.
 The model could look like:

 resources:
   server1:
     type: OS::Nova::Server
   server2:
     type: OS::Nova::Server
   loadbalancer:
     type: OS::Neutron::LoadBalancer
     # properties etc.
     relationships:
       - member: server1
       - member: server2

 From an implementation perspective, a relationship would be implemented
 similar to a resource, i.e. there is python code that implements all the
 behavior like modifying or notifying source or target of the
relationship.
 Only the look in the template is different. In the sample above, 'member'
 would be the type of relationship and there would be a corresponding
 implementation. I actually wrote up some thoughts about possible notation
 for relationship in HOT on this wiki page:
 https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider

 We have such concepts in TOSCA and I think it could make sense to apply
 this here. So when evolving HOT we should think about extending a
template
 from just having resources to also having links (instead of association
 resources which are more verbose).

 Thanks for the input, I like the way it's structured. In this
 particular case, I think I'd like the relationships to be on server
 instead of having a list of relationships on the load balancer, but
 the idea remains the same.

Yes, good point. Relationships from the server to the LB make sense; it's
typically the one that wants something (the server wants to be
load-balanced) that refers to the one offering the service.


 I didn't grasp completely the idea of components just yet, but it
 seems it would be a useful distinction with resources here, as we
 care more about the actual application here (the service running on
 port N on the server), rather than the bare server. It becomes the
 responsibility of the application to register with the load
 balancer, which it can do in a more informed manner (providing the
 weight in the pool for example). We just need a concise and explicit
 way to do that in a template.

If you refer to the components thing in the wiki page I mentioned, this has
been introduced (proposed) in relation to software orchestration. At first, I
thought it was not relevant for the loadbalancer use case but that the
relationship concept was more important, and that was the reason I
pointed to that wiki page.
But actually you have a good point: it is the workload on top of a server
that gets load balanced. So it looks like both topics need to be looked at in
combination.


 --
 Thomas
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Resource Classes and Racks (Wireframes + Concept Discussion)

2013-10-03 Thread mar...@redhat.com
On 01/10/13 00:07, Jaromir Coufal wrote:
 Just BTW:
 
 I know that lot of folks were watching the youtube stream
 (http://youtu.be/m3y6uD8yKVQ), so please feel free to give any feedback
 you have to this thread. I believe that this is good way to proceed
 forward and how to make things flexible enough to support various types
 of deployments.
 
 Looking forward to any feedback

my 2c:

* I like LGroup. You can still create an LGroup where all nodes in a
physical rack are in the same LGroup (i.e. what we have now in Rack).
However, it also means you can divide a given physical rack of servers
into two or more LGroups (eg TOR switch setup to give you two rack-local
subnets). Gives much better capacity planning. For Tuskar... currently a
'Rack' is just a list of node_ids ... so the concept should actually map
fine - I think?

* hardware profile on a resource class... nice idea... I like the auto
matching of profiles to particular nodes, though obviously that's a while
away yet. In Tuskar we'd need to expand the resource class definition
and operations for capturing the hardware profiles. Right now only racks
have this notion of 'aggregate capacity' and a resource class doesn't
expose the aggregate 'total resource capacity' or aggregate 'total
instance capacity'.

marios


 
 Thanks
 -- Jarda
 
 On 2013/30/09 13:57, Jaromir Coufal wrote:
 Hi everyone,

 based on Tuskar's merger with TripleO and upstream feedback on Tuskar,
 when I was thinking about processes and workflows there, I got into
 some changes which I think that are important for us, because they
 will help us to achieve better flexibility and still having ability
 for easy scaling.

 I wanted to do just walkthrough the wireframes but I think that it
 will raise up some discussion around Classes and Racks, so my thought
 was to merge both together (wireframes + concepts discussion).

 At this meeting I'd like to get you familiar with my thoughts and get
 into some wireframes which will explain the ideas more. I hope that we
 will get into discussion around changes (not just UI but API as well).

 The scope which we will be talking about is Icehouse.

 I'll be posting link into #tripleo IRC channel.
 I'd like to record the whole session, so if anybody cannot attend, it
 should be available for you later.

 (Please note that Google Hangout has limited number of 10
 participants, so if you consider just watching, please use youtube
 stream - link will be posted here when available.)

 Thanks
 -- Jarda


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominating Fei Long Wang for glance core

2013-10-03 Thread Nikhil Komawar

+1
 
-Nikhil


-Original Message-
From: Iccha Sethi iccha.se...@rackspace.com
Sent: Thursday, October 3, 2013 12:28am
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] Nominating Fei Long Wang for glance core



Hey,

I would like to nominate Fei Long Wang(flwang) for glance core. I think Fei has 
been an active reviewer/contributor to the glance community [1] and has always 
been on top of reviews.

Thanks for the good work Fei!

Iccha

[1] http://russellbryant.net/openstack-stats/glance-reviewers-30.txt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nominating Zhi Yan Liu for glance-core

2013-10-03 Thread Nikhil Komawar

+1
 
-Nikhil


-Original Message-
From: Iccha Sethi iccha.se...@rackspace.com
Sent: Thursday, October 3, 2013 12:25am
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] Nominating Zhi Yan Liu for glance-core



Hey,

I would like to nominate Zhi Yan Liu(lzydev) for glance core. I think Zhi has 
been an active reviewer/contributor to the glance community [1] and has always 
been on top of reviews.

Thanks for the good work Zhi!

Iccha

[1] http://russellbryant.net/openstack-stats/glance-reviewers-30.txt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-03 Thread Rudra Rugge
Hi All,

A blueprint has been registered to add IPAM and Policy
extensions to Neutron. Please review the blueprint and
the attached specification.

https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-policy-extensions-for-neutron

All comments are welcome.

Thanks,
Rudra
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Curvature and Donabe repos are now public!

2013-10-03 Thread Debojyoti Dutta
Hi!

We @Cisco just made the following repos public
https://github.com/CiscoSystems/donabe
https://github.com/CiscoSystems/curvature

Donabe was pitched as a recursive container before Heat days.
Curvature is an alternative interactive GUI front end to openstack
that can handle virtual resources, templates and can instantiate
Donabe workloads. The D3 + JS stuff was incorporated into Horizon. A
short demo was shown last summit and can be found at
http://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe

Congrats to the primary developers: @CaffeinatedBrad @John_R_Davidge
@Tehsmash_ @JackPeterFletch ... Special thanks to @lewtucker for
supporting this.

Hope this leads to more cool stuff for the Openstack community!

-- 
-Debo~

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Patches blocked for the feature freeze

2013-10-03 Thread Russell Bryant
Greetings,

The Havana feature freeze is over and the master branch is open for
Icehouse development.  We put a -2 on a *lot* of patches during the
feature freeze.  It seems that most of the ones I blocked have already
been automatically expired.

If you had a patch blocked for the feature freeze, please restore it in
gerrit.  You will likely have to ping the person that put a -2 on it
because they will have to manually remove it.  If it was me, feel free
to ping me on IRC or email.

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Curvature and Donabe repos are now public!

2013-10-03 Thread Debojyoti Dutta
And just for some history on Donabe: the idea was first presented at the
Essex summit along with @dendrobates at
http://www.slideshare.net/ddutta1/donabe-models-openstack-essex-summit

debo

On Thu, Oct 3, 2013 at 11:43 AM, Debojyoti Dutta ddu...@gmail.com wrote:
 Hi!

 We @Cisco just made the following repos public
 https://github.com/CiscoSystems/donabe
 https://github.com/CiscoSystems/curvature

 Donabe was pitched as a recursive container before Heat days.
 Curvature is an alternative interactive GUI front end to openstack
 that can handle virtual resources, templates and can instantiate
 Donabe workloads. The D3 + JS stuff was incorporated into Horizon. A
 short demo was shown last summit and can be found at
 http://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe

 Congrats to the primary developers: @CaffeinatedBrad @John_R_Davidge
 @Tehsmash_ @JackPeterFletch ... Special thanks to @lewtucker for
 supporting this.

 Hope this leads to more cool stuff for the Openstack community!

 --
 -Debo~



-- 
-Debo~

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Curvature and Donabe repos are now public!

2013-10-03 Thread Christopher Armstrong
On Thu, Oct 3, 2013 at 1:43 PM, Debojyoti Dutta ddu...@gmail.com wrote:

 Hi!

 We @Cisco just made the following repos public
 https://github.com/CiscoSystems/donabe
 https://github.com/CiscoSystems/curvature

 Donabe was pitched as a recursive container before Heat days.
 Curvature is an alternative interactive GUI front end to openstack
 that can handle virtual resources, templates and can instantiate
 Donabe workloads. The D3 + JS stuff was incorporated into Horizon. A
 short demo was shown last summit and can be found at

 http://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe

 Congrats to the primary developers: @CaffeinatedBrad @John_R_Davidge
 @Tehsmash_ @JackPeterFletch ... Special thanks to @lewtucker for
 supporting this.

 Hope this leads to more cool stuff for the Openstack community!



Congrats! I'm glad you guys finally released the code :)

Does Cisco (or anyone else) plan to continue to put development resources
into these projects, or should we basically view them as reference code for
solving particular problems?

Thanks,


-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Configuration API BP

2013-10-03 Thread McReynolds, Auston
If User X's existing instance is isolated from the change, but there's
no snapshot/clone/versioning of the current settings on X's instance
(via the trove database or jinja template), then how will
GET /configurations/:id return the correct/current settings? Unless
you're planning on communicating with the guest? There's nothing
wrong with that approach, it's just not explicitly noted anywhere in
the blueprint. For some reason I inferred that it would be handled
like trove security-groups.

On a slightly different note: If the default template will not be
represented as a default configuration-group from an api standpoint,
then how will you support the ability for a user to enumerate the list
of default configuration-group values for a service-type?
GET /configurations/:id won't be applicable, so will it be
something like GET /configurations/default?



From:  Craig Vyvial cp16...@gmail.com
Reply-To:  OpenStack Development Mailing List
openstack-dev@lists.openstack.org
Date:  Thursday, October 3, 2013 11:17 AM
To:  OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [trove] Configuration API BP


inline.


On Wed, Oct 2, 2013 at 1:03 PM, McReynolds, Auston
amcreyno...@ebay.com wrote:

Awesome! I only have one follow-up question:

Regarding #6  #7, how will the clone behavior work given that the
defaults are hydrated by a non-versioned jinja template?


I am not sure I understand the clone behavior because there is not really a
concept of cloning here. The jinja template is created and passed in the
prepare call to the guest, which writes it to the default my.cnf file.

When a configuration-group is removed, the instance will return to the
default state. This is not exactly clone behavior.



Scenario Timeline:

T1) Cloud provider begins with the default jinja template, but changes
   the values for properties 'a' and 'b'. (Template Version #1)
T2) User X deploys a database instance
T3) Cloud provider decides to update the existing template by modifying
   property 'c'. (Template Version #2)
T4) User Z deploys a database instance

I think it goes without saying that User Z's instance gets Template
Version #2 (w/ changes to a & b & c), but does User X?


No, User X does not get the changes. For User X to get the changes, a
maintenance window may need to be scheduled.



If it's a true clone, User X should be isolated from a change in
defaults, no?


User X will not see these default changes until a new instance is created.
 


Come to think about it, this is eerily similar to security-groups:
administratively, it can be beneficial to share a
configuration/security-group across multiple instances, but it can
also be a nightmare. Internally, it's extremely rare that we wish to
apply a database change to multiple tenants at once, so I'd argue
at a minimum to support a CONF opt-in for isolation, if not default
to it.


If I understand this correctly, my statement above means that it's isolated
by default.
 


On a related note: Will the default template for a service-type be
represented as a default configuration-group? If so, I imagine it
can be managed through the API (or MGMT API)?


The default template will not be represented as a configuration group.
This could potentially be a good fit, but it's more of a nice-to-have
feature.
 



From:  Craig Vyvial cp16...@gmail.com
Reply-To:  OpenStack Development Mailing List
openstack-dev@lists.openstack.org

Date:  Wednesday, October 2, 2013 10:06 AM
To:  OpenStack Development Mailing List openstack-dev@lists.openstack.org

Subject:  Re: [openstack-dev] [trove] Configuration API BP


I'm glad we both agree on most of these answers.
:)

On Oct 2, 2013 11:57 AM, Michael Basnight mbasni...@gmail.com wrote:

On Oct 1, 2013, at 11:20 PM, McReynolds, Auston wrote:

 I have a few questions left unanswered by the blueprint/wiki:

 #1 - Should the true default configuration-group for a service-type be
customizable by the cloud provider?

Yes


 #2 - Should a user be able to enumerate the entire actualized/realized
set of values for a configuration-group, or just the overrides?

actualized


 #3 - Should a user be able to apply a different configuration-group on
a master, than say, a slave?

Yes


 #4 - If a user creates a new configuration-group with values equal to
that of the default configuration-group, what is the expected
behavior?

I'm not sure that's an issue. You will select your config group, and it will
be the one used. I believe you are talking about the difference between the
template that's used to set up values for the instance, and the config
options that users are allowed to edit.
Those are going to be appended, so to speak, to the existing template.
It'll be up to the server software to define what order values, if
duplicated, are read/used.


 #5 - For GET /configuration/parameters, where is the list of supported
parameters and their metadata sourced from?



i believe 

[openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-03 Thread Chris Friesen


I was wondering if there is any interest in adding an 
on_shared_storage field to the Instance class.  This would be set once 
at instance creation time and we would then be able to avoid having the 
admin manually pass it in for the various API calls 
(evacuate/rebuild_instance/migration/etc.)


It could even be set automatically for the case of booting off block 
storage, and maybe we add a config flag indicating that a given compute 
node is using shared storage for its instances.


This would also allow for nova host-evacuate to work properly if some 
of the instances are on unshared storage and some are booting from block 
storage (which is shared).  As it stands, the host-evacuate command 
assumes that they're all the same.
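
As a rough sketch of what I mean (the field name matches the proposal;
everything else below is made up for illustration, not a proposed patch),
a host-evacuate helper could then decide per instance instead of per host:

  # illustrative only
  def split_for_evacuate(instances):
      reuse, rebuild = [], []
      for inst in instances:
          if inst.get('on_shared_storage'):
              reuse.append(inst)    # instance files survived; reuse them
          else:
              rebuild.append(inst)  # local disk is gone; rebuild from image
      return reuse, rebuild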


Thoughts?

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-03 Thread Caitlin Bestler



On October 3, 2013 12:44:50 PM Chris Friesen chris.frie...@windriver.com 
wrote:


I was wondering if there is any interest in adding an on_shared_storage 
field to the Instance class.  This would be set once at instance creation 
time and we would then be able to avoid having the admin manually pass it 
in for the various API calls (evacuate/rebuild_instance/migration/etc.)


It could even be set automatically for the case of booting off block 
storage, and maybe we add a config flag indicating that a given compute 
node is using shared storage for its instances.


This would also allow for nova host-evacuate to work properly if some of 
the instances are on unshared storage and some are booting from block 
storage (which is shared).  As it stands, the host-evacuate command assumes 
that they're all the same.


Thoughts?

Chris


*What* is on shared storage?

The boot drive?
A snapshot of the running VM?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC: adding on_shared_storage field to instance

2013-10-03 Thread Chris Friesen

On 10/03/2013 02:02 PM, Caitlin Bestler wrote:



On October 3, 2013 12:44:50 PM Chris Friesen
chris.frie...@windriver.com wrote:


I was wondering if there is any interest in adding an
on_shared_storage field to the Instance class.  This would be set
once at instance creation time and we would then be able to avoid
having the admin manually pass it in for the various API calls
(evacuate/rebuild_instance/migration/etc.)

It could even be set automatically for the case of booting off block
storage, and maybe we add a config flag indicating that a given
compute node is using shared storage for its instances.

This would also allow for nova host-evacuate to work properly if
some of the instances are on unshared storage and some are booting
from block storage (which is shared).  As it stands, the host-evacuate
command assumes that they're all the same.

Thoughts?

Chris


*What* is on shared storage?

The boot drive?
A snapshot of the running VM?


For the purposes of the API calls mentioned above, it means that the 
instance files (the disk on which the instantiated instance runs) are 
located on shared storage.  If this is the case, then when you evacuate 
an instance to a new host you can reuse the existing instance files 
(assuming they haven't been corrupted by the crash).


If the instance files are local to a specific compute node, then when 
doing an evacuate the system must rebuild the instance from the 
originally-specified image.


For a discussion of instance storage, see 
http://docs.openstack.org/trunk/openstack-ops/content/compute_nodes.html#instance_storage;
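
If it helps, one low-tech way to check this on a given deployment (just a
sketch; the path below is the usual default and may differ) is to drop a
token file in the instances directory on one compute node and then look
for it from another:

  # sketch: run on compute node A, then look for the file on node B
  import os
  import socket

  def drop_shared_storage_probe(instances_path='/var/lib/nova/instances'):
      token = os.path.join(instances_path,
                           '.shared_probe_' + socket.gethostname())
      with open(token, 'w') as f:
          f.write('probe\n')
      return token

If node B can see the file, the instance storage is shared, and a
per-instance on_shared_storage flag (or a per-compute-node config option)
could safely default to True there.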


Chris




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Configuration API BP

2013-10-03 Thread Craig Vyvial
Oops forgot the link on BP for versioning templates.

https://blueprints.launchpad.net/trove/+spec/configuration-templates-versionable


On Thu, Oct 3, 2013 at 3:47 PM, Craig Vyvial cp16...@gmail.com wrote:

 I have been trying to figure out where a call for the default
 configuration should go. I just finished adding a method to get the
 [mysqld] section via an API call, but I'm not sure where this should go yet.

 Currently I made it:
 GET - /instance/{id}/configuration

 This kind of only half fits in the path here because it doesn't really
 convey that this is the default configuration on the instance. On the
 other hand, it shows that it is coupled to the instance, because we need the
 instance flavor to determine the current values in the template applied
 to the instance.

 Maybe other options could be:
 GET - /instance/{id}/configuration/default
 GET - /instance/{id}/defaultconfiguration
 GET - /instance/{id}/default-configuration
 GET - /configuration/default/instance/{id}

 Suggestions welcome on the path.

 There is some wonkiness in showing this information to the user because of
 the difference in the value formats used. The example [1] shows that the
 template applies 50M as the value, while the configuration-group would apply
 the equivalent value 52428800. I don't think we should worry about this now,
 but it could lead to confusion for a user. A power-user type might understand
 it, whereas an entry-level user might not.

 [1] https://gist.github.com/cp16net/6816691
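
 One way to reduce that confusion (purely illustrative, not something in the
 current patch) would be to normalize the human-readable template values into
 plain integers before returning them:

   # illustrative only -- not part of the configuration API changes
   _UNITS = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}

   def normalize_size(value):
       # turn my.cnf-style values like '50M' into bytes (52428800)
       value = str(value).strip()
       if value and value[-1].upper() in _UNITS:
           return int(value[:-1]) * _UNITS[value[-1].upper()]
       return int(value)

 That way the default configuration call and the configuration-groups API
 would at least show values in the same units.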



 On Thu, Oct 3, 2013 at 2:36 PM, McReynolds, Auston 
 amcreyno...@ebay.com wrote:

 If User X's existing instance is isolated from the change, but there's
 no snapshot/clone/versioning of the current settings on X's instance
 (via the trove database or jinja template), then how will
 GET /configurations/:id return the correct/current settings? Unless
 you're planning on communicating with the guest? There's nothing
 wrong with that approach, it's just not explicitly noted anywhere in
 the blueprint. For some reason I inferred that it would be handled
 like trove security-groups.

 So this is a great point. There are talks about making the templating
 versioned in some form or fashion. ekonetzk(irc) said he would write up a
 BP around versioning.



 On a slightly different note: If the default template will not be
 represented as a default configuration-group from an api standpoint,
 then how will you support the ability for a user to enumerate the list
 of default configuration-group values for a service-type?
 GET /configurations/:id won't be applicable, so will it be
 something like GET /configurations/default?

 see above paragraph.





 From:  Craig Vyvial cp16...@gmail.com
 Reply-To:  OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
 Date:  Thursday, October 3, 2013 11:17 AM
 To:  OpenStack Development Mailing List 
 openstack-dev@lists.openstack.org
 Subject:  Re: [openstack-dev] [trove] Configuration API BP


 inline.


 On Wed, Oct 2, 2013 at 1:03 PM, McReynolds, Auston
 amcreyno...@ebay.com wrote:

 Awesome! I only have one follow-up question:

 Regarding #6 & #7, how will the clone behavior work given that the
 defaults are hydrated by a non-versioned jinja template?


 I am not sure I understand the clone behavior because there is not really a
 concept of cloning here. The jinja template is created and passed in the
 prepare call to the guest, which writes it to the default my.cnf file.

 When a configuration-group is removed, the instance will return to the
 default state. This is not exactly clone behavior.



 Scenario Timeline:

 T1) Cloud provider begins with the default jinja template, but changes
the values for properties 'a' and 'b'. (Template Version #1)
 T2) User X deploys a database instance
 T3) Cloud provider decides to update the existing template by modifying
property 'c'. (Template Version #2)
 T4) User Z deploys a database instance

 I think it goes without saying that User Z's instance gets Template
 Version #2 (w/ changes to a & b & c), but does User X?


 No, User X does not get the changes. For User X to get the changes, a
 maintenance window may need to be scheduled.



 If it's a true clone, User X should be isolated from a change in
 defaults, no?


 User X will not see these default changes until a new instance is created.



 Come to think about it, this is eerily similar to security-groups:
 administratively, it can be beneficial to share a
 configuration/security-group across multiple instances, but it can
 also be a nightmare. Internally, it's extremely rare that we wish to
 apply a database change to multiple tenants at once, so I'd argue
 at a minimum to support a CONF opt-in for isolation, if not default
 to it.


 If I understand this correctly, my statement above means that it's isolated
 by default.



 On a related note: Will the default template for a service-type be
 represented as a default configuration-group? If so, I imagine it
 can be managed 

Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-03 Thread Thomas Maddox
On 10/3/13 8:53 AM, Julien Danjou jul...@danjou.info wrote:

On Thu, Oct 03 2013, Thomas Maddox wrote:

 Interesting point, Doug and Julien. I'm thinking out loud, but if we
wanted
 to use pipeline.yaml, we could have an 'enabled' attribute for each
 pipeline?

That would be an option, for sure. But just removing all of them should
also work.

 I'm curious, does the pipeline dictate whether its resulting
 sample is stored, or if no pipeline is configured, will it just store
the
 sample according to the plugins in */notifications.py? I will test this
out.

If there's no pipeline, there's no sample, so nothing's stored.

 For additional context, the intent of the feature is to allow a deployer
 more flexibility. Like, say we wanted to only enable storing
white-listed
 event traits and using trigger pipelines (to come) for notification
based
 alerting/monitoring?

This is already supported by the pipeline as you can list the meters you
want or not.

I poked around a bunch today; yep, you're right - we can just drop samples
on the floor by negating all meters in pipeline.yaml. I didn't have much
luck just removing all pipeline definitions or using a blank one (it
puked, and anything other than negating all samples felt too hacky to be
viable with trusted behavior).
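
For anyone following along at home, the white-listing Julien mentions is just
a matter of enumerating the meters you care about in pipeline.yaml. Roughly
something like this (treat it as a sketch; the exact schema may differ between
releases):

  -
      name: whitelist_pipeline
      interval: 600
      meters:
          - "instance"
          - "cpu_util"
      transformers:
      publishers:
          - rpc://

Anything not listed never becomes a stored sample in the first place.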

I had my semantics and understanding of the workflow from the collector to
the pipeline to the dispatcher all muddled and was set straight today. =]
I will think on this some more.

I was also made aware of some additional Stevedore functionality, like
NamedExtensionManager, that should allow us to easily enable/disable any
handlers we don't want to load, along with their pipelines, through config
changes alone (thanks, Dragon!).

I really appreciate the time you all take to help us less experienced
developers learn on a daily basis! =]

Cheers!

-Thomas


-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Kicking TripleO up a notch

2013-10-03 Thread Dan Prince
Hi Robert,

In general I buy the vision laid out in this email. I think starting with the 
customer story will keep us on the right track as to what features to 
implement, working on the most important stuff, etc. A CD TripleO setup sounds 
just grand. For me, though, most of what you've laid out here is like kicking 
it up two notches (not just one). Not saying I don't think we should work 
towards this... just wondering if it is the most important thing at the moment. 
What I mean is... with all the instability in keeping TripleO working on a 
weekly basis, would we be better served putting our resources on the CI front 
first? And once we have stability... then we kick things up another notch or 
two? Or perhaps we do both of these in parallel?

I like the idea of multiple lines of defense... but given limited resources I 
wonder if some simpler CI doesn't trump CD at this point.

Dan

- Original Message -
 From: Robert Collins robe...@robertcollins.net
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Tuesday, October 1, 2013 4:37:16 AM
 Subject: [openstack-dev] [TripleO] Kicking TripleO up a notch
 
 Warning, this is a little long, but it's the distillation of a
 2.mumble hour call I had late last week with Devananda and Clint. It's
 a proposal: please do comment and critique it.
 
 The tl;dr read is:
  - we've been doing good work
  - but most of us are currently focused on the tech rather than the
 customer stories
  - lets fix that
  - Start with customer story and work back to minimum work needed
- and we can actually be *delivering* that story [we have hardware
 for it, thanks to HP]
  - Focus on efficiency and reducing firefighting before new features
  - https://trello.com/tripleo as an experimental kanban for this [1]
 
 Now for a less condensed, and hopefully more useful version :)
 
 The night before the call I finished reading
 http://www.amazon.com/The-Phoenix-Project-Business-ebook/dp/B00AZRBLHO/ref=sr_1_9?s=digital-textie=UTF8qid=1380182909sr=1-9keywords=the+goal
 which is a devops casting of 'The Goal', a seminal work in the LEAN
 manufacturing space. (It's terrible writing in a lot of ways, but it
 also does do a pretty good job IMO of highlighting the
 systems-thinking aspects of CD... but it doesn't drill into the
 detailed analysis of each aspect so some followup reading required to
 get chapter and verse on e.g. 'single item flow is ideal').
 
 It reminded me very strongly of things I used to hold as very
 important, but I've been sidetracked into playing with the tech -
 which I love - and not focusing on ... 'The goal'. I grabbed Clint,
 and Deva, and tried to grab Joe - to get a cross section of focus
 areas : Heat, Ironic/NovaBM/Nova - to sanity check what was in my head
 :).
 
 Our goal is to deliver a continuously deployed version of OpenStack.
 Right now, we're working on plumbing to build a /good/ version of
 that. Note the difference: 'deliver an X', 'building stuff to let us
 deliver a good X'.
 
 This is key: we've managed to end up focusing on bottom-up work,
 rather than optimising our ability to deliver the thing, and
 iteratively improving it. The former is necessary but not sufficient.
 Tuskar has been working top down, and (as usual) this results in very
 fast progress; the rest of TripleO has provided a really solid
 foundation, but with many gaps and super rough spots...
 
 So, I'd like to invert our priorities and start with the deliverable,
 however slipshod, and then iterate to make it better and better along
 the design paths we've already thought about. This could extend all
 the way to Tuskar, or we could start with the closest thing within
 reach, which is the existing 'no-state-for-tripleo' style CLI + API
 based tooling.
 
 In the call we had, we agreed that this approach makes a lot of sense,
 and spent a bunch of time talking through the ramifications on TripleO
 and Ironic, and sketched out one way to slice and dice things;
 https://docs.google.com/drawings/d/1kgBlHvkW8Kj_ynCA5oCILg4sPqCUvmlytY5p1p9AjW0/edit?usp=sharing
 is the diagram we came up with.
 
 The basic approach is to actually deliver the thing we want to deliver
 - a live working CD overcloud *ourselves* and iterate on that to make
 upgrades of that preserve state, then start tackling CD of its
 infrastructure, then remove the seed.
 
 Ramifications:
  - long term a much better project health and responsiveness to
 changing user needs.
  - may cause disruption in the short term as we do what's needed to get
 /something/ working.
  - will need community buy-in and support to make it work : two of the
 key things about working Lean are keeping WIP - inventory - low and
 ensuring that bottlenecks are not used for anything other than
 bottleneck tasks. Both of these things impact what can be done at any
 point in time within the project: we may need to say 'no' to proposed
 work to permit driving more momentum as a whole... at least in the
 short term.
  

Re: [openstack-dev] [TripleO] Kicking TripleO up a notch

2013-10-03 Thread James Slagle
On Tue, Oct 1, 2013 at 4:37 AM, Robert Collins
robe...@robertcollins.net wrote:
 Now for a less condensed, and hopefully more useful version :)
 Our goal is to deliver a continuously deployed version of OpenStack.
 Right now, we're working on plumbing to build a /good/ version of
 that. Note the difference: 'deliver an X', 'building stuff to let us
 deliver a good X'.

Overall, I agree with the approach.  But, I think it really helps that a lot of
the low level tooling already exists.  I think a CD environment is definitely
going to help us find issues a lot quicker than we're finding them now.


 This is key: we've managed to end up focusing on bottom-up work,
 rather than optimising our ability to deliver the thing, and
 iteratively improving it. The former is necessary but not sufficient.
 Tuskar has been working top down, and (as usual) this results in very
 fast progress; the rest of TripleO has provided a really solid
 foundation, but with many gaps and super rough spots...

 So, I'd like to invert our priorities and start with the deliverable,
 however slipshod, and then iterate to make it better and better along
 the design paths we've already thought about. This could extend all
 the way to Tuskar, or we could start with the closest thing within
 reach, which is the existing 'no-state-for-tripleo' style CLI + API
 based tooling.

 In the call we had, we agreed that this approach makes a lot of sense,
 and spent a bunch of time talking through the ramifications on TripleO
 and Ironic, and sketched out one way to slice and dice things;
 https://docs.google.com/drawings/d/1kgBlHvkW8Kj_ynCA5oCILg4sPqCUvmlytY5p1p9AjW0/edit?usp=sharing
 is the diagram we came up with.

Phase 0...makes sense.

A couple of questions about the other phases:
What is Persistent Overcloud with CD in TripleO Phase 1?
Is that where the overcloud gets upgraded on each commit, vs torn down
and redeployed?
I'd take it this is the image based upgrade approach where we'd need
the read-only /, and
storage somewhere for the persistent data support that has been
previously discussed?

If one of the other goals of the MVP of Phase 1 is to stop causing API
downtime during
upgrades, then this implies an HA Overcloud?  I believe that also
implies that we'd need support
across the upstream OpenStack projects for different versions of the same service
being compatible (to an extent).  Meaning, if we have an HA Overcloud
with 2 Control nodes, and
we bring one of the nodes down and upgrade Nova to a newer version,
when we start
the upgraded node again, the 2 running Nova's need to be able to be
interoperable.  AIUI,
this type of support is still not ready in most projects.  But, I
guess that's why this is phase 1
and not 0 :).

In Phase 2, does Undercloud CD also imply a persistent Undercloud?  I'm
guessing yes, since
the Overcloud couldn't stay persistent if its undercloud was destroyed.



 The basic approach is to actually deliver the thing we want to deliver
 - a live working CD overcloud *ourselves* and iterate on that to make
 upgrades of that preserve state,

Ok, I think this roughly answers my question about what persistent meant.

 then start tackling CD of its
 infrastructure, then remove the seed.

Removing the seed and starting with the undercloud is one of the areas
I've looked
at, with the goal being making it easier to bootstrap an undercloud
for the folks working
on Tuskar.  I know I've pointed out these things before, but I wanted
to again here.  I'm not
sure if these efforts align with the long term vision of removing the
seed, or what
exactly the plan is around that.  I just want to make folks aware of
these, so as to
avoid duplication if similar paths are chosen.

First, there's the undercloud-live effort to build a live usb image of
an undercloud that people can boot, and install if they choose to.
https://github.com/agroup/undercloud-live

Second, undercloud-live makes use of some other python code I worked
on to apply d-i-b
elements to the current system, as opposed to a chroot.  This is the
work I mentioned
in Seattle (still working on a patch for d-i-b proper for this code
btw).  For now, it's at:
https://github.com/agroup/python-dib-elements/

undercloud-live is Fedora based at the moment, because we wanted to integrate
it with the Fedora build toolchain easily.



 Ramifications:
  - long term a much better project health and responsiveness to
 changing user needs.
  - may cause disruption in the short term as we do what's needed to get
 /something/ working.

I *think* this is a fair trade-off.  Though, I'm not sure I understand
the short-term
disruption.  Do you just mean there won't be as many people focusing on devtest
and the low-level tooling because instead they're focused on the CD environment?

  - will need community buy-in and support to make it work : two of the
 key things about working Lean are keeping WIP - inventory - low and
 ensuring that bottlenecks are not used for anything other than
 bottleneck 

Re: [openstack-dev] [TripleO] Kicking TripleO up a notch

2013-10-03 Thread Robert Collins
On 4 October 2013 11:00, Dan Prince dpri...@redhat.com wrote:
 Hi Robert,

 In general I buy the vision laid out in this email. I think Starting with 
 the customer story will keep us on the right track as to what features to 
 implement, working on most important stuff, etc. A CD tripleO setup sounds 
 just grand. For me though most of what you've laid out here is like kicking 
 it up two notches (not just one). Not saying I don't think we should work 
 towards this... just wondering if it is the most important thing at the 
 moment. What I mean is... with all the instability in keeping TripleO working 
 on a weekly basis would we be better served putting our resources on the CI 
 front first. And once we have stability... then we kick things up another 
 notch or two? Or perhaps we do both of these in parallel?

 I like the idea of multiple lines of defense... but given limited resources I 
 wonder if some simpler CI doesn't trump CD at this point.

That's a really good question. I think if we analyze TripleO, we see three
major stories in play that are neither gating-CI'd nor actually CD'd:
 - deploy an overcloud
 - deploy an undercloud
 - bootstrap a seed

Some issues, like a nova-client bug, will break all three; most issues
we hit either break heat-related things (so overcloud and undercloud) or
break baremetal (so seed and undercloud).

I guess my thinking on this is that by:
 - reducing this to one story - CD an overcloud using a manual static undercloud
 - rigorously fixing the story *and* putting in place things that stop
the story breaking - CI/design changes/whatever

We will at least stabilise the end-user-valuable aspect. Once that's
there and doing its thing, we bring in the next story, so what we're
maintaining looks like:
 - CD an overcloud using a CD'd undercloud done using a manual static seed
 - rigorously fix this story and put in place things to stop it
breaking - CI/design changes/whatever

The net effect of this, engineering-wise, is that we're focusing on fewer
battles and we'll drive things like CI for each story through to
completion *before* we move on to the next story: breaking the
deadlock we've had with
test-all-the-things-all-the-time-and-gosh-it's-hard.

That said, https://etherpad.openstack.org/tripleo-test-cluster is
aiming straight at a full end-to-end test story for TripleO at the
moment, but we may want to modify it to get earlier delivery of CI -
once we're at the point of having it run things at all, of course.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] core reviewers needed

2013-10-03 Thread Gabriel Hurley
Hi Kaiwai,

First, the bad news:

1. The Horizon release candidate has already been cut, so for Havana we're only 
considering release-blocking bugs at this point (and even those have to meet a 
high bar to warrant a new release candidate). The feature freeze deadline was 
almost a month ago.

2. As Akihiro Motoki pointed out on the Launchpad ticket, this looks more like 
a small feature addition than a bug. Whether it should be tracked as a 
blueprint or a bug with the priority wishlist is debatable, but either way 
it's not something we can consider for a Havana RC2.

The good news is that as of today Horizon's master branch is now open for 
Icehouse development, which means that reviews like this will start getting 
attention again. I don't see any immediate reasons why this shouldn't be 
accepted (pending real reviews), it will just go into the I1 milestone instead 
of the Havana final release.

Thank you for your work in putting together the patch and working through the 
feature. I'm sorry the timing wasn't more fortuitous.

All the best,

- Gabriel

 -Original Message-
 From: f...@vmware.com [mailto:f...@vmware.com]
 Sent: Wednesday, October 02, 2013 11:44 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Horizon] core reviewers needed
 
 Dear Horizon core reviewers,
 
 I filed a bug recently (https://bugs.launchpad.net/horizon/+bug/1231248) to
 add Horizon UI support for Neutron NVP advanced service router. The
 Neutron plugin has been merged and we would like to have the Horizon UI
 support for Havanas release if possible.
 
 I have submitted the patch for review
 (https://review.openstack.org/#/c/48393/) as well. If you can spend
 sometime reviewing the bug/patch I'd really appreciate it.
 
 Thanks,
 -Kaiwei
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Re: Service VM discussion - mgmt ifs

2013-10-03 Thread Regnier, Greg J
RE: vlan trunking support for network tunnels
Copying to dev mailing list.
- Greg

-Original Message-
From: Kyle Mestery (kmestery) [mailto:kmest...@cisco.com] 
Sent: Thursday, October 03, 2013 6:33 AM
To: Bob Melander (bmelande)
Cc: Regnier, Greg J
Subject: Re: Service VM discussion - mgmt ifs

On Oct 3, 2013, at 1:56 AM, Bob Melander (bmelande) bmela...@cisco.com wrote:
 
 The N1kv plugin only uses VXLAN, but for that tunneling method VLAN 
 trunking is supported. The way it works is that each VXLAN is mapped to a 
 *link-local* VLAN. That technique is pretty much amenable to any tunneling 
 method.
 
 There is a blueprint for trunking support in Neutron written by Kyle 
 (https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api). 
 I think that it would be very useful for the service VM framework if at least 
 the ML2 and OVS plugins would implement the above blueprint. 
 
I think this blueprint would be worth shooting for in Icehouse. I can flesh it 
out a bit more so there is more to see on it and we can target it for Icehouse 
if you guys think this makes sense. I think not only would it help the service 
VM approach being taken here, but for running OpenStack on OpenStack 
deployments, having a trunk port to the VM makes a lot of sense and enables 
more networking options for that type of testing.

Thanks,
Kyle

 We actually have an implementation also for the OVS plugin that supports its 
 tunneling methods. But we have not yet attempted to upstream it.
 
 Thanks,
 Bob
 
 PS. Thanks for inserting the email comments into the document. If we can 
 extend it further in the coming weeks to get a full(er) picture, then during 
 the summit we can identify/discuss suitable pieces to implement in phases 
 during the Icehouse timeframe. 
 
 
 3 okt 2013 kl. 01:13 skrev Regnier, Greg J greg.j.regn...@intel.com:
 
 Hi Bob,
  
 Does the VLAN trunking solution work with tenant networks that use (VxLAN, 
 NVGRE) tunnels?
  
 Thanks,
 Greg
  
 From: Bob Melander (bmelande) [mailto:bmela...@cisco.com]
 Sent: Wednesday, September 25, 2013 2:57 PM
 To: Regnier, Greg J; Sumit Naiksatam; Rudrajit Tapadar (rtapadar); 
 David Chang (dwchang); Joseph Swaminathan; Elzur, Uri; Marc Benoit; 
 Sridar Kandaswamy (skandasw); Dan Florea (dflorea); Kanzhe Jiang; 
 Kuang-Ching Wang; Gary Duan; Yi Sun; Rajesh Mohan; Maciocco, 
 Christian; Kyle Mestery (kmestery)
 Subject: Re: Service VM discussion - mgmt ifs
 ... The service VM 
 framework scheduler should preferably also allow selection of VIFs to 
 host a logical resource's logical interfaces. To clarify the last statement, 
 one use case
 could be to spin up a VM with more VIFs than are needed initially 
 (e.g., if the VM does not support vif hot-plugging). Another use 
 case is if the plugin supports VLAN trunking and attachement of the 
 logical resource's logical interface to a network corresponds to trunking of 
 a network on a VIF.
  
 There are at least three (or four) ways to dynamically plug a logical 
 service resource inside a VM to networks:
 - Create a VM VIF on demand for the logical interface of the service 
 resource (hot-plugging)
 - Pre-populate the VM with a set of VIFs that can be allocated to 
 logical interfaces of the service resources
 - Create a set of VM VIFs (on demand or during VM creation) that 
 carry VLAN trunks for which logical (VLAN) interfaces are created and 
 allocated to service resources.
  



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Kicking TripleO up a notch

2013-10-03 Thread Robert Collins
On 4 October 2013 11:03, James Slagle james.sla...@gmail.com wrote:
 On Tue, Oct 1, 2013 at 4:37 AM, Robert Collins


 In the call we had, we agreed that this approach makes a lot of sense,
 and spent a bunch of time talking through the ramifications on TripleO
 and Ironic, and sketched out one way to slice and dice things;
 https://docs.google.com/drawings/d/1kgBlHvkW8Kj_ynCA5oCILg4sPqCUvmlytY5p1p9AjW0/edit?usp=sharing
 is the diagram we came up with.

 Phase 0...makes sense.

 A couple of questions about the other phases:
 What is Persistent Overcloud with CD in TripleO Phase 1?
 Is that where the overcloud gets upgraded on each commit, vs torn down
 and redeployed?

Yes indeed!

 I'd take it this is the image based upgrade approach where we'd need
 the read-only /, and
 storage somewhere for the persistent data support that has been
 previously discussed?

Read-only / would be nice but isn't in the MVP for it; persistent data
is crucial, however.

 If one of the other goals of the MVP of Phase 1 is to stop causing API
 downtime during
 upgrades, then this implies an HA Overcloud?  I believe that also
 implies that we'd need support
 across the upstream Openstack projects of different versions of the same 
 service
 being compatible (to an extent).  Meaning, if we have a HA Overcloud
 with 2 Control nodes, and
 we bring one of the nodes down and upgrade Nova to a newer version,
 when we start
 the upgraded node again, the 2 running Nova's need to be able to be
 interoperable.  AIUI,
 this type of support is still not ready in most projects.  But, I
 guess that's why this is phase 1
 and not 0 :).

Yes, and yes, and yes! :)

 In Phase 2, does  Undercloud CD also imply persistent Undercloud?  I'm
 guessing yes, since
 the Overcloud couldn't stay persistent if it's undercloud was destroyed.

Totally, thanks for calling these things out; suggests to me we may
want to split into a few more phases, and tighten up the descriptions.

 then start tackling CD of its
 infrastructure, then remove the seed.

 Removing the seed and starting with the undercloud is one of the areas
 I've looked
 at, with the goal being making it easier to bootstrap an undercloud
 for the folks working
 on Tuskar.  I know I've pointed out these things before, but I wanted
 to again here.  I'm not
 sure if these efforts align with the long term vision of removing the
 seed, or what
 exactly the plan is around that.  I just want to make folks aware of
 these, so as to
 avoid duplication if similar paths are chosen.

I don't think they align all that much, but OTOH I think they are
useful things in their own right.

 First, there's the undercloud-live effort to build a live usb image of
 an undercloud that people can boot, and install if they choose to.
 https://github.com/agroup/undercloud-live

 Second, undercloud-live makes use of some other python code I worked
 on to apply d-i-b
 elements to the current system, as opposed to a chroot.  This is the
 work I mentioned
 in Seattle (still working on a patch for d-i-b proper for this code
 btw).  For now, it's at:
 https://github.com/agroup/python-dib-elements/

 undercloud-live is Fedora based at the moment, because we wanted to integrate
 it with the Fedora build toolchain easily.

Yeah, and that's cool.

 Ramifications:
  - long term a much better project health and responsiveness to
 changing user needs.
  - may cause disruption in the short term as we do what's needed to get
 /something/ working.

 I *think* this is a fair trade off.  Though, I'm not sure I understand
 the short term
 disruption.  Do you just mean there won't be as many people focusing on  
 devtest
 and the low level tooling because instead they're focused on the CD 
 environment?

And that for instance we probably won't catch nova-bm regressions as
often because we'll be leaving the CD environment running for a period
of time, only retooling that when we get to phase 2.

  - will need community buy-in and support to make it work : two of the
 key things about working Lean are keeping WIP - inventory - low and
 ensuring that bottlenecks are not used for anything other than
 bottleneck tasks. Both of these things impact what can be done at any
 point in time within the project: we may need to say 'no' to proposed
 work to permit driving more momentum as a whole... at least in the
 short term.

 Can you give some examples of something that might be said no to?

So for instance, let's say someone wanted to switch us to Ironic next
week. That would disrupt the effort to get the overcloud doing CD; it
would slow velocity on that. It's a good thing to do, but we should do
that when we can, as a team, focus on dealing with the side effects
together, effectively. Or switching devtest to using Tuskar, likewise
- long term, totally do it, but let's fit it in so we don't cause
extended disruption or, crucially, context switching amongst folk
working on whatever is the current critical path.

 In my head, I read that as refactoring or new