[openstack-dev] [nova][cinder]Nova can't detach volume in init host routine

2015-04-28 Thread hao wang
Hi, everyone

There is a Nova bug:

https://bugs.launchpad.net/nova/+bug/1408865.
https://review.openstack.org/#/c/147042/

The bug scenario is:

1. Create a VM from a bootable volume.
2. Delete this VM.
3. Restart the nova-compute service while the VM's task state is 'deleting'.

When nova-compute comes back up, the VM ends up deleted successfully, but the
bootable volume is still in the 'in-use' state and cannot be deleted with
cinder delete.

Proposed fix:

Pass init=True to _delete_instance when it is called from init_host, and
raise an exception when EndpointNotFound occurs. This sets the VM's status
to error so the user can re-issue the delete.
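For illustration, here is a minimal, self-contained sketch of the intended
control flow (this is not the actual Nova patch; all names and structures are
placeholders): when the delete is resumed from init_host and the Cinder
endpoint is unreachable, the exception is re-raised so the instance goes to
ERROR and the delete can be retried later.

class EndpointNotFound(Exception):
    """Stand-in for the client error raised when Cinder is unreachable."""


def detach_volume(volume_id):
    # Placeholder for the real Cinder call; pretend the endpoint is down.
    raise EndpointNotFound(volume_id)


def delete_instance(instance, init=False):
    try:
        detach_volume(instance["volume_id"])
    except EndpointNotFound:
        if init:
            # Resuming the delete from init_host: surface the failure so
            # the instance is marked ERROR and the delete can be re-issued.
            instance["vm_state"] = "error"
            raise
        # The normal delete path would keep its current behaviour here.
    instance["vm_state"] = "deleted"


if __name__ == "__main__":
    vm = {"volume_id": "vol-1", "vm_state": "deleting"}
    try:
        delete_instance(vm, init=True)
    except EndpointNotFound:
        print("instance left in state:", vm["vm_state"])  # error -> retry delete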


Is this approach OK, or could Cinder do something to fix this bug?


I need your suggestion to push this work forward.


Thanks.

wanghao


-- 

Best Wishes For You!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] docker with Juno devstack

2015-04-28 Thread Murali B
Hi Srinivas,

Thank you for your response.

I found there is a bug in nova-docker:
https://bugs.launchpad.net/nova-docker/+bug/1449273

After applying the fix from https://review.openstack.org/#/c/165196/, I was
able to create the Docker image successfully using Nova.

Thanks
-Murali
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-04-28 Thread Bogdan Dobrelya
On 28.04.2015 15:15, Bogdan Dobrelya wrote:
 
 Hello, Zhou
 
 
 Yes, this is a known issue [0]. Note, there were many bugfixes, like
 [1],[2],[3], merged for the MQ OCF script, so you may want to try to
 backport them as well, following the guide [4].
 
 [0] https://bugs.launchpad.net/fuel/+bug/1432603
 [1] https://review.openstack.org/#/c/175460/
 [2] https://review.openstack.org/#/c/175457/
 [3] https://review.openstack.org/#/c/175371/
 [4] https://review.openstack.org/#/c/170476/
 
 
 Could you please elaborate on what the same/different batches are for MQ
 and DB? Note, there are MQ clustering logic flow charts available here
 [5], and we're planning to release a dedicated technical bulletin for this.
 
 [5] http://goo.gl/PPNrw7
 
 
 This is very interesting, thank you! I believe all commands in the MySQL RA
 OCF script should also be wrapped with timeout -SIGTERM or -SIGKILL, as we
 did for the MQ RA OCF script, and there should not be any sleep calls. I
 created a bug for this [6].
 
 [6] https://bugs.launchpad.net/fuel/+bug/1449542
 
 
 Yes, something like that. As I mentioned, there were several bug fixes
 in the 6.1 dev, and you can also check the MQ clustering flow charts.
 
 
 Not exactly. There is no master in a mirrored MQ cluster. We define the
 rabbit_hosts configuration option from oslo.messaging, which ensures all
 queue masters will be spread across all of the MQ nodes in the long run. We
 use a master abstraction only for the Pacemaker RA clustering layer. Here,
 a master is the MQ node that the rest of the MQ nodes join.
 
 
 We do erase the node's master attribute in the CIB for such cases. This
 should not cause problems in the master election logic.
 
 
 (Note, the RabbitMQ documentation mentions *queue* masters and slaves,
 which are not the case for the Pacemaker RA clustering abstraction layer.)
 
 
 We made an assumption that the node with the highest MQ uptime should
 know the most about the recent cluster state, so other nodes must join it.
 The RA OCF script does not work with queue masters directly.
 
 
 The full MQ cluster reassembly logic is far from perfect, indeed. It might
 erase all Mnesia files, hence any custom entities, like users or vhosts,
 would be removed as well. Note, we do not configure durable queues for
 OpenStack, so there is nothing to worry about here - the full cluster
 downtime assumes there will be no AMQP messages stored at all.
 
 
 Yes, this option is only supported in the newest RabbitMQ versions, but we
 should definitely look at how it could help.
 
 
 Indeed, there are cases when MQ's autoheal can do nothing about existing
 partitions and the cluster remains partitioned forever, for example:
 
 Masters: [ node-1 ]
 Slaves: [ node-2 node-3 ]
 root@node-1:~# rabbitmqctl cluster_status
 Cluster status of node 'rabbit@node-1' ...
 [{nodes,[{disc,['rabbit@node-1','rabbit@node-2']}]},
 {running_nodes,['rabbit@node-1']},
 {cluster_name,rabbit@node-2},
 {partitions,[]}]
 ...done.
 root@node-2:~# rabbitmqctl cluster_status
 Cluster status of node 'rabbit@node-2' ...
 [{nodes,[{disc,['rabbit@node-2']}]}]
 ...done.
 root@node-3:~# rabbitmqctl cluster_status
 Cluster status of node 'rabbit@node-3' ...
 [{nodes,[{disc,['rabbit@node-1','rabbit@node-2','rabbit@node-3']}]},
 {running_nodes,['rabbit@node-3']},
 {cluster_name,rabbit@node-2},
 {partitions,[]}]

Sorry, here is the correct one [0] !

[0] http://pastebin.com/m3fDdMA6

 
 So we should test the pause-minority value as well.
 But I strongly believe we should make the MQ resource a multi-state clone
 that supports many masters; related blueprint [7].
 
 [7]
 https://blueprints.launchpad.net/fuel/+spec/rabbitmq-pacemaker-multimaster-clone
 
 
 Well, we should not confuse the queue masters with the multi-state clone
 master for the MQ resource in Pacemaker.
 As I said, the Pacemaker RA has nothing to do with queue masters. And we
 introduced this master mostly in order to support the full cluster
 reassembly case - there must be one node promoted and the other nodes
 should join it.
 
 
 This is a very good point, thank you.
 
 
 Thank you for a thorough feedback! This was a really great job.
 
 
 


-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Navigating the ever changing OpenStack codebase

2015-04-28 Thread Doug Hellmann
Excerpts from Kevin L. Mitchell's message of 2015-04-27 17:38:25 -0500:
 On Mon, 2015-04-27 at 21:42 +, Jeremy Stanley wrote:
  I consider it an unfortunate oversight that those files weren't
  deleted a very, very long time ago.
 
 Unfortunately, there's one problem with that: you can't tell tox to use
 a virtualenv that you've built.  We need this capability at present, so
 we have to run tests using run_tests.sh instead of tox :(  I have an
 issue open on tox to address this need, but haven't seen any movement on
 that; so until then, I have to oppose the removal of run_tests.sh…
 despite how much *I'd* like to see it bite the dust!

I had a similar requirement at one point and was able to use tox
--notests to create the virtualenv, then modify it to add the tools I
wanted, then run tox again to run the tests.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Driverlog] Feel free to add your drivers for Kilo release into Driverlog

2015-04-28 Thread Irina Povolotskaya
Hi to all,

I'd like to remind you about DriverLog, a nice tool that joins our efforts in
keeping information on OpenStack drivers up to date.

You can get the latest information on drivers from 2 different sources:
DriverLog itself [1] and the OpenStack Marketplace [2].

There are a number of brand new drivers in Kilo, so it's high time to get
them published in DriverLog.

I can't help mentioning that Ironic is now also present in DriverLog -
many thanks to Ironic PTL Devananda van der Veen for the help.

For instructions on adding a new entry, see the wiki page [3].

Remember that by adding new drivers, we keep DriverLog updated and open
to OpenStack newcomers, in contrast to the numerous wiki pages that can
sometimes leave users confused.

Feel free to add me for code review - always pleased to help.

Thank you.


[1] http://stackalytics.com/driverlog/
[2] https://www.openstack.org/marketplace/drivers/
[3] https://wiki.openstack.org/wiki/DriverLog


-- 
Best regards,

Irina

PI Team Technical Writer
skype: ira_live
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Improving OpenStack documentation around RabbitMQ

2015-04-28 Thread Michael Klishin
Hi, 

I'm a RabbitMQ engineering team member and we'd like to help improve
OpenStack docs around it.

I've been reading the docs and making notes of what can be improved. We'd be
happy to contribute the changes. However, we're not very familiar with the
OpenStack development process and have a few questions before we start.

As far as I understand, OpenStack Kilo is about to ship. Does this mean we
can only contribute documentation improvements for the release after it? Are
there maintenance releases that doc improvements could go into? If so, how is
this reflected in repository branches?

Should the changes we propose be discussed on this list or in GitHub issues [1]?

Finally, we are considering adding a doc guide dedicated to OpenStack on
rabbitmq.com (we have one for EC2, for instance). Note that we are not
looking to replace what's on docs.openstack.org, only to provide a guide that
can go into more detail. Does this sound like a good idea to the OpenStack
community? Should we keep everything on docs.openstack.org? Would it be OK if
we link to rabbitmq.com guides in any changes we contribute? I don't think
OpenStack Juno docs have a lot of external links: is that by design?

Thanks.

1. https://github.com/openstack/openstack-manuals 
--  
MK  

Staff Software Engineer, Pivotal/RabbitMQ  



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-04-28 Thread Bogdan Dobrelya
 Hello,

Hello, Zhou


 I am using Fuel 6.0.1 and find that the RabbitMQ recovery time is long after
 a power failure. I have a running HA environment, then I reset the power of
 all the machines at the same time. I observe that after reboot it
 usually takes 10 minutes for the RabbitMQ cluster to appear running in
 master-slave mode in Pacemaker. If I power off all the 3 controllers and
 only start 2 of them, the downtime sometimes can be as long as 20 minutes.

Yes, this is a known issue [0]. Note, there were many bugfixes, like
[1],[2],[3], merged for the MQ OCF script, so you may want to try to
backport them as well, following the guide [4].

[0] https://bugs.launchpad.net/fuel/+bug/1432603
[1] https://review.openstack.org/#/c/175460/
[2] https://review.openstack.org/#/c/175457/
[3] https://review.openstack.org/#/c/175371/
[4] https://review.openstack.org/#/c/170476/


 I did a little investigation and found some possible causes.

 1. MySQL Recovery Takes Too Long [1], Blocking RabbitMQ Clustering in
 Pacemaker
 
 The pacemaker resource p_mysql start timeout is set to 475s. Sometimes
 MySQL-wss fails to start after a power failure, and Pacemaker would wait
 475s before retrying to start it. The problem is that Pacemaker divides
 resource state transitions into batches. Since RabbitMQ is a master-slave
 resource, I assume that starting all the slaves and promoting the master
 are put into two different batches. If, unfortunately, starting all the
 RabbitMQ slaves is put in the same batch as starting MySQL, then even if
 the RabbitMQ slaves and all other resources are ready, Pacemaker will not
 continue but just wait for the MySQL timeout.

Could you please elaborate on what the same/different batches are for MQ
and DB? Note, there are MQ clustering logic flow charts available here
[5], and we're planning to release a dedicated technical bulletin for this.

[5] http://goo.gl/PPNrw7


 I can reproduce this by hard powering off all the controllers and starting
 them again. It's more likely to trigger the MySQL failure this way. Then
 I observe that if there is one cloned mysql instance not starting, the
 whole pacemaker cluster gets stuck and does not emit any log. On the
 host of the failed instance, I can see a mysql resource agent process
 calling the sleep command. If I kill that process, pacemaker comes
 back alive and the RabbitMQ master gets promoted. In fact this long timeout
 blocks every resource's state transition in pacemaker.

 This may be a known problem of Pacemaker and there are some discussions
 about it on the Linux-HA mailing list [2]. It might not be fixed in the
 near future. It seems that in general it's bad to have long timeouts in
 state transition actions (start/stop/promote/demote). There may be another
 way to implement the MySQL-wss resource agent: use a short start timeout
 and monitor the wss cluster state using the monitor action.

This is very interesting, thank you! I believe all commands in the MySQL RA
OCF script should also be wrapped with timeout -SIGTERM or -SIGKILL, as we
did for the MQ RA OCF script, and there should not be any sleep calls. I
created a bug for this [6].

[6] https://bugs.launchpad.net/fuel/+bug/1449542
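For illustration, the pattern being described could look roughly like the
sketch below (the real MySQL and MQ resource agents are shell scripts that
use the coreutils timeout command; this Python version is only an analogy and
every name in it is made up): each potentially blocking call gets a soft
timeout, then a SIGTERM, then a SIGKILL escalation, so no single command can
stall a whole Pacemaker state transition.

import signal
import subprocess


def run_with_timeout(cmd, soft_timeout, kill_after):
    """Run cmd; send SIGTERM after soft_timeout, SIGKILL kill_after later."""
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=soft_timeout)
    except subprocess.TimeoutExpired:
        proc.send_signal(signal.SIGTERM)   # ask politely first
        try:
            return proc.wait(timeout=kill_after)
        except subprocess.TimeoutExpired:
            proc.kill()                    # escalate to SIGKILL
            return proc.wait()


# Example: a status probe that must never block the whole transition.
rc = run_with_timeout(["mysqladmin", "status"], soft_timeout=10, kill_after=5)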


 I also found a fix to improve the MySQL start timeout [3]; it shortens the
 timeout to 300s. At the time of sending this email, I cannot find it in the
 stable/6.0 branch. Maybe the maintainer needs to cherry-pick it to
 stable/6.0?

 [1] https://bugs.launchpad.net/fuel/+bug/1441885
 [2] http://lists.linux-ha.org/pipermail/linux-ha/2014-March/047989.html
 [3] https://review.openstack.org/#/c/171333/


 2. RabbitMQ Resource Agent Breaks Existing Cluster

 Reading the code of the RabbitMQ resource agent, I find it does the
 following to start the RabbitMQ master-slave cluster.
 On all the controllers:
 (1) Start Erlang beam process
 (2) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 (3) Stop RabbitMQ App but do not stop the beam process

 Then in pacemaker, all the RabbitMQ instances are in slave state. After
 pacemaker determines the master, it does the following.
 On the to-be-master host:
 (4) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 On the slaves hosts:
 (5) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 (6) Join RabbitMQ cluster of the master host


Yes, something like that. As I mentioned, there were several bug fixes
in the 6.1 dev, and you can also check the MQ clustering flow charts.

 As far as I can understand, this process is to make sure the master
 determined by Pacemaker is the same as the master determined by the RabbitMQ
 cluster. If there is no existing cluster, it's fine. If it is run
after

Not exactly. There is no master in a mirrored MQ cluster. We define the
rabbit_hosts configuration option from oslo.messaging, which ensures all
queue masters will be spread across all of the MQ nodes in the long run. We
use a master abstraction only for the Pacemaker RA clustering layer. Here,
a master is the MQ node that the rest of the MQ nodes join.

 power failure 

Re: [openstack-dev] Improving OpenStack documentation around RabbitMQ

2015-04-28 Thread Davanum Srinivas
Michael,

Have you seen this?
https://github.com/openstack/ha-guide/tree/master/doc/high-availability-guide/ha_aa_rabbitmq

That url was built from this github repo:
https://github.com/openstack/ha-guide/tree/master/doc/high-availability-guide/ha_aa_rabbitmq

There's a weekly meeting for the HA documentation where you can meet the
people working on the HA guide:
https://wiki.openstack.org/wiki/Meetings#HA_Guide_Update_Meeting

Thanks,
dims

On Tue, Apr 28, 2015 at 9:15 AM, Christian Berendt christ...@berendt.io wrote:
 Hello Michael.

 Just moving your thread to the correct mailing list.

 Sorry for my previous mail. Thunderbird autocompletion used the
 unsubscribe alias and not the correct mailing list :(

 Christian.

 On 04/28/2015 02:58 PM, Michael Klishin wrote:
 Hi,

 I'm a RabbitMQ engineering team member and we'd like to help improve 
 OpenStack docs
 around it.

 I've been reading the docs and making notes of what can be improved. We'd be 
 happy
 to contribute the changes. However, we're not very familiar with the 
 OpenStack development
 process and have a few questions before we start.

 As far as I understand, OpenStack Kilo is about to ship. Does this mean we 
 can only contribute
 documentation improvements for the release after it? Are there maintenance 
 releases that doc improvements
 could go into? If so, how is this reflected in repository  branches?

 Should the changes we propose be discussed on this list or in GitHub issues 
 [1]?

 Finally, we are considering adding a doc guide dedicated to OpenStack on 
 rabbitmq.com (we have one for EC2,
 for instance). Note that we are not looking
 to replace what's on docs.openstack.org, only provide a guide that can go 
 into more details.
 Does this sound like a good idea to the OpenStack community? Should we keep 
 everything on docs.openstack.org?
 Would it be OK if we link to rabbitmq.com guides in any changes we 
 contribute? I don't think OpenStack Juno
 docs have a lot of external links: is that by design?

 Thanks.

 1. https://github.com/openstack/openstack-manuals
 --
 MK

 Staff Software Engineer, Pivotal/RabbitMQ



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Christian Berendt
 Cloud Computing Solution Architect
 Mail: bere...@b1-systems.de

 B1 Systems GmbH
 Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
 GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [EC2 API] Cancelling today's team meeting - 04/28/2015

2015-04-28 Thread M Ranga Swami Reddy
Team,

We decided to cancel today’s meeting because a number of key members
won’t be able to attend.


Thanks
Swami

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Glance] Hierarchical multitenancy and Glance?

2015-04-28 Thread Geoff Arnold
Yes. 100% upstream.

And although I’ve referred to it as “reseller” (following the previous Keystone 
BP), it’s a much more generic pattern. Long term, I think it turns into 
something like a supply chain framework for services.

Geoff

 On Apr 28, 2015, at 3:51 AM, Tim Bell tim.b...@cern.ch wrote:
 
 Geoff,
  
 Would the generic parts of your “reseller” solution be contributed to the 
 upstream projects (e.g. glance, horizon, ceilometer) ? It would be good to 
 get the core components understanding hierarchical multitenancy for all the 
 use cases.
  
 The nova quota work is being submitted upstream for Liberty by Sajeesh
 (https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api)
  
 The cinder quota proposal is also underway
 (https://blueprints.launchpad.net/cinder/+spec/cinder-nested-quota-driver)
  
 Tim
  
 From: Geoff Arnold [mailto:ge...@geoffarnold.com] 
 Sent: 28 April 2015 08:11
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Keystone][Glance] Hierarchical multitenancy and 
 Glance?
  
 Use cases:
 https://wiki.openstack.org/wiki/HierarchicalMultitenancy
  
 Blueprints:
 (Kilo):
 https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
 https://blueprints.launchpad.net/keystone/+spec/reseller
 (Liberty):
 https://blueprints.launchpad.net/nova/+spec/multiple-level-user-quota-management
 https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api
 (Pending):
 https://blueprints.launchpad.net/horizon/+spec/hierarchical-projects
 https://blueprints.launchpad.net/horizon/+spec/inherited-roles
  
 As for adoption, it’s hard to say. The HMT work in Keystone was a necessary 
 starting point, but in order to create a complete solution we really need the 
 corresponding changes in Nova (quotas), Glance (resource visibility), Horizon 
 (UI scoping), and probably Ceilometer (aggregated queries). We (Cisco) are 
 planning to kick off a Stackforge project to knit all of these things 
 together into a complete “reseller” federation system. I’m assuming that 
 there will be other system-level compositions of the various pieces.
  
 Geoff
  
 On Apr 27, 2015, at 9:48 PM, Tripp, Travis S travis.tr...@hp.com wrote:
  
 Geoff,
 
 Getting a spec on HMT would be helpful, as Nikhil mentioned.
 
 As a general question, what is the current adoption of domains vs.
 hierarchical projects? Is there a wiki or something that highlights what
 the desired path forward is with regard to domains?
 
 Thanks,
 Travis
 
 On 4/27/15, 7:16 PM, Geoff Arnold ge...@geoffarnold.com wrote:
 
 
 Good points. I'll add some details. I'm sure the Reseller guys will have
 some comments.
 
 Geoff
 
 
 On Apr 27, 2015, at 3:32 PM, Nikhil Komawar nikhil.koma...@rackspace.com wrote:
 
 Thanks Geoff. Added some notes and questions.
 
 -Nikhil
 
 
 From: Geoff Arnold ge...@geoffarnold.com
 Sent: Monday, April 27, 2015 5:50 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Keystone][Glance] Hierarchical multitenancy
 and   Glance?
 
 In preparation for Vancouver, I've been looking for blueprints and
 design summit discussions involving the application of the Keystone
 hierarchical multitenancy work to other OpenStack projects. One obvious
 candidate is Glance, where, for example, we might want domain-local
 resource visibility as a default. Despite my searches, I wasn't able to
 find anything. Did I miss something obvious?
 
 I've added a paragraph to
 https://etherpad.openstack.org/p/liberty-glance-summit-topics to make
 sure it doesn't get overlooked.
 
 Cheers,
 
 Geoff
 
 _
 _
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 _
 _
 OpenStack 

Re: [openstack-dev] [heat] Kubernetes AutoScaling with Heat AutoScalingGroup and Ceilometer

2015-04-28 Thread Qiming Teng
On Mon, Apr 27, 2015 at 12:28:01PM -0400, Rabi Mishra wrote:
 Hi All,
 
 Deploying a Kubernetes (k8s) cluster on any OpenStack based cloud for
 container based workloads is a standard deployment pattern. However,
 auto-scaling this cluster based on load would require some integration
 between k8s and OpenStack components. While looking at the option of
 leveraging Heat ASG to achieve autoscaling, I came across a few
 requirements that the list can discuss and arrive at the best possible
 solution.
 
 A typical k8s deployment scenario on OpenStack would be as below.
 
 - Master (single VM)
 - Minions/Nodes (AutoScalingGroup)
 
 AutoScaling of the cluster would involve both scaling of minions/nodes and 
 scaling Pods(ReplicationControllers). 
 
 1. Scaling Nodes/Minions:
 
 We already have utilization stats collected at the hypervisor level, as 
 ceilometer compute agent polls the local libvirt daemon to acquire 
 performance data for the local instances/nodes.

I really doubt whether those metrics are useful enough to trigger a scaling
operation. My suspicion is based on two assumptions: 1) autoscaling
requests should come from the user application or service, not from the
control plane; the application knows best whether scaling is needed;
2) hypervisor level metrics may be misleading in some cases. For
example, they cannot give an accurate CPU utilization number in the case
of CPU overcommit, which is a common practice.

 Also, Kubelet (running on the node) collects the cAdvisor stats. However, 
 cAdvisor stats are not fed back to the scheduler at present and scheduler 
 uses a simple round-robin method for scheduling.

It looks like a multi-layer resource management problem which needs a
holistic design. I'm not quite sure whether scheduling at the container
layer alone can help improve resource utilization or not.

 Req 1: We would need a way to push stats from the kubelet/cAdvisor to 
 ceilometer directly or via the master(using heapster). Alarms based on these 
 stats can then be used to scale up/down the ASG. 

To send a sample to ceilometer for triggering autoscaling, we will need
some user credentials to authenticate with keystone (even with trusts).
We need to pass the project-id in and out so that ceilometer will know
the correct scope for evaluation. We also need a standard way to tag
samples with the stack ID and maybe also the ASG ID. I'd love to see
this done transparently, i.e. no matching_metadata or query confusions.
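As a purely illustrative sketch of what such a push could look like (the
endpoint, meter name, and metadata keys below are assumptions, not an agreed
convention), an agent could POST a sample to the Ceilometer v2 API with a
Keystone token, tagging it with the stack and group IDs so alarms can be
scoped:

import requests

CEILOMETER = "http://ceilometer.example.com:8777"  # assumed endpoint
TOKEN = "gAAAA..."  # Keystone token, e.g. obtained via a trust

sample = [{
    "counter_name": "k8s.pod.cpu.util",    # hypothetical meter name
    "counter_type": "gauge",
    "counter_unit": "%",
    "counter_volume": 83.5,
    "resource_id": "minion-node-uuid",
    "project_id": "tenant-project-id",
    "resource_metadata": {
        "stack_id": "heat-stack-uuid",      # lets alarms match on the stack
        "asg_id": "asg-resource-uuid",
    },
}]

resp = requests.post(
    "%s/v2/meters/k8s.pod.cpu.util" % CEILOMETER,
    json=sample,
    headers={"X-Auth-Token": TOKEN},
)
resp.raise_for_status()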

 There is an existing blueprint [1] for an inspector implementation for the
 docker hypervisor (nova-docker). However, we would probably require an agent
 running on the nodes or the master to send the cAdvisor or heapster stats to
 ceilometer. I've seen some discussions on the possibility of leveraging
 keystone trusts with the ceilometer client.

An agent is needed, definitely.

 Req 2: The AutoScaling Group is expected to notify the master that a node has
 been added/removed. Before removing a node, the master/scheduler has to mark
 the node as unschedulable.

A little bit confused here ... are we scaling the containers or the
nodes or both?

 Req 3: Notify containers/pods that the node would be removed for them to stop 
 accepting any traffic, persist data. It would also require a cooldown period 
 before the node removal. 

There have been some discussions on sending messages, but so far I don't
think there is a conclusion on the generic solution.

Just my $0.02.

BTW, we have been looking into similar problems in the Senlin project.

Regards,
  Qiming

 Both requirements 2 and 3 would probably require generating scaling event
 notifications/signals for the master and containers to consume, and probably
 some ASG lifecycle hooks.
 
 
 Req 4: In case there are too many 'pending' pods to be scheduled, the
 scheduler would signal the ASG to scale up. This is similar to Req 1.
 
 
 2. Scaling Pods
 
 Currently, manual scaling of pods is possible by resizing
 ReplicationControllers. The k8s community is working on an abstraction,
 AutoScaler [2], on top of ReplicationController (RC), that provides
 intention/rule-based autoscaling. There would be a requirement to collect
 cAdvisor/Heapster stats to signal the AutoScaler too. Probably this is
 beyond the scope of OpenStack.
 
 Any thoughts and ideas on how to realize this use-case would be appreciated.
 
 
 [1] 
 https://review.openstack.org/gitweb?p=openstack%2Fceilometer-specs.git;a=commitdiff;h=6ea7026b754563e18014a32e16ad954c86bd8d6b
 [2] 
 https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md
 
 Regards,
 Rabi Mishra
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [all] Question for the TC candidates

2015-04-28 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2015-04-27 23:20:39 +0100:
 On Mon, 27 Apr 2015, Doug Hellmann wrote:
 
  I believe all of the posts were on the main OpenStack foundation blog
  under the technical committee tag [1], and they also went to
  planet.openstack.org for folks who subscribe to the entire community
  feed.
 
 Ah. Some things about that:
 
 * in the right sidebar, under categories, there is no category for
the tag technical-committee
 * assuming the blog is up to date there were three postings in that
tag last year, and none so far this year
 * there are some posts from this year, but they didn't get the tag

Obviously we need to work on our consistency, there. :-)

I scanned the archives and found a few posts starting in July and
covering the big themes of what we were discussing for the last half of
Juno.

http://www.openstack.org/blog/2014/07/openstack-technical-committee-update-july-1/
http://www.openstack.org/blog/2014/09/latest-technical-committee-updates/
http://www.openstack.org/blog/2014/10/openstack-technical-committee-update-2/

I think this post from Anne was inspired by a discussion we had in a
TC meeting, but I'm not sure if it was meant as an update:

http://www.openstack.org/blog/2014/12/studying-midcycle-sprints-and-meetings/

And then Thierry's post from this year summarizing the big tent work:

http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/

I didn't find any summaries after that, but I was scanning quickly and
might have missed one.

 
  The only way I've been able to get any sense of what the TC might be
  up to is by watching the governance project on gerrit and that tends
  to be too soon and insufficiently summarized and thus a fair bit of
  work to separate the details from the destinations.
 
  I think we chose blog posts for their relative permanence, and
  retweetability. Maybe we should post to the mailing list instead,
  if the contributor community follows the list more regularly than
  blogs?
 
 I think on a blog is a great idea, but my point with above and the
 earlier message is either that the blogging is not happening or I'm not
 finding it. The impression I got from your earlier message was that
 summaries from the meetings were being produced. The TC met more than
 three times in 2014, yes? So either something is amiss with linking
 up the blog posts or the summaries aren't happening.

We didn't summarize each meeting. There were only meant to be posts
every few weeks, since often a topic will take a couple of weeks
to iron out.  We started in July, mid-way through Juno, which also
contributed to a lower count.

 
 I think it would be great if there were weekly or montly summaries. They
 can go to whatever medium is deemed appropriate but it is critical that
 new folk are able to find them.
 
 _Summaries_ are critical as it is important that the information is
 digested and contextualized so its relevance can be more clear. I
 know that I can read the IRC logs but I suspect most people don't
 want to make the time for that.
 

As you say, the meeting logs are published, but it's not reasonable
to expect everyone to read those.  A fair amount of the discussion
is happening in gerrit review comments now, too, and those aren't
part of the meeting logs.

OTOH, I'm not sure a strict weekly summary is going to be that
useful. We do not always resolve all issues in a week, so we won't
always have a conclusion to report. Ongoing discussions are tracked
either here on the mailing list or in gerrit, and I'm not sure we
want to try to duplicate that information to summarize it. So we
need to find a balance, and I agree that we need to continue posting
summaries.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] implement openvswitch container

2015-04-28 Thread FangFenghua
I want to enable an openvswitch container. I think I can do that like this:

1. Add a container that runs the ovs process.
2. Add a container that runs the neutron-openvswitch-agent.
3. Share the db.sock in compose.yaml.
4. Add configure and check scripts for the 2 containers.

Is that all I need to do?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improving OpenStack documentation around RabbitMQ

2015-04-28 Thread Michael Klishin
On 28 April 2015 at 16:44:32, Michael Klishin (mklis...@pivotal.io) wrote:
 At this stage I'm trying to understand the process more than anything,
 e.g. how documentation improvements to Kilo can be contributed after it
 ships.
  
 Some of the improvements we have in mind are not HA-related.

I've decided to start a proper new thread on openstack-docs with my original 
questions.

Let's continue there. 
--  
MK  

Staff Software Engineer, Pivotal/RabbitMQ  



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Problem in loading image in sahara open stack

2015-04-28 Thread Nikolay Starodubtsev
Hi Sonal,
Am I right that you can't just upload an image to the Glance image registry?



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-04-28 16:28 GMT+03:00 Sonal Singh sonal.si...@aricent.com:

  Hi,



 I have installed Sahara openstack using devstack.



 Now I am going to create a multi-node cluster in Sahara. For this I have made
 1 master node and 4 worker nodes. Now, when I am going to launch this cluster,
 I need the sahara-juno-vanilla image.

 For this, I have downloaded the image. Now, when I am trying to upload it
 into the OpenStack image store, its status hangs in the 'queued' state or it
 gets killed, but it never reaches the 'active' state. Please find the
 snapshot below:





 Can you please provide me any solution regarding this. Any help will be
 highly appreciated.



 Thanks & Regards

 Sonal






  DISCLAIMER: This message is proprietary to Aricent and is intended
 solely for the use of the individual to whom it is addressed. It may
 contain privileged or confidential information and should not be circulated
 or used for any purpose other than for what it is intended. If you have
 received this message in error, please notify the originator immediately.
 If you are not the intended recipient, you are notified that you are
 strictly prohibited from using, copying, altering, or disclosing the
 contents of this message. Aricent accepts no responsibility for loss or
 damage arising from the use of the information transmitted by this email
 including damage from virus.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improving OpenStack documentation around RabbitMQ

2015-04-28 Thread Michael Klishin
On 28 April 2015 at 16:33:35, Davanum Srinivas (dava...@gmail.com) wrote:
 Hello Michael.
  
 Just moving your thread to the correct mailling list.

Apologies, I've signed up to openstack-docs now and will re-post there. 
--  
MK  

Staff Software Engineer, Pivotal/RabbitMQ  



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improving OpenStack documentation around RabbitMQ

2015-04-28 Thread Michael Klishin
On 28 April 2015 at 16:33:35, Davanum Srinivas (dava...@gmail.com) wrote:
 Have you seen this?
 https://github.com/openstack/ha-guide/tree/master/doc/high-availability-guide/ha_aa_rabbitmq
   
  
 That url was built from this github repo:
 https://github.com/openstack/ha-guide/tree/master/doc/high-availability-guide/ha_aa_rabbitmq
   
  
 There's a weekly meeting for the HA documentation to meet people  
 working on the HA guide:
 https://wiki.openstack.org/wiki/Meetings#HA_Guide_Update_Meeting  

Thank you, I'll take a look.

At this stage I'm trying to understand the process more than anything. E.g. how 
can
documentation improvements to Kilo be contributed after it ships.

Some of the improvements we have in mind are not HA-related. 
--  
MK  

Staff Software Engineer, Pivotal/RabbitMQ  



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Glance] Hierarchical multitenancy and Glance?

2015-04-28 Thread Rodrigo Duarte Sousa

Hi all,

Our team at the Federal University of Campina Grande implemented the
initial Hierarchical Multitenancy support and is now implementing
the Reseller use case in Keystone.


Already answering Travis's question: in the Reseller solution we are
merging the domain and project entities. Domains are going to be a
feature of projects - if a project has the domain feature enabled,
it will behave exactly as domains currently behave (being a container of
users). With domains being projects, they will be part of the same
hierarchy. For more details you may read the spec:
https://github.com/openstack/keystone-specs/blob/master/specs/liberty/reseller.rst
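Purely as an illustration of the idea (this is pseudodata; the attribute
names are not the spec's actual schema), a reseller hierarchy can be pictured
as a single tree of projects where some nodes carry the domain feature:

reseller_hierarchy = {
    "name": "reseller-a",
    "acts_as_domain": True,           # hypothetical flag name
    "children": [
        {"name": "customer-1", "acts_as_domain": True, "children": [
            {"name": "dev", "children": []},
            {"name": "prod", "children": []},
        ]},
    ],
}


def walk(project, depth=0):
    # Print the hierarchy with indentation to visualize the nesting.
    marker = " (domain)" if project.get("acts_as_domain") else ""
    print("  " * depth + project["name"] + marker)
    for child in project.get("children", []):
        walk(child, depth + 1)


walk(reseller_hierarchy)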


And yes, we need to extend the Hierarchical Multitenancy concept to
other projects; our team is already working on Horizon and is in contact
with Sajeesh (Nova). We are definitely interested in participating in the
proposed design session and the discussions that could emerge from it.


--

Rodrigo Duarte

On 28-04-2015 10:59, Geoff Arnold wrote:

Yes. 100% upstream.

And although I’ve referred to it as “reseller” (following the previous 
Keystone BP), it’s a much more generic pattern. Long term, I think it 
turns into something like a supply chain framework for services.


Geoff

On Apr 28, 2015, at 3:51 AM, Tim Bell tim.b...@cern.ch wrote:


Geoff,

Would the generic parts of your “reseller” solution be contributed to 
the upstream projects (e.g. glance, horizon, ceilometer) ? It would 
be good to get the core components understanding hierarchical 
multitenancy for all the use cases.


The nova quota work is being submitted upstream for Liberty by 
Sajeesh 
(https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api)


The cinder quota proposal is also underway 
(https://blueprints.launchpad.net/cinder/+spec/cinder-nested-quota-driver)


Tim

*From:*Geoff Arnold [mailto:ge...@geoffarnold.com]
*Sent:* 28 April 2015 08:11
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Keystone][Glance] Hierarchical 
multitenancy and Glance?


Use cases:

https://wiki.openstack.org/wiki/HierarchicalMultitenancy

Blueprints:

(Kilo):

https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy

https://blueprints.launchpad.net/keystone/+spec/reseller

(Liberty):

https://blueprints.launchpad.net/nova/+spec/multiple-level-user-quota-management

https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api

(Pending):

https://blueprints.launchpad.net/horizon/+spec/hierarchical-projects

https://blueprints.launchpad.net/horizon/+spec/inherited-roles

As for adoption, it’s hard to say. The HMT work in Keystone was a 
necessary starting point, but in order to create a complete solution 
we really need the corresponding changes in Nova (quotas), Glance 
(resource visibility), Horizon (UI scoping), and probably Ceilometer 
(aggregated queries). We (Cisco) are planning to kick off a 
Stackforge project to knit all of these things together into a 
complete “reseller” federation system. I’m assuming that there will 
be other system-level compositions of the various pieces.


Geoff

On Apr 27, 2015, at 9:48 PM, Tripp, Travis S travis.tr...@hp.com wrote:

Geoff,

Getting a spec on HMT would be helpful, as Nikhil mentioned.

As a general question, what is the current adoption of domains vs.
hierarchical projects? Is there a wiki or something that highlights what
the desired path forward is with regard to domains?

Thanks,
Travis

On 4/27/15, 7:16 PM, Geoff Arnold ge...@geoffarnold.com wrote:


Good points. I'll add some details. I'm sure the Reseller guys will have
some comments.

Geoff


On Apr 27, 2015, at 3:32 PM, Nikhil Komawar nikhil.koma...@rackspace.com wrote:

Thanks Geoff. Added some notes and questions.

-Nikhil


From: Geoff Arnold ge...@geoffarnold.com
Sent: Monday, April 27, 2015 5:50 PM
To: OpenStack Development Mailing List (not for usage
questions)
Subject: [openstack-dev] [Keystone][Glance] Hierarchical
multitenancy
and   Glance?

In preparation for Vancouver, I've been looking for blueprints and
design summit discussions involving the application of the Keystone
hierarchical multitenancy work to other OpenStack projects. One obvious
candidate is Glance, where, for example, we might want domain-local
resource visibility as a default. Despite my searches, I wasn't able to
find anything. Did I miss something obvious?


Re: [openstack-dev] [Keystone][Glance] Hierarchical multitenancy and Glance?

2015-04-28 Thread Tim Bell
Geoff,

Would the generic parts of your reseller solution be contributed to the
upstream projects (e.g. glance, horizon, ceilometer)? It would be good to get
the core components understanding hierarchical multitenancy for all the use
cases.

The nova quota work is being submitted upstream for Liberty by Sajeesh 
(https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api)

The cinder quota proposal is also underway 
(https://blueprints.launchpad.net/cinder/+spec/cinder-nested-quota-driver)

Tim

From: Geoff Arnold [mailto:ge...@geoffarnold.com]
Sent: 28 April 2015 08:11
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Keystone][Glance] Hierarchical multitenancy and 
Glance?

Use cases:
https://wiki.openstack.org/wiki/HierarchicalMultitenancy

Blueprints:
(Kilo):
https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy
https://blueprints.launchpad.net/keystone/+spec/reseller
(Liberty):
https://blueprints.launchpad.net/nova/+spec/multiple-level-user-quota-management
https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api
(Pending):
https://blueprints.launchpad.net/horizon/+spec/hierarchical-projects
https://blueprints.launchpad.net/horizon/+spec/inherited-roles

As for adoption, it's hard to say. The HMT work in Keystone was a necessary 
starting point, but in order to create a complete solution we really need the 
corresponding changes in Nova (quotas), Glance (resource visibility), Horizon 
(UI scoping), and probably Ceilometer (aggregated queries). We (Cisco) are 
planning to kick off a Stackforge project to knit all of these things together 
into a complete reseller federation system. I'm assuming that there will be 
other system-level compositions of the various pieces.

Geoff

On Apr 27, 2015, at 9:48 PM, Tripp, Travis S travis.tr...@hp.com wrote:

Geoff,

Getting a spec on HMT would be helpful, as Nikhil mentioned.

As a general question, what is the current adoption of domains vs.
hierarchical projects? Is there a wiki or something that highlights what
the desired path forward is with regard to domains?

Thanks,
Travis

On 4/27/15, 7:16 PM, Geoff Arnold ge...@geoffarnold.com wrote:


Good points. I'll add some details. I'm sure the Reseller guys will have
some comments.

Geoff


On Apr 27, 2015, at 3:32 PM, Nikhil Komawar nikhil.koma...@rackspace.com wrote:

Thanks Geoff. Added some notes and questions.

-Nikhil


From: Geoff Arnold ge...@geoffarnold.com
Sent: Monday, April 27, 2015 5:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Keystone][Glance] Hierarchical multitenancy
and   Glance?

In preparation for Vancouver, I've been looking for blueprints and
design summit discussions involving the application of the Keystone
hierarchical multitenancy work to other OpenStack projects. One obvious
candidate is Glance, where, for example, we might want domain-local
resource visibility as a default. Despite my searches, I wasn't able to
find anything. Did I miss something obvious?

I've added a paragraph to
https://etherpad.openstack.org/p/liberty-glance-summit-topics to make
sure it doesn't get overlooked.
Cheers,

Geoff

_
_
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


_
_
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Dependency management summit sessions

2015-04-28 Thread Sean Dague
On 04/27/2015 05:53 PM, Robert Collins wrote:
 Hi, so  -
 
 https://libertydesignsummit.sched.org/event/4da45ec1390dadcfc1d8a73decbf3f19#.VT6urd_va00
 
 Is an ops track session about dependencies - focusing on the
 operational side (mysql, mongo, rabbit etc). I'd love to have some
 developer centric folk in the room, though I'll be doing my best to
 capture the ops constraints. My hope is to have a really clear story
 about what we can and should depend on, and the cost of non-boring
 tech, for our operators.
 
 On 16 April 2015 at 22:32, Sean Dague s...@dague.net wrote:
 ...
 Possibly, the devil is in the details. Also it means it won't play nice
 with manual pip installs before / after, which are sometimes needed.

 Mostly I find it annoying that pip has no global view, so all tools that
 call it have to construct their own global view and only ever call pip
 once to get a right answer. It feels extremely fragile. It's also not
 clear to me how easy it's going to be to debug.
 
 We don't need to do that. I've put some expanded thoughts together here:
 
 https://rbtcollins.wordpress.com/2015/04/28/dealing-with-deps-in-openstack/
 
 I *think* that avoids the omg refactor-devstack thing, at the cost of
 a small additional feature in pip.
 
 Thierry says we're going to slot this into
 https://libertydesignsummit.sched.org/event/da0f31eddd0def88c6c51fb131fe87bd#.VT6v1N_va00
 or the followup after lunch - I'd like to pin it down more than that,
 if we can.

I'm still generally suspicious of the precompute / install model, because
solving that ends up being... interesting sometimes. I also think there
is a related issue of dependencies for optional features which, because
they are inconsistently dealt with, exacerbates things. This includes
things like drivers and db backends.

After the giant unwind Doug, Clark, and I started writing up the
following - https://etherpad.openstack.org/p/requirements-future

I do think we need a summit discussion, and I also think pip needs some
fixes, but this needs a lot of heads together to get to a plan, because
many individuals thought they had nailed this issue in the past, and
were wrong.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Kilo RC3 available

2015-04-28 Thread Thierry Carrez
Hello everyone,

Due to a security issue (bug 1447883) discovered in RC2 testing, a new
Neutron release candidate was just created for Kilo. The list of RC3
last-minute fixes, as well as the RC3 tarballs are available at:

https://launchpad.net/neutron/kilo/kilo-rc3

At this late stage, these tarballs are very likely to be formally
released as the final Kilo version on April 30. You are therefore
strongly encouraged to test and validate them !

Alternatively, you can directly test the stable/kilo branches at:
https://github.com/openstack/neutron/tree/stable/kilo
https://github.com/openstack/neutron-fwaas/tree/stable/kilo
https://github.com/openstack/neutron-lbaas/tree/stable/kilo
https://github.com/openstack/neutron-vpnaas/tree/stable/kilo

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/neutron/+filebug

and tag it *kilo-rc-potential* to bring it to the release crew's attention.

Thanks!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Navigating the ever changing OpenStack codebase

2015-04-28 Thread Jay Pipes

On 04/27/2015 06:38 PM, Kevin L. Mitchell wrote:

On Mon, 2015-04-27 at 21:42 +, Jeremy Stanley wrote:

I consider it an unfortunate oversight that those files weren't
deleted a very, very long time ago.


Unfortunately, there's one problem with that: you can't tell tox to use
a virtualenv that you've built.  We need this capability at present, so
we have to run tests using run_tests.sh instead of tox :(  I have an
issue open on tox to address this need, but haven't seen any movement on
that; so until then, I have to oppose the removal of run_tests.sh…
despite how much *I'd* like to see it bite the dust!


Honestly, I see no problem with some helper bash scripts that simplify 
life for new contributors. The bash scripts do wonders for developers 
new to OpenStack or Python coding by having a pretty easy and readable 
way of determining what CLI commands are used to execute tests. Hell, 
devstack [1] itself was written originally in the way it was to 
well-document the deployment process for OpenStack. Many packagers and 
configuration management script authors have looked at devstack's Bash 
scripts for inspiration and instruction in this way.


The point Ronald was making that nobody seems to have addressed is the 
very valid observation that as a new contributor, it can be very 
confusing to go from one project to another and see different ways of 
running tests. Some projects have run_tests.sh and still actively 
promote it in the devref docs. Others don't.


While Ronald seems to have been the victim of unfortunate timing (he 
started toying around with python-openstackclient and within a week, 
they removed the script he was using to run tests), that doesn't make 
his point about our inconsistency moot.


Best,
-jay

[1] http://docs.openstack.org/developer/devstack/stack.sh.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Elections] Vote Vote Vote in the TC election!

2015-04-28 Thread Tristan Cacqueray
We are coming down to the last day (plus a few hours) for voting in the TC
election.

Search your gerrit preferred email address[0] for the following subject:
  Poll: OpenStack Technical Committee (TC) Election - April 2015

That is your ballot and links you to the voting application. Please
vote. If you have voted, please encourage your colleagues to vote.

Candidate statements are linked to the names of all confirmed candidates:

https://wiki.openstack.org/wiki/TC_Elections_April_2015#Confirmed_Candidates

What to do if you don't see the email and have a commit in at least one
of the official programs projects[1]:
  * check the trash of your gerrit Preferred Email address[0], in case
it went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project
repos[1] and email me and Elizabeth[2]. If we can confirm that you
are entitled to vote, we will add you to the voters list and you
will be emailed a ballot.

Please vote!

Thank you,
Tristan

[0] Sign into review.openstack.org: Go to Settings  Contact
Information. Look at the email listed as your Preferred Email.
That is where the ballot has been sent.
[1]
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=april-2015-elections
[2] Elizabeth K. Joseph (pleia2): lyz at princessleia dot com
Tristan (tristanC): tristan dot cacqueray at enovance dot com



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet when creating a pool?

2015-04-28 Thread Wanjing Xu
Brandon
Just saw this. So using Horizon, when adding a member to the pool, I can
either select from the active VM list or just specify the IP address of the
member. There is no subnet selection on this page. So what is the use of the
pool subnet that I have to enter when creating a pool in Horizon?

Thanks,
Wanjing

From: brandon.lo...@rackspace.com
To: openstack-dev@lists.openstack.org
Date: Tue, 28 Apr 2015 16:19:37 +
Subject: Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet 
when creating a pool?







​So someone pointed out that you were using lbaas for Juno, which would mean 
you aren't using LBaaS V2. So you're using V1. V1 members do not take
subnet_id as an attribute. Let me know how you are making your requests.

Thanks,
Brandon





From: Brandon Logan brandon.lo...@rackspace.com

Sent: Monday, April 27, 2015 8:40 PM

To: OpenStack Development Mailing List not for usage questions

Subject: Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet 
when creating a pool?
 


I'm assuming you are using LBaaS V2. With that assumption, I'm not sure how
you are having to select a subnet on the pool. It's not supposed to be a
field at all on the pool object. subnet_id is required on the member object
right now, but that's something I and others think should just be optional;
if not specified, then it's assumed that the member can be reached with
whatever has already been set up. Another option is that pool could get a
subnet_id field in the future, and all members created without subnet_id are
assumed to be on the pool's subnet_id, but I'm getting ahead of myself and
this has no bearing on your current issue.

Could you tell me how you are making your requests? CLI? REST directly?





From: Wanjing Xu wanjing...@hotmail.com

Sent: Monday, April 27, 2015 12:57 PM

To: OpenStack Development Mailing List not for usage questions

Subject: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet when 
creating a pool?
 


So when I use Haproxy for LBaaS for Juno, there is a mandatory subnet field 
that I need to fill in when creating a pool, and later on when I add members, I 
can use a different subnet(or simply just enter the ip of the member), when 
adding vip,
 I can still select a third subnet.  So what is the usage of the first subnet 
that I used to create pool?  There is no port created for this pool subnet.  I 
can see that a port is created for the vip subnet that the loadbalancer 
instance is binding to.



Regards!



Wanjing Xu








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet when creating a pool?

2015-04-28 Thread Bharath M
Hi Wanjing,

As it's Juno, I assume you are using LBaaSv1. If that's the case, as
Brandon pointed out, there's no subnet-id switch in the neutron
lb-member-create command.

Having said that, you still use the subnet-id in both of the following commands:
neutron lb-pool-create
neutron lb-vip-create

You should note that the subnet id in each of the above commands serves a
different purpose. In the case of lb-pool-create, the subnet-id is
provided to make sure that only members belonging to the specified
subnet-id can be added to the pool.

However, the subnet id in the lb-vip-create command specifies the network
range from which an IP is chosen to be assigned as the VIP.

Thus, you could use different subnets for the two commands, and as
long as you have a route between them, the load balancing works.
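
To make the two subnets concrete, here is a rough python-neutronclient sketch
(untested, and only an illustration: the credentials, names and the two subnet
IDs below are placeholders, not values from this thread):

# Illustration only -- assumes a Juno python-neutronclient and LBaaS v1.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# Pool subnet: the subnet the members are expected to live on / be reachable from.
pool = neutron.create_pool({'pool': {
    'name': 'web-pool', 'protocol': 'HTTP', 'lb_method': 'ROUND_ROBIN',
    'subnet_id': 'MEMBER_SUBNET_ID'}})['pool']

# LBaaS v1 members take only an address and a port, no subnet_id.
neutron.create_member({'member': {
    'pool_id': pool['id'], 'address': '10.0.1.10', 'protocol_port': 80}})

# VIP subnet: the subnet the VIP port and its IP address are allocated from.
neutron.create_vip({'vip': {
    'name': 'web-vip', 'protocol': 'HTTP', 'protocol_port': 80,
    'pool_id': pool['id'], 'subnet_id': 'VIP_SUBNET_ID'}})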

Thanks,
Bharath.


On Tue, Apr 28, 2015 at 9:19 AM, Brandon Logan brandon.lo...@rackspace.com
wrote:

  ​So someone pointed out that you were using lbaas for Juno, which would
 mean you aren't using LBaaS V2.  So you're using V1.  V1 members do not
 take subnet_id as an attribute.  Let me know how you are making your
 requests.


  Thanks,

 Brandon
  --
 *From:* Brandon Logan brandon.lo...@rackspace.com
 *Sent:* Monday, April 27, 2015 8:40 PM
 *To:* OpenStack Development Mailing List not for usage questions
 *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select
 subnet when creating a pool?


 I'm assuming you are using LBaaS V2.  With that assumption, I'm not sure
 how you are having to select subnet on the pool.  It's not supposed to be a
 field at all on the pool object.  subnet_id is required on the member
 object right now, but that's something I and others think should just be
 optional, and if not specified then it's assumed that member can be reached
 with whatever has already been setup.​  Another option is pool could get a
 subnet_id field in the future and all members that are created without
 subnet_id are assumed to be on the pool's subnet_id, but I'm getting ahead
 of myself and this has no bearing on your current issue.


  Could you tell me how you are making your requests? CLI? REST directly?
  --
 *From:* Wanjing Xu wanjing...@hotmail.com
 *Sent:* Monday, April 27, 2015 12:57 PM
 *To:* OpenStack Development Mailing List not for usage questions
 *Subject:* [openstack-dev] [Neutron][LBaaS] Why do we need to select
 subnet when creating a pool?

  So when I use Haproxy for LBaaS for Juno, there is a mandatory subnet
 field that I need to fill in when creating a pool, and later on when I add
 members, I can use a different subnet(or simply just enter the ip of the
 member), when adding vip, I can still select a third subnet.  So what is
 the usage of the first subnet that I used to create pool?  There is no port
 created for this pool subnet.  I can see that a port is created for the vip
 subnet that the loadbalancer instance is binding to.

  Regards!

  Wanjing Xu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-28 Thread loy wolfe
On Wed, Apr 29, 2015 at 2:59 AM, Kevin Benton blak...@gmail.com wrote:
 The concern is that having broken drivers out there that claim to work with
 an OpenStack project end up making the project look bad. It's similar to a
 first time Linux user experiencing frequent kernel panics because they are
 using hardware with terrible drivers. They aren't going to recognize the
 distinction and will just assume the project is bad.


I think the focal point is not about device drivers for the real
backend such as OVS/LB or HW TOR, but ML2 vs. external SDN controllers,
which are also claimed to be backends by some people.

Again, an analogy with Linux, which has a socket layer exposing the API,
a common tcp/ip stack and common netdev & skbuff, while each NIC has its
own device driver (real backend). While it makes sense to discuss
whether those backend device drivers should be split out of tree,
there was never any consideration that the common middle stacks should be
split out for equal footing with some other external implementations.

Things are similar with Nova & Cinder: we may have all kinds of virt
drivers and volume drivers, but only one common scheduling &
compute/volume manager implementation. For Neutron it is necessary to
support hundreds of real backends, but does it really benefit
customers to put ML2 on an equal footing with a bunch of other external SDN
controllers?

Best Regards



I would love to see OpenStack upstream acting more like a resource to
 support users and developers

 I'm not sure what you mean here. The purpose of 3rd party CI requirements is
 to signal stability to users and to provide feedback to the developers.

 On Tue, Apr 28, 2015 at 4:18 AM, Luke Gorrie l...@tail-f.com wrote:

 On 28 April 2015 at 10:14, Duncan Thomas duncan.tho...@gmail.com wrote:

 If we allow third party CI to fail and wait for vendors to fix their
 stuff, experience has shown that they won't, and there'll be broken or
 barely functional drivers out there, and no easy way for the community to
 exert pressure to fix them up.


 Can't the user community exert pressure on the driver developers directly
 by talking to them, or indirectly by not using their drivers? How come
 OpenStack upstream wants to tell the developers what is needed before the
 users get a chance to take a look?

 I would love to see OpenStack upstream acting more like a resource to
 support users and developers (e.g. providing 3rd party CI hooks upon request)
 and less like gatekeepers with big sticks to wave at people who don't drop
 their own priorities and Follow The Process.




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Why ceilometer do not offer run_tests.sh script?

2015-04-28 Thread Luo Gangyi
Hi guys,


When I try to run the unit tests of ceilometer, I find there is no run_tests.sh 
script offered.


And when I use tox directly, I get the message 'Could not find mongod command'.


So another question is: why do the unit tests need mongo?


Can someone give me a hint?


Thanks a lot.
--
Luo Gangyi  luogan...@chinamobile.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposal for Madhuri Kumari to join Core Team

2015-04-28 Thread Jay Lau
+1, welcome Madhuri!

2015-04-29 0:00 GMT+08:00 Adrian Otto adrian.o...@rackspace.com:

  +1

  On Apr 28, 2015, at 8:14 AM, Steven Dake (stdake) std...@cisco.com
 wrote:

  Hi folks,

  I would like to nominate Madhuri Kumari  to the core team for Magnum.
 Please remember a +1 vote indicates your acceptance.  A –1 vote acts as a
 complete veto.

  Why Madhuri for core?

1. She participates on IRC heavily
2. She has been heavily involved in a really difficult project  to
remove Kubernetes kubectl and replace it with a native python language
binding which is really close to be done (TM)
3. She provides helpful reviews and her reviews are of good quality

 Some of Madhuri’s stats, where she performs in the pack with the rest of
 the core team:

  reviews: http://stackalytics.com/?release=kilo&module=magnum-group
 commits:
 http://stackalytics.com/?release=kilo&module=magnum-group&metric=commits

   Please feel free to vote if you're a Magnum core contributor.

  Regards
 -steve


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Two nominations for Manila Core Reviewer Team

2015-04-28 Thread Ben Swartzlander

On 04/22/2015 02:23 PM, Ben Swartzlander wrote:
I would like to nominate Thomas Bechtold to join the Manila core 
reviewer team. Thomas has been contributing to Manila for close to 6 
months and has provided a good number of quality code reviews in 
addition to a substantial amount of contributions. Thomas brings both 
Oslo experience as well as a packager/distro perspective which is 
especially helpful as Manila starts to get used in more production 
scenarios.


I would also like to nominate Mark Sturdevant. He has also been active 
in the community for about 6 months and has a similar history of code 
reviews. Mark is the maintainer of the HP driver and would add vendor 
diversity to the core team.




Welcome Mark Sturdevant and Thomas Bechtold to the Manila core reviewer 
team!


-Ben Swartzlander



-Ben Swartzlander
Manila PTL



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Lbaas v2 not showing operating_status as inactive

2015-04-28 Thread Brandon Logan
So that is the right URL for the statuses call.  As I understand the issue, the 
statuses call is correctly changing the operating status to DISABLED, correct?


The problem is when you do an operation on a loadbalancer and admin_state_up = 
False.  In that case the body returned for those operations (POST, PUT, GET) 
should show the operating status as DISABLED, and it does not.  This is a bug, and I 
believe it would be quite simple to fix.  You won't need to call the statuses 
method, as that is just the method that is called when the /statuses resource is 
called.  The create_loadbalancer, get_loadbalancer, get_loadbalancers, and 
update_loadbalancer methods will just need to change the operating_status to 
DISABLED if admin_state_up is False.  It should be a very simple change, actually.
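
A rough sketch of the kind of change meant here (a hypothetical helper, not the
actual patch; it assumes a DISABLED constant exists in the neutron-lbaas
loadbalancer constants module, otherwise the literal 'DISABLED' string would do):

# Hypothetical sketch only -- not the actual fix.
from neutron_lbaas.services.loadbalancer import constants as lb_const


def _mask_operating_status(api_dict):
    # If the object is administratively down, report it as DISABLED.
    if not api_dict.get('admin_state_up', True):
        api_dict['operating_status'] = lb_const.DISABLED
    return api_dict

# ... and create_loadbalancer/get_loadbalancer(s)/update_loadbalancer would run
# their response body through _mask_operating_status() before returning it.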


Let me know if I am articulating the problem correctly.


Thanks,

Brandon


From: Madhusudhan Kandadai madhusudhan.openst...@gmail.com
Sent: Tuesday, April 28, 2015 3:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Neutron Lbaas v2 not showing operating_status as 
inactive

Hi Anand,

There is an API which calls the 'statuses' method. I could see the status 
'DISABLED' in: GET /lbaas/loadbalancers/{loadbalancer_id}/statuses.

Maybe we need to correct the doc to reflect the right URL to avoid confusion. 
If that is the right API call, I shall update the bug and mark it as fixed.

Regards,
Madhu



On Tue, Apr 28, 2015 at 12:28 PM, Anand shanmugam 
anand1...@outlook.commailto:anand1...@outlook.com wrote:
Hi ,

I am working on the bug https://bugs.launchpad.net/neutron/+bug/1449286

In this bug the admin_state_up is made to false when creating a lbaas v2 
loadbalancer.The operating_state should become DISABLED for the created 
loadbalancer but it is showing as online.

I can see that there is a method 'statuses' which takes care of disabling the 
operating_status ( 
https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/plugin.py#L971)
 but I cannot find the method which will call this 'statuses' method.

I feel this statuses method is not called at all when creating or updating a 
loadbalancer. Could someone please help me if there is any other API to call 
this method?

Regards,
Anand S

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Core Reviewer Update

2015-04-28 Thread Lin Hua Cheng
Congrats Doug, Rob and Travis! thanks for all the hard work

On Tue, Apr 28, 2015 at 3:57 PM, David Lyle dkly...@gmail.com wrote:

 I am pleased to announce the addition of Doug Fish, Rob Cresswell and
 Travis Tripp to the Horizon Core Reviewer team.

 Doug Fish has been an active reviewer and participant in Horizon for a few
 releases now. He represents a strong customer focus and has provided high
 quality reviews.

 Rob Cresswell has been providing a high number of quality reviews, an
 active contributor and an active participant in the community.

 Travis Tripp has been contributing to Horizon for the past couple of
 releases, an active participant in the community, a critical angularJS
 reviewer, and played a significant role in driving the angular based launch
 instance work in Kilo.

 Thank you all for your contributions and welcome to the team!

 David

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Stack/Resource updated_at conventions

2015-04-28 Thread Zane Bitter

On 28/04/15 03:56, Steven Hardy wrote:

On Mon, Apr 27, 2015 at 06:41:52PM -0400, Zane Bitter wrote:

On 27/04/15 13:38, Steven Hardy wrote:

On Mon, Apr 27, 2015 at 04:46:20PM +0100, Steven Hardy wrote:

AFAICT there's two options:

1. Update the stack.Stack so we store now at every transition (e.g in
state_set)

2. Stop trying to explicitly control updated_at, and just allow the oslo
TimestampMixin to do it's job and update updated_at every time the DB model
is updated.


Ok, at the risk of answering my own question, there's a third option, which
is to output an event for all stack transitions, not only resource
transitions.  This appears to be the way the CFN event API works AFAICS.


My recollection was that in CFN events were always about a particular
resource. That may have been wrong, or they may have changed it. In any
event (uh, no pun intended), I think this option is preferable to options 1
 2.


Well from the docs I've been looking at, events are also output for stacks:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-listing-event-history.html

Here we see a stack myteststack, which generates events of ResourceType
AWS::CloudFormation::Stack, with a LogicalResourceId of myteststack.


Huh, so it does. So the only difference is that the stack events don't 
have a ResourceProperties key. Ick.



It's a bit confusing because the PhysicalResourceId doesn't match the
StackId, but I'm interpreting this as an event from the stack rather than a
resource inside the stack.  Could be that it's just a bad example though.


When we first implemented this stuff we only operated on one resource at a
time, there was no way to cancel an update, &c. It was a simpler world ;)


Yeah, true - and (with the benefit of hindsight) events are a really bad
interface for hook polling, which is what I'm currently trying to work
around.

Trying to do this has exposed how limited our event API is though, so IMO
it's worth trying to fix this for the benefit of all API consumers.


I guess the event would have a dummy OS::Heat::Stack type and then you


That's too hacky IMHO, I think we should have a more solid way of
distinguishing resource events from stack events. OS::Heat::Stack is a type
of resource already, after all. Arguably they should come from separate
endpoints, to avoid breaking clients until we get to a v2 API.


I disagree about the separate endpoint (not least because it implies hooks
will be unusable for kilo):

Looking more closely at our native event API:

http://developer.openstack.org/api-ref-orchestration-v1.html#stack-events

The path for events is:

/v1/{tenant_id}/stacks/{stack_name}/events

This, to me (historical resource-ness aside) implies events associated with
a particular stack - IMHO it's fair game to output both events associated
with the stack itself here and the resources contained by the stack.

If we were to use some other endpoint, I don't even know what we would
use, because intuitively the path above is the one which makes sense for
events associated with a stack?


I'm not saying it's the wrong place, but somehow, somewhere, it will 
break some client who is not expecting it.



I'm open to using something other than OS::Heat::Stack, but that to me is
the most obvious option, which fits OK with the current resource-orientated
event API response payload - it is the resource which describes a stack
after all (and it potentially aligns with the AWS interface I mention above.)


For consistency with CloudFormation, I agree that's the obvious choice. 
I withdraw my objection.



could find the most recent transition to e.g UPDATE_IN_PROGRESS in the
events and use that as a marker so you only list results after that event?


Even that is not valid in a distributed system. For convergence we're
planning to have a UUID associated with each update. We should reuse that to
connect events with particular update traversals.


There's still going to be some event (or at least a point in time) where an
API request for update-stack is recieved, and the stack, as a whole, moves
from a stable state (COMPLETE/FAILED) into an in-progress one though, is
there not?

I'm not really sure why distribution of the update workload will affect the
nature of that initial transition, other than that there may be multiple
passes before we reach the final transition back into a stable state (e.g
potentially multiple updates on resources before we stop updating the stack
as a whole)?


Sorry that was far too vague, I should have been more clear: 
establishing the order of events by timestamp is not a valid strategy 
for a distributed system because time is not monotonic in a distributed 
system.


cheers,
Zane.


Anyway, https://review.openstack.org/#/c/177961/2 has been approved now -
I'm happy to follow up if you have specific suggestions on how we can
improve it.

Cheers,

Steve
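
[For anyone following along: a minimal consumer-side sketch of polling such
stack-level events through the existing endpoint, assuming they surface in the
event list with a logical resource id equal to the stack name as discussed
above -- the endpoint, token and stack name below are placeholders:]

# Sketch only; assumes python-heatclient v1 and the stack-level events discussed above.
from heatclient import client as heat_client

heat = heat_client.Client('1', endpoint='http://heat:8004/v1/TENANT_ID',
                          token='AUTH_TOKEN')

stack_name = 'mystack'
events = heat.events.list(stack_id=stack_name)

# Keep only events emitted by the stack itself, not by its resources.
stack_events = [e for e in events if e.logical_resource_id == stack_name]
for ev in stack_events:
    print(ev.event_time, ev.resource_status, ev.resource_status_reason)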

__
OpenStack Development Mailing List 

Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet when creating a pool?

2015-04-28 Thread Wanjing Xu
Thanks, Brandon.  I am using V1, and I am trying to create it via horizon; this field 
is a mandatory field on the horizon create-pool page.
Wanjing

From: brandon.lo...@rackspace.com
To: openstack-dev@lists.openstack.org
Date: Tue, 28 Apr 2015 01:40:00 +
Subject: Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet 
when creating a pool?







I'm assuming you are using LBaaS V2.  With that assumption, I'm not sure how 
you are having to select subnet on the pool.  It's not supposed to be a field 
at all on the pool object.  subnet_id is required on the member object right 
now, but that's something
 I and others think should just be optional, and if not specified then it's 
assumed that member can be reached with whatever has already been setup.​  
Another option is pool could get a subnet_id field in the future and all 
members that are created without
 subnet_id are assumed to be on the pool's subnet_id, but I'm getting ahead of 
myself and this has no bearing on your current issue.







Could you tell me how you are making your requests? CLI? REST directly?





From: Wanjing Xu wanjing...@hotmail.com

Sent: Monday, April 27, 2015 12:57 PM

To: OpenStack Development Mailing List not for usage questions

Subject: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet when 
creating a pool?
 


So when I use Haproxy for LBaaS for Juno, there is a mandatory subnet field 
that I need to fill in when creating a pool, and later on when I add members, I 
can use a different subnet(or simply just enter the ip of the member), when 
adding vip,
 I can still select a third subnet.  So what is the usage of the first subnet 
that I used to create pool?  There is no port created for this pool subnet.  I 
can see that a port is created for the vip subnet that the loadbalancer 
instance is binding to.



Regards!



Wanjing Xu






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Navigating the ever changing OpenStack codebase

2015-04-28 Thread Christopher Aedo
On Tue, Apr 28, 2015 at 2:56 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-04-28 16:08:03 -0400 (-0400), Jay Pipes wrote:
 Honestly, I see no problem with some helper bash scripts that
 simplify life for new contributors.
 [...]

 [...]
 I remember it happening regularly before we started begging people
 to run tox and instead remove those scripts where possible.

The arguments for pushing people to tox make good sense, but it seems
like there will still be cases where run_tests.sh continues to exist
for whatever reason.

For the sake of the new contributors, what about including a warning
when running run_tests.sh reminding them to use tox if possible?  For
the cases where the script can't be removed entirely, it will at least
help eager new folks (who are very likely to pull a repo and look for
a helpfully named script like that - I'm certainly guilty of rushing
ahead sometimes, running scripts while reading docs...)

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Kubernetes AutoScaling with Heat AutoScalingGroup and Ceilometer

2015-04-28 Thread Georgy Okrokvertskhov
You can take a look at the Murano Kubernetes package. There is no autoscaling
out of the box, but it would be quite trivial to add a new action for that,
as there are functions to add new etcd and Kubernetes nodes on the master, as
well as a function to add a new VM.

Here is an example of a scaleUp action:
https://github.com/gokrokvertskhov/murano-app-incubator/blob/monitoring-ha/io.murano.apps.java.HelloWorldCluster/Classes/HelloWorldCluster.murano#L93

Here is Kubernetes scaleUp action:
https://github.com/openstack/murano-apps/blob/master/Docker/Kubernetes/KubernetesCluster/package/Classes/KubernetesCluster.yaml#L441

And here is the place where the Kubernetes master is updated with the new node info:
https://github.com/openstack/murano-apps/blob/master/Docker/Kubernetes/KubernetesCluster/package/Classes/KubernetesMinionNode.yaml#L90

By the way, as you can see, cAdvisor is set up on the new node too.

Thanks
Gosha


On Tue, Apr 28, 2015 at 8:52 AM, Rabi Mishra ramis...@redhat.com wrote:


 - Original Message -
  On Mon, Apr 27, 2015 at 12:28:01PM -0400, Rabi Mishra wrote:
   Hi All,
  
   Deploying Kubernetes(k8s) cluster on any OpenStack based cloud for
   container based workload is a standard deployment pattern. However,
   auto-scaling this cluster based on load would require some integration
   between k8s OpenStack components. While looking at the option of
   leveraging Heat ASG to achieve autoscaling, I came across few
 requirements
   that the list can discuss and arrive at the best possible solution.
  
   A typical k8s deployment scenario on OpenStack would be as below.
  
   - Master (single VM)
   - Minions/Nodes (AutoScalingGroup)
  
   AutoScaling of the cluster would involve both scaling of minions/nodes
 and
   scaling Pods(ReplicationControllers).
  
   1. Scaling Nodes/Minions:
  
   We already have utilization stats collected at the hypervisor level, as
   ceilometer compute agent polls the local libvirt daemon to acquire
   performance data for the local instances/nodes.
 
   I really doubt if those metrics are so useful to trigger a scaling
  operation. My suspicion is based on two assumptions: 1) autoscaling
  requests should come from the user application or service, not from the
  controller plane, the application knows best whether scaling is needed;
  2) hypervisor level metrics may be misleading in some cases. For
  example, it cannot give an accurate CPU utilization number in the case
  of CPU overcommit which is a common practice.

 I agree that correct utilization statistics is complex with virtual
 infrastructure.
 However, I think physical+hypervisor metrics (collected by compute agent)
 should be a
 good point to start.

   Also, Kubelet (running on the node) collects the cAdvisor stats.
 However,
   cAdvisor stats are not fed back to the scheduler at present and
 scheduler
   uses a simple round-robin method for scheduling.
 
  It looks like a multi-layer resource management problem which needs a
  wholistic design. I'm not quite sure if scheduling at the container
  layer alone can help improve resource utilization or not.

 k8s scheduler is going to improve over time to use the cAdvisor/heapster
 metrics for
 better scheduling. IMO, we should leave that for k8s to handle.

  My point is about getting those metrics to ceilometer, either from the nodes or
  from the
  scheduler/master.

   Req 1: We would need a way to push stats from the kubelet/cAdvisor to
   ceilometer directly or via the master(using heapster). Alarms based on
   these stats can then be used to scale up/down the ASG.
 
  To send a sample to ceilometer for triggering autoscaling, we will need
  some user credentials to authenticate with keystone (even with trusts).
  We need to pass the project-id in and out so that ceilometer will know
  the correct scope for evaluation. We also need a standard way to tag
  samples with the stack ID and maybe also the ASG ID. I'd love to see
  this done transparently, i.e. no matching_metadata or query confusions.
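
[A hedged sketch of what pushing such a sample from an agent could look like,
using python-ceilometerclient -- the credentials, counter name and resource id
below are placeholders, and tagging via resource_metadata is only one option:]

# Sketch only: an agent-side sample push to ceilometer.
from ceilometerclient import client as ceilo_client

cclient = ceilo_client.get_client(
    2, os_username='kubelet-agent', os_password='secret',
    os_tenant_name='demo', os_auth_url='http://controller:5000/v2.0')

cclient.samples.create(
    counter_name='cpu_util',        # metric reported by cAdvisor/heapster
    counter_type='gauge',
    counter_unit='%',
    counter_volume=73.5,
    resource_id='k8s-minion-1',
    resource_metadata={'stack_id': 'STACK_UUID', 'asg_id': 'ASG_UUID'})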
 
   There is an existing blueprint[1] for an inspector implementation for
   docker hypervisor(nova-docker). However, we would probably require an
   agent running on the nodes or master and send the cAdvisor or heapster
   stats to ceilometer. I've seen some discussions on possibility of
   leveraging keystone trusts with ceilometer client.
 
  An agent is needed, definitely.
 
   Req 2: Autoscaling Group is expected to notify the master that a new
 node
   has been added/removed. Before removing a node the master/scheduler
 has to
   mark node as
   unschedulable.
 
  A little bit confused here ... are we scaling the containers or the
  nodes or both?

  We would only be focusing on the nodes. However, adding/removing nodes
  without the k8s master/scheduler
  knowing about it (so that it can schedule pods or make them
  unschedulable) would be useless.

   Req 3: Notify containers/pods that the node would be removed for them
 to
   stop accepting any traffic, 

Re: [openstack-dev] Neutron Lbaas v2 not showing operating_status as inactive

2015-04-28 Thread Madhusudhan Kandadai
Yes you are right. The statuses call is correctly changing the operating
status to DISABLED, but not showing operating status as DISABLED when doing
POST/PUT/GET on loadbalancer upon specifying admin_state_up as 'False'
explicitly. Thanks for confirming though.

Madhu

On Tue, Apr 28, 2015 at 6:59 PM, Brandon Logan brandon.lo...@rackspace.com
wrote:

  ​So that is the right URL for the statuses call.  As I understand the
 issue the statuses call is correctly changing the operating status to
 DISABLED correct?


  The problem is when you do an operation on a loadbalancer and
 admin_state_up = False.  In that case the body returned for those
 operations (POST, PUT, GET) should show operating status as DISABLED, and
 it is not.  This is a bug, and I believe it would be quite simple to fix.
 You won't need to call the statuses method as that is just the method that
 is called when the /statuses resource is called.  The create_loadbalancer,
 get_loadbalancer, get_loadbalancers, and update_loadbalancer methods will
 just need to change the operating_status to DISABLED if admin_state_up is
 False.  Should be a very simple change actually.


  Let me know if I am articulating the problem correctly.


  Thanks,

 Brandon
  --
 *From:* Madhusudhan Kandadai madhusudhan.openst...@gmail.com
 *Sent:* Tuesday, April 28, 2015 3:23 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] Neutron Lbaas v2 not showing
 operating_status as inactive

Hi Anand,

  There is an api which calls 'statuses' method.. I could see the status
 'DISABLED' in: GET /lbaas/loadbalancers/loadbalancer_id/statuses.

 Maybe we need to correct the doc to reflect the right URL to avoid
 confusion. If that is the right API call, I shall update the bug and mark
 it as fixed.

  Regards,
  Madhu



 On Tue, Apr 28, 2015 at 12:28 PM, Anand shanmugam anand1...@outlook.com
 wrote:

  Hi ,

  I am working on the bug https://bugs.launchpad.net/neutron/+bug/1449286

  In this bug the admin_state_up is made to false when creating a lbaas
 v2 loadbalancer.The operating_state should become DISABLED for the created
 loadbalancer but it is showing as online.

   I can see that there is a method statuses which takes care of
  disabling the operating_status (
  https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/plugin.py#L971)
  but I cannot find the method which will call this 'statuses' method.

  I feel this statuses method is not called at all when creating or
 updating a loadbalancer.Could someone please help me if there is any other
 api to call this method?

  Regards,
 Anand S

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Core Reviewer Update

2015-04-28 Thread David Lyle
I am pleased to announce the addition of Doug Fish, Rob Cresswell and
Travis Tripp to the Horizon Core Reviewer team.

Doug Fish has been an active reviewer and participant in Horizon for a few
releases now. He represents a strong customer focus and has provided high
quality reviews.

Rob Cresswell has been providing a high number of quality reviews, an
active contributor and an active participant in the community.

Travis Tripp has been contributing to Horizon for the past couple of
releases, an active participant in the community, a critical angularJS
reviewer, and played a significant role in driving the angular based launch
instance work in Kilo.

Thank you all for your contributions and welcome to the team!

David
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Core Reviewer Update

2015-04-28 Thread Thai Q Tran
Welcome to the team Doug, Rob, and Travis!!!

-David Lyle dkly...@gmail.com wrote: -
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
From: David Lyle dkly...@gmail.com
Date: 04/28/2015 04:00PM
Subject: [openstack-dev]  [Horizon] Core Reviewer Update

I am pleased to announce the addition of Doug Fish, Rob Cresswell and Travis Tripp to the Horizon Core Reviewer team.

Doug Fish has been an active reviewer and participant in Horizon for a few releases now. He represents a strong customer focus and has provided high quality reviews.

Rob Cresswell has been providing a high number of quality reviews, an active contributor and an active participant in the community.

Travis Tripp has been contributing to Horizon for the past couple of releases, an active participant in the community, a critical angularJS reviewer, and played a significant role in driving the angular based launch instance work in Kilo.

Thank you all for your contributions and welcome to the team!

David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] beaker-rspec tests blueprint

2015-04-28 Thread Emilien Macchi


On 04/28/2015 04:16 PM, Gabriele Cerami wrote:
 Hi,
 
 for people who, like me, would like to contribute to the effort of
 adding tests to the upcoming beaker-rspec framework, could be really
 helpful discussing about the scope, requirements and goals for the
 framework and for a test to make sense in this environment.

So Spencer (nibalizer) and I are working on getting the basics bits for
Ubuntu Trusty right now.

You can follow the work here:
https://docs.google.com/spreadsheets/d/1i2z5QyvukHCWU_JjkWrTpn-PexPBnt3bWPlQfiDGsj8/edit#gid=0

and here: https://review.openstack.org/#/q/topic:bug-1444736,n,z

Once we have basic structure passing the CI, we may want to support
CentOS7: https://review.openstack.org/#/c/175434/

And once we have both ubuntu & centos happy with basic structure 
tests, we may want to support multiple scenarios, having advanced
serverspec tests, etc.

 
 I'd like to see, if possible, a blueprint proposed to the community
 about beaker-rspec.

Feel free to initiate this work on
https://github.com/stackforge/puppet-openstack-specs, taking into
consideration what we already did.
Also, make sure you're familiar with OpenStack Infra configuration
(Jenkins jobs): https://github.com/openstack-infra/project-config

Thanks for your help!

 
 thanks.
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][QoS] service-plugin or not discussion

2015-04-28 Thread Miguel Angel Ajo Pelayo

 On 24/4/2015, at 19:42, Armando M. arma...@gmail.com wrote:
 
 
 
 On 24 April 2015 at 01:47, Miguel Angel Ajo Pelayo mangel...@redhat.com wrote:
 Hi Armando & Salvatore,
 
 On 23/4/2015, at 9:30, Salvatore Orlando sorla...@nicira.com wrote:
 
 
 
 On 23 April 2015 at 01:30, Armando M. arma...@gmail.com wrote:
 
 On 22 April 2015 at 06:02, Miguel Angel Ajo Pelayo mangel...@redhat.com wrote:
 
 Hi everybody,
 
In the latest QoS meeting, one of the topics was a discussion about how 
 to implement
 QoS [1] either as in core, or as a service plugin, in, or out-tree.
 
 It is really promising that after only two meetings the team is already 
 split! I cannot wait for the API discussion to start ;)
 
 We seem to be relatively on the same page about how to model the API, but we 
 need yet to loop
 in users/operators who have an interest in QoS to make sure they find it 
 usable. [1]
 
  
 
 My apologies if I was unable to join, the meeting clashed with another one I 
 was supposed to attend.
 
 My bad, sorry ;-/
 
  
 
It’s my feeling, and Mathieu’s that it looks more like a core feature, as 
 we’re talking of
 port properties that we define at high level, and most plugins (QoS capable) 
 may want
 to implement at dataplane/controlplane level, and also that it’s something 
 requiring a good
 amount of review.
 
 Core is a term which is recently being abused in Neutron... However, I 
 think you mean that it is a feature fairly entangled with the L2 mechanisms,
 
 Not only the L2 mechanisms, but the description of ports themselves, in the 
 basic cases we’re just defining
 how “small” or “big” your port is.  In the future we could be saying “UDP 
 ports 5000-6000” have the highest
 priority on this port, or a minimum bandwidth of 50Mbps…, it’s marked with a 
 IPv6 flow label for hi-prio…
 or whatever policy we support.
 
 that deserves being integrated in what is today the core plugin and in the 
 OVS/LB agents. To this aim I think it's good to make a distinction between 
 the management plane and the control plane implementation.
 
 At the management plane you have a few choices:
 - yet another mixin, so that any plugin can add it and quickly support the 
 API extension at the mgmt layer. I believe we're fairly certain everybody 
 understands mixins are not sustainable anymore and I'm hopeful you are not 
 considering this route.
 
 Are you specifically referring to this on every plugin? 
 
 class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, ---
 dvr_mac_db.DVRDbMixin, ---
 external_net_db.External_net_db_mixin, ---
 sg_db_rpc.SecurityGroupServerRpcMixin,   ---
 agentschedulers_db.DhcpAgentSchedulerDbMixin,  ---
 addr_pair_db.AllowedAddressPairsMixin,  
 
 I’m quite allergic to mixings, I must admit, but, if it’s not the desired 
 way, why don’t we refactor the way we compose plugins !? (yet more refactors 
 probably would slow us down, …) but… I feel like we’re pushing to 
 overcomplicate the design for a case which is similar to everything else we 
 had before (security groups, port security, allowed address pairs).
 
 It feels wrong to have every similar feature done in a different way, even if 
 the current way is not the best one I admit.
 
 
 This attitude led us to the pain we are in now, I think we can no longer 
 afford to keep doing that. Bold goals require bold actions. If we don't step 
 back and figure out a way to extend the existing components without hijacking 
 the current codebase, it would be very difficult to give this effort the 
 priority it deserves.

I agree with you, please note my point of “let’s refactor it all into something 
better”, but refactoring the world and forgetting about new features is not 
sustainable, so, as you say we may start with new features as we explore better 
ways to do it. But I believe old extensions should also be equally addressed in 
the future.

I also lack the perspective yet to propose better approaches, I hope I will be 
able to do it in the future when I explore those areas of neutron.

Let’s focus in the API, and the lowest levels of what we’re going to do, and 
lets resolve everything else at a later time when that’s clear. I start to lean 
towards a service-plugin implementation as it’s going to be technically much 
more clean and decoupled.
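
(Just to make "service plugin" concrete for anyone skimming: the shell of such a
plugin is tiny. A sketch only -- the 'qos' alias and the class name are made up
here, nothing below is from an actual patch:)

# Bare-bones service plugin skeleton, for illustration.
from neutron.services import service_base


class QoSPlugin(service_base.ServicePluginBase):

    supported_extension_aliases = ['qos']

    def get_plugin_type(self):
        return 'QOS'

    def get_plugin_name(self):
        return 'qos'

    def get_plugin_description(self):
        return 'QoS service plugin for port-level policies'

    # CRUD for QoS policies/rules would live here; changes would be pushed to
    # the backends (agents or controllers) via notifications or the new
    # registry callbacks.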

 - a service plugin - as suggested by some proposers. The service plugin is 
 fairly easy to implement, and now Armando has provided you with a mechanism 
 to register for callbacks for events in other plugins. This should make the 
 implementation fairly straightforward. This also enables other plugins to 
 implement QoS support.
 - a ML2 mechanism driver + a ML2 extension driver. From an architectural 
 perspective this would be the preferred solution for a ML2 implementation, 
 but at the same time will 

Re: [openstack-dev] [swift] install custom middleware via devstack

2015-04-28 Thread gordon chung
 Hi 
  I'm working on a custom middleware which will be placed in the swift-proxy 
  pipeline, just before the last proxy-logging. 
  
  how can I force devstack to install my middleware just as I run ./stack.sh? 
  #The point is that I'm looking for a mature and standard way to do it, so 
  I'm not going to just edit stack.sh and embed some scripts to 
  manually handle the case! but as I mentioned I would like to learn the 
  best practice... 

i don't know if there's an official standard to doing this -- it's discouraged 
by some. that said, we do exactly this using ceilometermiddleware[1][2] so if 
you're looking for a reference point, it might help.

[1] http://github.com/openstack-dev/devstack/blob/master/lib/swift#L384-L392
[2] 
http://github.com/openstack-dev/devstack/blob/master/lib/ceilometer#L337-L348

cheers,
gord  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-28 Thread Russell Bryant
On 04/27/2015 08:52 PM, Armando M. wrote:
 
 Any project that fails to meet the criteria later can be dropped at any
 time.  For example, if some repo is clearly unmaintained, it can be
 removed.
 
 
 If we open the door to excluding projects down the road, then wouldn't
 we need to take into account some form of 3rd party CI validation as
 part of the criteria to 'ensure quality' (or lack thereof)? Would you
 consider that part of the inclusion criteria too?

My suggestion would be to use the state of 3rd party CI validation in
whatever is used to indicate the current level of maturity, but not to
directly decide what's considered in the OpenStack Neutron project.

If we take networking-ovn as an example, it's very actively developed
and certainly one of us, in my opinion.  It has CI jobs, but they're
not running tempest yet.  It seems wrong to say it's not an OpenStack
project because of that.  What we need is to be able to clearly
communicate that it's very new and immature, which is something different.

For something that has been around much longer and has had CI fully
working, I would view it a bit different.  If the tests break and stay
broken for a long time, that sounds like an early indicator that the
code is unmaintained and may get dropped and moved to openstack-attic
at some point.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improving OpenStack documentation around RabbitMQ

2015-04-28 Thread Christian Berendt
Hello Michael.

Just moving your thread to the correct mailing list.

Christian.

On 04/28/2015 02:58 PM, Michael Klishin wrote:
 Hi, 
 
 I'm a RabbitMQ engineering team member and we'd like to help improve 
 OpenStack docs
 around it.
 
 I've been reading the docs and making notes of what can be improved. We'd be 
 happy
 to contribute the changes. However, we're not very familiar with the 
 OpenStack development
 process and have a few questions before we start.
 
 As far as I understand, OpenStack Kilo is about to ship. Does this mean we 
 can only contribute
 documentation improvements for the release after it? Are there maintenance 
 releases that doc improvements
  could go into? If so, how is this reflected in repository & branches?
 
 Should the changes we propose be discussed on this list or in GitHub issues 
 [1]?
 
 Finally, we are considering adding a doc guide dedicated to OpenStack on 
 rabbitmq.com (we have one for EC2,
 for instance). Note that we are not looking
 to replace what's on docs.openstack.org, only provide a guide that can go 
 into more details.
 Does this sound like a good idea to the OpenStack community? Should we keep 
 everything on docs.openstack.org?
 Would it be OK if we link to rabbitmq.com guides in any changes we 
 contribute? I don't think OpenStack Juno
 docs have a lot of external links: is that by design?
 
 Thanks.
 
 1. https://github.com/openstack/openstack-manuals 
 --  
 MK  
 
 Staff Software Engineer, Pivotal/RabbitMQ  
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Christian Berendt
Cloud Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-28 Thread Luke Gorrie
On 28 April 2015 at 10:14, Duncan Thomas duncan.tho...@gmail.com wrote:

 If we allow third party CI to fail and wait for vendors to fix their
 stuff, experience has shown that they won't, and there'll be broken or
 barely functional drivers out there, and no easy way for the community to
 exert pressure to fix them up.


Can't the user community exert pressure on the driver developers directly
by talking to them, or indirectly by not using their drivers? How come
OpenStack upstream wants to tell the developers what is needed before the
users get a chance to take a look?

I would love to see OpenStack upstream acting more like a resource to
support users and developers (e.g. providing 3rd party CI hooks upon
request) and less like gatekeepers with big sticks to wave at people who
don't drop their own priorities and Follow The Process.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Speed Up RabbitMQ Recovering

2015-04-28 Thread Vladimir Kuklin
Hi, Zhou

Thank you for writing these awesome recommendations.

We will look into them and see whether they provide significant impact.
BTW, we have found a bunch of issues with our 5.1 and 6.0 RabbitMQ OCF
script and fixed them in current master. Would you be so kind as to check
out the newest version and say if any of issues mentioned by you are gone?

On Tue, Apr 28, 2015 at 9:03 AM, Zhou Zheng Sheng / 周征晟 
zhengsh...@awcloud.com wrote:

  Hello,

  I am using Fuel 6.0.1 and find that the RabbitMQ recovery time is long after a
 power failure. I have a running HA environment; when I reset the power of all the
 machines at the same time, I observe that after reboot it usually takes 10
 minutes for the RabbitMQ cluster to appear running in master-slave mode in
 pacemaker. If I power off all 3 controllers and only start 2 of them,
 the downtime can sometimes be as long as 20 minutes.

  I did a little investigation and found some possible causes.

 1. MySQL Recovery Takes Too Long [1] and Blocking RabbitMQ Clustering in
 Pacemaker

  The pacemaker resource p_mysql start timeout is set to 475s. Sometimes
 MySQL-wss fails to start after a power failure, and pacemaker waits 475s
 before retrying to start it. The problem is that pacemaker divides resource
 state transitions into batches. Since RabbitMQ is a master-slave resource, I
 assume that starting all the slaves and promoting the master are put into two
 different batches. If, unfortunately, starting all RabbitMQ slaves is put in
 the same batch as starting MySQL, then even if the RabbitMQ slaves and all other
 resources are ready, pacemaker will not continue but just waits for the MySQL
 timeout.

  I can reproduce this by hard powering off all the controllers and starting
 them again. It's more likely to trigger the MySQL failure this way. Then I
 observe that if there is one cloned mysql instance not starting, the whole
 pacemaker cluster gets stuck and does not emit any log. On the host of the
 failed instance, I can see a mysql resource agent process calling the sleep
 command. If I kill that process, the pacemaker comes back alive and
 RabbitMQ master gets promoted. In fact this long timeout is blocking every
 resource from state transition in pacemaker.

  This may be a known problem of pacemaker and there are some discussions on the
 Linux-HA mailing list [2]. It might not be fixed in the near future. It
 seems that, in general, it's bad to have a long timeout in state transition
 actions (start/stop/promote/demote). There may be another way to implement the
 MySQL-wss resource agent: use a short start timeout and monitor the wss
 cluster state using the monitor action.

  I also found a fix to improve the MySQL start timeout [3]. It shortens the
 timeout to 300s. At the time of sending this email, I cannot find it in the
 stable/6.0 branch. Maybe the maintainer needs to cherry-pick it to
 stable/6.0?

 [1] https://bugs.launchpad.net/fuel/+bug/1441885
 [2] http://lists.linux-ha.org/pipermail/linux-ha/2014-March/047989.html
 [3] https://review.openstack.org/#/c/171333/


 2. RabbitMQ Resource Agent Breaks Existing Cluster

  Reading the code of the RabbitMQ resource agent, I find it does the following
 to start the RabbitMQ master-slave cluster (a simplified code sketch follows the
 steps below).
 On all the controllers:
 (1) Start Erlang beam process
 (2) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 (3) Stop RabbitMQ App but do not stop the beam process

 Then in pacemaker, all the RabbitMQ instances are in slave state. After
 pacemaker determines the master, it does the following.
 On the to-be-master host:
 (4) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 On the slaves hosts:
 (5) Start RabbitMQ App (If failed, reset mnesia DB and cluster state)
 (6) Join RabbitMQ cluster of the master host
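
[In rough code terms, the sequence above looks like this -- a simplified
illustration only, not the actual OCF agent (which also wraps calls with
timeouts, tracks node attributes and handles error paths); the master node name
is a placeholder:]

# Simplified illustration of steps (1)-(6); not the real resource agent code.
import subprocess


def sh(cmd):
    # The real agent wraps calls with timeouts and handles failures/resets.
    subprocess.check_call(cmd, shell=True)


def start_as_slave():
    sh('rabbitmq-server -detached')    # (1) start the Erlang beam (and app)
    try:
        sh('rabbitmqctl start_app')    # (2) ensure the app is running
    except subprocess.CalledProcessError:
        sh('rabbitmqctl force_reset')  #     on failure: reset mnesia/cluster state
        sh('rabbitmqctl start_app')
    sh('rabbitmqctl stop_app')         # (3) stop the app, keep the beam process


def promote_master():
    sh('rabbitmqctl start_app')        # (4) the chosen master just starts the app


def join_master(master='rabbit@node-1'):
    sh('rabbitmqctl start_app')                 # (5)
    sh('rabbitmqctl stop_app')                  # joining requires the app stopped
    sh('rabbitmqctl join_cluster %s' % master)  # (6) join the master's cluster
    sh('rabbitmqctl start_app')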

  As far as I can understand, this process is to make sure the master
 determined by pacemaker is the same as the master determined in the RabbitMQ
 cluster. If there is no existing cluster, it's fine. If it is run after a
 power failure and recovery, it introduces a new problem.

  After power recovery, if some of the RabbitMQ instances reach step (2)
 roughly at the same time (within 30s, which is hard coded in RabbitMQ) as
 the original RabbitMQ master instance, they form the original cluster again
 and then shut down. The other instances would have to wait 30s before they
 report failure waiting for tables, and are reset to standalone clusters.

  In the RabbitMQ documentation [4], it is also mentioned that if we shut down the
 RabbitMQ master, a new master is elected from the rest of the slaves. If we
 continue to shut down nodes in step (3), we reach a point where the last node
 is the RabbitMQ master, and pacemaker is not aware of it. I can see there
 is code bookkeeping a rabbit-start-time attribute in pacemaker to
 record the longest lived instance and help pacemaker determine the master,
 but it does not cover the case mentioned above. A recent patch [5] checks the
 existing rabbit-master attribute, but it does not cover the above case either.

 So 

[openstack-dev] [release][oslo][stable] oslo.messaging 1.8.2 (kilo)

2015-04-28 Thread Doug Hellmann
We are thrilled to announce the release of:

oslo.messaging 1.8.2: Oslo Messaging API

For more details, please see the git log history below and:

http://launchpad.net/oslo.messaging/+milestone/1.8.2

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

Changes in oslo.messaging 1.8.1..1.8.2
--

562c41b 2015-04-27 16:10:19 +0200 rabbit: add some notes about heartbeat
a8c06ab 2015-04-24 07:19:45 +0200 Disable and mark heartbeat as experimental
c7fd828 2015-04-23 11:15:26 + rabbit: fix ipv6 support
e06c5b8 2015-04-09 13:57:59 + Fix changing keys during iteration in 
matchmaker heartbeat
f24a7df 2015-04-09 13:56:33 + Add pluggability for matchmakers
513b1c9 2015-04-09 13:55:59 + Don't raise Timeout on no-matchmaker results
671c60d 2015-04-09 13:04:15 + Fix the bug redis do not delete the expired 
keys
d9c2520 2015-04-06 14:55:33 + set defaultbranch for reviews

Diffstat (except docs and test files)
-

.gitreview |  1 +
oslo_messaging/_drivers/impl_rabbit.py | 16 ++---
oslo_messaging/_drivers/impl_zmq.py| 41 +++---
oslo_messaging/_drivers/matchmaker.py  |  2 +-
oslo_messaging/_drivers/matchmaker_redis.py|  7 +++-
requirements.txt   |  1 +
setup.cfg  |  6 
11 files changed, 124 insertions(+), 19 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 3b49a53..4e87ee6 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -24,0 +25 @@ PyYAML>=3.1.0
+# require kombu>=3.0.7 and amqp>=1.4.0 for heatbeat support
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party] Announcing Third Party CI Tools Repo

2015-04-28 Thread Kurt Taylor
Hi all,

The third party CI working group[1] has been discussing a way to share
tools, configurations, and plugin best practices. This was an idea that
started at the Paris summit because teams that are operating external CI
systems have each created tools that make their life as CI operators
easier. Now we have a way to share them[2].

Since this new repo[3] is created, I am proposing the following CI
operators as initial cores. They have consistently attended the working
group meetings and pushed to move work items[4] forward in the community.

Ramy Asselin (asselin)
Patrick East (patrickeast)
Steve Weston (sweston)
Mikhail Medvedev (mmedvede)

We will discuss at this week's meeting[5], but if there are no objections, I
will add them immediately afterward.

If you are interested in sharing something you have written that makes your
CI life easier, please come to a working group meeting and discuss, or just
submit a patch.

Thanks!
Kurt Taylor (krtaylor)

[1] https://wiki.openstack.org/wiki/ThirdPartyCIWorkingGroup
[2] https://review.openstack.org/#/c/175520
[3] https://github.com/stackforge/third-party-ci-tools
[4]
https://wiki.openstack.org/wiki/ThirdPartyCIWorkingGroup#Development_Priorities
[5] https://wiki.openstack.org/wiki/Meetings/ThirdParty#4.2F29.2F15_1500_UTC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improving OpenStack documentation around RabbitMQ

2015-04-28 Thread Christian Berendt
Hello Michael.

Just moving your thread to the correct mailing list.

Sorry for my previous mail. Thunderbird autocompletion used the
unsubscribe alias and not the correct mailing list :(

Christian.

On 04/28/2015 02:58 PM, Michael Klishin wrote:
 Hi, 
 
 I'm a RabbitMQ engineering team member and we'd like to help improve 
 OpenStack docs
 around it.
 
 I've been reading the docs and making notes of what can be improved. We'd be 
 happy
 to contribute the changes. However, we're not very familiar with the 
 OpenStack development
 process and have a few questions before we start.
 
 As far as I understand, OpenStack Kilo is about to ship. Does this mean we 
 can only contribute
 documentation improvements for the release after it? Are there maintenance 
 releases that doc improvements
 could go into? If so, how is this reflected in repository  branches?
 
 Should the changes we propose be discussed on this list or in GitHub issues 
 [1]?
 
 Finally, we are considering adding a doc guide dedicated to OpenStack on 
 rabbitmq.com (we have one for EC2,
 for instance). Note that we are not looking
 to replace what's on docs.openstack.org, only provide a guide that can go 
 into more details.
 Does this sound like a good idea to the OpenStack community? Should we keep 
 everything on docs.openstack.org?
 Would it be OK if we link to rabbitmq.com guides in any changes we 
 contribute? I don't think OpenStack Juno
 docs have a lot of external links: is that by design?
 
 Thanks.
 
 1. https://github.com/openstack/openstack-manuals 
 --  
 MK  
 
 Staff Software Engineer, Pivotal/RabbitMQ  
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


-- 
Christian Berendt
Cloud Computing Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Question about tempest API tests that establish SSH connection to instances

2015-04-28 Thread Yaroslav Lobankov
Guys, thank you for answers!

Then I have one more question :) How do scenario tests (that establish SSH
connection to instances) work?

Regards,
Yaroslav Lobankov.

On Tue, Apr 28, 2015 at 12:27 PM, Lanoux, Joseph joseph.lan...@hp.com
wrote:

  Hi,



 Actually, ssh connection is not yet implemented in Tempest. We’re
 currently working on it [1].



 Joseph



 [1]
 https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/ssh-auth-strategy,n,z





 *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
 *Sent:* 28 April 2015 10:16
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [QA] Question about tempest API tests that
 establish SSH connection to instances



 At a first glance it seems run_ssh is disabled in gate tests [1]. I could
 not find any nova job where it is enabled.

 These tests are therefore skipped. For what is worth they might be broken
 now. Sharing a traceback or filing a bug might help.



 Salvatore



 [1]
 http://logs.openstack.org/81/159481/2/check/check-tempest-dsvm-neutron-full/85e039c/logs/testr_results.html.gz



 On 28 April 2015 at 10:26, Yaroslav Lobankov yloban...@mirantis.com
 wrote:

  Hi everyone,



 I have a question about tempest tests that are related to instance
 validation. Some of these tests are




 tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name[gate,id-ac1ad47f-984b-4441-9274-c9079b7a0666]


 tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server_vcpus[gate,id-cbc0f52f-05aa-492b-bdc1-84b575ca294b]


 tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_host_name_is_same_as_server_name[gate,id-ac1ad47f-984b-4441-9274-c9079b7a0666]


 tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_created_server_vcpus[gate,id-cbc0f52f-05aa-492b-bdc1-84b575ca294b]



 To enable these tests I should set the config option run_ssh to True.
 When I set the option to true and ran the tests, all the tests failed. It
 looks like ssh code in API tests doesn't work.

 Maybe I am wrong. The question is the following: which of tempest jobs
 runs these tests?  Maybe I have tempest misconfiguration.



 Regards,

 Yaroslav Lobankov.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] install custom middleware via devstack

2015-04-28 Thread Jaume Devesa
Hi Ali,

I don't know the details of what you want to achieve, but (not so) recently
devstack allows you to add Externally Hosted Plugins [1], from which Devstack
loads an external repository and runs arbitrary code.

This code will be sourced in each one of Devstack's phases, so you can
control what code runs in each phase. Check out the script we use in the
MidoNet plugin as an example [2].

Hope that helps.

[1]:
http://docs.openstack.org/developer/devstack/plugins.html#externally-hosted-plugins
[2]:
https://github.com/stackforge/networking-midonet/blob/master/devstack/plugin.sh

On Tue, 28 Apr 2015 08:55, AliReza Taleghani wrote:
 Hi
 I'm working on a custom middleware which will place on swift-proxy
 pipeline, just before the latest proxy-logging.
 
 how can I force devstack to install my middleware just as I run ./stack.sh?
 #The point is that I'm looking for a mature and standard way to do it, so I'm
 not going to just edit stack.sh and embed some scripts to manually
 handle the case! But as I mentioned I would like to learn the best approach...
 
 
 
 Sincerely,
 Ali R. Taleghani
 @linkedIn http://ir.linkedin.com/in/taleghani

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Jaume Devesa
Software Engineer at Midokura

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bump the RPC version required for port_update - AgentNotifierApi

2015-04-28 Thread Russell Bryant
On 04/28/2015 06:25 AM, Rossella Sblendido wrote:
 
 
 On 04/28/2015 03:24 AM, Armando M. wrote:
  UnsupportedVersion error if the version is not bumped in their agent 
 too.
 
 
  Could the server fall back and keep on using the old version of the 
 API? I
  think that would make for a much nicer experience, especially in face 
 of
  upgrades. Is this not possible? If it is, then the in vs out matter is 
 not
  really an issue and out-of-tree code can reflect the change in API at 
 their
  own pace.

 while it's indeed nicer, it's difficult as port_update is
 an async call (cast) and does not wait for errors
 including UnsupportedVersion.


 Then, let's figure out how to change it!
 
 Russell suggested a way to handle it using a version_cap. It doesn't
 seem a trivial change and Russell already mentioned that it adds
 complexity. If we feel that it's necessary I can look into it.

Armando's suggestion is possible if the method is changed from cast() to
call(), but switching from async to sync has a cost, too.
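
For illustration, here is a minimal sketch of the cast() vs call() trade-off
with oslo.messaging; the class, topic and version numbers are made up for the
example and are not Neutron's actual AgentNotifierApi:

    import oslo_messaging


    class PortUpdateNotifier(object):
        def __init__(self, transport):
            target = oslo_messaging.Target(topic='q-agent-notifier',
                                           version='1.0')
            self.client = oslo_messaging.RPCClient(transport, target)

        def port_update_cast(self, context, port):
            # Fire-and-forget: if an agent only speaks an older version, the
            # UnsupportedVersion error stays on the agent side and the server
            # never sees it.
            cctxt = self.client.prepare(fanout=True, version='1.4')
            cctxt.cast(context, 'port_update', port=port)

        def port_update_call(self, context, port):
            # Blocking: version problems come back to the server as
            # exceptions, but every notification now costs a round trip (and
            # the fanout behaviour is lost).
            cctxt = self.client.prepare(version='1.4')
            return cctxt.call(context, 'port_update', port=port)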

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Missing meetings this week

2015-04-28 Thread Jay Pipes

On 04/27/2015 06:44 PM, Everett Toews wrote:

Hi All,

I’ll be out the next few days and will be missing our meetings.
Specifically the cross-project meeting [1] and our API WG meeting
[2].


I can host the API WG meeting, since I have that time free this week 
(finally) :)



On the plus side I got to my action items from the last meeting and
“froze” the 3 guidelines up for review and proposed a cross-project
session [3] (row 22). I also booked some time for us in the working
group sessions at the summit.


Thanks very much for completing those action items. Much appreciated.

Best,
-jay


Thanks, Everett

[1] https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting [2]
https://wiki.openstack.org/wiki/Meetings/API-WG [3]
https://docs.google.com/spreadsheets/d/1vCTZBJKCMZ2xBhglnuK3ciKo3E8UMFo5S5lmIAYMCSE/edit#gid=827503418



__



OpenStack Development Mailing List (not for usage questions)

Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Problem in loading image in sahara open stack

2015-04-28 Thread Sonal Singh
Hi,

I have installed Sahara openstack using devstack.

Now I am going to create a multi-node cluster in Sahara. For this I have made 1 
master node and 4 worker nodes. When I go to launch this cluster, I need the 
sahara-juno-vanilla image.
I have downloaded this image, but when I try to upload it to the OpenStack 
image store, its status hangs in the queued state or it gets killed, and it 
never reaches the active state. Please find the snapshot below:

[inline screenshot: image001.png]

Can you please provide me any solution regarding this. Any help will be highly 
appreciated.

Thanks & Regards
Sonal



DISCLAIMER: This message is proprietary to Aricent and is intended solely for 
the use of the individual to whom it is addressed. It may contain privileged or 
confidential information and should not be circulated or used for any purpose 
other than for what it is intended. If you have received this message in error, 
please notify the originator immediately. If you are not the intended 
recipient, you are notified that you are strictly prohibited from using, 
copying, altering, or disclosing the contents of this message. Aricent accepts 
no responsibility for loss or damage arising from the use of the information 
transmitted by this email including damage from virus.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Why ceilometer do not offer run_tests.sh script?

2015-04-28 Thread Lu, Lianhao
On Apr 29, 2015 09:49, Luo Gangyi wrote:
 Hi guys,
 
 When I try to run unit tests of ceilometer, I find there is no 
 run_tests.sh script offers.
 
 And when I use tox directly, I got a message ' 'Could not find mongod 
 command'.

Please use setup-test-env-mongodb.sh instead. See tox.ini for details.

 So another question is why unit tests needs mongo?

It's used for the scenario tests on different db backend. Will be moved into 
functional test though. https://review.openstack.org/#/c/160827/ 

 Can someone give me some hint?

-Lianhao Lu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] implement openvswitch container

2015-04-28 Thread Steven Dake (stdake)


From: FangFenghua fang_feng...@hotmail.commailto:fang_feng...@hotmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Tuesday, April 28, 2015 at 7:02 AM
To: 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] implement openvswitch container

I want to enable an openvswitch container.  I think I can do that like:

1 add a container that run ovs process
2 add a container that run neutron-openvswitch-agent
3  share the db.sock in compose.yaml
4 add configure script and check script for the 2 containers

that's all i need to do, right?

That should do it

You may need to configure the ovs process in the start.sh script and 
neutron-openvswitch-agent, which will be the most difficult part of the work.

Note our agents atm are a “fat container” but if you can get ovs in a separate 
container, that would be ideal. We are planning to redux the fat container we 
have to single-purpose containers.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] creating a rubygems mirror for OpenStack

2015-04-28 Thread JJ Asghar

 On Apr 27, 2015, at 9:19 PM, Emilien Macchi emil...@redhat.com wrote:
 
 Hi,

Hey!

 
 All Puppet OpenStack jobs (lint, syntax, unit and beaker) are quite
 often affected by rubygems.org downtimes and make all jobs failing
 randomly because it can't download some gems during the bootstrap.
 This is something that really affect our CI and we would really
 appreciate openstack-infra's help!

Our OpenStack+Chef project is affected by this also.

 
 It came up on IRC we could use the existing Pypi mirror nodes to add
 rubygems and have rubygems.openstack.org or something like this).

I love this idea; we are moving from bundler, to the ChefDK, but we still would 
benefit from having access to this.

 
 I created a story here: https://storyboard.openstack.org/#!/story/2000247 
 https://storyboard.openstack.org/#!/story/2000247

Thanks! I’ll keep an eye on this.

Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Question about tempest API tests that establish SSH connection to instances

2015-04-28 Thread Matthew Treinish
On Tue, Apr 28, 2015 at 02:21:27PM +0300, Yaroslav Lobankov wrote:
 Guys, thank you for answers!
 
 Then I have one more question :) How do scenario tests (that establish SSH
 connection to instances) work?
 

So the scenario tests don't use the run_ssh flag, they just ssh in to servers
when they need to regardless of the value of that config option. (as do any
tests which are currently using ssh in the gate)

That's actually part of what the BP that Joseph pointed out is trying to
address. The run_ssh option is currently kind of a mess, it's not globally
respected, and things that do rely on it are probably broken, because as
Salvatore said we skip those in the gate.

Reading the spec for that BP will provide some good background here:

http://specs.openstack.org/openstack/qa-specs/specs/ssh-auth-strategy.html
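
For reference, the pattern those API tests follow is roughly the one below
(paraphrased from memory, not the exact tempest source, and assuming run_ssh
still lives in the [compute] config group):

    import testtools

    from tempest import config

    CONF = config.CONF


    class ExampleServerValidationTest(object):

        @testtools.skipUnless(CONF.compute.run_ssh,
                              'SSH-based validation is disabled')
        def test_host_name_is_same_as_server_name(self):
            # ssh into the guest and compare `hostname` with the server name
            pass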

-Matt Treinish

 
 On Tue, Apr 28, 2015 at 12:27 PM, Lanoux, Joseph joseph.lan...@hp.com
 wrote:
 
   Hi,
 
 
 
  Actually, ssh connection is not yet implemented in Tempest. We’re
  currently working on it [1].
 
 
 
  Joseph
 
 
 
  [1]
  https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/ssh-auth-strategy,n,z
 
 
 
 
 
  *From:* Salvatore Orlando [mailto:sorla...@nicira.com]
  *Sent:* 28 April 2015 10:16
  *To:* OpenStack Development Mailing List (not for usage questions)
  *Subject:* Re: [openstack-dev] [QA] Question about tempest API tests that
  establish SSH connection to instances
 
 
 
  At a first glance it seems run_ssh is disabled in gate tests [1]. I could
  not find any nova job where it is enabled.
 
  These tests are therefore skipped. For what is worth they might be broken
  now. Sharing a traceback or filing a bug might help.
 
 
 
  Salvatore
 
 
 
  [1]
  http://logs.openstack.org/81/159481/2/check/check-tempest-dsvm-neutron-full/85e039c/logs/testr_results.html.gz
 
 
 
  On 28 April 2015 at 10:26, Yaroslav Lobankov yloban...@mirantis.com
  wrote:
 
   Hi everyone,
 
 
 
  I have a question about tempest tests that are related to instance
  validation. Some of these tests are
 
 
 
 
  tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name[gate,id-ac1ad47f-984b-4441-9274-c9079b7a0666]
 
 
  tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server_vcpus[gate,id-cbc0f52f-05aa-492b-bdc1-84b575ca294b]
 
 
  tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_host_name_is_same_as_server_name[gate,id-ac1ad47f-984b-4441-9274-c9079b7a0666]
 
 
  tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_created_server_vcpus[gate,id-cbc0f52f-05aa-492b-bdc1-84b575ca294b]
 
 
 
  To enable these tests I should set the config option run_ssh to True.
  When I set the option to true and ran the tests, all the tests failed. It
  looks like ssh code in API tests doesn't work.
 
  Maybe I am wrong. The question is the following: which of tempest jobs
  runs these tests?  Maybe I have tempest misconfiguration.
 
 
 
  Regards,
 
  Yaroslav Lobankov.
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bump the RPC version required for port_update - AgentNotifierApi

2015-04-28 Thread Armando M.
On 28 April 2015 at 05:52, Russell Bryant rbry...@redhat.com wrote:

 On 04/28/2015 06:25 AM, Rossella Sblendido wrote:
 
 
  On 04/28/2015 03:24 AM, Armando M. wrote:
   UnsupportedVersion error if the version is not bumped in their
 agent too.
  
  
   Could the server fall back and keep on using the old version of
 the API? I
   think that would make for a much nicer experience, especially in
 face of
   upgrades. Is this not possible? If it is, then the in vs out
 matter is not
   really an issue and out-of-tree code can reflect the change in
 API at their
   own pace.
 
  while it's indeed nicer, it's difficult as port_update is
  an async call (cast) and does not wait for errors
  including UnsupportedVersion.
 
 
  Then, let's figure out how to change it!
 
  Russell suggested a way to handle it using a version_cap. It doesn't
  seem a trivial change and Russell already mentioned that it adds
  complexity. If we feel that it's necessary I can look into it.

 Armando's suggestion is possible if the method is changed from cast() to
 call(), but switching from async to sync has a cost, too.


For the type of communication paradigm needed in this case, I don't think
that switching to call from cast is really a viable solution. Even though
this circumstance may as well be handled as suggested above (assumed that
the breakage is adequately advertised like via [1] and IRC meetings), I
think there might be a couple of things worth considering, if backward
compatibility as well as rolling upgrades are a must-meet requirement (and
I would like to think that they are):

a) introduce a new, enhanced, port_update topic to be used by more capable
agents: in this case the server could fanout to the two separate topics. We
could get rid of this logic in due course to go back to a simpler model
made by a single fanout.

b) have the server be aware of the agent's version (after all they all
report state, which could include their capabilities in the form of a list
of RPC API versions) and selectively cast requests based on their
capabilities.

Both a) and b) would introduce no operator complexity, at a price of more
elaborated RPC method implementation.
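
To make option b) concrete, here is a rough sketch (not actual Neutron code)
of what a version-aware cast could look like; the rpc_version key in the
agent's state report and the extra argument are purely illustrative:

    from oslo_utils import versionutils


    def cast_port_update(client, context, port, agent):
        # The agent is assumed to advertise the RPC versions it understands
        # in its periodic state report.
        reported = agent.get('configurations', {}).get('rpc_version', '1.0')
        if versionutils.is_compatible('1.4', reported):
            # New-style signature for agents that have been upgraded.
            cctxt = client.prepare(version='1.4')
            cctxt.cast(context, 'port_update', port=port,
                       extra_details=port.get('extra_details'))
        else:
            # Older agents keep receiving the pre-1.4 signature.
            cctxt = client.prepare(version='1.0')
            cctxt.cast(context, 'port_update', port=port)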

I realize that we can't roll upgrade today, but forklift upgrades are bad
and we should take a serious look at what practices we could put in place
to address this aspect once and for all.

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Neutron/LibraryAPIBreakage

 --
 Russell Bryant

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Proposal for Madhuri Kumari to join Core Team

2015-04-28 Thread Steven Dake (stdake)
Hi folks,

I would like to nominate Madhuri Kumari  to the core team for Magnum.  Please 
remember a +1 vote indicates your acceptance.  A –1 vote acts as a complete 
veto.

Why Madhuri for core?

  1.  She participates on IRC heavily
  2.  She has been heavily involved in a really difficult project to remove 
Kubernetes kubectl and replace it with a native Python language binding which 
is really close to being done (TM)
  3.  She provides helpful reviews and her reviews are of good quality

Some of Madhuri’s stats, where she performs in the pack with the rest of the 
core team:

reviews: http://stackalytics.com/?release=kilo&module=magnum-group
commits: 
http://stackalytics.com/?release=kilo&module=magnum-group&metric=commits

Please feel free to vote if you're a Magnum core contributor.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposal for Madhuri Kumari to join Core Team

2015-04-28 Thread Davanum Srinivas
+1 from me. welcome Madhuri!

On Tue, Apr 28, 2015 at 11:14 AM, Steven Dake (stdake) std...@cisco.com wrote:
 Hi folks,

 I would like to nominate Madhuri Kumari  to the core team for Magnum.
 Please remember a +1 vote indicates your acceptance.  A –1 vote acts as a
 complete veto.

 Why Madhuri for core?

 She participates on IRC heavily
 She has been heavily involved in a really difficult project  to remove
 Kubernetes kubectl and replace it with a native python language binding
 which is really close to be done (TM)
 She provides helpful reviews and her reviews are of good quality

 Some of Madhuri’s stats, where she performs in the pack with the rest of the
 core team:

 reviews: http://stackalytics.com/?release=kilomodule=magnum-group
 commits:
 http://stackalytics.com/?release=kilomodule=magnum-groupmetric=commits

 Please feel free to vote if your a Magnum core contributor.

 Regards
 -steve


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Kubernetes AutoScaling with Heat AutoScalingGroup and Ceilometer

2015-04-28 Thread Rabi Mishra

- Original Message -
 On Mon, Apr 27, 2015 at 12:28:01PM -0400, Rabi Mishra wrote:
  Hi All,
  
  Deploying Kubernetes(k8s) cluster on any OpenStack based cloud for
  container based workload is a standard deployment pattern. However,
  auto-scaling this cluster based on load would require some integration
  between k8s OpenStack components. While looking at the option of
  leveraging Heat ASG to achieve autoscaling, I came across few requirements
  that the list can discuss and arrive at the best possible solution.
  
  A typical k8s deployment scenario on OpenStack would be as below.
  
  - Master (single VM)
  - Minions/Nodes (AutoScalingGroup)
  
  AutoScaling of the cluster would involve both scaling of minions/nodes and
  scaling Pods(ReplicationControllers).
  
  1. Scaling Nodes/Minions:
  
  We already have utilization stats collected at the hypervisor level, as
  ceilometer compute agent polls the local libvirt daemon to acquire
  performance data for the local instances/nodes.
 
 I really doubts if those metrics are so useful to trigger a scaling
 operation. My suspicion is based on two assumptions: 1) autoscaling
 requests should come from the user application or service, not from the
 controller plane, the application knows best whether scaling is needed;
 2) hypervisor level metrics may be misleading in some cases. For
 example, it cannot give an accurate CPU utilization number in the case
 of CPU overcommit which is a common practice.

I agree that getting correct utilization statistics is complex with virtual 
infrastructure.
However, I think physical+hypervisor metrics (collected by the compute agent) 
should be a 
good starting point.
 
  Also, Kubelet (running on the node) collects the cAdvisor stats. However,
  cAdvisor stats are not fed back to the scheduler at present and scheduler
  uses a simple round-robin method for scheduling.
 
 It looks like a multi-layer resource management problem which needs a
 wholistic design. I'm not quite sure if scheduling at the container
 layer alone can help improve resource utilization or not.

k8s scheduler is going to improve over time to use the cAdvisor/heapster 
metrics for
better scheduling. IMO, we should leave that for k8s to handle.

My point is about getting those metrics to ceilometer, either from the nodes or 
from the scheduler/master.
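
As a rough sketch of what an agent-side push could look like (the meter name,
metadata keys and credential handling here are purely illustrative, not an
agreed convention):

    from ceilometerclient import client as ceilo_client


    def push_node_cpu_util(auth, node_id, stack_id, cpu_util):
        cclient = ceilo_client.get_client(
            '2',
            os_username=auth['username'], os_password=auth['password'],
            os_tenant_name=auth['tenant'], os_auth_url=auth['auth_url'])
        cclient.samples.create(
            counter_name='k8s.node.cpu.util',
            counter_type='gauge',
            counter_unit='%',
            counter_volume=cpu_util,
            resource_id=node_id,
            # Tag the sample so an alarm query/matching_metadata can scope it
            # to a single stack or scaling group.
            resource_metadata={'stack_id': stack_id})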

  Req 1: We would need a way to push stats from the kubelet/cAdvisor to
  ceilometer directly or via the master(using heapster). Alarms based on
  these stats can then be used to scale up/down the ASG.
 
 To send a sample to ceilometer for triggering autoscaling, we will need
 some user credentials to authenticate with keystone (even with trusts).
 We need to pass the project-id in and out so that ceilometer will know
 the correct scope for evaluation. We also need a standard way to tag
 samples with the stack ID and maybe also the ASG ID. I'd love to see
 this done transparently, i.e. no matching_metadata or query confusions.
 
  There is an existing blueprint[1] for an inspector implementation for
  docker hypervisor(nova-docker). However, we would probably require an
  agent running on the nodes or master and send the cAdvisor or heapster
  stats to ceilometer. I've seen some discussions on possibility of
  leveraging keystone trusts with ceilometer client.
 
 An agent is needed, definitely.
 
  Req 2: Autoscaling Group is expected to notify the master that a new node
  has been added/removed. Before removing a node the master/scheduler has to
  mark node as
  unschedulable.
 
 A little bit confused here ... are we scaling the containers or the
 nodes or both?

We would only be focusing on the nodes. However, adding/removing nodes without the 
k8s master/scheduler 
knowing about it (so that it can schedule pods or mark them unschedulable) would 
be useless.

  Req 3: Notify containers/pods that the node would be removed for them to
  stop accepting any traffic, persist data. It would also require a cooldown
  period before the node removal.
 
 There have been some discussions on sending messages, but so far I don't
 think there is a conclusion on the generic solution.
 
 Just my $0.02.

Thanks Qiming.

 BTW, we have been looking into similar problems in the Senlin project.

Great. We can probably discuss these during the Summit? I assume there is 
already a session
on Senlin planned, right?

 
 Regards,
   Qiming
 
  Both requirement 2 and 3 would probably require generating scaling event
  notifications/signals for master and containers to consume and probably
  some ASG lifecycle hooks.
  
  
  Req 4: In case of too many 'pending' pods to be scheduled, scheduler would
  signal ASG to scale up. This is similar to Req 1.
  
  
  2. Scaling Pods
  
  Currently manual scaling of pods is possible by resizing
  ReplicationControllers. k8s community is working on an abstraction,
  AutoScaler[2] on top of ReplicationController(RC) that provides
  intention/rule based autoscaling. There would be a requirement to 

Re: [openstack-dev] [magnum] Proposal for Madhuri Kumari to join Core Team

2015-04-28 Thread Adrian Otto
+1

On Apr 28, 2015, at 8:14 AM, Steven Dake (stdake) 
std...@cisco.commailto:std...@cisco.com wrote:

Hi folks,

I would like to nominate Madhuri Kumari  to the core team for Magnum.  Please 
remember a +1 vote indicates your acceptance.  A –1 vote acts as a complete 
veto.

Why Madhuri for core?

  1.  She participates on IRC heavily
  2.  She has been heavily involved in a really difficult project  to remove 
Kubernetes kubectl and replace it with a native python language binding which 
is really close to be done (TM)
  3.  She provides helpful reviews and her reviews are of good quality

Some of Madhuri’s stats, where she performs in the pack with the rest of the 
core team:

reviews: http://stackalytics.com/?release=kilomodule=magnum-group
commits: 
http://stackalytics.com/?release=kilomodule=magnum-groupmetric=commits

Please feel free to vote if your a Magnum core contributor.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.orgmailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Navigating the ever changing OpenStack codebase

2015-04-28 Thread Kevin L. Mitchell
On Mon, 2015-04-27 at 15:54 -0700, Clint Byrum wrote:
 Excerpts from Kevin L. Mitchell's message of 2015-04-27 15:38:25 -0700:
  On Mon, 2015-04-27 at 21:42 +, Jeremy Stanley wrote:
   I consider it an unfortunate oversight that those files weren't
   deleted a very, very long time ago.
  
  Unfortunately, there's one problem with that: you can't tell tox to use
  a virtualenv that you've built.  We need this capability at present, so
  we have to run tests using run_tests.sh instead of tox :(  I have an
  issue open on tox to address this need, but haven't seen any movement on
  that; so until then, I have to oppose the removal of run_tests.sh…
  despite how much *I'd* like to see it bite the dust!
 
 Err.. you can just run the commands in tox.ini in the venv of your
 choice. You don't need run_tests.sh for that.

No dice.  I don't want to have to parse the tox.ini directly.  We're
talking about automated tests here, by the way.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Weekly meeting #33

2015-04-28 Thread Emilien Macchi


On 04/27/2015 12:50 PM, Emilien Macchi wrote:
 Hi,
 
 Tomorrow is our weekly meeting.
 Please look at the agenda [1].
 
 Feel free to bring new topics and reviews/bugs if needed.
 Also, if you had any action, make sure you can give a status during the
 meeting or in the etherpad directly.
 
 See you tomorrow,
 
 [1]
 https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150428

We did our meeting, you can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-04-28-15.00.html

Have a great day,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Why ceilometer do not offer run_tests.sh script?

2015-04-28 Thread gordon chung
everything llu mentioned below. Also, if you don't want to install mongo, you 
can run tests against mysql/postgres/elasticsearch instead using tox 
-epy-mysql, tox -epy-pgsql or tox -epy-elastic. If you want to debug the tests 
you can similarly run tox -edebug-db

cheers,
gord



 From: lianhao...@intel.com
 To: openstack-dev@lists.openstack.org
 Date: Wed, 29 Apr 2015 02:18:38 +
 Subject: Re: [openstack-dev] [Ceilometer] Why ceilometer do not offer 
 run_tests.sh script?

 On Apr 29, 2015 09:49, Luo Gangyi wrote:
 Hi guys,

 When I try to run unit tests of ceilometer, I find there is no
 run_tests.sh script offers.

 And when I use tox directly, I got a message ' 'Could not find mongod
 command'.

 Please use setup-test-env-mongodb.sh instead. See tox.ini for details.

 So another question is why unit tests needs mongo?

 It's used for the scenario tests on different db backend. Will be moved into 
 functional test though. https://review.openstack.org/#/c/160827/

 Can someone give me some hint?

 -Lianhao Lu

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Why ceilometer do not offer run_tests.sh script?

2015-04-28 Thread Luo Gangyi
Thanks for your reply.


Yes, we can do as you said. But still a bit weird using database in unit tests 
:)


--
Luo gangyiluogan...@chinamobile.com



 




-- Original --
From:  Lu, Lianhao;lianhao...@intel.com;
Date:  Wed, Apr 29, 2015 10:18 AM
To:  OpenStack Development Mailing List (not for usage 
questions)openstack-dev@lists.openstack.org; 

Subject:  Re: [openstack-dev] [Ceilometer] Why ceilometer do not offer 
run_tests.sh script?



On Apr 29, 2015 09:49, Luo Gangyi wrote:
 Hi guys,
 
 When I try to run unit tests of ceilometer, I find there is no 
 run_tests.sh script offers.
 
 And when I use tox directly, I got a message ' 'Could not find mongod 
 command'.

Please use setup-test-env-mongodb.sh instead. See tox.ini for details.

 So another question is why unit tests needs mongo?

It's used for the scenario tests on different db backend. Will be moved into 
functional test though. https://review.openstack.org/#/c/160827/ 

 Can someone give me some hint?

-Lianhao Lu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] time based auto scaling

2015-04-28 Thread Fei Long Wang
+1, if you really care about time range, Mistral can meet your requirements.

Besides, and maybe not directly related: for autoscaling, I always
believe there should be a message queue service (like Zaqar) between the
(web) application and the workers; a task request is posted to the
queue as a message and a worker picks the message from the queue to
handle it. Then we can trigger the autoscaling based on the workload of the
queue instead of the hardware usage. Just my 2 cents.
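
A toy sketch of the idea (not tied to any particular Zaqar client call; the
webhook URLs and thresholds are illustrative):

    import time

    import requests


    def autoscale_on_backlog(get_backlog, get_worker_count,
                             scale_up_url, scale_down_url,
                             high_per_worker=100, low_per_worker=10):
        while True:
            backlog = get_backlog()              # e.g. queue depth from Zaqar
            workers = max(get_worker_count(), 1)
            per_worker = backlog / float(workers)
            if per_worker > high_per_worker:
                requests.post(scale_up_url)      # Heat scale-up webhook
            elif per_worker < low_per_worker:
                requests.post(scale_down_url)    # Heat scale-down webhook
            time.sleep(60)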

On 29/04/15 16:39, Fox, Kevin M wrote:
 what about Mistral?

 https://wiki.openstack.org/wiki/Mistral

 Thanks,
 Kevin
 
 *From:* ZhiQiang Fan
 *Sent:* Tuesday, April 28, 2015 9:23:20 PM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [ceilometer] time based auto scaling

 Hi devs,

 I'm thinking to add new type of alarm for time based auto scaling, but
 not sure if there is a better way to achieve it outside ceilometer scope

 Currently we can auto scaling based on vm load, but it will take
 several minutes to do it. For the worst case, when the vm load is
 heavy, ceilometer needs 10 minutes (by default) to collect the
 performance data, and alarm need 1 minutes to evaluate it, maybe there
 is evaluation_periods which set to higher that 1 to avoid performance
 peak. 

 So even though we can collect data by 1 minutes, but evaluation
 periods may be set to 3, so the worst case is after vm load perfomance
 in high level for 4 minutes, then the alarm is triggered, then heat
 will expand vm count, nova will take dozens seconds or more to create,
 finally the service on the in the heat server group will performance
 bad or unavailable for several minutes, which is not acceptable for
 some critical applications.

 But if we can know some high load which related with time, for
 example, 08:00am will be a peak, and after 22:00pm will definitely
 drop to idel level, then heat can increase some vms at 07:30am, then
 decrease some vms at 22:00 (or decrease by load as normal routine)

 However, current ceilometer only provide time constraint alarm, which
 will only evaluate but not guarantee it will be triggered. And heat
 cannot provide something like timer, but just can wait for the signal.

 So I propose to add a new type of alarm, which will definitely send a
 signal to action when it is evaluated (with or without repeat, it will
 work well with time constraint)

 Any suggestion?


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers  Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Pre-Liberty Virtual Mini Summit

2015-04-28 Thread Nikhil Komawar
Hi all,

Please note an important update on the schedule. Since we need to finalize the 
Glance summit sessions by May 8th, we have planned an earlier virtual meetup on 
Thursday May 7th 2015. We are still planning to have the discussions on May 
12th and if necessary on May 13th. The details are in the following etherpad.

https://etherpad.openstack.org/p/liberty-glance-virtual-mini-summit

Thanks,
-Nikhil


From: Nikhil Komawar nikhil.koma...@rackspace.com
Sent: Friday, April 24, 2015 11:11 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev]  [Glance] Pre-Liberty Virtual Mini Summit

Hello all,

Glance community is planning to conduct a Pre-Liberty-Summit video conferencing 
event over 2 days on Tuesday May 12th and Wednesday May 13th for everyone to 
discuss topics that are or are not going to be discussed at the main event. It 
will help everyone be prepared at Vancouver and be able to make the most out of 
such an extraordinary event. You are encouraged to propose topics. Since, we 
have limited slots, please try to do that as soon as possible.

The tentative schedule and sign up list is available at:
https://etherpad.openstack.org/p/liberty-glance-virtual-mini-summit

We are looking to finalize the schedule over the next few days and it will be 
available on the same etherpad. Please let me know if you have any questions.

Thanks,
 -Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] time based auto scaling

2015-04-28 Thread Fox, Kevin M
what about Mistral?

https://wiki.openstack.org/wiki/Mistral

Thanks,
Kevin


From: ZhiQiang Fan
Sent: Tuesday, April 28, 2015 9:23:20 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [ceilometer] time based auto scaling

Hi devs,

I'm thinking to add new type of alarm for time based auto scaling, but not sure 
if there is a better way to achieve it outside ceilometer scope

Currently we can auto scaling based on vm load, but it will take several 
minutes to do it. For the worst case, when the vm load is heavy, ceilometer 
needs 10 minutes (by default) to collect the performance data, and alarm need 1 
minutes to evaluate it, maybe there is evaluation_periods which set to higher 
that 1 to avoid performance peak.

So even though we can collect data by 1 minutes, but evaluation periods may be 
set to 3, so the worst case is after vm load perfomance in high level for 4 
minutes, then the alarm is triggered, then heat will expand vm count, nova will 
take dozens seconds or more to create, finally the service on the in the heat 
server group will performance bad or unavailable for several minutes, which is 
not acceptable for some critical applications.

But if we can know some high load which related with time, for example, 08:00am 
will be a peak, and after 22:00pm will definitely drop to idel level, then heat 
can increase some vms at 07:30am, then decrease some vms at 22:00 (or decrease 
by load as normal routine)

However, current ceilometer only provide time constraint alarm, which will only 
evaluate but not guarantee it will be triggered. And heat cannot provide 
something like timer, but just can wait for the signal.

So I propose to add a new type of alarm, which will definitely send a signal to 
action when it is evaluated (with or without repeat, it will work well with 
time constraint)

Any suggestion?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara][CDH] Is it possible to add CDH5.4 into Kilo release now?

2015-04-28 Thread Chen, Ken
Hi all,
Cloudera has already released the CDH 5.4.0 version. I have already 
registered a bp and submitted two patches for it 
(https://blueprints.launchpad.net/sahara/+spec/cdh-5-4-support). However, they 
are for the master stream, and Cloudera hopes it can be added to the latest 
release version of Sahara (the Kilo release) so that they can give better 
support to their customers. Is it possible to do this at this stage?

-Ken
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

2015-04-28 Thread Fox, Kevin M
Yes, ML2 was created since each of the drivers used to be required to do 
everything themselves and it was decided it would be far better for everyone to 
share the common bits. That's what ML2 is about. It's not about implementing an SDN.

Thanks,
Kevin


From: loy wolfe
Sent: Tuesday, April 28, 2015 6:16:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack 
project maintained by core team keep only API/DB in the future?

On Wed, Apr 29, 2015 at 2:59 AM, Kevin Benton blak...@gmail.com wrote:
 The concern is that having broken drivers out there that claim to work with
 an OpenStack project end up making the project look bad. It's similar to a
 first time Linux user experiencing frequent kernel panics because they are
 using hardware with terrible drivers. They aren't going to recognize the
 distinction and will just assume the project is bad.


I think the focal point is not the device drivers for the real
backends such as OVS/LB or HW TORs, but ML2 vs. external SDN controllers,
which some people also claim to be backends.

Again, an analogy with Linux, which has a socket layer exposing the API, a
common tcp/ip stack and common netdev & skbuff, while each NIC has its
own device driver (the real backend). While it makes sense to discuss
whether those backend device drivers should be split out of tree,
there was no consideration that the common middle stacks should be
split out for equal footing with some other external implementations.

Things are similar with Nova & Cinder: we may have all kinds of virt
drivers and volume drivers, but only one common scheduling &
compute/volume manager implementation. For Neutron it is necessary to
support hundreds of real backends, but does it really benefit
customers to put ML2 on an equal footing with a bunch of other external SDN
controllers?

Best Regards



I would love to see OpenStack upstream acting more like a resource to
 support users and developers

 I'm not sure what you mean here. The purpose of 3rd party CI requirements is
 to signal stability to users and to provide feedback to the developers.

 On Tue, Apr 28, 2015 at 4:18 AM, Luke Gorrie l...@tail-f.com wrote:

 On 28 April 2015 at 10:14, Duncan Thomas duncan.tho...@gmail.com wrote:

 If we allow third party CI to fail and wait for vendors to fix their
 stuff, experience has shown that they won't, and there'll be broken or
 barely functional drivers out there, and no easy way for the community to
 exert pressure to fix them up.


 Can't the user community exert pressure on the driver developers directly
 by talking to them, or indirectly by not using their drivers? How come
 OpenStack upstream wants to tell the developers what is needed before the
 users get a chance to take a look?

 I would love to see OpenStack upstream acting more like a resource to
 support users and developers (e.g. providing 3rd party CI hooks upon requst)
 and less like gatekeepers with big sticks to wave at people who don't drop
 their own priorities and Follow The Process.




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Stack/Resource updated_at conventions

2015-04-28 Thread Angus Salkeld
On Tue, Apr 28, 2015 at 1:46 AM, Steven Hardy sha...@redhat.com wrote:

 Hi all,

 I've been looking into $subject recently, I raised this bug:

 https://bugs.launchpad.net/heat/+bug/1448155

 Basically we've got some historically weird and potentially inconsistent
 behavior around updated_at, and I'm trying to figure out the best way to
 proceed.

 Now, we selectively update updated_at only on the transition to
 UPDATE_COMPLETE, where we store the time that we moved into
 UPDATE_IN_PROGRESS.  During the update, there's no way to derive the
 time we started the update.

 Also, we inconsistently store the time associated with the transition into
 IN_PROGRESS for suspend, resume, snapshot, restore and check actions (even
 though many of these don't modify the stack definition).

 The reason I need this is the hook/breakpoint API - the only way to detect
 if you've hit a breakpoint is via events, and to detect you've hit a hook
 during multiple sequential updates (some of which may fail or time out with
 hooks pending), you need to filter the events to only consider those with a
 timestamp newer than the transition of the stack to the update IN_PROGRESS.

 AFAICT there's two options:

 1. Update the stack.Stack so we store now at every transition (e.g in
 state_set)

 2. Stop trying to explicitly control updated_at, and just allow the oslo
 TimestampMixin to do it's job and update updated_at every time the DB model
 is updated.

 What are peoples thoughts?  Either will solve my problem, but I'm leaning
 towards (2) as the cleanest and most technically correct solution.


Just beware:
https://github.com/openstack/heat/blob/master/heat/engine/resources/stack_resource.py#L328-L346
and
https://review.openstack.org/#/c/173045/

This is our only current way for knowing if something has changed between 2
updates.

-A


 Similar problems exist for resource.Resource AFAICT.

 Steve

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] time based auto scaling

2015-04-28 Thread ZhiQiang Fan
Hi devs,

I'm thinking of adding a new type of alarm for time-based auto scaling, but I'm
not sure whether there is a better way to achieve it outside the ceilometer
scope.

Currently we can auto scale based on VM load, but it takes several minutes to
do so. In the worst case, when the VM load is heavy, ceilometer needs 10
minutes (by default) to collect the performance data, and the alarm needs 1
minute to evaluate it; evaluation_periods may also be set higher than 1 to
avoid reacting to short performance peaks.

So even though we can collect data every 1 minute, evaluation_periods may be
set to 3, so in the worst case the VM load stays high for 4 minutes before the
alarm is triggered; then heat expands the VM count, nova takes dozens of
seconds or more to create the instances, and in the end the service in the
heat server group performs badly or is unavailable for several minutes, which
is not acceptable for some critical applications.

But if we know that some high load is related to time, for example 08:00am
will be a peak and after 22:00pm the load will definitely drop to an idle
level, then heat can add some VMs at 07:30am and remove some VMs at 22:00 (or
scale down by load as the normal routine).

However, current ceilometer only provides time-constrained alarms, which are
only evaluated within the constraint but are not guaranteed to be triggered.
And heat cannot provide something like a timer; it can only wait for the
signal.
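
For reference, a time-constrained threshold alarm as it exists today looks
roughly like the body below (values are illustrative): it is only evaluated
inside the constraint window, but it still needs the metric to cross the
threshold before the Heat webhook fires.

    morning_peak_alarm = {
        'name': 'scaleup-morning-peak',
        'type': 'threshold',
        'threshold_rule': {
            'meter_name': 'cpu_util',
            'statistic': 'avg',
            'comparison_operator': 'gt',
            'threshold': 70.0,
            'period': 60,
            'evaluation_periods': 3,
        },
        'time_constraints': [{
            'name': 'morning-peak',
            'start': '0 7 * * *',   # cron expression: every day at 07:00
            'duration': 7200,       # evaluate for the following two hours
        }],
        'alarm_actions': ['http://heat-endpoint/scale-up-webhook'],
    }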

So I propose to add a new type of alarm, which will definitely send a
signal to its action when it is evaluated (with or without repeat; it would
work well with time constraints).

Any suggestion?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] time based auto scaling

2015-04-28 Thread Luo Gangyi
Agree with ZhiQiang. 


Maybe we could achieve this with heat itself or another project like Mistral, 
but it seems more natural to achieve it through the ceilometer alarm system.


--
Luo gangyiluogan...@chinamobile.com



 




-- Original --
From:  ZhiQiang Fan;aji.zq...@gmail.com;
Date:  Wed, Apr 29, 2015 12:23 PM
To:  OpenStack Development Mailing 
Listopenstack-dev@lists.openstack.org; 

Subject:  [openstack-dev] [ceilometer] time based auto scaling



Hi devs,

I'm thinking to add new type of alarm for time based auto scaling, but not sure 
if there is a better way to achieve it outside ceilometer scope


Currently we can auto scaling based on vm load, but it will take several 
minutes to do it. For the worst case, when the vm load is heavy, ceilometer 
needs 10 minutes (by default) to collect the performance data, and alarm need 1 
minutes to evaluate it, maybe there is evaluation_periods which set to higher 
that 1 to avoid performance peak. 


So even though we can collect data by 1 minutes, but evaluation periods may be 
set to 3, so the worst case is after vm load perfomance in high level for 4 
minutes, then the alarm is triggered, then heat will expand vm count, nova will 
take dozens seconds or more to create, finally the service on the in the heat 
server group will performance bad or unavailable for several minutes, which is 
not acceptable for some critical applications.


But if we can know some high load which related with time, for example, 08:00am 
will be a peak, and after 22:00pm will definitely drop to idel level, then heat 
can increase some vms at 07:30am, then decrease some vms at 22:00 (or decrease 
by load as normal routine)


However, current ceilometer only provide time constraint alarm, which will only 
evaluate but not guarantee it will be triggered. And heat cannot provide 
something like timer, but just can wait for the signal.



So I propose to add a new type of alarm, which will definitely send a signal to 
action when it is evaluated (with or without repeat, it will work well with 
time constraint)


Any suggestion?__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][third-party][neutron-lbaas] git review - how to provide the URL to the test artifacts

2015-04-28 Thread Shane McGough
Hi all


I am running into trouble with how to post back the link to the log artefacts 
after running the CI.


I can see how this is done in zuul using the url_pattern in zuul.conf, but as 
it stands now I am only using jenkins and the command line to monitor gerrit 
and build test environments.


Is there a way to provide the URL back to gerrit with git review via ssh in the 
command line?


Thanks


Shane McGough
Junior Software Developer
KEMP Technologies
National Technology Park, Limerick, Ireland.

kemptechnologies.comhttps://kemptechnologies.com/ | 
@KEMPtechhttps://twitter.com/KEMPtech | 
LinkedInhttps://www.linkedin.com/company/kemp-technologies
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-28 Thread Fox, Kevin M
This is all good stuff. Thanks. Do/should the neutron docs have an
openvswitch debugging page? This belongs there for easy access. Such a page
might go a long way toward alleviating fears about the openvswitch backend.

Thanks,
Kevin


From: Attila Fazekas
Sent: Tuesday, April 28, 2015 1:22:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in 
DevStack [was: Status of the nova-network to Neutron migration work]

You can tcpdump the ovs ports as usual.

Please keep in mind OVS does not have a `single contention` port.
OVS does MAC learning by default, so you may not see `learned` unicast traffic
on a random trunk port. You MAY see BUM traffic, but much of that can also be
suppressed by neutron-ml2-ovs; AFAIK it is not enabled by default.

OVS behaves like a real switch, and real switches also do not have 5 Tbit/sec
ports for monitoring :(
If you need to tcpdump on a port which is not visible in userspace by default
(internal patch links), you should do port mirroring. [1]

Usually you do not need to dump the traffic.
What you should do as basic troubleshooting is check the tags on the ports
(`ovsdb-client dump` shows everything, excluding the OpenFlow rules).

Hopefully the root cause is fixed, but you should check whether a port is left
as a trunk when it needs to be tagged.

Neutron also dedicates vlan 4095 on br-int as a dead vlan;
if you have a port in it, it can mean a misconfiguration,
a message lost in the void, or that something exceptional happened.

If you really need to redirect exceptional `out of band` traffic to a special
port or to an external service (controller), it would be a more complex thing
than just doing the mirroring.

[1] http://www.yet.org/2014/09/openvswitch-troubleshooting/

PS.:
In many cases OVS does not generate ICMP packets where a real `L3` switch
would, and that's why MTU size differences cause issues and require extra care
in configuration when OVS is used with tunneling. (OVS can also be used with
vlans.)

This has probably caused the most headaches for many users.

PS2.:
Somewhere I read that OVS had PMTUD support, but it was removed because
it did not conform to the standard.
Now it just drops the packet silently :(



- Original Message -
 From: Jeremy Stanley fu...@yuggoth.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, April 21, 2015 5:00:24 PM
 Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in 
 DevStack [was: Status of the nova-network
 to Neutron migration work]

 On 2015-04-21 03:19:04 -0400 (-0400), Attila Fazekas wrote:
 [...]
  IMHO the OVS is less complex than netfilter (iptables, *tables),
  if someone able to deal with reading the netfilter rules he should
  be able to deal with OVS as well.

 In a simple DevStack setup, you really have that many
 iptables/ebtables rules?

  OVS has debugging tools for internal operations, I guess you are
  looking for something else. I do not have any `good debugging`
  tool for net-filter either.
 [...]

 Complexity of connecting tcpdump to the bridge was the primary
 concern here (convenient means of debugging network problems when
 you're using OVS, less tools for debugging OVS itself though it can
 come down to that at times as well). Also ebtables can easily be
 configured to log every frame it blocks, forwards or rewrites
 (presumably so can the OVS flow handler? but how?).
 --
 Jeremy Stanley

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Call to action, revisit CIS state

2015-04-28 Thread Ian Cordasco


On 4/28/15, 00:31, Tripp, Travis S travis.tr...@hp.com wrote:


On 4/27/15, 05:39, Kuvaja, Erno kuv...@hp.com wrote:

The spec bluntly states that there is no security impact from the
implementation, and the concerns should have been brought up so reviewers
would have had a better chance to catch possible threats.

 
I would like you to look back into those two specs and the comments, look
back into the implementation, and raise any urgent concerns. Please let's try
to have a good and healthy base for discussion at the Vancouver Summit on how
we will continue forward from this!
 



Any thoughts on improving security are always welcome.  As you'll find in
the original service spec, in the comments on it, and in the code,
security was one of the number one topics with the CIS service. Getting
input on this was a driving reason to initially target a single OpenStack
service (Glance). Security was also discussed in all three of the summit
discussions leading to the creation of this service (pre-Kilo virtual
summit, Kilo summit, Kilo mini-summit). Without security, this becomes an
admin only service of limited interest and could be done completely
outside of OpenStack as a native, possibly proprietary plugin to Elastic
Search itself. In that scenario, it also would not have any input from the
community and would not provide benefit to the broader community.

https://review.openstack.org/#/c/138051/


On 4/27/15, 10:52 AM, Ian Cordasco ian.corda...@rackspace.com wrote:




There's a slight problem with this though. We load the plugins dynamically
https://github.com/openstack/glance/blob/582f8563e866f167ae1de1a2309c1a1e2484442a/glance/common/utils.py#L735
(as anyone really would expect us to)
which means new plugins can be created for any service that is willing to
create one and install it properly. With that done, we /could/ have CIS
become a centralized Elasticsearch API in OpenStack as managed by Glance.

The solution that seems obvious for this is to disallow plugins from declaring
their index name (using PluginClass.get_index_name), but I don't think that
warrants an RC3 or will necessarily make it into 2015.1.1.


Thank you Ian for your continued thoughtful reviews. As Kamil pointed out,
the index name also is a point of customization for deployers that might
be using their elastic search cluster for multiple indexes.  If they want
to change the index for any reason such as avoiding collisions, this
allows that flexibility.

Sorry, so deployers are expected to do something like

class FixAssumptionsMadeByCIS(plugins.images.ImageIndex):
    def get_index_name(self):
        return 'glance_made_an_assumption'

Why wouldn’t this be configurable in etc/glance-search.conf? That seems
to me to be the best location for deployers to be configuring it. In fact,
isn’t that where everything that deployers are intended to configure is
supposed to go? In that case, I’m failing to see the reasoning for
`get_index_name`.
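
To make the alternative concrete, here is a rough sketch of what a
config-driven index name could look like; the option name, group and default
are made up for illustration and are not the actual CIS code:

# Rough sketch: read the index name from glance-search.conf instead of
# hard-coding it in each plugin. Names and defaults are assumptions.
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts(
    [cfg.StrOpt('index_name', default='glance',
                help='Elasticsearch index used by the catalog index service')],
    group='catalog_search')


class ConfigDrivenIndex(object):
    """Stand-in for a plugin base class, purely for illustration."""

    def get_index_name(self):
        # Deployers would change this in etc/glance-search.conf rather than
        # subclassing the plugin.
        return CONF.catalog_search.index_name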


If CIS is going to become a fully supported (or non-experimental) Glance
API in Liberty, I think we should really make sure that it is a service
that can only create documents for Glance. Since the API is Experimental,
I think it's safe to say the API for the Plugins will be considered
experimental and so removing get_index_name from plugin classes will not
break the world.


I have a summit session proposed on discussing the catalog index service
at the summit and I specifically want to cover the scope and logistics of
it moving forward. This includes discussing whether or not it should be
proposed as its own project or if it might make sense for it to move to
its own repo as part of the glance project for technical and logistical
concerns. I've started populating the linked discussion etherpad for that
session proposal with a few thoughts. There appears to be another highly
related session from Stuart, Flavio, and Brian that should be logically
arranged so that the timing / coordination between the two sessions makes
sense.

Cool. I’m looking forward to that session.

Cheers,
Ian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Question for the TC candidates

2015-04-28 Thread Jeremy Stanley
On 2015-04-28 16:30:21 +0100 (+0100), Chris Dent wrote:
[...]
 What's important to avoid is the blog postings being only reporting of
 conclusions. They also need to be invitations to participate in the
 discussions. Yes, the mailing list, gerrit and meeting logs have some
 of the ongoing discussions but often, without a nudge, people won't
 know.
[...]

Perhaps better visibility for the meeting agenda would help? As in
these are the major topics we're planning to cover in the upcoming
meeting, everyone is encouraged to attend sort of messaging?
Blogging that might be a bit obnoxious, not really sure (I'm one of
those luddites who prefer mailing lists to blogs so tend not to
follow the latter anyway).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Question for the TC candidates

2015-04-28 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2015-04-28 16:21:17 +:
 On 2015-04-28 16:30:21 +0100 (+0100), Chris Dent wrote:
 [...]
  What's important to avoid is the blog postings being only reporting of
  conclusions. They also need to be invitations to participate in the
  discussions. Yes, the mailing list, gerrit and meeting logs have some
  of the ongoing discussions but often, without a nudge, people won't
  know.
 [...]
 
 Perhaps better visibility for the meeting agenda would help? As in
 these are the major topics we're planning to cover in the upcoming
 meeting, everyone is encouraged to attend sort of messaging?
 Blogging that might be a bit obnoxious, not really sure (I'm one of
 those luddites who prefer mailing lists to blogs so tend not to
 follow the latter anyway).

The agenda is managed in the wiki [1] and Thierry sends a copy to
the openstack-tc mailing list every week. Maybe those should come
here to this list, with the topic tag [tc], instead?

Doug

[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet when creating a pool?

2015-04-28 Thread Brandon Logan
So someone pointed out that you were using lbaas for Juno, which would mean
you aren't using LBaaS V2.  So you're using V1.  V1 members do not take
subnet_id as an attribute.  Let me know how you are making your requests.


Thanks,

Brandon


From: Brandon Logan brandon.lo...@rackspace.com
Sent: Monday, April 27, 2015 8:40 PM
To: OpenStack Development Mailing List not for usage questions
Subject: Re: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet 
when creating a pool?


I'm assuming you are using LBaaS V2.  With that assumption, I'm not sure how
you are having to select a subnet on the pool.  It's not supposed to be a field
at all on the pool object.  subnet_id is required on the member object right
now, but that's something I and others think should just be optional; if it is
not specified, then it's assumed that member can be reached with whatever has
already been set up.  Another option is that the pool could get a subnet_id
field in the future, and all members created without a subnet_id would be
assumed to be on the pool's subnet_id, but I'm getting ahead of myself and this
has no bearing on your current issue.
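
(For reference, a rough sketch of where subnet_id currently shows up on a V2
member create; all IDs below are placeholders:)

# Rough sketch of a LBaaS V2 member create body; all IDs are placeholders.
import json

member_create = {
    "member": {
        "address": "10.0.0.5",        # backend server IP
        "protocol_port": 80,
        "subnet_id": "SUBNET-UUID",   # required today; arguably it could be
                                      # optional, as discussed above
        "weight": 1,
    }
}
# POST /v2.0/lbaas/pools/<POOL-UUID>/members
print(json.dumps(member_create, indent=2))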


Could you tell me how you are making your requests? CLI? REST directly?


From: Wanjing Xu wanjing...@hotmail.com
Sent: Monday, April 27, 2015 12:57 PM
To: OpenStack Development Mailing List not for usage questions
Subject: [openstack-dev] [Neutron][LBaaS] Why do we need to select subnet when 
creating a pool?

So when I use Haproxy for LBaaS on Juno, there is a mandatory subnet field
that I need to fill in when creating a pool. Later on, when I add members, I
can use a different subnet (or simply just enter the IP of the member), and
when adding a vip, I can still select a third subnet.  So what is the use of
the first subnet that I used to create the pool?  There is no port created for
this pool subnet.  I can see that a port is created for the vip subnet that
the loadbalancer instance is bound to.

Regards!

Wanjing Xu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack

2015-04-28 Thread Sean M. Collins
I wanted to provide a quick update - the patch is passing tests at the
gate, however it is with a couple defaults hard-coded in. We are picking
eth0 as the physical interface, and VLANs.

https://review.openstack.org/#/c/168423/

I have another patch that is WIP that switches to VXLAN, but I also need
to adjust the defaults to make it work.

https://review.openstack.org/#/c/176927/

In short, we're very close, I just need to make some changes to the
first patch, so that it picks up and utilizes variables set by the user
in their local.conf file, like FLAT_INTERFACE or PUBLIC_INTERFACE
(depending on the mechanism driver) while also still using settings that
reflect our configuration at the gate.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Navigating the ever changing OpenStack codebase

2015-04-28 Thread Ronald Bradford
Thanks again for the clarification. Your initial --notests was an option I
was unaware of, and I didn't take the time to try variations. I was familiar
with invoking test names by regex; I just thought it was more of a convention.

Regards

Ronald



On Tue, Apr 28, 2015 at 2:48 PM, Doug Hellmann d...@doughellmann.com
wrote:

 Excerpts from Ronald Bradford's message of 2015-04-28 14:24:37 -0400:
  Thanks Doug. For others following this thread. The following creates and
  activates the tox virtual environment.
 
  # Note: its --<space>notests not --notests

 Sorry, that was a typo on my part. The option name is actually
 '--notest' (no s at the end). That causes tox to do everything it
 would normally do, except for the step where it runs the command list
 for the named environment.

  $ tox -epy27 -- notests

 This becomes:

  $ tox -e py27 --notest

 Doug

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Navigating the ever changing OpenStack codebase

2015-04-28 Thread Kevin L. Mitchell
On Tue, 2015-04-28 at 16:08 -0400, Jay Pipes wrote:
 Honestly, I see no problem with some helper bash scripts that simplify 
 life for new contributors. The bash scripts do wonders for developers 
 new to OpenStack or Python coding by having a pretty easy and readable 
 way of determining what CLI commands are used to execute tests. Hell, 
 devstack [1] itself was written originally in the way it was to 
 well-document the deployment process for OpenStack. Many packagers and 
 configuration management script authors have looked at devstack's Bash 
 scripts for inspiration and instruction in this way.
 
 The point Ronald was making that nobody seems to have addressed is the 
 very valid observation that as a new contributor, it can be very 
 confusing to go from one project to another and see different ways of 
 running tests. Some projects have run_tests.sh and still actively 
 promote it in the devref docs. Others don't
 
 While Ronald seems to have been the victim of unfortunate timing (he 
 started toying around with python-openstackclient and within a week, 
 they removed the script he was using to run tests), that doesn't make 
 his point about our inconsistency moot.

Completely agreed, actually; I was only responding to the comment
suggesting the complete removal of run_tests.sh.  I personally think we
should promote only tox in the various doc files, and reference
run_tests.sh only as a legacy thing we can't fully get rid of quite yet.
(Incidentally, for my testing purposes, I don't care where it is, as
long as it's somewhere; so we could also move it to, say, tools.  I
don't even care what it outputs, as long as it gives a reasonable return
value; so we could have it print out a scary-looking warning about it
being legacy… :)
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron Lbaas v2 not showing operating_status as inactive

2015-04-28 Thread Madhusudhan Kandadai
Hi Anand,

There is an API which calls the 'statuses' method. I could see the status
'DISABLED' in: GET /lbaas/loadbalancers/<loadbalancer_id>/statuses.

Maybe we need to correct the doc to reflect the right URL to avoid
confusion. If that is the right API call, I shall update the bug and mark
it as fixed.
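
For reference, this is roughly how I am exercising it; the endpoint, ID and
token below are placeholders, and the response shown is abridged:

# Rough sketch of querying the status tree of a load balancer.
import requests

NEUTRON = "http://controller:9696"
LB_ID = "LOADBALANCER-UUID"
TOKEN = "KEYSTONE-TOKEN"

resp = requests.get(
    "{0}/v2.0/lbaas/loadbalancers/{1}/statuses".format(NEUTRON, LB_ID),
    headers={"X-Auth-Token": TOKEN})
print(resp.json())
# something like:
# {"statuses": {"loadbalancer": {"operating_status": "DISABLED", ...}}}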

Regards,
Madhu



On Tue, Apr 28, 2015 at 12:28 PM, Anand shanmugam anand1...@outlook.com
wrote:

 Hi ,

 I am working on the bug https://bugs.launchpad.net/neutron/+bug/1449286

 In this bug, admin_state_up is set to false when creating an lbaas v2
 loadbalancer. The operating_status should become DISABLED for the created
 loadbalancer, but it is showing as online.

 I can see that there is a method 'statuses' which takes care of disabling
 the operating_status (
 https://github.com/openstack/neutron-lbaas/blob/master/neutron_lbaas/services/loadbalancer/plugin.py#L971)
 but I cannot find the method which calls this 'statuses' method.

 I feel this statuses method is not called at all when creating or updating
 a loadbalancer. Could someone please help me find out whether there is any
 other api to call this method?

 Regards,
 Anand S

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Kilo RC3 available

2015-04-28 Thread Thierry Carrez
Hello everyone,

Due to a critical upgrade issue (bug 1448075) discovered in RC2 testing,
a new Nova release candidate was just created for Kilo. The list of RC3
last-minute fixes, as well as the RC3 tarball are available at:

https://launchpad.net/nova/kilo/kilo-rc3

At this late stage, this tarball is very likely to be formally released
as the final Kilo version on April 30. You are therefore strongly
encouraged to test and validate it !

Alternatively, you can directly test the stable/kilo branch at:
https://github.com/openstack/nova/tree/stable/kilo

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/nova/+filebug

and tag it *kilo-rc-potential* to bring it to the release crew's attention.

Thanks!

-- 
Thierry Carrez (ttx)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] beaker-rspec tests blueprint

2015-04-28 Thread Gabriele Cerami
Hi,

For people who, like me, would like to contribute to the effort of adding
tests to the upcoming beaker-rspec framework, it could be really helpful to
discuss the scope, requirements and goals for the framework, and what it takes
for a test to make sense in this environment.

I'd like to see, if possible, a blueprint proposed to the community
about beaker-rspec.

thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] trimming down Tempest smoke tag

2015-04-28 Thread David Kranz

On 04/28/2015 06:38 AM, Sean Dague wrote:

The Tempest Smoke tag was originally introduced to provide a quick view
of your OpenStack environment to ensure that a few basic things were
working. It was intended to be fast.

However, during Icehouse the smoke tag was repurposed as a way to let
neutron not backslide (so it's massively overloaded with network tests).
It current runs at about 15 minutes on neutron jobs. This is why grenade
neutron takes *so* long, because we run tempest smoke twice.

The smoke tag needs a diet. I believe our working definition should be
something as follows:

  - Total run time should be fast (<= 5 minutes)
  - No negative tests
  - No admin tests
  - No tests that test optional extensions
  - No tests that test advanced services (like lbaas, vpnaas)
  - No proxy service tests

The criteria for a good set of tests is CRUD operations on basic
services. For instance, with compute we should have built a few servers,
ensure we can shut them down. For neutron we should have done some basic
network / port plugging.
That makes sense. On IRC, Sean and I agreed that this would include
creation of users, projects, etc. So some of the keystone smoke tests
will be left in even though they are admin tests. IMO, it is debatable whether
admin is relevant as part of the criteria for smoke.
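
(As an aside, smoke membership is just a test attribute; a rough sketch,
assuming the tempest test.attr decorator of this era, of how a test opts in:

# Not a real tempest test, just an illustration of the 'smoke' attribute.
from tempest import test


class ServersSmokeTest(test.BaseTestCase):

    @test.attr(type='smoke')
    def test_create_and_delete_server(self):
        # basic CRUD on a core resource: the kind of coverage the trimmed
        # smoke tag is meant to keep
        pass

So adding or removing a test from the tag is a one-line change.)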


We also previously had the 'smoke' tag include all of the scenario
tests, which was fine when we had 6 scenario tests. However as those
have grown I think that should be trimmed back to a few basic through
scenarios.

The results of this are -
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:smoke,n,z

The impacts on our upstream gate will mean that grenade jobs will speed
up dramatically (20 minutes faster on grenade neutron).

There is one edge condition which exists, which is the
check-tempest-dsvm-neutron-icehouse job. Neutron couldn't pass either a
full or parallel tempest run in icehouse (it's far too racy). So that's
current running the smoke-serial tag. This would end up reducing the
number of tests run on that job. However, based on the number of
rechecks I've had to run in this series, that job is currently at about
a 30% fail rate - http://goo.gl/N2w7qc - which means some test reduction
is probably in order anyway, as it's mostly just preventing other people
from landing unrelated patches.

This was something we were originally planning on doing during the QA
Sprint but ran out of time. It looks like we'll plan to land this right
after Tempest 4 is cut this week, so that people that really want the
old behavior can stay on the Tempest 4 release, but master is moving
forward.

I think that once we trim down we can decide to point add specific tests
later. I expect smoke to be a bit more fluid over time, so it's not a
tag that anyone should count on a test going into that tag and staying
forever.
Agreed. The criteria and purpose should stay the same but individual 
tests may be added or removed from smoke.

Thanks for doing this.

 -David


-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Weekly subteam status report

2015-04-28 Thread Ruby Loo
Hi,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)


(As of Mon, 27 Apr, 15:20 UTC)
Open: 145 (+2)
4 new (-4), 41 in progress (+7), 0 critical, 9 high (-1) and 11 (+3)
incomplete


Drivers
==

iLO (wanyen)
--
Made good progress on 3rd-party CI test.  All patches have been submitted
for review.

iRMC (naohirot)
-
iRMC Virtual Media Deploy Driver Spec has been approved, and the code is
ready for review https://review.openstack.org/#/c/151958/ which has
included:
- Support for non-glance image references
- automate-uefi-bios-iso-creation



Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Navigating the ever changing OpenStack codebase

2015-04-28 Thread Jeremy Stanley
On 2015-04-28 16:08:03 -0400 (-0400), Jay Pipes wrote:
 Honestly, I see no problem with some helper bash scripts that
 simplify life for new contributors.
[...]

Well, the main downside to them is that rather than serving as
documentation of how to run the tests, they serve as a temptation to
developers using them to start adding workarounds for various things
and can end up in a situation where what we're gating on acts
nothing like what the developers who run that script are actually
getting. And then that leads to even more confusion because they
don't realize that their problem is being hidden by something hacked
into run_tests.sh so they think the automated CI is broken instead.

I remember it happening regularly before we started begging people
to run tox and instead remove those scripts where possible.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Skipping Cross-Project meeting today

2015-04-28 Thread Thierry Carrez
Hi!

The agenda for the cross-project meeting is currently pretty empty:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

Since we are all pretty busy on release week, I propose we skip it and
meet again next week.

The only topic on the agenda was about officializing the Liberty release
schedule. If you have last-minute comments on that, please follow-up on
the ML thread instead:

http://lists.openstack.org/pipermail/openstack-dev/2015-April/061331.html

Remember anyone can add cross-project topics to discuss at this meeting:
just edit the above wiki page :)

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Additions to stable-maint-core

2015-04-28 Thread Thierry Carrez
Hi everyone,

I'm pleased to announce that two well-known QA team members just
accepted to join the stable-maint-core group: Matt Treinish and Matt
Riedemann.

Stable maint core team members ensure that the stable branches are
working correctly and that the stable branch policy[1] is enforced
across project-specific stable maintenance teams. If you are interested,
please send us[2] an email or join us in #openstack-stable.

Cheers,

[1] https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy
[2] https://review.openstack.org/#/admin/groups/530,members

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][python-novaclient] microversion implementation on client side

2015-04-28 Thread Devananda van der Veen
FWIW, we enumerated the use-cases and expected behavior for all
combinations of server [pre versions, older version, newer version]
and client [pre versions, older version, newer version, user-specified
version], in this informational spec:

http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html#proposed-change

Not all of that is implemented yet within our client, but the
auto-negotiation of version is done. While our clients probably don't
share any code, maybe something here can help:

http://git.openstack.org/cgit/openstack/python-ironicclient/tree/ironicclient/common/http.py#n72
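
For illustration only, a rough sketch of the negotiation idea (pick the
highest version both sides support, fail if the ranges don't overlap); the
numbers and names here are made up and this is not the ironicclient code:

# Rough sketch of client-side microversion negotiation.
CLIENT_MIN = (1, 1)
CLIENT_MAX = (1, 6)


def negotiate(server_min, server_max, requested=None):
    if requested is not None:
        # The user pinned a version: honour it only if both sides support it.
        if (server_min <= requested <= server_max
                and CLIENT_MIN <= requested <= CLIENT_MAX):
            return requested
        raise RuntimeError("requested version %r not supported" % (requested,))
    # Otherwise be optimistic: the latest version supported by both sides.
    best = min(CLIENT_MAX, server_max)
    if best < max(CLIENT_MIN, server_min):
        raise RuntimeError("no overlapping microversion range")
    return best


# e.g. the server advertises 1.1 .. 1.4, so the client speaks 1.4
print(negotiate((1, 1), (1, 4)))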

-Deva

On Mon, Apr 27, 2015 at 2:49 AM, John Garbutt j...@johngarbutt.com wrote:
 I see these changes as really important.

 We need to establish good patterns other SDKs can copy.

 On 24 April 2015 at 12:05, Alex Xu sou...@gmail.com wrote:
 2015-04-24 18:15 GMT+08:00 Andrey Kurilin akuri...@mirantis.com:
 When the user executes a cmd without --os-compute-version, the nova client
  should discover which versions the nova server supports. Then the cmd
  chooses the latest version supported by both client and server.

 In that case, why can X-Compute-API-Version accept the latest value? Also,
 such discovery will require an extra request to the API side for every client call.


 I think it is convenient for some cases, like making it easier for a user to
 try the nova api with some code that accesses the nova api directly. Yes, it
 needs one more extra request. But without discovery we can't ensure the client
 supports the server; maybe the client is too old for the server and doesn't
 even support the server's min version. For a better user experience, I think
 it is worth discovering the version. And we already call keystone on each nova
 client cli call, so it is acceptable.

 We might need to extend the API to make this easier, but I think we
 need to come up with a simple and efficient pattern here.


 Case 1:
 Existing python-novaclient calls, now going to v2.1 API

 We can look for the transitional entry of computev21, as mentioned
 above, but it seems fair to assume v2.1 and v2.0 are accessed from the
 same service catalog entry of compute, by default (eventually).

 Lets be optimistic about what the cloud supports, and request latest
 version from v2.1.

 If it's a v2.0-only API endpoint, we will not get back a version header
 with the response; we could error out if the user requested a v2.x
 min_version via the CLI parameters.

 In most cases, we get the latest return values, and all is well.


 Case 2:
 User wants some info they know was added to the response in a specific
 microversion

 We can request latest and error out if we don't get a new enough
 version to meet the user's min requirement.


 Case 3:
 Adding support for a new request added in a microversion

 We could just send latest and assume the new functionality, then
 raise an error when you get bad request (or similar), and check the
 version header to see if that was the cause of the problem, so we can
 say why it failed.

 If its supported, everything just works.

 If the user requests a specific version before it was supported, we
 should error out as not supported, I guess?

 In a way it would be cleaner if we had a way for the client to say
 latest but requires 2.3, so you get a bad version request if your
 minimum requirement is not respected; that is much clearer than
 misinterpreting random errors that you might generate. But I guess
 it's not totally required here.


 Would all that work? It should avoid an extra API call to discover the
 specific version we have available.

 '--os-compute-version=None' can be supported; that means it will return the
  min version the server supports.

 From my point of view '--os-compute-version=None' is equal to not specifying
 a value. Maybe it would be better to accept the value min for the
 os-compute-version option.

 I think '--os-compute-version=None' means the version request header is not
 specified when sending the api request to the server. The server behavior is
 that if no value is specified, the min version will be used.

 --os-compute-version=v2 means no version specified I guess?

 Can we go back to the use cases here please?
 What do the users need here and why?


 3. if microversions are not supported, but the user calls a cmd with
  --os-compute-version, this should fail.

 Imo, it should be implemented on the API side (return BadRequest when the
 X-Compute-API-Version header is present in V2)

 V2 is already deployed now, and doesn't do that.

 No matter what happens we need to fix that.

 Emm, I'm not sure. GET '/v2/' can already be used to discover whether
 microversions are supported or not. That sounds like adding another way to
 support discovery? And the v2 api didn't return a fault with some extra
 header, so that sounds like a behavior change and a backwards-incompatible one.

 -1

 We should not use the URL to detect the version.
 We have other ways to do that for a good reason.

 Thanks,
 John



 On Fri, Apr 24, 2015 at 12:42 PM, Alex Xu sou...@gmail.com wrote:



 2015-04-24 17:24 GMT+08:00 Andrey 

Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in DevStack [was: Status of the nova-network to Neutron migration work]

2015-04-28 Thread Attila Fazekas
You can tcpdump the ovs ports as usual.

Please keep in mind OVS does not have a `single contention` port.
OVS does MAC learning by default, so you may not see `learned` unicast traffic
on a random trunk port. You MAY see BUM traffic, but much of that can also be
suppressed by neutron-ml2-ovs; AFAIK it is not enabled by default.

OVS behaves like a real switch, and real switches also do not have 5 Tbit/sec
ports for monitoring :(
If you need to tcpdump on a port which is not visible in userspace by default
(internal patch links), you should do port mirroring. [1]

Usually you do not need to dump the traffic.
What you should do as basic troubleshooting is check the tags on the ports
(`ovsdb-client dump` shows everything, excluding the OpenFlow rules).

Hopefully the root cause is fixed, but you should check whether a port is left
as a trunk when it needs to be tagged.

Neutron also dedicates vlan 4095 on br-int as a dead vlan;
if you have a port in it, it can mean a misconfiguration,
a message lost in the void, or that something exceptional happened.

If you really need to redirect exceptional `out of band` traffic to a special
port or to an external service (controller), it would be a more complex thing
than just doing the mirroring.

[1] http://www.yet.org/2014/09/openvswitch-troubleshooting/

PS.:
In many cases OVS does not generate ICMP packets where a real `L3` switch
would, and that's why MTU size differences cause issues and require extra care
in configuration when OVS is used with tunneling. (OVS can also be used with
vlans.)

This has probably caused the most headaches for many users.

PS2.:
Somewhere I read that OVS had PMTUD support, but it was removed because
it did not conform to the standard.
Now it just drops the packet silently :(
 


- Original Message -
 From: Jeremy Stanley fu...@yuggoth.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, April 21, 2015 5:00:24 PM
 Subject: Re: [openstack-dev] [Nova][Neutron] Linuxbridge as the default in 
 DevStack [was: Status of the nova-network
 to Neutron migration work]
 
 On 2015-04-21 03:19:04 -0400 (-0400), Attila Fazekas wrote:
 [...]
  IMHO the OVS is less complex than netfilter (iptables, *tables),
  if someone able to deal with reading the netfilter rules he should
  be able to deal with OVS as well.
 
 In a simple DevStack setup, you really have that many
 iptables/ebtables rules?
 
  OVS has debugging tools for internal operations, I guess you are
  looking for something else. I do not have any `good debugging`
  tool for net-filter either.
 [...]
 
 Complexity of connecting tcpdump to the bridge was the primary
 concern here (convenient means of debugging network problems when
 you're using OVS, less tools for debugging OVS itself though it can
 come down to that at times as well). Also ebtables can easily be
 configured to log every frame it blocks, forwards or rewrites
 (presumably so can the OVS flow handler? but how?).
 --
 Jeremy Stanley
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Question about tempest API tests that establish SSH connection to instances

2015-04-28 Thread Yaroslav Lobankov
Hi everyone,

I have a question about tempest tests that are related to instance
validation. Some of these tests are

tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name[gate,id-ac1ad47f-984b-4441-9274-c9079b7a0666]
tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_created_server_vcpus[gate,id-cbc0f52f-05aa-492b-bdc1-84b575ca294b]
tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_host_name_is_same_as_server_name[gate,id-ac1ad47f-984b-4441-9274-c9079b7a0666]
tempest.api.compute.servers.test_create_server.ServersTestManualDisk.test_verify_created_server_vcpus[gate,id-cbc0f52f-05aa-492b-bdc1-84b575ca294b]

To enable these tests I should set the config option run_ssh to True.
When I set the option to True and ran the tests, all the tests failed. It
looks like the ssh code in the API tests doesn't work, but maybe I am wrong.
The question is the following: which of the tempest jobs runs these tests?
Maybe I have a tempest misconfiguration.

Regards,
Yaroslav Lobankov.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Question for the TC candidates

2015-04-28 Thread Stefano Maffulli
On Mon, 2015-04-27 at 17:00 -0400, Doug Hellmann wrote:
 I would have to go back and check, but I'm pretty sure the posts were
 highlighted in Stef's community newsletter email. 

They were, in fact. But even though many people love the newsletter, I have
the impression that few people follow the links.
The blog posts from the TC were on openstack.org/blog, on
planet.openstack.org, relayed on twitter and mentioned in the weekly
newsletter. I don't think we can give more visibility than this without
getting annoying. Maybe better titles and leads in the posts would help
more.

The problem is that there is just too much traffic and it's impossible
for anyone to keep up with everything. People skim through their emails
checking the subjects, their rss feeds reading only the titles, parsing
twitter feed for a couple of pages down (at best) and that's it. If
nothing catches their attention, pieces of information get lost. Even
the weekly newsletter, I'm sure, gets few people to click through the links
and read further (unless it has a cool title).

I've long come to the conclusion that it is what it is: at the size
we're at, we can't expect every voter to be fully informed about all the
issues.

Better titles and a sort of TL;DR first paragraph in blog posts are very
helpful. But in order to write those, the author needs to have more
training as a communicator and more time. It's just a hard problem.

/stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Question for the TC candidates

2015-04-28 Thread Anita Kuno
On 04/23/2015 12:14 PM, Chris Dent wrote:
 
 This might be a bit presumptuous, but why not give it a try...
 
 This cycle's TC elections didn't come with a set of prepackaged
 questions and though the self-nomination messages have included some
 very interesting stuff I think it would be useful to get answers
 from the candidates on at least one topical but open-ended
 question. Maybe other people have additional questions they think
 are important but this one is the one that matters to me and also
 captures the role that I wish the TC filled more strongly. Here's
 the preamble:
 
 There are lots of different ways to categorize the various
 stakeholders in the OpenStack community, no list is complete. For
 the sake of this question the people I'm concerned with are the
 developers, end-users and operators of OpenStack: the individuals
 who are actively involved with it on a daily basis. I'm intentionally
 leaving out things like the downstream.
 
 There are many different ways to define quality. For the sake of
 this question feel free to use whatever definition you like but take
 it as given that quality needs to be improved.
 
 Here's the question:
 
 What can and should the TC at large, and you specifically, do to ensure
 quality improves for the developers, end-users and operators of
 OpenStack as a full system, both as a project being developed and a
 product being used?
 
Chris:

I welcomed your question as I welcome all questions from the electorate
with the intention and motivation of getting to know the candidates
better, hopefully to improve engagement in the electoral process and to
increase the extent to which people feel a part of the structure they
participate in.

At present, I am beginning to wonder to what degree you are being honest
with us. Is your intention to get to know the candidates, or to communicate
your dissatisfaction with the current blog post situation?

It is detrimental to our overall electoral process if folks cloak
personal points of disagreement in the guise of open discussion.

Going forward, I encourage those who actually would like to get to know
all candidates in an electoral contest better to ask all the candidates
fair questions and let them respond. Should something come from that,
please follow up in a neutral way that does not weigh down the impartial
nature (hopefully) intended by the original question.

I do continue to hope that candidate statements and responses are
helpful to the electorate and that they cast their ballot without
feeling that doing so is an indication about their feelings regarding a
secondary issue.

My thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Navigating the ever changing OpenStack codebase

2015-04-28 Thread Clint Byrum
Excerpts from Kevin L. Mitchell's message of 2015-04-28 08:15:51 -0700:
 On Mon, 2015-04-27 at 15:54 -0700, Clint Byrum wrote:
  Excerpts from Kevin L. Mitchell's message of 2015-04-27 15:38:25 -0700:
   On Mon, 2015-04-27 at 21:42 +, Jeremy Stanley wrote:
I consider it an unfortunate oversight that those files weren't
deleted a very, very long time ago.
   
   Unfortunately, there's one problem with that: you can't tell tox to use
   a virtualenv that you've built.  We need this capability at present, so
   we have to run tests using run_tests.sh instead of tox :(  I have an
   issue open on tox to address this need, but haven't seen any movement on
   that; so until then, I have to oppose the removal of run_tests.sh…
   despite how much *I'd* like to see it bite the dust!
  
  Err.. you can just run the commands in tox.ini in the venv of your
  choice. You don't need run_tests.sh for that.
 
 No dice.  I don't want to have to parse the tox.ini directly.  We're
 talking about automated tests here, by the way.

Why not? It's an ini file, with a stable interface.

Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import configparser
>>> x = configparser.ConfigParser()
>>> x.read('tox.ini')
['tox.ini']
>>> x.get('testenv', 'commands')
"\npython setup.py testr --slowest --testr-args='{posargs}'"
>>>

I'm sure you've thought more about this than me, so I apologize for
sounding dense. However, I'm struggling to see where having to maintain
two test harnesses is less complicated than the code above.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova] Does anyone use Zookeeper, Memcache Nova ServiceGroup Driver ?

2015-04-28 Thread Vilobh Meshram
Attila,

Thanks for the details.

Why is the current Zk driver not good?

Apart from the slowness of the Mc and Zk drivers, are they reliable enough?

Let's say I have more than 1000 compute nodes; would you still suggest going
with the DB servicegroup driver?

The sg drivers were introduced to eliminate ~100 updates/sec at 1000 hosts,
but they caused all services to be fetched from the DB even if, at the given
code location, you only need the alive services.

I couldn't get this comment. The current implementation has get_all and
service_is_up calls, so why is it still fetching all compute nodes rather
than fetching only the ones for which service_is_up?

-Vilobh

On Tue, Apr 28, 2015 at 12:34 AM, Attila Fazekas afaze...@redhat.com
wrote:

 How many compute nodes do you want to manage?

 If it is less than ~1000, you do not need to care.
 If you have more, just use an SSD with a good write IOPS value.

 MySQL can actually be fast with enough memory and a good SSD.
 Even faster than [1].

 zk as a technology is good; the current nova driver is not. Not recommended.
 The current mc driver does a lot of tcp ping-pong for every node;
 it can be slower than the SQL driver.

 IMHO at high compute node counts you would face scheduler latency issues
 sooner than sg driver issues. (It is not Log(N) :()

 The sg drivers were introduced to eliminate ~100 updates/sec at 1000 hosts,
 but they caused all services to be fetched from the DB even if, at the given
 code location, you only need the alive services.


 [1]
 http://www.percona.com/blog/2013/10/18/innodb-scalability-issues-tables-without-primary-keys/

 - Original Message -
  From: Vilobh Meshram vilobhmeshram.openst...@gmail.com
  To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org, OpenStack
  Mailing List (not for usage questions) openst...@lists.openstack.org
  Sent: Tuesday, April 28, 2015 1:21:58 AM
  Subject: [openstack-dev] [openstack][nova] Does anyone use Zookeeper,
 Memcache Nova ServiceGroup Driver ?
 
  Hi,
 
  Does anyone use Zookeeper[1], Memcache[2] Nova ServiceGroup Driver ?
 
  If yes how has been your experience with it. It was noticed that most of
 the
  deployment try to use the default Database driver[3]. Any experiences
 with
  Zookeeper, Memcache driver will be helpful.
 
  -Vilobh
 
  [1]
 
 https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/zk.py
  [2]
 
 https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/mc.py
  [3]
 
 https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Question for the TC candidates

2015-04-28 Thread Stefano Maffulli
On Tue, 2015-04-28 at 16:30 +0100, Chris Dent wrote:
 What's important to avoid is the blog postings being only reporting of
 conclusions. They also need to be invitations to participate in the
 discussions. Yes, the mailing list, gerrit and meeting logs have some
 of the ongoing discussions but often, without a nudge, people won't
 know.

Not being a member of the TC, I have noticed that the hottest topics
touched by the TC were indeed discussed on this list quite extensively.
I'm sure you've not missed the conversation about big tent, tags,
release naming process, the use of secret IRC channels, etc. Important
conversations happen on this list first, then move to the tc and
eventually become decisions published on
http://governance.openstack.org.

Applications of new projects are also discussed on this list before they
land on the TC meeting agenda.

The mailing list of the TC is open and is low traffic. The agenda is
published there regularly: people who care about the TC and *have time
to spare* are on that list. 

If you haven't seen many blog posts in 2015, it is because the most
important change is the 'big tent' and its tagging system. My feeling is
that this topic is so huge that it will take another cycle before it can
be digested; nobody yet considers it 'completed' enough to summarize in
a blog post. I expect that after the release and the Summit the topic
will be more clear. BTW, add this session to your calendar for
Vancouver:
http://openstacksummitmay2015vancouver.sched.org/event/4f51d5c18865d24d9a8fbb5a0603f0c9

 I'm not trying to suggest that the TC is trying to keep people in
 the dark, rather that it always takes more effort than anyone would
 like to make sure things are lit.

I have the feeling that the largest and most important changes are being
communicated widely. They're hard and complex and will require time and
more effort by all (not just TC members) before they're also understood
properly. It'll take time.

.stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code

2015-04-28 Thread Russell Bryant
On 04/28/2015 01:17 PM, Neil Jerram wrote:
 Apologies for commenting so late, but I'm not clear on the concept of
 bringing all possible backend projects back inside Neutron.
 
 
 I think my question is similar to what Henry and Mathieu are getting at
 below - viz:
 
 
 We just recently decided to move a lot of vendor-specific ML2 mechanism
 driver code _out_ of the Neutron tree; and I thought that the main
 motivation for that was that it wasn't reasonably possible for most
 Neutron developers to understand, review and maintain that code to the
 same level as they can with the Neutron core code.
 
 
 How then does it now make sense to bring a load of vendor-specific code
 back into the Neutron fold?  Has some important factor changed?  Or have
 I misunderstood what is now being proposed?

The suggestion is to recognize that these are all part of the larger
Neutron effort.  It is *not* to suggest that the same group of
developers needs to be reviewing all of it.  It's mostly an
organizational thing.  The way teams operate should not change too much.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

