Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2014-12-22 Thread Bartlomiej Piotrowski
FYI, xz with multithreading support (5.2 release) has been marked as stable
yesterday.

Regards,
Bartłomiej Piotrowski

On Mon, Nov 24, 2014 at 12:32 PM, Bartłomiej Piotrowski 
bpiotrow...@mirantis.com wrote:

 On 24 Nov 2014, at 12:25, Matthew Mosesohn mmoses...@mirantis.com wrote:
  I did this exercise over many iterations during Docker container
  packing and found that as long as the data is under 1gb, it's going to
  compress really well with xz. Over 1gb and lrzip looks more attractive
  (but only on high memory systems). In reality, we're looking at log
  footprints from OpenStack environments on the order of 500mb to 2gb.
 
  xz is very slow on single-core systems with 1.5gb of memory, but it's
  quite a bit faster if you run it on a more powerful system. I've found
  level 4 compression to be the best compromise that works well enough
  that it's still far better than gzip. If increasing compression time
  by 3-5x is too much for you guys, why not just go to bzip? You'll
  still improve compression but be able to cut back on time.
 
  Best Regards,
  Matthew Mosesohn

 Alpha release of xz supports multithreading via -T (or —threads) parameter.
 We could also use pbzip2 instead of regular bzip to cut some time on
 multi-core
 systems.
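To make the trade-off in this thread concrete, here is a small, illustrative benchmark sketch. It uses Python's gzip/bz2/lzma modules as stand-ins for the gzip, (p)bzip2 and xz tools discussed above; the synthetic input is far more repetitive than a real diagnostic snapshot, so the absolute numbers mean little:

```python
# Compare gzip, bzip2 and xz-style (LZMA) compression on log-like data.
# Purely illustrative: real snapshots compress differently.
import bz2
import gzip
import lzma
import time

data = b"2014-12-22 12:00:00.000 INFO nova.compute.manager [-] some log line\n" * 20000

results = {}
for name, compress in [
    ("gzip", gzip.compress),
    ("bz2", bz2.compress),
    # preset=4 mirrors the xz level-4 compromise Matthew mentions
    ("xz-4", lambda d: lzma.compress(d, preset=4)),
]:
    start = time.perf_counter()
    size = len(compress(data))
    results[name] = (size, time.perf_counter() - start)

for name, (size, secs) in results.items():
    print("%-5s %8d bytes  %.3fs" % (name, size, secs))
```

On multi-core boxes the CLI tools can go further than this single-threaded sketch: xz 5.2 adds -T for threads, and pbzip2 parallelizes bzip2, which is exactly the point of the messages above.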

 Regards,
 Bartłomiej Piotrowski
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fw: [Heat] Multiple_Routers_Topology

2014-12-22 Thread Rao Shweta
 

 Hi All

I am working on OpenStack Heat and I wanted to build the below topology using a 
Heat template:



For this I am using the following template:

AWSTemplateFormatVersion: '2010-09-09'
Description: Sample Heat template that spins up multiple instances and a
  private network (JSON)
Resources:
  heat_network_01:
    Properties: {name: heat-network-01}
    Type: OS::Neutron::Net
  heat_network_02:
    Properties: {name: heat-network-02}
    Type: OS::Neutron::Net
  heat_router_01:
    Properties: {admin_state_up: 'True', name: heat-router-01}
    Type: OS::Neutron::Router
  heat_router_02:
    Properties: {admin_state_up: 'True', name: heat-router-02}
    Type: OS::Neutron::Router
  heat_router_int0:
    Properties:
      router_id: {Ref: heat_router_01}
      subnet_id: {Ref: heat_subnet_01}
    Type: OS::Neutron::RouterInterface
  heat_router_int1:
    Properties:
      router_id: {Ref: heat_router_02}
      subnet_id: {Ref: heat_subnet_02}
    Type: OS::Neutron::RouterInterface
  heat_subnet_01:
    Properties:
      cidr: 10.10.10.0/24
      dns_nameservers: [172.16.1.11, 172.16.1.6]
      enable_dhcp: 'True'
      gateway_ip: 10.10.10.254
      name: heat-subnet-01
      network_id: {Ref: heat_network_01}
    Type: OS::Neutron::Subnet
  heat_subnet_02:
    Properties:
      cidr: 10.10.11.0/24
      dns_nameservers: [172.16.1.11, 172.16.1.6]
      enable_dhcp: 'True'
      gateway_ip: 10.10.11.254
      name: heat-subnet-02
      network_id: {Ref: heat_network_02}
    Type: OS::Neutron::Subnet
  instance0:
    Properties:
      flavor: m1.nano
      image: cirros-0.3.2-x86_64-uec
      name: heat-instance-01
      networks:
      - port: {Ref: instance0_port0}
    Type: OS::Nova::Server
  instance0_port0:
    Properties:
      admin_state_up: 'True'
      network_id: {Ref: heat_network_01}
    Type: OS::Neutron::Port
  instance1:
    Properties:
      flavor: m1.nano
      image: cirros-0.3.2-x86_64-uec
      name: heat-instance-02
      networks:
      - port: {Ref: instance1_port0}
    Type: OS::Nova::Server
  instance1_port0:
    Properties:
      admin_state_up: 'True'
      network_id: {Ref: heat_network_01}
    Type: OS::Neutron::Port
  instance11:
    Properties:
      flavor: m1.nano
      image: cirros-0.3.2-x86_64-uec
      name: heat-instance11-01
      networks:
      - port: {Ref: instance11_port0}
    Type: OS::Nova::Server
  instance11_port0:
    Properties:
      admin_state_up: 'True'
      network_id: {Ref: heat_network_02}
    Type: OS::Neutron::Port
  instance12:
    Properties:
      flavor: m1.nano
      image: cirros-0.3.2-x86_64-uec
      name: heat-instance12-02
      networks:
      - port: {Ref: instance12_port0}
    Type: OS::Nova::Server
  instance12_port0:
    Properties:
      admin_state_up: 'True'
      network_id: {Ref: heat_network_02}
    Type: OS::Neutron::Port

I am able to create the topology using the template, but I am not able to 
connect the two routers, nor can I find a template example on the internet 
showing how to connect two routers. Can you please help me with:

1.) Can we connect two routers? I tried making an interface on router 1 and 
connecting it to subnet2, which results in an error.

  heat_router_int0:
    Properties:
      router_id: {Ref: heat_router_01}
      subnet_id: {Ref: heat_subnet_02}

Can you please guide me on how to connect routers, or have a link between 
routers, using a template?

2.) Can you please forward a link or an example template that I can refer to 
in order to implement the required topology using a Heat template?
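For question 1.), one pattern that may work is a third "transit" network between the two routers. The sketch below is untested and only illustrative: the transit CIDR and IP addresses are made up, and it assumes the same Juno-era OS::Neutron resources used in the template above. Two routers cannot both claim a subnet's gateway IP, so router 1 attaches via subnet_id while router 2 attaches through an explicitly created port with its own fixed IP:

```
  transit_network:
    Properties: {name: transit-net}
    Type: OS::Neutron::Net
  transit_subnet:
    Properties:
      cidr: 10.10.12.0/24
      gateway_ip: 10.10.12.1
      name: transit-subnet
      network_id: {Ref: transit_network}
    Type: OS::Neutron::Subnet
  router_01_transit_if:
    Properties:
      router_id: {Ref: heat_router_01}
      subnet_id: {Ref: transit_subnet}
    Type: OS::Neutron::RouterInterface
  router_02_transit_port:
    Properties:
      fixed_ips:
      - ip_address: 10.10.12.2
        subnet_id: {Ref: transit_subnet}
      network_id: {Ref: transit_network}
    Type: OS::Neutron::Port
  router_02_transit_if:
    Properties:
      port_id: {Ref: router_02_transit_port}
      router_id: {Ref: heat_router_02}
    Type: OS::Neutron::RouterInterface
```

Each router would additionally need a static route toward the other subnet via the transit network (for example with neutron router-update --routes, or an extra-route resource if your Heat version provides one) before end-to-end traffic can flow.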

Waiting for a response



Thank you 

Regards
 Shweta Rao
 Mailto: rao.shw...@tcs.com
 Website: http://www.tcs.com
 

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2014-12-22 Thread Radomir Dopieralski
On 20/12/14 21:25, Richard Jones wrote:
 This is a good proposal, though I'm unclear on how the
 static_settings.py file is populated by a developer (as opposed to a
 packager, which you described).

It's not; the developer version is included in the repository, and
simply points to where Bower is configured to put the files.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How can I continue to complete an abandoned blueprint?

2014-12-22 Thread li-zheming
hi all:
BP flavor-quota-memory 
(https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory) was 
submitted by my partner in Havana, but it was abandoned for some reason. I 
want to continue this blueprint. Based on the rules about BPs for Kilo, a 
spec is not necessary for this BP, so I would submit the code directly and 
use the commit message to clear up the questions from the spec. Is that 
right? How should I proceed? Thanks!


--

Name :  Li zheming
Company :  Hua Wei
Address  : Shenzhen China
Tel: 0086 18665391827
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 'module' object has no attribute 'HVSpec'

2014-12-22 Thread Srinivasa Rao Ragolu
Hi All,

I have integrated below CPU pinning patches to Nova

https://review.openstack.org/#/c/132001/2
https://review.openstack.org/#/c/128738/12
https://review.openstack.org/#/c/129266/11
https://review.openstack.org/#/c/129326/11
https://review.openstack.org/#/c/129603/10
https://review.openstack.org/#/c/129626/11
https://review.openstack.org/#/c/130490/11
https://review.openstack.org/#/c/130491/11
https://review.openstack.org/#/c/130598/10
https://review.openstack.org/#/c/131069/9
https://review.openstack.org/#/c/131210/8
https://review.openstack.org/#/c/131830/5
https://review.openstack.org/#/c/131831/6
https://review.openstack.org/#/c/131070/
https://review.openstack.org/#/c/132086/
https://review.openstack.org/#/c/132295/
https://review.openstack.org/#/c/132296/
https://review.openstack.org/#/c/132297/
https://review.openstack.org/#/c/132557/
https://review.openstack.org/#/c/132655/


And now if I try to run nova-compute, I get the error below:

  File "/opt/stack/nova/nova/objects/compute_node.py", line 93, in _from_db_object

    for hv_spec in hv_specs]

AttributeError: 'module' object has no attribute 'HVSpec'


Please help me in resolving this issue.


Thanks,

Srinivas.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 'module' object has no attribute 'HVSpec'

2014-12-22 Thread Kashyap Chamarthy
On Mon, Dec 22, 2014 at 04:37:47PM +0530, Srinivasa Rao Ragolu wrote:
 Hi All,
 
 I have integrated below CPU pinning patches to Nova

As of now, CPU pinning works directly from Nova git (as you can see,
most of the patches below are merged); you don't have to manually apply
any patches.

 https://review.openstack.org/#/c/132001/2
 https://review.openstack.org/#/c/128738/12
 https://review.openstack.org/#/c/129266/11
 https://review.openstack.org/#/c/129326/11
 https://review.openstack.org/#/c/129603/10
 https://review.openstack.org/#/c/129626/11
 https://review.openstack.org/#/c/130490/11
 https://review.openstack.org/#/c/130491/11
 https://review.openstack.org/#/c/130598/10
 https://review.openstack.org/#/c/131069/9
 https://review.openstack.org/#/c/131210/8
 https://review.openstack.org/#/c/131830/5
 https://review.openstack.org/#/c/131831/6
 https://review.openstack.org/#/c/131070/
 https://review.openstack.org/#/c/132086/
 https://review.openstack.org/#/c/132295/
 https://review.openstack.org/#/c/132296/
 https://review.openstack.org/#/c/132297/
 https://review.openstack.org/#/c/132557/
 https://review.openstack.org/#/c/132655/

The links are all mangled due to the bad formatting.

 And now if I try to run nova-compute, getting below error
 
 
 File /opt/stack/nova/nova/objects/compute_node.py, line 93, in 
 _from_db_object
 
 for hv_spec in hv_specs]
 
 AttributeError: 'module' object has no attribute 'HVSpec'

You can try directly from git and DevStack without manually applying
patches.

Also, this kind of usage question is better suited for the operators list
or ask.openstack.org.

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-22 Thread Alex Xu
Joe, thanks, that's a useful feature. But I'm still not sure it fits this
case. Having a user's server group deleted by the administrator, and a new
server group created for the user by the administrator, sounds confusing
for the user. I'm thinking of the HA case: if a host fails, the
infrastructure can evacuate instances off the failed host automatically,
and the user shouldn't be affected by that (the user will still see that
his instance is down and that it comes back later; at least we should
reduce the impact).

I think the key question is whether evacuating an instance that is in an
affinity group out of a failed host counts as a policy violation. The host
has already failed, so we can ignore the failed host in the server group
when we evacuate the first instance to another host. After the first
instance is evacuated, there is a new live host in the server group, and
the other instances will then be evacuated to that new live host to comply
with the affinity policy.

2014-12-22 11:29 GMT+08:00 Joe Cropper cropper@gmail.com:

 This is another great example of a use case in which these blueprints [1,
 2] would be handy.  They didn’t make the clip line for Kilo, but we’ll try
 again for L.  I personally don’t think the scheduler should have “special
 case” rules about when/when not to apply affinity policies, as that could
 be confusing for administrators.  It would be simple to just remove it from
 the group, thereby allowing the administrator to rebuild the VM anywhere
 s/he wants… and then re-add the VM to the group once the environment is
 operational once again.

 [1] https://review.openstack.org/#/c/136487/
 [2] https://review.openstack.org/#/c/139272/

 - Joe

 On Dec 21, 2014, at 8:36 PM, Lingxian Kong anlin.k...@gmail.com wrote:

  2014-12-22 9:21 GMT+08:00 Alex Xu sou...@gmail.com:
 
 
  2014-12-22 9:01 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:
 
 
 
  but what if the compute node is back to normal? There will be
  instances in the same server group with affinity policy, but located
  in different hosts.
 
 
  If operator decide to evacuate the instance from the failed host, we
 should
  fence the failed host first.
 
  Yes, actually. I mean the recommendation or prerequisite should be
  emphasized somewhere, e.g. the Operations Guide, otherwise it'll make
  things more confusing. But the issue you are working around is indeed a
  problem we should solve.
 
  --
  Regards!
  ---
  Lingxian Kong
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Plans to load and performance testing

2014-12-22 Thread Anastasia Kuznetsova
Dmitry,

Now I see that my comments were not very informative; I will try to describe
the environment and scenarios in more detail.

1) *1 api 1 engine 1 executor* means that all 3 Mistral processes were
running on the same box.
2) The list-workbooks scenario was run while there were no workflow
executions going on at the same time. I will take note of your comment and
also measure the time in that situation; I guess it will take more time,
the question is how much more.
3) 60% success means that only 60% of the runs of the 'list-workbooks'
scenario were successful. At the moment I have observed only one type of
error, a connection error to RabbitMQ:
ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer'))
4) We don't know the durability limits of Mistral or under what load
Mistral will 'die'; we want to define that threshold.

P.S. Dmitry, if you have any ideas/scenarios which you want to test, please
share them.
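Boris's runner advice from the earlier mail can be written down as a Rally task fragment like the one below. This is only a sketch: the scenario name is hypothetical and the numbers are placeholders, but it shows where the "rps" runner type and a concurrency several times larger than before would go:

```json
{
    "MistralWorkbooks.list_workbooks": [
        {
            "runner": {
                "type": "constant",
                "times": 100,
                "concurrency": 10
            }
        },
        {
            "runner": {
                "type": "rps",
                "times": 200,
                "rps": 5
            }
        }
    ]
}
```

Running `rally task report` after such a task would then produce the report output Boris asked to see shared.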

On Sat, Dec 20, 2014 at 9:35 AM, Dmitri Zimine dzim...@stackstorm.com
wrote:

 Anastasia, any start is a good start.

 * 1 api 1 engine 1 executor, list-workbooks*

 what exactly does it mean: 1) is mistral deployed on 3 boxes with
 component per box, or all three are processes on the same box? 2) is
 list-workbooks test running while workflow executions going on? How many?
 what’s the character of the load 3) when it says 60% success what exactly
 does it mean, what kind of failures? 4) what is the durability criteria,
 how long do we expect Mistral to withstand the load.

 Let’s discuss this in details on the next IRC meeting?

 Thanks again for getting this started.

 DZ.


 On Dec 19, 2014, at 7:44 AM, Anastasia Kuznetsova 
 akuznets...@mirantis.com wrote:

 Boris,

 Thanks for feedback!

  But I belive that you should put bigger load here:
 https://etherpad.openstack.org/p/mistral-rally-testing-results

 As I said it is only beginning and  I will increase the load and change
 its type.

 As well concurrency should be at least 2-3 times bigger than times
 otherwise it won't generate proper load and you won't collect enough data
 for statistical analyze.
 
 As well use  rps runner that generates more real life load.
 Plus it will be nice to share as well output of rally task report
 command.

 Thanks for the advice, I will consider it in further testing and reporting.

 Answering to your question about using Rally for integration testing, as I
 mentioned in our load testing plan published on wiki page,  one of our
 final goals is to have a Rally gate in one of Mistral repositories, so we
 are interested in it and I already prepare first commits to Rally.

 Thanks,
 Anastasia Kuznetsova

 On Fri, Dec 19, 2014 at 4:51 PM, Boris Pavlovic bpavlo...@mirantis.com
 wrote:

 Anastasia,

 Nice work on this. But I belive that you should put bigger load here:
 https://etherpad.openstack.org/p/mistral-rally-testing-results

 As well concurrency should be at least 2-3 times bigger than times
 otherwise it won't generate proper load and you won't collect enough data
 for statistical analyze.

 As well use  rps runner that generates more real life load.
 Plus it will be nice to share as well output of rally task report
 command.


 By the way what do you think about using Rally scenarios (that you
 already wrote) for integration testing as well?


 Best regards,
 Boris Pavlovic

 On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova 
 akuznets...@mirantis.com wrote:

 Hello everyone,

 I want to announce that Mistral team has started work on load and
 performance testing in this release cycle.

 Brief information about scope of our work can be found here:

 https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing

 First results are published here:
 https://etherpad.openstack.org/p/mistral-rally-testing-results

 Thanks,
 Anastasia Kuznetsova
 @ Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-22 Thread Alex Xu
2014-12-22 10:36 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:

 2014-12-22 9:21 GMT+08:00 Alex Xu sou...@gmail.com:
 
 
  2014-12-22 9:01 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:
 

 
  but what if the compute node is back to normal? There will be
  instances in the same server group with affinity policy, but located
  in different hosts.
 
 
  If operator decide to evacuate the instance from the failed host, we
 should
  fence the failed host first.

 Yes, actually. I mean the recommendation or prerequisite should be
 emphasized somewhere, e.g. the Operations Guide, otherwise it'll make
 things more confusing. But the issue you are working around is indeed a
 problem we should solve.


Yea, you are right, we should doc it if we think this make sense. Thanks!


 --
 Regards!
 ---
 Lingxian Kong

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-22 Thread Sylvain Bauza


On 22/12/2014 13:37, Alex Xu wrote:



2014-12-22 10:36 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:


2014-12-22 9:21 GMT+08:00 Alex Xu sou...@gmail.com:


 2014-12-22 9:01 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:



 but what if the compute node is back to normal? There will be
 instances in the same server group with affinity policy, but
located
 in different hosts.


 If operator decide to evacuate the instance from the failed
host, we should
 fence the failed host first.

Yes, actually. I mean the recommendation or prerequisite should be
emphasized somewhere, e.g. the Operations Guide, otherwise it'll make
things more confusing. But the issue you are working around is indeed a
problem we should solve.


Yea, you are right, we should doc it if we think this make sense. Thanks!


As I said, I'm not in favor of adding more complexity in the instance 
group setup that is done in the conductor for basic race condition reasons.


If I understand correctly, the problem is when there is only one host 
for all the instances belonging to a group with affinity filter and this 
host is down, then the filter will deny any other host and consequently 
the request will fail while it should succeed.


Is this really a problem? I mean, it appears to me that this is normal 
behaviour, because a filter is by definition a *hard* policy.


So, if you would like to implement *soft* policies, what you want sounds 
more like a *weigher*: i.e. make sure that hosts running existing 
instances in the group are weighted higher than other ones so they'll be 
chosen every time, but in case they're down, allow the scheduler to pick 
other hosts.
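Sylvain's filter-vs-weigher distinction can be sketched in a few lines. This is simplified standalone Python, not nova's actual scheduler API (real weighers subclass nova's BaseHostWeigher and return numeric weights per host); it only illustrates why a hard policy fails on a dead host while a soft one degrades gracefully:

```python
# Simplified illustration of "hard" affinity (a filter) versus "soft"
# affinity (a weigher).  Not nova code: just the scheduling idea.

def affinity_filter(candidate_hosts, group_hosts):
    """Hard policy: only hosts already used by the group pass.
    If every group host is down (absent from candidates), nothing
    passes and the evacuate request fails."""
    return [h for h in candidate_hosts if h in group_hosts]

def affinity_weigher_pick(candidate_hosts, group_hosts):
    """Soft policy: prefer group hosts, but never exclude the rest."""
    weights = {h: (1.0 if h in group_hosts else 0.0) for h in candidate_hosts}
    return max(candidate_hosts, key=lambda h: weights[h])

# The group lives on host "b"; "b" has failed and is not a candidate.
print(affinity_filter(["a", "c"], {"b"}))        # [] : hard policy, no valid host
print(affinity_weigher_pick(["a", "c"], {"b"}))  # soft policy still schedules a host
```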


HTH,
-Sylvain





--
Regards!
---
Lingxian Kong

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How can I continue to complete an abandoned blueprint?

2014-12-22 Thread Jay Pipes

On 12/22/2014 04:54 AM, li-zheming wrote:

hi all: Bp
flavor-quota-memory(https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory)
was submitted by my partner in havana.   but it has abandoned because
of  some reason.


Some reason == the submitter failed to provide any details on how the 
work would be implemented, what the use cases were, and any alternatives 
that might be possible.


  I want to  continue to this blueprint. Based on the

rules about BP for kilo,
for this bp, spec is not necessary, so I submit the code directly and
give commit message to clear out questions in spec.  Is it right? how
can I do? thanks!


Specs are no longer necessary for smallish features, no. A blueprint is 
still necessary on Launchpad, so you should be able to use the abandoned 
one you link above -- which, AFAICT, has enough implementation details 
about the proposed changes.


Alternately, if you cannot get the original submitter to remove the spec 
link to the old spec review, you can always start a new blueprint and we 
can mark that one as obsolete.


I'd like Dan Berrange (cc'd) to review whichever blueprint on Launchpad 
you end up using. Please let us know what you do.


All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Our idea for SFC using OpenFlow. RE: [NFV][Telco] Service VM v/s its basic framework

2014-12-22 Thread A, Keshava
Vikram,

1. In this solution, is it assumed that all the OpenStack services are 
available/enabled on all the CNs?

2. Consider a scenario: for a particular tenant's traffic, the flows are 
chained across a set of CNs.

Then if one of the VMs (of that tenant) migrates to a new CN where that 
tenant was not present earlier, what will be the impact?

How do we control the chaining of flows in this kind of scenario, so that 
packets will reach that tenant's VM on the new CN?



Here this tenant VM may be an NFV Service VM (which should be transparent to 
OpenStack).

keshava



From: Vikram Choudhary [mailto:vikram.choudh...@huawei.com]
Sent: Monday, December 22, 2014 12:28 PM
To: Murali B
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; A, Keshava; 
stephen.kf.w...@gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: RE: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Sorry for the inconvenience. We will sort out the issue at the earliest.
Please find the BP attached with the mail!!!

From: Murali B [mailto:mbi...@gmail.com]
Sent: 22 December 2014 12:20
To: Vikram Choudhary
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; 
keshav...@hp.com; stephen.kf.w...@gmail.com; Dhruv Dhody; 
Dongfeng (C); Kalyankumar Asangi
Subject: Re: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Thank you Vikram,

Could you or somebody please provide the access the full specification document

Thanks
-Murali

On Mon, Dec 22, 2014 at 11:48 AM, Vikram Choudhary 
vikram.choudh...@huawei.com wrote:
Hi Murali,

We have proposed service function chaining idea using open flow.
https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow

Will submit the same for review soon.

Thanks
Vikram

From: yuriy.babe...@telekom.de [mailto:yuriy.babe...@telekom.de]
Sent: 18 December 2014 19:35
To: openstack-dev@lists.openstack.org; stephen.kf.w...@gmail.com
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi,
in the IRC meeting yesterday we agreed to work on the use-case for service 
function chaining as it seems to be important for a lot of participants [1].
We will prepare the first draft and share it in the TelcoWG Wiki for discussion.

There is one blueprint in openstack on that in [2]


[1] 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

From: A, Keshava [mailto:keshav...@hp.com]
Sent: Wednesday, 10 December 2014 19:06
To: stephen.kf.w...@gmail.com; OpenStack 
Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknowns w.r.t. the 'Service-VM' concept and what it should be 
from an NFV perspective.
In my opinion it has not yet been decided what the Service-VM framework should be.
Depending on this, we at OpenStack will also see an impact on 'Service Chaining'.
Please find attached the mail w.r.t. that discussion with NFV on 'Service-VM + 
OpenStack OVS'.


Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development 
on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page 
above and see if the project's goals align with yours. If so, you are certainly 
welcome to join the IRC meeting and start to contribute to the project's 
direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B mbi...@gmail.com wrote:
Hi keshava,

We would like contribute towards service chain and NFV

Could you please share the document if you have any related to service VM

The service chain can be achieved if we are able to redirect the traffic to the 
service VM using OVS flows;

in this case we do not need routing enabled on the service VM (traffic is 
redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by adding 
the OVS rules in OVS.


Thanks
-Murali




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [qa] host aggregate's availability zone

2014-12-22 Thread Danny Choi (dannchoi)
Hi Joe,

No, I did not.  I’m not aware of this.

Can you tell me exactly what needs to be done?

Thanks,
Danny

--

Date: Sun, 21 Dec 2014 11:42:02 -0600
From: Joe Cropper cropper@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] host aggregate's availability zone
Message-ID: b36d2234-bee0-4c7b-a2b2-a09cc9098...@gmail.com
Content-Type: text/plain; charset=utf-8

Did you enable the AvailabilityZoneFilter in nova.conf that the scheduler uses? 
 And enable the FilterScheduler?  These are two common issues related to this.
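For reference, the two settings Joe mentions would look roughly like this in nova.conf. Option names are as they existed around the Juno/Kilo timeframe, and the filter list is only an example, not a recommendation:

```ini
[DEFAULT]
# Use the filter scheduler...
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# ...and make sure AvailabilityZoneFilter is in the active filter list.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
```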

- Joe

On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) dannc...@cisco.com wrote:
Hi,
I have a multi-node setup with 2 compute hosts, qa5 and qa6.
I created 2 host-aggregate, each with its own availability zone, and assigned 
one compute host:
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
+----+-----------------------+-------------------+-------+--------------------------+
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |
+----+-----------------------+-------------------+-------+--------------------------+
My intent is to control at which compute host to launch a VM via the 
host-aggregate's availability-zone parameter.
To test, for vm-1 I specify --availability-zone=az-1, and 
--availability-zone=az-2 for vm-2:
localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 
--nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-SRV-ATTR:host                 | -                                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-0066                                                  |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | -                                                              |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | kxot3ZBZcBH6                                                   |
| config_drive                         |                                                                |
| created                              | 2014-12-21T15:59:03Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               |                                                                |
| id                                   | 854acae9-b718-4ea5-bc28-e0bc46378b60                           |
| image                                | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name

[openstack-dev] [mistral] Team meeting - 12/22/2014

2014-12-22 Thread Renat Akhmerov
Hi,

Reminding that we have a team meeting today at #openstack-meeting at 16.00 UTC

Review action items
Current status (progress, issues, roadblocks, further plans)
Kilo-1 scope and blueprints
for-each 
Scoping (global, local etc.)
Load testing
Open discussion

(see https://wiki.openstack.org/wiki/Meetings/MistralAgenda 
https://wiki.openstack.org/wiki/Meetings/MistralAgenda to find the agenda and 
the meeting archive)

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread John Griffith
Lately (on the Cinder team at least) there's been a lot of
disagreement in reviews regarding the proper way to do LOG messages
correctly.  Use of '%' vs ',' in the formatting of variables etc.

We do have the oslo i18n guidelines page here [1], which helps a lot
but there's some disagreement on a specific case here.  Do we have a
set answer on:

LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})

vs

LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})


It's always fun when one person provides a -1 for the first usage; the
submitter changes it and another reviewer gives a -1 and says, no it
should be the other way.

I'm hoping maybe somebody on the olso team can provide an
authoritative answer and we can then update the example page
referenced in [1] to clarify this particular case.

Thanks,
John

[1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html



[openstack-dev] [nova][vmware] Canceling VMware meeting 12/24 and 12/31

2014-12-22 Thread Gary Kotton
Hi,
I am not sure that we will have enough people around for the upcoming 
meetings. I suggest that we cancel them and resume in the New Year. Happy 
holidays to all!
A luta continua
Gary


[openstack-dev] No Cross-project meeting nor 1:1 syncs for next two weeks

2014-12-22 Thread Thierry Carrez
PTLs and others,

As a reminder, we'll be skipping the cross-project meeting (normally
held on Tuesdays at 21:00 UTC) for the next two weeks. Next meeting will
be on January 6th.

We'll also skip 1:1 sync between release liaisons and release management
(normally held on Tuesdays and Thursdays) for the next two weeks. If you
have anything urgent to discuss don't hesitate to ping me on
#openstack-relmgr-office.

Enjoy the end-of-year holiday season!

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron][lbaas] meetings during holidays

2014-12-22 Thread Doug Wiegley
Canceled. The next lbaas meeting will be 1/6. Happy holidays.

Thanks,
doug

On 12/19/14, 11:33 AM, Doug Wiegley do...@a10networks.com wrote:

Hi all,

Anyone have big agenda items for the 12/23 or 12/30 meeting? If not, I’d
suggest we cancel those two meetings, and bring up anything small during
the on-demand portion of the neutron meetings.

If I don’t hear anything by Monday, we will cancel those two meetings.

Thanks,
Doug




Re: [openstack-dev] [Fuel] Feature delivery rules and automated tests

2014-12-22 Thread Anastasia Urlapova
Mike, Dmitry, team,
let me add my five cents: tests per feature have to run on CI before SCF, which
means the job configurations should also be implemented.

On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 I fully support the idea.

 Feature Lead has to know, that his feature is under threat if it's not yet
 covered by system tests (unit/integration tests are not enough!!!), and
 should proactive work with QA engineers to get tests implemented and
 passing before SCF.

 On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:

 Guys,

 we've done a good job in 6.0. Most of the features were merged before
 feature freeze. Our QA were involved in testing even earlier. It was much
 better than before.

 We had a discussion with Anastasia. There were several bug reports for
 features yesterday, far beyond HCF. So we still have a long way to be
 perfect. We should add one rule: we need to have automated tests before HCF.

 Actually, we should have results of these tests just after FF. It is
 quite challengeable because we have a short development cycle. So my
 proposal is to require full deployment and run of automated tests for each
 feature before soft code freeze. And it needs to be tracked in checklists
 and on feature syncups.

 Your opinion?




 --
 Mike Scherbakov
 #mihgen




[openstack-dev] [Infra] Meeting Tuesday December 23rd at 19:00 UTC

2014-12-22 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday December 23rd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

Meeting log and minutes from the last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread Ben Nemec
On 12/22/2014 09:42 AM, John Griffith wrote:
 Lately (on the Cinder team at least) there's been a lot of
 disagreement in reviews regarding the proper way to do LOG messages
 correctly.  Use of '%' vs ',' in the formatting of variables etc.
 
 We do have the oslo i18n guidelines page here [1], which helps a lot
 but there's some disagreement on a specific case here.  Do we have a
 set answer on:
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})
 
 vs
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})

This is the preferred way.

Note that this is just a multi-variable variation on
http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
and the reasoning discussed there applies.

I'd be curious why some people prefer the % version because to my
knowledge that's not recommended even for untranslated log messages.
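As a runnable sketch of the preferred call (a no-op stand-in is used for oslo.i18n's `_LI` marker so the snippet does not depend on oslo being installed; the real marker returns a lazily-translated Message, but it formats like the original string):

```python
import logging

# Stand-in for oslo.i18n's _LI translation marker (assumption: oslo not
# installed here); the real one wraps the string for lazy translation.
_LI = lambda msg: msg

logging.basicConfig(level=logging.INFO, format='%(message)s')
LOG = logging.getLogger(__name__)

v1, v2 = 'foo', 42
# Preferred: pass the argument mapping to the logging call itself, so
# interpolation is deferred until the record is actually emitted.
LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})
# Emits: some message: v1=foo v2=42
```

The stdlib logging module special-cases a single mapping argument, which is why the dict form works for named placeholders.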

 
 
 It's always fun when one person provides a -1 for the first usage; the
 submitter changes it and another reviewer gives a -1 and says, no it
 should be the other way.
 
 I'm hoping maybe somebody on the olso team can provide an
 authoritative answer and we can then update the example page
 referenced in [1] to clarify this particular case.
 
 Thanks,
 John
 
 [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html
 


Re: [openstack-dev] [Heat] How can I write at milestone section of blueprint?

2014-12-22 Thread Randall Burt
It's been discussed at several summits. We have settled on a general solution 
using Zaqar, but no work has been done that I know of. I was just pointing out 
that similar blueprints/specs exist and you may want to look through those to 
get some ideas about writing your own and/or basing your proposal off of one of 
them.

On Dec 22, 2014, at 12:19 AM, Yasunori Goto y-g...@jp.fujitsu.com
 wrote:

 Randall-san,
 
 There should already be blueprints in launchpad for very similar 
 functionality.
 For example: https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks.
 While that specifies Heat sending notifications to the outside world,
 there has been discussion around debugging that would allow the receiver to
 send notifications back. I only point this out so you can see there should be
 similar blueprints and specs that you can reference and use as examples.
 
 Thank you for pointing it out.
 But do you know current status about it?
 The above blueprint is not approved, and it seems to have been discarded.
 
 Bye,
 
 
 On Dec 19, 2014, at 4:17 AM, Steven Hardy sha...@redhat.com
 wrote:
 
 On Fri, Dec 19, 2014 at 05:02:04PM +0900, Yasunori Goto wrote:
 
 Hello,
 
 This is the first mail at Openstack community,
 
 Welcome! :)
 
 and I have a small question about how to write blueprint for Heat.
 
 Currently our team would like to propose two interfaces
 for user operations in HOT.
 (One is an event handler that notifies Heat of a user-defined event.
 The other defines the actions Heat takes when it catches such a notification.)
 So, I'm preparing the blueprint for it.
 
 Please include details of the exact use-case, e.g the problem you're trying
 to solve (not just the proposed solution), as it's possible we can suggest
 solutions based on existing interfaces.
 
 However, I can not find how I can write at the milestone section of 
 blueprint.
 
 Heat blueprint template has a section for Milestones.
 Milestones -- Target Milestone for completion:
 
 But I don't think I can decide it by myself.
 In my understanding, it should be decided by the PTL.
 
 Normally, it's decided by when the person submitting the spec expects to
 finish writing the code.  The PTL doesn't really have much control over
 that ;)
 
 In addition, probably the above our request will not finish
 by Kilo. I suppose it will be L version or later.
 
 So to clarify, you want to propose the feature, but you're not planning on
 working on it (e.g implementing it) yourself?
 
 So, what should I write at this section?
 Kilo-x, L version, or empty?
 
 As has already been mentioned, it doesn't matter that much - I see it as a
 statement of intent from developers.  If you're just requesting a feature,
 you can even leave it blank if you want and we'll update it when an
 assignee is found (e.g during the spec review).
 
 Thanks,
 
 Steve
 
 
 
 
 -- 
 Yasunori Goto y-g...@jp.fujitsu.com
 
 
 


Re: [openstack-dev] Fw: [Heat] Multiple_Routers_Topoloy

2014-12-22 Thread Zane Bitter
The -dev mailing list is not for usage questions. Please post your 
question to ask.openstack.org and include the text of the error message 
you get when trying to add a RouterInterface.


cheers,
Zane.

On 22/12/14 04:18, Rao Shweta wrote:



Hi All

I am working on OpenStack Heat and I wanted to build the topology below using a
Heat template:



For this i am using a template as given :

AWSTemplateFormatVersion: '2010-09-09'
Description: Sample Heat template that spins up multiple instances and a
private network
   (JSON)
Resources:
   heat_network_01:
 Properties: {name: heat-network-01}
 Type: OS::Neutron::Net
   heat_network_02:
 Properties: {name: heat-network-02}
 Type: OS::Neutron::Net
   heat_router_01:
 Properties: {admin_state_up: 'True', name: heat-router-01}
 Type: OS::Neutron::Router
   heat_router_02:
 Properties: {admin_state_up: 'True', name: heat-router-02}
 Type: OS::Neutron::Router
   heat_router_int0:
 Properties:
   router_id: {Ref: heat_router_01}
   subnet_id: {Ref: heat_subnet_01}
 Type: OS::Neutron::RouterInterface
   heat_router_int1:
 Properties:
   router_id: {Ref: heat_router_02}
   subnet_id: {Ref: heat_subnet_02}
 Type: OS::Neutron::RouterInterface
   heat_subnet_01:
 Properties:
   cidr: 10.10.10.0/24
   dns_nameservers: [172.16.1.11, 172.16.1.6]
   enable_dhcp: 'True'
   gateway_ip: 10.10.10.254
   name: heat-subnet-01
   network_id: {Ref: heat_network_01}
 Type: OS::Neutron::Subnet
   heat_subnet_02:
 Properties:
   cidr: 10.10.11.0/24
   dns_nameservers: [172.16.1.11, 172.16.1.6]
   enable_dhcp: 'True'
   gateway_ip: 10.10.11.254
    name: heat-subnet-02
   network_id: {Ref: heat_network_02}
 Type: OS::Neutron::Subnet
   instance0:
 Properties:
   flavor: m1.nano
   image: cirros-0.3.2-x86_64-uec
   name: heat-instance-01
   networks:
   - port: {Ref: instance0_port0}
 Type: OS::Nova::Server
   instance0_port0:
 Properties:
   admin_state_up: 'True'
   network_id: {Ref: heat_network_01}
 Type: OS::Neutron::Port
   instance1:
 Properties:
   flavor: m1.nano
   image: cirros-0.3.2-x86_64-uec
   name: heat-instance-02
   networks:
   - port: {Ref: instance1_port0}
 Type: OS::Nova::Server
   instance1_port0:
 Properties:
   admin_state_up: 'True'
   network_id: {Ref: heat_network_01}
 Type: OS::Neutron::Port
   instance11:
 Properties:
   flavor: m1.nano
   image: cirros-0.3.2-x86_64-uec
   name: heat-instance11-01
   networks:
   - port: {Ref: instance11_port0}
 Type: OS::Nova::Server
   instance11_port0:
 Properties:
   admin_state_up: 'True'
   network_id: {Ref: heat_network_02}
 Type: OS::Neutron::Port
   instance12:
 Properties:
   flavor: m1.nano
   image: cirros-0.3.2-x86_64-uec
   name: heat-instance12-02
   networks:
   - port: {Ref: instance12_port0}
 Type: OS::Nova::Server
   instance12_port0:
 Properties:
   admin_state_up: 'True'
   network_id: {Ref: heat_network_02}
 Type: OS::Neutron::Port

I am able to create the topology using the template, but I am not able to
connect the two routers. Nor can I find a template example on the internet
showing how to connect two routers. Can you please help me with:

1.) Can we connect two routers? I tried making an interface on
router 1 and connecting it to subnet 2, which results in an error.

   heat_router_int0:
 Properties:
   router_id: {Ref: heat_router_01}
   subnet_id: {Ref: heat_subnet_02}

Can you please guide me on how we can connect routers, or have a link
between routers, using a template.

2.) Can you please forward a link or an example template that I can
refer to and use to implement the required topology with a Heat template.

Waiting for a response



Thankyou

Regards
Shweta Rao
Mailto: rao.shw...@tcs.com
Website: http://www.tcs.com


=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you





Re: [openstack-dev] [nova] Setting MTU size for tap device

2014-12-22 Thread Vishvananda Ishaya
It makes sense to me to add it. Libvirt sets the MTU from the bridge when it 
creates the tap device, but if you are creating it manually you might need to 
set it to something else.

Vish
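
As a purely illustrative sketch of what such a patch might do — the helper name, its shape, and the optional mtu parameter are hypothetical, not nova's actual code — the change amounts to emitting one extra ip(8) command before bringing the device up:

```python
def build_tap_dev_cmds(dev, mac_address=None, mtu=None):
    """Return the ip(8) command lines needed to create a tap device.

    Hypothetical helper mirroring the shape of linux_net.create_tap_dev,
    with an optional mtu argument added.
    """
    cmds = [['ip', 'tuntap', 'add', dev, 'mode', 'tap']]
    if mac_address:
        cmds.append(['ip', 'link', 'set', dev, 'address', mac_address])
    if mtu is not None:
        # The proposed addition: set the device MTU before bringing it up,
        # e.g. to leave headroom for overlay encapsulation.
        cmds.append(['ip', 'link', 'set', dev, 'mtu', str(mtu)])
    cmds.append(['ip', 'link', 'set', dev, 'up'])
    return cmds

# Example: a tap sized for a VXLAN overlay (1500 - 50 bytes of overhead)
for cmd in build_tap_dev_cmds('tap-demo0', mtu=1450):
    print(' '.join(cmd))
```

In nova itself each command list would go through `utils.execute()` with `run_as_root=True` rather than being printed.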

On Dec 17, 2014, at 10:29 PM, Ryu Ishimoto r...@midokura.com wrote:

 Hi All,
 
 I noticed that in linux_net.py, the method to create a tap interface[1] does 
 not let you set the MTU size.  In other places, I see calls made to set the 
 MTU of the device [2].
 
 I'm wondering if there is any technical reasons to why we can't also set the 
 MTU size when creating tap interfaces for general cases.  In certain overlay 
 solutions, this would come in handy.  If there isn't any, I would love to 
 submit a patch to accomplish this.
 
 Thanks in advance!
 
 Ryu
 
 [1] 
 https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1374
 [2] 
 https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1309


Re: [openstack-dev] [Cinder] Listing of backends

2014-12-22 Thread Martin, Kurt Frederick (ESSN Storage MSDU)
You can set/unset key-value pairs on your volume type with the cinder type-key 
command. Or you can set them in the Horizon Admin console under the 
Admin - Volumes - Volume Types tab, then select the “View Extra Specs” action.

$cinder help type-key
usage: cinder type-key vtype action key=value [key=value ...]

Sets or unsets extra_spec for a volume type.

Positional arguments:
  vtype  Name or ID of volume type.
  action The action. Valid values are 'set' or 'unset.'
  key=value  The extra specs key and value pair to set or unset. For unset,
   specify only the key.

e.g.
cinder type-key GoldVolumeType set volume_backend_name=my_iscsi_backend

~Kurt
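
On the API question quoted below: the CLI above maps onto the block storage v2 extra-specs call. A rough sketch of the request it issues follows — the path shape is from the v2 API as I recall it (`POST /v2/{tenant_id}/types/{volume_type_id}/extra_specs`), so treat it as an assumption rather than an authoritative reference; the helper here only builds the request, it does not send it:

```python
import json

def extra_specs_request(volume_type_id, specs):
    """Build method, path and body for setting volume-type extra specs.

    The tenant prefix (/v2/{tenant_id}) is omitted for brevity; a real
    client would prepend the versioned endpoint from the service catalog.
    """
    path = '/types/%s/extra_specs' % volume_type_id
    body = json.dumps({'extra_specs': specs})
    return 'POST', path, body

method, path, body = extra_specs_request(
    'some-type-id', {'volume_backend_name': 'my_iscsi_backend'})
```

python-cinderclient wraps the same call, so scripting against the client library is usually simpler than issuing the request by hand.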

From: Pradip Mukhopadhyay [mailto:pradip.inte...@gmail.com]
Sent: Sunday, December 07, 2014 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Listing of backends

Thanks!
One more question.
Is there any equivalent API to add keys to a volume type? I understand we
have APIs for creating volume types, but how about adding a key-value pair (say
I want to add a key to the volume type such as backend-name=my_iscsi_backend)?

Thanks,
Pradip

On Sun, Dec 7, 2014 at 4:25 PM, Duncan Thomas 
duncan.tho...@gmail.com wrote:
See https://review.openstack.org/#/c/119938/ - now merged. I don't believe the 
python-cinderclient side work has been done yet, nor anything in Horizon, but 
the API itself is now there.

On 7 December 2014 at 09:53, Pradip Mukhopadhyay 
pradip.inte...@gmail.com wrote:
Hi,

Is there a way to find out/list down the backends discovered for Cinder?

There is, I guess, no API to get the list of backends.


Thanks,
Pradip




--
Duncan Thomas



Re: [openstack-dev] [OpenStack-dev][nova-net]Floating ip assigned as /32 from the start of the range

2014-12-22 Thread Vishvananda Ishaya
Floating ips are always added to the host as a /32. You will need one ip on the
compute host from the floating range with the /16 prefix (which it will use for
natting instances without floating ips as well).

In other words you should manually assign an ip from 10.100.130.X/16 to each
compute node and set that value as routing_source_ip=10.100.130.X (or my_ip) in
nova.conf.

Vish
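
Concretely, that advice amounts to a config fragment like the following (the address and bridge name are examples only, not taken from the thread):

```shell
# On each compute node: give the bridge one real address from the
# floating range, with the /16 prefix.
ip addr add 10.100.130.3/16 dev br100

# nova.conf on the same node:
#   routing_source_ip = 10.100.130.3
```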
On Dec 19, 2014, at 7:00 AM, Eduard Matei eduard.ma...@cloudfounders.com 
wrote:

 Hi,
 I'm trying to create a vm and assign it an ip in range 10.100.130.0/16.
 On the host, the ip is assigned to br100 as  inet 10.100.0.3/32 scope global 
 br100
 instead of 10.100.130.X/16, so it's not reachable from the outside.
 
 The localrc.conf :
 FLOATING_RANGE=10.100.130.0/16
 
 Any idea what to change?
 
 Thanks,
 Eduard
 
 
 -- 
 Eduard Biceri Matei, Senior Software Developer
 www.cloudfounders.com | eduard.ma...@cloudfounders.com
  
 
  
 CloudFounders, The Private Cloud Software Company
  
 Disclaimer:
 This email and any files transmitted with it are confidential and intended 
 solely for the use of the individual or entity to whom they are addressed.
 If you are not the named addressee or an employee or agent responsible for 
 delivering this message to the named addressee, you are hereby notified that 
 you are not authorized to read, print, retain, copy or disseminate this 
 message or any part of it. If you have received this email in error we 
 request you to notify us by reply e-mail and to delete all electronic files 
 of the message. If you are not the intended recipient you are notified that 
 disclosing, copying, distributing or taking any action in reliance on the 
 contents of this information is strictly prohibited. 
 E-mail transmission cannot be guaranteed to be secure or error free as 
 information could be intercepted, corrupted, lost, destroyed, arrive late or 
 incomplete, or contain viruses. The sender therefore does not accept 
 liability for any errors or omissions in the content of this message, and 
 shall have no liability for any loss or damage suffered by the user, which 
 arise as a result of e-mail transmission.


Re: [openstack-dev] [OpenStack-dev][nova-net]Floating ip assigned as /32 from the start of the range

2014-12-22 Thread Eduard Matei
Thanks,
I managed to get it working by deleting the public pool (which was the
whole 10.100.X.X subnet) and creating a new pool 10.100.129.X.
This gives me control over which ips are assignable to the vms.

Eduard.

On Mon, Dec 22, 2014 at 7:30 PM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 Floating ips are always added to the host as a /32. You will need one ip
 on the
 compute host from the floating range with the /16 prefix (which it will
 use for
 natting instances without floating ips as well).

 In other words you should manually assign an ip from 10.100.130.X/16 to
 each
 compute node and set that value as routing_source_ip=10.100.130.X (or
 my_ip) in
 nova.conf.

 Vish
 On Dec 19, 2014, at 7:00 AM, Eduard Matei eduard.ma...@cloudfounders.com
 wrote:

 Hi,
 I'm trying to create a vm and assign it an ip in range 10.100.130.0/16.
 On the host, the ip is assigned to br100 as  inet 10.100.0.3/32 scope
 global br100
 instead of 10.100.130.X/16, so it's not reachable from the outside.

 The localrc.conf :
 FLOATING_RANGE=10.100.130.0/16

 Any idea what to change?

 Thanks,
 Eduard


 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*





-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*



Re: [openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread John Griffith
On Mon, Dec 22, 2014 at 10:03 AM, Ben Nemec openst...@nemebean.com wrote:
 On 12/22/2014 09:42 AM, John Griffith wrote:
 Lately (on the Cinder team at least) there's been a lot of
 disagreement in reviews regarding the proper way to do LOG messages
 correctly.  Use of '%' vs ',' in the formatting of variables etc.

 We do have the oslo i18n guidelines page here [1], which helps a lot
 but there's some disagreement on a specific case here.  Do we have a
 set answer on:

 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})

 vs

 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})

 This is the preferred way.

 Note that this is just a multi-variable variation on
 http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
 and the reasoning discussed there applies.

 I'd be curious why some people prefer the % version because to my
 knowledge that's not recommended even for untranslated log messages.

Not sure that anybody has a preference so much as an interpretation;
notice the recommendation for multi-vars in raise:

# RIGHT
raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})




 It's always fun when one person provides a -1 for the first usage; the
 submitter changes it and another reviewer gives a -1 and says, no it
 should be the other way.

 I'm hoping maybe somebody on the olso team can provide an
 authoritative answer and we can then update the example page
 referenced in [1] to clarify this particular case.

 Thanks,
 John

 [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html



Re: [openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread Doug Hellmann

On Dec 22, 2014, at 12:03 PM, Ben Nemec openst...@nemebean.com wrote:

 On 12/22/2014 09:42 AM, John Griffith wrote:
 Lately (on the Cinder team at least) there's been a lot of
 disagreement in reviews regarding the proper way to do LOG messages
 correctly.  Use of '%' vs ',' in the formatting of variables etc.
 
 We do have the oslo i18n guidelines page here [1], which helps a lot
 but there's some disagreement on a specific case here.  Do we have a
 set answer on:
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})
 
 vs
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})
 
 This is the preferred way.

+1

 
 Note that this is just a multi-variable variation on
 http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
 and the reasoning discussed there applies.
 
 I'd be curious why some people prefer the % version because to my
 knowledge that's not recommended even for untranslated log messages.
 
 
 
 It's always fun when one person provides a -1 for the first usage; the
 submitter changes it and another reviewer gives a -1 and says, no it
 should be the other way.
 
 I'm hoping maybe somebody on the olso team can provide an
 authoritative answer and we can then update the example page
 referenced in [1] to clarify this particular case.
 
 Thanks,
 John
 
 [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html
 


Re: [openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread Doug Hellmann

On Dec 22, 2014, at 1:05 PM, John Griffith john.griffi...@gmail.com wrote:

 On Mon, Dec 22, 2014 at 10:03 AM, Ben Nemec openst...@nemebean.com wrote:
 On 12/22/2014 09:42 AM, John Griffith wrote:
 Lately (on the Cinder team at least) there's been a lot of
 disagreement in reviews regarding the proper way to do LOG messages
 correctly.  Use of '%' vs ',' in the formatting of variables etc.
 
 We do have the oslo i18n guidelines page here [1], which helps a lot
 but there's some disagreement on a specific case here.  Do we have a
 set answer on:
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})
 
 vs
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})
 
 This is the preferred way.
 
 Note that this is just a multi-variable variation on
 http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
 and the reasoning discussed there applies.
 
 I'd be curious why some people prefer the % version because to my
 knowledge that's not recommended even for untranslated log messages.
 
 Not sure if it's that anybody has a preference as opposed to an
 interpretation, notice the recommendation for multi-vars in raise:
 
 # RIGHT
 raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': 
 v2})

It’s really not related to translation as much as the logging API itself.

With the exception, you want to initialize the ValueError instance with a 
proper message as soon as you throw it because you don’t know what the calling 
code might do with it. Therefore you use string interpolation inline.

When you call into  the logging subsystem, your call might be ignored based on 
the level of the message and the logging configuration. By letting the logging 
code do the string interpolation, you potentially skip the work of serializing 
variables to strings for messages that will be discarded, saving time and 
memory.

These “rules” apply whether your messages are being translated or not, so even 
for debug log messages you should write:

  LOG.debug(‘some message: v1=%(v1)s v2=%(v2)s’, {‘v1’: v1, ‘v2’: v2})

 
 
 
 
 It's always fun when one person provides a -1 for the first usage; the
 submitter changes it and another reviewer gives a -1 and says, no it
 should be the other way.
 
 I'm hoping maybe somebody on the oslo team can provide an
 authoritative answer and we can then update the example page
 referenced in [1] to clarify this particular case.
 
 Thanks,
 John
 
 [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2014-12-22 Thread Joe Gordon
On Fri, Dec 19, 2014 at 6:53 AM, Robert Li (baoli) ba...@cisco.com wrote:

  Hi Joe,

  See this thread on the SR-IOV CI from Irena and Sandhya:


 http://lists.openstack.org/pipermail/openstack-dev/2014-November/050658.html


 http://lists.openstack.org/pipermail/openstack-dev/2014-November/050755.html

  I believe that Intel is building a CI system to test SR-IOV as well.


Thanks for the clarification.



  Thanks,
 Robert


  On 12/18/14, 9:13 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) ba...@cisco.com
 wrote:

  Hi,

  During the Kilo summit, the folks in the pci passthrough and SR-IOV
 groups discussed what we’d like to achieve in this cycle, and the result
 was documented in this Etherpad:
 https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

  To get the work going, we’ve submitted a few design specs:

  Nova: Live migration with macvtap SR-IOV
 https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

  Nova: sriov interface attach/detach
 https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

   Nova: Api specify vnic_type
 https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

  Neutron-Network settings support for vnic-type

 https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type

  Nova: SRIOV scheduling with stateless offloads

 https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

  Now that the specs deadline is approaching, I’d like to bring them up
 here for exception consideration. A lot of work has been put into
 them, and we’d like to see them get through for Kilo.


  We haven't started the spec exception process yet.



  Regarding CI for PCI passthrough and SR-IOV, see the attached thread.


  Can you share this via a link to something on
 http://lists.openstack.org/pipermail/openstack-dev/



  thanks,
 Robert


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Application level HA via Heat

2014-12-22 Thread Steven Hardy
Hi all,

So, lately I've been having various discussions around $subject, and I know
it's something several folks in our community are interested in, so I
wanted to get some ideas I've been pondering out there for discussion.

I'll start with a proposal of how we might replace HARestarter with
AutoScaling group, then give some initial ideas of how we might evolve that
into something capable of a sort-of active/active failover.

1. HARestarter replacement.

My position on HARestarter has long been that equivalent functionality
should be available via AutoScalingGroups of size 1.  Turns out that
shouldn't be too hard to do:

 resources:
  server_group:
type: OS::Heat::AutoScalingGroup
properties:
  min_size: 1
  max_size: 1
  resource:
type: ha_server.yaml

  server_replacement_policy:
type: OS::Heat::ScalingPolicy
properties:
  # FIXME: this adjustment_type doesn't exist yet
  adjustment_type: replace_oldest
  auto_scaling_group_id: {get_resource: server_group}
  scaling_adjustment: 1

So, currently our ScalingPolicy resource can only support three adjustment
types, all of which change the group capacity.  AutoScalingGroup already
supports batched replacements for rolling updates, so if we modify the
interface to allow a signal to trigger replacement of a group member, then
the snippet above should be logically equivalent to HARestarter AFAICT.

The steps to do this should be:

 - Standardize the ScalingPolicy-AutoScaling group interface, so
asynchronous adjustments (e.g. signals) between the two resources don't use
the adjust method.

 - Add an option to replace a member to the signal interface of
AutoScalingGroup

 - Add the new replace adjustment type to ScalingPolicy

I posted a patch which implements the first step, and the second will be
required for TripleO, e.g we should be doing it soon.

https://review.openstack.org/#/c/143496/
https://review.openstack.org/#/c/140781/

2. A possible next step towards active/active HA failover

The next part is the ability to notify before replacement that a scaling
action is about to happen (just like we do for LoadBalancer resources
already) and orchestrate some or all of the following:

- Attempt to quiesce the currently active node (may be impossible if it's
  in a bad state)

- Detach resources (e.g volumes primarily?) from the current active node,
  and attach them to the new active node

- Run some config action to activate the new node (e.g run some config
  script to fsck and mount a volume, then start some application).

The first step is possible by putting a SoftwareConfig/SoftwareDeployment
resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
node is too bricked to respond and specifying DELETE action so it only runs
when we replace the resource).
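
As a rough sketch of that first step inside ha_server.yaml (the resource names, the script body, and the assumption that the member server is a resource called "server" are all illustrative; the important bits are the DELETE action and NO_SIGNAL transport):

```yaml
resources:
  quiesce_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/sh
        # Best-effort quiesce; tolerate failure since the node may be bricked
        systemctl stop myapp || true

  quiesce_on_replace:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: quiesce_config}
      server: {get_resource: server}
      actions: [DELETE]             # only run when the member is replaced
      signal_transport: NO_SIGNAL   # don't block on a response from the node
```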

The third step is possible either via a script inside the box which polls
for the volume attachment, or possibly via an update-only software config.

The second step is the missing piece AFAICS.

I've been wondering if we can do something inside a new heat resource,
which knows what the current active member of an ASG is, and gets
triggered on a replace signal to orchestrate e.g deleting and creating a
VolumeAttachment resource to move a volume between servers.

Something like:

 resources:
  server_group:
type: OS::Heat::AutoScalingGroup
properties:
  min_size: 2
  max_size: 2
  resource:
type: ha_server.yaml

  server_failover_policy:
type: OS::Heat::FailoverPolicy
properties:
  auto_scaling_group_id: {get_resource: server_group}
  resource:
type: OS::Cinder::VolumeAttachment
properties:
# FIXME: refs is a ResourceGroup interface not currently
# available in AutoScalingGroup
instance_uuid: {get_attr: [server_group, refs, 1]}

  server_replacement_policy:
type: OS::Heat::ScalingPolicy
properties:
  # FIXME: this adjustment_type doesn't exist yet
  adjustment_type: replace_oldest
  auto_scaling_policy_id: {get_resource: server_failover_policy}
  scaling_adjustment: 1

By chaining policies like this we could trigger an update on the attachment
resource (or a nested template via a provider resource containing many
attachments or other resources) every time the ScalingPolicy is triggered.

For the sake of clarity, I've not included the existing stuff like
ceilometer alarm resources etc above, but hopefully it gets the idea
accross so we can discuss further, what are peoples thoughts?  I'm quite
happy to iterate on the idea if folks have suggestions for a better
interface etc :)

One problem I see with the above approach is you'd have to trigger a
failover after stack create to get the initial volume attached, still
pondering ideas on how best to solve that..

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration

2014-12-22 Thread Joe Gordon
On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery mest...@mestery.com wrote:

 On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno ante...@anteaya.info wrote:

 Rather than waste your time making excuses let me state where we are and
 where I would like to get to, also sharing my thoughts about how you can
 get involved if you want to see this happen as badly as I have been told
 you do.

 Where we are:
 * a great deal of foundation work has been accomplished to achieve
 parity with nova-network and neutron to the extent that those involved
 are ready for migration plans to be formulated and be put in place
 * a summit session happened with notes and intentions[0]
 * people took responsibility and promptly got swamped with other
 responsibilities
 * spec deadlines arose and in neutron's case have passed
 * currently a neutron spec [1] is a work in progress (and it needs
 significant work still) and a nova spec is required and doesn't have a
 first draft or a champion

 Where I would like to get to:
 * I need people in addition to Oleg Bondarev to be available to help
 come up with ideas and words to describe them to create the specs in a
 very short amount of time (Oleg is doing great work and is a fabulous
 person, yay Oleg, he just can't do this alone)
 * specifically I need a contact on the nova side of this complex
 problem, similar to Oleg on the neutron side
 * we need to have a way for people involved with this effort to find
 each other, talk to each other and track progress
 * we need to have representation at both nova and neutron weekly
 meetings to communicate status and needs

 We are at K-2 and our current status is insufficient to expect this work
 will be accomplished by the end of K-3. I will be championing this work,
 in whatever state, so at least it doesn't fall off the map. If you would
 like to help this effort please get in contact. I will be thinking of
 ways to further this work and will be communicating to those who
 identify as affected by these decisions in the most effective methods of
 which I am capable.

 Thank you to all who have gotten us as far as we have gotten in this
 effort, it has been a long haul and you have all done great work. Let's
 keep going and finish this.

 Thank you,
 Anita.

 Thank you for volunteering to drive this effort Anita, I am very happy
 about this. I support you 100%.

 I'd like to point out that we really need a point of contact on the nova
 side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to
 continue moving this forward.


At the summit the nova team marked the nova-network to neutron migration as
a priority [0], so we are collectively interested in seeing this happen and
want to help in any way possible.   With regard to a nova point of contact,
anyone in nova-specs-core should work, that way we can cover more time
zones.

From what I can gather the first step is to finish fleshing out the first
spec [1], and it sounds like it would be good to get a few nova-cores
reviewing it as well.




[0]
http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
[1] https://review.openstack.org/#/c/142456/



 Thanks,
 Kyle


 [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron
 [1] https://review.openstack.org/#/c/142456/

 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration

2014-12-22 Thread Anita Kuno
On 12/22/2014 01:32 PM, Joe Gordon wrote:
 On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery mest...@mestery.com wrote:
 
 On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno ante...@anteaya.info wrote:

 Rather than waste your time making excuses let me state where we are and
 where I would like to get to, also sharing my thoughts about how you can
 get involved if you want to see this happen as badly as I have been told
 you do.

 Where we are:
 * a great deal of foundation work has been accomplished to achieve
 parity with nova-network and neutron to the extent that those involved
 are ready for migration plans to be formulated and be put in place
 * a summit session happened with notes and intentions[0]
 * people took responsibility and promptly got swamped with other
 responsibilities
 * spec deadlines arose and in neutron's case have passed
 * currently a neutron spec [1] is a work in progress (and it needs
 significant work still) and a nova spec is required and doesn't have a
 first draft or a champion

 Where I would like to get to:
 * I need people in addition to Oleg Bondarev to be available to help
 come up with ideas and words to describe them to create the specs in a
 very short amount of time (Oleg is doing great work and is a fabulous
 person, yay Oleg, he just can't do this alone)
 * specifically I need a contact on the nova side of this complex
 problem, similar to Oleg on the neutron side
 * we need to have a way for people involved with this effort to find
 each other, talk to each other and track progress
 * we need to have representation at both nova and neutron weekly
 meetings to communicate status and needs

 We are at K-2 and our current status is insufficient to expect this work
 will be accomplished by the end of K-3. I will be championing this work,
 in whatever state, so at least it doesn't fall off the map. If you would
 like to help this effort please get in contact. I will be thinking of
 ways to further this work and will be communicating to those who
 identify as affected by these decisions in the most effective methods of
 which I am capable.

  Thank you to all who have gotten us as far as we have gotten in this
 effort, it has been a long haul and you have all done great work. Let's
 keep going and finish this.

 Thank you,
 Anita.

 Thank you for volunteering to drive this effort Anita, I am very happy
 about this. I support you 100%.

 I'd like to point out that we really need a point of contact on the nova
 side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to
 continue moving this forward.

 
 At the summit the nova team marked the nova-network to neutron migration as
 a priority [0], so we are collectively interested in seeing this happen and
 want to help in any way possible.   With regard to a nova point of contact,
 anyone in nova-specs-core should work, that way we can cover more time
 zones.
 
 From what I can gather the first step is to finish fleshing out the first
 spec [1], and it sounds like it would be good to get a few nova-cores
 reviewing it as well.
 
 
 
 
 [0]
 http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
 [1] https://review.openstack.org/#/c/142456/
 
 
Wonderful, thank you for the support Joe.

It appears that we need to have a regular weekly meeting to track
progress in an archived manner.

I know there was one meeting in November but I don't know what it was
called, so so far I can't find the logs for it.

So if those affected by this issue can identify what time and day of the
week you are available for a meeting (in UTC please; don't tell me your
local time zone, it is too hard to guess what UTC time you are
available), I'll create one and we can start talking to each other.

I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and
1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100 UTC.

Thanks,
Anita.


 Thanks,
 Kyle


 [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron
 [1] https://review.openstack.org/#/c/142456/

 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] No weekly meeting until Jan 6th 2015

2014-12-22 Thread Collins, Sean
See everyone next year!

Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Canceling the next two meetings

2014-12-22 Thread Kyle Mestery
Hi folks, given I expect low attendance today and next week, lets just
cancel the next two Neutron meetings. We'll reconvene in the new year on
Monday, January 5, 2015 at 2100 UTC.

Happy holidays to all!

Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Canceling the next two meetings

2014-12-22 Thread Miguel Ángel Ajo
Happy Holidays!, thank you Kyle.  

Miguel Ángel Ajo


On Monday, 22 December 2014 at 21:12, Kyle Mestery wrote:

 Hi folks, given I expect low attendance today and next week, lets just cancel 
 the next two Neutron meetings. We'll reconvene in the new year on Monday, 
 January 5, 2015 at 2100 UTC.
  
 Happy holidays to all!
  
 Kyle
  
 [1] https://wiki.openstack.org/wiki/Network/Meetings
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-22 Thread Zane Bitter

On 22/12/14 13:21, Steven Hardy wrote:

Hi all,

So, lately I've been having various discussions around $subject, and I know
it's something several folks in our community are interested in, so I
wanted to get some ideas I've been pondering out there for discussion.

I'll start with a proposal of how we might replace HARestarter with
AutoScaling group, then give some initial ideas of how we might evolve that
into something capable of a sort-of active/active failover.

1. HARestarter replacement.

My position on HARestarter has long been that equivalent functionality
should be available via AutoScalingGroups of size 1.  Turns out that
shouldn't be too hard to do:

  resources:
   server_group:
 type: OS::Heat::AutoScalingGroup
 properties:
   min_size: 1
   max_size: 1
   resource:
 type: ha_server.yaml

   server_replacement_policy:
 type: OS::Heat::ScalingPolicy
 properties:
   # FIXME: this adjustment_type doesn't exist yet
   adjustment_type: replace_oldest
   auto_scaling_group_id: {get_resource: server_group}
   scaling_adjustment: 1


One potential issue with this is that it is a little bit _too_ 
equivalent to HARestarter - it will replace your whole scaled unit 
(ha_server.yaml in this case) rather than just the failed resource inside.



So, currently our ScalingPolicy resource can only support three adjustment
types, all of which change the group capacity.  AutoScalingGroup already
supports batched replacements for rolling updates, so if we modify the
interface to allow a signal to trigger replacement of a group member, then
the snippet above should be logically equivalent to HARestarter AFAICT.

The steps to do this should be:

  - Standardize the ScalingPolicy-AutoScaling group interface, so
aynchronous adjustments (e.g. signals) between the two resources don't use
the adjust method.

  - Add an option to replace a member to the signal interface of
AutoScalingGroup

  - Add the new replace adjustment type to ScalingPolicy


I think I am broadly in favour of this.


I posted a patch which implements the first step, and the second will be
required for TripleO, e.g we should be doing it soon.

https://review.openstack.org/#/c/143496/
https://review.openstack.org/#/c/140781/

2. A possible next step towards active/active HA failover

The next part is the ability to notify before replacement that a scaling
action is about to happen (just like we do for LoadBalancer resources
already) and orchestrate some or all of the following:

- Attempt to quiesce the currently active node (may be impossible if it's
   in a bad state)

- Detach resources (e.g volumes primarily?) from the current active node,
   and attach them to the new active node

- Run some config action to activate the new node (e.g run some config
   script to fsck and mount a volume, then start some application).

The first step is possible by putting a SoftwareConfig/SoftwareDeployment
resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
node is too bricked to respond and specifying DELETE action so it only runs
when we replace the resource).

The third step is possible either via a script inside the box which polls
for the volume attachment, or possibly via an update-only software config.

The second step is the missing piece AFAICS.

I've been wondering if we can do something inside a new heat resource,
which knows what the current active member of an ASG is, and gets
triggered on a replace signal to orchestrate e.g deleting and creating a
VolumeAttachment resource to move a volume between servers.

Something like:

  resources:
   server_group:
 type: OS::Heat::AutoScalingGroup
 properties:
   min_size: 2
   max_size: 2
   resource:
 type: ha_server.yaml

   server_failover_policy:
 type: OS::Heat::FailoverPolicy
 properties:
   auto_scaling_group_id: {get_resource: server_group}
   resource:
 type: OS::Cinder::VolumeAttachment
 properties:
 # FIXME: refs is a ResourceGroup interface not currently
 # available in AutoScalingGroup
 instance_uuid: {get_attr: [server_group, refs, 1]}

   server_replacement_policy:
 type: OS::Heat::ScalingPolicy
 properties:
   # FIXME: this adjustment_type doesn't exist yet
   adjustment_type: replace_oldest
   auto_scaling_policy_id: {get_resource: server_failover_policy}
   scaling_adjustment: 1


This actually fails because a VolumeAttachment needs to be updated in 
place; if you try to switch servers but keep the same Volume when 
replacing the attachment you'll get an error.


TBH {get_attr: [server_group, refs, 1]} is doing most of the heavy 
lifting here, so in theory you could just have an 
OS::Cinder::VolumeAttachment instead of the FailoverPolicy and then all 
you need is a way of triggering a stack update with the same template  
params. I know Ton added a PATCH method to update in Juno so that you 
don't 

[openstack-dev] [Keystone] Keystone Middleware 1.3.1 release

2014-12-22 Thread Morgan Fainberg
The Keystone development community would like to announce the 1.3.1 release of 
the Keystone Middleware package.

This release can be installed from the following locations:
* http://tarballs.openstack.org/keystonemiddleware 
http://tarballs.openstack.org/keystonemiddleware
* https://pypi.python.org/pypi/keystonemiddleware 
https://pypi.python.org/pypi/keystonemiddleware

1.3.1
---
* auth_token middleware no longer contacts keystone when a request with no 
token is received. 

Detailed changes in this release beyond what is listed above:
https://launchpad.net/keystonemiddleware/+milestone/1.3.1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] No meetings on Christmas or New Year's Days

2014-12-22 Thread Carl Baldwin
The L3 sub team meeting [1] will not be held until the 8th of January,
2015.  Enjoy your time off.  I will try to move some of the
refactoring patches along as I can but will be down to minimal hours.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Sub-team meetings on Dec 20th and 27th?

2014-12-22 Thread Paul Michali (pcm)
Will cancel the next two VPNaaS sub-team meetings.  The next meeting will be 
Tuesday, January 6th at 1500 UTC on meeting-4 (note the channel change).


Enjoy the holiday time!

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Dec 19, 2014, at 2:01 PM, Paul Michali (pcm) p...@cisco.com wrote:

 Does anyone have agenda items to discuss for the next two meetings during the 
 holidays?
 
 If so, please let me know (and add them to the Wiki page), and we’ll hold the 
 meeting. Otherwise, we can continue on Jan 6th, and any pop-up items can be 
 addressed on the mailing list or Neutron IRC.
 
 Please let me know by Monday, if you’d like us to meet.
 
 
 Regards,
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] ratio: created to attached

2014-12-22 Thread John Griffith
On Sat, Dec 20, 2014 at 4:56 PM, Tom Barron t...@dyncloud.net wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Does anyone have real world experience, even data, to speak to the
 question: in an OpenStack cloud, what is the likely ratio of (created)
 cinder volumes to attached cinder volumes?

 Thanks,

 Tom Barron
 -BEGIN PGP SIGNATURE-
 Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

 iQEcBAEBAgAGBQJUlgybAAoJEGeKBzAeUxEHqKwIAJjL5TCP7s+Ev8RNr+5bWARF
 zy3I216qejKdlM+a9Vxkl6ZWHMklWEhpMmQiUDMvEitRSlHpIHyhh1RfZbl4W9Fe
 GVXn04sXIuoNPgbFkkPIwE/45CJC1kGIBDub/pr9PmNv9mzAf3asLCHje8n3voWh
 d30If5SlPiaVoc0QNrq0paK7Yl1hh5jLa2zeV4qu4teRts/GjySJI7bR0k/TW5n4
 e2EKxf9MhbxzjQ6QsgvWzxmryVIKRSY9z8Eg/qt7AfXF4Kx++MNo8VbX3AuOu1XV
 cnHlmuGqVq71uMjWXCeqK8HyAP8nkn2cKnJXhRYli6qSwf9LxzjC+kMLn364IX4=
 =AZ0i
 -END PGP SIGNATURE-

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Honestly I think the assumption is and should be 1:1, perhaps not 100%
duty-cycle, but certainly periods of time when there is a 100% attach
rate.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hierarchical Multitenancy

2014-12-22 Thread Raildo Mascena
Hello folks, my team and I developed the Hierarchical Multitenancy concept
for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we
implemented? What are the next steps for Kilo?
To answer these questions, I created a blog post:
http://raildo.me/hierarchical-multitenancy-in-openstack/

Any questions, I'm available.

-- 
Raildo Mascena
Software Engineer.
Bachelor of Computer Science.
Distributed Systems Laboratory
Federal University of Campina Grande
Campina Grande, PB - Brazil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-22 Thread Angus Salkeld
On Tue, Dec 23, 2014 at 6:42 AM, Zane Bitter zbit...@redhat.com wrote:

 On 22/12/14 13:21, Steven Hardy wrote:

 Hi all,

 So, lately I've been having various discussions around $subject, and I
 know
 it's something several folks in our community are interested in, so I
 wanted to get some ideas I've been pondering out there for discussion.

 I'll start with a proposal of how we might replace HARestarter with
 AutoScaling group, then give some initial ideas of how we might evolve
 that
 into something capable of a sort-of active/active failover.

 1. HARestarter replacement.

 My position on HARestarter has long been that equivalent functionality
 should be available via AutoScalingGroups of size 1.  Turns out that
 shouldn't be too hard to do:

   resources:
server_group:
  type: OS::Heat::AutoScalingGroup
  properties:
min_size: 1
max_size: 1
resource:
  type: ha_server.yaml

server_replacement_policy:
  type: OS::Heat::ScalingPolicy
  properties:
# FIXME: this adjustment_type doesn't exist yet
adjustment_type: replace_oldest
auto_scaling_group_id: {get_resource: server_group}
scaling_adjustment: 1


 One potential issue with this is that it is a little bit _too_ equivalent
 to HARestarter - it will replace your whole scaled unit (ha_server.yaml in
 this case) rather than just the failed resource inside.

  So, currently our ScalingPolicy resource can only support three adjustment
 types, all of which change the group capacity.  AutoScalingGroup already
 supports batched replacements for rolling updates, so if we modify the
 interface to allow a signal to trigger replacement of a group member, then
 the snippet above should be logically equivalent to HARestarter AFAICT.

 The steps to do this should be:

   - Standardize the ScalingPolicy-AutoScaling group interface, so
 asynchronous adjustments (e.g. signals) between the two resources don't use
 the adjust method.

   - Add an option to replace a member to the signal interface of
 AutoScalingGroup

   - Add the new replace adjustment type to ScalingPolicy


 I think I am broadly in favour of this.


  I posted a patch which implements the first step, and the second will be
 required for TripleO, e.g we should be doing it soon.

 https://review.openstack.org/#/c/143496/
 https://review.openstack.org/#/c/140781/

 2. A possible next step towards active/active HA failover

 The next part is the ability to notify before replacement that a scaling
 action is about to happen (just like we do for LoadBalancer resources
 already) and orchestrate some or all of the following:

 - Attempt to quiesce the currently active node (may be impossible if it's
in a bad state)

 - Detach resources (e.g volumes primarily?) from the current active node,
and attach them to the new active node

 - Run some config action to activate the new node (e.g run some config
script to fsck and mount a volume, then start some application).

  The first step is possible by putting a SoftwareConfig/SoftwareDeployment
 resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
 node is too bricked to respond and specifying DELETE action so it only
 runs
 when we replace the resource).

 The third step is possible either via a script inside the box which polls
 for the volume attachment, or possibly via an update-only software config.

 The second step is the missing piece AFAICS.

 I've been wondering if we can do something inside a new heat resource,
 which knows what the current active member of an ASG is, and gets
 triggered on a replace signal to orchestrate e.g deleting and creating a
 VolumeAttachment resource to move a volume between servers.

 Something like:

   resources:
server_group:
  type: OS::Heat::AutoScalingGroup
  properties:
min_size: 2
max_size: 2
resource:
  type: ha_server.yaml

server_failover_policy:
  type: OS::Heat::FailoverPolicy
  properties:
auto_scaling_group_id: {get_resource: server_group}
resource:
  type: OS::Cinder::VolumeAttachment
  properties:
  # FIXME: refs is a ResourceGroup interface not currently
  # available in AutoScalingGroup
  instance_uuid: {get_attr: [server_group, refs, 1]}

server_replacement_policy:
  type: OS::Heat::ScalingPolicy
  properties:
# FIXME: this adjustment_type doesn't exist yet
adjustment_type: replace_oldest
auto_scaling_policy_id: {get_resource: server_failover_policy}
scaling_adjustment: 1


 This actually fails because a VolumeAttachment needs to be updated in
 place; if you try to switch servers but keep the same Volume when replacing
 the attachment you'll get an error.

 TBH {get_attr: [server_group, refs, 1]} is doing most of the heavy lifting
 here, so in theory you could just have an OS::Cinder::VolumeAttachment
 instead of the 

Re: [openstack-dev] Hierarchical Multitenancy

2014-12-22 Thread Morgan Fainberg
Hi Raildo,

Thanks for putting this post together. I really appreciate all the work you 
guys have done (and continue to do) to get the Hierarchical Mulittenancy code 
into Keystone. It’s great to have the base implementation merged into Keystone 
for the K1 milestone. I look forward to seeing the rest of the development land 
during the rest of this cycle and what the other OpenStack projects build 
around the HMT functionality.

Cheers,
Morgan



 On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:
 
 Hello folks, My team and I developed the Hierarchical Multitenancy concept 
 for Keystone in Kilo-1 but What is Hierarchical Multitenancy? What have we 
 implemented? What are the next steps for kilo? 
 To answers these questions, I created a blog post 
 http://raildo.me/hierarchical-multitenancy-in-openstack/ 
 http://raildo.me/hierarchical-multitenancy-in-openstack/
 
 Any question, I'm available.
 
 -- 
 Raildo Mascena
 Software Engineer.
 Bachelor of Computer Science. 
 Distributed Systems Laboratory
 Federal University of Campina Grande
 Campina Grande, PB - Brazil
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-12-22 Thread Surojit Pathak

On 11/14/14 2:02 AM, Daniel P. Berrange wrote:

On Thu, Nov 13, 2014 at 01:55:06PM -0800, Surojit Pathak wrote:

Hi all,

[Issue observed]
If we issue 'nova reboot <server>', we get the console output of the
latest bootup of the server only. The console output of the previous boot
for the same server vanishes due to truncation[1]. If we reboot from
within the VM instance [ #sudo reboot ], or reboot the instance with 'virsh
reboot <instance>', the behavior is not the same: the console.log keeps
growing, with the new output being appended.
This loss of history makes some debugging scenarios difficult due to the lack
of information available.
Please point me to any solution/blueprint for this issue, if already
planned. Otherwise, please comment on my analysis and proposals as solution,
below -

[Analysis]
Nova's libvirt driver on compute node tries to do a graceful restart of the
server instance, by attempting a soft_reboot first. If soft_reboot fails, it
attempts a hard_reboot. As part of soft_reboot, it brings down the instance
by calling shutdown(), and then calls createWithFlags() to bring this up.
Because of this, qemu-kvm process for the instance gets terminated and new
process is launched. In QEMU, the chardev file is opened with O_TRUNC, and
thus we lose the previous content of the console.log file.
On the other hand, during 'virsh reboot <instance>', the same qemu-kvm
process continues, and libvirt actually does a qemuDomainSetFakeReboot().
Thus the same file continues capturing the new console output as a
continuation into the same file.

Nova and libvirt have support for issuing a graceful reboot via the QEMU
guest agent. So if you make sure that is installed, and tell Nova to use
it, then Nova won't have to stop & recreate the QEMU process and thus
won't have the problem of overwriting the logs.

Hi Daniel,
Having the GA do a graceful restart is a nice option. But if it were just to 
preserve the same console file, even 'virsh reboot' achieves the 
purpose. As I explained in my original analysis, Nova seems to have not 
taken that path, as it does not want a false positive, where the 
GA does not respond or 'virDomain.reboot' fails later and the domain is 
not really restarted. [ CC-ed vish, author of nova/virt/libvirt/driver.py


IMHO, QEMU should preserve the console-log file for a given domain, if 
it exists, by not opening it with O_TRUNC but instead with O_APPEND. I 
would like to draw a comparison with a real computer to which we might be 
connected over serial console: the box gets powered down and up with an 
external button press, and we do not lose the console history if 
connected. That is the experience console-log intends to provide. If you 
think this is agreeable, please let me know and I will send the patch to 
qemu-devel@.
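
As a minimal sketch of the behavior being argued about (plain Python file
flags standing in for QEMU's chardev open; the temp file and "boot" strings
are made up for illustration, this is not QEMU code):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "console.log")

# First "boot": some console history exists.
with open(path, "w") as f:
    f.write("boot 1 output\n")

# What QEMU does today: the chardev file is reopened with O_TRUNC when the
# qemu-kvm process is re-created, so the history from boot 1 is lost.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"boot 2 output\n")
os.close(fd)
with open(path) as f:
    truncated = f.read()

# The proposed behavior: reopen with O_APPEND and the history survives.
with open(path, "w") as f:
    f.write("boot 1 output\n")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
os.write(fd, b"boot 2 output\n")
os.close(fd)
with open(path) as f:
    appended = f.read()
```

With O_TRUNC only "boot 2 output" remains; with O_APPEND both boots are
preserved, which is the serial-console experience described above.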


--
Regards,
SURO

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-12-22 Thread Tony Breeds
On Mon, Dec 22, 2014 at 04:36:02PM -0800, Surojit Pathak wrote:

 Hi Daniel,
 Having GA to do graceful restart is nice option. But if it were to just
 preserve the same console file, even 'virsh reboot' achieves the purpose. As
 I explained in my original analysis, Nova seems to have not taken the path,
 as it does not want to have a false positive, where the GA does not respond
 or 'virDomain.reboot' fails later and the domain is not really restarted. [
 CC-ed vish, author of nova
 http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova//virt 
 http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt//libvirt
  
 http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt/libvirt//driver.py
  
 http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt/libvirt/driver.py
 ]
 
 IMHO, QEMU should preserve the console-log file for a given domain, if it
 exists, by not opening with O_TRUNC option, instead opening with O_APPEND. I
 would like to draw a comparison of a real computer to which we might be
 connected over serial console, and the box gets powered down and up with
 external button press, and we do not lose the console history, if connected.
 And that's what is the experience console-log intends to provide. If you
 think, this is agreeable, please let me know, I will send the patch to
 qemu-devel@.

The issue is more complex than just removing the O_TRUNC from the open() flags.

I have a proposal that will (almost by accident) fix this in qemu by allowing
console log files to be rotated.  I'm also working on a similar feature in
libvirt.

I think the tl;dr is that this /should/ be fixed in Kilo with a 'modern' 
libvirt.

Yours Tony.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Access to workflow/task results without implicit publish

2014-12-22 Thread Dmitri Zimine
The problem:

Refer to workflow / action output without explicitly re-publishing the output 
values. Why we want it: to reduce repetition, and to make modifications in the 
place values are used, not where they are obtained (and not in multiple 
places). E.g., as an editor of a workflow, when I just realized that I need a 
value of some task down the line, I want to make change right here in the tasks 
that consumes the data (and only those which need this data), without finding 
and modifying the task that supplies the data.

Reasons:

We don't have a concept of workflow or action 'results': it is the task that 
produces and publishes results. Different tasks call the same actions/workflows 
and produce the same output variables with different values. We don't want to 
publish this output to the global context with the output name as a key: the 
values would conflict and mess up. Instead, we can namespace them by task (the 
specific values are attributes of the tasks, and we want to refer to tasks, not 
actions/workflows).

Solution:

To refer to the output of a particular task (aka the raw result of the action 
execution invoked by this task), use the _task prefix:

 $_task.taskname.path.to.variable
 $_task.my_task.my_task_result.foo.bar


Expanded example
 
my_subflow:
   output:
- foo #  declare output here
- bar 
   tasks:
   my_task:
 action: get_foo
 publish: 
 foo: $foo #  define output in a task
 bar: $bar
 ...
main_flow_with_explicit_publishing:
tasks:
t1: 
   workflow: my_subflow 
publish: 
   # Today, you must explicitly publish to make data 
   # from action available for other tasks
foo: $foo #  re-publish, else you can't use it
bar: $bar
t2: 
action: echo output=$foo and $bar #  use it from task t1

main_flow_with_implicit_publishing:
tasks:
t1: 
   workflow: my_subflow 
t2: 
action: echo output=$_task.t1.foo and $_task.t1.bar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-12-22 Thread Surojit Pathak

On 12/22/14 5:04 PM, Tony Breeds wrote:

On Mon, Dec 22, 2014 at 04:36:02PM -0800, Surojit Pathak wrote:


Hi Daniel,
Having GA to do graceful restart is nice option. But if it were to just
preserve the same console file, even 'virsh reboot' achieves the purpose. As
I explained in my original analysis, Nova seems to have not taken the path,
as it does not want to have a false positive, where the GA does not respond
or 'virDomain.reboot' fails later and the domain is not really restarted. [
CC-ed vish, author of nova


IMHO, QEMU should preserve the console-log file for a given domain, if it
exists, by not opening with O_TRUNC option, instead opening with O_APPEND. I
would like to draw a comparison of a real computer to which we might be
connected over serial console, and the box gets powered down and up with
external button press, and we do not lose the console history, if connected.
And that's what is the experience console-log intends to provide. If you
think, this is agreeable, please let me know, I will send the patch to
qemu-devel@.

The issue is more complex than just removing the O_TRUNC from the open() flags.

I have a proposal that will (almost by accident) fix this in qemu by allowing
console log files to be rotated.  I'm also working on a similar feature in
libvirt.

I think the tl;dr is that this /should/ be fixed in Kilo with a 'modern' 
libvirt.

Hi Tony,

Can you please share some details of the effort, in terms of reference?


Yours Tony.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Regards,
SURO

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-12-22 Thread Tony Breeds
On Mon, Dec 22, 2014 at 07:16:27PM -0800, Surojit Pathak wrote:
 Hi Tony,
 
 Can you please share some details of the effort, in terms of reference?

Well the initial discussions started with qemu at:
http://lists.nongnu.org/archive/html/qemu-devel/2014-12/msg00765.html
and then here:
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052356.html

You'll note that the focus of the discussion is rotating the log files, but I'm
very much aware of the issue covered in this thread and it will be covered in
my fixes.  Which is why I said 'almost by accident' ;P

I have a partial implementation for the log rotation in qemu (you can issue a
command from the monitor but I haven't looked at the HUP yet).  I started
looking at doing something in libvirt as well, but I haven't made much progress
there due to conflicting priorities.

Yours Tony.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 12/23

2014-12-22 Thread Dugger, Donald D
I'll be hanging out on the IRC channel in case anyone wants to talk but, given 
the holidays, I don't expect much attendance and we'll keep it short no matter 
what.



Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)





1) Status on cleanup work - 
https://wiki.openstack.org/wiki/Gantt/kilo

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] shelved_offload_time configuration

2014-12-22 Thread Kekane, Abhishek
Hi All,

AFAIK, for the shelve API the parameter shelved_offload_time needs to be 
configured on the compute node.
Can we configure this parameter on the controller node as well?
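
For context, a minimal nova.conf fragment on the compute node might look like
this (the value is illustrative; as I understand it, 0 offloads immediately
after shelving, a positive value is a delay in seconds, and -1 disables
automatic offloading):

[DEFAULT]
# seconds to wait after shelve before offloading the instance
shelved_offload_time = 0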

Please suggest.

Thank You,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] simulate examples

2014-12-22 Thread Tran, Steven
Hi,
   Does anyone have an example on how to use 'simulate' according to the 
following command line usage?

usage: openstack congress policy simulate [-h] [--delta] [--trace]
  policy query sequence
  action_policy

   What are the query and sequence? The example under 
/opt/stack/congress/examples doesn't mention query or sequence.  It 
seems like all 4 parameters are required.
Thanks,
-Steven
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cross distribution talks on Friday

2014-12-22 Thread Thomas Goirand
On 11/11/2014 12:46 AM, Donald Stufft wrote:
 
 On Nov 10, 2014, at 11:43 AM, Adam Young ayo...@redhat.com wrote:

 On 11/01/2014 06:51 PM, Alan Pevec wrote:
 %install
 export OSLO_PACKAGE_VERSION=%{version}
 %{__python} setup.py install -O1 --skip-build --root %{buildroot}

 Then everything should be ok and PBR will become your friend.
 Still not my friend because I don't want a _build_ tool as runtime 
 dependency :)
 e.g. you don't ship make(1) to run C programs, do you?
 For runtime, only pbr.version part is required but unfortunately
 oslo.version was abandoned.

 Cheers,
 Alan

 Perhaps we need a top level Python Version library, not Oslo?  Is there such 
 a thing?  Seems like it should not be something specific to OpenStack
 
 What does pbr.version do?

Basically, the same as pkg_resources. Therefore I don't really
understand the need for it... Am I missing something?

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cross distribution talks on Friday

2014-12-22 Thread Thomas Goirand
On 12/19/2014 11:55 PM, Ihar Hrachyshka wrote:
 Note that OSLO_PACKAGE_VERSION is not public.

Well, it used to be public, it has been added and discussed a few years
ago because of issues I had with packaging.

 Instead, we should use
 PBR_VERSION:
 
 http://docs.openstack.org/developer/pbr/packagers.html#versioning

I don't mind switching, though it's going to be a slow process (because
I'm using OSLO_PACKAGE_VERSION in all packages).

Are we at least *sure* that using OSLO_PACKAGE_VERSION is now deprecated?

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How can I continue to complete a abandoned blueprint?

2014-12-22 Thread li-zheming
 thanks! 
I have submitted a new blueprint (quota-instance-memory);
the link is:
https://blueprints.launchpad.net/nova/+spec/quota-instance-memory

Merry Christmas!^_^




--

Name :  Li zheming
Company :  Hua Wei
Address  : Shenzhen China
Tel:0086 18665391827



At 2014-12-22 22:32:52,Jay Pipes jaypi...@gmail.com wrote:
On 12/22/2014 04:54 AM, li-zheming wrote:
 hi all: Bp
 flavor-quota-memory(https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory)
 was submitted by my partner in Havana, but it has been abandoned because
 of some reason.

Some reason == the submitter failed to provide any details on how the 
work would be implemented, what the use cases were, and any alternatives 
that might be possible.

   I want to continue this blueprint. Based on the
 rules about BPs for
 https://blueprints.launchpad.net/openstack/?searchtext=for kilo,
 a spec is not necessary for this BP, so I can submit the code directly and
 use the commit message to clear up the questions a spec would answer. Is
 that right? What should I do? Thanks!

Specs are no longer necessary for smallish features, no. A blueprint is 
still necessary on Launchpad, so you should be able to use the abandoned 
one you link above -- which, AFAICT, has enough implementation details 
about the proposed changes.

Alternately, if you cannot get the original submitter to remove the spec 
link to the old spec review, you can always start a new blueprint and we 
can mark that one as obsolete.

I'd like Dan Berrange (cc'd) to review whichever blueprint on Launchpad 
you end up using. Please let us know what you do.

All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] copy paste for spice

2014-12-22 Thread Akshik DBK
Going by the documentation, the Spice console supports copy/paste and other 
features. I would like to know how and where we enable them: should we do 
something with the image, or is it some config in OpenStack?
   ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Sub-team meetings on Dec 20th and 27th?

2014-12-22 Thread Mohammad Hanif
Thanks Paul.

Happy holidays everyone!

On Dec 22, 2014, at 1:06 PM, Paul Michali (pcm) p...@cisco.com wrote:

Will cancel the next two VPNaaS sub-team meetings.  The next meeting will be 
Tuesday, January 6th at 1500 UTC on meeting-4 ( Note the channel change).


Enjoy the holiday time!

PCM (Paul Michali)

MAIL . p...@cisco.com
IRC ... pc_m (irc.freenode.com)
TW  @pmichali
GPG Key ... 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Dec 19, 2014, at 2:01 PM, Paul Michali (pcm) p...@cisco.com wrote:

Does anyone have agenda items to discuss for the next two meetings during the 
holidays?

If so, please let me know (and add them to the Wiki page), and we'll hold the 
meeting. Otherwise, we can continue on Jan 6th, and any pop-up items can be 
addressed on the mailing list or Neutron IRC.

Please let me know by Monday, if you'd like us to meet.


Regards,

PCM (Paul Michali)

MAIL . p...@cisco.com
IRC ... pc_m (irc.freenode.com)
TW  @pmichali
GPG Key ... 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-22 Thread Punith S
Hi Asselin,

I'm following your README at https://github.com/rasselin/os-ext-testing
to set up our CloudByte CI on two Ubuntu 12.04 VMs (master and slave).

Currently the scripts and setup went fine, as per the document.

Now both master and slave are connected successfully, but in order to
run the tempest integration tests against our proposed CloudByte cinder
driver for Kilo, we need to have devstack installed on the slave (in my
understanding).

But on installing master devstack I'm getting permission issues on 12.04
when executing ./stack.sh, since master devstack expects Ubuntu 14.04 or
13.10. On the contrary, running install_slave.sh fails on 13.10
due to a "puppet modules not found" error.

Is there a way to get this to work?

Thanks in advance

On Mon, Dec 22, 2014 at 11:10 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Eduard,



 A few items you can try:

 1. Double-check that the job is in Jenkins
    a. If not, then that's the issue
 2. Check that the processes are running correctly
    a. ps -ef | grep zuul
       i. Should have 2 zuul-server & 1 zuul-merger
    b. ps -ef | grep jenkins
       i. Should have 1 /usr/bin/daemon --name=jenkins & 1 /usr/bin/java
 3. In Jenkins, Manage Jenkins, Gearman Plugin Config, "Test Connection"
 4. Stop and start Zuul & Jenkins
    a. service jenkins stop
    b. service zuul stop
    c. service zuul-merger stop
    d. service jenkins start
    e. service zuul start
    f. service zuul-merger start



 Otherwise, I suggest you ask in #openstack-infra irc channel.



 Ramy



 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Sunday, December 21, 2014 11:01 PM

 *To:* Asselin, Ramy
 *Cc:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Thanks Ramy,



 Unfortunately i don't see dsvm-tempest-full in the status output.

 Any idea how i can get it registered?



 Thanks,

 Eduard



 On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Eduard,



 If you run this command, you can see which jobs are registered:

 telnet localhost 4730



 status



 There are 3 numbers per job: queued, running, and workers that can run the
 job. Make sure the job is listed & the last number ('workers') is non-zero.
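
 For illustration, the `status` reply is tab-separated lines (function name,
 queued, running, registered workers) terminated by a lone "." line. A tiny
 parser (the sample data below is made up, not from a real gearman server)
 shows how a zero worker count maps to Zuul's NOT_REGISTERED result:

```python
# Hypothetical reply from gearman's admin "status" command: tab-separated
# columns (function, queued, running, registered workers), ending with ".".
sample = (
    "build:noop-check-communication\t0\t0\t1\n"
    "build:dsvm-tempest-full\t0\t0\t0\n"
    ".\n"
)

def parse_status(text):
    """Return {job_name: (queued, running, workers)} from a status reply."""
    jobs = {}
    for line in text.splitlines():
        if line == ".":
            break
        name, queued, running, workers = line.split("\t")
        jobs[name] = (int(queued), int(running), int(workers))
    return jobs

jobs = parse_status(sample)

# Zuul reports NOT_REGISTERED exactly when no worker can run the job,
# i.e. the last column is zero.
unregistered = sorted(n for n, (_, _, w) in jobs.items() if w == 0)
```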



 To run the job again without submitting a patch set, leave a “recheck”
 comment on the patch & make sure your zuul layout.yaml is configured to
 trigger off that comment. For example [1].

 Be sure to use the sandbox repository. [2]

 I’m not aware of other ways.



 Ramy



 [1]
 https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20

 [2] https://github.com/openstack-dev/sandbox









 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Friday, December 19, 2014 3:36 AM
 *To:* Asselin, Ramy
 *Cc:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Hi all,

 After a little struggle with the config scripts I managed to get a working
 setup that is able to process openstack-dev/sandbox and run
 noop-check-communication.



 Then, I tried enabling the dsvm-tempest-full job, but it keeps returning
 NOT_REGISTERED



 2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change
 Change 0x7fe5ec029b50 139585,9 depends on changes []

 2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job
 noop-check-communication for change Change 0x7fe5ec029b50 139585,9 with
 dependent changes []

 2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full
 for change Change 0x7fe5ec029b50 139585,9 with dependent changes []

 2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job gear.Job 0x7fe5ec2e2f10
 handle: None name: build:dsvm-tempest-full unique:
 a9199d304d1140a8bf4448dfb1ae42c1 is not registered with Gearman

 2014-12-19 12:07:14,694 INFO zuul.Gearman: Build gear.Job 0x7fe5ec2e2f10
 handle: None name: build:dsvm-tempest-full unique:
 a9199d304d1140a8bf4448dfb1ae42c1 complete, result NOT_REGISTERED

 2014-12-19 12:07:14,765 INFO zuul.Gearman: Build gear.Job 0x7fe5ec2e2d10
 handle: H:127.0.0.1:2 name: build:noop-check-communication unique:
 333c6ea077324a788e3c37a313d872c5 started

 2014-12-19 12:07:14,910 INFO zuul.Gearman: Build gear.Job 0x7fe5ec2e2d10
 handle: H:127.0.0.1:2 name: build:noop-check-communication unique:
 333c6ea077324a788e3c37a313d872c5 complete, result SUCCESS

 2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting
 change Change 0x7fe5ec029b50 139585,9, actions: [ActionReporter
 zuul.reporter.gerrit.Reporter object at 0x2694a10, {'verified': -1}]



 Nodepoold's log shows no reference to dsvm-tempest-full, and neither do
 Jenkins' logs.