Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algorithm

2014-12-22 Thread Bartlomiej Piotrowski
FYI, xz with multithreading support (the 5.2 release) was marked stable
yesterday.

Regards,
Bartłomiej Piotrowski

On Mon, Nov 24, 2014 at 12:32 PM, Bartłomiej Piotrowski 
bpiotrow...@mirantis.com wrote:

 On 24 Nov 2014, at 12:25, Matthew Mosesohn mmoses...@mirantis.com wrote:
  I did this exercise over many iterations during Docker container
  packing and found that as long as the data is under 1gb, it's going to
  compress really well with xz. Over 1gb and lrzip looks more attractive
  (but only on high memory systems). In reality, we're looking at log
  footprints from OpenStack environments on the order of 500mb to 2gb.
 
  xz is very slow on single-core systems with 1.5gb of memory, but it's
  quite a bit faster if you run it on a more powerful system. I've found
  level 4 compression to be the best compromise that works well enough
  that it's still far better than gzip. If increasing compression time
  by 3-5x is too much for you guys, why not just go to bzip? You'll
  still improve compression but be able to cut back on time.
 
  Best Regards,
  Matthew Mosesohn

 The alpha release of xz supports multithreading via the -T (or --threads) parameter.
 We could also use pbzip2 instead of regular bzip to cut some time on
 multi-core
 systems.

 Regards,
 Bartłomiej Piotrowski
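
To make the trade-off in this thread concrete, here is a small self-contained Python sketch (my own illustration, not part of the original discussion) comparing compressed sizes of a log-like sample under the three algorithms mentioned, with xz at preset 4, the compromise level Matthew suggests:

```python
import bz2
import gzip
import lzma

def compression_sizes(data: bytes) -> dict:
    """Return compressed sizes for the algorithms discussed in the thread."""
    return {
        "gzip-9": len(gzip.compress(data, compresslevel=9)),
        "bzip2-9": len(bz2.compress(data, compresslevel=9)),
        # preset=4 mirrors the "xz -4" compromise mentioned above
        "xz-4": len(lzma.compress(data, preset=4)),
    }

# A repetitive, log-like sample standing in for a diagnostic snapshot.
sample = b"2014-12-22 12:00:00.000 INFO nova.compute.claims [-] Claim successful\n" * 20000
sizes = compression_sizes(sample)
print(sizes)
```

On real snapshot data the ratios will differ, but the relative ordering (xz tightest, gzip loosest) generally holds; wall-clock time, which the thread is mostly about, is what preset levels and pbzip2 / xz -T parallelism trade against.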
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fw: [Heat] Multiple_Routers_Topology

2014-12-22 Thread Rao Shweta
 

 Hi All

I am working on OpenStack Heat and I want to build the topology below using a Heat
template:



For this I am using the following template:

AWSTemplateFormatVersion: '2010-09-09'
Description: Sample Heat template that spins up multiple instances and a
  private network (JSON)
Resources:
  heat_network_01:
    Properties: {name: heat-network-01}
    Type: OS::Neutron::Net
  heat_network_02:
    Properties: {name: heat-network-02}
    Type: OS::Neutron::Net
  heat_router_01:
    Properties: {admin_state_up: 'True', name: heat-router-01}
    Type: OS::Neutron::Router
  heat_router_02:
    Properties: {admin_state_up: 'True', name: heat-router-02}
    Type: OS::Neutron::Router
  heat_router_int0:
    Properties:
      router_id: {Ref: heat_router_01}
      subnet_id: {Ref: heat_subnet_01}
    Type: OS::Neutron::RouterInterface
  heat_router_int1:
    Properties:
      router_id: {Ref: heat_router_02}
      subnet_id: {Ref: heat_subnet_02}
    Type: OS::Neutron::RouterInterface
  heat_subnet_01:
    Properties:
      cidr: 10.10.10.0/24
      dns_nameservers: [172.16.1.11, 172.16.1.6]
      enable_dhcp: 'True'
      gateway_ip: 10.10.10.254
      name: heat-subnet-01
      network_id: {Ref: heat_network_01}
    Type: OS::Neutron::Subnet
  heat_subnet_02:
    Properties:
      cidr: 10.10.11.0/24
      dns_nameservers: [172.16.1.11, 172.16.1.6]
      enable_dhcp: 'True'
      gateway_ip: 10.10.11.254
      name: heat-subnet-02
      network_id: {Ref: heat_network_02}
    Type: OS::Neutron::Subnet
  instance0:
    Properties:
      flavor: m1.nano
      image: cirros-0.3.2-x86_64-uec
      name: heat-instance-01
      networks:
      - port: {Ref: instance0_port0}
    Type: OS::Nova::Server
  instance0_port0:
    Properties:
      admin_state_up: 'True'
      network_id: {Ref: heat_network_01}
    Type: OS::Neutron::Port
  instance1:
    Properties:
      flavor: m1.nano
      image: cirros-0.3.2-x86_64-uec
      name: heat-instance-02
      networks:
      - port: {Ref: instance1_port0}
    Type: OS::Nova::Server
  instance1_port0:
    Properties:
      admin_state_up: 'True'
      network_id: {Ref: heat_network_01}
    Type: OS::Neutron::Port
  instance11:
    Properties:
      flavor: m1.nano
      image: cirros-0.3.2-x86_64-uec
      name: heat-instance11-01
      networks:
      - port: {Ref: instance11_port0}
    Type: OS::Nova::Server
  instance11_port0:
    Properties:
      admin_state_up: 'True'
      network_id: {Ref: heat_network_02}
    Type: OS::Neutron::Port
  instance12:
    Properties:
      flavor: m1.nano
      image: cirros-0.3.2-x86_64-uec
      name: heat-instance12-02
      networks:
      - port: {Ref: instance12_port0}
    Type: OS::Nova::Server
  instance12_port0:
    Properties:
      admin_state_up: 'True'
      network_id: {Ref: heat_network_02}
    Type: OS::Neutron::Port

I am able to create the topology with this template, but I am not able to connect
the two routers, and I cannot find a template example on the internet that connects
two routers. Can you please help me with the following?

1.) Can we connect two routers? I tried adding an interface on router 1 and
attaching it to subnet 2, which results in an error:

  heat_router_int0:
    Properties:
      router_id: {Ref: heat_router_01}
      subnet_id: {Ref: heat_subnet_02}

Can you please explain how routers can be connected, or linked to each other, in a
template?

2.) Can you please forward a link or an example template that I can refer to in
order to implement the required topology?

Waiting for a response



Thank you

Regards
 Shweta Rao
 Mailto: rao.shw...@tcs.com
 Website: http://www.tcs.com
 

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2014-12-22 Thread Radomir Dopieralski
On 20/12/14 21:25, Richard Jones wrote:
 This is a good proposal, though I'm unclear on how the
 static_settings.py file is populated by a developer (as opposed to a
 packager, which you described).

It's not; the developer version is included in the repository and simply
points to where Bower is configured to put the files.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How can I continue to complete an abandoned blueprint?

2014-12-22 Thread li-zheming
Hi all:
   BP flavor-quota-memory
(https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory)
was submitted by my partner in Havana, but it was abandoned for some reason. I want
to continue this blueprint. Based on the Kilo blueprint rules, a spec is not
necessary for this BP, so I plan to submit the code directly and use the commit
message to clear up the questions raised about the spec. Is that right? How should
I proceed? Thanks!


   


 





--

Name :  Li zheming
Company :  Hua Wei
Address  : Shenzhen China
Tel: 0086 18665391827
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 'module' object has no attribute 'HVSpec'

2014-12-22 Thread Srinivasa Rao Ragolu
Hi All,

I have integrated below CPU pinning patches to Nova

https://review.openstack.org/#/c/132001/2
https://review.openstack.org/#/c/128738/12
https://review.openstack.org/#/c/129266/11
https://review.openstack.org/#/c/129326/11
https://review.openstack.org/#/c/129603/10
https://review.openstack.org/#/c/129626/11
https://review.openstack.org/#/c/130490/11
https://review.openstack.org/#/c/130491/11
https://review.openstack.org/#/c/130598/10
https://review.openstack.org/#/c/131069/9
https://review.openstack.org/#/c/131210/8
https://review.openstack.org/#/c/131830/5
https://review.openstack.org/#/c/131831/6
https://review.openstack.org/#/c/131070/
https://review.openstack.org/#/c/132086/
https://review.openstack.org/#/c/132295/
https://review.openstack.org/#/c/132296/
https://review.openstack.org/#/c/132297/
https://review.openstack.org/#/c/132557/
https://review.openstack.org/#/c/132655/


And now if I try to run nova-compute, getting below error


  File "/opt/stack/nova/nova/objects/compute_node.py", line 93, in _from_db_object
    for hv_spec in hv_specs]
AttributeError: 'module' object has no attribute 'HVSpec'


Please help me in resolving this issue.


Thanks,

Srinivas.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 'module' object has no attribute 'HVSpec'

2014-12-22 Thread Kashyap Chamarthy
On Mon, Dec 22, 2014 at 04:37:47PM +0530, Srinivasa Rao Ragolu wrote:
 Hi All,
 
 I have integrated below CPU pinning patches to Nova

As of now, CPU pinning works directly from Nova git (as you can see,
most of the patches below are merged), you don't have to manually apply
any patches.

 https://review.openstack.org/#/c/132001/2https://review.openstack.org/#/c/128738/12https://review.openstack.org/#/c/129266/11https://review.openstack.org/#/c/129326/11https://review.openstack.org/#/c/129603/10https://review.openstack.org/#/c/129626/11https://review.openstack.org/#/c/130490/11https://review.openstack.org/#/c/130491/11https://review.openstack.org/#/c/130598/10https://review.openstack.org/#/c/131069/9https://review.openstack.org/#/c/131210/8https://review.openstack.org/#/c/131830/5https://review.openstack.org/#/c/131831/6https://review.openstack.org/#/c/131070/https://review.openstack.org/#/c/132086/https://review.openstack.org/#/c/132295/https://review.openstack.org/#/c/132296/https://review.openstack.org/#/c/132297/https://review.openstack.org/#/c/132557/https://review.openstack.org/#/c/132655/

The links are all mangled due to the bad formatting.

 And now if I try to run nova-compute, getting below error
 
 
 File /opt/stack/nova/nova/objects/compute_node.py, line 93, in 
 _from_db_object
 
 for hv_spec in hv_specs]
 
 AttributeError: 'module' object has no attribute 'HVSpec'

You can try directly from git and DevStack without manually applying
patches.

Also, these kinds of usage questions are better suited for the operators
list or ask.openstack.org.

-- 
/kashyap

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-22 Thread Alex Xu
Joe, thanks, that's a useful feature. But I'm still not sure it's a good fit for
this case. If a user's server group is deleted by the administrator and a new
server group is created for the user by the administrator, that sounds confusing
for the user. I'm thinking of the HA case: if a host fails, the infrastructure can
evacuate instances off the failed host automatically, and the user shouldn't be
affected by that (the user will still see that the instance is down and that it
comes back later; at least we should reduce the impact).

I think the key question is whether evacuating an instance out of a failed host
that is in an affinity group is a policy violation or not. Since the host has
already failed, we can ignore it when we evacuate the first instance to another
host. After the first instance is evacuated, there is a new live host in the
server group, and the other instances will be evacuated to that host, complying
with the affinity policy.

2014-12-22 11:29 GMT+08:00 Joe Cropper cropper@gmail.com:

 This is another great example of a use case in which these blueprints [1,
 2] would be handy.  They didn’t make the clip line for Kilo, but we’ll try
 again for L.  I personally don’t think the scheduler should have “special
 case” rules about when/when not to apply affinity policies, as that could
 be confusing for administrators.  It would be simple to just remove it from
 the group, thereby allowing the administrator to rebuild the VM anywhere
 s/he wants… and then re-add the VM to the group once the environment is
 operational once again.

 [1] https://review.openstack.org/#/c/136487/
 [2] https://review.openstack.org/#/c/139272/

 - Joe

 On Dec 21, 2014, at 8:36 PM, Lingxian Kong anlin.k...@gmail.com wrote:

  2014-12-22 9:21 GMT+08:00 Alex Xu sou...@gmail.com:
 
 
  2014-12-22 9:01 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:
 
 
 
  but what if the compute node is back to normal? There will be
  instances in the same server group with affinity policy, but located
  in different hosts.
 
 
  If operator decide to evacuate the instance from the failed host, we
 should
  fence the failed host first.
 
  Yes, actually. I mean the recommandation or prerequisite should be
  emphasized somewhere, e.g. the Operation Guide, otherwise it'll make
  things more confused. But the issue you are working around is indeed a
  problem we should solve.
 
  --
  Regards!
  ---
  Lingxian Kong
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Plans to load and performance testing

2014-12-22 Thread Anastasia Kuznetsova
Dmitry,

Now I see that my comments were not very informative; I will try to describe the
environment and scenarios in more detail.

1) *1 api 1 engine 1 executor* means that all 3 Mistral processes were running on
the same box.
2) The list-workbooks scenario was run while there were no workflow executions
going on at the same time. I will take note of your comment and also measure the
time in that situation; I expect it will take longer, the question is by how much.
3) 60% success means that only 60% of the runs of the 'list-workbooks' scenario
succeeded. So far I have observed only one type of error, a connection error to
RabbitMQ: ConnectionError: ('Connection aborted.', error(104, 'Connection reset
by peer'))
4) We don't know Mistral's durability limits or under what load Mistral will
'die'; we want to define that threshold.

P.S. Dmitry, if you have any ideas/scenarios which you want to test, please
share them.
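
For reference, a Rally task entry along the lines Boris suggests might look roughly like this (the scenario name and the numbers are illustrative, not from the published results):

```yaml
---
  MistralWorkbooks.list_workbooks:     # hypothetical scenario name
    - runner:
        type: rps                      # the "more real life" runner
        times: 300                     # total iterations
        rps: 10                        # requests started per second
      context:
        users:
          tenants: 1
          users_per_tenant: 1
    - runner:
        type: constant
        times: 300                     # kept well above concurrency
        concurrency: 100
      context:
        users:
          tenants: 1
          users_per_tenant: 1
```

The rps runner fixes the arrival rate rather than the pool of in-flight requests, which is usually closer to production traffic than a constant-concurrency run.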

On Sat, Dec 20, 2014 at 9:35 AM, Dmitri Zimine dzim...@stackstorm.com
wrote:

 Anastasia, any start is a good start.

 * 1 api 1 engine 1 executor, list-workbooks*

 what exactly does it mean: 1) is Mistral deployed on 3 boxes with
 component per box, or all three are processes on the same box? 2) is
 list-workbooks test running while workflow executions going on? How many?
 what’s the character of the load 3) when it says 60% success what exactly
 does it mean, what kind of failures? 4) what is the durability criteria,
 how long do we expect Mistral to withstand the load.

 Let’s discuss this in details on the next IRC meeting?

 Thanks again for getting this started.

 DZ.


 On Dec 19, 2014, at 7:44 AM, Anastasia Kuznetsova 
 akuznets...@mirantis.com wrote:

 Boris,

 Thanks for feedback!

  But I belive that you should put bigger load here:
 https://etherpad.openstack.org/p/mistral-rally-testing-results

 As I said it is only beginning and  I will increase the load and change
 its type.

 As well concurrency should be at least 2-3 times bigger than times
 otherwise it won't generate proper load and you won't collect enough data
 for statistical analyze.
 
 As well use  rps runner that generates more real life load.
 Plus it will be nice to share as well output of rally task report
 command.

 Thanks for the advice, I will consider it in further testing and reporting.

 Answering to your question about using Rally for integration testing, as I
 mentioned in our load testing plan published on wiki page,  one of our
 final goals is to have a Rally gate in one of Mistral repositories, so we
 are interested in it and I already prepare first commits to Rally.

 Thanks,
 Anastasia Kuznetsova

 On Fri, Dec 19, 2014 at 4:51 PM, Boris Pavlovic bpavlo...@mirantis.com
 wrote:

 Anastasia,

 Nice work on this. But I belive that you should put bigger load here:
 https://etherpad.openstack.org/p/mistral-rally-testing-results

 As well concurrency should be at least 2-3 times bigger than times
 otherwise it won't generate proper load and you won't collect enough data
 for statistical analyze.

 As well use  rps runner that generates more real life load.
 Plus it will be nice to share as well output of rally task report
 command.


 By the way what do you think about using Rally scenarios (that you
 already wrote) for integration testing as well?


 Best regards,
 Boris Pavlovic

 On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova 
 akuznets...@mirantis.com wrote:

 Hello everyone,

 I want to announce that Mistral team has started work on load and
 performance testing in this release cycle.

 Brief information about scope of our work can be found here:

 https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing

 First results are published here:
 https://etherpad.openstack.org/p/mistral-rally-testing-results

 Thanks,
 Anastasia Kuznetsova
 @ Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-22 Thread Alex Xu
2014-12-22 10:36 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:

 2014-12-22 9:21 GMT+08:00 Alex Xu sou...@gmail.com:
 
 
  2014-12-22 9:01 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:
 

 
  but what if the compute node is back to normal? There will be
  instances in the same server group with affinity policy, but located
  in different hosts.
 
 
  If operator decide to evacuate the instance from the failed host, we
 should
  fence the failed host first.

 Yes, actually. I mean the recommandation or prerequisite should be
 emphasized somewhere, e.g. the Operation Guide, otherwise it'll make
 things more confused. But the issue you are working around is indeed a
 problem we should solve.


Yea, you are right, we should doc it if we think this makes sense. Thanks!


 --
 Regards!
 ---
 Lingxian Kong

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Evacuate instance which in server group with affinity policy

2014-12-22 Thread Sylvain Bauza


On 22/12/2014 13:37, Alex Xu wrote:



2014-12-22 10:36 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:


2014-12-22 9:21 GMT+08:00 Alex Xu sou...@gmail.com:


 2014-12-22 9:01 GMT+08:00 Lingxian Kong anlin.k...@gmail.com:



 but what if the compute node is back to normal? There will be
 instances in the same server group with affinity policy, but
located
 in different hosts.


 If operator decide to evacuate the instance from the failed
host, we should
 fence the failed host first.

Yes, actually. I mean the recommandation or prerequisite should be
emphasized somewhere, e.g. the Operation Guide, otherwise it'll make
things more confused. But the issue you are working around is indeed a
problem we should solve.


Yea, you are right, we should doc it if we think this makes sense. Thanks!


As I said, I'm not in favor of adding more complexity to the instance 
group setup done in the conductor, for basic race-condition reasons.


If I understand correctly, the problem is: when there is only one host 
running all the instances belonging to a group with the affinity policy and 
that host is down, the filter will deny every other host and consequently 
the request will fail, while it should succeed.


Is this really a problem? I mean, it appears to me that this is normal 
behaviour, because a filter is by definition a *hard* policy.


So, if what you want is a *soft* policy, that sounds more like a *weigher*: 
i.e. make sure that hosts running existing instances in the group are 
weighted higher than the others so they'll be chosen every time, but in 
case they're down, allow the scheduler to pick other hosts.
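
A toy sketch (my own, not Nova code) of the hard-filter vs. soft-weigher distinction: the filter rejects the request outright when the only affinity host is down, while the weigher merely prefers group hosts and falls back to any live host:

```python
def pick_host(hosts, group_hosts, soft_affinity):
    """Toy scheduler. hosts: [{'name': str, 'up': bool}]; group_hosts: set of names."""
    alive = [h for h in hosts if h["up"]]
    if not soft_affinity:
        # Hard policy (filter): only hosts already running group members pass.
        candidates = [h for h in alive if h["name"] in group_hosts]
        if not candidates:
            raise RuntimeError("affinity filter rejected every live host")
        return candidates[0]["name"]
    # Soft policy (weigher): rank group hosts first, any live host otherwise.
    return max(alive, key=lambda h: (h["name"] in group_hosts, h["name"]))["name"]

hosts = [{"name": "cn1", "up": False}, {"name": "cn2", "up": True}]
print(pick_host(hosts, {"cn1"}, soft_affinity=True))   # evacuation proceeds to cn2
```

With `soft_affinity=False` the same call raises, which mirrors the filter failing the evacuation request in the thread's scenario.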


HTH,
-Sylvain





--
Regards!
---
Lingxian Kong

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How can I continue to complete an abandoned blueprint?

2014-12-22 Thread Jay Pipes

On 12/22/2014 04:54 AM, li-zheming wrote:

hi all: Bp
flavor-quota-memory(https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory)
was submitted by my partner in havana.   but it has abandoned because
of  some reason.


Some reason == the submitter failed to provide any details on how the 
work would be implemented, what the use cases were, and any alternatives 
that might be possible.


  I want to  continue to this blueprint. Based on the

rules about BP for kilo,
for this bp, spec is not necessary, so I submit the code directly and
give commit message to clear out questions in spec.  Is it right? how
can I do? thanks!


Specs are no longer necessary for smallish features, no. A blueprint is 
still necessary on Launchpad, so you should be able to use the abandoned 
one you link above -- which, AFAICT, has enough implementation details 
about the proposed changes.


Alternately, if you cannot get the original submitter to remove the spec 
link to the old spec review, you can always start a new blueprint and we 
can mark that one as obsolete.


I'd like Dan Berrange (cc'd) to review whichever blueprint on Launchpad 
you end up using. Please let us know what you do.


All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Our idea for SFC using OpenFlow. RE: [NFV][Telco] Service VM v/s its basic framework

2014-12-22 Thread A, Keshava
Vikram,

1.   In this solution, is it assumed that all the OpenStack services are 
available/enabled on all the CNs?

2.   Consider a scenario: for a particular tenant's traffic, the flows are 
chained across a set of CNs.

Then if one of that tenant's VMs migrates to a new CN where that tenant was 
not present earlier, what will be the impact?

How do we control the chaining of flows in this kind of scenario, so that 
packets will reach that tenant's VM on the new CN?



Here the tenant VM would be an NFV Service-VM (which should be transparent to 
OpenStack).

keshava



From: Vikram Choudhary [mailto:vikram.choudh...@huawei.com]
Sent: Monday, December 22, 2014 12:28 PM
To: Murali B
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; A, Keshava; 
stephen.kf.w...@gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: RE: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Sorry for the inconvenience. We will sort out the issue at the earliest.
Please find the BP attached with the mail!!!

From: Murali B [mailto:mbi...@gmail.com]
Sent: 22 December 2014 12:20
To: Vikram Choudhary
Cc: openstack-dev@lists.openstack.org; yuriy.babe...@telekom.de; keshav...@hp.com; 
stephen.kf.w...@gmail.com; Dhruv Dhody; Dongfeng (C); Kalyankumar Asangi
Subject: Re: Our idea for SFC using OpenFlow. RE: [openstack-dev] [NFV][Telco] 
Service VM v/s its basic framework

Thank you Vikram,

Could you or somebody please provide the access the full specification document

Thanks
-Murali

On Mon, Dec 22, 2014 at 11:48 AM, Vikram Choudhary 
vikram.choudh...@huawei.com wrote:
Hi Murali,

We have proposed service function chaining idea using open flow.
https://blueprints.launchpad.net/neutron/+spec/service-function-chaining-using-openflow

Will submit the same for review soon.

Thanks
Vikram

From: yuriy.babe...@telekom.de [mailto:yuriy.babe...@telekom.de]
Sent: 18 December 2014 19:35
To: openstack-dev@lists.openstack.org; stephen.kf.w...@gmail.com
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi,
in the IRC meeting yesterday we agreed to work on the use-case for service 
function chaining as it seems to be important for a lot of participants [1].
We will prepare the first draft and share it in the TelcoWG Wiki for discussion.

There is one blueprint in openstack on that in [2]


[1] 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

From: A, Keshava [mailto:keshav...@hp.com]
Sent: Wednesday, 10 December 2014 19:06
To: stephen.kf.w...@gmail.com; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknowns w.r.t. the 'Service-VM' and what it should look like from 
an NFV perspective.
In my opinion it has not yet been decided what the Service-VM framework should be.
Depending on this, we at OpenStack will also see an impact on 'Service Chaining'.
Please find attached the mail w.r.t. that 'Service-VM + OpenStack OVS' discussion 
with the NFV group.


Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development 
on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page 
above and see if the project's goals align with yours. If so, you are certainly 
welcome to join the IRC meeting and start to contribute to the project's 
direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B 
mbi...@gmail.com wrote:
Hi keshava,

We would like contribute towards service chain and NFV

Could you please share the document if you have any related to service VM

The service chain can be achieved if we are able to redirect the traffic to the 
service VM using OVS flows.

In this case we do not need routing enabled on the service VM (traffic is 
redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by adding 
the OVS rules in OVS.
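
As a rough illustration of that L2 redirect (my own sketch, with made-up port numbers: port 1 is the tenant VM's OVS port, port 5 the service VM's, on a typical br-int integration bridge), the flow rules might look like:

```
# Steer IPv4 traffic from the tenant VM's port through the service VM:
ovs-ofctl add-flow br-int "priority=100,in_port=1,ip,actions=output:5"
# Hand traffic returning from the service VM back to normal L2 switching:
ovs-ofctl add-flow br-int "priority=100,in_port=5,actions=normal"
```

A real chain would also need to classify per-tenant traffic (e.g. by VLAN or tunnel ID) and survive the VM-migration case Keshava raises, since port numbers change when a VM lands on a new CN.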


Thanks
-Murali




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [qa] host aggregate's availability zone

2014-12-22 Thread Danny Choi (dannchoi)
Hi Joe,

No, I did not.  I’m not aware of this.

Can you tell me exactly what needs to be done?

Thanks,
Danny

--

Date: Sun, 21 Dec 2014 11:42:02 -0600
From: Joe Cropper cropper@gmail.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [qa] host aggregate's availability zone
Message-ID: b36d2234-bee0-4c7b-a2b2-a09cc9098...@gmail.com
Content-Type: text/plain; charset=utf-8

Did you enable the AvailabilityZoneFilter in nova.conf that the scheduler uses? 
 And enable the FilterScheduler?  These are two common issues related to this.

- Joe
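
For anyone finding this thread later, the two settings Joe refers to looked roughly like this in a Juno/Kilo-era nova.conf on the scheduler node (option names have changed in later releases, and the filter list shown is only an example, so treat this as a sketch):

```ini
[DEFAULT]
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
```

Without AvailabilityZoneFilter in the active filter list, the --availability-zone hint is simply ignored and the VM can land on any host.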

On Dec 21, 2014, at 10:28 AM, Danny Choi (dannchoi) 
dannc...@cisco.com wrote:
Hi,
I have a multi-node setup with 2 compute hosts, qa5 and qa6.
I created 2 host-aggregate, each with its own availability zone, and assigned 
one compute host:
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-1
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 9  | host-aggregate-zone-1 | az-1              | 'qa5' | 'availability_zone=az-1' |
+----+-----------------------+-------------------+-------+--------------------------+
localadmin@qa4:~/devstack$ nova aggregate-details host-aggregate-zone-2
+----+-----------------------+-------------------+-------+--------------------------+
| Id | Name                  | Availability Zone | Hosts | Metadata                 |
+----+-----------------------+-------------------+-------+--------------------------+
| 10 | host-aggregate-zone-2 | az-2              | 'qa6' | 'availability_zone=az-2' |
+----+-----------------------+-------------------+-------+--------------------------+
My intent is to control at which compute host to launch a VM via the 
host-aggregate's availability-zone parameter.
To test, for vm-1 I specify --availability-zone=az-1, and --availability-zone=az-2 
for vm-2:
localadmin@qa4:~/devstack$ nova boot --image cirros-0.3.2-x86_64-uec --flavor 1 
--nic net-id=5da9d715-19fd-47c7-9710-e395b5b90442 --availability-zone az-1 vm-1
+--------------------------------------+----------------------------------------------------------------+
| Property                             | Value                                                          |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                         |
| OS-EXT-AZ:availability_zone          | nova                                                           |
| OS-EXT-SRV-ATTR:host                 | -                                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
| OS-EXT-SRV-ATTR:instance_name        | instance-0066                                                  |
| OS-EXT-STS:power_state               | 0                                                              |
| OS-EXT-STS:task_state                | -                                                              |
| OS-EXT-STS:vm_state                  | building                                                       |
| OS-SRV-USG:launched_at               | -                                                              |
| OS-SRV-USG:terminated_at             | -                                                              |
| accessIPv4                           |                                                                |
| accessIPv6                           |                                                                |
| adminPass                            | kxot3ZBZcBH6                                                   |
| config_drive                         |                                                                |
| created                              | 2014-12-21T15:59:03Z                                           |
| flavor                               | m1.tiny (1)                                                    |
| hostId                               |                                                                |
| id                                   | 854acae9-b718-4ea5-bc28-e0bc46378b60                           |
| image                                | cirros-0.3.2-x86_64-uec (61409a53-305c-4022-978b-06e55052875b) |
| key_name                             | -                                                              |
| metadata                             | {}                                                             |
| name

[openstack-dev] [mistral] Team meeting - 12/22/2014

2014-12-22 Thread Renat Akhmerov
Hi,

Reminding that we have a team meeting today at #openstack-meeting at 16.00 UTC

Review action items
Current status (progress, issues, roadblocks, further plans)
Kilo-1 scope and blueprints
for-each 
Scoping (global, local etc.)
Load testing
Open discussion

(see https://wiki.openstack.org/wiki/Meetings/MistralAgenda 
https://wiki.openstack.org/wiki/Meetings/MistralAgenda to find the agenda and 
the meeting archive)

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread John Griffith
Lately (on the Cinder team at least) there's been a lot of
disagreement in reviews regarding the proper way to do LOG messages
correctly.  Use of '%' vs ',' in the formatting of variables etc.

We do have the oslo i18n guidelines page here [1], which helps a lot
but there's some disagreement on a specific case here.  Do we have a
set answer on:

LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})

vs

LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})


It's always fun when one person provides a -1 for the first usage; the
submitter changes it and another reviewer gives a -1 and says, no it
should be the other way.

I'm hoping maybe somebody on the oslo team can provide an
authoritative answer and we can then update the example page
referenced in [1] to clarify this particular case.

Thanks,
John

[1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html



[openstack-dev] [nova][vmware] Canceling VMware meeting 12/24 and 12/31

2014-12-22 Thread Gary Kotton
Hi,
I am not sure that we will have enough people around for the upcoming
meetings. I suggest that we cancel them and resume in the New Year. Happy 
holidays to all!
A luta continua (the struggle continues)
Gary


[openstack-dev] No Cross-project meeting nor 1:1 syncs for next two weeks

2014-12-22 Thread Thierry Carrez
PTLs and others,

As a reminder, we'll be skipping the cross-project meeting (normally
held on Tuesdays at 21:00 UTC) for the next two weeks. Next meeting will
be on January 6th.

We'll also skip 1:1 sync between release liaisons and release management
(normally held on Tuesdays and Thursdays) for the next two weeks. If you
have anything urgent to discuss don't hesitate to ping me on
#openstack-relmgr-office.

Enjoy the end-of-year holiday season!

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron][lbaas] meetings during holidays

2014-12-22 Thread Doug Wiegley
Canceled. The next lbaas meeting will be 1/6. Happy holidays.

Thanks,
doug

On 12/19/14, 11:33 AM, Doug Wiegley do...@a10networks.com wrote:

Hi all,

Anyone have big agenda items for the 12/23 or 12/30 meeting? If not, I’d
suggest we cancel those two meetings, and bring up anything small during
the on-demand portion of the neutron meetings.

If I don’t hear anything by Monday, we will cancel those two meetings.

Thanks,
Doug




Re: [openstack-dev] [Fuel] Feature delivery rules and automated tests

2014-12-22 Thread Anastasia Urlapova
Mike, Dmitry, team,
let me add 5 cents - tests per feature have to run on CI before SCF, which
means that the jobs configuration should also be implemented.

On Wed, Dec 17, 2014 at 7:33 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 I fully support the idea.

 Feature Lead has to know that his feature is under threat if it's not yet
 covered by system tests (unit/integration tests are not enough!!!), and
 should proactively work with QA engineers to get tests implemented and
 passing before SCF.

 On Fri, Dec 12, 2014 at 5:55 PM, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:

 Guys,

 we've done a good job in 6.0. Most of the features were merged before
 feature freeze. Our QA were involved in testing even earlier. It was much
 better than before.

 We had a discussion with Anastasia. There were several bug reports for
 features yesterday, far beyond HCF. So we still have a long way to go to be
 perfect. We should add one rule: we need to have automated tests before HCF.

 Actually, we should have results of these tests just after FF. It is
 quite challenging because we have a short development cycle. So my
 proposal is to require full deployment and run of automated tests for each
 feature before soft code freeze. And it needs to be tracked in checklists
 and on feature syncups.

 Your opinion?




 --
 Mike Scherbakov
 #mihgen






[openstack-dev] [Infra] Meeting Tuesday December 23rd at 19:00 UTC

2014-12-22 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday December 23rd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

Meeting log and minutes from the last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2



Re: [openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread Ben Nemec
On 12/22/2014 09:42 AM, John Griffith wrote:
 Lately (on the Cinder team at least) there's been a lot of
 disagreement in reviews regarding the proper way to do LOG messages
 correctly.  Use of '%' vs ',' in the formatting of variables etc.
 
 We do have the oslo i18n guidelines page here [1], which helps a lot
 but there's some disagreement on a specific case here.  Do we have a
 set answer on:
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})
 
 vs
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})

This is the preferred way.

Note that this is just a multi-variable variation on
http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
and the reasoning discussed there applies.

I'd be curious why some people prefer the % version because to my
knowledge that's not recommended even for untranslated log messages.

 
 
 It's always fun when one person provides a -1 for the first usage; the
 submitter changes it and another reviewer gives a -1 and says, no it
 should be the other way.
 
 I'm hoping maybe somebody on the oslo team can provide an
 authoritative answer and we can then update the example page
 referenced in [1] to clarify this particular case.
 
 Thanks,
 John
 
 [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html
 
 




Re: [openstack-dev] [Heat] How can I write at milestone section of blueprint?

2014-12-22 Thread Randall Burt
It's been discussed at several summits. We have settled on a general solution 
using Zaqar, but no work has been done that I know of. I was just pointing out 
that similar blueprints/specs exist and you may want to look through those to 
get some ideas about writing your own and/or basing your proposal off of one of 
them.

On Dec 22, 2014, at 12:19 AM, Yasunori Goto y-g...@jp.fujitsu.com
 wrote:

 Randall-san,
 
 There should already be blueprints in launchpad for very similar 
 functionality.
 For example: https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks.
 While that specifies Heat sending notifications to the outside world,
 there has been discussion around debugging that would allow the receiver to
 send notifications back. I only point this out so you can see there should be
 similar blueprints and specs that you can reference and use as examples.
 
 Thank you for pointing it out.
 But do you know current status about it?
 Though the above blueprint is not approved, and it seems to be discarded.
 
 Bye,
 
 
 On Dec 19, 2014, at 4:17 AM, Steven Hardy sha...@redhat.com
 wrote:
 
 On Fri, Dec 19, 2014 at 05:02:04PM +0900, Yasunori Goto wrote:
 
 Hello,
 
 This is the first mail at Openstack community,
 
 Welcome! :)
 
 and I have a small question about how to write blueprint for Heat.
 
 Currently our team would like to propose 2 interfaces
 for users operation in HOT. 
 (One is Event handler which is to notify user's defined event to heat.
 Another is definitions of action when heat catches the above notification.)
 So, I'm preparing the blueprint for it.
 
 Please include details of the exact use-case, e.g the problem you're trying
 to solve (not just the proposed solution), as it's possible we can suggest
 solutions based on existing interfaces.
 
 However, I cannot work out what to write in the milestone section of the
 blueprint.
 
 Heat blueprint template has a section for Milestones.
 Milestones -- Target Milestone for completion:
 
 But I don't think I can decide it by myself.
 In my understanding, it should be decided by PTL.
 
 Normally, it's decided by when the person submitting the spec expects to
 finish writing the code by.  The PTL doesn't really have much control over
 that ;)
 
 In addition, probably the above our request will not finish
 by Kilo. I suppose it will be L version or later.
 
 So to clarify, you want to propose the feature, but you're not planning on
 working on it (e.g implementing it) yourself?
 
 So, what should I write at this section?
 Kilo-x, L version, or empty?
 
 As has already been mentioned, it doesn't matter that much - I see it as a
 statement of intent from developers.  If you're just requesting a feature,
 you can even leave it blank if you want and we'll update it when an
 assignee is found (e.g during the spec review).
 
 Thanks,
 
 Steve
 
 
 
 
 -- 
 Yasunori Goto y-g...@jp.fujitsu.com
 
 
 




Re: [openstack-dev] Fw: [Heat] Multiple_Routers_Topoloy

2014-12-22 Thread Zane Bitter
The -dev mailing list is not for usage questions. Please post your 
question to ask.openstack.org and include the text of the error message 
you get when trying to add a RouterInterface.


cheers,
Zane.

On 22/12/14 04:18, Rao Shweta wrote:



Hi All

I am working on OpenStack Heat and I wanted to build the topology below using a
heat template:



For this i am using a template as given :

AWSTemplateFormatVersion: '2010-09-09'
Description: Sample Heat template that spins up multiple instances and a
private network
   (JSON)
Resources:
   heat_network_01:
 Properties: {name: heat-network-01}
 Type: OS::Neutron::Net
   heat_network_02:
 Properties: {name: heat-network-02}
 Type: OS::Neutron::Net
   heat_router_01:
 Properties: {admin_state_up: 'True', name: heat-router-01}
 Type: OS::Neutron::Router
   heat_router_02:
 Properties: {admin_state_up: 'True', name: heat-router-02}
 Type: OS::Neutron::Router
   heat_router_int0:
 Properties:
   router_id: {Ref: heat_router_01}
   subnet_id: {Ref: heat_subnet_01}
 Type: OS::Neutron::RouterInterface
   heat_router_int1:
 Properties:
   router_id: {Ref: heat_router_02}
   subnet_id: {Ref: heat_subnet_02}
 Type: OS::Neutron::RouterInterface
   heat_subnet_01:
 Properties:
   cidr: 10.10.10.0/24
   dns_nameservers: [172.16.1.11, 172.16.1.6]
   enable_dhcp: 'True'
   gateway_ip: 10.10.10.254
   name: heat-subnet-01
   network_id: {Ref: heat_network_01}
 Type: OS::Neutron::Subnet
   heat_subnet_02:
 Properties:
   cidr: 10.10.11.0/24
   dns_nameservers: [172.16.1.11, 172.16.1.6]
   enable_dhcp: 'True'
   gateway_ip: 10.10.11.254
    name: heat-subnet-02
   network_id: {Ref: heat_network_02}
 Type: OS::Neutron::Subnet
   instance0:
 Properties:
   flavor: m1.nano
   image: cirros-0.3.2-x86_64-uec
   name: heat-instance-01
   networks:
   - port: {Ref: instance0_port0}
 Type: OS::Nova::Server
   instance0_port0:
 Properties:
   admin_state_up: 'True'
   network_id: {Ref: heat_network_01}
 Type: OS::Neutron::Port
   instance1:
 Properties:
   flavor: m1.nano
   image: cirros-0.3.2-x86_64-uec
   name: heat-instance-02
   networks:
   - port: {Ref: instance1_port0}
 Type: OS::Nova::Server
   instance1_port0:
 Properties:
   admin_state_up: 'True'
   network_id: {Ref: heat_network_01}
 Type: OS::Neutron::Port
   instance11:
 Properties:
   flavor: m1.nano
   image: cirros-0.3.2-x86_64-uec
   name: heat-instance11-01
   networks:
   - port: {Ref: instance11_port0}
 Type: OS::Nova::Server
   instance11_port0:
 Properties:
   admin_state_up: 'True'
   network_id: {Ref: heat_network_02}
 Type: OS::Neutron::Port
   instance12:
 Properties:
   flavor: m1.nano
   image: cirros-0.3.2-x86_64-uec
   name: heat-instance12-02
   networks:
   - port: {Ref: instance12_port0}
 Type: OS::Nova::Server
   instance12_port0:
 Properties:
   admin_state_up: 'True'
   network_id: {Ref: heat_network_02}
 Type: OS::Neutron::Port

I am able to create the topology using the template, but I am not able to
connect the two routers, nor can I find a template example on the internet
that connects two routers. Can you please help me with:

1.) Can we connect two routers? I tried making an interface on
router 1 and connecting it to subnet2, which results in an error.

   heat_router_int0:
 Properties:
   router_id: {Ref: heat_router_01}
   subnet_id: {Ref: heat_subnet_02}

Can you please guide me on how we can connect routers, or create a link
between routers, using a template.

2.) Can you please forward a link or an example template that I can
refer to in order to implement the required topology with Heat.

Waiting for a response



Thankyou

Regards
Shweta Rao
Mailto: rao.shw...@tcs.com
Website: http://www.tcs.com


=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you









Re: [openstack-dev] [nova] Setting MTU size for tap device

2014-12-22 Thread Vishvananda Ishaya
It makes sense to me to add it. Libvirt sets the MTU from the bridge when it 
creates the tap device, but if you are creating it manually you might need to 
set it to something else.

Vish

On Dec 17, 2014, at 10:29 PM, Ryu Ishimoto r...@midokura.com wrote:

 Hi All,
 
 I noticed that in linux_net.py, the method to create a tap interface[1] does 
 not let you set the MTU size.  In other places, I see calls made to set the 
 MTU of the device [2].
 
  I'm wondering if there are any technical reasons why we can't also set the 
  MTU size when creating tap interfaces in the general case.  In certain overlay 
 solutions, this would come in handy.  If there isn't any, I would love to 
 submit a patch to accomplish this.
 
 Thanks in advance!
 
 Ryu
 
 [1] 
 https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1374
 [2] 
 https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1309
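
For illustration only, a sketch of what an MTU-aware tap helper might look
like as a pure command builder (the name create_tap_cmds and its signature
are invented for this example, not Nova's actual API; a real implementation
would run each argv tuple via utils.execute() with run_as_root=True):

```python
def create_tap_cmds(dev, mac_address=None, mtu=None):
    """Build the ip(8) command lines needed to create a tap device.

    Returns a list of argv tuples instead of executing them, so the
    ordering (MTU set before the link is brought up) is easy to test.
    """
    cmds = [('ip', 'tuntap', 'add', dev, 'mode', 'tap')]
    if mac_address:
        cmds.append(('ip', 'link', 'set', dev, 'address', mac_address))
    if mtu:
        # The optional piece under discussion: set the MTU at creation time.
        cmds.append(('ip', 'link', 'set', dev, 'mtu', str(mtu)))
    cmds.append(('ip', 'link', 'set', dev, 'up'))
    return cmds
```

e.g. create_tap_cmds('tap0', mtu=1450) yields the create, MTU and link-up
commands in that order, which is handy for overlay setups with a reduced MTU.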



Re: [openstack-dev] [Cinder] Listing of backends

2014-12-22 Thread Martin, Kurt Frederick (ESSN Storage MSDU)
You can set/unset key/value pairs on your volume type with the cinder type-key 
command. Or you can also set them in the Horizon Admin console under the 
Admin-Volumes-Volume Types tab, then select “View Extra Specs” Action.

$cinder help type-key
usage: cinder type-key vtype action key=value [key=value ...]

Sets or unsets extra_spec for a volume type.

Positional arguments:
  vtype  Name or ID of volume type.
  action The action. Valid values are 'set' or 'unset.'
  key=value  The extra specs key and value pair to set or unset. For unset,
   specify only the key.

e.g.
cinder type-key GoldVolumeType set volume_backend_name=my_iscsi_backend

~Kurt

From: Pradip Mukhopadhyay [mailto:pradip.inte...@gmail.com]
Sent: Sunday, December 07, 2014 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] Listing of backends

Thanks!
One more question.
Is there any equivalent API to add keys to the volume-type? I understand we 
have APIs for creating a volume-type, but how about adding a key-value pair (say I 
want to add a key to the volume-type such as backend-name=my_iscsi_backend)?

Thanks,
Pradip

On Sun, Dec 7, 2014 at 4:25 PM, Duncan Thomas 
duncan.tho...@gmail.com wrote:
See https://review.openstack.org/#/c/119938/ - now merged. I don't believe the 
python-cinderclient side work has been done yet, nor anything in Horizon, but 
the API itself is now there.

On 7 December 2014 at 09:53, Pradip Mukhopadhyay 
pradip.inte...@gmail.com wrote:
Hi,

Is there a way to find out/list down the backends discovered for Cinder?

There is, I guess, no API to get the list of backends.


Thanks,
Pradip




--
Duncan Thomas




Re: [openstack-dev] [OpenStack-dev][nova-net]Floating ip assigned as /32 from the start of the range

2014-12-22 Thread Vishvananda Ishaya
Floating ips are always added to the host as a /32. You will need one ip on the
compute host from the floating range with the /16 prefix (which it will use for
natting instances without floating ips as well).

In other words you should manually assign an ip from 10.100.130.X/16 to each
compute node and set that value as routing_source_ip=10.100.130.X (or my_ip) in
nova.conf.
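
As an illustration (the concrete address 10.100.130.5 is invented here;
substitute the real per-host IP from the floating range), the per-compute-node
nova.conf fragment would look something like:

```ini
[DEFAULT]
# IP assigned to this compute host from the floating range
my_ip = 10.100.130.5
# Source address used when NATing instances without floating ips
routing_source_ip = 10.100.130.5
```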

Vish
On Dec 19, 2014, at 7:00 AM, Eduard Matei eduard.ma...@cloudfounders.com 
wrote:

 Hi,
 I'm trying to create a vm and assign it an ip in range 10.100.130.0/16.
 On the host, the ip is assigned to br100 as  inet 10.100.0.3/32 scope global 
 br100
 instead of 10.100.130.X/16, so it's not reachable from the outside.
 
 The localrc.conf :
 FLOATING_RANGE=10.100.130.0/16
 
 Any idea what to change?
 
 Thanks,
 Eduard
 
 
 -- 
 Eduard Biceri Matei, Senior Software Developer
 www.cloudfounders.com | eduard.ma...@cloudfounders.com
  
 
  
 CloudFounders, The Private Cloud Software Company
  
 Disclaimer:
 This email and any files transmitted with it are confidential and intended 
 solely for the use of the individual or entity to whom they are addressed.
 If you are not the named addressee or an employee or agent responsible for 
 delivering this message to the named addressee, you are hereby notified that 
 you are not authorized to read, print, retain, copy or disseminate this 
 message or any part of it. If you have received this email in error we 
 request you to notify us by reply e-mail and to delete all electronic files 
 of the message. If you are not the intended recipient you are notified that 
 disclosing, copying, distributing or taking any action in reliance on the 
 contents of this information is strictly prohibited. 
 E-mail transmission cannot be guaranteed to be secure or error free as 
 information could be intercepted, corrupted, lost, destroyed, arrive late or 
 incomplete, or contain viruses. The sender therefore does not accept 
 liability for any errors or omissions in the content of this message, and 
 shall have no liability for any loss or damage suffered by the user, which 
 arise as a result of e-mail transmission.



Re: [openstack-dev] [OpenStack-dev][nova-net]Floating ip assigned as /32 from the start of the range

2014-12-22 Thread Eduard Matei
Thanks,
I managed to get it working by deleting the public pool (which was the
whole 10.100.X.X subnet) and creating a new pool 10.100.129.X.
This gives me control over which ips are assignable to the vms.

Eduard.

On Mon, Dec 22, 2014 at 7:30 PM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 Floating ips are always added to the host as a /32. You will need one ip
 on the
 compute host from the floating range with the /16 prefix (which it will
 use for
 natting instances without floating ips as well).

 In other words you should manually assign an ip from 10.100.130.X/16 to
 each
 compute node and set that value as routing_source_ip=10.100.130.X (or
 my_ip) in
 nova.conf.

 Vish
 On Dec 19, 2014, at 7:00 AM, Eduard Matei eduard.ma...@cloudfounders.com
 wrote:

 Hi,
 I'm trying to create a vm and assign it an ip in range 10.100.130.0/16.
 On the host, the ip is assigned to br100 as  inet 10.100.0.3/32 scope
 global br100
 instead of 10.100.130.X/16, so it's not reachable from the outside.

 The localrc.conf :
 FLOATING_RANGE=10.100.130.0/16

 Any idea what to change?

 Thanks,
 Eduard


 --

 *Eduard Biceri Matei, Senior Software Developer*
 www.cloudfounders.com
  | eduard.ma...@cloudfounders.com



 *CloudFounders, The Private Cloud Software Company*








-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*



Re: [openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread John Griffith
On Mon, Dec 22, 2014 at 10:03 AM, Ben Nemec openst...@nemebean.com wrote:
 On 12/22/2014 09:42 AM, John Griffith wrote:
 Lately (on the Cinder team at least) there's been a lot of
 disagreement in reviews regarding the proper way to do LOG messages
 correctly.  Use of '%' vs ',' in the formatting of variables etc.

 We do have the oslo i18n guidelines page here [1], which helps a lot
 but there's some disagreement on a specific case here.  Do we have a
 set answer on:

 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})

 vs

 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})

 This is the preferred way.

 Note that this is just a multi-variable variation on
 http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
 and the reasoning discussed there applies.

 I'd be curious why some people prefer the % version because to my
 knowledge that's not recommended even for untranslated log messages.

Not sure it's that anybody has a preference as opposed to an
interpretation; note the recommendation for multi-vars in raise:

# RIGHT
raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})




 It's always fun when one person provides a -1 for the first usage; the
 submitter changes it and another reviewer gives a -1 and says, no it
 should be the other way.

  I'm hoping maybe somebody on the oslo team can provide an
 authoritative answer and we can then update the example page
 referenced in [1] to clarify this particular case.

 Thanks,
 John

 [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html







Re: [openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread Doug Hellmann

On Dec 22, 2014, at 12:03 PM, Ben Nemec openst...@nemebean.com wrote:

 On 12/22/2014 09:42 AM, John Griffith wrote:
 Lately (on the Cinder team at least) there's been a lot of
 disagreement in reviews regarding the proper way to do LOG messages
 correctly.  Use of '%' vs ',' in the formatting of variables etc.
 
 We do have the oslo i18n guidelines page here [1], which helps a lot
 but there's some disagreement on a specific case here.  Do we have a
 set answer on:
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})
 
 vs
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})
 
 This is the preferred way.

+1

 
 Note that this is just a multi-variable variation on
 http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
 and the reasoning discussed there applies.
 
 I'd be curious why some people prefer the % version because to my
 knowledge that's not recommended even for untranslated log messages.
 
 
 
 It's always fun when one person provides a -1 for the first usage; the
 submitter changes it and another reviewer gives a -1 and says, no it
 should be the other way.
 
  I'm hoping maybe somebody on the oslo team can provide an
 authoritative answer and we can then update the example page
 referenced in [1] to clarify this particular case.
 
 Thanks,
 John
 
 [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html
 
 
 
 




Re: [openstack-dev] [OpenStack-Dev] Logging formats and i18n

2014-12-22 Thread Doug Hellmann

On Dec 22, 2014, at 1:05 PM, John Griffith john.griffi...@gmail.com wrote:

 On Mon, Dec 22, 2014 at 10:03 AM, Ben Nemec openst...@nemebean.com wrote:
 On 12/22/2014 09:42 AM, John Griffith wrote:
 Lately (on the Cinder team at least) there's been a lot of
 disagreement in reviews regarding the proper way to do LOG messages
 correctly.  Use of '%' vs ',' in the formatting of variables etc.
 
 We do have the oslo i18n guidelines page here [1], which helps a lot
 but there's some disagreement on a specific case here.  Do we have a
 set answer on:
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': v2})
 
 vs
 
 LOG.info(_LI('some message: v1=%(v1)s v2=%(v2)s'), {'v1': v1, 'v2': v2})
 
 This is the preferred way.
 
 Note that this is just a multi-variable variation on
 http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages
 and the reasoning discussed there applies.
 
 I'd be curious why some people prefer the % version because to my
 knowledge that's not recommended even for untranslated log messages.
 
 Not sure if it's that anybody has a preference as opposed to an
 interpretation, notice the recommendation for multi-vars in raise:
 
 # RIGHT
 raise ValueError(_('some message: v1=%(v1)s v2=%(v2)s') % {'v1': v1, 'v2': 
 v2})

It’s really not related to translation as much as the logging API itself.

With the exception, you want to initialize the ValueError instance with a 
proper message as soon as you throw it because you don’t know what the calling 
code might do with it. Therefore you use string interpolation inline.

When you call into  the logging subsystem, your call might be ignored based on 
the level of the message and the logging configuration. By letting the logging 
code do the string interpolation, you potentially skip the work of serializing 
variables to strings for messages that will be discarded, saving time and 
memory.

These “rules” apply whether your messages are being translated or not, so even 
for debug log messages you should write:

  LOG.debug('some message: v1=%(v1)s v2=%(v2)s', {'v1': v1, 'v2': v2})

 
 
 
 
 It's always fun when one person provides a -1 for the first usage; the
 submitter changes it and another reviewer gives a -1 and says, no it
 should be the other way.
 
 I'm hoping maybe somebody on the olso team can provide an
 authoritative answer and we can then update the example page
 referenced in [1] to clarify this particular case.
 
 Thanks,
 John
 
 [1]: http://docs.openstack.org/developer/oslo.i18n/guidelines.html
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2014-12-22 Thread Joe Gordon
On Fri, Dec 19, 2014 at 6:53 AM, Robert Li (baoli) ba...@cisco.com wrote:

  Hi Joe,

  See this thread on the SR-IOV CI from Irena and Sandhya:


 http://lists.openstack.org/pipermail/openstack-dev/2014-November/050658.html


 http://lists.openstack.org/pipermail/openstack-dev/2014-November/050755.html

  I believe that Intel is building a CI system to test SR-IOV as well.


Thanks for the clarification.



  Thanks,
 Robert


  On 12/18/14, 9:13 PM, Joe Gordon joe.gord...@gmail.com wrote:



 On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli) ba...@cisco.com
 wrote:

  Hi,

  During the Kilo summit, the folks in the pci passthrough and SR-IOV
 groups discussed what we’d like to achieve in this cycle, and the result
 was documented in this Etherpad:
 https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

  To get the work going, we’ve submitted a few design specs:

  Nova: Live migration with macvtap SR-IOV
 https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

  Nova: sriov interface attach/detach
 https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

   Nova: Api specify vnic_type
 https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

  Neutron-Network settings support for vnic-type

 https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type

  Nova: SRIOV scheduling with stateless offloads

 https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

  Now that the specs deadline is approaching, I’d like to bring them up
 here for exception consideration. A lot of work has been put into
 them, and we’d like to see them get through for Kilo.


  We haven't started the spec exception process yet.



  Regarding CI for PCI passthrough and SR-IOV, see the attached thread.


  Can you share this via a link to something on
 http://lists.openstack.org/pipermail/openstack-dev/



  thanks,
 Robert




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Application level HA via Heat

2014-12-22 Thread Steven Hardy
Hi all,

So, lately I've been having various discussions around $subject, and I know
it's something several folks in our community are interested in, so I
wanted to get some ideas I've been pondering out there for discussion.

I'll start with a proposal of how we might replace HARestarter with
AutoScaling group, then give some initial ideas of how we might evolve that
into something capable of a sort-of active/active failover.

1. HARestarter replacement.

My position on HARestarter has long been that equivalent functionality
should be available via AutoScalingGroups of size 1.  Turns out that
shouldn't be too hard to do:

 resources:
  server_group:
type: OS::Heat::AutoScalingGroup
properties:
  min_size: 1
  max_size: 1
  resource:
type: ha_server.yaml

  server_replacement_policy:
type: OS::Heat::ScalingPolicy
properties:
  # FIXME: this adjustment_type doesn't exist yet
  adjustment_type: replace_oldest
  auto_scaling_group_id: {get_resource: server_group}
  scaling_adjustment: 1

So, currently our ScalingPolicy resource can only support three adjustment
types, all of which change the group capacity.  AutoScalingGroup already
supports batched replacements for rolling updates, so if we modify the
interface to allow a signal to trigger replacement of a group member, then
the snippet above should be logically equivalent to HARestarter AFAICT.

The steps to do this should be:

 - Standardize the ScalingPolicy-AutoScaling group interface, so
asynchronous adjustments (e.g. signals) between the two resources don't use
the adjust method.

 - Add an option to replace a member to the signal interface of
AutoScalingGroup

 - Add the new replace adjustment type to ScalingPolicy

I posted a patch which implements the first step, and the second will be
required for TripleO, e.g we should be doing it soon.

https://review.openstack.org/#/c/143496/
https://review.openstack.org/#/c/140781/

2. A possible next step towards active/active HA failover

The next part is the ability to notify before replacement that a scaling
action is about to happen (just like we do for LoadBalancer resources
already) and orchestrate some or all of the following:

- Attempt to quiesce the currently active node (may be impossible if it's
  in a bad state)

- Detach resources (e.g volumes primarily?) from the current active node,
  and attach them to the new active node

- Run some config action to activate the new node (e.g run some config
  script to fsck and mount a volume, then start some application).

The first step is possible by putting a SoftwareConfig/SoftwareDeployment
resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
node is too bricked to respond and specifying DELETE action so it only runs
when we replace the resource).
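As a rough sketch of that first step (hypothetical; the resource names, image/flavor values and quiesce script below are placeholder assumptions, not something agreed in this thread), ha_server.yaml could carry a DELETE-action deployment like this:

```yaml
heat_template_version: 2014-10-16

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: fedora-20          # placeholder
      flavor: m1.small          # placeholder
      user_data_format: SOFTWARE_CONFIG

  quiesce_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/sh
        # Best-effort quiesce; may never run if the node is bricked.
        systemctl stop myapp || true
        sync

  quiesce_on_delete:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: quiesce_config}
      server: {get_resource: server}
      actions: [DELETE]            # only run when the member is replaced
      signal_transport: NO_SIGNAL  # don't block on an unresponsive node
```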

The third step is possible either via a script inside the box which polls
for the volume attachment, or possibly via an update-only software config.

The second step is the missing piece AFAICS.

I've been wondering if we can do something inside a new heat resource,
which knows what the current active member of an ASG is, and gets
triggered on a replace signal to orchestrate e.g deleting and creating a
VolumeAttachment resource to move a volume between servers.

Something like:

 resources:
  server_group:
type: OS::Heat::AutoScalingGroup
properties:
  min_size: 2
  max_size: 2
  resource:
type: ha_server.yaml

  server_failover_policy:
type: OS::Heat::FailoverPolicy
properties:
  auto_scaling_group_id: {get_resource: server_group}
  resource:
type: OS::Cinder::VolumeAttachment
properties:
# FIXME: refs is a ResourceGroup interface not currently
# available in AutoScalingGroup
instance_uuid: {get_attr: [server_group, refs, 1]}

  server_replacement_policy:
type: OS::Heat::ScalingPolicy
properties:
  # FIXME: this adjustment_type doesn't exist yet
  adjustment_type: replace_oldest
  auto_scaling_policy_id: {get_resource: server_failover_policy}
  scaling_adjustment: 1

By chaining policies like this we could trigger an update on the attachment
resource (or a nested template via a provider resource containing many
attachments or other resources) every time the ScalingPolicy is triggered.

For the sake of clarity, I've not included the existing stuff like
ceilometer alarm resources etc above, but hopefully it gets the idea
across so we can discuss further. What are people's thoughts? I'm quite
happy to iterate on the idea if folks have suggestions for a better
interface etc :)

One problem I see with the above approach is you'd have to trigger a
failover after stack create to get the initial volume attached, still
pondering ideas on how best to solve that..

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration

2014-12-22 Thread Joe Gordon
On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery mest...@mestery.com wrote:

 On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno ante...@anteaya.info wrote:

 Rather than waste your time making excuses let me state where we are and
 where I would like to get to, also sharing my thoughts about how you can
 get involved if you want to see this happen as badly as I have been told
 you do.

 Where we are:
 * a great deal of foundation work has been accomplished to achieve
 parity with nova-network and neutron to the extent that those involved
 are ready for migration plans to be formulated and be put in place
 * a summit session happened with notes and intentions[0]
 * people took responsibility and promptly got swamped with other
 responsibilities
 * spec deadlines arose and in neutron's case have passed
 * currently a neutron spec [1] is a work in progress (and it needs
 significant work still) and a nova spec is required and doesn't have a
 first draft or a champion

 Where I would like to get to:
 * I need people in addition to Oleg Bondarev to be available to help
 come up with ideas and words to describe them to create the specs in a
 very short amount of time (Oleg is doing great work and is a fabulous
 person, yay Oleg, he just can't do this alone)
 * specifically I need a contact on the nova side of this complex
 problem, similar to Oleg on the neutron side
 * we need to have a way for people involved with this effort to find
 each other, talk to each other and track progress
 * we need to have representation at both nova and neutron weekly
 meetings to communicate status and needs

 We are at K-2 and our current status is insufficient to expect this work
 will be accomplished by the end of K-3. I will be championing this work,
 in whatever state, so at least it doesn't fall off the map. If you would
 like to help this effort please get in contact. I will be thinking of
 ways to further this work and will be communicating to those who
 identify as affected by these decisions in the most effective methods of
 which I am capable.

 Thank you to all who have gotten us as far as we have gotten in this
 effort; it has been a long haul and you have all done great work. Let's
 keep going and finish this.

 Thank you,
 Anita.

 Thank you for volunteering to drive this effort Anita, I am very happy
 about this. I support you 100%.

 I'd like to point out that we really need a point of contact on the nova
 side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to
 continue moving this forward.


At the summit the nova team marked the nova-network to neutron migration as
a priority [0], so we are collectively interested in seeing this happen and
want to help in any way possible. With regard to a nova point of contact,
anyone in nova-specs-core should work, that way we can cover more time
zones.

From what I can gather the first step is to finish fleshing out the first
spec [1], and it sounds like it would be good to get a few nova-cores
reviewing it as well.




[0]
http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
[1] https://review.openstack.org/#/c/142456/



 Thanks,
 Kyle


 [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron
 [1] https://review.openstack.org/#/c/142456/

 ___
 OpenStack-operators mailing list
 openstack-operat...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration

2014-12-22 Thread Anita Kuno
On 12/22/2014 01:32 PM, Joe Gordon wrote:
 On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery mest...@mestery.com wrote:
 
 On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno ante...@anteaya.info wrote:

 Rather than waste your time making excuses let me state where we are and
 where I would like to get to, also sharing my thoughts about how you can
 get involved if you want to see this happen as badly as I have been told
 you do.

 Where we are:
 * a great deal of foundation work has been accomplished to achieve
 parity with nova-network and neutron to the extent that those involved
 are ready for migration plans to be formulated and be put in place
 * a summit session happened with notes and intentions[0]
 * people took responsibility and promptly got swamped with other
 responsibilities
 * spec deadlines arose and in neutron's case have passed
 * currently a neutron spec [1] is a work in progress (and it needs
 significant work still) and a nova spec is required and doesn't have a
 first draft or a champion

 Where I would like to get to:
 * I need people in addition to Oleg Bondarev to be available to help
 come up with ideas and words to describe them to create the specs in a
 very short amount of time (Oleg is doing great work and is a fabulous
 person, yay Oleg, he just can't do this alone)
 * specifically I need a contact on the nova side of this complex
 problem, similar to Oleg on the neutron side
 * we need to have a way for people involved with this effort to find
 each other, talk to each other and track progress
 * we need to have representation at both nova and neutron weekly
 meetings to communicate status and needs

 We are at K-2 and our current status is insufficient to expect this work
 will be accomplished by the end of K-3. I will be championing this work,
 in whatever state, so at least it doesn't fall off the map. If you would
 like to help this effort please get in contact. I will be thinking of
 ways to further this work and will be communicating to those who
 identify as affected by these decisions in the most effective methods of
 which I am capable.

 Thank you to all who have gotten us as far as we have gotten in this
 effort; it has been a long haul and you have all done great work. Let's
 keep going and finish this.

 Thank you,
 Anita.

 Thank you for volunteering to drive this effort Anita, I am very happy
 about this. I support you 100%.

 I'd like to point out that we really need a point of contact on the nova
 side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to
 continue moving this forward.

 
 At the summit the nova team marked the nova-network to neutron migration as
 a priority [0], so we are collectively interested in seeing this happen and
 want to help in any way possible. With regard to a nova point of contact,
 anyone in nova-specs-core should work, that way we can cover more time
 zones.
 
 From what I can gather the first step is to finish fleshing out the first
 spec [1], and it sounds like it would be good to get a few nova-cores
 reviewing it as well.
 
 
 
 
 [0]
 http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
 [1] https://review.openstack.org/#/c/142456/
 
 
Wonderful, thank you for the support Joe.

It appears that we need to have a regular weekly meeting to track
progress in an archived manner.

I know there was one meeting in November, but I don't know what it was
called, so I have so far been unable to find the logs for it.

So if those affected by this issue can identify what time (in UTC,
please; don't tell me what time zone you are in, it is too hard to guess
what UTC time you are available) and day of the week you are available
for a meeting, I'll create one and we can start talking to each other.

I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and
1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100 UTC.

Thanks,
Anita.


 Thanks,
 Kyle


 [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron
 [1] https://review.openstack.org/#/c/142456/



 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] No weekly meeting until Jan 6th 2015

2014-12-22 Thread Collins, Sean
See everyone next year!

Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Canceling the next two meetings

2014-12-22 Thread Kyle Mestery
Hi folks, given I expect low attendance today and next week, let's just
cancel the next two Neutron meetings. We'll reconvene in the new year on
Monday, January 5, 2015 at 2100 UTC.

Happy holidays to all!

Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Canceling the next two meetings

2014-12-22 Thread Miguel Ángel Ajo
Happy Holidays! Thank you, Kyle.

Miguel Ángel Ajo


On Monday, 22 December 2014 at 21:12, Kyle Mestery wrote:

 Hi folks, given I expect low attendance today and next week, let's just cancel 
 the next two Neutron meetings. We'll reconvene in the new year on Monday, 
 January 5, 2015 at 2100 UTC.
  
 Happy holidays to all!
  
 Kyle
  
 [1] https://wiki.openstack.org/wiki/Network/Meetings
  
  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-22 Thread Zane Bitter

On 22/12/14 13:21, Steven Hardy wrote:

Hi all,

So, lately I've been having various discussions around $subject, and I know
it's something several folks in our community are interested in, so I
wanted to get some ideas I've been pondering out there for discussion.

I'll start with a proposal of how we might replace HARestarter with
AutoScaling group, then give some initial ideas of how we might evolve that
into something capable of a sort-of active/active failover.

1. HARestarter replacement.

My position on HARestarter has long been that equivalent functionality
should be available via AutoScalingGroups of size 1.  Turns out that
shouldn't be too hard to do:

  resources:
   server_group:
 type: OS::Heat::AutoScalingGroup
 properties:
   min_size: 1
   max_size: 1
   resource:
 type: ha_server.yaml

   server_replacement_policy:
 type: OS::Heat::ScalingPolicy
 properties:
   # FIXME: this adjustment_type doesn't exist yet
   adjustment_type: replace_oldest
   auto_scaling_group_id: {get_resource: server_group}
   scaling_adjustment: 1


One potential issue with this is that it is a little bit _too_ 
equivalent to HARestarter - it will replace your whole scaled unit 
(ha_server.yaml in this case) rather than just the failed resource inside.



So, currently our ScalingPolicy resource can only support three adjustment
types, all of which change the group capacity.  AutoScalingGroup already
supports batched replacements for rolling updates, so if we modify the
interface to allow a signal to trigger replacement of a group member, then
the snippet above should be logically equivalent to HARestarter AFAICT.

The steps to do this should be:

  - Standardize the ScalingPolicy-AutoScaling group interface, so
asynchronous adjustments (e.g. signals) between the two resources don't use
the adjust method.

  - Add an option to replace a member to the signal interface of
AutoScalingGroup

  - Add the new replace adjustment type to ScalingPolicy


I think I am broadly in favour of this.


I posted a patch which implements the first step, and the second will be
required for TripleO, e.g we should be doing it soon.

https://review.openstack.org/#/c/143496/
https://review.openstack.org/#/c/140781/

2. A possible next step towards active/active HA failover

The next part is the ability to notify before replacement that a scaling
action is about to happen (just like we do for LoadBalancer resources
already) and orchestrate some or all of the following:

- Attempt to quiesce the currently active node (may be impossible if it's
   in a bad state)

- Detach resources (e.g volumes primarily?) from the current active node,
   and attach them to the new active node

- Run some config action to activate the new node (e.g run some config
   script to fsck and mount a volume, then start some application).

The first step is possible by putting a SoftwareConfig/SoftwareDeployment
resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
node is too bricked to respond and specifying DELETE action so it only runs
when we replace the resource).

The third step is possible either via a script inside the box which polls
for the volume attachment, or possibly via an update-only software config.

The second step is the missing piece AFAICS.

I've been wondering if we can do something inside a new heat resource,
which knows what the current active member of an ASG is, and gets
triggered on a replace signal to orchestrate e.g deleting and creating a
VolumeAttachment resource to move a volume between servers.

Something like:

  resources:
   server_group:
 type: OS::Heat::AutoScalingGroup
 properties:
   min_size: 2
   max_size: 2
   resource:
 type: ha_server.yaml

   server_failover_policy:
 type: OS::Heat::FailoverPolicy
 properties:
   auto_scaling_group_id: {get_resource: server_group}
   resource:
 type: OS::Cinder::VolumeAttachment
 properties:
 # FIXME: refs is a ResourceGroup interface not currently
 # available in AutoScalingGroup
 instance_uuid: {get_attr: [server_group, refs, 1]}

   server_replacement_policy:
 type: OS::Heat::ScalingPolicy
 properties:
   # FIXME: this adjustment_type doesn't exist yet
   adjustment_type: replace_oldest
   auto_scaling_policy_id: {get_resource: server_failover_policy}
   scaling_adjustment: 1


This actually fails because a VolumeAttachment needs to be updated in 
place; if you try to switch servers but keep the same Volume when 
replacing the attachment you'll get an error.


TBH {get_attr: [server_group, refs, 1]} is doing most of the heavy 
lifting here, so in theory you could just have an 
OS::Cinder::VolumeAttachment instead of the FailoverPolicy and then all 
you need is a way of triggering a stack update with the same template  
params. I know Ton added a PATCH method to update in Juno so that you 
don't 

[openstack-dev] [Keystone] Keystone Middleware 1.3.1 release

2014-12-22 Thread Morgan Fainberg
The Keystone development community would like to announce the 1.3.1 release of 
the Keystone Middleware package.

This release can be installed from the following locations:
* http://tarballs.openstack.org/keystonemiddleware
* https://pypi.python.org/pypi/keystonemiddleware

1.3.1
---
* auth_token middleware no longer contacts keystone when a request with no 
token is received. 

Detailed changes in this release beyond what is listed above:
https://launchpad.net/keystonemiddleware/+milestone/1.3.1
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] No meetings on Christmas or New Year's Days

2014-12-22 Thread Carl Baldwin
The L3 sub team meeting [1] will not be held until the 8th of January,
2015.  Enjoy your time off.  I will try to move some of the
refactoring patches along as I can but will be down to minimal hours.

Carl

[1] https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Sub-team meetings on Dec 20th and 27th?

2014-12-22 Thread Paul Michali (pcm)
We will cancel the next two VPNaaS sub-team meetings. The next meeting will be 
Tuesday, January 6th at 1500 UTC on meeting-4 (note the channel change).


Enjoy the holiday time!

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pc_m (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Dec 19, 2014, at 2:01 PM, Paul Michali (pcm) p...@cisco.com wrote:

 Does anyone have agenda items to discuss for the next two meetings during the 
 holidays?
 
 If so, please let me know (and add them to the Wiki page), and we’ll hold the 
 meeting. Otherwise, we can continue on Jan 6th, and any pop-up items can be 
 addressed on the mailing list or Neutron IRC.
 
 Please let me know by Monday, if you’d like us to meet.
 
 
 Regards,
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pc_m (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] ratio: created to attached

2014-12-22 Thread John Griffith
On Sat, Dec 20, 2014 at 4:56 PM, Tom Barron t...@dyncloud.net wrote:

 Does anyone have real world experience, even data, to speak to the
 question: in an OpenStack cloud, what is the likely ratio of (created)
 cinder volumes to attached cinder volumes?

 Thanks,

 Tom Barron


Honestly I think the assumption is and should be 1:1, perhaps not 100%
duty-cycle, but certainly periods of time when there is a 100% attach
rate.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hierarchical Multitenancy

2014-12-22 Thread Raildo Mascena
Hello folks, my team and I developed the Hierarchical Multitenancy concept
for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we
implemented? What are the next steps for Kilo?
To answer these questions, I created a blog post:
http://raildo.me/hierarchical-multitenancy-in-openstack/

Any question, I'm available.

-- 
Raildo Mascena
Software Engineer.
Bachelor of Computer Science.
Distributed Systems Laboratory
Federal University of Campina Grande
Campina Grande, PB - Brazil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-22 Thread Angus Salkeld
On Tue, Dec 23, 2014 at 6:42 AM, Zane Bitter zbit...@redhat.com wrote:

 On 22/12/14 13:21, Steven Hardy wrote:

 Hi all,

 So, lately I've been having various discussions around $subject, and I
 know
 it's something several folks in our community are interested in, so I
 wanted to get some ideas I've been pondering out there for discussion.

 I'll start with a proposal of how we might replace HARestarter with
 AutoScaling group, then give some initial ideas of how we might evolve
 that
 into something capable of a sort-of active/active failover.

 1. HARestarter replacement.

 My position on HARestarter has long been that equivalent functionality
 should be available via AutoScalingGroups of size 1.  Turns out that
 shouldn't be too hard to do:

   resources:
server_group:
  type: OS::Heat::AutoScalingGroup
  properties:
min_size: 1
max_size: 1
resource:
  type: ha_server.yaml

server_replacement_policy:
  type: OS::Heat::ScalingPolicy
  properties:
# FIXME: this adjustment_type doesn't exist yet
adjustment_type: replace_oldest
auto_scaling_group_id: {get_resource: server_group}
scaling_adjustment: 1


 One potential issue with this is that it is a little bit _too_ equivalent
 to HARestarter - it will replace your whole scaled unit (ha_server.yaml in
 this case) rather than just the failed resource inside.

  So, currently our ScalingPolicy resource can only support three adjustment
 types, all of which change the group capacity.  AutoScalingGroup already
 supports batched replacements for rolling updates, so if we modify the
 interface to allow a signal to trigger replacement of a group member, then
 the snippet above should be logically equivalent to HARestarter AFAICT.

 The steps to do this should be:

   - Standardize the ScalingPolicy-AutoScaling group interface, so
  asynchronous adjustments (e.g. signals) between the two resources don't use
 the adjust method.

   - Add an option to replace a member to the signal interface of
 AutoScalingGroup

   - Add the new replace adjustment type to ScalingPolicy


 I think I am broadly in favour of this.


  I posted a patch which implements the first step, and the second will be
 required for TripleO, e.g we should be doing it soon.

 https://review.openstack.org/#/c/143496/
 https://review.openstack.org/#/c/140781/

 2. A possible next step towards active/active HA failover

 The next part is the ability to notify before replacement that a scaling
 action is about to happen (just like we do for LoadBalancer resources
 already) and orchestrate some or all of the following:

 - Attempt to quiesce the currently active node (may be impossible if it's
in a bad state)

 - Detach resources (e.g volumes primarily?) from the current active node,
and attach them to the new active node

 - Run some config action to activate the new node (e.g run some config
script to fsck and mount a volume, then start some application).

  The first step is possible by putting a SoftwareConfig/SoftwareDeployment
 resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
 node is too bricked to respond and specifying DELETE action so it only
 runs
 when we replace the resource).

 The third step is possible either via a script inside the box which polls
 for the volume attachment, or possibly via an update-only software config.

 The second step is the missing piece AFAICS.

 I've been wondering if we can do something inside a new heat resource,
 which knows what the current active member of an ASG is, and gets
 triggered on a replace signal to orchestrate e.g deleting and creating a
 VolumeAttachment resource to move a volume between servers.

 Something like:

   resources:
server_group:
  type: OS::Heat::AutoScalingGroup
  properties:
min_size: 2
max_size: 2
resource:
  type: ha_server.yaml

server_failover_policy:
  type: OS::Heat::FailoverPolicy
  properties:
auto_scaling_group_id: {get_resource: server_group}
resource:
  type: OS::Cinder::VolumeAttachment
  properties:
  # FIXME: refs is a ResourceGroup interface not currently
  # available in AutoScalingGroup
  instance_uuid: {get_attr: [server_group, refs, 1]}

server_replacement_policy:
  type: OS::Heat::ScalingPolicy
  properties:
# FIXME: this adjustment_type doesn't exist yet
adjustment_type: replace_oldest
auto_scaling_policy_id: {get_resource: server_failover_policy}
scaling_adjustment: 1


 This actually fails because a VolumeAttachment needs to be updated in
 place; if you try to switch servers but keep the same Volume when replacing
 the attachment you'll get an error.

 TBH {get_attr: [server_group, refs, 1]} is doing most of the heavy lifting
 here, so in theory you could just have an OS::Cinder::VolumeAttachment
 instead of the 

Re: [openstack-dev] Hierarchical Multitenancy

2014-12-22 Thread Morgan Fainberg
Hi Raildo,

Thanks for putting this post together. I really appreciate all the work you 
guys have done (and continue to do) to get the Hierarchical Mulittenancy code 
into Keystone. It’s great to have the base implementation merged into Keystone 
for the K1 milestone. I look forward to seeing the rest of the development land 
during the rest of this cycle and what the other OpenStack projects build 
around the HMT functionality.

Cheers,
Morgan



 On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:
 
 Hello folks, My team and I developed the Hierarchical Multitenancy concept 
 for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we 
 implemented? What are the next steps for Kilo? 
 To answer these questions, I created a blog post: 
 http://raildo.me/hierarchical-multitenancy-in-openstack/
 
 Any question, I'm available.
 
 -- 
 Raildo Mascena
 Software Engineer.
 Bachelor of Computer Science. 
 Distributed Systems Laboratory
 Federal University of Campina Grande
 Campina Grande, PB - Brazil
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-12-22 Thread Surojit Pathak

On 11/14/14 2:02 AM, Daniel P. Berrange wrote:

On Thu, Nov 13, 2014 at 01:55:06PM -0800, Surojit Pathak wrote:

Hi all,

[Issue observed]
If we issue 'nova reboot server', we get to have the console output of the
latest bootup of the server only. The console output of the previous boot
for the same server vanishes due to truncation[1]. If we do reboot from
within the VM instance [ #sudo reboot ], or reboot the instance with 'virsh
reboot instance' the behavior is not the same, where the console.log keeps
increasing, with the new output being appended.
This loss of history makes some debugging scenario difficult due to lack of
information being available.

Please point me to any solution/blueprint for this issue, if already
planned. Otherwise, please comment on my analysis and proposals as solution,
below -

[Analysis]
Nova's libvirt driver on compute node tries to do a graceful restart of the
server instance, by attempting a soft_reboot first. If soft_reboot fails, it
attempts a hard_reboot. As part of soft_reboot, it brings down the instance
by calling shutdown(), and then calls createWithFlags() to bring this up.
Because of this, qemu-kvm process for the instance gets terminated and new
process is launched. In QEMU, the chardev file is opened with O_TRUNC, and
thus we lose the previous content of the console.log file.
On the other-hand, during 'virsh reboot instance', the same qemu-kvm
process continues, and libvirt actually does a qemuDomainSetFakeReboot().
Thus the same file continues capturing the new console output as a
continuation into the same file.

Nova and libvirt have support for issuing a graceful reboot via the QEMU
guest agent. So if you make sure that is installed, and tell Nova to use
it, then Nova won't have to stop & recreate the QEMU process and thus
won't have the problem of overwriting the logs.

Hi Daniel,
Having the GA do a graceful restart is a nice option. But if it were to just 
preserve the same console file, even 'virsh reboot' achieves the 
purpose.
taken the path, as it does not want to have a false positive, where the 
GA does not respond or 'virDomain.reboot' fails later and the domain is 
not really restarted. [ CC-ed vish, author of nova 
http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova//virt 
http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt//libvirt 
http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt/libvirt//driver.py 
http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt/libvirt/driver.py 
]


IMHO, QEMU should preserve the console-log file for a given domain, if 
it exists, by not opening with O_TRUNC option, instead opening with 
O_APPEND. I would like to draw a comparison of a real computer to which 
we might be connected over serial console, and the box gets powered down 
and up with external button press, and we do not lose the console 
history, if connected. And that's what is the experience console-log 
intends to provide. If you think, this is agreeable, please let me know, 
I will send the patch to qemu-devel@.
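
The flag difference is easy to demonstrate outside QEMU. Here is a minimal
Python sketch (using os.open, analogous to the open(2) call QEMU makes for
its chardev log file; not QEMU code itself):

```python
import os
import tempfile

def write_console_log(path, data, truncate):
    # O_TRUNC discards prior history on every open (what a freshly
    # launched qemu-kvm process does today); O_APPEND preserves it.
    flags = os.O_WRONLY | os.O_CREAT | (os.O_TRUNC if truncate else os.O_APPEND)
    fd = os.open(path, flags, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

log = os.path.join(tempfile.mkdtemp(), "console.log")
write_console_log(log, b"boot #1\n", truncate=True)
write_console_log(log, b"boot #2\n", truncate=True)   # previous boot lost
print(open(log).read())                               # only "boot #2\n"
write_console_log(log, b"boot #3\n", truncate=False)  # history kept
print(open(log).read())                               # "boot #2\nboot #3\n"
```

With O_APPEND, each reboot's output is concatenated, matching the serial
console analogy above.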


--
Regards,
SURO

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-12-22 Thread Tony Breeds
On Mon, Dec 22, 2014 at 04:36:02PM -0800, Surojit Pathak wrote:

 Hi Daniel,
 Having GA to do graceful restart is nice option. But if it were to just
 preserve the same console file, even 'virsh reboot' achieves the purpose. As
 I explained in my original analysis, Nova seems to have not taken the path,
 as it does not want to have a false positive, where the GA does not respond
 or 'virDomain.reboot' fails later and the domain is not really restarted. [
 CC-ed vish, author of nova
 http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova//virt 
 http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt//libvirt
  
 http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt/libvirt//driver.py
  
 http://tripsgrips.corp.gq1.yahoo.com:8080/source/xref/nova/nova/virt/libvirt/driver.py
 ]
 
 IMHO, QEMU should preserve the console-log file for a given domain, if it
 exists, by not opening with O_TRUNC option, instead opening with O_APPEND. I
 would like to draw a comparison of a real computer to which we might be
 connected over serial console, and the box gets powered down and up with
 external button press, and we do not lose the console history, if connected.
 And that's what is the experience console-log intends to provide. If you
 think, this is agreeable, please let me know, I will send the patch to
 qemu-devel@.

The issue is more complex than just removing the O_TRUNC from the open() flags.

I have a proposal that will (almost by accident) fix this in qemu by allowing
console log files to be rotated.  I'm also working on a similar feature in
libvirt.

I think the tl;dr is that this /should/ be fixed in Kilo with a 'modern' 
libvirt.

Yours Tony.


pgp7TZH5n8wP4.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Access to worklfow/task results without implicit publish

2014-12-22 Thread Dmitri Zimine
The problem:

Refer to workflow / action output without explicitly re-publishing the output 
values. Why we want it: to reduce repetition, and to make modifications in the 
place values are used, not where they are obtained (and not in multiple 
places). E.g., as an editor of a workflow, when I just realized that I need a 
value of some task down the line, I want to make change right here in the tasks 
that consumes the data (and only those which need this data), without finding 
and modifying the task that supplies the data.

Reasons:

We don't have a concept of workflow or action ‘results': it's the task which 
produces and publishes results. Different tasks call same actions/workflows, 
produce same output variables with diff values. We don't want to publish this 
output with output name as a key, to the global context: they will conflict and 
mess up. Instead, we can namespace them by the task (as specific values are the 
attributes of the tasks, and we want to refer to tasks, not actions/workflows).

Solution:

To refer to the output of a particular task (aka the raw result of the action
execution invoked by this task), use the _task prefix:

 $_task.taskname.path.to.variable
 $_task.my_task.my_task_result.foo.bar


Expanded example
 
my_subflow:
   output:
- foo #  declare output here
- bar 
   tasks:
   my_task:
 action: get_foo
 publish: 
 foo: $foo #  define output in a task
 bar: $bar
 ...
main_flow_with_explicit_publishing:
tasks:
t1: 
   workflow: my_subflow 
publish: 
   # Today, you must explicitly publish to make data 
   # from action available for other tasks
 foo: $foo #  re-publish, else you can't use it
bar: $bar
t2: 
action: echo output=$foo and $bar #  use it from task t1

main_flow_with_implicit_publishing:
tasks:
t1: 
   workflow: my_subflow 
t2: 
action: echo output=$_task.t1.foo and $_task.t1.bar
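
The lookup semantics proposed here could be sketched like this — a
hypothetical resolver (illustrative only, not Mistral's actual evaluator)
that reads `$_task.<name>.<path>` from per-task raw results instead of the
shared published context:

```python
def resolve(expr, task_results, context):
    """Resolve a '$...' expression: '$_task.<task>.<dotted.path>' reads the
    raw result namespaced by task name; anything else reads the ordinary
    published context."""
    if not expr.startswith("$"):
        return expr
    parts = expr[1:].split(".")
    if parts[0] == "_task":
        node, path = task_results[parts[1]], parts[2:]
    else:
        node, path = context, parts
    for key in path:
        node = node[key]
    return node

# t1 ran 'my_subflow'; its raw output stays under the task name, so t2
# can consume foo/bar without t1 explicitly re-publishing them.
task_results = {"t1": {"foo": "FOO", "bar": "BAR"}}
print(resolve("$_task.t1.foo", task_results, {}))           # FOO
print(resolve("$foo", task_results, {"foo": "published"}))  # published
```

Namespacing by task name is what avoids the conflict described above when
several tasks call the same action or workflow.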

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-12-22 Thread Surojit Pathak

On 12/22/14 5:04 PM, Tony Breeds wrote:

On Mon, Dec 22, 2014 at 04:36:02PM -0800, Surojit Pathak wrote:


Hi Daniel,
Having the GA do a graceful restart is a nice option. But if it were to just
preserve the same console file, even 'virsh reboot' achieves the purpose. As
I explained in my original analysis, Nova seems to have not taken the path,
as it does not want to have a false positive, where the GA does not respond
or 'virDomain.reboot' fails later and the domain is not really restarted. [
CC-ed vish, author of nova


IMHO, QEMU should preserve the console-log file for a given domain, if it
exists, by not opening with O_TRUNC option, instead opening with O_APPEND. I
would like to draw a comparison of a real computer to which we might be
connected over serial console, and the box gets powered down and up with
external button press, and we do not lose the console history, if connected.
And that's what is the experience console-log intends to provide. If you
think, this is agreeable, please let me know, I will send the patch to
qemu-devel@.

The issue is more complex than just removing the O_TRUNC from the open() flags.

I have a proposal that will (almost by accident) fix this in qemu by allowing
console log files to be rotated.  I'm also working on a similar feature in
libvirt.

I think the tl;dr is that this /should/ be fixed in Kilo with a 'modern' 
libvirt.

Hi Tony,

Can you please share some details of the effort, in terms of reference?


Yours Tony.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Regards,
SURO

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] - 'nova reboot' causes console-log truncated

2014-12-22 Thread Tony Breeds
On Mon, Dec 22, 2014 at 07:16:27PM -0800, Surojit Pathak wrote:
 Hi Tony,
 
 Can you please share some details of the effort, in terms of reference?

Well the initial discussions started with qemu at:
http://lists.nongnu.org/archive/html/qemu-devel/2014-12/msg00765.html
and then here:
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052356.html

You'll note that the focus of the discussion is rotating the log files, but I'm
very much aware of the issue covered in this thread and it will be covered in
my fixes.  Which is why I said 'almost' by accident ;P

I have a partial implementation for the log rotation in qemu (you can issue a
command from the monitor but I haven't looked at the HUP yet).  I started
looking at doing something in libvirt as well, but I haven't made much progress
there due to conflicting priorities.
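
For context, numbered-suffix log rotation of the kind being discussed amounts
to roughly the following (a generic sketch, not the actual qemu or libvirt
patch):

```python
import os
import tempfile

def rotate(path, keep=3):
    """Numbered-suffix rotation: path -> path.1 -> path.2 ..., dropping the
    oldest, so a log file reopened by a new process never clobbers history."""
    if not os.path.exists(path):
        return
    oldest = "%s.%d" % (path, keep)
    if os.path.exists(oldest):
        os.unlink(oldest)
    for i in range(keep - 1, 0, -1):
        src = "%s.%d" % (path, i)
        if os.path.exists(src):
            os.rename(src, "%s.%d" % (path, i + 1))
    os.rename(path, path + ".1")

d = tempfile.mkdtemp()
log = os.path.join(d, "console.log")
open(log, "w").write("boot #1\n")
rotate(log)                      # console.log -> console.log.1
open(log, "w").write("boot #2\n")
print(open(log + ".1").read())   # "boot #1\n"
```

Rotating before (re)opening preserves earlier boots' output even if the new
writer still truncates its own file.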

Yours Tony.


pgpNnUUPgRHYc.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 12/23

2014-12-22 Thread Dugger, Donald D
I'll be hanging out on the IRC channel in case anyone wants to talk but, given 
the holidays, I don't expect much attendance and we'll keep it short no matter 
what.



Meeting on #openstack-meeting at 1500 UTC (8:00AM MST)





1) Status on cleanup work - 
https://wiki.openstack.org/wiki/Gantt/kilo

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] shelved_offload_time configuration

2014-12-22 Thread Kekane, Abhishek
Hi All,

AFAIK, for the shelve API the parameter shelved_offload_time needs to be
configured on the compute node.
Can we configure this parameter on the controller node as well?

Please suggest.

Thank You,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] simulate examples

2014-12-22 Thread Tran, Steven
Hi,
   Does anyone have an example of how to use 'simulate' according to the 
following command-line usage?

usage: openstack congress policy simulate [-h] [--delta] [--trace]
  policy query sequence
  action_policy

  What are the query and sequence? The example under 
/opt/stack/congress/examples doesn't mention query and sequence.  It 
seems like all 4 parameters are required.
Thanks,
-Steven
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cross distribution talks on Friday

2014-12-22 Thread Thomas Goirand
On 11/11/2014 12:46 AM, Donald Stufft wrote:
 
 On Nov 10, 2014, at 11:43 AM, Adam Young ayo...@redhat.com wrote:

 On 11/01/2014 06:51 PM, Alan Pevec wrote:
 %install
 export OSLO_PACKAGE_VERSION=%{version}
 %{__python} setup.py install -O1 --skip-build --root %{buildroot}

 Then everything should be ok and PBR will become your friend.
 Still not my friend because I don't want a _build_ tool as runtime 
 dependency :)
 e.g. you don't ship make(1) to run C programs, do you?
 For runtime, only pbr.version part is required but unfortunately
 oslo.version was abandoned.

 Cheers,
 Alan

 Perhaps we need a top level Python Version library, not Oslo?  Is there such 
 a thing?  Seems like it should not be something specific to OpenStack
 
 What does pbr.version do?

Basically, the same as pkg_resources. Therefore I don't really
understand the need for it... Am I missing something?
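
For what it's worth, the runtime half of this — "report the installed version
of a distribution" — is answerable from package metadata without pbr; a sketch
using the stdlib importlib.metadata (Python 3.8+), which returns the same
string as pkg_resources.get_distribution(name).version:

```python
from importlib import metadata

def runtime_version(dist_name):
    """Look up the installed version of a distribution from package
    metadata; returns None when the distribution is not installed."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

print(runtime_version("no-such-distribution-xyz"))  # None
```

The build-time half of pbr (deriving the version from git) is a separate
concern and is what the packaging environment variables above override.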

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cross distribution talks on Friday

2014-12-22 Thread Thomas Goirand
On 12/19/2014 11:55 PM, Ihar Hrachyshka wrote:
 Note that OSLO_PACKAGE_VERSION is not public.

Well, it used to be public, it has been added and discussed a few years
ago because of issues I had with packaging.

 Instead, we should use
 PBR_VERSION:
 
 http://docs.openstack.org/developer/pbr/packagers.html#versioning

I don't mind switching, though it's going to be a slow process (because
I'm using OSLO_PACKAGE_VERSION in all packages).

Are we at least *sure* that using OSLO_PACKAGE_VERSION is now deprecated?

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How can I continue to complete a abandoned blueprint?

2014-12-22 Thread li-zheming
 thanks! 
I have submitted a new blueprint(quota-instance-memory) 
the link is:
https://blueprints.launchpad.net/nova/+spec/quota-instance-memory

Merry Christmas!^_^




--

Name :  Li zheming
Company :  Hua Wei
Address  : Shenzhen China
Tel:0086 18665391827



At 2014-12-22 22:32:52,Jay Pipes jaypi...@gmail.com wrote:
On 12/22/2014 04:54 AM, li-zheming wrote:
 hi all: Bp
 flavor-quota-memory(https://blueprints.launchpad.net/nova/+spec/flavor-quota-memory)
 was submitted by my partner in Havana, but it has been abandoned because
 of some reason.

Some reason == the submitter failed to provide any details on how the 
work would be implemented, what the use cases were, and any alternatives 
that might be possible.

   I want to  continue to this blueprint. Based on the
 rules about BP for
 https://blueprints.launchpad.net/openstack/?searchtext=for kilo,
 for this bp, spec is not necessary, so I submit the code directly and
 give commit message to clear out questions in spec.  Is it right? how
 can I do? thanks!

Specs are no longer necessary for smallish features, no. A blueprint is 
still necessary on Launchpad, so you should be able to use the abandoned 
one you link above -- which, AFAICT, has enough implementation details 
about the proposed changes.

Alternately, if you cannot get the original submitter to remove the spec 
link to the old spec review, you can always start a new blueprint and we 
can mark that one as obsolete.

I'd like Dan Berrange (cc'd) to review whichever blueprint on Launchpad 
you end up using. Please let us know what you do.

All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] copy paste for spice

2014-12-22 Thread Akshik DBK
Going by the documentation, the SPICE console supports copy/paste and other 
features. I would like to know how and where we enable them: should we do 
something wrt the image, or some config in OpenStack?
   ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Sub-team meetings on Dec 20th and 27th?

2014-12-22 Thread Mohammad Hanif
Thanks Paul.

Happy holidays everyone!

On Dec 22, 2014, at 1:06 PM, Paul Michali (pcm) 
p...@cisco.com wrote:

Will cancel the next two VPNaaS sub-team meetings.  The next meeting will be 
Tuesday, January 6th at 1500 UTC on meeting-4 ( Note the channel change).


Enjoy the holiday time!

PCM (Paul Michali)

MAIL . p...@cisco.com
IRC ... pc_m (irc.freenode.com)
TW  @pmichali
GPG Key ... 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




On Dec 19, 2014, at 2:01 PM, Paul Michali (pcm) 
p...@cisco.com wrote:

Does anyone have agenda items to discuss for the next two meetings during the 
holidays?

If so, please let me know (and add them to the Wiki page), and we'll hold the 
meeting. Otherwise, we can continue on Jan 6th, and any pop-up items can be 
addressed on the mailing list or Neutron IRC.

Please let me know by Monday, if you'd like us to meet.


Regards,

PCM (Paul Michali)

MAIL . p...@cisco.com
IRC ... pc_m (irc.freenode.com)
TW  @pmichali
GPG Key ... 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-22 Thread Punith S
Hi Asselin,

I'm following your readme https://github.com/rasselin/os-ext-testing
to set up our CloudByte CI on two Ubuntu 12.04 VMs (master and slave).

So far the scripts and setup went fine, as described in the document.

Both master and slave are now connected successfully, but in order to
run the Tempest integration tests against our proposed CloudByte Cinder
driver for Kilo, we need to have devstack installed on the slave (in my
understanding).

However, installing master devstack fails with permission issues on
12.04 when executing ./stack.sh, since master devstack expects Ubuntu
14.04 or 13.10; on the contrary, running install_slave.sh fails on
13.10 due to a 'puppet modules not found' error.

Is there a way to get this to work?

thanks in advance

On Mon, Dec 22, 2014 at 11:10 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Eduard,



 A few items you can try:

 1. Double-check that the job is in Jenkins
    a. If not, then that’s the issue
 2. Check that the processes are running correctly
    a. ps -ef | grep zuul
       i. Should have 2 zuul-server & 1 zuul-merger
    b. ps -ef | grep jenkins
       i. Should have 1 /usr/bin/daemon --name=jenkins & 1 /usr/bin/java
 3. In Jenkins, Manage Jenkins, Gearman Plugin Config, “Test Connection”
 4. Stop Zuul & Jenkins, then start them again:
    a. service jenkins stop
    b. service zuul stop
    c. service zuul-merger stop
    d. service jenkins start
    e. service zuul start
    f. service zuul-merger start



 Otherwise, I suggest you ask in #openstack-infra irc channel.



 Ramy



 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Sunday, December 21, 2014 11:01 PM

 *To:* Asselin, Ramy
 *Cc:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Thanks Ramy,



 Unfortunately I don't see dsvm-tempest-full in the status output.

 Any idea how I can get it registered?



 Thanks,

 Eduard



 On Fri, Dec 19, 2014 at 9:43 PM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Eduard,



 If you run this command, you can see which jobs are registered:

 telnet localhost 4730



 status



 There are 3 numbers per job: queued, running, and workers that can run the job.
 Make sure the job is listed & the last ‘workers’ number is non-zero.
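
 The status reply is one tab-separated line per registered function,
 terminated by a lone '.'. A small helper (illustrative only, with the
 column meanings as described above) makes this check easy to script:

```python
def parse_gearman_status(reply):
    """Parse gearman admin-protocol 'status' output (tab-separated lines,
    terminated by a lone '.') into {function: (queued, running, workers)}."""
    jobs = {}
    for line in reply.splitlines():
        line = line.strip()
        if not line or line == ".":
            continue
        name, queued, running, workers = line.split("\t")
        jobs[name] = (int(queued), int(running), int(workers))
    return jobs

sample = ("build:dsvm-tempest-full\t0\t0\t1\n"
          "build:noop-check-communication\t1\t1\t2\n"
          ".\n")
status = parse_gearman_status(sample)
# The job can only run if the last number (workers) is non-zero:
print(status["build:dsvm-tempest-full"])  # (0, 0, 1)
```

 A job missing from this dict entirely is the NOT_REGISTERED case seen in
 the zuul logs further down the thread.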



 To run the job again without submitting a patch set, leave a “recheck”
  comment on the patch & make sure your zuul layout.yaml is configured to
 trigger off that comment. For example [1].

 Be sure to use the sandbox repository. [2]

 I’m not aware of other ways.



 Ramy



 [1]
 https://github.com/openstack-infra/project-config/blob/master/zuul/layout.yaml#L20

 [2] https://github.com/openstack-dev/sandbox









 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Friday, December 19, 2014 3:36 AM
 *To:* Asselin, Ramy
 *Cc:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Hi all,

 After a little struggle with the config scripts I managed to get a working
 setup that is able to process openstack-dev/sandbox and run
 noop-check-communication.



 Then I tried enabling the dsvm-tempest-full job, but it keeps returning
 NOT_REGISTERED



 2014-12-19 12:07:14,683 INFO zuul.IndependentPipelineManager: Change
 Change 0x7fe5ec029b50 139585,9 depends on changes []

 2014-12-19 12:07:14,683 INFO zuul.Gearman: Launch job
 noop-check-communication for change Change 0x7fe5ec029b50 139585,9 with
 dependent changes []

 2014-12-19 12:07:14,693 INFO zuul.Gearman: Launch job dsvm-tempest-full
 for change Change 0x7fe5ec029b50 139585,9 with dependent changes []

 2014-12-19 12:07:14,694 ERROR zuul.Gearman: Job gear.Job 0x7fe5ec2e2f10
 handle: None name: build:dsvm-tempest-full unique:
 a9199d304d1140a8bf4448dfb1ae42c1 is not registered with Gearman

 2014-12-19 12:07:14,694 INFO zuul.Gearman: Build gear.Job 0x7fe5ec2e2f10
 handle: None name: build:dsvm-tempest-full unique:
 a9199d304d1140a8bf4448dfb1ae42c1 complete, result NOT_REGISTERED

 2014-12-19 12:07:14,765 INFO zuul.Gearman: Build gear.Job 0x7fe5ec2e2d10
 handle: H:127.0.0.1:2 name: build:noop-check-communication unique:
 333c6ea077324a788e3c37a313d872c5 started

 2014-12-19 12:07:14,910 INFO zuul.Gearman: Build gear.Job 0x7fe5ec2e2d10
 handle: H:127.0.0.1:2 name: build:noop-check-communication unique:
 333c6ea077324a788e3c37a313d872c5 complete, result SUCCESS

 2014-12-19 12:07:14,916 INFO zuul.IndependentPipelineManager: Reporting
 change Change 0x7fe5ec029b50 139585,9, actions: [ActionReporter
 zuul.reporter.gerrit.Reporter object at 0x2694a10, {'verified': -1}]



 Nodepoold's log shows no reference to dsvm-tempest-full, and neither do
 jenkins' logs.



 

[OpenStack-Infra] [Infra] Meeting Tuesday December 23rd at 19:00 UTC

2014-12-22 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday December 23rd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

Meeting log and minutes from the last meeting are available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-12-16-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Requesting to delete duplicate account

2014-12-22 Thread Jeremy Stanley
On 2014-12-21 10:59:09 +0530 (+0530), Bharat Kumar wrote:
 I have multiple accounts with the same email ID and name,
[...]
  1. bharat.kobag...@redhat.com (Bharat Kumar Kobagana)
  2. bharat.kobag...@redhat.com (Bharat Kumar Kobagana)
  3. bkoba...@redhat.com (Bharat Kumar Kobagana)
  4. bkoba...@redhat.com (Bharat Kumar Kobagana)
[...]
 Please remove/deactivate all the accounts from Gerrit. I will
 create from the scratch.

I have marked account IDs 13131, 13527, 13882 and 14089 inactive in
Gerrit. Please let us know if you run into further issues.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


[OpenStack-Infra] Gerrit and Storyboard integration

2014-12-22 Thread Zaro
Hello all, I just wanted to let everyone know that I've started a new
Gerrit plugin project, its-storyboard[1].  This plugin will allow us to
integrate Gerrit with Storyboard in a similar way we have integrated Gerrit
with Launchpad.  The Gerrit project implemented a set of generic ITS ('issue
tracking system') integration points in ver 2.9 that allow plugins to
easily extend to integrate Gerrit with any ITS application.  You can find
other its-* plugins in the Gerrit repo, such as its-bugzilla, its-jira,
etc..

Some examples of what the its-storyboard plugin will provide:

- update comments on storyboard stories when an associated Gerrit change is
updated
- update a storyboard's task status on an associated Gerrit change status
transition.
- create a new storyboard task when a new change is created in Gerrit.

I believe our current setup to integrate Gerrit with Launchpad is using
jeepyb[2].  I think jeepyb was a good solution for the pre-Gerrit 2.9 days
but going forward I think the its-storyboard plugin would be a better
solution.  You can find the initial its-storyboard patch[3] upstream.  If
you are interested please take a look.  The initial patch is on Gerrit
master (currently ver 2.11).

The process to getting this into Openstack Gerrit is to upgrade our Gerrit
to at least ver 2.9, back porting the plugin, then adding the installation
to system-config.

Have a very merry x-mas.
-Khai

[1] https://gerrit.googlesource.com/plugins/its-storyboard
[2] http://git.openstack.org/cgit/openstack-infra/jeepyb
[3] https://gerrit-review.googlesource.com/#/c/60590
___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] [Support] Problems installing zuul for jenkins-gerrit gateway (Jenkins does not run jobs)

2014-12-22 Thread Abhishek Shrivastava
Hi Florian,

Can you please tell me the procedure which you have followed for setting up
the CI environment, so that I can be clear about your *zuul* problem.

On Mon, Dec 22, 2014 at 7:55 PM, Amit Das amit@cloudbyte.com wrote:


 Hey Guys,

 Can we send some kind of info/help to this email thread... e.g. becoming
 more relevant in the community.

 I know I have been stressing a lot on these :)
 These will help us to get an exception request approved in the coming
 months for our check-in.

 Regards,
 Amit
 *CloudByte Inc.* http://www.cloudbyte.com/

 -- Forwarded message --
 From: Florian Schmidt florian.schmidt.wel...@t-online.de
 Date: Mon, Dec 22, 2014 at 7:04 PM
 Subject: [OpenStack-Infra] [Support] Problems installing zuul for
 jenkins-gerrit gateway (Jenkins does not run jobs)
 To: openstack-infra@lists.openstack.org


 Hello all together,

 I hope someone of you can help me with my little problem. Currently I use
 the Gerrit Trigger (Jenkins plugin) to trigger Jenkins jobs for specific
 Gerrit events. Now I want to migrate from Gerrit Trigger to zuul (easier,
 layout-based configuration, more trigger options and different pipelines).

 I installed zuul with this Blog post as a basis:

 http://ritchey98.blogspot.jp/2014/02/openstack-third-party-testing-how-to.html

 You can find my zuul configuration here[1] and my initial layout.yaml
 here[2]

 Now I have the problem that Jenkins doesn't seem to run the jobs added
 in gearman (they don't appear in the Jenkins web frontend). Zuul's status page
 lists the change (added via a recheck comment) and the status of the job
 Jenkins-test is queued forever. That's why I think there is a problem
 somewhere between Jenkins and zuul, but I can't find it. I hope that
 someone from here can assist me in finding the problem. I cleared the logs
 and restarted Jenkins and zuul (stopped both and started zuul first and
 then Jenkins) and added a new recheck comment to my test change in gerrit
 to trigger a new build. I uploaded my log files here[3]; maybe they help to
 find out the problem. I have anonymized the domain to be example.com in
 the log files.

 I'm also online in #openstack-infra on freenode (nick: FlorianSW), so feel
 free to contact me there, I would be very grateful to get help to solve
 this
 problem :)

 [1] https://gist.github.com/Florianschmidtwelzow/ed0e3047f0ef0c5e5554
 [2] https://gist.github.com/Florianschmidtwelzow/a101a87d653d5bbc6de6
 [3] https://gist.github.com/Florianschmidtwelzow/12fff7c4530805dece0c
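One quick way to narrow down a stuck Zuul-to-Jenkins handoff is to query the gearman server directly using its plain-text admin protocol (a debugging sketch; it assumes Zuul's built-in gearman server is listening on the default port 4730 on the Zuul host):

```shell
# List registered functions with their queued/running/worker counts.
# A "build:Jenkins-test" line with 0 available workers means the Jenkins
# gearman plugin never registered the job, so Zuul queues it forever.
echo "status" | nc 127.0.0.1 4730

# List connected workers and the functions they advertise; the Jenkins
# gearman-plugin executors should show up here.
echo "workers" | nc 127.0.0.1 4730
```

If `workers` shows no Jenkins connections at all, the problem is on the Jenkins side (gearman plugin disabled or pointed at the wrong host/port) rather than in the Zuul layout.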

 Kind regards,
 Florian


 ___
 OpenStack-Infra mailing list
 OpenStack-Infra@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra




-- 
Thanks  Regards,
Abhishek


Re: [Openstack] Neutron vs. FlatDHCP -- what's the latest?

2014-12-22 Thread Andrew Bogott

On 12/22/14 2:08 PM, Kevin Benton wrote:
Can't you simulate the same topology as the FlatDHCPManager + Floating 
IPs with a shared network attached to a router which is then attached 
to an external network?


Mmmmaybe?  Floating IP support in nova-network is pretty great 
(allocation, assignment, release, etc.) and allows us to shuffle around a
small number of public IPs amongst a much larger number of instances.  
Your suggestion doesn't address that, does it?  Short of my implementing 
a bunch of custom stuff on my own?


-A




On Sun, Dec 21, 2014 at 7:00 PM, Andrew Bogott abog...@wikimedia.org 
mailto:abog...@wikimedia.org wrote:


Greetings!

I'm about to set up a new cloud, so for the second time this year
I'm facing the question of Neutron vs. nova-network.  In our
current setup we're using nova.network.manager.FlatDHCPManager
with floating IPs. This config has been working fine, and would
probably be our first choice for the new cloud as well.

At this point is there any compelling reason for us to switch to
Neutron?  My understanding is that the Neutron flat network model
still doesn't support anything similar to floating IPs, so if we
move to Neutron we'll need to switch to a subnet-per-tenant
model.  Is that still correct?

I'm puzzled by the statement that  upgrades without instance
downtime will be available in the Kilo release[1] -- surely for
such a path to exist, Kilo/Neutron would need to support all the
same use cases as nova-network.  If that's right and Neutron is
right on the verge of supporting flat-with-floating then we may
just cool our jets and wait to build the new cloud until Kilo is
released. I have no particular reason to prefer Neutron, but I'd
like to avoid betting on a horse right before it's put down :)

Thanks!

-Andrew

[1]

http://docs.openstack.org/openstack-ops/content/nova-network-deprecation.html


___
Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
mailto:openstack@lists.openstack.org
Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




--
Kevin Benton


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Neutron vs. FlatDHCP -- what's the latest?

2014-12-22 Thread Kevin Benton
The shared network would have all of the VMs attached to it and would
just be private address space. The shared network would be connected
to a virtual router which would be connected to an external network
where all of your floating IPs come from. The floating IPs from there
would have the allocation, assignment, release features you are
looking for.

However, until the ARP poisoning protection is merged, shared networks
aren't very trustworthy across multiple tenants. So you should be able
to experiment with the Juno Neutron code in the topology I described
above to see if it meets your needs, but I wouldn't suggest a
production deployment until the L2 dataplane security features are
merged (hopefully during this cycle).


-
| Shared Network |   --- All tenant VMs attach here
-
 |
   
   | Router |
   
 |
--
| External Network |--- Floating IPs come from here
--
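For reference, the topology sketched above can be built with the Juno-era neutron CLI roughly like this (a sketch; the network/router names and the 203.0.113.0/24 external range are placeholders, not part of the original mail):

```shell
# Shared tenant network that all VMs attach to
neutron net-create shared-net --shared
neutron subnet-create --name shared-subnet shared-net 10.0.0.0/16

# External network that floating IPs are allocated from
neutron net-create ext-net --router:external True
neutron subnet-create --name ext-subnet --disable-dhcp ext-net 203.0.113.0/24

# Router connecting the two
neutron router-create main-router
neutron router-interface-add main-router shared-subnet
neutron router-gateway-set main-router ext-net

# Allocate a floating IP and associate it with a VM's port
neutron floatingip-create ext-net
neutron floatingip-associate FLOATINGIP_ID VM_PORT_ID
```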

On Mon, Dec 22, 2014 at 1:46 AM, Andrew Bogott abog...@wikimedia.org wrote:
 On 12/22/14 2:08 PM, Kevin Benton wrote:

 Can't you simulate the same topology as the FlatDHCPManager + Floating IPs
 with a shared network attached to a router which is then attached to an
 external network?


 Mmmmaybe?  Floating IP support in nova-network is pretty great (allocation,
 assignment, release, etc.) and allows us to shuffle around a small number of
 public IPs amongst a much larger number of instances.  Your suggestion
 doesn't address that, does it?  Short of my implementing a bunch of custom
 stuff on my own?

 -A




 On Sun, Dec 21, 2014 at 7:00 PM, Andrew Bogott abog...@wikimedia.org
 wrote:

 Greetings!

 I'm about to set up a new cloud, so for the second time this year I'm
 facing the question of Neutron vs. nova-network.  In our current setup we're
 using nova.network.manager.FlatDHCPManager with floating IPs.  This config
 has been working fine, and would probably be our first choice for the new
 cloud as well.

 At this point is there any compelling reason for us to switch to Neutron?
 My understanding is that the Neutron flat network model still doesn't
 support anything similar to floating IPs, so if we move to Neutron we'll
 need to switch to a subnet-per-tenant model.  Is that still correct?

 I'm puzzled by the statement that  upgrades without instance downtime
 will be available in the Kilo release[1] -- surely for such a path to
 exist, Kilo/Neutron would need to support all the same use cases as
 nova-network.  If that's right and Neutron is right on the verge of
 supporting flat-with-floating then we may just cool our jets and wait to
 build the new cloud until Kilo is released.  I have no particular reason to
 prefer Neutron, but I'd like to avoid betting on a horse right before it's
 put down :)

 Thanks!

 -Andrew

 [1]
 http://docs.openstack.org/openstack-ops/content/nova-network-deprecation.html


 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




 --
 Kevin Benton





-- 
Kevin Benton



[Openstack] Instance Bringup fail, when flavor has extra specs

2014-12-22 Thread Bhaskar Rao
Hi Openstack users,

I am trying the following scenario and getting an error; I need help
debugging the issue.


1)  Created a Host Aggregate and assigned hosts in it, also added metadata 
to the Host aggregate using the command aggregate-set-metadata

2)  Created a flavor and added the extra spec to the flavor using the 
command flavor-key which matches the metadata specified for the Host 
aggregate above.

3)  Now I am trying to launch an instance using the flavor created above.

4)  The instance creation is getting errored out with the following msg
| fault| {message: No valid host was found. 
, code: 500, details:   File 
\/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py\, line 
108, in schedule_run_instance |

|  | raise 
exception.NoValidHost(reason=\\)  



5)  When I remove the extra specs from the flavor and try to launch an 
instance using the same flavor, the instance comes up fine on the same host
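A common cause of exactly this symptom (boots fine without extra specs, NoValidHost with them) is that the scheduler filter that matches flavor extra specs against aggregate metadata is not enabled. A hedged sketch of the setup, assuming an `ssd=true` key; the aggregate/flavor names and the key itself are placeholders:

```shell
# nova.conf on the scheduler node: AggregateInstanceExtraSpecsFilter must
# be in the filter list, e.g.
#   scheduler_default_filters = ...,AggregateInstanceExtraSpecsFilter

nova aggregate-create fast-storage nova           # aggregate in AZ "nova"
nova aggregate-add-host fast-storage compute1
nova aggregate-set-metadata fast-storage ssd=true
nova flavor-key m1.fast set ssd=true              # must match the metadata
```

Also note that with the plain (unscoped) key form, other enabled filters such as ComputeCapabilitiesFilter may try to interpret the same key and reject all hosts; scoping the flavor key as `aggregate_instance_extra_specs:ssd=true` avoids that clash.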

Any help here would be great.
Thanks in advance

-Bhaskar



[Openstack] Fwd: 'module' object has no attribute 'HVSpec'

2014-12-22 Thread Srinivasa Rao Ragolu
Hi All,

I have integrated below CPU pinning patches to Nova

https://review.openstack.org/#/c/132001/2
https://review.openstack.org/#/c/128738/12
https://review.openstack.org/#/c/129266/11
https://review.openstack.org/#/c/129326/11
https://review.openstack.org/#/c/129603/10
https://review.openstack.org/#/c/129626/11
https://review.openstack.org/#/c/130490/11
https://review.openstack.org/#/c/130491/11
https://review.openstack.org/#/c/130598/10
https://review.openstack.org/#/c/131069/9
https://review.openstack.org/#/c/131210/8
https://review.openstack.org/#/c/131830/5
https://review.openstack.org/#/c/131831/6
https://review.openstack.org/#/c/131070/
https://review.openstack.org/#/c/132086/
https://review.openstack.org/#/c/132295/
https://review.openstack.org/#/c/132296/
https://review.openstack.org/#/c/132297/
https://review.openstack.org/#/c/132557/
https://review.openstack.org/#/c/132655/


And now if I try to run nova-compute, getting below error


File /opt/stack/nova/nova/objects/compute_node.py, line 93, in _from_db_object

for hv_spec in hv_specs]

AttributeError: 'module' object has no attribute 'HVSpec'


Please help me in resolving this issue.


Thanks,

Srinivas.


[Openstack] error while installing pbr in devstack

2014-12-22 Thread Srinivasreddy R
Hi,
I am trying to install the OpenStack stable/icehouse release through
devstack on Ubuntu 14.04 and am getting the errors below. I have installed
on Ubuntu 14.04 many times before and it worked fine, but I don't know why
I am getting these errors now.

 + git_clone git://git.openstack.org/openstack-dev/pbr.git /opt/stack/pbr
master
 + GIT_REMOTE=git://git.openstack.org/openstack-dev/pbr.git
 + GIT_DEST=/opt/stack/pbr
 + GIT_REF=master
 ++ trueorfalse False False
 + RECLONE=False
 ++ pwd
 + local orig_dir=/opt/stack/fpa
 + [[ False = \T\r\u\e ]]
 + egrep -q '^refs'
 + echo master
 + [[ ! -d /opt/stack/pbr ]]
stack@user-OptiPlex-3020devstack$ 2014-12-22 17:45:55.426 | + [[ False =
\T\r\u\e ]]
 + cd /opt/stack/pbr
 + head -1
 + sudo git show --oneline
 1f5c9f7 Merge tag '0.10.1' into HEAD
 + setup_install /opt/stack/pbr
 + local project_dir=/opt/stack/pbr
 + setup_package_with_req_sync /opt/stack/pbr
 + local project_dir=/opt/stack/pbr
 + local flags=
 ++ cd /opt/stack/pbr
 ++ sudo git diff --exit-code
 + local update_requirements=
 + [[ '' != \c\h\a\n\g\e\d ]]
 + cd /opt/stack/requirements
 + sudo python update.py /opt/stack/pbr
 Syncing /opt/stack/pbr/test-requirements.txt
 + setup_package /opt/stack/pbr
 + local project_dir=/opt/stack/pbr
 + local flags=
 + pip_install /opt/stack/pbr
 + sudo PIP_DOWNLOAD_CACHE=/var/cache/pip  /usr/local/bin/pip install
--build=/tmp/pip-build.gB3FD /opt/stack/pbr
 Unpacking /opt/stack/pbr
   Running setup.py egg_info for package from file:///opt/stack/pbr
 /usr/local/lib/python2.7/dist-packages/setuptools/dist.py:298:
UserWarning: The version specified ('0.11.0.dev1.g1f5c9f7') is an invalid
version, this may not work as expected with newer versions of setuptools,
pip, and PyPI. Please see PEP 440 for more details.
   details. % self.metadata.version
 Traceback (most recent call last):
   File string, line 16, in module
   File /tmp/pip-qG3hz9-build/setup.py, line 22, in module
 **util.cfg_to_args())
   File /usr/lib/python2.7/distutils/core.py, line 151, in setup
 dist.run_commands()
   File /usr/lib/python2.7/distutils/dist.py, line 953, in
run_commands
 self.run_command(cmd)
   File /usr/lib/python2.7/distutils/dist.py, line 972, in run_command
 cmd_obj.run()
   File string, line 11, in replacement_run
   File /usr/local/lib/python2.7/dist-packages/pkg_resources.py, line
2254, in load
 ['__name__'])
 ImportError: No module named pbr_json
 Complete output from command python setup.py egg_info:
 /usr/local/lib/python2.7/dist-packages/setuptools/dist.py:298:
UserWarning: The version specified ('0.11.0.dev1.g1f5c9f7') is an invalid
version, this may not work as expected with newer versions of setuptools,
pip, and PyPI. Please see PEP 440 for more details.

   details. % self.metadata.version

 running egg_info

 creating pip-egg-info/pbr.egg-info

 writing requirements to pip-egg-info/pbr.egg-info/requires.txt

 writing pip-egg-info/pbr.egg-info/PKG-INFO

 writing top-level names to pip-egg-info/pbr.egg-info/top_level.txt

 writing dependency_links to pip-egg-info/pbr.egg-info/dependency_links.txt

 writing entry points to pip-egg-info/pbr.egg-info/entry_points.txt

 Traceback (most recent call last):

   File string, line 16, in module

   File /tmp/pip-qG3hz9-build/setup.py, line 22, in module

 **util.cfg_to_args())

   File /usr/lib/python2.7/distutils/core.py, line 151, in setup

 dist.run_commands()

   File /usr/lib/python2.7/distutils/dist.py, line 953, in run_commands

 self.run_command(cmd)

   File /usr/lib/python2.7/distutils/dist.py, line 972, in run_command

 cmd_obj.run()

   File string, line 11, in replacement_run

   File /usr/local/lib/python2.7/dist-packages/pkg_resources.py, line
2254, in load

 ['__name__'])

 ImportError: No module named pbr_json

 
 Cleaning up...
 Command python setup.py egg_info failed with error code 1 in
/tmp/pip-qG3hz9-build
 Storing complete log in /home/stack/.pip/pip.log
 + exit_trap
 + local r=1
 ++ jobs -p
 + jobs=
 + [[ -n '' ]]
 + exit 1


-- 
 thanks
srinivas.


[Openstack] [OpenStack] Neutron Distributed Virtual Router

2014-12-22 Thread Wilence Yao
Greetings!

I am new to OpenStack and have been focusing on the DVR feature of Neutron
recently. I have learned some of the theory behind DVR, and the Neutron
and DVR pages on https://wiki.openstack.org also helped me a lot.

Now I want to make changes to the Neutron code, especially DVR, but I
still don't know how to start reading it. One reason is that I haven't yet
made sense of the Neutron directory layout. Is there some reference to
help me figure out the Neutron code, especially DVR, and contribute to it?
I have set up devstack on my laptop and cloned the Neutron code from GitHub.



Thanks!

-Wilence


Re: [Openstack] error while installing pbr in devstack

2014-12-22 Thread Jeremy Stanley
On 2014-12-22 18:09:16 +0530 (+0530), Srinivasreddy R wrote:
 i am trying to install  openstack   branch stable/icehouse release through
 devstack on ubuntu 14.04 .
[...]
  ImportError: No module named pbr_json
  Complete output from command python setup.py egg_info:
  /usr/local/lib/python2.7/dist-packages/setuptools/dist.py:298:
 UserWarning: The version specified ('0.11.0.dev1.g1f5c9f7') is an invalid
 version, this may not work as expected with newer versions of setuptools, pip,
 and PyPI. Please see PEP 440 for more details.
[...]

It looks like you're using the master branch of PBR rather than a
release version. The current source in master is incompatible with
Setuptools 8 (released a week ago) and there is a patch series
currently under review to fix that. Either use the most recent
release of PBR instead (0.10.7 which should work with Setuptools 8)
or wait until https://review.openstack.org/142931 merges.
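Until that fix merges, pinning the local checkout to the released tag is the usual workaround (a sketch; the paths assume the default /opt/stack layout shown in the log above):

```shell
cd /opt/stack/pbr
git fetch --tags
git checkout 0.10.7     # last PBR release compatible with Setuptools 8
sudo pip install .      # reinstall the pinned version
```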
-- 
Jeremy Stanley



[Openstack] COnfiguration to setup basic cloud

2014-12-22 Thread somshekar kadam
Hi All, 

Currently I am trying to set up the following cloud. Before I go further,
I would like a review of the hardware configuration.

Basic requirement 

Total 40 VMs
Hard Disk per user 10 to 50 GB
RAM per VM 4 GB, dynamic max 12GB
Network per VM 100Mbits/second per VM 
MAx req is 16GHz per VM

So I read the OpenStack docs and a Mirantis doc on this and came up with this config:


1. CPU requirement 

Total cores required: 34, rounded up to 36
6 sockets with 6 cores per socket
1 CPU core runs at 2.4 GHz

Total number of CPU cores per VM = 5

Total number of servers = 3 (2 sockets per server).

Number of VMs per server = 14


2. Memory requirement
40 VMs at 4 GB per VM (min 512 MB, max 32 GB) = 160 GB of total memory.
Considering dynamic allocation of RAM up to 12 GB for each VM, and 14 VMs
per server, it requires 168 GB per server.
3. Storage
A flat 2 TB disk (50 GB per VM),
or with redundancy: 4 TB (1 TB disks each) with RAID 1, 5 or 10.
4. Networking
To get 100 Mbit/s per VM,
use two 1 Gb links per server,
or use 10 GbE.

On the switches 24 ports of 1 GB switch should be sufficient . 
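As a sanity check, the RAM figures above can be reproduced with a few lines of arithmetic (a sketch using only the numbers stated in this mail: 40 VMs, 4 GB each bursting to 12 GB, spread over 3 servers):

```python
import math

vms = 40
ram_per_vm_gb = 4   # steady-state allocation per VM
burst_ram_gb = 12   # dynamic maximum per VM
servers = 3

total_ram_gb = vms * ram_per_vm_gb                 # cloud-wide steady state
vms_per_server = math.ceil(vms / servers)          # busiest server
burst_ram_per_server_gb = vms_per_server * burst_ram_gb  # worst case per server

print(total_ram_gb, vms_per_server, burst_ram_per_server_gb)  # 160 14 168
```

This matches the 160 GB total and 168 GB per-server worst case above, and shows it is the burst figure, not the steady-state one, that should size per-server RAM.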


Can I have this hosted on a single machine,
or is it better to have 3 servers?
This is just to start an in-house project; adding data replication or
surviving controller failure is OK to do later, when it needs to scale.

Please suggest if the above config is OK, or point out any miscalculations
or a better one.

thanks in advance 


Regards
Neelu  



[Openstack] Openstack capabilities

2014-12-22 Thread Eriane Leobrera
Hi OpenStack,

I would really appreciate it if anyone could assist me with my dilemma.
Below are the capabilities I am looking for. We are in the process of
deciding between OpenStack and CloudStack. Capabilities matter most to us:
integrated, with everything automated.

Here are the list below:

1.  Integration with CRM 2013 (service tickets, lead/opportunity, 
contact/account and billing system).

2.  Integration with payment gateway (Moneris)

3.  PCI Compliant solution

4.  Audit trail enabled on all interactions (i.e server shutdown etc.)

5.  Support buying a new VM, signing up, auto charging them by credit card 
verification and auto provisioning VMs.

6.  Upgrade existing VMs with ability to reboot VM to have resources added

7.  Cancel VM which stops billing and scheduling for VM to be removed for x 
days later

8.  Support for pay by the minute billing

9.  Support for multiple data centre locations

10.  Supports a DNS Manager

11.  Ability to do a hard shutdown on VM

12.  Ability to have console level access

13.  Spin up new instance which wipes old instance and ability to reinstall

14.  Supports backup manager

15.  Ability to change password for panel login via email or security questions 
etc.

16.  VM management windows the shows RAM, CPU usage, IP, server name etc. 
(dashboard)

17.  Scheduled maintenance window that shows upcoming or passed (dashboard)

18.  Dashboard showing all VMs and current utilizations

19.  One click install software packages for define package

20.  Monitoring management to turn on/off or silence alerts

21.  Mobile support for rebooting VMs

22.  Security Threat Center

23.  Token tracking for resellers of our services

I would really appreciate it if anyone could take the time to put a
yes/no/NA next to each item on the list; it would definitely help me big
time. I tried reading and watching a few videos, but I would really like
to be sure, as some of the items on the list are must-haves.

Thank you in advance.

Regards,
Eriane Leobrera
MANAGER, IT SERVICES



Re: [Openstack] Openstack capabilities

2014-12-22 Thread Jay Pipes

On 12/22/2014 11:20 AM, Eriane Leobrera wrote:

Hi OpenStack,

I would really appreciate if anyone can assist me on my dilemma. Below
are the capabilities I am looking for. We are on the process of deciding
between OpenStack vs CloudStack. Capabilities are much more important
for us, integrated and having everything automated.

Here are the list below:


I've tried to give you honest answers from the OpenStack roadmap and 
currently-supported perspective below.



1.Integration with CRM 2013 (service tickets, lead/opportunity,
contact/account and billing system).


Not supported now. Not likely to ever be supported by OpenStack, too 
much of a custom feature.



2.Integration with payment gateway (Moneris)


Not supported now. Not likely to ever be supported by OpenStack, too 
much of a custom feature.



3.PCI Compliant solution


This is a giant rabbithole.


4.Audit trail enabled on all interactions (i.e server shutdown etc.)


OpenStack currently supports notification queues that can be used to 
provide audit capabilities.



5.Support buying a new VM, signing up, auto charging them by credit card
verification and auto provisioning VMs.


Not supported now. Not likely to ever be supported by OpenStack, too 
much of a custom feature for the user interface, which is something that 
OpenStack's upstream UI (Horizon) is unlikely to develop.



6.Upgrade existing VMs with ability to reboot VM to have resources added


Currently supported. This is the resize/migrate operation in nova.


7.Cancel VM which stops billing and scheduling for VM to be removed for
x days later


Not currently supported. Possibly supported some time in the future if 
we decide it's worthwhile to put time-based constraints and reservations 
into the scheduler.



8.Support for pay by the minute billing


OpenStack does not ship a billing solution. This is something that is 
the responsibility of the operator, since it's a *very* custom feature 
and almost always involves proprietary code linking.



9.Support for multiple data centre locations


This is currently supported.


10.Supports a DNS Manager


This is currently supported with the Designate component.


11.Ability to do a hard shutdown on VM


This is currently supported.


12.Ability to have console level access


This is currently supported.


13.Spin up new instance which wipes old instance and ability to reinstall


This doesn't really make any sense. This isn't cloud. This is bare-metal 
hosting you are describing.


OpenStack's VMs are hosted in the cloud -- i.e. virtualized. When you 
terminate a VM, you lose the data on the VM's ephemeral storage, which 
is why for data that you need to keep around, you use volumes (block 
storage).



14.Supports backup manager


The snapshot operation and daily/weekly/hourly backup operations are 
currently supported via OpenStack Nova's API. However, if you're looking 
for some Windows GUI that does backups, that isn't something that 
OpenStack is about to provide.



15.Ability to change password for panel login via email or security
questions etc.


Changing passwords is currently supported. Security questions are a 
UI-specific thing and not something that is built-into OpenStack's APIs.



16.VM management windows the shows RAM, CPU usage, IP, server name etc.
(dashboard)


This is currently supported.


17.Scheduled maintenance window that shows upcoming or passed (dashboard)


This is not supported.


18.Dashboard showing all VMs and current utilizations


This is currently supported.


19.One click install software packages for define package


Are you looking for an infrastructure service or a platform service? 
OpenStack's infrastructure services manage virtualized resources. 
Platform services, like the Murano project in Stackforge, can be used to 
interface with things like CloudFoundry to let you define software 
packages that would get installed on your virtual resources.



20.Monitoring management to turn on/off or silence alerts


This is not supported by OpenStack. This is something you can install 
yourself and use as you want.



21.Mobile support for rebooting VMs


This is not supported.


22.Security Threat Center


? We have a security advisory mailing list. But OpenStack is not McAfee 
Windows software.



23.Token tracking for resellers of our services


This is not supported.


I would really appreciate if anyone can take a time to put a yes/no/NA
next to each of the item on the list, it will definitely help me big
time. I tried reading and watching few videos but I would really like to
make sure as some of the items on the list are must haves.


It really sounds to me like you are looking for some all-in-one hosting 
solution, not really running your own cloud infrastructure. I'd 
recommend looking at just being a customer or reseller of one of the 
cloud providers like Rackspace Cloud, HP Cloud, Amazon Web Services, or 
Softlayer.


Best,
-jay


Thank you in advance.

Regards,

*Eriane Leobrera*


[Openstack] Using the new serial console support in Juno

2014-12-22 Thread Lars Kellogg-Stedman
I wrote an article about using the new serial console support for Nova
servers introduced in OpenStack Juno:

  
http://blog.oddbit.com/2014/12/22/accessing-the-serial-console-of-your-nova-servers/

I thought this might be of general interest.

Cheers,

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/





Re: [Openstack] [OpenStack] Neutron Distributed Virtual Router

2014-12-22 Thread Swaminathan Vasudevan
Hi Wilence,
It is good to hear that you are interested in the DVR and wanted to
contribute to DVR.
There are couple of documents out there to get you started and I can
provide you the link.

The best option would be to listen to the Video recording of our
presentation in the Paris Summit. That would give you a head start.

https://www.openstack.org/summit/openstack-paris-summit-2014/session-videos/presentation/architectural-overview-of-distributed-virtual-routers-in-openstack-neutron

https://www.openstack.org/assets/presentation-media/Openstack-kilo-summit-DVR-Architecture-20141030-Master-submitted-to-openstack.pdf


Then you can dive deep into the code to understand the different aspects of
DVR ( plugin, scheduler and agent).

Thanks
Swami

On Mon, Dec 22, 2014 at 4:37 AM, Wilence Yao wilence@gmail.com wrote:

 Greetings!

 I am new to OpenStack and have been focusing on the DVR feature of
 Neutron recently. I have learned some of the theory behind DVR, and the
 Neutron and DVR pages on https://wiki.openstack.org also helped me a lot.

 Now I want to make changes to the Neutron code, especially DVR, but I
 still don't know how to start reading it. One reason is that I haven't
 yet made sense of the Neutron directory layout. Is there some reference
 to help me figure out the Neutron code, especially DVR, and contribute
 to it? I have set up devstack on my laptop and cloned the Neutron code
 from GitHub.



 Thanks!

 -Wilence

 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




Re: [Openstack] (Juno) Multiple rings Swift

2014-12-22 Thread Amit Anand
HI all,

So now I want to add a third datacenter, basically more storage nodes, and
I want to try to do the following:

http://techs.enovance.com/7094/openstack-swift-2-0-introducing-storage-policies

So I pretty much have a Paris and a Montreal datacenter as described, and
I want to add a third datacenter for replication. In the blog it says to
create new rings, and that is where I am getting confused. Can someone
help and let me know the commands to create a whole new ring? I don't
think I am supposed to run this command again (and likewise for object and
container):

swift-ring-builder account.builder create

As I already have my base files, I guess my question is basically: what
commands would I need to make object-1.ring.gz and object-2.ring.gz?

Basic setup that I am trying

Paris - HQ  Ring 1
Montreal - HQ Ring 2

Paris has 2 storage nodes, Montreal has 2 and HQ has 2. Right now Paris and
Montreal have the same containers. I want to try and have Paris-HQ and
Montreal-HQ have their own separate containers. Appreciate the time and
help!!

Thanks

Amit Anand


On Wed, Dec 17, 2014 at 4:14 AM, Jonathan Lu jojokur...@gmail.com wrote:

  Hi,
 It seems that John has helped a lot. I just share 2 blogs that I think
 is related with your issue.

 https://swiftstack.com/blog/2012/04/09/swift-capacity-management/

 https://swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/

 -- Jonathan Lu


 On 2014/12/17 4:06, Amit Anand wrote:

 Thanks again John. Out of curiosity, is it possible to see what is where?
 Let's say I have uploaded a video and want to see where the three copies
 are among the 4 nodes I have?

  And you guys dont have a free version of Swiftstack available per chance
 do you :-)




 On Tue, Dec 16, 2014 at 12:48 PM, John Dickinson m...@not.mn wrote:

 Assuming your regions are pretty close to the same size, that's exactly
 what you'll get with 3 replicas across 2 regions. Some data will have 2
 replicas in region 1 and one in region 2. Other data will have 1 in region
 1 and 2 in region 2.

 --John




  On Dec 16, 2014, at 9:39 AM, Amit Anand aan...@viimed.com wrote:
 
  Ok cool Ill wait it out see what happens. So now I have another stupid
 question - after all is said and done, how many copies of my data will I
 have?! What I am aiming for is something like 2 regions and 3 replicas ie,
 2 copies of the data in region one and one copy in region 2.
 
  On Tue, Dec 16, 2014 at 12:35 PM, John Dickinson m...@not.mn wrote:
  That's normal. See the ...or none can be due to min_part_hours. Swift
 is refusing to move more data until the stuff likely currently in flight
 has settled. See
 https://swiftstack.com/blog/2012/04/09/swift-capacity-management/
 
  --John
 
 
 
 
 
 
   On Dec 16, 2014, at 9:09 AM, Amit Anand aan...@viimed.com wrote:
  
   Hi John thank you!
  
   So I went ahead and added two more storage nodes to the existing
 rings (object, account, container) and tried to rebalance on the controller
 I got this:
  
   [root@controller swift]# swift-ring-builder object.builder rebalance
   Reassigned 1024 (100.00%) partitions. Balance is now 38.80.
  
 ---
   NOTE: Balance of 38.80 indicates you should push this
 ring, wait at least 1 hours, and rebalance/repush.
  
 ---
  
  
   For all three. So while waiting, I went ahead and added the *.gz
 files and swift.conf to the new nodes and started the Object Storage
 Services on the both the new storage nodes Now I am seeing this after I
 try to rebalance after waiting about an hour:
  
   [root@controller swift]# swift-ring-builder object.builder rebalance
   No partitions could be reassigned.
   Either none need to be or none can be due to min_part_hours [1].
  
   Devices 4,5,6,7 are the new ones I added in region 2.
  
  
   [root@controller swift]#  swift-ring-builder object.builder
   object.builder, build version 9
   1024 partitions, 3.00 replicas, 2 regions, 2 zones, 8 devices,
 38.80 balance
   The minimum number of hours before a partition can be reassigned is 1
   Devices:id  region  zone  ip address  port  replication ip
 replication port  name weight partitions balance meta
0   1 1   10.7.5.51  6000   10.7.5.51
   6000  sda3 100.00501   30.47
1   1 1   10.7.5.51  6000   10.7.5.51
   6000  sda4 100.00533   38.80
2   1 1   10.7.5.52  6000   10.7.5.52
   6000  sda3 100.00512   33.33
3   1 1   10.7.5.52  6000   10.7.5.52
   6000  sda4 100.00502   30.73
4   2 1   10.7.5.53  6000   10.7.5.53
   6000  sda3 100.00256  -33.33
5   2 1   10.7.5.53  6000   

[Openstack] corrupt downloads: GRE, openvswitch, ipv6

2014-12-22 Thread Harm Weites

Hi,

My cloud is using IPv4 addressing to access metadata only; all other
traffic is IPv6. I'm facing a problem now where certain (!)
downloads are corrupted, e.g. making it impossible to yum
update my kernel (this is a CentOS 7 instance). The instance
receives an MTU of 1454 via DHCPv4; all other interfaces involved are
set to 1500.


Setup:
- controller node running neutron and openvswitch
- compute node running the instances and neutron-openvswitch-agent

The cloud is configured with GRE tunnels and uses ML2 (openvswitch).

How should I approach this?

Regards,
Harm
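One likely culprit with GRE is MTU: tunnel overhead is why DHCPv4 hands out 1454, but IPv6 guests take their MTU from router advertisements and may still send 1500-byte packets that get mangled in the tunnel. A sketch of pinning the DHCP-advertised MTU via the Neutron DHCP agent (file paths are the usual ones for an ML2/OVS install; adjust to your layout):

```ini
# /etc/neutron/dnsmasq-neutron.conf -- sketch; option 26 is "interface MTU"
dhcp-option-force=26,1454

# /etc/neutron/dhcp_agent.ini -- point the agent at the extra dnsmasq config
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
```

This only covers DHCPv4 clients; for the IPv6 side the MTU advertised to instances has to match as well, or the instance interfaces can be set to 1454 directly.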

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] (Juno) Multiple rings Swift

2014-12-22 Thread Christian Schwede
On 22.12.14 19:11, Amit Anand wrote:
 Now I want to try and do the following below:
 
 http://techs.enovance.com/7094/openstack-swift-2-0-introducing-storage-policies
 
 So pretty much have a Paris and Montreal center as described - I want to
 add a third datacenter to each as replication. So in the blog it says
 create new rings - that is where I am getting confused - could someone
 help and let me know what are the commands to create a whole new ring?? 

Author of the post here :)

Use the following command to create another ring:

swift-ring-builder /etc/swift/object-1.builder create 15 3 1

Storage Policies are addressed by an index number (see the blog post,
for example storage-policy:0), and these are related to the ring files:

storage-policy:0 - /etc/swift/object-0.ring.gz

storage-policy:1 - /etc/swift/object-1.ring.gz

and so on.

Policy 0 is the default one, and if there is no ring file named
/etc/swift/object-0.ring.gz Swift tries to use
/etc/swift/object.ring.gz.

Let me know if this helps!

Christian
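For completeness, the index-to-ring mapping lives in /etc/swift/swift.conf; a minimal sketch (policy names here are illustrative, not from the post) might look like:

```ini
# /etc/swift/swift.conf -- sketch only; policy names are illustrative.
# Policy 0 is the default; it uses object.ring.gz (or object-0.ring.gz).
[storage-policy:0]
name = standard
default = yes

# Policy 1 uses /etc/swift/object-1.ring.gz built as shown above.
[storage-policy:1]
name = multi-region
```

A container then selects a policy at creation time via the `X-Storage-Policy` header; all objects in that container use the corresponding ring.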



Re: [Openstack] (Juno) Multiple rings Swift

2014-12-22 Thread Amit Anand
Thanks Christian, I think that's exactly what I needed - I will try to make
2 new rings and see how that works. You wouldn't perchance know how I would
limit the replicas, i.e. 2 in the Paris region and 1 at HQ? I only want one
replica to go to the HQ region...


So for instance, if I have two nodes in the my HQ region and I only want
one replica, would I run the following commands:

swift-ring-builder account.builder add hq-ip#1:6002/sda3 100

swift-ring-builder account.builder add hq-ip#2:6002/sda3 100

If the nodes in Paris are as follows:

The minimum number of hours before a partition can be reassigned is 1
Devices:  id  region  zone  ip address  port  replication ip  replication port  name  weight  partitions  balance  meta
           0       1     1  10.7.5.51   6002  10.7.5.51                   6002  sda3  100.00         384     0.00
           1       1     1  10.7.5.51   6002  10.7.5.51                   6002  sda4  100.00         384     0.00
           2       1     1  10.7.5.52   6002  10.7.5.52                   6002  sda3  100.00         384     0.00
           3       1     1  10.7.5.52   6002  10.7.5.52                   6002  sda4  100.00         384     0.00


That for some reason doesn't seem correct to me, but what do I know. Also,
if you notice, in my command I gave the name hq-ip# - do I have to use the
naming convention r#z#-ip when I add? Thanks

Amit
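On the naming question: for multi-region rings the region and zone are encoded in the device string as r<region>z<zone>-<ip>:<port>/<device>. A sketch with placeholder IPs (with one caveat: storage policies, and thus the -1 rings, exist only for the object ring -- account and container each have a single ring):

```shell
# Sketch only -- IPs and device names are placeholders.
# Add HQ devices as region 2, zone 1 to the policy-1 object ring:
swift-ring-builder object-1.builder add r2z1-192.0.2.10:6000/sda3 100
swift-ring-builder object-1.builder add r2z1-192.0.2.11:6000/sda3 100
swift-ring-builder object-1.builder rebalance
```

Note that Swift places replicas as uniquely as possible across regions, so an exact "2 in Paris, 1 at HQ" split is not directly configurable; with two regions in a 3-replica ring, placement is biased by the device weights in each region.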



On Mon, Dec 22, 2014 at 1:46 PM, Christian Schwede 
christian.schw...@enovance.com wrote:




Re: [Openstack] (Juno) Multiple rings Swift

2014-12-22 Thread Amit Anand
Sorry the command should read

swift-ring-builder account-1.builder add hq-ip#1:6002/sda3 100 ?

I forgot to add the -1 in...

On Mon, Dec 22, 2014 at 3:00 PM, Amit Anand aan...@viimed.com wrote:






Re: [Openstack] Openstack capabilities

2014-12-22 Thread Erik McCormick
I have a slightly different take on some of this:

On Mon, Dec 22, 2014 at 11:52 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 12/22/2014 11:20 AM, Eriane Leobrera wrote:

 Hi OpenStack,

  I would really appreciate it if anyone can assist me with my dilemma. Below
  are the capabilities I am looking for. We are in the process of deciding
  between OpenStack and CloudStack. Capabilities matter most to us,
  along with integration and having everything automated.

 Here are the list below:


 I've tried to give you honest answers from the OpenStack roadmap and
 currently-supported perspective below.

  1.Integration with CRM 2013 (service tickets, lead/opportunity,
 contact/account and billing system).


 Not supported now. Not likely to ever be supported by OpenStack, too much
 of a custom feature.


Agreed, but you can potentially tie in via the billing system so as to
map tickets to tenants. This isn't an Openstack feature, but it is possible
with the correct tooling.


  2.Integration with payment gateway (Moneris)


 Not supported now. Not likely to ever be supported by OpenStack, too much
 of a custom feature.


Again, this is possible via 3rd party tooling integrating with your billing
system, but not an Openstack feature.


  3.PCI Compliant solution


 This is a giant rabbithole.


It is a giant rabbithole, but there's nothing in Openstack that is
inherently *not* compliant. Having been through a number of these, you can
achieve all of the audit requirements by properly configuring your cloud
the way you would any other system.



  4.Audit trail enabled on all interactions (i.e server shutdown etc.)


 OpenStack currently supports notification queues that can be used to
 provide audit capabilities.

  5.Support buying a new VM, signing up, auto charging them by credit card
 verification and auto provisioning VMs.


 Not supported now. Not likely to ever be supported by OpenStack, too much
 of a custom feature for the user interface, which is something that
 OpenStack's upstream UI (Horizon) is unlikely to develop.

 Again, this is a 3rd party problem. Openstack provides all the hooks that
you need to tie in something like Velvica to manage users, signup, billing
integration, etc. Usage data is provided by Ceilometer. Most of these
systems relay out to horizon, but there are a few that replace the
dashboard entirely. This is becoming a much richer space with more players
all the time.

 6.Upgrade existing VMs with ability to reboot VM to have resources added


 Currently supported. This is the resize/migrate operation in nova.

  7.Cancel VM which stops billing and scheduling for VM to be removed for
 x days later


 Not currently supported. Possibly supported some time in the future if we
 decide it's worthwhile to put time-based constraints and reservations into
 the scheduler.

 See #5


  8.Support for pay by the minute billing


 OpenStack does not ship a billing solution. This is something that is the
 responsibility of the operator, since it's a *very* custom feature and
 almost always involves proprietary code linking.

  9.Support for multiple data centre locations


 This is currently supported.

  Specifically look at documentation around Regions.


 10.Supports a DNS Manager


 This is currently supported with the Designate component.

  11.Ability to do a hard shutdown on VM


 This is currently supported.

  12.Ability to have console level access


 This is currently supported.

  13.Spin up new instance which wipes old instance and ability to reinstall


 This doesn't really make any sense. This isn't cloud. This is bare-metal
 hosting you are describing.

 OpenStack's VMs are hosted in the cloud -- i.e. virtualized. When you
 terminate a VM, you lose the data on the VM's ephemeral storage, which is
 why for data that you need to keep around, you use volumes (block storage).

  14.Supports backup manager


 The snapshot operation and daily/weekly/hourly backup operations are
 currently supported via OpenStack Nova's API. However, if you're looking
 for some Windows GUI that does backups, that isn't something that OpenStack
 is about to provide.

  15.Ability to change password for panel login via email or security
 questions etc.


 Changing passwords is currently supported. Security questions are a
 UI-specific thing and not something that is built-into OpenStack's APIs.

  16.VM management windows the shows RAM, CPU usage, IP, server name etc.
 (dashboard)


 This is currently supported.

  17.Scheduled maintenance window that shows upcoming or passed (dashboard)


 This is not supported.

  18.Dashboard showing all VMs and current utilizations


 This is currently supported.

 19.One-click install of software packages for a defined package


 Are you looking for an infrastructure service or a platform service?
 OpenStack's infrastructure services manage virtualized resources. Platform
 services, like the Murano project in Stackforge, can be used to interface
 with things like CloudFoundry 

Re: [Openstack] Using the new serial console support in Juno

2014-12-22 Thread Michael Dorman
Great info!  Thanks Lars.





On 12/22/14, 5:08 PM, Lars Kellogg-Stedman l...@redhat.com wrote:

I wrote an article about using the new serial console support for Nova
servers introduced in OpenStack Juno:

  
http://blog.oddbit.com/2014/12/22/accessing-the-serial-console-of-your-nova-servers/

I thought this might be of general interest.

Cheers,

-- 
Lars Kellogg-Stedman l...@redhat.com | larsks @ 
{freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/
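For quick reference, the Juno serial console is switched on per compute node in nova.conf; a minimal sketch (the addresses are placeholders), after which `nova get-serial-console <server>` returns the websocket URL:

```ini
# nova.conf on the compute node -- sketch; addresses are placeholders.
[serial_console]
enabled = true
# Ports allocated to guest serial consoles on this host:
port_range = 10000:20000
# Address the serial proxy uses to reach this compute node:
listen = 0.0.0.0
proxyclient_address = 192.0.2.20
# Public websocket endpoint handed back to API clients:
base_url = ws://203.0.113.5:6083/
```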



[Openstack] cinder slow (nova issues?)

2014-12-22 Thread Dmitry Makovey
Hi everybody,

using RDO IceHouse packages I've set up an infrastructure atop of
RHEL6.6 and am seeing a very unpleasant performance for the storage.

I've done some testing and here's what I get from the same storage: but
different access points:

cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.162997 s, 1.3 GB/s

nova-compute # dd if=/dev/zero of=baloon bs=1048576 count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 0.167905 s, 1.2 GB/s

instance # dd if=/dev/zero of=baloon bs=1048576 count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 10.064 s, 20.8 MB/s

A bit of explanation: in the above scenario I created an LV on the
cinder node, mounted it locally, and ran the command for cinder-volume.
Then I created an iSCSI target, mounted it on nova-compute, and ran the
command there. Finally, via cinder, I created a storage volume, booted
the OS off it, and ran the test from within it... The results are just
miserable: going from 1.2 GB/s down to 20 MB/s is a big
degradation. What should I look for? I have also tried running the same
command within our RHEL KVM instance and got great performance.

I have checked under /var/lib/nova/instances/* and libvirt.xml seems to
indicate that virtio is being employed:

disk type=block device=disk
  driver name=qemu type=raw cache=none/
  source
dev=/dev/disk/by-path/ip-192.168.46.18:3260-iscsi-iqn.2010-10.org.openstack:volume-955b25eb-bb48-43c3-a14d-222c9e8c7019-lun-1/
  target bus=virtio dev=vda/
  serial955b25eb-bb48-43c3-a14d-222c9e8c7019/serial
/disk

The guest image used is rhel-guest-image-6.6-20140926.0.x86_64.qcow2,
downloaded off the RH site.

P.S.
I have cross-posted this to RDO ML as well...
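As a side note on methodology: dd from /dev/zero without forcing writes to media mostly measures the page cache, which can exaggerate the host-vs-guest gap (the guest's cache=none disk setting disables host caching, so the layers are not comparing like with like). A sketch of a fairer comparison to run at each layer:

```shell
# Flush data to media before dd reports, so the figure reflects real I/O.
dd if=/dev/zero of=baloon bs=1048576 count=200 conv=fdatasync

# Or bypass the page cache entirely (needs O_DIRECT support on the fs):
dd if=/dev/zero of=baloon bs=1048576 count=200 oflag=direct

rm -f baloon
```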

-- 
Dmitry Makovey
Web Systems Administrator
Athabasca University
(780) 675-6245
---
Confidence is what you have before you understand the problem
Woody Allen

When in trouble when in doubt run in circles scream and shout
 http://www.wordwizard.com/phpbb3/viewtopic.php?f=16t=19330





[Openstack] [swift] [ceph-backend]

2014-12-22 Thread Mark Kirkwood
I've been taking a look at this 
(https://github.com/stackforge/swift-ceph-backend and forks etc). Looks 
good.


I have some possibly dumb questions :-)

1/ Async updates

There's a comment in rados_server.py about not handling these. What 
exactly is the issue? (I note in Juno we need to add a policy arg, but 
they seem to work AFAICS).



2/ Replication

Doing some testing to 2 regions and 2 zones (each region with its own 
ceph cluster), I can get the ceph data out of sync between the regions 
if I stop one region and make changes in the other (then restart). This 
same test works fine for standard swift object backend. Are we in need 
of some more code in the replication side of things?


Cheers

Mark



Re: [Openstack] Neutron vs. FlatDHCP -- what's the latest?

2014-12-22 Thread Andrew Bogott

Kevin --

Thanks for your thoughts; this seems possible, if ugly.  My original 
question remains, though:  If there is meant to be an upgrade path from 
nova-network (In L, or M, or whenever), what will my use case look like 
after migration?  Will it be this same setup that you suggest, or is a 
proper flat-with-floating setup being added to Neutron in order to allow 
for direct migrations?


Thanks.

-Andrew


On 12/22/14 5:42 PM, Kevin Benton wrote:

The shared network would have all of the VMs attached to it and would
just be private address space. The shared network would be connected
to a virtual router which would be connected to an external network
where all of your floating IPs come from. The floating IPs from there
would have the allocation, assignment, release features you are
looking for.

However, until the ARP poisoning protection is merged, shared networks
aren't very trustworthy across multiple tenants. So you should be able
to experiment with the Juno Neutron code in the topology I described
above to see if it meets your needs, but I wouldn't suggest a
production deployment until the L2 dataplane security features are
merged (hopefully during this cycle).


-
| Shared Network |   --- All tenant VMs attach here
-
  |

| Router |

  |
--
| External Network |--- Floating IPs come from here
--
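A sketch of building that topology with the Juno-era neutron CLI (names, CIDRs, and the FLOATINGIP_ID/PORT_ID placeholders are illustrative):

```shell
# Shared tenant-facing network with private address space (sketch only).
neutron net-create shared-net --shared
neutron subnet-create shared-net 10.0.0.0/16 --name shared-subnet

# External network and subnet that supply the floating IPs.
neutron net-create ext-net --router:external True
neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet --disable-dhcp

# Router joining the two.
neutron router-create shared-router
neutron router-gateway-set shared-router ext-net
neutron router-interface-add shared-router shared-subnet

# Allocate a floating IP and associate it with an instance's port.
neutron floatingip-create ext-net
neutron floatingip-associate FLOATINGIP_ID PORT_ID
```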

On Mon, Dec 22, 2014 at 1:46 AM, Andrew Bogott abog...@wikimedia.org wrote:

On 12/22/14 2:08 PM, Kevin Benton wrote:

Can't you simulate the same topology as the FlatDHCPManager + Floating IPs
with a shared network attached to a router which is then attached to an
external network?


Mmmmaybe?  Floating IP support in nova-network is pretty great (allocation,
assignment, release, etc.) and allows us shuffle around a small number of
public IPs amongst a much larger number of instances.  Your suggestion
doesn't address that, does it?  Short of my implementing a bunch of custom
stuff on my own?

-A




On Sun, Dec 21, 2014 at 7:00 PM, Andrew Bogott abog...@wikimedia.org
wrote:

Greetings!

I'm about to set up a new cloud, so for the second time this year I'm
facing the question of Neutron vs. nova-network.  In our current setup we're
using nova.network.manager.FlatDHCPManager with floating IPs.  This config
has been working fine, and would probably be our first choice for the new
cloud as well.

At this point is there any compelling reason for us to switch to Neutron?
My understanding is that the Neutron flat network model still doesn't
support anything similar to floating IPs, so if we move to Neutron we'll
need to switch to a subnet-per-tenant model.  Is that still correct?

I'm puzzled by the statement that  upgrades without instance downtime
will be available in the Kilo release[1] -- surely for such a path to
exist, Kilo/Neutron would need to support all the same use cases as
nova-network.  If that's right and Neutron is right on the verge of
supporting flat-with-floating then we may just cool our jets and wait to
build the new cloud until Kilo is released.  I have no particular reason to
prefer Neutron, but I'd like to avoid betting on a horse right before it's
put down :)

Thanks!

-Andrew

[1]
http://docs.openstack.org/openstack-ops/content/nova-network-deprecation.html






--
Kevin Benton










[Openstack] [zuul][tempest] zuul tempest is broken

2014-12-22 Thread LIU Yulong
Hi all,
I noticed that all tempest tests here are broken:
http://status.openstack.org/zuul/
No tempest check has passed today.
Every patch with a tempest check on review.openstack.org is getting a Jenkins -1
today.
Could a zuul/tempest core/maintainer please look into this issue?


Re: [Openstack] [zuul][tempest] zuul tempest is broken

2014-12-22 Thread LIU Yulong
It seems that devstack caused the test failures. All OpenStack services'
'check-tempest-dsvm-full' jobs have the same error output:

2014-12-23 03:22:14.997 | 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
2014-12-23 03:22:15.247 | Running devstack
2014-12-23 03:22:15.247 | ... this takes 5 - 8 minutes (logs in logs/devstacklog.txt.gz)
2014-12-23 03:23:03.793 | ERROR: the main setup script run by this job failed - exit code: 1
2014-12-23 03:23:03.794 | please look at the relevant log files to determine the root cause
2014-12-23 03:23:03.794 | Cleaning up host
2014-12-23 03:23:03.794 | ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
2014-12-23 03:23:06.762 | Build step 'Execute shell' marked build as failure
2014-12-23 03:23:06.846 | [SCP] Connecting to static.openstack.org






Re: [Openstack] [zuul][tempest] zuul tempest is broken

2014-12-22 Thread Jeremy Stanley
On 2014-12-23 11:27:49 +0800 (+0800), LIU Yulong wrote:
 I noticed that all test tempest here are broken
 http://status.openstack.org/zuul/. There is no passed tempest
 check today. All patch with tempest check in review.openstack.org
 will get a Jenkins -1 mark today. Please some zuul/tempest's
 core/maintainer to check this issue.

We've been working on it all day. Pip 6.0 was released today and
it's taken most of the day to get DevStack working correctly with
it. Should finally be resolved now, if you try rechecking.
-- 
Jeremy Stanley



[Openstack] Jclouds installation error

2014-12-22 Thread kyawthu win
Hi,
I want to use jclouds. I create pom.xml from
http://jclouds.apache.org/guides/openstack/#pom

Then I run
mvn dependency:copy-dependencies -DoutputDirectory=./lib

It shows errors like this:

BUILD FAILURE
Failed to execute goal
org.apache.maven.plugins:maven-dependency-plugin:2.1:copy-dependencies
(default-cli) on project my-app: Error copying artifact from
/home/ucsm/.m2/repository/aopalliance/aopalliance/1.0/aopalliance-1.0.jar
to /jclouds/lib/aopalliance-1.0.jar: /jclouds/lib/aopalliance-1.0.jar
(No such file or directory)

Can anybody solve this?
Please!
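That failure is maven resolving the relative ./lib against the wrong working directory (/jclouds/lib here). A sketch of a workaround: run from the directory containing pom.xml and pass an absolute output path (the plugin version pin is illustrative):

```shell
# Run from the project root (the directory containing pom.xml).
mkdir -p lib
mvn org.apache.maven.plugins:maven-dependency-plugin:2.8:copy-dependencies \
    -DoutputDirectory="$(pwd)/lib"
```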



Re: [Openstack-operators] Small openstack

2014-12-22 Thread matt
Sounds like a solid way to approach it, George. I hope you can document and
share your methods and experiences.

Sounds like this would be helpful to folks setting up small test
environments.

On Mon, Dec 22, 2014 at 4:35 PM, George Shuklin george.shuk...@gmail.com
wrote:

 Thank you, everyone!

 After some lurking around I found a rather unusual way: use external
 networks on a per-tenant basis with directly attached interfaces. This will
 not only eliminate the neutron nodes (as heavy servers), but will also
 remove NAT and simplify everything for tenants. All we need is some
 VLANs/VXLANs with a few external networks (one per tenant).

 Tenants will have no 'routers' and 'floatingips', but will still have DHCP
 and other yummy neutron things like private networks with overlapping
 numbering plans.

 Future reports to follow.


 On 12/21/2014 12:16 AM, George Shuklin wrote:

 Hello.

 I've suddenly got a request for a small installation of openstack (about
 3-5 computes).

 They need almost nothing (just a management panel to spawn simple
 instances, a few friendly tenants), and I'm curious: is nova-network a
 good solution for this? They don't want a network node, and doing 'network
 node on compute' is kinda sad.

 (And one more: has anyone tried to put management stuff on a compute node
 in mild production?)



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
