[openstack-dev] [nova] Nova API meeting

2014-04-17 Thread Kenichi Oomichi
Hi,

Chris has some days off now.
I'd like to run the next meeting instead.

Just a reminder that the weekly Nova API meeting is being held tomorrow,
Friday, at 0000 UTC. 

We encourage cloud operators and those who use the REST API, such as
SDK developers, and others who are interested in the future of the
API to participate. 

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 9:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 


Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread Aaron Rosen
Sorry, not really. It's still not clear to me why multiple NICs would be
required on the same L2 domain. Would you mind drawing your use case here:
http://asciiflow.com/ (or maybe Google Docs), labeling the different
interfaces with IPs and the flow of packets you want, and perhaps their
header values. You say "without modifying packet headers" in your email;
I'm guessing you're referring to L2 headers? Though I'm still not really
following. Sorry :/


On Wed, Apr 16, 2014 at 10:23 PM, Vikash Kumar 
vikash.ku...@oneconvergence.com wrote:

 Aaron,

   The idea is to steer packets coming from source S1 (belonging to net1)
 destined to destination D1 (belonging to net1) through a bunch of L2 appliances
 (like a firewall) without modifying packet headers. The core idea is to keep the
 appliances (on net1), source S1 (a VM on net1) and destination D1 (a VM on
 net1) on the same broadcast domain. I hope it will now make sense.


 On Thu, Apr 17, 2014 at 10:47 AM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 Kevin, this can be one approach, but I'm not sure. It certainly won't solve
 all cases. :)




 On Thu, Apr 17, 2014 at 10:33 AM, Kevin Benton blak...@gmail.com wrote:

 Yeah, I was aware of allowed address pairs, but that doesn't help with
 the IP allocation part.

 Is this the tenant workflow for this use case?

 1. Create an instance.
 2. Wait to see what which subnet it gets an allocation from.
 3. Pick an IP from that subnet that doesn't currently appear to be in
 use.
 4. Use the neutron-cli or API to update the port object with the extra
 IP.
 5. Hope that Neutron will never allocate that IP address for something
 else.


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 Whoops Akihiro beat me to it :)


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 The allowed-address-pair extension that was added here (
 https://review.openstack.org/#/c/38230/) allows us to add arbitrary
 ips to an interface to allow them. This is useful if you want to run
 something like VRRP between two instances.


 On Wed, Apr 16, 2014 at 9:39 PM, Kevin Benton blak...@gmail.comwrote:

 I was under the impression that the security group rules blocked
 addresses not assigned by neutron[1].

 1.
 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L188


 On Wed, Apr 16, 2014 at 9:20 PM, Aaron Rosen 
 aaronoro...@gmail.comwrote:

 You can do it with ip aliasing and use one interface:

 ifconfig eth0 10.0.0.22/24
 ifconfig eth0:1 10.0.0.23/24
 ifconfig eth0:2 10.0.0.24/24

 2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state
 DOWN qlen 1000
 link/ether 40:6c:8f:1a:a9:31 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.22/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
 inet 10.0.0.23/24 brd 10.0.0.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
 inet 10.0.0.24/24 brd 10.0.0.255 scope global secondary eth0:2
valid_lft forever preferred_lft forever



 On Wed, Apr 16, 2014 at 8:53 PM, Kevin Benton blak...@gmail.comwrote:

 Web server running multiple SSL sites that wants to be compatible
 with clients that don't support the SNI extension. There is no way for 
 a
 server to get multiple IP addresses on the same interface is there?


 On Wed, Apr 16, 2014 at 5:50 PM, Aaron Rosen aaronoro...@gmail.com
  wrote:

 This is true. Several people have asked this same question over
 the years though I've yet to hear a use case why one really need to do
 this. Do you have one?


 On Wed, Apr 16, 2014 at 3:12 PM, Ronak Shah 
 ro...@nuagenetworks.net wrote:

 Hi Vikash,
 Currently this is not supported. the NIC not only needs to be in
 different subnet, they have to be in different network as well 
 (container
 for the subnet)

 Thanks
 Ronak

 On Wed, Apr 16, 2014 at 3:51 AM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 *With 'interfaces' I mean 'nics' of VM*.


 On Wed, Apr 16, 2014 at 4:18 PM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 Hi,

  I want to launch one VM which will have two Ethernet
 interfaces with IP of single subnet. Is this supported now in 
 openstack ?
 Any suggestion ?


 Thanx



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 

Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread Aaron Rosen
Hi Kevin,

You would just create ports that aren't attached to instances and steal
their ip_addresses from those ports and put those in the
allowed-address-pairs on a port, OR you could change the allocation range on
the subnet to ensure these IPs were never handed out. That's probably the
right approach.
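
A rough CLI sketch of that reserve-a-port-then-allow approach (names, IDs and
addresses below are placeholders, and the exact arguments may vary between
client versions):

  # Create a second, unattached port on the same network just to reserve an IP
  neutron port-create net1 --name reserved-ip-port

  # Look up which fixed IP the reserved port was given
  neutron port-show reserved-ip-port

  # Whitelist that IP on the instance's existing port
  neutron port-update <instance-port-id> \
      --allowed-address-pairs type=dict list=true ip_address=<reserved-ip>

  # Or keep a block of addresses out of Neutron's hands entirely by limiting
  # the allocation pool when the subnet is created
  neutron subnet-create net1 10.0.0.0/24 --allocation-pool start=10.0.0.10,end=10.0.0.200

Inside the guest the extra address still has to be configured manually (e.g.
with ip addr add), since the allowed-address-pairs entry only permits it on
the port.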

Aaron


On Wed, Apr 16, 2014 at 10:03 PM, Kevin Benton blak...@gmail.com wrote:

 Yeah, I was aware of allowed address pairs, but that doesn't help with the
 IP allocation part.

 Is this the tenant workflow for this use case?

 1. Create an instance.
 2. Wait to see what which subnet it gets an allocation from.
 3. Pick an IP from that subnet that doesn't currently appear to be in use.
 4. Use the neutron-cli or API to update the port object with the extra IP.
 5. Hope that Neutron will never allocate that IP address for something
 else.


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 Whoops Akihiro beat me to it :)


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 The allowed-address-pair extension that was added here (
 https://review.openstack.org/#/c/38230/) allows us to add arbitrary ips
 to an interface to allow them. This is useful if you want to run something
 like VRRP between two instances.


 On Wed, Apr 16, 2014 at 9:39 PM, Kevin Benton blak...@gmail.com wrote:

 I was under the impression that the security group rules blocked
 addresses not assigned by neutron[1].

 1.
 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L188


 On Wed, Apr 16, 2014 at 9:20 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 You can do it with ip aliasing and use one interface:

 ifconfig eth0 10.0.0.22/24
 ifconfig eth0:1 10.0.0.23/24
 ifconfig eth0:2 10.0.0.24/24

 2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state
 DOWN qlen 1000
 link/ether 40:6c:8f:1a:a9:31 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.22/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
 inet 10.0.0.23/24 brd 10.0.0.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
 inet 10.0.0.24/24 brd 10.0.0.255 scope global secondary eth0:2
valid_lft forever preferred_lft forever



 On Wed, Apr 16, 2014 at 8:53 PM, Kevin Benton blak...@gmail.comwrote:

 Web server running multiple SSL sites that wants to be compatible
 with clients that don't support the SNI extension. There is no way for a
 server to get multiple IP addresses on the same interface is there?


 On Wed, Apr 16, 2014 at 5:50 PM, Aaron Rosen 
 aaronoro...@gmail.comwrote:

 This is true. Several people have asked this same question over the
 years though I've yet to hear a use case why one really need to do 
 this. Do
 you have one?


 On Wed, Apr 16, 2014 at 3:12 PM, Ronak Shah ro...@nuagenetworks.net
  wrote:

 Hi Vikash,
 Currently this is not supported. the NIC not only needs to be in
 different subnet, they have to be in different network as well 
 (container
 for the subnet)

 Thanks
 Ronak

 On Wed, Apr 16, 2014 at 3:51 AM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 *With 'interfaces' I mean 'nics' of VM*.


 On Wed, Apr 16, 2014 at 4:18 PM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 Hi,

  I want to launch one VM which will have two Ethernet
 interfaces with IP of single subnet. Is this supported now in 
 openstack ?
 Any suggestion ?


 Thanx



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Heat][Nova][Neutron]Detach interface will delete the port

2014-04-17 Thread Huangtianhua
Hi all,

Port is a resource defined in Heat, and Heat supports the actions: create a
port, delete a port, attach a port to a server, and detach it from a server.

But we can't re-attach a port which has once been detached.

-
There is such a scenario:


1.   Create a stack with a template:

..

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageId }
      flavor: { get_param: InstanceType }
      networks: [ { port: { Ref: instance_port } } ]

  instance_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: Network }
Heat will create a port and a server, and attach the port to the server.



2.   I want to attach the port to another server, so I update the stack 
with a new template:

..

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageId }
      flavor: { get_param: InstanceType }

  my_instance2:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageId }
      flavor: { get_param: InstanceType }
      networks: [ { port: { Ref: instance_port } } ]

  instance_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: Network }



Heat will invoke the Nova detach_interface API to detach the interface, and 
then try to attach the port to the new server.

But the stack update fails, and a 404 "port not found" error is raised by 
Neutron, because the port was deleted during the detach.



There is no real detach API for Heat to invoke: the Nova API detach_interface 
will invoke the Neutron API delete_port, and then the port will be deleted.
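
For reference, the calls involved look roughly like this from the CLI (IDs are
placeholders; Heat drives the equivalent novaclient calls rather than the CLI):

  # Attach an existing Neutron port to a server
  nova interface-attach --port-id <port-id> <server-id>

  # Detach it again -- as described above, the underlying port currently ends
  # up deleted rather than merely detached
  nova interface-detach <server-id> <port-id>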
   
---

I think there are two solutions:
First:
Heat gets the port information before the detach, and creates the port again 
before the attach.

But I think this looks ugly and increases the risk of failure on re-create.

Second:
Neutron provides a detach_port API to Nova, so that Nova can offer a real 
detach, not a delete, to Heat.

What do you think?

Cheers

Tianhua



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread Kevin Benton
This seems painful for a tenant workflow to get multiple addresses. I would
like to improve this during the Juno cycle. What is the limitation that is
blocking the multi-nic use cases? Is it Nova?


On Wed, Apr 16, 2014 at 11:27 PM, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi Kevin,

 You'd would just create ports that aren't attached to instances and steal
 their ip_addresses from those ports and put those in the
 allowed-address-pairs on a port OR you could change the allocation range on
 the subnet to ensure these ips were never handed out. That's probably the
 right approach.

 Aaron


 On Wed, Apr 16, 2014 at 10:03 PM, Kevin Benton blak...@gmail.com wrote:

 Yeah, I was aware of allowed address pairs, but that doesn't help with
 the IP allocation part.

 Is this the tenant workflow for this use case?

 1. Create an instance.
 2. Wait to see what which subnet it gets an allocation from.
 3. Pick an IP from that subnet that doesn't currently appear to be in use.
 4. Use the neutron-cli or API to update the port object with the extra IP.
 5. Hope that Neutron will never allocate that IP address for something
 else.


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 Whoops Akihiro beat me to it :)


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 The allowed-address-pair extension that was added here (
 https://review.openstack.org/#/c/38230/) allows us to add arbitrary
 ips to an interface to allow them. This is useful if you want to run
 something like VRRP between two instances.


 On Wed, Apr 16, 2014 at 9:39 PM, Kevin Benton blak...@gmail.comwrote:

 I was under the impression that the security group rules blocked
 addresses not assigned by neutron[1].

 1.
 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L188


 On Wed, Apr 16, 2014 at 9:20 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 You can do it with ip aliasing and use one interface:

 ifconfig eth0 10.0.0.22/24
 ifconfig eth0:1 10.0.0.23/24
 ifconfig eth0:2 10.0.0.24/24

 2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state
 DOWN qlen 1000
 link/ether 40:6c:8f:1a:a9:31 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.22/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
 inet 10.0.0.23/24 brd 10.0.0.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
 inet 10.0.0.24/24 brd 10.0.0.255 scope global secondary eth0:2
valid_lft forever preferred_lft forever



 On Wed, Apr 16, 2014 at 8:53 PM, Kevin Benton blak...@gmail.comwrote:

 Web server running multiple SSL sites that wants to be compatible
 with clients that don't support the SNI extension. There is no way for a
 server to get multiple IP addresses on the same interface is there?


 On Wed, Apr 16, 2014 at 5:50 PM, Aaron Rosen 
 aaronoro...@gmail.comwrote:

 This is true. Several people have asked this same question over the
 years though I've yet to hear a use case why one really need to do 
 this. Do
 you have one?


 On Wed, Apr 16, 2014 at 3:12 PM, Ronak Shah 
 ro...@nuagenetworks.net wrote:

 Hi Vikash,
 Currently this is not supported. the NIC not only needs to be in
 different subnet, they have to be in different network as well 
 (container
 for the subnet)

 Thanks
 Ronak

 On Wed, Apr 16, 2014 at 3:51 AM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 *With 'interfaces' I mean 'nics' of VM*.


 On Wed, Apr 16, 2014 at 4:18 PM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 Hi,

  I want to launch one VM which will have two Ethernet
 interfaces with IP of single subnet. Is this supported now in 
 openstack ?
 Any suggestion ?


 Thanx



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread Aaron Rosen
Nova currently prevents one from attaching multiple NICs on the same
L2. That said, I don't think we've clearly determined a use case for having
multiple NICs on the same L2. One reason why we don't allow this is that doing
so would allow a tenant to easily loop the network and cause a broadcast storm,
and Neutron has no mechanism today to break these loops.
One could just enable STP on OVS to do so, though I think we should come up
with a good use case before allowing this type of thing.
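
For the record, the switch-level knob being referred to would be something
like the following on the integration bridge (a hedged example; br-int is an
assumed bridge name, and Neutron does not manage this setting today):

  # Enable spanning tree on an Open vSwitch bridge
  ovs-vsctl set Bridge br-int stp_enable=true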


On Wed, Apr 16, 2014 at 11:53 PM, Kevin Benton blak...@gmail.com wrote:

 This seems painful for a tenant workflow to get multiple addresses. I
 would like to improve this during the Juno cycle. What is the limitation
 that is blocking the multi-nic use cases? Is it Nova?


 On Wed, Apr 16, 2014 at 11:27 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 Hi Kevin,

 You'd would just create ports that aren't attached to instances and steal
 their ip_addresses from those ports and put those in the
 allowed-address-pairs on a port OR you could change the allocation range on
 the subnet to ensure these ips were never handed out. That's probably the
 right approach.

 Aaron


 On Wed, Apr 16, 2014 at 10:03 PM, Kevin Benton blak...@gmail.com wrote:

 Yeah, I was aware of allowed address pairs, but that doesn't help with
 the IP allocation part.

 Is this the tenant workflow for this use case?

 1. Create an instance.
 2. Wait to see what which subnet it gets an allocation from.
 3. Pick an IP from that subnet that doesn't currently appear to be in
 use.
 4. Use the neutron-cli or API to update the port object with the extra
 IP.
 5. Hope that Neutron will never allocate that IP address for something
 else.


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 Whoops Akihiro beat me to it :)


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 The allowed-address-pair extension that was added here (
 https://review.openstack.org/#/c/38230/) allows us to add arbitrary
 ips to an interface to allow them. This is useful if you want to run
 something like VRRP between two instances.


 On Wed, Apr 16, 2014 at 9:39 PM, Kevin Benton blak...@gmail.comwrote:

 I was under the impression that the security group rules blocked
 addresses not assigned by neutron[1].

 1.
 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L188


 On Wed, Apr 16, 2014 at 9:20 PM, Aaron Rosen 
 aaronoro...@gmail.comwrote:

 You can do it with ip aliasing and use one interface:

 ifconfig eth0 10.0.0.22/24
 ifconfig eth0:1 10.0.0.23/24
 ifconfig eth0:2 10.0.0.24/24

 2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state
 DOWN qlen 1000
 link/ether 40:6c:8f:1a:a9:31 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.22/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
 inet 10.0.0.23/24 brd 10.0.0.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
 inet 10.0.0.24/24 brd 10.0.0.255 scope global secondary eth0:2
valid_lft forever preferred_lft forever



 On Wed, Apr 16, 2014 at 8:53 PM, Kevin Benton blak...@gmail.comwrote:

 Web server running multiple SSL sites that wants to be compatible
 with clients that don't support the SNI extension. There is no way for 
 a
 server to get multiple IP addresses on the same interface is there?


 On Wed, Apr 16, 2014 at 5:50 PM, Aaron Rosen aaronoro...@gmail.com
  wrote:

 This is true. Several people have asked this same question over
 the years though I've yet to hear a use case why one really need to do
 this. Do you have one?


 On Wed, Apr 16, 2014 at 3:12 PM, Ronak Shah 
 ro...@nuagenetworks.net wrote:

 Hi Vikash,
 Currently this is not supported. the NIC not only needs to be in
 different subnet, they have to be in different network as well 
 (container
 for the subnet)

 Thanks
 Ronak

 On Wed, Apr 16, 2014 at 3:51 AM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 *With 'interfaces' I mean 'nics' of VM*.


 On Wed, Apr 16, 2014 at 4:18 PM, Vikash Kumar 
 vikash.ku...@oneconvergence.com wrote:

 Hi,

  I want to launch one VM which will have two Ethernet
 interfaces with IP of single subnet. Is this supported now in 
 openstack ?
 Any suggestion ?


 Thanx



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Heat][Nova][Neutron]Detach interface will delete the port

2014-04-17 Thread Sergey Kraynev
Hello Huang.

You are right that this problem is present in networks update for
OS::Nova::Server. I have known about it, and I wanted to discuss it with
Steve Baker, but possibly forgot to do it. Thank you for raising this
thread.

About issue.

The cause is simple: when Nova calls detach_interface, the port
is detached and then deleted entirely.


I think there are two solutions:

 First:

 Heat get the port information before to “detach”, and to create the port
 again before to “attach”.

 But I think it looks ugly and will increase risk failure for re-create.


I agree that it's not a useful solution. This approach has a lot of downsides,
one of them being:
 - if you update only the server, your other resources should stay unchanged,
but in this case the port will be recreated (so it will be a new,
different resource).


Second:

 Neutron provide a detach_port api to nova, so that nova provide the real
 “detach” not “delete” to heat.




I have talked with folks from the Neutron team and they told me that Neutron
does not have such an API and it's not possible to do this there.

So I think the problem should be solved in Nova. For example, it would be good
to give the detach_interface command an additional flag controlling whether
the port is deleted (some kind of soft detach).
In this case we could reuse the existing port after detaching.

Regards,
Sergey.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting April 17 1800 UTC

2014-04-17 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Agenda_for_April.2C_17

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140417T1800

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Keystone LDAP check job

2014-04-17 Thread Sergey Nikitin
Hi,

I'm refactoring the LDAP driver in Keystone. I have a question:
why do we have no gate job checking Keystone with LDAP?

Are there any reasons not to create the job?
If there aren't I'd like to work on it.

Thanks
Sergey Nikitin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest] Questions about images

2014-04-17 Thread Thomas Spatzier
 From: Steve Baker sba...@redhat.com
 To: openstack-dev@lists.openstack.org
 Date: 17/04/2014 00:55
 Subject: Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest]
 Questions about images

 On 17/04/14 09:11, Thomas Spatzier wrote:
  From: Mike Spreitzer mspre...@us.ibm.com
  To: OpenStack Development Mailing List \(not for usage questions\)
  openstack-dev@lists.openstack.org
  Date: 16/04/2014 19:58
  Subject: Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest]
  Questions about images
 
  Steven Hardy sha...@redhat.com wrote answers to most of my
questions.
 
  To clarify, my concern about URLs and image names is not so much for
  the sake of a person browsing/writing but rather because I want
  programs, scripts, templates, and config files (e.g., localrc for
  DevStack) to all play nice together (e.g., not require a user to
  rename any images or hack any templates).  I think Steve was
  thinking along the same lines when he reiterated the URL he uses in
  localrc and wrote:
 
  We should use the default name that devstack uses in glance, IMO, e.g
 
  fedora-20.x86_64
  FWIW, instead of specifying allowed image names in a template it might
be a
  better approach to allow for specifying constraints against the image
(e.g.
  distro is fedora, or distro is ubuntu, version between 12.04 and 13.04
etc)
  and then use metadata in glance to select the right image. Of course
this
  would require some discipline to maintain metadata and we would have to
  agree on mandatory attributes and values in it (I am sure there is at
least
  one standard with proposed options), but it would make templates more
  portable ... or at least the author could specify more clearly under
which
  environments he/she thinks the template will work.
 
  There is a blueprint which goes in this direction:
  https://blueprints.launchpad.net/heat/+spec/constraint-based-
 flavors-and-images
 
 This would be good, but being able to store and query this metadata from
 glance would be a prerequisite for doing this in heat.

 Can you point to the glance blueprints which would enable this heat
 blueprint?

Sure, we will add references to corresponding glance BPs, since as you say
they are a pre-req.
We'll update the BP in the next couple of days.
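
As a side note, the kind of image metadata such constraints could match
against can already be attached in glance today; a hedged example (the
property keys are just commonly used ones, not something the blueprint
mandates):

  glance image-update <image-id> --property os_distro=fedora --property os_version=20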

Thomas


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] API list operations are not fast as they could because they're dumb

2014-04-17 Thread Salvatore Orlando
On 17 April 2014 04:02, Aaron Rosen aaronoro...@gmail.com wrote:

 Hi,

 Comments inline:


 On Tue, Apr 8, 2014 at 3:16 PM, Salvatore Orlando sorla...@nicira.comwrote:

 I have been recently investigating reports of slowness for list responses
 in the Neutron API.
 This was first reported in [1], and then recently was observed with both
 the ML2 and the NSX plugins.
  The root cause of this issues is that a policy engine check is performed
 for every attribute of every resource returned in a response.
 When tenants grow to a lot of ports, or when the API is executed with
 admin credentials without filters, this might become a non-negligible scale
 issue.
 This issue is mostly due to three factors:
 1) A log statement printing a line in the log for every attribute for
 which no policy criterion is defined; this has been treated with [2]
 2) The fact that for every check neutron currently checks whether cached
 policy rules are still valid [3]
 3) The fact that Neutron anyway performs really a lot of policy checks
 where it should not

 Despite the improvements [2] and [3] (mostly [2]), for a list operation
 Neutron still spends on post-plugin operations (i.e. policy checks) about
 50% of the time it spends in the plugin.
 Solving this problem is not difficult, but it might require changes which
 are worth of a discussion on the mailing list.
 Up to the Havana release policy checks were performed in the plugin; this
 basically made responses dependent on plugin implementation and was
 terrible for API compatibility and portability; we took care of that with
 [4], which moved all policy checks to the API layer. However for one fix
 that we fixed, another thing was broken (*)

 The API layer for list responses puts every item through policy checks to
 see which should not be visible to the user at all, which is fine.
 However it also puts every attribute through a policy check to exclude
 those which should not be visible to the user, such as provider attributes
 for regular users.
 Doing this for every resource might make sense if an attribute should be
 visible or not according to the data in the resource itself.
 For instance a policy that shows port binding attributes could be defined
 for all the ports whose name is ernest.
 This might appear as great flexibility, but does it make any sense at all?
 Does it make sense that an API list operation return a set of attributes
 for some items and  a different one for others?
 I think not.

 For this reason I am thinking we should do what is technically a simple
 change: use policy checks to determine the list of attributes to show only
 once per list response, and then re-use that list for the whole response.
 The limitation here is that we should not have 'attribute-level' policies
 (**) which rely on the resource value.
 I think this limitation is fair. If you like the approach I have some
 code here: http://paste.openstack.org/show/75371/


 I think this makes sense. In theory the first element of the list should
 have all the same columns as the next set of elements so inspecting the
 first one should be fine.


Correct. It would be important however to not have attribute-level checks
which depend on resource data. I reckon this assumption is valid.



 And this leads me to the second part of the discussion I'd like to start.
 The policy engine currently allows me to start a neutron server where,
 for instance, port binding are visible by admins only, and another neutron
 server where any user can see them.
 This kind of makes the API not really portable, as people programming
 against the neutron API might encounter unexpected behaviours.



 This doesn't seem like a neutron specific issue to me. If I understand you
 correctly what you're saying is if an admin changes the policy.json file to
 exclude some data from the response that now the users of the api might
 have to change their code? ..port-binding-extension... X.o


Neutron is the only project which extends policy checks into attributes,
meaning that to a regular user a resource might look different from one
deployment to another.
If one is writing an application or script which uses the Openstack APIs,
it is necessary to perform extra checks to verify whether some attributes
are available or not. They might not be available either because some
extensions are not available or because policy checks are stripping them
off.
The former is a consequence of the fact that Neutron has a very small core and
everything else is an extension, and this is surely not just a Neutron
problem; the latter issue instead is a consequence of enabling flexible
authorisation policies and extending them into attribute visibility.

However, I have no plans at the moment to make changes in this area -
mostly because hardly any change can be made without causing backward
compatibility issue.




 To this aim, one solution would be to 'hardcode' attributes' access
 rights into extensions definition. This way port bindings will 

Re: [openstack-dev] [Heat] Stack snapshots

2014-04-17 Thread Steven Hardy
Hi Thomas,

On Tue, Apr 15, 2014 at 01:16:50PM +0200, Thomas Herve wrote:
 Hi all,
 
 I started working on the stack snapshot blueprint [1] and wrote a first 
 series of patches [2] to get a feeling of what's possible. I have a couple of 
 related design questions though:
 
  * Is a stack snapshot independent of the stack? That's the way I chose for 
 my patches, you start with a stack, but then you can show and delete 
 snapshots independently. The main impact will be how restoration works: is 
 restoration an update action on a stack towards a specific state, or a 
 creation action with backup data?
 
  * Consequently, should be use volume backups (which survive deleting of the 
 original volumes) or volume snapshots (which don't). If the snapshot is 
 dependent of the stack, then we can use the more efficient snapshot 
 operation. But backup is also an interesting use-case, so should it be 
 another action completely?

Firstly, thanks for picking this up - I raised that bp nearly a year ago
after some use-case discussions with users. I think if we can make it work
it should be a pretty interesting feature :)

Re your questions, I've been thinking about it and I actually think there
are two features, which could be independently implemented:

1. Stack snapshot, use the most lightweight approach to snapshotting the
underlying resources, and store state in the heat DB, for easy roll-back to
a previous state, e.g after one or more updates (e.g after the update has
completed)

2. Stack backup, use persistent snapshot interfaces for the underlying
resources, and store the state of the stack outside the DB, e.g in swift.

I think this approach aligns most closely with existing snapshot/backup
conventions (e.g cinder) and also provides the most flexibility for folks
using the features.

My thoughts originally were that you could take one or more snapshots
(probably up to a fairly small limit per stack) and then trigger a special
type of update, where an existing stack is restored to a previous state,
this is the use-case which the users I spoke to described.

The approach you describe where snapshots are more persistent also seems
valid, but I think the creation action from backup data is actually a
slightly different thing from stack-snapshot, and both could be useful.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Release notes for Icehouse

2014-04-17 Thread Fei Long Wang
Hi Tom,

Thanks for the reminder. I'm not sure if there is anyone from the Glance team
working on this, but I would like to highlight it in tonight's Glance weekly
meeting and I will see what I can do.

Thanks & Best regards,
Fei Long Wang (王飞龙)
-
IBM Cloud OpenStack Platform
Tel: 8610-82450513 | T/L: 905-0513
Email: flw...@cn.ibm.com
China Systems & Technology Laboratory in Beijing
-




From:   Tom Fifield t...@openstack.org
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
Date:   04/17/2014 10:08 AM
Subject:[openstack-dev] [glance] Release notes for Icehouse



Hi,

Is someone working on release notes for glance? At the moment it's
looking pretty bare :)

https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse


Regards,


Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2014-04-17 Thread Tristan Cacqueray
confirmed

On 04/17/2014 06:51 AM, John Dickinson wrote:
 I'd like to announce my Technical Committee candidacy.
 
 I've been involved with OpenStack since it began. I'm one of the original 
 authors of Swift, and I have been serving as PTL since the position was 
 established. I'm employed by SwiftStack, a company building management and 
 integration tools for Swift clusters.
 
 OpenStack is a large (and growing) set of projects, unified under a common 
 open governance model. The important part about OpenStack is not the pieces 
 that make up the stack; it's the concept of open. We, as OpenStack 
 collectively, are a set of cooperating projects striving for excellence on 
 our own, but stronger when put together.
 
 As OpenStack moves forward, I believe the most important challenges the TC 
 faces are:
 
 - Ensuring high-quality, functioning, scalable code is delivered to users.
 - Working with the Board of Directors to establish conditions around 
 OpenStack trademark usage.
 - Ensuring the long-term success of OpenStack by lowering code contribution 
 barriers, incorporating feedback from non-developers, and promoting OpenStack 
 to new users.
 
 As a member of the TC, I will work to ensure these challenges are addressed. 
 I appreciate your vote for me in the TC election.
 
 --John
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][Nova][Neutron]Detach interface will delete the port

2014-04-17 Thread Sergey Kraynev
There is interesting patch on review
https://review.openstack.org/#/c/77043/15.
I suppose that it's related with discussed problems. Possibly we should
wait when it will be merged and then check mentioned use-cases.

Regards,
Sergey.


On 17 April 2014 12:18, Huangtianhua huangtian...@huawei.com wrote:





 *From:* Sergey Kraynev [mailto:skray...@mirantis.com]
 *Sent:* 17 April 2014 15:35
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Heat][Nova][Neutron]Detach interface will
 delete the port



 Hello Huang.



 You are right, that this problem is presented in networks update for
 OS::Nova::Server. I have known about it, and I wanted to discuss it with
 Steve Baker, but possibly forgot to do it. Thank you, that you raise this
 thread.



 About issue.



 The cause why it happens is simple: when nova calls detach_interface,
  port will be detached  and deleted at all.





  I think there are two solutions:

 First:

 Heat get the port information before to “detach”, and to create the port
 again before to “attach”.

 But I think it looks ugly and will increase risk failure for re-create.



 I agree that it's not useful solution. This approach has a lot of bad
 sides and one of them :

  - if you update only server, your other resources should stay without
 changes, but in this case port will be recreated. (so it will be new
 different resource)



  Second:

 Neutron provide a detach_port api to nova, so that nova provide the real
 “detach” not “delete” to heat.





 I have told with folk from neutron team and they told me that neutron does
 not have such api and it's not possible to do this thing.



 So I think, that problem should be solved in nova. F.e. will be good to
 provide detach_interface command with additional flag delete_port=True.
 (some kind of soft detach).

 In this case we could use existing port after detaching.



 --

    We discussed it in our team; it relates to server_delete also: if
 we update the stack just to delete the server, the port will be deleted
 too.

 So if we want to solve the problem in Nova, the instance-delete process
 needs to be modified. Maybe the server_delete API needs an additional flag
 too.

  But this change seems useful for Heat only, and maybe it's not easy
 to do :)

 Regards,

 Sergey.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
IMHO, zero-copy approach is better
VMThunder's on-demand transferring is the same thing as your zero-copy 
approach.
VMThunder uses iSCSI as the transferring protocol, which is option #b of 
yours.




Under #b approach, my former experience from our previous similar
Cloud deployment (not OpenStack) was that: under 2 PC server storage
nodes (general *local SAS disk*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
VMs in a minute.

Suppose booting one instance requires reading 300MB of data, so 500 of them
require 150GB. Each of the two storage servers needs to send data at a rate of 
150GB/2/60 = 1.25GB/s on average. This is absolutely a heavy burden even 
for high-end storage appliances. In production systems, this request (booting 
500 VMs in one shot) will significantly disturb other running instances 
accessing the same storage nodes.


VMThunder eliminates this problem by P2P transferring and on-compute-node
caching. Even a PC server with one 1Gb NIC (this is a true PC server!) can boot
500 VMs in a minute with ease. For the first time, VMThunder makes bulk 
provisioning of VMs practical for production cloud systems. This is the 
essential value of VMThunder.








===
From: Zhi Yan Liu lzy@gmail.com
Date: 2014-04-17 0:02 GMT+08:00
Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process 
of a number of vms via VMThunder
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org


Hello Yongquan Fu,

My thoughts:

1. Nova already supports an image caching mechanism. It caches the image
on the compute host which a VM was provisioned from before, and the next
provisioning (booting the same image) doesn't need to transfer it again,
unless the cache manager clears it up.
2. P2P transferring and prefetching are still based on a copy mechanism;
IMHO, a zero-copy approach is better, and even transferring/prefetching
could be optimized by such an approach. (I have not checked VMThunder's
on-demand transferring, but it is a kind of transferring as well, at least
going by its literal meaning.)
And btw, IMO, we have two ways to follow the zero-copy idea:
a. when Nova and Glance use same backend storage, we could use storage
special CoW/snapshot approach to prepare VM disk instead of
copy/transferring image bits (through HTTP/network or local copy).
b. without unified storage, we could attach volume/LUN to compute
node from backend storage as a base image, then do such CoW/snapshot
on it to prepare root/ephemeral disk of VM. This way just like
boot-from-volume but different is that we do CoW/snapshot on Nova side
instead of Cinder/storage side.

For option #a, we have already got some progress:
https://blueprints.launchpad.net/nova/+spec/image-multiple-location
https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler
https://blueprints.launchpad.net/nova/+spec/vmware-clone-image-handler

Under #b approach, my former experience from our previous similar
Cloud deployment (not OpenStack) was that: under 2 PC server storage
nodes (general *local SAS disk*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
VMs in a minute.

For vmThunder topic I think it sounds a good idea, IMO P2P, prefacing
is one of optimized approach for image transferring valuably.

zhiyan


On Wed, Apr 16, 2014 at 9:14 PM, yongquan Fu quanyo...@gmail.com wrote:

 Dear all,



  We would like to present an extension to the vm-booting functionality of
 Nova when a number of homogeneous vms need to be launched at the same time.



 The motivation for our work is to increase the speed of provisioning vms for
 large-scale scientific computing and big data processing. In that case, we
 often need to boot tens and hundreds virtual machine instances at the same
 time.


 Currently, under the Openstack, we found that creating a large number of
 virtual machine instances is very time-consuming. The reason is the booting
 procedure is a centralized operation that involve performance bottlenecks.
 Before a virtual machine can be actually started, OpenStack either copy the
 image file (swift) or attach the image volume (cinder) from storage server
 to compute node via network. Booting a single VM need to read a large amount
 of image data from the image storage server. So creating a large number of
 virtual machine instances would cause a significant workload on the servers.
 The servers become quite busy even unavailable during the deployment phase.
 It would consume a very long time before the whole virtual machine cluster
 useable.



   Our extension is based on our work on vmThunder, a novel mechanism
 accelerating the deployment of large number virtual machine instances. It is
 written in Python, can be integrated with OpenStack easily. VMThunder
 addresses the problem described above by following improvements: on-demand
 

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your zero-copy
 approach.
 VMThunder is uses iSCSI as the transferring protocol, which is option #b of
 yours.


IMO we'd better use a backend-storage-optimized approach to access
the remote image from the compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short on stability under heavy I/O
workload in a production environment; it can cause either the VM filesystem
to be marked read-only or a VM kernel panic.


Under #b approach, my former experience from our previous similar
Cloud deployment (not OpenStack) was that: under 2 PC server storage
nodes (general *local SAS disk*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
VMs in a minute.
 suppose booting one instance requires reading 300MB of data, so 500 ones
 require 150GB.  Each of the storage server needs to send data at a rate of
 150GB/2/60 = 1.25GB/s on average. This is absolutely a heavy burden even
 for high-end storage appliances. In production  systems, this request
 (booting
 500 VMs in one shot) will significantly disturb  other running instances
 accessing the same storage nodes.

 VMThunder eliminates this problem by P2P transferring and on-compute-node
 caching. Even a pc server with one 1gb NIC (this is a true pc server!) can
 boot
 500 VMs in a minute with ease. For the first time, VMThunder makes bulk
 provisioning of VMs practical for production cloud systems. This is the
 essential
 value of VMThunder.


As I said, Nova already has an image caching mechanism, so in
this case P2P is just an approach that could be used for downloading or
preheating the image cache.

I think P2P transferring/pre-caching sounds like a good way to go, as I
mentioned as well, but for this area I'd actually like to see something
like zero-copy + CoR (copy-on-read). On one hand we can leverage the capability
of on-demand downloading of image bits with a zero-copy approach; on the other
hand we can avoid reading data from the remote image every time thanks to
CoR.
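
A minimal sketch of the CoW-overlay half of option #b on the compute node,
assuming the base image is already attached as a local block device (the path
is a placeholder); copy-on-read would additionally have to be enabled on the
QEMU drive so that blocks read from the base get cached locally:

  # Thin qcow2 overlay backed by the attached base LUN: reads fall through to
  # the base image, writes stay in the local overlay (copy-on-write)
  qemu-img create -f qcow2 -o backing_file=/dev/mapper/base-image-lun instance-disk.qcow2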

zhiyan




 ===
 From: Zhi Yan Liu lzy@gmail.com
 Date: 2014-04-17 0:02 GMT+08:00
 Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting
 process of a number of vms via VMThunder
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org



 Hello Yongquan Fu,

 My thoughts:

 1. Currently Nova has already supported image caching mechanism. It
 could caches the image on compute host which VM had provisioning from
 it before, and next provisioning (boot same image) doesn't need to
 transfer it again only if cache-manger clear it up.
 2. P2P transferring and prefacing is something that still based on
 copy mechanism, IMHO, zero-copy approach is better, even
 transferring/prefacing could be optimized by such approach. (I have
 not check on-demand transferring of VMThunder, but it is a kind of
 transferring as well, at last from its literal meaning).
 And btw, IMO, we have two ways can go follow zero-copy idea:
 a. when Nova and Glance use same backend storage, we could use storage
 special CoW/snapshot approach to prepare VM disk instead of
 copy/transferring image bits (through HTTP/network or local copy).
 b. without unified storage, we could attach volume/LUN to compute
 node from backend storage as a base image, then do such CoW/snapshot
 on it to prepare root/ephemeral disk of VM. This way just like
 boot-from-volume but different is that we do CoW/snapshot on Nova side
 instead of Cinder/storage side.

 For option #a, we have already got some progress:
 https://blueprints.launchpad.net/nova/+spec/image-multiple-location
 https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler
 https://blueprints.launchpad.net/nova/+spec/vmware-clone-image-handler

 Under #b approach, my former experience from our previous similar
 Cloud deployment (not OpenStack) was that: under 2 PC server storage
 nodes (general *local SAS disk*, without any storage backend) +
 2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
 VMs in a minute.

 For vmThunder topic I think it sounds a good idea, IMO P2P, prefacing
 is one of optimized approach for image transferring valuably.

 zhiyan

 On Wed, Apr 16, 2014 at 9:14 PM, yongquan Fu quanyo...@gmail.com wrote:

 Dear all,



  We would like to present an extension to the vm-booting functionality of
 Nova when a number of homogeneous vms need to be launched at the same
 time.



 The motivation for our work is to increase the speed of provisioning vms
 for
 large-scale scientific computing and big data processing. In that case, we
 often need to boot tens and hundreds virtual machine instances at the same
 time.


 Currently, under the Openstack, we found that creating a large number
 of
 virtual machine instances is very 

Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread Kevin Benton
Well we definitely need a better way to get multiple IP addresses onto one
host. The current steps are terrible for a user and even for an
orchestration system like heat. I can't imagine how convoluted a template
would look to automate that process...

I'm not suggesting multiple NICs is the only approach, but I don't think
STP is a very strong excuse. First, if we trust the spoofing filtering of
security groups, looped traffic won't make it out of the other side of the
instance because it won't have the correct MAC on egress. Second, if a
Neutron implementation has no STP protection now, a tenant can just use two
instances with two NICs, bridge on both, and take down both networks (see
diagram).

NET 1 ---------------------
         |           |
      +-----+     +-----+
      |  A  |     |  B  |
      +-----+     +-----+
         |           |
NET 2 ---------------------


On Thu, Apr 17, 2014 at 12:06 AM, Aaron Rosen aaronoro...@gmail.com wrote:

 Nova currently is preventing one from attaching multiple nics on the same
 L2. That said I don't think we've clearly determined a use case for having
 multiple nics on the same L2. One reason why we don't allow this is doing
 so would allow a tenant to easily loop the network and cause a bcast storm
 and neutron doesn't have any mechanism today to break these loops today.
 One could just enable STP on ovs to do so though I think we should come up
 with a good use case before allowing this type of thing.


 On Wed, Apr 16, 2014 at 11:53 PM, Kevin Benton blak...@gmail.com wrote:

 This seems painful for a tenant workflow to get multiple addresses. I
 would like to improve this during the Juno cycle. What is the limitation
 that is blocking the multi-nic use cases? Is it Nova?


 On Wed, Apr 16, 2014 at 11:27 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 Hi Kevin,

 You'd would just create ports that aren't attached to instances and
 steal their ip_addresses from those ports and put those in the
 allowed-address-pairs on a port OR you could change the allocation range on
 the subnet to ensure these ips were never handed out. That's probably the
 right approach.

 Aaron


 On Wed, Apr 16, 2014 at 10:03 PM, Kevin Benton blak...@gmail.comwrote:

 Yeah, I was aware of allowed address pairs, but that doesn't help with
 the IP allocation part.

 Is this the tenant workflow for this use case?

 1. Create an instance.
 2. Wait to see what which subnet it gets an allocation from.
 3. Pick an IP from that subnet that doesn't currently appear to be in
 use.
 4. Use the neutron-cli or API to update the port object with the extra
 IP.
 5. Hope that Neutron will never allocate that IP address for something
 else.


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 Whoops Akihiro beat me to it :)


 On Wed, Apr 16, 2014 at 9:46 PM, Aaron Rosen aaronoro...@gmail.comwrote:

 The allowed-address-pair extension that was added here (
 https://review.openstack.org/#/c/38230/) allows us to add arbitrary
 ips to an interface to allow them. This is useful if you want to run
 something like VRRP between two instances.


 On Wed, Apr 16, 2014 at 9:39 PM, Kevin Benton blak...@gmail.comwrote:

 I was under the impression that the security group rules blocked
 addresses not assigned by neutron[1].

 1.
 https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L188


 On Wed, Apr 16, 2014 at 9:20 PM, Aaron Rosen 
 aaronoro...@gmail.comwrote:

 You can do it with ip aliasing and use one interface:

 ifconfig eth0 10.0.0.22/24
 ifconfig eth0:1 10.0.0.23/24
 ifconfig eth0:2 10.0.0.24/24

 2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq
 state DOWN qlen 1000
 link/ether 40:6c:8f:1a:a9:31 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.22/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
 inet 10.0.0.23/24 brd 10.0.0.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
 inet 10.0.0.24/24 brd 10.0.0.255 scope global secondary eth0:2
valid_lft forever preferred_lft forever



 On Wed, Apr 16, 2014 at 8:53 PM, Kevin Benton blak...@gmail.comwrote:

 Web server running multiple SSL sites that wants to be compatible
 with clients that don't support the SNI extension. There is no way 
 for a
 server to get multiple IP addresses on the same interface is there?


 On Wed, Apr 16, 2014 at 5:50 PM, Aaron Rosen 
 aaronoro...@gmail.com wrote:

 This is true. Several people have asked this same question over
 the years though I've yet to hear a use case why one really need to 
 do
 this. Do you have one?


 On Wed, Apr 16, 2014 at 3:12 PM, Ronak Shah 
 ro...@nuagenetworks.net wrote:

 Hi Vikash,
 Currently this is not supported. the NIC not only needs to be in
 different subnet, they have to be in different network as well 
 (container
 for the subnet)

 Thanks
 Ronak

 On Wed, Apr 16, 2014 at 3:51 AM, Vikash Kumar 
 

[openstack-dev] swift doubts

2014-04-17 Thread Sowmya Nethi
Hello everyone,

 I want to integrate Swift with NAS.

So, I have mounted the NAS storage on my Linux machine and, in place of 
/dev/sdb (the device that we specify in the Swift installation process; I 
followed this link for the Swift installation: 
http://docs.openstack.org/developer/swift/howto_installmultinode.html ),
I specified the mounted NAS device, but it was not successful.

 So, can anyone please help me with this?


Regards,
Sowmya Nethi



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][VMWare] qemu-img convert problems for VM creation

2014-04-17 Thread Jay Lau
Thanks Brown, I was using the same approach as you did but it always failed;
not sure if it is caused by my image, but the image does work for KVM before
conversion.

I will try your image later.

Thanks!


2014-04-17 11:04 GMT+08:00 Eric Brown bro...@vmware.com:

 Good timing.  I just tried this today for the first time.
 Here's what worked for me:

 wget
 http://cloud-images.ubuntu.com/saucy/current/saucy-server-cloudimg-i386-disk1.img

 qemu-img convert -f qcow2 saucy-server-cloudimg-i386-disk1.img -O vmdk
 saucy-server-cloudimg-i386-disk1.vmdk

 glance image-create --name saucy-cloud --is-public=True
 --container-format=bare --disk-format=vmdk --property
 vmware_disktype=sparse --property vmware_adaptertype=ide 
 saucy-server-cloudimg-i386-disk1.vmdk

 nova boot --config-drive=true --image saucy-cloud --flavor m1.small --poll
 saucy


 On Apr 16, 2014, at 6:36 PM, Jay Lau jay.lau@gmail.com wrote:

 Hi,

 Has anyone ever created a VMware image using qemu-img convert from a QCOW2
 image? I did some tests according to the following guide, but the VM creation
 always failed.

 I tried to log on to the console of the VM and found that the console was
 reporting the VM had booted from PXE and found no operating system.

 ==========================

 Using the qemu-img utility, disk images in several formats (such as,
 qcow2) can be converted to the VMDK format.

 For example, the following command can be used to convert a qcow2 Ubuntu
 Precise cloud image
 (http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img):

 $ qemu-img convert -f qcow2 
 ~/Downloads/precise-server-cloudimg-amd64-disk1.img \
 -O vmdk precise-server-cloudimg-amd64-disk1.vmdk

 VMDK disks converted through qemu-img are always monolithic sparse VMDK
 disks with an IDE adapter type. Using the previous example of the Precise
 Ubuntu image after the qemu-img conversion, the command to upload the
 VMDK disk should be something like:

 $ glance image-create --name precise-cloud --is-public=True \
 --container-format=bare --disk-format=vmdk \
 --property vmware_disktype=sparse \
 --property vmware_adaptertype=ide  \
 precise-server-cloudimg-amd64-disk1.vmdk

 ===

 --
 Thanks,

 Jay
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org

 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 【openstack-dev】【nova】discussion about add support to SSD ephemeral storage

2014-04-17 Thread Yuzhou (C)
Hi Daniel,
The intention of the image types ('default', 'fast', 'shared',
'sharedfast') looks like volume types in Cinder. So I think there are two
solutions:
1. Like using volume types to configure multiple storage back-ends in Cinder,
we could extend the Nova API and create an image-type resource to configure
multiple image back-ends in Nova.
e.g.
in nova.conf,
libvirt_image_type=default:qcow2, fast:qcow2, shared:rbd, 
sharedfast:rbd
instance_path=default:/var/nova/images/hdd, 
fast:/var/nova/images/ssd
images_rbd_pool=shared:main,sharedfast:mainssd

nova image-type-create normal_image
nova image-type-key normal_image root_disk_type=default
nova image-type-key normal_image ephemeral_disk_type=default
nova image-type-key normal_image swap_disk_type=default  

nova image-type-create fast_image
nova image-type-key fast_image root_disk_type=fast
nova image-type-key fast_image ephemeral_disk_type=default
nova image-type-key fast_image swap_disk_type=fast   

nova flavor-key m3.xlarge set quota:image-type=fast_image

 
2. As in our discussion in the mails, image types are defined in the
configuration file as an enumerated type, i.e. set libvirt_image_type in
nova.conf.
e.g.
in nova.conf,
libvirt_image_type=default:qcow2, fast:qcow2, shared:rbd, 
sharedfast:rbd
instance_path=default:/var/nova/images/hdd, 
fast:/var/nova/images/ssd
images_rbd_pool=shared:main,sharedfast:mainssd

nova flavor-key m3.xlarge set ephemeral_storage_type=fast
or more fine-grained,
nova flavor-key m3.xlarge set quota:root_disk_type=fast
nova flavor-key m3.xlarge set quota:ephemeral_disk_type=default
nova flavor-key m3.xlarge set quota:swap_disk_type=fast


Which solution do you prefer?

If you prefer the second solution, I think it would be better to set
libvirt_image_type like this: libvirt_image_type=default:raw:HDD, fast:raw:SSD.
What *fast* means, I think only the deployer of OpenStack knows clearly, so a
description field would be needed. HDD and SSD are the descriptions of the
image type names.
Maybe in the second solution we would not need to create/delete image-type
resources, but I think an API for listing image-types is needed. Do you think
so?
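
(To make the flavor side concrete, setting such extra specs from
python-novaclient would look roughly like the sketch below; the credentials
are placeholders and the extra-spec key names are just the ones proposed in
this thread, not an existing Nova contract:)

    from novaclient.v1_1 import client

    # Placeholders: admin credentials and the flavor name.
    nova = client.Client('admin', 'secret', 'admin',
                         'http://127.0.0.1:5000/v2.0')

    # Attach the proposed per-flavor disk type hints as extra specs.
    flavor = nova.flavors.find(name='m3.xlarge')
    flavor.set_keys({'quota:root_disk_type': 'fast',
                     'quota:ephemeral_disk_type': 'default',
                     'quota:swap_disk_type': 'fast'})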


 I've already seen people asking for ability to have a choice of local image 
 backends per flavor even before you raised the SSD idea.
I have looked through the Nova blueprint list and not found any blueprint
about this idea, so I will register a BP and implement it.

Thanks.

Zhou Yu


 -Original Message-
 From: Daniel P. Berrange [mailto:berra...@redhat.com]
 Sent: Wednesday, April 16, 2014 4:48 PM
 To: Yuzhou (C)
 Cc: openstack-dev@lists.openstack.org; Luohao (brian); Liuji (Jeremy); Bohai
 (ricky)
 Subject: Re: 【openstack-dev】【nova】discussion about add support to SSD
 ephemeral storage
 
 On Wed, Apr 16, 2014 at 02:17:13AM +, Yuzhou (C) wrote:
  Hi Daniel,
 
   Thanks for your comments about this
  BP:https://review.openstack.org/#/c/83727/
 
   My initial thoughts is to do little changes then get better
 performance of guest vm. So it is a bit too narrowly focused.
 
   After review SSD use case, I totally agree with your comments. I
 think if I want to implement the broader picture, there are many work items
 that need to do.
 
  1. Add support to create flavor with SSD ephemeral storage.
   The cloud adminstrator create the flavor that indicate which
 backend should be used per instance. e.g.
nova flavor-key m1.ssd set
 quota:ephemeral_storage_type=ssd
  (root_disk ephemeral_disk and swap_disk are placed onto 
  a
 ssd)
   Or more fine grained, e.g.
nova flavor-key m1.ssd set quota:root_disk_type=ssd
nova flavor-key m1.ssd set quota:ephemeral_disk_type=hd
nova flavor-key m1.ssd set quota:swap_disk_type=ssd
  (root_disk and swap_disk are placed onto a ssd,
 ephemeral_disk is
  placed onto a harddisk)
 
 I don't think you should be using the term 'ssd' here, or indeed anywhere.
 We should just be letting the admin configure multiple local image types, and
 given them each a name. Then just refer to the image types by name.
 We don't need to care whether they're SSD backed or not - just that the
 admin can configure whatever backends they want to.  I've already seen
 people asking for ability to have a choice of local image backends per flavour
 even before you raised the SSD idea.
 
  2. When config nova,the deployer of openstack configure
  ephemeral_storage_pools e.g.
   if libvirt_image_type=default (local disk)
ephemeral_storage_pools=path1,path2
   if  libvirt_image_type=RBD
 ephemeral_storage_pools=rdb1,rdb2
 
 We have to bear in 

Re: [openstack-dev] 【openstack-dev】【nova】discussion about add support to SSD ephemeral storage

2014-04-17 Thread Daniel P. Berrange
On Thu, Apr 17, 2014 at 10:06:03AM +, Yuzhou (C) wrote:
 Hi Daniel,
   The intention of image type ('default', 'fast', ' shared', 
 'sharedfast') look like volume
   type in cinder. So I think there are two solutions :

I was explicitly *NOT* considering those names to be standardized,
because that artificially limits how many different image backends
the admin can set up. I merely used those names as examples - the
site admin should be free to use whatever they decide are the most
relevant names to them. Nova shouldn't interpret the image names
in any way - just treat them as unique keys for looking up the
parameters defined in the config/flavour.

 1. Like using volume type to configure a multiple-storage back-end in cinder, 
 we
 could extend nova API , then create image-type resource to configure a 
 multiple-image
 back-end in nova.

   e.g.
   in nova.conf,
   libvirt_image_type=default:qcow2, fast:qcow2, shared:rbd, 
 sharedfast:rbd
   instance_path=default:/var/nova/images/hdd, 
 fast:/var/nova/imges/ssd
   images_rbd_pool=shared:main,sharedfast:mainssd
 
   nova image-type-create normal_image
 nova image-type-key normal_image root_disk_type=default
 nova image-type-key normal_image ephemeral _disk_type=default 
 nova image-type-key normal_image swap_disk_type=default  
 
   nova image-type-create fast_image
 nova image-type-key fast_image root_disk_type=fast
 nova image-type-key fast_image ephemeral _disk_type=default 
 nova image-type-key fast_image swap_disk_type=fast   
 
   nova flavor-key m3.xlarge set quota:image-type= fast_image  

This concept shouldn't be tied to cinder in any way. This is purely something
for nova to be concerned with.


 
  
 2. Like our discussion in mails, image types are defined in configuration 
 file, enumerated type, ie set libvirt_image_type in nova.conf
   e.g.
   in nova.conf,
   libvirt_image_type=default:qcow2, fast:qcow2, shared:rbd, 
 sharedfast:rbd
   instance_path=default:/var/nova/images/hdd, 
 fast:/var/nova/imges/ssd
   images_rbd_pool=shared:main,sharedfast:mainssd
 
   nova flavor0key m3.xlarge set ephemeral_storage_type =fast
   or more fine grained,
   nova flavor-key m3.xlarge set quota:root_disk_type=fast
   nova flavor-key m3.xlarge set quota:ephemeral_disk_type=default
   nova flavor-key m3.xlarge set quota:swap_disk_type=fast

This one is what I'd prefer.

 
 
   Which solution do you prefer?
 
   If you prefer second solution, I think better to set
  libvirt_image_type like this: libvirt_image_type=default:raw:HDD,
 fast:raw:SSD *fast* means what, I think only the deployer of openstack
 knows clearly. So description field would be need. HDD and SSD are
 the description of image type name.

I don't really see a need for the extra description here. 

 Maybe in second solution, we would not need to create/delete
 image-type resource, but I think the api about listing image-types
 is needed. Do you think so?

I guess there could be a need for a way to list image types so the
person defining flavours knows what the host admin has made available
to them.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Stack snapshots

2014-04-17 Thread Duncan Thomas
You're probably going to want to look at the proposals for volume
consistency groups in Cinder - backing up/snapshotting volumes
independently is likely to cause issues with some applications, since
different VMs will be in slightly different states, and you can get
lost or duplicated work/records/etc.

On 15 April 2014 12:16, Thomas Herve thomas.he...@enovance.com wrote:
 Hi all,

 I started working on the stack snapshot blueprint [1] and wrote a first 
 series of patches [2] to get a feeling of what's possible. I have a couple of 
 related design questions though:

  * Is a stack snapshot independent of the stack? That's the way I chose for 
 my patches, you start with a stack, but then you can show and delete 
 snapshots independently. The main impact will be how restoration works: is 
 restoration an update action on a stack towards a specific state, or a 
 creation action with backup data?

  * Consequently, should we use volume backups (which survive deletion of the
 original volumes) or volume snapshots (which don't)? If the snapshot is
 dependent on the stack, then we can use the more efficient snapshot
 operation. But backup is also an interesting use-case, so should it be
 another action completely?


 [1] https://blueprints.launchpad.net/heat/+spec/stack-snapshot

 [2] 
 https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/stack-snapshot,n,z

 Thanks,

 --
 Thomas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Jesse Pretorius
On 17 April 2014 11:11, Zhi Yan Liu lzy@gmail.com wrote:

 As I said currently Nova already has image caching mechanism, so in
 this case P2P is just an approach could be used for downloading or
 preheating for image caching.

 I think  P2P transferring/pre-caching sounds a  good way to go, as I
 mentioned as well, but actually for the area I'd like to see something
 like zero-copy + CoR. On one hand we can leverage the capability of
 on-demand downloading image bits by zero-copy approach, on the other
 hand we can prevent to reading data from remote image every time by
 CoR.


This whole discussion reminded me of this:

https://blueprints.launchpad.net/glance/+spec/glance-bittorrent-delivery
http://tropicaldevel.wordpress.com/2013/01/11/an-image-transfers-service-for-openstack/

The general idea was that Glance would be able to serve images through
torrents, enabling the capability for compute hosts to participate in image
delivery. Well, the second part was where I thought it was going - I'm not
sure if that was the intention.

It didn't seem to go anywhere, but I thought it was a nifty idea.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Access to the cloud for unconfirmed users

2014-04-17 Thread Roman Bodnarchuk

Hello,

Right now I am trying to set up self-signup for users of our OpenStack
cloud.  One of the essential points of this signup is verification of the
user's email address - until a user proves that this address belongs to
him/her, he/she should not be able to do anything useful in the cloud.


At the same time, partial access to the cloud is very desirable - at a
minimum, a user should be able to authenticate to Keystone and
successfully obtain a token, but should not be able to change anything
in other services or access other users' information.


It is possible to disable a user with the corresponding field in the User
model, but this will not let us use Keystone as a source of authentication
data (Keystone returns 401 for a request to /auth/token with the credentials
of a disabled user).


Another way to do this would be to create a special role like
`unconfirmed` for a default project/domain, and assign it to users with an
unconfirmed email (this will be the only role assigned to them).  Thus,
it will be possible to authenticate them, but they won't be able to use the
system.
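
(For illustration, creating and granting such a role from python-keystoneclient
could look roughly like this - a sketch; the endpoint, token and IDs are
placeholders:)

    from keystoneclient.v3 import client

    # Placeholders: admin token/endpoint and the IDs of the new user
    # and the default project.
    keystone = client.Client(token='ADMIN_TOKEN',
                             endpoint='http://127.0.0.1:35357/v3')

    # Created once by the operator.
    unconfirmed = keystone.roles.create(name='unconfirmed')

    # Granted to every self-signed-up user until the email is verified.
    keystone.roles.grant(role=unconfirmed,
                         user='NEW_USER_ID',
                         project='DEFAULT_PROJECT_ID')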


So, the question - does this approach make sense?  Are there any
dangerous resources in OpenStack which a user with an auth token and some
unknown role can access?


Any comments about other possible solutions are also welcomed.

Thanks,
Roman

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [nodepool] Modification for use clean KVM/QEMU

2014-04-17 Thread Vladislav Kuzmin
I opened blueprint for detailed description about this feature
https://blueprints.launchpad.net/openstack-ci/+spec/nodepool-kvm-backend


  Hi community!
  I have a modification for Nodepool which allows using it with a
  clean KVM/QEMU host while still supporting OpenStack. This allows the
  parallel use of KVM/QEMU and OpenStack hosts. As well, this saves the
  computing resources required to run OpenStack and the time spent setting
  it up.
  Does the community need this feature? Should it be added upstream?

 Can you explain a little more what the change is? It's not entirely
 clear to me from the description.

 I'd say in general always default to proposing upstream. If nothing else
 it will drive a conversation around the feature to figure out how to
 support what's desired in the nodepool base.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Provider Framework and Flavor Framework

2014-04-17 Thread Zang MingJie
Hi Eugene:

I have several questions

1. I wonder if tags are really needed. For example, if I want an ipsec
vpn, I'll define a flavor which directly refers to the ipsec provider.
With the current design, almost all users will end up creating flavors
like this:

ipsec tags=[ipsec]
sslvpn tags=[sslvpn]

so the tags are totally useless, and I suggest replacing tags with the
provider name/uuid. It is much more straightforward and easier.

2. currently the provider name is something configured in neutron.conf:

service_provider=service_type:name:driver[:default]

the name is arbitrary, the user may set whatever he wants, and the name is
also used in the service instance stored in the database. I don't know why we
give the user the ability to name the provider; the name is totally
meaningless and isn't referred to anywhere. I think each service
provider should provide an alias, so the user can configure services more
flexibly:

service_provider=service_type:driver_alias[:default]

and the alias can also be used as an identifier in RPC or other places;
also, I don't want to see any user-configured name used as an
identifier.


On Wed, Apr 16, 2014 at 5:10 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 Hi folks,

 In Icehouse there were attempts to apply Provider Framework ('Service Type
 Framework') approach to VPN and Firewall services.
 Initially Provider Framework was created as a simplistic approach of
 allowing user to choose service implementation.
 That approach definitely didn't account for public cloud case where users
 should not be aware of underlying implementation, while being able to
 request capabilities or a SLA.

 However, Provider Framework consists of two parts:
 1) API part.
 That's just 'provider' attribute of the main resource of the service plus a
 REST call to fetch available providers for a service

 2) Dispatching part
 That's a DB table that keeps mapping between resource and implementing
 provider/driver.
 With this mapping it's possible to dispatch a REST call to the particular
 driver that is implementing the service.

 As we are moving to better API and user experience, we may want to drop the
 first part, which makes the framework non-public-cloud-friendly but the
 second part will remain if we ever want to support more than one driver
 simultaneously.

 Flavor framework proposes choosing implementation based on capabilities, but
 the result of the choice (e.g. scheduling) is still a mapping between
 resource and the driver.
 So the second part is still needed for the Flavor Framework.
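
(As a toy illustration of this dispatching part - not the actual Neutron code -
the resource-to-provider mapping and the routing it enables look roughly like
this:)

    class ProviderDispatcher(object):
        """Illustrative only: route calls for a resource to the provider
        that was recorded for it."""

        def __init__(self, drivers):
            self.drivers = drivers       # provider name -> driver object
            self.associations = {}       # resource id -> provider name

        def bind(self, resource_id, provider_name):
            # The 'DB table' part: remember who implements this resource.
            self.associations[resource_id] = provider_name

        def dispatch(self, resource_id, method_name, *args, **kwargs):
            # The REST-call routing part: look up the driver and delegate.
            driver = self.drivers[self.associations[resource_id]]
            return getattr(driver, method_name)(*args, **kwargs)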

 I think it's a good time to continue the discussion on Flavor and Provider
 Frameworks.

 Some references:
 1. Flavor Framework description
 https://wiki.openstack.org/wiki/Neutron/FlavorFramework
 2. Flavor Framework PoC/example code https://review.openstack.org/#/c/83055/
 3. FWaaS integration with Provider framework:
 https://review.openstack.org/#/c/60699/
 4. VPNaaS integration with Provider framework:
 https://review.openstack.org/#/c/41827/

 I'd like to see the work on (3) and (4) continued, considering Provider
 Framework is a basis for Flavor Framework.

 Thanks,
 Eugene.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Provider Framework and Flavor Framework

2014-04-17 Thread Eugene Nikanorov
Hi Zang,

1.
 so the tags is totally useless, and I suggest replace tags by provider
 name/uuid. It is much more straightforward and easier.
The funny thing is that the goal of the flavor framework is directly the
opposite: we need to hide the provider/vendor name. SSL VPN or IPsec could be
implemented by different vendors, and we don't want to expose the vendor or
provider name. Instead, we may expose the type or other capabilities. It may
be a coincidence that the names 'sslvpn' and 'ipsec' map both to types of VPN
and to the provider names, but there are plenty of other cases (for other
services) where additional parameters are needed.

2.
 I don't know why give user the ability to name the provider, and the name
is totally
 nonsense, it hasn't been referred anywhere.
A user isn't given the ability to name providers; only the deployer/cloud
admin has the right to do that.
Right now, when the service implementation is chosen by the user directly by
specifying the provider he/she wants, the provider name is returned when you
list providers.
It is also used in REST call dispatching.

We definitely will not expose anything like driver aliases to the user as
it is a particular implementation detail.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread CARVER, PAUL

Akihiro Motoki wrote:

To cope with such cases, allowed-address-pairs extension was implemented.
http://docs.openstack.org/api/openstack-network/2.0/content/allowed_address_pair_ext_ops.html
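
(For reference, the call that adds an allowed address pair to a port looks
roughly like this with python-neutronclient - a sketch only; the credentials,
port UUID and address are placeholders, and whether the default policy lets a
plain tenant make this call is exactly the question below:)

    from neutronclient.v2_0 import client

    # Placeholders: use real credentials and the port's actual UUID.
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # Allow an extra address to be sent/received on this port.
    neutron.update_port(
        'PORT_UUID',
        {'port': {'allowed_address_pairs': [{'ip_address': '10.0.0.23'}]}})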


Question on this in particular: Is a tenant permitted to do this? If so, what 
exactly is the iptables rule accomplishing? If the intent was to prevent the 
tenant from spoofing someone else's IP then forcing the tenant to take an extra 
step of making an API call prior to attempting to spoof doesn't really stop 
them.

Question in general: Is there an easy way to see the whole API broken out by 
privilege level? I'd like to have a clear idea of all the functionality that 
requires a cloud operator/admin to perform vs the functionality that a tenant 
can perform. Obviously Horizon looks different for an admin than it does for a 
tenant, but I'm not as clear on how to identify differences in the API.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] Proposal to add Ruslan Kamaldinov to murano-core team

2014-04-17 Thread Timur Sufiev
Guys,

Ruslan Kamaldinov has been doing a lot of things for Murano recently
(including devstack integration, automation scripts, making Murano
more compliant with OpenStack standards and doing many reviews). He's
actively participating in our ML discussions as well. I suggest to add
him to the core team.

Murano folks, please say your +1/-1 word.

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread CARVER, PAUL
Aaron Rosen wrote:

Sorry not really. It's still not clear to me why multiple nics would be 
required on the same L2 domain.

I’m a fan of this old paper for nostalgic reasons 
http://static.usenix.org/legacy/publications/library/proceedings/neta99/full_papers/limoncelli/limoncelli.pdf
 but a search for transparent or bridging firewall turns up tons of hits.

Whether any of them are valid use cases for OpenStack is something that we 
could debate, but the general concept of putting two firewall interfaces into 
the same L2 domain and using it to control traffic flow between different hosts 
on the same L2 domain has at least five years of history behind it.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Proposal to add Ruslan Kamaldinov to murano-core team

2014-04-17 Thread Alexander Tivelkov
+1

Totally agree

--
Regards,
Alexander Tivelkov


On Thu, Apr 17, 2014 at 4:37 PM, Timur Sufiev tsuf...@mirantis.com wrote:

 Guys,

 Ruslan Kamaldinov has been doing a lot of things for Murano recently
 (including devstack integration, automation scripts, making Murano
 more compliant with OpenStack standards and doing many reviews). He's
 actively participating in our ML discussions as well. I suggest to add
 him to the core team.

 Murano folks, please say your +1/-1 word.

 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Proposal to add Ruslan Kamaldinov to murano-core team

2014-04-17 Thread Dmitry Teselkin
+1

Agree


On Thu, Apr 17, 2014 at 4:51 PM, Alexander Tivelkov
ativel...@mirantis.comwrote:

 +1

 Totally agree

 --
 Regards,
 Alexander Tivelkov


 On Thu, Apr 17, 2014 at 4:37 PM, Timur Sufiev tsuf...@mirantis.comwrote:

 Guys,

 Ruslan Kamaldinov has been doing a lot of things for Murano recently
 (including devstack integration, automation scripts, making Murano
 more compliant with OpenStack standards and doing many reviews). He's
 actively participating in our ML discussions as well. I suggest to add
 him to the core team.

 Murano folks, please say your +1/-1 word.

 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,
Dmitry Teselkin
Deployment Engineer
Mirantis
http://www.mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.

Yes, in this situation, the problem lies in the backend storage, so no other
protocol will perform better. However, P2P transferring will greatly reduce
the workload on the backend storage, so as to increase responsiveness.


As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.
Nova's image caching is file level, while VMThunder's is block-level. And
VMThunder is for working in conjunction with Cinder, not Glance. VMThunder
currently uses facebook's flashcache to realize caching, and dm-cache,
bcache are also options in the future.



I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

Yes, on-demand transferring is what you mean by zero-copy, and caching
is something close to CoR. In fact, we are working on a kernel module called
foolcache that realizes a true CoR. See https://github.com/lihuiba/dm-foolcache.
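
(To make the CoR idea concrete, here is a toy sketch of block-level
copy-on-read caching - purely illustrative, not VMThunder or dm-foolcache
code:)

    class CopyOnReadImage(object):
        """Serve block reads locally, fetching each block from the remote
        image only the first time it is touched."""

        def __init__(self, fetch_remote_block, block_size=4096):
            self.fetch_remote_block = fetch_remote_block  # callable: index -> bytes
            self.block_size = block_size
            self.cache = {}  # block index -> bytes already copied locally

        def read_block(self, index):
            if index not in self.cache:
                # Only blocks the VM actually reads are ever transferred,
                # and each of them is transferred at most once.
                self.cache[index] = self.fetch_remote_block(index)
            return self.cache[index]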







National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073

At 2014-04-17 17:11:48,Zhi Yan Liu lzy@gmail.com wrote:
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your zero-copy
 approach.
 VMThunder is uses iSCSI as the transferring protocol, which is option #b of
 yours.


IMO we'd better to use backend storage optimized approach to access
remote image from compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short of stability under heavy I/O
workload in product environment, it could causes either VM filesystem
to be marked as readonly or VM kernel panic.


Under #b approach, my former experience from our previous similar
Cloud deployment (not OpenStack) was that: under 2 PC server storage
nodes (general *local SAS disk*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
VMs in a minute.
 suppose booting one instance requires reading 300MB of data, so 500 ones
 require 150GB.  Each of the storage server needs to send data at a rate of
 150GB/2/60 = 1.25GB/s on average. This is absolutely a heavy burden even
 for high-end storage appliances. In production  systems, this request
 (booting
 500 VMs in one shot) will significantly disturb  other running instances
 accessing the same storage nodes.

 VMThunder eliminates this problem by P2P transferring and on-compute-node
 caching. Even a pc server with one 1gb NIC (this is a true pc server!) can
 boot
 500 VMs in a minute with ease. For the first time, VMThunder makes bulk
 provisioning of VMs practical for production cloud systems. This is the
 essential
 value of VMThunder.


As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

zhiyan




 ===
 From: Zhi Yan Liu lzy@gmail.com
 Date: 2014-04-17 0:02 GMT+08:00
 Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting
 process of a number of vms via VMThunder
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org



 Hello Yongquan Fu,

 My thoughts:

 1. Currently Nova has already supported image caching mechanism. It
 could caches the image on compute host which VM had provisioning from
 it before, and next provisioning (boot same image) doesn't need to
 transfer it again only if cache-manger clear it up.
 2. P2P transferring and prefacing is something that still based on
 copy mechanism, IMHO, zero-copy approach is better, even
 transferring/prefacing could be optimized by such approach. (I have
 not check on-demand transferring of VMThunder, but it is a kind of
 transferring as well, at last from its literal meaning).
 And btw, IMO, we have two ways can go follow zero-copy idea:
 a. when Nova 

[openstack-dev] Reviewing spelling and grammar errors in blueprints Re: [Nova] nova-specs

2014-04-17 Thread Stefano Maffulli
On 04/16/2014 07:56 PM, Dan Smith wrote:
 Do we really want to -1 for spelling mistake in nova-specs?
 
 I do, yes. These documents are intended to be read by deployers and
 future developers. I think it's really important that they're useful in
 that regard.

Guys, use your judgement with this. If a spelling mistake is really an
impediment to understanding the meaning of the sentence or introduces
ambiguity, by all means fix it (i.e. provide a correction, for native
English speakers).

Always imagine that on the other side there is someone who has feelings
and may have already done an immense effort to learn how to express
technical concepts in a foreign language. Getting a vote for a small
thing brings any adult back to childhood memories and may cause bad
feelings.

Be very, very careful. I know most reviewers are already being
careful; I'm just piling up on top of that carefulness: there is never
enough :)

Please don't -1 if it's a minor grammar/spelling mistake that doesn't
prevent proper understanding of the blueprint by a person skilled in the
art.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread lihuiba
glance-bittorrent-delivery and VMThunder have similar goals - fast
provisioning of a large number of VMs - and they share some ideas like P2P
transferring, but they go with different techniques.


VMThunder only downloads data blocks that are really used by VMs, so as to
reduce the bandwidth and time required to provision. We have experiments
showing that only a few hundred MB of data is needed to boot a mainstream OS
like CentOS 6.x, Ubuntu 12.04, Windows 2008, etc., while the images are GBs
or even tens of GBs large.



National Key Laboratory for Parallel and Distributed
Processing, College of Computer Science, National University of Defense
Technology, Changsha, Hunan Province, P.R. China
410073



On 2014-04-17 19:06:27, Jesse Pretorius jesse.pretor...@gmail.com wrote:



This whole discussion reminded me of this:


https://blueprints.launchpad.net/glance/+spec/glance-bittorrent-delivery
http://tropicaldevel.wordpress.com/2013/01/11/an-image-transfers-service-for-openstack/


The general idea was that Glance would be able to serve images through 
torrents, enabling the capability for compute hosts to participate in image 
delivery. Well, the second part was where I thought it was going - I'm not sure 
if that was the intention.


It didn't seem to go anywhere, but I thought it was a nifty idea.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] bug/129135 VXLAN kernel version checking

2014-04-17 Thread Terry Wilson
 A question about the fix from https://review.openstack.org/#/c/82931

 Also, how does this work for RHEL-based distros where they tend to backport
 new kernel features? For instance vxlan support was added in the kernel for
 RHEL6.5 which is 2.6.32-based... That changeset looks like it breaks Neutron
 for ovs + vxlan on RHEL distros.

 Nate

The simple answer is that it doesn't work at all on RHEL. RHEL has backported 
upstream VXLAN support to the 2.6.32 kernel they use. Checking kernel version 
numbers in upstream code at runtime is just a fundamentally flawed thing to do: 
the only way those numbers mean anything is if they are in downstream packaging 
dependencies. There is also a lot of cruft that comes along with having to test 
all kinds of different things to ensure that the flawed check works. It quickly 
gets very messy.

It is almost universally accepted that if you want to test whether support 
exists for a feature, instead of trying to track version numbers across who 
knows how many options, you try to use the feature and then fail/fall back 
gracefully. I have a patch here https://review.openstack.org/#/c/88121/ which 
rips out all of the version checking and instead, at runtime when vxlan support 
is enabled, tries to create a temporary bridge/vxlan port and exits the 
openvswitch agent with a useful error message if that fails. With that said, 
I'm not a huge fan of modifying system state at startup just to test this. IMHO 
it might be better to just remove the check at startup altogether and error out 
with an informative message during the normal course of operation when a VXLAN 
port cannot be created.
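
(Roughly the shape of such a probe, sketched with plain ovs-vsctl calls rather 
than the agent's own helpers; the bridge and port names are arbitrary and the 
check is best-effort:)

    import subprocess

    def vxlan_supported(bridge='br-vxlan-probe', port='vxlan-probe'):
        """Try to create a VXLAN port and report whether OVS accepted it."""
        try:
            subprocess.check_call(['ovs-vsctl', '--may-exist', 'add-br', bridge])
            subprocess.check_call(
                ['ovs-vsctl', 'add-port', bridge, port, '--',
                 'set', 'Interface', port, 'type=vxlan',
                 'options:remote_ip=127.0.0.1'])
            # An ofport of -1 means the datapath could not create the port.
            ofport = subprocess.check_output(
                ['ovs-vsctl', 'get', 'Interface', port, 'ofport']).decode().strip()
            return ofport not in ('-1', '[]')
        except subprocess.CalledProcessError:
            return False
        finally:
            # Clean up the temporary probe bridge and its port.
            subprocess.call(['ovs-vsctl', '--if-exists', 'del-br', bridge])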

Anyway, if people could take a look at the review:

  https://review.openstack.org/#/c/88121/

And perhaps have some discussion here, on list, about what we think is the best 
way to move forward with this, I'd be happy. :)

Terry
  

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] bug/129135 VXLAN kernel version checking

2014-04-17 Thread Kyle Mestery
On Thu, Apr 17, 2014 at 9:14 AM, Terry Wilson twil...@redhat.com wrote:
 A question about the fix from https://review.openstack.org/#/c/82931

 Also, how does this work for RHEL-based distros where they tend to backport
 new kernel features? For instance vxlan support was added in the kernel for
 RHEL6.5 which is 2.6.32-based... That changeset looks like it breaks Neutron
 for ovs + vxlan on RHEL distros.

 Nate

 The simple answer is that it doesn't work at all on RHEL. RHEL has backported 
 upstream VXLAN support to the 2.6.32 kernel they use. It is fundamentally 
 unsound to be checking kernel version numbers at runtime. Checking kernel 
 version numbers in upstream code at runtime is just a fundamentally flawed 
 thing to do. The only way those numbers mean anything is if they are in 
 downstream packaging dependencies. There is also a lot of cruft that comes 
 along with having to test all kinds of different things to ensure that the 
 flawed check works. It quickly gets very messy.

 It is almost universally accepted that if you want to test whether support 
 exists for a feature, instead of trying to track version numbers across who 
 knows how many options, you try to use the feature and then fail/fallback 
 gracefully. I have a patch here https://review.openstack.org/#/c/88121/ which 
 rips out all of the version checking and instead, at runtime when vxlan 
 support is enabled, tries to create a temporary bridge/vxlan port and exits 
 the openvswitch agent with a useful error message. With that said, I'm not a 
 huge fan of modifying system state at startup just to test this. IMHO it 
 might be better to just remove the check at startup altogether and error out 
 with an informative message during the normal course when a VXLAN port cannot 
 be created.


I'm not sure throwing an error and exiting when the first VXLAN port
creation happens is a good idea. On the other hand, I agree with Maru
that executing an invasive check at runtime is also potentially
challenging. But given the realities of the situation here (backports,
etc.), I think we don't have a choice. The runtime check at startup is
cleaner and allows the agent to fail right away with a clear error
message.

Thanks,
Kyle

 Anyway, if people could take a look at the review:

   https://review.openstack.org/#/c/88121/

 And perhaps have some discussion here, on list, about what we think is the 
 best way to move forward with this, I'd be happy. :)

 Terry


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Proposal to add Ruslan Kamaldinov to murano-core team

2014-04-17 Thread Georgy Okrokvertskhov
+1


On Thu, Apr 17, 2014 at 6:01 AM, Dmitry Teselkin dtesel...@mirantis.comwrote:

 +1

 Agree


 On Thu, Apr 17, 2014 at 4:51 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 +1

 Totally agree

 --
 Regards,
 Alexander Tivelkov


 On Thu, Apr 17, 2014 at 4:37 PM, Timur Sufiev tsuf...@mirantis.comwrote:

 Guys,

 Ruslan Kamaldinov has been doing a lot of things for Murano recently
 (including devstack integration, automation scripts, making Murano
 more compliant with OpenStack standards and doing many reviews). He's
 actively participating in our ML discussions as well. I suggest to add
 him to the core team.

 Murano folks, please say your +1/-1 word.

 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,
 Dmitry Teselkin
 Deployment Engineer
 Mirantis
 http://www.mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Proposal to add Ruslan Kamaldinov to murano-core team

2014-04-17 Thread Stan Lagun
+1

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 sla...@mirantis.com


On Thu, Apr 17, 2014 at 6:51 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 +1


 On Thu, Apr 17, 2014 at 6:01 AM, Dmitry Teselkin 
 dtesel...@mirantis.comwrote:

 +1

 Agree


 On Thu, Apr 17, 2014 at 4:51 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 +1

 Totally agree

 --
 Regards,
 Alexander Tivelkov


 On Thu, Apr 17, 2014 at 4:37 PM, Timur Sufiev tsuf...@mirantis.comwrote:

 Guys,

 Ruslan Kamaldinov has been doing a lot of things for Murano recently
 (including devstack integration, automation scripts, making Murano
 more compliant with OpenStack standards and doing many reviews). He's
 actively participating in our ML discussions as well. I suggest to add
 him to the core team.

 Murano folks, please say your +1/-1 word.

 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,
 Dmitry Teselkin
 Deployment Engineer
 Mirantis
 http://www.mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Proposal to add Ruslan Kamaldinov to murano-core team

2014-04-17 Thread Anastasia Kuznetsova
+1


On Thu, Apr 17, 2014 at 7:11 PM, Stan Lagun sla...@mirantis.com wrote:

 +1

 Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis

  sla...@mirantis.com


 On Thu, Apr 17, 2014 at 6:51 PM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com wrote:

 +1


 On Thu, Apr 17, 2014 at 6:01 AM, Dmitry Teselkin 
 dtesel...@mirantis.comwrote:

 +1

 Agree


 On Thu, Apr 17, 2014 at 4:51 PM, Alexander Tivelkov 
 ativel...@mirantis.com wrote:

 +1

 Totally agree

 --
 Regards,
 Alexander Tivelkov


 On Thu, Apr 17, 2014 at 4:37 PM, Timur Sufiev tsuf...@mirantis.comwrote:

 Guys,

 Ruslan Kamaldinov has been doing a lot of things for Murano recently
 (including devstack integration, automation scripts, making Murano
 more compliant with OpenStack standards and doing many reviews). He's
 actively participating in our ML discussions as well. I suggest to add
 him to the core team.

 Murano folks, please say your +1/-1 word.

 --
 Timur Sufiev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,
 Dmitry Teselkin
 Deployment Engineer
 Mirantis
 http://www.mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Approximate alarming

2014-04-17 Thread Nejc Saje
Hey everyone!

I’d like to get your gut reaction on an idea for the future of alarming. Should 
I or should I not put it up for debate at the design summit?

---TL;DR
Online algorithms for computing stream statistics over sliding windows would 
allow us to provide sample statistics within an error bound (e.g. “The average 
cpu utilization in the last hour was 85% +/- 1%”), while significantly reducing 
the load and memory requirements of the computation.
—

Alarm evaluation currently recalculates the aggregate values each time the 
alarm is evaluated, which is problematic because of the load it puts on the 
system. There have been multiple ideas on how to solve this problem, from 
precalculating aggregate values 
(https://wiki.openstack.org/wiki/Ceilometer/Alerting#Precalculation_of_aggregate_values)
 to re-architecting the alarms into the sample pipeline 
(https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements). While Sandy's 
suggestions make sense from the performance viewpoint, the problem of 
scalability remains. Samples in the pipeline need to be kept in-memory for the 
whole evaluation window, which requires O(N) memory for a window of size N.

We could tackle this problem by using cutting edge research in streaming 
algorithms, namely the papers by Datar et al. [1], and Arasu et al. [2]. They 
provide algorithms for computing stream statistics over sliding windows, such 
as *count, avg, min, max* and even *percentile*, **online** and with 
polylogarithmic space requirements. The tradeoff is of course precision, but 
the algorithms are bounded on the relative error - which could be specified by 
the user.

If we can tell the user “The average cpu utilization in the last hour was 85% 
+/- 1%”, would that not be enough for most use cases, while severely reducing 
the load on the system? We could still support *error_rate=0*, which would 
simply use O(N) space and provide a precise answer for the cases where such an 
answer is needed.

These algorithms were developed with telcos and computer network monitoring in 
mind, in which information about current network performance—latency, 
bandwidth, etc.—is generated online and is used to monitor and adjust network 
performance dynamically[1]. IIUC the main user of alarms is Heat autoscaling, 
which is exactly the kind of problem suitable to 'soft' calculations, with a 
certain tolerance for error.
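
(As a toy sketch of the tradeoff - this is not the exponential-histogram 
algorithm from [1], just a coarse bucketed approximation - memory drops from 
O(samples) to O(buckets) at the cost of a bounded error at the old edge of the 
window:)

    import collections
    import time

    class ApproxWindowAverage(object):
        """Sliding-window average kept as per-bucket aggregates.

        Memory is O(number of buckets) instead of O(number of samples);
        the price is that up to one bucket's worth of samples at the old
        edge of the window may be counted or dropped imprecisely.
        """

        def __init__(self, window=3600.0, buckets=60):
            self.window = float(window)
            self.span = self.window / buckets
            self.buckets = collections.deque()  # (bucket_start, count, total)

        def add(self, value, now=None):
            now = time.time() if now is None else now
            start = now - (now % self.span)
            if self.buckets and self.buckets[-1][0] == start:
                _, count, total = self.buckets.pop()
                self.buckets.append((start, count + 1, total + value))
            else:
                self.buckets.append((start, 1, value))
            self._expire(now)

        def average(self, now=None):
            self._expire(time.time() if now is None else now)
            count = sum(c for _, c, _ in self.buckets)
            total = sum(t for _, _, t in self.buckets)
            return total / float(count) if count else None

        def _expire(self, now):
            # Drop buckets that lie entirely outside the window.
            while self.buckets and self.buckets[0][0] < now - self.window:
                self.buckets.popleft()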

[1] Datar, Mayur, et al. Maintaining stream statistics over sliding windows. 
*SIAM Journal on Computing* 31.6 (2002): 1794-1813. PDF @ 
http://ilpubs.stanford.edu:8090/504/1/2001-34.pdf

[2] Arasu, Arvind, and Gurmeet Singh Manku. Approximate counts and quantiles 
over sliding windows. *Proceedings of the twenty-third ACM 
SIGMOD-SIGACT-SIGART symposium on Principles of database systems.* ACM, 2004. 
PDF @ http://ilpubs.stanford.edu:8090/624/1/2003-72.pdf


signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Summit ticket needed, help?

2014-04-17 Thread Adam Harwell
Hello  everyone!
I was originally not going to be able to attend the summit next month, but 
things have changed and I would now like to attend. Unfortunately, tickets have 
become prohibitively expensive at this point. If any of you have or know anyone 
who has a ticket that they are not going to be able to use (for whatever 
reason), please let me know and we could discuss a transfer! I could afford to 
reimburse you for at least some of the ticket cost, if necessary. It looks like 
transfers are available until April 28, so please let me know, and don't let a 
ticket go to waste!

Thanks very much for your consideration,
--Adam Harwell (prospective Neutron-LBaaS contributor)

PS: I apologize if this is not the place for this, I am on IRC often but 
somewhat unused to Mailing List etiquette.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] bug/129135 VXLAN kernel version checking

2014-04-17 Thread Edgar Magana Perdomo (eperdomo)
I second Kyle on this;
it is much clearer for users to see the error message in the logs!

Edgar

On 4/17/14, 7:23 AM, Kyle Mestery mest...@noironetworks.com wrote:

On Thu, Apr 17, 2014 at 9:14 AM, Terry Wilson twil...@redhat.com wrote:
 A question about the fix from https://review.openstack.org/#/c/82931

 Also, how does this work for RHEL-based distros where they tend to
backport
 new kernel features? For instance vxlan support was added in the
kernel for
 RHEL6.5 which is 2.6.32-based... That changeset looks like it breaks
Neutron
 for ovs + vxlan on RHEL distros.

 Nate

 The simple answer is that it doesn't work at all on RHEL. RHEL has
backported upstream VXLAN support to the 2.6.32 kernel they use. It is
fundamentally unsound to be checking kernel version numbers at runtime.
Checking kernel version numbers in upstream code at runtime is just a
fundamentally flawed thing to do. The only way those numbers mean
anything is if they are in downstream packaging dependencies. There is
also a lot of cruft that comes along with having to test all kinds of
different things to ensure that the flawed check works. It quickly
gets very messy.

 It is almost universally accepted that if you want to test whether
support exists for a feature, instead of trying to track version numbers
across who knows how many options, you try to use the feature and then
fail/fallback gracefully. I have a patch here
https://review.openstack.org/#/c/88121/ which rips out all of the
version checking and instead, at runtime when vxlan support is enabled,
tries to create a temporary bridge/vxlan port and exits the openvswitch
agent with a useful error message. With that said, I'm not a huge fan of
modifying system state at startup just to test this. IMHO it might be
better to just remove the check at startup altogether and error out with
an informative message during the normal course when a VXLAN port cannot
be created.


I'm not sure throwing an error and exiting when the first VXLAN port
creation happens is a good idea. On the other hand, I agree with Maru
that executing an invasive check at runtime is also potentially
challenging. But given the realities of the situation here (backports,
etc.), I think we don't have a choice. The runtime check at startup is
cleaner and allows the agent to fail right away with a clear error
message.

Thanks,
Kyle

 Anyway, if people could take a look at the review:

   https://review.openstack.org/#/c/88121/

 And perhaps have some discussion here, on list, about what we think is
the best way to move forward with this, I'd be happy. :)

 Terry


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-17 Thread Deepak Shetty
Andrew,
   While I agree there is a plan to change/improve this in the future, the
way it works today isn't acceptable, mainly because (as I said) a new
server IP takes effect without a service restart, while if someone adds -o
options they don't take effect even after a service restart. It's confusing
to the admin (forget the user!) as to why a server IP change takes effect
but -o options don't. IMHO we should fix this so that it works in a sane
way until the rework / redesign happens in the future. Thanks,
Deepak


On Fri, Apr 11, 2014 at 7:31 PM, Kerr, Andrew andrew.k...@netapp.comwrote:

 Hi Deepak,

 I know that there are plans to completely change how NFS uses (or more
 accurately, will not use) the shares.conf file in the future.  My guess is
 that a lot of this code will be changed in the near future during that
 rework.

 Andrew Kerr
 OpenStack QA
 Cloud Solutions Group
 NetApp


 From:  Deepak Shetty dpkshe...@gmail.com
 Reply-To:  OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date:  Friday, April 11, 2014 at 7:54 AM
 To:  OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Subject:  [openstack-dev] [Cinder] XXXFSDriver: Query on usage
 of  load_shares_config in ensure_shares_mounted


 Hi,

I am using the nfs and glusterfs driver as reference here.


 I see that load_shares_config is called everytime via
 _ensure_shares_mounted which I feel is incorrect mainly because
 ensure_shares_mounted loads the config file again w/o restarting the
 service


 I think that the shares config file should only be loaded once (during
 service startup) as part of do_setup and never again.

 If someone changes something in the conf file, one needs to restart
 service which calls do_setup again and the changes made in shares.conf is
 taken effect.


 In looking further.. the ensure_shares_mounted ends up calling
 remotefsclient.mount() which does _Nothing_ if the share is already
 mounted.. whcih is mostly the case. So even if someone changed something
 in the shares file (like added -o options) it won't take
  effect as the share is already mounted  service already running.

 In fact today, if you restart the service, even then the changes in share
 won't take effect as the mount is not un-mounted, hence when the service
 is started next, the mount is existing and ensures_shares_mounted just
 returns w/o doing anything.


 The only adv of calling load_shares_config in ensure_shares_mounted is if
 someone changed the shares server IP while the service is running ... it
 loads the new share usign the new server IP.. which again is wrong since
 ideally the person should restart service
  for any shares.conf changes to take effect.

 Hence i feel callign load_shares_config in ensure_shares_mounted is
 Incorrect and should be removed

 Thoughts ?

 thanx,

 deepak


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-17 Thread Deepak Shetty
On Fri, Apr 11, 2014 at 8:25 PM, Eric Harney ehar...@redhat.com wrote:

 On 04/11/2014 07:54 AM, Deepak Shetty wrote:
  Hi,
 I am using the nfs and glusterfs driver as reference here.
 
  I see that load_shares_config is called everytime via
  _ensure_shares_mounted which I feel is incorrect mainly because
  ensure_shares_mounted loads the config file again w/o restarting the
 service
 
  I think that the shares config file should only be loaded once (during
  service startup) as part of do_setup and never again.
 

 Wouldn't this change the functionality that this provides now, though?


What functionality are you referring to? I didn't get you here.



 Unless I'm missing something, since get_volume_stats calls
 _ensure_shares_mounted(), this means you can add a new share to the
 config file and have it become active in the driver.  (While I'm not
 sure this was the original intent, it could be nice to have and should
 at least be considered before ditching it.)


That does sound like a good-to-have feature, but it actually is a bug: a
server IP change takes effect without restarting the service, while newly
added -o options don't take effect even if you restart the service.. so I
feel what's happening is unintended and actually a bug!

The config should be loaded once, and any changes to it should take effect
after a service restart.



  If someone changes something in the conf file, one needs to restart
 service
  which calls do_setup again and the changes made in shares.conf is taken
  effect.
 

 I'm not sure this is correct given the above.


Please see above.. it works in an inconsistent way, which is confusing to
the admin/user.



  In looking further.. the ensure_shares_mounted ends up calling
  remotefsclient.mount() which does _Nothing_ if the share is already
  mounted.. whcih is mostly the case. So even if someone changed something
 in
  the shares file (like added -o options) it won't take effect as the share
  is already mounted  service already running.
 
  In fact today, if you restart the service, even then the changes in share
  won't take effect as the mount is not un-mounted, hence when the service
 is
  started next, the mount is existing and ensures_shares_mounted just
 returns
  w/o doing anything.
 
  The only adv of calling load_shares_config in ensure_shares_mounted is if
  someone changed the shares server IP while the service is running ... it
  loads the new share usign the new server IP.. which again is wrong since
  ideally the person should restart service for any shares.conf changes to
  take effect.
 

 This won't work anyway because of how we track provider_location in the
 database.  This particular case is planned to be addressed via this
 blueprint with reworks configuration:


 https://blueprints.launchpad.net/cinder/+spec/remotefs-share-cfg-improvements


Agreed, but until this is realized, we can fix the code/flow so that it's
sane, in the sense that it works consistently for all cases.
Today it doesn't: some changes take effect without a service restart, and
some don't take effect even after a service restart.

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-17 Thread Zaro
Hello All.  The OpenStack infra team has been working to put
everything in place so that we can upgrade review.o.o from Gerrit
version 2.4.4 to version 2.8.4.  We are happy to announce that we are
finally ready to make it happen!

We will begin the upgrade on Monday, April 28th at 1600 UTC (the
OpenStack recommended 'off' week).

We would like to advise that you can expect a couple hours of downtime
followed by several more hours of automated systems not quite working
as expected.  Hopefully you shouldn't notice anyway because you should
all be on vacation :)

Please let us know if you have any concerns about this upgrade.

Thank You,

-Khai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-17 Thread Deepak Shetty
On Thu, Apr 17, 2014 at 10:00 PM, Deepak Shetty dpkshe...@gmail.com wrote:




 On Fri, Apr 11, 2014 at 8:25 PM, Eric Harney ehar...@redhat.com wrote:

 On 04/11/2014 07:54 AM, Deepak Shetty wrote:
  Hi,
 I am using the nfs and glusterfs driver as reference here.
 
  I see that load_shares_config is called everytime via
  _ensure_shares_mounted which I feel is incorrect mainly because
  ensure_shares_mounted loads the config file again w/o restarting the
 service
 
  I think that the shares config file should only be loaded once (during
  service startup) as part of do_setup and never again.
 

 Wouldn't this change the functionality that this provides now, though?


 What functionality are you referring to.. ? didn't get you here



 Unless I'm missing something, since get_volume_stats calls
 _ensure_shares_mounted(), this means you can add a new share to the
 config file and have it become active in the driver.  (While I'm not
 sure this was the original intent, it could be nice to have and should
 at least be considered before ditching it.)


 That does sound like a good to have feature but it actually is a bug bcos
 for server IP changes, it is effected w/o restarting service, but if one
 adds -o options its not effected even if u restart service.. so i feel
 whats happening is un-intended and actually a bug!

 Config should be loaded once and any changes to it should be effected post
 service restart


Forgot to add: for the above to work consistently, we definitely need a
framework / mechanism in cinder where the driver is provided a
function/callback to gracefully clean up its mounts (or anything else)
while the service is going down. Today we don't have such a thing, so
drivers don't clean up their mounts; when the service starts up,
ensure_shares_mounted sees the mount being present and does nothing. This
would work nicely if drivers were given the ability to clean up their
mounts as part of service shutdown.
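
To make the idea concrete, something along these lines (purely hypothetical:
no such hook exists in cinder today, and the method names are made up):

    class RemoteFsLikeDriver(object):

        def do_cleanup(self):
            # Hypothetical hook, called by the volume service while it is
            # shutting down, so the driver can unmount its shares.  The
            # next startup then re-mounts them with freshly loaded options.
            for share in self._mounted_shares:
                self._unmount_share(share)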

thanx,
deepak




  If someone changes something in the conf file, one needs to restart
 service
  which calls do_setup again and the changes made in shares.conf is taken
  effect.
 

 I'm not sure this is correct given the above.


 Pls see above.. it works in a incorrect way which is confusing to the
 admin/user



  In looking further.. the ensure_shares_mounted ends up calling
  remotefsclient.mount() which does _Nothing_ if the share is already
  mounted.. whcih is mostly the case. So even if someone changed
 something in
  the shares file (like added -o options) it won't take effect as the
 share
  is already mounted  service already running.
 
  In fact today, if you restart the service, even then the changes in
 share
  won't take effect as the mount is not un-mounted, hence when the
 service is
  started next, the mount is existing and ensures_shares_mounted just
 returns
  w/o doing anything.
 
  The only adv of calling load_shares_config in ensure_shares_mounted is
 if
  someone changed the shares server IP while the service is running ... it
  loads the new share usign the new server IP.. which again is wrong since
  ideally the person should restart service for any shares.conf changes to
  take effect.
 

 This won't work anyway because of how we track provider_location in the
 database.  This particular case is planned to be addressed via this
 blueprint with reworks configuration:


 https://blueprints.launchpad.net/cinder/+spec/remotefs-share-cfg-improvements


 Agree, but until this is realized, we can fix the code/flow such that its
 sane.. in the sense, it works consistently for all cases.
 today it doesn't.. for some change it works w/o service restart and for
 some it doesn't work even after service restart

 thanx,
 deepak


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] XXXFSDriver: Query on usage of load_shares_config in ensure_shares_mounted

2014-04-17 Thread Deepak Shetty
On Tue, Apr 15, 2014 at 4:14 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 On 11 April 2014 16:24, Eric Harney ehar...@redhat.com wrote:


  I suppose I should also note that if the plans in this blueprint are
  implemented the way I've had in mind, the main issue here about only
  loading shares at startup time would be in place, so we may want to
  consider these questions under that direction.

 Currently, any config changes to a backend require a restart of the
 volume service to be reliably applied, shares included. Some changes
 work for shares, some don't, which is a dangerous place to be. If


Exactly my point ! +1

thanx,
deepak


 we're going to look at restartless config changes, then I think we
 should look at how it could be generalised for every backend, not just
 shared fs ones.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Error in running rabbit-mq server

2014-04-17 Thread Peeyush Gupta
Hi all,

I am trying to setup OpenStack on Ubuntu 12.04 using devstack. Here is the 
error I am getting:

Setting up rabbitmq-server (2.7.1-0ubuntu4) ...
Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
rabbitmq-server.
invoke-rc.d: initscript rabbitmq-server, action start failed.
dpkg: error processing rabbitmq-server (--configure):
 subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
 rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
++ err_trap
++ local r=100
++ set +o xtrace
stack.sh failed

I checked the file /var/log/rabbitmq/startup_err and it just says the rabbitmq 
start failed, no other information. Can you please help me resolve the issue?
 
Thanks,
~Peeyush Gupta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer]

2014-04-17 Thread Hachem Chraiti
Hi ,
How do I authenticate against OpenStack's Ceilometer client using a Python
program?
Please, I need a response.

Sincerly ,
Chraiti Hachem
  software engineer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Regarding manage_existing and unmanage

2014-04-17 Thread Deepak Shetty
On Fri, Apr 11, 2014 at 7:29 PM, Duncan Thomas duncan.tho...@gmail.com wrote:

 On 11 April 2014 14:21, Deepak Shetty dpkshe...@gmail.com wrote:
  My argument was mostly from the perspective that unmanage shud do its
 best
  to revert back the volume to its original state (mainly the name).
 
  Like you said, once its given to cinder, its not a external volume
 anymore
  similary, once its taken out of cinder, its a external volume and its
 just
  logical for admin/user to expect the volume with its original name
 
  Thinking of scenarios..  (i may be wrong here)
 
  An admin submits few storage array LUNs (for which he has setup a
  mirroring relationship in the storage array) as volumes to cinder using
  manage_existing.. uses the volume as part of openstack, and there
  are 2 cases here
  1) cinder renames the volume, which causes his backend mirroring
  relationship to be broken
  2) He disconnects the mirror relnship while submitting the volume to
  cinder and when he unmanages it, expects the mirror to work
 
  Will this break if cinder renames the volume ?


 Both of those are unreasonable expectations, and I would entirely
 expect both of them to break. Once you give cidner a volume, you no
 longer have *any* control over what happens to that volume. Mirroring
 relationships, volume names, etc *all* become completely under
 cinder's control. Expecting *anything* to go back to the way it was
 before cinder got hold of the volume is completely wrong.


While I agree with your point about cinder taking full control of its
volumes, I still feel that providing the ability to use backend array
features along with manage_existing should be welcomed by all.. especially
given the price of these arrays, it's good to design things in OpenStack
that aid in using the array features if the setup/env/admin wishes to do
so, so that we fully exploit the investment made in purchasing the storage
arrays :)




 The scenario I *don't* want to see is:
 1) Admin import a few hundred volumes into the cloud
 2) Some significant time goes by
 3) Cloud is being decommissioned / the storage transfer / etc. so the
 admin runs unmanage on all cinder volumes on that storage
 4) The volumes get renamed or not, based on whether they happened to
 come into cinder via manage or volume create

 *That* I would consider broken.


What exactly is broken here? Sorry, but I didn't get it!

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Questions about user-facing documentation

2014-04-17 Thread Matthew Farina
Shaunak, these are some good questions. Input from the docs team would
be useful for some of these as well.

On Wed, Apr 16, 2014 at 10:06 PM, Shaunak Kashyap
shaunak.kash...@rackspace.com wrote:
 Hi folks,

 As part of working on 
 https://blueprints.launchpad.net/openstack-sdk-php/+spec/sphinx-docs, I’ve 
 been looking at 
 http://git.openstack.org/cgit/stackforge/openstack-sdk-php/tree/doc.

 Before I start making any changes toward that BP, however, I wanted to put 
 forth a couple of overarching questions and proposals to the group:

 1. Where and how should the user guide (i.e. Sphinx-generated docs) be 
 published?

For this I'll go by example first. If you look at something like the
OpenStack Client docs
(http://git.openstack.org/cgit/openstack/python-openstackclient/tree/doc)
you'll see they are currently published to
http://docs.openstack.org/developer/python-openstackclient/. The same
is true of other projects as well.

I'm not sure this is the ideal place but it is where things are
published to now. There has been talk of producing a
developers.openstack.org. The initial proposed content for that
currently resides at api.openstack.org. If a more detailed portal
comes together that would be a good place for this.


 I know there’s http://docs.openstack.org/. It seems like the logical place 
 for these to be linked off of but where would that link go and what is the 
 process of publishing our Sphinx-generated docs to that place?

 2. How should the user guide(s) be organized?

 If I were a developer, I’m probably looking to use a particular OpenStack 
 service (as opposed to learning about the PHP SDK without a particular 
 orientation). So I propose organizing the PHP SDK user guide accordingly: as 
 a set of user guides, each showing how to use the OpenStack PHP SDK for a 
 particular service. Of course, Identity is common to all so it’s 
 documentation would somehow be included in each user guide. This is similar 
 to how OpenStack organizes its REST API user guides - one per service (e.g. 
 http://docs.openstack.org/api/openstack-object-storage/1.0/content/).

If you take a look at the general SDK development page
(https://wiki.openstack.org/wiki/SDK-Development) there is a
description of the target audience. This target audience is a little
different from most of the other documentation so we should take that
into account.

We shouldn't expect a user of the SDK to understand the internals of
OpenStack or even the names such as swift, nova, etc. An application
developer will likely know little about OpenStack other than it's a
cloud platform. The SDK should introduce them to OpenStack with the
limited amount of knowledge a dev would need to know.

From here I like your idea of a section for each service (e.g.,
identity). This makes sense.


 Further, within each user guide, I propose ordering the content according to 
 popularity of use cases for that service (with some other constraints such as 
 introducing concepts first, grouping similar concepts, etc.). This ensures 
 that the reader can get what they want, from their perspective. Particularly, 
 beginners would get what they came for without having to read too far into 
 the documentation. As an example, I think 
 http://git.openstack.org/cgit/stackforge/openstack-sdk-php/tree/doc/oo-tutorial.md
  does a fine job of walking the user through common Object Store use cases. I 
 would just extend it to gradually introduce the user to more advanced use 
 cases as well, thereby completing the user guide for Object Store.

Great. We want to help application developers get up to speed quickly.
They're concerned with their app. I like the idea of common use cases
near the front.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Need help to understand Jenkins msgs

2014-04-17 Thread Deepak Shetty
Hi,
  Can someone help me understand why the Jenkins build shows failures
for some of the tests for my patch at
https://review.openstack.org/#/c/86888/ ?

I really couldn't understand it even after clicking those links.
TIA

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Elastic Recheck failure?

2014-04-17 Thread Pecoraro, Alex
Yesterday I re-submitted a patchset to Gerrit after updating some comments 
in a function header and the documentation in docs. So I didn't make any code 
changes and my previous changeset passed all the Jenkins tests, but this time 
it failed the check-grenade-dsvm-neutron and the 
check-tempest-dsvm-postgres-full tests and then it had the following message in 
the comments section:

ElasticRecheck
Patch Set 3:

I noticed jenkins failed, I think you hit bug(s):

check-grenade-dsvm-neutron: https://bugs.launchpad.net/bugs/1307344
check-tempest-dsvm-postgres-full: https://bugs.launchpad.net/bugs/1253896

We don't automatically recheck or reverify, so please consider doing that 
manually if someone hasn't already. For a code review which is not yet 
approved, you can recheck by leaving a code review comment with just the text:

recheck bug 1307344
For bug details see: http://status.openstack.org/elastic-recheck/

Does that mean that my test failed because of a bug in the test system? If so 
then how do I get around the issue so that my changeset passes all the tests?

Here's my patchset for reference:

https://review.openstack.org/#/c/80421/

Thanks.

Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Elastic Recheck failure?

2014-04-17 Thread Pecoraro, Alex
Doh!

Nevermind, I think I figured it out with more careful reading, sorry for the 
unnecessary email.

From: Pecoraro, Alex [mailto:alex.pecor...@emc.com]
Sent: Thursday, April 17, 2014 10:21 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Swift] Elastic Recheck failure?

Yesterday I re-submitted a patchset to Gerrit after updating some comments 
in a function header and the documentation in docs. So I didn't make any code 
changes and my previous changeset passed all the Jenkins tests, but this time 
it failed the check-grenade-dsvm-neutron and the 
check-tempest-dsvm-postgres-full tests and then it had the following message in 
the comments section:

ElasticRecheck
Patch Set 3:

I noticed jenkins failed, I think you hit bug(s):

check-grenade-dsvm-neutron: https://bugs.launchpad.net/bugs/1307344
check-tempest-dsvm-postgres-full: https://bugs.launchpad.net/bugs/1253896

We don't automatically recheck or reverify, so please consider doing that 
manually if someone hasn't already. For a code review which is not yet 
approved, you can recheck by leaving a code review comment with just the text:

recheck bug 1307344
For bug details see: http://status.openstack.org/elastic-recheck/

Does that mean that my test failed because of a bug in the test system? If so 
then how do I get around the issue so that my changeset passes all the tests?

Here's my patchset for reference:

https://review.openstack.org/#/c/80421/

Thanks.

Alex
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Question regarding Nova in Havana

2014-04-17 Thread Vishvananda Ishaya
I believe the exchange should also be ‘nova’.

Vish

On Apr 15, 2014, at 11:31 PM, Prashant Upadhyaya 
prashant.upadhy...@aricent.com wrote:

 Hi Vish,
  
 Thanks, now one more question –
  
 When I send the request out, I send it to the exchange ‘nova’ and routing key 
 ‘conductor’ (using RabbitMQ), this will take the message to the Nova 
 Conductor on the controller, I have been able to do that much.
 I do see that there is a ‘reply queue’ embedded in the above message so 
 presumably the Nova Conductor will use that queue to send back the response, 
 is that correct ?
  If the above is correct, what is the ‘exchange’ used by Nova Conductor to send 
  back this response?
  
 Regards
 -Prashant
  
  
 From: Vishvananda Ishaya [mailto:vishvana...@gmail.com] 
 Sent: Wednesday, April 16, 2014 10:11 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Prashant Upadhyaya
 Subject: Re: [openstack-dev] [Openstack] Question regarding Nova in Havana
  
 The service reference is created in the start method of the service. This 
 happens around line 217 in nova/service.py in the current code. You should be 
 able to do something similar by sending a message to service_create on 
 conductor. It will return an error if the service already exists. Note you 
 can also use service_get_by_args in conductor to see if the service exists.
  
 Vish
  
 On Apr 15, 2014, at 9:22 PM, Swapnil S Kulkarni cools...@gmail.com wrote:
 
 
 Interesting discussion. Forwarding to openstack-dev.
  
 
 On Wed, Apr 16, 2014 at 9:34 AM, Prashant Upadhyaya 
 prashant.upadhy...@aricent.com wrote:
 Hi,
  
 I am writing a Compute Node Simulator.
 The idea is that I would write a piece of software using C which honors the 
 RabbitMQ interface towards the Controller, but will not actually do the real 
 thing – everything on the Compute Node will be simulated by my simulator 
 software.
  
  The problem I am facing is that I have not been able to get my simulated CN 
 listed in the output of
 nova-manage service list
  
 I am on Havana, and my simulator is sending a periodic  ‘service_update’ and 
 ‘compute_node_update’ RPC messages to the ‘nova’ exchange and the ‘conductor’ 
 routing key.
 I can manipulate the above messages at will to fool the controller.
 (I observe the messages from a real CN and take cues from there to construct 
 a fake one from my simulator)
  
 Question is – what causes the controller to add a new Nova Compute in its 
 database, is it the ‘service_update’ RPC or something else.
  
 Hopefully you can help me reverse engineer the interface.
  
 Regards
 -Prashant
  
  
 
 
 DISCLAIMER: This message is proprietary to Aricent and is intended solely 
 for the use of the individual to whom it is addressed. It may contain 
 privileged or confidential information and should not be circulated or used 
 for any purpose other than for what it is intended. If you have received this 
 message in error, please notify the originator immediately. If you are not 
 the intended recipient, you are notified that you are strictly prohibited 
 from using, copying, altering, or disclosing the contents of this message. 
 Aricent accepts no responsibility for loss or damage arising from the use of 
 the information transmitted by this email including damage from virus.
 
 ___
 Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Design Summit Session Agenda

2014-04-17 Thread Kyle Mestery
On Thu, Apr 17, 2014 at 12:26 PM, Collins, Sean
sean_colli...@cable.comcast.com wrote:
 All,

 We have a couple IPv6 design summit sessions that have been registered,
 and at least one of them is in a pre-approved state:

 http://summit.openstack.org/cfp/details/21

 We'll have at least 40 minutes to discuss a roadmap for IPv6 in the
 coming Juno cycle.

 I have created an etherpad, and would like everyone to start adding
 items to the agenda, so we can ensure that we can cover as much ground
 as possible.

 https://etherpad.openstack.org/p/neutron-ipv6-atlanta-summit

This is a great idea Sean. I'm hoping we can collapse all of the IPv6
sessions into this one to provide a consolidated roadmap for Neutron
IPv6 in Juno. Thanks for driving this!

Kyle

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Should we adopt a blueprint design process

2014-04-17 Thread Kyle Mestery
On Thu, Apr 17, 2014 at 12:11 PM, Devananda van der Veen
devananda@gmail.com wrote:
 Hi all,

 The discussion of blueprint review has come up recently for several reasons,
 not the least of which is that I haven't yet reviewed many of the blueprints
 that have been filed recently.

 My biggest issue with launchpad blueprints is that they do not provide a
 usable interface for design iteration prior to writing code. Between the
 whiteboard section, wikis, and etherpads, we have muddled through a few
 designs (namely cinder and ceilometer integration) with accuracy, but the
 vast majority of BPs are basically reviewed after they're implemented. This
 seems to be a widespread objection to launchpad blueprints within the
 OpenStack community, which others are trying to solve. Having now looked at
 what Nova is doing with the nova-specs repo, and considering that TripleO is
 also moving to that format for blueprint submission, and considering that we
 have a very good "review things in gerrit" culture in the Ironic community
 already, I think it would be a very positive change.

 For reference, here is the Nova discussion thread:
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html

 and the specs repo BP template:
 https://github.com/openstack/nova-specs/blob/master/specs/template.rst

 So, I would like us to begin using this development process over the course
 of Juno. We have a lot of BPs up right now that are light on details, and,
 rather than iterate on each of them in launchpad, I would like to propose
 that:
 * we create an ironic-specs repo, based on Nova's format, before the summit
 * I will begin reviewing BPs leading up to the summit, focusing on features
 that were originally targeted to Icehouse and didn't make it, or are
 obviously achievable for J1
 * we'll probably discuss blueprints and milestones at the summit, and will
 probably adjust targets
 * after the summit, for any BP not targeted to J1, we require blueprint
 proposals to go through the spec review process before merging any
 associated code.

 Cores and interested parties, please reply to this thread with your
 opinions.

I think this is a great idea Devananda. The Neutron community has
moved to this model for Juno as well, and people have been very
positive so far.

Thanks,
Kyle

 --
 Devananda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-17 Thread Stephen Balukoff
Howdy folks!

Based on this morning's IRC meeting, it seems to me there's some contention
and confusion over the need for single call functionality for load
balanced services in the new API being discussed. This is what I understand:

* Those advocating single call are arguing that this simplifies the API
for users, and that it more closely reflects the users' experience with
other load balancing products. They don't want to see this functionality
necessarily delegated to an orchestration layer (Heat), because
coordinating how this works across two OpenStack projects is unlikely to
see success (ie. it's hard enough making progress with just one project). I
get the impression that people advocating for this feel that their current
users would not likely make the leap to Neutron LBaaS unless some kind of
functionality or workflow is preserved that is no more complicated than
what they currently have to do. (A rough sketch of what I picture by
single-call follows these two points.)

* Those (mostly) against the idea are interested in seeing the API provide
primitives and delegating higher level single-call stuff to Heat or some
other orchestration layer. There was also the implication that if
single-call is supported, it ought to support both simple and advanced
set-ups in that single call. Further, I sense concern that if there are
multiple ways to accomplish the same thing supported in the API, this
redundancy breeds complication as more features are added, and in
developing test coverage. And existing Neutron APIs tend to expose only
primitives. I get the impression that people against the idea could be
convinced if more compelling reasons were illustrated for supporting
single-call, perhaps other than we don't want to change the way it's done
in our environment right now.
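
For calibration, the rough shape I picture when people say 'single call' is
something like the following. The field names here are invented purely for
illustration (this is not a proposal for the actual API), so please correct
this picture if it's off:

    single_call_request = {
        'load_balancer': {
            'name': 'web-lb',
            'vip_subnet_id': 'SUBNET-UUID',
            'protocol': 'HTTP',
            'port': 80,
            'members': [
                {'address': '10.0.0.10', 'port': 80},
                {'address': '10.0.0.11', 'port': 80},
            ],
            'health_monitor': {'type': 'HTTP', 'delay': 5, 'max_retries': 3},
        }
    }

whereas the primitives approach would create the VIP/listener, pool, members
and health monitor as separate resources with separate calls.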

I've mostly stayed out of this debate because our solution as used by our
customers presently isn't single-call and I don't really understand the
requirements around this.

So! I would love it if some of you could fill me in on this, especially
since I'm working on a revision of the proposed API. Specifically, what I'm
looking for is answers to the following questions:

1. Could you please explain what you understand single-call API
functionality to be?

2. Could you describe the simplest use case that uses single-call API in
your environment right now? Please be very specific--  ideally, a couple
examples of specific CLI commands a user might run, or API calls (along with
specific configuration data) would be great.

3. Could you describe the most complicated use case that your single-call
API supports? Again, please be very specific here.

4. What percentage of your customer base are used to using single-call
functionality, and what percentage are used to manipulating primitives?

Thanks!
Stephen


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reviewing spelling and grammar errors in blueprints Re: [Nova] nova-specs

2014-04-17 Thread Carl Baldwin
Personally, I try not to be disagreeable and to be considerate in my
reviews.  However, I don't want to worry too much about hurting
someone's feelings by making a comment.  As a community we should be
considerate and polite but we should also embrace critical reviews of
our own work.

I think adding a note for a minor mistake is fine.  I appreciate those
in reviews of my own patches.  I mark them nit: ... and I don't give
-1 for nits.  To me, the definition of a nit is something that I
noticed, thought I'd point it out but I would not try to hold up the
patch for it.  Keep in mind though, that nit comments might be mixed
in with other comments that are worthy of -1.  If the author wishes to
roll the patch for some other reason then the nits should be
considered.

My $0.02

Carl

On Thu, Apr 17, 2014 at 7:41 AM, Stefano Maffulli stef...@openstack.org wrote:
 On 04/16/2014 07:56 PM, Dan Smith wrote:
 Do we really want to -1 for spelling mistake in nova-specs?

 I do, yes. These documents are intended to be read by deployers and
 future developers. I think it's really important that they're useful in
 that regard.

 Guys, use your judgement with this. If a spelling mistake is really an
 impediment to understanding the meaning of the sentence or introduces
 ambiguity, by all means fix it (i.e. provide a correction, for native
 English speakers).

 Always imagine that on the other side there is someone who has feelings
 and may have already done an immense effort to learn how to express
 technical concepts in a foreign language. Getting a vote for a small
 thing brings any adult back to childhood memories and may cause bad
 feelings.

 Be very very careful. I know most of reviewers are already being
 careful, I'm just piling up on top of that carefulness: there is never
 enough :)

 Please don't -1 if it's a minor grammar/spelling mistake that doesn't
 prevent proper understanding of the blueprint by a person skilled in the
 art.

 /stef

 --
 Ask and answer questions on https://ask.openstack.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reviewing spelling and grammar errors in blueprints Re: [Nova] nova-specs

2014-04-17 Thread Carl Baldwin
I'd prefer that others *not* upload a new patch over mine just to make
a spelling correction.  I might be in the middle of another version of
the patch myself and mine will overwrite yours.  If you want to upload
a patch over mine please ask me first so that we can coordinate and
discuss the change you plan to make.  I might be overjoyed to accept
your help on the patch but probably not to fix nits unless I'm on
vacation or something.

Carl

On Thu, Apr 17, 2014 at 10:41 AM, Chmouel Boudjnah chmo...@enovance.com wrote:

 On Thu, Apr 17, 2014 at 9:41 AM, Stefano Maffulli stef...@openstack.org
 wrote:

 Please don't -1 if it's a minor grammar/spelling mistake that doesn't
 prevent proper understanding of the blueprint by a person skilled in the
 art.



 or it may be acceptable that the reviewer can always do the minor spelling
 correction for the other users directly.

 Chmouel

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]

2014-04-17 Thread Nejc Saje
Hi,

quickly said, you can use the client API with something like:

import keystoneclient.v2_0.client as ksclient
creds = {'username': 'demo', 'password': 'password',
         'auth_url': '<keystone auth url>', 'tenant_name': 'demo'}
keystone = ksclient.Client(**creds)

import ceilometerclient.v2.client as cclient
ceilometer = cclient.Client(token=lambda: keystone.auth_token,
                            endpoint=config.get('OpenStack',
                                                'ceilometer_endpoint'))
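
(config above is just whatever configuration object you keep the endpoint
in.)  From there a quick sanity check, e.g. listing a few meters, confirms
the token and endpoint work -- a sketch assuming the usual
python-ceilometerclient meters API:

    for meter in ceilometer.meters.list()[:5]:
        print(meter.name)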

Cheers,
Nejc

On Apr 17, 2014, at 6:52 PM, Hachem Chraiti hachem...@gmail.com wrote:

 Hi ,
 how to authenticate against openstack's Ceilometer Client using python 
 program?
 plase i need response please
 
 Sincerly ,
 Chraiti Hachem
   software enginee
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need help to understand Jenkins msgs

2014-04-17 Thread Zaro
The Jenkins failure seems to indicate that the Jenkins xUnit plugin could
not find any test result files for processing.  It could be that the
tests didn't get run, or that they ran but didn't generate the test
results required for the plugin to pick up.  That's probably where I
would check.

On Thu, Apr 17, 2014 at 10:16 AM, Deepak Shetty dpkshe...@gmail.com wrote:
 Hi,
   Can someone help me understand why Jenkins build shows failures
 for some of the tests for my patch @
 https://review.openstack.org/#/c/86888/

 I really couldn't understand it even after clicking those links
 TIA

 thanx,
 deepak

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron BP review process for Juno

2014-04-17 Thread Carl Baldwin
Sure thing [1].  The easiest change I saw was to remove the
restriction that the number of sub titles is exactly 9.  This won't
require any of the other blueprints already posted for review to
change.  See what you think.
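
Roughly, the relaxation amounts to something like this -- a sketch only, not
the actual test code (the real check in the specs repo is structured
differently and parses the RST):

    def validate_subtitles(required, found):
        # Accept any extra sub-headings an author adds under
        # 'Proposed change', but fail if a required one (Alternatives,
        # Data model impact, ...) is missing.
        missing = set(required) - set(found)
        assert not missing, 'Missing sub-headings: %s' % sorted(missing)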

Carl

[1] https://review.openstack.org/#/c/88381/

On Wed, Apr 16, 2014 at 3:43 PM, Kyle Mestery mest...@noironetworks.com wrote:
 On Wed, Apr 16, 2014 at 4:26 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 Neutron (and Nova),

 I have had one thing come up as I've been using the template.  I find
 that I would like to add just a little document structure in the form
 of a sub-heading or two under the Proposed change heading but before
 the required Alternatives sub-heading.  However, this is not allowed
 by the tests.

 Proposed change
 =

 I want to add a little bit of document structure here but I cannot
 because any sub-headings would be counted among the exactly 9
 sub-headings I'm required to have starting with Alternatives.  This
 seems a bit unnatural to me.

 Alternatives
 
 ...


 The sub-headings allow structure underneath but the first heading
 doesn't.  Could be do it a little bit differentely?  Maybe something
 like this?

 Proposed change
 =

 Overview
 

 I could add structure under here.

 Alternatives
 
 ...

 Thoughts?  Another idea might be to change the test to require at
 least the nine required sub-headings but allow for the addition of
 another.

 I'm fine with either of these proposed changes to be honest. Carl,
 please submit a patch to neutron-specs and we can review it there.

 Also, I'm in the process of adding some jenkins jobs for neutron-specs
 similar to nova-specs.

 Thanks,
 Kyle

 Carl

 On Tue, Apr 15, 2014 at 4:07 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 Given the success the Nova team has had in handling reviews using
 their new nova-specs gerrit repository, I think it makes a lot of
 sense for Neutron to do the same. With this in mind, I've added
 instructions to the BP wiki [1] for how to do. Going forward in Juno,
 this is how Neutron BPs will be handled by the Neutron core team. If
 you are currently working on a BP or code for Juno which is attached
 to a BP, please file the BP using the process here [1].

 Given this is our first attempt at using this for reviews, I
 anticipate there may be a few hiccups along the way. Please reply on
 this thread or reach out in #openstack-neutron and we'll sort through
 whatever issues we find.

 Thanks!
 Kyle

 [1] https://wiki.openstack.org/wiki/Blueprints#Neutron

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread Carl Baldwin
I don't see any indication that a floating ip can be associated with
any of the secondary addresses.  Can this be done?

If not, then multiple addresses are not useful if a floating ip is
required to make the server public facing.

Carl

On Wed, Apr 16, 2014 at 10:46 PM, Aaron Rosen aaronoro...@gmail.com wrote:
 The allowed-address-pair extension that was added here
 (https://review.openstack.org/#/c/38230/) allows us to add arbitrary ips to
 an interface to allow them. This is useful if you want to run something like
 VRRP between two instances.


 On Wed, Apr 16, 2014 at 9:39 PM, Kevin Benton blak...@gmail.com wrote:

 I was under the impression that the security group rules blocked addresses
 not assigned by neutron[1].


 1.https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L188


 On Wed, Apr 16, 2014 at 9:20 PM, Aaron Rosen aaronoro...@gmail.com
 wrote:

 You can do it with ip aliasing and use one interface:

 ifconfig eth0 10.0.0.22/24
 ifconfig eth0:1 10.0.0.23/24
 ifconfig eth0:2 10.0.0.24/24

 2: eth0: NO-CARRIER,BROADCAST,MULTICAST,UP mtu 1500 qdisc mq state DOWN
 qlen 1000
 link/ether 40:6c:8f:1a:a9:31 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.22/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
 inet 10.0.0.23/24 brd 10.0.0.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
 inet 10.0.0.24/24 brd 10.0.0.255 scope global secondary eth0:2
valid_lft forever preferred_lft forever



 On Wed, Apr 16, 2014 at 8:53 PM, Kevin Benton blak...@gmail.com wrote:

 Web server running multiple SSL sites that wants to be compatible with
 clients that don't support the SNI extension. There is no way for a server
 to get multiple IP addresses on the same interface is there?


 On Wed, Apr 16, 2014 at 5:50 PM, Aaron Rosen aaronoro...@gmail.com
 wrote:

 This is true. Several people have asked this same question over the
 years though I've yet to hear a use case why one really need to do this. 
 Do
 you have one?


 On Wed, Apr 16, 2014 at 3:12 PM, Ronak Shah ro...@nuagenetworks.net
 wrote:

 Hi Vikash,
 Currently this is not supported. the NIC not only needs to be in
 different subnet, they have to be in different network as well (container
 for the subnet)

 Thanks
 Ronak

 On Wed, Apr 16, 2014 at 3:51 AM, Vikash Kumar
 vikash.ku...@oneconvergence.com wrote:

 With 'interfaces' I mean 'nics' of VM.


 On Wed, Apr 16, 2014 at 4:18 PM, Vikash Kumar
 vikash.ku...@oneconvergence.com wrote:

 Hi,

  I want to launch one VM which will have two Ethernet interfaces
 with IP of single subnet. Is this supported now in openstack ? Any
 suggestion ?


 Thanx



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: [Openstack] Network API: how to set fixed MAC address

2014-04-17 Thread Jeffrey Nguyen (jeffrngu)
Hi Devs,

I'm not getting any answers on the openstack general alias to my questions 
about the new enhancement done by Zhi Yan Liu (copied again 
below).  Does anyone here know the answers or know where to check in the code 
for this enhancement?

Zhi,
Could you please confirm if https://review.openstack.org/#/c/23892/ is used for 
setting a fixed MAC addr when invoking the API to create a new server?   If so, could you 
give an example JSON payload where you set the MAC address?   I checked the 
changes you submitted but did not find any payload with MAC address.   Also, 
could you confirm which release of OpenStack has your patch?I'm currently 
using 
http://docs.openstack.org/api/openstack-compute/2/content/POST_createServer_v2__tenant_id__servers_CreateServers.html
  to create new server with fixed IP and subnet.


Thanks,
-Jeffrey

From: jeffrngu jeffr...@cisco.com
Date: Tuesday, April 15, 2014 3:52 PM
To: Aaron Segura aaron.seg...@gmail.com, zhiy...@cn.ibm.com, 
openst...@lists.openstack.org
Subject: Re: [Openstack] Network API: how to set fixed MAC address


I'm adding the alias back to see if anyone knows the answers for the questions 
I asked Zhi below.

Thanks,
-Jeffrey

From: jeffrngu jeffr...@cisco.com
Date: Monday, April 14, 2014 8:27 PM
To: Aaron Segura aaron.seg...@gmail.com, zhiy...@cn.ibm.com
Subject: Re: [Openstack] Network API: how to set fixed MAC address


Thanks Aaron.  I think this could potentially be used as a work-around if 
there's no API support for setting MAC addr.   I saw a commit that seems like 
the feature I'm looking for:
https://review.openstack.org/#/c/23892/ (I'm copying Zhi who is the committer 
for this patch).

Zhi,
Could you please confirm if https://review.openstack.org/#/c/23892/ is used for 
setting fix MAC addr when invoking API to create new server?   If so, could you 
give an example JSON payload where you set the MAC address?   I checked the 
changes you submitted but did not find any payload with MAC address.   Also, 
could you confirm which release of OpenStack has your patch?I'm currently 
using 
http://docs.openstack.org/api/openstack-compute/2/content/POST_createServer_v2__tenant_id__servers_CreateServers.html
  to create new server with fixed IP and subnet.

Thanks,
-Jeffrey


From: Aaron Segura aaron.seg...@gmail.com
Date: Monday, April 14, 2014 5:15 PM
To: jeffrngu jeffr...@cisco.com
Subject: Re: [Openstack] Network API: how to set fixed MAC address

Not sure if this will work for you, but you can create a port beforehand with 
the required info and specify port-id on boot instead of net-id:

# neutron port-create --fixed-ip ip_address=x.x.x.x --mac-address 
aa:bb:cc:dd:ee:ff my_network
# nova boot ... --nic port-id=portid

those are cmdline examples, but you get the idea...
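
The same thing from Python would look roughly like this (a sketch using
python-neutronclient and python-novaclient; argument names are from memory,
so double-check them against the client versions you have installed):

    from neutronclient.v2_0 import client as neutron_client
    from novaclient.v1_1 import client as nova_client

    neutron = neutron_client.Client(username='...', password='...',
                                    tenant_name='...', auth_url='...')
    # Pre-create the port with the MAC and fixed IP you want.
    port = neutron.create_port({'port': {
        'network_id': 'NETWORK-UUID',
        'mac_address': 'aa:bb:cc:dd:ee:ff',
        'fixed_ips': [{'ip_address': 'x.x.x.x'}],
    }})['port']

    # Boot the server against that port instead of a network.
    nova = nova_client.Client('user', 'password', 'tenant',
                              'http://keystone:5000/v2.0')
    nova.servers.create(name='my-server', image='IMAGE-UUID',
                        flavor='FLAVOR-ID',
                        nics=[{'port-id': port['id']}])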


On Mon, Apr 14, 2014 at 7:23 PM, Jeffrey Nguyen (jeffrngu) 
jeffr...@cisco.com wrote:
Hi,

I'm new to this mailing list, please feel free to direct my question to 
appropriate list if this is not the right one.

I'm currently using 
http://docs.openstack.org/api/openstack-compute/2/content/POST_createServer_v2__tenant_id__servers_CreateServers.html
 to launch an instance on OpenStack with an attached network uuid and fixed IP 
address.   I was wondering if there's any API available to set fixed MAC 
address in addition to network UUID and fixed IP.   Ideally, I'd like to set 
all three parameters (network UUID, fixed IP, and MAC addr) in the same API 
call.

Thanks,
-Jeffrey

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : 
openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-17 Thread Sri
hello Stephen,


I am interested in LBaaS and want to know whether the weekly meeting's
chat transcripts are posted online,
or maybe on an etherpad?


Can you please share the links?

thanks,
SriD



--
View this message in context: 
http://openstack.10931.n7.nabble.com/Neutron-LBaas-Single-call-API-discussion-tp38533p38542.html
Sent from the Developer mailing list archive at Nabble.com.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-17 Thread Stephen Balukoff
Hi Sri,

Yes, the meeting minutes  etc. are all available here, usually a few
minutes after the meeting is over:
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/

(You are also, of course, welcome to join!)

Stephen


On Thu, Apr 17, 2014 at 11:34 AM, Sri sri.networ...@gmail.com wrote:

 hello Stephen,


 I am interested in LBaaS and want to know if we post the weekly meeting's
 chat transcripts online?
 or may be update an etherpad?


 Can you please share the links?

 thanks,
 SriD



 --
 View this message in context:
 http://openstack.10931.n7.nabble.com/Neutron-LBaas-Single-call-API-discussion-tp38533p38542.html
 Sent from the Developer mailing list archive at Nabble.com.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC Candidacy

2014-04-17 Thread Devananda van der Veen
I would like to announce my candidacy for the Technical Committee this
term.


Background
=

I began working on OpenStack more than two years ago. Initially, I focused
on improving Nova's database API and led the Nova DB team in that cleanup
effort for a time. As much of that work was finished or transitioned to
other capable hands, I began working on the Nova baremetal driver and
helped to start the TripleO project. At the Havana summit, I proposed that
this driver be split into its own project, and for the last year I have
served as the PTL for Ironic.


Platform
==

OpenStack is not merely a collection of open source software that,
together, provides a cloud; it is the open community which creates and
supports that software. While being open and encouraging new contributions,
that developer community, led by the TC and the PTLs, has a responsibility
to ensure the quality of the resulting software. OpenStack must be
scalable, maintainable, and easy to install, and I believe it must also
look and feel cohesive to operators and end-users, even if, under the hood,
it is a collection of loosely coupled parts. Diversity in projects is good,
as is a healthy PaaS ecosystem, but divergence away from common operational
tooling makes it more difficult for both contributors and operators.

Over the last 6 months, the TC has codified integration and testing
requirements, both raising the bar for new projects and making the path to
integration clearer. Some integrated projects do not meet those standards
yet, and work is underway to remedy that. I would like to see this
continue, leading towards greater consistency across all integrated
projects.

I believe that the greatest barrier to entry for our users is the
complexity of installing an OpenStack cloud; addressing this is the primary
reason that I'm working on Ironic and TripleO. Similarly, I believe the
greatest barrier to entry for new projects is integration with the
community of existing projects, and given my role as PTL for Ironic, I
believe that I bring a unique perspective to this discussion. In this
cycle, I would like the TC to be more involved in individual projects'
technical considerations than it historically had been, particularly during
the incubation process where guidance can have the most positive effect on
a young project.


Whether I am elected or not, I will continue working towards these goals.
It has been, and continues to be, a pleasure to work with such a vast and
vibrant community.

Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2014-04-17 Thread Anita Kuno
confirmed

On 04/17/2014 02:48 PM, Devananda van der Veen wrote:
 I would like to announce my candidacy for the Technical Committee this
 term.
 
 
 Background
 =
 
 I began working on OpenStack more than two years ago. Initially, I focused
 on improving Nova's database API and led the Nova DB team in that cleanup
 effort for a time. As much of that work was finished or transitioned to
 other capable hands, I began worked on the Nova baremetal driver and
 helped to start the TripleO project. At the Havana summit, I proposed that
 this driver be split into its own project, and for the last year I have
 served as the PTL for Ironic.
 
 
 Platform
 ==
 
 OpenStack is not merely a collection of open source software that,
 together, provides a cloud; it is the open community which creates and
 supports that software. While being open and encouraging new contributions,
 that developer community, led by the TC and the PTLs, has a responsibility
 to ensure the quality of the resulting software. OpenStack must be
 scalable, maintainable, and easy to install, and I believe it must also
 look and feel cohesive to operators and end-users, even if, under the hood,
 it is a collection of loosely coupled parts. Diversity in projects is good,
 as is a healthy PaaS ecosystem, but divergence away from common operational
 tooling makes it more difficult for both contributors and operators.
 
 Over the last 6 months, the TC has codified integration and testing
 requirements, both raising the bar for new projects and making the path to
 integration clearer. Some integrated projects do not meet those standards
 yet, and work is underway to remedy that. I would like to see this
 continue, leading towards greater consistency across all integrated
 projects.
 
 I believe that the greatest barrier to entry for our users is the
 complexity of installing an OpenStack cloud; addressing this is the primary
 reason that I'm working on Ironic and TripleO. Similarly, I believe the
 greatest barrier to entry for new projects is integration with the
 community of existing projects, and given my role as PTL for Ironic, I
 believe that I bring a unique perspective to this discussion. In this
 cycle, I would like the TC to be more involved in individual projects'
 technical considerations than it historically had been, particularly during
 the incubation process where guidance can have the most positive effect on
 a young project.
 
 
 Whether I am elected or not, I will continue working towards these goals.
 It has been, and continues to be, a pleasure to work with such a vast and
 vibrant community.
 
 Devananda
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][sahara] Merging Sahara-UI Dashboard code into horizon

2014-04-17 Thread Chad Roberts
Per blueprint  
https://blueprints.launchpad.net/horizon/+spec/merge-sahara-dashboard we are 
merging the Sahara Dashboard UI code into the Horizon code base.

Over the last week, I have been working on making this merge happen and along 
the way some interesting questions have come up.  Hopefully, together we can 
make the best possible decisions.

Sahara is the Data Processing platform for OpenStack.  During incubation and 
prior to that, a Horizon dashboard plugin was developed to work with the data 
processing API.  Our original implementation was a separate dashboard that we 
would activate by adding to HORIZON_CONFIG and INSTALLED_APPS.  The layout gave 
us a root of Sahara on the same level as Admin and Project.  Under Sahara, we 
have 9 panels that make up the entirety of the functionality for the Sahara 
dashboard.
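
(For reference, enabling that separate dashboard looked roughly like the 
following in Horizon's settings; the module and dashboard names here are 
illustrative, not an exact excerpt:

    # local_settings.py sketch
    HORIZON_CONFIG['dashboards'] += ('sahara',)    # add the extra dashboard slug
    INSTALLED_APPS += ('saharadashboard',)         # register the plugin's Django app
)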

Over the past week there seem to be at least 2 questions that have come up.  
I'd like to get input from anyone interested.  

1)  Where should the functionality live within the Horizon UI? So far, 2 
options have been presented.
a)  In a separate dashboard (same level as Admin and Project).  This is 
what we had in the past, but it doesn't seem to fit the flow of Horizon very 
well.  I had a review up for this method at one point, but it was shot down, so 
it is currently abandoned.
b)  In a panel group under Project.  This is what I have started work on 
recently. This seems to mimic the way other things have been integrated, but 
more than one person has disagreed with this approach.
c)  Any other options?


2)  Where should the code actually reside?
a)  Under openstack_dashboards/dashboards/sahara  (or data_processing).  
This was the initial approach when the target was a separate dashboard.
b)  Have all 9 panels reside in openstack_dashboards/dashboards/project.  
To me, this is likely to eventually make a mess of /project if more and more 
things are integrated there.
c)  Place all 9 data_processing panels under 
openstack_dashboards/dashboards/project/data_processing  This essentially 
groups the code by panel group and might make for a bit less mess.
d)  Somewhere else?


The current plan is to discuss this at the next Horizon weekly meeting, but 
even if you can't be there, please do add your thoughts to this thread.

Thanks,
Chad Roberts (crobertsrh on irc)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NEUTRON] [IPv6] [VPNaaS] - IPSec by default on each Tenant router, the beginning of the Opportunistic Encryption era (rfc4322 ?)...

2014-04-17 Thread Martinx - ジェームズ
Guys,

I've been thinking here about IPSec with IPv6. One of the first
ideas/wishes of the IPv6 designers was to always deploy it with IPSec
enabled, always (or so I've heard). But this isn't widely used by now. Who is
actually using IPv6 Opportunistic Encryption?!

For example: With O.E., we'll be able to make a IPv6 IPSec VPN with Google,
so we can ping6 google.com safely... Or with Twitter, Facebook! Or
whatever! That is the purpose of Opportunistic Encryption, am I right?!

Then, with OpenStack, we might have a multi-Region or even a multi-AZ
cloud, based on the topology Per-Tenant Routers with Private Networks,
for example. So, how hard would it be to deploy the Namespace routers with
IPv6+IPSec O.E. just enabled by default?

I'm thinking about this:


* IPv6 Tenant 1 subnet A - IPv6 Router + IPSec O.E. - *Internet
IPv6* - IPv6 Router + IPSec O.E. - IPv6 Tenant 1 subnet B


So, with O.E., it will be simpler (from the tenant's point of view) to
safely interconnect multiple tenant's subnets, don't you guys think?!

Amazon, on the other hand, for example, provides things like VPC Peering,
or VPN Instances, or NAT instances, as a solution to interconnect
creepy IPv4 networks... We don't need any of those kinds of solutions
with IPv6... Right?!

Basically, the OpenStack VPNaaS (O.E.) will come enabled at the Namespace
Router by default, without the tenant even knowing it is there, but of
course, we can still show that IPv6-IPSec-VPN at the Horizon Dashboard,
when established, just for fun... But tenants will never need to think
about it...   =)

And to share the IPSec keys, the stuff required for Opportunistic
Encryption to work gracefully, each OpenStack in the wild can become a
*pod*, which will form a network of *pods*; I mean, independently owned
*pods* which interoperate to form the *Opportunistic Encryption Network of
OpenStack Clouds*.

I'll try to make a comparison here, as an analogy, do you guys have ever
heard about the DIASPORA* Project? No, take a look:
http://en.wikipedia.org/wiki/Diaspora_(social_network)

I think that, OpenStack might be for the Opportunistic Encryption, what
DIASPORA* Project is for Social Networks!

If OpenStack can share its keys (O.E. stuff) in someway, with each other,
we can easily build a huge network of OpenStacks, and then, each one will
naturally talk with each other, using a secure connection.

I would love to hear some insights from you guys!

Please, keep in mind that I never deployed a IPSec O.E. before, this is
just an idea I had... If I'm wrong, ignore this e-mail.


References:

https://tools.ietf.org/html/rfc4322

https://groups.google.com/d/msg/ipv6hackers/3LCTBJtr-eE/Om01uHUcf9UJ

http://www.inrialpes.fr/planete/people/chneuman/OE.html


Best!
Thiago
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron BP review process for Juno

2014-04-17 Thread Kyle Mestery
On Thu, Apr 17, 2014 at 1:18 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 Sure thing [1].  The easiest change I saw was to remove the
 restriction that the number of sub titles is exactly 9.  This won't
 require any of the other blueprints already posted for review to
 change.  See what you think.

This was a good change, and in fact it's already been merged. Thanks!

Kyle

 Carl

 [1] https://review.openstack.org/#/c/88381/

 On Wed, Apr 16, 2014 at 3:43 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 On Wed, Apr 16, 2014 at 4:26 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 Neutron (and Nova),

 I have had one thing come up as I've been using the template.  I find
 that I would like to add just a little document structure in the form
 of a sub-heading or two under the Proposed change heading but before
 the required Alternatives sub-heading.  However, this is not allowed
 by the tests.

 Proposed change
 =

 I want to add a little bit of document structure here but I cannot
 because any sub-headings would be counted among the exactly 9
 sub-headings I'm required to have starting with Alternatives.  This
 seems a bit unnatural to me.

 Alternatives
 
 ...


 The sub-headings allow structure underneath but the first heading
  doesn't.  Could we do it a little bit differently?  Maybe something
 like this?

 Proposed change
 =

 Overview
 

 I could add structure under here.

 Alternatives
 
 ...

 Thoughts?  Another idea might be to change the test to require at
 least the nine required sub-headings but allow for the addition of
 another.

 I'm fine with either of these proposed changes to be honest. Carl,
 please submit a patch to neutron-specs and we can review it there.

 Also, I'm in the process of adding some jenkins jobs for neutron-specs
 similar to nova-specs.

 Thanks,
 Kyle

 Carl

 On Tue, Apr 15, 2014 at 4:07 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 Given the success the Nova team has had in handling reviews using
 their new nova-specs gerrit repository, I think it makes a lot of
 sense for Neutron to do the same. With this in mind, I've added
 instructions to the BP wiki [1] for how to do this. Going forward in Juno,
 this is how Neutron BPs will be handled by the Neutron core team. If
 you are currently working on a BP or code for Juno which is attached
 to a BP, please file the BP using the process here [1].

 Given this is our first attempt at using this for reviews, I
 anticipate there may be a few hiccups along the way. Please reply on
 this thread or reach out in #openstack-neutron and we'll sort through
 whatever issues we find.

 Thanks!
 Kyle

 [1] https://wiki.openstack.org/wiki/Blueprints#Neutron

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-17 Thread Stephen Balukoff
Oh! One other question:

5. Should single-call stuff work for the lifecycle of a load balancing
service? That is to say, should delete functionality also clean up all
primitives associated with the service?


On Thu, Apr 17, 2014 at 11:44 AM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Hi Sri,

 Yes, the meeting minutes  etc. are all available here, usually a few
 minutes after the meeting is over:
 http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/

 (You are also, of course, welcome to join!)

 Stephen


 On Thu, Apr 17, 2014 at 11:34 AM, Sri sri.networ...@gmail.com wrote:

 hello Stephen,


 I am interested in LBaaS and want to know if we post the weekly meeting's
 chat transcripts online?
 or may be update an etherpad?


 Can you please share the links?

 thanks,
 SriD



 --
 View this message in context:
 http://openstack.10931.n7.nabble.com/Neutron-LBaas-Single-call-API-discussion-tp38533p38542.html
 Sent from the Developer mailing list archive at Nabble.com.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Stephen Balukoff
 Blue Box Group, LLC
 (800)613-4305 x807




-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron BP review process for Juno

2014-04-17 Thread Carl Baldwin
Wow, easiest merge ever!  Can we get this repository counted in our stats?!  ;)

Carl

On Thu, Apr 17, 2014 at 1:09 PM, Kyle Mestery mest...@noironetworks.com wrote:
 On Thu, Apr 17, 2014 at 1:18 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 Sure thing [1].  The easiest change I saw was to remove the
 restriction that the number of sub titles is exactly 9.  This won't
 require any of the other blueprints already posted for review to
 change.  See what you think.

 This was a good change, and in fact it's already been merged. Thanks!

 Kyle

 Carl

 [1] https://review.openstack.org/#/c/88381/

 On Wed, Apr 16, 2014 at 3:43 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 On Wed, Apr 16, 2014 at 4:26 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 Neutron (and Nova),

 I have had one thing come up as I've been using the template.  I find
 that I would like to add just a little document structure in the form
 of a sub-heading or two under the Proposed change heading but before
 the required Alternatives sub-heading.  However, this is not allowed
 by the tests.

 Proposed change
 =

 I want to add a little bit of document structure here but I cannot
 because any sub-headings would be counted among the exactly 9
 sub-headings I'm required to have starting with Alternatives.  This
 seems a bit unnatural to me.

 Alternatives
 
 ...


 The sub-headings allow structure underneath but the first heading
  doesn't.  Could we do it a little bit differently?  Maybe something
 like this?

 Proposed change
 =

 Overview
 

 I could add structure under here.

 Alternatives
 
 ...

 Thoughts?  Another idea might be to change the test to require at
 least the nine required sub-headings but allow for the addition of
 another.

 I'm fine with either of these proposed changes to be honest. Carl,
 please submit a patch to neutron-specs and we can review it there.

 Also, I'm in the process of adding some jenkins jobs for neutron-specs
 similar to nova-specs.

 Thanks,
 Kyle

 Carl

 On Tue, Apr 15, 2014 at 4:07 PM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 Given the success the Nova team has had in handling reviews using
 their new nova-specs gerrit repository, I think it makes a lot of
 sense for Neutron to do the same. With this in mind, I've added
  instructions to the BP wiki [1] for how to do this. Going forward in Juno,
 this is how Neutron BPs will be handled by the Neutron core team. If
 you are currently working on a BP or code for Juno which is attached
 to a BP, please file the BP using the process here [1].

 Given this is our first attempt at using this for reviews, I
 anticipate there may be a few hiccups along the way. Please reply on
 this thread or reach out in #openstack-neutron and we'll sort through
 whatever issues we find.

 Thanks!
 Kyle

 [1] https://wiki.openstack.org/wiki/Blueprints#Neutron

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Should we adopt a blueprint design process

2014-04-17 Thread Russell Haering
Completely agree.

We're spending too much time discussing features after they're implemented,
which makes contribution more difficult for everyone. Forcing an explicit
design+review process, using the same tools as we use for coding+review
seems like a great idea. If it doesn't work we can iterate.


On Thu, Apr 17, 2014 at 11:01 AM, Kyle Mestery mest...@noironetworks.comwrote:

 On Thu, Apr 17, 2014 at 12:11 PM, Devananda van der Veen
 devananda@gmail.com wrote:
  Hi all,
 
  The discussion of blueprint review has come up recently for several
 reasons,
  not the least of which is that I haven't yet reviewed many of the
 blueprints
  that have been filed recently.
 
  My biggest issue with launchpad blueprints is that they do not provide a
  usable interface for design iteration prior to writing code. Between the
  whiteboard section, wikis, and etherpads, we have muddled through a few
  designs (namely cinder and ceilometer integration) with accuracy, but the
  vast majority of BPs are basically reviewed after they're implemented.
 This
  seems to be a widespread objection to launchpad blueprints within the
  OpenStack community, which others are trying to solve. Having now looked
 at
  what Nova is doing with the nova-specs repo, and considering that
 TripleO is
  also moving to that format for blueprint submission, and considering
 that we
  have a very good review things in gerrit culture in the Ironic
 community
  already, I think it would be a very positive change.
 
  For reference, here is the Nova discussion thread:
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html
 
  and the specs repo BP template:
  https://github.com/openstack/nova-specs/blob/master/specs/template.rst
 
  So, I would like us to begin using this development process over the
 course
  of Juno. We have a lot of BPs up right now that are light on details,
 and,
  rather than iterate on each of them in launchpad, I would like to propose
  that:
  * we create an ironic-specs repo, based on Nova's format, before the
 summit
  * I will begin reviewing BPs leading up to the summit, focusing on
 features
  that were originally targeted to Icehouse and didn't make it, or are
  obviously achievable for J1
  * we'll probably discuss blueprints and milestones at the summit, and
 will
  probably adjust targets
  * after the summit, for any BP not targeted to J1, we require blueprint
  proposals to go through the spec review process before merging any
  associated code.
 
  Cores and interested parties, please reply to this thread with your
  opinions.
 
 I think this is a great idea Devananda. The Neutron community has
 moved to this model for Juno as well, and people have been very
 positive so far.

 Thanks,
 Kyle

  --
  Devananda
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSG][OSSN] Sample Keystone v3 policy exposes privilege escalation vulnerability

2014-04-17 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Sample Keystone v3 policy exposes privilege escalation vulnerability
- ---

### Summary ###
The policy.v3cloudsample.json sample Keystone policy file combined with
the underlying mutability of the domain ID for user, group, and project
entities exposed a privilege escalation vulnerability.  When this
sample policy is applied a domain administrator can elevate their
privileges to become a cloud administrator.

### Affected Services / Software ###
Keystone, Havana

### Discussion ###
Changes to the Keystone v3 sample policy during the Havana release cycle
set an excessively broad domain administrator scope that allowed
creation of roles (create_grant) on other domains (among other
actions).  There was no check that the domain administrator had
authority to the domain they were attempting to grant a role on.

Combining the mutable state of the domain ID for user, group, and
project entities with the sample v3 policy resulted in a privilege
escalation vulnerability.  A domain administrator could execute a series
of steps to escalate their access to that of a cloud administrator.

### Recommended Actions ###
Review the following updated sample v3 policy file from the OpenStack
Icehouse release:

https://git.openstack.org/cgit/openstack/keystone/commit/?id=0496466821c1ff6e7d4209233b6c671f88aadc50

You should ensure that your Keystone deployment appropriately reflects
that update.  Domain administrators should generally only be permitted
to perform actions against the domain for which they are an
administrator.
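
As an illustration of the intended scoping (these lines are a sketch, not a
verbatim excerpt from the sample file), a rule that previously required only
the admin role, such as:

  "identity:create_grant": "rule:admin_required",

should instead also match the domain in the token against the domain being
acted on, along the lines of:

  "identity:create_grant": "rule:admin_required and domain_id:%(domain_id)s",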

Optionally, review the recent addition of support for immutable domain
IDs and consider it for applicability to your Keystone deployment:

https://git.openstack.org/cgit/openstack/keystone/commit/?id=a2fa6a6f01a4884edf369cafa39946636af5cf1a

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0010
Original LaunchPad Bug : https://bugs.launchpad.net/keystone/+bug/1287219
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTUCuwAAoJEJa+6E7Ri+EVvxwIAKsOIp4gBotwIO9yxTf3y4wF
C7nVi/y5JwwQmzxAHGtMCBn/M6xH8GygMz0P4HWO8B9cI8HWdxpFHy+/504ShTLV
E+ZMNbuJJ6FriKy6HASonfmleHguCT8fWsv5FvHjKsZnBjEY54OYP7Xnw4Kio4rZ
TpCja+vc3IrDnCwqoMHySjD8qSWZLsuYr/klo+AUEt0lry06Zr62Tgb7S6sqYrBn
mcbO0VJ0+89frcyVD4v6aONNX9OcqkQfH0lnriWT2Vyax6+s4DnOqAvsFy8Rdqdf
xWGBkRa7ejDUel5Jgzh9GUwrsk2tpcIpiHh1qXGjgTr8K8xmVu6zaxHE7Cm8wHY=
=l8Lr
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread Chris Friesen

On 04/17/2014 06:37 AM, CARVER, PAUL wrote:

Aaron Rosen wrote:


Sorry not really. It's still not clear to me why multiple nics would be

required on the same L2 domain.

I’m a fan of this old paper for nostalgic reasons
http://static.usenix.org/legacy/publications/library/proceedings/neta99/full_papers/limoncelli/limoncelli.pdf
but a search for transparent or bridging firewall turns up tons of hits.

Whether any of them are valid use cases for OpenStack is something that
we could debate, but the general concept of putting two firewall
interfaces into the same L2 domain and using it to control traffic flow
between different hosts on the same L2 domain has at least five years of
history behind it.


If you want it to act as a transparent firewall then you really need two 
separate physical networks where the firewall acts as a bridge between 
them.  Otherwise the traffic isn't forced to go through the firewall; it 
can just go directly to the target MAC address.


To do this in openstack I think you'd need to decouple virtual networks 
from virtual dhcp. So then you'd be able to do stuff like:


1) Create network A with no dhcp server or IP subnet.
2) Create network B with a subnet and dhcp server.
3) Create VM C with a NIC in each network, acting as a bridge/firewall.
4) Connect network B to the outside world.
5) Create VM D with a NIC in network A, it does DHCP broadcast, VM C 
forwards the DHCP request to network B where it gets assigned an address.
6) D can then talk to the outside world with C deciding what outside 
packets are allowed through to it, monitoring/logging the traffic, doing 
traffic shaping, etc.


I wonder if you could do something like this with OpenStack as-is? 
Maybe configure network A with no router, and with an IP address range 
that doesn't overlap with network B.  Then configure network B with a 
non-overlapping address range but also with a router?  Then C could 
still forward packets between the networks...
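
A rough CLI sketch of that idea (names, images and UUIDs are placeholders, and 
whether nova will happily boot a port on a subnet-less network may vary, so 
treat this as illustrative only):

    neutron net-create netA                                  # no subnet, no DHCP
    neutron net-create netB
    neutron subnet-create --name subB netB 192.168.2.0/24    # DHCP enabled by default
    nova boot --image cirros --flavor m1.small \
        --nic net-id=NETA_UUID --nic net-id=NETB_UUID bridge-fw
    nova boot --image cirros --flavor m1.small --nic net-id=NETA_UUID guest-d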


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Sahara 2014.1 (Icehouse) is released !

2014-04-17 Thread Sergey Lukjanov
Hi everyone,

I'm glad to announce the final release of Sahara 2014.1 Icehouse.
During this cycle we've completed 58 blueprints and fixed 124 bugs.

You can find source tarballs with complete lists of features and bug fixes:

https://launchpad.net/sahara/icehouse/2014.1

Release notes contain an overview of key new features:
https://wiki.openstack.org/wiki/Sahara/ReleaseNotes/Icehouse

Thanks!

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][nova][Neutron] Launch VM with multiple Ethernet interfaces with I.P. of single subnet.

2014-04-17 Thread Carl Baldwin
This review seems to suggest that it can be done:

https://review.openstack.org/#/c/85432

I was not able to reproduce this in devstack.  How does this work?  My
nova command to add an IP returned success but didn't seem to actually
add an IP address to the instance, and the address did not show in neutron
port-show.

Carl

On Thu, Apr 17, 2014 at 12:29 PM, Carl Baldwin c...@ecbaldwin.net wrote:
 I don't see any indication that a floating ip can be associated with
 any of the secondary addresses.  Can this be done?

 If not, then multiple addresses are not useful if a floating ip is
 required to make the server public facing.

 Carl

 On Wed, Apr 16, 2014 at 10:46 PM, Aaron Rosen aaronoro...@gmail.com wrote:
 The allowed-address-pair extension that was added here
 (https://review.openstack.org/#/c/38230/) allows us to add arbitrary ips to
 an interface to allow them. This is useful if you want to run something like
 VRRP between two instances.


 On Wed, Apr 16, 2014 at 9:39 PM, Kevin Benton blak...@gmail.com wrote:

 I was under the impression that the security group rules blocked addresses
 not assigned by neutron[1].


 1.https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_firewall.py#L188


 On Wed, Apr 16, 2014 at 9:20 PM, Aaron Rosen aaronoro...@gmail.com
 wrote:

 You can do it with ip aliasing and use one interface:

 ifconfig eth0 10.0.0.22/24
 ifconfig eth0:1 10.0.0.23/24
 ifconfig eth0:2 10.0.0.24/24

 2: eth0: NO-CARRIER,BROADCAST,MULTICAST,UP mtu 1500 qdisc mq state DOWN
 qlen 1000
 link/ether 40:6c:8f:1a:a9:31 brd ff:ff:ff:ff:ff:ff
 inet 10.0.0.22/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
 inet 10.0.0.23/24 brd 10.0.0.255 scope global secondary eth0:1
valid_lft forever preferred_lft forever
 inet 10.0.0.24/24 brd 10.0.0.255 scope global secondary eth0:2
valid_lft forever preferred_lft forever



 On Wed, Apr 16, 2014 at 8:53 PM, Kevin Benton blak...@gmail.com wrote:

 Web server running multiple SSL sites that wants to be compatible with
 clients that don't support the SNI extension. There is no way for a server
 to get multiple IP addresses on the same interface is there?


 On Wed, Apr 16, 2014 at 5:50 PM, Aaron Rosen aaronoro...@gmail.com
 wrote:

 This is true. Several people have asked this same question over the
 years though I've yet to hear a use case why one really need to do this. 
 Do
 you have one?


 On Wed, Apr 16, 2014 at 3:12 PM, Ronak Shah ro...@nuagenetworks.net
 wrote:

 Hi Vikash,
 Currently this is not supported. the NIC not only needs to be in
 different subnet, they have to be in different network as well 
 (container
 for the subnet)

 Thanks
 Ronak

 On Wed, Apr 16, 2014 at 3:51 AM, Vikash Kumar
 vikash.ku...@oneconvergence.com wrote:

 With 'interfaces' I mean 'nics' of VM.


 On Wed, Apr 16, 2014 at 4:18 PM, Vikash Kumar
 vikash.ku...@oneconvergence.com wrote:

 Hi,

  I want to launch one VM which will have two Ethernet interfaces
 with IP of single subnet. Is this supported now in openstack ? Any
 suggestion ?


 Thanx



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Should we adopt a blueprint design process

2014-04-17 Thread Chris Behrens
+1

 On Apr 17, 2014, at 12:27 PM, Russell Haering russellhaer...@gmail.com 
 wrote:
 
 Completely agree.
 
 We're spending too much time discussing features after they're implemented, 
 which makes contribution more difficult for everyone. Forcing an explicit 
 design+review process, using the same tools as we use for coding+review seems 
 like a great idea. If it doesn't work we can iterate.
 
 
 On Thu, Apr 17, 2014 at 11:01 AM, Kyle Mestery mest...@noironetworks.com 
 wrote:
 On Thu, Apr 17, 2014 at 12:11 PM, Devananda van der Veen
 devananda@gmail.com wrote:
  Hi all,
 
  The discussion of blueprint review has come up recently for several 
  reasons,
  not the least of which is that I haven't yet reviewed many of the 
  blueprints
  that have been filed recently.
 
  My biggest issue with launchpad blueprints is that they do not provide a
  usable interface for design iteration prior to writing code. Between the
  whiteboard section, wikis, and etherpads, we have muddled through a few
  designs (namely cinder and ceilometer integration) with accuracy, but the
  vast majority of BPs are basically reviewed after they're implemented. This
  seems to be a widespread objection to launchpad blueprints within the
  OpenStack community, which others are trying to solve. Having now looked at
  what Nova is doing with the nova-specs repo, and considering that TripleO 
  is
  also moving to that format for blueprint submission, and considering that 
  we
  have a very good review things in gerrit culture in the Ironic community
  already, I think it would be a very positive change.
 
  For reference, here is the Nova discussion thread:
  http://lists.openstack.org/pipermail/openstack-dev/2014-March/029232.html
 
  and the specs repo BP template:
  https://github.com/openstack/nova-specs/blob/master/specs/template.rst
 
  So, I would like us to begin using this development process over the course
  of Juno. We have a lot of BPs up right now that are light on details, and,
  rather than iterate on each of them in launchpad, I would like to propose
  that:
  * we create an ironic-specs repo, based on Nova's format, before the summit
  * I will begin reviewing BPs leading up to the summit, focusing on features
  that were originally targeted to Icehouse and didn't make it, or are
  obviously achievable for J1
  * we'll probably discuss blueprints and milestones at the summit, and will
  probably adjust targets
  * after the summit, for any BP not targeted to J1, we require blueprint
  proposals to go through the spec review process before merging any
  associated code.
 
  Cores and interested parties, please reply to this thread with your
  opinions.
 
 I think this is a great idea Devananda. The Neutron community has
 moved to this model for Juno as well, and people have been very
 positive so far.
 
 Thanks,
 Kyle
 
  --
  Devananda
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Michael Still
If you'd like to have a go at implementing this in nova's Juno
release, then you need to create a new-style blueprint in the
nova-specs repository. You can find more details about that process at
https://wiki.openstack.org/wiki/Blueprints#Nova
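
(Roughly, the workflow for that looks like the following; the juno directory
and spec filename here are placeholders:

    git clone https://git.openstack.org/openstack/nova-specs
    cd nova-specs
    cp specs/template.rst specs/juno/accelerate-boot-with-vmthunder.rst
    # fill in the template, then submit it for review:
    git checkout -b bp/thunderboost
    git add specs/juno/accelerate-boot-with-vmthunder.rst
    git commit
    git review
)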

Some initial thoughts though, some of which have already been brought up:

 - _some_ libvirt drivers already have image caching. I am unsure if
all of them do, I'd have to check.

 - we already have blueprints for better support of glance multiple
image locations, it might be better to extend that work than to do
something completely separate.

 - the xen driver already does bittorrent image delivery IIRC, you
could take a look at how they do that.

 - pre-caching images has been proposed for libvirt for a long time,
but never implemented. I think that's definitely something of interest
to deployers.

Cheers,
Michael

On Wed, Apr 16, 2014 at 11:14 PM, yongquan Fu quanyo...@gmail.com wrote:

 Dear all,



  We would like to present an extension to the vm-booting functionality of
 Nova when a number of homogeneous vms need to be launched at the same time.



 The motivation for our work is to increase the speed of provisioning VMs for
 large-scale scientific computing and big data processing. In those cases, we
 often need to boot tens or hundreds of virtual machine instances at the same
 time.


 Currently, under OpenStack, we found that creating a large number of
 virtual machine instances is very time-consuming. The reason is that the
 booting procedure is a centralized operation that involves performance
 bottlenecks. Before a virtual machine can actually be started, OpenStack
 either copies the image file (swift) or attaches the image volume (cinder)
 from the storage server to the compute node via the network. Booting a single
 VM needs to read a large amount of image data from the image storage server,
 so creating a large number of virtual machine instances causes a significant
 workload on the servers. The servers become quite busy, even unavailable,
 during the deployment phase, and it can take a very long time before the
 whole virtual machine cluster is usable.



   Our extension is based on our work on vmThunder, a novel mechanism
 accelerating the deployment of large number virtual machine instances. It is
 written in Python, can be integrated with OpenStack easily. VMThunder
 addresses the problem described above by following improvements: on-demand
 transferring (network attached storage), compute node caching, P2P
 transferring and prefetching. VMThunder is a scalable and cost-effective
 accelerator for bulk provisioning of virtual machines.



   We hope to receive your feedbacks. Any comments are extremely welcome.
 Thanks in advance.



 PS:



 VMThunder enhanced nova blueprint:
 https://blueprints.launchpad.net/nova/+spec/thunderboost

  VMThunder standalone project: https://launchpad.net/vmthunder;

  VMThunder prototype: https://github.com/lihuiba/VMThunder

  VMThunder etherpad: https://etherpad.openstack.org/p/vmThunder

  VMThunder portal: http://www.vmthunder.org/

 VMThunder paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf



   Regards



   vmThunder development group

   PDL

   National University of Defense Technology


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-17 Thread Eichberger, German
Hi Stephen,

1. Could you please explain what you understand single-call API functionality 
to be?
From my perspective, most of our users will likely create load balancers via a 
web interface. Though not necessary, having a single API call makes it easier 
to develop the web interface.

For the “expert” users, I envision them creating a load balancer, tweaking the 
settings, and, once they arrive at the configuration they want, automating its 
creation. If they have to create several objects with multiple calls in a 
particular order, that is far too complicated and makes the learning curve from 
the GUI to the API very steep. Hence, I like being able to do one call and get a 
functioning load balancer; I like that aspect of Jorge’s proposal. On the other 
hand, making a single API call contain all possible settings might make it too 
complex for the casual user who just wants some feature activated that the GUI 
doesn’t provide…


2. Could you describe the simplest use case that uses single-call API in your 
environment right now?
Please be very specific--  ideally, a couple examples of specific CLI commands 
a user might run, or API (along with specific configuration data) would be 
great.

http://libra.readthedocs.org/en/latest/api/rest/load-balancer.html#create-a-new-load-balancer
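
For the simple case that boils down to a single JSON document, roughly along 
these lines (abbreviated from memory -- the linked docs are authoritative):

    POST /loadbalancers
    {
        "name": "lb1",
        "protocol": "HTTP",
        "port": 80,
        "nodes": [
            {"address": "10.1.1.1", "port": 80},
            {"address": "10.1.1.2", "port": 80}
        ]
    }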

3. Could you describe the most complicated use case that your single-call API 
supports? Again, please be very specific here.

Our API doesn’t have that many features so calls don’t get complicated.

4. What percentage of your customer base are used to using single-call 
functionality, and what percentage are used to manipulating primitives?

We only offer the single call to create a load balancer so 100% of our 
customers use it. (So this is not a good number)

5. Should single-call stuff work for the lifecycle of a load balancing 
service? That is to say, should delete functionality also clean up all 
primitives associated with the service?

Yes. If a customer doesn’t like a load balancer any longer one call will remove 
it. This makes a lot of things easier:

-  GUI development – one call does it all

-  Cleanup scripts: If a customer leaves us we just need to run delete 
on a list of load balancers – ideally if the API had a call to delete all load 
balancers of a specific user/project that would be even better ☺

-  The customer can tear down test/dev/etc. load balancer very quickly


German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Thursday, April 17, 2014 12:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

Oh! One other question:

5. Should single-call stuff work for the lifecycle of a load balancing 
service? That is to say, should delete functionality also clean up all 
primitives associated with the service?

On Thu, Apr 17, 2014 at 11:44 AM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:
Hi Sri,

Yes, the meeting minutes  etc. are all available here, usually a few minutes 
after the meeting is over:  
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/

(You are also, of course, welcome to join!)

Stephen

On Thu, Apr 17, 2014 at 11:34 AM, Sri 
sri.networ...@gmail.com wrote:
hello Stephen,


I am interested in LBaaS and want to know if we post the weekly meeting's
chat transcripts online?
or may be update an etherpad?


Can you please share the links?

thanks,
SriD



--
View this message in context: 
http://openstack.10931.n7.nabble.com/Neutron-LBaas-Single-call-API-discussion-tp38533p38542.html
Sent from the Developer mailing list archive at Nabble.com.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova][docker] dockenstack updated for nova-docker stackforge repo

2014-04-17 Thread Sylvain Bauza
2014-04-17 21:30 GMT+02:00 Eric Windisch ewindi...@docker.com:


 Furthermore, I've started testing KVM/Qemu support. It's looking
 promising. It's too early to claim it's supported, but I've only ran into
 minor issues so far. I'll update again when I've made further progress.
  Also pending, but not far away, is the effort to have dockenstack run
 devstack-gate, which will bridge much of the gap between the current
 dockenstack environment and that used by openstack-infra for those that
 wish to quickly run functional tests locally on their laptops/workstations
 (or in 3rd-party CI).


Great, let us know when you have time to test it!

I'll try the new dockenstack repo next week.



 --
 Regards,
 Eric Windisch

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
Replied as inline comments.

On Thu, Apr 17, 2014 at 9:33 PM, lihuiba magazine.lihu...@163.com wrote:
IMO we'd better use a backend-storage-optimized approach to access the
remote image from the compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short on stability under heavy I/O
workload in a production environment; it can cause either the VM filesystem
to be marked as readonly or a VM kernel panic.

 Yes, in this situation, the problem lies in the backend storage, so no other

 protocol will perform better. However, P2P transferring will greatly reduce

 workload on the backend storage, so as to increase responsiveness.


It's not 100% true, in my case at least. We fixed this problem in the
network interface driver; it was causing kernel panic and readonly issues
under heavy networking workload, actually.



As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

 Nova's image caching is file level, while VMThunder's is block-level. And

 VMThunder is for working in conjunction with Cinder, not Glance. VMThunder

 currently uses facebook's flashcache to realize caching, and dm-cache,

 bcache are also options in the future.


Hm, if you say bcache, dm-cache and flashcache, I'm just wondering if
they could be leveraged at the operations/best-practice level.

btw, we are doing some work to make Glance integrate Cinder as a
unified block storage backend.


I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

 Yes, on-demand transferring is what you mean by zero-copy, and caching
 is something close to CoR. In fact, we are working on a kernel module called
 foolcache that realize a true CoR. See
 https://github.com/lihuiba/dm-foolcache.


Yup. And it's really interesting to me, will take a look, thanks for sharing.




 National Key Laboratory for Parallel and Distributed
 Processing, College of Computer Science, National University of Defense
 Technology, Changsha, Hunan Province, P.R. China
 410073


 At 2014-04-17 17:11:48,Zhi Yan Liu lzy@gmail.com wrote:
On Thu, Apr 17, 2014 at 4:41 PM, lihuiba magazine.lihu...@163.com wrote:
IMHO, zero-copy approach is better
 VMThunder's on-demand transferring is the same thing as your zero-copy
 approach.
 VMThunder uses iSCSI as the transferring protocol, which is option #b
 of yours.


IMO we'd better use a backend-storage-optimized approach to access the
remote image from the compute node instead of using iSCSI only. And from
my experience, I'm sure iSCSI is short on stability under heavy I/O
workload in a production environment; it can cause either the VM filesystem
to be marked as readonly or a VM kernel panic.


Under #b approach, my former experience from our previous similar
Cloud deployment (not OpenStack) was that: under 2 PC server storage
nodes (general *local SAS disk*, without any storage backend) +
2-way/multi-path iSCSI + 1G network bandwidth, we can provisioning 500
VMs in a minute.
 suppose booting one instance requires reading 300MB of data, so 500 ones
 require 150GB.  Each of the storage servers needs to send data at a rate
 of
 150GB/2/60 = 1.25GB/s on average. This is absolutely a heavy burden even
 for high-end storage appliances. In production  systems, this request
 (booting
 500 VMs in one shot) will significantly disturb  other running instances
 accessing the same storage nodes.


btw, I believe the case/numbers are not accurate either, since remote
image bits could be loaded on-demand instead of loading them all at boot
time.

zhiyan

 VMThunder eliminates this problem by P2P transferring and on-compute-node
 caching. Even a pc server with one 1gb NIC (this is a true pc server!)
 can
 boot
 500 VMs in a minute with ease. For the first time, VMThunder makes bulk
 provisioning of VMs practical for production cloud systems. This is the
 essential
 value of VMThunder.


As I said currently Nova already has image caching mechanism, so in
this case P2P is just an approach could be used for downloading or
preheating for image caching.

I think  P2P transferring/pre-caching sounds a  good way to go, as I
mentioned as well, but actually for the area I'd like to see something
like zero-copy + CoR. On one hand we can leverage the capability of
on-demand downloading image bits by zero-copy approach, on the other
hand we can prevent to reading data from remote image every time by
CoR.

zhiyan




 ===
 From: Zhi Yan Liu lzy@gmail.com
 Date: 2014-04-17 0:02 GMT+08:00
 Subject: Re: [openstack-dev] [Nova][blueprint] Accelerate the booting
 process of a number of vms via VMThunder
 To: OpenStack Development 

Re: [openstack-dev] [Nova][blueprint] Accelerate the booting process of a number of vms via VMThunder

2014-04-17 Thread Zhi Yan Liu
On Fri, Apr 18, 2014 at 5:19 AM, Michael Still mi...@stillhq.com wrote:
 If you'd like to have a go at implementing this in nova's Juno
 release, then you need to create a new-style blueprint in the
 nova-specs repository. You can find more details about that process at
 https://wiki.openstack.org/wiki/Blueprints#Nova

 Some initial thoughts though, some of which have already been brought up:

  - _some_ libvirt drivers already have image caching. I am unsure if
 all of them do, I'd have to check.


Thanks for clarification.

  - we already have blueprints for better support of glance multiple
 image locations, it might be better to extend that work than to do
 something completely separate.


Totally agreed. And I think there currently seem to be two places (at
least) that could be leveraged:

1. Making this an image download plug-in for Nova, either built-in
or independent. I prefer to go this way, but we need to make sure its
context is enough for your case.
2. Making this a built-in or independent image handler plug-in, as
part of the support for multiple image locations (ongoing) that Michael
mentions here.

zhiyan

  - the xen driver already does bittorrent image delivery IIRC, you
 could take a look at how they do that.

  - pre-caching images has been proposed for libvirt for a long time,
 but never implemented. I think that's definitely something of interest
 to deployers.

 Cheers,
 Michael

 On Wed, Apr 16, 2014 at 11:14 PM, yongquan Fu quanyo...@gmail.com wrote:

 Dear all,



  We would like to present an extension to the vm-booting functionality of
 Nova when a number of homogeneous vms need to be launched at the same time.



 The motivation for our work is to increase the speed of provisioning VMs for
 large-scale scientific computing and big data processing. In those cases, we
 often need to boot tens or hundreds of virtual machine instances at the same
 time.


 Currently, under OpenStack, we found that creating a large number of
 virtual machine instances is very time-consuming. The reason is that the
 booting procedure is a centralized operation that involves performance
 bottlenecks. Before a virtual machine can actually be started, OpenStack
 either copies the image file (swift) or attaches the image volume (cinder)
 from the storage server to the compute node via the network. Booting a single
 VM needs to read a large amount of image data from the image storage server,
 so creating a large number of virtual machine instances causes a significant
 workload on the servers. The servers become quite busy, even unavailable,
 during the deployment phase, and it can take a very long time before the
 whole virtual machine cluster is usable.



   Our extension is based on our work on vmThunder, a novel mechanism
 accelerating the deployment of large number virtual machine instances. It is
 written in Python, can be integrated with OpenStack easily. VMThunder
 addresses the problem described above by following improvements: on-demand
 transferring (network attached storage), compute node caching, P2P
 transferring and prefetching. VMThunder is a scalable and cost-effective
 accelerator for bulk provisioning of virtual machines.



   We hope to receive your feedbacks. Any comments are extremely welcome.
 Thanks in advance.



 PS:



 VMThunder enhanced nova blueprint:
 https://blueprints.launchpad.net/nova/+spec/thunderboost

  VMThunder standalone project: https://launchpad.net/vmthunder;

  VMThunder prototype: https://github.com/lihuiba/VMThunder

  VMThunder etherpad: https://etherpad.openstack.org/p/vmThunder

  VMThunder portal: http://www.vmthunder.org/

 VMThunder paper: http://www.computer.org/csdl/trans/td/preprint/06719385.pdf



   Regards



   vmThunder development group

   PDL

   National University of Defense Technology


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo removal of use_tpool conf option

2014-04-17 Thread Chris Behrens

I’m going to try to not lose my cool here, but I’m extremely upset by this.

In December, oslo apparently removed the code for ‘use_tpool’ which allows you 
to run DB calls in Threads because it was ‘eventlet specific’. I noticed this 
when a review was posted to nova to add the option within nova itself:

https://review.openstack.org/#/c/59760/
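
(For context, in Havana this was a database option in nova.conf, roughly:

    [database]
    # run DB API calls in a native thread pool instead of blocking the
    # eventlet hub; needs a patched eventlet to actually work
    use_tpool = True

the exact group/option spelling may differ between releases, but that is the
knob that disappeared.)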

I objected to this and asked (more demanded) for this to be added back into 
oslo. It was not. What I did not realize when I was reviewing this nova patch, 
was that nova had already synced oslo’s change. And now we’ve released Icehouse 
with a conf option missing that existed in Havana. Whatever projects were using 
oslo’s DB API code has had this option disappear (unless an alternative was 
merged). Maybe it’s only nova.. I don’t know.

Some sort of process broke down here.  nova uses oslo.  And oslo removed 
something nova uses without deprecating or merging an alternative into nova 
first. How I believe this should have worked:

1) All projects using oslo’s DB API code should have merged an alternative 
first.
2) Remove code from oslo.
3) Then sync oslo.

What do we do now? I guess we’ll have to back port the removed code into nova. 
I don’t know about other projects.

NOTE: Very few people are probably using this, because it doesn’t work without 
a patched eventlet. However, Rackspace happens to be one that does. And anyone 
waiting on a new eventlet to be released such that they could use this with 
Icehouse is currently out of luck.

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-17 Thread Carlos Garza

On Apr 17, 2014, at 2:11 PM, Stephen Balukoff 
sbaluk...@bluebox.net
 wrote:

Oh! One other question:

5. Should single-call stuff work for the lifecycle of a load balancing 
service? That is to say, should delete functionality also clean up all 
primitives associated with the service?


We were advocating leaving the primitives behind for the user to delete out 
of respect for shared objects.
The proposal mentions this too.


On Thu, Apr 17, 2014 at 11:44 AM, Stephen Balukoff 
sbaluk...@bluebox.net wrote:
Hi Sri,

Yes, the meeting minutes  etc. are all available here, usually a few minutes 
after the meeting is over:  
http://eavesdrop.openstack.org/meetings/neutron_lbaas/2014/

(You are also, of course, welcome to join!)

Stephen


On Thu, Apr 17, 2014 at 11:34 AM, Sri 
sri.networ...@gmail.com wrote:
hello Stephen,


I am interested in LBaaS and want to know if we post the weekly meeting's
chat transcripts online?
or may be update an etherpad?


Can you please share the links?

thanks,
SriD



--
View this message in context: 
http://openstack.10931.n7.nabble.com/Neutron-LBaas-Single-call-API-discussion-tp38533p38542.html
Sent from the Developer mailing list archive at Nabble.com.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaas] Single call API discussion

2014-04-17 Thread Brandon Logan

Stephen,
I have responded to your questions below.

On 04/17/2014 01:02 PM, Stephen Balukoff wrote:

Howdy folks!

Based on this morning's IRC meeting, it seems to me there's some 
contention and confusion over the need for single call functionality 
for load balanced services in the new API being discussed. This is 
what I understand:


* Those advocating single call are arguing that this simplifies the 
API for users, and that it more closely reflects the users' experience 
with other load balancing products. They don't want to see this 
functionality necessarily delegated to an orchestration layer (Heat), 
because coordinating how this works across two OpenStack projects is 
unlikely to see success (ie. it's hard enough making progress with 
just one project). I get the impression that people advocating for 
this feel that their current users would not likely make the leap to 
Neutron LBaaS unless some kind of functionality or workflow is 
preserved that is no more complicated than what they currently have to do.
Another reason, which I've mentioned many times before and which keeps getting 
ignored, is that the more primitives you add, the longer it will take to 
provision a load balancer.  Even if we relied on the orchestration layer to 
build out all the primitives, it would still take much more time to provision a 
load balancer than a single create call provided by the API.  Each request and 
response has an inherent processing time, and many primitives also have an 
inherent build time. Combine this with an environment that becomes more and 
more dense, and build times will become very unfriendly to end users, whether 
they are using the API directly, going through a UI, or going through an 
orchestration layer.  This industry is always trying to improve 
build/provisioning times, and there is no reason why we shouldn't try to 
achieve the same goal.


* Those (mostly) against the idea are interested in seeing the API 
provide primitives and delegating higher level single-call stuff to 
Heat or some other orchestration layer. There was also the implication 
that if single-call is supported, it ought to support both simple 
and advanced set-ups in that single call. Further, I sense concern 
that if there are multiple ways to accomplish the same thing supported 
in the API, this redundancy breeds complication as more features are 
added, and in developing test coverage. And existing Neutron APIs tend 
to expose only primitives. I get the impression that people against 
the idea could be convinced if more compelling reasons were 
illustrated for supporting single-call, perhaps other than "we don't 
want to change the way it's done in our environment right now."
I completely disagree with "we don't want to change the way it's done in 
our environment right now."  Our proposal has changed the way our 
current API works right now.  We do not have the notion of primitives in 
our current API and our proposal included the ability to construct a 
load balancer with primitives individually.  We kept that in so that 
those operators and users who do like constructing a load balancer that 
way can continue doing so.  What we are asking for is to keep our users 
happy when we do deploy this in a production environment and maintain a 
single create load balancer API call.


I've mostly stayed out of this debate because our solution as used by 
our customers presently isn't single-call and I don't really 
understand the requirements around this.


So! I would love it if some of you could fill me in on this, 
especially since I'm working on a revision of the proposed API. 
Specifically, what I'm looking for is answers to the following questions:


1. Could you please explain what you understand single-call API 
functionality to be?
Single-call API functionality is a call that supports adding multiple 
features to an entity (load balancer in this case) in one API request.  
Whether this supports all features of a load balancer or a subset is up 
for debate.  I prefer all features to be supported.  Yes, it adds 
complexity, but complexity always comes with improving the end 
user experience, and I hope a good user experience is a goal.


2. Could you describe the simplest use case that uses single-call API 
in your environment right now? Please be very specific--  ideally, a 
couple examples of specific CLI commands a user might run, or API 
(along with specific configuration data) would be great.

http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Create_Load_Balancer-d1e1635.html

This page has many different ways to configure a load balancer with one 
call.  It ranges from a simple load balancer to a load balancer with a 
much more complicated configuration.  Generally, if any of those 
features are allowed on a load balancer then it is supported through the 
single call.
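As a concrete illustration, a minimal single-call create might look roughly
like the following (Python with the requests library; the endpoint, token,
and field names are loosely modeled on the document linked above and should
be treated as illustrative, not authoritative):

    import requests

    # One request carries the VIP, protocol, port, and members together.
    # Endpoint and token are placeholders.
    payload = {
        "loadBalancer": {
            "name": "web-lb",
            "protocol": "HTTP",
            "port": 80,
            "virtualIps": [{"type": "PUBLIC"}],
            "nodes": [
                {"address": "10.1.1.10", "port": 8080, "condition": "ENABLED"},
                {"address": "10.1.1.11", "port": 8080, "condition": "ENABLED"},
            ],
        }
    }
    resp = requests.post(
        "https://lb.example.com/v1.0/1234/loadbalancers",
        json=payload,
        headers={"X-Auth-Token": "..."},
    )
    resp.raise_for_status()
    print(resp.json())  # the fully configured load balancer, from one call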


3. Could you describe the most complicated use case that your 
single-call API supports? Again, please be very specific here.


[openstack-dev] [Neutron][LBaaS] HA functionality discussion

2014-04-17 Thread Stephen Balukoff
Heyas, y'all!

So, given both the prioritization and usage info on HA functionality for
Neutron LBaaS here:
https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing

It's clear that:

A. HA seems to be a top priority for most operators
B. Almost all load balancer functionality deployed is done so in an
Active/Standby HA configuration

I know there's been some round-about discussion about this on the list in
the past (which usually got stymied in implementation details
disagreements), but it seems to me that with so many players putting a high
priority on HA functionality, this is something we need to discuss and
address.

This is also apropos, as we're talking about doing a major revision of the
API, and it probably makes sense to seriously consider if or how HA-related
stuff should make it into the API. I'm of the opinion that almost all the
HA stuff should be hidden from the user/tenant, but that the admin/operator
at the very least is going to need to have some visibility into HA-related
functionality. The hope here is to discover what things make sense to have
as a least common denominator and what will have to be hidden behind a
driver-specific implementation.

I certainly have a pretty good idea how HA stuff works at our organization,
but I have almost no visibility into how this is done elsewhere, leastwise
not enough detail to know what makes sense to write API controls for.

So! Since gathering data about actual usage seems to have worked pretty
well before, I'd like to try that again. Yes, I'm going to be asking about
implementation details, but this is with the hope of discovering any least
common denominator factors which make sense to build API around.

For the purposes of this document, when I say load balancer devices I
mean either physical or virtual appliances, or software executing on a host
somewhere that actually does the load balancing. It need not directly
correspond with anything physical... but probably does. :P

And... all of these questions are meant to be interpreted from the
perspective of the cloud operator.

Here's what I'm looking to learn from those of you who are allowed to share
this data:

1. Are your load balancer devices shared between customers / tenants, not
shared, or some of both?

1a. If shared, what is your strategy to avoid or deal with collisions of
customer rfc1918 address space on back-end networks? (For example, I know
of no load balancer device that can balance traffic for both customer A and
customer B if both are using the 10.0.0.0/24 subnet for their back-end
networks containing the nodes to be balanced, unless an extra layer of
NATing is happening somewhere.)

2. What kinds of metrics do you use in determining load balancing capacity?

3. Do you operate with a pool of unused load balancer device capacity
(which a cloud OS would need to keep track of), or do you spin up new
capacity (in the form of virtual servers, presumably) on the fly?

3a. If you're operating with an availability pool, can you describe how new
load balancer devices are added to your availability pool?  Specifically,
are there any steps in the process that must be manually performed (ie. so
no API could help with this)?

4. How are new devices 'registered' with the cloud OS? How are they removed
or replaced?

5. What kind of visibility do you (or would you) allow your user base to
see into the HA-related aspects of your load balancing services?

6. What kind of functionality and visibility do you need into the
operations of your load balancer devices in order to maintain your
services, troubleshoot, etc.? Specifically, are you managing the
infrastructure outside the purview of the cloud OS? Are there certain
aspects which would be easier to manage if done within the purview of the
cloud OS?

7. What kind of network topology is used when deploying load balancing
functionality? (ie. do your load balancer devices live inside or outside
customer firewalls, directly on tenant networks? Are you using layer-3
routing? etc.)

8. Is there any other data you can share which would be useful in
considering features of the API that only cloud operators would be able to
perform?


And since we're one of these operators, here are my responses:

1. We have both shared load balancer devices and private load balancer
devices.

1a. Our shared load balancers live outside customer firewalls, and we use
IPv6 to reach individual servers behind the firewalls directly. We have
followed a careful deployment strategy across all our networks so that IPv6
addresses between tenants do not overlap.

2. The most useful ones for us are "number of appliances deployed" and
"number and type of load balancing services deployed," though we also pay
attention to:
* Load average per active appliance
* Per appliance number and type of load balancing services deployed
* Per appliance bandwidth consumption
* Per appliance connections / sec
* Per appliance SSL connections / sec

Since our 

Re: [openstack-dev] oslo removal of use_tpool conf option

2014-04-17 Thread Michael Still
It looks to me like this was removed in oslo in commit
a33989e7a2737af757648099cc1af6c642b6e016, which was synced with nova
in 605749ca12af969ac122008b4fa14904df68caf7 (however, I can't see the
change being listed in the commit message for nova, which I assume is
a process failure). That change merged into nova on March 6.

I think the only option we're left with for icehouse is a backport fix for this.

Michael

On Fri, Apr 18, 2014 at 8:20 AM, Chris Behrens cbehr...@codestud.com wrote:

 I’m going to try to not lose my cool here, but I’m extremely upset by this.

 In December, oslo apparently removed the code for ‘use_tpool’ which allows
 you to run DB calls in Threads because it was ‘eventlet specific’. I noticed
 this when a review was posted to nova to add the option within nova itself:

 https://review.openstack.org/#/c/59760/

 I objected to this and asked (more demanded) for this to be added back into
 oslo. It was not. What I did not realize when I was reviewing this nova
 patch, was that nova had already synced oslo’s change. And now we’ve
 released Icehouse with a conf option missing that existed in Havana.
 Whatever projects were using oslo’s DB API code has had this option
 disappear (unless an alternative was merged). Maybe it’s only nova.. I don’t
 know.

 Some sort of process broke down here.  nova uses oslo.  And oslo removed
 something nova uses without deprecating or merging an alternative into nova
 first. How I believe this should have worked:

 1) All projects using oslo’s DB API code should have merged an alternative
 first.
 2) Remove code from oslo.
 3) Then sync oslo.

 What do we do now? I guess we’ll have to back port the removed code into
 nova. I don’t know about other projects.

 NOTE: Very few people are probably using this, because it doesn’t work
 without a patched eventlet. However, Rackspace happens to be one that does.
 And anyone waiting on a new eventlet to be released such that they could use
 this with Icehouse is currently out of luck.

 - Chris



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo removal of use_tpool conf option

2014-04-17 Thread Joshua Harlow
Just an honest question (no negativity intended I swear!).

If a configuration option exists and only works with a patched eventlet, why is 
that option an option to begin with? (I understand the reason for the patch, 
don't get me wrong).

Most users would not be able to use such a configuration since they do not have 
this patched eventlet (I assume a newer version of eventlet someday in the 
future will have this patch integrated in it?) so although I understand the 
frustration around this I don't understand why it would be an option in the 
first place. An aside, if the only way to use this option is via a non-standard 
eventlet then how is this option tested in the community, aka outside of said 
company?

An example:

If yahoo has some patched kernel A that requires an XYZ config turned on in 
openstack and the only way to take advantage of kernel A is with XYZ config 
'on', then it seems like that’s a yahoo-only patch that is not testable and 
usable for others; even if patched kernel A is somewhere on github, it's still 
imho not something that should be an option in the community (anyone can throw 
stuff up on github and then say I need XYZ config to use it).

To me non-standard patches that require XYZ config in openstack shouldn't be 
part of the standard openstack, no matter the company. If patch A is in the 
mainline kernel (or other mainline library), then sure it's fair game.

-Josh

From: Chris Behrens cbehr...@codestud.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, April 17, 2014 at 3:20 PM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] oslo removal of use_tpool conf option


I’m going to try to not lose my cool here, but I’m extremely upset by this.

In December, oslo apparently removed the code for ‘use_tpool’ which allows you 
to run DB calls in Threads because it was ‘eventlet specific’. I noticed this 
when a review was posted to nova to add the option within nova itself:

https://review.openstack.org/#/c/59760/

I objected to this and asked (more demanded) for this to be added back into 
oslo. It was not. What I did not realize when I was reviewing this nova patch, 
was that nova had already synced oslo’s change. And now we’ve released Icehouse 
with a conf option missing that existed in Havana. Whatever projects were using 
oslo’s DB API code has had this option disappear (unless an alternative was 
merged). Maybe it’s only nova.. I don’t know.

Some sort of process broke down here.  nova uses oslo.  And oslo removed 
something nova uses without deprecating or merging an alternative into nova 
first. How I believe this should have worked:

1) All projects using oslo’s DB API code should have merged an alternative 
first.
2) Remove code from oslo.
3) Then sync oslo.

What do we do now? I guess we’ll have to back port the removed code into nova. 
I don’t know about other projects.

NOTE: Very few people are probably using this, because it doesn’t work without 
a patched eventlet. However, Rackspace happens to be one that does. And anyone 
waiting on a new eventlet to be released such that they could use this with 
Icehouse is currently out of luck.

- Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [group-based-policy] Moving the meeting time

2014-04-17 Thread Sumit Naiksatam
We realized it the hard way that we didn't have the entire one hour
slot on -meeting-alt at 17:30 on Thursdays.

So, we have to move this meeting again. Based on the opinion of those
present in today's meeting, there was a consensus for the following
time:
Thursdays at 1800 UTC on #openstack-meeting-3

See you there next week.

Thanks,
~Sumit.

On Mon, Apr 14, 2014 at 3:09 PM, Kyle Mestery mest...@noironetworks.com wrote:
 Stephen:

 1730 UTC is available. I've moved the meeting to that time, so
 starting this week the meeting will be at the new time.

 https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting

 Thanks!
 Kyle


 On Thu, Apr 10, 2014 at 3:23 PM, Stephen Wong s3w...@midokura.com wrote:
 Hi Kyle,

 Is 1730UTC available on that channel? If so, and if it is OK with
 everyone, it would be great to have it at 1730 UTC instead (10:30am PDT /
 1:30pm EDT, which would also be at the same time on a different day of the
 week as the advanced service meeting).

 Thanks,
 - Stephen



 On Thu, Apr 10, 2014 at 11:10 AM, Kyle Mestery mest...@noironetworks.com
 wrote:

 Per our meeting last week, I'd like to propose moving the weekly
 Neutron GBP meeting to 1800UTC (11AM PDT / 2PM EDT) on Thursdays in
 #openstack-meeting-3. If you're not ok with this timeslot, please
 reply on this thread. If I don't hear any dissenters, I'll officially
 move the meeting on the wiki and reply here in a few days.

 Thanks!
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][mistral] Mistral agenda item for Heat community meeting on Apr 17

2014-04-17 Thread Zane Bitter

On 17/04/14 00:34, Renat Akhmerov wrote:

Ooh, I confused the day of meeting :(. My apologies, I’m in a completely 
different timezone (for me it’s in the middle of night) so I strongly believed 
it was on a different day. I’ll be there next time.


Yeah, it's really unfortunate that it falls right at midnight UTC, 
because it makes the dates really confusing :/ It's technically correct 
though, so it's hard to know what to do to make it less confusing.


We ended up pretty short on time anyway, so it worked out well ;) I just 
added it to the agenda for next week, and hopefully that meeting time 
should be marginally more convenient for you anyway.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo removal of use_tpool conf option

2014-04-17 Thread Chris Behrens

On Apr 17, 2014, at 4:26 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 Just an honest question (no negativity intended I swear!).
 
 If a configuration option exists and only works with a patched eventlet why 
 is that option an option to begin with? (I understand the reason for the 
 patch, don't get me wrong).
 

Right, it’s a valid question. This feature has existed one way or another in 
nova for quite a while. Initially the implementation in nova was wrong. I did 
not know that eventlet was also broken at the time, although I discovered it in 
the process of fixing nova’s code. I chose to leave the feature because it’s 
something that we absolutely need long term, unless you really want to live 
with DB calls blocking the whole process. I know I don’t. Unfortunately the bug 
in eventlet is out of our control. (I made an attempt at fixing it, but it’s 
not 100%. Eventlet folks currently have an alternative up that may or may not 
work… but certainly is not in a release yet.)  We have an outstanding bug on 
our side to track this, also.
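For anyone who hasn't followed this option: it offloads blocking DB calls to a
native thread pool via eventlet's tpool so they don't stall the green-thread
hub. A minimal sketch of the pattern (the connection object and SQL here are
hypothetical stand-ins, not nova's actual DB API code):

    import eventlet
    eventlet.monkey_patch()

    from eventlet import tpool


    def _blocking_db_call(connection, sql, *args):
        # Stand-in for any DB-API call that blocks inside C code (e.g. the
        # MySQLdb driver) and therefore cannot yield to the eventlet hub.
        return connection.execute(sql, args).fetchall()


    def db_call(connection, sql, *args):
        # tpool.execute() runs the callable in a native worker thread and
        # lets other green threads keep running while this one waits.
        return tpool.execute(_blocking_db_call, connection, sql, *args)

Without the offload, every green thread in the process stalls for the duration
of each blocking query, which is the behavior Chris is describing.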

The below is comparing apples/oranges for me.

- Chris


 Most users would not be able to use such a configuration since they do not 
 have this patched eventlet (I assume a newer version of eventlet someday in 
 the future will have this patch integrated in it?) so although I understand 
 the frustration around this I don't understand why it would be an option in 
 the first place. An aside, if the only way to use this option is via a 
 non-standard eventlet then how is this option tested in the community, aka 
 outside of said company?
 
 An example:
 
 If yahoo has some patched kernel A that requires an XYZ config turned on in 
 openstack and the only way to take advantage of kernel A is with XYZ config 
 'on', then it seems like that’s a yahoo only patch that is not testable and 
 useable for others, even if patched kernel A is somewhere on github it's 
 still imho not something that should be a option in the community (anyone can 
 throw stuff up on github and then say I need XYZ config to use it).
 
 To me non-standard patches that require XYZ config in openstack shouldn't be 
 part of the standard openstack, no matter the company. If patch A is in the 
 mainline kernel (or other mainline library), then sure it's fair game.
 
 -Josh
 
 From: Chris Behrens cbehr...@codestud.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, April 17, 2014 at 3:20 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Subject: [openstack-dev] oslo removal of use_tpool conf option
 
 
 I’m going to try to not lose my cool here, but I’m extremely upset by this.
 
 In December, oslo apparently removed the code for ‘use_tpool’ which allows 
 you to run DB calls in Threads because it was ‘eventlet specific’. I noticed 
 this when a review was posted to nova to add the option within nova itself:
 
 https://review.openstack.org/#/c/59760/
 
 I objected to this and asked (more demanded) for this to be added back into 
 oslo. It was not. What I did not realize when I was reviewing this nova 
 patch, was that nova had already synced oslo’s change. And now we’ve 
 released Icehouse with a conf option missing that existed in Havana. 
 Whatever projects were using oslo’s DB API code has had this option 
 disappear (unless an alternative was merged). Maybe it’s only nova.. I don’t 
 know.
 
 Some sort of process broke down here.  nova uses oslo.  And oslo removed 
 something nova uses without deprecating or merging an alternative into nova 
 first. How I believe this should have worked:
 
 1) All projects using oslo’s DB API code should have merged an alternative 
 first.
 2) Remove code from oslo.
 3) Then sync oslo.
 
 What do we do now? I guess we’ll have to back port the removed code into 
 nova. I don’t know about other projects.
 
 NOTE: Very few people are probably using this, because it doesn’t work 
 without a patched eventlet. However, Rackspace happens to be one that does. 
 And anyone waiting on a new eventlet to be released such that they could use 
 this with Icehouse is currently out of luck.
 
 - Chris
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements and API revision progress

2014-04-17 Thread Stephen Balukoff
Hi Brandon!

Per the meeting this morning, I seem to recall you were looking to have me
elaborate on why the term 'load balancer' as used in your API proposal is
significantly different from the term 'load balancer' as used in the
glossary at:  https://wiki.openstack.org/wiki/Neutron/LBaaS/Glossary

As promised, here's my elaboration on that:

The glossary above states: "An object that represent a logical load
balancer that may have multiple resources such as Vips, Pools,
Members, etc. Loadbalancer is a root object in the meaning described
above." and references the diagram here:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#Loadbalancer_instance_solution

On that diagram, it's clear that VIPs,  etc. are subordinate objects to a
load balancer. What's more, attributes like 'protocol' and 'port' are not
part of the load balancer object in that diagram (they're part of a 'VIP'
in one proposed version, and part of a 'Listener' in the others).

In your proposal, you state "only one port and one protocol per load
balancer," and then later (on page 9 under GET /vips) you show that a vip
may have many load balancers associated with it. So clearly, "load
balancer" the way you're using it is subordinate to a VIP. So in the
glossary, it sounds like the object which has a single port and protocol
associated with it and is subordinate to a VIP: a listener.
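To make the structural difference concrete, here is a rough sketch of the two
object models being conflated (class and field names are illustrative only,
not part of either proposal):

    from dataclasses import dataclass, field
    from typing import List


    # Glossary / diagram model: the load balancer is the root object, and
    # VIPs (and their listeners, pools, etc.) hang off of it.
    @dataclass
    class GlossaryLoadBalancer:
        name: str
        vips: List["Vip"] = field(default_factory=list)


    # Proposal model: the object called "load balancer" carries a single
    # port and protocol and sits underneath a VIP -- i.e. what the glossary
    # would call a listener.
    @dataclass
    class Vip:
        address: str
        listeners: List["ProposalLoadBalancer"] = field(default_factory=list)


    @dataclass
    class ProposalLoadBalancer:
        protocol: str
        port: int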

Now, I don't really care if y'all decide to re-define load balancer from
what is in the glossary so long as you do define it clearly in the
proposal. (If we go with your proposal, it would then make sense to update
the glossary accordingly.) Mostly, I'm just trying to avoid confusion
because it's exactly these kinds of misunderstandings which have stymied
discussion and progress in the past, eh.

Also-- I can guess where the confusion comes from: I'm guessing most
customers refer to a service which listens on a tcp or udp port,
understands a specific protocol, and forwards data from the connecting
client to some back-end server which actually services the request as a
load balancer. It's entirely possible that in the glossary and in
previous discussions we've been mis-using the term (like we have with VIP).
Personally, I suspect it's an overloaded term that in use in our industry
means different things depending on context (and is probably often mis-used
by people who don't understand what load balancing actually is). Again, I
care less about what specific terms we decide on so long as we define them
so that everyone can be on the same page and know what we're talking about.
:)

Stephen



On Wed, Apr 16, 2014 at 7:17 PM, Brandon Logan
brandon.lo...@rackspace.comwrote:

 You say 'only one port and protocol per load balancer', yet I don't know
 how this works. Could you define what a 'load balancer' is in this case?
  (port and protocol are attributes that I would associate with a TCP or UDP
 listener of some kind.)  Are you using 'load balancer' to mean 'listener'
 in this case (contrary to previous discussion of this on this list and the
 one defined here https://wiki.openstack.org/wiki/Neutron/LBaaS
 /Glossary#Loadbalancer )?


 Yes, it could be considered as a Listener according to that
 documentation.  The way to have a listener use the same VIP but listen
 on two different ports is something we call VIP sharing.  You would assign
 a VIP to one load balancer that uses one port, and then assign that same
 VIP to another load balancer that uses a different
 port than the first one.  How the backend implements it is an
 implementation detail (redundant, I know).  In the case of HAProxy it would
 just add the second port to the same config that the first load balancer
 was using.  In other drivers it might be different.
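A rough sketch of what VIP sharing could look like as two create calls (field
names are illustrative only, not part of any agreed API):

    # Two "load balancers" (listeners) sharing one VIP on different ports.
    lb_http = {
        "load_balancer": {
            "name": "web-http",
            "vip_id": "VIP-1234",   # hypothetical pre-existing VIP
            "protocol": "HTTP",
            "port": 80,
        }
    }
    lb_https = {
        "load_balancer": {
            "name": "web-https",
            "vip_id": "VIP-1234",   # same VIP, different port
            "protocol": "HTTPS",
            "port": 443,
        }
    }
    # With an haproxy backend, the driver could realize both objects as
    # additional bind/frontend entries in the config already serving VIP-1234.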





-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [heat-templates] [qa] [tempest] Questions about images

2014-04-17 Thread Mike Spreitzer
Steven Dake sd...@redhat.com wrote on 04/16/2014 03:31:16 PM:

 ...
 Fedora 19 shipped in the Fedora cloud images does *NOT* include 
 heat-cfntools.  The heat-cfntools package was added only in Fedora 
 20 qcow2 images.  Fedora 19 must be custom made which those 
 prebuilt-jeos-images are.  They worked for me last time I fired up an 
image.
 
 Regards
 -steve

When I look at http://fedorapeople.org/groups/heat/prebuilt-jeos-images/ 
today I see that F19-x86_64-cfntools.qcow2 is dated Feb 5, 2014.  Wherever 
I download it, its MD5 hash is b8fa3cb4e044d4e2439229b55982225c.  Have you 
succeeded with an image having that hash?
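For comparing notes, here is a minimal way to compute the same checksum locally
(the filename is an assumption; adjust to wherever the image was saved):

    import hashlib

    def md5sum(path, chunk_size=1 << 20):
        # Stream the file so multi-GB images don't need to fit in memory.
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(md5sum("F19-x86_64-cfntools.qcow2"))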

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] HA functionality discussion

2014-04-17 Thread Susanne Balle
I agree that the HA should be hidden from the user/tenant. IMHO a tenant
should just use a load balancer as a “managed” black box where the service
is resilient in itself.



Our current Libra/LBaaS implementation in the HP public cloud uses a pool
of standby LBs to replace failing tenants’ LBs. Our LBaaS service
monitors itself and replaces LBs when they fail. This is done via a set of
Admin API servers.



http://libra.readthedocs.org/en/latest/admin_api/index.html

The Admin server spawns several scheduled threads to run tasks such as
building new devices for the pool, monitoring load balancer devices and
maintaining IP addresses.



http://libra.readthedocs.org/en/latest/pool_mgm/about.html
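For those not familiar with Libra, the scheduled-thread approach described
above could be sketched roughly like this (the interval values and task
functions are hypothetical placeholders; the real implementation is in the
linked docs):

    import threading


    def run_periodically(interval, task):
        # Re-arm a timer after each run -- roughly how a scheduled
        # maintenance thread can be structured; Libra differs in detail.
        def _wrapper():
            try:
                task()
            finally:
                threading.Timer(interval, _wrapper).start()
        threading.Timer(interval, _wrapper).start()


    def build_spare_devices():   # hypothetical: top up the standby pool
        pass


    def probe_devices():         # hypothetical: health-check active devices
        pass


    run_periodically(60, build_spare_devices)
    run_periodically(30, probe_devices)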


Susanne


On Thu, Apr 17, 2014 at 6:49 PM, Stephen Balukoff sbaluk...@bluebox.netwrote:

 Heyas, y'all!

 So, given both the prioritization and usage info on HA functionality for
 Neutron LBaaS here:
 https://docs.google.com/spreadsheet/ccc?key=0Ar1FuMFYRhgadDVXZ25NM2NfbGtLTkR0TDFNUWJQUWc&usp=sharing

 It's clear that:

 A. HA seems to be a top priority for most operators
 B. Almost all load balancer functionality deployed is done so in an
 Active/Standby HA configuration

 I know there's been some round-about discussion about this on the list in
 the past (which usually got stymied in implementation details
 disagreements), but it seems to me that with so many players putting a high
 priority on HA functionality, this is something we need to discuss and
 address.

 This is also apropos, as we're talking about doing a major revision of the
 API, and it probably makes sense to seriously consider if or how HA-related
 stuff should make it into the API. I'm of the opinion that almost all the
 HA stuff should be hidden from the user/tenant, but that the admin/operator
 at the very least is going to need to have some visibility into HA-related
 functionality. The hope here is to discover what things make sense to have
 as a least common denominator and what will have to be hidden behind a
 driver-specific implementation.



 I certainly have a pretty good idea how HA stuff works at our
 organization, but I have almost no visibility into how this is done
 elsewhere, leastwise not enough detail to know what makes sense to write
 API controls for.

 So! Since gathering data about actual usage seems to have worked pretty
 well before, I'd like to try that again. Yes, I'm going to be asking about
 implementation details, but this is with the hope of discovering any least
 common denominator factors which make sense to build API around.

 For the purposes of this document, when I say load balancer devices I
 mean either physical or virtual appliances, or software executing on a host
 somewhere that actually does the load balancing. It need not directly
 correspond with anything physical... but probably does. :P

 And... all of these questions are meant to be interpreted from the
 perspective of the cloud operator.

 Here's what I'm looking to learn from those of you who are allowed to
 share this data:

 1. Are your load balancer devices shared between customers / tenants, not
 shared, or some of both?

 1a. If shared, what is your strategy to avoid or deal with collisions of
 customer rfc1918 address space on back-end networks? (For example, I know
 of no load balancer device that can balance traffic for both customer A and
 customer B if both are using the 10.0.0.0/24 subnet for their back-end
 networks containing the nodes to be balanced, unless an extra layer of
 NATing is happening somewhere.)

 2. What kinds of metrics do you use in determining load balancing capacity?

 3. Do you operate with a pool of unused load balancer device capacity
 (which a cloud OS would need to keep track of), or do you spin up new
 capacity (in the form of virtual servers, presumably) on the fly?

 3a. If you're operating with a availability pool, can you describe how new
 load balancer devices are added to your availability pool?  Specifically,
 are there any steps in the process that must be manually performed (ie. so
 no API could help with this)?

 4. How are new devices 'registered' with the cloud OS? How are they
 removed or replaced?

 5. What kind of visibility do you (or would you) allow your user base to
 see into the HA-related aspects of your load balancing services?

 6. What kind of functionality and visibility do you need into the
 operations of your load balancer devices in order to maintain your
 services, troubleshoot, etc.? Specifically, are you managing the
 infrastructure outside the purview of the cloud OS? Are there certain
 aspects which would be easier to manage if done within the purview of the
 cloud OS?

 7. What kind of network topology is used when deploying load balancing
 functionality? (ie. do your load balancer devices live inside or outside
 customer firewalls, directly on tenant networks? Are you using layer-3
 routing? etc.)

 8. Is there any other data you can share which would be useful in
 considering 
