Re: [Openstack] Openstack High Availability

2012-12-17 Thread Eugene Kirpichov
Hi,

I suggest you read a series of our blog posts on H/A in OpenStack (in
this order):
http://www.mirantis.com/blog/intro-to-openstack-in-production/
http://www.mirantis.com/blog/ha-platform-components-mysql-rabbitmq/
http://www.mirantis.com/blog/software-high-availability-load-balancing-openstack-cloud-api-servic/
http://www.mirantis.com/blog/117072/

Sorry for the shameless promotion but I actually think it's relevant :)


On Mon, Dec 17, 2012 at 12:11 PM, Samuel Winchenbach swinc...@gmail.com wrote:

 Hi All,


 I am following the high availability guide provided by openstack as seen
 here: http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html

 I currently have mysql and rabbitmq set up with pacemaker and corosync.
 Everything seems to be working fine.

 I am now adding the OpenStack services. I have keystone configured, and
 it is using haproxy for load balancing/redundancy and keepalived for the
 virtual IP. haproxy is configured to bind to a non-local address and is
 running on all HA nodes.
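
 (For reference, a setup like the one just described usually boils down to
 something like the sketch below; the service name, addresses and ports are
 illustrative only, not taken from this deployment.)

     # Allow haproxy to bind to the keepalived VIP even on nodes that do not
     # currently hold it, so it can run on every HA node:
     #   sysctl -w net.ipv4.ip_nonlocal_bind=1

     # /etc/haproxy/haproxy.cfg (fragment)
     listen keystone-api
         bind 192.168.0.100:5000        # keepalived VIP (illustrative)
         balance roundrobin
         option httpchk GET /v2.0/      # simple health check (illustrative)
         server node1 192.168.0.11:5000 check
         server node2 192.168.0.12:5000 check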

 Is this a good way to make the openstack services such as glance, swift,
 nova and keystone HA?

 Is it preferable to have keepalived start haproxy on the node that it
 assigns the virtual IP to, or is my setup with the non-local binding ok?

 Is this (
 http://docs.openstack.org/folsom/openstack-network/admin/content/ha_pacemaker.html)
 the preferred way to make the networking (quantum) HA?


 and one last question:

 What is a good method to detect a compute node failure and automatically
 restart, on another node, the virtual machines it was running?

 Thank you so much, and sorry to bombard you with questions!

 Sam



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack High Availability

2012-12-17 Thread Eugene Kirpichov
Right, you only need HA for swift-proxy, where a simple load balancer suffices.
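
(For illustration, "a simple load balancer" here can be as small as the
haproxy fragment below in front of two swift-proxy nodes; the addresses are
made up. The account/container/object servers don't need this, since the ring
already spreads replicas across them.)

    listen swift-proxy
        bind 192.168.0.101:8080        # VIP or public address (illustrative)
        balance roundrobin
        server proxy1 192.168.0.21:8080 check
        server proxy2 192.168.0.22:8080 check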

On Dec 17, 2012, at 12:42 PM, Caitlin Bestler caitlin.best...@nexenta.com 
wrote:

 
 
 Eugene Kirpichov wrote:
 
 I suggest you read a series of our blog posts on H/A in OpenStack (in this
 order):
 http://www.mirantis.com/blog/intro-to-openstack-in-production/
 http://www.mirantis.com/blog/ha-platform-components-mysql-rabbitmq/
 http://www.mirantis.com/blog/software-high-availability-load-balancing-openstack-cloud-api-servic/
 http://www.mirantis.com/blog/117072/
 
 Sorry for the shameless promotion but I actually think it's relevant :)
 
 Good articles. The one thing they don't cover is Swift, which mostly does not 
 need to be covered
 since the Swift service is designed to be highly available even if the 
 individual servers are not.
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ask for comments - Light weight Erasure code framework for swift

2012-10-17 Thread Eugene Kirpichov
 on internal-network traffic. With a
 replica-stored object, the proxy opens one connection to an object server,
 sends a request, gets a response, and streams the bytes out to the client.

 With an EC-stored object, the proxy has to open connections to, say, 10
 different object servers. Further, if one of the data blocks is unavailable
 (say data block 5), then the proxy has to go ahead and re-request all the
 data blocks plus a parity block so that it can fill in the gaps. That may be
 a significant increase in traffic on Swift's internal network. Further, by
 using such a large number of connections, it considerably increases the
 probability of a connection failure, which would mean more client requests
 would fail with truncated downloads.
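
 (A back-of-the-envelope illustration of that last point; the per-connection
 failure probability below is made up, only the shape of the argument matters.)

     # If each proxy->object-server connection fails independently with
     # probability p, a replicated GET needs 1 connection while an EC GET
     # needs k of them, so the chance that at least one of them breaks grows
     # roughly k-fold for small p.
     p = 0.001        # assumed per-connection failure probability
     k = 10           # data fragments fetched for an EC-stored object

     p_fail_replica = p
     p_fail_ec = 1 - (1 - p) ** k

     print("replica GET failure: %.4f%%" % (100 * p_fail_replica))  # 0.1000%
     print("EC GET failure:      %.4f%%" % (100 * p_fail_ec))       # ~0.9955%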


 Those are all the thoughts I have right now that are coherent enough to put
 into text. Clearly, adding erasure coding (or any other form of tiered
 storage) to Swift is not something undertaken lightly.

 Hope this helps.


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov
We're hiring! http://tinyurl.com/mirantis-openstack-engineer

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] A few posts on H/A OpenStack

2012-09-04 Thread Eugene Kirpichov
Hi Anne,

Thanks, looks like a good contribution target. I'll take a look.

On Sat, Sep 1, 2012 at 11:14 AM, Anne Gentle a...@openstack.org wrote:
 Hi Eugene  -

 Nice posts. If you're interested in collaborating on a new H/A
 document, Florian started one in the docs repository at
 https://github.com/openstack/openstack-manuals/tree/master/doc/src/docbkx/openstack-ha
 written in asciidoc, automatically publishing, but not yet linked on
 the docs site. It needs more meat and your posts could bring some
 meat - what do you think about bringing them in to enhance the
 guide?

 Thanks,
 Anne

 On Fri, Aug 31, 2012 at 5:59 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 Hi,

 Thought the community could be interested. We (Mirantis) recently
 published a few posts about building H/A OpenStack.

 http://www.mirantis.com/blog/intro-to-openstack-in-production/ - Intro
 http://www.mirantis.com/blog/ha-platform-components-mysql-rabbitmq/ -
 MySQL and RabbitMQ
 http://www.mirantis.com/blog/software-high-availability-load-balancing-openstack-cloud-api-servic/
 - API services

 Comments very welcome.

 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] A few posts on H/A OpenStack

2012-09-01 Thread Eugene Kirpichov
Thanks! Indeed, I put only the 'networking' tag into Planet; will fix...



On 31.08.2012, at 21:20, Atul Jha atul@csscorp.com wrote:

 Eugene,
 
 snip
 Hi,
 
 Thought the community could be interested. We (Mirantis) recently
 published a few posts about building H/A OpenStack.
 
 http://www.mirantis.com/blog/intro-to-openstack-in-production/ - Intro
 http://www.mirantis.com/blog/ha-platform-components-mysql-rabbitmq/ -
 MySQL and RabbitMQ
 http://www.mirantis.com/blog/software-high-availability-load-balancing-openstack-cloud-api-servic/
 - API services
 
 Comments very welcome.
 
 /snip
 
 Nice, informative blog.
 I am not sure if I missed anything, but why don't I see these blogs in the
 OpenStack Planet feed? (http://planet.openstack.org/)
 
 Am I missing something? Putting them on Planet will have more impact, as we
 really don't know whether everyone who wants to know about HA is on the
 mailing list and has not missed your announcement mail. In other words, you
 won't have to make an announcement on the mailing list for every new
 blog post published. :)
 
 
 Cheers!!
 
 Atul Jha aka koolhead17
 http://www.csscorp.com/common/email-disclaimer.php

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] A few posts on H/A OpenStack

2012-08-31 Thread Eugene Kirpichov
Hi,

Thought the community could be interested. We (Mirantis) recently
published a few posts about building H/A OpenStack.

http://www.mirantis.com/blog/intro-to-openstack-in-production/ - Intro
http://www.mirantis.com/blog/ha-platform-components-mysql-rabbitmq/ -
MySQL and RabbitMQ
http://www.mirantis.com/blog/software-high-availability-load-balancing-openstack-cloud-api-servic/
- API services

Comments very welcome.

-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question about Flat DHCP networking

2012-08-20 Thread Eugene Kirpichov
Hi,

I found the code. It's a dnsmasq dhcp-script that dnsmasq calls for
various lease events; it resides in nova/bin/nova-dhcpbridge and, among
other things, calls release_fixed_ip when a lease expires.
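
For readers who haven't seen the mechanism: dnsmasq is started with
--dhcp-script=<path>, and it then runs that script with an action ("add",
"old" or "del") plus the MAC and IP of the lease. A minimal generic sketch of
such a script is below - it only illustrates the interface, it is not the
actual nova-dhcpbridge code.

    #!/usr/bin/env python
    # Generic sketch of a dnsmasq dhcp-script (not the real nova-dhcpbridge).
    # dnsmasq invokes it as:  <script> <action> <mac> <ip> [<hostname>]
    import sys

    def main():
        action, mac, ip = sys.argv[1:4]
        if action == "add":
            print("lease granted:  %s -> %s" % (mac, ip))  # nova records the lease here
        elif action == "del":
            print("lease released: %s -> %s" % (mac, ip))  # nova would call release_fixed_ip here
        else:  # "old" - an existing lease seen again / renewed
            print("lease renewed:  %s -> %s" % (mac, ip))

    if __name__ == "__main__":
        main()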

On Sat, Aug 18, 2012 at 1:01 AM, Aaron Rosen aro...@nicira.com wrote:
 Hi Eugene,

 I'm not sure; I have not looked at the code. (I'm guessing that it probably
 keeps the lease around, since it knows the VM is still active, instead of
 recycling the IP address.) Though this is just a guess. You should look at
 the implementation details if you are curious.

 Aaron


 On Sat, Aug 18, 2012 at 3:48 AM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:

 Thanks. And how will n-net react?



 On 18.08.2012, at 0:43, Aaron Rosen aro...@nicira.com wrote:

 Hi Eugene,

 This means that if a VM stops its DHCP client, nova-network will be
 aware of this, since the VM will not attempt to renew its DHCP lease.

 Aaron

 On Fri, Aug 17, 2012 at 5:58 PM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:

 Hi,

 The documentation

 http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-flat-dhcp-networking.html
 has the passage: The nova-network service will track leases and
 releases in the database so it knows if a VM instance has stopped
 properly configuring via DHCP

 Can someone briefly explain me what this means, if possible with rough
 pointers to code?
 I don't recall nova noticing when my VM actually stopped properly
 configuring via DHCP.

 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp






-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question about Flat DHCP networking

2012-08-20 Thread Eugene Kirpichov
Heh, that's an article by my colleague Piotr Siwczak, which I already
read very thoroughly during the review process - but thanks ;)

Actually, I already found the answer to my question and added it to the
documentation (it will appear once the code review is completed) - it was
related to the nova-dhcpbridge script.

On Mon, Aug 20, 2012 at 10:21 PM, hitesh wadekar
hitesh.wade...@gmail.com wrote:
 Maybe this article will help.

 http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/

 Thanks,
 Hitesh


 On Tue, Aug 21, 2012 at 3:16 AM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:

 Hi,

 I found the code. It's a dnsmasq dhcp-script called by dnsmasq for
 various events, it resides in nova/bin/nova-dhcpbridge and it, among
 other things, calls release_fixed_ip when the lease expires.

 On Sat, Aug 18, 2012 at 1:01 AM, Aaron Rosen aro...@nicira.com wrote:
  Hi Eugene,
 
  I'm not sure I have not looked at the code (I'm guessing that it
  probably
  keeps the lease around since it knows the VM is still active instead of
  recycling the ip address). Though this is just a guess. You should look
  at
  the implementation details if you are curious.
 
  Aaron
 
 
  On Sat, Aug 18, 2012 at 3:48 AM, Eugene Kirpichov ekirpic...@gmail.com
  wrote:
 
  Thanks. And how will n-net react?
 
 
 
  On 18.08.2012, at 0:43, Aaron Rosen aro...@nicira.com wrote:
 
  Hi Eugene,
 
  This means that if a VM stops it's DHCP client that nova-network will
  be
  aware of this since the VM will not attempt to renew it's DHCP lease.
 
  Aaron
 
  On Fri, Aug 17, 2012 at 5:58 PM, Eugene Kirpichov
  ekirpic...@gmail.com
  wrote:
 
  Hi,
 
  The documentation
 
 
  http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-flat-dhcp-networking.html
  has the passage: The nova-network service will track leases and
  releases in the database so it knows if a VM instance has stopped
  properly configuring via DHCP
 
  Can someone briefly explain me what this means, if possible with rough
  pointers to code?
  I don't recall nova noticing when my VM actually stopped properly
  configuring via DHCP.
 
  --
  Eugene Kirpichov
  http://www.linkedin.com/in/eugenekirpichov
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
 



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Question about Flat DHCP networking

2012-08-18 Thread Eugene Kirpichov
Thanks. And how will n-net react?



On 18.08.2012, at 0:43, Aaron Rosen aro...@nicira.com wrote:

 Hi Eugene, 
 
 This means that if a VM stops its DHCP client, nova-network will be
 aware of this, since the VM will not attempt to renew its DHCP lease.
 
 Aaron
 
 On Fri, Aug 17, 2012 at 5:58 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 Hi,
 
 The documentation
 http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-flat-dhcp-networking.html
 has the passage: The nova-network service will track leases and
 releases in the database so it knows if a VM instance has stopped
 properly configuring via DHCP
 
 Can someone briefly explain me what this means, if possible with rough
 pointers to code?
 I don't recall nova noticing when my VM actually stopped properly
 configuring via DHCP.
 
 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Question about Flat DHCP networking

2012-08-17 Thread Eugene Kirpichov
Hi,

The documentation
http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-flat-dhcp-networking.html
has the passage: "The nova-network service will track leases and
releases in the database so it knows if a VM instance has stopped
properly configuring via DHCP."

Can someone briefly explain to me what this means, if possible with rough
pointers to the code?
I don't recall nova noticing when my VM actually stopped properly
configuring via DHCP.

-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] LBaaS IRC meeting notes

2012-08-16 Thread Eugene Kirpichov
Hi,

This time, again, Oleg Gelbukh and I were the sole participants, so let
me just repeat the news on our side here:

* Work is in progress on the F5 driver.
* A new scheduler (device allocator) similar to Nova's FilterScheduler
is ready for master (though not pushed to the public repo at the
moment)
* We had some communication with LB vendors and with Dan from Quantum
about LBaaS/Quantum integration, and identified some key questions.
Dan already sent a link to a wiki page about that (though there's not
much there now).
* There are going to be several design sessions on LBaaS and Quantum
integration at the Summit.
* We're going to provide the first Quantum integration proposal next
week on the ML, to start the discussion before the summit.

-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] LBaaS IRC meeting notes

2012-08-16 Thread Eugene Kirpichov
Hi,

Not yet. But I think we'll create one now that you've mentioned it :)

On Thu, Aug 16, 2012 at 3:11 PM, Atul Jha atul@csscorp.com wrote:
 Hi,

 Snip
 Hi,

 This time again Oleg Gelbukh and me were the sole participants, so let
 me just repeat the news on our side here:

 * Work is in progress on the F5 driver.
 * A new scheduler (device allocator) similar to Nova's FilterScheduler
 is ready for master (though not pushed to the public repo at the
 moment)
 * We had some communication with LB vendors and with Dan from Quantum
 about LBaaS/Quantum integration, and identified some key questions.
 Dan already sent a link to a wiki page about that (though there's not
 much there now).
 * There are going to be several design sessions on LBaaS and Quantum
 integration at the Summit.
 * We're going to provide the first Quantum integration proposal next
 week on the ML, to start the discussion before the summit.
 /snip

 Is there a blueprint or project page for LBaaS where I can see the ongoing
 development of the project?

 Thanks,

 Atul Jha
 http://www.csscorp.com/common/email-disclaimer.php



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] LBaaS IRC meeting notes

2012-08-16 Thread Eugene Kirpichov
Created a blueprint in Quantum:
https://blueprints.launchpad.net/quantum/+spec/lbaas

On Thu, Aug 16, 2012 at 3:30 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Hi,

 Not yet. But I think we'll create one now that you've mentioned it :)

 On Thu, Aug 16, 2012 at 3:11 PM, Atul Jha atul@csscorp.com wrote:
 Hi,

 Snip
 Hi,

 This time again Oleg Gelbukh and me were the sole participants, so let
 me just repeat the news on our side here:

 * Work is in progress on the F5 driver.
 * A new scheduler (device allocator) similar to Nova's FilterScheduler
 is ready for master (though not pushed to the public repo at the
 moment)
 * We had some communication with LB vendors and with Dan from Quantum
 about LBaaS/Quantum integration, and identified some key questions.
 Dan already sent a link to a wiki page about that (though there's not
 much there now).
 * There are going to be several design sessions on LBaaS and Quantum
 integration at the Summit.
 * We're going to provide the first Quantum integration proposal next
 week on the ML, to start the discussion before the summit.
 /snip

 Is there a blueprint or project page LBaaS  where i can see ongoing 
 development of the project.

 Thanks,

 Atul Jha
 http://www.csscorp.com/common/email-disclaimer.php



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Help with meta-data

2012-08-16 Thread Eugene Kirpichov
Hi Anne,

On Wed, Aug 15, 2012 at 2:55 PM, Anne Gentle a...@openstack.org wrote:
 Hi Eugene -
 But I thought everyone was on the openstack list! :) Thanks for following 
 up.

 On Wed, Aug 15, 2012 at 4:35 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 Hi Anne,

 I accidentally found this email of yours while looking for links to my post.
 I'd probably have found it earlier if you cc'd me on
 ekirpic...@gmail.com or ekirpic...@mirantis.com [yes, that's two
 different spellings...] :)

 I support the idea that this should be somehow integrated in the docs,
 but I'm not sure where exactly in the docs is a good place for
 information of this style. Would it help if I just linked to the posts
 from some wiki page or from the docs?

 I do like to put relevant blog posts on the wiki at
 http://wiki.openstack.org/BloggersTips, so you can certainly add to
 that page. If it's really missing information in the docs, though, it
 should be added to the docs. I know that's a tough judgement call but
 we all have to encourage that call.

Thanks. I added a link to the post to this wiki page and I'll now
spend some time picking what to integrate into the docs.


 Or would it only help if I (or
 somebody) actually merged the relevant parts of the posts into
 official documentation?

 I wouldn't say only help but I prefer that you merge the relevant
 parts of the posts. It's tougher for a doc team member to merge only
 parts in without violating the license of the content - you as content
 owner can certainly choose which parts to move into the official
 documentation though.

 Thanks for asking for clarifications - these are certainly gray areas
 that I'd like to shine light upon.
 Anne

 On Thu, Aug 9, 2012 at 8:07 AM, Anne Gentle a...@openstack.org wrote:
 All, sorry for top posting, but this is a fine example of why we
 really need bloggers to help with the documentation. These fragmented
 instructions are difficult to rely on - we need maintainable,
 process-oriented treatment of content.

 Mirantis peeps, you have added in your blog entries to the docs in the
 past, let's find ways to continually do that and maintain going
 forward.

 I'm not so interested in more install guides, but definitely
 interested in more configuration guides. So Kord, while I like the
 idea (and execution!) of the StackGeek 10-minute guide, it's not one
 to bring into the official docs. But we would definitely welcome your
 reviews of incoming updates to the docs!

 Thanks Simon for bringing your difficulties to the list - we
 continually work on improving the docs. What you learn now could help
 hundreds if not thousands of others, so I'd love for you to improve
 the official docs with your findings.
 Thanks,
 Anne

 On Thu, Aug 9, 2012 at 4:42 AM, Simon Walter si...@gikaku.com wrote:

 On 08/09/2012 12:59 PM, Scott Moser wrote:

 On Aug 8, 2012, at 8:20 PM, Simon Walter si...@gikaku.com wrote:


 On 08/09/2012 06:45 AM, Jay Pipes
 I guess I'll have to build a VM from scratch, as I was relying on the ssh
 key to be able to ssh into the VM, which apparently is supplied by the
 meta-data service.

 use cirros.
 load an image, ssh on with 'cirros' user. pass is 'cubswin:)'


 Thank you. That was good advice.

 Somehow I was not able to connect via ssh. I managed to get novnc working
 and logged into the VM. I can't find anything about connecting via serial 
 or
 the like as you can with Xen. I need to read more about KVM I guess.

 Anyway, I think my networking setup is stuffed. I thought the 10-minute
 install would be the quickest way to get up and running. Now I find myself
 poring over documentation trying to understand how best to set up
 FlatDHCPManager with two network interfaces. I understand many things have
 changed, so I don't want to go reading something out of date. I found these
 blog posts which explained a lot:
 http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/#comments
 http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/
 But am I reading the wrong thing? I like the way Stackgeek had it set up:
 http://stackgeek.com/guides/gettingstarted.html

 But I think they are missing details, or it's outdated. For example, with
 their setup the VNC console in Horizon does not work, because nova-vncproxy
 is installed rather than novnc.

 I'm pretty sure I can figure the networking out if I have the right
 documentation in the first place. Are there clear instructions for this
 anywhere? Or would someone mind walking me through it again? So far I've
 followed the Stackgeek setup above, but the networking is obviously stuffed.

 Must I have the flat_interface in promiscuous mode?
 Or does it actually need an IP address?
 Why are my VMs picking up an IP address from the public_interface DHCP
 server and not from the flat_network_bridge?

 Too many questions to ask. So I thought I should just ask: what is missing
 or incorrect in Stackgeek's 10-minute scripts?

 Many thanks

Re: [Openstack] Help with meta-data

2012-08-16 Thread Eugene Kirpichov
I added some documentation.
Submitted for review here https://review.openstack.org/#/c/11518/

On Thu, Aug 16, 2012 at 4:10 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Hi Anne,

 On Wed, Aug 15, 2012 at 2:55 PM, Anne Gentle a...@openstack.org wrote:
 Hi Eugene -
 But I thought everyone was on the openstack list! :) Thanks for following 
 up.

 On Wed, Aug 15, 2012 at 4:35 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 Hi Anne,

 I accidentally found this email of yours while looking for links to my post.
 I'd probably have found it earlier if you cc'd me on
 ekirpic...@gmail.com or ekirpic...@mirantis.com [yes, that's two
 different spellings...] :)

 I support the idea that this should be somehow integrated in the docs,
 but I'm not sure where exactly in the docs is a good place for
 information of this style. Would it help if I just linked to the posts
 from some wiki page or from the docs?

 I do like to put relevant blog posts on the wiki at
 http://wiki.openstack.org/BloggersTips, so you can certainly add to
 that page. If it's really missing information in the docs, though, it
 should be added to the docs. I know that's a tough judgement call but
 we all have to encourage that call.

 Thanks. I added a link to the post to this wiki page and I'll now
 spend some time picking what to integrate into the docs.


 Or would it only help if I (or
 somebody) actually merged the relevant parts of the posts into
 official documentation?

 I wouldn't say only help but I prefer that you merge the relevant
 parts of the posts. It's tougher for a doc team member to merge only
 parts in without violating the license of the content - you as content
 owner can certainly choose which parts to move into the official
 documentation though.

 Thanks for asking for clarifications - these are certainly gray areas
 that I'd like to shine light upon.
 Anne

 On Thu, Aug 9, 2012 at 8:07 AM, Anne Gentle a...@openstack.org wrote:
 All, sorry for top posting, but this is a fine example of why we
 really need bloggers to help with the documentation. These fragmented
 instructions are difficult to rely on - we need maintainable,
 process-oriented treatment of content.

 Mirantis peeps, you have added in your blog entries to the docs in the
 past, let's find ways to continually do that and maintain going
 forward.

 I'm not so interested in more install guides, but definitely
 interested in more configuration guides. So Kord, while I like the
 idea (and execution!) of the StackGeek 10-minute guide, it's not one
 to bring into the official docs. But we would definitely welcome your
 reviews of incoming updates to the docs!

 Thanks Simon for bringing your difficulties to the list - we
 continually work on improving the docs. What you learn now could help
 hundreds if not thousands of others, so I'd love for you to improve
 the official docs with your findings.
 Thanks,
 Anne

 On Thu, Aug 9, 2012 at 4:42 AM, Simon Walter si...@gikaku.com wrote:

 On 08/09/2012 12:59 PM, Scott Moser wrote:

 On Aug 8, 2012, at 8:20 PM, Simon Walter si...@gikaku.com wrote:


 On 08/09/2012 06:45 AM, Jay Pipes
 I guess I'll have to build a VM from scratch, as I was relying on the 
 ssh
 key to be able to ssh into the VM, which apparently is supplied by the
 meta-data service.

 use cirros.
 load an image, ssh on with 'cirros' user. pass is 'cubswin:)'


 Thank you. That was good advice.

 Somehow I was not able to connect via ssh. I managed to get novnc working
 and logged into the VM. I can't find anything about connecting via serial 
 or
 the like as you can with Xen. I need to read more about KVM I guess.

 Anyway, I think my networking setup is stuffed. I thought the 10-minute
 install would be the quickest way to get up and running. Now I find myself
 poring over documentation trying to understand how best to set up
 FlatDHCPManager with two network interfaces. I understand many things have
 changed, so I don't want to go reading something out of date. I found these
 blog posts which explained a lot:
 http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/#comments
 http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/
 But am I reading the wrong thing? I like the way Stackgeek had it set up:
 http://stackgeek.com/guides/gettingstarted.html

 But I think they are missing details, or it's outdated. For example, with
 their setup the VNC console in Horizon does not work, because nova-vncproxy
 is installed rather than novnc.

 I'm pretty sure I can figure the networking out if I have the right
 documentation in the first place. Are there clear instructions for this
 anywhere? Or would someone mind walking me through it again? So far I've
 followed the Stackgeek setup above, but the networking is obviously stuffed.

 Must I have the flat_interface in promiscuous mode?
 Or does it actually need an IP address?
 Why are my VMs picking up an IP address from the public_interface DHCP

Re: [Openstack] Help with meta-data

2012-08-15 Thread Eugene Kirpichov
Hi Anne,

I accidentally found this email of yours while looking for links to my post.
I'd probably have found it earlier if you cc'd me on
ekirpic...@gmail.com or ekirpic...@mirantis.com [yes, that's two
different spellings...] :)

I support the idea that this should be somehow integrated into the docs,
but I'm not sure where exactly in the docs would be a good place for
information of this style. Would it help if I just linked to the posts
from some wiki page or from the docs? Or would it only help if I (or
somebody) actually merged the relevant parts of the posts into
official documentation?

On Thu, Aug 9, 2012 at 8:07 AM, Anne Gentle a...@openstack.org wrote:
 All, sorry for top posting, but this is a fine example of why we
 really need bloggers to help with the documentation. These fragmented
 instructions are difficult to rely on - we need maintainable,
 process-oriented treatment of content.

 Mirantis peeps, you have added in your blog entries to the docs in the
 past, let's find ways to continually do that and maintain going
 forward.

 I'm not so interested in more install guides, but definitely
 interested in more configuration guides. So Kord, while I like the
 idea (and execution!) of the StackGeek 10-minute guide, it's not one
 to bring into the official docs. But we would definitely welcome your
 reviews of incoming updates to the docs!

 Thanks Simon for bringing your difficulties to the list - we
 continually work on improving the docs. What you learn now could help
 hundreds if not thousands of others, so I'd love for you to improve
 the official docs with your findings.
 Thanks,
 Anne

 On Thu, Aug 9, 2012 at 4:42 AM, Simon Walter si...@gikaku.com wrote:

 On 08/09/2012 12:59 PM, Scott Moser wrote:

 On Aug 8, 2012, at 8:20 PM, Simon Walter si...@gikaku.com wrote:


 On 08/09/2012 06:45 AM, Jay Pipes
 I guess I'll have to build a VM from scratch, as I was relying on the ssh
 key to be able to ssh into the VM, which apparently is supplied by the
 meta-data service.

 use cirros.
 load an image, ssh on with 'cirros' user. pass is 'cubswin:)'


 Thank you. That was good advice.

 Somehow I was not able to connect via ssh. I managed to get novnc working
 and logged into the VM. I can't find anything about connecting via serial or
 the like as you can with Xen. I need to read more about KVM I guess.

 Anyway, I think my networking setup is stuffed. I thought the 10-minute
 install would be the quickest way to get up and running. Now I find myself
 poring over documentation trying to understand how best to set up
 FlatDHCPManager with two network interfaces. I understand many things have
 changed, so I don't want to go reading something out of date. I found these
 blog posts which explained a lot:
 http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/#comments
 http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/
 But am I reading the wrong thing? I like the way Stackgeek had it set up:
 http://stackgeek.com/guides/gettingstarted.html

 But I think they are missing details, or it's outdated. For example, with
 their setup the VNC console in Horizon does not work, because nova-vncproxy
 is installed rather than novnc.

 I'm pretty sure I can figure the networking out if I have the right
 documentation in the first place. Are there clear instructions for this
 anywhere? Or would someone mind walking me through it again? So far I've
 followed the Stackgeek setup above, but the networking is obviously stuffed.

 Must I have the flat_interface in promiscuous mode?
 Or does it actually need an IP address?
 Why are my VMs picking up an IP address from the public_interface DHCP
 server and not from the flat_network_bridge?
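
 (For reference, a typical two-NIC FlatDHCP layout from that era looks roughly
 like the nova.conf fragment below; the interface names and addresses are
 illustrative, not taken from this setup, and exact flag syntax differs a bit
 between releases.)

     # nova.conf fragment (Essex/Folsom-era flag names, illustrative values)
     network_manager=nova.network.manager.FlatDHCPManager
     public_interface=eth0      # carries the host's routable IP
     flat_interface=eth1        # dedicated to VM traffic; it is enslaved to the
                                # bridge and does not need its own IP address
     flat_network_bridge=br100  # nova's dnsmasq listens here and hands out
                                # addresses from fixed_range to the VMs
     fixed_range=10.0.0.0/24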

 Too many questions to ask. So I thought I should just ask: what is missing
 or incorrect in Stackgeek's 10-minute scripts?

 Many thanks for any advice, tips, docs, etc.


 Cheers,

 Simon


 --
 simonsmicrophone.com

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Is it ok to post a job here?

2012-08-10 Thread Eugene Kirpichov
Hello community,

I'm wondering whether it's ok to post an OpenStack-related job to this
mailing list.
On the one hand, I didn't find anything on the MailingListEtiquette page
hinting that it's not ok; on the other hand, I didn't find any job
postings in the archives either.
So I figured I'd better ask first :)

-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] LBaaS IRC meeting notes

2012-08-09 Thread Eugene Kirpichov
REMINDER: Another meeting will take place today, about 2 hours from now
(19:00 UTC), on #openstack-meeting (use http://webchat.freenode.net/
to join).

On Mon, Aug 6, 2012 at 3:09 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Hi,

 Below are the meeting notes from the IRC meeting about LBaaS which
 took place on Aug 2.

 The logs can be found at
 http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-02-17.00.log.html
 [19:18:27 .. 19:56:41]

 === Status on our side ===
 https://github.com/Mirantis/openstack-lbaas
 * HAproxy and ACE implemented to some level.
 * F5 to be finished in 6 weeks. By that time driver API will be stable.
 * Support for device capabilities (drivers report them, users request
 them) on its way to master.

 === Discussion with Dan Went of Quantum ===
 Dan said:
 * Quantum is moving up the stack: L2 in essex, L3 in folsom, and in
 Grizzly we'll be moving into L4-L7
 * Many people in Quantum are generally interested in LBaaS
 * We need to make sure we're not duplicating effort by developing our
 own CLI/GUI for LBaaS. We should at least integrate with the Quantum
 CLI framework and generally collaborate with the Quantum community
 (circulate blueprints and POC).

 The next meeting will take place Aug 9, 19:00 UTC at #openstack-meeting.

 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] LBaaS IRC meeting notes

2012-08-09 Thread Eugene Kirpichov
This time I was the sole participant, so here's what I had to say :)

Our current progress is as follows:
The team has almost finished the core code and is about to start
working on the F5 driver.
Most of the external API is implemented, and we plan to polish
the driver/core interaction logic while working on F5.
Also, filtering of LBs by their capabilities is almost ready in a
separate branch and will be merged to master in the next few days.
We've also been talking with a few LB vendors and they seem very excited.
The dominating topic is Quantum integration, on which we'll have more
news in the coming days.

On Thu, Aug 9, 2012 at 9:48 AM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 REMINDER: Another meeting will take place today, in ~2 hours from now
 (19:00 UTC), on #openstack-meeting (use http://webchat.freenode.net/
 to join).

 On Mon, Aug 6, 2012 at 3:09 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Hi,

 Below are the meeting notes from the IRC meeting about LBaaS which
 took place on Aug 2.

 The logs can be found at
 http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-02-17.00.log.html
 [19:18:27 .. 19:56:41]

 === Status on our side ===
 https://github.com/Mirantis/openstack-lbaas
 * HAproxy and ACE implemented to some level.
 * F5 to be finished in 6 weeks. By that time driver API will be stable.
 * Support for device capabilities (drivers report them, users request
 them) on its way to master.

 === Discussion with Dan Went of Quantum ===
 Dan said:
 * Quantum is moving up the stack: L2 in essex, L3 in folsom, and in
 Grizzly we'll be moving into L4-L7
 * Many people in Quantum are generally interested in LBaaS
 * We need to make sure we're not duplicating effort by developing our
 own CLI/GUI for LBaaS. We should at least integrate with the Quantum
 CLI framework and generally collaborate with the Quantum community
 (circulate blueprints and POC).

 The next meeting will take place Aug 9, 19:00 UTC at #openstack-meeting.

 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] LBaaS IRC meeting notes

2012-08-06 Thread Eugene Kirpichov
Hi,

Below are the meeting notes from the IRC meeting about LBaaS which
took place on Aug 2.

The logs can be found at
http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-08-02-17.00.log.html
[19:18:27 .. 19:56:41]

=== Status on our side ===
https://github.com/Mirantis/openstack-lbaas
* HAproxy and ACE implemented to some level.
* F5 to be finished in 6 weeks. By that time driver API will be stable.
* Support for device capabilities (drivers report them, users request
them) on its way to master.

=== Discussion with Dan Went of Quantum ===
Dan said:
* Quantum is moving up the stack: L2 in essex, L3 in folsom, and in
Grizzly we'll be moving into L4-L7
* Many people in Quantum are generally interested in LBaaS
* We need to make sure we're not duplicating effort by developing our
own CLI/GUI for LBaaS. We should at least integrate with the Quantum
CLI framework and generally collaborate with the Quantum community
(circulate blueprints and POC).

The next meeting will take place Aug 9, 19:00 UTC at #openstack-meeting.

-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] A series of blog posts on OpenStack networking details

2012-08-03 Thread Eugene Kirpichov
Hi Sean,

Yes, I am :) Thanks, I'll follow the necessary procedures.

On Fri, Aug 3, 2012 at 12:50 PM, Sean Dague sda...@linux.vnet.ibm.com wrote:
 On 08/03/2012 02:50 PM, Eugene Kirpichov wrote:

 Hello community,

 I'd like to advertise that my colleague Piotr Siwczak and I at
 Mirantis have started a series of blog posts explaining the gory
 details of OpenStack networking.

 http://www.mirantis.com/tag/networking/

 So far we have two posts:


 http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/
 - basics of FlatManager and FlatDHCPManager in multi-host mode.


 http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/
 - extremely detailed account of FlatDHCPManager in single-host mode;
 down to a walkthrough of L2 packet flow for several scenarios. I wrote
 this post in revenge for my own struggles, when I kept thinking "if only
 someone had described in extreme detail how it is *supposed* to work" -
 but I was not able to find anything like that :)

 A few more will appear soon: two posts from Piotr on VLANManager,
 eventually also analyzing the packet flow.

 (Piotr and I have slightly different styles: he prefers to cover
 details across several posts, while I prefer to write a huge post with
 all the details at once)

 Comments and especially corrections are extremely welcome! [and, well,
 shares too :) ]


 Looks like a great thing to add to the OpenStack Planet if you are
 interested.

 -Sean

 --
 Sean Dague
 IBM Linux Technology Center
 email: sda...@linux.vnet.ibm.com
 alt-email: slda...@us.ibm.com



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] A series of blog posts on OpenStack networking details

2012-08-03 Thread Eugene Kirpichov
Here's the review item:

https://review.openstack.org/#/c/10790/

Do you know anyone who's authorized to approve items in openstack-planet?

On Fri, Aug 3, 2012 at 1:12 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Hi Sean,

 Yes, I am :) Thanks, I'll follow the necessary procedures.

 On Fri, Aug 3, 2012 at 12:50 PM, Sean Dague sda...@linux.vnet.ibm.com wrote:
 On 08/03/2012 02:50 PM, Eugene Kirpichov wrote:

 Hello community,

 I'd like to advertise that me and my colleague Piotr Siwczak at
 Mirantis have started a series of blog posts explaining the gory
 details of OpenStack networking.

 http://www.mirantis.com/tag/networking/

 So far we have two posts:


 http://www.mirantis.com/blog/openstack-networking-flatmanager-and-flatdhcpmanager/
 - basics of FlatManager and FlatDHCPManager in multi-host mode.


 http://www.mirantis.com/blog/openstack-networking-single-host-flatdhcpmanager/
 - extremely detailed account of FlatDHCPManager in single-host mode;
 down to a walkthrough of L2 packet flow for several scenarios. I wrote
 this post in revenge for my own struggles when I dreamt if only
 someone had described in extreme detail how it is *supposed* to work
 but was not able to find anything like that :)

 A few more will appear soon: two posts from Piotr on VLANManager,
 eventually also analyzing the packet flow.

 (me and Peter have slightly different styles: he prefers to cover
 details across several posts while I prefer to write a huge post with
 all the details at once)

 Comments and especially corrections are extremely welcome! [and, well,
 shares too :) ]


 Look like a great thing to add to the OpenStack Planet if you are
 interested.

 -Sean

 --
 Sean Dague
 IBM Linux Technology Center
 email: sda...@linux.vnet.ibm.com
 alt-email: slda...@us.ibm.com



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Announcing proof-of-concept Load Balancing as a Service project

2012-08-02 Thread Eugene Kirpichov
REMINDER: the IRC meeting will happen in 5 minutes on #openstack-meeting.

On Tue, Jul 24, 2012 at 6:33 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Hello community,

 We at Mirantis have had a number of clients request functionality to
 control various load balancer devices (software and hardware) via an
 OpenStack API and Horizon. So, in collaboration with the Cisco OpenStack
 team and a number of other community members, we’ve started
 socializing the blueprints for an elastic load balancer API service.
 At this point we’d like to share where we are and would very much
 appreciate anyone participating and providing input.

 The current vision is to allow cloud tenants to request and
 provision virtual load balancers on demand and allow cloud
 administrators to manage a pool of available LB devices. Access is
 provided under a unified interface to different kinds of load
 balancers, both software and hardware. This means that the API for tenants
 is abstracted away from the actual API of underlying hardware or
 software load balancers, and LBaaS effectively bridges this gap.

 POC level support for Cisco ACE and HAproxy is currently implemented
 in the form of plug-ins to LBaaS called “drivers”. We also started some
 work on F5 drivers. Would appreciate hearing input on what other
 drivers may be important at this point…nginx?

 Another question we have is whether this should be a standalone module or a
 Quantum plugin… Dan – any feedback on this (and BTW congrats on the
 acquisition =).

 In order not to reinvent the wheel, we decided to base our API on
 Atlas-LB (http://wiki.openstack.org/Atlas-LB).

 Here are all the pointers:
  * Project overview: http://goo.gl/vZdei
  * Screencast: http://www.youtube.com/watch?v=NgAL-kfdbtE
  * API draft: http://goo.gl/gFcWT
  * Roadmap: http://goo.gl/EZAhf
  * Github repo: https://github.com/Mirantis/openstack-lbaas

 The code is written in Python and based on the OpenStack service
 template. We’ll be happy to give a walkthrough over what we have to
 anyone who may be interested in contributing (for example, creating a
 driver to support a particular LB device).

 All of the documents and code are not set in stone and we’re writing
 here specifically to ask for feedback and collaboration from the
 community.

 We would like to start holding weekly IRC meetings at
 #openstack-meeting; we propose 19:00 UTC on Thursdays (this time seems
 free according to http://wiki.openstack.org/Meetings/ ), starting Aug 2.

 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] High Available queues in rabbitmq

2012-08-01 Thread Eugene Kirpichov
Hi Andrea,

I think you're right that it's necessary to react to cancel notifications.
I'll address this in my patch.
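
(For context: with RabbitMQ's active/active mirrored queues of that era,
mirroring was requested per queue via the x-ha-policy argument at declaration
time. The snippet below is a minimal, generic kombu sketch of that - it is not
the actual nova/openstack-common kombu code, and the queue name and broker
address are made up.)

    # Generic kombu sketch: declare a mirrored ("HA") queue and consume from it.
    from kombu import Connection, Exchange, Queue

    exchange = Exchange('nova', type='topic', durable=False)
    queue = Queue('compute.node1', exchange, routing_key='compute.node1',
                  queue_arguments={'x-ha-policy': 'all'})  # mirror to all cluster nodes

    def on_message(body, message):
        print('got: %r' % (body,))
        message.ack()

    with Connection('amqp://guest:guest@rabbit-node1:5672//') as conn:
        with conn.Consumer(queue, callbacks=[on_message]):
            while True:
                # On a master failover the consumer may be cancelled by the
                # broker (basic.cancel) rather than see a connection error,
                # which is exactly the case discussed below.
                conn.drain_events()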

On Fri, Jul 27, 2012 at 7:05 AM, Rosa, Andrea (HP Cloud Services)
andrea.r...@hp.com wrote:
 Hi

As for consumer cancellation notifications - I need to remember when
exactly they happen in an HA setup. Maybe you're right.

 With the HA active/active configuration, clients always consume from the
 master node, so we can have a scenario where the queue is declared on a slave
 node but the clients are consuming from the master node.
 In this situation, if the master node goes down, the client doesn't get any
 error/exception from the connection and the consumer is abruptly
 disconnected, so the client needs to re-consume the queue (register a new
 consumer) to be able to get messages from it.
 Consumer cancellation notifications are a way for a client to be notified
 (via a basic.cancel message) when its consumer has been
 disconnected.

 With the HA configuration we should take care of two things:
 1. the connection dying: this is already addressed by the existing code
 2. a basic.cancel notification received by the clients: in my opinion this is
 not yet addressed

As for duplicated messages - the situation here is no different
whether you have 1 or many rabbits; reconnection logic was already in
place. Perhaps something should be done here, but it seems that the
lack of this logic didn't hurt anyone so far. Maybe it is because IIRC
messages are ack'd immediately and the failure window is very small.

 Yes, you are absolutely right, but with the HA configuration retransmissions
 are more likely.
 If I have a single rabbit and a non-durable queue, all messages (including
 messages not yet ack'ed) in the queues are lost if the node goes down; that
 means that when the server restarts there are no messages left and so no
 retransmissions.
 With multiple rabbits, messages not yet ack'ed are mirrored in the other
 queues, and in the event of a failure of the master those messages will be
 retransmitted.


 Is there some plan to have a blueprint for this change?
I don't have such a plan. Should I?

 I need to deeply investigate and test the HA active/active configuration,
 but if I can recreate some complex configuration to prove that we need to
 deal with the basic.cancel messages, maybe it's worth having one.

 Regards
 --
 Andrea Rosa




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] High Available queues in rabbitmq

2012-07-26 Thread Eugene Kirpichov
Hi Andrea,

On Thu, Jul 26, 2012 at 4:10 AM, Rosa, Andrea (HP Cloud Services)
andrea.r...@hp.com wrote:
 Hi Eugene,

 Thanks for the patch.
 I have a question:
 it seems to me that this patch is a (good) starting point for a broader change
 to be able to use HA in an active/active configuration with RMQ.
 As far as I know, with that configuration we need to add some extra logic for
 consumers to deal with consumer cancellation notifications and duplicated
 messages due to (potential) re-sends after a failover.

 Is that correct?
Well, we ran fine in production without this extra logic, but perhaps
we just didn't hit a situation where it was required.

As for consumer cancellation notifications - I need to remember when
exactly they happen in an HA setup. Maybe you're right.

As for duplicated messages - the situation here is no different
whether you have 1 or many rabbits; reconnection logic was already in
place. Perhaps something should be done here, but it seems that the
lack of this logic hasn't hurt anyone so far. Maybe that is because, IIRC,
messages are ack'd immediately and the failure window is very small.

 Is there some plan to have a blueprint for this change?
I don't have such a plan. Should I?


 Regards
 --
 Andrea Rosa


-Original Message-
From: Eugene Kirpichov [mailto:ekirpic...@gmail.com]
Sent: 26 July 2012 00:46
To: Alessandro Tagliapietra; rbry...@redhat.com
Cc: Rosa, Andrea (HP Cloud Services); OpenStack Development Mailing
List; openstack@lists.launchpad.net
Subject: Re: [openstack-dev] [Openstack] High Available queues in
rabbitmq

Gentlemen,

Here is my patch: https://review.openstack.org/#/c/10305/
It also depends on another small patch
https://review.openstack.org/#/c/10197

I'd like to ask someone to review it.
Also, how do I get these changes into nova? It seems that nova has a
copy-paste of openstack-common inside it; should I just mirror the
changes to nova once they're accepted in openstack-common?

I'm cc'ing Russell Bryant because he originally created the
openstack-common module.

On Wed, Jul 25, 2012 at 3:03 AM, Alessandro Tagliapietra
tagliapietra.alessan...@gmail.com wrote:
 Yup, using it as a resource is the old way, as per
 http://www.rabbitmq.com/ha.html
 Active/active makes sure that you have no downtime, and it's simple, as you
 don't need to use DRBD.

 2012/7/25 Rosa, Andrea (HP Cloud Services) andrea.r...@hp.com

 Sorry for my question; I have just seen from the original thread that
 we are talking about HA with an Active/Active solution.
 --
 Andrea Rosa

 -Original Message-
 From: Rosa, Andrea (HP Cloud Services)
 Sent: 25 July 2012 10:45
 To: Eugene Kirpichov
 Cc: openstack-...@lists.openstack.org; Alessandro Tagliapietra;
 openstack@lists.launchpad.net
 Subject: Re: [openstack-dev] [Openstack] High Available queues in
 rabbitmq
 
 Hi
 
 Your patch doesn't use a Resource manager, so are you working on an
 Active/Active
 configuration using mirrored queues? Or are you working on a cluster
 configuration?
 
 I am really interested in that change, thanks for your help.
 Regards
 --
 Andrea Rosa
 
 -Original Message-
 From: openstack-bounces+andrea.rosa=hp@lists.launchpad.net
 [mailto:openstack-bounces+andrea.rosa=hp@lists.launchpad.net]
On
 Behalf Of Alessandro Tagliapietra
 Sent: 24 July 2012 17:58
 To: Eugene Kirpichov
 Cc: openstack-...@lists.openstack.org;
openstack@lists.launchpad.net
 Subject: Re: [Openstack] High Available queues in rabbitmq
 
 Oh, so without the need to put an IP floating between hosts.
 Good job, thanks for helping
 
 Best
 
 Alessandro
 
 On 24 Jul 2012, at 17:49, Eugene Kirpichov wrote:
 
  Hi Alessandro,
 
  My patch is about removing the need for pacemaker (and it's pacemaker
  that I denoted with the term TCP load balancer).
 
  I didn't submit the patch yesterday because I underestimated the
  effort to write unit tests for it and found a few issues on the way. I
  hope I'll finish today.
 
  On Tue, Jul 24, 2012 at 12:00 AM, Alessandro Tagliapietra
  tagliapietra.alessan...@gmail.com wrote:
  Sorry for the delay, I was away from work.
  Awesome work Eugene, I don't need the patch instantly as I'm still
  building the infrastructure.
  Will it take a lot of time to get into the Ubuntu repositories?
 
  Why did you say you need load balancing? You can use only the master
  node and, in case the rabbitmq-server dies, switch the IP to the new
  master with pacemaker; that's how I would do it.
 
  Best Regards
 
  Alessadro
 
 
  On 23 Jul 2012, at 21:49, Eugene Kirpichov wrote:
 
  +openstack-dev@
 
  To openstack-dev: this is a discussion of an upcoming patch about
  native RabbitMQ H/A support in nova. I'll post the patch for
  codereview today.
 
  On Mon, Jul 23, 2012 at 12:46 PM, Eugene Kirpichov
 ekirpic...@gmail.com wrote:
  Yup, that's basically the same thing that Jay suggested :) Obvious in
  retrospect...
 
  On Mon, Jul 23, 2012 at 12:42 PM, Oleg Gelbukh
 ogelb...@mirantis.com wrote

Re: [Openstack] [openstack-dev] Announcing proof-of-concept Load Balancing as a Service project

2012-07-25 Thread Eugene Kirpichov
Hi Dan,

On Tue, Jul 24, 2012 at 8:30 PM, Dan Wendlandt d...@nicira.com wrote:
 Hi Eugene, Angus,

 Adding openstack-dev (probably the more appropriate mailing list for
 discussing a new openstack feature) and some folks from Radware and F5 who
 had previously also contacted me about Quantum + Load-balancing as a
 service.  I'm probably leaving out some other people who have contacted me
 about this as well, but hopefully they are on the ML and can speak up.

 On Tue, Jul 24, 2012 at 7:51 PM, Angus Salkeld asalk...@redhat.com wrote:

 On 24/07/12 18:33 -0700, Eugene Kirpichov wrote:

 Hello community,

 We at Mirantis have had a number of clients request functionality to
 control various load balancer devices (software and hardware) via an
 OpenStack API and horizon. So, in collaboration with Cisco OpenStack
 team and a number of other community members, we’ve started
 socializing the blueprints for an elastic load balancer API service.
 At this point we’d like to share where we are and would very much
 appreciate anyone participating and providing input.


 Yes, I definitely think LB is one of the key items that we'll want to tackle
 during Grizzly in terms of L4-L7 services.
Great to hear!




 The current vision is to allow cloud tenants to request and
 provision virtual load balancers on demand and allow cloud
 administrators to manage a pool of available LB devices. Access is
 provided under a unified interface to different kinds of load
 balancers, both software and hardware. It means that API for tenants
 is abstracted away from the actual API of underlying hardware or
 software load balancers, and LBaaS effectively bridges this gap.


 That's the openstack way, no arguments there :)



 POC level support for Cisco ACE and HAproxy is currently implemented
 in the form of plug-ins to LBaaS called “drivers”. We also started some
 work on F5 drivers. Would appreciate hearing input on what other
 drivers may be important at this point…nginx?


 haproxy is the most common non-vendor solution I hear mentioned.



 Another question we have is if this should be a standalone module or a
 Quantum plugin…


 Based on discussions during the PPB meeting about quantum becoming core,
 there was a push for having a single network service and API, which would
 tend to suggest it being a sub-component of Quantum that is independently
 loadable.  I also tend to think that it's likely to be a common set of
 developers working across all such networking functionality, so keeping
 different core-dev teams, repos, tarballs, docs, etc. probably doesn't
 make sense.  I think this is generally in line with the plan
 of allowing Quantum to load additional portions of the API as needed for
 additional services like LB, WAN-bridging, but this is probably a call for
 the PPB in general.
So, if I'm understanding correctly, you're suggesting LBaaS to be
usable in 2 ways:
* Independently
* As a quantum plugin

Is this right?




 In order not to reinvent the wheel, we decided to base our API on
 Atlas-LB (http://wiki.openstack.org/Atlas-LB).


 Seems like a good place to start.



 Here are all the pointers:
 * Project overview: http://goo.gl/vZdei


 * Screencast: http://www.youtube.com/watch?v=NgAL-kfdbtE
 * API draft: http://goo.gl/gFcWT
 * Roadmap: http://goo.gl/EZAhf
 * Github repo: https://github.com/Mirantis/openstack-lbaas


 Will take a look.. I'm getting a permission error on the overview.





 The code is written in Python and based on the OpenStack service
 template. We’ll be happy to give a walkthrough over what we have to
 anyone who may be interested in contributing (for example, creating a
 driver to support a particular LB device).


 I made a really simple loadbalancer (using HAproxy) in Heat
 (https://github.com/heat-api/heat/blob/master/heat/engine/loadbalancer.py)
 to implement the AWS::ElasticLoadBalancing::LoadBalancer but
 it would be nice to use a more complete loadbalancer solution.
 When I get a moment I'll see if I can integrate. One issue is
 I need latency statistics to trigger autoscaling events.
 See the statistics types here:

 http://docs.amazonwebservices.com/ElasticLoadBalancing/latest/DeveloperGuide/US_MonitoringLoadBalancerWithCW.html

 Anyways, nice project.
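
(On the latency statistics point: a rough illustration of pulling raw
per-proxy counters from haproxy's admin socket; this assumes a
"stats socket /var/run/haproxy.sock" line in haproxy.cfg, and which latency
columns are present depends on the haproxy version.)

    # Rough illustration only: read haproxy's CSV stats from its admin
    # socket (assumes 'stats socket /var/run/haproxy.sock' is configured).
    import csv
    import socket

    def haproxy_stats(sock_path='/var/run/haproxy.sock'):
        """Return 'show stat' output as a list of dicts, one per
        frontend/backend/server row."""
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(sock_path)
        sock.sendall('show stat\n')
        data = ''
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
        sock.close()
        lines = [line for line in data.splitlines() if line.strip()]
        header = lines[0].lstrip('# ').split(',')  # '# pxname,svname,...'
        return [dict(zip(header, row)) for row in csv.reader(lines[1:])]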


 Integration with Heat would be great regardless of the above decisions.
Yes, sounds like a good idea indeed.
Is Heat mature enough and used enough to warrant doing this in the
near future, or is this better postponed until G at least? Angus?


 dan





 Regards
 Angus Salkeld



 All of the documents and code are not set in stone and we’re writing
 here specifically to ask for feedback and collaboration from the
 community.

 We would like to start holding weekly IRC meetings at
 #openstack-meeting; we propose 19:00 UTC on Thursdays (this time seems
 free according to http://wiki.openstack.org/Meetings/ ), starting Aug 2.

 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

Re: [Openstack] [ANN] Kombu acquires support for AMQP heartbeats and cancel notifications

2012-07-25 Thread Eugene Kirpichov
Hi Ask,

On Wed, Jul 25, 2012 at 1:01 PM, Ask Solem a...@celeryproject.org wrote:
 Dear list,

 I believe this is of interest to the Openstack people.

 I've just started maintaining a fork of the amqplib library
 that differs by using AMQP 0.9.1 instead of 0.8, and by supporting
 heartbeats and the RabbitMQ extensions (consumer cancel notifications,
 publisher confirms and more).

 Heartbeats are used to detect if a connection has been closed (on either end),
 and are sometimes required in environments where network intermediaries
 complicate connection loss detection (like certain firewalls).

 Consumer cancel notifications are important for HA, as they are the only
 way RabbitMQ can tell a consumer that the queue it was consuming from went away.

 As you may or may not know, the amqp:// transport in Kombu now
 uses two different underlying libraries:

  1) librabbitmq if installed

   Python extension written in C (using the rabbitmq-c library)
   http://github.com/celery/librabbitmq

  2) amqplib as a fallback.

 The new client (http://pypi.python.org/pypi/amqp) will be the default fallback
 in 3.0 after librabbitmq has also been updated to support the new features.

 So for now if you require these features you have to manually switch to the
 new client; luckily that's as easy as installing kombu 2.3 or later,
 + the 'amqp' library:

  $ pip install 'kombu>=2.3' amqp

 then setting the broker URL to use 'pyamqp://' (or setting the broker 
 transport
 to be 'pyamqp'):

  from kombu import Connection
  conn = Connection('pyamqp://guest:guest@localhost://')

 There's more information in the Kombu 2.3 changelog:
 http://kombu.readthedocs.org/en/latest/changelog.html


 NOTE TO IMPLEMENTORS:

 - Consumer cancel notifications

 Requires no changes to your code,
 all you need is to properly reconnect when one of the
 errors in Connection.channel_errors occurs, which is handled
 automatically by Connection.ensure / Connection.autoretry (I don't believe
 Nova uses that, but it probably should).

Can you elaborate here? I'm working on a RabbitMQ H/A support patch for
nova (almost done - just hit some unexpected difficulties with testing
it). Even though I didn't explicitly account for this, we also didn't
encounter any errors related to it in production.

Could you perhaps describe a way to test this?

Are these cancel notifications supported in any way (testable?) by the
Kombu memory transport?
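
In the meantime, here is roughly what I understand using Connection.ensure
for this to look like; a minimal sketch with made-up exchange/routing names,
not Nova code:

    # Minimal sketch, illustrative names only: wrap publish in
    # Connection.ensure so that channel-level errors (including the one
    # raised on a consumer cancel notification) lead to a revive and retry.
    from kombu import Connection, Exchange, Producer

    exchange = Exchange('demo', type='direct')

    with Connection('pyamqp://guest:guest@localhost//') as conn:
        producer = Producer(conn.default_channel, exchange=exchange,
                            routing_key='demo')

        def errback(exc, interval):
            print('Broker error: %r, retrying in %ss' % (exc, interval))

        # ensure() re-establishes the channel/connection and retries when
        # one of conn.connection_errors / conn.channel_errors is raised.
        safe_publish = conn.ensure(producer, producer.publish,
                                   errback=errback, max_retries=3)
        safe_publish('hello')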


 - Heartbeats

 For heartbeats to be used you need to periodically call the
 Connection.heartbeat_check method at regular intervals (the
 suggested rate is twice the configured heartbeat rate).

 This shouldn't be any problem for Nova since it's using
 Eventlet. Special care must be taken so that the heartbeat value
 is not specified so low that blocking calls can defer heartbeats
 being sent out.

 An example of enabling heartbeats with eventlet could be:

 import weakref
 from kombu import Connection
 from eventlet import spawn_after

 def monitor_heartbeats(connection, rate=2):
     if not connection.heartbeat:
         return
     interval = connection.heartbeat / 2.0
     cref = weakref.ref(connection)

     def heartbeat_check():
         conn = cref()
         if conn is not None and conn.connected:
             conn.heartbeat_check(rate=rate)
             spawn_after(interval, heartbeat_check)

     return spawn_after(interval, heartbeat_check)

 connection = Connection('pyamqp://', heartbeat=10)

 or:

 connection = Connection('pyamqp://?heartbeat=10')


 If you have any questions about implementing this,
 or about Kombu in general then please don't hesitate to contact
 me on this list, on twitter @asksol, or on IRC (also asksol)


 Regards,

 --
 Ask Solem
 twitter.com/asksol | +44 (0)7713357179



 --
 Ask Solem
 twitter.com/asksol | +44 (0)7713357179


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] High Available queues in rabbitmq

2012-07-25 Thread Eugene Kirpichov
Gentlemen,

Here is my patch: https://review.openstack.org/#/c/10305/
It also depends on another small patch https://review.openstack.org/#/c/10197

I'd like to ask someone to review it.
Also, how do I get these changes into nova? It seems that nova has a
copy-paste of openstack-common inside it; should I just mirror the
changes to nova once they're accepted in openstack-common?

I'm cc'ing Russell Bryant because he originally created the
openstack-common module.

On Wed, Jul 25, 2012 at 3:03 AM, Alessandro Tagliapietra
tagliapietra.alessan...@gmail.com wrote:
 Yup, using it as a resource (under a resource manager) is the old way; see http://www.rabbitmq.com/ha.html
 Active/active makes sure that you have no downtime, and it's simpler as you
 don't need to use DRBD.

 2012/7/25 Rosa, Andrea (HP Cloud Services) andrea.r...@hp.com

 Sorry for my question, I have just seen from the original thread that
 we are talking about HA with an Active/Active solution.
 --
 Andrea Rosa

 -Original Message-
 From: Rosa, Andrea (HP Cloud Services)
 Sent: 25 July 2012 10:45
 To: Eugene Kirpichov
 Cc: openstack-...@lists.openstack.org; Alessandro Tagliapietra;
 openstack@lists.launchpad.net
 Subject: Re: [openstack-dev] [Openstack] High Available queues in
 rabbitmq
 
 Hi
 
 Your patch doesn't use a Resource manager, so are you working on an
 Active/Active
 configuration using mirrored queues? Or are you working on a cluster
 configuration?
 
 I am really interested in that change, thanks for your help.
 Regards
 --
 Andrea Rosa
 
 -Original Message-
 From: openstack-bounces+andrea.rosa=hp@lists.launchpad.net
 [mailto:openstack-bounces+andrea.rosa=hp@lists.launchpad.net] On
 Behalf Of Alessandro Tagliapietra
 Sent: 24 July 2012 17:58
 To: Eugene Kirpichov
 Cc: openstack-...@lists.openstack.org; openstack@lists.launchpad.net
 Subject: Re: [Openstack] High Available queues in rabbitmq
 
 Oh, so without the need for a floating IP between hosts.
 Good job, thanks for helping
 
 Best
 
 Alessandro
 
 Il giorno 24/lug/2012, alle ore 17:49, Eugene Kirpichov ha scritto:
 
  Hi Alessandro,
 
  My patch is about removing the need for pacemaker (and it's pacemaker
  that I denoted with the term TCP load balancer).
 
  I didn't submit the patch yesterday because I underestimated the
  effort to write unit tests for it and found a few issues on the way.
 I
  hope I'll finish today.
 
  On Tue, Jul 24, 2012 at 12:00 AM, Alessandro Tagliapietra
  tagliapietra.alessan...@gmail.com wrote:
  Sorry for the delay, I was away from work.
  Awesome work Eugene, I don't need the patch instantly as I'm still
 building the infrastructure.
  Will it take a lot of time to land in the Ubuntu repositories?
 
  Why did you say you need load balancing? You can use only the master
 node and, in case the rabbitmq-server dies, switch the IP to the new
 master with pacemaker; that's how I would do it.
 
  Best Regards
 
  Alessandro
 
 
  Il giorno 23/lug/2012, alle ore 21:49, Eugene Kirpichov ha scritto:
 
  +openstack-dev@
 
  To openstack-dev: this is a discussion of an upcoming patch about
  native RabbitMQ H/A support in nova. I'll post the patch for
  codereview today.
 
  On Mon, Jul 23, 2012 at 12:46 PM, Eugene Kirpichov
 ekirpic...@gmail.com wrote:
  Yup, that's basically the same thing that Jay suggested :) Obvious
 in
  retrospect...
 
  On Mon, Jul 23, 2012 at 12:42 PM, Oleg Gelbukh
 ogelb...@mirantis.com wrote:
  Eugene,
 
  I suggest just add option 'rabbit_servers' that will override
  'rabbit_host'/'rabbit_port' pair, if present. This won't break
 anything, in
  my understanding.
 
  --
  Best regards,
  Oleg Gelbukh
  Mirantis, Inc.
 
 
  On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov
 ekirpic...@gmail.com
  wrote:
 
  Hi,
 
  I'm working on a RabbitMQ H/A patch right now.
 
  It actually involves more than just using H/A queues (unless
 you're
  willing to add a TCP load balancer on top of your RMQ cluster).
  You also need to add support for multiple RabbitMQ's directly to
 nova.
  This is not hard at all, and I have the patch ready and tested
 in
  production.
 
  Alessandro, if you need this urgently, I can send you the patch
 right
  now before the discussion codereview for inclusion in core nova.
 
  The only problem is, it breaks backward compatibility a bit: my
 patch
  assumes you have a flag rabbit_addresses which should look
 like
  rmq-host1:5672,rmq-host2:5672 instead of the prior rabbit_host
 and
  rabbit_port flags.
 
  Guys, can you advise on a way to do this without being ugly and
  without breaking compatibility?
  Maybe have rabbit_host, rabbit_port be ListOpt's? But that
 sounds
  weird, as their names are in singular.
  Maybe have rabbit_host, rabbit_port and also rabbit_host2,
  rabbit_port2 (assuming we only have clusters of 2 nodes)?
  Something else?
 
  On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes jaypi...@gmail.com
 wrote:
  On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
  Hi guys,
 
  just an idea, i'm deploying Openstack trying

Re: [Openstack] [openstack-dev] High Available queues in rabbitmq

2012-07-25 Thread Eugene Kirpichov
Thanks!

On Wed, Jul 25, 2012 at 5:55 PM, Russell Bryant rbry...@redhat.com wrote:
 On 07/25/2012 07:45 PM, Eugene Kirpichov wrote:
 Gentlemen,

 Here is my patch: https://review.openstack.org/#/c/10305/
 It also depends on another small patch https://review.openstack.org/#/c/10197

 I'd like to ask someone to review it.
 Also, how do I get these changes into nova? It seems that nova has a
 copy-paste of openstack-common inside it; should I just mirror the
 changes to nova once they're accepted in openstack-common?

 I'm cc'ing Russell Bryant because he originally created the
 openstack-common module.

 First you get the change accepted into openstack-common.  Then, you use
 the update.py script in openstack-common to update the copy in nova and
 submit that as a patch to nova.

 More info here:

 http://wiki.openstack.org/CommonLibrary

 --
 Russell Bryant





-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] High Available queues in rabbitmq

2012-07-24 Thread Eugene Kirpichov
Hi Alessandro,

My patch is about removing the need for pacemaker (and it's pacemaker
that I denoted with the term TCP load balancer).

I didn't submit the patch yesterday because I underestimated the
effort to write unit tests for it and found a few issues on the way. I
hope I'll finish today.

On Tue, Jul 24, 2012 at 12:00 AM, Alessandro Tagliapietra
tagliapietra.alessan...@gmail.com wrote:
 Sorry for the delay, I was away from work.
 Awesome work Eugene, I don't need the patch instantly as I'm still building
 the infrastructure.
 Will it take a lot of time to land in the Ubuntu repositories?

 Why did you say you need load balancing? You can use only the master node and,
 in case the rabbitmq-server dies, switch the IP to the new master with
 pacemaker; that's how I would do it.

 Best Regards

 Alessandro


 Il giorno 23/lug/2012, alle ore 21:49, Eugene Kirpichov ha scritto:

 +openstack-dev@

 To openstack-dev: this is a discussion of an upcoming patch about
 native RabbitMQ H/A support in nova. I'll post the patch for
 codereview today.

 On Mon, Jul 23, 2012 at 12:46 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 Yup, that's basically the same thing that Jay suggested :) Obvious in
 retrospect...

 On Mon, Jul 23, 2012 at 12:42 PM, Oleg Gelbukh ogelb...@mirantis.com 
 wrote:
 Eugene,

 I suggest just add option 'rabbit_servers' that will override
 'rabbit_host'/'rabbit_port' pair, if present. This won't break anything, in
 my understanding.

 --
 Best regards,
 Oleg Gelbukh
 Mirantis, Inc.


 On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:

 Hi,

 I'm working on a RabbitMQ H/A patch right now.

 It actually involves more than just using H/A queues (unless you're
 willing to add a TCP load balancer on top of your RMQ cluster).
 You also need to add support for multiple RabbitMQ's directly to nova.
 This is not hard at all, and I have the patch ready and tested in
 production.

 Alessandro, if you need this urgently, I can send you the patch right
 now before the discussion codereview for inclusion in core nova.

 The only problem is, it breaks backward compatibility a bit: my patch
 assumes you have a flag rabbit_addresses which should look like
 rmq-host1:5672,rmq-host2:5672 instead of the prior rabbit_host and
 rabbit_port flags.

 Guys, can you advise on a way to do this without being ugly and
 without breaking compatibility?
 Maybe have rabbit_host, rabbit_port be ListOpt's? But that sounds
 weird, as their names are in singular.
 Maybe have rabbit_host, rabbit_port and also rabbit_host2,
 rabbit_port2 (assuming we only have clusters of 2 nodes)?
 Something else?

 On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
 Hi guys,

 just an idea, i'm deploying Openstack trying to make it HA.
 The missing thing is rabbitmq, which can be easily started in
 active/active mode, but it needs to declare the queues adding an
 x-ha-policy entry.
 http://www.rabbitmq.com/ha.html
 It would be nice to add a config entry to be able to declare the queues
 in that way.
 If someone know where to edit the openstack code, else i'll try to do
 that in the next weeks maybe.


 https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py

 You'll need to add the config options there and the queue is declared
 here with the options supplied to the ConsumerBase constructor:


 https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114

 Best,
 -jay

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Announcing proof-of-concept Load Balancing as a Service project

2012-07-24 Thread Eugene Kirpichov
Hello community,

We at Mirantis have had a number of clients request functionality to
control various load balancer devices (software and hardware) via an
OpenStack API and horizon. So, in collaboration with Cisco OpenStack
team and a number of other community members, we’ve started
socializing the blueprints for an elastic load balancer API service.
At this point we’d like to share where we are and would very much
appreciate anyone participating and providing input.

The current vision is to allow cloud tenants to request and
provision virtual load balancers on demand and allow cloud
administrators to manage a pool of available LB devices. Access is
provided under a unified interface to different kinds of load
balancers, both software and hardware. It means that API for tenants
is abstracted away from the actual API of underlying hardware or
software load balancers, and LBaaS effectively bridges this gap.

POC level support for Cisco ACE and HAproxy is currently implemented
in the form of plug-ins to LBaaS called “drivers”. We also started some
work on F5 drivers. Would appreciate hearing input on what other
drivers may be important at this point…nginx?
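
To make the driver idea a bit more concrete, below is a purely illustrative
sketch of what a driver interface could look like; the method names are
invented for this example, and the real interface lives in the Github repo
linked further down:

    # Purely illustrative sketch of an LBaaS "driver" interface; method
    # names are invented for this example and are not the project's API.
    import abc

    class LoadBalancerDriver(object):
        """Adapter between the tenant-facing LBaaS API and one device type
        (e.g. HAproxy, Cisco ACE, F5, nginx)."""

        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def create_balancer(self, balancer):
            """Allocate a virtual load balancer on the device."""

        @abc.abstractmethod
        def add_node(self, balancer, address, port, weight=1):
            """Add a backend node to an existing balancer."""

        @abc.abstractmethod
        def delete_balancer(self, balancer):
            """Release all device resources held by the balancer."""

    class HAProxyDriver(LoadBalancerDriver):
        """Hypothetical software driver: render a config and reload haproxy."""

        def create_balancer(self, balancer):
            pass  # write frontend/backend sections to haproxy.cfg and reload

        def add_node(self, balancer, address, port, weight=1):
            pass  # append a 'server' line to the backend section and reload

        def delete_balancer(self, balancer):
            pass  # drop the balancer's sections from haproxy.cfg and reload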

Another question we have is if this should be a standalone module or a
Quantum plugin… Dan – any feedback on this (and BTW congrats on the
acquisition =).

In order not to reinvent the wheel, we decided to base our API on
Atlas-LB (http://wiki.openstack.org/Atlas-LB).

Here are all the pointers:
 * Project overview: http://goo.gl/vZdei
 * Screencast: http://www.youtube.com/watch?v=NgAL-kfdbtE
 * API draft: http://goo.gl/gFcWT
 * Roadmap: http://goo.gl/EZAhf
 * Github repo: https://github.com/Mirantis/openstack-lbaas

The code is written in Python and based on the OpenStack service
template. We’ll be happy to give a walkthrough over what we have to
anyone who may be interested in contributing (for example, creating a
driver to support a particular LB device).

All of the documents and code are not set in stone and we’re writing
here specifically to ask for feedback and collaboration from the
community.

We would like to start holding weekly IRC meetings at
#openstack-meeting; we propose 19:00 UTC on Thursdays (this time seems
free according to http://wiki.openstack.org/Meetings/ ), starting Aug 2.

--
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Announcing proof-of-concept Load Balancing as a Service project

2012-07-24 Thread Eugene Kirpichov
Hi Dan,

Thanks for the feedback. I will answer in detail tomorrow; for now
just providing a working link to the project overview:

http://goo.gl/LrRik

On Tue, Jul 24, 2012 at 8:30 PM, Dan Wendlandt d...@nicira.com wrote:
 Hi Eugene, Angus,

 Adding openstack-dev (probably the more appropriate mailing list for
 discussing a new openstack feature) and some folks from Radware and F5 who
 had previously also contacted me about Quantum + Load-balancing as a
 service.  I'm probably leaving out some other people who have contacted me
 about this as well, but hopefully they are on the ML and can speak up.

 On Tue, Jul 24, 2012 at 7:51 PM, Angus Salkeld asalk...@redhat.com wrote:

 On 24/07/12 18:33 -0700, Eugene Kirpichov wrote:

 Hello community,

 We at Mirantis have had a number of clients request functionality to
 control various load balancer devices (software and hardware) via an
 OpenStack API and horizon. So, in collaboration with Cisco OpenStack
 team and a number of other community members, we’ve started
 socializing the blueprints for an elastic load balancer API service.
 At this point we’d like to share where we are and would very much
 appreciate anyone participating and providing input.


 Yes, I definitely think LB is one of the key items that we'll want to tackle
 during Grizzly in terms of L4-L7 services.



 The current vision is to allow cloud tenants to request and
 provision virtual load balancers on demand and allow cloud
 administrators to manage a pool of available LB devices. Access is
 provided under a unified interface to different kinds of load
 balancers, both software and hardware. It means that API for tenants
 is abstracted away from the actual API of underlying hardware or
 software load balancers, and LBaaS effectively bridges this gap.


 That's the openstack way, no arguments there :)



 POC level support for Cisco ACE and HAproxy is currently implemented
 in the form of plug-ins to LBaaS called “drivers”. We also started some
 work on F5 drivers. Would appreciate hearing input on what other
 drivers may be important at this point…nginx?


 haproxy is the most common non-vendor solution I hear mentioned.



 Another question we have is if this should be a standalone module or a
 Quantum plugin…


 Based on discussions during the PPB meeting about quantum becoming core,
 there was a push for having a single network service and API, which would
 tend to suggest it being a sub-component of Quantum that is independently
 loadable.  I also tend to think that it's likely to be a common set of
 developers working across all such networking functionality, so keeping
 different core-dev teams, repos, tarballs, docs, etc. probably doesn't
 make sense.  I think this is generally in line with the plan
 of allowing Quantum to load additional portions of the API as needed for
 additional services like LB, WAN-bridging, but this is probably a call for
 the PPB in general.



 In order not to reinvent the wheel, we decided to base our API on
 Atlas-LB (http://wiki.openstack.org/Atlas-LB).


 Seems like a good place to start.



 Here are all the pointers:
 * Project overview: http://goo.gl/vZdei


 * Screencast: http://www.youtube.com/watch?v=NgAL-kfdbtE
 * API draft: http://goo.gl/gFcWT
 * Roadmap: http://goo.gl/EZAhf
 * Github repo: https://github.com/Mirantis/openstack-lbaas


 Will take a look.. I'm getting a permission error on the overview.





 The code is written in Python and based on the OpenStack service
 template. We’ll be happy to give a walkthrough over what we have to
 anyone who may be interested in contributing (for example, creating a
 driver to support a particular LB device).


 I made a really simple loadbalancer (using HAproxy) in Heat
 (https://github.com/heat-api/heat/blob/master/heat/engine/loadbalancer.py)
 to implement the AWS::ElasticLoadBalancing::LoadBalancer but
 it would be nice to use a more complete loadbalancer solution.
 When I get a moment I'll see if I can integrate. One issue is
 I need latency statistics to trigger autoscaling events.
 See the statistics types here:

 http://docs.amazonwebservices.com/ElasticLoadBalancing/latest/DeveloperGuide/US_MonitoringLoadBalancerWithCW.html

 Anyways, nice project.


 Integration with Heat would be great regardless of the above decisions.

 dan





 Regards
 Angus Salkeld



 All of the documents and code are not set in stone and we’re writing
 here specifically to ask for feedback and collaboration from the
 community.

 We would like to start holding weekly IRC meetings at
 #openstack-meeting; we propose 19:00 UTC on Thursdays (this time seems
 free according to http://wiki.openstack.org/Meetings/ ), starting Aug 2.

 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https

Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eugene Kirpichov
Hi,

I'm working on a RabbitMQ H/A patch right now.

It actually involves more than just using H/A queues (unless you're
willing to add a TCP load balancer on top of your RMQ cluster).
You also need to add support for multiple RabbitMQ's directly to nova.
This is not hard at all, and I have the patch ready and tested in
production.

Alessandro, if you need this urgently, I can send you the patch right
now, before the discussion and codereview for inclusion in core nova.

The only problem is, it breaks backward compatibility a bit: my patch
assumes you have a flag rabbit_addresses which should look like
rmq-host1:5672,rmq-host2:5672 instead of the prior rabbit_host and
rabbit_port flags.

Guys, can you advise on a way to do this without being ugly and
without breaking compatibility?
Maybe have rabbit_host, rabbit_port be ListOpt's? But that sounds
weird, as their names are in singular.
Maybe have rabbit_host, rabbit_port and also rabbit_host2,
rabbit_port2 (assuming we only have clusters of 2 nodes)?
Something else?

On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
 Hi guys,

 just an idea, I'm deploying Openstack and trying to make it HA.
 The missing thing is rabbitmq, which can be easily started in
 active/active mode, but it needs to declare the queues adding an
 x-ha-policy entry.
 http://www.rabbitmq.com/ha.html
 It would be nice to add a config entry to be able to declare the queues
 in that way.
 If someone knows where to edit the openstack code, please let me know;
 otherwise I'll try to do that in the next weeks maybe.

 https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py

 You'll need to add the config options there and the queue is declared
 here with the options supplied to the ConsumerBase constructor:

 https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114

 Best,
 -jay

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
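
For completeness, here is a minimal kombu sketch of what declaring a
mirrored queue looks like; the queue/exchange names below are made up, and
the real change of course has to go into impl_kombu.py as Jay points out
above:

    # Minimal sketch with made-up names: declare a queue with the
    # x-ha-policy argument so a RabbitMQ cluster mirrors it on all nodes.
    from kombu import Connection, Exchange, Queue

    exchange = Exchange('demo_topic', type='topic', durable=False)
    queue = Queue('demo.compute', exchange, routing_key='demo.compute',
                  durable=False,
                  queue_arguments={'x-ha-policy': 'all'})

    with Connection('amqp://guest:guest@localhost//') as conn:
        bound = queue(conn.default_channel)  # bind to a real channel
        bound.declare()  # declares the exchange, the queue and the binding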



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eugene Kirpichov
Hi Jay,

Great idea. Thanks. I'll amend and test my patch, and then upload it
to codereview.
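
For the record, a rough sketch of what I have in mind; the option name and
the parsing below are illustrative and may well change in the actual patch
(cfg here is the openstack-common config module):

    # Rough sketch only; option name and parsing are illustrative.
    from openstack.common import cfg

    rabbit_ha_opts = [
        cfg.ListOpt('rabbit_ha_servers',
                    default=[],
                    help='Comma-separated list of RabbitMQ host:port pairs, '
                         'e.g. rmq-host1:5672,rmq-host2:5672; overrides '
                         'rabbit_host/rabbit_port when set'),
    ]

    def rabbit_servers(conf):
        """Return the (host, port) pairs kombu should try, in order."""
        if conf.rabbit_ha_servers:
            pairs = []
            for addr in conf.rabbit_ha_servers:
                host, _, port = addr.partition(':')
                pairs.append((host, int(port) if port else conf.rabbit_port))
            return pairs
        return [(conf.rabbit_host, conf.rabbit_port)]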

On Mon, Jul 23, 2012 at 12:18 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 07/23/2012 02:58 PM, Eugene Kirpichov wrote:
 The only problem is, it breaks backward compatibility a bit: my patch
 assumes you have a flag rabbit_addresses which should look like
 rmq-host1:5672,rmq-host2:5672 instead of the prior rabbit_host and
 rabbit_port flags.

 Guys, can you advise on a way to do this without being ugly and
 without breaking compatibility?
 Maybe have rabbit_host, rabbit_port be ListOpt's? But that sounds
 weird, as their names are in singular.
 Maybe have rabbit_host, rabbit_port and also rabbit_host2,
 rabbit_port2 (assuming we only have clusters of 2 nodes)?
 Something else?

 I think that the standard (in Nova at least) is to go with a single
 ListOpt flag that is a comma-delimited list of the URIs. We do that for
 Glance APi servers, for example, in the glance_api_servers flag:

 https://github.com/openstack/nova/blob/master/nova/flags.py#L138

 So, perhaps you can add a rabbit_ha_servers ListOpt flag that, when
 filled, would be used instead of rabbit_host and rabbit_port. That way
 you won't break backwards compat?

 Best,
 -jay

 On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
 Hi guys,

 just an idea, i'm deploying Openstack trying to make it HA.
 The missing thing is rabbitmq, which can be easily started in
 active/active mode, but it needs to declare the queues adding an
 x-ha-policy entry.
 http://www.rabbitmq.com/ha.html
 It would be nice to add a config entry to be able to declare the queues
 in that way.
 If someone know where to edit the openstack code, else i'll try to do
 that in the next weeks maybe.

 https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py

 You'll need to add the config options there and the queue is declared
 here with the options supplied to the ConsumerBase constructor:

 https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114

 Best,
 -jay

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp






-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eugene Kirpichov
Yup, that's basically the same thing that Jay suggested :) Obvious in
retrospect...

On Mon, Jul 23, 2012 at 12:42 PM, Oleg Gelbukh ogelb...@mirantis.com wrote:
 Eugene,

 I suggest just add option 'rabbit_servers' that will override
 'rabbit_host'/'rabbit_port' pair, if present. This won't break anything, in
 my understanding.

 --
 Best regards,
 Oleg Gelbukh
 Mirantis, Inc.


 On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:

 Hi,

 I'm working on a RabbitMQ H/A patch right now.

 It actually involves more than just using H/A queues (unless you're
 willing to add a TCP load balancer on top of your RMQ cluster).
 You also need to add support for multiple RabbitMQ's directly to nova.
 This is not hard at all, and I have the patch ready and tested in
 production.

 Alessandro, if you need this urgently, I can send you the patch right
 now before the discussion codereview for inclusion in core nova.

 The only problem is, it breaks backward compatibility a bit: my patch
 assumes you have a flag rabbit_addresses which should look like
 rmq-host1:5672,rmq-host2:5672 instead of the prior rabbit_host and
 rabbit_port flags.

 Guys, can you advise on a way to do this without being ugly and
 without breaking compatibility?
 Maybe have rabbit_host, rabbit_port be ListOpt's? But that sounds
 weird, as their names are in singular.
 Maybe have rabbit_host, rabbit_port and also rabbit_host2,
 rabbit_port2 (assuming we only have clusters of 2 nodes)?
 Something else?

 On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:
  On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
  Hi guys,
 
  just an idea, i'm deploying Openstack trying to make it HA.
  The missing thing is rabbitmq, which can be easily started in
  active/active mode, but it needs to declare the queues adding an
  x-ha-policy entry.
  http://www.rabbitmq.com/ha.html
  It would be nice to add a config entry to be able to declare the queues
  in that way.
  If someone know where to edit the openstack code, else i'll try to do
  that in the next weeks maybe.
 
 
  https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
 
  You'll need to add the config options there and the queue is declared
  here with the options supplied to the ConsumerBase constructor:
 
 
  https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
 
  Best,
  -jay
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] High Available queues in rabbitmq

2012-07-23 Thread Eugene Kirpichov
+openstack-dev@

To openstack-dev: this is a discussion of an upcoming patch about
native RabbitMQ H/A support in nova. I'll post the patch for
codereview today.

On Mon, Jul 23, 2012 at 12:46 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
 Yup, that's basically the same thing that Jay suggested :) Obvious in
 retrospect...

 On Mon, Jul 23, 2012 at 12:42 PM, Oleg Gelbukh ogelb...@mirantis.com wrote:
 Eugene,

 I suggest just add option 'rabbit_servers' that will override
 'rabbit_host'/'rabbit_port' pair, if present. This won't break anything, in
 my understanding.

 --
 Best regards,
 Oleg Gelbukh
 Mirantis, Inc.


 On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:

 Hi,

 I'm working on a RabbitMQ H/A patch right now.

 It actually involves more than just using H/A queues (unless you're
 willing to add a TCP load balancer on top of your RMQ cluster).
 You also need to add support for multiple RabbitMQ's directly to nova.
 This is not hard at all, and I have the patch ready and tested in
 production.

 Alessandro, if you need this urgently, I can send you the patch right
 now before the discussion codereview for inclusion in core nova.

 The only problem is, it breaks backward compatibility a bit: my patch
 assumes you have a flag rabbit_addresses which should look like
 rmq-host1:5672,rmq-host2:5672 instead of the prior rabbit_host and
 rabbit_port flags.

 Guys, can you advise on a way to do this without being ugly and
 without breaking compatibility?
 Maybe have rabbit_host, rabbit_port be ListOpt's? But that sounds
 weird, as their names are in singular.
 Maybe have rabbit_host, rabbit_port and also rabbit_host2,
 rabbit_port2 (assuming we only have clusters of 2 nodes)?
 Something else?

 On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:
  On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
  Hi guys,
 
  just an idea, i'm deploying Openstack trying to make it HA.
  The missing thing is rabbitmq, which can be easily started in
  active/active mode, but it needs to declare the queues adding an
  x-ha-policy entry.
  http://www.rabbitmq.com/ha.html
  It would be nice to add a config entry to be able to declare the queues
  in that way.
  If someone know where to edit the openstack code, else i'll try to do
  that in the next weeks maybe.
 
 
  https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
 
  You'll need to add the config options there and the queue is declared
  here with the options supplied to the ConsumerBase constructor:
 
 
  https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
 
  Best,
  -jay
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp



 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov



-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp