Re: [Openstack] HA Openstack with Pacemaker

2013-02-18 Thread Samuel Winchenbach
Any idea why keystone requests from test1 always get routed to test2,
and requests from test2 always get routed to test1?   It wouldn't be a big
deal, but if I manually stop keystone on one of the hosts, then requests
from the other host fail.
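
For reference, one way to watch which real server the director picks for each
request (a rough sketch, assuming the ipvsadm tooling used elsewhere in this
thread):

    # run `keystone user-list` from test1 and test2, then on the active director:
    ipvsadm -L -n -c        # connection table: source address -> VIP -> chosen real server
    ipvsadm -L -n --stats   # per-real-server packet/byte counters

Note that the 600-second persistence shown in the ipvsadm output quoted below
keeps each source IP pinned to whichever real server it was first assigned.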

Thanks,
Sam


On Mon, Feb 18, 2013 at 2:52 AM, Sebastien HAN han.sebast...@gmail.comwrote:

 Hi,

 Good to hear that you finally managed to get it working. Usually the
 postrouting rule is more for clients that need to be routed.

 Cheers!

 On 16 févr. 2013, at 03:06, Samuel Winchenbach swinc...@gmail.com wrote:

 Well I got it to work.  I was being stupid, and forgot to change over the
 endpoints in keystone.

 One thing I find interesting is that if I call keystone user-list from
 test1 it _always_ sends the request to test2 and vice versa.

 Also I did not need to add the POSTROUTING rule... I am not sure why.


 On Fri, Feb 15, 2013 at 3:44 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 Hrmmm it isn't going so well:

 root@test1# ip a s dev eth0
 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP
 qlen 1000
 link/ether 00:25:90:10:00:78 brd ff:ff:ff:ff:ff:ff
 inet 10.21.0.1/16 brd 10.21.255.255 scope global eth0
 inet 10.21.1.1/16 brd 10.21.255.255 scope global secondary eth0
 inet 10.21.21.1/16 scope global secondary eth0
 inet6 fe80::225:90ff:fe10:78/64 scope link
valid_lft forever preferred_lft forever


 root@test1# ipvsadm -L -n
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   - RemoteAddress:Port   Forward Weight ActiveConn InActConn
 TCP  10.21.21.1:5000 wlc persistent 600
   - 10.21.0.1:5000   Masq1000  1
   - 10.21.0.2:5000   Masq1000  0
 TCP  10.21.21.1:35357 wlc persistent 600
   - 10.21.0.1:35357  Masq1000  0
   - 10.21.0.2:35357  Masq1000  0

 root@test1# iptables -L -v -tnat
 Chain PREROUTING (policy ACCEPT 283 packets, 24902 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain INPUT (policy ACCEPT 253 packets, 15256 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain OUTPUT (policy ACCEPT 509 packets, 37182 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain POSTROUTING (policy ACCEPT 196 packets, 12010 bytes)
  pkts bytes target prot opt in out source
 destination
   277 16700 MASQUERADE  all  --  anyeth0anywhere
 anywhere

 root@test1:~# export OS_AUTH_URL=http://10.21.21.1:5000/v2.0/;
  root@test1:~# keystone user-list
 No handlers could be found for logger keystoneclient.client
 Unable to communicate with identity service: [Errno 113] No route to
 host. (HTTP 400)


 I still have some debugging to do with tcpdump, but I thought I would
 post my initial results.


 On Fri, Feb 15, 2013 at 2:56 PM, Sébastien Han 
 han.sebast...@gmail.comwrote:

 Well if you follow my article, you will get LVS-NAT running. It's fairly
 easy, no funky stuff. Yes you will probably need the postrouting rule, as
 usual :). Let me know how it goes ;)

 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach 
 swinc...@gmail.comwrote:

 I didn't give NAT a shot because it didn't seem as well documented.

 I will give NAT a shot.  Will I need to enable iptables and add a
 rule to the nat table?   None of the documentation mentioned that, but every
 time I have done NAT I had to set up a rule like... iptables -t nat -A
 POSTROUTING -o eth0 -j MASQUERADE

 Thanks for helping me with this.


 On Fri, Feb 15, 2013 at 2:07 PM, Sébastien Han han.sebast...@gmail.com
  wrote:

 Ok but why direct routing instead of NAT? If the public IPs are _only_
 on LVS there is no point to use LVS-DR.

 LVS has the public IPs and redirects to the private IPs, this _must_
 work.

 Did you try NAT? Or at least can you give it a shot?
 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach 
 swinc...@gmail.com wrote:
  Sure...  I have undone these settings but I saved a copy:
 
  two hosts:
  test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
  test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24
 
  VIP: 10.21.21.1  (just for testing, later I would add a 130.x.x.x/24 VIP for
  public APIs)

  keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2


  in /etc/sysctl.conf:
     net.ipv4.conf.all.arp_ignore = 1
     net.ipv4.conf.eth0.arp_ignore = 1
     net.ipv4.conf.all.arp_announce = 2
     net.ipv4.conf.eth0.arp_announce = 2

  root# sysctl -p

  in ldirectord.cf:

  checktimeout=3
  checkinterval=5
  autoreload=yes
  logfile=/var/log/ldirectord.log
  quiescent=no

  virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000
 
  virtual=10.21.21.1:
  

Re: [Openstack] HA Openstack with Pacemaker

2013-02-17 Thread Sebastien HAN
Hi,

Good to hear that you finally managed to get it working. Usually the 
postrouting rule is more for clients that need to be routed. 

Cheers! 

On 16 févr. 2013, at 03:06, Samuel Winchenbach swinc...@gmail.com wrote:

 Well I got it to work.  I was being stupid, and forgot to change over the 
 endpoints in keystone.
 
 One thing I find interesting is that if I call keystone user-list from 
 test1 it _always_ sends the request to test2 and vice versa.
 
 Also I did not need to add the POSTROUTING rule... I am not sure why.
 
 
 On Fri, Feb 15, 2013 at 3:44 PM, Samuel Winchenbach swinc...@gmail.com 
 wrote:
 Hrmmm it isn't going so well:
 
 root@test1# ip a s dev eth0
 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen 
 1000
 link/ether 00:25:90:10:00:78 brd ff:ff:ff:ff:ff:ff
 inet 10.21.0.1/16 brd 10.21.255.255 scope global eth0
 inet 10.21.1.1/16 brd 10.21.255.255 scope global secondary eth0
 inet 10.21.21.1/16 scope global secondary eth0
 inet6 fe80::225:90ff:fe10:78/64 scope link 
valid_lft forever preferred_lft forever
 
 
 root@test1# ipvsadm -L -n
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   - RemoteAddress:Port   Forward Weight ActiveConn InActConn
 TCP  10.21.21.1:5000 wlc persistent 600
   - 10.21.0.1:5000   Masq1000  1 
   - 10.21.0.2:5000   Masq1000  0 
 TCP  10.21.21.1:35357 wlc persistent 600
   - 10.21.0.1:35357  Masq1000  0 
   - 10.21.0.2:35357  Masq1000  0
 
 root@test1# iptables -L -v -tnat
 Chain PREROUTING (policy ACCEPT 283 packets, 24902 bytes)
  pkts bytes target prot opt in out source   
 destination 
 
 Chain INPUT (policy ACCEPT 253 packets, 15256 bytes)
  pkts bytes target prot opt in out source   
 destination 
 
 Chain OUTPUT (policy ACCEPT 509 packets, 37182 bytes)
  pkts bytes target prot opt in out source   
 destination 
 
 Chain POSTROUTING (policy ACCEPT 196 packets, 12010 bytes)
  pkts bytes target prot opt in out source   
 destination 
   277 16700 MASQUERADE  all  --  anyeth0anywhere anywhere
 
 root@test1:~# export OS_AUTH_URL=http://10.21.21.1:5000/v2.0/;
 root@test1:~# keystone user-list
 No handlers could be found for logger keystoneclient.client
 Unable to communicate with identity service: [Errno 113] No route to host. 
 (HTTP 400)
 
 
 I still have some debugging to do with tcpdump, but I thought I would post 
 my initial results.
 
 
 On Fri, Feb 15, 2013 at 2:56 PM, Sébastien Han han.sebast...@gmail.com 
 wrote:
 Well if you follow my article, you will get LVS-NAT running. It's fairly 
 easy, no funky stuff. Yes you will probably need the postrouting rule, as 
 usual :). Let me know how it goes ;)
 
 --
 Regards,
 Sébastien Han.
 
 
 On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach swinc...@gmail.com 
 wrote:
 I didn't give NAT a shot because it didn't seem as well documented.
 
 I will give NAT a shot.  Will I need to enable iptables and add a rule 
 to the nat table?   None of the documentation mentioned that, but every 
 time I have done NAT I had to set up a rule like... iptables -t nat -A 
 POSTROUTING -o eth0 -j MASQUERADE
 
 Thanks for helping me with this.
 
 
 On Fri, Feb 15, 2013 at 2:07 PM, Sébastien Han han.sebast...@gmail.com 
 wrote:
 Ok but why direct routing instead of NAT? If the public IPs are _only_
 on LVS there is no point to use LVS-DR.
 
 LVS has the public IPs and redirects to the private IPs, this _must_ work.
 
 Did you try NAT? Or at least can you give it a shot?
 --
 Regards,
 Sébastien Han.
 
 
 On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com 
 wrote:
  Sure...  I have undone these settings but I saved a copy:
 
  two hosts:
  test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
  test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24
 
  VIP: 10.21.21.1  (just for testing, later I would add a 130.x.x.x/24 VIP for
  public APIs)

  keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2


  in /etc/sysctl.conf:
     net.ipv4.conf.all.arp_ignore = 1
     net.ipv4.conf.eth0.arp_ignore = 1
     net.ipv4.conf.all.arp_announce = 2
     net.ipv4.conf.eth0.arp_announce = 2

  root# sysctl -p

  in ldirectord.cf:

  checktimeout=3
  checkinterval=5
  autoreload=yes
  logfile=/var/log/ldirectord.log
  quiescent=no

  virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

  virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Hum I don't see the problem, it's possible to load-balance VIPs with LVS,
they are just IPs... Can I see your conf?

--
Regards,
Sébastien Han.


On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 Well, I think I will have to go with one ip per service and forget about
 load balancing.  It seems as though with LVS routing requests internally
 through the VIP is difficult (impossible?) at least with LVS-DR.  It seems
 like a shame not to be able to distribute the work among the controller
 nodes.


 On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach swinc...@gmail.comwrote:

 Hi Sébastien,

 I have two hosts with public interfaces and a number (~8) of compute nodes
 behind them.   I am trying to set up the two public nodes for HA and load
 balancing, and I plan to run all the openstack services on these two nodes in
 Active/Active where possible.   I currently have MySQL and RabbitMQ set up
 in pacemaker with a DRBD backend.

 That is a quick summary.   If there is anything else I can answer about
 my setup please let me know.

 Thanks,
 Sam


 On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han 
 han.sebast...@gmail.comwrote:

 Well I don't know your setup, if you use LB for API service or if you
 use an active/passive pacemaker but at the end it's not that much IPs I
 guess. I dare to say that Keepalived sounds outdated to me...

 If you use pacemaker and want to have the same IP for all the resources
 simply create a resource group with all the openstack service inside it
 (it's ugly but if it's what you want :)). Give me more info about your
 setup and we can go further in the discussion :).

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach 
 swinc...@gmail.comwrote:

 The only real problem is that it would consume a lot of IP addresses
 when exposing the public interfaces.   I _think_ I may have the solution in
 your blog actually:
 http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
 and
 http://clusterlabs.org/wiki/Using_ldirectord

 I am trying to weigh the pros and cons of this method vs
 keepalived/haproxy and just biting the bullet and using one IP per service.


 On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han han.sebast...@gmail.com
  wrote:

 What's the problem to have one IP on service pool basis?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach 
 swinc...@gmail.com wrote:

 What if the VIP is created on a different host than keystone is
 started on?   It seems like you either need to set 
 net.ipv4.ip_nonlocal_bind
 = 1 or create a colocation in pacemaker (which would either require all
 services to be on the same host, or have an ip-per-service).




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:15, Razique Mahroua razique.mahr...@gmail.com
 a écrit :

 I'm currently updating that part of the documentation - indeed it
 states that two IPs are used, but in fact, you end up with only one VIP 
 for
 the API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:05, Samuel Winchenbach swinc...@gmail.com a
 écrit :

 In that documentation it looks like each openstack service gets its
 own IP (keystone is being assigned 192.168.42.103 and glance is getting
 192.168.42.104).

 I might be missing something too because in the section titled
 Configure the VIP it creates a primitive called p_api-ip (or 
 p_ip_api if
 you read the text above it) and then in Adding Keystone resource to
 Pacemaker it creates a group with p_ip_keystone???


 Stranger yet, Configuring OpenStack Services to use High Available
 Glance API says:  For Nova, for example, if your Glance API
 service IP address is 192.168.42.104 as in the configuration explained
 here, you would use the following line in your nova.conf file : 
 glance_api_servers
 = 192.168.42.103  But, in the step before it set:  registry_host
 = 192.168.42.104?

 So I am not sure which ip you would connect to here...
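
If the intent is really a single VIP in front of all the API services, a rough
sketch of the two settings might look like this (the address is whichever VIP
ends up being used; 9292/9191 are the default Glance API/registry ports, added
here for illustration):

    # nova.conf on the compute nodes
    glance_api_servers = 192.168.42.103:9292

    # glance-api.conf on each controller
    registry_host = 192.168.42.103
    registry_port = 9191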

 Sam



 On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 I currently have a HA OpenStack cluster running where the
 OpenStack services are kept alive with a combination of haproxy and
 keepalived.

 Is it possible to configure pacemaker so that all the OpenStack
 services  are served by the same IP?  With keepalived I have a 
 virtual ip
 that can move from server to server and haproxy sends the request to a
 machine that has a live service.   This 

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Samuel Winchenbach
Sure...  I have undone these settings but I saved a copy:

two hosts:
test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24

VIP: 10.21.21.1  (just for testing, later I would add a 130.x.x.x/24 VIP
for public APIs)

keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2


in /etc/sysctl.conf:
   net.ipv4.conf.all.arp_ignore = 1
   net.ipv4.conf.eth0.arp_ignore = 1
   net.ipv4.conf.all.arp_announce = 2
   net.ipv4.conf.eth0.arp_announce = 2

root# sysctl -p

in ldirectord.cf:

checktimeout=3
checkinterval=5
autoreload=yes
logfile=/var/log/ldirectord.log
quiescent=no

virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=35357
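
As a side note, the LVS-NAT variant tried later in the thread would use masq
instead of gate for the same virtual services; a sketch for the 5000 block
(35357 would follow the same pattern):

virtual=10.21.21.1:5000
        real=10.21.0.1:5000 masq
        real=10.21.0.2:5000 masq
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000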


crm shell:




primitive p_openstack_ip ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" cidr_netmask="16" lvs_support="true"

primitive p_openstack_ip_lo ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" nic="lo" cidr_netmask="32"

primitive p_openstack_lvs ocf:heartbeat:ldirectord \
        op monitor interval="20" timeout="10"

group g_openstack_ip_lvs p_openstack_ip p_openstack_lvs

clone c_openstack_ip_lo p_openstack_ip_lo meta interleave="true"

colocation co_openstack_lo_never_lvs -inf: c_openstack_ip_lo g_openstack_ip_lvs
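
A quick note on loading and checking the above (a sketch, assuming the crm
shell; the file name is a placeholder):

crm configure load update openstack-lvs.crm   # file containing the primitives/group/clone/colocation above
crm configure show                            # verify the configuration was accepted
crm_mon -1                                    # one-shot cluster status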

Thanks for taking a look at this.

Sam





On Fri, Feb 15, 2013 at 3:54 AM, Sébastien Han han.sebast...@gmail.com
wrote:

 Hum I don't see the problem, it's possible to load-balance VIPs with LVS,
they are just IPs... Can I see your conf?

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach swinc...@gmail.com
wrote:

 Well, I think I will have to go with one ip per service and forget about
load balancing.  It seems as though with LVS routing requests internally
through the VIP is difficult (impossible?) at least with LVS-DR.  It seems
like a shame not to be able to distribute the work among the controller
nodes.


 On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach swinc...@gmail.com
wrote:

 Hi Sébastien,

 I have two hosts with public interfaces with a number (~8) compute
nodes behind them.   I am trying to set the two public nodes in for HA and
load balancing,  I plan to run all the openstack services on these two
nodes in Active/Active where possible.   I currently have MySQL and
RabbitMQ setup in pacemaker with a drbd backend.

 That is a quick summary.   If there is anything else I can answer about
my setup please let me know.

 Thanks,
 Sam


 On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han han.sebast...@gmail.com
wrote:

 Well I don't know your setup, if you use LB for API service or if you
use an active/passive pacemaker but at the end it's not that much IPs I
guess. I dare to say that Keepalived sounds outdated to me...

 If you use pacemaker and want to have the same IP for all the
resources simply create a resource group with all the openstack service
inside it (it's ugly but if it's what you want :)). Give me more info about
your setup and we can go further in the discussion :).

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach swinc...@gmail.com
wrote:

 The only real problem is that it would consume a lot of IP addresses
when exposing the public interfaces.   I _think_ I may have the solution in
your blog actually:
http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
 and
 http://clusterlabs.org/wiki/Using_ldirectord

 I am trying to weigh the pros and cons of this method vs
keepalived/haproxy and just biting the bullet and using one IP per service.


 On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han 
han.sebast...@gmail.com wrote:

 What's the problem to have one IP on service pool basis?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach 
swinc...@gmail.com wrote:

 What if the VIP is created on a different host than keystone is
started on?   It seems like you either need to set
net.ipv4.ip_nonlocal_bind = 1 or create a colocation in pacemaker (which
would either require all services to be on the same host, or have an
ip-per-service).




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua 
razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:15, Razique Mahroua 
razique.mahr...@gmail.com a écrit :

 I'm currently updating that part of the documentation - indeed it
states that two IPs are used, but in fact, you end up with only one VIP for
the API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Ok but why direct routing instead of NAT? If the public IPs are _only_
on LVS there is no point to use LVS-DR.

LVS has the public IPs and redirects to the private IPs, this _must_ work.

Did you try NAT? Or at least can you give it a shot?
--
Regards,
Sébastien Han.


On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com wrote:
 Sure...  I have undone these settings but I saved a copy:

 two hosts:
 test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
 test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24

 VIP: 10.21.21.1  (just for testing, later I would add a 130.x.x.x/24 VIP for
 public APIs)

 keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2


 in /etc/sysctl.conf:
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.eth0.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.conf.eth0.arp_announce = 2

 root# sysctl -p

 in ldirectord.cf:

 checktimeout=3
 checkinterval=5
 autoreload=yes
 logfile=/var/log/ldirectord.log
 quiescent=no

 virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

 virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=35357


 crm shell:


 primitive p_openstack_ip ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" cidr_netmask="16" lvs_support="true"

 primitive p_openstack_ip_lo ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" nic="lo" cidr_netmask="32"

 primitive p_openstack_lvs ocf:heartbeat:ldirectord \
        op monitor interval="20" timeout="10"

 group g_openstack_ip_lvs p_openstack_ip p_openstack_lvs

 clone c_openstack_ip_lo p_openstack_ip_lo meta interleave="true"

 colocation co_openstack_lo_never_lvs -inf: c_openstack_ip_lo g_openstack_ip_lvs

 Thanks for taking a look at this.

 Sam




 On Fri, Feb 15, 2013 at 3:54 AM, Sébastien Han han.sebast...@gmail.com
 wrote:

 Hum I don't see the problem, it's possible to load-balance VIPs with LVS,
  they are just IPs... Can I see your conf?

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach swinc...@gmail.com
 wrote:

 Well, I think I will have to go with one ip per service and forget about
 load balancing.  It seems as though with LVS routing requests internally
 through the VIP is difficult (impossible?) at least with LVS-DR.  It seems
 like a shame not to be able to distribute the work among the controller
 nodes.


 On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach swinc...@gmail.com
 wrote:

 Hi Sébastien,

 I have two hosts with public interfaces with a number (~8) compute nodes
 behind them.   I am trying to set the two public nodes in for HA and load
 balancing,  I plan to run all the openstack services on these two nodes in
 Active/Active where possible.   I currently have MySQL and RabbitMQ setup 
 in
 pacemaker with a drbd backend.

 That is a quick summary.   If there is anything else I can answer about
 my setup please let me know.

 Thanks,
 Sam


 On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han han.sebast...@gmail.com
 wrote:

 Well I don't know your setup, if you use LB for API service or if you
 use an active/passive pacemaker but at the end it's not that much IPs I
 guess. I dare to say that Keepalived sounds outdated to me...

 If you use pacemaker and want to have the same IP for all the resources
 simply create a resource group with all the openstack service inside it
 (it's ugly but if it's what you want :)). Give me more info about your 
 setup
 and we can go further in the discussion :).

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach
 swinc...@gmail.com wrote:

 The only real problem is that it would consume a lot of IP addresses
 when exposing the public interfaces.   I _think_ I may have the solution 
 in
 your blog actually:
 http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
 and
 http://clusterlabs.org/wiki/Using_ldirectord

 I am trying to weigh the pros and cons of this method vs
 keepalived/haproxy and just biting the bullet and using one IP per 
 service.


 On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han
 han.sebast...@gmail.com wrote:

 What's the problem to have one IP on service pool basis?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach
 swinc...@gmail.com wrote:

 What if the VIP is created on a different host than keystone is
 started on?   It seems like you either need to set 
 net.ipv4.ip_nonlocal_bind
 = 1 or create a colocation in pacemaker (which would either require all
 services to be on the same host, or have an ip-per-service).




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua
 razique.mahr...@gmail.com wrote:

 There we go
 

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Samuel Winchenbach
I didn't give NAT a shot because it didn't seem as well documented.

I will give NAT a shot.  Will I need to enable iptables and add a rule
to the nat table?   None of the documentation mentioned that, but every
time I have done NAT I had to set up a rule like... iptables -t nat -A
POSTROUTING -o eth0 -j MASQUERADE
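
For reference, the usual extra pieces on an LVS-NAT director are IP forwarding
plus a masquerade rule like the one above; a rough sketch using the interface
from this test setup:

    # enable forwarding on the director (persist with net.ipv4.ip_forward = 1 in /etc/sysctl.conf)
    sysctl -w net.ipv4.ip_forward=1

    # masquerade traffic leaving eth0, i.e. the rule mentioned above
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE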

Thanks for helping me with this.


On Fri, Feb 15, 2013 at 2:07 PM, Sébastien Han han.sebast...@gmail.comwrote:

 Ok but why direct routing instead of NAT? If the public IPs are _only_
 on LVS there is no point to use LVS-DR.

 LVS has the public IPs and redirects to the private IPs, this _must_ work.

 Did you try NAT? Or at least can you give it a shot?
 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com
 wrote:
  Sure...  I have undone these settings but I saved a copy:
 
  two hosts:
  test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
  test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24
 
  VIP: 10.21.21.1  (just for testing, later I would add a 130.x.x.x/24 VIP for
  public APIs)

  keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2


  in /etc/sysctl.conf:
     net.ipv4.conf.all.arp_ignore = 1
     net.ipv4.conf.eth0.arp_ignore = 1
     net.ipv4.conf.all.arp_announce = 2
     net.ipv4.conf.eth0.arp_announce = 2

  root# sysctl -p

  in ldirectord.cf:

  checktimeout=3
  checkinterval=5
  autoreload=yes
  logfile=/var/log/ldirectord.log
  quiescent=no

  virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

  virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=35357
 
 
  crm shell:
 
 
  primitive p_openstack_ip ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" cidr_netmask="16" lvs_support="true"

  primitive p_openstack_ip_lo ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" nic="lo" cidr_netmask="32"

  primitive p_openstack_lvs ocf:heartbeat:ldirectord \
        op monitor interval="20" timeout="10"

  group g_openstack_ip_lvs p_openstack_ip p_openstack_lvs

  clone c_openstack_ip_lo p_openstack_ip_lo meta interleave="true"

  colocation co_openstack_lo_never_lvs -inf: c_openstack_ip_lo g_openstack_ip_lvs
 
  Thanks for taking a look at this.
 
  Sam
 
 
 
 
  On Fri, Feb 15, 2013 at 3:54 AM, Sébastien Han han.sebast...@gmail.com
  wrote:
 
  Hum I don't see the problem, it's possible to load-balance VIPs with
 LVS,
  they are just IPs... Can I see your conf?
 
  --
  Regards,
  Sébastien Han.
 
 
  On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach swinc...@gmail.com
 
  wrote:
 
  Well, I think I will have to go with one ip per service and forget about
  load balancing.  It seems as though with LVS routing requests
 internally
  through the VIP is difficult (impossible?) at least with LVS-DR.  It
 seems
  like a shame not to be able to distribute the work among the controller
  nodes.
 
 
  On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach 
 swinc...@gmail.com
  wrote:
 
  Hi Sébastien,
 
  I have two hosts with public interfaces with a number (~8) compute
 nodes
  behind them.   I am trying to set the two public nodes in for HA and
 load
  balancing,  I plan to run all the openstack services on these two
 nodes in
  Active/Active where possible.   I currently have MySQL and RabbitMQ
 setup in
  pacemaker with a drbd backend.
 
  That is a quick summary.   If there is anything else I can answer
 about
  my setup please let me know.
 
  Thanks,
  Sam
 
 
  On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han 
 han.sebast...@gmail.com
  wrote:
 
  Well I don't know your setup, if you use LB for API service or if you
  use an active/passive pacemaker but at the end it's not that much
 IPs I
  guess. I dare to say that Keepalived sounds outdated to me...
 
  If you use pacemaker and want to have the same IP for all the
 resources
  simply create a resource group with all the openstack service inside
 it
  (it's ugly but if it's what you want :)). Give me more info about
 your setup
  and we can go further in the discussion :).
 
  --
  Regards,
  Sébastien Han.
 
 
  On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach
  swinc...@gmail.com wrote:
 
  The only real problem is that it would consume a lot of IP addresses
  when exposing the public interfaces.   I _think_ I may have the
 solution in
  your blog actually:
  http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
  and
  http://clusterlabs.org/wiki/Using_ldirectord
 
  I am trying to weigh the pros and cons of this method vs
  keepalived/haproxy and just biting the bullet and using one 

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Sébastien Han
Well if you follow my article, you will get LVS-NAT running. It's fairly
easy, no funky stuff. Yes you will probably need the postrouting rule, as
usual :). Let me know how it goes ;)

--
Regards,
Sébastien Han.


On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 I didn't give NAT a shot because it didn't seem as well documented.

 I will give NAT a shot.  Will I need to enable iptables and add a rule
 to the nat table?   None of the documentation mentioned that, but every time
 I have done NAT I had to set up a rule like... iptables -t nat -A
 POSTROUTING -o eth0 -j MASQUERADE

 Thanks for helping me with this.


 On Fri, Feb 15, 2013 at 2:07 PM, Sébastien Han han.sebast...@gmail.comwrote:

 Ok but why direct routing instead of NAT? If the public IPs are _only_
 on LVS there is no point to use LVS-DR.

 LVS has the public IPs and redirects to the private IPs, this _must_ work.

 Did you try NAT? Or at least can you give it a shot?
 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com
 wrote:
  Sure...  I have undone these settings but I saved a copy:
 
  two hosts:
  test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
  test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24
 
  VIP: 10.21.21.1  (just for testing, later I would add a 130.x.x.x/24 VIP for
  public APIs)

  keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2


  in /etc/sysctl.conf:
     net.ipv4.conf.all.arp_ignore = 1
     net.ipv4.conf.eth0.arp_ignore = 1
     net.ipv4.conf.all.arp_announce = 2
     net.ipv4.conf.eth0.arp_announce = 2

  root# sysctl -p

  in ldirectord.cf:

  checktimeout=3
  checkinterval=5
  autoreload=yes
  logfile=/var/log/ldirectord.log
  quiescent=no

  virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

  virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=35357
 
 
  crm shell:
 
 
  primitive p_openstack_ip ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" cidr_netmask="16" lvs_support="true"

  primitive p_openstack_ip_lo ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" nic="lo" cidr_netmask="32"

  primitive p_openstack_lvs ocf:heartbeat:ldirectord \
        op monitor interval="20" timeout="10"

  group g_openstack_ip_lvs p_openstack_ip p_openstack_lvs

  clone c_openstack_ip_lo p_openstack_ip_lo meta interleave="true"

  colocation co_openstack_lo_never_lvs -inf: c_openstack_ip_lo g_openstack_ip_lvs
 
  Thanks for taking a look at this.
 
  Sam
 
 
 
 
  On Fri, Feb 15, 2013 at 3:54 AM, Sébastien Han han.sebast...@gmail.com
 
  wrote:
 
  Hum I don't see the problem, it's possible to load-balance VIPs with
 LVS,
  they are just IPs... Can I see your conf?
 
  --
  Regards,
  Sébastien Han.
 
 
  On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach 
 swinc...@gmail.com
  wrote:
 
  Well, I think I will have to go with one ip per service and forget about
  load balancing.  It seems as though with LVS routing requests
 internally
  through the VIP is difficult (impossible?) at least with LVS-DR.  It
 seems
  like a shame not to be able to distribute the work among the
 controller
  nodes.
 
 
  On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach 
 swinc...@gmail.com
  wrote:
 
  Hi Sébastien,
 
  I have two hosts with public interfaces with a number (~8) compute
 nodes
  behind them.   I am trying to set the two public nodes in for HA and
 load
  balancing,  I plan to run all the openstack services on these two
 nodes in
  Active/Active where possible.   I currently have MySQL and RabbitMQ
 setup in
  pacemaker with a drbd backend.
 
  That is a quick summary.   If there is anything else I can answer
 about
  my setup please let me know.
 
  Thanks,
  Sam
 
 
  On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han 
 han.sebast...@gmail.com
  wrote:
 
  Well I don't know your setup, if you use LB for API service or if
 you
  use an active/passive pacemaker but at the end it's not that much
 IPs I
  guess. I dare to say that Keepalived sounds outdated to me...
 
  If you use pacemaker and want to have the same IP for all the
 resources
  simply create a resource group with all the openstack service
 inside it
  (it's ugly but if it's what you want :)). Give me more info about
 your setup
  and we can go further in the discussion :).
 
  --
  Regards,
  Sébastien Han.
 
 
  On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach
  swinc...@gmail.com wrote:
 
   The only real problem is that it would consume a lot of IP addresses
  when exposing the public interfaces.   

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Samuel Winchenbach
Hrmmm it isn't going so well:

root@test1# ip a s dev eth0
2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen
1000
link/ether 00:25:90:10:00:78 brd ff:ff:ff:ff:ff:ff
inet 10.21.0.1/16 brd 10.21.255.255 scope global eth0
inet 10.21.1.1/16 brd 10.21.255.255 scope global secondary eth0
inet 10.21.21.1/16 scope global secondary eth0
inet6 fe80::225:90ff:fe10:78/64 scope link
   valid_lft forever preferred_lft forever


root@test1# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  - RemoteAddress:Port   Forward Weight ActiveConn InActConn
TCP  10.21.21.1:5000 wlc persistent 600
  - 10.21.0.1:5000   Masq1000  1
  - 10.21.0.2:5000   Masq1000  0
TCP  10.21.21.1:35357 wlc persistent 600
  - 10.21.0.1:35357  Masq1000  0
  - 10.21.0.2:35357  Masq1000  0

root@test1# iptables -L -v -tnat
Chain PREROUTING (policy ACCEPT 283 packets, 24902 bytes)
 pkts bytes target prot opt in out source
destination

Chain INPUT (policy ACCEPT 253 packets, 15256 bytes)
 pkts bytes target prot opt in out source
destination

Chain OUTPUT (policy ACCEPT 509 packets, 37182 bytes)
 pkts bytes target prot opt in out source
destination

Chain POSTROUTING (policy ACCEPT 196 packets, 12010 bytes)
 pkts bytes target prot opt in out source
destination
  277 16700 MASQUERADE  all  --  anyeth0anywhere
anywhere

root@test1:~# export OS_AUTH_URL=http://10.21.21.1:5000/v2.0/;
root@test1:~# keystone user-list
No handlers could be found for logger keystoneclient.client
Unable to communicate with identity service: [Errno 113] No route to host.
(HTTP 400)


I still have some debugging to do with tcpdump, but I thought I would post
my initial results.
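
A couple of tcpdump invocations that may help with that debugging (interface,
ports and VIP taken from the setup above):

    # watch keystone traffic arriving on the VIP and being forwarded to the real servers
    tcpdump -n -i eth0 port 5000 or port 35357

    # check who is answering ARP for the VIP
    tcpdump -n -i eth0 arp and host 10.21.21.1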


On Fri, Feb 15, 2013 at 2:56 PM, Sébastien Han han.sebast...@gmail.comwrote:

 Well if you follow my article, you will get LVS-NAT running. It's fairly
 easy, no funky stuff. Yes you will probably need the postrouting rule, as
 usual :). Let me know how it goes ;)

 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 I didn't give NAT a shot because it didn't seem as well documented.

 I will give NAT a shot.  Will I need to enable iptables and add a rule
 to the nat table?   None of the documentation mentioned that, but every time
 I have done NAT I had to set up a rule like... iptables -t nat -A
 POSTROUTING -o eth0 -j MASQUERADE

 Thanks for helping me with this.


 On Fri, Feb 15, 2013 at 2:07 PM, Sébastien Han 
 han.sebast...@gmail.comwrote:

 Ok but why direct routing instead of NAT? If the public IPs are _only_
 on LVS there is no point to use LVS-DR.

 LVS has the public IPs and redirects to the private IPs, this _must_
 work.

 Did you try NAT? Or at least can you give it a shot?
 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com
 wrote:
  Sure...  I have undone these settings but I saved a copy:
 
  two hosts:
  test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
  test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24
 
  VIP: 10.21.21.1  (just for testing, later I would add a 130.x.x.x/24 VIP for
  public APIs)

  keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2


  in /etc/sysctl.conf:
     net.ipv4.conf.all.arp_ignore = 1
     net.ipv4.conf.eth0.arp_ignore = 1
     net.ipv4.conf.all.arp_announce = 2
     net.ipv4.conf.eth0.arp_announce = 2

  root# sysctl -p

  in ldirectord.cf:

  checktimeout=3
  checkinterval=5
  autoreload=yes
  logfile=/var/log/ldirectord.log
  quiescent=no

  virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

  virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=35357
 
 
  crm shell:
 
 
  primitive p_openstack_ip ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" cidr_netmask="16" lvs_support="true"

  primitive p_openstack_ip_lo ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" nic="lo" cidr_netmask="32"

  primitive p_openstack_lvs ocf:heartbeat:ldirectord \
        op monitor interval="20" timeout="10"

  group g_openstack_ip_lvs p_openstack_ip p_openstack_lvs

  clone c_openstack_ip_lo p_openstack_ip_lo meta interleave="true"

  colocation co_openstack_lo_never_lvs -inf: c_openstack_ip_lo g_openstack_ip_lvs
 
  Thanks for taking a look at this.
 
  Sam
 
 
 
 
  On Fri, Feb 15, 2013 at 3:54 AM, Sébastien Han 
 

Re: [Openstack] HA Openstack with Pacemaker

2013-02-15 Thread Samuel Winchenbach
Well I got it to work.  I was being stupid, and forgot to change over the
endpoints in keystone.

One thing I find interesting is that if I call keystone user-list from
test1 it _always_ sends the request to test2 and vice versa.

Also I did not need to add the POSTROUTING rule... I am not sure why.
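
For anyone who hits the same thing: changing over the endpoints means
re-pointing the keystone endpoint URLs at the VIP. A rough sketch with the
keystone CLI of that era (the service id is a placeholder):

    keystone service-list                       # note the identity service id
    keystone endpoint-create --region RegionOne \
      --service-id <identity-service-id> \
      --publicurl   http://10.21.21.1:5000/v2.0 \
      --internalurl http://10.21.21.1:5000/v2.0 \
      --adminurl    http://10.21.21.1:35357/v2.0
    keystone endpoint-list                      # then endpoint-delete the old per-host entries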


On Fri, Feb 15, 2013 at 3:44 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 Hrmmm it isn't going so well:

 root@test1# ip a s dev eth0
 2: eth0: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc mq state UP qlen
 1000
 link/ether 00:25:90:10:00:78 brd ff:ff:ff:ff:ff:ff
 inet 10.21.0.1/16 brd 10.21.255.255 scope global eth0
 inet 10.21.1.1/16 brd 10.21.255.255 scope global secondary eth0
 inet 10.21.21.1/16 scope global secondary eth0
 inet6 fe80::225:90ff:fe10:78/64 scope link
valid_lft forever preferred_lft forever


 root@test1# ipvsadm -L -n
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   - RemoteAddress:Port   Forward Weight ActiveConn InActConn
 TCP  10.21.21.1:5000 wlc persistent 600
   - 10.21.0.1:5000   Masq1000  1
   - 10.21.0.2:5000   Masq1000  0
 TCP  10.21.21.1:35357 wlc persistent 600
   - 10.21.0.1:35357  Masq1000  0
   - 10.21.0.2:35357  Masq1000  0

 root@test1# iptables -L -v -tnat
 Chain PREROUTING (policy ACCEPT 283 packets, 24902 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain INPUT (policy ACCEPT 253 packets, 15256 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain OUTPUT (policy ACCEPT 509 packets, 37182 bytes)
  pkts bytes target prot opt in out source
 destination

 Chain POSTROUTING (policy ACCEPT 196 packets, 12010 bytes)
  pkts bytes target prot opt in out source
 destination
   277 16700 MASQUERADE  all  --  anyeth0anywhere
 anywhere

 root@test1:~# export OS_AUTH_URL=http://10.21.21.1:5000/v2.0/;
 root@test1:~# keystone user-list
 No handlers could be found for logger keystoneclient.client
 Unable to communicate with identity service: [Errno 113] No route to host.
 (HTTP 400)


 I still have some debugging to do with tcpdump, but I thought I would post
 my initial results.


 On Fri, Feb 15, 2013 at 2:56 PM, Sébastien Han han.sebast...@gmail.comwrote:

 Well if you follow my article, you will get LVS-NAT running. It's fairly
 easy, no funky stuff. Yes you will probably need the postrouting rule, as
 usual :). Let me know how it goes ;)

 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach 
 swinc...@gmail.comwrote:

 I didn't give NAT a shot because it didn't seem as well documented.

 I will give NAT a shot.  Will I need to enable iptables and add a
 rule to the nat table?   None of the documentation mentioned that, but every
 time I have done NAT I had to set up a rule like... iptables -t nat -A
 POSTROUTING -o eth0 -j MASQUERADE

 Thanks for helping me with this.


 On Fri, Feb 15, 2013 at 2:07 PM, Sébastien Han 
 han.sebast...@gmail.comwrote:

 Ok but why direct routing instead of NAT? If the public IPs are _only_
 on LVS there is no point to use LVS-DR.

 LVS has the public IPs and redirects to the private IPs, this _must_
 work.

 Did you try NAT? Or at least can you give it a shot?
 --
 Regards,
 Sébastien Han.


 On Fri, Feb 15, 2013 at 3:55 PM, Samuel Winchenbach swinc...@gmail.com
 wrote:
  Sure...  I have undone these settings but I saved a copy:
 
  two hosts:
  test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
  test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24
 
  VIP: 10.21.21.1  (just for testing, later I would add a 130.x.x.x/24 VIP for
  public APIs)

  keystone is bound to 10.21.0.1 on test1 and 10.21.0.2 on test2


  in /etc/sysctl.conf:
     net.ipv4.conf.all.arp_ignore = 1
     net.ipv4.conf.eth0.arp_ignore = 1
     net.ipv4.conf.all.arp_announce = 2
     net.ipv4.conf.eth0.arp_announce = 2

  root# sysctl -p

  in ldirectord.cf:

  checktimeout=3
  checkinterval=5
  autoreload=yes
  logfile=/var/log/ldirectord.log
  quiescent=no

  virtual=10.21.21.1:5000
        real=10.21.0.1:5000 gate
        real=10.21.0.2:5000 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=5000

  virtual=10.21.21.1:35357
        real=10.21.0.1:35357 gate
        real=10.21.0.2:35357 gate
        scheduler=wrr
        protocol=tcp
        checktype=connect
        checkport=35357
 
 
  crm shell:
 
 
  primitive p_openstack_ip ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" cidr_netmask="16" lvs_support="true"

  primitive p_openstack_ip_lo ocf:heartbeat:IPaddr2 \
        op monitor interval="60" timeout="20" \
        params ip="10.21.21.1" nic="lo" cidr_netmask="32"

  primitive p_openstack_lvs

Re: [Openstack] HA Openstack with Pacemaker

2013-02-14 Thread Sébastien Han
What's the problem to have one IP on service pool basis?

--
Regards,
Sébastien Han.


On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 What if the VIP is created on a different host than keystone is started
 on?   It seems like you either need to set net.ipv4.ip_nonlocal_bind = 1
 or create a colocation in pacemaker (which would either require all
 services to be on the same host, or have an ip-per-service).
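
The two options mentioned there, roughly sketched (resource names are
placeholders):

    # option 1: allow binding to a VIP that is not currently assigned to this host
    sysctl -w net.ipv4.ip_nonlocal_bind=1

    # option 2: keep keystone on whichever node currently holds its VIP
    crm configure colocation keystone_with_vip inf: p_keystone p_ip_keystone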




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:15, Razique Mahroua razique.mahr...@gmail.com a
 écrit :

 I'm currently updating that part of the documentation - indeed it states
 that two IPs are used, but in fact, you end up with only one VIP for the
 API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:05, Samuel Winchenbach swinc...@gmail.com a
 écrit :

 In that documentation it looks like each openstack service gets its own IP
 (keystone is being assigned 192.168.42.103 and glance is getting
 192.168.42.104).

 I might be missing something too because in the section titled Configure
 the VIP it creates a primitive called p_api-ip (or p_ip_api if you read
 the text above it) and then in Adding Keystone resource to Pacemaker it
 creates a group with p_ip_keystone???


 Stranger yet, Configuring OpenStack Services to use High Available
 Glance API says:  For Nova, for example, if your Glance API service IP
 address is 192.168.42.104 as in the configuration explained here, you would
 use the following line in your nova.conf file : glance_api_servers =
 192.168.42.103  But, in the step before it set:  registry_host =
 192.168.42.104?

 So I am not sure which ip you would connect to here...

 Sam



 On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 I currently have a HA OpenStack cluster running where the OpenStack
 services are kept alive with a combination of haproxy and keepalived.

 Is it possible to configure pacemaker so that all the OpenStack
 services  are served by the same IP?  With keepalived I have a virtual ip
 that can move from server to server and haproxy sends the request to a
 machine that has a live service.   This allows one (public) ip to handle
 all incoming requests.  I believe it is the combination of VRRP/IPVS that
 allows this.
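
 For contrast, a minimal sketch of that keepalived + haproxy arrangement
 (addresses and the single service shown are illustrative):

    # keepalived.conf: one VRRP instance owns the floating public IP
    vrrp_instance VI_1 {
        state MASTER
        interface eth1
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.168.42.100
        }
    }

    # haproxy.cfg: the listener on the VIP spreads requests over hosts with a live service
    listen keystone_api
        bind 192.168.42.100:5000
        balance roundrobin
        server host1 10.21.0.1:5000 check
        server host2 10.21.0.2:5000 check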


 Is it possible to do something similar with pacemaker?  I really don't
 want to have an IP for each service, and I don't want to make it a
 requirement that all OpenStack services must be running on the same server.

 Thanks... I hope this question is clear, I feel like I sort of
 butchered the wording a bit.

 Sam

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-14 Thread Samuel Winchenbach
The only real problem is that it would consume a lot of IP addresses when
exposing the public interfaces.   I _think_ I may have the solution in your
blog actually:
http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
and
http://clusterlabs.org/wiki/Using_ldirectord

I am trying to weigh the pros and cons of this method vs keepalived/haproxy
and just biting the bullet and using one IP per service.



On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han han.sebast...@gmail.comwrote:

 What's the problem to have one IP on service pool basis?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 What if the VIP is created on a different host than keystone is started
 on?   It seems like you either need to set net.ipv4.ip_nonlocal_bind = 1
 or create a colocation in pacemaker (which would either require all
 services to be on the same host, or have an ip-per-service).




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:15, Razique Mahroua razique.mahr...@gmail.com a
 écrit :

 I'm currently updating that part of the documentation - indeed it states
 that two IPs are used, but in fact, you end up with only one VIP for the
 API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:05, Samuel Winchenbach swinc...@gmail.com a
 écrit :

 In that documentation it looks like each openstack service gets its own
 IP (keystone is being assigned 192.168.42.103 and glance is getting
 192.168.42.104).

 I might be missing something too because in the section titled
 Configure the VIP it creates a primitive called p_api-ip (or p_ip_api if
 you read the text above it) and then in Adding Keystone resource to
 Pacemaker it creates a group with p_ip_keystone???


 Stranger yet, Configuring OpenStack Services to use High Available
 Glance API says:  For Nova, for example, if your Glance API service
 IP address is 192.168.42.104 as in the configuration explained here, you
 would use the following line in your nova.conf file : glance_api_servers
 = 192.168.42.103  But, in the step before it set:  registry_host =
 192.168.42.104?

 So I am not sure which ip you would connect to here...

 Sam



 On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 I currently have a HA OpenStack cluster running where the OpenStack
 services are kept alive with a combination of haproxy and keepalived.

 Is it possible to configure pacemaker so that all the OpenStack
 services  are served by the same IP?  With keepalived I have a virtual ip
 that can move from server to server and haproxy sends the request to a
 machine that has a live service.   This allows one (public) ip to handle
 all incoming requests.  I believe it is the combination of VRRP/IPVS that
 allows this.


 Is it possible to do something similar with pacemaker?  I really don't
 want to have an IP for each service, and I don't want to make it a
 requirement that all OpenStack services must be running on the same 
 server.

 Thanks... I hope this question is clear, I feel like I sort of
 butchered the wording a bit.

 Sam

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-14 Thread Sébastien Han
Well I don't know your setup, if you use LB for API service or if you use
an active/passive pacemaker, but at the end it's not that many IPs, I guess.
I dare to say that Keepalived sounds outdated to me...

If you use pacemaker and want to have the same IP for all the resources
simply create a resource group with all the openstack services inside it
(it's ugly but if it's what you want :)). Give me more info about your
setup and we can go further in the discussion :).
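
A sketch of that single-IP resource group (primitive names are placeholders;
each API service needs its own resource agent):

    # the group keeps the VIP and all the API services together on one node
    crm configure group g_openstack p_ip_api p_keystone p_glance-api p_nova-api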

--
Regards,
Sébastien Han.


On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 The only real problem is that it would consume a lot of IP addresses when
 exposing the public interfaces.   I _think_ I may have the solution in your
 blog actually:
 http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
 and
 http://clusterlabs.org/wiki/Using_ldirectord

 I am trying to weigh the pros and cons of this method vs
 keepalived/haproxy and just biting the bullet and using one IP per service.


 On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han han.sebast...@gmail.comwrote:

 What's the problem to have one IP on service pool basis?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach 
 swinc...@gmail.comwrote:

 What if the VIP is created on a different host than keystone is started
 on?   It seems like you either need to set net.ipv4.ip_nonlocal_bind =
 1 or create a colocation in pacemaker (which would either require all
 services to be on the same host, or have an ip-per-service).




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:15, Razique Mahroua razique.mahr...@gmail.com
 a écrit :

 I'm currently updating that part of the documentation - indeed it
 states that two IPs are used, but in fact, you end up with only one VIP for
 the API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:05, Samuel Winchenbach swinc...@gmail.com a
 écrit :

 In that documentation it looks like each openstack service gets its own
 IP (keystone is being assigned 192.168.42.103 and glance is getting
 192.168.42.104).

 I might be missing something too because in the section titled
 Configure the VIP it creates a primitive called p_api-ip (or p_ip_api if
 you read the text above it) and then in Adding Keystone resource to
 Pacemaker it creates a group with p_ip_keystone???


 Stranger yet, Configuring OpenStack Services to use High Available
 Glance API says:  For Nova, for example, if your Glance API service
 IP address is 192.168.42.104 as in the configuration explained here, you
 would use the following line in your nova.conf file : glance_api_servers
 = 192.168.42.103  But, in the step before it set:  registry_host =
 192.168.42.104?

 So I am not sure which ip you would connect to here...

 Sam



 On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 I currently have a HA OpenStack cluster running where the OpenStack
 services are kept alive with a combination of haproxy and keepalived.

 Is it possible to configure pacemaker so that all the OpenStack
 services  are served by the same IP?  With keepalived I have a virtual ip
 that can move from server to server and haproxy sends the request to a
 machine that has a live service.   This allows one (public) ip to 
 handle
 all incoming requests.  I believe it is the combination of VRRP/IPVS that
 allows this.


 Is it possible to do something similar with pacemaker?  I really
 don't want to have an IP for each service, and I don't want to make it a
 requirement that all OpenStack services must be running on the same 
 server.

 Thanks... I hope this question is clear, I feel like I sort of
 butchered the wording a bit.

 Sam

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





Re: [Openstack] HA Openstack with Pacemaker

2013-02-14 Thread Samuel Winchenbach
Hi Sébastien,

I have two hosts with public interfaces and a number of compute nodes (~8)
behind them.   I am trying to set the two public nodes up for HA and load
balancing.  I plan to run all the openstack services on these two nodes in
Active/Active where possible.   I currently have MySQL and RabbitMQ set up
in pacemaker with a drbd backend.
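
For readers who want the shape of that MySQL-on-DRBD piece, it is usually
expressed along these lines in the crm shell (a sketch with hypothetical
resource names, device and paths, not the actual configuration from this
setup):

    primitive p_drbd_mysql ocf:linbit:drbd \
            params drbd_resource="mysql" op monitor interval="15s"
    ms ms_drbd_mysql p_drbd_mysql \
            meta master-max="1" clone-max="2" notify="true"
    primitive p_fs_mysql ocf:heartbeat:Filesystem \
            params device="/dev/drbd0" directory="/var/lib/mysql" fstype="ext4"
    primitive p_mysql ocf:heartbeat:mysql op monitor interval="30s"
    group g_mysql p_fs_mysql p_mysql
    colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
    order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start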

That is a quick summary.   If there is anything else I can answer about my
setup please let me know.

Thanks,
Sam


On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han han.sebast...@gmail.comwrote:

 Well, I don't know your setup, whether you use LB for the API services or
 an active/passive pacemaker, but in the end it's not that many IPs I guess.
 I dare say that Keepalived sounds outdated to me...

 If you use pacemaker and want to have the same IP for all the resources,
 simply create a resource group with all the openstack services inside it
 (it's ugly, but if it's what you want :)). Give me more info about your
 setup and we can go further in the discussion :).
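
For what it's worth, the single-group idea above would look something like
this in the crm shell; the resource names, the VIP and the lsb script names
are assumptions, so treat it as a sketch rather than a tested configuration:

    # one shared VIP plus the API services in one group, so everything
    # runs on, and fails over with, the same node
    primitive p_vip ocf:heartbeat:IPaddr2 \
            params ip="192.168.42.100" cidr_netmask="24" \
            op monitor interval="30s"
    primitive p_keystone lsb:keystone op monitor interval="30s"
    primitive p_glance-api lsb:glance-api op monitor interval="30s"
    group g_openstack p_vip p_keystone p_glance-api

A group implies ordering and colocation between its members, which is exactly
why this forces all the services onto one host.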

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 The only real problem is that it would consume a lot of IP addresses when
 exposing the public interfaces.   I _think_ I may have the solution in your
 blog actually:
 http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
 and
 http://clusterlabs.org/wiki/Using_ldirectord

 I am trying to weigh the pros and cons of this method vs
 keepalived/haproxy and just biting the bullet and using one IP per service.


 On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han 
 han.sebast...@gmail.comwrote:

 What's the problem with having one IP per service pool?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach 
 swinc...@gmail.comwrote:

 What if the VIP is created on a different host than keystone is started
 on?   It seems like you either need to set net.ipv4.ip_nonlocal_bind =
 1 or create a colocation in pacemaker (which would either require all
 services to be on the same host, or have an ip-per-service).




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:15, Razique Mahroua razique.mahr...@gmail.com
 a écrit :

 I'm currently updating that part of the documentation - indeed it
 states that two IPs are used, but in fact, you end up with only one VIP 
 for
 the API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:05, Samuel Winchenbach swinc...@gmail.com a
 écrit :

 In that documentation it looks like each openstack service gets its own
 IP (keystone is being assigned 192.168.42.103 and glance is getting
 192.168.42.104).

 I might be missing something too because in the section titled
 "Configure the VIP" it creates a primitive called p_api-ip (or p_ip_api if
 you read the text above it) and then in "Adding Keystone resource to
 Pacemaker" it creates a group with p_ip_keystone???


 Stranger yet, "Configuring OpenStack Services to use High Available
 Glance API" says: "For Nova, for example, if your Glance API service
 IP address is 192.168.42.104 as in the configuration explained here, you
 would use the following line in your nova.conf file: glance_api_servers
 = 192.168.42.103"  But, in the step before it set: "registry_host =
 192.168.42.104"?

 So I am not sure which ip you would connect to here...

 Sam



 On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 I currently have a HA OpenStack cluster running where the OpenStack
 services are kept alive with a combination of haproxy and keepalived.

 Is it possible to configure pacemaker so that all the OpenStack
 services  are served by the same IP?  With keepalived I have a virtual 
 ip
 that can move from server to server and haproxy sends the request to a
 machine that has a live service.   This allows one (public) ip to 
 handle
 all incoming requests.  I believe it is the combination of VRRP/IPVS 
 that
 allows this.


 Is it possible to do something similar with pacemaker?  I really
 don't want to have an IP for each service, and I don't want to make it a
 requirement that all OpenStack services must be running on the same 
 server.

 Thanks... I hope this question is clear, I feel like I sort of
 butchered the wording a bit.

 Sam

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack

Re: [Openstack] HA Openstack with Pacemaker

2013-02-14 Thread Samuel Winchenbach
Well, I think I will have to go with one ip per service and forget about
load balancing.  It seems as though with LVS, routing requests internally
through the VIP is difficult (impossible?), at least with LVS-DR.  It seems
like a shame not to be able to distribute the work among the controller
nodes.
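
For completeness, the one-ip-per-service fallback is just an IPaddr2 VIP per
service, grouped with that service so each address follows its daemon
independently. A rough sketch (addresses, resource names and lsb script names
are placeholders):

    primitive p_ip_keystone ocf:heartbeat:IPaddr2 \
            params ip="192.168.42.103" cidr_netmask="24" op monitor interval="30s"
    primitive p_keystone lsb:keystone op monitor interval="30s"
    group g_keystone p_ip_keystone p_keystone

    primitive p_ip_glance ocf:heartbeat:IPaddr2 \
            params ip="192.168.42.104" cidr_netmask="24" op monitor interval="30s"
    primitive p_glance-api lsb:glance-api op monitor interval="30s"
    group g_glance p_ip_glance p_glance-api

Each service then runs on whichever controller pacemaker picks, at the cost
of one public address per service and no request-level load balancing.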


On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach swinc...@gmail.comwrote:

 Hi Sébastien,

 I have two hosts with public interfaces and a number of compute nodes (~8)
 behind them.   I am trying to set the two public nodes up for HA and load
 balancing.  I plan to run all the openstack services on these two nodes in
 Active/Active where possible.   I currently have MySQL and RabbitMQ set up
 in pacemaker with a drbd backend.

 That is a quick summary.   If there is anything else I can answer about my
 setup please let me know.

 Thanks,
 Sam


 On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han han.sebast...@gmail.comwrote:

 Well, I don't know your setup, whether you use LB for the API services or
 an active/passive pacemaker, but in the end it's not that many IPs I guess.
 I dare say that Keepalived sounds outdated to me...

 If you use pacemaker and want to have the same IP for all the resources,
 simply create a resource group with all the openstack services inside it
 (it's ugly, but if it's what you want :)). Give me more info about your
 setup and we can go further in the discussion :).

 --
 Regards,
 Sébastien Han.


 On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach 
 swinc...@gmail.comwrote:

 The only real problem is that it would consume a lot of IP addresses when
 exposing the public interfaces.   I _think_ I may have the solution in your
 blog actually:
 http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
 and
 http://clusterlabs.org/wiki/Using_ldirectord

 I am trying to weigh the pros and cons of this method vs
 keepalived/haproxy and just biting the bullet and using one IP per service.


 On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han 
 han.sebast...@gmail.comwrote:

 What's the problem with having one IP per service pool?

 --
 Regards,
 Sébastien Han.


 On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach swinc...@gmail.com
  wrote:

 What if the VIP is created on a different host than keystone is
 started on?   It seems like you either need to set 
 net.ipv4.ip_nonlocal_bind
 = 1 or create a colocation in pacemaker (which would either require all
 services to be on the same host, or have an ip-per-service).




 On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua 
 razique.mahr...@gmail.com wrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:15, Razique Mahroua razique.mahr...@gmail.com
 a écrit :

 I'm currently updating that part of the documentation - indeed it
 states that two IPs are used, but in fact, you end up with only one VIP 
 for
 the API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:05, Samuel Winchenbach swinc...@gmail.com a
 écrit :

 In that documentation it looks like each openstack service gets its own
 IP (keystone is being assigned 192.168.42.103 and glance is getting
 192.168.42.104).

 I might be missing something too because in the section titled
 "Configure the VIP" it creates a primitive called p_api-ip (or p_ip_api if
 you read the text above it) and then in "Adding Keystone resource to
 Pacemaker" it creates a group with p_ip_keystone???


 Stranger yet, "Configuring OpenStack Services to use High Available
 Glance API" says: "For Nova, for example, if your Glance API service
 IP address is 192.168.42.104 as in the configuration explained here, you
 would use the following line in your nova.conf file: glance_api_servers
 = 192.168.42.103"  But, in the step before it set: "registry_host =
 192.168.42.104"?

 So I am not sure which ip you would connect to here...

 Sam



 On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 I currently have a HA OpenStack cluster running where the OpenStack
 services are kept alive with a combination of haproxy and keepalived.

 Is it possible to configure pacemaker so that all the OpenStack
 services  are served by the same IP?  With keepalived I have a virtual 
 ip
 that can move from server to server and haproxy sends the request to a
 machine that has a live service.   This allows one (public) ip to 
 handle
 all incoming requests.  I believe it is the combination of VRRP/IPVS 
 that
 allows this.


 Is it possible to do something similar with pacemaker?  I really
 don't want to have an IP for each service, 

Re: [Openstack] HA Openstack with Pacemaker

2013-02-13 Thread JuanFra Rodriguez Cardoso
Hi Samuel:

Yes, it's possible with pacemaker. Look at
http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

Regards,
JuanFra


2013/2/13 Samuel Winchenbach swinc...@gmail.com

 Hi All,

 I currently have a HA OpenStack cluster running where the OpenStack
 services are kept alive with a combination of haproxy and keepalived.

 Is it possible to configure pacemaker so that all the OpenStack services
  are served by the same IP?  With keepalived I have a virtual ip that can
 move from server to server and haproxy sends the request to a machine that
 has a live service.   This allows one (public) ip to handle all incoming
 requests.  I believe it is the combination of VRRP/IPVS that allows this.


 Is it possible to do something similar with pacemaker?  I really don't
 want to have an IP for each service, and I don't want to make it a
 requirement that all OpenStack services must be running on the same server.

 Thanks... I hope this question is clear, I feel like I sort of butchered
 the wording a bit.

 Sam
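
As a concrete illustration of that keepalived/haproxy arrangement, the moving
parts are a VRRP instance owning the public VIP and an haproxy frontend bound
to that VIP balancing across whichever backends pass their health checks. A
minimal sketch with placeholder addresses (binding to a VIP that currently
lives on the other node is also where net.ipv4.ip_nonlocal_bind comes in, as
discussed elsewhere in the thread):

    # keepalived.conf (sketch) -- the VIP floats between the two controllers
    vrrp_instance VI_API {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        virtual_ipaddress {
            192.168.42.100/24
        }
    }

    # haproxy.cfg (sketch) -- one listen block per OpenStack API, e.g. keystone
    listen keystone_public
        bind 192.168.42.100:5000
        balance roundrobin
        option tcpka
        server ctl1 192.168.42.1:5000 check
        server ctl2 192.168.42.2:5000 check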

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-13 Thread Samuel Winchenbach
In that documentation it looks like each openstack service gets its own IP
(keystone is being assigned 192.168.42.103 and glance is getting
192.168.42.104).

I might be missing something too because in the section titled "Configure
the VIP" it creates a primitive called p_api-ip (or p_ip_api if you read
the text above it) and then in "Adding Keystone resource to Pacemaker" it
creates a group with p_ip_keystone???


Stranger yet, "Configuring OpenStack Services to use High Available Glance
API" says: "For Nova, for example, if your Glance API service IP address
is 192.168.42.104 as in the configuration explained here, you would use the
following line in your nova.conf file: glance_api_servers =
192.168.42.103"  But, in the step before it set: "registry_host =
192.168.42.104"?

So I am not sure which ip you would connect to here...

Sam
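
For what it's worth, if the guide really does end up with a single API VIP (as
Razique notes elsewhere in the thread), the two settings would simply point at
the same address. A sketch assuming that VIP is 192.168.42.104 and the default
Glance ports:

    # nova.conf (sketch) -- point nova at the Glance API VIP
    glance_api_servers = 192.168.42.104:9292

    # glance-api.conf (sketch) -- glance-api reaches the registry via the same VIP
    registry_host = 192.168.42.104
    registry_port = 9191

The 192.168.42.103 value in the guide would then look like a leftover from an
earlier two-IP layout.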



On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

 Hi All,

 I currently have a HA OpenStack cluster running where the OpenStack
 services are kept alive with a combination of haproxy and keepalived.

 Is it possible to configure pacemaker so that all the OpenStack services
  are served by the same IP?  With keepalived I have a virtual ip that can
 move from server to server and haproxy sends the request to a machine that
 has a live service.   This allows one (public) ip to handle all incoming
 requests.  I believe it is the combination of VRRP/IPVS that allows this.


 Is it possible to do something similar with pacemaker?  I really don't
 want to have an IP for each service, and I don't want to make it a
 requirement that all OpenStack services must be running on the same server.

 Thanks... I hope this question is clear, I feel like I sort of butchered
 the wording a bit.

 Sam

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-13 Thread Razique Mahroua
I'm currently updating that part of the documentation - indeed it states that two IPs are used, but in fact, you end up with only one VIP for the API service.
I'll send the patch tonight

Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 13 Feb 2013, at 20:05, Samuel Winchenbach swinc...@gmail.com wrote:

In that documentation it looks like each openstack service gets its own IP (keystone is being assigned 192.168.42.103 and glance is getting 192.168.42.104).

I might be missing something too because in the section titled "Configure the VIP" it creates a primitive called "p_api-ip" (or p_ip_api if you read the text above it) and then in "Adding Keystone resource to Pacemaker" it creates a group with "p_ip_keystone"???

Stranger yet, "Configuring OpenStack Services to use High Available Glance API" says: "For Nova, for example, if your Glance API service IP address is 192.168.42.104 as in the configuration explained here, you would use the following line in your nova.conf file: glance_api_servers = 192.168.42.103" But, in the step before it set: "registry_host = 192.168.42.104"?

So I am not sure which ip you would connect to here...
Sam
On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso juanfra.rodriguez.card...@gmail.com wrote:
Hi Samuel:
Yes, it's possible with pacemaker. Look at http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

Regards,
JuanFra
2013/2/13 Samuel Winchenbach swinc...@gmail.com

Hi All,
I currently have a HA OpenStack cluster running where the OpenStack services are kept alive with a combination of haproxy and keepalived.


Is it possible to configure pacemaker so that all the OpenStack services are served by the same IP? With keepalived I have a virtual ip that can move from server to server and haproxy sends the request to a machine that has a "live" service.  This allows one (public) ip to handle all incoming requests. I believe it is the combination of VRRP/IPVS that allows this.


Is it possible to do something similar with pacemaker? I really don't want to have an IP for each service, and I don't want to make it a requirement that all OpenStack services must be running on the same server.


Thanks... I hope this question is clear, I feel like I sort of butchered the wording a bit.
Sam
___
Mailing list: https://launchpad.net/~openstack
Post to   : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help  : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-13 Thread Razique Mahroua
There we go
https://review.openstack.org/#/c/21581/

Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 13 Feb 2013, at 20:15, Razique Mahroua razique.mahr...@gmail.com wrote:

I'm currently updating that part of the documentation - indeed it states that two IPs are used, but in fact, you end up with only one VIP for the API service.
I'll send the patch tonight

Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 13 Feb 2013, at 20:05, Samuel Winchenbach swinc...@gmail.com wrote:

In that documentation it looks like each openstack service gets its own IP (keystone is being assigned 192.168.42.103 and glance is getting 192.168.42.104).

I might be missing something too because in the section titled "Configure the VIP" it creates a primitive called "p_api-ip" (or p_ip_api if you read the text above it) and then in "Adding Keystone resource to Pacemaker" it creates a group with "p_ip_keystone"???

Stranger yet, "Configuring OpenStack Services to use High Available Glance API" says: "For Nova, for example, if your Glance API service IP address is 192.168.42.104 as in the configuration explained here, you would use the following line in your nova.conf file: glance_api_servers = 192.168.42.103" But, in the step before it set: "registry_host = 192.168.42.104"?

So I am not sure which ip you would connect to here...
Sam
On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso juanfra.rodriguez.card...@gmail.com wrote:
Hi Samuel:
Yes, it's possible with pacemaker. Look at http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

Regards,
JuanFra
2013/2/13 Samuel Winchenbach swinc...@gmail.com

Hi All,
I currently have a HA OpenStack cluster running where the OpenStack services are kept alive with a combination of haproxy and keepalived.


Is it possible to configure pacemaker so that all the OpenStack services are served by the same IP? With keepalived I have a virtual ip that can move from server to server and haproxy sends the request to a machine that has a "live" service.  This allows one (public) ip to handle all incoming requests. I believe it is the combination of VRRP/IPVS that allows this.


Is it possible to do something similar with pacemaker? I really don't want to have an IP for each service, and I don't want to make it a requirement that all OpenStack services must be running on the same server.


Thanks... I hope this question is clear, I feel like I sort of butchered the wording a bit.
Sam
___
Mailing list: https://launchpad.net/~openstack
Post to   : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help  : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HA Openstack with Pacemaker

2013-02-13 Thread Samuel Winchenbach
What if the VIP is created on a different host than keystone is started on?
  It seems like you either need to set net.ipv4.ip_nonlocal_bind = 1 or
create a colocation in pacemaker (which would either require all services
to be on the same host, or have an ip-per-service).
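
Concretely, the two options come down to a sysctl versus a constraint. A
sketch (the resource names are hypothetical and assume an IPaddr2 VIP
primitive and a keystone primitive already exist):

    # option 1: let daemons bind the VIP even when it currently lives on the other node
    # (persist it in /etc/sysctl.conf or /etc/sysctl.d/ as well)
    sysctl -w net.ipv4.ip_nonlocal_bind=1

    # option 2: crm shell colocation tying keystone to whichever node holds the VIP
    crm configure colocation c_keystone_with_vip inf: p_keystone p_ip_keystone

Option 2 is what pushes you toward either one big group or one IP per service,
as discussed above.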




On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua
razique.mahr...@gmail.comwrote:

 There we go
 https://review.openstack.org/#/c/21581/

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:15, Razique Mahroua razique.mahr...@gmail.com a
 écrit :

 I'm currently updating that part of the documentation - indeed it states
 that two IPs are used, but in fact, you end up with only one VIP for the
 API service.
 I'll send the patch tonight

 Razique Mahroua - Nuage & Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 13 févr. 2013 à 20:05, Samuel Winchenbach swinc...@gmail.com a écrit
 :

 In that documentation it looks like each openstack service gets its own IP
 (keystone is being assigned 192.168.42.103 and glance is getting
 192.168.42.104).

 I might be missing something too because in the section titled "Configure
 the VIP" it creates a primitive called p_api-ip (or p_ip_api if you read
 the text above it) and then in "Adding Keystone resource to Pacemaker" it
 creates a group with p_ip_keystone???


 Stranger yet, "Configuring OpenStack Services to use High Available
 Glance API" says: "For Nova, for example, if your Glance API service IP
 address is 192.168.42.104 as in the configuration explained here, you would
 use the following line in your nova.conf file: glance_api_servers =
 192.168.42.103"  But, in the step before it set: "registry_host =
 192.168.42.104"?

 So I am not sure which ip you would connect to here...

 Sam



 On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso 
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Samuel:

 Yes, it's possible with pacemaker. Look at
 http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.

 Regards,
 JuanFra


 2013/2/13 Samuel Winchenbach swinc...@gmail.com

  Hi All,

 I currently have a HA OpenStack cluster running where the OpenStack
 services are kept alive with a combination of haproxy and keepalived.

 Is it possible to configure pacemaker so that all the OpenStack services
  are served by the same IP?  With keepalived I have a virtual ip that can
 move from server to server and haproxy sends the request to a machine that
 has a live service.   This allows one (public) ip to handle all incoming
 requests.  I believe it is the combination of VRRP/IPVS that allows this.


 Is it possible to do something similar with pacemaker?  I really don't
 want to have an IP for each service, and I don't want to make it a
 requirement that all OpenStack services must be running on the same server.

 Thanks... I hope this question is clear, I feel like I sort of butchered
 the wording a bit.

 Sam

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp