Any idea why calling keystone from test1 always gets routed to test2,
and calling it from test2 always gets routed to test1? It
wouldn't be a big deal, but if I manually stop keystone on one of the hosts
then requests from the other fail.
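For what it's worth, the strict alternation described here is what LVS's default round-robin (rr) scheduler produces with two real servers. A sketch of how one might inspect the scheduler on the director, assuming the 10.21.21.1 VIP from later in the thread and keystone's conventional port 5000 (the port is an assumption, not stated in the thread):

```shell
# List the current LVS virtual service table (requires ipvsadm and root).
# With the rr scheduler, successive connections alternate between real
# servers, which would explain test1 -> test2 and test2 -> test1.
ipvsadm -L -n

# The scheduler can be changed on an existing virtual service, e.g. to
# weighted least-connection (VIP/port are assumptions from the thread):
ipvsadm -E -t 10.21.21.1:5000 -s wlc
```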
Thanks,
Sam
On Mon, Feb 18, 2013 at 2:52 AM,
Hi,
Good to hear that you finally managed to get it working. Usually the
postrouting rule is more for clients that need to be routed.
Cheers!
On 16 Feb 2013, at 03:06, Samuel Winchenbach swinc...@gmail.com wrote:
Well I got it to work. I was being stupid, and forgot to change over the endpoints in keystone.
Hmm, I don't see the problem; it's possible to load-balance VIPs with LVS,
they are just IPs... Can I see your conf?
--
Regards,
Sébastien Han.
On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach swinc...@gmail.com wrote:
Well, I think I will have to go with one IP per service and forget
Sure... I have undone these settings but I saved a copy:
two hosts:
test1 eth0: 10.21.0.1/16 eth1: 130.x.x.x/24
test2 eth0: 10.21.0.2/16 eth1: 130.x.x.x/24
VIP: 10.21.21.1 (just for testing, later I would add a 130.x.x.x/24 VIP
for public APIs)
keystone is bound to 10.21.0.1 on test1 and
OK, but why direct routing instead of NAT? If the public IPs are _only_
on LVS there is no point in using LVS-DR.
LVS has the public IPs and redirects to the private IPs, this _must_ work.
Did you try NAT? Or at least can you give it a shot?
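The LVS-NAT setup being suggested can be sketched with ipvsadm, using the VIP and real-server addresses given earlier in the thread; keystone's port 5000 is an assumption on my part:

```shell
# Minimal LVS-NAT sketch for the keystone service in this thread.
# Assumed: VIP 10.21.21.1, real servers 10.21.0.1 / 10.21.0.2, port 5000.
ipvsadm -A -t 10.21.21.1:5000 -s rr                  # create virtual service
ipvsadm -a -t 10.21.21.1:5000 -r 10.21.0.1:5000 -m   # -m = masquerading (NAT)
ipvsadm -a -t 10.21.21.1:5000 -r 10.21.0.2:5000 -m

# The director must also forward packets for NAT to work:
sysctl -w net.ipv4.ip_forward=1
```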
--
Regards,
Sébastien Han.
On Fri, Feb 15, 2013 at 3:55
I didn't give NAT a shot because it didn't seem as well documented.
I will give NAT a shot. Will I need to enable iptables and add a rule
to the nat table? None of the documentation mentioned that, but every
time I have ever done NAT I had to set up a rule like... iptables -t nat -A
Well if you follow my article, you will get LVS-NAT running. It's fairly
easy, no funky stuff. Yes you will probably need the postrouting rule, as
usual :). Let me know how it goes ;)
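The "usual" postrouting rule referred to here would look something like the following, taking the private subnet and public interface from the addresses listed earlier in the thread (adjust to the real topology):

```shell
# Masquerade traffic from the private subnet leaving via the public NIC,
# so replies from the real servers return through the director.
iptables -t nat -A POSTROUTING -s 10.21.0.0/16 -o eth1 -j MASQUERADE
```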
--
Regards,
Sébastien Han.
On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach swinc...@gmail.com wrote:
Hrmmm it isn't going so well:
root@test1# ip a s dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:25:90:10:00:78 brd ff:ff:ff:ff:ff:ff
inet 10.21.0.1/16 brd 10.21.255.255 scope global eth0
inet 10.21.1.1/16 brd 10.21.255.255 scope
Well I got it to work. I was being stupid, and forgot to change over the
endpoints in keystone.
One thing I find interesting is that if I call keystone user-list from
test1 it _always_ sends the request to test2 and vice versa.
Also I did not need to add the POSTROUTING rule... I am not sure
What's the problem with having one IP per service pool?
--
Regards,
Sébastien Han.
On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach swinc...@gmail.com wrote:
What if the VIP is created on a different host than keystone is started
on? It seems like you either need to set
The only real problem is that it would consume a lot of IP addresses when
exposing the public interfaces. I _think_ I may have the solution in your
blog actually:
http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
and
http://clusterlabs.org/wiki/Using_ldirectord
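The ldirectord approach from those two links boils down to a small config file describing the virtual service and its real servers. A hypothetical ldirectord.cf matching the VIP and real servers mentioned in this thread (port 5000 and the check parameters are my assumptions):

```shell
# Write a minimal ldirectord configuration for the keystone VIP.
cat > /etc/ha.d/ldirectord.cf <<'EOF'
checktimeout=10
checkinterval=5
quiescent=no
virtual=10.21.21.1:5000
        real=10.21.0.1:5000 masq
        real=10.21.0.2:5000 masq
        scheduler=rr
        protocol=tcp
        checktype=connect
EOF
```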
I am trying to
Well, I don't know your setup, whether you use LB for the API services or
an active/passive pacemaker, but in the end it's not that many IPs, I guess.
I dare say Keepalived sounds outdated to me...
If you use pacemaker and want to have the same IP for all the resources
simply create a
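A sketch of what that could look like in the crm shell: one shared VIP plus ldirectord, grouped so they move together. The resource names are hypothetical; the IP and netmask are from the thread:

```shell
# One VIP for all load-balanced services, managed by pacemaker.
crm configure primitive p_api-ip ocf:heartbeat:IPaddr2 \
    params ip=10.21.21.1 cidr_netmask=16 \
    op monitor interval=10s

# ldirectord does the actual load balancing behind that VIP.
crm configure primitive p_ldirectord ocf:heartbeat:ldirectord \
    op monitor interval=20s

# Keep the VIP and the director on the same node.
crm configure group g_lvs p_api-ip p_ldirectord
```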
Hi Sébastien,
I have two hosts with public interfaces, with a number (~8) of compute nodes
behind them. I am trying to set up the two public nodes for HA and load
balancing; I plan to run all the OpenStack services on these two nodes in
Active/Active where possible. I currently have MySQL and
Well, I think I will have to go with one IP per service and forget about
load balancing. It seems as though with LVS, routing requests internally
through the VIP is difficult (impossible?), at least with LVS-DR. It seems
like a shame not to be able to distribute the work among the controller
Hi Samuel:
Yes, it's possible with pacemaker. Look at
http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.
Regards,
JuanFra
2013/2/13 Samuel Winchenbach swinc...@gmail.com
Hi All,
I currently have a HA OpenStack cluster running where the OpenStack
services are kept alive
In that documentation it looks like each OpenStack service gets its own IP
(keystone is being assigned 192.168.42.103 and glance is getting
192.168.42.104).
I might be missing something too, because in the section titled Configure
the VIP it creates a primitive called p_api-ip (or p_ip_api if you
I'm currently updating that part of the documentation. Indeed, it states that two IPs are used, but in fact you end up with only one VIP for the API services. I'll send the patch tonight.
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
On 13 Feb 2013, at 20:05, Samuel
There we go: https://review.openstack.org/#/c/21581/
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
On 13 Feb 2013, at 20:15, Razique Mahroua razique.mahr...@gmail.com wrote: I'm currently updating that part of the documentation - indeed it states that two IPs are used,
What if the VIP is created on a different host than keystone is started on?
It seems like you either need to set net.ipv4.ip_nonlocal_bind = 1 or
create a colocation in pacemaker (which would either require all services
to be on the same host, or have an ip-per-service).
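Both options mentioned here can be sketched in a few lines; the pacemaker resource names below are hypothetical:

```shell
# Option 1: let daemons bind an address not currently present on the host,
# so keystone can listen on the VIP regardless of where the VIP lands.
sysctl -w net.ipv4.ip_nonlocal_bind=1
# Persist across reboots:
echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf

# Option 2: pin the VIP to wherever keystone runs (crm shell; resource
# names p_api-ip and p_keystone are assumptions):
crm configure colocation vip-with-keystone inf: p_api-ip p_keystone
```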
On Wed, Feb 13, 2013