Re: [Openstack] Connecting to Keystone from a different port using HAproxy

2013-06-13 Thread Samuel Winchenbach
I may have found a solution to my problem, but I am not sure it will help
you much.

I created an entry in /etc/hosts that names my internal IP "local-internal"
and then bound keystone to that IP.  Next I configured the pacemaker resource
agent to check "local-internal", which of course resolves to a different
address on each node.  It seems to work quite well.
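
Roughly, the pieces on each node look like this -- the addresses, file paths,
and resource-agent parameters below are illustrative, not copied from my
actual config:

# /etc/hosts on node1 -- every node maps the name to its own internal address
10.80.255.11    local-internal

# /etc/keystone/keystone.conf on node1 -- keystone binds to the node's own
# address on the default ports, while haproxy owns 35357/5000 on the VIP
[DEFAULT]
bind_host = 10.80.255.11
admin_port = 35357
public_port = 5000

# pacemaker -- the monitor action can now hit http://local-internal:5000 on
# whichever node it runs on (agent name and parameters are examples only)
crm configure primitive p_keystone ocf:openstack:keystone \
    params config="/etc/keystone/keystone.conf" \
           os_auth_url="http://local-internal:5000/v2.0" \
    op monitor interval="30s" timeout="30s"
crm configure clone cl_keystone p_keystone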

Sorry that this probably doesn't help you,
Sam


On Thu, Jun 13, 2013 at 10:19 AM, Aaron Knister wrote:

> Hi Sam
>
> I don't have a fix, but I actually had the same problem for a different
> reason.  I was trying to run keystone via apache and listen on multiple
> ports to support regular auth and external auth. I couldn't figure out how
> to map additional ports within keystone. I'm very much interested in the
> solution here.
>
> Sent from my iPhone
>
> On Jun 13, 2013, at 9:27 AM, Samuel Winchenbach wrote:
>
> Hi All,
>
> I am attempting to set up a high-availability openstack cluster.
>  Currently, using pacemaker, I create a Virtual IP for all the highly
> available services, launch haproxy to proxy all the requests, and clone
> keystone to all the nodes.  The idea is that requests come into
> haproxy and are load balanced across all the nodes.
>
>
> To do this I have keystone listen on 26000 for admin and 26001 for
> public.  haproxy listens on 35357 and 5000 respectively (these ports are
> bound to the VIP).  The problem with this setup is that my log is filling
> (MB/min) with this warning:
>
> 2013-06-13 09:20:18 INFO [access] 127.0.0.1 - - [13/Jun/2013:13:20:18
> +0000] "GET http://10.80.255.1:35357/v2.0/users HTTP/1.0" 200 915
> 2013-06-13 09:20:18  WARNING [keystone.contrib.stats.core] Unable to
> resolve API as either public or admin: 10.80.255.1:35357
> ...
> ...
>
> where 10.80.255.1 is my VIP for highly available services.  I traced down
> that module and added a few lines of code for debugging, and it turns out
> that it checks whether the incoming connection matches a port in the
> config file.  In my case it does not.
>
> I cannot just bind keystone to the internal IP and leave the ports at
> their defaults, because the way pacemaker checks whether services are
> alive is by sending requests to the service it is monitoring, and I do not
> want to send requests to the VIP because any instance of keystone could
> respond.  Basically I would have to write a pacemaker rule for each node,
> and it would become messy quite quickly.
>
> Does anyone see something I could do differently, or a fix for my current
> situation?
>
> Thanks,
> Sam
>


Re: [Openstack] Connecting to Keystone from a different port using HAproxy

2013-06-13 Thread Aaron Knister
Hi Sam

I don't have a fix, but I actually had the same problem for a different
reason.  I was trying to run keystone via apache and listen on multiple ports
to support regular auth and external auth. I couldn't figure out how to map
additional ports within keystone. I'm very much interested in the solution
here.
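
For what it's worth, the apache side of what I was attempting looks roughly
like this -- the paths and the extra port are illustrative, and the
external-auth directives are stripped out:

# Standard keystone ports plus one extra port for external auth
Listen 5000
Listen 35357
Listen 5001

<VirtualHost *:5000>
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
</VirtualHost>

<VirtualHost *:35357>
    WSGIScriptAlias / /var/www/cgi-bin/keystone/admin
</VirtualHost>

# This vhost would carry the REMOTE_USER / external-auth configuration, but
# keystone only knows about public_port and admin_port, so it cannot tell
# what API requests arriving on 5001 belong to.
<VirtualHost *:5001>
    WSGIScriptAlias / /var/www/cgi-bin/keystone/main
</VirtualHost>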

Sent from my iPhone

On Jun 13, 2013, at 9:27 AM, Samuel Winchenbach wrote:

> Hi All,
> 
> I am attempting to set up a high-availability openstack cluster.  Currently, 
> using pacemaker, I create a Virtual IP for all the highly available services, 
> launch haproxy to proxy all the requests, and clone keystone to all the nodes. 
>  The idea is that requests come into haproxy and are load balanced across 
> all the nodes.
> 
> 
> To do this I have keystone listen on 26000 for admin and 26001 for public.  
> haproxy listens on 35357 and 5000 respectively (these ports are bound to the 
> VIP).  The problem with this setup is that my log is filling (MB/min) with 
> this warning:
> 
> 2013-06-13 09:20:18 INFO [access] 127.0.0.1 - - [13/Jun/2013:13:20:18 
> +0000] "GET http://10.80.255.1:35357/v2.0/users HTTP/1.0" 200 915
> 2013-06-13 09:20:18  WARNING [keystone.contrib.stats.core] Unable to resolve 
> API as either public or admin: 10.80.255.1:35357
> ...
> ...
> 
> where 10.80.255.1 is my VIP for highly available services.  I traced down 
> that module and added a few lines of code for debugging, and it turns out 
> that it checks whether the incoming connection matches a port in the config 
> file.  In my case it does not.
> 
> I cannot just bind keystone to the internal IP and leave the ports at their 
> defaults, because the way pacemaker checks whether services are alive is by 
> sending requests to the service it is monitoring, and I do not want to send 
> requests to the VIP because any instance of keystone could respond.  
> Basically I would have to write a pacemaker rule for each node, and it would 
> become messy quite quickly.
> 
> Does anyone see something I could do differently, or a fix for my current 
> situation?  
> 
> Thanks,
> Sam


[Openstack] Connecting to Keystone from a different port using HAproxy

2013-06-13 Thread Samuel Winchenbach
Hi All,

I am attempting to set up a high-availability openstack cluster.
 Currently, using pacemaker, I create a Virtual IP for all the highly
available services, launch haproxy to proxy all the requests, and clone
keystone to all the nodes.  The idea is that requests come into
haproxy and are load balanced across all the nodes.
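
Roughly, the pacemaker layout is something like this (resource names, agent
names, and parameters below are illustrative, not pasted from my config):

# VIP for the load-balanced services
crm configure primitive p_vip ocf:heartbeat:IPaddr2 \
    params ip="10.80.255.1" cidr_netmask="24" \
    op monitor interval="30s"

# haproxy follows the VIP around (assuming a distro init script for lsb:haproxy)
crm configure primitive p_haproxy lsb:haproxy \
    op monitor interval="30s"
crm configure group g_loadbalancer p_vip p_haproxy

# keystone is cloned so an instance runs on every node
crm configure primitive p_keystone ocf:openstack:keystone \
    op monitor interval="30s"
crm configure clone cl_keystone p_keystone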


To do this I have keystone listen on 26000 for admin and 26001 for public.
 haproxy listens on 35357 and 5000 respectively (these ports are bound to
the VIP).  The problem with this setup is that my log is filling (MB/min)
with this warning:

2013-06-13 09:20:18 INFO [access] 127.0.0.1 - - [13/Jun/2013:13:20:18
+0000] "GET http://10.80.255.1:35357/v2.0/users HTTP/1.0" 200 915
2013-06-13 09:20:18  WARNING [keystone.contrib.stats.core] Unable to
resolve API as either public or admin: 10.80.255.1:35357
...
...

where 10.80.255.1 is my VIP for highly available services.  I traced down
that module and added a few lines of code for debugging, and it turns out
that it checks whether the incoming connection matches a port in the
config file.  In my case it does not.
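
For reference, the relevant bits of the current setup look roughly like this
(node addresses are made up, and the config is trimmed down to the ports):

# /etc/keystone/keystone.conf on every node
[DEFAULT]
admin_port = 26000
public_port = 26001

# haproxy.cfg -- frontends bound to the VIP on the standard keystone ports
listen keystone_admin 10.80.255.1:35357
    balance roundrobin
    server node1 10.80.255.11:26000 check
    server node2 10.80.255.12:26000 check

listen keystone_public 10.80.255.1:5000
    balance roundrobin
    server node1 10.80.255.11:26001 check
    server node2 10.80.255.12:26001 check

So every request keystone actually sees refers to 10.80.255.1:35357 or :5000,
neither of which matches 26000 or 26001, which as far as I can tell is why the
stats module logs the warning.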

I cannot just bind keystone to the internal IP and leave the ports at their
defaults, because the way pacemaker checks whether services are alive is
by sending requests to the service it is monitoring, and I do not want to
send requests to the VIP because any instance of keystone could respond.
Basically I would have to write a pacemaker rule for each node, and it
would become messy quite quickly.

Does anyone see something I could do differently, or a fix for my current
situation?

Thanks,
Sam
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp