Let me explain a bit ;)
If node1 and node2 are in the same cluster group (let's say "default"), and you
do:
cluster:feature-install default webconsole
then, the webconsole will be installed on both node1 and node2.
So the /system/console path will be available on both node1 and node2, and it
doesn't make sense to proxy from node1 to node2.
Let's take another scenario: you install a web application (let's say /foo) on
node1, node2, node3 (via cluster:feature-install group foo).
Then, you want to have node4 and node5 proxying and load balancing requests to
/foo. So, on node4 and node5, you don't install the foo feature; you just
install http-balancer.
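
A minimal sketch of that scenario, assuming a cluster group named "foo" and a
hypothetical feature name my-webapp for the /foo application (both names are
placeholders, not from the thread):

```
# run once, on any member of cluster group "foo":
# installs the web application on node1, node2 and node3
cluster:feature-install foo my-webapp

# run locally on node4 and on node5 (not via cluster:feature-install),
# so only those two nodes act as the proxy / load balancer
feature:install cellar-http-balancer
```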
Back to your case: you should install webconsole only on node1 or node2 (using
feature:install on one node, instead of cluster:feature-install).
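
For example, in the Karaf console on node1 only:

```
# local (non-clustered) install: webconsole runs on node1 only
# and is not propagated to the other members of the cluster group
feature:install webconsole
```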
Agree?
Regards
JB
On 03/14/2017 10:06 PM, glopez wrote:
Yes, that's the case.
I think I'm confused, what I'm trying to do is the following:
      load-balancer
       /         \
   node1        node2
Requests are made to load balancer:
load-balancer:8181/system/console
And then the requests are proxied to node1 or node2
And if one of the nodes (1 or 2) dies, all requests go to the other.
Do I need more than one cluster group to achieve that?
Do I need to install webconsole on the load-balancer?
I have tried with 1 group (default), did the following:
in the 3 nodes:
feature:install http
feature:install http-whiteboard
feature:repo-add cellar
feature:install cellar
Then, in one of the nodes I did:
cluster:feature-install default cellar-http-balancer
Finally I installed webconsole only on node1.
The problem there is that if node1 dies and I request
load-balancer:8181/system/console, I get a Connection timed out error.
How can I solve that?
Thank you for your time!
--
Sent from the Karaf - User mailing list archive at Nabble.com.
--
Jean-Baptiste Onofré
[email protected]
http://blog.nanthrax.net
Talend - http://www.talend.com