TCP connections are what is being load balanced, not RMI requests. If
several RMI requests are made over a single TCP connection, they'll all go
to the same server.
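
To make that concrete, here is a minimal client sketch; the Echo interface
and the "echo" registry binding are hypothetical, so substitute your own
remote service. Every invocation on the stub typically reuses one pooled
JRMP connection, which is why haproxy never gets a chance to rotate
backends:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class RmiClientDemo {
    // Hypothetical remote interface; use your own service's interface.
    public interface Echo extends Remote {
        String ping() throws RemoteException;
    }

    public static void main(String[] args) throws Exception {
        // Look up the stub through the haproxy front end.
        Registry registry = LocateRegistry.getRegistry("10.80.0.55", 1099);
        Echo echo = (Echo) registry.lookup("echo");

        // All of these calls typically travel over the same pooled JRMP
        // connection, so haproxy sees a single TCP connection and every
        // request lands on the same backend server.
        for (int i = 0; i < 10; i++) {
            System.out.println(echo.ping());
        }
    }
}

Note also that the stub returned by lookup() embeds whatever hostname the
backend advertises (java.rmi.server.hostname), so invocations may bypass
the proxy entirely unless that property points back at it.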

On Sun, Jul 25, 2010 at 11:43 PM, Barak Yaish <[email protected]> wrote:

> Hello all,
>
> I have 2 RMI servers fronted by haproxy 1.4.8; here is the config file:
>
> global
>         stats socket /tmp/stats
>         pidfile /var/run/haproxy.pid
>         daemon
> defaults
>         mode tcp
>         option dontlognull
>         retries 3
>         option redispatch
>         maxconn 2000
>         contimeout 5000
>         clitimeout 50000
>         srvtimeout 50000
> listen  RMI 10.80.0.55:1099
>         mode tcp
>         balance roundrobin
>         server dev103 10.80.0.206:1099
>         server dev105 10.80.0.212:1099
> listen  OTHER 10.80.0.55:10999
>         mode http
>         stats enable
>         stats uri     /admin?stats
>
> I've created an RMI client against the haproxy machine, and function
> invocations are indeed directed to the backend servers. The problem is
> that after the client is created, all traffic is directed to only one
> server, and no load balancing occurs. Re-creating the client may result
> in traffic being directed to the other server, but still only to that
> server.
>
> Question No. 1: Is my config file wrong?
>
> I'm trying to figure out what I'm doing wrong, since my config file
> looks quite simple.
>
> Question No. 2: Is there a way to configure haproxy to dump data regarding
> the traffic it directs to a simple file rather than a syslog server? Trying
> to run with -d displayed some lines which do not tell me a lot:
>
> Available polling systems :
>      sepoll : pref=400,  test result OK
>       epoll : pref=300,  test result OK
>        poll : pref=200,  test result OK
>      select : pref=150,  test result OK
> Total: 4 (4 usable), will use sepoll.
> Using sepoll() as the polling mechanism.
> 00000000:RMI.accept(0004)=0007 from [172.16.0.190:4700]
> 00000000:RMI.srvcls[0007:0008]
> 00000000:RMI.clicls[0007:0008]
> 00000000:RMI.closed[0007:0008]
>
>
>
> Thanks.
>
