Hi Alexandre,

On Thu, Oct 22, 2009 at 01:52:05PM +0000, alexandre oliveira wrote:
> 
> Willy, I did what you have suggested.

Thanks.

(...)
> holb001:~/haproxy-1.3.22 # haproxy -vv
> HA-Proxy version 1.3.22 2009/10/14
> Copyright 2000-2009 Willy Tarreau <[email protected]>
> 
> Build options :
>   TARGET  = linux26
>   CPU     = generic
>   CC      = gcc
>   CFLAGS  = -O2 -g
>   OPTIONS =
> 
> Default settings :
>   maxconn = 2000, maxpollevents = 200
> 
> Available polling systems :
>      sepoll : pref=400,  test result OK
>       epoll : pref=300,  test result OK
>        poll : pref=200,  test result OK
>      select : pref=150,  test result OK
> Total: 4 (4 usable), will use sepoll.

OK, pretty common.

> holb001:~ # uname -a
> Linux holb001 2.6.16.60-0.37_f594963d-default #1 SMP Mon Mar 23 13:39:48 UTC 
> 2009 s390x s390x s390x GNU/Linux

Less common ;-)

(...)
> # I've started haproxy and did a test. The result is as follows:
> holb001:~/haproxy-1.3.22 # haproxy -f /etc/haproxy/haproxy.cfg -db
> Available polling systems :
>      sepoll : pref=400,  test result OK
>       epoll : pref=300,  test result OK
>        poll : pref=200,  test result OK
>      select : pref=150,  test result OK
> Total: 4 (4 usable), will use sepoll.
> Using sepoll() as the polling mechanism.
> 00000000:uat.accept(0005)=0007 from [192.168.0.10:4047]
> 00000001:uat.accept(0005)=0009 from [192.168.0.10:4048]
> 00000002:uat.accept(0005)=000b from [192.168.0.10:4049]
> 00000003:uat.accept(0005)=000d from [192.168.0.10:4050]
> 00000004:uat.accept(0005)=000f from [192.168.0.10:4051]
> 00000001:uat.srvcls[0009:000a]
> 00000001:uat.clicls[0009:000a]
> 00000001:uat.closed[0009:000a]
> 00000000:uat.srvcls[0007:0008]
> 00000000:uat.clicls[0007:0008]
> 00000000:uat.closed[0007:0008]
> Segmentation fault

Pretty fast to die... I really don't like that at all; it makes
me think of some uninitialized variable which has a visible effect
on your arch only.

> Remember that this server is a zLinux, I mean, it runs under a mainframe.

Yes, but that's not an excuse for crashing. Do you have gdb on this
machine? Would it be possible then to run haproxy inside gdb and
check where it dies, and with what variables, pointers, etc.?
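
In case it helps, a rough sketch of such a gdb session (reusing the
command line from your test above; the gdb commands are the usual
ones, adjust paths as needed):

```shell
# Run haproxy under gdb with the same arguments as before:
gdb --args ./haproxy -f /etc/haproxy/haproxy.cfg -db
# Then, inside gdb:
#   (gdb) run            # reproduce the traffic until the segfault
#   (gdb) bt full        # backtrace with local variables of each frame
#   (gdb) info registers # register state at the faulting instruction
```

The "bt full" output is what would be most useful to see.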

> Suggestions?

Oh yes, I'm thinking about something. Could you send your process
a SIGQUIT while it's waiting for a connection? This will dump all
the memory pools, and we'll see if some of them are merged. It is
possible that some pointers are initialized and never overwritten
on other archs, but reused on yours due to different structure sizes.
This has happened once already. So just run "killall -QUIT haproxy"
and send the output. It should look like this:

Dumping pools usage.
  - Pool pipe (16 bytes) : 0 allocated (0 bytes), 0 used, 2 users [SHARED]
  - Pool capture (64 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool task (80 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool hdr_idx (416 bytes) : 0 allocated (0 bytes), 0 used, 2 users [SHARED]
  - Pool session (816 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool requri (1024 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool buffer (32864 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
Total: 7 pools, 0 bytes allocated, 0 used.
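
To illustrate what I mean by merging (a made-up sketch, not haproxy's
actual pool code, with invented struct sizes): if an allocator rounds
each object size up to a 16-byte boundary and merges pools whose rounded
sizes match, two structures can end up sharing a pool on one arch but
not on another, simply because pointer width changes the struct sizes.

```shell
# Round a size up to the next multiple of 16 (hypothetical rule):
round16() { echo $(( ($1 + 15) & ~15 )); }

# Invented sizes for two structs on each arch:
echo "32-bit: $(round16 44) vs $(round16 60)"   # 48 vs 64 -> separate pools
echo "64-bit: $(round16 56) vs $(round16 64)"   # 64 vs 64 -> merged
```

If two pools got merged only on s390x, a stale pointer from one object
type could land inside another, which would explain an arch-specific crash.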

Thanks!
Willy
