Hello,

I've been messing about with Plack::Handler::Mongrel2 plugged into a Dancer
application and, obviously, Mongrel2.

Under JMeter load testing, hitting the same URL repeatedly, I get into a
situation where the front-end requests start receiving a
"java.net.SocketTimeoutException: Read timed out" error. The tests are
doing a repeated request "GET
http://localhost:6767/app/polamaolaolaolaoaloalaoalaoalaoalaoalaoalaaoalaoaalaoalaoalaoa?j=8&t=1
"



Mongrel2 always produces the following errors for a failed request:

[INFO] (src/mongrel2.c:343) Starting Mongrel2/1.7.5. Copyright (C) Zed A.
Shaw. Licensed BSD.
[INFO] (src/server.c:272) Starting server on port 6767
[INFO] (src/task/fd.c:151) MAX limits.fdtask_stack=102400
[INFO] (src/superpoll.c:102) Allowing for 256 hot and 768 idle file
descriptors (dividend was 4)
[INFO] (src/handler.c:209) MAX allowing limits.handler_targets=128
[INFO] (src/handler.c:285) Binding handler PUSH socket
tcp://127.0.0.1:9998 with identity:
E80576A8-AC0B-11DF-A841-3D4975AD5E34
[INFO] (src/handler.c:311) Binding listener SUB socket
tcp://127.0.0.1:9999 subscribed to:
D807E984-AC0B-11DF-979C-3C4975AD5E34
[INFO] (src/control.c:401) Setting up control socket in at ipc://run/control
[ERROR] (src/register.c:214: errno: Resource temporarily unavailable)
Nothing registered under id 0.
[ERROR] (src/register.c:199: errno: None) Invalid FD given for exists check

Mongrel2 finishes the TCP connection to the HTTP client after the ZeroMQ
message is sent to the Plack back end:

http client -> Mongrel2 6767
Mongrel2 -> Plack 9998
*This is when the problem occurs and the client's HTTP connection is finished*
Plack -> Mongrel2 9999
Mongrel2 -> client 'high port'
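For reference, the reply the back end publishes on 9999 has to carry the
server's sender UUID and the connection id so Mongrel2 can route it back to
the waiting client; if that frame never arrives (or is malformed), the
client just sits until its read timeout fires. A rough sketch of the reply
framing in Python (function and variable names are mine, not from
Plack::Handler::Mongrel2):

```python
def build_reply(sender_uuid, conn_ids, http_payload):
    """Frame a handler reply the way Mongrel2 expects on its SUB socket:

        "<sender-uuid> <len>:<space-separated conn ids>, <raw HTTP response>"

    where the conn-id list is netstring-framed ("<length>:<payload>,").
    """
    ids = b" ".join(conn_ids)
    return (sender_uuid + b" "
            + str(len(ids)).encode() + b":" + ids + b", "
            + http_payload)

# Example: answering connection id 5 for the server UUID in the log above.
reply = build_reply(b"E80576A8-AC0B-11DF-A841-3D4975AD5E34", [b"5"],
                    b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
```

Sending an empty conn-id list ("0:,") is how a handler tells Mongrel2 to
close a connection, which is worth ruling out as an accidental cause here.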


sudo tcpdump -n -i lo -s 158 -p tcp port 6767 or tcp port 9999 or tcp port
9998

18:11:11.991287 IP 127.0.0.1.52686 > 127.0.0.1.6767: Flags [P.], seq
30808:31037, ack 79381, win 384, options [nop,nop,TS val 41369964 ecr
41369963], length 229
18:11:11.991470 IP 127.0.0.1.9998 > 127.0.0.1.55744: Flags [P.], seq
17675:18211, ack 3, win 256, options [nop,nop,TS val 41369964 ecr
41369955], length 536
18:11:11.991480 IP 127.0.0.1.55744 > 127.0.0.1.9998: Flags [.], ack 18211,
win 384, options [nop,nop,TS val 41369964 ecr 41369959], length 0
18:11:12.001689 IP 127.0.0.1.52686 > 127.0.0.1.6767: Flags *[F.]*, seq
31037, ack 79381, win 384, options [nop,nop,TS val 41369966 ecr 41369963],
length 0
18:11:12.001759 IP 127.0.0.1.6767 > 127.0.0.1.52686: Flags *[F.]*, seq
79381, ack 31038, win 384, options [nop,nop,TS val 41369966 ecr 41369964],
length 0
18:11:12.001772 IP 127.0.0.1.52686 > 127.0.0.1.6767: Flags [.], ack 79382,
win 384, options [nop,nop,TS val 41369966 ecr 41369966], length 0
18:11:12.010615 IP 127.0.0.1.50171 > 127.0.0.1.9999: Flags [P.], seq
21123:21763, ack 3, win 257, options [nop,nop,TS val 41369968 ecr
41369956], length 640
18:11:12.010655 IP 127.0.0.1.9999 > 127.0.0.1.50171: Flags [.], ack 21763,
win 384, options [nop,nop,TS val 41369968 ecr 41369960], length 0


Whereas it normally looks like this:

18:11:11.968949 IP 127.0.0.1.52686 > 127.0.0.1.6767: Flags [P.], seq
29663:29892, ack 76441, win 384, options [nop,nop,TS val 41369958 ecr
41369958], length 229
18:11:11.969095 IP 127.0.0.1.9998 > 127.0.0.1.55742: Flags [P.], seq
17137:17673, ack 3, win 256, options [nop,nop,TS val 41369958 ecr
41369954], length 536
18:11:11.971890 IP 127.0.0.1.50169 > 127.0.0.1.9999: Flags [P.], seq
20483:21123, ack 3, win 257, options [nop,nop,TS val 41369959 ecr
41369955], length 640
18:11:11.972006 IP 127.0.0.1.6767 > 127.0.0.1.52686: Flags [P.], seq
76441:77029, ack 29892, win 384, options [nop,nop,TS val 41369959 ecr
41369958], length 588
18:11:11.973493 IP 127.0.0.1.52686 > 127.0.0.1.6767: Flags [P.], seq
29892:30121, ack 77029, win 384, options [nop,nop,TS val 41369959 ecr
41369959], length 229


The only thing of note I found in Plack::Handler::Mongrel2 at the same time
as the errors was a batch of undefined-variable warnings, but they were all
related to the METHOD=JSON disconnect message being sent out after this
issue has occurred.
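For anyone following along: that disconnect notification arrives as an
ordinary request frame whose JSON headers carry METHOD=JSON and whose JSON
body says type=disconnect, so a handler that treats it as a normal HTTP
request will trip over missing fields. A minimal sketch of parsing the
frame and spotting the disconnect in Python (the sample message in the
test is mine, not taken from my capture):

```python
import json

def parse_netstring(data):
    """Split one netstring ("<len>:<payload>,") off the front of data."""
    length, rest = data.split(b":", 1)
    n = int(length)
    payload, tail = rest[:n], rest[n:]
    if tail[:1] != b",":
        raise ValueError("netstring missing trailing comma")
    return payload, tail[1:]

def parse_request(msg):
    """Parse a Mongrel2 request frame:

        "<sender-uuid> <conn-id> <path> <netstring headers><netstring body>"

    Headers are a JSON object; the body is raw bytes.
    """
    sender, conn_id, path, rest = msg.split(b" ", 3)
    headers_raw, rest = parse_netstring(rest)
    body, _ = parse_netstring(rest)
    return sender, conn_id, path, json.loads(headers_raw), body

def is_disconnect(headers, body):
    """True for the METHOD=JSON {"type":"disconnect"} notification."""
    return (headers.get("METHOD") == "JSON"
            and json.loads(body).get("type") == "disconnect")
```

If the undef warnings line up with frames like this, they are probably a
symptom of the early client hangup rather than its cause.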

I can increase the frequency of the errors by increasing the debug logging
on the Perl/Plack side of things, so I believe it's something in the back
end, but I'm at a loss at the moment as to what could be causing it. It
happens with either multiple threads or a single thread at both the HTTP
client and Plack levels.

Any ideas on where to look next would be appreciated.

Cheers,
Matt
