You can disable this behavior by adding

proxy_next_upstream error;

to your nginx.conf. The default value for this setting is "error timeout", so nginx retries the request on the next mongrel whenever the current one returns an error or, as in your case, times out. I noticed this when I saw multiple copies of the same comments on our site.
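For context, here is a minimal sketch of where the directive sits; the upstream name is made up, and the ports are just the ones from your logs, not necessarily your real config:

    upstream mongrel_cluster {
        server 127.0.0.1:8013;
        server 127.0.0.1:8014;
    }

    server {
        location / {
            proxy_pass http://mongrel_cluster;
            # Fail over only on connection errors; never
            # re-send a request that merely timed out.
            proxy_next_upstream error;
        }
    }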

Check http://wiki.codemongers.com/NginxHttpProxyModule#proxy_next_upstream for more information.

- Firat Can Basarir

On Aug 15, 2007, at 2:58 PM, M. Hakan Aksu wrote:

This may be an nginx issue more than a mongrel one, but I thought folks on this list might be interested.
Anyway, I have a mongrel_cluster with 2 nginx workers in front as proxies.

I recently replaced apache/mod_proxy with nginx, and I wasn't aware of the 60-second default proxy_read_timeout, so I went ahead and triggered a long-running process via an HTTP GET. Because of the 60-second timeout, the GET I made from the browser should have stopped, but it kept running. Checking the database, I noticed that my process had run multiple times! Looking at the nginx error logs below, I saw that nginx restarted the same call on a different mongrel after each timeout. I had to kill the nginx processes to stop it all.
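For reference, the directive I was missing looks like this; the location block and the timeout value here are only an example, not what I actually run:

    location /opening_balance {
        proxy_pass http://mongrel_cluster;
        # Give slow requests longer than nginx's
        # 60-second default before giving up.
        proxy_read_timeout 300;
    }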


2007/08/14 22:18:40 [error] 1720#0: *129 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.102, server: factory, URL: "/opening_balance/post", upstream: "http://127.0.0.1:8013/opening_balance/post", host: "rs"
2007/08/14 22:19:40 [error] 1720#0: *129 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.102, server: factory, URL: "/opening_balance/post", upstream: "http://127.0.0.1:8014/opening_balance/post", host: "rs"
2007/08/14 22:20:40 [error] 1720#0: *129 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.102, server: factory, URL: "/opening_balance/post", upstream: "http://127.0.0.1:8015/opening_balance/post", host: "rs"
2007/08/14 22:21:40 [error] 1720#0: *129 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.102, server: factory, URL: "/opening_balance/post", upstream: "http://127.0.0.1:8016/opening_balance/post", host: "rs"
2007/08/14 22:22:40 [info] 1720#0: *129 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: 192.168.1.102, server: factory, URL: "/opening_balance/post", upstream: "http://127.0.0.1:8017/opening_balance/post", host: "rs"


So here's my question: is this normal? Shouldn't the upstream connection be closed after the timeout instead of the same call being re-sent? I think this may be very dangerous behavior.

I'd appreciate your comments.

-Hakan
_______________________________________________
Mongrel-users mailing list
Mongrel-users@rubyforge.org
http://rubyforge.org/mailman/listinfo/mongrel-users
