Another one of my development servers has started doing this now. This
time, it's unrelated to any sort of lag/delay in response:

Sat Aug  6 15:50:12 2011 - write(): Broken pipe
[plugins/python/wsgi_subhandler.c line 189]
Sat Aug  6 15:50:14 2011 - write(): Broken pipe
[plugins/python/wsgi_subhandler.c line 189]
Sat Aug  6 15:50:16 2011 - write(): Broken pipe
[plugins/python/wsgi_subhandler.c line 189]
Sat Aug  6 15:50:18 2011 - writev(): Broken pipe
[plugins/python/wsgi_headers.c line 182]
Sat Aug  6 15:50:18 2011 - write(): Broken pipe
[plugins/python/wsgi_subhandler.c line 189]
Sat Aug  6 15:50:30 2011 - write(): Broken pipe
[plugins/python/wsgi_subhandler.c line 189]

PY-Execution-Time: 0.178904056549
PY-Execution-Time: 0.279319047928
PY-Execution-Time: 0.212793111801
PY-Execution-Time: 0.19362783432
PY-Execution-Time: 0.18998003006
PY-Execution-Time: 0.230093002319

Any ideas?

Cal

On Fri, Aug 5, 2011 at 1:46 PM, Roberto De Ioris <[email protected]> wrote:

>
> On 5 Aug 2011, at 14:05, Cal Leeming [Simplicity Media Ltd] wrote:
>
> >
> >
> > On Fri, Aug 5, 2011 at 12:46 PM, Roberto De Ioris <[email protected]>
> wrote:
> >
> > On 5 Aug 2011, at 12:50, Cal Leeming [Simplicity Media Ltd] wrote:
> >
> > > Hi Roberto,
> > >
> > > It happened again:
> > >
> > > Traceback:
> > > Traceback (most recent call last):
> > >
> > >   File
> "/home/simplicitymedialtd/webapps/cdn05.prod/src/webapp/queue/models.py",
> line 289, in getURL
> > >     f = urllib.urlopen("
> http://www.spankwire.com/Player/VideoXML.aspx?id=%s"%self.videoid)
> > >
> > >   File "/usr/local/lib/python2.7/urllib.py", line 84, in urlopen
> > >     return opener.open(url)
> > >
> > >   File "/usr/local/lib/python2.7/urllib.py", line 205, in open
> > >     return getattr(self, name)(url)
> > >
> > >   File "/usr/local/lib/python2.7/urllib.py", line 342, in open_http
> > >     h.endheaders(data)
> > >
> > >   File "/usr/local/lib/python2.7/httplib.py", line 940, in endheaders
> > >     self._send_output(message_body)
> > >
> > >   File "/usr/local/lib/python2.7/httplib.py", line 803, in _send_output
> > >     self.send(msg)
> > >
> > >   File "/usr/local/lib/python2.7/httplib.py", line 755, in send
> > >     self.connect()
> > >
> > >   File "/usr/local/lib/python2.7/httplib.py", line 736, in connect
> > >     self.timeout, self.source_address)
> > >
> > >   File "/usr/local/lib/python2.7/socket.py", line 551, in
> create_connection
> > >     for res in getaddrinfo(host, port, 0, SOCK_STREAM):
> > >
> > > IOError: [Errno socket error] [Errno -3] Temporary failure in name
> resolution
> > >
> > > Fri Aug  5 04:20:10 2011 - writev(): Broken pipe
> [plugins/python/wsgi_headers.c line 182]
> > > Fri Aug  5 04:20:11 2011 - write(): Broken pipe
> [plugins/python/wsgi_subhandler.c line 189]
> > > Fri Aug  5 04:21:28 2011 - writev(): Broken pipe
> [plugins/python/wsgi_headers.c line 182]
> > > Fri Aug  5 04:21:28 2011 - write(): Broken pipe
> [plugins/python/wsgi_subhandler.c line 189]
> > > Fri Aug  5 04:21:28 2011 - writev(): Broken pipe
> [plugins/python/wsgi_headers.c line 182]
> > > Fri Aug  5 04:21:28 2011 - write(): Broken pipe
> [plugins/python/wsgi_subhandler.c line 189]
> > > Fri Aug  5 04:21:28 2011 - writev(): Broken pipe
> [plugins/python/wsgi_headers.c line 182]
> > > Fri Aug  5 04:21:28 2011 - write(): Broken pipe
> [plugins/python/wsgi_subhandler.c line 189]
> > >
> > > It happened shortly after a urllib() call.
> > >
> > > urllib sometimes holds the GIL when it's stuck in 'resolving' or in
> > > certain TCP states. It could also be that, because urllib was taking a
> > > long time to return a result, all the workers became busy and uwsgi was
> > > no longer able to handle the request. If this is the case, uWSGI should
> > > log this as "workers are busy, rejecting request" or something like that.
> >
> >
> > Yes, it should but you have configured 120 seconds of harakiri, so it
> will not consider a worker "bad" until it is blocked for more than 120
> seconds :)
> >
> > Ah, yeah I should probably raise that.
> >
> >
> > What about setting a tiny timeout in urllib? (3-4 seconds should be
> > enough.)
> >
> > The problem is that the timeout is not always adhered to. In fact, that
> > socket has a timeout of 3 seconds applied, but socket/urllib ignores it.
> > The timeout doesn't work if the call is stuck in resolve() or in states
> > where the TCP connection hangs. Slightly off topic, but the only way to
> > truly resolve it was to create a local proxy in stackless/twisted and pipe
> > all the connections through it. That requires either clever iptables rules
> > to auto-proxy, or modifying the original code.
> >
>
>
> I know that in the last few days I have looked like a thread fanatic, but
> again… why not use threads for this task?
>
> t = Thread(target=your_urllib_function)
> t.start()
> t.join(timeout)
>
> if t.is_alive():
>     # timeout!
>     return "XXXXXX"
>
> It will avoid your workers being blocked for more than 'timeout' seconds,
> but it is very probable that you will end up with blocked threads all over
> the place.
>
> So you have to kill the timed-out threads in some way.
>
> The one I prefer (it is very reliable) is this one:
>
> http://code.activestate.com/recipes/496960-thread2-killable-threads/
>
> --
> Roberto De Ioris
> http://unbit.it
> JID: [email protected]
>
> _______________________________________________
> uWSGI mailing list
> [email protected]
> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
>
