It is not a robot... I performed an IP lookup on some of the IP addresses that caused the error; they come from consumer ISPs, typically in residential areas, and are geographically spread out. So this is a big issue, since robots are not causing it.
--
Thadeus

On Sun, May 9, 2010 at 9:36 PM, Thadeus Burgess <[email protected]> wrote:
> Just some more information.
>
> Static files are served by apache (as per the book)
> Migrate = True
>
> copystream seems to allude to the fact a file is being uploaded (which
> is impossible because the application does not store any files)
>
> Might this be a robot / scanner attempting to upload things? If so,
> how to stop this?
>
> --
> Thadeus
>
> On Sun, May 9, 2010 at 9:28 PM, Thadeus Burgess <[email protected]> wrote:
>> What could possibly be causing this?
>>
>> python 2.6
>> web2py trunk
>> apache/mod_wsgi 2.6
>>
>> Any ideas on how I can narrow this down, or stop this? The pages
>> consist of static html (cached in RAM), and a page with a giant
>> SQLFORM on it. It kind of concerns me about the scalability of web2py,
>> as the errors rapidly increase as web traffic increases.
>>
>> Traceback (most recent call last):
>>   File "gluon/main.py", line 396, in wsgibase
>>     request.body = copystream_progress(request) ### stores request body
>>   File "gluon/main.py", line 143, in copystream_progress
>>     copystream(source, dest, size, chunk_size)
>>   File "gluon/fileutils.py", line 302, in copystream
>>     data = src.read(size)
>> IOError: request data read error
>>
>> --
>> Thadeus
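For context on the traceback above: the failing call copies the WSGI input stream into a spooled request body, and "IOError: request data read error" is what mod_wsgi raises when the client drops the connection partway through sending the body. The sketch below (hypothetical names, not web2py's actual gluon.fileutils code) shows the copy pattern involved; with in-memory streams it runs cleanly, while a disconnecting client would make one of the read() calls raise IOError.

```python
import io

def copystream_chunked(src, dest, size, chunk_size=65536):
    """Copy `size` bytes from src to dest in chunks.

    Hypothetical sketch of the copystream pattern in the traceback.
    When src is mod_wsgi's wsgi.input and the client disconnects
    mid-upload, one of the read() calls raises IOError, which is
    what surfaces as "request data read error".
    """
    remaining = size
    while remaining > 0:
        data = src.read(min(remaining, chunk_size))
        if not data:  # short read: client sent less than Content-Length
            break
        dest.write(data)
        remaining -= len(data)
    dest.seek(0)  # rewind so the body can be re-read downstream
    return dest

# Demo with in-memory streams standing in for wsgi.input and a spool file
src = io.BytesIO(b"a" * 150000)
body = copystream_chunked(src, io.BytesIO(), 150000)
```

Since the error fires on the read side, nothing in the application code is at fault: it is consistent with real browsers on flaky residential connections (or users hitting stop/back) aborting POSTs mid-flight, which also explains why it scales with traffic.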

