Not sure about the problem, but I had a few instances of people
clicking reload a lot (and I mean a lot). So I use this:

# drop incoming connections if an IP makes more than 10 new connection attempts to port 80 within 60 seconds
$IPT -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
$IPT -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 10 -j DROP
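
For context, here is a minimal sketch of how those two rules might sit in a
firewall script. The $IPT path, eth0, and the /proc path below are
assumptions; adjust them to your own interface and kernel:

#!/bin/sh
# Minimal sketch, assuming iptables lives at /sbin/iptables and eth0 is
# the public-facing interface.
IPT=/sbin/iptables

# Add the source IP of every NEW connection to port 80 to the "recent"
# tracking list.
$IPT -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set

# Refresh the entry and DROP the packet once the same IP has opened 10
# or more NEW connections within the last 60 seconds.
$IPT -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 10 -j DROP

# To see which IPs are currently being tracked (the path may be
# /proc/net/ipt_recent/DEFAULT on older kernels):
cat /proc/net/xt_recent/DEFAULT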


On May 9, 9:54 pm, mdipierro <[email protected]> wrote:
> Does this result in a ticket or console error? Do you get a lot of
> requests/sec from the same IP?
>
> On May 9, 9:49 pm, Graham Dumpleton <[email protected]>
> wrote:
>
> > On May 10, 12:28 pm, Thadeus Burgess <[email protected]> wrote:
>
> > > What could possibly be causing this?
>
> > A user not waiting for a request to complete before clicking on
> > another link or pressing reload. In other words, client dropped
> > original connection.
>
> > Graham
>
> > > python 2.6
> > > web2py trunk
> > > apache/mod_wsgi 2.6
>
> > > Any ideas on how I can narrow this down, or stop this? The pages
> > > consist of static html (cached in RAM), and a page with a giant
> > > SQLFORM on it. It kind of concerns me about the scalability of web2py,
> > > as the errors rapidly increase as web traffic increases.
>
> > > Traceback (most recent call last):
> > >   File "gluon/main.py", line 396, in wsgibase
> > >     request.body = copystream_progress(request) ### stores request body
> > >   File "gluon/main.py", line 143, in copystream_progress
> > >     copystream(source, dest, size, chunk_size)
> > >   File "gluon/fileutils.py", line 302, in copystream
> > >     data = src.read(size)
> > > IOError: request data read error
>
> > > --
> > > Thadeus
>
>
