It's generally a bad idea to have HTTP requests take that long. Instead, when 
such a request comes in, pass the task off to a job queue (e.g., web2py's 
built-in scheduler or a similar solution) to be completed in the background. 
The browser can then poll the server every few seconds to determine when the 
job has completed and then display the results (for a more sophisticated 
approach, you could set up websockets to push a notification to the browser).
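The queue-and-poll pattern above can be sketched in plain Python. This is a stand-in for illustration only: it uses the stdlib `concurrent.futures` instead of web2py's scheduler, and the function and variable names (`submit_analysis`, `poll_status`, `jobs`) are hypothetical, not part of any web2py API. In a real web2py app you would queue the task with the built-in Scheduler and expose a status action for the browser to poll.

```python
import time
import uuid
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-in for a job queue; in web2py you would use
# the built-in Scheduler rather than a ThreadPoolExecutor.
executor = ThreadPoolExecutor(max_workers=4)
jobs = {}  # job_id -> Future

def run_analysis(data):
    """The long-running analytics task, executed in the background."""
    time.sleep(0.1)  # stands in for the 30-90 s computation
    return sum(data)

def submit_analysis(data):
    """Handler for the initial request: enqueue the slow work and
    return a job id immediately, instead of blocking for 30-90 s."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = executor.submit(run_analysis, data)
    return job_id

def poll_status(job_id):
    """Handler the browser polls every few seconds via AJAX."""
    future = jobs[job_id]
    if future.done():
        return {"status": "complete", "result": future.result()}
    return {"status": "pending"}

# Example flow: submit the job, then poll until it completes.
job = submit_analysis([1, 2, 3])
while poll_status(job)["status"] != "complete":
    time.sleep(0.05)
print(poll_status(job)["result"])  # → 6
```

With this pattern, each HTTP request returns in milliseconds, so the 8 Apache workers are never tied up by the analytics itself, only by the cheap submit and poll requests.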

Anthony

On Saturday, April 9, 2016 at 7:08:49 PM UTC-4, Jason Solack wrote:
>
> Hello all, I am wondering if you could sanity check my issue for me. I 
> am creating sites that do a fair amount of analytics on large data sets, 
> so requests may take 30-90 seconds to get a response.  My web server 
> has 8 logical cores, and once I have over 8 concurrent users they are 
> essentially getting queued by Apache... If the queue becomes too long, I 
> start getting 500 responses from apps in web2py.
>
> I followed the documentation for deployment on Apache, but I don't know 
> whether I should be tweaking Apache settings or whether this behaviour is 
> normal.  I am wondering if anyone with more Apache experience may be able 
> to point me in a direction that may keep my apps more stable.
>
> My server is running CentOS and Apache with 8 cores... I have web2py 
> launching with 8 processes and 16 threads.
>
> Thank you in advance for any help you can offer!
>
> Jason
>
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.