On Monday, 05.03.2012, at 11:47 -0800, Luke Scott wrote:
> So for the most part there really isn't much going on after a request has 
> been made. A visitor requests a page, sits on it for 3-5 minutes, submits 
> the form, and then leaves. But the more users we have and the more pages 
> they have, the higher the chance of requests happening in parallel. Right 
> now that doesn't happen, unless a user gets a huge surge in traffic (Digg 
> effect).

Do you think the following model would work?

 - have a limit on the number of workers: 100
 - at most one worker per user (not per visitor!)
 - when a worker is needed but none is active for the poll/form/...
creator, kill the worker with the highest idle time and spin up a new
one (EXCEPTION: you'll have to temporarily go over the limit if all
workers are busy)

So, when you get a huge wave of requests for a bunch of forms, the
sandbox processes for them are running all the time. Does that sound
like a good idea?
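A minimal sketch of that model in Python, assuming a simple
per-user worker pool (the names WorkerPool, acquire, and release are my
own, not from this thread):

```python
import time

class Worker:
    """Placeholder for a sandbox process bound to one user."""
    def __init__(self, user):
        self.user = user
        self.busy = False
        self.last_used = time.monotonic()

class WorkerPool:
    def __init__(self, limit=100):
        self.limit = limit
        self.workers = {}  # user -> Worker (at most one per user)

    def acquire(self, user):
        # Reuse the existing worker for this form/poll creator, if any.
        w = self.workers.get(user)
        if w is not None:
            w.busy = True
            w.last_used = time.monotonic()
            return w
        # At the limit: evict the idle worker with the highest idle time.
        if len(self.workers) >= self.limit:
            idle = [x for x in self.workers.values() if not x.busy]
            if idle:
                victim = min(idle, key=lambda x: x.last_used)
                del self.workers[victim.user]
            # else: all workers busy -- temporarily exceed the limit.
        w = Worker(user)
        w.busy = True
        self.workers[user] = w
        return w

    def release(self, worker):
        worker.busy = False
        worker.last_used = time.monotonic()
```

So a burst of requests for the same user's forms keeps reusing one
long-lived worker, and only idle workers get reclaimed under pressure.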
