I strongly advise using the scheduler, because it will fulfill your requirements better than a homemade task queue, at least if you're not going to use Celery... anyway, just my 2 cents:
SQLite write operations lock the entire database. One of two controllers (or modules, or scripts, or anything) trying to write to a single database will likely have to wait for the other one to finish.
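A quick sketch of that behavior with Python's sqlite3 module (the filename and table are just examples; two connections stand in for the two controllers):

```python
import sqlite3

# Two connections to the same database file simulate two controllers
# writing concurrently. isolation_level=None lets us issue BEGIN by hand.
conn_a = sqlite3.connect("storage.sqlite", isolation_level=None)
conn_b = sqlite3.connect("storage.sqlite", isolation_level=None, timeout=0.1)

conn_a.execute("CREATE TABLE IF NOT EXISTS t (v INTEGER)")
conn_a.execute("BEGIN IMMEDIATE")      # takes the write lock on the whole db
conn_a.execute("INSERT INTO t VALUES (1)")

try:
    conn_b.execute("BEGIN IMMEDIATE")  # second writer has to wait...
except sqlite3.OperationalError as exc:
    print(exc)                         # ...or give up: "database is locked"

conn_a.execute("COMMIT")               # lock released; conn_b can write now
```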

1. With any database other than SQLite it's safe, because of how most relational databases work: transactions ensure you always get a consistent set of results (if you use them correctly!!)
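For illustration, this is the kind of guarantee a transaction gives you (sqlite3 is used here only to keep the demo self-contained; the idea is the same in any relational database, and the table/values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

# A transfer done inside one transaction: nobody ever observes a state
# where money has left one account but not reached the other.
try:
    with conn:  # commits on success, rolls back on exception
        conn.execute("UPDATE account SET balance = balance - 50 "
                     "WHERE name = 'alice'")
        raise RuntimeError("crash mid-transfer")  # simulate a failure
except RuntimeError:
    pass

# The rollback restored the consistent state: alice still 100, bob still 0.
print(conn.execute("SELECT balance FROM account").fetchall())
```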

2. You can, but installing RabbitMQ just to manage a queue seems like overkill; at that point, use Celery directly.
How you can detect that your homemade task queue is dead is likely up to you, don't you think :-P ?
You can tell that a web2py scheduler background worker is dead because it logs its heartbeat into a table, so if the last heartbeat is too long ago you know it's dead.
Starting a background process from a webserver isn't a good idea, so it's better to look into other ways (at least for the moment)
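As a sketch, a dead-worker check boils down to comparing the last heartbeat with the current time (the scheduler keeps workers in its scheduler_worker table with a last_heartbeat field, if I remember correctly; the 3-missed-beats threshold and 3-second period below are just a rule of thumb, not web2py constants):

```python
from datetime import datetime, timedelta

# Assumed heartbeat period of the worker (web2py's default is a few seconds).
HEARTBEAT = timedelta(seconds=3)

def worker_is_dead(last_heartbeat, now=None, missed_beats=3):
    """Consider a worker dead if its last heartbeat is older than
    `missed_beats` heartbeat periods."""
    now = now or datetime.utcnow()
    return now - last_heartbeat > missed_beats * HEARTBEAT
```

In a controller you would feed this with the last_heartbeat value read from the scheduler_worker table.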

3. Yes, if you want to use request, db, helpers, etc. that are normally available in web2py controllers. You could, however, write your own script and just run python yourscript.py if you don't need all the web2py goodness ;-)
In a production environment I'd use cron to start that script..... or a daemonizing service that watches whether it breaks down and restarts it, like an Upstart script if you're on Ubuntu, or supervisord if you're on Linux.
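For example, a supervisord entry for such a script could look like this (all paths, app and program names here are hypothetical; -S app -M -R is the usual way to run a script with the web2py environment loaded):

```ini
; /etc/supervisor/conf.d/myscript.conf  (hypothetical path and name)
[program:myscript]
; plain script, no web2py environment needed:
; command=/usr/bin/python /opt/myapp/yourscript.py
; or, with the web2py environment (db, models, etc.) loaded:
command=/usr/bin/python web2py.py -S yourapp -M -R applications/yourapp/private/yourscript.py
directory=/opt/web2py
autostart=true
autorestart=true          ; restart it if it breaks down
```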

On Tuesday, May 15, 2012 at 00:47:31 UTC+2, pbreit wrote:
>
> Is this something the scheduler should/could be used for?
