This is my scenario:
I'm writing an unsupervised Python script that continuously listens for 'alarms' coming from some network 'entities'. Whenever an alarm is received, the script automatically takes some action on the affected entity to mitigate the alarm's cause. Since there can be hundreds of entities managed in parallel, it's hard to see what's going on just by looking at the messy script logs. Hence I want to add a monitoring web GUI, served by web2py. Users will be able to browse the entities and get their current and past status. Later on, they will also be able to act on the entities (e.g. "manually restart entity"), but for the moment it's just a monitoring GUI.

As you can see, my core script needs to run as a daemon at all times, whether or not a user is logged in to the web GUI. I can think of two ways of doing this:

a) Make a script that is totally independent of web2py, but that exposes some kind of API to the outside world: get_active_alarms(entity_id), get_alarm_log(entity_id), ... web2py would query that API to fetch data and render the web pages. In this case, the web GUI is an extra feature on top of the core script.

b) Make the script part of a web2py app. Then web2py becomes responsible for launching the core script and keeping it running, and I have to ensure that web2py itself is respawned if it crashes (I guess that's easy with a cron-job watchdog).

Suggestions? For a), how should I implement the script <-> web2py communication? Anything other than a shared database, please. For b), how do I run the script in the background at all times, starting as soon as web2py starts?

Thanks!
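To make option a) more concrete, here is a minimal sketch of what I have in mind (all class/function names and the port are made up for illustration): the daemon keeps alarm state in a thread-safe in-memory store and exposes the two query functions over XML-RPC from the Python standard library, so web2py would not need any shared database.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer


class AlarmStore:
    """Thread-safe in-memory alarm state (illustrative only)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._active = {}  # entity_id -> list of currently active alarms
        self._log = {}     # entity_id -> full alarm history

    def record_alarm(self, entity_id, alarm):
        # Called by the daemon's alarm-listening loop.
        with self._lock:
            self._active.setdefault(entity_id, []).append(alarm)
            self._log.setdefault(entity_id, []).append(alarm)

    def get_active_alarms(self, entity_id):
        with self._lock:
            return list(self._active.get(entity_id, []))

    def get_alarm_log(self, entity_id):
        with self._lock:
            return list(self._log.get(entity_id, []))


def serve_api(store, host="127.0.0.1", port=9100):
    """Expose the store's read-only queries over XML-RPC."""
    server = SimpleXMLRPCServer((host, port), allow_none=True, logRequests=False)
    server.register_function(store.get_active_alarms, "get_active_alarms")
    server.register_function(store.get_alarm_log, "get_alarm_log")
    server.serve_forever()  # blocks; run in its own thread next to the main loop
```

On the web2py side, a controller could then call `xmlrpc.client.ServerProxy("http://127.0.0.1:9100").get_active_alarms(entity_id)` to fetch the data it renders. This is just one candidate transport; a plain HTTP/JSON endpoint or a Unix socket would follow the same shape.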

