> Don't log your monitoring info directly into the database; log
> straight to one or more text files and sync them every few seconds.
> Rotate the files once a minute (or whatever seems suitable). Then
> have a separate process that reads "old" files and processes them
> into the database. The big advantage: you can take the database
> down for a short period and the monitoring goes on. Useful for
> those small maintenance tasks.
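The quoted file-based approach could be sketched minimally like this; the directory name, file naming scheme, and helper names are all assumptions for illustration, not part of either system described here:

```python
import os
import time

LOG_DIR = "monitor-logs"     # illustrative directory name
ROTATE_SECONDS = 60          # rotate once a minute, as suggested

os.makedirs(LOG_DIR, exist_ok=True)

def current_log_path(now=None):
    """One file per rotation window, named by the window's start time."""
    now = time.time() if now is None else now
    window = int(now // ROTATE_SECONDS) * ROTATE_SECONDS
    return os.path.join(LOG_DIR, f"monitor.{window}.log")

def write_sample(monitor_id, status, now=None):
    """Append one status line and sync, so a crash loses very little."""
    with open(current_log_path(now), "a") as f:
        f.write(f"{monitor_id}\t{status}\n")
        f.flush()
        os.fsync(f.fileno())

def old_log_files(now=None):
    """Files whose rotation window has closed are safe for the
    separate loader process to read into the database."""
    now = time.time() if now is None else now
    current = os.path.basename(current_log_path(now))
    return sorted(
        os.path.join(LOG_DIR, name)
        for name in os.listdir(LOG_DIR)
        if name != current
    )
```

A loader process would then iterate `old_log_files()`, bulk-insert each file's lines, and delete the file, so monitoring keeps writing even while the database is down.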

This is a good idea, but it'd take a bit of redesign to make it work.  Here's
my current algorithm:

- Every 10 seconds I get a list of monitors whose nextdate is at or before
the current time
- I put the id numbers of the monitors into a queue
- A thread from a thread pool (32 active threads) retrieves the monitor from
the database by its id, updates the nextdate timestamp, executes the
monitor, and stores the status in the database
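The steps above could be sketched roughly like this, with an in-memory dict standing in for the monitors table and a lock standing in for the two database transactions (all names here are illustrative, not from the actual system):

```python
import queue
import threading

# In-memory stand-in for the monitors table; the real system would
# run SQL here. Field names (nextdate, status) follow the text above.
monitors = {
    1: {"nextdate": 0.0, "status": None, "interval": 10.0},
    2: {"nextdate": 0.0, "status": None, "interval": 10.0},
}
db_lock = threading.Lock()
work_queue = queue.Queue()

def enqueue_due_monitors(now):
    """Each poll: queue the ids of monitors whose nextdate has passed."""
    with db_lock:
        for mid, row in monitors.items():
            if row["nextdate"] <= now:
                work_queue.put(mid)

def worker(now):
    """Pool thread: fetch monitor by id, bump nextdate, run it, store status."""
    while True:
        try:
            mid = work_queue.get_nowait()
        except queue.Empty:
            return
        with db_lock:                           # transaction 1: bump nextdate
            monitors[mid]["nextdate"] = now + monitors[mid]["interval"]
        status = "UP"                           # stand-in for executing the check
        with db_lock:                           # transaction 2: store the status
            monitors[mid]["status"] = status

def run_once(now, pool_size=4):
    """One poll cycle: enqueue due monitors, drain the queue with a pool."""
    enqueue_due_monitors(now)
    threads = [threading.Thread(target=worker, args=(now,))
               for _ in range(pool_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```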

So I have two transactions: one to update the monitor's nextdate and another
to update its status.  Now that I've written that out, I see a way to
streamline the last step: I can wait until I update the status to also
update the nextdate.  That would cut the number of transactions in half.
The only problem is that I have to be sure not to add a monitor to the
queue while it's currently executing.  This shouldn't be hard, since I have
a hashtable containing all the active monitors.
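Under that single-transaction scheme, the "skip monitors that are currently executing" check might look something like this sketch, using a lock-guarded set in place of the hashtable (names and the `store` callback are hypothetical):

```python
import threading

active = set()                 # ids of monitors currently queued or executing
active_lock = threading.Lock()

def try_enqueue(monitor_id, work_queue):
    """Queue a monitor only if it isn't already queued or running."""
    with active_lock:
        if monitor_id in active:
            return False       # skip: nextdate hasn't been updated yet
        active.add(monitor_id)
    work_queue.put(monitor_id)
    return True

def finish(monitor_id, status, store):
    """Single transaction: write status and the new nextdate together,
    then allow the monitor to be scheduled again."""
    store(monitor_id, status)  # stand-in for UPDATE ... SET status, nextdate
    with active_lock:
        active.discard(monitor_id)
```

The point of releasing the id only after `store` runs is that the nextdate and status land in one transaction, and the poller can never re-queue a monitor in the window between execution and that write.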

Thanks for the suggestion, I'm definitely going to give this some more
thought.

