On Wed, 14 Feb 2001, Pierre Phaneuf wrote:
> I think you are overestimating the stress on my server. With the
> fastest schedule I was talking about (a once-per-minute cron job), that
> would be 1440 HEAD requests per day, each requiring one simple database
> query that will most probably return no rows at all (a query that returns
> many rows is slower, because it has to transmit those rows).
>
> Many sites with database-driven pages have multiple complex queries on
> some pages and run in the millions of page-views per day.
Sure, but why waste resources?
> As for the simplicity, having multiple individual custom cron jobs is
> simpler than one single generic cron job?
Yes, much simpler, at least for the scheduling and dispatching part.
Instead of designing database tables to hold per-job timing info, plus
code that checks them, remembers when each job last ran, and prevents
race conditions, you can write a simple crontab with one call to wget
per job, like the sketch below. The actual implementation of the jobs
is pretty much identical either way.
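Something along these lines, assuming each job is exposed as a URL on
your server (the hostnames, paths, and schedules here are made up):

    # run the nightly report at 2:00 AM
    0 2 * * *   wget -q -O /dev/null http://localhost/jobs/nightly-report
    # expire stale sessions every five minutes
    */5 * * * * wget -q -O /dev/null http://localhost/jobs/expire-sessions

wget sends a GET rather than a HEAD, but either one is enough to
trigger the handler on the server side.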
Do it whichever way suits you; I'm just suggesting that you try the
lazy way if possible.
> I was thinking of having a variable with a timestamp of when we last
> checked the database
That will have to be some sort of shared memory or file-based thing, since
you won't be using the same process each time.
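A file's modification time is enough for that. Here's a rough sketch
(in Python, purely for illustration; the stamp path and interval are
made up, and the same idea works in any language):

    import os, time

    STAMP = "/tmp/last_db_check"  # any file all server processes can see
    INTERVAL = 60                 # seconds between database checks

    def time_to_check():
        """True at most once per INTERVAL across all processes."""
        try:
            if time.time() - os.path.getmtime(STAMP) < INTERVAL:
                return False
        except OSError:           # stamp file doesn't exist yet
            pass
        # touch the stamp so other processes skip the check; wrapping
        # this in flock() would close the small race window left here
        with open(STAMP, "w"):
            pass
        return True

Each request handler calls time_to_check() and only queries the
database when it returns true.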
- Perrin