First off, thanks for the help, and sorry for the formatting of the 
message.  I didn't know how it was going to turn out, and I was probably 
overly optimistic as well as too verbose.

Secondly, as I feared, it seems this was an XY problem, so sorry for that 
as well.

I'll address the two replies I can see so far, and some of the info in 
each section will likely overlap.

Nico:

I guess that's the trick: to have the "current" (or at least "recent") 
database and then the historical one.  As of now, polling the 17 machines 
takes about 40 seconds (when I first started running the process minutely 
it was 20, so you can see I have to do something soon :))

So assuming the two-db model, what's the trick to it?  Here are some ideas 
off the top of my head--can you (or any reader) please give me your 
thoughts (be as brutal as you like--I'm under no illusion that I know what 
I'm talking about):

1)  The "current" table only ever has 17 rows.
        a) Have a step built into the script that runs minutely to copy 
           the "current" data to the historical DB before kicking off the 
           part that updates the current data (see the first sketch after 
           this list).
        b) Add a trigger to the DB so the SQLite engine takes care of the 
           copy somehow (see the second sketch after this list).  This 
           would probably be more difficult, since I don't know how to 
           add a trigger, and I'm thinking the historical database will 
           be in a different file altogether.
        c) Something I haven't thought of.

2)  The current table is only allowed to have a maximum of N rows.  Upon 
reaching this size, the data are moved to the historical database and only 
the most recent observation for each machine is left in the current DB.  
I'm not sure how I could do that--is there a way to do it within SQLite?

3)  A job runs every night or week (at a time when people are least likely 
to be using the page, such as 3 am) that transfers the data from the 
current DB to the historical one, leaving only the most recent observation 
for each machine.
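
For what it's worth, here's a rough sketch of what I imagine the "move to 
historical" step looking like, whether it runs minutely (1a), at N rows 
(2), or nightly (3)--the SQL is the same either way.  I'm inventing all 
the names here (readings, current.db, historical.db, and the 
machine/ts/value columns), so adjust to taste:

    import sqlite3

    # Move everything except each machine's newest row into the
    # historical file, in one transaction so a crash can't lose rows
    # mid-move.  This assumes rows are inserted in time order, so
    # max(rowid) per machine is the most recent observation.
    conn = sqlite3.connect("current.db")
    conn.execute("ATTACH DATABASE 'historical.db' AS hist")
    conn.execute("""CREATE TABLE IF NOT EXISTS hist.readings
                    (machine TEXT, ts TEXT, value REAL)""")
    with conn:
        conn.execute("""INSERT INTO hist.readings
                        SELECT machine, ts, value FROM readings
                        WHERE rowid NOT IN (SELECT max(rowid)
                                            FROM readings
                                            GROUP BY machine)""")
        conn.execute("""DELETE FROM readings
                        WHERE rowid NOT IN (SELECT max(rowid)
                                            FROM readings
                                            GROUP BY machine)""")
    conn.close()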
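
And for 1b, it turns out adding a trigger may not be too bad--something 
like the sketch below (again, names invented).  The catch, as far as I can 
tell from the docs, is that a trigger body can't reference a table in a 
different (ATTACHed) database file, so this only works if the history 
table lives in the same file as the current one:

    import sqlite3

    # Sketch of 1b: whenever the minutely script UPDATEs a machine's row
    # in the 17-row current table, this trigger squirrels the old values
    # away into a history table first.  Both tables must be in the same
    # database file.
    conn = sqlite3.connect("current.db")
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS readings_history
            (machine TEXT, ts TEXT, value REAL);

        CREATE TRIGGER IF NOT EXISTS archive_on_update
        BEFORE UPDATE ON readings
        BEGIN
            INSERT INTO readings_history (machine, ts, value)
            VALUES (old.machine, old.ts, old.value);
        END;
    """)
    conn.close()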

Jay:

The closer to real-time, the better.  The most often a standard cron job 
can run under Linux is once a minute, and minutely is pretty good.  I 
guess I could have the summary process occur at the end of the script 
that polls the machines.  It could generate static HTML (something like 
the sketch below), which would presumably make the page load super fast.  
However, under the current regime, creating that summary is going to take 
at least 10 seconds.  40 seconds for polling + 10 seconds for summarizing 
= 50 seconds, and that number is only going to get bigger!  So I'll have 
to figure out a better table structure anyway.
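
For the curious, the static-HTML part seems simple enough--something like 
this sketch (same invented table and column names as above) tacked onto 
the end of the polling script:

    import sqlite3

    # Dump the 17 current rows into a static page the web server can
    # serve as-is, so loading the page does no database work at all.
    conn = sqlite3.connect("current.db")
    rows = conn.execute("""SELECT machine, ts, value
                           FROM readings ORDER BY machine""").fetchall()
    conn.close()

    with open("status.html", "w") as f:
        f.write("<html><body><table>\n")
        f.write("<tr><th>Machine</th><th>Last poll</th>"
                "<th>Value</th></tr>\n")
        for machine, ts, value in rows:
            f.write("<tr><td>%s</td><td>%s</td><td>%s</td></tr>\n"
                    % (machine, ts, value))
        f.write("</table></body></html>\n")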

Additional thoughts:

In general, I think splitting the tables up is the way to go.  Any further 
comments/suggestions appreciated!

Jonathan
