----- Original Message ----- From: "nbiggs" <[EMAIL PROTECTED]>

My application generates about 12 records a second.  I have no problems
storing the records in the database, but I started thinking: if I
commit every 12 records, will my hard drive eventually die from the
extreme usage?  During a 24-hour period, up to 1 million records will
be generated and inserted.  At the end of the day, all the records are
deleted and the inserts start again for another 24 hours.

Can I store the records in memory, or just commit less often, maybe
once every 5 minutes, while still protecting my data in case of a PC
crash or an unexpected shutdown due to user ignorance?

Does anyone have any ideas for this type of situation?

How large are these rows?  12 inserts a second is chump change if
they're small ... If you're inserting 100 KB blobs, then you may want
to rethink things.

At 12 rows per second (given relatively small rows), 24 hours of usage
will still cause less hard-drive churning than a single reboot of your
machine.  Consider that a fast app can insert about 1 million rows into
a SQLite table in about 15 seconds, by batching the inserts into a
single transaction.
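
For what it's worth, here is a rough sketch of the "commit every few
minutes" idea, assuming Python's standard sqlite3 module; the table
name, columns, and 5-minute interval are placeholders, not anything
from your actual app:

import sqlite3
import time

conn = sqlite3.connect("data.db")
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, value REAL)")

COMMIT_INTERVAL = 300  # seconds; the 5 minutes you suggested
last_commit = time.time()

def insert_record(value):
    global last_commit
    # In sqlite3's default mode the INSERT joins the open transaction,
    # so nothing is synced to disk until commit() is called.
    conn.execute("INSERT INTO readings (ts, value) VALUES (?, ?)",
                 (time.time(), value))
    if time.time() - last_commit >= COMMIT_INTERVAL:
        conn.commit()  # one disk sync per interval instead of per row
        last_commit = time.time()

The tradeoff is straightforward: anything not yet committed is lost on
a crash or power cut, so the interval is just an upper bound on how
much data you are willing to lose.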

Robert

