Hello nbiggs,

My users typically download between 3 and 40 gigs of data a day to commodity IDE hard drives. This means downloading files in pieces and, when there are enough pieces to complete a file, assembling it on the hard disk at maximum speed. The files range from 60K to 50 Megs each. During download they sustain fairly constant writes to disk of between 1.5 and 10 Mbps. Some run 24x7 (and some have been tossed out by their ISPs).
I've asked them whether they've been seeing increased failure rates on their hard drives (I use SCSI only, so my drives are designed for this kind of usage). The results were inconclusive. Some have lost hard drives, but for the most part their hard disks just crunch away for years at a time. I think it unlikely that your usage is more than a blip of data to the hard drive.

C

Friday, January 27, 2006, 12:26:15 PM, you wrote:

n> This is what I am inserting per record.

n> Insert into table values(1, 1, 172, 97, 1, 4, 1, 2.29, 'A',
n> '2006012410052941', 12345, 0, 0, 0, 1, 1, 0)

n> Other than that, I do some updates on the last field by setting the
n> value to 1 or 2.

n> -----Original Message-----
n> From: Robert Simpson [mailto:[EMAIL PROTECTED]]
n> Sent: Friday, January 27, 2006 12:06 PM
n> To: sqlite-users@sqlite.org
n> Subject: Re: [sqlite] Save my harddrive!

n> ----- Original Message -----
n> From: "nbiggs" <[EMAIL PROTECTED]>

>> My application generates about 12 records a second. I have no
>> problems storing the records into the database, but started
>> wondering: if I commit every 12 records, will my hard drive
>> eventually die from extreme usage? During a 24-hour period up to
>> 1 million records will be generated and inserted. At the end of the
>> day, all the records will be deleted and the inserts will start
>> again for another 24 hours.
>>
>> Can I store the records in memory, or just not commit as often,
>> maybe once every 5 minutes, while still protecting my data in case
>> of a PC crash or unexpected shutdown due to user ignorance?
>>
>> Does anyone have any ideas for this type of situation?

n> How large are these rows? 12 inserts a second is chump change if
n> they're small ... If you're inserting 100k blobs then you may want
n> to rethink things.

n> At 12 rows per second (given a relatively small row), 24 hours of
n> usage will still be less than the amount of hard drive churning
n> involved in a single reboot of your machine. Consider that a fast
n> app can insert about 1 million rows into a SQLite table in about
n> 15 seconds.

n> Robert

--
Best regards,
 Teg                            mailto:[EMAIL PROTECTED]
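The million-rows-in-15-seconds figure Robert cites typically comes from exactly the pattern nbiggs is asking about: prepare the statement once, then commit many rows per transaction, so the drive sees one sync per batch instead of one per row. A minimal sketch with the SQLite C API follows; the database filename, the table name "records", the batch size, and the choice of C at all are assumptions for illustration, since the thread never shows the real schema or language. The column values are copied from nbiggs' example insert.

    /* Sketch only, not the poster's actual app: batched inserts in
     * one transaction with a prepared statement. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        int i;

        if (sqlite3_open("records.db", &db) != SQLITE_OK) {
            fprintf(stderr, "can't open db: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        /* Prepare once; bind/step/reset per row avoids reparsing the
         * SQL for each of the million inserts. */
        sqlite3_prepare_v2(db,
            "INSERT INTO records VALUES(?,1,172,97,1,4,1,2.29,'A',"
            "'2006012410052941',12345,0,0,0,1,1,0)", -1, &stmt, 0);

        /* One transaction around the batch means one disk sync per
         * batch instead of one per row. Committing every few thousand
         * rows (or every few minutes, per the original question)
         * bounds how much a crash can lose. */
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for (i = 0; i < 1000000; i++) {
            sqlite3_bind_int(stmt, 1, i);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }

With a commit after every row the same loop would force the disk to sync a million times; with one commit per batch the writes collapse into a few sequential bursts, which is why the commit interval, not the hardware, is the knob to turn here.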