On 2016/11/22 6:00 PM, John R. Sowden wrote:
That was a throwback to years ago. I was trying to protect against Y2K by making each dbf cover one calendar year. Also, these files are about 800k in size, so I was worried about storage and search time. Storage is not an issue anymore. I will know about search time after learning about SQL databases. Sometimes over the last 45 years I have changed the dbf structure. This way I only change starting at the year in question, then change the program to work with the structure modification.

SQLite (and indeed all modern SQL engines) can handle tables many gigabytes (and even terabytes) in size with extremely fast lookups via simple indexing. I suggest you create just one file with one table and simply add a field to distinguish which year each row belongs to (considering the sizes you've mentioned, this would be child's play for any DB engine, and very fast).
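As a minimal sketch of that layout (the table and column names here are just illustrative, not from your dbf structure), using Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory DB for illustration; in practice pass a file path instead.
con = sqlite3.connect(":memory:")

con.execute("""
    CREATE TABLE log (
        id    INTEGER PRIMARY KEY,
        year  INTEGER NOT NULL,   -- replaces the one-file-per-year split
        entry TEXT
    )
""")
# An index on year keeps per-year lookups fast even with decades of rows.
con.execute("CREATE INDEX idx_log_year ON log(year)")

con.executemany("INSERT INTO log (year, entry) VALUES (?, ?)",
                [(2015, "first"), (2016, "second"), (2016, "third")])

# Pulling out one year's data is a simple indexed WHERE clause.
rows = con.execute("SELECT entry FROM log WHERE year = 2016").fetchall()
```

The index means SQLite never scans other years' rows when you filter on a single year, so the old per-year files buy you nothing.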

It is not uncommon to find people here with terabyte, multi-volume DBs holding literally billions of rows, so dividing a data set like yours is usually superfluous and should be reserved for extreme cases.

We always encourage experimentation with your specific data and hardware. Queries should be very fast, and the speed difference between a table with one year's data and ten years' worth should be negligible (considering your current year's dataset totals 800k). Further, having your data in a single table (or at a minimum, a single DB) means you can query relationally across different years and get multi-year statistics without resorting to expensive joins and/or attaching external DBs.
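For instance, a multi-year statistic becomes a single GROUP BY over the one table (again a sketch with made-up column names, not your actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (year INTEGER NOT NULL, amount REAL)")
con.executemany("INSERT INTO log VALUES (?, ?)",
                [(2015, 10.0), (2015, 5.0), (2016, 7.5)])

# One GROUP BY yields per-year totals across every year at once --
# no joins and no ATTACHed per-year databases required.
totals = dict(con.execute(
    "SELECT year, SUM(amount) FROM log GROUP BY year"))
```

With per-year files you would have to ATTACH each database and UNION the results by hand; with one table the engine does it in a single pass.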

Good luck!
Ryan

_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
