Horace Blegg <tkjthing...@gmail.com> wrote:
> So, Example: I'll read in a CSV file (just one, for now.) and store it into
> a list. Sometime later, I'll get another CSV file, almost identical/related
> to the first. However, a few values might have changed, and there might be a
> few new lines (entries) or maybe a few less. I would want to compare the CSV
> file I have in my list (in memory) to new CSV file (which I would probably
> read into a temporary list). I would then want to track and log the
> differences between the two files. After I've figured out what's changed, I
> would either update the original CSV file with the new CSV's information, or
> completely discard the original and replace it with the new one (whichever
> involves less work). Basically, lots of iterating through each entry of each
> CSV file and comparing to other information (either hard coded or variable).
>
> So, to reiterate, are lists what I want to use? Should I be using something
> else? (even if that 'something else' only really comes into play when
> storing and operating on LOTS of data, I would still love to hear about it!)
Given your description, I don't see any reason to prefer an alternate data
structure: a list of rows is a fine fit. A thousand small CSV files should fit
in a modern computer's memory with no problem, and if that ever does become an
issue, you can worry about it then.

One thought, though: you might want to create a list subclass to hold your
data, so that you can put useful-to-you methods (reading a file, diffing
against another list, and so on) on the subclass. A rough sketch follows below.

--
R. David Murray                                  http://www.bitdance.com
IT Consulting          System Administration          Python Programming
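For example, here is a minimal sketch of what such a subclass might look like.
All the names (CSVRows, key_field, diff_against) are placeholders I'm making
up, and it assumes each row has a column whose value uniquely identifies the
entry -- adjust to whatever your data actually looks like:

    import csv


    class CSVRows(list):
        """A list of CSV rows (as dicts) with a few helper methods.

        Assumes each row has a unique value in `key_field`.
        """

        def __init__(self, rows=(), key_field='id'):
            super().__init__(rows)
            self.key_field = key_field

        @classmethod
        def from_csv(cls, path, key_field='id'):
            # Read the whole file into memory -- fine for small files.
            with open(path, newline='') as f:
                return cls(csv.DictReader(f), key_field)

        def _by_key(self):
            # Map key value -> row for quick lookups.
            return {row[self.key_field]: row for row in self}

        def diff_against(self, other):
            """Return (added, removed, changed) rows relative to `other`."""
            old, new = other._by_key(), self._by_key()
            added = [new[k] for k in new if k not in old]
            removed = [old[k] for k in old if k not in new]
            changed = [(old[k], new[k]) for k in new
                       if k in old and new[k] != old[k]]
            return added, removed, changed

Then something like

    old = CSVRows.from_csv('yesterday.csv', key_field='id')
    new = CSVRows.from_csv('today.csv', key_field='id')
    added, removed, changed = new.diff_against(old)

gives you three buckets to log, and afterwards you can simply throw away the
old list and keep the new one rather than patching it entry by entry.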