On Wed, 20 Apr 2005, KEVIN ZEMBOWER wrote:

> I'm inclined to use Text::xSV because of its recent update. I've used
> Text::CSV_XS successfully before, but it hasn't been revised lately
> (maybe it doesn't need to be revised?) and it seems more complex than
> the others, requiring the use of IO::File::flock.
>
> I've got to process about 320,000 records, so speed of execution is an
> issue, but it's not the overriding concern.
So use Text::CSV_XS then.

Text::CSV and Text::CSV_XS are the standard modules for this. If you need
to work with the files as they are now, and you need your code to run
fast, then the module to use is Text::CSV_XS. (Text::CSV should behave
identically, but it is implemented in pure Perl; that makes it more
portable than the C/XS version, but much slower on big data files like
yours.)

As an alternative, the next most popular option is DBD::CSV, which lets
you treat the CSV (or TSV, or whatever) files as if they were tables in a
relational database, so you can issue SQL statements against the file
contents. This can be useful -- especially if you expect the data to be
migrated to a proper database eventually -- but I'm not sure how it
compares speed-wise to Text::CSV_XS. If raw speed is all you need, it may
not help you.

-- 
Chris Devers
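To make that concrete, here's a minimal Text::CSV_XS sketch. The sample
line and its field layout are made up for illustration -- substitute your
own file and columns:

```perl
use strict;
use warnings;
use Text::CSV_XS;

# binary => 1 lets fields contain embedded newlines and other odd bytes.
my $csv = Text::CSV_XS->new({ binary => 1 });

# One sample line standing in for one of your 320,000 records
# (hypothetical fields: id, name, city):
my $line = qq{42,"Smith, Kevin",Baltimore};

$csv->parse($line) or die "parse failed on: " . $csv->error_input;
my @fields = $csv->fields;    # ("42", "Smith, Kevin", "Baltimore")
print "$fields[1]\n";         # note the quoted comma survives intact

# For a whole file, let the module read for you with getline():
#
#   open my $fh, '<', 'records.csv' or die "records.csv: $!";
#   while (my $row = $csv->getline($fh)) {
#       # $row is an array ref of the fields in one record
#   }
#   close $fh;
```

The point of the module over a naive split /,/ is visible in the sample
line: the comma inside the quoted "Smith, Kevin" is data, not a field
separator, and Text::CSV_XS handles that for you.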