Bart Lateur wrote:
> 
> On Thu, 24 Jan 2002 16:50:06 -0600, Jeff Mirabile wrote:
> 
> >It's four types of records that are fixed length fields, but needs a lot
> >of cleaning.
> 
> I fail to see why you want to shoehorn such a simple problem into such
> a rather inappropriate tool as DBI.
> ...
> See "perldoc -f pack" for other options.

And see "perldoc -f flock" if more than one process might be modifying
the file.  And see "perldoc LWP" if the file is located remotely.  And
see "perldoc perlre" if you need to clean things based on combinations
of keys in the fields.  And see "perldoc -q duplicate" if you need only
unique rows.  And see the archives of this newsgroup if you need to
build on-the-fly insert strings from the results of the cleaning.  And
then go back to all the docs if the next round of files to clean is
stored in CSV or XML or an HTML table or whatever.  Or just see
"perldoc DBD::AnyData" if you need all those things and are already familiar
with DBI.
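For the plain-Perl route, the pack/perlre/duplicate pointers above boil
down to a few lines.  A minimal sketch (the record layout here --
10-char name, 2-char state, 5-char zip -- is hypothetical, not the
OP's actual format):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical fixed-length records: 10-char name, 2-char state, 5-char zip.
my @raw = (
    "Jeff      TX77001",
    "Bart      NY10001",
    "Jeff      TX77001",   # duplicate row
);

my (%seen, @clean);
for my $line (@raw) {
    # "A" fields in an unpack template strip trailing blanks for us
    # (see "perldoc -f pack" for the template syntax).
    my ($name, $state, $zip) = unpack 'A10 A2 A5', $line;

    # Any further cleaning is just regex work ("perldoc perlre").
    $name =~ s/^\s+//;

    # Keep only unique rows -- the classic %seen idiom
    # from "perldoc -q duplicate".
    next if $seen{"$name|$state|$zip"}++;
    push @clean, [ $name, $state, $zip ];
}
```

Reading from a real file instead of @raw would add an open/flock pair
around the loop, per "perldoc -f flock".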

Bart's point is certainly well taken and may or may not apply in this
particular case, depending on the answers to some of those ifs.  The
other question is readability and maintainability.  If the OP or someone
else in their shop needs to look at this script a year from now, is it
going to be easier to figure out what the script does if it's plain
Perl or if it's DBI?  If the person who goes back to the script is a
DBA who knows a bit of Perl, will it be easier to change the pack
patterns and regexen, or to change the SQL statements?  It's a matter
of whether one
wants the script to deal primarily with the patterns of the characters
in the file or with the patterns of the content.  Even in cases where
it's a no-brainer to deal with the character patterns, there may be
other reasons to have the script operate at a higher level.

P.S.  I'm well aware that the script I sent earlier in this thread to
use DBI on plain unstructured text files is an absurdity in most
contexts.  :-)

-- 
Jeff
