Hi everyone,

I could use both suggestions and answers on the following topic.  The
problem I'm trying to solve is quite simple: I must parse through a large
csv file.  A field within the csv signifies which product type the
included data is associated with.  Once the product type is known, a
second field within the same csv (a hex string encoding 64 bytes, i.e.
512 bits) must be parsed out.  The hex string is converted to binary,
where each bit represents pass/fail for a particular functional test.
Each functional test is unique, and the index of a given test in the
512 bit string may differ by product type.  Everything described so far
I'm comfortable tackling.
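
For the hex-to-bit step I was picturing something along these lines (an
untested sketch; the sample value is made up, and I'm assuming bit 0
counts from the left end of the expanded string):

###############################
use strict;
use warnings;

my $hex = 'deadbeef' x 16;    # stand-in for the 128-hex-digit field
# Expand each hex digit to 4 bits, MSB first -> 512-character string.
(my $bits = $hex) =~ s/(.)/sprintf '%04b', hex $1/ge;

# Pass/fail of the test at bit index $i:
my $i = 16;
print "test at bit $i: ", substr($bits, $i, 1), "\n";
###############################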
Now to the real question.  My initial direction with this problem was to
have a separate functable.pl file with a unique array for each lookup
table (the value at each array location is simply a scalar holding the
test name).  Then "use functable.pl" in the main parsing script and
reference the desired lookup table.  If I knew every lookup table ahead
of time, I would of course just include them in the parsing script.
However, new product types will always be around the corner.  So is this
a reasonable solution or a rookie mistake?
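
Roughly what I had in mind for that file, as a sketch only (the product
and test names are placeholders, and since it's a plain .pl file rather
than a proper module, I believe it would be pulled in with require
rather than use):

###############################
# functable.pl -- one sparse array per product type; the array index
# is the test's bit position, the value is the test name.
package FuncTable;
use strict;
use warnings;

our %table;
$table{product1}[0]  = 'testA';
$table{product1}[8]  = 'testB';
$table{product1}[16] = 'testC';
$table{product1}[32] = 'testD';

1;  # a require'd file must return a true value
###############################

The parsing script would then do require 'functable.pl'; and look up
$FuncTable::table{$product}[$bit].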
Or should I simply have a .txt file that might look like the following:
###############################
# begin: product1
testA,0
testB,8
testC,16
testD,32
# end

# begin: product2
testA,2
testB,4
testC,8
testD,16
# end
###############################

Then simply read in the .txt file and build the arrays from within the
main parsing script, as opposed to referencing an array in a separate
file?
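
If I go that route, the reader might be something like this (untested;
the filename is made up and I'm assuming the exact begin/end markers
shown above):

###############################
use strict;
use warnings;

# Build $index{$product}{$test} = bit position from the .txt layout.
my %index;
my $product;
open my $fh, '<', 'functable.txt' or die "Can't open functable.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    if    ($line =~ /^#\s*begin:\s*(\S+)/)  { $product = $1; }
    elsif ($line =~ /^#\s*end/)             { undef $product; }
    elsif (defined $product and $line =~ /^(\w+),(\d+)\s*$/) {
        $index{$product}{$1} = $2;
    }
}
close $fh;
###############################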
Thanks for your time.  I hope to read some good discussion.

Sincerely,

Steven
