I have some code from a while back that does this. It wasn't written for the 
generic case, so it strips out commas, semicolons, and escaped quotes; those 
characters weren't needed in my case.



Maybe this will get you started, and if you want, I can help make it more robust.



It also assumes that row 1 contains the column names.
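

The attached parse_csv.tml isn't shown inline, so here is a minimal Python 
sketch of the same idea. Assumptions on my part: the stripping applies to 
characters inside quoted fields, and the names split_row and parse_csv are 
illustrative, not taken from the attachment.

    def split_row(line):
        """Split one CSV row, honoring double quotes around fields.
        Commas and semicolons inside quoted fields are dropped, and the
        quote characters themselves (including escaped "" pairs) are
        removed, matching the behavior described above."""
        fields, field, in_quotes = [], [], False
        for ch in line:
            if ch == '"':
                in_quotes = not in_quotes      # toggle; quotes are not kept
            elif in_quotes and ch in ',;':
                continue                       # strip chars not needed here
            elif ch == ',':
                fields.append(''.join(field))  # unquoted comma ends the field
                field = []
            else:
                field.append(ch)
        fields.append(''.join(field))
        return fields

    def parse_csv(text):
        """Row 1 is assumed to hold the column names."""
        lines = [l for l in text.splitlines() if l]
        header = split_row(lines[0])
        return [dict(zip(header, split_row(l))) for l in lines[1:]]

    rows = parse_csv('Qty,Desc\r\n"1,000","Widget, large"\r\n')
    print(rows)   # [{'Qty': '1000', 'Desc': 'Widget large'}]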



From: Fogelson, Steve [mailto:[email protected]]
Sent: Monday, October 24, 2016 1:44 PM
To: [email protected]
Subject: TeraScript-Talk: Processing a csv file with intermittent quotes 
contained in it



I receive an inventory feed as a comma-delimited CSV file.



In the taf I use to process it, I first @TOKENIZE on <@CRLF> to separate the 
rows, then @TRANSPOSE, and then @TOKENIZE on a comma to segment the fields.
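

In rough Python terms, with a hypothetical sample row and plain string splits 
standing in for the meta tags, that pipeline amounts to:

    data = 'SKU,Qty,Description\r\nA100,"1,000","Widget, large"\r\n'

    rows = [r for r in data.split('\r\n') if r]   # @TOKENIZE on <@CRLF>
    fields = [row.split(',') for row in rows]     # @TOKENIZE on comma per row

    print(fields[1])
    # ['A100', '"1', '000"', '"Widget', ' large"']

That last split is where the quoted commas break the field count, as described 
below.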



I didn’t realize it, but some of the fields are enclosed in quotes because they 
contain a comma within the field. Two examples: a quantity greater than 999, 
such as “1,000”, and a product description with a comma in it.



So rows containing quotes and extra commas are not being processed correctly. I 
could start at the beginning of each row, search for a starting and ending 
quote, and delete any commas between them, but this could take a while.
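

One way to express that pass, sketched in Python rather than TeraScript, is a 
single regex substitution per row that removes the commas inside each quoted 
section. It assumes the quotes can be dropped too, since they are only there 
to protect the commas:

    import re

    def strip_quoted_commas(line):
        # Replace each "..." section with its contents, minus commas.
        return re.sub(r'"([^"]*)"',
                      lambda m: m.group(1).replace(',', ''),
                      line)

    print(strip_quoted_commas('A100,"1,000","Widget, large"'))
    # A100,1000,Widget large

After a pass like that, the existing comma @TOKENIZE would work unchanged.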



Does anyone have a more efficient way of processing this?



Thanks



Steve Fogelson

Internet Commerce Solutions









----------------------------------------

To unsubscribe from this list, please send an email to [email protected] 
with "unsubscribe terascript-talk" in the body.

Attachment: parse_csv.tml
Description: Binary data
