We went over this stuff about 17/18 years ago in Pipeline CForum. What did
Melinda come up with then, and would it be useful in this case?

Rod
On 21 Aug 2015 22:46, "Rob Van der Heij" <[email protected]> wrote:

> > Thus, the only viable way to process CSV data correctly (i.e.,
> > compensating for download errors) is a new built-in that turns the
> > field-separating commas into something else, specified by the user. The
> > program could verify that the input does not contain this separator
> > character.
> >
> > Of course, doing both is also possible, as long as there are no quoted
> > CRLFs in inputRange CSV n.
> >
> > Preferences, anyone?
>
> I don't see how we can do it right when the transfer has already interpreted
> all line breaks to build separate records, including the line breaks that
> were embedded in strings. And how do we want to represent the CRLF as part
> of the string in the EBCDIC domain? By x25 or x15 or so? It's a bit harder
> than 'joincont not trailing /"/ x15' if we want to handle the CRLF between
> two escaped quotes :-)
>
> I'm not convinced a CSV option for scanning fields in the input range would
> be a big plus, considering that you might also want to produce CSV format.
> Wouldn't we rather deblock the stream into one-field-per-record, maybe
> stripped of the quotes, and possibly prefixed with the field number or
> separated by a null record? The reverse process would make it into CSV
> format again, and the transfer would render record boundaries and embedded
> x25 as line breaks.
>
> Rob
> ---
> Rob van der Heij
> z/VM Development, CMS Pipelines
>
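
To make the rejoin and one-field-per-record ideas in the quoted message
concrete, here is a rough sketch in Python rather than in CMS Pipelines
(the stage under discussion does not exist yet, which is the point of the
thread). The choice of x'25' as the stand-in for a line break lost inside
a quoted field, the field-number prefix, and the function names are
assumptions for illustration only:

    EMBEDDED_NL = "\x25"   # assumed stand-in for a CRLF lost inside a quoted field

    def rejoin(records):
        """Glue physical records back together while the accumulated text
        still has an odd number of quote characters, i.e. the transfer cut
        a quoted field at a line break."""
        pending = None
        for rec in records:
            pending = rec if pending is None else pending + EMBEDDED_NL + rec
            if pending.count('"') % 2 == 0:    # quotes balanced: record complete
                yield pending
                pending = None
        if pending is not None:                # unterminated quote at end of input
            yield pending

    def deblock(record):
        """Split one logical CSV record into fields, honouring commas inside
        quotes and doubled ("escaped") quotes; return (field number, value)
        pairs, i.e. one field per output record."""
        fields, field, in_quotes, i = [], [], False, 0
        while i < len(record):
            c = record[i]
            if in_quotes:
                if c == '"' and record[i + 1:i + 2] == '"':
                    field.append('"')          # doubled quote inside a quoted field
                    i += 1
                elif c == '"':
                    in_quotes = False
                else:
                    field.append(c)
            elif c == '"':
                in_quotes = True
            elif c == ',':
                fields.append(''.join(field))
                field = []
            else:
                field.append(c)
            i += 1
        fields.append(''.join(field))
        return list(enumerate(fields, 1))

The reverse direction would re-quote any field containing a comma, a quote,
or the embedded-line-break byte, join the fields with commas, and let the
transfer turn record boundaries and embedded x'25' back into line breaks, as
described above. The alternative raised at the top of the quote, turning the
separating commas into a character the user asserts does not occur in the
data, would be the same scan with the comma replaced instead of the field
emitted.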
