All,

I am passing various pipe-delimited CSV test (case) files through Daffodil
[3.0] on Debian.

Daffodil, with validate set to "on", throws warnings about 'left over data'
for the following test cases (see the snippet image below).
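For reference, below is a minimal sketch of the parse step using the Daffodil
Java API (the schema and file names are placeholders, not my actual test
cases). My understanding is that the final parse location reported by the
ParseResult is where the 'left over data' in the warning begins:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;

    import org.apache.daffodil.japi.Compiler;
    import org.apache.daffodil.japi.Daffodil;
    import org.apache.daffodil.japi.DataProcessor;
    import org.apache.daffodil.japi.ParseResult;
    import org.apache.daffodil.japi.ProcessorFactory;
    import org.apache.daffodil.japi.ValidationMode;
    import org.apache.daffodil.japi.infoset.XMLTextInfosetOutputter;
    import org.apache.daffodil.japi.io.InputSourceDataInputStream;

    public class LeftOverDataCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder names; substitute the real DFDL schema and test case.
            File schema = new File("pipeDelimited.dfdl.xsd");
            File data = new File("testcase1.csv");

            Compiler c = Daffodil.compiler();
            ProcessorFactory pf = c.compileFile(schema);
            if (pf.isError()) {
                pf.getDiagnostics().forEach(d -> System.err.println(d.getMessage()));
                return;
            }
            DataProcessor dp = pf.onPath("/").withValidationMode(ValidationMode.Full);

            try (FileInputStream fis = new FileInputStream(data);
                 FileOutputStream xml = new FileOutputStream("testcase1.xml")) {
                ParseResult res = dp.parse(new InputSourceDataInputStream(fis),
                                           new XMLTextInfosetOutputter(xml, true));
                res.getDiagnostics().forEach(d -> System.err.println(d.getMessage()));

                // The parse stops where the infoset described by the schema ends;
                // bytes after that position are what the 'left over data' warning reports.
                long consumedBits = res.location().bitPos1b() - 1;
                System.out.println("consumed bits: " + consumedBits
                    + ", left over data (if any) starts at byte " + (consumedBits / 8 + 1));
            }
        }
    }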
Questions:

   1. Why does the number of "consumed" bits vary from case to case? As a
   follow-up, why do the "left over ... starting at byte" locations vary? I
   assume that may be because record lengths vary from file to file.
   2. Does Daffodil default to a maximum input/output file size limit?
   3. If so, can the limits be overridden with larger sizes?
   4. What are Daffodil's "absolute" maximum file size limits for input and
   output?
   5. At the end of each test case, the parsed source and unparsed target
   files are diff'd and shown to differ, yet xmllint shows that in each case
   the intermediate XML file validates successfully against the DFDL schema.
   I assume that is because only whole records are written to the
   intermediate (parsed) XML file, not partial records, so the XML file
   contains truncated data relative to the original source, hence the
   warning. Is this correct? (See the round-trip sketch after this list.)
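To illustrate what I mean in question 5, here is a rough round-trip sketch,
again with placeholder file names, where dp is the DataProcessor from the
earlier sketch. It unparses the intermediate XML back to a file and checks
whether the unparsed target is simply a truncated prefix of the original
source:

    import java.io.FileOutputStream;
    import java.io.FileReader;
    import java.nio.channels.Channels;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Arrays;

    import org.apache.daffodil.japi.DataProcessor;
    import org.apache.daffodil.japi.UnparseResult;
    import org.apache.daffodil.japi.infoset.XMLTextInfosetInputter;

    public class RoundTripCheck {
        // dp is the DataProcessor built in the previous sketch.
        static void roundTrip(DataProcessor dp) throws Exception {
            // Placeholder names for one test case.
            try (FileReader xml = new FileReader("testcase1.xml");
                 FileOutputStream out = new FileOutputStream("testcase1.unparsed.csv")) {
                UnparseResult ur = dp.unparse(new XMLTextInfosetInputter(xml),
                                              Channels.newChannel(out));
                if (ur.isError()) {
                    ur.getDiagnostics().forEach(d -> System.err.println(d.getMessage()));
                    return;
                }
            }

            byte[] source = Files.readAllBytes(Paths.get("testcase1.csv"));
            byte[] target = Files.readAllBytes(Paths.get("testcase1.unparsed.csv"));

            // If the only difference is trailing left-over data, the unparsed
            // target should match the leading bytes of the source and simply
            // be shorter.
            boolean prefixMatches = target.length <= source.length
                && Arrays.equals(Arrays.copyOf(source, target.length), target);
            System.out.println("target is a prefix of source: " + prefixMatches
                + " (source " + source.length + " bytes, target " + target.length + " bytes)");
        }
    }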

[image: image.png]

Thx in advance

Attila
