[ 
https://issues.apache.org/jira/browse/CSV-226?focusedWorklogId=324505&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-324505
 ]

ASF GitHub Bot logged work on CSV-226:
--------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/Oct/19 17:13
            Start Date: 07/Oct/19 17:13
    Worklog Time Spent: 10m 
      Work Description: garydgregory commented on issue #30: [CSV-226] Add 
CSVParser test case for standard charsets.
URL: https://github.com/apache/commons-csv/pull/30#issuecomment-539113447
 
 
   Can you post URLs here so we can make the connection?
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 324505)
    Time Spent: 40m  (was: 0.5h)

> Add CSVParser test case for standard charsets
> ---------------------------------------------
>
>                 Key: CSV-226
>                 URL: https://issues.apache.org/jira/browse/CSV-226
>             Project: Commons CSV
>          Issue Type: Test
>          Components: Parser
>    Affects Versions: 1.5
>            Reporter: Anson Schwabecher
>            Priority: Minor
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Hello, I'd like to contribute a CSVParser test suite for standard charsets as 
> defined in java.nio.charset.StandardCharsets + UTF-32.
> This is a standalone test, but it also supports a fix for CSV-107.  It 
> additionally refactors and unifies the existing tests around your 
> established workaround of placing a BOMInputStream ahead of the CSVParser.
> It will take a single base UTF-8-encoded file (cstest.csv) and copy it to 
> multiple output files (in the target dir) with differing character sets, 
> similar to the iconv tool.  Each file will then be fed into the parser to 
> test all the BOM/no-BOM Unicode variants.  I think a file-based approach is 
> still important here, rather than just encoding a character stream inline as 
> a string; that way, if issues develop, the data is easy to inspect.
> I noticed in the project’s pom.xml (rat config) that you are excluding 
> individual test resource files by name rather than using a wildcard 
> expression to exclude every file in the directory.  Is there a reason for 
> this?  It would be better if devs did not have to maintain this 
> configuration by hand.
> {code:language=xml|title=i.e.: switch over to a single exclude expression}
> <exclude>src/test/resources/**/*</exclude>
> {code}
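For context, the iconv-like transcoding step described above could be sketched roughly as follows. This is a hypothetical stand-alone sketch, not the actual patch: the file name cstest.csv and the target-dir output layout come from the description, while the class name, method name, and naming scheme for the copies are illustrative assumptions.

```java
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the iconv-like step from the issue description:
// read the UTF-8 base file and re-encode it into a target charset.
public class CharsetCopySketch {

    // Re-encode a UTF-8 source file into `target`, writing the copy into
    // outDir with the charset name appended (e.g. cstest.csv.UTF-16LE).
    static Path transcode(Path utf8Source, Charset target, Path outDir)
            throws IOException {
        String text = new String(Files.readAllBytes(utf8Source),
                StandardCharsets.UTF_8);
        Path out = outDir.resolve(
                utf8Source.getFileName() + "." + target.name());
        Files.write(out, text.getBytes(target));
        return out;
    }
}
```

Each generated file would then be opened through the BOMInputStream-plus-CSVParser workaround the description refers to, so the BOM and no-BOM variants all exercise the same parsing path.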



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
