GitHub user HyukjinKwon commented on the pull request:

    https://github.com/apache/spark/pull/11194#issuecomment-183901833
  
    Actually, I have been thinking that we might want a class such as 
`TestCSVData` that holds datasets for testing (similar to 
[TestJsonData](https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/TestJsonData.scala)
 for the JSON data source), or a class like `CSVTest` (similar to 
[OrcTest](https://github.com/apache/spark/blob/master/sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcTest.scala)
 for the ORC data source), rather than adding test CSV files every time.
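    
    Roughly, what I have in mind is something like the sketch below. This is only an illustration, assuming hypothetical names such as `TestCSVData`, `carsLines`, and `withCSVFile` (none of which exist yet); test suites would mix the trait in and generate CSV inputs on the fly instead of checking new files into the test resources.
    
    ```scala
    import java.io.File
    import java.nio.charset.StandardCharsets
    import java.nio.file.Files
    
    trait TestCSVData {
    
      // Sample rows shared across suites, analogous to the in-memory
      // datasets defined in TestJsonData for the JSON data source.
      protected def carsLines: Seq[String] = Seq(
        "year,make,model,comment",
        "2012,Tesla,S,No comment",
        "1997,Ford,E350,Go get one now they are going fast")
    
      // Writes the given lines to a temporary CSV file, runs the test body
      // against its path, and deletes the file afterwards, similar in
      // spirit to the `withOrcFile` helper in OrcTest.
      protected def withCSVFile(lines: Seq[String])(f: String => Unit): Unit = {
        val file = File.createTempFile("test", ".csv")
        try {
          Files.write(
            file.toPath,
            lines.mkString("\n").getBytes(StandardCharsets.UTF_8))
          f(file.getAbsolutePath)
        } finally {
          file.delete()
        }
      }
    }
    ```
    
    A suite could then write `withCSVFile(carsLines) { path => ... }` instead of referencing a checked-in resource file, which keeps the test data next to the tests that use it.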
    
    I think this might be better done in a separate PR. If you agree, I 
will create an issue and a PR for it.

