GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13252
[SPARK-15473][SQL] CSV data source fails to write and read back empty data
## What changes were proposed in this pull request?
This PR adds support for writing and reading back empty data in the CSV
data source.
It also makes the CSV data source write the schema as a header when the
`header` option is `true`, even if the given data is empty; currently no
header is written in this case.
The Parquet and JSON data sources can already write and read back an empty
dataset, but CSV cannot.
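As a rough sketch, the round trip below is the kind of usage this change is meant to allow; the output path, `local[*]` master, and session setup are illustrative and not taken from the patch itself:

```scala
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._

val spark = SparkSession.builder()
  .appName("empty-csv-roundtrip")
  .master("local[*]")           // illustrative; any deployment works
  .getOrCreate()

// An empty DataFrame with a concrete schema.
val schema = new StructType().add("a", StringType).add("b", IntegerType)
val emptyDf = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], schema)

// With this change, writing empty data with header=true should still emit
// the column names, and reading the result back should succeed rather than fail.
emptyDf.write.option("header", "true").csv("/tmp/empty-csv")
val readBack = spark.read.option("header", "true").csv("/tmp/empty-csv")
readBack.show()
```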
## How was this patch tested?
Unit tests in `CSVSuite.scala`.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/HyukjinKwon/spark SPARK-15473
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/13252.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #13252
----
commit 9e505527caaf2db6909f542918ddbe03c112d040
Author: hyukjinkwon <[email protected]>
Date: 2016-05-22T11:17:36Z
CSV data source fails to write and read back
----