GitHub user liancheng opened a pull request:

    https://github.com/apache/spark/pull/12002

    [SPARK-14206][SQL] buildReader() implementation for CSV

    ## What changes were proposed in this pull request?
    
    Major changes:
    
    1. Implement `FileFormat.buildReader()` for the CSV data source.
    1. Add an extra argument, `physicalSchema`, to `FileFormat.buildReader()`. It is essentially the result of `FileFormat.inferSchema()`, or the user-specified schema when one is provided.
    
       This argument is necessary because the CSV data source needs the full column layout of the underlying files in order to locate the columns requested by a query.
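The role of `physicalSchema` can be illustrated with a simplified, self-contained sketch (the names `buildReader`, `physicalSchema`, and `requiredSchema` mirror Spark's API, but the types below are toy stand-ins, not Spark's actual classes): the full file schema fixes the position of every column in a CSV line, and the required schema selects the projection.

```scala
// Hypothetical sketch: a CSV "reader factory" that needs the physical
// (full) schema of the file to find the positions of the required columns.
object CsvReaderSketch {
  type Row = Seq[String]

  // Returns a function that parses one CSV line and projects it down to
  // the required columns, using physicalSchema to resolve positions.
  def buildReader(physicalSchema: Seq[String],
                  requiredSchema: Seq[String]): String => Row = {
    // Map each required column name to its index in the file layout.
    val positions = requiredSchema.map(physicalSchema.indexOf)
    line => {
      val fields = line.split(",", -1)
      positions.map(fields(_))
    }
  }

  def main(args: Array[String]): Unit = {
    val reader = buildReader(
      physicalSchema = Seq("id", "name", "age"),
      requiredSchema  = Seq("age", "id"))
    // Without the physical schema, the reader could not know that
    // "age" is the third field of each line.
    println(reader("1,Alice,30").mkString("|"))  // prints "30|1"
  }
}
```

This is why inferring or receiving the full schema up front is unavoidable for CSV, unlike self-describing formats such as Parquet, where each file carries its own column metadata.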
    
    ## How was this patch tested?
    
    Existing tests should cover these changes.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/liancheng/spark spark-14206-csv-build-reader

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/12002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #12002
    
----
commit e559ba58f7a9972f856c192c436e8cf9778e35cb
Author: Cheng Lian <[email protected]>
Date:   2016-03-28T16:35:20Z

    buildReader() implementation for CSV

commit dd2afe6c2e89d4ec68f0d5b00c3bea162947c4ed
Author: Cheng Lian <[email protected]>
Date:   2016-03-28T16:37:49Z

    Fixes import order

----


