ASF GitHub Bot commented on CARBONDATA-257:

GitHub user jackylk opened a pull request:


    [CARBONDATA-257] Make CarbonData readable through Spark/MapReduce program

    User should be able to use SparkContext.newAPIHadoopFile to read CarbonData
    files. For example:

        val input = sc.newAPIHadoopFile(s"${cc.storePath}/default/carbon1",
          classOf[CarbonInputFormat[Array[Object]]],
          classOf[Void],
          classOf[Array[Object]])
        val result = input.map(x => x._2.toList).collect
        result.foreach(x => println(x.mkString(", ")))
    In this PR, the INPUT_DIR in the CarbonInputFormat job configuration is
changed to the table path instead of the store path, since sc.newAPIHadoopFile
sets it to its first parameter (`path`, which points at the table path).
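The change hinges on how sc.newAPIHadoopFile configures the underlying Hadoop job: it writes its first argument into the job's input-directory property. A minimal sketch of the equivalent explicit MapReduce setup (a hedged illustration, not code from this PR; the path is a placeholder and the CarbonInputFormat type parameters are assumptions that may differ across CarbonData versions):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

val job = Job.getInstance(new Configuration())
// Equivalent of what sc.newAPIHadoopFile does internally: INPUT_DIR
// ("mapreduce.input.fileinputformat.inputdir") is set to the path argument,
// which after this PR must be the table path, not the store path.
FileInputFormat.setInputPaths(job, new Path("/store/default/carbon1"))
```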

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jackylk/incubator-carbondata inputformat

Alternatively you can review and apply these changes as the patch at:


To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #174
commit c67e5a91fc32484dea49ede62dbcea68ba11e98f
Author: jackylk <jacky.li...@huawei.com>
Date:   2016-09-18T23:47:55Z

    change INPUT_DIR to tablePath instead of storePath

commit f2858e247a9e968da3cd4121c31cc6f95456d804
Author: jackylk <jacky.li...@huawei.com>
Date:   2016-09-18T23:48:35Z

    add mapreduce example to read carbon files


> Make CarbonData readable through Spark/MapReduce program
> --------------------------------------------------------
>                 Key: CARBONDATA-257
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-257
>             Project: CarbonData
>          Issue Type: Improvement
>          Components: hadoop-integration
>    Affects Versions: 0.1.0-incubating
>            Reporter: Jacky Li
>            Assignee: Jacky Li
>             Fix For: 0.2.0-incubating
> User should be able to use SparkContext.newAPIHadoopFile to read CarbonData 
> files

This message was sent by Atlassian JIRA
