GitHub user ericgarcia opened a pull request:

    https://github.com/apache/spark/pull/1536

    Example pyspark-inputformat for Avro file format

    This is an example showing how to read an Avro file into a PySpark RDD.
    
    Starting PySpark with

        SPARK_CLASSPATH=examples/target/scala-2.10/spark-examples-1.1.0-SNAPSHOT-hadoop1.0.4.jar \
          IPYTHON=1 bin/pyspark
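
    The read below expects an Avro file at /tmp/data.avro. Purely as an
    illustration (not part of this patch), a minimal sketch for producing
    such a file with the standard avro Python package, using a made-up
    two-field schema, could look like:

        import avro.schema
        from avro.datafile import DataFileWriter
        from avro.io import DatumWriter

        # Hypothetical schema, used only for this illustration.
        schema = avro.schema.parse("""
        {"type": "record", "name": "User",
         "fields": [{"name": "name", "type": "string"},
                    {"name": "age",  "type": "int"}]}
        """)

        # Write a couple of records to the path read by the example below.
        writer = DataFileWriter(open("/tmp/data.avro", "wb"), DatumWriter(), schema)
        writer.append({"name": "Alice", "age": 30})
        writer.append({"name": "Bob", "age": 25})
        writer.close()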
    
        avroRdd = sc.newAPIHadoopFile(
            "/tmp/data.avro",
            "org.apache.avro.mapreduce.AvroKeyInputFormat",
            "org.apache.avro.mapred.AvroKey",
            "org.apache.hadoop.io.NullWritable",
            keyConverter="org.apache.spark.examples.pythonconverters.AvroGenericConverter")
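
    As a quick sanity check, and assuming the converter hands each Avro
    record back as a Python dict (the value side is just NullWritable),
    inspecting the resulting RDD might look like:

        # Only the keys carry data; the values come back as None.
        records = avroRdd.keys()
        print(records.first())   # e.g. {u'name': u'Alice', u'age': 30} with the made-up schema above
        print(records.count())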
    
    Note that the converter does not yet handle all Avro data types.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ericgarcia/spark master

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/1536.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1536
    
----
commit 3b4804ba84f8cf36df181868b15894a85c6f7385
Author: Eric Garcia <[email protected]>
Date:   2014-07-22T19:49:38Z

    Example pyspark-inputformat for Avro files

----


