[ https://issues.apache.org/jira/browse/AVRO-1271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Hanna resolved AVRO-1271.
------------------------------

    Resolution: Later

Moving this to avro-users.
                
> Hadoop Streaming mangles Python-produced Avro
> ---------------------------------------------
>
>                 Key: AVRO-1271
>                 URL: https://issues.apache.org/jira/browse/AVRO-1271
>             Project: Avro
>          Issue Type: Bug
>          Components: java, python
>    Affects Versions: 1.7.4
>         Environment: Linux
>            Reporter: Alex Hanna
>
> I've got a rather simple script that takes Twitter data in JSON and turns it 
> into an Avro file.
> from avro import schema, datafile, io
> import json, sys
> from types import *
> def main():
>     if len(sys.argv) < 2:
>         print "Usage: cat input.json | python2.7 JSONtoAvro.py output"
>         return
>     s = schema.parse(open("tweet.avsc").read())
>     f = open(sys.argv[1], 'wb')
>     writer = datafile.DataFileWriter(f, io.DatumWriter(), s, codec='deflate')
>     failed = 0
>     for line in sys.stdin:
>         line = line.strip()
>         try:
>             data = json.loads(line)
>         except ValueError as detail:
>             continue
>         try:
>             writer.append(data)
>         except io.AvroTypeException as detail:
>             print line
>             failed += 1
>     writer.close()
>     print str(failed) + " failed in schema"
> if __name__ == '__main__':
>     main()
> From there, I use the resulting Avro file to feed a basic Hadoop Streaming script (also in Python) that just pulls out certain elements of each tweet. However, when I do this, the input handed to the script appears to be mangled JSON: parsing usually fails on an errant \u escape in the middle of the tweet body or the user-defined description.
> The Streaming script is rather basic -- it reads lines from sys.stdin and attempts to parse each one as JSON using the json package.
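> The mapper itself is not attached here; below is a minimal sketch of that kind of script, assuming it just parses each stdin line and emits a couple of fields (the field names and the counter are illustrative, not the actual tweetMapper.py):
> import json
> import sys
> for line in sys.stdin:
>     line = line.strip()
>     try:
>         tweet = json.loads(line)
>     except ValueError:
>         # this is where the mangling shows up: the line no longer parses as JSON
>         sys.stderr.write("reporter:counter:tweetMapper,bad_json,1\n")
>         continue
>     # field names here are assumptions for illustration
>     print "%s\t%s" % (tweet.get("id_str", ""), tweet.get("text", u"").encode("utf-8"))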
> Here is the bash script I use to invoke Hadoop Streaming:
> jars=/usr/lib/hadoop/lib/avro-1.7.1.cloudera.2.jar,/usr/lib/hive/lib/avro-mapred-1.7.1.cloudera.2.jar
> hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
>     -files $jars,$HOME/sandbox/hadoop/streaming/map/tweetMapper.py,$HOME/sandbox/hadoop/streaming/data/keywords.txt,$HOME/sandbox/hadoop/streaming/data/follow-r3.txt \
>     -libjars $jars \
>     -input /user/ahanna/avrotest/avrotest.json.avro \
>     -output output \
>     -mapper "tweetMapper.py -a" \
>     -reducer org.apache.hadoop.mapred.lib.IdentityReducer \
>     -inputformat org.apache.avro.mapred.AvroAsTextInputFormat \
>     -numReduceTasks 1
> I'm starting to think this is a bug in org.apache.avro.mapred.AvroAsTextInputFormat.
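> For reference, one way to isolate where the mangling happens would be to read the container back directly with the Python avro library (filename assumed below) and confirm the records on disk are intact -- if they are, the corruption is being introduced on the Hadoop side rather than by the writer:
> from avro import datafile, io
> reader = datafile.DataFileReader(open("avrotest.json.avro", "rb"), io.DatumReader())
> count = 0
> for record in reader:
>     count += 1
> reader.close()
> print str(count) + " records read back cleanly"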

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
