[ 
https://issues.apache.org/jira/browse/HADOOP-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12679865#action_12679865
 ] 

Klaas Bosteels commented on HADOOP-1722:
----------------------------------------

Zhuweimin, presumably you're expecting the number of bytes reported by "wc -c" 
to equal the number of bytes in your input files, but that's not what you 
should expect. Here's a quick outline of what happens when you run that 
command:

# Since you didn't specify an InputFormat, the TextInputFormat is used, which 
produces Text values corresponding to "lines" (i.e. sequences of bytes ending 
with a newline character) and LongWritable keys corresponding to the byte 
offsets of those lines in the file.
# Because you use rawbytes for the map input, Streaming passes each key and 
value to your mapper as a {{<4 byte length><raw bytes>}} byte sequence. These 
byte sequences are obtained using Writable serialization (i.e. by calling the 
{{write()}} method) and prepending the length to the bytes obtained in this way.
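To make that concrete, here's a minimal sketch (in Python, not part of Streaming itself) of how a mapper could read those rawbytes frames from stdin. It assumes the serialized forms used by the default Writables: LongWritable is 8 big-endian bytes, and Text is a vint length (a single byte for short strings) followed by UTF-8 bytes.

```python
import struct
from io import BytesIO

def read_frame(stream):
    """Read one <4 byte length><raw bytes> frame (length is big-endian,
    as written by Java's DataOutput.writeInt())."""
    header = stream.read(4)
    if len(header) < 4:
        return None  # end of input
    (length,) = struct.unpack(">i", header)
    return stream.read(length)

# Simulate what Streaming would send for the first line "hello" of a file:
# key   = LongWritable(0): the line's byte offset, serialized as 8 big-endian bytes
# value = Text("hello"): a vint length (1 byte for short strings) + UTF-8 bytes
key_bytes = struct.pack(">q", 0)
value_bytes = bytes([5]) + b"hello"
stream = BytesIO(
    struct.pack(">i", len(key_bytes)) + key_bytes +
    struct.pack(">i", len(value_bytes)) + value_bytes
)

key = read_frame(stream)              # 8 bytes of serialized LongWritable
value = read_frame(stream)            # 6 bytes of serialized Text
offset = struct.unpack(">q", key)[0]  # 0: offset of the first line
line = value[1:].decode("utf-8")      # "hello" (skip the 1-byte vint)
```

So a 6-byte input line ("hello" plus its newline) becomes 12 + 10 = 22 bytes on the mapper's stdin, which is why "wc -c" reports more bytes than the input files contain.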

You could probably get the behavior you're after by writing a custom 
[InputFormat|http://svn.apache.org/viewvc/hadoop/core/trunk/src/mapred/org/apache/hadoop/mapred/InputFormat.java?view=markup]
 and 
[InputWriter|http://svn.apache.org/viewvc/hadoop/core/trunk/src/contrib/streaming/src/java/org/apache/hadoop/streaming/io/InputWriter.java?view=markup],
 but as far as I know that's not supported out of the box at the moment. 

> Make streaming to handle non-utf8 byte array
> --------------------------------------------
>
>                 Key: HADOOP-1722
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1722
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: contrib/streaming
>            Reporter: Runping Qi
>            Assignee: Klaas Bosteels
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-1722-branch-0.18.patch, 
> HADOOP-1722-branch-0.19.patch, HADOOP-1722-v2.patch, HADOOP-1722-v3.patch, 
> HADOOP-1722-v4.patch, HADOOP-1722-v4.patch, HADOOP-1722-v5.patch, 
> HADOOP-1722-v6.patch, HADOOP-1722.patch
>
>
> Right now, the streaming framework expects the outputs of the stream process 
> (mapper or reducer) to be line-oriented UTF-8 text. This limit makes it 
> impossible to use programs whose outputs may be non-UTF-8 (international 
> encodings, or maybe even binary data). Streaming can overcome this limit by 
> introducing a simple encoding protocol. For example, it could allow the 
> mapper/reducer to hex-encode its keys/values, and the framework would decode 
> them on the Java side. This way, as long as the mapper/reducer executables 
> follow this encoding protocol, they can output arbitrary byte arrays and the 
> streaming framework can handle them.
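
The encoding protocol proposed in the description could be sketched roughly like this (hypothetical helpers for illustration, not an existing Streaming API): the executable hex-encodes each key and value, so arbitrary bytes survive the tab- and newline-delimited line protocol, and the framework reverses the encoding.

```python
# Hypothetical mapper-side helper: emit a tab-separated, newline-terminated
# record whose key and value are hex-encoded, so arbitrary bytes (including
# tabs, newlines, and non-UTF-8 data) pass safely through the line protocol.
def emit(key: bytes, value: bytes) -> str:
    return key.hex() + "\t" + value.hex() + "\n"

# Framework-side decoding (the Java side would do the equivalent): since hex
# digits contain no tabs or newlines, splitting the record is unambiguous.
def parse(record: str):
    k, v = record.rstrip("\n").split("\t")
    return bytes.fromhex(k), bytes.fromhex(v)
```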

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.