[ https://issues.apache.org/jira/browse/HADOOP-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Klaas Bosteels updated HADOOP-1722:
-----------------------------------
Attachment: HADOOP-1722-v3.patch
I realized that it is probably more convenient and intuitive to make {{-typedbytes input}} correspond to
* {{stream.map.input.typed.bytes}}={{true}}
* {{stream.map.output.typed.bytes}}={{false}}
* {{stream.reduce.input.typed.bytes}}={{false}}
* {{stream.reduce.output.typed.bytes}}={{false}}
instead of
* {{stream.map.input.typed.bytes}}={{true}}
* {{stream.map.output.typed.bytes}}={{false}}
* {{stream.reduce.input.typed.bytes}}={{true}}
* {{stream.reduce.output.typed.bytes}}={{false}}
and similarly that it would be better to let {{-typedbytes output}} correspond to
* {{stream.map.input.typed.bytes}}={{false}}
* {{stream.map.output.typed.bytes}}={{false}}
* {{stream.reduce.input.typed.bytes}}={{false}}
* {{stream.reduce.output.typed.bytes}}={{true}}
instead of
* {{stream.map.input.typed.bytes}}={{false}}
* {{stream.map.output.typed.bytes}}={{true}}
* {{stream.reduce.input.typed.bytes}}={{false}}
* {{stream.reduce.output.typed.bytes}}={{true}}
Maybe this was also (part of) what Runping was trying to say in his comment? In
any case, the attached third version of my patch incorporates this minor change.
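For reference, a minimal sketch of how a {{-typedbytes input}} / {{-typedbytes output}} argument would map onto these four properties under the v3 semantics (the class and method names below are invented for illustration and are not taken from the patch):
{code:java}
import org.apache.hadoop.mapred.JobConf;

public class TypedBytesOptions {

  /**
   * Applies the v3 semantics of "-typedbytes <mode>" (mode being "input" or
   * "output") to the given job configuration. Illustrative sketch only; the
   * actual option handling in the attached patch may differ.
   */
  public static void applyTypedBytesOption(JobConf conf, String mode) {
    // "-typedbytes input": only the map input is treated as typed bytes.
    conf.setBoolean("stream.map.input.typed.bytes", "input".equals(mode));
    conf.setBoolean("stream.map.output.typed.bytes", false);
    conf.setBoolean("stream.reduce.input.typed.bytes", false);
    // "-typedbytes output": only the reduce output is treated as typed bytes.
    conf.setBoolean("stream.reduce.output.typed.bytes", "output".equals(mode));
  }
}
{code}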
> Make streaming to handle non-utf8 byte array
> --------------------------------------------
>
> Key: HADOOP-1722
> URL: https://issues.apache.org/jira/browse/HADOOP-1722
> Project: Hadoop Core
> Issue Type: Improvement
> Components: contrib/streaming
> Reporter: Runping Qi
> Assignee: Christopher Zimmerman
> Attachments: HADOOP-1722-v2.patch, HADOOP-1722-v3.patch,
> HADOOP-1722.patch
>
>
> Right now, the streaming framework expects the outputs of the stream process
> (mapper or reducer) to be line-oriented UTF-8 text. This limit makes it
> impossible to use programs whose outputs may be non-UTF-8 (international
> encodings, or maybe even binary data). Streaming can overcome this limit by
> introducing a simple encoding protocol. For example, it could allow the
> mapper/reducer to hex-encode its keys/values, and the framework would decode
> them on the Java side. This way, as long as the mapper/reducer executables
> follow this encoding protocol, they can output arbitrary byte arrays and the
> streaming framework can handle them.
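For illustration only, the hex-encoding protocol proposed in the description could be handled on the Java side with something as simple as the following (the helper class and method names are invented here, and the attached patches use typed bytes rather than hex encoding):
{code:java}
public class HexCodec {

  /** Decodes a hex string such as "68656c6c6f" back into its raw bytes. */
  public static byte[] decodeHex(String hex) {
    byte[] bytes = new byte[hex.length() / 2];
    for (int i = 0; i < bytes.length; i++) {
      bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return bytes;
  }

  /** Encodes raw bytes as a hex string, as a streaming program could do. */
  public static String encodeHex(byte[] bytes) {
    StringBuilder sb = new StringBuilder(2 * bytes.length);
    for (byte b : bytes) {
      sb.append(String.format("%02x", b & 0xff));
    }
    return sb.toString();
  }
}
{code}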
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.