[
https://issues.apache.org/jira/browse/HADOOP-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12679487#action_12679487
]
zhuweimin commented on HADOOP-1722:
-----------------------------------
The error occurred when using the -D option; the following is the error message:
[had...@super03 hadoop-latest]$ hadoop jar
contrib/streaming/hadoop-0.19.1-streaming.jar -input data -output result
-mapper "wc -c" -numReduceTasks 0 -D stream.map.input=rawbytes
09/03/06 13:18:31 ERROR streaming.StreamJob: Unexpected -D while processing
-input|-output|-mapper|-combiner|-reducer|-file|-dfs|-jt|-additionalconfspec|-inputformat|-outputformat|-partitioner|-numReduceTasks|-inputreader|-mapdebug|-reducedebug|||-cacheFile|-cacheArchive|-io|-verbose|-info|-debug|-inputtagged|-help
Usage: $HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/hadoop-streaming.jar [options]
Options:
-input <path> DFS input file(s) for the Map step
-output <path> DFS output directory for the Reduce step
-mapper <cmd|JavaClassName> The streaming command to run
-combiner <JavaClassName> Combiner has to be a Java class
-reducer <cmd|JavaClassName> The streaming command to run
-file <file> File/dir to be shipped in the Job jar file
-inputformat
TextInputFormat(default)|SequenceFileAsTextInputFormat|JavaClassName Optional.
-outputformat TextOutputFormat(default)|JavaClassName Optional.
-partitioner JavaClassName Optional.
-numReduceTasks <num> Optional.
-inputreader <spec> Optional.
-cmdenv <n>=<v> Optional. Pass env.var to streaming commands
-mapdebug <path> Optional. To run this script when a map task fails
-reducedebug <path> Optional. To run this script when a reduce task fails
-io <identifier> Optional.
-verbose
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be
copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to
include in the classpath.
-archives <comma separated list of archives> specify comma separated
archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
For more details about these options:
Use $HADOOP_HOME/bin/hadoop jar build/hadoop-streaming.jar -info
Streaming Command Failed!
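Per the usage text above ("bin/hadoop command [genericOptions] [commandOptions]"), generic options such as -D must appear before the streaming-specific options. A reordered invocation along those lines (a sketch only; not run against a cluster here) would be:

```shell
# Generic options (-D) first, then the streaming command options.
hadoop jar contrib/streaming/hadoop-0.19.1-streaming.jar \
    -D stream.map.input=rawbytes \
    -input data -output result \
    -mapper "wc -c" -numReduceTasks 0
```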
> Make streaming to handle non-utf8 byte array
> --------------------------------------------
>
> Key: HADOOP-1722
> URL: https://issues.apache.org/jira/browse/HADOOP-1722
> Project: Hadoop Core
> Issue Type: Improvement
> Components: contrib/streaming
> Reporter: Runping Qi
> Assignee: Klaas Bosteels
> Fix For: 0.21.0
>
> Attachments: HADOOP-1722-branch-0.18.patch,
> HADOOP-1722-branch-0.19.patch, HADOOP-1722-v2.patch, HADOOP-1722-v3.patch,
> HADOOP-1722-v4.patch, HADOOP-1722-v4.patch, HADOOP-1722-v5.patch,
> HADOOP-1722-v6.patch, HADOOP-1722.patch
>
>
> Right now, the streaming framework expects the outputs of the stream
> process (mapper or reducer) to be line-oriented UTF-8 text. This limit
> makes it impossible to use programs whose outputs may be non-UTF-8
> (international encodings, or even binary data). Streaming can overcome
> this limit by introducing a simple encoding protocol. For example, it
> can allow the mapper/reducer to hex-encode its keys/values, and the
> framework decodes them on the Java side.
> This way, as long as the mapper/reducer executables follow this
> encoding protocol, they can output arbitrary byte arrays and the
> streaming framework can handle them.
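A short, hypothetical sketch of the hex-encoding idea described above (the helper name and layout are my own, not taken from the attached patches): the executable emits only ASCII hex digits, so the framework's line-oriented UTF-8 assumption is never violated.

```shell
# hex_encode: hypothetical helper that rewrites arbitrary stdin bytes as
# lowercase hex digits, so a streaming mapper's output is always plain
# ASCII text regardless of the original bytes. (od -An -tx1 is POSIX.)
hex_encode() {
  od -An -tx1 | tr -d ' \n'
}

# Example: a mapper could pipe each value through hex_encode before
# printing it; the Java side would then decode the hex back to raw bytes.
printf 'hello' | hex_encode   # emits: 68656c6c6f
echo
```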
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.