yangping wu created BEAM-1856:
---------------------------------

             Summary: HDFSFileSink class does not use the same configuration in 
master and slave
                 Key: BEAM-1856
                 URL: https://issues.apache.org/jira/browse/BEAM-1856
             Project: Beam
          Issue Type: Bug
          Components: sdk-java-core
    Affects Versions: 0.6.0
            Reporter: yangping wu
            Assignee: Davor Bonaci


I have a code snippet as follows:
{code}
Read.Bounded<KV<LongWritable, Text>> from = Read.from(
    HDFSFileSource.from(options.getInputFile(), TextInputFormat.class,
        LongWritable.class, Text.class));
PCollection<KV<LongWritable, Text>> data = p.apply(from);
data.apply(MapElements.via(new SimpleFunction<KV<LongWritable, Text>, String>() {
    @Override
    public String apply(KV<LongWritable, Text> input) {
        return input.getKey() + "\t" + input.getValue();
    }
})).apply(Write.to(HDFSFileSink.<String>toText(options.getOutputFile())));
{code}
and I submit the job like this:
{code}
spark-submit --class org.apache.beam.examples.WordCountHDFS --master yarn-client \
    ./target/word-count-beam-bundled-0.1.jar \
    --runner=SparkRunner \
    --inputFile=hdfs://master/tmp/input/ \
    --outputFile=/tmp/output/
{code}
Then the {{HDFSFileSink.validate}} function checks whether the {{/tmp/output/}} 
directory exists on the local filesystem (not on HDFS).
But the final result is written to the {{hdfs://master/tmp/output/}} directory 
on HDFS.
The reason is that the {{HDFSFileSink}} class does not use the same configuration 
in the master thread and the slave threads: {{/tmp/output/}} has no scheme, so 
each side resolves it against its own default filesystem.
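The mismatch can be illustrated without Hadoop at all: a scheme-less path inherits whatever default filesystem the resolving process happens to have. This is a minimal sketch using plain {{java.net.URI}} (the {{resolve}} helper and the default-filesystem strings are hypothetical, standing in for Hadoop's {{FileSystem.get(conf)}} and {{fs.defaultFS}}):

```java
import java.net.URI;

public class PathResolution {
    // Resolve a possibly scheme-less path against a process-local default,
    // roughly the way Hadoop falls back to fs.defaultFS.
    static URI resolve(String path, String defaultFs) {
        URI u = URI.create(path);
        if (u.getScheme() != null) {
            return u;                            // fully qualified: unambiguous
        }
        return URI.create(defaultFs).resolve(u); // inherits the local default
    }

    public static void main(String[] args) {
        // Driver (master) without a loaded Hadoop configuration: local filesystem
        System.out.println(resolve("/tmp/output/", "file:///"));
        // Executors (slaves) with the cluster configuration: HDFS
        System.out.println(resolve("/tmp/output/", "hdfs://master/"));
        // A fully qualified URI resolves identically everywhere
        System.out.println(resolve("hdfs://master/tmp/output/", "file:///"));
    }
}
```

Under this assumption, passing a fully qualified {{hdfs://…}} URI for {{--outputFile}} would sidestep the mismatch, since both sides then resolve the same location.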



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)