[jira] [Commented] (BEAM-2995) can't read/write hdfs in Flink CLUSTER(Standalone)
[ https://issues.apache.org/jira/browse/BEAM-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16352114#comment-16352114 ] Dawid Wysakowicz commented on BEAM-2995:

[~huangjianhuang] I could not reproduce your exact problem, but in my case it does not connect to HDFS unless I add the proper transformer, as explained in BEAM-2457:

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <createDependencyReducedPom>false</createDependencyReducedPom>
    <filters>
      <filter>
        <artifact>*:*</artifact>
        <excludes>
          <exclude>META-INF/*.SF</exclude>
          <exclude>META-INF/*.DSA</exclude>
          <exclude>META-INF/*.RSA</exclude>
        </excludes>
      </filter>
    </filters>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>shaded</shadedClassifierName>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}

> can't read/write hdfs in Flink CLUSTER (Standalone)
> --
>
> Key: BEAM-2995
> URL: https://issues.apache.org/jira/browse/BEAM-2995
> Project: Beam
> Issue Type: Bug
> Components: runner-flink
> Affects Versions: 2.2.0
> Reporter: huangjianhuang
> Assignee: Dawid Wysakowicz
> Priority: Major
>
> I just wrote a simple demo like:
> {code:java}
> Configuration conf = new Configuration();
> conf.set("fs.default.name", "hdfs://localhost:9000");
> // other code
> p.apply("ReadLines",
>     TextIO.read().from("hdfs://localhost:9000/tmp/words"))
>  .apply(TextIO.write().to("hdfs://localhost:9000/tmp/hdfsout"));
> {code}
> It works in Flink local mode with:
> {code:java}
> mvn exec:java -Dexec.mainClass=com.joe.FlinkWithHDFS -Pflink-runner \
>     -Dexec.args="--runner=FlinkRunner \
>     --filesToStage=target/flinkBeam-2.2.0-SNAPSHOT-shaded.jar"
> {code}
> but does not work in CLUSTER mode:
> {code:java}
> mvn exec:java -Dexec.mainClass=com.joe.FlinkWithHDFS -Pflink-runner \
>     -Dexec.args="--runner=FlinkRunner \
>     --filesToStage=target/flinkBeam-2.2.0-SNAPSHOT-shaded.jar \
>     --flinkMaster=localhost:6123"
> {code}
> It seems the Flink cluster treats HDFS as the local file system.
> The input log from flink-jobmanager.log is:
> {code:java}
> 2017-09-27 20:17:37,962 INFO  org.apache.flink.runtime.jobmanager.JobManager - Successfully ran initialization on master in 136 ms.
> 2017-09-27 20:17:37,968 INFO  org.apache.beam.sdk.io.FileBasedSource - {color:red}Filepattern hdfs://localhost:9000/tmp/words2 matched 0 files with total size 0{color}
> 2017-09-27 20:17:37,968 INFO  org.apache.beam.sdk.io.FileBasedSource - Splitting filepattern hdfs://localhost:9000/tmp/words2 into bundles of size 0 took 0 ms and produced 0 files and 0 bundles
> {code}
> The output error message is:
> {code:java}
> Caused by: java.lang.ClassCastException: {color:red}org.apache.beam.sdk.io.hdfs.HadoopResourceId cannot be cast to org.apache.beam.sdk.io.LocalResourceId{color}
> 	at org.apache.beam.sdk.io.LocalFileSystem.create(LocalFileSystem.java:77)
> 	at org.apache.beam.sdk.io.FileSystems.create(FileSystems.java:256)
> 	at org.apache.beam.sdk.io.FileSystems.create(FileSystems.java:243)
> 	at org.apache.beam.sdk.io.FileBasedSink$Writer.open(FileBasedSink.java:922)
> 	at org.apache.beam.sdk.io.FileBasedSink$Writer.openUnwindowed(FileBasedSink.java:884)
> 	at org.apache.beam.sdk.io.WriteFiles.finalizeForDestinationFillEmptyShards(WriteFiles.java:909)
> 	at org.apache.beam.sdk.io.WriteFiles.access$900(WriteFiles.java:110)
> 	at org.apache.beam.sdk.io.WriteFiles$2.processElement(WriteFiles.java:858)
> {code}
> Can somebody help me? I've tried everything and just can't work it out [cry]
> https://issues.apache.org/jira/browse/BEAM-2457

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
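The ClassCastException above is the symptom of a scheme-based filesystem lookup silently falling back to the local filesystem on the workers. A minimal, hypothetical sketch of that dispatch (names and the fallback behavior are illustrative, not Beam's actual internals):

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class SchemeDispatch {
    // Hypothetical mini-model of scheme-based FileSystem lookup: the path's
    // URI scheme selects a registered filesystem. If the hdfs registrar never
    // made it onto the worker classpath (e.g. its services file was lost while
    // shading), lookup falls back to "file" and later fails when it is handed
    // a hdfs resource id, as in the stack trace above.
    static String pickFileSystem(Map<String, String> registered, String spec) {
        String scheme = URI.create(spec).getScheme();
        String fs = registered.get(scheme == null ? "file" : scheme);
        return fs != null ? fs : registered.get("file"); // silent fallback
    }

    public static void main(String[] args) {
        Map<String, String> reg = new HashMap<>();
        reg.put("file", "LocalFileSystem"); // hdfs registrar missing
        System.out.println(pickFileSystem(reg, "hdfs://localhost:9000/tmp/words"));
        // -> LocalFileSystem, even though the path is a hdfs:// URI
    }
}
```

Once a registrar for the "hdfs" scheme is present, the same lookup resolves the Hadoop filesystem instead, which is why fixing the shaded jar (or the daemon environment) makes the error disappear.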
[ https://issues.apache.org/jira/browse/BEAM-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184216#comment-16184216 ] huangjianhuang commented on BEAM-2995:

Yes, I've read BEAM-2457 before and tried what you suggested (with HADOOP_CONF_DIR), but it made no difference. I started my cluster with only one host (localhost), via the shell command: FLINK_DIR/bin/start-cluster.sh

BTW, I access HDFS with HBaseIO now; it works fine on the Flink cluster ;)
[ https://issues.apache.org/jira/browse/BEAM-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184169#comment-16184169 ] Aljoscha Krettek commented on BEAM-2995:

How are you starting your cluster? There was also some discussion about this on BEAM-2457.
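For a standalone cluster started with start-cluster.sh, one check suggested in BEAM-2457 is that HADOOP_CONF_DIR is exported in the environment that starts the Flink daemons, not only in the shell that submits the job. A sketch, assuming the Hadoop config (core-site.xml) lives under /etc/hadoop/conf (that path is an assumption; substitute your own):

```shell
# Export the Hadoop config location before starting the Flink daemons, so the
# JobManager/TaskManagers can resolve hdfs:// paths from core-site.xml.
# /etc/hadoop/conf is an assumed location.
export HADOOP_CONF_DIR=/etc/hadoop/conf
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
# Then restart the cluster so the daemons inherit the variable:
# $FLINK_DIR/bin/stop-cluster.sh && $FLINK_DIR/bin/start-cluster.sh
```

Setting the variable only in the submitting shell affects the driver, which is why the job can match files at submission time yet still fail on the workers.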
[ https://issues.apache.org/jira/browse/BEAM-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16183553#comment-16183553 ] huangjianhuang commented on BEAM-2995:

By the way, my pom.xml is:

{code:xml}
<groupId>com.joe</groupId>
<artifactId>flinkBeam</artifactId>
<version>2.2.0-SNAPSHOT</version>

<dependencies>
  <dependency><groupId>org.apache.beam</groupId><artifactId>beam-runners-flink_2.10</artifactId><version>${project.version}</version></dependency>
  <dependency><groupId>org.apache.beam</groupId><artifactId>beam-sdks-java-core</artifactId><version>${project.version}</version></dependency>
  <dependency><groupId>org.apache.beam</groupId><artifactId>beam-sdks-java-io-kafka</artifactId><version>${project.version}</version></dependency>
  <dependency><groupId>org.apache.beam</groupId><artifactId>beam-sdks-java-io-hadoop-file-system</artifactId><version>${project.version}</version></dependency>
  <dependency><groupId>org.apache.beam</groupId><artifactId>beam-sdks-java-io-google-cloud-platform</artifactId><version>${project.version}</version></dependency>
  <dependency><groupId>org.apache.beam</groupId><artifactId>beam-sdks-java-extensions-google-cloud-platform-core</artifactId><version>${project.version}</version></dependency>
  <dependency><groupId>org.apache.hadoop</groupId><artifactId>hadoop-common</artifactId><version>2.8.1</version></dependency>
  <dependency><groupId>org.apache.hadoop</groupId><artifactId>hadoop-hdfs</artifactId><version>2.8.1</version></dependency>
  <dependency><groupId>org.apache.hadoop</groupId><artifactId>hadoop-client</artifactId><version>2.8.1</version></dependency>
  <dependency><groupId>org.apache.beam</groupId><artifactId>beam-sdks-java-extensions-protobuf</artifactId><version>${project.version}</version></dependency>
  <dependency><groupId>com.google.protobuf</groupId><artifactId>protobuf-java</artifactId><version>3.2.0</version></dependency>
  <dependency><groupId>com.google.protobuf</groupId><artifactId>protobuf-java-util</artifactId><version>3.2.0</version></dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.4.0</version>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <configuration>
        <createDependencyReducedPom>false</createDependencyReducedPom>
        <filters>
          <filter>
            <artifact>*:*</artifact>
            <excludes>
              <exclude>META-INF/*.SF</exclude>
              <exclude>META-INF/*.DSA</exclude>
              <exclude>META-INF/*.RSA</exclude>
            </excludes>
          </filter>
        </filters>
      </configuration>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <shadedArtifactAttached>true</shadedArtifactAttached>
            <shadedClassifierName>shaded</shadedClassifierName>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
{code}

Run with Flink 1.3.2, Hadoop 2.8.1.
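The pom above shades several jars that can each ship a META-INF/services provider list for Beam's filesystem registrars; without the ServicesResourceTransformer recommended in BEAM-2457, a naive merge keeps only one list and a registrar can be dropped from the shaded jar. A rough simulation of the difference (the registrar class names are illustrative):

```shell
# Two jars each contribute a provider list for the same services file.
core_providers="org.apache.beam.sdk.io.LocalFileSystemRegistrar"
hdfs_providers="org.apache.beam.sdk.io.hdfs.HadoopFileSystemRegistrar"

# Naive merge: one services file overwrites the other, so a registrar is lost
# and ServiceLoader never sees the HDFS filesystem at runtime.
naive="$core_providers"
echo "naive merge keeps: $naive"

# ServicesResourceTransformer-style merge: provider lists are concatenated,
# so both filesystems remain discoverable.
merged="$core_providers
$hdfs_providers"
printf 'transformer merge keeps:\n%s\n' "$merged"
```

This is why the job can behave correctly in local mode (full classpath) yet lose HDFS support when only the shaded jar is staged to the cluster.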