Glad to hear it!
Best,
Congxian
Adrian Vasiliu wrote on Tue, Oct 15, 2019 at 9:10 PM:
> Hi,
> FYI we've switched to a different Hadoop server, and the issue vanished...
> It does look as if the cause was on the Hadoop side.
> Thanks again, Congxian.
> Adrian
>
>
> - Original message -
> From: "Adrian
Hi,
FYI we've switched to a different Hadoop server, and the issue vanished... It does look as if the cause was on the Hadoop side. Thanks again, Congxian.
Adrian
- Original message -
From: "Adrian Vasiliu"
To: qcx978132...@gmail.com
Cc: user@flink.apache.org
Subject: [EXTERNAL] RE: FLINK-13497 /
Thanks Congxian. The possible causes listed in the top-voted answer of https://stackoverflow.com/questions/36015864/hadoop-be-replicated-to-0-nodes-instead-of-minreplication-1-there-are-1/36310025 do not seem to hold for us, because we have other quite similar Flink jobs using the same
Hi
From the given stack trace, maybe you could solve the "replication problem" first: File /okd-dev/3fe6b069-43bf-4d86-9762-4f501c9db16e could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation, and maybe
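
A quick way to confirm the replication problem outside Flink is a standalone write with the Hadoop client. This is only a sketch: the namenode address is a placeholder, and it assumes the hadoop-client libraries are on the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder address; point this at the same cluster the Flink job writes to.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf)) {
            Path probe = new Path("/okd-dev/replication-probe");
            // Request replication factor 1 explicitly, matching minReplication (=1)
            // from the error message.
            try (FSDataOutputStream out = fs.create(
                    probe, true, 4096, (short) 1, fs.getDefaultBlockSize(probe))) {
                out.writeBytes("probe");
            }
            fs.delete(probe, false);
            System.out.println("Write with replication 1 succeeded.");
        }
    }
}

If this fails with the same "could only be replicated to 0 nodes" message, the problem is purely on the HDFS side (e.g. datanode disk space or datanodes not reachable from the client), not in Flink.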
Hello,
We recently upgraded our product from Flink 1.7.2 to Flink 1.9, and we are now seeing repeated job failures with
java.lang.RuntimeException: Could not create file for checking if truncate works. You can disable support for truncate() completely via BucketingSink.setUseTruncate(false).
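
For reference, the workaround named in the exception looks roughly like this. A sketch only, with a placeholder output path and a stand-in source:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

public class TruncateWorkaround {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("a", "b", "c"); // stand-in source

        // Placeholder base path; use the job's real HDFS output directory.
        BucketingSink<String> sink = new BucketingSink<>("hdfs://namenode:8020/okd-dev/output");
        // Disable the truncate() availability check, as the exception message suggests.
        sink.setUseTruncate(false);
        stream.addSink(sink);

        env.execute("truncate-workaround");
    }
}

Note that setUseTruncate(false) only skips the truncate probe; since BucketingSink is deprecated as of Flink 1.9, migrating to StreamingFileSink may be the longer-term fix.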