Hi Vino,
The data is OK, I double-checked. The input is plain JSON and it can be
processed by the same code compiled and run on Flink 1.3.1. Thanks for the
hint about the Avro and Parquet versions. I got my fat jar synced up with the
Flink 1.5.1 Avro/Parquet versions. Hopes were high that it would help to resolve the
Hi Till,
Server startup entrypoint log:
2018-07-25T12:19:12.268+
[org.apache.flink.runtime.entrypoint.ClusterEntrypoint] INFO
2018-07-25T12:19:12.271+
[org.apache.flink.runtime.entrypoint.ClusterEntrypoint]
Hi Alex,
could you share with us the full logs of the client and the cluster
entrypoint? That would be tremendously helpful.
Cheers,
Till
On Wed, Jul 25, 2018 at 4:08 AM vino yang wrote:
Hi Alex,
Is it possible that the data has been corrupted?
Or have you confirmed that the avro version is consistent in different
Flink versions?
Also, if you don't upgrade Flink and still use version 1.3.1, can it be
recovered?
Thanks, vino.
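One way to answer vino's question about Avro version consistency is to read the version the fat jar actually bundles, via the pom.properties file that Maven embeds under META-INF. A minimal sketch, assuming a Maven-built jar; "job.jar" is a placeholder for the real uber-jar name:

```python
# Sketch: read the version of a Maven artifact bundled inside a fat jar
# by inspecting the pom.properties Maven embeds under META-INF.
import zipfile


def bundled_version(jar_path, group_id, artifact_id):
    entry = f"META-INF/maven/{group_id}/{artifact_id}/pom.properties"
    with zipfile.ZipFile(jar_path) as jar:
        for line in jar.read(entry).decode().splitlines():
            if line.startswith("version="):
                return line.split("=", 1)[1].strip()
    return None


# e.g. bundled_version("job.jar", "org.apache.avro", "avro")
```

The same helper works for any bundled dependency (Parquet, hadoop-common, ...), so the client-side versions can be listed and compared against what the cluster ships.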
2018-07-25 8:32 GMT+08:00 Alex Vinnik :
Vino,
Upgraded the Flink cluster to Hadoop 2.8.1:
$ docker exec -it flink-jobmanager cat /var/log/flink/flink.log | grep
entrypoint | grep 'Hadoop version'
2018-07-25T00:19:46.142+
[org.apache.flink.runtime.entrypoint.ClusterEntrypoint] INFO Hadoop
version: 2.8.1
but the job still fails to start.
Ideas?
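For comparing the two sides, the server's Hadoop version can be pulled straight out of the ClusterEntrypoint startup log; a minimal sketch, assuming the log line format shown in this thread:

```python
# Sketch: extract the Hadoop version the Flink cluster reports at startup
# from a ClusterEntrypoint log line such as
#   "[org.apache.flink.runtime.entrypoint.ClusterEntrypoint] INFO Hadoop version: 2.8.1"
import re


def server_hadoop_version(log_text):
    m = re.search(r"Hadoop\s+version:\s*(\S+)", log_text)
    return m.group(1) if m else None
```

Checking that this value matches the Hadoop client version the fat jar was built against is a quick way to confirm whether a mismatch remains.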
Hi Alex,
Based on your log information, the likely cause is the Hadoop version. To
troubleshoot whether the exception comes from a differing Hadoop version, I
suggest you match the Hadoop version on both sides.
You can:
1. Upgrade the Hadoop version which the Flink cluster depends on; Flink's
official
Hi Till,
Thanks for responding. Below are the entrypoint logs. One thing I noticed is
"Hadoop version: 2.7.3". My fat jar is built with the Hadoop 2.8 client. Could
that be the reason for the error? If so, how can I use the same Hadoop version
2.8 on the Flink server side? BTW, the job runs fine locally reading from
Hi Alex,
I'm not entirely sure what causes this problem because it is the first time
I've seen it.
The first question would be whether the problem also arises with a different
Hadoop version.
Are you using the same Java versions on the client as well as on the server?
Could you provide us with the
Trying to migrate an existing job that runs fine on Flink 1.3.1 to Flink 1.5
and getting a weird exception.
The job reads JSON from s3a and writes Parquet files to s3a with an Avro model.
The job is an uber jar built with hadoop-aws-2.8.0 in order to have access to
the S3AFileSystem class.
Fails here