Could not register schema - Forbidden error when upgrading flink from 1.16.1 to 1.17.1

2023-10-23 Thread Lijuan Hou
Hi team, we encountered a schema Forbidden error during deployment with the Flink version upgrade (1.16.1 -> 1.17.1); hope to get some help here. As part of the upgrade from 1.16.1 to 1.17.1, we also switched from our customized schema-registry dependency to

Upgrading Flink to 1.17, facing org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator

2023-09-18 Thread Hou, Lijuan via user
Hi team, I am upgrading our Flink version from 1.16 to 1.17.1 and am currently facing this issue; can I get some help? What should I do? Thanks! org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator for 'Source: Customer Product Summary Selected'

Re: Out-of-memory errors after upgrading Flink from version 1.14 to 1.15 and Java 8 to Java 11

2023-03-20 Thread Shammon FY
Hi Ajinkya, I think you can try to decrease the size of the batch shuffle with the config `taskmanager.memory.framework.off-heap.batch-shuffle.size` if the data volume of your job is small; the default value is `64M`. You can find more information in the docs [1]. [1]
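For reference, the suggestion above corresponds to an entry like the following in `flink-conf.yaml` (the `32m` value is an illustrative guess, not a recommendation):

```yaml
# Default is 64m; shrinking it reduces the direct memory reserved for
# batch shuffle reads/writes (only worthwhile for small data volumes).
taskmanager.memory.framework.off-heap.batch-shuffle.size: 32m
```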

Out-of-memory errors after upgrading Flink from version 1.14 to 1.15 and Java 8 to Java 11

2023-03-20 Thread Ajinkya Pathrudkar
I hope this email finds you well. I am writing to inform you of a recent update we made to our Flink version, upgrading from 1.14 to 1.15, along with a shift from Java 8 to Java 11. Since the update, we have encountered out-of-memory (direct

Re: NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-30 Thread Matthias Pohl
Depending on the build system used, you could check the dependency tree, e.g. for Maven it would be `mvn dependency:tree -Dincludes=org.apache.parquet`. Matthias On Wed, Jun 30, 2021 at 8:40 AM Thomas Wang wrote: Thanks Matthias. Could you advise how I can confirm this in my environment?
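To make the check concrete: the command Matthias suggests prints every `org.apache.parquet` artifact Maven resolves, so a stray `parquet-column` other than the 1.11.1 that Flink 1.12 expects stands out immediately. A sketch (the sample tree output and versions below are made up for illustration):

```shell
# The actual check, run from the project root:
#   mvn dependency:tree -Dincludes=org.apache.parquet
#
# A conflicting tree might look like this sample; the grep pipeline
# flags any parquet-column line that is not version 1.11.1.
tree='[INFO] +- org.apache.parquet:parquet-avro:jar:1.10.0:compile
[INFO] |  \- org.apache.parquet:parquet-column:jar:1.10.0:compile'

echo "$tree" | grep 'parquet-column' | grep -v '1\.11\.1'
```

Any line the pipeline prints is a candidate for an exclusion or an explicit version bump in the POM.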

Re: NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-30 Thread Thomas Wang
Thanks Matthias. Could you advise how I can confirm this in my environment? Thomas On Tue, Jun 29, 2021 at 1:41 AM Matthias Pohl wrote: Hi Rommel, Hi Thomas, Apache Parquet was bumped from 1.10.0 to 1.11.1 for Flink 1.12 in FLINK-19137 [1]. The error you're seeing looks like some

Re: NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-29 Thread Matthias Pohl
Hi Rommel, Hi Thomas, Apache Parquet was bumped from 1.10.0 to 1.11.1 for Flink 1.12 in FLINK-19137 [1]. The error you're seeing looks like some dependency issue where you have a version other than 1.11.1 of org.apache.parquet:parquet-column:jar on your classpath? Matthias [1]

Re: NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-22 Thread Rommel Holmes
To give more information: parquet-avro version 1.10.0 with Flink 1.11.2 was running fine; now with Flink 1.12.1, the error message shows up. Thank you for the help. Rommel On Tue, Jun 22, 2021 at 2:41 PM Thomas Wang wrote: Hi, We recently upgraded our Flink version from 1.11.2 to 1.12.1

NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-22 Thread Thomas Wang
Hi, We recently upgraded our Flink version from 1.11.2 to 1.12.1 and one of our jobs that used to run ok, now sees the following error. This error doesn't seem to be related to any user code. Can someone help me take a look? Thanks. Thomas java.lang.NoSuchMethodError:

Re: ClassCastException after upgrading Flink application to 1.11.2

2021-03-18 Thread Dawid Wysakowicz
Could you share a full stacktrace with us? Could you check the stack trace also in the task manager logs? As a side note, make sure you are using the same version of all Flink dependencies. Best, Dawid On 17/03/2021 06:26, soumoks wrote: Hi, We have upgraded an application originally

ClassCastException after upgrading Flink application to 1.11.2

2021-03-16 Thread soumoks
Hi, We have upgraded an application originally written for Flink 1.9.1 with Scala 2.11 to Flink 1.11.2 with Scala 2.12.7 and we are seeing the following error at runtime. 2021-03-16 20:37:08 java.lang.RuntimeException at

Re: Upgrading Flink

2020-04-14 Thread Chesnay Schepler
ops/upgrading.html#compatibility-table 2. Yes, you need to recompile (but ideally you don't need to change anything). On Mon, Apr 6, 2020 at 10:19 AM Stephen Connolly <stephen.alan.conno...@gmail.com> wrote: Quick questions on upgrading Flink.
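The upgrade procedure behind this answer is savepoint-based, and can be sketched with the Flink CLI as follows (the job ID, savepoint path, and jar name are placeholders, not values from this thread):

```shell
# 1. Take a savepoint of the running job (still on the old Flink version)
flink savepoint <jobId> s3://my-bucket/savepoints/

# 2. Cancel the job once the savepoint has completed
flink cancel <jobId>

# 3. Submit the job, recompiled against the new Flink version,
#    resuming from the savepoint just taken
flink run -s s3://my-bucket/savepoints/savepoint-xxxxxx my-job.jar
```

Whether the savepoint taken on the old version is restorable on the new one is exactly what the compatibility table linked above answers.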

Re: Upgrading Flink

2020-04-14 Thread David Anderson
ility-table 2. Yes, you need to recompile (but ideally you don't need to change anything). On Mon, Apr 6, 2020 at 10:19 AM Stephen Connolly <stephen.alan.conno...@gmail.com> wrote: Quick questions on upgrad

Re: Upgrading Flink

2020-04-14 Thread Sivaprasanna
g/projects/flink/flink-docs-release-1.10/ops/upgrading.html#compatibility-table 2. Yes, you need to recompile (but ideally you don't need to change anything). On Mon, Apr 6, 2020 at 10:19 AM Stephen Connolly <stephen.alan.conno...@gmail.com> wrote:

Re: Upgrading Flink

2020-04-14 Thread Chesnay Schepler
/flink-docs-release-1.10/ops/upgrading.html#compatibility-table 2. Yes, you need to recompile (but ideally you don't need to change anything). On Mon, Apr 6, 2020 at 10:19 AM Stephen Connolly <stephen.alan.conno...@gmail.com> wrote: Quick questions on upgrading Flink.

Re: Upgrading Flink

2020-04-09 Thread Robert Metzger
<stephen.alan.conno...@gmail.com> wrote: Quick questions on upgrading Flink. All our jobs are compiled against Flink 1.8.x. We are planning to upgrade to 1.10.x. 1. Is the recommended path to upgrade one minor at a time, i.e. 1.8.x -> 1.9.x and then

Upgrading Flink

2020-04-06 Thread Stephen Connolly
Quick questions on upgrading Flink. All our jobs are compiled against Flink 1.8.x. We are planning to upgrade to 1.10.x. 1. Is the recommended path to upgrade one minor at a time, i.e. 1.8.x -> 1.9.x and then 1.9.x -> 1.10.x as a second step, or is the big jump supported, i.e. 1.8.x ->

Re: How to avoid breaking states when upgrading Flink job?

2016-07-04 Thread Ufuk Celebi
Judging from the stack trace the state should be part of the operator state and not the partitioned RocksDB state. If you have implemented the Checkpointed interface anywhere, that would be a good place to pinpoint the anonymous class. Is it possible to share the job code? – Ufuk On Fri, Jul 1,

Re: How to avoid breaking states when upgrading Flink job?

2016-07-01 Thread Aljoscha Krettek
Ah, this might be in code that runs at a different layer from the StateBackend. Can you maybe pinpoint which of your user classes is this anonymous class and where it is used? Maybe by replacing them by non-anonymous classes and checking which replacement fixes the problem. - Aljoscha On Fri, 1

Re: How to avoid breaking states when upgrading Flink job?

2016-07-01 Thread Josh
I've just double checked and I do still get the ClassNotFound error for an anonymous class, on a job which uses the RocksDBStateBackend. In case it helps, this was the full stack trace: java.lang.RuntimeException: Failed to deserialize state handle and setup initial operator state. at

Re: How to avoid breaking states when upgrading Flink job?

2016-07-01 Thread Josh
Thanks guys, that's very helpful info! @Aljoscha I thought I saw this exception on a job that was using the RocksDB state backend, but I'm not sure. I will do some more tests today to double check. If it's still a problem I'll try the explicit class definitions solution. Josh On Thu, Jun 30,

Re: How to avoid breaking states when upgrading Flink job?

2016-06-30 Thread Aljoscha Krettek
Also, you're using the FsStateBackend, correct? Reason I'm asking is that the problem should not occur for the RocksDB state backend. There, we don't serialize any user code, only binary data. A while back I wanted to change the FsStateBackend to also work like this. Now might be a good time to

Re: How to avoid breaking states when upgrading Flink job?

2016-06-30 Thread Till Rohrmann
Hi Josh, you could also try to replace your anonymous classes by explicit class definitions. This should assign these classes a fixed name independent of the other anonymous classes. Then the class loader should be able to deserialize your serialized data. Cheers, Till On Thu, Jun 30, 2016 at
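Till's suggestion can be illustrated with plain Java (the class and logic below are hypothetical, just to show the naming difference): an anonymous class gets a synthetic binary name like `Outer$1` that depends on declaration order, while a named nested class keeps a fixed name, which is what the snapshot's class loader looks up.

```java
import java.io.Serializable;
import java.util.function.Function;

public class StableFunctions {
    // Named class: its binary name is fixed ("StableFunctions$AddOne"),
    // so serialized state referring to it stays resolvable after recompiles.
    public static class AddOne implements Function<Integer, Integer>, Serializable {
        private static final long serialVersionUID = 1L;
        @Override
        public Integer apply(Integer x) { return x + 1; }
    }

    public static void main(String[] args) {
        // Anonymous class: the compiler assigns a positional name ending
        // in "$1", which can change when other anonymous classes are
        // added or removed around it.
        Function<Integer, Integer> anon = new Function<Integer, Integer>() {
            @Override
            public Integer apply(Integer x) { return x + 1; }
        };
        System.out.println(anon.getClass().getName());
        System.out.println(new AddOne().getClass().getName());
    }
}
```

The same reasoning applies to Scala closures, where the compiler generates the synthetic names.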

Re: How to avoid breaking states when upgrading Flink job?

2016-06-30 Thread Aljoscha Krettek
Hi Josh, I think in your case the problem is that Scala might choose different names for synthetic/generated classes. This will trip up the code that is trying to restore from a snapshot that was done with an earlier version of the code where classes where named differently. I'm afraid I don't

Re: How to avoid breaking states when upgrading Flink job?

2016-06-30 Thread Maximilian Michels
Hi Josh, You have to assign UIDs to all operators to change the topology. Plus, you have to add dummy operators for all UIDs which you removed; this is a limitation currently because Flink will attempt to find all UIDs of the old job. Cheers, Max On Wed, Jun 29, 2016 at 9:00 PM, Josh
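Max's advice corresponds to the `uid(...)` call on DataStream operators. A minimal sketch (operator names and the pipeline itself are made up; requires the Flink DataStream API dependencies):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UidExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c")
           .uid("source-letters")    // stable ID: state can be matched to this
                                     // operator even after the topology changes
           .map(String::toUpperCase)
           .uid("uppercase-map")     // without an explicit uid, Flink derives one
                                     // from the topology, which shifts when
                                     // operators are added or removed
           .print();

        env.execute("uid-example");
    }
}
```

With explicit UIDs everywhere, a savepoint taken before a topology change can still be mapped onto the surviving operators after it.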

How to avoid breaking states when upgrading Flink job?

2016-06-29 Thread Josh
Hi all, Is there any information out there on how to avoid breaking saved states/savepoints when making changes to a Flink job and redeploying it? I want to know how to avoid exceptions like this: java.lang.RuntimeException: Failed to deserialize state handle and setup initial operator state.