Hello Yu'an,

Thanks for the reply.
I'm using the SQL API, not the `DataStream` API, in the job. So there's
no `keyBy` call directly in our code, but we do have some `GROUP BY`
clauses and joins in the SQL. (We are using the deprecated table planner
both before and after the migration.)
Do you know what could cause the incompatibility?
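For context, the queries look roughly like the following (table and column names are invented for illustration); with the SQL API, the grouping key is handled internally as a Row, which would match the RowSerializer mentioned in the error:

```sql
-- Hypothetical sketch of the kind of query we run (names made up):
SELECT o.customer_id,
       COUNT(*) AS order_cnt
FROM orders AS o
JOIN customers AS c
  ON o.customer_id = c.id
GROUP BY o.customer_id;
```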

BR,



Le jeu. 14 avr. 2022 à 04:20, yu'an huang <h.yuan...@gmail.com> a écrit :

> Hi Qinghui,
>
> Did you use a different keyBy() for your KeyedCoProcessOperator? For
> example, did you use a field name (keyBy("id")) in 1.9 while using a
> lambda (keyBy(e -> e.getId())) in 1.13? That would make the key
> serializer incompatible.
>
> You may refer to this link for how to use Apache Flink's State Processor
> API to modify savepoints:
>
> https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/libs/state_processor_api/
>
>
> On 14 Apr 2022, at 1:44 AM, XU Qinghui <qinghui...@gmail.com> wrote:
>
> The new key serializer (org.apache.flink.api.java.typeutils.runtime.RowSerializer@896510d)
> must be compatible with the previous key serializer
>