I also tried different settings for *yarn.per-job-cluster.include-user-jar*
and *classloader.resolve-order*, but none of them worked.
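For anyone trying to reproduce this: `[B` in the stack trace is just the JVM's internal name for `byte[]`. A minimal standalone sketch of the same cast failure (hypothetical values, not our actual job code) looks like this:

```java
// Minimal illustration of the exception we see (not the actual job code).
// "[B" is the JVM's internal name for byte[]; casting a byte[] held in an
// Object reference to String fails at runtime exactly as in the stack trace.
public class CastDemo {
    public static void main(String[] args) {
        Object value = new byte[] {1, 2, 3}; // e.g. a field that arrived as raw bytes
        try {
            String s = (String) value; // throws ClassCastException at runtime
            System.out.println(s);
        } catch (ClassCastException e) {
            // Message on JDK 11 names "[B" and "java.lang.String"
            System.out.println(e.getMessage());
        }
    }
}
```

So the element reaching getBucketId apparently holds a byte[] where the code expects a String, and the question is why that only happens in application mode.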

Leon

On Wed, Nov 30, 2022 at 11:17 PM Leon Xu <[email protected]> wrote:

> Hi Biao,
>
> Thanks for getting back to me.
> Here's the command I used:
>
> /usr/bin/flink run-application -t yarn-application \
>     -Dtaskmanager.numberOfTaskSlots=1 \
>     -Djobmanager.memory.process.size=4096m \
>     -Dtaskmanager.memory.process.size=4096m \
>     -Dyarn.application.name=backup-enriched-events-r0 \
>     -c com.xyz.source.SourceDataStream \
>     /tmp/lib-event-platform-core-7c7cc8ca9fc.jar \
>     --datastreamName=backup-enriched-events-r0 --environment=dev
>
> And the following settings in the Flink configuration:
> yarn.per-job-cluster.include-user-jar: FIRST
> classloader.resolve-order: parent-first
>
> Thanks
> Leon
>
>
> On Wed, Nov 30, 2022 at 10:14 PM Biao Geng <[email protected]> wrote:
>
>> Hi Leon,
>>
>> Can you share your full command for submission?
>>
>>
>> Best,
>> Biao Geng
>>
>> Leon Xu <[email protected]> wrote on Thu, Dec 1, 2022 at 06:27:
>>
>>> Hi Flink Users,
>>>
>>> We ran into java.lang.ClassCastException after moving the flink job from
>>> session mode to application mode.
>>>
>>>
>>> *java.lang.ClassCastException: class [B cannot be cast to class
>>> java.lang.String ([B and java.lang.String are in module java.base of loader
>>> 'bootstrap') at
>>> com.xyz.common.io.sink.HivePartitionedBucketAssigner.getBucketId(HivePartitionedBucketAssigner.java:35)*
>>>
>>> Here *HivePartitionedBucketAssigner* is a generic class:
>>> class HivePartitionedBucketAssigner<T> implements BucketAssigner<T, String>
>>>
>>>
>>> We tried disabling inverted class loading based on this doc
>>> <https://nightlies.apache.org/flink/flink-docs-master/docs/ops/debugging/debugging_classloading/#x-cannot-be-cast-to-x-exceptions>
>>> but it didn't help. In session mode it works fine, so I wonder: what is
>>> different between session mode and application mode in terms of class
>>> loading? And what would be a solution for this situation?
>>> Our setup:
>>> 1. Flink version: 1.12.7
>>> 2. Java: JDK 11
>>>
>>>
>>> Thanks
>>> Leon
>>>
>>