Re: What is the state of Scala wrappers?

2023-02-07 Thread Martijn Visser
Hi all,

Is there anything that the Flink community could do to raise awareness?
Perhaps it would be interesting for the maintainers to write a short blog
post about it, which could potentially drive traffic?

Best regards,

Martijn

On Sun, Feb 5, 2023 at 4:39 PM Alexey Novakov via user <
user@flink.apache.org> wrote:

> Hi Erwan,
>
> I think those 2 projects you mentioned are pretty much the options we have
> at the moment if you want to use Scala 2.13 or 3.
> I believe your contribution to upgrade one of them to Flink 1.16 would be
> very welcome.
>
> Best regards,
> Alex
>
> On Thu, Feb 2, 2023 at 9:32 AM Erwan Loisant  wrote:
>
>> Hi,
>>
>> Back in October, the Flink team announced that the Scala API was to be
>> deprecated, then removed. Which I think is perfectly fine; having third
>> parties develop Scala wrappers is a good approach.
>>
>> With the announcement I expected those wrapper projects to gain steam;
>> however, both projects linked in the announcement (
>> https://github.com/findify/flink-adt and
>> https://github.com/ariskk/flink4s) don't seem to be actively maintained,
>> and are stuck on Flink 1.15.
>>
>> Are any teams here using Flink with Scala moving away from the official
>> Scala API? Maybe there is a project that I'm missing that is getting more
>> attention than the 2 linked above?
>>
>>
>> Thank you!
>> Erwan
>>
>


Re: Unsubscribe

2023-02-07 Thread yuxia
Hi,
To unsubscribe, you should send an email to user-unsubscr...@flink.apache.org with
any content or subject. Please see more in the Flink doc [1].

[1] https://flink.apache.org/community.html#how-to-subscribe-to-a-mailing-list 

Best regards, 
Yuxia 


From: "liang ji"
To: "User"
Sent: Wednesday, February 8, 2023 2:10:03 PM
Subject: Unsubscribe




Unsubscribe

2023-02-07 Thread liang ji



Re: Unsubscribe (退订)

2023-02-07 Thread weijie guo
Hi,

You need to send an email to user-zh-unsubscr...@flink.apache.org
rather than to user-zh@flink.apache.org.


Best regards,

Weijie


wujunxi <462329...@qq.com.invalid> wrote on Tue, Feb 7, 2023 at 16:52:

> Unsubscribe (退订)


Re: Unsubscribe

2023-02-07 Thread yuxia
Hi all,
To unsubscribe, you can send an email to user-unsubscr...@flink.apache.org with
any content or subject. Please see more in the Flink doc [1].

[1] https://flink.apache.org/community.html#how-to-subscribe-to-a-mailing-list 

Best regards, 
Yuxia 


From: "Ragini Manjaiah"
To: "Soumen Choudhury"
Cc: "User"
Sent: Wednesday, February 8, 2023 11:06:30 AM
Subject: Re: Unsubscribe

Hi Soumen, 
I want to unsubscribe from this mailing list. 

Thanks & Regards 
Ragini Manjaiah 

On Fri, Feb 3, 2023 at 4:07 PM Soumen Choudhury  wrote:





-- 
Regards 
Soumen Choudhury 
Cell : +91865316168 
mail to : sou@gmail.com






Re: Unsubscribe

2023-02-07 Thread Ragini Manjaiah
Hi Soumen,
I want to unsubscribe from this mailing list.

Thanks & Regards
Ragini Manjaiah

On Fri, Feb 3, 2023 at 4:07 PM Soumen Choudhury  wrote:

>
>
> --
> Regards
> Soumen Choudhury
> Cell : +91865316168
> mail to : sou@gmail.com
>


Re: I want to subscribe users' questions

2023-02-07 Thread yuxia
Maybe you will also be interested in joining Flink Slack; here is my
invite link:
https://join.slack.com/t/apache-flink/shared_invite/zt-1obpql04h-R3o5XM8d~Siyl3KGldkl2Q

Best regards,
Yuxia

- Original Message -
From: "guanyuan chen"
To: "User" , "user-zh"
Sent: Friday, February 3, 2023 7:48:55 PM
Subject: I want to subscribe users' questions

Hi,
My name is Guanyuan Chen. I am a big data development engineer in the
Tencent WeChat department in China. I have 4 years of experience in Flink
development, and I want to subscribe to Flink's development news and am
happy to help others develop Flink jobs.
Thanks a lot.


Re: I want to subscribe users' questions

2023-02-07 Thread Hang Ruan
Hi Guanyuan,

This document (
https://flink.apache.org/community.html#how-to-subscribe-to-a-mailing-list)
will be helpful.
Welcome!

Best,
Hang

guanyuan chen wrote on Tue, Feb 7, 2023 at 21:37:

> Hi,
> My name is Guanyuan Chen. I am a big data development engineer in the
> Tencent WeChat department in China. I have 4 years of experience in Flink
> development, and I want to subscribe to Flink's development news and am
> happy to help others develop Flink jobs.
> Thanks a lot.
>


Re: Kafka Sink Kafka Producer metrics?

2023-02-07 Thread Andrew Otto
Wow, not sure how I missed that.  Thank you.



On Mon, Feb 6, 2023 at 9:22 PM Mason Chen  wrote:

> Hi Andrew,
>
> I misread the docs: `register.producer.metrics` is mentioned here, but it
> is not on by default.
> https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#kafka-connector-metrics
>
> Best,
> Mason
>
> On Mon, Feb 6, 2023 at 6:19 PM Mason Chen  wrote:
>
>> Hi Andrew,
>>
>> Unfortunately, the functionality is undocumented, but you can set the
>> property `register.producer.metrics` to true in your Kafka client
>> properties map. Here is the JIRA to document the feature:
>> https://issues.apache.org/jira/browse/FLINK-30932
>>
>> Best,
>> Mason
>>
>> On Mon, Feb 6, 2023 at 11:49 AM Andrew Otto  wrote:
>>
>>> Hi!
>>>
>>> Kafka Source will emit KafkaConsumer metrics.
>>>
>>> It looks like Kafka Sink does not emit KafkaProducer metrics. Is this
>>> correct?  If so, why not?
>>>
>>> Thanks,
>>> -Andrew Otto
>>>  Wikimedia Foundation
>>>
>>
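
A minimal sketch of enabling this on the sink builder (assuming the
KafkaSink builder API from recent Flink releases; the bootstrap servers,
topic, and serializer below are placeholders, not values from this thread):

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

KafkaSink<String> sink =
        KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092")  // placeholder
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("my-topic")  // placeholder
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                // Not on by default; forwards the KafkaProducer metrics
                // to Flink's metric system (see FLINK-30932 above).
                .setProperty("register.producer.metrics", "true")
                .build();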


I want to subscribe users' questions

2023-02-07 Thread guanyuan chen
Hi,
My name is Guanyuan Chen. I am a big data development engineer in the
Tencent WeChat department in China. I have 4 years of experience in Flink
development, and I want to subscribe to Flink's development news and am
happy to help others develop Flink jobs.
Thanks a lot.


Re: Kafka source cannot perform event-time window aggregation

2023-02-07 Thread drewfranklin
Hi, the cause is probably that Kafka has idle partitions. If it were only
that the number of partitions is smaller than the parallelism, the
watermark would still advance; it would merely waste resources. By default,
when no parallelism is specified, the program uses the machine's number of
CPU cores.

If you are using the Table API, you can fix this by adding the following
option: table.exec.source.idle-timeout
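
A minimal sketch of setting that option (assuming a StreamTableEnvironment
named tEnv, which is not part of the original message; the one-minute
value is only an illustration):

// Mark a source task idle after one minute without input, so its
// missing watermark no longer holds back downstream operators.
tEnv.getConfig().getConfiguration()
        .setString("table.exec.source.idle-timeout", "1 min");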


飞雨
bigdata
drewfrank...@126.com


 Original Message 
From: Weihua Hu
Sent: Feb 7, 2023 18:48
To:
Subject: Re: Kafka source cannot perform event-time window aggregation
Hi,

The problem is likely that the kafka source runs with multiple parallel
subtasks while the data volume is small (or the number of topic partitions
is smaller than the number of source tasks), so not every source task
consumes data and produces a watermark, and the downstream aggregation
operator cannot align watermarks to trigger the computation.
You can try to solve it in one of these ways:
1. Set the source parallelism to 1
2. Enable idleness handling in the watermark strategy, see [#1]

The fromElements source forces a parallelism of 1.

[#1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/event-time/generating_watermarks/#dealing-with-idle-sources


Best,
Weihua


On Tue, Feb 7, 2023 at 1:31 PM wei_yuze  wrote:

Hello!

When I perform an event-time window aggregation, it works with the
fromElements source, but not after switching to the Kafka source, and the
program reports no error. The code is pasted below. It defines two
sources, named streamSource and kafkaSource. When streamSource is used to
build watermarkedStream, the aggregation is computed and the results are
printed; with kafkaSource it is not.




public class WindowReduceTest2 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // The fromElements source
        DataStreamSource

Updating Scala package names while preserving state

2023-02-07 Thread Thomas Eckestad
Hi,

I would like to change the package name of our Scala code from 
com.company.foo.something to com.company.bar.something while preserving the 
state. How can I make a Savepoint from an application built with 
com.company.foo.something and make that Savepoint compatible with new code 
built from com.company.bar.something?

In a Savepoint directory from one of our Flink jobs, running egrep
com\.company\.bar produces a lot of hits. Could this be expected to work by
just using sed to replace the strings? Or is there binary, non-text data
as well that needs to be updated?
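
A purely hypothetical sanity pass, under the unverified assumption that
the savepoint files embed the class names as plain strings and that a
same-length replacement (com.company.foo -> com.company.bar has the same
byte length) does not shift any offsets; testing the rewritten savepoint
on a copy first would be essential:

# Count occurrences of the old package name (hypothetical path).
grep -rc 'com\.company\.foo' /path/to/savepoint/
# Same-length replacement across all savepoint files, treated as bytes.
LC_ALL=C find /path/to/savepoint/ -type f \
    -exec sed -i 's/com\.company\.foo/com\.company\.bar/g' {} +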

We are currently on Flink 1.13.6.

Thanks,
Thomas
Thomas Eckestad
Systems Engineer
Development RSI

NIRA Dynamics AB
Wallenbergs gata 4
58330 Linköping, Sweden
Mobile: +46 701 447 279
thomas.eckes...@niradynamics.se
www.niradynamics.se



Re: Kafka source cannot perform event-time window aggregation

2023-02-07 Thread Weihua Hu
Hi,

The problem is likely that the kafka source runs with multiple parallel
subtasks while the data volume is small (or the number of topic partitions
is smaller than the number of source tasks), so not every source task
consumes data and produces a watermark, and the downstream aggregation
operator cannot align watermarks to trigger the computation.
You can try to solve it in one of these ways:
1. Set the source parallelism to 1
2. Enable idleness handling in the watermark strategy, see [#1]

The fromElements source forces a parallelism of 1.

[#1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/event-time/generating_watermarks/#dealing-with-idle-sources
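
For illustration, a minimal sketch of option 2 (assuming the Event2 type
from the quoted code below; the five-second bound and one-minute idle
timeout are arbitrary values):

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

WatermarkStrategy<Event2> strategy =
        WatermarkStrategy.<Event2>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                // Subtasks that see no records for one minute are marked
                // idle and no longer block watermark alignment downstream.
                .withIdleness(Duration.ofMinutes(1));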


Best,
Weihua


On Tue, Feb 7, 2023 at 1:31 PM wei_yuze  wrote:

> Hello!
>
> When I perform an event-time window aggregation, it works with the
> fromElements source, but not after switching to the Kafka source, and the
> program reports no error. The code is pasted below. It defines two
> sources, named streamSource and kafkaSource. When streamSource is used to
> build watermarkedStream, the aggregation is computed and the results are
> printed; with kafkaSource it is not.
>
>
>
>
> public class WindowReduceTest2 {
>     public static void main(String[] args) throws Exception {
>         StreamExecutionEnvironment env =
>             StreamExecutionEnvironment.getExecutionEnvironment();
>
>         // The fromElements source
>         DataStreamSource<Event2> streamSource = env.fromElements(
>                 new Event2("Alice", "./home", "2023-02-04 17:10:11"),
>                 new Event2("Bob", "./cart", "2023-02-04 17:10:12"),
>                 new Event2("Alice", "./home", "2023-02-04 17:10:13"),
>                 new Event2("Alice", "./home", "2023-02-04 17:10:15"),
>                 new Event2("Cary", "./home", "2023-02-04 17:10:16"),
>                 new Event2("Cary", "./home", "2023-02-04 17:10:16")
>         );
>
>         // The Kafka source
>         JsonDeserializationSchema<Event2> jsonFormat =
>                 new JsonDeserializationSchema<>(Event2.class);
>         KafkaSource<Event2> source = KafkaSource.<Event2>builder()
>                 .setBootstrapServers(Config.KAFKA_BROKERS)
>                 .setTopics(Config.KAFKA_TOPIC)
>                 .setGroupId("my-group")
>                 .setStartingOffsets(OffsetsInitializer.earliest())
>                 .setValueOnlyDeserializer(jsonFormat)
>                 .build();
>         DataStreamSource<Event2> kafkaSource =
>                 env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source");
>         kafkaSource.print();
>
>         // Generate watermarks, extracting the event time from the data
>         SingleOutputStreamOperator<Event2> watermarkedStream =
>                 kafkaSource.assignTimestampsAndWatermarks(WatermarkStrategy
>                         .<Event2>forMonotonousTimestamps()
>                         .withTimestampAssigner(new SerializableTimestampAssigner<Event2>() {
>                             @Override
>                             public long extractTimestamp(Event2 element, long recordTimestamp) {
>                                 SimpleDateFormat simpleDateFormat =
>                                         new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
>                                 Date date = null;
>                                 try {
>                                     date = simpleDateFormat.parse(element.getTime());
>                                 } catch (ParseException e) {
>                                     throw new RuntimeException(e);
>                                 }
>                                 long time = date.getTime();
>                                 System.out.println(time);
>                                 return time;
>                             }
>                         }));
>
>         // Window aggregation
>         watermarkedStream.map(new MapFunction<Event2, Tuple2<String, Long>>() {
>                     @Override
>                     public Tuple2<String, Long> map(Event2 value) throws Exception {
>                         // Convert the record into a tuple for counting
>                         return Tuple2.of(value.getUser(), 1L);
>                     }
>                 })
>                 .keyBy(r -> r.f0)
>                 // Tumbling event-time window
>                 .window(TumblingEventTimeWindows.of(Time.seconds(5)))
>                 .reduce(new ReduceFunction<Tuple2<String, Long>>() {
>                     @Override
>                     public Tuple2<String, Long> reduce(Tuple2<String, Long> value1,
>                                                        Tuple2<String, Long> value2) throws Exception {
>                         // Accumulate; when the window closes, the result
>                         // is emitted downstream
>                         return Tuple2.of(value1.f0, value1.f1 + value2.f1);
>                     }
>                 })
>                 .print("Aggregated stream");
>
>         env.execute();
>     }
> }
>
>
>
>
>
>
> It is worth noting that if TumblingEventTimeWindows in the code is
> replaced with TumblingProcessingTimeWindows, the aggregation is computed
> and the results are printed even with the Kafka source.
>
>
>
> Thank you for taking the time to look at this problem!
> Lucas


Re: EOFException when deserializing from RocksDB

2023-02-07 Thread Clemens Valiente
If I store the Java protobuf objects in RocksDB instead of the Scala
objects, I get this stack trace:

2023-02-07 09:17:04,246 WARN  org.apache.flink.runtime.taskmanager.Task
   [] - KeyedProcess -> (Map -> Sink: signalSink, Map -> Flat
Map -> Sink: FeatureSink, Sink: logsink) (2/2)#0
(fa4aae8fb7d2a7a94eafb36fe5470851_6760a9723a5626620871f040128bad1b_1_0)
switched from RUNNING to FAILED with failure cause:
org.apache.flink.util.FlinkRuntimeException: Error while adding data to
RocksDB
at
org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:109)
at
com.grab.grabdefence.acorn.app.functions.stream.CollectFeatureProcessFunction$.processElement(CollectFeatureProcessFunction.scala:69)
at
com.grab.grabdefence.acorn.app.functions.stream.CollectFeatureProcessFunction$.processElement(CollectProcessFunction.scala:18)
at
org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:83)
at
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:233)
at
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:134)
at
org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:105)
at
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:542)
at
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:831)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:780)
at
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:935)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:914)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:728)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.IllegalStateException: The Kryo Output still contains
data from a previous serialize call. It has to be flushed or cleared at the
end of the serialize call.
at
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.serialize(KryoSerializer.java:358)
at
org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValueInternal(AbstractRocksDBState.java:158)
at
org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:180)
at
org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:168)
at
org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:107)
... 16 more

I do not touch the Kryo serializer apart from the one
registerTypeWithKryoSerializer call, and I only call state.value() and
update() once each in the processElement() method.

I thought these stores were abstracted away safely enough that, as a
user, I wouldn't have to worry about the exact flush/serialization/
deserialization logic, but it seems this application breaks even though
I am only using what I think is quite innocent code.

On Fri, Feb 3, 2023 at 4:52 PM Clemens Valiente 
wrote:

>
> Hi, I have been struggling with this particular Exception for days and
> thought I'd ask for help here.
>
> I am using a KeyedProcessFunction with a
>
>   private lazy val state: ValueState[Feature] = {
> val stateDescriptor = new
> ValueStateDescriptor[Feature]("CollectFeatureProcessState",
> createTypeInformation[Feature])
> getRuntimeContext.getState(stateDescriptor)
>   }
>
>
> which is used in my process function as follows
>
>   override def processElement(value: Feature, ctx:
> KeyedProcessFunction[String, Feature, Feature]#Context, out:
> Collector[Feature]): Unit = {
> val current: Feature = state.value match {
>   case null => value
>   case exists => combine(value, exists)
> }
> if (checkForCompleteness(current)) {
>   out.collect(current)
>   state.clear()
> } else {
>   state.update(current)
> }
>   }
>
>
> Feature is a protobuf class that I registered with kryo as follows (using
> chill-protobuf)
>
> env.getConfig.registerTypeWithKryoSerializer(classOf[Feature],
> classOf[ProtobufSerializer])
>
> But I also got exceptions with normal Scala case classes wrapping this
> Feature class, and without the ProtobufSerializer, using the standard
> slow Java serializer.
> The exception occurs within the first minutes/seconds of starting the app
> and looks as follows:
>
> 2023-02-03 08:41:07,577 WARN  org.apache.flink.runtime.taskmanager.Task
>  [] - KeyedProcess -> (Map -> Sink: FeatureSignalSink, Map
> -> Flat Map -> Sink: FeatureStore, Sink: logsink) (2/2)#0
> 

Re: Failed to build the Flink source in IDEA

2023-02-07 Thread Ran Tao
The combination of JDK 8 & Scala 2.12 is supported; problems like this are
usually a JDK or IDEA configuration issue. You can try the following steps
(a command-line baseline is sketched after the list):

1. Check the Maven profiles and which ones are selected (scala-2.12 can be used).
2. Project Structure -> Modules: set each module to JDK 8.
3. IDEA Settings -> Build -> Compiler: set JDK 8; if you use JDK 11, make sure cross-compilation is enabled.
4. Set the Scala compiler server to JDK 8 as well.
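
As a baseline, it can also help to confirm that the tree builds outside
IDEA, to rule out IDE settings (a sketch; the JDK path is a placeholder):

# Build Flink from source on JDK 8, skipping tests; -Dfast skips the
# QA plugins to speed things up.
export JAVA_HOME=/path/to/jdk8    # placeholder path
mvn clean install -DskipTests -Dfast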



tiger <2372554...@qq.com.invalid> wrote on Sat, Feb 4, 2023 at 18:06:

> hi,
>
>
> Hi all, building the Flink source in IDEA fails for me; I downloaded
> almost every Scala and sbt version and tested them one by one, and all
> failed.
>
> The environment is as follows:
>
> OS: Ubuntu 22.04
>
> IDEA: 2022.3.2
>
> jdk:
>
>   java version "1.8.0_191"
>  Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
>  Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)
>
> scala:
>
> 2.12.0, 2.12.14 (also downloaded: 2.12.1, 2.12.2, 2.12.3, 2.12.4, 2.13.3)
>
> sbt:
>
>  sbt-1.3.6 (also downloaded: sbt-1.1.4, sbt-1.2.0, sbt-1.4.0, sbt-1.5.5,
> sbt-1.6.1, sbt-1.7.2, sbt-1.8.1)
>
> mvn:
>
> 3.8.7, 3.2.5
>
> The current exception is:
>
> scalac: Error: assertion failed:
>(class DataStream,iterate$default$2)
>   while compiling:
>
> /java-source/flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala
>  during phase: typer
>   library version: version 2.12.14
>  compiler version: version 2.12.14
>reconstructed args: -nobootcp -classpath
>
> /jdk/development/jre/lib/charsets.jar:/jdk/development/jre/lib/deploy.jar:/jdk/development/jre/lib/ext/cldrdata.jar:/jdk/development/jre/lib/ext/dnsns.jar:/jdk/development/jre/lib/ext/jaccess.jar:/jdk/development/jre/lib/ext/jfxrt.jar:/jdk/development/jre/lib/ext/localedata.jar:/jdk/development/jre/lib/ext/nashorn.jar:/jdk/development/jre/lib/ext/sunec.jar:/jdk/development/jre/lib/ext/sunjce_provider.jar:/jdk/development/jre/lib/ext/sunpkcs11.jar:/jdk/development/jre/lib/ext/zipfs.jar:/jdk/development/jre/lib/javaws.jar:/jdk/development/jre/lib/jce.jar:/jdk/development/jre/lib/jfr.jar:/jdk/development/jre/lib/jfxswt.jar:/jdk/development/jre/lib/jsse.jar:/jdk/development/jre/lib/management-agent.jar:/jdk/development/jre/lib/plugin.jar:/jdk/development/jre/lib/resources.jar:/jdk/development/jre/lib/rt.jar:/java-source/flink/flink-streaming-scala/target/classes:/java-source/flink/flink-streaming-java/target/classes:/java-source/flink/flink-core/target/classes:/java-source/flink/flink-connectors/flink-file-sink-common/target/classes:/java-source/flink/flink-runtime/target/classes:/java-source/flink/flink-java/target/classes:/rely/maven/repository/com/twitter/chill-java/0.7.6/chill-java-0.7.6.jar:/rely/maven/repository/org/apache/flink/flink-shaded-guava/30.1.1-jre-15.0/flink-shaded-guava-30.1.1-jre-15.0.jar:/rely/maven/repository/org/apache/commons/commons-math3/3.6.1/commons-math3-3.6.1.jar:/java-source/flink/flink-scala/target/classes:/rely/maven/repository/org/apache/flink/flink-shaded-asm-9/9.2-15.0/flink-shaded-asm-9-9.2-15.0.jar:/rely/maven/repository/com/twitter/chill_2.12/0.7.6/chill_2.12-0.7.6.jar:/rely/maven/repository/org/scala-lang/scala-reflect/2.12.7/scala-reflect-2.12.7.jar:/rely/maven/repository/org/scala-lang/scala-library/2.12.7/scala-library-2.12.7.jar:/rely/maven/repository/org/scala-lang/scala-compiler/2.12.7/scala-compiler-2.12.7.jar:/rely/maven/repository/org/scala-lang/modules/scala-xml_2.12/1.0.6/scala-xml_2.12-1.0.6.jar:/java-source/flink/flink-annotations/target/classes:/java-source/flink/flink-metrics/flink-metrics-core/target/classes:/rely/maven/repository/org/apache/flink/flink-shaded-jackson/2.12.4-15.0/flink-shaded-jackson-2.12.4-15.0.jar:/rely/maven/repository/org/apache/commons/commons-lang3/3.3.2/commons-lang3-3.3.2.jar:/rely/maven/repository/com/esotericsoftware/kryo/kryo/2.24.0/kryo-2.24.0.jar:/rely/maven/repository/com/esotericsoftware/minlog/minlog/1.2/minlog-1.2.jar:/rely/maven/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/rely/maven/repository/org/apache/commons/commons-compress/1.21/commons-compress-1.21.jar:/java-source/flink/flink-rpc/flink-rpc-core/target/classes:/java-source/flink/flink-rpc/flink-rpc-akka-loader/target/classes:/java-source/flink/flink-queryable-state/flink-queryable-state-client-java/target/classes:/java-source/flink/flink-filesystems/flink-hadoop-fs/target/classes:/rely/maven/repository/commons-io/commons-io/2.11.0/commons-io-2.11.0.jar:/rely/maven/repository/org/apache/flink/flink-shaded-netty/
> 4.1.70.
> Final-15.0/flink-shaded-netty-4.1.70.Final-15.0.jar:/rely/maven/repository/org/apache/flink/flink-shaded-zookeeper-3/3.5.9-15.0/flink-shaded-zookeeper-3-3.5.9-15.0.jar:/rely/maven/repository/commons-cli/commons-cli/1.5.0/commons-cli-1.5.0.jar:/rely/maven/repository/org/javassist/javassist/3.24.0-GA/javassist-3.24.0-GA.jar:/rely/maven/repository/org/xerial/snappy/snappy-java/
>