Re: No operator-level data-volume metrics in Flink

2021-11-16 Thread Ada Luna
You cannot see the volume of data transferred between operators inside a task.

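A hedged workaround, not from this thread: if every operator runs as its own task, the task-level I/O metrics effectively become per-operator. Disabling operator chaining achieves that at the cost of extra serialization between operators, so it is best kept as a debug-only setting. A minimal Java sketch:

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PerOperatorMetricsDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // With chaining disabled, each operator becomes a separate task, so
        // the web UI's task-level numRecordsIn/numRecordsOut are per-operator.
        env.disableOperatorChaining();
        env.fromElements(1, 2, 3)
                .map(i -> i * 2)
                .filter(i -> i > 2)
                .print();
        env.execute("per-operator-metrics-demo");
    }
}
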
zhisheng wrote on Thu, Nov 4, 2021 at 4:56 PM:
>
> The web UI does have operator-level metrics; take a closer look.
>
> Ada Luna wrote on Tue, Oct 26, 2021 at 4:08 PM:
>
> > What the web UI shows are Flink's normal built-in metrics, and they are all task-level.
> >
> > xiazhl wrote on Tue, Oct 26, 2021 at 2:31 PM:
> > >
> > > There are metrics in the web UI.
> > >
> > >
> > >
> > >
> > > -- Original message --
> > > From: "user-zh" <gfen...@gmail.com>
> > > Sent: Tuesday, Oct 26, 2021, 1:55 PM
> > > To: "user-zh"
> > > Subject: No operator-level data-volume metrics in Flink
> > >
> > >
> > >
> > >
> > Flink only exposes task-level in/out data volumes, not operator-level ones. Is this for performance reasons? Will a switch be added in the future to expose operator-level volumes and make debugging easier?
> >


Re: Special type problem with the Flink JDBC connector

2021-11-16 Thread Ada Luna
This is definitely not a bug. Flink SQL's type set is finite, and a finite set of types cannot cover all the types found across the various JDBC data sources.

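To make the mismatch concrete, here is a plain-JDBC sketch (connection string, table, and schema are hypothetical) of why one Flink STRING needs two different write paths on the Oracle side — a decision a converter keyed only on the DDL type cannot make:

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ClobVsVarcharDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL and credentials; t_demo(v VARCHAR2(100), c CLOB).
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//localhost:1521/ORCL", "user", "pwd");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO t_demo (v, c) VALUES (?, ?)")) {
            String value = "same Java String, two SQL types";
            // VARCHAR2 column: a plain setString is enough.
            ps.setString(1, value);
            // CLOB column: stream the characters instead of setString.
            ps.setCharacterStream(2, new StringReader(value), value.length());
            ps.executeUpdate();
        }
    }
}
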
Shengkai Fang wrote on Tue, Nov 16, 2021 at 12:38 PM:
>
> If it is a bug, I suggest opening an issue in the community to track it.
>
> Shengkai Fang wrote on Tue, Nov 16, 2021 at 12:37 PM:
>
> > Could you share exactly what the error is?
> >
> > I looked at the code, and this does not seem easy to support. The concrete serializer
> > is determined by `AbstractJdbcRowConverter`#createExternalConverter.
> > Based on your description, the current serializer is not generic enough, which causes this problem; supporting it would probably require changing the source code.
> >
> > Best,
> > Shengkai
> >
> > Ada Luna wrote on Fri, Nov 12, 2021 at 11:25 AM:
> >
> >> Oracle has both VARCHAR and CLOB.
> >> If I declare STRING in the Flink SQL JDBC sink, I can only write VARCHAR; writing CLOB throws an error.
> >> Is there a way to extend the types in the Flink SQL DDL? Should I use the RAW type, or is there a better approach?
> >> VARCHAR and CLOB are two different string types in Oracle, and when the sink writes out I need to call different conversion methods depending on the DDL type.
> >>
> >> Ada Luna wrote on Fri, Nov 12, 2021 at 11:23 AM:
> >> >
> >> > Oracle has both VARCHAR and CLOB.
> >> > If I declare STRING in the Flink SQL JDBC sink, I can only write VARCHAR; writing CLOB throws an error.
> >>
> >


Re: Special type problem with the Flink JDBC connector

2021-11-11 Thread Ada Luna
Oracle has both VARCHAR and CLOB.
If I declare STRING in the Flink SQL JDBC sink, I can only write VARCHAR; writing CLOB throws an error.
Is there a way to extend the types in the Flink SQL DDL? Should I use the RAW type, or is there a better approach?
VARCHAR and CLOB are two different string types in Oracle, and when the sink writes out I need to call different conversion methods depending on the DDL type.

Ada Luna wrote on Fri, Nov 12, 2021 at 11:23 AM:
>
> Oracle has both VARCHAR and CLOB.
> If I declare STRING in the Flink SQL JDBC sink, I can only write VARCHAR; writing CLOB throws an error.


Re: No operator-level data-volume metrics in Flink

2021-10-26 Thread Ada Luna
What the web UI shows are Flink's normal built-in metrics, and they are all task-level.

xiazhl wrote on Tue, Oct 26, 2021 at 2:31 PM:
>
> There are metrics in the web UI.
>
>
>
>
> -- Original message --
> From: "user-zh"
> Sent: Tuesday, Oct 26, 2021, 1:55 PM
> To: "user-zh"
> Subject: No operator-level data-volume metrics in Flink
>
>
>
> Flink only exposes task-level in/out data volumes, not operator-level ones. Is this for performance reasons? Will a switch be added in the future to expose operator-level volumes and make debugging easier?


No operator-level data-volume metrics in Flink

2021-10-25 Thread Ada Luna
Flink only exposes task-level in/out data volumes, not operator-level ones. Is this for performance reasons? Will a switch be added in the future to expose operator-level volumes and make debugging easier?


Re: Side output support in Flink SQL

2021-10-14 Thread Ada Luna
Could you give an example?

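A hedged sketch of the reuse Kenyore Woo describes below, assuming a Flink 1.13+ Table API and hypothetical tables src, clean_sink, and dirty_sink: two INSERTs in one StatementSet, one with the validation condition and one with its inverse, so dirty rows land in their own table while the planner reuses the scan of src:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableEnvironment;

public class InverseConditionSideOutput {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // DDL for src, clean_sink, and dirty_sink omitted (assumptions).
        StatementSet set = tEnv.createStatementSet();
        set.addInsertSql(
                "INSERT INTO clean_sink SELECT * FROM src WHERE val IS NOT NULL");
        // The inverse condition catches exactly what the clean query drops.
        set.addInsertSql(
                "INSERT INTO dirty_sink SELECT * FROM src WHERE val IS NULL");
        set.execute();
    }
}
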
Kenyore Woo wrote on Thu, Oct 14, 2021 at 10:37 AM:
>
> You can use the inverse condition to route the dirty data to another table. The source will be reused, so the effect is the same as a side output.
> On Oct 13, 2021 at 16:28:57, Ada Luna  wrote:
>
> > So the reason there is no plan to support this is that, for now, we assume the data Flink SQL processes is clean and already sanitized, right?
> >
> > Ada Luna wrote on Sun, Sep 19, 2021 at 7:43 PM:
> >
> >
> > It is mainly about dirty data: dirty records produced by the source, sink, or other operators. I want to side-output these records to an external database for storage.
> >
> >
> > Caizhi Weng wrote on Thu, Sep 16, 2021 at 1:52 PM:
> >
> > >
> >
> > > Hi!
> >
> > >
> >
> > > As far as I know, there is currently no plan to support side outputs. Could you describe your requirements and scenario?
> >
> > >
> >
> > > Ada Luna wrote on Wed, Sep 15, 2021 at 8:38 PM:
> >
> > >
> >
> > > > Will Flink SQL support side outputs in the future, to side-output dirty data?
> >
> > > >
> >
> >


Re: Side output support in Flink SQL

2021-10-13 Thread Ada Luna
So the reason there is no plan to support this is that, for now, we assume the data Flink SQL processes is clean and already sanitized, right?

Ada Luna wrote on Sun, Sep 19, 2021 at 7:43 PM:
>
> It is mainly about dirty data: dirty records produced by the source, sink, or other operators. I want to side-output these records to an external database for storage.
>
> Caizhi Weng wrote on Thu, Sep 16, 2021 at 1:52 PM:
> >
> > Hi!
> >
> > As far as I know, there is currently no plan to support side outputs. Could you describe your requirements and scenario?
> >
> > Ada Luna wrote on Wed, Sep 15, 2021 at 8:38 PM:
> >
> > > Will Flink SQL support side outputs in the future, to side-output dirty data?
> > >


Separating logs of different jobs in session mode

2021-10-13 Thread Ada Luna
The problem I am facing is that logs from different jobs cannot be distinguished within a single session.

I read the article JD.com wrote:
https://www.infoq.cn/article/1nvlduu82ihmusxxqruq

Does the community have any plans in this area?

https://issues.apache.org/jira/browse/FLINK-17969
The PR for this ticket has been closed as well.


Why do Flink SQL source and sink operator names differ in format?

2021-09-29 Thread Ada Luna
Source: TableSourceScan(table=[[default_catalog, default_database,
ods_k]], fields=[id, name])
Sink: Sink(table=[default_catalog.default_database.ads_k], fields=[id, name])
Sink: Sink(table=[default_catalog.default_database.ads_k2], fields=[id, name]))


Compared with Sink, TableSourceScan has an extra pair of square brackets and separates the namespace parts with ','. Why is that?


Re: Distinguishing job logs in Flink session mode

2021-09-26 Thread Ada Luna
Mine is a Flink SQL job.

陈卓宇 <2572805...@qq.com.invalid> wrote on Thu, Sep 23, 2021 at 3:57 PM:
>
> You can differentiate the logs of different jobs with a LOG_PREFIX, e.g.:
>
> private static final String LOG_PREFIX = "【WF事件组件下发缓存处理器】";
> log.info("|prefix={} ☀️☀️☀️☀️ 进行订阅事件缓存处理开始|message={}|componentEvent={}|",
> LOG_PREFIX, message, componentEvent);
>
>
>
> 陈卓宇
>
>
> 
>
>
>
>
> -- Original message --
> From: "user-zh"
> Sent: Thursday, Sep 23, 2021, 11:16 AM
> To: "user-zh"
> Subject: Distinguishing job logs in Flink session mode
>
>
>
> With multiple jobs running in one session, how can the logs of different jobs be distinguished? Is there a good way at the moment?
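
A hedged sketch of one related approach for DataStream jobs (an assumption, not from this thread; it covers user-function logs only, not framework logs, and so does not help the Flink SQL case above): tag each job's logging via SLF4J's MDC and add %X{jobTag} to the log4j/logback pattern. jobTag and the pattern change are assumptions:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class TaggedMap extends RichMapFunction<String, String> {
    private static final Logger LOG = LoggerFactory.getLogger(TaggedMap.class);
    private final String jobTag;

    public TaggedMap(String jobTag) {
        this.jobTag = jobTag;
    }

    @Override
    public void open(Configuration parameters) {
        // MDC is thread-local, so set it in the task thread that runs map().
        MDC.put("jobTag", jobTag);
    }

    @Override
    public String map(String value) {
        LOG.info("processing record");  // %X{jobTag} in the pattern prints the tag
        return value;
    }
}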


Distinguishing job logs in Flink session mode

2021-09-22 Thread Ada Luna
With multiple jobs running in one session, how can the logs of different jobs be distinguished? Is there a good way at the moment?


Re: Side output support in Flink SQL

2021-09-19 Thread Ada Luna
It is mainly about dirty data: dirty records produced by the source, sink, or other operators. I want to side-output these records to an external database for storage.

Caizhi Weng wrote on Thu, Sep 16, 2021 at 1:52 PM:
>
> Hi!
>
> As far as I know, there is currently no plan to support side outputs. Could you describe your requirements and scenario?
>
> Ada Luna wrote on Wed, Sep 15, 2021 at 8:38 PM:
>
> > Will Flink SQL support side outputs in the future, to side-output dirty data?
> >


Side output support in Flink SQL

2021-09-15 Thread Ada Luna
Will Flink SQL support side outputs in the future, to side-output dirty data?


Default value of dynamic-table-options

2021-09-14 Thread Ada Luna
table.dynamic-table-options.enabled
Why does Flink default this option to false? Is it to guard against user mistakes, or are there performance problems when it is enabled?
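
A small sketch of what the flag gates, assuming Flink 1.11+ where FLIP-113 dynamic table options landed (kafka_table and the option key are hypothetical): once enabled, a SQL hint can override the connector options declared in the DDL ad hoc:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DynamicOptionsDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // Off by default; turn it on explicitly.
        tEnv.getConfig().getConfiguration()
                .setBoolean("table.dynamic-table-options.enabled", true);
        // The hint overrides the connector options of kafka_table ad hoc.
        tEnv.executeSql(
                "SELECT * FROM kafka_table "
                        + "/*+ OPTIONS('scan.startup.mode'='earliest-offset') */");
    }
}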


Re: Flink SQL backpressure inputPoolUsage problem

2021-08-16 Thread Ada Luna
taskmanager.network.memory.buffers-per-channel
After changing this parameter from the default of 2 to 5, the pool usage under backpressure matches what the articles online describe. Why is that?

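A minimal sketch of reproducing the experiment locally (the parallelism is arbitrary; NettyShuffleEnvironmentOptions has carried this option since around Flink 1.9):

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.NettyShuffleEnvironmentOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BuffersPerChannelDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Default is 2; the message above reports that 5 makes the pool-usage
        // metrics match the behavior described in online articles.
        conf.setInteger(
                NettyShuffleEnvironmentOptions.NETWORK_BUFFERS_PER_CHANNEL, 5);
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(2, conf);
        // ... build and execute the job under test here.
    }
}
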
Ada Luna wrote on Mon, Aug 16, 2021 at 4:17 PM:
>
> According to articles online, the inputPoolUsage of the backpressure source should be high, and so should the inputPoolUsage of the other backpressured operators. But in the backpressure cases I have seen recently, inputPoolUsage is always empty. Is this just how Flink's backpressure mechanism works, or do the metrics in this version have a problem?
>
> Ada Luna wrote on Mon, Aug 16, 2021 at 4:16 PM:
> >
> > Version 1.10.1.
> > Observing the backpressure metrics of many Flink SQL jobs recently, I found that operators whose backpressure status is High have a full outputPoolUsage
> > but an empty inputPoolUsage, and at the backpressure source both inputPoolUsage and outputPoolUsage are empty. Is that normal?


Re: Flink SQL backpressure inputPoolUsage problem

2021-08-16 Thread Ada Luna
According to articles online, the inputPoolUsage of the backpressure source should be high, and so should the inputPoolUsage of the other backpressured operators. But in the backpressure cases I have seen recently, inputPoolUsage is always empty. Is this just how Flink's backpressure mechanism works, or do the metrics in this version have a problem?

Ada Luna wrote on Mon, Aug 16, 2021 at 4:16 PM:
>
> Version 1.10.1.
> Observing the backpressure metrics of many Flink SQL jobs recently, I found that operators whose backpressure status is High have a full outputPoolUsage
> but an empty inputPoolUsage, and at the backpressure source both inputPoolUsage and outputPoolUsage are empty. Is that normal?


Flink SQL backpressure inputPoolUsage problem

2021-08-16 Thread Ada Luna
Version 1.10.1.
Observing the backpressure metrics of many Flink SQL jobs recently, I found that operators whose backpressure status is High have a full outputPoolUsage
but an empty inputPoolUsage, and at the backpressure source both inputPoolUsage and outputPoolUsage are empty. Is that normal?


Are there plans for registering UDFs in Flink via SQL?

2021-08-16 Thread Ada Luna
Currently a UDF has to be registered through the Table API.
Will it be possible in the future to register a UDF into the context directly via SQL?


Re: Flink YARN session mode: different Kerberos credentials for multiple jobs

2021-07-30 Thread Ada Luna
I don't know what the future plan for this is.

Paul Lam wrote on Fri, Jul 30, 2021 at 2:51 PM:
>
> Sharing is not possible right now; the Flink JobManager's principal is fixed at startup.
>
> Best,
> Paul Lam
>
> > On Jul 30, 2021, at 14:46, Ada Luna wrote:
> >
> > In a Flink YARN session, every job submission switches the principal, because we need permission isolation and each user has their own principal.
> >
> > So is it the case that Flink session mode currently cannot let multiple principals share one Flink session cluster, and we can only use per-job mode,
> > or give each user with their own principal a dedicated session?
>


Flink YARN session mode: different Kerberos credentials for multiple jobs

2021-07-30 Thread Ada Luna
In a Flink YARN session, every job submission switches the principal, because we need permission isolation and each user has their own principal.

So is it the case that Flink session mode currently cannot let multiple principals share one Flink session cluster, and we can only use per-job mode,
or give each user with their own principal a dedicated session?


Re: Flink 1.10 memory problem

2021-07-27 Thread Ada Luna
In the end I found the root cause: the dual-stream join had no TTL configured. The dual-stream join task's output buffers fill up, and Flink then sits in a zombie state, consuming no more data.

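A minimal sketch of the fix described above — idle-state retention (state TTL) for the dual-stream join — as it looks on Flink 1.10's Table API; newer versions use the table.exec.state.ttl option instead, and the retention values here are examples only:

import org.apache.flink.api.common.time.Time;
import org.apache.flink.table.api.TableEnvironment;

public class JoinStateTtlConfig {
    static void configure(TableEnvironment tEnv) {
        // Join state idle longer than the min retention becomes eligible
        // for cleanup instead of growing without bound.
        tEnv.getConfig().setIdleStateRetentionTime(
                Time.hours(12), Time.hours(24));
    }
}
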
Ada Luna wrote on Mon, Jul 19, 2021 at 7:06 PM:
>
> Could the async I/O ordered queue filling up be what jams the operator?
>
> Ada Luna wrote on Mon, Jul 19, 2021 at 2:02 PM:
> >
> > From the backpressure information I can see that everything upstream of this async wait operator
> > is severely backpressured. Very likely this operator is deadlocked, stuck in an infinite loop, or something similar, but I don't yet know how to investigate further.
> >
> > "async wait operator -> (where: (=(CASE(>(ABS(Z), WaterRoseMax_5), 1,
> > 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD, ITEM, VAL, UNIT,
> > DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX, currtime2(DATATIME,
> > _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0 AS ISFILL, 1 AS
> > STATE, 0E0 AS P5) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
> > WaterRoseMax_5), 1, 0), 1)), select: (STID, DATATIME, _UTF-16LE'2' AS
> > DATATYPE, currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS
> > VALTIME, _UTF-16LE'' AS CORTIME, VAL AS CORBERVAL, _UTF-16LE'' AS
> > CORAFTVAL, _UTF-16LE'' AS CORNAME, _UTF-16LE'' AS CORPHONE, 0 AS
> > STATE, 0 AS ISFILL, 1 AS TYPE) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
> > WaterRoseMax_5), 1, 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD,
> > ITEM, VAL, UNIT, DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX,
> > currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0
> > AS ISFILL, 1 AS STATE, 0E0 AS P5) -> to: Tuple2) (2/2)" #82 prio=5
> > os_prio=0 tid=0x7fd4c4ac5000 nid=0x21c3 in Object.wait()
> > [0x7fd4d5416000]
> > java.lang.Thread.State: WAITING (on object monitor)
> > at java.lang.Object.wait(Native Method)
> > at java.lang.Object.wait(Object.java:502)
> > at 
> > org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.addAsyncBufferEntry(AsyncWaitOperator.java:403)
> > at 
> > org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.processElement(AsyncWaitOperator.java:224)
> > at 
> > org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
> > - locked <0x00074cb5b3a0> (a java.lang.Object)
> > at 
> > org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
> > at 
> > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
> > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> > at java.lang.Thread.run(Thread.java:748)
> >
> > "async wait operator -> (where: (=(CASE(>(ABS(Z), WaterRoseMax_5), 1,
> > 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD, ITEM, VAL, UNIT,
> > DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX, currtime2(DATATIME,
> > _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0 AS ISFILL, 1 AS
> > STATE, 0E0 AS P5) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
> > WaterRoseMax_5), 1, 0), 1)), select: (STID, DATATIME, _UTF-16LE'2' AS
> > DATATYPE, currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS
> > VALTIME, _UTF-16LE'' AS CORTIME, VAL AS CORBERVAL, _UTF-16LE'' AS
> > CORAFTVAL, _UTF-16LE'' AS CORNAME, _UTF-16LE'' AS CORPHONE, 0 AS
> > STATE, 0 AS ISFILL, 1 AS TYPE) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
> > WaterRoseMax_5), 1, 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD,
> > ITEM, VAL, UNIT, DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX,
> > currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0
> > AS ISFILL, 1 AS STATE, 0E0 AS P5) -> to: Tuple2) (1/2)" #81 prio=5
> > os_prio=0 tid=0x7fd4c4ac3000 nid=0x21c2 in Object.wait()
> > [0x7fd4d5517000]
> > java.lang.Thread.State: WAITING (on object monitor)
> > at java.lang.Object.wait(Native Method)
> > at java.lang.Object.wait(Object.java:502)
> > at 
> > org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:539)
> > - locked <0x00074cb5d560> (a java.util.ArrayDeque)
> > at 
> > org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:508)
> > at 
> > org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:165)
> > at 
> > org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
> > at 
> > org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
> > at 
> > org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
> > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> > at java.lang.Thread.run(Thread.java:748)
> >
> > Yun Tang wrote on Tue, Jul 6, 2021 at 4:01 PM:
> > >
> > > Hi,
> > >
> > > It is possible. If the downstream deadlocks and cannot consume any data, the whole job goes into a zombie state. To locate the root cause you still need to look into what is going on with the downstream task.

Re: Flink 1.10 memory problem

2021-07-19 Thread Ada Luna
Could the async I/O ordered queue filling up be what jams the operator?

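A hedged sketch of where that bounded "ordered queue" comes from: the capacity argument of AsyncDataStream.orderedWait caps the number of in-flight requests AsyncWaitOperator holds before it blocks the upstream (the async function below is a stand-in, not real lookup logic):

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncCapacityDemo {
    public static DataStream<String> withAsync(DataStream<String> input) {
        return AsyncDataStream.orderedWait(
                input,
                new RichAsyncFunction<String, String>() {
                    @Override
                    public void asyncInvoke(String value, ResultFuture<String> out) {
                        // A result MUST be delivered for every input, otherwise
                        // queue slots leak until the queue fills up.
                        CompletableFuture.runAsync(
                                () -> out.complete(Collections.singleton(value)));
                    }
                },
                30, TimeUnit.SECONDS, // per-request timeout
                100);                 // capacity of the in-flight queue
    }
}
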
Ada Luna wrote on Mon, Jul 19, 2021 at 2:02 PM:
>
> From the backpressure information I can see that everything upstream of this async wait operator
> is severely backpressured. Very likely this operator is deadlocked, stuck in an infinite loop, or something similar, but I don't yet know how to investigate further.
>
> "async wait operator -> (where: (=(CASE(>(ABS(Z), WaterRoseMax_5), 1,
> 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD, ITEM, VAL, UNIT,
> DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX, currtime2(DATATIME,
> _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0 AS ISFILL, 1 AS
> STATE, 0E0 AS P5) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
> WaterRoseMax_5), 1, 0), 1)), select: (STID, DATATIME, _UTF-16LE'2' AS
> DATATYPE, currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS
> VALTIME, _UTF-16LE'' AS CORTIME, VAL AS CORBERVAL, _UTF-16LE'' AS
> CORAFTVAL, _UTF-16LE'' AS CORNAME, _UTF-16LE'' AS CORPHONE, 0 AS
> STATE, 0 AS ISFILL, 1 AS TYPE) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
> WaterRoseMax_5), 1, 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD,
> ITEM, VAL, UNIT, DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX,
> currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0
> AS ISFILL, 1 AS STATE, 0E0 AS P5) -> to: Tuple2) (2/2)" #82 prio=5
> os_prio=0 tid=0x7fd4c4ac5000 nid=0x21c3 in Object.wait()
> [0x7fd4d5416000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at 
> org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.addAsyncBufferEntry(AsyncWaitOperator.java:403)
> at 
> org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.processElement(AsyncWaitOperator.java:224)
> at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
> - locked <0x00074cb5b3a0> (a java.lang.Object)
> at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:748)
>
> "async wait operator -> (where: (=(CASE(>(ABS(Z), WaterRoseMax_5), 1,
> 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD, ITEM, VAL, UNIT,
> DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX, currtime2(DATATIME,
> _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0 AS ISFILL, 1 AS
> STATE, 0E0 AS P5) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
> WaterRoseMax_5), 1, 0), 1)), select: (STID, DATATIME, _UTF-16LE'2' AS
> DATATYPE, currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS
> VALTIME, _UTF-16LE'' AS CORTIME, VAL AS CORBERVAL, _UTF-16LE'' AS
> CORAFTVAL, _UTF-16LE'' AS CORNAME, _UTF-16LE'' AS CORPHONE, 0 AS
> STATE, 0 AS ISFILL, 1 AS TYPE) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
> WaterRoseMax_5), 1, 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD,
> ITEM, VAL, UNIT, DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX,
> currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0
> AS ISFILL, 1 AS STATE, 0E0 AS P5) -> to: Tuple2) (1/2)" #81 prio=5
> os_prio=0 tid=0x7fd4c4ac3000 nid=0x21c2 in Object.wait()
> [0x7fd4d5517000]
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:539)
> - locked <0x00074cb5d560> (a java.util.ArrayDeque)
> at 
> org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:508)
> at 
> org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:165)
> at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
> at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:748)
>
> Yun Tang wrote on Tue, Jul 6, 2021 at 4:01 PM:
> >
> > Hi,
> >
> > It is possible. If the downstream deadlocks and cannot consume any data, the whole job goes into a zombie state. To locate the root cause you still need to look into what is going on with the downstream task.
> >
> > Best,
> > Yun Tang
> > 
> > From: Ada Luna 
> > Sent: Tuesday, July 6, 2021 12:04
> > To: user-zh@flink.apache.org 
> > Subject: Re: Flink 1.10 memory problem
> >
> > Can backpressure leave the whole Flink job in a zombie state? It stops consuming Kafka data entirely, for days, and does not recover without a restart.
> >
> > Yun Tang wrote on Tue, Jul 6, 2021 at 11:12 AM:
> > >
> > > Hi,
> > >
> > > The LocalBufferPool.requestMemorySegment method is not allocating memory; rather, the job is backpressured:
> > > because the downstream is not consuming in time, the relevant buffers stay occupied, so the upstream blocks in requestMemorySegment.

Re: Flink 1.10 memory problem

2021-07-19 Thread Ada Luna
From the backpressure information I can see that everything upstream of this async wait operator
is severely backpressured. Very likely this operator is deadlocked, stuck in an infinite loop, or something similar, but I don't yet know how to investigate further.

"async wait operator -> (where: (=(CASE(>(ABS(Z), WaterRoseMax_5), 1,
0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD, ITEM, VAL, UNIT,
DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX, currtime2(DATATIME,
_UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0 AS ISFILL, 1 AS
STATE, 0E0 AS P5) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
WaterRoseMax_5), 1, 0), 1)), select: (STID, DATATIME, _UTF-16LE'2' AS
DATATYPE, currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS
VALTIME, _UTF-16LE'' AS CORTIME, VAL AS CORBERVAL, _UTF-16LE'' AS
CORAFTVAL, _UTF-16LE'' AS CORNAME, _UTF-16LE'' AS CORPHONE, 0 AS
STATE, 0 AS ISFILL, 1 AS TYPE) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
WaterRoseMax_5), 1, 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD,
ITEM, VAL, UNIT, DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX,
currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0
AS ISFILL, 1 AS STATE, 0E0 AS P5) -> to: Tuple2) (2/2)" #82 prio=5
os_prio=0 tid=0x7fd4c4ac5000 nid=0x21c3 in Object.wait()
[0x7fd4d5416000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.addAsyncBufferEntry(AsyncWaitOperator.java:403)
at 
org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.processElement(AsyncWaitOperator.java:224)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
- locked <0x00074cb5b3a0> (a java.lang.Object)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)

"async wait operator -> (where: (=(CASE(>(ABS(Z), WaterRoseMax_5), 1,
0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD, ITEM, VAL, UNIT,
DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX, currtime2(DATATIME,
_UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0 AS ISFILL, 1 AS
STATE, 0E0 AS P5) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
WaterRoseMax_5), 1, 0), 1)), select: (STID, DATATIME, _UTF-16LE'2' AS
DATATYPE, currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS
VALTIME, _UTF-16LE'' AS CORTIME, VAL AS CORBERVAL, _UTF-16LE'' AS
CORAFTVAL, _UTF-16LE'' AS CORNAME, _UTF-16LE'' AS CORPHONE, 0 AS
STATE, 0 AS ISFILL, 1 AS TYPE) -> to: Tuple2, where: (=(CASE(>(ABS(Z),
WaterRoseMax_5), 1, 0), 0)), select: (ID, STID, _UTF-16LE'' AS STCD,
ITEM, VAL, UNIT, DATATIME AS DT, _UTF-16LE'2.6' AS DATAINDEX,
currtime2(DATATIME, _UTF-16LE'-MM-dd HH:mm:ss') AS CREATETIME, 0
AS ISFILL, 1 AS STATE, 0E0 AS P5) -> to: Tuple2) (1/2)" #81 prio=5
os_prio=0 tid=0x7fd4c4ac3000 nid=0x21c2 in Object.wait()
[0x7fd4d5517000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:539)
- locked <0x00074cb5d560> (a java.util.ArrayDeque)
at 
org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:508)
at 
org.apache.flink.streaming.runtime.io.BarrierBuffer.getNextNonBlocked(BarrierBuffer.java:165)
at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:209)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:105)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)

Yun Tang wrote on Tue, Jul 6, 2021 at 4:01 PM:
>
> Hi,
>
> It is possible. If the downstream deadlocks and cannot consume any data, the whole job goes into a zombie state. To locate the root cause you still need to look into what is going on with the downstream task.
>
> Best,
> Yun Tang
> 
> From: Ada Luna 
> Sent: Tuesday, July 6, 2021 12:04
> To: user-zh@flink.apache.org 
> Subject: Re: Flink 1.10 memory problem
>
> Can backpressure leave the whole Flink job in a zombie state? It stops consuming Kafka data entirely, for days, and does not recover without a restart.
>
> Yun Tang wrote on Tue, Jul 6, 2021 at 11:12 AM:
> >
> > Hi,
> >
> > The LocalBufferPool.requestMemorySegment method is not allocating memory; rather, the job is backpressured:
> > because the downstream is not consuming in time, the relevant buffers stay occupied, so the upstream blocks in requestMemorySegment.
> >
> > To resolve it, you still need to find out why the downstream is backpressured.
> >
> >
> > Best,
> > Yun Tang
> > 
> > From: Ada Luna 
> > Sent: Tuesday, July 6, 2021 10:43
> > To: user-zh@flink.apache.org 
> > Subject: Re: Flink 1.10 memory problem
> >
> > "Source: test_records (2/3)" #78 prio=5 os_prio=0
> > tid=0x7fd4c4a24800 nid=0x21bf in Object.wait()
> > [0x7fd4d581a000]
> > java.lang.Thread.St

Re: Flink 1.10 memory problem

2021-07-05 Thread Ada Luna
Can backpressure leave the whole Flink job in a zombie state? It stops consuming Kafka data entirely, for days, and does not recover without a restart.

Yun Tang wrote on Tue, Jul 6, 2021 at 11:12 AM:
>
> Hi,
>
> The LocalBufferPool.requestMemorySegment method is not allocating memory; rather, the job is backpressured:
> because the downstream is not consuming in time, the relevant buffers stay occupied, so the upstream blocks in requestMemorySegment.
>
> To resolve it, you still need to find out why the downstream is backpressured.
>
>
> Best,
> Yun Tang
> ________
> From: Ada Luna 
> Sent: Tuesday, July 6, 2021 10:43
> To: user-zh@flink.apache.org 
> Subject: Re: Flink 1.10 memory problem
>
> "Source: test_records (2/3)" #78 prio=5 os_prio=0
> tid=0x7fd4c4a24800 nid=0x21bf in Object.wait()
> [0x7fd4d581a000]
> java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:251)
> - locked <0x00074d8b0df0> (a java.util.ArrayDeque)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:218)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:264)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:192)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:162)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:128)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
> at 
> org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
> at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
> - locked <0x00074cbd3be0> (a java.lang.Object)
> at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
> at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
> - locked <0x00074cbd3be0> (a java.lang.Object)
> at 
> org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:91)
> at 
> org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:156)
> at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:711)
> at 
> com.dtstack.flink.sql.source.kafka.KafkaConsumer011.run(KafkaConsumer011.java:66)
> at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:93)
> at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:57)
> at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:97)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:748)
>
Ada Luna wrote on Tue, Jul 6, 2021 at 10:13 AM:
> >
> Increasing the TaskManager memory resolves the error below, but I don't understand why insufficient Flink memory produces the zombie state shown below: the memory request blocks while the job status stays RUNNING, yet no more data is consumed.
> >
> >
> >
> > "Map -> to: Tuple2 -> Map -> (from: (id, sid, item, val, unit, dt,
> > after_index, tablename, PROCTIME) -> where: (AND(=(tablename,
> > CONCAT(_UTF-16LE't_real', currtime2(dt, _UTF-16LE'MMdd'))),
> > OR(=(after_index, _UTF-16LE'2.6'), AND(=(sid, _UTF-16LE'7296'),
> > =(after_index, _UTF-16LE'2.10'))), LIKE(item, _UTF-16LE'??5%'),
> > =(MOD(getminutes(dt), 5), 0))), select: (id AS ID, sid AS STID, item
> > AS ITEM, val AS VAL, unit AS UNIT, dt AS DATATIME), from: (id, sid,
> > item, val, unit, dt, after_index, tablename, PROCTIME) -> where:
> > (AND(=(tablename, CONCAT(_UTF-16LE't_real', currtime2(dt,
> > _UTF-16LE'MMdd'))), OR(=(after_index, _UTF-16LE'2.6'), AND(=(sid,
> > _UTF-16LE'7296'), =(after_index, _UTF-16LE'2.10'))), LIKE(item,
> > _UTF-16LE'??5%'), =(MOD(getminutes(dt), 5), 0))), select: (sid AS
> > STID, val AS VAL, dt AS DATATIME)) (1/1)" #79 prio=5 os_prio=0
> > tid=0x7fd4c4a94000 nid=0x21c0 in Object.wait()
> > [0x7fd4d5719000]
> > java.lang.Thread.State: TIMED_WAITING (on object monitor)
> > at java.lang.Object.wait(Native Method)
> > at 

Re: Flink 1.10 memory problem

2021-07-05 Thread Ada Luna
"Source: test_records (2/3)" #78 prio=5 os_prio=0
tid=0x7fd4c4a24800 nid=0x21bf in Object.wait()
[0x7fd4d581a000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:251)
- locked <0x00074d8b0df0> (a java.util.ArrayDeque)
at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:218)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:264)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:192)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:162)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:128)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
- locked <0x00074cbd3be0> (a java.lang.Object)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
at 
org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
- locked <0x00074cbd3be0> (a java.lang.Object)
at 
org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:91)
at 
org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:156)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:711)
at 
com.dtstack.flink.sql.source.kafka.KafkaConsumer011.run(KafkaConsumer011.java:66)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:93)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:57)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:97)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:302)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)

Ada Luna wrote on Tue, Jul 6, 2021 at 10:13 AM:
>
> Increasing the TaskManager memory resolves the error below, but I don't understand why insufficient Flink memory produces the zombie state shown below: the memory request blocks while the job status stays RUNNING, yet no more data is consumed.
>
>
>
> "Map -> to: Tuple2 -> Map -> (from: (id, sid, item, val, unit, dt,
> after_index, tablename, PROCTIME) -> where: (AND(=(tablename,
> CONCAT(_UTF-16LE't_real', currtime2(dt, _UTF-16LE'MMdd'))),
> OR(=(after_index, _UTF-16LE'2.6'), AND(=(sid, _UTF-16LE'7296'),
> =(after_index, _UTF-16LE'2.10'))), LIKE(item, _UTF-16LE'??5%'),
> =(MOD(getminutes(dt), 5), 0))), select: (id AS ID, sid AS STID, item
> AS ITEM, val AS VAL, unit AS UNIT, dt AS DATATIME), from: (id, sid,
> item, val, unit, dt, after_index, tablename, PROCTIME) -> where:
> (AND(=(tablename, CONCAT(_UTF-16LE't_real', currtime2(dt,
> _UTF-16LE'MMdd'))), OR(=(after_index, _UTF-16LE'2.6'), AND(=(sid,
> _UTF-16LE'7296'), =(after_index, _UTF-16LE'2.10'))), LIKE(item,
> _UTF-16LE'??5%'), =(MOD(getminutes(dt), 5), 0))), select: (sid AS
> STID, val AS VAL, dt AS DATATIME)) (1/1)" #79 prio=5 os_prio=0
> tid=0x7fd4c4a94000 nid=0x21c0 in Object.wait()
> [0x7fd4d5719000]
> java.lang.Thread.State: TIMED_WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:251)
> - locked <0x00074e6c8b98> (a java.util.ArrayDeque)
> at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:218)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:264)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:192)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:162)
> at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:128)
> at 
> org.apache.flink.streaming.runtime

Flink 1.10 memory problem

2021-07-05 Thread Ada Luna
Increasing the TaskManager memory resolves the error below, but I don't understand why insufficient Flink memory produces the zombie state shown below: the memory request blocks while the job status stays RUNNING, yet no more data is consumed. (A hedged config sketch follows the thread dump below.)



"Map -> to: Tuple2 -> Map -> (from: (id, sid, item, val, unit, dt,
after_index, tablename, PROCTIME) -> where: (AND(=(tablename,
CONCAT(_UTF-16LE't_real', currtime2(dt, _UTF-16LE'MMdd'))),
OR(=(after_index, _UTF-16LE'2.6'), AND(=(sid, _UTF-16LE'7296'),
=(after_index, _UTF-16LE'2.10'))), LIKE(item, _UTF-16LE'??5%'),
=(MOD(getminutes(dt), 5), 0))), select: (id AS ID, sid AS STID, item
AS ITEM, val AS VAL, unit AS UNIT, dt AS DATATIME), from: (id, sid,
item, val, unit, dt, after_index, tablename, PROCTIME) -> where:
(AND(=(tablename, CONCAT(_UTF-16LE't_real', currtime2(dt,
_UTF-16LE'MMdd'))), OR(=(after_index, _UTF-16LE'2.6'), AND(=(sid,
_UTF-16LE'7296'), =(after_index, _UTF-16LE'2.10'))), LIKE(item,
_UTF-16LE'??5%'), =(MOD(getminutes(dt), 5), 0))), select: (sid AS
STID, val AS VAL, dt AS DATATIME)) (1/1)" #79 prio=5 os_prio=0
tid=0x7fd4c4a94000 nid=0x21c0 in Object.wait()
[0x7fd4d5719000]
java.lang.Thread.State: TIMED_WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:251)
- locked <0x00074e6c8b98> (a java.util.ArrayDeque)
at 
org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:218)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:264)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:192)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:162)
at 
org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:128)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
at 
org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
at 
org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
at 
org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
at 
org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
at DataStreamCalcRule$4160.processElement(Unknown Source)
at 
org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:66)
at 
org.apache.flink.table.runtime.CRowProcessRunner.processElement(CRowProcessRunner.scala:35)
at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:488)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:465)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:425)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
at 
org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
at 
org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:37)
at 
org.apache.flink.table.runtime.CRowWrappingCollector.collect(CRowWrappingCollector.scala:28)
at DataStreamSourceConversion$4104.processElement(Unknown Source)
at 
org.apache.flink.table.runtime.CRowOutputProcessRunner.processElement(CRowOutputProcessRunner.scala:70)
at 
org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:488)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:465)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:425)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingBroadcastingOutputCollector.collect(OperatorChain.java:686)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingBroadcastingOutputCollector.collect(OperatorChain.java:672)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
at 

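The config sketch referenced above — a hedged illustration of the fix the message names (a larger TaskManager), assuming Flink 1.10's FLIP-49 memory model; the 4g figure is an arbitrary example:

import org.apache.flink.configuration.Configuration;

public class TaskManagerMemoryConfig {
    static Configuration largerTaskManager() {
        Configuration conf = new Configuration();
        // Total process memory per TaskManager; equivalent to setting
        // taskmanager.memory.process.size in flink-conf.yaml.
        conf.setString("taskmanager.memory.process.size", "4g");
        return conf;
    }
}
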
Re: Is there a FLIP or iteration plan for async JDBC lookup join?

2021-06-10 Thread Ada Luna
OK, I will outline the proposal in that ticket later.

Lin Li wrote on Thu, Jun 10, 2021 at 12:02 PM:
>
> The community previously had a PR based on the legacy source,
> https://issues.apache.org/jira/browse/FLINK-14902, but it has not progressed so far. Contributions welcome!
> cc Guowei Ma
>
>
> Luna Wong wrote on Thu, Jun 10, 2021 at 11:16 AM:
>
> > If there isn't one, I will contribute code using Vert.x and a Druid connection pool. Should this be discussed on the dev mailing list?
> >
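
A minimal sketch of the idea under discussion: an async JDBC lookup running on a private executor, where a real contribution would swap DriverManager for Vert.x or a Druid connection pool (URL, table, and columns are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Collections;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class JdbcAsyncLookup extends RichAsyncFunction<Integer, String> {
    private transient ExecutorService pool;
    private transient Connection conn;

    @Override
    public void open(Configuration parameters) throws Exception {
        // Single thread so the one shared Connection is never used
        // concurrently; a connection pool would allow more parallelism.
        pool = Executors.newSingleThreadExecutor();
        conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/dim", "user", "pwd");
    }

    @Override
    public void asyncInvoke(Integer key, ResultFuture<String> resultFuture) {
        pool.submit(() -> {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT name FROM dim_table WHERE id = ?")) {
                ps.setInt(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    // Emit the joined value on a hit, nothing on a miss.
                    resultFuture.complete(rs.next()
                            ? Collections.singleton(rs.getString(1))
                            : Collections.<String>emptyList());
                }
            } catch (Exception e) {
                resultFuture.completeExceptionally(e);
            }
        });
    }

    @Override
    public void close() throws Exception {
        pool.shutdown();
        conn.close();
    }
}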