Re: toAppendStream type mismatch problem

2020-05-03 Thread Sun.Zhu
OK, I'll give it a try, thanks. Sun.Zhu (17626017...@163.com) On 2020-05-04 11:22, Jark Wu wrote: This looks like a bug that has already been fixed (FLINK-16108). Could you try again with release-1.10.1, which is currently going through RC? https://dist.apache.org/repos/dist/dev/flink/flink-1.10.1-rc2/ Best, Jark On Mon, 4 May 2020 at

Re: toAppendStream type mismatch problem

2020-05-03 Thread Jark Wu
This looks like a bug that has already been fixed (FLINK-16108). Could you try again with release-1.10.1, which is currently going through RC? https://dist.apache.org/repos/dist/dev/flink/flink-1.10.1-rc2/ Best, Jark On Mon, 4 May 2020 at 01:01, 祝尚 <17626017...@163.com> wrote: > Following the demo in Jark's blog, I wrote a Table API/SQL program, and it fails when converting the table to an append stream. > Flink version: 1.10 > Code as follows: > public

Re: Flink Task Manager GC overhead limit exceeded

2020-05-03 Thread Xintong Song
https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/mem_setup.html Thank you~ Xintong Song On Fri, May 1, 2020 at 8:35 AM shao.hongxiao <17611022...@163.com> wrote: > Hello Song, > Please refer to this document [1] for more details > Could you share the exact link? I've also noticed that the memory figures shown in the Flink UI don't look right, and I'd like to read the relevant documentation closely. > > Thanks!
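For context, "GC overhead limit exceeded" usually means the TaskManager JVM heap is too small for the workload. A minimal flink-conf.yaml sketch for Flink 1.9 (the value is a placeholder to adapt to your setup, not a recommendation):

    # Total JVM heap per TaskManager (Flink 1.9 key; Flink 1.10 replaces it
    # with taskmanager.memory.process.size and related options).
    taskmanager.heap.size: 4096m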

Re: Publishing Sink Task watermarks outside flink

2020-05-03 Thread Shubham Kumar
Following up on this: I tried tweaking the JDBC sink as Timo suggested, and it worked. Basically, I added a member *long maxWatermarkSeen* in JDBCOutputFormat, so whenever a new record is added to the batch it updates the *maxWatermarkSeen* for this subtask with
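For readers following along, a minimal sketch of the idea as a standalone sink wrapper (this is not the author's actual JDBCOutputFormat patch; the field name maxWatermarkSeen follows the mail, the class name is made up):

    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    // Tracks the highest watermark this sink subtask has observed, so it can
    // later be published outside Flink (e.g. flushed alongside the JDBC batch).
    public class WatermarkTrackingSink<T> extends RichSinkFunction<T> {
        private long maxWatermarkSeen = Long.MIN_VALUE;

        @Override
        public void invoke(T value, Context context) {
            // Context.currentWatermark() exposes the subtask's current watermark.
            maxWatermarkSeen = Math.max(maxWatermarkSeen, context.currentWatermark());
            // ... add `value` to the JDBC batch here ...
        }
    }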

Re: multiple joins in one job

2020-05-03 Thread lec ssmi
Thanks for your reply. But as far as I know, if the time attribute is retained and the time attribute fields of both streams are selected in the join result, which one is the final time attribute? Benchao Li wrote on Thu, Apr 30, 2020 at 8:25 PM: > Hi lec, > > AFAIK, time attribute will be
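To make the question concrete, a hypothetical interval join in which both rowtime attributes appear in the SELECT list (assumes an existing StreamTableEnvironment tEnv with tables a and b registered; all names are made up):

    // a.rt and b.rt are both event-time attributes; the question is which
    // one, if either, remains a time attribute in the join result.
    Table joined = tEnv.sqlQuery(
        "SELECT a.id, a.rt, b.rt " +
        "FROM a JOIN b ON a.id = b.id " +
        "AND a.rt BETWEEN b.rt - INTERVAL '5' MINUTE AND b.rt");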

Re: History Server Not Showing Any Jobs - File Not Found?

2020-05-03 Thread Chesnay Schepler
Yes, exactly; I want to rule out that (somehow) HDFS is the problem. I haven't been able to reproduce the issue locally so far. On 01/05/2020 22:31, Hailu, Andreas wrote: Hi Chesnay, yes – they were created using Flink 1.9.1 as we’ve only just started to archive them in the past couple weeks.
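For anyone reproducing this setup, these are the two configuration keys involved (paths are placeholders); pointing both at a local file:// directory is one way to take HDFS out of the equation:

    # Where the JobManager writes archives of completed jobs.
    jobmanager.archive.fs.dir: hdfs:///flink/completed-jobs
    # Where the history server polls for those archives.
    historyserver.archive.fs.dir: hdfs:///flink/completed-jobs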

toAppendStream type mismatch problem

2020-05-03 Thread 祝尚
Following the demo in Jark's blog, I wrote a Table API/SQL program, and it fails when converting the table to an append stream. Flink version: 1.10. Code as follows: public static void main(String[] args) throws Exception { StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(); // the old planner will be removed in a future version EnvironmentSettings settings =
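Since the code in the archive is cut off, here is a minimal self-contained sketch of the conversion pattern being discussed (this is not the poster's full program; the sample data and names are made up):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class ToAppendStreamDemo {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            EnvironmentSettings settings = EnvironmentSettings.newInstance()
                    .useBlinkPlanner()   // the old planner is slated for removal
                    .inStreamingMode()
                    .build();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

            DataStream<Tuple2<String, Integer>> source = env.fromElements(Tuple2.of("hello", 1));
            Table table = tEnv.fromDataStream(source, "word, cnt");
            // On 1.10.0 this conversion could hit the type mismatch of FLINK-16108;
            // the fix is part of the 1.10.1 release candidates.
            tEnv.toAppendStream(table, Row.class).print();
            env.execute("to-append-stream-demo");
        }
    }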

Re: Flink: For terabytes of keyed state.

2020-05-03 Thread Gowri Sundaram
Hi Congxian, Thank you so much for your response, that really helps! From your experience, how long does it take Flink to redistribute terabytes of state data on node addition / node failure? Thanks! On Sun, May 3, 2020 at 6:56 PM Congxian Qiu wrote: > Hi > > 1. From my experience, Flink

Re: execution.checkpointing.tolerable-failed-checkpoints has no effect

2020-05-03 Thread Congxian Qiu
Hi, in theory configuring this in flink-conf should work in 1.10; see issue [1] for the details. Which version are you using? [1] https://issues.apache.org/jira/browse/FLINK-14788 Best, Congxian zhisheng wrote on Thu, Apr 30, 2020 at 6:51 PM: > It seems this parameter can also be set per job; you could give that a try: > > env.getCheckpointConfig().setTolerableCheckpointFailureNumber(); > > 蒋佳成(Jiacheng Jiang)
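For reference, the per-job setter quoted above takes an int argument; a minimal sketch inside a job's main method (the value 3 is a placeholder):

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // Tolerate up to 3 checkpoint failures before the job is failed.
    env.getCheckpointConfig().setTolerableCheckpointFailureNumber(3);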

Re: Savepoint memory overhead

2020-05-03 Thread Congxian Qiu
Hi, From the given figure, it seems that the end-to-end duration of the two failed checkpoints is small (so they did not fail because of a timeout). Could you please check why they failed? Maybe you can find something in the JM log such as "Decline checkpoint {} by task {} of job {} at {}." then you can go to

Re: Flink: For terabytes of keyed state.

2020-05-03 Thread Congxian Qiu
Hi, 1. In my experience, Flink can support state this large; set an appropriate parallelism for the stateful operator, and with RocksDB you may need to pay attention to disk performance. 2. Inside Flink, state is partitioned by key group, and each parallel subtask handles multiple key groups. Flink
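A minimal sketch of the knobs mentioned above (the numbers and checkpoint path are placeholders, not recommendations; RocksDBStateBackend comes from the flink-statebackend-rocksdb dependency):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class LargeStateSetup {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Max parallelism fixes the number of key groups, i.e. the finest
            // granularity at which keyed state can be redistributed on rescaling.
            env.setMaxParallelism(720);
            env.setParallelism(128);
            // Incremental RocksDB checkpoints avoid re-uploading unchanged state.
            env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
            // ... build and execute the job ...
        }
    }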

[ANNOUNCE] Weekly Community Update 2020/18

2020-05-03 Thread Konstantin Knauf
Dear community, happy to share a brief community update this week, covering Flink 1.10.1, our application to Google Season of Docs 2020, a discussion about supporting Hadoop 3, a recap of Flink Forward Virtual 2020, and a bit more. Flink Development == * [releases] Yu has